aid stringlengths 9 15 | mid stringlengths 7 10 | abstract stringlengths 78 2.56k | related_work stringlengths 92 1.77k | ref_abstract dict |
|---|---|---|---|---|
1905.01013 | 2949520894 | It is common to view people in real applications walking in arbitrary directions, holding items, or wearing heavy coats. These factors are challenges in gait-based application methods because they significantly change a person's appearance. This paper proposes a novel method for classifying human gender in real time using gait information. The use of an average gait image (AGI), rather than a gait energy image (GEI), allows this method to be computationally efficient and robust against view changes. A viewpoint (VP) model is created for automatically determining the viewing angle during the testing phase. A distance signal (DS) model is constructed to remove any areas with an attachment (carried items, worn coats) from a silhouette to reduce the interference in the resulting classification. Finally, the human gender is classified using multiple view-dependent classifiers trained using a support vector machine. Experimental results confirm that the proposed method achieves a high accuracy of 98.8% on the CASIA Dataset B and outperforms the recent state-of-the-art methods. | In the model-based methods @cite_29 @cite_20 @cite_19 , the human body is divided into various parts, the structures of which are then fitted using primitive shapes such as ellipses, rectangles, and cylinders. Then, the gait feature is encoded using the parameters of the primitive shapes to measure the time-varying motion of the subject. In @cite_29 , the authors divide a human silhouette into seven different parts corresponding to the head and shoulder region, the front of the torso, back of the torso, front thigh, back thigh, front calf and foot, and back calf and foot. They then use ellipses to fit the model and capture the parameters of the ellipses such as the mean, standard deviation, orientation, and magnitude of the major components as feature vectors for classification. 
Although such methods are robust to noise and occlusions, they typically require a relatively high computational cost. | {
"cite_N": [
"@cite_19",
"@cite_29",
"@cite_20"
],
"mid": [
"2117281018",
"1502651450",
"56484167"
],
"abstract": [
"Gait based gender identification has received a great attention from researchers in the last decade due to its potential in different applications. This will help a human identification system to focus only on the identified gender related features, which can improve search speed and efficiency of the retrieval system by limiting the subsequent searching space into either a male database or female database. In this paper after preprocessing, four binary moment features and two spatial features are extracted from human silhouette. Then the extracted features are used for training and testing two different pattern classifiers k-Nearest Neighbor (kNN) and Support Vector Machine (SVM). Experimental results show superior performance of our approach to the existing gender classifiers. To evaluate the performance of the proposed algorithm experiments have been conducted on NLPR database.",
"This paper describes a set of representations of gait appearance features for the purpose of person identification. Our gait representation has two stages: the first stage computes a set of image features that are based on moments extracted from orthogonal view video silhouettes of human walking motion; the second stage applies three methods of aggregating these image features over time to create the gait sequence features. Despite their simplicity, the resulting gait sequence feature vectors contain enough information to perform well on human identification. We demonstrate the accuracy of recognition using gait video sequences collected over different days and times, under varying lighting environments and explore the differences in the three time-aggregation methods for the purpose of recognition.",
"Model-based Gait Recognition concerns identification using an underlying mathematical construct(s) representing the discriminatory gait characteristics (be they static or dynamic), with a set of parameters and a set of logical and quantitative relationships between them. These models are often simplified based on justifiable assumptions such as the system only accounts for pathologically normal gait. Such a system normally consists of gait capture, a model(s), a feature extraction scheme, a gait signature and a classifier (Figure 1). The model can be a 2- or 3-dimensional structural (or shape) model and motion model that lays the foundation for the extraction and tracking of a moving person. An alternative to a model-based approach is to analyse the motion of the human silhouette deriving recognition from the body’s shape and motion. A gait signature that is unique to each person in the database is then derived from the extracted gait characteristics. In the classification stage, many pattern classification techniques can be used, such as the k-nearest neighbour approach."
]
} |
1905.00399 | 2943159235 | We propose a formal timing analysis of an extension of the AFDX standard, incorporating the TSN BLS shaper, to homogenize the avionics communication architecture, and enable the interconnection of different avionics domains with mixed-criticality levels, e.g., current AFDX traffic, Flight Control and In-Flight Entertainment. Existing Network Calculus models are limited to three classes, but applications with heterogeneous traffic require additional classes. Hence, we propose to generalize an existing Network Calculus model to do a worst-case timing analysis of an architecture with multiple BLS on multi-hop networks, to infer real-time bounds. Then, we conduct the performance analysis of such a proposal. First we evaluate the model on a simple 3-classes single-hop network to assess the sensitivity and tightness of the model, and compare it to existing models (CPA and Network Calculus). Secondly, we study a realistic AFDX configuration with six classes and two BLS. Finally, we compute a real use-case to add A350 flight control traffic to the AFDX. Results show the good properties of the generalized Network Calculus model compared to the CPA model and the efficiency of the extended AFDX to noticeably enhance the medium priority level delay bounds, while respecting the higher priority level constraints, in comparison with the current AFDX standard. | The CPA model @cite_16 computes the impact of the other flows by dividing them in four categories: the lower-priority blocking, the same-priority blocking, the higher priority blocking, and the BLS shaper blocking. The latter is defined as follows for a flow of class @math : @math with: @math , @math and @math BLS parameters of class @math ; @math , with @math the Maximum Frame Size of flow @math , @math the streams with a priority lower than @math : the maximum blocking time, called the replenishment interval @math the shortest service interval for class @math . | {
"cite_N": [
"@cite_16"
],
"mid": [
"2346206503"
],
"abstract": [
"Future in-vehicle networks will use Ethernet as their communication backbone. As many automotive applications are latency-sensitive and have strict real-time requirements, a key challenge in automotive network design is the deterministic low-latency transport of latency-critical Ethernet frames. Time-sensitive networking (TSN) is an upcoming set of Ethernet standards, which address these requirements by specifying new quality of service mechanisms in the form of different traffic shapers. One of these traffic shapers is the burst-limiting shaper (BLS). In this paper, we evaluate whether BLS is able to fulfill these strict timing requirements. We present a formal timing analysis for BLS in order to compute worst-case latency bounds. We use a realistic automotive Ethernet setup to compare BLS against Ethernet AVB and Ethernet following IEEE 802.1Q."
]
} |
1905.00399 | 2943159235 | We propose a formal timing analysis of an extension of the AFDX standard, incorporating the TSN BLS shaper, to homogenize the avionics communication architecture, and enable the interconnection of different avionics domains with mixed-criticality levels, e.g., current AFDX traffic, Flight Control and In-Flight Entertainment. Existing Network Calculus models are limited to three classes, but applications with heterogeneous traffic require additional classes. Hence, we propose to generalize an existing Network Calculus model to do a worst-case timing analysis of an architecture with multiple BLS on multi-hop networks, to infer real-time bounds. Then, we conduct the performance analysis of such a proposal. First we evaluate the model on a simple 3-classes single-hop network to assess the sensitivity and tightness of the model, and compare it to existing models (CPA and Network Calculus). Secondly, we study a realistic AFDX configuration with six classes and two BLS. Finally, we compute a real use-case to add A350 flight control traffic to the AFDX. Results show the good properties of the generalized Network Calculus model compared to the CPA model and the efficiency of the extended AFDX to noticeably enhance the medium priority level delay bounds, while respecting the higher priority level constraints, in comparison with the current AFDX standard. | The inherent idea of the WbA @cite_21 is based on the different possible combinations of idle and sending BLS windows to model the minimum and maximum service curves, i.e. availability of the traversed node. However, when taking a closer look at the credit behavior of the BLS covering the worst-case scenario of the minimum service curve, @math , we found a pessimism inherent to the WbA model. We illustrate this behavior in Fig., where the minimum sending window @math and the maximum idle window @math introduce a credit discontinuity, which is not a realistic behavior. 
Moreover, we have also noticed a similar discontinuity when studying the best-case scenario of the maximum service curve. | {
"cite_N": [
"@cite_21"
],
"mid": [
"2885217174"
],
"abstract": [
"A homogeneous avionic communication architecture based on the AFDX supporting mixed-criticality applications will bring significant advantages, i.e., easier maintenance and reduced costs. To cope with this emerging issue, the AFDX may integrate multiple traffic classes: Safety-Critical Traffic (SCT) with hard real-time constraints, Rate-Constrained (RC) traffic requiring bounded latencies and Best Effort (BE) traffic with no delivery constraints. These traffic classes are managed based on a Non-Preemptive Strict Priority (NP-SP) Scheduler, where the highest priority traffic (SCT) is shaped with a Burst Limiting Shaper (BLS). The latter has been defined by the Time Sensitive Networking (TSN) task group to limit the impact of high priority flows on lower priority ones. This paper proposes a Network Calculus-based approach to compute the end-to-end delay bounds of SCT and RC classes. We consider the impact of the BLS and the multi-hop network architecture. We also provide proofs of service curves guaranteed to SCT and RC classes, needed to derive delay bounds with Network Calculus. The proposed approach is evaluated on a realistic AFDX configuration. Results show the efficiency of incorporating the TSN BLS on top of a NP-SP scheduler in the AFDX to noticeably enhance the RC delay bounds while guaranteeing the SCT deadline, in comparison to an AFDX implementing only a NP-SP scheduler."
]
} |
1905.00399 | 2943159235 | We propose a formal timing analysis of an extension of the AFDX standard, incorporating the TSN BLS shaper, to homogenize the avionics communication architecture, and enable the interconnection of different avionics domains with mixed-criticality levels, e.g., current AFDX traffic, Flight Control and In-Flight Entertainment. Existing Network Calculus models are limited to three classes, but applications with heterogeneous traffic require additional classes. Hence, we propose to generalize an existing Network Calculus model to do a worst-case timing analysis of an architecture with multiple BLS on multi-hop networks, to infer real-time bounds. Then, we conduct the performance analysis of such a proposal. First we evaluate the model on a simple 3-classes single-hop network to assess the sensitivity and tightness of the model, and compare it to existing models (CPA and Network Calculus). Secondly, we study a realistic AFDX configuration with six classes and two BLS. Finally, we compute a real use-case to add A350 flight control traffic to the AFDX. Results show the good properties of the generalized Network Calculus model compared to the CPA model and the efficiency of the extended AFDX to noticeably enhance the medium priority level delay bounds, while respecting the higher priority level constraints, in comparison with the current AFDX standard. | We notice that the discontinuities of the BLS credit happen between the end of the idle window and the start of the sending window for both the minimum and maximum service curves. This issue is situated around @math . This highlights the fact that @math is not taken into account in an accurate way by the WbA model. This led to the second model, CCbA @cite_8 , which is based on the continuity of the credit. The results presented in @cite_8 show the tightness of the CCbA. | {
"cite_N": [
"@cite_8"
],
"mid": [
"2880300384"
],
"abstract": [
"In this paper, we model and analyse the timing performance of an extended AFDX standard, incorporating the Burst Limiting Shaper (BLS) proposed by the Time Sensitive Networking group. The extended AFDX will enable the interconnection of different avionics domains with mixed-criticality levels, e.g., current AFDX traffic, Flight Control and In-Flight Entertainment. First, we present the model and the worst-case timing analysis, using the Network Calculus framework, of such an extended AFDX to infer real-time guarantees. Secondly, we conduct its performance evaluation on a representative AFDX configuration. Results show the tightness of the proposed model, with reference to simulation results. Moreover, they confirm the efficiency of incorporating the BLS in the AFDX standard to noticeably enhance the medium priority level delay bounds, while respecting the higher priority level constraints, in comparison with the current AFDX standard."
]
} |
1905.00526 | 2943515156 | Region proposal algorithms play an important role in most state-of-the-art two-stage object detection networks by hypothesizing object locations in the image. Nonetheless, region proposal algorithms are known to be the bottleneck in most two-stage object detection networks, increasing the processing time for each image and resulting in slow networks not suitable for real-time applications such as autonomous driving vehicles. In this paper we introduce RRPN, a Radar-based real-time region proposal algorithm for object detection in autonomous driving vehicles. RRPN generates object proposals by mapping Radar detections to the image coordinate system and generating pre-defined anchor boxes for each mapped Radar detection point. These anchor boxes are then transformed and scaled based on the object's distance from the vehicle, to provide more accurate proposals for the detected objects. We evaluate our method on the newly released NuScenes dataset [1] using the Fast R-CNN object detection network [2]. Compared to the Selective Search object proposal algorithm [3], our model operates more than 100x faster while at the same time achieves higher detection precision and recall. Code has been made publicly available at this https URL . | Authors in @cite_2 discussed the application of Radars in navigation for autonomous vehicles, using an extended Kalman filter to fuse the radar and vehicle control signals for estimating vehicle position. In @cite_8 authors proposed a correlation based pattern matching algorithm in addition to a range-window to detect and track objects in front of a vehicle. In @cite_9 Ji proposed an attention selection system based on Radar detections to find candidate targets and employ a classification network to classify those objects. They generated a single attention window for each Radar detection, and used a Multi-layer In-place Learning Network (MILN) as the classifier. | {
"cite_N": [
"@cite_8",
"@cite_9",
"@cite_2"
],
"mid": [
"2231187102",
"1489146341",
"2106533209"
],
"abstract": [
"",
"We propose an object classification system that incorporates information from a video camera and an automotive radar. The system implements three processes. The first process is attention selection, in which the radar guides a selection of a small number of candidate images for analysis by the camera and our learning method. In the second process, normalized attention windows are processed by orientation-selective feature detectors, generating a sparse representation for each window. In the final process, a multilayer in-place learning network is used to distinguish sparse representations of different objects. Though it is more flexible in terms of variety of classification tasks, the system currently demonstrates its high accuracy in comparison with others on real-world data of a two-class recognition problem.",
"Fuzzy logic techniques have become popular to address various processes for multisensor data fusion. Examples include: (1) fuzzy membership functions for data association, (2) evaluation of alternative hypotheses in multiple hypothesis trackers, (3) fuzzy-logic-based pattern recognition (e.g., for feature-based object identification), and (4) fuzzy inference schemes for sensor resource allocation. These approaches have been individually successful but are limited to only a single subprocess within a data fusion system. At The Pennsylvania State University, Applied Research Laboratory, a general-purpose fuzzy logic architecture has been developed that provides for control of sensing resources, fusion of data for tracking, automatic object recognition, control of system resources and elements, and automated situation assessment. This general architecture has been applied to implement an autonomous vehicle capable of self-direction, obstacle avoidance, and mission completion. The fuzzy logic architecture provides interpretation and fusion of multisensor data (i.e., perception) as well as logic for process control (action). This paper provides an overview of the fuzzy logic architecture and a discussion of its application to data fusion in the context of the Department of Defense (DoD) Joint Directors of Laboratories (JDL) Data Fusion Process Model. A new, robust, fuzzy calculus is introduced. An example is provided by modeling a component of the perception processing of a bat."
]
} |
1905.00526 | 2943515156 | Region proposal algorithms play an important role in most state-of-the-art two-stage object detection networks by hypothesizing object locations in the image. Nonetheless, region proposal algorithms are known to be the bottleneck in most two-stage object detection networks, increasing the processing time for each image and resulting in slow networks not suitable for real-time applications such as autonomous driving vehicles. In this paper we introduce RRPN, a Radar-based real-time region proposal algorithm for object detection in autonomous driving vehicles. RRPN generates object proposals by mapping Radar detections to the image coordinate system and generating pre-defined anchor boxes for each mapped Radar detection point. These anchor boxes are then transformed and scaled based on the object's distance from the vehicle, to provide more accurate proposals for the detected objects. We evaluate our method on the newly released NuScenes dataset [1] using the Fast R-CNN object detection network [2]. Compared to the Selective Search object proposal algorithm [3], our model operates more than 100x faster while at the same time achieves higher detection precision and recall. Code has been made publicly available at this https URL . | Authors in @cite_0 proposed a LIDAR and vision-based pedestrian detection system using both a centralized and decentralized fusion architecture. In the former, authors proposed a feature level fusion system where features from LIDAR and vision spaces are combined in a single vector which is classified using a single classifier. In the latter, two classifiers are employed, one per sensor‐feature space. More recently, Choi in @cite_17 proposed a multi-sensor fusion system addressing the fusion of 14 sensors integrated on a vehicle. This system uses an Extended Kalman Filter to process the observations from individual sensors and is able to detect and track pedestrian, bicyclists and vehicles. | {
"cite_N": [
"@cite_0",
"@cite_17"
],
"mid": [
"2074602185",
"2031031248"
],
"abstract": [
"A perception system for pedestrian detection in urban scenarios using information from a LIDAR and a single camera is presented. Two sensor fusion architectures are described, a centralized and a decentralized one. In the former, the fusion process occurs at the feature level, i.e., features from LIDAR and vision spaces are combined in a single vector for posterior classification using a single classifier. In the latter, two classifiers are employed, one per sensor-feature space, which were offline selected based on information theory and fused by a trainable fusion method applied over the likelihoods provided by the component classifiers. The proposed schemes for sensor combination, and more specifically the trainable fusion method, lead to enhanced detection performance and, in addition, maintenance of false-alarms under tolerable values in comparison with single-based classifiers. Experimental results highlight the performance and effectiveness of the proposed pedestrian detection system and the related sensor data combination strategies. © 2009 Wiley Periodicals, Inc.",
"A self-driving car, to be deployed in real-world driving environments, must be capable of reliably detecting and effectively tracking of nearby moving objects. This paper presents our new, moving object detection and tracking system that extends and improves our earlier system used for the 2007 DARPA Urban Challenge. We revised our earlier motion and observation models for active sensors (i.e., radars and LIDARs) and introduced a vision sensor. In the new system, the vision module detects pedestrians, bicyclists, and vehicles to generate corresponding vision targets. Our system utilizes this visual recognition information to improve a tracking model selection, data association, and movement classification of our earlier system. Through the test using the data log of actual driving, we demonstrate the improvement and performance gain of our new tracking system."
]
} |
1905.00526 | 2943515156 | Region proposal algorithms play an important role in most state-of-the-art two-stage object detection networks by hypothesizing object locations in the image. Nonetheless, region proposal algorithms are known to be the bottleneck in most two-stage object detection networks, increasing the processing time for each image and resulting in slow networks not suitable for real-time applications such as autonomous driving vehicles. In this paper we introduce RRPN, a Radar-based real-time region proposal algorithm for object detection in autonomous driving vehicles. RRPN generates object proposals by mapping Radar detections to the image coordinate system and generating pre-defined anchor boxes for each mapped Radar detection point. These anchor boxes are then transformed and scaled based on the object's distance from the vehicle, to provide more accurate proposals for the detected objects. We evaluate our method on the newly released NuScenes dataset [1] using the Fast R-CNN object detection network [2]. Compared to the Selective Search object proposal algorithm [3], our model operates more than 100x faster while at the same time achieves higher detection precision and recall. Code has been made publicly available at this https URL . | Vision based object proposal algorithms have been very popular among object detection networks. Authors in @cite_5 proposed the Selective Search algorithm, diversifying the search for objects by using a variety of complementary image partitionings. Despite its high accuracy, Selective Search is computationally expensive, operating at 2-7 seconds per image. Edge Boxes @cite_16 is another vision based object proposal algorithm using edges to detect objects. Edge Boxes is faster than the Selective Search algorithm with a run time of @math seconds per image, but it is still considered very slow in real-time applications such as autonomous driving. | {
"cite_N": [
"@cite_5",
"@cite_16"
],
"mid": [
"2088049833",
"7746136"
],
"abstract": [
"This paper addresses the problem of generating possible object locations for use in object recognition. We introduce selective search which combines the strength of both an exhaustive search and segmentation. Like segmentation, we use the image structure to guide our sampling process. Like exhaustive search, we aim to capture all possible object locations. Instead of a single technique to generate possible object locations, we diversify our search and use a variety of complementary image partitionings to deal with as many image conditions as possible. Our selective search results in a small set of data-driven, class-independent, high quality locations, yielding 99% recall and a Mean Average Best Overlap of 0.879 at 10,097 locations. The reduced number of locations compared to an exhaustive search enables the use of stronger machine learning techniques and stronger appearance models for object recognition. In this paper we show that our selective search enables the use of the powerful Bag-of-Words model for recognition. The selective search software is made publicly available (Software: http: disi.unitn.it uijlings SelectiveSearch.html ).",
"The use of object proposals is an effective recent approach for increasing the computational efficiency of object detection. We propose a novel method for generating object bounding box proposals using edges. Edges provide a sparse yet informative representation of an image. Our main observation is that the number of contours that are wholly contained in a bounding box is indicative of the likelihood of the box containing an object. We propose a simple box objectness score that measures the number of edges that exist in the box minus those that are members of contours that overlap the box’s boundary. Using efficient data structures, millions of candidate boxes can be evaluated in a fraction of a second, returning a ranked set of a few thousand top-scoring proposals. Using standard metrics, we show results that are significantly more accurate than the current state-of-the-art while being faster to compute. In particular, given just 1000 proposals we achieve over 96% object recall at overlap threshold of 0.5 and over 75% recall at the more challenging overlap of 0.7. Our approach runs in 0.25 seconds and we additionally demonstrate a near real-time variant with only minor loss in accuracy."
]
} |
1905.00453 | 2914904117 | Personalized recommendation algorithms learn a user's preference for an item by measuring a distance similarity between them. However, some of the existing recommendation models (e.g., matrix factorization) assume a linear relationship between the user and item. This approach limits the capacity of recommender systems, since the interactions between users and items in real-world applications are much more complex than the linear relationship. To overcome this limitation, in this paper, we design and propose a deep learning framework called Signed Distance-based Deep Memory Recommender, which captures non-linear relationships between users and items explicitly and implicitly, and works well in both general recommendation task and shopping basket-based recommendation task. Through an extensive empirical study on six real-world datasets in the two recommendation tasks, our proposed approach achieved significant improvement over ten state-of-the-art recommendation models. | Latent Factor Models (LFM) have been extensively studied in the literature, which include Matrix Factorization @cite_0 , Bayesian Personalized Ranking @cite_46 , and fast matrix factorization for implicit feedback (eALS) @cite_43 . Despite their success, LFM suffer from several limitations. First, LFM overlook associations between the user's previously consumed items and the target item (e.g. mobile phones and phone cases). Second, LFM usually rely on inner product function, whose linearity limits the capability of modeling complex user-item interactions. To address the second issue, several non-linear latent factor models have been proposed, with the help of Gaussian process @cite_61 or kernels @cite_13 @cite_4 . However, they either require expensive hyper-parameter tuning or face difficulties in finding good kernels for different data patterns. | {
"cite_N": [
"@cite_61",
"@cite_4",
"@cite_0",
"@cite_43",
"@cite_46",
"@cite_13"
],
"mid": [
"2170551306",
"2403959208",
"2101409192",
"2340502990",
"2140310134",
"2511635700"
],
"abstract": [
"A popular approach to collaborative filtering is matrix factorization. In this paper we develop a non-linear probabilistic matrix factorization using Gaussian process latent variable models. We use stochastic gradient descent (SGD) to optimize the model. SGD allows us to apply Gaussian processes to data sets with millions of observations without approximate methods. We apply our approach to benchmark movie recommender data sets. The results show better than previous state-of-the-art performance.",
"We propose a new matrix completion algorithm— Kernelized Probabilistic Matrix Factorization (KPMF), which effectively incorporates external side information into the matrix factorization process. Unlike Probabilistic Matrix Factorization (PMF) [14], which assumes an independent latent vector for each row (and each column) with Gaussian priors, KPMF works with latent vectors spanning all rows (and columns) with Gaussian Process (GP) priors. Hence, KPMF explicitly captures the underlying (nonlinear) covariance structures across rows and columns. This crucial difference greatly boosts the performance of KPMF when appropriate side information, e.g., users’ social network in recommender systems, is incorporated. Furthermore, GP priors allow the KPMF model to fill in a row that is entirely missing in the original matrix based on the side information alone, which is not feasible for standard PMF formulation. In our paper, we mainly work on the matrix completion problem with a graph among the rows and/or columns as side information, but the proposed framework can be easily used with other types of side information as well. Finally, we demonstrate the efficacy of KPMF through two different applications: 1) recommender systems and 2) image restoration.",
"A common task of recommender systems is to improve customer experience through personalized recommendations based on prior implicit feedback. These systems passively track different sorts of user behavior, such as purchase history, watching habits and browsing activity, in order to model user preferences. Unlike the much more extensively researched explicit feedback, we do not have any direct input from the users regarding their preferences. In particular, we lack substantial evidence on which products consumer dislike. In this work we identify unique properties of implicit feedback datasets. We propose treating the data as indication of positive and negative preference associated with vastly varying confidence levels. This leads to a factor model which is especially tailored for implicit feedback recommenders. We also suggest a scalable optimization procedure, which scales linearly with the data size. The algorithm is used successfully within a recommender system for television shows. It compares favorably with well tuned implementations of other known methods. In addition, we offer a novel way to give explanations to recommendations given by this factor model.",
"This paper contributes improvements on both the effectiveness and efficiency of Matrix Factorization (MF) methods for implicit feedback. We highlight two critical issues of existing works. First, due to the large space of unobserved feedback, most existing works resort to assign a uniform weight to the missing data to reduce computational complexity. However, such a uniform assumption is invalid in real-world settings. Second, most methods are also designed in an offline setting and fail to keep up with the dynamic nature of online data. We address the above two issues in learning MF models from implicit feedback. We first propose to weight the missing data based on item popularity, which is more effective and flexible than the uniform-weight assumption. However, such a non-uniform weighting poses efficiency challenge in learning the model. To address this, we specifically design a new learning algorithm based on the element-wise Alternating Least Squares (eALS) technique, for efficiently optimizing a MF model with variably-weighted missing data. We exploit this efficiency to then seamlessly devise an incremental update strategy that instantly refreshes a MF model given new feedback. Through comprehensive experiments on two public datasets in both offline and online protocols, we show that our implemented, open-source (https://github.com/hexiangnan/sigir16-eals) eALS consistently outperforms state-of-the-art implicit MF methods.",
"Item recommendation is the task of predicting a personalized ranking on a set of items (e.g. websites, movies, products). In this paper, we investigate the most common scenario with implicit feedback (e.g. clicks, purchases). There are many methods for item recommendation from implicit feedback like matrix factorization (MF) or adaptive k-nearest-neighbor (kNN). Even though these methods are designed for the item prediction task of personalized ranking, none of them is directly optimized for ranking. In this paper we present a generic optimization criterion BPR-Opt for personalized ranking that is the maximum posterior estimator derived from a Bayesian analysis of the problem. We also provide a generic learning algorithm for optimizing models with respect to BPR-Opt. The learning method is based on stochastic gradient descent with bootstrap sampling. We show how to apply our method to two state-of-the-art recommender models: matrix factorization and adaptive kNN. Our experiments indicate that for the task of personalized ranking our optimization method outperforms the standard learning techniques for MF and kNN. The results show the importance of optimizing models for the right criterion.",
""
]
} |
1905.00453 | 2914904117 | Personalized recommendation algorithms learn a user's preference for an item by measuring a distance/similarity between them. However, some of the existing recommendation models (e.g., matrix factorization) assume a linear relationship between the user and item. This approach limits the capacity of recommender systems, since the interactions between users and items in real-world applications are much more complex than a linear relationship. To overcome this limitation, in this paper, we design and propose a deep learning framework called Signed Distance-based Deep Memory Recommender, which captures non-linear relationships between users and items explicitly and implicitly, and works well in both the general recommendation task and the shopping basket-based recommendation task. Through an extensive empirical study on six real-world datasets in the two recommendation tasks, our proposed approach achieved significant improvements over ten state-of-the-art recommendation models. | NeuMF @cite_55 is a neural network that generalizes matrix factorization via a Multi-Layer Perceptron (MLP) for learning non-linear interaction functions. Similarly, some other works @cite_24 @cite_56 @cite_48 @cite_40 substitute the MLP with an auto-encoder architecture. It is worth noting that all these approaches consider only the user-item latent space and overlook correlations in the item-item latent space. In addition, some deep learning-based works @cite_14 @cite_7 @cite_44 @cite_45 @cite_6 employ auxiliary information such as item descriptions @cite_29 , music content @cite_1 , item visual features @cite_16 @cite_8 , and reviews @cite_12 to address the cold-start problem. However, this auxiliary information is not always available, which limits their applicability in many real-world systems. Another line of work uses deep neural networks to model temporal effects of consumed items @cite_50 @cite_35 @cite_19 @cite_15 .
Although our proposed methods do not explicitly consider temporal effects, SDM utilizes the time information to select a set of recently consumed items as the context items of the target item. | {
"cite_N": [
"@cite_35",
"@cite_29",
"@cite_44",
"@cite_15",
"@cite_8",
"@cite_48",
"@cite_7",
"@cite_55",
"@cite_6",
"@cite_56",
"@cite_19",
"@cite_40",
"@cite_50",
"@cite_16",
"@cite_12",
"@cite_14",
"@cite_1",
"@cite_24",
"@cite_45"
],
"mid": [
"2583674722",
"2515144511",
"2749348810",
"2783272285",
"2741249238",
"2038585576",
"2786995169",
"2605350416",
"2893346805",
"2253995343",
"2625746539",
"1720514416",
"2262817822",
"2742143754",
"2788953034",
"2790271700",
"2137028279",
"2963085847",
"2905305843"
],
"abstract": [
"Recommender systems traditionally assume that user profiles and movie attributes are static. Temporal dynamics are purely reactive, that is, they are inferred after they are observed, e.g. after a user's taste has changed or based on hand-engineered temporal bias corrections for movies. We propose Recurrent Recommender Networks (RRN) that are able to predict future behavioral trajectories. This is achieved by endowing both users and movies with a Long Short-Term Memory (LSTM) autoregressive model that captures dynamics, in addition to a more traditional low-rank factorization. On multiple real-world datasets, our model offers excellent prediction accuracy and it is very compact, since we need not learn latent state but rather just the state transition function.",
"Sparseness of user-to-item rating data is one of the major factors that deteriorate the quality of recommender system. To handle the sparsity problem, several recommendation techniques have been proposed that additionally consider auxiliary information to improve rating prediction accuracy. In particular, when rating data is sparse, document modeling-based approaches have improved the accuracy by additionally utilizing textual data such as reviews, abstracts, or synopses. However, due to the inherent limitation of the bag-of-words model, they have difficulties in effectively utilizing contextual information of the documents, which leads to shallow understanding of the documents. This paper proposes a novel context-aware recommendation model, convolutional matrix factorization (ConvMF) that integrates convolutional neural network (CNN) into probabilistic matrix factorization (PMF). Consequently, ConvMF captures contextual information of documents and further enhances the rating prediction accuracy. Our extensive evaluations on three real-world datasets show that ConvMF significantly outperforms the state-of-the-art recommendation models even when the rating data is extremely sparse. We also demonstrate that ConvMF successfully captures subtle contextual difference of a word in a document. Our implementation and datasets are available at http://dm.postech.ac.kr/ConvMF.",
"Recently, many e-commerce websites have encouraged their users to rate shopping items and write review texts. This review information has been very useful for understanding user preferences and item properties, as well as enhancing the capability to make personalized recommendations of these websites. In this paper, we propose to model user preferences and item properties using convolutional neural networks (CNNs) with dual local and global attention, motivated by the superiority of CNNs to extract complex features. By using aggregated review texts from a user and aggregated review text for an item, our model can learn the unique features (embedding) of each user and each item. These features are then used to predict ratings. We train these user and item networks jointly which enable the interaction between users and items in a similar way as matrix factorization. The local attention provides us insight on a user's preferences or an item's properties. The global attention helps CNNs focus on the semantic meaning of the whole review text. Thus, the combined local and global attentions enable an interpretable and better-learned representation of users and items. We validate the proposed models by testing on popular review datasets in Yelp and Amazon and compare the results with matrix factorization (MF), the hidden factor and topical (HFT) model, and the recently proposed convolutional matrix factorization (ConvMF+). Our proposed CNNs with dual attention model outperforms HFT and ConvMF+ in terms of mean square errors (MSE). In addition, we compare the user item embeddings learned from these models for classification and recommendation. These results also confirm the superior quality of user item embeddings learned from our model.",
"Top-N sequential recommendation models each user as a sequence of items interacted in the past and aims to predict top-N ranked items that a user will likely interact in a 'near future'. The order of interaction implies that sequential patterns play an important role where more recent items in a sequence have a larger impact on the next item. In this paper, we propose a Convolutional Sequence Embedding Recommendation Model (Caser) as a solution to address this requirement. The idea is to embed a sequence of recent items into an 'image' in the time and latent spaces and learn sequential patterns as local features of the image using convolutional filters. This approach provides a unified and flexible network structure for capturing both general preferences and sequential patterns. The experiments on public data sets demonstrated that Caser consistently outperforms state-of-the-art sequential recommendation methods on a variety of common evaluation metrics.",
"Multimedia content is dominating today's Web information. The nature of multimedia user-item interactions is 1/0 binary implicit feedback (e.g., photo likes, video views, song downloads, etc.), which can be collected at a larger scale with a much lower cost than explicit feedback (e.g., product ratings). However, the majority of existing collaborative filtering (CF) systems are not well-designed for multimedia recommendation, since they ignore the implicitness in users' interactions with multimedia content. We argue that, in multimedia recommendation, there exists item- and component-level implicitness which blurs the underlying users' preferences. The item-level implicitness means that users' preferences on items (e.g., photos, videos, songs, etc.) are unknown, while the component-level implicitness means that inside each item users' preferences on different components (e.g., regions in an image, frames of a video, etc.) are unknown. For example, a 'view' on a video does not provide any specific information about how the user likes the video (i.e., item-level) and which parts of the video the user is interested in (i.e., component-level). In this paper, we introduce a novel attention mechanism in CF to address the challenging item- and component-level implicit feedback in multimedia recommendation, dubbed Attentive Collaborative Filtering (ACF). Specifically, our attention model is a neural network that consists of two attention modules: the component-level attention module, starting from any content feature extraction network (e.g., CNN for images/videos), which learns to select informative components of multimedia items, and the item-level attention module, which learns to score the item preferences. ACF can be seamlessly incorporated into classic CF models with implicit feedback, such as BPR and SVD++, and efficiently trained using SGD. Through extensive experiments on two real-world multimedia Web services: Vine and Pinterest, we show that ACF significantly outperforms state-of-the-art CF methods.",
"Collaborative filtering (CF) has been widely employed within recommender systems to solve many real-world problems. Learning effective latent factors plays the most important role in collaborative filtering. Traditional CF methods based upon matrix factorization techniques learn the latent factors from the user-item ratings and suffer from the cold start problem as well as the sparsity problem. Some improved CF methods enrich the priors on the latent factors by incorporating side information as regularization. However, the learned latent factors may not be very effective due to the sparse nature of the ratings and the side information. To tackle this problem, we learn effective latent representations via deep learning. Deep learning models have emerged as very appealing in learning effective representations in many applications. In particular, we propose a general deep architecture for CF by integrating matrix factorization with deep feature learning. We provide a natural instantiation of our architecture by combining probabilistic matrix factorization with marginalized denoising stacked auto-encoders. The combined framework leads to a parsimonious fit over the latent features as indicated by its improved performance in comparison to prior state-of-the-art models over four large datasets for the tasks of movie/book recommendation and response prediction.",
"Many recent state-of-the-art recommender systems such as D-ATT, TransNet and DeepCoNN exploit reviews for representation learning. This paper proposes a new neural architecture for recommendation with reviews. Our model operates on a multi-hierarchical paradigm and is based on the intuition that not all reviews are created equal, i.e., only a selected few are important. The importance, however, should be dynamically inferred depending on the current target. To this end, we propose a review-by-review pointer-based learning scheme that extracts important reviews from user and item reviews and subsequently matches them in a word-by-word fashion. This enables not only the most informative reviews to be utilized for prediction but also a deeper word-level interaction. Our pointer-based method operates with a gumbel-softmax based pointer mechanism that enables the incorporation of discrete vectors within differentiable neural architectures. Our pointer mechanism is co-attentive in nature, learning pointers which are co-dependent on user-item relationships. Finally, we propose a multi-pointer learning scheme that learns to combine multiple views of user-item interactions. We demonstrate the effectiveness of our proposed model via extensive experiments on 24 benchmark datasets from Amazon and Yelp. Empirical results show that our approach significantly outperforms existing state-of-the-art models, with up to 19% and 71% relative improvement when compared to TransNet and DeepCoNN respectively. We study the behavior of our multi-pointer learning mechanism, shedding light on 'evidence aggregation' patterns in review-based recommender systems.",
"In recent years, deep neural networks have yielded immense success on speech recognition, computer vision and natural language processing. However, the exploration of deep neural networks on recommender systems has received relatively less scrutiny. In this work, we strive to develop techniques based on neural networks to tackle the key problem in recommendation --- collaborative filtering --- on the basis of implicit feedback. Although some recent work has employed deep learning for recommendation, they primarily used it to model auxiliary information, such as textual descriptions of items and acoustic features of musics. When it comes to model the key factor in collaborative filtering --- the interaction between user and item features, they still resorted to matrix factorization and applied an inner product on the latent features of users and items. By replacing the inner product with a neural architecture that can learn an arbitrary function from data, we present a general framework named NCF, short for Neural network-based Collaborative Filtering. NCF is generic and can express and generalize matrix factorization under its framework. To supercharge NCF modelling with non-linearities, we propose to leverage a multi-layer perceptron to learn the user-item interaction function. Extensive experiments on two real-world datasets show significant improvements of our proposed NCF framework over the state-of-the-art methods. Empirical evidence shows that using deeper layers of neural networks offers better recommendation performance.",
"The rapid growth of Location-based Social Networks (LBSNs) provides a great opportunity to satisfy the strong demand for personalized Point-of-Interest (POI) recommendation services. However, with the tremendous increase of users and POIs, POI recommender systems still face several challenging problems: (1) the hardness of modeling complex user-POI interactions from sparse implicit feedback; (2) the difficulty of incorporating the geographical context information. To cope with these challenges, we propose a novel autoencoder-based model to learn the complex user-POI relations, namely SAE-NAD, which consists of a self-attentive encoder (SAE) and a neighbor-aware decoder (NAD). In particular, unlike previous works equally treat users' checked-in POIs, our self-attentive encoder adaptively differentiates the user preference degrees in multiple aspects, by adopting a multi-dimensional attention mechanism. To incorporate the geographical context information, we propose a neighbor-aware decoder to make users' reachability higher on the similar and nearby neighbors of checked-in POIs, which is achieved by the inner product of POI embeddings together with the radial basis function (RBF) kernel. To evaluate the proposed model, we conduct extensive experiments on three real-world datasets with many state-of-the-art methods and evaluation metrics. The experimental results demonstrate the effectiveness of our model.",
"Most real-world recommender services measure their performance based on the top-N results shown to the end users. Thus, advances in top-N recommendation have far-ranging consequences in practical applications. In this paper, we present a novel method, called Collaborative Denoising Auto-Encoder (CDAE), for top-N recommendation that utilizes the idea of Denoising Auto-Encoders. We demonstrate that the proposed model is a generalization of several well-known collaborative filtering models but with more flexible components. Thorough experiments are conducted to understand the performance of CDAE under various component settings. Furthermore, experimental results on several public datasets demonstrate that CDAE consistently outperforms state-of-the-art top-N recommendation methods on a variety of common evaluation metrics.",
"Session-based recommendations are highly relevant in many modern on-line services (e.g. e-commerce, video streaming) and recommendation settings. Recently, Recurrent Neural Networks have been shown to perform very well in session-based settings. While in many session-based recommendation domains user identifiers are hard to come by, there are also domains in which user profiles are readily available. We propose a seamless way to personalize RNN models with cross-session information transfer and devise a Hierarchical RNN model that relays and evolves latent hidden states of the RNNs across user sessions. Results on two industry datasets show large improvements over the session-only RNNs.",
"This paper proposes AutoRec, a novel autoencoder framework for collaborative filtering (CF). Empirically, AutoRec's compact and efficiently trainable model outperforms state-of-the-art CF techniques (biased matrix factorization, RBM-CF and LLORMA) on the Movielens and Netflix datasets.",
"We apply recurrent neural networks (RNN) on a new domain, namely recommender systems. Real-life recommender systems often face the problem of having to base recommendations only on short session-based data (e.g. a small sportsware website) instead of long user histories (as in the case of Netflix). In this situation the frequently praised matrix factorization approaches are not accurate. This problem is usually overcome in practice by resorting to item-to-item recommendations, i.e. recommending similar items. We argue that by modeling the whole session, more accurate recommendations can be provided. We therefore propose an RNN-based approach for session-based recommendations. Our approach also considers practical aspects of the task and introduces several modifications to classic RNNs such as a ranking loss function that make it more viable for this specific problem. Experimental results on two data-sets show marked improvements over widely used approaches.",
"Visual information is an important factor in recommender systems. Some studies have been done to model user preferences for visual recommendation. Usually, an item consists of two fundamental components: style and category. Conventional methods model items in a common visual feature space. In these methods, visual representations always can only capture the categorical information but fail in capturing the styles of items. Style information indicates the preferences of users and has significant effect in visual recommendation. Accordingly, we propose a DeepStyle method for learning style features of items and sensing preferences of users. Experiments conducted on two real-world datasets illustrate the effectiveness of DeepStyle for visual recommendation.",
"Collaborative filtering (CF) is a common recommendation approach that relies on user-item ratings. However, the natural sparsity of user-item rating data can be problematic in many domains and settings, limiting the ability to generate accurate predictions and effective recommendations. Moreover, in some CF approaches latent features are often used to represent users and items, which can lead to a lack of recommendation transparency and explainability. User-generated, customer reviews are now commonplace on many web sites, providing users with an opportunity to convey their experiences and opinions of products and services. As such, these reviews have the potential to serve as a useful source of recommendation data, through capturing valuable sentiment information about particular product features. In this paper, we present a novel deep learning recommendation model, which co-learns user and item information from ratings and customer reviews, by optimizing matrix factorization and an attention-based GRU network. Using real-world datasets we show a significant improvement in recommendation performance, compared to a variety of alternatives. Furthermore, the approach is useful when it comes to assigning intuitive meanings to latent features to improve the transparency and explainability of recommender systems.",
"In this paper, we introduce a novel recommendation model, which harnesses a convolutional neural network to mine meaningful information from customer reviews, and integrates it with matrix factorization algorithm seamlessly. It is a valid method to improve the transparency of CF algorithms.",
"Automatic music recommendation has become an increasingly relevant problem in recent years, since a lot of music is now sold and consumed digitally. Most recommender systems rely on collaborative filtering. However, this approach suffers from the cold start problem: it fails when no usage data is available, so it is not effective for recommending new and unpopular songs. In this paper, we propose to use a latent factor model for recommendation, and predict the latent factors from music audio when they cannot be obtained from usage data. We compare a traditional approach using a bag-of-words representation of the audio signals with deep convolutional neural networks, and evaluate the predictions quantitatively and qualitatively on the Million Song Dataset. We show that using predicted latent factors produces sensible recommendations, despite the fact that there is a large semantic gap between the characteristics of a song that affect user preference and the corresponding audio signal. We also show that recent advances in deep learning translate very well to the music recommendation setting, with deep convolutional neural networks significantly outperforming the traditional approach.",
"We extend variational autoencoders (VAEs) to collaborative filtering for implicit feedback. This non-linear probabilistic model enables us to go beyond the limited modeling capacity of linear factor models which still largely dominate collaborative filtering research. We introduce a generative model with multinomial likelihood and use Bayesian inference to learn this powerful generative model. Despite widespread use in language modeling and economics, the multinomial likelihood receives less attention in the recommender systems literature. We introduce a different regularization parameter for the learning objective, which proves to be crucial for achieving competitive performance. Remarkably, there is an efficient way to tune the parameter using annealing. The resulting model and learning algorithm has information-theoretic connections to maximum entropy discrimination and the information bottleneck principle. Empirically, we show that the proposed approach significantly outperforms several state-of-the-art baselines, including two recently-proposed neural network approaches, on several real-world datasets. We also provide extended experiments comparing the multinomial likelihood with other commonly used likelihood functions in the latent factor collaborative filtering literature and show favorable results. Finally, we identify the pros and cons of employing a principled Bayesian inference approach and characterize settings where it provides the most significant improvements.",
"The rapid growth of Internet services and mobile devices provides an excellent opportunity to satisfy the strong demand for the personalized item or product recommendation. However, with the tremendous increase of users and items, personalized recommender systems still face several challenging problems: (1) the hardness of exploiting sparse implicit feedback; (2) the difficulty of combining heterogeneous data. To cope with these challenges, we propose a gated attentive-autoencoder (GATE) model, which is capable of learning fused hidden representations of items' contents and binary ratings, through a neural gating structure. Based on the fused representations, our model exploits neighboring relations between items to help infer users' preferences. In particular, a word-level and a neighbor-level attention module are integrated with the autoencoder. The word-level attention learns the item hidden representations from items' word sequences, while favoring informative words by assigning larger attention weights. The neighbor-level attention learns the hidden representation of an item's neighborhood by considering its neighbors in a weighted manner. We extensively evaluate our model with several state-of-the-art methods and different validation metrics on four real-world datasets. The experimental results not only demonstrate the effectiveness of our model on top-N recommendation but also provide interpretable results attributed to the attention modules."
]
} |
1905.00453 | 2914904117 | Personalized recommendation algorithms learn a user's preference for an item by measuring a distance/similarity between them. However, some of the existing recommendation models (e.g., matrix factorization) assume a linear relationship between the user and item. This approach limits the capacity of recommender systems, since the interactions between users and items in real-world applications are much more complex than a linear relationship. To overcome this limitation, in this paper, we design and propose a deep learning framework called Signed Distance-based Deep Memory Recommender, which captures non-linear relationships between users and items explicitly and implicitly, and works well in both the general recommendation task and the shopping basket-based recommendation task. Through an extensive empirical study on six real-world datasets in the two recommendation tasks, our proposed approach achieved significant improvements over ten state-of-the-art recommendation models. | The work most closely related to ours is the recently proposed Collaborative Memory Networks (CMN) @cite_58 . In that work, a Memory Network @cite_42 is adapted to measure similarities between users and user neighbors. Key differences between our work and CMN are as follows: (i) First, we follow an item neighborhood-based design, whereas CMN follows a user neighborhood-based design; prior work showed that item neighborhood-based models slightly outperformed user neighborhood-based models @cite_52 @cite_62 . (ii) Second, our proposed SDM model uses a personalized metric-based attention mechanism and produces signed distance scores as output, whereas CMN exploited a traditional inner product-based attention. (iii) Third, we use a gated multi-hop architecture @cite_20 , which was shown to perform better than the original multi-hop design @cite_42 . | {
"cite_N": [
"@cite_62",
"@cite_42",
"@cite_52",
"@cite_58",
"@cite_20"
],
"mid": [
"2042281163",
"2951008357",
"2159094788",
"2798972759",
"2534274346"
],
"abstract": [
"Recommender systems apply knowledge discovery techniques to the problem of making personalized recommendations for information, products or services during a live interaction. These systems, especially the k-nearest neighbor collaborative filtering based ones, are achieving widespread success on the Web. The tremendous growth in the amount of available information and the number of visitors to Web sites in recent years poses some key challenges for recommender systems. These are: producing high quality recommendations, performing many recommendations per second for millions of users and items and achieving high coverage in the face of data sparsity. In traditional collaborative filtering systems the amount of work increases with the number of participants in the system. New recommender system technologies are needed that can quickly produce high quality recommendations, even for very large-scale problems. To address these issues we have explored item-based collaborative filtering techniques. Item-based techniques first analyze the user-item matrix to identify relationships between different items, and then use these relationships to indirectly compute recommendations for users. In this paper we analyze different item-based recommendation generation algorithms. We look into different techniques for computing item-item similarities (e.g., item-item correlation vs. cosine similarities between item vectors) and different techniques for obtaining recommendations from them (e.g., weighted sum vs. regression model). Finally, we experimentally evaluate our results and compare them to the basic k-nearest neighbor approach. Our experiments suggest that item-based algorithms provide dramatically better performance than user-based algorithms, while at the same time providing better quality than the best available user-based algorithms.",
"We introduce a neural network with a recurrent attention model over a possibly large external memory. The architecture is a form of Memory Network (Weston et al., 2015) but unlike the model in that work, it is trained end-to-end, and hence requires significantly less supervision during training, making it more generally applicable in realistic settings. It can also be seen as an extension of RNNsearch to the case where multiple computational steps (hops) are performed per output symbol. The flexibility of the model allows us to apply it to tasks as diverse as (synthetic) question answering and to language modeling. For the former our approach is competitive with Memory Networks, but with less supervision. For the latter, on the Penn TreeBank and Text8 datasets our approach demonstrates comparable performance to RNNs and LSTMs. In both cases we show that the key concept of multiple computational hops yields improved results.",
"Recommendation algorithms are best known for their use on e-commerce Web sites, where they use input about a customer's interests to generate a list of recommended items. Many applications use only the items that customers purchase and explicitly rate to represent their interests, but they can also use other attributes, including items viewed, demographic data, subject interests, and favorite artists. At Amazon.com, we use recommendation algorithms to personalize the online store for each customer. The store radically changes based on customer interests, showing programming titles to a software engineer and baby toys to a new mother. There are three common approaches to solving the recommendation problem: traditional collaborative filtering, cluster models, and search-based methods. Here, we compare these methods with our algorithm, which we call item-to-item collaborative filtering. Unlike traditional collaborative filtering, our algorithm's online computation scales independently of the number of customers and number of items in the product catalog. Our algorithm produces recommendations in real-time, scales to massive data sets, and generates high quality recommendations.",
"Recommendation systems play a vital role to keep users engaged with personalized content in modern online platforms. Deep learning has revolutionized many research fields and there is a recent surge of interest in applying it to collaborative filtering (CF). However, existing methods compose deep learning architectures with the latent factor model ignoring a major class of CF models, neighborhood or memory-based approaches. We propose Collaborative Memory Networks (CMN), a deep architecture to unify the two classes of CF models capitalizing on the strengths of the global structure of latent factor model and local neighborhood-based structure in a nonlinear fashion. Motivated by the success of Memory Networks, we fuse a memory component and neural attention mechanism as the neighborhood component. The associative addressing scheme with the user and item memories in the memory module encodes complex user-item relations coupled with the neural attention mechanism to learn a user-item specific neighborhood. Finally, the output module jointly exploits the neighborhood with the user and item memories to produce the ranking score. Stacking multiple memory modules together yield deeper architectures capturing increasingly complex user-item relations. Furthermore, we show strong connections between CMN components, memory networks and the three classes of CF models. Comprehensive experimental results demonstrate the effectiveness of CMN on three public datasets outperforming competitive baselines. Qualitative visualization of the attention weights provide insight into the model's recommendation process and suggest the presence of higher order interactions.",
"A method and apparatus for gating an end-to-end memory network are disclosed. For example, the method includes receiving a question as an input, calculating an updated state of a memory controller by applying a gate mechanism to an output based on the input and a current state of the memory controller of the end-to-end memory network, wherein the updated state of the memory controller determines a next read operation of a memory cell of a plurality of memory cells in the end-to-end memory network, repeating the calculating for a pre-determined number of hops and predicting an answer to the question by applying a softmax function to a sum of the output and the state of the memory controller of each one of the pre-determined number of hops."
]
} |
1905.00458 | 2943195349 | Yield estimation and forecasting are of special interest in the field of grapevine breeding and viticulture. The number of harvested berries per plant is strongly correlated with the resulting quality. Therefore, early yield forecasting can enable a focused thinning of berries to ensure a high quality end product. Traditionally yield estimation is done by extrapolating from a small sample size and by utilizing historic data. Moreover, it needs to be carried out by skilled experts with much experience in this field. Berry detection in images offers a cheap, fast and non-invasive alternative to the otherwise time-consuming and subjective on-site analysis by experts. We apply fully convolutional neural networks on images acquired with the Phenoliner, a field phenotyping platform. We count single berries in images to avoid the error-prone detection of grapevine clusters. Clusters are often overlapping and can vary a lot in the size which makes the reliable detection of them difficult. We address especially the detection of white grapes directly in the vineyard. The detection of single berries is formulated as a classification task with three classes, namely 'berry', 'edge' and 'background'. A connected component algorithm is applied to determine the number of berries in one image. We compare the automatically counted number of berries with the manually detected berries in 60 images showing Riesling plants in vertical shoot positioned trellis (VSP) and semi minimal pruned hedges (SMPH). We are able to detect berries correctly within the VSP system with an accuracy of 94.0% and for the SMPH system with 85.6%. | An overview of agricultural applications was given, for example, by @cite_10 , in which they compare different sensors and algorithms for fruit detection and localization with a robotic background. | {
"cite_N": [
"@cite_10"
],
"mid": [
"2396098103"
],
"abstract": [
"We reviewed successes and challenges of sensing systems for fruit detection.We also reviewed the capabilities of sensors used for fruit localization.Challenges included occlusion, clustering and variable lighting condition.We presented future direction including amendment of orchard environment.Sensor fusion and human machine collaboration can also be areas for research. This paper reviews the research and development of machine vision systems for fruit detection and localization for robotic harvesting and or crop-load estimation of specialty tree crops including apples, pears, and citrus. Variable lighting condition, occlusions, and clustering are some of the important issues needed to be addressed for accurate detection and localization of fruit in orchard environment. To address these issues, various techniques have been investigated using different types of sensors and their combinations as well as with different image processing techniques. This paper summarizes various techniques and their advantages and disadvantages in detecting fruit in plant or tree canopies. The paper also summarizes the sensors and systems developed and used by researchers to localize fruit as well as the potential and limitations of those systems. Finally, major challenges for the successful application of machine vision system for robotic fruit harvesting and crop-load estimation, and potential future directions for research and development are discussed."
]
} |
1905.00458 | 2943195349 | Yield estimation and forecasting are of special interest in the field of grapevine breeding and viticulture. The number of harvested berries per plant is strongly correlated with the resulting quality. Therefore, early yield forecasting can enable a focused thinning of berries to ensure a high quality end product. Traditionally yield estimation is done by extrapolating from a small sample size and by utilizing historic data. Moreover, it needs to be carried out by skilled experts with much experience in this field. Berry detection in images offers a cheap, fast and non-invasive alternative to the otherwise time-consuming and subjective on-site analysis by experts. We apply fully convolutional neural networks on images acquired with the Phenoliner, a field phenotyping platform. We count single berries in images to avoid the error-prone detection of grapevine clusters. Clusters are often overlapping and can vary a lot in the size which makes the reliable detection of them difficult. We address especially the detection of white grapes directly in the vineyard. The detection of single berries is formulated as a classification task with three classes, namely 'berry', 'edge' and 'background'. A connected component algorithm is applied to determine the number of berries in one image. We compare the automatically counted number of berries with the manually detected berries in 60 images showing Riesling plants in vertical shoot positioned trellis (VSP) and semi minimal pruned hedges (SMPH). We are able to detect berries correctly within the VSP system with an accuracy of 94.0% and for the SMPH system with 85.6%. | Some research focuses on simple image analysis frameworks by detecting geometric objects in images. @cite_17 detect fruits in uncontrolled conditions by detecting convex surfaces. The surfaces are classified with a k-nearest neighbour approach into 'fruit' and 'not fruit'.
They performed experiments on 4 different fruits including tomato, nectarine, pear and plum. @cite_1 investigate berry sizes in images taken by a consumer camera using an image analysis framework. They apply a circular Hough transform to the images and classify the results into 'berry' and 'not berry' using a Markov random field. However, the counting of grapes was not addressed. The first large-scale experiment was presented by @cite_0 in 2014. They evaluated their system in a realistic experimental setup over several years and hundreds of vines. They use a camera system with illumination which is mounted on a vehicle. A circular Hough transform is applied to the images, and the resulting berry candidates are classified by texture, colour and shape features. Neighbouring berries are grouped into clusters. | {
"cite_N": [
"@cite_0",
"@cite_1",
"@cite_17"
],
"mid": [
"1892082890",
"2023726356",
"2884598383"
],
"abstract": [
"We present a vision system that automatically predicts yield in vineyards accurately and with high resolution. Yield estimation traditionally requires tedious hand measurement, which is destructive, sparse in sampling, and inaccurate. Our method is efficient, high-resolution, and it is the first such system evaluated in realistic experimentation over several years and hundreds of vines spread over several acres of different vineyards. Other existing research is limited to small test sets of 10 vines or less, or just isolated grape clusters, with tightly controlled image acquisition and with artificially induced yield distributions. The system incorporates cameras and illumination mounted on a vehicle driving through the vineyard. We process images by exploiting the three prominent visual cues of texture, color, and shape into a strong classifier that detects berries even when they are of similar color to the vine leaves. We introduce methods to maximize the spatial and the overall accuracy of the yield estimates by optimizing the relationship between image measurements and yield. Our experimentation is conducted over four growing seasons in several wine and table-grape vineyards. These are the first such results from experimentation that is sufficiently sized for fair evaluation against true yield variation and real-world imaging conditions from a moving vehicle. Analysis of the results demonstrates yield estimates that capture up to 75 of spatial yield variance and with an average error between 3 and 11 of total yield.",
"Automatic non-invasive detection of grapevine berries.High-throughput determination of berry size.Utilization of active learning without user interaction and conditional random fields.Extraction of a reliable quantity of phenotypic data while keeping the quality high. The berry size is one of the most important fruit traits in grapevine breeding. Non-invasive, image-based phenotyping promises a fast and precise method for the monitoring of the grapevine berry size. In the present study an automated image analyzing framework was developed in order to estimate the size of grapevine berries from images in a high-throughput manner. The framework includes (i) the detection of circular structures which are potentially berries and (ii) the classification of these into the class 'berry' or 'non-berry' by utilizing a conditional random field. The approach used the concept of a one-class classification, since only the target class 'berry' is of interest and needs to be modeled. Moreover, the classification was carried out by using an automated active learning approach, i.e. no user interaction is required during the classification process and in addition, the process adapts automatically to changing image conditions, e.g. illumination or berry color. The framework was tested on three datasets consisting in total of 139 images. The images were taken in an experimental vineyard at different stages of grapevine growth according to the BBCH scale. The mean berry size of a plant estimated by the framework correlates with the manually measured berry size by 0.88.",
"Abstract Automatic fruit picking is a challenging problem in robotics with a wide application field. A prerequisite for realization of a robotic fruit picker is its ability to detect fruits in tree tops. An expert system, which would be able to compete with human perception, must be capable of recognizing fruits among leaves and branches under uncontrolled conditions, where fruits are occluded and shaded. In this paper, a novel approach for fruit recognition in RGB-D images based on detection and classification of convex surfaces is proposed. The input RGB-D image is first segmented into convex surfaces by a region growing procedure. Each convex surface is then described by an appropriate descriptor and classified with the aid of the associated descriptor. A novel descriptor of approximately convex surfaces is proposed, which we named Convex Template Instance (CTI) descriptor. It is based on approximating surfaces by convex polyhedrons with quantized face orientations, where every polyhedron face corresponds to one descriptor component. Computation of the proposed descriptor is simple and can be performed very efficiently. The proposed CTI descriptor is compared to the SHOT descriptor, a standard descriptor for 3D point clouds. Two variants of the both CTI and SHOT descriptor are evaluated, a variant which uses color and a variant which does not. A k-nearest neighbor classifier is used to classify detected surfaces into two classes: fruit and other . The main advantage of the proposed expert system in comparison to other fruit recognition solutions is its computational efficiency, which is of great importance for its target application – an automatic fruit picker. The proposed approach is evaluated using a challenging dataset containing RGB-D images of four fruit sorts acquired under uncontrolled conditions, which has been made publicly available to the scientific community, allowing benchmarking of novel fruit recognition methods."
]
} |
1905.00458 | 2943195349 | Yield estimation and forecasting are of special interest in the field of grapevine breeding and viticulture. The number of harvested berries per plant is strongly correlated with the resulting quality. Therefore, early yield forecasting can enable a focused thinning of berries to ensure a high quality end product. Traditionally yield estimation is done by extrapolating from a small sample size and by utilizing historic data. Moreover, it needs to be carried out by skilled experts with much experience in this field. Berry detection in images offers a cheap, fast and non-invasive alternative to the otherwise time-consuming and subjective on-site analysis by experts. We apply fully convolutional neural networks on images acquired with the Phenoliner, a field phenotyping platform. We count single berries in images to avoid the error-prone detection of grapevine clusters. Clusters are often overlapping and can vary a lot in the size which makes the reliable detection of them difficult. We address especially the detection of white grapes directly in the vineyard. The detection of single berries is formulated as a classification task with three classes, namely 'berry', 'edge' and 'background'. A connected component algorithm is applied to determine the number of berries in one image. We compare the automatically counted number of berries with the manually detected berries in 60 images showing Riesling plants in vertical shoot positioned trellis (VSP) and semi minimal pruned hedges (SMPH). We are able to detect berries correctly within the VSP system with an accuracy of 94.0% and for the SMPH system with 85.6%. | Neural networks have also been applied for the analysis of grapevine images. In 2016, @cite_5 presented a smartphone application which was able to recognize and count green berries in images. They put a black box around single clusters and took an image.
They applied circular light-reflection detection and classified the results with a neural net. In a later approach, they implemented a more automated data collection pipeline and removed the need for a black box around the grape cluster @cite_2 . @cite_13 concentrated on the detection and quantification of grapevine inflorescences in images. They first identified image regions containing inflorescences with a neural network and applied a circular Hough transform on the resulting image regions. | {
"cite_N": [
"@cite_5",
"@cite_13",
"@cite_2"
],
"mid": [
"2588938825",
"",
"2773985483"
],
"abstract": [
"A new image analysis algorithm based on mathematical morphology and pixel classification for grapevine berry counting is presented in this paper. First, a set of berry candidates represented by connected components was extracted. Then, six descriptors were calculated using key features of these components, and were employed for false positive (FP) discrimination using a supervised approach. More specifically, the set of descriptors modelled the grapes' distinctive shape, light reflection pattern and colour. Two classifiers were tested, a three-layer neural network and an optimised support vector machine. A dataset of 152 images was acquired with a low-cost smart phone camera. Images came from seven grapevine varieties, 18 per variety, at the two phenological stages in the Baggiolini scale between berry set (named stage K; 94 images) and cluster-closure (named stage L; 32 images). 126 of these images were kept for external validation and the remaining 26 were used for training (12 at stage L and 14 at K). From these training images, 5438 true false positive samples were generated and labelled in terms of the six descriptors. The neural network performed better than the support vector machine, yielding consistent Recall and Precision average values of 0.9572 and 0.8705, respectively. The presented algorithm, implemented as a smartphone application, can constitute a useful diagnosis tool for the in-the-field and non-destructive yield prediction and berry set assessing for the grape and wine industry.",
"",
"Abstract Early grapevine yield assessment provides information to viticulturists to help taking management decisions to achieve the desired grape quality and yield amount. In previous works, image analysis has been explored to this effect, but with systems performing either manually, on a single variety or close to harvest-time, when there are few rectifiable agronomic aspects. This study presents a solution based on image analysis for the non-invasive and in-field yield prediction in vines of several varieties, at phenological stages previous to veraison, around 100 days from harvest. To this end, an all-terrain vehicle (ATV) was modified with equipment to autonomously capture images of 30 vine segments of five different varieties at night-time. The images were analysed with a new image analysis algorithm based on mathematical morphology and pixel classification, which yielded overall average Recall and Precision values of 0.8764 and 0.9582, respectively. Finally, a model was calibrated to produce yield predictions from the number of detected berries in images with a Root-Mean-Square-Error per vine of 0.16 kg. This accuracy makes the proposed methodology ideal for early yield prediction as a very helpful tool for the grape and wine industry."
]
} |
1905.00579 | 2943770669 | Time-sync comment (TSC) is a new form of user-interaction review associated with real-time video contents, which contains a user's preferences for videos and therefore well suited as the data source for video recommendations. However, existing review-based recommendation methods ignore the context-dependent (generated by user-interaction), real-time, and time-sensitive properties of TSC data. To bridge the above gaps, in this paper, we use video images and users' TSCs to design an Image-Text Fusion model with a novel Herding Effect Attention mechanism (called ITF-HEA), which can predict users' favorite videos with model-based collaborative filtering. Specifically, in the HEA mechanism, we weight the context information based on the semantic similarities and time intervals between each TSC and its context, thereby considering influences of the herding effect in the model. Experiments show that ITF-HEA is on average 3.78% higher than the state-of-the-art method upon F1-score in baselines. | Time-sync comments (TSCs) are first introduced by Wu @cite_25 . Then, Yang @cite_3 sum up the features of TSCs, which inspire our work. LV @cite_27 propose a video understanding framework to assign temporal labels to highlighted video shots. They are the first to analyze TSCs using neural networks. Recently, Liao @cite_13 present a larger-scale TSC dataset with four-level structures and rich self-labeled attributes, which brings convenience for future research on TSCs. The above methods show that TSC is a kind of data with great potential and development value. | {
"cite_N": [
"@cite_27",
"@cite_13",
"@cite_25",
"@cite_3"
],
"mid": [
"2562476021",
"2793321059",
"2060505035",
"2752791813"
],
"abstract": [
"Recent years have witnessed the boom of online sharing media contents, which raise significant challenges in effective management and retrieval. Though a large amount of efforts have been made, precise retrieval on video shots with certain topics has been largely ignored. At the same time, due to the popularity of novel time-sync comments, or so-called \"bullet-screen comments\", video semantics could be now combined with timestamps to support further research on temporal video labeling. In this paper, we propose a novel video understanding framework to assign temporal labels on highlighted video shots. To be specific, due to the informal expression of bullet-screen comments, we first propose a temporal deep structured semantic model (T-DSSM) to represent comments into semantic vectors by taking advantage of their temporal correlation. Then, video highlights are recognized and labeled via semantic vectors in a supervised way. Extensive experiments on a real-world dataset prove that our framework could effectively label video highlights with a significant margin compared with baselines, which clearly validates the potential of our framework on video understanding, as well as bullet-screen comments interpretation.",
"Time-Sync Comment (TSC) is a type of crowdsourced user review embedded in online video websites, which provides better real-time user interaction than traditional user comment type. Various TSC-related problems and approaches have been studied to improve user experience by taking advantage of special characteristics of TSCs such as strong time reliance. However, there are three major drawbacks to these TSC researches. First, they did not explicitly show advantage of TSC features over the traditional features in terms of users' experience. Second, the experiments were conducted on some inconsistent TSC datasets crawled from different source, which makes the effectiveness of their methods less convincing. Third, the methods were manually evaluated by a limited number of so-called \"experts\" in these experiments, so it is hard for other researchers to obtain the data labels and reproduce the results. In order to overcome these drawbacks, this paper aims to explore the usefulness of TSC data for for the improvement of user experience online by exploiting the TSC pattern inside a new dataset. Specifically, we present a larger-scale TSC dataset with four-level structures and rich self-labeled attributes and formally define a group of TSC-related research problems based on this dataset. The problems are solved by adapted state-of-the-art methods and evaluated through crowdsourced labels in the dataset. The result can be regarded as a baseline for further research.",
"Time-sync video tagging aims to automatically generate tags for each video shot. It can improve the user's experience in previewing a video's timeline structure compared to traditional schemes that tag an entire video clip. In this paper, we propose a new application which extracts time-sync video tags by automatically exploiting crowdsourced comments from video websites such as Nico Nico Douga, where videos are commented on by online crowd users in a time-sync manner. The challenge of the proposed application is that users with bias interact with one another frequently and bring noise into the data, while the comments are too sparse to compensate for the noise. Previous techniques are unable to handle this task well as they consider video semantics independently, which may overfit the sparse comments in each shot and thus fail to provide accurate modeling. To resolve these issues, we propose a novel temporal and personalized topic model that jointly considers temporal dependencies between video semantics, users' interaction in commenting, and users' preferences as prior knowledge. Our proposed model shares knowledge across video shots via users to enrich the short comments, and peels off user interaction and user bias to solve the noisy-comment problem. Log-likelihood analyses and user studies on large datasets show that the proposed model outperforms several state-of-the-art baselines in video tagging quality. Case studies also demonstrate our model's capability of extracting tags from the crowdsourced short and noisy comments.",
"Time-sync comments reveal a new way of extracting the online video tags. However, such time-sync comments have lots of noises due to users' diverse comments, introducing great challenges for accurate and fast video tag extractions. In this paper, we propose an unsupervised video tag extraction algorithm named Semantic Weight-Inverse Document Frequency (SW-IDF). SW-IDF first generates corresponding semantic association graph (SAG) using semantic similarities and timestamps of the time-sync comments. Then it clusters the comments into sub-graphs of different topics and assigns weight to each comment based on SAG. This can clearly differentiate the meaningful comments with the noises. In this way, the noises can be identified, and effectively eliminated. Extensive experiments have shown that SW-IDF can achieve 0.3045 precision and 0.6530 recall in high-density comments; 0.3800 precision and 0.4460 recall in low-density comments. It is the best performance among the existing unsupervised algorithms."
]
} |
1905.00579 | 2943770669 | Time-sync comment (TSC) is a new form of user-interaction review associated with real-time video contents, which contains a user's preferences for videos and therefore well suited as the data source for video recommendations. However, existing review-based recommendation methods ignore the context-dependent (generated by user-interaction), real-time, and time-sensitive properties of TSC data. To bridge the above gaps, in this paper, we use video images and users' TSCs to design an Image-Text Fusion model with a novel Herding Effect Attention mechanism (called ITF-HEA), which can predict users' favorite videos with model-based collaborative filtering. Specifically, in the HEA mechanism, we weight the context information based on the semantic similarities and time intervals between each TSC and its context, thereby considering influences of the herding effect in the model. Experiments show that ITF-HEA is on average 3.78% higher than the state-of-the-art method upon F1-score in baselines. | Personalized recommendation has attracted great attention from both industry and academia. Most of the current state-of-the-art methods are based on CF. McAuley @cite_6 combine latent rating dimensions with latent review topics, which is a review-based method. Diao @cite_15 propose a probabilistic model based on CF and topic modeling, which is an LDA @cite_5 based method and allows capturing the interest distribution of users and the content distribution of movies. He @cite_18 propose a scalable factorization model to incorporate visual signals into predictors of people's opinions, which is the state-of-the-art visual-based model. However, the above recommendation methods are not well-designed for TSCs as they ignore the interactive, real-time, and timeliness properties of TSC data. | {
"cite_N": [
"@cite_5",
"@cite_15",
"@cite_18",
"@cite_6"
],
"mid": [
"1880262756",
"2142972908",
"2963655167",
"2061873838"
],
"abstract": [
"We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model.",
"Recommendation and review sites offer a wealth of information beyond ratings. For instance, on IMDb users leave reviews, commenting on different aspects of a movie (e.g. actors, plot, visual effects), and expressing their sentiments (positive or negative) on these aspects in their reviews. This suggests that uncovering aspects and sentiments will allow us to gain a better understanding of users, movies, and the process involved in generating ratings. The ability to answer questions such as \"Does this user care more about the plot or about the special effects?\" or \"What is the quality of the movie in terms of acting?\" helps us to understand why certain ratings are generated. This can be used to provide more meaningful recommendations. In this work we propose a probabilistic model based on collaborative filtering and topic modeling. It allows us to capture the interest distribution of users and the content distribution for movies; it provides a link between interest and relevance on a per-aspect basis and it allows us to differentiate between positive and negative sentiments on a per-aspect basis. Unlike prior work our approach is entirely unsupervised and does not require knowledge of the aspect specific ratings or genres for inference. We evaluate our model on a live copy crawled from IMDb. Our model offers superior performance by joint modeling. Moreover, we are able to address the cold start problem -- by utilizing the information inherent in reviews our model demonstrates improvement for new users and movies.",
"Modern recommender systems model people and items by discovering or 'teasing apart' the underlying dimensions that encode the properties of items and users' preferences toward them. Critically, such dimensions are uncovered based on user feedback, often in implicit form (such as purchase histories, browsing logs, etc.); in addition, some recommender systems make use of side information, such as product attributes, temporal information, or review text. However one important feature that is typically ignored by existing personalized recommendation and ranking methods is the visual appearance of the items being considered. In this paper we propose a scalable factorization model to incorporate visual signals into predictors of people's opinions, which we apply to a selection of large, real-world datasets. We make use of visual features extracted from product images using (pre-trained) deep networks, on top of which we learn an additional layer that uncovers the visual dimensions that best explain the variation in people's feedback. This not only leads to significantly more accurate personalized ranking methods, but also helps to alleviate cold start issues, and qualitatively to analyze the visual dimensions that influence people's opinions.",
"In order to recommend products to users we must ultimately predict how a user will respond to a new product. To do so we must uncover the implicit tastes of each user as well as the properties of each product. For example, in order to predict whether a user will enjoy Harry Potter, it helps to identify that the book is about wizards, as well as the user's level of interest in wizardry. User feedback is required to discover these latent product and user dimensions. Such feedback often comes in the form of a numeric rating accompanied by review text. However, traditional methods often discard review text, which makes user and product latent dimensions difficult to interpret, since they ignore the very text that justifies a user's rating. In this paper, we aim to combine latent rating dimensions (such as those of latent-factor recommender systems) with latent review topics (such as those learned by topic models like LDA). Our approach has several advantages. Firstly, we obtain highly interpretable textual labels for latent rating dimensions, which helps us to 'justify' ratings with text. Secondly, our approach more accurately predicts product ratings by harnessing the information present in review text; this is especially true for new products and users, who may have too few ratings to model their latent factors, yet may still provide substantial information from the text of even a single review. Thirdly, our discovered topics can be used to facilitate other tasks such as automated genre discovery, and to identify useful and representative reviews."
]
} |
1905.00585 | 2942685597 | We study the idea of information design for inducing prosocial behavior in the context of electricity consumption. We consider a continuum of agents. Each agent has a different intrinsic motivation to reduce her power consumption. Each agent models the power consumption of the others via a distribution. Using this distribution, agents will anticipate their reputational benefit and choose a power consumption by trading off their own intrinsic motivation to do a prosocial action, the cost of this prosocial action and their reputation. Initially, the service provider can provide two types of quantized feedbacks of the power consumption. We study their advantages and disadvantages. For each feedback, we characterize the corresponding mean field equilibrium, using a fixed point equation. Besides computing the mean field equilibrium, we highlight the need for a systematic study of information design, by showing that revealing less information to the society can lead to more prosociality. In the last part of the paper, we introduce the notion of privacy and provide a new quantized feedback, more flexible than the previous ones, that respects agents' privacy concern but at the same time improves prosociality. The results of this study are also applicable to generic resource sharing problems. | This section is dedicated to the related work, especially the main differences between our work and (1) the information design paradigm framed in @cite_8 and (2) the model suggested in @cite_6 . | {
"cite_N": [
"@cite_6",
"@cite_8"
],
"mid": [
"2153112490",
"2592361821"
],
"abstract": [
"We develop a theory of prosocial behavior that combines heterogeneity in individual altruism and greed with concerns for social reputation or self-respect. Rewards or punishments (whether material or image-related) create doubt about the true motive for which good deeds are performed and this \"overjustification effect\" can induce a partial or even net crowding out of prosocial behavior by extrinsic incentives. We also identify settings that are conducive to multiple social norms and those where disclosing one's generosity may backfire. Finally, we analyze the choice by public and private sponsors of incentive levels, their degree of confidentiality and the publicity given to agents' behavior. Sponsor competition is shown to potentially reduce social welfare.",
"Fixing a game with uncertain payoffs, information design identifies the information structure and equilibrium that maximizes the payoff of an information designer. We show how this perspective unifies existing work, including that on communication in games (Myerson (1991)), Bayesian persuasion (Kamenica and Gentzkow (2011)) and some of our own recent work. Information design has a literal interpretation, under which there is a real information designer who can commit to the choice of the best information structure (from her perspective) for a set of participants in a game. We emphasize a metaphorical interpretation, under which the information design problem is used by the analyst to characterize play in the game under many different information structures."
]
} |
1905.00585 | 2942685597 | We study the idea of information design for inducing prosocial behavior in the context of electricity consumption. We consider a continuum of agents. Each agent has a different intrinsic motivation to reduce her power consumption. Each agent models the power consumption of the others via a distribution. Using this distribution, agents will anticipate their reputational benefit and choose a power consumption by trading off their own intrinsic motivation to do a prosocial action, the cost of this prosocial action and their reputation. Initially, the service provider can provide two types of quantized feedbacks of the power consumption. We study their advantages and disadvantages. For each feedback, we characterize the corresponding mean field equilibrium, using a fixed point equation. Besides computing the mean field equilibrium, we highlight the need for a systematic study of information design, by showing that revealing less information to the society can lead to more prosociality. In the last part of the paper, we introduce the notion of privacy and provide a new quantized feedback, more flexible than the previous ones, that respects agents' privacy concern but at the same time improves prosociality. The results of this study are also applicable to generic resource sharing problems. | There is a wide range of applications of information design such as routing games @cite_7 @cite_4 , queueing games @cite_11 @cite_5 , economic applications concerning persuasion @cite_17 @cite_15 , and matching markets @cite_25 . These works do not use the framework of information design defined in @cite_8 or the one defined in our paper. Still, information (size of the queue, roads available) is hidden to improve the efficiency of a given system (delay in a queue, congestion in a city), and in that respect they are related to our work. | {
"cite_N": [
"@cite_11",
"@cite_4",
"@cite_7",
"@cite_8",
"@cite_5",
"@cite_15",
"@cite_25",
"@cite_17"
],
"mid": [
"",
"2232678697",
"2784112925",
"2592361821",
"1604458142",
"2370370515",
"2048101191",
""
],
"abstract": [
"",
"To systematically study the implications of additional information about routes provided to certain users (e.g., via GPS-based route guidance systems), we introduce a new class of congestion games in which users have differing information sets about the available edges and can only use routes consisting of edges in their information set. After defining the notion of Information Constrained Wardrop Equilibrium (ICWE) for this class of congestion games and studying its basic properties, we turn to our main focus: whether additional information can be harmful (in the sense of generating greater equilibrium costs/delays). We formulate this question in the form of Informational Braess' Paradox (IBP), which extends the classic Braess' Paradox in traffic equilibria, and asks whether users receiving additional information can become worse off. We provide a comprehensive answer to this question showing that in any network in the series of linearly independent (SLI) class, which is a strict subset of series-parallel networks, IBP cannot occur, and in any network that is not in the SLI class, there exists a configuration of edge-specific cost functions for which IBP will occur. In the process, we establish several properties of the SLI class of networks, which include the characterization of the complement of the SLI class in terms of embedding a specific set of networks, and also an algorithm which determines whether a graph is SLI in linear time. We further prove that the worst-case inefficiency performance of ICWE is no worse than the standard Wardrop equilibrium.",
"We consider the problem of designing information in games of uncertain congestion, such as traffic networks where road conditions are uncertain. Using the framework of Bayesian persuasion, we show that suitable information structures can mitigate congestion and improve social welfare.",
"Fixing a game with uncertain payoffs, information design identifies the information structure and equilibrium that maximizes the payoff of an information designer. We show how this perspective unifies existing work, including that on communication in games (Myerson (1991)), Bayesian persuasion (Kamenica and Gentzkow (2011)) and some of our own recent work. Information design has a literal interpretation, under which there is a real information designer who can commit to the choice of the best information structure (from her perspective) for a set of participants in a game. We emphasize a metaphorical interpretation, under which the information design problem is used by the analyst to characterize play in the game under many different information structures.",
"We consider both cooperative as well as non-cooperative admission into an M/M/1 queue. The only information available is a signal that says whether the queue size is smaller than some L or not. We first compute the globally optimal and the Nash equilibrium stationary policy as a function of L. We compare the performance to that of full information on the queue size. We identify the L that optimizes the equilibrium performance.",
"A set of players have preferences over a set of outcomes. We consider the problem of an \"information designer\" who can choose an information structure for the players to serve his ends, but has no ability to change the mechanism (or force the players to make particular action choices). We describe a unifying perspective for information design. We consider a simple example of Bayesian persuasion with both an uninformed and informed receiver. We extend information design to many players and relate it to the literature on incomplete information correlated equilibrium.",
"This paper explores information disclosure in matching markets, e.g., the informativeness of transcripts given out by universities. We show that the same, \"benchmark,\" amount of information is disclosed in essentially all equilibria. We then demonstrate that if universities disclose the benchmark amount of information, students and employers will not find it profitable to contract early; if they disclose more, unraveling will occur.",
""
]
} |
1905.00480 | 2942833478 | Large scale electronic structure calculations require modern high performance computing (HPC) resources and, as important, mature HPC applications that can make efficient use of those. Real-space grid-based applications of Density Functional Theory (DFT) using the Projector Augmented Wave method (PAW) can give the same accuracy as DFT codes relying on a plane wave basis set but exhibit an improved scalability on distributed memory machines. The projection operations of the PAW Hamiltonian are known to be the performance critical part due to their limitation by the available memory bandwidth. We investigate the utility of a 3D factorizable basis of Hermite functions for the localized PAW projector functions which allows us to reduce the bandwidth requirements for the grid representation of the projector functions in projection operations. Additional on-the-fly sampling of the 1D basis functions eliminates the memory transfer almost entirely. For a quantitative assessment of the expected memory bandwidth savings we show performance results of a first implementation on GPUs. Finally, we suggest a PAW generation scheme adjusted to the analytically given projector functions. | Already in 1986, Obara and Saika presented the ansatz of 3D products of primitive Cartesian Gaussian (pCG) functions @math in order to evaluate two- and three-center integrals @cite_21 . In fact, there is a strong relation between pCGs and the 1D harmonic oscillator states. The pCG functions form a non-orthogonal basis set. Applying Gram-Schmidt orthogonalization to pCGs leads to 1D Hermite functions. Many works in quantum chemistry, e.g. Schlegel and Frisch @cite_15 , have mentioned the increased basis size @math of the pCGs compared to @math but this has usually been taken as a weakness rather than a strength. | {
"cite_N": [
"@cite_15",
"@cite_21"
],
"mid": [
"1979069170",
"2029228724"
],
"abstract": [
"Spherical Gaussians can be expressed as linear combinations of the appropriate Cartesian Gaussians. General expressions for the transformation coefficients are given. Values for the transformation coefficients are tabulated up to h-type functions. © 1995 John Wiley & Sons, Inc.",
"Recurrence expressions are derived for various types of molecular integrals over Cartesian Gaussian functions by the use of the recurrence formula for three‐center overlap integrals. A number of characteristics inherent in the recursive formalism allow an efficient scheme to be developed for molecular integral computations. With respect to electron repulsion integrals and their derivatives, the present scheme with a significant saving of computer time is found superior to other currently available methods. A long innermost loop incorporated in the present scheme facilitates a fast computation on a vector processing computer."
]
} |
1905.00480 | 2942833478 | Large scale electronic structure calculations require modern high performance computing (HPC) resources and, as important, mature HPC applications that can make efficient use of those. Real-space grid-based applications of Density Functional Theory (DFT) using the Projector Augmented Wave method (PAW) can give the same accuracy as DFT codes relying on a plane wave basis set but exhibit an improved scalability on distributed memory machines. The projection operations of the PAW Hamiltonian are known to be the performance critical part due to their limitation by the available memory bandwidth. We investigate the utility of a 3D factorizable basis of Hermite functions for the localized PAW projector functions which allows us to reduce the bandwidth requirements for the grid representation of the projector functions in projection operations. Additional on-the-fly sampling of the 1D basis functions eliminates the memory transfer almost entirely. For a quantitative assessment of the expected memory bandwidth savings we show performance results of a first implementation on GPUs. Finally, we suggest a PAW generation scheme adjusted to the analytically given projector functions. | The idea of using the SHO basis has probably not received attention as the related basis set has been found not to converge as rapidly as a set of only Gaussian-type orbitals (GTOs) contracted over a set of different spread parameters @math @cite_10 when used as a basis set for quantum chemistry calculations @cite_16 . Note that the subset of radial SHO states with @math are GTOs. | {
"cite_N": [
"@cite_16",
"@cite_10"
],
"mid": [
"2078146328",
"2006144138"
],
"abstract": [
"Various contracted Gaussian basis sets for atoms up to Kr are presented which have been determined by optimizing atomic self‐consistent field ground state energies with respect to all basis set parameters, i.e., orbital exponents and contraction coefficients.",
"The use of gradient techniques for the development of energy-optimized basis sets has been investigated. The region where the energy surface is approximately quadratic with a positive definite Hessian is found to be very small for large basis sets. However, scaled Newton-Raphson methods prove quite effective even when the starting point is outside this region. The analytic calculation of the Hessian is found to be most efficient in terms of computing time."
]
} |
1905.00567 | 2943396575 | The number of posts made by a single user account on a social media platform Twitter in any given time interval is usually quite low. However, there is a subset of users whose volume of posts is much higher than the median. In this paper, we investigate the content diversity and the social neighborhood of these extreme users and others. We define a metric called "interest narrowness", and identify that a subset of extreme users, termed anomalous users, write posts with very low topic diversity, including posts with no text content. Using a few interaction patterns we show that anomalous groups have the strongest within-group interactions, compared to their interaction with others. Further, they exhibit different information sharing behaviors with other anomalous users compared to non-anomalous extreme tweeters. | A 2016 survey @cite_2 covers a wide bandwidth of different approaches for user characterization. The behavioral properties they report range from conscientiousness and extroversion to privacy behavior, deceptive traits and response to social attacks. In contrast to our approach, many of the reported analyses studied in this paper are based on focused user surveys. | {
"cite_N": [
"@cite_2"
],
"mid": [
"2546977514"
],
"abstract": [
"Online social network analysis has attracted great attention with a vast number of users sharing information and availability of APIs that help to crawl online social network data. In this paper, we study the research studies that are helpful for user characterization as online users may not always reveal their true identity or attributes. We especially focused on user attribute determination such as gender and age; user behavior analysis such as motives for deception; mental models that are indicators of user behavior; user categorization such as bots versus humans; and entity matching on different social networks. We believe our summary of analysis of user characterization will provide important insights into researchers and better services to online users."
]
} |
1905.00567 | 2943396575 | The number of posts made by a single user account on a social media platform Twitter in any given time interval is usually quite low. However, there is a subset of users whose volume of posts is much higher than the median. In this paper, we investigate the content diversity and the social neighborhood of these extreme users and others. We define a metric called "interest narrowness", and identify that a subset of extreme users, termed anomalous users, write posts with very low topic diversity, including posts with no text content. Using a few interaction patterns we show that anomalous groups have the strongest within-group interactions, compared to their interaction with others. Further, they exhibit different information sharing behaviors with other anomalous users compared to non-anomalous extreme tweeters. | @cite_0 take a bond-theory based approach and distinguish between social user groups and topical user groups in a network based on features like reciprocity, topicality and activity. Of these, topicality, defined based on a metric called "normalized entropy", measures how much the topics of discussion vary within a group. The higher the entropy, the greater is the variety of terms and, according to the theory, the more social the group is. However, this measure considers the words to be independent, which usually does not hold. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2043383814"
],
"abstract": [
"Social groups play a crucial role in social media platforms because they form the basis for user participation and engagement. Groups are created explicitly by members of the community, but also form organically as members interact. Due to their importance, they have been studied widely (e.g., community detection, evolution, activity, etc.). One of the key questions for understanding how such groups evolve is whether there are different types of groups and how they differ. In Sociology, theories have been proposed to help explain how such groups form. In particular, the common identity and common bond theory states that people join groups based on identity (i.e., interest in the topics discussed) or bond attachment (i.e., social relationships). The theory has been applied qualitatively to small groups to classify them as either topical or social. We use the identity and bond theory to define a set of features to classify groups into those two categories. Using a dataset from Flickr, we extract user-defined groups and automatically-detected groups, obtained from a community detection algorithm. We discuss the process of manual labeling of groups into social or topical and present results of predicting the group label based on the defined features. We directly validate the predictions of the theory showing that the metrics are able to forecast the group type with high accuracy. In addition, we present a comparison between declared and detected groups along topicality and sociality dimensions."
]
} |
1905.00567 | 2943396575 | The number of posts made by a single user account on a social media platform Twitter in any given time interval is usually quite low. However, there is a subset of users whose volume of posts is much higher than the median. In this paper, we investigate the content diversity and the social neighborhood of these extreme users and others. We define a metric called "interest narrowness", and identify that a subset of extreme users, termed anomalous users, write posts with very low topic diversity, including posts with no text content. Using a few interaction patterns we show that anomalous groups have the strongest within-group interactions, compared to their interaction with others. Further, they exhibit different information sharing behaviors with other anomalous users compared to non-anomalous extreme tweeters. | Similar to our notion of anomalous users, @cite_7 investigates the concept of dedicators'', users who transmit information in selected topic areas to the people in their egocentric networks. The concept of dedication is determined by volume, engagement, personal tendencies and topic weight, where personal tendency includes the user's topic diversity as measured by Latent Dirichlet Allocation (LDA), and engagement measures the activity level of conversations. | {
"cite_N": [
"@cite_7"
],
"mid": [
"2193014111"
],
"abstract": [
"Influential people are known to play a key role in diffusing information in a social network. When measuring influence in a social network, most studies have focused on the use of the graph topology representing a network. As a result, popular or famous people tend to be identified as influencers. While they have a potential to influence people with the network connections by propagating information to their friends or followers, it is not clear whether they can indeed serve as an influencer as expected, especially for specific topic areas. In this paper, we introduce the notion of dedicators, which measures the extent to which a user has dedicated to transmit information in selected topic areas to the people in their egocentric networks. To detect topic-based dedicators, we propose a measure that combines both community-level and individual-level factors, which are related to the volume and the engagement level of their conversations and the degree of focus on specific topics. Having analyzed a Twitter conversation data set, we show that dedicators are not correlated with topology-based influencers; users with high in-degree influence tend to have a low dedication level while top dedicators tend to have richer conversations with others, taking advantage of smaller and manageable social networks.",
]
} |
1905.00567 | 2943396575 | The number of posts made by a single user account on a social media platform Twitter in any given time interval is usually quite low. However, there is a subset of users whose volume of posts is much higher than the median. In this paper, we investigate the content diversity and the social neighborhood of these extreme users and others. We define a metric called "interest narrowness", and identify that a subset of extreme users, termed anomalous users, write posts with very low topic diversity, including posts with no text content. Using a few interaction patterns we show that anomalous groups have the strongest within-group interactions, compared to their interaction with others. Further, they exhibit different information sharing behaviors with other anomalous users compared to non-anomalous extreme tweeters. | Diversity of topics within text posts is also analyzed in papers like @cite_6 that perform LDA to compute topics and then determine topic diversity across a user’s posts as the number of distinct probable topics found across all of the user’s posts. | {
"cite_N": [
"@cite_6"
],
"mid": [
"2104247698"
],
"abstract": [
"An increasing portion of modern socializing takes place via online social networks. Members of these communities often play distinct roles that can be deduced from observations of users' online activities. One such activity is the sharing of multimedia, the popularity of which can vary dramatically. Here we discuss our initial analysis of anonymized, scraped data from consenting Facebook users, together with associated demographic and psychological profiles. We present five clusters of users with common observed online behaviors, where these users also show correlated profile characteristics. Finally, we identify some common properties of the most popular multimedia content."
]
} |
1905.00567 | 2943396575 | The number of posts made by a single user account on a social media platform Twitter in any given time interval is usually quite low. However, there is a subset of users whose volume of posts is much higher than the median. In this paper, we investigate the content diversity and the social neighborhood of these extreme users and others. We define a metric called "interest narrowness", and identify that a subset of extreme users, termed anomalous users, write posts with very low topic diversity, including posts with no text content. Using a few interaction patterns we show that anomalous groups have the strongest within-group interactions, compared to their interaction with others. Further, they exhibit different information sharing behaviors with other anomalous users compared to non-anomalous extreme tweeters. | Closer to our application, @cite_10 used a POS-tagged BOW model to analyze Facebook posts for political content and derived a network of correlated concepts. They demonstrated how some advocacy organizations produce social media messages that inspire far-ranging conversation among social media users. Interestingly, their network analysis is based on the co-occurrence of concept terms from which they extract interesting connection patterns that characterize influence modalities for advocacy groups. | {
"cite_N": [
"@cite_10"
],
"mid": [
"2525230286"
],
"abstract": [
"Social media sites are rapidly becoming one of the most important forums for public deliberation about advocacy issues. However, social scientists have not explained why some advocacy organizations produce social media messages that inspire far-ranging conversation among social media users, whereas the vast majority of them receive little or no attention. I argue that advocacy organizations are more likely to inspire comments from new social media audiences if they create “cultural bridges,” or produce messages that combine conversational themes within an advocacy field that are seldom discussed together. I use natural language processing, network analysis, and a social media application to analyze how cultural bridges shaped public discourse about autism spectrum disorders on Facebook over the course of 1.5 years, controlling for various characteristics of advocacy organizations, their social media audiences, and the broader social context in which they interact. I show that organizations that create substantial cultural bridges provoke 2.52 times more comments about their messages from new social media users than those that do not, controlling for these factors. This study thus offers a theory of cultural messaging and public deliberation and computational techniques for text analysis and application-based survey research."
]
} |
1905.00587 | 2970161307 | Coordination recognition and subtle pattern prediction of future trajectories play a significant role when modeling interactive behaviors of multiple agents. Due to the essential property of uncertainty in the future evolution, deterministic predictors are not sufficiently safe and robust. In order to tackle the task of probabilistic prediction for multiple, interactive entities, we propose a coordination and trajectory prediction system (CTPS), which has a hierarchical structure including a macro-level coordination recognition module and a micro-level subtle pattern prediction module which solves a probabilistic generation task. We illustrate two types of representation of the coordination variable: categorized and real-valued, and compare their effects and advantages based on empirical studies. We also bring the ideas of Bayesian deep learning into deep generative models to generate diversified prediction hypotheses. The proposed system is tested on multiple driving datasets in various traffic scenarios, which achieves better performance than baseline approaches in terms of a set of evaluation metrics. The results also show that using categorized coordination can better capture multi-modality and generate more diversified samples than the real-valued coordination, while the latter can generate prediction hypotheses with smaller errors with a sacrifice of sample diversity. Moreover, employing neural networks with weight uncertainty is able to generate samples with larger variance and diversity. | Many research efforts have been devoted to forecasting behaviors and trajectories of on-road vehicles and pedestrians. Many rule-based and model-based approaches were proposed to estimate future states on time-series data, such as variants of Kalman filter (KF) applied to system dynamic models @cite_19 , auto-regressive models @cite_5 and time-series analysis @cite_0 . 
However, these approaches are not able to incorporate the coordination among highly interactive agents due to the limited model flexibility. Therefore, more learning-based approaches were put forward to tackle complicated scenarios. In @cite_14 , a variational neural network was employed to predict future trajectories at an urban intersection. In @cite_3 , a dynamic Bayesian network was designed to predict driver maneuvers in highway scenarios. Other methodologies in the existing literature include hidden Markov models @cite_13 , Gaussian mixture models @cite_22 , Gaussian processes @cite_15 and inverse reinforcement learning @cite_1 @cite_11 . In this paper, we propose a hierarchical prediction system based on Bayesian generative modeling and variational inference, which can approximate the data distribution and capture multi-modalities. | {
"cite_N": [
"@cite_14",
"@cite_22",
"@cite_1",
"@cite_3",
"@cite_0",
"@cite_19",
"@cite_5",
"@cite_15",
"@cite_13",
"@cite_11"
],
"mid": [
"2909736964",
"2964034892",
"2891385160",
"2907326933",
"1517729176",
"2293058369",
"2070681743",
"2079150870",
"2892122874",
"2963787234"
],
"abstract": [
"Predicting the motion of a driver's vehicle is crucial for advanced driving systems, enabling detection of potential risks towards shared control between the driver and automation systems. In this paper, we propose a variational neural network approach that predicts future driver trajectory distributions for the vehicle based on multiple sensors. Our predictor generates both a conditional variational distribution of future trajectories, as well as a confidence estimate for different time horizons. Our approach allows us to handle inherently uncertain situations, and reason about information gain from each input, as well as combine our model with additional predictors, creating a mixture of experts. We show how to augment the variational predictor with a physics-based predictor, and based on their confidence estimations, improve overall system performance. The resulting combined model is aware of the uncertainty associated with its predictions, which can help the vehicle autonomy to make decisions with more confidence. The model is validated on real-world urban driving data collected in multiple locations. This validation demonstrates that our approach improves the prediction error of a physics-based model by 25% while successfully identifying the uncertain cases with 82% accuracy.",
"Accurate and robust tracking of surrounding road participants plays an important role in autonomous driving. However, there is usually no prior knowledge of the number of tracking targets due to object emergence, object disappearance and false alarms. To overcome this challenge, we propose a generic vehicle tracking framework based on modified mixture particle filter, which can make the number of tracking targets adaptive to real-time observations and track all the vehicles within sensor range simultaneously in a uniform architecture without explicit data association. Each object corresponds to a mixture component whose distribution is non-parametric and approximated by particle hypotheses. Most tracking approaches employ vehicle kinematic models as the prediction model. However, it is hard for these models to make proper predictions when sensor measurements are lost or become low quality due to partial or complete occlusions. Moreover, these models are incapable of forecasting sudden maneuvers. To address these problems, we propose to incorporate learning-based behavioral models instead of pure vehicle kinematic models to realize prediction in the prior update of recursive Bayesian state estimation. Two typical driving scenarios including lane keeping and lane change are demonstrated to verify the effectiveness and accuracy of the proposed framework as well as the advantages of employing learning-based models.",
"Autonomous vehicles (AVs) are on the road. To safely and efficiently interact with other road participants, AVs have to accurately predict the behavior of surrounding vehicles and plan accordingly. Such prediction should be probabilistic, to address the uncertainties in human behavior. Such prediction should also be interactive, since the distribution over all possible trajectories of the predicted vehicle depends not only on historical information, but also on future plans of other vehicles that interact with it. To achieve such interaction-aware predictions, we propose a probabilistic prediction approach based on hierarchical inverse reinforcement learning (IRL). First, we explicitly consider the hierarchical trajectory-generation process of human drivers involving both discrete and continuous driving decisions. Based on this, the distribution over all future trajectories of the predicted vehicle is formulated as a mixture of distributions partitioned by the discrete decisions. Then we apply IRL hierarchically to learn the distributions from real human demonstrations. A case study for the ramp-merging driving scenario is provided. The quantitative results show that the proposed approach can accurately predict both the discrete driving decisions such as yield or pass as well as the continuous trajectories.",
"Accurate maneuver prediction for surrounding vehicles enables intelligent vehicles to make safe and socially compliant decisions in advance, thus improving the safety and comfort of the driving. The main contribution of this paper is proposing a practical, high-performance, and low-cost maneuver-prediction approach for intelligent vehicles. Our approach is based on a dynamic Bayesian network, which exploits multiple predictive features, namely, historical states of predicting vehicles, road structures, as well as traffic interactions for inferring the probability of each maneuver. The paper also presents algorithms of feature extraction for the network. Our approach is verified on real traffic data in large-scale publicly available datasets. The results show that our approach can recognize the lane-change maneuvers with an F1 score of 80% and an advanced prediction time of 3.75 s, which greatly improves the performance on prediction compared to other baseline approaches.",
"Part I. Basic Concepts: 1. Introduction: why nonlinear methods? 2. Linear tools and general considerations 3. Phase space methods 4. Determinism and predictability 5. Instability: Lyapunov exponents 6. Self-similarity: dimensions 7. Using nonlinear methods when determinism is weak 8. Selected nonlinear phenomena Part II. Advanced Topics: 9. Advanced embedding methods 10. Chaotic data and noise 11. More about invariant quantities 12. Modeling and forecasting 13. Chaos control 14. Other selected topics Appendix 1. Efficient neighbour searching Appendix 2. Program listings Appendix 3. Description of the experimental data sets.",
"This paper presents an estimation method for 4WD vehicle states based on the Minimum Model Error (MME) criterion combined with the Extended Kalman Filter (EKF). A general 5-input–3-output, 3-state estimation system was established, considering both the arbitrary nonlinear model error and the white Gaussian measurement noise. Aiming at eliminating the estimation error caused by the arbitrary nonlinear model error, the prediction algorithm for the dynamic tire force error was deduced based on the MME criterion, with which the system model can be effectively updated for higher estimation accuracy. The estimation algorithm was applied to a two-motor-driven vehicle during a double-lane-change process with varying speed under simulated experimental conditions. The results showed that the dynamic tire force error could be effectively identified for updating the system model, and higher estimation accuracy of the vehicle states was achieved compared with the traditional EKF estimator.",
"This is a preliminary report on a newly developed simple and practical procedure of statistical identification of predictors by using autoregressive models. The use of autoregressive representation of a stationary time series (or the innovations approach) in the analysis of time series has recently been attracting attentions of many research workers and it is expected that this time domain approach will give answers to many problems, such as the identification of noisy feedback systems, which could not be solved by the direct application of frequency domain approach [1], [2], [3], [9].",
"The article deals with the analysis and interpretation of dynamic scenes typical of urban driving. The key objective is to assess risks of collision for the ego-vehicle. We describe our concept and methods, which we have integrated and tested on our experimental platform on a Lexus car and a driving simulator. The on-board sensors deliver visual, telemetric and inertial data for environment monitoring. The sensor fusion uses our Bayesian Occupancy Filter for a spatio-temporal grid representation of the traffic scene. The underlying probabilistic approach is capable of dealing with uncertainties when modeling the environment as well as detecting and tracking dynamic objects. The collision risks are estimated as stochastic variables and are predicted for a short period ahead with the use of Hidden Markov Models and Gaussian processes. The software implementation takes advantage of our methods, which allow for parallel computation. Our tests have proven the relevance and feasibility of our approach for improving the safety of car driving.",
"Accurate and robust recognition and prediction of traffic situation plays an important role in autonomous driving, which is a prerequisite for risk assessment and effective decision making. Although there exist a lot of works dealing with modeling driver behavior of a single object, it remains a challenge to make predictions for multiple highly interactive agents that react to each other simultaneously. In this work, we propose a generic probabilistic hierarchical recognition and prediction framework which employs a two-layer Hidden Markov Model (TLHMM) to obtain the distribution of potential situations and a learning-based dynamic scene evolution model to sample a group of future trajectories. Instead of predicting motions of a single entity, we propose to get the joint distribution by modeling multiple interactive agents as a whole system. Moreover, due to the decoupling property of the layered structure, our model is suitable for knowledge transfer from simulation to real world applications as well as among different traffic scenarios, which can reduce the computational efforts of training and the demand for a large data amount. A case study of highway ramp merging scenario is demonstrated to verify the effectiveness and accuracy of the proposed framework.",
"Typically, autonomous cars optimize for a combination of safety, efficiency, and driving quality. But as we get better at this optimization, we start seeing behavior go from too conservative to too aggressive. The car's behavior exposes the incentives we provide in its cost function. In this work, we argue for cars that are not optimizing a purely selfish cost, but also try to be courteous to other interactive drivers. We formalize courtesy as a term in the objective that measures the increase in another driver's cost induced by the autonomous car's behavior. Such a courtesy term enables the robot car to be aware of possible irrationality of the human behavior, and plan accordingly. We analyze the effect of courtesy in a variety of scenarios. We find, for example, that courteous robot cars leave more space when merging in front of a human driver. Moreover, we find that such a courtesy term can help explain real human driver behavior on the NGSIM dataset."
]
} |
1905.00587 | 2970161307 | Coordination recognition and subtle pattern prediction of future trajectories play a significant role when modeling interactive behaviors of multiple agents. Due to the essential property of uncertainty in the future evolution, deterministic predictors are not sufficiently safe and robust. In order to tackle the task of probabilistic prediction for multiple, interactive entities, we propose a coordination and trajectory prediction system (CTPS), which has a hierarchical structure including a macro-level coordination recognition module and a micro-level subtle pattern prediction module which solves a probabilistic generation task. We illustrate two types of representation of the coordination variable: categorized and real-valued, and compare their effects and advantages based on empirical studies. We also bring the ideas of Bayesian deep learning into deep generative models to generate diversified prediction hypotheses. The proposed system is tested on multiple driving datasets in various traffic scenarios, which achieves better performance than baseline approaches in terms of a set of evaluation metrics. The results also show that using categorized coordination can better capture multi-modality and generate more diversified samples than the real-valued coordination, while the latter can generate prediction hypotheses with smaller errors with a sacrifice of sample diversity. Moreover, employing neural networks with weight uncertainty is able to generate samples with larger variance and diversity. | The parameters of deep neural networks in most deep learning literature are deterministic, which may not fully represent the model uncertainty. In recent years, more attention has been paid to avoid over-fitting or being over-confident on unseen cases by incorporating distributions over network parameters @cite_17 @cite_2.
In @cite_18 , a stochastic dynamical system was modeled by a Bayesian neural network (BNN) and used for model-based reinforcement learning. Moreover, in real-world applications such as autonomous driving, the surrounding environment is highly dynamic and full of uncertainty. @cite_8 suggested employing Bayesian deep learning to address safety issues. In this work, we treat the parameters in the proposed system as sampled instances from the learned distributions, which improves uncertainty modeling and increases the diversity of prediction hypotheses. | {
"cite_N": [
"@cite_8",
"@cite_18",
"@cite_2",
"@cite_17"
],
"mid": [
"2740379828",
"2404067440",
"2643184551",
""
],
"abstract": [
"",
"We present an algorithm for model-based reinforcement learning that combines Bayesian neural networks (BNNs) with random roll-outs and stochastic optimization for policy learning. The BNNs are trained by minimizing @math -divergences, allowing us to capture complicated statistical patterns in the transition dynamics, e.g. multi-modality and heteroskedasticity, which are usually missed by other common modeling approaches. We illustrate the performance of our method by solving a challenging benchmark where model-based approaches usually fail and by obtaining promising results in a real-world scenario for controlling a gas turbine.",
"Traditional GANs use a deterministic generator function (typically a neural network) to transform a random noise input @math to a sample @math that the discriminator seeks to distinguish. We propose a new GAN called Bayesian Conditional Generative Adversarial Networks (BC-GANs) that use a random generator function to transform a deterministic input @math to a sample @math . Our BC-GANs extend traditional GANs to a Bayesian framework, and naturally handle unsupervised learning, supervised learning, and semi-supervised learning problems. Experiments show that the proposed BC-GANs outperforms the state-of-the-arts.",
""
]
} |
1905.00006 | 2942746079 | The widespread popularization of vehicles has facilitated people's lives over the last decades. However, the emergence of a large number of vehicles poses the critical but challenging problem of vehicle re-identification (reID). Till now, for most vehicle reID algorithms, both the training and testing processes are conducted on the same annotated datasets under supervision. However, even a well-trained model will still suffer a drastic performance drop due to the severe domain bias between the trained dataset and real-world scenes. To address this problem, this paper proposes a domain adaptation framework for vehicle reID (DAVR), which narrows the cross-domain bias by fully exploiting the labeled data from the source domain to adapt to the target domain. DAVR develops an image-to-image translation network named Dual-branch Adversarial Network (DAN), which translates images from the well-labeled source domain into the style of the unlabeled target domain without any annotation, while preserving identity information from the source domain. Then the generated images, with more suitable styles, are employed to train the vehicle reID model through a proposed attention-based feature learning model. Through the proposed framework, the well-trained reID model has better domain adaptation ability for various scenes in real-world situations. Comprehensive experimental results demonstrate that the proposed DAVR achieves excellent performance on both the VehicleID and VeRi-776 datasets. | For the problem of cross-domain reID, the biggest challenge is that it is difficult to maintain image identity information when transferring annotated images from the source to the target domain in an unsupervised manner.
Hence, to overcome this problem, SPGAN @cite_17 was designed to integrate a Siamese network and a CycleGAN, preserving both the self-similarity of an image before and after translation and the domain-dissimilarity between a translated source image and a target image. TJ-AIDL @cite_12 simultaneously learned an attribute-semantic and identity-discriminative feature representation space transferable to any new target domain for reID tasks, without the need to collect new labeled training data from the target domain. PTGAN @cite_30 was designed for person transfer; it employed PSPNet and CycleGAN to generate high-quality person images while preserving person identities and style. | {
"cite_N": [
"@cite_30",
"@cite_12",
"@cite_17"
],
"mid": [
"2963047834",
"2963557071",
"2963000559"
],
"abstract": [
"Although the performance of person Re-Identification (ReID) has been significantly boosted, many challenging issues in real scenarios have not been fully investigated, e.g., the complex scenes and lighting variations, viewpoint and pose changes, and the large number of identities in a camera network. To facilitate the research towards conquering those issues, this paper contributes a new dataset called MSMT17 with many important features, e.g., 1) the raw videos are taken by a 15-camera network deployed in both indoor and outdoor scenes, 2) the videos cover a long period of time and present complex lighting variations, and 3) it contains currently the largest number of annotated identities, i.e., 4,101 identities and 126,441 bounding boxes. We also observe that a domain gap commonly exists between datasets, which essentially causes severe performance drop when training and testing on different datasets. As a result, available training data cannot be effectively leveraged for new testing domains. To relieve the expensive costs of annotating new training samples, we propose a Person Transfer Generative Adversarial Network (PTGAN) to bridge the domain gap. Comprehensive experiments show that the domain gap could be substantially narrowed-down by the PTGAN.",
"Most existing person re-identification (re-id) methods require supervised model learning from a separate large set of pairwise labelled training data for every single camera pair. This significantly limits their scalability and usability in real-world large scale deployments with the need for performing re-id across many camera views. To address this scalability problem, we develop a novel deep learning method for transferring the labelled information of an existing dataset to a new unseen (unlabelled) target domain for person re-id without any supervised learning in the target domain. Specifically, we introduce a Transferable Joint Attribute-Identity Deep Learning (TJ-AIDL) for simultaneously learning an attribute-semantic and identity-discriminative feature representation space transferable to any new (unseen) target domain for re-id tasks without the need for collecting new labelled training data from the target domain (i.e. unsupervised learning in the target domain). Extensive comparative evaluations validate the superiority of this new TJ-AIDL model for unsupervised person re-id over a wide range of state-of-the-art methods on four challenging benchmarks including VIPeR, PRID, Market-1501, and DukeMTMC-ReID.",
"Person re-identification (re-ID) models trained on one domain often fail to generalize well to another. In our attempt, we present a \"learning via translation\" framework. In the baseline, we translate the labeled images from source to target domain in an unsupervised manner. We then train re-ID models with the translated images by supervised methods. Yet, being an essential part of this framework, unsupervised image-image translation suffers from the information loss of source-domain labels during translation. Our motivation is two-fold. First, for each image, the discriminative cues contained in its ID label should be maintained after translation. Second, given the fact that two domains have entirely different persons, a translated image should be dissimilar to any of the target IDs. To this end, we propose to preserve two types of unsupervised similarities, 1) self-similarity of an image before and after translation, and 2) domain-dissimilarity of a translated source image and a target image. Both constraints are implemented in the similarity preserving generative adversarial network (SPGAN) which consists of a Siamese network and a CycleGAN. Through domain adaptation experiments, we show that images generated by SPGAN are more suitable for domain adaptation and yield consistent and competitive re-ID accuracy on two large-scale datasets.",
]
} |
1905.00286 | 2950359401 | Attribute guided face image synthesis aims to manipulate attributes on a face image. Most existing methods for image-to-image translation can either perform a fixed translation between any two image domains using a single attribute or require training data with the attributes of interest for each subject. Therefore, these methods could only train one specific model for each pair of image domains, which limits their ability to deal with more than two domains. Another disadvantage of these methods is that they often suffer from the common problem of mode collapse that degrades the quality of the generated images. To overcome these shortcomings, we propose an attribute-guided face image generation method using a single model, which is capable of synthesizing multiple photo-realistic face images conditioned on the attributes of interest. In addition, we adopt the proposed model to increase the realism of the simulated face images while preserving the face characteristics. Compared to existing models, synthetic face images generated by our method present a good photorealistic quality on several face datasets. Finally, we demonstrate that generated facial images can be used for synthetic data augmentation, and improve the performance of the classifier used for facial expression recognition. | Recently, GAN-based models @cite_13 have achieved impressive results in many image synthesis applications, including image super-resolution @cite_25 , image-to-image translation (pix2pix) @cite_20 and CycleGAN @cite_32 . We summarize the contributions of a few important related works below: | {
"cite_N": [
"@cite_32",
"@cite_13",
"@cite_25",
"@cite_20"
],
"mid": [
"2962793481",
"",
"2963470893",
"2963073614"
],
"abstract": [
"Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples. Our goal is to learn a mapping G : X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping F : Y → X and introduce a cycle consistency loss to push F(G(X)) ≈ X (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach.",
"",
"Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method.",
"We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Moreover, since the release of the pix2pix software associated with this paper, hundreds of Twitter users have posted their own artistic experiments using our system. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either."
]
} |
1905.00286 | 2950359401 | Attribute guided face image synthesis aims to manipulate attributes on a face image. Most existing methods for image-to-image translation can either perform a fixed translation between any two image domains using a single attribute or require training data with the attributes of interest for each subject. Therefore, these methods could only train one specific model for each pair of image domains, which limits their ability to deal with more than two domains. Another disadvantage of these methods is that they often suffer from the common problem of mode collapse that degrades the quality of the generated images. To overcome these shortcomings, we propose an attribute-guided face image generation method using a single model, which is capable of synthesizing multiple photo-realistic face images conditioned on the attributes of interest. In addition, we adopt the proposed model to increase the realism of the simulated face images while preserving the face characteristics. Compared to existing models, synthetic face images generated by our method present a good photorealistic quality on several face datasets. Finally, we demonstrate that generated facial images can be used for synthetic data augmentation, and improve the performance of the classifier used for facial expression recognition. | @cite_19 proposed a domain transfer network to tackle the problem of emoji generation for a given facial image. @cite_7 proposed attribute-guided face generation to translate low-resolution face images to high-resolution face images. @cite_24 proposed a Two-Pathway Generative Adversarial Network (TP-GAN) for photorealistic face synthesis by simultaneously considering local face details and global structures. | {
"cite_N": [
"@cite_24",
"@cite_19",
"@cite_7"
],
"mid": [
"2964337551",
"2553897675",
"2895749211"
],
"abstract": [
"Photorealistic frontal view synthesis from a single face image has a wide range of applications in the field of face recognition. Although data-driven deep learning methods have been proposed to address this problem by seeking solutions from ample face data, this problem is still challenging because it is intrinsically ill-posed. This paper proposes a Two-Pathway Generative Adversarial Network (TP-GAN) for photorealistic frontal view synthesis by simultaneously perceiving global structures and local details. Four landmark located patch networks are proposed to attend to local textures in addition to the commonly used global encoder-decoder network. Except for the novel architecture, we make this ill-posed problem well constrained by introducing a combination of adversarial loss, symmetry loss and identity preserving loss. The combined loss function leverages both frontal face distribution and pre-trained discriminative deep face models to guide an identity preserving inference of frontal views from profiles. Different from previous deep learning methods that mainly rely on intermediate features for recognition, our method directly leverages the synthesized identity preserving image for downstream tasks like face recognition and attribute estimation. Experimental results demonstrate that our method not only presents compelling perceptual results but also outperforms state-of-the-art results on large pose face recognition.",
"We study the problem of transferring a sample in one domain to an analog sample in another domain. Given two related domains, S and T, we would like to learn a generative function G that maps an input sample from S to the domain T, such that the output of a given function f, which accepts inputs in either domain, would remain unchanged. Other than the function f, the training data is unsupervised and consists of a set of samples from each domain. The Domain Transfer Network (DTN) we present employs a compound loss function that includes a multiclass GAN loss, an f-constancy component, and a regularizing component that encourages G to map samples from T to themselves. We apply our method to visual domains including digits and face images and demonstrate its ability to generate convincing novel images of previously unseen entities, while preserving their identity.",
"We are interested in attribute-guided face generation: given a low-res face input image, an attribute vector that can be extracted from a high-res image (attribute image), our new method generates a high-res face image for the low-res input that satisfies the given attributes. To address this problem, we condition the CycleGAN and propose conditional CycleGAN, which is designed to (1) handle unpaired training data because the training low/high-res and high-res attribute images may not necessarily align with each other, and to (2) allow easy control of the appearance of the generated face via the input attributes. We demonstrate high-quality results on the attribute-guided conditional CycleGAN, which can synthesize realistic face images with appearance easily controlled by user-supplied attributes (e.g., gender, makeup, hair color, eyeglasses). Using the attribute image as identity to produce the corresponding conditional vector and by incorporating a face verification network, the attribute-guided network becomes the identity-guided conditional CycleGAN which produces high-quality and interesting results on identity transfer. We demonstrate three applications on identity-guided conditional CycleGAN: identity-preserving face super-resolution, face swapping, and frontal face generation, which consistently show the advantage of our new method."
]
} |
1905.00286 | 2950359401 | Attribute guided face image synthesis aims to manipulate attributes on a face image. Most existing methods for image-to-image translation can either perform a fixed translation between any two image domains using a single attribute or require training data with the attributes of interest for each subject. Therefore, these methods could only train one specific model for each pair of image domains, which limits their ability to deal with more than two domains. Another disadvantage of these methods is that they often suffer from the common problem of mode collapse that degrades the quality of the generated images. To overcome these shortcomings, we propose an attribute-guided face image generation method using a single model, which is capable of synthesizing multiple photo-realistic face images conditioned on the attributes of interest. In addition, we adopt the proposed model to increase the realism of the simulated face images while preserving the face characteristics. Compared to existing models, synthetic face images generated by our method present a good photorealistic quality on several face datasets. Finally, we demonstrate that generated facial images can be used for synthetic data augmentation, and improve the performance of the classifier used for facial expression recognition. | @cite_14 proposed a method that disentangles the attributes (expression and pose) for simultaneous pose-invariant facial expression recognition and face image synthesis. Instead, we seek to learn attribute-invariant information in the latent space by imposing an auxiliary classifier to classify the generated images. @cite_34 proposed a Geometry-Contrastive Generative Adversarial Network (GC-GAN) for transferring continuous emotions across different subjects. However, this requires training data with expression information, which may be expensive to obtain.
Alternatively, our self-supervised approach automatically learns the required factors of variation by transferring the learnt characteristics between different emotion classes. @cite_27 investigated GANs for data augmentation for the task of emotion classification. @cite_9 proposed a multi-task GAN-based network that learns to synthesize frontal face images from profile face images. However, they require paired training data of frontal and profile faces. Instead, we seek to add realism to the synthetic frontal face images without requiring real frontal face images during training. Our method can produce synthesized faces using synthetic frontal faces and real faces with arbitrary poses as input. | {
"cite_N": [
"@cite_9",
"@cite_27",
"@cite_14",
"@cite_34"
],
"mid": [
"2805613995",
"",
"2798553619",
"2785903339"
],
"abstract": [
"Face frontalization is one way to overcome the pose variation problem, which simplifies multi-view recognition into one canonical-view recognition. This paper presents a multi-task learning approach based on the generative adversarial network (GAN) that learns the emotion-preserving representations in the face frontalization framework. Taking advantage of adversarial relationship between the generator and the discriminator in GAN, the generator can frontalize input non-frontal face images into frontal face images while preserving the identity and expression characteristics; in the meantime, it can employ the learnt emotion-preserving representations to predict the expression class label from the input face. The proposed network is optimized by combining both synthesis and classification objective functions to make the learnt representations generative and discriminative simultaneously. Experimental results demonstrate that the proposed face frontalization system is very effective for expression recognition with large head pose variations.",
"",
"Facial expression recognition (FER) is a challenging task due to different expressions under arbitrary poses. Most conventional approaches either perform face frontalization on a non-frontal facial image or learn separate classifiers for each pose. Different from existing methods, in this paper, we propose an end-to-end deep learning model by exploiting different poses and expressions jointly for simultaneous facial image synthesis and pose-invariant facial expression recognition. The proposed model is based on generative adversarial network (GAN) and enjoys several merits. First, the encoder-decoder structure of the generator can learn a generative and discriminative identity representation for face images. Second, the identity representation is explicitly disentangled from both expression and pose variations through the expression and pose codes. Third, our model can automatically generate face images with different expressions under arbitrary poses to enlarge and enrich the training set for FER. Quantitative and qualitative evaluations on both controlled and in-the-wild datasets demonstrate that the proposed algorithm performs favorably against state-of-the-art methods.",
"In this paper, we propose a geometry-contrastive generative adversarial network (GC-GAN) for generating facial expression images conditioned on geometry information. Specifically, given an input face and a target expression designated by a set of facial landmarks, an identity-preserving face can be generated guided by the target expression. In order to embed facial geometry onto a semantic manifold, we incorporate contrastive learning into conditional GANs. Experiment results demonstrate that the manifold is sensitive to the changes of facial geometry both globally and locally. Benefited from the semantic manifold, dynamic smooth transitions between different facial expressions are exhibited via geometry interpolation. Furthermore, our method can also be applied to facial expression transfer even when there exist big differences in face shape between target faces and driving faces."
]
} |
1905.00397 | 2943305281 | Data augmentation is an essential technique for improving the generalization ability of deep learning models. Recently, AutoAugment has been proposed as an algorithm to automatically search for augmentation policies from a dataset and has significantly enhanced performance on many image recognition tasks. However, its search method requires thousands of GPU hours even for a relatively small dataset. In this paper, we propose an algorithm called Fast AutoAugment that finds effective augmentation policies via a more efficient search strategy based on density matching. In comparison to AutoAugment, the proposed algorithm speeds up the search time by orders of magnitude while achieving comparable performance on image recognition tasks with various models and datasets including CIFAR-10, CIFAR-100, SVHN, and ImageNet. | There are many studies on data augmentation, especially for image recognition. On benchmark image datasets such as CIFAR and ImageNet, random crops, flips, rotations, scaling, and color transformations have been used as baseline augmentation methods @cite_1 @cite_12 @cite_41 . Mixup @cite_26 , Cutout @cite_31 , and CutMix @cite_32 have recently been proposed to either blend or mask out image patches randomly, and have further improved performance on image recognition tasks. However, these methods are designed manually based on domain knowledge. | {
"cite_N": [
"@cite_26",
"@cite_41",
"@cite_1",
"@cite_32",
"@cite_31",
"@cite_12"
],
"mid": [
"2765407302",
"267862395",
"2163605009",
"2944223741",
"2746314669",
"2962971773"
],
"abstract": [
"Large deep neural networks are powerful, but exhibit undesirable behaviors such as memorization and sensitivity to adversarial examples. In this work, we propose mixup, a simple learning principle to alleviate these issues. In essence, mixup trains a neural network on convex combinations of pairs of examples and their labels. By doing so, mixup regularizes the neural network to favor simple linear behavior in-between training examples. Our experiments on the ImageNet-2012, CIFAR-10, CIFAR-100, Google commands and UCI datasets show that mixup improves the generalization of state-of-the-art neural network architectures. We also find that mixup reduces the memorization of corrupt labels, increases the robustness to adversarial examples, and stabilizes the training of generative adversarial networks.",
"Deep neural networks have been exhibiting splendid accuracies in many of visual pattern classification problems. Many of the state-of-the-art methods employ a technique known as data augmentation at the training stage. This paper addresses an issue of decision rule for classifiers trained with augmented data. Our method is named as APAC: the Augmented PAttern Classification, which is a way of classification using the optimal decision rule for augmented data learning. Discussion of methods of data augmentation is not our primary focus. We show clear evidences that APAC gives far better generalization performance than the traditional way of class prediction in several experiments. Our convolutional neural network model with APAC achieved a state-of-the-art accuracy on the MNIST dataset among non-ensemble classifiers. Even our multilayer perceptron model beats some of the convolutional models with recently invented stochastic regularization techniques on the CIFAR-10 dataset.",
"We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called \"dropout\" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.",
"Regional dropout strategies have been proposed to enhance the performance of convolutional neural network classifiers. They have proved to be effective for guiding the model to attend on less discriminative parts of objects (e.g. leg as opposed to head of a person), thereby letting the network generalize better and have better object localization capabilities. On the other hand, current methods for regional dropout remove informative pixels on training images by overlaying a patch of either black pixels or random noise. Such removal is not desirable because it leads to information loss and inefficiency during training. We therefore propose the CutMix augmentation strategy: patches are cut and pasted among training images where the ground truth labels are also mixed proportionally to the area of the patches. By making efficient use of training pixels and retaining the regularization effect of regional dropout, CutMix consistently outperforms the state-of-the-art augmentation strategies on CIFAR and ImageNet classification tasks, as well as on the ImageNet weakly-supervised localization task. Moreover, unlike previous augmentation methods, our CutMix-trained ImageNet classifier, when used as a pretrained model, results in consistent performance gains in Pascal detection and MS-COCO image captioning benchmarks. We also show that CutMix improves the model robustness against input corruptions and its out-of-distribution detection performances. Source code and pretrained models are available at https://github.com/clovaai/CutMix-PyTorch .",
"Convolutional neural networks are capable of learning powerful representational spaces, which are necessary for tackling complex learning tasks. However, due to the model capacity required to capture such representations, they are often susceptible to overfitting and therefore require proper regularization in order to generalize well. In this paper, we show that the simple regularization technique of randomly masking out square regions of input during training, which we call cutout, can be used to improve the robustness and overall performance of convolutional neural networks. Not only is this method extremely easy to implement, but we also demonstrate that it can be used in conjunction with existing forms of data augmentation and other regularizers to further improve model performance. We evaluate this method by applying it to current state-of-the-art architectures on the CIFAR-10, CIFAR-100, and SVHN datasets, yielding new state-of-the-art results of 2.56%, 15.20%, and 1.30% test error respectively. Code is available at this https URL",
"Deep convolutional neural networks (DCNNs) have shown remarkable performance in image classification tasks in recent years. Generally, deep neural network architectures are stacks consisting of a large number of convolutional layers, and they perform downsampling along the spatial dimension via pooling to reduce memory usage. Concurrently, the feature map dimension (i.e., the number of channels) is sharply increased at downsampling locations, which is essential to ensure effective performance because it increases the diversity of high-level attributes. This also applies to residual networks and is very closely related to their performance. In this research, instead of sharply increasing the feature map dimension at units that perform downsampling, we gradually increase the feature map dimension at all units to involve as many locations as possible. This design, which is discussed in depth together with our new insights, has proven to be an effective means of improving generalization ability. Furthermore, we propose a novel residual unit capable of further improving the classification accuracy with our new network architecture. Experiments on benchmark CIFAR-10, CIFAR-100, and ImageNet datasets have shown that our network architecture has superior generalization ability compared to the original residual networks. Code is available at https://github.com/jhkim89/PyramidNet."
]
} |
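The mixup and Cutout augmentations cited in this row are simple enough to sketch directly. Below is a minimal NumPy illustration; the function names, the fixed mixing coefficient, and the fixed patch location are our own choices for clarity, not taken from the cited implementations (which sample `lam` from a Beta distribution and the patch center uniformly at random).

```python
import numpy as np

def mixup(x1, y1, x2, y2, lam):
    """mixup (Zhang et al.): convex combination of two examples and their labels."""
    x = lam * x1 + (1.0 - lam) * x2
    y = lam * y1 + (1.0 - lam) * y2
    return x, y

def cutout(img, cy, cx, size):
    """Cutout (DeVries & Taylor): zero out a square patch centered at (cy, cx)."""
    h, w = img.shape[:2]
    y0, y1 = max(0, cy - size // 2), min(h, cy + size // 2)
    x0, x1 = max(0, cx - size // 2), min(w, cx + size // 2)
    out = img.copy()
    out[y0:y1, x0:x1] = 0.0  # masked region is set to zero
    return out

# Toy example: blend an all-ones and an all-zeros image, then mask a patch.
x1, x2 = np.ones((4, 4)), np.zeros((4, 4))
xm, ym = mixup(x1, 1.0, x2, 0.0, lam=0.25)   # pixels and label both become 0.25
masked = cutout(np.ones((4, 4)), cy=2, cx=2, size=2)  # 2x2 patch zeroed
```

In the cited works these operations are applied per mini-batch during training; the point here is only that both are a few lines of array arithmetic, in contrast to the learned policies of AutoAugment.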
1905.00397 | 2943305281 | Data augmentation is an essential technique for improving the generalization ability of deep learning models. Recently, AutoAugment has been proposed as an algorithm to automatically search for augmentation policies from a dataset and has significantly enhanced performance on many image recognition tasks. However, its search method requires thousands of GPU hours even for a relatively small dataset. In this paper, we propose an algorithm called Fast AutoAugment that finds effective augmentation policies via a more efficient search strategy based on density matching. In comparison to AutoAugment, the proposed algorithm speeds up the search time by orders of magnitude while achieving comparable performance on image recognition tasks with various models and datasets including CIFAR-10, CIFAR-100, SVHN, and ImageNet. | Naturally, methods that automatically find data augmentation policies from the data itself have emerged to overcome the performance limitations of cumbersome manual exploration. Smart Augmentation @cite_22 introduced a network that learns to generate augmented data by merging two or more samples in the same class. @cite_0 employed a generative adversarial network (GAN) @cite_17 to generate images that augment datasets. Bayesian DA @cite_36 combined a Monte Carlo expectation maximization algorithm with a GAN to generate data by treating augmented data as missing data points on the distribution of the training set. | {
"cite_N": [
"@cite_0",
"@cite_36",
"@cite_22",
"@cite_17"
],
"mid": [
"2963709863",
"2963552443",
"2604262106",
"2099471712"
],
"abstract": [
"With recent progress in graphics, it has become more tractable to train models on synthetic images, potentially avoiding the need for expensive annotations. However, learning from synthetic images may not achieve the desired performance due to a gap between synthetic and real image distributions. To reduce this gap, we propose Simulated+Unsupervised (S+U) learning, where the task is to learn a model to improve the realism of a simulator's output using unlabeled real data, while preserving the annotation information from the simulator. We develop a method for S+U learning that uses an adversarial network similar to Generative Adversarial Networks (GANs), but with synthetic images as inputs instead of random vectors. We make several key modifications to the standard GAN algorithm to preserve annotations, avoid artifacts, and stabilize training: (i) a self-regularization term, (ii) a local adversarial loss, and (iii) updating the discriminator using a history of refined images. We show that this enables generation of highly realistic images, which we demonstrate both qualitatively and with a user study. We quantitatively evaluate the generated images by training models for gaze estimation and hand pose estimation. We show a significant improvement over using synthetic images, and achieve state-of-the-art results on the MPIIGaze dataset without any labeled real data.",
"Data augmentation is an essential part of the training process applied to deep learning models. The motivation is that a robust training process for deep learning models depends on large annotated datasets, which are expensive to be acquired, stored and processed. Therefore a reasonable alternative is to be able to automatically generate new annotated training samples using a process known as data augmentation. The dominant data augmentation approach in the field assumes that new training samples can be obtained via random geometric or appearance transformations applied to annotated training samples, but this is a strong assumption because it is unclear if this is a reliable generative model for producing new training samples. In this paper, we provide a novel Bayesian formulation to data augmentation, where new annotated training points are treated as missing variables and generated based on the distribution learned from the training set. For learning, we introduce a theoretically sound algorithm --- generalised Monte Carlo expectation maximisation, and demonstrate one possible implementation via an extension of the Generative Adversarial Network (GAN). Classification results on MNIST, CIFAR-10 and CIFAR-100 show the better performance of our proposed method compared to the current dominant data augmentation approach mentioned above --- the results also show that our approach produces better classification results than similar GAN models.",
"A recurring problem faced when training neural networks is that there is typically not enough data to maximize the generalization capability of deep neural networks. There are many techniques to address this, including data augmentation, dropout, and transfer learning. In this paper, we introduce an additional method, which we call smart augmentation, and we show how to use it to increase the accuracy and reduce overfitting on a target network. Smart augmentation works by creating a network that learns how to generate augmented data during the training process of a target network in a way that reduces that network's loss. This allows us to learn augmentations that minimize the error of that network. Smart augmentation has shown the potential to increase accuracy by demonstrably significant measures on all data sets tested. In addition, it has shown potential to achieve similar or improved performance levels with significantly smaller network sizes in a number of tested cases.",
"We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples."
]
} |
1905.00060 | 2964048600 | Foreground object segmentation is a critical step for many image analysis tasks. While automated methods can produce high-quality results, their failures disappoint users in need of practical solutions. We propose a resource allocation framework for predicting how best to allocate a fixed budget of human annotation effort in order to collect higher quality segmentations for a given batch of images and automated methods. The framework is based on a prediction module that estimates the quality of given algorithm-drawn segmentations. We demonstrate the value of the framework for two novel tasks related to predicting how to distribute annotation efforts between algorithms and humans. Specifically, we develop two systems that automatically decide, for a batch of images, when to recruit humans versus computers to create (1) coarse segmentations required to initialize segmentation tools and (2) final, fine-grained segmentations. Experiments demonstrate the advantage of relying on a mix of human and computer efforts over relying on either resource alone for segmenting objects in images coming from three diverse modalities (visible, phase contrast microscopy, and fluorescence microscopy). | Interactive methods address the issue of relying on human input to initialize segmentation tools for every image in a batch @cite_27 @cite_25 @cite_28 . However, unlike our approach, these methods require that all images in the batch show related content (e.g., dogs). Moreover, interactive co-segmentation involves continual back-and-forth with an annotator to incrementally refine the segmentation. Avoiding a continual back-and-forth is particularly important for segmentation tools such as level set methods @cite_36 @cite_3 that take on the order of minutes or more per image to compute a segmentation from the initialization. 
We instead recruit human input at most once per image and consider the more general problem of annotating unrelated, unknown objects in a batch. | {
"cite_N": [
"@cite_28",
"@cite_36",
"@cite_3",
"@cite_27",
"@cite_25"
],
"mid": [
"1971994867",
"2116040950",
"2115664739",
"1964884769",
"2171898823"
],
"abstract": [
"",
"We propose a new model for active contours to detect objects in a given image, based on techniques of curve evolution, Mumford-Shah (1989) functional for segmentation and level sets. Our model can detect objects whose boundaries are not necessarily defined by the gradient. We minimize an energy which can be seen as a particular case of the minimal partition problem. In the level set formulation, the problem becomes a \"mean-curvature flow\"-like evolving the active contour, which will stop on the desired boundary. However, the stopping term does not depend on the gradient of the image, as in the classical active contour models, but is instead related to a particular segmentation of the image. We give a numerical algorithm using finite differences. Finally, we present various experimental results and in particular some examples for which the classical snakes methods based on the gradient are not applicable. Also, the initial curve can be anywhere in the image, and interior contours are automatically detected.",
"In this paper, we propose a natural framework that allows any region-based segmentation energy to be re-formulated in a local way. We consider local rather than global image statistics and evolve a contour based on local information. Localized contours are capable of segmenting objects with heterogeneous feature profiles that would be difficult to capture correctly using a standard global method. The presented technique is versatile enough to be used with any global region-based active contour energy and instill in it the benefits of localization. We describe this framework and demonstrate the localization of three well-known energies in order to illustrate how our framework can be applied to any energy. We then compare each localized energy to its global counterpart to show the improvements that can be achieved. Next, an in-depth study of the behaviors of these energies in response to the degree of localization is given. Finally, we show results on challenging images to illustrate the robust and accurate segmentations that are possible with this new class of active contour models.",
"This paper presents an algorithm for Interactive Co-segmentation of a foreground object from a group of related images. While previous approaches focus on unsupervised co-segmentation, we use successful ideas from the interactive object-cutout literature. We develop an algorithm that allows users to decide what foreground is, and then guide the output of the co-segmentation algorithm towards it via scribbles. Interestingly, keeping a user in the loop leads to simpler and highly parallelizable energy functions, allowing us to work with significantly more images per group. However, unlike the interactive single image counterpart, a user cannot be expected to exhaustively examine all cutouts (from tens of images) returned by the system to make corrections. Hence, we propose iCoseg, an automatic recommendation system that intelligently recommends where the user should scribble next. We introduce and make publicly available the largest co-segmentation dataset yet, the CMU-Cornell iCoseg Dataset, with 38 groups, 643 images, and pixelwise hand-annotated groundtruth. Through machine experiments and real user studies with our developed interface, we show that iCoseg can intelligently recommend regions to scribble on, and users following these recommendations can achieve good quality cutouts with significantly lower time and effort than exhaustively examining all cutouts.",
"In this paper, we address the issue of transducing the object cutout model from an example image to novel image instances. We observe that although object and background are very likely to contain similar colors in natural images, it is much less probable that they share similar color configurations. Motivated by this observation, we propose a local color pattern model to characterize the color configuration in a robust way. Additionally, we propose an edge profile model to modulate the contrast of the image, which enhances edges along object boundaries and attenuates edges inside object or background. The local color pattern model and edge model are integrated in a graph-cut framework. Higher accuracy and improved robustness of the proposed method are demonstrated through experimental comparison with state-of-the-art algorithms."
]
} |
1905.00060 | 2964048600 | Foreground object segmentation is a critical step for many image analysis tasks. While automated methods can produce high-quality results, their failures disappoint users in need of practical solutions. We propose a resource allocation framework for predicting how best to allocate a fixed budget of human annotation effort in order to collect higher quality segmentations for a given batch of images and automated methods. The framework is based on a prediction module that estimates the quality of given algorithm-drawn segmentations. We demonstrate the value of the framework for two novel tasks related to predicting how to distribute annotation efforts between algorithms and humans. Specifically, we develop two systems that automatically decide, for a batch of images, when to recruit humans versus computers to create (1) coarse segmentations required to initialize segmentation tools and (2) final, fine-grained segmentations. Experiments demonstrate the advantage of relying on a mix of human and computer efforts over relying on either resource alone for segmenting objects in images coming from three diverse modalities (visible, phase contrast microscopy, and fluorescence microscopy). | Our aim to minimize human involvement while collecting accurate image annotations is shared by active learning @cite_41 . Specifically, active learners try to identify the most impactful, yet least expensive information necessary to train accurate prediction models @cite_31 @cite_41 @cite_29 . For example, some methods iteratively supplement a training dataset with images predicted to require little human annotation time to label @cite_29 . Other methods actively solicit human feedback to identify features with stronger predictive power than those currently available @cite_31 . Unlike active learners, which leverage human input at training time to improve the utility of a single algorithm, our method leverages human effort at test time to recover from failures by different algorithms. 
| {
"cite_N": [
"@cite_41",
"@cite_31",
"@cite_29"
],
"mid": [
"2903158431",
"2112055291",
"2037820867"
],
"abstract": [
"",
"Active learning provides useful tools to reduce annotation costs without compromising classifier performance. However it traditionally views the supervisor simply as a labeling machine. Recently a new interactive learning paradigm was introduced that allows the supervisor to additionally convey useful domain knowledge using attributes. The learner first conveys its belief about an actively chosen image e.g. \"I think this is a forest, what do you think?\". If the learner is wrong, the supervisor provides an explanation e.g. \"No, this is too open to be a forest\". With access to a pre-trained set of relative attribute predictors, the learner fetches all unlabeled images more open than the query image, and uses them as negative examples of forests to update its classifier. This rich human-machine communication leads to better classification performance. In this work, we propose three improvements over this set-up. First, we incorporate a weighting scheme that instead of making a hard decision reasons about the likelihood of an image being a negative example. Second, we do away with pre-trained attributes and instead learn the attribute models on the fly, alleviating overhead and restrictions of a pre-determined attribute vocabulary. Finally, we propose an active learning framework that accounts for not just the label-but also the attributes-based feedback while selecting the next query image. We demonstrate significant improvement in classification accuracy on faces and shoes. We also collect and make available the largest relative attributes dataset containing 29 attributes of faces from 60 categories.",
"We present an active learning framework that predicts the tradeoff between the effort and information gain associated with a candidate image annotation, thereby ranking unlabeled and partially labeled images according to their expected \"net worth\" to an object recognition system. We develop a multi-label multiple-instance approach that accommodates realistic images containing multiple objects and allows the category-learner to strategically choose what annotations it receives from a mixture of strong and weak labels. Since the annotation cost can vary depending on an image's complexity, we show how to improve the active selection by directly predicting the time required to segment an unlabeled image. Our approach accounts for the fact that the optimal use of manual effort may call for a combination of labels at multiple levels of granularity, as well as accurate prediction of manual effort. As a result, it is possible to learn more accurate category models with a lower total expenditure of annotation effort. Given a small initial pool of labeled data, the proposed method actively improves the category models with minimal manual intervention."
]
} |
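The selection step that the active-learning row above alludes to can be made concrete with the simplest common criterion, uncertainty sampling: query the unlabeled items on which the current model is least confident. This is a generic textbook criterion, not the effort-aware or attribute-based selection of the cited methods; the function name and example probabilities below are illustrative only.

```python
import math

def pick_queries(probs, k):
    """Uncertainty sampling: return the indices of the k unlabeled items whose
    predicted class distributions have the highest entropy (least confidence)."""
    def entropy(p):
        # Shannon entropy in nats; terms with zero probability contribute nothing.
        return -sum(pi * math.log(pi) for pi in p if pi > 0.0)
    ranked = sorted(range(len(probs)), key=lambda i: entropy(probs[i]), reverse=True)
    return ranked[:k]

# The model is most unsure about item 1 (a 50/50 split), so it is queried first.
preds = [[0.9, 0.1], [0.5, 0.5], [0.8, 0.2]]
queries = pick_queries(preds, k=2)  # -> [1, 2]
```

Effort-aware variants such as the cited work additionally weigh each candidate's expected annotation cost against its expected information gain before ranking.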
1905.00060 | 2964048600 | Foreground object segmentation is a critical step for many image analysis tasks. While automated methods can produce high-quality results, their failures disappoint users in need of practical solutions. We propose a resource allocation framework for predicting how best to allocate a fixed budget of human annotation effort in order to collect higher quality segmentations for a given batch of images and automated methods. The framework is based on a prediction module that estimates the quality of given algorithm-drawn segmentations. We demonstrate the value of the framework for two novel tasks related to predicting how to distribute annotation efforts between algorithms and humans. Specifically, we develop two systems that automatically decide, for a batch of images, when to recruit humans versus computers to create (1) coarse segmentations required to initialize segmentation tools and (2) final, fine-grained segmentations. Experiments demonstrate the advantage of relying on a mix of human and computer efforts over relying on either resource alone for segmenting objects in images coming from three diverse modalities (visible, phase contrast microscopy, and fluorescence microscopy). | Our novel tasks rely on a module to estimate the quality of computer-generated segmentations. Related methods find top "object-like" region proposals for a given image @cite_42 @cite_13 @cite_37 @cite_26 @cite_12 . However, most of these methods are inadequate for ranking "object-like" proposals across a batch of images because they only return relative rankings of proposals per image @cite_37 . Another method proposes an absolute segmentation difficulty measure based on the image content alone @cite_39 . However, this method does not account for the different performances that are observed from different segmentation tools when applied to the same image. | {
"cite_N": [
"@cite_37",
"@cite_26",
"@cite_42",
"@cite_39",
"@cite_13",
"@cite_12"
],
"mid": [
"1555385401",
"2963983744",
"1991367009",
"2208417422",
"2017691720",
"130387664"
],
"abstract": [
"We propose a category-independent method to produce a bag of regions and rank them, such that top-ranked regions are likely to be good segmentations of different objects. Our key objectives are completeness and diversity: every object should have at least one good proposed region, and a diverse set should be top-ranked. Our approach is to generate a set of segmentations by performing graph cuts based on a seed region and a learned affinity function. Then, the regions are ranked using structured learning based on various cues. Our experiments on BSDS and PASCAL VOC 2008 demonstrate our ability to find most objects within a small bag of proposed regions.",
"We propose an end-to-end learning framework for segmenting generic objects in videos. Our method learns to combine appearance and motion information to produce pixel level segmentation masks for all prominent objects in videos. We formulate this task as a structured prediction problem and design a two-stream fully convolutional neural network which fuses together motion and appearance in a unified framework. Since large-scale video datasets with pixel level segmentations are problematic, we show how to bootstrap weakly annotated videos together with existing image recognition datasets for training. Through experiments on three challenging video segmentation benchmarks, our method substantially improves the state-of-the-art for segmenting generic (unseen) objects. Code and pre-trained models are available on the project website.",
"We propose a unified approach for bottom-up hierarchical image segmentation and object candidate generation for recognition, called Multiscale Combinatorial Grouping (MCG). For this purpose, we first develop a fast normalized cuts algorithm. We then propose a high-performance hierarchical segmenter that makes effective use of multiscale information. Finally, we propose a grouping strategy that combines our multiscale regions into highly-accurate object candidates by exploring efficiently their combinatorial space. We conduct extensive experiments on both the BSDS500 and on the PASCAL 2012 segmentation datasets, showing that MCG produces state-of-the-art contours, hierarchical regions and object candidates.",
"The heavy use of camera phones and other mobile devices all over the world has produced a market for mobile image analysis, including image segmentation to separate out objects of interest. Automatic image segmentation algorithms, when employed by many different users for multiple applications, cannot guarantee high quality results. Yet interactive algorithms require human effort that may become quite tedious. To reduce human effort and achieve better results, it is worthwhile to know in advance which images are difficult to segment and may require further user interaction or alternate processing. For this purpose, we introduce a new research problem: how to estimate the image segmentation difficulty level without actually performing image segmentation. We propose to formulate it as an estimation problem, and we develop a linear regression model using image features to predict segmentation difficulty level. Different image features, including graytone, color, gradient, and texture features are tested as the predictive variables, and the segmentation algorithm performance measure is the response variable. We use the benchmark images of the Berkeley segmentation dataset with corresponding F-measures to fit, test, and choose the optimal model. Additional standard image datasets are used to further verify the model's applicability to a variety of images. A new feature that combines information from the log histogram of log gradient and the local binary pattern histogram is a good predictor and provides the best balance of predictive performance and model complexity.",
"We present a novel framework for generating and ranking plausible objects hypotheses in an image using bottom-up processes and mid-level cues. The object hypotheses are represented as figure-ground segmentations, and are extracted automatically, without prior knowledge about properties of individual object classes, by solving a sequence of constrained parametric min-cut problems (CPMC) on a regular image grid. We then learn to rank the object hypotheses by training a continuous model to predict how plausible the segments are, given their mid-level region properties. We show that this algorithm significantly outperforms the state of the art for low-level segmentation in the VOC09 segmentation dataset. It achieves the same average best segmentation covering as the best performing technique to date [2], 0.61 when using just the top 7 ranked segments, instead of the full hierarchy in [2]. Our method achieves 0.78 average best covering using 154 segments. In a companion paper [18], we also show that the algorithm achieves state-of-the art results when used in a segmentation-based recognition pipeline.",
"The automatic delineation of the boundaries of organs and other anatomical structures is a key component of many medical image processing systems. In this paper we present a generic learning approach based on a novel space of segmentation features, which can be trained to predict the overlap error and Dice coefficient of an arbitrary organ segmentation without knowing the ground truth delineation. We show the regressor to be much stronger a predictor of these error metrics than the responses of Probabilistic Boosting Classifiers trained on the segmentation boundary. The presented approach not only allows us to build reliable confidence measures and fidelity checks, but also to rank several segmentation hypotheses against each other during online usage of the segmentation algorithm in clinical practice."
]
} |
1905.00060 | 2964048600 | Foreground object segmentation is a critical step for many image analysis tasks. While automated methods can produce high-quality results, their failures disappoint users in need of practical solutions. We propose a resource allocation framework for predicting how best to allocate a fixed budget of human annotation effort in order to collect higher quality segmentations for a given batch of images and automated methods. The framework is based on a prediction module that estimates the quality of given algorithm-drawn segmentations. We demonstrate the value of the framework for two novel tasks related to predicting how to distribute annotation efforts between algorithms and humans. Specifically, we develop two systems that automatically decide, for a batch of images, when to recruit humans versus computers to create (1) coarse segmentations required to initialize segmentation tools and (2) final, fine-grained segmentations. Experiments demonstrate the advantage of relying on a mix of human and computer efforts over relying on either resource alone for segmenting objects in images coming from three diverse modalities (visible, phase contrast microscopy, and fluorescence microscopy). | Our prediction framework most closely aligns with methods that predict the error quality of a given algorithm-drawn segmentation in absolute terms @cite_13 @cite_12 . In particular, we also perform supervised learning to train a regression model. However, prior work trained prediction models using segmentations created by a single popular algorithm (coming from the medical @cite_12 and computer vision @cite_13 communities respectively). In contrast, our model is trained using a diversity of popular algorithms from different communities applied to images coming from three imaging modalities (visible, phase contrast microscopy, fluorescence microscopy). 
Specifically, we populate our training data with 14 algorithm-generated segmentations per image as well as ground truth data to capture a rich diversity of possible segmentation qualities. Our approach consistently predicts well, outperforming a widely-used method @cite_13 on four diverse datasets. Our experiments demonstrate the value of our prediction model for intelligently deciding which among multiple segmentation algorithms is preferable for each image. | {
"cite_N": [
"@cite_13",
"@cite_12"
],
"mid": [
"2017691720",
"130387664"
],
"abstract": [
"We present a novel framework for generating and ranking plausible objects hypotheses in an image using bottom-up processes and mid-level cues. The object hypotheses are represented as figure-ground segmentations, and are extracted automatically, without prior knowledge about properties of individual object classes, by solving a sequence of constrained parametric min-cut problems (CPMC) on a regular image grid. We then learn to rank the object hypotheses by training a continuous model to predict how plausible the segments are, given their mid-level region properties. We show that this algorithm significantly outperforms the state of the art for low-level segmentation in the VOC09 segmentation dataset. It achieves the same average best segmentation covering as the best performing technique to date [2], 0.61 when using just the top 7 ranked segments, instead of the full hierarchy in [2]. Our method achieves 0.78 average best covering using 154 segments. In a companion paper [18], we also show that the algorithm achieves state-of-the art results when used in a segmentation-based recognition pipeline.",
"The automatic delineation of the boundaries of organs and other anatomical structures is a key component of many medical image processing systems. In this paper we present a generic learning approach based on a novel space of segmentation features, which can be trained to predict the overlap error and Dice coefficient of an arbitrary organ segmentation without knowing the ground truth delineation. We show the regressor to be much stronger a predictor of these error metrics than the responses of Probabilistic Boosting Classifiers trained on the segmentation boundary. The presented approach not only allows us to build reliable confidence measures and fidelity checks, but also to rank several segmentation hypotheses against each other during online usage of the segmentation algorithm in clinical practice."
]
} |
1905.00060 | 2964048600 | Foreground object segmentation is a critical step for many image analysis tasks. While automated methods can produce high-quality results, their failures disappoint users in need of practical solutions. We propose a resource allocation framework for predicting how best to allocate a fixed budget of human annotation effort in order to collect higher quality segmentations for a given batch of images and automated methods. The framework is based on a prediction module that estimates the quality of given algorithm-drawn segmentations. We demonstrate the value of the framework for two novel tasks related to predicting how to distribute annotation efforts between algorithms and humans. Specifically, we develop two systems that automatically decide, for a batch of images, when to recruit humans versus computers to create (1) coarse segmentations required to initialize segmentation tools and (2) final, fine-grained segmentations. Experiments demonstrate the advantage of relying on a mix of human and computer efforts over relying on either resource alone for segmenting objects in images coming from three diverse modalities (visible, phase contrast microscopy, and fluorescence microscopy). | More broadly, our work is a contribution to the emerging research field at the intersection of human computation and computer vision to build hybrid systems that take advantage of the strengths of humans and computers together. For example, hybrid systems combine non-expert and algorithm strengths to perform the challenging fine-grained bird classification task typically performed by experts @cite_7 @cite_1 . Another system decides how much human effort to allocate per image in order to segment the diversity of plausible foreground objects in a batch of images @cite_40 . 
While our hybrid system design demonstrates the advantages of combining human and computer efforts, our work differs by deciding how to distribute work between more costly crowd workers and less expensive algorithms for the image segmentation task. | {
"cite_N": [
"@cite_40",
"@cite_1",
"@cite_7"
],
"mid": [
"2963573141",
"2026124029",
"1972939083"
],
"abstract": [
"We propose the ambiguity problem for the foreground object segmentation task and motivate the importance of estimating and accounting for this ambiguity when designing vision systems. Specifically, we distinguish between images which lead multiple annotators to segment different foreground objects (ambiguous) versus minor inter-annotator differences of the same object. Taking images from eight widely used datasets, we crowdsource labeling the images as “ambiguous” or “not ambiguous” to segment in order to construct a new dataset we call STATIC. Using STATIC, we develop a system that automatically predicts which images are ambiguous. Experiments demonstrate the advantage of our prediction system over existing saliency-based methods on images from vision benchmarks and images taken by blind people who are trying to recognize objects in their environment. Finally, we introduce a crowdsourcing system to achieve cost savings for collecting the diversity of all valid “ground truth” foreground object segmentations by collecting extra segmentations only when ambiguity is expected. Experiments show our system eliminates up to 47% of human effort compared to existing crowdsourcing methods with no loss in capturing the diversity of ground truths.",
"Current similarity-based approaches to interactive fine grained categorization rely on learning metrics from holistic perceptual measurements of similarity between objects or images. However, making a single judgment of similarity at the object level can be a difficult or overwhelming task for the human user to perform. Secondly, a single general metric of similarity may not be able to adequately capture the minute differences that discriminate fine-grained categories. In this work, we propose a novel approach to interactive categorization that leverages multiple perceptual similarity metrics learned from localized and roughly aligned regions across images, reporting state-of-the-art results and outperforming methods that use a single nonlocalized similarity metric.",
"We present a visual recognition system for fine-grained visual categorization. The system is composed of a human and a machine working together and combines the complementary strengths of computer vision algorithms and (non-expert) human users. The human users provide two heterogeneous forms of information: object part clicks and answers to multiple choice questions. The machine intelligently selects the most informative question to pose to the user in order to identify the object class as quickly as possible. By leveraging computer vision and analyzing the user responses, the overall amount of human effort required, measured in seconds, is minimized. Our formalism shows how to incorporate many different types of computer vision algorithms into a human-in-the-loop framework, including standard multiclass methods, part-based methods, and localized multiclass and attribute methods. We explore our ideas by building a field guide for bird identification. The experimental results demonstrate the strength of combining ignorant humans with poor-sighted machines: the hybrid system achieves quick and accurate bird identification on a dataset containing 200 bird species."
]
} |
1905.00060 | 2964048600 | Foreground object segmentation is a critical step for many image analysis tasks. While automated methods can produce high-quality results, their failures disappoint users in need of practical solutions. We propose a resource allocation framework for predicting how best to allocate a fixed budget of human annotation effort in order to collect higher quality segmentations for a given batch of images and automated methods. The framework is based on a prediction module that estimates the quality of given algorithm-drawn segmentations. We demonstrate the value of the framework for two novel tasks related to predicting how to distribute annotation efforts between algorithms and humans. Specifically, we develop two systems that automatically decide, for a batch of images, when to recruit humans versus computers to create (1) coarse segmentations required to initialize segmentation tools and (2) final, fine-grained segmentations. Experiments demonstrate the advantage of relying on a mix of human and computer efforts over relying on either resource alone for segmenting objects in images coming from three diverse modalities (visible, phase contrast microscopy, and fluorescence microscopy). | We initially presented these ideas at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2016 @cite_21 . This work offers considerable redesigns to all methods, which in turn yield significant improvements in our experimental results. Specifically, we propose an improved approach for predicting the quality of an algorithm-drawn segmentation by employing a larger training dataset (created using a larger collection of candidate algorithms) with an expanded feature set and an ensemble regression model.
We also introduce a hierarchical, two-stage prediction system that first predicts, for each image, which algorithm best produces the initialization fed to the refinement algorithm, and then predicts the quality of the refinement algorithm's output in order to decide whether to solicit human input. Experimental results reveal that our redesigned methods yield significant improvements in predicting segmentation quality and producing high-quality segmentations. We also expanded our experiments to explore the performance of our models and systems with different feature sets and test datasets in order to learn when, how, and why they succeed versus fail. | {
"cite_N": [
"@cite_21"
],
"mid": [
"2464002812"
],
"abstract": [
"Foreground object segmentation is a critical step for many image analysis tasks. While automated methods can produce high-quality results, their failures disappoint users in need of practical solutions. We propose a resource allocation framework for predicting how best to allocate a fixed budget of human annotation effort in order to collect higher quality segmentations for a given batch of images and automated methods. The framework is based on a proposed prediction module that estimates the quality of given algorithm-drawn segmentations. We demonstrate the value of the framework for two novel tasks related to \"pulling the plug\" on computer and human annotators. Specifically, we implement two systems that automatically decide, for a batch of images, when to replace 1) humans with computers to create coarse segmentations required to initialize segmentation tools and 2) computers with humans to create final, fine-grained segmentations. Experiments demonstrate the advantage of relying on a mix of human and computer efforts over relying on either resource alone for segmenting objects in three diverse datasets representing visible, phase contrast microscopy, and fluorescence microscopy images."
]
} |
1905.00261 | 2942572760 | Virtual Human Simulation has been widely used for different purposes, such as comfort or accessibility analysis. In this paper, we investigate the possibility of using this type of technique to extend the training datasets of pedestrians to be used with machine learning techniques. Our main goal is to verify if Computer Graphics (CG) images of virtual humans with a simplistic rendering can effectively augment datasets used for training machine learning methods. In fact, from a machine learning point of view, there is a need to collect and label large datasets for ground truth, which sometimes demands manual annotation. In addition, finding images and videos with real people and providing ground truth for people detection and counting is not trivial. If CG images, which can have a ground truth automatically generated, can also be used as training data for machine learning techniques for pedestrian detection and counting, it can certainly facilitate and optimize the whole process of event detection. In particular, we propose to parametrize virtual humans using a data-driven approach. Results demonstrated that using the extended datasets with CG images outperforms using only real image sequences. | In recent years, several works have addressed crowd counting @cite_40 @cite_8 @cite_1 @cite_11 @cite_26 @cite_6 for different purposes, such as crowd control, urban planning and video surveillance @cite_32 . This problem consists of determining the number of people in a crowd @cite_24 , and has been addressed over the years using several approaches @cite_33 @cite_19 @cite_31 @cite_8 @cite_1 , such as a Support Vector Machine (SVM) classifier @cite_23 and object detection using a boosted cascade of features @cite_41 . | {
"cite_N": [
"@cite_26",
"@cite_33",
"@cite_8",
"@cite_41",
"@cite_1",
"@cite_32",
"@cite_6",
"@cite_24",
"@cite_19",
"@cite_40",
"@cite_23",
"@cite_31",
"@cite_11"
],
"mid": [
"",
"1992825118",
"",
"2164598857",
"",
"2318253906",
"",
"2343818649",
"",
"2541389513",
"",
"",
""
],
"abstract": [
"",
"This paper describes a pedestrian detection system that integrates image intensity information with motion information. We use a detection style algorithm that scans a detector over two consecutive frames of a video sequence. The detector is trained (using AdaBoost) to take advantage of both motion and appearance information to detect a walking person. Past approaches have built detectors based on appearance information, but ours is the first to combine both sources of information in a single detector. The implementation described runs at about 4 frames/second, detects pedestrians at very small scales (as small as 20 × 15 pixels), and has a very low false positive rate. Our approach builds on the detection work of Viola and Jones. Novel contributions of this paper include: i) development of a representation of image motion which is extremely efficient, and ii) implementation of a state of the art pedestrian detection system which operates on low resolution images under difficult conditions (such as rain and snow).",
"",
"This paper describes a machine learning approach for visual object detection which is capable of processing images extremely rapidly and achieving high detection rates. This work is distinguished by three key contributions. The first is the introduction of a new image representation called the \"integral image\" which allows the features used by our detector to be computed very quickly. The second is a learning algorithm, based on AdaBoost, which selects a small number of critical visual features from a larger set and yields extremely efficient classifiers. The third contribution is a method for combining increasingly more complex classifiers in a \"cascade\" which allows background regions of the image to be quickly discarded while spending more computation on promising object-like regions. The cascade can be viewed as an object specific focus-of-attention mechanism which unlike previous approaches provides statistical guarantees that discarded regions are unlikely to contain the object of interest. In the domain of face detection the system yields detection rates comparable to the best previous systems. Used in real-time applications, the detector runs at 15 frames per second without resorting to image differencing or skin color detection.",
"",
"We propose an integer programming method for estimating the instantaneous count of pedestrians crossing a line of interest (LOI) in a video sequence. Through a line sampling process, the video is first converted into a temporal slice image. Next, the number of people is estimated in a set of overlapping sliding windows on the temporal slice image, using a regression function that maps from local features to a count. Given that the count in a sliding window is the sum of the instantaneous counts in the corresponding time interval, an integer programming method is proposed to recover the number of pedestrians crossing the LOI in each frame. Integrating over a specific time interval yields the cumulative count of pedestrians crossing the line. Compared with current methods for line counting, our proposed approach achieves state-of-the-art performance on several challenging crowd video data sets.",
"",
"Crowd counting is an important task in computer vision, which has many applications in video surveillance. Although the regression-based framework has achieved great improvements for crowd counting, how to improve the discriminative power of image representation is still an open problem. Conventional holistic features used in crowd counting often fail to capture semantic attributes and spatial cues of the image. In this paper, we propose integrating semantic information into learning locality-aware feature (LAF) sets for accurate crowd counting. First, with the help of a convolutional neural network, the original pixel space is mapped onto a dense attribute feature map, where each dimension of the pixelwise feature indicates the probabilistic strength of a certain semantic class. Then, LAF built on the idea of spatial pyramids on neighboring patches is proposed to explore more spatial context and local information. Finally, the traditional vector of locally aggregated descriptor (VLAD) encoding method is extended to a more generalized form weighted-VLAD (W-VLAD) in which diverse coefficient weights are taken into consideration. Experimental results validate the effectiveness of our presented method.",
"",
"Poisson regression models the noisy output of a counting function as a Poisson random variable, with a log-mean parameter that is a linear function of the input vector. In this work, we analyze Poisson regression in a Bayesian setting, by introducing a prior distribution on the weights of the linear function. Since exact inference is analytically unobtainable, we derive a closed-form approximation to the predictive distribution of the model. We show that the predictive distribution can be kernelized, enabling the representation of non-linear log-mean functions. We also derive an approximate marginal likelihood that can be optimized to learn the hyperparameters of the kernel. We then relate the proposed approximate Bayesian Poisson regression to Gaussian processes. Finally, we present experimental results using Bayesian Poisson regression for crowd counting from low-level features.",
"",
"",
""
]
} |
1905.00261 | 2942572760 | Virtual Human Simulation has been widely used for different purposes, such as comfort or accessibility analysis. In this paper, we investigate the possibility of using this type of technique to extend the training datasets of pedestrians to be used with machine learning techniques. Our main goal is to verify if Computer Graphics (CG) images of virtual humans with a simplistic rendering can effectively augment datasets used for training machine learning methods. In fact, from a machine learning point of view, there is a need to collect and label large datasets for ground truth, which sometimes demands manual annotation. In addition, finding images and videos with real people and providing ground truth for people detection and counting is not trivial. If CG images, which can have a ground truth automatically generated, can also be used as training data for machine learning techniques for pedestrian detection and counting, it can certainly facilitate and optimize the whole process of event detection. In particular, we propose to parametrize virtual humans using a data-driven approach. Results demonstrated that using the extended datasets with CG images outperforms using only real image sequences. | @cite_38 , e.g., proposed a Multi-column Convolutional Neural Network (MCNN) that allows input images of arbitrary resolution. However, besides using existing datasets, they also had to collect and annotate a huge dataset in order to verify the effectiveness of their method. Another large-scale dataset with annotated pedestrians for crowd counting algorithms was provided by @cite_15 . @cite_9 combined the Adaboost algorithm and a CNN for head detection and evaluated the proposed method on a manually annotated classroom surveillance dataset. | {
"cite_N": [
"@cite_38",
"@cite_15",
"@cite_9"
],
"mid": [
"2463631526",
"2519786711",
"2413409465"
],
"abstract": [
"This paper aims to develop a method that can accurately estimate the crowd count from an individual image with arbitrary crowd density and arbitrary perspective. To this end, we have proposed a simple but effective Multi-column Convolutional Neural Network (MCNN) architecture to map the image to its crowd density map. The proposed MCNN allows the input image to be of arbitrary size or resolution. By utilizing filters with receptive fields of different sizes, the features learned by each column CNN are adaptive to variations in people head size due to perspective effect or image resolution. Furthermore, the true density map is computed accurately based on geometry-adaptive kernels which do not require knowing the perspective map of the input image. Since existing crowd counting datasets do not adequately cover all the challenging situations considered in our work, we have collected and labelled a large new dataset that includes 1198 images with about 330,000 heads annotated. On this challenging new dataset, as well as all existing datasets, we conduct extensive experiments to verify the effectiveness of the proposed model and method. In particular, with the proposed simple MCNN model, our method outperforms all existing methods. In addition, experiments show that our model, once trained on one dataset, can be readily transferred to a new dataset.",
"In this paper, we propose a deep Convolutional Neural Network (CNN) for counting the number of people across a line-of-interest (LOI) in surveillance videos. It is a challenging problem and has many potential applications. Observing the limitations of temporal slices used by state-of-the-art LOI crowd counting methods, our proposed CNN directly estimates the crowd counts with pairs of video frames as inputs and is trained with pixel-level supervision maps. Such rich supervision information helps our CNN learn more discriminative feature representations. A two-phase training scheme is adopted, which decomposes the original counting problem into two easier sub-problems, estimating crowd density map and estimating crowd velocity map. Learning to solve the sub-problems provides a good initial point for our CNN model, which is then fine-tuned to solve the original counting problem. A new dataset with pedestrian trajectory annotations is introduced for evaluating LOI crowd counting methods and has more annotations than any existing one. Our extensive experiments show that our proposed method is robust to variations of crowd density, crowd velocity, and directions of the LOI, and outperforms state-of-the-art LOI counting methods.",
"People counting is one of the key techniques in video surveillance. This task usually encounters many challenges in crowded environment, such as heavy occlusion, low resolution, imaging viewpoint variability, etc. Motivated by the success of R-CNN (Girshick et al., 2014) on object detection, in this paper we propose a head detection based people counting method combining the Adaboost algorithm and the CNN. Unlike the R-CNN which uses the general object proposals as the inputs of CNN, our method uses the cascade Adaboost algorithm to obtain the head region proposals for CNN, which can greatly reduce the following classification time. Resorting to the strong ability of feature learning of the CNN, it is used as a feature extractor in this paper, instead of as a classifier as its commonly-used strategy. The final classification is done by a linear SVM classifier trained on the features extracted using the CNN feature extractor. Finally, the prior knowledge can be applied to post-process the detection results to increase the precision of head detection and the people count is obtained by counting the head detection results. A real classroom surveillance dataset is used to evaluate the proposed method and experimental results show that this method has good performance and outperforms the baseline methods, including deformable part model and cascade Adaboost methods."
]
} |
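The density–velocity decomposition described in the first abstract above (estimating a crowd density map and a crowd velocity map, then combining them for line-of-interest counting) can be sketched numerically. This is a toy illustration only — the function name, the vertical-line parametrization, and the discretization are assumptions, not the cited paper's code:

```python
import numpy as np

def loi_count(density, vel_x, vel_y, line_x, normal=(1.0, 0.0), dt=1.0):
    """Estimate the net number of people crossing a vertical
    line-of-interest in one frame pair: per-pixel crowd density times
    the velocity component along the line's normal, summed over the
    line's pixels and scaled by the frame interval."""
    nx, ny = normal
    flow_normal = vel_x[:, line_x] * nx + vel_y[:, line_x] * ny
    return float(np.sum(density[:, line_x] * flow_normal) * dt)

h, w = 4, 8
density = np.zeros((h, w))
vel_x = np.zeros((h, w))
vel_y = np.zeros((h, w))
density[2, 3] = 1.0  # one "person" sitting on the line at column 3
vel_x[2, 3] = 1.0    # moving one pixel per frame to the right
count = loi_count(density, vel_x, vel_y, line_x=3)
```

Reversing the velocity flips the sign of the estimate, which is why direction across the LOI matters in the evaluation described above.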
1905.00261 | 2942572760 | Virtual Human Simulation has been widely used for different purposes, such as comfort or accessibility analysis. In this paper, we investigate the possibility of using this type of technique to extend the training datasets of pedestrians to be used with machine learning techniques. Our main goal is to verify whether Computer Graphics (CG) images of virtual humans with a simplistic rendering can be effective for augmenting datasets used for training machine learning methods. In fact, from a machine learning point of view, there is a need to collect and label large datasets for ground truth, which sometimes demands manual annotation. In addition, finding images and videos with real people, and providing ground truth for people detection and counting, is not trivial. If CG images, whose ground truth can be generated automatically, can also be used for training machine learning techniques for pedestrian detection and counting, it can certainly facilitate and optimize the whole process of event detection. In particular, we propose to parametrize virtual humans using a data-driven approach. Results demonstrate that using the extended datasets with CG images outperforms using only real image sequences. | Due to the need for this large amount of training data, @cite_35 performed an augmentation of their training dataset by cropping patches from the multi-scale pyramidal representation of each training image. @cite_18 , on the other hand, claim that the task of manually labeling the datasets is time-consuming and error-prone, besides needing several human operators. Therefore, they proposed a procedural framework called LCrowdV to generate labeled crowd videos. | {
"cite_N": [
"@cite_35",
"@cite_18"
],
"mid": [
"2517615595",
"2963423876"
],
"abstract": [
"Our work proposes a novel deep learning framework for estimating crowd density from static images of highly dense crowds. We use a combination of deep and shallow, fully convolutional networks to predict the density map for a given crowd image. Such a combination is used for effectively capturing both the high-level semantic information (face/body detectors) and the low-level features (blob detectors) that are necessary for crowd counting under large scale variations. As most crowd datasets have limited training samples (",
"We present a novel procedural framework to generate an arbitrary number of labeled crowd videos (LCrowdV). The resulting crowd video datasets are used to design accurate algorithms or training models for crowded scene understanding. Our overall approach is composed of two components: a procedural simulation framework for generating crowd movements and behaviors, and a procedural rendering framework to generate different videos or images. Each video or image is automatically labeled based on the environment, number of pedestrians, density, behavior (agent personality), flow, lighting conditions, viewpoint, noise, etc. Furthermore, we can increase the realism by combining synthetically-generated behaviors with real-world background videos. We demonstrate the benefits of LCrowdV over prior labeled crowd datasets, by augmenting real dataset with it and improving the accuracy in pedestrian detection. LCrowdV has been made available as an online resource."
]
} |
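The augmentation strategy @cite_35 describes — cropping patches from a multi-scale pyramidal representation of each training image — can be sketched as follows. A rough, dependency-free sketch; the scales, patch size, stride, and nearest-neighbour rescaling are illustrative assumptions, not the cited paper's settings:

```python
import numpy as np

def pyramid_patches(image, scales=(1.0, 0.75, 0.5), patch=32, stride=32):
    """Crop fixed-size patches from rescaled copies of an image.

    Rescaling uses simple nearest-neighbour index sampling to stay
    dependency-free; a real pipeline would use a proper resampler.
    """
    patches = []
    h, w = image.shape[:2]
    for s in scales:
        sh, sw = max(int(h * s), patch), max(int(w * s), patch)
        rows = np.arange(sh) * h // sh   # nearest-neighbour row indices
        cols = np.arange(sw) * w // sw   # nearest-neighbour column indices
        scaled = image[rows][:, cols]
        for y in range(0, sh - patch + 1, stride):
            for x in range(0, sw - patch + 1, stride):
                patches.append(scaled[y:y + patch, x:x + patch])
    return patches

img = np.arange(64 * 64, dtype=np.float32).reshape(64, 64)
crops = pyramid_patches(img)
```

Each rescaled copy contributes its own grid of crops, so one labeled image yields several training samples at different effective scales.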
1905.00199 | 2943265406 | In this paper, we present a cyber-physical testbed created to enable a human-robot team to perform a shared task in a shared workspace. The testbed is suitable for the implementation of a tabletop manipulation task, a common human-robot collaboration scenario. The testbed integrates elements that exist in the physical and virtual world. In this work, we report the insights we gathered throughout our exploration in understanding and implementing task planning and execution for a human-robot team. | Task planning is a key ability for intelligent robotic systems, increasing their autonomy through the construction of sequences of actions to achieve a final goal. However, when working in a team, planning the sequence of actions without taking into consideration the actions of the other partner is not enough, especially when collaborating on a common task. Therefore, adaptability and flexibility of planning are crucial in such scenarios. This has been investigated extensively in previous work. Our testbed is inspired by this work @cite_1 , which offers a comprehensive system that identifies and integrates the individual and collaborative human-inspired cognitive skills a robot should have to share space and task with a human partner. The scenario we are interested in is similar to the one investigated by @cite_3 , where a robot and a human manipulate an overlapping set of objects, without using the same object simultaneously. A key element in task planning and execution is the robot's ability to recognize and anticipate the human's actions. Similar to other work in the literature @cite_2 @cite_8 , we recognize and anticipate human actions through the human arm motion within the workspace to enable task planning and execution. | {
"cite_N": [
"@cite_8",
"@cite_1",
"@cite_3",
"@cite_2"
],
"mid": [
"2774716914",
"2482333547",
"2562485623",
"1581800915"
],
"abstract": [
"Allowing a cobot to predict what the human operator is about to do can definitely enhance the effectiveness of human-robot collaboration. This paper addresses the problem of inferring the most likely reaching target of the human hand. The method allows the robot to promptly recognise the intention of the human to reach a certain position within the scene and can be thus used by the controller of the robot to take the optimal decision on what to do. A novel method based on Bayesian statistics has been developed in this work and its applicability in a realistic context has been verified within an industrial use case, consisting of a commercial collaborative robot and a human operator performing a collaborative assembly task.",
"Human-Robot Interaction challenges Artificial Intelligence in many regards: dynamic, partially unknown environments that were not originally designed for robots; a broad variety of situations with rich semantics to understand and interpret; physical interactions with humans that require fine, low-latency yet socially acceptable control strategies; natural and multi-modal communication which mandates common-sense knowledge and the representation of possibly divergent mental models. This article is an attempt to characterise these challenges and to exhibit a set of key decisional issues that need to be addressed for a cognitive robot to successfully share space and tasks with a human. We identify first the needed individual and collaborative cognitive skills: geometric reasoning and situation assessment based on perspective-taking and affordance analysis; acquisition and representation of knowledge models for multiple agents (humans and robots, with their specificities); situated, natural and multi-modal dialogue; human-aware task planning; human-robot joint task achievement. The article discusses each of these abilities, presents working implementations, and shows how they combine in a coherent and original deliberative architecture for human-robot interaction. Supported by experimental results, we eventually show how explicit knowledge management, both symbolic and geometric, proves to be instrumental to richer and more natural human-robot interactions by pushing for pervasive, human-level semantics within the robot's deliberative system.",
"Our human-robot collaboration research aims to improve the fluency and efficiency of interactions between humans and robots when executing a set of tasks in a shared workspace. During human-robot collaboration, a robot and a user must often complete a disjoint set of tasks that use an overlapping set of objects, without using the same object simultaneously. A key challenge is deciding what task the robot should perform next in order to facilitate fluent and efficient collaboration. Most prior work does so by first predicting the human's intended goal, and then selecting actions given that goal. However, it is often difficult, and sometimes impossible, to infer the human's exact goal in real time, and this serial predict-then-act method is not adaptive to changes in human goals. In this paper, we present a system for inferring a probability distribution over human goals, and producing assistance actions given that distribution in real time. The aim is to minimize the disruption caused by the nature of human-robot shared workspace. We extend recent work utilizing Partially Observable Markov Decision Processes (POMDPs) for shared autonomy in order to provide assistance without knowing the exact goal. We evaluate our system in a study with 28 participants, and show that our POMDP model outperforms state of the art predict-then-act models by producing fewer human-robot collisions and less human idling time.",
"Interest in human-robot coexistence, in which humans and robots share a common work volume, is increasing in manufacturing environments. Efficient work coordination requires both awareness of the human pose and a plan of action for both human and robot agents in order to compute robot motion trajectories that synchronize naturally with human motion. In this paper, we present a data-driven approach that synthesizes anticipatory knowledge of both human motions and subsequent action steps in order to predict in real-time the intended target of a human performing a reaching motion. Motion-level anticipatory models are constructed using multiple demonstrations of human reaching motions. We produce a library of motions from human demonstrations, based on a statistical representation of the degrees of freedom of the human arm, using time series analysis, wherein each time step is encoded as a multivariate Gaussian distribution. We demonstrate the benefits of this approach through offline statistical analysis of human motion data. The results indicate a considerable improvement over prior techniques in early prediction, achieving 70% or higher correct classification on average for the first third of the trajectory (< 500msec). We also indicate proof-of-concept through the demonstration of a human-robot cooperative manipulation task performed with a PR2 robot. Finally, we analyze the quality of task-level anticipatory knowledge required to improve prediction performance early in the human motion trajectory."
]
} |
1905.00151 | 2943667773 | Training neural networks for source separation involves presenting a mixture recording at the input of the network and updating network parameters in order to produce an output that resembles the clean source. Consequently, supervised source separation depends on the availability of paired mixture-clean training examples. In this paper, we interpret source separation as a style transfer problem. We present a variational auto-encoder network that exploits the commonality across the domain of mixtures and the domain of clean sounds and learns a shared latent representation across the two domains. Using these cycle-consistent variational auto-encoders, we learn a mapping from the mixture domain to the domain of clean sounds and perform source separation without explicitly supervising with paired training examples. | More recently, GANs have also been successfully used for unsupervised image-to-image translation @cite_24 . To simplify the complexity of GAN-based networks, Liu et al. proposed the UNIT algorithm @cite_27 using Variational Auto-Encoders (VAEs) to learn generative models for the source and target domains. As described in @cite_27 , the source and target domain training examples define a marginal distribution for the source and target domains respectively. Further, the goal of UDT is to estimate the joint distribution of the two domains, given their marginal distributions. This is an ill-posed problem and requires additional assumptions on the joint distribution @cite_21 . To this end, Liu et al. show that a necessary condition for UDT is to have a shared latent space between the source and target domains. This implies that a pair of points having the same content in the source and target domains should be mapped to the same point in the latent space. Since we do not have access to such paired training examples, we use additional consistency terms in the cost-function to enforce a shared latent space as described in . | {
"cite_N": [
"@cite_24",
"@cite_27",
"@cite_21"
],
"mid": [
"2962793481",
"2962947361",
"1535793628"
],
"abstract": [
"Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples. Our goal is to learn a mapping G : X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping F : Y → X and introduce a cycle consistency loss to push F(G(X)) ≈ X (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach.",
"Unsupervised image-to-image translation aims at learning a joint distribution of images in different domains by using images from the marginal distributions in individual domains. Since there exists an infinite set of joint distributions that can arrive at the given marginal distributions, one could infer nothing about the joint distribution from the marginal distributions without additional assumptions. To address the problem, we make a shared-latent space assumption and propose an unsupervised image-to-image translation framework based on Coupled GANs. We compare the proposed framework with competing approaches and present high quality image translation results on various challenging unsupervised image translation tasks, including street scene image translation, animal image translation, and face image translation. We also apply the proposed framework to domain adaptation and achieve state-of-the-art performance on benchmark datasets. Code and additional results are available at https://github.com/mingyuliutw/unit.",
"Preliminaries Discrete Theory Continuous Theory Inequalities Intensity-Governed Processes Diffusions Appendix Frequently Used Notation References Index."
]
} |
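The cycle-consistency constraint central to this record (F(G(x)) ≈ x and G(F(y)) ≈ y for mappings G: X → Y and F: Y → X) can be checked on a toy example, with plain invertible functions standing in for the learned generators:

```python
import numpy as np

def cycle_consistency_loss(G, F, x_batch, y_batch):
    """L1 cycle loss |F(G(x)) - x| + |G(F(y)) - y| for mappings
    G: X -> Y and F: Y -> X, here plain functions on arrays."""
    loss_x = np.abs(F(G(x_batch)) - x_batch).mean()
    loss_y = np.abs(G(F(y_batch)) - y_batch).mean()
    return loss_x + loss_y

# Toy "generators": exact inverses of each other, so the cycle loss vanishes.
G = lambda x: 2.0 * x + 1.0
F = lambda y: (y - 1.0) / 2.0
x = np.linspace(-1, 1, 5)
y = np.linspace(0, 3, 5)
zero_loss = cycle_consistency_loss(G, F, x, y)
# A mismatched backward map leaves a non-zero cycle residual.
bad_loss = cycle_consistency_loss(G, lambda y: y / 2.0, x, y)
```

In CycleGAN/UNIT-style training this term is added to the adversarial losses, penalizing generator pairs that fail to invert each other.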
1904.13387 | 2943626021 | In this paper, we study multi-armed bandit problems in the explore-then-commit setting. In our proposed explore-then-commit setting, the goal is to identify the best arm after a pure experimentation (exploration) phase and exploit it once or for a given finite number of times. We identify that although the arm with the highest expected reward is the most desirable objective for infinite exploitations, it is not necessarily the one that is most probable to have the highest reward in single or finite-time exploitations. Alternatively, we advocate the idea of risk aversion, where the objective is to compete against the arm with the best risk-return trade-off. Then, we propose two algorithms whose objective is to select the arm that is most probable to reward the most. Using a new notion of finite-time exploitation regret, we find an upper bound for the minimum number of experiments before commitment that guarantees an upper bound on the regret. Compared to existing risk-averse bandit algorithms, our algorithms do not rely on hyper-parameters, resulting in more robust behavior in practice, which is verified by the numerical evaluation. | Explore-then-commit is a class of multi-armed bandit problems that has two consecutive phases, named exploration (experimentation) and commitment. The decision maker can arbitrarily explore each arm in the experimentation phase; however, he/she needs to commit to one selected arm in the commitment phase. There are several studies on explore-then-commit bandits in the literature, as follows. @cite_8 studied the optimal number of explorations when cost is incurred in both phases. @cite_27 designed an explore-then-commit algorithm for the case where there is a limited space to record the arm reward statistics. @cite_19 studied the explore-then-commit policy under the assumption that the employed policy must split explorations into a number of batches.
None of these works has addressed risk aversion in explore-then-commit bandits. In the following, we present an overview of risk-averse bandits. | {
"cite_N": [
"@cite_19",
"@cite_27",
"@cite_8"
],
"mid": [
"2762385100",
"2777439363",
""
],
"abstract": [
"Motivated by practical applications, chiefly clinical trials, we study the regret achievable for stochastic bandits under the constraint that the employed policy must split trials into a small number of batches. We propose a simple policy, and show that a very small number of batches gives close to minimax optimal regret bounds. As a byproduct, we derive optimal policies with low switching cost for stochastic bandits.",
"We consider the stochastic bandit problem in the sublinear space setting, where one cannot record the win-loss record for all @math arms. We give an algorithm using @math words of space with regret @math , where @math is the gap between the best arm and arm @math and @math is the gap between the best and the second-best arms. If the rewards are bounded away from @math and @math , this is within an @math factor of the optimum regret possible without space constraints.",
""
]
} |
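The explore-then-commit scheme discussed in this record can be sketched in a few lines: pull every arm a fixed number of times, then commit to the empirically best arm for the rest of the horizon. A minimal sketch with Bernoulli arms; the reward model and parameter names are illustrative assumptions, not the cited algorithms:

```python
import numpy as np

def explore_then_commit(arm_means, m, horizon, rng):
    """Pull every arm m times, then commit to the empirically best arm
    for the remaining horizon - m*k rounds (Bernoulli rewards)."""
    k = len(arm_means)
    totals = np.array([rng.binomial(m, p) for p in arm_means], dtype=float)
    best = int(np.argmax(totals))
    reward = totals.sum() + rng.binomial(horizon - m * k, arm_means[best])
    return best, float(reward)

rng = np.random.default_rng(0)
best, reward = explore_then_commit([0.2, 0.8, 0.5], m=100, horizon=1000, rng=rng)
```

The choice of `m` is exactly the quantity the record's papers analyze: too few pulls risk committing to a suboptimal arm, too many waste the horizon on exploration.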
1904.13215 | 2943722115 | We present techniques for automatically inferring invariant properties of feed-forward neural networks. Our insight is that feed-forward networks should be able to learn a decision logic that is captured in the activation patterns of their neurons. We propose to extract such decision patterns that can be considered as invariants of the network with respect to a certain output behavior. We present techniques to extract input invariants as convex predicates on the input space, and layer invariants that represent features captured in the hidden layers. We apply the techniques to the networks for the MNIST and ACASXU applications. Our experiments highlight the use of invariants in a variety of applications, such as explainability, providing robustness guarantees, detecting adversaries, simplifying proofs, and network distillation. | We survey the works that are the most closely related to ours. In @cite_24 it has been shown that neural networks are highly vulnerable to small adversarial perturbations. Since then, many works have focused on methods for finding adversarial examples. They range from heuristic and optimization-based methods @cite_13 @cite_30 @cite_15 @cite_5 @cite_11 to analysis techniques which can also provide formal guarantees. In the latter category, tools based on constraint solving, interval analysis, or abstract interpretation, such as DLV @cite_2 , Reluplex @cite_6 , AI @math @cite_28 , ReluVal @cite_4 , Neurify @cite_23 and others @cite_29 @cite_9 , are gaining prominence. Our work is complementary as it focuses on inferring likely properties of neural networks, and in principle it can leverage the previous analysis techniques to verify the inferred properties. | {
"cite_N": [
"@cite_30",
"@cite_4",
"@cite_28",
"@cite_29",
"@cite_9",
"@cite_6",
"@cite_24",
"@cite_23",
"@cite_2",
"@cite_5",
"@cite_15",
"@cite_13",
"@cite_11"
],
"mid": [
"2552767274",
"2798356176",
"",
"",
"2777012514",
"",
"1673923490",
"",
"",
"2516574342",
"",
"1945616565",
"2950159395"
],
"abstract": [
"Adversarial examples are malicious inputs designed to fool machine learning models. They often transfer from one model to another, allowing attackers to mount black box attacks without knowledge of the target model's parameters. Adversarial training is the process of explicitly training a model on adversarial examples, in order to make it more robust to attack or to reduce its test error on clean inputs. So far, adversarial training has primarily been applied to small problems. In this research, we apply adversarial training to ImageNet. Our contributions include: (1) recommendations for how to successfully scale adversarial training to large models and datasets, (2) the observation that adversarial training confers robustness to single-step attack methods, (3) the finding that multi-step attack methods are somewhat less transferable than single-step attack methods, so single-step attacks are the best for mounting black-box attacks, and (4) resolution of a "label leaking" effect that causes adversarially trained models to perform better on adversarial examples than on clean examples, because the adversarial example construction process uses the true label and the model can learn to exploit regularities in the construction process.",
"Due to the increasing deployment of Deep Neural Networks (DNNs) in real-world security-critical domains including autonomous vehicles and collision avoidance systems, formally checking security properties of DNNs, especially under different attacker capabilities, is becoming crucial. Most existing security testing techniques for DNNs try to find adversarial examples without providing any formal security guarantees about the non-existence of adversarial examples. Recently, several projects have used different types of Satisfiability Modulo Theory (SMT) solvers to formally check security properties of DNNs. However, all of these approaches are limited by the high overhead caused by the solver. In this paper, we present a new direction for formally checking security properties of DNNs without using SMT solvers. Instead, we leverage interval arithmetic to formally check security properties by computing rigorous bounds on the DNN outputs. Our approach, unlike existing solver-based approaches, is easily parallelizable. We further present symbolic interval analysis along with several other optimizations to minimize overestimations. We design, implement, and evaluate our approach as part of ReluVal, a system for formally checking security properties of Relu-based DNNs. Our extensive empirical results show that ReluVal outperforms Reluplex, a state-of-the-art solver-based system, by 200 times on average for the same security properties. ReluVal is able to prove a security property within 4 hours on a single 8-core machine without GPUs, while Reluplex deemed inconclusive due to timeout (more than 5 days). Our experiments demonstrate that symbolic interval analysis is a promising new direction towards rigorously analyzing different security properties of DNNs.",
"",
"",
"Deep Neural Networks (DNNs) are very popular these days, and are the subject of a very intense investigation. A DNN is made by layers of internal units (or neurons), each of which computes an affine combination of the output of the units in the previous layer, applies a nonlinear operator, and outputs the corresponding value (also known as activation). A commonly-used nonlinear operator is the so-called rectified linear unit (ReLU), whose output is just the maximum between its input value and zero. In this (and other similar cases like max pooling, where the max operation involves more than one input value), one can model the DNN as a 0-1 Mixed Integer Linear Program (0-1 MILP) where the continuous variables correspond to the output values of each unit, and a binary variable is associated with each ReLU to model its yes/no nature. In this paper we discuss the peculiarity of this kind of 0-1 MILP models, and describe an effective bound-tightening technique intended to ease its solution. We also present possible applications of the 0-1 MILP model arising in feature visualization and in the construction of adversarial examples. Preliminary computational results are reported, aimed at investigating (on small DNNs) the computational performance of a state-of-the-art MILP solver when applied to a known test case, namely, hand-written digit recognition.",
"",
"Deep neural networks are highly expressive models that have recently achieved state of the art performance on speech and visual recognition tasks. While their expressiveness is the reason they succeed, it also causes them to learn uninterpretable solutions that could have counter-intuitive properties. In this paper we report two such properties. First, we find that there is no distinction between individual high level units and random linear combinations of high level units, according to various methods of unit analysis. It suggests that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks. Second, we find that deep neural networks learn input-output mappings that are fairly discontinuous to a significant extent. We can cause the network to misclassify an image by applying a certain imperceptible perturbation, which is found by maximizing the network's prediction error. In addition, the specific nature of these perturbations is not a random artifact of learning: the same perturbation can cause a different network, that was trained on a different subset of the dataset, to misclassify the same input.",
"",
"",
"Neural networks provide state-of-the-art results for most machine learning tasks. Unfortunately, neural networks are vulnerable to adversarial examples: given an input @math and any target classification @math , it is possible to find a new input @math that is similar to @math but classified as @math . This makes it difficult to apply neural networks in security-critical areas. Defensive distillation is a recently proposed approach that can take an arbitrary neural network, and increase its robustness, reducing the success rate of current attacks' ability to find adversarial examples from @math to @math . In this paper, we demonstrate that defensive distillation does not significantly increase the robustness of neural networks by introducing three new attack algorithms that are successful on both distilled and undistilled neural networks with @math probability. Our attacks are tailored to three distance metrics used previously in the literature, and when compared to previous adversarial example generation algorithms, our attacks are often much more effective (and never worse). Furthermore, we propose using high-confidence adversarial examples in a simple transferability test we show can also be used to break defensive distillation. We hope our attacks will be used as a benchmark in future defense attempts to create neural networks that resist adversarial examples.",
"",
"Several machine learning models, including neural networks, consistently misclassify adversarial examples---inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset.",
"State-of-the-art deep neural networks have achieved impressive results on many image classification tasks. However, these same architectures have been shown to be unstable to small, well sought, perturbations of the images. Despite the importance of this phenomenon, no effective methods have been proposed to accurately compute the robustness of state-of-the-art deep classifiers to such perturbations on large-scale datasets. In this paper, we fill this gap and propose the DeepFool algorithm to efficiently compute perturbations that fool deep networks, and thus reliably quantify the robustness of these classifiers. Extensive experimental results show that our approach outperforms recent methods in the task of computing adversarial perturbations and making classifiers more robust."
]
} |
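The interval-analysis approach behind ReluVal/Neurify (propagating rigorous input bounds through the network to bound its outputs) reduces, for a single layer, to interval arithmetic over an affine map followed by ReLU. A minimal sketch; the tiny weight matrix is made up for illustration:

```python
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate an input box [lo, hi] through x -> W @ x + b.

    Positive weights pick up the lower/upper input bounds respectively;
    negative weights swap them (standard interval arithmetic)."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    out_lo = W_pos @ lo + W_neg @ hi + b
    out_hi = W_pos @ hi + W_neg @ lo + b
    return out_lo, out_hi

def interval_relu(lo, hi):
    # ReLU is monotone, so it maps bounds to bounds directly.
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

W = np.array([[1.0, -1.0], [2.0, 0.5]])
b = np.array([0.0, -1.0])
lo, hi = interval_affine(np.array([0.0, 0.0]), np.array([1.0, 1.0]), W, b)
lo, hi = interval_relu(lo, hi)
```

Every concrete input in the box is guaranteed to produce an output inside `[lo, hi]`; the symbolic refinements in the cited tools exist to tighten exactly these bounds.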
1904.13215 | 2943722115 | We present techniques for automatically inferring invariant properties of feed-forward neural networks. Our insight is that feed-forward networks should be able to learn a decision logic that is captured in the activation patterns of their neurons. We propose to extract such decision patterns that can be considered as invariants of the network with respect to a certain output behavior. We present techniques to extract input invariants as convex predicates on the input space, and layer invariants that represent features captured in the hidden layers. We apply the techniques to the networks for the MNIST and ACASXU applications. Our experiments highlight the use of invariants in a variety of applications, such as explainability, providing robustness guarantees, detecting adversaries, simplifying proofs, and network distillation. | There is a large literature on invariant inference, including @cite_25 @cite_12 @cite_10 @cite_19 to name just a few, although none of the previous works has addressed neural networks. The programs considered in this literature tend to be small but have complex constructs such as loops, arrays, and pointers. In contrast, neural networks have a simpler structure but can be massive in scale. | {
"cite_N": [
"@cite_19",
"@cite_10",
"@cite_25",
"@cite_12"
],
"mid": [
"2185676247",
"145069693",
"1498946538",
"2110908283"
],
"abstract": [
"Inductive invariants can be robustly synthesized using a learning model where the teacher is a program verifier who instructs the learner through concrete program configurations, classified as positive, negative, and implications. We propose the first learning algorithms in this model with implication counter-examples that are based on machine learning techniques. In particular, we extend classical decision-tree learning algorithms in machine learning to handle implication samples, building new scalable ways to construct small decision trees using statistical measures. We also develop a decision-tree learning algorithm in this model that is guaranteed to converge to the right concept (invariant) if one exists. We implement the learners and an appropriate teacher, and show that the resulting invariant synthesis is efficient and convergent for a large suite of programs.",
"We introduce ICE, a robust learning paradigm for synthesizing invariants, that learns using examples, counter-examples, and implications, and show that it admits honest teachers and strongly convergent mechanisms for invariant synthesis. We observe that existing algorithms for black-box abstract interpretation can be interpreted as ICE-learning algorithms. We develop new strongly convergent ICE-learning algorithms for two domains, one for learning Boolean combinations of numerical invariants for scalar variables and one for quantified invariants for arrays and dynamic lists. We implement these ICE-learning algorithms in a verification tool and show they are robust, practical, and efficient.",
"A static program checker that performs modular checking can check one program module for errors without needing to analyze the entire program. Modular checking requires that each module be accompanied by annotations that specify the module. To help reduce the cost of writing specifications, this paper presents Houdini, an annotation assistant for the modular checker ESC/Java. To infer suitable ESC/Java annotations for a given program, Houdini generates a large number of candidate annotations and uses ESC/Java to verify or refute each of these annotations. The paper describes the design, implementation, and preliminary evaluation of Houdini.",
"Daikon is an implementation of dynamic detection of likely invariants; that is, the Daikon invariant detector reports likely program invariants. An invariant is a property that holds at a certain point or points in a program; these are often used in assert statements, documentation, and formal specifications. Examples include being constant (x=a), non-zero (x 0), being in a range (a@?x@?b), linear relationships (y=ax+b), ordering (x@?y), functions from a library (x=fn(y)), containment (x@?y), sortedness (xissorted), and many more. Users can extend Daikon to check for additional invariants. Dynamic invariant detection runs a program, observes the values that the program computes, and then reports properties that were true over the observed executions. Dynamic invariant detection is a machine learning technique that can be applied to arbitrary data. Daikon can detect invariants in C, C++, Java, and Perl programs, and in record-structured data sources; it is easy to extend Daikon to other applications. Invariants can be useful in program understanding and a host of other applications. Daikon's output has been used for generating test cases, predicting incompatibilities in component integration, automating theorem proving, repairing inconsistent data structures, and checking the validity of data streams, among other tasks. Daikon is freely available in source and binary form, along with extensive documentation, at http: pag.csail.mit.edu daikon ."
]
} |
1904.13072 | 2884652459 | Processing and fusing information among multiple modalities is a very useful technique for achieving high performance in many computer vision problems. In order to tackle multi-modal information more effectively, we introduce a novel framework for multi-modal fusion: Cross-modal Message Passing (CMMP). Specifically, we propose a cross-modal message passing mechanism to fuse a two-stream network for action recognition, which is composed of an appearance modal network (RGB image) and a motion modal (optical flow image) network. The objectives of individual networks in this framework are two-fold: a standard classification objective and a competing objective. The classification objective ensures that each modal network predicts the true action category, while the competing objective encourages each modal network to outperform the other one. We quantitatively show that the proposed CMMP fuses the traditional two-stream network more effectively, and outperforms all existing two-stream fusion methods on UCF-101 and HMDB-51 datasets. | Following the great success of ConvNets for image classification @cite_0 , semantic segmentation @cite_13 , road detection @cite_2 and other computer vision tasks, deep learning algorithms have been used in video-based action recognition as well. Multiple frames in each sequence are fed into ConvNets in @cite_18 , and then pooled using single, late, early and slow fusion across the temporal domain. However, this naive method does not obtain significant improvement compared with those methods based on a single frame, which indicates that motion features are difficult to obtain by simply and directly pooling spatial features from ConvNets. | {
"cite_N": [
"@cite_0",
"@cite_18",
"@cite_13",
"@cite_2"
],
"mid": [
"2163605009",
"2016053056",
"1903029394",
"2736675202"
],
"abstract": [
"We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5 and 17.0 which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overriding in the fully-connected layers we employed a recently-developed regularization method called \"dropout\" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3 , compared to 26.2 achieved by the second-best entry.",
"Convolutional Neural Networks (CNNs) have been established as a powerful class of models for image recognition problems. Encouraged by these results, we provide an extensive empirical evaluation of CNNs on large-scale video classification using a new dataset of 1 million YouTube videos belonging to 487 classes. We study multiple approaches for extending the connectivity of a CNN in time domain to take advantage of local spatio-temporal information and suggest a multiresolution, foveated architecture as a promising way of speeding up the training. Our best spatio-temporal networks display significant performance improvements compared to strong feature-based baselines (55.3 to 63.9 ), but only a surprisingly modest improvement compared to single-frame models (59.3 to 60.9 ). We further study the generalization performance of our best model by retraining the top layers on the UCF-101 Action Recognition dataset and observe significant performance improvements compared to the UCF-101 baseline model (63.3 up from 43.9 ).",
"Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet [20], the VGG net [31], and GoogLeNet [32]) into fully convolutional networks and transfer their learned representations by fine-tuning [3] to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes less than one fifth of a second for a typical image.",
"Road detection from the perspective of moving vehicles is a challenging issue in autonomous driving. Recently, many deep learning methods spring up for this task because they can extract high-level local features to find road regions from raw RGB data, such as Convolutional Neural Networks (CNN) and Fully Convolutional Networks (FCN). However, how to detect the boundary of road accurately is still an intractable problem. In this paper, we propose a siamesed fully convolutional network (named as “s-FCN-loc”) based on VGG-net architecture, which is able to consider RGB-channel, semantic contour and location prior simultaneously to segment road region elaborately. To be specific, the s-FCN-loc has two streams to process original RGB images and contour maps respectively. At the same time, the location prior is directly appended to the last feature map to promote the final detection performance. Experiments demonstrate that the proposed s-FCN-loc can learn more discriminative features of road boundaries and converge 30 faster than the original FCN during the training stage. Finally, the proposed approach is evaluated on KITTI road detection benchmark, and achieves a competitive result."
]
} |
1904.13072 | 2884652459 | Processing and fusing information among multiple modalities is a very useful technique for achieving high performance in many computer vision problems. In order to tackle multi-modal information more effectively, we introduce a novel framework for multi-modal fusion: Cross-modal Message Passing (CMMP). Specifically, we propose a cross-modal message passing mechanism to fuse a two-stream network for action recognition, which is composed of an appearance modal network (RGB image) and a motion modal (optical flow image) network. The objectives of individual networks in this framework are two-fold: a standard classification objective and a competing objective. The classification objective ensures that each modal network predicts the true action category, while the competing objective encourages each modal network to outperform the other one. We quantitatively show that the proposed CMMP fuses the traditional two-stream network more effectively, and outperforms all existing two-stream fusion methods on UCF-101 and HMDB-51 datasets. | In light of this, Karen @cite_19 propose a two-stream ConvNet architecture which incorporates spatial and temporal networks, and demonstrate that optical flow is an effective way to model motion information. Since this scheme improves the accuracy of action recognition significantly, the two-stream architecture has been followed by many other researchers. For example, Bowen @cite_12 accelerate the original two-stream method by replacing optical flow with motion vector, which can be obtained directly from compressed videos without extra calculation. Recently, Limin @cite_5 introduce a temporal segment network for long-range temporal structure modeling, which achieves the best performance on benchmark datasets so far. In detail, a sparse temporal sampling strategy is combined with video-level supervision to enable efficient and effective learning using the whole action video.
Despite the success of those methods, all of them simply fuse the final probability scores from the two-stream ConvNets, which fails to take advantage of the complementarity between appearance and motion information when training the network. | {
"cite_N": [
"@cite_19",
"@cite_5",
"@cite_12"
],
"mid": [
"",
"2507009361",
"2342776425"
],
"abstract": [
"",
"Deep convolutional networks have achieved great success for visual recognition in still images. However, for action recognition in videos, the advantage over traditional methods is not so evident. This paper aims to discover the principles to design effective ConvNet architectures for action recognition in videos and learn these models given limited training samples. Our first contribution is temporal segment network (TSN), a novel framework for video-based action recognition. which is based on the idea of long-range temporal structure modeling. It combines a sparse temporal sampling strategy and video-level supervision to enable efficient and effective learning using the whole action video. The other contribution is our study on a series of good practices in learning ConvNets on video data with the help of temporal segment network. Our approach obtains the state-the-of-art performance on the datasets of HMDB51 ( ( 69.4 , )) and UCF101 ( ( 94.2 , )). We also visualize the learned ConvNet models, which qualitatively demonstrates the effectiveness of temporal segment network and the proposed good practices (Models and code at https: github.com yjxiong temporal-segment-networks).",
"The deep two-stream architecture [23] exhibited excellent performance on video based action recognition. The most computationally expensive step in this approach comes from the calculation of optical flow which prevents it to be real-time. This paper accelerates this architecture by replacing optical flow with motion vector which can be obtained directly from compressed videos without extra calculation. However, motion vector lacks fine structures, and contains noisy and inaccurate motion patterns, leading to the evident degradation of recognition performance. Our key insight for relieving this problem is that optical flow and motion vector are inherent correlated. Transferring the knowledge learned with optical flow CNN to motion vector CNN can significantly boost the performance of the latter. Specifically, we introduce three strategies for this, initialization transfer, supervision transfer and their combination. Experimental results show that our method achieves comparable recognition performance to the state-of-the-art, while our method can process 390.7 frames per second, which is 27 times faster than the original two-stream method."
]
} |
1904.13072 | 2884652459 | Processing and fusing information among multiple modalities is a very useful technique for achieving high performance in many computer vision problems. In order to tackle multi-modal information more effectively, we introduce a novel framework for multi-modal fusion: Cross-modal Message Passing (CMMP). Specifically, we propose a cross-modal message passing mechanism to fuse a two-stream network for action recognition, which is composed of an appearance modal network (RGB image) and a motion modal (optical flow image) network. The objectives of individual networks in this framework are two-fold: a standard classification objective and a competing objective. The classification objective ensures that each modal network predicts the true action category, while the competing objective encourages each modal network to outperform the other one. We quantitatively show that the proposed CMMP fuses the traditional two-stream network more effectively, and outperforms all existing two-stream fusion methods on UCF-101 and HMDB-51 datasets. | In order to fuse the appearance and motion information more effectively, feature-level fusion is a straightforward method. Christoph @cite_11 propose a temporal fusion layer that incorporates 3D convolutions and pooling to fuse the features from the two-stream network, and demonstrate its superiority compared with several simple strategies such as sum, max and concatenation. To leverage the relation between RGB and the corresponding optical flow, Lin @cite_10 extend the traditional LSTM by learning independent hidden state transitions of memory cells for individual spatial locations; it is trained on both RGB and optical flow, and the learned weights are shared by both modalities. Moreover, convolutional 3D (C3D) @cite_7 is another way to exploit spatial and temporal information jointly, which learns 3D convolution kernels in both space and time based on the straightforward extension of the established 2D CNNs.
However, the performance of C3D is inferior to the two-stream architecture method, which verifies the advantage of the two-stream architecture in another way. Therefore, the two-stream architecture is the baseline of the proposed method, and a novel message passing mechanism is introduced to harness the appearance and motion information during training and testing. | {
"cite_N": [
"@cite_10",
"@cite_7",
"@cite_11"
],
"mid": [
"2746131160",
"1522734439",
"2342662179"
],
"abstract": [
"Human actions captured in video sequences are three-dimensional signals characterizing visual appearance and motion dynamics. To learn action patterns, existing methods adopt Convolutional and or Recurrent Neural Networks (CNNs and RNNs). CNN based methods are effective in learning spatial appearances, but are limited in modeling long-term motion dynamics. RNNs, especially Long Short-Term Memory (LSTM), are able to learn temporal motion dynamics. However, naively applying RNNs to video sequences in a convolutional manner implicitly assumes that motions in videos are stationary across different spatial locations. This assumption is valid for short-term motions but invalid when the duration of the motion is long. In this work, we propose Lattice-LSTM (L2STM), which extends LSTM by learning independent hidden state transitions of memory cells for individual spatial locations. This method effectively enhances the ability to model dynamics across time and addresses the non-stationary issue of long-term motion dynamics without significantly increasing the model complexity. Additionally, we introduce a novel multi-modal training procedure for training our network. Unlike traditional two-stream architectures which use RGB and optical flow information as input, our two-stream model leverages both modalities to jointly train both input gates and both forget gates in the network rather than treating the two streams as separate entities with no information about the other. We apply this end-to-end system to benchmark datasets (UCF-101 and HMDB-51) of human action recognition. Experiments show that on both datasets, our proposed method outperforms all existing ones that are based on LSTM and or CNNs of similar model complexities.",
"We propose a simple, yet effective approach for spatiotemporal feature learning using deep 3-dimensional convolutional networks (3D ConvNets) trained on a large scale supervised video dataset. Our findings are three-fold: 1) 3D ConvNets are more suitable for spatiotemporal feature learning compared to 2D ConvNets, 2) A homogeneous architecture with small 3x3x3 convolution kernels in all layers is among the best performing architectures for 3D ConvNets, and 3) Our learned features, namely C3D (Convolutional 3D), with a simple linear classifier outperform state-of-the-art methods on 4 different benchmarks and are comparable with current best methods on the other 2 benchmarks. In addition, the features are compact: achieving 52.8 accuracy on UCF101 dataset with only 10 dimensions and also very efficient to compute due to the fast inference of ConvNets. Finally, they are conceptually very simple and easy to train and use.",
"Recent applications of Convolutional Neural Networks (ConvNets) for human action recognition in videos have proposed different solutions for incorporating the appearance and motion information. We study a number of ways of fusing ConvNet towers both spatially and temporally in order to best take advantage of this spatio-temporal information. We make the following findings: (i) that rather than fusing at the softmax layer, a spatial and temporal network can be fused at a convolution layer without loss of performance, but with a substantial saving in parameters, (ii) that it is better to fuse such networks spatially at the last convolutional layer than earlier, and that additionally fusing at the class prediction layer can boost accuracy, finally (iii) that pooling of abstract convolutional features over spatiotemporal neighbourhoods further boosts performance. Based on these studies we propose a new ConvNet architecture for spatiotemporal fusion of video snippets, and evaluate its performance on standard benchmarks where this architecture achieves state-of-the-art results."
]
} |
1904.13102 | 2943494813 | Facial pose estimation has gained a lot of attention in many practical applications, such as human-robot interaction, gaze estimation and driver monitoring. Meanwhile, end-to-end deep learning-based facial pose estimation is becoming more and more popular. However, facial pose estimation suffers from a key challenge: the lack of sufficient training data for many poses, especially for large poses. Inspired by the observation that faces under close poses look similar, we reformulate facial pose estimation as a label distribution learning problem, considering each face image as an example associated with a Gaussian label distribution rather than a single label, and construct a convolutional neural network which is trained with a multi-loss function on the AFLW dataset and the 300WLP dataset to predict the facial poses directly from a color image. Extensive experiments are conducted on several popular benchmarks, including AFLW2000, BIWI, AFLW and AFW, where our approach shows a significant advantage over other state-of-the-art methods. | Face detector arrays @cite_6 @cite_8 , whose idea is to train multiple face detectors for different facial poses, once became popular following the success of frontal face detection @cite_14 @cite_37 @cite_13 . The method in @cite_8 uses a sequence of five multi-view face detectors to estimate facial pose. Evidently, one face detector is required for each discrete facial pose, and it is difficult to implement a real-time facial pose estimator with a large detector array. | {
"cite_N": [
"@cite_37",
"@cite_14",
"@cite_8",
"@cite_6",
"@cite_13"
],
"mid": [
"2124351082",
"2164598857",
"1734489760",
"2131227845",
"2566703758"
],
"abstract": [
"We investigate the application of Support Vector Machines (SVMs) in computer vision. SVM is a learning technique developed by V. Vapnik and his team (AT&T Bell Labs., 1985) that can be seen as a new method for training polynomial, neural network, or Radial Basis Functions classifiers. The decision surfaces are found by solving a linearly constrained quadratic programming problem. This optimization problem is challenging because the quadratic form is completely dense and the memory requirements grow with the square of the number of data points. We present a decomposition algorithm that guarantees global optimality, and can be used to train SVM's over very large data sets. The main idea behind the decomposition is the iterative solution of sub-problems and the evaluation of optimality conditions which are used both to generate improved iterative values, and also establish the stopping criteria for the algorithm. We present experimental results of our implementation of SVM, and demonstrate the feasibility of our approach on a face detection problem that involves a data set of 50,000 data points.",
"This paper describes a machine learning approach for visual object detection which is capable of processing images extremely rapidly and achieving high detection rates. This work is distinguished by three key contributions. The first is the introduction of a new image representation called the \"integral image\" which allows the features used by our detector to be computed very quickly. The second is a learning algorithm, based on AdaBoost, which selects a small number of critical visual features from a larger set and yields extremely efficient classifiers. The third contribution is a method for combining increasingly more complex classifiers in a \"cascade\" which allows background regions of the image to be quickly discarded while spending more computation on promising object-like regions. The cascade can be viewed as an object specific focus-of-attention mechanism which unlike previous approaches provides statistical guarantees that discarded regions are unlikely to contain the object of interest. In the domain of face detection the system yields detection rates comparable to the best previous systems. Used in real-time applications, the detector runs at 15 frames per second without resorting to image differencing or skin color detection.",
"Head pose estimation in low resolution is a challenge problem. Traditional pose estimation algorithms, which assume faces have been well aligned before pose estimation, would face much difficulty in this situation, since face alignment itself does not work well in this low resolution scenario. In this paper, we propose to estimate head pose using view-based multi-view face detectors directly. Naive Bayesian classifier is then applied to fuse the information of head pose from multiple camera views. To model the temporal changing of head pose, Hidden Markov Model is used to obtain the optimal sequence of head pose with greatest likelihood.",
"This paper describes an approach for the problem of face pose discrimination using support vector machines (SVM). Face pose discrimination means that one can label the face image as one of several known poses. Face images are drawn from the standard FERET database. The training set consists of 150 images equally distributed among frontal, approximately 33.75 spl deg rotated left and right poses, respectively, and the test set consists of 450 images again equally distributed among the three different types of poses. SVM achieved perfect accuracy-100 -discriminating between the three possible face poses on unseen test data, using either polynomials of degree 3 or radial basis functions (RBF) as kernel approximation functions.",
"We present a neural network-based face detection system. A retinally connected neural network examines small windows of an image and decides whether each window contains a face. The system arbitrates between multiple networks to improve performance over a single network. We use a bootstrap algorithm for training the networks, which adds false detections into the training set as training progresses. This eliminates the difficult task of manually selecting non-face training examples, which must be chosen to span the entire space of non-face images. Comparisons with other state-of-the-art face detection systems are presented; our system has better performance in terms of detection and false-positive rates."
]
} |
1904.13102 | 2943494813 | Facial pose estimation has gained a lot of attention in many practical applications, such as human-robot interaction, gaze estimation and driver monitoring. Meanwhile, end-to-end deep learning-based facial pose estimation is becoming more and more popular. However, facial pose estimation suffers from a key challenge: the lack of sufficient training data for many poses, especially for large poses. Inspired by the observation that faces under close poses look similar, we reformulate facial pose estimation as a label distribution learning problem, considering each face image as an example associated with a Gaussian label distribution rather than a single label, and construct a convolutional neural network which is trained with a multi-loss function on the AFLW dataset and the 300WLP dataset to predict the facial poses directly from a color image. Extensive experiments are conducted on several popular benchmarks, including AFLW2000, BIWI, AFLW and AFW, where our approach shows a significant advantage over other state-of-the-art methods. | Label distribution learning (LDL) is a novel machine learning paradigm recently proposed for facial age estimation @cite_35 @cite_29 . LDL is based on the observation that age is ambiguous and faces with adjacent ages are strongly correlated. The main idea of LDL is to utilize adjacent ages when learning a particular age. A label distribution covers a number of class labels, representing the degree to which each label describes the instance. Hence, LDL is able to deal with insufficient and incomplete training data. Some other problems which share the same characteristic as facial age estimation, such as facial attractiveness computation @cite_21 , crowd counting @cite_18 and pre-release movie rating prediction @cite_9 , have achieved outstanding performance by using LDL. | {
"cite_N": [
"@cite_35",
"@cite_18",
"@cite_29",
"@cite_21",
"@cite_9"
],
"mid": [
"",
"295732247",
"2248200858",
"2964118336",
"2269758193"
],
"abstract": [
"",
"The increase of population causes the raise of security threat in crowed environment, which makes crowd counting becoming more and more important. For common complexity scenes, existing crowd counting approaches are mainly based on regression models which learn a mapping between low-level features and class labels. One of the major challenges for generating a good regression function is the insufficient and imbalanced training data. Observationally, the problem of crowd counting has the characteristic that crowd images with adjacent class labels contain similar features, which can be utilized to reduce the effect of insufficiency and imbalance by the strategy of information reuse. Consequently, this paper introduces a label distribution learning (LDL) strategy into crowd counting, where crowd images are labelled with label distributions instead of the conventional single labels. In this way, one crowd instance can contribute to not only the learning of its real class label, but also the learning of neighboring class labels. Hence the training data are increased significantly, and the classes with insufficient training data are supplied with more training data, which belongs to their adjacent classes. Experimental results prove the effectiveness and robustness of the LDL method for crowd counting. We have also shown the outstanding performance of the approach in different dataset.",
"Apparent age estimation from face image has attracted more and more attentions as it is favorable in some real-world applications. In this work, we propose an end-to-end learning approach for robust apparent age estimation, named by us AgeNet. Specifically, we address the apparent age estimation problem by fusing two kinds of models, i.e., real-value based regression models and Gaussian label distribution based classification models. For both kind of models, large-scale deep convolutional neural network is adopted to learn informative age representations. Another key feature of the proposed AgeNet is that, to avoid the problem of over-fitting on small apparent age training set, we exploit a general-to-specific transfer learning scheme. Technically, the AgeNet is first pre-trained on a large-scale web-collected face dataset with identity label, and then it is fine-tuned on a large-scale real age dataset with noisy age label. Finally, it is fine-tuned on a small training set with apparent age label. The experimental results on the ChaLearn 2015 Apparent Age Competition demonstrate that our AgeNet achieves the state-of-the-art performance in apparent age estimation.",
"Two key challenges lie in the facial attractiveness computation research: the lack of discriminative face representations, and the scarcity of sufficient and complete training data. Motivated by recent promising work in face recognition using deep neural networks to learn effective features, the first challenge is expected to be addressed from a deep learning point of view. A very deep residual network is utilized to enable automatic learning of hierarchical aesthetics representation. The inspiration to deal with the second challenge comes from the natural representation of the training data, where each training face can be associated with a label (score) distribution given by human raters rather than a single label (average score). This paper, therefore, recasts facial attractiveness computation as a label distribution learning problem. Integrating these two ideas, an end-to-end attractiveness learning framework is established. We also perform feature-level fusion by incorporating the low-level geometric features to further improve the computational performance. Extensive experiments are conducted on a standard benchmark, the SCUT-FBP dataset, where our approach shows significant advantages over the other state-of-the-art work.",
"This paper studies an interesting problem: is it possible to predict the crowd opinion about a movie before the movie is actually released? The crowd opinion is here expressed by the distribution of ratings given by a sufficient amount of people. Consequently, the pre-release crowd opinion prediction can be regarded as a Label Distribution Learning (LDL) problem. In order to solve this problem, a Label Distribution Support Vector Regressor (LDSVR) is proposed in this paper. The basic idea of LDSVR is to fit a sigmoid function to each component of the label distribution simultaneously by a multi-output support vector machine. Experimental results show that LDSVR can accurately predict peoples's rating distribution about a movie just based on the pre-release metadata of the movie."
]
} |
1904.13205 | 2942779489 | Dou Shou Qi is a Chinese strategy board game for two players. We use an EXPTIME-hardness framework to analyse the computational complexity of the game. We construct all gadgets of the hardness framework. In conclusion, we prove that Dou Shou Qi is EXPTIME-complete. | More recently, researchers have focused on frameworks which can be used to prove the hardness of games. In 2010, Forišek @cite_4 introduced a basic NP-hardness framework for 2D platform games. In 2012, Aloupis, Demaine and Guo @cite_15 introduced an elegant NP-hardness framework, and they used the framework to analyse the complexity of several video games. In 2014, Viglietta @cite_0 established some general schemes relating the computational complexity of 2D platform games to the presence of common game elements. In 2015, @cite_10 introduced a new PSPACE-hardness framework. In our note @cite_7 , we introduced an EXPTIME-hardness framework for 2D games, and we will use this framework to discuss the complexity of Dou Shou Qi in this note. | {
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_0",
"@cite_15",
"@cite_10"
],
"mid": [
"1501441673",
"2925971084",
"2152152733",
"",
"1749523692"
],
"abstract": [
"We analyze the computational complexity of various two-dimensional platform games. We state and prove several meta-theorems that identify a class of these games for which the set of solvable levels is NP-hard, and another class for which the set is even PSPACE-hard. Notably COMMANDERKEEN is shown to be NP-hard, and PRINCE OF PERSIA is shown to be PSPACE-complete. We then analyze the related game Lemmings, where we construct a set of instances which only have exponentially long solutions. This shows that an assumption by Cormode in [3] is false and invalidates the proof that the general version of the LEMMINGS decision problem is in NP. We then augment our construction to only include one entrance, which makes our instances perfectly natural within the context of the original game.",
"We review the NP-hardness framework and PSPACE-hardness framework for a type of 2D platform game. We introduce an EXPTIME-hardness framework by defining some new gadgets. We use these hardness frameworks to analyse the computational complexity of Xiangqi (Chinese Chess) and Janggi (Korean Chess). We construct all gadgets of the hardness frameworks in Xiangqi and Janggi. In conclusion, we prove that Xiangqi and Janggi are both EXPTIME-complete.",
"We establish some general schemes relating the computational complexity of a video game to the presence of certain common elements or mechanics, such as destroyable paths, collectible items, doors opened by keys or activated by buttons or pressure plates, etc. Then we apply such \"metatheorems\" to several video games published between 1980 and 1998, including Pac-Man, Tron, Lode Runner, Boulder Dash, Pipe Mania, Skweek, Prince of Persia, Lemmings, and Starcraft. We obtain both new results, and improvements or alternative proofs of previously known results.",
"",
"We prove NP-hardness results for five of Nintendo's largest video game franchises: Mario, Donkey Kong, Legend of Zelda, Metroid, and Pokemon. Our results apply to generalized versions of Super Mario Bros.?1-3, The Lost Levels, and Super Mario World; Donkey Kong Country 1-3; all Legend of Zelda games; all Metroid games; and all Pokemon role-playing games. In addition, we prove PSPACE-completeness of the Donkey Kong Country games and several Legend of Zelda games."
]
} |
1904.13001 | 2943350287 | Applied Data Scientists throughout various industries are commonly faced with the challenging task of encoding high-cardinality categorical features into digestible inputs for machine learning algorithms. This paper describes a Bayesian encoding technique developed for WeWork's lead scoring engine which outputs the probability of a person touring one of our office spaces based on interaction, enrichment, and geospatial data. We present a paradigm for ensemble modeling which mitigates the need to build complicated preprocessing and encoding schemes for categorical variables. In particular, domain-specific conjugate Bayesian models are employed as base learners for features in a stacked ensemble model. For each column of a categorical feature matrix we fit a problem-specific prior distribution, for example, the Beta distribution for a binary classification problem. In order to analytically derive the moments of the posterior distribution, we update the prior with the conjugate likelihood of the corresponding target variable for each unique value of the given categorical feature. This function of column and value encodes the categorical feature matrix so that the final learner in the ensemble model ingests low-dimensional numerical input. Experimental results on both curated and real world datasets demonstrate impressive accuracy and computational efficiency on a variety of problem archetypes. Particularly, for the lead scoring engine at WeWork -- where some categorical features have as many as 300,000 levels -- we have seen an AUC improvement from 0.87 to 0.97 through implementing conjugate Bayesian model encoding. | Weinberger, K. et al propose a method for dimensionality reduction which they call the hashing trick @cite_10 . In this approach hashing techniques are used to map the feature space to a reduced vector space. 
This has the desirable effect of not only reducing dimensionality and therefore computation, but it also provides an elegant way of dealing with new or missing entries. One shortcoming is the presence of collisions, where different entries are assigned the same hash. However, it has been shown that the gains in accuracy and performance outweigh the effects of this phenomenon. | {
"cite_N": [
"@cite_10"
],
"mid": [
"2953133476"
],
"abstract": [
"Empirical evidence suggests that hashing is an effective strategy for dimensionality reduction and practical nonparametric estimation. In this paper we provide exponential tail bounds for feature hashing and show that the interaction between random subspaces is negligible with high probability. We demonstrate the feasibility of this approach with experimental results for a new use case -- multitask learning with hundreds of thousands of tasks."
]
} |
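The hashing trick summarised in the row above can be illustrated with a minimal sketch. This is not the exact construction of Weinberger et al., just the core idea under simplifying assumptions: one signed hash per token, a small bucket count, and made-up feature strings. The signed update is what lets colliding entries partially cancel rather than systematically inflate a bucket.

```python
import hashlib

def hash_features(tokens, n_buckets=16):
    """Map categorical tokens to a fixed-size vector via the hashing
    trick. The output dimension is fixed regardless of how many
    distinct categories exist, and unseen categories hash to some
    bucket automatically -- no dictionary of known levels is kept."""
    vec = [0.0] * n_buckets
    for tok in tokens:
        h = int(hashlib.md5(tok.encode("utf-8")).hexdigest(), 16)
        # Derive a sign from a different part of the hash so that
        # collisions in the same bucket tend to cancel in expectation.
        sign = 1.0 if (h >> 1) % 2 == 0 else -1.0
        vec[h % n_buckets] += sign
    return vec

# Hypothetical feature tokens for one row of a categorical matrix.
row = hash_features(["city=NYC", "device=mobile", "plan=enterprise"])
```

In practice a non-cryptographic hash (e.g. MurmurHash, as in scikit-learn's `FeatureHasher`) would replace `md5` for speed; `md5` is used here only because it is in the standard library.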
1904.13001 | 2943350287 | Applied Data Scientists throughout various industries are commonly faced with the challenging task of encoding high-cardinality categorical features into digestible inputs for machine learning algorithms. This paper describes a Bayesian encoding technique developed for WeWork's lead scoring engine which outputs the probability of a person touring one of our office spaces based on interaction, enrichment, and geospatial data. We present a paradigm for ensemble modeling which mitigates the need to build complicated preprocessing and encoding schemes for categorical variables. In particular, domain-specific conjugate Bayesian models are employed as base learners for features in a stacked ensemble model. For each column of a categorical feature matrix we fit a problem-specific prior distribution, for example, the Beta distribution for a binary classification problem. In order to analytically derive the moments of the posterior distribution, we update the prior with the conjugate likelihood of the corresponding target variable for each unique value of the given categorical feature. This function of column and value encodes the categorical feature matrix so that the final learner in the ensemble model ingests low-dimensional numerical input. Experimental results on both curated and real world datasets demonstrate impressive accuracy and computational efficiency on a variety of problem archetypes. Particularly, for the lead scoring engine at WeWork -- where some categorical features have as many as 300,000 levels -- we have seen an AUC improvement from 0.87 to 0.97 through implementing conjugate Bayesian model encoding. | Similarity might also be quantified by measuring the morphological resemblance of the actual text describing the level itself. Cerda et al. propose such a similarity encoding technique to encode ‘dirty’, non-curated categorical data @cite_2 .
This method is shown to outperform classic encoding techniques in a setting of high redundancy and very high cardinality. | {
"cite_N": [
"@cite_2"
],
"mid": [
"2963105378"
],
"abstract": [
"For statistical learning, categorical variables in a table are usually considered as discrete entities and encoded separately to feature vectors, e.g., with one-hot encoding. “Dirty” non-curated data give rise to categorical variables with a very high cardinality but redundancy: several categories reflect the same entity. In databases, this issue is typically solved with a deduplication step. We show that a simple approach that exposes the redundancy to the learning algorithm brings significant gains. We study a generalization of one-hot encoding, similarity encoding, that builds feature vectors from similarities across categories. We perform a thorough empirical validation on non-curated tables, a problem seldom studied in machine learning. Results on seven real-world datasets show that similarity encoding brings significant gains in predictive performance in comparison with known encoding methods for categories or strings, notably one-hot encoding and bag of character n-grams. We draw practical recommendations for encoding dirty categories: 3-gram similarity appears to be a good choice to capture morphological resemblance. For very high-cardinalities, dimensionality reduction significantly reduces the computational cost with little loss in performance: random projections or choosing a subset of prototype categories still outperform classic encoding approaches."
]
} |
1904.13001 | 2943350287 | Applied Data Scientists throughout various industries are commonly faced with the challenging task of encoding high-cardinality categorical features into digestible inputs for machine learning algorithms. This paper describes a Bayesian encoding technique developed for WeWork's lead scoring engine which outputs the probability of a person touring one of our office spaces based on interaction, enrichment, and geospatial data. We present a paradigm for ensemble modeling which mitigates the need to build complicated preprocessing and encoding schemes for categorical variables. In particular, domain-specific conjugate Bayesian models are employed as base learners for features in a stacked ensemble model. For each column of a categorical feature matrix we fit a problem-specific prior distribution, for example, the Beta distribution for a binary classification problem. In order to analytically derive the moments of the posterior distribution, we update the prior with the conjugate likelihood of the corresponding target variable for each unique value of the given categorical feature. This function of column and value encodes the categorical feature matrix so that the final learner in the ensemble model ingests low-dimensional numerical input. Experimental results on both curated and real world datasets demonstrate impressive accuracy and computational efficiency on a variety of problem archetypes. Particularly, for the lead scoring engine at WeWork -- where some categorical features have as many as 300,000 levels -- we have seen an AUC improvement from 0.87 to 0.97 through implementing conjugate Bayesian model encoding. | These methods of continuousification of a nominal variable can be extended to more advanced preprocessing schemes which take full advantage of the target variable at hand. 
Micci-Barreca applies an empirical Bayes approach to estimate the mean of the target variable, @math , conditional on each level of the categorical variable, @math , at hand, e.g. @math @cite_12 . Conversely, the MDV approach encodes each level by measuring the mean of the respective level conditional on the target variable @math @cite_1 . Both of these methods are variations of the more general Value Difference Metric (VDM) continuousification scheme @cite_0 @cite_13 . Once the entries have been transformed onto a continuous scale, further clustering can be performed to group similar levels together @cite_12 . | {
"cite_N": [
"@cite_0",
"@cite_1",
"@cite_13",
"@cite_12"
],
"mid": [
"",
"1556996173",
"2163952039",
"1990836268"
],
"abstract": [
"",
"Most of Computational Intelligence models (e.g. neural networks or distance based methods) are designed to operate on continuous data and provide no tools to adapt their parameters to data described by symbolic values. Two new conversion methods which replace symbolic by continuous attributes are presented and compared to two commonly known ones. The advantages of the continuousification are illustrated with the results obtained with a neural network, SVM and a kNN systems for the converted data.",
"Instance-based learning techniques typically handle continuous and linear input values well, but often do not handle nominal input attributes appropriately. The Value Difference Metric (VDM) was designed to find reasonable distance values between nominal attribute values, but it largely ignores continuous attributes, requiring discretization to map continuous values into nominal values. This paper proposes three new heterogeneous distance functions, called the Heterogeneous Value Difference Metric (HVDM), the Interpolated Value Difference Metric (IVDM), and the Windowed Value Difference Metric (WVDM). These new distance functions are designed to handle applications with nominal attributes, continuous attributes, or both. In experiments on 48 applications the new distance metrics achieve higher classification accuracy on average than three previous distance functions on those datasets that have both nominal and continuous attributes.",
"Categorical data fields characterized by a large number of distinct values represent a serious challenge for many classification and regression algorithms that require numerical inputs. On the other hand, these types of data fields are quite common in real-world data mining applications and often contain potentially relevant information that is difficult to represent for modeling purposes.This paper presents a simple preprocessing scheme for high-cardinality categorical data that allows this class of attributes to be used in predictive models such as neural networks, linear and logistic regression. The proposed method is based on a well-established statistical method (empirical Bayes) that is straightforward to implement as an in-database procedure. Furthermore, for categorical attributes with an inherent hierarchical structure, like ZIP codes, the preprocessing scheme can directly leverage the hierarchy by blending statistics at the various levels of aggregation.While the statistical methods discussed in this paper were first introduced in the mid 1950's, the use of these methods as a preprocessing step for complex models, like neural networks, has not been previously discussed in any literature."
]
} |
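The target-mean idea in the row above can be sketched as smoothed mean encoding: each level is replaced by a blend of its level-conditional target mean and the global mean, weighted by how often the level was observed. This is a simplified shrinkage estimator in the empirical Bayes spirit, not Micci-Barreca's exact blending formula, and `prior_weight` is an illustrative hyperparameter.

```python
from collections import defaultdict

def mean_target_encode(categories, targets, prior_weight=10.0):
    """Return an encoder mapping each categorical level to a smoothed
    conditional mean of a binary/numeric target. Rare and unseen
    levels shrink toward the global mean, which guards against the
    noisy estimates that plague high-cardinality features."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for c, y in zip(categories, targets):
        sums[c] += y
        counts[c] += 1
    global_mean = sum(targets) / len(targets)

    def encode(level):
        n = counts.get(level, 0)
        s = sums.get(level, 0.0)
        return (s + prior_weight * global_mean) / (n + prior_weight)

    return encode

# Toy data: level "a" seen twice (one positive), level "b" once (positive).
enc = mean_target_encode(["a", "a", "b"], [1, 0, 1], prior_weight=1.0)
```

With `prior_weight=1.0` and global mean 2/3, level "a" encodes to (1 + 2/3)/3 = 5/9 and an unseen level falls back to the global mean exactly.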
1904.13001 | 2943350287 | Applied Data Scientists throughout various industries are commonly faced with the challenging task of encoding high-cardinality categorical features into digestible inputs for machine learning algorithms. This paper describes a Bayesian encoding technique developed for WeWork's lead scoring engine which outputs the probability of a person touring one of our office spaces based on interaction, enrichment, and geospatial data. We present a paradigm for ensemble modeling which mitigates the need to build complicated preprocessing and encoding schemes for categorical variables. In particular, domain-specific conjugate Bayesian models are employed as base learners for features in a stacked ensemble model. For each column of a categorical feature matrix we fit a problem-specific prior distribution, for example, the Beta distribution for a binary classification problem. In order to analytically derive the moments of the posterior distribution, we update the prior with the conjugate likelihood of the corresponding target variable for each unique value of the given categorical feature. This function of column and value encodes the categorical feature matrix so that the final learner in the ensemble model ingests low-dimensional numerical input. Experimental results on both curated and real world datasets demonstrate impressive accuracy and computational efficiency on a variety of problem archetypes. Particularly, for the lead scoring engine at WeWork -- where some categorical features have as many as 300,000 levels -- we have seen an AUC improvement from 0.87 to 0.97 through implementing conjugate Bayesian model encoding. | In our proposed method, we build on the approaches outlined above while drawing further inspiration from the works of Vilnis and McCallum where they introduce the novel idea of using gaussian embeddings to learn word representations @cite_3 . 
Word embedding is a method of mapping a particular word into an N-dimensional vector of real numbers. Embeddings have the ability to translate large sparse vectors into a lower-dimensional space whilst preserving semantic relationships; the approach was popularized by Mikolov et al. in their paper on 'word2vec' @cite_11 . Gaussian embedding moves beyond the point-vector representation, embedding words directly as Gaussian distributional potential functions, with distances between these distributions calculated using KL-divergence. In this 'word2gauss' approach, a preprocessing step is similarly required to learn the mean and variance of each distribution; thus the uncertainty surrounding the word is fully captured and utilized. | {
"cite_N": [
"@cite_3",
"@cite_11"
],
"mid": [
"2962796133",
"2153579005"
],
"abstract": [
"Abstract: Current work in lexical distributed representations maps each word to a point vector in low-dimensional space. Mapping instead to a density provides many interesting advantages, including better capturing uncertainty about a representation and its relationships, expressing asymmetries more naturally than dot product or cosine similarity, and enabling more expressive parameterization of decision boundaries. This paper advocates for density-based distributed embeddings and presents a method for learning representations in the space of Gaussian distributions. We compare performance on various word embedding benchmarks, investigate the ability of these embeddings to model entailment and other asymmetric relationships, and explore novel properties of the representation.",
"The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of \"Canada\" and \"Air\" cannot be easily combined to obtain \"Air Canada\". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible."
]
} |
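The KL-divergence used between Gaussian embeddings in the row above has a closed form for diagonal covariances. A minimal sketch (assuming diagonal covariance vectors, as Gaussian word-embedding models typically do for tractability):

```python
import math

def kl_diag_gaussians(mu0, var0, mu1, var1):
    """KL(N0 || N1) for diagonal-covariance Gaussians:
    0.5 * sum_i [ var0_i/var1_i + (mu1_i - mu0_i)^2/var1_i
                  - 1 + ln(var1_i/var0_i) ].
    The asymmetry is the point: KL(N0||N1) != KL(N1||N0) in general,
    which lets these embeddings express asymmetric relations such as
    entailment, unlike dot-product or cosine similarity."""
    kl = 0.0
    for m0, v0, m1, v1 in zip(mu0, var0, mu1, var1):
        kl += v0 / v1 + (m1 - m0) ** 2 / v1 - 1.0 + math.log(v1 / v0)
    return 0.5 * kl
```

Identical distributions give a divergence of zero, and a broad distribution measured against a narrow one scores differently than the reverse.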
1904.13268 | 2942871801 | Automatic character generation is an appealing solution for new typeface design, especially for Chinese typefaces including over 3700 most commonly-used characters. This task has two main pain points: (i) handwritten characters are usually associated with thin strokes of few information and complex structure which are error prone during deformation; (ii) thousands of characters with various shapes are needed to synthesize based on a few manually designed characters. To solve those issues, we propose a novel convolutional-neural-network-based model with three main techniques: collaborative stroke refinement, using collaborative training strategy to recover the missing or broken strokes; online zoom-augmentation, taking the advantage of the content-reuse phenomenon to reduce the size of training set; and adaptive pre-deformation, standardizing and aligning the characters. The proposed model needs only 750 paired training samples; no pre-trained network, extra dataset resource or labels is needed. Experimental results show that the proposed method significantly outperforms the state-of-the-art methods under the practical restriction on handwritten font synthesis. | Most image-to-image translation tasks such as art-style transfer @cite_0 @cite_10 , coloring @cite_14 @cite_6 , super-resolution @cite_19 , and dehazing @cite_5 have achieved progressive improvement since the rise of CNNs. Recently, several works model font glyph synthesis as a supervised image-to-image translation task, mapping a character from one typeface to another with CNN frameworks analogous to an encoder-decoder backbone @cite_11 @cite_16 @cite_27 @cite_3 @cite_26 @cite_25 . In particular, generative adversarial networks (GANs) @cite_8 @cite_1 @cite_20 are widely adopted to obtain promising results. Differently, EMD @cite_2 and SA-VAE @cite_13 are proposed to disentangle the style and content representations of Chinese fonts for a more flexible transfer. | {
"cite_N": [
"@cite_13",
"@cite_14",
"@cite_26",
"@cite_8",
"@cite_1",
"@cite_6",
"@cite_3",
"@cite_0",
"@cite_19",
"@cite_27",
"@cite_2",
"@cite_5",
"@cite_16",
"@cite_10",
"@cite_25",
"@cite_20",
"@cite_11"
],
"mid": [
"2780436321",
"2590274298",
"2775226934",
"2099471712",
"2125389028",
"2326925005",
"2893096152",
"2604737827",
"2523714292",
"2962968458",
"2963801463",
"2899026108",
"",
"2331128040",
"2963802693",
"2173520492",
""
],
"abstract": [
"Automatically writing stylized Chinese characters is an attractive yet challenging task due to its wide applicabilities. In this paper, we propose a novel framework named Style-Aware Variational Auto-Encoder (SA-VAE) to flexibly generate Chinese characters. Specifically, we propose to capture the different characteristics of a Chinese character by disentangling the latent features into content-related and style-related components. Considering of the complex shapes and structures, we incorporate the structure information as prior knowledge into our framework to guide the generation. Our framework shows a powerful one-shot low-shot generalization ability by inferring the style component given a character with unseen style. To the best of our knowledge, this is the first attempt to learn to write new-style Chinese characters by observing only one or a few examples. Extensive experiments demonstrate its effectiveness in generating different stylized Chinese characters by fusing the feature vectors corresponding to different contents and styles, which is of significant importance in real-world applications.",
"Colorization of grayscale images is a hot topic in computer vision. Previous research mainly focuses on producing a color image to recover the original one in a supervised learning fashion. However, since many colors share the same gray value, an input grayscale image could be diversely colorized while maintaining its reality. In this paper, we design a novel solution for unsupervised diverse colorization. Specifically, we leverage conditional generative adversarial networks to model the distribution of real-world item colors, in which we develop a fully convolutional generator with multi-layer noise to enhance diversity, with multi-layer condition concatenation to maintain reality, and with stride 1 to keep spatial information. With such a novel network architecture, the model yields highly competitive performance on the open LSUN bedroom dataset. The Turing test on 80 humans further indicates our generated color schemes are highly convincible.",
"Building a complete personalized Chinese font library for an ordinary person is a tough task due to the existence of huge amounts of characters with complicated structures. Yet, existing automatic font generation methods still have many drawbacks. To address the problem, this paper proposes an end-to-end learning system, DCFont, to automatically generate the whole GB2312 font library that consists of 6763 Chinese characters from a small number (e.g., 775) of characters written by the user. Our system has two major advantages. On the one hand, the system works in an end-to-end manner, which means that human interventions during offline training and online generating periods are not required. On the other hand, a novel deep neural network architecture is designed to solve the font feature reconstruction and handwriting synthesis problems through adversarial training, which requires fewer input data but obtains more realistic and high-quality synthesis results compared to other deep learning based approaches. Experimental results verify the superiority of our method against the state of the art.",
"We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.",
"Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this model can generate MNIST digits conditioned on class labels. We also illustrate how this model could be used to learn a multi-modal model, and provide preliminary examples of an application to image tagging in which we demonstrate how this approach can generate descriptive tags which are not part of training labels.",
"Given a grayscale photograph as input, this paper attacks the problem of hallucinating a plausible color version of the photograph. This problem is clearly underconstrained, so previous approaches have either relied on significant user interaction or resulted in desaturated colorizations. We propose a fully automatic approach that produces vibrant and realistic colorizations. We embrace the underlying uncertainty of the problem by posing it as a classification task and use class-rebalancing at training time to increase the diversity of colors in the result. The system is implemented as a feed-forward pass in a CNN at test time and is trained on over a million color images. We evaluate our algorithm using a “colorization Turing test,” asking human participants to choose between a generated and ground truth color image. Our method successfully fools humans on 32 of the trials, significantly higher than previous methods. Moreover, we show that colorization can be a powerful pretext task for self-supervised feature learning, acting as a cross-channel encoder. This approach results in state-of-the-art performance on several feature learning benchmarks.",
"",
"We propose StyleBank, which is composed of multiple convolution filter banks and each filter bank explicitly represents one style, for neural image style transfer. To transfer an image to a specific style, the corresponding filter bank is operated on top of the intermediate feature embedding produced by a single auto-encoder. The StyleBank and the auto-encoder are jointly learnt, where the learning is conducted in such a way that the auto-encoder does not encode any style information thanks to the flexibility introduced by the explicit filter bank representation. It also enables us to conduct incremental learning to add a new image style by learning a new filter bank while holding the auto-encoder fixed. The explicit style representation along with the flexible network design enables us to fuse styles at not only the image level, but also the region level. Our method is the first style transfer network that links back to traditional texton mapping methods, and hence provides new understanding on neural style transfer. Our method is easy to train, runs in real-time, and produces results that qualitatively better or at least comparable to existing methods.",
"Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method.",
"In this work, we focus on the challenge of taking partial observations of highly-stylized text and generalizing the observations to generate unobserved glyphs in the ornamented typeface. To generate a set of multi-content images following a consistent style from very few examples, we propose an end-to-end stacked conditional GAN model considering content along channels and style along network layers. Our proposed network transfers the style of given glyphs to the contents of unseen ones, capturing highly stylized fonts found in the real-world such as those on movie posters or infographics. We seek to transfer both the typographic stylization (ex. serifs and ears) as well as the textual stylization (ex. color gradients and effects.) We base our experiments on our collected data set including 10,000 fonts with different styles and demonstrate effective generalization from a very small number of observed glyphs.",
"Neural style transfer has drawn broad attention in recent years. However, most existing methods aim to explicitly model the transformation between different styles, and the learned model is thus not generalizable to new styles. We here attempt to separate the representations for styles and contents, and propose a generalized style transfer network consisting of style encoder, content encoder, mixer and decoder. The style encoder and content encoder are used to extract the style and content factors from the style reference images and content reference images, respectively. The mixer employs a bilinear model to integrate the above two factors and finally feeds it into a decoder to generate images with target style and content. To separate the style features and content features, we leverage the conditional dependence of styles and contents given an image. During training, the encoder network learns to extract styles and contents from two sets of reference images in limited size, one with shared style and the other with shared content. This learning framework allows simultaneous style transfer among multiple styles and can be deemed as a special 'multi-task' learning scenario. The encoders are expected to capture the underlying features for different styles and contents which is generalizable to new styles and contents. For validation, we applied the proposed algorithm to the Chinese Typeface transfer problem. Extensive experiment results on character generation have demonstrated the effectiveness and robustness of our method.",
"This paper reviews the first challenge on image dehazing (restoration of rich details in hazy image) with focus on proposed solutions and results. The challenge had 2 tracks. Track 1 employed the indoor images (using I-HAZE dataset), while Track 2 employed the outdoor images (using O-HAZE dataset). The hazy images have been captured in presence of real haze, generated by professional haze machines. I-HAZE dataset contains 35 scenes that correspond to indoor domestic environments, with objects with different colors and specularities. O-HAZE contains 45 different outdoor scenes depicting the same visual content recorded in haze-free and hazy conditions, under the same illumination parameters. The dehazing process was learnable through provided pairs of haze-free and hazy train images. Each track had 120 registered participants and 21 teams competed in the final testing phase. They gauge the state-of-the-art in image dehazing.",
"",
"We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a per-pixel loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing perceptual loss functions based on high-level features extracted from pretrained networks. We combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by in real-time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results.",
"In this paper, we investigate the Chinese calligraphy synthesis problem: synthesizing Chinese calligraphy images with specified style from standard font(eg. Hei font) images (Fig. 1(a)). Recent works mostly follow the stroke extraction and assemble pipeline which is complex in the process and limited by the effect of stroke extraction. In this work we treat the calligraphy synthesis problem as an image-to-image translation problem and propose a deep neural network based model which can generate calligraphy images from standard font images directly. Besides, we also construct a large scale benchmark that contains various styles for Chinese calligraphy synthesis. We evaluate our method as well as some baseline methods on the proposed dataset, and the experimental results demonstrate the effectiveness of our proposed model.",
"In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations.",
""
]
} |
1904.13268 | 2942871801 | Automatic character generation is an appealing solution for new typeface design, especially for Chinese typefaces including over 3700 most commonly-used characters. This task has two main pain points: (i) handwritten characters are usually associated with thin strokes that carry little information and have complex structure, which makes them error prone during deformation; (ii) thousands of characters with various shapes need to be synthesized based on a few manually designed characters. To solve those issues, we propose a novel convolutional-neural-network-based model with three main techniques: collaborative stroke refinement, using a collaborative training strategy to recover missing or broken strokes; online zoom-augmentation, taking advantage of the content-reuse phenomenon to reduce the size of the training set; and adaptive pre-deformation, standardizing and aligning the characters. The proposed model needs only 750 paired training samples; no pre-trained network, extra dataset resources, or labels are needed. Experimental results show that the proposed method significantly outperforms the state-of-the-art methods under the practical restriction on handwritten font synthesis. | Preserving content consistency is important in the font synthesis task. "From A to Z" assigned each Latin alphabet a one-hot label fed into the network to constrain the content @cite_23 . Similarly, SA-VAE @cite_13 embedded a pre-trained Chinese character recognition network @cite_15 @cite_7 into the framework to provide a correct content label. SA-VAE also embedded a 133-bit coding denoting the structural content-reuse knowledge of the inputs, which relies on extra structure- and content-related labeling beforehand. In order to synthesize multiple fonts, zi2zi @cite_16 utilized a one-hot category embedding. Similarly, DCFont @cite_26 involved a pre-trained 100-class font classifier in the framework to provide a better style representation.
More recently, MC-GAN @cite_27 can synthesize ornamented glyphs from a few examples. However, generating Chinese characters is very different from generating the English alphabet, in both complexity and design considerations. EMD @cite_2 is the first model to achieve good performance on new Chinese font transfer with a few samples, but it performs poorly on handwritten fonts. | {
"cite_N": [
"@cite_26",
"@cite_7",
"@cite_27",
"@cite_23",
"@cite_2",
"@cite_15",
"@cite_16",
"@cite_13"
],
"mid": [
"2775226934",
"2609955731",
"2962968458",
"2294780673",
"2963801463",
"2592329255",
"",
"2780436321"
],
"abstract": [
"Building a complete personalized Chinese font library for an ordinary person is a tough task due to the existence of huge amounts of characters with complicated structures. Yet, existing automatic font generation methods still have many drawbacks. To address the problem, this paper proposes an end-to-end learning system, DCFont, to automatically generate the whole GB2312 font library that consists of 6763 Chinese characters from a small number (e.g., 775) of characters written by the user. Our system has two major advantages. On the one hand, the system works in an end-to-end manner, which means that human interventions during offline training and online generating periods are not required. On the other hand, a novel deep neural network architecture is designed to solve the font feature reconstruction and handwriting synthesis problems through adversarial training, which requires fewer input data but obtains more realistic and high-quality synthesis results compared to other deep learning based approaches. Experimental results verify the superiority of our method against the state of the art.",
"This paper considers using deep neural networks for handwritten Chinese character recognition (HCCR) with arbitrary position, scale, and orientations. To solve this problem, we combine the recently proposed spatial transformer network (STN) with the deep residual network (DRN). The STN acts like a character shape normalization procedure. Different from the traditional heuristic shape normalization methods, STN is learned directly from the data. Furthermore, the DRN makes the training of very deep network to be both efficient and effective. With the combination of STN and DRN, the whole model can be trained jointly in an end-to-end manner. In this paper, new state-of-the-art performance has been achieved by our proposed model on the offline ICDAR-2013 Chinese handwriting competition database. Moreover, the experiment on randomly distorted samples shows that the STN is very effective for robust HCCR in rectifying the shape of distorted characters.",
"In this work, we focus on the challenge of taking partial observations of highly-stylized text and generalizing the observations to generate unobserved glyphs in the ornamented typeface. To generate a set of multi-content images following a consistent style from very few examples, we propose an end-to-end stacked conditional GAN model considering content along channels and style along network layers. Our proposed network transfers the style of given glyphs to the contents of unseen ones, capturing highly stylized fonts found in the real-world such as those on movie posters or infographics. We seek to transfer both the typographic stylization (ex. serifs and ears) as well as the textual stylization (ex. color gradients and effects.) We base our experiments on our collected data set including 10,000 fonts with different styles and demonstrate effective generalization from a very small number of observed glyphs.",
"We propose a new neural network architecture for solving single-image analogies - the generation of an entire set of stylistically similar images from just a single input image. Solving this problem requires separating image style from content. Our network is a modified variational autoencoder (VAE) that supports supervised training of single-image analogies and in-network evaluation of outputs with a structured similarity objective that captures pixel covariances. On the challenging task of generating a 62-letter font from a single example letter we produce images with 22.4% lower dissimilarity to the ground truth than state-of-the-art.",
"Neural style transfer has drawn broad attention in recent years. However, most existing methods aim to explicitly model the transformation between different styles, and the learned model is thus not generalizable to new styles. We here attempt to separate the representations for styles and contents, and propose a generalized style transfer network consisting of style encoder, content encoder, mixer and decoder. The style encoder and content encoder are used to extract the style and content factors from the style reference images and content reference images, respectively. The mixer employs a bilinear model to integrate the above two factors and finally feeds it into a decoder to generate images with target style and content. To separate the style features and content features, we leverage the conditional dependence of styles and contents given an image. During training, the encoder network learns to extract styles and contents from two sets of reference images in limited size, one with shared style and the other with shared content. This learning framework allows simultaneous style transfer among multiple styles and can be deemed as a special 'multi-task' learning scenario. The encoders are expected to capture the underlying features for different styles and contents which is generalizable to new styles and contents. For validation, we applied the proposed algorithm to the Chinese Typeface transfer problem. Extensive experiment results on character generation have demonstrated the effectiveness and robustness of our method.",
"We propose a new method for building fast and compact CNN model for large scale handwritten Chinese character recognition (HCCR). We propose a new technique, namely Adaptive Drop-weight (ADW), for effectively pruning CNN parameters. We proposed the Global Supervised Low Rank Expansions (GSLRE) method for accelerating CNN model. Comparing with the state-of-the-art CNN method for HCCR, our approach is about 30-times faster yet 10-times smaller. Like other problems in computer vision, offline handwritten Chinese character recognition (HCCR) has achieved impressive results using convolutional neural network (CNN)-based methods. However, larger and deeper networks are needed to deliver state-of-the-art results in this domain. Such networks intuitively appear to incur high computational cost, and require the storage of a large number of parameters, which render them unfeasible for deployment in portable devices. To solve this problem, we propose a Global Supervised Low-rank Expansion (GSLRE) method and an Adaptive Drop-weight (ADW) technique to solve the problems of speed and storage capacity. We design a nine-layer CNN for HCCR consisting of 3755 classes, and devise an algorithm that can reduce the network's computational cost by nine times and compress the network to 1/18 of the original size of the baseline model, with only a 0.21% drop in accuracy. In tests, the proposed algorithm can still surpass the best single-network performance reported thus far in the literature while requiring only 2.3MB for storage. Furthermore, when integrated with our effective forward implementation, the recognition of an offline character image takes only 9.7ms on a CPU. Compared with the state-of-the-art CNN model for HCCR, our approach is approximately 30 times faster, yet 10 times more cost efficient.",
"",
"Automatically writing stylized Chinese characters is an attractive yet challenging task due to its wide applicabilities. In this paper, we propose a novel framework named Style-Aware Variational Auto-Encoder (SA-VAE) to flexibly generate Chinese characters. Specifically, we propose to capture the different characteristics of a Chinese character by disentangling the latent features into content-related and style-related components. Considering of the complex shapes and structures, we incorporate the structure information as prior knowledge into our framework to guide the generation. Our framework shows a powerful one-shot low-shot generalization ability by inferring the style component given a character with unseen style. To the best of our knowledge, this is the first attempt to learn to write new-style Chinese characters by observing only one or a few examples. Extensive experiments demonstrate its effectiveness in generating different stylized Chinese characters by fusing the feature vectors corresponding to different contents and styles, which is of significant importance in real-world applications."
]
} |
1904.13281 | 2943048304 | Infarcted brain tissue resulting from acute stroke readily shows up as hyperintense regions within diffusion-weighted magnetic resonance imaging (DWI). It has also been proposed that computed tomography perfusion (CTP) could alternatively be used to triage stroke patients, given improvements in speed and availability, as well as reduced cost. However, CTP has a lower signal to noise ratio compared to MR. In this work, we investigate whether a conditional mapping can be learned by a generative adversarial network to map CTP inputs to generated MR DWI that more clearly delineates hyperintense regions due to ischemic stroke. We detail the architectures of the generator and discriminator and describe the training process used to perform image-to-image translation from multi-modal CT perfusion maps to diffusion weighted MR outputs. We evaluate the results both qualitatively by visual comparison of generated MR to ground truth, as well as quantitatively by training fully convolutional neural networks that make use of generated MR data inputs to perform ischemic stroke lesion segmentation. Segmentation networks trained using generated CT-to-MR inputs result in at least some improvement on all metrics used for evaluation, compared with networks that only use CT perfusion input. | The high cost of acquiring ground truth labels means that many medical imaging datasets are either not large enough or exhibit large class imbalances between healthy and pathological cases. Generative models and GANs attempt to circumvent this problem by generating synthesized data that can be used to augment datasets during model training. Consequently, GANs are increasingly being utilized within medical image synthesis and analysis @cite_8 @cite_19 @cite_17 @cite_20 . We make use of the pix2pix framework @cite_15 as it is able to learn both the image translation map and an appropriate loss function for training. 
The framework has been used effectively in several tasks, such as simulating myocardial scar tissue in cardiovascular MR scans @cite_17 . Here, GANs were used to augment healthy MR scans with realistic-looking scar tissue. Using two GANs, @cite_17 were able to both generate the scar tissue and refine intensity values using a domain-specific heuristic. The generation of scar tissue was effective in training and led to realistic results: experienced physicians mistook the generated images for real ones. | {
"cite_N": [
"@cite_8",
"@cite_19",
"@cite_15",
"@cite_20",
"@cite_17"
],
"mid": [
"2745006834",
"2952038462",
"",
"2797591221",
"2950025006"
],
"abstract": [
"MR-only radiotherapy treatment planning requires accurate MR-to-CT synthesis. Current deep learning methods for MR-to-CT synthesis depend on pairwise aligned MR and CT training images of the same patient. However, misalignment between paired images could lead to errors in synthesized CT images. To overcome this, we propose to train a generative adversarial network (GAN) with unpaired MR and CT images. A GAN consisting of two synthesis convolutional neural networks (CNNs) and two discriminator CNNs was trained with cycle consistency to transform 2D brain MR image slices into 2D brain CT image slices and vice versa. Brain MR and CT images of 24 patients were analyzed. A quantitative evaluation showed that the model was able to synthesize CT images that closely approximate reference CT images, and was able to outperform a GAN model trained with paired MR and CT images.",
"Computed tomography (CT) is critical for various clinical applications, e.g., radiotherapy treatment planning and also PET attenuation correction. However, CT exposes radiation during acquisition, which may cause side effects to patients. Compared to CT, magnetic resonance imaging (MRI) is much safer and does not involve any radiations. Therefore, recently, researchers are greatly motivated to estimate CT image from its corresponding MR image of the same subject for the case of radiotherapy planning. In this paper, we propose a data-driven approach to address this challenging problem. Specifically, we train a fully convolutional network to generate CT given an MR image. To better model the nonlinear relationship from MRI to CT and to produce more realistic images, we propose to use the adversarial training strategy and an image gradient difference loss function. We further apply AutoContext Model to implement a context-aware generative adversarial network. Experimental results show that our method is accurate and robust for predicting CT images from MRI images, and also outperforms three state-of-the-art methods under comparison.",
"",
"Computationally synthesized blood vessels can be used for training and evaluation of medical image analysis applications. We propose a deep generative model to synthesize blood vessel geometries, with an application to coronary arteries in cardiac CT angiography (CCTA). In the proposed method, a Wasserstein generative adversarial network (GAN) consisting of a generator and a discriminator network is trained. While the generator tries to synthesize realistic blood vessel geometries, the discriminator tries to distinguish synthesized geometries from those of real blood vessels. Both real and synthesized blood vessel geometries are parametrized as 1D signals based on the central vessel axis. The generator can optionally be provided with an attribute vector to synthesize vessels with particular characteristics. The GAN was optimized using a reference database with parametrizations of 4,412 real coronary artery geometries extracted from CCTA scans. After training, plausible coronary artery geometries could be synthesized based on random vectors sampled from a latent space. A qualitative analysis showed strong similarities between real and synthesized coronary arteries. A detailed analysis of the latent space showed that the diversity present in coronary artery anatomy was accurately captured by the generator. Results show that Wasserstein generative adversarial networks can be used to synthesize blood vessel geometries.",
"Medical images with specific pathologies are scarce, but a large amount of data is usually required for a deep convolutional neural network (DCNN) to achieve good accuracy. We consider the problem of segmenting the left ventricular (LV) myocardium on late gadolinium enhancement (LGE) cardiovascular magnetic resonance (CMR) scans of which only some of the scans have scar tissue. We propose ScarGAN to simulate scar tissue on healthy myocardium using chained generative adversarial networks (GAN). Our novel approach factorizes the simulation process into 3 steps: 1) a mask generator to simulate the shape of the scar tissue; 2) a domain-specific heuristic to produce the initial simulated scar tissue from the simulated shape; 3) a refining generator to add details to the simulated scar tissue. Unlike other approaches that generate samples from scratch, we simulate scar tissue on normal scans resulting in highly realistic samples. We show that experienced radiologists are unable to distinguish between real and simulated scar tissue. Training a U-Net with additional scans with scar tissue simulated by ScarGAN increases the percentage of scar pixels correctly included in LV myocardium prediction from 75.9% to 80.5%."
]
} |
1904.13281 | 2943048304 | Infarcted brain tissue resulting from acute stroke readily shows up as hyperintense regions within diffusion-weighted magnetic resonance imaging (DWI). It has also been proposed that computed tomography perfusion (CTP) could alternatively be used to triage stroke patients, given improvements in speed and availability, as well as reduced cost. However, CTP has a lower signal to noise ratio compared to MR. In this work, we investigate whether a conditional mapping can be learned by a generative adversarial network to map CTP inputs to generated MR DWI that more clearly delineates hyperintense regions due to ischemic stroke. We detail the architectures of the generator and discriminator and describe the training process used to perform image-to-image translation from multi-modal CT perfusion maps to diffusion weighted MR outputs. We evaluate the results both qualitatively by visual comparison of generated MR to ground truth, as well as quantitatively by training fully convolutional neural networks that make use of generated MR data inputs to perform ischemic stroke lesion segmentation. Segmentation networks trained using generated CT-to-MR inputs result in at least some improvement on all metrics used for evaluation, compared with networks that only use CT perfusion input. | @cite_8 investigates the problem of CT generation from MR for radiation therapy planning -- a task that requires both MR and CT volumes. They synthesize CT from MR scans using CycleGAN @cite_6 , which employs a forward and backward cycle-consistency loss. The CycleGAN framework allows processing of unpaired MR and CT slices and does not rely on having co-registered images. Interestingly, @cite_8 shows that a model trained using unpaired MR and CT data was able to outperform models that used paired data. | {
"cite_N": [
"@cite_6",
"@cite_8"
],
"mid": [
"2962793481",
"2745006834"
],
"abstract": [
"Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples. Our goal is to learn a mapping G : X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping F : Y → X and introduce a cycle consistency loss to push F(G(X)) ≈ X (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach.",
"MR-only radiotherapy treatment planning requires accurate MR-to-CT synthesis. Current deep learning methods for MR-to-CT synthesis depend on pairwise aligned MR and CT training images of the same patient. However, misalignment between paired images could lead to errors in synthesized CT images. To overcome this, we propose to train a generative adversarial network (GAN) with unpaired MR and CT images. A GAN consisting of two synthesis convolutional neural networks (CNNs) and two discriminator CNNs was trained with cycle consistency to transform 2D brain MR image slices into 2D brain CT image slices and vice versa. Brain MR and CT images of 24 patients were analyzed. A quantitative evaluation showed that the model was able to synthesize CT images that closely approximate reference CT images, and was able to outperform a GAN model trained with paired MR and CT images."
]
} |
1904.13281 | 2943048304 | Infarcted brain tissue resulting from acute stroke readily shows up as hyperintense regions within diffusion-weighted magnetic resonance imaging (DWI). It has also been proposed that computed tomography perfusion (CTP) could alternatively be used to triage stroke patients, given improvements in speed and availability, as well as reduced cost. However, CTP has a lower signal to noise ratio compared to MR. In this work, we investigate whether a conditional mapping can be learned by a generative adversarial network to map CTP inputs to generated MR DWI that more clearly delineates hyperintense regions due to ischemic stroke. We detail the architectures of the generator and discriminator and describe the training process used to perform image-to-image translation from multi-modal CT perfusion maps to diffusion weighted MR outputs. We evaluate the results both qualitatively by visual comparison of generated MR to ground truth, as well as quantitatively by training fully convolutional neural networks that make use of generated MR data inputs to perform ischemic stroke lesion segmentation. Segmentation networks trained using generated CT-to-MR inputs result in at least some improvement on all metrics used for evaluation, compared with networks that only use CT perfusion input. | 3D CGANs were also used in @cite_25 to learn shape information about pulmonary nodules in CT volumes. They generated synthetic nodules by training a generator on pairs of nodule-masked 3D input patches together with their ground truth segmentations. They used the generator network to augment their CT training set with nodules close to lung borders to improve segmentation. As in our work, and others @cite_21 @cite_17 , the authors of @cite_25 use a combination of the L1 and conditional GAN loss to train their networks. | {
"cite_N": [
"@cite_21",
"@cite_25",
"@cite_17"
],
"mid": [
"2552465644",
"2806470514",
"2950025006"
],
"abstract": [
"We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Indeed, since the release of the pix2pix software associated with this paper, a large number of internet users (many of them artists) have posted their own experiments with our system, further demonstrating its wide applicability and ease of adoption without the need for parameter tweaking. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either.",
"Data availability plays a critical role for the performance of deep learning systems. This challenge is especially acute within the medical image domain, particularly when pathologies are involved, due to two factors: 1) limited number of cases, and 2) large variations in location, scale, and appearance. In this work, we investigate whether augmenting a dataset with artificially generated lung nodules can improve the robustness of the progressive holistically nested network (P-HNN) model for pathological lung segmentation of CT scans. To achieve this goal, we develop a 3D generative adversarial network (GAN) that effectively learns lung nodule property distributions in 3D space. In order to embed the nodules within their background context, we condition the GAN based on a volume of interest whose central part containing the nodule has been erased. To further improve realism and blending with the background, we propose a novel multi-mask reconstruction loss. We train our method on over 1000 nodules from the LIDC dataset. Qualitative results demonstrate the effectiveness of our method compared to the state-of-art. We then use our GAN to generate simulated training images where nodules lie on the lung border, which are cases where the published P-HNN model struggles. Qualitative and quantitative results demonstrate that armed with these simulated images, the P-HNN model learns to better segment lung regions under these challenging situations. As a result, our system provides a promising means to help overcome the data paucity that commonly afflicts medical imaging.",
"Medical images with specific pathologies are scarce, but a large amount of data is usually required for a deep convolutional neural network (DCNN) to achieve good accuracy. We consider the problem of segmenting the left ventricular (LV) myocardium on late gadolinium enhancement (LGE) cardiovascular magnetic resonance (CMR) scans of which only some of the scans have scar tissue. We propose ScarGAN to simulate scar tissue on healthy myocardium using chained generative adversarial networks (GAN). Our novel approach factorizes the simulation process into 3 steps: 1) a mask generator to simulate the shape of the scar tissue; 2) a domain-specific heuristic to produce the initial simulated scar tissue from the simulated shape; 3) a refining generator to add details to the simulated scar tissue. Unlike other approaches that generate samples from scratch, we simulate scar tissue on normal scans resulting in highly realistic samples. We show that experienced radiologists are unable to distinguish between real and simulated scar tissue. Training a U-Net with additional scans with scar tissue simulated by ScarGAN increases the percentage of scar pixels correctly included in LV myocardium prediction from 75.9% to 80.5%."
]
} |
1904.13281 | 2943048304 | Infarcted brain tissue resulting from acute stroke readily shows up as hyperintense regions within diffusion-weighted magnetic resonance imaging (DWI). It has also been proposed that computed tomography perfusion (CTP) could alternatively be used to triage stroke patients, given improvements in speed and availability, as well as reduced cost. However, CTP has a lower signal to noise ratio compared to MR. In this work, we investigate whether a conditional mapping can be learned by a generative adversarial network to map CTP inputs to generated MR DWI that more clearly delineates hyperintense regions due to ischemic stroke. We detail the architectures of the generator and discriminator and describe the training process used to perform image-to-image translation from multi-modal CT perfusion maps to diffusion weighted MR outputs. We evaluate the results both qualitatively by visual comparison of generated MR to ground truth, as well as quantitatively by training fully convolutional neural networks that make use of generated MR data inputs to perform ischemic stroke lesion segmentation. Segmentation networks trained using generated CT-to-MR inputs result in at least some improvement on all metrics used for evaluation, compared with networks that only use CT perfusion input. | In addition to the works mentioned above, adversarial networks have also been used for improving lung segmentation in chest radiographs @cite_24 , correcting motion-related artifacts in cardiac MR images @cite_7 , and in registering medical images @cite_11 . The motivation for our work is to explore whether hyperintensitities in MR scans can be emulated from CTP inputs with CGANs to improve the performance of ischemic stroke lesion segmentation. | {
"cite_N": [
"@cite_24",
"@cite_7",
"@cite_11"
],
"mid": [
"2794103425",
"2891214351",
"2806695658"
],
"abstract": [
"Chest X-ray (CXR) is one of the most commonly prescribed medical imaging procedures, often with over 2–10x more scans than other imaging modalities. These voluminous CXR scans place significant workloads on radiologists and medical practitioners. Organ segmentation is a key step towards effective computer-aided detection on CXR. In this work, we propose Structure Correcting Adversarial Network (SCAN) to segment lung fields and the heart in CXR images. SCAN incorporates a critic network to impose on the convolutional segmentation network the structural regularities inherent in human physiology. Specifically, the critic network learns the higher order structures in the masks in order to discriminate between the ground truth organ annotations from the masks synthesized by the segmentation network. Through an adversarial process, the critic network guides the segmentation network to achieve more realistic segmentation that mimics the ground truth. Extensive evaluation shows that our method produces highly accurate and realistic segmentation. Using only very limited training data available, our model reaches human-level performance without relying on any pre-trained model. Our method surpasses the current state-of-the-art and generalizes well to CXR images from different patient populations and disease profiles.",
"Incorrect ECG gating of cardiac magnetic resonance (CMR) acquisitions can lead to artefacts, which hampers the accuracy of diagnostic imaging. Therefore, there is a need for robust reconstruction methods to ensure high image quality. In this paper, we propose a method to automatically correct motion-related artefacts in CMR acquisitions during reconstruction from k-space data. Our method is based on the Automap reconstruction method, which directly reconstructs high quality MR images from k-space using deep learning. Our main methodological contribution is the addition of an adversarial element to this architecture, in which the quality of image reconstruction (the generator) is increased by using a discriminator. We train the reconstruction network to automatically correct for motion-related artefacts using synthetically corrupted CMR k-space data and uncorrupted reconstructed images. Using 25000 images from the UK Biobank dataset we achieve good image quality in the presence of synthetic motion artefacts, but some structural information was lost. We quantitatively compare our method to a standard inverse Fourier reconstruction. In addition, we qualitatively evaluate the proposed technique using k-space data containing real motion artefacts.",
"Conventional approaches to image registration consist of time consuming iterative methods. Most current deep learning (DL) based registration methods extract deep features to use in an iterative setting. We propose an end-to-end DL method for registering multimodal images. Our approach uses generative adversarial networks (GANs) that eliminates the need for time consuming iterative methods, and directly generates the registered image with the deformation field. Appropriate constraints in the GAN cost function produce accurately registered images in less than a second. Experiments demonstrate their accuracy for multimodal retinal and cardiac MR image registration."
]
} |
1904.13196 | 2943464153 | Understanding why machine learning algorithms may fail is usually the task of the human expert that uses domain knowledge and contextual information to discover systematic shortcomings in either the data or the algorithm. In this paper, we propose a semantic referee, which is able to extract qualitative features of the errors emerging from deep machine learning frameworks and suggest corrections. The semantic referee relies on ontological reasoning about spatial knowledge in order to characterize errors in terms of their spatial relations with the environment. Using semantics, the reasoner interacts with the learning algorithm as a supervisor. In this paper, the proposed method of the interaction between a neural network classifier and a semantic referee shows how to improve the performance of semantic segmentation for satellite imagery data. | As discussed in the work done by , 2017, in neural-symbolic systems where the learning is based on a connectionist learning system, one way of interpreting the learning process is to explain the classification outputs using the concepts related to the classifier's decision @cite_34 . However, there is a limited body of work where symbolic techniques are used to explain the conclusions. The work presented by , 2016 introduces a learning system based on a Long-term Convolutional Network (LTCN) @cite_24 that provides explanations over the decisions of the classifier @cite_26 . An explanation is in the form of a justification text. In order to generate the text, the authors have proposed a loss function upon sampled concepts that, by enforcing global sentence constraints, helps the system to construct sentences based on discriminating features of the objects found in the scene. However, no specific symbolic representation was provided, and the features related to the objects are taken from the sentences that are already available for each image in the dataset (CUB dataset @cite_14 ). | {
"cite_N": [
"@cite_24",
"@cite_14",
"@cite_34",
"@cite_26"
],
"mid": [
"2508429489",
"",
"2769828938",
"2332488709"
],
"abstract": [
"Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent are effective for tasks involving sequences, visual and otherwise. We describe a class of recurrent convolutional architectures which is end-to-end trainable and suitable for large-scale visual understanding tasks, and demonstrate the value of these models for activity recognition, image captioning, and video description. In contrast to previous models which assume a fixed visual representation or perform simple temporal averaging for sequential processing, recurrent convolutional models are “doubly deep” in that they learn compositional representations in space and time. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Differentiable recurrent models are appealing in that they can directly map variable-length inputs (e.g., videos) to variable-length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent sequence models are directly connected to modern visual convolutional network models and can be jointly trained to learn temporal dynamics and convolutional perceptual representations. Our results show that such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined or optimized.",
"",
"Many current methods to interpret convolutional neural networks (CNNs) use visualization techniques and words to highlight concepts of the input seemingly relevant to a CNN's decision. The methods hypothesize that the recognition of these concepts are instrumental in the decision a CNN reaches, but the nature of this relationship has not been well explored. To address this gap, this paper examines the quality of a concept's recognition by a CNN and the degree to which the recognitions are associated with CNN decisions. The study considers a CNN trained for scene recognition over the ADE20k dataset. It uses a novel approach to find and score the strength of minimally distributed representations of input concepts (defined by objects in scene images) across late stage feature maps. Subsequent analysis finds evidence that concept recognition impacts decision making. Strong recognition of concepts frequently-occurring in few scenes are indicative of correct decisions, but recognizing concepts common to many scenes may mislead the network.",
"Clearly explaining a rationale for a classification decision to an end user can be as important as the decision itself. Existing approaches for deep visual recognition are generally opaque and do not output any justification text; contemporary vision-language models can describe image content but fail to take into account class-discriminative image aspects which justify visual predictions. We propose a new model that focuses on the discriminating properties of the visible object, jointly predicts a class label, and explains why the predicted label is appropriate for the image. Through a novel loss function based on sampling and reinforcement learning, our model learns to generate sentences that realize a global sentence property, such as class specificity. Our results on the CUB dataset show that our model is able to generate explanations which are not only consistent with an image but also more discriminative than descriptions produced by existing captioning methods."
]
} |
1904.13196 | 2943464153 | Understanding why machine learning algorithms may fail is usually the task of the human expert that uses domain knowledge and contextual information to discover systematic shortcomings in either the data or the algorithm. In this paper, we propose a semantic referee, which is able to extract qualitative features of the errors emerging from deep machine learning frameworks and suggest corrections. The semantic referee relies on ontological reasoning about spatial knowledge in order to characterize errors in terms of their spatial relations with the environment. Using semantics, the reasoner interacts with the learning algorithm as a supervisor. In this paper, the proposed method of the interaction between a neural network classifier and a semantic referee shows how to improve the performance of semantic segmentation for satellite imagery data. | With focus on the knowledge model, , 2016 proposed a system that explains the classifier's outputs based on the background knowledge @cite_37 . The key tool of the system, called DL-Learner, works in parallel with the classifier and accepts the same data as input. Using the Suggested Upper Merged Ontology (SUMO) http://www.adampease.org/OP as the symbolic knowledge model, the DL-Learner is also able to categorize the images by reasoning upon the objects together with the concepts defined in the ontology. The compatibility between the output of the DL-Learner and the classifier can be seen as a reliability support and at the same time as an interpretation of the classification process. | {
"cite_N": [
"@cite_37"
],
"mid": [
"2761788421"
],
"abstract": [
"The ever increasing prevalence of publicly available structured data on the World Wide Web enables new applications in a variety of domains. In this paper, we provide a conceptual approach that leverages such data in order to explain the input-output behavior of trained artificial neural networks. We apply existing Semantic Web technologies in order to provide an experimental proof of concept."
]
} |
1904.13196 | 2943464153 | Understanding why machine learning algorithms may fail is usually the task of the human expert that uses domain knowledge and contextual information to discover systematic shortcomings in either the data or the algorithm. In this paper, we propose a semantic referee, which is able to extract qualitative features of the errors emerging from deep machine learning frameworks and suggest corrections. The semantic referee relies on ontological reasoning about spatial knowledge in order to characterize errors in terms of their spatial relations with the environment. Using semantics, the reasoner interacts with the learning algorithm as a supervisor. In this paper, the proposed method of the interaction between a neural network classifier and a semantic referee shows how to improve the performance of semantic segmentation for satellite imagery data. | Our approach can also be compared with Explanation-based learning (EBL) @cite_23 approaches. EBL refers to a form of machine learning method that is able to learn by generalizing examples where the features of the examples are formalized as domain theory. In EBL, the explanations, which consist of the features of the observation, are directly considered and generalized by the learner, whereas in our semantic-based model, although the features of the misclassified regions are inferred from the ontology and sent back to the classifier, they are not directly applied to the classification output; rather, they are only treated as a new set of data that is sent through the learning process. | {
"cite_N": [
"@cite_23"
],
"mid": [
"2017626051"
],
"abstract": [
"This article outlines explanation-based learning (EBL) and its role in improving problem solving performance through experience. Unlike inductive systems, which learn by abstracting common properties from multiple examples, EBL systems explain why a particular example is an instance of a concept. The explanations are then converted into operational recognition rules. In essence, the EBL approach is analytical and knowledge-intensive, whereas inductive methods are empirical and knowledge-poor. This article focuses on extensions of the basic EBL method and their integration with the PRODIGY problem solving system. PRODIGY's EBL method is specifically designed to acquire search control rules that are effective in reducing total search time for complex task domains. Domain-specific search control rules are learned from successful problem solving decisions, costly failures, and unforeseen goal interactions. The ability to specify multiple learning strategies in a declarative manner enables EBL to serve as a general technique for performance improvement. PRODIGY's EBL method is analyzed, illustrated with several examples and performance results, and compared with other methods for integrating EBL and problem solving."
]
} |
1904.13030 | 2966992759 | Place recognition and loop-closure detection are main challenges in the localization, mapping and navigation tasks of self-driving vehicles. In this paper, we solve the loop-closure detection problem by incorporating the deep-learning based point cloud description method and the coarse-to-fine sequence matching strategy. More specifically, we propose a deep neural network to extract a global descriptor from the original large-scale 3D point cloud, then based on which, a typical place analysis approach is presented to investigate the feature space distribution of the global descriptors and select several super keyframes. Finally, a coarse-to-fine strategy, which includes a super keyframe based coarse matching stage and a local sequence matching stage, is presented to ensure the loop-closure detection accuracy and real-time performance simultaneously. Thanks to the sequence matching operation, the proposed approach obtains an improvement against the existing deep-learning based methods. Experiment results on a self-driving vehicle validate the effectiveness of the proposed loop-closure detection algorithm. | Handcrafted local features, such as Shape context @cite_21 and SHOT @cite_9 , first find a keypoint, divide neighboring points into different bins, and encode a histogram with the defined pattern of neighboring bins. Moreover, FPFH @cite_5 was presented for 2.5D scans; it is a point-wise histogram-based 3D feature descriptor. Global descriptors of the local point cloud were also presented. M2DP was proposed in @cite_13 , which projects the original point cloud to several planes and extracts a global representation. ESF @cite_8 implemented the concatenation of histograms generated from shape functions. Local features usually require normal vectors of keypoints and thus are not suitable for large-scale place recognition tasks in outdoor scenarios.
Global descriptors are usually tailored to specific tasks and therefore generalize poorly. | {
"cite_N": [
"@cite_8",
"@cite_9",
"@cite_21",
"@cite_5",
"@cite_13"
],
"mid": [
"2041469568",
"1972485825",
"2057175746",
"2160821342",
"2565998037"
],
"abstract": [
"This work addresses the problem of real-time 3D shape based object class recognition, its scaling to many categories and the reliable perception of categories. A novel shape descriptor for partial point clouds based on shape functions is presented, capable of training on synthetic data and classifying objects from a depth sensor in a single partial view in a fast and robust manner. The classification task is stated as a 3D retrieval task finding the nearest neighbors from synthetically generated views of CAD-models to the sensed point cloud with a Kinect-style depth sensor. The presented shape descriptor shows that the combination of angle, point-distance and area shape functions gives a significant boost in recognition rate against the baseline descriptor and outperforms the state-of-the-art descriptors in our experimental evaluation on a publicly available dataset of real-world objects in table scene contexts with up to 200 categories.",
"This paper presents a local 3D descriptor for surface matching dubbed SHOT. Our proposal stems from a taxonomy of existing methods which highlights two major approaches, referred to as Signatures and Histograms, inherently emphasizing descriptiveness and robustness respectively. We formulate a comprehensive proposal which encompasses a repeatable local reference frame as well as a 3D descriptor, the latter featuring a hybrid structure between Signatures and Histograms so as to aim at a more favorable balance between descriptive power and robustness. A quite peculiar trait of our method concerns seamless integration of multiple cues within the descriptor to improve distinctiveness, which is particularly relevant nowadays due to the increasing availability of affordable RGB-D sensors which can gather both depth and color information. A thorough experimental evaluation based on datasets acquired with different types of sensors, including a novel RGB-D dataset, vouches that SHOT outperforms state-of-the-art local descriptors in experiments addressing descriptor matching for object recognition, 3D reconstruction and shape retrieval.",
"We present a novel approach to measuring similarity between shapes and exploit it for object recognition. In our framework, the measurement of similarity is preceded by: (1) solving for correspondences between points on the two shapes; (2) using the correspondences to estimate an aligning transform. In order to solve the correspondence problem, we attach a descriptor, the shape context, to each point. The shape context at a reference point captures the distribution of the remaining points relative to it, thus offering a globally discriminative characterization. Corresponding points on two similar shapes will have similar shape contexts, enabling us to solve for correspondences as an optimal assignment problem. Given the point correspondences, we estimate the transformation that best aligns the two shapes; regularized thin-plate splines provide a flexible class of transformation maps for this purpose. The dissimilarity between the two shapes is computed as a sum of matching errors between corresponding points, together with a term measuring the magnitude of the aligning transform. We treat recognition in a nearest-neighbor classification framework as the problem of finding the stored prototype shape that is maximally similar to that in the image. Results are presented for silhouettes, trademarks, handwritten digits, and the COIL data set.",
"In our recent work [1], [2], we proposed Point Feature Histograms (PFH) as robust multi-dimensional features which describe the local geometry around a point p for 3D point cloud datasets. In this paper, we modify their mathematical expressions and perform a rigorous analysis on their robustness and complexity for the problem of 3D registration for overlapping point cloud views. More concretely, we present several optimizations that reduce their computation times drastically by either caching previously computed values or by revising their theoretical formulations. The latter results in a new type of local features, called Fast Point Feature Histograms (FPFH), which retain most of the discriminative power of the PFH. Moreover, we propose an algorithm for the online computation of FPFH features for realtime applications. To validate our results we demonstrate their efficiency for 3D registration and propose a new sample consensus based method for bringing two datasets into the convergence basin of a local non-linear optimizer: SAC-IA (SAmple Consensus Initial Alignment).",
"In this paper, we present a novel global descriptor M2DP for 3D point clouds, and apply it to the problem of loop closure detection. In M2DP, we project a 3D point cloud to multiple 2D planes and generate a density signature for points for each of the planes. We then use the left and right singular vectors of these signatures as the descriptor of the 3D point cloud. Our experimental results show that the proposed algorithm outperforms state-of-the-art global 3D descriptors in both accuracy and efficiency."
]
} |
1904.13030 | 2966992759 | Place recognition and loop-closure detection are main challenges in the localization, mapping and navigation tasks of self-driving vehicles. In this paper, we solve the loop-closure detection problem by incorporating the deep-learning based point cloud description method and the coarse-to-fine sequence matching strategy. More specifically, we propose a deep neural network to extract a global descriptor from the original large-scale 3D point cloud, then based on which, a typical place analysis approach is presented to investigate the feature space distribution of the global descriptors and select several super keyframes. Finally, a coarse-to-fine strategy, which includes a super keyframe based coarse matching stage and a local sequence matching stage, is presented to ensure the loop-closure detection accuracy and real-time performance simultaneously. Thanks to the sequence matching operation, the proposed approach obtains an improvement against the existing deep-learning based methods. Experiment results on a self-driving vehicle validate the effectiveness of the proposed loop-closure detection algorithm. | Taking into account the limitations of local and global descriptors, some researchers have also proposed utilizing planes or objects for place recognition tasks. A plane-based place recognition method was proposed in @cite_3 , where the covariance of the plane parameters is used for matching. The drawback of this approach is that it only applies to small indoor environments, and the plane-model assumption is not valid in some practical scenes. Recently, SegMatch @cite_11 presented a matching method based on segments. This is a high-level perception approach, but the points are required to be represented in global coordinates, and the assumption of sufficiently many static objects will not always hold in real applications. | {
"cite_N": [
"@cite_3",
"@cite_11"
],
"mid": [
"2187295369",
"2527172881"
],
"abstract": [
"Image registration, and more generally scene registration, needs to be solved in mobile robotics for a number of tasks including localization, mapping, object recognition, visual odometry and loop-closure. This paper presents a flexible strategy to register scenes based on its planar structure, which can be used with different sensors that acquire 3D data like LIDAR, time-of-flight cameras, RGB-D sensors and stereo vision. The proposed strategy is based on the segmentation of the planar surfaces from the scene, and its representation using a graph which stores the geometric relationships between neighbouring planar patches. Uncertainty information from the planar patches is exploited in a hierarchical fashion to improve both the robustness and the efficiency of registration. Quick registration is achieved in indoor structured scenarios, offering advantages like a compact representation, and flexibility to adapt to different environments and sensors. Our method is validated with different sensors: a hand-held RGB-D camera and an omnidirectional RGB-D sensor; and for different applications: from visual-range odometry to loop closure and SLAM. A geometric representation based on planar patches is proposed for scene registration. Photometric information is combined with the geometric description for robustness. A novel formulation encoding uncertainty information is proposed to improve accuracy. An implementation of this work is freely available (MRPT and PCL).",
"Place recognition in 3D data is a challenging task that has been commonly approached by adapting image-based solutions. Methods based on local features suffer from ambiguity and from robustness to environment changes while methods based on global features are viewpoint dependent. We propose SegMatch, a reliable place recognition algorithm based on the matching of 3D segments. Segments provide a good compromise between local and global descriptions, incorporating their strengths while reducing their individual drawbacks. SegMatch does not rely on assumptions of ‘perfect segmentation’, or on the existence of ‘objects’ in the environment, which allows for reliable execution on large scale, unstructured environments. We quantitatively demonstrate that SegMatch can achieve accurate localization at a frequency of 1Hz on the largest sequence of the KITTI odometry dataset. We furthermore show how this algorithm can reliably detect and close loops in real-time, during online operation. In addition, the source code for the SegMatch algorithm is made publicly available."
]
} |
1904.13030 | 2966992759 | Place recognition and loop-closure detection are main challenges in the localization, mapping and navigation tasks of self-driving vehicles. In this paper, we solve the loop-closure detection problem by incorporating the deep-learning based point cloud description method and the coarse-to-fine sequence matching strategy. More specifically, we propose a deep neural network to extract a global descriptor from the original large-scale 3D point cloud, then based on which, a typical place analysis approach is presented to investigate the feature space distribution of the global descriptors and select several super keyframes. Finally, a coarse-to-fine strategy, which includes a super keyframe based coarse matching stage and a local sequence matching stage, is presented to ensure the loop-closure detection accuracy and real-time performance simultaneously. Thanks to the sequence matching operation, the proposed approach obtains an improvement against the existing deep-learning based methods. Experiment results on a self-driving vehicle validate the effectiveness of the proposed loop-closure detection algorithm. | In order to solve the above problems, deep neural network was introduced for 3D point cloud feature learning and achieved state-of-the-art performance. Some work attempts to convert point cloud input to a regular 3D volume representation to alleviate the orderless problem of the point cloud, such as the volumetric CNNs @cite_20 and 3D ShapeNets @cite_15 . They are suitable for point cloud-based classification and recognition, respectively. On the other hand, different from volumetric representation, Multiview CNNs @cite_16 projects the 3D data into 2D images so that 2D CNN can be performed. Additionally, by achieving the permutation invariance in the network, PointNet @cite_7 makes it possible to learn features directly from the raw point cloud data. 
Although PointNet has achieved superior performance on small-scale shape classification and recognition tasks, it does not scale well to the large-scale place recognition problem. PointNetVLAD @cite_23 is the first work that directly applies 3D point clouds to large-scale place recognition, but this method does not consider local feature extraction adequately, and the network is memory-heavy when the feature dimension is large. | {
"cite_N": [
"@cite_7",
"@cite_23",
"@cite_15",
"@cite_16",
"@cite_20"
],
"mid": [
"2560609797",
"2963708168",
"1920022804",
"1644641054",
"2962731536"
],
"abstract": [
"Point cloud is an important type of geometric data structure. Due to its irregular format, most researchers transform such data to regular 3D voxel grids or collections of images. This, however, renders data unnecessarily voluminous and causes issues. In this paper, we design a novel type of neural network that directly consumes point clouds, which well respects the permutation invariance of points in the input. Our network, named PointNet, provides a unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing. Though simple, PointNet is highly efficient and effective. Empirically, it shows strong performance on par or even better than state of the art. Theoretically, we provide analysis towards understanding of what the network has learnt and why the network is robust with respect to input perturbation and corruption.",
"Unlike its image based counterpart, point cloud based retrieval for place recognition has remained as an unexplored and unsolved problem. This is largely due to the difficulty in extracting local feature descriptors from a point cloud that can subsequently be encoded into a global descriptor for the retrieval task. In this paper, we propose the PointNetVLAD where we leverage on the recent success of deep networks to solve point cloud based retrieval for place recognition. Specifically, our PointNetVLAD is a combination/modification of the existing PointNet and NetVLAD, which allows end-to-end training and inference to extract the global descriptor from a given 3D point cloud. Furthermore, we propose the \"lazy triplet and quadruplet\" loss functions that can achieve more discriminative and generalizable global descriptors to tackle the retrieval task. We create benchmark datasets for point cloud based retrieval for place recognition, and the experimental results on these datasets show the feasibility of our PointNetVLAD. Our code and datasets are publicly available on the project website.",
"3D shape is a crucial but heavily underutilized cue in today's computer vision systems, mostly due to the lack of a good generic shape representation. With the recent availability of inexpensive 2.5D depth sensors (e.g. Microsoft Kinect), it is becoming increasingly important to have a powerful 3D shape representation in the loop. Apart from category recognition, recovering full 3D shapes from view-based 2.5D depth maps is also a critical part of visual understanding. To this end, we propose to represent a geometric 3D shape as a probability distribution of binary variables on a 3D voxel grid, using a Convolutional Deep Belief Network. Our model, 3D ShapeNets, learns the distribution of complex 3D shapes across different object categories and arbitrary poses from raw CAD data, and discovers hierarchical compositional part representation automatically. It naturally supports joint object recognition and shape completion from 2.5D depth maps, and it enables active object recognition through view planning. To train our 3D deep learning model, we construct ModelNet - a large-scale 3D CAD model dataset. Extensive experiments show that our 3D deep representation enables significant performance improvement over the-state-of-the-arts in a variety of tasks.",
"A longstanding question in computer vision concerns the representation of 3D shapes for recognition: should 3D shapes be represented with descriptors operating on their native 3D formats, such as voxel grid or polygon mesh, or can they be effectively represented with view-based descriptors? We address this question in the context of learning to recognize 3D shapes from a collection of their rendered views on 2D images. We first present a standard CNN architecture trained to recognize the shapes' rendered views independently of each other, and show that a 3D shape can be recognized even from a single view at an accuracy far higher than using state-of-the-art 3D shape descriptors. Recognition rates further increase when multiple views of the shapes are provided. In addition, we present a novel CNN architecture that combines information from multiple views of a 3D shape into a single and compact shape descriptor offering even better recognition performance. The same architecture can be applied to accurately recognize human hand-drawn sketches of shapes. We conclude that a collection of 2D views can be highly informative for 3D shape recognition and is amenable to emerging CNN architectures and their derivatives.",
"3D shape models are becoming widely available and easier to capture, making available 3D information crucial for progress in object classification. Current state-of-the-art methods rely on CNNs to address this problem. Recently, we witness two types of CNNs being developed: CNNs based upon volumetric representations versus CNNs based upon multi-view representations. Empirical results from these two types of CNNs exhibit a large gap, indicating that existing volumetric CNN architectures and approaches are unable to fully exploit the power of 3D representations. In this paper, we aim to improve both volumetric CNNs and multi-view CNNs according to extensive analysis of existing approaches. To this end, we introduce two distinct network architectures of volumetric CNNs. In addition, we examine multi-view CNNs, where we introduce multiresolution filtering in 3D. Overall, we are able to outperform current state-of-the-art methods for both volumetric CNNs and multi-view CNNs. We provide extensive experiments designed to evaluate underlying design choices, thus providing a better understanding of the space of methods available for object classification on 3D data."
]
} |
1904.13148 | 2943383888 | In this paper, we analyze the inner product of weight vector and input vector in neural networks from the perspective of vector orthogonal decomposition and prove that the local direction gradient of weight vector decreases as the angle between them gets closer to 0 or @math . We propose the PR Product, a substitute for the inner product, which makes the local direction gradient of weight vector independent of the angle and consistently larger than the one in the conventional inner product while keeping the forward propagation identical. As the basic operation in neural networks, the PR Product can be applied to many existing deep learning modules, so we develop the PR Product version of the fully connected layer, convolutional layer, and LSTM layer. In static image classification, the experiments on CIFAR10 and CIFAR100 datasets demonstrate that the PR Product can robustly enhance the ability of various state-of-the-art classification networks. On the task of image captioning, even without any bells and whistles, our PR Product version of the captioning model can compete with or outperform the state-of-the-art models on the MS COCO dataset. | Several recent investigations have considered variants of standard Backpropagation. In particular, @cite_40 presents a surprisingly simple backpropagation mechanism that assigns blame by multiplying error signals with random weights, instead of the synaptic weights on each neuron, and further downstream. @cite_8 exhaustively considers many Hebbian learning algorithms. The straight-through estimator proposed in @cite_10 heuristically copies the gradient with respect to the stochastic output directly as an estimator of the gradient with respect to the sigmoid argument. @cite_7 proposes Linear Backprop that backpropagates error terms only linearly.
Different from these methods, our proposed PR Product changes the local gradients of weights during backpropagation while maintaining the identical forward propagation. | {
"cite_N": [
"@cite_40",
"@cite_10",
"@cite_7",
"@cite_8"
],
"mid": [
"2552737632",
"2242818861",
"2907408105",
"2482290902"
],
"abstract": [
"Multi-layered neural architectures that implement learning require elaborate mechanisms for symmetric backpropagation of errors that are biologically implausible. Here the authors propose a simple resolution to this problem of blame assignment that works even with feedback using random synaptic weights.",
"Stochastic neurons and hard non-linearities can be useful for a number of reasons in deep learning models, but in many cases they pose a challenging problem: how to estimate the gradient of a loss function with respect to the input of such stochastic or non-smooth neurons? I.e., can we \"back-propagate\" through these stochastic neurons? We examine this question, existing approaches, and compare four families of solutions, applicable in different settings. One of them is the minimum variance unbiased gradient estimator for stochastic binary neurons (a special case of the REINFORCE algorithm). A second approach, introduced here, decomposes the operation of a binary stochastic neuron into a stochastic binary part and a smooth differentiable part, which approximates the expected effect of the pure stochastic binary neuron to first order. A third approach involves the injection of additive or multiplicative noise in a computational graph that is otherwise differentiable. A fourth approach heuristically copies the gradient with respect to the stochastic output directly as an estimator of the gradient with respect to the sigmoid argument (we call this the straight-through estimator). To explore a context where these estimators are useful, we consider a small-scale version of conditional computation, where sparse stochastic units form a distributed representation of gaters that can turn off in combinatorially many ways large chunks of the computation performed in the rest of the neural network. In this case, it is important that the gating units produce an actual 0 most of the time. The resulting sparsity can potentially be exploited to greatly reduce the computational cost of large deep networks for which conditional computation would be useful.",
"",
"In a physical neural system, where storage and processing are intimately intertwined, the rules for adjusting the synaptic weights can only depend on variables that are available locally, such as the activity of the pre- and post-synaptic neurons, resulting in local learning rules. A systematic framework for studying the space of local learning rules is obtained by first specifying the nature of the local variables, and then the functional form that ties them together into each learning rule. Such a framework enables also the systematic discovery of new learning rules and exploration of relationships between learning rules and group symmetries. We study polynomial local learning rules stratified by their degree and analyze their behavior and capabilities in both linear and non-linear units and networks. Stacking local learning rules in deep feedforward networks leads to deep local learning. While deep local learning can learn interesting representations, it cannot learn complex input-output functions, even when targets are available for the top layer. Learning complex input-output functions requires local deep learning where target information is communicated to the deep layers through a backward learning channel. The nature of the communicated information about the targets and the structure of the learning channel partition the space of learning algorithms. For any learning algorithm, the capacity of the learning channel can be defined as the number of bits provided about the error gradient per weight, divided by the number of required operations per weight. We estimate the capacity associated with several learning algorithms and show that backpropagation outperforms them by simultaneously maximizing the information rate and minimizing the computational cost. This result is also shown to be true for recurrent networks, by unfolding them in time. 
The theory clarifies the concept of Hebbian learning, establishes the power and limitations of local learning rules, introduces the learning channel which enables a formal analysis of the optimality of backpropagation, and explains the sparsity of the space of learning rules discovered so far."
]
} |
1904.13148 | 2943383888 | In this paper, we analyze the inner product of weight vector and input vector in neural networks from the perspective of vector orthogonal decomposition and prove that the local direction gradient of weight vector decreases as the angle between them gets closer to 0 or @math . We propose the PR Product, a substitute for the inner product, which makes the local direction gradient of weight vector independent of the angle and consistently larger than the one in the conventional inner product while keeping the forward propagation identical. As the basic operation in neural networks, the PR Product can be applied to many existing deep learning modules, so we develop the PR Product version of the fully connected layer, convolutional layer, and LSTM layer. In static image classification, the experiments on CIFAR10 and CIFAR100 datasets demonstrate that the PR Product can robustly enhance the ability of various state-of-the-art classification networks. On the task of image captioning, even without any bells and whistles, our PR Product version of the captioning model can compete with or outperform the state-of-the-art models on the MS COCO dataset. | Deep convolutional neural networks @cite_3 @cite_12 @cite_5 @cite_11 @cite_18 @cite_36 @cite_37 have become the dominant machine learning approaches for image classification. To train very deep networks, shortcut connections have become an essential part of modern networks. For example, Highway Networks @cite_21 @cite_33 present shortcut connections with gating functions, while variants of ResNet @cite_5 @cite_11 @cite_18 @cite_36 use identity shortcut connections. DenseNet @cite_37 , a more recent network with several parallel shortcut connections, connects each layer to every other layer in a feed-forward fashion. | {
"cite_N": [
"@cite_18",
"@cite_37",
"@cite_33",
"@cite_36",
"@cite_21",
"@cite_3",
"@cite_5",
"@cite_12",
"@cite_11"
],
"mid": [
"2401231614",
"2963446712",
"1026270304",
"2549139847",
"",
"1598866093",
"2194775991",
"1686810756",
"2302255633"
],
"abstract": [
"Deep residual networks were shown to be able to scale up to thousands of layers and still have improving performance. However, each fraction of a percent of improved accuracy costs nearly doubling the number of layers, and so training very deep residual networks has a problem of diminishing feature reuse, which makes these networks very slow to train. To tackle these problems, in this paper we conduct a detailed experimental study on the architecture of ResNet blocks, based on which we propose a novel architecture where we decrease depth and increase width of residual networks. We call the resulting network structures wide residual networks (WRNs) and show that these are far superior over their commonly used thin and very deep counterparts. For example, we demonstrate that even a simple 16-layer-deep wide residual network outperforms in accuracy and efficiency all previous deep residual networks, including thousand-layer-deep networks, achieving new state-of-the-art results on CIFAR, SVHN, COCO, and significant improvements on ImageNet. Our code and models are available at this https URL",
"Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections—one between each layer and its subsequent layer—our network has L(L+1)/2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less memory and computation to achieve high performance. Code and pre-trained models are available at https://github.com/liuzhuang13/DenseNet.",
"Theoretical and empirical evidence indicates that the depth of neural networks is crucial for their success. However, training becomes more difficult as depth increases, and training of very deep networks remains an open problem. Here we introduce a new architecture designed to overcome this. Our so-called highway networks allow unimpeded information flow across many layers on information highways. They are inspired by Long Short-Term Memory recurrent networks and use adaptive gating units to regulate the information flow. Even with hundreds of layers, highway networks can be trained directly through simple gradient descent. This enables the study of extremely deep and efficient architectures.",
"We present a simple, highly modularized network architecture for image classification. Our network is constructed by repeating a building block that aggregates a set of transformations with the same topology. Our simple design results in a homogeneous, multi-branch architecture that has only a few hyper-parameters to set. This strategy exposes a new dimension, which we call cardinality (the size of the set of transformations), as an essential factor in addition to the dimensions of depth and width. On the ImageNet-1K dataset, we empirically show that even under the restricted condition of maintaining complexity, increasing cardinality is able to improve classification accuracy. Moreover, increasing cardinality is more effective than going deeper or wider when we increase the capacity. Our models, named ResNeXt, are the foundations of our entry to the ILSVRC 2016 classification task in which we secured 2nd place. We further investigate ResNeXt on an ImageNet-5K set and the COCO detection set, also showing better results than its ResNet counterpart. The code and models are publicly available online.",
"",
"I present a new way to parallelize the training of convolutional neural networks across multiple GPUs. The method scales significantly better than all alternatives when applied to modern convolutional neural networks.",
"Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.",
"In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.",
"Deep residual networks have emerged as a family of extremely deep architectures showing compelling accuracy and nice convergence behaviors. In this paper, we analyze the propagation formulations behind the residual building blocks, which suggest that the forward and backward signals can be directly propagated from one block to any other block, when using identity mappings as the skip connections and after-addition activation. A series of ablation experiments support the importance of these identity mappings. This motivates us to propose a new residual unit, which makes training easier and improves generalization. We report improved results using a 1001-layer ResNet on CIFAR-10 (4.62% error) and CIFAR-100, and a 200-layer ResNet on ImageNet. Code is available at: https://github.com/KaimingHe/resnet-1k-layers."
]
} |
1904.13178 | 2951738795 | Fine-grained Entity Recognition (FgER) is the task of detecting and classifying entity mentions to a large set of types spanning diverse domains such as biomedical, finance and sports. We observe that when the type set spans several domains, detection of entity mention becomes a limitation for supervised learning models. The primary reason being lack of dataset where entity boundaries are properly annotated while covering a large spectrum of entity types. Our work directly addresses this issue. We propose Heuristics Allied with Distant Supervision (HAnDS) framework to automatically construct a quality dataset suitable for the FgER task. HAnDS framework exploits the high interlink among Wikipedia and Freebase in a pipelined manner, reducing annotation errors introduced by naively using distant supervision approach. Using HAnDS framework, we create two datasets, one suitable for building FgER systems recognizing up to 118 entity types based on the FIGER type hierarchy and another for up to 1115 entity types based on the TypeNet hierarchy. Our extensive empirical experimentation warrants the quality of the generated datasets. Along with this, we also provide a manually annotated dataset for benchmarking FgER systems. | In the context of the CgER task, @cite_2 @cite_19 @cite_29 proposed an approach to create a training dataset for the CgER task using a combination of a bootstrapping process and heuristics. The bootstrapping was used to classify a Wikipedia article into five categories, namely , , , and . The bootstrapping requires initial manually annotated seed examples for each type, which limits its scalability to thousands of types. The heuristics were used to infer additional links in un-linked text; however, the proposed heuristics limit the scope of entity and non-entity mentions. For example, one of the heuristics used mostly restricts entity mentions to have at least one character capitalized.
This assumption is not true in the context of FgER, where entity mentions come from several diverse domains, including the biomedical domain. | {
"cite_N": [
"@cite_19",
"@cite_29",
"@cite_2"
],
"mid": [
"2073420389",
"2120844411",
"2252061787"
],
"abstract": [
"Named entity recognition (ner) for English typically involves one of three gold standards: muc, conll, or bbn, all created by costly manual annotation. Recent work has used Wikipedia to automatically create a massive corpus of named entity annotated text. We present the first comprehensive cross-corpus evaluation of ner. We identify the causes of poor cross-corpus performance and demonstrate ways of making them more compatible. Using our process, we develop a Wikipedia corpus which outperforms gold standard corpora on cross-corpus evaluation by up to 11%.",
"We automatically create enormous, free and multilingual silver-standard training annotations for named entity recognition (ner) by exploiting the text and structure of Wikipedia. Most ner systems rely on statistical models of annotated data to identify and classify names of people, locations and organisations in text. This dependence on expensive annotation is the knowledge bottleneck our work overcomes. We first classify each Wikipedia article into named entity (ne) types, training and evaluating on 7200 manually-labelled Wikipedia articles across nine languages. Our cross-lingual approach achieves up to 95% accuracy. We transform the links between articles into ne annotations by projecting the target article's classifications onto the anchor text. This approach yields reasonable annotations, but does not immediately compete with existing gold-standard data. By inferring additional links and heuristically tweaking the Wikipedia corpora, we better align our automatic annotations to gold standards. We annotate millions of words in nine languages, evaluating English, German, Spanish, Dutch and Russian Wikipedia-trained models against conll shared task data and other gold-standard corpora. Our approach outperforms other approaches to automatic ne annotation (Richman and Schone, 2008 [61], , 2008 [46]); competes with gold-standard training when tested on an evaluation corpus from a different source; and performs 10% better than newswire-trained models on manually-annotated Wikipedia text.",
"Statistical named entity recognisers require costly hand-labelled training data and, as a result, most existing corpora are small. We exploit Wikipedia to create a massive corpus of named entity annotated text. We transform Wikipedia’s links into named entity annotations by classifying the target articles into common entity types (e.g. person, organisation and location). Comparing to MUC, CONLL and BBN corpora, Wikipedia generally performs better than other cross-corpus train/test pairs."
]
} |
1904.13178 | 2951738795 | Fine-grained Entity Recognition (FgER) is the task of detecting and classifying entity mentions to a large set of types spanning diverse domains such as biomedical, finance and sports. We observe that when the type set spans several domains, detection of entity mention becomes a limitation for supervised learning models. The primary reason being lack of dataset where entity boundaries are properly annotated while covering a large spectrum of entity types. Our work directly addresses this issue. We propose Heuristics Allied with Distant Supervision (HAnDS) framework to automatically construct a quality dataset suitable for the FgER task. HAnDS framework exploits the high interlink among Wikipedia and Freebase in a pipelined manner, reducing annotation errors introduced by naively using distant supervision approach. Using HAnDS framework, we create two datasets, one suitable for building FgER systems recognizing up to 118 entity types based on the FIGER type hierarchy and another for up to 1115 entity types based on the TypeNet hierarchy. Our extensive empirical experimentation warrants the quality of the generated datasets. Along with this, we also provide a manually annotated dataset for benchmarking FgER systems. | An automatic dataset construction process involving heuristics and distant supervision will inevitably introduce noise, and its characteristics depend on the dataset construction task. In the context of the Fine-ET task @cite_7 @cite_27 , the dominant noise is false positives, whereas for the relation extraction task both false negative and false positive noise is present @cite_23 @cite_24 . | {
"cite_N": [
"@cite_24",
"@cite_27",
"@cite_23",
"@cite_7"
],
"mid": [
"2798507502",
"2132096166",
"2023352384",
"2283140239"
],
"abstract": [
"",
"Entity type tagging is the task of assigning category labels to each mention of an entity in a document. While standard systems focus on a small set of types, recent work (Ling and Weld, 2012) suggests that using a large fine-grained label set can lead to dramatic improvements in downstream tasks. In the absence of labeled training data, existing fine-grained tagging systems obtain examples automatically, using resolved entities and their types extracted from a knowledge base. However, since the appropriate type often depends on context (e.g. Washington could be tagged either as city or government), this procedure can result in spurious labels, leading to poorer generalization. We propose the task of context-dependent fine type tagging, where the set of acceptable labels for a mention is restricted to only those deducible from the local context (e.g. sentence or document). We introduce new resources for this task: 11,304 mentions annotated with their context-dependent fine types, and we provide baseline experimental results on this data.",
"We survey recent approaches to noise reduction in distant supervision learning for relation extraction. We group them according to the principles they are based on: at-least-one constraints, topic-based models, or pattern correlations. Besides describing them, we illustrate the fundamental differences and attempt to give an outlook to potentially fruitful further research. In addition, we identify related work in sentiment analysis which could profit from approaches to noise reduction.",
"Current systems of fine-grained entity typing use distant supervision in conjunction with existing knowledge bases to assign categories (type labels) to entity mentions. However, the type labels so obtained from knowledge bases are often noisy (i.e., incorrect for the entity mention's local context). We define a new task, Label Noise Reduction in Entity Typing (LNR), to be the automatic identification of correct type labels (type-paths) for training examples, given the set of candidate type labels obtained by distant supervision with a given type hierarchy. The unknown type labels for individual entity mentions and the semantic similarity between entity types pose unique challenges for solving the LNR task. We propose a general framework, called PLE, to jointly embed entity mentions, text features and entity types into the same low-dimensional space where, in that space, objects whose types are semantically close have similar representations. Then we estimate the type-path for each training example in a top-down manner using the learned embeddings. We formulate a global objective for learning the embeddings from text corpora and knowledge bases, which adopts a novel margin-based loss that is robust to noisy labels and faithfully models type correlation derived from knowledge bases. Our experiments on three public typing datasets demonstrate the effectiveness and robustness of PLE, with an average of 25% improvement in accuracy compared to next best method."
]
} |
1904.13270 | 2968347155 | Sentinel-2 multi-spectral images collected over periods of several months were used to estimate vegetation height for Gabon and Switzerland. A deep convolutional neural network (CNN) was trained to extract suitable spectral and textural features from reflectance images and to regress per-pixel vegetation height. In Gabon, reference heights for training and validation were derived from airborne LiDAR measurements. In Switzerland, reference heights were taken from an existing canopy height model derived via photogrammetric surface reconstruction. The resulting maps have a mean absolute error (MAE) of 1.7 m in Switzerland and 4.3 m in Gabon (a root mean square error (RMSE) of 3.4 m and 5.6 m, respectively), and correctly estimate vegetation heights up to >50 m. They also show good qualitative agreement with existing vegetation height maps. Our work demonstrates that, given a moderate amount of reference data (i.e., 2000 km² in Gabon and ≈5800 km² in Switzerland), high-resolution vegetation height maps with 10 m ground sampling distance (GSD) can be derived at country scale from Sentinel-2 imagery. | At first sight it may appear an ill-posed task to measure canopy height in monocular images. However, the spectral pixel signatures do provide information about proxies like shadowing, the type of vegetation and its density. Representative examples are for instance @cite_17 , where Landsat ETM data are used to classify rainforest into different ecological forest types, using discriminant analysis; and @cite_2 , where the fractional tree cover is retrieved from multi-temporal signatures of MODIS (respectively AVHRR) imagery, using a tree-based regression algorithm. | {
"cite_N": [
"@cite_2",
"@cite_17"
],
"mid": [
"2131448468",
"1988628953"
],
"abstract": [
"The continuous fields Moderate Resolution Imaging Spectroradiometer (MODIS) land cover products are 500-m sub-pixel representations of basic vegetation characteristics including tree, herbaceous and bare ground cover. Our previous approach to deriving continuous fields used a linear mixture model based on spectral endmembers of forest, grassland and bare ground training. We present here a new approach for estimating percent tree cover employing continuous training data over the whole range of tree cover. The continuous training data set is derived by aggregating high-resolution tree cover to coarse scales and is used with multi-temporal metrics based on a full year of coarse resolution satellite data. A regression tree algorithm is used to predict the dependent variable of tree cover based on signatures from the multitemporal metrics. The automated algorithm was tested globally using Advanced Very High Resolution Radiometer (AVHRR) data, as a full year of MODIS data has not yet been collected. A root mean square error (rmse) of 9.06% tree cover was found from the global training data set. Preliminary MODIS products are also presented, including a 250-m map of the lower 48 United States and 500-m maps of tree cover and leaf type for North America. Results show that the new approach used with MODIS data offers an improved characterization of land cover.",
"The spectral separability of thirteen tropical vegetation classes, including twelve forest types, was assessed. Although the thirteen classes could not be classified to a high accuracy the results of a set of supervised and unsupervised classifications revealed that three groups of classes were highly separable; a classification of the three groups by a discriminant analysis had an accuracy of 92·20 per cent. These three spectrally separable groups also corresponded closely to ecological groups identified from an ordination of data on tree species contained within a detailed ground data set. On the basis of the class separability analyses the three spectrally separable groups were mapped, with an accuracy of 94·84 per cent, from Landsat TM data by a maximum likelihood classification. It was apparent that some of the errors in this classification could be resolved through the use of contextual information and ancillary information, particularly on topography."
]
} |
1904.13270 | 2968347155 | Abstract Sentinel-2 multi-spectral images collected over periods of several months were used to estimate vegetation height for Gabon and Switzerland. A deep convolutional neural network (CNN) was trained to extract suitable spectral and textural features from reflectance images and to regress per-pixel vegetation height. In Gabon, reference heights for training and validation were derived from airborne LiDAR measurements. In Switzerland, reference heights were taken from an existing canopy height model derived via photogrammetric surface reconstruction. The resulting maps have a mean absolute error (MAE) of 1.7 m in Switzerland and 4.3 m in Gabon (a root mean square error (RMSE) of 3.4 m and 5.6 m, respectively), and correctly estimate vegetation heights up to >50 m. They also show good qualitative agreement with existing vegetation height maps. Our work demonstrates that, given a moderate amount of reference data (i.e., 2000 km2 in Gabon and ≈5800 km2 in Switzerland), high-resolution vegetation height maps with 10 m ground sampling distance (GSD) can be derived at country scale from Sentinel-2 imagery. | With a similar technique, @cite_7 produced a global canopy height map, using GLAS observations as ground truth. The input to their regression are multi-temporal optical signatures, obtained by transforming MODIS to the space, stacking observations from 9 consecutive months into a time series, and compressing those to 3-dimensional signatures with principal component analysis (PCA), independently per channel. Remarkably, that model is able to retrieve canopy heights up to 70m; while its mapping resolution is of course limited by the GSD of MODIS. | {
"cite_N": [
"@cite_7"
],
"mid": [
"1566532634"
],
"abstract": [
"[1] The value of lidar derives from its ability to map ecosystem vertical structure which can be used to estimate aboveground carbon storage. Spaceborne lidar sensors collect data along transects and gain value for the global change science community when combined with data sources that have complete horizontal coverage. Data sources and methods for this type of analysis require evaluation. In this work we use image segmentation of 500 m Moderate Resolution Imaging Spectroradiometer (MODIS) data to produce a global map of 4.4 million forest patches. Where a Geoscience Laser Altimeter System (GLAS) transect intersects a patch, its height is calculated from the GLAS observations directly. Regression analysis is then used to estimate the heights of those patches without GLAS observations. Regression goodness-of-fit statistics indicate moderately strong relationships for predicting the 90th percentile patch height, with a mean RMSE of 5.9 m and mean correlation (R2) of 0.67."
]
} |
1904.13270 | 2968347155 | Abstract Sentinel-2 multi-spectral images collected over periods of several months were used to estimate vegetation height for Gabon and Switzerland. A deep convolutional neural network (CNN) was trained to extract suitable spectral and textural features from reflectance images and to regress per-pixel vegetation height. In Gabon, reference heights for training and validation were derived from airborne LiDAR measurements. In Switzerland, reference heights were taken from an existing canopy height model derived via photogrammetric surface reconstruction. The resulting maps have a mean absolute error (MAE) of 1.7 m in Switzerland and 4.3 m in Gabon (a root mean square error (RMSE) of 3.4 m and 5.6 m, respectively), and correctly estimate vegetation heights up to >50 m. They also show good qualitative agreement with existing vegetation height maps. Our work demonstrates that, given a moderate amount of reference data (i.e., 2000 km2 in Gabon and ≈5800 km2 in Switzerland), high-resolution vegetation height maps with 10 m ground sampling distance (GSD) can be derived at country scale from Sentinel-2 imagery. | With a similar technique, @cite_7 produced a global canopy height map, using GLAS observations as ground truth. The inputs to their regression are multi-temporal optical signatures, obtained by transforming MODIS to the space, stacking observations from 9 consecutive months into a time series, and compressing those to 3-dimensional signatures with principal component analysis (PCA), independently per channel. Remarkably, that model is able to retrieve canopy heights up to 70 m, while its mapping resolution is of course limited by the GSD of MODIS. | {
"cite_N": [
"@cite_19"
],
"mid": [
"1981263865"
],
"abstract": [
"[1] Data from spaceborne light detection and ranging (lidar) opens the possibility to map forest vertical structure globally. We present a wall-to-wall, global map of canopy height at 1-km spatial resolution, using 2005 data from the Geoscience Laser Altimeter System (GLAS) aboard ICESat (Ice, Cloud, and land Elevation Satellite). A challenge in the use of GLAS data for global vegetation studies is the sparse coverage of lidar shots (mean = 121 data points degree2 for the L3C campaign). However, GLAS-derived canopy height (RH100) values were highly correlated with other, more spatially dense, ancillary variables available globally, which allowed us to model global RH100 from forest type, tree cover, elevation, and climatology maps. The difference between the model predicted RH100 and footprint level lidar-derived RH100 values showed that error increased in closed broadleaved forests such as the Amazon, underscoring the challenges in mapping tall (>40 m) canopies. The resulting map was validated with field measurements from 66 FLUXNET sites. The modeled RH100 versus in situ canopy height error (RMSE = 6.1 m, R2 = 0.5; or, RMSE = 4.4 m, R2 = 0.7 without 7 outliers) is conservative as it also includes measurement uncertainty and sub pixel variability within the 1-km pixels. Our results were compared against a recently published canopy height map. We found our values to be in general taller and more strongly correlated with FLUXNET data. Our map reveals a global latitudinal gradient in canopy height, increasing towards the equator, as well as coarse forest disturbance patterns."
]
} |
1904.13270 | 2968347155 | Abstract Sentinel-2 multi-spectral images collected over periods of several months were used to estimate vegetation height for Gabon and Switzerland. A deep convolutional neural network (CNN) was trained to extract suitable spectral and textural features from reflectance images and to regress per-pixel vegetation height. In Gabon, reference heights for training and validation were derived from airborne LiDAR measurements. In Switzerland, reference heights were taken from an existing canopy height model derived via photogrammetric surface reconstruction. The resulting maps have a mean absolute error (MAE) of 1.7 m in Switzerland and 4.3 m in Gabon (a root mean square error (RMSE) of 3.4 m and 5.6 m, respectively), and correctly estimate vegetation heights up to >50 m. They also show good qualitative agreement with existing vegetation height maps. Our work demonstrates that, given a moderate amount of reference data (i.e., 2000 km2 in Gabon and ≈5800 km2 in Switzerland), high-resolution vegetation height maps with 10 m ground sampling distance (GSD) can be derived at country scale from Sentinel-2 imagery. | Finally, we mention that similar regression approaches have been employed for the closely related task of (above-ground) biomass estimation from satellite images. For instance, @cite_3 map biomass in tropical Africa by tree-based regression from MODIS, using in-situ observations from forest inventories and logging as ground truth. @cite_18 regress forest biomass for Uganda from Landsat imagery and land cover maps, based on ground truth field plots. And @cite_9 derive indices from Landsat imagery and SRTM elevation data, and use them together with LiDAR ground truth to map carbon density in Colombia. | {
"cite_N": [
"@cite_9",
"@cite_18",
"@cite_3"
],
"mid": [
"2114750096",
"2059663113",
"2160454972"
],
"abstract": [
"Abstract. High-resolution mapping of tropical forest carbon stocks can assist forest management and improve implementation of large-scale carbon retention and enhancement programs. Previous high-resolution approaches have relied on field plot and or light detection and ranging (LiDAR) samples of aboveground carbon density, which are typically upscaled to larger geographic areas using stratification maps. Such efforts often rely on detailed vegetation maps to stratify the region for sampling, but existing tropical forest maps are often too coarse and field plots too sparse for high-resolution carbon assessments. We developed a top-down approach for high-resolution carbon mapping in a 16.5 million ha region (> 40 ) of the Colombian Amazon – a remote landscape seldom documented. We report on three advances for large-scale carbon mapping: (i) employing a universal approach to airborne LiDAR-calibration with limited field data; (ii) quantifying environmental controls over carbon densities; and (iii) developing stratification- and regression-based approaches for scaling up to regions outside of LiDAR coverage. We found that carbon stocks are predicted by a combination of satellite-derived elevation, fractional canopy cover and terrain ruggedness, allowing upscaling of the LiDAR samples to the full 16.5 million ha region. LiDAR-derived carbon maps have 14 uncertainty at 1 ha resolution, and the regional map based on stratification has 28 uncertainty in any given hectare. High-resolution approaches with quantifiable pixel-scale uncertainties will provide the most confidence for monitoring changes in tropical forest carbon stocks. Improved confidence will allow resource managers and decision makers to more rapidly and effectively implement actions that better conserve and utilize forests in tropical regions.",
"Abstract Aboveground woody biomass for circa-2000 is mapped at national scale in Uganda at 30-m spatial resolution on the basis of Landsat ETM + images, a National land cover dataset and field data using an object-oriented approach. A regression tree-based model (Random Forest) produces good results (cross-validated R² 0.81, RMSE 13 T ha) when trained with a sufficient number of field plots representative of the vegetation variability at national scale. The Random Forest model captures non-linear relationships between satellite data and biomass density, and is able to use categorical data (land cover) in the regression to improve the results. Biomass estimates were strongly correlated (r = 0.90 and r = 0.83) with independent LiDAR measurements. In this study, we demonstrate that in certain contexts Landsat data provide the capability to spatialize field biomass measurements and produce accurate and detailed estimates of biomass distribution at national scale. We also investigate limitations of this approach, which tend to provide conservative biomass estimates. Specific limitations are mainly related to saturation of the optical signal at high biomass density and cloud cover, which hinders the compilation of a radiometrically consistent multi-temporal dataset. As a result, a Landsat mosaic created for Uganda with images acquired in the dry season during 1999–2003 does not contain phenological information useful for discriminating some vegetation types, such as deciduous formations. The addition of land cover data increases the model performance because it provides information on vegetation phenology. We note that Landsat data present higher spatial and thematic resolution compared to land cover and allow detailed and spatially continuous biomass estimates to be mapped. Fusion of satellite and ancillary data may improve biomass predictions but, to avoid error propagation, accurate, detailed and up-to-date land cover or other ancillary data are necessary.",
"Observations from the moderate resolution imaging spectroradiometer (MODIS) were used in combination with a large data set of field measurements to map woody above-ground biomass (AGB) across tropical Africa. We generated a best-quality cloud-free mosaic of MODIS satellite reflectance observations for the period 2000‐2003 and used a regression tree model to predict AGB at 1 km resolution. Results based on a cross-validation approach show that the model explained 82 of the variance in AGB, with a root mean square error of 50.5 Mg ha −1 for a range of biomass between 0 and 454 Mg ha −1 . Analysis of lidar metrics from the Geoscience Laser Altimetry System (GLAS), which are sensitive to vegetation structure, indicate that the model successfully captured the regional distribution of AGB. The results showed a strong positive correlation (R 2 = 0.90) between the GLAS height metrics and predicted AGB."
]
} |
1904.13270 | 2968347155 | Abstract Sentinel-2 multi-spectral images collected over periods of several months were used to estimate vegetation height for Gabon and Switzerland. A deep convolutional neural network (CNN) was trained to extract suitable spectral and textural features from reflectance images and to regress per-pixel vegetation height. In Gabon, reference heights for training and validation were derived from airborne LiDAR measurements. In Switzerland, reference heights were taken from an existing canopy height model derived via photogrammetric surface reconstruction. The resulting maps have a mean absolute error (MAE) of 1.7 m in Switzerland and 4.3 m in Gabon (a root mean square error (RMSE) of 3.4 m and 5.6 m, respectively), and correctly estimate vegetation heights up to >50 m. They also show good qualitative agreement with existing vegetation height maps. Our work demonstrates that, given a moderate amount of reference data (i.e., 2000 km2 in Gabon and ≈5800 km2 in Switzerland), high-resolution vegetation height maps with 10 m ground sampling distance (GSD) can be derived at country scale from Sentinel-2 imagery. | Although neural networks are a generic machine learning technology that can, with the same machinery, solve both classification and regression tasks, relatively few works have used them to retrieve continuous biophysical variables or indicators. @cite_16 estimate crop yields from MODIS EVI and weather maps. @cite_5 retrieve sea ice concentration from Radarsat images. @cite_13 set up a single deep network with two output branches to jointly solve the classification of landcover and the retrieval of height-above-terrain (a.k.a. normalised digital surface model, nDSM) from airborne G-R-NIR images. @cite_14 map a poverty indicator from downsampled satellite images. 
To sidestep the problem that deep learning needs large amounts of reference data for training, they use transfer learning from the auxiliary task of predicting night-time lights, for which world-wide ground truth is available from NASA. Perhaps most closely related to our work, @cite_15 generate large-scale maps of tree density from Sentinel-2 images. To obtain enough training data, they employ another deep network that detects individual trees in satellite images, and downscale the resulting maps. | {
"cite_N": [
"@cite_14",
"@cite_5",
"@cite_15",
"@cite_16",
"@cite_13"
],
"mid": [
"2963578416",
"2319040573",
"2962686771",
"2014847057",
"2771852828"
],
"abstract": [
"The lack of reliable data in developing countries is a major obstacle to sustainable development, food security, and disaster relief. Poverty data, for example, is typically scarce, sparse in coverage, and labor-intensive to obtain. Remote sensing data such as high-resolution satellite imagery, on the other hand, is becoming increasingly available and inexpensive. Unfortunately, such data is highly unstructured and currently no techniques exist to automatically extract useful insights to inform policy decisions and help direct humanitarian efforts. We propose a novel machine learning approach to extract large-scale socioeconomic indicators from high-resolution satellite imagery. The main challenge is that training data is very scarce, making it difficult to apply modern techniques such as Convolutional Neural Networks (CNN). We therefore propose a transfer learning approach where nighttime light intensities are used as a data-rich proxy. We train a fully convolutional CNN model to predict nighttime lights from daytime imagery, simultaneously learning features that are useful for poverty prediction. The model learns filters identifying different terrains and man-made structures, including roads, buildings, and farmlands, without any supervision beyond nighttime lights. We demonstrate that these learned features are highly informative for poverty mapping, even approaching the predictive performance of survey data collected in the field.",
"High-resolution ice concentration maps are of great interest for ship navigation and ice hazard forecasting. In this case study, a convolutional neural network (CNN) has been used to estimate ice concentration using synthetic aperture radar (SAR) scenes captured during the melt season. These dual-pol RADARSAT-2 satellite images are used as input, and the ice concentration is the direct output from the CNN. With no feature extraction or segmentation postprocessing, the absolute mean errors of the generated ice concentration maps are less than 10 on average when compared with manual interpretation of the ice state by ice experts. The CNN is demonstrated to produce ice concentration maps with more detail than produced operationally. Reasonable ice concentration estimations are made in melt regions and in regions of low ice concentration.",
"We propose a new method to count objects of specific categories that are significantly smaller than the ground sampling distance of a satellite image. This task is hard due to the cluttered nature of scenes where different object categories occur. Target objects can be partially occluded, vary in appearance within the same class and look alike to different categories. Since traditional object detection is infeasible due to the small size of objects with respect to the pixel size, we cast object counting as a density estimation problem. To distinguish objects of different classes, our approach combines density estimation with semantic segmentation in an end-to-end learnable convolutional neural network (CNN). Experiments show that deep semantic density estimation can robustly count objects of various classes in cluttered scenes. Experiments also suggest that we need specific CNN architectures in remote sensing instead of blindly applying existing ones from computer vision.",
"This paper describes Illinois corn yield estimation using deep learning and another machine learning, SVR. Deep learning is a technique that has been attracting attention in recent years of machine learning, it is possible to implement using the Caffe. High accuracy estimation of crop yield is very important from the viewpoint of food security. However, since every country prepare data inhomogeneously, the implementation of the crop model in all regions is difficult. Deep learning is possible to extract important features for estimating the object from the input data, so it can be expected to reduce dependency of input data. The network model of two InnerProductLayer was the best algorithm in this study, achieving RMSE of 6.298 (standard value). This study highlights the advantages of deep learning for agricultural yield estimating.",
"We aim to jointly estimate height and semantically label monocular aerial images. These two tasks are traditionally addressed separately in remote sensing, despite their strong correlation. Therefore, a model learning both height and classes jointly seems advantageous and so, we propose a multitask Convolutional Neural Network (CNN) architecture with two losses: one performing semantic labeling, and another predicting normalized Digital Surface Model (nDSM) from the pixel values. Since the nDSM height information is used only in the second loss, there is no need to have a nDSM map at test time, and the model can estimate height automatically on new images. We test our proposed method on a set of sub-decimeter resolution images and show that our model equals the performances of two separate models, but at the cost of a single one."
]
} |
1904.13173 | 2942668923 | We expect an increase in frequency and severity of cyber-attacks that comes along with the need of efficient security countermeasures. The process of attributing a cyber-attack helps in constructing efficient and targeted mitigative and preventive security measures. In this work, we propose an argumentation-based reasoner (ABR) that helps the analyst during the analysis of forensic evidence and the attribution process. Given the evidence collected from the cyber-attack, our reasoner helps the analyst to identify who performed the attack and suggests the analyst where to focus further analyses by giving hints of the missing evidence, or further investigation paths to follow. ABR is the first automatic reasoner that analyzes and attributes cyber-attacks by using technical and social evidence, as well as incomplete and conflicting information. ABR was tested on realistic cyber-attacks cases. | Attribution of a cyber-attack is the process of ``determining the identity or location of an attacker or an attacker's intermediary'' @cite_28. Tracing the origin of a cyber-attack is difficult since attackers can easily forge or obscure source information and use anti-forensics tools, as they usually want to avoid being detected and identified @cite_12. Digital forensics plays an enormous role in attribution by collecting, examining, analyzing, and reporting the evidence @cite_35. Techniques originally created for protecting systems are also used to collect forensic data, e.g., traceback techniques @cite_28, honeypots @cite_34, and other deception techniques @cite_4 @cite_22. | {
"cite_N": [
"@cite_35",
"@cite_4",
"@cite_22",
"@cite_28",
"@cite_34",
"@cite_12"
],
"mid": [
"2611632074",
"2064462335",
"2476275724",
"1594990268",
"1510508184",
"2385910157"
],
"abstract": [
"This publication is intended to help organizations in investigating computer security incidents and troubleshooting some information technology (IT) operational problems by providing practical guidance on performing computer and network forensics. The guide presents forensics from an IT view, not a law enforcement view. Specifically, the publication describes the processes for performing effective forensics activities and provides advice regarding different data sources, including files, operating systems (OS), network traffic, and applications. The publication is not to be used as an all-inclusive step-by-step guide for executing a digital forensic investigation or construed as legal advice. Its purpose is to inform readers of various technologies and potential ways of using them in performing incident response or troubleshooting activities. Readers are advised to apply the recommended practices only after consulting with management and legal counsel for compliance concerning laws and regulations (i.e., local, state, Federal, and international) that pertain to their situation.",
"Deceptive techniques played a prominent role in many human conflicts throughout history. Digital conflicts are no different as the use of deception has found its way to computing since at least the 1980s. However, many computer defenses that use deception were ad-hoc attempts to incorporate deceptive elements. In this paper, we present a model that can be used to plan and integrate deception in computer security defenses. We present an overview of fundamental reasons why deception works and the essential principles involved in using such techniques. We investigate the unique advantages deception-based mechanisms bring to traditional computer security defenses. Furthermore, we show how our model can be used to incorporate deception in many part of computer systems and discuss how we can use such techniques effectively. A successful deception should present plausible alternative(s) to the truth and these should be designed to exploit specific adversaries' biases. We investigate these biases and discuss how can they be used by presenting a number of examples.",
"Our physical and digital worlds are converging at a rapid pace, putting a lot of our valuable information in digital formats. Currently, most computer systems’ predictable responses provide attackers with valuable information on how to infiltrate them. In this chapter, we discuss how the use of deception can play a prominent role in enhancing the security of current computer systems. We show how deceptive techniques have been used in many successful computer breaches. Phishing, social engineering, and drive-by-downloads are some prime examples. We discuss why deception has only been used haphazardly in computer security. Additionally, we discuss some of the unique advantages deception-based security mechanisms bring to computer security. Finally, we present a framework where deception can be planned and integrated into computer defenses.",
"Abstract : This paper summarizes various techniques to perform attribution of computer attackers who are exploiting data networks. Attribution can be defined as determining the identity or location of an attacker or an attacker's intermediary. In the public literature \"traceback\" or \"source tracking\" are often used as terms instead of \"attribution.\" This paper is intended for use by the U.S. Department of Defense (DoD) as it considers if it should improve its attribution capability, and if so, how to do so. However, since the focus of this paper is on technology, it may also be of use to many others such as law enforcement personnel. This is a technical report, and assumes that the reader understands the basics of network technology, especially the Transmission Control Protocol Internet Protocol (TCP IP) suite of protocols.",
"We present Shadow Honeypots, a novel hybrid architecture that combines the best features of honeypots and anomaly detection. At a high level, we use a variety of anomaly detectors to monitor all traffic to a protected network service. Traffic that is considered anomalous is processed by a \"shadow honeypot\" to determine the accuracy of the anomaly prediction. The shadow is an instance of the protected software that shares all internal state with a regular (\"production\") instance of the application, and is instrumented to detect potential attacks. Attacks against the shadow are caught, and any incurred state changes are discarded. Legitimate traffic that was misclassified will be validated by the shadow and will be handled correctly by the system transparently to the end user. The outcome of processing a request by the shadow is used to filter future attack instances and could be used to update the anomaly detector. Our architecture allows system designers to fine-tune systems for performance, since false positives will be filtered by the shadow. Contrary to regular honeypots, our architecture can be used both for server and client applications. We demonstrate the feasibility of our approach in a proof-of-concept implementation of the Shadow Honeypot architecture for the Apache web server and the Mozilla Firefox browser. We show that despite a considerable overhead in the instrumentation of the shadow honeypot (up to 20 for Apache), the overall impact on the system is diminished by the ability to minimize the rate of false-positives.",
"basic objective for the construction of internet was to provide a common platform to researchers and students to share information among them. It was not constructed to track and trace the behavior of cyber criminals. Due to this reason, it is difficult to locate the origin of cybercrime. The paper defines cybercrime with treat and vulnerability and suggests a number of different techniques which are often used by the cybercriminals to Spoofing the IP. The purpose of this paper is not only to detail about why the attribution of cybercrime is difficult but also to present TTL and step stone logic which are now being heavily used in cybercrime. Keywordsrcrime, Attribution, IP forging."
]
} |
1904.13173 | 2942668923 | We expect an increase in frequency and severity of cyber-attacks that comes along with the need of efficient security countermeasures. The process of attributing a cyber-attack helps in constructing efficient and targeted mitigative and preventive security measures. In this work, we propose an argumentation-based reasoner (ABR) that helps the analyst during the analysis of forensic evidence and the attribution process. Given the evidence collected from the cyber-attack, our reasoner helps the analyst to identify who performed the attack and suggests the analyst where to focus further analyses by giving hints of the missing evidence, or further investigation paths to follow. ABR is the first automatic reasoner that analyzes and attributes cyber-attacks by using technical and social evidence, as well as incomplete and conflicting information. ABR was tested on realistic cyber-attacks cases. | A theoretical social science model, called the Q Model, is proposed in @cite_14; it describes how analysts put together technical and social evidence during the attribution process. In this model, attribution is described as a process passing from one level of attribution to the next. The Q Model represents how forensic investigators perform the attribution process, with particular attention paid to social evidence: contextual knowledge, such as ongoing conflicts between countries or rivalry between corporations, is very useful in detecting motives of potential culprits. | {
"cite_N": [
"@cite_14"
],
"mid": [
"1993595248"
],
"abstract": [
"AbstractWho did it? Attribution is fundamental. Human lives and the security of the state may depend on ascribing agency to an agent. In the context of computer network intrusions, attribution is commonly seen as one of the most intractable technical problems, as either solvable or not solvable, and as dependent mainly on the available forensic evidence. But is it? Is this a productive understanding of attribution? — This article argues that attribution is what states make of it. To show how, we introduce the Q Model: designed to explain, guide, and improve the making of attribution. Matching an offender to an offence is an exercise in minimising uncertainty on three levels: tactically, attribution is an art as well as a science; operationally, attribution is a nuanced process not a black-and-white problem; and strategically, attribution is a function of what is at stake politically. Successful attribution requires a range of skills on all levels, careful management, time, leadership, stress-testing, prud..."
]
} |
1904.13173 | 2942668923 | We expect an increase in frequency and severity of cyber-attacks that comes along with the need of efficient security countermeasures. The process of attributing a cyber-attack helps in constructing efficient and targeted mitigative and preventive security measures. In this work, we propose an argumentation-based reasoner (ABR) that helps the analyst during the analysis of forensic evidence and the attribution process. Given the evidence collected from the cyber-attack, our reasoner helps the analyst to identify who performed the attack and suggests the analyst where to focus further analyses by giving hints of the missing evidence, or further investigation paths to follow. ABR is the first automatic reasoner that analyzes and attributes cyber-attacks by using technical and social evidence, as well as incomplete and conflicting information. ABR was tested on realistic cyber-attacks cases. | Priority rules are defined between rules that are in conflict with each other, i.e., rules that derive conflicting conclusions. Argumentation allows the analyst to handle non-monotonic reasoning @cite_15 in attribution, where the introduction of new evidence might change the result of the attribution (due to conflicting arguments) and the analyst's confidence in the results. Argumentation is particularly useful as it permits representing the reasoning rules in an intuitive and simple way. | {
"cite_N": [
"@cite_15"
],
"mid": [
"2104126268"
],
"abstract": [
"The purpose of this paper is to study the fundamental mechanism, humans use in argumentation, and to explore ways to implement this mechanism on computers. We do so by first developing a theory for argumentation whose central notion is the acceptability of arguments. Then we argue for the “correctness” or “appropriateness” of our theory with two strong arguments. The first one shows that most of the major approaches to nonmonotonic reasoning in AI and logic programming are special forms of our theory of argumentation. The second argument illustrates how our theory can be used to investigate the logical structure of many practical problems. This argument is based on a result showing that our theory captures naturally the solutions of the theory of n-person games and of the well-known stable marriage problem. By showing that argumentation can be viewed as a special form of logic programming with negation as failure, we introduce a general logic-programming-based method for generating meta-interpreters for argumentation systems, a method very much similar to the compiler-compiler idea in conventional programming. Keyword: Argumentation; Nonmonotonic reasoning; Logic programming; n-person games; The stable marriage problem"
]
} |
1904.13285 | 2943532091 | The quality of outputs produced by deep generative models for music has seen a dramatic improvement in the last few years. However, most deep learning models perform in "offline" mode, with few restrictions on the processing time. Integrating these types of models into a live structured performance poses a challenge because of the necessity to respect the beat and harmony. Further, these deep models tend to be agnostic to the style of a performer, which often renders them impractical for live performance. In this paper we propose a system which enables the integration of out-of-the-box generative models by leveraging the musician's creativity and expertise. | Closely related to our use of 'continuations' is The Continuator of @cite_8 , where the authors use Markov models to adapt to a user's style. In contrast to our work, however, the Continuator is agnostic to the underlying beat of a performance, which is essential to jazz improvisation. | {
"cite_N": [
"@cite_8"
],
"mid": [
"2169264582"
],
"abstract": [
"We propose a system, the Continuator, that bridges the gap between two classes of traditionally incompatible musical systems: (1) interactive musical systems, limited in their ability to generate stylistically consistent material, and (2) music imitation systems, which are fundamentally not interactive. Our purpose is to allow musicians to extend their technical ability with stylistically consistent, automatically learnt material. This goal requires the ability for the system to build operational representations of musical styles in a real time context. Our approach is based on a Markov model of musical styles augmented to account for musical issues such as management of rhythm, beat, harmony, and imprecision. The resulting system is able to learn and generate music in any style, either in standalone mode, as continuations of musician’s input, or as interactive improvisation back up. Lastly, the very design of the system makes possible new modes of musical collaborative playing. We describe the architectu..."
]
} |
1904.13285 | 2943532091 | The quality of outputs produced by deep generative models for music has seen a dramatic improvement in the last few years. However, most deep learning models perform in "offline" mode, with few restrictions on the processing time. Integrating these types of models into a live structured performance poses a challenge because of the necessity to respect the beat and harmony. Further, these deep models tend to be agnostic to the style of a performer, which often renders them impractical for live performance. In this paper we propose a system which enables the integration of out-of-the-box generative models by leveraging the musician's creativity and expertise. | @cite_2 propose a deep autoencoder model for encoding melodies into a latent space, combined with a deep decoder for converting points from that latent space into cohesive melodies. | {
"cite_N": [
"@cite_2"
],
"mid": [
"2792210438"
],
"abstract": [
"The Variational Autoencoder (VAE) has proven to be an effective model for producing semantically meaningful latent representations for natural data. However, it has thus far seen limited application to sequential data, and, as we demonstrate, existing recurrent VAE models have difficulty modeling sequences with long-term structure. To address this issue, we propose the use of a hierarchical decoder, which first outputs embeddings for subsequences of the input and then uses these embeddings to generate each subsequence independently. This structure encourages the model to utilize its latent code, thereby avoiding the \"posterior collapse\" problem which remains an issue for recurrent VAEs. We apply this architecture to modeling sequences of musical notes and find that it exhibits dramatically better sampling, interpolation, and reconstruction performance than a \"flat\" baseline model. An implementation of our \"MusicVAE\" is available online at this http URL"
]
} |
1904.12640 | 2941099186 | In this paper, we propose a pixel-wise method named TextCohesion for scene text detection, which splits a text instance into five key components: a Text Skeleton and four Directional Pixel Regions. These components are easier to handle than the entire text instance. A confidence scoring mechanism is designed to filter characters that are similar to text. Our method can integrate text contexts intensively when backgrounds are complex. Experiments on two challenging curved benchmarks demonstrate that TextCohesion outperforms state-of-the-art methods, achieving an F-measure of 84.6% on Total-Text and 86.3% on SCUT-CTW1500. | Detecting text in the wild has been widely studied in the past few years. Before the deep learning era, most detectors adopted Connected Components Analysis @cite_33 @cite_2 @cite_45 @cite_30 @cite_13 @cite_23 or Sliding Window-based classification @cite_22 @cite_14 @cite_11 @cite_15 . | {
"cite_N": [
"@cite_30",
"@cite_14",
"@cite_33",
"@cite_22",
"@cite_45",
"@cite_23",
"@cite_2",
"@cite_15",
"@cite_13",
"@cite_11"
],
"mid": [
"1972065312",
"2017278151",
"2142159465",
"2076014259",
"1972774730",
"2148214126",
"2128854450",
"1607307044",
"2166949156",
"1998042868"
],
"abstract": [
"With the increasing popularity of practical vision systems and smart phones, text detection in natural scenes becomes a critical yet challenging task. Most existing methods have focused on detecting horizontal or near-horizontal texts. In this paper, we propose a system which detects texts of arbitrary orientations in natural images. Our algorithm is equipped with a two-level classification scheme and two sets of features specially designed for capturing both the intrinsic characteristics of texts. To better evaluate our algorithm and compare it with other competing algorithms, we generate a new dataset, which includes various texts in diverse real-world scenarios; we also propose a protocol for performance evaluation. Experiments on benchmark datasets and the proposed dataset demonstrate that our algorithm compares favorably with the state-of-the-art algorithms when handling horizontal texts and achieves significantly enhanced performance on texts of arbitrary orientations in complex natural scenes.",
"Detecting text regions in natural scenes is an important part of computer vision. We propose a novel text detection algorithm that extracts six different classes of text features, and uses Modest AdaBoost with multi-scale sequential search. Experiments show that our algorithm can detect text regions with an f-measure of 0.70 on the ICDAR 2003 datasets, which include images with text of various fonts, sizes, colors, alphabets and scripts.",
"We present a novel image operator that seeks to find the value of stroke width for each image pixel, and demonstrate its use on the task of text detection in natural images. The suggested operator is local and data dependent, which makes it fast and robust enough to eliminate the need for multi-scale computation or scanning windows. Extensive testing shows that the suggested scheme outperforms the latest published algorithms. Its simplicity allows the algorithm to detect texts in many fonts and languages.",
"Reading text from photographs is a challenging problem that has received a significant amount of attention. Two key components of most systems are (i) text detection from images and (ii) character recognition, and many recent methods have been proposed to design better feature representations and models for both. In this paper, we apply methods recently developed in machine learning -- specifically, large-scale algorithms for learning the features automatically from unlabeled data -- and show that they allow us to construct highly effective classifiers for both detection and recognition to be used in a high accuracy end-to-end system.",
"Textual data is very important in a number of applications such as image database indexing and document understanding. The goal of automatic text location without character recognition capabilities is to extract image regions that contain only text. These regions can then be either fed to an optical character recognition module or highlighted for a user. Text location is a very difficult problem because the characters in text can vary in font, size, spacing, alignment, orientation, color and texture. Further, characters are often embedded in a complex background in the image. We propose a new text location algorithm that is suitable in a number of applications, including conversion of newspaper advertisements from paper documents to their electronic versions, World Wide Web search, color image indexing and video indexing. In many of these applications, it is not necessary to extract all the text, so we emphasize extracting important text with large size and high contrast. Our algorithm is very fast and has been shown to be successful in extracting important text in a large number of test images.",
"Text detection in natural scene images is an important prerequisite for many content-based image analysis tasks. In this paper, we propose an accurate and robust method for detecting texts in natural scene images. A fast and effective pruning algorithm is designed to extract Maximally Stable Extremal Regions (MSERs) as character candidates using the strategy of minimizing regularized variations. Character candidates are grouped into text candidates by the single-link clustering algorithm, where distance weights and clustering threshold are learned automatically by a novel self-training distance metric learning algorithm. The posterior probabilities of text candidates corresponding to non-text are estimated with a character classifier; text candidates with high non-text probabilities are eliminated and texts are identified with a text classifier. The proposed system is evaluated on the ICDAR 2011 Robust Reading Competition database; the f-measure is over 76%, much better than the state-of-the-art performance of 71%. Experiments on multilingual, street view, multi-orientation and even born-digital databases also demonstrate the effectiveness of the proposed method.",
"In this paper, we present a new approach for text localization in natural images, by discriminating text and non-text regions at three levels: pixel, component and text line levels. Firstly, a powerful low-level filter called the Stroke Feature Transform (SFT) is proposed, which extends the widely-used Stroke Width Transform (SWT) by incorporating color cues of text pixels, leading to significantly enhanced performance on inter-component separation and intra-component connection. Secondly, based on the output of SFT, we apply two classifiers, a text component classifier and a text-line classifier, sequentially to extract text regions, eliminating the heuristic procedures that are commonly used in previous approaches. The two classifiers are built upon two novel Text Covariance Descriptors (TCDs) that encode both the heuristic properties and the statistical characteristics of text strokes. Finally, text regions are located by simply thresholding the text-line confidence map. Our method was evaluated on two benchmark datasets: ICDAR 2005 and ICDAR 2011, and the corresponding F-measure values are 0.72 and 0.73, respectively, surpassing previous methods in accuracy by a large margin.",
"Full end-to-end text recognition in natural images is a challenging problem that has received much attention recently. Traditional systems in this area have relied on elaborate models incorporating carefully hand-engineered features or large amounts of prior knowledge. In this paper, we take a different route and combine the representational power of large, multilayer neural networks together with recent developments in unsupervised feature learning, which allows us to use a common framework to train highly-accurate text detector and character recognizer modules. Then, using only simple off-the-shelf methods, we integrate these two modules into a full end-to-end, lexicon-driven, scene text recognition system that achieves state-of-the-art performance on standard benchmarks, namely Street View Text and ICDAR 2003.",
"Text information in natural scene images serves as important clues for many image-based applications such as scene understanding, content-based image retrieval, assistive navigation, and automatic geocoding. However, locating text from a complex background with multiple colors is a challenging task. In this paper, we explore a new framework to detect text strings with arbitrary orientations in complex natural scene images. Our proposed framework of text string detection consists of two steps: 1) image partition to find text character candidates based on local gradient features and color uniformity of character components and 2) character candidate grouping to detect text strings based on joint structural features of text characters in each text string such as character size differences, distances between neighboring characters, and character alignment. By assuming that a text string has at least three characters, we propose two algorithms of text string detection: 1) adjacent character grouping method and 2) text line grouping method. The adjacent character grouping method calculates the sibling groups of each character candidate as string segments and then merges the intersecting sibling groups into text string. The text line grouping method performs Hough transform to fit text line among the centroids of text candidates. Each fitted text line describes the orientation of a potential text string. The detected text string is presented by a rectangle region covering all characters whose centroids are cascaded in its text line. To improve efficiency and accuracy, our algorithms are carried out in multi-scales. The proposed methods outperform the state-of-the-art results on the public Robust Reading Dataset, which contains text only in horizontal orientation. 
Furthermore, the effectiveness of our methods to detect text strings with arbitrary orientations is evaluated on the Oriented Scene Text Dataset collected by ourselves containing text strings in nonhorizontal orientations.",
"This paper focuses on the problem of word detection and recognition in natural images. The problem is significantly more challenging than reading text in scanned documents, and has only recently gained attention from the computer vision community. Sub-components of the problem, such as text detection and cropped image word recognition, have been studied in isolation [7, 4, 20]. However, what is unclear is how these recent approaches contribute to solving the end-to-end problem of word recognition. We fill this gap by constructing and evaluating two systems. The first, representing the de facto state-of-the-art, is a two stage pipeline consisting of text detection followed by a leading OCR engine. The second is a system rooted in generic object recognition, an extension of our previous work in [20]. We show that the latter approach achieves superior performance. While scene text recognition has generally been treated with highly domain-specific methods, our results demonstrate the suitability of applying generic computer vision methods. Adopting this approach opens the door for real world scene text recognition to benefit from the rapid advances that have been taking place in object recognition."
]
} |
1904.12724 | 2941110390 | Convolutional neural networks (CNNs) deliver exceptional results for computer vision, including medical image analysis. With the growing number of available architectures, picking one over another is far from obvious. Existing art suggests that, when performing transfer learning, the performance of CNN architectures on ImageNet correlates strongly with their performance on target tasks. We evaluate that claim for melanoma classification, over 9 CNN architectures, in 5 sets of splits created on the ISIC Challenge 2017 dataset, and 3 repeated measures, resulting in 135 models. The correlations we found were, to begin with, much smaller than those reported by existing art, and disappeared altogether when we considered only the top-performing networks: uncontrolled nuisances (i.e., splits and randomness) overcome any of the analyzed factors. Whenever possible, the best approach for melanoma classification is still to create ensembles of multiple models. We compared two choices for selecting which models to ensemble: picking them at random (among a pool of high-quality ones) vs. using the validation set to determine which ones to pick first. For small ensembles, we found a slight advantage for the second approach but found that random choice was also competitive. Although our aim in this paper was not to maximize performance, we easily reached AUCs comparable to the first place on the ISIC Challenge 2017. | In this section, we briefly review the literature on transfer learning and model design in the context of skin lesion classification. We focus on convolutional neural networks, which are the state of the art in the field @cite_9 . For more information, the reader may refer to recent works on deep learning for skin lesion analysis @cite_19 @cite_3 and for medical images in general @cite_30 . | {
"cite_N": [
"@cite_30",
"@cite_19",
"@cite_9",
"@cite_3"
],
"mid": [
"2592929672",
"2338708481",
"2911818805",
""
],
"abstract": [
"Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks. Concise overviews are provided of studies per application area: neuro, retinal, pulmonary, digital pathology, breast, cardiac, abdominal, musculoskeletal. We end with a summary of the current state-of-the-art, a critical discussion of open challenges and directions for future research.",
"In this paper we survey, analyze and criticize current art on automated melanoma screening, reimplementing a baseline technique, and proposing two novel ones. Melanoma, although highly curable when detected early, ends as one of the most dangerous types of cancer, due to delayed diagnosis and treatment. Its incidence is soaring, much faster than the number of trained professionals able to diagnose it. Automated screening appears as an alternative to make the most of those professionals, focusing their time on the patients at risk while safely discharging the other patients. However, the potential of automated melanoma diagnosis is currently unfulfilled, due to the emphasis of current literature on outdated computer vision models. Even more problematic is the irreproducibility of current art. We show how streamlined pipelines based upon current Computer Vision outperform conventional models — a model based on an advanced bags of words reaches an AUC of 84.6%, and a model based on deep neural networks reaches 89.3%, while the baseline (a classical bag of words) stays at 81.2%. We also initiate a dialog to improve reproducibility in our community.",
"Dermoscopy is a non-invasive skin imaging technique that permits visualization of features of pigmented melanocytic neoplasms that are not discernable by examination with the naked eye. While studies on the automated analysis of dermoscopy images date back to the late 1990s, because of various factors (lack of publicly available datasets, open-source software, computational power, etc.), the field progressed rather slowly in its first two decades. With the release of a large public dataset by the International Skin Imaging Collaboration in 2016, development of open-source software for convolutional neural networks, and the availability of inexpensive graphics processing units, dermoscopy image analysis has recently become a very active research field. In this paper, we present a brief overview of this exciting subfield of medical image analysis, primarily focusing on three aspects of it, namely, segmentation, feature extraction, and classification. We then provide future directions for researchers.",
""
]
} |
1904.12724 | 2941110390 | Convolutional neural networks (CNNs) deliver exceptional results for computer vision, including medical image analysis. With the growing number of available architectures, picking one over another is far from obvious. Existing art suggests that, when performing transfer learning, the performance of CNN architectures on ImageNet correlates strongly with their performance on target tasks. We evaluate that claim for melanoma classification, over 9 CNN architectures, in 5 sets of splits created on the ISIC Challenge 2017 dataset, and 3 repeated measures, resulting in 135 models. The correlations we found were, to begin with, much smaller than those reported by existing art, and disappeared altogether when we considered only the top-performing networks: uncontrolled nuisances (i.e., splits and randomness) overcome any of the analyzed factors. Whenever possible, the best approach for melanoma classification is still to create ensembles of multiple models. We compared two choices for selecting which models to ensemble: picking them at random (among a pool of high-quality ones) vs. using the validation set to determine which ones to pick first. For small ensembles, we found a slight advantage for the second approach but found that random choice was also competitive. Although our aim in this paper was not to maximize performance, we easily reached AUCs comparable to the first place on the ISIC Challenge 2017. | Most CNN architectures were primarily created for ImageNet @cite_23 --- a very large dataset, with 1.3 million images and 1000 categories. Due to ImageNet's diversity, those networks learn features that generalize well for other tasks @cite_31 . That fact was established by the seminal study of Yosinski et al. @cite_2 , which quantified the large effect of transfer learning on accuracies. | {
"cite_N": [
"@cite_31",
"@cite_23",
"@cite_2"
],
"mid": [
"2510153535",
"2108598243",
"2149933564"
],
"abstract": [
"The tremendous success of ImageNet-trained deep features on a wide range of transfer tasks begs the question: what are the properties of the ImageNet dataset that are critical for learning good, general-purpose features? This work provides an empirical investigation of various facets of this question: Is more pre-training data always better? How does feature quality depend on the number of training examples per class? Does adding more object classes improve performance? For the same data budget, how should the data be split into classes? Is fine-grained recognition necessary for learning good features? Given the same number of training classes, is it better to have coarse classes or fine-grained classes? Which is better: more classes or more examples per class? To answer these and related questions, we pre-trained CNN features on various subsets of the ImageNet dataset and evaluated transfer performance on PASCAL detection, PASCAL action classification, and SUN scene classification tasks. Our overall findings suggest that most changes in the choice of pre-training data long thought to be critical do not significantly affect transfer performance.",
"The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond.",
"Many deep neural networks trained on natural images exhibit a curious phenomenon in common: on the first layer they learn features similar to Gabor filters and color blobs. Such first-layer features appear not to be specific to a particular dataset or task, but general in that they are applicable to many datasets and tasks. Features must eventually transition from general to specific by the last layer of the network, but this transition has not been studied extensively. In this paper we experimentally quantify the generality versus specificity of neurons in each layer of a deep convolutional neural network and report a few surprising results. Transferability is negatively affected by two distinct issues: (1) the specialization of higher layer neurons to their original task at the expense of performance on the target task, which was expected, and (2) optimization difficulties related to splitting networks between co-adapted neurons, which was not expected. In an example network trained on ImageNet, we demonstrate that either of these two issues may dominate, depending on whether features are transferred from the bottom, middle, or top of the network. We also document that the transferability of features decreases as the distance between the base task and target task increases, but that transferring features even from distant tasks can be better than using random features. A final surprising result is that initializing a network with transferred features from almost any number of layers can produce a boost to generalization that lingers even after fine-tuning to the target dataset."
]
} |
1904.12724 | 2941110390 | Convolutional neural networks (CNNs) deliver exceptional results for computer vision, including medical image analysis. With the growing number of available architectures, picking one over another is far from obvious. Existing art suggests that, when performing transfer learning, the performance of CNN architectures on ImageNet correlates strongly with their performance on target tasks. We evaluate that claim for melanoma classification, over 9 CNN architectures, in 5 sets of splits created on the ISIC Challenge 2017 dataset, and 3 repeated measures, resulting in 135 models. The correlations we found were, to begin with, much smaller than those reported by existing art, and disappeared altogether when we considered only the top-performing networks: uncontrolled nuisances (i.e., splits and randomness) overcome any of the analyzed factors. Whenever possible, the best approach for melanoma classification is still to create ensembles of multiple models. We compared two choices for selecting which models to ensemble: picking them at random (among a pool of high-quality ones) vs. using the validation set to determine which ones to pick first. For small ensembles, we found a slight advantage for the second approach but found that random choice was also competitive. Although our aim in this paper was not to maximize performance, we easily reached AUCs comparable to the first place on the ISIC Challenge 2017. | More recently, Kornblith et al. @cite_38 confirmed those findings with an even stronger result, which established a direct correlation between accuracies on ImageNet and on the target tasks. They evaluated 16 CNN architectures on 12 target tasks and found strong correlations between Top-1 accuracies on ImageNet and the accuracies on target tasks. The evaluated architectures comprised 5 variations of Inception, 3 of ResNet, 3 of DenseNet, 3 of MobileNet, and 2 of NASNet.
The tasks comprised general-purpose computer vision tasks (CIFAR, Caltech, SUN397, Pascal VOC), and more specialized --- but still aimed at natural photos --- tasks (Birdsnap, Food-101, Describable Textures, Oxford Flowers, Oxford Pets, Stanford Cars). They found very strong correlations for both fine-tuned ( @math , @math , p-value @math ) and non-fine-tuned ( @math , @math , p-value @math ) models. Curiously, the performance of architectures (initialized at random) also showed some correlation ( @math , @math , p-value @math ). That shows that the correlation is due not only to the learned weights but also --- although to a lesser degree --- to the architecture design. | {
"cite_N": [
"@cite_38"
],
"mid": [
"2804935296"
],
"abstract": [
"Transfer learning is a cornerstone of computer vision, yet little work has been done to evaluate the relationship between architecture and transfer. An implicit hypothesis in modern computer vision research is that models that perform better on ImageNet necessarily perform better on other vision tasks. However, this hypothesis has never been systematically tested. Here, we compare the performance of 16 classification networks on 12 image classification datasets. We find that, when networks are used as fixed feature extractors or fine-tuned, there is a strong correlation between ImageNet accuracy and transfer accuracy ( @math and @math , respectively). In the former setting, we find that this relationship is very sensitive to the way in which networks are trained on ImageNet; many common forms of regularization slightly improve ImageNet accuracy but yield penultimate layer features that are much worse for transfer learning. Additionally, we find that, on two small fine-grained image classification datasets, pretraining on ImageNet provides minimal benefits, indicating the learned features from ImageNet do not transfer well to fine-grained tasks. Together, our results show that ImageNet architectures generalize well across datasets, but ImageNet features are less general than previously suggested."
]
} |
1904.12724 | 2941110390 | Convolutional neural networks (CNNs) deliver exceptional results for computer vision, including medical image analysis. With the growing number of available architectures, picking one over another is far from obvious. Existing art suggests that, when performing transfer learning, the performance of CNN architectures on ImageNet correlates strongly with their performance on target tasks. We evaluate that claim for melanoma classification, over 9 CNN architectures, in 5 sets of splits created on the ISIC Challenge 2017 dataset, and 3 repeated measures, resulting in 135 models. The correlations we found were, to begin with, much smaller than those reported by existing art, and disappeared altogether when we considered only the top-performing networks: uncontrolled nuisances (i.e., splits and randomness) overcome any of the analyzed factors. Whenever possible, the best approach for melanoma classification is still to create ensembles of multiple models. We compared two choices for selecting which models to ensemble: picking them at random (among a pool of high-quality ones) vs. using the validation set to determine which ones to pick first. For small ensembles, we found a slight advantage for the second approach but found that random choice was also competitive. Although our aim in this paper was not to maximize performance, we easily reached AUCs comparable to the first place on the ISIC Challenge 2017. | None of Kornblith et al.'s tasks are medical tasks. Studies evaluating the importance of transfer learning for medical applications are few, and for skin lesion analysis even fewer. Menegola et al. @cite_36 compared several schemes for transfer learning on melanoma classification and found that fine-tuning a network pre-trained on ImageNet gives better results than using a network pre-trained on another medical task (diabetic retinopathy).
Performing a double transfer (ImageNet @math retinopathy @math melanoma) did not improve the results, compared with transferring from ImageNet alone. Using an exhaustive factorial design with 7 factors (network architecture, training dataset, input resolution, training augmentation, training length, test augmentation, and transfer learning) over 5 test datasets, for a total of 1280 experiments, Valle et al. @cite_15 showed that the use of transfer learning is, by far, the most critical factor. In a factorial Analysis of Variance, it explains 14.7% of the variation in performance. In any event, transfer learning and fine-tuning are heavily used for skin lesion classification. All top-ranked submissions for ISIC Challenges 2016 @cite_22 , 2017 @cite_33 @cite_8 @cite_24 @cite_7 and 2018 @cite_20 @cite_4 @cite_28 @cite_5 @cite_37 used CNNs pre-trained on ImageNet. | {
"cite_N": [
"@cite_37",
"@cite_4",
"@cite_22",
"@cite_33",
"@cite_8",
"@cite_36",
"@cite_7",
"@cite_28",
"@cite_24",
"@cite_5",
"@cite_15",
"@cite_20"
],
"mid": [
"",
"",
"2564782580",
"2602631755",
"",
"2949296127",
"",
"",
"",
"",
"2765414121",
"2886537398"
],
"abstract": [
"",
"",
"Automated melanoma recognition in dermoscopy images is a very challenging task due to the low contrast of skin lesions, the huge intraclass variation of melanomas, the high degree of visual similarity between melanoma and non-melanoma lesions, and the existence of many artifacts in the image. In order to meet these challenges, we propose a novel method for melanoma recognition by leveraging very deep convolutional neural networks (CNNs). Compared with existing methods employing either low-level hand-crafted features or CNNs with shallower architectures, our substantially deeper networks (more than 50 layers) can acquire richer and more discriminative features for more accurate recognition. To take full advantage of very deep networks, we propose a set of schemes to ensure effective training and learning under limited training data. First, we apply the residual learning to cope with the degradation and overfitting problems when a network goes deeper. This technique can ensure that our networks benefit from the performance gains achieved by increasing network depth. Then, we construct a fully convolutional residual network (FCRN) for accurate skin lesion segmentation, and further enhance its capability by incorporating a multi-scale contextual information integration scheme. Finally, we seamlessly integrate the proposed FCRN (for segmentation) and other very deep residual networks (for classification) to form a two-stage framework. This framework enables the classification network to extract more representative and specific features based on segmented results instead of the whole dermoscopy images, further alleviating the insufficiency of training data. The proposed framework is extensively evaluated on ISBI 2016 Skin Lesion Analysis Towards Melanoma Detection Challenge dataset. 
Experimental results demonstrate the significant performance gains of the proposed framework, ranking the first in classification and the second in segmentation among 25 teams and 28 teams, respectively. This study corroborates that very deep CNNs with effective training mechanisms can be employed to solve complicated medical image analysis tasks, even with limited training data.",
"This extended abstract describes the participation of RECOD Titans in parts 1 and 3 of the ISIC Challenge 2017 \"Skin Lesion Analysis Towards Melanoma Detection\" (ISBI 2017). Although our team has a long experience with melanoma classification, the ISIC Challenge 2017 was the very first time we worked on skin-lesion segmentation. For part 1 (segmentation), our final submission used four of our models: two trained with all 2000 samples, without a validation split, for 250 and for 500 epochs respectively; and other two trained and validated with two different 1600/400 splits, for 220 epochs. Those four models, individually, achieved between 0.780 and 0.783 official validation scores. Our final submission, which averaged the output of those four models, achieved a score of 0.793. For part 3 (classification), the submitted test run as well as our last official validation run were the result from a meta-model that assembled seven base deep-learning models: three based on Inception-V4 trained on our largest dataset; three based on Inception trained on our smallest dataset; and one based on ResNet-101 trained on our smaller dataset. The results of those component models were stacked in a meta-learning layer based on an SVM trained on the validation set of our largest dataset.",
"",
"Knowledge transfer impacts the performance of deep learning — the state of the art for image classification tasks, including automated melanoma screening. Deep learning's greed for large amounts of training data poses a challenge for medical tasks, which we can alleviate by recycling knowledge from models trained on different tasks, in a scheme called transfer learning. Although much of the best art on automated melanoma screening employs some form of transfer learning, a systematic evaluation was missing. Here we investigate the presence of transfer, from which task the transfer is sourced, and the application of fine tuning (i.e., retraining of the deep learning model after transfer). We also test the impact of picking deeper (and more expensive) models. Our results favor deeper models, pretrained over ImageNet, with fine-tuning, reaching an AUC of 80.7% and 84.5% for the two skin-lesion datasets evaluated.",
"",
"",
"",
"",
"State of the art on melanoma screening evolved rapidly in the last two years, with the adoption of deep learning. Those models, however, pose challenges of their own, as they are expensive to train and difficult to parameterize. Objective: We investigate the methodological issues for designing and evaluating deep learning models for melanoma screening, by exploring nine choices often faced to design deep networks: model architecture, training dataset, image resolution, type of data augmentation, input normalization, use of segmentation, duration of training, additional use of SVM, and test data augmentation. Methods: We perform a two-level full factorial experiment, for five different test datasets, resulting in 2560 exhaustive trials, which we analyze using a multi-way ANOVA. Results: The main finding is that the size of training data has a disproportionate influence, explaining almost half the variation in performance. Of the other factors, test data augmentation and input resolution are the most helpful. Deeper models, when combined with extra data, also help. We show that the costly full factorial design, or the unreliable sequential optimization, are not the only options: ensembles of models provide reliable results with limited resources. Conclusions and Significance: To move research forward on automated melanoma screening, we need to curate larger shared datasets. Optimizing hyperparameters and measuring performance on the same dataset is common, but leads to overoptimistic results. Ensembles of models are a cost-effective alternative to the expensive full-factorial and to the unstable sequential designs.",
"In this report, we are presenting our automated prediction system for disease classification within dermoscopic images. The proposed solution is based on deep learning, where we employed transfer learning strategy on VGG16 and GoogLeNet architectures. The key feature of our solution is preprocessing based primarily on image augmentation and colour normalization. The solution was evaluated on Task 3: Lesion Diagnosis of the ISIC 2018: Skin Lesion Analysis Towards Melanoma Detection."
]
} |
1904.12724 | 2941110390 | Convolutional neural networks (CNNs) deliver exceptional results for computer vision, including medical image analysis. With the growing number of available architectures, picking one over another is far from obvious. Existing art suggests that, when performing transfer learning, the performance of CNN architectures on ImageNet correlates strongly with their performance on target tasks. We evaluate that claim for melanoma classification, over 9 CNN architectures, in 5 sets of splits created on the ISIC Challenge 2017 dataset, and 3 repeated measures, resulting in 135 models. The correlations we found were, to begin with, much smaller than those reported by existing art, and disappeared altogether when we considered only the top-performing networks: uncontrolled nuisances (i.e., splits and randomness) overcome any of the analyzed factors. Whenever possible, the best approach for melanoma classification is still to create ensembles of multiple models. We compared two choices for selecting which models to ensemble: picking them at random (among a pool of high-quality ones) vs. using the validation set to determine which ones to pick first. For small ensembles, we found a slight advantage of the second approach but found that random choice was also competitive. Although our aim in this paper was not to maximize performance, we easily reached AUCs comparable to the first place on the ISIC Challenge 2017. | While there is a consensus on the use of transfer learning for skin lesion models, when it comes to choosing the architecture no single choice is universal. On the contrary, the classification task of the ISIC Challenge shows a progressive diversification of architectures.
In 2017, the top four submissions used two networks: ResNet @cite_33 @cite_8 @cite_24 @cite_7 , and Inception-v4 @cite_33 , while the latest challenge @cite_27 , in 2018, showcased a wider range of choices among the top performers --- not only ResNets and Inceptions, but also DenseNet @cite_29 , Dual Path Networks @cite_18 , InceptionResNet @cite_13 , PNASNet @cite_16 , SENet @cite_6 , among others --- usually in ensembles of multiple architectures @cite_33 @cite_14 @cite_4 @cite_28 @cite_37 . | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_4",
"@cite_33",
"@cite_7",
"@cite_8",
"@cite_28",
"@cite_37",
"@cite_29",
"@cite_6",
"@cite_24",
"@cite_27",
"@cite_16",
"@cite_13"
],
"mid": [
"2964166828",
"",
"",
"2602631755",
"",
"",
"",
"",
"2888508006",
"",
"",
"2932083555",
"2963821229",
"2964350391"
],
"abstract": [
"In this work, we present a simple, highly efficient and modularized Dual Path Network (DPN) for image classification which presents a new topology of connection paths internally. By revealing the equivalence of the state-of-the-art Residual Network (ResNet) and Densely Convolutional Network (DenseNet) within the HORNN framework, we find that ResNet enables feature re-usage while DenseNet enables new features exploration which are both important for learning good representations. To enjoy the benefits from both path topologies, our proposed Dual Path Network shares common features while maintaining the flexibility to explore new features through dual path architectures. Extensive experiments on three benchmark datasets, ImageNet-1k, Places365 and PASCAL VOC, clearly demonstrate superior performance of the proposed DPN over state-of-the-arts. In particular, on the ImageNet-1k dataset, a shallow DPN surpasses the best ResNeXt-101(64x4d) with 26% smaller model size, 25% less computational cost and 8% lower memory consumption, and a deeper DPN (DPN-131) further pushes the state-of-the-art single model performance with about 2 times faster training speed. Experiments on the Places365 large-scale scene dataset, PASCAL VOC detection dataset, and PASCAL VOC segmentation dataset also demonstrate its consistently better performance than DenseNet, ResNet and the latest ResNeXt model over various applications.",
"",
"",
"This extended abstract describes the participation of RECOD Titans in parts 1 and 3 of the ISIC Challenge 2017 \"Skin Lesion Analysis Towards Melanoma Detection\" (ISBI 2017). Although our team has a long experience with melanoma classification, the ISIC Challenge 2017 was the very first time we worked on skin-lesion segmentation. For part 1 (segmentation), our final submission used four of our models: two trained with all 2000 samples, without a validation split, for 250 and for 500 epochs respectively; and other two trained and validated with two different 1600/400 splits, for 220 epochs. Those four models, individually, achieved between 0.780 and 0.783 official validation scores. Our final submission, which averaged the output of those four models, achieved a score of 0.793. For part 3 (classification), the submitted test run as well as our last official validation run were the result from a meta-model that assembled seven base deep-learning models: three based on Inception-V4 trained on our largest dataset; three based on Inception trained on our smallest dataset; and one based on ResNet-101 trained on our smaller dataset. The results of those component models were stacked in a meta-learning layer based on an SVM trained on the validation set of our largest dataset.",
"",
"",
"",
"",
"This extended abstract describes the participation of RECOD Titans in parts 1 to 3 of the ISIC Challenge 2018 \"Skin Lesion Analysis Towards Melanoma Detection\" (MICCAI 2018). Although our team has a long experience with melanoma classification and moderate experience with lesion segmentation, the ISIC Challenge 2018 was the very first time we worked on lesion attribute detection. For each task we submitted 3 different ensemble approaches, varying combinations of models and datasets. Our best results on the official testing set, regarding the official metric of each task, were: 0.728 (segmentation), 0.344 (attribute detection) and 0.803 (classification). Those submissions reached, respectively, the 56th, 14th and 9th places.",
"",
"",
"This work summarizes the results of the largest skin image analysis challenge in the world, hosted by the International Skin Imaging Collaboration (ISIC), a global partnership that has organized the world's largest public repository of dermoscopic images of skin. The challenge was hosted in 2018 at the Medical Image Computing and Computer Assisted Intervention (MICCAI) conference in Granada, Spain. The dataset included over 12,500 images across 3 tasks. 900 users registered for data download, 115 submitted to the lesion segmentation task, 25 submitted to the lesion attribute detection task, and 159 submitted to the disease classification task. Novel evaluation protocols were established, including a new test for segmentation algorithm performance, and a test for algorithm ability to generalize. Results show that top segmentation algorithms still fail on over 10% of images on average, and algorithms with equal performance on test data can have different abilities to generalize. This is an important consideration for agencies regulating the growing set of machine learning tools in the healthcare domain, and sets a new standard for future public challenges in healthcare.",
"We propose a new method for learning the structure of convolutional neural networks (CNNs) that is more efficient than recent state-of-the-art methods based on reinforcement learning and evolutionary algorithms. Our approach uses a sequential model-based optimization (SMBO) strategy, in which we search for structures in order of increasing complexity, while simultaneously learning a surrogate model to guide the search through structure space. Direct comparison under the same search space shows that our method is up to 5 times more efficient than the RL method of Zoph et al. (2018) in terms of number of models evaluated, and 8 times faster in terms of total compute. The structures we discover in this way achieve state of the art classification accuracies on CIFAR-10 and ImageNet.",
"Very deep convolutional networks have been central to the largest advances in image recognition performance in recent years. One example is the Inception architecture that has been shown to achieve very good performance at relatively low computational cost. Recently, the introduction of residual connections in conjunction with a more traditional architecture has yielded state-of-the-art performance in the 2015 ILSVRC challenge; its performance was similar to the latest generation Inception-v3 network. This raises the question: Are there any benefits to combining Inception architectures with residual connections? Here we give clear empirical evidence that training with residual connections accelerates the training of Inception networks significantly. There is also some evidence of residual Inception networks outperforming similarly expensive Inception networks without residual connections by a thin margin. We also present several new streamlined architectures for both residual and non-residual Inception networks. These variations improve the single-frame recognition performance on the ILSVRC 2012 classification task significantly. We further demonstrate how proper activation scaling stabilizes the training of very wide residual Inception networks. With an ensemble of three residual and one Inception-v4 networks, we achieve 3.08% top-5 error on the test set of the ImageNet classification (CLS) challenge."
]
} |
1904.12724 | 2941110390 | Convolutional neural networks (CNNs) deliver exceptional results for computer vision, including medical image analysis. With the growing number of available architectures, picking one over another is far from obvious. Existing art suggests that, when performing transfer learning, the performance of CNN architectures on ImageNet correlates strongly with their performance on target tasks. We evaluate that claim for melanoma classification, over 9 CNN architectures, in 5 sets of splits created on the ISIC Challenge 2017 dataset, and 3 repeated measures, resulting in 135 models. The correlations we found were, to begin with, much smaller than those reported by existing art, and disappeared altogether when we considered only the top-performing networks: uncontrolled nuisances (i.e., splits and randomness) overcome any of the analyzed factors. Whenever possible, the best approach for melanoma classification is still to create ensembles of multiple models. We compared two choices for selecting which models to ensemble: picking them at random (among a pool of high-quality ones) vs. using the validation set to determine which ones to pick first. For small ensembles, we found a slight advantage of the second approach but found that random choice was also competitive. Although our aim in this paper was not to maximize performance, we easily reached AUCs comparable to the first place on the ISIC Challenge 2017. | The best architectures for skin lesion classification remain, thus, an open issue. The Challenge results do not allow isolating the effect of any single factor from the multitude of choices made by participants. The study of Valle et al. @cite_15 , although exhaustive for 9 factors, evaluates only two levels for each factor, picking ResNet-101 and Inception-v4 for architectures. No other study, as far as we know, attempts to answer the question systematically. | {
"cite_N": [
"@cite_15"
],
"mid": [
"2765414121"
],
"abstract": [
"State of the art on melanoma screening evolved rapidly in the last two years, with the adoption of deep learning. Those models, however, pose challenges of their own, as they are expensive to train and difficult to parameterize. Objective: We investigate the methodological issues for designing and evaluating deep learning models for melanoma screening, by exploring nine choices often faced to design deep networks: model architecture, training dataset, image resolution, type of data augmentation, input normalization, use of segmentation, duration of training, additional use of SVM, and test data augmentation. Methods: We perform a two-level full factorial experiment, for five different test datasets, resulting in 2560 exhaustive trials, which we analyze using a multi-way ANOVA. Results: The main finding is that the size of training data has a disproportionate influence, explaining almost half the variation in performance. Of the other factors, test data augmentation and input resolution are the most helpful. Deeper models, when combined with extra data, also help. We show that the costly full factorial design, or the unreliable sequential optimization, are not the only options: ensembles of models provide reliable results with limited resources. Conclusions and Significance: To move research forward on automated melanoma screening, we need to curate larger shared datasets. Optimizing hyperparameters and measuring performance on the same dataset is common, but leads to overoptimistic results. Ensembles of models are a cost-effective alternative to the expensive full-factorial and to the unstable sequential designs."
]
} |
1904.12724 | 2941110390 | Convolutional neural networks (CNNs) deliver exceptional results for computer vision, including medical image analysis. With the growing number of available architectures, picking one over another is far from obvious. Existing art suggests that, when performing transfer learning, the performance of CNN architectures on ImageNet correlates strongly with their performance on target tasks. We evaluate that claim for melanoma classification, over 9 CNN architectures, in 5 sets of splits created on the ISIC Challenge 2017 dataset, and 3 repeated measures, resulting in 135 models. The correlations we found were, to begin with, much smaller than those reported by existing art, and disappeared altogether when we considered only the top-performing networks: uncontrolled nuisances (i.e., splits and randomness) overcome any of the analyzed factors. Whenever possible, the best approach for melanoma classification is still to create ensembles of multiple models. We compared two choices for selecting which models to ensemble: picking them at random (among a pool of high-quality ones) vs. using the validation set to determine which ones to pick first. For small ensembles, we found a slight advantage of the second approach but found that random choice was also competitive. Although our aim in this paper was not to maximize performance, we easily reached AUCs comparable to the first place on the ISIC Challenge 2017. | In this work, we focus on that issue by applying the design of Kornblith et al. @cite_38 to the task of melanoma classification. We evaluate the performance of 9 networks (listed in Table ) pre-trained on ImageNet, and fine-tuned for melanoma classification, on the dataset of the ISIC 2017 challenge. | {
"cite_N": [
"@cite_38"
],
"mid": [
"2804935296"
],
"abstract": [
"Transfer learning is a cornerstone of computer vision, yet little work has been done to evaluate the relationship between architecture and transfer. An implicit hypothesis in modern computer vision research is that models that perform better on ImageNet necessarily perform better on other vision tasks. However, this hypothesis has never been systematically tested. Here, we compare the performance of 16 classification networks on 12 image classification datasets. We find that, when networks are used as fixed feature extractors or fine-tuned, there is a strong correlation between ImageNet accuracy and transfer accuracy ( @math and @math , respectively). In the former setting, we find that this relationship is very sensitive to the way in which networks are trained on ImageNet; many common forms of regularization slightly improve ImageNet accuracy but yield penultimate layer features that are much worse for transfer learning. Additionally, we find that, on two small fine-grained image classification datasets, pretraining on ImageNet provides minimal benefits, indicating the learned features from ImageNet do not transfer well to fine-grained tasks. Together, our results show that ImageNet architectures generalize well across datasets, but ImageNet features are less general than previously suggested."
]
} |
1904.12641 | 2606879377 | Video-based vehicle detection and tracking is one of the most important components for intelligent transportation systems. When it comes to road junctions, the problem becomes even more difficult due to the occlusions and complex interactions among vehicles. In order to get a precise detection and tracking result, in this paper we propose a novel tracking-by-detection framework. In the detection stage, we present a sequential detection model to deal with serious occlusions. In the tracking stage, we model group behavior to treat complex interactions with overlaps and ambiguities. The main contributions of this paper are twofold: 1) shape prior is exploited in the sequential detection model to tackle occlusions in crowded scenes and 2) traffic force is defined in the traffic scene to model group behavior, and it can assist in handling complex interactions among vehicles. We evaluate the proposed approach on real surveillance videos at road junctions and the performance demonstrates the effectiveness of our method. | Object detection has a very wide range of applications. In this paper, we mainly discuss vehicles in traffic scenes, so we review only vehicle detection methods rather than general object detection. Vision-based vehicle detection for traffic surveillance video has received considerable attention. As rigid targets, vehicles have significant structural characteristics, which are more stable than those of flexible objects. In this paper, we follow the two common steps in vehicle detection @cite_41 : Hypothesis Generation (HG) and Hypothesis Verification (HV). | {
"cite_N": [
"@cite_41"
],
"mid": [
"2136767008"
],
"abstract": [
"Developing on-board automotive driver assistance systems aiming to alert drivers about driving environments, and possible collision with other vehicles has attracted a lot of attention lately. In these systems, robust and reliable vehicle detection is a critical step. This paper presents a review of recent vision-based on-road vehicle detection systems. Our focus is on systems where the camera is mounted on the vehicle rather than being fixed such as in traffic/driveway monitoring systems. First, we discuss the problem of on-road vehicle detection using optical sensors followed by a brief review of intelligent vehicle research worldwide. Then, we discuss active and passive sensors to set the stage for vision-based vehicle detection. Methods aiming to quickly hypothesize the location of vehicles in an image as well as to verify the hypothesized locations are reviewed next. Integrating detection with tracking is also reviewed to illustrate the benefits of exploiting temporal continuity for vehicle detection. Finally, we present a critical overview of the methods discussed, we assess their potential for future deployment, and we present directions for future research."
]
} |
1904.12728 | 2942208613 | Center-based clustering is a fundamental primitive for data analysis and becomes very challenging for large datasets. In this paper, we focus on the popular @math -median and @math -means variants which, given a set @math of points from a metric space and a parameter @math , require to identify a set @math of @math centers minimizing respectively the sum of the distances and of the squared distances of all points in @math from their closest centers. Our specific focus is on general metric spaces for which it is reasonable to require that the centers belong to the input set (i.e., @math ). We present coreset-based 2-round distributed approximation algorithms for the above problems using the MapReduce computational model. The algorithms are rather simple and obliviously adapt to the intrinsic complexity of the dataset, captured by the doubling dimension @math of the metric space. Remarkably, the algorithms attain approximation ratios that can be made arbitrarily close to those achievable by the best known polynomial-time sequential approximations, and they are very space efficient for small @math , requiring local memories sizes substantially sublinear in the input size. To the best of our knowledge, no previous distributed approaches were able to attain similar quality-performance guarantees in general metric spaces. | In the continuous setting, @cite_5 present randomized 2-round algorithms to build coresets in @math of size @math for @math -median, and @math for @math -means, for any choice of @math , where the computation is distributed among @math processing elements. By using an @math -approximation algorithm on the coresets, the overall approximation factor is @math . For @math -means, a recent improved construction yields a coreset which is a factor @math smaller and features very fast distributed implementation @cite_14 . 
It is not difficult to show that a straightforward adaptation of these algorithms to general spaces (hence in a non-continuous setting) would yield @math -approximations, with @math , thus introducing a non-negligible gap with respect to the quality of the best sequential approximations. | {
"cite_N": [
"@cite_5",
"@cite_14"
],
"mid": [
"199097577",
"2953116466"
],
"abstract": [
"This paper provides new algorithms for distributed clustering for two popular center-based objectives, @math -median and @math -means. These algorithms have provable guarantees and improve communication complexity over existing approaches. Following a classic approach in clustering by Har-Peled and Mazumdar (2004), we reduce the problem of finding a clustering with low cost to the problem of finding a 'coreset' of small size. We provide a distributed method for constructing a global coreset which improves over the previous methods by reducing the communication complexity, and which works over general communication topologies. We provide experimental evidence for this approach on both synthetic and real data sets.",
"Coresets are compact representations of data sets such that models trained on a coreset are provably competitive with models trained on the full data set. As such, they have been successfully used to scale up clustering models to massive data sets. While existing approaches generally only allow for multiplicative approximation errors, we propose a novel notion of lightweight coresets that allows for both multiplicative and additive errors. We provide a single algorithm to construct lightweight coresets for k-means clustering as well as soft and hard Bregman clustering. The algorithm is substantially faster than existing constructions, embarrassingly parallel, and the resulting coresets are smaller. We further show that the proposed approach naturally generalizes to statistical k-means clustering and that, compared to existing results, it can be used to compute smaller summaries for empirical risk minimization. In extensive experiments, we demonstrate that the proposed algorithm outperforms existing data summarization strategies in practice."
]
} |
1904.12742 | 2940544655 | Background: Communicating software engineering research to industry practitioners and to other researchers can be challenging due to its context dependent nature. Design science is recognized as a pragmatic research paradigm, addressing this and other characteristics of applied and prescriptive research. Applying the design science lens to software engineering research may improve the communication of research contributions. Aim: The aim of this study is to 1) evaluate how well the design science lens helps frame software engineering research contributions, and 2) identify and characterize different types of design science contributions in the software engineering literature. Method: In previous research we developed a visual abstract template, summarizing the core constructs of the design science paradigm. In this study, we use this template in a review of a selected set of 38 top software engineering publications to extract and analyze their design science contributions. Results: We identified five clusters of papers, classified based on their alignment to the design science paradigm. Conclusions: The design science lens helps to pinpoint the theoretical contribution of a research output, which in turn is the core for assessing the practical relevance and novelty of the prescribed rule as well as the rigor of applied empirical methods in support of the rule. | Design science has been conceptualized by several researchers in software engineering @cite_24 and other disciplines, such as information systems @cite_37 , @cite_14 , @cite_39 and organization and management @cite_9 . Wieringa describes design science as an act of producing knowledge by designing useful things @cite_24 and makes a distinction between knowledge problems and practical problems. 
Similarly, Gregor and Hevner emphasize the dual focus on the artifact and its design @cite_14 in information systems, and argue for an iterative design process where evaluation of the artifact provides feedback to improve both the design process and the artifact. | {
"cite_N": [
"@cite_37",
"@cite_14",
"@cite_9",
"@cite_39",
"@cite_24"
],
"mid": [
"2136451344",
"1276866904",
"2156256602",
"431380525",
"1999898351"
],
"abstract": [
"Two paradigms characterize much of the research in the Information Systems discipline: behavioral science and design science. The behavioral-science paradigm seeks to develop and verify theories that explain or predict human or organizational behavior. The design-science paradigm seeks to extend the boundaries of human and organizational capabilities by creating new and innovative artifacts. Both paradigms are foundational to the IS discipline, positioned as it is at the confluence of people, organizations, and technology. Our objective is to describe the performance of design-science research in Information Systems via a concise conceptual framework and clear guidelines for understanding, executing, and evaluating the research. In the design-science paradigm, knowledge and understanding of a problem domain and its solution are achieved in the building and application of the designed artifact. Three recent exemplars in the research literature are used to demonstrate the application of these guidelines. We conclude with an analysis of the challenges of performing high-quality design-science research in the context of the broader IS community.",
"Design science research (DSR) has staked its rightful ground as an important and legitimate Information Systems (IS) research paradigm. We contend that DSR has yet to attain its full potential impact on the development and use of information systems due to gaps in the understanding and application of DSR concepts and methods. This essay aims to help researchers (1) appreciate the levels of artifact abstractions that may be DSR contributions, (2) identify appropriate ways of consuming and producing knowledge when they are preparing journal articles or other scholarly works, (3) understand and position the knowledge contributions of their research projects, and (4) structure a DSR article so that it emphasizes significant contributions to the knowledge base. Our focal contribution is the DSR knowledge contribution framework with two dimensions based on the existing state of knowledge in both the problem and solution domains for the research opportunity under study. In addition, we propose a DSR communication schema with similarities to more conventional publication patterns, but which substitutes the description of the DSR artifact in place of a traditional results section. We evaluate the DSR contribution framework and the DSR communication schema via examinations of DSR exemplar publications.",
"The relevance problem of academic management research in organization and management is an old and thorny one. Recent discussions on this issue have resulted in proposals to use more Mode 2 knowledge production in our field. These discussions focused mainly on the process of research itself and less on the products produced by this process. Here the focus is on the so-called field-tested and grounded technological rule as a possible product of Mode 2 research with the potential to improve the relevance of academic research in management. Technological rules can be seen as solution-oriented knowledge. Such knowledge may be called Management Theory, while more description-oriented knowledge may be called Organization Theory. In this article the nature of technological rules in management is discussed, as well as their development, their use in actual management practice and the potential for cross-fertilization between Management Theory and Organization Theory.",
"This book is an introductory text on design science, intended to support both graduate students and researchers in structuring, undertaking and presenting design science work. It builds on established design science methods as well as recent work on presenting design science studies and ethical principles for design science, and also offers novel instruments for visualizing the results, both in the form of process diagrams and through a canvas format. While the book does not presume any prior knowledge of design science, it provides readers with a thorough understanding of the subject and enables them to delve into much deeper detail, thanks to extensive sections on further reading. Design science in information systems and technology aims to create novel artifacts in the form of models, methods, and systems that support people in developing, using and maintaining IT solutions. This work focuses on design science as applied to information systems and technology, but it also includes examples from, and perspectives of, other fields of human practice. Chapter 1 provides an overview of design science and outlines its ties with empirical research. Chapter 2 discusses the various types and forms of knowledge that can be used and produced by design science research, while Chapter 3 presents a brief overview of common empirical research strategies and methods. Chapter 4 introduces a methodological framework for supporting researchers in doing design science research as well as in presenting their results. This framework includes five core activities, which are described in detail in Chapters 5 to 9. Chapter 10 discusses how to communicate design science results, while Chapter 11 compares the proposed methodological framework with methods for systems development and shows how they can be combined. Chapter 12 discusses how design science relates to research paradigms, in particular to positivism and interpretivism. 
Lastly, Chapter 13 discusses ethical issues and principles for design science research.",
"Design science emphasizes the connection between knowledge and practice by showing that we can produce scientific knowledge by designing useful things. However, without further guidelines, aspiring design science researchers tend to identify practical problems with knowledge questions, which may lead to methodologically unsound research designs. To solve a practical problem, the real world is changed to suit human purposes, but to solve a knowledge problem, we acquire knowledge about the world without necessarily changing it. In design science, these two kinds of problems are mutually nested, but this nesting should not blind us for the fact that their problem-solving and solution justification methods are different. This paper analyzes the mutual nesting of practical problems and knowledge problems, derives some methodological guidelines from this for design science researchers, and gives an example of a design science project following this problem nesting."
]
} |
1904.12742 | 2940544655 | Background: Communicating software engineering research to industry practitioners and to other researchers can be challenging due to its context dependent nature. Design science is recognized as a pragmatic research paradigm, addressing this and other characteristics of applied and prescriptive research. Applying the design science lens to software engineering research may improve the communication of research contributions. Aim: The aim of this study is to 1) evaluate how well the design science lens helps frame software engineering research contributions, and 2) identify and characterize different types of design science contributions in the software engineering literature. Method: In previous research we developed a visual abstract template, summarizing the core constructs of the design science paradigm. In this study, we use this template in a review of a selected set of 38 top software engineering publications to extract and analyze their design science contributions. Results: We identified five clusters of papers, classified based on their alignment to the design science paradigm. Conclusions: The design science lens helps to pinpoint the theoretical contribution of a research output, which in turn is the core for assessing the practical relevance and novelty of the prescribed rule as well as the rigor of applied empirical methods in support of the rule. | In this paper, we do not distinguish between knowledge problems and solution problems within the design sciences but stress that the researcher's task is always to produce knowledge, which in turn can be used by practitioners for solving their problems. Such knowledge may be embedded in artefacts such as tools, models and techniques or distilled to simple technological rules. 
In line with van Aken @cite_9 , we distinguish between the explanatory sciences and the design sciences as two different paradigms producing different types of theory (explanatory and prescriptive, respectively) with different validity criteria. This is similar to Stol and Fitzgerald's distinction between knowledge-seeking research and solution-seeking research @cite_44 . In our study, we identified one cluster of software engineering papers belonging to the explanatory sciences ('descriptive') and three clusters of papers belonging to the design sciences ('problem-solution', 'solution-design' and 'solution evaluation'). | {
"cite_N": [
"@cite_44",
"@cite_9"
],
"mid": [
"2890801208",
"2156256602"
],
"abstract": [
"A variety of research methods and techniques are available to SE researchers, and while several overviews exist, there is consistency neither in the research methods covered nor in the terminology used. Furthermore, research is sometimes critically reviewed for characteristics inherent to the methods. We adopt a taxonomy from the social sciences, termed here the ABC framework for SE research, which offers a holistic view of eight archetypal research strategies. ABC refers to the research goal that strives for generalizability over Actors (A) and precise measurement of their Behavior (B), in a realistic Context (C). The ABC framework uses two dimensions widely considered to be key in research design: the level of obtrusiveness of the research and the generalizability of research findings. We discuss metaphors for each strategy and their inherent limitations and potential strengths. We illustrate these research strategies in two key SE domains, global software engineering and requirements engineering, and apply the framework on a sample of 75 articles. Finally, we discuss six ways in which the framework can advance SE research.",
"The relevance problem of academic management research in organization and management is an old and thorny one. Recent discussions on this issue have resulted in proposals to use more Mode 2 knowledge production in our field. These discussions focused mainly on the process of research itself and less on the products produced by this process. Here the focus is on the so-called field-tested and grounded technological rule as a possible product of Mode 2 research with the potential to improve the relevance of academic research in management. Technological rules can be seen as solution-oriented knowledge. Such knowledge may be called Management Theory, while more description-oriented knowledge may be called Organization Theory. In this article the nature of technological rules in management is discussed, as well as their development, their use in actual management practice and the potential for cross-fertilization between Management Theory and Organization Theory."
]
} |
1904.12742 | 2940544655 | Background: Communicating software engineering research to industry practitioners and to other researchers can be challenging due to its context dependent nature. Design science is recognized as a pragmatic research paradigm, addressing this and other characteristics of applied and prescriptive research. Applying the design science lens to software engineering research may improve the communication of research contributions. Aim: The aim of this study is to 1) evaluate how well the design science lens helps frame software engineering research contributions, and 2) identify and characterize different types of design science contributions in the software engineering literature. Method: In previous research we developed a visual abstract template, summarizing the core constructs of the design science paradigm. In this study, we use this template in a review of a selected set of 38 top software engineering publications to extract and analyze their design science contributions. Results: We identified five clusters of papers, classified based on their alignment to the design science paradigm. Conclusions: The design science lens helps to pinpoint the theoretical contribution of a research output, which in turn is the core for assessing the practical relevance and novelty of the prescribed rule as well as the rigor of applied empirical methods in support of the rule. | In the management domain, van Aken proposes to distinguish management theory, which is prescriptive, from organizational theory, which is explanatory @cite_9 . A corresponding division of software engineering theory has not been proposed yet, although theory types are discussed by software engineering researchers @cite_5 , @cite_16 . | {
"cite_N": [
"@cite_5",
"@cite_9",
"@cite_16"
],
"mid": [
"2161676644",
"2156256602",
"2156807806"
],
"abstract": [
"In mature sciences, building theories is the principal method of acquir- ing and accumulating knowledge that may be used in a wide range of settings. In software engineering, there is relatively little focus on theories. In particular, there is little use and development of empirically-based theories. We propose, and illustrate with examples, an initial framework for describing software engineering theories, and give advice on how to start proposing, testing, modifying and using theories to support both research and practise in software engineering.",
"The relevance problem of academic management research in organization and management is an old and thorny one. Recent discussions on this issue have resulted in proposals to use more Mode 2 knowledge production in our field. These discussions focused mainly on the process of research itself and less on the products produced by this process. Here the focus is on the so-called field-tested and grounded technological rule as a possible product of Mode 2 research with the potential to improve the relevance of academic research in management. Technological rules can be seen as solution-oriented knowledge. Such knowledge may be called Management Theory, while more description-oriented knowledge may be called Organization Theory. In this article the nature of technological rules in management is discussed, as well as their development, their use in actual management practice and the potential for cross-fertilization between Management Theory and Organization Theory.",
"There has been a growing interest in the role of theory within Software Engineering (SE) research. For several decades, researchers within the SE research community have argued that, to become a real engineering science, SE needs to develop stronger theoretical foundations. A few authors have proposed guidelines for constructing theories, building on insights from other disciplines. However, so far, much SE research is not guided by explicit theory, nor does it produce explicit theory. In this paper we argue that SE research does, in fact, show traces of theory, which we call theory fragments. We have adapted an analytical framework from the social sciences, named the Validity Network Schema (VNS), that we use to illustrate the role of theorizing in SE research. We illustrate the use of this framework by dissecting three well known research papers, each of which has had significant impact on their respective subdisciplines. We conclude this paper by outlining a number of implications for future SE research, and show how by increasing awareness and training, development of SE theories can be improved."
]
} |
1904.12742 | 2940544655 | Background: Communicating software engineering research to industry practitioners and to other researchers can be challenging due to its context dependent nature. Design science is recognized as a pragmatic research paradigm, addressing this and other characteristics of applied and prescriptive research. Applying the design science lens to software engineering research may improve the communication of research contributions. Aim: The aim of this study is to 1) evaluate how well the design science lens helps frame software engineering research contributions, and 2) identify and characterize different types of design science contributions in the software engineering literature. Method: In previous research we developed a visual abstract template, summarizing the core constructs of the design science paradigm. In this study, we use this template in a review of a selected set of 38 top software engineering publications to extract and analyze their design science contributions. Results: We identified five clusters of papers, classified based on their alignment to the design science paradigm. Conclusions: The design science lens helps to pinpoint the theoretical contribution of a research output, which in turn is the core for assessing the practical relevance and novelty of the prescribed rule as well as the rigor of applied empirical methods in support of the rule. | In the area of Information Systems, several literature reviews were conducted of design science research. Indulska and Recker @cite_46 analyzed design science articles from 2005--07 from well-known Information Systems conferences. They identified 142 articles, which they divided into groups, such as methodology- and discussion-oriented papers and papers presenting implementations of the design science approach. They found an increasing number of design science papers over the studied years. | {
"cite_N": [
"@cite_46"
],
"mid": [
"1686689482"
],
"abstract": [
"The publication of the work on design science by Alan Hevner and his colleagues has fostered much discussion on what is and what is not considered to be design science in Information Systems (IS) research. Anecdotal evidence suggests that some authors claim design science as a methodology in their work, without much consideration of theoretical or methodological aspects, or the appropriateness of their artefact. Also it would appear that design science papers have been proliferating rapidly of late. Accordingly, we were interested to identify the proliferation, nature and quality of design science research in Information Systems conference publications since the publication of ’s work in 2004. We examine design science articles published at five major IS conferences over the last three years. We subject 83 articles, identified as relevant via a rigorous analysis process, to three types of analysis - statistical, thematic and methodological. The results of these analyses indicate that design science appears to be a growing stream of research in IS. We also found design science research to be strongly prevalent in the research domains of process, knowledge and information management. The most interesting results stem out of our methodological analysis, which suggests that only a small percentage of the papers discuss a concise and consistent implementation of the design science methodology suggested by"
]
} |
1904.12742 | 2940544655 | Background: Communicating software engineering research to industry practitioners and to other researchers can be challenging due to its context dependent nature. Design science is recognized as a pragmatic research paradigm, addressing this and other characteristics of applied and prescriptive research. Applying the design science lens to software engineering research may improve the communication of research contributions. Aim: The aim of this study is to 1) evaluate how well the design science lens helps frame software engineering research contributions, and 2) identify and characterize different types of design science contributions in the software engineering literature. Method: In previous research we developed a visual abstract template, summarizing the core constructs of the design science paradigm. In this study, we use this template in a review of a selected set of 38 top software engineering publications to extract and analyze their design science contributions. Results: We identified five clusters of papers, classified based on their alignment to the design science paradigm. Conclusions: The design science lens helps to pinpoint the theoretical contribution of a research output, which in turn is the core for assessing the practical relevance and novelty of the prescribed rule as well as the rigor of applied empirical methods in support of the rule. | @cite_51 @cite_50 have also published a systematic review of design science articles in Information Systems. They identified articles by searching in top Information Systems journals and conferences from the years 2001--15, filtering the results and applying snow-balling, resulting in a final review sample of 119 papers or books. In their review, they analyze the topic addressed, artifact type, and evaluation method used. 
In our review, we have classified papers along another dimension, i.e., what types of software engineering design science contributions the papers present in terms of problem understanding, solution design and solution validation. To our knowledge, no reviews of software engineering literature have been made from a design science perspective before. | {
"cite_N": [
"@cite_51",
"@cite_50"
],
"mid": [
"2735117255",
"2904138048"
],
"abstract": [
"In the last few years, design science research has received wide attention within the IS community. It is increasingly recognized as an equal companion to IS behavioral science research and being applied to address IS topics. With the aim of providing an overview of its current state, this paper presents a systematic literature review on design science research in IS field. The results of this paper reveal the focuses of previous theoretical and empirical design science research and provide some directions for future IS design science research.",
"Design science research is a research paradigm focusing on problem-solving. It is increasingly accepted and adopted by Information Systems (IS) researchers as a legitimate research paradigm because of its capability in balancing research relevance and rigor. In the last fifteen years, many design science research has been published in top IS journals and has received a lot of attentions from IS researchers. However, current confusion and misunderstandings of DSR’s central ideas (e.g., definition, philosophical foundation, research outcomes, etc.) are obstructing it from having a more striking influence on the IS field. The purpose of this paper is to present a comprehensive and critical review of existing DSR literature. In total, 119 papers, published in top IS journals and conference proceedings, were included in the review. The results of this study portray a big picture of current DSR in IS field and build a comprehensive theoretical knowledge base in terms of DSR-related issues. This study also identifies many research issues which can be examined by future DSR."
]
} |
1904.12665 | 2951333473 | We present the first purely event-based, energy-efficient approach for object detection and categorization using an event camera. Compared to traditional frame-based cameras, choosing event cameras results in high temporal resolution (order of microseconds), low power consumption (few hundred mW) and wide dynamic range (120 dB) as attractive properties. However, event-based object recognition systems are far behind their frame-based counterparts in terms of accuracy. To this end, this paper presents an event-based feature extraction method devised by accumulating local activity across the image frame and then applying principal component analysis (PCA) to the normalized neighborhood region. Subsequently, we propose a backtracking-free k-d tree mechanism for efficient feature matching by taking advantage of the low-dimensionality of the feature representation. Additionally, the proposed k-d tree mechanism allows for feature selection to obtain a lower-dimensional dictionary representation when hardware resources are limited to implement dimensionality reduction. Consequently, the proposed system can be realized on a field-programmable gate array (FPGA) device leading to high performance over resource ratio. The proposed system is tested on real-world event-based datasets for object categorization, showing superior classification performance and relevance to state-of-the-art algorithms. Additionally, we verified the object detection method and real-time FPGA performance in lab settings under non-controlled illumination conditions with limited training data and ground truth annotations. | Since event-based vision is relatively new, only a limited amount of work addresses object detection using these devices @cite_22 @cite_13 . Liu et al. @cite_22 focus on combining a frame-based CNN detector with an event-based tracking module.
We argue that works using deep neural networks for event-based object detection may achieve good performance with lots of training data and computing power, but they go against the idea of low-latency, low-power event-based vision. In contrast, @cite_13 presents a practical event-based approach to face detection by looking for pairs of blinking eyes. While @cite_13 is applicable to human faces in the presence of activity, we develop a general-purpose, event-based object detection method using a simple feature representation based on local event aggregation. Thus, this paper is similar in spirit to the recently proposed ideas of generating event-based descriptors, such as histograms of averaged time surfaces @cite_2 and log-polar grids @cite_18 @cite_12 . Moreover, the proposed object detection and categorization method was implemented on an FPGA to demonstrate energy-efficient, low-power vision. | {
"cite_N": [
"@cite_18",
"@cite_22",
"@cite_2",
"@cite_13",
"@cite_12"
],
"mid": [
"2765979150",
"2512653618",
"2793704846",
"2795036582",
"2795192875"
],
"abstract": [
"We introduce a new event-based visual descriptor, termed as distribution aware retinal transform (DART), for pattern recognition using silicon retina cameras. The DART descriptor captures the information of the spatio-temporal distribution of events, and forms a rich structural representation. Consequently, the event context encoded by DART greatly simplifies the feature correspondence problem, which is highly relevant to many event-based vision problems. The proposed descriptor is robust to scale and rotation variations without the need for spectral analysis. To demonstrate the effectiveness of the DART descriptors, they are employed as local features in the bag-of-features classification framework. The proposed framework is tested on the N-MNIST, MNIST-DVS, CIFAR10-DVS, NCaltech-101 datasets, as well as a new object dataset, N-SOD (Neuromorphic-Single Object Dataset), collected to test unconstrained viewpoint recognition. We report a competitive classification accuracy of 97.95 on the N-MNIST and the best classification accuracy compared to existing works on the MNIST-DVS (99 ), CIFAR10-DVS (65.9 ) and NCaltech-101 (70.3 ). Using the in-house N-SOD, we demonstrate real-time classification performance on an Intel Compute Stick directly interfaced to an event camera flying on-board a quadcopter. In addition, taking advantage of the high-temporal resolution of event cameras, the classification system is extended to tackle object tracking. Finally, we demonstrate efficient feature matching for event-based cameras using kd-trees.",
"This paper reports an object tracking algorithm for a moving platform using the dynamic and active-pixel vision sensor (DAVIS). It takes advantage of both the active pixel sensor (APS) frame and dynamic vision sensor (DVS) event outputs from the DAVIS. The tracking is performed in a three step-manner: regions of interest (ROIs) are generated by a cluster-based tracking using the DVS output, likely target locations are detected by using a convolutional neural network (CNN) on the APS output to classify the ROIs as foreground and background, and finally a particle filter infers the target location from the ROIs. Doing convolution only in the ROIs boosts the speed by a factor of 70 compared with full-frame convolutions for the 240×180 frame input from the DAVIS. The tracking accuracy on a predator and prey robot database reaches 90 with a cost of less than 20ms frame in Matlab on a normal PC without using a GPU.",
"Event-based cameras have recently drawn the attention of the Computer Vision community thanks to their advantages in terms of high temporal resolution, low power consumption and high dynamic range, compared to traditional frame-based cameras. These properties make event-based cameras an ideal choice for autonomous vehicles, robot navigation or UAV vision, among others. However, the accuracy of event-based object classification algorithms, which is of crucial importance for any reliable system working in real-world conditions, is still far behind their frame-based counterparts. Two main reasons for this performance gap are: 1. The lack of effective low-level representations and architectures for event-based object classification and 2. The absence of large real-world event-based datasets. In this paper we address both problems. First, we introduce a novel event-based feature representation together with a new machine learning architecture. Compared to previous approaches, we use local memory units to efficiently leverage past temporal information and build a robust event-based representation. Second, we release the first large real-world event-based dataset for object classification. We compare our method to the state-of-the-art with extensive experiments, showing better classification performance and real-time computation.",
"We present the first purely event-based approach for face detection using an ATIS, a neuromorphic camera. We look for pairs of blinking eyes by comparing local activity across the input frame to a predefined range, which is defined by the number of events per second in a specific location. If within range, the signal is checked for additional constraints such as duration, synchronicity and distance between the eyes. After a valid blink is registered, we await a second blink in the same spot to initiate Gaussian trackers above the eyes. Based on their position, a bounding box around the estimated outlines of the face is drawn. The face can then be tracked until it is occluded.",
"Although the neuromorphic vision community has developed useful event-based descriptors in the recent past, a robust general purpose descriptor that can handle scale and rotation variations has been elusive to achieve. This is partly because event cameras do not output frames at fixed intervals (like standard cameras) that are easy to work with, but an asynchronous sequence of spikes at microsecond to millisecond time resolutions. In this paper, we present Spike Context, a spatio-temporal neuromorphic descriptor that is inspired by the distribution of photo-receptors in the primate fovea. To demonstrate the effectiveness of the spike context descriptors, they are employed as semi-local features in the bag-of-features classification framework. In the first set of experiments on the N-MNIST dataset, we obtained very high results compared to existing works. In addition, we outperformed the state-of-the-art algorithms on the smaller MNIST-DVS dataset. Finally, we demonstrate the ability of the descriptor in handling scale variations by using the leave-one-scale-out protocol on the MNIST-DVS dataset."
]
} |
1904.12887 | 2943230607 | For any financial organization, computing accurate quarterly forecasts for various products is one of the most critical operations. As the granularity at which forecasts are needed increases, traditional statistical time series models may not scale well. We apply deep neural networks in the forecasting domain by experimenting with techniques from Natural Language Processing (Encoder-Decoder LSTMs) and Computer Vision (Dilated CNNs), as well as incorporating transfer learning. A novel contribution of this paper is the application of curriculum learning to neural network models built for time series forecasting. We illustrate the performance of our models using Microsoft's revenue data corresponding to Enterprise, and Small, Medium & Corporate products, spanning approximately 60 regions across the globe for 8 different business segments, and totaling in the order of tens of billions of USD. We compare our models' performance to the ensemble model of traditional statistics and machine learning techniques currently used by Microsoft Finance. With this in-production model as a baseline, our experiments yield an approximately 30% improvement in overall accuracy on test data. We find that our curriculum learning LSTM-based model performs best, showing that it is reasonable to implement our proposed methods without overfitting on medium-sized data. | We next comment on how we have applied the concept of transfer learning in our work. While there has been work done on transfer learning across tasks for CNNs @cite_12 and RNNs @cite_2 , as well as research on meta-learning across time series @cite_17 , there has not yet been an extensively applied example showing the ability to ameliorate the missing data problem by forecasting one data row using historical trends from (in our case) a different region, segment, or product.
Prior work on similar transfer learning focuses on robustness of out-of-sample test results and testing predictions at different timesteps @cite_3 , which does not account for missing data and is relatively infeasible to reproduce given the much smaller size of the Microsoft data. We discuss implications at length in Section . | {
"cite_N": [
"@cite_3",
"@cite_17",
"@cite_12",
"@cite_2"
],
"mid": [
"2963751193",
"2883806129",
"2898843852",
"2811027900"
],
"abstract": [
"Using a large-scale Deep Learning approach applied to a high-frequency database containing billions of electronic market quotes and transactions for US equities, we uncover nonparametric evidence for the existence of a universal and stationary price formation mechanism relating the dynamics of supply and demand for a stock, as revealed through the order book, to subsequent variations in its market price. We assess the model by testing its out-of-sample predictions for the direction of price moves given the history of price and order flow, across a wide range of stocks and time periods. The universal price formation model is shown to exhibit a remarkably stable out-of-sample prediction accuracy across time, for a wide range of stocks from different sectors. Interestingly, these results also hold for stocks which are not part of the training sample, showing that the relations captured by the model are universal and not asset-specific. The universal model --- trained on data from all stocks --- outperforms, in terms of out-of-sample prediction accuracy, asset-specific linear and nonlinear models trained on time series of any given stock, showing that the universal nature of price formation weighs in favour of pooling together financial data from various stocks, rather than designing asset- or sector-specific models as commonly done. Standard data normalizations based on volatility, price level or average spread, or partitioning the training data into sectors or categories such as large small tick stocks, do not improve training results. On the other hand, inclusion of price and order flow history over many past observations is shown to improve forecasting performance, showing evidence of path-dependence in price dynamics.",
"A crucial task in time series forecasting is the identification of the most suitable forecasting method. We present a general framework for forecast-model selection using meta-learning. A random forest is used to identify the best forecasting method using only time series features. The framework is evaluated using time series from the M1 and M3 competitions and is shown to yield accurate forecasts comparable to several benchmarks and other commonly used automated approaches of time series forecasting. A key advantage of our proposed framework is that the time-consuming process of building a classifier is handled in advance of the forecasting task at hand.",
"Transfer learning for deep neural networks is the process of first training a base network on a source dataset, and then transferring the learned features (the network’s weights) to a second network to be trained on a target dataset. This idea has been shown to improve deep neural network’s generalization capabilities in many computer vision tasks such as image recognition and object localization. Apart from these applications, deep Convolutional Neural Networks (CNNs) have also recently gained popularity in the Time Series Classification (TSC) community. However, unlike for image recognition problems, transfer learning techniques have not yet been investigated thoroughly for the TSC task. This is surprising as the accuracy of deep learning models for TSC could potentially be improved if the model is fine-tuned from a pre-trained neural network instead of training it from scratch. In this paper, we fill this gap by investigating how to transfer deep CNNs for the TSC task. To evaluate the potential of transfer learning, we performed extensive experiments using the UCR archive which is the largest publicly available TSC benchmark containing 85 datasets. For each dataset in the archive, we pre-trained a model and then fine-tuned it on the other datasets resulting in 7140 different deep neural networks. These experiments revealed that transfer learning can improve or degrade the models predictions depending on the dataset used for transfer. Therefore, in an effort to predict the best source dataset for a given target dataset, we propose a new method relying on Dynamic Time Warping to measure inter-datasets similarities. We describe how our method can guide the transfer to choose the best source dataset leading to an improvement in accuracy on 71 out of 85 datasets.",
"Deep neural networks have shown promising results for various clinical prediction tasks such as diagnosis, mortality prediction, predicting duration of stay in hospital, etc. However, training deep networks -- such as those based on Recurrent Neural Networks (RNNs) -- requires large labeled data, high computational resources, and significant hyperparameter tuning effort. In this work, we investigate as to what extent can transfer learning address these issues when using deep RNNs to model multivariate clinical time series. We consider transferring the knowledge captured in an RNN trained on several source tasks simultaneously using a large labeled dataset to build the model for a target task with limited labeled data. An RNN pre-trained on several tasks provides generic features, which are then used to build simpler linear models for new target tasks without training task-specific RNNs. For evaluation, we train a deep RNN to identify several patient phenotypes on time series from MIMIC-III database, and then use the features extracted using that RNN to build classifiers for identifying previously unseen phenotypes, and also for a seemingly unrelated task of in-hospital mortality. We demonstrate that (i) models trained on features extracted using pre-trained RNN outperform or, in the worst case, perform as well as task-specific RNNs; (ii) the models using features from pre-trained models are more robust to the size of labeled data than task-specific RNNs; and (iii) features extracted using pre-trained RNN are generic enough and perform better than typical statistical hand-crafted features."
]
} |
1904.12659 | 2955239580 | Action recognition with skeleton data has recently attracted much attention in computer vision. Previous studies are mostly based on fixed skeleton graphs, only capturing local physical dependencies among joints, which may miss implicit joint correlations. To capture richer dependencies, we introduce an encoder-decoder structure, called A-link inference module, to capture action-specific latent dependencies, i.e. actional links, directly from actions. We also extend the existing skeleton graphs to represent higher-order dependencies, i.e. structural links. Combining the two types of links into a generalized skeleton graph, we further propose the actional-structural graph convolution network (AS-GCN), which stacks actional-structural graph convolution and temporal convolution as a basic building block, to learn both spatial and temporal features for action recognition. A future pose prediction head is added in parallel to the recognition head to help capture more detailed action patterns through self-supervision. We validate AS-GCN in action recognition using two skeleton data sets, NTU-RGB+D and Kinetics. The proposed AS-GCN achieves consistently large improvement compared to the state-of-the-art methods. As a side product, AS-GCN also shows promising results for future pose prediction. | Skeleton data is widely used in action recognition. Numerous algorithms are developed based on two approaches: the hand-crafted-based and the deep-learning-based. The first approach designs algorithms to capture action patterns based on the physical intuitions, such as local occupancy features @cite_35 , temporal joint covariances @cite_2 and Lie group curves @cite_36 . On the other hand, the deep-learning-based approach automatically learns the action features from data.
Some recurrent-neural-network (RNN)-based models capture the temporal dependencies between consecutive frames, such as bi-RNNs @cite_28 , deep LSTMs @cite_21 @cite_7 , and attention-based model @cite_17 . Convolutional neural networks (CNN) also achieve remarkable results, such as residual temporal CNN @cite_37 , information enhancement model @cite_16 and CNN on action representations @cite_33 . Recently, with the flexibility to exploit the body joint relations, the graph-based approach draws much attention @cite_20 @cite_11 . In this work, we adopt the graph-based approach for action recognition. Different from any previous method, we learn the graphs adaptively from data, which captures useful non-local information about actions. | {
"cite_N": [
"@cite_35",
"@cite_37",
"@cite_11",
"@cite_33",
"@cite_7",
"@cite_36",
"@cite_28",
"@cite_21",
"@cite_2",
"@cite_16",
"@cite_20",
"@cite_17"
],
"mid": [
"2143267104",
"2613570903",
"",
"2604321021",
"",
"2048821851",
"1950788856",
"2964134613",
"203345490",
"2593146028",
"2963076818",
""
],
"abstract": [
"Human action recognition is an important yet challenging task. The recently developed commodity depth sensors open up new possibilities of dealing with this problem but also present some unique challenges. The depth maps captured by the depth cameras are very noisy and the 3D positions of the tracked joints may be completely wrong if serious occlusions occur, which increases the intra-class variations in the actions. In this paper, an actionlet ensemble model is learnt to represent each action and to capture the intra-class variance. In addition, novel features that are suitable for depth data are proposed. They are robust to noise, invariant to translational and temporal misalignments, and capable of characterizing both the human motion and the human-object interactions. The proposed approach is evaluated on two challenging action recognition datasets captured by commodity depth cameras, and another dataset captured by a MoCap system. The experimental evaluations show that the proposed approach achieves superior performance to the state of the art algorithms.",
"The discriminative power of modern deep learning models for 3D human action recognition is growing ever so potent. In conjunction with the recent resurgence of 3D human action representation with 3D skeletons, the quality and the pace of recent progress have been significant. However, the inner workings of state-of-the-art learning based methods in 3D human action recognition still remain mostly black-box. In this work, we propose to use a new class of models known as Temporal Convolutional Neural Networks (TCN) for 3D human action recognition. TCN provides us a way to explicitly learn readily interpretable spatio-temporal representations for 3D human action recognition. Through this work, we wish to take a step towards a spatio-temporal model that is easier to understand, explain and interpret. The resulting model, Res-TCN, achieves state-of-the-art results on the largest 3D human action recognition dataset, NTU-RGBD.",
"",
"This paper presents a new method for 3D action recognition with skeleton sequences (i.e., 3D trajectories of human skeleton joints). The proposed method first transforms each skeleton sequence into three clips each consisting of several frames for spatial temporal feature learning using deep neural networks. Each clip is generated from one channel of the cylindrical coordinates of the skeleton sequence. Each frame of the generated clips represents the temporal information of the entire skeleton sequence, and incorporates one particular spatial relationship between the joints. The entire clips include multiple frames with different spatial relationships, which provide useful spatial structural information of the human skeleton. We propose to use deep convolutional neural networks to learn long-term temporal information of the skeleton sequence from the frames of the generated clips, and then use a Multi-Task Learning Network (MTLN) to jointly process all frames of the clips in parallel to incorporate spatial structural information for action recognition. Experimental results clearly show the effectiveness of the proposed new representation and feature learning method for 3D action recognition.",
"",
"Recently introduced cost-effective depth sensors coupled with the real-time skeleton estimation algorithm of [16] have generated a renewed interest in skeleton-based human action recognition. Most of the existing skeleton-based approaches use either the joint locations or the joint angles to represent a human skeleton. In this paper, we propose a new skeletal representation that explicitly models the 3D geometric relationships between various body parts using rotations and translations in 3D space. Since 3D rigid body motions are members of the special Euclidean group SE(3), the proposed skeletal representation lies in the Lie group SE(3)×…×SE(3), which is a curved manifold. Using the proposed representation, human actions can be modeled as curves in this Lie group. Since classification of curves in this Lie group is not an easy task, we map the action curves from the Lie group to its Lie algebra, which is a vector space. We then perform classification using a combination of dynamic time warping, Fourier temporal pyramid representation and linear SVM. Experimental results on three action datasets show that the proposed representation performs better than many existing skeletal representations. The proposed approach also outperforms various state-of-the-art skeleton-based human action recognition approaches.",
"Human actions can be represented by the trajectories of skeleton joints. Traditional methods generally model the spatial structure and temporal dynamics of human skeleton with hand-crafted features and recognize human actions by well-designed classifiers. In this paper, considering that recurrent neural network (RNN) can model the long-term contextual information of temporal sequences well, we propose an end-to-end hierarchical RNN for skeleton based action recognition. Instead of taking the whole skeleton as the input, we divide the human skeleton into five parts according to human physical structure, and then separately feed them to five subnets. As the number of layers increases, the representations extracted by the subnets are hierarchically fused to be the inputs of higher layers. The final representations of the skeleton sequences are fed into a single-layer perceptron, and the temporally accumulated output of the perceptron is the final decision. We compare with five other deep RNN architectures derived from our model to verify the effectiveness of the proposed network, and also compare with several other methods on three publicly available datasets. Experimental results demonstrate that our model achieves the state-of-the-art performance with high computational efficiency.",
"Recent approaches in depth-based human activity analysis achieved outstanding performance and proved the effectiveness of 3D representation for classification of action classes. Currently available depth-based and RGB+D-based action recognition benchmarks have a number of limitations, including the lack of training samples, distinct class labels, camera views and variety of subjects. In this paper we introduce a large-scale dataset for RGB+D human action recognition with more than 56 thousand video samples and 4 million frames, collected from 40 distinct subjects. Our dataset contains 60 different action classes including daily, mutual, and health-related actions. In addition, we propose a new recurrent neural network structure to model the long-term temporal correlation of the features for each body part, and utilize them for better action classification. Experimental results show the advantages of applying deep learning methods over state-of-the-art handcrafted features on the suggested cross-subject and cross-view evaluation criteria for our dataset. The introduction of this large scale dataset will enable the community to apply, develop and adapt various data-hungry learning techniques for the task of depth-based and RGB+D-based human activity analysis.",
"Human action recognition from videos is a challenging machine vision task with multiple important application domains, such as human-robot machine interaction, interactive entertainment, multimedia information retrieval, and surveillance. In this paper, we present a novel approach to human action recognition from 3D skeleton sequences extracted from depth data. We use the covariance matrix for skeleton joint locations over time as a discriminative descriptor for a sequence. To encode the relationship between joint movement and time, we deploy multiple covariance matrices over sub-sequences in a hierarchical fashion. The descriptor has a fixed length that is independent from the length of the described sequence. Our experiments show that using the covariance descriptor with an off-the-shelf classification algorithm outperforms the state of the art in action recognition on multiple datasets, captured either via a Kinect-type sensor or a sophisticated motion capture system. We also include an evaluation on a novel large dataset using our own annotation.",
"Sequence-based view invariant transform can effectively cope with view variations. Enhanced skeleton visualization method encodes spatio-temporal skeletons as visual and motion enhanced color images in a compact yet distinctive manner. Multi-stream convolutional neural networks fusion model is able to explore complementary properties among different types of enhanced color images. Our method consistently achieves the highest accuracies on four datasets, including the largest and most challenging NTU RGB+D dataset for skeleton-based action recognition. Human action recognition based on skeletons has wide applications in human-computer interaction and intelligent surveillance. However, view variations and noisy data bring challenges to this task. What's more, it remains a problem to effectively represent spatio-temporal skeleton sequences. To solve these problems in one goal, this work presents an enhanced skeleton visualization method for view invariant human action recognition. Our method consists of three stages. First, a sequence-based view invariant transform is developed to eliminate the effect of view variations on spatio-temporal locations of skeleton joints. Second, the transformed skeletons are visualized as a series of color images, which implicitly encode the spatio-temporal information of skeleton joints. Furthermore, visual and motion enhancement methods are applied on color images to enhance their local patterns. Third, a convolutional neural networks-based model is adopted to extract robust and discriminative features from color images. The final action class scores are generated by decision level fusion of deep features. Extensive experiments on four challenging datasets consistently demonstrate the superiority of our method.",
"",
""
]
} |