Dataset columns: aid (string, length 9–15); mid (string, length 7–10); abstract (string, length 78–2.56k); related_work (string, length 92–1.77k); ref_abstract (dict).
1708.03151
2744911071
Unlike their deterministic counterparts, static and stochastic vehicle routing problems (SS-VRPs) aim at modeling and solving real-life operational problems by considering uncertainty in the data. We consider the SS-VRPTW-CR introduced in Saint- (2017). Like the SS-VRP introduced by Bertsimas (1992), we search for optimal first-stage routes for a fleet of vehicles to handle a set of stochastic customer demands, i.e., demands are uncertain and we only know their probabilities. In addition to capacity constraints, customer demands are also constrained by time windows. Unlike all other SS-VRP variants, the SS-VRPTW-CR does not make any assumption on the time at which a stochastic demand is revealed, i.e., the reveal time is stochastic as well. To handle this new problem, we introduce waiting locations: Each vehicle is assigned a sequence of waiting locations from which it may serve some associated demands, and the objective is to minimize the expected number of demands that cannot be satisfied in time. In this paper, we propose two new recourse strategies for the SS-VRPTW-CR, together with their closed-form expressions for efficiently computing their expectations: The first one allows us to take vehicle capacities into account; the second one allows us to optimize routes by avoiding some useless trips. We propose two algorithms for searching for routes with optimal expected costs: The first one is an extended branch-and-cut algorithm, based on a stochastic integer formulation, and the second one is a local search based heuristic method. We also introduce a new public benchmark for the SS-VRPTW-CR, based on real-world data from the city of Lyon. We evaluate our two algorithms on this benchmark and empirically demonstrate the expected superiority of the SS-VRPTW-CR anticipative actions over a basic "wait-and-serve" policy.
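To make the two-stage structure described above concrete, the objective can be written in generic stochastic-programming notation (the symbols below are introduced here for illustration and are not the paper's own):

```latex
\min_{x \in X} \; \mathbb{E}_{\xi}\big[\, Q(x,\xi) \,\big]
```

Here $x$ denotes a first-stage plan (one sequence of waiting locations per vehicle) chosen from the feasible set $X$, $\xi$ is a joint realization of the stochastic requests (reveal times and demands), and $Q(x,\xi)$ counts the requests that the fixed recourse strategy cannot serve in time under scenario $\xi$. The closed-form expressions mentioned in the abstract evaluate $\mathbb{E}_{\xi}[Q(x,\xi)]$ exactly rather than by sampling.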
By definition, the SS-VRPTW-CR is a static problem. In this section we hence do not consider dynamic VRPs and rather focus on existing studies of static and stochastic VRPs. Specific literature reviews on the SS-VRP may be found in @cite_21, @cite_11, @cite_38, @cite_55 and, more recently, in @cite_50, @cite_28, @cite_49 and @cite_20. According to @cite_13, the most studied cases in SS-VRPs are: stochastic customers (SS-VRP-C), where customer presences are described by random variables; stochastic demands (SS-VRP-D), where all customers are present but their demands are random variables (see for instance dror1989vehicle, dror1993vehicle, @cite_37, @cite_22, @cite_2, Mendoza2010, Mendoza2011, Secomandi2000, Secomandi2009 and @cite_57); and stochastic times (SS-VRP-T), where travel and/or service times are random variables (see for instance @cite_27, @cite_17, @cite_39 and @cite_29). Since the SS-VRPTW-CR belongs to the first category, we focus this review on customer uncertainty only.
{ "cite_N": [ "@cite_38", "@cite_37", "@cite_22", "@cite_28", "@cite_55", "@cite_21", "@cite_2", "@cite_17", "@cite_29", "@cite_39", "@cite_57", "@cite_27", "@cite_50", "@cite_49", "@cite_13", "@cite_20", "@cite_11" ], "mid": [ "", "", "2510786012", "2046587037", "", "2085495522", "2042967365", "2001012897", "2021533552", "1752958389", "2011300857", "2105972067", "2251882317", "1993426985", "2031236641", "", "1966197373" ], "abstract": [ "", "", "", "The purpose of this paper is to develop structural classification of Stochastic Vehicle Routing Problem (SVRP) by different domains and attributes. This research used a systematic review and meta-analysis on SVRP literatures. This includes browsing relevant researches and publications, screening related articles, identifying domains, attributes and categorising the articles based on the identified domains and attributes. The findings of the study show clear differences on the number of studies under each domain and attribute. Most studied attributes are stochastic customer demand, capacitated vehicle, synthesis data and objective function with cost minimization. Whereas the least studied are maximisation objective function, stochastic service time, and an applied model using stochastic with recurs. The research helps to summarise and map a comprehensive survey on SVRP literatures so that various contributions in the field are organised in a manner that provide a clear view for the readers and identify future research directions. This paper is the first of its kind in the field of SVRP that develop a classification scheme for articles published since 1993 to enhances the development of this newly emerging discipline.", "", "Abstract The purpose of this review article is to provide a summary of the scientific literature on stochastic vehicle routing problems. The main problems are described within a broad classification scheme and the most important contributions are summarized in table form.", "This article introduces a new exact algorithm for the capacitated vehicle routing problem with stochastic demands (CVRPSD). The CVRPSD can be formulated as a set partitioning problem and it is shown that the associated column generation subproblem can be solved using a dynamic programming scheme. Computational experiments show promising results.", "We consider stochastic vehicle routing problems on a network with random travel and service times. A fleet of one or more vehicles is available to be routed through the network to service each node. Two versions of the model are developed based on alternative objective functions. We provide bounds on optimal objective function values and conditions under which reductions to simpler models can be made. Our solution method embeds a branch-and-cut scheme within a Monte Carlo sampling-based procedure.", "This paper studies a version of stochastic vehicle routing problems, in which travel and service times are stochastic, and a time window constraint is associated with each customer. This problem is originally formulated as a chance constrained programming model and a stochastic programming model with recourse in terms of different optimization criteria. To efficiently solve these two models, a heuristic based on tabu search, which takes into account the stochastic nature of this problem, is then proposed. 
Finally, some testing instances with different properties are established to investigate the algorithmic performance, and the computational results are then reported.", "The sample average approximation (SAA) method is an approach for solving stochastic optimization problems by using Monte Carlo simulation. In this technique the expected objective function of the stochastic problem is approximated by a sample average estimate derived from a random sample. The resulting sample average approximating problem is then solved by deterministic optimization techniques. The process is repeated with different samples to obtain candidate solutions along with statistical estimates of their optimality gaps. We present a detailed computational study of the application of the SAA method to solve three classes of stochastic routing problems. These stochastic problems involve an extremely large number of scenarios and first-stage integer variables. For each of the three problem classes, we use decomposition and branch-and-cut to solve the approximating problem within the SAA scheme. Our computational results indicate that the proposed method is successful in solving problems with up to 21694 scenarios to within an estimated 1.0 of optimality. Furthermore, a surprising observation is that the number of optimality cuts required to solve the approximating problem to optimality does not significantly increase with the size of the sample. Therefore, the observed computation times needed to find optimal solutions to the approximating problems grow only linearly with the sample size. As a result, we are able to find provably near-optimal solutions to these difficult stochastic programs using only a moderate amount of computation time.", "Abstract This paper proposes a state-of-the-art branch-cut-and-price algorithm for the vehicle routing problem with stochastic demands (VRPSD). We adapt the model of Christiansen and Lysgaard [6] and formulate the VRPSD as a set partitioning model with additional constraints. Feasible routes are generated using a dynamic programming algorithm executed over a state-space graph. Our method combines 2-cycle elimination with ng -routes. In addition, our pricing problem is significantly accelerated by the introduction of a new aggregate dominance rule. To speed up the generation of negative reduced cost columns, we use a tabu search heuristic and a bidirectional labeling algorithm. We also add capacity and subset-row inequalities dynamically in order to strengthen the linear relaxation of the master problem. As extensive computational tests illustrate, our algorithm is very competitive with the one of [6] . We solve 20 additional instances from the 40-instance set considered by these authors and we considerably improve the computing times for instances already closed. We also solve 17 new instances from the literature.", "This paper considers vehicle routing problems (VRPs) with stochastic service and travel times, in which vehicles incur a penalty proportional to the duration of their route in excess of a preset constant. Three mathematical programming models are presented: a chance constrained model, a three-index simple recourse model and a two-index recourse model. A general branch and cut algorithm for the three models is described. 
Computational results indicate that moderate size problems can be solved to optimality.", "Vehicle routing problems, among the most studied in combinatorial optimization, arise in many practical contexts (freight distribution and collection, transportation, garbage collection, newspaper delivery, etc.). Operations researchers have made significant developments in the algorithms for their solution, and Vehicle Routing: Problems, Methods, and Applications, Second Edition reflects these advances. The text of the new edition is either completely new or significantly revised and provides extensive and complete state-of-the-art coverage of vehicle routing by those who have done most of the innovative research in the area; it emphasizes methodology related to specific classes of vehicle routing problems and, since vehicle routing is used as a benchmark for all new solution techniques, contains a complete overview of current solutions to combinatorial optimization problems. It also includes several chapters on important and emerging applications, such as disaster relief and green vehicle routing. Audience: This book is intended for both researchers and graduate level students in operations research and applied mathematics. Practitioners will find this book particularly useful. Readers need a basic knowledge of the main methods for the solution of combinatorial optimization problems.", "An increasing number of companies focus on customer satisfaction to increase the lifetime value of each customer. In vehicle routing, customer satisfaction is often a result of consistent service. Customers appreciate service at regular times of the day provided by the same driver each time. Additionally, drivers become more familiar with their tasks if they visit the same customers and service regions repeatedly. In this article, we survey literature that addresses service consistency in vehicle routing. We present early solution approaches, starting from the 1970s, that focus on reducing the operational complexity resulting from planning and executing new routes each day. One side benefit of these approaches is service consistency; therefore, many recent solution approaches devised for improving customer satisfaction are based on previous achievements. We classify the literature according to three consistency features: arrival time consistency, person-oriented consistency, and delivery consistency. For each feature, we survey different modeling concepts and measurements, demonstrate solution approaches, and examine the increase in cost of improving service consistency. We close the article by presenting challenging ideas for future research. © 2014 The Authors Networks Published by Wiley Periodicals, Inc. NETWORKS, Vol. 643, 192-213 2014", "A number of technological advances have led to a renewed interest in dynamic vehicle routing problems. This survey classifies routing problems from the perspective of information quality and evolution. After presenting a general description of dynamic routing, we introduce the notion of degree of dynamism, and present a comprehensive review of applications and solution methods for dynamic vehicle routing problems.", "", "In recent years new insights and algorithms have been obtained for the classical, deterministic vehicle routing problem as well as for natural stochastic and dynamic variations of it. 
These new developments are based on theoretical analysis, combine probabilistic and combinatorial modeling, and lead to new algorithms that produce near-optimal solutions, and a deeper understanding of uncertainty issues in vehicle routing. In this paper, we survey these new developments with an emphasis on the insights gained and on the algorithms proposed." ] }
1708.03151
2744911071
Unlike their deterministic counterparts, static and stochastic vehicle routing problems (SS-VRPs) aim at modeling and solving real-life operational problems by considering uncertainty in the data. We consider the SS-VRPTW-CR introduced in Saint- (2017). Like the SS-VRP introduced by Bertsimas (1992), we search for optimal first-stage routes for a fleet of vehicles to handle a set of stochastic customer demands, i.e., demands are uncertain and we only know their probabilities. In addition to capacity constraints, customer demands are also constrained by time windows. Unlike all other SS-VRP variants, the SS-VRPTW-CR does not make any assumption on the time at which a stochastic demand is revealed, i.e., the reveal time is stochastic as well. To handle this new problem, we introduce waiting locations: Each vehicle is assigned a sequence of waiting locations from which it may serve some associated demands, and the objective is to minimize the expected number of demands that cannot be satisfied in time. In this paper, we propose two new recourse strategies for the SS-VRPTW-CR, together with their closed-form expressions for efficiently computing their expectations: The first one allows us to take vehicle capacities into account; the second one allows us to optimize routes by avoiding some useless trips. We propose two algorithms for searching for routes with optimal expected costs: The first one is an extended branch-and-cut algorithm, based on a stochastic integer formulation, and the second one is a local search based heuristic method. We also introduce a new public benchmark for the SS-VRPTW-CR, based on real-world data from the city of Lyon. We evaluate our two algorithms on this benchmark and empirically demonstrate the expected superiority of the SS-VRPTW-CR anticipative actions over a basic "wait-and-serve" policy.
The SS-VRP-C was first studied by @cite_35, @cite_54 and @cite_44 as a generalization of the SS-TSP-C. @cite_5 considered general integer demands and compared different heuristics. @cite_0 considered a VRP with stochastic customers and demands (SS-VRP-CD), in which a customer demand is assumed to be revealed either when the vehicle leaves the previous customer or when it arrives at the customer's own location. Two different recourse strategies are proposed, as illustrated in Figure . For both strategies, closed-form mathematical expressions are provided to compute the expected total distance of a given first-stage solution. @cite_8 and @cite_43 developed the first exact algorithm for the SS-VRP-CD, an integer L-shaped method that solves instances with up to 70 customers. @cite_15 later proposed a tabu search to efficiently approximate the solution; experiments are reported on instances with up to 46 customers. @cite_3 later developed an adaptive memory programming metaheuristic for the SS-VRP-C and assessed it on benchmarks with up to 483 customers and 38 vehicles.
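The classical recourse described in the abstract of @cite_8 (skip absent customers; return to the depot to unload whenever capacity is reached, then resume the planned route) is straightforward to evaluate by simulation. The sketch below is a minimal Monte-Carlo estimator for a single a priori route; the data structures and sampling model are illustrative assumptions, the depot return is handled in a simplified preventive way, and the cited works instead rely on closed-form expectations or an integer L-shaped method.

```python
import random

def tour_length(route, dist, capacity, presence_p, demand_sampler):
    """Length of one realized tour under a simplified classical recourse:
    absent customers are skipped; the vehicle returns to the depot (node 0)
    to unload before serving a customer whose demand would exceed the
    remaining capacity, then resumes the planned route."""
    length, load, prev = 0.0, 0, 0            # start at the depot
    for c in route:
        if random.random() > presence_p[c]:   # customer absent: skip it
            continue
        d = demand_sampler(c)
        if load + d > capacity:               # preventive return to depot
            length += dist[prev][0]
            prev, load = 0, 0
        length += dist[prev][c]
        load += d
        prev = c
    return length + dist[prev][0]             # final return to the depot

def expected_length(route, dist, capacity, presence_p, demand_sampler, samples=10_000):
    """Monte-Carlo estimate of the expected length of an a priori route."""
    total = 0.0
    for _ in range(samples):
        total += tour_length(route, dist, capacity, presence_p, demand_sampler)
    return total / samples
```

Such a sampling estimator is mainly useful as a sanity check: with closed-form expressions, the expectation can be computed exactly and embedded inside exact or local-search optimization.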
{ "cite_N": [ "@cite_35", "@cite_8", "@cite_54", "@cite_3", "@cite_44", "@cite_0", "@cite_43", "@cite_5", "@cite_15" ], "mid": [ "1546027924", "2085952738", "", "2031901413", "", "1984143531", "", "2047664145", "2047612779" ], "abstract": [ "", "In this article, the following stochastic vehicle routing problem is considered. Each customer has a known probability of presence and a random demand. This problem arises in several contexts, e.g., in the design of less-than-truckload collection routes. Because of uncertainty, it may not be possible to follow vehicle routes as planned. Using a stochastic programming framework, the problem is solved in two stages. In a first stage, planned collection routes are designed. In a second stage, when the set of present customers is known, these routes are followed as planned by skipping the absent customers. Whenever the vehicle capacity is attained or exceeded, the vehicle returns to the depot and resumes its collections along the planned route. This generates a penalty. The problem is to design a first stage solution in order to minimize the expected total cost of the second state solution. This is formulated as a stochastic integer program, and solved for the first time to optimality by means of an integer L-shaped method.", "", "We present an adaptive memory programming (AMP) metaheuristic to address the robust capacitated vehicle routing problem under demand uncertainty. Contrary to its deterministic counterpart, the robust formulation allows for uncertain customer demands, and the objective is to determine a minimum cost delivery plan that is feasible for all demand realizations within a prespecified uncertainty set. A crucial step in our heuristic is to verify the robust feasibility of a candidate route. For generic uncertainty sets, this step requires the solution of a convex optimization problem, which becomes computationally prohibitive for large instances. We present two classes of uncertainty sets for which route feasibility can be established much more efficiently. Although we discuss our implementation in the context of the AMP framework, our techniques readily extend to other metaheuristics. Computational studies on standard literature benchmarks with up to 483 customers and 38 vehicles demonstrate that the proposed approach is able to quickly provide high-quality solutions. In the process, we obtain new best solutions for a total of 123 benchmark instances.", "", "We consider a natural probabilistic variation of the classical vehicle routing problem (VRP), in which demands are stochastic. Given only a probabilistic description of the demand we need to design routes for the VRP. Motivated by applications in strategic planning and distribution systems, rather than resolving the problem when the demand becomes known, we propose to construct an a priori sequence among all customers of minimal expected total length. We analyze the problem using a variety of theoretical approaches. We find closed-form expressions and algorithms to compute the expected length of an a priori sequence under general probabilistic assumptions. Based on these expressions we find upper and lower bounds for the probabilistic VRP and the VRP re-optimization strategy, in which we find the optimal route at every instance. We propose heuristics and analyze their worst case performance as well as their average behavior using techniques from probabilistic analysis. 
Our results suggest that our approac...", "", "The standard vehicle-scheduling problem is deterministic, assuming all factors are known with certainty in advance of scheduling. In practice there are several areas which might contain uncertainty. This paper suggests ways of tackling these, but concentrates on problems where some customers do not need deliveries during a scheduling period. If the number of such customers is small, semi-fixed routes may be acceptable. As the number of customers omitted rises, there comes a point when rescheduling becomes preferable. The potential savings made by semi-fixed or variable routes over fixed routes are estimated for standard problems. The implications of these savings are then evaluated for a wholesale distributor.", "This paper considers a version of the stochastic vehicle routing problem where customers are present at locations with some probabilities and have random demands. A tabu search heuristic is developed for this problem. Comparisons with known optimal solutions on problems whose sizes vary from 6 to 46 customers indicate that the heuristic produces an optimal solution in 89.45 of cases, with an average deviation of 0.38 from optimality." ] }
1708.03151
2744911071
Unlike their deterministic counterparts, static and stochastic vehicle routing problems (SS-VRPs) aim at modeling and solving real-life operational problems by considering uncertainty in the data. We consider the SS-VRPTW-CR introduced in Saint- (2017). Like the SS-VRP introduced by Bertsimas (1992), we search for optimal first-stage routes for a fleet of vehicles to handle a set of stochastic customer demands, i.e., demands are uncertain and we only know their probabilities. In addition to capacity constraints, customer demands are also constrained by time windows. Unlike all other SS-VRP variants, the SS-VRPTW-CR does not make any assumption on the time at which a stochastic demand is revealed, i.e., the reveal time is stochastic as well. To handle this new problem, we introduce waiting locations: Each vehicle is assigned a sequence of waiting locations from which it may serve some associated demands, and the objective is to minimize the expected number of demands that cannot be satisfied in time. In this paper, we propose two new recourse strategies for the SS-VRPTW-CR, together with their closed-form expressions for efficiently computing their expectations: The first one allows us to take vehicle capacities into account; the second one allows us to optimize routes by avoiding some useless trips. We propose two algorithms for searching for routes with optimal expected costs: The first one is an extended branch-and-cut algorithm, based on a stochastic integer formulation, and the second one is a local search based heuristic method. We also introduce a new public benchmark for the SS-VRPTW-CR, based on real-world data from the city of Lyon. We evaluate our two algorithms on this benchmark and empirically demonstrate the expected superiority of the SS-VRPTW-CR anticipative actions over a basic "wait-and-serve" policy.
@cite_56 considered a variant of the SS-VRPTW-C, the Courier Delivery Problem with Uncertainty. Potential customers have deterministic soft time windows but are present probabilistically, with uncertain service times. Vehicles are uncapacitated and share a common hard deadline for returning to the depot. The objective is to construct an a priori solution, to be used every day as a basis for adapting to the daily customer requests. Unlike in the SS-VRPTW-CR, the set of customers is revealed at the beginning of operations.
{ "cite_N": [ "@cite_56" ], "mid": [ "2163352112" ], "abstract": [ "We consider the courier delivery problem (CDP), a variant of the vehicle routing problem with time windows (VRPTW) in which customers appear probabilistically and their service times are uncertain. We use scenario-based stochastic programming with recourse to model the uncertainty in customers and robust optimization for the uncertainty in service times. Our proposed model generates a master plan and daily schedules by maximizing the coverage of customers and the similarity of routes in each scenario, while minimizing the total time spent by the couriers and the total earliness and lateness penalty. To solve large-scale problem instances, we develop an insertion-based solution heuristic, called master and daily scheduler (MADS), and a tabu search improvement procedure. The computational results show that our heuristic improves the similarity of routes and the lateness penalty at the expense of increased total time spent when compared to a solution of independently scheduling each day. Our experimental results also show improvements over current industry practice on two real-world data sets." ] }
1708.03151
2744911071
Unlike their deterministic counterparts, static and stochastic vehicle routing problems (SS-VRPs) aim at modeling and solving real-life operational problems by considering uncertainty in the data. We consider the SS-VRPTW-CR introduced in Saint- (2017). Like the SS-VRP introduced by Bertsimas (1992), we search for optimal first-stage routes for a fleet of vehicles to handle a set of stochastic customer demands, i.e., demands are uncertain and we only know their probabilities. In addition to capacity constraints, customer demands are also constrained by time windows. Unlike all other SS-VRP variants, the SS-VRPTW-CR does not make any assumption on the time at which a stochastic demand is revealed, i.e., the reveal time is stochastic as well. To handle this new problem, we introduce waiting locations: Each vehicle is assigned a sequence of waiting locations from which it may serve some associated demands, and the objective is to minimize the expected number of demands that cannot be satisfied in time. In this paper, we propose two new recourse strategies for the SS-VRPTW-CR, together with their closed-form expressions for efficiently computing their expectations: The first one allows us to take vehicle capacities into account; the second one allows us to optimize routes by avoiding some useless trips. We propose two algorithms for searching for routes with optimal expected costs: The first one is an extended branch-and-cut algorithm, based on a stochastic integer formulation, and the second one is a local search based heuristic method. We also introduce a new public benchmark for the SS-VRPTW-CR, based on real-world data from the city of Lyon. We evaluate our two algorithms on this benchmark and empirically demonstrate the expected superiority of the SS-VRPTW-CR anticipative actions over a basic "wait-and-serve" policy.
@cite_45 introduced the Dial-a-Ride Problem (DARP) with stochastic customer delays. The DARP is a generalization of the VRPTW that distinguishes between pickup and delivery locations and involves customer ride-time constraints. Each customer is present at its pickup location with a stochastic delay. A customer is skipped if it is absent when the vehicle visits the corresponding location, incurring the cost of fulfilling the request by an alternative service (e.g., a taxi). In a sense, stochastic delays imply that each request is revealed at some uncertain time during the planning horizon. That study is thus related to our problem, although in the SS-VRPTW-CR only a subset of the requests is actually revealed. Similarly, @cite_51 studied a probabilistic DARP where a priori routes are modified by removing absent customers at the beginning of the day, and proposed local search based heuristics.
{ "cite_N": [ "@cite_45", "@cite_51" ], "mid": [ "2093856210", "2069358952" ], "abstract": [ "This paper considers a single-vehicle Dial-a-Ride Problem in which customers may experience stochastic delays at their pickup locations. If a customer is absent when the vehicle serves the pickup location, the request is fulfilled by an alternative service (e.g., a taxi) whose cost is added to the total cost of the tour. In this case, the vehicle skips the corresponding delivery location, which yields a reduction in the total tour cost. The aim of the problem is to determine an a priori Hamiltonian tour minimizing the expected cost of the solution. This problem is solved by means of an integer L-shaped algorithm. Computational experiments show that the algorithm yields optimal solutions on several instances within reasonable CPU times. It is also shown that the actual cost of an optimal solution obtained with this algorithm can be significantly smaller than that of an optimal solution obtained with a deterministic formulation.", "This paper introduces the probabilistic dial-a-ride problem, and describes an efficient request-relocation neighborhood evaluation procedure for the problem. The running time of the procedure is @math , compared to @math for a straightforward approach. For solving the problem we embed the suggested evaluation procedure in a pure local search heuristic and in a tabu search heuristic. The quality of the solutions obtained by the two heuristics have been compared experimentally. Computational results confirm that our neighborhood evaluation technique is much faster than the straightforward one, and for cases with 144 users and 4 vehicles it is demonstrated that the computation time can be reduced by a factor larger than 27." ] }
1708.03292
2745172755
We present a machine learning algorithm that takes as input a 2D RGB image and synthesizes a 4D RGBD light field (color and depth of the scene in each ray direction). For training, we introduce the largest public light field dataset, consisting of over 3300 plenoptic camera light fields of scenes containing flowers and plants. Our synthesis pipeline consists of a convolutional neural network (CNN) that estimates scene geometry, a stage that renders a Lambertian light field using that geometry, and a second CNN that predicts occluded rays and non-Lambertian effects. Our algorithm builds on recent view synthesis methods, but is unique in predicting RGBD for each light field ray and improving unsupervised single image depth estimation by enforcing consistency of ray depths that should intersect the same scene point. Please see our supplementary video at this https URL
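A minimal sketch of the Lambertian rendering stage mentioned above: given the input image and a per-pixel disparity map predicted by the first CNN, each sub-aperture view of the light field can be synthesized by shifting pixels in proportion to the angular offset. The numpy snippet below uses nearest-neighbor backward warping and a single disparity map; the function and variable names are ours, and a real pipeline would use a differentiable (bilinear) warp and leave occluded rays and non-Lambertian effects to the second CNN, as the abstract describes.

```python
import numpy as np

def warp_to_view(image, disparity, du, dv):
    """Synthesize the sub-aperture view at angular offset (du, dv) from a
    central image and a per-pixel disparity map, assuming a Lambertian scene.
    image: (H, W, 3) array; disparity: (H, W) array, in pixels of shift per
    unit of angular offset. Nearest-neighbor backward warping, no occlusion
    handling."""
    h, w = disparity.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # A scene point at disparity d appears shifted by d*(du, dv) in this view,
    # so each output pixel samples the source image at the shifted location.
    src_x = np.clip(np.round(xs + du * disparity).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys + dv * disparity).astype(int), 0, h - 1)
    return image[src_y, src_x]
```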
Alternative approaches synthesize images from new viewpoints without explicitly estimating geometry. The work of Shi et al. @cite_2 uses the observation that light fields are sparse in the continuous Fourier domain to reconstruct a full light field from a carefully constructed 2D collection of views. Didyk et al. @cite_13 and Zhang et al. @cite_29 reconstruct 4D light fields from pairs of 2D slices using phase-based approaches.
{ "cite_N": [ "@cite_29", "@cite_13", "@cite_2" ], "mid": [ "", "2014008866", "2119781527" ], "abstract": [ "", "Multi-view autostereoscopic displays provide an immersive, glasses-free 3D viewing experience, but they require correctly filtered content from multiple viewpoints. This, however, cannot be easily obtained with current stereoscopic production pipelines. We provide a practical solution that takes a stereoscopic video as an input and converts it to multi-view and filtered video streams that can be used to drive multi-view autostereoscopic displays. The method combines a phase-based video magnification and an interperspective antialiasing into a single filtering process. The whole algorithm is simple and can be efficiently implemented on current GPUs to yield a near real-time performance. Furthermore, the ability to retarget disparity is naturally supported. Our method is robust and works well for challenging video scenes with defocus blur, motion blur, transparent materials, and specularities. We show that our results are superior when compared to the state-of-the-art depth-based rendering methods. Finally, we showcase the method in the context of a real-time 3D videoconferencing system that requires only two cameras.", "The ability to interactively control viewpoint while watching a video is an exciting application of image-based rendering. The goal of our work is to render dynamic scenes with interactive viewpoint control using a relatively small number of video cameras. In this paper, we show how high-quality video-based rendering of dynamic scenes can be accomplished using multiple synchronized video streams combined with novel image-based modeling and rendering algorithms. Once these video streams have been processed, we can synthesize any intermediate view between cameras at any time, with the potential for space-time manipulation.In our approach, we first use a novel color segmentation-based stereo algorithm to generate high-quality photoconsistent correspondences across all camera views. Mattes for areas near depth discontinuities are then automatically extracted to reduce artifacts during view synthesis. Finally, a novel temporal two-layer compressed representation that handles matting is developed for rendering at interactive rates." ] }
1708.03292
2745172755
We present a machine learning algorithm that takes as input a 2D RGB image and synthesizes a 4D RGBD light field (color and depth of the scene in each ray direction). For training, we introduce the largest public light field dataset, consisting of over 3300 plenoptic camera light fields of scenes containing flowers and plants. Our synthesis pipeline consists of a convolutional neural network (CNN) that estimates scene geometry, a stage that renders a Lambertian light field using that geometry, and a second CNN that predicts occluded rays and non-Lambertian effects. Our algorithm builds on recent view synthesis methods, but is unique in predicting RGBD for each light field ray and improving unsupervised single image depth estimation by enforcing consistency of ray depths that should intersect the same scene point. Please see our supplementary video at this https URL
Recent works have trained CNNs to synthesize slices of the light field that have dramatically different viewpoints from the input slices. Tatarchenko et al. @cite_20 and Yang et al. @cite_28 train CNNs to regress from a single input 2D view to another 2D view, given the desired camera rotation. The exciting work of Zhou et al. @cite_27 predicts a flow field that rearranges pixels from the input views to synthesize novel views that are sharper than those obtained by directly regressing to pixel values. These methods are trained on synthetic images rendered from large databases of 3D models of objects such as cars and chairs @cite_12, while we train on real light fields. Additionally, they are not able to explicitly take advantage of geometry because they attempt to synthesize views at arbitrary rotations with potentially no shared geometry between the input and target views. We instead focus on the problem of synthesizing a dense sampling of views around the input view, so we can explicitly estimate geometry to produce higher quality results.
{ "cite_N": [ "@cite_28", "@cite_27", "@cite_12", "@cite_20" ], "mid": [ "2951475414", "", "2190691619", "2495603374" ], "abstract": [ "An important problem for both graphics and vision is to synthesize novel views of a 3D object from a single image. This is particularly challenging due to the partial observability inherent in projecting a 3D object onto the image space, and the ill-posedness of inferring object shape and pose. However, we can train a neural network to address the problem if we restrict our attention to specific object categories (in our case faces and chairs) for which we can gather ample training data. In this paper, we propose a novel recurrent convolutional encoder-decoder network that is trained end-to-end on the task of rendering rotated objects starting from a single image. The recurrent structure allows our model to capture long-term dependencies along a sequence of transformations. We demonstrate the quality of its predictions for human faces on the Multi-PIE dataset and for a dataset of 3D chair models, and also show its ability to disentangle latent factors of variation (e.g., identity and pose) without using full supervision.", "", "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans.", "We present a convolutional network capable of inferring a 3D representation of a previously unseen object given a single image of this object. Concretely, the network can predict an RGB image and a depth map of the object as seen from an arbitrary view. Several of these depth maps fused together give a full point cloud of the object. The point cloud can in turn be transformed into a surface mesh. The network is trained on renderings of synthetic 3D models of cars and chairs. It successfully deals with objects on cluttered background and generates reasonable predictions for real images of cars." ] }
1708.03292
2745172755
We present a machine learning algorithm that takes as input a 2D RGB image and synthesizes a 4D RGBD light field (color and depth of the scene in each ray direction). For training, we introduce the largest public light field dataset, consisting of over 3300 plenoptic camera light fields of scenes containing flowers and plants. Our synthesis pipeline consists of a convolutional neural network (CNN) that estimates scene geometry, a stage that renders a Lambertian light field using that geometry, and a second CNN that predicts occluded rays and non-Lambertian effects. Our algorithm builds on recent view synthesis methods, but is unique in predicting RGBD for each light field ray and improving unsupervised single image depth estimation by enforcing consistency of ray depths that should intersect the same scene point. Please see our supplementary video at this https URL
More recently, CNN-based view synthesis methods have been proposed, starting with the inspiring DeepStereo method that uses unstructured images from Google's Street View @cite_32 to synthesize new views. This idea has been extended to view interpolation for light fields given four corner views @cite_21, and to the prediction of one image of a stereo pair given the other image @cite_23 @cite_31 @cite_5.
{ "cite_N": [ "@cite_21", "@cite_32", "@cite_23", "@cite_5", "@cite_31" ], "mid": [ "", "2952809312", "2949634581", "2336968928", "2520707372" ], "abstract": [ "", "Deep networks have recently enjoyed enormous success when applied to recognition and classification problems in computer vision, but their use in graphics problems has been limited. In this work, we present a novel deep architecture that performs new view synthesis directly from pixels, trained from a large number of posed image sets. In contrast to traditional approaches which consist of multiple complex stages of processing, each of which require careful tuning and can fail in unexpected ways, our system is trained end-to-end. The pixels from neighboring views of a scene are presented to the network which then directly produces the pixels of the unseen view. The benefits of our approach include generality (we only require posed image sets and can easily apply our method to different domains), and high quality results on traditionally difficult scenes. We believe this is due to the end-to-end nature of our system which is able to plausibly generate pixels according to color, depth, and texture priors learnt automatically from the training data. To verify our method we show that it can convincingly reproduce known test views from nearby imagery. Additionally we show images rendered from novel viewpoints. To our knowledge, our work is the first to apply deep learning to the problem of new view synthesis from sets of real-world, natural imagery.", "A significant weakness of most current deep Convolutional Neural Networks is the need to train them using vast amounts of manu- ally labelled data. In this work we propose a unsupervised framework to learn a deep convolutional neural network for single view depth predic- tion, without requiring a pre-training stage or annotated ground truth depths. We achieve this by training the network in a manner analogous to an autoencoder. At training time we consider a pair of images, source and target, with small, known camera motion between the two such as a stereo pair. We train the convolutional encoder for the task of predicting the depth map for the source image. To do so, we explicitly generate an inverse warp of the target image using the predicted depth and known inter-view displacement, to reconstruct the source image; the photomet- ric error in the reconstruction is the reconstruction loss for the encoder. The acquisition of this training data is considerably simpler than for equivalent systems, requiring no manual annotation, nor calibration of depth sensor to camera. We show that our network trained on less than half of the KITTI dataset (without any further augmentation) gives com- parable performance to that of the state of art supervised methods for single view depth estimation.", "As 3D movie viewing becomes mainstream and the Virtual Reality (VR) market emerges, the demand for 3D contents is growing rapidly. Producing 3D videos, however, remains challenging. In this paper we propose to use deep neural networks to automatically convert 2D videos and images to a stereoscopic 3D format. In contrast to previous automatic 2D-to-3D conversion algorithms, which have separate stages and need ground truth depth map as supervision, our approach is trained end-to-end directly on stereo pairs extracted from existing 3D movies. This novel training scheme makes it possible to exploit orders of magnitude more data and significantly increases performance. 
Indeed, Deep3D outperforms baselines in both quantitative and human subject evaluations.", "Learning based methods have shown very promising results for the task of depth estimation in single images. However, most existing approaches treat depth prediction as a supervised regression problem and as a result, require vast quantities of corresponding ground truth depth data for training. Just recording quality depth data in a range of environments is a challenging problem. In this paper, we innovate beyond existing approaches, replacing the use of explicit depth data during training with easier-to-obtain binocular stereo footage. We propose a novel training objective that enables our convolutional neural network to learn to perform single image depth estimation, despite the absence of ground truth depth data. Ex-ploiting epipolar geometry constraints, we generate disparity images by training our network with an image reconstruction loss. We show that solving for image reconstruction alone results in poor quality depth images. To overcome this problem, we propose a novel training loss that enforces consistency between the disparities produced relative to both the left and right images, leading to improved performance and robustness compared to existing approaches. Our method produces state of the art results for monocular depth estimation on the KITTI driving dataset, even outperforming supervised methods that have been trained with ground truth depth." ] }
1708.03292
2745172755
We present a machine learning algorithm that takes as input a 2D RGB image and synthesizes a 4D RGBD light field (color and depth of the scene in each ray direction). For training, we introduce the largest public light field dataset, consisting of over 3300 plenoptic camera light fields of scenes containing flowers and plants. Our synthesis pipeline consists of a convolutional neural network (CNN) that estimates scene geometry, a stage that renders a Lambertian light field using that geometry, and a second CNN that predicts occluded rays and non-Lambertian effects. Our algorithm builds on recent view synthesis methods, but is unique in predicting RGBD for each light field ray and improving unsupervised single image depth estimation by enforcing consistency of ray depths that should intersect the same scene point. Please see our supplementary video at this https URL
Instead of synthesizing new imagery, many excellent works address the general inverse rendering problem of inferring the scene properties that produce an observed 2D image. The influential algorithm of Barron and Malik @cite_33 solves an optimization problem with priors on reflectance, shape, and illumination to infer these from a single image. Other interesting works @cite_30 @cite_0 focus on inferring just the 3D structure of the scene, and train on ground-truth geometry captured with 3D scanners or the Microsoft Kinect. A number of exciting works extend this idea to infer a 3D voxel @cite_17 @cite_8 @cite_1 or point set @cite_4 representation from a synthetic 2D image by training CNNs on large databases of 3D CAD models. Finally, recent methods @cite_10 @cite_16 @cite_34 learn to infer 3D voxel grids from a 2D image without any 3D supervision by using a rendering or projection layer within the network and minimizing the error of the rendered view. Our work is closely related to unsupervised 3D representation learning methods, but we represent geometry as 4D ray depths instead of voxels, and train on real light fields instead of views from synthetic 3D models of single objects.
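As a concrete illustration of the "projection layer" idea used by the unsupervised voxel-based methods cited at the end of this paragraph, the snippet below computes the simplest possible rendered-view error: an orthographic silhouette of a soft voxel occupancy grid compared against an observed mask. It is a toy stand-in, not the loss of any of the cited works, which use differentiable perspective projection (or, in the paper's own case, ray depths) inside the network.

```python
import numpy as np

def silhouette_loss(voxels, target_mask):
    """Toy projection loss: orthographically project a soft occupancy grid
    to a 2D silhouette along the depth axis and compare it to an observed
    binary mask. voxels: (D, H, W) occupancy probabilities in [0, 1];
    target_mask: (H, W) array in {0, 1}."""
    # Probability that a ray through pixel (h, w) hits at least one voxel.
    silhouette = 1.0 - np.prod(1.0 - voxels, axis=0)
    return float(np.mean((silhouette - target_mask) ** 2))
```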
{ "cite_N": [ "@cite_30", "@cite_4", "@cite_33", "@cite_8", "@cite_1", "@cite_34", "@cite_0", "@cite_16", "@cite_10", "@cite_17" ], "mid": [ "2951713345", "2560722161", "", "2335364074", "2949551726", "2950701417", "2132947399", "2609026071", "2469266052", "2342277278" ], "abstract": [ "In this paper we address three different computer vision tasks using a single basic architecture: depth prediction, surface normal estimation, and semantic labeling. We use a multiscale convolutional network that is able to adapt easily to each task using only small modifications, regressing from the input image to the output map directly. Our method progressively refines predictions using a sequence of scales, and captures many image details without any superpixels or low-level segmentation. We achieve state-of-the-art performance on benchmarks for all three tasks.", "Generation of 3D data by deep neural network has been attracting increasing attention in the research community. The majority of extant works resort to regular representations such as volumetric grids or collection of images, however, these representations obscure the natural invariance of 3D shapes under geometric transformations, and also suffer from a number of other issues. In this paper we address the problem of 3D reconstruction from a single image, generating a straight-forward form of output – point cloud coordinates. Along with this problem arises a unique and interesting issue, that the groundtruth shape for an input image may be ambiguous. Driven by this unorthordox output form and the inherent ambiguity in groundtruth, we design architecture, loss function and learning paradigm that are novel and effective. Our final solution is a conditional shape sampler, capable of predicting multiple plausible 3D point clouds from an input image. In experiments not only can our system outperform state-of-the-art methods on single image based 3D reconstruction benchmarks, but it also shows strong performance for 3D shape completion and promising ability in making multiple plausible predictions.", "", "What is a good vector representation of an object? We believe that it should be generative in 3D, in the sense that it can produce new 3D objects; as well as be predictable from 2D, in the sense that it can be perceived from 2D images. We propose a novel architecture, called the TL-embedding network, to learn an embedding space with these properties. The network consists of two components: (a) an autoencoder that ensures the representation is generative; and (b) a convolutional network that ensures the representation is predictable. This enables tackling a number of tasks including voxel prediction from 2D images and 3D model retrieval. Extensive experimental analysis demonstrates the usefulness and versatility of this embedding.", "We study the problem of 3D object generation. We propose a novel framework, namely 3D Generative Adversarial Network (3D-GAN), which generates 3D objects from a probabilistic space by leveraging recent advances in volumetric convolutional networks and generative adversarial nets. 
The benefits of our model are three-fold: first, the use of an adversarial criterion, instead of traditional heuristic criteria, enables the generator to capture object structure implicitly and to synthesize high-quality 3D objects; second, the generator establishes a mapping from a low-dimensional probabilistic space to the space of 3D objects, so that we can sample objects without a reference image or CAD models, and explore the 3D object manifold; third, the adversarial discriminator provides a powerful 3D shape descriptor which, learned without supervision, has wide applications in 3D object recognition. Experiments demonstrate that our method generates high-quality 3D objects, and our unsupervisedly learned features achieve impressive performance on 3D object recognition, comparable with those of supervised learning methods.", "Understanding the 3D world is a fundamental problem in computer vision. However, learning a good representation of 3D objects is still an open problem due to the high dimensionality of the data and many factors of variation involved. In this work, we investigate the task of single-view 3D object reconstruction from a learning agent's perspective. We formulate the learning process as an interaction between 3D and 2D representations and propose an encoder-decoder network with a novel projection loss defined by the perspective transformation. More importantly, the projection loss enables the unsupervised learning using 2D observation without explicit 3D supervision. We demonstrate the ability of the model in generating 3D volume from a single 2D image with three sets of experiments: (1) learning from single-class objects; (2) learning from multi-class objects and (3) testing on novel object classes. Results show superior performance and better generalization ability for 3D object reconstruction when the projection loss is involved.", "We consider the problem of estimating detailed 3D structure from a single still image of an unstructured environment. Our goal is to create 3D models that are both quantitatively accurate as well as visually pleasing. For each small homogeneous patch in the image, we use a Markov random field (MRF) to infer a set of \"plane parametersrdquo that capture both the 3D location and 3D orientation of the patch. The MRF, trained via supervised learning, models both image depth cues as well as the relationships between different parts of the image. Other than assuming that the environment is made up of a number of small planes, our model makes no explicit assumptions about the structure of the scene; this enables the algorithm to capture much more detailed 3D structure than does prior art and also give a much richer experience in the 3D flythroughs created using image-based rendering, even for scenes with significant nonvertical structure. Using this approach, we have created qualitatively correct 3D models for 64.9 percent of 588 images downloaded from the Internet. We have also extended our model to produce large-scale 3D models from a few images.", "We study the notion of consistency between a 3D shape and a 2D observation and propose a differentiable formulation which allows computing gradients of the 3D shape given an observation from an arbitrary view. We do so by reformulating view consistency using a differentiable ray consistency (DRC) term. We show that this formulation can be incorporated in a learning framework to leverage different types of multi-view observations e.g. foreground masks, depth, color images, semantics etc. 
as supervision for learning single-view 3D prediction. We present empirical analysis of our technique in a controlled setting. We also show that this approach allows us to improve over existing techniques for single-view reconstruction of objects from the PASCAL VOC dataset.", "A key goal of computer vision is to recover the underlying 3D structure from 2D observations of the world. In this paper we learn strong deep generative models of 3D structures, and recover these structures from 3D and 2D images via probabilistic inference. We demonstrate high-quality samples and report log-likelihoods on several datasets, including ShapeNet [2], and establish the first benchmarks in the literature. We also show how these models and their inference networks can be trained end-to-end from 2D images. This demonstrates for the first time the feasibility of learning to infer 3D representations of the world in a purely unsupervised manner.", "Inspired by the recent success of methods that employ shape priors to achieve robust 3D reconstructions, we propose a novel recurrent neural network architecture that we call the 3D Recurrent Reconstruction Neural Network (3D-R2N2). The network learns a mapping from images of objects to their underlying 3D shapes from a large collection of synthetic data [13]. Our network takes in one or more images of an object instance from arbitrary viewpoints and outputs a reconstruction of the object in the form of a 3D occupancy grid. Unlike most of the previous works, our network does not require any image annotations or object class labels for training or testing. Our extensive experimental analysis shows that our reconstruction framework (i) outperforms the state-of-the-art methods for single view reconstruction, and (ii) enables the 3D reconstruction of objects in situations when traditional SFM SLAM methods fail (because of lack of texture and or wide baseline)." ] }
1708.03492
2743191642
Comprehending lyrics, as found in songs and poems, can pose a challenge to human and machine readers alike. This motivates the need for systems that can understand the ambiguity and jargon found in such creative texts, and provide commentary to aid readers in reaching the correct interpretation. We introduce the task of automated lyric annotation (ALA). Like text simplification, a goal of ALA is to rephrase the original text in a more easily understandable manner. However, in ALA the system must often include additional information to clarify niche terminology and abstract concepts. To stimulate research on this task, we release a large collection of crowdsourced annotations for song lyrics. We analyze the performance of translation and retrieval models on this task, measuring performance with both automated and human evaluation. We find that each model captures a unique type of information important to the task.
Text generation for artistic purposes, such as poetry and lyrics, has been explored most commonly using templates and constraints @cite_18. For rap lyrics, Wu et al. present a system that generates a single line of lyrics as a response to a single input line. The most recent work is that of Zhang et al. and Potash et al., who show the effectiveness of RNNs for generating poetry and lyrics.
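As a toy illustration of the template-and-constraint family cited above, the snippet below generates a lyric line with a bigram Markov chain and enforces a crude end-of-line rhyme by rejection sampling. This is only a naive approximation: the constrained Markov approach of @cite_18 compiles the constraints into the model instead of rejecting samples.

```python
import random

def build_bigrams(corpus_lines):
    """Bigram successor lists from a toy corpus of lyric lines."""
    bigrams = {}
    for line in corpus_lines:
        words = line.lower().split()
        for a, b in zip(words, words[1:]):
            bigrams.setdefault(a, []).append(b)
    return bigrams

def generate_line(bigrams, start, rhyme_with, max_len=8, tries=500):
    """Sample Markov-chain lines starting from `start` and keep the first one
    whose last word shares a two-letter ending with `rhyme_with` (a crude
    rhyme constraint enforced by rejection sampling)."""
    for _ in range(tries):
        words = [start]
        while len(words) < max_len and words[-1] in bigrams:
            words.append(random.choice(bigrams[words[-1]]))
        if words[-1][-2:] == rhyme_with[-2:]:
            return " ".join(words)
    return None  # no sampled line satisfied the rhyme constraint
```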
{ "cite_N": [ "@cite_18" ], "mid": [ "116271097" ], "abstract": [ "We address the issue of generating texts in the style of an existing author, that also satisfy structural constraints imposed by the genre of the text. We focus on song lyrics, for which structural constraints are well-defined: rhyme and meter. Although Markov processes are known to be suitable for representing style, they are difficult to control in order to satisfy non-local properties, such as structural constraints, that require long distance modeling. We show that the framework of Constrained Markov Processes allows us to precisely generate texts that are consistent with a corpus, while being controllable in terms of rhymes and meter, a result that no other technique, to our knowledge, could achieve to date. Controlled Markov processes consist in reformulating Markov processes in the context of constraint satisfaction. We describe how to represent stylistic and structural properties in terms of constraints in this framework and we provide an evaluation of our method by comparing it to both pure Markov and pure constraint-based approaches. We show how this approach can be used for the semi-automatic generation of lyrics in the style of a popular author that has the same structure as an existing song." ] }
1708.03492
2743191642
Comprehending lyrics, as found in songs and poems, can pose a challenge to human and machine readers alike. This motivates the need for systems that can understand the ambiguity and jargon found in such creative texts, and provide commentary to aid readers in reaching the correct interpretation. We introduce the task of automated lyric annotation (ALA). Like text simplification, a goal of ALA is to rephrase the original text in a more easily understandable manner. However, in ALA the system must often include additional information to clarify niche terminology and abstract concepts. To stimulate research on this task, we release a large collection of crowdsourced annotations for song lyrics. We analyze the performance of translation and retrieval models on this task, measuring performance with both automated and human evaluation. We find that each model captures a unique type of information important to the task.
The task of annotating song lyrics is also related to metaphor processing. As annotators often explain metaphors used in song lyrics, the Genius dataset can serve as a resource to study computational modeling of metaphors @cite_19 .
{ "cite_N": [ "@cite_19" ], "mid": [ "56200336" ], "abstract": [ "Besides making our thoughts more vivid and filling our communication with richer imagery, metaphor also plays an important structural role in our cognition. Although there is a consensus in the linguistics and NLP research communities that the phenomenon of metaphor is not restricted to similarity-based extensions of meanings of isolated words, but rather involves reconceptualization of a whole area of experience (target domain) in terms of another (source domain), there still has been no proposal for a comprehensive procedure for annotation of cross-domain mappings. However, a corpus annotated for conceptual mappings could provide a new starting point for both linguistic and cognitive experiments. The annotation scheme we present in this paper is a step towards filling this gap. We test our procedure in an experimental setting involving multiple annotators and estimate their agreement on the task. The associated corpus annotated for source target domain mappings will be publicly available." ] }
1708.03390
2952827547
We present a simple yet effective approach for learning word sense embeddings. In contrast to existing techniques, which either directly learn sense representations from corpora or rely on sense inventories from lexical resources, our approach can induce a sense inventory from existing word embeddings via clustering of ego-networks of related words. An integrated WSD mechanism enables labeling of words in context with learned sense vectors, which gives rise to downstream applications. Experiments show that the performance of our method is comparable to state-of-the-art unsupervised WSD systems.
AdaGram is a non-parametric method for learning sense embeddings based on a Bayesian extension of the Skip-gram model. The granularity of the learned sense embeddings is controlled by the parameter @math . Comparisons of AdaGram to @cite_17 on three SemEval word sense induction and disambiguation datasets show the advantage of the method. For this reason, we use AdaGram as a representative of state-of-the-art methods in our experiments (a generic nearest-sense disambiguation sketch follows the reference entry below).
{ "cite_N": [ "@cite_17" ], "mid": [ "2949364118" ], "abstract": [ "There is rising interest in vector-space word embeddings and their use in NLP, especially given recent methods for their fast estimation at very large scale. Nearly all this work, however, assumes a single vector per word type ignoring polysemy and thus jeopardizing their usefulness for downstream tasks. We present an extension to the Skip-gram model that efficiently learns multiple embeddings per word type. It differs from recent related work by jointly performing word sense discrimination and embedding learning, by non-parametrically estimating the number of senses per word type, and by its efficiency and scalability. We present new state-of-the-art results in the word similarity in context task and demonstrate its scalability by training with one machine on a corpus of nearly 1 billion tokens in less than 6 hours." ] }
1708.03390
2952827547
We present a simple yet effective approach for learning word sense embeddings. In contrast to existing techniques, which either directly learn sense representations from corpora or rely on sense inventories from lexical resources, our approach can induce a sense inventory from existing word embeddings via clustering of ego-networks of related words. An integrated WSD mechanism enables labeling of words in context with learned sense vectors, which gives rise to downstream applications. Experiments show that the performance of our method is comparable to state-of-the-art unsupervised WSD systems.
Unsupervised WSD approaches rely neither on hand-annotated sense-labeled corpora nor on handcrafted lexical resources. Instead, they automatically induce a sense inventory from raw corpora. Such unsupervised sense induction methods fall into two categories: context clustering, such as @cite_7 @cite_6 @cite_8 @cite_17 @cite_15 , and word (ego-network) clustering, such as @cite_18 @cite_38 @cite_35 @cite_33 @cite_2 . Unsupervised methods then use disambiguation clues from the induced sense inventory to label words in context. Usually, the WSD procedure is determined by the design of the sense inventory: it might be the highest overlap between the instance's context words and the words of the sense cluster, as in @cite_2 , or the smallest distance between context words and sense hubs in a graph-based sense representation, as in @cite_16 (a minimal sketch of the overlap heuristic follows the reference entries below).
{ "cite_N": [ "@cite_38", "@cite_18", "@cite_35", "@cite_33", "@cite_7", "@cite_8", "@cite_6", "@cite_2", "@cite_15", "@cite_16", "@cite_17" ], "mid": [ "2050712820", "", "2079641356", "1987302197", "2951421399", "2164973920", "2130337399", "6380806", "", "2035242626", "2949364118" ], "abstract": [ "Inventories of manually compiled dictionaries usually serve as a source for word senses. However, they often include many rare senses while missing corpus domain-specific senses. We present a clustering algorithm called CBC (Clustering By Committee) that automatically discovers word senses from text. It initially discovers a set of tight clusters called committees that are well scattered in the similarity space. The centroid of the members of a committee is used as the feature vector of the cluster. We proceed by assigning words to their most similar clusters. After assigning an element to a cluster, we remove their overlapping features from the element. This allows CBC to discover the less frequent senses of a word and to avoid discovering duplicate senses. Each cluster that a word belongs to represents one of its senses. We also present an evaluation methodology for automatically measuring the precision and recall of discovered senses.", "", "This paper presents an unsupervised method for assembling semantic knowledge from a part-of-speech tagged corpus using graph algorithms. The graph model is built by linking pairs of words which participate in particular syntactic relationships. We focus on the symmetric relationship between pairs of nouns which occur together in lists. An incremental cluster-building algorithm using this part of the graph achieves 82 accuracy at a lexical acquisition task, evaluated against WordNet classes. The model naturally realises domain and corpus specific ambiguities as distinct components in the graph surrounding an ambiguous word.", "We introduce Chinese Whispers, a randomized graph-clustering algorithm, which is time-linear in the number of edges. After a detailed definition of the algorithm and a discussion of its strengths and weaknesses, the performance of Chinese Whispers is measured on Natural Language Processing (NLP) problems as diverse as language separation, acquisition of syntactic word classes and word sense disambiguation. At this, the fact is employed that the small-world property holds for many graphs in NLP.", "This paper describes an experimental comparison of three unsupervised learning algorithms that distinguish the sense of an ambiguous word in untagged text. The methods described in this paper, McQuitty's similarity analysis, Ward's minimum-variance method, and the EM algorithm, assign each instance of an ambiguous word to a known sense definition based solely on the values of automatically identifiable features in text. These methods and feature sets are found to be more successful in disambiguating nouns rather than adjectives or verbs. Overall, the most accurate of these procedures is McQuitty's similarity analysis in combination with a high dimensional feature set.", "Current vector-space models of lexical semantics create a single \"prototype\" vector to represent the meaning of a word. However, due to lexical ambiguity, encoding word meaning with a single vector is problematic. This paper presents a method that uses clustering to produce multiple \"sense-specific\" vectors for each word. This approach provides a context-dependent vector representation of word meaning that naturally accommodates homonymy and polysemy. 
Experimental comparisons to human judgements of semantic similarity for both isolated words as well as words in sentential contexts demonstrate the superiority of this approach over both prototype and exemplar based vector-space models.", "This paper presents context-group discrimination, a disambiguation algorithm based on clustering. Senses are interpreted as groups (or clusters) of similar contexts of the ambiguous word. Words, contexts, and senses are represented in Word Space, a high-dimensional, real-valued space in which closeness corresponds to semantic similarity. Similarity in Word Space is based on second-order co-occurrence: two tokens (or contexts) of the ambiguous word are assigned to the same sense cluster if the words they co-occur with in turn occur with similar words in a training corpus. The algorithm is automatic and unsupervised in both training and application: senses are induced from a corpus without labeled training instances or other external knowledge sources. The paper demonstrates good performance of context-group discrimination for a sample of natural and artificial ambiguous words.", "This paper introduces a linear time graph-based soft clustering algorithm. The algorithm applies a simple idea: given a graph, vertex pairs are assigned to the same cluster if either vertex has maximal affinity to the other. Clusters of varying size, shape, and density are found automatically making the algorithm suited to tasks such Word Sense Induction (WSI), where the number of classes is unknown and where class distributions may be skewed. The algorithm is applied to two WSI tasks, obtaining results comparable with those of systems adopting existing, state-of-the-art methods.", "", "Abstract This article describes an algorithm called HyperLex that is capable of automatically determining word uses in a textbase without recourse to a dictionary. The algorithm makes use of the specific properties of word cooccurrence graphs, which are shown as having “small world” properties. Unlike earlier dictionary-free methods based on word vectors, it can isolate highly infrequent uses (as rare as 1 of all occurrences) by detecting “hubs” and high-density components in the cooccurrence graphs. The algorithm is applied here to information retrieval on the Web, using a set of highly ambiguous test words. An evaluation of the algorithm showed that it only omitted a very small number of relevant uses. In addition, HyperLex offers automatic tagging of word uses in context with excellent precision (97 , compared to 73 for baseline tagging, with an 82 recall rate). Remarkably good precision (96 ) was also achieved on a selection of the 25 most relevant pages for each use (including highly infrequent ones). Finally, HyperLex is combined with a graphic display technique that allows the user to navigate visually through the lexicon and explore the various domains detected for each word use.", "There is rising interest in vector-space word embeddings and their use in NLP, especially given recent methods for their fast estimation at very large scale. Nearly all this work, however, assumes a single vector per word type ignoring polysemy and thus jeopardizing their usefulness for downstream tasks. We present an extension to the Skip-gram model that efficiently learns multiple embeddings per word type. 
It differs from recent related work by jointly performing word sense discrimination and embedding learning, by non-parametrically estimating the number of senses per word type, and by its efficiency and scalability. We present new state-of-the-art results in the word similarity in context task and demonstrate its scalability by training with one machine on a corpus of nearly 1 billion tokens in less than 6 hours." ] }
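To make the overlap-based disambiguation described above concrete, here is a minimal sketch, not taken from any of the cited systems: the sense inventory is assumed to be a mapping from a word to several clusters of related words (one per induced sense), and a target occurrence is labeled with the sense whose cluster shares the most words with the surrounding context. The inventory entries are invented for illustration.

```python
# Hypothetical induced sense inventory: word -> list of sense clusters,
# each cluster being a set of related words (e.g. from ego-network clustering).
SENSE_INVENTORY = {
    "bank": [
        {"money", "account", "loan", "deposit", "credit"},   # sense 0: financial
        {"river", "shore", "water", "slope", "fishing"},     # sense 1: riverside
    ]
}

def disambiguate(word, context_words, inventory=SENSE_INVENTORY):
    """Return the index of the sense cluster with the highest overlap
    with the context words, or None if the word is not in the inventory."""
    senses = inventory.get(word)
    if not senses:
        return None
    context = set(w.lower() for w in context_words)
    overlaps = [len(cluster & context) for cluster in senses]
    return max(range(len(senses)), key=lambda i: overlaps[i])

# Example: a context about money selects the financial sense.
print(disambiguate("bank", ["I", "opened", "an", "account", "at", "the", "bank"]))  # -> 0
```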
1708.03088
2744014255
In this work, we propose a technique to convert CNN models for semantic segmentation of static images into CNNs for video data. We describe a warping method that can be used to augment existing architectures with very little extra computational cost. This module is called NetWarp and we demonstrate its use for a range of network architectures. The main design principle is to use optical flow of adjacent frames for warping internal network representations across time. A key insight of this work is that fast optical flow methods can be combined with many different CNN architectures for improved performance and end-to-end training. Experiments validate that the proposed approach incurs only little extra computational cost, while improving performance, when video streams are available. We achieve new state-of-the-art results on the CamVid and Cityscapes benchmark datasets and show consistent improvements over different baseline networks. Our code and models will be available at this http URL
One possibility for addressing semantic video segmentation is by means of the 3D scene structure. Some works @cite_3 @cite_0 @cite_43 build models on 3D point clouds obtained via structure from motion, and use these geometric and/or motion features to improve semantic segmentation. More recent works @cite_26 @cite_32 propose the joint estimation of 2D semantics and 3D reconstruction of the scene from video data. While 3D information is very informative, it is also costly to obtain and comes with prediction errors that are hard to recover from.
{ "cite_N": [ "@cite_26", "@cite_32", "@cite_3", "@cite_0", "@cite_43" ], "mid": [ "801273237", "1971618559", "1913356549", "2047639605", "" ], "abstract": [ "We present an approach for joint inference of 3D scene structure and semantic labeling for monocular video. Starting with monocular image stream, our framework produces a 3D volumetric semantic + occupancy map, which is much more useful than a series of 2D semantic label images or a sparse point cloud produced by traditional semantic segmentation and Structure from Motion(SfM) pipelines respectively. We derive a Conditional Random Field (CRF) model defined in the 3D space, that jointly infers the semantic category and occupancy for each voxel. Such a joint inference in the 3D CRF paves the way for more informed priors and constraints, which is otherwise not possible if solved separately in their traditional frameworks. We make use of class specific semantic cues that constrain the 3D structure in areas, where multiview constraints are weak. Our model comprises of higher order factors, which helps when the depth is unobservable.We also make use of class specific semantic cues to reduce either the degree of such higher order factors, or to approximately model them with unaries if possible. We demonstrate improved 3D structure and temporally consistent semantic segmentation for difficult, large scale, forward moving monocular image sequences.", "In this paper we propose a robust algorithm that generates an efficient and accurate dense 3D reconstruction with associated semantic labellings. Intelligent autonomous systems require accurate 3D reconstructions for applications such as navigation and localisation. Such systems also need to recognise their surroundings in order to identify and interact with objects of interest. Considerable emphasis has been given to generating a good reconstruction but less effort has gone into generating a 3D semantic model. The inputs to our algorithm are street level stereo image pairs acquired from a camera mounted on a moving vehicle. The depth-maps, generated from the stereo pairs across time, are fused into a global 3D volume online in order to accommodate arbitrary long image sequences. The street level images are automatically labelled using a Conditional Random Field (CRF) framework exploiting stereo images, and label estimates are aggregated to annotate the 3D volume. We evaluate our approach on the KITTI odometry dataset and have manually generated ground truth for object class segmentation. Our qualitative evaluation is performed on various sequences of the dataset and we also quantify our results on a representative subset.", "We propose an algorithm for semantic segmentation based on 3D point clouds derived from ego-motion. We motivate five simple cues designed to model specific patterns of motion and 3D world structure that vary with object category. We introduce features that project the 3D cues back to the 2D image plane while modeling spatial layout and context. A randomized decision forest combines many such features to achieve a coherent 2D segmentation and recognize the object categories present. Our main contribution is to show how semantic segmentation is possible based solely on motion-derived 3D world structure. Our method works well on sparse, noisy point clouds, and unlike existing approaches, does not need appearance-based descriptors. Experiments were performed on a challenging new video database containing sequences filmed from a moving car in daylight and at dusk. 
The results confirm that indeed, accurate segmentation and recognition are possible using only motion and 3D world structure. Further, we show that the motion-derived information complements an existing state-of-the-art appearance-based method, improving both qualitative and quantitative performance.", "In this paper we propose a novel Conditional Random Field (CRF) formulation for the semantic scene labeling problem which is able to enforce temporal consistency between consecutive video frames and take advantage of the 3D scene geometry to improve segmentation quality. The main contribution of this work lies in the novel use of a 3D scene reconstruction as a means to temporally couple the individual image segmentations, allowing information flow from 3D geometry to the 2D image space. As our results show, the proposed framework outperforms state-of-the-art methods and opens a new perspective towards a tighter interplay of 2D and 3D information in the scene understanding problem.", "" ] }
1708.03088
2744014255
In this work, we propose a technique to convert CNN models for semantic segmentation of static images into CNNs for video data. We describe a warping method that can be used to augment existing architectures with very little extra computational cost. This module is called NetWarp and we demonstrate its use for a range of network architectures. The main design principle is to use optical flow of adjacent frames for warping internal network representations across time. A key insight of this work is that fast optical flow methods can be combined with many different CNN architectures for improved performance and end-to-end training. Experiments validate that the proposed approach incurs only little extra computational cost, while improving performance, when video streams are available. We achieve new state-of-the-art results on the CamVid and Cityscapes benchmark datasets and show consistent improvements over different baseline networks. Our code and models will be available at this http URL
More closely related to our technique are fast filtering approaches. For example, @cite_2 learns a similarity function between pixels of consecutive frames to propagate predictions across time. The approach of @cite_18 implements a neural network that uses learnable bilateral filters @cite_36 for long-range propagation of information across video frames. These filtering techniques propagate information after the semantic labels have been computed for each frame, whereas our approach performs filtering-based propagation across intermediate CNN representations, making it more tightly integrated into CNN training (a minimal sketch of flow-based feature warping follows the reference entries below).
{ "cite_N": [ "@cite_36", "@cite_18", "@cite_2" ], "mid": [ "2264432461", "2562457735", "2157431481" ], "abstract": [ "Bilateral filters have wide spread use due to their edgepreserving properties. The common use case is to manually choose a parametric filter type, usually a Gaussian filter. In this paper, we will generalize the parametrization and in particular derive a gradient descent algorithm so the filter parameters can be learned from data. This derivation allows to learn high dimensional linear filters that operate in sparsely populated feature spaces. We build on the permutohedral lattice construction for efficient filtering. The ability to learn more general forms of high-dimensional filters can be used in several diverse applications. First, we demonstrate the use in applications where single filter applications are desired for runtime reasons. Further, we show how this algorithm can be used to learn the pairwise potentials in densely connected conditional random fields and apply these to different image segmentation tasks. Finally, we introduce layers of bilateral filters in CNNs and propose bilateral neural networks for the use of highdimensional sparse data. This view provides new ways to encode model structure into network architectures. A diverse set of experiments empirically validates the usage of general forms of filters.", "We propose a technique that propagates information forward through video data. The method is conceptually simple and can be applied to tasks that require the propagation of structured information, such as semantic labels, based on video content. We propose a Video Propagation Network that processes video frames in an adaptive manner. The model is applied online: it propagates information forward without the need to access future frames. In particular we combine two components, a temporal bilateral network for dense and video adaptive filtering, followed by a spatial network to refine features and increased flexibility. We present experiments on video object segmentation and semantic video segmentation and show increased performance comparing to the best previous task-specific methods, while having favorable runtime. Additionally we demonstrate our approach on an example regression task of color propagation in a grayscale video.", "We address the problem of image-based scene analysis from streaming video, as would be seen from a moving platform, in order to efficiently generate spatially and temporally consistent predictions of semantic categories over time. In contrast to previous techniques which typically address this problem in batch and or through graphical models, we demonstrate that by learning visual similarities between pixels across frames, a simple filtering algorithfiltering algorithmm is able to achieve high performance predictions in an efficient and online causal manner. Our technique is a meta-algorithm that can be efficiently wrapped around any scene analysis technique that produces a per-pixel semantic category distribution. We validate our approach over three different scene analysis techniques on three different datasets that contain different semantic object categories. Our experiments demonstrate that our approach is very efficient in practice and substantially improves the consistency of the predictions over time." ] }
1708.03088
2744014255
In this work, we propose a technique to convert CNN models for semantic segmentation of static images into CNNs for video data. We describe a warping method that can be used to augment existing architectures with very little extra computational cost. This module is called NetWarp and we demonstrate its use for a range of network architectures. The main design principle is to use optical flow of adjacent frames for warping internal network representations across time. A key insight of this work is that fast optical flow methods can be combined with many different CNN architectures for improved performance and end-to-end training. Experiments validate that the proposed approach incurs only little extra computational cost, while improving performance, when video streams are available. We achieve new state-of-the-art results on the CamVid and Cityscapes benchmark datasets and show consistent improvements over different baseline networks. Our code and models will be available at this http URL
The use of CNNs ( @cite_22 @cite_4 ) resulted in a surge of performance in semantic segmentation, but most CNN techniques work on single images. The authors of @cite_8 observed that semantics change slowly across time and re-use some intermediate representations from previous frames while computing the segmentation of the present frame. This results in faster runtime but a loss in accuracy. In contrast, our approach uses the deep representations of adjacent frames to obtain consistent predictions across frames, resulting in improved prediction accuracy.
{ "cite_N": [ "@cite_4", "@cite_22", "@cite_8" ], "mid": [ "1923697677", "2952632681", "2516114310" ], "abstract": [ "Deep Convolutional Neural Networks (DCNNs) have recently shown state of the art performance in high level vision tasks, such as image classification and object detection. This work brings together methods from DCNNs and probabilistic graphical models for addressing the task of pixel-level classification (also called \"semantic image segmentation\"). We show that responses at the final layer of DCNNs are not sufficiently localized for accurate object segmentation. This is due to the very invariance properties that make DCNNs good for high level tasks. We overcome this poor localization property of deep networks by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF). Qualitatively, our \"DeepLab\" system is able to localize segment boundaries at a level of accuracy which is beyond previous methods. Quantitatively, our method sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 71.6 IOU accuracy in the test set. We show how these results can be obtained efficiently: Careful network re-purposing and a novel application of the 'hole' algorithm from the wavelet community allow dense computation of neural net responses at 8 frames per second on a modern GPU.", "Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build \"fully convolutional\" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a novel architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes one third of a second for a typical image.", "Recent years have seen tremendous progress in still-image segmentation; however the na \"ive application of these state-of-the-art algorithms to every video frame requires considerable computation and ignores the temporal continuity inherent in video. We propose a video recognition framework that relies on two key observations: 1) while pixels may change rapidly from frame to frame, the semantic content of a scene evolves more slowly, and 2) execution can be viewed as an aspect of architecture, yielding purpose-fit computation schedules for networks. We define a novel family of \"clockwork\" convnets driven by fixed or adaptive clock signals that schedule the processing of different layers at different update rates according to their semantic stability. We design a pipeline schedule to reduce latency for real-time recognition and a fixed-rate schedule to reduce overall computation. 
Finally, we extend clockwork scheduling to adaptive video processing by incorporating data-driven clocks that can be tuned on unlabeled video. The accuracy and efficiency of clockwork convnets are evaluated on the Youtube-Objects, NYUD, and Cityscapes video datasets." ] }
1708.03088
2744014255
In this work, we propose a technique to convert CNN models for semantic segmentation of static images into CNNs for video data. We describe a warping method that can be used to augment existing architectures with very little extra computational cost. This module is called NetWarp and we demonstrate its use for a range of network architectures. The main design principle is to use optical flow of adjacent frames for warping internal network representations across time. A key insight of this work is that fast optical flow methods can be combined with many different CNN architectures for improved performance and end-to-end training. Experiments validate that the proposed approach incurs only little extra computational cost, while improving performance, when video streams are available. We achieve new state-of-the-art results on the CamVid and Cityscapes benchmark datasets and show consistent improvements over different baseline networks. Our code and models will be available at this http URL
Although several works have proposed neural network approaches that process multiple video frames together, they are mostly confined to video-level tasks such as classification or captioning. The works of @cite_37 @cite_39 use 3D convolutions across frames for action recognition. In @cite_25 , LSTMs are used in a recurrent network for recognition and captioning. Two-stream optical flow and image CNNs @cite_46 @cite_12 @cite_45 are among the state-of-the-art approaches for visual action recognition. Unlike video-level tasks, pixel-level semantic video segmentation requires filtering at the pixel level. This work proposes a way of performing local information propagation across video frames.
{ "cite_N": [ "@cite_37", "@cite_39", "@cite_45", "@cite_46", "@cite_25", "@cite_12" ], "mid": [ "", "2308045930", "1973071925", "2952186347", "2951183276", "2507009361" ], "abstract": [ "", "", "We tackle the problem of semantic segmentation of dynamic scene in video sequences. We propose to incorporate foreground object information into pixel labeling by jointly reasoning semantic labels of super-voxels, object instance tracks and geometric relations between objects. We take an exemplar approach to object modeling by using a small set of object annotations and exploring the temporal consistency of object motion. After generating a set of moving object hypotheses, we design a CRF framework that jointly models the super voxel and object instances. The optimal semantic labeling is inferred by the MAP estimation of the model, which is solved by a single move-making based optimization procedure. We demonstrate the effectiveness of our method on three public datasets and show that our model can achieve superior or comparable results than the state of-the-art with less object-level supervision.", "We investigate architectures of discriminatively trained deep Convolutional Networks (ConvNets) for action recognition in video. The challenge is to capture the complementary information on appearance from still frames and motion between frames. We also aim to generalise the best performing hand-crafted features within a data-driven learning framework. Our contribution is three-fold. First, we propose a two-stream ConvNet architecture which incorporates spatial and temporal networks. Second, we demonstrate that a ConvNet trained on multi-frame dense optical flow is able to achieve very good performance in spite of limited training data. Finally, we show that multi-task learning, applied to two different action classification datasets, can be used to increase the amount of training data and improve the performance on both. Our architecture is trained and evaluated on the standard video actions benchmarks of UCF-101 and HMDB-51, where it is competitive with the state of the art. It also exceeds by a large margin previous attempts to use deep nets for video classification.", "Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent, or \"temporally deep\", are effective for tasks involving sequences, visual and otherwise. We develop a novel recurrent convolutional architecture suitable for large-scale visual learning which is end-to-end trainable, and demonstrate the value of these models on benchmark video recognition tasks, image description and retrieval problems, and video narration challenges. In contrast to current models which assume a fixed spatio-temporal receptive field or simple temporal averaging for sequential processing, recurrent convolutional models are \"doubly deep\"' in that they can be compositional in spatial and temporal \"layers\". Such models may have advantages when target concepts are complex and or training data are limited. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Long-term RNN models are appealing in that they directly can map variable-length inputs (e.g., video frames) to variable length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. 
Our recurrent long-term models are directly connected to modern visual convnet models and can be jointly trained to simultaneously learn temporal dynamics and convolutional perceptual representations. Our results show such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined and or optimized.", "Deep convolutional networks have achieved great success for visual recognition in still images. However, for action recognition in videos, the advantage over traditional methods is not so evident. This paper aims to discover the principles to design effective ConvNet architectures for action recognition in videos and learn these models given limited training samples. Our first contribution is temporal segment network (TSN), a novel framework for video-based action recognition. which is based on the idea of long-range temporal structure modeling. It combines a sparse temporal sampling strategy and video-level supervision to enable efficient and effective learning using the whole action video. The other contribution is our study on a series of good practices in learning ConvNets on video data with the help of temporal segment network. Our approach obtains the state-the-of-art performance on the datasets of HMDB51 ( ( 69.4 , )) and UCF101 ( ( 94.2 , )). We also visualize the learned ConvNet models, which qualitatively demonstrates the effectiveness of temporal segment network and the proposed good practices (Models and code at https: github.com yjxiong temporal-segment-networks)." ] }
1708.03088
2744014255
In this work, we propose a technique to convert CNN models for semantic segmentation of static images into CNNs for video data. We describe a warping method that can be used to augment existing architectures with very little extra computational cost. This module is called NetWarp and we demonstrate its use for a range of network architectures. The main design principle is to use optical flow of adjacent frames for warping internal network representations across time. A key insight of this work is that fast optical flow methods can be combined with many different CNN architectures for improved performance and end-to-end training. Experiments validate that the proposed approach incurs only little extra computational cost, while improving performance, when video streams are available. We achieve new state-of-the-art results on the CamVid and Cityscapes benchmark datasets and show consistent improvements over different baseline networks. Our code and models will be available at this http URL
A task related to semantic video segmentation is video object segmentation. As in the semantic video segmentation literature, several works @cite_5 @cite_27 @cite_19 @cite_15 @cite_30 aim to reduce the complexity of the graphical model structure using spatio-temporal superpixels. Other works use nearest neighbor fields @cite_20 or optical flow @cite_47 to estimate correspondences between pixels of different frames. These works use pixel correspondences across frames to refine or propagate labels (a minimal flow-based label propagation sketch follows the reference entries below), whereas the proposed approach refines the intermediate CNN representations with a module that is easy to integrate into current CNN frameworks.
{ "cite_N": [ "@cite_30", "@cite_47", "@cite_19", "@cite_27", "@cite_5", "@cite_15", "@cite_20" ], "mid": [ "589665618", "2244837655", "", "", "2111421929", "2117435890", "2016163842" ], "abstract": [ "A major challenge in video segmentation is that the foreground object may move quickly in the scene at the same time its appearance and shape evolves over time. While pairwise potentials used in graph-based algorithms help smooth labels between neighboring (super)pixels in space and time, they offer only a myopic view of consistency and can be misled by inter-frame optical flow errors. We propose a higher order supervoxel label consistency potential for semi-supervised foreground segmentation. Given an initial frame with manual annotation for the foreground object, our approach propagates the foreground region through time, leveraging bottom-up supervoxels to guide its estimates towards long-range coherent regions. We validate our approach on three challenging datasets and achieve state-of-the-art results.", "This paper describes a new framework for video matting, the process of pulling a high-quality alpha matte and foreground from a video sequence. The framework builds upon techniques in natural image matting, optical flow computation, and background estimation. User interaction is comprised of garbage matte specification if background estimation is needed, and hand-drawn keyframe segmentations into \"foreground,\" \"background\" and \"unknown\". The segmentations, called trimaps, are interpolated across the video volume using forward and backward optical flow. Competing flow estimates are combined based on information about where flow is likely to be accurate. A Bayesian matting technique uses the flowed trimaps to yield high-quality mattes of moving foreground elements with complex boundaries filmed by a moving camera. A novel technique for smoke matte extraction is also demonstrated.", "", "", "Interactive video segmentation has become a popular topic in computer vision and computer graphics. Discrete optimization using maximum flow algorithms is one of the preferred techniques to perform interactive video segmentation. This paper extends pixel based graph cut approaches to overcome the problem of high memory requirements. The basic idea is to use a graph cut optimization framework on top of temporally coherent superpixels. While grouping spatially coherent pixels sharing similar color, these algorithms additionally exploit the temporal connections between those image regions. Thereby the number of variables in the optimization framework is severely reduced. The effectiveness of the proposed algorithm is shown quantitatively, qualitatively and through timing comparisons of different temporally coherent superpixel approaches. Experiments on video sequences show that temporally coherent superpixels lead to significant speed-up and reduced memory consumption. Thus, video sequences can be interactively segmented in a more efficient manner while producing better segmentation quality when compared to other approaches.", "We present an interactive system for efficiently extracting foreground objects from a video. We extend previous min-cut based image segmentation techniques to the domain of video with four new contributions. We provide a novel painting-based user interface that allows users to easily indicate the foreground object across space and time. We introduce a hierarchical mean-shift preprocess in order to minimize the number of nodes that min-cut must operate on. 
Within the min-cut we also define new local cost functions to augment the global costs defined in earlier work. Finally, we extend 2D alpha matting methods designed for images to work with 3D video volumes. We demonstrate that our matting approach preserves smoothness across both space and time. Our interactive video cutout system allows users to quickly extract foreground objects from video sequences for use in a variety of applications including compositing onto new backgrounds and NPR cartoon style rendering.", "We introduce JumpCut, a new mask transfer and interpolation method for interactive video cutout. Given a source frame for which a foreground mask is already available, we compute an estimate of the foreground mask at another, typically non-successive, target frame. Observing that the background and foreground regions typically exhibit different motions, we leverage these differences by computing two separate nearest-neighbor fields (split-NNF) from the target to the source frame. These NNFs are then used to jointly predict a coherent labeling of the pixels in the target frame. The same split-NNF is also used to aid a novel edge classifier in detecting silhouette edges (S-edges) that separate the foreground from the background. A modified level set method is then applied to produce a clean mask, based on the pixel labels and the S-edges computed by the previous two steps. The resulting mask transfer method may also be used for coherently interpolating the foreground masks between two distant source frames. Our results demonstrate that the proposed method is significantly more accurate than the existing state-of-the-art on a wide variety of video sequences. Thus, it reduces the required amount of user effort, and provides a basis for an effective interactive video object cutout tool." ] }
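As a hedged illustration of the flow-based label propagation mentioned above (not the method of any specific cited work), the sketch below warps a hard per-pixel label mask from one frame to the next with nearest-neighbor sampling along a given optical flow field; the toy mask and flow are invented for illustration.

```python
import numpy as np

def propagate_labels(labels_prev, flow):
    """Propagate an integer (H, W) label mask to the next frame, given a
    (H, W, 2) backward flow that maps each next-frame pixel to its source
    location in the previous frame. Nearest-neighbor sampling keeps labels discrete."""
    H, W = labels_prev.shape
    ys, xs = np.mgrid[0:H, 0:W]
    src_x = np.clip(np.rint(xs + flow[..., 0]), 0, W - 1).astype(int)
    src_y = np.clip(np.rint(ys + flow[..., 1]), 0, H - 1).astype(int)
    return labels_prev[src_y, src_x]

# Tiny example: a 2x2 object shifted one pixel to the right in the next frame.
labels = np.zeros((4, 6), dtype=int)
labels[1:3, 1:3] = 1
flow = np.zeros((4, 6, 2), dtype=np.float32)
flow[..., 0] = -1.0  # every next-frame pixel looks one pixel to the left
print(propagate_labels(labels, flow))
```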
1708.03423
2746703465
Video deblurring is a challenging problem as the blur is complex and usually caused by the combination of camera shakes, object motions, and depth variations. Optical flow can be used for kernel estimation since it predicts motion trajectories. However, the estimates are often inaccurate in complex scenes at object boundaries, which are crucial in kernel estimation. In this paper, we exploit semantic segmentation in each blurry frame to understand the scene contents and use different motion models for image regions to guide optical flow estimation. While existing pixel-wise blur models assume that the blur kernel is the same as optical flow during the exposure time, this assumption does not hold when the motion blur trajectory at a pixel is different from the estimated linear optical flow. We analyze the relationship between motion blur trajectory and optical flow, and present a novel pixel-wise non-linear kernel model to account for motion blur. The proposed blur model is based on the non-linear optical flow, which describes complex motion blur more effectively. Extensive experiments on challenging blurry videos demonstrate the proposed algorithm performs favorably against the state-of-the-art methods.
Video deblurring based on motion transformation detects sharp images or patches by computing the absolute displacements of pixels between adjacent frames, from which the clear contents are restored @cite_43 . Matsushita et al. @cite_45 transfer and interpolate sharper image pixels from neighboring frames for deblurring. Clear regions in a blurry video are detected to restore blurry regions of the same content in nearby frames @cite_22 (a simplified sketch of this sharpness-based borrowing follows the reference entries below). A multi-image enhancement method based on a unified Bayesian framework is proposed by Sunkavalli et al. @cite_12 to establish correspondences among neighboring frames. However, these transformation-based methods do not involve deconvolution and rely on sharp patches from nearby frames, which may not exist.
{ "cite_N": [ "@cite_43", "@cite_45", "@cite_22", "@cite_12" ], "mid": [ "2128376275", "2112529814", "2054619618", "1977907437" ], "abstract": [ "Blurred frames may happen sparsely in a video sequence acquired by consumer devices such as digital camcorders and digital cameras. In order to avoid visually annoying artifacts due to those blurred frames, this paper presents a novel motion deblurring algorithm in which a blurred frame can be reconstructed utilizing the high-resolution information of adjacent unblurred frames. First, a motion-compensated predictor for the blurred frame is derived from its neighboring unblurred frame via specific motion estimation. Then, an accurate blur kernel, which is difficult to directly obtain from the blurred frame itself, is computed using both the predictor and the blurred frame. Next, a residual deconvolution is applied to both of those frames in order to reduce the ringing artifacts inherently caused by conventional deconvolution. The blur kernel estimation and deconvolution processes are iteratively performed for the deblurred frame. Simulation results show that the proposed algorithm provides superior deblurring results over conventional deblurring algorithms while preserving details and reducing ringing artifacts.", "Video stabilization is an important video enhancement technology which aims at removing annoying shaky motion from videos. We propose a practical and robust approach of video stabilization that produces full-frame stabilized videos with good visual quality. While most previous methods end up with producing low resolution stabilized videos, our completion method can produce full-frame videos by naturally filling in missing image parts by locally aligning image data of neighboring frames. To achieve this, motion inpainting is proposed to enforce spatial and temporal consistency of the completion in both static and dynamic image areas. In addition, image quality in the stabilized video is enhanced with a new practical deblurring algorithm. Instead of estimating point spread functions, our method transfers and interpolates sharper image pixels of neighbouring frames to increase the sharpness of the frame. The proposed video completion and deblurring methods enabled us to develop a complete video stabilizer which can naturally keep the original image quality in the stabilized videos. The effectiveness of our method is confirmed by extensive experiments over a wide variety of videos.", "Videos captured by hand-held Cameras often contain significant camera shake, causing many frames to be blurry. Restoring shaky videos not only requires smoothing the camera motion and stabilizing the content, but also demands removing blur from video frames. However, video blur is hard to remove using existing single or multiple image deblurring techniques, as the blur kernel is both spatially and temporally varying. This paper presents a video deblurring method that can effectively restore sharp frames from blurry ones caused by camera shake. Our method is built upon the observation that due to the nature of camera shake, not all video frames are equally blurry. The same object may appear sharp on some frames while blurry on others. Our method detects sharp regions in the video, and uses them to restore blurry regions of the same content in nearby frames. Our method also ensures that the deblurred frames are both spatially and temporally coherent using patch-based synthesis. 
Experimental results show that our method can effectively remove complex video blur under the presence of moving objects and other outliers, which cannot be achieved using previous deconvolution-based approaches.", "We describe a unified framework for generating a single high-quality still image (\"snapshot”) from a short video clip. Our system allows the user to specify the desired operations for creating the output image, such as super resolution, noise and blur reduction, and selection of best focus. It also provides a visual summary of activity in the video by incorporating saliency-based objectives in the snapshot formation process. We show examples on a number of different video clips to illustrate the utility and flexibility of our system." ] }
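The following is a heavily simplified sketch of the transformation-based idea, under the assumption that frames are already roughly aligned; the cited methods additionally align, transfer, and interpolate content, which is omitted here. Per-frame sharpness is scored with the variance of a Laplacian response, and a blurry frame is blended with its sharpest nearby frame. All parameter values are hypothetical.

```python
import numpy as np
from scipy.ndimage import laplace

def sharpness(frame):
    """Variance of the Laplacian response: a common, crude sharpness score."""
    return float(np.var(laplace(frame.astype(np.float64))))

def borrow_from_sharp_neighbor(frames, idx, radius=2, blend=0.7):
    """For frame `idx`, find the sharpest frame within `radius` and blend it in.
    Assumes frames are grayscale (H, W) arrays that are already roughly aligned."""
    lo, hi = max(0, idx - radius), min(len(frames), idx + radius + 1)
    best = max(range(lo, hi), key=lambda i: sharpness(frames[i]))
    if best == idx:
        return frames[idx]
    return blend * frames[best] + (1 - blend) * frames[idx]
```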
1708.03423
2746703465
Video deblurring is a challenging problem as the blur is complex and usually caused by the combination of camera shakes, object motions, and depth variations. Optical flow can be used for kernel estimation since it predicts motion trajectories. However, the estimates are often inaccurate in complex scenes at object boundaries, which are crucial in kernel estimation. In this paper, we exploit semantic segmentation in each blurry frame to understand the scene contents and use different motion models for image regions to guide optical flow estimation. While existing pixel-wise blur models assume that the blur kernel is the same as optical flow during the exposure time, this assumption does not hold when the motion blur trajectory at a pixel is different from the estimated linear optical flow. We analyze the relationship between motion blur trajectory and optical flow, and present a novel pixel-wise non-linear kernel model to account for motion blur. The proposed blur model is based on the non-linear optical flow, which describes complex motion blur more effectively. Extensive experiments on challenging blurry videos demonstrate the proposed algorithm performs favorably against the state-of-the-art methods.
Deconvolution-based methods @cite_6 can be categorized into three approaches, based on a uniform kernel, a layered blur model, or pixel-wise kernels. Uniform kernel based methods @cite_23 @cite_40 assume that the blur in each frame is spatially invariant (the standard formulation is recalled after the reference entries below). These methods are less effective for complex scenes with spatially variant blurs.
{ "cite_N": [ "@cite_40", "@cite_23", "@cite_6" ], "mid": [ "", "2171500271", "2963661589" ], "abstract": [ "", "Recovery of degraded images due to motion blurring is a challenging problem in digital imaging. Most existing techniques on blind deblurring are not capable of removing complex motion blurring from the blurred images of complex structures. One promising approach is to recover the clear image using multiple images captured for the scene. However, in practice it is observed that such a multi-frame approach can recover a high-quality clear image of the scene only after multiple blurred image frames are accurately aligned during pre-processing, which is a very challenging task even with user interactions. In this paper, by exploring the sparsity of the motion blur kernel and the clear image under certain domains, we propose an alternative iteration approach to simultaneously identify the blur kernels of given blurred images and restore a clear image. Our proposed approach is not only robust to image formation noises, but is also robust to the alignment errors among multiple images. A modified version of linearized Bregman iteration is then developed to efficiently solve the resulting minimization problem. Experiments show that our proposed algorithm is capable of accurately estimating the blur kernels of complex camera motions with minimal requirements on the accuracy of image alignment. As a result, our method is capable of automatically recovering a high-quality clear image from multiple blurred images.", "Videos captured with hand-held cameras often suffer from a significant amount of blur, mainly caused by the inevitable natural tremor of the photographer’s hand. In this work, we present an algorithm that removes blur due to camera shake by combining information in the Fourier domain from nearby frames in a video. The dynamic nature of typical videos with the presence of multiple moving objects and occlusions makes this problem of camera shake removal extremely challenging, in particular when low complexity is needed. Given an input video frame, we first create a consistent registered version of temporally adjacent frames. Then, the set of consistently registered frames is block-wise fused in the Fourier domain with weights depending on the Fourier spectrum magnitude. The method is motivated from the physiological fact that camera shake blur has a random nature; therefore, nearby video frames are generally blurred differently. Experiments with numerous videos recorded in the wild, along with extensive comparisons, show that the proposed algorithm achieves state-of-the-art results while at the same time being much faster than its competitors." ] }
1708.03423
2746703465
Video deblurring is a challenging problem as the blur is complex and usually caused by the combination of camera shakes, object motions, and depth variations. Optical flow can be used for kernel estimation since it predicts motion trajectories. However, the estimates are often inaccurate in complex scenes at object boundaries, which are crucial in kernel estimation. In this paper, we exploit semantic segmentation in each blurry frame to understand the scene contents and use different motion models for image regions to guide optical flow estimation. While existing pixel-wise blur models assume that the blur kernel is the same as optical flow during the exposure time, this assumption does not hold when the motion blur trajectory at a pixel is different from the estimated linear optical flow. We analyze the relationship between motion blur trajectory and optical flow, and present a novel pixel-wise non-linear kernel model to account for motion blur. The proposed blur model is based on the non-linear optical flow, which describes complex motion blur more effectively. Extensive experiments on challenging blurry videos demonstrate the proposed algorithm performs favorably against the state-of-the-art methods.
To deal with complex motion blurs, layered blur models have been developed to handle locally varying blurs @cite_9 @cite_25 . Cho et al. @cite_9 simultaneously estimate multiple object motions, blur kernels, and the associated image segmentations to solve the video deblurring problem. Kim et al. @cite_5 adopt a nonlocal regularization on the estimated residual and blurred image to handle object segmentation for dynamic scene deblurring. A layered motion model is proposed by Bar et al. @cite_27 to segment images into foreground and background layers and to estimate a linear blur kernel for the foreground layer. Wulff and Black @cite_25 extend this layered model to segment images into foreground and background regions, from which global motion blur kernels are estimated based on affine motion. However, these methods depend heavily on obtaining accurate segments, since each region is deblurred based on the segmentation.
{ "cite_N": [ "@cite_5", "@cite_9", "@cite_27", "@cite_25" ], "mid": [ "2118456997", "2166576327", "2166475174", "159023233" ], "abstract": [ "Most conventional single image deblurring methods assume that the underlying scene is static and the blur is caused by only camera shake. In this paper, in contrast to this restrictive assumption, we address the deblurring problem of general dynamic scenes which contain multiple moving objects as well as camera shake. In case of dynamic scenes, moving objects and background have different blur motions, so the segmentation of the motion blur is required for deblurring each distinct blur motion accurately. Thus, we propose a novel energy model designed with the weighted sum of multiple blur data models, which estimates different motion blurs and their associated pixel-wise weights, and resulting sharp image. In this framework, the local weights are determined adaptively and get high values when the corresponding data models have high data fidelity. And, the weight information is used for the segmentation of the motion blur. Non-local regularization of weights are also incorporated to produce more reliable segmentation results. A convex optimization-based method is used for the solution of the proposed energy model. Experimental results demonstrate that our method outperforms conventional approaches in deblurring both dynamic scenes and static scenes.", "We propose a method for removing non-uniform motion blur from multiple blurry images. Traditional methods focus on estimating a single motion blur kernel for the entire image. In contrast, we aim to restore images blurred by unknown, spatially varying motion blur kernels caused by different relative motions between the camera and the scene. Our algorithm simultaneously estimates multiple motions, motion blur kernels, and the associated image segments. We formulate the problem as a regularized energy function and solve it using an alternating optimization technique. Real- world experiments demonstrate the effectiveness of the proposed method.", "The problem of motion estimation and restoration of objects in a blurred video sequence is addressed in this paper. Fast movement of the objects, together with the aperture time of the camera, result in a motion-blurred image. The direct velocity estimation from this blurred video is inaccurate. On the other hand, an accurate estimation of the velocity of the moving objects is critical for restoration of motion-blurred video. Therefore, restoration needs accurate motion estimation and vice versa, and a joint process is called for. To address this problem we derive a novel model of the blurring process and propose a Mumford-Shah type of variational framework, acting on consecutive frames, for joint object deblurring and velocity estimation. The proposed procedure distinguishes between the moving object and the background and is accurate also close to the boundary of the moving object. Experimental results both on simulated and real data show the importance of this joint estimation and its superior performance when compared to the independent estimation of motion and restoration.", "Videos contain complex spatially-varying motion blur due to the combination of object motion, camera motion, and depth variation with finite shutter speeds. Existing methods to estimate optical flow, deblur the images, and segment the scene fail in such cases. 
In particular, boundaries between differently moving objects cause problems, because here the blurred images are a combination of the blurred appearances of multiple surfaces. We address this with a novel layered model of scenes in motion. From a motion-blurred video sequence, we jointly estimate the layer segmentation and each layer’s appearance and motion. Since the blur is a function of the layer motion and segmentation, it is completely determined by our generative model. Given a video, we formulate the optimization problem as minimizing the pixel error between the blurred frames and images synthesized from the model, and solve it using gradient descent. We demonstrate our approach on synthetic and real sequences." ] }
1708.03423
2746703465
Video deblurring is a challenging problem as the blur is complex and usually caused by the combination of camera shakes, object motions, and depth variations. Optical flow can be used for kernel estimation since it predicts motion trajectories. However, the estimates are often inaccurate in complex scenes at object boundaries, which are crucial in kernel estimation. In this paper, we exploit semantic segmentation in each blurry frame to understand the scene contents and use different motion models for image regions to guide optical flow estimation. While existing pixel-wise blur models assume that the blur kernel is the same as optical flow during the exposure time, this assumption does not hold when the motion blur trajectory at a pixel is different from the estimated linear optical flow. We analyze the relationship between motion blur trajectory and optical flow, and present a novel pixel-wise non-linear kernel model to account for motion blur. The proposed blur model is based on the non-linear optical flow, which describes complex motion blur more effectively. Extensive experiments on challenging blurry videos demonstrate the proposed algorithm performs favorably against the state-of-the-art methods.
To address this issue, Li et al. @cite_31 parameterize the observed frames in a blurry video by homographies and recover sharp contents by jointly estimating blur kernels, camera duty cycles, and latent images. In @cite_29 , a projective motion path model @cite_38 is used to estimate blur kernels by exploiting inter-frame misalignments. However, blur models based on homographies and projections are designed to account for global camera motion and cannot model complex object motions and depth variations. To address this limitation, Kim and Lee @cite_26 propose a segmentation-free algorithm that uses bidirectional optical flow to model motion blur in dynamic scenes. This method is extended to generalized video deblurring in @cite_7 by alternately estimating optical flow and latent frames. Although promising results have been obtained, the assumption that the motion blur is the same as the optical flow does not hold in complex scenes, as illustrated in Figure , especially when the camera duty cycle is large.
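The linear assumption criticized above can be written down directly: during the exposure, the blur kernel at a pixel is a line segment along the optical flow vector, scaled by the camera duty cycle. The sketch below rasterizes such a kernel; the sampling density and kernel size are arbitrary illustrative choices, and the non-linear trajectory model proposed in this paper is intentionally not reproduced here.

```python
import numpy as np

def linear_blur_kernel(flow_uv, duty_cycle, ksize=31):
    """Rasterize the pixel-wise *linear* blur kernel implied by optical flow.

    flow_uv    : (u, v) optical flow at one pixel, in pixels per frame
    duty_cycle : fraction of the inter-frame time the shutter is open
    ksize      : odd kernel size (illustrative choice)
    """
    u, v = flow_uv
    kernel = np.zeros((ksize, ksize), dtype=np.float64)
    c = ksize // 2
    # Sample the straight trajectory of length duty_cycle * |flow|
    # centered at the pixel and accumulate it into the kernel grid.
    for t in np.linspace(-0.5, 0.5, 64):
        xi = int(round(c + duty_cycle * t * u))
        yi = int(round(c + duty_cycle * t * v))
        if 0 <= yi < ksize and 0 <= xi < ksize:
            kernel[yi, xi] += 1.0
    return kernel / max(kernel.sum(), 1e-8)

# Usage: k = linear_blur_kernel((6.0, -2.0), duty_cycle=0.8)
```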
{ "cite_N": [ "@cite_38", "@cite_26", "@cite_7", "@cite_29", "@cite_31" ], "mid": [ "2106923440", "2044005793", "1917431891", "1910977477", "1996109124" ], "abstract": [ "This paper addresses how to model and correct image blur that arises when a camera undergoes ego motion while observing a distant scene. In particular, we discuss how the blurred image can be modeled as an integration of the clear scene under a sequence of planar projective transformations (i.e., homographies) that describe the camera's path. This projective motion path blur model is more effective at modeling the spatially varying motion blur exhibited by ego motion than conventional methods based on space-invariant blur kernels. To correct the blurred image, we describe how to modify the Richardson-Lucy (RL) algorithm to incorporate this new blur model. In addition, we show that our projective motion RL algorithm can incorporate state-of-the-art regularization priors to improve the deblurred results. The projective motion path blur model, along with the modified RL algorithm, is detailed, together with experimental results demonstrating its overall effectiveness. Statistical analysis on the algorithm's convergence properties and robustness to noise is also provided.", "Most state-of-the-art dynamic scene deblurring methods based on accurate motion segmentation assume that motion blur is small or that the specific type of motion causing the blur is known. In this paper, we study a motion segmentation-free dynamic scene deblurring method, which is unlike other conventional methods. When the motion can be approximated to linear motion that is locally (pixel-wise) varying, we can handle various types of blur caused by camera shake, including out-of-plane motion, depth variation, radial distortion, and so on. Thus, we propose a new energy model simultaneously estimating motion flow and the latent image based on robust total variation (TV)-L1 model. This approach is necessary to handle abrupt changes in motion without segmentation. Furthermore, we address the problem of the traditional coarse-to-fine deblurring framework, which gives rise to artifacts when restoring small structures with distinct motion. We thus propose a novel kernel re-initialization method which reduces the error of motion flow propagated from a coarser level. Moreover, a highly effective convex optimization-based solution mitigating the computational difficulties of the TV-L1 model is established. Comparative experimental results on challenging real blurry images demonstrate the efficiency of the proposed method.", "Several state-of-the-art video deblurring methods are based on a strong assumption that the captured scenes are static. These methods fail to deblur blurry videos in dynamic scenes. We propose a video deblurring method to deal with general blurs inherent in dynamic scenes, contrary to other methods. To handle locally varying and general blurs caused by various sources, such as camera shake, moving objects, and depth variation in a scene, we approximate pixel-wise kernel with bidirectional optical flows. Therefore, we propose a single energy model that simultaneously estimates optical flows and latent frames to solve our deblurring problem. We also provide a framework and efficient solvers to optimize the energy model. By minimizing the proposed energy function, we achieve significant improvements in removing blurs and estimating accurate optical flows in blurry frames. 
Extensive experimental results demonstrate the superiority of the proposed method in real and challenging videos that state-of-the-art methods fail in either deblurring or optical flow estimation.", "Camera motion introduces motion blur, degrading the quality of video. A video deblurring method is proposed based on two observations: (i) camera motion within capture of each individual frame leads to motion blur; (ii) camera motion between frames yields inter-frame mis-alignment that can be exploited for blur removal. The proposed method effectively leverages the information distributed across multiple video frames due to camera motion, jointly estimating the motion between consecutive frames and blur within each frame. This joint analysis is crucial for achieving effective restoration by leveraging temporal information. Extensive experiments are carried out on synthetic data as well as real-world blurry videos. Comparisons with several state-of-the-art methods verify the effectiveness of the proposed method.", "In this paper, we show how to generate a sharp panorama from a set of motion-blurred video frames. Our technique is based on joint global motion estimation and multi-frame deblurring. It also automatically computes the duty cycle of the video, namely the percentage of time between frames that is actually exposure time. The duty cycle is necessary for allowing the blur kernels to be accurately extracted and then removed. We demonstrate our technique on a number of videos." ] }
1708.03423
2746703465
Video deblurring is a challenging problem as the blur is complex and usually caused by the combination of camera shakes, object motions, and depth variations. Optical flow can be used for kernel estimation since it predicts motion trajectories. However, the estimates are often inaccurate in complex scenes at object boundaries, which are crucial in kernel estimation. In this paper, we exploit semantic segmentation in each blurry frame to understand the scene contents and use different motion models for image regions to guide optical flow estimation. While existing pixel-wise blur models assume that the blur kernel is the same as optical flow during the exposure time, this assumption does not hold when the motion blur trajectory at a pixel is different from the estimated linear optical flow. We analyze the relationship between motion blur trajectory and optical flow, and present a novel pixel-wise non-linear kernel model to account for motion blur. The proposed blur model is based on the non-linear optical flow, which describes complex motion blur more effectively. Extensive experiments on challenging blurry videos demonstrate the proposed algorithm performs favorably against the state-of-the-art methods.
Recently, image and video restoration algorithms based on convolutional neural networks, which aim to recover the underlying sharp contents, have emerged. In @cite_34 , deep neural networks are used for single-image deblurring with synthetic training data. Su et al. @cite_42 propose a deep encoder-decoder network to address real-world video deblurring. Nevertheless, when images are heavily blurred, this method may introduce temporal artifacts that become more visible after stabilization.
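For illustration only, a toy encoder-decoder with a residual connection is sketched below; it shows the general structure such learning-based deblurring networks share, but its layer sizes and depth are placeholders and it is not the network of @cite_42 .

```python
import torch
import torch.nn as nn

class TinyDeblurNet(nn.Module):
    """Toy encoder-decoder for frame deblurring (illustrative only)."""

    def __init__(self, in_channels=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, in_channels, 4, stride=2, padding=1),
        )

    def forward(self, blurry):
        # Predict a residual and add it back, a common trick so the
        # network only has to learn the sharpening correction.
        return blurry + self.decoder(self.encoder(blurry))

# Usage: sharp = TinyDeblurNet()(torch.randn(1, 3, 128, 128))
```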
{ "cite_N": [ "@cite_34", "@cite_42" ], "mid": [ "1457323852", "2558246008" ], "abstract": [ "We describe a learning-based approach to blind image deconvolution. It uses a deep layered architecture, parts of which are borrowed from recent work on neural network learning, and parts of which incorporate computations that are specific to image deconvolution. The system is trained end-to-end on a set of artificially generated training examples, enabling competitive performance in blind deconvolution, both with respect to quality and runtime.", "Motion blur from camera shake is a major problem in videos captured by hand-held devices. Unlike single-image deblurring, video-based approaches can take advantage of the abundant information that exists across neighboring frames. As a result the best performing methods rely on aligning nearby frames. However, aligning images is a computationally expensive and fragile procedure, and methods that aggregate information must therefore be able to identify which regions have been accurately aligned and which have not, a task which requires high level scene understanding. In this work, we introduce a deep learning solution to video deblurring, where a CNN is trained end-to-end to learn how to accumulate information across frames. To train this network, we collected a dataset of real videos recorded with a high framerate camera, which we use to generate synthetic motion blur for supervision. We show that the features learned from this dataset extend to deblurring motion blur that arises due to camera shake in a wide range of videos, and compare the quality of results to a number of other baselines." ] }
1708.03423
2746703465
Video deblurring is a challenging problem as the blur is complex and usually caused by the combination of camera shakes, object motions, and depth variations. Optical flow can be used for kernel estimation since it predicts motion trajectories. However, the estimates are often inaccurate in complex scenes at object boundaries, which are crucial in kernel estimation. In this paper, we exploit semantic segmentation in each blurry frame to understand the scene contents and use different motion models for image regions to guide optical flow estimation. While existing pixel-wise blur models assume that the blur kernel is the same as optical flow during the exposure time, this assumption does not hold when the motion blur trajectory at a pixel is different from the estimated linear optical flow. We analyze the relationship between motion blur trajectory and optical flow, and present a novel pixel-wise non-linear kernel model to account for motion blur. The proposed blur model is based on the non-linear optical flow, which describes complex motion blur more effectively. Extensive experiments on challenging blurry videos demonstrate the proposed algorithm performs favorably against the state-of-the-art methods.
Semantic segmentation @cite_11 @cite_41 @cite_30 aims to cluster image pixels of the same object class under assigned labels. Numerous recent methods use semantic segmentation to resolve ambiguities in road sign detection @cite_24 , 3D reconstruction @cite_4 , and optical flow estimation, where different motion models are applied to different object regions @cite_35 .
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_4", "@cite_41", "@cite_24", "@cite_11" ], "mid": [ "2950241165", "", "2150134683", "", "2150581781", "1973255633" ], "abstract": [ "Surveillance video parsing, which segments the video frames into several labels, e.g., face, pants, left-leg, has wide applications. However,pixel-wisely annotating all frames is tedious and inefficient. In this paper, we develop a Single frame Video Parsing (SVP) method which requires only one labeled frame per video in training stage. To parse one particular frame, the video segment preceding the frame is jointly considered. SVP (1) roughly parses the frames within the video segment, (2) estimates the optical flow between frames and (3) fuses the rough parsing results warped by optical flow to produce the refined parsing result. The three components of SVP, namely frame parsing, optical flow estimation and temporal fusion are integrated in an end-to-end manner. Experimental results on two surveillance video datasets show the superiority of SVP over state-of-the-arts.", "", "Both image segmentation and dense 3D modeling from images represent an intrinsically ill-posed problem. Strong regularizers are therefore required to constrain the solutions from being 'too noisy'. Unfortunately, these priors generally yield overly smooth reconstructions and or segmentations in certain regions whereas they fail in other areas to constrain the solution sufficiently. In this paper we argue that image segmentation and dense 3D reconstruction contribute valuable information to each other's task. As a consequence, we propose a rigorous mathematical framework to formulate and solve a joint segmentation and dense reconstruction problem. Image segmentations provide geometric cues about which surface orientations are more likely to appear at a certain location in space whereas a dense 3D reconstruction yields a suitable regularization for the segmentation problem by lifting the labeling from 2D images to 3D space. We show how appearance-based cues and 3D surface orientation priors can be learned from training data and subsequently used for class-specific regularization. Experimental results on several real data sets highlight the advantages of our joint formulation.", "", "This paper presents an automatic road-sign detection and recognition system based on support vector machines (SVMs). In automatic traffic-sign maintenance and in a visual driver-assistance system, road-sign detection and recognition are two of the most important functions. Our system is able to detect and recognize circular, rectangular, triangular, and octagonal signs and, hence, covers all existing Spanish traffic-sign shapes. Road signs provide drivers important information and help them to drive more safely and more easily by guiding and warning them and thus regulating their actions. The proposed recognition system is based on the generalization properties of SVMs. The system consists of three stages: 1) segmentation according to the color of the pixel; 2) traffic-sign detection by shape classification using linear SVMs; and 3) content recognition based on Gaussian-kernel SVMs. Because of the used segmentation stage by red, blue, yellow, white, or combinations of these colors, all traffic signs can be detected, and some of them can be detected by several colors. Results show a high success rate and a very low amount of false positives in the final recognition stage. 
From these results, we can conclude that the proposed algorithm is invariant to translation, rotation, scale, and, in many situations, even to partial occlusions", "In this work, the human parsing task, namely decomposing a human image into semantic fashion body regions, is formulated as an active template regression (ATR) problem, where the normalized mask of each fashion body item is expressed as the linear combination of the learned mask templates, and then morphed to a more precise mask with the active shape parameters, including position, scale and visibility of each semantic region. The mask template coefficients and the active shape parameters together can generate the human parsing results, and are thus called the structure outputs for human parsing. The deep Convolutional Neural Network (CNN) is utilized to build the end-to-end relation between the input human image and the structure outputs for human parsing. More specifically, the structure outputs are predicted by two separate networks. The first CNN network is with max-pooling, and designed to predict the template coefficients for each label mask, while the second CNN network is without max-pooling to preserve sensitivity to label mask position and accurately predict the active shape parameters. For a new image, the structure outputs of the two networks are fused to generate the probability of each label for each pixel, and super-pixel smoothing is finally used to refine the human parsing result. Comprehensive evaluations on a large dataset well demonstrate the significant superiority of the ATR framework over other state-of-the-arts for human parsing. In particular, the F1-score reaches @math percent by our ATR framework, significantly higher than @math percent based on the state-of-the-art algorithm [28] ." ] }
1905.13466
2947156782
In this paper, we aim to recover the 3D human pose from 2D body joints of a single image. The major challenge in this task is the depth ambiguity since different 3D poses may produce similar 2D poses. Although many recent advances in this problem are found in both unsupervised and supervised learning approaches, the performances of most of these approaches are greatly affected by insufficient diversities and richness of training data. To alleviate this issue, we propose an unsupervised learning approach, which is capable of estimating various complex poses well under limited available training data. Specifically, we propose a Shape Decomposition Model (SDM) in which a 3D pose is considered as the superposition of two parts which are global structure together with some deformations. Based on SDM, we estimate these two parts explicitly by solving two sets of different distributed combination coefficients of geometric priors. In addition, to obtain geometric priors, a joint dictionary learning algorithm is proposed to extract both coarse and fine pose clues simultaneously from limited training data. Quantitative evaluations on several widely used datasets demonstrate that our approach yields better performances over other competitive approaches. Especially, on some categories with more complex deformations, significant improvements are achieved by our approach. Furthermore, qualitative experiments conducted on in-the-wild images also show the effectiveness of the proposed approach.
The model-free methods estimate the 3D pose directly from 2D observations of the image. Elgammal and Lee @cite_71 infer the 3D pose from human silhouettes by learning representations of activity manifolds. Agarwal and Triggs @cite_11 apply nonlinear regression to estimate the pose from shape descriptor vectors of silhouettes. To improve estimation performance, Sedai et al. @cite_12 combine shape and appearance descriptors to infer the 3D pose jointly within a discriminative learning framework. Since the above approaches cannot capture the interdependencies among outputs, Bo and Sminchisescu @cite_50 use the Kullback-Leibler divergence to match the distributions of inputs and outputs. Unlike these learning-based approaches, Jiang et al. @cite_54 search for optimal poses through millions of exemplars using a nearest-neighbor scheme. As deep convolutional networks yield significant performance gains in many areas, various ConvNet architectures have been designed to estimate the 3D pose @cite_30 @cite_56 @cite_43 @cite_41 @cite_13 @cite_73 @cite_77 . However, collecting a large amount of paired 2D-3D correspondences, on which the performance of model-free techniques depends, remains a challenge, although many data augmentation approaches have been proposed @cite_14 @cite_65 .
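As a concrete example of the model-free, regression-style approach, the sketch below lifts detected 2D joints to 3D with a small fully connected network, in the spirit of the simple baseline of @cite_73 ; the joint count, layer widths, and dropout rate are illustrative assumptions, not the published configuration.

```python
import torch
import torch.nn as nn

class Lifter2Dto3D(nn.Module):
    """Minimal 'lifting' network: regress 3D joint positions from 2D joints."""

    def __init__(self, n_joints=17, hidden=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * n_joints, hidden), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(hidden, 3 * n_joints),
        )

    def forward(self, joints_2d):
        # joints_2d: (batch, 2 * n_joints) -> (batch, n_joints, 3)
        return self.net(joints_2d).view(-1, joints_2d.shape[1] // 2, 3)

# Usage: pose3d = Lifter2Dto3D()(torch.randn(8, 34))  # 8 poses, 17 joints
```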
{ "cite_N": [ "@cite_30", "@cite_14", "@cite_41", "@cite_54", "@cite_65", "@cite_56", "@cite_43", "@cite_77", "@cite_50", "@cite_71", "@cite_73", "@cite_13", "@cite_12", "@cite_11" ], "mid": [ "2113325037", "2962729993", "2557698284", "2105041273", "2576289912", "2293220651", "2524613005", "2798646183", "2169738563", "2152386463", "2612706635", "2554247908", "2111986959", "2158268505" ], "abstract": [ "We propose a method for human pose estimation based on Deep Neural Networks (DNNs). The pose estimation is formulated as a DNN-based regression problem towards body joints. We present a cascade of such DNN regres- sors which results in high precision pose estimates. The approach has the advantage of reasoning about pose in a holistic fashion and has a simple but yet powerful formula- tion which capitalizes on recent advances in Deep Learn- ing. We present a detailed empirical analysis with state-of- art or better performance on four academic benchmarks of diverse real-world images.", "Human 3D pose estimation from a single image is a challenging task with numerous applications. Convolutional Neural Networks (CNNs) have recently achieved superior performance on the task of 2D pose estimation from a single image, by training on images with 2D annotations collected by crowd sourcing. This suggests that similar success could be achieved for direct estimation of 3D poses. However, 3D poses are much harder to annotate, and the lack of suitable annotated training images hinders attempts towards end-to-end solutions. To address this issue, we opt to automatically synthesize training images with ground truth pose annotations. Our work is a systematic study along this road. We find that pose space coverage and texture diversity are the key ingredients for the effectiveness of synthetic training data. We present a fully automatic, scalable approach that samples the human pose space for guiding the synthesis procedure and extracts clothing textures from real images. Furthermore, we explore domain adaptation for bridging the gap between our synthetic training images and real testing photos. We demonstrate that CNNs trained with our synthetic images out-perform those trained with real photos on 3D pose estimation tasks.", "This paper addresses the problem of 3D human pose estimation from a single image. We follow a standard two-step pipeline by first detecting the 2D position of the N body joints, and then using these observations to infer 3D pose. For the first step, we use a recent CNN-based detector. For the second step, most existing approaches perform 2N-to-3N regression of the Cartesian joint coordinates. We show that more precise pose estimates can be obtained by representing both the 2D and 3D human poses using NxN distance matrices, and formulating the problem as a 2D-to-3D distance matrix regression. For learning such a regressor we leverage on simple Neural Network architectures, which by construction, enforce positivity and symmetry of the predicted matrices. The approach has also the advantage to naturally handle missing observations and allowing to hypothesize the position of non-observed joints. Quantitative results on Humaneva and Human3.6M datasets demonstrate consistent performance gains over state-of-the-art. 
Qualitative evaluation on the images in-the-wild of the LSP dataset, using the regressor learned on Human3.6M, reveals very promising generalization results.", "We propose a novel exemplar based method to estimate 3D human poses from single images by using only the joint correspondences. Due to the inherent depth ambiguity, estimating 3D poses from a monocular view is a challenging problem. We solve the problem by searching through millions of exemplars for optimal poses. Compared with traditional parametric schemes, our method is able to handle very large pose database, relieves parameter tweaking, is easier to train and is more effective for complex pose 3D reconstruction. The proposed method estimates upper body poses and lower body poses sequentially, which implicitly squares the size of the exemplar database and enables us to reconstruct unconstrained poses efficiently. Our implementation based on the kd-tree achieves real-time performance. The experiments on a variety of images show that the proposed method is efficient and effective.", "Estimating human pose, shape, and motion from images and videos are fundamental challenges with many applications. Recent advances in 2D human pose estimation use large amounts of manually-labeled training data for learning convolutional neural networks (CNNs). Such data is time consuming to acquire and difficult to extend. Moreover, manual labeling of 3D pose, depth and motion is impractical. In this work we present SURREAL (Synthetic hUmans foR REAL tasks): a new large-scale dataset with synthetically-generated but realistic images of people rendered from 3D sequences of human motion capture data. We generate more than 6 million frames together with ground truth pose, depth maps, and segmentation masks. We show that CNNs trained on our synthetic dataset allow for accurate human depth estimation and human part segmentation in real RGB images. Our results and the new dataset open up new possibilities for advancing person analysis using cheap and large-scale synthetic data.", "In this paper, we propose a deep convolutional neural network for 3D human pose estimation from monocular images. We train the network using two strategies: (1) a multi-task framework that jointly trains pose regression and body part detectors; (2) a pre-training strategy where the pose regressor is initialized using a network trained for body part detection. We compare our network on a large data set and achieve significant improvement over baseline methods. Human pose estimation is a structured prediction problem, i.e., the locations of each body part are highly correlated. Although we do not add constraints about the correlations between body parts to the network, we empirically show that the network has disentangled the dependencies among different body parts, and learned their correlations.", "This paper focuses on structured-output learning using deep neural networks for 3D human pose estimation from monocular images. Our network takes an image and 3D pose as inputs and outputs a score value, which is high when the image-pose pair matches and low otherwise. The network structure consists of a convolutional neural network for image feature extraction, followed by two sub-networks for transforming the image features and pose into a joint embedding. The score function is then the dot-product between the image and pose embeddings. The image-pose embedding and score function are jointly trained using a maximum-margin cost function. 
Our proposed framework can be interpreted as a special form of structured support vector machines where the joint feature space is discriminatively learned using deep neural networks. We also propose an efficient recurrent neural network for performing inference with the learned image-embedding. We test our framework on the Human3.6m dataset and obtain state-of-the-art results compared to other recent methods. Finally, we present visualizations of the image-pose embedding space, demonstrating the network has learned a high-level embedding of body-orientation and pose-configuration.", "Our ability to train end-to-end systems for 3D human pose estimation from single images is currently constrained by the limited availability of 3D annotations for natural images. Most datasets are captured using Motion Capture (MoCap) systems in a studio setting and it is difficult to reach the variability of 2D human pose datasets, like MPII or LSP. To alleviate the need for accurate 3D ground truth, we propose to use a weaker supervision signal provided by the ordinal depths of human joints. This information can be acquired by human annotators for a wide range of images and poses. We showcase the effectiveness and flexibility of training Convolutional Networks (ConvNets) with these ordinal relations in different settings, always achieving competitive performance with ConvNets trained with accurate 3D joint coordinates. Additionally, to demonstrate the potential of the approach, we augment the popular LSP and MPII datasets with ordinal depth annotations. This extension allows us to present quantitative and qualitative evaluation in non-studio conditions. Simultaneously, these ordinal annotations can be easily incorporated in the training procedure of typical ConvNets for 3D human pose. Through this inclusion we achieve new state-of-the-art performance for the relevant benchmarks and validate the effectiveness of ordinal depth supervision for 3D human pose.", "We describe twin Gaussian processes (TGP), a generic structured prediction method that uses Gaussian process (GP) priors on both covariates and responses, both multivariate, and estimates outputs by minimizing the Kullback-Leibler divergence between two GP modeled as normal distributions over finite index sets of training and testing examples, emphasizing the goal that similar inputs should produce similar percepts and this should hold, on average, between their marginal distributions. TGP captures not only the interdependencies between covariates, as in a typical GP, but also those between responses, so correlations among both inputs and outputs are accounted for. TGP is exemplified, with promising results, for the reconstruction of 3d human poses from monocular and multicamera video sequences in the recently introduced HumanEva benchmark, where we achieve 5 cm error on average per 3d marker for models trained jointly, using data from multiple people and multiple activities. The method is fast and automatic: it requires no hand-crafting of the initial pose, camera calibration parameters, or the availability of a 3d body model associated with human subjects used for training or testing.", "We aim to infer 3D body pose directly from human silhouettes. Given a visual input (silhouette), the objective is to recover the intrinsic body configuration, recover the viewpoint, reconstruct the input and detect any spatial or temporal outliers. 
In order to recover intrinsic body configuration (pose) from the visual input (silhouette), we explicitly learn view-based representations of activity manifolds as well as learn mapping functions between such central representations and both the visual input space and the 3D body pose space. The body pose can be recovered in a closed form in two steps by projecting the visual input to the learned representations of the activity manifold, i.e., finding the point on the learned manifold representation corresponding to the visual input, followed by interpolating 3D pose.", "Following the success of deep convolutional networks, state-of-the-art methods for 3d human pose estimation have focused on deep end-to-end systems that predict 3d joint locations given raw image pixels. Despite their excellent performance, it is often not easy to understand whether their remaining error stems from a limited 2d pose (visual) understanding, or from a failure to map 2d poses into 3- dimensional positions.,,With the goal of understanding these sources of error, we set out to build a system that given 2d joint locations predicts 3d positions. Much to our surprise, we have found that, with current technology, \"lifting\" ground truth 2d joint locations to 3d space is a task that can be solved with a remarkably low error rate: a relatively simple deep feedforward network outperforms the best reported result by about 30 on Human3.6M, the largest publicly available 3d pose estimation benchmark. Furthermore, training our system on the output of an off-the-shelf state-of-the-art 2d detector (i.e., using images as input) yields state of the art results – this includes an array of systems that have been trained end-to-end specifically for this task. Our results indicate that a large portion of the error of modern deep 3d pose estimation systems stems from their visual analysis, and suggests directions to further advance the state of the art in 3d human pose estimation.", "This paper addresses the challenge of 3D human pose estimation from a single color image. Despite the general success of the end-to-end learning paradigm, top performing approaches employ a two-step solution consisting of a Convolutional Network (ConvNet) for 2D joint localization and a subsequent optimization step to recover 3D pose. In this paper, we identify the representation of 3D pose as a critical issue with current ConvNet approaches and make two important contributions towards validating the value of end-to-end learning for this task. First, we propose a fine discretization of the 3D space around the subject and train a ConvNet to predict per voxel likelihoods for each joint. This creates a natural representation for 3D pose and greatly improves performance over the direct regression of joint coordinates. Second, to further improve upon initial estimates, we employ a coarse-to-fine prediction scheme. This step addresses the large dimensionality increase and enables iterative refinement and repeated processing of the image features. The proposed approach outperforms all state-of-the-art methods on standard benchmarks achieving a relative error reduction greater than 30 on average. 
Additionally, we investigate using our volumetric representation in a related architecture which is suboptimal compared to our end-to-end approach, but is of practical interest, since it enables training when no image with corresponding 3D groundtruth is available, and allows us to present compelling results for in-the-wild images.", "This paper presents a method for combining the shape and appearance feature types in a discriminative learning framework for human pose estimation. We first present a new appearance descriptor that is distinctive and resilient to noise for 3D human pose estimation. We then combine the proposed appearance descriptor with a shape descriptor computed from the silhouette of the human subject using discriminative learning. Our method, which we refer to as a localized decision level fusion technique, is based on clustering the output pose space into several partitions and learning a decision level fusion model for the shape and appearance descriptors in each region. The combined shape and appearance descriptor allows complementary information of the individual feature types to be exploited, leading to improved performance of the pose estimation system. We evaluate our proposed fusion method with feature level fusion and kernel level fusion methods using a synchronized video and 3D motion dataset. Our experimental results show that the proposed feature combination method gives more accurate pose estimation than the one obtained from each individual feature type. Among the three fusion methods, our localized decision level fusion method is demonstrated to perform the best for 3D pose estimation.", "We describe a learning based method for recovering 3D human body pose from single images and monocular image sequences. Our approach requires neither an explicit body model nor prior labelling of body pans in the image. Instead, it recovers pose by direct nonlinear regression against shape descriptor vectors extracted automatically from image silhouettes. For robustness against local silhouette segmentation errors, silhouette shape is encoded by histogram-of-shape-contexts descriptors. For the main regression, we evaluate both regularized least squares and relevance vector machine (RVM) regressors over both linear and kernel bases. The RVM's provide much sparser regressors without compromising performance, and kernel bases give a small but worthwhile improvement in performance. For realism and good generalization with respect to viewpoints, we train the regressors on images resynthesized from real human motion capture data, and test it both quantitatively on similar independent test data, and qualitatively on a real image sequence. Mean angular errors of 6-7 degrees are obtained - a factor of 3 better than the current state of the art for the much simpler upper body problem." ] }
1905.13466
2947156782
In this paper, we aim to recover the 3D human pose from 2D body joints of a single image. The major challenge in this task is the depth ambiguity since different 3D poses may produce similar 2D poses. Although many recent advances in this problem are found in both unsupervised and supervised learning approaches, the performances of most of these approaches are greatly affected by insufficient diversities and richness of training data. To alleviate this issue, we propose an unsupervised learning approach, which is capable of estimating various complex poses well under limited available training data. Specifically, we propose a Shape Decomposition Model (SDM) in which a 3D pose is considered as the superposition of two parts which are global structure together with some deformations. Based on SDM, we estimate these two parts explicitly by solving two sets of different distributed combination coefficients of geometric priors. In addition, to obtain geometric priors, a joint dictionary learning algorithm is proposed to extract both coarse and fine pose clues simultaneously from limited training data. Quantitative evaluations on several widely used datasets demonstrate that our approach yields better performances over other competitive approaches. Especially, on some categories with more complex deformations, significant improvements are achieved by our approach. Furthermore, qualitative experiments conducted on in-the-wild images also show the effectiveness of the proposed approach.
Unlike the model-free approaches, the model-based methods @cite_66 @cite_35 @cite_38 @cite_57 @cite_17 @cite_52 @cite_47 @cite_44 @cite_15 only use 3D annotations to fit their models. Based on prior knowledge, model-based methods mainly consist of two parts: modeling and inference. A commonly used model is the active shape model (ASM) @cite_23 , in which a 3D human pose is represented as a linear combination of 3D bases @cite_35 @cite_31 @cite_1 @cite_75 @cite_2 . @cite_35 estimate the model parameters by minimizing the projection error within a sparse representation framework. Simo-Serra et al. @cite_66 propagate the noise from the image plane to the pose space by using a stochastic sampling strategy. @cite_57 propose a convex formulation to handle the non-convexity caused by the orthogonality constraint imposed on the objective function. Instead of using the @math -norm to measure the reconstruction error, @cite_15 apply the @math -norm to the projection error to improve the estimation robustness. Apart from ASM, @cite_0 apply a 3D pictorial structure to estimate the 3D pose based on regression forests. Based on a mixture of pictorial structures, @cite_51 impose kinematic and orientation constraints on the reasoning model to estimate self-occluded poses.
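To illustrate the ASM-based formulation, the sketch below fits the basis coefficients by regularized least squares under a known weak-perspective camera. The cited methods additionally estimate the camera and use a sparsity prior rather than the ridge term assumed here, so this is only a minimal stand-in for that family of objectives; all names and the regularization weight are illustrative.

```python
import numpy as np

def fit_shape_coefficients(joints_2d, bases, rotation, lam=0.1):
    """Least-squares fit of active-shape-model coefficients.

    joints_2d : (2, P) centered 2D joint locations
    bases     : (K, 3, P) shape bases; the 3D pose is sum_k c_k * bases[k]
    rotation  : (2, 3) known weak-perspective camera (assumed here; the
                cited methods estimate it jointly with the coefficients)
    lam       : ridge weight standing in for the sparsity prior
    """
    K = bases.shape[0]
    # Each basis, projected to 2D, contributes one column of the linear system.
    A = np.stack([(rotation @ bases[k]).ravel() for k in range(K)], axis=1)
    b = joints_2d.ravel()
    c = np.linalg.solve(A.T @ A + lam * np.eye(K), A.T @ b)
    pose_3d = np.tensordot(c, bases, axes=1)  # (3, P) reconstructed pose
    return c, pose_3d
```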
{ "cite_N": [ "@cite_35", "@cite_38", "@cite_31", "@cite_15", "@cite_1", "@cite_52", "@cite_57", "@cite_44", "@cite_0", "@cite_23", "@cite_2", "@cite_51", "@cite_47", "@cite_75", "@cite_66", "@cite_17" ], "mid": [ "2155196764", "2111446867", "", "2799870331", "", "2520324844", "2256477790", "2573854917", "2054820429", "2038952578", "", "", "2583372902", "", "2088196373", "2483862638" ], "abstract": [ "Reconstructing an arbitrary configuration of 3D points from their projection in an image is an ill-posed problem. When the points hold semantic meaning, such as anatomical landmarks on a body, human observers can often infer a plausible 3D configuration, drawing on extensive visual memory. We present an activity-independent method to recover the 3D configuration of a human figure from 2D locations of anatomical landmarks in a single image, leveraging a large motion capture corpus as a proxy for visual memory. Our method solves for anthropometrically regular body pose and explicitly estimates the camera via a matching pursuit algorithm operating on the image projections. Anthropometric regularity (i.e., that limbs obey known proportions) is a highly informative prior, but directly applying such constraints is intractable. Instead, we enforce a necessary condition on the sum of squared limb-lengths that can be solved for in closed form to discourage implausible configurations in 3D. We evaluate performance on a wide variety of human poses captured from different viewpoints and show generalization to novel 3D configurations and robustness to missing data.", "We introduce a novel approach to automatically recover 3D human pose from a single image. Most previous work follows a pipelined approach: initially, a set of 2D features such as edges, joints or silhouettes are detected in the image, and then these observations are used to infer the 3D pose. Solving these two problems separately may lead to erroneous 3D poses when the feature detector has performed poorly. In this paper, we address this issue by jointly solving both the 2D detection and the 3D inference problems. For this purpose, we propose a Bayesian framework that integrates a generative model based on latent variables and discriminative 2D part detectors based on HOGs, and perform inference using evolutionary algorithms. Real experimentation demonstrates competitive results, and the ability of our methodology to provide accurate 2D and 3D pose estimations even when the 2D detectors are inaccurate.", "", "We propose a method for estimating 3D human poses from single images or video sequences. The task is challenging because: (a) many 3D poses can have similar 2D pose projections which makes the lifting ambiguous, and (b) current 2D joint detectors are not accurate which can cause big errors in 3D estimates. We represent 3D poses by a sparse combination of bases which encode structural pose priors to reduce the lifting ambiguity. This prior is strengthened by adding limb length constraints. We estimate the 3D pose by minimizing an @math L 1 norm measurement error between the 2D pose and the 3D pose because it is less sensitive to inaccurate 2D poses. We modify our algorithm to output @math K 3D pose candidates for an image, and for videos, we impose a temporal smoothness constraint to select the best sequence of 3D poses from the candidates. We demonstrate good results on 3D pose estimation from static images and improved performance by selecting the best 3D pose from the @math K proposals. 
Our results on video sequences also show improvements (over static images) of roughly 15 .", "", "We introduce a 3D human pose estimation method from single image, based on a hierarchical Bayesian non-parametric model. The proposed model relies on a representation of the idiosyncratic motion of human body parts, which is captured by a subdivision of the human skeleton joints into groups. A dictionary of motion snapshots for each group is generated. The hierarchy ensures to integrate the visual features within the pose dictionary. Given a query image, the learned dictionary is used to estimate the likelihood of the group pose based on its visual features. The full-body pose is reconstructed taking into account the consistency of the connected group poses. The results show that the proposed approach is able to accurately reconstruct the 3D pose of previously unseen subjects.", "We investigate the problem of estimating the 3D shape of an object defined by a set of 3D landmarks, given their 2D correspondences in a single image. A successful approach to alleviating the reconstruction ambiguity is the 3D deformable shape model and a sparse representation is often used to capture complex shape variability. But the model inference is still challenging due to the nonconvexity in the joint optimization of shape and viewpoint. In contrast to prior work that relies on an alternating scheme whose solution depends on initialization, we propose a convex approach to addressing this challenge and develop an efficient algorithm to solve the proposed convex program. We further propose a robust model to handle gross errors in the 2D correspondences. We demonstrate the exact recovery property of the proposed method, the advantage compared to several nonconvex baselines and the applicability to recover 3D human poses and car models from single images.", "Recovering 3D full-body human pose is a challenging problem with many applications. It has been successfully addressed by motion capture systems with body worn markers and multiple cameras. In this paper, we address the more challenging case of not only using a single camera but also not leveraging markers: going directly from 2D appearance to 3D geometry. Deep learning approaches have shown remarkable abilities to discriminatively learn 2D appearance features. The missing piece is how to integrate 2D, 3D, and temporal information to recover 3D geometry and account for the uncertainties arising from the discriminative model. We introduce a novel approach that treats 2D joint locations as latent variables whose uncertainty distributions are given by a deep fully convolutional neural network. The unknown 3D poses are modeled by a sparse representation and the 3D parameter estimates are realized via an Expectation-Maximization algorithm, where it is shown that the 2D joint location uncertainties can be conveniently marginalized out during inference. Extensive evaluation on benchmark datasets shows that the proposed approach achieves greater accuracy over state-of-the-art baselines. Notably, the proposed approach does not require synchronized 2D-3D data for training and is applicable to “in-the-wild” images, which is demonstrated with the MPII dataset.", "In this work we address the problem of estimating the 3D human pose from a single RGB image, which is a challenging problem since different 3D poses may have similar 2D projections. 
Following the success of regression forests for 3D pose estimation from depth data or 2D pose estimation from RGB images, we extend regression forests to infer missing depth data of image features and 3D pose simultaneously. Since we do not observe depth for inference or training directly, we hypothesize the depth of the features by sweeping with a plane through the 3D volume of potential joint locations. The regression forests are then combined with a pictorial structure framework, which is extended to 3D. The approach is evaluated on two challenging benchmarks where stateof-the-art performance is achieved.", "!, Model-based vision is firmly established as a robust approach to recognizing and locating known rigid objects in the presence of noise, clutter, and occlusion. It is more problematic to apply modelbased methods to images of objects whose appearance can vary, though a number of approaches based on the use of flexible templates have been proposed. The problem with existing methods is that they sacrifice model specificity in order to accommodate variability, thereby compromising robustness during image interpretation. We argue that a model should only be able to deform in ways characteristic of the class of objects it represents. We describe a method for building models by learning patterns of variability from a training set of correctly annotated images. These models can be used for image search in an iterative refinement algorithm analogous to that employed by Active Contour Models (Snakes). The key difference is that our Active Shape Models can only deform to fit the data in ways consistent with the training set. We show several practical examples where we have built such models and used them to locate partially occluded objects in noisy, cluttered images. Q 199s A&& prrss, IN.", "", "", "We explore 3D human pose estimation from a single RGB image. While many approaches try to directly predict 3D pose from image measurements, we explore a simple architecture that reasons through intermediate 2D pose predictions. Our approach is based on two key observations (1) Deep neural nets have revolutionized 2D pose estimation, producing accurate 2D predictions even for poses with self-occlusions (2) Big-datasets of 3D mocap data are now readily available, making it tempting to lift predicted 2D poses to 3D through simple memorization (e.g., nearest neighbors). The resulting architecture is straightforward to implement with off-the-shelf 2D pose estimation systems and 3D mocap libraries. Importantly, we demonstratethatsuchmethodsoutperformalmostallstate-of-theart 3D pose estimation systems, most of which directly try to regress 3D pose from 2D measurements.", "", "Markerless 3D human pose detection from a single image is a severely underconstrained problem because different 3D poses can have similar image projections. In order to handle this ambiguity, current approaches rely on prior shape models that can only be correctly adjusted if 2D image features are accurately detected. Unfortunately, although current 2D part detector algorithms have shown promising results, they are not yet accurate enough to guarantee a complete disambiguation of the 3D inferred shape. In this paper, we introduce a novel approach for estimating 3D human pose even when observations are noisy. We propose a stochastic sampling strategy to propagate the noise from the image plane to the shape space. This provides a set of ambiguous 3D shapes, which are virtually undistinguishable from their image projections. 
Disambiguation is then achieved by imposing kinematic constraints that guarantee the resulting pose resembles a 3D human shape. We validate the method on a variety of situations in which state-of-the-art 2D detectors yield either inaccurate estimations or partly miss some of the body parts.", "We describe the first method to automatically estimate the 3D pose of the human body as well as its 3D shape from a single unconstrained image. We estimate a full 3D mesh and show that 2D joints alone carry a surprising amount of information about body shape. The problem is challenging because of the complexity of the human body, articulation, occlusion, clothing, lighting, and the inherent ambiguity in inferring 3D from 2D. To solve this, we first use a recently published CNN-based method, DeepCut, to predict (bottom-up) the 2D body joint locations. We then fit (top-down) a recently published statistical body shape model, called SMPL, to the 2D joints. We do so by minimizing an objective function that penalizes the error between the projected 3D model joints and detected 2D joints. Because SMPL captures correlations in human shape across the population, we are able to robustly fit it to very little data. We further leverage the 3D model to prevent solutions that cause interpenetration. We evaluate our method, SMPLify, on the Leeds Sports, HumanEva, and Human3.6M datasets, showing superior pose accuracy with respect to the state of the art." ] }
1905.13543
2947095094
Network architectures obtained by Neural Architecture Search (NAS) have shown state-of-the-art performance in various computer vision tasks. Despite the exciting progress, the computational complexity of the forward-backward propagation and the search process makes it difficult to apply NAS in practice. In particular, most previous methods require thousands of GPU days for the search process to converge. In this paper, we propose a dynamic distribution pruning method towards extremely efficient NAS, which samples architectures from a joint categorical distribution. The search space is dynamically pruned every a few epochs to update this distribution, and the optimal neural architecture is obtained when there is only one structure remained. We conduct experiments on two widely-used datasets in NAS. On CIFAR-10, the optimal structure obtained by our method achieves the state-of-the-art @math test error, while the search process is more than @math times faster (only @math GPU hours on a Tesla V100) than the state-of-the-art NAS algorithms. On ImageNet, our model achieves 75.2 top-1 accuracy under the MobileNet settings, with a time cost of only @math GPU days that is @math acceleration over the fastest NAS algorithm. The code is available at this https URL
Neural architecture search is an automatic architecture engineering technique that has received significant attention over the last few years. For a given dataset, architectures with high accuracy or low latency are obtained by performing a heuristic search in a predefined search space. For image classification, most human-designed networks are built by stacking reduction cells (where the spatial dimension is reduced and the channel size is increased) and normal cells (where the spatial and channel dimensions are preserved) @cite_6 @cite_1 @cite_11 @cite_25 @cite_9 . Therefore, existing NAS methods @cite_19 @cite_4 @cite_10 @cite_0 can search for cell architectures under the same settings, which keeps the search space small.
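The macro-structure described above can be sketched as follows: normal cells that preserve resolution are stacked and interleaved with reduction cells that halve the spatial size and double the channels. The placeholder cells below are plain convolutions used only to make the example runnable; in NAS they would be replaced by the searched cell, and the stage counts and channel widths are assumptions for illustration.

```python
import torch
import torch.nn as nn

def make_backbone(cell_fn, reduce_fn, channels=16, n_normal=3, n_stages=3):
    """Stack cells in the usual NAS macro-layout: n_normal normal cells,
    then one reduction cell (spatial size halved, channels doubled),
    repeated n_stages times."""
    layers, c = [], channels
    for _ in range(n_stages):
        layers += [cell_fn(c) for _ in range(n_normal)]
        layers.append(reduce_fn(c, 2 * c))
        c *= 2
    return nn.Sequential(*layers)

# Placeholder cells, only to make the sketch runnable:
normal_cell = lambda c: nn.Sequential(nn.Conv2d(c, c, 3, padding=1), nn.ReLU())
reduction_cell = lambda c_in, c_out: nn.Sequential(
    nn.Conv2d(c_in, c_out, 3, stride=2, padding=1), nn.ReLU())

net = make_backbone(normal_cell, reduction_cell)
out = net(torch.randn(1, 16, 32, 32))  # -> shape (1, 128, 4, 4)
```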
{ "cite_N": [ "@cite_4", "@cite_9", "@cite_1", "@cite_6", "@cite_0", "@cite_19", "@cite_10", "@cite_25", "@cite_11" ], "mid": [ "2964081807", "", "", "2194775991", "2951104886", "2553303224", "2963821229", "2963446712", "1686810756" ], "abstract": [ "Developing neural network image classification models often requires significant architecture engineering. In this paper, we study a method to learn the model architectures directly on the dataset of interest. As this approach is expensive when the dataset is large, we propose to search for an architectural building block on a small dataset and then transfer the block to a larger dataset. The key contribution of this work is the design of a new search space (which we call the \"NASNet search space\") which enables transferability. In our experiments, we search for the best convolutional layer (or \"cell\") on the CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking together more copies of this cell, each with their own parameters to design a convolutional architecture, which we name a \"NASNet architecture\". We also introduce a new regularization technique called ScheduledDropPath that significantly improves generalization in the NASNet models. On CIFAR-10 itself, a NASNet found by our method achieves 2.4 error rate, which is state-of-the-art. Although the cell is not searched for directly on ImageNet, a NASNet constructed from the best cell achieves, among the published works, state-of-the-art accuracy of 82.7 top-1 and 96.2 top-5 on ImageNet. Our model is 1.2 better in top-1 accuracy than the best human-invented architectures while having 9 billion fewer FLOPS - a reduction of 28 in computational demand from the previous state-of-the-art model. When evaluated at different levels of computational cost, accuracies of NASNets exceed those of the state-of-the-art human-designed models. For instance, a small version of NASNet also achieves 74 top-1 accuracy, which is 3.1 better than equivalently-sized, state-of-the-art models for mobile platforms. Finally, the image features learned from image classification are generically useful and can be transferred to other computer vision problems. On the task of object detection, the learned features by NASNet used with the Faster-RCNN framework surpass state-of-the-art by 4.0 achieving 43.1 mAP on the COCO dataset.", "", "", "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. 
Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.", "This paper addresses the scalability challenge of architecture search by formulating the task in a differentiable manner. Unlike conventional approaches of applying evolution or reinforcement learning over a discrete and non-differentiable search space, our method is based on the continuous relaxation of the architecture representation, allowing efficient search of the architecture using gradient descent. Extensive experiments on CIFAR-10, ImageNet, Penn Treebank and WikiText-2 show that our algorithm excels in discovering high-performance convolutional architectures for image classification and recurrent architectures for language modeling, while being orders of magnitude faster than state-of-the-art non-differentiable techniques. Our implementation has been made publicly available to facilitate further research on efficient architecture search algorithms.", "Neural networks are powerful and flexible models that work well for many difficult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is 0.09 percent better and 1.05x faster than the previous state-of-the-art model that used a similar architectural scheme. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art model. The cell can also be transferred to the character language modeling task on PTB and achieves a state-of-the-art perplexity of 1.214.", "We propose a new method for learning the structure of convolutional neural networks (CNNs) that is more efficient than recent state-of-the-art methods based on reinforcement learning and evolutionary algorithms. Our approach uses a sequential model-based optimization (SMBO) strategy, in which we search for structures in order of increasing complexity, while simultaneously learning a surrogate model to guide the search through structure space. Direct comparison under the same search space shows that our method is up to 5 times more efficient than the RL method of (2018) in terms of number of models evaluated, and 8 times faster in terms of total compute. The structures we discover in this way achieve state of the art classification accuracies on CIFAR-10 and ImageNet.", "Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. 
Whereas traditional convolutional networks with L layers have L connections—one between each layer and its subsequent layer—our network has L(L+1) 2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less memory and computation to achieve high performance. Code and pre-trained models are available at https: github.com liuzhuang13 DenseNet.", "In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision." ] }
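To make the cell-based search space described above concrete, the following is a minimal PyTorch-style sketch of the macro skeleton that alternates normal cells (dimensions preserved) and reduction cells (spatial size halved, channels doubled). The NormalCell and ReductionCell bodies are hypothetical single-convolution placeholders, not the searched cells themselves.

import torch
import torch.nn as nn

class NormalCell(nn.Module):
    # placeholder: preserves the spatial and channel dimensions
    def __init__(self, channels):
        super().__init__()
        self.op = nn.Conv2d(channels, channels, 3, stride=1, padding=1)
    def forward(self, x):
        return torch.relu(self.op(x))

class ReductionCell(nn.Module):
    # placeholder: halves the spatial size and doubles the channel size
    def __init__(self, channels):
        super().__init__()
        self.op = nn.Conv2d(channels, 2 * channels, 3, stride=2, padding=1)
    def forward(self, x):
        return torch.relu(self.op(x))

def build_backbone(init_channels=16, num_stages=3, cells_per_stage=2):
    layers, c = [], init_channels
    for stage in range(num_stages):
        layers += [NormalCell(c) for _ in range(cells_per_stage)]
        if stage < num_stages - 1:
            layers.append(ReductionCell(c))
            c *= 2
    return nn.Sequential(*layers)

net = build_backbone()
out = net(torch.randn(1, 16, 32, 32))   # -> shape (1, 64, 8, 8)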
1905.13543
2947095094
Network architectures obtained by Neural Architecture Search (NAS) have shown state-of-the-art performance in various computer vision tasks. Despite the exciting progress, the computational complexity of the forward-backward propagation and the search process makes it difficult to apply NAS in practice. In particular, most previous methods require thousands of GPU days for the search process to converge. In this paper, we propose a dynamic distribution pruning method towards extremely efficient NAS, which samples architectures from a joint categorical distribution. The search space is dynamically pruned every a few epochs to update this distribution, and the optimal neural architecture is obtained when there is only one structure remained. We conduct experiments on two widely-used datasets in NAS. On CIFAR-10, the optimal structure obtained by our method achieves the state-of-the-art @math test error, while the search process is more than @math times faster (only @math GPU hours on a Tesla V100) than the state-of-the-art NAS algorithms. On ImageNet, our model achieves 75.2 top-1 accuracy under the MobileNet settings, with a time cost of only @math GPU days that is @math acceleration over the fastest NAS algorithm. The code is available at this https URL
Many different search algorithms have been proposed to explore the neural architecture space using specific search strategies. One popular approach is to model NAS as a reinforcement learning (RL) problem @cite_19 @cite_30 @cite_3 @cite_24 @cite_10 @cite_33 . Zoph @cite_4 employs a recurrent neural network as the policy function to sequentially generate a string that encodes a specific neural architecture. The policy network can be trained with the policy gradient algorithm or with proximal policy optimization. Cai @cite_24 @cite_20 propose a method that regards the architecture search space as a tree structure for network transformation. In this method, new network architectures are generated from a father network by applying predefined transformations, which reduces the search space and thus speeds up the search. An alternative way to explore the architecture space is through evolution-based methods, which evolve a population of network structures using evolutionary algorithms @cite_13 @cite_2 . Although the above architecture search algorithms have achieved state-of-the-art results on various tasks, they still require a large amount of computational resources.
{ "cite_N": [ "@cite_30", "@cite_13", "@cite_4", "@cite_33", "@cite_3", "@cite_24", "@cite_19", "@cite_2", "@cite_10", "@cite_20" ], "mid": [ "", "", "2964081807", "", "2951886768", "", "2553303224", "2785430118", "2963821229", "2803311163" ], "abstract": [ "", "", "Developing neural network image classification models often requires significant architecture engineering. In this paper, we study a method to learn the model architectures directly on the dataset of interest. As this approach is expensive when the dataset is large, we propose to search for an architectural building block on a small dataset and then transfer the block to a larger dataset. The key contribution of this work is the design of a new search space (which we call the \"NASNet search space\") which enables transferability. In our experiments, we search for the best convolutional layer (or \"cell\") on the CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking together more copies of this cell, each with their own parameters to design a convolutional architecture, which we name a \"NASNet architecture\". We also introduce a new regularization technique called ScheduledDropPath that significantly improves generalization in the NASNet models. On CIFAR-10 itself, a NASNet found by our method achieves 2.4 error rate, which is state-of-the-art. Although the cell is not searched for directly on ImageNet, a NASNet constructed from the best cell achieves, among the published works, state-of-the-art accuracy of 82.7 top-1 and 96.2 top-5 on ImageNet. Our model is 1.2 better in top-1 accuracy than the best human-invented architectures while having 9 billion fewer FLOPS - a reduction of 28 in computational demand from the previous state-of-the-art model. When evaluated at different levels of computational cost, accuracies of NASNets exceed those of the state-of-the-art human-designed models. For instance, a small version of NASNet also achieves 74 top-1 accuracy, which is 3.1 better than equivalently-sized, state-of-the-art models for mobile platforms. Finally, the image features learned from image classification are generically useful and can be transferred to other computer vision problems. On the task of object detection, the learned features by NASNet used with the Faster-RCNN framework surpass state-of-the-art by 4.0 achieving 43.1 mAP on the COCO dataset.", "", "At present, designing convolutional neural network (CNN) architectures requires both human expertise and labor. New architectures are handcrafted by careful experimentation or modified from a handful of existing networks. We introduce MetaQNN, a meta-modeling algorithm based on reinforcement learning to automatically generate high-performing CNN architectures for a given learning task. The learning agent is trained to sequentially choose CNN layers using @math -learning with an @math -greedy exploration strategy and experience replay. The agent explores a large but finite space of possible architectures and iteratively discovers designs with improved performance on the learning task. On image classification benchmarks, the agent-designed networks (consisting of only standard convolution, pooling, and fully-connected layers) beat existing networks designed with the same layer types and are competitive against the state-of-the-art methods that use more complex layer types. 
We also outperform existing meta-modeling approaches for network design on image classification tasks.", "", "Neural networks are powerful and flexible models that work well for many difficult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is 0.09 percent better and 1.05x faster than the previous state-of-the-art model that used a similar architectural scheme. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art model. The cell can also be transferred to the character language modeling task on PTB and achieves a state-of-the-art perplexity of 1.214.", "The effort devoted to hand-crafting image classifiers has motivated the use of architecture search to discover them automatically. Reinforcement learning and evolution have both shown promise for this purpose. This study introduces a regularized version of a popular asynchronous evolutionary algorithm. We rigorously compare it to the non-regularized form and to a highly-successful reinforcement learning baseline. Using the same hardware, compute effort and neural network training code, we conduct repeated experiments side-by-side, exploring different datasets, search spaces and scales. We show regularized evolution consistently produces models with similar or higher accuracy, across a variety of contexts without need for re-tuning parameters. In addition, regularized evolution exhibits considerably better performance than reinforcement learning at early search stages, suggesting it may be the better choice when fewer compute resources are available. This constitutes the first controlled comparison of the two search algorithms in this context. Finally, we present new architectures discovered with regularized evolution that we nickname AmoebaNets. These models set a new state of the art for CIFAR-10 (mean test error = 2.13 ) and mobile-size ImageNet (top-5 accuracy = 92.1 with 5.06M parameters), and reach the current state of the art for ImageNet (top-5 accuracy = 96.2 ).", "We propose a new method for learning the structure of convolutional neural networks (CNNs) that is more efficient than recent state-of-the-art methods based on reinforcement learning and evolutionary algorithms. Our approach uses a sequential model-based optimization (SMBO) strategy, in which we search for structures in order of increasing complexity, while simultaneously learning a surrogate model to guide the search through structure space. Direct comparison under the same search space shows that our method is up to 5 times more efficient than the RL method of (2018) in terms of number of models evaluated, and 8 times faster in terms of total compute. 
The structures we discover in this way achieve state of the art classification accuracies on CIFAR-10 and ImageNet.", "We introduce a new function-preserving transformation for efficient neural architecture search. This network transformation allows reusing previously trained networks and existing successful architectures that improves sample efficiency. We aim to address the limitation of current network transformation operations that can only perform layer-level architecture modifications, such as adding (pruning) filters or inserting (removing) a layer, which fails to change the topology of connection paths. Our proposed path-level transformation operations enable the meta-controller to modify the path topology of the given network while keeping the merits of reusing weights, and thus allow efficiently designing effective structures with complex path topologies like Inception models. We further propose a bidirectional tree-structured reinforcement learning meta-controller to explore a simple yet highly expressive tree-structured architecture space that can be viewed as a generalization of multi-branch architectures. We experimented on the image classification datasets with limited computational resources (about 200 GPU-hours), where we observed improved parameter efficiency and better test results (97.70 test accuracy on CIFAR-10 with 14.3M parameters and 74.6 top-1 accuracy on ImageNet in the mobile setting), demonstrating the effectiveness and transferability of our designed architectures." ] }
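As an illustration of the RL formulation discussed above, here is a minimal NumPy sketch of a REINFORCE-style controller that samples one operation per layer from a categorical policy and updates the policy logits using a moving-average baseline. The operation list and the evaluate() reward are hypothetical stand-ins for training a child network and measuring its validation accuracy.

import numpy as np

rng = np.random.default_rng(0)
OPS = ["conv3x3", "conv5x5", "maxpool", "identity"]
NUM_LAYERS = 4
logits = np.zeros((NUM_LAYERS, len(OPS)))        # policy parameters

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def evaluate(arch):
    # stub reward standing in for the validation accuracy of the sampled child network
    return 0.5 + 0.1 * sum(op == "conv3x3" for op in arch) / NUM_LAYERS

baseline, lr = 0.0, 0.5
for step in range(200):
    probs = softmax(logits)
    choices = [rng.choice(len(OPS), p=probs[l]) for l in range(NUM_LAYERS)]
    arch = [OPS[c] for c in choices]
    reward = evaluate(arch)
    baseline = 0.9 * baseline + 0.1 * reward     # moving-average baseline reduces variance
    advantage = reward - baseline
    for l, c in enumerate(choices):              # REINFORCE: d log pi / d logits = one_hot - probs
        grad = -probs[l]
        grad[c] += 1.0
        logits[l] += lr * advantage * grad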
1905.13543
2947095094
Network architectures obtained by Neural Architecture Search (NAS) have shown state-of-the-art performance in various computer vision tasks. Despite the exciting progress, the computational complexity of the forward-backward propagation and the search process makes it difficult to apply NAS in practice. In particular, most previous methods require thousands of GPU days for the search process to converge. In this paper, we propose a dynamic distribution pruning method towards extremely efficient NAS, which samples architectures from a joint categorical distribution. The search space is dynamically pruned every a few epochs to update this distribution, and the optimal neural architecture is obtained when there is only one structure remained. We conduct experiments on two widely-used datasets in NAS. On CIFAR-10, the optimal structure obtained by our method achieves the state-of-the-art @math test error, while the search process is more than @math times faster (only @math GPU hours on a Tesla V100) than the state-of-the-art NAS algorithms. On ImageNet, our model achieves 75.2 top-1 accuracy under the MobileNet settings, with a time cost of only @math GPU days that is @math acceleration over the fastest NAS algorithm. The code is available at this https URL
To overcome this problem, several recent works have proposed to accelerate NAS in a one-shot setting, which has demonstrated the possibility of finding an optimal network architecture within a few GPU days. In this one-shot architecture search, each architecture in the search space is considered as a sub-graph sampled from a super-graph, and the search process can be accelerated by parameter sharing @cite_21 . Liu @cite_0 jointly optimized the operation weights between two nodes together with the architecture hyper-parameters under a continuous relaxation; both the weights in the graph and the hyper-parameters are updated via standard gradient descent. However, the method in @cite_0 still suffers from a large GPU memory footprint, and its search complexity remains prohibitive for real-world applications. To this end, Cai @cite_7 adopted the differentiable framework and proposed to search architectures without any proxy. However, this method still keeps the same search algorithm as the previous work @cite_0 .
{ "cite_N": [ "@cite_0", "@cite_21", "@cite_7" ], "mid": [ "2951104886", "2785366763", "2902251695" ], "abstract": [ "This paper addresses the scalability challenge of architecture search by formulating the task in a differentiable manner. Unlike conventional approaches of applying evolution or reinforcement learning over a discrete and non-differentiable search space, our method is based on the continuous relaxation of the architecture representation, allowing efficient search of the architecture using gradient descent. Extensive experiments on CIFAR-10, ImageNet, Penn Treebank and WikiText-2 show that our algorithm excels in discovering high-performance convolutional architectures for image classification and recurrent architectures for language modeling, while being orders of magnitude faster than state-of-the-art non-differentiable techniques. Our implementation has been made publicly available to facilitate further research on efficient architecture search algorithms.", "We propose Efficient Neural Architecture Search (ENAS), a fast and inexpensive approach for automatic model design. In ENAS, a controller learns to discover neural network architectures by searching for an optimal subgraph within a large computational graph. The controller is trained with policy gradient to select a subgraph that maximizes the expected reward on the validation set. Meanwhile the model corresponding to the selected subgraph is trained to minimize a canonical cross entropy loss. Thanks to parameter sharing between child models, ENAS is fast: it delivers strong empirical performances using much fewer GPU-hours than all existing automatic model design approaches, and notably, 1000x less expensive than standard Neural Architecture Search. On the Penn Treebank dataset, ENAS discovers a novel architecture that achieves a test perplexity of 55.8, establishing a new state-of-the-art among all methods without post-training processing. On the CIFAR-10 dataset, ENAS designs novel architectures that achieve a test error of 2.89 , which is on par with NASNet (, 2018), whose test error is 2.65 .", "Neural architecture search (NAS) has a great impact by automatically designing effective neural network architectures. However, the prohibitive computational demand of conventional NAS algorithms (e.g. @math GPU hours) makes it difficult to search the architectures on large-scale tasks (e.g. ImageNet). Differentiable NAS can reduce the cost of GPU hours via a continuous representation of network architecture but suffers from the high GPU memory consumption issue (grow linearly w.r.t. candidate set size). As a result, they need to utilize tasks, such as training on a smaller dataset, or learning with only a few blocks, or training just for a few epochs. These architectures optimized on proxy tasks are not guaranteed to be optimal on the target task. In this paper, we present that can learn the architectures for large-scale target tasks and target hardware platforms. We address the high memory consumption issue of differentiable NAS and reduce the computational cost (GPU hours and GPU memory) to the same level of regular training while still allowing a large candidate set. Experiments on CIFAR-10 and ImageNet demonstrate the effectiveness of directness and specialization. On CIFAR-10, our model achieves 2.08 test error with only 5.7M parameters, better than the previous state-of-the-art architecture AmoebaNet-B, while using 6 @math fewer parameters. 
On ImageNet, our model achieves 3.1 better top-1 accuracy than MobileNetV2, while being 1.2 @math faster with measured GPU latency. We also apply ProxylessNAS to specialize neural architectures for hardware with direct hardware metrics (e.g. latency) and provide insights for efficient CNN architecture design." ] }
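The continuous relaxation used by the differentiable NAS methods discussed above can be sketched as a softmax-weighted mixture of candidate operations on a single edge, with the architecture parameters trained by gradient descent alongside the network weights. This is a simplified one-edge illustration, not the full bilevel optimization of @cite_0 ; the candidate operation set is a hypothetical example.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    # continuous relaxation: the edge output is a softmax-weighted sum of candidate ops
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Conv2d(channels, channels, 5, padding=2),
            nn.MaxPool2d(3, stride=1, padding=1),
            nn.Identity(),
        ])
        # architecture parameters: one logit per candidate operation
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

edge = MixedOp(channels=8)
x = torch.randn(2, 8, 16, 16)
loss = edge(x).pow(2).mean()   # stand-in for a training/validation loss
loss.backward()                # gradients flow to both the conv weights and alpha
# after the search, the operation with the largest alpha would be kept (discretization)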
1905.13540
2947962063
This paper proposes a method to gain extra supervision via multi-task learning for multi-modal video question answering. Multi-modal video question answering is an important task that aims at the joint understanding of vision and language. However, establishing large scale dataset for multi-modal video question answering is expensive and the existing benchmarks are relatively small to provide sufficient supervision. To overcome this challenge, this paper proposes a multi-task learning method which is composed of three main components: (1) multi-modal video question answering network that answers the question based on the both video and subtitle feature, (2) temporal retrieval network that predicts the time in the video clip where the question was generated from and (3) modality alignment network that solves metric learning problem to find correct association of video and subtitle modalities. By simultaneously solving related auxiliary tasks with hierarchically shared intermediate layers, the extra synergistic supervisions are provided. Motivated by curriculum learning, multi task ratio scheduling is proposed to learn easier task earlier to set inductive bias at the beginning of the training. The experiments on publicly available dataset TVQA shows state-of-the-art results, and ablation studies are conducted to prove the statistical validity.
Multi-task learning aims at jointly solving multiple related tasks with a single model. By sharing parameters across related tasks, the model can generalize better on the original task. Most multi-task learning methods share the hidden layers across all tasks and have a task-specific output layer for each task. Starting from the work of @cite_27 , there has been rich research on multi-task learning across most applications of machine learning, from computer vision (CV) @cite_15 to natural language processing (NLP) @cite_32 . @cite_16 proposed Deep Partial Person Re-identification (DPPR), which jointly learns person classification and person re-identification for partial person re-identification. Object detection architectures such as Fast R-CNN @cite_15 and Faster R-CNN @cite_10 use a multi-task loss for bounding box regression and object classification. @cite_14 tackled the task of Person Search by jointly learning pedestrian detection and person re-identification. Recently, Li et al. @cite_8 proposed the invertible Question Answering Network (iQAN) to leverage the complementary relations between questions and answers in an image by jointly learning the Visual Question Answering (VQA) and Visual Question Generation (VQG) tasks.
{ "cite_N": [ "@cite_14", "@cite_8", "@cite_32", "@cite_27", "@cite_15", "@cite_16", "@cite_10" ], "mid": [ "2963574614", "2963976294", "2117130368", "1614862348", "", "2789323483", "2613718673" ], "abstract": [ "Existing person re-identification benchmarks and methods mainly focus on matching cropped pedestrian images between queries and candidates. However, it is different from real-world scenarios where the annotations of pedestrian bounding boxes are unavailable and the target person needs to be searched from a gallery of whole scene images. To close the gap, we propose a new deep learning framework for person search. Instead of breaking it down into two separate tasks—pedestrian detection and person re-identification, we jointly handle both aspects in a single convolutional neural network. An Online Instance Matching (OIM) loss function is proposed to train the network effectively, which is scalable to datasets with numerous identities. To validate our approach, we collect and annotate a large-scale benchmark dataset for person search. It contains 18,184 images, 8,432 identities, and 96,143 pedestrian bounding boxes. Experiments show that our framework outperforms other separate approaches, and the proposed OIM loss function converges much faster and better than the conventional Softmax loss.", "Visual question answering (VQA) and visual question generation (VQG) are two trending topics in the computer vision, but they are usually explored separately despite their intrinsic complementary relationship. In this paper, we propose an end-to-end unified model, the Invertible Question Answering Network (iQAN), to introduce question generation as a dual task of question answering to improve the VQA performance. With our proposed invertible bilinear fusion module and parameter sharing scheme, our iQAN can accomplish VQA and its dual task VQG simultaneously. By jointly trained on two tasks with our proposed dual regularizes (termed as Dual Training), our model has a better understanding of the interactions among images, questions and answers. After training, iQAN can take either question or answer as input, and output the counterpart. Evaluated on the CLEVR and VQA2 datasets, our iQAN improves the top-1 accuracy of the prior art MUTAN VQA method by 1.33 and 0.88 (absolute increase) respectiely. We also show that our proposed dual training framework can consistently improve model performances of many popular VQA architectures.1", "We describe a single convolutional neural network architecture that, given a sentence, outputs a host of language processing predictions: part-of-speech tags, chunks, named entity tags, semantic roles, semantically similar words and the likelihood that the sentence makes sense (grammatically and semantically) using a language model. The entire network is trained jointly on all these tasks using weight-sharing, an instance of multitask learning. All the tasks use labeled data except the language model which is learnt from unlabeled text and represents a novel form of semi-supervised learning for the shared tasks. We show how both multitask learning and semi-supervised learning improve the generalization of the shared tasks, resulting in state-of-the-art-performance.", "This paper suggests that it may be easier to learn several hard tasks at one time than to learn these same tasks separately. In effect, the information provided by the training signal for each task serves as a domain-specific inductive bias for the other tasks. 
Frequently the world gives us clusters of related tasks to learn. When it does not, it is often straightforward to create additional tasks. For many domains, acquiring inductive bias by collecting additional teaching signal may be more practical than the traditional approach of codifying domain-specific biases acquired from human expertise. We call this approach Multitask Learning (MTL). Since much of the power of an inductive learner follows directly from its inductive bias, multitask learning may yield more powerful learning. An empirical example of multitask connectionist learning is presented where learning improves by training one network on several related tasks at the same time. Multitask decision tree induction is also outlined.", "", "This paper considers a novel algorithm referred to as deep partial person re-identification (DPPR) for partial person re-identification where only a part of a person is observed and full body images are available for identification. The DPPR is based on an end-to-end deep model which make use of convolutional neural network (CNN), RoI Pooling layer and attention model. The RoI Pooling layer enables the extraction of feature vector corresponding to predefined part of input image. The attention model selects a subset of CNN feature vectors. For qualitative evaluation of proposed model, data from CUHK03 are randomly cropped in constructing p-CUHK03. Experimental results show that DPPR outperforms our baseline model on p-CUHK03.", "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2 mAP) and 2012 (70.4 mAP) using 300 proposals per image. Code is available at https: github.com ShaoqingRen faster_rcnn." ] }
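A minimal PyTorch sketch of the hard parameter sharing pattern described above, with shared hidden layers and one task-specific output head per task; the two toy tasks (classification and box regression) and all dimensions are illustrative assumptions.

import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    # shared hidden layers with task-specific output heads
    def __init__(self, in_dim=128, hidden=256, num_classes=10):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                    nn.Linear(hidden, hidden), nn.ReLU())
        self.cls_head = nn.Linear(hidden, num_classes)   # e.g. object classification
        self.reg_head = nn.Linear(hidden, 4)             # e.g. bounding-box regression

    def forward(self, x):
        h = self.shared(x)
        return self.cls_head(h), self.reg_head(h)

net = MultiTaskNet()
x = torch.randn(32, 128)
cls_logits, boxes = net(x)
cls_loss = nn.functional.cross_entropy(cls_logits, torch.randint(0, 10, (32,)))
reg_loss = nn.functional.smooth_l1_loss(boxes, torch.randn(32, 4))
loss = cls_loss + reg_loss   # joint multi-task objective
loss.backward()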
1905.13540
2947962063
This paper proposes a method to gain extra supervision via multi-task learning for multi-modal video question answering. Multi-modal video question answering is an important task that aims at the joint understanding of vision and language. However, establishing large scale dataset for multi-modal video question answering is expensive and the existing benchmarks are relatively small to provide sufficient supervision. To overcome this challenge, this paper proposes a multi-task learning method which is composed of three main components: (1) multi-modal video question answering network that answers the question based on the both video and subtitle feature, (2) temporal retrieval network that predicts the time in the video clip where the question was generated from and (3) modality alignment network that solves metric learning problem to find correct association of video and subtitle modalities. By simultaneously solving related auxiliary tasks with hierarchically shared intermediate layers, the extra synergistic supervisions are provided. Motivated by curriculum learning, multi task ratio scheduling is proposed to learn easier task earlier to set inductive bias at the beginning of the training. The experiments on publicly available dataset TVQA shows state-of-the-art results, and ablation studies are conducted to prove the statistical validity.
As auxiliary tasks of our proposed method, we jointly learn modality alignment and temporal localization along with multi-modal video question answering. Both tasks have been extensively studied in the field of deep learning. @cite_33 proposed a method that captures the inter-modal correspondences between vision and language to generate a natural language description of a given image. The latent alignment between the segments of the sentence and the regions of the image is learned with a structured max-margin loss. Castrejón @cite_9 proposed a method that learns cross-modal scene representations that transfer across modalities. By regularizing cross-modal CNNs to have a shared representation, the resulting representation is agnostic of the modality. @cite_36 proposed the Joint Sequence Fusion (JSFusion) model, which measures the semantic similarity between any pair of multimodal sequence data. A hierarchical attention mechanism is used to learn matching representation patterns among modalities.
{ "cite_N": [ "@cite_36", "@cite_9", "@cite_33" ], "mid": [ "2885775891", "2474574787", "2481240925" ], "abstract": [ "We present an approach named JSFusion (Joint Sequence Fusion) that can measure semantic similarity between any pairs of multimodal sequence data (e.g. a video clip and a language sentence). Our multimodal matching network consists of two key components. First, the Joint Semantic Tensor composes a dense pairwise representation of two sequence data into a 3D tensor. Then, the Convolutional Hierarchical Decoder computes their similarity score by discovering hidden hierarchical matches between the two sequence modalities. Both modules leverage hierarchical attention mechanisms that learn to promote well-matched representation patterns while prune out misaligned ones in a bottom-up manner. Although the JSFusion is a universal model to be applicable to any multimodal sequence data, this work focuses on video-language tasks including multimodal retrieval and video QA. We evaluate the JSFusion model in three retrieval and VQA tasks in LSMDC, for which our model achieves the best performance reported so far. We also perform multiple-choice and movie retrieval tasks for the MSR-VTT dataset, on which our approach outperforms many state-of-the-art methods.", "People can recognize scenes across many different modalities beyond natural images. In this paper, we investigate how to learn cross-modal scene representations that transfer across modalities. To study this problem, we introduce a new cross-modal scene dataset. While convolutional neural networks can categorize cross-modal scenes well, they also learn an intermediate representation not aligned across modalities, which is undesirable for crossmodal transfer applications. We present methods to regularize cross-modal convolutional neural networks so that they have a shared representation that is agnostic of the modality. Our experiments suggest that our scene representation can help transfer representations across modalities for retrieval. Moreover, our visualizations suggest that units emerge in the shared representation that tend to activate on consistent concepts independently of the modality.", "We present a model that generates natural language descriptions of images and their regions. Our approach leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between language and visual data. Our alignment model is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks (RNN) over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Multimodal Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate that our alignment model produces state of the art results in retrieval experiments on Flickr8K, Flickr30K and MSCOCO datasets. We then show that the generated descriptions outperform retrieval baselines on both full images and on a new dataset of region-level annotations. Finally, we conduct large-scale analysis of our RNN language model on the Visual Genome dataset of 4.1 million captions and highlight the differences between image and region-level caption statistics." ] }
1905.13540
2947962063
This paper proposes a method to gain extra supervision via multi-task learning for multi-modal video question answering. Multi-modal video question answering is an important task that aims at the joint understanding of vision and language. However, establishing large scale dataset for multi-modal video question answering is expensive and the existing benchmarks are relatively small to provide sufficient supervision. To overcome this challenge, this paper proposes a multi-task learning method which is composed of three main components: (1) multi-modal video question answering network that answers the question based on the both video and subtitle feature, (2) temporal retrieval network that predicts the time in the video clip where the question was generated from and (3) modality alignment network that solves metric learning problem to find correct association of video and subtitle modalities. By simultaneously solving related auxiliary tasks with hierarchically shared intermediate layers, the extra synergistic supervisions are provided. Motivated by curriculum learning, multi task ratio scheduling is proposed to learn easier task earlier to set inductive bias at the beginning of the training. The experiments on publicly available dataset TVQA shows state-of-the-art results, and ablation studies are conducted to prove the statistical validity.
Temporal localization aims at localizing temporal segments in a given video. @cite_18 proposed the Moment Context Network (MCN) for temporal localization with a natural language query. The MCN effectively localizes the temporal segments related to the query by integrating local and global video features over time. @cite_12 proposed a multi-task learning approach for temporal localization with a natural language query, in which location regression and visual-semantic alignment are jointly learned. The Temporal Unit Regression Network (TURN) @cite_17 jointly predicts action proposals and refines the temporal boundaries by temporal coordinate regression. A long untrimmed video is decomposed into video clips, which are reused as basic building blocks of temporal proposals for fast computation.
{ "cite_N": [ "@cite_18", "@cite_12", "@cite_17" ], "mid": [ "2963017553", "2964089981", "2597958930" ], "abstract": [ "We consider retrieving a specific temporal segment, or moment, from a video given a natural language text description. Methods designed to retrieve whole video clips with natural language determine what occurs in a video but not when. To address this issue, we propose the Moment Context Network (MCN) which effectively localizes natural language queries in videos by integrating local and global video features over time. A key obstacle to training our MCN model is that current video datasets do not include pairs of localized video segments and referring expressions, or text descriptions which uniquely identify a corresponding moment. Therefore, we collect the Distinct Describable Moments (DiDeMo) dataset which consists of over 10,000 unedited, personal videos in diverse visual settings with pairs of localized video segments and referring expressions. We demonstrate that MCN outperforms several baseline methods and believe that our initial results together with the release of DiDeMo will inspire further research on localizing video moments with natural language.", "This paper focuses on temporal localization of actions in untrimmed videos. Existing methods typically train classifiers for a pre-defined list of actions and apply them in a sliding window fashion. However, activities in the wild consist of a wide combination of actors, actions and objects; it is difficult to design a proper activity list that meets users’ needs. We propose to localize activities by natural language queries. Temporal Activity Localization via Language (TALL) is challenging as it requires: (1) suitable design of text and video representations to allow cross-modal matching of actions and language queries; (2) ability to locate actions accurately given features from sliding windows of limited granularity. We propose a novel Cross-modal Temporal Regression Localizer (CTRL) to jointly model text query and video clips, output alignment scores and action boundary regression results for candidate clips. Lor evaluation, we adopt TaCoS dataset, and build a new dataset for this task on top of Charades by adding sentence temporal annotations, called Charades-STA. We also build complex sentence queries in Charades-STA for test. Experimental results show that CTRL outperforms previous methods significantly on both datasets.", "Temporal Action Proposal (TAP) generation is an important problem, as fast and accurate extraction of semantically important (e.g. human actions) segments from untrimmed videos is an important step for large-scale video analysis. We propose a novel Temporal Unit Regression Network (TURN) model. There are two salient aspects of TURN: (1) TURN jointly predicts action proposals and refines the temporal boundaries by temporal coordinate regression: (2) Fast computation is enabled by unit feature reuse: a long untrimmed video is decomposed into video units, which are reused as basic building blocks of temporal proposals. TURN outperforms the previous state-of-the-art methods under average recall (AR) by a large margin on THUMOS-14 and ActivityNet datasets, and runs at over 880 frames per second (FPS) on a TITAN X GPU. We further apply TURN as a proposal generation stage for existing temporal action localization pipelines, it outperforms state-of-the-art performance on THUMOS-14 and ActivityNet." ] }
1905.13540
2947962063
This paper proposes a method to gain extra supervision via multi-task learning for multi-modal video question answering. Multi-modal video question answering is an important task that aims at the joint understanding of vision and language. However, establishing large scale dataset for multi-modal video question answering is expensive and the existing benchmarks are relatively small to provide sufficient supervision. To overcome this challenge, this paper proposes a multi-task learning method which is composed of three main components: (1) multi-modal video question answering network that answers the question based on the both video and subtitle feature, (2) temporal retrieval network that predicts the time in the video clip where the question was generated from and (3) modality alignment network that solves metric learning problem to find correct association of video and subtitle modalities. By simultaneously solving related auxiliary tasks with hierarchically shared intermediate layers, the extra synergistic supervisions are provided. Motivated by curriculum learning, multi task ratio scheduling is proposed to learn easier task earlier to set inductive bias at the beginning of the training. The experiments on publicly available dataset TVQA shows state-of-the-art results, and ablation studies are conducted to prove the statistical validity.
Recently, research on multi-modal videoQA @cite_44 @cite_4 @cite_7 @cite_31 @cite_23 @cite_22 leverages an additional text modality, such as subtitles, along with the video modality for the joint understanding of vision and language. There are various benchmark datasets for multi-modal videoQA, including MovieQA @cite_44 , PororoQA @cite_42 and TVQA @cite_4 . Multi-modal videoQA is a challenging task because of the relatively small size of its benchmark datasets. The majority of multi-modal videoQA methods are motivated by memory-augmented architectures @cite_37 . @cite_44 utilized a memory network (MemN2N) @cite_37 to store video clips into memory and retrieve the information required to answer a question. The Read-Write Memory Network (RWMN) @cite_7 replaces the fully-connected layers of the memory network @cite_37 with convolutional layers to capture local information in each memory slot. After the video and subtitle features are fused using a bilinear operation, convolutional write and read networks store and retrieve information, respectively. Focal Visual-Text Attention (FVTA) @cite_31 applies a hierarchical attention mechanism over a three-dimensional tensor of question, video and text to dynamically determine which modality and which time step to attend to for question answering. Multimodal Dual Attention Memory (MDAM) @cite_22 applies a multi-head attention mechanism @cite_5 to learn latent representations of the multi-modal inputs.
{ "cite_N": [ "@cite_37", "@cite_4", "@cite_22", "@cite_7", "@cite_42", "@cite_44", "@cite_23", "@cite_5", "@cite_31" ], "mid": [ "2951008357", "2963541336", "2890904455", "2963781647", "2962910007", "", "2962938145", "2963403868", "" ], "abstract": [ "We introduce a neural network with a recurrent attention model over a possibly large external memory. The architecture is a form of Memory Network (, 2015) but unlike the model in that work, it is trained end-to-end, and hence requires significantly less supervision during training, making it more generally applicable in realistic settings. It can also be seen as an extension of RNNsearch to the case where multiple computational steps (hops) are performed per output symbol. The flexibility of the model allows us to apply it to tasks as diverse as (synthetic) question answering and to language modeling. For the former our approach is competitive with Memory Networks, but with less supervision. For the latter, on the Penn TreeBank and Text8 datasets our approach demonstrates comparable performance to RNNs and LSTMs. In both cases we show that the key concept of multiple computational hops yields improved results.", "", "We propose a video story question-answering (QA) architecture, Multimodal Dual Attention Memory (MDAM). The key idea is to use a dual attention mechanism with late fusion. MDAM uses self-attention to learn the latent concepts in scene frames and captions. Given a question, MDAM uses the second attention over these latent concepts. Multimodal fusion is performed after the dual attention processes (late fusion). Using this processing pipeline, MDAM learns to infer a high-level vision-language joint representation from an abstraction of the full video content. We evaluate MDAM on PororoQA and MovieQA datasets which have large-scale QA annotations on cartoon videos and movies, respectively. For both datasets, MDAM achieves new state-of-the-art results with significant margins compared to the runner-up models. We confirm the best performance of the dual attention mechanism combined with late fusion by ablation studies. We also perform qualitative analysis by visualizing the inference mechanisms of MDAM.", "We propose a novel memory network model named Read-Write Memory Network (RWMN) to perform question and answering tasks for large-scale, multimodal movie story understanding. The key focus of our RWMN model is to design the read network and the write network that consist of multiple convolutional layers, which enable memory read and write operations to have high capacity and flexibility. While existing memory-augmented network models treat each memory slot as an independent block, our use of multi-layered CNNs allows the model to read and write sequential memory cells as chunks, which is more reasonable to represent a sequential story because adjacent memory blocks often have strong correlations. For evaluation, we apply our model to all the six tasks of the MovieQA benchmark [24], and achieve the best accuracies on several tasks, especially on the visual QA task. Our model shows a potential to better understand not only the content in the story, but also more abstract information, such as relationships between characters and the reasons for their actions.", "Question-answering (QA) on video contents is a significant challenge for achieving human-level intelligence as it involves both vision and language in real-world settings. 
Here we demonstrate the possibility of an AI agent performing video story QA by learning from a large amount of cartoon videos. We develop a video-story learning model, i.e. Deep Embedded Memory Networks (DEMN), to reconstruct stories from a joint scene-dialogue video stream using a latent embedding space of observed data. The video stories are stored in a long-term memory component. For a given question, an LSTM-based attention model uses the long-term memory to recall the best question-story-answer triplet by focusing on specific words containing key information. We trained the DEMN on a novel QA dataset of children's cartoon video series, Pororo. The dataset contains 16,066 scene-dialogue pairs of 20.5-hour videos, 27,328 fine-grained sentences for scene description, and 8,913 story-related QA pairs. Our experimental results show that the DEMN outperforms other QA models. This is mainly due to 1) the reconstruction of video stories in a scene-dialogue combined form that utilize the latent embedding and 2) attention. DEMN also achieved state-of-the-art results on the MovieQA benchmark.", "", "", "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.0 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature.", "" ] }
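Since most of these models build on memory-augmented architectures, a single-hop MemN2N-style read illustrates the basic retrieval step: the question embedding attends over memory slots holding video/subtitle chunks and reads out a weighted sum of their contents. This is a minimal sketch, not the multi-hop or convolutional read/write variants discussed above.

import torch
import torch.nn.functional as F

def memory_read(query, mem_keys, mem_values):
    # query:      (D,)    question embedding
    # mem_keys:   (M, D)  embeddings of stored video/subtitle chunks (addressing)
    # mem_values: (M, D)  embeddings of the same chunks (content to be read)
    attn = F.softmax(mem_keys @ query, dim=0)   # (M,) attention over memory slots
    return attn @ mem_values                    # (D,) retrieved evidence vector

D, M = 64, 10
q = torch.randn(D)
keys, values = torch.randn(M, D), torch.randn(M, D)
evidence = memory_read(q, keys, values)
# an answer scorer would combine the evidence with q (e.g. q + evidence)
# and compare the result against candidate answer embeddings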
1905.13561
2947731696
The social media revolution has produced a plethora of web services to which users can easily upload and share multimedia documents. Despite the popularity and convenience of such services, the sharing of such inherently personal data, including speech data, raises obvious security and privacy concerns. In particular, a user's speech data may be acquired and used with speech synthesis systems to produce high-quality speech utterances which reflect the same user's speaker identity. These utterances may then be used to attack speaker verification systems. One solution to mitigate these concerns involves the concealing of speaker identities before the sharing of speech data. For this purpose, we present a new approach to speaker anonymization. The idea is to extract linguistic and speaker identity features from an utterance and then to use these with neural acoustic and waveform models to synthesize anonymized speech. The original speaker identity, in the form of timbre, is suppressed and replaced with that of an anonymous pseudo identity. The approach exploits state-of-the-art x-vector speaker representations. These are used to derive anonymized pseudo speaker identities through the combination of multiple, random speaker x-vectors. Experimental results show that the proposed approach is effective in concealing speaker identities. It increases the equal error rate of a speaker verification system while maintaining high quality, anonymized speech.
First of all, speaker anonymization differs from speech anonymization @cite_21 @cite_7 in that the former suppresses the speaker identity while the latter obscures the linguistic content. According to the manipulation objective, speaker anonymization can be split into physical and logical anonymization. Physical anonymization aims to perturb speech in physical space by adding an external sound to the original waveform @cite_24 , while logical anonymization modifies the speaker identity in the recorded speech signal. Our proposed method falls into the latter category.
{ "cite_N": [ "@cite_24", "@cite_21", "@cite_7" ], "mid": [ "2407075323", "1966092470", "2595587557" ], "abstract": [ "In this paper, a privacy protection method to prevent speaker identification from recorded speech is proposed and evaluated. Although many techniques for preserving various private information included in speech have been proposed, their impacts on human speech communication in physical space are not taken into account. To overcome this problem, this paper proposes privacy-preserving sound as a privacy protection method. The privacy-preserving sound can degrade speaker verification performance without interfering with human speech communication in physical space. To make a first step toward solving this problem, suitable sound characteristics for preserving privacy are evaluated in terms of the speaker verification performance and speech intelligibility. The experimental results show that appropriate sound can efficiently degrade the speaker verification performance without degrading speech intelligibility.", "Privacy protection issue introduces numerous challenges in the multimedia processing domain. In this paper, we propose an anonymization framework for audio clinical data. The HMM based keyword recognition technique is used to locate the predefined sensitive keywords, which are identified by the users or patients in advance. These keywords will then be substituted by the synthesized nominal words of the similar nature and voice characteristics. The ultimate goal is to protect the privacy information as much as possible, while trying to preserve the speech properties, especially the disease-related symptoms, such as the loudness, the rhythm, the emotion, etc. A preliminary system is presented to demonstrate the usage of the process.", "" ] }
1905.13561
2947731696
The social media revolution has produced a plethora of web services to which users can easily upload and share multimedia documents. Despite the popularity and convenience of such services, the sharing of such inherently personal data, including speech data, raises obvious security and privacy concerns. In particular, a user's speech data may be acquired and used with speech synthesis systems to produce high-quality speech utterances which reflect the same user's speaker identity. These utterances may then be used to attack speaker verification systems. One solution to mitigate these concerns involves the concealing of speaker identities before the sharing of speech data. For this purpose, we present a new approach to speaker anonymization. The idea is to extract linguistic and speaker identity features from an utterance and then to use these with neural acoustic and waveform models to synthesize anonymized speech. The original speaker identity, in the form of timbre, is suppressed and replaced with that of an anonymous pseudo identity. The approach exploits state-of-the-art x-vector speaker representations. These are used to derive anonymized pseudo speaker identities through the combination of multiple, random speaker x-vectors. Experimental results show that the proposed approach is effective in concealing speaker identities. It increases the equal error rate of a speaker verification system while maintaining high quality, anonymized speech.
@cite_8 performed speaker anonymization by first recognizing the diphones in the input speech using an ASR system and then synthesizing speech from the recognized diphone sequence. The synthesized speech differs from the original in terms of speaker identity because the synthesizer is speaker-dependent and was trained on data from a different speaker. This method is similar to our proposed method, but we use a speaker-independent speech synthesizer trained on data from many speakers. Our framework is therefore more flexible.
{ "cite_N": [ "@cite_8" ], "mid": [ "1652729136" ], "abstract": [ "The paper addresses the problem of speaker (or voice) de-identification by presenting a novel approach for concealing the identity of speakers in their speech. The proposed technique first recognizes the input speech with a diphone recognition system and then transforms the obtained phonetic transcription into the speech of another speaker with a speech synthesis system. Due to the fact that a Diphone RecOgnition step and a sPeech SYnthesis step are used during the de-identification, we refer to the developed technique as DROPSY. With this approach the acoustical models of the recognition and synthesis modules are completely independent from each other, which ensures the highest level of input speaker de-identification. The proposed DROPSY-based de-identification approach is language dependent, text independent and capable of running in real-time due to the relatively simple computing methods used. When designing speaker de-identification technology two requirements are typically imposed on the de-identification techniques: i) it should not be possible to establish the identity of the speakers based on the de-identified speech, and ii) the processed speech should still sound natural and be intelligible. This paper, therefore, implements the proposed DROPSY-based approach with two different speech synthesis techniques (i.e, with the HMM-based and the diphone TD-PSOLA-based technique). The obtained de-identified speech is evaluated for intelligibility and evaluated in speaker verification experiments with a state-of-the-art (i-vector PLDA) speaker recognition system. The comparison of both speech synthesis modules integrated in the proposed method reveals that both can efficiently de-identify the input speakers while still producing intelligible speech." ] }
1905.13561
2947731696
The social media revolution has produced a plethora of web services to which users can easily upload and share multimedia documents. Despite the popularity and convenience of such services, the sharing of such inherently personal data, including speech data, raises obvious security and privacy concerns. In particular, a user's speech data may be acquired and used with speech synthesis systems to produce high-quality speech utterances which reflect the same user's speaker identity. These utterances may then be used to attack speaker verification systems. One solution to mitigate these concerns involves the concealing of speaker identities before the sharing of speech data. For this purpose, we present a new approach to speaker anonymization. The idea is to extract linguistic and speaker identity features from an utterance and then to use these with neural acoustic and waveform models to synthesize anonymized speech. The original speaker identity, in the form of timbre, is suppressed and replaced with that of an anonymous pseudo identity. The approach exploits state-of-the-art x-vector speaker representations. These are used to derive anonymized pseudo speaker identities through the combination of multiple, random speaker x-vectors. Experimental results show that the proposed approach is effective in concealing speaker identities. It increases the equal error rate of a speaker verification system while maintaining high quality, anonymized speech.
With a goal closely related to that of anonymization, @cite_15 investigated so-called speaker evasion and obfuscation using voice conversion techniques. Since that work aimed only to circumvent surveillance systems, it evaluated only how the approach could degrade ASV performance and did not consider degradations to speech quality. In contrast, the ideas presented in this paper are evaluated in terms of speaker identity anonymization, speech quality and linguistic content.
{ "cite_N": [ "@cite_15" ], "mid": [ "1966993810" ], "abstract": [ "The potential for biometric systems to be manipulated through some form of subversion is well acknowledged. One such approach known as spoofing relates to the provocation of false accepts in authentication applications. Another approach referred to as obfuscation relates to the provocation of missed detections in surveillance applications. While the automatic speaker verification research community is now addressing spoofing and countermeasures, vulnerabilities to obfuscation remain largely unknown. This paper reports the first study. Our work with standard NIST datasets and protocols shows that the equal error rate of a standard GMM-UBM system is increased from 9 to 48 through obfuscation, whereas that of a state-of-the-art i-vector system increases from 3 to 20 . We also present a generalised approach to obfuscation detection which succeeds in detecting almost all attempts to evade detection." ] }
1905.13648
2947555604
Current visual question answering datasets do not consider the rich semantic information conveyed by text within an image. In this work, we present a new dataset, ST-VQA, that aims to highlight the importance of exploiting high-level semantic information present in images as textual cues in the VQA process. We use this dataset to define a series of tasks of increasing difficulty for which reading the scene text in the context provided by the visual information is necessary to reason and generate an appropriate answer. We propose a new evaluation metric for these tasks to account both for reasoning errors as well as shortcomings of the text recognition module. In addition we put forward a series of baseline methods, which provide further insight to the newly released dataset, and set the scene for further research.
The task of text detection and recognition in natural images is the starting point of a generalized VQA system that can integrate textual cues for complete scene understanding. The most common approach in the text community consists of two steps, text detection and recognition. Several works addressing text detection have been proposed, such as @cite_43 @cite_8 @cite_24 @cite_38 , which mostly rely on a fully convolutional neural network. Text recognition methods such as the one presented in @cite_1 treat recognition as a classification problem over a 90K-word English vocabulary. An attention based sequence-to-sequence model is used by @cite_14 and a connectionist temporal classification (CTC) is proposed by @cite_20 . Later works gravitate towards end-to-end architectures such as the ones presented by @cite_27 @cite_15 @cite_4 , which mostly consist of an initial Convolutional Neural Network (CNN) that acts as an encoder and a Long Short Term Memory (LSTM) combined with attention that acts as the decoder.
{ "cite_N": [ "@cite_38", "@cite_14", "@cite_4", "@cite_8", "@cite_1", "@cite_24", "@cite_43", "@cite_27", "@cite_15", "@cite_20" ], "mid": [ "", "2963517393", "", "", "1922126009", "", "2962773189", "2777652944", "", "2122585011" ], "abstract": [ "", "Recognizing text in natural images is a challenging task with many unsolved problems. Different from those in documents, words in natural images often possess irregular shapes, which are caused by perspective distortion, curved character placement, etc. We propose RARE (Robust text recognizer with Automatic REctification), a recognition model that is robust to irregular text. RARE is a speciallydesigned deep neural network, which consists of a Spatial Transformer Network (STN) and a Sequence Recognition Network (SRN). In testing, an image is firstly rectified via a predicted Thin-Plate-Spline (TPS) transformation, into a more \"readable\" image for the following SRN, which recognizes text through a sequence recognition approach. We show that the model is able to recognize several types of irregular text, including perspective text and curved text. RARE is end-to-end trainable, requiring only images and associated text labels, making it convenient to train and deploy the model in practical systems. State-of-the-art or highly-competitive performance achieved on several benchmarks well demonstrates the effectiveness of the proposed model.", "", "", "In this work we present an end-to-end system for text spotting--localising and recognising text in natural scene images--and text based image retrieval. This system is based on a region proposal mechanism for detection and deep convolutional neural networks for recognition. Our pipeline uses a novel combination of complementary proposal generation techniques to ensure high recall, and a fast subsequent filtering stage for improving precision. For the recognition and ranking of proposals, we train very large convolutional neural networks to perform word recognition on the whole proposal region at the same time, departing from the character classifier based systems of the past. These networks are trained solely on data produced by a synthetic text generation engine, requiring no human labelled data. Analysing the stages of our pipeline, we show state-of-the-art performance throughout. We perform rigorous experiments across a number of standard end-to-end text spotting benchmarks and text-based image retrieval datasets, showing a large improvement over all previous methods. Finally, we demonstrate a real-world application of our text spotting system to allow thousands of hours of news footage to be instantly searchable via a text query.", "", "This paper presents an end-to-end trainable fast scene text detector, named TextBoxes, which detects scene text with both high accuracy and efficiency in a single network forward pass, involving no post-process except for a standard non-maximum suppression. TextBoxes outperforms competing methods in terms of text localization accuracy and is much faster, taking only 0.09s per image in a fast implementation. Furthermore, combined with a text recognizer, TextBoxes significantly outperforms state-of-the-art approaches on word spotting and end-to-end text recognition tasks.", "A method for scene text localization and recognition is proposed. 
The novelties include: training of both text detection and recognition in a single end-to-end pass, the structure of the recognition CNN and the geometry of its input layer that preserves the aspect of the text and adapts its resolution to the data. The proposed method achieves state-of-the-art accuracy in the end-to-end text recognition on two standard datasets – ICDAR 2013 and ICDAR 2015, whilst being an order of magnitude faster than competing methods - the whole pipeline runs at 10 frames per second on an NVidia K80 GPU.", "", "Recognizing lines of unconstrained handwritten text is a challenging task. The difficulty of segmenting cursive or overlapping characters, combined with the need to exploit surrounding context, has led to low recognition rates for even the best current recognizers. Most recent progress in the field has been made either through improved preprocessing or through advances in language modeling. Relatively little work has been done on the basic recognition algorithms. Indeed, most systems rely on the same hidden Markov models that have been used for decades in speech and handwriting recognition, despite their well-known shortcomings. This paper proposes an alternative approach based on a novel type of recurrent neural network, specifically designed for sequence labeling tasks where the data is hard to segment and contains long-range bidirectional interdependencies. In experiments on two large unconstrained handwriting databases, our approach achieves word recognition accuracies of 79.7 percent on online data and 74.1 percent on offline data, significantly outperforming a state-of-the-art HMM-based system. In addition, we demonstrate the network's robustness to lexicon size, measure the individual influence of its hidden layers, and analyze its use of context. Last, we provide an in-depth discussion of the differences between the network and HMMs, suggesting reasons for the network's superior performance." ] }
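The related-work paragraph above mentions connectionist temporal classification (CTC) for text recognition. The standard greedy (best-path) decoding step, which collapses repeated labels and removes the blank symbol, is sketched below; the toy alphabet and per-frame probabilities are made up for illustration.

import numpy as np

def ctc_greedy_decode(log_probs: np.ndarray, alphabet: str, blank: int = 0) -> str:
    """Best-path CTC decoding for a (T, C) matrix of per-frame log-probabilities:
    take the argmax per frame, collapse consecutive repeats, drop blanks."""
    decoded, prev = [], blank
    for label in log_probs.argmax(axis=1):
        if label != prev and label != blank:
            decoded.append(alphabet[label - 1])  # index 0 is reserved for blank
        prev = label
    return "".join(decoded)

# Five frames over a blank + "abc" alphabet; repeats and blanks are removed.
frames = np.log(np.array([
    [0.10, 0.80, 0.05, 0.05],   # 'a'
    [0.10, 0.80, 0.05, 0.05],   # 'a' (collapsed)
    [0.90, 0.03, 0.03, 0.04],   # blank
    [0.10, 0.05, 0.80, 0.05],   # 'b'
    [0.10, 0.05, 0.05, 0.80],   # 'c'
]))
print(ctc_greedy_decode(frames, "abc"))  # -> "abc"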
1905.13648
2947555604
Current visual question answering datasets do not consider the rich semantic information conveyed by text within an image. In this work, we present a new dataset, ST-VQA, that aims to highlight the importance of exploiting high-level semantic information present in images as textual cues in the VQA process. We use this dataset to define a series of tasks of increasing difficulty for which reading the scene text in the context provided by the visual information is necessary to reason and generate an appropriate answer. We propose a new evaluation metric for these tasks to account both for reasoning errors as well as shortcomings of the text recognition module. In addition we put forward a series of baseline methods, which provide further insight to the newly released dataset, and set the scene for further research.
Also related to the task proposed in this paper are the recent works of Kafle et al. @cite_41 and Kahou et al. @cite_29 on question answering for bar charts and diagrams, and the work of Kembhavi et al. @cite_46 on textbook question answering. The Textbook Question Answering (TQA) dataset @cite_46 aims at answering multimodal questions given a context of text, diagrams and images, but its textual information is provided in a computer-readable format. This is not the case for the diagrams and charts of the datasets proposed in @cite_41 @cite_29 , meaning that models require some sort of text recognition to solve such QA tasks. However, the text found in these datasets is rendered in standard font types and with good quality, and thus represents a less challenging setup than the scene text used in our work. Similarly, Kise et al. @cite_50 leverage OCR outputs to develop a QA system for machine-printed document images.
{ "cite_N": [ "@cite_41", "@cite_29", "@cite_46", "@cite_50" ], "mid": [ "2963420691", "2766732270", "2746097825", "2112931589" ], "abstract": [ "Bar charts are an effective way to convey numeric information, but today's algorithms cannot parse them. Existing methods fail when faced with even minor variations in appearance. Here, we present DVQA, a dataset that tests many aspects of bar chart understanding in a question answering framework. Unlike visual question answering (VQA), DVQA requires processing words and answers that are unique to a particular bar chart. State-of-the-art VQA algorithms perform poorly on DVQA, and we propose two strong baselines that perform considerably better. Our work will enable algorithms to automatically extract numeric and semantic information from vast quantities of bar charts found in scientific publications, Internet articles, business reports, and many other areas.", "We introduce FigureQA, a visual reasoning corpus of over one million question-answer pairs grounded in over 100,000 images. The images are synthetic, scientific-style figures from five classes: line plots, dot-line plots, vertical and horizontal bar graphs, and pie charts. We formulate our reasoning task by generating questions from 15 templates; questions concern various relationships between plot elements and examine characteristics like the maximum, the minimum, area-under-the-curve, smoothness, and intersection. To resolve, such questions often require reference to multiple plot elements and synthesis of information distributed spatially throughout a figure. To facilitate the training of machine learning systems, the corpus also includes side data that can be used to formulate auxiliary objectives. In particular, we provide the numerical data used to generate each figure as well as bounding-box annotations for all plot elements. We study the proposed visual reasoning task by training several models, including the recently proposed Relation Network as a strong baseline. Preliminary results indicate that the task poses a significant machine learning challenge. We envision FigureQA as a first step towards developing models that can intuitively recognize patterns from visual representations of data.", "We introduce the task of Multi-Modal Machine Comprehension (M3C), which aims at answering multimodal questions given a context of text, diagrams and images. We present the Textbook Question Answering (TQA) dataset that includes 1,076 lessons and 26,260 multi-modal questions, taken from middle school science curricula. Our analysis shows that a significant portion of questions require complex parsing of the text and the diagrams and reasoning, indicating that our dataset is more complex compared to previous machine comprehension and visual question answering datasets. We extend state-of-the-art methods for textual machine comprehension and visual question answering to the TQA dataset. Our experiments show that these models do not perform well on TQA. The presented dataset opens new challenges for research in question answering and reasoning across multiple modalities.", "Question answering (QA) is the task of retrieving an answer in response to a question by analyzing documents. Although most of the efforts in developing QA systems are devoted to dealing with electronic text, we consider it is also necessary to develop systems for document images. In this paper, we propose a method of document image retrieval for such QA systems. 
Since the task is not to retrieve all relevant documents but to find the answer somewhere in documents, retrieval should be precision oriented. The main contribution of this paper is to propose a method of improving precision of document image retrieval by taking into account the co-occurrence of successive terms in a question. The indexing scheme is based on two-dimensional distributions of terms and the weight of co-occurrence is measured by calculating the density distributions of terms. The proposed method was tested by using 1253 pages of documents about the major league baseball with 20 questions and found that it is superior to the baseline method proposed by the authors." ] }
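The ST-VQA abstract above announces an evaluation metric that tolerates shortcomings of the text recognition module. One common way to soften exact-match answer scoring is a normalized Levenshtein similarity with an acceptance threshold, sketched below; the 0.5 threshold and the lower-casing are assumptions of this sketch and not necessarily the exact metric defined in the paper.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def soft_answer_score(pred: str, truth: str, threshold: float = 0.5) -> float:
    """Similarity in [0, 1]; answers below the threshold score zero."""
    pred, truth = pred.strip().lower(), truth.strip().lower()
    if not pred and not truth:
        return 1.0
    sim = 1.0 - levenshtein(pred, truth) / max(len(pred), len(truth))
    return sim if sim >= threshold else 0.0

print(soft_answer_score("St0p", "Stop"))  # partial credit for an OCR-style slip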
1905.12864
2947251152
Generating high-quality and interpretable adversarial examples in the text domain is a much more daunting task than it is in the image domain. This is due partly to the discrete nature of text, partly to the problem of ensuring that the adversarial examples are still probable and interpretable, and partly to the problem of maintaining label invariance under input perturbations. In order to address some of these challenges, we introduce sparse projected gradient descent (SPGD), a new approach to crafting interpretable adversarial examples for text. SPGD imposes a directional regularization constraint on input perturbations by projecting them onto the directions to nearby word embeddings with highest cosine similarities. This constraint ensures that perturbations move each word embedding in an interpretable direction (i.e., towards another nearby word embedding). Moreover, SPGD imposes a sparsity constraint on perturbations at the sentence level by ignoring word-embedding perturbations whose norms are below a certain threshold. This constraint ensures that our method changes only a few words per sequence, leading to higher quality adversarial examples. Our experiments with the IMDB movie review dataset show that the proposed SPGD method improves adversarial example interpretability and likelihood (evaluated by average per-word perplexity) compared to state-of-the-art methods, while suffering little to no loss in training performance.
The phenomenon of adversarial examples was first noted in @cite_33 . Thereafter, @cite_5 proposed injecting adversarial examples into a model's training data in order to regularize the model and improve test generalization (referred to here as AdvT). Early work on adversarial examples focused on the image domain, where example inputs have a natural continuous representation. Examples in the text domain, on the other hand, present peculiar difficulties due to their discrete nature; the sorts of methods devised to contend with these difficulties fall into three broad classes, which we address in turn, occasionally identifying a method's origins in image-domain work.
{ "cite_N": [ "@cite_5", "@cite_33" ], "mid": [ "2963207607", "2964153729" ], "abstract": [ "Abstract: Several machine learning models, including neural networks, consistently misclassify adversarial examples---inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset.", "Abstract: Deep neural networks are highly expressive models that have recently achieved state of the art performance on speech and visual recognition tasks. While their expressiveness is the reason they succeed, it also causes them to learn uninterpretable solutions that could have counter-intuitive properties. In this paper we report two such properties. First, we find that there is no distinction between individual high level units and random linear combinations of high level units, according to various methods of unit analysis. It suggests that it is the space, rather than the individual units, that contains of the semantic information in the high layers of neural networks. Second, we find that deep neural networks learn input-output mappings that are fairly discontinuous to a significant extend. We can cause the network to misclassify an image by applying a certain imperceptible perturbation, which is found by maximizing the network's prediction error. In addition, the specific nature of these perturbations is not a random artifact of learning: the same perturbation can cause a different network, that was trained on a different subset of the dataset, to misclassify the same input." ] }
1905.12864
2947251152
Generating high-quality and interpretable adversarial examples in the text domain is a much more daunting task than it is in the image domain. This is due partly to the discrete nature of text, partly to the problem of ensuring that the adversarial examples are still probable and interpretable, and partly to the problem of maintaining label invariance under input perturbations. In order to address some of these challenges, we introduce sparse projected gradient descent (SPGD), a new approach to crafting interpretable adversarial examples for text. SPGD imposes a directional regularization constraint on input perturbations by projecting them onto the directions to nearby word embeddings with highest cosine similarities. This constraint ensures that perturbations move each word embedding in an interpretable direction (i.e., towards another nearby word embedding). Moreover, SPGD imposes a sparsity constraint on perturbations at the sentence level by ignoring word-embedding perturbations whose norms are below a certain threshold. This constraint ensures that our method changes only a few words per sequence, leading to higher quality adversarial examples. Our experiments with the IMDB movie review dataset show that the proposed SPGD method improves adversarial example interpretability and likelihood (evaluated by average per-word perplexity) compared to state-of-the-art methods, while suffering little to no loss in training performance.
A family of fast-gradient sign methods was introduced in @cite_5 and applied to the image domain. A Jacobian-based saliency map attack (JSMA) was introduced by @cite_3 and @cite_23 , whereby a saliency map computed from the classifier's Jacobian is used to rank input components; thereafter, input components are perturbed iteratively, in order of salience, until the classifier's prediction changes. The method was never applied to text. @cite_27 seems to have been the first to propose repeated application of FGSM to improve the chance of fooling the classifier.
{ "cite_N": [ "@cite_5", "@cite_27", "@cite_23", "@cite_3" ], "mid": [ "2963207607", "2460937040", "2274565976", "2180612164" ], "abstract": [ "Abstract: Several machine learning models, including neural networks, consistently misclassify adversarial examples---inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset.", "Most existing machine learning classifiers are highly vulnerable to adversarial examples. An adversarial example is a sample of input data which has been modified very slightly in a way that is intended to cause a machine learning classifier to misclassify it. In many cases, these modifications can be so subtle that a human observer does not even notice the modification at all, yet the classifier still makes a mistake. Adversarial examples pose security concerns because they could be used to perform an attack on machine learning systems, even if the adversary has no access to the underlying model. Up to now, all previous work have assumed a threat model in which the adversary can feed data directly into the machine learning classifier. This is not always the case for systems operating in the physical world, for example those which are using signals from cameras and other sensors as an input. This paper shows that even in such physical world scenarios, machine learning systems are vulnerable to adversarial examples. We demonstrate this by feeding adversarial images obtained from cell-phone camera to an ImageNet Inception classifier and measuring the classification accuracy of the system. We find that a large fraction of adversarial examples are classified incorrectly even when perceived through the camera.", "Advances in deep learning have led to the broad adoption of Deep Neural Networks (DNNs) to a range of important machine learning problems, e.g., guiding autonomous vehicles, speech recognition, malware detection. Yet, machine learning models, including DNNs, were shown to be vulnerable to adversarial samples-subtly (and often humanly indistinguishably) modified malicious inputs crafted to compromise the integrity of their outputs. Adversarial examples thus enable adversaries to manipulate system behaviors. Potential attacks include attempts to control the behavior of vehicles, have spam content identified as legitimate content, or have malware identified as legitimate software. Adversarial examples are known to transfer from one model to another, even if the second model has a different architecture or was trained on a different set. We introduce the first practical demonstration that this cross-model transfer phenomenon enables attackers to control a remotely hosted DNN with no access to the model, its parameters, or its training data. 
In our demonstration, we only assume that the adversary can observe outputs from the target DNN given inputs chosen by the adversary. We introduce the attack strategy of fitting a substitute model to the input-output pairs in this manner, then crafting adversarial examples based on this auxiliary model. We evaluate the approach on existing DNN datasets and real-world settings. In one experiment, we force a DNN supported by MetaMind (one of the online APIs for DNN classifiers) to mis-classify inputs at a rate of 84.24%. We conclude with experiments exploring why adversarial samples transfer between DNNs, and a discussion on the applicability of our attack when targeting machine learning algorithms distinct from DNNs.", "Deep learning takes advantage of large datasets and computationally efficient training algorithms to outperform other approaches at various machine learning tasks. However, imperfections in the training phase of deep neural networks make them vulnerable to adversarial samples: inputs crafted by adversaries with the intent of causing deep neural networks to misclassify. In this work, we formalize the space of adversaries against deep neural networks (DNNs) and introduce a novel class of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs. In an application to computer vision, we show that our algorithms can reliably produce samples correctly classified by human subjects but misclassified in specific targets by a DNN with a 97% adversarial success rate while only modifying on average 4.02% of the input features per sample. We then evaluate the vulnerability of different sample classes to adversarial perturbations by defining a hardness measure. Finally, we describe preliminary work outlining defenses against adversarial samples by defining a predictive measure of distance between a benign input and a target classification." ] }
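The paragraph above credits @cite_27 with repeating the fast-gradient step. A sketch of such an iterated, clipped attack is given below; the step size, number of iterations, and clipping radius are illustrative choices rather than values from the cited paper.

import torch
import torch.nn.functional as F

def iterated_fgsm(model, x, y, epsilon=0.1, alpha=0.02, steps=10):
    """Repeat small signed-gradient steps, clipping the total perturbation
    back into an L-infinity ball of radius epsilon around the original input."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        step = x_adv.detach() + alpha * grad.sign()
        x_adv = x + torch.clamp(step - x, -epsilon, epsilon)  # project back
    return x_adv.detach()

torch.manual_seed(0)
model = torch.nn.Linear(20, 2)
x, y = torch.randn(4, 20), torch.randint(0, 2, (4,))
print((iterated_fgsm(model, x, y) - x).abs().max())  # bounded by epsilon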
1905.12864
2947251152
Generating high-quality and interpretable adversarial examples in the text domain is a much more daunting task than it is in the image domain. This is due partly to the discrete nature of text, partly to the problem of ensuring that the adversarial examples are still probable and interpretable, and partly to the problem of maintaining label invariance under input perturbations. In order to address some of these challenges, we introduce sparse projected gradient descent (SPGD), a new approach to crafting interpretable adversarial examples for text. SPGD imposes a directional regularization constraint on input perturbations by projecting them onto the directions to nearby word embeddings with highest cosine similarities. This constraint ensures that perturbations move each word embedding in an interpretable direction (i.e., towards another nearby word embedding). Moreover, SPGD imposes a sparsity constraint on perturbations at the sentence level by ignoring word-embedding perturbations whose norms are below a certain threshold. This constraint ensures that our method changes only a few words per sequence, leading to higher quality adversarial examples. Our experiments with the IMDB movie review dataset show that the proposed SPGD method improves adversarial example interpretability and likelihood (evaluated by average per-word perplexity) compared to state-of-the-art methods, while suffering little to no loss in training performance.
With @cite_26 , fast gradient AdvT was first applied to text classification models at the word embedding level. In contrast to @cite_5 's family of FGSM attacks, which only use the gradient sign, @cite_26 uses the raw gradient. @cite_15 extends @cite_26 's work, modifying their method to improve interpretability without sacrificing test accuracy or computational efficiency. Our method (SPGD) follows this line of work, further modifying fast gradient text methods by introducing sequence-level sparsification and projecting onto a measure-zero subset of embedding space. (The idea of using some form of PGD to find adversarial examples in the image domain was recently explored in @cite_24 , though our application is, to the best of our knowledge, unique.)
{ "cite_N": [ "@cite_24", "@cite_5", "@cite_15", "@cite_26" ], "mid": [ "2640329709", "2963207607", "2799420921", "2963699875" ], "abstract": [ "Recent work has demonstrated that neural networks are vulnerable to adversarial examples, i.e., inputs that are almost indistinguishable from natural data and yet classified incorrectly by the network. In fact, some of the latest findings suggest that the existence of adversarial attacks may be an inherent weakness of deep learning models. To address this problem, we study the adversarial robustness of neural networks through the lens of robust optimization. This approach provides us with a broad and unifying view on much of the prior work on this topic. Its principled nature also enables us to identify methods for both training and attacking neural networks that are reliable and, in a certain sense, universal. In particular, they specify a concrete security guarantee that would protect against any adversary. These methods let us train networks with significantly improved resistance to a wide range of adversarial attacks. They also suggest the notion of security against a first-order adversary as a natural and broad security guarantee. We believe that robustness against such well-defined classes of adversaries is an important stepping stone towards fully resistant deep learning models.", "Abstract: Several machine learning models, including neural networks, consistently misclassify adversarial examples---inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset.", "Following great success in the image processing field, the idea of adversarial training has been applied to tasks in the natural language processing (NLP) field. One promising approach directly applies adversarial training developed in the image processing field to the input word embedding space instead of the discrete input space of texts. However, this approach abandons such interpretability as generating adversarial texts to significantly improve the performance of NLP tasks. This paper restores interpretability to such methods by restricting the directions of perturbations toward the existing words in the input embedding space. As a result, we can straightforwardly reconstruct each input with perturbations to an actual text by considering the perturbations to be the replacement of words in the sentence while maintaining or even improving the task performance.", "Adversarial training provides a means of regularizing supervised learning algorithms while virtual adversarial training is able to extend supervised learning algorithms to the semi-supervised setting. 
However, both methods require making small perturbations to numerous entries of the input vector, which is inappropriate for sparse high-dimensional inputs such as one-hot word representations. We extend adversarial and virtual adversarial training to the text domain by applying perturbations to the word embeddings in a recurrent neural network rather than to the original input itself. The proposed method achieves state of the art results on multiple benchmark semi-supervised and purely supervised tasks. We provide visualizations and analysis showing that the learned word embeddings have improved in quality and that while training, the model is less prone to overfitting." ] }
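The SPGD abstract and the paragraph above describe the core step of the method: perturb word embeddings along the loss gradient, keep only the largest per-word perturbations, and project each kept perturbation onto the direction of a nearby word embedding. The numpy sketch below illustrates that step in a simplified form; the gradient input, the keep fraction, and the cosine-based neighbor choice are assumptions of the sketch, not the paper's implementation.

import numpy as np

def sparse_projected_step(emb, grad, vocab, step=0.5, keep_frac=0.2):
    """emb: (T, d) embeddings of one sequence; grad: (T, d) loss gradient;
    vocab: (V, d) embedding table. Keep only the largest perturbations and move
    each kept word along the most cosine-similar direction to another word."""
    pert = step * grad
    norms = np.linalg.norm(pert, axis=1)
    keep = norms >= np.quantile(norms, 1.0 - keep_frac)   # sentence-level sparsity
    out = emb.copy()
    for t in np.where(keep)[0]:
        diffs = vocab - emb[t]                            # directions to other words
        sims = diffs @ pert[t] / (np.linalg.norm(diffs, axis=1) * norms[t] + 1e-8)
        direction = diffs[np.argmax(sims)]
        out[t] = emb[t] + norms[t] * direction / (np.linalg.norm(direction) + 1e-8)
    return out

rng = np.random.default_rng(0)
vocab, emb, grad = rng.normal(size=(1000, 50)), rng.normal(size=(30, 50)), rng.normal(size=(30, 50))
print(sparse_projected_step(emb, grad, vocab).shape)  # (30, 50)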
1905.12864
2947251152
Generating high-quality and interpretable adversarial examples in the text domain is a much more daunting task than it is in the image domain. This is due partly to the discrete nature of text, partly to the problem of ensuring that the adversarial examples are still probable and interpretable, and partly to the problem of maintaining label invariance under input perturbations. In order to address some of these challenges, we introduce sparse projected gradient descent (SPGD), a new approach to crafting interpretable adversarial examples for text. SPGD imposes a directional regularization constraint on input perturbations by projecting them onto the directions to nearby word embeddings with highest cosine similarities. This constraint ensures that perturbations move each word embedding in an interpretable direction (i.e., towards another nearby word embedding). Moreover, SPGD imposes a sparsity constraint on perturbations at the sentence level by ignoring word-embedding perturbations whose norms are below a certain threshold. This constraint ensures that our method changes only a few words per sequence, leading to higher quality adversarial examples. Our experiments with the IMDB movie review dataset show that the proposed SPGD method improves adversarial example interpretability and likelihood (evaluated by average per-word perplexity) compared to state-of-the-art methods, while suffering little to no loss in training performance.
In contrast to the fast-gradient methods just described, global gradient methods use the model gradient to perturb a global embedding of the entire example. For instance, @cite_10 uses an adversarially regularized autoencoder ( @cite_30 ) to learn a continuous projection of example sequences; in the adversarial regime, they perturb this global representation, subsequently decoding the perturbed point using an LSTM. @cite_19 employs a similar approach, using syntactically-controlled paraphrase networks (SCPNs) to generate semantically similar, but syntactically divergent, adversarial examples from original, ground-truth examples. The point of these sorts of approaches is usually to generate "natural" adversarial examples which, unlike those produced by the fast-gradient methods above, may diverge in word order or sentence structure from the original example. Thus, such methods align with, and represent an alternate approach to, our goal of generating syntactically well-formed and semantically consistent examples in order to improve model regularization. Unfortunately, text autoencoders tend to suffer from high reconstruction error when dealing with very long sequences (such as those in the IMDB corpus, where sequences can range over 2,000 words), and so global gradient methods cannot, in general, be applied to the same datasets.
{ "cite_N": [ "@cite_30", "@cite_19", "@cite_10" ], "mid": [ "2773028880", "2963126845", "2766108848" ], "abstract": [ "Deep latent variable models, trained using variational autoencoders or generative adversarial networks, are now a key technique for representation learning of continuous structures. However, applying similar methods to discrete structures, such as text sequences or discretized images, has proven to be more challenging. In this work, we propose a flexible method for training deep latent variable models of discrete structures. Our approach is based on the recently-proposed Wasserstein autoencoder (WAE) which formalizes the adversarial autoencoder (AAE) as an optimal transport problem. We first extend this framework to model discrete sequences, and then further explore different learned priors targeting a controllable representation. This adversarially regularized autoencoder (ARAE) allows us to generate natural textual outputs as well as perform manipulations in the latent space to induce change in the output space. Finally we show that the latent representation can be trained to perform unaligned textual style transfer, giving improvements both in automatic human evaluation compared to existing methods.", "", "Due to their complex nature, it is hard to characterize the ways in which machine learning models can misbehave or be exploited when deployed. Recent work on adversarial examples, i.e. inputs with minor perturbations that result in substantially different model predictions, is helpful in evaluating the robustness of these models by exposing the adversarial scenarios where they fail. However, these malicious perturbations are often unnatural, not semantically meaningful, and not applicable to complicated domains such as language. In this paper, we propose a framework to generate natural and legible adversarial examples that lie on the data manifold, by searching in semantic space of dense and continuous data representation, utilizing the recent advances in generative adversarial networks. We present generated adversaries to demonstrate the potential of the proposed approach for black-box classifiers for a wide range of applications such as image classification, textual entailment, and machine translation. We include experiments to show that the generated adversaries are natural, legible to humans, and useful in evaluating and analyzing black-box classifiers." ] }
1905.12864
2947251152
Generating high-quality and interpretable adversarial examples in the text domain is a much more daunting task than it is in the image domain. This is due partly to the discrete nature of text, partly to the problem of ensuring that the adversarial examples are still probable and interpretable, and partly to the problem of maintaining label invariance under input perturbations. In order to address some of these challenges, we introduce sparse projected gradient descent (SPGD), a new approach to crafting interpretable adversarial examples for text. SPGD imposes a directional regularization constraint on input perturbations by projecting them onto the directions to nearby word embeddings with highest cosine similarities. This constraint ensures that perturbations move each word embedding in an interpretable direction (i.e., towards another nearby word embedding). Moreover, SPGD imposes a sparsity constraint on perturbations at the sentence level by ignoring word-embedding perturbations whose norms are below a certain threshold. This constraint ensures that our method changes only a few words per sequence, leading to higher quality adversarial examples. Our experiments with the IMDB movie review dataset show that the proposed SPGD method improves adversarial example interpretability and likelihood (evaluated by average per-word perplexity) compared to state-of-the-art methods, while suffering little to no loss in training performance.
Most of the past approaches to adversarial text generation, however, do not use the model gradient at all, instead working directly on the (discrete) text input. The earliest of these is due to @cite_21 , which proposes iteratively substituting words in a sentence with nearby neighbors until the classifier's label prediction changes; @cite_31 , unpublished, pursues a similar methodology. Still other recent methods ( @cite_36 , @cite_34 , and @cite_25 , for example) attack text examples by scrambling, misspelling, or erasing words, or even by introducing out-of-vocabulary (OOV) words; these approaches can make it easier to debug and regularize models in the text domain, but they also suffer from the aforementioned issues of harming syntactic coherence or destroying semantic equivalence between the original and adversarial examples. Other rule-based methods, such as @cite_32 , attempt to use hand-crafted rules to mitigate these issues and preserve semantic entailment between the adversarially-generated and original examples.
{ "cite_N": [ "@cite_36", "@cite_21", "@cite_32", "@cite_31", "@cite_34", "@cite_25" ], "mid": [ "2963661177", "2963834268", "2799007037", "2785699986", "2777353073", "2562979205" ], "abstract": [ "Character-based neural machine translation (NMT) models alleviate out-of-vocabulary issues, learn morphology, and move us closer to completely end-to-end translation systems. Unfortunately, they are also very brittle and easily falter when presented with noisy data. In this paper, we confront NMT models with synthetic and natural sources of noise. We find that state-of-the-art models fail to translate even moderately noisy texts that humans have no trouble comprehending. We explore two approaches to increase model robustness: structure-invariant word representations and robust training on noisy texts. We find that a model based on a character convolutional neural network is able to simultaneously learn representations robust to multiple kinds of noise.", "Machine learning models are frequently used to solve complex security problems, as well as to make decisions in sensitive situations like guiding autonomous vehicles or predicting financial market behaviors. Previous efforts have shown that numerous machine learning models are vulnerable to adversarial manipulations of their inputs taking the form of adversarial samples. Such inputs are crafted by adding carefully selected perturbations to legitimate inputs so as to force the machine learning model to misbehave, for instance by outputting a wrong class if the machine learning task of interest is classification. In fact, to the best of our knowledge, all previous work on adversarial samples crafting for neural networks considered models used to solve classification tasks, most frequently in computer vision applications. In this paper, we investigate adversarial input sequences for recurrent neural networks processing sequential data. We show that the classes of algorithms introduced previously to craft adversarial samples misclassified by feed-forward neural networks can be adapted to recurrent neural networks. In a experiment, we show that adversaries can craft adversarial sequences misleading both categorical and sequential recurrent neural networks.", "", "Modern machine learning algorithms are often susceptible to adversarial examples — maliciously crafted inputs that are undetectable by humans but that fool the algorithm into producing undesirable behavior. In this work, we show that adversarial examples exist in natural language classification: we formalize the notion of an adversarial example in this setting and describe algorithms that construct such examples. Adversarial perturbations can be crafted for a wide range of tasks — including spam filtering, fake news detection, and sentiment analysis — and affect different models — convolutional and recurrent neural networks as well as linear classifiers to a lesser degree. Constructing an adversarial example involves replacing 10-30 of words in a sentence with synonyms that don’t change its meaning. Up to 90 of input examples admit adversarial perturbations; furthermore, these perturbations retain a degree of transferability across models. Our findings demonstrate the existence of vulnerabilities in machine learning systems and hint at limitations in our understanding of classification algorithms.", "Adversarial examples expose vulnerabilities of machine learning models. 
We propose an efficient method to generate white-box adversarial examples that trick character-level and word-level neural models. Our method, HotFlip, relies on an atomic flip operation, which swaps one token for another, based on the gradients of the one-hot input vectors. In experiments on text classification and machine translation, we find that only a few manipulations are needed to greatly increase the error rates. We analyze the properties of these examples, and show that employing these adversarial examples in training can improve test-time accuracy on clean examples, as well as defend the models against adversarial examples.", "While neural networks have been successfully applied to many natural language processing tasks, they come at the cost of interpretability. In this paper, we propose a general methodology to analyze and interpret decisions from a neural model by observing the effects on the model of erasing various parts of the representation, such as input word-vector dimensions, intermediate hidden units, or input words. We present several approaches to analyzing the effects of such erasure, from computing the relative difference in evaluation metrics, to using reinforcement learning to erase the minimum set of input words in order to flip a neural model's decision. In a comprehensive analysis of multiple NLP tasks, including linguistic feature classification, sentence-level sentiment analysis, and document level sentiment aspect prediction, we show that the proposed methodology not only offers clear explanations about neural model decisions, but also provides a way to conduct error analysis on neural models." ] }
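The paragraph above covers attacks that operate directly on the discrete text, for example substituting words with nearby alternatives until the prediction flips. A small greedy version of that loop is sketched below; the classifier and neighbor functions are placeholder stubs, not any cited system.

def greedy_substitution_attack(tokens, predict, neighbors, max_changes=3):
    """Swap one word at a time for a nearby alternative, keeping the swap that
    most lowers confidence in the original label, until the prediction flips."""
    original_label, _ = predict(tokens)
    tokens = list(tokens)
    for _ in range(max_changes):
        best = None
        for i, word in enumerate(tokens):
            for cand in neighbors(word):
                trial = tokens[:i] + [cand] + tokens[i + 1:]
                label, conf = predict(trial)
                if label != original_label:
                    return trial                      # successful adversarial example
                if best is None or conf < best[0]:
                    best = (conf, i, cand)
        if best is None:
            break
        _, i, cand = best
        tokens[i] = cand                              # keep the most damaging swap
    return tokens

# Stub usage: 'predict' returns (label, confidence); 'neighbors' returns synonyms.
predict = lambda toks: (1, 0.9 - 0.4 * toks.count("fine"))          # toy classifier
neighbors = lambda w: {"great": ["fine", "good"]}.get(w, [])
print(greedy_substitution_attack(["the", "movie", "was", "great"], predict, neighbors))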
1905.13132
2950501737
Content-based news recommendation systems need to recommend news articles based on the topics and content of articles without using user specific information. Many news articles describe the occurrence of specific events and named entities including people, places or objects. In this paper, we propose a graph traversal algorithm as well as a novel weighting scheme for cold-start content based news recommendation utilizing these named entities. Seeking to create a higher degree of user-specific relevance, our algorithm computes the shortest distance between named entities, across news articles, over a large knowledge graph. Moreover, we have created a new human annotated data set for evaluating content based news recommendation systems. Experimental results show our method is suitable to tackle the hard cold-start problem and it produces stronger Pearson correlation to human similarity scores than other cold-start methods. Our method is also complementary and a combination with the conventional cold-start recommendation methods may yield significant performance gains. The dataset, CNRec, is available at: this https URL
@cite_3 present a document similarity approach where a document-wise connectivity score is computed based on the number of paths between document annotations. In a follow-up paper, the authors use traditional TF-IDF to select the top entities for each document. In @cite_13 @cite_28 , Passant also computes all paths between two nodes, as well as the number of direct and distinct links between resources in a graph, which are used to determine the similarity of two entities for recommendation.
{ "cite_N": [ "@cite_28", "@cite_13", "@cite_3" ], "mid": [ "1808782381", "2405745513", "2032310999" ], "abstract": [ "This paper describes the theoretical background and the implementation of dbrec, a music recommendation system built on top of DBpedia, offering recommendations for more than 39,000 bands and solo artists. We discuss the various challenges and lessons learnt while building it, providing relevant insights for people developing applications consuming Linked Data. Furthermore, we provide a user-centric evaluation of the system, notably by comparing it to last.fm.", "A frequent topic discussed in the Linked Data community, especially when trying to outreach its values, is \"What can we do with all this data ?\". In this paper, we demonstrate (1) how to measure semantic distance on Linked Data in order to identify relatedness between resources, and (2) how such measures can be used to provide a new kind of self-explanatory recommendations, bringing together Linked Data and Artificial Intelligence principles, and demonstrating how intelligent agents could emerge in the realm of Linked Data.", "Abstract Connectivity and relatedness of Web resources are two concepts that define to what extent different parts are connected or related to one another. Measuring connectivity and relatedness between Web resources is a growing field of research, often the starting point of recommender systems. Although relatedness is liable to subjective interpretations, connectivity is not. Given the Semantic Web's ability of linking Web resources, connectivity can be measured by exploiting the links between entities. Further, these connections can be exploited to uncover relationships between Web resources. In this paper, we apply and expand a relationship assessment methodology from social network theory to measure the connectivity between documents. The connectivity measures are used to identify connected and related Web resources. Our approach is able to expose relations that traditional text-based approaches fail to identify. We validate and assess our proposed approaches through an evaluation on a real world dataset, where results show that the proposed techniques outperform state of the art approaches." ] }
1905.13132
2950501737
Content-based news recommendation systems need to recommend news articles based on the topics and content of articles without using user specific information. Many news articles describe the occurrence of specific events and named entities including people, places or objects. In this paper, we propose a graph traversal algorithm as well as a novel weighting scheme for cold-start content based news recommendation utilizing these named entities. Seeking to create a higher degree of user-specific relevance, our algorithm computes the shortest distance between named entities, across news articles, over a large knowledge graph. Moreover, we have created a new human annotated data set for evaluating content based news recommendation systems. Experimental results show our method is suitable to tackle the hard cold-start problem and it produces stronger Pearson correlation to human similarity scores than other cold-start methods. Our method is also complementary and a combination with the conventional cold-start recommendation methods may yield significant performance gains. The dataset, CNRec, is available at: this https URL
Zhu et al. @cite_34 propose to use WordNet @cite_25 and DBpedia @cite_30 to determine the semantic similarity between concepts. They attain state-of-the-art performance using a weighted shortest-distance metric based on the least common subsumer measure extracted from graphs. This method measures similarity between pairs of nodes in a graph. In comparison, our work compares news articles by groups of nodes, i.e., sub-graphs.
{ "cite_N": [ "@cite_30", "@cite_34", "@cite_25" ], "mid": [ "102708294", "2523199059", "2081580037" ], "abstract": [ "DBpedia is a community effort to extract structured information from Wikipedia and to make this information available on the Web. DBpedia allows you to ask sophisticated queries against datasets derived from Wikipedia and to link other datasets on the Web to Wikipedia data. We describe the extraction of the DBpedia datasets, and how the resulting information is published on the Web for human-andmachine-consumption. We describe some emerging applications from the DBpedia community and show how website authors can facilitate DBpedia content within their sites. Finally, we present the current status of interlinking DBpedia with other open datasets on the Web and outline how DBpedia could serve as a nucleus for an emerging Web of open data.", "This paper presents a method for measuring the semantic similarity between concepts in Knowledge Graphs (KGs) such as WordNet and DBpedia. Previous work on semantic similarity methods have focused on either the structure of the semantic network between concepts (e.g., path length and depth), or only on the Information Content (IC) of concepts. We propose a semantic similarity method, namely wpath, to combine these two approaches, using IC to weight the shortest path length between concepts. Conventional corpus-based IC is computed from the distributions of concepts over textual corpus, which is required to prepare a domain corpus containing annotated concepts and has high computational cost. As instances are already extracted from textual corpus and annotated by concepts in KGs, graph-based IC is proposed to compute IC based on the distributions of concepts over instances. Through experiments performed on well known word similarity datasets, we show that the wpath semantic similarity method has produced a statistically significant improvement over other semantic similarity methods. Moreover, in a real category classification evaluation, the wpath method has shown the best performance in terms of accuracy and F score.", "Because meaningful sentences are composed of meaningful words, any system that hopes to process natural languages as people do must have information about words and their meanings. This information is traditionally provided through dictionaries, and machine-readable dictionaries are now widely available. But dictionary entries evolved for the convenience of human readers, not for machines. WordNet 1 provides a more effective combination of traditional lexicographic information and modern computing. WordNet is an online lexical database designed for use under program control. English nouns, verbs, adjectives, and adverbs are organized into sets of synonyms, each representing a lexicalized concept. Semantic relations link the synonym sets [4]." ] }
1905.13132
2950501737
Content-based news recommendation systems need to recommend news articles based on the topics and content of articles without using user specific information. Many news articles describe the occurrence of specific events and named entities including people, places or objects. In this paper, we propose a graph traversal algorithm as well as a novel weighting scheme for cold-start content based news recommendation utilizing these named entities. Seeking to create a higher degree of user-specific relevance, our algorithm computes the shortest distance between named entities, across news articles, over a large knowledge graph. Moreover, we have created a new human annotated data set for evaluating content based news recommendation systems. Experimental results show our method is suitable to tackle the hard cold-start problem and it produces stronger Pearson correlation to human similarity scores than other cold-start methods. Our method is also complementary and a combination with the conventional cold-start recommendation methods may yield significant performance gains. The dataset, CNRec, is available at: this https URL
In @cite_4 , similarity is measured based on entity linking and an analysis of entity neighborhoods in DBpedia, where information content is used to weight edges when computing the single-source shortest path (SSSP) between all pairs of entities in documents. In this work, we use a much larger knowledge graph, i.e., Freebase @cite_17 , and compute the average minimum symmetric distance across all entity pairs between two articles.
{ "cite_N": [ "@cite_4", "@cite_17" ], "mid": [ "2145769341", "2094728533" ], "abstract": [ "We propose a graph-based semantic model for representing document content. Our method relies on the use of a semantic network, namely the DBpedia knowledge base, for acquiring fine-grained information about entities and their semantic relations, thus resulting in a knowledge-rich document model. We demonstrate the benefits of these semantic representations in two tasks: entity ranking and computing document semantic similarity. To this end, we couple DBpedia's structure with an information-theoretic measure of concept association, based on its explicit semantic relations, and compute semantic similarity using a Graph Edit Distance based measure, which finds the optimal matching between the documents' entities using the Hungarian method. Experimental results show that our general model outperforms baselines built on top of traditional methods, and achieves a performance close to that of highly specialized methods that have been tuned to these specific tasks.", "Freebase is a practical, scalable tuple database used to structure general human knowledge. The data in Freebase is collaboratively created, structured, and maintained. Freebase currently contains more than 125,000,000 tuples, more than 4000 types, and more than 7000 properties. Public read write access to Freebase is allowed through an HTTP-based graph-query API using the Metaweb Query Language (MQL) as a data query and manipulation language. MQL provides an easy-to-use object-oriented interface to the tuple data in Freebase and is designed to facilitate the creation of collaborative, Web-based data-oriented applications." ] }
1905.13132
2950501737
Content-based news recommendation systems need to recommend news articles based on the topics and content of articles without using user specific information. Many news articles describe the occurrence of specific events and named entities including people, places or objects. In this paper, we propose a graph traversal algorithm as well as a novel weighting scheme for cold-start content based news recommendation utilizing these named entities. Seeking to create a higher degree of user-specific relevance, our algorithm computes the shortest distance between named entities, across news articles, over a large knowledge graph. Moreover, we have created a new human annotated data set for evaluating content based news recommendation systems. Experimental results show our method is suitable to tackle the hard cold-start problem and it produces stronger Pearson correlation to human similarity scores than other cold-start methods. Our method is also complementary and a combination with the conventional cold-start recommendation methods may yield significant performance gains. The dataset, CNRec, is available at: this https URL
Bayesian networks are a popular approach for modeling user interests in news recommendation @cite_19 , @cite_9 . Context trees for news recommendation are proposed in @cite_5 . LDA is used in @cite_33 to represent topic distributions @cite_32 . @cite_21 proposes locality sensitive hashing (LSH) and @cite_33 uses MinHash , i.e., a probabilistic clustering method. MinHash has also been combined with probabilistic latent semantic indexing (pLSI) in @cite_21 .
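For concreteness, the MinHash idea mentioned above can be sketched as follows; this is a toy illustration only (the hash family, the signature length and the clustering step used in the cited systems are not reproduced, and all names and values are illustrative assumptions).

    import random

    def minhash_signature(feature_set, num_hashes=128, seed=0):
        # One signature entry per hash function: the minimum hash value over the set.
        rng = random.Random(seed)
        salts = [rng.getrandbits(32) for _ in range(num_hashes)]
        return [min(hash((salt, f)) for f in feature_set) for salt in salts]

    def estimated_jaccard(sig_a, sig_b):
        # The fraction of agreeing signature entries estimates the Jaccard similarity.
        matches = sum(1 for a, b in zip(sig_a, sig_b) if a == b)
        return matches / len(sig_a)

    # Two small sets of article features (e.g., words or entity ids), same seed for both.
    doc_a = {"election", "senate", "vote", "policy"}
    doc_b = {"election", "senate", "debate", "policy"}
    print(estimated_jaccard(minhash_signature(doc_a), minhash_signature(doc_b)))

Items whose signatures agree on many entries are likely to share many features, which is what makes MinHash usable as a probabilistic clustering primitive.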
{ "cite_N": [ "@cite_33", "@cite_9", "@cite_21", "@cite_32", "@cite_19", "@cite_5" ], "mid": [ "1988506514", "2377164795", "2123427850", "1482459167", "1964482960", "2154908680" ], "abstract": [ "Online news articles, as a new format of press releases, have sprung up on the Internet. With its convenience and recency, more and more people prefer to read news online instead of reading the paper-format press releases. However, a gigantic amount of news events might be released at a rate of hundreds, even thousands per hour. A challenging problem is how to efficiently select specific news articles from a large corpus of newly-published press releases to recommend to individual readers, where the selected news items should match the reader's reading preference as much as possible. This issue refers to personalized news recommendation. Recently, personalized news recommendation has become a promising research direction as the Internet provides fast access to real-time information from multiple sources around the world. Existing personalized news recommendation systems strive to adapt their services to individual users by virtue of both user and news content information. A variety of techniques have been proposed to tackle personalized news recommendation, including content-based, collaborative filtering systems and hybrid versions of these two. In this paper, we provide a comprehensive investigation of existing personalized news recommenders. We discuss several essential issues underlying the problem of personalized news recommendation, and explore possible solutions for performance improvement. Further, we provide an empirical study on a collection of news articles obtained from various news websites, and evaluate the effect of different factors for personalized news recommendation. We hope our discussion and exploration would provide insights for researchers who are interested in personalized news recommendation.", "", "Several approaches to collaborative filtering have been studied but seldom have studies been reported for large (several millionusers and items) and dynamic (the underlying item set is continually changing) settings. In this paper we describe our approach to collaborative filtering for generating personalized recommendations for users of Google News. We generate recommendations using three approaches: collaborative filtering using MinHash clustering, Probabilistic Latent Semantic Indexing (PLSI), and covisitation counts. We combine recommendations from different algorithms using a linear model. Our approach is content agnostic and consequently domain independent, making it easily adaptable for other applications and languages with minimal effort. This paper will describe our algorithms and system setup in detail, and report results of running the recommendations engine on Google News.", "", "Recommendation Systems have become an important research area in mobile computing. Although various recommendation systems have been developed to help users to deal with information overload, few systems focus on proactive information recommendation. This paper presents a news recommender system that proactively pushes just-in-time personalized news articles to mobile users based on user’s contextual information as well as news content. User’s information needs are estimated based on Bayesian network technique. An Analytic Hierarchy Process (AHP) Model, which supports both Content-based filtering and Collaborative filtering, is developed to rate the relevance of news articles. 
The weight of contexts (criteria) is automatically adjusted via individual-based and or group-based (group decision making) assignment. The experiments show that the system can push relevant news to mobile users.", "The proliferation of online news creates a need for filtering interesting articles. Compared to other products, however, recommending news has specific challenges: news preferences are subject to trends, users do not want to see multiple articles with similar content, and frequently we have insufficient information to profile the reader. In this paper, we introduce a class of news recommendation systems based on context trees. They can provide high-quality news recommendations to anonymous visitors based on present browsing behaviour. Using an unbiased testing methodology, we show that they make accurate and novel recommendations, and that they are sufficiently flexible for the challenges of news recommendation." ] }
1905.13132
2950501737
Content-based news recommendation systems need to recommend news articles based on the topics and content of articles without using user specific information. Many news articles describe the occurrence of specific events and named entities including people, places or objects. In this paper, we propose a graph traversal algorithm as well as a novel weighting scheme for cold-start content based news recommendation utilizing these named entities. Seeking to create a higher degree of user-specific relevance, our algorithm computes the shortest distance between named entities, across news articles, over a large knowledge graph. Moreover, we have created a new human annotated data set for evaluating content based news recommendation systems. Experimental results show our method is suitable to tackle the hard cold-start problem and it produces stronger Pearson correlation to human similarity scores than other cold-start methods. Our method is also complementary and a combination with the conventional cold-start recommendation methods may yield significant performance gains. The dataset, CNRec, is available at: this https URL
In @cite_31 , a pre-trained named entity recognition (NER) model is applied to each news article. Based on the entities identified in the articles, the authors determine a user's long-term preferences for collaborative filtering. In this paper, rather than building a user profile, we also leverage named entity information, but to improve content-based recommendation rather than collaborative filtering. @cite_33 , @cite_31 and @cite_32 survey further natural language processing techniques used for news recommendation.
{ "cite_N": [ "@cite_31", "@cite_32", "@cite_33" ], "mid": [ "1786758439", "1482459167", "1988506514" ], "abstract": [ "Mobile news recommender systems help users retrieve news that is relevant in their particular context and can be presented in ways that require minimal user interaction. In spite of the availability of contextual information about mobile users, though, current mobile news applications employ rather simple strategies for news recommendation. Our multi-perspective approach unifies temporal, locational, and preferential information to provide a more fine-grained recommendation strategy. This demo paper presents the implementation of our solution to efficiently recommend specific news articles from a large corpus of newly-published press releases in a way that closely matches a reader's reading preferences.", "", "Online news articles, as a new format of press releases, have sprung up on the Internet. With its convenience and recency, more and more people prefer to read news online instead of reading the paper-format press releases. However, a gigantic amount of news events might be released at a rate of hundreds, even thousands per hour. A challenging problem is how to efficiently select specific news articles from a large corpus of newly-published press releases to recommend to individual readers, where the selected news items should match the reader's reading preference as much as possible. This issue refers to personalized news recommendation. Recently, personalized news recommendation has become a promising research direction as the Internet provides fast access to real-time information from multiple sources around the world. Existing personalized news recommendation systems strive to adapt their services to individual users by virtue of both user and news content information. A variety of techniques have been proposed to tackle personalized news recommendation, including content-based, collaborative filtering systems and hybrid versions of these two. In this paper, we provide a comprehensive investigation of existing personalized news recommenders. We discuss several essential issues underlying the problem of personalized news recommendation, and explore possible solutions for performance improvement. Further, we provide an empirical study on a collection of news articles obtained from various news websites, and evaluate the effect of different factors for personalized news recommendation. We hope our discussion and exploration would provide insights for researchers who are interested in personalized news recommendation." ] }
1905.13066
2947609158
We propose a novel feed-forward network for video inpainting. We use a set of sampled video frames as the reference to take visible contents to fill the hole of a target frame. Our video inpainting network consists of two stages. The first stage is an alignment module that uses computed homographies between the reference frames and the target frame. The visible patches are then aggregated based on the frame similarity to fill in the target holes roughly. The second stage is a non-local attention module that matches the generated patches with known reference patches (in space and time) to refine the previous global alignment stage. Both stages consist of large spatial-temporal window size for the reference and thus enable modeling long-range correlations between distant information and the hole regions. Therefore, even challenging scenes with large or slowly moving holes can be handled, which have been hardly modeled by existing flow-based approach. Our network is also designed with a recurrent propagation stream to encourage temporal consistency in video results. Experiments on video object removal demonstrate that our method inpaints the holes with globally and locally coherent contents.
Early work on image inpainting broadly falls into either diffusion-based @cite_12 @cite_26 @cite_13 or patch-based methods @cite_0 @cite_23 @cite_19 @cite_2 . The former propagates texture from the hole boundaries towards the hole center; it works well for small holes but suffers from artifacts and noisy results on large holes. The latter matches and copies nearest-neighbor background patches, and is widely deployed in practical applications.
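A minimal numpy sketch of the patch-based idea follows: for a missing pixel, the known region is scanned for the patch whose visible surroundings best match those of the hole, and its center value is copied. Practical systems (e.g., PatchMatch-based ones) use far more efficient search and blending; the patch size, the border handling and the function name here are illustrative assumptions.

    import numpy as np

    def fill_pixel(image, mask, y, x, half=4):
        # image: (H, W) float array; mask: (H, W) bool array, True where pixels are missing.
        # Assumes (y, x) is at least `half` pixels away from the image border.
        h, w = image.shape
        target = image[y - half:y + half + 1, x - half:x + half + 1]
        valid = ~mask[y - half:y + half + 1, x - half:x + half + 1]
        best_val, best_cost = image[y, x], np.inf  # fallback: keep the current value
        for cy in range(half, h - half):
            for cx in range(half, w - half):
                if mask[cy - half:cy + half + 1, cx - half:cx + half + 1].any():
                    continue  # candidate patches must lie entirely in the known region
                cand = image[cy - half:cy + half + 1, cx - half:cx + half + 1]
                cost = np.sum((cand[valid] - target[valid]) ** 2)  # SSD on visible pixels only
                if cost < best_cost:
                    best_cost, best_val = cost, cand[half, half]
        return best_val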
{ "cite_N": [ "@cite_26", "@cite_0", "@cite_19", "@cite_23", "@cite_2", "@cite_13", "@cite_12" ], "mid": [ "", "2156235915", "1999360130", "1975049209", "2115273023", "2169575425", "2100415658" ], "abstract": [ "", "An algorithm for the simultaneous filling-in of texture and structure in regions of missing image information is presented. The basic idea is to first decompose the image into the sum of two functions with different basic characteristics, and then reconstruct each one of these functions separately with structure and texture filling-in algorithms. The first function used in the decomposition is of bounded variation, representing the underlying image structure, while the second function captures the texture and possible noise. The region of missing information in the bounded variation image is reconstructed using image inpainting algorithms, while the same region in the texture image is filled-in with texture synthesis techniques. The original image is then reconstructed adding back these two sub-images. The novel contribution of the paper is then in the combination of these three previously developed components: image decomposition with inpainting and texture synthesis, which permits the simultaneous use of filling-in algorithms that are suited for different image characteristics. Examples on real images show the advantages of this proposed approach.", "We present a simple image-based method of generating novel visual appearance in which a new image is synthesized by stitching together small patches of existing images. We call this process image quilting. First, we use quilting as a fast and very simple texture synthesis algorithm which produces surprisingly good results for a wide range of textures. Second, we extend the algorithm to perform texture transfer — rendering an object with a texture taken from a different object. More generally, we demonstrate how an image can be re-rendered in the style of a different image. The method works directly on the images and does not require 3D information.", "Current methods for combining two different images produce visible artifacts when the sources have very different textures and structures. We present a new method for synthesizing a transition region between two source images, such that inconsistent color, texture, and structural properties all change gradually from one source to the other. We call this process image melding. Our method builds upon a patch-based optimization foundation with three key generalizations: First, we enrich the patch search space with additional geometric and photometric transformations. Second, we integrate image gradients into the patch representation and replace the usual color averaging with a screened Poisson equation solver. And third, we propose a new energy based on mixed L2 L0 norms for colors and gradients that produces a gradual transition between sources without sacrificing texture sharpness. Together, all three generalizations enable patch-based solutions to a broad class of image melding problems involving inconsistent sources: object cloning, stitching challenging panoramas, hole filling from multiple photos, and image harmonization. In several cases, our unified method outperforms previous state-of-the-art methods specifically designed for those applications.", "We propose a principled approach to summarization of visual data (images or video) based on optimization of a well-defined similarity measure. 
The problem we consider is re-targeting (or summarization) of image video data into smaller sizes. A good ldquovisual summaryrdquo should satisfy two properties: (1) it should contain as much as possible visual information from the input data; (2) it should introduce as few as possible new visual artifacts that were not in the input data (i.e., preserve visual coherence). We propose a bi-directional similarity measure which quantitatively captures these two requirements: Two signals S and T are considered visually similar if all patches of S (at multiple scales) are contained in T, and vice versa. The problem of summarization re-targeting is posed as an optimization problem of this bi-directional similarity measure. We show summarization results for image and video data. We further show that the same approach can be used to address a variety of other problems, including automatic cropping, completion and synthesis of visual data, image collage, object removal, photo reshuffling and more.", "Inpainting is the problem of filling-in holes in images. Considerable progress has been made by techniques that use the immediate boundary of the hole and some prior information on images to solve this problem. These algorithms successfully solve the local inpainting problem but they must, by definition, give the same completion to any two holes that have the same boundary, even when the rest of the image is vastly different. We address a different, more global inpainting problem. How can we use the rest of the image in order to learn how to inpaint? We approach this problem from the context of statistical learning. Given a training image we build an exponential family distribution over images that is based on the histograms of local features. We then use this image specific distribution to inpaint the hole by finding the most probable image given the boundary and the distribution. The optimization is done using loopy belief propagation. We show that our method can successfully complete holes while taking into account the specific image statistics. In particular it can give vastly different completions even when the local neighborhoods are identical.", "A variational approach for filling-in regions of missing data in digital images is introduced. The approach is based on joint interpolation of the image gray levels and gradient isophotes directions, smoothly extending in an automatic fashion the isophote lines into the holes of missing data. This interpolation is computed by solving the variational problem via its gradient descent flow, which leads to a set of coupled second order partial differential equations, one for the gray-levels and one for the gradient orientations. The process underlying this approach can be considered as an interpretation of the Gestaltist's principle of good continuation. No limitations are imposed on the topology of the holes, and all regions of missing data can be simultaneously processed, even if they are surrounded by completely different structures. Applications of this technique include the restoration of old photographs and removal of superimposed text like dates, subtitles, or publicity. Examples of these applications are given. We conclude the paper with a number of theoretical results on the proposed variational approach and its corresponding gradient descent flow." ] }
1905.13066
2947609158
We propose a novel feed-forward network for video inpainting. We use a set of sampled video frames as the reference to take visible contents to fill the hole of a target frame. Our video inpainting network consists of two stages. The first stage is an alignment module that uses computed homographies between the reference frames and the target frame. The visible patches are then aggregated based on the frame similarity to fill in the target holes roughly. The second stage is a non-local attention module that matches the generated patches with known reference patches (in space and time) to refine the previous global alignment stage. Both stages consist of large spatial-temporal window size for the reference and thus enable modeling long-range correlations between distant information and the hole regions. Therefore, even challenging scenes with large or slowly moving holes can be handled, which have been hardly modeled by existing flow-based approach. Our network is also designed with a recurrent propagation stream to encourage temporal consistency in video results. Experiments on video object removal demonstrate that our method inpaints the holes with globally and locally coherent contents.
For videos, Granados et al. @cite_7 and Newson et al. @cite_29 proposed to align the frames in addition to using optical flow or 3D PatchMatch search. Huang et al. @cite_21 jointly optimize global flow and colors throughout a video for long-term temporal consistency. As mentioned earlier, these methods are computationally heavy, prone to flow errors, and unable to capture high-level semantics.
{ "cite_N": [ "@cite_29", "@cite_21", "@cite_7" ], "mid": [ "2069237980", "2551763541", "1487937094" ], "abstract": [ "We propose an automatic video inpainting algorithm which relies on the optimization of a global, patch-based functional. Our algorithm is able to deal with a variety of challenging situations which naturally arise in video inpainting, such as the correct reconstruction of dynamic textures, multiple moving objects, and moving background. Furthermore, we achieve this in an order of magnitude less execution time with respect to the state-of-the-art. We are also able to achieve good quality results on high-definition videos. Finally, we provide specific algorithmic details to make implementation of our algorithm as easy as possible. The resulting algorithm requires no segmentation or manual input other than the definition of the inpainting mask and can deal with a wider variety of situations than is handled by previous work.", "We present an automatic video completion algorithm that synthesizes missing regions in videos in a temporally coherent fashion. Our algorithm can handle dynamic scenes captured using a moving camera. State-of-the-art approaches have difficulties handling such videos because viewpoint changes cause image-space motion vectors in the missing and known regions to be inconsistent. We address this problem by jointly estimating optical flow and color in the missing regions. Using pixel-wise forward backward flow fields enables us to synthesize temporally coherent colors. We formulate the problem as a non-parametric patch-based optimization. We demonstrate our technique on numerous challenging videos.", "We propose a method for removing marked dynamic objects from videos captured with a free-moving camera, so long as the objects occlude parts of the scene with a static background. Our approach takes as input a video, a mask marking the object to be removed, and a mask marking the dynamic objects to remain in the scene. To inpaint a frame, we align other candidate frames in which parts of the missing region are visible. Among these candidates, a single source is chosen to fill each pixel so that the final arrangement is color-consistent. Intensity differences between sources are smoothed using gradient domain fusion. Our frame alignment process assumes that the scene can be approximated using piecewise planar geometry: A set of homographies is estimated for each frame pair, and one each is selected for aligning pixels such that the color-discrepancy is minimized and the epipolar constraints are maintained. We provide experimental validation with several real-world video sequences to demonstrate that, unlike in previous work, inpainting videos shot with free-moving cameras does not necessarily require estimation of absolute camera positions and per-frame per-pixel depth maps." ] }
1905.13066
2947609158
We propose a novel feed-forward network for video inpainting. We use a set of sampled video frames as the reference to take visible contents to fill the hole of a target frame. Our video inpainting network consists of two stages. The first stage is an alignment module that uses computed homographies between the reference frames and the target frame. The visible patches are then aggregated based on the frame similarity to fill in the target holes roughly. The second stage is a non-local attention module that matches the generated patches with known reference patches (in space and time) to refine the previous global alignment stage. Both stages consist of large spatial-temporal window size for the reference and thus enable modeling long-range correlations between distant information and the hole regions. Therefore, even challenging scenes with large or slowly moving holes can be handled, which have been hardly modeled by existing flow-based approach. Our network is also designed with a recurrent propagation stream to encourage temporal consistency in video results. Experiments on video object removal demonstrate that our method inpaints the holes with globally and locally coherent contents.
Deep learning based methods have achieved great success on the image inpainting task @cite_24 @cite_27 @cite_28 @cite_17 @cite_4 . These works use convolutional neural networks together with generative adversarial networks @cite_24 , global and local discriminators to improve spatial coherency @cite_27 , a coarse-to-fine model with contextual attention @cite_28 , and partial convolution @cite_17 or gated convolution @cite_4 to handle free-form masks. However, they do not consider consistency between frames when applied to videos.
{ "cite_N": [ "@cite_4", "@cite_28", "@cite_24", "@cite_27", "@cite_17" ], "mid": [ "2807633959", "2963540914", "2963420272", "2738588019", "2798365772" ], "abstract": [ "We present a novel deep learning based image inpainting system to complete images with free-form masks and inputs. The system is based on gated convolutions learned from millions of images without additional labelling efforts. The proposed gated convolution solves the issue of vanilla convolution that treats all input pixels as valid ones, generalizes partial convolution by providing a learnable dynamic feature selection mechanism for each channel at each spatial location across all layers. Moreover, as free-form masks may appear anywhere in images with any shapes, global and local GANs designed for a single rectangular mask are not suitable. To this end, we also present a novel GAN loss, named SN-PatchGAN, by applying spectral-normalized discriminators on dense image patches. It is simple in formulation, fast and stable in training. Results on automatic image inpainting and user-guided extension demonstrate that our system generates higher-quality and more flexible results than previous methods. We show that our system helps users quickly remove distracting objects, modify image layouts, clear watermarks, edit faces and interactively create novel objects in images. Furthermore, visualization of learned feature representations reveals the effectiveness of gated convolution and provides an interpretation of how the proposed neural network fills in missing regions. More high-resolution results and video materials are available at this http URL", "Recent deep learning based approaches have shown promising results for the challenging task of inpainting large missing regions in an image. These methods can generate visually plausible image structures and textures, but often create distorted structures or blurry textures inconsistent with surrounding areas. This is mainly due to ineffectiveness of convolutional neural networks in explicitly borrowing or copying information from distant spatial locations. On the other hand, traditional texture and patch synthesis approaches are particularly suitable when it needs to borrow textures from the surrounding regions. Motivated by these observations, we propose a new deep generative model-based approach which can not only synthesize novel image structures but also explicitly utilize surrounding image features as references during network training to make better predictions. The model is a feedforward, fully convolutional neural network which can process images with multiple holes at arbitrary locations and with variable sizes during the test time. Experiments on multiple datasets including faces (CelebA, CelebA-HQ), textures (DTD) and natural images (ImageNet, Places2) demonstrate that our proposed approach generates higher-quality inpainting results than existing ones. Code, demo and models are available at: https: github.com JiahuiYu generative_inpainting.", "We present an unsupervised visual feature learning algorithm driven by context-based pixel prediction. By analogy with auto-encoders, we propose Context Encoders – a convolutional neural network trained to generate the contents of an arbitrary image region conditioned on its surroundings. In order to succeed at this task, context encoders need to both understand the content of the entire image, as well as produce a plausible hypothesis for the missing part(s). 
When training context encoders, we have experimented with both a standard pixel-wise reconstruction loss, as well as a reconstruction plus an adversarial loss. The latter produces much sharper results because it can better handle multiple modes in the output. We found that a context encoder learns a representation that captures not just appearance but also the semantics of visual structures. We quantitatively demonstrate the effectiveness of our learned features for CNN pre-training on classification, detection, and segmentation tasks. Furthermore, context encoders can be used for semantic inpainting tasks, either stand-alone or as initialization for non-parametric methods.", "We present a novel approach for image completion that results in images that are both locally and globally consistent. With a fully-convolutional neural network, we can complete images of arbitrary resolutions by filling-in missing regions of any shape. To train this image completion network to be consistent, we use global and local context discriminators that are trained to distinguish real images from completed ones. The global discriminator looks at the entire image to assess if it is coherent as a whole, while the local discriminator looks only at a small area centered at the completed region to ensure the local consistency of the generated patches. The image completion network is then trained to fool the both context discriminator networks, which requires it to generate images that are indistinguishable from real ones with regard to overall consistency as well as in details. We show that our approach can be used to complete a wide variety of scenes. Furthermore, in contrast with the patch-based approaches such as PatchMatch, our approach can generate fragments that do not appear elsewhere in the image, which allows us to naturally complete the images of objects with familiar and highly specific structures, such as faces.", "Existing deep learning based image inpainting methods use a standard convolutional network over the corrupted image, using convolutional filter responses conditioned on both valid pixels as well as the substitute values in the masked holes (typically the mean value). This often leads to artifacts such as color discrepancy and blurriness. Post-processing is usually used to reduce such artifacts, but are expensive and may fail. We propose the use of partial convolutions, where the convolution is masked and renormalized to be conditioned on only valid pixels. We further include a mechanism to automatically generate an updated mask for the next layer as part of the forward pass. Our model outperforms other methods for irregular masks. We show qualitative and quantitative comparisons with other methods to validate our approach." ] }
1905.12921
2947983671
We show that the classification performance of Graph Convolutional Networks is related to the alignment between features, graph and ground truth, which we quantify using a subspace alignment measure corresponding to the Frobenius norm of the matrix of pairwise chordal distances between three subspaces associated with features, graph and ground truth. The proposed measure is based on the principal angles between subspaces and has both spectral and geometrical interpretations. We showcase the relationship between the subspace alignment measure and the classification performance through the study of limiting cases of Graph Convolutional Networks as well as systematic randomizations of both features and graph structure applied to a constructive example and several examples of citation networks of different origin. The analysis also reveals the relative importance of the graph and features for classification purposes.
The first attempt to generalize neural networks to graphs can be traced back to Gori (2005) @cite_24 , who proposed a scheme combining recurrent neural networks (RNNs) and random walk models. Their method requires the repeated application of contraction maps as propagation functions until the node representations reach a stable fixed point. This method, however, did not attract much attention when it was proposed. With the current surge of interest in deep learning, this work has been reappraised in a new and modern form: Ref. @cite_22 introduced modern techniques for RNN training based on the original graph neural network framework, whereas Ref. @cite_9 proposed a convolution-like propagation rule on graphs and methods for graph-level classification.
{ "cite_N": [ "@cite_24", "@cite_9", "@cite_22" ], "mid": [ "1501856433", "2964338167", "2244807774" ], "abstract": [ "In several applications the information is naturally represented by graphs. Traditional approaches cope with graphical data structures using a preprocessing phase which transforms the graphs into a set of flat vectors. However, in this way, important topological information may be lost and the achieved results may heavily depend on the preprocessing stage. This paper presents a new neural model, called graph neural network (GNN), capable of directly processing graphs. GNNs extends recursive neural networks and can be applied on most of the practically useful kinds of graphs, including directed, undirected, labelled and cyclic graphs. A learning algorithm for GNNs is proposed and some experiments are discussed which assess the properties of the model.", "Many tasks in AI require the collaboration of multiple agents. Typically, the communication protocol between agents is manually specified and not altered during training. In this paper we explore a simple neural model, called CommNet, that uses continuous communication for fully cooperative tasks. The model consists of multiple agents and the communication between them is learned alongside their policy. We apply this model to a diverse set of tasks, demonstrating the ability of the agents to learn to communicate amongst themselves, yielding improved performance over non-communicative agents and baselines. In some cases, it is possible to interpret the language devised by the agents, revealing simple but effective strategies for solving the task at hand.", "Graph-structured data appears frequently in domains including chemistry, natural language semantics, social networks, and knowledge bases. In this work, we study feature learning techniques for graph-structured inputs. Our starting point is previous work on Graph Neural Networks (, 2009), which we modify to use gated recurrent units and modern optimization techniques and then extend to output sequences. The result is a flexible and broadly useful class of neural network models that has favorable inductive biases relative to purely sequence-based models (e.g., LSTMs) when the problem is graph-structured. We demonstrate the capabilities on some simple AI (bAbI) and graph algorithm learning tasks. We then show it achieves state-of-the-art performance on a problem from program verification, in which subgraphs need to be matched to abstract data structures." ] }
1905.12921
2947983671
We show that the classification performance of Graph Convolutional Networks is related to the alignment between features, graph and ground truth, which we quantify using a subspace alignment measure corresponding to the Frobenius norm of the matrix of pairwise chordal distances between three subspaces associated with features, graph and ground truth. The proposed measure is based on the principal angles between subspaces and has both spectral and geometrical interpretations. We showcase the relationship between the subspace alignment measure and the classification performance through the study of limiting cases of Graph Convolutional Networks as well as systematic randomizations of both features and graph structure applied to a constructive example and several examples of citation networks of different origin. The analysis also reveals the relative importance of the graph and features for classification purposes.
We now briefly present the key insights introduced by Bruna @cite_26 to extend CNNs to the non-Euclidean domain. For an extensive recent review, the reader should refer to @cite_18 .
{ "cite_N": [ "@cite_18", "@cite_26" ], "mid": [ "2558748708", "1662382123" ], "abstract": [ "Many scientific fields study data with an underlying structure that is non-Euclidean. Some examples include social networks in computational social sciences, sensor networks in communications, functional networks in brain imaging, regulatory networks in genetics, and meshed surfaces in computer graphics. In many applications, such geometric data are large and complex (in the case of social networks, on the scale of billions) and are natural targets for machine-learning techniques. In particular, we would like to use deep neural networks, which have recently proven to be powerful tools for a broad range of problems from computer vision, natural-language processing, and audio analysis. However, these tools have been most successful on data with an underlying Euclidean or grid-like structure and in cases where the invariances of these structures are built into networks used to model them.", "Convolutional Neural Networks are extremely efficient architectures in image and audio recognition tasks, thanks to their ability to exploit the local translational invariance of signal classes over their domain. In this paper we consider possible generalizations of CNNs to signals defined on more general domains without the action of a translation group. In particular, we propose two constructions, one based upon a hierarchical clustering of the domain, and another based on the spectrum of the graph Laplacian. We show through experiments that for low-dimensional graphs it is possible to learn convolutional layers with a number of parameters independent of the input size, resulting in efficient deep architectures." ] }
1905.12921
2947983671
We show that the classification performance of Graph Convolutional Networks is related to the alignment between features, graph and ground truth, which we quantify using a subspace alignment measure corresponding to the Frobenius norm of the matrix of pairwise chordal distances between three subspaces associated with features, graph and ground truth. The proposed measure is based on the principal angles between subspaces and has both spectral and geometrical interpretations. We showcase the relationship between the subspace alignment measure and the classification performance through the study of limiting cases of Graph Convolutional Networks as well as systematic randomizations of both features and graph structure applied to a constructive example and several examples of citation networks of different origin. The analysis also reveals the relative importance of the graph and features for classification purposes.
* Layer-wise propagation rule and multi-layer architecture. Our study uses the multi-layer GCN proposed in @cite_12 . Given the matrix @math with sample features and the (undirected) adjacency matrix @math of the graph @math encoding relational information between the samples, the propagation rule between layers @math and @math (of size @math and @math , respectively) is given by: where @math and @math are matrices of activation in the @math and @math layers, respectively; @math is the threshold activation function for layer @math ; and the weights connecting layers @math and @math are stored in the matrix @math . Note that the input layer contains the feature matrix @math .
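Although the displayed equation is not reproduced in the text above, the layer-wise rule of @cite_12 takes the standard form below (written in LaTeX; the symbols mirror the description, with the normalized adjacency matrix defined in the following passage):

    H^{(l+1)} = \sigma^{(l)}\!\left( \hat{A} \, H^{(l)} \, W^{(l)} \right), \qquad H^{(0)} = X .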
{ "cite_N": [ "@cite_12" ], "mid": [ "2964015378" ], "abstract": [ "We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions. Our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. In a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin." ] }
1905.12921
2947983671
We show that the classification performance of Graph Convolutional Networks is related to the alignment between features, graph and ground truth, which we quantify using a subspace alignment measure corresponding to the Frobenius norm of the matrix of pairwise chordal distances between three subspaces associated with features, graph and ground truth. The proposed measure is based on the principal angles between subspaces and has both spectral and geometrical interpretations. We showcase the relationship between the subspace alignment measure and the classification performance through the study of limiting cases of Graph Convolutional Networks as well as systematic randomizations of both features and graph structure applied to a constructive example and several examples of citation networks of different origin. The analysis also reveals the relative importance of the graph and features for classification purposes.
The graph is encoded in @math , where @math is the adjacency matrix of a graph with added self-loops, @math is the identity matrix, and @math is a diagonal matrix containing the degrees of @math . In the remainder of this work (and to ensure comparability with the results in @cite_12 ), we use @math as the descriptor of the graph @math .
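A short numpy sketch of this graph descriptor is given below, assuming a dense symmetric adjacency matrix and the symmetric normalization D^{-1/2}(A+I)D^{-1/2} used by @cite_12 (the @math placeholders above hide the exact formula, so this normalization is taken as the intended one; the function name is illustrative):

    import numpy as np

    def normalized_adjacency(A):
        # Return D^{-1/2} (A + I) D^{-1/2}, where D is the degree matrix of A + I.
        A_tilde = A + np.eye(A.shape[0])   # add self-loops
        deg = A_tilde.sum(axis=1)          # degrees of the self-looped graph (always >= 1)
        d_inv_sqrt = 1.0 / np.sqrt(deg)
        return A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]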
{ "cite_N": [ "@cite_12" ], "mid": [ "2964015378" ], "abstract": [ "We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions. Our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. In a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin." ] }
1905.12921
2947983671
We show that the classification performance of Graph Convolutional Networks is related to the alignment between features, graph and ground truth, which we quantify using a subspace alignment measure corresponding to the Frobenius norm of the matrix of pairwise chordal distances between three subspaces associated with features, graph and ground truth. The proposed measure is based on the principal angles between subspaces and has both spectral and geometrical interpretations. We showcase the relationship between the subspace alignment measure and the classification performance through the study of limiting cases of Graph Convolutional Networks as well as systematic randomizations of both features and graph structure applied to a constructive example and several examples of citation networks of different origin. The analysis also reveals the relative importance of the graph and features for classification purposes.
Following @cite_12 , we implement a two-layer GCN with the propagation rule above and a different activation function for each layer, i.e., a rectified linear unit for the first layer and a softmax unit for the output layer: where @math is a vector. The model then takes the simple form: where the softmax activation function is applied row-wise and the ReLU is applied element-wise. Note that there is only one hidden layer with @math units. Hence @math maps the input with @math features to the hidden layer, and @math maps these hidden units to the output layer with @math units, corresponding to the number of classes of the ground truth. In this semi-supervised multi-class classification setting, the cross-entropy error over all labeled instances is evaluated as follows: where @math is the set of nodes that have labels. The weights of the neural network ( @math and @math ) are trained using gradient descent to minimize the loss @math . A visual summary of the GCN architecture is shown in Fig. . The reader is referred to @cite_12 for details and an in-depth analysis.
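A compact numpy sketch of the resulting two-layer model and its masked cross-entropy loss is given below (forward pass only; in practice the gradient-descent training described above would be done with an automatic-differentiation framework, and all function names are illustrative):

    import numpy as np

    def relu(M):
        return np.maximum(M, 0.0)

    def softmax_rows(M):
        E = np.exp(M - M.max(axis=1, keepdims=True))  # numerically stable row-wise softmax
        return E / E.sum(axis=1, keepdims=True)

    def gcn_forward(A_hat, X, W0, W1):
        # Z = softmax(A_hat @ relu(A_hat @ X @ W0) @ W1): one row of class probabilities per node.
        H = relu(A_hat @ X @ W0)
        return softmax_rows(A_hat @ H @ W1)

    def masked_cross_entropy(Z, Y, labeled_idx):
        # Cross-entropy over labeled nodes only; Y holds one-hot ground-truth rows.
        logp = np.log(Z[labeled_idx] + 1e-12)
        return -np.sum(Y[labeled_idx] * logp)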
{ "cite_N": [ "@cite_12" ], "mid": [ "2964015378" ], "abstract": [ "We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions. Our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. In a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin." ] }
1905.12966
2947834750
Rankings, representing preferences over a set of candidates, are widely used in many information systems, e.g., group decision making. It is of great importance to evaluate the consensus of the obtained rankings from multiple agents. There is often no ground truth available for a ranking task. An overall measure of the consensus degree enables us to have a clear cognition about the ranking data. Moreover, it could provide a quantitative indicator for consensus comparison between groups and further improvement of a ranking system. In this paper, a novel consensus quantifying approach, without the need for any correlation or distance functions, is proposed based on a concept of q-support patterns of rankings. The q-support patterns represent the commonality embedded in a set of rankings. A method for detecting outliers in a set of rankings is naturally derived from the proposed consensus quantifying approach. Experimental studies are conducted to demonstrate the effectiveness of the proposed approach.
Historically developed by Maurice Kendall in 1938 @cite_29 , Kendall's @math measures the correlation between two rankings by counting the numbers of item pairs ranked in the same order and in opposite orders. Suppose that we consider rankings over candidates @math . A ranking is an ordered list in which items in higher positions are preferred over items in lower positions. Let @math be the position function: @math returns the position of item @math in ranking @math . The Kendall's @math for two rankings @math and @math is computed from these counts of concordant and discordant pairs. This coefficient lies in the range @math , where the value 1 corresponds to the case that the two rankings are in the same order and the value @math indicates that one ranking is the reverse of the other.
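Since the closed-form expression is omitted above, the standard computation of Kendall's @math from concordant and discordant pairs (which we believe is the intended formula) can be sketched as follows, with rankings represented by their position functions; the function names and the no-ties assumption are ours.

    from itertools import combinations

    def kendall_tau(pos_a, pos_b):
        # pos_a, pos_b: dicts mapping each candidate to its position (1 = most preferred).
        # Assumes both rankings are complete and without ties.
        items = list(pos_a)
        concordant = discordant = 0
        for x, y in combinations(items, 2):
            sa = pos_a[x] - pos_a[y]
            sb = pos_b[x] - pos_b[y]
            if sa * sb > 0:
                concordant += 1   # the pair appears in the same order in both rankings
            elif sa * sb < 0:
                discordant += 1   # the pair appears in opposite orders
        n = len(items)
        return (concordant - discordant) / (n * (n - 1) / 2)

    # Identical rankings give 1.0; a fully reversed ranking gives -1.0.
    print(kendall_tau({"a": 1, "b": 2, "c": 3}, {"a": 3, "b": 2, "c": 1}))  # -1.0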
{ "cite_N": [ "@cite_29" ], "mid": [ "1985514943" ], "abstract": [ "1. In psychological work the problem of comparing two different rankings of the same set of individuals may be divided into two types. In the first type the individuals have a given order A which is objectively defined with reference to some quality, and a characteristic question is: if an observer ranks the individuals in an order B, does a comparison of B with A suggest that he possesses a reliable judgment of the quality, or, alternatively, is it probable that B could have arisen by chance? In the second type no objective order is given. Two observers consider the individuals and rank them in orders A and B. The question now is, are these orders sufficiently alike to indicate similarity of taste in the observers, or, on the other hand, are A and B incompatible within assigned limits of probability? An example of the first type occurs in the familiar experiments wherein an observer has to arrange a known set of weights in ascending order of weight; the second type would arise if two observers had to rank a set of musical compositions in order of preference. The measure of rank correlation proposed in this paper is capable of being applied to both problems, which are, in fact, formally very much the same. For purposes of simplicity in the exposition it has, however, been thought convenient to preserve a distinction between theni." ] }
1905.12966
2947834750
Rankings, representing preferences over a set of candidates, are widely used in many information systems, e.g., group decision making. It is of great importance to evaluate the consensus of the obtained rankings from multiple agents. There is often no ground truth available for a ranking task. An overall measure of the consensus degree enables us to have a clear cognition about the ranking data. Moreover, it could provide a quantitative indicator for consensus comparison between groups and further improvement of a ranking system. In this paper, a novel consensus quantifying approach, without the need for any correlation or distance functions, is proposed based on a concept of q-support patterns of rankings. The q-support patterns represent the commonality embedded in a set of rankings. A method for detecting outliers in a set of rankings is naturally derived from the proposed consensus quantifying approach. Experimental studies are conducted to demonstrate the effectiveness of the proposed approach.
These rank correlation functions do not take into account the varying relevance of ranked items at different positions. They are therefore not suitable for evaluating rankings in which items at the top are much more important than those at the bottom @cite_17 . Further studies on weighted rank correlation were carried out extensively on the basis of these two functions @cite_38 @cite_24 @cite_20 @cite_26 @cite_13 @cite_14 @cite_18 . More reasonable variants of rank correlation functions were also proposed in the literature @cite_7 @cite_4 @cite_0 @cite_30 .
{ "cite_N": [ "@cite_30", "@cite_38", "@cite_18", "@cite_26", "@cite_14", "@cite_4", "@cite_7", "@cite_24", "@cite_0", "@cite_13", "@cite_20", "@cite_17" ], "mid": [ "1616993132", "2104822358", "1966835268", "1986926488", "2021581601", "1980940798", "2463069270", "1972827298", "2218551734", "1707848225", "2160266360", "2142280617" ], "abstract": [ "Rank similarity measures provide a method for quantifying differences between search engine results without the need for relevance judgments. For example, the providers of a search service might use such measures to estimate the impact of a proposed algorithmic change across a large number of queries—perhaps millions—identifying those queries where the impact is greatest. In this paper, we propose and validate a family of rank similarity measures, each derived from an associated effectiveness measure. Each member of the family is based on the maximization of effectiveness difference under this associated measure. Computing this maximized effectiveness difference (MED) requires the solution of an optimization problem that varies in difficulty, depending on the associated measure. We present solutions for several standard effectiveness measures, including nDCG, AP, and ERR. Through an experimental validation, we show that MED reveals meaningful differences between retrieval runs. Mathematically, MED is a metric, regardless of the associated measure. Prior work has established a number of other desiderata for rank similarity in the context of search, and we demonstrate that MED satisfies these requirements. Unlike previous proposals, MED allows us to directly translate assumptions about user behavior from any established effectiveness measure to create a corresponding rank similarity measure. In addition, MED cleanly accommodates partial relevance judgments, and if complete relevance information is available, it reduces to a simple difference between effectiveness values.", "Rank correlation statistics are useful for determining whether a there is a correspondence between two measurements, particularly when the measures themselves are of less interest than their relative ordering. Kendall's - in particular has found use in Information Retrieval as a \"meta-evaluation\" measure: it has been used to compare evaluation measures, evaluate system rankings, and evaluate predicted performance. In the meta-evaluation domain, however, correlations between systems confound relationships between measurements, practically guaranteeing a positive and significant estimate of - regardless of any actual correlation between the measurements. We introduce an alternative measure of distance between rankings that corrects this by explicitly accounting for correlations between systems over a sample of topics, and moreover has a probabilistic interpretation for use in a test of statistical significance. We validate our measure with theory, simulated data, and experiment.", "In the field of information retrieval, one is often faced with the problem of computing the correlation between two ranked lists. The most commonly used statistic that quantifies this correlation is Kendall's Τ. Often times, in the information retrieval community, discrepancies among those items having high rankings are more important than those among items having low rankings. The Kendall's Τ statistic, however, does not make such distinctions and equally penalizes errors both at high and low rankings. 
In this paper, we propose a new rank correlation coefficient, AP correlation (Τap), that is based on average precision and has a probabilistic interpretation. We show that the proposed statistic gives more weight to the errors at high rankings and has nice mathematical properties which make it easy to interpret. We further validate the applicability of the statistic using experimental data.", "A weighted Kendall's tau statistic ([tau]w) is proposed to measure weighted correlation. It can place more emphasis on items having low rankings than those have high rankings, or vice versa. The null limiting distribution is derived by the theory of U-statistics. An application, power comparison, and some critical values of [tau]w are presented.", "Ranked lists are encountered in research and daily life and it is often of interest to compare these lists even when they are incomplete or have only some members in common. An example is document rankings returned for the same query by different search engines. A measure of the similarity between incomplete rankings should handle nonconjointness, weight high ranks more heavily than low, and be monotonic with increasing depth of evaluation; but no measure satisfying all these criteria currently exists. In this article, we propose a new measure having these qualities, namely rank-biased overlap (RBO). The RBO measure is based on a simple probabilistic user model. It provides monotonicity by calculating, at a given depth of evaluation, a base score that is non-decreasing with additional evaluation, and a maximum score that is nonincreasing. An extrapolated score can be calculated between these bounds if a point estimate is required. RBO has a parameter which determines the strength of the weighting to top ranks. We extend RBO to handle tied ranks and rankings of different lengths. Finally, we give examples of the use of the measure in comparing the results produced by public search engines and in assessing retrieval systems in the laboratory.", "We propose a new family of distance measures on rankings, derived through an axiomatic approach, that consider the nonuniform relevance of the top and bottom of ordered lists and similarities between candidates. The proposed distance functions include specialized weighted versions of the Kendall τ distance and the Cayley distance, and are suitable for comparing rankings in a number of applications, including information retrieval and rank aggregation. In addition to proposing the distance measures and providing the theoretical underpinnings for their applications, we also analyze algorithmic and computational aspects of weighted distance-based rank aggregation. We present an aggregation method based on approximating weighted distance measures by a generalized version of Spearman's footrule distance as well as a Markov chain method inspired by PageRank, where transition probabilities of the Markov chain reflect the chosen weighted distances.", "Based on the notion of maximal correlation, we introduce a new measure of correlation between two different rankings of the same group of items. Our measure captures various types of correlation detected in previous measures of rank correlation like the Spearman correlation and the Kendall tau correlation. We show that the maximal rank correlation satisfies the data processing and tensorization properties (that make ordinary maximal correlation applicable to problems in information theory). Furthermore, MRC is shown to be intimately related to the FKG inequality. 
Finally, we pose the problem of the complexity of the computation of this new measure. We make partial progress by giving a simple but exponential-time algorithm for it.", "Many situations exist in which n objects are ranked by two or more independent sources, where interest centers primarily on agreement in the top rankings and disagreements on items at the bottom of the rankings are of little or no importance. A problem with Spearman's rho or Kendall's coefficient of concordance in this setting is that they are equally influenced by disagreement on the assignment of rankings at all levels. In this article, a concordance measure is provided that is more sensitive to agreement on the top rankings. The statistics used in this setting are functions of the ordinary correlation coefficient computed on Savage (1956) scores. The asymptotic distributions of these statistics are presented, and a summary of the quantiles of the exact distribution for the two-sample case is provided for n = 3(1)14. The statistic for the two-sample case is shown to provide a locally most powerful rank test for a model given by Hajek and Sidak (1967).", "Measures of rank correlation are commonly used in statistics to capture the degree of concordance between two orderings of the same set of items. Standard measures like Kendall's tau and Spearman's rho coefficient put equal emphasis on each position of a ranking. Yet, motivated by applications in which some of the positions (typically those on the top) are more important than others, a few weighted variants of these measures have been proposed. Most of these generalizations fail to meet desirable formal properties, however. Besides, they are often quite inflexible in the sense of committing to a fixed weighing scheme. In this paper, we propose a weighted rank correlation measure on the basis of fuzzy order relations. Our measure, called scaled gamma, is related to Goodman and Kruskal's gamma rank correlation. It is parametrized by a fuzzy equivalence relation on the rank positions, which in turn is specified conveniently by a so-called scaling function. This approach combines soundness with flexibility: it has a sound formal foundation and allows for weighing rank positions in a flexible way. The usefulness of our class of weighted rank correlation measures is shown by means of experimental studies using both synthetic and real-world ranking data.", "Understanding the correlation between two different scores for the same set of items is a common problem in graph analysis and information retrieval. The most commonly used statistic that quantifies this correlation is Kendall's tau; however, the standard definition fails to capture that discordances between items with high rank are more important than those between items with low rank. Recently, a new measure of correlation based on average precision has been proposed to solve this problem, but like many alternative proposals in the literature it assumes that there are no ties in the scores. This is a major deficiency in a number of contexts, and in particular when comparing centrality scores on large graphs, as the obvious baseline, indegree, has a very large number of ties in social networks and web graphs. We propose to extend Kendall's definition in a natural way to take into account weights in the presence of ties. We prove a number of interesting mathematical properties of our generalization and describe an O(n log n) algorithm for its computation. 
We also validate the usefulness of our weighted measure of correlation using experimental data on social networks and web graphs.", "Spearman's footrule and Kendall's tau are two well established distances between rankings. They, however, fail to take into account concepts crucial to evaluating a result set in information retrieval: element relevance and positional information. That is, changing the rank of a highly-relevant document should result in a higher penalty than changing the rank of an irrelevant document; a similar logic holds for the top versus the bottom of the result ordering. In this work, we extend both of these metrics to those with position and element weights, and show that a variant of the Diaconis-Graham inequality still holds - the generalized two measures remain within a constant factor of each other for all permutations. We continue by extending the element weights into a distance metric between elements. For example, in search evaluation, swapping the order of two nearly duplicate results should result in little penalty, even if these two are highly relevant and appear at the top of the list. We extend the distance measures to this more general case and show that they remain within a constant factor of each other. We conclude by conducting simple experiments on web search data with the proposed measures. Our experiments show that the weighted generalizations are more robust and consistent with each other than their unweighted counter-parts.", "Motivated by several applications, we introduce various distance measures between \"top k lists.\" Some of these distance measures are metrics, while others are not. For each of these latter distance measures, we show that they are \"almost\" a metric in the following two seemingly unrelated aspects: (i) they satisfy a relaxed version of the polygonal (hence, triangle) inequality, and (ii) there is a metric with positive constant multiples that bound our measure above and below. This is not a coincidence---we show that these two notions of almost being a metric are the same. Based on the second notion, we define two distance measures to be equivalent if they are bounded above and below by constant multiples of each other. We thereby identify a large and robust equivalence class of distance measures. Besides the applications to the task of identifying good notions of (dis)similarity between two top k lists, our results imply polynomial-time constant-factor approximation algorithms for the rank aggregation problem with respect to a large class of distance measures." ] }
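The abstracts above all revolve around weighting agreement at the top of two rankings more heavily than agreement at the bottom. As a concrete illustration of that idea only (not an implementation taken from any of the cited papers), the following Python sketch computes the AP correlation τ_AP between two permutations of the same items; the function name and the toy rankings are ours.

```python
import numpy as np

def tau_ap(reference, ranking):
    """AP rank correlation (tau_AP) of `ranking` against `reference`.

    Both arguments are permutations of the same items, best first.
    Disagreements near the top are penalized more than near the bottom.
    """
    ref_pos = {item: r for r, item in enumerate(reference)}
    n = len(ranking)
    precisions = []
    for i in range(1, n):                      # skip the top item
        above = ranking[:i]                    # items ranked above position i
        item = ranking[i]
        # fraction of those items that the reference also places above `item`
        correct = sum(ref_pos[a] < ref_pos[item] for a in above)
        precisions.append(correct / i)
    return 2.0 * np.mean(precisions) - 1.0     # rescale from [0, 1] to [-1, 1]

if __name__ == "__main__":
    ref = ["a", "b", "c", "d", "e"]
    print(tau_ap(ref, ["a", "b", "c", "d", "e"]))   #  1.0 (identical)
    print(tau_ap(ref, ["e", "d", "c", "b", "a"]))   # -1.0 (reversed)
    print(tau_ap(ref, ["b", "a", "c", "d", "e"]))   # 0.5   swap at the top
    print(tau_ap(ref, ["a", "b", "c", "e", "d"]))   # 0.875 swap at the bottom
```

Note how the same adjacent swap costs more at the top of the list than at the bottom, which is exactly the distinction the unweighted Kendall τ does not make.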
1905.12966
2947834750
Rankings, representing preferences over a set of candidates, are widely used in many information systems, e.g., group decision making. It is of great importance to evaluate the consensus of rankings obtained from multiple agents. There is often no ground truth available for a ranking task. An overall measure of the consensus degree gives a clear picture of the ranking data. Moreover, it could provide a quantitative indicator for consensus comparison between groups and for further improvement of a ranking system. In this paper, a novel consensus quantifying approach, without the need for any correlation or distance functions, is proposed based on a concept of q-support patterns of rankings. The q-support patterns represent the commonality embedded in a set of rankings. A method for detecting outliers in a set of rankings is naturally derived from the proposed consensus quantifying approach. Experimental studies are conducted to demonstrate the effectiveness of the proposed approach.
Studies in the literature measure consensus or diversity by making pairwise comparisons of the rankings and aggregating the comparison results. Two key issues for these approaches are therefore the choice of a proper comparison metric and of an aggregation method. A consensus measure was first proposed in @cite_15 with simple axioms including unanimity, anonymity and neutrality. @cite_23 improved on @cite_15 by considering a weighted Kemeny distance. Extended work with more appropriate distance metrics was carried out in @cite_6 @cite_10 @cite_37 @cite_33. In @cite_31, a generalization of @cite_28 was developed with a geometric mean aggregator and the leximax comparison.
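To make the pairwise-comparison-and-aggregation scheme concrete, the following minimal Python sketch measures group consensus as one minus the average normalized Kendall distance over all pairs of rankings. It uses an unweighted distance and a plain mean aggregator purely for illustration; the cited works replace these with weighted Kemeny distances, geometric means or leximax orderings.

```python
from itertools import combinations

def kendall_distance(r1, r2):
    """Normalized pairwise-disagreement (Kendall) distance between two rankings
    over the same set of items."""
    pos1 = {x: i for i, x in enumerate(r1)}
    pos2 = {x: i for i, x in enumerate(r2)}
    pairs = list(combinations(pos1, 2))
    discordant = sum((pos1[a] - pos1[b]) * (pos2[a] - pos2[b]) < 0 for a, b in pairs)
    return discordant / len(pairs)

def group_consensus(rankings):
    """Consensus = 1 - average pairwise distance over all pairs of agents."""
    dists = [kendall_distance(r1, r2) for r1, r2 in combinations(rankings, 2)]
    return 1.0 - sum(dists) / len(dists)

if __name__ == "__main__":
    unanimous = [["a", "b", "c", "d"]] * 3
    divided = [["a", "b", "c", "d"], ["d", "c", "b", "a"], ["b", "a", "d", "c"]]
    print(group_consensus(unanimous))  # 1.0: full agreement
    print(group_consensus(divided))    # lower value: the group disagrees
```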
{ "cite_N": [ "@cite_37", "@cite_31", "@cite_33", "@cite_28", "@cite_6", "@cite_23", "@cite_15", "@cite_10" ], "mid": [ "2396364167", "2336243352", "2077406617", "2086415241", "2265291", "1979580698", "", "89869475" ], "abstract": [ "We introduce a general framework for measuring the degree of diversity in the preferences held by the members of a group. We formalise and investigate three specific approaches within that framework: diversity as the range of distinct views held, diversity as aggregate distance between individual views, and diversity as distance of the group's views to a single compromise view. While similarly attractive from an intuitive point of view, the three approaches display significant differences when analysed using both the axiomatic method and empirical studies.", "This paper surveys approaches to preference diversity measurement. Applying preference diversity axiomatics, a generalization of the Alcalde-Unzu and Vorsatz (2016) criterion, is developed. It is shown that all previously used indices violate this criterion. Two new indices (geometric mean based and leximaxbased)are developed that satisfy a new criterion. Leximax-based orders act as a polarization index and are compared with ’s (2015) polarization index. The paper concludes by formulating a new open question: the preference profile reconstruction conjecture.", "We consider measuring the degree of homogeneity for preference-approval profiles which include the approval information for the alternatives as well as the rankings of them. A distance-based approach is followed to measure the disagreement for any given two preference-approvals. Under the condition that a proper metric is used, we propose a measure of consensus which is robust to some extensions of the ordinal framework. This paper also shows that there exists a limit for increasing the homogeneity level in a group of individuals by simply replicating their preference-approvals.", "In this paper, we axiomatically study how to measure the similarity of preferences in a group of individuals. For simplicity, we refer to this as the cohesiveness. First, we provide axioms that characterize a family of linear and additive measures whose intersection is a partial ordinal criterion similar to first order stochastic dominance. The introduction of some additional properties isolates a one-parameter subfamily. This parameter evaluates the effect on the cohesiveness if one individual changes his ranking on a single pair of objects, as a function of how many of the remaining individuals in the group rank the first object over the second and vice versa. Finally, we characterize the focal measures of this subfamily separately showing that they coincide with measures constructed using two, at first sight, totally different approaches suggested in the literature.", "Motivation. Human beings do not live in isolation and they have to take many decisions collectively. Examples include the election of firm representatives, the decision of where to build a new school, and the task of how to share natural resources. Equally, the satisfaction of a single individual usually depends on the performance of the group; just think of the problem of shirking in team production. To obtain a good collective performance and, as a consequence, a high individual satisfaction, it is important that collective decisions are taken with consensus. 
This is because in many instances, it is not beneficial for the society as a whole if the decision is imposed by a subset of the individuals—even if this subset includes more than half of the collective— as there may be other alternatives that are more accepted by the rest of the members and that increase the overall satisfaction.", "In this paper we analyze the consensus in groups of decision makers that rank alternatives by means of weak orders. We have introduced the class of weighted Kemeny distances on weak orders for taking into account where the disagreements occur, and we have analyzed the properties of the associated consensus measures.", "", "In this chapter we focus our attention in how to measure consensus in groups of agents when they show their preferences over a fixed set of alternatives or candidates by means of weak orders (complete preorders). We have introduced a new class of consensus measures on weak orders based on distances, and we have analyzed some of their properties paying special attention to seven well-known distances." ] }
cmp-lg9405001
2950718651
In many applications of natural language processing it is necessary to determine the likelihood of a given word combination. For example, a speech recognizer may need to determine which of the two word combinations "eat a peach" and "eat a beach" is more likely. Statistical NLP methods determine the likelihood of a word combination according to its frequency in a training corpus. However, the nature of language is such that many word combinations are infrequent and do not occur in a given corpus. In this work we propose a method for estimating the probability of such previously unseen word combinations using available information on "most similar" words. We describe a probabilistic word association model based on distributional word similarity, and apply it to improving probability estimates for unseen word bigrams in a variant of Katz's back-off model. The similarity-based method yields a 20% perplexity improvement in the prediction of unseen bigrams and statistically significant reductions in speech-recognition error.
The cooccurrence smoothing technique @cite_8, based on earlier stochastic speech modeling work by Sugawara et al. (1985), is the main previous attempt to use similarity to estimate the probability of unseen events in language modeling. In addition to its original use in language modeling for speech recognition, Grishman and Sterling (1993) applied the cooccurrence smoothing technique to estimate the likelihood of selectional patterns. We will outline here the main parallels and differences between our method and cooccurrence smoothing. A more detailed analysis would require an empirical comparison of the two methods on the same corpus and task.
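The following toy Python sketch illustrates the general similarity-based estimation idea: the probability of an unseen bigram is borrowed from bigrams in which one word is replaced by a distributionally similar word. The corpus counts, the neighbour lists and their weights are invented for the example and are not taken from either of the two methods being compared.

```python
from collections import Counter

# Toy counts standing in for a training corpus (all words and numbers are illustrative).
bigram = Counter({("eat", "an"): 6, ("an", "apple"): 4, ("an", "orange"): 2,
                  ("eat", "a"): 8, ("a", "peach"): 3})
unigram = Counter({"eat": 20, "an": 8, "a": 12, "apple": 4, "orange": 2, "peach": 3})

def p_mle(w1, w2):
    """Maximum-likelihood estimate P(w2 | w1); zero for unseen bigrams."""
    return bigram[(w1, w2)] / unigram[w1] if unigram[w1] else 0.0

def p_similarity(w1, w2, neighbours):
    """Similarity-based estimate for an unseen bigram (w1, w2).

    `neighbours[w1]` is a list of (similar word, normalized weight) pairs; in the
    approaches discussed above these weights come from a distributional similarity
    measure, here they are simply assumed.
    """
    return sum(weight * p_mle(w_alt, w2) for w_alt, weight in neighbours.get(w1, []))

if __name__ == "__main__":
    neighbours = {"a": [("an", 0.7), ("the", 0.3)]}   # hypothetical neighbourhood of "a"
    print(p_mle("a", "apple"))                        # 0.0: the bigram was never observed
    print(p_similarity("a", "apple", neighbours))     # 0.35: borrowed from "an apple"
```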
{ "cite_N": [ "@cite_8" ], "mid": [ "2110993209" ], "abstract": [ "Training corpora for stochastic language models are virtually always too small for maximum-likelihood estimation, so smoothing the models is of great importance. The authors derive the cooccurrence smoothing technique for stochastic language modeling and give experimental evidence for its validity. Using word-bigram language models, cooccurrence smoothing improved the test-set perplexity by 14 on a German 100000-word text corpus and by 10 on an English 1-million word corpus. >" ] }
cmp-lg9405011
2949426059
This paper presents a plan-based architecture for response generation in collaborative consultation dialogues, with emphasis on cases in which the system (consultant) and user (executing agent) disagree. Our work contributes to an overall system for collaborative problem-solving by providing a plan-based framework that captures the Propose-Evaluate-Modify cycle of collaboration, and by allowing the system to initiate subdialogues to negotiate proposed additions to the shared plan and to provide support for its claims. In addition, our system handles in a unified manner the negotiation of proposed domain actions, proposed problem-solving actions, and beliefs proposed by discourse actions. Furthermore, it captures cooperative responses within the collaborative framework and accounts for why questions are sometimes never answered.
Researchers have utilized plan-based mechanisms to generate natural language responses, including explanations @cite_9 @cite_6 @cite_8. However, they only handle cases in which the user fails to understand the system, rather than cases in which the user disagrees with the system. Maybury developed plan operators for persuasive utterances, but did not provide a framework for negotiation of conflicting views.
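As a schematic illustration only (not the authors' formalism), a plan operator and the Propose-Evaluate-Modify cycle could be rendered in Python as follows; all field names and the example action are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class PlanOperator:
    """A schematic discourse/problem-solving operator (field names are illustrative)."""
    action: str
    preconditions: list = field(default_factory=list)   # beliefs that must hold first
    body: list = field(default_factory=list)            # sub-actions / subdialogues
    effects: list = field(default_factory=list)         # beliefs established afterwards

def propose_evaluate_modify(proposal, evaluate, modify, max_rounds=5):
    """Skeleton of a Propose-Evaluate-Modify negotiation cycle."""
    for _ in range(max_rounds):
        accepted, objection = evaluate(proposal)
        if accepted:
            return proposal                          # added to the shared plan
        proposal = modify(proposal, objection)       # e.g. open a negotiation subdialogue
    return None                                      # no agreement reached

if __name__ == "__main__":
    proposal = PlanOperator(action="Take-CS360",
                            preconditions=["user wants a CS degree"],
                            effects=["CS360 is part of the shared plan"])
    accept_all = lambda p: (True, None)              # trivial evaluator for the demo
    print(propose_evaluate_modify(proposal, accept_all, lambda p, o: p).action)
```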
{ "cite_N": [ "@cite_9", "@cite_6", "@cite_8" ], "mid": [ "1733954365", "2087786941", "2116151961" ], "abstract": [ "To participate in a dialogue a system must be capable of reasoning about its own previous utterances. Follow-up questions must be interpreted in the context of the ongoing conversation, and the system's previous contributions form part of this context. Furthermore, if a system is to be able to clarify misunderstood explanations or to elaborate on prior explanations, it must understand what it has conveyed in prior explanations. Previous approaches to generating multisentential texts have relied solely on rhetorical structuring techniques. In this paper, we argue that, to handle explanation dialogues successfully, a discourse model must include information about the intended effect of individual parts of the text on the hearer, as well as how the parts relate to one another rhetorically. We present a text planner that records this information and show how the resulting structure is used to respond appropriately to a follow-up question.", "Abstract Knowledge-based systems that interact with humans often need to define their terminology, elucidate their behavior or support their recommendations or conclusions. In general, they need to explain themselves. Unfortunately, current computer systems, if they can explain themselves at all, often generate explanations that are unnatural, ill-connected or simply incoherent. They typically have only one method of explanation which does not allow them to recover from failed communication. At a minimum, this can irritate an end-user and potentially decrease their productivity. More dangerous, poorly conveyed information may result in misconceptions on the part of the user which can lead to bad decisions or invalid conclusions, which may have costly or even dangerous implications. To address this problem, we analyse human-produced explanations with the aim of transferring explanation expertise to machines. Guided by this analysis, we present a classification of explanatory utterances based on their content and communicative function. We then use these utterance classes and additional text analysis to construct a taxonomy of text types. This text taxonomy characterizes multisentence explanations according to the content they convey, the communicative acts they perform, and their intended effect on the addressee's knowledge, beliefs, goals and plans. We then argue that the act of explanation presentation is an action-based endeavor and introduce and define an integrated theory of communicative acts (rhetorical, illocutionary, and locutionary acts). To illustrate this theory we formalize several of these communicative acts as plan operators and then show their use by a hierarchical text planner (TEXPLAN—Textual EXplanation PLANner) that composes natural language explanations. Finally, we classify a range of reactions readers may have to explanations and illustrate how a system can respond to these given a plan-based approach. Our research thus contributes (1) a domain-independent taxonomy of abstract explanatory utterances, (2) a taxonomy of multisentence explanations based on these utterance classes and (3) a classification of reactions readers may have to explanations as well as (4) an illustration of how these classifications can be applied computationally.", "Abstract Human verbal explanations are essentially interactive. 
If someone is giving a complex explanation, the hearer will be given the opportunity to indicate whether they are following as the explanation proceeds, and if necessary interrupt with clarification questions. These interactions allow the speaker to both clear up the hearer's immediate difficulties as they arise, and to update assumptions about their level of understanding. Better models of the hearer's level of understanding in turn allow the speaker to continue the explanation in a more appropriate manner, lessening the risk of continuing confusion. Despite its apparent importance, existing explanation and text generation systems fail to allow for this sort of interaction. Although some systems allow follow-up questions at the end of an explanation, they assume that a complete explanation has been planned and generated before such interactions are allowed. However, for complex explanations interactions with the user should take place as the explanation progresses, and should influence how that explanation continues. This paper describes the EDGE system, which is able to plan complex, extended explanations which allow such interactions with the user. The system can update assumptions about the user's knowledge on the basis of these interactions, and uses this information to influence the detailed further planning of the explanation. When the user appears confused, the system can attempt to fill in missing knowledge or to explain things another way." ] }
1905.13417
2947844600
Current state-of-the-art approaches for spatio-temporal action detection have achieved impressive results but remain unsatisfactory for temporal extent detection. The main reason is that there are ambiguous states similar to real actions, which may be treated as target actions even by a well-trained network. In this paper, we define these ambiguous samples as "transitional states", and propose a Transition-Aware Context Network (TACNet) to distinguish transitional states. The proposed TACNet includes two main components, i.e., a temporal context detector and a transition-aware classifier. The temporal context detector can extract long-term context information with constant time complexity by constructing a recurrent network. The transition-aware classifier can further distinguish transitional states by classifying actions and transitional states simultaneously. Therefore, the proposed TACNet can substantially improve the performance of spatio-temporal action detection. We extensively evaluate the proposed TACNet on the UCF101-24 and J-HMDB datasets. The experimental results demonstrate that TACNet obtains competitive performance on J-HMDB and significantly outperforms the state-of-the-art methods on the untrimmed UCF101-24 in terms of both frame-mAP and video-mAP.
Spatio-temporal action detection methods can generally be classified into two categories: weakly and fully supervised methods. Although we concentrate on fully supervised methods in this paper, weakly supervised methods have also achieved significant improvements in recent years. The purpose of these methods is to detect actions with video-level labels only, without frame-level bounding box annotations. They can significantly reduce annotation costs and are better suited to processing large amounts of unannotated video data. Multi-Instance Learning (MIL) is one of the most frequently used approaches for weakly supervised spatio-temporal action detection. In @cite_11 , Siva et al. formulate weakly supervised action detection as a MIL problem and globally optimize both inter- and intra-class distances to locate the actions of interest. A multi-fold MIL scheme is then proposed in @cite_23 to prevent training from prematurely locking onto erroneous object detections. Recently, deep models and attention mechanisms have also been employed in weakly supervised methods. The methods in @cite_12 @cite_7 apply attention mechanisms to focus on key volumes for action detection. In addition, Mettes et al. @cite_18 @cite_16 propose to use point annotations to perform action detection.
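A minimal sketch of the Multi-Instance Learning idea mentioned above, assuming the common max-pooling formulation over per-proposal scores with a video-level label only (the specific inter-/intra-class objective of @cite_11 is not reproduced here):

```python
import numpy as np

def bag_loss(instance_scores, bag_label):
    """Binary cross-entropy between a video-level label and the bag score.

    `instance_scores` are per-proposal (spatio-temporal tube) action probabilities.
    The bag score is their maximum, so a positive video only needs one confident
    instance, while a negative video pushes every instance score down.
    """
    bag_score = np.max(instance_scores)
    eps = 1e-7
    return -(bag_label * np.log(bag_score + eps)
             + (1 - bag_label) * np.log(1 - bag_score + eps))

if __name__ == "__main__":
    proposals = np.array([0.05, 0.10, 0.92])   # toy per-proposal action probabilities
    print(bag_loss(proposals, 1))  # small loss: one proposal explains the positive label
    print(bag_loss(proposals, 0))  # large loss: the spurious proposal is penalized
```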
{ "cite_N": [ "@cite_18", "@cite_7", "@cite_23", "@cite_16", "@cite_12", "@cite_11" ], "mid": [ "2398000642", "2963246338", "2016016818", "2740126389", "2172806452", "2016208906" ], "abstract": [ "We strive for spatio-temporal localization of actions in videos. The state-of-the-art relies on action proposals at test time and selects the best one with a classifier trained on carefully annotated box annotations. Annotating action boxes in video is cumbersome, tedious, and error prone. Rather than annotating boxes, we propose to annotate actions in video with points on a sparse subset of frames only. We introduce an overlap measure between action proposals and points and incorporate them all into the objective of a non-convex Multiple Instance Learning optimization. Experimental evaluation on the UCF Sports and UCF 101 datasets shows that (i) spatio-temporal proposals can be used to train classifiers while retaining the localization performance, (ii) point annotations yield results comparable to box annotations while being significantly faster to annotate, (iii) with a minimum amount of supervision our approach is competitive to the state-of-the-art. Finally, we introduce spatio-temporal action annotations on the train and test videos of Hollywood2, resulting in Hollywood2Tubes, available at http: tinyurl.com hollywood2tubes.", "Abstract We present VideoLSTM for end-to-end sequence learning of actions in video. Rather than adapting the video to the peculiarities of established recurrent or convolutional architectures, we adapt the architecture to fit the requirements of the video medium. Starting from the soft-Attention LSTM, VideoLSTM makes three novel contributions. First, video has a spatial layout. To exploit the spatial correlation we hardwire convolutions in the soft-Attention LSTM architecture. Second, motion not only informs us about the action content, but also guides better the attention towards the relevant spatio-temporal locations. We introduce motion-based attention. And finally, we demonstrate how the attention from VideoLSTM can be exploited for action localization by relying on the action class label and temporal attention smoothing. Experiments on UCF101, HMDB51 and THUMOS13 reveal the benefit of the video-specific adaptations of VideoLSTM in isolation as well as when integrated in a combined architecture. It compares favorably against other LSTM architectures for action classification and especially action localization.", "Object category localization is a challenging problem in computer vision. Standard supervised training requires bounding box annotations of object instances. This time-consuming annotation process is sidestepped in weakly supervised learning. In this case, the supervised information is restricted to binary labels that indicate the absence presence of object instances in the image, without their locations. We follow a multiple-instance learning approach that iteratively trains the detector and infers the object locations in the positive training images. Our main contribution is a multi-fold multiple instance learning procedure, which prevents training from prematurely locking onto erroneous object locations. This procedure is particularly important when high-dimensional representations, such as the Fisher vectors, are used. We present a detailed experimental evaluation using the PASCAL VOC 2007 dataset. 
Compared to state-of-the-art weakly supervised detectors, our approach better localizes objects in the training images, which translates into improved detection performance.", "The goal of this paper is to determine the spatio-temporal location of actions in video. Where training from hard to obtain box annotations is the norm, we propose an intuitive and effective algorithm that localizes actions from their class label only. We are inspired by recent work showing that unsupervised action proposals selected with human point-supervision perform as well as using expensive box annotations. Rather than asking users to provide point supervision, we propose fully automatic visual cues that replace manual point annotations. We call the cues pseudo-annotations, introduce five of them, and propose a correlation metric for automatically selecting and combining them. Thorough evaluation on challenging action localization datasets shows that we reach results comparable to results with full box supervision. We also show that pseudo-annotations can be leveraged during testing to improve weakly- and strongly-supervised localizers.", "We propose a soft attention based model for the task of action recognition in videos. We use multi-layered Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM) units which are deep both spatially and temporally. Our model learns to focus selectively on parts of the video frames and classifies videos after taking a few glimpses. The model essentially learns which parts in the frames are relevant for the task at hand and attaches higher importance to them. We evaluate the model on UCF-11 (YouTube Action), HMDB-51 and Hollywood2 datasets and analyze how the model focuses its attention depending on the scene and the action being performed.", "The detection of human action in videos of busy natural scenes with dynamic background is of interest for applications such as video surveillance. Taking a conventional fully supervised approach, the spatio-temporal locations of the action of interest have to be manually annotated frame by frame in the training videos, which is tedious and unreliable. In this paper, for the first time, a weakly supervised action detection method is proposed which only requires binary labels of the videos indicating the presence of the action of interest. Given a training set of binary labelled videos, the weakly supervised learning (WSL) problem is recast as a multiple instance learning (MIL) problem. A novel MIL algorithm is developed which differs from the existing MIL algorithms in that it locates the action of interest spatially and temporally by globally optimising both interand intra-class distance. We demonstrate through experiments that our WSL approach can achieve comparable detection performance to a fully supervised learning approach, and that the proposed MIL algorithm significantly outperforms the existing ones." ] }
1905.13331
2946832503
Unsupervised domain adaptation seeks to learn an invariant and discriminative representation for an unlabeled target domain by leveraging the information of a labeled source dataset. We propose to improve the discriminative ability of the target domain representation by simultaneously learning tightly clustered target representations while encouraging that each cluster is assigned to a unique and different class from the source. This strategy alleviates the effects of negative transfer when combined with adversarial domain matching between source and target representations. Our approach is robust to differences in the source and target label distributions and thus applicable to both balanced and imbalanced domain adaptation tasks, and with a simple extension, it can also be used for partial domain adaptation. Experiments on several benchmark datasets for domain adaptation demonstrate that our approach can achieve state-of-the-art performance in all three scenarios, namely, balanced, imbalanced and partial domain adaptation.
Driven by the increasing popularity of Generative Adversarial Networks (GANs) @cite_3 , recent adaptation methods resort to matching the distributions in an adversarial manner. @cite_19 @cite_17 added a discriminator to the latent representation to distinguish features from different domains, while training the feature encoders to mislead the domain discriminator so that it cannot find an effective boundary between source and target instances. The domain discriminator and feature encoders are trained adversarially with a min-max objective. Inspired by @cite_18 , @cite_20 utilized the classifier discrepancy to detect target samples that are distant from the source. Instead of using a discriminator, they proposed to adversarially maximize the discrepancy between two source classifiers, while training a feature encoder to reduce the inconsistency of their predictions.
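The classifier-discrepancy idea can be sketched as follows; the module sizes, the L1 form of the discrepancy, and the split into two training objectives are simplifications for illustration, not the exact recipe of @cite_20.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal illustrative modules (dimensions are arbitrary placeholders).
G = nn.Sequential(nn.Linear(64, 32), nn.ReLU())   # shared feature encoder
F1 = nn.Linear(32, 10)                            # first source classifier
F2 = nn.Linear(32, 10)                            # second source classifier

def discrepancy(logits1, logits2):
    """Mean L1 distance between the two classifiers' class-probability outputs."""
    return (F.softmax(logits1, dim=1) - F.softmax(logits2, dim=1)).abs().mean()

def adversarial_objectives(x_src, y_src, x_tgt):
    """The two adversarial objectives of a classifier-discrepancy approach.

    The classifiers minimize the source error while *maximizing* the target
    discrepancy (exposing target samples outside the source support); the
    encoder is then trained to minimize that discrepancy.
    """
    f_src, f_tgt = G(x_src), G(x_tgt)
    cls_loss = F.cross_entropy(F1(f_src), y_src) + F.cross_entropy(F2(f_src), y_src)
    disc = discrepancy(F1(f_tgt), F2(f_tgt))
    classifier_objective = cls_loss - disc   # step for F1, F2 (encoder frozen)
    encoder_objective = disc                 # step for G (classifiers frozen)
    return classifier_objective, encoder_objective

if __name__ == "__main__":
    x_s, y_s, x_t = torch.randn(8, 64), torch.randint(0, 10, (8,)), torch.randn(8, 64)
    print([t.item() for t in adversarial_objectives(x_s, y_s, x_t)])
```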
{ "cite_N": [ "@cite_18", "@cite_3", "@cite_19", "@cite_20", "@cite_17" ], "mid": [ "2104094955", "2099471712", "2951670162", "2771773135", "" ], "abstract": [ "Discriminative learning methods for classification perform well when training and test data are drawn from the same distribution. Often, however, we have plentiful labeled training data from a source domain but wish to learn a classifier which performs well on a target domain with a different distribution and little or no labeled training data. In this work we investigate two questions. First, under what conditions can a classifier trained from source data be expected to perform well on target data? Second, given a small amount of labeled target data, how should we combine it during training with the large amount of labeled source data to achieve the lowest target error at test time? We address the first question by bounding a classifier's target error in terms of its source error and the divergence between the two domains. We give a classifier-induced divergence measure that can be estimated from finite, unlabeled samples from the domains. Under the assumption that there exists some hypothesis that performs well in both domains, we show that this quantity together with the empirical source error characterize the target error of a source-trained classifier. We answer the second question by bounding the target error of a model which minimizes a convex combination of the empirical source and target errors. Previous theoretical work has considered minimizing just the source error, just the target error, or weighting instances from the two domains equally. We show how to choose the optimal combination of source and target error as a function of the divergence, the sample sizes of both domains, and the complexity of the hypothesis class. The resulting bound generalizes the previously studied cases and is always at least as tight as a bound which considers minimizing only the target error or an equal weighting of source and target errors.", "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.", "Recent studies reveal that a deep neural network can learn transferable features which generalize well to novel tasks for domain adaptation. However, as deep features eventually transition from general to specific along the network, the feature transferability drops significantly in higher layers with increasing domain discrepancy. Hence, it is important to formally reduce the dataset bias and enhance the transferability in task-specific layers. 
In this paper, we propose a new Deep Adaptation Network (DAN) architecture, which generalizes deep convolutional neural networks to the domain adaptation scenario. In DAN, hidden representations of all task-specific layers are embedded in a reproducing kernel Hilbert space where the mean embeddings of different domain distributions can be explicitly matched. The domain discrepancy is further reduced using an optimal multi-kernel selection method for mean embedding matching. DAN can learn transferable features with statistical guarantees, and can scale linearly via an unbiased estimate of the kernel embedding. Extensive empirical evidence shows that the proposed architecture yields state-of-the-art image classification error rates on standard domain adaptation benchmarks.", "In this work, we present a method for unsupervised domain adaptation (UDA), where we aim to transfer knowledge from a label-rich domain (i.e., a source domain) to an unlabeled domain (i.e., a target domain). Many adversarial learning methods have been proposed for this task. These methods train domain classifier networks (i.e., a discriminator) to discriminate the features as either a source or target and train a feature generator network to mimic the discriminator. However, the domain classifier only tries to distinguish the features as a source or target and thus does not consider task-specific decision boundaries between classes. Therefore, a trained generator can generate ambiguous features near class boundaries. To solve the problem, we propose a new approach that attempts to align distributions of source and target by utilizing the task-specific decision boundaries. We propose to utilize task-specific classifiers as discriminators that try to detect target samples that are far from the support of the source. A feature generator learns to generate target features inside the support to fool the classifiers. Since the generator uses feedback from task-specific classifiers, it avoids generating target features near class boundaries. Our method outperforms other methods on several datasets of image classification and semantic segmentation.", "" ] }
1905.13331
2946832503
Unsupervised domain adaptation seeks to learn an invariant and discriminative representation for an unlabeled target domain by leveraging the information of a labeled source dataset. We propose to improve the discriminative ability of the target domain representation by simultaneously learning tightly clustered target representations while encouraging that each cluster is assigned to a unique and different class from the source. This strategy alleviates the effects of negative transfer when combined with adversarial domain matching between source and target representations. Our approach is robust to differences in the source and target label distributions and thus applicable to both balanced and imbalanced domain adaptation tasks, and with a simple extension, it can also be used for partial domain adaptation. Experiments on several benchmark datasets for domain adaptation demonstrate that our approach can achieve state-of-the-art performance in all three scenarios, namely, balanced, imbalanced and partial domain adaptation.
The approaches described above rely on the assumption that source and target share the same label domain and distribution. This assumption limits their applicability to situations where it is violated, e.g., imbalanced or partial domain adaptation scenarios. @cite_14 utilized the pairwise similarity information from the source to regularize the implicit clustering of the target domain, and thus has the potential to be used in the imbalanced scenario. However, their clustering of the target domain is determined from the source only, and thus does not benefit from the local information provided by the target. @cite_21 @cite_8 introduced the concept of partial domain adaptation, in which target classes are assumed to be a subset of the source domain. They reduce the effect of negative transfer by selecting out classes not present in the target; however, their approaches bring only moderate benefits when the source and target label domains are the same. In our approach, we first transform the partial scenario into a special imbalanced setting via target domain augmentation, and then perform domain adaptation with our clustering-based objective without further changes.
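For illustration, a common way to "select out" source classes absent from the target, in the spirit of the partial-adaptation works cited above, is to down-weight source classes by the average class probability assigned to target samples. The sketch below shows that generic weighting scheme; it is not the target-domain-augmentation mechanism proposed in this paper, and all sizes are placeholders.

```python
import torch
import torch.nn.functional as F

def source_class_weights(target_logits):
    """Estimate how relevant each source class is for the (unlabeled) target domain.

    Averaging the target samples' predicted class probabilities gives low weight
    to source classes that the target data rarely activates, which is the
    intuition behind selecting out outlier source classes in partial adaptation.
    """
    probs = F.softmax(target_logits, dim=1)   # (num_target, num_source_classes)
    w = probs.mean(dim=0)                     # average activation per source class
    return w / w.max()                        # rescale so the largest weight is 1

def weighted_source_loss(source_logits, source_labels, class_weights):
    """Source classification loss with the per-class weights applied."""
    return F.cross_entropy(source_logits, source_labels, weight=class_weights)

if __name__ == "__main__":
    num_source_classes = 10                   # the target uses only a subset of them
    tgt_logits = torch.randn(32, num_source_classes)
    src_logits = torch.randn(16, num_source_classes)
    src_labels = torch.randint(0, num_source_classes, (16,))
    w = source_class_weights(tgt_logits)
    print(w)
    print(weighted_source_loss(src_logits, src_labels, w).item())
```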
{ "cite_N": [ "@cite_14", "@cite_21", "@cite_8" ], "mid": [ "2769159728", "2738463471", "" ], "abstract": [ "This paper introduces a novel method to perform transfer learning across domains and tasks, formulating it as a problem of learning to cluster. The key insight is that, in addition to features, we can transfer similarity information and this is sufficient to learn a similarity function and clustering network to perform both domain adaptation and cross-task transfer learning. We begin by reducing categorical information to pairwise constraints, which only considers whether two instances belong to the same class or not. This similarity is category-agnostic and can be learned from data in the source domain using a similarity network. We then present two novel approaches for performing transfer learning using this similarity function. First, for unsupervised domain adaptation, we design a new loss function to regularize classification with a constrained clustering loss, hence learning a clustering network with the transferred similarity metric generating the training inputs. Second, for cross-task learning (i.e., unsupervised clustering with unseen categories), we propose a framework to reconstruct and estimate the number of semantic clusters, again using the clustering network. Since the similarity network is noisy, the key is to use a robust clustering algorithm, and we show that our formulation is more robust than the alternative constrained and unconstrained clustering approaches. Using this method, we first show state of the art results for the challenging cross-task problem, applied on Omniglot and ImageNet. Our results show that we can reconstruct semantic clusters with high accuracy. We then evaluate the performance of cross-domain transfer using images from the Office-31 and SVHN-MNIST tasks and present top accuracy on both datasets. Our approach doesn't explicitly deal with domain discrepancy. If we combine with a domain adaptation loss, it shows further improvement.", "Adversarial learning has been successfully embedded into deep networks to learn transferable features, which reduce distribution discrepancy between the source and target domains. Existing domain adversarial networks assume fully shared label space across domains. In the presence of big data, there is strong motivation of transferring both classification and representation models from existing big domains to unknown small domains. This paper introduces partial transfer learning, which relaxes the shared label space assumption to that the target label space is only a subspace of the source label space. Previous methods typically match the whole source domain to the target domain, which are prone to negative transfer for the partial transfer problem. We present Selective Adversarial Network (SAN), which simultaneously circumvents negative transfer by selecting out the outlier source classes and promotes positive transfer by maximally matching the data distributions in the shared label space. Experiments demonstrate that our models exceed state-of-the-art results for partial transfer learning tasks on several benchmark datasets.", "" ] }
1905.13191
2947375083
We study revenue-optimal pricing and driver compensation in ridesharing platforms when drivers have heterogeneous preferences over locations. If a platform ignores drivers' location preferences, it may make inefficient trip dispatches; moreover, drivers may strategize so as to route towards their preferred locations. In a model with stationary and continuous demand and supply, we present a mechanism that incentivizes drivers to both (i) report their location preferences truthfully and (ii) always provide service. In settings with unconstrained driver supply or symmetric demand patterns, our mechanism achieves (full-information) first-best revenue. Under supply constraints and unbalanced demand, we show via simulation that our mechanism improves over existing mechanisms and has performance close to the first-best.
There are various empirical studies analyzing the impact of dynamic pricing @cite_2 @cite_9, the labor market for Uber drivers @cite_0 @cite_1, consumer surplus @cite_4, the value of flexible work @cite_11, the gender earnings gap @cite_3, and the commission- vs. medallion-lease-based compensation models @cite_5.
{ "cite_N": [ "@cite_4", "@cite_9", "@cite_1", "@cite_3", "@cite_0", "@cite_2", "@cite_5", "@cite_11" ], "mid": [ "2518792172", "2614461528", "", "2788495288", "1912171728", "", "2762223703", "" ], "abstract": [ "Estimating consumer surplus is challenging because it requires identification of the entire demand curve. We rely on Uber’s “surge” pricing algorithm and the richness of its individual level data to first estimate demand elasticities at several points along the demand curve. We then use these elasticity estimates to estimate consumer surplus. Using almost 50 million individual-level observations and a regression discontinuity design, we estimate that in 2015 the UberX service generated about @math 1.60 of consumer surplus is generated. Back-of-the-envelope calculations suggest that the overall consumer surplus generated by the UberX service in the United States in 2015 was $6.8 billion.", "Optimizing shared vehicle systems (bike-sharing car-sharing ride-sharing) is more challenging compared to traditional resource allocation settings due to the presence of complex network externalities. In particular, changes in the demand supply at any location (via dynamic pricing, rebalancing of empty vehicles, etc.) affect future supply throughout the system within short timescales. Such externalities are well captured by steady-state Markovian models, which are therefore widely used to analyze and design shared vehicle systems. However, using such models to design pricing control policies is computationally difficult since the resulting optimization problems are high-dimensional and non-convex. To this end, we develop a general approximation framework for designing pricing policies in shared vehicle systems, based on a novel convex relaxation which we term elevated flow relaxation. Our approach provides the first efficient algorithms with rigorous approximation guarantees for a wide range of objective functions (throughput, revenue, welfare). For any shared vehicle system with @math stations and m vehicles, our framework provides a pricing policy with an approximation ratio of 1+(n-1) m. This guarantee is particularly meaningful when m n, the average number of vehicles per station is large, as is often the case in practice. Further, the simplicity of our approach allows us to extend it to more complex settings. Apart from pricing, shared vehicle systems enable other control levers for modulating demand and supply, e.g. rebalancing empty vehicles, redirecting riders to nearby vehicles, etc. Our approach yields efficient algorithms with the same approximation guarantees for all these problems, and in the process, obtains as special cases several existing heuristics and asymptotic guarantees. We also extend our approach to obtain bi-criterion guarantees in multi-objective settings; we illustrate this with the example of Ramsey pricing. From a technical perspective, our work develops a new approach for obtaining control policies with approximation guarantees in steady-state Markovian models. Our approach can be distilled into the following three-step program: (i) construct an upper bound via a relaxation to the original problem that encodes essential conservation laws of the system, (ii) identify a family of control policies inducing known steady-state distributions that achieve the value of the relaxed solution in an appropriate scaling limit (in our case, state-independent policies in the limit m++), and (iii) characterize the performance loss between the finite system (i.e. 
fixed m) and the scaling limit. This technique may be of independent interest for other settings.", "", "The growth of the \"gig\" economy generates worker flexibility that, some have speculated, will favor women. We explore one facet of the gig economy by examining labor supply choices and earnings among more than a million rideshare drivers on Uber in the U.S. Perhaps most surprisingly, we find that there is a roughly 7 gender earnings gap among drivers. The uniqueness of our data - knowing exactly the production and compensation functions - permits us to completely unpack the underlying determinants of the gender earnings gap. We find that the entire gender gap is caused by three factors: experience on the platform (learning-by-doing), preferences over where when to work, and preferences for driving speed. This suggests that, as the gig economy grows and brings more flexibility in employment, women's relatively high opportunity cost of non-paid-work time and gender-based preference differences can perpetuate a gender earnings gap even in the absence of discrimination.", "This paper provides the first comprehensive analysis of Uber's driver-partners, based on both survey data and anonymized, aggregated administrative data. Uber has grown at an exponential rate over the last few years, and drivers who partner with Uber appear to be attracted to the platform in large part because of the flexibility it offers, the level of compensation, and the fact that earnings per hour do not vary much with hours worked, which facilitates part-time and variable hours. Uber's driver-partners are more similar in terms of their age and education to the general workforce than to taxi drivers and chauffeurs. Uber may serve as a bridge for many seeking other employment opportunities, and it may attract well-qualified individuals because, with Uber's star rating system, driver-partners' reputations are explicitly shared with potential customers. Most of Uber's driver-partners had full- or part-time employment prior to joining Uber, and many continued in those positions after starting to drive with the Uber platform, which makes the flexibility to set their own hours all the more valuable. Uber's driver-partners also often cited the desire to smooth fluctuations in their income as a reason for partnering with Uber.", "", "Ride-hailing drivers pay a proportion of their fares to the ride-hailing platform operator, a commission-based compensation model used by many internet-mediated service providers. To Uber drivers, this commission is known as the Uber fee. By contrast, traditional taxi drivers in most US cities make a fixed payment independent of their earnings, usually a weekly or daily medallion lease, but keep every fare dollar net of expenses. We assess these compensation models from a driver’s point of view using an experiment that offered random samples of Boston Uber drivers opportunities to lease a virtual taxi medallion that eliminates the Uber fee. Some drivers were offered a negative fee. Drivers’ labor supply response to our offers reveals a large intertemporal substitution elasticity, on the order of 1.2. At the same time, our virtual lease program was under-subscribed: many drivers who would have benefitted from buying an inexpensive lease chose to opt out. We use these results to compute the average compensation required to make drivers indifferent between ride-hailing and a traditional taxi compensation contract. 
The results suggest that ride-hailing drivers gain considerably from the opportunity to drive without leasing.", "" ] }
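To give a feel for the 1 + (n-1)/m approximation guarantee quoted in the abstract above, here is a tiny numeric illustration; the station and vehicle counts are made up and not taken from the paper.

```python
def approximation_ratio(n_stations, m_vehicles):
    """The 1 + (n-1)/m guarantee quoted in the elevated-flow-relaxation abstract."""
    return 1 + (n_stations - 1) / m_vehicles

if __name__ == "__main__":
    # With many vehicles per station the guarantee approaches 1, i.e. near-optimal.
    print(approximation_ratio(n_stations=50, m_vehicles=200))    # 1.245
    print(approximation_ratio(n_stations=50, m_vehicles=5000))   # 1.0098
```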
1905.13149
2947937852
In this work we propose a new computational framework, based on generative deep models, for synthesis of photo-realistic food meal images from textual descriptions of their ingredients. Previous works on synthesis of images from text typically rely on pre-trained text models to extract text features, followed by generative neural networks (GANs) aimed at generating realistic images conditioned on the text features. These works mainly focus on generating spatially compact and well-defined categories of objects, such as birds or flowers. In contrast, meal images are significantly more complex, consisting of multiple ingredients whose appearance and spatial qualities are further modified by cooking methods. We propose a method that first builds an attention-based ingredients-image association model, which is then used to condition a generative neural network tasked with synthesizing meal images. Furthermore, a cycle-consistent constraint is added to further improve image quality and control appearance. Extensive experiments show our model is able to generate meal images corresponding to the ingredients, which could be used to augment existing datasets for solving other computational food analysis problems.
Generating images conditioned on a deterministic label was proposed by @cite_1, which directly concatenates the label with the input, and by @cite_18 @cite_21, which add the label information at a certain layer's output. Another line of work conditions the generation process on text information. @cite_16 uses a pre-trained model to extract text features and concatenates them with the input random vector in order to generate text-based images. @cite_30 extends this concept by stacking three GANs to generate the same image at different resolutions. The same idea was shown to be valuable in human face generation @cite_19 @cite_2. These works are conditioned on short textual descriptions of the image and rely on recurrent neural networks (RNNs) to extract text features. However, RNNs treat words sequentially and with the same importance, whereas in the sparse set of ingredients of a meal not all ingredients play an important role in image appearance; it therefore makes sense to model this importance. Inspired by @cite_24, we combine an attention mechanism with a bi-directional LSTM to learn the importance of each ingredient; the attention-based LSTM model improves the association performance considerably.
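As a minimal illustration of conditioning a generator on text features, the sketch below concatenates an embedding (for example, an attention-pooled ingredient representation) with the noise vector before decoding. The layer sizes and output resolution are placeholders, and the module is far simpler than the stacked or cycle-consistent architectures discussed above.

```python
import torch
import torch.nn as nn

class TextConditionedGenerator(nn.Module):
    """Skeleton of a conditional generator: the text embedding is concatenated
    with the noise vector before upsampling. Real text-to-image GANs use much
    deeper convolutional decoders; this is only a shape-level illustration."""
    def __init__(self, noise_dim=100, text_dim=128, img_pixels=64 * 64 * 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim + text_dim, 512), nn.ReLU(),
            nn.Linear(512, img_pixels), nn.Tanh(),   # pixel values in [-1, 1]
        )

    def forward(self, noise, text_embedding):
        return self.net(torch.cat([noise, text_embedding], dim=1))

if __name__ == "__main__":
    gen = TextConditionedGenerator()
    z = torch.randn(4, 100)    # random noise
    t = torch.randn(4, 128)    # e.g. attention-pooled ingredient embedding
    fake = gen(z, t)
    print(fake.shape)          # torch.Size([4, 12288]), i.e. a flattened 64x64x3 image
```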
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_21", "@cite_1", "@cite_24", "@cite_19", "@cite_2", "@cite_16" ], "mid": [ "2766091292", "2548275288", "", "2125389028", "2897152025", "2766527293", "", "2949999304" ], "abstract": [ "Although Generative Adversarial Networks (GANs) have shown remarkable success in various tasks, they still face challenges in generating high quality images. In this paper, we propose Stacked Generative Adversarial Networks (StackGAN) aiming at generating high-resolution photo-realistic images. First, we propose a two-stage generative adversarial network architecture, StackGAN-v1, for text-to-image synthesis. The Stage-I GAN sketches the primitive shape and colors of the object based on given text description, yielding low-resolution images. The Stage-II GAN takes Stage-I results and text descriptions as inputs, and generates high-resolution images with photo-realistic details. Second, an advanced multi-stage generative adversarial network architecture, StackGAN-v2, is proposed for both conditional and unconditional generative tasks. Our StackGAN-v2 consists of multiple generators and discriminators in a tree-like structure; images at multiple scales corresponding to the same scene are generated from different branches of the tree. StackGAN-v2 shows more stable training behavior than StackGAN-v1 by jointly approximating multiple distributions. Extensive experiments demonstrate that the proposed stacked generative adversarial networks significantly outperform other state-of-the-art methods in generating photo-realistic images.", "Synthesizing high resolution photorealistic images has been a long-standing challenge in machine learning. In this paper we introduce new methods for the improved training of generative adversarial networks (GANs) for image synthesis. We construct a variant of GANs employing label conditioning that results in 128x128 resolution image samples exhibiting global coherence. We expand on previous work for image quality assessment to provide two new analyses for assessing the discriminability and diversity of samples from class-conditional image synthesis models. These analyses demonstrate that high resolution samples provide class information not present in low resolution samples. Across 1000 ImageNet classes, 128x128 samples are more than twice as discriminable as artificially resized 32x32 samples. In addition, 84.7 of the classes have samples exhibiting diversity comparable to real ImageNet data.", "", "Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this model can generate MNIST digits conditioned on class labels. We also illustrate how this model could be used to learn a multi-modal model, and provide preliminary examples of an application to image tagging in which we demonstrate how this approach can generate descriptive tags which are not part of training labels.", "Finding a right recipe that describes the cooking procedure for a dish from just one picture is inherently a difficult problem. Food preparation undergoes a complex process involving raw ingredients, utensils, cutting and cooking operations. This process gives clues to the multimedia presentation of a dish (e.g., taste, colour, shape). 
However, the description of the process is implicit, implying only the cause of dish presentation rather than the visual effect that can be vividly observed on a picture. Therefore, different from other cross-modal retrieval problems in the literature, recipe search requires the understanding of textually described procedure to predict its possible consequence on visual appearance. In this paper, we approach this problem from the perspective of attention modeling. Specifically, we model the attention of words and sentences in a recipe and align them with its image feature such that both text and visual features share high similarity in multi-dimensional space. Through a large food dataset, Recipe1M, we empirically demonstrate that understanding the cooking procedure can lead to improvement in a large margin compared to the existing methods which mostly consider only ingredient information. Furthermore, with attention modeling, we show that language-specific named-entity extraction based on domain knowledge becomes optional. The result gives light to the feasibility of performing cross-lingual cross-modal recipe retrieval with off-the-shelf machine translation engines.", "We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.", "", "Automatic synthesis of realistic images from text would be interesting and useful, but current AI systems are still far from this goal. However, in recent years generic and powerful recurrent neural network architectures have been developed to learn discriminative text feature representations. Meanwhile, deep convolutional generative adversarial networks (GANs) have begun to generate highly compelling images of specific categories, such as faces, album covers, and room interiors. In this work, we develop a novel deep architecture and GAN formulation to effectively bridge these advances in text and image model- ing, translating visual concepts from characters to pixels. We demonstrate the capability of our model to generate plausible images of birds and flowers from detailed text descriptions." ] }
1905.13392
2947380094
This paper proposes a deep convolutional neural network model for ordinal regression by considering a family of probabilistic ordinal link functions in the output layer. The link functions are those used for cumulative link models, which are traditional statistical linear models based on projecting each pattern into a 1-dimensional space. A set of ordered thresholds splits this space into the different classes of the problem. In our case, the projections are estimated by a non-linear deep neural network. To further improve the results, we combine these ordinal models with a loss function that takes into account the distance between the categories, based on the weighted Kappa index. Three different link functions are studied in the experimental study, and the results are contrasted with statistical analysis. The experiments run over two different ordinal classification problems, and the statistical tests confirm that these models improve the results of a nominal model and outperform other proposals considered in the literature.
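To make the cumulative link construction described above concrete, here is a minimal NumPy sketch of a proportional-odds (logit link) output layer: a scalar projection produced by any deep network is split into ordered classes by learned thresholds. The variable names and the logit choice are illustrative assumptions, not the authors' code, and the weighted-Kappa loss is omitted.

```python
import numpy as np

def cumulative_link_probs(projection, thresholds):
    """Map a scalar projection f(x) to K ordinal class probabilities.

    P(y <= k | x) = sigmoid(theta_k - f(x)) for ordered thresholds
    theta_1 < ... < theta_{K-1}; class probabilities are differences
    of consecutive cumulative probabilities."""
    theta = np.concatenate([thresholds, [np.inf]])        # theta_K = +inf
    cdf = 1.0 / (1.0 + np.exp(-(theta - projection)))     # cumulative probabilities
    cdf = np.concatenate([[0.0], cdf])                    # P(y <= 0) = 0
    return np.diff(cdf)                                   # P(y = k), k = 1..K

# toy usage: 4 ordered classes, projection produced by any (deep) network
probs = cumulative_link_probs(projection=0.3, thresholds=np.array([-1.0, 0.0, 1.5]))
print(probs, probs.sum())                                 # probabilities sum to 1
```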
@cite_52 proposed a complex CNN architecture for solving Twitter Sentiment Classification as an ordinal problem. They verified that average pooling preserves significant features that give more expressiveness to the ordinal scale. They did not propose any method to include the ordinal information in the classifier itself; instead, they searched for the best CNN architecture according to an ordinal metric.
{ "cite_N": [ "@cite_52" ], "mid": [ "2784099047" ], "abstract": [ "Twitter sentiment analysis according to five points scales has attracted research interest due to its potential use in commercial and public social media application. A multi-point scale classification is a popular way used by many companies to evaluate the sentiment of product reviews (e.g. Alibaba, Amazon and eBay). Most of the classification approaches addressed this problem using traditional classification algorithm that requires expert knowledge to select the best features. Even though deep learning has been utilized, most of them employed a simple structure that not enough to capture the important features. In this paper, a complex structure of convolutional neural network (CNN) is proposed to classify the tweet into five-point scale and obtain a more several tweet representation. After a series of experiments with CNN including different hyperparameters and pooling strategies (Max and Average), we found that the best structure for our model is three convolutional layers, each one followed by average pooling layer. The proposed multi-layers convolutional neural network (MLCNN) model achieve the lowest Macro average mean absolute error (MAEM) and outperforms the state-of-the-art approach on tweet 2016 dataset for Ordinal classification. Experimental results show the ability of average pooling to preserve significant features that provide more expressiveness to ordinal scale." ] }
1905.13382
2947393200
Online hashing has attracted extensive research attention when facing streaming data. Most online hashing methods, learning binary codes based on pairwise similarities of training instances, fail to capture the semantic relationship, and suffer from a poor generalization in large-scale applications due to large variations. In this paper, we propose to model the similarity distributions between the input data and the hashing codes, upon which a novel supervised online hashing method, dubbed as Similarity Distribution based Online Hashing (SDOH), is proposed, to keep the intrinsic semantic relationship in the produced Hamming space. Specifically, we first transform the discrete similarity matrix into a probability matrix via a Gaussian-based normalization to address the extremely imbalanced distribution issue. And then, we introduce a scaling Student t-distribution to solve the challenging initialization problem, and efficiently bridge the gap between the known and unknown distributions. Lastly, we align the two distributions via minimizing the Kullback-Leibler divergence (KL-diverence) with stochastic gradient descent (SGD), by which an intuitive similarity constraint is imposed to update hashing model on the new streaming data with a powerful generalizing ability to the past data. Extensive experiments on three widely-used benchmarks validate the superiority of the proposed SDOH over the state-of-the-art methods in the online retrieval task.
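The abstract above outlines a three-step recipe: turn the label-based similarity matrix into a probability distribution, model Hamming-space similarities with a heavy-tailed (Student-t style) kernel, and align the two by minimizing KL divergence with SGD. The NumPy sketch below illustrates that recipe under simplified, assumed normalizations; it is not the exact SDOH formulation.

```python
import numpy as np

def row_normalize(a, eps=1e-12):
    return a / (a.sum(axis=1, keepdims=True) + eps)

rng = np.random.default_rng(0)
S = rng.choice([0.0, 1.0], size=(5, 8))                        # label-based similarities (imbalanced)
P = row_normalize(np.exp((S - S.mean()) / (S.std() + 1e-12)))  # Gaussian-style normalization -> target

B_new = np.sign(rng.standard_normal((5, 16)))                  # codes of the streaming batch (16 bits)
B_old = np.sign(rng.standard_normal((8, 16)))                  # codes of past / anchor data
ham = 0.5 * (16 - B_new @ B_old.T)                             # Hamming distances from inner products
Q = row_normalize(1.0 / (1.0 + ham ** 2))                      # heavy-tailed (Student-t style) kernel

kl = np.sum(P * (np.log(P + 1e-12) - np.log(Q + 1e-12)), axis=1).mean()
print("KL(P || Q) to be minimized with SGD:", kl)
```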
SGD-based online hashing employs SGD to update the learned parameters. To the best of our knowledge, Online Kernel Hashing (OKH) @cite_7 is the first method of this kind; it requires pairs of points to update the hash functions via an online passive-aggressive strategy @cite_8 . Adaptive Hashing (AdaptHash) @cite_15 defines a hinge-like loss, approximated by a differentiable sigmoid function, to update the hash functions with SGD. In @cite_27 , a more general two-step hashing was introduced, in which binary Error Correcting Output Codes (ECOC) are first assigned to labeled data, and the hash functions are then learned to fit the binary ECOC using online boosting. Cakir et al. @cite_17 developed Online Hashing with Mutual Information (MIHash), which optimizes the mutual information between the neighbors and non-neighbors of a query. Lin et al. @cite_2 @cite_13 proposed Hadamard Codebook based Online Hashing (HCOH), where a more discriminative Hadamard matrix is used as the ECOC codebook to guide the learning of hash functions.
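As a rough illustration of what such SGD-based online updates look like, the sketch below performs one pairwise update of a linear hashing matrix using a tanh relaxation of the binary codes and a hinge-like pairwise loss. The relaxation, margin, and learning rate are assumptions for illustration; OKH, AdaptHash, and MIHash each differ in the exact loss and update rule.

```python
import numpy as np

def sgd_pair_update(W, x_i, x_j, sim, lr=0.1, margin=0.5):
    """One SGD step on a labeled pair (sim = +1 similar, -1 dissimilar).

    Codes are relaxed as h = tanh(W x); the pair contributes a hinge-like
    loss max(0, margin - sim * <h_i, h_j> / r) over r bits, and W moves
    along the negative gradient when the pair violates the margin."""
    r = W.shape[0]
    h_i, h_j = np.tanh(W @ x_i), np.tanh(W @ x_j)
    if sim * h_i.dot(h_j) / r >= margin:
        return W                                          # pair already satisfied
    g_i = -sim / r * ((1 - h_i ** 2) * h_j)[:, None] * x_i[None, :]
    g_j = -sim / r * ((1 - h_j ** 2) * h_i)[:, None] * x_j[None, :]
    return W - lr * (g_i + g_j)

rng = np.random.default_rng(1)
W = 0.1 * rng.standard_normal((16, 32))                   # 16-bit codes for 32-d features
for _ in range(100):                                      # streaming pairs
    x_i, x_j = rng.standard_normal(32), rng.standard_normal(32)
    W = sgd_pair_update(W, x_i, x_j, sim=rng.choice([-1, 1]))
```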
{ "cite_N": [ "@cite_7", "@cite_8", "@cite_27", "@cite_2", "@cite_15", "@cite_13", "@cite_17" ], "mid": [ "", "", "2535352129", "2897391546", "2204148968", "2944642470", "2602646780" ], "abstract": [ "", "", "Fast nearest neighbor search is becoming more and more crucial given the advent of large-scale data in many computer vision applications. Hashing approaches provide both fast search mechanisms and compact index structures to address this critical need. In image retrieval problems where labeled training data is available, supervised hashing methods prevail over unsupervised methods. Most state-of-the-art supervised hashing approaches employ batch-learners. Unfortunately, batch-learning strategies may be inefficient when confronted with large datasets. Moreover, with batch-learners, it is unclear how to adapt the hash functions as the dataset continues to grow and new variations appear over time. To handle these issues, we propose OSH: an Online Supervised Hashing technique that is based on Error Correcting Output Codes. We consider a stochastic setting where the data arrives sequentially and our method learns and adapts its hashing functions in a discriminative manner. Our method makes no assumption about the number of possible class labels, and accommodates new classes as they are presented in the incoming data stream. In experiments with three image retrieval benchmarks, our method yields state-of-the-art retrieval performance as measured in Mean Average Precision, while also being orders-of-magnitude faster than competing batch methods for supervised hashing. Also, our method significantly outperforms recently introduced online hashing solutions.", "In recent years, binary code learning, a.k.a. hashing, has received extensive attention in large-scale multimedia retrieval. It aims to encode high-dimensional data points into binary codes, hence the original high-dimensional metric space can be efficiently approximated via Hamming space. However, most existing hashing methods adopted offline batch learning, which is not suitable to handle incremental datasets with streaming data or new instances. In contrast, the robustness of the existing online hashing remains as an open problem, while the embedding of supervised semantic information hardly boosts the performance of the online hashing, mainly due to the defect of unknown category numbers in supervised learning. In this paper, we propose an online hashing scheme, termed Hadamard Codebook based Online Hashing (HCOH), which aims to solving the above problems towards robust and supervised online hashing. In particular, we first assign an appropriate high-dimensional binary codes to each class label, which is generated randomly by Hadamard codes. Subsequently, LSH is adopted to reduce the length of such Hadamard codes in accordance with the hash bits, which can adapt the predefined binary codes online, and theoretically guarantee the semantic similarity. Finally, we consider the setting of stochastic data acquisition, which facilitates our method to efficiently learn the corresponding hashing functions via stochastic gradient descend (SGD) online. Notably, the proposed HCOH can be embedded with supervised labels and is not limited to a predefined category number. Extensive experiments on three widely-used benchmarks demonstrate the merits of the proposed scheme over the state-of-the-art methods.", "With the staggering growth in image and video datasets, algorithms that provide fast similarity search and compact storage are crucial. 
Hashing methods that map the data into Hamming space have shown promise, however, many of these methods employ a batch-learning strategy in which the computational cost and memory requirements may become intractable and infeasible with larger and larger datasets. To overcome these challenges, we propose an online learning algorithm based on stochastic gradient descent in which the hash functions are updated iteratively with streaming data. In experiments with three image retrieval benchmarks, our online algorithm attains retrieval accuracy that is comparable to competing state-of-the-art batch-learning solutions, while our formulation is orders of magnitude faster and being online it is adaptable to the variations of the data. Moreover, our formulation yields improved retrieval performance over a recently reported online hashing technique, Online Kernel Hashing.", "Online image hashing has received increasing research attention recently, which receives large-scale data in a streaming manner to update the hash functions on-the-fly. Its key challenge lies in the difficulty in balancing the learning timeliness and model accuracy. To this end, most works exploit a supervised setting, i.e., using class labels to boost the hashing performance, which defects in two aspects: First, large amount of training batches are required to learn up-to-date hash functions, which however largely increase the learning complexity. Second, strong constraints, e.g., orthogonal or similarity preserving, are used, which are however typically relaxed and lead to large accuracy drop. To handle the above challenges, in this paper, a novel supervised online hashing scheme termed Hadamard Matrix Guided Online Hashing (HMOH) is proposed. Our key innovation lies in the construction and usage of Hadamard matrix, which is an orthogonal binary matrix and is built via Sylvester method. To release the need of strong constraints, we regard each column of Hadamard matrix as the target code for each class label, which by nature satisfies several desired properties of hashing codes. To accelerate the online training, the LSH is first adopted to align the length of target code and the to-be-learned binary code. And then, we treat the learning of hash functions as a set of binary classification problems to fit the assigned target code. Finally, we propose to ensemble the learned models in all rounds to maximally preserve the information of past streaming data. The superior accuracy and efficiency of the proposed method are demonstrated through extensive experiments on three widely-used datasets comparing to various state-of-the-art methods.", "Learning-based hashing methods are widely used for nearest neighbor retrieval, and recently, online hashing methods have demonstrated good performance-complexity trade-offs by learning hash functions from streaming data. In this paper, we first address a key challenge for online hashing: the binary codes for indexed data must be recomputed to keep pace with updates to the hash functions. We propose an efficient quality measure for hash functions, based on an information-theoretic quantity, mutual information, and use it successfully as a criterion to eliminate unnecessary hash table updates. Next, we also show how to optimize the mutual information objective using stochastic gradient descent. We thus develop a novel hashing method, MIHash, that can be used in both online and batch settings. 
Experiments on image retrieval benchmarks (including a 2.5M image dataset) confirm the effectiveness of our formulation, both in reducing hash table recomputations and in learning high-quality hash functions." ] }
1905.13382
2947393200
Online hashing has attracted extensive research attention when facing streaming data. Most online hashing methods, learning binary codes based on pairwise similarities of training instances, fail to capture the semantic relationship, and suffer from a poor generalization in large-scale applications due to large variations. In this paper, we propose to model the similarity distributions between the input data and the hashing codes, upon which a novel supervised online hashing method, dubbed as Similarity Distribution based Online Hashing (SDOH), is proposed, to keep the intrinsic semantic relationship in the produced Hamming space. Specifically, we first transform the discrete similarity matrix into a probability matrix via a Gaussian-based normalization to address the extremely imbalanced distribution issue. And then, we introduce a scaling Student t-distribution to solve the challenging initialization problem, and efficiently bridge the gap between the known and unknown distributions. Lastly, we align the two distributions via minimizing the Kullback-Leibler divergence (KL-diverence) with stochastic gradient descent (SGD), by which an intuitive similarity constraint is imposed to update hashing model on the new streaming data with a powerful generalizing ability to the past data. Extensive experiments on three widely-used benchmarks validate the superiority of the proposed SDOH over the state-of-the-art methods in the online retrieval task.
Matrix-sketch-based online hashing methods are inspired by the idea of a "data sketch" @cite_16 , where a small sketch of the data is maintained to preserve the main properties of a large-scale dataset. To this end, Leng et al. @cite_14 proposed Online Sketching Hashing (SketchHash), which employs an efficient variant of SVD decomposition together with PCA-based batch learning on the sketch to learn the hashing weights. A faster version of online sketching hashing (FROSH) was developed in @cite_29 , where independent Subsampled Randomized Hadamard Transforms (SRHT) are applied to different data chunks to make the sketch more compact and accurate, and to accelerate the sketching process.
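The "data sketch" idea referenced here (@cite_16) maintains a small matrix B whose Gram matrix approximates that of the full data stream by periodically shrinking B's singular values. Below is a compact NumPy sketch of that Frequent-Directions-style update; the shrinkage choice and the toy evaluation are illustrative.

```python
import numpy as np

def frequent_directions(rows, ell):
    """Maintain an ell x d sketch B of streaming d-dimensional rows so that
    B^T B approximates A^T A for the full (never stored) data matrix A."""
    B = np.zeros((ell, len(rows[0])))
    for a in rows:
        free = np.where(~B.any(axis=1))[0]
        if len(free) == 0:                                # sketch is full: shrink it
            _, s, Vt = np.linalg.svd(B, full_matrices=False)
            s = np.sqrt(np.maximum(s ** 2 - s[-1] ** 2, 0.0))
            B = s[:, None] * Vt                           # last row becomes zero
            free = np.where(~B.any(axis=1))[0]
        B[free[0]] = a                                    # insert the new row
    return B

rng = np.random.default_rng(0)
A = rng.standard_normal((500, 20))
B = frequent_directions(list(A), ell=8)
print("relative sketch error:",
      np.linalg.norm(A.T @ A - B.T @ B, 2) / np.linalg.norm(A.T @ A, 2))
```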
{ "cite_N": [ "@cite_14", "@cite_29", "@cite_16" ], "mid": [ "1893754589", "2771454759", "2088424151" ], "abstract": [ "Recently, hashing based approximate nearest neighbor (ANN) search has attracted much attention. Extensive new algorithms have been developed and successfully applied to different applications. However, two critical problems are rarely mentioned. First, in real-world applications, the data often comes in a streaming fashion but most of existing hashing methods are batch based models. Second, when the dataset becomes huge, it is almost impossible to load all the data into memory to train hashing models. In this paper, we propose a novel approach to handle these two problems simultaneously based on the idea of data sketching. A sketch of one dataset preserves its major characters but with significantly smaller size. With a small size sketch, our method can learn hash functions in an online fashion, while needs rather low computational complexity and storage space. Extensive experiments on two large scale benchmarks and one synthetic dataset demonstrate the efficacy of the proposed method.", "", "A sketch of a matrix A is another matrix B which is significantly smaller than A but still approximates it well. Finding such sketches efficiently is an important building block in modern algorithms for approximating, for example, the PCA of massive matrices. This task is made more challenging in the streaming model, where each row of the input matrix can only be processed once and storage is severely limited. In this paper we adapt a well known streaming algorithm for approximating item frequencies to the matrix sketching setting. The algorithm receives n rows of a large matrix A e ℜ n x m one after the other in a streaming fashion. It maintains a sketch B ℜ l x m containing only l This gives a streaming algorithm whose error decays proportional to 1 l using O(ml) space. For comparison, random-projection, hashing or sampling based algorithms produce convergence bounds proportional to 1 √l. Sketch updates per row in A require amortized O(ml) operations and the algorithm is perfectly parallelizable. Our experiments corroborate the algorithm's scalability and improved convergence rate. The presented algorithm also stands out in that it is deterministic, simple to implement and elementary to prove." ] }
1905.13382
2947393200
Online hashing has attracted extensive research attention when facing streaming data. Most online hashing methods, learning binary codes based on pairwise similarities of training instances, fail to capture the semantic relationship, and suffer from a poor generalization in large-scale applications due to large variations. In this paper, we propose to model the similarity distributions between the input data and the hashing codes, upon which a novel supervised online hashing method, dubbed as Similarity Distribution based Online Hashing (SDOH), is proposed, to keep the intrinsic semantic relationship in the produced Hamming space. Specifically, we first transform the discrete similarity matrix into a probability matrix via a Gaussian-based normalization to address the extremely imbalanced distribution issue. And then, we introduce a scaling Student t-distribution to solve the challenging initialization problem, and efficiently bridge the gap between the known and unknown distributions. Lastly, we align the two distributions via minimizing the Kullback-Leibler divergence (KL-diverence) with stochastic gradient descent (SGD), by which an intuitive similarity constraint is imposed to update hashing model on the new streaming data with a powerful generalizing ability to the past data. Extensive experiments on three widely-used benchmarks validate the superiority of the proposed SDOH over the state-of-the-art methods in the online retrieval task.
However, existing sketch-based online hashing methods are unsupervised and suffer from low performance due to the lack of supervised labels. SGD-based methods @cite_7 @cite_15 @cite_27 @cite_17 @cite_2 @cite_6 aim to make full use of labels, but they still face practical problems, as discussed above. OKH @cite_7 , AdaptHash @cite_15 , MIHash @cite_17 and BSODH @cite_6 have limited generalization ability, since only pairwise relationships within the current sequential data are considered. As for OSH @cite_27 and HCOH @cite_2 @cite_13 , a well-defined ECOC codebook has to be given in advance, which fails when the size of the codebook is inconsistent with the length of the hash codes.
{ "cite_N": [ "@cite_7", "@cite_6", "@cite_27", "@cite_2", "@cite_15", "@cite_13", "@cite_17" ], "mid": [ "", "2905097026", "2535352129", "2897391546", "2204148968", "2944642470", "2602646780" ], "abstract": [ "", "When facing large-scale image datasets, online hashing serves as a promising solution for online retrieval and prediction tasks. It encodes the online streaming data into compact binary codes, and simultaneously updates the hash functions to renew codes of the existing dataset. To this end, the existing methods update hash functions solely based on the new data batch, without investigating the correlation between such new data and the existing dataset. In addition, existing works update the hash functions using a relaxation process in its corresponding approximated continuous space. And it remains as an open problem to directly apply discrete optimizations in online hashing. In this paper, we propose a novel supervised online hashing method, termed Balanced Similarity for Online Discrete Hashing (BSODH), to solve the above problems in a unified framework. BSODH employs a well-designed hashing algorithm to preserve the similarity between the streaming data and the existing dataset via an asymmetric graph regularization. We further identify the “data-imbalance” problem brought by the constructed asymmetric graph, which restricts the application of discrete optimization in our problem. Therefore, a novel balanced similarity is further proposed, which uses two equilibrium factors to balance the similar and dissimilar weights and eventually enables the usage of discrete optimizations. Extensive experiments conducted on three widely-used benchmarks demonstrate the advantages of the proposed method over the stateof-the-art methods.", "Fast nearest neighbor search is becoming more and more crucial given the advent of large-scale data in many computer vision applications. Hashing approaches provide both fast search mechanisms and compact index structures to address this critical need. In image retrieval problems where labeled training data is available, supervised hashing methods prevail over unsupervised methods. Most state-of-the-art supervised hashing approaches employ batch-learners. Unfortunately, batch-learning strategies may be inefficient when confronted with large datasets. Moreover, with batch-learners, it is unclear how to adapt the hash functions as the dataset continues to grow and new variations appear over time. To handle these issues, we propose OSH: an Online Supervised Hashing technique that is based on Error Correcting Output Codes. We consider a stochastic setting where the data arrives sequentially and our method learns and adapts its hashing functions in a discriminative manner. Our method makes no assumption about the number of possible class labels, and accommodates new classes as they are presented in the incoming data stream. In experiments with three image retrieval benchmarks, our method yields state-of-the-art retrieval performance as measured in Mean Average Precision, while also being orders-of-magnitude faster than competing batch methods for supervised hashing. Also, our method significantly outperforms recently introduced online hashing solutions.", "In recent years, binary code learning, a.k.a. hashing, has received extensive attention in large-scale multimedia retrieval. It aims to encode high-dimensional data points into binary codes, hence the original high-dimensional metric space can be efficiently approximated via Hamming space. 
However, most existing hashing methods adopted offline batch learning, which is not suitable to handle incremental datasets with streaming data or new instances. In contrast, the robustness of the existing online hashing remains as an open problem, while the embedding of supervised semantic information hardly boosts the performance of the online hashing, mainly due to the defect of unknown category numbers in supervised learning. In this paper, we propose an online hashing scheme, termed Hadamard Codebook based Online Hashing (HCOH), which aims to solving the above problems towards robust and supervised online hashing. In particular, we first assign an appropriate high-dimensional binary codes to each class label, which is generated randomly by Hadamard codes. Subsequently, LSH is adopted to reduce the length of such Hadamard codes in accordance with the hash bits, which can adapt the predefined binary codes online, and theoretically guarantee the semantic similarity. Finally, we consider the setting of stochastic data acquisition, which facilitates our method to efficiently learn the corresponding hashing functions via stochastic gradient descend (SGD) online. Notably, the proposed HCOH can be embedded with supervised labels and is not limited to a predefined category number. Extensive experiments on three widely-used benchmarks demonstrate the merits of the proposed scheme over the state-of-the-art methods.", "With the staggering growth in image and video datasets, algorithms that provide fast similarity search and compact storage are crucial. Hashing methods that map the data into Hamming space have shown promise, however, many of these methods employ a batch-learning strategy in which the computational cost and memory requirements may become intractable and infeasible with larger and larger datasets. To overcome these challenges, we propose an online learning algorithm based on stochastic gradient descent in which the hash functions are updated iteratively with streaming data. In experiments with three image retrieval benchmarks, our online algorithm attains retrieval accuracy that is comparable to competing state-of-the-art batch-learning solutions, while our formulation is orders of magnitude faster and being online it is adaptable to the variations of the data. Moreover, our formulation yields improved retrieval performance over a recently reported online hashing technique, Online Kernel Hashing.", "Online image hashing has received increasing research attention recently, which receives large-scale data in a streaming manner to update the hash functions on-the-fly. Its key challenge lies in the difficulty in balancing the learning timeliness and model accuracy. To this end, most works exploit a supervised setting, i.e., using class labels to boost the hashing performance, which defects in two aspects: First, large amount of training batches are required to learn up-to-date hash functions, which however largely increase the learning complexity. Second, strong constraints, e.g., orthogonal or similarity preserving, are used, which are however typically relaxed and lead to large accuracy drop. To handle the above challenges, in this paper, a novel supervised online hashing scheme termed Hadamard Matrix Guided Online Hashing (HMOH) is proposed. Our key innovation lies in the construction and usage of Hadamard matrix, which is an orthogonal binary matrix and is built via Sylvester method. 
To release the need of strong constraints, we regard each column of Hadamard matrix as the target code for each class label, which by nature satisfies several desired properties of hashing codes. To accelerate the online training, the LSH is first adopted to align the length of target code and the to-be-learned binary code. And then, we treat the learning of hash functions as a set of binary classification problems to fit the assigned target code. Finally, we propose to ensemble the learned models in all rounds to maximally preserve the information of past streaming data. The superior accuracy and efficiency of the proposed method are demonstrated through extensive experiments on three widely-used datasets comparing to various state-of-the-art methods.", "Learning-based hashing methods are widely used for nearest neighbor retrieval, and recently, online hashing methods have demonstrated good performance-complexity trade-offs by learning hash functions from streaming data. In this paper, we first address a key challenge for online hashing: the binary codes for indexed data must be recomputed to keep pace with updates to the hash functions. We propose an efficient quality measure for hash functions, based on an information-theoretic quantity, mutual information, and use it successfully as a criterion to eliminate unnecessary hash table updates. Next, we also show how to optimize the mutual information objective using stochastic gradient descent. We thus develop a novel hashing method, MIHash, that can be used in both online and batch settings. Experiments on image retrieval benchmarks (including a 2.5M image dataset) confirm the effectiveness of our formulation, both in reducing hash table recomputations and in learning high-quality hash functions." ] }
1905.13214
2947546116
Over the past several years progress in designing better neural network architectures for visual recognition has been substantial. To help sustain this rate of progress, in this work we propose to reexamine the methodology for comparing network architectures. In particular, we introduce a new comparison paradigm of distribution estimates, in which network design spaces are compared by applying statistical techniques to populations of sampled models, while controlling for confounding factors like network complexity. Compared to current methodologies of comparing point and curve estimates of model families, distribution estimates paint a more complete picture of the entire design landscape. As a case study, we examine design spaces used in neural architecture search (NAS). We find significant statistical differences between recent NAS design space variants that have been largely overlooked. Furthermore, our analysis reveals that the design spaces for standard model families like ResNeXt can be comparable to the more complex ones used in recent NAS work. We hope these insights into distribution analysis will enable more robust progress toward discovering better networks for visual recognition.
General hyperparameter search techniques @cite_42 @cite_33 address the laborious model tuning process in machine learning. A possible approach for comparing networks from two different model families is to first tune their hyperparameters @cite_29 . However, such comparisons can be challenging in practice. Instead, @cite_43 advocates using random search as a strong baseline for hyperparameter search and suggests that it additionally helps improve reproducibility. In our work we propose to directly compare the model distributions (not just their minima).
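A minimal sketch of the methodology discussed here: sample configurations from a design space by random search, record each model's error, and compare whole error distributions (e.g., via empirical distribution functions) rather than only the best configuration from each space. The toy design space and the stand-in evaluation function below are assumptions for illustration.

```python
import numpy as np

def empirical_cdf(errors, grid):
    """Fraction of sampled models with error at or below each grid value."""
    errors = np.asarray(errors)
    return np.array([(errors <= g).mean() for g in grid])

def sample_design_space(rng, n_models, train_and_eval):
    """Random search over a toy design space of (depth, width, lr) configs."""
    errs = []
    for _ in range(n_models):
        cfg = {"depth": int(rng.integers(2, 8)),
               "width": int(2 ** rng.integers(4, 9)),
               "lr": 10 ** rng.uniform(-4, -1)}
        errs.append(train_and_eval(cfg))
    return np.array(errs)

# stand-in for real training: pretend error depends mildly on the config
rng = np.random.default_rng(0)
fake_eval = lambda cfg: 0.3 - 0.01 * cfg["depth"] + rng.normal(0, 0.02)
errors_a = sample_design_space(rng, 50, fake_eval)
errors_b = sample_design_space(rng, 50, lambda c: fake_eval(c) + 0.02)
grid = np.linspace(errors_a.min(), errors_b.max(), 100)
# compare whole distributions, not just the best model from each space
gap = np.abs(empirical_cdf(errors_a, grid) - empirical_cdf(errors_b, grid)).max()
print("max EDF gap (KS-style statistic):", gap)
```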
{ "cite_N": [ "@cite_43", "@cite_29", "@cite_42", "@cite_33" ], "mid": [ "2963873275", "1994197834", "2106411961", "" ], "abstract": [ "Generative adversarial networks (GAN) are a powerful subclass of generative models. Despite a very rich research activity leading to numerous interesting GAN algorithms, it is still very hard to assess which algorithm(s) perform better than others. We conduct a neutral, multi-faceted large-scale empirical study on state-of-the art models and evaluation measures. We find that most models can reach similar scores with enough hyperparameter optimization and random restarts. This suggests that improvements can arise from a higher computational budget and tuning more than fundamental algorithmic changes. To overcome some limitations of the current metrics, we also propose several data sets on which precision and recall can be computed. Our experimental results suggest that future GAN research should be based on more systematic and objective evaluation procedures. Finally, we did not find evidence that any of the tested algorithms consistently outperforms the non-saturating GAN introduced in goodfellow2014generative .", "Recently, several learning algorithms relying on models with deep architectures have been proposed. Though they have demonstrated impressive performance, to date, they have only been evaluated on relatively simple problems such as digit recognition in a controlled environment, for which many machine learning algorithms already report reasonable results. Here, we present a series of experiments which indicate that these models show promise in solving harder learning problems that exhibit many factors of variation. These models are compared with well-established algorithms such as Support Vector Machines and single hidden-layer feed-forward neural networks.", "Several recent advances to the state of the art in image classification benchmarks have come from better configurations of existing techniques rather than novel approaches to feature learning. Traditionally, hyper-parameter optimization has been the job of humans because they can be very efficient in regimes where only a few trials are possible. Presently, computer clusters and GPU processors make it possible to run more trials and we show that algorithmic approaches can find better results. We present hyper-parameter optimization results on tasks of training neural networks and deep belief networks (DBNs). We optimize hyper-parameters using random search and two new greedy sequential methods based on the expected improvement criterion. Random search has been shown to be sufficiently efficient for learning neural networks for several datasets, but we show it is unreliable for training DBNs. The sequential algorithms are applied to the most difficult DBN learning problems from [1] and find significantly better results than the best previously reported. This work contributes novel techniques for making response surface models P(y|x) in which many elements of hyper-parameter assignment (x) are known to be irrelevant given particular values of other elements.", "" ] }
1905.13196
2947319506
In topological data analysis, persistent homology is used to study the "shape of data". Persistent homology computations are completely characterized by a set of intervals called a bar code. It is often said that the long intervals represent the "topological signal" and the short intervals represent "noise". We give evidence to dispute this thesis, showing that the short intervals encode geometric information. Specifically, we prove that persistent homology detects the curvature of disks from which points have been sampled. We describe a general computational framework for solving inverse problems using the average persistence landscape, a continuous mapping from metric spaces with a probability measure to a Hilbert space. In the present application, the average persistence landscapes of points sampled from disks of constant curvature results in a path in this Hilbert space which may be learned using standard tools from statistical and machine learning.
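Since the average persistence landscape is central to the pipeline described above, the following NumPy sketch evaluates the first few landscape functions of a barcode on a grid: the k-th landscape at t is the k-th largest "tent" value max(0, min(t - b, d - t)) over the intervals (b, d), and averaging such arrays over samples gives the average persistence landscape. The toy barcode is illustrative.

```python
import numpy as np

def persistence_landscape(barcode, grid, k_max=3):
    """Evaluate the first k_max persistence landscape functions on a grid.

    Each interval (b, d) contributes the tent max(0, min(t - b, d - t));
    lambda_k(t) is the k-th largest tent value at t."""
    barcode = np.asarray(barcode, dtype=float)
    tents = np.maximum(0.0, np.minimum(grid[None, :] - barcode[:, :1],
                                       barcode[:, 1:] - grid[None, :]))
    tents_sorted = -np.sort(-tents, axis=0)               # descending per grid point
    k = min(k_max, tents_sorted.shape[0])
    return tents_sorted[:k]

grid = np.linspace(0.0, 2.0, 201)
bars = [(0.0, 1.5), (0.2, 0.9), (0.8, 1.1)]               # (birth, death) pairs
lams = persistence_landscape(bars, grid)
print(lams.shape, lams[0].max())                          # lambda_1 peaks at (1.5 - 0.0) / 2 = 0.75
```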
Persistence landscapes have been used to study the geometry of microstructures @cite_15 , protein conformations @cite_23 , and financial time series @cite_29 . Average persistence landscapes and average death vectors were used to detect differences in images of leaves in @cite_31 . B. Schweinhart recently proved that the persistent homology of random samples may be used to determine the fractal dimension of certain metric spaces @cite_5 .
{ "cite_N": [ "@cite_31", "@cite_29", "@cite_23", "@cite_5", "@cite_15" ], "mid": [ "2798779428", "2597495304", "2408772186", "2886824112", "2267374815" ], "abstract": [ "Statistical analysis on object data presents many challenges. Basic summaries such as means and variances are difficult to compute. We apply ideas from topology to study object data. We present a framework for using death vectors and persistence landscapes to vectorize object data and perform statistical analysis. We apply this method to some common leaf images that were previously shown to be challenging to compare using a 3D shape techniques. Surprisingly, the most persistent features are shown to be “topological noise” and the statistical analysis depends on the less persistent features which we refer to as the “geometric signal”. We also describe the first steps to a new approach to using topology for object data analysis, which applies topology to distributions on object spaces. We introduce a new Frechet-Morse function technique for probability distribution on a compact object space, extending the Frechet means lo a larger number of location parameters, including Frechet antimeans. An example of 3D data analysis to distinguish two flowers using the new location parameters associated with a Veronese-Whitney (VW) embedding of random projective shapes of 3D configurations extracted from a set of pairs of their digital camera images is also given here.", "We explore the evolution of daily returns of four major US stock market indices during the technology crash of 2000, and the financial crisis of 2007-2009. Our methodology is based on topological data analysis (TDA). We use persistence homology to detect and quantify topological patterns that appear in multidimensional time series. Using a sliding window, we extract time-dependent point cloud data sets, to which we associate a topological space. We detect transient loops that appear in this space, and we measure their persistence. This is encoded in real-valued functions referred to as a 'persistence landscapes'. We quantify the temporal changes in persistence landscapes via their @math -norms. We test this procedure on multidimensional time series generated by various non-linear and non-equilibrium models. We find that, in the vicinity of financial meltdowns, the @math -norms exhibit strong growth prior to the primary peak, which ascends during a crash. Remarkably, the average spectral density at low frequencies of the time series of @math -norms of the persistence landscapes demonstrates a strong rising trend for 250 trading days prior to either dotcom crash on 03 10 2000, or to the Lehman bankruptcy on 09 15 2008. Our study suggests that TDA provides a new type of econometric analysis, which goes beyond the standard statistical measures. The method can be used to detect early warning signals of imminent market crashes. We believe that this approach can be used beyond the analysis of financial time series presented here.", "Persistent homology captures the evolution of topological features of a model as a parameter changes. The most commonly used summary statistics of persistent homology are the barcode and the persistence diagram. Another summary statistic, the persistence landscape, was recently introduced by Bubenik. It is a functional summary, so it is easy to calculate sample means and variances, and it is straightforward to construct various test statistics. 
Implementing a permutation test we detect conformational changes between closed and open forms of the maltose-binding protein, a large biomolecule consisting of 370 amino acid residues. Furthermore, persistence landscapes can be applied to machine learning methods. A hyperplane from a support vector machine shows the clear separation between the closed and open proteins conformations. Moreover, because our approach captures dynamical properties of the protein our results may help in identifying residues susceptible to ligand binding; we show that the majority of active site residues and allosteric pathway residues are located in the vicinity of the most persistent loop in the corresponding filtered Vietoris-Rips complex. This finding was not observed in the classical anisotropic network model.", "We study the asymptotic behavior of the persistent homology of i.i.d. samples from a @math -Ahlfors regular measure --- one that satisfies uniform bounds of the form for some @math all @math in the support of @math and all sufficiently small @math Our main result is that if @math are sampled from a @math -Ahlfors regular measure on @math and @math denotes the @math -weight of the minimal spanning tree on @math [E_ (x_1, ,x_n )= e T (x_1, ,x_n ) |e|^ ] then [E_ (x_1, ,x_n ) n^ d- d ] with high probability as @math We also prove theorems about the asymptotic behavior of weighted sums defined in terms of higher-dimensional persistent homology. As an application, we exhibit hypotheses under which the fractal dimension of a measure can be computed from the persistent homology of i.i.d. samples from that space, in a manner similar to that proposed in the experimental work of (2018).", "Phase separation mechanisms can produce a variety of complicated and intricate microstructures, which often can be difficult to characterize in a quantitative way. In recent years, a number of novel topological metrics for microstructures have been proposed, which measure essential connectivity information and are based on techniques from algebraic topology. Such metrics are inherently computable using computational homology, provided the microstructures are discretized using a thresholding process. However, while in many cases the thresholding is straightforward, noise and measurement errors can lead to misleading metric values. In such situations, persistence landscapes have been proposed as a natural topology metric. Common to all of these approaches is the enormous data reduction, which passes from complicated patterns to discrete information. It is therefore natural to wonder what type of information is actually retained by the topology. In the present paper, we demonstrate that averaged persistence landscapes can be used to recover central system information in the Cahn-Hilliard theory of phase separation. More precisely, we show that topological information of evolving microstructures alone suffices to accurately detect both concentration information and the actual decomposition stage of a data snapshot. Considering that persistent homology only measures discrete connectivity information, regardless of the size of the topological features, these results indicate that the system parameters in a phase separation process affect the topology considerably more than anticipated. We believe that the methods discussed in this paper could provide a valuable tool for relating experimental data to model simulations." ] }
1905.13388
2946900496
Deep 3-dimensional (3D) Convolutional Network (ConvNet) has shown promising performance on video recognition tasks because of its powerful spatio-temporal information fusion ability. However, the extremely intensive requirements on memory access and computing power prohibit it from being used in resource-constrained scenarios, such as portable and edge devices. So in this paper, we first propose a two-stage Fully Separable Block (FSB) to significantly compress the model sizes of 3D ConvNets. Then a feature enhancement approach named Temporal Residual Gradient (TRG) is developed to improve the performance of compressed model on video tasks, which provides higher accuracy, faster convergency and better robustness. Moreover, in order to further decrease the computing workload, we propose a hybrid Fast Algorithm (hFA) to drastically reduce the computation complexity of convolutions. These methods are effectively combined to design a light-weight and efficient ConvNet for video recognition tasks. Experiments on the popular dataset report 2.3x compression rate, 3.6x workload reduction, and 6.3 top-1 accuracy gain, over the state-of-the-art SlowFast model, which is already a highly compact model. The proposed methods also show good adaptability on traditional 3D ConvNet, demonstrating 7.4x more compact model, 11.0x less workload, and 3.0 higher accuracy
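To see the kind of saving a fully separable factorization targets, the sketch below counts weights for a standard 3x3x3 convolution versus one assumed depthwise-spatial + depthwise-temporal + pointwise decomposition. This decomposition is an illustrative assumption, not the exact two-stage FSB proposed in the paper; per-output-position multiply-accumulate counts scale the same way.

```python
def conv3d_params(c_in, c_out, kt, kh, kw, groups=1):
    """Weight count of a 3D convolution (bias ignored)."""
    return (c_in // groups) * c_out * kt * kh * kw

c = 128
full = conv3d_params(c, c, 3, 3, 3)                        # standard 3x3x3 convolution
separable = (conv3d_params(c, c, 1, 3, 3, groups=c)        # depthwise spatial
             + conv3d_params(c, c, 3, 1, 1, groups=c)      # depthwise temporal
             + conv3d_params(c, c, 1, 1, 1))               # pointwise channel mixing
print(full, separable, round(full / separable, 1))          # roughly 25x fewer weights here
```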
The application of fast algorithms to ConvNets was first proposed in @cite_20 @cite_21 . Lavin et al. @cite_5 then applied another fast approach, the Winograd algorithm (WinoA) @cite_17 , to 2D convolutions, greatly reducing the arithmetic cost of the convolutional layers. It has been integrated into the cuDNN library @cite_15 as a built-in method, which demonstrates the superiority of WinoA for convolution implementations. Recently, a 3D WinoA was also proposed to decrease the computational complexity of 3D ConvNets @cite_0 , although it can only be used for 3D kernels with the same scale along the temporal and spatial dimensions.
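For reference, the minimal filtering idea behind WinoA can be checked in a few lines: F(2,3) produces two outputs of a 3-tap filter with four multiplications instead of six, using fixed transform matrices. The sketch below verifies it against direct convolution.

```python
import numpy as np

# Winograd minimal filtering F(2, 3): two outputs of a 3-tap filter
# with 4 multiplications instead of 6, via fixed transform matrices.
BT = np.array([[1, 0, -1, 0],
               [0, 1,  1, 0],
               [0, -1, 1, 0],
               [0, 1,  0, -1]], dtype=float)
G = np.array([[1.0,  0.0, 0.0],
              [0.5,  0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0,  0.0, 1.0]])
AT = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=float)

def winograd_f23(d, g):
    """d: 4 input samples, g: 3 filter taps -> 2 valid-convolution outputs."""
    return AT @ ((G @ g) * (BT @ d))        # elementwise product = the 4 multiplies

d = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([0.5, -1.0, 2.0])
direct = np.array([d[0]*g[0] + d[1]*g[1] + d[2]*g[2],
                   d[1]*g[0] + d[2]*g[1] + d[3]*g[2]])
print(winograd_f23(d, g), direct)           # identical results
```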
{ "cite_N": [ "@cite_21", "@cite_0", "@cite_5", "@cite_15", "@cite_20", "@cite_17" ], "mid": [ "1789336918", "2789246071", "2172654076", "1667652561", "2963340555", "" ], "abstract": [ "We examine the performance profile of Convolutional Neural Network training on the current generation of NVIDIA Graphics Processing Units. We introduce two new Fast Fourier Transform convolution implementations: one based on NVIDIA's cuFFT library, and another based on a Facebook authored FFT implementation, fbfft, that provides significant speedups over cuFFT (over 1.5x) for whole CNNs. Both of these convolution implementations are available in open source, and are faster than NVIDIA's cuDNN implementation for many common convolutional layers (up to 23.5x for some synthetic kernel configurations). We discuss different performance regimes of convolutions, comparing areas where straightforward time domain convolutions outperform Fourier frequency domain convolutions. Details on algorithmic applications of NVIDIA GPU hardware specifics in the implementation of fbfft are also provided.", "Three-dimensional convolutional neural networks (3D CNNs) are used efficiently in many computer vision applications. Most previous work in this area has concentrated only on designing and optimizing accelerators for 2D CNN, with few attempts made to accelerate 3D CNN on FPGA. We find accelerating 3D CNNs on FPGA to be challenge due to their high computational complexity and storage demands. More importantly, although the computation patterns of 2D and 3D CNNs are analogous, the conventional approaches adopted for accelerating 2D CNNs may be unfit for 3D CNN acceleration. In this paper, in order to accelerate 2D and 3D CNNs using a uniform framework, we propose a uniform template-based architecture that uses templates based on the Winograd algorithm to ensure fast development of 2D and 3D CNN accelerators. Furthermore, we also develop a uniform analytical model to facilitate efficient design space explorations of 2D and 3D CNN accelerators based on our architecture. Finally, we demonstrate the effectiveness of the template-based architecture by implementing accelerators for real-life 2D and 3D CNNs (VGG16 and C3D) on multiple FPGA platforms. On S2C VUS440, we achieve up to 1.13 TOPS and 1.11 TOPS under low resource utilization for VGG16 and C3D, respectively. End-to-end comparisons with CPU and GPU solutions demonstrate that our implementation of C3D achieves gains of up to 13x and 60x in performance and energy relative to a CPU solution, and a 6.4x energy efficiency gain over a GPU solution.", "Deep convolutional neural networks take GPU-days of computation to train on large data sets. Pedestrian detection for self driving cars requires very low latency. Image recognition for mobile phones is constrained by limited processing resources. The success of convolutional neural networks in these situations is limited by how fast we can compute them. Conventional FFT based convolution is fast for large filters, but state of the art convolutional neural networks use small, 3 3 filters. We introduce a new class of fast algorithms for convolutional neural networks using Winograd's minimal filtering algorithms. The algorithms compute minimal complexity convolution over small tiles, which makes them fast with small filters and small batch sizes. 
We benchmark a GPU implementation of our algorithm with the VGG network and show state of the art throughput at batch sizes from 1 to 64.", "We present a library that provides optimized implementations for deep learning primitives. Deep learning workloads are computationally intensive, and optimizing the kernels of deep learning workloads is difficult and time-consuming. As parallel architectures evolve, kernels must be reoptimized for new processors, which makes maintaining codebases difficult over time. Similar issues have long been addressed in the HPC community by libraries such as the Basic Linear Algebra Subroutines (BLAS) [2]. However, there is no analogous library for deep learning. Without such a library, researchers implementing deep learning workloads on parallel processors must create and optimize their own implementations of the main computational kernels, and this work must be repeated as new parallel processors emerge. To address this problem, we have created a library similar in intent to BLAS, with optimized routines for deep learning workloads. Our implementation contains routines for GPUs, and similarly to the BLAS library, could be implemented for other platforms. The library is easy to integrate into existing frameworks, and provides optimized performance and memory usage. For example, integrating cuDNN into Caffe, a popular framework for convolutional networks, improves performance by 36 on a standard model while also reducing memory consumption.", "Abstract: Convolutional networks are one of the most widely employed architectures in computer vision and machine learning. In order to leverage their ability to learn complex functions, large amounts of data are required for training. Training a large convolutional network to produce state-of-the-art results can take weeks, even when using modern GPUs. Producing labels using a trained network can also be costly when dealing with web-scale datasets. In this work, we present a simple algorithm which accelerates training and inference by a significant factor, and can yield improvements of over an order of magnitude compared to existing state-of-the-art implementations. This is done by computing convolutions as pointwise products in the Fourier domain while reusing the same transformed feature map many times. The algorithm is implemented on a GPU architecture and addresses a number of related challenges.", "" ] }
1905.13394
2951921227
Robust road detection is a key challenge in safe autonomous driving. Recently, with the rapid development of 3D sensors, more and more researchers are trying to fuse information across different sensors to improve the performance of road detection. Although many successful works have been achieved in this field, methods for data fusion under deep learning framework is still an open problem. In this paper, we propose a Siamese deep neural network based on FCN-8s to detect road region. Our method uses data collected from a monocular color camera and a Velodyne-64 LiDAR sensor. We project the LiDAR point clouds onto the image plane to generate LiDAR images and feed them into one of the branches of the network. The RGB images are fed into another branch of our proposed network. The feature maps that these two branches extract in multiple scales are fused before each pooling layer, via padding additional fusion layers. Extensive experimental results on public dataset KITTI ROAD demonstrate the effectiveness of our proposed approach.
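The projection of LiDAR point clouds onto the image plane mentioned in the abstract amounts to a standard extrinsic transform followed by the pinhole camera model. The sketch below illustrates it with assumed (toy) calibration matrices; the KITTI-specific rectification matrices are omitted.

```python
import numpy as np

def project_lidar_to_image(points, T_cam_lidar, K, img_h, img_w):
    """Project 3-D LiDAR points into the camera image plane.

    points: (N, 3) xyz in the LiDAR frame; T_cam_lidar: 4x4 extrinsics;
    K: 3x3 camera intrinsics. Returns pixel coordinates and depths of
    the points that land inside the image and in front of the camera."""
    pts_h = np.hstack([points, np.ones((len(points), 1))])      # homogeneous coords
    cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    cam = cam[cam[:, 2] > 0.1]                                   # keep points in front
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    inside = ((uv[:, 0] >= 0) & (uv[:, 0] < img_w) &
              (uv[:, 1] >= 0) & (uv[:, 1] < img_h))
    return uv[inside], cam[inside, 2]                            # pixels, depths

# toy calibration (identity rotation, camera offset from the LiDAR origin)
K = np.array([[700.0, 0.0, 620.0], [0.0, 700.0, 190.0], [0.0, 0.0, 1.0]])
T = np.eye(4); T[1, 3] = 1.5
pts = np.random.default_rng(0).uniform([-10, -2, 2], [10, 2, 40], size=(1000, 3))
uv, depth = project_lidar_to_image(pts, T, K, img_h=375, img_w=1242)
print(uv.shape, depth.min())
```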
Our work is efficient and shows performance on par with, or even better than, other works. It is a variation of the Siamese network originally proposed by Y. LeCun @cite_58 . Hinton, in @cite_35 , used a Siamese architecture to build a binary classifier over two sets of faces. The key idea of this architecture is to learn a classifier from two different inputs that describe a single representation, which inspired the fusion mechanism in our work. There are three main contributions: first, a new network based on FCN-8s that embeds a Siamese structure in the encoder to fuse camera and LiDAR data; second, the use of sparse LiDAR images as input instead of dense ones, which achieves similar performance while dramatically reducing the computational load of the processing pipeline; and last, an evaluation of our work on a public dataset.
{ "cite_N": [ "@cite_35", "@cite_58" ], "mid": [ "1665214252", "2171590421" ], "abstract": [ "Restricted Boltzmann machines were developed using binary stochastic hidden units. These can be generalized by replacing each binary unit by an infinite number of copies that all have the same weights but have progressively more negative biases. The learning and inference rules for these \"Stepped Sigmoid Units\" are unchanged. They can be approximated efficiently by noisy, rectified linear units. Compared with binary units, these units learn features that are better for object recognition on the NORB dataset and face verification on the Labeled Faces in the Wild dataset. Unlike binary units, rectified linear units preserve information about relative intensities as information travels through multiple layers of feature detectors.", "This paper describes the development of an algorithm for verification of signatures written on a touch-sensitive pad. The signature verification algorithm is based on an artificial neural network. The novel network presented here, called a “Siamese” time delay neural network, consists of two identical networks joined at their output. During training the network learns to measure the similarity between pairs of signatures. When used for verification, only one half of the Siamese network is evaluated. The output of this half network is the feature vector for the input signature. Verification consists of comparing this feature vector with a stored feature vector for the signer. Signatures closer than a chosen threshold to this stored representation are accepted, all other signatures are rejected as forgeries. System performance is illustrated with experiments performed in the laboratory." ] }
1905.13288
2946859542
Traditional structured prediction models try to learn the conditional likelihood, i.e., p(y|x), to capture the relationship between the structured output y and the input features x. For many models, computing the likelihood is intractable. These models are therefore hard to train, requiring the use of surrogate objectives or variational inference to approximate likelihood. In this paper, we propose conditional Glow (c-Glow), a conditional generative flow for structured output learning. C-Glow benefits from the ability of flow-based models to compute p(y|x) exactly and efficiently. Learning with c-Glow does not require a surrogate objective or performing inference during training. Once trained, we can directly and efficiently generate conditional samples to do structured prediction. We evaluate this approach on different structured prediction tasks and find c-Glow's structured outputs comparable in quality with state-of-the-art deep structured prediction approaches.
Normalizing flows are neural networks constructed from fully invertible components. The invertibility of the resulting network provides various mathematical benefits. Normalizing flows have been successfully used to build likelihood-based deep generative models @cite_20 @cite_33 @cite_0 and to improve variational approximations @cite_19 @cite_15 . Autoregressive flows @cite_15 @cite_9 @cite_22 @cite_34 condition each affine transformation on all previous variables, which guarantees an invertible transformation with a triangular Jacobian matrix. Continuous normalizing flows @cite_25 @cite_32 define the transformation function using ordinary differential equations. While most normalizing flow models define generative models, radial flows have also been developed to model univariate conditional probabilities. Most related to our approach are flow-based generative models for complex outputs. The flow-based model NICE @cite_20 was first proposed for modeling complex high-dimensional densities. Real-NVP @cite_33 later improved the expressiveness of NICE by adding more flexible coupling layers, and the Glow model @cite_0 further improved the performance of such approaches by incorporating new invertible layers. Most recently, Flow++ @cite_18 improved generative flows with variational dequantization and architecture design, and proposed new invertible layers for flow-based models.
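To make the coupling-layer mechanics concrete, here is a NumPy sketch of one conditional affine coupling step and the exact conditional log-likelihood it yields via the change-of-variables formula: the Jacobian is triangular, so its log-determinant is simply the sum of the predicted log-scales. The tiny linear "conditioning network" is an illustrative stand-in, not c-Glow's actual networks.

```python
import numpy as np

def conditional_affine_coupling(y, x, net):
    """One conditional coupling layer: split y, transform the second half
    with a scale/shift predicted from (first half of y, condition x).

    Returns z (same shape as y) and the exact log |det dz/dy|, which is
    the sum of the log-scales because the Jacobian is triangular."""
    d = y.shape[-1] // 2
    y1, y2 = y[..., :d], y[..., d:]
    log_s, t = net(np.concatenate([y1, x], axis=-1))      # conditioning network
    z2 = y2 * np.exp(log_s) + t
    return np.concatenate([y1, z2], axis=-1), log_s.sum(axis=-1)

# toy "network": a fixed linear map producing (log_s, t), each of width d
rng = np.random.default_rng(0)
d, dx = 2, 3
W = 0.1 * rng.standard_normal((d + dx, 2 * d))
net = lambda h: np.split(h @ W, 2, axis=-1)

y = rng.standard_normal((5, 2 * d))                        # structured outputs
x = rng.standard_normal((5, dx))                           # conditioning inputs
z, logdet = conditional_affine_coupling(y, x, net)
log_pz = -0.5 * np.sum(z ** 2 + np.log(2 * np.pi), axis=-1)  # standard normal base density
print("exact log p(y|x) per sample:", log_pz + logdet)
```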
{ "cite_N": [ "@cite_18", "@cite_33", "@cite_22", "@cite_9", "@cite_32", "@cite_0", "@cite_19", "@cite_15", "@cite_34", "@cite_25", "@cite_20" ], "mid": [ "2951682695", "2409550820", "2796426718", "2963047245", "2895434480", "2963139417", "299440670", "2587284713", "2950838571", "2963755523", "1583912456" ], "abstract": [ "Flow-based generative models are powerful exact likelihood models with efficient sampling and inference. Despite their computational efficiency, flow-based models generally have much worse density modeling performance compared to state-of-the-art autoregressive models. In this paper, we investigate and improve upon three limiting design choices employed by flow-based models in prior work: the use of uniform noise for dequantization, the use of inexpressive affine flows, and the use of purely convolutional conditioning networks in coupling layers. Based on our findings, we propose Flow++, a new flow-based model that is now the state-of-the-art non-autoregressive model for unconditional density estimation on standard image benchmarks. Our work has begun to close the significant performance gap that has so far existed between autoregressive models and flow-based models. Our implementation is available at this https URL", "Unsupervised learning of probabilistic models is a central yet challenging problem in machine learning. Specifically, designing models with tractable learning, sampling, inference and evaluation is crucial in solving this task. We extend the space of such models using real-valued non-volume preserving (real NVP) transformations, a set of powerful invertible and learnable transformations, resulting in an unsupervised learning algorithm with exact log-likelihood computation, exact sampling, exact inference of latent variables, and an interpretable latent space. We demonstrate its ability to model natural images on four datasets through sampling, log-likelihood evaluation and latent variable manipulations.", "Normalizing flows and autoregressive models have been successfully combined to produce state-of-the-art results in density estimation, via Masked Autoregressive Flows (MAF), and to accelerate state-of-the-art WaveNet-based speech synthesis to 20x faster than real-time, via Inverse Autoregressive Flows (IAF). We unify and generalize these approaches, replacing the (conditionally) affine univariate transformations of MAF IAF with a more general class of invertible univariate transformations expressed as monotonic neural networks. We demonstrate that the proposed neural autoregressive flows (NAF) are universal approximators for continuous probability distributions, and their greater expressivity allows them to better capture multimodal target distributions. Experimentally, NAF yields state-of-the-art performance on a suite of density estimation tasks and outperforms IAF in variational autoencoders trained on binarized MNIST.", "Autoregressive models are among the best performing neural density estimators. We describe an approach for increasing the flexibility of an autoregressive model, based on modelling the random numbers that the model uses internally when generating data. By constructing a stack of autoregressive models, each modelling the random numbers of the next model in the stack, we obtain a type of normalizing flow suitable for density estimation, which we call Masked Autoregressive Flow. This type of flow is closely related to Inverse Autoregressive Flow and is a generalization of Real NVP. 
Masked Autoregressive Flow achieves state-of-the-art performance in a range of general-purpose density estimation tasks.", "A promising class of generative models maps points from a simple distribution to a complex distribution through an invertible neural network. Likelihood-based training of these models requires restricting their architectures to allow cheap computation of Jacobian determinants. Alternatively, the Jacobian trace can be used if the transformation is specified by an ordinary differential equation. In this paper, we use Hutchinson's trace estimator to give a scalable unbiased estimate of the log-density. The result is a continuous-time invertible generative model with unbiased density estimation and one-pass sampling, while allowing unrestricted neural network architectures. We demonstrate our approach on high-dimensional density estimation, image generation, and variational inference, achieving the state-of-the-art among exact likelihood methods with efficient sampling.", "Flow-based generative models are conceptually attractive due to tractability of the exact log-likelihood, tractability of exact latent-variable inference, and parallelizability of both training and synthesis. In this paper we propose Glow, a simple type of generative flow using invertible 1x1 convolution. Using our method we demonstrate a significant improvement in log-likelihood and qualitative sample quality. Perhaps most strikingly, we demonstrate that a generative model optimized towards the plain log-likelihood objective is capable of efficient synthesis of large and subjectively realistic-looking images.", "The choice of approximate posterior distribution is one of the core problems in variational inference. Most applications of variational inference employ simple families of posterior approximations in order to allow for efficient inference, focusing on mean-field or other simple structured approximations. This restriction has a significant impact on the quality of inferences made using variational methods. We introduce a new approach for specifying flexible, arbitrarily complex and scalable approximate posterior distributions. Our approximations are distributions constructed through a normalizing flow, whereby a simple initial density is transformed into a more complex one by applying a sequence of invertible transformations until a desired level of complexity is attained. We use this view of normalizing flows to develop categories of finite and infinitesimal flows and provide a unified view of approaches for constructing rich posterior approximations. We demonstrate that the theoretical advantages of having posteriors that better match the true posterior, combined with the scalability of amortized variational approaches, provides a clear improvement in performance and applicability of variational inference.", "The framework of normalizing flows provides a general strategy for flexible variational inference of posteriors over latent variables. We propose a new type of normalizing flow, inverse autoregressive flow (IAF), that, in contrast to earlier published flows, scales well to high-dimensional latent spaces. The proposed flow consists of a chain of invertible transformations, where each transformation is based on an autoregressive neural network. In experiments, we show that IAF significantly improves upon diagonal Gaussian approximate posteriors. 
In addition, we demonstrate that a novel type of variational autoencoder, coupled with IAF, is competitive with neural autoregressive models in terms of attained log-likelihood on natural images, while allowing significantly faster synthesis.", "Normalizing flows are a powerful class of generative models for continuous random variables, showing both strong model flexibility and the potential for non-autoregressive generation. These benefits are also desired when modeling discrete random variables such as text, but directly applying normalizing flows to discrete sequences poses significant additional challenges. We propose a VAE-based generative model which jointly learns a normalizing flow-based distribution in the latent space and a stochastic mapping to an observed discrete space. In this setting, we find that it is crucial for the flow-based distribution to be highly multimodal. To capture this property, we propose several normalizing flow architectures to maximize model flexibility. Experiments consider common discrete sequence tasks of character-level language modeling and polyphonic music generation. Our results indicate that an autoregressive flow-based model can match the performance of a comparable autoregressive baseline, and a non-autoregressive flow-based model can improve generation speed with a penalty to performance.", "We introduce a new family of deep neural network models. Instead of specifying a discrete sequence of hidden layers, we parameterize the derivative of the hidden state using a neural network. The output of the network is computed using a blackbox differential equation solver. These continuous-depth models have constant memory cost, adapt their evaluation strategy to each input, and can explicitly trade numerical precision for speed. We demonstrate these properties in continuous-depth residual networks and continuous-time latent variable models. We also construct continuous normalizing flows, a generative model that can train by maximum likelihood, without partitioning or ordering the data dimensions. For training, we show how to scalably backpropagate through any ODE solver, without access to its internal operations. This allows end-to-end training of ODEs within larger models.", "We propose a deep learning framework for modeling complex high-dimensional densities called Non-linear Independent Component Estimation (NICE). It is based on the idea that a good representation is one in which the data has a distribution that is easy to model. For this purpose, a non-linear deterministic transformation of the data is learned that maps it to a latent space so as to make the transformed data conform to a factorized distribution, i.e., resulting in independent latent variables. We parametrize this transformation so that computing the Jacobian determinant and inverse transform is trivial, yet we maintain the ability to learn complex non-linear transformations, via a composition of simple building blocks, each based on a deep neural network. The training criterion is simply the exact log-likelihood, which is tractable. Unbiased ancestral sampling is also easy. We show that this approach yields good generative models on four image datasets and can be used for inpainting." ] }
1905.13308
2948014272
Interpretable representations of data are useful for testing a hypothesis or to distinguish between multiple potential hypotheses about the data. In contrast, applied machine learning, and specifically deep learning (DL), is often used in contexts where performance is valued over interpretability. Indeed, deep networks (DNs) are often treated as black boxes'', and it is not well understood what and how they learn from a given dataset. This lack of understanding seriously hinders adoption of DNs as data analysis tools in science and poses numerous research questions. One problem is that current deep learning research datasets either have very little hierarchical structure or are too complex for their structure to be analyzed, impeding precise predictions of hierarchical representations. To address this gap, we present a benchmark dataset with known hierarchical and compositional structure and a set of methods for performing hypothesis-driven data analysis using DNs. The Hangul Fonts Dataset is composed of 35 fonts, each with 11,172 written syllables consisting of 19 initial consonants, 21 medial vowels, and 28 final consonants. The rules for combining and modifying individual Hangul characters into blocks can be encoded, with translation, scaling, and style variation that depend on precise block content, as well as naturalistic variation across fonts. Thus, the Hangul Fonts Dataset will provide an intermediate complexity dataset with well-defined, hierarchical features to interrogate learned representations. We first present a summary of the structure of the dataset. Using a set of unsupervised and supervised methods, we find that deep network representations contain structure related to the geometrical hierarchy of the characters. Our results lay the foundation for a better understanding of what deep networks learn from complex, structured datasets.
Deep networks can learn feature hierarchies, wherein features at higher levels of the hierarchy are formed by composing lower-level features. The hierarchical multiscale RNN captures latent hierarchical structure on two tasks--character-level language modelling and handwriting sequence generation--by encoding temporal dependencies at different timescales using a novel update mechanism @cite_4 . Deep networks have also been shown to learn an articulatory hierarchy when trained on neural data recorded during spoken speech syllables. Hangul characters are formed from a hierarchy of atoms, glyphs, and geometric structure, which deep networks can be trained to learn.
{ "cite_N": [ "@cite_4" ], "mid": [ "2510842514" ], "abstract": [ "Learning both hierarchical and temporal representation has been among the long-standing challenges of recurrent neural networks. Multiscale recurrent neural networks have been considered as a promising approach to resolve this issue, yet there has been a lack of empirical evidence showing that this type of models can actually capture the temporal dependencies by discovering the latent hierarchical structure of the sequence. In this paper, we propose a novel multiscale approach, called the hierarchical multiscale recurrent neural networks, which can capture the latent hierarchical structure in the sequence by encoding the temporal dependencies with different timescales using a novel update mechanism. We show some evidence that our proposed multiscale architecture can discover underlying hierarchical structure in the sequences without using explicit boundary information. We evaluate our proposed model on character-level language modelling and handwriting sequence modelling." ] }
1905.13448
2947411092
Captioning has attracted much attention in image and video understanding while little work examines audio captioning. This paper contributes a manually-annotated dataset on car scene, in extension to a previously published hospital audio captioning dataset. An encoder-decoder model with pretrained word embeddings and additional sentence loss is proposed. This current model can accelerate the training process and generate semantically correct but unseen unique sentences. We test the model on the current car dataset, previous Hospital Dataset and the Joint Dataset, indicating its generalization capability across different scenes. Further, we make an effort to provide a better objective evaluation metric, namely the BERT similarity score. It compares the semantic-level similarity and compensates for drawbacks of N-gram based metrics like BLEU, namely high scores for word-similar sentences. This new metric demonstrates higher correlation with human evaluation. However, though detailed audio captions can now be automatically generated, human annotations still outperform model captions in many aspects.
Image and video captioning have witnessed promising improvements recently. The development of sequence-to-sequence models enabled well-performing video captioning models that use only temporal image information @cite_2 . Later, attention mechanisms were used to fuse audio with video information and to assign different importance to time frames @cite_10 @cite_13 @cite_9 . Shen @cite_6 generated multiple captions at different levels of detail using temporal attention.
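As a generic illustration of such temporal attention (a standard formulation, not the exact one used in the cited models): given frame features $f_1,\dots,f_T$ and the decoder state $h_t$ at word step $t$, the attended context is
\[ \alpha_{t,i} \;=\; \frac{\exp\big(e(h_t, f_i)\big)}{\sum_{j=1}^{T} \exp\big(e(h_t, f_j)\big)}, \qquad c_t \;=\; \sum_{i=1}^{T} \alpha_{t,i}\, f_i , \]
where $e(\cdot,\cdot)$ is a learned scoring function; multimodal variants compute separate contexts for visual and audio features and fuse them before predicting the next word.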
{ "cite_N": [ "@cite_10", "@cite_9", "@cite_6", "@cite_2", "@cite_13" ], "mid": [ "2584992898", "2786670585", "2607119937", "2609138599", "2902469875" ], "abstract": [ "Current methods for video description are based on encoder-decoder sentence generation using recurrent neural networks (RNNs). Recent work has demonstrated the advantages of integrating temporal attention mechanisms into these models, in which the decoder network predicts each word in the description by selectively giving more weight to encoded features from specific time frames. Such methods typically use two different types of features: image features (from an object classification model), and motion features (from an action recognition model), combined by naive concatenation in the model input. Because different feature modalities may carry task-relevant information at different times, fusing them by naive concatenation may limit the model's ability to dynamically determine the relevance of each type of feature to different parts of the description. In this paper, we incorporate audio features in addition to the image and motion features. To fuse these three modalities, we introduce a multimodal attention model that can selectively utilize features from different modalities for each word in the output description. Combining our new multimodal attention model with standard temporal attention outperforms state-of-the-art methods on two standard datasets: YouTube2Text and MSR-VTT.", "Video captioning has been widely researched. Most related work takes into account only visual content in generating descriptions. However, auditory content such as human speech or environmental sounds contains rich information for describing scenes, but has yet to be widely explored for video captions. Here, we experiment with different ways to use this auditory content in videos, and demonstrate improved caption generation in terms of popular evaluation methods such as BLEU, CIDEr, and METEOR. We also measure the semantic similarities between generated captions and human-provided ground truth using sentence embeddings, and find that good use of multi-modal contents helps the machine to generate captions that are more semantically related to the ground truth. When analyzing the generated sentences, we find some ambiguous situations for which visual-only models yield incorrect results but that are resolved by approaches that take into account auditory cues.", "This paper focuses on a novel and challenging vision task, dense video captioning, which aims to automatically describe a video clip with multiple informative and diverse caption sentences. The proposed method is trained without explicit annotation of fine-grained sentence to video region-sequence correspondence, but is only based on weak video-level sentence annotations. It differs from existing video captioning systems in three technical aspects. First, we propose lexical fully convolutional neural networks (Lexical-FCN) with weakly supervised multi-instance multi-label learning to weakly link video regions with lexical labels. Second, we introduce a novel submodular maximization scheme to generate multiple informative and diverse region-sequences based on the Lexical-FCN outputs. A winner-takes-all scheme is adopted to weakly associate sentences to region-sequences in the training phase. Third, a sequence-to-sequence learning based language model is trained with the weakly supervised information obtained through the association process. 
We show that the proposed method can not only produce informative and diverse dense captions, but also outperform state-of-the-art single video captioning methods by a large margin.", "Video captioning, the task of describing the content of a video, has seen some promising improvements in recent years with sequence-to-sequence models, but accurately learning the temporal and logical dynamics involved in the task still remains a challenge, especially given the lack of sufficient annotated data. We improve video captioning by sharing knowledge with two related directed-generation tasks: a temporally-directed unsupervised video prediction task to learn richer context-aware video encoder representations, and a logically-directed language entailment generation task to learn better video-entailed caption decoder representations. For this, we present a many-to-many multi-task learning model that shares parameters across the encoders and decoders of the three tasks. We achieve significant improvements and the new state-of-the-art on several standard video captioning datasets using diverse automatic and human evaluations. We also show mutual multi-task improvements on the entailment generation task.", "" ] }
1905.13448
2947411092
Captioning has attracted much attention in image and video understanding while little work examines audio captioning. This paper contributes a manually-annotated dataset on car scene, in extension to a previously published hospital audio captioning dataset. An encoder-decoder model with pretrained word embeddings and additional sentence loss is proposed. This current model can accelerate the training process and generate semantically correct but unseen unique sentences. We test the model on the current car dataset, previous Hospital Dataset and the Joint Dataset, indicating its generalization capability across different scenes. Further, we make an effort to provide a better objective evaluation metric, namely the BERT similarity score. It compares the semantic-level similarity and compensates for drawbacks of N-gram based metrics like BLEU, namely high scores for word-similar sentences. This new metric demonstrates higher correlation with human evaluation. However, though detailed audio captions can now be automatically generated, human annotations still outperform model captions in many aspects.
Early work in natural language processing (NLP), such as GloVe @cite_19 and Word2Vec @cite_7 , focused on context-free word embeddings. More recently, models such as CoVe @cite_3 , ELMo @cite_15 and GPT @cite_4 build context-sensitive word representations, with GPT relying on the self-attention mechanism and the transformer architecture. An unsupervised, C-BOW-like method for embedding sentences into fixed-length vectors @cite_1 was later proposed. In this paper, our work is based on the state-of-the-art sentence embedding technique from Google named BERT @cite_12 . It consists of large bidirectional transformers trained on a huge corpus, so embeddings extracted from a pretrained BERT model perform well on many tasks with little fine-tuning.
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_1", "@cite_3", "@cite_19", "@cite_15", "@cite_12" ], "mid": [ "", "2131744502", "2605035112", "2963756346", "2250539671", "2787560479", "2896457183" ], "abstract": [ "", "Many machine learning algorithms require the input to be represented as a fixed-length feature vector. When it comes to texts, one of the most common fixed-length features is bag-of-words. Despite their popularity, bag-of-words features have two major weaknesses: they lose the ordering of the words and they also ignore semantics of the words. For example, \"powerful,\" \"strong\" and \"Paris\" are equally distant. In this paper, we propose Paragraph Vector, an unsupervised algorithm that learns fixed-length feature representations from variable-length pieces of texts, such as sentences, paragraphs, and documents. Our algorithm represents each document by a dense vector which is trained to predict words in the document. Its construction gives our algorithm the potential to overcome the weaknesses of bag-of-words models. Empirical results show that Paragraph Vectors outperforms bag-of-words models as well as other techniques for text representations. Finally, we achieve new state-of-the-art results on several text classification and sentiment analysis tasks.", "The recent tremendous success of unsupervised word embeddings in a multitude of applications raises the obvious question if similar methods could be derived to improve embeddings (i.e. semantic representations) of word sequences as well. We present a simple but efficient unsupervised objective to train distributed representations of sentences. Our method outperforms the state-of-the-art unsupervised models on most benchmark tasks, highlighting the robustness of the produced general-purpose sentence embeddings.", "Computer vision has benefited from initializing multiple deep layers with weights pretrained on large supervised training sets like ImageNet. Natural language processing (NLP) typically sees initialization of only the lowest layer of deep models with pretrained word vectors. In this paper, we use a deep LSTM encoder from an attentional sequence-to-sequence model trained for machine translation (MT) to contextualize word vectors. We show that adding these context vectors (CoVe) improves performance over using only unsupervised word and character vectors on a wide variety of common NLP tasks: sentiment analysis (SST, IMDb), question classification (TREC), entailment (SNLI), and question answering (SQuAD). For fine-grained sentiment analysis and entailment, CoVe improves performance of our baseline models to the state of the art.", "Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. We analyze and make explicit the model properties needed for such regularities to emerge in word vectors. The result is a new global logbilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. Our model efficiently leverages statistical information by training only on the nonzero elements in a word-word cooccurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus. The model produces a vector space with meaningful substructure, as evidenced by its performance of 75 on a recent word analogy task. 
It also outperforms related models on similarity tasks and named entity recognition.", "We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy). Our word vectors are learned functions of the internal states of a deep bidirectional language model (biLM), which is pre-trained on a large text corpus. We show that these representations can be easily added to existing models and significantly improve the state of the art across six challenging NLP problems, including question answering, textual entailment and sentiment analysis. We also present an analysis showing that exposing the deep internals of the pre-trained network is crucial, allowing downstream models to mix different types of semi-supervision signals.", "We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5 (7.7 point absolute improvement), MultiNLI accuracy to 86.7 (4.6 absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement)." ] }
1905.13448
2947411092
Captioning has attracted much attention in image and video understanding while little work examines audio captioning. This paper contributes a manually-annotated dataset on car scene, in extension to a previously published hospital audio captioning dataset. An encoder-decoder model with pretrained word embeddings and additional sentence loss is proposed. This current model can accelerate the training process and generate semantically correct but unseen unique sentences. We test the model on the current car dataset, previous Hospital Dataset and the Joint Dataset, indicating its generalization capability across different scenes. Further, we make an effort to provide a better objective evaluation metric, namely the BERT similarity score. It compares the semantic-level similarity and compensates for drawbacks of N-gram based metrics like BLEU, namely high scores for word-similar sentences. This new metric demonstrates higher correlation with human evaluation. However, though detailed audio captions can now be automatically generated, human annotations still outperform model captions in many aspects.
In previous captioning work, evaluation metrics were mainly borrowed from machine translation: BLEU@1-4, METEOR, CIDEr and ROUGE-L scores were calculated. All these metrics are based on N-gram overlaps between hypothesis and reference @cite_17 @cite_18 @cite_10 @cite_9 . @cite_8 and @cite_16 treated image captioning as a sentence ranking task and used recall@k and median@r as their metrics. Chuang @cite_9 used the Sent2Vec model from @cite_1 to embed sentences into fixed-length vectors; in addition to a BLEU score, the cosine similarity between sentence embeddings of model outputs and human transcriptions was used as a semantic evaluation. Our current work is inspired by such sentence-level evaluation.
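As an illustration of such a sentence-level semantic metric, the following minimal sketch (an assumed implementation using the Hugging Face transformers library, not necessarily the authors' exact pipeline; the example sentences are invented) embeds a hypothesis and a reference with a pretrained BERT model and scores them by cosine similarity:

```python
# Sketch: BERT-based sentence similarity via mean-pooled token embeddings.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

def embed(sentence: str) -> torch.Tensor:
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # Mean-pool the last-layer token embeddings into a single sentence vector.
    return outputs.last_hidden_state.mean(dim=1).squeeze(0)

hypothesis = "a car engine is running while people are talking"   # illustrative only
reference = "people talk inside a running car"                     # illustrative only
score = torch.nn.functional.cosine_similarity(embed(hypothesis), embed(reference), dim=0)
print(f"semantic similarity: {score.item():.3f}")
```

Unlike N-gram overlap, such a score rewards paraphrases that share meaning but few surface words, which is the motivation for the BERT similarity score proposed in this paper.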
{ "cite_N": [ "@cite_18", "@cite_8", "@cite_9", "@cite_1", "@cite_16", "@cite_10", "@cite_17" ], "mid": [ "1514535095", "2951805548", "2786670585", "2605035112", "68733909", "2584992898", "1895577753" ], "abstract": [ "Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr9k, Flickr30k and MS COCO.", "We present a model that generates natural language descriptions of images and their regions. Our approach leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between language and visual data. Our alignment model is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Multimodal Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate that our alignment model produces state of the art results in retrieval experiments on Flickr8K, Flickr30K and MSCOCO datasets. We then show that the generated descriptions significantly outperform retrieval baselines on both full images and on a new dataset of region-level annotations.", "Video captioning has been widely researched. Most related work takes into account only visual content in generating descriptions. However, auditory content such as human speech or environmental sounds contains rich information for describing scenes, but has yet to be widely explored for video captions. Here, we experiment with different ways to use this auditory content in videos, and demonstrate improved caption generation in terms of popular evaluation methods such as BLEU, CIDEr, and METEOR. We also measure the semantic similarities between generated captions and human-provided ground truth using sentence embeddings, and find that good use of multi-modal contents helps the machine to generate captions that are more semantically related to the ground truth. When analyzing the generated sentences, we find some ambiguous situations for which visual-only models yield incorrect results but that are resolved by approaches that take into account auditory cues.", "The recent tremendous success of unsupervised word embeddings in a multitude of applications raises the obvious question if similar methods could be derived to improve embeddings (i.e. semantic representations) of word sequences as well. We present a simple but efficient unsupervised objective to train distributed representations of sentences. Our method outperforms the state-of-the-art unsupervised models on most benchmark tasks, highlighting the robustness of the produced general-purpose sentence embeddings.", "The ability to associate images with natural language sentences that describe what is depicted in them is a hallmark of image understanding, and a prerequisite for applications such as sentence-based image search. 
In analogy to image search, we propose to frame sentence-based image annotation as the task of ranking a given pool of captions. We introduce a new benchmark collection for sentence-based image description and search, consisting of 8,000 images that are each paired with five different captions which provide clear descriptions of the salient entities and events. We introduce a number of systems that perform quite well on this task, even though they are only based on features that can be obtained with minimal supervision. Our results clearly indicate the importance of training on multiple captions per image, and of capturing syntactic (word order-based) and semantic features of these captions. We also perform an in-depth comparison of human and automatic evaluation metrics for this task, and propose strategies for collecting human judgments cheaply and on a very large scale, allowing us to augment our collection with additional relevance judgments of which captions describe which image. Our analysis shows that metrics that consider the ranked list of results for each query image or sentence are significantly more robust than metrics that are based on a single response per query. Moreover, our study suggests that the evaluation of ranking-based image description systems may be fully automated.", "Current methods for video description are based on encoder-decoder sentence generation using recurrent neural networks (RNNs). Recent work has demonstrated the advantages of integrating temporal attention mechanisms into these models, in which the decoder network predicts each word in the description by selectively giving more weight to encoded features from specific time frames. Such methods typically use two different types of features: image features (from an object classification model), and motion features (from an action recognition model), combined by naive concatenation in the model input. Because different feature modalities may carry task-relevant information at different times, fusing them by naive concatenation may limit the model's ability to dynamically determine the relevance of each type of feature to different parts of the description. In this paper, we incorporate audio features in addition to the image and motion features. To fuse these three modalities, we introduce a multimodal attention model that can selectively utilize features from different modalities for each word in the output description. Combining our new multimodal attention model with standard temporal attention outperforms state-of-the-art methods on two standard datasets: YouTube2Text and MSR-VTT.", "Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. In this paper, we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image. Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify both qualitatively and quantitatively. For instance, while the current state-of-the-art BLEU-1 score (the higher the better) on the Pascal dataset is 25, our approach yields 59, to be compared to human performance around 69. 
We also show BLEU-1 score improvements on Flickr30k, from 56 to 66, and on SBU, from 19 to 28. Lastly, on the newly released COCO dataset, we achieve a BLEU-4 of 27.7, which is the current state-of-the-art." ] }
1905.13378
2946936942
This paper studies a deep learning (DL) framework to solve distributed non-convex constrained optimizations in wireless networks where multiple computing nodes, interconnected via backhaul links, desire to determine an efficient assignment of their states based on local observations. Two different configurations are considered: First, an infinite-capacity backhaul enables nodes to communicate in a lossless way, thereby obtaining the solution by centralized computations. Second, a practical finite-capacity backhaul leads to the deployment of distributed solvers equipped along with quantizers for communication through capacity-limited backhaul. The distributed nature and the nonconvexity of the optimizations render the identification of the solution unwieldy. To handle them, deep neural networks (DNNs) are introduced to approximate an unknown computation for the solution accurately. In consequence, the original problems are transformed to training tasks of the DNNs subject to non-convex constraints where existing DL libraries fail to extend straightforwardly. A constrained training strategy is developed based on the primal-dual method. For distributed implementation, a novel binarization technique at the output layer is developed for quantization at each node. Our proposed distributed DL framework is examined in various network configurations of wireless resource management. Numerical results verify the effectiveness of our proposed approach over existing optimization techniques.
To overcome the drawbacks of traditional optimization techniques, deep learning (DL) frameworks have recently been investigated for wireless resource management @cite_19 @cite_18 @cite_23 @cite_35 @cite_5 @cite_8 and end-to-end communication system design @cite_17 @cite_4 @cite_31 @cite_24 @cite_33 . In particular, "learning to optimize" approaches @cite_19 @cite_18 @cite_23 @cite_35 @cite_5 @cite_8 have received considerable attention for their potential to replace traditional optimization algorithms with neural network computations. Power control problems in multi-user interference channels (IFCs) are addressed via deep neural networks (DNNs) to maximize the sum-rate performance @cite_19 . A supervised learning technique is used to learn a locally optimal solution produced by the weighted minimum mean-square error (WMMSE) algorithm @cite_7 . Its real-time computational complexity is shown to be much lower than that of the original WMMSE algorithm, albeit at the cost of intensive training over numerous samples. However, the supervised learning task requires generating numerous training labels, i.e., the power control solutions of the WMMSE algorithm, which becomes a bottleneck in the DNN training step. In addition, the average sum-rate performance achieved by the DNN in @cite_19 is normally lower than that of the WMMSE algorithm due to the nature of the supervised learning framework.
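To make the supervised "learning to optimize" recipe concrete, the following minimal sketch (an illustrative assumption, not the exact architecture or data pipeline of @cite_19 or @cite_7 ) trains a fully connected DNN to imitate power-control labels produced by an expert solver such as WMMSE:

```python
# Sketch: supervised imitation of an expert power-control algorithm.
import torch
import torch.nn as nn

K = 10          # number of transmitter-receiver pairs (assumed)
N = 10000       # number of training samples (assumed)

# Placeholder data; in practice channel gains come from a simulator and the
# labels from running WMMSE on each channel realization.
channel_gains = torch.rand(N, K * K)    # flattened |h_ij|^2 matrix
expert_powers = torch.rand(N, K)        # normalized expert power allocations in [0, 1]

model = nn.Sequential(
    nn.Linear(K * K, 200), nn.ReLU(),
    nn.Linear(200, 80), nn.ReLU(),
    nn.Linear(80, K), nn.Sigmoid(),     # keeps outputs within the normalized power budget
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(5):                  # full-batch training for brevity
    optimizer.zero_grad()
    loss = loss_fn(model(channel_gains), expert_powers)   # imitate the expert labels
    loss.backward()
    optimizer.step()
```

At deployment time a single forward pass replaces the iterative WMMSE updates, which is the source of the reported speed-up; the dependence on expert-generated labels is exactly the bottleneck discussed above.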
{ "cite_N": [ "@cite_35", "@cite_18", "@cite_4", "@cite_33", "@cite_7", "@cite_8", "@cite_24", "@cite_19", "@cite_23", "@cite_5", "@cite_31", "@cite_17" ], "mid": [ "2883104796", "2797462110", "2736068844", "2905030038", "1970830346", "2894241318", "2810236238", "2616867685", "2859887037", "2885238370", "2789734068", "2734408173" ], "abstract": [ "In this letter, a resource allocation strategy based on a deep neural network (DNN) is proposed for multi-channel cognitive radio networks, where the secondary user (SU) opportunistically utilizes channels without causing excessive interference to the primary user (PU). In the proposed scheme, the allocation of transmit power in each channel for SUs is found by utilizing the newly proposed DNN model, which separately determines the overall transmit power of individual SUs and the proportion of transmit power allocated to each channel. Both the spectral efficiency (SE) of the SU and the amount of interference caused to the PU are considered in the training of the DNN model, such that the interference caused to the PUs can be properly regulated while the SE of the SU is improved. Through simulations, we show that our scheme enables a high SE of the SU to be achieved while the interference caused to the PU can be maintained at less than the threshold.", "In this letter, deep power control (DPC), which is the first transmit power control framework based on a convolutional neural network (CNN), is proposed. In DPC, the transmit power control strategy to maximize either spectral efficiency (SE) or energy efficiency (EE) is learned by means of a CNN. While conventional power control schemes require a considerable number of computations, in DPC, the transmit power of users can be determined using far fewer computations enabling real-time processing. We also propose a form of DPC that can be performed in a distributed manner with local channel state information, allowing the signaling overhead to be greatly reduced. Through simulations, we show that the DPC can achieve almost the same or even higher SE and EE than a conventional power control scheme, with a much lower computation time.", "End-to-end learning of communications systems is a fascinating novel concept that has so far only been validated by simulations for block-based transmissions. It allows learning of transmitter and receiver implementations as deep neural networks (NNs) that are optimized for an arbitrary differentiable end-to-end performance metric, e.g., block error rate (BLER). In this paper, we demonstrate that over-the-air transmissions are possible: We build, train, and run a complete communications system solely composed of NNs using unsynchronized off-the-shelf software-defined radios and open-source deep learning software libraries. We extend the existing ideas toward continuous data transmission, which eases their current restriction to short block lengths but also entails the issue of receiver synchronization. We overcome this problem by introducing a frame synchronization module based on another NN. A comparison of the BLER performance of the “learned” system with that of a practical baseline shows competitive performance close to @math dB, even without extensive hyperparameter tuning. 
We identify several practical challenges of training such a system over actual channels, in particular, the missing channel gradient, and propose a two-step learning procedure based on the idea of transfer learning that circumvents this issue.", "Optical wireless communication (OWC) is a promising technology for future wireless communications due to its potential for cost-effective network deployment and high data rate. There are several implementation issues in OWC that have not been encountered in radio frequency wireless communications. First, practical OWC transmitters need illumination control on color, intensity, luminance, and so on, which poses complicated modulation design challenges. Furthermore, signal-dependent properties of optical channels raise nontrivial challenges in both modulation and demodulation of the optical signals. To tackle such difficulties, deep learning (DL) technologies can be applied for optical wireless transceiver design. This article addresses recent efforts on DL-based OWC system designs. A DL framework for emerging image sensor communication is proposed, and its feasibility is verified by simulation. Finally, technical challenges and implementation issues for the DL-based optical wireless technology are discussed.", "Consider the multiple-input multiple-output (MIMO) interfering broadcast channel whereby multiple base stations in a cellular network simultaneously transmit signals to a group of users in their own cells while causing interference to each other. The basic problem is to design linear beamformers that can maximize the system throughput. In this paper, we propose a linear transceiver design algorithm for weighted sum-rate maximization that is based on iterative minimization of weighted mean-square error (MSE). The proposed algorithm only needs local channel knowledge and converges to a stationary point of the weighted sum-rate maximization problem. Furthermore, the algorithm and its convergence can be extended to a general class of sum-utility maximization problem. The effectiveness of the proposed algorithm is validated by numerical experiments.", "In this paper, a means of transmit power control for underlaid device-to-device (D2D) communication is proposed based on deep learning technology. In the proposed scheme, the transmit power of D2D user equipment (DUE) is autonomously learned via a deep neural network such that the weighted sum rate (WSR) of DUEs can be maximized by considering the interference from cellular user equipment. Unlike conventional transmit power control schemes in which complex optimization problems have to be solved in an iterative manner, which possibly requires long computation time, in our proposed scheme the transmit power can be determined with a relatively low computation time. Through simulations, we confirm that the proposed scheme achieves a sufficiently high WSR with a sufficiently low computation time.", "This paper develops a deep learning framework for the design of on-off keying (OOK) based binary signaling transceiver in dimmable visible light communication (VLC) systems. The dimming support for the OOK optical signal is achieved by adjusting the number of ones in a binary codeword, which boils down to a combinatorial design problem for the codebook of a constant weight code (CWC) over signal-dependent noise channels. To tackle this challenge, we employ an autoencoder (AE) approach to learn a neural network of the encoder-decoder pair that reconstructs the output identical to an input. 
In addition, optical channel layers and binarization techniques are introduced to reflect the physical and discrete nature of the OOK-based VLC systems. The VLC transceiver is designed and optimized via the end-to-end training procedure for the AE. Numerical results verify that the proposed transceiver performs better than baseline CWC schemes.", "Numerical optimization has played a central role in addressing key signal processing (SP) problems. Highly effective methods have been developed for a large variety of SP applications such as communications, radar, filter design, and speech and image analytics, just to name a few. However, optimization algorithms often entail considerable complexity, which creates a serious gap between theoretical design analysis and real-time processing. In this paper, we aim at providing a new learning-based perspective to address this challenging issue. The key idea is to treat the input and output of an SP algorithm as an unknown nonlinear mapping and use a deep neural network (DNN) to approximate it. If the nonlinear mapping can be learned accurately by a DNN of moderate size, then SP tasks can be performed effectively—since passing the input through a DNN only requires a small number of simple operations. In our paper, we first identify a class of optimization algorithms that can be accurately approximated by a fully connected DNN. Second, to demonstrate the effectiveness of the proposed approach, we apply it to approximate a popular interference management algorithm, namely, the WMMSE algorithm. Extensive experiments using both synthetically generated wireless channel data and real DSL channel data have been conducted. It is shown that, in practice, only a small network is sufficient to obtain high approximation accuracy, and DNNs can achieve orders of magnitude speedup in computational time compared to the state-of-the-art interference management algorithm.", "In this paper, we propose to use Deep Neural Networks (DNNs) to solve so-called Team Decision (TD) problems, in which decentralized Decision Makers (DMs) aim at maximizing a common utility on the basis of locally available Channel State Information (CSI) without any additional communication or iteration. In the proposed configuration -coined Team DNNs (T-DNNs)-, the decision at each DM is approximated using a DNN and the weights of all DNNs are jointly trained, even though the implementation remains fundamentally decentralized. Turning to a practical application, the problem of decentralized link scheduling in Interference Channels (IC) is reformulated as a TD problem so that the T-DNNs approach can be applied. After adequate training, the scheduling obtained using the T-DNNs flexibly adapts to the decentralized CSI configuration to outperform other scheduling algorithms, thus proposing a novel efficient solution to a problem that has remained elusive for years.", "A transmit power control strategy using a deep neural network (DNN) is proposed for underlay device-to-device (D2D) communication where D2D user equipment (DUE) shares radio resources with cellular user equipment (CUE). In this scheme, a transmit power control strategy for DUE is found with the aid of a newly proposed DNN structure. Both the spectral efficiency (SE) of the DUE and the amount of interference at the CUE are taken into account, such that the SE of the DUE can be improved while alleviating any deterioration in the cellular transmission. 
Using simulations, we show that the proposed scheme can achieve a high SE of the DUE while properly regulating the interference caused to the CUE, with a low computation time.", "This paper presents a deep-learning (DL) based approach to the design of multi-colored visible light communication (VLC) systems where RGB light-emitting diode (LED) lamps accomplish multi-dimensional color modulation under color and illuminance requirements. It is aimed to identify a pair of multi-color modulation transmitter and receiver leading to efficient symbol recovery performance. To this end, an autoencoder (AE), an unsupervised deep learning technique, is adopted to train the end-to-end symbol recovery process that includes the VLC transceiver pair and a channel layer characterizing the optical channel along with additional LED intensity control features. As a result, the VLC transmitter and receiver are jointly designed and optimized. Intensive numerical results demonstrate that the learned VLC system outperforms existing techniques in terms of the average symbol error probability. This framework sheds light on the viability of DL techniques in the optical communication system design.", "We present and discuss several novel applications of deep learning for the physical layer. By interpreting a communications system as an autoencoder, we develop a fundamental new way to think about communications system design as an end-to-end reconstruction task that seeks to jointly optimize transmitter and receiver components in a single process. We show how this idea can be extended to networks of multiple transmitters and receivers and present the concept of radio transformer networks as a means to incorporate expert domain knowledge in the machine learning model. Lastly, we demonstrate the application of convolutional neural networks on raw IQ samples for modulation classification which achieves competitive accuracy with respect to traditional schemes relying on expert features. This paper is concluded with a discussion of open challenges and areas for future investigation." ] }
1905.13363
2947881065
Many research questions can be answered quickly and efficiently using data already collected for previous research. This practice is called secondary data analysis (SDA), and has gained popularity due to lower costs and improved research efficiency. In this paper we propose DFS, a file system to standardize the metadata representation of datasets, and DDU, a scalable architecture based on DFS for semi-automated metadata generation and data recommendation on the cloud. We discuss how DFS and DDU lays groundwork for automatic dataset aggregation, how it integrates with existing data wrangling and machine learning tools, and explores their implications on datasets stored in digital libraries.
Another study @cite_2 provides a storage-efficient approach to version control for datasets. It identifies a storage-recreation trade-off: the more storage is used, the faster dataset versions can be recreated or retrieved. A suite of inexpensive heuristics was created, drawing on techniques from delay-constrained scheduling and the spanning-tree literature. Results show that these heuristics provide efficient solutions in practical dataset versioning scenarios.
{ "cite_N": [ "@cite_2" ], "mid": [ "297231882" ], "abstract": [ "The relative ease of collaborative data science and analysis has led to a proliferation of many thousands or millions of versions of the same datasets in many scientific and commercial domains, acquired or constructed at various stages of data analysis across many users, and often over long periods of time. Managing, storing, and recreating these dataset versions is a non-trivial task. The fundamental challenge here is the storage-recreation trade-off: the more storage we use, the faster it is to recreate or retrieve versions, while the less storage we use, the slower it is to recreate or retrieve versions. Despite the fundamental nature of this problem, there has been a surprisingly little amount of work on it. In this paper, we study this trade-off in a principled manner: we formulate six problems under various settings, trading off these quantities in various ways, demonstrate that most of the problems are intractable, and propose a suite of inexpensive heuristics drawing from techniques in delay-constrained scheduling, and spanning tree literature, to solve these problems. We have built a prototype version management system, that aims to serve as a foundation to our DataHub system for facilitating collaborative data science. We demonstrate, via extensive experiments, that our proposed heuristics provide efficient solutions in practical dataset versioning scenarios." ] }
1905.13428
2947287992
Many potential applications of reinforcement learning in the real world involve interacting with other agents whose numbers vary over time. We propose new neural policy architectures for these multi-agent problems. In contrast to other methods of training an individual, discrete policy for each agent and then enforcing cooperation through some additional inter-policy mechanism, we follow the spirit of recent work on the power of relational inductive biases in deep networks by learning multi-agent relationships at the policy level via an attentional architecture. In our method, all agents share the same policy, but independently apply it in their own context to aggregate the other agents' state information when selecting their next action. The structure of our architectures allow them to be applied on environments with varying numbers of agents. We demonstrate our architecture on a benchmark multi-agent autonomous vehicle coordination problem, obtaining superior results to a full-knowledge, fully-centralized reference solution, and significantly outperforming it when scaling to large numbers of agents.
As an example, an attentional module has been proposed to enable agents to decide when to communicate; following this line of work, a global LSTM coordinator is used to aggregate the shared information and disseminate it back to the agents. The authors argue that learning when to communicate improves performance by removing the need for receivers to filter out less useful information. An extension of our framework in which the attended-on information is emitted by a learned communication module as in @cite_2 , but the processing is done in a decentralized, agent-wise manner as in the present work, is an interesting avenue for future work.
{ "cite_N": [ "@cite_2" ], "mid": [ "2963717208" ], "abstract": [ "Communication could potentially be an effective way for multi-agent cooperation. However, information sharing among all agents or in predefined communication architectures that existing methods adopt can be problematic. When there is a large number of agents, agents hardly differentiate valuable information that helps cooperative decision making from globally shared information. Therefore, communication barely help, and could even impair the learning of multi-agent cooperation. Predefined communication architectures, on the other hand, restrict communication among agents and thus restrain potential cooperation. To tackle these difficulties, in this paper, we propose an attentional communication model that learns when communication is needed and how to integrates shared information for cooperative decision making. Our model leads to efficient and effective communication for large-scale multi-agent cooperation. Empirically, we show the strength of our model in various cooperative scenarios, where agents are able to develop more coordinated and sophisticated strategies than existing methods." ] }
1905.13418
2947981834
We propose a novel application of self-attention networks towards grammar induction. We present an attention-based supertagger for a refined type-logical grammar, trained on constructing types inductively. In addition to achieving a high overall type accuracy, our model is able to learn the syntax of the grammar's type system along with its denotational semantics. This lifts the closed world assumption commonly made by lexicalized grammar supertaggers, greatly enhancing its generalization potential. This is evidenced both by its adequate accuracy over sparse word types and its ability to correctly construct complex types never seen during training, which, to the best of our knowledge, was as of yet unaccomplished.
Regardless of the particular implementation, the above works all fall into the same category of sequence labeling architectures. As such, the type vocabulary (i.e. the set of candidate categories) is always considered fixed and pre-specified --- it is, in fact, hard-coded within the architecture itself (e.g. in the network's final classification layer). The inability of such systems to account for unseen types, or even to consistently predict rare ones, has permeated the training and evaluation process; a frequency cut-off is usually applied to the corpus, keeping only categories that appear at least 10 times throughout the training set @cite_8 . This limitation has been acknowledged in the past; in the case of CCG, certain classes of syntactic constructions pose significant parsing difficulties due to categories that are completely missing from the corpus @cite_21 . An attempt to address the issue was made in the form of an inference algorithm that iteratively expands the lexicon with new categories for unseen words @cite_4 --- its applicability, however, is narrow, as new categories are often necessary even for words that have been encountered before.
{ "cite_N": [ "@cite_21", "@cite_4", "@cite_8" ], "mid": [ "1736698085", "", "2074729706" ], "abstract": [ "Accurate dependency recovery has recently been reported for a number of wide-coverage statistical parsers using Combinatory Categorial Grammar (CCG). However, overall figures give no indication of a parser’s performance on specific constructions, nor how suitable a parser is for specific applications. In this paper we give a detailed evaluation of a CCG parser on object extraction dependencies found in WSJ text. We also show how the parser can be used to parse questions for Question Answering. The accuracy of the original parser on questions is very poor, and we propose a novel technique for porting the parser to a new domain, by creating new labelled data at the lexical category level only. Using a supertagger to assign categories to words, trained on the new data, leads to a dramatic increase in question parsing accuracy.", "", "This paper describes the role of supertagging in a wide-coverage CCG parser which uses a log-linear model to select an analysis. The supertagger reduces the derivation space over which model estimation is performed, reducing the space required for discriminative training. It also dramatically increases the speed of the parser. We show that large increases in speed can be obtained by tightly integrating the supertagger with the CCG grammar and parser. This is the first work we are aware of to successfully integrate a supertagger with a full parser which uses an automatically extracted grammar. We also further reduce the derivation space using constraints on category combination. The result is an accurate wide-coverage CCG parser which is an order of magnitude faster than comparable systems for other linguistically motivated formalisms." ] }
1905.13342
2947815032
Raw underwater images are degraded due to wavelength-dependent light attenuation and scattering, limiting their applicability in vision systems. Another factor that makes enhancing underwater images particularly challenging is the diversity of the water types in which they are captured. For example, images captured in deep oceanic waters have a different distribution from those captured in shallow coastal waters. Such diversity makes it hard to train a single model to enhance underwater images. In this work, we propose a novel model which nicely handles the diversity of water during the enhancement, by adversarially learning the content features of the images by disentangling the unwanted nuisances corresponding to water types (viewed as different domains). We use the learned domain-agnostic features to generate enhanced underwater images. We train our model on a dataset consisting of images of 10 Jerlov water types. Experimental results show that the proposed model not only outperforms the previous methods in SSIM and PSNR scores for almost all Jerlov water types but also generalizes well on real-world datasets. The performance of a high-level vision task (object detection) also shows improvement using enhanced images with our model.
Many previous attempts to solve the underwater image enhancement problem have used physics-based methods. @cite_25 tries to solve this problem by explicitly modeling the refraction in water, whereas @cite_8 incorporates the inherent properties of the underwater medium such as attenuation, scattering, and the volume scattering function in order to simulate image formation. @cite_21 defines an underwater image formation model which is given as
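The equation referenced at the end of the passage above appears to have been lost in extraction. Based on the description in the following paragraph (a dehazing-like model with a wavelength-dependent attenuation coefficient), the cited model is presumably of the standard form below; the notation is our assumption, not a quotation from @cite_21 .

```latex
I_c(x) = J_c(x)\, t_c(x) + A_c \bigl(1 - t_c(x)\bigr),
\qquad t_c(x) = e^{-\beta_c\, d(x)},
\qquad c \in \{R, G, B\},
```

where $I_c$ is the observed image, $J_c$ the scene radiance, $A_c$ the ambient (background) light, $d(x)$ the scene depth at pixel $x$, and $\beta_c$ the wavelength-dependent attenuation coefficient.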
{ "cite_N": [ "@cite_21", "@cite_25", "@cite_8" ], "mid": [ "1976263166", "250258559", "2041285268" ], "abstract": [ "Light scattering and color change are two major sources of distortion for underwater photography. Light scattering is caused by light incident on objects reflected and deflected multiple times by particles present in the water before reaching the camera. This in turn lowers the visibility and contrast of the image captured. Color change corresponds to the varying degrees of attenuation encountered by light traveling in the water with different wavelengths, rendering ambient underwater environments dominated by a bluish tone. No existing underwater processing techniques can handle light scattering and color change distortions suffered by underwater images, and the possible presence of artificial lighting simultaneously. This paper proposes a novel systematic approach to enhance underwater images by a dehazing algorithm, to compensate the attenuation discrepancy along the propagation path, and to take the influence of the possible presence of an artifical light source into consideration. Once the depth map, i.e., distances between the objects and the camera, is estimated, the foreground and background within a scene are segmented. The light intensities of foreground and background are compared to determine whether an artificial light source is employed during the image capturing process. After compensating the effect of artifical light, the haze phenomenon and discrepancy in wavelength attenuation along the underwater propagation path to camera are corrected. Next, the water depth in the image scene is estimated according to the residual energy ratios of different color channels existing in the background light. Based on the amount of attenuation corresponding to each light wavelength, color change compensation is conducted to restore color balance. The performance of the proposed algorithm for wavelength compensation and image dehazing (WCID) is evaluated both objectively and subjectively by utilizing ground-truth color patches and video downloaded from the Youtube website. Both results demonstrate that images with significantly enhanced visibility and superior color fidelity are obtained by the WCID proposed.", "In recent years, underwater imaging has gained a lot of popularity partly due to the availability of off-the-shelf consumer cameras, but also due to a growing interest in the ocean floor by science and industry. Apart from capturing single images or sequences, the application of methods from the area of computer vision has gained interest as well. However, water affects image formation in two major ways. First, while traveling through the water, light is attenuated and scattered, depending on the light's wavelength causing the typical strong green or blue hue in underwater images. Second, cameras used in underwater scenarios need to be confined in an underwater housing, viewing the scene through a flat or dome-shaped glass port. The inside of the housing is filled with air. Consequently, the light entering the housing needs to pass a water-glass interface, then a glass-air interface, thus is refracted twice, affecting underwater image formation geometrically. In classic Structure-from-Motion (SfM) approaches, the perspective camera model is usually assumed, however, it can be shown that it becomes invalid due to refraction in underwater scenarios. 
Therefore, this thesis proposes an adaptation of the SfM algorithm to underwater image formation with flat port underwater housings, i.e. introduces a method where refraction at the underwater housing is modeled explicitly. This includes a calibration approach, algorithms for relative and absolute pose estimation, an efficient, non-linear error function that is utilized in bundle adjustment, and a refractive plane sweep algorithm. Finally, if calibration data for an underwater light propagation model exists, the dense depth maps can be used to correct texture colors. Experiments with a perspective and the proposed refractive approach to 3D reconstruction revealed that the perspective approach does indeed suffer from a systematic model error depending on the distance between camera and glass and a possible tilt of the glass with respect to the image sensor. The proposed method shows no such systematic error and thus provides more accurate results for underwater image sequences.", "A computer model to simulate the formation of underwater images has been developed. The model incorporates the inherent and apparent properties of the propagation of light in water. An image is approximated as a linear superposition of several image components. The model has been used to simulate the relative advantages of different camera light configurations. The results indicate that extremely large gains in image contrast can be obtained by careful design of beam patterns and the manipulation of camera and light locations. The performance of range-gated systems is explored, and it is demonstrated that these systems are presently power limited. In order to obtain better quality images at larger distances, an imaging configuration which consists of scanning an incoherent light beam across the field of view of a camera is proposed. The incoherent light-scanning system is shown to have advantages over both conventional imaging techniques and range-gated methods. >" ] }
1905.13342
2947815032
Raw underwater images are degraded due to wavelength-dependent light attenuation and scattering, limiting their applicability in vision systems. Another factor that makes enhancing underwater images particularly challenging is the diversity of the water types in which they are captured. For example, images captured in deep oceanic waters have a different distribution from those captured in shallow coastal waters. Such diversity makes it hard to train a single model to enhance underwater images. In this work, we propose a novel model which nicely handles the diversity of water during the enhancement, by adversarially learning the content features of the images by disentangling the unwanted nuisances corresponding to water types (viewed as different domains). We use the learned domain-agnostic features to generate enhanced underwater images. We train our model on a dataset consisting of images of 10 Jerlov water types. Experimental results show that the proposed model not only outperforms the previous methods in SSIM and PSNR scores for almost all Jerlov water types but also generalizes well on real-world datasets. The performance of a high-level vision task (object detection) also shows improvement using enhanced images with our model.
The above physical model is similar to that of image dehazing, except that the medium attenuation coefficient is wavelength dependent, whereas in dehazing it does not depend on the light wavelength. This model has been used by many approaches to solve the underwater image enhancement problem. @cite_11 tries to improve on the above model by computing attenuation coefficients in the 3D RGB space, whereas @cite_17 uses the above model to generate a synthetic dataset of 10 Jerlov water types. We generate a similar dataset in our work, the details of which are given in section .
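To make the dataset-synthesis step concrete, the following sketch shows how a clean image could be degraded into a given water type using the wavelength-dependent model above. The per-channel attenuation coefficients, ambient light, and constant depth map are made-up illustrative values; the actual Jerlov coefficient tables and rendering pipeline of @cite_17 may differ.

```python
import numpy as np

def synthesize_underwater(clean, depth, beta, ambient):
    """Degrade a clean image with the wavelength-dependent attenuation model.

    clean:   H x W x 3 float array in [0, 1] (scene radiance J).
    depth:   H x W float array of scene depths in meters.
    beta:    per-channel attenuation coefficients, shape (3,).
    ambient: per-channel background light, shape (3,).
    """
    t = np.exp(-depth[..., None] * np.asarray(beta))    # per-channel transmission
    return clean * t + np.asarray(ambient) * (1.0 - t)  # attenuated signal + veiling light

# Toy usage with hypothetical coefficients (red attenuates fastest, as in oceanic water).
rng = np.random.default_rng(0)
clean = rng.random((64, 64, 3))
depth = np.full((64, 64), 5.0)                          # 5 m everywhere, purely for illustration
degraded = synthesize_underwater(clean, depth,
                                 beta=[0.8, 0.2, 0.1],  # not actual Jerlov table values
                                 ambient=[0.05, 0.4, 0.5])
```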
{ "cite_N": [ "@cite_17", "@cite_11" ], "mid": [ "2831859938", "2746518190" ], "abstract": [ "In an underwater scene, wavelength-dependent light absorption and scattering degrade the visibility of images, causing low contrast and distorted color casts. To address this problem, we propose a convolutional neural network based image enhancement model, i.e., UWCNN, which is trained efficiently using a synthetic underwater image database. Unlike the existing works that require the parameters of underwater imaging model estimation or impose inflexible frameworks applicable only for specific scenes, our model directly reconstructs the clear latent underwater image by leveraging on an automatic end-to-end and data-driven training mechanism. Compliant with underwater imaging models and optical properties of underwater scenes, we first synthesize ten different marine image databases. Then, we separately train multiple UWCNN models for each underwater image formation type. Experimental results on real-world and synthetic underwater images demonstrate that the presented method generalizes well on different underwater scenes and outperforms the existing methods both qualitatively and quantitatively. Besides, we conduct an ablation study to demonstrate the effect of each component in our network.", "Underwater image reconstruction methods require the knowledge of wideband attenuation coefficients per color channel. Current estimation methods for these coefficients require specialized hardware or multiple images, and none of them leverage the multitude of existing ocean optical measurements as priors. Here, we aim to constrain the set of physically-feasible wideband attenuation coefficients in the ocean by utilizing water attenuation measured worldwide by oceanographers. We calculate the space of valid wideband effective attenuation coefficients in the 3D RGB domain and find that a bound manifold in 3-space sufficiently represents the variation from the clearest to murkiest waters. We validate our model using in situ experiments in two different optical water bodies, the Red Sea and the Mediterranean. Moreover, we show that contradictory to the common image formation model, the coefficients depend on the imaging range and object reflectance, and quantify the errors resulting from ignoring these dependencies." ] }
1905.12723
2947241406
This paper proposes a novel approach for extending monocular visual odometry to a stereo camera system. The proposed method uses an additional camera to accurately estimate and optimize the scale of the monocular visual odometry, rather than triangulating 3D points from stereo matching. Specifically, the 3D points generated by the monocular visual odometry are projected onto the other camera of the stereo pair, and the scale is recovered and optimized by directly minimizing the photometric error. In particular, it is computationally efficient, adding minimal overhead to the stereo vision system compared to straightforward stereo matching, and is robust to repetitive texture. Additionally, direct scale optimization enables stereo visual odometry to be purely based on direct methods. Extensive evaluation on public datasets (e.g., KITTI) and outdoor environments (both terrestrial and underwater) demonstrates the accuracy and efficiency of a stereo visual odometry approach extended by scale optimization, as well as the robustness in environments with challenging texture.
Stereo VO has been widely explored, and many approaches @cite_16 @cite_14 @cite_20 @cite_22 @cite_18 rely on stereo matching. S-PTAM @cite_12 is one of the recent developments in stereo VO, which extends PTAM @cite_13 to a stereo system; stereo matching is used to generate new 3D points. Stereo ORB-SLAM @cite_9 is another example of stereo VO that depends on stereo matching. The authors of monocular LSD-SLAM @cite_15 extended it to a stereo VO @cite_14 . Monocular LSD-SLAM is purely based on a direct method (directly minimizing photometric error, independent of feature matching), but since the stereo extension uses stereo matching, it is no longer a fully direct method. VO based on stereo matching often suffers from the problems discussed in Sec : such methods tend to fail if the scene texture is repetitive, and they are not very computationally efficient.
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_22", "@cite_9", "@cite_15", "@cite_16", "@cite_13", "@cite_12", "@cite_20" ], "mid": [ "", "2218842719", "", "2535547924", "612478963", "2031998160", "2151290401", "", "" ], "abstract": [ "", "We propose a novel Large-Scale Direct SLAM algorithm for stereo cameras (Stereo LSD-SLAM) that runs in real-time at high frame rate on standard CPUs. In contrast to sparse interest-point based methods, our approach aligns images directly based on the photoconsistency of all high-contrast pixels, including corners, edges and high texture areas. It concurrently estimates the depth at these pixels from two types of stereo cues: Static stereo through the fixed-baseline stereo camera setup as well as temporal multi-view stereo exploiting the camera motion. By incorporating both disparity sources, our algorithm can even estimate depth of pixels that are under-constrained when only using fixed-baseline stereo. Using a fixed baseline, on the other hand, avoids scale-drift that typically occurs in pure monocular SLAM.We furthermore propose a robust approach to enforce illumination invariance, capable of handling aggressive brightness changes between frames - greatly improving the performance in realistic settings. In experiments, we demonstrate state-of-the-art results on stereo SLAM benchmarks such as Kitti or challenging datasets from the EuRoC Challenge 3 for micro aerial vehicles.", "", "We present ORB-SLAM2, a complete simultaneous localization and mapping (SLAM) system for monocular, stereo and RGB-D cameras, including map reuse, loop closing, and relocalization capabilities. The system works in real time on standard central processing units in a wide variety of environments from small hand-held indoors sequences, to drones flying in industrial environments and cars driving around a city. Our back-end, based on bundle adjustment with monocular and stereo observations, allows for accurate trajectory estimation with metric scale. Our system includes a lightweight localization mode that leverages visual odometry tracks for unmapped regions and matches with map points that allow for zero-drift localization. The evaluation on 29 popular public sequences shows that our method achieves state-of-the-art accuracy, being in most cases the most accurate SLAM solution. We publish the source code, not only for the benefit of the SLAM community, but with the aim of being an out-of-the-box SLAM solution for researchers in other fields.", "We propose a direct (feature-less) monocular SLAM algorithm which, in contrast to current state-of-the-art regarding direct methods, allows to build large-scale, consistent maps of the environment. Along with highly accurate pose estimation based on direct image alignment, the 3D environment is reconstructed in real-time as pose-graph of keyframes with associated semi-dense depth maps. These are obtained by filtering over a large number of pixelwise small-baseline stereo comparisons. The explicitly scale-drift aware formulation allows the approach to operate on challenging sequences including large variations in scene scale. Major enablers are two key novelties: (1) a novel direct tracking method which operates on ( sim (3) ), thereby explicitly detecting scale-drift, and (2) an elegant probabilistic solution to include the effect of noisy depth values into tracking. 
The resulting direct monocular SLAM system runs in real-time on a CPU.", "In this paper we present a novel algorithm for fast and robust stereo visual odometry based on feature selection and tracking (SOFT). The reduction of drift is based on careful selection of a subset of stable features and their tracking through the frames. Rotation and translation between two consecutive poses are estimated separately. The five point method is used for rotation estimation, whereas the three point method is used for estimating translation. Experimental results show that the proposed algorithm has an average pose error of 1.03 with processing speed above 10 Hz. According to publicly available KITTI leaderboard, SOFT outperforms all other validated methods. We also present a modified IMU-aided version of the algorithm, fast and suitable for embedded systems. This algorithm employs an IMU for outlier rejection and Kalman filter for rotation refinement. Experiments show that the IMU based system runs at 20 Hz on an ODROID U3 ARM-based embedded computer without any hardware acceleration. Integration of all components is described and experimental results are presented.", "This paper presents a method of estimating camera pose in an unknown scene. While this has previously been attempted by adapting SLAM algorithms developed for robotic exploration, we propose a system specifically designed to track a hand-held camera in a small AR workspace. We propose to split tracking and mapping into two separate tasks, processed in parallel threads on a dual-core computer: one thread deals with the task of robustly tracking erratic hand-held motion, while the other produces a 3D map of point features from previously observed video frames. This allows the use of computationally expensive batch optimisation techniques not usually associated with real-time operation: The result is a system that produces detailed maps with thousands of landmarks which can be tracked at frame-rate, with an accuracy and robustness rivalling that of state-of-the-art model-based systems.", "", "" ] }
1905.12723
2947241406
This paper proposes a novel approach for extending monocular visual odometry to a stereo camera system. The proposed method uses an additional camera to accurately estimate and optimize the scale of the monocular visual odometry, rather than triangulating 3D points from stereo matching. Specifically, the 3D points generated by the monocular visual odometry are projected onto the other camera of the stereo pair, and the scale is recovered and optimized by directly minimizing the photometric error. In particular, it is computationally efficient, adding minimal overhead to the stereo vision system compared to straightforward stereo matching, and is robust to repetitive texture. Additionally, direct scale optimization enables stereo visual odometry to be purely based on direct methods. Extensive evaluation on public datasets (e.g., KITTI) and outdoor environments (both terrestrial and underwater) demonstrates the accuracy and efficiency of a stereo visual odometry approach extended by scale optimization, as well as the robustness in environments with challenging texture.
On the other hand, the authors of monocular SVO @cite_8 extended it to multi-camera systems @cite_7 , not specifically to a stereo camera. Instead of stereo matching, they couple all cameras into a single objective that minimizes photometric error; the error function is computed by projecting 3D points onto all image frames in which they are visible. The accuracy is further improved and the scale problem is solved implicitly. However, the computational cost increases significantly because of the augmented error function. Stereo DSO @cite_5 is a hybrid model, which uses stereo matching to initialize depth for each keyframe; the stereo image is also coupled into the error function. While its computational cost is higher, Stereo DSO is a highly accurate approach to VO.
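The scale-optimization idea described in the abstract above can be sketched compactly: project the monocular 3D points, scaled by a candidate factor, into the second camera and pick the scale that minimizes photometric error. The sketch below is a deliberately naive 1-D grid search with a rectified pinhole model and nearest-neighbour sampling; the authors' actual formulation (robust weighting, sub-pixel interpolation, continuous optimization) is surely more refined, so every name and detail here should be read as an assumption for illustration only.

```python
import numpy as np

def photometric_scale_error(points_left, intensities, img_right, K, baseline, scale):
    """Mean absolute photometric residual for one candidate scale.

    points_left: N x 3 points in the left-camera frame (monocular, scale-ambiguous).
    intensities: N reference intensities at the points' projections in the left image.
    img_right:   H x W grayscale right image.
    K:           3 x 3 intrinsic matrix (assumed identical for both cameras).
    baseline:    metric stereo baseline along x (rectified setup assumed).
    """
    pts_right = points_left * scale - np.array([baseline, 0.0, 0.0])
    uvw = (K @ pts_right.T).T
    u, v = uvw[:, 0] / uvw[:, 2], uvw[:, 1] / uvw[:, 2]
    ui, vi = np.round(u).astype(int), np.round(v).astype(int)
    ok = ((ui >= 0) & (ui < img_right.shape[1]) &
          (vi >= 0) & (vi < img_right.shape[0]) & (pts_right[:, 2] > 0))
    if not np.any(ok):
        return np.inf
    return np.mean(np.abs(img_right[vi[ok], ui[ok]] - intensities[ok]))

def recover_scale(points_left, intensities, img_right, K, baseline,
                  candidates=np.linspace(0.1, 10.0, 200)):
    """Pick the scale with the smallest photometric error (naive grid search)."""
    errs = [photometric_scale_error(points_left, intensities, img_right, K, baseline, s)
            for s in candidates]
    return candidates[int(np.argmin(errs))]
```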
{ "cite_N": [ "@cite_5", "@cite_7", "@cite_8" ], "mid": [ "2963562098", "2564632156", "1970504153" ], "abstract": [ "We propose Stereo Direct Sparse Odometry (Stereo DSO) as a novel method for highly accurate real-time visual odometry estimation of large-scale environments from stereo cameras. It jointly optimizes for all the model parameters within the active window, including the intrinsic extrinsic camera parameters of all keyframes and the depth values of all selected pixels. In particular, we propose a novel approach to integrate constraints from static stereo into the bundle adjustment pipeline of temporal multi-view stereo. Real-time optimization is realized by sampling pixels uniformly from image regions with sufficient intensity gradient. Fixed-baseline stereo resolves scale drift. It also reduces the sensitivities to large optical flow and to rolling shutter effect which are known shortcomings of direct image alignment methods. Quantitative evaluation demonstrates that the proposed Stereo DSO outperforms existing state-of-the-art visual odometry methods both in terms of tracking accuracy and robustness. Moreover, our method delivers a more precise metric 3D reconstruction than previous dense semi-dense direct approaches while providing a higher reconstruction density than feature-based methods.", "Direct methods for visual odometry (VO) have gained popularity for their capability to exploit information from all intensity gradients in the image. However, low computational speed as well as missing guarantees for optimality and consistency are limiting factors of direct methods, in which established feature-based methods succeed instead. Based on these considerations, we propose a semidirect VO (SVO) that uses direct methods to track and triangulate pixels that are characterized by high image gradients, but relies on proven feature-based methods for joint optimization of structure and motion. Together with a robust probabilistic depth estimation algorithm, this enables us to efficiently track pixels lying on weak corners and edges in environments with little or high-frequency texture. We further demonstrate that the algorithm can easily be extended to multiple cameras, to track edges, to include motion priors, and to enable the use of very large field of view cameras, such as fisheye and catadioptric ones. Experimental evaluation on benchmark datasets shows that the algorithm is significantly faster than the state of the art while achieving highly competitive accuracy.", "We propose a semi-direct monocular visual odometry algorithm that is precise, robust, and faster than current state-of-the-art methods. The semi-direct approach eliminates the need of costly feature extraction and robust matching techniques for motion estimation. Our algorithm operates directly on pixel intensities, which results in subpixel precision at high frame-rates. A probabilistic mapping method that explicitly models outlier measurements is used to estimate 3D points, which results in fewer outliers and more reliable points. Precise and high frame-rate motion estimation brings increased robustness in scenes of little, repetitive, and high-frequency texture. The algorithm is applied to micro-aerial-vehicle state-estimation in GPS-denied environments and runs at 55 frames per second on the onboard embedded computer and at more than 300 frames per second on a consumer laptop. We call our approach SVO (Semi-direct Visual Odometry) and release our implementation as open-source software." ] }
1905.12516
2947838315
Technologies for abusive language detection are being developed and applied with little consideration of their potential biases. We examine racial bias in five different sets of Twitter data annotated for hate speech and abusive language. We train classifiers on these datasets and compare the predictions of these classifiers on tweets written in African-American English with those written in Standard American English. The results show evidence of systematic racial bias in all datasets, as classifiers trained on them tend to predict that tweets written in African-American English are abusive at substantially higher rates. If these abusive language detection systems are used in the field they will therefore have a disproportionate negative impact on African-American social media users. Consequently, these systems may discriminate against the groups who are often the targets of the abuse we are trying to detect.
Scholars and practitioners have recently been devoting more attention to bias in machine learning models, particularly as these models are becoming involved in more and more consequential decisions @cite_1 . Bias often derives from the data used to train these models. For example, it has been shown how facial recognition technologies perform worse for darker-skinned people, particularly darker-skinned women, due to the disproportionate presence of white, male faces in the training data. Natural language processing systems also inherit biases from the data they were trained on. For example, in unsupervised learning, word embeddings often contain biases @cite_19 @cite_29 @cite_17 which persist even after attempts to remove them @cite_16 . There are many examples of bias in supervised learning contexts: YouTube's captioning models make more errors when transcribing women @cite_10 , AAE is more likely to be misclassified as non-English by widely used language classifiers @cite_11 , numerous gender and racial biases exist in sentiment classification systems @cite_8 , and errors in both co-reference resolution systems and occupational classification models reflect gendered occupational patterns @cite_6 @cite_32 .
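For readers unfamiliar with how bias in word embeddings is typically quantified (e.g. in the line of work around @cite_19 ), the sketch below shows one common diagnostic: projecting target words onto a gender direction defined by the difference of two anchor vectors. The embedding source, anchor choice, and word lists are placeholders, and this is only one of several bias measures used in that literature.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def gender_projection(emb, words, anchors=("he", "she")):
    """Project words onto a gender direction (difference of anchor embeddings).

    emb: dict mapping word -> 1-D numpy vector (any pre-trained embedding would do).
    Positive scores lean toward the first anchor, negative toward the second.
    """
    direction = emb[anchors[0]] - emb[anchors[1]]
    return {w: cosine(emb[w], direction) for w in words if w in emb}

# Toy usage with random vectors, just to show the shape of the computation;
# real analyses use embeddings trained on large corpora.
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=50) for w in ["he", "she", "nurse", "engineer", "doctor"]}
print(gender_projection(emb, ["nurse", "engineer", "doctor"]))
```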
{ "cite_N": [ "@cite_11", "@cite_8", "@cite_29", "@cite_1", "@cite_32", "@cite_6", "@cite_19", "@cite_16", "@cite_10", "@cite_17" ], "mid": [ "2728567418", "", "2893425640", "2584924584", "", "2963526187", "2483215953", "2921633540", "2607719644", "" ], "abstract": [ "We highlight an important frontier in algorithmic fairness: disparity in the quality of natural language processing algorithms when applied to language from authors of different social groups. For example, current systems sometimes analyze the language of females and minorities more poorly than they do of whites and males. We conduct an empirical analysis of racial disparity in language identification for tweets written in African-American English, and discuss implications of disparity in NLP.", "", "Machine learning is a means to derive artificial intelligence by discovering patterns in existing data. Here, we show that applying machine learning to ordinary human language results in human-like semantic biases. We replicated a spectrum of known biases, as measured by the Implicit Association Test, using a widely used, purely statistical machine-learning model trained on a standard corpus of text from the World Wide Web. Our results indicate that text corpora contain recoverable and accurate imprints of our historic biases, whether morally neutral as toward insects or flowers, problematic as toward race or gender, or even simply veridical, reflecting the status quo distribution of gender with respect to careers or first names. Our methods hold promise for identifying and addressing sources of bias in culture, including technology.", "Machine-learning prediction methods have been extremely productive in applications ranging from medicine to allocating fire and health inspectors in cities. However, there are a number of gaps between making a prediction and making a decision, and underlying assumptions need to be understood in order to optimize data-driven decision-making.", "", "", "The blind application of machine learning runs the risk of amplifying biases present in data. Such a danger is facing us with word embedding, a popular framework to represent text data as vectors which has been used in many machine learning and natural language processing tasks. We show that even word embeddings trained on Google News articles exhibit female male gender stereotypes to a disturbing extent. This raises concerns because their widespread use, as we describe, often tends to amplify these biases. Geometrically, gender bias is first shown to be captured by a direction in the word embedding. Second, gender neutral words are shown to be linearly separable from gender definition words in the word embedding. Using these properties, we provide a methodology for modifying an embedding to remove gender stereotypes, such as the association between the words receptionist and female, while maintaining desired associations such as between the words queen and female. Using crowd-worker evaluation as well as standard benchmarks, we empirically demonstrate that our algorithms significantly reduce gender bias in embeddings while preserving the its useful properties such as the ability to cluster related concepts and to solve analogy tasks. The resulting embeddings can be used in applications without amplifying gender bias.", "Word embeddings are widely used in NLP for a vast range of tasks. It was shown that word embeddings derived from text corpora reflect gender biases in society. 
This phenomenon is pervasive and consistent across different word embedding models, causing serious concern. Several recent works tackle this problem, and propose methods for significantly reducing this gender bias in word embeddings, demonstrating convincing results. However, we argue that this removal is superficial. While the bias is indeed substantially reduced according to the provided bias definition, the actual effect is mostly hiding the bias, not removing it. The gender bias information is still reflected in the distances between \"gender-neutralized\" words in the debiased embeddings, and can be recovered from them. We present a series of experiments to support this claim, for two debiasing methods. We conclude that existing bias removal techniques are insufficient, and should not be trusted for providing gender-neutral modeling.", "", "" ] }
1905.12766
2947230780
A probabilistic approach to Boolean matrix factorization can provide solutions robust against noise and missing values with linear computational complexity. However, the assumption about latent factors can be problematic in real-world applications. This study proposed a new probabilistic algorithm free of assumptions about latent factors, while retaining the advantages of previous algorithms. Real data experiments showed that our algorithm compared favourably with current state-of-the-art probabilistic algorithms.
The BMF problem is closely connected, if not identical, to many other computational problems including dense bipartite subgraph extraction ( @cite_8 , @cite_7 ), the tiling problem ( @cite_6 ), and binary independent component analysis ( @cite_9 ). Various heuristics have been adopted to develop efficient approximations, including the discrete basis problem ( @cite_16 ), linear programming ( @cite_5 ), formal concept analysis ( @cite_2 , @cite_1 ) and minimum description length ( @cite_17 , @cite_4 ). However, most algorithms cannot handle both noise and missing values. For example, @cite_7 presented a simple two-step algorithm that can identify tiny clusters on the right side even under destructive noise levels. But it assumed that the clusters on the left side are large while those on the right are small. Moreover, it cannot handle missing values.
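To fix notation for the discussion above, the sketch below spells out the Boolean (OR-AND) matrix product and the reconstruction error that BMF algorithms minimize, with an optional mask for missing values (the handling of missing entries being exactly what the text notes most algorithms lack). Function and variable names are ours, not taken from any cited method.

```python
import numpy as np

def boolean_product(B, C):
    """Boolean (OR-AND) product of binary matrices B (n x k) and C (k x m)."""
    return ((B.astype(int) @ C.astype(int)) > 0).astype(int)  # any nonzero inner product -> 1

def reconstruction_error(X, B, C, observed=None):
    """Number of mismatched entries between X and the Boolean product B∘C.

    observed: optional boolean mask of the same shape as X; None means fully observed.
    """
    diff = boolean_product(B, C) != X
    if observed is not None:
        diff = diff & observed
    return int(diff.sum())

# Toy usage: a rank-2 Boolean matrix with one entry treated as missing.
B = np.array([[1, 0], [1, 1], [0, 1]])
C = np.array([[1, 1, 0], [0, 1, 1]])
X = boolean_product(B, C)
mask = np.ones_like(X, dtype=bool)
mask[0, 2] = False                                   # pretend this cell is unobserved
print(reconstruction_error(X, B, C, observed=mask))  # 0: exact fit on the observed cells
```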
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_8", "@cite_9", "@cite_1", "@cite_6", "@cite_2", "@cite_5", "@cite_16", "@cite_17" ], "mid": [ "1986170391", "2890880108", "1815390093", "2136810006", "1497640967", "", "1482437849", "2131782448", "2160342152", "2915000964" ], "abstract": [ "Matrix factorizations---where a given data matrix is approximated by a product of two or more factor matrices---are powerful data mining tools. Among other tasks, matrix factorizations are often used to separate global structure from noise. This, however, requires solving the model order selection problem' of determining where fine-grained structure stops, and noise starts, i.e., what is the proper size of the factor matrices. Boolean matrix factorization (BMF)---where data, factors, and matrix product are Boolean---has received increased attention from the data mining community in recent years. The technique has desirable properties, such as high interpretability and natural sparsity. But so far no method for selecting the correct model order for BMF has been available. In this paper we propose to use the Minimum Description Length (MDL) principle for this task. Besides solving the problem, this well-founded approach has numerous benefits, e.g., it is automatic, does not require a likelihood function, is fast, and, as experiments show, is highly accurate. We formulate the description length function for BMF in general---making it applicable for any BMF algorithm. We extend an existing algorithm for BMF to use MDL to identify the best Boolean matrix factorization, analyze the complexity of the problem, and perform an extensive experimental evaluation to study its behavior.", "We study the problem of finding planted clusters in bipartite graphs. We present a simple two-step algorithm which provably finds even tiny clusters of size O(n^e), where n is the number of vertices in the graph and e > 0. Previous algorithms were only able to identify clusters of size Ω( sqrt(n) ). We evaluated the algorithm on synthetic and on real-world data; the experiments show that the algorithm can find extremely small clusters even in presence of high destructive noise.", "We present a framework for biclustering and clustering where the observations are general labels. Our approach is based on the maximum likelihood estimator and its convex relaxation, and generalizes recent works in graph clustering to the biclustering setting. In addition to standard biclustering setting where one seeks to discover clustering structure simultaneously in two domain sets, we show that the same algorithm can be as effective when clustering structure only occurs in one domain. This allows for an alternative approach to clustering that is more natural in some scenarios. We present theoretical results that provide sufficient conditions for the recovery of the true underlying clusters under a generalized stochastic block model. These are further validated by our empirical results on both synthetic and real data.", "Independent component analysis (ICA) is a computational method for separating a multivariate signal into subcomponents assuming the mutual statistical independence of the non-Gaussian source signals. The classical independent components analysis (ICA) framework usually assumes linear combinations of independent sources over the field of real-valued numbers R. 
In this paper, we investigate binary ICA for or mixtures (bICA), which can find applications in many domains including medical diagnosis, multi-cluster assignment, Internet tomography and network resource management. We prove that bICA is uniquely identifiable under the disjunctive generation model, and propose a deterministic iterative algorithm to determine the distribution of the latent random variables and the mixing matrix. The inverse problem to infer the values of latent variables is also considered for noisy measurements. We conduct an extensive simulation study to verify the effectiveness of the propose algorithm and present examples of real-world applications where bICA can be applied.", "Abstract We present new results on Boolean matrix factorization and a new algorithm based on these results. The results emphasize the significance of factorizations that provide from-below approximations of the input matrix. While the previously proposed algorithms do not consider the possibly different significance of different matrix entries, our results help measure such significance and suggest where to focus when computing factors. An experimental evaluation of the new algorithm on both synthetic and real data demonstrates its good performance in terms of good coverage by the first few factors as well as a small number of factors needed for an almost exact decomposition and indicates that the algorithm outperforms the available ones in these terms. We also propose future research topics.", "", "We propose a new approach for Collaborative Filtering which is based on Boolean Matrix Factorisation (BMF) and Formal Concept Analysis. In a series of experiments on real data (Movielens dataset) we compare the approach with the SVD- and NMF-based algorithms in terms of Mean Average Error (MAE). One of the experimental consequences is that it is enough to have a binary-scaled rating data to obtain almost the same quality in terms of MAE by BMF than for the SVD-based algorithm in case of non-scaled data.", "A decomposition of a binary matrix into two matrices gives a set of basis vectors and their appropriate combination to form the original matrix. Such decomposition solutions are useful in a number of application domains including text mining, role engineering as well as knowledge discovery. While a binary matrix can be decomposed in several ways, however, certain decompositions better characterize the semantics associated with the original matrix in a succinct but comprehensive way. Indeed, one can find different decompositions optimizing different criteria matching various semantics. In this paper, we first present a number of variants to the optimal Boolean matrix decomposition problem that have pragmatic implications. We then present a unified framework for modeling the optimal binary matrix decomposition and its variants using binary integer programming. Such modeling allows us to directly adopt the huge body of heuristic solutions and tools developed for binary integer programming. Although the proposed solutions are applicable to any domain of interest, for providing more meaningful discussions and results, in this paper, we present the binary matrix decomposition problem in a role engineering context, whose goal is to discover an optimal and correct set of roles from existing permissions, referred to as the role mining problem (RMP). This problem has gained significant interest in recent years as role based access control has become a popular means of enforcing security in databases. 
We consider several variants of the above basic RMP, including the min-noise RMP, delta-approximate RMP and edge-RMP. Solutions to each of them aid security administrators in specific scenarios. We then model these variants as Boolean matrix decomposition and present efficient heuristics to solve them.", "Matrix decomposition methods represent a data matrix as a product of two factor matrices: one containing basis vectors that represent meaningful concepts in the data, and another describing how the observed data can be expressed as combinations of the basis vectors. Decomposition methods have been studied extensively, but many methods return real-valued matrices. Interpreting real-valued factor matrices is hard if the original data is Boolean. In this paper, we describe a matrix decomposition formulation for Boolean data, the Discrete Basis Problem. The problem seeks for a Boolean decomposition of a binary matrix, thus allowing the user to easily interpret the basis vectors. We also describe a variation of the problem, the Discrete Basis Partitioning Problem. We show that both problems are NP-hard. For the Discrete Basis Problem, we give a simple greedy algorithm for solving it; for the Discrete Basis Partitioning Problem we show how it can be solved using existing methods. We present experimental results for the greedy algorithm and compare it against other, well known methods. Our algorithm gives intuitive basis vectors, but its reconstruction error is usually larger than with the real-valued methods. We discuss about the reasons for this behavior.", "During the past few years Boolean matrix factorization (BMF) has become an important direction in data analysis. The minimum description length principle (MDL) was successfully adapted in BMF for the model order selection. Nevertheless, a BMF algorithm performing good results from the standpoint of standard measures in BMF is missing. In this paper, we propose a novel from-below Boolean matrix factorization algorithm based on formal concept analysis. The algorithm utilizes the MDL principle as a criterion for the factor selection. On various experiments we show that the proposed algorithm outperforms---from different standpoints---existing state-of-the-art BMF algorithms." ] }
1905.12737
2947620140
Deep Neural Networks (DNNs) often rely on very large datasets for training. Given the large size of such datasets, it is conceivable that they contain certain samples that either do not contribute to, or negatively impact, the DNN's performance. If there is a large number of such samples, subsampling the training dataset in a way that removes them could provide an effective solution to both improve performance and reduce training time. In this paper, we propose an approach called Active Dataset Subsampling (ADS) to identify favorable subsets within a dataset for training using ensemble based uncertainty estimation. When applied to three image classification benchmarks (CIFAR-10, CIFAR-100 and ImageNet) we find that there are low uncertainty subsets, which can be as large as 50% of the full dataset, that negatively impact performance. These subsets are identified and removed with ADS. We demonstrate that datasets obtained using ADS with a lightweight ResNet-18 ensemble remain effective when used to train deeper models like ResNet-101. Our results provide strong empirical evidence that using all the available data for training can hurt performance on large scale vision tasks.
Active learning aims to select, from a large unlabeled dataset, the smallest possible training set to label in order to solve a specific task @cite_23 . The main goal of active learning is to minimize labeling costs for unlabeled data. In comparison, ADS does not focus directly on labeling costs, but rather the redundancy of samples in labeled datasets. Implicitly, if there is a large amount of redundancy in a labeled dataset for a given task, it indicates that the labeling budget was not optimally utilized while building that dataset. This in turn indicates the potential savings in annotation costs that can be achieved for that task through improved data selection strategies such as active learning.
{ "cite_N": [ "@cite_23" ], "mid": [ "2151023586" ], "abstract": [ "Active learning differs from “learning from examples” in that the learning algorithm assumes at least some control over what part of the input domain it receives information about. In some situations, active learning is provably more powerful than learning from examples alone, giving better generalization for a fixed number of training examples. In this article, we consider the problem of learning a binary concept in the absence of noise. We describe a formalism for active concept learning called selective sampling and show how it may be approximately implemented by a neural network. In selective sampling, a learner receives distribution information from the environment and queries an oracle on parts of the domain it considers “useful.” We test our implementation, called an SG-network, on three domains and observe significant improvement in generalization." ] }
1905.12737
2947620140
Deep Neural Networks (DNNs) often rely on very large datasets for training. Given the large size of such datasets, it is conceivable that they contain certain samples that either do not contribute to, or negatively impact, the DNN's performance. If there is a large number of such samples, subsampling the training dataset in a way that removes them could provide an effective solution to both improve performance and reduce training time. In this paper, we propose an approach called Active Dataset Subsampling (ADS) to identify favorable subsets within a dataset for training using ensemble based uncertainty estimation. When applied to three image classification benchmarks (CIFAR-10, CIFAR-100 and ImageNet) we find that there are low uncertainty subsets, which can be as large as 50% of the full dataset, that negatively impact performance. These subsets are identified and removed with ADS. We demonstrate that datasets obtained using ADS with a lightweight ResNet-18 ensemble remain effective when used to train deeper models like ResNet-101. Our results provide strong empirical evidence that using all the available data for training can hurt performance on large scale vision tasks.
A comprehensive review of classical approaches to active learning is presented in @cite_30 . In these approaches, data samples for which the current model is most uncertain are queried for labeling. Current state-of-the-art active learning techniques for computer vision DNNs are based on uncertainty estimates from ensembles @cite_33 @cite_20 . While these methods typically assume that the performance obtained by training on the entire data pool is an upper bound, we show that this is not always the case, and demonstrate distinct advantages of training on a subset of data using ADS regardless of the annotation costs. Active learning techniques also tend to focus on a single model, which leads to datasets that have been shown to transfer poorly to new model architectures @cite_31 . On the other hand, we empirically show that ADS selects datasets that are robust to changes in model architecture.
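A minimal sketch of the subsampling idea discussed above (and in the abstract) is given below: score every training sample with an ensemble's predictive disagreement and drop the lowest-uncertainty fraction. The scoring function, removal fraction, and the way models are represented are all assumptions for illustration; they are not the authors' exact procedure.

```python
import numpy as np

def disagreement_score(probs):
    """Fraction of ensemble members whose argmax prediction differs from the majority vote.

    probs: E x N x C array of softmax outputs from E ensemble members.
    Returns an N-vector in [0, 1]; 0 means all members agree on the label.
    """
    votes = probs.argmax(axis=-1)  # E x N
    majority = np.apply_along_axis(
        lambda v: np.bincount(v, minlength=probs.shape[-1]).argmax(), 0, votes)  # N
    return (votes != majority).mean(axis=0)

def subsample_dataset(probs, drop_fraction=0.5):
    """Indices kept after removing the lowest-uncertainty (most agreed-upon) samples."""
    scores = disagreement_score(probs)
    n_drop = int(drop_fraction * len(scores))
    order = np.argsort(scores, kind="stable")  # ascending: least uncertain first
    return np.sort(order[n_drop:])

# Toy usage: 5 ensemble members, 8 samples, 3 classes.
rng = np.random.default_rng(0)
logits = rng.normal(size=(5, 8, 3))
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
print(subsample_dataset(probs, drop_fraction=0.5))
```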
{ "cite_N": [ "@cite_30", "@cite_31", "@cite_33", "@cite_20" ], "mid": [ "2903158431", "2609701267", "2943152387", "2899943572" ], "abstract": [ "", "Self-paced learning and hard example mining re-weight training instances to improve learning accuracy. This paper presents two improved alternatives based on lightweight estimates of sample uncertainty in stochastic gradient descent (SGD): the variance in predicted probability of the correct class across iterations of mini-batch SGD, and the proximity of the correct class probability to the decision threshold. Extensive experimental results on six datasets show that our methods reliably improve accuracy in various network architectures, including additional gains on top of other popular training techniques, such as residual learning, momentum, ADAM, batch normalization, dropout, and distillation.", "This paper presents a study of semi-supervised learning with large convolutional networks. We propose a pipeline, based on a teacher student paradigm, that leverages a large collection of unlabelled images (up to 1 billion). Our main goal is to improve the performance for a given target architecture, like ResNet-50 or ResNext. We provide an extensive analysis of the success factors of our approach, which leads us to formulate some recommendations to produce high-accuracy models for image classification with semi-supervised learning. As a result, our approach brings important gains to standard architectures for image, video and fine-grained classification. For instance, by leveraging one billion unlabelled images, our learned vanilla ResNet-50 achieves 81.2 top-1 accuracy on the ImageNet benchmark.", "Annotating the right data for training deep neural networks is an important challenge. Active learning using uncertainty estimates from Bayesian Neural Networks (BNNs) could provide an effective solution to this. Despite being theoretically principled, BNNs require approximations to be applied to large-scale problems, where both performance and uncertainty estimation are crucial. In this paper, we introduce Deep Probabilistic Ensembles (DPEs), a scalable technique that uses a regularized ensemble to approximate a deep BNN. We conduct a series of large-scale visual active learning experiments to evaluate DPEs on classification with the CIFAR-10, CIFAR-100 and ImageNet datasets, and semantic segmentation with the BDD100k dataset. Our models require significantly less training data to achieve competitive performances, and steadily improve upon strong active learning baselines as the annotation budget is increased." ] }
1905.12737
2947620140
Deep Neural Networks (DNNs) often rely on very large datasets for training. Given the large size of such datasets, it is conceivable that they contain certain samples that either do not contribute to, or negatively impact, the DNN's performance. If there is a large number of such samples, subsampling the training dataset in a way that removes them could provide an effective solution to both improve performance and reduce training time. In this paper, we propose an approach called Active Dataset Subsampling (ADS) to identify favorable subsets within a dataset for training using ensemble based uncertainty estimation. When applied to three image classification benchmarks (CIFAR-10, CIFAR-100 and ImageNet) we find that there are low uncertainty subsets, which can be as large as 50% of the full dataset, that negatively impact performance. These subsets are identified and removed with ADS. We demonstrate that datasets obtained using ADS with a lightweight ResNet-18 ensemble remain effective when used to train deeper models like ResNet-101. Our results provide strong empirical evidence that using all the available data for training can hurt performance on large scale vision tasks.
Uncertainty estimation is a key component of ADS. Due to several important applications, such as active learning and adversarial sample detection, techniques to improve the uncertainty estimates of a DNN have recently gained significant momentum @cite_2 @cite_32 . These techniques may be broadly categorized into (i) Bayesian @cite_12 @cite_26 @cite_16 and (ii) non-Bayesian techniques @cite_29 @cite_8 . Bayesian techniques approximate a class of neural networks called Bayesian Neural Networks (BNNs) @cite_14 . When BNNs are trained, each weight in the network takes the form of a probability distribution in the parameter space. However, training a BNN involves marginalization over all possible assignments of weights, which is intractable for deep BNNs without approximations @cite_12 @cite_26 @cite_16 . Due to this, these approaches have traditionally been computationally more demanding and conceptually more complicated than non-Bayesian ones @cite_29 @cite_8 .
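To ground the distinction between these uncertainty estimates, the sketch below computes two common quantities from a set of stochastic forward passes (MC-dropout samples or ensemble members): the predictive entropy of the averaged distribution and the mutual information between predictions and model parameters, which isolates the model (epistemic) part of the uncertainty. This is the generic textbook decomposition, not a specific method from the works cited above, and all names are ours.

```python
import numpy as np

def entropy(p, axis=-1):
    return -(p * np.log(p + 1e-12)).sum(axis=axis)

def uncertainty_decomposition(probs):
    """Split predictive uncertainty into total, aleatoric-like, and epistemic parts.

    probs: S x N x C softmax outputs from S stochastic passes or S ensemble members.
    Returns (total_entropy, expected_entropy, mutual_information), each of shape (N,).
    """
    mean = probs.mean(axis=0)               # N x C: Monte Carlo predictive distribution
    total = entropy(mean)                   # H[ E_theta p(y | x, theta) ]
    expected = entropy(probs).mean(axis=0)  # E_theta H[ p(y | x, theta) ]
    mutual_info = total - expected          # epistemic part: I[y; theta | x]
    return total, expected, mutual_info

# Toy usage: 10 stochastic passes, 4 samples, 3 classes.
rng = np.random.default_rng(0)
logits = rng.normal(size=(10, 4, 3))
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
print(uncertainty_decomposition(probs)[2])  # high values indicate model disagreement
```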
{ "cite_N": [ "@cite_26", "@cite_14", "@cite_8", "@cite_29", "@cite_32", "@cite_2", "@cite_16", "@cite_12" ], "mid": [ "", "1567512734", "2786712888", "2768927604", "2810382146", "2792388013", "601603264", "2108677974" ], "abstract": [ "", "From the Publisher: Artificial \"neural networks\" are now widely used as flexible models for regression classification applications, but questions remain regarding what these models mean, and how they can safely be used when training data is limited. Bayesian Learning for Neural Networks shows that Bayesian methods allow complex neural network models to be used without fear of the \"overfitting\" that can occur with traditional neural network learning methods. Insight into the nature of these complex Bayesian models is provided by a theoretical investigation of the priors over functions that underlie them. Use of these models in practice is made possible using Markov chain Monte Carlo techniques. Both the theoretical and computational aspects of this work are of wider statistical interest, as they contribute to a better understanding of how Bayesian methods can be applied to complex problems. Presupposing only the basic knowledge of probability and statistics, this book should be of interest to many researchers in statistics, engineering, and artificial intelligence. Software for Unix systems that implements the methods described is freely available over the Internet.", "Modern neural networks are very powerful predictive models, but they are often incapable of recognizing when their predictions may be wrong. Closely related to this is the task of out-of-distribution detection, where a network must determine whether or not an input is outside of the set on which it is expected to safely perform. To jointly address these issues, we propose a method of learning confidence estimates for neural networks that is simple to implement and produces intuitively interpretable outputs. We demonstrate that on the task of out-of-distribution detection, our technique surpasses recently proposed techniques which construct confidence based on the network's output distribution, without requiring any additional labels or access to out-of-distribution examples. Additionally, we address the problem of calibrating out-of-distribution detectors, where we demonstrate that misclassified in-distribution examples can be used as a proxy for out-of-distribution examples.", "The problem of detecting whether a test sample is from in-distribution (i.e., training distribution by a classifier) or out-of-distribution sufficiently different from it arises in many real-world machine learning applications. However, the state-of-art deep neural networks are known to be highly overconfident in their predictions, i.e., do not distinguish in- and out-of-distributions. Recently, to handle this issue, several threshold-based detectors have been proposed given pre-trained neural classifiers. However, the performance of prior works highly depends on how to train the classifiers since they only focus on improving inference procedures. In this paper, we develop a novel training method for classifiers so that such inference algorithms can work better. In particular, we suggest two additional terms added to the original loss (e.g., cross entropy). The first one forces samples from out-of-distribution less confident by the classifier and the second one is for (implicitly) generating most effective training samples for the first one. 
In essence, our method jointly trains both classification and generative neural networks for out-of-distribution. We demonstrate its effectiveness using deep convolutional neural networks on various popular image datasets.", "We prove, under two sufficient conditions, that idealised models can have no adversarial examples. We discuss which idealised models satisfy our conditions, and show that idealised Bayesian neural networks (BNNs) satisfy these. We continue by studying near-idealised BNNs using HMC inference, demonstrating the theoretical ideas in practice. We experiment with HMC on synthetic data derived from MNIST for which we know the ground-truth image density, showing that near-perfect epistemic uncertainty correlates to density under image manifold, and that adversarial images lie off the manifold in our setting. This suggests why MC dropout, which can be seen as performing approximate inference, has been observed to be an effective defence against adversarial examples in practice; We highlight failure-cases of non-idealised BNNs relying on dropout, suggesting a new attack for dropout models and a new defence as well. Lastly, we demonstrate the defence on a cats-vs-dogs image classification task with a VGG13 variant.", "Measuring uncertainty is a promising technique for detecting adversarial examples, crafted inputs on which the model predicts an incorrect class with high confidence. But many measures of uncertainty exist, including predictive en- tropy and mutual information, each capturing different types of uncertainty. We study these measures, and shed light on why mutual information seems to be effective at the task of adversarial example detection. We highlight failure modes for MC dropout, a widely used approach for estimating uncertainty in deep models. This leads to an improved understanding of the drawbacks of current methods, and a proposal to improve the quality of uncertainty estimates using probabilistic model ensembles. We give illustrative experiments using MNIST to demonstrate the intuition underlying the different measures of uncertainty, as well as experiments on a real world Kaggle dogs vs cats classification dataset.", "Convolutional neural networks (CNNs) work well on large datasets. But labelled data is hard to collect, and in some applications larger amounts of data are not available. The problem then is how to use CNNs with small data -- as CNNs overfit quickly. We present an efficient Bayesian CNN, offering better robustness to over-fitting on small data than traditional approaches. This is by placing a probability distribution over the CNN's kernels. We approximate our model's intractable posterior with Bernoulli variational distributions, requiring no additional model parameters. On the theoretical side, we cast dropout network training as approximate inference in Bayesian neural networks. This allows us to implement our model using existing tools in deep learning with no increase in time complexity, while highlighting a negative result in the field. We show a considerable improvement in classification accuracy compared to standard techniques and improve on published state-of-the-art results for CIFAR-10.", "Variational methods have been previously explored as a tractable approximation to Bayesian inference for neural networks. However the approaches proposed so far have only been applicable to a few simple network architectures. 
This paper introduces an easy-to-implement stochastic variational method (or equivalently, minimum description length loss function) that can be applied to most neural networks. Along the way it revisits several common regularisers from a variational perspective. It also provides a simple pruning heuristic that can both drastically reduce the number of network weights and lead to improved generalisation. Experimental results are provided for a hierarchical multidimensional recurrent neural network applied to the TIMIT speech corpus." ] }
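Several of the cited abstracts above rely on Monte Carlo dropout or ensembling to obtain predictive uncertainty. For orientation, here is a minimal sketch of MC-dropout predictive entropy in PyTorch; it assumes a classifier `model` that contains dropout layers and is meant as an illustration of the general technique, not the procedure of any particular cited paper.

```python
import torch

@torch.no_grad()
def mc_dropout_predict(model, x, n_passes=20):
    """Average several stochastic forward passes with dropout kept active and
    report the predictive entropy of the averaged distribution as an
    uncertainty signal."""
    was_training = model.training
    model.train()  # keeps nn.Dropout layers stochastic (note: also affects batch norm)
    probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(n_passes)])
    model.train(was_training)
    mean_probs = probs.mean(dim=0)  # predictive distribution
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_probs, entropy
```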
1905.12737
2947620140
Deep Neural Networks (DNNs) often rely on very large datasets for training. Given the large size of such datasets, it is conceivable that they contain certain samples that either do not contribute to or negatively impact the DNN's performance. If there is a large number of such samples, subsampling the training dataset in a way that removes them could provide an effective solution to both improve performance and reduce training time. In this paper, we propose an approach called Active Dataset Subsampling (ADS) to identify favorable subsets within a dataset for training using ensemble-based uncertainty estimation. When applied to three image classification benchmarks (CIFAR-10, CIFAR-100 and ImageNet), we find that there are low-uncertainty subsets, which can be as large as 50% of the full dataset, that negatively impact performance. These subsets are identified and removed with ADS. We demonstrate that datasets obtained using ADS with a lightweight ResNet-18 ensemble remain effective when used to train deeper models like ResNet-101. Our results provide strong empirical evidence that using all the available data for training can hurt performance on large-scale vision tasks.
Recent methods based on ensembles have made notable progress on simplifying BNN approximations, but remain computationally demanding @cite_11 . In our work, we present a technique to efficiently scale up ensemble-based methods to experiments on larger datasets with millions of samples. We focus on reducing the train-time computational cost of generating ensembles.
{ "cite_N": [ "@cite_11" ], "mid": [ "2560321925" ], "abstract": [ "Deep neural networks (NNs) are powerful black box predictors that have recently achieved impressive performance on a wide spectrum of tasks. Quantifying predictive uncertainty in NNs is a challenging and yet unsolved problem. Bayesian NNs, which learn a distribution over weights, are currently the state-of-the-art for estimating predictive uncertainty; however these require significant modifications to the training procedure and are computationally expensive compared to standard (non-Bayesian) NNs. We propose an alternative to Bayesian NNs that is simple to implement, readily parallelizable, requires very little hyperparameter tuning, and yields high quality predictive uncertainty estimates. Through a series of experiments on classification and regression benchmarks, we demonstrate that our method produces well-calibrated uncertainty estimates which are as good or better than approximate Bayesian NNs. To assess robustness to dataset shift, we evaluate the predictive uncertainty on test examples from known and unknown distributions, and show that our method is able to express higher uncertainty on out-of-distribution examples. We demonstrate the scalability of our method by evaluating predictive uncertainty estimates on ImageNet." ] }
1905.12778
2947722513
The problem of online matching with stochastic rewards is a variant of the online bipartite matching problem where each edge has a probability of "success". When a match is made, it succeeds with the probability of the corresponding edge. Introducing this model, Mehta and Panigrahi (FOCS 2012) focused on the special case of identical and vanishingly small edge probabilities and gave an online algorithm which is 0.567-competitive against a deterministic offline LP. For the case of vanishingly small but heterogeneous probabilities, a subsequent work (SODA 2015) gave a 0.534-competitive algorithm against the same LP benchmark. We study a generalization of the problem to vertex-weighted graphs and compare against clairvoyant algorithms that know the sequence of arrivals and the edge probabilities in advance, but not the outcomes of potential matches. To the best of our knowledge, no results beating @math were previously known for this setting, even for identical probabilities. By introducing a novel path-based formulation, we show that a natural variant of the RANKING algorithm achieves the best possible competitive ratio of @math , for heterogeneous but vanishingly small edge probabilities. Our result also holds for non-vanishing probabilities that decompose as a product of two factors, one corresponding to each vertex of the edge. The idea of a path-based program may be of independent interest in other online matching problems with a stochastic component.
@cite_6 introduced the online bipartite matching model and proposed the optimal @math -competitive RANKING algorithm. Birnbaum and Mathieu @cite_20 and Goel and Mehta @cite_7 considerably simplified the original analysis. Subsequently, @cite_18 gave an elegant randomized primal-dual interpretation that we also use here. Their framework applies to and simplifies more general settings, such as vertex-weighted matchings in @cite_0 , and the related budgeted setting (the AdWords problem of @cite_2 ). There has also been a series of results in the random arrival model, where there is distributional information in the arrivals that can be exploited for better results @cite_11 @cite_15 @cite_1 @cite_4 @cite_12 . For a detailed survey we refer the reader to the monograph by Mehta @cite_14 .
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_4", "@cite_7", "@cite_1", "@cite_6", "@cite_0", "@cite_2", "@cite_15", "@cite_12", "@cite_20", "@cite_11" ], "mid": [ "2401584318", "2183531780", "2104224720", "2011730242", "2571343094", "2162656786", "1522394790", "1769120835", "", "2170675255", "2048837158", "2963337212" ], "abstract": [ "We give a simple proof that the ranking algorithm of Karp, Vazirani and Vazirani [KVV90] is 1-1 e competitive for the online bipartite matching problem. The proof is via a randomized primal-dual argument. Primal-dual algorithms have been successfully used for many online algorithm problems, but the dual constraints are always satisfied deterministically. This is the first instance of a non-trivial randomized primal-dual algorithm in which the dual constraints only hold in expectation. The approach also generalizes easily to the vertex-weighted version considered by [AGKM11]. Further we show that the proof is very similar to the deterministic primal-dual argument for the online budgeted allocation problem with small bids (also called the AdWords problem) of [MSVV05].", "Matching is a classic problem with a rich history and a significant impact, both on the theory of algorithms and in practice. Recently there has been a surge of interest in the online version of matching and its generalizations, due to the important new application domain of Internet advertising. The theory of online matching and allocation has played a critical role in designing algorithms for ad allocation. This monograph surveys the key problems, models and algorithms from online matchings, as well as their implication in the practice of ad allocation. The goal is to provide a classification of the problems in this area, an introduction into the techniques used, a glimpse into the practical impact, and to provide direction in terms of open questions. Matching continues to find core applications in diverse domains, and the advent of massive online and streaming data emphasizes the future applicability of the algorithms and techniques surveyed here.", "We consider the online bipartite matching problem in the unknown distribution input model. We show that the Ranking algorithm of [KVV90] achieves a competitive ratio of at least 0.653. This is the first analysis to show an algorithm which breaks the natural 1 - 1 e -barrier' in the unknown distribution model (our analysis in fact works in the stricter, random order model) and answers an open question in [GM08]. We also describe a family of graphs on which Ranking does no better than 0.727 in the random order model. Finally, we show that for graphs which have k > 1 disjoint perfect matchings, Ranking achieves a competitive ratio of at least 1 - √(1 k - 1 k2 + 1 n) -- in particular Ranking achieves a factor of 1 - o(1) for graphs with ω(1) disjoint perfect matchings.", "We study an online assignment problem, motivated by Adwords Allocation, in which queries are to be assigned to bidders with budget constraints. We analyze the performance of the Greedy algorithm (which assigns each query to the highest bidder) in a randomized input model with queries arriving in a random permutation. Our main result is a tight analysis of Greedy in this model showing that it has a competitive ratio of 1 - 1 e for maximizing the value of the assignment. We also consider the more standard i.i.d. model of input, and show that our analysis holds there as well. 
This is to be contrasted with the worst case analysis of [MSVV05] which shows that Greedy has a ratio of 1 2, and that the optimal algorithm presented there has a ratio of 1 - 1 e. The analysis of Greedy is important in the Adwords setting because it is the natural allocation algorithm for an auction-style process. From a theoretical perspective, our result simplifies and generalizes the classic algorithm of Karp, Vazirani and Vazirani for online bipartite matching. Our results include a new proof to show that the Ranking alforithm of [KVV90] has a ratio of 1 - 1 e in the worst case. It has been recently discovered [KV07] (independent of our results) that one of the crucial lemmas in [KVV90], related to a certain reduction, is incorrect. Our proof is direct, in that it does not go via such a reduction, which also enables us to generalize the analysis to our online assignment problem.", "We consider the online stochastic matching problem proposed by [Feldman J, Mehta A, Mirrokni VS, Muthukrishnan S (2009) Online stochastic matching: Beating 1-1 e. Annual IEEE Sympos. Foundations Comput. Sci. 117--126] as a model of display ad allocation. We are given a bipartite graph; one side of the graph corresponds to a fixed set of bins, and the other side represents the set of possible ball types. At each time step, a ball is sampled independently from the given distribution and it needs to be matched upon its arrival to an empty bin. The goal is to maximize the number of allocations. We present an online algorithm for this problem with a competitive ratio of 0.702. Before our result, algorithms with a competitive ratio better than 1-1 e were known under the assumption that the expected number of arriving balls of each type is integral. A key idea of the algorithm is to collect statistics about the decisions of the optimum offline solution using Monte Carlo sampling and use those statistics to guide the decisions of the online algorithm. We also show that our algorithm achieves a competitive ratio of 0.705 when the rates are integral. On the hardness side, we prove that no online algorithm can have a competitive ratio better than 0.823 under the known distribution model (and henceforth under the permutation model). This improves upon the 5 6 hardness result proved by Goel and Mehta [Goel G, Mehta A (2008) Online budgeted matching in random input models with applications to adwords. ACM-SIAM Symposium Discrete Algorithms 982--991] for the permutation model.", "There has been a great deal of interest recently in the relative power of on-line and off-line algorithms. An on-line algorithm receives a sequence of requests and must respond to each request as soon as it is receiveD. An off-line algorithm may wait until all requests have been received before determining its responses. One approach to evaluating an on-line algorithm is to compare its performance with that of the best possible off-line algorithm for the same problem. Thus, given a measure of \"profit\", the performance of an on-line algorithm can be measured by the worst-case ratio of its profit to that of the optimal off-line algorithm. This general approach has been applied in a number of contexts, including data structures [SITa], bin packing [CoGaJo], graph coloring [GyLe] and the k-server problem [MaMcSI]. 
Here we apply it to bipartite matching and show that a simple randomized on-line algorithm achieves the best possible performance.", "We study the following vertex-weighted online bipartite matching problem: G(U, V, E) is a bipartite graph. The vertices in U have weights and are known ahead of time, while the vertices in V arrive online in an arbitrary order and have to be matched upon arrival. The goal is to maximize the sum of weights of the matched vertices in U. When all the weights are equal, this reduces to the classic online bipartite matching problem for which Karp, Vazirani and Vazirani gave an optimal (1−1 e)-competitive algorithm in their seminal work [10]. Our main result is an optimal (1−1 e)-competitive randomized algorithm for general vertex weights. We use random perturbations of weights by appropriately chosen multiplicative factors. Our solution constitutes the first known generalization of the algorithm in [10] in this model and provides new insights into the role of randomization in online allocation problems. It also effectively solves the problem of online budgeted allocations [14] in the case when an agent makes the same bid for any desired item, even if the bid is comparable to his budget - complementing the results of [14, 3] which apply when the bids are much smaller than the budgets.", "How does a search engine company decide what ads to display with each query so as to maximize its revenue? This turns out to be a generalization of the online bipartite matching problem. We introduce the notion of a tradeoff revealing LP and use it to derive two optimal algorithms achieving competitive ratios of 1-1 e for this problem.", "", "We study the online stochastic bipartite matching problem, in a form motivated by display ad allocation on the Internet. In the online, but adversarial case, the celebrated result of Karp, Vazirani and Vazirani gives an approximation ratio of @math , a very familiar bound that holds for many online problems; further, the bound is tight in this case. In the online, stochastic case when nodes are drawn repeatedly from a known distribution, the greedy algorithm matches this approximation ratio, but still, no algorithm is known that beats the @math bound.Our main result is a @math -approximation online algorithm for stochastic bipartite matching, breaking this @math barrier. Furthermore, we show that no online algorithm can produce a @math approximation for an arbitrarily small @math for this problem. Our algorithms are based on computing an optimal offline solution to the expected instance, and using this solution as a guideline in the process of online allocation. We employ a novel application of the idea of the power of two choices from load balancing: we compute two disjoint solutions to the expected instance, and use both of them in the online algorithm in a prescribed preference order. To identify these two disjoint solutions, we solve a max flow problem in a boosted flow graph, and then carefully decompose this maximum flow to two edge-disjoint (near-) matchings. In addition to guiding the online decision making, these two offline solutions are used to characterize an upper bound for the optimum in any scenario. This is done by identifying a cut whose value we can bound under the arrival distribution. 
At the end, we discuss extensions of our results to more general bipartite allocations that are important in a display ad application.", "We examine the classic on-line bipartite matching problem studied by Karp, Vazirani, and Vazirani [8] and provide a simple proof of their result that the Ranking algorithm for this problem achieves a competitive ratio of 1 -- 1 e.", "Online matching has received significant attention over the last 15 years due to its close connection to Internet advertising. As the seminal work of Karp, Vazirani, and Vazirani has an optimal (1 - 1 epsilon) competitive ratio in the standard adversarial online model, much effort has gone into developing useful online models that incorporate some stochasticity in the arrival process. One such popular model is the \"known I.I.D. model\" where different customer-types arrive online from a known distribution. We develop algorithms with improved competitive ratios for some basic variants of this model with integral arrival rates, including: (a) the case of general weighted edges, where we improve the best-known ratio of 0.667 due to [Haeupler, Mirrokni and Zadimoghaddam WINE 2011] to 0.705; and (b) the vertex-weighted case, where we improve the 0.7250 ratio of [Jaillet and Lu Math. Oper. Res 2013] to 0.7299. We also consider two extensions, one is \"known I.I.D.\" with non-integral arrival rate and stochastic rewards; the other is \"known I.I.D.\" b-matching with non-integral arrival rate and stochastic rewards. We present a simple non-adaptive algorithm which works well simultaneously on the two extensions. One of the key ingredients of our improvement is the following (offline) approach to bipartite-matching polytopes with additional constraints. We first add several valid constraints in order to get a good fractional solution f; however, these give us less control over the structure of f. We next remove all these additional constraints and randomly move from f to a feasible point on the matching polytope with all coordinates being from the set 0, 1 k, 2 k,..., 1 for a chosen integer k. The structure of this solution is inspired by [Jaillet and Lu Math. Oper. Res 2013] and is a tractable structure for algorithm design and analysis. The appropriate random move preserves many of the removed constraints (approximately [exactly] with high probability [in expectation]). This underlies some of our improvements, and, we hope, could be of independent interest." ] }
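For reference, the RANKING algorithm that this record's abstract and related work revolve around is short enough to state directly. The sketch below implements the vanilla unweighted version of Karp, Vazirani and Vazirani (one random priority per offline vertex, each arrival matched to its highest-priority free neighbour); the vertex-weighted and stochastic-rewards variants discussed above modify this template rather than replace it.

```python
import random

def ranking_online_matching(offline_vertices, arrivals, seed=0):
    """Vanilla RANKING for unweighted online bipartite matching.

    offline_vertices: iterable of offline-side vertices, known in advance.
    arrivals: list of (online_vertex, neighbours) pairs revealed one by one.
    Returns a dict mapping matched offline vertices to online vertices.
    """
    rng = random.Random(seed)
    rank = {u: rng.random() for u in offline_vertices}  # random priorities = random permutation
    matched = {}
    for v, neighbours in arrivals:
        free = [u for u in neighbours if u not in matched]
        if free:
            best = min(free, key=lambda u: rank[u])  # highest-priority free neighbour
            matched[best] = v
    return matched
```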
1905.12624
2946909365
Combinatorial Bandits generalize multi-armed bandits, where k out of n arms are chosen at each round and the sum of the rewards is gained. We address the full-bandit feedback, in which the agent observes only the sum of rewards, in contrast to the semi-bandit feedback, in which the agent also observes the individual arms' rewards. We present the Combinatorial Successive Accepts and Rejects (CSAR) algorithm, which is a generalization of the SAR algorithm (2013) for the combinatorial setting. Our main contribution is an efficient sampling scheme that uses Hadamard matrices in order to estimate accurately the individual arms' expected rewards. We discuss two variants of the algorithm: the first minimizes the sample complexity and the second minimizes the regret. For the sample complexity, we also prove a matching lower bound that shows it is optimal. For the regret minimization, we prove a lower bound which is tight up to a factor of k. Finally, we run experiments and show that our algorithm outperforms other methods.
The problem of best arm identification, a.k.a. pure exploration, was introduced in @cite_6 , and later in @cite_9 , where the goal is to find the best arm using a minimal number of samples. The authors of @cite_6 describe two algorithms for this end; one of them is an elimination algorithm that in each round estimates all the arms with an increasing level of accuracy and eliminates the arms which are far from the optimal arm with high confidence. This algorithm is the conceptual basis for a number of algorithms, including the one we describe in this work.
{ "cite_N": [ "@cite_9", "@cite_6" ], "mid": [ "1881419322", "2147967768" ], "abstract": [ "We consider the framework of stochastic multi-armed bandit problems and study the possibilities and limitations of strategies that perform an online exploration of the arms. The strategies are assessed in terms of their simple regret, a regret notion that captures the fact that exploration is only constrained by the number of available rounds (not necessarily known in advance), in contrast to the case when the cumulative regret is considered and when exploitation needs to be performed at the same time.We believe that this performance criterion is suited to situations when the cost of pulling an arm is expressed in terms of resources rather than rewards. We discuss the links between the simple and the cumulative regret. The main result is that the required exploration-exploitation trade-offs are qualitatively different, in view of a general lower bound on the simple regret in terms of the cumulative regret.", "We incorporate statistical confidence intervals in both the multi-armed bandit and the reinforcement learning problems. In the bandit problem we show that given n arms, it suffices to pull the arms a total of O((n e2)log(1 δ)) times to find an e-optimal arm with probability of at least 1-δ. This bound matches the lower bound of Mannor and Tsitsiklis (2004) up to constants. We also devise action elimination procedures in reinforcement learning algorithms. We describe a framework that is based on learning the confidence interval around the value function or the Q-function and eliminating actions that are not optimal (with high probability). We provide a model-based and a model-free variants of the elimination method. We further derive stopping conditions guaranteeing that the learned policy is approximately optimal with high probability. Simulations demonstrate a considerable speedup and added robustness over e-greedy Q-learning." ] }
1905.12624
2946909365
Combinatorial Bandits generalize multi-armed bandits, where k out of n arms are chosen at each round and the sum of the rewards is gained. We address the full-bandit feedback, in which the agent observes only the sum of rewards, in contrast to the semi-bandit feedback, in which the agent also observes the individual arms' rewards. We present the Combinatorial Successive Accepts and Rejects (CSAR) algorithm, which is a generalization of the SAR algorithm (2013) for the combinatorial setting. Our main contribution is an efficient sampling scheme that uses Hadamard matrices in order to estimate accurately the individual arms' expected rewards. We discuss two variants of the algorithm: the first minimizes the sample complexity and the second minimizes the regret. For the sample complexity, we also prove a matching lower bound that shows it is optimal. For the regret minimization, we prove a lower bound which is tight up to a factor of k. Finally, we run experiments and show that our algorithm outperforms other methods.
Another branch of the research is dedicated to combinatorial bandits. The framework of stochastic combinatorial bandits was defined in @cite_7 , followed by numerous works, most of them for semi-bandit feedback @cite_3 @cite_22 @cite_15 @cite_26 . For full-bandit feedback, only a few algorithms were suggested. One of them is Sort & Merge @cite_33 , which is designed for cases when the aggregated reward is not necessarily the sum of the individual arms' rewards. This algorithm is based on an Explore-then-Exploit approach and achieves a regret of @math . Another algorithm is described in @cite_29 , based on LinUCB, where the NP-hardness of the subset selection stage is addressed by approximations. The last algorithm that should be mentioned in this context is the one described in @cite_28 . They consider a problem that somewhat generalizes the full-bandit setting, where the reward is not necessarily the sum of the individual arms' rewards, but the feedback for the agent is a linear combination of the arms' rewards. For the sake of completeness, we note that there are also a number of works on full-bandit feedback in the adversarial setting @cite_25 @cite_22 .
{ "cite_N": [ "@cite_26", "@cite_22", "@cite_7", "@cite_33", "@cite_28", "@cite_29", "@cite_3", "@cite_15", "@cite_25" ], "mid": [ "2966360153", "2963860006", "35251828", "2902826155", "2114802044", "2925540570", "2542446008", "2964007796", "" ], "abstract": [ "", "This paper investigates stochastic and adversarial combinatorial multi-armed bandit problems. In the stochastic setting under semi-bandit feedback, we derive a problem-specific regret lower bound, and discuss its scaling with the dimension of the decision space. We propose ESCB, an algorithm that efficiently exploits the structure of the problem and provide a finite-time analysis of its regret. ESCB has better performance guarantees than existing algorithms, and significantly outperforms these algorithms in practice. In the adversarial setting under bandit feedback, we propose COMBEXP, an algorithm with the same regret scaling as state-of-the-art algorithms, but with lower computational complexity for some combinatorial problems.", "", "Many real-world problems face the dilemma of choosing best @math out of @math options at a given time instant. This setup can be modelled as combinatorial bandit which chooses @math out of @math arms at each time, with an aim to achieve an efficient tradeoff between exploration and exploitation. This is the first work for combinatorial bandit where the reward received can be a non-linear function of the chosen @math arms. The direct use of multi-armed bandit requires choosing among @math -choose- @math options making the state space large. In this paper, we present a novel algorithm which is computationally efficient and the storage is linear in @math . The proposed algorithm is a divide-and-conquer based strategy, that we call CMAB-SM. Further, the proposed algorithm achieves a regret bound of @math for a time horizon @math , which is sub-linear in all parameters @math , @math , and @math . The evaluation results on different reward functions and arm distribution functions show significantly improved performance as compared to standard multi-armed bandit approach with @math choices.", "In online learning, a player chooses actions to play and receives reward and feedback from the environment with the goal of maximizing her reward over time. In this paper, we propose the model of combinatorial partial monitoring games with linear feedback, a model which simultaneously addresses limited feedback, infinite outcome space of the environment and exponentially large action space of the player. We present the Global Confidence Bound (GCB) algorithm, which integrates ideas from both combinatorial multi-armed bandits and finite partial monitoring games to handle all the above issues. GCB only requires feedback on a small set of actions and achieves O(T2 3 log T) distribution-independent regret and O(log T) distribution-dependent regret (the latter assuming unique optimal action), where T is the total time steps played. Moreover, the regret bounds only depend linearly on log |χ| rather than |χ|, where χ is the action space. GCB isolates offline optimization tasks from online learning and avoids explicit enumeration of all actions in the online learning part. 
We demonstrate that our model and algorithm can be applied to a crowdsourcing application leading to both an efficient learning algorithm and low regret, and argue that they can be applied to a wide range of combinatorial applications constrained with limited feedback.", "We study the problem of stochastic combinatorial pure exploration (CPE), where an agent sequentially pulls a set of single arms (a.k.a. a super arm) and tries to find the best super arm. Among a variety of problem settings of the CPE, we focus on the full-bandit setting, where we cannot observe the reward of each single arm, but only the sum of the rewards. Although we can regard the CPE with full-bandit feedback as a special case of pure exploration in linear bandits, an approach based on linear bandits is not computationally feasible since the number of super arms may be exponential. In this paper, we first propose a polynomial-time bandit algorithm for the CPE under general combinatorial constraints and provide an upper bound of the sample complexity. Second, we design an approximation algorithm for the 0-1 quadratic maximization problem, which arises in many bandit algorithms with confidence ellipsoids. Based on our approximation algorithm, we propose novel bandit algorithms for the top-k selection problem, and prove that our algorithms run in polynomial time. Finally, we conduct experiments on synthetic and real-world datasets, and confirm the validity of our theoretical analysis in terms of both the computation time and the sample complexity.", "In this paper, we study the stochastic combinatorial multi-armed bandit (CMAB) framework that allows a general nonlinear reward function, whose expected value may not depend only on the means of the input random variables but possibly on the entire distributions of these variables. Our framework enables a much larger class of reward functions such as the max() function and nonlinear utility functions. Existing techniques relying on accurate estimations of the means of random variables, such as the upper confidence bound (UCB) technique, do not work directly on these functions. We propose a new algorithm called stochastically dominant confidence bound (SDCB), which estimates the distributions of underlying random variables and their stochastically dominant confidence bounds. We prove that SDCB can achieve O(log T) distribution-dependent regret and O(√T) distribution-independent regret, where T is the time horizon. We apply our results to the K-MAX problem and expected utility maximization problems. In particular, for K-MAX, we provide the first polynomial-time approximation scheme (PTAS) for its offline problem, and give the first O(√T) bound on the (1 — e)-approximation regret of its online problem, for any e > 0.", "A stochastic combinatorial semi-bandit is an online learning problem where at each step a learning agent chooses a subset of ground items subject to constraints, and then observes stochastic weights of these items and receives their sum as a payoff. In this paper, we close the problem of computationally and sample efficient learning in stochastic combinatorial semi-bandits. In particular, we analyze a UCB-like algorithm for solving the problem, which is known to be computationally efficient; and prove O(KL(1 )logn) and O( p KLnlogn) upper bounds on its n-step regret, where L is the number of ground items, K is the maximum number of chosen items, and is the gap between the expected returns of the optimal and best suboptimal solutions. 
The gapdependent bound is tight up to a constant factor and the gap-free bound is tight up to a polylogarithmic factor.", "" ] }
1905.12624
2946909365
Combinatorial Bandits generalize multi-armed bandits, where k out of n arms are chosen at each round and the sum of the rewards is gained. We address the full-bandit feedback, in which the agent observes only the sum of rewards, in contrast to the semi-bandit feedback, in which the agent also observes the individual arms' rewards. We present the Combinatorial Successive Accepts and Rejects (CSAR) algorithm, which is a generalization of the SAR algorithm (2013) for the combinatorial setting. Our main contribution is an efficient sampling scheme that uses Hadamard matrices in order to estimate accurately the individual arms' expected rewards. We discuss two variants of the algorithm: the first minimizes the sample complexity and the second minimizes the regret. For the sample complexity, we also prove a matching lower bound that shows it is optimal. For the regret minimization, we prove a lower bound which is tight up to a factor of k. Finally, we run experiments and show that our algorithm outperforms other methods.
Alongside algorithms for the various models, a considerable part of the research is concerned with the question of what minimal sample complexity any algorithm needs in order to succeed in the identification task. For best arm identification, at least @math samples are necessary for any @math -PAC algorithm to identify the best arm @cite_30 . For multiple-arm identification, slightly more samples are needed, where the lower bound is shown to be @math @cite_12 @cite_8 . Another line of research bounds the minimal regret that any MAB algorithm must pay along a time horizon @math . For classical MABs, a well-known lower bound of @math was proven in @cite_14 . This result extends to @math for combinatorial bandits with semi-bandit feedback @cite_15 . For full-bandit feedback, the regret is bounded by @math @cite_1 and even @math if the rewards are chosen by an adversary @cite_10 . However, unlike the current setting, these bounds assume that not all subsets of arms can be selected by the agent. Another relevant bound is for the linear bandits model, where the regret is bounded by @math @cite_34 .
{ "cite_N": [ "@cite_30", "@cite_14", "@cite_8", "@cite_1", "@cite_15", "@cite_34", "@cite_10", "@cite_12" ], "mid": [ "2132876566", "2077902449", "8700100", "2152898676", "2964007796", "50486269", "2952320247", "2168810201" ], "abstract": [ "We consider the multi-armed bandit problem under the PAC (\"probably approximately correct\") model. It was shown by Even- (2002) that given n arms, a total of O((n e2)log(1 δ)) trials suffices in order to find an e-optimal arm with probability at least 1-δ. We establish a matching lower bound on the expected number of trials under any sampling policy. We furthermore generalize the lower bound, and show an explicit dependence on the (unknown) statistics of the arms. We also provide a similar bound within a Bayesian setting. The case where the statistics of the arms are known but the identities of the arms are not, is also discussed. For this case, we provide a lower bound of Θ((1 e2)(n+log(1 δ))) on the expected number of trials, as well as a sampling policy with a matching upper bound. If instead of the expected number of trials, we consider the maximum (over all sample paths) number of trials, we establish a matching upper and lower bound of the form Θ((n e2)log(1 δ)). Finally, we derive lower bounds on the expected regret, in the spirit of Lai and Robbins.", "In the multiarmed bandit problem, a gambler must decide which arm of K nonidentical slot machines to play in a sequence of trials so as to maximize his reward. This classical problem has received much attention because of the simple model it provides of the trade-off between exploration (trying out each arm to find the best one) and exploitation (playing the arm believed to give the best payoff). Past solutions for the bandit problem have almost always relied on assumptions about the statistics of the slot machines. In this work, we make no statistical assumptions whatsoever about the nature of the process generating the payoffs of the slot machines. We give a solution to the bandit problem in which an adversary, rather than a well-behaved stochastic process, has complete control over the payoffs. In a sequence of T plays, we prove that the per-round payoff of our algorithm approaches that of the best arm at the rate O(T-1 2). We show by a matching lower bound that this is the best possible. We also prove that our algorithm approaches the per-round payoff of any set of strategies at a similar rate: if the best strategy is chosen from a pool of N strategies, then our algorithm approaches the per-round payoff of the strategy at the rate O((log N1 2 T-1 2). Finally, we apply our results to the problem of playing an unknown repeated matrix game. We show that our algorithm approaches the minimax payoff of the unknown game at the rate O(T-1 2).", "We consider the problem of eciently exploring the arms of a stochastic bandit to identify the best subset of a specied size. Under the PAC and the xed-budget formulations, we derive improved bounds by using KL-divergence-based condence intervals. Whereas the application of a similar idea in the regret setting has yielded bounds in terms of the KL-divergence between the arms, our bounds in the pure-exploration setting involve the information\" between the arms. 
In addition to introducing this novel quantity to the bandits literature, we contribute a comparison between strategies based on uniform and adaptive sampling for pure-exploration problems, nding evidence in favor of the latter.", "We address online linear optimization problems when the possible actions of the decision maker are represented by binary vectors. The regret of the decision maker is the difference between her realized loss and the minimal loss she would have achieved by picking, in hindsight, the best possible action. Our goal is to understand the magnitude of the best possible minimax regret. We study the problem under three different assumptions for the feedback the decision maker receives: full information, and the partial information models of the so-called “semi-bandit” and “bandit” problems. In the full information case we show that the standard exponentially weighted average forecaster is a provably suboptimal strategy. For the semi-bandit model, by combining the Mirror Descent algorithm and the INF Implicitely Normalized Forecaster strategy, we are able to prove the first optimal bounds. Finally, in the bandit case we discuss existing results in light of a new lower bound, and suggest a conjecture on the optimal regret in that case.", "A stochastic combinatorial semi-bandit is an online learning problem where at each step a learning agent chooses a subset of ground items subject to constraints, and then observes stochastic weights of these items and receives their sum as a payoff. In this paper, we close the problem of computationally and sample efficient learning in stochastic combinatorial semi-bandits. In particular, we analyze a UCB-like algorithm for solving the problem, which is known to be computationally efficient; and prove O(KL(1 )logn) and O( p KLnlogn) upper bounds on its n-step regret, where L is the number of ground items, K is the maximum number of chosen items, and is the gap between the expected returns of the optimal and best suboptimal solutions. The gapdependent bound is tight up to a constant factor and the gap-free bound is tight up to a polylogarithmic factor.", "In the classical stochastic k-armed bandit problem, in each of a sequence of T rounds, a decision maker chooses one of k arms and incurs a cost chosen from an unknown distribution associated with that arm. The goal is to minimize regret, defined as the difference between the cost incurred by the algorithm and the optimal cost. In the linear optimization version of this problem (first considered by Auer [2002]), we view the arms as vectors in R, and require that the costs be linear functions of the chosen vector. As before, it is assumed that the cost functions are sampled independently from an unknown distribution. In this setting, the goal is to find algorithms whose running time and regret behave well as functions of the number of rounds T and the dimensionality n (rather than the number of arms, k, which may be exponential in n or even infinite). We give a nearly complete characterization of this problem in terms of both upper and lower bounds for the regret. In certain special cases (such as when the decision region is a polytope), the regret is polylog(T ). In general though, the optimal regret is Θ∗( √ T ) — our lower bounds rule out the possibility of obtaining polylog(T ) rates in general. 
We present two variants of an algorithm based on the idea of “upper confidence bounds.” The first, due to Auer [2002], but not fully analyzed, obtains regret whose dependence on n and T are both essentially optimal, but which may be computationally intractable when the decision set is a polytope. The second version can be efficiently implemented when the decision set is a polytope (given as an intersection of half-spaces), but gives up a factor of √ n in the regret bound. Our results also extend to the setting where the set of allowed decisions may change over time. ∗Department of Computer Science, University of Chicago, varsha@cs.uchicago.edu †Toyota Technological Institute at Chicago, hayest,sham @tti-c.org", "We revisit the study of optimal regret rates in bandit combinatorial optimization---a fundamental framework for sequential decision making under uncertainty that abstracts numerous combinatorial prediction problems. We prove that the attainable regret in this setting grows as @math where @math is the dimension of the problem and @math is a bound over the maximal instantaneous loss, disproving a conjecture of Audibert, Bubeck, and Lugosi (2013) who argued that the optimal rate should be of the form @math . Our bounds apply to several important instances of the framework, and in particular, imply a tight bound for the well-studied bandit shortest path problem. By that, we also resolve an open problem posed by Cesa-Bianchi and Lugosi (2012).", "We consider the problem of selecting, from among the arms of a stochastic n-armed bandit, a subset of size m of those arms with the highest expected rewards, based on efficiently sampling the arms. This \"subset selection\" problem finds application in a variety of areas. In the authors' previous work (Kalyanakrishnan & Stone, 2010), this problem is framed under a PAC setting (denoted \"Explore-m\"), and corresponding sampling algorithms are analyzed. Whereas the formal analysis therein is restricted to the worst case sample complexity of algorithms, in this paper, we design and analyze an algorithm (\"LUCB\") with improved expected sample complexity. Interestingly LUCB bears a close resemblance to the well-known UCB algorithm for regret minimization. The expected sample complexity bound we show for LUCB is novel even for single-arm selection (Explore-1). We also give a lower bound on the worst case sample complexity of PAC algorithms for Explore-m." ] }
1905.12534
2946841533
In this preliminary report, we present a simple but very effective technique to stabilize the training of CNN-based GANs. Motivated by recently published methods using frequency decomposition of convolutions (e.g. Octave Convolutions), we propose a novel convolution scheme to stabilize the training and reduce the likelihood of a mode collapse. The basic idea of our approach is to split convolutional filters into additive high and low frequency parts, while shifting weight updates from low to high during training. Intuitively, this method forces GANs to learn low frequency coarse image structures before descending into fine (high frequency) details. Our approach is orthogonal and complementary to existing stabilization methods and can simply be plugged into any CNN-based GAN architecture. First experiments on the CelebA dataset show the effectiveness of the proposed method.
The goal of generative models is to match the real data distribution @math with the generated data distribution @math . Thus, minimizing the difference between the two distributions is a crucial point for training generative models. Goodfellow introduced the adversarial framework (GAN) @cite_16 , which is capable of learning deep generative models by minimizing the Jensen-Shannon divergence between @math and @math . This optimization problem can be described as a minimax game between the generator @math , which learns how to generate samples which resemble real data, and a discriminator @math , which learns to discriminate between real and fake data. Throughout this process, @math indirectly learns how to model @math by taking samples @math from a fixed distribution @math (e.g. Gaussian) and forcing the generated samples @math to match @math . The objective loss function is the standard two-player minimax objective of @cite_16 , which is written out below for completeness.
{ "cite_N": [ "@cite_16" ], "mid": [ "2099471712" ], "abstract": [ "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples." ] }
1905.12534
2946841533
In this preliminary report, we present a simple but very effective technique to stabilize the training of CNN-based GANs. Motivated by recently published methods using frequency decomposition of convolutions (e.g. Octave Convolutions), we propose a novel convolution scheme to stabilize the training and reduce the likelihood of a mode collapse. The basic idea of our approach is to split convolutional filters into additive high and low frequency parts, while shifting weight updates from low to high during training. Intuitively, this method forces GANs to learn low frequency coarse image structures before descending into fine (high frequency) details. Our approach is orthogonal and complementary to existing stabilization methods and can simply be plugged into any CNN-based GAN architecture. First experiments on the CelebA dataset show the effectiveness of the proposed method.
Deep Convolutional GAN (DCGAN) @cite_27 is one of the popular and successful network topology designs for GANs that achieves, to a certain extent, consistent stability during training. It is a direct extension of the GAN described above, except that it is mainly composed of convolutional and convolutional-transpose layers, without max pooling or fully connected layers, in both the discriminator and the generator.
{ "cite_N": [ "@cite_27" ], "mid": [ "2173520492" ], "abstract": [ "In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations." ] }
1905.12534
2946841533
In this preliminary report, we present a simple but very effective technique to stabilize the training of CNN-based GANs. Motivated by recently published methods using frequency decomposition of convolutions (e.g. Octave Convolutions), we propose a novel convolution scheme to stabilize the training and reduce the likelihood of a mode collapse. The basic idea of our approach is to split convolutional filters into additive high and low frequency parts, while shifting weight updates from low to high during training. Intuitively, this method forces GANs to learn low frequency coarse image structures before descending into fine (high frequency) details. Our approach is orthogonal and complementary to existing stabilization methods and can simply be plugged into any CNN-based GAN architecture. First experiments on the CelebA dataset show the effectiveness of the proposed method.
Least-Squares GAN (LSGAN) @cite_19 also tries to minimize the Pearson @math divergence between the real and the generated distribution. The standard GAN uses a sigmoid cross entropy loss for the discriminator to determine whether its input comes from @math or @math . Nonetheless, this loss has an important drawback: if a generated sample is classified as real by the discriminator, there is no apparent reason for the generator to be updated, even though the generated sample may be located far from the real data distribution. In other words, the sigmoid cross entropy loss can barely push such generated samples towards the real data distribution once its classification goal has been achieved. Motivated by this phenomenon, LSGAN replaces the sigmoid cross entropy loss with a least squares loss, which directly penalizes fake samples by pulling them towards the real data distribution.
{ "cite_N": [ "@cite_19" ], "mid": [ "2593414223" ], "abstract": [ "Unsupervised learning with generative adversarial networks (GANs) has proven hugely successful. Regular GANs hypothesize the discriminator as a classifier with the sigmoid cross entropy loss function. However, we found that this loss function may lead to the vanishing gradients problem during the learning process. To overcome such a problem, we propose in this paper the Least Squares Generative Adversarial Networks (LSGANs) which adopt the least squares loss function for the discriminator. We show that minimizing the objective function of LSGAN yields minimizing the Pearson X2 divergence. There are two benefits of LSGANs over regular GANs. First, LSGANs are able to generate higher quality images than regular GANs. Second, LSGANs perform more stable during the learning process. We evaluate LSGANs on LSUN and CIFAR-10 datasets and the experimental results show that the images generated by LSGANs are of better quality than the ones generated by regular GANs. We also conduct two comparison experiments between LSGANs and regular GANs to illustrate the stability of LSGANs." ] }
1905.12534
2946841533
In this preliminary report, we present a simple but very effective technique to stabilize the training of CNN-based GANs. Motivated by recently published methods using frequency decomposition of convolutions (e.g. Octave Convolutions), we propose a novel convolution scheme to stabilize the training and reduce the likelihood of a mode collapse. The basic idea of our approach is to split convolutional filters into additive high and low frequency parts, while shifting weight updates from low to high during training. Intuitively, this method forces GANs to learn low frequency coarse image structures before descending into fine (high frequency) details. Our approach is orthogonal and complementary to existing stabilization methods and can simply be plugged into any CNN-based GAN architecture. First experiments on the CelebA dataset show the effectiveness of the proposed method.
Standard convolutional layers are designed to detect local conjunctions of features from the previous layer and to map their appearance to a feature map whose spatial resolution never changes. Nevertheless, in accordance with the spatial-frequency model @cite_12 @cite_7 , natural images can be factorized into a low frequency signal that captures the global layout and coarse structure, and a high frequency signal that captures fine details. Attracted by the idea of having feature maps with different resolutions, recent works based on deep learning approaches @cite_8 @cite_22 have built, on top of standard CNNs, architecture schemes that have access to different frequency content. A multigrid architecture is suggested in @cite_8 , with the intention of wiring cross-scale connections into the network structure at the lowest level. In order to create such a topology, every convolutional filter extends spatially within grids ( @math , @math ), across the multiple scales ( @math ) of a pyramid, and over the corresponding feature channels ( @math ). Building in this fashion yields a combination of pyramids across the architecture ( @math , @math , @math , @math ).
{ "cite_N": [ "@cite_22", "@cite_7", "@cite_12", "@cite_8" ], "mid": [ "2938458886", "", "1999908130", "2616324011" ], "abstract": [ "In natural images, information is conveyed at different frequencies where higher frequencies are usually encoded with fine details and lower frequencies are usually encoded with global structures. Similarly, the output feature maps of a convolution layer can also be seen as a mixture of information at different frequencies. In this work, we propose to factorize the mixed feature maps by their frequencies and design a novel Octave Convolution (OctConv) operation to store and process feature maps that vary spatially \"slower\" at a lower spatial resolution reducing both memory and computation cost. Unlike existing multi-scale meth-ods, OctConv is formulated as a single, generic, plug-and-play convolutional unit that can be used as a direct replacement of (vanilla) convolutions without any adjustments in the network architecture. It is also orthogonal and complementary to methods that suggest better topologies or reduce channel-wise redundancy like group or depth-wise convolutions. We experimentally show that by simply replacing con-volutions with OctConv, we can consistently boost accuracy for both image and video recognition tasks, while reducing memory and computational cost. An OctConv-equipped ResNet-152 can achieve 82.9 top-1 classification accuracy on ImageNet with merely 22.2 GFLOPs.", "", "SUMMARY 1.Thecontrast thresholds ofavariety ofgrating patterns havebeen measured overawiderangeofspatial frequencies. 2.Contrast thresholds forthedetection ofgratings whoseluminance profiles aresine, square, rectangular orsaw-tooth wavescanbesimply related using Fourier theory. 3.Overawiderangeofspatial frequencies thecontrast threshold ofa grating isdetermined onlybytheamplitude ofthefundamental Fourier component ofitswaveform. 4.Gratings ofcomplex waveformcannotbedistinguished fromsinewavegratings until their contrast hasbeenraised toalevel atwhichthe higher harmonic components reach their independent threshold. 5.Thesefindings canbeexplained bytheexistence within thenervous system oflinearly operating independent mechanisms selectively sensitive tolimited ranges ofspatial frequencies.", "We propose a multigrid extension of convolutional neural networks (CNNs). Rather than manipulating representations living on a single spatial grid, our network layers operate across scale space, on a pyramid of grids. They consume multigrid inputs and produce multigrid outputs, convolutional filters themselves have both within-scale and cross-scale extent. This aspect is distinct from simple multiscale designs, which only process the input at different scales. Viewed in terms of information flow, a multigrid network passes messages across a spatial pyramid. As a consequence, receptive field size grows exponentially with depth, facilitating rapid integration of context. Most critically, multigrid structure enables networks to learn internal attention and dynamic routing mechanisms, and use them to accomplish tasks on which modern CNNs fail. Experiments demonstrate wide-ranging performance advantages of multigrid. On CIFAR and ImageNet classification tasks, flipping from a single grid to multigrid within the standard CNN paradigm improves accuracy, while being compute and parameter efficient. Multigrid is independent of other architectural choices, we show synergy in combination with residual connections. 
Multigrid yields dramatic improvement on a synthetic semantic segmentation dataset. Most strikingly, relatively shallow multigrid networks can learn to directly perform spatial transformation tasks, where, in contrast, current CNNs fail. Together, our results suggest that continuous evolution of features on a multigrid pyramid is a more powerful alternative to existing CNN designs on a flat grid." ] }
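Since the works cited in this record revolve around convolutions that operate on two spatial resolutions at once, here is a minimal octave-convolution-style layer in PyTorch. It is a simplified reading of the OctConv idea ( @cite_22 ), not the high/low frequency filter split with shifted weight updates proposed in this record's paper.

```python
import torch.nn as nn
import torch.nn.functional as F

class OctaveConv(nn.Module):
    """Feature maps are split into a high-frequency part at full resolution and
    a low-frequency part at half resolution; convolutions run along the four
    in/out frequency paths and the results are merged by pooling/upsampling."""
    def __init__(self, in_ch, out_ch, kernel_size=3, alpha=0.5):
        super().__init__()
        in_lo, out_lo = int(alpha * in_ch), int(alpha * out_ch)
        in_hi, out_hi = in_ch - in_lo, out_ch - out_lo
        pad = kernel_size // 2
        self.conv_hh = nn.Conv2d(in_hi, out_hi, kernel_size, padding=pad)
        self.conv_hl = nn.Conv2d(in_hi, out_lo, kernel_size, padding=pad)
        self.conv_lh = nn.Conv2d(in_lo, out_hi, kernel_size, padding=pad)
        self.conv_ll = nn.Conv2d(in_lo, out_lo, kernel_size, padding=pad)

    def forward(self, x_hi, x_lo):
        # high-frequency output: high->high plus upsampled low->high
        y_hi = self.conv_hh(x_hi) + F.interpolate(self.conv_lh(x_lo), scale_factor=2, mode="nearest")
        # low-frequency output: low->low plus pooled high->low
        y_lo = self.conv_ll(x_lo) + self.conv_hl(F.avg_pool2d(x_hi, 2))
        return y_hi, y_lo
```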