aid
stringlengths
9
15
mid
stringlengths
7
10
abstract
stringlengths
78
2.56k
related_work
stringlengths
92
1.77k
ref_abstract
dict
1811.10907
2950618454
Diffusion is commonly used as a ranking or re-ranking method in retrieval tasks to achieve higher retrieval performance, and has attracted much attention in recent years. A downside of diffusion is that it runs slowly compared to naive k-NN search, incurring a non-trivial online computational cost on large datasets. To overcome this weakness, we propose a novel diffusion technique in this paper. Instead of applying diffusion to the query, we pre-compute the diffusion results of each element in the database, making the online search a simple linear combination on top of the k-NN search process. Our proposed method is 10 times faster in terms of online search speed. Moreover, we propose to use late truncation instead of the early truncation used in previous works to achieve better retrieval performance.
To tackle this inefficiency, past efforts have been made to scale diffusion up to larger datasets. @cite_7 proposed to accelerate the construction of the affinity matrix that defines the graph; @cite_16 reported that Dong's method is orders of magnitude faster than exhaustive search with only limited decreases in performance. Another approach to improving efficiency is approximate nearest neighbor (ANN) search. Compared to constructing the graph by exhaustive @math -NN search, ANN search is faster and provides comparable accuracy @cite_20 @cite_4 . Most recently, @cite_21 approximated the affinity matrix with a low-rank spectral decomposition to reduce the online computational cost. However, this method did not yield much improvement in retrieval performance.
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_21", "@cite_16", "@cite_20" ], "mid": [ "2077815765", "2110026675", "2604652497", "2559091987", "2124509324" ], "abstract": [ "Product quantization (PQ) is an effective vector quantization method. A product quantizer can generate an exponentially large codebook at very low memory time cost. The essence of PQ is to decompose the high-dimensional vector space into the Cartesian product of subspaces and then quantize these subspaces separately. The optimal space decomposition is important for the PQ performance, but still remains an unaddressed issue. In this paper, we optimize PQ by minimizing quantization distortions w.r.t the space decomposition and the quantization codebooks. We present two novel solutions to this challenging optimization problem. The first solution iteratively solves two simpler sub-problems. The second solution is based on a Gaussian assumption and provides theoretical analysis of the optimality. We evaluate our optimized product quantizers in three applications: (i) compact encoding for exhaustive ranking [1], (ii) building inverted multi-indexing for non-exhaustive search [2], and (iii) compacting image representations for image retrieval [3]. In all applications our optimized product quantizers outperform existing solutions.", "K-Nearest Neighbor Graph (K-NNG) construction is an important operation with many web related applications, including collaborative filtering, similarity search, and many others in data mining and machine learning. Existing methods for K-NNG construction either do not scale, or are specific to certain similarity measures. We present NN-Descent, a simple yet efficient algorithm for approximate K-NNG construction with arbitrary similarity measures. Our method is based on local search, has minimal space overhead and does not rely on any shared global index. Hence, it is especially suitable for large-scale applications where data structures need to be distributed over the network. 
We have shown with a variety of datasets and similarity measures that the proposed method typically converges to above 90% recall with each point comparing only to several percent of the whole dataset on average.", "Despite the success of deep learning on representing images for particular object retrieval, recent studies show that the learned representations still lie on manifolds in a high dimensional space. This makes the Euclidean nearest neighbor search biased for this task. Exploring the manifolds online remains expensive even if a nearest neighbor graph has been computed offline. This work introduces an explicit embedding reducing manifold search to Euclidean search followed by dot product similarity search. This is equivalent to linear graph filtering of a sparse signal in the frequency domain. To speed up online search, we compute an approximate Fourier basis of the graph offline. We improve the state of art on particular object retrieval datasets including the challenging Instre dataset containing small objects. At a scale of 10^5 images, the offline cost is only a few hours, while query time is comparable to standard similarity search.", "Query expansion is a popular method to improve the quality of image retrieval with both conventional and CNN representations. It has been so far limited to global image similarity. This work focuses on diffusion, a mechanism that captures the image manifold in the feature space. An efficient off-line stage allows optional reduction in the number of stored regions. In the on-line stage, the proposed handling of unseen queries in the indexing stage removes additional computation to adjust the precomputed data. We perform diffusion through a sparse linear system solver, yielding practical query times well below one second. Experimentally, we observe a significant boost in performance of image retrieval with compact CNN descriptors on standard benchmarks, especially when the query object covers only a small part of the image.
Small objects have been a common failure case of CNN-based retrieval.", "This paper introduces a product quantization-based approach for approximate nearest neighbor search. The idea is to decompose the space into a Cartesian product of low-dimensional subspaces and to quantize each subspace separately. A vector is represented by a short code composed of its subspace quantization indices. The euclidean distance between two vectors can be efficiently estimated from their codes. An asymmetric version increases precision, as it computes the approximate distance between a vector and a code. Experimental results show that our approach searches for nearest neighbors efficiently, in particular in combination with an inverted file system. Results for SIFT and GIST image descriptors show excellent search accuracy, outperforming three state-of-the-art approaches. The scalability of our approach is validated on a data set of two billion vectors." ] }
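The offline/online split described in the abstract of 1811.10907 can be sketched as follows. This is a hedged illustration, not the paper's exact formulation: the k-NN graph construction, the damping factor `alpha`, the symmetric normalization, and the non-negative top-k weighting in `online_search` are all illustrative assumptions.

```python
import numpy as np

def offline_diffusion(F, alpha=0.9, k=3):
    """Precompute a diffusion score vector for every database element.

    F: (n, d) L2-normalised database features. Builds a k-NN affinity
    graph and solves the closed-form diffusion (I - alpha*S)^-1, whose
    row i is the diffusion result of database element i (sketch only).
    """
    n = F.shape[0]
    sims = F @ F.T
    np.fill_diagonal(sims, -np.inf)          # no self-edges
    A = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(sims[i])[-k:]      # keep top-k neighbours
        A[i, nbrs] = np.maximum(sims[i, nbrs], 0)
    A = (A + A.T) / 2                        # symmetrise
    d = A.sum(1) + 1e-12
    S = A / np.sqrt(np.outer(d, d))          # normalised affinity
    return np.linalg.inv(np.eye(n) - alpha * S)

def online_search(q, F, D, k=3):
    """Online stage: plain k-NN search, then a linear combination of the
    precomputed diffusion rows -- no graph traversal at query time."""
    sims = F @ q
    nbrs = np.argsort(sims)[-k:]
    w = np.maximum(sims[nbrs], 0)            # neighbour similarities as weights
    scores = w @ D[nbrs]                     # weighted sum of precomputed rows
    return np.argsort(scores)[::-1]          # ranking, best first
```

The point of the sketch is structural: everything expensive happens once in `offline_diffusion`, while `online_search` costs one k-NN lookup plus a (k, n) matrix-vector product.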
1811.10779
2902109187
Softmax activation is commonly used to output the probability distribution over categories based on a certain distance metric. In scenarios like one-shot learning, the distance metric is often chosen to be the squared Euclidean distance between the query sample and the category prototype. This practice works well most of the time. However, we find that choosing squared Euclidean distance may cause distance explosion, leading gradients to be extremely sparse in the early stage of back propagation. We term this phenomenon the early sparse gradients problem. Though it does not harm the convergence of the model, it may set up a barrier to further model improvement. To tackle this problem, we propose to use leaky squared Euclidean distance to impose a restriction on distances. In this way, we can avoid distance explosion and increase the magnitude of gradients. Extensive experiments are conducted on the Omniglot and miniImageNet datasets. We show that using leaky squared Euclidean distance can improve one-shot classification accuracy on both datasets.
There are several variants of softmax activation. @cite_1 show that softmax loss does not preserve the compactness of clusters in image classification, and propose a center loss that explicitly pulls features within a cluster toward the cluster center. @cite_12 find that softmax loss places no constraint on the margins between clusters, and present a modification of softmax activation to maximize the margins. @cite_17 introduce the angular softmax (A-Softmax) loss, which can be viewed as imposing discriminative constraints on a hypersphere manifold so that convolutional neural networks learn angularly discriminative features. @cite_4 show that individual saturation leads to short-lived gradient propagation in softmax activation, which hampers the robust exploration of SGD, and suggest annealed noise injection to mitigate this problem.
{ "cite_N": [ "@cite_4", "@cite_1", "@cite_12", "@cite_17" ], "mid": [ "2963227127", "2520774990", "2594407953", "2963466847" ], "abstract": [ "Over the past few years, softmax and SGD have become a commonly used component and the default training strategy in CNN frameworks, respectively. However, when optimizing CNNs with SGD, the saturation behavior behind softmax always gives us an illusion of training well and then is omitted. In this paper, we first emphasize that the early saturation behavior of softmax will impede the exploration of SGD, which sometimes is a reason for model converging at a bad local-minima, then propose Noisy Softmax to mitigating this early saturation issue by injecting annealed noise in softmax during each iteration. This operation based on noise injection aims at postponing the early saturation and further bringing continuous gradients propagation so as to significantly encourage SGD solver to be more exploratory and help to find a better local-minima. This paper empirically verifies the superiority of the early softmax desaturation, and our method indeed improves the generalization ability of CNN model by regularization. We experimentally find that this early desaturation helps optimization in many tasks, yielding state-of-the-art or competitive results on several popular benchmark datasets.", "Convolutional neural networks (CNNs) have been widely used in computer vision community, significantly improving the state-of-the-art. In most of the available CNNs, the softmax loss function is used as the supervision signal to train the deep model. In order to enhance the discriminative power of the deeply learned features, this paper proposes a new supervision signal, called center loss, for face recognition task. Specifically, the center loss simultaneously learns a center for deep features of each class and penalizes the distances between the deep features and their corresponding class centers. 
More importantly, we prove that the proposed center loss function is trainable and easy to optimize in the CNNs. With the joint supervision of softmax loss and center loss, we can train a robust CNNs to obtain the deep features with the two key learning objectives, inter-class dispension and intra-class compactness as much as possible, which are very essential to face recognition. It is encouraging to see that our CNNs (with such joint supervision) achieve the state-of-the-art accuracy on several important face recognition benchmarks, Labeled Faces in the Wild (LFW), YouTube Faces (YTF), and MegaFace Challenge. Especially, our new approach achieves the best results on MegaFace (the largest public domain face benchmark) under the protocol of small training set (contains under 500000 images and under 20000 persons), significantly improving the previous results and setting new state-of-the-art for both face recognition and face verification tasks.", "Cross-entropy loss together with softmax is arguably one of the most common used supervision components in convolutional neural networks (CNNs). Despite its simplicity, popularity and excellent performance, the component does not explicitly encourage discriminative learning of features. In this paper, we propose a generalized large-margin softmax (L-Softmax) loss which explicitly encourages intra-class compactness and inter-class separability between learned features. Moreover, L-Softmax not only can adjust the desired margin but also can avoid overfitting. We also show that the L-Softmax loss can be optimized by typical stochastic gradient descent. 
Extensive experiments on four benchmark datasets demonstrate that the deeply-learned features with L-softmax loss become more discriminative, hence significantly boosting the performance on a variety of visual classification and verification tasks.", "This paper addresses deep face recognition (FR) problem under open-set protocol, where ideal face features are expected to have smaller maximal intra-class distance than minimal inter-class distance under a suitably chosen metric space. However, few existing algorithms can effectively achieve this criterion. To this end, we propose the angular softmax (A-Softmax) loss that enables convolutional neural networks (CNNs) to learn angularly discriminative features. Geometrically, A-Softmax loss can be viewed as imposing discriminative constraints on a hypersphere manifold, which intrinsically matches the prior that faces also lie on a manifold. Moreover, the size of angular margin can be quantitatively adjusted by a parameter m. We further derive specific m to approximate the ideal feature criterion. Extensive analysis and experiments on Labeled Face in the Wild (LFW), Youtube Faces (YTF) and MegaFace Challenge 1 show the superiority of A-Softmax loss in FR tasks." ] }
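The early sparse gradients problem described in the abstract of 1811.10779 is easy to reproduce numerically. The sketch below uses hand-picked squared distances of a size typical for a 64-dimensional embedding space; dividing by the dimension is a hypothetical stand-in for "imposing a restriction on distances", not the paper's leaky squared Euclidean distance.

```python
import numpy as np

def softmax(z):
    z = z - z.max()                  # numerically stable softmax
    e = np.exp(z)
    return e / e.sum()

def grad_wrt_logits(sq_dists, target):
    """Cross-entropy gradient w.r.t. the logits z_i = -d_i: softmax(z) - one_hot."""
    p = softmax(-np.asarray(sq_dists, dtype=float))
    g = p.copy()
    g[target] -= 1.0
    return g

# Illustrative squared distances from a query to 5 class prototypes; raw
# squared Euclidean distances in a 64-dim space are routinely this large.
d = np.array([130.0, 150.0, 160.0, 145.0, 155.0])

g_raw = grad_wrt_logits(d, target=1)              # softmax saturates to one-hot
g_restricted = grad_wrt_logits(d / 64, target=1)  # restricted distances

# With raw distances, only the argmax and the target entries are noticeably
# non-zero -- the "early sparse gradients" phenomenon; after restricting the
# distances, every class receives a usable gradient.
print((np.abs(g_raw) > 1e-3).sum(), (np.abs(g_restricted) > 1e-3).sum())
```

The gaps between the raw distances (15-30) sit in the exponent of the softmax, so the winning class absorbs essentially all the probability mass; dividing by 64 shrinks the gaps to well under 1, which keeps the distribution soft.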
1811.10796
2903437020
Fingerprint-based indoor localization methods are promising due to the high availability of deployed access points and compatibility with commercial-off-the-shelf user devices. However, to train regression models for localization, an extensive site survey is required to collect fingerprint data from the target areas. In this paper, we consider the problem of informative path planning (IPP) to find the optimal walk for site survey subject to a budget constraint. IPP for location fingerprint collection is related to the well-known orienteering problem (OP) but is more challenging due to edge-based non-additive rewards and revisits. Given the NP-hardness of IPP, we propose two heuristic approaches: a greedy algorithm and a genetic algorithm. We show through experimental data collected from two indoor environments with different characteristics that the two algorithms have low computational complexity and generally achieve higher utility and lower localization errors than extensions of two state-of-the-art approaches to OP.
The Recursive Greedy (RG) algorithm is an approximation algorithm for the submodular orienteering problem (SOP), first proposed by Chekuri in 2005 @cite_2 . The algorithm considers all possible combinations of intermediate vertices and budget splits, and is applied recursively to the resulting smaller sub-problems. The greedy aspect comes from its divide-and-conquer strategy: the paths returned from the sub-problems are simply concatenated into a single path as the final solution. Informally, the algorithm proceeds as follows: (1) enumerate all possible combinations of intermediate vertices and budget splits; (2) recursively find the first half of the path within the budget split; (3) recursively find the second half of the path within the remaining budget; (4) return the concatenation of the two sub-paths with the biggest reward.
{ "cite_N": [ "@cite_2" ], "mid": [ "2151239055" ], "abstract": [ "Given an arc-weighted directed graph G = (V, A, ℓ) and a pair of nodes s, t, we seek to find an s-t walk of length at most B that maximizes some given function f of the set of nodes visited by the walk. The simplest case is when we seek to maximize the number of nodes visited: this is called the orienteering problem. Our main result is a quasi-polynomial time algorithm that yields an O(log OPT) approximation for this problem when f is a given submodular set function. We then extend it to the case when a node v is counted as visited only if the walk reaches v in its time window [R(v), D(v)]. We apply the algorithm to obtain several new results. First, we obtain an O(log OPT) approximation for a generalization of the orienteering problem in which the profit for visiting each node may vary arbitrarily with time. This captures the time window problem considered earlier for which, even in undirected graphs, the best approximation ratio known [Bansal, (2004)] is O(log^2 OPT). The second application is an O(log^2 k) approximation for the k-TSP problem in directed graphs (satisfying asymmetric triangle inequality). This is the first non-trivial approximation algorithm for this problem. The third application is an O(log^2 k) approximation (in quasi-poly time) for the group Steiner problem in undirected graphs where k is the number of groups. This improves earlier ratios (Garg, ) by a logarithmic factor and almost matches the inapproximability threshold on trees (Halperin and Krauthgamer, 2003). This connection to group Steiner trees also enables us to prove that the problem we consider is hard to approximate to a ratio better than Ω(log^{1-ε} OPT), even in undirected graphs. Even though our algorithm runs in quasi-poly time, we believe that the implications for the approximability of several basic optimization problems are interesting." ] }
1811.10796
2903437020
Fingerprint-based indoor localization methods are promising due to the high availability of deployed access points and compatibility with commercial-off-the-shelf user devices. However, to train regression models for localization, an extensive site survey is required to collect fingerprint data from the target areas. In this paper, we consider the problem of informative path planning (IPP) to find the optimal walk for site survey subject to a budget constraint. IPP for location fingerprint collection is related to the well-known orienteering problem (OP) but is more challenging due to edge-based non-additive rewards and revisits. Given the NP-hardness of IPP, we propose two heuristic approaches: a greedy algorithm and a genetic algorithm. We show through experimental data collected from two indoor environments with different characteristics that the two algorithms have low computational complexity and generally achieve higher utility and lower localization errors than extensions of two state-of-the-art approaches to OP.
Even with RG-QP, the run time is long for practical problems, especially when the number of vertices or the budget is large. As the experiments in @cite_14 show, even on a small graph of 16 nodes, the algorithm can take many hours or even days when the budget is large. Furthermore, the run time is sensitive to the recursion depth @math , since it is exponential in @math . As a result, the recursive greedy algorithm can only be used to solve the OP when both the number of vertices and the budget are small.
{ "cite_N": [ "@cite_14" ], "mid": [ "2066291790" ], "abstract": [ "We present a path planning method for autonomous underwater vehicles in order to maximize mutual information. We adapt a method previously used for surface vehicles, and extend it to deal with the unique characteristics of underwater vehicles. We show how to generate near-optimal paths while ensuring that the vehicle stays out of high-traffic areas during predesignated time intervals. In our objective function we explicitly account for the fact that underwater vehicles typically take measurements while moving, and that they do not have the ability to communicate until they resurface. We present field results from ocean trials on planning paths for a specific AUV, an underwater glider." ] }
1811.10796
2903437020
Fingerprint-based indoor localization methods are promising due to the high availability of deployed access points and compatibility with commercial-off-the-shelf user devices. However, to train regression models for localization, an extensive site survey is required to collect fingerprint data from the target areas. In this paper, we consider the problem of informative path planning (IPP) to find the optimal walk for site survey subject to a budget constraint. IPP for location fingerprint collection is related to the well-known orienteering problem (OP) but is more challenging due to edge-based non-additive rewards and revisits. Given the NP-hardness of IPP, we propose two heuristic approaches: a greedy algorithm and a genetic algorithm. We show through experimental data collected from two indoor environments with different characteristics that the two algorithms have low computational complexity and generally achieve higher utility and lower localization errors than extensions of two state-of-the-art approaches to OP.
RO was originally introduced in @cite_9 . The basic idea is to decompose the OP into two sub-problems: subset selection and the travelling salesman problem (TSP). Specifically, vertices are randomly added to and removed from the current vertex set, and path planning over the selected vertices is delegated to a TSP solver. Such an approach requires the graph to be fully connected. Additionally, the utility is associated additively with the vertices, while costs arise from the edges; thus, once the set of vertices is determined, the utility is also determined, and it suffices to use a TSP solver to find the least-cost path traversing those vertices.
{ "cite_N": [ "@cite_9" ], "mid": [ "2737130609" ], "abstract": [ "Maximizing information gathered within a budget is a relevant problem for information gathering tasks for robots with cost or operating time constraints. This problem is also known as the informative path planning (IPP) problem or correlated orienteering. It can be formalized as that of finding budgeted routes in a graph such that the reward collected by the route is maximized, where the reward at nodes can be dependent. Unfortunately, the problem is NP-Hard and the state of the art methods are too slow to even present an approximate solution online. Here we present Randomized Anytime Orienteering (RAOr) algorithm that provides near optimal solutions while demonstrably converging to an efficient solution in runtimes that allows the solver to be run online. The key idea of our approach is to pose orienteering as a combination of a Constraint Satisfaction Problem and a Traveling Salesman Problem. This formulation allows us to restrict the search space to routes that incur minimum distance to visit a set of selected nodes, and rapidly search this space using random sampling. The paper provides the analysis of asymptotic near-optimality, convergence rates for RAOr algorithms, and present strategies to improve anytime performance of the algorithm. Our experimental results suggest an improvement by an order of magnitude over the state of the art methods in relevant simulation and in real world scenarios." ] }
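A minimal sketch of the add/remove idea, assuming a fully connected graph and a vertex-additive utility as the paragraph above notes. The brute-force `cheapest_walk` is a stand-in for a real TSP solver, and the toggle-and-keep-best loop is a simplification of the sampling scheme in @cite_9.

```python
import itertools
import random

def cheapest_walk(dist, s, t, inner):
    """Brute-force 'TSP solver': cheapest s -> t path visiting all of inner."""
    best = float("inf")
    for perm in itertools.permutations(inner):
        route = [s, *perm, t]
        cost = sum(dist[a][b] for a, b in zip(route, route[1:]))
        best = min(best, cost)
    return best

def randomized_orienteering(dist, utility, s, t, budget, iters=300, seed=0):
    """Randomly toggle vertices in the selected set, let the TSP solver
    price the set, and keep the best feasible set seen (hedged sketch)."""
    rng = random.Random(seed)
    candidates = [v for v in dist if v not in (s, t)]
    current = set()
    best_set, best_util = set(), utility(set())
    for _ in range(iters):
        v = rng.choice(candidates)
        proposal = current ^ {v}        # add v if absent, drop it otherwise
        if cheapest_walk(dist, s, t, tuple(proposal)) <= budget:
            current = proposal          # feasible: accept the move
            u = utility(proposal)
            if u > best_util:
                best_set, best_util = set(proposal), u
    return best_set, best_util
```

Because the utility depends only on the chosen set, the solver never needs to score paths: it only checks that the cheapest tour over the set fits the budget, exactly the separation the text describes.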
1811.10753
2902634936
This paper presents an optimization framework that solves trajectory optimization problems efficiently by decoupling state variables from timing variables, thereby decomposing a challenging nonlinear programming (NLP) problem into two easier subproblems. With timing fixed, the state variables can be optimized efficiently using convex optimization, and the time variables can then be optimized using a separate outer optimization. This is a bilevel optimization in which the outer objective function itself requires an optimization to compute. The challenge is that gradient-based optimization methods require the gradient of the objective function with respect to the time variables, which is not available. Whereas the finite difference method must solve many optimization problems to compute a gradient, this paper proposes a more efficient method: the dual solution (Lagrange multipliers) of the convex optimization problem is exploited to calculate the analytical gradient. Since the dual solution is a by-product of the convex optimization problem, the gradient can be obtained 'for free' with high accuracy. The framework is demonstrated on solving minimum-jerk trajectory optimization problems in safety corridors for unmanned aerial vehicles (UAVs). Experiments demonstrate that bilevel optimization improves performance over a standard NLP solver, and analytical gradients outperform finite differences. With a 40 ms cutoff time, our approach achieves over 8 times better suboptimality than the current state-of-the-art.
Despite intensive research, it remains challenging to find an optimal time allocation scheme for a spline trajectory in real time. One strategy is to generate a time allocation scheme with heuristics @cite_0 @cite_19 and keep it fixed during the optimization stage. For example, Gao @cite_0 select the velocity according to the distance to the nearest obstacle: the closer the trajectory is to the obstacle, the lower the velocity. Though the heuristic is reasonably chosen, the velocity of the trajectory after the optimization stage does not necessarily follow it. Heuristics, though cheap to compute, are often suboptimal and can lead to spikes in jerk or snap near connection points; see Fig. for an example.
{ "cite_N": [ "@cite_0", "@cite_19" ], "mid": [ "2891491652", "2587415290" ], "abstract": [ "In this paper, we propose a framework for online quadrotor motion planning for autonomous navigation in unknown environments. Based on the onboard state estimation and environment perception, we adopt a fast marching-based path searching method to find a path on a velocity field induced by the Euclidean signed distance field (ESDF) of the map, to achieve better time allocation. We generate a flight corridor for the quadrotor to travel through by inflating the path against the environment. We represent the trajectory as piecewise Bezier curves by using Bernstein polynomial basis and formulate the trajectory generation problem as typical convex programs. By using Bezier curves, we are able to bound positions and higher order dynamics of the trajectory entirely within safe regions. The proposed motion planning method is integrated into a customized light-weight quadrotor platform and is validated by presenting fully autonomous navigation in unknown cluttered indoor and outdoor environments. We also release our code for trajectory generation as an open-source package.", "There is extensive literature on using convex optimization to derive piece-wise polynomial trajectories for controlling differential flat systems with applications to three-dimensional flight for Micro Aerial Vehicles. In this work, we propose a method to formulate trajectory generation as a quadratic program (QP) using the concept of a Safe Flight Corridor (SFC). The SFC is a collection of convex overlapping polyhedra that models free space and provides a connected path from the robot to the goal position. We derive an efficient convex decomposition method that builds the SFC from a piece-wise linear skeleton obtained using a fast graph search technique. The SFC provides a set of linear inequality constraints in the QP allowing real-time motion planning. 
Because the range and field of view of the robot's sensors are limited, we develop a framework of Receding Horizon Planning , which plans trajectories within a finite footprint in the local map, continuously updating the trajectory through a re-planning process. The re-planning process takes between 50 to 300 ms for a large and cluttered map. We show the feasibility of our approach, its completeness and performance, with applications to high-speed flight in both simulated and physical experiments using quadrotors." ] }
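A clearance-based velocity heuristic of the kind described above can be sketched in a few lines. The linear ramp and the parameters `v_max`, `v_min`, and `d_safe` are made-up illustrations, not values from @cite_0; only the qualitative rule (closer to an obstacle means slower) comes from the text.

```python
import math

def clearance_velocity(clearance, v_max=3.0, v_min=0.5, d_safe=2.0):
    """Hypothetical heuristic: velocity grows linearly with obstacle
    clearance up to v_max at d_safe, and never drops below v_min."""
    return max(v_min, min(v_max, v_max * clearance / d_safe))

def allocate_times(waypoints, clearances):
    """Give each spline segment the duration length / heuristic velocity."""
    times = []
    for (a, b), c in zip(zip(waypoints, waypoints[1:]), clearances):
        times.append(math.dist(a, b) / clearance_velocity(c))
    return times
```

Once computed, such a schedule is typically held fixed through the optimization stage, which is exactly why the optimized trajectory's actual velocity can drift away from the heuristic.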
1811.10753
2902634936
This paper presents an optimization framework that solves trajectory optimization problems efficiently by decoupling state variables from timing variables, thereby decomposing a challenging nonlinear programming (NLP) problem into two easier subproblems. With timing fixed, the state variables can be optimized efficiently using convex optimization, and the time variables can then be optimized using a separate outer optimization. This is a bilevel optimization in which the outer objective function itself requires an optimization to compute. The challenge is that gradient-based optimization methods require the gradient of the objective function with respect to the time variables, which is not available. Whereas the finite difference method must solve many optimization problems to compute a gradient, this paper proposes a more efficient method: the dual solution (Lagrange multipliers) of the convex optimization problem is exploited to calculate the analytical gradient. Since the dual solution is a by-product of the convex optimization problem, the gradient can be obtained 'for free' with high accuracy. The framework is demonstrated on solving minimum-jerk trajectory optimization problems in safety corridors for unmanned aerial vehicles (UAVs). Experiments demonstrate that bilevel optimization improves performance over a standard NLP solver, and analytical gradients outperform finite differences. With a 40 ms cutoff time, our approach achieves over 8 times better suboptimality than the current state-of-the-art.
One way to improve a non-optimal time allocation scheme is to refine it iteratively with gradient descent @cite_11 @cite_13 . The refinement process computes the gradient of the objective with respect to the time allocation, chooses a suitable descent direction and step length, and takes a step. To the best of our knowledge, finite differences are the only method used for gradient calculation in this setting. However, estimating the gradient with finite differences is time-consuming, because the objective is itself the optimal value of an optimization problem: each gradient evaluation requires solving as many optimization problems as there are segments in the spline, making real-time performance intractable. Moreover, choosing step sizes for finite differences is hard, and the result may not be a good approximation of the true gradient.
{ "cite_N": [ "@cite_13", "@cite_11" ], "mid": [ "2482392012", "2162991084" ], "abstract": [ "We explore the challenges of planning trajectories for quadrotors through cluttered indoor environments. We extend the existing work on polynomial trajectory generation by presenting a method of jointly optimizing polynomial path segments in an unconstrained quadratic program that is numerically stable for high-order polynomials and large numbers of segments, and is easily formulated for efficient sparse computation. We also present a technique for automatically selecting the amount of time allocated to each segment, and hence the quadrotor speeds along the path, as a function of a single parameter determining aggressiveness, subject to actuator constraints. The use of polynomial trajectories, coupled with the differentially flat representation of the quadrotor, eliminates the need for computationally intensive sampling and simulation in the high dimensional state space of the vehicle during motion planning. Our approach generates high-quality trajectories much faster than purely sampling-based optimal kinodynamic planning methods, but sacrifices the guarantee of asymptotic convergence to the global optimum that those methods provide. We demonstrate the performance of our algorithm by efficiently generating trajectories through challenging indoor spaces and successfully traversing them at speeds up to 8 m/s. A demonstration of our algorithm and flight performance is available at: http://groups.csail.mit.edu/rrg/quad_polynomial_trajectory_planning.", "We address the controller design and the trajectory generation for a quadrotor maneuvering in three dimensions in a tightly constrained setting typical of indoor environments. In such settings, it is necessary to allow for significant excursions of the attitude from the hover state and small angle approximations cannot be justified for the roll and pitch. 
We develop an algorithm that enables the real-time generation of optimal trajectories through a sequence of 3-D positions and yaw angles, while ensuring safe passage through specified corridors and satisfying constraints on velocities, accelerations and inputs. A nonlinear controller ensures the faithful tracking of these trajectories. Experimental results illustrate the application of the method to fast motion (5–10 body lengths/second) in three-dimensional slalom courses." ] }
1811.10753
2902634936
This paper presents an efficient optimization framework that solves trajectory optimization problems efficiently by decoupling state variables from timing variables, thereby decomposing a challenging nonlinear programming (NLP) problem into two easier subproblems. With timing fixed, the state variables can be optimized efficiently using convex optimization, and so the time variables can be optimized using a separate outer optimization. This is a bilevel optimization in which the outer objective function itself requires an optimization to compute. The challenge is that gradient optimization methods require the gradient of the objective function with respect to the time variables, which is not available. Whereas the finite difference method must solve many optimization problems to compute a gradient, this paper proposes a more efficient method: the dual solution (Lagrange multipliers) of the convex optimization problem is exploited to calculate the analytical gradient. Since the dual solution is a by-product of the convex optimization problem, the gradient can be obtained "for free" with high accuracy. The framework is demonstrated on solving minimum-jerk trajectory optimization problems in safety corridors for unmanned aerial vehicles (UAVs). Experiments demonstrate that bilevel optimization improves performance over a standard NLP solver, and analytical gradients outperform finite differences. With a 40 ms cutoff time, our approach achieves over 8 times better suboptimality than the current state-of-the-art.
Another strategy for determining a time allocation scheme is sampling @cite_1 . This approach randomly samples the duration of each segment until the corresponding QP becomes solvable, and is applicable to problems with a rather small feasible set, such as humanoid locomotion, where a small change in timing can render the optimization problem infeasible. However, it may not scale well to generating trajectories with a large number of segments. Moreover, the work described in @cite_1 only seeks a feasible time allocation and does not improve upon it.
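The sampling strategy can be sketched as rejection sampling. This is a minimal sketch, not the cited method: the feasibility test and all limits below are hypothetical stand-ins for "the QP can be solved".

```python
import random

# Rejection-sampling sketch: draw segment durations until one allocation
# passes a (hypothetical) feasibility check, then return it unrefined.

def feasible(T, dists=(2.0, 3.0, 1.5), vmax=2.0, amax=4.0):
    """Crude per-segment check standing in for QP solvability: average speed
    and a bang-bang acceleration estimate must stay within limits."""
    return all(d / t <= vmax and 2 * d / t**2 <= amax
               for d, t in zip(dists, T))

def sample_times(n_seg=3, lo=0.2, hi=3.0, max_tries=10_000, seed=0):
    rng = random.Random(seed)
    for _ in range(max_tries):
        T = [rng.uniform(lo, hi) for _ in range(n_seg)]
        if feasible(T):
            return T            # first feasible allocation, never improved
    return None

T = sample_times()
```

Note how the acceptance probability is the product of per-segment success probabilities, which is why pure sampling degrades quickly as the number of segments grows.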
{ "cite_N": [ "@cite_1" ], "mid": [ "2789286310" ], "abstract": [ "We tackle the transition feasibility problem, that is the issue of determining whether there exists a feasible motion connecting two configurations of a legged robot. To achieve this we introduce CROC, a novel method for computing centroidal dynamics trajectories in multi-contact planning contexts. Our approach is based on a conservative and convex reformulation of the problem, where we represent the center of mass trajectory as a Bezier curve comprising a single free control point as a variable. Under this formulation, the transition problem is solved efficiently with a Linear Program (LP)of low dimension. We use this LP as a feasibility criterion, incorporated in a sampling-based contact planner, to discard efficiently unfeasible contact plans. We are thus able to produce robust contact sequences, likely to define feasible motion synthesis problems. We illustrate this application on various multi-contact scenarios featuring HRP2 and HyQ. We also show that we can use CROC to compute valuable initial guesses, used to warm-start non-linear solvers for motion generation methods. This method could also be used for the 0 and 1-Step capturability problem. The source code of CROC is available under an open source BSD-2 License." ] }
1811.10753
2902634936
This paper presents an efficient optimization framework that solves trajectory optimization problems efficiently by decoupling state variables from timing variables, thereby decomposing a challenging nonlinear programming (NLP) problem into two easier subproblems. With timing fixed, the state variables can be optimized efficiently using convex optimization, and so the time variables can be optimized using a separate outer optimization. This is a bilevel optimization in which the outer objective function itself requires an optimization to compute. The challenge is that gradient optimization methods require the gradient of the objective function with respect to the time variables, which is not available. Whereas the finite difference method must solve many optimization problems to compute a gradient, this paper proposes a more efficient method: the dual solution (Lagrange multipliers) of the convex optimization problem is exploited to calculate the analytical gradient. Since the dual solution is a by-product of the convex optimization problem, the gradient can be obtained "for free" with high accuracy. The framework is demonstrated on solving minimum-jerk trajectory optimization problems in safety corridors for unmanned aerial vehicles (UAVs). Experiments demonstrate that bilevel optimization improves performance over a standard NLP solver, and analytical gradients outperform finite differences. With a 40 ms cutoff time, our approach achieves over 8 times better suboptimality than the current state-of-the-art.
Richter @cite_13 proposes a framework that generates collision-free spline trajectories and optimizes the time allocation, formulated as a QP with only equality constraints. However, this formulation has two disadvantages. (a) Collision avoidance is achieved by iteratively adding new waypoints and solving another QP whenever a collision is detected; this strategy has no bound on the number of iterations needed, although their experiments show that a few iterations are usually enough. (b) Dynamic feasibility is achieved by checking the maximum acceleration in every iteration and stopping when the acceleration reaches the limit. Since computing the extrema of a high-order polynomial is hard (by the Abel-Ruffini theorem there is no closed-form solution for the roots of general polynomials of degree five or higher), this strategy can be slow for high-order polynomials. Moreover, because their framework solves the optimization problem by reformulating the equality-constrained QP as an unconstrained one, it cannot handle the general inequality constraints that often arise in trajectory optimization. In our work, we view the problem through the lenses of bilevel optimization and GBD, and exploit theoretical developments from these fields to derive analytical gradients for trajectory optimization.
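To make the cost of point (b) concrete, here is a sketch of the kind of per-iteration acceleration check involved. Since degree-five-and-above polynomials have no closed-form roots (Abel-Ruffini), a common workaround is dense sampling of the second derivative; the coefficients and the acceleration limit below are made up for illustration.

```python
# Bounding the acceleration of a high-order position polynomial by sampling.

def polyval(c, t):
    """Evaluate sum(c[k] * t**k) with Horner's rule."""
    r = 0.0
    for ck in reversed(c):
        r = r * t + ck
    return r

def derivative(c):
    """Coefficients of d/dt of the polynomial with coefficients c."""
    return [k * ck for k, ck in enumerate(c)][1:]

def max_abs_acceleration(pos_coeffs, T, n=1000):
    acc = derivative(derivative(pos_coeffs))   # second derivative
    return max(abs(polyval(acc, i * T / n)) for i in range(n + 1))

# A 7th-order position polynomial on one segment of duration 2 s
pos = [0.0, 0.0, 0.0, 1.0, -0.9, 0.3, -0.04, 0.002]
peak = max_abs_acceleration(pos, 2.0)
ok = peak <= 10.0   # hypothetical a_max for the feasibility check
```

Running this check on every segment in every iteration is what makes the approach slow for high polynomial orders.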
{ "cite_N": [ "@cite_13" ], "mid": [ "2482392012" ], "abstract": [ "We explore the challenges of planning trajectories for quadrotors through cluttered indoor environments. We extend the existing work on polynomial trajectory generation by presenting a method of jointly optimizing polynomial path segments in an unconstrained quadratic program that is numerically stable for high-order polynomials and large numbers of segments, and is easily formulated for efficient sparse computation. We also present a technique for automatically selecting the amount of time allocated to each segment, and hence the quadrotor speeds along the path, as a function of a single parameter determining aggressiveness, subject to actuator constraints. The use of polynomial trajectories, coupled with the differentially flat representation of the quadrotor, eliminates the need for computationally intensive sampling and simulation in the high dimensional state space of the vehicle during motion planning. Our approach generates high-quality trajectories much faster than purely sampling-based optimal kinodynamic planning methods, but sacrifices the guarantee of asymptotic convergence to the global optimum that those methods provide. We demonstrate the performance of our algorithm by efficiently generating trajectories through challenging indoor spaces and successfully traversing them at speeds up to 8 m/s. A demonstration of our algorithm and flight performance is available at: http://groups.csail.mit.edu/rrg/quad_polynomial_trajectory_planning." ] }
1811.10753
2902634936
This paper presents an efficient optimization framework that solves trajectory optimization problems efficiently by decoupling state variables from timing variables, thereby decomposing a challenging nonlinear programming (NLP) problem into two easier subproblems. With timing fixed, the state variables can be optimized efficiently using convex optimization, and so the time variables can be optimized using a separate outer optimization. This is a bilevel optimization in which the outer objective function itself requires an optimization to compute. The challenge is that gradient optimization methods require the gradient of the objective function with respect to the time variables, which is not available. Whereas the finite difference method must solve many optimization problems to compute a gradient, this paper proposes a more efficient method: the dual solution (Lagrange multipliers) of the convex optimization problem is exploited to calculate the analytical gradient. Since the dual solution is a by-product of the convex optimization problem, the gradient can be obtained "for free" with high accuracy. The framework is demonstrated on solving minimum-jerk trajectory optimization problems in safety corridors for unmanned aerial vehicles (UAVs). Experiments demonstrate that bilevel optimization improves performance over a standard NLP solver, and analytical gradients outperform finite differences. With a 40 ms cutoff time, our approach achieves over 8 times better suboptimality than the current state-of-the-art.
More broadly, the idea of decomposition and divide-and-conquer has been explored in Benders Decomposition (BD) @cite_15 and Generalized Benders Decomposition (GBD) @cite_8 , which are well-known techniques in operations research. In the GBD formulation, complicating variables are variables which, when temporarily fixed, make the optimization problem considerably more tractable. The idea of GBD is to split the original optimization problem into two: a master problem and a subproblem. The subproblem is generated by temporarily fixing the complicating variables of the original problem; in BD these are usually integer variables. In the case of trajectory optimization, the complicating variables are the times allocated to each segment. This approach is also closely related to bilevel optimization @cite_3 , which refers to a mathematical program in which one optimization problem has another optimization problem as one of its constraints, that is, one optimization task is embedded within another. The outer optimization task is often referred to as the upper-level problem, and the inner optimization task as the lower-level problem. Bilevel optimization is well known in production and marketing decision making, and we refer readers to @cite_3 @cite_10 for more comprehensive treatments of the topic.
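The upper-level/lower-level structure can be shown on a scalar toy problem (all functions here are made up): the lower-level problem is convex in x once the complicating variable t is fixed, and the upper level searches over t using only the lower level's optimal value.

```python
# A minimal bilevel toy: inner problem solved in closed form, outer problem
# minimized by golden-section search over the complicating variable t.

def inner_solve(t):
    """min_x (x - t)^2 + x^2  ->  closed-form argmin x* = t / 2."""
    x_star = t / 2.0
    value = (x_star - t)**2 + x_star**2
    return x_star, value

def outer_objective(t):
    _, v = inner_solve(t)           # one lower-level solve per evaluation
    return v + (t - 1.0)**2         # upper-level cost on top of inner value

def golden_section(f, lo=0.0, hi=2.0, iters=60):
    """Minimize a unimodal scalar function on [lo, hi]."""
    phi = (5**0.5 - 1) / 2
    a, b = lo, hi
    for _ in range(iters):
        c, d = b - phi * (b - a), a + phi * (b - a)
        if f(c) < f(d):
            b = d
        else:
            a = c
    return (a + b) / 2

t_opt = golden_section(outer_objective)   # analytic optimum: t = 2/3
```

In the trajectory setting the closed-form `inner_solve` is replaced by a convex QP over the spline coefficients, and the scalar t by the vector of segment times.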
{ "cite_N": [ "@cite_15", "@cite_10", "@cite_3", "@cite_8" ], "mid": [ "2044121814", "2124659975", "2614367549", "2061962896" ], "abstract": [ "", "This paper is devoted to bilevel optimization, a branch of mathematical programming of both practical and theoretical interest. Starting with a simple example, we proceed towards a general formulation. We then present fields of application, focus on solution approaches, and make the connection with MPECs (Mathematical Programs with Equilibrium Constraints).", "Bilevel optimization is defined as a mathematical program, where an optimization problem contains another optimization problem as a constraint. These problems have received significant attention from the mathematical programming community. Only limited work exists on bilevel problems using evolutionary computation techniques; however, recently there has been an increasing interest due to the proliferation of practical applications and the potential of evolutionary algorithms in tackling these problems. This paper provides a comprehensive review on bilevel optimization from the basic principles to solution strategies; both classical and evolutionary. A number of potential application problems are also discussed. To offer the readers insights on the prominent developments in the field of bilevel optimization, we have performed an automated text-analysis of an extended list of papers published on bilevel optimization to date. This paper should motivate evolutionary computation researchers to pay more attention to this practical yet challenging area.", "J. F. Benders devised a clever approach for exploiting the structure of mathematical programming problems with complicating variables (variables which, when temporarily fixed, render the remaining optimization problem considerably more tractable).
For the class of problems specifically considered by Benders, fixing the values of the complicating variables reduces the given problem to an ordinary linear program, parameterized, of course, by the value of the complicating variables vector. The algorithm he proposed for finding the optimal value of this vector employs a cutting-plane approach for building up adequate representations of (i) the extremal value of the linear program as a function of the parameterizing vector and (ii) the set of values of the parameterizing vector for which the linear program is feasible. Linear programming duality theory was employed to derive the natural families of cuts characterizing these representations, and the parameterized linear program itself is used to generate what are usually deepest cuts for building up the representations." ] }
1811.10753
2902634936
This paper presents an efficient optimization framework that solves trajectory optimization problems efficiently by decoupling state variables from timing variables, thereby decomposing a challenging nonlinear programming (NLP) problem into two easier subproblems. With timing fixed, the state variables can be optimized efficiently using convex optimization, and so the time variables can be optimized using a separate outer optimization. This is a bilevel optimization in which the outer objective function itself requires an optimization to compute. The challenge is that gradient optimization methods require the gradient of the objective function with respect to the time variables, which is not available. Whereas the finite difference method must solve many optimization problems to compute a gradient, this paper proposes a more efficient method: the dual solution (Lagrange multipliers) of the convex optimization problem is exploited to calculate the analytical gradient. Since the dual solution is a by-product of the convex optimization problem, the gradient can be obtained "for free" with high accuracy. The framework is demonstrated on solving minimum-jerk trajectory optimization problems in safety corridors for unmanned aerial vehicles (UAVs). Experiments demonstrate that bilevel optimization improves performance over a standard NLP solver, and analytical gradients outperform finite differences. With a 40 ms cutoff time, our approach achieves over 8 times better suboptimality than the current state-of-the-art.
There are analogues to our approach in the field of machine learning. OptNet @cite_20 incorporates a QP solver as a layer in a neural network and provides analytical gradients of the QP solution w.r.t. input parameters for back-propagation. Gould @cite_2 presents results on differentiating argmin optimization problems w.r.t. their parameters in the context of bilevel optimization; their results give exact gradients for the lower-level problem. However, the works described in @cite_20 , @cite_2 do not consider finding the gradient of the objective function. Moreover, the methods proposed by Gould @cite_2 involve inverting a Hessian matrix, which can be computationally expensive and which is often not even invertible; in addition, they only describe the case of equality constraints and use a log-barrier function to tackle inequality constraints. In contrast, the gradient of the objective function is even cheaper to compute, making our approach well suited to speeding up trajectory optimization.
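The "gradient for free" idea behind this distinction can be illustrated with the envelope (Danskin) theorem on a toy problem, not the paper's QP: once the inner problem is solved, the derivative of its optimal value w.r.t. the parameter is simply the partial derivative of the objective evaluated at the optimum, with no extra inner solves and no Hessian inversion.

```python
# Envelope-theorem sketch: gradient of the optimal value of an inner problem.

def inner_argmin(t):
    return t / 2.0                  # argmin_x of f(x, t) = (x - t)^2 + x^2

def inner_value(t):
    x = inner_argmin(t)
    return (x - t)**2 + x**2        # optimal value v(t) = t^2 / 2

def envelope_gradient(t):
    """d/dt min_x f(x, t) = (df/dt)(x*(t), t)  (Danskin's theorem)."""
    x = inner_argmin(t)
    return -2.0 * (x - t)           # partial of f w.r.t. t, x held at x*

t, h = 1.3, 1e-6
fd = (inner_value(t + h) - inner_value(t - h)) / (2 * h)  # 2 extra solves
an = envelope_gradient(t)                                 # 0 extra solves
```

With constraints, the same role is played by the Lagrange multipliers of the inner problem, which is exactly the dual-solution trick used in this work.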
{ "cite_N": [ "@cite_20", "@cite_2" ], "mid": [ "2592457170", "2505728881" ], "abstract": [ "This paper presents OptNet, a network architecture that integrates optimization problems (here, specifically in the form of quadratic programs) as individual layers in larger end-to-end trainable deep networks. These layers encode constraints and complex dependencies between the hidden states that traditional convolutional and fully-connected layers often cannot capture. In this paper, we explore the foundations for such an architecture: we show how techniques from sensitivity analysis, bilevel optimization, and implicit differentiation can be used to exactly differentiate through these layers and with respect to layer parameters; we develop a highly efficient solver for these layers that exploits fast GPU-based batch solves within a primal-dual interior point method, and which provides backpropagation gradients with virtually no additional cost on top of the solve; and we highlight the application of these approaches in several problems. In one notable example, we show that the method is capable of learning to play mini-Sudoku (4x4) given just input and output games, with no a priori information about the rules of the game; this highlights the ability of our architecture to learn hard constraints better than other neural architectures.", "Some recent works in machine learning and computer vision involve the solution of a bi-level optimization problem. Here the solution of a parameterized lower-level problem binds variables that appear in the objective of an upper-level problem. The lower-level problem typically appears as an argmin or argmax optimization problem. Many techniques have been proposed to solve bi-level optimization problems, including gradient descent, which is popular with current end-to-end learning approaches. 
In this technical report we collect some results on differentiating argmin and argmax optimization problems with and without constraints and provide some insightful motivating examples." ] }
1811.10636
2902904290
In this paper, we present a new method for evolving video CNN models to find architectures that more optimally capture rich spatio-temporal information in videos. Previous work, taking advantage of 3D convolutional layers, obtained promising results by manually designing CNN architectures for videos. We here develop an evolutionary algorithm that automatically explores models with different types and combinations of space-time convolutional layers to jointly capture various spatial and temporal aspects of video representations. We further propose a new key component in video model evolution, the iTGM layer, which more efficiently utilizes its parameters to allow learning of space-time interactions over longer time horizons. The experiments confirm the advantages of our video CNN architecture evolution, with results outperforming previous state-of-the-art models. Our algorithm discovers new and interesting video architecture structures.
Approaches considering a video as a space-time volume have been particularly successful @cite_31 @cite_28 @cite_12 @cite_15 , applying 3D CNNs directly to videos. C3D @cite_12 learned 3x3x3 XYT filters, which were applied not only to action recognition but also to video object recognition. I3D @cite_31 extended the Inception architecture to 3D, obtaining successful results on multiple activity recognition video datasets including Kinetics. S3D @cite_6 investigated the use of 1D and 2D convolutional layers in addition to 3D layers. R(2+1)D @cite_32 used 2D convolutional layers followed by 1D convolutional layers while following the ResNet structure. The two-stream CNN design is also widely adopted in action recognition, taking optical flow inputs in addition to raw RGB frames @cite_19 @cite_39 . There are also works focusing on capturing longer-term temporal information in continuous videos using pooling @cite_34 , attention @cite_38 , and convolution @cite_36 . Recurrent neural networks (e.g., LSTMs) have also been used to represent videos sequentially @cite_34 @cite_20 .
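The (2+1)D factorization mentioned above can be made concrete with a parameter-count sketch: a full t x d x d 3D kernel bank versus a 2D spatial convolution followed by a 1D temporal convolution through M intermediate channels, with M chosen (as in the R(2+1)D paper) so the factorized block roughly matches the 3D block's parameter budget. The channel counts below are illustrative.

```python
# Parameter counts: 3D conv vs. its (2+1)D factorization.

def params_3d(c_in, c_out, t=3, d=3):
    return c_in * c_out * t * d * d

def params_2plus1d(c_in, c_out, t=3, d=3):
    # M_i = floor(t d^2 N_{i-1} N_i / (d^2 N_{i-1} + t N_i)), as in R(2+1)D
    m = (t * d * d * c_in * c_out) // (d * d * c_in + t * c_out)
    return c_in * m * d * d + m * c_out * t

full = params_3d(64, 64)       # one 3x3x3 layer, 64 -> 64 channels
fact = params_2plus1d(64, 64)  # same budget, plus an extra nonlinearity
```

The factorized block uses (at most) the same number of parameters but inserts an additional nonlinearity between the spatial and temporal convolutions, which is one argument for its empirical advantage.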
{ "cite_N": [ "@cite_38", "@cite_15", "@cite_28", "@cite_36", "@cite_32", "@cite_6", "@cite_39", "@cite_19", "@cite_31", "@cite_34", "@cite_20", "@cite_12" ], "mid": [ "2963484593", "", "2963616706", "2016053056", "2963155035", "", "2156303437", "2342662179", "2963524571", "1923404803", "", "2122476475" ], "abstract": [ "", "", "Convolutional neural networks with spatio-temporal 3D kernels (3D CNNs) have an ability to directly extract spatiotemporal features from videos for action recognition. Although the 3D kernels tend to overfit because of a large number of their parameters, the 3D CNNs are greatly improved by using recent huge video databases. However, the architecture of 3D CNNs is relatively shallow compared to the success of very deep neural networks in 2D-based CNNs, such as residual networks (ResNets). In this paper, we propose 3D CNNs based on ResNets toward a better action representation. We describe the training procedure of our 3D ResNets in detail. We experimentally evaluate the 3D ResNets on the ActivityNet and Kinetics datasets. The 3D ResNets trained on Kinetics did not suffer from overfitting despite the large number of parameters of the model, and achieved better performance than relatively shallow networks, such as C3D. Our code and pretrained models (e.g. Kinetics and ActivityNet) are publicly available at https://github.com/kenshohara/3D-ResNets.", "Convolutional Neural Networks (CNNs) have been established as a powerful class of models for image recognition problems. Encouraged by these results, we provide an extensive empirical evaluation of CNNs on large-scale video classification using a new dataset of 1 million YouTube videos belonging to 487 classes. We study multiple approaches for extending the connectivity of a CNN in time domain to take advantage of local spatio-temporal information and suggest a multiresolution, foveated architecture as a promising way of speeding up the training.
Our best spatio-temporal networks display significant performance improvements compared to strong feature-based baselines (55.3% to 63.9%), but only a surprisingly modest improvement compared to single-frame models (59.3% to 60.9%). We further study the generalization performance of our best model by retraining the top layers on the UCF-101 Action Recognition dataset and observe significant performance improvements compared to the UCF-101 baseline model (63.3% up from 43.9%).", "In this paper we discuss several forms of spatiotemporal convolutions for video analysis and study their effects on action recognition. Our motivation stems from the observation that 2D CNNs applied to individual frames of the video have remained solid performers in action recognition. In this work we empirically demonstrate the accuracy advantages of 3D CNNs over 2D CNNs within the framework of residual learning. Furthermore, we show that factorizing the 3D convolutional filters into separate spatial and temporal components yields significant gains in accuracy. Our empirical study leads to the design of a new spatiotemporal convolutional block \"R(2+1)D\" which produces CNNs that achieve results comparable or superior to the state-of-the-art on Sports-1M, Kinetics, UCF101, and HMDB51.", "", "We investigate architectures of discriminatively trained deep Convolutional Networks (ConvNets) for action recognition in video. The challenge is to capture the complementary information on appearance from still frames and motion between frames. We also aim to generalise the best performing hand-crafted features within a data-driven learning framework. Our contribution is three-fold. First, we propose a two-stream ConvNet architecture which incorporates spatial and temporal networks. Second, we demonstrate that a ConvNet trained on multi-frame dense optical flow is able to achieve very good performance in spite of limited training data.
Finally, we show that multitask learning, applied to two different action classification datasets, can be used to increase the amount of training data and improve the performance on both. Our architecture is trained and evaluated on the standard video actions benchmarks of UCF-101 and HMDB-51, where it is competitive with the state of the art. It also exceeds by a large margin previous attempts to use deep nets for video classification.", "Recent applications of Convolutional Neural Networks (ConvNets) for human action recognition in videos have proposed different solutions for incorporating the appearance and motion information. We study a number of ways of fusing ConvNet towers both spatially and temporally in order to best take advantage of this spatio-temporal information. We make the following findings: (i) that rather than fusing at the softmax layer, a spatial and temporal network can be fused at a convolution layer without loss of performance, but with a substantial saving in parameters, (ii) that it is better to fuse such networks spatially at the last convolutional layer than earlier, and that additionally fusing at the class prediction layer can boost accuracy, finally (iii) that pooling of abstract convolutional features over spatiotemporal neighbourhoods further boosts performance. Based on these studies we propose a new ConvNet architecture for spatiotemporal fusion of video snippets, and evaluate its performance on standard benchmarks where this architecture achieves state-of-the-art results.", "The paucity of videos in current action classification datasets (UCF-101 and HMDB-51) has made it difficult to identify good video architectures, as most methods obtain similar performance on existing small-scale benchmarks. This paper re-evaluates state-of-the-art architectures in light of the new Kinetics Human Action Video dataset. 
Kinetics has two orders of magnitude more data, with 400 human action classes and over 400 clips per class, and is collected from realistic, challenging YouTube videos. We provide an analysis on how current architectures fare on the task of action classification on this dataset and how much performance improves on the smaller benchmark datasets after pre-training on Kinetics. We also introduce a new Two-Stream Inflated 3D ConvNet (I3D) that is based on 2D ConvNet inflation: filters and pooling kernels of very deep image classification ConvNets are expanded into 3D, making it possible to learn seamless spatio-temporal feature extractors from video while leveraging successful ImageNet architecture designs and even their parameters. We show that, after pre-training on Kinetics, I3D models considerably improve upon the state-of-the-art in action classification, reaching 80.2% on HMDB-51 and 97.9% on UCF-101.", "Convolutional neural networks (CNNs) have been extensively applied for image recognition problems giving state-of-the-art results on recognition, detection, segmentation and retrieval. In this work we propose and evaluate several deep neural network architectures to combine image information across a video over longer time periods than previously attempted. We propose two methods capable of handling full length videos. The first method explores various convolutional temporal feature pooling architectures, examining the various design choices which need to be made when adapting a CNN for this task. The second proposed method explicitly models the video as an ordered sequence of frames. For this purpose we employ a recurrent neural network that uses Long Short-Term Memory (LSTM) cells which are connected to the output of the underlying CNN. Our best networks exhibit significant performance improvements over previously published results on the Sports 1 million dataset (73.1% vs. 60.9%) and the UCF-101 datasets with (88.6% vs. 88.0%) and without additional optical flow information (82.6% vs. 73.0%).", "", "" ] }
1811.10636
2902904290
In this paper, we present a new method for evolving video CNN models to find architectures that more optimally capture rich spatio-temporal information in videos. Previous work, taking advantage of 3D convolutional layers, obtained promising results by manually designing CNN architectures for videos. We here develop an evolutionary algorithm that automatically explores models with different types and combinations of space-time convolutional layers to jointly capture various spatial and temporal aspects of video representations. We further propose a new key component in video model evolution, the iTGM layer, which more efficiently utilizes its parameters to allow learning of space-time interactions over longer time horizons. The experiments confirm the advantages of our video CNN architecture evolution, with results outperforming previous state-of-the-art models. Our algorithm discovers new and interesting video architecture structures.
Neural network architectures have advanced significantly since the early convolutional neural network concepts of @cite_13 and @cite_0 : by developing wider modules, e.g., Inception @cite_27 , introducing duplicated modules @cite_25 , residual connections @cite_5 @cite_35 , densely connected networks @cite_37 @cite_33 , or multi-task architectures, e.g., Faster R-CNN and RetinaNet for detection, among many others @cite_3 @cite_14 @cite_2 . Recently, several ground-breaking approaches have been proposed for automatically learning and searching neural network architectures, rather than designing them manually @cite_4 @cite_9 @cite_22 @cite_7 . Successful architecture search has been demonstrated for images and text @cite_22 @cite_7 , including object classification. @cite_21 analyze action recognition experiments under different settings, e.g., input resolution, frame rate, number of frames, and network depth, all within the 3D ResNet architecture. In the context of video understanding, we are not aware of any prior work that has attempted to develop an automated algorithm for data-driven architecture search or evolution.
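The evolutionary-search idea can be sketched in miniature. This is not any cited paper's algorithm: it is a tournament-style evolutionary loop over toy "architectures" (sequences of layer types), and the fitness function is a made-up stand-in for validation accuracy after training.

```python
import random

# Minimal tournament-selection evolution over toy layer sequences.

LAYER_TYPES = ["conv2d", "conv3d", "conv(2+1)d", "pool"]

def fitness(arch):
    """Hypothetical proxy for accuracy: rewards one layer type, penalizes depth."""
    score = sum(1.5 if layer == "conv(2+1)d" else 1.0 for layer in arch)
    return score - 0.1 * len(arch)

def mutate(arch, rng):
    child = list(arch)
    child[rng.randrange(len(child))] = rng.choice(LAYER_TYPES)
    return child

def evolve(pop_size=16, arch_len=6, steps=200, seed=0):
    rng = random.Random(seed)
    pop = [[rng.choice(LAYER_TYPES) for _ in range(arch_len)]
           for _ in range(pop_size)]
    for _ in range(steps):
        a, b = rng.sample(range(pop_size), 2)     # tournament of two
        if fitness(pop[a]) >= fitness(pop[b]):
            winner, loser = a, b
        else:
            winner, loser = b, a
        pop[loser] = mutate(pop[winner], rng)     # child replaces the loser
    return max(pop, key=fitness)

best = evolve()
```

Real architecture evolution replaces `fitness` with training and validating each candidate network, which is what makes the search computationally expensive.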
{ "cite_N": [ "@cite_35", "@cite_37", "@cite_14", "@cite_4", "@cite_33", "@cite_22", "@cite_7", "@cite_9", "@cite_21", "@cite_3", "@cite_0", "@cite_27", "@cite_2", "@cite_5", "@cite_13", "@cite_25" ], "mid": [ "2549139847", "2963446712", "2798930779", "2594529350", "2559597482", "2963374479", "", "2886953980", "2745519816", "2743473392", "2618530766", "2183341477", "2613718673", "2194775991", "2310919327", "" ], "abstract": [ "We present a simple, highly modularized network architecture for image classification. Our network is constructed by repeating a building block that aggregates a set of transformations with the same topology. Our simple design results in a homogeneous, multi-branch architecture that has only a few hyper-parameters to set. This strategy exposes a new dimension, which we call cardinality (the size of the set of transformations), as an essential factor in addition to the dimensions of depth and width. On the ImageNet-1K dataset, we empirically show that even under the restricted condition of maintaining complexity, increasing cardinality is able to improve classification accuracy. Moreover, increasing cardinality is more effective than going deeper or wider when we increase the capacity. Our models, named ResNeXt, are the foundations of our entry to the ILSVRC 2016 classification task in which we secured 2nd place. We further investigate ResNeXt on an ImageNet-5K set and the COCO detection set, also showing better results than its ResNet counterpart. The code and models are publicly available online.", "Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. 
Whereas traditional convolutional networks with L layers have L connections—one between each layer and its subsequent layer—our network has L(L+1)/2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less memory and computation to achieve high performance. Code and pre-trained models are available at https://github.com/liuzhuang13/DenseNet.", "In this paper we propose a novel deep neural network that is able to jointly reason about 3D detection, tracking and motion forecasting given data captured by a 3D sensor. By jointly reasoning about these tasks, our holistic approach is more robust to occlusion as well as sparse data at range. Our approach performs 3D convolutions across space and time over a bird's eye view representation of the 3D world, which is very efficient in terms of both memory and computation. Our experiments on a new very large scale dataset captured in several North American cities, show that we can outperform the state-of-the-art by a large margin. Importantly, by sharing computation we can perform all tasks in as little as 30 ms.", "Neural networks have proven effective at solving difficult problems but designing their architectures can be challenging, even for image classification problems alone. Our goal is to minimize human participation, so we employ evolutionary algorithms to discover such networks automatically.
Despite significant computational requirements, we show that it is now possible to evolve models with accuracies within the range of those published in the last year. Specifically, we employ simple evolutionary techniques at unprecedented scales to discover models for the CIFAR-10 and CIFAR-100 datasets, starting from trivial initial conditions and reaching accuracies of 94.6% (95.6% for ensemble) and 77.0%, respectively. To do this, we use novel and intuitive mutation operators that navigate large search spaces; we stress that no human participation is required once evolution starts and that the output is a fully-trained model. Throughout this work, we place special emphasis on the repeatability of results, the variability in the outcomes and the computational requirements.", "State-of-the-art approaches for semantic image segmentation are built on Convolutional Neural Networks (CNNs). The typical segmentation architecture is composed of (a) a downsampling path responsible for extracting coarse semantic features, followed by (b) an upsampling path trained to recover the input image resolution at the output of the model and, optionally, (c) a post-processing module (e.g. Conditional Random Fields) to refine the model predictions. Recently, a new CNN architecture, Densely Connected Convolutional Networks (DenseNets), has shown excellent results on image classification tasks. The idea of DenseNets is based on the observation that if each layer is directly connected to every other layer in a feed-forward fashion then the network will be more accurate and easier to train. In this paper, we extend DenseNets to deal with the problem of semantic segmentation. We achieve state-of-the-art results on urban scene benchmark datasets such as CamVid and Gatech, without any further post-processing module nor pretraining.
Moreover, due to smart construction of the model, our approach has much less parameters than currently published best entries for these datasets.", "Neural networks are powerful and flexible models that work well for many difficult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is 0.09 percent better and 1.05x faster than the previous state-of-the-art model that used a similar architectural scheme. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art model. The cell can also be transferred to the character language modeling task on PTB and achieves a state-of-the-art perplexity of 1.214.", "", "Designing convolutional neural networks (CNN) models for mobile devices is challenging because mobile models need to be small and fast, yet still accurate. Although significant effort has been dedicated to design and improve mobile models on all three dimensions, it is challenging to manually balance these trade-offs when there are so many architectural possibilities to consider. In this paper, we propose an automated neural architecture search approach for designing resource-constrained mobile CNN models. 
We propose to explicitly incorporate latency information into the main objective so that the search can identify a model that achieves a good trade-off between accuracy and latency. Unlike in previous work, where mobile latency is considered via another, often inaccurate proxy (e.g., FLOPS), in our experiments, we directly measure real-world inference latency by executing the model on a particular platform, e.g., Pixel phones. To further strike the right balance between flexibility and search space size, we propose a novel factorized hierarchical search space that permits layer diversity throughout the network. Experimental results show that our approach consistently outperforms state-of-the-art mobile CNN models across multiple vision tasks. On the ImageNet classification task, our model achieves 74.0% top-1 accuracy with 76ms latency on a Pixel phone, which is 1.5x faster than MobileNetV2 ( 2018) and 2.4x faster than NASNet ( 2018) with the same top-1 accuracy. On the COCO object detection task, our model family achieves both higher mAP quality and lower latency than MobileNets.", "Learning image representations with ConvNets by pre-training on ImageNet has proven useful across many visual understanding tasks including object detection, semantic segmentation, and image captioning. Although any image representation can be applied to video frames, a dedicated spatiotemporal representation is still vital in order to incorporate motion patterns that cannot be captured by appearance based models alone. This paper presents an empirical ConvNet architecture search for spatiotemporal feature learning, culminating in a deep 3-dimensional (3D) Residual ConvNet.
Our proposed architecture outperforms C3D by a good margin on Sports-1M, UCF101, HMDB51, THUMOS14, and ASLAN while being 2 times faster at inference time, 2 times smaller in model size, and having a more compact representation.", "The highest accuracy object detectors to date are based on a two-stage approach popularized by R-CNN, where a classifier is applied to a sparse set of candidate object locations. In contrast, one-stage detectors that are applied over a regular, dense sampling of possible object locations have the potential to be faster and simpler, but have trailed the accuracy of two-stage detectors thus far. In this paper, we investigate why this is the case. We discover that the extreme foreground-background class imbalance encountered during training of dense detectors is the central cause. We propose to address this class imbalance by reshaping the standard cross entropy loss such that it down-weights the loss assigned to well-classified examples. Our novel Focal Loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training. To evaluate the effectiveness of our loss, we design and train a simple dense detector we call RetinaNet. Our results show that when trained with the focal loss, RetinaNet is able to match the speed of previous one-stage detectors while surpassing the accuracy of all existing state-of-the-art two-stage detectors. Code is at: this https URL", "We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, respectively, which is considerably better than the previous state-of-the-art.
The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully connected layers we employed a recently developed regularization method called \"dropout\" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.", "Convolutional networks are at the core of most state-of-the-art computer vision solutions for a wide variety of tasks. Since 2014 very deep convolutional networks started to become mainstream, yielding substantial gains in various benchmarks. Although increased model size and computational cost tend to translate to immediate quality gains for most tasks (as long as enough labeled data is provided for training), computational efficiency and low parameter count are still enabling factors for various use cases such as mobile vision and big-data scenarios. Here we are exploring ways to scale up networks in ways that aim at utilizing the added computation as efficiently as possible by suitably factorized convolutions and aggressive regularization. We benchmark our methods on the ILSVRC 2012 classification challenge validation set demonstrate substantial gains over the state of the art: 21.2% top-1 and 5.6% top-5 error for single frame evaluation using a network with a computational cost of 5 billion multiply-adds per inference and with using less than 25 million parameters.
With an ensemble of 4 models and multi-crop evaluation, we report 3.5% top-5 error and 17.3% top-1 error on the validation set and 3.6% top-5 error on the official test set.", "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2% mAP) and 2012 (70.4% mAP) using 300 proposals per image. Code is available at https://github.com/ShaoqingRen/faster_rcnn.
An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.", "", "" ] }
1906.11452
2954519665
In this work, we address traffic management of multiple payload transport systems comprising non-holonomic robots. We consider loosely coupled rigid robot formations carrying a payload from one place to another. Each payload transport system (PTS) moves in various kinds of environments with obstacles. We ensure each PTS completes its given task by avoiding collisions with other payload systems as well as obstacles. Each PTS has one leader and multiple followers, and the followers maintain a desired distance and angle with respect to the leader using a decentralized leader-follower control architecture while moving in traffic. We showcase, through simulations, the time taken by each PTS to traverse its respective trajectory with and without other PTS and obstacles. We show that our strategies help manage the traffic for a large number of PTS moving from one place to another.
Over the years, various methods have been developed for formation control and navigation of formations for payload transportation. @cite_23 , @cite_11 propose methods for path planning of a single formation without placing much emphasis on the presence of obstacles. @cite_10 , @cite_3 , @cite_25 , @cite_4 show navigation of a single formation in environments consisting of static obstacles. Path planning for 3D formations is proposed in @cite_17 and @cite_24 . A method for navigating a single formation using the leader-follower approach is shown in @cite_25 and @cite_7 . @cite_20 , @cite_15 show some promising results in aerial vehicle formation control. A slung-payload transportation method is demonstrated in @cite_18 , whereas @cite_9 devised wheeled locomotion for payload carrying with a modular robot. @cite_19 proposes path planning of a single formation in environments with dynamic obstacles, but it is a computationally expensive centralized method.
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_7", "@cite_9", "@cite_17", "@cite_3", "@cite_24", "@cite_19", "@cite_23", "@cite_15", "@cite_10", "@cite_25", "@cite_20", "@cite_11" ], "mid": [ "2172041073", "2804720827", "2046959847", "2111939510", "2047889826", "1542318281", "2564115802", "2744979297", "2162481101", "2887736113", "2198893007", "2156903767", "2409707557", "2791175394" ], "abstract": [ "In this paper we present an overview of techniques and approaches used for a load transportation system based on small size unmanned helicopters. The focus is on the control approach and on the movement of the rope connecting helicopters and load. The proposed approach is based on two control loops: an outer loop to control the translation of each helicopter in compound and an inner loop to control the orientation of helicopters. The challenge here is that in both loops the dynamics of the whole system - all helicopters and load - should be accounted for. It is shown, that for designing the outer loop controller a complex model of the helicopters and load can be replaced by a simplified model based on interconnected mass points. For designing the inner loop controller, the complete dynamics of the whole system are considered. The usage of force sensors in the ropes is proposed in order to simplify the inner loop controller and to make it robust against variations of system parameters. The presented inner loop controller is independent of the number of coupled helicopters. The outer loop controller depends on the number of helicopters. The problem of oscillations in the flexible ropes due to external disturbancies (e.g. wind gusts) is discussed and a solution based on load state observer is presented. The performance of the presented system was verified in simulations and in real flight experiments with one and three helicopters transporting the load. 
The worldwide first demonstration of a slung load transportation using three helicopters was performed in December 2007.", "Traditionally, motion planning involved navigating one robot from source to goal for accomplishing a task. Now, tasks mostly require movement of a team of robots to the goal site, requiring a chain of robots to reach the desired goal. While numerous efforts are made in the literature for solving the problems of motion planning of a single robot and collective robot navigation in isolation, this paper fuses the two paradigms to let a chain of robot navigate. Further, this paper uses SLAM to first make a static map using a high-end robot, over which the physical low-sensing robots run. Deliberative Planning uses A* algorithm to plan the path. Reactive planning uses the Potential Field Approach to avoid obstacles and stay as close to the initial path planned as possible. These two algorithms are then merged to provide an algorithm that allows the robot to reach its goal via the shortest path possible while avoiding obstacles. The algorithm is further extended to multiple robots so that one robot is followed by the next robot and so on, thus forming a chain. In order to maintain the robots in a chain form, the Elastic Strip model is used. The algorithm proposed successfully executes the above stated when tested on Amigobot robots in an office environment using a map made by the Pioneer LX robot. The proposed algorithm works well for moving a group of robots in a chain in a mapped environment.", "This paper presents the optimal path of nonholonomic multi robots with coherent formation in a leader-follower structure in the presence of obstacles using Asexual Reproduction Optimization (ARO). The robots path planning based on potential field method are accomplished and a novel formation controller for mobile robots based on potential field method is proposed. 
The efficiency of the proposed method is verified through simulation and experimental studies by applying them to control the formation of four e-Pucks robots (low-cost mobile robot platform). Also the proposed method is compared with Simulated Annealing, Improved Harmony Search and Cuckoo Optimization Algorithm methods and the experimental results, higher performance and fast convergence time to the best solution of the ARO demonstrated that this optimization method is appropriate for real time control application.", "Carrying heavy payloads is a challenging task for the modular robot, because its composing modules are relatively tiny and less strong compared with conventional robots. To accomplish this task, we attached passive rollers to the modular robot, and designed a wheeled locomotion gait called tricycleBot. The gait is inspired by paddling motion, and is implemented on the modular robot called SuperBot. Features of this gait are systematically studied and verified through extensive experiments. It is shown that tricycleBot can carry payloads at least 530 of its own weight. It can also be steered remotely to move forward backward, turn left right. Capability of tricycleBot demonstrates that the versatility of modular robot can be further expanded to solve very specialized and challenging tasks by using heterogeneous devices.", "This work presents a path planning algorithm for 3D robot formations based on the standard Fast Marching Square (FM2) path planning method. This method is enlarged in order to apply it to robot formations motion planning. The algorithm is based on a leader-followers scheme, which means that the reference pose for the follower robots is defined by geometric equations that place the goal pose of each follower as a function of the leader's pose. Besides, the Frenet-Serret frame is used to control the orientation of the formation. The algorithm presented allows the formation to adapt its shape so that the obstacles are avoided. 
Additionally, an approach to model mobile obstacles in a 3D environment is described. This model modifies the information used by the FM2 algorithm in favour of the robots to be able to avoid obstacles. The shape deformation scheme allows to easily change the behaviour of the formation. Finally, simulations are performed in different scenarios and a quantitative analysis of the results has been carried out. The tests show that the proposed shape deformation method, in combination with the FM2 path planner, is robust enough to manage autonomous movements through an indoor 3D environment.", "", "", "We present a constrained optimization method for multi-robot formation control in dynamic environments, where the robots adjust the parameters of the formation, such as size and three-dimensional o...", "This paper presents a motion-planning approach for coordinating multiple mobile robots in moving along specified paths. The robots are required to fulfill formation requirements while meeting velocity acceleration constraints and avoiding collisions. Coordination is achieved by planning robot velocities along the paths through a velocity-optimization process. An objective function for minimizing formation errors is established and solved by a linear interactive and general optimizer. Motion planning can be further adjusted online to address emergent demands such as avoiding suddenly appearing obstacles. Simulations and experiments are performed on a group of mobile robots to demonstrate the effectiveness of the proposed coordinated motion planning in multirobot formations.", "This paper presents a method for controlling a swarm of quadrotors to perform agile interleaved maneuvers while holding a fixed relative formation, or transitioning between different formations. The method prevents collisions within the swarm, as well as between the quadrotors and static obstacles in the environment. 
The method is built upon the existing notion of a virtual structure, which serves as a framework with which to plan and execute complex interleaved trajectories, and also gives a simple, intuitive interface for a single human operator to control an arbitrarily large aerial swarm in real time. The virtual structure concept is integrated with differential flatness-based feedback control to give an end-to-end integrated swarm teleoperation system. Collision avoidance is achieved by using multiple layered potential fields. Our method is demonstrated in hardware experiments with groups of 3–5 quadrotors teleoperated by a single human operator, and simulations of 200 quadrotors teleoperated by a single human operator.", "Multi robot formation is a canonical problem in robotic research. The problem has been examined in neutral environments, where the robots' goal is usually to maintain the formation despite changes in the environment. The problem of multi robot formation has been motivated by natural phenomena such as schools of fish or flocks of birds. While in the natural phenomena the team behavior is responsive to threats, in robotics research of team formation, adversarial presence has been ignored. In this paper we present the problem of adversarial formation, in which a team of robots travels in a connected formation through an adversarial environment that includes threats that may harm the robots. The robots' goal is, therefore, to maximize their chance of traveling through the environment unharmed, where the formation may be used as a mean to achieve this goal. We formally define the problem, present a quantitative measure for evaluating the survivability of the team, and suggest possible solutions to a variant of the problem under certain threat characteristics, optimizing different team survivability criteria. 
Finally, we discuss the challenges raised by transitioning the discrete representation to a continuous environment in simulation.", "This paper describes a robust algorithm for mobile robot formations based on the Voronoi Fast Marching path planning method. This is based on the propagation of a wave throughout the model of the environment, the wave expanding faster as the wave's distance from obstacles increases. This method provides smooth and safe trajectories and its computational efficiency allows us to maintain a good response time. The proposed method is based on a local-minima-free planner; it is complete and has an O(J) complexity order, where J is the number of cells of the map. Simulation results show that the proposed algorithm generates good trajectories.
The algorithm explores a graph representation of the environment, computing for each node the cost of moving a number of robots and their corresponding paths. In every node where the formation can split, all the new possible formation subdivisions are taken into account accordingly to their individual costs. In the same way, in every node where the formation can merge, the algorithm verifies whether the combination is possible and, if possible, computes the new cost. In order to manage split and merge situations, a set of constrains is applied. The proposed algorithm is thus deterministic, complete and finds an optimal solution from a source node to all other nodes in the graph. The presented solution is general enough to be incorporated into high-level tasks as well as it can benefit from state-of-the-art formation motion planning approaches, which can be used for evaluation of edges of an input graph. The presented experimental results demonstrate ability of the method to find the optimal solution for a formation of robots in environments with various complexity." ] }
1906.11452
2954519665
In this work, we address traffic management of multiple payload transport systems comprising non-holonomic robots. We consider loosely coupled rigid robot formations carrying a payload from one place to another. Each payload transport system (PTS) moves in various kinds of environments with obstacles. We ensure each PTS completes its given task by avoiding collisions with other payload systems as well as obstacles. Each PTS has one leader and multiple followers, and the followers maintain a desired distance and angle with respect to the leader using a decentralized leader-follower control architecture while moving in traffic. We showcase, through simulations, the time taken by each PTS to traverse its respective trajectory with and without other PTS and obstacles. We show that our strategies help manage the traffic for a large number of PTS moving from one place to another.
To the best of our knowledge, no work has been done on path planning and collision avoidance for multiple formations of robots in environments with obstacles. We cannot treat other formations as dynamic obstacles and use the approach presented in @cite_19 , because each formation has the specific goal of reaching its destination, while dynamic obstacles have no such goal and hence exhibit random behaviour. This aspect makes our task even more challenging.
{ "cite_N": [ "@cite_19" ], "mid": [ "2744979297" ], "abstract": [ "We present a constrained optimization method for multi-robot formation control in dynamic environments, where the robots adjust the parameters of the formation, such as size and three-dimensional o..." ] }
1906.11441
2953545691
Clustering and analyzing collected data can improve user experiences and quality of service in big data and IoT applications. However, directly releasing original data brings potential privacy concerns, which raises challenges and opportunities for privacy-preserving clustering. In this paper, we study the problem of non-interactive clustering in the distributed setting under the framework of local differential privacy. We first extend Bit Vector, a novel anonymization mechanism, to be functionality-capable and privacy-preserving. Based on the modified encoding mechanism, we propose the kCluster algorithm, which can be used for clustering in the anonymized space. We show that the modified encoding mechanism can be easily implemented in existing clustering algorithms that rely only on distance information, such as DBSCAN. Theoretical analysis and experimental results validate the effectiveness of the proposed schemes.
The concept of differential privacy was proposed by Dwork in the context of statistical disclosure control @cite_19 . Recent research has validated that differentially private mechanisms output accurate statistical information about the data as a whole while providing strong privacy protection for individual records in datasets. Based on differential privacy, the notion of LDP (Local Differential Privacy) was also proposed to protect each user's local data from the data analyst @cite_11 @cite_9 @cite_27 @cite_6 .
{ "cite_N": [ "@cite_9", "@cite_6", "@cite_19", "@cite_27", "@cite_11" ], "mid": [ "2963629772", "1986293063", "2027595342", "2742225091", "2147435839" ], "abstract": [ "Local differential privacy (LDP) is a recently proposed privacy standard for collecting and analyzing data, which has been used, e.g., in the Chrome browser, iOS and macOS. In LDP, each user perturbs her information locally, and only sends the randomized version to an aggregator who performs analyses, which protects both the users and the aggregator against private information leaks. Although LDP has attracted much research attention in recent years, the majority of existing work focuses on applying LDP to complex data and or analysis tasks. In this paper, we point out that the fundamental problem of collecting multidimensional data under LDP has not been addressed sufficiently, and there remains much room for improvement even for basic tasks such as computing the mean value over a single numeric attribute under LDP. Motivated by this, we first propose novel LDP mechanisms for collecting a numeric attribute, whose accuracy is at least no worse (and usually better) than existing solutions in terms of worst-case noise variance. Then, we extend these mechanisms to multidimensional data that can contain both numeric and categorical attributes, where our mechanisms always outperform existing solutions regarding worst-case noise variance. As a case study, we apply our solutions to build an LDP-compliant stochastic gradient descent algorithm (SGD), which powers many important machine learning tasks. Experiments using real datasets confirm the effectiveness of our methods, and their advantages over existing solutions.", "We give efficient protocols and matching accuracy lower bounds for frequency estimation in the local model for differential privacy. In this model, individual users randomize their data themselves, sending differentially private reports to an untrusted server that aggregates them. 
We study protocols that produce a succinct histogram representation of the data. A succinct histogram is a list of the most frequent items in the data (often called \"heavy hitters\") along with estimates of their frequencies; the frequency of all other items is implicitly estimated as 0. If there are n users whose items come from a universe of size d, our protocols run in time polynomial in n and log(d). With high probability, they estimate the accuracy of every item up to error O(√ log(d) (e2n) ). Moreover, we show that this much error is necessary, regardless of computational efficiency, and even for the simple setting where only one item appears with significant frequency in the data set. Previous protocols (Mishra and Sandler, 2006; Hsu, Khanna and Roth, 2012) for this task either ran in time Ω(d) or had much worse error (about √[6] log(d) (e2n) ), and the only known lower bound on error was Ω(1 √ n ). We also adapt a result of (2010) to the local setting. In a model with public coins, we show that each user need only send 1 bit to the server. For all known local protocols (including ours), the transformation preserves computational efficiency.", "The problem of privacy-preserving data analysis has a long history spanning multiple disciplines. As electronic data about individuals becomes increasingly detailed, and as technology enables ever more powerful collection and curation of these data, the need increases for a robust, meaningful, and mathematically rigorous definition of privacy, together with a computationally rich class of algorithms that satisfy this definition. Differential Privacy is such a definition.After motivating and discussing the meaning of differential privacy, the preponderance of this monograph is devoted to fundamental techniques for achieving differential privacy, and application of these techniques in creative combinations, using the query-release problem as an ongoing example. 
A key point is that, by rethinking the computational goal, one can often obtain far better results than would be achieved by methodically replacing each step of a non-private computation with a differentially private implementation. Despite some astonishingly powerful computational results, there are still fundamental limitations — not just on what can be achieved with differential privacy but on what can be achieved with any method that protects against a complete breakdown in privacy. Virtually all the algorithms discussed herein maintain differential privacy against adversaries of arbitrary computational power. Certain algorithms are computationally intensive, others are efficient. Computational complexity for the adversary and the algorithm are both discussed.We then turn from fundamentals to applications other than queryrelease, discussing differentially private methods for mechanism design and machine learning. The vast majority of the literature on differentially private algorithms considers a single, static, database that is subject to many analyses. Differential privacy in other models, including distributed databases and computations on data streams is discussed.Finally, we note that this work is meant as a thorough introduction to the problems and techniques of differential privacy, but is not intended to be an exhaustive survey — there is by now a vast amount of work in differential privacy, and we can cover only a small portion of it.", "", "Local differential privacy has recently surfaced as a strong measure of privacy in contexts where personal information remains private even from data analysts. Working in a setting where the data providers and data analysts want to maximize the utility of statistical inferences performed on the released data, we study the fundamental tradeoff between local differential privacy and information theoretic utility functions. 
We introduce a family of extremal privatization mechanisms, which we call staircase mechanisms, and prove that it contains the optimal privatization mechanism that maximizes utility. We further show that for all information theoretic utility functions studied in this paper, maximizing utility is equivalent to solving a linear program, the outcome of which is the optimal staircase mechanism. However, solving this linear program can be computationally expensive since it has a number of variables that is exponential in the data size. To account for this, we show that two simple staircase mechanisms, the binary and randomized response mechanisms, are universally optimal in the high and low privacy regimes, respectively, and well approximate the intermediate regime." ] }
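The binary randomized response mechanism, which the last abstract above identifies as universally optimal in the high-privacy regime, is the simplest concrete instance of LDP. A minimal sketch of the general idea (an illustration, not any specific cited paper's protocol):

```python
import math
import random

def randomized_response(true_bit: int, epsilon: float) -> int:
    """Report the true bit with probability e^eps / (e^eps + 1),
    otherwise flip it; this satisfies epsilon-LDP for a single bit."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return true_bit if random.random() < p_truth else 1 - true_bit

def estimate_frequency(reports, epsilon):
    """Unbiased estimate of the fraction of users whose true bit is 1,
    obtained by inverting the known bias of the noisy reports."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(reports) / len(reports)
    # E[observed] = p*f + (1-p)*(1-f), so solve for f
    return (observed - (1.0 - p)) / (2.0 * p - 1.0)
```

Each individual report is plausibly deniable, yet the aggregator can debias the empirical frequency to recover an accurate population statistic.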
1906.11441
2953545691
Clustering and analyzing on collected data can improve user experiences and quality of services in big data, IoT applications. However, directly releasing original data brings potential privacy concerns, which raises challenges and opportunities for privacy-preserving clustering. In this paper, we study the problem of non-interactive clustering in distributed setting under the framework of local differential privacy. We first extend the Bit Vector, a novel anonymization mechanism to be functionality-capable and privacy-preserving. Based on the modified encoding mechanism, we propose kCluster algorithm that can be used for clustering in the anonymized space. We show the modified encoding mechanism can be easily implemented in existing clustering algorithms that only rely on distance information, such as DBSCAN. Theoretical analysis and experimental results validate the effectiveness of the proposed schemes.
In the literature, RAPPOR @cite_1 was proposed for studying client data under the framework of differential privacy. In one-time RAPPOR, a value @math is first hashed into a Bloom filter @math of length @math by a series of hash functions. A permanent randomized response is then applied to @math to obtain @math before it is reported. However, the one-time RAPPOR mechanism cannot be used for distance-aware encoding, as the Bloom filter does not preserve distances. To address this, we use the BV mechanism mentioned above. Instead of permanent randomized response, the 1Bit mechanism @cite_7 is used to embed a numerical value: for data ranging from @math to @math , a numerical value is encoded to @math with probability @math .
{ "cite_N": [ "@cite_1", "@cite_7" ], "mid": [ "1981029888", "2963559079" ], "abstract": [ "Randomized Aggregatable Privacy-Preserving Ordinal Response, or RAPPOR, is a technology for crowdsourcing statistics from end-user client software, anonymously, with strong privacy guarantees. In short, RAPPORs allow the forest of client data to be studied, without permitting the possibility of looking at individual trees. By applying randomized response in a novel manner, RAPPOR provides the mechanisms for such collection as well as for efficient, high-utility analysis of the collected data. In particular, RAPPOR permits statistics to be collected on the population of client-side strings with strong privacy guarantees for each client, and without linkability of their reports. This paper describes and motivates RAPPOR, details its differential-privacy and utility guarantees, discusses its practical deployment and properties in the face of different attack models, and, finally, gives results of its application to both synthetic and real-world data.", "The collection and analysis of telemetry data from user's devices is routinely performed by many software companies. Telemetry collection leads to improved user experience but poses significant risks to users' privacy. Locally differentially private (LDP) algorithms have recently emerged as the main tool that allows data collectors to estimate various population statistics, while preserving privacy. The guarantees provided by such algorithms are typically very strong for a single round of telemetry collection, but degrade rapidly when telemetry is collected regularly. In particular, existing LDP algorithms are not suitable for repeated collection of counter data such as daily app usage statistics. In this paper, we develop new LDP mechanisms geared towards repeated collection of counter data, with formal privacy guarantees even after being executed for an arbitrarily long period of time. 
For two basic analytical tasks, mean estimation and histogram estimation, our LDP mechanisms for repeated data collection provide estimates with comparable or even the same accuracy as existing single-round LDP collection mechanisms. We conduct empirical evaluation on real-world counter datasets to verify our theoretical results. Our mechanisms have been deployed by Microsoft to collect telemetry across millions of devices." ] }
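The encoding probability in the text is left symbolic. The sketch below assumes the common linear form of the 1Bit mechanism for telemetry collection (report 1 with probability 1/(e^ε+1) + t·(e^ε−1)/(e^ε+1) for the value normalized to t ∈ [0, 1]); this is an assumed form for illustration and may differ from the exact variant used here:

```python
import math
import random

def one_bit_encode(x, lo, hi, epsilon):
    """Encode a numeric value in [lo, hi] as a single private bit.
    The probability of reporting 1 grows linearly in x; the two
    extreme inputs lo and hi remain epsilon-LDP indistinguishable."""
    t = (x - lo) / (hi - lo)                    # normalize to [0, 1]
    e = math.exp(epsilon)
    p_one = 1.0 / (e + 1.0) + t * (e - 1.0) / (e + 1.0)
    return 1 if random.random() < p_one else 0

def one_bit_mean(bits, lo, hi, epsilon):
    """Unbiased estimate of the population mean from the noisy bits."""
    e = math.exp(epsilon)
    avg = sum(bits) / len(bits)
    t_hat = (avg - 1.0 / (e + 1.0)) * (e + 1.0) / (e - 1.0)
    return lo + t_hat * (hi - lo)
```

Note the ratio of report probabilities between the extreme inputs lo and hi is exactly e^ε, which is what bounds the privacy loss.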
1906.11441
2953545691
Clustering and analyzing on collected data can improve user experiences and quality of services in big data, IoT applications. However, directly releasing original data brings potential privacy concerns, which raises challenges and opportunities for privacy-preserving clustering. In this paper, we study the problem of non-interactive clustering in distributed setting under the framework of local differential privacy. We first extend the Bit Vector, a novel anonymization mechanism to be functionality-capable and privacy-preserving. Based on the modified encoding mechanism, we propose kCluster algorithm that can be used for clustering in the anonymized space. We show the modified encoding mechanism can be easily implemented in existing clustering algorithms that only rely on distance information, such as DBSCAN. Theoretical analysis and experimental results validate the effectiveness of the proposed schemes.
In the distributed environment, each data owner generates a noised version of his dataset and sends the perturbed dataset to an aggregator for clustering (non-interactive mode). To provide privacy guarantees, obfuscation mechanisms such as perturbation and dimensionality reduction are used for data anonymization. Additive data perturbation (ADP @cite_13 ) and random subspace projection (RSP @cite_21 ) are two of the most common approaches in the literature for transforming original data into an anonymized space.
{ "cite_N": [ "@cite_21", "@cite_13" ], "mid": [ "2160553465", "2111272198" ], "abstract": [ "This paper explores the possibility of using multiplicative random projection matrices for privacy preserving distributed data mining. It specifically considers the problem of computing statistical aggregates like the inner product matrix, correlation coefficient matrix, and Euclidean distance matrix from distributed privacy sensitive data possibly owned by multiple parties. This class of problems is directly related to many other data-mining problems such as clustering, principal component analysis, and classification. This paper makes primary contributions on two different grounds. First, it explores independent component analysis as a possible tool for breaching privacy in deterministic multiplicative perturbation-based models such as random orthogonal transformation and random rotation. Then, it proposes an approximate random projection-based technique to improve the level of privacy protection while still preserving certain statistical characteristics of the data. The paper presents extensive theoretical analysis and experimental results. Experiments demonstrate that the proposed technique is effective and can be successfully used for different types of privacy-preserving data mining applications.", "Despite its benefit in a wide range of applications, data mining techniques also have raised a number of ethical issues. Some such issues include those of privacy, data security, intellectual property rights, and many others. In this paper, we address the privacy problem against unauthorized secondary use of information. To do so, we introduce a family of geometric data transformation methods (GDTMs) which ensure that the mining process will not violate privacy up to a certain degree of security. We focus primarily on privacy preserving data clustering, notably on partition-based and hierarchical methods. 
Our proposed methods distort only confidential numerical attributes to meet privacy requirements, while preserving general features for clustering analysis. Our experiments demonstrate that our methods are effective and provide acceptable values in practice for balancing privacy and accuracy. We report the main results of our performance evaluation and discuss some open research issues." ] }
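Random subspace projection of the kind described in the @cite_21 abstract can be sketched as a Johnson-Lindenstrauss style transform: multiplying by a random Gaussian matrix approximately preserves pairwise Euclidean distances while mixing the original attributes. The function name and parameters below are illustrative, not from either cited paper:

```python
import numpy as np

def random_projection(data, k, rng):
    """Project n x d data onto a random k-dimensional subspace.
    With entries of R drawn i.i.d. from N(0, 1), the map
    x -> (x @ R) / sqrt(k) approximately preserves pairwise
    Euclidean distances (Johnson-Lindenstrauss lemma), while the
    original attribute values are hidden behind the random mixing."""
    d = data.shape[1]
    R = rng.standard_normal((d, k))
    return data @ R / np.sqrt(k)
```

Because only distances (not coordinates) survive the transform, distance-based clustering such as k-means or DBSCAN can still run on the anonymized data.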
1906.11416
2955995678
Cluster analysis which focuses on the grouping and categorization of similar elements is widely used in various fields of research. Inspired by the phenomenon of atomic fission, a novel density-based clustering algorithm is proposed in this paper, called fission clustering (FC). It focuses on mining the dense families of a dataset and utilizes the information of the distance matrix to fissure clustering dataset into subsets. When we face the dataset which has a few points surround the dense families of clusters, K-nearest neighbors local density indicator is applied to distinguish and remove the points of sparse areas so as to obtain a dense subset that is constituted by the dense families of clusters. A number of frequently-used datasets were used to test the performance of this clustering approach, and to compare the results with those of algorithms. The proposed algorithm is found to outperform other algorithms in speed and accuracy.
Clustering is a classical problem in data mining. Over recent decades, a number of representative clustering algorithms have been proposed, such as DBSCAN @cite_30 and OPTICS @cite_29 (density-based); STING @cite_7 and CLIQUE @cite_38 (grid-based); Gaussian mixture models @cite_18 and COBWEB @cite_39 (model-based); K-means @cite_19 and CLARANS @cite_0 (partitioning); and DIANA @cite_40 and BIRCH @cite_6 (hierarchical).
{ "cite_N": [ "@cite_30", "@cite_38", "@cite_18", "@cite_7", "@cite_29", "@cite_6", "@cite_39", "@cite_0", "@cite_19", "@cite_40" ], "mid": [ "1673310716", "1977496278", "2011832962", "1566114229", "2160642098", "2095897464", "54893182", "2126626732", "2127218421", "2129869052" ], "abstract": [ "Clustering algorithms are attractive for the task of class identification in spatial databases. However, the application to large spatial databases rises the following requirements for clustering algorithms: minimal requirements of domain knowledge to determine the input parameters, discovery of clusters with arbitrary shape and good efficiency on large databases. The well-known clustering algorithms offer no solution to the combination of these requirements. In this paper, we present the new clustering algorithm DBSCAN relying on a density-based notion of clusters which is designed to discover clusters of arbitrary shape. DBSCAN requires only one input parameter and supports the user in determining an appropriate value for it. We performed an experimental evaluation of the effectiveness and efficiency of DBSCAN using synthetic data and real data of the SEQUOIA 2000 benchmark. The results of our experiments demonstrate that (1) DBSCAN is significantly more effective in discovering clusters of arbitrary shape than the well-known algorithm CLARANS, and that (2) DBSCAN outperforms CLARANS by a factor of more than 100 in terms of efficiency.", "Data mining applications place special requirements on clustering algorithms including: the ability to find clusters embedded in subspaces of high dimensional data, scalability, end-user comprehensibility of the results, non-presumption of any canonical data distribution, and insensitivity to the order of input records. We present CLIQUE, a clustering algorithm that satisfies each of these requirements. CLIQUE identifies dense clusters in subspaces of maximum dimensionality. 
It generates cluster descriptions in the form of DNF expressions that are minimized for ease of comprehension. It produces identical results irrespective of the order in which input records are presented and does not presume any specific mathematical form for data distribution. Through experiments, we show that CLIQUE efficiently finds accurate cluster in large high dimensional datasets.", "Cluster analysis is the automated search for groups of related observations in a dataset. Most clustering done in practice is based largely on heuristic but intuitively reasonable procedures, and most clustering methods available in commercial software are also of this type. However, there is little systematic guidance associated with these methods for solving important practical questions that arise in cluster analysis, such as how many clusters are there, which clustering method should be used, and how should outliers be handled. We review a general methodology for model-based clustering that provides a principled statistical approach to these issues. We also show that this can be useful for other problems in multivariate analysis, such as discriminant analysis and multivariate density estimation. We give examples from medical diagnosis, minefield detection, cluster recovery from noisy data, and spatial density estimation. Finally, we mention limitations of the methodology and discuss recent development...", "Spatial data mining, i.e., discovery of interesting characteristics and patterns that may implicitly exist in spatial databases, is a challenging task due to the huge amounts of spatial data and to the new conceptual nature of the problems which must account for spatial distance. Clustering and region oriented queries are common problems in this domain. Several approaches have been presented in recent years, all of which require at least one scan of all individual objects (points). 
Consequently, the computational complexity is at least linearly proportional to the number of objects to answer each query. In this paper, we propose a hierarchical statistical information grid based approach for spatial data mining to reduce the cost further. The idea is to capture statistical information associated with spatial cells in such a manner that whole classes of queries and clustering problems can be answered without recourse to the individual objects. In theory, and confirmed by empirical studies, this approach outperforms the best previous method by at least an order of magnitude, especially when the data set is very large.", "Cluster analysis is a primary method for database mining. It is either used as a stand-alone tool to get insight into the distribution of a data set, e.g. to focus further analysis and data processing, or as a preprocessing step for other algorithms operating on the detected clusters. Almost all of the well-known clustering algorithms require input parameters which are hard to determine but have a significant influence on the clustering result. Furthermore, for many real-data sets there does not even exist a global parameter setting for which the result of the clustering algorithm describes the intrinsic clustering structure accurately. We introduce a new algorithm for the purpose of cluster analysis which does not produce a clustering of a data set explicitly; but instead creates an augmented ordering of the database representing its density-based clustering structure. This cluster-ordering contains information which is equivalent to the density-based clusterings corresponding to a broad range of parameter settings. It is a versatile basis for both automatic and interactive cluster analysis. We show how to automatically and efficiently extract not only 'traditional' clustering information (e.g. representative points, arbitrary shaped clusters), but also the intrinsic clustering structure. 
For medium sized data sets, the cluster-ordering can be represented graphically and for very large data sets, we introduce an appropriate visualization technique. Both are suitable for interactive exploration of the intrinsic clustering structure offering additional insights into the distribution and correlation of the data.", "Finding useful patterns in large datasets has attracted considerable interest recently, and one of the most widely studied problems in this area is the identification of clusters, or densely populated regions, in a multi-dimensional dataset. Prior work does not adequately address the problem of large datasets and minimization of I O costs.This paper presents a data clustering method named BIRCH (Balanced Iterative Reducing and Clustering using Hierarchies), and demonstrates that it is especially suitable for very large databases. BIRCH incrementally and dynamically clusters incoming multi-dimensional metric data points to try to produce the best quality clustering with the available resources (i.e., available memory and time constraints). BIRCH can typically find a good clustering with a single scan of the data, and improve the quality further with a few additional scans. BIRCH is also the first clustering algorithm proposed in the database area to handle \"noise\" (data points that are not part of the underlying pattern) effectively.We evaluate BIRCH's time space efficiency, data input order sensitivity, and clustering quality through several experiments. We also present a performance comparisons of BIRCH versus CLARANS, a clustering method proposed recently for large datasets, and show that BIRCH is consistently superior.", "Conceptual clustering is an important way to sununarize data in an understandable manner. However, the recency of the conceptual clustering paradigm has allowed little exploration of conceptual clustering as a means of improving performance. 
This paper presents COBWEB, a conceptual clustering system that organizes data to maximize inference abilities. It does this by capturing attribute inter-correlations at classification tree nodes and generating inferences as a by-product of classification. Results from the domains of soybean and thyroid disease diagnosis support the success of this approach.", "Spatial data mining is the discovery of interesting relationships and characteristics that may exist implicitly in spatial databases. To this end, this paper has three main contributions. First, it proposes a new clustering method called CLARANS, whose aim is to identify spatial structures that may be present in the data. Experimental results indicate that, when compared with existing clustering methods, CLARANS is very efficient and effective. Second, the paper investigates how CLARANS can handle not only point objects, but also polygon objects efficiently. One of the methods considered, called the IR-approximation, is very efficient in clustering convex and nonconvex polygon objects. Third, building on top of CLARANS, the paper develops two spatial data mining algorithms that aim to discover relationships between spatial and nonspatial attributes. Both algorithms can discover knowledge that is difficult to find with existing spatial data mining algorithms.", "The main purpose of this paper is to describe a process for partitioning an N-dimensional population into k sets on the basis of a sample. The process, which is called 'k-means,' appears to give partitions which are reasonably efficient in the sense of within-class variance. That is, if p is the probability mass function for the population, S = S1, S2, * *, Sk is a partition of EN, and ui, i = 1, 2, * , k, is the conditional mean of p over the set Si, then W2(S) = ff=ISi f z u42 dp(z) tends to be low for the partitions S generated by the method. 
We say 'tends to be low,' primarily because of intuitive considerations, corroborated to some extent by mathematical analysis and practical computational experience. Also, the k-means procedure is easily programmed and is computationally economical, so that it is feasible to process very large samples on a digital computer. Possible applications include methods for similarity grouping, nonlinear prediction, approximating multivariate distributions, and nonparametric tests for independence among several variables. In addition to suggesting practical classification methods, the study of k-means has proved to be theoretically interesting. The k-means concept represents a generalization of the ordinary sample mean, and one is naturally led to study the pertinent asymptotic behavior, the object being to establish some sort of law of large numbers for the k-means. This problem is sufficiently interesting, in fact, for us to devote a good portion of this paper to it. The k-means are defined in section 2.1, and the main results which have been obtained on the asymptotic behavior are given there. The rest of section 2 is devoted to the proofs of these results. Section 3 describes several specific possible applications, and reports some preliminary results from computer experiments conducted to explore the possibilities inherent in the k-means idea. The extension to general metric spaces is indicated briefly in section 4. The original point of departure for the work described here was a series of problems in optimal classification (MacQueen [9]) which represented special", "1. Introduction. 2. Partitioning Around Medoids (Program PAM). 3. Clustering large Applications (Program CLARA). 4. Fuzzy Analysis. 5. Agglomerative Nesting (Program AGNES). 6. Divisive Analysis (Program DIANA). 7. Monothetic Analysis (Program MONA). Appendix 1. Implementation and Structure of the Programs. Appendix 2. Running the Programs. Appendix 3. Adapting the Programs to Your Needs. Appendix 4. 
The Program CLUSPLOT. References. Author Index. Subject Index." ] }
1906.11416
2955995678
Cluster analysis which focuses on the grouping and categorization of similar elements is widely used in various fields of research. Inspired by the phenomenon of atomic fission, a novel density-based clustering algorithm is proposed in this paper, called fission clustering (FC). It focuses on mining the dense families of a dataset and utilizes the information of the distance matrix to fissure clustering dataset into subsets. When we face the dataset which has a few points surround the dense families of clusters, K-nearest neighbors local density indicator is applied to distinguish and remove the points of sparse areas so as to obtain a dense subset that is constituted by the dense families of clusters. A number of frequently-used datasets were used to test the performance of this clustering approach, and to compare the results with those of algorithms. The proposed algorithm is found to outperform other algorithms in speed and accuracy.
Among the earlier methods, the most representative clustering algorithm is K-means @cite_19 , which partitions data points into K clusters using Euclidean distance as the distance metric. K-means has many variants (see @cite_48 @cite_4 ) and also serves as a building block for other methods, such as spectral clustering @cite_42 : spectral clustering maps the data points to a low-dimensional feature space that replaces the Euclidean space used in conventional K-means, and it can be reformulated as a weighted kernel K-means clustering method.
{ "cite_N": [ "@cite_19", "@cite_42", "@cite_4", "@cite_48" ], "mid": [ "2127218421", "2136294701", "", "1965514298" ], "abstract": [ "The main purpose of this paper is to describe a process for partitioning an N-dimensional population into k sets on the basis of a sample. The process, which is called 'k-means,' appears to give partitions which are reasonably efficient in the sense of within-class variance. That is, if p is the probability mass function for the population, S = S1, S2, * *, Sk is a partition of EN, and ui, i = 1, 2, * , k, is the conditional mean of p over the set Si, then W2(S) = ff=ISi f z u42 dp(z) tends to be low for the partitions S generated by the method. We say 'tends to be low,' primarily because of intuitive considerations, corroborated to some extent by mathematical analysis and practical computational experience. Also, the k-means procedure is easily programmed and is computationally economical, so that it is feasible to process very large samples on a digital computer. Possible applications include methods for similarity grouping, nonlinear prediction, approximating multivariate distributions, and nonparametric tests for independence among several variables. In addition to suggesting practical classification methods, the study of k-means has proved to be theoretically interesting. The k-means concept represents a generalization of the ordinary sample mean, and one is naturally led to study the pertinent asymptotic behavior, the object being to establish some sort of law of large numbers for the k-means. This problem is sufficiently interesting, in fact, for us to devote a good portion of this paper to it. The k-means are defined in section 2.1, and the main results which have been obtained on the asymptotic behavior are given there. The rest of section 2 is devoted to the proofs of these results. 
Section 3 describes several specific possible applications, and reports some preliminary results from computer experiments conducted to explore the possibilities inherent in the k-means idea. The extension to general metric spaces is indicated briefly in section 4. The original point of departure for the work described here was a series of problems in optimal classification (MacQueen [9]) which represented special", "We consider spectral clustering and transductive inference for data with multiple views. A typical example is the web, which can be described by either the hyperlinks between web pages or the words occurring in web pages. When each view is represented as a graph, one may convexly combine the weight matrices or the discrete Laplacians for each graph, and then proceed with existing clustering or classification techniques. Such a solution might sound natural, but its underlying principle is not clear. Unlike this kind of methodology, we develop multiview spectral clustering via generalizing the normalized cut from a single view to multiple views. We further build multiview transductive inference on the basis of multiview spectral clustering. Our framework leads to a mixture of Markov chains defined on every graph. The experimental evaluation on real-world web classification demonstrates promising results that validate our method.", "", "In this paper, the well-known k-means algorithm for searching for a locally optimal partition of the set A@?R^n is analyzed in the case if some data points occur on the border of two or more clusters. For this special case, a useful strategy by implementation of the k-means algorithm is proposed." ] }
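The K-means procedure described above (Lloyd's algorithm) can be sketched in a few lines. The first-k initialization below is an illustrative simplification, not what any cited variant prescribes; k-means++ style seeding is better in practice:

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Minimal Lloyd's algorithm: alternately assign each point to its
    nearest centroid (Euclidean distance) and recompute each centroid
    as the mean of its assigned cluster."""
    centroids = X[:k].astype(float).copy()   # naive init: first k points
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # pairwise point-to-centroid distances, shape (n, k)
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):          # keep old centroid if a cluster empties
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids
```

Each iteration can only decrease the within-cluster variance, which is why the procedure converges to a locally optimal partition.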
1906.11416
2955995678
Cluster analysis which focuses on the grouping and categorization of similar elements is widely used in various fields of research. Inspired by the phenomenon of atomic fission, a novel density-based clustering algorithm is proposed in this paper, called fission clustering (FC). It focuses on mining the dense families of a dataset and utilizes the information of the distance matrix to fissure clustering dataset into subsets. When we face the dataset which has a few points surround the dense families of clusters, K-nearest neighbors local density indicator is applied to distinguish and remove the points of sparse areas so as to obtain a dense subset that is constituted by the dense families of clusters. A number of frequently-used datasets were used to test the performance of this clustering approach, and to compare the results with those of algorithms. The proposed algorithm is found to outperform other algorithms in speed and accuracy.
Schikuta @cite_10 overlaid a grid on the data distribution area to partition the data into blocks; points falling in grid cells of higher density are considered members of the same cluster. Grid-based clustering has seen many extensions in recent years, such as a grid ranking strategy based on local density with priority-based anchor expansion @cite_35 , a density peaks clustering algorithm based on grids @cite_49 , and a shifting-grid clustering algorithm @cite_27 . However, these grid-based methods cannot be applied to high-dimensional datasets, as the number of cells in the grid grows exponentially with the data dimensionality.
{ "cite_N": [ "@cite_35", "@cite_27", "@cite_10", "@cite_49" ], "mid": [ "2885712284", "1994439126", "2117401405", "2524847239" ], "abstract": [ "Abstract Clustering based on grid and density for multi-density datasets plays a key role in data mining. In this work, a clustering method that consists of a grid ranking strategy based on local density and priority-based anchor expansion is proposed. In the proposed method, grid cells are ranked first according to local grid properties so the dataset is transformed into a ranked grid. An adjusted shifting grid is then introduced to calculate grid cell density. A cell expansion strategy that simulates the growth of bacterial colony is used to improve the completeness of each cluster. An adaptive technique is finally adopted to handle noisy cells to ensure accurate clustering. The accuracy, parameter sensitivity and computation cost of the proposed algorithm are analysed. The performance of the proposed algorithm is then compared to other clustering methods using four two-dimensional datasets, and the applicability of the proposed method to high-dimensional, large-scale dataset is discussed. Experimental results demonstrate that the proposed algorithm shows good performance in terms of accuracy, de-noising capability, robustness (parameters sensitivity) and computational efficiency. In addition, the results show that the proposed algorithm can handle effectively the problem of multi-density clustering.", "A new density- and grid-based type clustering algorithm using the concept of shifting grid is proposed. The proposed algorithm is a non-parametric type, which does not require users inputting parameters. It divides each dimension of the data space into certain intervals to form a grid structure in the data space. Based on the concept of sliding window, shifting of the whole grid structure is introduced to obtain a more descriptive density profile. As a result, we are able to enhance the accuracy of the results. 
Compared with many conventional algorithms, this algorithm is computationally efficient because it clusters data by cells rather than by points.", "Clustering is a common technique for the analysis of large images. In this paper a new approach to hierarchical clustering of very large data sets is presented. The GRIDCLUS algorithm uses a multidimensional grid data structure to organize the value space surrounding the pattern values, rather than to organize the patterns themselves. The patterns are grouped into blocks and clustered with respect to the blocks by a topological neighbor search algorithm. The runtime behavior of the algorithm outperforms all conventional hierarchical methods. A comparison of execution times to those of other commonly used clustering algorithms, and a heuristic runtime analysis are presented.", "To deal with the complex structure of the data set, density peaks clustering algorithm (DPC) was proposed in 2014. The density and the delta-distance are utilized to find the clustering centers in the DPC method. It detects outliers efficiently and finds clusters of arbitrary shape. But unfortunately, we need to calculate the distance between all data points in the first process, which limits the running speed of DPC algorithm on large datasets. To address this issue, this paper introduces a novel approach based on grid, called density peaks clustering algorithm based on grid (DPCG). This approach can overcome the operation efficiency problem. When calculating the local density, the idea of the grid is introduced to reduce the computation time based on the DPC algorithm. It requires neither calculating all pairwise distances nor many input parameters. Moreover, the DPCG algorithm successfully inherits all the merits of the DPC algorithm. Experimental results on UCI data sets and artificial data show that the DPCG algorithm is flexible and effective." ] }
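As a toy sketch of the grid-based idea this record surveys (bin points into cells, keep cells that are dense enough, flood-fill adjacent dense cells into clusters): the cell size and density threshold below are illustrative assumptions, and this is not the algorithm of any cited paper.

```python
from collections import defaultdict, deque

def grid_cluster(points, cell_size=1.0, min_pts=2):
    """Toy grid/density clustering: bin 2-D points into cells, keep cells with
    at least min_pts points, and flood-fill adjacent dense cells into clusters.
    Points falling in sparse cells are labeled -1 (noise)."""
    cells = defaultdict(list)
    for i, (x, y) in enumerate(points):
        cells[(int(x // cell_size), int(y // cell_size))].append(i)
    dense = {c for c, idx in cells.items() if len(idx) >= min_pts}
    labels = {}
    cluster_id = 0
    for start in sorted(dense):           # deterministic cluster numbering
        if start in labels:
            continue
        labels[start] = cluster_id
        queue = deque([start])
        while queue:                      # BFS over the 8-neighborhood
            cx, cy = queue.popleft()
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    nb = (cx + dx, cy + dy)
                    if nb in dense and nb not in labels:
                        labels[nb] = cluster_id
                        queue.append(nb)
        cluster_id += 1
    point_labels = [-1] * len(points)
    for cell, idx in cells.items():
        for i in idx:
            point_labels[i] = labels.get(cell, -1)
    return point_labels
```

Two well-separated blobs land in non-adjacent dense cells and receive distinct labels, while an isolated point stays noise; the exponential cell-count problem in high dimensions that the paragraph mentions is exactly the `cells` dictionary key space growing with dimensionality.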
1906.11415
2954662570
There is growing interest in learning a model which can recognize novel classes with only a few labeled examples. In this paper, we propose the Temporal Alignment Module (TAM), a novel few-shot learning framework that can learn to classify a previously unseen video. While most previous works neglect long-term temporal ordering information, our proposed model explicitly leverages the temporal ordering information in video data through temporal alignment. This leads to strong data-efficiency for few-shot learning. Concretely, TAM calculates the distance of a query video to novel class proxies by averaging the per-frame distances along its alignment path. We introduce a continuous relaxation to TAM so the model can be learned in an end-to-end fashion to directly optimize the few-shot learning objective. We evaluate TAM on two challenging real-world datasets, Kinetics and Something-Something-V2, and show that our model leads to significant improvements in few-shot video classification over a wide range of competitive baselines.
Several works have explored few-shot video recognition. @cite_6 proposes a novel OSS-Metric Learning approach that measures the similarity of video pairs to enable one-shot video classification. @cite_37 introduces a zero-shot method which learns a mapping function from an attribute to a class center, with an extension to few-shot learning that integrates labeled data from unseen classes. CMN @cite_12 is the work most closely related to ours. They introduce a multi-saliency embedding algorithm to encode a video into a fixed-size matrix representation, and then propose a compound memory network (CMN) to compress and store the representation and classify videos by matching and ranking. However, these previous works collapse the order of frames in their representations @cite_6 @cite_37 @cite_12 , so the learned models are sub-optimal for video datasets where sequence order is important. In this paper, we preserve the frame order in the video representation and estimate distances with temporal alignment, which exploits video sequence order to solve few-shot video classification.
{ "cite_N": [ "@cite_37", "@cite_12", "@cite_6" ], "mid": [ "2963689837", "2901751978", "199855617" ], "abstract": [ "We present a generative framework for zero-shot action recognition where some of the possible action classes do not occur in the training data. Our approach is based on modeling each action class using a probability distribution whose parameters are functions of the attribute vector representing that action class. In particular, we assume that the distribution parameters for any action class in the visual space can be expressed as a linear combination of a set of basis vectors where the combination weights are given by the attributes of the action class. These basis vectors can be learned solely using labeled data from the known (i.e., previously seen) action classes, and can then be used to predict the parameters of the probability distributions of unseen action classes. We consider two settings: (1) Inductive setting, where we use only the labeled examples of the seen action classes to predict the unseen action class parameters; and (2) Transductive setting which further leverages unlabeled data from the unseen action classes. Our framework also naturally extends to few-shot action recognition where a few labelled examples from unseen classes are available. Our experiments on benchmark datasets (UCF101, HMDB51 and Olympic) show significant performance improvements as compared to various baselines, in both standard zero-shot (disjoint seen and unseen classes) and generalized zero-shot learning settings.", "The explosive growth in video streaming gives rise to challenges on efficiently extracting the spatial-temporal information to perform video understanding at low computation cost. Conventional 2D CNNs are computationally cheap but cannot capture temporal relationships; 3D CNN based methods can achieve good performance but are computationally intensive, making it expensive to deploy. 
In this paper, we propose a generic and effective Temporal Shift Module (TSM) that enjoys both high efficiency and high performance. Specifically, it can achieve the performance of 3D CNN but maintain 2D CNN's complexity. TSM shifts part of the channels along the temporal dimension, thus facilitating information exchange among neighboring frames. It can be inserted into 2D CNNs to achieve temporal modeling at zero computation and zero parameters. We also extended TSM to the online video recognition setting, which enables real-time low-latency online video recognition. On the Something-Something-V1 dataset, which focuses on temporal modeling, we achieved better results than the I3D family and ECO family using 6X and 2.7X fewer FLOPs respectively. Measured on a P100 GPU, our single model achieved 1.8% higher accuracy at 9.5X lower latency and 12.7X higher throughput compared to I3D. The code is available here: this https URL.", "The One-Shot-Similarity (OSS) is a framework for classifier-based similarity functions. It is based on the use of background samples and was shown to excel in tasks ranging from face recognition to document analysis. However, we found that its performance depends on the ability to effectively learn the underlying classifiers, which in turn depends on the underlying metric. In this work we present a metric learning technique that is geared toward improved OSS performance. We test the proposed technique using the recently presented ASLAN action similarity labeling benchmark. Enhanced, state-of-the-art performance is obtained, and the method compares favorably to leading similarity learning techniques." ] }
1906.11415
2954662570
There is growing interest in learning a model which can recognize novel classes with only a few labeled examples. In this paper, we propose the Temporal Alignment Module (TAM), a novel few-shot learning framework that can learn to classify a previously unseen video. While most previous works neglect long-term temporal ordering information, our proposed model explicitly leverages the temporal ordering information in video data through temporal alignment. This leads to strong data-efficiency for few-shot learning. Concretely, TAM calculates the distance of a query video to novel class proxies by averaging the per-frame distances along its alignment path. We introduce a continuous relaxation to TAM so the model can be learned in an end-to-end fashion to directly optimize the few-shot learning objective. We evaluate TAM on two challenging real-world datasets, Kinetics and Something-Something-V2, and show that our model leads to significant improvements in few-shot video classification over a wide range of competitive baselines.
A significant amount of research has tackled the problem of video classification. State-of-the-art video classification methods have evolved from hand-crafted representation learning @cite_15 @cite_9 @cite_16 to deep-learning based models. C3D @cite_10 utilizes 3D spatial-temporal convolutional filters to extract deep features from sequences of RGB frames. TSN @cite_7 and I3D @cite_40 use larger two-stream 2D or 3D CNNs on both RGB and optical-flow sequences. By factorizing 3D convolutional filters into separate spatial and temporal components, P3D @cite_2 and R(2+1)D @cite_28 yield smaller models with comparable or superior classification accuracy. An issue with these video representation learning methods is their dependence on large-scale video datasets for training: models with an excessive number of learnable parameters tend to fail when only a small number of training samples are available.
{ "cite_N": [ "@cite_7", "@cite_28", "@cite_9", "@cite_40", "@cite_2", "@cite_15", "@cite_16", "@cite_10" ], "mid": [ "2507009361", "2963155035", "", "2619082050", "2963820951", "2024868105", "", "1522734439" ], "abstract": [ "Deep convolutional networks have achieved great success for visual recognition in still images. However, for action recognition in videos, the advantage over traditional methods is not so evident. This paper aims to discover the principles to design effective ConvNet architectures for action recognition in videos and learn these models given limited training samples. Our first contribution is temporal segment network (TSN), a novel framework for video-based action recognition, which is based on the idea of long-range temporal structure modeling. It combines a sparse temporal sampling strategy and video-level supervision to enable efficient and effective learning using the whole action video. The other contribution is our study on a series of good practices in learning ConvNets on video data with the help of temporal segment network. Our approach obtains the state-of-the-art performance on the datasets of HMDB51 (69.4%) and UCF101 (94.2%). We also visualize the learned ConvNet models, which qualitatively demonstrates the effectiveness of temporal segment network and the proposed good practices (Models and code at https://github.com/yjxiong/temporal-segment-networks).", "In this paper we discuss several forms of spatiotemporal convolutions for video analysis and study their effects on action recognition. Our motivation stems from the observation that 2D CNNs applied to individual frames of the video have remained solid performers in action recognition. In this work we empirically demonstrate the accuracy advantages of 3D CNNs over 2D CNNs within the framework of residual learning. Furthermore, we show that factorizing the 3D convolutional filters into separate spatial and temporal components yields significant gains in accuracy. 
Our empirical study leads to the design of a new spatiotemporal convolutional block \"R(2+1)D\" which produces CNNs that achieve results comparable or superior to the state-of-the-art on Sports-1M, Kinetics, UCF101, and HMDB51.", "", "The paucity of videos in current action classification datasets (UCF-101 and HMDB-51) has made it difficult to identify good video architectures, as most methods obtain similar performance on existing small-scale benchmarks. This paper re-evaluates state-of-the-art architectures in light of the new Kinetics Human Action Video dataset. Kinetics has two orders of magnitude more data, with 400 human action classes and over 400 clips per class, and is collected from realistic, challenging YouTube videos. We provide an analysis on how current architectures fare on the task of action classification on this dataset and how much performance improves on the smaller benchmark datasets after pre-training on Kinetics. We also introduce a new Two-Stream Inflated 3D ConvNet (I3D) that is based on 2D ConvNet inflation: filters and pooling kernels of very deep image classification ConvNets are expanded into 3D, making it possible to learn seamless spatio-temporal feature extractors from video while leveraging successful ImageNet architecture designs and even their parameters. We show that, after pre-training on Kinetics, I3D models considerably improve upon the state-of-the-art in action classification, reaching 80.9% on HMDB-51 and 98.0% on UCF-101.", "Convolutional Neural Networks (CNN) have been regarded as a powerful class of models for image recognition problems. Nevertheless, it is not trivial when utilizing a CNN for learning spatio-temporal video representation. A few studies have shown that performing 3D convolutions is a rewarding approach to capture both spatial and temporal dimensions in videos. However, the development of a very deep 3D CNN from scratch results in expensive computational cost and memory demand. 
A valid question is why not recycle off-the-shelf 2D networks for a 3D CNN. In this paper, we devise multiple variants of bottleneck building blocks in a residual learning framework by simulating 3 × 3 × 3 convolutions with 1 × 3 × 3 convolutional filters on spatial domain (equivalent to 2D CNN) plus 3 × 1 × 1 convolutions to construct temporal connections on adjacent feature maps in time. Furthermore, we propose a new architecture, named Pseudo-3D Residual Net (P3D ResNet), that exploits all the variants of blocks but composes each in different placement of ResNet, following the philosophy that enhancing structural diversity with going deep could improve the power of neural networks. Our P3D ResNet achieves clear improvements on Sports-1M video classification dataset against 3D CNN and frame-based 2D CNN by 5.3% and 1.8%, respectively. We further examine the generalization performance of video representation produced by our pre-trained P3D ResNet on five different benchmarks and three different tasks, demonstrating superior performances over several state-of-the-art techniques.", "In this work, we present a novel local descriptor for video sequences. The proposed descriptor is based on histograms of oriented 3D spatio-temporal gradients. Our contribution is four-fold. (i) To compute 3D gradients for arbitrary scales, we develop a memory-efficient algorithm based on integral videos. (ii) We propose a generic 3D orientation quantization which is based on regular polyhedrons. (iii) We perform an in-depth evaluation of all descriptor parameters and optimize them for action recognition. (iv) We apply our descriptor to various action datasets (KTH, Weizmann, Hollywood) and show that we outperform the state-of-the-art.", "", "We propose a simple, yet effective approach for spatiotemporal feature learning using deep 3-dimensional convolutional networks (3D ConvNets) trained on a large scale supervised video dataset. 
Our findings are three-fold: 1) 3D ConvNets are more suitable for spatiotemporal feature learning compared to 2D ConvNets, 2) A homogeneous architecture with small 3x3x3 convolution kernels in all layers is among the best performing architectures for 3D ConvNets, and 3) Our learned features, namely C3D (Convolutional 3D), with a simple linear classifier outperform state-of-the-art methods on 4 different benchmarks and are comparable with current best methods on the other 2 benchmarks. In addition, the features are compact: achieving 52.8% accuracy on UCF101 dataset with only 10 dimensions and also very efficient to compute due to the fast inference of ConvNets. Finally, they are conceptually very simple and easy to train and use." ] }
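A quick way to see the "smaller models" claim for the factorized spatiotemporal convolutions this record discusses is to count parameters of a full t×k×k 3D kernel versus a (1×k×k) spatial plus (t×1×1) temporal factorization. The channel widths below, including the intermediate width, are illustrative assumptions; the cited papers instead choose the intermediate width to match a parameter budget.

```python
def conv3d_params(c_in, c_out, t, k):
    """Parameters of a full 3D convolution with a t x k x k kernel (bias ignored)."""
    return c_in * c_out * t * k * k

def factorized_params(c_in, c_out, t, k, c_mid):
    """A (2+1)D-style factorization: spatial 1 x k x k conv into c_mid channels,
    then a temporal t x 1 x 1 conv into c_out channels (bias ignored)."""
    return c_in * c_mid * k * k + c_mid * c_out * t
```

For c_in = c_out = 64, t = 3, k = 3 and an intermediate width of 64, the full kernel costs 110,592 parameters against 49,152 for the factorization; choosing c_mid to equalize the two budgets is what lets the factorized block spend its savings on an extra nonlinearity.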
1906.11415
2954662570
There is growing interest in learning a model which can recognize novel classes with only a few labeled examples. In this paper, we propose the Temporal Alignment Module (TAM), a novel few-shot learning framework that can learn to classify a previously unseen video. While most previous works neglect long-term temporal ordering information, our proposed model explicitly leverages the temporal ordering information in video data through temporal alignment. This leads to strong data-efficiency for few-shot learning. Concretely, TAM calculates the distance of a query video to novel class proxies by averaging the per-frame distances along its alignment path. We introduce a continuous relaxation to TAM so the model can be learned in an end-to-end fashion to directly optimize the few-shot learning objective. We evaluate TAM on two challenging real-world datasets, Kinetics and Something-Something-V2, and show that our model leads to significant improvements in few-shot video classification over a wide range of competitive baselines.
Another concern in video representation learning is the lack of temporal relational reasoning. Classification of videos that are sensitive to temporal ordering poses a more significant challenge to the above networks, which are tailored to capture short-term temporal features. Non-local neural networks @cite_21 introduce self-attention to aggregate long-term temporal information. Wang et al. @cite_4 further employ space-time region graphs to model spatial-temporal reasoning. Recently, TRN @cite_51 proposed a temporal relational module to achieve superior performance. Still, these networks inevitably pool or fuse features from different frames in the last layers to extract a single feature vector representing the whole video. In contrast, our model is able to learn a video representation without loss of temporal ordering in order to generate more accurate final predictions.
{ "cite_N": [ "@cite_51", "@cite_21", "@cite_4" ], "mid": [ "2770804203", "", "2806331055" ], "abstract": [ "Temporal relational reasoning, the ability to link meaningful transformations of objects or entities over time, is a fundamental property of intelligent species. In this paper, we introduce an effective and interpretable network module, the Temporal Relation Network (TRN), designed to learn and reason about temporal dependencies between video frames at multiple time scales. We evaluate TRN-equipped networks on activity recognition tasks using three recent video datasets - Something-Something, Jester, and Charades - which fundamentally depend on temporal relational reasoning. Our results demonstrate that the proposed TRN gives convolutional neural networks a remarkable capacity to discover temporal relations in videos. Through only sparsely sampled video frames, TRN-equipped networks can accurately predict human-object interactions in the Something-Something dataset and identify various human gestures on the Jester dataset with very competitive performance. TRN-equipped networks also outperform two-stream networks and 3D convolution networks in recognizing daily activities in the Charades dataset. Further analyses show that the models learn intuitive and interpretable visual common sense knowledge in videos (Code and models are available at http://relation.csail.mit.edu).", "", "How do humans recognize the action “opening a book”? We argue that there are two important cues: modeling temporal shape dynamics and modeling functional relationships between humans and objects. In this paper, we propose to represent videos as space-time region graphs which capture these two important cues. Our graph nodes are defined by the object region proposals from different frames in a long range video. 
These nodes are connected by two types of relations: (i) similarity relations capturing the long range dependencies between correlated objects and (ii) spatial-temporal relations capturing the interactions between nearby objects. We perform reasoning on this graph representation via Graph Convolutional Networks. We achieve state-of-the-art results on the Charades and Something-Something datasets. Especially for Charades with complex environments, we obtain a huge (4.4%) gain when our model is applied in complex environments." ] }
1906.11415
2954662570
There is growing interest in learning a model which can recognize novel classes with only a few labeled examples. In this paper, we propose the Temporal Alignment Module (TAM), a novel few-shot learning framework that can learn to classify a previously unseen video. While most previous works neglect long-term temporal ordering information, our proposed model explicitly leverages the temporal ordering information in video data through temporal alignment. This leads to strong data-efficiency for few-shot learning. Concretely, TAM calculates the distance of a query video to novel class proxies by averaging the per-frame distances along its alignment path. We introduce a continuous relaxation to TAM so the model can be learned in an end-to-end fashion to directly optimize the few-shot learning objective. We evaluate TAM on two challenging real-world datasets, Kinetics and Something-Something-V2, and show that our model leads to significant improvements in few-shot video classification over a wide range of competitive baselines.
Sequence alignment is of great importance in the field of bioinformatics, where it describes the arrangement of DNA, RNA, or protein sequences in order to identify the regions of similarity among them @cite_30 . In the vision community, there is growing interest in tackling the sequence alignment problem with high-dimensional multi-modal data, such as finding the alignment between an untrimmed video sequence and the corresponding textual action sequence @cite_24 @cite_1 @cite_13 . The main technique that has been applied to this line of work is dynamic programming. While dynamic programming is guaranteed to find the optimal alignment between two sequences given a prescribed distance function, the discrete operations it uses are non-differentiable and hence prevent learning distance functions with gradient-based methods. Our work is closely related to recent progress on using continuous relaxations of discrete operations to tackle the sequence alignment problem @cite_24 , which allows us to train our entire model end-to-end.
{ "cite_N": [ "@cite_30", "@cite_1", "@cite_13", "@cite_24" ], "mid": [ "2158714788", "", "2962916463", "2910102801" ], "abstract": [ "The BLAST programs are widely used tools for searching protein and DNA databases for sequence similarities. For protein comparisons, a variety of definitional, algorithmic and statistical refinements described here permits the execution time of the BLAST programs to be decreased substantially while enhancing their sensitivity to weak similarities. A new criterion for triggering the extension of word hits, combined with a new heuristic for generating gapped alignments, yields a gapped BLAST program that runs at approximately three times the speed of the original. In addition, a method is introduced for automatically combining statistically significant alignments produced by BLAST into a position-specific score matrix, and searching the database using this matrix. The resulting Position-Specific Iterated BLAST (PSI-BLAST) program runs at approximately the same speed per iteration as gapped BLAST, but in many cases is much more sensitive to weak but biologically relevant sequence similarities. PSI-BLAST is used to uncover several new and interesting members of the BRCT superfamily.", "", "Video learning is an important task in computer vision and has experienced increasing interest over the recent years. Since even a small amount of videos easily comprises several million frames, methods that do not rely on a frame-level annotation are of special importance. In this work, we propose a novel learning algorithm with a Viterbi-based loss that allows for online and incremental learning of weakly annotated video data. We moreover show that explicit context and length modeling leads to huge improvements in video segmentation and labeling tasks and include these models into our framework. 
On several action segmentation benchmarks, we obtain an improvement of up to 10% compared to current state-of-the-art methods.", "We address weakly supervised action alignment and segmentation in videos, where only the order of occurring actions is available during training. We propose Discriminative Differentiable Dynamic Time Warping (D3TW), the first discriminative model using weak ordering supervision. The key technical challenge for discriminative modeling with weak supervision is that the loss function of the ordering supervision is usually formulated using dynamic programming and is thus not differentiable. We address this challenge with a continuous relaxation of the min-operator in dynamic programming and extend the alignment loss to be differentiable. The proposed D3TW innovatively solves sequence alignment with discriminative modeling and end-to-end training, which substantially improves the performance in weakly supervised action alignment and segmentation tasks. We show that our model is able to bypass the degenerated sequence problem usually encountered in previous work and outperform the current state-of-the-art across three evaluation metrics in two challenging datasets." ] }
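The continuous relaxation of the dynamic-programming min operator that this record's related work centers on (as in D3TW and soft-DTW-style losses) can be sketched with a smoothed minimum. The recursion below is a generic toy over a precomputed pairwise distance matrix, under the assumption of a soft-DTW-style formulation; it is not the cited papers' exact loss.

```python
import math

def softmin(values, gamma=1.0):
    """Smoothed minimum -gamma * log(sum(exp(-v / gamma))): a differentiable
    stand-in for the hard min; recovers min(values) as gamma -> 0.
    Shifted by the true minimum for numerical stability."""
    m = min(values)
    return m - gamma * math.log(sum(math.exp(-(v - m) / gamma) for v in values))

def soft_dtw(D, gamma=1.0):
    """Soft alignment cost for an n x m distance matrix D (nested lists).
    With gamma -> 0 this reduces to the classic DTW recursion."""
    n, m = len(D), len(D[0])
    R = [[math.inf] * (m + 1) for _ in range(n + 1)]
    R[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # soft version of: D[i-1][j-1] + min(up, left, diagonal)
            R[i][j] = D[i - 1][j - 1] + softmin(
                [R[i - 1][j], R[i][j - 1], R[i - 1][j - 1]], gamma)
    return R[n][m]
```

With a tiny gamma the result matches the hard DTW cost, while larger gamma lower-bounds it; the point of the relaxation is that the smoothed recursion is differentiable in the entries of D, so a learned frame distance can be trained end-to-end.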
1906.11373
2954325026
Analysis of player tracking data for American football is in its infancy, since the National Football League (NFL) released its Next Gen Stats tracking data publicly for the first time in December 2018. While tracking datasets in other sports often contain detailed annotations of on-field events, annotations in the NFL's tracking data are limited. Methods for creating these annotations typically require extensive human labeling, which is difficult and expensive. We begin tackling this class of problems by creating annotations for pass coverage types by defensive backs using unsupervised learning techniques, which require no manual labeling or human oversight. We define a set of features from the NFL's tracking data that help distinguish between "zone" and "man" coverage. We use Gaussian mixture modeling and hierarchical clustering to create clusters corresponding to each group, and we assign the appropriate type of coverage to each cluster through qualitative analysis of the plays in each cluster. We find that the mixture model's "soft" cluster assignments allow for more flexibility when identifying coverage types. Our work makes possible several potential avenues of future NFL research, and we provide a basic exploration of these in this paper.
Since the NFL tracking data was released in early 2019, little work has been done on providing annotations for tracking data in football. As mentioned above, the work of @cite_2 is a notable exception here. However, similar work has been done in other sports, specifically other "invasion"-style sports such as soccer and basketball.
{ "cite_N": [ "@cite_2" ], "mid": [ "2197859468" ], "abstract": [ "8 ASUBTITLE: THE SALE OF 20 REGIONAL JETS TO COMAIR HAS BOOSTED CONFIDENCE AT CANADAIR: NOW THE PRESSURE IS STARTING TO MOUNT FOR THE BOMBARDIER SUBSIDIARY TO LAUNCH A STRETCHED VERSION OF THE 50-SEATER." ] }
1906.11373
2954325026
Analysis of player tracking data for American football is in its infancy, since the National Football League (NFL) released its Next Gen Stats tracking data publicly for the first time in December 2018. While tracking datasets in other sports often contain detailed annotations of on-field events, annotations in the NFL's tracking data are limited. Methods for creating these annotations typically require extensive human labeling, which is difficult and expensive. We begin tackling this class of problems by creating annotations for pass coverage types by defensive backs using unsupervised learning techniques, which require no manual labeling or human oversight. We define a set of features from the NFL's tracking data that help distinguish between "zone" and "man" coverage. We use Gaussian mixture modeling and hierarchical clustering to create clusters corresponding to each group, and we assign the appropriate type of coverage to each cluster through qualitative analysis of the plays in each cluster. We find that the mixture model's "soft" cluster assignments allow for more flexibility when identifying coverage types. Our work makes possible several potential avenues of future NFL research, and we provide a basic exploration of these in this paper.
In soccer, @cite_0 use player tracking data to identify a team's playing style. They are able to identify a team from the players' on-field position data alone, using features derived by minimizing the entropy of role-specific occupancy maps. To do so, the authors assign formation-based roles to individual players and then create occupancy maps based on the assigned roles.
{ "cite_N": [ "@cite_0" ], "mid": [ "2545266867" ], "abstract": [ "To the trained-eye, experts can often identify a team based on their unique style of play due to their movement, passing and interactions. In this paper, we present a method which can accurately determine the identity of a team from spatiotemporal player tracking data. We do this by utilizing a formation descriptor which is found by minimizing the entropy of role-specific occupancy maps. We show how our approach is significantly better at identifying different teams compared to standard measures (i.e., Shots, passes etc.). We demonstrate the utility of our approach using an entire season of Prozone player tracking data from a top-tier professional soccer league." ] }
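As a toy illustration of the occupancy-map idea in this record, the sketch below bins 2-D positions into a grid and scores the map by its Shannon entropy, where lower entropy means a more concentrated, role-like map. The grid size and normalized bounds are illustrative assumptions, and the role-assignment step of the cited paper is omitted.

```python
import math

def occupancy_entropy(positions, grid=(4, 4), bounds=(0.0, 1.0)):
    """Shannon entropy (bits) of a normalized occupancy map built from 2-D
    positions. Concentrated positions give low entropy; spread gives high."""
    lo, hi = bounds
    counts = [[0] * grid[1] for _ in range(grid[0])]
    for x, y in positions:
        i = min(int((x - lo) / (hi - lo) * grid[0]), grid[0] - 1)
        j = min(int((y - lo) / (hi - lo) * grid[1]), grid[1] - 1)
        counts[i][j] += 1
    total = sum(map(sum, counts))
    probs = [c / total for row in counts for c in row if c > 0]
    return -sum(p * math.log2(p) for p in probs)
```

Positions that all fall in one cell score 0 bits, while positions spread evenly over four cells score 2 bits; a role assignment that minimizes this quantity over role-specific maps yields the compact formation descriptor the paragraph describes.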
1906.11373
2954325026
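The clustering pipeline described in the abstract above (Gaussian mixture modeling with "soft" cluster assignments) can be sketched as follows. This is an illustrative example, not the paper's implementation: the two features and the simulated man/zone clusters are hypothetical stand-ins for the tracking-derived features.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical per-play features for a defensive back, e.g. mean distance
# to the nearest receiver and variance of the defender's field position.
# "Man" plays (defender shadows a receiver) and "zone" plays (defender
# guards an area) are simulated as two Gaussian clusters.
man = rng.normal(loc=[1.5, 0.5], scale=0.4, size=(100, 2))
zone = rng.normal(loc=[4.0, 2.0], scale=0.6, size=(100, 2))
X = np.vstack([man, zone])

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)

# "Soft" assignments: per-play membership probabilities rather than hard
# labels; each row sums to 1.
probs = gmm.predict_proba(X)
print(probs.shape)  # (200, 2)
```

Thresholding these probabilities, rather than taking the argmax, lets borderline plays be flagged for review instead of being forced into a hard man/zone label.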
Analysis of player tracking data for American football is in its infancy, since the National Football League (NFL) released its Next Gen Stats tracking data publicly for the first time in December 2018. While tracking datasets in other sports often contain detailed annotations of on-field events, annotations in the NFL's tracking data are limited. Methods for creating these annotations typically require extensive human labeling, which is difficult and expensive. We begin tackling this class of problems by creating annotations for pass coverage types by defensive backs using unsupervised learning techniques, which require no manual labeling or human oversight. We define a set of features from the NFL's tracking data that help distinguish between "zone" and "man" coverage. We use Gaussian mixture modeling and hierarchical clustering to create clusters corresponding to each group, and we assign the appropriate type of coverage to each cluster through qualitative analysis of the plays in each cluster. We find that the mixture model's "soft" cluster assignments allow for more flexibility when identifying coverage types. Our work makes possible several potential avenues of future NFL research, and we provide a basic exploration of these in this paper.
In basketball, @cite_5 use spatio-temporal changes in team formation to determine which features allow a player to create an "open" shot. Similar to @cite_0 , the authors create role-based features by first assigning each player a role; in this case, each of the five players on the court is assigned one of the five traditional basketball positions. They then track the motion of roles rather than individual players, yielding a permutation-free set of features. Additionally, @cite_4 annotate set plays in the NBA, as discussed above.
{ "cite_N": [ "@cite_0", "@cite_5", "@cite_4" ], "mid": [ "2545266867", "2016381774", "" ], "abstract": [ "To the trained-eye, experts can often identify a team based on their unique style of play due to their movement, passing and interactions. In this paper, we present a method which can accurately determine the identity of a team from spatiotemporal player tracking data. We do this by utilizing a formation descriptor which is found by minimizing the entropy of role-specific occupancy maps. We show how our approach is significantly better at identifying different teams compared to standard measures (i.e., Shots, passes etc.). We demonstrate the utility of our approach using an entire season of Prozone player tracking data from a top-tier professional soccer league.", "Abstract A procedure for forming hierarchical groups of mutually exclusive subsets, each of which has members that are maximally similar with respect to specified characteristics, is suggested for use in large-scale (n > 100) studies when a precise optimal solution for a specified number of groups is not practical. Given n sets, this procedure permits their reduction to n − 1 mutually exclusive sets by considering the union of all possible n(n − 1) 2 pairs and selecting a union having a maximal value for the functional relation, or objective function, that reflects the criterion chosen by the investigator. By repeating this process until only one group remains, the complete hierarchical structure and a quantitative estimate of the loss associated with each stage in the grouping can be obtained. A general flowchart helpful in computer programming and a numerical example are included.", "" ] }
1906.11373
2954325026
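The role-assignment step described above (mapping the five players to the five traditional positions so that downstream features track roles rather than player identities) is commonly solved as a minimum-cost bipartite matching. A minimal sketch under that assumption, with illustrative template role positions:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Illustrative template anchor points for five roles (e.g. PG/SG/SF/PF/C).
roles = np.array([[0, 0], [5, 2], [5, -2], [9, 3], [9, -3]], dtype=float)

# Observed player positions: the same formation, shuffled and perturbed.
players = roles[[3, 0, 4, 1, 2]] + 0.1

# Assign each player to the role minimizing total distance.
cost = np.linalg.norm(players[:, None, :] - roles[None, :, :], axis=2)
row, col = linear_sum_assignment(cost)  # col[i] = role assigned to player i

# Re-order positions into role order: a permutation-free feature vector.
role_ordered = players[np.argsort(col)]
print(col)  # [3 0 4 1 2] -- recovers the shuffle
```

Because features are indexed by role, two plays with the same shape but different player identities produce the same feature vector.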
Analysis of player tracking data for American football is in its infancy, since the National Football League (NFL) released its Next Gen Stats tracking data publicly for the first time in December 2018. While tracking datasets in other sports often contain detailed annotations of on-field events, annotations in the NFL's tracking data are limited. Methods for creating these annotations typically require extensive human labeling, which is difficult and expensive. We begin tackling this class of problems by creating annotations for pass coverage types by defensive backs using unsupervised learning techniques, which require no manual labeling or human oversight. We define a set of features from the NFL's tracking data that help distinguish between "zone" and "man" coverage. We use Gaussian mixture modeling and hierarchical clustering to create clusters corresponding to each group, and we assign the appropriate type of coverage to each cluster through qualitative analysis of the plays in each cluster. We find that the mixture model's "soft" cluster assignments allow for more flexibility when identifying coverage types. Our work makes possible several potential avenues of future NFL research, and we provide a basic exploration of these in this paper.
More work has been done in invasion sports on using player tracking to determine team formations. @cite_3 provides an excellent overview of this topic, as well as a detailed survey of the ways tracking data has been used in sports over the last decade; we encourage interested readers to consult that paper for more information. Most of the work in this field aims to determine how teams as a whole approach the game. In this paper, we instead attempt to determine how individual players on a team behave, in a sport where little tracking work has been done.
{ "cite_N": [ "@cite_3" ], "mid": [ "2287065228" ], "abstract": [ "Team-based invasion sports such as football, basketball, and hockey are similar in the sense that the players are able to move freely around the playing area and that player and team performance cannot be fully analysed without considering the movements and interactions of all players as a group. State-of-the-art object tracking systems now produce spatio-temporal traces of player trajectories with high definition and high frequency, and this, in turn, has facilitated a variety of research efforts, across many disciplines, to extract insight from the trajectories. We survey recent research efforts that use spatio-temporal data from team sports as input and involve non-trivial computation. This article categorises the research efforts in a coherent framework and identifies a number of open research questions." ] }
1906.11524
2954465753
We present improved results for approximating Maximum Independent Set ( @math ) in the standard LOCAL and CONGEST models of distributed computing. Let @math and @math be the number of nodes and maximum degree in the input graph, respectively. Bar- [PODC 2017] showed that there is an algorithm in the CONGEST model that finds a @math -approximation to @math in @math rounds, where @math is the running time for finding a independent set, and @math is the maximum weight of a node in the network. Whether their algorithm is randomized or deterministic depends on the @math algorithm that they use as a black-box. Our results: (1) A deterministic @math rounds algorithm for @math -approximation to @math in the CONGEST model. (2) A randomized @math rounds algorithm that finds, with high probability, an @math -approximation to @math in the CONGEST model. (3) An @math lower bound for any randomized algorithm that finds an independent set of size @math that succeeds with probability at least @math , even for the LOCAL model. This hardness result applies for graphs of maximum degree @math . One might wonder whether the same hardness result applies for low degree graphs. We rule out this possibility with our next result. (4) An @math rounds algorithm that finds an independent set of size @math in graphs with maximum degree @math , with high probability. Due to a lower bound of @math that was given by Kuhn, Moscibroda and Wattenhofer [JACM, 2016] on the number of rounds for finding a maximal independent set ( @math ) in the LOCAL model, even for randomized algorithms, our second result implies that finding an @math -approximation to @math is strictly easier than @math .
For computing an MIS, for many years the only known algorithms were the classic ones of @cite_34 @cite_1 , which take @math rounds, even in the CONGEST model. In recent breakthroughs, @cite_25 presented a LOCAL algorithm that takes @math rounds, which Ghaffari @cite_7 then improved to @math rounds. More recently, Ghaffari @cite_16 presented a CONGEST algorithm that takes @math rounds.
{ "cite_N": [ "@cite_7", "@cite_1", "@cite_16", "@cite_34", "@cite_25" ], "mid": [ "1666479227", "2100061495", "2902010279", "1964089073", "2467514673" ], "abstract": [ "The Maximal Independent Set (MIS) problem is one of the basics in the study of locality in distributed graph algorithms. This paper presents a very simple randomized algorithm for this problem providing a near-optimal local complexity, which incidentally, when combined with some known techniques, also leads to a near-optimal global complexity. Classical MIS algorithms of Luby [STOC'85] and Alon, Babai and Itai [JALG'86] provide the global complexity guarantee that, with high probability1, all nodes terminate after O(log n) rounds. In contrast, our initial focus is on the local complexity, and our main contribution is to provide a very simple algorithm guaranteeing that each particular node v terminates after O(log deg(v) + log 1 e) rounds, with probability at least 1 -- e. The degree-dependency in this bound is optimal, due to a lower bound of Kuhn, Moscibroda, and Wattenhofer [PODC'04]. Interestingly, this local complexity smoothly transitions to a global complexity: by adding techniques of Barenboim, Elkin, Pettie, and Schneider [FOCS'12; arXiv: 1202.1983v3], we2 get an MIS algorithm with a high probability global complexity of O(log Δ) + 2O([EQUATION]), where Δ denotes the maximum degree. This improves over the O(log2 Δ) + 2O([EQUATION]) result of , and gets close to the Ω(min log Δ, [EQUATION] ) lower bound of Corollaries include improved algorithms for MIS in graphs of upper-bounded arboricity, or lower-bounded girth, for Ruling Sets, for MIS in the Local Computation Algorithms (LCA) model, and a faster distributed algorithm for the Lovasz Local Lemma.", "Two basic design strategies are used to develop a very simple and fast parallel algorithms for the maximal independent set (MIS) problem. 
The first strategy consists of assigning identical copies of a simple algorithm to small local portions of the problem input. The algorithm is designed so that when the copies are executed in parallel the correct problem output is produced very quickly. A very simple Monte Carlo algorithm for the MIS problem is presented which is based upon this strategy. The second strategy is a general and powerful technique for removing randomization from algorithms. This strategy is used to convert the Monte Carlo algorithm for this MIS problem into a simple deterministic algorithm with the same parallel running time.", "", "Abstract A simple parallel randomized algorithm to find a maximal independent set in a graph G = ( V , E ) on n vertices is presented. Its expected running time on a concurrent-read concurrent-write PRAM with O (| E | d max ) processors is O (log n ), where d max denotes the maximum degree. On an exclusive-read exclusive-write PRAM with O (| E |) processors the algorithm runs in O (log 2 n ). Previously, an O (log 4 n ) deterministic algorithm was given by Karp and Wigderson for the EREW-PRAM model. This was recently (independently of our work) improved to O (log 2 n ) by M. Luby. In both cases randomized algorithms depending on pairwise independent choices were turned into deterministic algorithms. We comment on how randomized combinatorial algorithms whose analysis only depends on d -wise rather than fully independent random choices (for some constant d ) can be converted into deterministic algorithms. We apply a technique due to A. Joffe (1974) and obtain deterministic construction in fast parallel time of various combinatorial objects whose existence follows from probabilistic arguments.", "Symmetry-breaking problems are among the most well studied in the field of distributed computing and yet the most fundamental questions about their complexity remain open. 
In this article we work in the LOCAL model (where the input graph and underlying distributed network are identical) and study the randomized complexity of four fundamental symmetry-breaking problems on graphs: computing MISs (maximal independent sets), maximal matchings, vertex colorings, and ruling sets. A small sample of our results includes the following: —An MIS algorithm running in O(log2D Δ L 2√log n, and comes close to the Ω(flog Δ log log Δ lower bound of Kuhn, Moscibroda, and Wattenhofer. —A maximal matching algorithm running in O(log Δ + log 4log n) time. This is the first significant improvement to the 1986 algorithm of Israeli and Itai. Moreover, its dependence on Δ is nearly optimal. —A (Δ + 1)-coloring algorithm requiring O(log Δ + 2o(√log log n) time, improving on an O(log Δ + √log n)-time algorithm of Schneider and Wattenhofer. —A method for reducing symmetry-breaking problems in low arboricity degeneracy graphs to low-degree graphs. (Roughly speaking, the arboricity or degeneracy of a graph bounds the density of any subgraph.) Corollaries of this reduction include an O(√log n)-time maximal matching algorithm for graphs with arboricity up to 2√log n and an O(log 2 3n)-time MIS algorithm for graphs with arboricity up to 2(log n)1 3. Each of our algorithms is based on a simple but powerful technique for reducing a randomized symmetry-breaking task to a corresponding deterministic one on a poly(log n)-size graph." ] }
1906.11524
2954465753
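The classic Luby-style MIS algorithm cited above can be sketched as a round-synchronous simulation. This is an illustrative centralized simulation of the distributed rounds, not a CONGEST implementation:

```python
import random

def luby_mis(adj, seed=0):
    """Luby's randomized MIS: each round, every active node draws a random
    rank and joins the MIS iff its rank beats all active neighbors'; MIS
    nodes and their neighbors then drop out. Finishes in O(log n) rounds
    with high probability."""
    rng = random.Random(seed)
    active = set(adj)
    mis = set()
    while active:
        r = {v: rng.random() for v in active}
        winners = {v for v in active
                   if all(r[v] < r[u] for u in adj[v] if u in active)}
        mis |= winners
        active -= winners
        active -= {u for v in winners for u in adj[v]}
    return mis

# 5-cycle: every maximal independent set has size 2.
adj = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
mis = luby_mis(adj)
```

The key local property is that a node's decision in a round depends only on its own rank and its neighbors' ranks, which is what makes the algorithm distributed.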
We present improved results for approximating Maximum Independent Set ( @math ) in the standard LOCAL and CONGEST models of distributed computing. Let @math and @math be the number of nodes and maximum degree in the input graph, respectively. Bar- [PODC 2017] showed that there is an algorithm in the CONGEST model that finds a @math -approximation to @math in @math rounds, where @math is the running time for finding a independent set, and @math is the maximum weight of a node in the network. Whether their algorithm is randomized or deterministic depends on the @math algorithm that they use as a black-box. Our results: (1) A deterministic @math rounds algorithm for @math -approximation to @math in the CONGEST model. (2) A randomized @math rounds algorithm that finds, with high probability, an @math -approximation to @math in the CONGEST model. (3) An @math lower bound for any randomized algorithm that finds an independent set of size @math that succeeds with probability at least @math , even for the LOCAL model. This hardness result applies for graphs of maximum degree @math . One might wonder whether the same hardness result applies for low degree graphs. We rule out this possibility with our next result. (4) An @math rounds algorithm that finds an independent set of size @math in graphs with maximum degree @math , with high probability. Due to a lower bound of @math that was given by Kuhn, Moscibroda and Wattenhofer [JACM, 2016] on the number of rounds for finding a maximal independent set ( @math ) in the LOCAL model, even for randomized algorithms, our second result implies that finding an @math -approximation to @math is strictly easier than @math .
On the other hand, @cite_8 showed a lower bound of @math , even for the LOCAL model. All the algorithms mentioned above for finding an @math are randomized and succeed with high probability. As for deterministic algorithms, @cite_28 gives a @math -round algorithm in the LOCAL model using network decomposition, and @cite_38 gives a coloring-based @math -round algorithm in the CONGEST model.
{ "cite_N": [ "@cite_28", "@cite_38", "@cite_8" ], "mid": [ "1998643836", "2136185479", "1957963525" ], "abstract": [ "In this paper, we improve the bounds for computing a network decomposition distributively and deterministically. Our algorithm computes an (n?(n),n?(n))-decomposition innO(?(n))time, whereformula. As a corollary we obtain improved deterministic bounds for distributively computing several graph structures such as maximal independent sets and ?-vertex colorings. We also show that the class of graphs G whose maximum degree isnO(?(n))where ?(n)=1 log lognis complete for the task of computing a near-optimal decomposition, i.e., a (logn, logn)-decomposition, in polylog(n) time. This is a corollary of a more general characterization, which pinpoints the weak points of existing network decomposition algorithms. Completeness is to be intended in the following sense: if we have an algorithmAthat computes a near-optimal decomposition in polylog(n) time for graphs inG, then we can compute a near-optimal decomposition in polylog(n) time for all graphs.", "The distributed @math -coloring problem is one of the most fundamental and well-studied problems in distributed algorithms. Starting with the work of Cole and Vishkin in 1986, a long line of gradually improving algorithms has been published. The state-of-the-art running time, prior to our work, is @math , due to Kuhn and Wattenhofer [Proceedings of the @math th Annual ACM Symposium on Principles of Distributed Computing, Denver, CO, 2006, pp. 7--15]. Linial [Proceedings of the @math th Annual IEEE Symposium on Foundation of Computer Science, Los Angeles, CA, 1987, pp. 331--335] proved a lower bound of @math for the problem, and Szegedy and Vishwanathan [Proceedings of the 25th Annual ACM Symposium on Theory of Computing, San Diego, CA, 1993, pp. 
201--207] provided a heuristic argument that shows that algorithms from a wide family of locally iterative algorithms are unlikely to achieve a running time smaller than @math . We present a de...", "The question of what can be computed, and how efficiently, is at the core of computer science. Not surprisingly, in distributed systems and networking research, an equally fundamental question is what can be computed in a distributed fashion. More precisely, if nodes of a network must base their decision on information in their local neighborhood only, how well can they compute or approximate a global (optimization) problem? In this paper we give the first polylogarithmic lower bound on such local computation for (optimization) problems including minimum vertex cover, minimum (connected) dominating set, maximum matching, maximal independent set, and maximal matching. In addition, we present a new distributed algorithm for solving general covering and packing linear programs. For some problems this algorithm is tight with the lower bounds, whereas for others it is a distributed approximation scheme. Together, our lower and upper bounds establish the local computability and approximability of a large class of problems, characterizing how much local information is required to solve these tasks." ] }
1906.11524
2954465753
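The coloring-based deterministic approach mentioned above rests on the standard reduction from a proper coloring to an MIS: color classes are processed one per round, and in each round every not-yet-dominated node of the current color joins. A sequential sketch of that reduction:

```python
def mis_from_coloring(adj, color):
    """Standard coloring-to-MIS reduction: process color classes in
    increasing order; every node of the current color with no neighbor
    already in the MIS joins. Nodes of one class are non-adjacent (the
    coloring is proper), so each class can act in a single distributed
    round; the whole reduction takes (number of colors) rounds."""
    mis = set()
    for c in sorted(set(color.values())):
        for v in (v for v in adj if color[v] == c):
            if not any(u in mis for u in adj[v]):
                mis.add(v)
    return mis

# Path on 4 nodes, properly 2-colored.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
color = {0: 0, 1: 1, 2: 0, 3: 1}
mis = mis_from_coloring(adj, color)
print(mis)  # {0, 2} -- the color-0 nodes join first and dominate the rest
```

The output is maximal because every node is considered exactly once, and it is independent because same-color nodes are non-adjacent and later nodes skip dominated positions.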
We present improved results for approximating Maximum Independent Set ( @math ) in the standard LOCAL and CONGEST models of distributed computing. Let @math and @math be the number of nodes and maximum degree in the input graph, respectively. Bar- [PODC 2017] showed that there is an algorithm in the CONGEST model that finds a @math -approximation to @math in @math rounds, where @math is the running time for finding a independent set, and @math is the maximum weight of a node in the network. Whether their algorithm is randomized or deterministic depends on the @math algorithm that they use as a black-box. Our results: (1) A deterministic @math rounds algorithm for @math -approximation to @math in the CONGEST model. (2) A randomized @math rounds algorithm that finds, with high probability, an @math -approximation to @math in the CONGEST model. (3) An @math lower bound for any randomized algorithm that finds an independent set of size @math that succeeds with probability at least @math , even for the LOCAL model. This hardness result applies for graphs of maximum degree @math . One might wonder whether the same hardness result applies for low degree graphs. We rule out this possibility with our next result. (4) An @math rounds algorithm that finds an independent set of size @math in graphs with maximum degree @math , with high probability. Due to a lower bound of @math that was given by Kuhn, Moscibroda and Wattenhofer [JACM, 2016] on the number of rounds for finding a maximal independent set ( @math ) in the LOCAL model, even for randomized algorithms, our second result implies that finding an @math -approximation to @math is strictly easier than @math .
Recently, @cite_36 , showed that there is an algorithm for the LOCAL model that finds a @math -approximation to @math in @math rounds, for a constant @math . The results in @cite_13 @cite_17 give a lower bound of @math rounds for any deterministic algorithm returning an independent set of size at least @math on a cycle. Furthermore, @cite_47 provide a deterministic @math algorithm, and a randomized @math rounds algorithm, for @math -approximations in planar graphs.
{ "cite_N": [ "@cite_36", "@cite_47", "@cite_13", "@cite_17" ], "mid": [ "2552279664", "1869515244", "1502920553", "" ], "abstract": [ "This paper is centered on the complexity of graph problems in the well-studied LOCAL model of distributed computing, introduced by Linial [FOCS '87]. It is widely known that for many of the classic distributed graph problems (including maximal independent set (MIS) and (Δ+1)-vertex coloring), the randomized complexity is at most polylogarithmic in the size n of the network, while the best deterministic complexity is typically 2O(√logn). Understanding and potentially narrowing down this exponential gap is considered to be one of the central long-standing open questions in the area of distributed graph algorithms. We investigate the problem by introducing a complexity-theoretic framework that allows us to shed some light on the role of randomness in the LOCAL model. We define the SLOCAL model as a sequential version of the LOCAL model. Our framework allows us to prove completeness results with respect to the class of problems which can be solved efficiently in the SLOCAL model, implying that if any of the complete problems can be solved deterministically in logn rounds in the LOCAL model, we can deterministically solve all efficient SLOCAL-problems (including MIS and (Δ+1)-coloring) in logn rounds in the LOCAL model. Perhaps most surprisingly, we show that a rather rudimentary looking graph coloring problem is complete in the above sense: Color the nodes of a graph with colors red and blue such that each node of sufficiently large polylogarithmic degree has at least one neighbor of each color. The problem admits a trivial zero-round randomized solution. The result can be viewed as showing that the only obstacle to getting efficient determinstic algorithms in the LOCAL model is an efficient algorithm to approximately round fractional values into integer values. 
In addition, our formal framework also allows us to develop polylogarithmic-time randomized distributed algorithms in a simpler way. As a result, we provide a polylog-time distributed approximation scheme for arbitrary distributed covering and packing integer linear programs.", "We give deterministic distributed algorithms that given i¾?> 0 find in a planar graph G, (1±i¾?)-approximations of a maximum independent set, a maximum matching, and a minimum dominating set. The algorithms run in O(log*|G|) rounds. In addition, we prove that no faster deterministic approximation is possible and show that if randomization is allowed it is possible to beat the lower bound for deterministic algorithms.", "In this paper we extend the lower bound technique by Linial for local coloring and maximal independent sets. We show that constant approximations to maximum independent sets on a ring require at least log-star time. More generally, the product of approximation quality and running time cannot be less than log-star. Using a generalized ring topology, we gain identical lower bounds for approximations to minimum dominating sets. Since our generalized ring topology is contained in a number of geometric graphs such as the unit disk graph, our bounds directly apply as lower bounds for quite a few algorithmic problems in wireless networking. Having in mind these and other results about local approximations of maximum independent sets and minimum dominating sets, one might think that the former are always at least as difficult to obtain as the latter. Conversely, we show that graphs exist, where a maximum independent set can be determined without any communication, while finding even an approximation to a minimum dominating set is as hard as in general graphs.", "" ] }
1906.11524
2954465753
We present improved results for approximating Maximum Independent Set ( @math ) in the standard LOCAL and CONGEST models of distributed computing. Let @math and @math be the number of nodes and maximum degree in the input graph, respectively. Bar- [PODC 2017] showed that there is an algorithm in the CONGEST model that finds a @math -approximation to @math in @math rounds, where @math is the running time for finding a independent set, and @math is the maximum weight of a node in the network. Whether their algorithm is randomized or deterministic depends on the @math algorithm that they use as a black-box. Our results: (1) A deterministic @math rounds algorithm for @math -approximation to @math in the CONGEST model. (2) A randomized @math rounds algorithm that finds, with high probability, an @math -approximation to @math in the CONGEST model. (3) An @math lower bound for any randomized algorithm that finds an independent set of size @math that succeeds with probability at least @math , even for the LOCAL model. This hardness result applies for graphs of maximum degree @math . One might wonder whether the same hardness result applies for low degree graphs. We rule out this possibility with our next result. (4) An @math rounds algorithm that finds an independent set of size @math in graphs with maximum degree @math , with high probability. Due to a lower bound of @math that was given by Kuhn, Moscibroda and Wattenhofer [JACM, 2016] on the number of rounds for finding a maximal independent set ( @math ) in the LOCAL model, even for randomized algorithms, our second result implies that finding an @math -approximation to @math is strictly easier than @math .
In @cite_46 , an @math -round @math randomized algorithm for an expected @math -approximation in the unweighted case is presented, along with a matching lower bound. Recently, @cite_6 presented, among other results, a single-round algorithm for unweighted graphs in the Beeping model that achieves an approximation ratio of @math , where @math is the Caro-Wei bound on @math . The results in @cite_45 provide a simple algorithm that achieves an expected @math -approximation for the weighted MaxIS in a single communication round in the CONGEST model.
{ "cite_N": [ "@cite_46", "@cite_45", "@cite_6" ], "mid": [ "2503706701", "2963301485", "2599155613" ], "abstract": [ "We show that the first phase of the Linial-Saks network decomposition algorithm gives a randomized distributed O(ne)-approximation algorithm for the maximum independent set problem that operates in O(1 e) rounds, and we give a matching lower bound that holds even for bipartite graphs.", "We bound the performance guarantees that follow from Turan-like bounds for unweighted and weighted independent sets in bounded-degree graphs. In particular, a randomized approach of Boppana forms a simple 1-round distributed algorithm, as well as a streaming and preemptive online algorithm. We show it gives a tight (( +1) 2 )-approximation in unweighted graphs of maximum degree ( ), which is best possible for 1-round distributed algorithms. For weighted graphs, it gives only a (( +1) )-approximation, but a simple modification results in an asymptotic expected (0.529 ( +1) )-approximation. This compares with a recent, more complex ( )-approximation [6], which holds deterministically.", "Independent sets play a central role in distributed algorithmics. We examine here the minimal requirements for computing non-trivial independent sets. In particular, we focus on algorithms that operate in a single communication round. A classic result of Linial shows that a constant number of rounds does not suffice to compute a maximal independent set. We are therefore interested in the size of the solution that can be computed, especially in comparison to the optimal. Our main result is a randomized one-round algorithm that achieves poly-logarithmic approximation on graphs of polynomially bounded-independence. Specifically, we show that the algorithm achieves the Caro-Wei bound (an extension of the Turan bound for independent sets) in general graphs up to a constant factor, and that the Caro-Wei bound yields a poly-logarithmic approximation on bounded-independence graphs. 
The algorithm uses only a single bit message and operates in a beeping model, where a node receives only the disjunction of the bits transmitted by its neighbors. We give limitation results that show that these are the minimal requirements for obtaining non-trivial solutions. In particular, a sublinear approximation cannot be obtained in a single round on general graphs, nor when nodes cannot both transmit and receive messages. We also show that our analysis of the Caro-Wei bound on polynomially bounded-independence graphs is tight, and that the poly-logarithmic approximation factor does not extend to @math O(1)-claw free graphs." ] }
1906.11524
2954465753
We present improved results for approximating Maximum Independent Set ( @math ) in the standard LOCAL and CONGEST models of distributed computing. Let @math and @math be the number of nodes and maximum degree in the input graph, respectively. Bar- [PODC 2017] showed that there is an algorithm in the CONGEST model that finds a @math -approximation to @math in @math rounds, where @math is the running time for finding a independent set, and @math is the maximum weight of a node in the network. Whether their algorithm is randomized or deterministic depends on the @math algorithm that they use as a black-box. Our results: (1) A deterministic @math rounds algorithm for @math -approximation to @math in the CONGEST model. (2) A randomized @math rounds algorithm that finds, with high probability, an @math -approximation to @math in the CONGEST model. (3) An @math lower bound for any randomized algorithm that finds an independent set of size @math that succeeds with probability at least @math , even for the LOCAL model. This hardness result applies for graphs of maximum degree @math . One might wonder whether the same hardness result applies for low degree graphs. We rule out this possibility with our next result. (4) An @math rounds algorithm that finds an independent set of size @math in graphs with maximum degree @math , with high probability. Due to a lower bound of @math that was given by Kuhn, Moscibroda and Wattenhofer [JACM, 2016] on the number of rounds for finding a maximal independent set ( @math ) in the LOCAL model, even for randomized algorithms, our second result implies that finding an @math -approximation to @math is strictly easier than @math .
In the sequential setting, an excellent summary of the known results is given by @cite_11 , which we overview in what follows. For general graphs, the best known algorithm achieves an @math -approximation factor @cite_18 . Assuming @math , @cite_29 shows that there is no efficient @math -approximation algorithm for any constant @math .
{ "cite_N": [ "@cite_18", "@cite_29", "@cite_11" ], "mid": [ "2074465184", "2081254453", "2272045681" ], "abstract": [ "We show an algorithm that finds cliques of size (log n log log n)2 whenever a graph has a clique of size at least n (log n)b for an arbitrary constant b. This leads to an algorithm that approximates max clique within a factor of O(n(log log n)2 (log n)3), which matches the best approximation ratio known for the chromatic number. The previously best approximation ratio known for max clique was O(n (log n)2).", "", "We consider the maximum independent set problem on sparse graphs with maximum degree d. The best known result for the problem is an SDP based O(d log log d log d) approximation due to Halperin. It is also known that no o(d log2 d) approximation exists assuming the Unique Games Conjecture. We show the following two results: (i) The natural LP formulation for the problem strengthened by O(log4(d)) levels of the mixed-hierarchy has an integrality gap of O(d log2 d), where O(·) ignores some log log d factors. However, our proof is non-constructive, in particular it uses an entropy based approach due to Shearer, and does not give a O(d log2 d) approximation algorithm with sub-exponential running time. (ii) We give an O(d log d) approximation based on polylog(d)-levels of the mixed hierarchy that runs in nO(1) exp(logO(1) d) time, improving upon Halperin's bound by a modest log log d factor. Our algorithm is based on combining Halperin's approach together with an idea used by Ajtai, Erdos, Komlos and Szemeredi to show that Kr-free, degree-d graphs have independent sets of size Ωr(n log log d d)." ] }
1906.11461
2955398858
Blockchain is a promising technology for establishing trust in IoT networks, where network nodes do not necessarily trust each other. Cryptographic hash links and distributed consensus mechanisms ensure that the data stored on an immutable blockchain can not be altered or deleted. However, blockchain mechanisms do not guarantee the trustworthiness of data at the origin. We propose a layered architecture for improving the end-to-end trust that can be applied to a diverse range of blockchain-based IoT applications. Our architecture evaluates the trustworthiness of sensor observations at the data layer and adapts block verification at the blockchain layer through the proposed data trust and gateway reputation modules. We present the performance evaluation of the data trust module using a simulated indoor target localization and the gateway reputation module using an end-to-end blockchain implementation, together with a qualitative security analysis for the architecture.
Distributed trust methods have been proposed @cite_4 whereby multiple observer nodes within spatial or temporal proximity independently corroborate nearby observations, yet these methods have so far been considered independently of the auditability, transparency, and trust mechanisms of the blockchain. A survey of IoT trust management mechanisms and their objectives is given in @cite_16 . More recent works focus on integrating trust and reputation management mechanisms into blockchain-based IoT applications. The structure of blockchain-based applications requires the decentralization of trust and reputation management. Furthermore, since blockchain technology has been applied to a diverse range of IoT applications with different network topologies, rules of participation and governance, and interactions between nodes, current trust and reputation management proposals are application-specific. These proposals can be categorized by the layer at which the reputation and trust mechanisms operate: IoT data capture or blockchain node interactions.
{ "cite_N": [ "@cite_16", "@cite_4" ], "mid": [ "2074709832", "2104927807" ], "abstract": [ "Internet of Things (IoT) is going to create a world where physical objects are seamlessly integrated into information networks in order to provide advanced and intelligent services for human-beings. Trust management plays an important role in IoT for reliable data fusion and mining, qualified services with context-awareness, and enhanced user privacy and information security. It helps people overcome perceptions of uncertainty and risk and engages in user acceptance and consumption on IoT services and applications. However, current literature still lacks a comprehensive study on trust management in IoT. In this paper, we investigate the properties of trust, propose objectives of IoT trust management, and provide a survey on the current literature advances towards trustworthy IoT. Furthermore, we discuss unsolved issues, specify research challenges and indicate future research trends by proposing a research model for holistic trust management in IoT.", "Internet of Things (IoT) is characterized by heterogeneous technologies, which concur to the provisioning of innovative services in various application domains. In this scenario, the satisfaction of security and privacy requirements plays a fundamental role. Such requirements include data confidentiality and authentication, access control within the IoT network, privacy and trust among users and things, and the enforcement of security and privacy policies. Traditional security countermeasures cannot be directly applied to IoT technologies due to the different standards and communication stacks involved. Moreover, the high number of interconnected devices arises scalability issues; therefore a flexible infrastructure is needed able to deal with security threats in such a dynamic environment. 
In this survey we present the main research challenges and the existing solutions in the field of IoT security, identifying open issues, and suggesting some hints for future research." ] }
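The reputation modules surveyed in the record above are not specified in detail here, but a very common building block for such scores is the expected value of a Beta distribution over accumulated positive and negative evidence. A minimal sketch under that assumption (the function and its parameters are illustrative, not any cited paper's formulation):

```python
def beta_reputation(positive, negative):
    """Expected trustworthiness under a Beta(positive + 1, negative + 1)
    posterior -- a standard building block for reputation systems.
    Returns a score in (0, 1); 0.5 when there is no evidence yet."""
    return (positive + 1) / (positive + negative + 2)
```

The +1/+2 terms encode a uniform prior, so a node with no history starts at 0.5 rather than at an extreme value.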
1906.11461
2955398858
Blockchain is a promising technology for establishing trust in IoT networks, where network nodes do not necessarily trust each other. Cryptographic hash links and distributed consensus mechanisms ensure that the data stored on an immutable blockchain can not be altered or deleted. However, blockchain mechanisms do not guarantee the trustworthiness of data at the origin. We propose a layered architecture for improving the end-to-end trust that can be applied to a diverse range of blockchain-based IoT applications. Our architecture evaluates the trustworthiness of sensor observations at the data layer and adapts block verification at the blockchain layer through the proposed data trust and gateway reputation modules. We present the performance evaluation of the data trust module using a simulated indoor target localization and the gateway reputation module using an end-to-end blockchain implementation, together with a qualitative security analysis for the architecture.
@cite_3 proposed the blockchain-based Anonymous Reputation System (BARS), which uses direct historical interactions and indirect opinions about vehicles to establish a trusted communication environment for vehicular applications. Their system determines the trust level of broadcast messages based on the reputation scores of the vehicles. @cite_11 proposed a reputation-based high-quality data sharing scheme for vehicular networks using a consortium blockchain, smart contracts, and a subjective logic model, which relies on interaction frequency, event timeliness, and trajectory similarity for reputation management. In @cite_12 , the authors proposed a distributed trust management scheme that calculates the credibility of exchanged messages based on the reputation values of observers in blockchain-based vehicular networks.
{ "cite_N": [ "@cite_12", "@cite_3", "@cite_11" ], "mid": [ "2885171760", "2887486920", "2898082692" ], "abstract": [ "A Vehicular Ad hoc NETwork (VANET) is a self-organized network, formed by vehicles and some fixed equipment on roads called Roads Side Units (RSUs). Vehicular communications are expected to share different kinds of information between vehicles and infrastructure. Because of these specifications, securing VANET constitutes a difficult and challenging task that has attracted the interest of many researchers. In a previous work, we proposed a Clustering Mechanism for VANET (CMV) and its inherit Trust management scheme (TCMV) to ensure security of communication among vehicles. CMV organizes vehicles into clusters and elected Cluster Heads (CHs), and allows the clusters maintenance while dealing with velocity. On the other side, TCMV computes the credibility of the message by CH using the reputation of vehicles. However, we found that the value of credibility of the message by CH is not enough to verify if an exchanged message is correct or no. In order to provide a secured vehicle communication and to build reliance communication among vehicles, we propose a distributive trust management scheme for VANET to verify the correctness of the message based on the controlling of the vehicle'behavior by a miner and the credibility of message by a CH.", "The public key infrastructure-based authentication protocol provides basic security services for the vehicular ad hoc networks (VANETs). However, trust and privacy are still open issues due to the unique characteristics of VANETs. It is crucial to prevent internal vehicles from broadcasting forged messages while simultaneously preserving the privacy of vehicles against the tracking attacks. In this paper, we propose a blockchain-based anonymous reputation system (BARS) to establish a privacy-preserving trust model for VANETs. 
The certificate and revocation transparency is implemented efficiently with the proofs of presence and absence based on the extended blockchain technology. The public keys are used as pseudonyms in communications without any information about real identities for conditional anonymity. In order to prevent the distribution of forged messages, a reputation evaluation algorithm is presented relying on both direct historical interactions and indirect opinions about vehicles. A set of experiments is conducted to evaluate BARS in terms of security, validity, and performance, and the results show that BARS is able to establish a trust model with transparency, conditional anonymity, efficiency, and robustness for VANETs.", "The drastically increasing volume and the growing trend on the types of data have brought in the possibility of realizing advanced applications such as enhanced driving safety, and have enriched existing vehicular services through data sharing among vehicles and data analysis. Due to limited resources with vehicles, vehicular edge computing and networks (VECONs) i.e., the integration of mobile edge computing and vehicular networks, can provide powerful computing and massive storage resources. However, road side units that primarily presume the role of vehicular edge computing servers cannot be fully trusted, which may lead to serious security and privacy challenges for such integrated platforms despite their promising potential and benefits. We exploit consortium blockchain and smart contract technologies to achieve secure data storage and sharing in vehicular edge networks. These technologies efficiently prevent data sharing without authorization. In addition, we propose a reputation-based data sharing scheme to ensure high-quality data sharing among vehicles. A three-weight subjective logic model is utilized for precisely managing reputation of the vehicles. 
Numerical results based on a real dataset show that our schemes achieve reasonable efficiency and high-level of security for data sharing in VECONs." ] }
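The subjective logic model mentioned in the record above represents trust as an opinion with belief, disbelief, and uncertainty components derived from positive and negative observations. A minimal single-weight sketch (the cited scheme uses a three-weight variant; this simplified form, with illustrative names, is for orientation only):

```python
def sl_opinion(r, s, base_rate=0.5, W=2.0):
    """Subjective-logic opinion from r positive and s negative
    observations, with non-informative prior weight W.
    Returns (belief, disbelief, uncertainty, expected_probability)."""
    total = r + s + W
    b, d, u = r / total, s / total, W / total
    return b, d, u, b + base_rate * u
```

With no observations the opinion is pure uncertainty and the expected probability falls back to the base rate; evidence shifts mass from uncertainty into belief or disbelief.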
1906.11461
2955398858
Blockchain is a promising technology for establishing trust in IoT networks, where network nodes do not necessarily trust each other. Cryptographic hash links and distributed consensus mechanisms ensure that the data stored on an immutable blockchain can not be altered or deleted. However, blockchain mechanisms do not guarantee the trustworthiness of data at the origin. We propose a layered architecture for improving the end-to-end trust that can be applied to a diverse range of blockchain-based IoT applications. Our architecture evaluates the trustworthiness of sensor observations at the data layer and adapts block verification at the blockchain layer through the proposed data trust and gateway reputation modules. We present the performance evaluation of the data trust module using a simulated indoor target localization and the gateway reputation module using an end-to-end blockchain implementation, together with a qualitative security analysis for the architecture.
In @cite_14 , the authors introduced a Delegated Proof-of-Stake consensus scheme for secure data sharing in the blockchain-enabled Internet of Vehicles. They used reputation-based voting to select the miners, where the reputation of the miner candidates is calculated using a multi-weight subjective logic scheme. They also proposed a contract-theory-based mechanism to incentivize the standby miners to participate in block verification. In @cite_7 , a Lightweight Scalable Blockchain (LSB) for IoT was proposed with an IoT-friendly consensus algorithm that incorporates a distributed trust method into the block verification mechanism. The proposed architecture has two tiers: overlay and smart home networks. Based on direct and indirect evidence, the overlay network nodes build trust, which is used to reduce the number of transactions to be validated in a new block.
{ "cite_N": [ "@cite_14", "@cite_7" ], "mid": [ "2890103652", "2771747728" ], "abstract": [ "In the Internet of Vehicles (IoV), data sharing among vehicles is critical for improving driving safety and enhancing vehicular services. To ensure security and traceability of data sharing, existing studies utilize efficient delegated proof-of-stake consensus scheme as hard security solutions to establish blockchain-enabled IoV (BIoV). However, as the miners are selected from miner candidates by stake-based voting, defending against voting collusion between the candidates and compromised high-stake vehicles becomes challenging. To address the challenge, in this paper, we propose a two-stage soft security enhancement solution: 1) miner selection and 2) block verification. In the first stage, we design a reputation-based voting scheme to ensure secure miner selection. This scheme evaluates candidates’ reputation using both past interactions and recommended opinions from other vehicles. The candidates with high reputation are selected to be active miners and standby miners. In the second stage, to prevent internal collusion among active miners, a newly generated block is further verified and audited by standby miners. To incentivize the participation of the standby miners in block verification, we adopt the contract theory to model the interactions between active miners and standby miners, where block verification security and delay are taken into consideration. Numerical results based on a real-world dataset confirm the security and efficiency of our schemes for data sharing in BIoV.", "BlockChain (BC) has attracted tremendous attention due to its immutable nature and the associated security and privacy benefits. BC has the potential to overcome security and privacy challenges of Internet of Things (IoT). However, BC is computationally expensive, has limited scalability and incurs significant bandwidth overheads and delays which are not suited to the IoT context. 
We propose a tiered Lightweight Scalable BC (LSB) that is optimized for IoT requirements. We explore LSB in a smart home setting as a representative example for broader IoT applications. Low resource devices in a smart home benefit from a centralized manager that establishes shared keys for communication and processes all incoming and outgoing requests. LSB achieves decentralization by forming an overlay network where high resource devices jointly manage a public BC that ensures end-to-end privacy and security. The overlay is organized as distinct clusters to reduce overheads and the cluster heads are responsible for managing the public BC. LSB incorporates several optimizations which include algorithms for lightweight consensus, distributed trust and throughput management. Qualitative arguments demonstrate that LSB is resilient to several security attacks. Extensive simulations show that LSB decreases packet overhead and delay and increases BC scalability compared to relevant baselines." ] }
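Independently of the consensus and trust layers discussed in the record above, the tamper evidence of any such blockchain rests on cryptographic hash links: each block stores the hash of its predecessor, so altering an earlier block invalidates every later link. A minimal sketch (block fields and helper names are illustrative, not LSB's wire format):

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 digest over the block's fields
    (sorted keys so the serialization is canonical)."""
    payload = json.dumps(
        {k: block[k] for k in ("index", "prev_hash", "data")},
        sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def chain_valid(chain):
    """A chain is tamper-evident because every block stores the hash
    of its predecessor; re-check those links here."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))
```

This is exactly the property the source abstract invokes ("data stored on an immutable blockchain cannot be altered or deleted"); what it does *not* give you is trustworthiness of the data before it entered the chain, which is the gap the layered architecture targets.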
1906.11437
2956147616
Semantic segmentation has achieved significant progress but is still challenging due to the complex scene, object occlusion, and so on. Some research works have attempted to use extra information such as depth information to help RGB based semantic segmentation. However, extra information is usually unavailable for the test images. Inspired by learning using privileged information, in this paper, we only leverage the depth information of training images as privileged information in the training stage. Specifically, we rely on depth information to identify the hard pixels which are difficult to classify, by using our proposed Depth Prediction Error (DPE) and Depth-dependent Segmentation Error (DSE). By paying more attention to the identified hard pixels, our approach achieves the state-of-the-art results on two benchmark datasets and even outperforms the methods which use depth information of test images.
Deep learning methods @cite_33 @cite_16 @cite_49 @cite_5 @cite_42 @cite_29 @cite_45 @cite_50 @cite_37 have shown impressive results in semantic segmentation. Most of them are based on the encoder-decoder architecture first proposed in Fully Convolutional Networks (FCN) @cite_41 . Extensions of FCN can be grouped into two directions: capturing contextual information at multiple scales and designing more sophisticated decoders. In the first direction, some works @cite_22 @cite_31 combine feature maps generated by different dilated convolutions and pooling operations. For example, PSPNet @cite_31 adopts spatial pyramid pooling, which pools the feature maps into different sizes to detect objects of different scales. DeepLab v3 and v3+ @cite_22 @cite_56 propose Atrous Spatial Pyramid Pooling, which uses dilated convolutions to retain a large receptive field. In the second direction, some works @cite_35 @cite_51 @cite_39 construct better decoder modules to fuse mid-level and high-level features. For example, RefineNet-152 @cite_35 is a multi-path refinement network that fuses features at multiple levels of the encoder and decoder. However, all the above methods are RGB-based segmentation methods, while our approach can utilize depth information to facilitate semantic segmentation.
{ "cite_N": [ "@cite_35", "@cite_37", "@cite_33", "@cite_22", "@cite_41", "@cite_29", "@cite_42", "@cite_56", "@cite_39", "@cite_45", "@cite_49", "@cite_50", "@cite_5", "@cite_31", "@cite_16", "@cite_51" ], "mid": [ "2563705555", "2964269771", "", "2630837129", "1903029394", "", "", "2787091153", "", "", "", "", "", "2952596663", "", "" ], "abstract": [ "Recently, very deep convolutional neural networks (CNNs) have shown outstanding performance in object recognition and have also been the first choice for dense classification problems such as semantic segmentation. However, repeated subsampling operations like pooling or convolution striding in deep CNNs lead to a significant decrease in the initial image resolution. Here, we present RefineNet, a generic multi-path refinement network that explicitly exploits all the information available along the down-sampling process to enable high-resolution prediction using long-range residual connections. In this way, the deeper layers that capture high-level semantic features can be directly refined using fine-grained features from earlier convolutions. The individual components of RefineNet employ residual connections following the identity mapping mindset, which allows for effective end-to-end training. Further, we introduce chained residual pooling, which captures rich background context in an efficient manner. We carry out comprehensive experiments and set new state-of-the-art results on seven public datasets. In particular, we achieve an intersection-over-union score of 83.4 on the challenging PASCAL VOC 2012 dataset, which is the best reported result to date.", "We introduce a novel loss max-pooling concept for handling imbalanced training data distributions, applicable as alternative loss layer in the context of deep neural networks for semantic image segmentation. 
Most real-world semantic segmentation datasets exhibit long tail distributions with few object categories comprising the majority of data and consequently biasing the classifiers towards them. Our method adaptively re-weights the contributions of each pixel based on their observed losses, targeting under-performing classification results as often encountered for under-represented object classes. Our approach goes beyond conventional cost-sensitive learning attempts through adaptive considerations that allow us to indirectly address both, inter- and intra-class imbalances. We provide a theoretical justification of our approach, complementary to experimental analyses on benchmark datasets. In our experiments on the Cityscapes and Pascal VOC 2012 segmentation datasets we find consistently improved results, demonstrating the efficacy of our approach.", "", "In this work, we revisit atrous convolution, a powerful tool to explicitly adjust filter's field-of-view as well as control the resolution of feature responses computed by Deep Convolutional Neural Networks, in the application of semantic image segmentation. To handle the problem of segmenting objects at multiple scales, we design modules which employ atrous convolution in cascade or in parallel to capture multi-scale context by adopting multiple atrous rates. Furthermore, we propose to augment our previously proposed Atrous Spatial Pyramid Pooling module, which probes convolutional features at multiple scales, with image-level features encoding global context and further boost performance. We also elaborate on implementation details and share our experience on training our system. The proposed DeepLabv3' system significantly improves over our previous DeepLab versions without DenseCRF post-processing and attains comparable performance with other state-of-art models on the PASCAL VOC 2012 semantic image segmentation benchmark.", "Convolutional networks are powerful visual models that yield hierarchies of features. 
We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet [20], the VGG net [31], and GoogLeNet [32]) into fully convolutional networks and transfer their learned representations by fine-tuning [3] to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes less than one fifth of a second for a typical image.", "", "", "Spatial pyramid pooling module or encode-decoder structure are used in deep neural networks for semantic segmentation task. The former networks are able to encode multi-scale contextual information by probing the incoming features with filters or pooling operations at multiple rates and multiple effective fields-of-view, while the latter networks can capture sharper object boundaries by gradually recovering the spatial information. In this work, we propose to combine the advantages from both methods. Specifically, our proposed model, DeepLabv3+, extends DeepLabv3 by adding a simple yet effective decoder module to refine the segmentation results especially along object boundaries. 
We further explore the Xception model and apply the depthwise separable convolution to both Atrous Spatial Pyramid Pooling and decoder modules, resulting in a faster and stronger encoder-decoder network. We demonstrate the effectiveness of the proposed model on PASCAL VOC 2012 and Cityscapes datasets, achieving the test set performance of 89.0 and 82.1 without any post-processing. Our paper is accompanied with a publicly available reference implementation of the proposed models in Tensorflow at this https URL .", "", "", "", "", "", "Scene parsing is challenging for unrestricted open vocabulary and diverse scenes. In this paper, we exploit the capability of global context information by different-region-based context aggregation through our pyramid pooling module together with the proposed pyramid scene parsing network (PSPNet). Our global prior representation is effective to produce good quality results on the scene parsing task, while PSPNet provides a superior framework for pixel-level prediction tasks. The proposed approach achieves state-of-the-art performance on various datasets. It came first in ImageNet scene parsing challenge 2016, PASCAL VOC 2012 benchmark and Cityscapes benchmark. A single PSPNet yields new record of mIoU accuracy 85.4 on PASCAL VOC 2012 and accuracy 80.2 on Cityscapes.", "", "" ] }
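The spatial pyramid pooling idea behind PSPNet, as summarized above, can be illustrated in a few lines: average-pool a feature map into grids of several sizes and concatenate the pooled values, so coarse bins capture global context and fine bins capture local detail. A single-channel pure-Python sketch (bin sizes and the function name are illustrative; real implementations operate on multi-channel tensors and re-upsample the pooled maps):

```python
def pyramid_pool(fmap, bin_sizes=(1, 2, 4)):
    """PSPNet-style spatial pyramid pooling sketch, single channel:
    average-pool the H x W map into an n x n grid for each n in
    bin_sizes and concatenate the pooled values into one vector."""
    H, W = len(fmap), len(fmap[0])
    pooled = []
    for n in bin_sizes:
        for i in range(n):
            for j in range(n):
                r0, r1 = i * H // n, (i + 1) * H // n
                c0, c1 = j * W // n, (j + 1) * W // n
                cells = [fmap[r][c] for r in range(r0, r1)
                                    for c in range(c0, c1)]
                pooled.append(sum(cells) / len(cells))
    return pooled
```

The n = 1 bin is a global average (whole-scene context), while larger n preserves progressively finer spatial layout.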
1906.11437
2956147616
Semantic segmentation has achieved significant progress but is still challenging due to the complex scene, object occlusion, and so on. Some research works have attempted to use extra information such as depth information to help RGB based semantic segmentation. However, extra information is usually unavailable for the test images. Inspired by learning using privileged information, in this paper, we only leverage the depth information of training images as privileged information in the training stage. Specifically, we rely on depth information to identify the hard pixels which are difficult to classify, by using our proposed Depth Prediction Error (DPE) and Depth-dependent Segmentation Error (DSE). By paying more attention to the identified hard pixels, our approach achieves the state-of-the-art results on two benchmark datasets and even outperforms the methods which use depth information of test images.
Recently, privileged information has also been integrated into deep learning methods @cite_13 @cite_3 @cite_1 @cite_23 @cite_19 to distill knowledge or control the training process. More recently, SPIGAN @cite_15 proposed to use privileged depth information in semantic segmentation. However, its main contribution is exploiting depth information to assist with domain adaptation, which adapts the synthetic image domain to the real image domain, so the motivation and solution of their method are intrinsically different from ours. Distinct from all the above methods, ours is the first work to use depth information as privileged information to mine hard pixels for semantic segmentation.
{ "cite_N": [ "@cite_1", "@cite_3", "@cite_19", "@cite_23", "@cite_15", "@cite_13" ], "mid": [ "", "1821462560", "2592165076", "2963368804", "2896923958", "2253986341" ], "abstract": [ "", "A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.", "Multi-instance multi-label (MIML) learning has many interesting applications in computer visions, including multi-object recognition and automatic image tagging. In these applications, additional information such as bounding-boxes, image captions and descriptions is often available during training phrase, which is referred as privileged information (PI). However, as existing works on learning using PI only consider instance-level PI (privileged instances), they fail to make use of bag-level PI (privileged bags) available in MIML learning. 
Therefore, in this paper, we propose a two-stream fully convolutional network, named MIML-FCN+, unified by a novel PI loss to solve the problem of MIML learning with privileged bags. Compared to the previous works on PI, the proposed MIML-FCN+ utilizes the readily available privileged bags, instead of hard-to-obtain privileged instances, making the system more general and practical in real world applications. As the proposed PI loss is convex and SGD-compatible and the framework itself is a fully convolutional network, MIML FCN+ can be easily integrated with state-of-the-art deep learning networks. Moreover, the flexibility of convolutional layers allows us to exploit structured correlations among instances to facilitate more effective training and testing. Experimental results on three benchmark datasets demonstrate the effectiveness of the proposed MIML-FCN+, outperforming state-of-the-art methods in the application of multi-object recognition.", "Unlike machines, humans learn through rapid, Abstract model-building. The role of a teacher is not simply to hammer home right or wrong answers, but rather to provide intuitive comments, comparisons, and explanations to a pupil. This is what the Learning Under Privileged Information (LUPI) paradigm endeavors to model by utilizing extra knowledge only available during training. We propose a new LUPI algorithm specifically designed for Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). We propose to use a heteroscedastic dropout (i.e. dropout with a varying variance) and make the variance of the dropout a function of privileged information. Intuitively, this corresponds to using the privileged information to control the uncertainty of the model output. We perform experiments using CNNs and RNNs for the tasks of image classification and machine translation. 
Our method significantly increases the sample efficiency during learning, resulting in higher accuracy with a large margin when the number of training examples is limited. We also theoretically justify the gains in sample efficiency by providing a generalization error bound decreasing with O(1 n), where n is the number of training examples, in an oracle case.", "Deep Learning for Computer Vision depends mainly on the source of supervision.Photo-realistic simulators can generate large-scale automatically labeled syntheticdata, but introduce a domain gap negatively impacting performance. We propose anew unsupervised domain adaptation algorithm, called SPIGAN, relying on Sim-ulator Privileged Information (PI) and Generative Adversarial Networks (GAN).We use internal data from the simulator as PI during the training of a target tasknetwork. We experimentally evaluate our approach on semantic segmentation. Wetrain the networks on real-world Cityscapes and Vistas datasets, using only unla-beled real-world images and synthetic labeled data with z-buffer (depth) PI fromthe SYNTHIA dataset. Our method improves over no adaptation and state-of-the-art unsupervised domain adaptation techniques.", "Distillation (, 2015) and privileged information (Vapnik & Izmailov, 2015) are two techniques that enable machines to learn from other machines. This paper unifies these two techniques into generalized distillation, a framework to learn from multiple machines and data representations. We provide theoretical and causal insight about the inner workings of generalized distillation, extend it to unsupervised, semisupervised and multitask learning scenarios, and illustrate its efficacy on a variety of numerical simulations on both synthetic and real-world data." ] }
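Knowledge distillation, one of the privileged-information techniques surveyed in the record above, trains a student on the teacher's temperature-softened output distribution. A minimal plain-Python sketch of the distillation term, without autograd (names are illustrative):

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; T > 1 softens the distribution."""
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Cross-entropy between the teacher's and the student's softened
    distributions, scaled by T**2 so gradient magnitudes stay
    comparable across temperatures (Hinton-style distillation)."""
    p = softmax(teacher_logits, T)  # soft targets from the teacher
    q = softmax(student_logits, T)
    return -T * T * sum(pi * math.log(qi) for pi, qi in zip(p, q))
```

The soft targets carry the teacher's "dark knowledge" about relative class similarities, which hard one-hot labels discard.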
1906.11437
2956147616
Semantic segmentation has achieved significant progress but is still challenging due to the complex scene, object occlusion, and so on. Some research works have attempted to use extra information such as depth information to help RGB based semantic segmentation. However, extra information is usually unavailable for the test images. Inspired by learning using privileged information, in this paper, we only leverage the depth information of training images as privileged information in the training stage. Specifically, we rely on depth information to identify the hard pixels which are difficult to classify, by using our proposed Depth Prediction Error (DPE) and Depth-dependent Segmentation Error (DSE). By paying more attention to the identified hard pixels, our approach achieves the state-of-the-art results on two benchmark datasets and even outperforms the methods which use depth information of test images.
Under the framework of multi-task learning, Eigen @cite_59 predicts depth, surface normals, and segmentation in one unified network. Hoffman @cite_10 proposed learning an extra branch to hallucinate mid-level depth features. Although our method also employs an additional branch to predict depth under the multi-task learning framework, our focus is on using the Depth Prediction Error (DPE) to mine hard pixels, which contributes more to the segmentation task than merely predicting depth (see ).
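As a minimal sketch of the hard-pixel idea above, assuming a simple absolute-error DPE and a linear re-weighting (the paper's exact DPE/DSE definitions differ, and `depth_weighted_seg_loss` is an illustrative name), depth prediction error can be used to up-weight the segmentation loss of pixels whose depth the network predicts poorly:

```python
import numpy as np

def depth_weighted_seg_loss(seg_nll, depth_pred, depth_gt, gamma=1.0):
    """Weight per-pixel segmentation loss by depth prediction error (DPE).

    Pixels with large depth error are treated as "hard" and up-weighted;
    seg_nll is the per-pixel segmentation negative log-likelihood.
    Illustrative form only -- not the paper's exact DPE/DSE.
    """
    dpe = np.abs(depth_pred - depth_gt)           # per-pixel depth error
    w = 1.0 + gamma * dpe / (dpe.mean() + 1e-8)   # hard pixels get weight > 1
    return (w * seg_nll).mean()
```

Note that depth ground truth is needed only at training time, consistent with the privileged-information setting.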
{ "cite_N": [ "@cite_10", "@cite_59" ], "mid": [ "2463402750", "1905829557" ], "abstract": [ "We present a modality hallucination architecture for training an RGB object detection model which incorporates depth side information at training time. Our convolutional hallucination network learns a new and complementary RGB image representation which is taught to mimic convolutional mid-level features from a depth network. At test time images are processed jointly through the RGB and hallucination networks to produce improved detection performance. Thus, our method transfers information commonly extracted from depth training data to a network which can extract that information from the RGB counterpart. We present results on the standard NYUDv2 dataset and report improvement on the RGB detection task.", "In this paper we address three different computer vision tasks using a single basic architecture: depth prediction, surface normal estimation, and semantic labeling. We use a multiscale convolutional network that is able to adapt easily to each task using only small modifications, regressing from the input image to the output map directly. Our method progressively refines predictions using a sequence of scales, and captures many image details without any superpixels or low-level segmentation. We achieve state-of-the-art performance on benchmarks for all three tasks." ] }
1906.11301
2804152211
Public debate forums provide a common platform for exchanging opinions on a topic of interest. While recent studies in natural language processing (NLP) have provided empirical evidence that the language of the debaters and their patterns of interaction play a key role in changing the mind of a reader, research in psychology has shown that prior beliefs can affect our interpretation of an argument and could therefore constitute a competing alternative explanation for resistance to changing one's stance. To study the actual effect of language use vs. prior beliefs on persuasion, we provide a new dataset and propose a controlled setting that takes into consideration two reader level factors: political and religious ideology. We find that prior beliefs affected by these reader level factors play a more important role than language use effects and argue that it is important to account for them in NLP studies of persuasion.
Although most recent work on argumentation has focused on identifying the structure of arguments and extracting argument components @cite_7 @cite_36 @cite_2 @cite_3 @cite_11 @cite_22 @cite_29 @cite_30 @cite_17 @cite_4 @cite_31 @cite_9 , more relevant to our work is research on identifying the characteristics of persuasive text, e.g., what distinguishes persuasive from non-persuasive text @cite_6 @cite_27 @cite_0 @cite_23 @cite_37 @cite_19 @cite_13 . Similar to these studies, our work aims to understand the characteristics of persuasive text, but it also considers the effect of people's prior beliefs.
{ "cite_N": [ "@cite_30", "@cite_22", "@cite_36", "@cite_29", "@cite_3", "@cite_2", "@cite_4", "@cite_23", "@cite_17", "@cite_37", "@cite_7", "@cite_6", "@cite_19", "@cite_27", "@cite_9", "@cite_0", "@cite_31", "@cite_13", "@cite_11" ], "mid": [ "2250287365", "2251931307", "", "2144232471", "", "2076139272", "2251661596", "2562522356", "2251478708", "2518510348", "2251625298", "2271245358", "2508030421", "2342255891", "2251911493", "2577700560", "2609722168", "2760124296", "2119707623" ], "abstract": [ "The ability to analyze the adequacy of supporting information is necessary for determining the strength of an argument.1 This is especially the case for online user comments, which often consist of arguments lacking proper substantiation and reasoning. Thus, we develop a framework for automatically classifying each proposition as UNVERIFIABLE, VERIFIABLE NONEXPERIENTIAL, or VERIFIABLE EXPERIENTIAL2, where the appropriate type of support is reason, evidence, and optional evidence, respectively3. Once the existing support for propositions are identified, this classification can provide an estimate of how adequately the arguments have been supported. We build a goldstandard dataset of 9,476 sentences and clauses from 1,047 comments submitted to an eRulemaking platform and find that Support Vector Machine (SVM) classifiers trained with n-grams and additional features capturing the verifiability and experientiality exhibit statistically significant improvement over the unigram baseline, achieving a macro-averaged F1 of 68.99 .", "In this paper, we present a novel approach to model arguments, their components and relations in persuasive essays in English. We propose an annotation scheme that includes the annotation of claims and premises as well as support and attack relations for capturing the structure of argumentative discourse. We further conduct a manual annotation study with three annotators on 90 persuasive essays. 
The obtained inter-rater agreement of αU =0 .72 for argument components and α =0 .81 for argumentative relations indicates that the proposed annotation scheme successfully guides annotators to substantial agreement. The final corpus and the annotation guidelines are freely available to encourage future research in argument recognition.", "", "Argumentation is the process by which arguments are constructed and handled. Argumentation constitutes a major component of human intelligence. The ability to engage in argumentation is essential for humans to understand new problems, to perform scientific reasoning, to express, to clarify and to defend their opinions in their daily lives. Argumentation mining aims to detect the arguments presented in a text document, the relations between them and the internal structure of each individual argument. In this paper we analyse the main research questions when dealing with argumentation mining and the different methods we have studied and developed in order to successfully confront the challenges of argumentation mining in legal texts.", "", "In written dialog, discourse participants need to justify claims they make, to convince the reader the claim is true and or relevant to the discourse. This paper presents a new task (with an associated corpus), namely detecting such justifications. We investigate the nature of such justifications, and observe that the justifications themselves often contain discourse structure. We therefore develop a method to detect the existence of certain types of discourse relations, which helps us classify whether a segment is a justification or not. Our task is novel, and our work is novel in that it uses a large set of connectives (which we call indicators), and in that it uses a large set of discourse relations, without choosing among them.", "We introduce a new approach to argumentation mining that we applied to a parallel German English corpus of short texts annotated with argumentation structure. 
We focus on structure prediction, which we break into a number of subtasks: relation identification, central claim identification, role classification, and function classification. Our new model jointly predicts different aspects of the structure by combining the different subtask predictions in the edge weights of an evidence graph; we then apply a standard MST decoding algorithm. This model not only outperforms two reasonable baselines and two datadriven models of global argument structure for the difficult subtask of relation identification, but also improves the results for central claim identification and function classification and it compares favorably to a complex mstparser pipeline.", "This article tackles a new challenging task in computational argumentation. Given a pair of two arguments to a certain controversial topic, we aim to directly assess qualitative properties of the arguments in order to explain why one argument is more convincing than the other one. We approach this task in a fully empirical manner by annotating 26k explanations written in natural language. These explanations describe convincingness of arguments in the given argument pair, such as their strengths or flaws. We create a new crowd-sourced corpus containing 9,111 argument pairs, multi-labeled with 17 classes, which was cleaned and curated by employing several strict quality measures. We propose two tasks on this data set, namely (1) predicting the full label distribution and (2) classifying types of flaws in less convincing arguments. Our experiments with feature-rich SVM learners and Bidirectional LSTM neural networks with convolution and attention mechanism reveal that such a novel fine-grained analysis of Web argument convincingness is a very challenging task. We release the new UKPConvArg2 corpus and software under permissive licenses to the research community.", "Argument mining studies in natural language text often use lexical (e.g. n-grams) and syntactic (e.g. 
grammatical production rules) features with all possible values. In prior work on a corpus of academic essays, we demonstrated that such large and sparse feature spaces can cause difficulty for feature selection and proposed a method to design a more compact feature space. The proposed feature design is based on post-processing a topic model to extract argument and domain words. In this paper we investigate the generality of this approach, by applying our methodology to a new corpus of persuasive essays. Our experiments show that replacing n-grams and syntactic rules with features and constraints using extracted argument and domain words significantly improves argument mining performance for persuasive essays.", "We propose a new task in the field of computational argumentation in which we investigate qualitative properties of Web arguments, namely their convincingness. We cast the problem as relation classification, where a pair of arguments having the same stance to the same prompt is judged. We annotate a large datasets of 16k pairs of arguments over 32 topics and investigate whether the relation “A is more convincing than B” exhibits properties of total ordering; these findings are used as global constraints for cleaning the crowdsourced data. We propose two tasks: (1) predicting which argument from an argument pair is more convincing and (2) ranking all arguments to the topic based on their convincingness. We experiment with feature-rich SVM and bidirectional LSTM and obtain 0.76-0.78 accuracy and 0.35-0.40 Spearman’s correlation in a cross-topic evaluation. 
We release the newly created corpus UKPConvArg1 and the experimental software under open licenses.", "While recent years have seen a surge of interest in automated essay grading, including work on grading essays with respect to particular dimensions such as prompt adherence, coherence, and technical quality, there has been relatively little work on grading the essay dimension of argument strength, which is arguably the most important aspect of argumentative essays. We introduce a new corpus of argumentative student essays annotated with argument strength scores and propose a supervised, feature-rich approach to automatically scoring the essays along this dimension. Our approach significantly outperforms a baseline that relies solely on heuristically applied sentence argument function labels by up to 16.1 .", "Changing someone's opinion is arguably one of the most important challenges of social interaction. The underlying process proves difficult to study: it is hard to know how someone's opinions are formed and whether and how someone's views shift. Fortunately, ChangeMyView, an active community on Reddit, provides a platform where users present their own opinions and reasoning, invite others to contest them, and acknowledge when the ensuing discussions change their original views. In this work, we study these interactions to understand the mechanisms behind persuasion. We find that persuasive arguments are characterized by interesting patterns of interaction dynamics, such as participant entry-order and degree of back-and-forth exchange. Furthermore, by comparing similar counterarguments to the same opinion, we show that language factors play an essential role. In particular, the interplay between the language of the opinion holder and that of the counterargument provides highly predictive cues of persuasiveness. 
Finally, since even in this favorable setting people may not be persuaded, we investigate the problem of determining whether someone's opinion is susceptible to being changed at all. For this more difficult task, we show that stylistic choices in how the opinion is expressed carry predictive power.", "Many social media platforms offer a mechanism for readers to react to comments, both positively and negatively, which in aggregate can be thought of as community endorsement. This paper addresses the problem of predicting community endorsement in online discussions, leveraging both the participant response structure and the text of the comment. The different types of features are integrated in a neural network that uses a novel architecture to learn latent modes of discussion structure that perform as well as deep neural networks but are more interpretable. In addition, the latent modes can be used to weight text features thereby improving prediction accuracy.", "Public debates are a common platform for presenting and juxtaposing diverging views on important issues. In this work we propose a methodology for tracking how ideas flow between participants throughout a debate. We use this approach in a case study of Oxford-style debates---a competitive format where the winner is determined by audience votes---and show how the outcome of a debate depends on aspects of conversational flow. In particular, we find that winners tend to make better use of a debate's interactive component than losers, by actively pursuing their opponents' points rather than promoting their own ideas over the course of the conversation.", "Determining when conversational participants agree or disagree is instrumental for broader conversational analysis; it is necessary, for example, in deciding when a group has reached consensus. In this paper, we describe three main contributions. 
We show how different aspects of conversational structure can be used to detect agreement and disagreement in discussion forums. In particular, we exploit information about meta-thread structure and accommodation between participants. Second, we demonstrate the impact of the features using 3-way classification, including sentences expressing disagreement, agreement or neither. Finally, we show how to use a naturally occurring data set with labels derived from the sides that participants choose in debates on createdebate.com. The resulting new agreement corpus, Agreement by Create Debaters (ABCD) is 25 times larger than any prior corpus. We demonstrate that using this data enables us to outperform the same system trained on prior existing in-domain smaller annotated datasets.", "", "", "", "Argumentation schemes are structures or templates for various kinds of arguments. Given the text of an argument with premises and conclusion identified, we classify it as an instance of one of five common schemes, using features specific to each scheme. We achieve accuracies of 63--91 in one-against-others classification and 80--94 in pairwise classification (baseline = 50 in both cases)." ] }
1906.11417
2954160693
We focus on minimizing nonconvex finite-sum functions that typically arise in machine learning problems. In an attempt to solve this problem, the adaptive cubic regularized Newton method has shown its strong global convergence guarantees and ability to escape from strict saddle points. This method uses a trust region-like scheme to determine if an iteration is successful or not, and updates only when it is successful. In this paper, we suggest an algorithm combining negative curvature with the adaptive cubic regularized Newton method to update even at unsuccessful iterations. We call this new method Stochastic Adaptive cubic regularization with Negative Curvature (SANC). Unlike the previous method, in order to attain stochastic gradient and Hessian estimators, the SANC algorithm uses independent sets of data points of consistent size over all iterations. It makes the SANC algorithm more practical to apply for solving large-scale machine learning problems. To the best of our knowledge, this is the first approach that combines the negative curvature method with the adaptive cubic regularized Newton method. Finally, we provide experimental results including neural networks problems supporting the efficiency of our method.
In the stochastic optimization setting, second-order methods have been studied for decades. Martens @cite_11 @cite_1 developed the Hessian-free optimization method to train neural networks, including deep auto-encoders and recurrent neural networks. These papers argue that using a Gauss-Newton approximation instead of a Hessian approximation is more effective for training neural networks. He also employs a damping coefficient to maintain the positive definiteness of the curvature approximation, which helps restrict the step size. Consequently, with a damping coefficient @math , a linear system @math is solved at every iterate by the conjugate gradient algorithm to obtain the update step @math . Using a damping coefficient is closely related to adopting an adaptive cubic regularization term, but adaptive cubic regularization comes with a theoretical adaptive rule that guarantees convergence, whereas the Newton method with a damping coefficient does not.
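The damped linear solve described above can be sketched as follows; `damped_newton_step` is an illustrative name, and in a true Hessian-free implementation only curvature-matrix-vector products are available (a dense matrix stands in for them here):

```python
import numpy as np

def damped_newton_step(B, g, lam, tol=1e-10, max_iter=100):
    """Solve (B + lam*I) p = -g with conjugate gradient.

    B is a (Gauss-)Newton curvature matrix, g the gradient, lam the
    damping coefficient; only products B @ v are needed, which is what
    makes the method "Hessian-free".
    """
    A = lambda v: B @ v + lam * v    # damped curvature-vector product
    p = np.zeros_like(g)
    r = -g - A(p)                    # residual of (B + lam*I) p = -g
    d = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ad = A(d)
        alpha = rs / (d @ Ad)
        p += alpha * d
        r -= alpha * Ad
        rs_new = r @ r
        if rs_new < tol:
            break
        d = r + (rs_new / rs) * d
        rs = rs_new
    return p

# On a small positive-definite quadratic the damped CG step matches
# the direct solve of the damped linear system.
B = np.array([[3.0, 1.0], [1.0, 2.0]])
g = np.array([1.0, -1.0])
p = damped_newton_step(B, g, lam=0.5)
```

Larger damping shrinks the step toward a scaled gradient direction, which is the sense in which the coefficient "restricts the step size".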
{ "cite_N": [ "@cite_1", "@cite_11" ], "mid": [ "1408639475", "196761320" ], "abstract": [ "In this work we resolve the long-outstanding problem of how to effectively train recurrent neural networks (RNNs) on complex and difficult sequence modeling problems which may contain long-term data dependencies. Utilizing recent advances in the Hessian-free optimization approach (Martens, 2010), together with a novel damping scheme, we successfully train RNNs on two sets of challenging problems. First, a collection of pathological synthetic datasets which are known to be impossible for standard optimization approaches (due to their extremely long-term dependencies), and second, on three natural and highly complex real-world sequence datasets where we find that our method significantly outperforms the previous state-of-the-art method for training neural sequence models: the Long Short-term Memory approach of Hochreiter and Schmidhuber (1997). Additionally, we offer a new interpretation of the generalized Gauss-Newton matrix of Schraudolph (2002) which is used within the HF approach of Martens.", "We develop a 2nd-order optimization method based on the \"Hessian-free\" approach, and apply it to training deep auto-encoders. Without using pre-training, we obtain results superior to those reported by Hinton & Salakhutdinov (2006) on the same tasks they considered. Our method is practical, easy to use, scales nicely to very large datasets, and isn't limited in applicability to auto-encoders, or any specific model class. We also discuss the issue of \"pathological curvature\" as a possible explanation for the difficulty of deep-learning and how 2nd-order optimization, and our method in particular, effectively deals with it." ] }
1906.11417
2954160693
We focus on minimizing nonconvex finite-sum functions that typically arise in machine learning problems. In an attempt to solve this problem, the adaptive cubic regularized Newton method has shown its strong global convergence guarantees and ability to escape from strict saddle points. This method uses a trust region-like scheme to determine if an iteration is successful or not, and updates only when it is successful. In this paper, we suggest an algorithm combining negative curvature with the adaptive cubic regularized Newton method to update even at unsuccessful iterations. We call this new method Stochastic Adaptive cubic regularization with Negative Curvature (SANC). Unlike the previous method, in order to attain stochastic gradient and Hessian estimators, the SANC algorithm uses independent sets of data points of consistent size over all iterations. It makes the SANC algorithm more practical to apply for solving large-scale machine learning problems. To the best of our knowledge, this is the first approach that combines the negative curvature method with the adaptive cubic regularized Newton method. Finally, we provide experimental results including neural networks problems supporting the efficiency of our method.
To circumvent solving a linear system, many researchers have considered quasi-Newton methods @cite_16 @cite_2 as an alternative. In stochastic optimization, these works apply the L-BFGS update formula with stochastic gradient and Hessian estimators. However, since proving convergence requires the Hessian to be positive definite, they cannot provide convergence guarantees for nonconvex optimization problems. Experience has shown that some performance gains can be achieved in machine learning applications, but the full potential of stochastic quasi-Newton schemes is not yet known @cite_10 .
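For concreteness, the L-BFGS update these stochastic schemes build on can be sketched with the standard two-loop recursion (deterministic, illustrative form; stochastic variants substitute subsampled curvature pairs):

```python
import numpy as np

def lbfgs_direction(grad, s_list, y_list):
    """Two-loop recursion: return -H @ grad, where H approximates the
    inverse Hessian from curvature pairs s_k = x_{k+1} - x_k and
    y_k = grad_{k+1} - grad_k (lists ordered oldest to newest)."""
    q = grad.copy()
    alphas = []
    for s, y in reversed(list(zip(s_list, y_list))):
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        q -= a * y
        alphas.append((a, rho, s, y))
    if s_list:                                   # initial scaling H0 = gamma*I
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)
    for a, rho, s, y in reversed(alphas):        # oldest to newest
        b = rho * (y @ q)
        q += (a - b) * s
    return -q
```

The recursion needs only vector operations, avoiding any linear solve, but it relies on each `y @ s > 0` (curvature positivity), which is exactly the condition that fails to hold globally in nonconvex problems.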
{ "cite_N": [ "@cite_16", "@cite_10", "@cite_2" ], "mid": [ "2964303576", "2963433607", "2963941964" ], "abstract": [ "In this paper we study stochastic quasi-Newton methods for nonconvex stochastic optimization, where we assume that noisy information about the gradients of the objective function is available via a stochastic first-order oracle ( @math ). We propose a general framework for such methods, for which we prove almost sure convergence to stationary points and analyze its worst-case iteration complexity. When a randomly chosen iterate is returned as the output of such an algorithm, we prove that in the worst case, the @math -calls complexity is @math to ensure that the expectation of the squared norm of the gradient is smaller than the given accuracy tolerance @math . We also propose a specific algorithm, namely, a stochastic damped limited-memory BFGS (SdLBFGS) method, that falls under the proposed framework. Moreover, we incorporate the stochastic variance reduced gradient variance reduction technique into the proposed SdLBFGS method and analyze its @math -calls compl...", "This paper provides a review and commentary on the past, present, and future of numerical optimization algorithms in the context of machine learning applications. Through case studies on text classification and the training of deep neural networks, we discuss how optimization problems arise in machine learning and what makes them challenging. A major theme of our study is that large-scale machine learning represents a distinctive setting in which the stochastic gradient (SG) method has traditionally played a central role while conventional gradient-based nonlinear optimization techniques typically falter. Based on this viewpoint, we present a comprehensive theory of a straightforward, yet versatile SG algorithm, discuss its practical behavior, and highlight opportunities for designing algorithms with improved performance. 
This leads to a discussion about the next generation of optimization methods for large-scale machine learning, including an investigation of two main streams of research on techniques th...", "The question of how to incorporate curvature information into stochastic approximation methods is challenging. The direct application of classical quasi-Newton updating techniques for deterministic optimization leads to noisy curvature estimates that have harmful effects on the robustness of the iteration. In this paper, we propose a stochastic quasi-Newton method that is efficient, robust, and scalable. It employs the classical BFGS update formula in its limited memory form, and is based on the observation that it is beneficial to collect curvature information pointwise, and at spaced intervals. One way to do this is through (subsampled) Hessian-vector products. This technique differs from the classical approach that would compute differences of gradients at every iteration, and where controlling the quality of the curvature estimates can be difficult. We present numerical results on problems arising in machine learning that suggest that the proposed method shows much promise." ] }
1906.11417
2954160693
We focus on minimizing nonconvex finite-sum functions that typically arise in machine learning problems. In an attempt to solve this problem, the adaptive cubic regularized Newton method has shown its strong global convergence guarantees and ability to escape from strict saddle points. This method uses a trust region-like scheme to determine if an iteration is successful or not, and updates only when it is successful. In this paper, we suggest an algorithm combining negative curvature with the adaptive cubic regularized Newton method to update even at unsuccessful iterations. We call this new method Stochastic Adaptive cubic regularization with Negative Curvature (SANC). Unlike the previous method, in order to attain stochastic gradient and Hessian estimators, the SANC algorithm uses independent sets of data points of consistent size over all iterations. It makes the SANC algorithm more practical to apply for solving large-scale machine learning problems. To the best of our knowledge, this is the first approach that combines the negative curvature method with the adaptive cubic regularized Newton method. Finally, we provide experimental results including neural networks problems supporting the efficiency of our method.
Their theoretical results on cubic regularization paved the way for the subsequent ARC algorithm @cite_17 @cite_14 , which uses an adaptive coefficient in the cubic regularization term. This coefficient varies according to the discrepancy between the local cubic model value and the actual function value at the next iterate, and adjusting it is analogous to adjusting the radius in trust-region methods. The authors also proved global worst-case complexity bounds.
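The trust-region-like adjustment of the cubic coefficient can be sketched as follows (the η and γ constants and the function name are illustrative, not the values from @cite_17):

```python
def update_cubic_coefficient(f_x, f_trial, m_trial, sigma,
                             eta1=0.1, eta2=0.9,
                             gamma_up=2.0, gamma_down=0.5):
    """One ARC-style acceptance test and sigma update.

    rho compares the actual decrease f_x - f_trial with the decrease
    predicted by the local cubic model, f_x - m_trial; sigma plays the
    role of an inverse trust-region radius.
    """
    rho = (f_x - f_trial) / max(f_x - m_trial, 1e-16)
    if rho >= eta1:                      # successful: accept the step
        accepted = True
        if rho >= eta2:                  # very successful: relax regularization
            sigma = gamma_down * sigma
    else:                                # unsuccessful: reject, inflate sigma
        accepted = False
        sigma = gamma_up * sigma
    return accepted, sigma
```

Rejecting the step while inflating σ is precisely the "update only when successful" behavior that the SANC abstract contrasts with its negative-curvature update at unsuccessful iterations.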
{ "cite_N": [ "@cite_14", "@cite_17" ], "mid": [ "1994974865", "2156005216" ], "abstract": [ "An Adaptive Regularisation framework using Cubics (ARC) was proposed for unconstrained optimization and analysed in Cartis, Gould and Toint (Part I, Math Program, doi: 10.1007 s10107-009-0286-5, 2009), generalizing at the same time an unpublished method due to Griewank (Technical Report NA 12, 1981, DAMTP, University of Cambridge), an algorithm by Nesterov and Polyak (Math Program 108(1):177–205, 2006) and a proposal by Weiser, Deuflhard and Erdmann (Optim Methods Softw 22(3):413–431, 2007). In this companion paper, we further the analysis by providing worst-case global iteration complexity bounds for ARC and a second-order variant to achieve approximate first-order, and for the latter second-order, criticality of the iterates. In particular, the second-order ARC algorithm requires at most @math iterations, or equivalently, function- and gradient-evaluations, to drive the norm of the gradient of the objective below the desired accuracy @math , and @math iterations, to reach approximate nonnegative curvature in a subspace. The orders of these bounds match those proved for Algorithm 3.3 of Nesterov and Polyak which minimizes the cubic model globally on each iteration. Our approach is more general in that it allows the cubic model to be solved only approximately and may employ approximate Hessians.", "An Adaptive Regularisation algorithm using Cubics (ARC) is proposed for unconstrained optimization, generalizing at the same time an unpublished method due to Griewank (Technical Report NA 12, 1981, DAMTP, University of Cambridge), an algorithm by Nesterov and Polyak (Math Program 108(1):177–205, 2006) and a proposal by (Optim Methods Softw 22(3):413–431, 2007). 
At each iteration of our approach, an approximate global minimizer of a local cubic regularisation of the objective function is determined, and this ensures a significant improvement in the objective so long as the Hessian of the objective is locally Lipschitz continuous. The new method uses an adaptive estimation of the local Lipschitz constant and approximations to the global model-minimizer which remain computationally-viable even for large-scale problems. We show that the excellent global and local convergence properties obtained by Nesterov and Polyak are retained, and sometimes extended to a wider class of problems, by our ARC approach. Preliminary numerical experiments with small-scale test problems from the CUTEr set show encouraging performance of the ARC algorithm when compared to a basic trust-region implementation." ] }
1906.11417
2954160693
We focus on minimizing nonconvex finite-sum functions that typically arise in machine learning problems. In an attempt to solve this problem, the adaptive cubic regularized Newton method has shown its strong global convergence guarantees and ability to escape from strict saddle points. This method uses a trust region-like scheme to determine if an iteration is successful or not, and updates only when it is successful. In this paper, we suggest an algorithm combining negative curvature with the adaptive cubic regularized Newton method to update even at unsuccessful iterations. We call this new method Stochastic Adaptive cubic regularization with Negative Curvature (SANC). Unlike the previous method, in order to attain stochastic gradient and Hessian estimators, the SANC algorithm uses independent sets of data points of consistent size over all iterations. It makes the SANC algorithm more practical to apply for solving large-scale machine learning problems. To the best of our knowledge, this is the first approach that combines the negative curvature method with the adaptive cubic regularized Newton method. Finally, we provide experimental results including neural networks problems supporting the efficiency of our method.
More recently, Kohler and Lucchi @cite_29 proposed a stochastic ARC variant for nonconvex finite-sum objectives. They proved lower bounds on the cardinalities of the subsampled sets used to compute the stochastic gradient and Hessian estimators. In their method, however, the subset sizes increase over iterations, which hinders training large-scale neural networks.
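A fixed-size subsampled gradient estimator, the kind of consistent-size estimator discussed here, can be sketched as follows (illustrative; the subsampled Hessian estimator is built analogously from per-example Hessians or Hessian-vector products):

```python
import numpy as np

def subsampled_gradient(per_example_grads, batch_size, rng):
    """Average the gradients of a uniformly drawn subsample of fixed size.

    per_example_grads: (n, d) array, one gradient per data point.
    Drawing without replacement with a batch_size that stays constant
    over iterations keeps the per-iteration cost flat, in contrast to
    schemes whose subset sizes grow.
    """
    n = per_example_grads.shape[0]
    idx = rng.choice(n, size=batch_size, replace=False)
    return per_example_grads[idx].mean(axis=0)
```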
{ "cite_N": [ "@cite_29" ], "mid": [ "2614389101" ], "abstract": [ "We consider the minimization of non-convex functions that typically arise in machine learning. Specifically, we focus our attention on a variant of trust region methods known as cubic regularization. This approach is particularly attractive because it escapes strict saddle points and it provides stronger convergence guarantees than first- and second-order as well as classical trust region methods. However, it suffers from a high computational complexity that makes it impractical for large-scale learning. Here, we propose a novel method that uses sub-sampling to lower this computational cost. By the use of concentration inequalities we provide a sampling scheme that gives sufficiently accurate gradient and Hessian approximations to retain the strong global and local convergence guarantees of cubically regularized methods. To the best of our knowledge this is the first work that gives global convergence guarantees for a sub-sampled variant of cubic regularization on non-convex functions. Furthermore, we provide experimental results supporting our theory." ] }
1906.11417
2954160693
We focus on minimizing nonconvex finite-sum functions that typically arise in machine learning problems. In an attempt to solve this problem, the adaptive cubic regularized Newton method has shown its strong global convergence guarantees and ability to escape from strict saddle points. This method uses a trust region-like scheme to determine if an iteration is successful or not, and updates only when it is successful. In this paper, we suggest an algorithm combining negative curvature with the adaptive cubic regularized Newton method to update even at unsuccessful iterations. We call this new method Stochastic Adaptive cubic regularization with Negative Curvature (SANC). Unlike the previous method, in order to attain stochastic gradient and Hessian estimators, the SANC algorithm uses independent sets of data points of consistent size over all iterations. It makes the SANC algorithm more practical to apply for solving large-scale machine learning problems. To the best of our knowledge, this is the first approach that combines the negative curvature method with the adaptive cubic regularized Newton method. Finally, we provide experimental results including neural networks problems supporting the efficiency of our method.
Wang @cite_16 @cite_30 proposed approaches that incorporate momentum acceleration or the stochastic variance reduced gradient (SVRG) ( @cite_22 ) into the framework of the cubic regularized Newton method. However, neither approach uses an adaptive cubic coefficient.
{ "cite_N": [ "@cite_30", "@cite_16", "@cite_22" ], "mid": [ "2896285425", "2964303576", "2107438106" ], "abstract": [ "Momentum is a popular technique to accelerate the convergence in practical training, and its impact on convergence guarantee has been well-studied for first-order algorithms. However, such a successful acceleration technique has not yet been proposed for second-order algorithms in nonconvex optimization.In this paper, we apply the momentum scheme to cubic regularized (CR) Newton's method and explore the potential for acceleration. Our numerical experiments on various nonconvex optimization problems demonstrate that the momentum scheme can substantially facilitate the convergence of cubic regularization, and perform even better than the Nesterov's acceleration scheme for CR. Theoretically, we prove that CR under momentum achieves the best possible convergence rate to a second-order stationary point for nonconvex optimization. Moreover, we study the proposed algorithm for solving problems satisfying an error bound condition and establish a local quadratic convergence rate. Then, particularly for finite-sum problems, we show that the proposed algorithm can allow computational inexactness that reduces the overall sample complexity without degrading the convergence rate.", "In this paper we study stochastic quasi-Newton methods for nonconvex stochastic optimization, where we assume that noisy information about the gradients of the objective function is available via a stochastic first-order oracle ( @math ). We propose a general framework for such methods, for which we prove almost sure convergence to stationary points and analyze its worst-case iteration complexity. When a randomly chosen iterate is returned as the output of such an algorithm, we prove that in the worst case, the @math -calls complexity is @math to ensure that the expectation of the squared norm of the gradient is smaller than the given accuracy tolerance @math . 
We also propose a specific algorithm, namely, a stochastic damped limited-memory BFGS (SdLBFGS) method, that falls under the proposed framework. Moreover, we incorporate the stochastic variance reduced gradient variance reduction technique into the proposed SdLBFGS method and analyze its @math -calls compl...", "Stochastic gradient descent is popular for large scale optimization but has slow convergence asymptotically due to the inherent variance. To remedy this problem, we introduce an explicit variance reduction method for stochastic gradient descent which we call stochastic variance reduced gradient (SVRG). For smooth and strongly convex functions, we prove that this method enjoys the same fast convergence rate as those of stochastic dual coordinate ascent (SDCA) and Stochastic Average Gradient (SAG). However, our analysis is significantly simpler and more intuitive. Moreover, unlike SDCA or SAG, our method does not require the storage of gradients, and thus is more easily applicable to complex problems such as some structured prediction problems and neural network learning." ] }
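The SVRG technique mentioned above reduces gradient noise by anchoring each stochastic step to a periodically recomputed full gradient. A minimal sketch on a toy finite-sum objective follows; the function name `svrg` and the toy problem are illustrative, not from the cited papers:

```python
import random

def svrg(grads, x0, lr=0.1, epochs=20, inner=None, seed=0):
    """Minimal SVRG sketch for a scalar finite-sum objective
    f(x) = (1/n) * sum_i f_i(x), where `grads` holds the per-component
    gradient functions grad f_i. Each epoch: (1) compute the full gradient
    at a snapshot, (2) take variance-reduced stochastic steps
    g = grad_i(x) - grad_i(snap) + full_grad."""
    rng = random.Random(seed)
    n = len(grads)
    inner = inner or n
    x = x0
    for _ in range(epochs):
        snap = x
        full = sum(g(snap) for g in grads) / n
        for _ in range(inner):
            i = rng.randrange(n)
            x = x - lr * (grads[i](x) - grads[i](snap) + full)
    return x

# Example: f_i(x) = 0.5 * (x - a_i)^2, whose minimizer is the mean of a_i.
a = [1.0, 2.0, 3.0, 6.0]
grads = [(lambda x, ai=ai: x - ai) for ai in a]
x_star = svrg(grads, x0=0.0)  # converges toward mean(a) = 3.0
```

The correction term `grad_i(x) - grad_i(snap)` is what drives the variance of the stochastic direction to zero as the iterate approaches the snapshot, giving the linear convergence rates cited for SVRG.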
1906.11417
2954160693
We focus on minimizing nonconvex finite-sum functions that typically arise in machine learning problems. In an attempt to solve this problem, the adaptive cubic regularized Newton method has shown its strong global convergence guarantees and ability to escape from strict saddle points. This method uses a trust region-like scheme to determine if an iteration is successful or not, and updates only when it is successful. In this paper, we suggest an algorithm combining negative curvature with the adaptive cubic regularized Newton method to update even at unsuccessful iterations. We call this new method Stochastic Adaptive cubic regularization with Negative Curvature (SANC). Unlike the previous method, in order to attain stochastic gradient and Hessian estimators, the SANC algorithm uses independent sets of data points of consistent size over all iterations. It makes the SANC algorithm more practical to apply for solving large-scale machine learning problems. To the best of our knowledge, this is the first approach that combines the negative curvature method with the adaptive cubic regularized Newton method. Finally, we provide experimental results including neural networks problems supporting the efficiency of our method.
There has been relatively little work on negative curvature methods. Curtis and Robinson @cite_26 proposed several algorithms exploiting negative curvature for solving deterministic and stochastic optimization problems. In this work, the current iterate is updated along a combined direction of (stochastic) gradient descent and negative curvature. They also proposed a 'dynamic method' that adaptively estimates the Lipschitz constant of the gradient to set the step length in the stochastic optimization framework. They showed that the 'dynamic method' performs efficiently on stochastic optimization problems, but they did not provide a proof of its convergence.
{ "cite_N": [ "@cite_26" ], "mid": [ "2594169002" ], "abstract": [ "This paper addresses the question of whether it can be beneficial for an optimization algorithm to follow directions of negative curvature. Although prior work has established convergence results for algorithms that integrate both descent and negative curvature steps, there has not yet been extensive numerical evidence showing that such methods offer consistent performance improvements. In this paper, we present new frameworks for combining descent and negative curvature directions: alternating two-step approaches and dynamic step approaches. The aspect that distinguishes our approaches from ones previously proposed is that they make algorithmic decisions based on (estimated) upper-bounding models of the objective function. A consequence of this aspect is that our frameworks can, in theory, employ fixed stepsizes, which makes the methods readily translatable from deterministic to stochastic settings. For deterministic problems, we show that instances of our dynamic framework yield gains in performance compared to related methods that only follow descent steps. We also show that gains can be made in a stochastic setting in cases when a standard stochastic-gradient-type method might make slow progress." ] }
1906.11417
2954160693
We focus on minimizing nonconvex finite-sum functions that typically arise in machine learning problems. In an attempt to solve this problem, the adaptive cubic regularized Newton method has shown its strong global convergence guarantees and ability to escape from strict saddle points. This method uses a trust region-like scheme to determine if an iteration is successful or not, and updates only when it is successful. In this paper, we suggest an algorithm combining negative curvature with the adaptive cubic regularized Newton method to update even at unsuccessful iterations. We call this new method Stochastic Adaptive cubic regularization with Negative Curvature (SANC). Unlike the previous method, in order to attain stochastic gradient and Hessian estimators, the SANC algorithm uses independent sets of data points of consistent size over all iterations. It makes the SANC algorithm more practical to apply for solving large-scale machine learning problems. To the best of our knowledge, this is the first approach that combines the negative curvature method with the adaptive cubic regularized Newton method. Finally, we provide experimental results including neural networks problems supporting the efficiency of our method.
Most recently, in a similar vein, Liu @cite_5 proposed the adaptive negative curvature descent (NCD) method, which adapts the termination criterion of the Lanczos procedure to the magnitude of the subsampled gradients. They also provided variants of the adaptive NCD method whose worst-case time complexity is @math for stochastic optimization, where @math hides logarithmic factors.
{ "cite_N": [ "@cite_5" ], "mid": [ "2891935861" ], "abstract": [ "Negative curvature descent (NCD) method has been utilized to design deterministic or stochastic algorithms for non-convex optimization aiming at finding second-order stationary points or local minima. In existing studies, NCD needs to approximate the smallest eigen-value of the Hessian matrix with a sufficient precision (e.g., ϵ2≪1) in order to achieve a sufficiently accurate second-order stationary solution (i.e., λmin(∇2f( ))≥−ϵ2). One issue with this approach is that the target precision ϵ2 is usually set to be very small in order to find a high quality solution, which increases the complexity for computing a negative curvature. To address this issue, we propose an adaptive NCD to allow for an adaptive error dependent on the current gradient's magnitude in approximating the smallest eigen-value of the Hessian, and to encourage competition between a noisy NCD step and gradient descent step. We consider the applications of the proposed adaptive NCD for both deterministic and stochastic non-convex optimization, and demonstrate that it can help reduce the the overall complexity in computing the negative curvatures during the course of optimization without sacrificing the iteration complexity." ] }
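A negative curvature descent step, as used in the works above, moves along the eigenvector of the Hessian with the most negative eigenvalue, signed so that the step decreases the objective. The sketch below uses a closed-form 2x2 eigensolver for illustration; the function names, the fixed step size, and the saddle-point example are assumptions for exposition (real NCD methods use the Lanczos procedure and curvature-dependent step lengths):

```python
def min_eigpair_2x2(H):
    """Smallest eigenvalue and unit eigenvector of a symmetric 2x2 matrix
    [[a, b], [b, c]], computed in closed form."""
    (a, b), (_, c) = H
    disc = ((a - c) ** 2 / 4 + b * b) ** 0.5
    lam = (a + c) / 2 - disc  # smallest eigenvalue
    if b != 0:
        v = (b, lam - a)      # solves (H - lam*I) v = 0
    else:
        v = (1.0, 0.0) if a <= c else (0.0, 1.0)
    norm = (v[0] ** 2 + v[1] ** 2) ** 0.5
    return lam, (v[0] / norm, v[1] / norm)

def negative_curvature_step(x, grad, H, alpha=0.5):
    """If H has negative curvature, step along the negative-curvature
    direction, signed to be a descent direction; otherwise fall back
    to a plain gradient step."""
    lam, v = min_eigpair_2x2(H)
    if lam < 0:
        sign = -1.0 if (grad[0] * v[0] + grad[1] * v[1]) > 0 else 1.0
        return (x[0] + alpha * sign * v[0], x[1] + alpha * sign * v[1])
    return (x[0] - alpha * grad[0], x[1] - alpha * grad[1])

# Strict saddle of f(x, y) = x^2 - y^2 at the origin: the gradient vanishes,
# but the Hessian [[2, 0], [0, -2]] has negative curvature along y, so the
# step escapes where gradient descent would stall.
x1 = negative_curvature_step((0.0, 0.0), (0.0, 0.0), [[2.0, 0.0], [0.0, -2.0]])
```

At the saddle the gradient step is zero, while the negative-curvature step moves along y and strictly decreases f, which is precisely the escape mechanism these methods exploit.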
1906.11417
2954160693
We focus on minimizing nonconvex finite-sum functions that typically arise in machine learning problems. In an attempt to solve this problem, the adaptive cubic regularized Newton method has shown its strong global convergence guarantees and ability to escape from strict saddle points. This method uses a trust region-like scheme to determine if an iteration is successful or not, and updates only when it is successful. In this paper, we suggest an algorithm combining negative curvature with the adaptive cubic regularized Newton method to update even at unsuccessful iterations. We call this new method Stochastic Adaptive cubic regularization with Negative Curvature (SANC). Unlike the previous method, in order to attain stochastic gradient and Hessian estimators, the SANC algorithm uses independent sets of data points of consistent size over all iterations. It makes the SANC algorithm more practical to apply for solving large-scale machine learning problems. To the best of our knowledge, this is the first approach that combines the negative curvature method with the adaptive cubic regularized Newton method. Finally, we provide experimental results including neural networks problems supporting the efficiency of our method.
Carmon @cite_15 used negative curvature directions during a first phase of iterations and then switched to an accelerated gradient descent method once an iterate reaches an almost convex region.
{ "cite_N": [ "@cite_15" ], "mid": [ "2546420264" ], "abstract": [ "We present an accelerated gradient method for nonconvex optimization problems with Lipschitz continuous first and second derivatives. In a time @math , the method finds an @math -stationary point, meaning a point @math such that @math . The method improves upon the @math complexity of gradient descent and provides the additional second-order guarantee that @math for the computed @math . Furthermore, our method is Hessian free, i.e., it only requires gradient computations, and is therefore suitable for large-scale applications." ] }
1906.11417
2954160693
We focus on minimizing nonconvex finite-sum functions that typically arise in machine learning problems. In an attempt to solve this problem, the adaptive cubic regularized Newton method has shown its strong global convergence guarantees and ability to escape from strict saddle points. This method uses a trust region-like scheme to determine if an iteration is successful or not, and updates only when it is successful. In this paper, we suggest an algorithm combining negative curvature with the adaptive cubic regularized Newton method to update even at unsuccessful iterations. We call this new method Stochastic Adaptive cubic regularization with Negative Curvature (SANC). Unlike the previous method, in order to attain stochastic gradient and Hessian estimators, the SANC algorithm uses independent sets of data points of consistent size over all iterations. It makes the SANC algorithm more practical to apply for solving large-scale machine learning problems. To the best of our knowledge, this is the first approach that combines the negative curvature method with the adaptive cubic regularized Newton method. Finally, we provide experimental results including neural networks problems supporting the efficiency of our method.
A direction of negative curvature has also been used for escaping from strict saddle points; for details, please refer to @cite_20 @cite_19 .
{ "cite_N": [ "@cite_19", "@cite_20" ], "mid": [ "2963349772", "2751090697" ], "abstract": [ "(This is a theory paper) In this paper, we consider first-order methods for solving stochastic non-convex optimization problems. The key building block of the proposed algorithms is first-order procedures to extract negative curvature from the Hessian matrix through a principled sequence starting from noise, which are referred to NEgative-curvature-Originated-from-Noise or NEON and are of independent interest. Based on this building block, we design purely first-order stochastic algorithms for escaping from non-degenerate saddle points with a much better time complexity (almost linear time in the problem's dimensionality). In particular, we develop a general framework of first-order stochastic algorithms with a second-order convergence guarantee based on our new technique and existing algorithms that may only converge to a first-order stationary point. For finding a nearly second-order stationary point such that ‖∇F( )‖≤ϵ and ∇2F( )≥− √ ϵ I (in high probability), the best time complexity of the presented algorithms is ˜ O (d ϵ3.5), where F(⋅) denotes the objective function and d is the dimensionality of the problem. To the best of our knowledge, this is the first theoretical result of first-order stochastic algorithms with an almost linear time in terms of problem's dimensionality for finding second-order stationary points, which is even competitive with existing stochastic algorithms hinging on the second-order information.", "A central challenge to using first-order methods for optimizing nonconvex problems is the presence of saddle points. First-order methods often get stuck at saddle points, greatly deteriorating their performance. Typically, to escape from saddles one has to use second-order methods. However, most works on second-order methods rely extensively on expensive Hessian-based computations, making them impractical in large-scale settings. 
To tackle this challenge, we introduce a generic framework that minimizes Hessian based computations while at the same time provably converging to second-order critical points. Our framework carefully alternates between a first-order and a second-order subroutine, using the latter only close to saddle points, and yields convergence results competitive to the state-of-the-art. Empirical results suggest that our strategy also enjoys a good practical performance." ] }
1906.11286
2954916351
Drawing an inspiration from behavioral studies of human decision making, we propose here a general parametric framework for a reinforcement learning problem, which extends the standard Q-learning approach to incorporate a two-stream framework of reward processing with biases biologically associated with several neurological and psychiatric conditions, including Parkinson's and Alzheimer's diseases, attention-deficit hyperactivity disorder (ADHD), addiction, and chronic pain. For AI community, the development of agents that react differently to different types of rewards can enable us to understand a wide spectrum of multi-agent interactions in complex real-world socioeconomic systems. Empirically, the proposed model outperforms Q-Learning and Double Q-Learning in artificial scenarios with certain reward distributions and real-world human decision making gambling tasks. Moreover, from the behavioral modeling perspective, our parametric framework can be viewed as a first step towards a unifying computational model capturing reward processing abnormalities across multiple mental conditions and user preferences in long-term recommendation systems.
In the study by @cite_19 , a simple heuristic model is developed to simulate individuals’ choice behavior by varying the level of decision randomness and the importance given to gains and losses. The findings revealed that risky decision-making seems to be markedly disrupted in patients with chronic pain, probably due to the high cost that pain and negative mood impose on executive control functions. Patients’ behavioral performance in decision-making tasks, such as the Iowa Gambling Task (IGT), is characterized by selecting cards more frequently from disadvantageous than from advantageous decks, and by switching more often between competing responses, as compared with healthy controls.
{ "cite_N": [ "@cite_19" ], "mid": [ "2141859379" ], "abstract": [ "Risky decision-making seems to be markedly disrupted in patients with chronic pain, probably due to the high cost that impose pain and negative mood on executive control functions. Patients’ behavioral performance on decision-making tasks such as the Iowa Gambling Task (IGT) is characterized by selecting cards more frequently from disadvantageous than from advantageous decks, and by switching often between competing responses in comparison with healthy controls. In the present study, we developed a simple heuristic model to simulate individuals’ choice behavior by varying the level of decision randomness and the importance given to gains and losses. The findings revealed that the model was able to differentiate the behavioral performance of patients with chronic pain and healthy controls at the group, as well as at the individual level. The best fit of the model in patients with chronic pain was yielded when decisions were not based on previous choices and when gains were considered more relevant than losses. By contrast, the best account of the available data in healthy controls was obtained when decisions were based on previous experiences and losses loomed larger than gains. In conclusion, our model seems to provide useful information to measure each individual participant extensively, and to deal with the data on a participant-by-participant basis." ] }
1906.11355
2953540383
Contrast enhancement is an important preprocessing technique for improving the performance of downstream tasks in image processing and computer vision. Among the existing approaches based on nonlinear histogram transformations, contrast limited adaptive histogram equalization (CLAHE) is a popular choice for dealing with 2D images obtained in natural and scientific settings. The recent hardware upgrade in data acquisition systems results in significant increase in data complexity, including their sizes and dimensions. Measurements of densely sampled data higher than three dimensions, usually composed of 3D data as a function of external parameters, are becoming commonplace in various applications in the natural sciences and engineering. The initial understanding of these complex multidimensional datasets often requires human intervention through visual examination, which may be hampered by the varying levels of contrast permeating through the dimensions. We show both qualitatively and quantitatively that using our multidimensional extension of CLAHE (MCLAHE) acting simultaneously on all dimensions of the datasets allows better visualization and discernment of multidimensional image features, as demonstrated using cases from 4D photoemission spectroscopy and fluorescence microscopy. Our implementation of multidimensional CLAHE in Tensorflow is publicly accessible and supports parallelization with multiple CPUs and various other hardware accelerators, including GPUs.
Evaluating the outcome of contrast enhancement requires quantitative metrics of image contrast, which were rarely used in early demonstrations of HE algorithms @cite_24 @cite_22 @cite_29 @cite_36 @cite_27 @cite_0 , because the use cases were predominantly in 2D and the improvements in image quality were largely intuitive. In domain-specific settings involving higher-dimensional (3D and above) imagery, intuition becomes less suitable for making judgments, but computational contrast metrics can provide guidance for evaluation. Commonly used contrast metrics include the mean squared error (MSE) or the related peak signal-to-noise ratio (PSNR) @cite_11 @cite_21 , the standard deviation (also called the root-mean-square contrast) @cite_38 , and the Shannon entropy (also called the grey-level entropy) @cite_39 . These metrics generalize naturally to imagery of arbitrary dimensions @cite_31 and are easy to compute. We also note that although recently developed 2D image quality assessment scores based on the current understanding of the human visual system @cite_11 @cite_39 have proved more effective than the classic metrics we chose to quantify contrast, their generalization and relevance to evaluating higher-dimensional images obtained in natural science and engineering settings, often without undistorted references, have not yet been explored; we therefore do not use them here for comparison.
{ "cite_N": [ "@cite_38", "@cite_22", "@cite_36", "@cite_29", "@cite_21", "@cite_39", "@cite_24", "@cite_0", "@cite_27", "@cite_31", "@cite_11" ], "mid": [ "2134774992", "2038376950", "2107858703", "2025685200", "2946864864", "1971014006", "", "2128926607", "2010429712", "", "2133665775" ], "abstract": [ "The physical contrast of simple images such as sinusoidal gratings or a single patch of light on a uniform background is well defined and agrees with the perceived contrast, but this is not so for complex images. Most definitions assign a single contrast value to the whole image, but perceived contrast may vary greatly across the image. Human contrast sensitivity is a function of spatial frequency; therefore the spatial frequency content of an image should be considered in the definition of contrast. In this paper a definition of local band-limited contrast in images is proposed that assigns a contrast value to every point in the image as a function of the spatial frequency band. For each frequency band, the contrast is defined as the ratio of the bandpass-filtered image at that frequency to the low-pass image filtered to an octave below the same frequency (local luminance mean). This definition raises important implications regarding the perception of contrast in complex images and is helpful in understanding the effects of image-processing algorithms on the perceived contrast. A pyramidal image-contrast structure based on this definition is useful in simulating nonlinear, threshold characteristics of spatial vision in both normal observers and the visually impaired.", "The use of a gray level transformation which transforms a given empirical distribution function of gray level values in an image into a uniform distribution has been used as an image enhancement as well as for a normalization procedure. 
This transformation produces a discrete variable whose empirical distribution might be expected to be approximately uniform since it is related to the well known distribution transformation. In this correspondence, an extension of the theorem is given which shows that the theorem \"almost\" holds for an \"almost\" continuous input distribution. The application of the discrete distribution transformation to computer image enhancement is considered.", "Abstract : A number of simple and inexpensive enhancement techniques are suggested. These techniques attempt to make use of easily computed local context features to aid in the reassignment of each point's gray level during histogram transformation. (Author)", "Most image enhancement techniques are not suitable for real-time applications. This paper presents two contrast enhancement techniques that can work at TV rates with fairly simple hardware.© (1976) COPYRIGHT SPIE--The International Society for Optical Engineering. Downloading of the abstract is permitted for personal use only.", "Contrast enhancement algorithms have been evolved through last decades to meet the requirement of its objectives. Actually, there are two main objectives while enhancing the contrast of an image: (i) improve its appearance for visual interpretation and (ii) facilitate increase the performance of subsequent tasks (e.g., image analysis, object detection, and image segmentation). Most of the contrast enhancement techniques are based on histogram modifications, which can be performed globally or locally. The Contrast Limited Adaptive Histogram Equalization (CLAHE) is a method which can overcome the limitations of global approaches by performing local contrast enhancement. However, this method relies on two essential hyperparameters: the number of tiles and the clip limit. An improper hyperparameter selection may heavily decrease the image quality toward its degradation. 
Considering the lack of methods to efficiently determine these hyperparameters, this article presents a learning-based hyperparameter selection method for the CLAHE technique. The proposed supervised method was built and evaluated using contrast distortions from well-known image quality assessment datasets. Also, we introduce a more challenging dataset containing over 6200 images with a large range of contrast and intensity variations. The results show the efficiency of the proposed approach in predicting CLAHE hyperparameters with up to 0.014 RMSE and 0.935 R2 values. Also, our method overcomes both experimented baselines by enhancing image contrast while keeping its natural aspect.", "Proper contrast change can improve the perceptual quality of most images, but it has largely been overlooked in the current research of image quality assessment (IQA). To fill this void, we in this paper first report a new large dedicated contrast-changed image database (CCID2014), which includes 655 images and associated subjective ratings recorded from 22 inexperienced observers. We then present a novel reduced-reference image quality metric for contrast change (RIQMC) using phase congruency and statistics information of the image histogram. Validation of the proposed model is conducted on contrast related CCID2014, TID2008, CSIQ and TID2013 databases, and results justify the superiority and efficiency of RIQMC over a majority of classical and state-of-the-art IQA methods. Furthermore, we combine aforesaid subjective and objective assessments to derive the RIQMC based Optimal HIstogram Mapping (ROHIM) for automatic contrast enhancement, which is shown to outperform recently developed enhancement technologies.", "", "This paper proposes a scheme for adaptive image-contrast enhancement based on a generalization of histogram equalization (HE). HE is a useful technique for improving image contrast, but its effect is too severe for many purposes. 
However, dramatically different results can be obtained with relatively minor modifications. A concise description of adaptive HE is set out, and this framework is used in a discussion of past suggestions for variations on HE. A key feature of this formalism is a \"cumulation function,\" which is used to generate a grey level mapping from the local histogram. By choosing alternative forms of cumulation function one can achieve a wide variety of effects. A specific form is proposed. Through the variation of one or two parameters, the resulting process can produce a range of degrees of contrast enhancement, at one extreme leaving the image unchanged, at another yielding full adaptive equalization.", "Abstract An analysis of the local histogram equalization algorithm is presented. An adaptation of the algorithm is suggested that involves varying the window size over different regions of the image. This enables each region to be enhanced equally. The algorithm has a parameter, S, to control the amount of stretching required. Some results are presented and analysed.", "", "Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a structural similarity index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000. A MATLAB implementation of the proposed algorithm is available online at http: www.cns.nyu.edu spl sim lcv ssim ." ] }
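The root-mean-square contrast and grey-level entropy metrics named above generalize to any dimensionality because they operate on the flattened intensity values, independent of the array's shape. A minimal sketch (function names are illustrative; a practical implementation would vectorize over an n-dimensional array):

```python
import math

def rms_contrast(values):
    """Standard deviation of intensities (root-mean-square contrast);
    applies to an image of any dimension once flattened."""
    n = len(values)
    mean = sum(values) / n
    return (sum((v - mean) ** 2 for v in values) / n) ** 0.5

def grey_level_entropy(values, levels=256):
    """Shannon entropy of the grey-level histogram, in bits."""
    hist = [0] * levels
    for v in values:
        hist[min(int(v), levels - 1)] += 1
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in hist if c)

# A low-contrast "image" versus a higher-contrast one (flattened values,
# regardless of the original dimensionality): both metrics increase
# with contrast.
flat_low = [100, 100, 100, 101]
flat_high = [0, 64, 128, 255]
```

For example, `flat_high` spreads its mass over four grey levels, so its histogram entropy reaches 2 bits, while `flat_low` stays near zero on both metrics; this monotone behavior is what makes the metrics usable as proxies when visual judgment of 4D data is impractical.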
1906.11419
2955464297
Localization is a critical capability for robots, drones and autonomous vehicles operating in a wide range of environments. One of the critical considerations for designing, training or calibrating visual localization systems is the coverage of the visual sensors equipped on the platforms. In an aerial context for example, the altitude of the platform and camera field of view plays a critical role in how much of the environment a downward facing camera can perceive at any one time. Furthermore, in other applications, such as on roads or in indoor environments, additional factors such as camera resolution and sensor placement altitude can also affect this coverage. The sensor coverage and the subsequent processing of its data also has significant computational implications. In this paper we present for the first time a set of methods for automatically determining the trade-off between coverage and visual localization performance, enabling the identification of the minimum visual sensor coverage required to obtain optimal localization performance with minimal compute. We develop a localization performance indicator based on the overlapping coefficient, and demonstrate its predictive power for localization performance with a certain sensor coverage. We evaluate our method on several challenging real-world datasets from aerial and ground-based domains, and demonstrate that our method is able to automatically optimize for coverage using a small amount of calibration data. We hope these results will assist in the design of localization systems for future autonomous robot, vehicle and flying systems.
In several mobile robotics applications the system moves relative to a surface, such as a drone across the ground, an autonomous vehicle over the road or a submarine relative to a ship's hull. As a result, several approaches have proposed using the surface that the robot moves relative to as a visual reference map for localization. For example, it has been thoroughly demonstrated that surface-based visual localization using pixel-based techniques for mobile ground platforms is feasible within warehouse environments with controlled lighting using a monocular camera @cite_31 @cite_7 . It has also been demonstrated that this technique can be applied to autonomous vehicles and a road surface, even with day-to-night image data @cite_40 . Additionally, @cite_13 @cite_37 demonstrate the use of local features for road-surface-based visual localization.
{ "cite_N": [ "@cite_37", "@cite_7", "@cite_40", "@cite_31", "@cite_13" ], "mid": [ "2766283203", "2080421438", "", "2128622892", "2414260735" ], "abstract": [ "Location-aware applications play an increasingly critical role in everyday life. However, the most common global localization technology - GPS - has limited accuracy and can be unusable in dense urban areas and indoors. We introduce an image-based global localization system that is accurate to a few millimeters and performs reliable localization both indoors and outside. The key idea is to capture and index distinctive local features in ground textures. This is based on the observation that ground textures including wood, carpet, tile, concrete, and asphalt may look random and homogeneous, but all contain cracks, scratches, or unique arrangements of carpet fibers. These imperfections are persistent, and can serve as local features. Our system incorporates a downward-facing camera to capture the fine texture of the ground, together with an image processing pipeline that locates the captured texture patch in a compact database constructed offline. We demonstrate the capability of our system to robustly, accurately, and quickly locate test images on various types of outdoor and indoor ground surfaces.", "Automated guided vehicles (AGVs) have been operating effectively in factories for decades. These vehicles have successfully used strategies of deliberately structuring the environment and adapting the process to the automation. The potential of computer vision technology to increase the intelligence and adaptability of AGVs is largely unexploited in contemporary commercially available vehicles. We developed an infrastructure-free AGV that uses four distinct vision systems. Three of them exploit naturally occurring visual cues instead of relying on infrastructure. 
When coupled with a highly capable trajectory generation algorithm, the system produces four visual servo controllers that guide the vehicle continuously in several contexts. These contexts range from gross motion in the facility to precision operations for lifting and mating parts racks and removing them from semi-trailers. To our knowledge, this is the first instance of an AGV that has operated successfully in a relevant environment for an extended period of time without relying on any infrastructure.", "", "A new practical, high-performance mobile robot localization technique is described that is motivated by the fact that many man-made environments contain substantially flat, visually textured surfaces of persistent appearance. While the tracking of image regions is much studied in computer vision, appearance is still a largely unexploited localization resource in commercially relevant applications. We show how prior appearance models can be used to enable highly repeatable mobile robot guidance that, unlike commercial alternatives, is both infrastructure-free and free-ranging. Very large-scale mosaics are constructed and used to localize a mobile robot operating in the modeled environment. Straightforward techniques from vision-based localization and mosaicking are used to produce a field-relevant AGV guidance system based only on vision and odometry. The feasibility, design, implementation, and precommercial field qualification of such a guidance system are described.", "This paper presents an overview of the Ranger localization system and its constituent technologies and algorithms. Ranger is a high-precision localization system for ground vehicles that performs map-based localization using a ground-facing camera. 
Ranger uses commercially available hardware, including a camera, lights, and a computer, in combination with auxiliary localization sensors and a custom state estimator, to produce a complete high-precision positioning solution suitable for feedback control of an automated vehicle. Ranger was originally conceived and designed to address the accuracy and availability problems of GPS, and can operate independent from or as a supplement to GPS and other GNSS positioning systems. Ranger position measurements are made by matching live ground imagery to imagery stored in a map. The image matching process uses a feature-based approach that yields a high positive match rate with a vanishingly small false positive rate. The precision of the Ranger measurements is shown to be on the order of centimeters, and the actual lateral positioning performance of MARTI, one of Southwest Research Institute's automated vehicles, autonomously driving a test route is shown to be repeatable over many runs to within 2 cm." ] }
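The abstract above names a "localization performance indicator based on the overlapping coefficient". The overlapping coefficient itself is a standard statistic; a minimal sketch over two discrete score histograms follows, where the histogram inputs and binning are illustrative assumptions rather than the paper's definition.

```python
import numpy as np

def overlapping_coefficient(p, q):
    """Overlapping coefficient (OVL) of two discrete distributions:
    the sum of the pointwise minimum of the two normalized histograms.
    Returns 1.0 for identical distributions, 0.0 for disjoint ones.
    Sketch of the generic statistic; inputs here are assumptions."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()
    q = q / q.sum()
    return float(np.minimum(p, q).sum())
```

Intuitively, a low overlap between the histograms of matching and non-matching scores predicts easy localization, which is why such a coefficient can serve as a performance indicator.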
1906.11419
2955464297
Localization is a critical capability for robots, drones and autonomous vehicles operating in a wide range of environments. One of the critical considerations for designing, training or calibrating visual localization systems is the coverage of the visual sensors equipped on the platforms. In an aerial context for example, the altitude of the platform and camera field of view plays a critical role in how much of the environment a downward facing camera can perceive at any one time. Furthermore, in other applications, such as on roads or in indoor environments, additional factors such as camera resolution and sensor placement altitude can also affect this coverage. The sensor coverage and the subsequent processing of its data also has significant computational implications. In this paper we present for the first time a set of methods for automatically determining the trade-off between coverage and visual localization performance, enabling the identification of the minimum visual sensor coverage required to obtain optimal localization performance with minimal compute. We develop a localization performance indicator based on the overlapping coefficient, and demonstrate its predictive power for localization performance with a certain sensor coverage. We evaluate our method on several challenging real-world datasets from aerial and ground-based domains, and demonstrate that our method is able to automatically optimize for coverage using a small amount of calibration data. We hope these results will assist in the design of localization systems for future autonomous robot, vehicle and flying systems.
The research presented on underwater visual ship hull inspection and navigation further demonstrates that vision-based surface localization is feasible even in challenging conditions @cite_32 @cite_24 @cite_26 . There has also been a variety of research into utilizing the surface as the input image stream for visual odometry @cite_25 @cite_4 @cite_23 .
{ "cite_N": [ "@cite_26", "@cite_4", "@cite_32", "@cite_24", "@cite_23", "@cite_25" ], "mid": [ "2013117623", "2058311622", "2171792626", "2020950478", "2519018767", "2169076903" ], "abstract": [ "This paper reports on a method for an autonomous underwater vehicle to perform real-time visual simultaneous localization and mapping (SLAM) on large ship hulls over multiple sessions. Along with a monocular camera, our method uses a piecewise-planar model to explicitly optimize the ship hull surface in our factor-graph framework, and anchor nodes to co-register multiple surveys. To enable realtime performance for long-term SLAM, we use the recent Generic Linear Constraints (GLC) framework to sparsify our factor-graph. This paper analyzes how our single-session SLAM techniques can be used in the GLC framework, and describes a particle filter reacquisition algorithm so that an underwater session can be automatically re-localized to a previously built SLAM graph. We provide real-world experimental results involving automated ship hull inspection, and show that our localization filter out-performs Fast Appearance-Based Mapping (FAB-MAP), a popular place-recognition system. Using our approach, we can automatically align surveys that were taken days, months, and even years apart.", "Reliable motion estimation is a key component for autonomous vehicles. We present a visual odometry method for ground vehicles using template matching. The method uses a downward-facing camera perpendicular to the ground and estimates the motion of the vehicle by analyzing the image shift from frame to frame. Specifically, an image region (template) is selected, and using correlation we find the corresponding image region in the next frame. We introduce the use of multitemplate correlation matching and suggest template quality measures for estimating the suitability of a template for the purpose of correlation. Several aspects of the template choice are also presented. 
Through an extensive analysis, we derive the expected theoretical error rate of our system and show its dependence on the template window size and image noise. We also show how a linear forward prediction filter can be used to limit the search area to significantly increase the computation performance. Using a single camera and assuming an Ackerman-steering model, the method has been implemented successfully on a large industrial forklift and a 4×4 vehicle. Over 6 km of field trials from our industrial test site, an off-road area and an urban environment are presented illustrating the applicability of the method as an independent sensor for large vehicle motion estimation at practical velocities. © 2011 Wiley Periodicals, Inc. © 2011 Wiley Periodicals, Inc.", "Inspection of ship hulls and marine structures using autonomous underwater vehicles has emerged as a unique and challenging application of robotics. The problem poses rich questions in physical design and operation, perception and navigation, and planning, driven by difficulties arising from the acoustic environment, poor water quality and the highly complex structures to be inspected. In this paper, we develop and apply algorithms for the central navigation and planning problems on ship hulls. These divide into two classes, suitable for the open, forward parts of a typical monohull, and for the complex areas around the shafting, propellers and rudders. On the open hull, we have integrated acoustic and visual mapping processes to achieve closed-loop control relative to features such as weld-lines and biofouling. In the complex area, we implemented new large-scale planning routines so as to achieve full imaging coverage of all the structures, at a high resolution. We demonstrate our approaches in recent op...", "This paper reports a real-time monocular visual simultaneous localization and mapping (SLAM) algorithm and results for its application in the area of autonomous underwater ship hull inspection. 
The proposed algorithm overcomes some of the specific challenges associated with underwater visual SLAM, namely, limited field of view imagery and feature-poor regions. It does so by exploiting our SLAM navigation prior within the image registration pipeline and by being selective about which imagery is considered informative in terms of our visual SLAM map. A novel online bag-of-words measure for intra and interimage saliency are introduced and are shown to be useful for image key-frame selection, information-gain-based link hypothesis, and novelty detection. Results from three real-world hull inspection experiments evaluate the overall approach, including one survey comprising a 3.4-h 2.7-km-long trajectory.", "One of the important tasks of an autonomous mobile vehicle is the reliable and fast estimation of its position over time. This paper presents the development of an adaptive technique to hasten and improve the quality of correlation-based template matching for monocular visual odometry systems that estimate the relative motion of ground vehicles in low-textured environments. Moreover, the factors that can affect the maximum permissible vehicle driving speed were determined and the related equations were derived. The developed system uses a single downward-facing monocular camera installed at an optimum location to avoid the negative effect of directional sunlight and shadows which can disturb the correlation. In addition, the normalized cross-correlation method is implemented to calculate the pixel displacement between image frames. Although this method is highly effective for template matching because of its invariance to linear brightness and contrast variations, it incurs high computational cost. Thus, the optimal sizes of image template and matching search area are selected and their locations are dynamically changed according to vehicle acceleration, in order to achieve a compromise between the performance and the computational cost of correlation. 
The proposed technique increases the allowable vehicle driving speed and reduces the probability of template false-matching. Moreover, compared to traditional full search matching techniques, the adaptive technique demonstrates high efficiency and accuracy and improves the quality and speed of the correlation with more than 87 of reduction in computational cost.", "Positioning is a key task in most field robotics applications but can be very challenging in GPS-denied or high-slip environments. A common tactic in such cases is to position visually, and we present a visual odometry implementa- tion with the unusual reliance on optical mouse sensors to report vehicle velocity. Using multiple kilometers of data from a lunar rover prototype, we demonstrate that, in conjunction with a moderate-grade inertial measurement unit, such a sensor can provide an integrated pose stream that is at times more accurate than that achievable by wheel odometry and visibly more desirable for perception purposes than that provided by a high-end GPS-INS system. A discussion of the sensor's limitations and several drift mitigating strategies attempted are presented." ] }
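The odometry abstracts above estimate frame-to-frame vehicle motion by correlation-based template matching on downward-facing camera images. A simplified sketch of that idea, using zero-mean normalized cross-correlation and an exhaustive integer-pixel search, is shown below; the function names, window sizes, and search strategy are illustrative assumptions, not any cited system's implementation.

```python
import numpy as np

def ncc(patch, template):
    """Zero-mean normalized cross-correlation of two equal-size patches."""
    a = patch - patch.mean()
    b = template - template.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match_shift(prev, curr, template_size, search):
    """Estimate the integer pixel shift between consecutive ground images
    by exhaustively scoring NCC over a small search window.
    Illustrative sketch of correlation-based visual odometry."""
    h, w = prev.shape
    t0 = (h - template_size) // 2  # centered template in the previous frame
    template = prev[t0:t0 + template_size, t0:t0 + template_size]
    best, best_shift = -2.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = t0 + dy, t0 + dx
            if 0 <= y and y + template_size <= h and 0 <= x and x + template_size <= w:
                score = ncc(curr[y:y + template_size, x:x + template_size], template)
                if score > best:
                    best, best_shift = score, (dy, dx)
    return best_shift
```

The cited adaptive techniques shrink the search window using a motion prediction filter, which this exhaustive sketch omits for clarity.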
1811.10519
2901375506
We present a method to learn the 3D surface of objects directly from a collection of images. Previous work achieved this capability by exploiting additional manual annotation, such as object pose, 3D surface templates, temporal continuity of videos, manually selected landmarks, and foreground background masks. In contrast, our method does not make use of any such annotation. Rather, it builds a generative model, a convolutional neural network, which, given a noise vector sample, outputs the 3D surface and texture of an object and a background image. These 3 components combined with an additional random viewpoint vector are then fed to a differential renderer to produce a view of the sampled object and background. Our general principle is that if the output of the renderer, the generated image, is realistic, then its input, the generated 3D and texture, should also be realistic. To achieve realism, the generative model is trained adversarially against a discriminator that tries to distinguish between the output of the renderer and real images from the given data set. Moreover, our generative model can be paired with an encoder and trained as an autoencoder, to automatically extract the 3D shape, texture and pose of the object in an image. Our trained generative model and encoder show promising results both on real and synthetic data, which demonstrate for the first time that fully unsupervised 3D learning from image collections is possible.
In this work we focus on learning a generative model. 3D morphable models (3DMMs) @cite_34 @cite_25 are trained with high-quality face scans and provide a high-quality template for face reconstruction and recognition. Tran et al. @cite_14 and Genova et al. @cite_17 train neural networks for regressing the parameters of 3DMMs. Model-based Face Autoencoders (MoFA) and Genova et al. @cite_24 @cite_8 only use unlabelled training data, but they rely on existing models that were built with supervision. Therefore, for a different object category, these methods require a new pre-training of the 3DMM and knowledge of which 3D objects they need to reconstruct, while our method applies directly to any category without prior knowledge of what 3D shape the objects have.
{ "cite_N": [ "@cite_14", "@cite_8", "@cite_24", "@cite_34", "@cite_25", "@cite_17" ], "mid": [ "2584229793", "", "2952080583", "2237250383", "", "2952018420" ], "abstract": [ "The 3D shapes of faces are well known to be discriminative. Yet despite this, they are rarely used for face recognition and always under controlled viewing conditions. We claim that this is a symptom of a serious but often overlooked problem with existing methods for single view 3D face reconstruction: when applied in the wild, their 3D estimates are either unstable and change for different photos of the same subject or they are over-regularized and generic. In response, we describe a robust method for regressing discriminative 3D morphable face models (3DMM). We use a convolutional neural network (CNN) to regress 3DMM shape and texture parameters directly from an input photo. We overcome the shortage of training data required for this purpose by offering a method for generating huge numbers of labeled examples. The 3D estimates produced by our CNN surpass state of the art accuracy on the MICC data set. Coupled with a 3D-3D face matching pipeline, we show the first competitive face recognition results on the LFW, YTF and IJB-A benchmarks using 3D face shapes as representations, rather than the opaque deep feature vectors used by other modern systems.", "", "In this work we propose a novel model-based deep convolutional autoencoder that addresses the highly challenging problem of reconstructing a 3D human face from a single in-the-wild color image. To this end, we combine a convolutional encoder network with an expert-designed generative model that serves as decoder. The core innovation is our new differentiable parametric decoder that encapsulates image formation analytically based on a generative model. Our decoder takes as input a code vector with exactly defined semantic meaning that encodes detailed face pose, shape, expression, skin reflectance and scene illumination. 
Due to this new way of combining CNN-based with model-based face reconstruction, the CNN-based encoder learns to extract semantically meaningful parameters from a single monocular input image. For the first time, a CNN encoder and an expert-designed generative model can be trained end-to-end in an unsupervised manner, which renders training on very large (unlabeled) real world data feasible. The obtained reconstructions compare favorably to current state-of-the-art approaches in terms of quality and richness of representation.", "In this paper, a new technique for modeling textured 3D faces is introduced. 3D faces can either be generated automatically from one or more photographs, or modeled directly through an intuitive user interface. Users are assisted in two key problems of computer aided face modeling. First, new face images or new 3D face models can be registered automatically by computing dense one-to-one correspondence to an internal face model. Second, the approach regulates the naturalness of modeled faces avoiding faces with an “unlikely” appearance. Starting from an example set of 3D face models, we derive a morphable face model by transforming the shape and texture of the examples into a vector space representation. New faces and expressions can be modeled by forming linear combinations of the prototypes. Shape and texture constraints derived from the statistics of our example faces are used to guide manual modeling or automated matching algorithms. We show 3D face reconstructions from single images and their applications for photo-realistic image manipulations. We also demonstrate face manipulations according to complex parameters such as gender, fullness of a face or its distinctiveness.", "", "We present a method for training a regression network from image pixels to 3D morphable model coordinates using only unlabeled photographs. 
The training loss is based on features from a facial recognition network, computed on-the-fly by rendering the predicted faces with a differentiable renderer. To make training from features feasible and avoid network fooling effects, we introduce three objectives: a batch distribution loss that encourages the output distribution to match the distribution of the morphable model, a loopback loss that ensures the network can correctly reinterpret its own output, and a multi-view identity loss that compares the features of the predicted 3D face and the input photograph from multiple viewing angles. We train a regression network using these objectives, a set of unlabeled photographs, and the morphable model itself, and demonstrate state-of-the-art results." ] }
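The 3DMM-based methods discussed in the related-work field above all rest on the same linear model: a face shape is the mean shape plus a linear combination of learned basis deformations, and the regression networks predict the combination coefficients. A minimal sketch of that forward model follows; the names and dimensions are illustrative, not taken from any specific 3DMM release.

```python
import numpy as np

def morphable_shape(mean_shape, basis, params):
    """Linear morphable model forward pass.

    mean_shape: (3N,) stacked x,y,z coordinates of the mean mesh
    basis:      (3N, K) learned deformation basis (e.g. from PCA of scans)
    params:     (K,) coefficients, typically regressed by a network

    Returns the (3N,) vertex coordinates of the reconstructed shape.
    Illustrative sketch of the generic 3DMM formulation.
    """
    return mean_shape + basis @ params
```

Regressing `params` instead of raw vertices is what lets the cited networks stay low-dimensional, but it is also why each new object category needs its own pre-trained basis.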
1811.10519
2901375506
We present a method to learn the 3D surface of objects directly from a collection of images. Previous work achieved this capability by exploiting additional manual annotation, such as object pose, 3D surface templates, temporal continuity of videos, manually selected landmarks, and foreground background masks. In contrast, our method does not make use of any such annotation. Rather, it builds a generative model, a convolutional neural network, which, given a noise vector sample, outputs the 3D surface and texture of an object and a background image. These 3 components combined with an additional random viewpoint vector are then fed to a differential renderer to produce a view of the sampled object and background. Our general principle is that if the output of the renderer, the generated image, is realistic, then its input, the generated 3D and texture, should also be realistic. To achieve realism, the generative model is trained adversarially against a discriminator that tries to distinguish between the output of the renderer and real images from the given data set. Moreover, our generative model can be paired with an encoder and trained as an autoencoder, to automatically extract the 3D shape, texture and pose of the object in an image. Our trained generative model and encoder show promising results both on real and synthetic data, which demonstrate for the first time that fully unsupervised 3D learning from image collections is possible.
Unlike 3DMMs, Generative Adversarial Nets (GANs) @cite_2 and Variational Autoencoders (VAEs) @cite_1 do not provide interpretable parameters, but they are very powerful and can be trained in a fully unsupervised manner. In recent years they have improved significantly @cite_30 @cite_9 @cite_28 . 3D-GANs @cite_7 generate 3D objects, but require 3D supervision. It is possible, however, to train 3D generators from only 2D images by using GANs together with differentiable renderers such as the Neural Mesh Renderer @cite_21 or OpenDR @cite_4 . PrGAN @cite_5 learns a voxel-based representation with a GAN, and Henderson et al. @cite_22 train surface meshes using a VAE. Both are limited to synthetic data, as they do not model the background; this can be interpreted as using silhouettes as a supervision signal. In contrast, we use only 2D image collections and learn a 3D mesh with texture, modelling the background as well. Both PrGAN @cite_5 and our method are special cases of AmbientGAN @cite_15 . We extend their theory to the case of 3D reconstruction and describe failure modes, including the hollow-mask illusion @cite_16 and the reference ambiguity @cite_12 .
{ "cite_N": [ "@cite_30", "@cite_4", "@cite_22", "@cite_7", "@cite_28", "@cite_9", "@cite_21", "@cite_1", "@cite_2", "@cite_5", "@cite_15", "@cite_16", "@cite_12" ], "mid": [ "", "183071939", "2963338719", "2949551726", "", "", "2963527086", "", "2099471712", "2582734987", "2785532149", "2030066581", "2895729551" ], "abstract": [ "", "Inverse graphics attempts to take sensor data and infer 3D geometry, illumination, materials, and motions such that a graphics renderer could realistically reproduce the observed scene. Renderers, however, are designed to solve the forward process of image synthesis. To go in the other direction, we propose an approximate differentiable renderer (DR) that explicitly models the relationship between changes in model parameters and image observations. We describe a publicly available OpenDR framework that makes it easy to express a forward graphics model and then automatically obtain derivatives with respect to the model parameters and to optimize over them. Built on a new auto-differentiation package and OpenGL, OpenDR provides a local optimization method that can be incorporated into probabilistic programming frameworks. We demonstrate the power and simplicity of programming with OpenDR by using it to solve the problem of estimating human body shape from Kinect depth and RGB data.", "", "We study the problem of 3D object generation. We propose a novel framework, namely 3D Generative Adversarial Network (3D-GAN), which generates 3D objects from a probabilistic space by leveraging recent advances in volumetric convolutional networks and generative adversarial nets. 
The benefits of our model are three-fold: first, the use of an adversarial criterion, instead of traditional heuristic criteria, enables the generator to capture object structure implicitly and to synthesize high-quality 3D objects; second, the generator establishes a mapping from a low-dimensional probabilistic space to the space of 3D objects, so that we can sample objects without a reference image or CAD models, and explore the 3D object manifold; third, the adversarial discriminator provides a powerful 3D shape descriptor which, learned without supervision, has wide applications in 3D object recognition. Experiments demonstrate that our method generates high-quality 3D objects, and our unsupervisedly learned features achieve impressive performance on 3D object recognition, comparable with those of supervised learning methods.", "", "", "For modeling the 3D world behind 2D images, which 3D representation is most appropriate? A polygon mesh is a promising candidate for its compactness and geometric properties. However, it is not straightforward to model a polygon mesh from 2D images using neural networks because the conversion from a mesh to an image, or rendering, involves a discrete operation called rasterization, which prevents back-propagation. Therefore, in this work, we propose an approximate gradient for rasterization that enables the integration of rendering into neural networks. Using this renderer, we perform single-image 3D mesh reconstruction with silhouette image supervision and our system outperforms the existing voxel-based approach. Additionally, we perform gradient-based 3D mesh editing operations, such as 2D-to-3D style transfer and 3D DeepDream, with 2D supervision for the first time. 
These applications demonstrate the potential of the integration of a mesh renderer into neural networks and the effectiveness of our proposed renderer.", "", "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.", "In this paper we investigate the problem of inducing a distribution over three-dimensional structures given two-dimensional views of multiple objects taken from unknown viewpoints. Our approach called \"projective generative adversarial networks\" (PrGANs) trains a deep generative model of 3D shapes whose projections match the distributions of the input 2D views. The addition of a projection module allows us to infer the underlying 3D shape distribution without using any 3D, viewpoint information, or annotation during the learning phase. We show that our approach produces 3D shapes of comparable quality to GANs trained on 3D data for a number of shape categories including chairs, airplanes, and cars. Experiments also show that the disentangled representation of 2D shapes into geometry and viewpoint leads to a good generative model of 2D shapes. 
The key advantage is that our model allows us to predict 3D, viewpoint, and generate novel views from an input image in a completely unsupervised manner.", "Generative models provide a way to model structure in complex distributions and have been shown to be useful for many tasks of practical interest. However, current techniques for training generative models require access to fully-observed samples. In many settings, it is expensive or even impossible to obtain fully-observed samples, but economical to obtain partial, noisy observations. We consider the task of learning an implicit generative model given only lossy measurements of samples from the distribution of interest. We show that the true underlying distribution can be provably recovered even in the presence of per-sample information loss for a class of measurement models. Based on this, we propose a new method of training Generative Adversarial Networks (GANs) which we call AmbientGAN. On three benchmark datasets, and for various measurement models, we demonstrate substantial qualitative and quantitative improvements. Generative models trained with our method can obtain @math - @math x higher inception scores than the baselines.", "", "We study the problem of building models that can transfer selected attributes from one image to another without affecting the other attributes. Towards this goal, we develop analysis and a training methodology for autoencoding models, whose encoded features aim to disentangle attributes. These features are explicitly split into two components: one that should represent attributes in common between pairs of images, and another that should represent attributes that change between pairs of images. 
We show that achieving this objective faces two main challenges: One is that the model may learn degenerate mappings, which we call shortcut problem, and the other is that the attribute representation for an image is not guaranteed to follow the same interpretation on another image, which we call reference ambiguity. To address the shortcut problem, we introduce novel constraints on image pairs and triplets and show their effectiveness both analytically and experimentally. In the case of the reference ambiguity, we formally prove that a model that guarantees an ideal feature separation cannot be built. We validate our findings on several datasets and show that, surprisingly, trained neural networks often do not exhibit the reference ambiguity." ] }
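The PrGAN abstract above hinges on a projection module that turns a generated 3D voxel grid into a 2D view the discriminator can score. A deliberately simplified, non-differentiable sketch of an orthographic silhouette projection follows; real systems use a smooth, rotated, differentiable projection, so this max-reduction is only an assumption-laden illustration of the idea.

```python
import numpy as np

def project_silhouette(voxels, axis=0):
    """Orthographic silhouette of a voxel occupancy grid.

    A pixel in the 2D view is 'on' if any voxel along the viewing
    axis is occupied. Simplified sketch of the projection step used
    conceptually by projection-based GANs such as PrGAN; the real
    module is differentiable and handles arbitrary viewpoints.
    """
    return voxels.max(axis=axis)
```

Scoring such projections against real 2D images is what lets these models be trained without any 3D or viewpoint annotation.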
1811.10519
2901375506
We present a method to learn the 3D surface of objects directly from a collection of images. Previous work achieved this capability by exploiting additional manual annotation, such as object pose, 3D surface templates, temporal continuity of videos, manually selected landmarks, and foreground background masks. In contrast, our method does not make use of any such annotation. Rather, it builds a generative model, a convolutional neural network, which, given a noise vector sample, outputs the 3D surface and texture of an object and a background image. These 3 components combined with an additional random viewpoint vector are then fed to a differential renderer to produce a view of the sampled object and background. Our general principle is that if the output of the renderer, the generated image, is realistic, then its input, the generated 3D and texture, should also be realistic. To achieve realism, the generative model is trained adversarially against a discriminator that tries to distinguish between the output of the renderer and real images from the given data set. Moreover, our generative model can be paired with an encoder and trained as an autoencoder, to automatically extract the 3D shape, texture and pose of the object in an image. Our trained generative model and encoder show promising results both on real and synthetic data, which demonstrate for the first time that fully unsupervised 3D learning from image collections is possible.
Our approach can also be interpreted as disentangling the 3D and the viewpoint factors. Reed et al. @cite_29 solved that task with full supervision using image triplets. They utilised an autoencoder to reconstruct an image from the mixed latent encodings of two other images. Mathieu et al. @cite_18 and Szabó et al. @cite_12 only use image pairs that share an attribute, thus reducing the supervision with the help of GANs. By using only a standard image formation model (projective geometry) and a prior on the viewpoint distribution in the dataset, we demonstrate the disentangling of the 3D shape from the viewpoint and the background for the case of independently sampled input images.
{ "cite_N": [ "@cite_29", "@cite_18", "@cite_12" ], "mid": [ "", "2951392118", "2895729551" ], "abstract": [ "", "We introduce a conditional generative model for learning to disentangle the hidden factors of variation within a set of labeled observations, and separate them into complementary codes. One code summarizes the specified factors of variation associated with the labels. The other summarizes the remaining unspecified variability. During training, the only available source of supervision comes from our ability to distinguish among different observations belonging to the same class. Examples of such observations include images of a set of labeled objects captured at different viewpoints, or recordings of set of speakers dictating multiple phrases. In both instances, the intra-class diversity is the source of the unspecified factors of variation: each object is observed at multiple viewpoints, and each speaker dictates multiple phrases. Learning to disentangle the specified factors from the unspecified ones becomes easier when strong supervision is possible. Suppose that during training, we have access to pairs of images, where each pair shows two different objects captured from the same viewpoint. This source of alignment allows us to solve our task using existing methods. However, labels for the unspecified factors are usually unavailable in realistic scenarios where data acquisition is not strictly controlled. We address the problem of disentanglement in this more general setting by combining deep convolutional autoencoders with a form of adversarial training. Both factors of variation are implicitly captured in the organization of the learned embedding space, and can be used for solving single-image analogies. 
Experimental results on synthetic and real datasets show that the proposed method is capable of generalizing to unseen classes and intra-class variabilities.", "We study the problem of building models that can transfer selected attributes from one image to another without affecting the other attributes. Towards this goal, we develop analysis and a training methodology for autoencoding models, whose encoded features aim to disentangle attributes. These features are explicitly split into two components: one that should represent attributes in common between pairs of images, and another that should represent attributes that change between pairs of images. We show that achieving this objective faces two main challenges: One is that the model may learn degenerate mappings, which we call shortcut problem, and the other is that the attribute representation for an image is not guaranteed to follow the same interpretation on another image, which we call reference ambiguity. To address the shortcut problem, we introduce novel constraints on image pairs and triplets and show their effectiveness both analytically and experimentally. In the case of the reference ambiguity, we formally prove that a model that guarantees an ideal feature separation cannot be built. We validate our findings on several datasets and show that, surprisingly, trained neural networks often do not exhibit the reference ambiguity." ] }
1811.10559
2900493439
We present a filter correlation based model compression approach for deep convolutional neural networks. Our approach iteratively identifies pairs of filters with largest pairwise correlations and discards one of the filters from each such pair. However, instead of discarding one of the filter from such pairs naively, we further optimize the model so that the two filters from each such pair are as highly correlated as possible so that discarding one of the filters from the pairs results in as little information loss as possible. After discarding the filters in each round, we further finetune the model to recover from the potential small loss incurred by the compression. We evaluate our proposed approach using a comprehensive set of experiments and ablation studies. Our compression method yields state-of-the-art FLOPs compression rates on various benchmarks, such as LeNet-5, VGG-16, and ResNet-50,56, which are still achieving excellent predictive performance for tasks such as object detection on benchmark datasets.
Connection pruning is a direct way to introduce sparsity into a CNN model. One approach to CNN compression is to prune the unimportant parameters. However, it is challenging to quantify the importance of a parameter, and several approaches have been proposed to rank it. Optimal Brain Damage @cite_43 and Optimal Brain Surgeon @cite_4 used a second-order Taylor expansion to estimate parameter importance; however, the second-order derivative computations are very costly. @cite_42 used hashing to randomly group the connection weights into hash buckets and then fine-tuned the network. @cite_36 proposed a skip-layer approach for network compression. @cite_47 proposed an iterative approach in which weights whose absolute values fall below a certain threshold are set to zero, and the drop in accuracy is recovered by fine-tuning; this approach is very successful when most of the parameters lie in the fully connected layers. The main limitation of these approaches is that they require special hardware or software for acceleration at run time.
{ "cite_N": [ "@cite_4", "@cite_36", "@cite_42", "@cite_43", "@cite_47" ], "mid": [ "2125389748", "2770042371", "2952432176", "2114766824", "2964299589" ], "abstract": [ "We investigate the use of information from all second order derivatives of the error function to perform network pruning (i.e., removing unimportant weights from a trained network) in order to improve generalization, simplify networks, reduce hardware or storage requirements, increase the speed of further training, and in some cases enable rule extraction. Our method, Optimal Brain Surgeon (OBS), is Significantly better than magnitude-based methods and Optimal Brain Damage [Le Cun, Denker and Solla, 1990], which often remove the wrong weights. OBS permits the pruning of more weights than other methods (for the same error on the training set), and thus yields better generalization on test data. Crucial to OBS is a recursion relation for calculating the inverse Hessian matrix H-1 from training data and structural information of the net. OBS permits a 90 , a 76 , and a 62 reduction in weights over backpropagation with weight decay on three benchmark MONK's problems [, 1991]. Of OBS, Optimal Brain Damage, and magnitude-based methods, only OBS deletes the correct weights from a trained XOR network in every case. Finally, whereas Sejnowski and Rosenberg [1987] used 18,000 weights in their NETtalk network, we used OBS to prune a network to just 1560 weights, yielding better generalization.", "Very deep convolutional neural networks offer excellent recognition results, yet their computational expense limits their impact for many real-world applications. We introduce BlockDrop, an approach that learns to dynamically choose which layers of a deep network to execute during inference so as to best reduce total computation without degrading prediction accuracy. 
Exploiting the robustness of Residual Networks (ResNets) to layer dropping, our framework selects on-the-fly which residual blocks to evaluate for a given novel image. In particular, given a pretrained ResNet, we train a policy network in an associative reinforcement learning setting for the dual reward of utilizing a minimal number of blocks while preserving recognition accuracy. We conduct extensive experiments on CIFAR and ImageNet. The results provide strong quantitative and qualitative evidence that these learned policies not only accelerate inference but also encode meaningful visual information. Built upon a ResNet-101 model, our method achieves a speedup of 20 on average, going as high as 36 for some images, while maintaining the same 76.4 top-1 accuracy on ImageNet.", "As deep nets are increasingly used in applications suited for mobile devices, a fundamental dilemma becomes apparent: the trend in deep learning is to grow models to absorb ever-increasing data set sizes; however mobile devices are designed with very little memory and cannot store such large models. We present a novel network architecture, HashedNets, that exploits inherent redundancy in neural networks to achieve drastic reductions in model sizes. HashedNets uses a low-cost hash function to randomly group connection weights into hash buckets, and all connections within the same hash bucket share a single parameter value. These parameters are tuned to adjust to the HashedNets weight sharing architecture with standard backprop during training. Our hashing procedure introduces no additional memory overhead, and we demonstrate on several benchmark data sets that HashedNets shrink the storage requirements of neural networks substantially while mostly preserving generalization performance.", "We have used information-theoretic ideas to derive a class of practical and nearly optimal schemes for adapting the size of a neural network. 
By removing unimportant weights from a network, several improvements can be expected: better generalization, fewer training examples required, and improved speed of learning and or classification. The basic idea is to use second-derivative information to make a tradeoff between network complexity and training set error. Experiments confirm the usefulness of the methods on a real-world application.", "Abstract: Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. To address this limitation, we introduce \"deep compression\", a three stage pipeline: pruning, trained quantization and Huffman coding, that work together to reduce the storage requirement of neural networks by 35x to 49x without affecting their accuracy. Our method first prunes the network by learning only the important connections. Next, we quantize the weights to enforce weight sharing, finally, we apply Huffman coding. After the first two steps we retrain the network to fine tune the remaining connections and the quantized centroids. Pruning, reduces the number of connections by 9x to 13x; Quantization then reduces the number of bits that represent each connection from 32 to 5. On the ImageNet dataset, our method reduced the storage required by AlexNet by 35x, from 240MB to 6.9MB, without loss of accuracy. Our method reduced the size of VGG-16 by 49x from 552MB to 11.3MB, again with no loss of accuracy. This allows fitting the model into on-chip SRAM cache rather than off-chip DRAM memory. Our compression method also facilitates the use of complex neural networks in mobile applications where application size and download bandwidth are constrained. Benchmarked on CPU, GPU and mobile GPU, compressed network has 3x to 4x layerwise speedup and 3x to 7x better energy efficiency." ] }
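The iterative threshold-based scheme attributed to @cite_47 above can be made concrete with a minimal NumPy sketch. The function name `magnitude_prune` and the single-shot (non-iterative, no fine-tuning) formulation are illustrative simplifications, not the authors' implementation:

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of the weights.

    In the full iterative scheme this step alternates with fine-tuning,
    which recovers the small accuracy drop after each pruning round.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value serves as the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned
```

Note that the resulting sparsity is unstructured, which is exactly why such methods need special sparse-matrix support to yield an actual speedup.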
1811.10559
2900493439
We present a filter correlation based model compression approach for deep convolutional neural networks. Our approach iteratively identifies pairs of filters with largest pairwise correlations and discards one of the filters from each such pair. However, instead of discarding one of the filter from such pairs naively, we further optimize the model so that the two filters from each such pair are as highly correlated as possible so that discarding one of the filters from the pairs results in as little information loss as possible. After discarding the filters in each round, we further finetune the model to recover from the potential small loss incurred by the compression. We evaluate our proposed approach using a comprehensive set of experiments and ablation studies. Our compression method yields state-of-the-art FLOPs compression rates on various benchmarks, such as LeNet-5, VGG-16, and ResNet-50,56, which are still achieving excellent predictive performance for tasks such as object detection on benchmark datasets.
Filter pruning approaches (which are also the focus of our work) do not need any special hardware or software for acceleration. The basic idea in filter pruning is to estimate the importance of the filters and discard the unimportant ones; after each pruning step, re-training is needed to recover from the accuracy drop. @cite_40 evaluated filter importance on a subset of the training data based on the output feature maps. @cite_25 used a greedy approach for pruning, evaluating filter importance by checking the model accuracy after pruning each filter. @cite_31 @cite_15 used a similar approach with a different pruning metric. @cite_35 @cite_13 @cite_28 used low-rank approximation. @cite_39 used a data-driven approach for filter ranking and pruning. @cite_48 performed channel-level pruning based on scaling factors learned during training. Recently, group sparsity has also become a popular basis for filter pruning: @cite_49 @cite_37 @cite_7 @cite_23 @cite_11 explored filter pruning based on group sparsity.
{ "cite_N": [ "@cite_35", "@cite_37", "@cite_7", "@cite_15", "@cite_28", "@cite_48", "@cite_39", "@cite_40", "@cite_49", "@cite_23", "@cite_31", "@cite_13", "@cite_25", "@cite_11" ], "mid": [ "2167215970", "566555209", "2513419314", "", "2950967261", "", "2963118768", "2495425901", "", "2520760693", "", "1902041153", "2619444510", "2951134251" ], "abstract": [ "We present techniques for speeding up the test-time evaluation of large convolutional networks, designed for object recognition tasks. These models deliver impressive accuracy, but each image evaluation requires millions of floating point operations, making their deployment on smartphones and Internet-scale clusters problematic. The computation is dominated by the convolution operations in the lower layers of the model. We exploit the redundancy present within the convolutional filters to derive approximations that significantly reduce the required computation. Using large state-of-the-art models, we demonstrate speedups of convolutional layers on both CPU and GPU by a factor of 2 x, while keeping the accuracy within 1 of the original model.", "We revisit the idea of brain damage, i.e. the pruning of the coefficients of a neural network, and suggest how brain damage can be modified and used to speedup convolutional layers in ConvNets. The approach uses the fact that many efficient implementations reduce generalized convolutions to matrix multiplications. The suggested brain damage process prunes the convolutional kernel tensor in a group-wise fashion. After such pruning, convolutions can be reduced to multiplications of thinned dense matrices, which leads to speedup. We investigate different ways to add group-wise prunning to the learning process, and show that severalfold speedups of convolutional layers can be attained using group-sparsity regularizers. 
Our approach can adjust the shapes of the receptive fields in the convolutional layers, and even prune excessive feature maps from ConvNets, all in data-driven way.", "High demand for computation resources severely hinders deployment of large-scale Deep Neural Networks (DNN) in resource constrained devices. In this work, we propose a Structured Sparsity Learning (SSL) method to regularize the structures (i.e., filters, channels, filter shapes, and layer depth) of DNNs. SSL can: (1) learn a compact structure from a bigger DNN to reduce computation cost; (2) obtain a hardware-friendly structured sparsity of DNN to efficiently accelerate the DNNs evaluation. Experimental results show that SSL achieves on average 5.1x and 3.1x speedups of convolutional layer computation of AlexNet against CPU and GPU, respectively, with off-the-shelf libraries. These speedups are about twice speedups of non-structured sparsity; (3) regularize the DNN structure to improve classification accuracy. The results show that for CIFAR-10, regularization on layer depth can reduce 20 layers of a Deep Residual Network (ResNet) to 18 layers while improve the accuracy from 91.25 to 92.60 , which is still slightly higher than that of original ResNet with 32 layers. For AlexNet, structure regularization by SSL also reduces the error by around 1 . Open source code is in this https URL", "", "The focus of this paper is speeding up the evaluation of convolutional neural networks. While delivering impressive results across a range of computer vision and machine learning tasks, these networks are computationally demanding, limiting their deployability. Convolutional layers generally consume the bulk of the processing time, and so in this work we present two simple schemes for drastically speeding up these layers. This is achieved by exploiting cross-channel or filter redundancy to construct a low rank basis of filters that are rank-1 in the spatial domain. 
Our methods are architecture agnostic, and can be easily applied to existing CPU and GPU convolutional frameworks for tuneable speedup performance. We demonstrate this with a real world network designed for scene text character recognition, showing a possible 2.5x speedup with no loss in accuracy, and 4.5x speedup with less than 1 drop in accuracy, still achieving state-of-the-art on standard benchmarks.", "", "Convolutional neural networks (CNN) have achieved impressive performance on the wide variety of tasks (classification, detection, etc.) across multiple domains at the cost of high computational and memory requirements. Thus, leveraging CNNs for real-time applications necessitates model compression approaches that not only reduce the total number of parameters but reduce the overall computation as well. In this work, we present a stability-based approach for filter-level pruning of CNNs. We evaluate our proposed approach on different architectures (LeNet, VGG-16, ResNet, and Faster RCNN) and datasets and demonstrate its generalizability through extensive experiments. Moreover, our compressed models can be used at run-time without requiring any special libraries or hardware. Our model compression method reduces the number of FLOPS by an impressive factor of 6.03X and GPU memory footprint by more than 17X, significantly outperforming other state-of-the-art filter pruning methods.", "State-of-the-art neural networks are getting deeper and wider. While their performance increases with the increasing number of layers and neurons, it is crucial to design an efficient deep architecture in order to reduce computational and memory costs. Designing an efficient neural network, however, is labor intensive requiring many experiments, and fine-tunings. In this paper, we introduce network trimming which iteratively optimizes the network by pruning unimportant neurons based on analysis of their outputs on a large dataset. 
Our algorithm is inspired by an observation that the outputs of a significant portion of neurons in a large network are mostly zero, regardless of what inputs the network received. These zero activation neurons are redundant, and can be removed without affecting the overall accuracy of the network. After pruning the zero activation neurons, we retrain the network using the weights before pruning as initialization. We alternate the pruning and retraining to further reduce zero activations in a network. Our experiments on the LeNet and VGG-16 show that we can achieve high compression ratio of parameters without losing or even achieving higher accuracy than the original network.", "", "To attain a favorable performance on large-scale datasets, convolutional neural networks (CNNs) are usually designed to have very high capacity involving millions of parameters. In this work, we aim at optimizing the number of neurons in a network, thus the number of parameters. We show that, by incorporating sparse constraints into the objective function, it is possible to decimate the number of neurons during the training stage. As a result, the number of parameters and the memory footprint of the neural network are also reduced, which is also desirable at the test time. We evaluated our method on several well-known CNN structures including AlexNet, and VGG over different datasets including ImageNet. Extensive experimental results demonstrate that our method leads to compact networks. Taking first fully connected layer as an example, our compact CNN contains only (30 , ) of the original neurons without any degradation of the top-1 classification accuracy.", "", "This paper aims to accelerate the test-time computation of deep convolutional neural networks (CNNs). Unlike existing methods that are designed for approximating linear filters or linear responses, our method takes the nonlinear units into account. 
We minimize the reconstruction error of the nonlinear responses, subject to a low-rank constraint which helps to reduce the complexity of filters. We develop an effective solution to this constrained nonlinear optimization problem. An algorithm is also presented for reducing the accumulated error when multiple layers are approximated. A whole-model speedup ratio of 4× is demonstrated on a large network trained for ImageNet, while the top-5 error rate is only increased by 0.9 . Our accelerated model has a comparably fast speed as the “AlexNet” [11], but is 4.7 more accurate.", "Convolutional neural networks (CNNs) have state-of-the-art performance on many problems in machine vision. However, networks with superior performance often have millions of weights so that it is difficult or impossible to use CNNs on computationally limited devices or to humanly interpret them. A myriad of CNN compression approaches have been proposed and they involve pruning and compressing the weights and filters. In this article, we introduce a greedy structural compression scheme that prunes filters in a trained CNN. We define a filter importance index equal to the classification accuracy reduction (CAR) of the network after pruning that filter (similarly defined as RAR for regression). We then iteratively prune filters based on the CAR index. This algorithm achieves substantially higher classification accuracy in AlexNet compared to other structural compression schemes that prune filters. Pruning half of the filters in the first or second layer of AlexNet, our CAR algorithm achieves 26 and 20 higher classification accuracies respectively, compared to the best benchmark filter pruning scheme. Our CAR algorithm, combined with further weight pruning and compressing, reduces the size of first or second convolutional layer in AlexNet by a factor of 42, while achieving close to original classification accuracy through retraining (or fine-tuning) network. 
Finally, we demonstrate the interpretability of CAR-compressed CNNs by showing that our algorithm prunes filters with visually redundant functionalities. In fact, out of top 20 CAR-pruned filters in AlexNet, 17 of them in the first layer and 14 of them in the second layer are color-selective filters as opposed to shape-selective filters. To our knowledge, this is the first reported result on the connection between compression and interpretability of CNNs.", "Nowadays, the number of layers and of neurons in each layer of a deep network are typically set manually. While very deep and wide networks have proven effective in general, they come at a high memory and computation cost, thus making them impractical for constrained platforms. These networks, however, are known to have many redundant parameters, and could thus, in principle, be replaced by more compact architectures. In this paper, we introduce an approach to automatically determining the number of neurons in each layer of a deep network during learning. To this end, we propose to make use of structured sparsity during learning. More precisely, we use a group sparsity regularizer on the parameters of the network, where each group is defined to act on a single neuron. Starting from an overcomplete network, we show that our approach can reduce the number of parameters by up to 80 while retaining or even improving the network accuracy." ] }
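The correlation-based selection step that the paper's abstract describes (identify the pair of filters with the largest pairwise correlation and discard one of them) can be sketched in a few lines of NumPy. The function name `prune_most_correlated_filter` and the keep-the-lower-index tie-breaking rule are assumptions for illustration; the paper additionally optimizes the pair to be more correlated before discarding, which this sketch omits:

```python
import numpy as np

def prune_most_correlated_filter(filters):
    """Find the pair of conv filters (n_filters, k, k) with the largest
    absolute pairwise correlation and drop one filter of that pair.

    Returns the pruned filter bank and the index that was removed.
    """
    n = filters.shape[0]
    flat = filters.reshape(n, -1)
    # Pearson correlation between flattened filters
    corr = np.corrcoef(flat)
    np.fill_diagonal(corr, 0.0)  # ignore self-correlation
    i, j = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    drop = max(i, j)  # arbitrarily keep the lower-indexed filter
    kept = np.delete(filters, drop, axis=0)
    return kept, drop
```

In the full method this step is applied iteratively, with fine-tuning after each round to recover the small accuracy loss.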
1811.10559
2900493439
We present a filter correlation based model compression approach for deep convolutional neural networks. Our approach iteratively identifies pairs of filters with largest pairwise correlations and discards one of the filters from each such pair. However, instead of discarding one of the filter from such pairs naively, we further optimize the model so that the two filters from each such pair are as highly correlated as possible so that discarding one of the filters from the pairs results in as little information loss as possible. After discarding the filters in each round, we further finetune the model to recover from the potential small loss incurred by the compression. We evaluate our proposed approach using a comprehensive set of experiments and ablation studies. Our compression method yields state-of-the-art FLOPs compression rates on various benchmarks, such as LeNet-5, VGG-16, and ResNet-50,56, which are still achieving excellent predictive performance for tasks such as object detection on benchmark datasets.
Weight-quantization-based approaches have also been used in prior work on model compression. @cite_47 @cite_46 @cite_21 compressed CNNs by combining pruning, quantization, and Huffman coding. @cite_45 performed network compression based on float-value quantization for model storage. Binarization @cite_14 can be used for network compression, where each float value is quantized to a binary value. Bayesian methods @cite_27 have also been used for network quantization. Quantization methods require special hardware support to realize the benefits of the compression.
{ "cite_N": [ "@cite_47", "@cite_14", "@cite_21", "@cite_27", "@cite_45", "@cite_46" ], "mid": [ "2964299589", "", "2799197246", "2963243833", "", "2951136586" ], "abstract": [ "Abstract: Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. To address this limitation, we introduce \"deep compression\", a three stage pipeline: pruning, trained quantization and Huffman coding, that work together to reduce the storage requirement of neural networks by 35x to 49x without affecting their accuracy. Our method first prunes the network by learning only the important connections. Next, we quantize the weights to enforce weight sharing, finally, we apply Huffman coding. After the first two steps we retrain the network to fine tune the remaining connections and the quantized centroids. Pruning, reduces the number of connections by 9x to 13x; Quantization then reduces the number of bits that represent each connection from 32 to 5. On the ImageNet dataset, our method reduced the storage required by AlexNet by 35x, from 240MB to 6.9MB, without loss of accuracy. Our method reduced the size of VGG-16 by 49x from 552MB to 11.3MB, again with no loss of accuracy. This allows fitting the model into on-chip SRAM cache rather than off-chip DRAM memory. Our compression method also facilitates the use of complex neural networks in mobile applications where application size and download bandwidth are constrained. Benchmarked on CPU, GPU and mobile GPU, compressed network has 3x to 4x layerwise speedup and 3x to 7x better energy efficiency.", "", "Deep neural networks enable state-of-the-art accuracy on visual recognition tasks such as image classification and object detection. 
However, modern deep networks contain millions of learned weights; a more efficient utilization of computation resources would assist in a variety of deployment scenarios, from embedded platforms with resource constraints to computing clusters running ensembles of networks. In this paper, we combine network pruning and weight quantization in a single learning framework that performs pruning and quantization jointly, and in parallel with fine-tuning. This allows us to take advantage of the complementary nature of pruning and quantization and to recover from premature pruning errors, which is not possible with current two-stage approaches. Our proposed CLIP-Q method (Compression Learning by In-Parallel Pruning-Quantization) compresses AlexNet by 51-fold, GoogLeNet by 10-fold, and ResNet-50 by 15-fold, while preserving the uncompressed network accuracies on ImageNet.", "Compression and computational efficiency in deep learning have become a problem of great significance. In this work, we argue that the most principled and effective way to attack this problem is by adopting a Bayesian point of view, where through sparsity inducing priors we prune large parts of the network. We introduce two novelties in this paper: 1) we use hierarchical priors to prune nodes instead of individual weights, and 2) we use the posterior uncertainties to determine the optimal fixed point precision to encode the weights. Both factors significantly contribute to achieving the state of the art in terms of compression rates, while still staying competitive with methods designed to optimize for speed or energy efficiency.", "", "We propose a novel Convolutional Neural Network (CNN) compression algorithm based on coreset representations of filters. We exploit the redundancies extant in the space of CNN weights and neuronal activations (across samples) in order to obtain compression. 
Our method requires no retraining, is easy to implement, and obtains state-of-the-art compression performance across a wide variety of CNN architectures. Coupled with quantization and Huffman coding, we create networks that provide AlexNet-like accuracy, with a memory footprint that is @math smaller than the original AlexNet, while also introducing significant reductions in inference time as well. Additionally these compressed networks when fine-tuned, successfully generalize to other domains as well." ] }
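The weight-sharing quantization stage used in pipelines like deep compression @cite_47 can be illustrated with a toy k-means sketch: every weight is replaced by its cluster centroid, so only a handful of distinct float values (plus small integer indices) need to be stored. The function name `quantize_weights` and the linear centroid initialisation are assumptions for illustration, not any paper's exact procedure:

```python
import numpy as np

def quantize_weights(weights, n_clusters=4, n_iters=20):
    """1-D k-means weight sharing over a weight tensor.

    Returns the quantized tensor (same shape) and the per-weight
    cluster assignments.
    """
    w = weights.ravel()
    # initialise centroids linearly over the weight range
    centroids = np.linspace(w.min(), w.max(), n_clusters)
    for _ in range(n_iters):
        assign = np.argmin(np.abs(w[:, None] - centroids[None, :]), axis=1)
        for k in range(n_clusters):
            if np.any(assign == k):
                centroids[k] = w[assign == k].mean()
    assign = np.argmin(np.abs(w[:, None] - centroids[None, :]), axis=1)
    return centroids[assign].reshape(weights.shape), assign
```

With `n_clusters=4` each weight index fits in 2 bits, which is where the storage saving comes from; turning that into a run-time speedup is what requires special hardware support.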
1811.10435
2901612813
Recently, many researchers have been focusing on the definition of neural networks for graphs. The basic component for many of these approaches remains the graph convolution idea proposed almost a decade ago. In this paper, we extend this basic component, following an intuition derived from the well-known convolutional filters over multi-dimensional tensors. In particular, we derive a simple, efficient and effective way to introduce a hyper-parameter on graph convolutions that influences the filter size, i.e. its receptive field over the considered graph. We show with experimental results on real-world graph datasets that the proposed graph convolutional filter improves the predictive performance of Deep Graph Convolutional Networks.
Propagation kernels (PK) @cite_2 follow a different idea, inspired by the diffusion process in graph node kernels (i.e., kernels between nodes of a single graph): node label information is propagated through the edges of a graph. Then, for each node, a distribution over the propagated labels is computed. Finally, the kernel between two graphs compares these distributions over all the nodes of the two graphs.
{ "cite_N": [ "@cite_2" ], "mid": [ "2185303849" ], "abstract": [ "Learning from complex data is becoming increasingly important, and graph kernels have recently evolved into a rapidly developing branch of learning on structured data. However, previously proposed kernels rely on having discrete node label information. In this paper, we explore the power of continuous node-level features for propagation-based graph kernels. Specifically, propagation kernels exploit node label distributions from propagation schemes like label propagation, which naturally enables the construction of graph kernels for partially labeled graphs. In order to efficiently extract graph features from continuous node label distributions, and in general from continuous vector-valued node attributes, we utilize randomized techniques, which easily allow for deriving similarity measures based on propagated information. We show that propagation kernels utilizing locality-sensitive hashing reduce the runtime of existing graph kernels by several orders of magnitude. We evaluate the performance of various propagation kernels on real-world bioinformatics and image benchmark datasets." ] }
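The propagate-then-compare idea behind propagation kernels can be sketched as follows. This is a heavily simplified toy version: it bins each node by the argmax of its propagated label distribution and compares histogram counts with a dot product, whereas the actual method uses locality-sensitive hashing of the full distributions; the function name `propagation_kernel` is an assumption:

```python
import numpy as np

def propagation_kernel(A1, labels1, A2, labels2, n_labels, t_max=2):
    """Toy propagation kernel between two labeled graphs.

    A1, A2: adjacency matrices; labels1, labels2: integer node labels.
    Propagates one-hot label distributions for t_max diffusion steps and
    sums, over all steps, the dot product of the two graphs' label
    histograms.
    """
    def propagate(A, labels):
        P = np.eye(n_labels)[labels]             # one-hot label distributions
        D = A.sum(axis=1, keepdims=True).clip(min=1)
        dists = [P]
        for _ in range(t_max):
            P = (A @ P) / D                      # row-normalised diffusion step
            dists.append(P)
        return dists

    k = 0.0
    for P1, P2 in zip(propagate(A1, labels1), propagate(A2, labels2)):
        h1 = np.bincount(P1.argmax(axis=1), minlength=n_labels)
        h2 = np.bincount(P2.argmax(axis=1), minlength=n_labels)
        k += float(h1 @ h2)
    return k
```

Because the propagation scheme only needs an initial distribution per node, the same construction extends naturally to partially labeled graphs, which is one of the selling points of the method.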
1811.10435
2901612813
Recently, many researchers have been focusing on the definition of neural networks for graphs. The basic component for many of these approaches remains the graph convolution idea proposed almost a decade ago. In this paper, we extend this basic component, following an intuition derived from the well-known convolutional filters over multi-dimensional tensors. In particular, we derive a simple, efficient and effective way to introduce a hyper-parameter on graph convolutions that influences the filter size, i.e. its receptive field over the considered graph. We show with experimental results on real-world graph datasets that the proposed graph convolutional filter improves the predictive performance of Deep Graph Convolutional Networks.
While graph kernels exhibit state-of-the-art performance on many graph datasets, their main limitation is that they define a fixed, task-independent representation, which can in principle limit the predictive performance of the method. Deep graph kernels (DGK) @cite_14 propose an approach to alleviate this problem. Let us fix a base kernel and its explicit representation @math . Then a deep graph kernel can be defined as @math , where @math is a matrix of parameters that has to be learned, possibly including target information.
{ "cite_N": [ "@cite_14" ], "mid": [ "2008857988" ], "abstract": [ "In this paper, we present Deep Graph Kernels, a unified framework to learn latent representations of sub-structures for graphs, inspired by latest advancements in language modeling and deep learning. Our framework leverages the dependency information between sub-structures by learning their latent representations. We demonstrate instances of our framework on three popular graph kernels, namely Graphlet kernels, Weisfeiler-Lehman subtree kernels, and Shortest-Path graph kernels. Our experiments on several benchmark datasets show that Deep Graph Kernels achieve significant improvements in classification accuracy over state-of-the-art graph kernels." ] }
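A minimal sketch of the DGK form: assuming the learned matrix enters as k(g1, g2) = φ(g1)ᵀ M φ(g2) with M = WᵀW (which keeps the kernel positive semidefinite), the feature vectors and W below are made-up stand-ins for a base kernel's explicit representation and the learned parameters.

```python
import numpy as np

def deep_graph_kernel(phi_g1, phi_g2, M):
    """DGK-style kernel: k(g1, g2) = phi(g1)^T M phi(g2), with M learned.
    Taking M = W^T W keeps the kernel positive semidefinite."""
    return float(phi_g1 @ M @ phi_g2)

# hypothetical base-kernel features (e.g. graphlet or WL subtree counts)
phi1 = np.array([3., 1., 0., 2.])
phi2 = np.array([2., 1., 1., 2.])
W = np.array([[1., 0., 0., 1.],   # stand-in for learned parameters
              [0., 1., 1., 0.]])
M = W.T @ W
k12 = deep_graph_kernel(phi1, phi2, M)
k11 = deep_graph_kernel(phi1, phi1, M)
print(k12, k11)  # 22.0 26.0
```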
1811.10427
2900866154
Despite the growing interest in generative adversarial networks (GANs), training GANs remains a challenging problem, both from a theoretical and a practical standpoint. To address this challenge, in this paper, we propose a novel way to exploit the unique geometry of the real data, especially the manifold information. More specifically, we design a method to regularize GAN training by adding an additional regularization term referred to as manifold regularizer. The manifold regularizer forces the generator to respect the unique geometry of the real data manifold and generate high quality data. Furthermore, we theoretically prove that the addition of this regularization term in any class of GANs including DCGAN and Wasserstein GAN leads to improved performance in terms of generalization, existence of equilibrium, and stability. Preliminary experiments show that the proposed manifold regularization helps in avoiding mode collapse and leads to stable training.
In the literature, little theory exists that explains the unstable behaviour of GAN training; @cite_1 stands out as one of the most successful works. The authors provided important insights into mode collapse and instability in GAN training, showing that these issues arise when the supports of the generated distribution and the true distribution are disjoint. The authors of @cite_9 , on the other hand, explored questions relating to the sample complexity and expressiveness of the GAN architecture and their relation to the existence of an equilibrium. Given that an equilibrium exists, the convergence of GANs trained with gradient-descent updates was studied in @cite_24 .
{ "cite_N": [ "@cite_24", "@cite_9", "@cite_1" ], "mid": [ "2964126461", "2952745707", "2581485081" ], "abstract": [ "Despite the growing prominence of generative adversarial networks (GANs), optimization in GANs is still a poorly understood topic. In this paper, we analyze the \"gradient descent\" form of GAN optimization, i.e., the natural setting where we simultaneously take small gradient steps in both generator and discriminator parameters. We show that even though GAN optimization does not correspond to a convex-concave game (even for simple parameterizations), under proper conditions, equilibrium points of this optimization procedure are still locally asymptotically stable for the traditional GAN formulation. On the other hand, we show that the recently proposed Wasserstein GAN can have non-convergent limit cycles near equilibrium. Motivated by this stability analysis, we propose an additional regularization term for gradient descent GAN updates, which is able to guarantee local stability for both the WGAN and the traditional GAN, and also shows practical promise in speeding up convergence and addressing mode collapse.", "We show that training of generative adversarial network (GAN) may not have good generalization properties; e.g., training may appear successful but the trained distribution may be far from the target distribution in standard metrics. However, generalization does occur for a weaker metric called neural net distance. It is also shown that an approximate pure equilibrium exists in the discriminator-generator game for a special class of generators with natural training objectives when generator capacity and training set sizes are moderate. This existence of equilibrium inspires MIX+GAN protocol, which can be combined with any existing GAN training, and empirically shown to improve some of them.", "The goal of this paper is not to introduce a single algorithm or method, but to make theoretical steps towards fully understanding the training dynamics of generative adversarial networks.
In order to substantiate our theoretical analysis, we perform targeted experiments to verify our assumptions, illustrate our claims, and quantify the phenomena. This paper is divided into three sections. The first section introduces the problem at hand. The second section is dedicated to studying and proving rigorously the problems including instability and saturation that arise when training generative adversarial networks. The third section examines a practical and theoretically grounded direction towards solving these problems, while introducing new tools to study them." ] }
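The disjoint-support argument of @cite_1 can be illustrated numerically: with separated real and fake points and a logistic discriminator, training the discriminator drives the generator's signal under the original log(1 - D(G(z))) loss toward zero. This 1-D toy is only an illustration of the saturation effect, not any paper's actual setup.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

x_real, x_fake = 2.0, -2.0          # disjoint "supports"
w, b = 0.1, 0.0                     # logistic discriminator D(x) = sigmoid(w*x + b)
lr = 0.5

def gen_grad_mag(w, b):
    # |d/dx log(1 - D(x))| at the fake point = sigmoid(w*x + b) * |w|
    return sigmoid(w * x_fake + b) * abs(w)

g0 = gen_grad_mag(w, b)
for _ in range(500):                # train D to separate real from fake
    dr = sigmoid(w * x_real + b)    # want -> 1
    df = sigmoid(w * x_fake + b)    # want -> 0
    # gradient ascent on log D(x_real) + log(1 - D(x_fake))
    w += lr * ((1 - dr) * x_real - df * x_fake)
    b += lr * ((1 - dr) - df)
g1 = gen_grad_mag(w, b)
print(g1 < g0)  # True: the generator's training signal has (nearly) vanished
```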
1906.09521
2952611269
We consider adaptations of the Mumford-Shah functional to graphs. These are based on discretizations of nonlocal approximations to the Mumford-Shah functional. Motivated by applications in machine learning we study the random geometric graphs associated to random samples of a measure. We establish the conditions on the graph constructions under which the minimizers of graph Mumford-Shah functionals converge to a minimizer of a continuum Mumford-Shah functional. Furthermore we explicitly identify the limiting functional. Moreover we describe an efficient algorithm for computing the approximate minimizers of the graph Mumford-Shah functional.
Regularizing and denoising functions given on graphs has been studied in a variety of contexts in machine learning. Here we focus on regularizations which still allow for jumps in the regularized function. There are two lines of research which have led to such functionals. One, as is the case with our approach, draws inspiration from image processing, where variational approaches have been widely used for image denoising and segmentation. Particularly relevant in the context of imaging are the works of Chan and Vese @cite_33 @cite_11 , who proposed a piecewise-constant simplification of the Mumford-Shah functional and showed its effectiveness in image segmentation, and Rudin, Osher, and Fatemi @cite_22 , who proposed a TV (total variation) based regularization for image denoising. In analogy with Chan and Vese, Hu, Sunu, and Bertozzi @cite_35 formulated the piecewise-constant Mumford-Shah functional on graphs. They also developed an efficient numerical approach to compute the minimizers and used it to study a (multi-class) classification problem. A ROF functional on graphs, with @math fidelity term, was studied by García Trillos and Murray @cite_29 .
{ "cite_N": [ "@cite_35", "@cite_22", "@cite_33", "@cite_29", "@cite_11" ], "mid": [ "1580188278", "2103559027", "2116040950", "2963466282", "" ], "abstract": [ "We focus on the multi-class segmentation problem using the piecewise constant Mumford-Shah model in a graph setting. After formulating a graph version of the Mumford-Shah energy, we propose an efficient algorithm called the MBO scheme using threshold dynamics. Theoretical analysis is developed and a Lyapunov functional is proven to decrease as the algorithm proceeds. Furthermore, to reduce the computational cost for large datasets, we incorporate the Nyström extension method which efficiently approximates eigenvectors of the graph Laplacian based on a small portion of the weight matrix. Finally, we implement the proposed method on the problem of chemical plume detection in hyper-spectral video data.", "A constrained optimization type of numerical algorithm for removing noise from images is presented. The total variation of the image is minimized subject to constraints involving the statistics of the noise. The constraints are imposed using Lagrange multipliers. The solution is obtained using the gradient-projection method. This amounts to solving a time dependent partial differential equation on a manifold determined by the constraints. As t → ∞ the solution converges to a steady state which is the denoised image. The numerical algorithm is simple and relatively fast. The results appear to be state-of-the-art for very noisy images. The method is noninvasive, yielding sharp edges in the image.
The technique could be interpreted as a first step of moving each level set of the image normal to itself with velocity equal to the curvature of the level set divided by the magnitude of the gradient of the image, and a second step which projects the image back onto the constraint set.", "We propose a new model for active contours to detect objects in a given image, based on techniques of curve evolution, Mumford-Shah (1989) functional for segmentation and level sets. Our model can detect objects whose boundaries are not necessarily defined by the gradient. We minimize an energy which can be seen as a particular case of the minimal partition problem. In the level set formulation, the problem becomes a \"mean-curvature flow\"-like evolving the active contour, which will stop on the desired boundary. However, the stopping term does not depend on the gradient of the image, as in the classical active contour models, but is instead related to a particular segmentation of the image. We give a numerical algorithm using finite differences. Finally, we present various experimental results and in particular some examples for which the classical snakes methods based on the gradient are not applicable. Also, the initial curve can be anywhere in the image, and interior contours are automatically detected.", "This work considers the problem of binary classification: given training data x 1 , . . ., x n from a certain population, together with associated labels y 1 ,. . ., y n ∈ 0,1 , determine the best label for an element x not among the training data. More specifically, this work considers a variant of the regularized empirical risk functional which is defined intrinsically to the observed data and does not depend on the underlying population. Tools from modern analysis are used to obtain a concise proof of asymptotic consistency as regularization parameters are taken to zero at rates related to the size of the sample. 
These analytical tools give a new framework for understanding overfitting and underfitting, and rigorously connect the notion of overfitting with a loss of compactness.", "" ] }
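The ROF-on-graphs idea mentioned above can be sketched by minimizing a fidelity term plus a (smoothed) graph total variation with plain gradient descent. Real solvers for graph ROF functionals are more sophisticated; treat this as a toy under the stated smoothing assumption.

```python
import numpy as np

def graph_rof(f, edges, lam=0.5, eps=1e-2, lr=0.02, iters=300):
    """Denoise a node signal f by gradient descent on a smoothed graph ROF energy:
    E(u) = 0.5 ||u - f||^2 + lam * sum_edges sqrt((u_i - u_j)^2 + eps)."""
    u = f.copy()
    for _ in range(iters):
        g = u - f                           # gradient of the fidelity term
        for i, j in edges:
            d = u[i] - u[j]
            gd = d / np.sqrt(d * d + eps)   # derivative of the smoothed |.|
            g[i] += lam * gd
            g[j] -= lam * gd
        u -= lr * g
    return u

def energy(u, f, edges, lam=0.5, eps=1e-2):
    tv = sum(np.sqrt((u[i] - u[j]) ** 2 + eps) for i, j in edges)
    return 0.5 * np.sum((u - f) ** 2) + lam * tv

# noisy step signal on a chain graph
f = np.array([0.1, -0.2, 0.15, 1.1, 0.9, 1.05])
edges = [(i, i + 1) for i in range(5)]
u = graph_rof(f, edges)
print(energy(u, f, edges) < energy(f, f, edges))  # True: descent lowered the energy
```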
1906.09521
2952611269
We consider adaptations of the Mumford-Shah functional to graphs. These are based on discretizations of nonlocal approximations to the Mumford-Shah functional. Motivated by applications in machine learning we study the random geometric graphs associated to random samples of a measure. We establish the conditions on the graph constructions under which the minimizers of graph Mumford-Shah functionals converge to a minimizer of a continuum Mumford-Shah functional. Furthermore we explicitly identify the limiting functional. Moreover we describe an efficient algorithm for computing the approximate minimizers of the graph Mumford-Shah functional.
TV-based regularizations have also been developed in the statistics community. Mammen and van de Geer @cite_4 considered them in the setting of nonparametric regression and showed that TV regularization provides an estimator that achieves the optimal minimax recovery rate in one dimension over noisy samples of functions in the unit ball with respect to the BV norm. TV-based regularizations in higher dimensions have been considered by Tibshirani, Saunders, Rosset, Zhu, and Knight @cite_14 , who call the functional the fused lasso. Hütter and Rigollet @cite_23 show that, up to logarithms, in dimension @math , TV regularization on grids achieves the optimal minimax rate over the unit ball with respect to the BV norm. Recently, Padilla, Sharpnack, Chen, and Witten @cite_36 showed for random geometric graphs and for kNN graphs that, up to logarithms, in dimension @math , TV regularization again achieves the optimal minimax rate.
{ "cite_N": [ "@cite_36", "@cite_14", "@cite_4", "@cite_23" ], "mid": [ "2887629087", "2140514146", "2072081687", "2962732903" ], "abstract": [ "The fused lasso, also known as total-variation denoising, is a locally-adaptive function estimator over a regular grid of design points. In this paper, we extend the fused lasso to settings in which the points do not occur on a regular grid, leading to a new approach for non-parametric regression. This approach, which we call the @math -nearest neighbors ( @math -NN) fused lasso, involves (i) computing the @math -NN graph of the design points; and (ii) performing the fused lasso over this @math -NN graph. We show that this procedure has a number of theoretical advantages over competing approaches: specifically, it inherits local adaptivity from its connection to the fused lasso, and it inherits manifold adaptivity from its connection to the @math -NN approach. We show that excellent results are obtained in a simulation study and on an application to flu data. For completeness, we also study an estimator that makes use of an @math -graph rather than a @math -NN graph, and contrast this with the @math -NN fused lasso.", "Summary. The lasso penalizes a least squares regression by the sum of the absolute values (L1-norm) of the coefficients. The form of this penalty encourages sparse solutions (with many coefficients equal to 0). We propose the ‘fused lasso’, a generalization that is designed for problems with features that can be ordered in some meaningful way. The fused lasso penalizes the L1-norm of both the coefficients and their successive differences. Thus it encourages sparsity of the coefficients and also sparsity of their differences—i.e. local constancy of the coefficient profile. The fused lasso is especially useful when the number of features p is much greater than N, the sample size. 
The technique is also extended to the ‘hinge’ loss function that underlies the support vector classifier. We illustrate the methods on examples from protein mass spectroscopy and gene expression data.", "In this paper least squares penalized regression estimates with total variation penalties are considered. It is shown that these estimators are least squares splines with locally data-adaptive placed knot points. Algorithms and asymptotic properties are discussed.", "Motivated by its practical success, we show that the two-dimensional total variation denoiser satisfies a sharp oracle inequality that leads to near optimal rates of estimation for a large class of image models such as bi-isotonic, Hölder smooth and cartoons. Our analysis hinges on properties of the unnormalized Laplacian of the two-dimensional grid such as eigenvector delocalization and spectral decay. We also present extensions to more than two dimensions as well as several other graphs. AMS 2000 subject classifications: Primary 62G08; secondary 62C20, 62G05, 62H35." ] }
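The fused-lasso / graph-TV penalty discussed above is just the sum of absolute differences across edges, so piecewise-constant signals are cheap and wiggly ones are expensive; a chain graph recovers the classic 1-D fused lasso:

```python
import numpy as np

def graph_tv(theta, edges):
    """Fused-lasso / graph total-variation penalty: sum of |theta_i - theta_j| over edges."""
    return sum(abs(theta[i] - theta[j]) for i, j in edges)

chain = [(i, i + 1) for i in range(5)]           # 1-D grid = classic fused lasso
piecewise = np.array([0., 0., 0., 1., 1., 1.])   # one jump
wiggly    = np.array([0., 1., 0., 1., 0., 1.])   # many jumps
print(graph_tv(piecewise, chain), graph_tv(wiggly, chain))  # 1.0 5.0
```

On a kNN graph of scattered design points the same penalty gives the kNN fused lasso of @cite_36 .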
1906.09521
2952611269
We consider adaptations of the Mumford-Shah functional to graphs. These are based on discretizations of nonlocal approximations to the Mumford-Shah functional. Motivated by applications in machine learning we study the random geometric graphs associated to random samples of a measure. We establish the conditions on the graph constructions under which the minimizers of graph Mumford-Shah functionals converge to a minimizer of a continuum Mumford-Shah functional. Furthermore we explicitly identify the limiting functional. Moreover we describe an efficient algorithm for computing the approximate minimizers of the graph Mumford-Shah functional.
The paper @cite_26 by Hallac, Leskovec, and Boyd extends the fused lasso to the graph setting and considers further functionals closely related to the graph Mumford-Shah functional we consider here. In particular, the initial models of the paper deal with convex functionals which include graph total-variation based terms, and are thus called "Network LASSO". The second part of the paper modifies the total-variation term, which leads to nonconvex functionals. Here we interpret some of these nonconvex functionals, in particular model (7) of @cite_26 , as graph-based Mumford-Shah functionals, which, together with our asymptotic results, explains the behavior of these models. Wang, Sharpnack, Smola, and Tibshirani @cite_17 consider higher-order total-variation regularizers on graphs. We also note that the use of total-variation penalization for signal denoising and filtering has been considered in the signal processing community; see for example the work of Chen, Sandryhaila, Moura, and Kovačević @cite_19 .
{ "cite_N": [ "@cite_19", "@cite_26", "@cite_17" ], "mid": [ "1540550726", "2143862148", "2963670588" ], "abstract": [ "We consider the problem of signal recovery on graphs. Graphs model data with complex structure as signals on a graph. Graph signal recovery recovers one or multiple smooth graph signals from noisy, corrupted, or incomplete measurements. We formulate graph signal recovery as an optimization problem, for which we provide a general solution through the alternating direction methods of multipliers. We show how signal inpainting, matrix completion, robust principal component analysis, and anomaly detection all relate to graph signal recovery and provide corresponding specific solutions and theoretical analysis. We validate the proposed methods on real-world recovery problems, including online blog classification, bridge condition identification, temperature estimation, recommender system for jokes, and expert opinion combination of online blog classification.", "Convex optimization is an essential tool for modern data analysis, as it provides a framework to formulate and solve many problems in machine learning and data mining. However, general convex optimization solvers do not scale well, and scalable solvers are often specialized to only work on a narrow class of problems. Therefore, there is a need for simple, scalable algorithms that can solve many common optimization problems. In this paper, we introduce the network lasso, a generalization of the group lasso to a network setting that allows for simultaneous clustering and optimization on graphs. We develop an algorithm based on the Alternating Direction Method of Multipliers (ADMM) to solve this problem in a distributed and scalable manner, which allows for guaranteed global convergence even on large graphs. We also examine a non-convex extension of this approach. We then demonstrate that many types of problems can be expressed in our framework. 
We focus on three in particular --- binary classification, predicting housing prices, and event detection in time series data --- comparing the network lasso to baseline approaches and showing that it is both a fast and accurate method of solving large optimization problems.", "We introduce a family of adaptive estimators on graphs, based on penalizing the l1 norm of discrete graph differences. This generalizes the idea of trend filtering (, 2009; Tibshirani, 2014), used for univariate nonparametric regression, to graphs. Analogous to the univariate case, graph trend filtering exhibits a level of local adaptivity unmatched by the usual l2-based graph smoothers. It is also defined by a convex minimization problem that is readily solved (e.g., by fast ADMM or Newton algorithms). We demonstrate the merits of graph trend filtering through both examples and theory." ] }
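One way to read the nonconvex modification is as a truncated-quadratic edge penalty that caps the price of a large jump, which is what makes the energy Mumford-Shah-like: unlike TV, keeping a sharp discontinuity can be cheaper than smearing it out. The form below, min((u_i - u_j)², τ), is an illustrative choice and not necessarily model (7) of @cite_26 verbatim.

```python
import numpy as np

def ms_energy(u, f, edges, lam=1.0, tau=0.25):
    """Graph Mumford-Shah-type energy: fidelity plus a truncated quadratic on edges.
    Jumps larger than sqrt(tau) all pay the same capped price lam * tau."""
    fid = 0.5 * np.sum((u - f) ** 2)
    edge = sum(min((u[i] - u[j]) ** 2, tau) for i, j in edges)
    return fid + lam * edge

f = np.array([0., 0., 0., 2., 2., 2.])          # clean step signal
edges = [(i, i + 1) for i in range(5)]
small_jump = ms_energy(np.array([0., 0., 0., .4, .4, .4]), f, edges)  # smeared step
big_jump   = ms_energy(f, f, edges)             # keep the full jump: edge cost capped
print(big_jump < small_jump)  # True: preserving the sharp discontinuity is cheaper
```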
1906.09355
2951718434
Peer-to-Peer networks are designed to rely on the resources of their own users; therefore, resource management plays an important role in P2P protocols. Early P2P networks did not use proper mechanisms to manage fairness. However, after the difficulties and rise of freeloaders in networks like Gnutella, the importance of providing fairness for users became apparent. In this paper, we propose an incentive-based security model which leads to a network infrastructure that lightens the work of Seeders and makes Leechers contribute more. This method is able to prevent betrayals in Leecher-to-Leecher transactions and, more importantly, helps Seeders to be treated more fairly, which other incentive methods such as BitTorrent are incapable of doing. Additionally, by combining our method with cryptography, it is also possible to achieve secure channels, immune to spying, on top of a fair network. The simulation results clearly show how our proposed approach can overcome the free-riding issue. In addition, our findings reveal that our approach provides an appropriate level of fairness for users and can decrease download time.
Incentive-based systems usually consider some form of reward to encourage users to cooperate more. BitTorrent is one of the first major attempts to use incentives in its protocol @cite_21 . It proposed a tit-for-tat (TFT) mechanism to incentivize peers to contribute resources to the system and to discourage free-riders. As an important benefit, TFT encourages peers to contribute more without the need for centralized infrastructure. However, one important challenge is that the robustness of BitTorrent is questionable: many of the contributions made to improve performance are unnecessary and can be reallocated or refused by strategic users while still improving their own performance. As a result, some peers always contribute more data to the system than others.
{ "cite_N": [ "@cite_21" ], "mid": [ "239964209" ], "abstract": [ "The BitTorrent file distribution system uses tit-for-tat as a method of seeking Pareto efficiency. It achieves a higher level of robustness and resource utilization than any currently known cooperative technique. We explain what BitTorrent does, and how economic methods are used to achieve that goal. 1 What BitTorrent Does When a file is made available using HTTP, all upload cost is placed on the hosting machine. With BitTorrent, when multiple people are downloading the same file at the same time, they upload pieces of the file to each other. This redistributes the cost of upload to downloaders, (where it is often not even metered), thus making hosting a file with a potentially unlimited number of downloaders affordable. Researchers have attempted to find practical techniques to do this before [3]. It has not been previously deployed on a large scale because the logistical and robustness problems are quite difficult. Simply figuring out which peers have what parts of the file and where they should be sent is difficult to do without incurring a huge overhead. In addition, real deployments experience very high churn rates. Peers rarely connect for more than a few hours, and frequently for only a few minutes [4]. Finally, there is a general problem of fairness [1]. The total download rate across all downloaders must, of mathematical necessity, be equal to the total upload rate. The strategy for allocating upload which seems most likely to make peers happy with their download rates is to make each peer’s download rate be proportional to their upload rate. In practice it’s very difficult to keep peer download rates from sometimes dropping to zero by chance, much less make upload and download rates be correlated. We will explain how BitTorrent solves all of these problems well. 1.1 BitTorrent Interface BitTorrent’s interface is almost the simplest possible.
Users launch it by clicking on a hyperlink to the file they wish to download, and are given a standard “Save As” dialog, followed by a download progress dialog which is mostly notable for having an upload rate in addition to a download rate. This extreme ease of use has contributed greatly to BitTorrent’s adoption, and may even be more important than, although it certainly complements, the performance and cost redistribution features which are described in this paper." ] }
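The tit-for-tat choking policy described above can be sketched as: each round, unchoke the k neighbors that uploaded the most to you in the last round, so a peer that contributes nothing stays choked. (The real BitTorrent client also rotates an optimistic-unchoke slot, omitted here; peer names and byte counts are made up.)

```python
def tft_round(received, k=2):
    """Unchoke the k peers that contributed the most to us in the last round."""
    ranked = sorted(received, key=received.get, reverse=True)
    return set(ranked[:k])

# bytes received from each neighbor last round; 'freerider' uploaded nothing
received = {"alice": 900, "bob": 650, "carol": 300, "freerider": 0}
unchoked = tft_round(received, k=2)
print(sorted(unchoked))  # ['alice', 'bob'] -- the free-rider stays choked
```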
1906.09355
2951718434
Peer-to-Peer networks are designed to rely on the resources of their own users; therefore, resource management plays an important role in P2P protocols. Early P2P networks did not use proper mechanisms to manage fairness. However, after the difficulties and rise of freeloaders in networks like Gnutella, the importance of providing fairness for users became apparent. In this paper, we propose an incentive-based security model which leads to a network infrastructure that lightens the work of Seeders and makes Leechers contribute more. This method is able to prevent betrayals in Leecher-to-Leecher transactions and, more importantly, helps Seeders to be treated more fairly, which other incentive methods such as BitTorrent are incapable of doing. Additionally, by combining our method with cryptography, it is also possible to achieve secure channels, immune to spying, on top of a fair network. The simulation results clearly show how our proposed approach can overcome the free-riding issue. In addition, our findings reveal that our approach provides an appropriate level of fairness for users and can decrease download time.
Trust management systems are methods to establish word-of-mouth reputation in P2P networks: based on the transactions between nodes, they evaluate a degree of trust for each node, which is mostly used to establish a fair network. Detecting a malicious user among its neighbors is a selection problem, and methods such as fuzzy decision making @cite_20 @cite_25 and genetic-based algorithms @cite_23 can be used to identify free-riders based on multiple criteria. A successful trust management system that uses this approach is @cite_5 ; @cite_18 and @cite_4 are other examples of trust management systems. EigenTrust is an approach proposed in @cite_1 that uses an algorithm to decrease the number of downloads of inauthentic files in a P2P network by assigning each peer a unique global trust value, based on the peer's history of uploads. For this purpose, the authors proposed a distributed and secure method to compute global trust values, based on power iteration. These values are then used by peers to select the peers from whom they download.
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_1", "@cite_23", "@cite_5", "@cite_25", "@cite_20" ], "mid": [ "1973383140", "", "2156523427", "1866313586", "1898716852", "2550694155", "2119948204" ], "abstract": [ "The Peer-to-Peer (P2P) architecture has been successfully used to reduce costs and increase the scalability of Internet live streaming systems. However, the effectiveness of these applications depends largely on user (peer) cooperation. In this article we use data collected from SopCast, a popular P2P live application, to show that there is high correlation between peer centrality--out-degree, out-closeness, and betweenness--in the P2P overlay graph and peer cooperation. We use this finding to propose a new regression-based model to predict peer cooperation from its past centrality. Our model takes only peer out-degrees as input, as out-degree has the strongest correlation with peer cooperation. Our evaluation shows that our model has good accuracy and does not need to be trained too often (e.g., once each 16 min). We also use our model to sketch a mechanism to detect malicious peers that report artificially inflated cooperation aiming at, for example, receiving better quality of service.", "", "Peer-to-peer file-sharing networks are currently receiving much attention as a means of sharing and distributing information. However, as recent experience shows, the anonymous, open nature of these networks offers an almost ideal environment for the spread of self-replicating inauthentic files.We describe an algorithm to decrease the number of downloads of inauthentic files in a peer-to-peer file-sharing network that assigns each peer a unique global trust value, based on the peer's history of uploads. We present a distributed and secure method to compute global trust values, based on Power iteration. 
By having peers use these global trust values to choose the peers from whom they download, the network effectively identifies malicious peers and isolates them from the network.In simulations, this reputation system, called EigenTrust, has been shown to significantly decrease the number of inauthentic files on the network, even under a variety of conditions where malicious peers cooperate in an attempt to deliberately subvert the system.", "Wireless sensor networks consist of a large number of nodes which are distributed sporadically in a geographic area. The energy of all nodes on the network is limited. For this reason, providing a method of communication between nodes and network administrator to manage energy consumption is crucial. For this purpose, one of the proposed methods with high performance, is clustering methods. The big challenge in clustering methods is dividing network into several clusters that each cluster is managed by a cluster head (CH). In this paper, a centralized genetic-based clustering (CGC) protocol using onion approach is proposed. The CGC protocol selects the appropriate nodes as CHs according to three criteria that ultimately increases the network life time. This paper investigates the genetic algorithm (GA) as a dynamic technique to find optimum CHs. Furthermore, an innovative fitness function according to the specified parameters is presented. Each chromosome which minimizes fitness function, is selected by base station (BS) and its nodes are introduced to the whole network as proper CHs. After the selection of CHs and cluster formation, for upper level routing between CHs, we define a novel concept which is called Onion Approach. We divide the network into several onion layers in order to reduce the communication overhead among CH nodes. Simulation results show that the implementation of the proposed method by GA and using onion approach, presents better efficiency compared with other previous methods. 
Conducted simulation results show that the CGC protocol has done significant improvement in terms of running time of the algorithm, the number of nodes alive, first node death, last node death, the number of packets received by the BS, and energy consumption of the network.", "An important issue in Peer-to-Peer networks is to encourage users to share with others as they use the resources of the network. However, some nodes may only consume from users without giving anything in return. To fix this problem, we can incorporate trust management systems with network infrastructures. Current trust managements are usually made for unstructured overlays and have several shortcomings. They are made to be very similar to e-commerce scoring websites which may not be the best design for fairness in P2P networks. Several problems may arise with their designs such as difficulties to provide a complete history of freeloaders or lack of an autonomous removal mechanism in case of severe attacks. In this paper, we argue that such systems can be deployed more efficiently by using a structured paradigm. For this purpose, we propose C-Trust, a trust management system which is focused on fairness for P2P networks. This is done by getting help from current circular structured designs. This method is able to mark freeloaders, identify their severity of abusion and punish them accordingly. We are also able to effectively protect both Seeder-to-Leecher and Leecher-to-Leecher transactions. This feature is specially important for fairness which other trust systems have not considered so far.", "The most important aim of Automated Intrusion Response Systems (AIRSs) is selecting responses that impose less cost on the protected system and which are able to neutralize intrusions progress effectively. Cost-sensitive AIRSs use different methods to launch efficient responses. In this regard, risk assessment as a component for assessing intrusion danger on the system is introduced in many papers. 
However, most available risk assessment methods produce ambiguous results. Fuzzy logic is known as an effective method to be used in the process of risk assessment. This is mainly because fuzzy approach reduces the level of uncertainty of risk factors. To assess risk by fuzzy methods, risk parameters which are extracted from the traffic patterns are used as inputs of fuzzy systems. The aim of this paper is to introduce an AIRS based on fuzzy risk assessment to evaluate the risk of each intrusion in real time and apply a suitable response for protecting web applications. We also introduce a method for applying responses retroactively. The results of applied method show the effective performance of the proposed method in terms of cost-sensitivity and time to response.", "In this paper we propose a novel relay selection method for cooperation communication networks using fuzzy logic. Many efforts have been made in the literature to select the superior relay based on relay's SNR SER and or relay's reputation (in cooperation stimulation methods). We jointly consider four criteria for the process of relay selection, relay's SNR SER, relay's reputation, relaying strategy and relay location. We consider the condition in which network users employ different relaying strategies, i.e., some relay nodes employ decode-and-forward strategy, some nodes employ amplify-and-forward strategy, and the other ones employ compress-and-forward strategy. Also, we categorize the network users into LM clusters and take into account the relative distance of potential relay nodes from the source and destination nodes in the process of relay selection. This relative distance has significant impact on the average achievable rate in destination node. Finally, by using a fuzzy logic decision making method, we select the “best relay” based on these four criteria." ] }
1906.09355
2951718434
Peer-to-Peer networks are designed to rely on the resources of their own users. Therefore, resource management plays an important role in P2P protocols. Early P2P networks did not use proper mechanisms to manage fairness. However, after seeing the difficulties and the rise of freeloaders in networks like Gnutella, the importance of providing fairness for users has become apparent. In this paper, we propose an incentive-based security model which leads to a network infrastructure that lightens the work of Seeders and makes Leechers contribute more. This method is able to prevent betrayals in Leecher-to-Leecher transactions and, more importantly, helps Seeders to be treated more fairly. This is what other incentive methods such as BitTorrent are incapable of doing. Additionally, by combining our method with cryptography, it is also possible to achieve secure channels, immune to spying, on top of a fair network. The simulation results clearly show how our proposed approach can overcome the free-riding issue. In addition, our findings reveal that our approach is able to provide an appropriate level of fairness for users and can decrease the download time.
As a result, the network is able to identify malicious peers and isolate them. EigenTrust decreases the number of inauthentic files on the network, even under a variety of conditions where malicious peers cooperate in an attempt to subvert the system. In @cite_18 , the authors introduced an approach to predict a peer's cooperation level, focusing on the cooperation induced by the P2P protocol rather than the cooperation that results from user behavior or bandwidth limitations. This method mainly targets live streaming applications. By investigating the correlation between a peer's cooperation level and its centrality in the overlay graph, the authors showed that three centrality metrics, namely out-degree, out-closeness, and betweenness, are positively correlated with the cooperation level. Based on this, they proposed a non-linear regression model that uses a peer's out-degree in the recent past to predict its cooperation level in the near future.
{ "cite_N": [ "@cite_18" ], "mid": [ "1973383140" ], "abstract": [ "The Peer-to-Peer (P2P) architecture has been successfully used to reduce costs and increase the scalability of Internet live streaming systems. However, the effectiveness of these applications depends largely on user (peer) cooperation. In this article we use data collected from SopCast, a popular P2P live application, to show that there is high correlation between peer centrality--out-degree, out-closeness, and betweenness--in the P2P overlay graph and peer cooperation. We use this finding to propose a new regression-based model to predict peer cooperation from its past centrality. Our model takes only peer out-degrees as input, as out-degree has the strongest correlation with peer cooperation. Our evaluation shows that our model has good accuracy and does not need to be trained too often (e.g., once each 16 min). We also use our model to sketch a mechanism to detect malicious peers that report artificially inflated cooperation aiming at, for example, receiving better quality of service." ] }
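The EigenTrust global trust values discussed above are the stationary point of repeatedly aggregating normalized local trust values, which can be computed by power iteration. A minimal plain-Python sketch; the 3-peer local-trust matrix is made up for illustration:

```python
def eigentrust(C, iters=100, tol=1e-10):
    """Power iteration t <- C^T t on a row-normalized local-trust matrix C.

    C[i][j] is peer i's normalized local trust in peer j (each row sums to 1).
    Returns the global trust vector, which also sums to 1.
    """
    n = len(C)
    t = [1.0 / n] * n  # start from the uniform distribution
    for _ in range(iters):
        nxt = [sum(C[i][j] * t[i] for i in range(n)) for j in range(n)]
        if max(abs(a - b) for a, b in zip(nxt, t)) < tol:
            t = nxt
            break
        t = nxt
    return t

# Hypothetical 3-peer network; peer 2 receives little local trust.
C = [
    [0.0, 0.9, 0.1],
    [0.8, 0.0, 0.2],
    [0.5, 0.5, 0.0],
]
trust = eigentrust(C)
```

At the fixed point the vector satisfies t = C^T t, which is how malicious peers (those receiving little trust) end up with small global values.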
1906.09302
2950492278
SPARQL is a highly powerful query language for an ever-growing number of Linked Data resources and Knowledge Graphs. Using it requires a certain familiarity with the entities in the domain to be queried as well as expertise in the language's syntax and semantics, none of which average human web users can be assumed to possess. To overcome this limitation, automatically translating natural language questions to SPARQL queries has been a vibrant field of research. However, to this date, the vast success of deep learning methods has not yet been fully propagated to this research problem. This paper contributes to filling this gap by evaluating the utilization of eight different Neural Machine Translation (NMT) models for the task of translating from natural language to the structured query language SPARQL. While highlighting the importance of high-quantity and high-quality datasets, the results show a dominance of a CNN-based architecture with a BLEU score of up to 98 and accuracy of up to 94 .
@cite_13 presented a method based on an encoder-decoder model with an attention mechanism, aimed at translating input utterances to their logical forms with minimal domain knowledge. Moreover, they proposed a sequence-to-tree variant whose special decoder better captures the hierarchical structure of logical forms. They tested their models on four different datasets, using accuracy as the evaluation metric.
{ "cite_N": [ "@cite_13" ], "mid": [ "2963794306" ], "abstract": [ "Semantic parsing aims at mapping natural language to machine interpretable meaning representations. Traditional approaches rely on high-quality lexicons, manually-built templates, and linguistic features which are either domainor representation-specific. In this paper we present a general method based on an attention-enhanced encoder-decoder model. We encode input utterances into vector representations, and generate their logical forms by conditioning the output sequences or trees on the encoding vectors. Experimental results on four datasets show that our approach performs competitively without using hand-engineered features and is easy to adapt across domains and meaning representations." ] }
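The attention mechanism used in the encoder-decoder models above boils down to three steps: score each encoder state against the current decoder state, softmax the scores, and form a weighted context vector. A minimal dot-product-attention sketch in plain Python (the toy vectors are made up):

```python
import math

def attend(decoder_state, encoder_states):
    """Dot-product attention: softmax over scores, then a weighted sum."""
    scores = [sum(d * e for d, e in zip(decoder_state, h)) for h in encoder_states]
    m = max(scores)                      # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [x / z for x in exps]
    # Context vector: convex combination of the encoder states.
    context = [sum(w * h[k] for w, h in zip(weights, encoder_states))
               for k in range(len(encoder_states[0]))]
    return weights, context

# Toy example: three encoder states of dimension 2.
H = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
w, c = attend([1.0, 0.0], H)
```

States 0 and 2 score equally against the query here, so they receive equal weight and dominate the context vector.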
1906.09302
2950492278
SPARQL is a highly powerful query language for an ever-growing number of Linked Data resources and Knowledge Graphs. Using it requires a certain familiarity with the entities in the domain to be queried as well as expertise in the language's syntax and semantics, none of which average human web users can be assumed to possess. To overcome this limitation, automatically translating natural language questions to SPARQL queries has been a vibrant field of research. However, to this date, the vast success of deep learning methods has not yet been fully propagated to this research problem. This paper contributes to filling this gap by evaluating the utilization of eight different Neural Machine Translation (NMT) models for the task of translating from natural language to the structured query language SPARQL. While highlighting the importance of high-quantity and high-quality datasets, the results show a dominance of a CNN-based architecture with a BLEU score of up to 98 and accuracy of up to 94 .
@cite_6 proposed a framework called Seq2SQL that utilizes an LSTM-based encoder-decoder architecture to translate NL questions to SQL. Input NL questions were augmented by adding the column names of the queried table. Correspondingly, the decoder was split into three components that predict the aggregation classifier, the column names, and the WHERE clause of a SQL query, respectively. As opposed to conventional teacher forcing, the model was trained with reinforcement learning to avoid wrongly penalizing queries that deliver correct results upon execution but do not match the reference query string exactly. To account for this in the evaluation, both execution accuracy and logical form accuracy of the generated queries were measured.
{ "cite_N": [ "@cite_6" ], "mid": [ "2751448157" ], "abstract": [ "Relational databases store a significant amount of the worlds data. However, accessing this data currently requires users to understand a query language such as SQL. We propose Seq2SQL, a deep neural network for translating natural language questions to corresponding SQL queries. Our model uses rewards from in the loop query execution over the database to learn a policy to generate the query, which contains unordered parts that are less suitable for optimization via cross entropy loss. Moreover, Seq2SQL leverages the structure of SQL to prune the space of generated queries and significantly simplify the generation problem. In addition to the model, we release WikiSQL, a dataset of 80654 hand-annotated examples of questions and SQL queries distributed across 24241 tables fromWikipedia that is an order of magnitude larger than comparable datasets. By applying policy based reinforcement learning with a query execution environment to WikiSQL, Seq2SQL outperforms a state-of-the-art semantic parser, improving execution accuracy from 35.9 to 59.4 and logical form accuracy from 23.4 to 48.3 ." ] }
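The gap between execution accuracy and logical form accuracy noted above can be made concrete: two queries that differ as strings may still return identical rows. A small sketch using Python's built-in sqlite3; the table and queries are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (name TEXT, age INTEGER)")
conn.executemany("INSERT INTO people VALUES (?, ?)",
                 [("ada", 36), ("alan", 41), ("grace", 29)])

gold = "SELECT name FROM people WHERE age > 30 ORDER BY name"
pred = "SELECT name FROM people WHERE 30 < age ORDER BY name"

# Logical form accuracy compares query strings; execution accuracy compares results.
logical_form_match = (gold == pred)
execution_match = (conn.execute(gold).fetchall()
                   == conn.execute(pred).fetchall())
```

Here the predicted query fails the string match but passes the execution match, which is exactly the case the reinforcement learning objective is designed not to penalize.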
1906.09302
2950492278
SPARQL is a highly powerful query language for an ever-growing number of Linked Data resources and Knowledge Graphs. Using it requires a certain familiarity with the entities in the domain to be queried as well as expertise in the language's syntax and semantics, none of which average human web users can be assumed to possess. To overcome this limitation, automatically translating natural language questions to SPARQL queries has been a vibrant field of research. However, to this date, the vast success of deep learning methods has not yet been fully propagated to this research problem. This paper contributes to filling this gap by evaluating the utilization of eight different Neural Machine Translation (NMT) models for the task of translating from natural language to the structured query language SPARQL. While highlighting the importance of high-quantity and high-quality datasets, the results show a dominance of a CNN-based architecture with a BLEU score of up to 98 and accuracy of up to 94 .
@cite_19 also used an LSTM encoder-decoder model, this time to encode natural language and decode into SPARQL. Furthermore, they employed a neural probabilistic language model to learn a word vector representation for SPARQL, and used the attention mechanism to learn a vocabulary mapping between natural language and SPARQL. For their experiments, they transformed the logical queries of the traditional Geo880 dataset into equivalent SPARQL queries. For evaluation, they adopted two metrics: accuracy and the number of syntactic errors. While they obtained results comparable to related approaches, they did not handle the out-of-vocabulary issue or lexical ambiguities.
{ "cite_N": [ "@cite_19" ], "mid": [ "2791908423" ], "abstract": [ "Semantic parsing is the process of mapping a natural language sentence into a formal representation of its meaning. In this work we use the neural network approach to transform natural language sentence into a query to an ontology database in the SPARQL language. This method does not rely on handcraft-rules, high-quality lexicons, manually-built templates or other handmade complex structures. Our approach is based on vector space model and neural networks. The proposed model is based in two learning steps. The first step generates a vector representation for the sentence in natural language and SPARQL query. The second step uses this vector representation as input to a neural network (LSTM with attention mechanism) to generate a model able to encode natural language and decode SPARQL." ] }
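BLEU, the metric reported for NL-to-SPARQL translation in the abstract above, is the brevity-penalized geometric mean of clipped n-gram precisions. A minimal single-reference, sentence-level sketch; for real evaluations a standard implementation should be preferred:

```python
import math
from collections import Counter

def bleu(candidate, reference, max_n=4):
    """Sentence BLEU: geometric mean of clipped n-gram precisions times a brevity penalty."""
    def ngrams(tokens, n):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    precisions = []
    for n in range(1, max_n + 1):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        overlap = sum(min(c, ref[g]) for g, c in cand.items())  # clipped counts
        precisions.append(overlap / max(1, sum(cand.values())))
    if min(precisions) == 0:
        return 0.0
    log_mean = sum(math.log(p) for p in precisions) / max_n
    # Brevity penalty: only penalize candidates shorter than the reference.
    bp = 1.0 if len(candidate) > len(reference) else \
        math.exp(1 - len(reference) / max(1, len(candidate)))
    return bp * math.exp(log_mean)

gold = "SELECT ?x WHERE { ?x a dbo:City }".split()
```

A perfect translation scores 1.0, a truncated one is discounted by the brevity penalty, and any candidate with no 4-gram overlap scores 0 under this simple variant.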
1906.09410
2966666009
With a view towards molecular communication systems and molecular multi-agent systems, we propose the Chemical Baum-Welch Algorithm, a novel reaction network scheme that learns parameters for Hidden Markov Models (HMMs). Each reaction in our scheme changes only one molecule of one species to one molecule of another. The reverse change is also accessible but via a different set of enzymes, in a design reminiscent of futile cycles in biochemical pathways. We show that every fixed point of the Baum-Welch algorithm for HMMs is a fixed point of our reaction network scheme, and every positive fixed point of our scheme is a fixed point of the Baum-Welch algorithm. We prove that the “Expectation” step and the “Maximization” step of our reaction network separately converge exponentially fast. We simulate mass-action kinetics for our network on an example sequence, and show that it learns the same parameters for the HMM as the Baum-Welch algorithm.
Napp and Adams @cite_21 have shown how to compute marginals on graphical models with reaction networks. They exploit graphical structure by mimicking belief propagation. Hidden Markov Models can be viewed as a special type of graphical model where there are @math random variables @math with the @math random variables taking values in @math and the @math random variables in @math . The @math random variables form a Markov chain @math . In addition, there are @math edges from @math to @math for @math to @math denoting observations. Specialized to HMMs, the scheme of Napp and Adams would compute the equivalent of steady state values of the @math species, performing a version of the E step. They are able to show that true marginals are fixed points of their scheme, which is similar to our Theorem . Thus their work may be viewed as the first example of a reaction network scheme that exploits graphical structure to compute E projections. Our E step goes further by proving correctness as well as exponential convergence. Their work also raises the challenge of extending our scheme to all graphical models.
{ "cite_N": [ "@cite_21" ], "mid": [ "2166918926" ], "abstract": [ "Recent work on molecular programming has explored new possibilities for computational abstractions with biomolecules, including logic gates, neural networks, and linear systems. In the future such abstractions might enable nanoscale devices that can sense and control the world at a molecular scale. Just as in macroscale robotics, it is critical that such devices can learn about their environment and reason under uncertainty. At this small scale, systems are typically modeled as chemical reaction networks. In this work, we develop a procedure that can take arbitrary probabilistic graphical models, represented as factor graphs over discrete random variables, and compile them into chemical reaction networks that implement inference. In particular, we show that marginalization based on sum-product message passing can be implemented in terms of reactions between chemical species whose concentrations represent probabilities. We show algebraically that the steady state concentration of these species correspond to the marginal distributions of the random variables in the graph and validate the results in simulations. As with standard sum-product inference, this procedure yields exact results for tree-structured graphs, and approximate solutions for loopy graphs." ] }
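For HMMs, the E step discussed above amounts to the forward-backward recursion, which turns the model parameters into posterior state marginals. A minimal plain-Python sketch; the toy parameters are made up:

```python
def forward_backward(pi, A, B, obs):
    """Posterior state marginals gamma[t][i] = P(state_t = i | observations)."""
    n, T = len(pi), len(obs)
    # Forward pass: alpha[t][i] = P(obs[0..t], state_t = i).
    alpha = [[pi[i] * B[i][obs[0]] for i in range(n)]]
    for t in range(1, T):
        alpha.append([B[j][obs[t]] * sum(alpha[-1][i] * A[i][j] for i in range(n))
                      for j in range(n)])
    # Backward pass: beta[t][i] = P(obs[t+1..T-1] | state_t = i).
    beta = [[1.0] * n for _ in range(T)]
    for t in range(T - 2, -1, -1):
        beta[t] = [sum(A[i][j] * B[j][obs[t + 1]] * beta[t + 1][j] for j in range(n))
                   for i in range(n)]
    likelihood = sum(alpha[-1])
    gamma = [[alpha[t][i] * beta[t][i] / likelihood for i in range(n)] for t in range(T)]
    return gamma, likelihood

pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]        # transition probabilities
B = [[0.9, 0.1], [0.2, 0.8]]        # B[state][symbol], emission probabilities
gamma, lik = forward_backward(pi, A, B, [0, 1, 0])
```

Each row of gamma is a distribution over hidden states at that position, which is the quantity a reaction-network E step must reproduce at steady state.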
1906.09410
2966666009
With a view towards molecular communication systems and molecular multi-agent systems, we propose the Chemical Baum-Welch Algorithm, a novel reaction network scheme that learns parameters for Hidden Markov Models (HMMs). Each reaction in our scheme changes only one molecule of one species to one molecule of another. The reverse change is also accessible but via a different set of enzymes, in a design reminiscent of futile cycles in biochemical pathways. We show that every fixed point of the Baum-Welch algorithm for HMMs is a fixed point of our reaction network scheme, and every positive fixed point of our scheme is a fixed point of the Baum-Welch algorithm. We prove that the “Expectation” step and the “Maximization” step of our reaction network separately converge exponentially fast. We simulate mass-action kinetics for our network on an example sequence, and show that it learns the same parameters for the HMM as the Baum-Welch algorithm.
@cite_4 have described Chemical Boltzmann Machines, which are reaction network schemes whose dynamics reconstruct inference in Boltzmann Machines. This inference can be viewed as a version of E projection. No scheme for learning is presented. The exact schemes presented there are exponentially large, while the more realistically sized schemes are presented without proof. In comparison, our schemes are polynomially sized, provably correct if the equilibrium is positive, and perform both inference and learning for HMMs.
{ "cite_N": [ "@cite_4" ], "mid": [ "2963699962" ], "abstract": [ "How smart can a micron-sized bag of chemicals be? How can an artificial or real cell make inferences about its environment? From which kinds of probability distributions can chemical reaction networks sample? We begin tackling these questions by showing three ways in which a stochastic chemical reaction network can implement a Boltzmann machine, a stochastic neural network model that can generate a wide range of probability distributions and compute conditional probabilities. The resulting models, and the associated theorems, provide a road map for constructing chemical reaction networks that exploit their native stochasticity as a computational resource. Finally, to show the potential of our models, we simulate a chemical Boltzmann machine to classify and generate MNIST digits in-silico." ] }
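Inference in a Boltzmann machine, which the chemical scheme above must reproduce at steady state, reduces to a local sigmoid rule: p(x_i = 1 | rest) = sigma(b_i + sum_j w_ij x_j). A sketch checking this rule against the probability obtained directly from the Boltzmann energies; the weights and biases are made up:

```python
import math

W = [[0.0, 1.5, -0.5],
     [1.5, 0.0, 0.8],
     [-0.5, 0.8, 0.0]]      # symmetric weights, zero diagonal
b = [0.2, -0.3, 0.1]

def energy(x):
    """E(x) = -sum_{i<j} w_ij x_i x_j - sum_i b_i x_i for binary x."""
    return -(sum(W[i][j] * x[i] * x[j] for i in range(3) for j in range(i + 1, 3))
             + sum(b[i] * x[i] for i in range(3)))

def conditional_sigmoid(i, x):
    """p(x_i = 1 | rest) via the local sigmoid rule."""
    field = b[i] + sum(W[i][j] * x[j] for j in range(3) if j != i)
    return 1.0 / (1.0 + math.exp(-field))

def conditional_exact(i, x):
    """Same probability computed directly from the Boltzmann distribution."""
    x1, x0 = list(x), list(x)
    x1[i], x0[i] = 1, 0
    p1, p0 = math.exp(-energy(x1)), math.exp(-energy(x0))
    return p1 / (p1 + p0)

x = [1, 0, 1]
```

The two computations agree exactly, which is the identity a stochastic chemical implementation is expected to realize unit by unit.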
1906.09410
2966666009
With a view towards molecular communication systems and molecular multi-agent systems, we propose the Chemical Baum-Welch Algorithm, a novel reaction network scheme that learns parameters for Hidden Markov Models (HMMs). Each reaction in our scheme changes only one molecule of one species to one molecule of another. The reverse change is also accessible but via a different set of enzymes, in a design reminiscent of futile cycles in biochemical pathways. We show that every fixed point of the Baum-Welch algorithm for HMMs is a fixed point of our reaction network scheme, and every positive fixed point of our scheme is a fixed point of the Baum-Welch algorithm. We prove that the “Expectation” step and the “Maximization” step of our reaction network separately converge exponentially fast. We simulate mass-action kinetics for our network on an example sequence, and show that it learns the same parameters for the HMM as the Baum-Welch algorithm.
@cite_34 have shown that Kalman filters can be implemented with reaction networks. Kalman filters can be thought of as a version of Hidden Markov Models with continuous hidden states @cite_40 . It would be instructive to compare their scheme with ours, and note similarities and differences. In passing from position @math to position @math along the sequence, our scheme repeats the same reaction network that updates @math using @math values. It is worth examining whether this can be done "in place" so that the same species can be reused, yielding a reaction network that is not tied to the length @math of the observed sequence.
{ "cite_N": [ "@cite_40", "@cite_34" ], "mid": [ "2103139809", "2340334270" ], "abstract": [ "Factor analysis, principal component analysis, mixtures of gaussian clusters, vector quantization, Kalman filter models, and hidden Markov models can all be unified as variations of unsupervised learning under a single basic generative model. This is achieved by collecting together disparate observations and derivations made by many previous authors and introducing a new way of linking discrete and continuous state models using a simple nonlinearity. Through the use of other nonlinearities, we show how independent component analysis is also a variation of the same basic generative model.We show that factor analysis and mixtures of gaussians can be implemented in autoencoder neural networks and learned using squared error plus the same regularization term. We introduce a new model for static data, known as sensible principal component analysis, as well as a novel concept of spatially adaptive observation noise. We also review some of the literature involving global and local mixtures of the basic models and provide pseudocode for inference and learning for all the basic models.", "The invention of the Kalman filter is a crowning achievement of filtering theory—one that has revolutionized technology in countless ways. By dealing effectively with noise, the Kalman filter has enabled various applications in positioning, navigation, control, and telecommunications. In the emerging field of synthetic biology, noise and context dependency are among the key challenges facing the successful implementation of reliable, complex, and scalable synthetic circuits. Although substantial further advancement in the field may very well rely on effectively addressing these issues, a principled protocol to deal with noise—as provided by the Kalman filter—remains completely missing. Here we develop an optimal filtering theory that is suitable for noisy biochemical networks. 
We show how the resulting filters can be implemented at the molecular level and provide various simulations related to estimation, system identification, and noise cancellation problems. We demonstrate our approach in vitro using DNA strand displacement cascades as well as in vivo using flow cytometry measurements of a light-inducible circuit in Escherichia coli." ] }
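A Kalman filter, which the cited work realizes with reaction networks, alternates a predict step and a measurement update. A minimal scalar (random-walk) sketch with made-up noise levels:

```python
def kalman_step(mean, var, z, process_var, meas_var):
    """One scalar predict/update cycle of a random-walk Kalman filter."""
    # Predict: the state is assumed to follow a random walk, so only variance grows.
    mean_pred, var_pred = mean, var + process_var
    # Update: blend the prediction with the measurement z via the Kalman gain.
    K = var_pred / (var_pred + meas_var)      # gain in [0, 1]
    mean_new = mean_pred + K * (z - mean_pred)
    var_new = (1 - K) * var_pred
    return mean_new, var_new

mean, var = 0.0, 1.0                          # vague prior
for z in [0.9, 1.1, 1.0]:                     # noisy measurements of a constant near 1
    mean, var = kalman_step(mean, var, z, process_var=0.01, meas_var=0.25)
```

With each measurement the posterior variance shrinks and the estimate moves toward the measurements, the noise-averaging behavior that motivates a molecular implementation.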
1906.09410
2966666009
With a view towards molecular communication systems and molecular multi-agent systems, we propose the Chemical Baum-Welch Algorithm, a novel reaction network scheme that learns parameters for Hidden Markov Models (HMMs). Each reaction in our scheme changes only one molecule of one species to one molecule of another. The reverse change is also accessible but via a different set of enzymes, in a design reminiscent of futile cycles in biochemical pathways. We show that every fixed point of the Baum-Welch algorithm for HMMs is a fixed point of our reaction network scheme, and every positive fixed point of our scheme is a fixed point of the Baum-Welch algorithm. We prove that the “Expectation” step and the “Maximization” step of our reaction network separately converge exponentially fast. We simulate mass-action kinetics for our network on an example sequence, and show that it learns the same parameters for the HMM as the Baum-Welch algorithm.
Recently, @cite_3 have given a brilliant experimental demonstration of learning with DNA molecules. They have empirically demonstrated a DNA molecular system that can classify @math types of handwritten digits from the MNIST database. Their approach is based on the notion of "winner-take-all" circuits due to Maass @cite_13 , which was originally a proposal for how neural networks in the brain work. Winner-take-all might also be capable of approximating HMM learning, at least in theory @cite_7 , and it is worth understanding precisely how such schemes relate to the kind of scheme we have described here. It is conceivable that our scheme could be converted to winner-take-all by getting different species in the same flower to give negative feedback to each other. This might well lead to sampling the most likely path, performing a decoding task similar to the Viterbi algorithm.
{ "cite_N": [ "@cite_13", "@cite_7", "@cite_3" ], "mid": [ "2167148127", "2050689535", "2810268565" ], "abstract": [ "This article initiates a rigorous theoretical analysis of the computational power of circuits that employ modules for computing winner-take-all. Computational models that involve competitive stages have so far been neglected in computational complexity theory, although they are widely used in computational brain models, artificial neural networks, and analog VLSI. Our theoretical analysis shows that winner-take-all is a surprisingly powerful computational module in comparison with threshold gates (also referred to as McCulloch-Pitts neurons) and sigmoidal gates. We prove an optimal quadratic lower bound for computing winner-take-all in any feedforward circuit consisting of threshold gates. In addition we show that arbitrary continuous functions can be approximated by circuits employing a single soft winner-take-all gate as their only nonlinear operation. Our theoretical analysis also provides answers to two basic questions raised by neurophysiologists in view of the well-known asymmetry between excitatory and inhibitory connections in cortical circuits: how much computational power of neural networks is lost if only positive weights are employed in weighted sums and how much adaptive capability is lost if only the positive weights are subject to plasticity.", "In order to cross a street without being run over, we need to be able to extract very fast hidden causes of dynamically changing multi-modal sensory stimuli, and to predict their future evolution. We show here that a generic cortical microcircuit motif, pyramidal cells with lateral excitation and inhibition, provides the basis for this difficult but all-important information processing capability. This capability emerges in the presence of noise automatically through effects of STDP on connections between pyramidal cells in Winner-Take-All circuits with lateral excitation. 
In fact, one can show that these motifs endow cortical microcircuits with functional properties of a hidden Markov model, a generic model for solving such tasks through probabilistic inference. Whereas in engineering applications this model is adapted to specific tasks through offline learning, we show here that a major portion of the functionality of hidden Markov models arises already from online applications of STDP, without any supervision or rewards. We demonstrate the emergent computing capabilities of the model through several computer simulations. The full power of hidden Markov model learning can be attained through reward-gated STDP. This is due to the fact that these mechanisms enable a rejection sampling approximation to theoretically optimal learning. We investigate the possible performance gain that can be achieved with this more accurate learning method for an artificial grammar task.", "From bacteria following simple chemical gradients to the brain distinguishing complex odour information, the ability to recognize molecular patterns is essential for biological organisms. This type of information-processing function has been implemented using DNA-based neural networks, but has been limited to the recognition of a set of no more than four patterns, each composed of four distinct DNA molecules. Winner-take-all computation has been suggested as a potential strategy for enhancing the capability of DNA-based neural networks. Compared to the linear-threshold circuits and Hopfield networks used previously, winner-take-all circuits are computationally more powerful, allow simpler molecular implementation and are not constrained by the number of patterns and their complexity, so both a large number of simple patterns and a small number of complex patterns can be recognized. Here we report a systematic implementation of winner-take-all neural networks based on DNA-strand-displacement reactions. 
We use a previously developed seesaw DNA gate motif, extended to include a simple and robust component that facilitates the cooperative hybridization that is involved in the process of selecting a ‘winner’. We show that with this extended seesaw motif DNA-based neural networks can classify patterns into up to nine categories. Each of these patterns consists of 20 distinct DNA molecules chosen from the set of 100 that represents the 100 bits in 10 × 10 patterns, with the 20 DNA molecules selected tracing one of the handwritten digits ‘1’ to ‘9’. The network successfully classified test patterns with up to 30 of the 100 bits flipped relative to the digit patterns ‘remembered’ during training, suggesting that molecular circuits can robustly accomplish the sophisticated task of classifying highly complex and noisy information on the basis of similarity to a memory." ] }
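Viterbi decoding, mentioned above as the natural outcome of a winner-take-all conversion, keeps only the best-scoring predecessor of each state at each step. A minimal sketch; the toy parameters are made up, with deterministic emissions so the decoded path is obvious:

```python
def viterbi(pi, A, B, obs):
    """Most likely hidden-state sequence for a discrete HMM."""
    n = len(pi)
    score = [pi[i] * B[i][obs[0]] for i in range(n)]
    back = []                                 # backpointers to the winning predecessor
    for t in range(1, len(obs)):
        prev = score
        back.append([max(range(n), key=lambda i: prev[i] * A[i][j]) for j in range(n)])
        score = [prev[back[-1][j]] * A[back[-1][j]][j] * B[j][obs[t]] for j in range(n)]
    # Trace back from the best final state.
    path = [max(range(n), key=lambda j: score[j])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]

# Deterministic emissions: state 0 always emits symbol 0, state 1 always emits symbol 1.
pi = [0.5, 0.5]
A = [[0.6, 0.4], [0.4, 0.6]]
B = [[1.0, 0.0], [0.0, 1.0]]
path = viterbi(pi, A, B, [0, 1, 0])
```

The max over predecessors is precisely the "winner" a winner-take-all circuit selects at each position.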
1906.09433
2950852305
Rain removal in images and videos is still an important task in the computer vision field and is attracting the attention of more and more people. Traditional methods usually utilize incomplete priors or filters (e.g. the guided filter) to remove the rain effect. Deep learning offers more possibilities to better solve this task. However, existing deep methods remove rain either by estimating the background from the rainy image directly or by learning a rain residual first and then subtracting the residual to obtain a clear background. No other models are used in deep learning based de-raining methods to remove rain and obtain other information about rainy scenes. In this paper, we utilize an extensively used image degradation model, derived from atmospheric scattering principles, to model the formation of rainy images, and we learn the transmission and atmospheric light in rainy scenes in order to remove rain. To reach this goal, we propose a robust estimation method for the global atmospheric light in a rainy scene. Instead of using the estimated atmospheric light directly to learn a network that calculates transmission, we utilize it as ground truth and design a simple but novel triangle-shaped network structure to learn the atmospheric light for every rainy image, then fine-tune this network to obtain a better estimation of atmospheric light during the training of the transmission network. Furthermore, more efficient ShuffleNet Units are utilized in the transmission network to learn the transmission map, and the de-rained image is then obtained via the image degradation model. In subjective and objective comparisons, our method outperforms the selected state-of-the-art works.
To avoid the time-consuming dictionary learning stage, some filter-based de-raining works (e.g., using the edge-preserving guided filter @cite_6 ) appeared @cite_7 @cite_24 . These methods are simple and therefore fast, but their de-raining effect is rather limited. In @cite_27 , Chen et al. proposed a low-rank appearance model to capture the spatio-temporally correlated rain streaks. Li et al. @cite_23 proposed priors based on Gaussian mixture models for both the rain and the background to accommodate multiple orientations and scales of the rain streaks. However, these two models mistake some image details for rain and remove them together with the rain streaks. Besides, the method of @cite_23 is even more time-consuming than the dictionary learning methods.
{ "cite_N": [ "@cite_7", "@cite_6", "@cite_24", "@cite_27", "@cite_23" ], "mid": [ "634211087", "2125188192", "2132819211", "2154621477", "2466666260" ], "abstract": [ "Since no temporal information can be exploited, rain and snow removal from single image is a challenging problem. In this paper, an improved rain and snow removal method from single image is proposed by designing a guided L0 smoothing filter. The designed filter is inspired by the previous L0 gradient minimization. Then a coarse rain-free or snow-free image can be obtained with the proposed filter, and the final refined result is recovered by a further minimization operation depending on the observed image. Experimental results show that the proposed algorithm generates better or comparable outputs than the state-of-the-art algorithms in rain and snow removal task for single image.", "In this paper, we propose a novel explicit image filter called guided filter. Derived from a local linear model, the guided filter computes the filtering output by considering the content of a guidance image, which can be the input image itself or another different image. The guided filter can be used as an edge-preserving smoothing operator like the popular bilateral filter [1], but it has better behaviors near edges. The guided filter is also a more generic concept beyond smoothing: It can transfer the structures of the guidance image to the filtering output, enabling new filtering applications like dehazing and guided feathering. Moreover, the guided filter naturally has a fast and nonapproximate linear time algorithm, regardless of the kernel size and the intensity range. Currently, it is one of the fastest edge-preserving filters. 
Experiments show that the guided filter is both effective and efficient in a great variety of computer vision and computer graphics applications, including edge-aware smoothing, detail enhancement, HDR compression, image matting feathering, dehazing, joint upsampling, etc.", "Rain and snow bring poor visibility at outdoor vision systems. The common used image processing methods may be not suitable for a degraded image. In this paper, a guidance image method is proposed to remove rain and snow in a single image. To removal rain and snow only using one image, a guidance image is derived from the imaging model of a raindrop or a snowflake when it is passing through an element on the CCD of the camera. Since only using this guidance image may lose some detailed information, in this paper, a refined guidance image is proposed. This refined guidance image has similar contour with the un-degraded image and also maintains the detailed information which may be lost at the guidance image. Then a removal procedure is given by the use of the refined guidance image. Some comparison results are made between different methods using the guidance image and the refined guidance image. The refined guidance image can be used to get a better removal result. Our results show that this proposed method has both good performance in rain removal and snow removal.", "In this paper, we propose a novel low-rank appearance model for removing rain streaks. Different from previous work, our method needs neither rain pixel detection nor time-consuming dictionary learning stage. Instead, as rain streaks usually reveal similar and repeated patterns on imaging scene, we propose and generalize a low-rank model from matrix to tensor structure in order to capture the spatio-temporally correlated rain streaks. With the appearance model, we thus remove rain streaks from image video (and also other high-order image structure) in a unified way. 
Our experimental results demonstrate competitive (or even better) visual quality and efficient run-time in comparison with state of the art.", "This paper addresses the problem of rain streak removal from a single image. Rain streaks impair visibility of an image and introduce undesirable interference that can severely affect the performance of computer vision algorithms. Rain streak removal can be formulated as a layer decomposition problem, with a rain streak layer superimposed on a background layer containing the true scene content. Existing decomposition methods that address this problem employ either dictionary learning methods or impose a low rank structure on the appearance of the rain streaks. While these methods can improve the overall visibility, they tend to leave too many rain streaks in the background image or over-smooth the background image. In this paper, we propose an effective method that uses simple patch-based priors for both the background and rain layers. These priors are based on Gaussian mixture models and can accommodate multiple orientations and scales of the rain streaks. This simple approach removes rain streaks better than the existing methods qualitatively and quantitatively. We overview our method and demonstrate its effectiveness over prior work on a number of examples." ] }
1906.09433
2950852305
Rain removal in images and videos remains an important task in computer vision and is attracting increasing attention. Traditional methods usually utilize incomplete priors or filters (e.g., the guided filter) to remove rain effects. Deep learning offers more possibilities for solving this task better. However, existing deep methods remove rain either by estimating the background from the rainy image directly, or by learning a rain residual first and then subtracting the residual to obtain a clear background. No other models are used in deep-learning-based de-raining methods to remove rain and obtain further information about rainy scenes. In this paper, we utilize an extensively used image degradation model, derived from atmospheric scattering principles, to model the formation of rainy images, and try to learn the transmission and atmospheric light in rainy scenes so as to remove rain. To reach this goal, we propose a robust estimation method for the global atmospheric light in a rainy scene. Instead of using the estimated atmospheric light directly to learn a network that calculates transmission, we utilize it as ground truth and design a simple but novel triangle-shaped network structure to learn the atmospheric light for every rainy image, then fine-tune the network to obtain a better estimation of the atmospheric light during the training of the transmission network. Furthermore, more efficient ShuffleNet units are utilized in the transmission network to learn the transmission map, and the de-rained image is then obtained via the image degradation model. In both subjective and objective comparisons, our method outperforms the selected state-of-the-art works.
A multi-stage network consisting of several parallel sub-networks was designed to model and remove rain streaks of various sizes @cite_5 , where each parallel sub-network models rain streaks of a corresponding size. Li et al. regarded rain streaks as the accumulation of multiple rain-streak layers, then used a recurrent neural network to remove the rain streaks stage by stage. Although a recurrent training method is used, this work is not sensitive to rain streaks with blurry edges. In @cite_13 , a non-locally enhanced encoder-decoder network framework is proposed to capture long-range spatial dependencies via skip connections, and pooling-indices-guided decoding is used to learn increasingly abstract feature representations while preserving image details during decoding. Different from all these state-of-the-art de-raining works, we design a simple but effective network to estimate the atmospheric light and the transmission of rain, and thereby obtain clear rain-removed results. For wide rain streaks with blurry edges, our method also produces better visual results.
{ "cite_N": [ "@cite_5", "@cite_13" ], "mid": [ "2781413027", "2887181327" ], "abstract": [ "Given a single input rainy image, our goal is to visually remove rain streaks and the veiling effect caused by scattering and transmission of rain streaks and rain droplets. We are particularly concerned with heavy rain, where rain streaks of various sizes and directions can overlap each other and the veiling effect reduces contrast severely. To achieve our goal, we introduce a scale-aware multi-stage convolutional neural network. Our main idea here is that different sizes of rain-streaks visually degrade the scene in different ways. Large nearby streaks obstruct larger regions and are likely to reflect specular highlights more prominently than smaller distant streaks. These different effects of different streaks have their own characteristics in their image features, and thus need to be treated differently. To realize this, we create parallel sub-networks that are trained and made aware of these different scales of rain streaks. To our knowledge, this idea of parallel sub-networks that treats the same class of objects according to their unique sub-classes is novel, particularly in the context of rain removal. To verify our idea, we conducted experiments on both synthetic and real images, and found that our method is effective and outperforms the state-of-the-art methods.", "Single image rain streaks removal has recently witnessed substantial progress due to the development of deep convolutional neural networks. However, existing deep learning based methods either focus on the entrance and exit of the network by decomposing the input image into high and low frequency information and employing residual learning to reduce the mapping range, or focus on the introduction of cascaded learning scheme to decompose the task of rain streaks removal into multi-stages. 
These methods treat the convolutional neural network as an encapsulated end-to-end mapping module without deepening into the rationality and superiority of neural network design. In this paper, we delve into an effective end-to-end neural network structure for stronger feature expression and spatial correlation learning. Specifically, we propose a non-locally enhanced encoder-decoder network framework, which consists of a pooling indices embedded encoder-decoder network to efficiently learn increasingly abstract feature representation for more accurate rain streaks modeling while perfectly preserving the image detail. The proposed encoder-decoder framework is composed of a series of non-locally enhanced dense blocks that are designed to not only fully exploit hierarchical features from all the convolutional layers but also well capture the long-distance dependencies and structural information. Extensive experiments on synthetic and real datasets demonstrate that the proposed method can effectively remove rain-streaks on rainy image of various densities while well preserving the image details, which achieves significant improvements over the recent state-of-the-art methods." ] }
1906.09357
2952717691
Influence maximization (IM) has been extensively studied for better viral marketing. However, previous works put less emphasis on how evenly the audience is affected across different communities and how diversely the seed nodes are selected. In this paper, we incorporate audience diversity and seed diversity into the IM task. From the model perspective, in order to characterize both influence spread and diversity in our objective function, we adopt three utilities commonly used in economics (i.e., Perfect Substitutes, Perfect Complements and Cobb-Douglas). We validate our choice of these three functions by showing their desirable properties. From the algorithmic perspective, we present various approximation strategies to maximize the utilities. For audience diversification, we propose a solution-dependent approximation algorithm to circumvent the hardness results. For seed diversification, we prove a ( @math ) approximation ratio based on non-monotonic submodular maximization. Experimental results show that our framework outperforms other natural heuristics in both utility maximization and result diversification.
@cite_22 first formalize IM as a discrete optimization problem. Subsequent efforts following their framework can be divided into two directions. In one direction, researchers focus on accelerating the vanilla hill-climbing greedy algorithm @cite_10 @cite_36 @cite_37 @cite_9 @cite_17 @cite_25 @cite_18 @cite_33 . In the other direction, researchers propose new problem settings @cite_21 @cite_30 @cite_28 @cite_34 @cite_5 based on IM. Due to space limitations, we do not list all the variants here; one can refer to a recent tutorial @cite_40 for more details. Despite their success, the final cascade size is the primary criterion these works use to select influential nodes. In contrast, our model incorporates diversity as an additional factor.
{ "cite_N": [ "@cite_30", "@cite_37", "@cite_18", "@cite_22", "@cite_33", "@cite_36", "@cite_28", "@cite_9", "@cite_21", "@cite_40", "@cite_5", "@cite_34", "@cite_10", "@cite_25", "@cite_17" ], "mid": [ "", "1984069252", "2035165116", "2061820396", "2798654119", "2108858998", "", "2108278206", "2009305899", "2788983380", "2740261000", "2963071749", "2141403143", "2132801025", "2963122331" ], "abstract": [ "", "Influence maximization, defined by Kempe, Kleinberg, and Tardos (2003), is the problem of finding a small set of seed nodes in a social network that maximizes the spread of influence under certain influence cascade models. The scalability of influence maximization is a key factor for enabling prevalent viral marketing in large-scale online social networks. Prior solutions, such as the greedy algorithm of (2003) and its improvements are slow and not scalable, while other heuristic algorithms do not provide consistently good performance on influence spreads. In this paper, we design a new heuristic algorithm that is easily scalable to millions of nodes and edges in our experiments. Our algorithm has a simple tunable parameter for users to control the balance between the running time and the influence spread of the algorithm. 
Our results from extensive simulations on several real-world and synthetic networks demonstrate that our algorithm is currently the best scalable solution to the influence maximization problem: (a) our algorithm scales beyond million-sized graphs where the greedy algorithm becomes infeasible, and (b) in all size ranges, our algorithm performs consistently well in influence spread --- it is always among the best algorithms, and in most cases it significantly outperforms all other scalable heuristics to as much as 100 --260 increase in influence spread.", "Given a social network G and a positive integer k, the influence maximization problem asks for k nodes (in G) whose adoptions of a certain idea or product can trigger the largest expected number of follow-up adoptions by the remaining nodes. This problem has been extensively studied in the literature, and the state-of-the-art technique runs in O((k+l) (n+m) log n e2) expected time and returns a (1-1 e-e)-approximate solution with at least 1 - 1 n l probability. This paper presents an influence maximization algorithm that provides the same worst-case guarantees as the state of the art, but offers significantly improved empirical efficiency. The core of our algorithm is a set of estimation techniques based on martingales, a classic statistical tool. Those techniques not only provide accurate results with small computation overheads, but also enable our algorithm to support a larger class of information diffusion models than existing methods do. We experimentally evaluate our algorithm against the states of the art under several popular diffusion models, using real social networks with up to 1.4 billion edges. 
Our experimental results show that the proposed algorithm consistently outperforms the states of the art in terms of computation efficiency, and is often orders of magnitude faster.", "Models for the processes by which ideas and influence propagate through a social network have been studied in a number of domains, including the diffusion of medical and technological innovations, the sudden and widespread adoption of various strategies in game-theoretic settings, and the effects of \"word of mouth\" in the promotion of new products. Recently, motivated by the design of viral marketing strategies, Domingos and Richardson posed a fundamental algorithmic problem for such social network processes: if we can try to convince a subset of individuals to adopt a new product or innovation, and the goal is to trigger a large cascade of further adoptions, which set of individuals should we target?We consider this problem in several of the most widely studied models in social network analysis. The optimization problem of selecting the most influential nodes is NP-hard here, and we provide the first provable approximation guarantees for efficient algorithms. Using an analysis framework based on submodular functions, we show that a natural greedy strategy obtains a solution that is provably within 63 of optimal for several classes of models; our framework suggests a general approach for reasoning about the performance guarantees of algorithms for these types of influence problems in social networks.We also provide computational experiments on large collaboration networks, showing that in addition to their provable guarantees, our approximation algorithms significantly out-perform node-selection heuristics based on the well-studied notions of degree centrality and distance centrality from the field of social networks.", "Influence maximization is a classic and extensively studied problem with important applications in viral marketing. 
Existing algorithms for influence maximization, however, mostly focus on offline processing, in the sense that they do not provide any output to the user until the final answer is derived, and that the user is not allowed to terminate the algorithm early to trade the quality of solution for efficiency. Such lack of interactiveness and flexibility leads to poor user experience, especially when the algorithm incurs long running time. To address the above problem, this paper studies algorithms for online processing of influence maximization (OPIM), where the user can pause the algorithm at any time and ask for a solution (to the influence maximization problem) and its approximation guarantee, and can resume the algorithm to let it improve the quality of solution by giving it more time to run. (This interactive paradigm is similar in spirit to online query processing in database systems.) We show that the only existing algorithm for OPIM is vastly ineffective in practice, and that adopting existing influence maximization methods for OPIM yields unsatisfactory results. Motivated by this, we propose a new algorithm for OPIM with both superior empirical effectiveness and strong theoretical guarantees, and we show that it can also be extended to handle conventional influence maximization. Extensive experiments on real data demonstrate that our solutions outperform the state of the art for both OPIM and conventional influence maximization.", "Influence maximization is the problem of finding a small subset of nodes (seed nodes) in a social network that could maximize the spread of influence. In this paper, we study the efficient influence maximization from two complementary directions. One is to improve the original greedy algorithm of [5] and its improvement [7] to further reduce its running time, and the second is to propose new degree discount heuristics that improves influence spread. 
We evaluate our algorithms by experiments on two large academic collaboration graphs obtained from the online archival database arXiv.org. Our experimental results show that (a) our improved greedy algorithm achieves better running time comparing with the improvement of [7] with matching influence spread, (b) our degree discount heuristics achieve much better influence spread than classic degree and centrality-based heuristics, and when tuned for a specific influence cascade model, it achieves almost matching influence thread with the greedy algorithm, and more importantly (c) the degree discount heuristics run only in milliseconds while even the improved greedy algorithms run in hours in our experiment graphs with a few tens of thousands of nodes. Based on our results, we believe that fine-tuned heuristics may provide truly scalable solutions to the influence maximization problem with satisfying influence spread and blazingly fast running time. Therefore, contrary to what implied by the conclusion of [5] that traditional heuristics are outperformed by the greedy approximation algorithm, our results shed new lights on the research of heuristic algorithms.", "", "Influence maximization is the problem of finding a small set of most influential nodes in a social network so that their aggregated influence in the network is maximized. In this paper, we study influence maximization in the linear threshold model, one of the important models formalizing the behavior of influence propagation in social networks. We first show that computing exact influence in general networks in the linear threshold model is #P-hard, which closes an open problem left in the seminal work on influence maximization by Kempe, Kleinberg, and Tardos, 2003. As a contrast, we show that computing influence in directed a cyclic graphs (DAGs) can be done in time linear to the size of the graphs. 
Based on the fast computation in DAGs, we propose the first scalable influence maximization algorithm tailored for the linear threshold model. We conduct extensive simulations to show that our algorithm is scalable to networks with millions of nodes and edges, is orders of magnitude faster than the greedy approximation algorithm proposed by and its optimized versions, and performs consistently among the best algorithms while other heuristic algorithms not design specifically for the linear threshold model have unstable performances on different real-world networks.", "Influence maximization is a fundamental research problem in social networks. Viral marketing, one of its applications, is to get a small number of users to adopt a product, which subsequently triggers a large cascade of further adoptions by utilizing \"\"\"\"Word-of-Mouth\"\"\"\" effect in social networks. Influence maximization problem has been extensively studied recently. However, none of the previous work considers the time constraint in the influence maximization problem. In this paper, we propose the time constrained influence maximization problem. We show that the problem is NP-hard, and prove the monotonicity and submodularity of the time constrained influence spread function. Based on this, we develop a greedy algorithm with performance guarantees. To improve the algorithm scalability, we propose two Influence Spreading Path based methods. Extensive experiments conducted over four public available datasets demonstrate the efficiency and effectiveness of the Influence Spreading Path based methods.", "Starting with the earliest studies showing that the spread of new trends, information, and innovations is closely related to the social influence exerted on people by their social networks, the research on social influence theory took off, providing remarkable evidence on social influence induced viral phenomena. 
Fueled by the extreme popularity of online social networks and social media, computational social influence has emerged as a subfield of data mining whose goal is to analyze and optimize social influence using computational frameworks such as algorithm design and theoretical modeling. One of the fundamental problems in this field is the problem of influence maximization, primarily motivated by the application of viral marketing. The objective is to identify a small set of users in a social network who, when convinced to adopt a product, shall influence others in the network in a manner that leads to a large number of adoptions. In this tutorial, we extensively survey the research on social influence propagation and maximization, with a focus on the recent algorithmic and theoretical advances. To this end, we provide detailed reviews of the latest research effort devoted to (i) improving the efficiency and scalability of the influence maximization algorithms; (ii) context-aware modeling of the influence maximization problem to better capture real-world marketing scenarios; (iii) modeling and learning of real-world social influence; (iv) bridging the gap between social advertising and viral marketing.", "Influence maximization, the fundamental of viral marketing, aims to find top- @math seed nodes maximizing influence spread under certain spreading models. In this paper, we study influence maximization from a game perspective. We propose a Coordination Game model, in which every individuals make their decisions based on the benefit of coordination with their network neighbors, to study information propagation. Our model serves as the generalization of some existing models, such as Majority Vote model and Linear Threshold model. Under the generalized model, we study the hardness of influence maximization and the approximation guarantee of the greedy algorithm. We also combine several strategies to accelerate the algorithm. 
Experimental results show that after the acceleration, our algorithm significantly outperforms other heuristics, and it is three orders of magnitude faster than the original greedy method.", "Uncertainty about models and data is ubiquitous in the computational social sciences, and it creates a need for robust social network algorithms, which can simultaneously provide guarantees across a spectrum of models and parameter settings. We begin an investigation into this broad domain by studying robust algorithms for the Influence Maximization problem, in which the goal is to identify a set of k nodes in a social network whose joint influence on the network is maximized. We define a Robust Influence Maximization framework wherein an algorithm is presented with a set of influence functions, typically derived from different influence models or different parameter settings for the same model. The different parameter settings could be derived from observed cascades on different topics, under different conditions, or at different times. The algorithm's goal is to identify a set of k nodes who are simultaneously influential for all influence functions, compared to the (function-specific) optimum solutions. We show strong approximation hardness results for this problem unless the algorithm gets to select at least a logarithmic factor more seeds than the optimum solution. However, when enough extra seeds may be selected, we show that techniques of can be used to approximate the optimum robust influence to within a factor of 1-1 e. We evaluate this bicriteria approximation algorithm against natural heuristics on several real-world data sets. Our experiments indicate that the worst-case hardness does not necessarily translate into bad performance on real-world data sets; all algorithms perform fairly well.", "Given a water distribution network, where should we place sensors toquickly detect contaminants? Or, which blogs should we read to avoid missing important stories?. 
These seemingly different problems share common structure: Outbreak detection can be modeled as selecting nodes (sensor locations, blogs) in a network, in order to detect the spreading of a virus or information asquickly as possible. We present a general methodology for near optimal sensor placement in these and related problems. We demonstrate that many realistic outbreak detection objectives (e.g., detection likelihood, population affected) exhibit the property of \"submodularity\". We exploit submodularity to develop an efficient algorithm that scales to large problems, achieving near optimal placements, while being 700 times faster than a simple greedy algorithm. We also derive online bounds on the quality of the placements obtained by any algorithm. Our algorithms and bounds also handle cases where nodes (sensor locations, blogs) have different costs. We evaluate our approach on several large real-world problems,including a model of a water distribution network from the EPA, andreal blog data. The obtained sensor placements are provably near optimal, providing a constant fraction of the optimal solution. We show that the approach scales, achieving speedups and savings in storage of several orders of magnitude. We also show how the approach leads to deeper insights in both applications, answering multicriteria trade-off, cost-sensitivity and generalization questions.", "Given a social network G and a constant @math , the influence maximization problem asks for k nodes in G that (directly and indirectly) influence the largest number of nodes under a pre-defined diffusion model. This problem finds important applications in viral marketing, and has been extensively studied in the literature. Existing algorithms for influence maximization, however, either trade approximation guarantees for practical efficiency, or vice versa. 
In particular, among the algorithms that achieve constant factor approximations under the prominent independent cascade (IC) model or linear threshold (LT) model, none can handle a million-node graph without incurring prohibitive overheads. This paper presents TIM, an algorithm that aims to bridge the theory and practice in influence maximization. On the theory side, we show that TIM runs in O((k+ l) (n+m) log n e2) expected time and returns a (1-1 e-e)-approximate solution with at least 1 - n-l probability. The time complexity of TIM is near-optimal under the IC model, as it is only a log n factor larger than the Ω(m + n) lower-bound established in previous work (for fixed k, l, and e). Moreover, TIM supports the triggering model, which is a general diffusion model that includes both IC and LT as special cases. On the practice side, TIM incorporates novel heuristics that significantly improve its empirical efficiency without compromising its asymptotic performance. We experimentally evaluate TIM with the largest datasets ever tested in the literature, and show that it outperforms the state-of-the-art solutions (with approximation guarantees) by up to four orders of magnitude in terms of running time. In particular, when k = 50, e = 0.2, and l = 1, TIM requires less than one hour on a commodity machine to process a network with 41.6 million nodes and 1.4 billion edges. This demonstrates that influence maximization algorithms can be made practical while still offering strong theoretical guarantees.", "Diffusion is a fundamental graph process, underpinning such phenomena as epidemic disease contagion and the spread of innovation by word-of-mouth. We address the algorithmic problem of finding a set of k initial seed nodes in a network so that the expected size of the resulting cascade is maximized, under the standard independent cascade model of network diffusion. 
Runtime is a primary consideration for this problem due to the massive size of the relevant input networks. We provide a fast algorithm for the influence maximization problem, obtaining the near-optimal approximation factor of (1--1 e -- e), for any e > 0, in time O((m + n)e-3 log n). Our algorithm is runtime-optimal (up to a logarithmic factor) and substantially improves upon the previously best-known algorithms which run in time Ω(mnk · POLY(e-1)). Furthermore, our algorithm can be modified to allow early termination: if it is terminated after O(β(m + n) log n) steps for some β < 1 (which can depend on n), then it returns a solution with approximation factor O(β). Finally, we show that this runtime is optimal (up to logarithmic factors) for any β and fixed seed size k." ] }
1906.09357
2952717691
Influence maximization (IM) has been extensively studied for better viral marketing. However, previous works put less emphasis on how evenly the audience is affected across different communities and how diversely the seed nodes are selected. In this paper, we incorporate audience diversity and seed diversity into the IM task. From the model perspective, in order to characterize both influence spread and diversity in our objective function, we adopt three utilities commonly used in economics (i.e., Perfect Substitutes, Perfect Complements and Cobb-Douglas). We validate our choice of these three functions by showing their desirable properties. From the algorithmic perspective, we present various approximation strategies to maximize the utilities. For audience diversification, we propose a solution-dependent approximation algorithm to circumvent the hardness results. For seed diversification, we prove a ( @math ) approximation ratio based on non-monotonic submodular maximization. Experimental results show that our framework outperforms other natural heuristics in both utility maximization and result diversification.
Diversifying ranked items was first studied in information retrieval @cite_2 . Subsequent efforts @cite_12 @cite_29 @cite_16 @cite_4 propose different models to describe the tradeoff between relevance and diversity. Some of these methods show their power in IR, but they hinge on specific choices of similarity functions and cannot be easily generalized to social-network scenarios. @cite_35 , @cite_14 and @cite_6 then transfer this framework to social networks for diversified node ranking. @cite_32 further generalize the framework to a wider range of relevance and similarity functions. However, they require the authority part to be modular (i.e., authority @math is the sum of authority @math and authority @math ), which does not hold for IM. Besides, some of the studies mentioned above require the dissimilarity function between two elements to form a metric, which is not true in some settings (e.g., the community and embedding settings discussed below).
{ "cite_N": [ "@cite_35", "@cite_14", "@cite_4", "@cite_29", "@cite_32", "@cite_6", "@cite_2", "@cite_16", "@cite_12" ], "mid": [ "1546477643", "2147406357", "2787705708", "2152228468", "2160774044", "1975917757", "2083305840", "", "1993320088" ], "abstract": [ "We introduce a novel ranking algorithm called GRASSHOPPER, which ranks items with an emphasis on diversity. That is, the top items should be different from each other in order to have a broad coverage of the whole item set. Many natural language processing tasks can benefit from such diversity ranking. Our algorithm is based on random walks in an absorbing Markov chain. We turn ranked items into absorbing states, which effectively prevents redundant items from receiving a high rank. We demonstrate GRASSHOPPER’s effectiveness on extractive text summarization: our algorithm ranks between the 1st and 2nd systems on DUC 2004 Task 2; and on a social network analysis task that identifies movie stars of the world.", "Information networks are widely used to characterize the relationships between data items such as text documents. Many important retrieval and mining tasks rely on ranking the data items based on their centrality or prestige in the network. Beyond prestige, diversity has been recognized as a crucial objective in ranking, aiming at providing a non-redundant and high coverage piece of information in the top ranked results. Nevertheless, existing network-based ranking approaches either disregard the concern of diversity, or handle it with non-optimized heuristics, usually based on greedy vertex selection. We propose a novel ranking algorithm, DivRank, based on a reinforced random walk in an information network. This model automatically balances the prestige and the diversity of the top ranked vertices in a principled way. DivRank not only has a clear optimization explanation, but also well connects to classical models in mathematics and network science. 
We evaluate DivRank using empirical experiments on three different networks as well as a text summarization task. DivRank outperforms existing network-based ranking methods in terms of enhancing diversity in prestige.", "Max-sum diversity is a fundamental primitive for web search and data mining. For a given set S of n elements, it returns a subset of k«l n representatives maximizing the sum of their pairwise distances, where distance models dissimilarity. An important variant of the primitive prescribes that the desired subset of representatives satisfies an additional orthogonal requirement, which can be specified as a matroid constraint (i.e., a feasible solution must be an independent set of size k). While unconstrained max-sum diversity admits efficient coreset-based strategies, the only known approaches dealing with the additional matroid constraint are inherently sequential and are based on an expensive local search over the entire input set. We devise the first coreset constructions for max-sum diversity under various matroid constraints, together with efficient sequential, MapReduce and Streaming implementations. By running the local-search on the coreset rather than on the entire input, we obtain the first practical solutions for large instances. Technically, our coresets are subsets of S containing a feasible solution which is no more than a factor 1-e away from the optimal solution, for any fixed e", "Understanding user intent is key to designing an effective ranking system in a search engine. In the absence of any explicit knowledge of user intent, search engines want to diversify results to improve user satisfaction. In such a setting, the probability ranking principle-based approach of presenting the most relevant results on top can be sub-optimal, and hence the search engine would like to trade-off relevance for diversity in the results. 
In analogy to prior work on ranking and clustering systems, we use the axiomatic approach to characterize and design diversification systems. We develop a set of natural axioms that a diversification system is expected to satisfy, and show that no diversification function can satisfy all the axioms simultaneously. We illustrate the use of the axiomatic framework by providing three example diversification objectives that satisfy different subsets of the axioms. We also uncover a rich link to the facility dispersion problem that results in algorithms for a number of diversification objectives. Finally, we propose an evaluation methodology to characterize the objectives and the underlying axioms. We conduct a large scale evaluation of our objectives based on two data sets: a data set derived from the Wikipedia disambiguation pages and a product database.", "Diversified ranking is a fundamental task in machine learning. It is broadly applicable in many real world problems, e.g., information retrieval, team assembling, product search, etc. In this paper, we consider a generic setting where we aim to diversify the top-k ranking list based on an arbitrary relevance function and an arbitrary similarity function among all the examples. We formulate it as an optimization problem and show that in general it is NP-hard. Then, we show that for a large volume of the parameter space, the proposed objective function enjoys the diminishing returns property, which enables us to design a scalable, greedy algorithm to find the (1 - 1 e) near-optimal solution. Experimental results on real data sets demonstrate the effectiveness of the proposed algorithm.", "Diversified ranking on graphs is a fundamental mining task and has a variety of high-impact applications. There are two important open questions here. The first challenge is the measure - how to quantify the goodness of a given top-k ranking list that captures both the relevance and the diversity? 
The second challenge lies in the algorithmic aspect - how to find an optimal, or near-optimal, top-k ranking list that maximizes the measure we defined in a scalable way? In this paper, we address these challenges from an optimization point of view. Firstly, we propose a goodness measure for a given top-k ranking list. The proposed goodness measure intuitively captures both (a) the relevance between each individual node in the ranking list and the query; and (b) the diversity among different nodes in the ranking list. Moreover, we propose a scalable algorithm (linear wrt the size of the graph) that generates a provably near-optimal solution. The experimental evaluations on real graphs demonstrate its effectiveness and efficiency.", "This paper presents a method for combining query-relevance with information-novelty in the context of text retrieval and summarization. The Maximal Marginal Relevance (MMR) criterion strives to reduce redundancy while maintaining query relevance in re-ranking retrieved documents and in selecting apprw priate passages for text summarization. Preliminary results indicate some benefits for MMR diversity ranking in document retrieval and in single document summarization. The latter are borne out by the recent results of the SUMMAC conference in the evaluation of summarization systems. However, the clearest advantage is demonstrated in constructing non-redundant multi-document summaries, where MMR results are clearly superior to non-MMR passage selection.", "", "We study the problem of answering ambiguous web queries in a setting where there exists a taxonomy of information, and that both queries and documents may belong to more than one category according to this taxonomy. We present a systematic approach to diversifying results that aims to minimize the risk of dissatisfaction of the average user. We propose an algorithm that well approximates this objective in general, and is provably optimal for a natural special case. 
Furthermore, we generalize several classical IR metrics, including NDCG, MRR, and MAP, to explicitly account for the value of diversification. We demonstrate empirically that our algorithm scores higher in these generalized metrics compared to results produced by commercial search engines." ] }
1906.09357
2952717691
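The greedy relevance–diversity re-ranking discussed in the related-work text above (the MMR criterion and its submodular descendants) can be sketched in a few lines. The tradeoff weight, the scores, and the dissimilarity function below are illustrative assumptions, not the exact formulation of any cited paper:

```python
def mmr_rerank(candidates, relevance, dissimilarity, k, lam=0.5):
    """Greedy Maximal Marginal Relevance (MMR) re-ranking sketch.

    candidates:    list of item ids
    relevance:     dict mapping item -> relevance score to the query
    dissimilarity: function (a, b) -> dissimilarity score (higher = more diverse)
    lam:           tradeoff weight between relevance and diversity
    """
    selected = []
    remaining = set(candidates)
    while remaining and len(selected) < k:
        def mmr_score(item):
            # Diversity term: distance to the closest already-selected item.
            div = min((dissimilarity(item, s) for s in selected), default=1.0)
            return lam * relevance[item] + (1 - lam) * div
        best = max(remaining, key=mmr_score)
        selected.append(best)
        remaining.remove(best)
    return selected
```

With lam=1 this degenerates to plain relevance ranking; lowering lam pushes the list toward mutually dissimilar items, which is exactly the tradeoff the models above parameterize in different ways.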
Influence maximization (IM) has been extensively studied for better viral marketing. However, previous works put less emphasis on how balancedly the audience are affected across different communities and how diversely the seed nodes are selected. In this paper, we incorporate audience diversity and seed diversity into the IM task. From the model perspective, in order to characterize both influence spread and diversity in our objective function, we adopt three commonly used utilities in economics (i.e., Perfect Substitutes, Perfect Complements and Cobb-Douglas). We validate our choices of these three functions by showing their nice properties. From the algorithmic perspective, we present various approximation strategies to maximize the utilities. In audience diversification, we propose a solution-dependent approximation algorithm to circumvent the hardness results. In seed diversification, we prove a ( @math ) approximation ratio based on non-monotonic submodular maximization. Experimental results show that our framework outperforms other natural heuristics both in utility maximization and result diversification.
@cite_31 @cite_8 @cite_7 study how to maximize the diversity of exposure in a network. Their goal is to recommend diverse content to the audience so that friends can have different knowledge. This differs from our goal of diversifying the influenced nodes with the same piece of content. To the best of our knowledge, @cite_23 is the only previous work exploring audience and seed diversity in social influence maximization. However, their objective is essentially a weighted sum of spread and diversity. In contrast, our framework systematically studies a family of utility functions from the perspective of economics @cite_0 .
{ "cite_N": [ "@cite_7", "@cite_8", "@cite_0", "@cite_23", "@cite_31" ], "mid": [ "2963864976", "2890093401", "2090317466", "2041860676", "2963383366" ], "abstract": [ "Social media have a great potential to improve information dissemination in our society, yet, they have been held accountable for a number of undesirable effects, such as polarization and filter bubbles. It is thus important to understand these negative phenomena and develop methods to combat them. In this paper we propose a novel approach to address the problem of breaking filter bubbles in social media. We do so by aiming to maximize the diversity of the information exposed to connected social-media users. We formulate the problem of maximizing the diversity of exposure as a quadratic-knapsack problem. We show that the proposed diversity-maximization problem is inapproximable, and thus, we resort to polynomial non-approximable algorithms, inspired by solutions developed for the quadratic knapsack problem, as well as scalable greedy heuristics. We complement our algorithms with instance-specific upper bounds, which are used to provide empirical approximation guarantees for the given problem instances. Our experimental evaluation shows that a proposed greedy algorithm followed by randomized local search is the algorithm of choice given its quality-vs.-efficiency trade-off.", "Social-media platforms have created new ways for citizens to stay informed and participate in public debates. However, to enable a healthy environment for information sharing, social deliberation, and opinion formation, citizens need to be exposed to sufficiently diverse viewpoints that challenge their assumptions, instead of being trapped inside filter bubbles. In this paper, we take a step in this direction and propose a novel approach to maximize the diversity of exposure in a social network. 
We formulate the problem in the context of information propagation, as a task of recommending a small number of news articles to selected users. We propose a realistic setting where we take into account content and user leanings, and the probability of further sharing an article. This setting allows us to capture the balance between maximizing the spread of information and ensuring the exposure of users to diverse viewpoints. The resulting problem can be cast as maximizing a monotone and submodular function subject to a matroid constraint on the allocation of articles to users. It is a challenging generalization of the influence maximization problem. Yet, we are able to devise scalable approximation algorithms by introducing a novel extension to the notion of random reverse-reachable sets. We experimentally demonstrate the efficiency and scalability of our algorithm on several real-world datasets.", "The worldwide best-selling intermediate microeconomics textbook is distinguished by its remarkably up-to-date and rigorous yet accessible analytical approach. The seventh edition has been carefully updated and revised, adding a wealth of new applications and examples that analyse the important lessons offered by eBay, drug companies, the Yellow Pages and even Maine Lobstermen.", "For better viral marketing, there has been a lot of research on social influence maximization. However, the problem that who is influenced and how diverse the influenced population is, which is important in real-world marketing, has largely been neglected. To that end, in this paper, we propose to consider the magnitude of influence and the diversity of the influenced crowd simultaneously. Specifically, we formulate it as an optimization problem, i.e., diversified social influence maximization. First, we present a general framework for this problem, under which we construct a class of diversity measures to quantify the diversity of the influenced crowd. 
Meanwhile, we prove that a simple greedy algorithm guarantees to provide a near-optimal solution to the optimization problem. Furthermore, we relax the problem by focusing on the diversity of the nodes targeted for initial activation, and show how this relaxed form could be used to diversify the results of many heuristics, e.g., PageRank. Finally, we run extensive experiments on two real-world datasets, showing that our formulation is effective in generating diverse results.", "Social media has brought a revolution on how people are consuming news. Beyond the undoubtedly large number of advantages brought by social-media platforms, a point of criticism has been the creation of echo chambers and filter bubbles, caused by social homophily and algorithmic personalization. In this paper we address the problem of balancing the information exposure in a social network. We assume that two opposing campaigns (or viewpoints) are present in the network, and that network nodes have different preferences towards these campaigns. Our goal is to find two sets of nodes to employ in the respective campaigns, so that the overall information exposure for the two campaigns is balanced. We formally define the problem, characterize its hardness, develop approximation algorithms, and present experimental evaluation results. Our model is inspired by the literature on influence maximization, but we offer significant novelties. First, balance of information exposure is modeled by a symmetric difference function, which is neither monotone nor submodular, and thus, not amenable to existing approaches. Second, while previous papers consider a setting with selfish agents and provide bounds on best response strategies (i.e., move of the last player), we consider a setting with a centralized agent and provide bounds for a global objective function." ] }
1906.09383
2950630404
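The IM backbone that all of the diversified variants above build on is greedy seed selection against a monotone submodular spread estimate. A minimal sketch with a Monte-Carlo independent-cascade simulation is below; the graph layout, propagation probability, and plain-spread objective are illustrative assumptions and do not include the paper's diversity-aware utilities:

```python
import random

def simulate_spread(graph, seeds, p=0.1, trials=200, rng=None):
    """Estimate expected spread of `seeds` under the independent cascade model.

    graph: dict mapping node -> list of out-neighbors.
    Each newly activated node gets one chance to activate each neighbor with
    probability p; the estimate averages the final cascade size over trials.
    """
    rng = rng or random.Random(0)  # fixed seed keeps the sketch deterministic
    total = 0
    for _ in range(trials):
        active = set(seeds)
        frontier = list(seeds)
        while frontier:
            nxt = []
            for u in frontier:
                for v in graph.get(u, []):
                    if v not in active and rng.random() < p:
                        active.add(v)
                        nxt.append(v)
            frontier = nxt
        total += len(active)
    return total / trials

def greedy_seeds(graph, k, p=0.1):
    """Greedily add the node with the largest estimated marginal spread gain."""
    seeds = []
    for _ in range(k):
        best = max(set(graph) - set(seeds),
                   key=lambda v: simulate_spread(graph, seeds + [v], p))
        seeds.append(best)
    return seeds
```

Because the spread estimate is monotone and submodular, this greedy loop carries the classical (1 - 1/e) guarantee; the seed-diversification result cited above instead handles the non-monotonic case.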
In this report, we present the Baidu-UTS submission to the EPIC-Kitchens Action Recognition Challenge at CVPR 2019. This is the winning solution to the challenge. In this task, the goal is to predict verbs, nouns, and actions from the vocabulary for each video segment. The EPIC-Kitchens dataset contains various small objects, intense motion blur, and occlusions, making it challenging to locate and recognize the object that an actor interacts with. To address these problems, we utilize object detection features to guide the training of 3D Convolutional Neural Networks (CNNs), which significantly improves the accuracy of noun prediction. Specifically, we introduce a Gated Feature Aggregator module to learn jointly from the clip feature and the object feature. This module strengthens the interaction between the two kinds of activations and avoids exploding gradients. Experimental results demonstrate that our approach outperforms other methods on both the seen and unseen test sets.
Third-person video classification has attracted a great deal of research in the past few years. Two-stream convolutional networks @cite_23 utilize optical flow for motion modeling, while 3D convolutional networks @cite_9 @cite_6 @cite_8 @cite_22 have recently achieved better performance than their 2D counterparts. Recurrent Neural Networks (RNNs) are effective architectures for modeling long sequences and have been found useful for video classification in @cite_10 @cite_12 . Other aggregation methods, such as VLAD @cite_20 and ActionVLAD @cite_7 , are also commonly used.
{ "cite_N": [ "@cite_22", "@cite_7", "@cite_8", "@cite_9", "@cite_6", "@cite_23", "@cite_20", "@cite_10", "@cite_12" ], "mid": [ "2951971882", "2608988379", "2953328958", "1522734439", "2619082050", "2156303437", "1950136256", "2952550003", "2524365899" ], "abstract": [ "Video classification methods often divide the video into short clips, do inference on these clips independently, and then aggregate these predictions to generate the final classification result. Treating these highly-correlated clips as independent both ignores the temporal structure of the signal and carries a large computational cost: the model must process each clip from scratch. To reduce this cost, recent efforts have focused on designing more efficient clip-level network architectures. Less attention, however, has been paid to the overall framework, including how to benefit from correlations between neighboring clips and improving the aggregation strategy itself. In this paper we leverage the correlation between adjacent video clips to address the problem of computational cost efficiency in video classification at the aggregation stage. More specifically, given a clip feature representation, the problem of computing next clip's representation becomes much easier. We propose a novel recurrent architecture called FASTER for video-level classification, that combines high quality, expensive representations of clips, that capture the action in detail, and lightweight representations, which capture scene changes in the video and avoid redundant computation. We also propose a novel processing unit to learn integration of clip-level representations, as well as their temporal structure. We call this unit FAST-GRU, as it is based on the Gated Recurrent Unit (GRU). The proposed framework achieves significantly better FLOPs vs. accuracy trade-off at inference time. 
Compared to existing approaches, our proposed framework reduces the FLOPs by more than 10x while maintaining similar accuracy across popular datasets, such as Kinetics, UCF101 and HMDB51.", "In this work, we introduce a new video representation for action classification that aggregates local convolutional features across the entire spatio-temporal extent of the video. We do so by integrating state-of-the-art two-stream networks [42] with learnable spatio-temporal feature aggregation [6]. The resulting architecture is end-to-end trainable for whole-video classification. We investigate different strategies for pooling across space and time and combining signals from the different streams. We find that: (i) it is important to pool jointly across space and time, but (ii) appearance and motion streams are best aggregated into their own separate representations. Finally, we show that our representation outperforms the two-stream base architecture by a large margin (13 relative) as well as outperforms other baselines with comparable base architectures on HMDB51, UCF101, and Charades video classification benchmarks.", "We present a simple, highly modularized network architecture for image classification. Our network is constructed by repeating a building block that aggregates a set of transformations with the same topology. Our simple design results in a homogeneous, multi-branch architecture that has only a few hyper-parameters to set. This strategy exposes a new dimension, which we call \"cardinality\" (the size of the set of transformations), as an essential factor in addition to the dimensions of depth and width. On the ImageNet-1K dataset, we empirically show that even under the restricted condition of maintaining complexity, increasing cardinality is able to improve classification accuracy. Moreover, increasing cardinality is more effective than going deeper or wider when we increase the capacity. 
Our models, named ResNeXt, are the foundations of our entry to the ILSVRC 2016 classification task in which we secured 2nd place. We further investigate ResNeXt on an ImageNet-5K set and the COCO detection set, also showing better results than its ResNet counterpart. The code and models are publicly available online.", "We propose a simple, yet effective approach for spatiotemporal feature learning using deep 3-dimensional convolutional networks (3D ConvNets) trained on a large scale supervised video dataset. Our findings are three-fold: 1) 3D ConvNets are more suitable for spatiotemporal feature learning compared to 2D ConvNets, 2) A homogeneous architecture with small 3x3x3 convolution kernels in all layers is among the best performing architectures for 3D ConvNets, and 3) Our learned features, namely C3D (Convolutional 3D), with a simple linear classifier outperform state-of-the-art methods on 4 different benchmarks and are comparable with current best methods on the other 2 benchmarks. In addition, the features are compact: achieving 52.8 accuracy on UCF101 dataset with only 10 dimensions and also very efficient to compute due to the fast inference of ConvNets. Finally, they are conceptually very simple and easy to train and use.", "The paucity of videos in current action classification datasets (UCF-101 and HMDB-51) has made it difficult to identify good video architectures, as most methods obtain similar performance on existing small-scale benchmarks. This paper re-evaluates state-of-the-art architectures in light of the new Kinetics Human Action Video dataset. Kinetics has two orders of magnitude more data, with 400 human action classes and over 400 clips per class, and is collected from realistic, challenging YouTube videos. We provide an analysis on how current architectures fare on the task of action classification on this dataset and how much performance improves on the smaller benchmark datasets after pre-training on Kinetics. 
We also introduce a new Two-Stream Inflated 3D ConvNet (I3D) that is based on 2D ConvNet inflation: filters and pooling kernels of very deep image classification ConvNets are expanded into 3D, making it possible to learn seamless spatio-temporal feature extractors from video while leveraging successful ImageNet architecture designs and even their parameters. We show that, after pre-training on Kinetics, I3D models considerably improve upon the state-of-the-art in action classification, reaching 80.9 on HMDB-51 and 98.0 on UCF-101.", "We investigate architectures of discriminatively trained deep Convolutional Networks (ConvNets) for action recognition in video. The challenge is to capture the complementary information on appearance from still frames and motion between frames. We also aim to generalise the best performing hand-crafted features within a data-driven learning framework. Our contribution is three-fold. First, we propose a two-stream ConvNet architecture which incorporates spatial and temporal networks. Second, we demonstrate that a ConvNet trained on multi-frame dense optical flow is able to achieve very good performance in spite of limited training data. Finally, we show that multitask learning, applied to two different action classification datasets, can be used to increase the amount of training data and improve the performance on both. Our architecture is trained and evaluated on the standard video actions benchmarks of UCF-101 and HMDB-51, where it is competitive with the state of the art. It also exceeds by a large margin previous attempts to use deep nets for video classification.", "In this paper, we propose a discriminative video representation for event detection over a large scale video dataset when only limited hardware resources are available. 
The focus of this paper is to effectively leverage deep Convolutional Neural Networks (CNNs) to advance event detection, where only frame level static descriptors can be extracted by the existing CNN toolkits. This paper makes two contributions to the inference of CNN video representation. First, while average pooling and max pooling have long been the standard approaches to aggregating frame level static features, we show that performance can be significantly improved by taking advantage of an appropriate encoding method. Second, we propose using a set of latent concept descriptors as the frame descriptor, which enriches visual information while keeping it computationally affordable. The integration of the two contributions results in a new state-of-the-art performance in event detection over the largest video datasets. Compared to improved Dense Trajectories, which has been recognized as the best video representation for event detection, our new representation improves the Mean Average Precision (mAP) from 27.6 to 36.8 for the TRECVID MEDTest 14 dataset and from 34.0 to 44.6 for the TRECVID MEDTest 13 dataset.", "Despite the recent success of neural networks in image feature learning, a major problem in the video domain is the lack of sufficient labeled data for learning to model temporal information. In this paper, we propose an unsupervised temporal modeling method that learns from untrimmed videos. The speed of motion varies constantly, e.g., a man may run quickly or slowly. We therefore train a Multirate Visual Recurrent Model (MVRM) by encoding frames of a clip with different intervals. This learning process makes the learned model more capable of dealing with motion speed variance. Given a clip sampled from a video, we use its past and future neighboring clips as the temporal context, and reconstruct the two temporal transitions, i.e., present @math past transition and present @math future transition, reflecting the temporal information in different views. 
The proposed method exploits the two transitions simultaneously by incorporating a bidirectional reconstruction which consists of a backward reconstruction and a forward reconstruction. We apply the proposed method to two challenging video tasks, i.e., complex event detection and video captioning, in which it achieves state-of-the-art performance. Notably, our method generates the best single feature for event detection with a relative improvement of 10.4 on the MEDTest-13 dataset and achieves the best performance in video captioning across all evaluation metrics on the YouTube2Text dataset.", "Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of 8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. 
Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics." ] }
1906.09383
2950630404
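The TSN baseline referenced in the record above rests on two simple ideas: sparse snippet sampling over equal-length segments and a consensus (typically an average) over per-snippet class scores. A minimal numpy sketch of that recipe, with dummy scores standing in for network outputs:

```python
import numpy as np

def sample_segment_indices(num_frames, num_segments, rng=None):
    """TSN-style sparse sampling: one random frame index per equal segment."""
    rng = rng or np.random.default_rng(0)
    bounds = np.linspace(0, num_frames, num_segments + 1).astype(int)
    # Draw one frame index uniformly from each [lo, hi) segment.
    return np.array([rng.integers(lo, hi)
                     for lo, hi in zip(bounds[:-1], bounds[1:])])

def segmental_consensus(snippet_scores):
    """Average consensus over per-snippet class scores (segments x classes)."""
    return snippet_scores.mean(axis=0)
```

At training time the consensus makes the loss depend on the whole video rather than a single clip, which is the video-level supervision TSN is known for.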
In this report, we present the Baidu-UTS submission to the EPIC-Kitchens Action Recognition Challenge at CVPR 2019. This is the winning solution to the challenge. In this task, the goal is to predict verbs, nouns, and actions from the vocabulary for each video segment. The EPIC-Kitchens dataset contains various small objects, intense motion blur, and occlusions, making it challenging to locate and recognize the object that an actor interacts with. To address these problems, we utilize object detection features to guide the training of 3D Convolutional Neural Networks (CNNs), which significantly improves the accuracy of noun prediction. Specifically, we introduce a Gated Feature Aggregator module to learn jointly from the clip feature and the object feature. This module strengthens the interaction between the two kinds of activations and avoids exploding gradients. Experimental results demonstrate that our approach outperforms other methods on both the seen and unseen test sets.
We discuss several methods evaluated on the EPIC-Kitchens dataset. The authors of EPIC-Kitchens provide a baseline result on the recognition benchmark. They train a Temporal Segment Network (TSN) @cite_2 to predict verb and noun classes jointly. The two-stream TSN achieves the best performance on verb prediction, and RGB-TSN outperforms their other models on noun prediction. However, without a design specialized for egocentric videos, this state-of-the-art method for third-person video recognition does not achieve promising results, especially on noun classification.
{ "cite_N": [ "@cite_2" ], "mid": [ "2950971447" ], "abstract": [ "Deep convolutional networks have achieved great success for visual recognition in still images. However, for action recognition in videos, the advantage over traditional methods is not so evident. This paper aims to discover the principles to design effective ConvNet architectures for action recognition in videos and learn these models given limited training samples. Our first contribution is temporal segment network (TSN), a novel framework for video-based action recognition. which is based on the idea of long-range temporal structure modeling. It combines a sparse temporal sampling strategy and video-level supervision to enable efficient and effective learning using the whole action video. The other contribution is our study on a series of good practices in learning ConvNets on video data with the help of temporal segment network. Our approach obtains the state-the-of-art performance on the datasets of HMDB51 ( @math ) and UCF101 ( @math ). We also visualize the learned ConvNet models, which qualitatively demonstrates the effectiveness of temporal segment network and the proposed good practices." ] }
1906.09383
2950630404
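The abstract above describes a Gated Feature Aggregator that combines a clip feature with an object-detection feature. One common form such gated fusion takes is a per-dimension sigmoid gate computed from the concatenated streams; this is a generic sketch under that assumption, and the report's actual module may differ:

```python
import numpy as np

def gated_fusion(clip_feat, obj_feat, W, b):
    """Gated aggregation of a clip feature and an object feature (sketch).

    A sigmoid gate computed from the concatenated features decides, per
    dimension, how much of each stream to keep. W has shape (d, 2d) and
    b has shape (d,), where d is the feature dimension; in practice these
    would be learned parameters.
    """
    x = np.concatenate([clip_feat, obj_feat])
    gate = 1.0 / (1.0 + np.exp(-(W @ x + b)))     # sigmoid gate in (0, 1)
    return gate * clip_feat + (1.0 - gate) * obj_feat
```

Because the gate is a convex combination, the fused activation stays bounded by the two inputs, which is one plausible way a gated aggregator keeps gradients from exploding.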
In this report, we present the Baidu-UTS submission to the EPIC-Kitchens Action Recognition Challenge at CVPR 2019. This is the winning solution to the challenge. In this task, the goal is to predict verbs, nouns, and actions from the vocabulary for each video segment. The EPIC-Kitchens dataset contains various small objects, intense motion blur, and occlusions, making it challenging to locate and recognize the object that an actor interacts with. To address these problems, we utilize object detection features to guide the training of 3D Convolutional Neural Networks (CNNs), which significantly improves the accuracy of noun prediction. Specifically, we introduce a Gated Feature Aggregator module to learn jointly from the clip feature and the object feature. This module strengthens the interaction between the two kinds of activations and avoids exploding gradients. Experimental results demonstrate that our approach outperforms other methods on both the seen and unseen test sets.
Attention mechanisms are effective at locating regions of interest on the feature map. @cite_14 propose a Long Short-Term Attention model to focus on features from relevant spatial parts. They extend LSTM with a recurrent attention component and an output pooling component to track the discriminative area smoothly across the video sequence. Their model obtains a significant gain over the TSN baseline.
{ "cite_N": [ "@cite_14" ], "mid": [ "2903409063" ], "abstract": [ "Egocentric activity recognition is one of the most challenging tasks in video analysis. It requires a fine-grained discrimination of small objects and their manipulation. While some methods base on strong supervision and attention mechanisms, they are either annotation consuming or do not take spatio-temporal patterns into account. In this paper we propose LSTA as a mechanism to focus on features from spatial relevant parts while attention is being tracked smoothly across the video sequence. We demonstrate the effectiveness of LSTA on egocentric activity recognition with an end-to-end trainable two-stream architecture, achieving state of the art performance on four standard benchmarks." ] }
1906.09383
2950630404
In this report, we present the Baidu-UTS submission to the EPIC-Kitchens Action Recognition Challenge in CVPR 2019. This is the winning solution to this challenge. In this task, the goal is to predict verbs, nouns, and actions from the vocabulary for each video segment. The EPIC-Kitchens dataset contains various small objects, intense motion blur, and occlusions. It is challenging to locate and recognize the object that an actor interacts with. To address these problems, we utilize object detection features to guide the training of 3D Convolutional Neural Networks (CNN), which can significantly improve the accuracy of noun prediction. Specifically, we introduce a Gated Feature Aggregator module to learn from the clip feature and the object feature. This module can strengthen the interaction between the two kinds of activations and avoid gradient exploding. Experimental results demonstrate our approach outperforms other methods on both seen and unseen test set.
Object detection models are another powerful way to extract object-related features. @cite_5 propose to perform object-level visual reasoning about spatio-temporal interactions in videos through the integration of object detection networks. More recently, @cite_19 combine a Long-Term Feature Bank, which contains object-centric detection features, with a 3D CNN to improve the accuracy of noun recognition.
{ "cite_N": [ "@cite_19", "@cite_5" ], "mid": [ "2904080086", "2808675313" ], "abstract": [ "To understand the world, we humans constantly need to relate the present to the past, and put events in context. In this paper, we enable existing video models to do the same. We propose a long-term feature bank---supportive information extracted over the entire span of a video---to augment state-of-the-art video models that otherwise would only view short clips of 2-5 seconds. Our experiments demonstrate that augmenting 3D convolutional networks with a long-term feature bank yields state-of-the-art results on three challenging video datasets: AVA, EPIC-Kitchens, and Charades.", "Human activity recognition is typically addressed by detecting key concepts like global and local motion, features related to object classes present in the scene, as well as features related to the global context. The next open challenges in activity recognition require a level of understanding that pushes beyond this and call for models with capabilities for fine distinction and detailed comprehension of interactions between actors and objects in a scene. We propose a model capable of learning to reason about semantically meaningful spatio-temporal interactions in videos. The key to our approach is a choice of performing this reasoning at the object level through the integration of state of the art object detection networks. This allows the model to learn detailed spatial interactions that exist at a semantic, object-interaction relevant level. We evaluate our method on three standard datasets (Twenty-BN Something-Something, VLOG and EPIC Kitchens) and achieve state of the art results on all of them. Finally, we show visualizations of the interactions learned by the model, which illustrate object classes and their interactions corresponding to different activity classes." ] }
1906.09383
2950630404
In this report, we present the Baidu-UTS submission to the EPIC-Kitchens Action Recognition Challenge in CVPR 2019. This is the winning solution to this challenge. In this task, the goal is to predict verbs, nouns, and actions from the vocabulary for each video segment. The EPIC-Kitchens dataset contains various small objects, intense motion blur, and occlusions. It is challenging to locate and recognize the object that an actor interacts with. To address these problems, we utilize object detection features to guide the training of 3D Convolutional Neural Networks (CNN), which can significantly improve the accuracy of noun prediction. Specifically, we introduce a Gated Feature Aggregator module to learn from the clip feature and the object feature. This module can strengthen the interaction between the two kinds of activations and avoid gradient exploding. Experimental results demonstrate our approach outperforms other methods on both seen and unseen test set.
Following the success of pretraining in image recognition, pretraining on large-scale datasets can boost the performance of deep learning models. @cite_16 construct a large-scale video dataset with a verb-noun label space. They pretrain a deep 3D CNN on this data and then finetune the model on EPIC-Kitchens. Their model achieves strong results, especially on the unseen test set.
{ "cite_N": [ "@cite_16" ], "mid": [ "2942642798" ], "abstract": [ "Current fully-supervised video datasets consist of only a few hundred thousand videos and fewer than a thousand domain-specific labels. This hinders the progress towards advanced video architectures. This paper presents an in-depth study of using large volumes of web videos for pre-training video models for the task of action recognition. Our primary empirical finding is that pre-training at a very large scale (over 65 million videos), despite on noisy social-media videos and hashtags, substantially improves the state-of-the-art on three challenging public action recognition datasets. Further, we examine three questions in the construction of weakly-supervised video action datasets. First, given that actions involve interactions with objects, how should one construct a verb-object pre-training label space to benefit transfer learning the most? Second, frame-based models perform quite well on action recognition; is pre-training for good image features sufficient or is pre-training for spatio-temporal features valuable for optimal transfer learning? Finally, actions are generally less well-localized in long videos vs. short videos; since action labels are provided at a video level, how should one choose video clips for best performance, given some fixed budget of number or minutes of videos?" ] }
1906.09223
2951308022
We propose a novel framework for multi-task reinforcement learning (MTRL). Using a variational inference formulation, we learn policies that generalize across both changing dynamics and goals. The resulting policies are parametrized by shared parameters that allow for transfer between different dynamics and goal conditions, and by task-specific latent-space embeddings that allow for specialization to particular tasks. We show how the latent-spaces enable generalization to unseen dynamics and goals conditions. Additionally, policies equipped with such embeddings serve as a space of skills (or options) for hierarchical reinforcement learning. Since we can change task dynamics and goals independently, we name our framework Disentangled Skill Embeddings (DSE).
We derive our algorithm using a variational inference formulation for RL. Several previous works have described RL as an inference problem, including its relation to entropy regularization @cite_2 @cite_4 @cite_0 @cite_6 @cite_1 . This formalism has recently attracted attention because it provides a powerful and intuitive way to describe more complex agent architectures using tools from graphical models.
{ "cite_N": [ "@cite_4", "@cite_1", "@cite_6", "@cite_0", "@cite_2" ], "mid": [ "2781726626", "2098774185", "1499669280", "2337698339", "2594103415" ], "abstract": [ "Model-free deep reinforcement learning (RL) algorithms have been demonstrated on a range of challenging decision making and control tasks. However, these methods typically suffer from two major challenges: very high sample complexity and brittle convergence properties, which necessitate meticulous hyperparameter tuning. Both of these challenges severely limit the applicability of such methods to complex, real-world domains. In this paper, we propose soft actor-critic, an off-policy actor-critic deep RL algorithm based on the maximum entropy reinforcement learning framework. In this framework, the actor aims to maximize expected reward while also maximizing entropy - that is, succeed at the task while acting as randomly as possible. Prior deep RL methods based on this framework have been formulated as Q-learning methods. By combining off-policy updates with a stable stochastic actor-critic formulation, our method achieves state-of-the-art performance on a range of continuous control benchmark tasks, outperforming prior on-policy and off-policy methods. Furthermore, we demonstrate that, in contrast to other off-policy algorithms, our approach is very stable, achieving very similar performance across different random seeds.", "Recent research has shown the benefit of framing problems of imitation learning as solutions to Markov Decision Problems. This approach reduces learning to the problem of recovering a utility function that makes the behavior induced by a near-optimal policy closely mimic demonstrated behavior. In this work, we develop a probabilistic approach based on the principle of maximum entropy. Our approach provides a well-defined, globally normalized distribution over decision sequences, while providing the same performance guarantees as existing methods. 
We develop our technique in the context of modeling real-world navigation and driving behaviors where collected data is inherently noisy and imperfect. Our probabilistic approach enables modeling of route preferences as well as a powerful new approach to inferring destinations and routes based on partial trajectories.", "Policy search is a successful approach to reinforcement learning. However, policy improvements often result in the loss of information. Hence, it has been marred by premature convergence and implausible solutions. As first suggested in the context of covariant policy gradients (Bagnell and Schneider 2003), many of these problems may be addressed by constraining the information loss. In this paper, we continue this path of reasoning and suggest the Relative Entropy Policy Search (REPS) method. The resulting method differs significantly from previous policy gradient approaches and yields an exact update step. It works well on typical reinforcement learning benchmark problems.", "Information-theoretic principles for learning and acting have been proposed to solve particular classes of Markov Decision Problems. Mathematically, such approaches are governed by a variational free energy principle and allow solving MDP planning problems with information-processing constraints expressed in terms of a Kullback-Leibler divergence with respect to a reference distribution. Here we consider a generalization of such MDP planners by taking model uncertainty into account. As model uncertainty can also be formalized as an information-processing constraint, we can derive a unified solution from a single generalized variational principle. We provide a generalized value iteration scheme together with a convergence proof. As limit cases, this generalized scheme includes standard value iteration with a known model, Bayesian MDP planning, and robust planning. 
We demonstrate the benefits of this approach in a grid world simulation.", "We propose a method for learning expressive energy-based policies for continuous states and actions, which has been feasible only in tabular domains before. We apply our method to learning maximum entropy policies, resulting into a new algorithm, called soft Q-learning, that expresses the optimal policy via a Boltzmann distribution. We use the recently proposed amortized Stein variational gradient descent to learn a stochastic sampling network that approximates samples from this distribution. The benefits of the proposed algorithm include improved exploration and compositionality that allows transferring skills between tasks, which we confirm in simulated experiments with swimming and walking robots. We also draw a connection to actor-critic methods, which can be viewed performing approximate inference on the corresponding energy-based model." ] }
1906.09494
2952268029
This paper considers sparse device activity detection for cellular machine-type communications with non-orthogonal signatures using the approximate message passing algorithm. This paper compares two network architectures, massive multiple-input-multiple-output (MIMO) and cooperative MIMO, in terms of their effectiveness in overcoming inter-cell interference. In the massive MIMO architecture, each base station (BS) detects only the users from its own cell while treating inter-cell interference as noise. In the cooperative MIMO architecture, each BS detects the users from neighboring cells as well; the detection results are then forwarded in the form of a log-likelihood ratio (LLR) to a central unit where final decisions are made. This paper analytically characterizes the probabilities of false alarm and missed detection for both architectures. The numerical results validate the analytic characterization and show that as the number of antennas increases, a massive MIMO system effectively drives the detection error to zero, while as the cooperation size increases, the cooperative MIMO architecture mainly improves the cell-edge user performance. Moreover, this paper studies the effect of LLR quantization to account for the finite-capacity fronthaul. The numerical simulations of a practical scenario suggest that in specific case cooperating three BSs in a cooperative MIMO system achieves about the same cell-edge detection reliability as a non-cooperative massive MIMO system with four times the number of antennas per BS.
The device activity detection problem has been investigated in a variety of wireless systems using different approaches. For example, @cite_12 @cite_20 propose the use of compressed sensing techniques for joint user activity detection and data detection or channel estimation in cellular systems, without considering the effect of inter-cell interference. In code-division multiple access systems, sparse user activity detection is considered jointly with multi-user detection via a sparsity-exploiting maximum a posteriori probability (S-MAP) approach in @cite_23 . By further exploiting channel statistics, @cite_18 adopts the AMP algorithm with a Bayesian denoiser for activity detection and characterizes the detection performance. In @cite_4 , two approaches, compressed sensing and coded slotted ALOHA, are compared in terms of detection accuracy and energy efficiency for user activity detection.
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_23", "@cite_12", "@cite_20" ], "mid": [ "2784331297", "2964348058", "2125838225", "1543985333", "2061225176" ], "abstract": [ "This paper considers the massive connectivity application in which a large number of devices communicate with a base-station (BS) in a sporadic fashion. Device activity detection and channel estimation are central problems in such a scenario. Due to the large number of potential devices, the devices need to be assigned non-orthogonal signature sequences. The main objective of this paper is to show that by using random signature sequences and by exploiting sparsity in the user activity pattern, the joint user detection and channel estimation problem can be formulated as a compressed sensing single measurement vector (SMV) or multiple measurement vector (MMV) problem depending on whether the BS has a single antenna or multiple antennas and efficiently solved using an approximate message passing (AMP) algorithm. This paper proposes an AMP algorithm design that exploits the statistics of the wireless channel and provides an analytical characterization of the probabilities of false alarm and missed detection via state evolution. We consider two cases depending on whether or not the large-scale component of the channel fading is known at the BS and design the minimum mean squared error denoiser for AMP according to the channel statistics. Simulation results demonstrate the substantial advantage of exploiting the channel statistics in AMP design; however, knowing the large-scale fading component does not appear to offer tangible benefits. For the multiple-antenna case, we employ two different AMP algorithms, namely the AMP with vector denoiser and the parallel AMP-MMV, and quantify the benefit of deploying multiple antennas.", "Machine-type communication services in mobile cellular systems are currently evolving with an aim to efficiently address a massive-scale user access to the system. 
One of the key problems in this respect is to efficiently identify active users in order to allocate them resources for the subsequent transmissions. In this paper, we examine two recently suggested approaches for user activity detection: compressed-sensing (CS) and coded slotted ALOHA (CSA), and provide their comparison in terms of performance vs resource utilization. Our preliminary results show that CS-based approach is able to provide the target user activity detection performance with less overall system resource utilization. However, this comes at a price of lower energy-efficiency per user, as compared to CSA-based approach.", "The number of active users in code-division multiple access (CDMA) systems is often much lower than the spreading gain. The present paper exploits fruitfully this a priori information to improve performance of multiuser detectors. A low-activity factor manifests itself in a sparse symbol vector with entries drawn from a finite alphabet that is augmented by the zero symbol to capture user inactivity. The non-equiprobable symbols of the augmented alphabet motivate a sparsity-exploiting maximum a posteriori probability (S-MAP) criterion, which is shown to yield a cost comprising the l2 least-squares error penalized by the p-th norm of the wanted symbol vector (p = 0, 1, 2). Related optimization problems appear in variable selection (shrinkage) schemes developed for linear regression, as well as in the emerging field of compressive sampling (CS). The contribution of this work to such sparse CDMA systems is a gamut of sparsity-exploiting multiuser detectors trading off performance for complexity requirements. 
From the vantage point of CS and the least-absolute shrinkage selection operator (Lasso) spectrum of applications, the contribution amounts to sparsity-exploiting algorithms when the entries of the wanted signal vector adhere to finite-alphabet constraints.", "Massive Machine Type Communication is seen as one major driver for the research of new physical layer technologies for future communication systems. To handle massive access, the main challenges are avoiding control signaling overhead, low complexity data processing per sensor, supporting of diverse but rather low data rates and a flexible and scalable access. To address all these challenges, we propose a combination of compressed sensing based detection known as Compressed Sensing based Multi User Detection (CS-MUD) with multicarrier access schemes. We name this novel combination Multicarrier CS-MUD (MCSM). Previous investigations on CS-MUD facilitates massive direct random access by exploiting the signal sparsity caused by sporadic sensor activity. The new combined scheme MCSM with its flexibility in accessing time frequency resources additionally allows for either reducing the number of subcarriers or shortening the multicarrier symbol duration, i.e., we gain a high spectral efficiency. Simulation results are given to show the performance of the proposed scheme.", "As it becomes increasingly apparent that 4G will not be able to meet the emerging demands of future mobile communication systems, the question what could make up a 5G system, what are the crucial challenges, and what are the key drivers is part of intensive, ongoing discussions. Partly due to the advent of compressive sensing, methods that can optimally exploit sparsity in signals have received tremendous attention in recent years. In this paper, we will describe a variety of scenarios in which signal sparsity arises naturally in 5G wireless systems. 
Signal sparsity and the associated rich collection of tools and algorithms will thus be a viable source for innovation in 5G wireless system design. We will also describe applications of this sparse signal processing paradigm in Multiple Input Multiple Output random access, cloud radio access networks, compressive channel-source network coding, and embedded security. We will also emphasize an important open problem that may arise in 5G system design, for which sparsity will potentially play a key role in their solution." ] }
1906.09494
2952268029
This paper considers sparse device activity detection for cellular machine-type communications with non-orthogonal signatures using the approximate message passing algorithm. This paper compares two network architectures, massive multiple-input-multiple-output (MIMO) and cooperative MIMO, in terms of their effectiveness in overcoming inter-cell interference. In the massive MIMO architecture, each base station (BS) detects only the users from its own cell while treating inter-cell interference as noise. In the cooperative MIMO architecture, each BS detects the users from neighboring cells as well; the detection results are then forwarded in the form of a log-likelihood ratio (LLR) to a central unit where final decisions are made. This paper analytically characterizes the probabilities of false alarm and missed detection for both architectures. The numerical results validate the analytic characterization and show that as the number of antennas increases, a massive MIMO system effectively drives the detection error to zero, while as the cooperation size increases, the cooperative MIMO architecture mainly improves the cell-edge user performance. Moreover, this paper studies the effect of LLR quantization to account for the finite-capacity fronthaul. The numerical simulations of a practical scenario suggest that in specific case cooperating three BSs in a cooperative MIMO system achieves about the same cell-edge detection reliability as a non-cooperative massive MIMO system with four times the number of antennas per BS.
In the context of massive MIMO systems, user activity detection has been considered in @cite_8 @cite_27 @cite_1 @cite_7 . Assuming non-orthogonal Gaussian sequences, @cite_8 studies the user activity detection performance in a single-cell scenario in the asymptotic regime, showing that perfect detection can be achieved by employing AMP. By using mutually orthogonal pilot sequences and designing an uncoordinated pilot collision resolution protocol, @cite_27 investigates user activity detection in multi-cell massive MIMO systems and analyzes the collision probability. In the massive MIMO setup, @cite_1 studies the scaling law of user activity detection with finite-length signature sequences by focusing on the covariance matrix of the received signal across the antenna domain. In particular, @cite_1 shows that covariance-based techniques enable a massive connectivity network to accommodate more users than the existing compressed sensing techniques. By embedding one bit of control-signaling information in the user activity detection, @cite_7 proposes an AMP-based joint user activity detection and information decoding method for massive MIMO systems.
{ "cite_N": [ "@cite_27", "@cite_1", "@cite_7", "@cite_8" ], "mid": [ "2338465892", "2808785612", "2963216488", "2706056020" ], "abstract": [ "The massive multiple-input multiple-output (MIMO) technology has great potential to manage the rapid growth of wireless data traffic. Massive MIMO achieves tremendous spectral efficiency by spatial multiplexing many tens of user equipments (UEs). These gains are only achieved in practice if many more UEs can connect efficiently to the network than today. As the number of UEs increases, while each UE intermittently accesses the network, the random access functionality becomes essential to share the limited number of pilots among the UEs. In this paper, we revisit the random access problem in the Massive MIMO context and develop a reengineered protocol, termed strongest-user collision resolution (SUCRe). An accessing UE asks for a dedicated pilot by sending an uncoordinated random access pilot, with a risk that other UEs send the same pilot. The favorable propagation of massive MIMO channels is utilized to enable distributed collision detection at each UE, thereby determining the strength of the contenders’ signals and deciding to repeat the pilot if the UE judges that its signal at the receiver is the strongest. The SUCRe protocol resolves the vast majority of all pilot collisions in crowded urban scenarios and continues to admit UEs efficiently in overloaded networks.", "In this paper, we study the problem of (AD) in a massive MIMO setup, where the Base Station (BS) has @math antennas. We consider a block fading channel model where the @math -dim channel vector of each user remains almost constant over a (CB) containing @math signal dimensions. We study a setting in which the number of potential users @math assigned to a specific CB is much larger than the dimension of the CB @math ( @math ) but at each time slot only @math of them are active. 
Most of the previous results, based on compressed sensing, require that @math , which is a bottleneck in massive deployment scenarios such as Internet-of-Things (IoT) and Device-to-Device (D2D) communication. In this paper, we show that one can overcome this fundamental limitation when the number of BS antennas @math is sufficiently large. More specifically, we derive a on the parameters @math and also (SNR) under which our proposed AD scheme succeeds. Our analysis indicates that with a CB of dimension @math , and a sufficient number of BS antennas @math with @math , one can identify the activity of @math active users, which is much larger than the previous bound @math obtained via traditional compressed sensing techniques. In particular, in our proposed scheme one needs to pay only a poly-logarithmic penalty @math for increasing the number of potential users @math , which makes it ideally suited for AD in IoT setups. We propose low-complexity algorithms for AD and provide numerical simulations to illustrate our results. We also compare the performance of our proposed AD algorithms with that of other competitive algorithms in the literature.", "Future cellular networks will support a massive number of devices as a result of emerging technologies such as Internet-of-Things and sensor networks. Enhanced by machine type communication (MTC), low-power low-complex devices in the order of billions are projected to receive service from cellular networks. Contrary to traditional networks which are designed to handle human driven traffic, future networks must cope with MTC based systems that exhibit sparse traffic properties, operate with small packets and contain a large number of devices. Such a system requires smarter control signaling schemes for efficient use of system resources. In this work, we consider a grant-free random access cellular network and propose an approach which jointly detects user activity and single information bit per packet. 
The proposed approach is inspired by the approximate message passing (AMP) and demonstrates a superior performance compared to the original AMP approach. Furthermore, the numerical analysis reveals that the performance of the proposed approach scales with number of devices, which makes it suitable for user detection in cellular networks with massive number of devices.", "This two-part paper considers an uplink massive device communication scenario in which a large number of devices are connected to a base station (BS), but user traffic is sporadic so that in any given coherence interval, only a subset of users is active. The objective is to quantify the cost of active user detection and channel estimation and to characterize the overall achievable rate of a grant-free two-phase access scheme in which device activity detection and channel estimation are performed jointly using pilot sequences in the first phase and data is transmitted in the second phase. In order to accommodate a large number of simultaneously transmitting devices, this paper studies an asymptotic regime where the BS is equipped with a massive number of antennas. The main contributions of Part I of this paper are as follows. First, we note that as a consequence of having a large pool of potentially active devices but limited coherence time, the pilot sequences cannot all be orthogonal. However, despite the nonorthogonality, this paper shows that in the asymptotic massive multiple-input multiple-output regime, both the missed device detection and the false alarm probabilities for activity detection can always be made to go to zero by utilizing compressed sensing techniques that exploit sparsity in the user activity pattern. Part II of this paper further characterizes the achievable rates using the proposed scheme and quantifies the cost of using nonorthogonal pilot sequences for channel estimation in achievable rates." ] }
1906.09494
2952268029
This paper considers sparse device activity detection for cellular machine-type communications with non-orthogonal signatures using the approximate message passing algorithm. This paper compares two network architectures, massive multiple-input-multiple-output (MIMO) and cooperative MIMO, in terms of their effectiveness in overcoming inter-cell interference. In the massive MIMO architecture, each base station (BS) detects only the users from its own cell while treating inter-cell interference as noise. In the cooperative MIMO architecture, each BS detects the users from neighboring cells as well; the detection results are then forwarded in the form of a log-likelihood ratio (LLR) to a central unit where final decisions are made. This paper analytically characterizes the probabilities of false alarm and missed detection for both architectures. The numerical results validate the analytic characterization and show that as the number of antennas increases, a massive MIMO system effectively drives the detection error to zero, while as the cooperation size increases, the cooperative MIMO architecture mainly improves the cell-edge user performance. Moreover, this paper studies the effect of LLR quantization to account for the finite-capacity fronthaul. The numerical simulations of a practical scenario suggest that in specific case cooperating three BSs in a cooperative MIMO system achieves about the same cell-edge detection reliability as a non-cooperative massive MIMO system with four times the number of antennas per BS.
User activity detection has also been studied for cloud radio access networks (C-RAN) in @cite_11 @cite_2 . A Bayesian compressed sensing algorithm is proposed in @cite_11 , where the received signals from all the BSs are concatenated at the CU, followed by joint user activity detection. Considering the limited capacity of the fronthaul links between the BSs and the CU, @cite_2 compares two schemes---centralized detection with received-signal quantization and distributed detection with log-likelihood ratio (LLR) quantization---via simulations, demonstrating that centralized detection is preferred with high fronthaul capacity, whereas distributed detection is preferred with low fronthaul capacity.
{ "cite_N": [ "@cite_2", "@cite_11" ], "mid": [ "2418615607", "1574260077" ], "abstract": [ "Cloud-radio access network (C-RAN) is characterized by a hierarchical structure, in which the baseband-processing functionalities of remote radio heads (RRHs) are implemented by means of cloud computing at a central unit (CU). A key limitation of C-RANs is given by the capacity constraints of the fronthaul links connecting RRHs to the CU. In this letter, the impact of this architectural constraint is investigated for the fundamental functions of random access and active user equipment (UE) identification in the presence of a potentially massive number of UEs. In particular, the standard C-RAN approach based on quantize-and-forward and centralized detection is compared to a scheme based on an alternative CU-RRH functional split that enables local detection. Both techniques leverage Bayesian sparse detection. Numerical results illustrate the relative merits of the two schemes as a function of the system parameters.", "Cloud Radio Access Network (CRAN) is proposed as a promising network architecture for future mobile communications. In this paper, we consider the topic of active user detection (AUD) and channel estimation (CE) in uplink CRAN systems with sparse active users. Different from conventional AUD and CE approaches which require the length of uplink pilots to scale with the number of users times the number of antennas per user, a novel algorithm will be proposed to substantially reduce the uplink training overhead by leveraging the technique of compressive sensing (CS). To achieve this goal, we first transform the problem of AUD and CE into standard CS problems. We then propose a modified Bayesian compressive sensing (BCS) algorithm to conduct AUD and CE in CRAN, which exploits not only the active user sparsity, but also the innate heterogeneous path loss effects and the joint sparsity structures in multi-antenna uplink CRAN systems." ] }
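The distributed scheme described above forwards per-user LLRs over a finite-capacity fronthaul to a central unit. A minimal sketch of how a CU might combine quantized LLRs, assuming a uniform mid-rise quantizer and a zero decision threshold (the step size, bit width and LLR values are illustrative assumptions, not taken from the cited works):

```python
import numpy as np

def quantize(llr, step=0.5, n_bits=3):
    """Uniform mid-rise quantizer for LLRs, modeling a finite-capacity fronthaul."""
    levels = 2 ** n_bits
    q = np.round(llr / step)
    return step * np.clip(q, -(levels // 2), levels // 2 - 1)

def fuse_llrs(llr_per_bs, step=0.5, n_bits=3):
    """CU sums the quantized LLRs from cooperating BSs and thresholds at zero."""
    total = sum(quantize(l, step, n_bits) for l in llr_per_bs)
    return total > 0  # True = user declared active

# toy example: 3 cooperating BSs, 4 users (values made up)
llr_bs = [np.array([2.1, -1.5, 0.4, -0.2]),
          np.array([1.8, -0.9, 0.6, -1.1]),
          np.array([0.7, -2.0, 0.5, 0.1])]
decisions = fuse_llrs(llr_bs)
```

A larger `n_bits` trades fronthaul load for LLR fidelity; the centralized alternative discussed in @cite_2 would instead quantize the received signals themselves before any detection.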
1906.09494
2952268029
This paper considers sparse device activity detection for cellular machine-type communications with non-orthogonal signatures using the approximate message passing algorithm. This paper compares two network architectures, massive multiple-input-multiple-output (MIMO) and cooperative MIMO, in terms of their effectiveness in overcoming inter-cell interference. In the massive MIMO architecture, each base station (BS) detects only the users from its own cell while treating inter-cell interference as noise. In the cooperative MIMO architecture, each BS detects the users from neighboring cells as well; the detection results are then forwarded in the form of a log-likelihood ratio (LLR) to a central unit where final decisions are made. This paper analytically characterizes the probabilities of false alarm and missed detection for both architectures. The numerical results validate the analytic characterization and show that as the number of antennas increases, a massive MIMO system effectively drives the detection error to zero, while as the cooperation size increases, the cooperative MIMO architecture mainly improves the cell-edge user performance. Moreover, this paper studies the effect of LLR quantization to account for the finite-capacity fronthaul. The numerical simulations of a practical scenario suggest that a cooperative MIMO system with three cooperating BSs achieves about the same cell-edge detection reliability as a non-cooperative massive MIMO system with four times the number of antennas per BS.
Besides the aforementioned works on design and analysis for practical networks, there are also related works @cite_17 @cite_6 @cite_22 @cite_16 that address the massive random access problem with sparse user activity from an information-theoretic perspective.
{ "cite_N": [ "@cite_16", "@cite_22", "@cite_6", "@cite_17" ], "mid": [ "2743723736", "2742359230", "2753791741", "2963128632" ], "abstract": [ "We consider an uncoordinated Gaussian multiple access channel with a relatively large number of active users within each block. A low complexity coding scheme is proposed, which is based on a combination of compute-and-forward and coding for a binary adder channel. For a wide regime of parameters of practical interest, the energy-per-bit required by each user in the proposed scheme is significantly smaller than that required by popular solutions such as slotted-ALOHA and treating interference as noise.", "This paper discusses the contemporary problem of providing multiple-access (MAC) to a massive number of uncoordinated users. First, we define a random-access code for Ka-user Gaussian MAC to be a collection of norm-constrained vectors such that the noisy sum of any K a of them can be decoded with a given (suitably defined) probability of error. An achievability bound for such codes is proposed and compared against popular practical solutions: ALOHA, coded slotted ALOHA, CDMA, and treating interference as noise. It is found out that as the number of users increases existing solutions become vastly energy-inefficient. Second, we discuss the asymptotic (in blocklength) problem of coding for a K-user Gaussian MAC when K is proportional to blocklength and each user's payload is fixed. It is discovered that the energy-per-bit vs. spectral efficiency exhibits a rather curious tradeoff in this case.", "This paper aims to provide an information theoretical analysis of massive device connectivity scenario in which a large number of devices with sporadic traffic communicate in the uplink to a base-station (BS). In each coherence time interval, the BS needs to identify the active devices, to estimate their channels, and to decode the transmitted messages from the devices. 
This paper first derives an information theoretic upper bound on the overall transmission rate. We then provide a degree-of-freedom (DoF) analysis that illustrates the cost of device identification for massive connectivity. We show that the optimal number of active devices is strictly less than half of the coherence time slots, and the achievable DoF decreases linearly with the number of active devices when it exceeds the number of receive antennas. This paper further presents a two-phase practical framework in which device identification and channel estimation are performed jointly using compressed sensing techniques in the first phase, with data transmission taking place in the second phase. We outline the opportunities in utilizing compressed sensing results to analyze the performance of the overall framework and to optimize the system parameters.", "Classical multiuser information theory studies the fundamental limits of models with a fixed (often small) number of users as the coding blocklength goes to infinity. This paper proposes a new paradigm, referred to as many-user information theory , where the number of users is allowed to grow with the blocklength. This paradigm is motivated by emerging systems with a massive number of users in an area, such as the Internet of Things. The focus of this paper is the many-access channel model, which consists of a single receiver and many transmitters, whose number increases unboundedly with the blocklength. Moreover, an unknown subset of transmitters may transmit in a given block and need to be identified as well as decoded by the receiver. A new notion of capacity is introduced and characterized for the Gaussian many-access channel with random user activities. The capacity can be achieved by first detecting the set of active users and then decoding their messages. The minimum cost of identifying the active users is also quantified." ] }
1906.09314
2950283806
Quorum systems are a key abstraction in distributed fault-tolerant computing for capturing trust assumptions. They can be found at the core of many algorithms for implementing reliable broadcasts, shared memory, consensus and other problems. This paper introduces asymmetric Byzantine quorum systems that model subjective trust. Every process is free to choose which combinations of other processes it trusts and which ones it considers faulty. Asymmetric quorum systems strictly generalize standard Byzantine quorum systems, which have only one global trust assumption for all processes. This work also presents protocols that implement abstractions of shared memory and broadcast primitives with processes prone to Byzantine faults and asymmetric trust. The model and protocols pave the way for realizing more elaborate algorithms with asymmetric trust.
Damgård et al. @cite_13 introduce asymmetric trust in the context of synchronous protocols for secure distributed computation by modeling process-specific fail-prone systems. They state the consistency property of asymmetric Byzantine quorums and claim (without proof) that the @math property is required for implementing a synchronous broadcast protocol in this setting. However, they do not formalize quorum systems nor discuss asynchronous protocols.
{ "cite_N": [ "@cite_13" ], "mid": [ "1573994738" ], "abstract": [ "In the standard general-adversary model for multi-party protocols, a global adversary structure is given, and every party must trust in this particular structure. We introduce a more general model, the asymmetric-trust model, wherein every party is allowed to trust in a different, personally customized adversary structure. We have two main contributions. First, we present non-trivial lower and upper bounds for broadcast, verifiable secret sharing, and general multi-party computation in different variations of this new model. The obtained bounds demonstrate that the new model is strictly more powerful than the standard general-adversary model. Second, we propose a framework for expressing and analyzing asymmetric trust in the usual simulation paradigm for defining security of protocols, and in particular show a general composition theorem for protocols with asymmetric trust." ] }
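The consistency condition for classical (symmetric) Byzantine quorum systems, which the asymmetric model generalizes per process, requires that no three fail-prone sets jointly cover the whole system (the @math condition). It can be checked by brute force; the fail-prone systems below are illustrative:

```python
from itertools import combinations_with_replacement

def satisfies_q3(processes, fail_prone):
    """Q^3 condition: no union of three fail-prone sets covers the whole system."""
    P = set(processes)
    for F1, F2, F3 in combinations_with_replacement(fail_prone, 3):
        if set(F1) | set(F2) | set(F3) >= P:
            return False
    return True

# 4 processes, at most one Byzantine (n = 4 > 3f with f = 1): Q^3 holds
ok = satisfies_q3(range(4), [{0}, {1}, {2}, {3}])
# 3 processes, at most one Byzantine: {0} | {1} | {2} covers everything
bad = satisfies_q3(range(3), [{0}, {1}, {2}])
```

For threshold fail-prone systems this reduces to the familiar n > 3f bound; the asymmetric variant replaces the single global check with one per pair of processes and their respective assumptions.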
1906.09314
2950283806
Quorum systems are a key abstraction in distributed fault-tolerant computing for capturing trust assumptions. They can be found at the core of many algorithms for implementing reliable broadcasts, shared memory, consensus and other problems. This paper introduces asymmetric Byzantine quorum systems that model subjective trust. Every process is free to choose which combinations of other processes it trusts and which ones it considers faulty. Asymmetric quorum systems strictly generalize standard Byzantine quorum systems, which have only one global trust assumption for all processes. This work also presents protocols that implement abstractions of shared memory and broadcast primitives with processes prone to Byzantine faults and asymmetric trust. The model and protocols pave the way for realizing more elaborate algorithms with asymmetric trust.
The Stellar consensus protocol also features open membership and lets every node express its own set of trusted nodes @cite_22 . Generalizing from Ripple's flat lists of unique nodes, every node declares a collection of trusted sets called slices, whereby a slice is ``the subset of a quorum convincing one particular node of agreement.'' A quorum in Stellar is ``a set of nodes sufficient to reach agreement,'' defined as a set of nodes that contains one slice for each member node. The quorum choices of all nodes together yield a federated Byzantine quorum system (FBQS). The Stellar white paper states properties of FBQS and protocols that build on them. However, these protocols do not map to known protocol primitives in distributed computing.
{ "cite_N": [ "@cite_22" ], "mid": [ "2553854225" ], "abstract": [ "While several consensus algorithms exist for the Byzantine Generals Problem, specifically as it pertains to distributed payment systems, many suffer from high latency induced by the requirement that all nodes within the network communicate synchronously. In this work, we present a novel consensus algorithm that circumvents this requirement by utilizing collectively-trusted subnetworks within the larger network. We show that the “trust” required of these subnetworks is in fact minimal and can be further reduced with principled choice of the member nodes. In addition, we show that minimal connectivity is required to maintain agreement throughout the whole network. The result is a low-latency consensus algorithm which still maintains robustness in the face of Byzantine failures. We present this algorithm in its embodiment in the Ripple Protocol." ] }
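Stellar's quorum definition above (a set that contains one slice of each of its members) is directly checkable. A sketch under that informal definition, with a hypothetical slice assignment:

```python
def is_quorum(candidate, slices):
    """A nonempty set is a quorum iff it contains at least one slice of each member."""
    U = set(candidate)
    return bool(U) and all(
        any(set(S) <= U for S in slices[v]) for v in U
    )

# hypothetical FBQS on nodes {a, b, c, d}; slice choices are made up
slices = {
    "a": [{"a", "b"}, {"a", "c"}],
    "b": [{"b", "c"}],
    "c": [{"b", "c"}],
    "d": [{"d", "a", "b"}],
}
q1 = is_quorum({"a", "b", "c"}, slices)   # contains a slice of a, of b and of c
q2 = is_quorum({"a", "d"}, slices)        # d's only slice also requires b
```

Note that, unlike a symmetric quorum system, the family of quorums here is induced by the individual slice declarations rather than given globally.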
1906.09314
2950283806
Quorum systems are a key abstraction in distributed fault-tolerant computing for capturing trust assumptions. They can be found at the core of many algorithms for implementing reliable broadcasts, shared memory, consensus and other problems. This paper introduces asymmetric Byzantine quorum systems that model subjective trust. Every process is free to choose which combinations of other processes it trusts and which ones it considers faulty. Asymmetric quorum systems strictly generalize standard Byzantine quorum systems, which have only one global trust assumption for all processes. This work also presents protocols that implement abstractions of shared memory and broadcast primitives with processes prone to Byzantine faults and asymmetric trust. The model and protocols pave the way for realizing more elaborate algorithms with asymmetric trust.
García-Pérez and Gotsman @cite_23 build a link from FBQS to existing quorum-system concepts by investigating a Byzantine reliable broadcast abstraction in an FBQS. They show that the federated voting protocol of Stellar @cite_22 is similar to Bracha's reliable broadcast @cite_17 and that it implements a variation of Byzantine reliable broadcast on an FBQS for executions that additionally contain a set of so-called intact nodes.
{ "cite_N": [ "@cite_22", "@cite_23", "@cite_17" ], "mid": [ "2553854225", "", "1988655303" ], "abstract": [ "While several consensus algorithms exist for the Byzantine Generals Problem, specifically as it pertains to distributed payment systems, many suffer from high latency induced by the requirement that all nodes within the network communicate synchronously. In this work, we present a novel consensus algorithm that circumvents this requirement by utilizing collectively-trusted subnetworks within the larger network. We show that the “trust” required of these subnetworks is in fact minimal and can be further reduced with principled choice of the member nodes. In addition, we show that minimal connectivity is required to maintain agreement throughout the whole network. The result is a low-latency consensus algorithm which still maintains robustness in the face of Byzantine failures. We present this algorithm in its embodiment in the Ripple Protocol.", "", "Abstract A consensus protocol enables a system of n asynchronous processes, some of them faulty, to reach agreement. Both the processes and the message system are capable of cooperating to prevent the correct processes from reaching decision. A protocol is t -resilient if in the presence of up to t faulty processes it reaches agreement with probability 1. Byzantine processes are faulty processes that can deviate arbitrarily from the protocol; Fail-Stop processes can just stop participating in it. In a recent paper, t -resilient randomized consensus protocols were presented for t n 5 . We improve this to t n 3 , thus matching the known lower bound on the number of correct processes necessary for consensus. The protocol uses a general technique in which the behavior of the Byzantine processes is restricted by the use of a broadcast protocol that filters some of the messages. The apparent behavior of the Byzantine processes, filtered by the broadcast protocol, is similar to that of Fail-Stop processes. 
Plugging the broadcast protocol as a communicating primitive into an agreement protocol for Fail-Stop processes gives the result. This technique, of using broadcast protocols to reduce the power of the faulty processes and then using them as communication primitives in algorithms designed for weaker failure models, was used succesfully in other contexts." ] }
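Bracha's reliable broadcast @cite_17 , referenced above, advances through echo and ready phases with fixed thresholds: a node sends ready after more than (n+f)/2 echoes, amplifies ready after f+1 readies, and delivers after 2f+1 readies, under n > 3f. A sketch of just this threshold logic for a single receiver and a single message (payloads, signatures and networking are omitted simplifications):

```python
class BrachaReceiver:
    """Threshold state machine of Bracha's reliable broadcast for one message."""

    def __init__(self, n, f):
        assert n > 3 * f, "Bracha's protocol requires n > 3f"
        self.n, self.f = n, f
        self.echoes, self.readies = set(), set()
        self.sent_ready = False
        self.delivered = False

    def on_echo(self, sender):
        self.echoes.add(sender)
        # more than (n + f) / 2 echoes -> send ready
        if 2 * len(self.echoes) > self.n + self.f:
            self.sent_ready = True

    def on_ready(self, sender):
        self.readies.add(sender)
        if len(self.readies) > self.f:        # f + 1 readies: amplify ready
            self.sent_ready = True
        if len(self.readies) > 2 * self.f:    # 2f + 1 readies: deliver
            self.delivered = True

r = BrachaReceiver(n=4, f=1)
for p in (0, 1, 2):
    r.on_echo(p)      # 3 echoes: 6 > 5, so ready is sent
for p in (0, 1, 2):
    r.on_ready(p)     # 3 readies: 3 > 2, so the message is delivered
```

The asymmetric protocols in the abstract above replace these global thresholds with per-process quorum and kernel checks, but the echo/ready phase structure is the same.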
1906.09505
2950264428
We present an error-tolerant path planning algorithm for Micro Aerial Vehicle (MAV) swarms. We assume a MAV navigation system that does not rely on GPS-like techniques. The MAVs find their navigation path by using their sensors and cameras to identify and follow a series of visual landmarks. The visual landmarks lead the MAVs towards the target destination. MAVs are assumed to be unaware of the terrain and locations of the landmarks. Landmarks are also assumed to hold a-priori information, whose interpretation (by the MAVs) is prone to errors. We distinguish two types of errors, namely, recognition and advice. Recognition errors are due to misinterpretation of sensed data and a-priori information or confusion of objects (e.g., due to faulty sensors). Advice errors are due to outdated or wrong information associated with the landmarks (e.g., due to weather conditions). Our path planning algorithm proposes swarm cooperation. MAVs communicate and exchange information wirelessly, to minimize the recognition and advice error ratios. By doing this, the navigation system experiences a quality amplification in terms of error reduction. As a result, our solution successfully provides an adaptive error-tolerant navigation system. Quality amplification is parametrized with regard to the number of MAVs. We validate our approach with theoretical proofs and numeric simulations.
Surveys on path planning algorithms for unmanned aerial vehicles have been authored by @cite_18 and @cite_21 . Several algorithms build on solutions originally created for computer networks. Some of the proposed solutions leverage algorithms created in the field of classical robotics, such as approaches using artificial potential functions @cite_12 , random trees @cite_2 or Voronoi diagrams @cite_5 . Path planning may be addressed in conjunction with teamwork and formation control @cite_10 . There are also ideas that have been tailored specifically to quadcopters @cite_20 .
{ "cite_N": [ "@cite_18", "@cite_21", "@cite_2", "@cite_5", "@cite_20", "@cite_10", "@cite_12" ], "mid": [ "1984505032", "2798000375", "131069610", "", "1993156509", "1964462572", "2103120971" ], "abstract": [ "A fundamental aspect of autonomous vehicle guidance is planning trajectories. Historically, two fields have contributed to trajectory or motion planning methods: robotics and dynamics and control. The former typically have a stronger focus on computational issues and real-time robot control, while the latter emphasize the dynamic behavior and more specific aspects of trajectory performance. Guidance for Unmanned Aerial Vehicles (UAVs), including fixed- and rotary-wing aircraft, involves significant differences from most traditionally defined mobile and manipulator robots. Qualities characteristic to UAVs include non-trivial dynamics, three-dimensional environments, disturbed operating conditions, and high levels of uncertainty in state knowledge. Otherwise, UAV guidance shares qualities with typical robotic motion planning problems, including partial knowledge of the environment and tasks that can range from basic goal interception, which can be precisely specified, to more general tasks like surveillance and reconnaissance, which are harder to specify. These basic planning problems involve continual interaction with the environment. The purpose of this paper is to provide an overview of existing motion planning algorithms while adding perspectives and practical examples from UAV guidance approaches.", "Unmanned aerial vehicles (UAVs) have recently attracted the attention of researchers due to their numerous potential civilian applications. However, current robot navigation technologies need furth...", "", "", "Potential-field-based control strategy for path planning and formation control of multi Quadrotor systems is proposed in this work. The potential field is used to attract the Quadrotor to the goal location as well as avoiding the obstacle. 
The algorithm to solve the so called local minima problem by utilizing the wall-following behavior is also proposed. The resulted path planning via potential function strategy is then used to design formation control algorithm. Using the virtual leader approach, the formation control strategy by means of potential function is formulated. Each Quadrotor is assigned attractive potential to reach its desired position in formation. Each Quadrotor is also treated as moving obstacle which induces repulsive potential to the others in order to prevent inter-robot collision. The desired position for each Quadrotor in certain configuration is calculated based on the virtual leaderâ��s current position. To get smooth movement, the virtual leader is set as unicycle robot. The repulsive potential is also attached to each agent in case it moves in the region inside the obstacleâ��s influence. The overall strategy has been successfully applied to the Quadrotorâ��s model of Parrot AR Drone 2.0 in Gazebo simulator programmed using Robot Operating System.", "In this paper, we consider the problem of concurrent assignment and planning of trajectories (which we denote Capt) for a team of robots. This problem involves simultaneously addressing two challenges: (1) the combinatorially complex problem of finding a suitable assignment of robots to goal locations, and (2) the generation of collision-free, time parameterized trajectories for every robot. We consider the Capt problem for unlabeled (interchangeable) robots and propose algorithmic solutions to two variations of the Capt problem. The first algorithm, c-Capt, is a provably correct, complete, centralized algorithm which guarantees collision-free optimal solutions to the Capt problem in an obstacle-free environment. To achieve these strong claims, c-Capt exploits the synergy obtained by combining the two subproblems of assignment and trajectory generation to provide computationally tractable solutions for large numbers of robots. 
We then propose a decentralized solution to the Capt problem through d-Capt, a decentralized algorithm that provides suboptimal results compared to c-Capt . We illustrate the algorithms and resulting performance through simulation and experimentation.", "This paper presents a unique real-time obstacle avoidance approach for manipulators and mobile robots based on the artificial potential field concept. Collision avoidance, tradi tionally considered a high level planning problem, can be effectively distributed between different levels of control, al lowing real-time robot operations in a complex environment. This method has been extended to moving obstacles by using a time-varying artificial patential field. We have applied this obstacle avoidance scheme to robot arm mechanisms and have used a new approach to the general problem of real-time manipulator control. We reformulated the manipulator con trol problem as direct control of manipulator motion in oper ational space—the space in which the task is originally described—rather than as control of the task's corresponding joint space motion obtained only after geometric and kine matic transformation. Outside the obstacles' regions of influ ence, we caused the end effector to move in a straight line with an..." ] }
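The artificial-potential-function approach cited above steers a vehicle along the negative gradient of an attractive goal potential plus repulsive obstacle potentials that act only within an influence radius. A minimal 2-D sketch; the gains, influence radius, step size and geometry are arbitrary choices for illustration:

```python
import numpy as np

def potential_step(pos, goal, obstacles, k_att=1.0, k_rep=0.5, rho0=1.0, step=0.1):
    """One gradient-descent step on the combined attractive/repulsive potential."""
    force = k_att * (goal - pos)          # -grad of (1/2) k_att |pos - goal|^2
    for obs in obstacles:
        d = pos - obs
        rho = np.linalg.norm(d)
        if 0 < rho < rho0:                # repulsion only inside the influence radius
            # -grad of (1/2) k_rep (1/rho - 1/rho0)^2
            force += k_rep * (1.0 / rho - 1.0 / rho0) * d / rho**3
    return pos + step * force

pos = np.array([0.0, 0.0])
goal = np.array([5.0, 0.0])
obstacles = [np.array([2.5, 0.2])]        # slightly off the straight-line path
for _ in range(200):
    pos = potential_step(pos, goal, obstacles)
```

The well-known weakness of this method, local minima where attraction and repulsion cancel, is what motivates the wall-following extension described in one of the abstracts above.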
1906.09505
2950264428
We present an error-tolerant path planning algorithm for Micro Aerial Vehicle (MAV) swarms. We assume a MAV navigation system that does not rely on GPS-like techniques. The MAVs find their navigation path by using their sensors and cameras to identify and follow a series of visual landmarks. The visual landmarks lead the MAVs towards the target destination. MAVs are assumed to be unaware of the terrain and locations of the landmarks. Landmarks are also assumed to hold a-priori information, whose interpretation (by the MAVs) is prone to errors. We distinguish two types of errors, namely, recognition and advice. Recognition errors are due to misinterpretation of sensed data and a-priori information or confusion of objects (e.g., due to faulty sensors). Advice errors are due to outdated or wrong information associated with the landmarks (e.g., due to weather conditions). Our path planning algorithm proposes swarm cooperation. MAVs communicate and exchange information wirelessly, to minimize the recognition and advice error ratios. By doing this, the navigation system experiences a quality amplification in terms of error reduction. As a result, our solution successfully provides an adaptive error-tolerant navigation system. Quality amplification is parametrized with regard to the number of MAVs. We validate our approach with theoretical proofs and numeric simulations.
Our research is closely related to works on navigation using topological maps @cite_8 . Navigation does not rely on coordinates: the MAVs find their way by recognizing landmarks. The authors of @cite_13 propose the use of visual odometry as an alternative localization technique to, e.g., GPS-like techniques. The idea is as follows. The MAVs use their onboard cameras (e.g., downward-facing cameras), combined with some inertial sensors, to identify and follow a series of visual landmarks. The visual landmarks lead the MAVs towards the target destination. Unlike GPS, the technique allows the MAVs to operate without boundaries in both indoor and outdoor environments. No precise information about concrete visual odometry techniques is reported in their work. However, some ideas can be found in @cite_14 @cite_8 .
{ "cite_N": [ "@cite_14", "@cite_13", "@cite_8" ], "mid": [ "180745589", "2789410150", "2748785931" ], "abstract": [ "Indoor navigation of an unmanned aerial vehicle is the topic of this article. A dual feedforward feedback architecture has been used as the UAV´s controller and the K-NN classifier using the gray level image histogram as discriminant variables has been applied for landmarks recognition. After a brief description of the aerial vehicle we identify the two main components of its autonomous navigation, namely, the landmark recognition and the controller. Afterwards, the paper describes the experimental setup and discusses the experimental results centered mainly on the basic UAV´s behavior of landmark approximation which in topological navigation is known as the beaconing or homing problem.", "In this letter, we present the system infrastructure for a swarm of quadrotors, which perform all estimation on board using monocular visual inertial odometry. This is a novel system since it does not require an external motion capture system or GPS and is able to execute formation tasks without inter-robot collisions. The swarm can be deployed in nearly any indoor or outdoor scenario and is scalable to higher numbers of robots. We discuss the system architecture, estimation, planning, and control for the multirobot system. The robustness and scalability of the approach is validated in both indoor and outdoor environments with up to 12 quadrotors.", "We introduce a hybrid algorithm for the self-semantic location and autonomous navigation of robots using entropy-based vision and visual topological maps. In visual topological maps the visual landmarks are considered as leave points for guiding the robot to reach a target point (robot homing) in indoor environments. These visual landmarks are defined from images of relevant objects or characteristic scenes in the environment. 
The entropy of an image is directly related to the presence of a unique object or the presence of several different objects inside it: the lower the entropy the higher the probability of containing a single object inside it and, conversely, the higher the entropy the higher the probability of containing several objects inside it. Consequently, we propose the use of the entropy of images captured by the robot not only for the landmark searching and detection but also for obstacle avoidance. If the detected object corresponds to a landmark, the robot uses the suggestions stored in the visual topological map to reach the next landmark or to finish the mission. Otherwise, the robot considers the object as an obstacle and starts a collision avoidance maneuver. In order to validate the proposal we have defined an experimental framework in which the visual bug algorithm is used by an Unmanned Aerial Vehicle (UAV) in typical indoor navigation tasks." ] }
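The entropy criterion described above (low histogram entropy suggests a single object, high entropy several objects) can be computed directly from a grayscale intensity histogram. The images below are synthetic stand-ins, not data from the cited work:

```python
import numpy as np

def histogram_entropy(img, bins=256):
    """Shannon entropy (in bits) of a grayscale image's intensity histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins (0 log 0 := 0)
    return float(-(p * np.log2(p)).sum())

flat = np.full((32, 32), 128)                 # a single uniform "object": entropy 0
rng = np.random.default_rng(1)
busy = rng.integers(0, 256, size=(32, 32))    # many mixed intensities: high entropy
```

In the navigation loop of @cite_8 , a low-entropy detection would be matched against the topological map's landmarks, while a high-entropy region would trigger obstacle avoidance.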
1906.09505
2950264428
We present an error-tolerant path planning algorithm for Micro Aerial Vehicle (MAV) swarms. We assume a MAV navigation system that does not rely on GPS-like techniques. The MAVs find their navigation path by using their sensors and cameras to identify and follow a series of visual landmarks. The visual landmarks lead the MAVs towards the target destination. MAVs are assumed to be unaware of the terrain and locations of the landmarks. Landmarks are also assumed to hold a-priori information, whose interpretation (by the MAVs) is prone to errors. We distinguish two types of errors, namely, recognition and advice. Recognition errors are due to misinterpretation of sensed data and a-priori information or confusion of objects (e.g., due to faulty sensors). Advice errors are due to outdated or wrong information associated with the landmarks (e.g., due to weather conditions). Our path planning algorithm proposes swarm cooperation. MAVs communicate and exchange information wirelessly, to minimize the recognition and advice error ratios. By doing this, the navigation system experiences a quality amplification in terms of error reduction. As a result, our solution successfully provides an adaptive error-tolerant navigation system. Quality amplification is parametrized with regard to the number of MAVs. We validate our approach with theoretical proofs and numeric simulations.
The authors of @cite_14 @cite_8 propose the use of probabilistic knowledge-based classification and learning automata for the automatic recognition of patterns associated with the visual landmarks that must be identified by the MAVs. A series of classification rules in conjunctive normal form (CNF) are associated with a series of probability weights that are adapted dynamically using supervised reinforcement learning @cite_11 . The adaptation process is conducted using a two-stage learning procedure. During the first stage, a series of variables are associated with each rule. For instance, the variables associated with the construction of a landmark recognition classifier are built from images' histogram features, such as standard deviation, skewness, kurtosis, uniformity and entropy. During the second stage, a series of weights are associated with every variable. Weights are obtained by applying a reinforcement algorithm, i.e., the incremental RL algorithm in @cite_11 @cite_8 , over a random environment. As a result, the authors obtain a specific image classifier for the recognition of landmarks, which is then loaded onto the MAVs.
{ "cite_N": [ "@cite_14", "@cite_11", "@cite_8" ], "mid": [ "180745589", "1538558539", "2748785931" ], "abstract": [ "Indoor navigation of an unmanned aerial vehicle is the topic of this article. A dual feedforward feedback architecture has been used as the UAV´s controller and the K-NN classifier using the gray level image histogram as discriminant variables has been applied for landmarks recognition. After a brief description of the aerial vehicle we identify the two main components of its autonomous navigation, namely, the landmark recognition and the controller. Afterwards, the paper describes the experimental setup and discusses the experimental results centered mainly on the basic UAV´s behavior of landmark approximation which in topological navigation is known as the beaconing or homing problem.", "", "We introduce a hybrid algorithm for the self-semantic location and autonomous navigation of robots using entropy-based vision and visual topological maps. In visual topological maps the visual landmarks are considered as leave points for guiding the robot to reach a target point (robot homing) in indoor environments. These visual landmarks are defined from images of relevant objects or characteristic scenes in the environment. The entropy of an image is directly related to the presence of a unique object or the presence of several different objects inside it: the lower the entropy the higher the probability of containing a single object inside it and, conversely, the higher the entropy the higher the probability of containing several objects inside it. Consequently, we propose the use of the entropy of images captured by the robot not only for the landmark searching and detection but also for obstacle avoidance. If the detected object corresponds to a landmark, the robot uses the suggestions stored in the visual topological map to reach the next landmark or to finish the mission. 
Otherwise, the robot considers the object as an obstacle and starts a collision avoidance maneuver. In order to validate the proposal we have defined an experimental framework in which the visual bug algorithm is used by an Unmanned Aerial Vehicle (UAV) in typical indoor navigation tasks." ] }
1906.09505
2950264428
We present an error tolerant path planning algorithm for Micro Aerial Vehicles (MAV) swarms. We assume a MAV navigation system without relying on GPS-like techniques. The MAV find their navigation path by using their sensors and cameras, in order to identify and follow a series of visual landmarks. The visual landmarks lead the MAV towards the target destination. MAVs are assumed to be unaware of the terrain and locations of the landmarks. Landmarks are also assumed to hold a-priori information, whose interpretation (by the MAVs) is prone to errors. We distinguish two types of errors, namely, recognition and advice. Recognition errors are due to misinterpretation of sensed data and a-priori information or confusion of objects (e.g., due to faulty sensors). Advice errors are due to outdated or wrong information associated to the landmarks (e.g., due to weather conditions). Our path planning algorithm proposes swarm cooperation. MAVs communicate and exchange information wirelessly, to minimize the recognition and advice error ratios. By doing this, the navigation system experiences a quality amplification in terms of error reduction. As a result, our solution successfully provides an adaptive error tolerant navigation system. Quality amplification is parametrized with regard to the number of MAVs. We validate our approach with theoretical proofs and numeric simulations.
The resulting classifiers were tested experimentally. MAVs with high-definition cameras, recording images at a resolution of @math x @math pixels at a speed of @math fps (frames per second), are loaded with a given classifier in order to evaluate its visual classification ratio. Each experiment consists of building a classifier and computing the averaged ratio. Results in @cite_14 @cite_3 show an average empirical visual error ratio of about @math over the landmarks. The results are compared to other well-established pattern recognition methods for the visual identification of objects, such as minimum distance and @math -nearest neighbor classification algorithms.
{ "cite_N": [ "@cite_14", "@cite_3" ], "mid": [ "180745589", "2032587104" ], "abstract": [ "Indoor navigation of an unmanned aerial vehicle is the topic of this article. A dual feedforward feedback architecture has been used as the UAV´s controller and the K-NN classifier using the gray level image histogram as discriminant variables has been applied for landmarks recognition. After a brief description of the aerial vehicle we identify the two main components of its autonomous navigation, namely, the landmark recognition and the controller. Afterwards, the paper describes the experimental setup and discusses the experimental results centered mainly on the basic UAV´s behavior of landmark approximation which in topological navigation is known as the beaconing or homing problem.", "In this paper, the fusion of probabilistic knowledge-based classification rules and learning automata theory is proposed and as a result we present a set of probabilistic classification rules with self-learning capability. The probabilities of the classification rules change dynamically guided by a supervised reinforcement process aimed at obtaining an optimum classification accuracy. This novel classifier is applied to the automatic recognition of digital images corresponding to visual landmarks for the autonomous navigation of an unmanned aerial vehicle (UAV) developed by the authors. The classification accuracy of the proposed classifier and its comparison with well-established pattern recognition methods is finally reported." ] }
1906.09505
2950264428
We present an error tolerant path planning algorithm for Micro Aerial Vehicles (MAV) swarms. We assume a MAV navigation system without relying on GPS-like techniques. The MAV find their navigation path by using their sensors and cameras, in order to identify and follow a series of visual landmarks. The visual landmarks lead the MAV towards the target destination. MAVs are assumed to be unaware of the terrain and locations of the landmarks. Landmarks are also assumed to hold a-priori information, whose interpretation (by the MAVs) is prone to errors. We distinguish two types of errors, namely, recognition and advice. Recognition errors are due to misinterpretation of sensed data and a-priori information or confusion of objects (e.g., due to faulty sensors). Advice errors are due to outdated or wrong information associated to the landmarks (e.g., due to weather conditions). Our path planning algorithm proposes swarm cooperation. MAVs communicate and exchange information wirelessly, to minimize the recognition and advice error ratios. By doing this, the navigation system experiences a quality amplification in terms of error reduction. As a result, our solution successfully provides an adaptive error tolerant navigation system. Quality amplification is parametrized with regard to the number of MAVs. We validate our approach with theoretical proofs and numeric simulations.
The previous contribution is complemented in @cite_16 @cite_8 by combining the probabilistic knowledge-based classifiers with bug algorithms @cite_5 , to provide the MAVs with a navigation technique for traversing a visual topological map composed of several visual landmarks. The entropy of the images captured by the MAV is computed whenever a decision must be taken (e.g., whether to head south or north). The idea is as follows. The MAV uses its onboard camera to take images in several directions, and then processes them to choose one. The lower the entropy of a captured image, the lower the probability of heading towards an area containing visual landmarks. Conversely, the higher the entropy, the higher the probability of heading towards an area surrounded by landmarks. Using this heuristic, the MAV collects candidate images with maximum entropy (e.g., by driving the MAV forward and backward a few meters) prior to executing a bug algorithm to locate the landmarks @cite_8 .
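The entropy-maximization heuristic can be sketched as follows. This is a minimal illustration, not the authors' controller; the direction labels and the pixel-list image representation are assumptions:

```python
import math

def image_entropy(pixels, levels=256):
    """Shannon entropy (bits) of a gray-level image's histogram."""
    hist = [0] * levels
    for z in pixels:
        hist[z] += 1
    n = len(pixels)
    return -sum((h / n) * math.log2(h / n) for h in hist if h)

def choose_direction(candidates):
    """Pick the heading whose captured image has maximum entropy,
    i.e. the direction most likely to contain several objects and
    hence candidate landmarks. `candidates` maps a direction label
    to the image captured while facing that direction."""
    return max(candidates, key=lambda d: image_entropy(candidates[d]))
```

After converging on the maximum-entropy heading, control would switch to the bug algorithm for homing on the landmark.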
{ "cite_N": [ "@cite_5", "@cite_16", "@cite_8" ], "mid": [ "", "21021250", "2748785931" ], "abstract": [ "", "We introduce a novel method for landmark search and detection for the autonomous indoor navigation of an Unmanned Aerial Vehicle (UAV) using visual topological maps. The main contribution of this paper is the combination of the entropy of an image, with a dual feedforward-feedback controller for the task of object landmark search and detection. As the entropy of an image is directly related to the presence of a unique object or the presence of different objects inside the image (the lower the entropy of an image, the higher its probability of containing a single object inside it; and conversely, the higher the entropy, the higher its probability of containing several different objects inside it), we propose to implement landmark and object search and detection as a process of entropy maximization which corresponds to an image containing several target landmarks candidates. After converging to an image with maximum entropy containing several candidates for the target landmark, the UAV´s controller switches to the landmark´s homing mode based on a dual feed-forward feedback controller aimed at driving the UAV towards the target landmark. After the presentation of the theoretical foundations of the entropy-based search. The paper ends with the experimental work performed for its validation.", "We introduce a hybrid algorithm for the self-semantic location and autonomous navigation of robots using entropy-based vision and visual topological maps. In visual topological maps the visual landmarks are considered as leave points for guiding the robot to reach a target point (robot homing) in indoor environments. These visual landmarks are defined from images of relevant objects or characteristic scenes in the environment. 
The entropy of an image is directly related to the presence of a unique object or the presence of several different objects inside it: the lower the entropy the higher the probability of containing a single object inside it and, conversely, the higher the entropy the higher the probability of containing several objects inside it. Consequently, we propose the use of the entropy of images captured by the robot not only for the landmark searching and detection but also for obstacle avoidance. If the detected object corresponds to a landmark, the robot uses the suggestions stored in the visual topological map to reach the next landmark or to finish the mission. Otherwise, the robot considers the object as an obstacle and starts a collision avoidance maneuver. In order to validate the proposal we have defined an experimental framework in which the visual bug algorithm is used by an Unmanned Aerial Vehicle (UAV) in typical indoor navigation tasks." ] }
1906.09505
2950264428
We present an error tolerant path planning algorithm for Micro Aerial Vehicles (MAV) swarms. We assume a MAV navigation system without relying on GPS-like techniques. The MAV find their navigation path by using their sensors and cameras, in order to identify and follow a series of visual landmarks. The visual landmarks lead the MAV towards the target destination. MAVs are assumed to be unaware of the terrain and locations of the landmarks. Landmarks are also assumed to hold a-priori information, whose interpretation (by the MAVs) is prone to errors. We distinguish two types of errors, namely, recognition and advice. Recognition errors are due to misinterpretation of sensed data and a-priori information or confusion of objects (e.g., due to faulty sensors). Advice errors are due to outdated or wrong information associated to the landmarks (e.g., due to weather conditions). Our path planning algorithm proposes swarm cooperation. MAVs communicate and exchange information wirelessly, to minimize the recognition and advice error ratios. By doing this, the navigation system experiences a quality amplification in terms of error reduction. As a result, our solution successfully provides an adaptive error tolerant navigation system. Quality amplification is parametrized with regard to the number of MAVs. We validate our approach with theoretical proofs and numeric simulations.
The entropy technique in @cite_16 @cite_8 can also be applied before running the visual recognition classifier, to reduce the computational cost: by processing only those images with high entropy, the MAV avoids feeding a costly classifier images with a low likelihood of containing landmarks.
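A minimal sketch of such a pre-filter, assuming a hypothetical entropy threshold that would be tuned per environment (the function names are illustrative, not from the cited works):

```python
import math

def image_entropy(pixels, levels=256):
    """Shannon entropy (bits) of a gray-level image's histogram."""
    hist = [0] * levels
    for z in pixels:
        hist[z] += 1
    n = len(pixels)
    return -sum((h / n) * math.log2(h / n) for h in hist if h)

def worth_classifying(pixels, threshold=1.0):
    """Cheap pre-filter: hand an image to the costly landmark
    classifier only if its histogram entropy exceeds `threshold`.
    The default threshold is an assumed placeholder value."""
    return image_entropy(pixels) > threshold
```

Low-entropy captures (e.g., a bare wall) are discarded before classification, so the expensive rule-based classifier runs only on promising frames.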
{ "cite_N": [ "@cite_16", "@cite_8" ], "mid": [ "21021250", "2748785931" ], "abstract": [ "We introduce a novel method for landmark search and detection for the autonomous indoor navigation of an Unmanned Aerial Vehicle (UAV) using visual topological maps. The main contribution of this paper is the combination of the entropy of an image, with a dual feedforward-feedback controller for the task of object landmark search and detection. As the entropy of an image is directly related to the presence of a unique object or the presence of different objects inside the image (the lower the entropy of an image, the higher its probability of containing a single object inside it; and conversely, the higher the entropy, the higher its probability of containing several different objects inside it), we propose to implement landmark and object search and detection as a process of entropy maximization which corresponds to an image containing several target landmarks candidates. After converging to an image with maximum entropy containing several candidates for the target landmark, the UAV´s controller switches to the landmark´s homing mode based on a dual feed-forward feedback controller aimed at driving the UAV towards the target landmark. After the presentation of the theoretical foundations of the entropy-based search. The paper ends with the experimental work performed for its validation.", "We introduce a hybrid algorithm for the self-semantic location and autonomous navigation of robots using entropy-based vision and visual topological maps. In visual topological maps the visual landmarks are considered as leave points for guiding the robot to reach a target point (robot homing) in indoor environments. These visual landmarks are defined from images of relevant objects or characteristic scenes in the environment. 
The entropy of an image is directly related to the presence of a unique object or the presence of several different objects inside it: the lower the entropy the higher the probability of containing a single object inside it and, conversely, the higher the entropy the higher the probability of containing several objects inside it. Consequently, we propose the use of the entropy of images captured by the robot not only for the landmark searching and detection but also for obstacle avoidance. If the detected object corresponds to a landmark, the robot uses the suggestions stored in the visual topological map to reach the next landmark or to finish the mission. Otherwise, the robot considers the object as an obstacle and starts a collision avoidance maneuver. In order to validate the proposal we have defined an experimental framework in which the visual bug algorithm is used by an Unmanned Aerial Vehicle (UAV) in typical indoor navigation tasks." ] }