aid stringlengths 9 15 | mid stringlengths 7 10 | abstract stringlengths 78 2.56k | related_work stringlengths 92 1.77k | ref_abstract dict |
|---|---|---|---|---|
1607.07326 | 2500403796 | We propose Meta-Prod2vec, a novel method to compute item similarities for recommendation that leverages existing item metadata. Such scenarios are frequently encountered in applications such as content recommendation, ad targeting and web search. Our method leverages past user interactions with items and their attributes to compute low-dimensional embeddings of items. Specifically, the item metadata is injected into the model as side information to regularize the item embeddings. We show that the new item representations lead to better performance on recommendation tasks on an open music dataset. | Graph-based models have also been used to create unified representations. In particular, in @cite_2 user-item interactions and side information are modeled jointly through user and item latent factors. User factors are shared by the user-item interaction component and the side information component. @cite_13 learns the interaction weights between user actions and various features such as user and item metadata. The authors use unified Boltzmann machines to make predictions. | {
"cite_N": [
"@cite_13",
"@cite_2"
],
"mid": [
"2001530795",
"2149490995"
],
"abstract": [
"Content-based recommendation systems can provide recommendations for \"cold-start\" items for which little or no training data is available, but typically have lower accuracy than collaborative filtering systems. Conversely, collaborative filtering techniques often provide accurate recommendations, but fail on cold start items. Hybrid schemes attempt to combine these different kinds of information to yield better recommendations across the board. We describe unified Boltzmann machines, which are probabilistic models that combine collaborative and content information in a coherent manner. They encode collaborative and content information as features, and then learn weights that reflect how well each feature predicts user actions. In doing so, information of different types is automatically weighted, without the need for careful engineering of features or for post-hoc hybridization of distinct recommender systems. We present empirical results in the movie and shopping domains showing that unified Boltzmann machines can be used to combine content and collaborative information to yield results that are competitive with collaborative techniques in recommending items that have been seen before, and also effective at recommending cold-start items.",
"Targeting interest to match a user with services (e.g. news, products, games, advertisements) and predicting friendship to build connections among users are two fundamental tasks for social network systems. In this paper, we show that the information contained in interest networks (i.e. user-service interactions) and friendship networks (i.e. user-user connections) is highly correlated and mutually helpful. We propose a framework that exploits homophily to establish an integrated network linking a user to interested services and connecting different users with common interests, upon which both friendship and interests could be efficiently propagated. The proposed friendship-interest propagation (FIP) framework devises a factor-based random walk model to explain friendship connections, and simultaneously it uses a coupled latent factor model to uncover interest interactions. We discuss the flexibility of the framework in the choices of loss objectives and regularization penalties and benchmark different variants on the Yahoo! Pulse social networking system. Experiments demonstrate that by coupling friendship with interest, FIP achieves much higher performance on both interest targeting and friendship prediction than systems using only one source of information."
]
} |
1607.07129 | 2502097041 | Many man-made objects have intrinsic symmetries and Manhattan structure. By assuming an orthographic projection model, this paper addresses the estimation of 3D structures and camera projection using symmetry and/or Manhattan structure cues, which occur when the input is a single image or multiple images from the same category, e.g., multiple different cars. Specifically, analysis of the single-image case implies that Manhattan alone is sufficient to recover the camera projection, and then the 3D structure can be reconstructed uniquely by exploiting symmetry. However, Manhattan structure can be difficult to observe from a single image due to occlusion. To this end, we extend to the multiple-image case, which can also exploit symmetry but does not require Manhattan axes. We propose a novel rigid structure from motion method, exploiting symmetry and using multiple images from the same category as input. Experimental results on the Pascal3D+ dataset show that our method significantly outperforms baseline methods. | Symmetry has been studied in computer vision for several decades. For example, symmetry has been used as a cue in depth recovery @cite_10 @cite_31 @cite_24 as well as for recognizing symmetric objects @cite_44 . Grossmann and Santos-Victor utilized various geometric cues, such as planarity, orthogonality, parallelism and symmetry, for 3D scene reconstruction @cite_29 @cite_25 , where the camera rotation matrix was pre-computed from vanishing points @cite_40 . Recently, researchers have applied symmetry to scene reconstruction @cite_26 and to 3D mesh reconstruction with occlusion @cite_9 . In addition, symmetry, combined with planarity and compactness priors, has also been used to reconstruct structures defined by 3D keypoints @cite_38 . 
By contrast, the Manhattan world assumption was originally developed for scenes @cite_19 @cite_20 @cite_21 , where the authors assumed that visual scenes are based on a Manhattan 3D grid that provides three perpendicular axis constraints. Symmetry and Manhattan constraints can be straightforwardly combined and adapted to 3D object reconstruction, particularly for man-made objects. | {
"cite_N": [
"@cite_38",
"@cite_26",
"@cite_29",
"@cite_9",
"@cite_21",
"@cite_24",
"@cite_44",
"@cite_40",
"@cite_19",
"@cite_31",
"@cite_10",
"@cite_25",
"@cite_20"
],
"mid": [
"2171543226",
"2081709959",
"",
"",
"",
"1973016074",
"2029528398",
"2139018239",
"2102271310",
"2009751730",
"",
"2143450077",
"1986377419"
],
"abstract": [
"We present a new algorithm for reconstructing 3D shapes. The algorithm takes one 2D image of a 3D shape and reconstructs the 3D shape by applying a priori constraints: symmetry, planarity and compactness. The shape is reconstructed without using information about the surfaces, such as shading, texture, binocular disparity or motion. Performance of the algorithm is illustrated on symmetric polyhedra, but the algorithm can be applied to a very wide range of shapes. Psychophysical plausibility of the algorithm is discussed.",
"In this paper, we provide a principled explanation of how knowledge in global 3-D structural invariants, typically captured by a group action on a symmetric structure, can dramatically facilitate the task of reconstructing a 3-D scene from one or more images. More importantly, since every symmetric structure admits a “canonical” coordinate frame with respect to which the group action can be naturally represented, the canonical pose between the viewer and this canonical frame can be recovered too, which explains why symmetric objects (e.g., buildings) provide us overwhelming clues to their orientation and position. We give the necessary and sufficient conditions in terms of the symmetry (group) admitted by a structure under which this pose can be uniquely determined. We also characterize, when such conditions are not satisfied, to what extent this pose can be recovered. We show how algorithms from conventional multiple-view geometry, after properly modified and extended, can be directly applied to perform such recovery, from all “hidden images” of one image of the symmetric structure. We also apply our results to a wide range of applications in computer vision and image processing such as camera self-calibration, image segmentation and global orientation, large baseline feature matching, image rendering and photo editing, as well as visual illusions (caused by symmetry if incorrectly assumed).",
"",
"",
"",
"We investigate the constraints placed on the image projection of a planar object having local reflectional symmetry. Under the affine approximation to projection, we demonstrate an efficient (low-complexity) algorithm for detecting and verifying symmetries despite the distorting effects of image skewing. The symmetries are utilized for three distinct tasks: first, determining image back-projection up to a similarity transformation ambiguity; second, determining the object plane orientation (slant and tilt); and third, as a test for non-coplanarity amongst a collection of objects. These results are illustrated throughout with examples from images of real scenes.",
"According to the 1.5-views theorem (Poggio, Technical Report #9005-03, IRST, Povo, 1990; Ullman and Basri, IEEE Trans. PAMI 13, 992-1006, 1991) recognition of a specific 3D object (defined in terms of pointwise features) from a novel 2D view can be achieved from at least two 2D model views (for each object, for orthographic projection). This note considers how recognition can be achieved from a single 2D model view by exploiting prior knowledge of an object's symmetry. It is proved that, for any bilaterally symmetric 3D object, one non-accidental 2D model view is sufficient for recognition since it can be used to generate additional 'virtual' views. It is also proved that, for bilaterally symmetric objects, the correspondence of four points between two views determines the correspondence of all other points. Symmetries of higher order allow the recovery of Euclidean structure from a single 2D view.",
"We present a method for reconstruction of structured scenes from one or more views, in which the user provides image points and geometric knowledge -coplanarity, ratios of distances, angles- about the corresponding 3D points. First, the geometric information is analyzed. Then vanishing points are estimated, from which camera calibration is obtained. Finally, an algebraic method gives the reconstruction. Our algebraic reconstruction method improves the present state-of-the-art in many aspects : geometric knowledge includes not only planarity and alignment information, but also known ratios of lengths. The single and multipleview cases are treated in the same way and the method detects whether the input data is sufficient to define a rigid reconstruction. We benchmark, using synthetic data, the various steps of the estimation process and show reconstructions obtained from real-world situations in which other methods would fail. We also present a new method for maximum likelihood estimation of vanishing points.",
"When designing computer vision systems for the blind and visually impaired it is important to determine the orientation of the user relative to the scene. We observe that most indoor and outdoor (city) scenes are designed on a Manhattan three-dimensional grid. This Manhattan grid structure puts strong constraints on the intensity gradients in the image. We demonstrate an algorithm for detecting the orientation of the user in such scenes based on Bayesian inference using statistics which we have learnt in this domain. Our algorithm requires a single input image and does not involve pre-processing stages such as edge detection and Hough grouping. We demonstrate strong experimental results on a range of indoor and outdoor images. We also show that estimating the grid structure makes it significantly easier to detect target objects which are not aligned with the grid.",
"A new technique dramatically simplifies the analysis of matching and depth reconstruction by extracting three-dimensional rigid depth interpretation from pairwise comparisons of weak perspective projections. This method provides a simple linear criterion for testing the correctness of correspondence for a pair of images; the method also provides a description of a one-parameter family of interpretations for each pair of images that satisfies this criterion. We show that if at least three projections of a volumetric object are known, then a three-dimensional (3D) rigid interpretation can be inferred from pairwise comparisons between any one of these images and other images in the set. The 3D interpretation is derived from the intersection of corresponding one-parameter families. The method provides a common computational basis for different processes of depth perception, for example, depth-from-stereo and depth-from-motion. Thus, a single mechanism for these processes in the human visual system would be sufficient. The proposed method does not require information about relative positions of eye(s) or camera(s) for different projections, but this information can be easily incorporated. The method can be applied for pairwise comparison within a single image. If any nontrivial correspondence is found, then several views of the same object are present in the same image. This happens, for example, in views of volumetrically symmetric objects. Symmetry facilitates depth reconstruction; if an object possesses two or more symmetries, its depth can be reconstructed from a single image.",
"",
"We present a method to reconstruct from one or more images a scene that is rich in planes, alignments, symmetries, orthogonalities, and other forms of geometrical regularity. Given image points of interest and some geometric information, the method recovers least-squares estimates of the 3D points, camera position(s), orientation(s), and eventually calibration(s). Our contributions lie (i) in a novel way of exploiting some types of symmetry and of geometric regularity, (ii) in treating indifferently one or more images, (iii) in a geometric test that indicates whether the input data uniquely defines a reconstruction, and (iv) a parameterization method for collections of 3D points subject to geometric constraints. Moreover, the reconstruction algorithm lends itself to sensitivity analysis. The method is benchmarked on synthetic data and its effectiveness is shown on real-world data.",
"This letter argues that many visual scenes are based on a \"Manhattan\" three-dimensional grid that imposes regularities on the image statistics. We construct a Bayesian model that implements this assumption and estimates the viewer orientation relative to the Manhattan grid. For many images, these estimates are good approximations to the viewer orientation (as estimated manually by the authors). These estimates also make it easy to detect outlier structures that are unaligned to the grid. To determine the applicability of the Manhattan world model, we implement a null hypothesis model that assumes that the image statistics are independent of any three-dimensional scene structure. We then use the log-likelihood ratio test to determine whether an image satisfies the Manhattan world assumption. Our results show that if an image is estimated to be Manhattan, then the Bayesian model's estimates of viewer direction are almost always accurate (according to our manual estimates), and vice versa."
]
} |
1607.07129 | 2502097041 | Many man-made objects have intrinsic symmetries and Manhattan structure. By assuming an orthographic projection model, this paper addresses the estimation of 3D structures and camera projection using symmetry and/or Manhattan structure cues, which occur when the input is a single image or multiple images from the same category, e.g., multiple different cars. Specifically, analysis of the single-image case implies that Manhattan alone is sufficient to recover the camera projection, and then the 3D structure can be reconstructed uniquely by exploiting symmetry. However, Manhattan structure can be difficult to observe from a single image due to occlusion. To this end, we extend to the multiple-image case, which can also exploit symmetry but does not require Manhattan axes. We propose a novel rigid structure from motion method, exploiting symmetry and using multiple images from the same category as input. Experimental results on the Pascal3D+ dataset show that our method significantly outperforms baseline methods. | SfM methods have been used for category-specific object reconstruction, i.e., estimating the structure of an object class from images of different instances under various viewing conditions @cite_48 @cite_37 , but these methods did not exploit symmetry or Manhattan structure. We point out that in @cite_46 repetition patterns have been incorporated into SfM for urban facade reconstruction, but @cite_46 focused mainly on repetition detection and registration. | {
"cite_N": [
"@cite_46",
"@cite_48",
"@cite_37"
],
"mid": [
"2072637632",
"",
"1977792424"
],
"abstract": [
"Repeated structures are ubiquitous in urban facades. Such repetitions lead to ambiguity in establishing correspondences across sets of unordered images. A decoupled structure-from-motion reconstruction followed by symmetry detection often produces errors: outputs are either noisy and incomplete, or even worse, appear to be valid but actually have a wrong number of repeated elements. We present an optimization framework for extracting repeated elements in images of urban facades, while simultaneously calibrating the input images and recovering the 3D scene geometry using a graph-based global analysis. We evaluate the robustness of the proposed scheme on a range of challenging examples containing widespread repetitions and nondistinctive features. These image sets are common but cannot be handled well with state-of-the-art methods. We show that the recovered symmetry information along with the 3D geometry enables a range of novel image editing operations that maintain consistency across the images.",
"",
"We address the problem of populating object category detection datasets with dense, per-object 3D reconstructions, bootstrapped from class labels, ground truth figure-ground segmentations and a small set of keypoint annotations. Our proposed algorithm first estimates camera viewpoint using rigid structure-from-motion, then reconstructs object shapes by optimizing over visual hull proposals guided by loose within-class shape similarity assumptions. The visual hull sampling process attempts to intersect an object's projection cone with the cones of minimal subsets of other similar objects among those pictured from certain vantage points. We show that our method is able to produce convincing per-object 3D reconstructions on one of the most challenging existing object-category detection datasets, PASCAL VOC. Our results may re-stimulate once popular geometry-oriented model-based recognition approaches."
]
} |
1607.06883 | 2486560399 | This paper presents a randomized Las Vegas distributed algorithm that constructs a minimum spanning tree (MST) in weighted networks with optimal (up to polylogarithmic factors) time and message complexity. This algorithm runs in @math time and exchanges @math messages (both with high probability), where @math is the number of nodes of the network, @math is the diameter, and @math is the number of edges. This is the first distributed MST algorithm that matches the time lower bound of @math [Elkin, SIAM J. Comput. 2006] and the message lower bound of @math [, J.ACM 2015] (which both apply to randomized algorithms). The prior time and message lower bounds are derived using two completely different graph constructions; the existing lower bound construction that shows one lower bound does not work for the other. To complement our algorithm, we present a new lower bound graph construction for which any distributed MST algorithm requires @math rounds and @math messages. | Some classes of graphs admit efficient MST algorithms that beat the general @math time lower bound. This is the case for planar graphs, graphs of bounded genus, treewidth, or pathwidth @cite_12 @cite_30 @cite_43 , and graphs with small random walk mixing time @cite_20 . | {
"cite_N": [
"@cite_30",
"@cite_43",
"@cite_20",
"@cite_12"
],
"mid": [
"2484918492",
"2515165863",
"2737872496",
"2294674552"
],
"abstract": [
"Distributed optimization algorithms are frequently faced with solving sub-problems on disjoint connected parts of a network. Unfortunately, the diameter of these parts can be significantly larger than the diameter of the underlying network, leading to slow running times. Recent work by [Ghaffari and Haeupler; SODA'16] showed that this phenomenon can be seen as the broad underlying reason for the pervasive Ω(√n + D) lower bounds that apply to most optimization problems in the CONGEST model. On the positive side, this work also introduced low-congestion shortcuts as an elegant solution to circumvent this problem in certain topologies of interest. Particularly, they showed that there exist good shortcuts for any planar network and more generally any bounded genus network. This directly leads to fast O(D log^{O(1)} n) distributed optimization algorithms on such topologies, e.g., for MST and Min-Cut approximation, given that one can efficiently construct these shortcuts in a distributed manner. Unfortunately, the shortcut construction of [Ghaffari and Haeupler; SODA'16] relies heavily on having access to a bounded genus embedding of the network. Computing such an embedding distributedly, however, is a hard problem - even for planar networks. No distributed embedding algorithm for bounded genus graphs is in sight. In this work, we side-step this problem by defining a slightly restricted and more structured form of shortcuts and giving a novel construction algorithm which efficiently finds a shortcut which is, up to a logarithmic factor, as good as the best shortcut that exists for a given network. This new construction algorithm directly leads to an O(D log^{O(1)} n)-round algorithm for solving optimization problems like MST for any topology for which good restricted shortcuts exist - without the need to compute any embedding. This includes the first efficient algorithm for bounded genus graphs.",
"We show that many distributed network optimization problems can be solved much more efficiently in structured and topologically simple networks.",
"We present a randomized distributed algorithm that computes a minimum spanning tree in τ(G) · 2^{O(√(log n log log n))} rounds, in any n-node graph G with mixing time τ(G). This result provides a sub-polynomial complexity for a wide range of graphs of practical interest, and goes below the celebrated Ω(D + √n) lower bound of Das Sarma et al. [STOC'11] which holds for some worst-case general graphs. The core novelty in this result is a distributed method for permutation routing. In this problem, one is given a number of source-destination pairs, and we should deliver one packet from each source to its destination, all in parallel, in the shortest span of time possible. Our algorithm allows us to route and deliver all these packets in τ(G) · 2^{O(√(log n log log n))} rounds, assuming that each node v is the source or destination for at most d_G(v) packets. The main technical ingredient in this routing result is a certain hierarchical embedding of good-expansion random graphs on the base graph, which we believe can be of interest well beyond this work.",
"This paper introduces the concept of low-congestion shortcuts for (near-)planar networks, and demonstrates their power by using them to obtain near-optimal distributed algorithms for problems such as Minimum Spanning Tree (MST) or Minimum Cut, in planar networks. Consider a graph G = (V, E) and a partitioning of V into subsets of nodes S1, . . ., SN, each inducing a connected subgraph G[Si]. We define an α-congestion shortcut with dilation β to be a set of subgraphs H1, . . ., HN ⊆ G, one for each subset Si, such that 1. For each i ∈ [1, N], the diameter of the subgraph G[Si] + Hi is at most β. 2. For each edge e ∈ E, the number of subgraphs G[Si] + Hi containing e is at most α. We prove that any partition of a D-diameter planar graph into individually-connected parts admits an O(D log D)-congestion shortcut with dilation O(D log D), and we also present a distributed construction of it in O(D) rounds. We moreover prove these parameters to be near-optimal; i.e., there are instances in which, unavoidably, max α, β = Ω(D[EQUATION]). Finally, we use low-congestion shortcuts, and their efficient distributed construction, to derive O(D)-round distributed algorithms for MST and Min-Cut, in planar networks. This complexity nearly matches the trivial lower bound of Ω(D). We remark that this is the first result bypassing the well-known Ω(D + [EQUATION]) existential lower bound of general graphs (see Peleg and Rubinovich [FOCS'99]; Elkin [STOC'04]; and Das Sarma et al. [STOC'11]) in a family of graphs of interest."
]
} |
1607.06883 | 2486560399 | This paper presents a randomized Las Vegas distributed algorithm that constructs a minimum spanning tree (MST) in weighted networks with optimal (up to polylogarithmic factors) time and message complexity. This algorithm runs in @math time and exchanges @math messages (both with high probability), where @math is the number of nodes of the network, @math is the diameter, and @math is the number of edges. This is the first distributed MST algorithm that matches the time lower bound of @math [Elkin, SIAM J. Comput. 2006] and the message lower bound of @math [, J.ACM 2015] (which both apply to randomized algorithms). The prior time and message lower bounds are derived using two completely different graph constructions; the existing lower bound construction that shows one lower bound does not work for the other. To complement our algorithm, we present a new lower bound graph construction for which any distributed MST algorithm requires @math rounds and @math messages. | From a practical perspective, given that MST construction can take as much as @math time even in low-diameter networks, it is worth investigating whether one can design distributed algorithms that run faster and output an approximate minimum spanning tree. The question of devising faster approximation algorithms for MST was raised in @cite_3 . Elkin @cite_5 later established a hardness result on distributed MST approximation, showing that approximating the MST problem on a certain family of graphs of small diameter (e.g., @math ) within a ratio @math requires essentially @math time. Khan and Pandurangan @cite_22 showed that there can be an exponential time gap between exact and approximate MST construction: there exist graphs where any distributed (exact) MST algorithm takes @math rounds, whereas an @math -approximate MST can be computed in @math rounds. The distributed approximation algorithm of Khan and Pandurangan is message-optimal but not time-optimal. | {
"cite_N": [
"@cite_5",
"@cite_22",
"@cite_3"
],
"mid": [
"1984090926",
"2058191123",
"2015819397"
],
"abstract": [
"The design of distributed approximation protocols is a relatively new and rapidly developing area of research. However, so far, little progress has been made in the study of the hardness of distributed approximation. In this paper we initiate the systematic study of this subject and show strong unconditional lower bounds on the time-approximation trade-off of the distributed minimum spanning tree problem, and show some of its variants.",
"We present a distributed algorithm that constructs an O(log n)-approximate minimum spanning tree (MST) in any arbitrary network. This algorithm runs in time O(D(G) + L(G, w)) where L(G, w) is a parameter called the local shortest path diameter and D(G) is the (unweighted) diameter of the graph. Our algorithm is existentially optimal (up to polylogarithmic factors), i.e., there exist graphs which need Ω(D(G) + L(G, w)) time to compute an H-approximation to the MST for any H ∈ [1, O(log n)]. Our result also shows that there can be a significant time gap between exact and approximate MST computation: there exist graphs in which our approximation algorithm is exponentially faster than the time-optimal distributed algorithm that computes the MST. Finally, we show that our algorithm can be used to find an approximate MST in wireless networks and in random weighted networks in almost optimal O(D(G)) time.",
"This paper presents a lower bound of @math on the time required for the distributed construction of a minimum-weight spanning tree (MST) in weighted n-vertex networks of diameter @math , in the bounded message model. This establishes the asymptotic near-optimality of existing time-efficient distributed algorithms for the problem, whose complexity is @math ."
]
} |
1607.06883 | 2486560399 | This paper presents a randomized Las Vegas distributed algorithm that constructs a minimum spanning tree (MST) in weighted networks with optimal (up to polylogarithmic factors) time and message complexity. This algorithm runs in @math time and exchanges @math messages (both with high probability), where @math is the number of nodes of the network, @math is the diameter, and @math is the number of edges. This is the first distributed MST algorithm that matches the time lower bound of @math [Elkin, SIAM J. Comput. 2006] and the message lower bound of @math [, J.ACM 2015] (which both apply to randomized algorithms). The prior time and message lower bounds are derived using two completely different graph constructions; the existing lower bound construction that shows one lower bound does not work for the other. To complement our algorithm, we present a new lower bound graph construction for which any distributed MST algorithm requires @math rounds and @math messages. | Das Sarma et al. @cite_36 settled the time complexity of distributed approximate MST by showing that this problem, as well as approximating shortest paths and about twenty other problems, satisfies a time lower bound of @math . This applies to deterministic as well as randomized algorithms, and to both exact and approximate versions. In other words, any distributed algorithm for computing a @math -approximation to MST, for any @math , takes @math time in the worst case. | {
"cite_N": [
"@cite_36"
],
"mid": [
"2083323175"
],
"abstract": [
"We study the verification problem in distributed networks, stated as follows. Let @math be a subgraph of a network @math where each vertex of @math knows which edges incident on it are in @math . We would l..."
]
} |
1607.06883 | 2486560399 | This paper presents a randomized Las Vegas distributed algorithm that constructs a minimum spanning tree (MST) in weighted networks with optimal (up to polylogarithmic factors) time and message complexity. This algorithm runs in @math time and exchanges @math messages (both with high probability), where @math is the number of nodes of the network, @math is the diameter, and @math is the number of edges. This is the first distributed MST algorithm that matches the time lower bound of @math [Elkin, SIAM J. Comput. 2006] and the message lower bound of @math [, J.ACM 2015] (which both apply to randomized algorithms). The prior time and message lower bounds are derived using two completely different graph constructions; the existing lower bound construction that shows one lower bound does not work for the other. To complement our algorithm, we present a new lower bound graph construction for which any distributed MST algorithm requires @math rounds and @math messages. | It is important to point out that this paper and all the prior results discussed above (including the prior MST results @cite_6 @cite_26 @cite_18 @cite_38 @cite_10 @cite_2 @cite_35 ) assume the so-called clean network model , a.k.a. @math @cite_44 (cf. Section ), where nodes do not have initial knowledge of the identity of their neighbors. However, one can assume a model where nodes do have such knowledge. This model is called the @math model . Although the distinction between @math and @math clearly has no bearing on the asymptotic bounds for the time complexity, it is significant when considering message complexity. @cite_21 show that @math is a message lower bound for MST in the @math model, if one allows only (possibly randomized Monte Carlo) comparison-based algorithms, i.e., algorithms that can operate on IDs only by comparing them. (We note that all prior MST algorithms mentioned earlier are comparison-based, including ours.) 
Hence, the result of @cite_21 implies that our MST algorithm (which is comparison-based and randomized) is time- and message-optimal in the @math model if one considers comparison-based algorithms only. | {
"cite_N": [
"@cite_38",
"@cite_18",
"@cite_26",
"@cite_35",
"@cite_21",
"@cite_6",
"@cite_44",
"@cite_2",
"@cite_10"
],
"mid": [
"2163097371",
"2077371116",
"2100466311",
"2030982625",
"",
"2053788000",
"1568961751",
"2053981136",
"1534484868"
],
"abstract": [
"This paper considers the question of identifying the parameters governing the behavior of fundamental global network problems. Many papers on distributed network algorithms consider the task of optimizing the running time successful when an O(n) bound is achieved on an n-vertex network. We propose that a more sensitive parameter is the network's diameter @math . This is demonstrated in the paper by providing a distributed minimum-weight spanning tree algorithm whose time complexity is sublinear in n, but linear in @math (specifically, @math for @math ). Our result is achieved through the application of graph decomposition and edge-elimination-by-pipelining techniques that may be of independent interest.",
"",
"A distributed algorithm is presented that constructs the minimum-weight spanning tree of an undirected connected graph with distinct edge weights and distinct node identities. Initially each node knows only the weight of each of its adjacent edges. When the algorithm terminates, each node knows which of its adjacent edges are edges of the tree. For a graph with n nodes and e edges, the total number of messages required by our algorithm is at most 5nlogn+2e, and each message contains at most one edge weight or one node identity plus 3+logn bits. Although our algorithm has the same message complexity as the previously known algorithm by , the time complexity of our algorithm takes at most O(nG(n))+ time units, an improvement from Gallager's O(nlogn)+. A worst case O(nG(n)) is also possible.",
"This paper studies the problem of constructing a minimum-weight spanning tree (MST) in a distributed network. This is one of the most important problems in the area of distributed computing. There is a long line of gradually improving protocols for this problem, and the state of the art today is a protocol with running time O(Λ(G)+n⋅log∗n) due to Kutten and Peleg [S. Kutten, D. Peleg, Fast distributed construction of k-dominating sets and applications, J. Algorithms 28 (1998) 40–66; preliminary version appeared in: Proc. of 14th ACM Symp. on Principles of Distributed Computing, Ottawa, Canada, August 1995, pp. 20–27], where Λ(G) denotes the diameter of the graph G. Peleg and Rubinovich [D. Peleg, V. Rubinovich, A near-tight lower bound on the time complexity of distributed MST construction, in: Proc. 40th IEEE Symp. on Foundations of Computer Science, 1999, pp. 253–261] have shown that Ω˜(n) time is required for constructing MST even on graphs of small diameter, and claimed that their result “establishes the asymptotic near-optimality” of the protocol of [S. Kutten, D. Peleg, Fast distributed construction of k-dominating sets and applications, J. Algorithms 28 (1998) 40–66; preliminary version appeared in: Proc. of 14th ACM Symp. on Principles of Distributed Computing, Ottawa, Canada, August 1995, pp. 20–27]. In this paper we refine this claim, and devise a protocol that constructs the MST in Ω˜(μ(G,ω)+n) rounds, where μ(G,ω) is the MST-radius of the graph. The ratio between the diameter and the MST-radius may be as large as Θ(n), and, consequently, on some inputs our protocol is faster than the protocol of [S. Kutten, D. Peleg, Fast distributed construction of k-dominating sets and applications, J. Algorithms 28 (1998) 40–66; preliminary version appeared in: Proc. of 14th ACM Symp. on Principles of Distributed Computing, Ottawa, Canada, August 1995, pp. 20–27] by a factor of Ω˜(n). 
Also, on every input, the running time of our protocol is never greater than twice the running time of the protocol of [S. Kutten, D. Peleg, Fast distributed construction of k-dominating sets and applications, J. Algorithms 28 (1998) 40–66; preliminary version appeared in: Proc. of 14th ACM Symp. on Principles of Distributed Computing, Ottawa, Canada, August 1995, pp. 20–27]. As part of our protocol for constructing an MST, we develop a protocol for constructing neighborhood covers with a drastically improved running time. The latter result may be of independent interest.",
"",
"This paper develops linear time distributed algorithms for a class of problems in an asynchronous communication network. Those problems include Minimum-Weight Spanning Tree (MST), Leader Election, counting the number of network nodes, and computing a sensitive decomposable function (e.g. majority, parity, maximum, OR, AND). The main problem considered is the problem of finding the MST. This problem, which has been known for at least 9 years, is one of the most fundamental and the most studied problems in the field of distributed network algorithms. Any algorithm for any one of the problems above requires at least O( E + V log V ) communication and O( V ) time in the general network. In this paper, we present new algorithms, which achieve those lower bounds. The best previous algorithm requires T( E + V log V ) in communication and T( V log V ) in time. Our result enables to improve algorithms for many other problems in distributed computing, achieving lower bounds on their communication and time complexities.",
"",
"",
"This article presents a fast distributed algorithm to compute a smallk-dominating setD(for any fixedk) and to compute its induced graph partition (breaking the graph into radiuskclusters centered around the vertices ofD). The time complexity of the algorithm isO(klog*n). Smallk-dominating sets have applications in a number of areas, including routing with sparse routing tables, the design of distributed data structures, and center selection in a distributed network. The main application described in this article concerns a fast distributed algorithm for constructing a minimum-weight spanning tree (MST). On ann-vertex network of diameterd, the new algorithm constructs an MST in time, improving on previous results."
]
} |
1607.06883 | 2486560399 | This paper presents a randomized Las Vegas distributed algorithm that constructs a minimum spanning tree (MST) in weighted networks with optimal (up to polylogarithmic factors) time and message complexity. This algorithm runs in @math time and exchanges @math messages (both with high probability), where @math is the number of nodes of the network, @math is the diameter, and @math is the number of edges. This is the first distributed MST algorithm that matches the time lower bound of @math [Elkin, SIAM J. Comput. 2006] and the message lower bound of @math [, J.ACM 2015] (which both apply to randomized algorithms). The prior time and message lower bounds are derived using two completely different graph constructions; the existing lower bound construction that shows one lower bound does not work for the other. To complement our algorithm, we present a new lower bound graph construction for which any distributed MST algorithm requires @math rounds and @math messages. | The preliminary version of this paper @cite_9 raised the open problem of whether there exists a deterministic time- and message-optimal MST algorithm. We notice that our algorithm is randomized , due to the use of the randomized cover construction of @cite_35 , even though the rest of the algorithm is deterministic. Elkin @cite_28 , building on our work, answered this question affirmatively by devising a deterministic MST algorithm that achieves essentially the same bounds as in this paper, i.e., uses @math messages and runs in @math time. Actually, the bounds are better than in this paper by logarithmic factors. Elkin's algorithm is simpler as it bypasses Phase 2 of Part 2 of our algorithm, and thus bypasses the randomized cover construction; the rest of the high-level structure of Elkin's algorithm is similar to our algorithm. | {
"cite_N": [
"@cite_28",
"@cite_35",
"@cite_9"
],
"mid": [
"2592535728",
"2030982625",
""
],
"abstract": [
"Distributed minimum spanning tree (MST) problem is one of the most central and fundamental problems in distributed graph algorithms. Kutten and Peleg [KP98] devised an algorithm with running time O(D + √n . log* n), where D is the hop-diameter of the input n-vertex m-edge graph, and with message complexity O(m + n3 2). Peleg and Rubinovich [PR99] showed that the running time of the algorithm of [KP98] is essentially tight, and asked if one can achieve near-optimal running time together with near-optimal message complexity. In a recent breakthrough, [PRS16] answered this question in the affirmative, and devised a randomized algorithm with time O(D+ √n) and message complexity O(m). They asked if such a simultaneous time- and message-optimality can be achieved by a deterministic algorithm. In this paper, building upon the work of [PRS16], we answer this question in the affirmative, and devise a deterministic algorithm that computes MST in time O((D + √n). log n), using O(m . log n + n log n . log* n) messages. The polylogarithmic factors in the time and message complexities of our algorithm are significantly smaller than the respective factors in the result of [PRS16]. Also, our algorithm and its analysis are very simple and self-contained, as opposed to rather complicated previous sublinear-time algorithms [GKP98,KP98,E04b,PRS16].",
"This paper studies the problem of constructing a minimum-weight spanning tree (MST) in a distributed network. This is one of the most important problems in the area of distributed computing. There is a long line of gradually improving protocols for this problem, and the state of the art today is a protocol with running time O(Λ(G)+n⋅log∗n) due to Kutten and Peleg [S. Kutten, D. Peleg, Fast distributed construction of k-dominating sets and applications, J. Algorithms 28 (1998) 40–66; preliminary version appeared in: Proc. of 14th ACM Symp. on Principles of Distributed Computing, Ottawa, Canada, August 1995, pp. 20–27], where Λ(G) denotes the diameter of the graph G. Peleg and Rubinovich [D. Peleg, V. Rubinovich, A near-tight lower bound on the time complexity of distributed MST construction, in: Proc. 40th IEEE Symp. on Foundations of Computer Science, 1999, pp. 253–261] have shown that Ω˜(n) time is required for constructing MST even on graphs of small diameter, and claimed that their result “establishes the asymptotic near-optimality” of the protocol of [S. Kutten, D. Peleg, Fast distributed construction of k-dominating sets and applications, J. Algorithms 28 (1998) 40–66; preliminary version appeared in: Proc. of 14th ACM Symp. on Principles of Distributed Computing, Ottawa, Canada, August 1995, pp. 20–27]. In this paper we refine this claim, and devise a protocol that constructs the MST in Ω˜(μ(G,ω)+n) rounds, where μ(G,ω) is the MST-radius of the graph. The ratio between the diameter and the MST-radius may be as large as Θ(n), and, consequently, on some inputs our protocol is faster than the protocol of [S. Kutten, D. Peleg, Fast distributed construction of k-dominating sets and applications, J. Algorithms 28 (1998) 40–66; preliminary version appeared in: Proc. of 14th ACM Symp. on Principles of Distributed Computing, Ottawa, Canada, August 1995, pp. 20–27] by a factor of Ω˜(n). 
Also, on every input, the running time of our protocol is never greater than twice the running time of the protocol of [S. Kutten, D. Peleg, Fast distributed construction of k-dominating sets and applications, J. Algorithms 28 (1998) 40–66; preliminary version appeared in: Proc. of 14th ACM Symp. on Principles of Distributed Computing, Ottawa, Canada, August 1995, pp. 20–27]. As part of our protocol for constructing an MST, we develop a protocol for constructing neighborhood covers with a drastically improved running time. The latter result may be of independent interest.",
""
]
} |
1607.07027 | 2491995547 | We investigate the problem of verbalizing Web Ontology Language (OWL) axioms of domain ontologies in this paper. The existing approaches address the problem of fidelity of verbalized OWL texts to OWL semantics by exploring different ways of expressing the same OWL axiom in various linguistic forms. They also perform grouping and aggregating of the natural language (NL) sentences that are generated corresponding to each OWL statement into a comprehensible structure. However, no efforts have been taken to try out a semantic reduction at logical level to remove redundancies and repetitions, so that the reduced set of axioms can be used for generating a more meaningful and human-understandable (what we call redundancy-free) text. Our experiments show that, formal semantic reduction at logical level is very helpful to generate redundancy-free descriptions of ontology entities. In this paper, we particularly focus on generating descriptions of individuals of SHIQ based ontologies. The details of a case study are provided to support the usefulness of the redundancy-free NL descriptions of individuals, in knowledge validation application. | Over the last decade, several CNLs, such as Attempto Controlled English (ACE) @cite_1 @cite_13 , Ordnance Survey's Rabbit (Rabbit) @cite_5 , and Sydney OWL Syntax (SOS) @cite_10 , have been specifically designed for the ontology language OWL. All these languages are meant to make interaction with formal ontological statements easier and faster for users who are unfamiliar with formal notations. Unlike other languages @cite_0 @cite_12 @cite_8 that have been suggested for representing OWL in controlled English, these CNLs are designed with formal language semantics and a bidirectional mapping between NL fragments and OWL constructs.
Even though the formal semantics and bidirectional mapping enable a formal check that the resulting NL expressions are unambiguous, these CNLs generate a collection of unordered sentences that is difficult to comprehend. | {
"cite_N": [
"@cite_8",
"@cite_10",
"@cite_1",
"@cite_0",
"@cite_5",
"@cite_13",
"@cite_12"
],
"mid": [
"2132130164",
"89942291",
"14390946",
"",
"44013005",
"2189349789",
"1538808533"
],
"abstract": [
"We present Naturalowl, a natural language generation system that produces texts describing individuals or classes of owl ontologies. Unlike simpler owl verbalizers, which typically express a single axiom at a time in controlled, often not entirely fluent natural language primarily for the benefit of domain experts, we aim to generate fluent and coherent multi-sentence texts for end-users. With a system like Naturalowl, one can publish information in owl on the Web, along with automatically produced corresponding texts in multiple languages, making the information accessible not only to computer programs and domain experts, but also end-users. We discuss the processing stages of Naturalowl, the optional domain-dependent linguistic resources that the system can use at each stage, and why they are useful. We also present trials showing that when the domain-dependent linguistic resources are available, Naturalowl produces significantly better texts compared to a simpler verbalizer, and that the resources can be created with relatively light effort.",
"This paper describes a proposed new syntax that can be used to write and read OWL ontologies in Controlled Natural Language (CNL): a well-defined subset of the English language. Following the lead of Manchester OWL Syntax in making OWL more accessible for non-logicians, and building on the previous success of Schwitter’s PENG (Processable English), the proposed Sydney OWL Syntax enables two-way translation and generation of grammatically correct full English sentences to and from OWL 1.1 functional syntax. Used in conjunction with OWL tools, it is designed to facilitate ontology construction and editing by enabling authors to write an OWL ontology in a defined subset of English. It also improves readability and understanding of OWL statements or whole ontologies, by enabling them to be read as English sentences. It is hoped that by providing the option of an intuitive, easy to use English syntax which requires no specialized knowledge, the broader community will be far more likely to develop and benefit from Semantic Web applications. This paper is a discussion paper covering the scope, design, and examples of Sydney OWL Syntax in use, and the authors invite feedback on all aspects of the proposal via email to krr.sydneysyntax@cse.unsw.edu.au. Working drafts of the full specification are available at http: www.ics.mq.edu.au rolfs sos.",
"We describe a verbalization of the logical content of OWL ontologies — using OWL 1.1 without data-valued properties — in Attempto Controlled English (ACE). Because ACE is a subset of English, the verbalization makes OWL ontologies accessible to people with no training in formal methods. We conclude that OWL can be verbalized in concise and understandable English provided that a certain naming style is adopted for OWL individuals, classes, and properties.",
"",
"A demand defrost system in which a high resistive epoxy resin hermetically seals a capacitive sensor plate and a noise immune phase detector detects a phase shift caused by the build up of frost.",
"",
"Verbalization is the process of writing the semantics captured in axioms into natural language sentences, which enables domain experts (who are not trained to understand technical formal languages) to be able to participate in the modeling and validation processes of their domain knowledge. We present a novel approach to support multilingual verbalization of logical theories, axiomatizations, and other specifications such as business rules. This engineering solution is demonstrated with the Object Role Modeling language and the ontology engineering tool DogmaModeler, although its underlying principles can be reused with other conceptual models and formal languages, such as Description Logics, to improve its understandability and usability by the domain expert. Our engineering solution for multilingual verbalization is characterized by its flexibility, extensibility and maintainability of the verbalization templates, which allow for easy augmentation with other languages than the 10 currently supported."
]
} |
1607.07027 | 2491995547 | We investigate the problem of verbalizing Web Ontology Language (OWL) axioms of domain ontologies in this paper. The existing approaches address the problem of fidelity of verbalized OWL texts to OWL semantics by exploring different ways of expressing the same OWL axiom in various linguistic forms. They also perform grouping and aggregating of the natural language (NL) sentences that are generated corresponding to each OWL statement into a comprehensible structure. However, no efforts have been taken to try out a semantic reduction at logical level to remove redundancies and repetitions, so that the reduced set of axioms can be used for generating a more meaningful and human-understandable (what we call redundancy-free) text. Our experiments show that, formal semantic reduction at logical level is very helpful to generate redundancy-free descriptions of ontology entities. In this paper, we particularly focus on generating descriptions of individuals of SHIQ based ontologies. The details of a case study are provided to support the usefulness of the redundancy-free NL descriptions of individuals, in knowledge validation application. | To use these CNLs as a means for ontology authoring and for knowledge validation purposes, appropriate organization of the verbalized text is necessary. A detailed comparison of systems that produce comprehensible NL texts is given in @cite_11 . Among such systems, the Semantic Web Authoring (SWAT) tools @cite_9 are among the most recent and prominent; they use standard techniques from computational linguistics to make the verbalized text more readable, giving more clarity to the generated text through grouping, aggregation, and elision. However, the SWAT tools focus mainly on the linguistic form of the sentences rather than on their logical form, and hence have deficiencies in their NL representations. | {
"cite_N": [
"@cite_9",
"@cite_11"
],
"mid": [
"1504748971",
"2159807474"
],
"abstract": [
"It has frequently been observed that domain experts are not necessarily ontology experts, and that the production of ontologies would be aided if they could read and edit axioms in natural language. The SWAT Tools Verbaliser is available, via a web interface, for verbalising OWL ontologies as texts in a controlled fragment of English. Taking as input any OWL ontology, the verbaliser creates a lexicon containing entries for all the entities in the input, and uses it to generate an English sentence corresponding to each logical axiom. These sentences are then organised into a document structure similar to that of an encyclopaedia, with an entry providing a definition, typology and examples for each entity. The output is either organised, easily-navigable English text encoded in XML, or a copy of the input OWL in which each entity is annotated with its description entry. The generated texts have been evaluated in a number of ways which are briefly presented here.",
"Background Text definitions for entities within bio-ontologies are a cornerstone of the effort to gain a consensus in understanding and usage of those ontologies. Writing these definitions is, however, a considerable effort and there is often a lag between specification of the main part of an ontology (logical descriptions and definitions of entities) and the development of the text-based definitions. The goal of natural language generation (NLG) from ontologies is to take the logical description of entities and generate fluent natural language. The application described here uses NLG to automatically provide text-based definitions from an ontology that has logical descriptions of its entities, so avoiding the bottleneck of authoring these definitions by hand."
]
} |
1607.07021 | 2951124177 | We consider single-hop topologies with saturated transmitting nodes, using IEEE 802.11 DCF for medium access. However, unlike the conventional WiFi, we study systems where one or more of the protocol parameters are different from the standard, and or where the propagation delays among the nodes are not negligible compared to the duration of a backoff slot. We observe that for several classes of protocol parameters, and for large propagation delays, such systems exhibit a certain performance anomaly known as short term unfairness, which may lead to severe performance degradation. The standard fixed point analysis technique (and its simple extensions) do not predict the system behavior well in such cases; a mean field model based asymptotic approach also is not adequate to predict the performance for networks of practical sizes in such cases. We provide a detailed stochastic model that accurately captures the system evolution. Since an exact analysis of this model is computationally intractable, we develop a novel approximate, but accurate, analysis that uses a parsimonious state representation for computational tractability. Apart from providing insights into the system behavior, the analytical method is also able to quantify the extent of short term unfairness in the system, and can therefore be used for tuning the protocol parameters to achieve desired throughput and fairness objectives. | There is a considerable body of literature on performance analysis of IEEE 802.11 DCF, starting with the seminal work by Bianchi @cite_15 , which was later generalized by @cite_6 to incorporate general backoff parameters. Several extensions have been proposed since then. For example, Jindal and Psounis @cite_5 proposed a throughput analysis for multi-hop IEEE 802.11 networks with non-saturated nodes. 
Nardelli and Knightly @cite_10 proposed a closed-form analysis for the saturation throughput in the presence of hidden terminals, but under several simplifying assumptions. Considerable attention has also been given to performance analysis of IEEE 802.11e EDCA; see, for example, @cite_13 @cite_14 , and the references therein. However, none of this work is suitable for predicting the performance of systems that exhibit short-term unfairness, and this has been explicitly pointed out in @cite_13 . We will shed more light on this as we proceed. | {
"cite_N": [
"@cite_14",
"@cite_10",
"@cite_6",
"@cite_5",
"@cite_15",
"@cite_13"
],
"mid": [
"",
"2032566227",
"",
"2126703982",
"2162598825",
"2163814678"
],
"abstract": [
"",
"We present a novel modeling approach to derive closed-form throughput expressions for CSMA networks with hidden terminals. The key modeling principle is to break the interdependence of events in a wireless network using conditional expressions that capture the effect of a specific factor each, yet preserve the required dependences when combined together. Different from existing models that use numerical aggregation techniques, our approach is the first to jointly characterize the three main critical factors affecting flow throughput (referred to as hidden terminals, information asymmetry and flow-in-the-middle) within a single analytical expression. We have developed a symbolic implementation of the model, that we use for validation against realistic simulations and experiments with real wireless hardware, observing high model accuracy in the evaluated scenarios. The derived closed-form expressions enable new analytical studies of capacity and protocol performance that would not be possible with prior models. We illustrate this through an application of network utility maximization in complex networks with collisions, hidden terminals, asymmetric interference and flow-in-the-middle instances. Despite that such problematic scenarios make utility maximization a challenging problem, the model-based optimization yields vast fairness gains and an average per-flow throughput gain higher than 500 with respect to 802.11 in the evaluated networks.",
"",
"In this paper, we characterize the achievable rate region for any IEEE 802.11-scheduled static multihop network. To do so, we first characterize the achievable edge-rate region, that is, the set of edge rates that are achievable on the given topology. This requires a careful consideration of the interdependence among edges since neighboring edges collide with and affect the idle time perceived by the edge under study. We approach this problem in two steps. First, we consider two-edge topologies and study the fundamental ways they interact. Then, we consider arbitrary multihop topologies, compute the effect that each neighboring edge has on the edge under study in isolation, and combine to get the aggregate effect. We then use the characterization of the achievable edge-rate region to characterize the achievable rate region. We verify the accuracy of our analysis by comparing the achievable rate region derived from simulations with the one derived analytically. We make a couple of interesting and somewhat surprising observations while deriving the rate regions. First, the achievable rate region with 802.11 scheduling is not necessarily convex. Second, the performance of 802.11 is surprisingly good. For example, in all the topologies used for model verification, the max-min allocation under 802.11 is at least 64 of the max-min allocation under a perfect scheduler.",
"The IEEE has standardized the 802.11 protocol for wireless local area networks. The primary medium access control (MAC) technique of 802.11 is called the distributed coordination function (DCF). The DCF is a carrier sense multiple access with collision avoidance (CSMA CA) scheme with binary slotted exponential backoff. This paper provides a simple, but nevertheless extremely accurate, analytical model to compute the 802.11 DCF throughput, in the assumption of finite number of terminals and ideal channel conditions. The proposed analysis applies to both the packet transmission schemes employed by DCF, namely, the basic access and the RTS CTS access mechanisms. In addition, it also applies to a combination of the two schemes, in which packets longer than a given threshold are transmitted according to the RTS CTS mechanism. By means of the proposed model, we provide an extensive throughput performance evaluation of both access mechanisms of the 802.11 protocol.",
"Analytical modeling of the 802.11e enhanced distributed channel access (EDCA) mechanism is today a fairly mature research area, considering the very large number of papers that have appeared in the literature. However, most work in this area models the EDCA operation through per-slot statistics, namely probability of transmission and collisions referred to \"slots.\" In so doing, they still share a methodology originally proposed for the 802.11 Distributed Coordination Function (DCF), although they do extend it by considering differentiated transmission collision probabilities over different slots.We aim to show that it is possible to devise 802.11e models that do not rely on per-slot statistics. To this purpose, we introduce and describe a novel modeling methodology that does not use per-slot transmission collision probabilities, but relies on the fixed-point computation of the whole (residual) backoff counter distribution occurring after a generic transmission attempt. The proposed approach achieves high accuracy in describing the channel access operations, not only in terms of throughput and delay performance, but also in terms of low-level performance metrics."
]
} |
1607.07295 | 2952745240 | People can recognize scenes across many different modalities beyond natural images. In this paper, we investigate how to learn cross-modal scene representations that transfer across modalities. To study this problem, we introduce a new cross-modal scene dataset. While convolutional neural networks can categorize cross-modal scenes well, they also learn an intermediate representation not aligned across modalities, which is undesirable for cross-modal transfer applications. We present methods to regularize cross-modal convolutional neural networks so that they have a shared representation that is agnostic of the modality. Our experiments suggest that our scene representation can help transfer representations across modalities for retrieval. Moreover, our visualizations suggest that units emerge in the shared representation that tend to activate on consistent concepts independently of the modality. | Domain adaptation techniques address the problem of learning models on one data distribution that generalize to a different distribution. @cite_21 proposes a method for domain adaptation using metric learning. In @cite_18 this approach is extended to unsupervised settings where one does not have access to target data labels, while @cite_33 uses deep CNNs instead. @cite_30 shows the biases inherent in common vision datasets, and @cite_40 proposes models that remain invariant to them. @cite_8 learns an aligned representation for domain adaptation using CNNs and the MMD metric. Our method differs from these works in that it seeks cross-modal representations between highly different modalities rather than modelling shifts between closely related domains. | {
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_33",
"@cite_8",
"@cite_21",
"@cite_40"
],
"mid": [
"2031342017",
"2128053425",
"2953226914",
"2951670162",
"1722318740",
"1852255964"
],
"abstract": [
"Datasets are an integral part of contemporary object recognition research. They have been the chief reason for the considerable progress in the field, not just as source of large amounts of training data, but also as means of measuring and comparing performance of competing algorithms. At the same time, datasets have often been blamed for narrowing the focus of object recognition research, reducing it to a single benchmark performance number. Indeed, some datasets, that started out as data capture efforts aimed at representing the visual world, have become closed worlds unto themselves (e.g. the Corel world, the Caltech-101 world, the PASCAL VOC world). With the focus on beating the latest benchmark numbers on the latest dataset, have we perhaps lost sight of the original purpose? The goal of this paper is to take stock of the current state of recognition datasets. We present a comparison study using a set of popular datasets, evaluated based on a number of criteria including: relative data bias, cross-dataset generalization, effects of closed-world assumption, and sample value. The experimental results, some rather surprising, suggest directions that can improve dataset collection as well as algorithm evaluation protocols. But more broadly, the hope is to stimulate discussion in the community regarding this very important, but largely neglected issue.",
"Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.",
"Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias. Fine-tuning deep models in a new domain can require a significant amount of labeled data, which for many applications is simply not available. We propose a new CNN architecture to exploit unlabeled and sparsely labeled target domain data. Our approach simultaneously optimizes for domain invariance to facilitate domain transfer and uses a soft label distribution matching loss to transfer information between tasks. Our proposed adaptation method offers empirical performance which exceeds previously published results on two standard benchmark visual domain adaptation tasks, evaluated across supervised and semi-supervised adaptation settings.",
"Recent studies reveal that a deep neural network can learn transferable features which generalize well to novel tasks for domain adaptation. However, as deep features eventually transition from general to specific along the network, the feature transferability drops significantly in higher layers with increasing domain discrepancy. Hence, it is important to formally reduce the dataset bias and enhance the transferability in task-specific layers. In this paper, we propose a new Deep Adaptation Network (DAN) architecture, which generalizes deep convolutional neural network to the domain adaptation scenario. In DAN, hidden representations of all task-specific layers are embedded in a reproducing kernel Hilbert space where the mean embeddings of different domain distributions can be explicitly matched. The domain discrepancy is further reduced using an optimal multi-kernel selection method for mean embedding matching. DAN can learn transferable features with statistical guarantees, and can scale linearly by unbiased estimate of kernel embedding. Extensive empirical evidence shows that the proposed architecture yields state-of-the-art image classification error rates on standard domain adaptation benchmarks.",
"Domain adaptation is an important emerging topic in computer vision. In this paper, we present one of the first studies of domain shift in the context of object recognition. We introduce a method that adapts object models acquired in a particular visual domain to new imaging conditions by learning a transformation that minimizes the effect of domain-induced changes in the feature distribution. The transformation is learned in a supervised manner and can be applied to categories for which there are no labeled examples in the new domain. While we focus our evaluation on object recognition tasks, the transform-based adaptation technique we develop is general and could be applied to nonimage data. Another contribution is a new multi-domain object database, freely available for download. We experimentally demonstrate the ability of our method to improve recognition on categories with few or no target domain labels and moderate to large changes in the imaging conditions.",
"The presence of bias in existing object recognition datasets is now well-known in the computer vision community. While it remains in question whether creating an unbiased dataset is possible given limited resources, in this work we propose a discriminative framework that directly exploits dataset bias during training. In particular, our model learns two sets of weights: (1) bias vectors associated with each individual dataset, and (2) visual world weights that are common to all datasets, which are learned by undoing the associated bias from each dataset. The visual world weights are expected to be our best possible approximation to the object model trained on an unbiased dataset, and thus tend to have good generalization ability. We demonstrate the effectiveness of our model by applying the learned weights to a novel, unseen dataset, and report superior results for both classification and detection tasks compared to a classical SVM that does not account for the presence of bias. Overall, we find that it is beneficial to explicitly account for bias when combining multiple datasets."
]
} |
1607.07171 | 2499487344 | This paper presents the results of a comprehensive investigation of complex linear physical-layer network coding (PNC) in two-way relay channels (TWRC). A critical question at relay R is as follows: "Given channel gain ratio @math , where @math and @math are the complex channel gains from nodes A and B to relay R, respectively, what are the optimal coefficients @math that minimize the symbol error rate (SER) of @math when we attempt to detect @math in the presence of noise?" Our contributions with respect to this question are as follows: (1) We put forth a general Gaussian-integer formulation for complex linear PNC in which @math , and @math are elements of a finite field of Gaussian integers, that is, the field of @math where @math is a Gaussian prime. The previous vector formulation, in which @math , @math , and @math were represented by @math -dimensional vectors and @math and @math were represented by @math matrices, corresponds to a subcase of our Gaussian-integer formulation where @math is a real prime only. Extension to Gaussian prime @math , where @math can be complex, gives us a larger set of signal constellations to achieve different rates at different SNR. (2) We show how to divide the complex plane of @math into different Voronoi regions such that the @math within each Voronoi region share the same optimal PNC mapping @math . We uncover the structure of the Voronoi regions that allows us to compute a minimum-distance metric that characterizes the SER of @math under optimal PNC mapping @math . Overall, the contributions in (1) and (2) yield a toolset for a comprehensive understanding of complex linear PNC in @math . We believe investigation of linear PNC beyond @math can follow the same approach. | In nonlinear PNC systems, the NC mapping at the relay cannot be expressed as a linear weighted sum of the symbols transmitted from the end nodes. A representative work on nonlinear PNC is @cite_22 . 
Based on an exclusive law to avoid ambiguity in the decoding of NC symbols at the relay, @cite_22 made use of the closest-neighbor clustering principle (corresponding to mapping constellation points of two superimposed symbols separated by @math to the same NC symbol in this paper) to map the superimposed QPSK symbols of the two users to NC symbols in a @math -QAM constellation at the relay. | {
"cite_N": [
"@cite_22"
],
"mid": [
"2127117141"
],
"abstract": [
"We investigate modulation schemes optimized for two-way wireless relaying systems, for which network coding is employed at the physical layer. We consider network coding based on denoise-and-forward (DNF) protocol, which consists of two stages: multiple access (MA) stage, where two terminals transmit simultaneously towards a relay, and broadcast (BC) stage, where the relay transmits towards the both terminals. We introduce a design principle of modulation and network coding, considering the superposed constellations during the MA stage. For the case of QPSK modulations at the MA stage, we show that QPSK constellations with an exclusive-or (XOR) network coding do not always offer the best transmission for the BC stage, and that there are several channel conditions in which unconventional 5-ary constellations lead to a better throughput performance. Through the use of sphere packing, we optimize the constellation for such an irregular network coding. We further discuss the design issue of the modulation in the case when the relay exploits diversity receptions such as multiple-antenna diversity and path diversity in frequency-selective fading. In addition, we apply our design strategy to a relaying system using higher-level modulations of 16QAM in the MA stage. Performance evaluations confirm that the proposed scheme can significantly improve end-to-end throughput for two-way relaying systems."
]
} |
1607.07171 | 2499487344 | This paper presents the results of a comprehensive investigation of complex linear physical-layer network coding (PNC) in two-way relay channels (TWRC). A critical question at relay R is as follows: "Given channel gain ratio @math , where @math and @math are the complex channel gains from nodes A and B to relay R, respectively, what are the optimal coefficients @math that minimize the symbol error rate (SER) of @math when we attempt to detect @math in the presence of noise?" Our contributions with respect to this question are as follows: (1) We put forth a general Gaussian-integer formulation for complex linear PNC in which @math , and @math are elements of a finite field of Gaussian integers, that is, the field of @math where @math is a Gaussian prime. The previous vector formulation, in which @math , @math , and @math were represented by @math -dimensional vectors and @math and @math were represented by @math matrices, corresponds to a subcase of our Gaussian-integer formulation where @math is a real prime only. Extension to Gaussian prime @math , where @math can be complex, gives us a larger set of signal constellations to achieve different rates at different SNR. (2) We show how to divide the complex plane of @math into different Voronoi regions such that the @math within each Voronoi region share the same optimal PNC mapping @math . We uncover the structure of the Voronoi regions that allows us to compute a minimum-distance metric that characterizes the SER of @math under optimal PNC mapping @math . Overall, the contributions in (1) and (2) yield a toolset for a comprehensive understanding of complex linear PNC in @math . We believe investigation of linear PNC beyond @math can follow the same approach. | Nonlinear PNC mapping based on Latin squares was proposed in @cite_20 . Here, the row of the Latin square corresponds to the symbols of one node, and the column represents the symbols of the other node. 
Entry @math of the Latin square contains the NC symbol mapped to symbol @math and symbol @math of the two users. The exclusive law of PNC mapping is satisfied by the Latin square's defining constraint: each NC symbol appears once and only once in each row and in each column. The study of Latin-square nonlinear PNC in @cite_20 focused on low-order @math -PSK (the end nodes transmit @math -PSK signals), and the extension to high-order modulations requires high-order Latin squares. By contrast, as we will show, our Gaussian-integer formulation for linear PNC mapping scales to various high-order modulations, supporting the NC operation with @math -PAM in @cite_25 @cite_29 @cite_28 and the complex modulations in this paper. In particular, for higher-order modulations, the Gaussian-integer formulation only requires selecting the optimal coefficients @math among a larger set of non-zero Gaussian integers. | {
"cite_N": [
"@cite_28",
"@cite_29",
"@cite_25",
"@cite_20"
],
"mid": [
"31453327",
"2087187421",
"1993626808",
""
],
"abstract": [
"This paper investigates various subtleties of applying linear physical-layer network coding (PNC) with @math -level pulse amplitude modulation ( @math -PAM) in two-way relay channels. A critical issue is how the PNC system performs when the received powers from the two users at the relay are imbalanced. In particular, how would the PNC system perform under slight power imbalance that is inevitable in practice, even when power control is applied? To answer these questions, this paper presents a comprehensive analysis of @math -PAM PNC. Our contributions are as follows. First, we give a systematic way to obtain the analytical relationship between the minimum distance of the signal constellation induced by the superimposed signals of the two users (a key performance determining factor) and the channel-gain ratio of the two users, for all @math . In particular, we show how the minimum distance changes in a piecewise linear fashion as the channel-gain ratio varies. Second, we show that the performance of @math -PAM PNC is highly sensitive to imbalanced received powers from the two users at the relay, even when the power imbalance is slight (e.g., the residual power imbalance in a power-controlled system). This sensitivity problem is exacerbated as @math increases, calling into question the robustness of high-order modulated PNC. Third, we propose an asynchronized PNC system in which the symbol arrival times of the two users at the relay are deliberately made to be asynchronous. We show that such asynchronized PNC, when operated with a belief propagation decoder, can remove the sensitivity problem, allowing a robust high-order modulated PNC system to be built.",
"The design of a reliable physical-layer network coding (PNC) scheme for practical fading two-way relay channels is a challenging task. This is because the signals transmitted by two users arrive at the relay with varied amplitudes and a relative carrier-phase offset, which will impair the performance of PNC. This paper studies a linear PNC scheme for fading two-way relay channels where the transmitters lack the channel state information. In this scheme, the relay computes and broadcast some finite-set integer combinations of two users' messages. The coefficients for the integer combinations used at the relay are carefully designed to minimize the error probability. This scheme can be viewed as a practical embodiment of the compute-and-forward concept. We develop a new LPNC design criterion called minimum set-distance maximization. Using this criterion, we derive an explicit expression for the optimized integer coefficients that minimizes the error probability of LPNC. The optimized integer coefficients turn out to resemble the fading channel coefficients. We further derive a closed-form expression on the average error probability performance over a complex-valued Rayleigh fading two-way relay channel, which shows that our designed LPNC scheme approaches the optimal error performance at a high SNR. Numerical results show that our designed LPNC outperforms existing schemes by more than 5 dB at a medium-to-high SNR regime.",
"We study a new linear physical-layer network coding (LPNC) scheme for fading two-way relay channels. In the uplink phase, two users transmit simultaneously. The relay selects some integer coefficients and computes a linear combination (in a size-q finite set) of the two users' messages, which is broadcast in the downlink phase. We develop a design criterion for choosing the integer coefficients that minimizes the error probability. Based on that, we derive an asymptotically tight bound, in a closed-form, for the error probability of the LPNC scheme over Rayleigh fading channels. Our analysis shows that the error-rate performance of the LPNC scheme becomes asymptotically optimal at a high SNR, and our designed LPNC scheme significantly outperforms existing schemes in the literature.",
""
]
} |
1607.07171 | 2499487344 | This paper presents the results of a comprehensive investigation of complex linear physical-layer network coding (PNC) in two-way relay channels (TWRC). A critical question at relay R is as follows: "Given channel gain ratio @math , where @math and @math are the complex channel gains from nodes A and B to relay R, respectively, what are the optimal coefficients @math that minimize the symbol error rate (SER) of @math when we attempt to detect @math in the presence of noise?" Our contributions with respect to this question are as follows: (1) We put forth a general Gaussian-integer formulation for complex linear PNC in which @math , and @math are elements of a finite field of Gaussian integers, that is, the field of @math where @math is a Gaussian prime. The previous vector formulation, in which @math , @math , and @math were represented by @math -dimensional vectors and @math and @math were represented by @math matrices, corresponds to a subcase of our Gaussian-integer formulation where @math is a real prime only. Extension to Gaussian prime @math , where @math can be complex, gives us a larger set of signal constellations to achieve different rates at different SNR. (2) We show how to divide the complex plane of @math into different Voronoi regions such that the @math within each Voronoi region share the same optimal PNC mapping @math . We uncover the structure of the Voronoi regions that allows us to compute a minimum-distance metric that characterizes the SER of @math under optimal PNC mapping @math . Overall, the contributions in (1) and (2) yield a toolset for a comprehensive understanding of complex linear PNC in @math . We believe investigation of linear PNC beyond @math can follow the same approach. 
| For link-by-link channel-coded PNC, the relay is aware of the channel coding employed by the two end nodes (specifically, it knows their codebooks) and can exploit the correlations among the symbols within each channel-coded packet to further improve the accuracy of the PNC decoding mapping. The study of channel-coded PNC systems also originated from low-order modulations such as BPSK @cite_24 @cite_19 , and then evolved to high-order modulations in search of higher throughput in the high-SNR regime @cite_6 @cite_14 @cite_27 @cite_26 @cite_10 @cite_17 @cite_15 . | {
"cite_N": [
"@cite_14",
"@cite_26",
"@cite_6",
"@cite_24",
"@cite_19",
"@cite_27",
"@cite_15",
"@cite_10",
"@cite_17"
],
"mid": [
"1512015702",
"",
"1575671689",
"",
"2017096524",
"2005196269",
"2057929374",
"2569741146",
"2048540241"
],
"abstract": [
"We address an open question, regarding whether a lattice code with lattice decoding (as opposed to maximum-likelihood (ML) decoding) can achieve the additive white Gaussian noise (AWGN) channel capacity. We first demonstrate how minimum mean-square error (MMSE) scaling along with dithering (lattice randomization) techniques can transform the power-constrained AWGN channel into a modulo-lattice additive noise channel, whose effective noise is reduced by a factor of spl radic (1+SNR SNR). For the resulting channel, a uniform input maximizes mutual information, which in the limit of large lattice dimension becomes 1 2 log (1+SNR), i.e., the full capacity of the original power constrained AWGN channel. We then show that capacity may also be achieved using nested lattice codes, the coarse lattice serving for shaping via the modulo-lattice transformation, the fine lattice for channel coding. We show that such pairs exist for any desired nesting ratio, i.e., for any signal-to-noise ratio (SNR). Furthermore, for the modulo-lattice additive noise channel lattice decoding is optimal. Finally, we show that the error exponent of the proposed scheme is lower bounded by the Poltyrev exponent.",
"",
"We propose and design a practical modulation-coded (MC) physical-layer network coding (PNC) scheme to approach the capacity limits of Gaussian and fading two-way relay channels (TWRCs). In the proposed scheme, an irregular repeat–accumulate (IRA) MC over @math with the same random coset is employed at two users, which directly maps the message sequences into coded PAM or QAM symbol sequences. The relay chooses appropriate network coding coefficients and computes the associated finite-field linear combinations of the two users' message sequences using an iterative belief propagation algorithm. For a symmetric Gaussian TWRC, we show that, by introducing the same random coset vector at the two users and a time-varying accumulator in the IRA code, the MC-PNC scheme exhibits symmetry and permutation-invariant properties for the soft information distribution of the network-coded message sequence (NCMS). We explore these properties in analyzing the convergence behavior of the scheme and optimizing the MC to approach the capacity limit of a TWRC. For a block fading TWRC, we present a new MC linear PNC scheme and an algorithm used at the relay for computing the NCMS. We demonstrate that our developed schemes achieve near-capacity performance in both Gaussian and Rayleigh fading TWRCs. For example, our designed codes over GF(7) and GF(3) with a code rate of 3 4 are within 1 and 1.2 dB of the TWRC capacity, respectively. Our method can be regarded as a practical embodiment of the notion of compute-and-forward with a good nested lattice code, and it can be applied to a wide range of network configurations.",
"",
"We investigate a channel-coded physical-layer network coding (CPNC) scheme for binary-input Gaussian two-way relay channels. In this scheme, the codewords of the two users are transmitted simultaneously. The relay computes and forwards a network-coded (NC) codeword without complete decoding of the two users' individual messages. We propose a new punctured codebook method to explicitly find the distance spectrum of the CPNC scheme. Based on that, we derive an asymptotically tight performance bound for the error probability. Our analysis shows that, compared to the single-user scenario, the CPNC scheme exhibits the same minimum Euclidean distance but an increased multiplicity of error events with minimum distance. At a high SNR, this leads to an SNR penalty of at most ln2 (in linear scale), for long channel codes of various rates. Our analytical results match well with the simulated performance.",
"Interference is usually viewed as an obstacle to communication in wireless networks. This paper proposes a new strategy, compute-and-forward, that exploits interference to obtain significantly higher rates between users in a network. The key idea is that relays should decode linear functions of transmitted messages according to their observed channel coefficients rather than ignoring the interference as noise. After decoding these linear equations, the relays simply send them towards the destinations, which given enough equations, can recover their desired messages. The underlying codes are based on nested lattices whose algebraic structure ensures that integer combinations of codewords can be decoded reliably. Encoders map messages from a finite field to a lattice and decoders recover equations of lattice points which are then mapped back to equations over the finite field. This scheme is applicable even if the transmitters lack channel state information.",
"In this paper, a Gaussian two-way relay channel, where two source nodes exchange messages with each other through a relay, is considered. We assume that all nodes operate in full-duplex mode and there is no direct channel between the source nodes. We propose an achievable scheme composed of nested lattice codes for the uplink and structured binning for the downlink. Unlike conventional nested lattice codes, our codes utilize two different shaping lattices for source nodes based on a three-stage lattice partition chain, which is a key ingredient for producing the best gap-to-capacity results to date. Specifically, for all channel parameters, the achievable rate region of our scheme is within 1 2 bit from the capacity region for each user and its sum rate is within log3 2 bit from the sum capacity.",
"The problem of designing physical-layer network coding (PNC) schemes via nested lattices is considered. Building on the compute-and-forward (C&F) relaying strategy of Nazer and Gastpar, who demonstrated its asymptotic gain using information-theoretic tools, an algebraic approach is taken to show its potential in practical, nonasymptotic, settings. A general framework is developed for studying nested-lattice-based PNC schemes-called lattice network coding (LNC) schemes for short-by making a direct connection between C&F and module theory. In particular, a generic LNC scheme is presented that makes no assumptions on the underlying nested lattice code. C&F is reinterpreted in this framework, and several generalized constructions of LNC schemes are given. The generic LNC scheme naturally leads to a linear network coding channel over modules, based on which noncoherent network coding can be achieved. Next, performance complexity tradeoffs of LNC schemes are studied, with a particular focus on hypercube-shaped LNC schemes. The error probability of this class of LNC schemes is largely determined by the minimum intercoset distances of the underlying nested lattice code. Several illustrative hypercube-shaped LNC schemes are designed based on Constructions A and D, showing that nominal coding gains of 3 to 7.5 dB can be obtained with reasonable decoding complexity. Finally, the possibility of decoding multiple linear combinations is considered and related to the shortest independent vectors problem. A notion of dominant solutions is developed together with a suitable lattice-reduction-based algorithm.",
"When two or more users in a wireless network transmit simultaneously, their electromagnetic signals are linearly superimposed on the channel. As a result, a receiver that is interested in one of these signals sees the others as unwanted interference. This property of the wireless medium is typically viewed as a hindrance to reliable communication over a network. However, using a recently developed coding strategy, interference can in fact be harnessed for network coding. In a wired network, (linear) network coding refers to each intermediate node taking its received packets, computing a linear combination over a finite field, and forwarding the outcome towards the destinations. Then, given an appropriate set of linear combinations, a destination can solve for its desired packets. For certain topologies, this strategy can attain significantly higher throughputs over routing-based strategies. Reliable physical layer network coding takes this idea one step further: using judiciously chosen linear error-correcting codes, intermediate nodes in a wireless network can directly recover linear combinations of the packets from the observed noisy superpositions of transmitted signals. Starting with some simple examples, this paper explores the core ideas behind this new technique and the possibilities it offers for communication over interference-limited wireless networks."
]
} |
1607.07032 | 2950042905 | Detecting pedestrians has been arguably addressed as a special topic beyond general object detection. Although recent deep learning object detectors such as Fast/Faster R-CNN [1, 2] have shown excellent performance for general object detection, they have limited success for detecting pedestrians, and previous leading pedestrian detectors were in general hybrid methods combining hand-crafted and deep convolutional features. In this paper, we investigate issues involving Faster R-CNN [2] for pedestrian detection. We discover that the Region Proposal Network (RPN) in Faster R-CNN indeed performs well as a stand-alone pedestrian detector, but surprisingly, the downstream classifier degrades the results. We argue that two reasons account for the unsatisfactory accuracy: (i) insufficient resolution of feature maps for handling small instances, and (ii) lack of any bootstrapping strategy for mining hard negative examples. Driven by these observations, we propose a very simple but effective baseline for pedestrian detection, using an RPN followed by boosted forests on shared, high-resolution convolutional feature maps. We comprehensively evaluate this method on several benchmarks (Caltech, INRIA, ETH, and KITTI), presenting competitive accuracy and good speed. Code will be made publicly available. | The Integral Channel Features (ICF) detector @cite_20 , which extends the Viola-Jones framework @cite_15 , is among the most popular pedestrian detectors that do not use deep learning features. The ICF detector involves channel feature pyramids and boosted classifiers. The feature representations of ICF have been improved in several ways, including ACF @cite_13 , LDCF @cite_19 , SCF @cite_28 , and many others, but the boosting algorithm remains a key building block for pedestrian detection. | {
"cite_N": [
"@cite_28",
"@cite_19",
"@cite_15",
"@cite_13",
"@cite_20"
],
"mid": [
"1650122911",
"2170101770",
"2137401668",
"2125556102",
"2159386181"
],
"abstract": [
"Paper-by-paper results make it easy to miss the forest for the trees.We analyse the remarkable progress of the last decade by dis- cussing the main ideas explored in the 40+ detectors currently present in the Caltech pedestrian detection benchmark. We observe that there exist three families of approaches, all currently reaching similar detec- tion quality. Based on our analysis, we study the complementarity of the most promising ideas by combining multiple published strategies. This new decision forest detector achieves the current best known performance on the challenging Caltech-USA dataset.",
"Even with the advent of more sophisticated, data-hungry methods, boosted decision trees remain extraordinarily successful for fast rigid object detection, achieving top accuracy on numerous datasets. While effective, most boosted detectors use decision trees with orthogonal (single feature) splits, and the topology of the resulting decision boundary may not be well matched to the natural topology of the data. Given highly correlated data, decision trees with oblique (multiple feature) splits can be effective. Use of oblique splits, however, comes at considerable computational expense. Inspired by recent work on discriminative decorrelation of HOG features, we instead propose an efficient feature transform that removes correlations in local neighborhoods. The result is an overcomplete but locally decorrelated representation ideally suited for use with orthogonal decision trees. In fact, orthogonal trees with our locally decorrelated features outperform oblique trees trained over the original features at a fraction of the computational cost. The overall improvement in accuracy is dramatic: on the Caltech Pedestrian Dataset, we reduce false positives nearly tenfold over the previous state-of-the-art.",
"This paper describes a face detection framework that is capable of processing images extremely rapidly while achieving high detection rates. There are three key contributions. The first is the introduction of a new image representation called the “Integral Image” which allows the features used by our detector to be computed very quickly. The second is a simple and efficient classifier which is built using the AdaBoost learning algorithm (Freund and Schapire, 1995) to select a small number of critical visual features from a very large set of potential features. The third contribution is a method for combining classifiers in a “cascade” which allows background regions of the image to be quickly discarded while spending more computation on promising face-like regions. A set of experiments in the domain of face detection is presented. The system yields face detection performance comparable to the best previous systems (Sung and Poggio, 1998; , 1998; Schneiderman and Kanade, 2000; , 2000). Implemented on a conventional desktop, face detection proceeds at 15 frames per second.",
"Multi-resolution image features may be approximated via extrapolation from nearby scales, rather than being computed explicitly. This fundamental insight allows us to design object detection algorithms that are as accurate, and considerably faster, than the state-of-the-art. The computational bottleneck of many modern detectors is the computation of features at every scale of a finely-sampled image pyramid. Our key insight is that one may compute finely sampled feature pyramids at a fraction of the cost, without sacrificing performance: for a broad family of features we find that features computed at octave-spaced scale intervals are sufficient to approximate features on a finely-sampled pyramid. Extrapolation is inexpensive as compared to direct feature computation. As a result, our approximation yields considerable speedups with negligible loss in detection accuracy. We modify three diverse visual recognition systems to use fast feature pyramids and show results on both pedestrian detection (measured on the Caltech, INRIA, TUD-Brussels and ETH data sets) and general object detection (measured on the PASCAL VOC). The approach is general and is widely applicable to vision algorithms requiring fine-grained multi-scale analysis. Our approximation is valid for images with broad spectra (most natural images) and fails for images with narrow band-pass spectra (e.g., periodic textures).",
"We study the performance of ‘integral channel features’ for image classification tasks, focusing in particular on pedestrian detection. The general idea behind integral channel features is that multiple registered image channels are computed using linear and non-linear transformations of the input image, and then features such as local sums, histograms, and Haar features and their various generalizations are efficiently computed using integral images. Such features have been used in recent literature for a variety of tasks – indeed, variations appear to have been invented independently multiple times. Although integral channel features have proven effective, little effort has been devoted to analyzing or optimizing the features themselves. In this work we present a unified view of the relevant work in this area and perform a detailed experimental evaluation. We demonstrate that when designed properly, integral channel features not only outperform other features including histogram of oriented gradient (HOG), they also (1) naturally integrate heterogeneous sources of information, (2) have few parameters and are insensitive to exact parameter settings, (3) allow for more accurate spatial localization during detection, and (4) result in fast detectors when coupled with cascade classifiers."
]
} |
1607.07032 | 2950042905 | Detecting pedestrian has been arguably addressed as a special topic beyond general object detection. Although recent deep learning object detectors such as Fast/Faster R-CNN [1, 2] have shown excellent performance for general object detection, they have limited success for detecting pedestrian, and previous leading pedestrian detectors were in general hybrid methods combining hand-crafted and deep convolutional features. In this paper, we investigate issues involving Faster R-CNN [2] for pedestrian detection. We discover that the Region Proposal Network (RPN) in Faster R-CNN indeed performs well as a stand-alone pedestrian detector, but surprisingly, the downstream classifier degrades the results. We argue that two reasons account for the unsatisfactory accuracy: (i) insufficient resolution of feature maps for handling small instances, and (ii) lack of any bootstrapping strategy for mining hard negative examples. Driven by these observations, we propose a very simple but effective baseline for pedestrian detection, using an RPN followed by boosted forests on shared, high-resolution convolutional feature maps. We comprehensively evaluate this method on several benchmarks (Caltech, INRIA, ETH, and KITTI), presenting competitive accuracy and good speed. Code will be made publicly available. | Driven by the success of ("slow") R-CNN @cite_18 for general object detection, a recent series of methods @cite_28 @cite_29 @cite_11 adopt a two-stage pipeline for pedestrian detection. In @cite_16 , the SCF pedestrian detector @cite_28 is used to propose regions, followed by an R-CNN for classification; TA-CNN @cite_29 employs the ACF detector @cite_13 to generate proposals, and trains an R-CNN-style network to jointly optimize pedestrian detection with semantic tasks; the DeepParts method @cite_11 applies the LDCF detector @cite_19 to generate proposals and learns a set of complementary parts by neural networks. 
We note that these proposers are stand-alone pedestrian detectors consisting of hand-crafted features and boosted classifiers. | {
"cite_N": [
"@cite_18",
"@cite_28",
"@cite_29",
"@cite_19",
"@cite_16",
"@cite_13",
"@cite_11"
],
"mid": [
"2102605133",
"1650122911",
"2953327122",
"2170101770",
"2949493420",
"2125556102",
"2200528286"
],
"abstract": [
"Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn.",
"Paper-by-paper results make it easy to miss the forest for the trees.We analyse the remarkable progress of the last decade by dis- cussing the main ideas explored in the 40+ detectors currently present in the Caltech pedestrian detection benchmark. We observe that there exist three families of approaches, all currently reaching similar detec- tion quality. Based on our analysis, we study the complementarity of the most promising ideas by combining multiple published strategies. This new decision forest detector achieves the current best known performance on the challenging Caltech-USA dataset.",
"Deep learning methods have achieved great success in pedestrian detection, owing to its ability to learn features from raw pixels. However, they mainly capture middle-level representations, such as pose of pedestrian, but confuse positive with hard negative samples, which have large ambiguity, e.g. the shape and appearance of tree trunk' or wire pole' are similar to pedestrian in certain viewpoint. This ambiguity can be distinguished by high-level representation. To this end, this work jointly optimizes pedestrian detection with semantic tasks, including pedestrian attributes (e.g. carrying backpack') and scene attributes (e.g. road', tree', and horizontal'). Rather than expensively annotating scene attributes, we transfer attributes information from existing scene segmentation datasets to the pedestrian dataset, by proposing a novel deep model to learn high-level features from multiple tasks and multiple data sources. Since distinct tasks have distinct convergence rates and data from different datasets have different distributions, a multi-task objective function is carefully designed to coordinate tasks and reduce discrepancies among datasets. The importance coefficients of tasks and network parameters in this objective function can be iteratively estimated. Extensive evaluations show that the proposed approach outperforms the state-of-the-art on the challenging Caltech and ETH datasets, where it reduces the miss rates of previous deep models by 17 and 5.5 percent, respectively.",
"Even with the advent of more sophisticated, data-hungry methods, boosted decision trees remain extraordinarily successful for fast rigid object detection, achieving top accuracy on numerous datasets. While effective, most boosted detectors use decision trees with orthogonal (single feature) splits, and the topology of the resulting decision boundary may not be well matched to the natural topology of the data. Given highly correlated data, decision trees with oblique (multiple feature) splits can be effective. Use of oblique splits, however, comes at considerable computational expense. Inspired by recent work on discriminative decorrelation of HOG features, we instead propose an efficient feature transform that removes correlations in local neighborhoods. The result is an overcomplete but locally decorrelated representation ideally suited for use with orthogonal decision trees. In fact, orthogonal trees with our locally decorrelated features outperform oblique trees trained over the original features at a fraction of the computational cost. The overall improvement in accuracy is dramatic: on the Caltech Pedestrian Dataset, we reduce false positives nearly tenfold over the previous state-of-the-art.",
"In this paper we study the use of convolutional neural networks (convnets) for the task of pedestrian detection. Despite their recent diverse successes, convnets historically underperform compared to other pedestrian detectors. We deliberately omit explicitly modelling the problem into the network (e.g. parts or occlusion modelling) and show that we can reach competitive performance without bells and whistles. In a wide range of experiments we analyse small and big convnets, their architectural choices, parameters, and the influence of different training data, including pre-training on surrogate tasks. We present the best convnet detectors on the Caltech and KITTI dataset. On Caltech our convnets reach top performance both for the Caltech1x and Caltech10x training setup. Using additional data at training time our strongest convnet model is competitive even to detectors that use additional data (optical flow) at test time.",
"Multi-resolution image features may be approximated via extrapolation from nearby scales, rather than being computed explicitly. This fundamental insight allows us to design object detection algorithms that are as accurate, and considerably faster, than the state-of-the-art. The computational bottleneck of many modern detectors is the computation of features at every scale of a finely-sampled image pyramid. Our key insight is that one may compute finely sampled feature pyramids at a fraction of the cost, without sacrificing performance: for a broad family of features we find that features computed at octave-spaced scale intervals are sufficient to approximate features on a finely-sampled pyramid. Extrapolation is inexpensive as compared to direct feature computation. As a result, our approximation yields considerable speedups with negligible loss in detection accuracy. We modify three diverse visual recognition systems to use fast feature pyramids and show results on both pedestrian detection (measured on the Caltech, INRIA, TUD-Brussels and ETH data sets) and general object detection (measured on the PASCAL VOC). The approach is general and is widely applicable to vision algorithms requiring fine-grained multi-scale analysis. Our approximation is valid for images with broad spectra (most natural images) and fails for images with narrow band-pass spectra (e.g., periodic textures).",
"Recent advances in pedestrian detection are attained by transferring the learned features of Convolutional Neural Network (ConvNet) to pedestrians. This ConvNet is typically pre-trained with massive general object categories (e.g. ImageNet). Although these features are able to handle variations such as poses, viewpoints, and lightings, they may fail when pedestrian images with complex occlusions are present. Occlusion handling is one of the most important problem in pedestrian detection. Unlike previous deep models that directly learned a single detector for pedestrian detection, we propose DeepParts, which consists of extensive part detectors. DeepParts has several appealing properties. First, DeepParts can be trained on weakly labeled data, i.e. only pedestrian bounding boxes without part annotations are provided. Second, DeepParts is able to handle low IoU positive proposals that shift away from ground truth. Third, each part detector in DeepParts is a strong detector that can detect pedestrian by observing only a part of a proposal. Extensive experiments in Caltech dataset demonstrate the effectiveness of DeepParts, which yields a new state-of-the-art miss rate of 11:89 , outperforming the second best method by 10 ."
]
} |
1607.06952 | 2510415884 | Sentence ordering is a general and critical task for natural language generation applications. Previous works have focused on improving its performance in an external, downstream task, such as multi-document summarization. Given its importance, we propose to study it as an isolated task. We collect a large corpus of academic texts, and derive a data driven approach to learn pairwise ordering of sentences, and validate the efficacy with extensive experiments. Source codes and dataset of this paper will be made publicly available. | A fundamental problem in text generation is information ordering, including word and sentence ordering. Compared with word ordering @cite_27 @cite_0 @cite_24 @cite_4 , sentence ordering remains less studied. Existing work on sentence ordering focuses on improving external, downstream applications, such as multi-document summarization and discourse coherence @cite_26 @cite_25 @cite_19 @cite_29 @cite_20 . There is also a lack of intrinsic evaluation for sentence ordering. | {
"cite_N": [
"@cite_26",
"@cite_4",
"@cite_29",
"@cite_0",
"@cite_24",
"@cite_27",
"@cite_19",
"@cite_25",
"@cite_20"
],
"mid": [
"",
"2347114283",
"",
"2096407857",
"1908479511",
"2065174184",
"2120932277",
"2124741472",
""
],
"abstract": [
"",
"Recent work on word ordering has argued that syntactic structure is important, or even required, for effectively recovering the order of a sentence. We find that, in fact, an n-gram language model with a simple heuristic gives strong results on this task. Furthermore, we show that a long short-term memory (LSTM) language model is even more effective at recovering order, with our basic model outperforming a state-of-the-art syntactic model by 11.5 BLEU points. Additional data and larger beams yield further gains, at the expense of training and search time.",
"",
"A fundamental problem in text generation is word ordering. Word ordering is a computationally difficult problem, which can be constrained to some extent for particular applications, for example by using synchronous grammars for statistical machine translation. There have been some recent attempts at the unconstrained problem of generating a sentence from a multi-set of input words (, 2009; Zhang and Clark, 2011). By using CCG and learning guided search, Zhang and Clark reported the highest scores on this task. One limitation of their system is the absence of an N-gram language model, which has been used by text generation systems to improve fluency. We take the Zhang and Clark system as the baseline, and incorporate an N-gram model by applying online large-margin training. Our system significantly improved on the baseline by 3.7 BLEU points.",
"Word ordering is a fundamental problem in text generation. In this article, we study word ordering using a syntax-based approach and a discriminative model. Two grammar formalisms are considered: Combinatory Categorial Grammar CCG and dependency grammar. Given the search for a likely string and syntactic analysis, the search space is massive, making discriminative training challenging. We develop a learning-guided search framework, based on best-first search, and investigate several alternative training algorithms. The framework we present is flexible in that it allows constraints to be imposed on output word orders. To demonstrate this flexibility, a variety of input conditions are considered. First, we investigate a \"pure\" word-ordering task in which the input is a multi-set of words, and the task is to order them into a grammatical and fluent sentence. This task has been tackled previously, and we report improved performance over existing systems on a standard Wall Street Journal test set. Second, we tackle the same reordering problem, but with a variety of input conditions, from the bare case with no dependencies or POS tags specified, to the extreme case where all POS tags and unordered, unlabeled dependencies are provided as input and various conditions in between. When applied to the NLG 2011 shared task, our system gives competitive results compared with the best-performing systems, which provide a further demonstration of the practical utility of our system.",
"In this paper, we describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP). Starting from a DP-based solution to the traveling salesman problem, we present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm. A search restriction especially useful for the translation direction from German to English is presented. The experimental tests are carried out on the Verbmobil task (German-English, 8000-word vocabulary), which is a limited-domain spoken-language task.",
"In two ERP experiments we investigated how and when the language comprehension system relates an incoming word to semantic representations of an unfolding local sentence and a wider discourse. In Experiment 1, subjects were presented with short stories. The last sentence of these stories occasionally contained a critical word that, although acceptable in the local sentence context, was semantically anomalous with respect to the wider discourse (e.g., Jane told the brother that he was exceptionally slow in a discourse context where he had in fact been very quick). Relative to coherent control words (e.g., quick), these discourse-dependent semantic anomalies elicited a large N400 effect that began at about 200 to 250 msec after word onset. In Experiment 2, the same sentences were presented without their original story context. Although the words that had previously been anomalous in discourse still elicited a slightly larger average N400 than the coherent words, the resulting N400 effect was much reduced, showing that the large effect observed in stories depended on the wider discourse. In the same experiment, single sentences that contained a clear local semantic anomaly elicited a standard sentence-dependent N400 effect (e.g., Kutas & Hillyard, 1980). The N400 effects elicited in discourse and in single sentences had the same time course, overall morphology, and scalp distribution. We argue that these findings are most compatible with models of language processing in which there is no fundamental distinction between the integration of a word in its local (sentence-level) and its global (discourse-level) semantic context.",
"This paper concerns relationships among focus of attention, choice of referring expression, and perceived coherence of utterances within a discourse segment. It presents a framework and initial theory of centering intended to model the local component of attentional state. The paper examines interactions between local coherence and choice of referring expressions; it argues that differences in coherence correspond in part to the inference demands made by different types of referring expressions, given a particular attentional state. It demonstrates that the attentional state properties modeled by centering can account for these differences.",
""
]
} |
1607.06906 | 2509612528 | Fluctuations in electricity tariffs induced by the sporadic nature of demand loads on power grids has initiated immense efforts to find optimal scheduling solutions for charging and discharging plug-in electric vehicles (PEVs) subject to different objective sets. In this paper, we consider vehicle-to-grid (V2G) scheduling at a geographically large scale in which PEVs have the flexibility of charging/discharging at multiple smart stations coordinated by individual aggregators. In such a realistic setting, we first formulate the objective of maximizing the overall profit of both, demand and supply entities, by defining a weighting parameter. Assuming random PEV arrivals, we then propose an online decentralized greedy algorithm for the formulated mixed integer non-linear programming (MINLP) problem, which incorporates efficient heuristics to practically guide each incoming vehicle to the most appropriate charging station (CS). The better performance of the presented algorithm in comparison with an alternative allocation strategy is demonstrated through simulations in terms of the overall achievable profit, computational time per vehicle, and flatness of the final electricity load. Moreover, simulation results obtained for various case studies also confirm the existence of optimal values for V2G penetration percentage and number of deployed stations at which the overall profit can be maximized. | Allocation strategies for EV charging/discharging with different objectives have been studied in many recent works @cite_11 - @cite_21 , @cite_12 - @cite_13 . The main challenge in devising real-time allocation algorithms in V2G is that future departure times and charging demands of EVs are not known a priori. The non-preemptive scheduling problem studied by He @cite_6 accounts for the real-time pricing and degradation fluctuation costs of batteries in obtaining the minimum charging costs. 
They consider the scenario of online EV arrivals at several small, closely located CSs managed by a single aggregator and design a decentralized, locally optimal algorithm. In @cite_13 , the authors provide a closed-form solution to determine the optimal charging power of a single EV under a time-of-use (ToU) pricing model and uncertain departure time. The authors of @cite_21 propose an online algorithm with a proven competitive ratio for obtaining a sub-optimal solution with slightly higher cost as compared to the offline optimal solution, while satisfying the desired energy requirements imposed by the vehicles. | {
"cite_N": [
"@cite_21",
"@cite_6",
"@cite_13",
"@cite_12",
"@cite_11"
],
"mid": [
"2962789373",
"2051431619",
"2053823931",
"2346452219",
"1968987283"
],
"abstract": [
"",
"The vehicle electrification will have a significant impact on the power grid due to the increase in electricity consumption. It is important to perform intelligent scheduling for charging and discharging of electric vehicles (EVs). However, there are two major challenges in the scheduling problem. First, it is challenging to find the globally optimal scheduling solution which can minimize the total cost. Second, it is difficult to find a distributed scheduling scheme which can handle a large population and the random arrivals of the EVs. In this paper, we propose a globally optimal scheduling scheme and a locally optimal scheduling scheme for EV charging and discharging. We first formulate a global scheduling optimization problem, in which the charging powers are optimized to minimize the total cost of all EVs which perform charging and discharging during the day. The globally optimal solution provides the globally minimal total cost. However, the globally optimal scheduling scheme is impractical since it requires the information on the future base loads and the arrival times and the charging periods of the EVs that will arrive in the future time of the day. To develop a practical scheduling scheme, we then formulate a local scheduling optimization problem, which aims to minimize the total cost of the EVs in the current ongoing EV set in the local group. The locally optimal scheduling scheme is not only scalable to a large EV population but also resilient to the dynamic EV arrivals. Through simulations, we demonstrate that the locally optimal scheduling scheme can achieve a close performance compared to the globally optimal scheduling scheme.",
"In this paper, we show that an uncertain departure time significantly changes the analysis in optimizing the charging schedule of electric vehicles (EVs). We also obtain a closed-form solution for the stochastic optimization problem that is formulated to schedule charging of EVs with uncertain departure times in presence of hourly time-of-use pricing tariffs.",
"This paper proposes a novel cooperative charging strategy for a smart charging station in the dynamic electricity pricing environment, which helps electric vehicles (EVs) to economically accomplish the charging task by the given deadlines. This strategy allows EVs to share their battery-stored energy with each other under the coordination of an aggregator, so that more flexibility is given to the aggregator for better scheduling. Mathematically, the scheduling problem is formulated as a constrained mixed-integer linear program (MILP) to capture the discrete nature of the battery states, i.e., charging, idle and discharging. Then, an efficient algorithm is proposed to solve the MILP by means of dual decomposition and Benders decomposition. At last, the algorithm can be implemented in a distributed fashion, which makes it scalable and thus suitable for large-scale scheduling problems. Numerical results validate our theoretical analysis.",
"Vehicle-to-grid (V2G) has the potential of reducing the cost of owning and operating electric vehicles (EVs) while increasing utility system flexibility. Unidirectional V2G is a logical first step because it can be implemented on standard J1772 chargers and it does not degrade EV batteries from cycling. In this work an optimal combined bidding formulation for regulation and spinning reserves is developed to be used by aggregators. This formulation takes into account unplanned departures by EV owners during contract periods and compensates accordingly. Optional load level and price constraints are also developed. These algorithms maximize profits to the aggregator while increasing the benefits the customers and utility. Simulations over a three month period on the ERCOT system show that implementation of these algorithms can provide significant benefits to customers, utilities, and aggregators. Comparisons with bidirectional V2G show that while the benefits of unidirectional V2G are significantly lower, so are the risks."
]
} |
1607.06906 | 2509612528 | Fluctuations in electricity tariffs induced by the sporadic nature of demand loads on power grids has initiated immense efforts to find optimal scheduling solutions for charging and discharging plug-in electric vehicles (PEVs) subject to different objective sets. In this paper, we consider vehicle-to-grid (V2G) scheduling at a geographically large scale in which PEVs have the flexibility of charging/discharging at multiple smart stations coordinated by individual aggregators. In such a realistic setting, we first formulate the objective of maximizing the overall profit of both, demand and supply entities, by defining a weighting parameter. Assuming random PEV arrivals, we then propose an online decentralized greedy algorithm for the formulated mixed integer non-linear programming (MINLP) problem, which incorporates efficient heuristics to practically guide each incoming vehicle to the most appropriate charging station (CS). The better performance of the presented algorithm in comparison with an alternative allocation strategy is demonstrated through simulations in terms of the overall achievable profit, computational time per vehicle, and flatness of the final electricity load. Moreover, simulation results obtained for various case studies also confirm the existence of optimal values for V2G penetration percentage and number of deployed stations at which the overall profit can be maximized. | We note that the work most closely related to ours is @cite_0 , in which the authors investigate the problem of profit maximization considering multiple geographically distributed CSs. However, our work differs from @cite_0 in the following ways: we consider multiple categories of PEVs and investigate the maximization of the relative obtainable profit of both supply and demand entities. 
Furthermore, in contrast to @cite_0 , more realistic system parameters are incorporated into the proposed optimization model, such as the PEV battery cost as well as the ancillary costs associated with CSs. Moreover, we obtain more insightful results through extensive simulations under the problem objective, such as the optimal number of CSs in the proposed V2G system. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2344545089"
],
"abstract": [
"Plug-in electric vehicles (PEVs) are emerging as an eco-friendly and cost-effective alternative to conventional vehicles driven by internal combustion engines. However, uncoordinated charging of a large number of PEVs may cause grid failure. Therefore, charge scheduling of PEVs is an important problem. However, the charge scheduling by a single aggregator does not scale well as the PEV population grows. We propose a distributed framework for efficient PEV charging with multiple aggregators in a city where a PEV raises its charging request to a specific aggregator, and each aggregator has partial information about others. The aggregators collaborate among themselves for scheduling PEVs for charging in different charging stations owned by it or others. In this paper, we formulate a bi-objective charge scheduling optimization problem that attempts to maximize the total profit of the aggregators while maximizing the total number of PEVs charged. We first prove that the problem is NP-complete. We then propose distributed offline and online algorithms to solve the problem, and present simulation results for some realistic traffic scenarios."
]
} |
1607.06688 | 2295035820 | Various Cloud layers have to work in concert in order to manage and deploy complex multi-cloud applications, executing sophisticated workflows for Cloud resource deployment, activation, adjustment, interaction, and monitoring. While there are ample solutions for managing individual Cloud aspects (e.g. network controllers, deployment tools, and application security software), there are no well-integrated suites for managing an entire multi cloud environment with multiple providers and deployment models. This paper presents the CYCLONE architecture that integrates a number of existing solutions to create an open, unified, holistic Cloud management platform for multi-cloud applications, tailored to the needs of research organizations and SMEs. It discusses major challenges in providing a network and security infrastructure for the Intercloud and concludes with the demonstration how the architecture is implemented in a real life bioinformatics use case. | The most prominent IaaS ecosystem is Amazon EC2, a proprietary public Cloud platform. Two strong open source contenders are OpenStack and OpenNebula @cite_14 . However, they encompass a multitude of components whose set-up requires Cloud operators to follow extensive installation guides. | {
"cite_N": [
"@cite_14"
],
"mid": [
"1978123369"
],
"abstract": [
"In this installment of Trend Wars, we discuss cloud computing and OpenNebula with Ignacio M. Llorente and Ruben S. Montero, who are the principal investigator and the chief architect, respectively, of the open source OpenNebula project."
]
} |
1607.06153 | 2478432301 | In this paper, we present the first experiments using neural network models for the task of error detection in learner writing. We perform a systematic comparison of alternative compositional architectures and propose a framework for error detection based on bidirectional LSTMs. Experiments on the CoNLL-14 shared task dataset show the model is able to outperform other participants on detecting errors in learner writing. Finally, the model is integrated with a publicly deployed self-assessment system, leading to performance comparable to human annotators. | The field of automatically detecting errors in learner text has a long and rich history. Most work has focussed on tackling specific types of errors, such as usage of incorrect prepositions @cite_23 @cite_26 , articles @cite_17 @cite_13 , verb forms @cite_32 , and adjective-noun pairs @cite_12 . | {
"cite_N": [
"@cite_26",
"@cite_32",
"@cite_23",
"@cite_13",
"@cite_12",
"@cite_17"
],
"mid": [
"2109494378",
"2147302173",
"",
"1996161790",
"2250510918",
"279308895"
],
"abstract": [
"This paper presents ongoing work on the detection of preposition errors of non-native speakers of English. Since prepositions account for a substantial proportion of all grammatical errors by ESL (English as a Second Language) learners, developing an NLP application that can reliably detect these types of errors will provide an invaluable learning resource to ESL students. To address this problem, we use a maximum entropy classifier combined with rule-based filters to detect preposition errors in a corpus of student essays. Although our work is preliminary, we achieve a precision of 0.8 with a recall of 0.3.",
"This paper proposes a method to correct English verb form errors made by non-native speakers. A basic approach is template matching on parse trees. The proposed method improves on this approach in two ways. To improve recall, irregularities in parse trees caused by verb form errors are taken into account; to improve precision, n-gram counts are utilized to filter proposed corrections. Evaluation on non-native corpora, representing two genres and mother tongues, shows promising results.",
"",
"One of the most difficult challenges faced by non-native speakers of English is mastering the system of English articles. We trained a maximum entropy classifier to select among a/an, the, or zero article for noun phrases (NPs), based on a set of features extracted from the local context of each. When the classifier was trained on 6 million NPs, its performance on published text was about 83% correct. We then used the classifier to detect article errors in the TOEFL essays of native speakers of Chinese, Japanese, and Russian. These writers made such errors in about one out of every eight NPs, or almost once in every three sentences. The classifier's agreement with human annotators was 85% (kappa = 0.48) when it selected among a/an, the, or zero article. Agreement was 89% (kappa = 0.56) when it made a binary (yes/no) decision about whether the NP should have an article. Even with these levels of overall agreement, precision and recall in error detection were only 0.52 and 0.80, respectively. However, when the classifier was allowed to skip cases where its confidence was low, precision rose to 0.90, with 0.40 recall. Additional improvements in performance may require features that reflect general knowledge to handle phenomena such as indirect prior reference. In August 2005, the classifier was deployed as a component of Educational Testing Service's Criterion @math Online Writing Evaluation Service.",
"We describe a novel approach to error detection in adjective‐noun combinations. We present and release a new dataset of annotated errors where the examples are extracted from learner texts and annotated with error types. We show how compositional distributional semantic approaches can be applied to discriminate between correct and incorrect word combinations from learner data. Finally, we show how the output of the compositional distributional semantic models can be used as features in a classifier yielding good precision and accuracy.",
"One of the most difficult challenges faced by non-native speakers of English is mastering the system of English articles. We trained a maximum entropy classifier to select among a/an, the, or zero article for noun phrases, based on a set of features extracted from the local context of each. When the classifier was trained on 6 million noun phrases, its performance was correct about 88% of the time. We also used the classifier to detect article errors in the TOEFL essays of native speakers of Chinese, Japanese, and Russian. Agreement with human annotators was about 88% (kappa = 0.36). Many of the disagreements were due to the classifier's lack of discourse information. Performance rose to 94% agreement (kappa = 0.47) when the system accepted noun phrases as correct in cases where its own confidence was low."
]
} |
1607.06153 | 2478432301 | In this paper, we present the first experiments using neural network models for the task of error detection in learner writing. We perform a systematic comparison of alternative compositional architectures and propose a framework for error detection based on bidirectional LSTMs. Experiments on the CoNLL-14 shared task dataset show the model is able to outperform other participants on detecting errors in learner writing. Finally, the model is integrated with a publicly deployed self-assessment system, leading to performance comparable to human annotators. | However, there has been limited work on more general error detection systems that could handle all types of errors in learner text. proposed a method based on mutual information and the chi-square statistic to detect sequences of part-of-speech tags and function words that are likely to be ungrammatical in English. used Maximum Entropy Markov Models with a range of features, such as POS tags, string features, and outputs from a constituency parser. The pilot Helping Our Own shared task @cite_24 also evaluated grammatical error detection of a number of different error types, though most systems were error-type specific and the best approach was heavily skewed towards article and preposition errors @cite_6 . We extend this line of research, working towards general error detection systems, and investigate the use of neural compositional models on this task. | {
"cite_N": [
"@cite_24",
"@cite_6"
],
"mid": [
"1640336798",
"41983068"
],
"abstract": [
"The aim of the Helping Our Own (HOO) Shared Task is to promote the development of automated tools and techniques that can assist authors in the writing task, with a specific focus on writing within the natural language processing community. This paper reports on the results of a pilot run of the shared task, in which six teams participated. We describe the nature of the task and the data used, report on the results achieved, and discuss some of the things we learned that will guide future versions of the task.",
"In this paper, we describe the University of Illinois system that participated in Helping Our Own (HOO), a shared task in text correction. We target several common errors, such as articles, prepositions, word choice, and punctuation errors, and we describe the approaches taken to address each error type. Our system is based on a combination of classifiers, combined with adaptation techniques for article and preposition detection. We ranked first in all three evaluation metrics (Detection, Recognition and Correction) among six participating teams. We also present type-based scores on preposition and article error correction and demonstrate that our approach achieves best performance in each task."
]
} |
1607.06153 | 2478432301 | In this paper, we present the first experiments using neural network models for the task of error detection in learner writing. We perform a systematic comparison of alternative compositional architectures and propose a framework for error detection based on bidirectional LSTMs. Experiments on the CoNLL-14 shared task dataset show the model is able to outperform other participants on detecting errors in learner writing. Finally, the model is integrated with a publicly deployed self-assessment system, leading to performance comparable to human annotators. | The related area of grammatical error correction has also gained considerable momentum in the past years, with four recent shared tasks highlighting several emerging directions @cite_24 @cite_19 @cite_21 @cite_31 . The current state-of-the-art approaches can broadly be separated into two categories: | {
"cite_N": [
"@cite_24",
"@cite_19",
"@cite_31",
"@cite_21"
],
"mid": [
"1640336798",
"2551336373",
"2917881729",
""
],
"abstract": [
"The aim of the Helping Our Own (HOO) Shared Task is to promote the development of automated tools and techniques that can assist authors in the writing task, with a specific focus on writing within the natural language processing community. This paper reports on the results of a pilot run of the shared task, in which six teams participated. We describe the nature of the task and the data used, report on the results achieved, and discuss some of the things we learned that will guide future versions of the task.",
"Incorrect usage of prepositions and determiners constitute the most common types of errors made by non-native speakers of English. It is not surprising, then, that there has been a significant amount of work directed towards the automated detection and correction of such errors. However, to date, the use of different data sets and different task definitions has made it difficult to compare work on the topic. This paper reports on the HOO 2012 shared task on error detection and correction in the use of prepositions and determiners, where systems developed by 14 teams from around the world were evaluated on the same previously unseen errorful text.",
"The CoNLL-2013 shared task was devoted to grammatical error correction. In this paper, we give the task definition, present the data sets, and describe the evaluation metric and scorer used in the shared task. We also give an overview of the various approaches adopted by the participating teams, and present the evaluation results.",
""
]
} |
1607.06132 | 2496827972 | Stochastic dominance is a technique for evaluating the performance of online algorithms that provides an intuitive, yet powerful stochastic order between the compared algorithms. Accordingly this holds for bijective analysis, which can be interpreted as stochastic dominance assuming the uniform distribution over requests. These techniques have been applied to some online problems, and have provided a clear separation between algorithms whose performance varies significantly in practice. However, there are situations in which they are not readily applicable due to the fact that they stipulate a stringent relation between the compared algorithms. In this paper, we propose remedies for these shortcomings. First, we establish sufficient conditions that allow us to prove the bijective optimality of a certain class of algorithms for a wide range of problems; we demonstrate this approach in the context of some well-studied online problems. Second, to account for situations in which two algorithms are incomparable or there is no clear optimum, we introduce the bijective ratio as a natural extension of (exact) bijective analysis. Our definition readily generalizes to stochastic dominance. This renders the concept of bijective analysis (and that of stochastic dominance) applicable to all online problems, and allows for the incorporation of other useful techniques such as amortized analysis. We demonstrate the applicability of the bijective ratio to one of the fundamental online problems, namely the continuous @math -server problem on metrics such as the line, the circle, and the star. Among other results, we show that the greedy algorithm attains bijective ratios of @math consistently across these metrics. These results confirm extensive previous studies that gave evidence of the efficiency of this algorithm on said metrics in practice, which, however, is not reflected in competitive analysis. | @cite_48 provided a systematic study of several measures for a simple version of the @math -server problem, namely, the two server problem on three colinear points. In particular, they showed that @math is bijectively optimal. Concerning the Max Max ratio, @cite_20 showed that the algorithm is asymptotically optimal up to a factor of 2 among all online algorithms and up to a factor of @math from the optimal offline algorithm. | {
"cite_N": [
"@cite_48",
"@cite_20"
],
"mid": [
"2017441966",
"2075354456"
],
"abstract": [
"This paper provides a systematic study of several proposed measures for online algorithms in the context of a specific problem, namely, the two server problem on three colinear points. Even though the problem is simple, it encapsulates a core challenge in online algorithms which is to balance greediness and adaptability. We examine Competitive Analysis, the Max Max Ratio, the Random Order Ratio, Bijective Analysis and Relative Worst Order Analysis, and determine how these measures compare the Greedy Algorithm, Double Coverage, and Lazy Double Coverage, commonly studied algorithms in the context of server problems. We find that by the Max Max Ratio and Bijective Analysis, Greedy is the best of the three algorithms. Under the other measures, Double Coverage and Lazy Double Coverage are better, though Relative Worst Order Analysis indicates that Greedy is sometimes better. Only Bijective Analysis and Relative Worst Order Analysis indicate that Lazy Double Coverage is better than Double Coverage. Our results also provide the first proof of optimality of an algorithm under Relative Worst Order Analysis.",
"An accepted measure for the performance of an on-line algorithm is the “competitive ratio” introduced by Sleator and Tarjan. This measure is well motivated and has led to the development of a mathematical theory for on-line algorithms."
]
} |
1607.06132 | 2496827972 | Stochastic dominance is a technique for evaluating the performance of online algorithms that provides an intuitive, yet powerful stochastic order between the compared algorithms. Accordingly this holds for bijective analysis, which can be interpreted as stochastic dominance assuming the uniform distribution over requests. These techniques have been applied to some online problems, and have provided a clear separation between algorithms whose performance varies significantly in practice. However, there are situations in which they are not readily applicable due to the fact that they stipulate a stringent relation between the compared algorithms. In this paper, we propose remedies for these shortcomings. First, we establish sufficient conditions that allow us to prove the bijective optimality of a certain class of algorithms for a wide range of problems; we demonstrate this approach in the context of some well-studied online problems. Second, to account for situations in which two algorithms are incomparable or there is no clear optimum, we introduce the bijective ratio as a natural extension of (exact) bijective analysis. Our definition readily generalizes to stochastic dominance. This renders the concept of bijective analysis (and that of stochastic dominance) applicable to all online problems, and allows for the incorporation of other useful techniques such as amortized analysis. We demonstrate the applicability of the bijective ratio to one of the fundamental online problems, namely the continuous @math -server problem on metrics such as the line, the circle, and the star. Among other results, we show that the greedy algorithm attains bijective ratios of @math consistently across these metrics. These results confirm extensive previous studies that gave evidence of the efficiency of this algorithm on said metrics in practice, which, however, is not reflected in competitive analysis. | We denote by @math a sequence of requests, and by @math the set of all request sequences of size @math . Following @cite_4 , we denote by @math the subsequence @math . We also sometimes use @math to refer to the @math -th request of @math , namely @math . For the @math -server problem, we denote the distance between two points @math by @math . Unless otherwise noted, we assume that both the line and the circle have unit lengths. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2045732032"
],
"abstract": [
"It has long been known that for the paging problem in its standard form, competitive analysis cannot adequately distinguish algorithms based on their performance: there exists a vast class of algorithms that achieve the same competitive ratio, ranging from extremely naive and inefficient strategies (such as Flush-When-Full), to strategies of excellent performance in practice (such as Least-Recently-Used and some of its variants). A similar situation arises in the list update problem: in particular, under the cost formulation studied by Martinez and Roura [2000] and Munro [2000] every list update algorithm has, asymptotically, the same competitive ratio. Several refinements of competitive analysis, as well as alternative performance measures have been introduced in the literature, with varying degrees of success in narrowing this disconnect between theoretical analysis and empirical evaluation. In this article, we study these two fundamental online problems under the framework of bijective analysis [ 2007, 2008]. This is an intuitive technique that is based on pairwise comparison of the costs incurred by two algorithms on sets of request sequences of the same size. Coupled with a well-established model of locality of reference due to [2005], we show that Least-Recently-Used and Move-to-Front are the unique optimal algorithms for paging and list update, respectively. Prior to this work, only measures based on average-cost analysis have separated LRU and MTF from all other algorithms. Given that bijective analysis is a fairly stringent measure (and also subsumes average-cost analysis), we prove that in a strong sense LRU and MTF stand out as the best (deterministic) algorithms."
]
} |
1607.06062 | 2317063912 | We present a scalable method for detecting objects and estimating their 3D poses in RGB-D data. To this end, we rely on an efficient representation of object views and employ hashing techniques to match these views against the input frame in a scalable way. While a similar approach already exists for 2D detection, we show how to extend it to estimate the 3D pose of the detected objects. In particular, we explore different hashing strategies and identify the one which is more suitable to our problem. We show empirically that the complexity of our method is sublinear with the number of objects and we enable detection and pose estimation of many 3D objects with high accuracy while outperforming the state-of-the-art in terms of runtime. | 3D object detection has a long history. Early works were based on edges @cite_3 @cite_19 , then keypoint-based methods were shown to work reliably when distinctive features are available @cite_36 @cite_22 @cite_10 @cite_17 and robust schemes for correspondence filtering and verification are used @cite_23 @cite_20 @cite_2 . Furthermore, they are also scalable since they can be reduced to searching nearest neighbors efficiently in their feature spaces @cite_35 @cite_14 . However, if such features are missing, which is actually the case for many daily objects, this approach becomes unreliable. | {
"cite_N": [
"@cite_35",
"@cite_14",
"@cite_22",
"@cite_36",
"@cite_3",
"@cite_19",
"@cite_23",
"@cite_2",
"@cite_10",
"@cite_20",
"@cite_17"
],
"mid": [
"2290980665",
"2124509324",
"2132079375",
"2128017662",
"2096600681",
"",
"2126959282",
"",
"2160643963",
"2170764835",
"1980931830"
],
"abstract": [
"",
"This paper introduces a product quantization-based approach for approximate nearest neighbor search. The idea is to decompose the space into a Cartesian product of low-dimensional subspaces and to quantize each subspace separately. A vector is represented by a short code composed of its subspace quantization indices. The euclidean distance between two vectors can be efficiently estimated from their codes. An asymmetric version increases precision, as it computes the approximate distance between a vector and a code. Experimental results show that our approach searches for nearest neighbors efficiently, in particular in combination with an inverted file system. Results for SIFT and GIST image descriptors show excellent search accuracy, outperforming three state-of-the-art approaches. The scalability of our approach is validated on a data set of two billion vectors.",
"In this paper we present two techniques for natural feature tracking in real-time on mobile phones. We achieve interactive frame rates of up to 20 Hz for natural feature tracking from textured planar targets on current-generation phones. We use an approach based on heavily modified state-of-the-art feature descriptors, namely SIFT and Ferns. While SIFT is known to be a strong, but computationally expensive feature descriptor, Ferns classification is fast, but requires large amounts of memory. This renders both original designs unsuitable for mobile phones. We give detailed descriptions on how we modified both approaches to make them suitable for mobile phones. We present evaluations on robustness and performance on various devices and finally discuss their appropriateness for augmented reality applications.",
"A recognition scheme that scales efficiently to a large number of objects is presented. The efficiency and quality is exhibited in a live demonstration that recognizes CD-covers from a database of 40000 images of popular music CDs. The scheme builds upon popular techniques of indexing descriptors extracted from local regions, and is robust to background clutter and occlusion. The local region descriptors are hierarchically quantized in a vocabulary tree. The vocabulary tree allows a larger and more discriminatory vocabulary to be used efficiently, which we show experimentally leads to a dramatic improvement in retrieval quality. The most significant property of the scheme is that the tree directly defines the quantization. The quantization and the indexing are therefore fully integrated, essentially being one and the same. The recognition quality is evaluated through retrieval on a database with ground truth, showing the power of the vocabulary tree approach, going as high as 1 million images.",
"Model-based recognition and motion tracking depend upon the ability to solve for projection and model parameters that will best fit a 3-D model to matching 2-D image features. The author extends current methods of parameter solving to handle objects with arbitrary curved surfaces and with any number of internal parameters representing articulation, variable dimensions, or surface deformations. Numerical stabilization methods are developed that take account of inherent inaccuracies in the image measurements and allow useful solutions to be determined even when there are fewer matches than unknown parameters. The Levenberg-Marquardt method is used to always ensure convergence of the solution. These techniques allow model-based vision to be used for a much wider class of problems than was possible with previous methods. Their application is demonstrated for tracking the motion of curved, parameterized objects. >",
"",
"3D model-based object recognition has been a noticeable research trend in recent years. Common methods find 2D-to-3D correspondences and make recognition decisions by pose estimation, whose efficiency usually suffers from noisy correspondences caused by the increasing number of target objects. To overcome this scalability bottleneck, we propose an efficient 2D-to-3D correspondence filtering approach, which combines a light-weight neighborhood-based step with a finer-grained pairwise step to remove spurious correspondences based on 2D 3D geometric cues. On a dataset of 300 3D objects, our solution achieves 10 times speed improvement over the baseline, with a comparable recognition accuracy. A parallel implementation on a quad-core CPU can run at 3fps for 1280×720 images.",
"",
"This paper deals with local 3D descriptors for surface matching. First, we categorize existing methods into two classes: Signatures and Histograms. Then, by discussion and experiments alike, we point out the key issues of uniqueness and repeatability of the local reference frame. Based on these observations, we formulate a novel comprehensive proposal for surface representation, which encompasses a new unique and repeatable local reference frame as well as a new 3D descriptor. The latter lays at the intersection between Signatures and Histograms, so as to possibly achieve a better balance between descriptiveness and robustness. Experiments on publicly available datasets as well as on range scans obtained with Spacetime Stereo provide a thorough validation of our proposal.",
"This paper proposes an effective algorithm for recognizing objects and accurately estimating their 6DOF pose in scenes acquired by a RGB-D sensor. The proposed method is based on a combination of different recognition pipelines, each exploiting the data in a diverse manner and generating object hypotheses that are ultimately fused together in an Hypothesis Verification stage that globally enforces geometrical consistency between model hypotheses and the scene. Such a scheme boosts the overall recognition performance as it enhances the strength of the different recognition pipelines while diminishing the impact of their specific weaknesses. The proposed method outperforms the state-of-the-art on two challenging benchmark datasets for object recognition comprising 35 object models and, respectively, 176 and 353 scenes.",
"We propose a novel model-based method for estimating and tracking the six-degrees-of-freedom (6DOF) pose of rigid objects of arbitrary shapes in real-time. By combining dense motion and stereo cues with sparse key point correspondences, and by feeding back information from the model to the cue extraction level, the method is both highly accurate and robust to noise and occlusions. A tight integration of the graphical and computational capability of Graphics Processing Units (GPUs) results in pose updates at frame rates exceeding 60 Hz. Since a benchmark dataset that enables the evaluation of stereo-vision-based pose estimators in complex scenarios is currently missing in the literature, we have introduced a novel synthetic benchmark dataset with varying objects, background motion, noise and occlusions. Using this dataset and a novel evaluation methodology, we show that the proposed method greatly outperforms state-of-the-art methods. Finally, we demonstrate excellent performance on challenging real-world sequences involving object manipulation."
]
} |
1607.06062 | 2317063912 | We present a scalable method for detecting objects and estimating their 3D poses in RGB-D data. To this end, we rely on an efficient representation of object views and employ hashing techniques to match these views against the input frame in a scalable way. While a similar approach already exists for 2D detection, we show how to extend it to estimate the 3D pose of the detected objects. In particular, we explore different hashing strategies and identify the one which is more suitable to our problem. We show empirically that the complexity of our method is sublinear with the number of objects and we enable detection and pose estimation of many 3D objects with high accuracy while outperforming the state-of-the-art in terms of runtime. | Template-based approaches then became popular. LineMOD @cite_6 achieved robust 3D object detection and pose estimation by efficiently matching templated views with quantized object contours and normal orientations. In @cite_13 the authors further optimize the matching via cascades and fine-tuned templates to achieve a notable run-time increase by a factor of 10. Nonetheless, these works still suffer from their linear time complexity. @cite_30 @cite_33 @cite_21 show how to build discriminative models based on these representations using SVM or boosting applied to training data. While @cite_30 @cite_21 do not consider the pose estimation problem, @cite_33 focuses on this problem only with a discriminatively trained mixture of HOG templates. Exemplars were also recently used for 3D object detection and pose estimation in @cite_24 , but the proposed approach still does not scale. | {
"cite_N": [
"@cite_30",
"@cite_33",
"@cite_21",
"@cite_6",
"@cite_24",
"@cite_13"
],
"mid": [
"1989684337",
"",
"1989296367",
"",
"2010625607",
"2050966058"
],
"abstract": [
"This paper proposes a conceptually simple but surprisingly powerful method which combines the effectiveness of a discriminative object detector with the explicit correspondence offered by a nearest-neighbor approach. The method is based on training a separate linear SVM classifier for every exemplar in the training set. Each of these Exemplar-SVMs is thus defined by a single positive instance and millions of negatives. While each detector is quite specific to its exemplar, we empirically observe that an ensemble of such Exemplar-SVMs offers surprisingly good generalization. Our performance on the PASCAL VOC detection task is on par with the much more complex latent part-based model of , at only a modest computational cost increase. But the central benefit of our approach is that it creates an explicit association between each detection and a single training exemplar. Because most detections show good alignment to their associated exemplar, it is possible to transfer any available exemplar meta-data (segmentation, geometric structure, 3D model, etc.) directly onto the detections, which can then be used as part of overall scene understanding.",
"",
"In this paper we present a novel template-based approach for fast object detection. In particular we investigate the use of Dominant Orientation Templates (DOT), a binary template representation introduced by , as a means for fast detection of objects even if textureless. During training, we learn a binary mask for each template that allows to remove background clutter while at the same time including relevant context information. These mask templates then serve as weak classifiers in an Adaboost framework. We demonstrate our method on detection of shape-oriented object classes as well as multiview vehicle detection. We obtain a fast yet highly accurate method for category level detection that compares favorably to other more complicated yet much slower approaches. We further show how to efficiently transfer meta-data using the top most similar activated templates. Finally, we propose an optimization scheme for detection of specific objects using our proposed masks trained by the SVM, resulting in an increment of up to 17% in performance of the DOT method, without sacrificing testing speed, and it is able to run the training in real time.",
"",
"This paper poses object category detection in images as a type of 2D-to-3D alignment problem, utilizing the large quantities of 3D CAD models that have been made publicly available online. Using the \"chair\" class as a running example, we propose an exemplar-based 3D category representation, which can explicitly model chairs of different styles as well as the large variation in viewpoint. We develop an approach to establish part-based correspondences between 3D CAD models and real photographs. This is achieved by (i) representing each 3D model using a set of view-dependent mid-level visual elements learned from synthesized views in a discriminative fashion, (ii) carefully calibrating the individual element detectors on a common dataset of negative images, and (iii) matching visual elements to the test image allowing for small mutual deformations but preserving the viewpoint and style constraints. We demonstrate the ability of our system to align 3D models with 2D objects in the challenging PASCAL VOC images, which depict a wide variety of chairs in complex scenes.",
"In this paper we propose a new method for detecting multiple specific 3D objects in real time. We start from the template-based approach based on the LINE2D LINEMOD representation introduced recently by , yet extend it in two ways. First, we propose to learn the templates in a discriminative fashion. We show that this can be done online during the collection of the example images, in just a few milliseconds, and has a big impact on the accuracy of the detector. Second, we propose a scheme based on cascades that speeds up detection. Since detection of an object is fast, new objects can be added with very low cost, making our approach scale well. In our experiments, we easily handle 10-30 3D objects at frame rates above 10fps using a single CPU core. We outperform the state-of-the-art both in terms of speed as well as in terms of accuracy, as validated on 3 different datasets. This holds both when using monocular color images (with LINE2D) and when using RGBD images (with LINEMOD). Moreover, we propose a challenging new dataset made of 12 objects, for future competing methods on monocular color images."
]
} |
1607.06062 | 2317063912 | We present a scalable method for detecting objects and estimating their 3D poses in RGB-D data. To this end, we rely on an efficient representation of object views and employ hashing techniques to match these views against the input frame in a scalable way. While a similar approach already exists for 2D detection, we show how to extend it to estimate the 3D pose of the detected objects. In particular, we explore different hashing strategies and identify the one which is more suitable to our problem. We show empirically that the complexity of our method is sublinear with the number of objects and we enable detection and pose estimation of many 3D objects with high accuracy while outperforming the state-of-the-art in terms of runtime. | Over the last few years, hashing-based techniques have become quite popular for large-scale image classification since they allow for immediate indexing into huge datasets. Apart from many works that focused on improving hashing of real-valued features into more compact binary codes @cite_7 @cite_26 , there has been ongoing research on applying hashing in a sliding window scenario for 2D object detection: @cite_11 applies hashing on HOG descriptors computed from Deformable Part Models to scale to 100,000 2D object classes. @cite_32 presents a scalable object category detector by representing HOG sparsely with a set of patches which can be retrieved immediately. | {
"cite_N": [
"@cite_26",
"@cite_32",
"@cite_7",
"@cite_11"
],
"mid": [
"1974647172",
"2056968687",
"2153273131",
""
],
"abstract": [
"This paper addresses the problem of learning similarity-preserving binary codes for efficient similarity search in large-scale image collections. We formulate this problem in terms of finding a rotation of zero-centered data so as to minimize the quantization error of mapping this data to the vertices of a zero-centered binary hypercube, and propose a simple and efficient alternating minimization algorithm to accomplish this task. This algorithm, dubbed iterative quantization (ITQ), has connections to multiclass spectral clustering and to the orthogonal Procrustes problem, and it can be used both with unsupervised data embeddings such as PCA and supervised embeddings such as canonical correlation analysis (CCA). The resulting binary codes significantly outperform several other state-of-the-art methods. We also show that further performance improvements can result from transforming the data with a nonlinear kernel mapping prior to PCA or CCA. Finally, we demonstrate an application of ITQ to learning binary attributes or \"classemes\" on the ImageNet data set.",
"The objective of this work is object category detection in large scale image datasets in the manner of Video Google--an object category is specified by a HOG classifier template, and retrieval is immediate at run time. We make the following three contributions: (i) a new image representation based on mid-level discriminative patches, that is designed to be suited to immediate object category detection and inverted file indexing, (ii) a sparse representation of a HOG classifier using a set of mid-level discriminative classifier patches, and (iii) a fast method for spatial reranking images on their detections. We evaluate the detection method on the standard PASCAL VOC 2007 dataset, together with a 100K image subset of ImageNet, and demonstrate near state of the art detection performance at low ranks whilst maintaining immediate retrieval speeds. Applications are also demonstrated using an exemplar-SVM for pose matched retrieval.",
"Supervised hashing aims to map the original features to compact binary codes that are able to preserve label based similarity in the Hamming space. Non-linear hash functions have demonstrated their advantage over linear ones due to their powerful generalization capability. In the literature, kernel functions are typically used to achieve non-linearity in hashing, which achieve encouraging retrieval performance at the price of slow evaluation and training time. Here we propose to use boosted decision trees for achieving non-linearity in hashing, which are fast to train and evaluate, hence more suitable for hashing with high dimensional data. In our approach, we first propose sub-modular formulations for the hashing binary code inference problem and an efficient GraphCut based block search method for solving large-scale inference. Then we learn hash functions by training boosted decision trees to fit the binary codes. Experiments demonstrate that our proposed method significantly outperforms most state-of-the-art methods in retrieval precision and training time. Especially for high-dimensional data, our method is orders of magnitude faster than many methods in terms of training time.",
""
]
} |
1607.05968 | 2490949124 | We present a system for generating and understanding of dynamic and static spatial relations in robotic interaction setups. Robots describe an environment of moving blocks using English phrases that include spatial relations such as "across" and "in front of". We evaluate the system in robot-robot interactions and show that the system can robustly deal with visual perception errors, language omissions and ungrammatical utterances. | The earliest systems for spatial language @cite_2 @cite_18 @cite_25 showed how artificial agents can understand static spatial relations such as "front" and "back". This line of work has continued, and we now have various ways of modeling static spatial relations: proximity fields for proximal relations @cite_16 , prototypes for projective and absolute spatial relations @cite_3 . Models of static spatial relations are interesting, but they only cover relations that do not encode dynamic qualities. | {
"cite_N": [
"@cite_18",
"@cite_3",
"@cite_2",
"@cite_16",
"@cite_25"
],
"mid": [
"",
"56782673",
"1918758839",
"2157104268",
"2107537133"
],
"abstract": [
"",
"Grounding language in sensorimotor spaces is an important and difficult task. In order, for robots to be able to interpret and produce utterances about the real world, they have to link symbolic information to continuous perceptual spaces. This requires dealing with inherent vagueness, noise and differences in perspective in the perception of the real world. This paper presents two case studies for spatial language and quantification that show how cognitive operations – the building blocks of grounded procedural semantics – can be efficiently grounded in sensorimotor spaces.",
"In this article, principles involving the intrinsic, deictic, and extrinsic use of spatial prepositions are examined from linguistic, psychological, and AI approaches. First, I define some important terms. Second, those prepositions which permit intrinsic, deictic, and extrinsic use are specified. Third, I examine how the frame of reference is determined for all three cases. Fourth, I look at ambiguities in the use of prepositions and how they can be resolved. Finally, I introduce the natural language dialog system CITYTOUR, which can cope with the intrinsic, deictic, and extrinsic use of spatial prepositions, and compare it with the approaches dealt with in the previous sections as well as to some other AI systems.",
"The paper presents a new model for context dependent interpretation of linguistic expressions about spatial proximity between objects in a natural scene. The paper discusses novel psycholinguistic experimental data that tests and verifies the model. The model has been implemented, and enables a conversational robot to identify objects in a scene through topological spatial relations (e.g. \"X near Y\"). The model can help motivate the choice between topological and projective prepositions.",
"In conversation, people often use spatial relationships to describe their environment, e.g., \"There is a desk in front of me and a doorway behind it,\" and to issue directives, e.g., \"go around the desk and through the doorway.\" In our research, we have been investigating the use of spatial relationships to establish a natural communication mechanism between people and robots, in particular, for novice users. In this paper, the work on robot spatial relationships is combined with a multimodal robot interface. We show how linguistic spatial descriptions and other spatial information can be extracted from an evidence grid map and how this information can be used in a natural, human-robot dialog. Examples using spatial language are included for both robot-to-human feedback and also human-to-robot commands. We also discuss some linguistic consequences in the semantic representations of spatial and locative information based on this work."
]
} |
1607.05968 | 2490949124 | We present a system for generating and understanding of dynamic and static spatial relations in robotic interaction setups. Robots describe an environment of moving blocks using English phrases that include spatial relations such as "across" and "in front of". We evaluate the system in robot-robot interactions and show that the system can robustly deal with visual perception errors, language omissions and ungrammatical utterances. | Recent models of dynamic spatial relations use semantic fields @cite_17 and probabilistic graphical models @cite_1 for dealing with temporal aspects of spatial relations. In some cases, spatial relations are modeled by hand; in other cases, their representations are learned from large task-dependent data sets. In general, there are fewer approaches using formal methods for spatial language @cite_24 . | {
"cite_N": [
"@cite_24",
"@cite_1",
"@cite_17"
],
"mid": [
"2402698561",
"1949907236",
"1995304820"
],
"abstract": [
"This paper presents a computational model of the processing of dynamic spatial relations occurring in an embodied robotic interaction setup. A complete system is introduced that allows autonomous robots to produce and interpret dynamic spatial phrases (in English) given an environment of moving objects. The model unites two separate research strands: computational cognitive semantics and on commonsense spatial representation and reasoning. The model for the first time demonstrates an integration of these different strands.",
"In order for robots to engage in dialog with human teammates, they must have the ability to map between words in the language and aspects of the external world. A solution to this symbol grounding problem (Harnad, 1990) would enable a robot to interpret commands such as “Drive over to receiving and pick up the tire pallet.” In this article we describe several of our results that use probabilistic inference to address the symbol grounding problem. Our specific approach is to develop models that factor according to the linguistic structure of a command. We first describe an early result, a generative model that factors according to the sequential structure of language, and then discuss our new framework, generalized grounding graphs (G3). The G3 framework dynamically instantiates a probabilistic graphical model for a natural language input, enabling a mapping between words in language and concrete objects, places, paths and events in the external world. We report on corpus-based experiments where the robot is able to learn and use word meanings in three real-world tasks: indoor navigation, spatial language video retrieval, and mobile manipulation.",
"We present a methodology for enabling service robots to follow natural language commands from non-expert users, with and without user-specified constraints, with a particular focus on spatial language understanding. As part of our approach, we propose a novel extension to the semantic field model of spatial prepositions that enables the representation of dynamic spatial relations involving paths. The design, system modules, and implementation details of our robot software architecture are presented and the relevance of the proposed methodology to interactive instruction and task modification through the addition of constraints is discussed. The paper concludes with an evaluation of our robot software architecture implemented on a simulated mobile robot operating in both a 2D home environment and in real world environment maps to demonstrate the generalizability and usefulness of our approach in real world applications."
]
} |
1607.05968 | 2490949124 | We present a system for generating and understanding of dynamic and static spatial relations in robotic interaction setups. Robots describe an environment of moving blocks using English phrases that include spatial relations such as "across" and "in front of". We evaluate the system in robot-robot interactions and show that the system can robustly deal with visual perception errors, language omissions and ungrammatical utterances. | One important aspect of robot natural language processing is robustness @cite_9 . Researchers have proposed large coverage, data-driven approaches @cite_13 , as well as precision grammar-based approaches for dealing with language problems @cite_21 . There are also systems that integrate planning for handling robustness issues @cite_8 . More often than not, systems are evaluated only with respect to natural language errors. In this paper, we investigate how the integration of formal reasoning methods with incremental semantic processing and fluid parsing and production grammars can contribute to robust, grounded language processing. | {
"cite_N": [
"@cite_9",
"@cite_21",
"@cite_13",
"@cite_8"
],
"mid": [
"1747013363",
"2060304006",
"2118781169",
"1477234322"
],
"abstract": [
"Robots are slowly becoming part of everyday life, as they are being marketed for commercial applications (viz. telepresence, cleaning or entertainment). Thus, the ability to interact with non-expert users is becoming a key requirement. Even if user utterances can be efficiently recognized and transcribed by Automatic Speech Recognition systems, several issues arise in translating them into suitable robotic actions. In this paper, we will discuss both approaches providing two existing Natural Language Understanding workflows for Human Robot Interaction. First, we discuss a grammar based approach: it is based on grammars thus recognizing a restricted set of commands. Then, a data driven approach, based on a free-from speech recognizer and a statistical semantic parser, is discussed. The main advantages of both approaches are discussed, also from an engineering perspective, i.e. considering the effort of realizing HRI systems, as well as their reusability and robustness. An empirical evaluation of the proposed approaches is carried out on several datasets, in order to understand performances and identify possible improvements towards the design of NLP components in HRI.",
"Natural human-robot interaction requires different and more robust models of language understanding (NLU) than non-embodied NLU systems. In particular, architectures are required that (1) process language incrementally in order to be able to provide early backchannel feedback to human speakers; (2) use pragmatic contexts throughout the understanding process to infer missing information; and (3) handle the underspecified, fragmentary, or otherwise ungrammatical utterances that are common in spontaneous speech. In this paper, we describe our attempts at developing an integrated natural language understanding architecture for HRI, and demonstrate its novel capabilities using challenging data collected in human-human interaction experiments.",
"The ability to understand natural-language instructions is critical to building intelligent agents that interact with humans. We present a system that learns to transform natural-language navigation instructions into executable formal plans. Given no prior linguistic knowledge, the system learns by simply observing how humans follow navigation instructions. The system is evaluated in three complex virtual indoor environments with numerous objects and landmarks. A previously collected realistic corpus of complex English navigation instructions for these environments is used for training and testing data. By using a learned lexicon to refine inferred plans and a supervised learner to induce a semantic parser, the system is able to automatically learn to correctly interpret a reasonable fraction of the complex instructions in this corpus.",
"In this paper, we propose a flexible system for robust natural language interpretation of spoken commands on a mobile robot in domestic service robotics applications. Existing language processing for instructing a mobile robot is often restricted by using a simple grammar where precisely pre-defined utterances are directly mapped to system calls. These approaches do not regard fallibility of human users and they only allow for binary processing of an utterance; either a command is part of the grammar and hence understood correctly, or it is not part of the grammar and gets rejected. We model the language processing as an interpretation process where the utterance needs to be mapped to the robot’s capabilities. We do so by casting the processing as a (decision-theoretic) planning problem on interpretation actions. This allows for a flexible system that can resolve ambiguities and which is also capable of initiating steps to achieve clarification. We show how we evaluated several versions of the system with multiple utterances of different complexity as well as with incomplete and erroneous requests."
]
} |
1607.06038 | 2953021746 | We present a 3D object detection method that uses regressed descriptors of locally-sampled RGB-D patches for 6D vote casting. For regression, we employ a convolutional auto-encoder that has been trained on a large collection of random local patches. During testing, scene patch descriptors are matched against a database of synthetic model view patches and cast 6D object votes which are subsequently filtered to refined hypotheses. We evaluate on three datasets to show that our method generalizes well to previously unseen input data, delivers robust detection results that compete with and surpass the state-of-the-art while being scalable in the number of objects. | There has recently been intense research activity in the field of 3D object detection, with many methods proposed in the literature traditionally subdivided into feature-based and template-based. As for the first class, earlier approaches relied on features @cite_31 @cite_0 directly detected on the RGB image and then back-projected to 3D @cite_17 @cite_30 . With the introduction of 3D descriptors @cite_4 @cite_10 , approaches replaced image features with features directly computed on the 3D point cloud @cite_28 , and introduced robust schemes for filtering wrong 3D correspondences and for hypothesis verification @cite_20 @cite_18 @cite_1 . They can handle occlusion and are scalable in the number of models, thanks to the use of approximate nearest neighbor schemes for feature matching @cite_27 yielding sub-linear complexity. Nevertheless, they are limited when matching surfaces of poor informative shape and tend to report non real-time run-times. | {
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_4",
"@cite_28",
"@cite_1",
"@cite_0",
"@cite_27",
"@cite_31",
"@cite_10",
"@cite_20",
"@cite_17"
],
"mid": [
"1980931830",
"2170764835",
"2151837863",
"2136020167",
"",
"1677409904",
"2290980665",
"2151103935",
"2160643963",
"",
"2162601563"
],
"abstract": [
"We propose a novel model-based method for estimating and tracking the six-degrees-of-freedom (6DOF) pose of rigid objects of arbitrary shapes in real-time. By combining dense motion and stereo cues with sparse key point correspondences, and by feeding back information from the model to the cue extraction level, the method is both highly accurate and robust to noise and occlusions. A tight integration of the graphical and computational capability of Graphics Processing Units (GPUs) results in pose updates at frame rates exceeding 60 Hz. Since a benchmark dataset that enables the evaluation of stereo-vision-based pose estimators in complex scenarios is currently missing in the literature, we have introduced a novel synthetic benchmark dataset with varying objects, background motion, noise and occlusions. Using this dataset and a novel evaluation methodology, we show that the proposed method greatly outperforms state-of-the-art methods. Finally, we demonstrate excellent performance on challenging real-world sequences involving object manipulation.",
"This paper proposes an effective algorithm for recognizing objects and accurately estimating their 6DOF pose in scenes acquired by a RGB-D sensor. The proposed method is based on a combination of different recognition pipelines, each exploiting the data in a diverse manner and generating object hypotheses that are ultimately fused together in an Hypothesis Verification stage that globally enforces geometrical consistency between model hypotheses and the scene. Such a scheme boosts the overall recognition performance as it enhances the strength of the different recognition pipelines while diminishing the impact of their specific weaknesses. The proposed method outperforms the state-of-the-art on two challenging benchmark datasets for object recognition comprising 35 object models and, respectively, 176 and 353 scenes.",
"In this paper we present a new approach for labeling 3D points with different geometric surface primitives using a novel feature descriptor - the Fast Point Feature Histograms, and discriminative graphical models. To build informative and robust 3D feature point representations, our descriptors encode the underlying surface geometry around a point p using multi-value histograms. This highly dimensional feature space copes well with noisy sensor data and is not dependent on pose or sampling density. By defining classes of 3D geometric surfaces and making use of contextual information using Conditional Random Fields (CRFs), our system is able to successfully segment and label 3D point clouds, based on the type of surfaces the points are lying on. We validate and demonstrate the method's efficiency by comparing it against similar initiatives as well as present results for table setting datasets acquired in indoor environments.",
"3D object recognition from local features is robust to occlusions and clutter. However, local features must be extracted from a small set of feature rich keypoints to avoid computational complexity and ambiguous features. We present an algorithm for the detection of such keypoints on 3D models and partial views of objects. The keypoints are highly repeatable between partial views of an object and its complete 3D model. We also propose a quality measure to rank the keypoints and select the best ones for extracting local features. Keypoints are identified at locations where a unique local 3D coordinate basis can be derived from the underlying surface in order to extract invariant features. We also propose an automatic scale selection technique for extracting multi-scale and scale invariant features to match objects at different unknown scales. Features are projected to a PCA subspace and matched to find correspondences between a database and query object. Each pair of matching features gives a transformation that aligns the query and database object. These transformations are clustered and the biggest cluster is used to identify the query object. Experiments on a public database revealed that the proposed quality measure relates correctly to the repeatability of keypoints and the multi-scale features have a recognition rate of over 95 for up to 80 occluded objects.",
"",
"In this paper, we present a novel scale- and rotation-invariant interest point detector and descriptor, coined SURF (Speeded Up Robust Features). It approximates or even outperforms previously proposed schemes with respect to repeatability, distinctiveness, and robustness, yet can be computed and compared much faster. This is achieved by relying on integral images for image convolutions; by building on the strengths of the leading existing detectors and descriptors (in casu, using a Hessian matrix-based measure for the detector, and a distribution-based descriptor); and by simplifying these methods to the essential. This leads to a combination of novel detection, description, and matching steps. The paper presents experimental results on a standard evaluation set, as well as on imagery obtained in the context of a real-life object recognition application. Both show SURF's strong performance.",
"",
"This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.",
"This paper deals with local 3D descriptors for surface matching. First, we categorize existing methods into two classes: Signatures and Histograms. Then, by discussion and experiments alike, we point out the key issues of uniqueness and repeatability of the local reference frame. Based on these observations, we formulate a novel comprehensive proposal for surface representation, which encompasses a new unique and repeatable local reference frame as well as a new 3D descriptor. The latter lays at the intersection between Signatures and Histograms, so as to possibly achieve a better balance between descriptiveness and robustness. Experiments on publicly available datasets as well as on range scans obtained with Spacetime Stereo provide a thorough validation of our proposal.",
"",
"There have been important recent advances in object recognition through the matching of invariant local image features. However, the existing approaches are based on matching to individual training images. This paper presents a method for combining multiple images of a 3D object into a single model representation. This provides for recognition of 3D objects from any viewpoint, the generalization of models to non-rigid changes, and improved robustness through the combination of features acquired under a range of imaging conditions. The decision of whether to cluster a training image into an existing view representation or to treat it as a new view is based on the geometric accuracy of the match to previous model views. A new probabilistic model is developed to reduce the false positive matches that would otherwise arise due to loosened geometric constraints on matching 3D and non-rigid models. A system has been developed based on these approaches that is able to robustly recognize 3D objects in cluttered natural images in sub-second times."
]
} |
1607.06038 | 2953021746 | We present a 3D object detection method that uses regressed descriptors of locally-sampled RGB-D patches for 6D vote casting. For regression, we employ a convolutional auto-encoder that has been trained on a large collection of random local patches. During testing, scene patch descriptors are matched against a database of synthetic model view patches and cast 6D object votes which are subsequently filtered to refined hypotheses. We evaluate on three datasets to show that our method generalizes well to previously unseen input data, delivers robust detection results that compete with and surpass the state-of-the-art while being scalable in the number of objects. | On the other hand, template-based approaches are often very robust to clutter but scale linearly with the number of models. LineMOD @cite_6 performed robust 3D object detection by matching templates extracted from rendered views of 3D models and embedding quantized image contours and normal orientations. Subsequently, @cite_32 optimized the matching via a cascaded classification scheme, achieving a ten-fold speed-up. Improvements in efficiency are also achieved by the two-stage cascaded detection method in @cite_9 and by the hashing matching approach tailored to LineMOD templates proposed in @cite_23 . Other recent approaches @cite_24 @cite_25 @cite_21 build discriminative models based on such representations using SVM or boosting applied to training data. | {
"cite_N": [
"@cite_9",
"@cite_21",
"@cite_32",
"@cite_6",
"@cite_24",
"@cite_23",
"@cite_25"
],
"mid": [
"2135625544",
"2010625607",
"2050966058",
"1526868886",
"1989684337",
"2963203908",
""
],
"abstract": [
"We propose a fast edge-based approach for detection and approximate pose estimation of multiple textureless objects in a single image. The objects are trained from a set of edge maps, each showing one object in one pose. To each scanning window in the input image, the nearest neighbor is found among these training templates by a two-level cascade. The first cascade level, based on a novel edge-based sparse image descriptor and fast search by index table, prunes the majority of background windows. The second level verifies the surviving detection hypotheses by oriented chamfer matching, improved by selecting discriminative edges and by compensating a bias towards simple objects. The method outperforms the state-of-the-art approach by (2012). The processing is near real-time, ranging from 2 to 4 frames per second for the training set size 104.",
"This paper poses object category detection in images as a type of 2D-to-3D alignment problem, utilizing the large quantities of 3D CAD models that have been made publicly available online. Using the \"chair\" class as a running example, we propose an exemplar-based 3D category representation, which can explicitly model chairs of different styles as well as the large variation in viewpoint. We develop an approach to establish part-based correspondences between 3D CAD models and real photographs. This is achieved by (i) representing each 3D model using a set of view-dependent mid-level visual elements learned from synthesized views in a discriminative fashion, (ii) carefully calibrating the individual element detectors on a common dataset of negative images, and (iii) matching visual elements to the test image allowing for small mutual deformations but preserving the viewpoint and style constraints. We demonstrate the ability of our system to align 3D models with 2D objects in the challenging PASCAL VOC images, which depict a wide variety of chairs in complex scenes.",
"In this paper we propose a new method for detecting multiple specific 3D objects in real time. We start from the template-based approach based on the LINE2D/LINEMOD representation introduced recently by , yet extend it in two ways. First, we propose to learn the templates in a discriminative fashion. We show that this can be done online during the collection of the example images, in just a few milliseconds, and has a big impact on the accuracy of the detector. Second, we propose a scheme based on cascades that speeds up detection. Since detection of an object is fast, new objects can be added with very low cost, making our approach scale well. In our experiments, we easily handle 10-30 3D objects at frame rates above 10fps using a single CPU core. We outperform the state-of-the-art both in terms of speed as well as in terms of accuracy, as validated on 3 different datasets. This holds both when using monocular color images (with LINE2D) and when using RGBD images (with LINEMOD). Moreover, we propose a challenging new dataset made of 12 objects, for future competing methods on monocular color images.",
"We propose a framework for automatic modeling, detection, and tracking of 3D objects with a Kinect. The detection part is mainly based on the recent template-based LINEMOD approach [1] for object detection. We show how to build the templates automatically from 3D models, and how to estimate the 6 degrees-of-freedom pose accurately and in real-time. The pose estimation and the color information allow us to check the detection hypotheses and improves the correct detection rate by 13% with respect to the original LINEMOD. These many improvements make our framework suitable for object manipulation in Robotics applications. Moreover we propose a new dataset made of 15 registered, 1100+ frame video sequences of 15 various objects for the evaluation of future competing methods.",
"This paper proposes a conceptually simple but surprisingly powerful method which combines the effectiveness of a discriminative object detector with the explicit correspondence offered by a nearest-neighbor approach. The method is based on training a separate linear SVM classifier for every exemplar in the training set. Each of these Exemplar-SVMs is thus defined by a single positive instance and millions of negatives. While each detector is quite specific to its exemplar, we empirically observe that an ensemble of such Exemplar-SVMs offers surprisingly good generalization. Our performance on the PASCAL VOC detection task is on par with the much more complex latent part-based model of , at only a modest computational cost increase. But the central benefit of our approach is that it creates an explicit association between each detection and a single training exemplar. Because most detections show good alignment to their associated exemplar, it is possible to transfer any available exemplar meta-data (segmentation, geometric structure, 3D model, etc.) directly onto the detections, which can then be used as part of overall scene understanding.",
"We present a scalable method for detecting objects and estimating their 3D poses in RGB-D data. To this end, we rely on an efficient representation of object views and employ hashing techniques to match these views against the input frame in a scalable way. While a similar approach already exists for 2D detection, we show how to extend it to estimate the 3D pose of the detected objects. In particular, we explore different hashing strategies and identify the one which is more suitable to our problem. We show empirically that the complexity of our method is sublinear with the number of objects and we enable detection and pose estimation of many 3D objects with high accuracy while outperforming the state-of-the-art in terms of runtime.",
""
]
} |
1607.05749 | 2490143098 | The traditional frequent pattern mining algorithms generate an exponentially large number of patterns of which a substantial proportion are not much significant for many data analysis endeavors. Discovery of a small number of personalized interesting patterns from the large output set according to a particular user's interest is an important as well as challenging task. Existing works on pattern summarization do not solve this problem from the personalization viewpoint. In this work, we propose an interactive pattern discovery framework named PRIIME which identifies a set of interesting patterns for a specific user without requiring any prior input on the interestingness measure of patterns from the user. The proposed framework is generic to support discovery of the interesting set, sequence and graph type patterns. We develop a softmax classification based iterative learning algorithm that uses a limited number of interactive feedback from the user to learn her interestingness profile, and use this profile for pattern recommendation. To handle sequence and graph type patterns PRIIME adopts a neural net (NN) based unsupervised feature construction approach. We also develop a strategy that combines exploration and exploitation to select patterns for feedback. We show experimental results on several real-life datasets to validate the performance of the proposed method. We also compare with the existing methods of interactive pattern discovery to show that our method is substantially superior in performance. To portray the applicability of the framework, we present a case study from the real-estate domain. | There are some recent works on interactive knowledge discovery which solve domain-specific problems. Examples include mining geospatial redescriptions @cite_8 and subgroup discovery @cite_27 @cite_4 . 
In the work on mining geospatial redescriptions @cite_8 , the authors propose a system called SIREN, in which a user builds queries based on his interests and the system answers the queries through visualization of redescriptions (different ways of characterizing the same things). In @cite_27 @cite_4 , the authors present a framework which utilizes the user's feedback and devises a search procedure for finding subgroups. | {
"cite_N": [
"@cite_27",
"@cite_4",
"@cite_8"
],
"mid": [
"181982077",
"2093457890",
"2027686971"
],
"abstract": [
"Although subgroup discovery aims to be a practical tool for exploratory data mining, its wider adoption is hampered by redundancy and the re-discovery of common knowledge. This can be remedied by parameter tuning and manual result filtering, but this requires considerable effort from the data analyst. In this paper we argue that it is essential to involve the user in the discovery process to solve these issues. To this end, we propose an interactive algorithm that allows a user to provide feedback during search, so that it is steered towards more interesting subgroups. Specifically, the algorithm exploits user feedback to guide a diverse beam search. The empirical evaluation and a case study demonstrate that uninteresting subgroups can be effectively eliminated from the results, and that the overall effort required to obtain interesting and diverse subgroup sets is reduced. This confirms that within-search interactivity can be useful for data analysis.",
"User data is becoming increasingly available in multiple domains ranging from phone usage traces to data on the social Web. The analysis of user data is appealing to scientists who work on population studies, recommendations, and large-scale data analytics. We argue for the need for an interactive analysis to understand the multiple facets of user data and address different analytics scenarios. Since user data is often sparse and noisy, we propose to produce labeled groups that describe users with common properties and develop IUGA, an interactive framework based on group discovery primitives to explore the user space. At each step of IUGA, an analyst visualizes group members and may take an action on the group (add/remove members) and choose an operation (exploit/explore) to discover more groups and hence more users. Each discovery operation results in k most relevant and diverse groups. We formulate group exploitation and exploration as optimization problems and devise greedy algorithms to enable efficient group discovery. Finally, we design a principled validation methodology and run extensive experiments that validate the effectiveness of IUGA on large datasets for different user space analysis scenarios.",
"We present SIREN, an interactive tool for mining and visualizing geospatial redescriptions. Redescription mining is a powerful data analysis tool that aims at finding alternative descriptions of the same entities. For example, in biology, an important task is to identify the bioclimatic constraints that allow some species to survive, that is, to describe geographical regions in terms of both the fauna that inhabits them and their bioclimatic conditions. Using SIREN, users can explore geospatial data of their interest by visualizing the redescriptions on a map, interactively edit, extend and filter them. To demonstrate the use of the tool, we focus on climatic niche-finding over Europe, as an example task. Yet, SIREN is by no means limited to a particular dataset or application."
]
} |
1607.05423 | 2499540656 | Deep neural networks have achieved remarkable success in a wide range of practical problems. However, due to the inherent large parameter space, deep models are notoriously prone to overfitting and difficult to be deployed in portable devices with limited memory. In this paper, we propose an iterative hard thresholding (IHT) approach to train Skinny Deep Neural Networks (SDNNs). An SDNN has much fewer parameters yet can achieve competitive or even better performance than its full CNN counterpart. More concretely, the IHT approach trains an SDNN through following two alternative phases: (I) perform hard thresholding to drop connections with small activations and fine-tune the other significant filters; (II) re-activate the frozen connections and train the entire network to improve its overall discriminative capability. We verify the superiority of SDNNs in terms of efficiency and classification performance on four benchmark object recognition datasets, including CIFAR-10, CIFAR-100, MNIST and ImageNet. Experimental results clearly demonstrate that IHT can be applied for training SDNN based on various CNN architectures such as NIN and AlexNet. | Early approaches for deep model compression include optimal brain damage @cite_20 and optimal brain surgeon @cite_4 , which prune the connections in networks based on second-order information. However, those methods are not feasible for deep networks due to high computational complexity. Recent works aiming at network pruning include @cite_6 @cite_14 @cite_2 @cite_15 @cite_26 , which prune connections in a progressively greedy way @cite_6 or using sparsity-related regularizers @cite_14 @cite_15 . Although those works can reduce model size significantly, they suffer from dramatic performance loss. In contrast, SDNN not only offers significant compression ratios but also improves the performance simultaneously. | {
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_26",
"@cite_6",
"@cite_2",
"@cite_15",
"@cite_20"
],
"mid": [
"1570197553",
"2125389748",
"2177847924",
"2963674932",
"2952826672",
"2949273893",
"2114766824"
],
"abstract": [
"In this work, we investigate the use of sparsity-inducing regularizers during training of Convolution Neural Networks (CNNs). These regularizers encourage that fewer connections in the convolution and fully connected layers take non-zero values and in effect result in sparse connectivity between hidden units in the deep network. This in turn reduces the memory and runtime cost involved in deploying the learned CNNs. We show that training with such regularization can still be performed using stochastic gradient descent implying that it can be used easily in existing codebases. Experimental evaluation of our approach on MNIST, CIFAR, and ImageNet datasets shows that our regularizers can result in dramatic reductions in memory requirements. For instance, when applied on AlexNet, our method can reduce the memory consumption by a factor of four with minimal loss in accuracy.",
"We investigate the use of information from all second order derivatives of the error function to perform network pruning (i.e., removing unimportant weights from a trained network) in order to improve generalization, simplify networks, reduce hardware or storage requirements, increase the speed of further training, and in some cases enable rule extraction. Our method, Optimal Brain Surgeon (OBS), is significantly better than magnitude-based methods and Optimal Brain Damage [Le Cun, Denker and Solla, 1990], which often remove the wrong weights. OBS permits the pruning of more weights than other methods (for the same error on the training set), and thus yields better generalization on test data. Crucial to OBS is a recursion relation for calculating the inverse Hessian matrix H^{-1} from training data and structural information of the net. OBS permits a 90%, a 76%, and a 62% reduction in weights over backpropagation with weight decay on three benchmark MONK's problems [, 1991]. Of OBS, Optimal Brain Damage, and magnitude-based methods, only OBS deletes the correct weights from a trained XOR network in every case. Finally, whereas Sejnowski and Rosenberg [1987] used 18,000 weights in their NETtalk network, we used OBS to prune a network to just 1560 weights, yielding better generalization.",
"Although the latest high-end smartphone has powerful CPU and GPU, running deeper convolutional neural networks (CNNs) for complex tasks such as ImageNet classification on mobile devices is challenging. To deploy deep CNNs on mobile devices, we present a simple and effective scheme to compress the entire CNN, which we call one-shot whole network compression. The proposed scheme consists of three steps: (1) rank selection with variational Bayesian matrix factorization, (2) Tucker decomposition on kernel tensor, and (3) fine-tuning to recover accumulated loss of accuracy, and each step can be easily implemented using publicly available tools. We demonstrate the effectiveness of the proposed scheme by testing the performance of various compressed CNNs (AlexNet, VGGS, GoogLeNet, and VGG-16) on the smartphone. Significant reductions in model size, runtime, and energy consumption are obtained, at the cost of small loss in accuracy. In addition, we address the important implementation level issue on 1×1 convolution, which is a key operation of inception module of GoogLeNet as well as CNNs compressed by our proposed scheme.",
"Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the total number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy.",
"Real time application of deep learning algorithms is often hindered by high computational complexity and frequent memory accesses. Network pruning is a promising technique to solve this problem. However, pruning usually results in irregular network connections that not only demand extra representation efforts but also do not fit well on parallel computation. We introduce structured sparsity at various scales for convolutional neural networks, which are channel wise, kernel wise and intra kernel strided sparsity. This structured sparsity is very advantageous for direct computational resource savings on embedded computers, parallel computing environments and hardware based systems. To decide the importance of network connections and paths, the proposed method uses a particle filtering approach. The importance weight of each particle is assigned by computing the misclassification rate with corresponding connectivity pattern. The pruned network is re-trained to compensate for the losses due to pruning. While implementing convolutions as matrix products, we particularly show that intra kernel strided sparsity with a simple constraint can significantly reduce the size of kernel and feature map matrices. The pruned network is finally fixed point optimized with reduced word length precision. This results in significant reduction in the total storage size providing advantages for on-chip memory based implementations of deep neural networks.",
"We revisit the idea of brain damage, i.e. the pruning of the coefficients of a neural network, and suggest how brain damage can be modified and used to speedup convolutional layers. The approach uses the fact that many efficient implementations reduce generalized convolutions to matrix multiplications. The suggested brain damage process prunes the convolutional kernel tensor in a group-wise fashion by adding group-sparsity regularization to the standard training process. After such group-wise pruning, convolutions can be reduced to multiplications of thinned dense matrices, which leads to speedup. In the comparison on AlexNet, the method achieves very competitive performance.",
"We have used information-theoretic ideas to derive a class of practical and nearly optimal schemes for adapting the size of a neural network. By removing unimportant weights from a network, several improvements can be expected: better generalization, fewer training examples required, and improved speed of learning and or classification. The basic idea is to use second-derivative information to make a tradeoff between network complexity and training set error. Experiments confirm the usefulness of the methods on a real-world application."
]
} |
1607.05423 | 2499540656 | Deep neural networks have achieved remarkable success in a wide range of practical problems. However, due to the inherent large parameter space, deep models are notoriously prone to overfitting and difficult to be deployed in portable devices with limited memory. In this paper, we propose an iterative hard thresholding (IHT) approach to train Skinny Deep Neural Networks (SDNNs). An SDNN has much fewer parameters yet can achieve competitive or even better performance than its full CNN counterpart. More concretely, the IHT approach trains an SDNN through following two alternative phases: (I) perform hard thresholding to drop connections with small activations and fine-tune the other significant filters; (II) re-activate the frozen connections and train the entire network to improve its overall discriminative capability. We verify the superiority of SDNNs in terms of efficiency and classification performance on four benchmark object recognition datasets, including CIFAR-10, CIFAR-100, MNIST and ImageNet. Experimental results clearly demonstrate that IHT can be applied for training SDNN based on various CNN architectures such as NIN and AlexNet. | Our work is also in line with model compression. For example, @cite_2 proposes to quantize the deep model by minimizing L2 error, and @cite_21 seeks a low-rank approximation of the model. Recently, @cite_13 combined pruning, quantization and Huffman coding techniques and provided rather high compression ratios. However, those methods also introduce a performance drop. There are also works trying to compress a model by using binning methods @cite_28 , but they can only be applied to fully connected layers. In contrast, our method can be applied for compressing both convolution layers and fully connected layers. @PARASPLIT | {
"cite_N": [
"@cite_28",
"@cite_21",
"@cite_13",
"@cite_2"
],
"mid": [
"1724438581",
"2167215970",
"",
"2952826672"
],
"abstract": [
"Deep convolutional neural networks (CNN) has become the most promising method for object recognition, repeatedly demonstrating record breaking results for image classification and object detection in recent years. However, a very deep CNN generally involves many layers with millions of parameters, making the storage of the network model to be extremely large. This prohibits the usage of deep CNNs on resource limited hardware, especially cell phones or other embedded devices. In this paper, we tackle this model storage issue by investigating information theoretical vector quantization methods for compressing the parameters of CNNs. In particular, we have found in terms of compressing the most storage demanding dense connected layers, vector quantization methods have a clear gain over existing matrix factorization methods. Simply applying k-means clustering to the weights or conducting product quantization can lead to a very good balance between model size and recognition accuracy. For the 1000-category classification task in the ImageNet challenge, we are able to achieve 16-24 times compression of the network with only 1% loss of classification accuracy using the state-of-the-art CNN.",
"We present techniques for speeding up the test-time evaluation of large convolutional networks, designed for object recognition tasks. These models deliver impressive accuracy, but each image evaluation requires millions of floating point operations, making their deployment on smartphones and Internet-scale clusters problematic. The computation is dominated by the convolution operations in the lower layers of the model. We exploit the redundancy present within the convolutional filters to derive approximations that significantly reduce the required computation. Using large state-of-the-art models, we demonstrate speedups of convolutional layers on both CPU and GPU by a factor of 2x, while keeping the accuracy within 1% of the original model.",
"",
"Real time application of deep learning algorithms is often hindered by high computational complexity and frequent memory accesses. Network pruning is a promising technique to solve this problem. However, pruning usually results in irregular network connections that not only demand extra representation efforts but also do not fit well on parallel computation. We introduce structured sparsity at various scales for convolutional neural networks, which are channel wise, kernel wise and intra kernel strided sparsity. This structured sparsity is very advantageous for direct computational resource savings on embedded computers, parallel computing environments and hardware based systems. To decide the importance of network connections and paths, the proposed method uses a particle filtering approach. The importance weight of each particle is assigned by computing the misclassification rate with corresponding connectivity pattern. The pruned network is re-trained to compensate for the losses due to pruning. While implementing convolutions as matrix products, we particularly show that intra kernel strided sparsity with a simple constraint can significantly reduce the size of kernel and feature map matrices. The pruned network is finally fixed point optimized with reduced word length precision. This results in significant reduction in the total storage size providing advantages for on-chip memory based implementations of deep neural networks."
]
} |
1607.05520 | 2491938110 | We introduce bendlets, a shearlet-like system that is based on anisotropic scaling, translation, shearing, and bending of a compactly supported generator. With shearing being linear and bending quadratic in spatial coordinates, bendlets provide what we term a second-order shearlet system. As we show in this article, the decay rates of the associated transform enable the precise characterization of location, orientation and curvature of discontinuities in piecewise constant images. These results yield an improvement over existing directional representation systems where curvature only controls the constant of the decay rate of the transform. We also detail the construction of shearlet systems of arbitrary order. A practical implementation of bendlets is provided as an extension of the ShearLab toolbox, which we use to verify our theoretical classification results. | Since their invention in 2005, shearlet systems @cite_34 have been established in applied and computational harmonic analysis as efficient representation systems, in particular for image data. Their success is due to their superior approximation performance for images compared to wavelets. In fact, they share the (quasi) optimal approximation properties of curvelets @cite_8 , yet they are better suited for digital implementation due to the utilization of shears instead of rotations. Moreover, in contrast to curvelets, frames of compactly supported shearlets @cite_35 are available. | {
"cite_N": [
"@cite_35",
"@cite_34",
"@cite_8"
],
"mid": [
"2072476274",
"",
"2069912449"
],
"abstract": [
"Shearlet tight frames have been extensively studied in recent years due to their optimal approximation properties of cartoon-like images and their unified treatment of the continuum and digital settings. However, these studies only concerned shearlet tight frames generated by a band-limited shearlet, whereas for practical purposes compact support in spatial domain is crucial.",
"",
"This paper introduces new tight frames of curvelets to address the problem of finding optimally sparse representations of objects with discontinuities along piecewise C^2 edges. Conceptually, the curvelet transform is a multiscale pyramid with many directions and positions at each length scale, and needle-shaped elements at fine scales. These elements have many useful geometric multiscale features that set them apart from classical multiscale representations such as wavelets. For instance, curvelets obey a parabolic scaling relation which says that at scale 2^{-j}, each element has an envelope that is aligned along a ridge of length 2^{-j/2} and width 2^{-j}. We prove that curvelets provide an essentially optimal representation of typical objects f that are C^2 except for discontinuities along piecewise C^2 curves. Such representations are nearly as sparse as if f were not singular and turn out to be far more sparse than the wavelet decomposition of the object. For instance, the n-term partial reconstruction f_n^C obtained by selecting the n largest terms in the curvelet series obeys ‖f − f_n^C‖^2_{L2} ≤ C · n^{-2} · (log n)^3, n → ∞. This rate of convergence holds uniformly over a class of functions that are C^2 except for discontinuities along piecewise C^2 curves and is essentially optimal. In comparison, the squared error of n-term wavelet approximations only converges as n^{-1} as n → ∞, which is considerably worse than the optimal behavior."
]
} |
1607.05520 | 2491938110 | We introduce bendlets, a shearlet-like system that is based on anisotropic scaling, translation, shearing, and bending of a compactly supported generator. With shearing being linear and bending quadratic in spatial coordinates, bendlets provide what we term a second-order shearlet system. As we show in this article, the decay rates of the associated transform enable the precise characterization of location, orientation and curvature of discontinuities in piecewise constant images. These results yield an improvement over existing directional representation systems where curvature only controls the constant of the decay rate of the transform. We also detail the construction of shearlet systems of arbitrary order. A practical implementation of bendlets is provided as an extension of the ShearLab toolbox, which we use to verify our theoretical classification results. | Besides approximation, shearlets also provide a powerful tool for feature analysis of functions. It was first demonstrated in @cite_2 that, analogous to the curvelet transform @cite_4 , the shearlet transform provides a precise characterization of the wavefront set, which corresponds to the aforementioned boundary curves between image regions. In particular, the wavefront set is characterized as those positions and orientations in parameter space where the transform decays slowly for increasing scales. By now, the analysis of edges and singularities of functions utilizing a continuous shearlet transform is a well-established area of research @cite_1 . | {
"cite_N": [
"@cite_1",
"@cite_4",
"@cite_2"
],
"mid": [
"",
"1979116456",
"1990805796"
],
"abstract": [
"",
"We discuss a Continuous Curvelet Transform (CCT), a transform f → Γf(a, b, θ) of functions f(x1, x2) on R^2, into a transform domain with continuous scale a > 0, location b ∈ R^2, and orientation θ ∈ [0, 2π). The transform is defined by Γf(a, b, θ) = ⟨f, γ_{abθ}⟩ where the inner products project f onto analyzing elements called curvelets γ_{abθ} which are smooth and of rapid decay away from an a by √a rectangle with minor axis pointing in direction θ. We call them curvelets because this anisotropic behavior allows them to ‘track’ the behavior of singularities along curves. They are continuum scale/space/orientation analogs of the discrete frame of curvelets discussed in Candes and Donoho (2002). We use the CCT to analyze several objects having singularities at points, along lines, and along smooth curves. These examples show that for fixed (x0, θ0), Γf(a, x0, θ0) decays rapidly as a → 0 if f is smooth near x0, or if the singularity of f at x0 is oriented in a different direction than θ0. Generalizing these examples, we state general theorems showing that decay properties of Γf(a, x0, θ0) for fixed (x0, θ0), as a → 0, can precisely identify the wavefront set and the H^m-wavefront set of a distribution. In effect, the wavefront set of a distribution is the closure of the set of (x0, θ0) near which Γf(a, x, θ) is not of rapid decay as a → 0; the H^m-wavefront set is the closure of those points (x0, θ0) where the ‘directional parabolic square function’ S^m(x, θ) = (∫ |Γf(a, x, θ)|^2 da/a^{3+2m})^{1/2} is not locally integrable. The CCT is closely related to a continuous transform used by Hart Smith in his study of Fourier Integral Operators. Smith’s transform is based on strict affine parabolic scaling of a single mother wavelet, while for the transform we discuss, the generating wavelet changes (slightly) scale by scale. The CCT can also be compared to the FBI (Fourier-Bros-Iagolnitzer) and Wave Packets (Cordoba-Fefferman) transforms. We describe their similarities and differences in resolving the wavefront set.",
"It is known that the Continuous Wavelet Transform of a distribution f decays rapidly near the points where f is smooth, while it decays slowly near the irregular points. This property allows the identification of the singular support of f. However, the Continuous Wavelet Transform is unable to describe the geometry of the set of singularities of f and, in particular, identify the wavefront set of a distribution. In this paper, we employ the same framework of affine systems which is at the core of the construction of the wavelet transform to introduce the Continuous Shearlet Transform. This is defined by SH_ψ f(a, s, t) = ⟨f, ψ_{ast}⟩, where the analyzing elements ψ_{ast} are dilated and translated copies of a single generating function ψ. The dilation matrices form a two-parameter matrix group consisting of products of parabolic scaling and shear matrices. We show that the elements ψ_{ast} form a system of smooth functions at continuous scales a > 0, locations t ∈ R^2, and oriented along lines of slope s ∈ R in the frequency domain. We then prove that the Continuous Shearlet Transform does exactly resolve the wavefront set of a distribution f."
]
} |
1607.05520 | 2491938110 | We introduce bendlets, a shearlet-like system that is based on anisotropic scaling, translation, shearing, and bending of a compactly supported generator. With shearing being linear and bending quadratic in spatial coordinates, bendlets provide what we term a second-order shearlet system. As we show in this article, the decay rates of the associated transform enable the precise characterization of location, orientation and curvature of discontinuities in piecewise constant images. These results yield an improvement over existing directional representation systems where curvature only controls the constant of the decay rate of the transform. We also detail the construction of shearlet systems of arbitrary order. A practical implementation of bendlets is provided as an extension of the ShearLab toolbox, which we use to verify our theoretical classification results. | These results have been generalized in @cite_2 for the so-called classical shearlet. In @cite_14 these results were subsequently generalized to also include compactly supported shearlet generators. It has also been analyzed to what extent more general dilation groups than those employed for the standard shearlet system can characterize the wavefront set @cite_24 . | {
"cite_N": [
"@cite_24",
"@cite_14",
"@cite_2"
],
"mid": [
"2964197071",
"2025423598",
"1990805796"
],
"abstract": [
"We consider the problem of characterizing the wavefront set of a tempered distribution u ∈ S'(R^d) in terms of its continuous wavelet transform, where the latter is defined with respect to a suitably chosen dilation group H ⊂ GL(R^d). In this paper we develop a comprehensive and unified approach that allows to establish characterizations of the wavefront set in terms of rapid coefficient decay, for a large variety of dilation groups. For this purpose, we introduce two technical conditions on the dual action of the group H, called microlocal admissibility and (weak) cone approximation property. Essentially, microlocal admissibility sets up a systematic relationship between the scales in a wavelet dilated by h ∈ H on one side, and the matrix norm of h on the other side. The (weak) cone approximation property describes the ability of the wavelet system to adapt its frequency-side localization to arbitrary frequency cones. Together, microlocal admissibility and the weak cone approximation property allow the characterization of points in the wavefront set using multiple wavelets. Replacing the weak cone approximation by its stronger counterpart gives rise to single wavelet characterizations. We illustrate the scope of our results by discussing—in any dimension d ≥ 2—the similitude, diagonal and shearlet dilation groups, for which we verify the pertinent conditions. As a result, similitude and diagonal groups can be employed for multiple wavelet characterizations, whereas for the shearlet groups a single wavelet suffices. In particular, the shearlet characterization (previously only established for d = 2) holds in arbitrary dimensions.",
"In recent years directional multiscale transformations like the curvelet- or shearlet transformation have gained considerable attention. The reason for this is that these transforms are—unlike more traditional transforms like wavelets—able to efficiently handle data with features along edges. The main result in Kutyniok and Labate (Trans. Am. Math. Soc. 361:2719–2754, 2009) confirming this property for shearlets is due to Kutyniok and Labate where it is shown that for very special functions ψ with frequency support in a compact conical wedge the decay rate of the shearlet coefficients of a tempered distribution f with respect to the shearlet ψ can resolve the wavefront set of f. We demonstrate that the same result can be verified under much weaker assumptions on ψ, namely to possess sufficiently many anisotropic vanishing moments. We also show how to build frames for L^2(R^2) from any such function. To prove our statements we develop a new approach based on an adaption of the Radon transform to the shearlet structure.",
"It is known that the Continuous Wavelet Transform of a distribution f decays rapidly near the points where f is smooth, while it decays slowly near the irregular points. This property allows the identification of the singular support of f. However, the Continuous Wavelet Transform is unable to describe the geometry of the set of singularities of f and, in particular, identify the wavefront set of a distribution. In this paper, we employ the same framework of affine systems which is at the core of the construction of the wavelet transform to introduce the Continuous Shearlet Transform. This is defined by SH_ψ f(a,s,t) = ⟨f, ψ_ast⟩, where the analyzing elements ψ_ast are dilated and translated copies of a single generating function ψ. The dilation matrices form a two-parameter matrix group consisting of products of parabolic scaling and shear matrices. We show that the elements ψ_ast form a system of smooth functions at continuous scales a > 0, locations t ∈ R^2, and oriented along lines of slope s ∈ R in the frequency domain. We then prove that the Continuous Shearlet Transform does exactly resolve the wavefront set of a distribution f."
]
} |
1607.05520 | 2491938110 | We introduce bendlets, a shearlet-like system that is based on anisotropic scaling, translation, shearing, and bending of a compactly supported generator. With shearing being linear and bending quadratic in spatial coordinates, bendlets provide what we term a second-order shearlet system. As we show in this article, the decay rates of the associated transform enable the precise characterization of location, orientation and curvature of discontinuities in piecewise constant images. These results yield an improvement over existing directional representation systems where curvature only controls the constant of the decay rate of the transform. We also detail the construction of shearlet systems of arbitrary order. A practical implementation of bendlets is provided as an extension of the ShearLab toolbox, which we use to verify our theoretical classification results. | Recent work furthermore established that the shearlet transform provides geometric information that goes beyond the wavefront set, for which it is sufficient that the transform decays slower than any polynomial. For a bounded domain @math with piecewise smooth boundary @math , it was shown in @cite_17 and @cite_9 that using a classical shearlet one can associate precise decay rates to the transform of the characteristic function @math of @math so that points on @math and the corresponding normal direction can be detected as well as points where @math is not smooth. An edge detection algorithm based on shearlets was implemented in @cite_27 . | {
"cite_N": [
"@cite_27",
"@cite_9",
"@cite_17"
],
"mid": [
"2144506334",
"",
"2027712695"
],
"abstract": [
"It is well known that the wavelet transform provides a very effective framework for analysis of multiscale edges. In this paper, we propose a novel approach based on the shearlet transform: a multiscale directional transform with a greater ability to localize distributed discontinuities such as edges. Indeed, unlike traditional wavelets, shearlets are theoretically optimal in representing images with edges and, in particular, have the ability to fully capture directional and other geometrical features. Numerical examples demonstrate that the shearlet approach is highly effective at detecting both the location and orientation of edges, and outperforms methods based on wavelets as well as other standard methods. Furthermore, the shearlet approach is useful to design simple and effective algorithms for the detection of corners and junctions.",
"",
"This paper shows that the continuous shearlet transform, a novel directional multiscale transform recently introduced by the authors and their collaborators, provides a precise geometrical characterization for the boundary curves of very general planar regions. This study is motivated by imaging applications, where such boundary curves represent edges of images. The shearlet approach is able to characterize both locations and orientations of the edge points, including corner points and junctions, where the edge curves exhibit abrupt changes in tangent or curvature. Our results encompass and greatly extend previous results based on the shearlet and curvelet transforms which were limited to very special cases such as polygons and smooth boundary curves with nonvanishing curvature."
]
} |
1607.05520 | 2491938110 | We introduce bendlets, a shearlet-like system that is based on anisotropic scaling, translation, shearing, and bending of a compactly supported generator. With shearing being linear and bending quadratic in spatial coordinates, bendlets provide what we term a second-order shearlet system. As we show in this article, the decay rates of the associated transform enable the precise characterization of location, orientation and curvature of discontinuities in piecewise constant images. These results yield an improvement over existing directional representation systems where curvature only controls the constant of the decay rate of the transform. We also detail the construction of shearlet systems of arbitrary order. A practical implementation of bendlets is provided as an extension of the ShearLab toolbox, which we use to verify our theoretical classification results. | Numerous generalizations of the just described method have been developed, including a generalization to 3D domains in @cite_28 , with a particular emphasis on detecting line singularities within the 2D manifolds given by the boundaries @cite_20 . In another line of research, the region @math is no longer required to be constant but can be smooth, see @cite_29 . All the results above deal with band-limited generators. In @cite_32 the decay of a continuous shearlet transform with a compactly supported generator was analyzed. It was demonstrated that a similar classification of edges as with band-limited generators could be achieved in both 2D and 3D, with the additional improvement that the decay rates are uniform, meaning they can be analyzed in a pre-asymptotic regime. Furthermore, it was also established that information on the curvature of @math could be extracted from the shearlet transform in a weak form. 
This is, however, very different from the results in the present paper that provide an exact description of the asymptotic behavior of a higher-order shearlet transform for different curvatures. | {
"cite_N": [
"@cite_28",
"@cite_29",
"@cite_32",
"@cite_20"
],
"mid": [
"2038675637",
"1887046642",
"",
"2114809774"
],
"abstract": [
"Abstract Directional multiscale transforms such as the shearlet transform have emerged in recent years for their ability to capture the geometrical information associated with the singularity sets of bivariate functions and distributions. One of the most striking features of the continuous shearlet transform is that it provides a very simple and precise geometrical characterization for the boundary curves of general planar regions. However, no specific results were known so far in higher dimensions, since the arguments used in dimension n = 2 do not directly carry over to the higher dimensional setting. In this paper, we extend this framework for the analysis of singularities to the 3-dimensional setting, and show that the 3-dimensional continuous shearlet transform precisely characterizes the boundary set of solid regions in R 3 by identifying both its location and local orientation.",
"Abstract The analysis and detection of edges is a central problem in applied mathematics and image processing. A number of results in recent years have shown that directional multiscale methods such as continuous curvelet and shearlet transforms offer a powerful theoretical framework to capture the geometry of edge singularities, going far beyond the capabilities of the conventional wavelet transform. The continuous shearlet transform, in particular, provides a precise geometric characterization of edges in piecewise constant functions in R 2 and R 3 , including corner points. However, a question has been raised frequently: What happens if the function is piecewise smooth and not just piecewise constant? Clearly, a piecewise smooth function is a much more realistic model of images with edges. In this paper, we extend the characterization results previously known and show that, also in the case of piecewise smooth functions, the continuous shearlet transform can detect the location and orientation of edge points, including corner points, through its asymptotic decay at fine scales. The new proof introduces innovative technical constructions to deal with the more challenging problem. The new results set the theoretical groundwork for the application of the shearlet framework to a wider class of problems from image processing.",
"",
"Suppose that is a three dimensional solid with boundary surface S = S1[ [ Sq, where each Sr is a smooth surface with boundary curve r. Multiscale directional representation systems (e.g., shearlets) are able to capture the essential geometry of by precisely identifying the boundary set"
]
} |
1607.05697 | 2952383768 | In this paper, we study PUSH-PULL style rumor spreading algorithms in the mobile telephone model, a variant of the classical telephone model in which each node can participate in at most one connection per round; i.e., you can no longer have multiple nodes pull information from the same source in a single round. Our model also includes two new parameterized generalizations: (1) the network topology can undergo a bounded rate of change (for a parameterized rate that spans from no changes to changes in every round); and (2) in each round, each node can advertise a bounded amount of information to all of its neighbors before connection decisions are made (for a parameterized number of bits that spans from no advertisement to large advertisements). We prove that in the mobile telephone model with no advertisements and no topology changes, PUSH-PULL style algorithms perform poorly with respect to a graph's vertex expansion and graph conductance as compared to the known tight results in the classical telephone model. We then prove, however, that if nodes are allowed to advertise a single bit in each round, a natural variation of PUSH-PULL terminates in time that matches (within logarithmic factors) this strategy's performance in the classical telephone model---even in the presence of frequent topology changes. We also analyze how the performance of this algorithm degrades as the rate of change increases toward the maximum possible amount. We argue that our model matches well the properties of emerging peer-to-peer communication standards for mobile devices, and that our efficient PUSH-PULL variation that leverages small advertisements and adapts well to topology changes is a good choice for rumor spreading in this increasingly important setting. | The telephone model described above was first introduced by Frieze and Grimmett @cite_0 . A key problem in this model is rumor spreading : a rumor must spread from a single source to the whole network. 
In studying this problem, algorithmic simplicity is typically prioritized over absolute optimality. The PUSH algorithm (first mentioned in @cite_0 ), for example, simply has every node with the message choose a neighbor uniformly at random and send it the message. The PULL algorithm (first mentioned in @cite_2 ), by contrast, has every node without the message choose a neighbor uniformly at random and ask for the message. The PUSH-PULL algorithm combines those two strategies. In a complete graph, both PUSH and PULL complete in @math rounds, with high probability---leveraging epidemic-style spreading behavior. Karp et al. @cite_12 proved that the average number of connections per node when running PUSH-PULL in the complete graph is bounded by @math . | {
"cite_N": [
"@cite_0",
"@cite_12",
"@cite_2"
],
"mid": [
"2059014957",
"2157004711",
"191133884"
],
"abstract": [
"We consider the problem of finding the shortest distance between all pairs of vertices in a complete digraph on n vertices, whose arc-lengths are non-negative random variables. We describe an algorithm which solves this problem in O(n(m + n log n)) expected time, where m is the expected number of arcs with finite length. If m is small enough, this represents a small improvement over the bound in Bloniarz [3]. We consider also the case when the arc-lengths are random variables which are independently distributed with distribution function F, where F(0)=0 and F is differentiable at 0; for this case, we describe an algorithm which runs in O(n^2 log n) expected time. In our treatment of the shortest-path problem we consider the following problem in combinatorial probability theory. A town contains n people, one of whom knows a rumour. At the first stage he tells someone chosen randomly from the town; at each stage, each person who knows the rumour tells someone else, chosen randomly from the town and independently of all other choices. Let Sn be the number of stages before the whole town knows the rumour. We show that Sn/log2 n → 1 + loge 2 in probability as n → ∞, and estimate the probabilities of large deviations in Sn.",
"Investigates the class of epidemic algorithms that are commonly used for the lazy transmission of updates to distributed copies of a database. These algorithms use a simple randomized communication mechanism to ensure robustness. Suppose n players communicate in parallel rounds in each of which every player calls a randomly selected communication partner. In every round, players can generate rumors (updates) that are to be distributed among all players. Whenever communication is established between two players, each one must decide which of the rumors to transmit. The major problem is that players might not know which rumors their partners have already received. For example, a standard algorithm forwarding each rumor from the calling to the called players for Θ(ln n) rounds needs to transmit the rumor Θ(n ln n) times in order to ensure that every player finally receives the rumor with high probability. We investigate whether such a large communication overhead is inherent to epidemic algorithms. On the positive side, we show that the communication overhead can be reduced significantly. We give an algorithm using only O(n ln ln n) transmissions and O(ln n) rounds. In addition, we prove the robustness of this algorithm. On the negative side, we show that any address-oblivious algorithm needs to send Ω(n ln ln n) messages for each rumor, regardless of the number of rounds. Furthermore, we give a general lower bound showing that time and communication optimality cannot be achieved simultaneously using random phone calls, i.e. every algorithm that distributes a rumor in O(ln n) rounds needs ω(n) transmissions.",
""
]
} |
1607.05540 | 2952789856 | A framework for consensus modelling is introduced using Kleene's three valued logic as a means to express vagueness in agents' beliefs. Explicitly borderline cases are inherent to propositions involving vague concepts where sentences of a propositional language may be absolutely true, absolutely false or borderline. By exploiting these intermediate truth values, we can allow agents to adopt a more vague interpretation of underlying concepts in order to weaken their beliefs and reduce the levels of inconsistency, so as to achieve consensus. We consider a consensus combination operation which results in agents adopting the borderline truth value as a shared viewpoint if they are in direct conflict. Simulation experiments are presented which show that applying this operator to agents chosen at random (subject to a consistency threshold) from a population, with initially diverse opinions, results in convergence to a smaller set of more precise shared beliefs. Furthermore, if the choice of agents for combination is dependent on the payoff of their beliefs, this acting as a proxy for performance or usefulness, then the system converges to beliefs which, on average, have higher payoff. | A number of models for consensus have been proposed in the literature which have influenced the development of the framework described in this paper. @cite_4 introduced a model for reaching a consensus involving a weighted, global updating of beliefs, iterating until an agreement is reached. In DeGroot's model, agents assign a weight distribution to the population before forming a new opinion. By applying their assigned weights to the other agents' beliefs, an agent can control the influence that others have on their own beliefs. | {
"cite_N": [
"@cite_4"
],
"mid": [
"1998692453"
],
"abstract": [
"Abstract Consider a group of individuals who must act together as a team or committee, and suppose that each individual in the group has his own subjective probability distribution for the unknown value of some parameter. A model is presented which describes how the group might reach agreement on a common subjective probability distribution for the parameter by pooling their individual opinions. The process leading to the consensus is explicitly described and the common distribution that is reached is explicitly determined. The model can also be applied to problems of reaching a consensus when the opinion of each member of the group is represented simply as a point estimate of the parameter rather than as a probability distribution."
]
} |
1607.05540 | 2952789856 | A framework for consensus modelling is introduced using Kleene's three valued logic as a means to express vagueness in agents' beliefs. Explicitly borderline cases are inherent to propositions involving vague concepts where sentences of a propositional language may be absolutely true, absolutely false or borderline. By exploiting these intermediate truth values, we can allow agents to adopt a more vague interpretation of underlying concepts in order to weaken their beliefs and reduce the levels of inconsistency, so as to achieve consensus. We consider a consensus combination operation which results in agents adopting the borderline truth value as a shared viewpoint if they are in direct conflict. Simulation experiments are presented which show that applying this operator to agents chosen at random (subject to a consistency threshold) from a population, with initially diverse opinions, results in convergence to a smaller set of more precise shared beliefs. Furthermore, if the choice of agents for combination is dependent on the payoff of their beliefs, this acting as a proxy for performance or usefulness, then the system converges to beliefs which, on average, have higher payoff. | As an alternative to DeGroot's model, the Bounded Confidence (BC) model introduced in @cite_0 provides agents with a confidence measure. An agent quantifies their level of confidence in their own opinions and is then able to limit their interactions to those agents who possess similar beliefs if they are highly confident (small bounds), or extend the range of possible interactions if they possess low confidence (large bounds). In this model agents do not a priori assign weights to the beliefs of others, but instead determine such weightings based on similarity and on their own confidence levels. This is similar in essence to the inconsistency threshold that we introduce in section , but applied on an individual basis. | {
"cite_N": [
"@cite_0"
],
"mid": [
"37686529"
],
"abstract": [
"Consensus formation among n experts is modeled as a positive discrete dynamical system in n dimensions. The well–known linear but non–autonomous model is extended to a nonlinear one admitting also various kinds of averaging beside the weighted arithmetic mean. For this model a sufficient condition for reaching a consensus is presented. As a special case consensus formation under bounded confidence is analyzed."
]
} |
1607.05540 | 2952789856 | A framework for consensus modelling is introduced using Kleene's three valued logic as a means to express vagueness in agents' beliefs. Explicitly borderline cases are inherent to propositions involving vague concepts where sentences of a propositional language may be absolutely true, absolutely false or borderline. By exploiting these intermediate truth values, we can allow agents to adopt a more vague interpretation of underlying concepts in order to weaken their beliefs and reduce the levels of inconsistency, so as to achieve consensus. We consider a consensus combination operation which results in agents adopting the borderline truth value as a shared viewpoint if they are in direct conflict. Simulation experiments are presented which show that applying this operator to agents chosen at random (subject to a consistency threshold) from a population, with initially diverse opinions, results in convergence to a smaller set of more precise shared beliefs. Furthermore, if the choice of agents for combination is dependent on the payoff of their beliefs, this acting as a proxy for performance or usefulness, then the system converges to beliefs which, on average, have higher payoff. | The Relative Agreement (RA) model @cite_6 then extends the Bounded Confidence model to allow agents to assign weights to the beliefs of others by quantifying the extent of the overlap of their respective confidence bounds. By having agents declare a confidence interval for their beliefs, the model then restricts interactions to those pairs of agents with overlapping intervals. Consequently, agents are only required to assess their own beliefs and are not required to make explicit judgements about those of other agents. @cite_6 also moved to a model of pair-wise interactions to better capture social interactions of individuals, the latter being a setting in which group-wide updates to beliefs are unintuitive in that they do not reflect typical social behaviour. | {
"cite_N": [
"@cite_6"
],
"mid": [
"1582135188"
],
"abstract": [
"Abstract: We model opinion dynamics in populations of agents with continuous opinion and uncertainty. The opinions and uncertainties are modified by random pair interactions. We propose a new model of interactions, called relative agreement model, which is a variant of the previously discussed bounded confidence. In this model, uncertainty as well as opinion can be modified by interactions. We introduce extremist agents by attributing a much lower uncertainty (and thus higher persuasion) to a small proportion of agents at the extremes of the opinion distribution. We study the evolution of the opinion distribution submitted to the relative agreement model. Depending upon the choice of parameters, the extremists can have a very local influence or attract the whole population. We propose a qualitative analysis of the convergence process based on a local field notion. The genericity of the observed results is tested on several variants of the bounded confidence model."
]
} |
1607.05540 | 2952789856 | A framework for consensus modelling is introduced using Kleene's three valued logic as a means to express vagueness in agents' beliefs. Explicitly borderline cases are inherent to propositions involving vague concepts where sentences of a propositional language may be absolutely true, absolutely false or borderline. By exploiting these intermediate truth values, we can allow agents to adopt a more vague interpretation of underlying concepts in order to weaken their beliefs and reduce the levels of inconsistency, so as to achieve consensus. We consider a consensus combination operation which results in agents adopting the borderline truth value as a shared viewpoint if they are in direct conflict. Simulation experiments are presented which show that applying this operator to agents chosen at random (subject to a consistency threshold) from a population, with initially diverse opinions, results in convergence to a smaller set of more precise shared beliefs. Furthermore, if the choice of agents for combination is dependent on the payoff of their beliefs, this acting as a proxy for performance or usefulness, then the system converges to beliefs which, on average, have higher payoff. | A fundamental difference between our approach and the above models is that we use Kleene's three valued logic to represent beliefs in a propositional logic setting, rather than identify opinions with real values or intervals. @cite_5 have shown that through use of a three-state model for networked consensus of complete graphs, nodes converge to a consensus much faster and with greater accuracy when compared to a restrictive binary model. In the sequel we extend this approach to a more general setting involving larger languages and incorporating a measure of payoff for beliefs. | {
"cite_N": [
"@cite_5"
],
"mid": [
"2142465565"
],
"abstract": [
"We consider the binary consensus problem where each node in the network initially observes one of two states and the goal for each node is to eventually decide which one of the two states was initially held by the majority of the nodes. Each node contacts other nodes and updates its current state based on the state communicated by the last contacted node. We assume that both signaling (the information exchanged at node contacts) and memory (computation state at each node) are limited and restrict our attention to systems where each node can contact any other node (i.e., complete graphs). It is well known that for systems with binary signaling and memory, the probability of reaching incorrect consensus is equal to the fraction of nodes that initially held the minority state. We show that extending both the signaling and memory by just one state dramatically improves the reliability and speed of reaching the correct consensus. Specifically, we show that the probability of error decays exponentially with the number of nodes N and the convergence time is logarithmic in N for large N. We also examine the case when the state is ternary and signaling is binary. The convergence of this system to consensus is again shown to be logarithmic in N for large N, and is therefore faster than purely binary systems. The type of distributed consensus problems that we study arises in the context of decentralized peer-to-peer networks, e.g. sensor networks and opinion formation in social networks - our results suggest that robust and efficient protocols can be built with rather limited signaling and memory."
]
} |
1607.05597 | 2949706073 | This paper studies the complexity of distributed construction of purely additive spanners in the CONGEST model. We describe algorithms for building such spanners in several cases. Because of the need to simultaneously make decisions at far apart locations, the algorithms use additional mechanisms compared to their sequential counterparts. We complement our algorithms with a lower bound on the number of rounds required for computing pairwise spanners. The standard reductions from set-disjointness and equality seem unsuitable for this task because no specific edge needs to be removed from the graph. Instead, to obtain our lower bound, we define a new communication complexity problem that reduces to computing a sparse spanner, and prove a lower bound on its communication complexity using information theory. This technique significantly extends the current toolbox used for obtaining lower bounds for the CONGEST model, and we believe it may find additional applications. | Sparse spanners with a small multiplicative stretch are well-understood: Althöfer et al. @cite_39 in 1993 showed that any weighted graph @math on @math vertices has a spanner of size @math with multiplicative stretch @math , for every integer @math . Since then, several works @cite_33 @cite_52 @cite_2 @cite_46 @cite_56 @cite_48 @cite_50 @cite_41 @cite_42 have considered the problem of efficiently constructing sparse spanners with small stretch and have used spanners in the applications of computing approximate distances and approximate shortest paths efficiently. | {
"cite_N": [
"@cite_33",
"@cite_41",
"@cite_48",
"@cite_42",
"@cite_52",
"@cite_39",
"@cite_56",
"@cite_50",
"@cite_2",
"@cite_46"
],
"mid": [
"1966206200",
"2038084411",
"2061507472",
"2033698604",
"2156047991",
"2167816765",
"",
"1564010364",
"",
"1987669032"
],
"abstract": [
"A spanner is a sparse subgraph of a given graph that preserves approximate distances between each pair of vertices. In precise words, a t-spanner of a graph G = (V,E), for any t ∈ N, is a subgraph (V,E_S), E_S ⊆ E such that, for any u, v ∈ V, their distance in the subgraph is at most t times their distance in the original graph. The parameter t is called the stretch associated with the t-spanner. The concept of spanner was defined formally by Peleg and Schaffer [13], though the associated notion was used implicitly by Awerbuch [3] in the context of network synchronizers. Computing a t-spanner of smallest size for a given graph is a well-motivated combinatorial problem with numerous applications in the areas of distributed systems, communication networks and all-pairs approximate shortest paths (see [4, 13] and references therein). However, computing a t-spanner of smallest size for a graph is NP-hard. In fact, for t > 2, it is NP-hard [6] even to approximate the smallest size of a t-spanner of a graph with ratio O(2^((1−μ) ln n)) for any μ > 0. Having realized this fact, researchers have pursued another direction: to design an efficient algorithm which, for a given graph on n vertices, outputs a t-spanner whose size is of the order of the maximum size of the sparsest t-spanner of a graph on n vertices. A 43-year-old girth lower bound conjecture by Erdős [7] implies that there are graphs on n vertices whose 2k- as well as (2k−1)-spanner will require Ω(n^(1+1/k)) edges. This conjecture",
"We obtain the following results related to dynamic versions of the shortest-paths problem: Reductions that show that the incremental and decremental single-source shortest-paths problems, for weighted directed or undirected graphs, are, in a strong sense, at least as hard as the static all-pairs shortest-paths problem. We also obtain slightly weaker results for the corresponding unweighted problems. A randomized fully-dynamic algorithm for the all-pairs shortest-paths problem in directed unweighted graphs with an amortized update time of @math (we use @math to hide small poly-logarithmic factors) and a worst-case query time of O(n^(3/4)). A deterministic O(n^2 log n) time algorithm for constructing an O(log n)-spanner with O(n) edges for any weighted undirected graph on n vertices. The algorithm uses a simple algorithm for incrementally maintaining a single-source shortest-paths tree up to a given distance.",
"A spanner of an undirected unweighted graph is a subgraph that approximates the distance metric of the original graph with some specified accuracy. Specifically, we say H ⊆ G is an f-spanner of G if any two vertices u,v at distance d in G are at distance at most f(d) in H. There is clearly some trade-off between the sparsity of H and the distortion function f, though the nature of the optimal trade-off is still poorly understood. In this article, we present a simple, modular framework for constructing sparse spanners that is based on interchangeable components called connection schemes. By assembling connection schemes in different ways we can recreate the additive 2- and 6-spanners of [1999] and [2009], and give spanners whose multiplicative distortion quickly tends toward 1. Our results rival the simplicity of all previous algorithms and provide substantial improvements (up to a doubly exponential reduction in edge density) over the comparable spanners of Elkin and Peleg [2004] and Thorup and Zwick [2006].",
"Let k ≥ 2 be an integer. We show that any undirected and unweighted graph G = (V, E) on n vertices has a subgraph G' = (V, E') with O(kn^(1+1/k)) edges such that for any two vertices u, v ∈ V, if Δ_G(u, v) = d, then Δ_{G'}(u, v) = d + O(d^(1−1/(k−1))). Furthermore, we show that such subgraphs can be constructed in O(mn^(1/k)) time, where m and n are the number of edges and vertices in the original graph. We also show that it is possible to construct a weighted graph G* = (V, E*) with O(kn^(1+1/(2k−1))) edges such that for every u, v ∈ V, if Δ_G(u, v) = d, then d ≤ Δ_{G*}(u, v) ≤ d + O(d^(1−1/(k−1))). These are the first such results with additive error terms of the form o(d), i.e., additive error terms that are sublinear in the distance being approximated.",
"Let G=(V,E) be an unweighted undirected graph on n vertices. A simple argument shows that computing all distances in G with an additive one-sided error of at most 1 is as hard as Boolean matrix multiplication. Building on recent work of [SIAM J. Comput., 28 (1999), pp. 1167--1181], we describe an @math -time algorithm APASP2 for computing all distances in G with an additive one-sided error of at most 2. Algorithm APASP2 is simple, easy to implement, and faster than the fastest known matrix-multiplication algorithm. Furthermore, for every even k>2, we describe an @math -time algorithm APASPk for computing all distances in G with an additive one-sided error of at most k. We also give an @math -time algorithm @math for producing stretch 3 estimated distances in an unweighted and undirected graph on n vertices. No constant stretch factor was previously achieved in @math time. We say that a weighted graph F=(V,E') k-emulates an unweighted graph G=(V,E) if for every @math we have @math . We show that every unweighted graph on n vertices has a 2-emulator with @math edges and a 4-emulator with @math edges. These results are asymptotically tight. Finally, we show that any weighted undirected graph on n vertices has a 3-spanner with @math edges and that such a 3-spanner can be built in @math time. We also describe an @math -time algorithm for estimating all distances in a weighted undirected graph on n vertices with a stretch factor of at most 3.",
"From the Publisher: Discrete geometry investigates combinatorial properties of configurations of geometric objects. To a working mathematician or computer scientist, it offers sophisticated results and techniques of great diversity and it is a foundation for fields such as computational geometry or combinatorial optimization. This book is primarily a textbook introduction to various areas of discrete geometry. In each area, it explains several key results and methods, in an accessible and concrete manner. It also contains more advanced material in separate sections and thus it can serve as a collection of surveys in several narrower subfields. The main topics include: basics on convex sets, convex polytopes, and hyperplane arrangements; combinatorial complexity of geometric configurations; intersection patterns and transversals of convex sets; geometric Ramsey-type results; polyhedral combinatorics and high-dimensional convexity; and lastly, embeddings of finite metric spaces into normed spaces. Jiri Matousek is Professor of Computer Science at Charles University in Prague. His research has contributed to several of the considered areas and to their algorithmic applications. This is his third book.",
"",
"Thorup and Zwick showed that for any integer k≥ 1, it is possible to preprocess any positively weighted undirected graph G=(V,E), with |E|=m and |V|=n, in O(kmn @math ) expected time and construct a data structure (a (2k–1)-approximate distance oracle) of size O(kn @math ) capable of returning in O(k) time an approximation @math of the distance δ(u,v) from u to v in G that satisfies @math , for any two vertices u,v∈ V. They also presented a much slower O(kmn) time deterministic algorithm for constructing approximate distance oracle with the slightly larger size of O(kn @math log n). We present here a deterministic O(kmn @math ) time algorithm for constructing oracles of size O(kn @math ). Our deterministic algorithm is slower than the randomized one by only a logarithmic factor. Using our derandomization technique we also obtain the first deterministic linear time algorithm for constructing optimal spanners of weighted graphs. We do that by derandomizing the O(km) expected time algorithm of Baswana and Sen (ICALP’03) for constructing (2k–1)-spanners of size O(kn @math ) of weighted undirected graphs without incurring any asymptotic loss in the running time or in the size of the spanners produced.",
"",
"We study the s-sources almost shortest paths (abbreviated s-ASP) problem. Given an unweighted graph G = (V,E), and a subset S ⊆ V of s nodes, the goal is to compute almost shortest paths between all the pairs of nodes S × V. We devise an algorithm with running time O(|E|·n^ρ + s·n^(1+ζ)) for this problem that computes the paths P_{u,w} for all pairs (u,w) ∈ S × V such that the length of P_{u,w} is at most (1+ε)d_G(u,w) + β(ζ,ρ,ε), and β(ζ,ρ,ε) is constant when ζ, ρ, and ε are arbitrarily small constants. We also devise a distributed protocol for the s-ASP problem that computes the paths P_{u,w} as above, and has time and communication complexities of O(s·Diam(G) + n^(1+ζ/2)) (respectively, O(s·Diam(G)·log^3 n + n^(1+ζ/2) log n)) and O(|E|·n^ρ + s·n^(1+ζ)) (respectively, O(|E|·n^ρ + s·n^(1+ζ) + n^(1+ρ+ζ(ρ−ζ/2)/2))) in the synchronous (respectively, asynchronous) setting. Our sequential algorithm, as well as the distributed protocol, is based on a novel algorithm for constructing (1+ε, β(ζ,ρ,ε))-spanners of size O(n^(1+ζ)), developed in this article. This algorithm has running time of O(|E|·n^ρ), which is significantly faster than the previously known algorithm given in Elkin and Peleg [2001], whose running time is O(n^(2+ρ)). We also develop the first distributed protocol for constructing (1+ε,β)-spanners. The communication complexity of this protocol is near optimal."
]
} |
1607.05597 | 2949706073 | This paper studies the complexity of distributed construction of purely additive spanners in the CONGEST model. We describe algorithms for building such spanners in several cases. Because of the need to simultaneously make decisions at far apart locations, the algorithms use additional mechanisms compared to their sequential counterparts. We complement our algorithms with a lower bound on the number of rounds required for computing pairwise spanners. The standard reductions from set-disjointness and equality seem unsuitable for this task because no specific edge needs to be removed from the graph. Instead, to obtain our lower bound, we define a new communication complexity problem that reduces to computing a sparse spanner, and prove a lower bound on its communication complexity using information theory. This technique significantly extends the current toolbox used for obtaining lower bounds for the CONGEST model, and we believe it may find additional applications. | For unweighted graphs, one seeks spanners where the stretch is purely additive and, as mentioned earlier, an almost tight bound of @math is known for how sparse a purely additive spanner can be. Bollobás et al. @cite_31 were the first to study a variant of pairwise preservers called distance preservers, where the set of relevant pairs is @math , for a given parameter @math . Coppersmith and Elkin @cite_30 showed pairwise preservers of size @math and @math for any @math . For @math , the bound of @math for pairwise preservers has very recently been improved to @math by Bodwin and Williams @cite_49 . | {
"cite_N": [
"@cite_30",
"@cite_31",
"@cite_49"
],
"mid": [
"2031536548",
"2082352769",
""
],
"abstract": [
"We introduce and study the notions of pairwise and sourcewise preservers. Given an undirected N-vertex graph G = (V,E) and a set P of pairs of vertices, let G' = (V,H), H ⊆ E, be called a pairwise preserver of G with respect to P if for every pair u,w ∈ P, dist_{G'}(u,w) = dist_G(u,w). For a set S ⊆ V of sources, a pairwise preserver of G with respect to the set of all pairs P = (S choose 2) of sources is called a sourcewise preserver of G with respect to S. We prove that for every undirected possibly weighted N-vertex graph G and every set P of |P| = O(N^(1/2)) pairs of vertices of G, there exists a linear-size pairwise preserver of G with respect to P. Consequently, for every subset S ⊆ V of |S| = O(N^(1/4)) sources, there exists a linear-size sourcewise preserver of G with respect to S. On the negative side we show that neither of the two exponents (1/2 and 1/4) can be improved even when the attention is restricted to unweighted graphs. Our lower bounds involve constructions of dense convexly independent sets of vectors with small Euclidean norms. We believe that the link between the areas of discrete geometry and spanners that we establish is of independent interest and might be useful in the study of other problems in the area of low-distortion embeddings.",
"For an unweighted graph @math , @math is a subgraph if @math , and @math is a Steiner graph if @math , and for any pair of vertices @math , the distance between them in @math (denoted @math ) is at least the distance between them in @math (denoted @math ). In this paper we introduce the notion of distance preserver. A subgraph (resp., Steiner graph) @math of a graph @math is a subgraph (resp., Steiner) @math -preserver of @math if for every pair of vertices @math with @math , @math . We show that any graph (resp., digraph) has a subgraph @math -preserver with at most @math edges (resp., arcs), and there are graphs and digraphs for which any undirected Steiner @math -preserver contains @math edges. However, we show that if one allows a directed Steiner (diSteiner) @math -preserver, then these bounds can be improved. Specifically, we show that for any graph or digraph there exists a diSteiner @math -preserver with @math arcs, and that this result is tight up to a constant factor. We also study @math -preserving distance labeling schemes, that are labeling schemes that guarantee precise calculation of distances between pairs of vertices that are at a distance of at least @math one from another. We show that there exists a @math -preserving labeling scheme with labels of size @math , and that labels of size @math are required for any @math -preserving labeling scheme.",
""
]
} |
1607.05597 | 2949706073 | This paper studies the complexity of distributed construction of purely additive spanners in the CONGEST model. We describe algorithms for building such spanners in several cases. Because of the need to simultaneously make decisions at far apart locations, the algorithms use additional mechanisms compared to their sequential counterparts. We complement our algorithms with a lower bound on the number of rounds required for computing pairwise spanners. The standard reductions from set-disjointness and equality seem unsuitable for this task because no specific edge needs to be removed from the graph. Instead, to obtain our lower bound, we define a new communication complexity problem that reduces to computing a sparse spanner, and prove a lower bound on its communication complexity using information theory. This technique significantly extends the current toolbox used for obtaining lower bounds for the CONGEST model, and we believe it may find additional applications. | The problem of designing sparse pairwise spanners was first considered by @cite_27 who showed a tradeoff between the additive stretch and size of the spanner. The current sparsest pairwise spanner with purely additive stretch has size @math and additive stretch 6 @cite_26 . Woodruff @cite_54 and Abboud and Bodwin @cite_10 @cite_43 showed lower bounds for additive spanners and pairwise spanners. Parter @cite_53 showed sparse multiplicative sourcewise spanners and a lower bound of @math on the size of a sourcewise spanner with additive stretch @math , for any integer @math . | {
"cite_N": [
"@cite_26",
"@cite_54",
"@cite_53",
"@cite_43",
"@cite_27",
"@cite_10"
],
"mid": [
"2562627112",
"1607700260",
"2809995963",
"2227069303",
"2951859740",
""
],
"abstract": [
"Let G=(V,E) be an undirected unweighted graph on n vertices. A subgraph H of G is called an (all-pairs) purely additive spanner with stretch β if for every (u,v) ∈ V×V, dist_H(u,v) ≤ dist_G(u,v) + β. The problem of computing sparse spanners with small stretch β is well-studied. Here we consider the following variant: we are given P ⊆ V×V and we seek a sparse subgraph H where dist_H(u,v) ≤ dist_G(u,v) + β for each (u,v) ∈ P. That is, distances for pairs outside P need not be well-approximated in H. Such a subgraph is called a pairwise spanner with additive stretch β and our goal is to construct such subgraphs that are sparser than all-pairs spanners with the same stretch. We show sparse pairwise spanners with additive stretch 4 and with additive stretch 6. We also consider the following special cases: P = S×V and P = S×T, where S ⊆ V and T ⊆ V, and show sparser pairwise spanners for these cases.",
"We consider the problem of efficiently finding an additive C-spanner of an undirected unweighted graph G, that is, a subgraph H so that for all pairs of vertices u,v, δ_H(u,v) ≤ δ_G(u,v) + C, where δ denotes shortest path distance. It is known that for every graph G, one can find an additive 6-spanner with O(n^(4/3)) edges in O(mn^(2/3)) time. It is unknown if there exists a constant C and an additive C-spanner with o(n^(4/3)) edges. Moreover, for C ≤ 5 all known constructions require Ω(n^(3/2)) edges.",
"An (α,β)-spanner of an n-vertex graph G = (V,E) is a subgraph H of G satisfying that dist(u, v, H) ≤ α·dist(u, v, G) + β for every pair (u, v) ∈ V×V, where dist(u,v,G′) denotes the distance between u and v in G′ ⊆ G. It is known that for every integer k ≥ 1, every graph G has a polynomially constructible (2k − 1, 0)-spanner of size O(n^(1+1/k)). This size-stretch bound is essentially optimal by the girth conjecture. Yet, it is important to note that any argument based on the girth only applies to adjacent vertices. It is therefore intriguing to ask if one can “bypass” the conjecture by settling for a multiplicative stretch of 2k − 1 only for neighboring vertex pairs, while maintaining a strictly better multiplicative stretch for the rest of the pairs. We answer this question in the affirmative and introduce the notion of k-hybrid spanners, in which non-neighboring vertex pairs enjoy a multiplicative k stretch and the neighboring vertex pairs enjoy a multiplicative (2k − 1) stretch (hence, tight by the conjecture). We show that for every unweighted n-vertex graph G, there is a (polynomially constructible) k-hybrid spanner with O(k^2·n^(1+1/k)) edges. This should be compared against the current best (α,β)-spanner construction of [5] that obtains (k, k − 1) stretch with O(k·n^(1+1/k)) edges. An alternative natural approach to bypass the girth conjecture is to allow ourselves to take care only of a subset of pairs S×V for a given subset of vertices S ⊆ V referred to here as sources. Spanners in which the distances in S×V are bounded are referred to as sourcewise spanners. Several constructions for this variant are provided (e.g., multiplicative sourcewise spanners, additive sourcewise spanners and more).",
"A spanner is a sparse subgraph that approximately preserves the pairwise distances of the original graph. It is well known that there is a smooth tradeoff between the sparsity of a spanner and the quality of its approximation, so long as distance error is measured multiplicatively. A central open question in the field is to prove or disprove whether such a tradeoff exists also in the regime of additive error. That is, is it true that for all ε > 0, there is a constant k_ε such that every graph has a spanner on O(n^(1+ε)) edges that preserves its pairwise distances up to +k_ε? Previous lower bounds are consistent with a positive resolution to this question, while previous upper bounds exhibit the beginning of a tradeoff curve: all graphs have +2 spanners on O(n^(3/2)) edges, +4 spanners on O(n^(7/5)) edges, and +6 spanners on O(n^(4/3)) edges. However, progress has mysteriously halted at the n^(4/3) bound, and despite significant effort from the community, the question has remained open for all 0 < ε < 1/3. Our main result is a surprising negative resolution of the open question, even in a highly generalized setting. We show a new information theoretic incompressibility bound: there is no function that compresses graphs into O(n^(4/3−ε)) bits so that distance information can be recovered within +n^(o(1)) error. As a special case of our theorem, we get a tight lower bound on the sparsity of additive spanners: the +6 spanner on O(n^(4/3)) edges cannot be improved in the exponent, even if any subpolynomial amount of additive error is allowed. Our theorem implies new lower bounds for related objects as well; for example, the twenty-year-old +4 emulator on O(n^(4/3)) edges also cannot be improved in the exponent unless the error allowance is polynomial. Central to our construction is a new type of graph product, which we call the Obstacle Product. Intuitively, it takes two graphs G, H and produces a new graph whose shortest paths structure looks locally like H but globally like G.",
"Given an undirected @math -node unweighted graph @math , a spanner with stretch function @math is a subgraph @math such that, if two nodes are at distance @math in @math , then they are at distance at most @math in @math . Spanners are very well studied in the literature. The typical goal is to construct the sparsest possible spanner for a given stretch function. In this paper we study pairwise spanners, where we require to approximate the @math - @math distance only for pairs @math in a given set @math . Such @math -spanners were studied before [Coppersmith,Elkin'05] only in the special case that @math is the identity function, i.e. distances between relevant pairs must be preserved exactly (a.k.a. pairwise preservers). Here we present pairwise spanners which are at the same time sparser than the best known preservers (on the same @math ) and of the best known spanners (with the same @math ). In more detail, for arbitrary @math , we show that there exists a @math -spanner of size @math with @math . Alternatively, for any @math , there exists a @math -spanner of size @math with @math . We also consider the relevant special case that there is a critical set of nodes @math , and we wish to approximate either the distances within nodes in @math or from nodes in @math to any other node. We show that there exists an @math -spanner of size @math with @math , and an @math -spanner of size @math with @math . All the mentioned pairwise spanners can be constructed in polynomial time.",
""
]
} |
1607.05597 | 2949706073 | This paper studies the complexity of distributed construction of purely additive spanners in the CONGEST model. We describe algorithms for building such spanners in several cases. Because of the need to simultaneously make decisions at far apart locations, the algorithms use additional mechanisms compared to their sequential counterparts. We complement our algorithms with a lower bound on the number of rounds required for computing pairwise spanners. The standard reductions from set-disjointness and equality seem unsuitable for this task because no specific edge needs to be removed from the graph. Instead, to obtain our lower bound, we define a new communication complexity problem that reduces to computing a sparse spanner, and prove a lower bound on its communication complexity using information theory. This technique significantly extends the current toolbox used for obtaining lower bounds for the CONGEST model, and we believe it may find additional applications. | In his seminal paper, Pettie @cite_21 presents lower bounds for the number of rounds needed by distributed algorithms in order to construct several families of spanners. Specifically, it is shown that computing an all-pairs additive @math -spanner with size @math in expectation, for a constant @math , requires @math rounds of communication. Because this is an indistinguishability-based lower bound, it holds even for the less restricted LOCAL model, where message lengths can be unbounded. | {
"cite_N": [
"@cite_21"
],
"mid": [
"2032654248"
],
"abstract": [
"We present efficient algorithms for computing very sparse low distortion spanners in distributed networks and prove some non-trivial lower bounds on the trade-off between time, sparseness, and distortion. All of our algorithms assume a synchronized distributed network, where relatively short messages may be communicated in each time step. Our first result is an O(log n)^(1+o(1))-time algorithm for finding a (2^(O(log* n)) log n)-spanner with size O(n). Besides being nearly optimal in time and distortion, this algorithm appears to be the first that constructs an O(n)-size skeleton without requiring unbounded length messages or time proportional to the diameter of the network. Our second result is a new class of efficiently constructible (α,β)-spanners called Fibonacci spanners whose distortion improves with the distance being approximated. At their sparsest Fibonacci spanners can have nearly linear size O(n(log log n)^φ) where φ = (1+√5)/2 is the golden ratio. As the distance increases the Fibonacci spanner's multiplicative distortion passes through four discrete stages, moving from logarithmic to doubly logarithmic, then into a period where it is constant, tending to 3, followed by another period tending to 1. On the lower bound side we prove that many recent sequential spanner constructions have no efficient counterparts in distributed networks, even if the desired distortion only needs to be achieved on the average or for a tiny fraction of the vertices. In particular, any distance preservers, purely additive spanners, or spanners with sublinear additive distortion must either be very dense, slow to construct, or have very weak guarantees on distortion."
]
} |
1607.05547 | 2480249651 | We consider the problem of augmenting an n-vertex graph embedded in a metric space, by inserting one additional edge in order to minimize the diameter of the resulting graph. We present exact algorithms for the cases when (i) the input graph is a path, running in O(n log^3 n) time, and (ii) the input graph is a tree, running in O(n^2 log n) time. We also present an algorithm that computes a (1+ε)-approximation in O(n + 1/ε^3) time, for paths in R^d, where d is a constant. | The Diameter-Optimal @math -Augmentation Problem for edge-weighted graphs, and many of its variants, have been shown to be NP-hard @cite_16 , or even @math -hard @cite_6 @cite_18 . Because of this, several special classes of graphs have been considered. Chung and Garey @cite_13 and @cite_9 considered paths and cycles with unit edge weights and gave upper and lower bounds on the diameter that can be achieved. Ishii @cite_10 gave a constant factor approximation algorithm (approximating both @math and the diameter) for the case when the input graph is outerplanar. Erdős @cite_19 investigated upper and lower bounds for the case when the augmented graph must be triangle-free. | {
"cite_N": [
"@cite_18",
"@cite_10",
"@cite_9",
"@cite_6",
"@cite_19",
"@cite_16",
"@cite_13"
],
"mid": [
"2148324291",
"2148734039",
"2021989224",
"",
"",
"2118748668",
"2014447809"
],
"abstract": [
"The diameter of a graph is the maximum distance between any pair of vertices in the graph. The Diameter-t Augmentation problem takes as input a graph G=(V,E) and a positive integer k and asks whether there exists a set E″ of at most k new edges so that the graph G″ = (V, E ∪ E″) has diameter t. This problem is NP-hard ( 1987) [10], even in the t=2 case ( 1992) [7]. We give a parameterized reduction from Dominating Set to Diameter-t Augmentation to prove that Diameter-t Augmentation is W[2]-hard for every t.",
"Given an undirected graph G and an integer D, we consider the problem of augmenting G by a minimum set of new edges so that the diameter becomes at most D. It is known that no constant factor approximation algorithms to this problem with an arbitrary graph G can be obtained unless P = NP, while the problem with only a few graph classes such as forests is approximable within a constant factor. In this article, we give the first constant factor approximation algorithm to the problem with an outerplanar graph G. We also show that if the target diameter D is even, then the case where G is a partial 2-tree is also approximable within a constant.",
"Let f_d(G) denote the minimum number of edges that have to be added to a graph G to transform it into a graph of diameter at most d. We prove that for any graph G with maximum degree D and n > n_0(D) vertices, f_2(G) = n − D − 1 and f_3(G) ≥ n − O(D^3). For d ≥ 4, f_d(G) depends strongly on the actual structure of G, not only on the maximum degree of G. We prove that the maximum of f_d(G) over all connected graphs on n vertices is n/⌊d/2⌋ − O(1). As a byproduct, we show that for the n-cycle C_n, f_d(C_n) = n/(2⌊d/2⌋ − 1) − O(1) for every d and n, improving earlier estimates of Chung and Garey in certain ranges. © 2000 John Wiley & Sons, Inc. J Graph Theory 35: 161–172, 2000 1991 Mathematics Subject classification: 05C12.",
"",
"",
"We consider the following problem: Given positive integers k and D, what is the maximum diameter of the graph obtained by deleting k edges from a graph G with diameter D, assuming that the resulting graph is still connected? (The diameter of a graph is the maximum, over all pairs of vertices, of the length of the shortest path joining those vertices.) For undirected graphs G we prove an upper bound of (k + 1)D and a lower bound of (k + 1)D − k for even D and of (k + 1)D − 2k + 2 for odd D ⩾ 3. For the special cases of k = 2 and k = 3, we derive the exact bounds of 3D − 1 and 4D − 2, respectively. For D = 2 we prove exact bounds of k + 2 and k + 3, for k ⩽ 4 and k = 6, and k = 5 and k ⩾ 7, respectively. For the special case of D = 1 we derive an exact bound on the resulting maximum diameter of order Θ(√k). For directed graphs G, the bounds depend strongly on D: for D = 1 and D = 2 we derive exact bounds of Θ(√k) and of 2k + 2, respectively, while for D ⩾ 3 the resulting diameter is in general unbounded in terms of k and D. Finally, we prove several related problems NP-complete.",
"The main question addressed in this article is the following: If t edges are removed from a (t + 1)-edge-connected graph G having diameter D, how large can the diameter of the resulting graph be? (The diameter of a graph is the maximum, over all pairs of vertices, of the length of the shortest path joining those vertices.) We provide bounds on this value that imply that the maximum possible diameter of the resulting graph, for large D and fixed t, is essentially (t + 1)·D. The bulk of the proof consists of showing that, if t edges are added to an n-vertex path P_n, then the diameter of the resulting graph is at least (n/(t + 1)) − 1. Using a similar proof, we also show that if t edges are added to an n-vertex cycle C_n, then the least possible diameter of the resulting graph is (for large n) essentially n/(t + 2) when t is even and n/(t + 1) when t is odd. Examples are given in all these cases to show that there exist graphs for which the bounds are achieved. We also give results for the corresponding vertex deletion problem for general graphs. Such results are of interest, for example, when studying the potential effects of node or link failures on the performance of a communication network, especially for networks in which the maximum time-delay or signal degradation is directly related to the diameter of the network."
]
} |
1607.05547 | 2480249651 | We consider the problem of augmenting an n-vertex graph embedded in a metric space, by inserting one additional edge in order to minimize the diameter of the resulting graph. We present exact algorithms for the cases when (i) the input graph is a path, running in O(n ^3 n) time, and (ii) the input graph is a tree, running in O(n^2 n) time. We also present an algorithm that computes a (1+ )-approximation in O(n + 1 ^3) time, for paths in R^d, where d is a constant. | In the geometric setting, when the input is a geometric graph embedded in the Euclidean plane, there are only a few results on graph augmentation in general. Rutter and Wolff @cite_0 proved that the @math -connectivity and @math -edge-connectivity augmentation problems are NP-hard on plane geometric graphs, for @math , and @math ; the problem is infeasible for @math because every planar graph has a vertex of degree at most 5. Currently, there are no known approximation algorithms for this problem. @cite_17 gave approximation algorithms for the problem of adding one edge to a geometric graph while minimizing the dilation. There were several follow-up papers @cite_11 @cite_7 , but there is still no non-trivial result known for the case when @math . | {
"cite_N": [
"@cite_0",
"@cite_11",
"@cite_7",
"@cite_17"
],
"mid": [
"",
"1584607632",
"2126150987",
"2071776560"
],
"abstract": [
"",
"Given a graph embedded in a metric space, its dilation is the maximum over all distinct pairs of vertices of the ratio between their distance in the graph and the metric distance between them. Given such a graph G with n vertices and m edges and consisting of at most two connected components, we consider the problem of augmenting G with an edge such that the resulting graph has minimum dilation. We show that we can find such an edge in @math time using linear space which solves an open problem of whether a linear-space algorithm with o(n 4) running time exists. We show that O(n 2logn) time is achievable if G is a simple path or the union of two vertex-disjoint simple paths. Finally, we show how to find an edge that maximizes the dilation of the resulting graph in O(n 3) time with O(n 2) space and in O(n 3logn) time with linear space.",
"Let G=(V,E) be an undirected graph with n vertices embedded in a metric space. We consider the problem of adding a shortcut edge in G that minimizes the dilation of the resulting graph. The fastest algorithm to date for this problem has O(n^4) running time and uses O(n^2) space. We show how to improve the running time to O(n^3logn) while maintaining quadratic space requirement. In fact, our algorithm not only determines the best shortcut but computes the dilation of [email protected]? (u,v) for every pair of distinct vertices u and v.",
"Given a Euclidean graph @math in @math with @math vertices and @math edges, we consider the problem of adding an edge to @math such that the stretch factor of the resulting graph is minimized. Currently, the fastest algorithm for computing the stretch factor of a graph with positive edge weights runs in @math @math time, resulting in a trivial @math @math -time algorithm for computing the optimal edge. First, we show that a simple modification yields the optimal solution in @math @math time using @math @math space. To reduce the running time we consider several approximation algorithms."
]
} |
1607.05809 | 2495059805 | Neural conversational models tend to produce generic or safe responses in different contexts, e.g., reply to narrative statements or to questions. In this paper, we propose an end-to-end approach to avoid such problems in neural generative models. Additional memory mechanisms have been introduced to standard sequence-to-sequence (seq2seq) models, so that context can be considered while generating sentences. Three seq2seq models, which memorize a fixed-size contextual vector from hidden input, hidden input output and a gated contextual attention structure respectively, have been trained and tested on a dataset of labeled question-answering pairs in Chinese. The model with contextual attention outperforms others including the state-of-the-art seq2seq models on a perplexity test. The novel contextual model generates diverse and robust responses, and is able to carry out conversations on a wide range of topics appropriately. | Natural language conversation has been a popular topic in the field of natural language processing. In different practical scenarios, conversations are reduced to some traditional NLP tasks, e.g., question-answering, information retrieval and dialogue management. Recently, neural network-based generative models have been applied to generate responses conversationally, since these models capture deeper semantic and contextual relevancy. With the help of user-generated content from Twitter and cQA websites, these conversational corpora have become good resources of large-scale training data @cite_9 @cite_4 . Following this strategy, researchers have started to solve more challenging tasks, such as dynamic contexts @cite_9 , discourse structures with attention and intention @cite_2 , and response diversity by maximizing mutual information @cite_8 . | {
"cite_N": [
"@cite_8",
"@cite_9",
"@cite_4",
"@cite_2"
],
"mid": [
"1958706068",
"2951580200",
"889023230",
"1847211030"
],
"abstract": [
"Sequence-to-sequence neural network models for generation of conversational responses tend to generate safe, commonplace responses (e.g., \"I don't know\") regardless of the input. We suggest that the traditional objective function, i.e., the likelihood of output (response) given input (message) is unsuited to response generation tasks. Instead we propose using Maximum Mutual Information (MMI) as the objective function in neural models. Experimental results demonstrate that the proposed MMI models produce more diverse, interesting, and appropriate responses, yielding substantive gains in BLEU scores on two conversational datasets and in human evaluations.",
"We present a novel response generation system that can be trained end to end on large quantities of unstructured Twitter conversations. A neural network architecture is used to address sparsity issues that arise when integrating contextual information into classic statistical models, allowing the system to take into account previous dialog utterances. Our dynamic-context generative models show consistent gains over both context-sensitive and non-context-sensitive Machine Translation and Information Retrieval baselines.",
"We investigate the task of building open domain, conversational dialogue systems based on large dialogue corpora using generative models. Generative models produce system responses that are autonomously generated word-by-word, opening up the possibility for realistic, flexible interactions. In support of this goal, we extend the recently proposed hierarchical recurrent encoder-decoder neural network to the dialogue domain, and demonstrate that this model is competitive with state-of-the-art neural language models and back-off n-gram models. We investigate the limitations of this and similar approaches, and show how its performance can be improved by bootstrapping the learning from a larger question-answer pair corpus and from pretrained word embeddings.",
"In a conversation or a dialogue process, attention and intention play intrinsic roles. This paper proposes a neural network based approach that models the attention and intention processes. It essentially consists of three recurrent networks. The encoder network is a word-level model representing source side sentences. The intention network is a recurrent network that models the dynamics of the intention process. The decoder network is a recurrent network produces responses to the input from the source side. It is a language model that is dependent on the intention and has an attention mechanism to attend to particular source side words, when predicting a symbol in the response. The model is trained end-to-end without labeling data. Experiments show that this model generates natural responses to user inputs."
]
} |
1607.05635 | 2476237970 | A natural way to measure the power of a distributed-computing model is to characterize the set of tasks that can be solved in it. the model. In general, however, the question of whether a given task can be solved in a given model is undecidable, even if we only consider the wait-free shared-memory model. In this paper, we address this question for restricted classes of models and tasks. We show that the question of whether a collection @math of objects, for various @math (the number of processes that can invoke the object) and @math (the number of distinct outputs the object returns), can be used by @math processes to solve wait-free @math -set consensus is decidable. Moreover, we provide a simple @math decision algorithm, based on a dynamic programming solution to the Knapsack optimization problem. We then present an wait-free set-consensus algorithm that, for each set of participating processes, achieves the best level of agreement that is possible to achieve using @math . Overall, this gives us a complete characterization of a read-write model defined by a collection of set-consensus objects through its . | Our algorithm computing the power of a set-consensus collection in @math steps (for a system of @math processes) is inspired by the dynamic programming solution to the Knapsack optimization problem described, e.g., in [Chap. 5] knapsack . Herlihy @cite_8 introduced the notion of of a given object type, i.e., the maximum number of processes that can solve consensus using instances of the type and read-write registers. It has been shown that @math -process consensus objects have consensus power @math . However, the corresponding consensus hierarchy is in general not robust, i.e., there exist object types, each of consensus number @math which, combined together, can be used to solve @math -process consensus @cite_17 . 
Besides, objects of the same consensus number @math may not be equivalent in a system of more than @math processes @cite_20 . | {
"cite_N": [
"@cite_17",
"@cite_20",
"@cite_8"
],
"mid": [
"2091404845",
"2478076217",
""
],
"abstract": [
"A wait-free hierarchy ACM Transactions on Programming Languages and Systems, 11 (1991), pp. 124--149; Proceedings of the 12th ACM Symposium on Principles of Distributed Computing, 1993, pp. 145--158] classifies object types on the basis of their strength in supporting wait-free implementations of other types. Such a hierarchy is robust if it is impossible to implement objects of types that it classifies as \"strong\" by combining objects of types that it classifies as \"weak.\" We prove that if nondeterministic types are allowed, the only wait-free hierarchy that is robust is the trivial one, which lumps all types into a single level. In particular, the consensus hierarchy (the most closely studied wait-free hierarchy) is not robust. Our result implies that, in general, it is not possible to determine the power of a concurrent system that supports a given set of primitive object types by reasoning about the power of each primitive type in isolation.",
"For all integers m ≥ 2, we construct an infinite sequence of deterministic objects of consensus number m with strictly increasing computational power. In particular, this refutes the Common2 Conjecture, which claimed that every deterministic object of consensus number 2 has a deterministic, wait-free implementation from 2-consensus objects and registers in a system with any finite number of processes.",
""
]
} |
1607.05635 | 2476237970 | A natural way to measure the power of a distributed-computing model is to characterize the set of tasks that can be solved in it. the model. In general, however, the question of whether a given task can be solved in a given model is undecidable, even if we only consider the wait-free shared-memory model. In this paper, we address this question for restricted classes of models and tasks. We show that the question of whether a collection @math of objects, for various @math (the number of processes that can invoke the object) and @math (the number of distinct outputs the object returns), can be used by @math processes to solve wait-free @math -set consensus is decidable. Moreover, we provide a simple @math decision algorithm, based on a dynamic programming solution to the Knapsack optimization problem. We then present an wait-free set-consensus algorithm that, for each set of participating processes, achieves the best level of agreement that is possible to achieve using @math . Overall, this gives us a complete characterization of a read-write model defined by a collection of set-consensus objects through its . | Gafni and Koutsoupias @cite_15 and Herlihy and Rajsbaum @cite_2 showed that wait-free solvability of tasks for @math or more processes using registers is an undecidable question. We show that in a special case of solving set consensus using a set-consensus collection, the question is decidable. Moreover, we give an explicit polynomial algorithm for computing the power of a set-consensus collection. | {
"cite_N": [
"@cite_15",
"@cite_2"
],
"mid": [
"2042928046",
"2072035175"
],
"abstract": [
"We show that no algorithm exists for deciding whether a finite task for three or more processors is wait-free solvable in the asynchronous read-write shared-memory model. This impossibility result implies that there is no constructive (recursive) characterization of wait-free solvable tasks. It also applies to other shared-memory models of distributed computing, such as the comparison-based model.",
"A task is a distributed coordination problem in which each process starts with a private input value taken from a tlnite set, communicates with the other processes by applying operations to shared objects, and eventually halts with a private output value, also taken from a finite set. A protocol is a distributed program that solves a task. A protocol is t-resikent if it tolerates failures by t or fewer processes. A task is solvable in a given model of computation if it has a t-resilientprotocol in that model. A set of tasks is decidable in a given model of computation if there exists an effective procedure for deciding whether any task in that set has a t-resilient protocol. This paper gives the first necessary and sufficient conditions for task decidability in a range of different models and resilience levels. We prove undecidability by exploiting classical decidabilit y results from algebraic topology, and we prove decidability by explicit construction."
]
} |
1607.05369 | 2486219732 | Person re-identification (ReID) focuses on identifying people across different scenes in video surveillance, which is usually formulated as a binary classification task or a ranking task in current person ReID approaches. In this paper, we take both tasks into account and propose a multi-task deep network (MTDnet) that makes use of their own advantages and jointly optimizes the two tasks simultaneously for person ReID. To the best of our knowledge, we are the first to integrate both tasks in one network to solve person ReID. We show that our proposed architecture significantly boosts the performance. Furthermore, deep architecture in general requires a sufficient dataset for training, which is usually not met in person ReID. To cope with this situation, we further extend the MTDnet and propose a cross-domain architecture that is capable of using an auxiliary set to assist training on small target sets. In the experiments, our approach outperforms most existing person ReID algorithms on representative datasets including CUHK03, CUHK01, VIPeR, iLIDS and PRID2011, which clearly demonstrates the effectiveness of the proposed approach. | Most existing methods in person ReID focus on either feature extraction @cite_31 @cite_0 @cite_36 , or similarity measurement @cite_16 @cite_39 @cite_38 . Person image descriptors commonly used include color histogram @cite_7 @cite_16 @cite_13 , local binary patterns @cite_7 , Gabor features @cite_16 , etc., which show certain robustness to the variations of poses, illumination and viewpoints. For similarity measurement, many metric learning approaches are proposed to learn a suitable metric, such as locally adaptive decision functions @cite_8 , local fisher discriminant analysis @cite_26 , cross-view quadratic discriminant analysis @cite_18 , etc. A few of them @cite_13 @cite_2 learn a combination of multiple metrics. 
However, manually crafting features and metrics requires empirical knowledge, and the results are usually not optimal for coping with large intra-person variations. | {
"cite_N": [
"@cite_38",
"@cite_18",
"@cite_26",
"@cite_7",
"@cite_8",
"@cite_36",
"@cite_39",
"@cite_0",
"@cite_2",
"@cite_31",
"@cite_16",
"@cite_13"
],
"mid": [
"",
"1949591461",
"",
"2068042582",
"",
"",
"1709635438",
"2203864774",
"2310919327",
"",
"2047632871",
"166429404"
],
"abstract": [
"",
"Person re-identification is an important technique towards automatic search of a person's presence in a surveillance video. Two fundamental problems are critical for person re-identification, feature representation and metric learning. An effective feature representation should be robust to illumination and viewpoint changes, and a discriminant metric should be learned to match various person images. In this paper, we propose an effective feature representation called Local Maximal Occurrence (LOMO), and a subspace and metric learning method called Cross-view Quadratic Discriminant Analysis (XQDA). The LOMO feature analyzes the horizontal occurrence of local features, and maximizes the occurrence to make a stable representation against viewpoint changes. Besides, to handle illumination variations, we apply the Retinex transform and a scale invariant texture operator. To learn a discriminant metric, we propose to learn a discriminant low dimensional subspace by cross-view quadratic discriminant analysis, and simultaneously, a QDA metric is learned on the derived subspace. We also present a practical computation method for XQDA, as well as its regularization. Experiments on four challenging person re-identification databases, VIPeR, QMUL GRID, CUHK Campus, and CUHK03, show that the proposed method improves the state-of-the-art rank-1 identification rates by 2.2 , 4.88 , 28.91 , and 31.55 on the four databases, respectively.",
"",
"In this paper, we raise important issues on scalability and the required degree of supervision of existing Mahalanobis metric learning methods. Often rather tedious optimization procedures are applied that become computationally intractable on a large scale. Further, if one considers the constantly growing amount of data it is often infeasible to specify fully supervised labels for all data points. Instead, it is easier to specify labels in form of equivalence constraints. We introduce a simple though effective strategy to learn a distance metric from equivalence constraints, based on a statistical inference perspective. In contrast to existing methods we do not rely on complex optimization problems requiring computationally expensive iterations. Hence, our method is orders of magnitudes faster than comparable methods. Results on a variety of challenging benchmarks with rather diverse nature demonstrate the power of our method. These include faces in unconstrained environments, matching before unseen object instances and person re-identification across spatially disjoint cameras. In the latter two benchmarks we clearly outperform the state-of-the-art.",
"",
"",
"This paper addresses the problem of handling spatial misalignments due to camera-view changes or human-pose variations in person re-identification. We first introduce a boosting-based approach to learn a correspondence structure which indicates the patch-wise matching probabilities between images from a target camera pair. The learned correspondence structure can not only capture the spatial correspondence pattern between cameras but also handle the viewpoint or human-pose variation in individual images. We further introduce a global-based matching process. It integrates a global matching constraint over the learned correspondence structure to exclude cross-view misalignments during the image patch matching process, hence achieving a more reliable matching score between images. Experimental results on various datasets demonstrate the effectiveness of our approach.",
"We propose a novel Multi-Task Learning with Low Rank Attribute Embedding (MTL-LORAE) framework for person re-identification. Re-identifications from multiple cameras are regarded as related tasks to exploit shared information to improve re-identification accuracy. Both low level features and semantic data-driven attributes are utilized. Since attributes are generally correlated, we introduce a low rank attribute embedding into the MTL formulation to embed original binary attributes to a continuous attribute space, where incorrect and incomplete attributes are rectified and recovered to better describe people. The learning objective function consists of a quadratic loss regarding class labels and an attribute embedding error, which is solved by an alternating optimization procedure. Experiments on three person re-identification datasets have demonstrated that MTL-LORAE outperforms existing approaches by a large margin and produces state-of-the-art results.",
"",
"",
"In this paper, we propose a new approach for matching images observed in different camera views with complex cross-view transforms and apply it to person re-identification. It jointly partitions the image spaces of two camera views into different configurations according to the similarity of cross-view transforms. The visual features of an image pair from different views are first locally aligned by being projected to a common feature space and then matched with softly assigned metrics which are locally optimized. The features optimal for recognizing identities are different from those for clustering cross-view transforms. They are jointly learned by utilizing sparsity-inducing norm and information theoretical regularization. This approach can be generalized to the settings where test images are from new camera views, not the same as those in the training set. Extensive experiments are conducted on public datasets and our own dataset. Comparisons with the state-of-the-art metric learning and person re-identification methods show the superior performance of our approach.",
"Re-identification of individuals across camera networks with limited or no overlapping fields of view remains challenging in spite of significant research efforts. In this paper, we propose the use, and extensively evaluate the performance, of four alternatives for re-ID classification: regularized Pairwise Constrained Component Analysis, kernel Local Fisher Discriminant Analysis, Marginal Fisher Analysis and a ranking ensemble voting scheme, used in conjunction with different sizes of sets of histogram-based features and linear, χ 2 and RBF-χ 2 kernels. Comparisons against the state-of-art show significant improvements in performance measured both in terms of Cumulative Match Characteristic curves (CMC) and Proportion of Uncertainty Removed (PUR) scores on the challenging VIPeR, iLIDS, CAVIAR and 3DPeS datasets."
]
} |
1607.05818 | 2501642273 | Advances in topic modeling have yielded effective methods for characterizing the latent semantics of textual data. However, applying standard topic modeling approaches to sentence-level tasks introduces a number of challenges. In this paper, we adapt the approach of latent-Dirichlet allocation to include an additional layer for incorporating information about the sentence boundaries in documents. We show that the addition of this minimal information of document structure improves the perplexity results of a trained model. | This work was inspired by a number of efforts for extending the standard LDA model. Our work differs from Pachinko allocation model @cite_5 , nested Chinese restaurant process @cite_8 , and mixture network @cite_6 , each of which allows an arbitrary number of sub-document units and an arbitrary number of dependency links in between to model topic correlations. Instead, our model adheres strictly to the sentence boundaries to define document structure. Our approach is most similar to that of the latent-Dirichlet co-clustering model @cite_2 . Our work differs in that it utilizes multiple sentence-level LDA machines to better account for the contribution of sentences to the topics of documents. | {
"cite_N": [
"@cite_5",
"@cite_2",
"@cite_6",
"@cite_8"
],
"mid": [
"2106490775",
"",
"1532908297",
"2132827946"
],
"abstract": [
"Latent Dirichlet allocation (LDA) and other related topic models are increasingly popular tools for summarization and manifold discovery in discrete data. However, LDA does not capture correlations between topics. In this paper, we introduce the pachinko allocation model (PAM), which captures arbitrary, nested, and possibly sparse correlations between topics using a directed acyclic graph (DAG). The leaves of the DAG represent individual words in the vocabulary, while each interior node represents a correlation among its children, which may be words or other interior nodes (topics). PAM provides a flexible alternative to recent work by Blei and Lafferty (2006), which captures correlations only between pairs of topics. Using text data from newsgroups, historic NIPS proceedings and other research paper corpora, we show improved performance of PAM in document classification, likelihood of held-out data, the ability to support finer-grained topics, and topical keyword coherence.",
"",
"This article contributes a generic model of topic models. To define the problem space, general characteristics for this class of models are derived, which give rise to a representation of topic models as \"mixture networks\", a domain-specific compact alternative to Bayesian networks. Besides illustrating the interconnection of mixtures in topic models, the benefit of this representation is its straight-forward mapping to inference equations and algorithms, which is shown with the derivation and implementation of a generic Gibbs sampling algorithm.",
"We address the problem of learning topic hierarchies from data. The model selection problem in this domain is daunting—which of the large collection of possible trees to use? We take a Bayesian approach, generating an appropriate prior via a distribution on partitions that we refer to as the nested Chinese restaurant process. This nonparametric prior allows arbitrarily large branching factors and readily accommodates growing data collections. We build a hierarchical topic model by combining this prior with a likelihood that is based on a hierarchical variant of latent Dirichlet allocation. We illustrate our approach on simulated data and with an application to the modeling of NIPS abstracts."
]
} |
1607.05695 | 2504204199 | High-quality 3D object recognition is an important component of many vision and robotics systems. We tackle the object recognition problem using two data representations, to achieve leading results on the Princeton ModelNet challenge. The two representations: 1. Volumetric representation: the 3D object is discretized spatially as binary voxels - @math if the voxel is occupied and @math otherwise. 2. Pixel representation: the 3D object is represented as a set of projected 2D pixel images. Current leading submissions to the ModelNet Challenge use Convolutional Neural Networks (CNNs) on pixel representations. However, we diverge from this trend and additionally, use Volumetric CNNs to bridge the gap between the efficiency of the above two representations. We combine both representations and exploit them to learn new features, which yield a significantly better classifier than using either of the representations in isolation. To do this, we introduce new Volumetric CNN (V-CNN) architectures. | * Shape descriptors A large body of literature in the computer vision and graphics research community has been devoted to designing shape descriptors for 3D objects. Depending on data representations used to describe these 3D models, there has been work on shape descriptors for voxel representations and point cloud representation, among many others. In the past, shapes have been represented as histograms or bag of features models which were constructed using surface normals and surface curvatures @cite_5 . Other shape descriptors include the Light Field Descriptor @cite_23 , Heat kernel signatures @cite_10 @cite_18 and SPH @cite_12 . Classification of 3D objects has been proposed using hand-crafted features along with a machine learning classifier in @cite_7 , @cite_25 and @cite_20 . However, more recently, the focus of research has also included finding better ways to represent 3D data. In a way, better representation has enabled better classification. 
The creators of the Princeton ModelNet dataset have proposed a volumetric representation of the 3D model and a 3D Volumetric CNN to classify them @cite_8 . | {
"cite_N": [
"@cite_18",
"@cite_7",
"@cite_8",
"@cite_23",
"@cite_5",
"@cite_20",
"@cite_10",
"@cite_25",
"@cite_12"
],
"mid": [
"2059917035",
"",
"2951755740",
"2021122545",
"",
"2053939793",
"2036163530",
"",
"1561952261"
],
"abstract": [
"In this work, we present intrinsic shape context (ISC) descriptors for 3D shapes. We generalize to surfaces the polar sampling of the image domain used in shape contexts: for this purpose, we chart the surface by shooting geodesic outwards from the point being analyzed; ‘angle’ is treated as tantamount to geodesic shooting direction, and radius as geodesic distance. To deal with orientation ambiguity, we exploit properties of the Fourier transform. Our charting method is intrinsic, i.e., invariant to isometric shape transformations. The resulting descriptor is a meta-descriptor that can be applied to any photometric or geometric property field defined on the shape, in particular, we can leverage recent developments in intrinsic shape analysis and construct ISC based on state-of-the-art dense shape descriptors such as heat kernel signatures. Our experiments demonstrate a notable improvement in shape matching on standard benchmarks.",
"",
"3D shape is a crucial but heavily underutilized cue in today's computer vision systems, mostly due to the lack of a good generic shape representation. With the recent availability of inexpensive 2.5D depth sensors (e.g. Microsoft Kinect), it is becoming increasingly important to have a powerful 3D shape representation in the loop. Apart from category recognition, recovering full 3D shapes from view-based 2.5D depth maps is also a critical part of visual understanding. To this end, we propose to represent a geometric 3D shape as a probability distribution of binary variables on a 3D voxel grid, using a Convolutional Deep Belief Network. Our model, 3D ShapeNets, learns the distribution of complex 3D shapes across different object categories and arbitrary poses from raw CAD data, and discovers hierarchical compositional part representations automatically. It naturally supports joint object recognition and shape completion from 2.5D depth maps, and it enables active object recognition through view planning. To train our 3D deep learning model, we construct ModelNet -- a large-scale 3D CAD model dataset. Extensive experiments show that our 3D deep representation enables significant performance improvement over the-state-of-the-arts in a variety of tasks.",
"A large number of 3D models are created and available on the Web, since more and more 3D modelling and digitizing tools are developed for ever increasing applications. The techniques for content-based 3D model retrieval then become necessary. In this paper, a visual similarity-based 3D model retrieval system is proposed. This approach measures the similarity among 3D models by visual similarity, and the main idea is that if two 3D models are similar, they also look similar from all viewing angles. Therefore, one hundred orthogonal projections of an object, excluding symmetry, are encoded both by Zernike moments and Fourier descriptors as features for later retrieval. The visual similarity-based approach is robust against similarity transformation, noise, model degeneracy etc., and provides 42 , 94 and 25 better performance (precision-recall evaluation diagram) than three other competing approaches: (1)the spherical harmonics approach developed by , (2)the MPEG-7 Shape 3D descriptors, and (3)the MPEG-7 Multiple View Descriptor. The proposed system is on the Web for practical trial use (http: 3d.csie.ntu.edu.tw), and the database contains more than 10,000 publicly available 3D models collected from WWW pages. Furthermore, a user friendly interface is provided to retrieve 3D models by drawing 2D shapes. The retrieval is fast enough on a server with Pentium IV 2.4GHz CPU, and it takes about 2 seconds and 0.1 seconds for querying directly by a 3D model and by hand drawn 2D shapes, respectively.",
"",
"This is a primer on extended Gaussian images. Extended Gaussian images are useful for representing the shapes of surfaces. They can be computed easily from: 1. needle maps obtained using photometric stereo; or 2. depth maps generated by ranging devices or binocular stereo. Importantly, they can also be determined simply from geometric models of the objects. Extended Gaussian images can be of use in at least two of the tasks facing a machine vision system: 1. recognition, and 2. determining the attitude in space of an object. Here, the extended Gaussian image is defined and some of its properties discussed. An elaboration for nonconvex objects is presented and several examples are shown.",
"The computer vision and pattern recognition communities have recently witnessed a surge of feature-based methods in object recognition and image retrieval applications. These methods allow representing images as collections of “visual words” and treat them using text search approaches following the “bag of features” paradigm. In this article, we explore analogous approaches in the 3D world applied to the problem of nonrigid shape retrieval in large databases. Using multiscale diffusion heat kernels as “geometric words,” we construct compact and informative shape descriptors by means of the “bag of features” approach. We also show that considering pairs of “geometric words” (“geometric expressions”) allows creating spatially sensitive bags of features with better discriminative power. Finally, adopting metric learning approaches, we show that shapes can be efficiently represented as binary codes. Our approach achieves state-of-the-art results on the SHREC 2010 large-scale shape retrieval benchmark.",
"",
"One of the challenges in 3D shape matching arises from the fact that in many applications, models should be considered to be the same if they differ by a rotation. Consequently, when comparing two models, a similarity metric implicitly provides the measure of similarity at the optimal alignment. Explicitly solving for the optimal alignment is usually impractical. So, two general methods have been proposed for addressing this issue: (1) Every model is represented using rotation invariant descriptors. (2) Every model is described by a rotation dependent descriptor that is aligned into a canonical coordinate system defined by the model. In this paper, we describe the limitations of canonical alignment and discuss an alternate method, based on spherical harmonics, for obtaining rotation invariant representations. We describe the properties of this tool and show how it can be applied to a number of existing, orientation dependent descriptors to improve their matching performance. The advantages of this tool are two-fold: First, it improves the matching performance of many descriptors. Second, it reduces the dimensionality of the descriptor, providing a more compact representation, which in turn makes comparing two models more efficient."
]
} |
1607.05695 | 2504204199 | High-quality 3D object recognition is an important component of many vision and robotics systems. We tackle the object recognition problem using two data representations, to achieve leading results on the Princeton ModelNet challenge. The two representations: 1. Volumetric representation: the 3D object is discretized spatially as binary voxels - @math if the voxel is occupied and @math otherwise. 2. Pixel representation: the 3D object is represented as a set of projected 2D pixel images. Current leading submissions to the ModelNet Challenge use Convolutional Neural Networks (CNNs) on pixel representations. However, we diverge from this trend and additionally, use Volumetric CNNs to bridge the gap between the efficiency of the above two representations. We combine both representations and exploit them to learn new features, which yield a significantly better classifier than using either of the representations in isolation. To do this, we introduce new Volumetric CNN (V-CNN) architectures. | With recent improvements in the field of deep learning, Convolutional Neural Networks (CNNs) have been widely and successfully used on 2D RGB images for a variety of tasks in computer vision, such as image classification @cite_29 @cite_22 , object detection, semantic segmentation @cite_19 @cite_4 and scene recognition @cite_13 . | {
"cite_N": [
"@cite_4",
"@cite_22",
"@cite_29",
"@cite_19",
"@cite_13"
],
"mid": [
"",
"1686810756",
"2953360861",
"2102605133",
"2022508996"
],
"abstract": [
"",
"In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.",
"We evaluate whether features extracted from the activation of a deep convolutional network trained in a fully supervised fashion on a large, fixed set of object recognition tasks can be re-purposed to novel generic tasks. Our generic tasks may differ significantly from the originally trained tasks and there may be insufficient labeled or unlabeled data to conventionally train or adapt a deep architecture to the new tasks. We investigate and visualize the semantic clustering of deep convolutional features with respect to a variety of such tasks, including scene recognition, domain adaptation, and fine-grained recognition challenges. We compare the efficacy of relying on various network levels to define a fixed feature, and report novel results that significantly outperform the state-of-the-art on several important vision challenges. We are releasing DeCAF, an open-source implementation of these deep convolutional activation features, along with all associated network parameters to enable vision researchers to be able to conduct experimentation with deep representations across a range of visual concept learning paradigms.",
"Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn.",
"Scene labeling consists of labeling each pixel in an image with the category of the object it belongs to. We propose a method that uses a multiscale convolutional network trained from raw pixels to extract dense feature vectors that encode regions of multiple sizes centered on each pixel. The method alleviates the need for engineered features, and produces a powerful representation that captures texture, shape, and contextual information. We report results using multiple postprocessing methods to produce the final labeling. Among those, we propose a technique to automatically retrieve, from a pool of segmentation components, an optimal set of components that best explain the scene; these components are arbitrary, for example, they can be taken from a segmentation tree or from any family of oversegmentations. The system yields record accuracies on the SIFT Flow dataset (33 classes) and the Barcelona dataset (170 classes) and near-record accuracy on Stanford background dataset (eight classes), while being an order of magnitude faster than competing approaches, producing a 320×240 image labeling in less than a second, including feature extraction."
]
} |
1607.05695 | 2504204199 | High-quality 3D object recognition is an important component of many vision and robotics systems. We tackle the object recognition problem using two data representations, to achieve leading results on the Princeton ModelNet challenge. The two representations: 1. Volumetric representation: the 3D object is discretized spatially as binary voxels - @math if the voxel is occupied and @math otherwise. 2. Pixel representation: the 3D object is represented as a set of projected 2D pixel images. Current leading submissions to the ModelNet Challenge use Convolutional Neural Networks (CNNs) on pixel representations. However, we diverge from this trend and additionally, use Volumetric CNNs to bridge the gap between the efficiency of the above two representations. We combine both representations and exploit them to learn new features, which yield a significantly better classifier than using either of the representations in isolation. To do this, we introduce new Volumetric CNN (V-CNN) architectures. | More recently, they have also been used to perform classification and retrieval of 3D CAD models @cite_8 @cite_24 @cite_21 @cite_28 @cite_0 @cite_14 . CNNs not only allow for end-to-end training, but also perform automated feature learning. The features learned through CNNs generalize well to other datasets, sometimes containing very different categories of images. In particular, the distributed representation of basic features in different layers and different neurons means that there are a huge number of ways to aggregate this information in order to accomplish a task like classification or retrieval. It is also known that the features learned by training on a large 2D RGB image dataset like ImageNet generalize well, even to images not belonging to the original set of target classes. This is in contrast to handcrafted features, which do not necessarily generalize well to other domains or categories of images. | {
"cite_N": [
"@cite_14",
"@cite_8",
"@cite_28",
"@cite_21",
"@cite_24",
"@cite_0"
],
"mid": [
"",
"2951755740",
"192761727",
"1629010235",
"2211722331",
"2400418317"
],
"abstract": [
"",
"3D shape is a crucial but heavily underutilized cue in today's computer vision systems, mostly due to the lack of a good generic shape representation. With the recent availability of inexpensive 2.5D depth sensors (e.g. Microsoft Kinect), it is becoming increasingly important to have a powerful 3D shape representation in the loop. Apart from category recognition, recovering full 3D shapes from view-based 2.5D depth maps is also a critical part of visual understanding. To this end, we propose to represent a geometric 3D shape as a probability distribution of binary variables on a 3D voxel grid, using a Convolutional Deep Belief Network. Our model, 3D ShapeNets, learns the distribution of complex 3D shapes across different object categories and arbitrary poses from raw CAD data, and discovers hierarchical compositional part representations automatically. It naturally supports joint object recognition and shape completion from 2.5D depth maps, and it enables active object recognition through view planning. To train our 3D deep learning model, we construct ModelNet -- a large-scale 3D CAD model dataset. Extensive experiments show that our 3D deep representation enables significant performance improvement over the-state-of-the-arts in a variety of tasks.",
"",
"This letter introduces a robust representation of 3-D shapes, named DeepPano, learned with deep convolutional neural networks (CNN). Firstly, each 3-D shape is converted into a panoramic view, namely a cylinder projection around its principle axis. Then, a variant of CNN is specifically designed for learning the deep representations directly from such views. Different from typical CNN, a row-wise max-pooling layer is inserted between the convolution and fully-connected layers, making the learned representations invariant to the rotation around a principle axis. Our approach achieves state-of-the-art retrieval classification results on two large-scale 3-D model datasets (ModelNet-10 and ModelNet-40), outperforming typical methods by a large margin.",
"Robust object recognition is a crucial skill for robots operating autonomously in real world environments. Range sensors such as LiDAR and RGBD cameras are increasingly found in modern robotic systems, providing a rich source of 3D information that can aid in this task. However, many current systems do not fully utilize this information and have trouble efficiently dealing with large amounts of point cloud data. In this paper, we propose VoxNet, an architecture to tackle this problem by integrating a volumetric Occupancy Grid representation with a supervised 3D Convolutional Neural Network (3D CNN). We evaluate our approach on publicly available benchmarks using LiDAR, RGBD, and CAD data. VoxNet achieves accuracy beyond the state of the art while labeling hundreds of instances per second.",
"A multi-view image sequence provides a much richer capacity for object recognition than from a single image. However, most existing solutions to multi-view recognition typically adopt hand-crafted, model-based geometric methods, which do not readily embrace recent trends in deep learning. We propose to bring Convolutional Neural Networks to generic multi-view recognition, by decomposing an image sequence into a set of image pairs, classifying each pair independently, and then learning an object classifier by weighting the contribution of each pair. This allows for recognition over arbitrary camera trajectories, without requiring explicit training over the potentially infinite number of camera paths and lengths. Building these pairwise relationships then naturally extends to the next-best-view problem in an active recognition framework. To achieve this, we train a second Convolutional Neural Network to map directly from an observed image to next viewpoint. Finally, we incorporate this into a trajectory optimisation task, whereby the best recognition confidence is sought for a given trajectory length. We present state-of-the-art results in both guided and unguided multi-view recognition on the ModelNet dataset, and show how our method can be used with depth images, greyscale images, or both."
]
} |
1607.05506 | 2489515414 | Concentration inequalities are indispensable tools for studying the generalization capacity of learning models. Hoeffding's and McDiarmid's inequalities are commonly used, giving bounds independent of the data distribution. Although this makes them widely applicable, a drawback is that the bounds can be too loose in some specific cases. Although efforts have been devoted to improving the bounds, we find that the bounds can be further tightened in some distribution-dependent scenarios and conditions for the inequalities can be relaxed. In particular, we propose four types of conditions for probabilistic boundedness and bounded differences, and derive several distribution-dependent extensions of Hoeffding's and McDiarmid's inequalities. These extensions provide bounds for functions not satisfying the conditions of the existing inequalities, and in some special cases, tighter bounds. Furthermore, we obtain generalization bounds for unbounded and hierarchy-bounded loss functions. Finally we discuss the potential applications of our extensions to learning theory. | @math : similar to the boundedness conditions given by @cite_27 (see Assumptions 1 and 3 in Section 4). In this case, we obtain a refined bound. | {
"cite_N": [
"@cite_27"
],
"mid": [
"2247326188"
],
"abstract": [
"We derive an extension of McDiarmid’s inequality for functions f with bounded differences on a high probability set Y (instead of almost surely). The behavior of f outside Y may be arbitrary. The proof is short and elementary, and relies on an extension argument similar to Kirszbraun’s theorem [4]."
]
} |
1607.05171 | 2513293839 | The Long Term Evolution (LTE) is the latest mobile standard being implemented globally to provide connectivity and access to advanced services for personal mobile devices. Moreover, LTE networks are considered to be one of the main pillars for the deployment of Machine to Machine (M2M) communication systems and the spread of the Internet of Things (IoT). As an enabler for advanced communications services with a subscription count in the billions, security is of capital importance in LTE. Although legacy GSM (Global System for Mobile Communications) networks are known for being insecure and vulnerable to rogue base stations, LTE is assumed to guarantee confidentiality and strong authentication. However, LTE networks are vulnerable to security threats that tamper availability, privacy and authentication. This manuscript, which summarizes and expands the results presented by the author at ShmooCon 2016 jover2016lte , investigates the insecurity rationale behind LTE protocol exploits and LTE rogue base stations based on the analysis of real LTE radio link captures from the production network. Implementation results are discussed from the actual deployment of LTE rogue base stations, IMSI catchers and exploits that can potentially block a mobile device. A previously unknown technique to potentially track the location of mobile devices as they move from cell to cell is also discussed, with mitigations being proposed. | LTE security research has become increasingly prominent over the last couple of years, mainly in projects related to network availability. For example, there have been interesting studies aiming to quantify and investigate the impact of large spikes of traffic load originating from M2M systems against the LTE infrastructure @cite_14 . Also in the context of M2M, several studies have focused on the control plane signaling impact of the IoT against the LTE mobile core @cite_10 @cite_30 . | {
"cite_N": [
"@cite_30",
"@cite_14",
"@cite_10"
],
"mid": [
"1932683804",
"2112998006",
"2147825047"
],
"abstract": [
"Machine to Machine (M2M) systems are actively spreading, with mobile networks rapidly evolving to provide connectivity beyond smartphones and tablets. With billions of embedded devices expected to join cellular networks over the next few years, novel applications are emerging and contributing to the Internet of Things (IoT) paradigm. The new generation of mobile networks, the Long Term Evolution (LTE), has been designed to provide enhanced capacity for a large number of mobile devices and is expected to be the main enabler of the emergence of the IoT. In this context, there is growing interest in the industry and standardization bodies on understanding the potential impact of the scalability of M2M systems on LTE networks. The highly heterogeneous traffic patterns of most M2M systems, very different from those of smartphones and other mobile devices, and the surge of M2M connected devices over the next few years, present a great challenge for the network. This paper presents the first insights and answers on the scalability of the IoT on LTE networks, determining to what extent mobile networks could be overwhelmed by the large amount of devices attempting to communicate. Based on a detailed analysis with a custom-built, standards-compliant, large-scale LTE simulation testbed, we determine the main potential congestion points and bottlenecks, and determine which types of M2M traffic present a larger challenge. To do so, the simulation testbed implements realistic statistical M2M traffic models derived from fully anonymized real LTE traces of six popular M2M systems from one of the main tier-1 operators in the United States.",
"The number of Machine-to-Machine (M2M) applications is rapidly increasing in cellular communication systems. In order to ensure a maximum system capacity, the impact of this special kind of traffic on common Human-to-Human (H2H) communication needs to be analyzed. In this paper, a system model for performance evaluation of cellular networks like Long Term Evolution (LTE) in the presence of M2M communication and under different Quality of Service (QoS) constraints is presented. By means of a Markovian model, which is parameterized by laboratory measurements and ray tracing simulations, an estimation of the behavior of LTE for different traffic characteristics is shown. We present blocking probabilities for an LTE network with heterogeneous M2M and H2H traffic and compare different transmission strategies for M2M communication to minimize the impact on human users. The results show that particularly a large number of devices with a low data rate influences the utilization of an LTE cell very negatively.",
"The global wide area coverage of cellular networks offers a great advantage to provide connectivity to a wide range of emerging machine-to-machine (M2M) services. The anticipated massive growth of M2M devices is expected to challenge the planning and operation of cellular networks due to new models of traffic behaviour and high signalling loads over the radio access network. Accordingly, it is important to capture the signalling generated by M2M traffic and its effect on cellular network planning and operation. In this paper, we adopt an experimental approach to measure, quantify, and analyze the signalling overhead of two classes of M2M services that resemble smart metering and vehicular M2M connections. Different perspectives are provided in the analysis, identifying the relation between traffic payload data size, transmission direction and incurred signalling overhead for static and mobile scenarios. A security perspective is also presented looking at ways of utilizing the heavy signalling loads generated by M2M devices to implement denial of service (DoS) attacks over the radio access network. The insights drawn from the experimental study are reinforced by results obtained through elaborate network simulations for further analysis and validation."
]
} |
1607.05171 | 2513293839 | The Long Term Evolution (LTE) is the latest mobile standard being implemented globally to provide connectivity and access to advanced services for personal mobile devices. Moreover, LTE networks are considered to be one of the main pillars for the deployment of Machine to Machine (M2M) communication systems and the spread of the Internet of Things (IoT). As an enabler for advanced communications services with a subscription count in the billions, security is of capital importance in LTE. Although legacy GSM (Global System for Mobile Communications) networks are known for being insecure and vulnerable to rogue base stations, LTE is assumed to guarantee confidentiality and strong authentication. However, LTE networks are vulnerable to security threats that tamper availability, privacy and authentication. This manuscript, which summarizes and expands the results presented by the author at ShmooCon 2016 jover2016lte , investigates the insecurity rationale behind LTE protocol exploits and LTE rogue base stations based on the analysis of real LTE radio link captures from the production network. Implementation results are discussed from the actual deployment of LTE rogue base stations, IMSI catchers and exploits that can potentially block a mobile device. A previously unknown technique to potentially track the location of mobile devices as they move from cell to cell is also discussed, with mitigations being proposed. | Applied LTE security research and protocol exploit experimentation have been close to non-existent over the last few years. However, the recent availability of open source tools for LTE experimentation has provided the means for very interesting security research work. For example, some recent studies aimed at analyzing and evaluating sophisticated jamming threats against LTE networks @cite_27 @cite_33 .
Also leveraging LTE open source tools, the authors of @cite_32 were the first to publicly disclose the implementation and analysis of the device blocking and soft downgrade to GSM exploits, which were also implemented in this manuscript. The same authors are responsible for some other excellent mobile protocol exploit experimentation, such as a study on intercepting phone calls and text messages in GSM networks @cite_16 and mobile phone baseband fuzzing @cite_22 . | {
"cite_N": [
"@cite_33",
"@cite_22",
"@cite_32",
"@cite_27",
"@cite_16"
],
"mid": [
"2083164542",
"2293493339",
"1951189428",
"2006575006",
"1778464109"
],
"abstract": [
"LTE is well on its way to becoming the primary cellular standard, due to its performance and low cost. Over the next decade we will become dependent on LTE, which is why we must ensure it is secure and available when we need it. Unfortunately, like any wireless technology, disruption through radio jamming is possible. This paper investigates the extent to which LTE is vulnerable to intentional jamming, by analyzing the components of the LTE downlink and uplink signals. The LTE physical layer consists of several physical channels and signals, most of which are vital to the operation of the link. By taking into account the density of these physical channels and signals with respect to the entire frame, as well as the modulation and coding schemes involved, we come up with a series of vulnerability metrics in the form of jammer to signal ratios. The “weakest links” of the LTE signals are then identified, and used to establish the overall vulnerability of LTE to hostile interference.",
"Mobile communication is an essential part of our daily lives. Therefore, it needs to be secure and reliable. In this paper, we study the security of feature phones, the most common type of mobile phone in the world. We built a framework to analyze the security of SMS clients of feature phones. The framework is based on a small GSM base station, which is readily available on the market. Through our analysis we discovered vulnerabilities in the feature phone platforms of all major manufacturers. Using these vulnerabilities we designed attacks against end-users as well as mobile operators. The threat is serious since the attacks can be used to prohibit communication on a large scale and can be carried out from anywhere in the world. Through further analysis we determined that such attacks are amplified by certain configurations of the mobile network. We conclude our research by providing a set of countermeasures.",
"Mobile communication systems now constitute an essential part of life throughout the world. Fourth generation \"Long Term Evolution\" (LTE) mobile communication networks are being deployed. The LTE suite of specifications is considered to be significantly better than its predecessors not only in terms of functionality but also with respect to security and privacy for subscribers. We carefully analyzed LTE access network protocol specifications and uncovered several vulnerabilities. Using commercial LTE mobile devices in real LTE networks, we demonstrate inexpensive, and practical attacks exploiting these vulnerabilities. Our first class of attacks consists of three different ways of making an LTE device leak its location: A semi-passive attacker can locate an LTE device within a 2 sq.km area within a city whereas an active attacker can precisely locate an LTE device using GPS co-ordinates or trilateration via cell-tower signal strength information. Our second class of attacks can persistently deny some or all services to a target LTE device. To the best of our knowledge, our work constitutes the first publicly reported practical attacks against LTE access network protocols. We present several countermeasures to resist our specific attacks. We also discuss possible trade-offs that may explain why these vulnerabilities exist and recommend that safety margins introduced into future specifications to address such trade-offs should incorporate greater agility to accommodate subsequent changes in the trade-off equilibrium.",
"LTE is universally recognized as the world-wide standard for next-generation mobile broadband services. Operating LTE in military spectrum has recently been proposed for a variety of use cases. First, due to the lack of available commercial spectrum, commercial operators are seeking to leverage underutilized military bands to offer improved commercial service. Second, given the ability for LTE to operate at high data rates in multipath environments, the technology could provide the military with more resilient communications in tactical environments. However, these proposed use cases for LTE are non-standard and certain military requirements cannot be immediately supported within the scope of the 3GPP standards. Emerging features within LTE-Advanced provide the initial building blocks for supporting LTE operations in heterogeneous environments that may include hostile interferers. LTE operation in military bands may be possible through the use of distributed spectrum sensing, spectrum allocation databases, carrier aggregation with non-military bands, and real-time radio resource management to cope with interference.",
"Mobile telecommunication has become an important part of our daily lives. Yet, industry standards such as GSM often exclude scenarios with active attackers. Devices participating in communication are seen as trusted and non-malicious. By implementing our own baseband firmware based on OsmocomBB, we violate this trust and are able to evaluate the impact of a rogue device with regard to the usage of broadcast information. Through our analysis we show two new attacks based on the paging procedure used in cellular networks. We demonstrate that for at least GSM, it is feasible to hijack the transmission of mobile terminated services such as calls, perform targeted denial of service attacks against single subscribers and as well against large geographical regions within a metropolitan area."
]
} |
1607.05048 | 2442469506 | We consider the positioning problem of aerial drone systems for efficient three-dimensional (3-D) coverage. Our solution draws from molecular geometry, where forces among electron pairs surrounding a central atom arrange their positions. In this paper, we propose a 3-D clustering algorithm for autonomous positioning (VBCA) of aerial drone networks based on virtual forces. These virtual forces induce interactions among drones and structure the system topology. The advantages of our approach are that (1) virtual forces enable drones to self-organize the positioning process and (2) VBCA can be implemented entirely localized. Extensive simulations show that our virtual forces clustering approach produces scalable 3-D topologies exhibiting near-optimal volume coverage. VBCA triggers efficient topology rearrangement for an altering number of nodes, while providing network connectivity to the central drone. We also draw a comparison of volume coverage achieved by VBCA against existing approaches and find VBCA up to 40 more efficient. | There are various approaches for the 3-D coordination and positioning of aerial networks. The aerial network presented by Elston and Frew @cite_20 contains a central ship with multiple drones, which use field tracking for the hierarchy. Dumiak @cite_24 proposes a coordination mechanism for aerial networks to complete tasks with multiple drones. UAVNet, by @cite_0 , is an autonomous deployment framework for FANETs. UAVNet aims to provide an efficient way to construct a communication network, which can be controlled by a single remote user. The mobility prediction clustering algorithm @cite_19 uses a dictionary tree structure prediction algorithm with a link expiration time mobility model to overcome the challenge of frequent cluster updates by predicting network topology updates.
@cite_2 propose a positioning and collision avoidance strategy for UAVs in search scenarios, which uses the received signal strength (RSS) from the onboard communication module. De @cite_4 propose a prognostics and health monitoring (PHM) based multi-UAV task assignment approach that incorporates the system probability of failure into task assignment in a drone system. This method assigns tasks based on the drone health condition using the Receding Horizon Task Assignment (RHTA) algorithm. | {
"cite_N": [
"@cite_4",
"@cite_24",
"@cite_19",
"@cite_0",
"@cite_2",
"@cite_20"
],
"mid": [
"2052791775",
"",
"2056915251",
"2084156584",
"1969589107",
"2121464912"
],
"abstract": [
"This paper is relating to the application of Integrated Vehicle Health Management (IVHM) concepts based on Prognostics and Health Monitoring (PHM) techniques to Multi-UAV systems. Considering UAV as a mission critical system, it is expected and required to accomplish its operational objectives with minimal unscheduled interruptions. So that, it does make sense for UAV to take advantage of those techniques as enablers for the readiness of multi-UAV. The main goal of this paper is to apply information from a PHM system to support decision making through an IVHM framework. PHM system information, in this case, comprises UAV remaining useful life (RUL) estimations. UAV RUL is computed by means of a fault tree analysis that it is fed by a distribution function from a probability density function relating time and failure probability for each UAV critical components. The IVHM framework, in this case, it is the task assignment based on UAV health condition (RUL information) using the Receding Horizon Task Assignment (RHTA) algorithm. The study case was developed considering a team of electrical small UAVs and pitch control system was chosen as the critical system.",
"",
"In recent years, with the increasingly widespread application of unmanned aerial vehicle (UAV), the network technology of UAV has also caused for concern. In this paper, according to the background of related technologies of UAV, a mobility prediction clustering algorithm (MPCA) relying on the attributes of UAV is proposed. The dictionary Trie structure prediction algorithm and link expiration time mobility model are applied in this clustering algorithm to solve the difficulty of high mobility of UAV. The simulation shows that the reasonable clusterhead electing algorithm and on-demand cluster maintenance mechanism guarantee the stability of the cluster structure and the performance of the network.",
"We developed UAVNet, a framework for the autonomous deployment of a flying Wireless Mesh Network using small quadrocopter-based Unmanned Aerial Vehicles (UAVs). The flying wireless mesh nodes are automatically interconnected to each other and building an IEEE 802.11s wireless mesh network. The implemented UAVNet prototype is able to autonomously interconnect two end systems by setting up an airborne relay, consisting of one or several flying wireless mesh nodes. The developed software includes basic functionality to control the UAVs and to setup, deploy, manage, and monitor a wireless mesh network. Our evaluations have shown that UAVNet can significantly improve network performance.",
"Unmanned aerial vehicles (UAVs) play an invaluable role in information collection and data fusion. Because of their mobility and the complexity of deployed environments, constant position awareness and collision avoidance are essential. UAVs may encounter and or cause danger if their Global Positioning System (GPS) signal is weak or unavailable. This paper tackles the problem of constant positioning and collision avoidance on UAVs in outdoor (wildness) search scenarios by using received signal strength (RSS) from the onboard communication module. Colored noise is found in the RSS, which invalidates the unbiased assumptions in least squares (LS) algorithms that are widely used in RSS-based position estimation. A colored noise model is thus proposed and applied in the extended Kalman filter (EKF) for distance estimation. Furthermore, the constantly changing path-loss factor during UAV flight can also affect the accuracy of estimation. To overcome this challenge, we present an adaptive algorithm to estimate the path-loss factor. Given the position and velocity information, if a collision is detected, we further employ an orthogonal rule to adapt the UAV predefined trajectory. Theoretical results prove that such an algorithm can provide effective modification to satisfy the required performance. Experiments have confirmed the advantages of the proposed algorithms.",
"This paper presents a hierarchical control architecture that enables cooperative surveillance by a heterogeneous aerial robot network comprised of mothership unmanned aircraft and daughtership micro air vehicles. Combining the endurance, range, and processing capabilities of the motherships with the stealth, flexibility, and maneuverability of swarms of daughterships enables robust control of aerial robot networks conducting collaborative operations. The hierarchical control structure decomposes the system into components that take advantage of the abilities of the different types of vehicles. The motherships act as distributed databases, fusion centers, negotiation agents, and task supervisors while daughtership control is achieved using cooperative vector field tracking. This paper describes the overall architecture and then focuses on the assignment and tracking algorithms used once sub- teams of daughtership vehicles have been deployed. A summary of the communication, command, and control structure of a heterogeneous unmanned aircraft system is also given in this paper along with hardware-in-the-loop and software simulation results verifying several components of the distributed control architecture."
]
} |
1607.05048 | 2442469506 | We consider the positioning problem of aerial drone systems for efficient three-dimensional (3-D) coverage. Our solution draws from molecular geometry, where forces among electron pairs surrounding a central atom arrange their positions. In this paper, we propose a 3-D clustering algorithm for autonomous positioning (VBCA) of aerial drone networks based on virtual forces. These virtual forces induce interactions among drones and structure the system topology. The advantages of our approach are that (1) virtual forces enable drones to self-organize the positioning process and (2) VBCA can be implemented entirely localized. Extensive simulations show that our virtual forces clustering approach produces scalable 3-D topologies exhibiting near-optimal volume coverage. VBCA triggers efficient topology rearrangement for an altering number of nodes, while providing network connectivity to the central drone. We also draw a comparison of volume coverage achieved by VBCA against existing approaches and find VBCA up to 40% more efficient. | Brust and Strimbu @cite_14 introduce a UAV networked swarm model for forestry assessment and environmental monitoring. The UAV swarm is able to establish and maintain multi-hop connectivity and avoid obstacles, while assessing the forest environment (e.g., tree localization, tree mapping). @cite_5 use Linear Model Predictive Control (LMPC) to implement line abreast, triangular and cross formation flights for drones in simulations and experiments. The Reconfigurable Flight Control System Architecture (RFCSA) @cite_9 is a control framework for small UAVs. RFCSA utilizes a different module for each function of a drone to minimize the complexity of implementation and coordination during flight. | {
"cite_N": [
"@cite_5",
"@cite_9",
"@cite_14"
],
"mid": [
"2053868191",
"2015026308",
"1582929720"
],
"abstract": [
"A team of three Unmanned Aerial Vehicles (UAVs) accomplishes a line abreast, triangular and cross formation based on high-level Linear Model Predictive Control (LMPC). All flight tests respect Reynold's rules of flocking, where the UAVs avoid collisions with nearby flockmates, attempt to match velocity of other team members and attempt to stay close to other flockmates. A linear system identification model is at the base of the error dynamics describing the formation control algorithm. The main contribution of this paper lies in the use of LMPC to implement multiple formations on UAVs in simulation and using the Qball-X4 quadrotor.",
"Small Unmanned Aerial Vehicles (UAVs) have been proposed for use in a variety of areas including hazard analysis, disaster monitoring, agricultural mapping, and so on. Currently, the development of flight control systems (FCS) for small UAVs is complicated, time-consuming and error-prone. To address these challenges, we present a reconfigurable flight control system architecture (RFCSA) for small UAVs. RFCSA allows rapidly integrating hardware modules and verifying control laws. It utilizes a modular-based framework, in which implementation details of a typical function are packaged into a function module. In addition, each function module in RFCSA is stand-alone with its own processors, memories, power conversion and communication interfaces. In this way, the system designer could be able to focus on the system implementation, rather than pay much attention to the design details of low-level functions. Moreover, an event-driven Service-Oriented Architectures (SOA) is proposed to minimize the coupling between modules and improve the performance of inter-module interactions. The paper is organized as follows: Firstly, the challenges and requirements for small UAVs' flight control systems are discussed. Secondly, the hardware architecture based on modular concept is developed. Thirdly, event driven SOA is proposed, and the mechanisms for modules to share information and coordinate activities are introduced. Finally, the conclusion and future work is pointed out.",
"Autonomous Unmanned Aerial Vehicles (UAVs) have gained popularity due to their many potential application fields. Alongside sophisticated sensors, UAVs can be equipped with communication adaptors aimed for inter-UAV communication. Inter-communication of UAVs to form a UAV swarm raises questions on how to manage its communication structure and mobility. In this paper, we consider therefore the problem of establishing an efficient swarm movement model and a network topology between a collection of UAVs, which are specifically deployed for the scenario of high-quality forest-mapping."
]
} |
1607.05048 | 2442469506 | We consider the positioning problem of aerial drone systems for efficient three-dimensional (3-D) coverage. Our solution draws from molecular geometry, where forces among electron pairs surrounding a central atom arrange their positions. In this paper, we propose a 3-D clustering algorithm for autonomous positioning (VBCA) of aerial drone networks based on virtual forces. These virtual forces induce interactions among drones and structure the system topology. The advantages of our approach are that (1) virtual forces enable drones to self-organize the positioning process and (2) VBCA can be implemented entirely localized. Extensive simulations show that our virtual forces clustering approach produces scalable 3-D topologies exhibiting near-optimal volume coverage. VBCA triggers efficient topology rearrangement for an altering number of nodes, while providing network connectivity to the central drone. We also draw a comparison of volume coverage achieved by VBCA against existing approaches and find VBCA up to 40% more efficient. | A geometry-based deployment and positioning strategy is used in VBCA. @cite_21 proposes a geometric approach for addressing the deployment of an autonomous mobile robot swarm randomly distributed in 3-D space. Through selective and dynamic interaction, four robots form a tetrahedron topology. The Regular Tetrahedron Formation (RTF) strategy @cite_1 by Zeng and Li, which is based on a virtual spring mechanism, is proposed for a swarm of robots to form such topologies. Each robot's movement at each time step depends only on the local position information of three neighbors. | {
"cite_N": [
"@cite_21",
"@cite_1"
],
"mid": [
"2007846897",
"2099764439"
],
"abstract": [
"This paper addresses the deployment problem for a swarm of autonomous mobile robots initially randomly distributed in 3 dimensional space. A fully decentralized geometric self-configuration approach is proposed to deploy individual robots at a given spatial density. Specifically, each robot interacts with three neighboring robots in a selective and dynamic fashion without using any explicit communication so that four robots eventually form a regular tetrahedron. Using such local interactions, the proposed algorithms enable a swarm of robots to span a network of regular tetrahedrons in a designated space. The convergence of the algorithms is theoretically proved using Lyapunov theory. Through extensive simulations, we validate the effectiveness and scalability of the proposed algorithms.",
"A new decentralized control method, named Regular Tetrahedron Formation (RTF) strategy, is presented for a swarm of simple robots operating in three dimensional space. This strategy is based on virtual spring mechanism and basically, allows four neighboring robots to autonomously form a Regular Tetrahedron (RT) regardless of their initial positions. RTF strategy is made scalable and applied for various sizes of swarms through a dynamic neighbor selection procedure. Thus, each robot's behavior in each time step is only dependent on the local position information of three dynamically selected neighbors. In addition, an obstacle avoidance model suitable for swarm maneuvering is also introduced. Algorithm is studied with computational experiments which demonstrated that it is effective and practical."
]
} |
1607.04982 | 2486884974 | In this paper, we present an approach to improve the accuracy of a strong transition-based dependency parser by exploiting dependency language models that are extracted from a large parsed corpus. We integrated a small number of features based on the dependency language models into the parser. To demonstrate the effectiveness of the proposed approach, we evaluate our parser on standard English and Chinese data where the base parser could achieve competitive accuracy scores. Our enhanced parser achieved state-of-the-art accuracy on Chinese data and competitive results on English data. We gained a large absolute improvement of one point (UAS) on Chinese and 0.5 points for English. | The first group uses unlabeled data (usually parsed data) directly in the training process as additional training data. The most common approaches in this group are self- and co-training. first applied self-training to a constituency parser. This was later adapted to dependency parsing by and . Compared to the self-training approach used by , both self-training approaches for dependency parsing need an additional selection step to predict high-quality parsed sentences for retraining. The basic idea behind this is similar to 's co-training approach. Instead of using a separately trained classifier @cite_3 or confidence-based methods @cite_14 , used two different parsers to obtain the additional training data. shows that when two parsers assign the same syntactic analysis to sentences, then the parse trees usually have a higher parsing accuracy. Tri-training @cite_20 @cite_19 is a variant of co-training which involves a third parser. The base parser is retrained on additional parse trees that the other two parsers agreed on. | {
"cite_N": [
"@cite_19",
"@cite_14",
"@cite_20",
"@cite_3"
],
"mid": [
"1716844867",
"2250737685",
"2133556223",
"2396955895"
],
"abstract": [
"(2008) presented what to the best of our knowledge still ranks as the best overall result on the CONLL-X Shared Task datasets. The paper shows how triads of stacked dependency parsers described in (2008) can label unlabeled data for each other in a way similar to co-training and produce end parsers that are significantly better than any of the stacked input parsers. We evaluate our system on five datasets from the CONLL-X Shared Task and obtain 10--20 error reductions, incl. the best reported results on four of them. We compare our approach to other semi-supervised learning algorithms.",
"This paper presents a successful approach for domain adaptation of a dependency parser via self-training. We improve parsing accuracy for out-of-domain texts with a self-training approach that uses confidence-based methods to select additional training samples. We compare two confidence-based methods: The first method uses the parse score of the employed parser to measure the confidence into a parse tree. The second method calculates the score differences between the best tree and alternative trees. With these methods, we were able to improve the labeled accuracy score by 1.6 percentage points on texts from a chemical domain and by 0.6 on average on texts of three web domains. Our improvements on the chemical texts of 1.5 UAS is substantially higher than improvements reported in previous work of 0.5 UAS. For the three web domains, no positive results for self-training have been reported before.",
"In many practical data mining applications, such as Web page classification, unlabeled training examples are readily available, but labeled ones are fairly expensive to obtain. Therefore, semi-supervised learning algorithms such as co-training have attracted much attention. In this paper, a new co-training style semi-supervised learning algorithm, named tri-training, is proposed. This algorithm generates three classifiers from the original labeled example set. These classifiers are then refined using unlabeled examples in the tri-training process. In detail, in each round of tri-training, an unlabeled example is labeled for a classifier if the other two classifiers agree on the labeling, under certain conditions. Since tri-training neither requires the instance space to be described with sufficient and redundant views nor does it put any constraints on the supervised learning algorithm, its applicability is broader than that of previous co-training style algorithms. Experiments on UCI data sets and application to the Web page classification task indicate that tri-training can effectively exploit unlabeled data to enhance the learning performance.",
"The accuracy of parsing has exceeded 90 recently, but this is not high enough to use parsing results practically in natural language processing (NLP) applications such as paraphrase acquisition and relation extraction. We present a method for detecting reliable parses out of the outputs of a single dependency parser. This technique is also applied to domain adaptation of dependency parsing. Our goal was to improve the performance of a state-of-the-art dependency parser on the data set of the domain adaptation track of the CoNLL 2007 shared task, a formidable challenge."
]
} |
1607.05178 | 2953331043 | Many businesses possess a small infrastructure that they can use for their computing tasks, but also often buy extra computing resources from clouds. Cloud vendors such as Amazon EC2 offer two types of purchase options: on-demand and spot instances. As tenants have limited budgets to satisfy their computing needs, it is crucial for them to determine how to purchase different options and utilize them (in addition to possible self-owned instances) in a cost-effective manner while respecting their response-time targets. In this paper, we propose a framework to design policies to allocate self-owned, on-demand and spot instances to arriving jobs. In particular, we propose a near-optimal policy to determine the number of self-owned instances and an optimal policy to determine the number of on-demand instances to buy and the number of spot instances to bid for at each time unit. Our policies rely on a small number of parameters and we use an online learning technique to infer their optimal values. Through numerical simulations, we show the effectiveness of our proposed policies, in particular that they achieve a cost reduction of up to 64.51% when spot and on-demand instances are considered and of up to 43.74% when self-owned instances are considered, compared to previously proposed or intuitive policies. | In this paper, the online learning technique is used to learn the most effective parameters for utilizing various instances. Jain et al. @cite_16 @cite_18 first enabled the application of this approach to the scenario of cloud computing. The objective of this paper corresponds to a special case in @cite_16 @cite_18 where the value of each job is larger than the cost of completing it. However, they do not consider the problem of how to optimally utilize the purchase options in IaaS clouds, and self-owned instances are also not taken into account. 
This approach is appealing partly because online learning, unlike other techniques such as stochastic programming, does not impose the restriction of a priori statistical knowledge of the workload; however, it achieves good performance only if effective scheduling policies can be proposed. | {
"cite_N": [
"@cite_18",
"@cite_16"
],
"mid": [
"1463932917",
"2282197438"
],
"abstract": [
"Cloud computing provides an attractive computing paradigm in which computational resources are rented on-demand to users with zero capital and maintenance costs. Cloud providers offer different pricing options to meet computing requirements of a wide variety of applications. An attractive option for batch computing is spot-instances, which allows users to place bids for spare computing instances and rent them at a (often) substantially lower price compared to the fixed on-demand price. However, this raises three main challenges for users: how many instances to rent at any time? what type (on-demand, spot, or both)? and what bid value to use for spot instances? In particular, renting on-demand risks high costs while renting spot instances risks job interruption and delayed completion when the spot market price exceeds the bid. This paper introduces an online learning algorithm for resource allocation to address this fundamental tradeoff between computation cost and performance. Our algorithm dynamically adapts resource allocation by learning from its performance on prior job executions while incorporating history of spot prices and workload characteristics. We provide theoretical bounds on its performance and prove that the average regret of our approach (compared to the best policy in hindsight) vanishes to zero with time. Evaluation on traces from a large datacenter cluster shows that our algorithm outperforms greedy allocation heuristics and quickly converges to a small set of best performing policies.",
"A method for adaptively allocating resources to a plurality of jobs. The method comprises selecting a first policy from a plurality of policies for a first job in the plurality of jobs by using a policy selection mechanism, allocating at least one resource to the first job in accordance with the first policy, and in response to completion of the first job, updating the policy selection mechanism to obtain an updated policy selection mechanism by using at least one processor. Updating the policy selection mechanism comprises evaluating the performance of the first policy with respect to the first job by calculating a value of a metric of utility for the first policy based on conditions associated with execution of the first job and updating the policy selection mechanism based on the calculated value and a delay of execution of the first job."
]
} |
1607.05178 | 2953331043 | Many businesses possess a small infrastructure that they can use for their computing tasks, but also often buy extra computing resources from clouds. Cloud vendors such as Amazon EC2 offer two types of purchase options: on-demand and spot instances. As tenants have limited budgets to satisfy their computing needs, it is crucial for them to determine how to purchase different options and utilize them (in addition to possible self-owned instances) in a cost-effective manner while respecting their response-time targets. In this paper, we propose a framework to design policies to allocate self-owned, on-demand and spot instances to arriving jobs. In particular, we propose a near-optimal policy to determine the number of self-owned instances and an optimal policy to determine the number of on-demand instances to buy and the number of spot instances to bid for at each time unit. Our policies rely on a small number of parameters and we use an online learning technique to infer their optimal values. Through numerical simulations, we show the effectiveness of our proposed policies, in particular that they achieve a cost reduction of up to 64.51% when spot and on-demand instances are considered and of up to 43.74% when self-owned instances are considered, compared to previously proposed or intuitive policies. | Similar to this paper and @cite_16 @cite_18 , executing deadline-constrained jobs cost-effectively in IaaS clouds is also studied in @cite_8 @cite_12 . In particular, Zafer et al. characterize the evolution of spot prices by a Markov model and propose an optimal bidding strategy to utilize spot instances to complete a serial or parallel job by some deadline @cite_8 . Yao et al. study the problem of utilizing reserved and on-demand instances to complete online batch jobs by their deadlines and formulate it as integer programming problems; heuristic algorithms are then proposed to give approximate solutions @cite_12 . | {
"cite_N": [
"@cite_18",
"@cite_16",
"@cite_12",
"@cite_8"
],
"mid": [
"1463932917",
"2282197438",
"2051390526",
"2061849602"
],
"abstract": [
"Cloud computing provides an attractive computing paradigm in which computational resources are rented on-demand to users with zero capital and maintenance costs. Cloud providers offer different pricing options to meet computing requirements of a wide variety of applications. An attractive option for batch computing is spot-instances, which allows users to place bids for spare computing instances and rent them at a (often) substantially lower price compared to the fixed on-demand price. However, this raises three main challenges for users: how many instances to rent at any time? what type (on-demand, spot, or both)? and what bid value to use for spot instances? In particular, renting on-demand risks high costs while renting spot instances risks job interruption and delayed completion when the spot market price exceeds the bid. This paper introduces an online learning algorithm for resource allocation to address this fundamental tradeoff between computation cost and performance. Our algorithm dynamically adapts resource allocation by learning from its performance on prior job executions while incorporating history of spot prices and workload characteristics. We provide theoretical bounds on its performance and prove that the average regret of our approach (compared to the best policy in hindsight) vanishes to zero with time. Evaluation on traces from a large datacenter cluster shows that our algorithm outperforms greedy allocation heuristics and quickly converges to a small set of best performing policies.",
"A method for adaptively allocating resources to a plurality of jobs. The method comprises selecting a first policy from a plurality of policies for a first job in the plurality of jobs by using a policy selection mechanism, allocating at least one resource to the first job in accordance with the first policy, and in response to completion of the first job, updating the policy selection mechanism to obtain an updated policy selection mechanism by using at least one processor. Updating the policy selection mechanism comprises evaluating the performance of the first policy with respect to the first job by calculating a value of a metric of utility for the first policy based on conditions associated with execution of the first job and updating the policy selection mechanism based on the calculated value and a delay of execution of the first job.",
"Many web service providers use commercial cloud computing infrastructures like Amazon for flexible and reliable service deployment. For these web service providers, the cost of cloud computing usage becomes a big part of their IT department cost. Facing the diverse pricing models including on-demand, reserved, and spot instance, it is difficult for web service providers to optimize their cost. This paper introduces a new cloud brokerage service to help web service providers to minimize their cloud computing cost for deadline-constrained batch jobs, which have been a significant workload in web services. Our cloud brokerage service associates each batch job with deadline, and always tries to use cheaper reserved instances for computation to maintain a minimum cost. We achieve this with the following two steps: (1) given a set of jobs' specifications, determine the scheduling of jobs, (2) given the scheduling and pricing options, find an optimal instance renting strategy. We prove that both problems in two steps are computation intractable, and propose approximation algorithms for them. Trace-based evaluation shows that our cloud brokerage service can reduce up to 57 of the cloud computing cost.",
"Spot virtual-machine (VM) instances, such as Amazon EC2 Spot VMs, are a class of VMs that are purchased through a market mechanism of price-bids submitted by cloud users. Spot VMs can be obtained at substantially lower cost than other VM classes such as Reserved and On-demand instances, but they do not have guaranteed availability since it depends on the submitted price bids and the fluctuating spot VM price. Many applications with large computing requirements but no real-time availability constraints, such as scientific computing, financial modelling and large data analysis, can be carried out at a significantly lower cost using spot VMs. For such jobs, an important question that arises is what should the submitted price bids be so that the computation is completed within a fixed time interval while the cost is minimized. Towards this goal, we model a job as a fixed computation request with a deadline constraint and formulate the problem of designing a dynamic bidding policy that minimizes the average cost of job completion. We obtain analytical and closed-form results for the optimal strategy under a Markov spot price evolution, and then evaluate the performance of the algorithms on the actual spot price history of Amazon EC2 Spot VMs."
]
} |
1607.05178 | 2953331043 | Many businesses possess a small infrastructure that they can use for their computing tasks, but also often buy extra computing resources from clouds. Cloud vendors such as Amazon EC2 offer two types of purchase options: on-demand and spot instances. As tenants have limited budgets to satisfy their computing needs, it is crucial for them to determine how to purchase different options and utilize them (in addition to possible self-owned instances) in a cost-effective manner while respecting their response-time targets. In this paper, we propose a framework to design policies to allocate self-owned, on-demand and spot instances to arriving jobs. In particular, we propose a near-optimal policy to determine the number of self-owned instances and an optimal policy to determine the number of on-demand instances to buy and the number of spot instances to bid for at each time unit. Our policies rely on a small number of parameters and we use an online learning technique to infer their optimal values. Through numerical simulations, we show the effectiveness of our proposed policies, in particular that they achieve a cost reduction of up to 64.51% when spot and on-demand instances are considered and of up to 43.74% when self-owned instances are considered, compared to previously proposed or intuitive policies. | In fact, there has been substantial work on cost-effective resource provisioning in IaaS clouds @cite_10 , and in the following we introduce some of the typical approaches to this problem. Many works assume a priori statistical knowledge of the workload or spot prices @cite_7 @cite_17 @cite_11 , under which several techniques can be applied. In @cite_7 @cite_17 , stochastic programming is applied to achieve the cost-optimal acquisition of reserved and on-demand instances. 
In @cite_11 , the optimal strategy for users to bid for spot instances is derived, given a predicted distribution over spot prices. However, implementing these techniques incurs high computational complexity, even though the required statistical knowledge can be derived via techniques such as dynamic programming @cite_5 . | {
"cite_N": [
"@cite_7",
"@cite_17",
"@cite_5",
"@cite_10",
"@cite_11"
],
"mid": [
"1975099936",
"2133664094",
"1793037266",
"2048402506",
"2082819362"
],
"abstract": [
"Cloud computing holds the exciting potential of elastically scaling computation to match time-varying demand, thus eliminating the need to provision for peak demand to satisfy response-time requirements. Moreover, cloud vendors often offer several commitment levels for their machine instances (e.g., users can choose to pay an upfront premium for the discounted hourly usage price). Because cost is a major concern that may limit the cloud adoption, two key challenges are to determine (a) the number of machines to provision and (b) the commitment level at which the machine instances should be acquired, to minimize cost while satisfying response-time targets. This paper address the above two challenges in an Infrastructure-as-a-Service (IaaS) cloud. Our simulations with real Web server load traces reveal that our techniques offer a cost reduction between 13 and 29 (21 on average) under Amazon EC2 pricing models.",
"In cloud computing, cloud providers can offer cloud consumers two provisioning plans for computing resources, namely reservation and on-demand plans. In general, cost of utilizing computing resources provisioned by reservation plan is cheaper than that provisioned by on-demand plan, since cloud consumer has to pay to provider in advance. With the reservation plan, the consumer can reduce the total resource provisioning cost. However, the best advance reservation of resources is difficult to be achieved due to uncertainty of consumer's future demand and providers' resource prices. To address this problem, an optimal cloud resource provisioning (OCRP) algorithm is proposed by formulating a stochastic programming model. The OCRP algorithm can provision computing resources for being used in multiple provisioning stages as well as a long-term plan, e.g., four stages in a quarter plan and twelve stages in a yearly plan. The demand and price uncertainty is considered in OCRP. In this paper, different approaches to obtain the solution of the OCRP algorithm are considered including deterministic equivalent formulation, sample-average approximation, and Benders decomposition. Numerical studies are extensively performed in which the results clearly show that with the OCRP algorithm, cloud consumer can successfully minimize total cost of resource provisioning in cloud computing environments.",
"Recent years witness the proliferation of Infrastructure-as-a-Service (IaaS) cloud services, which provide on-demand resources (CPU, RAM, disk) in the form of virtual machines (VMs) for hosting applications services of third parties. Given the state-of-the-art IaaS offerings, it is still a problem of fundamental importance how the Application Service Providers (ASPs) should rent VMs from the clouds to serve their application needs, in order to minimize the cost while meeting their job demands over a long run. Cloud providers offer different pricing options to meet computing requirements of a variety of applications. However, the challenge facing an ASP is how these pricing options can be dynamically combined to serve arbitrary demands at the optimal cost. In this paper, we propose an online VM purchasing algorithm based on the Lyapunov optimization technique, for minimizing the long-term-averaged VM rental cost of an ASP with time-varying and delay-tolerant workloads, while bounding the maximum response delay of its jobs. In stark contrast with the existing studies, the proposed algorithm enables an ASP to optimally decide the amount of reserved, on-demand and spot instances to purchase simultaneously. Rigorous analysis shows that our algorithm can achieve a time-averaged resource cost close to the offline optimum. Trace-driven simulations further verify the efficacy of our algorithm.",
"Abstract The cloud phenomenon is quickly becoming an important service in Internet computing. Infrastructure as a Service (IaaS) in cloud computing is one of the most significant and fastest growing field. In this service model, cloud providers offer resources to users machines that include computers as virtual machines, raw (block) storage, firewalls, load balancers, and network devices. One of the most pressing issues in cloud computing for IaaS is the resource management. Resource management problems include allocation, provisioning, requirement mapping, adaptation, discovery, brokering, estimation, and modeling. Resource management for IaaS in cloud computing offers following benefits: scalability, quality of service, optimal utility, reduced overheads, improved throughput, reduced latency, specialized environment, cost effectiveness and simplified interface. This paper focuses on some of the important resource management techniques such as resource provisioning, resource allocation, resource mapping and resource adaptation. It brings out an exhaustive survey of such techniques for IaaS in cloud computing, and also put forth the open challenges for further research.",
"Amazon's Elastic Compute Cloud (EC2) uses auction-based spot pricing to sell spare capacity, allowing users to bid for cloud resources at a highly reduced rate. Amazon sets the spot price dynamically and accepts user bids above this price. Jobs with lower bids (including those already running) are interrupted and must wait for a lower spot price before resuming. Spot pricing thus raises two basic questions: how might the provider set the price, and what prices should users bid? Computing users' bidding strategies is particularly challenging: higher bid prices reduce the probability of, and thus extra time to recover from, interruptions, but may increase users' cost. We address these questions in three steps: (1) modeling the cloud provider's setting of the spot price and matching the model to historically offered prices, (2) deriving optimal bidding strategies for different job requirements and interruption overheads, and (3) adapting these strategies to MapReduce jobs with master and slave nodes having different interruption overheads. We run our strategies on EC2 for a variety of job sizes and instance types, showing that spot pricing reduces user cost by 90 with a modest increase in completion time compared to on-demand pricing."
]
} |
1607.05178 | 2953331043 | Many businesses possess a small infrastructure that they can use for their computing tasks, but also often buy extra computing resources from clouds. Cloud vendors such as Amazon EC2 offer two types of purchase options: on-demand and spot instances. As tenants have limited budgets to satisfy their computing needs, it is crucial for them to determine how to purchase different options and utilize them (in addition to possible self-owned instances) in a cost-effective manner while respecting their response-time targets. In this paper, we propose a framework to design policies to allocate self-owned, on-demand and spot instances to arriving jobs. In particular, we propose a near-optimal policy to determine the number of self-owned instance and an optimal policy to determine the number of on-demand instances to buy and the number of spot instances to bid for at each time unit. Our policies rely on a small number of parameters and we use an online learning technique to infer their optimal values. Through numerical simulations, we show the effectiveness of our proposed policies, in particular that they achieve a cost reduction of up to 64.51 when spot and on-demand instances are considered and of up to 43.74 when self-owned instances are considered, compared to previously proposed or intuitive policies. | Wang et al. use the competitive-analysis technique to purchase reserved and on-demand instances without knowing the future workload @cite_14 , applying the Bahncard problem to propose a deterministic and a randomized algorithm. In @cite_15 , a genetic algorithm is proposed to quickly approximate the Pareto set of makespan and cost for a bag of tasks where on-demand and spot instances are considered. In @cite_5 , the technique of Lyapunov optimization is applied; it is claimed to be the first effort to jointly leverage all three common IaaS cloud pricing options to comprehensively reduce users' cost.
A drawback of this technique is that it incurs a large delay in processing jobs: to achieve @math close-to-optimal performance, the queue size has to be @math @cite_6 . | {
"cite_N": [
"@cite_5",
"@cite_15",
"@cite_14",
"@cite_6"
],
"mid": [
"1793037266",
"2085732010",
"2051996845",
"2101178122"
],
"abstract": [
"Recent years witness the proliferation of Infrastructure-as-a-Service (IaaS) cloud services, which provide on-demand resources (CPU, RAM, disk) in the form of virtual machines (VMs) for hosting applications services of third parties. Given the state-of-the-art IaaS offerings, it is still a problem of fundamental importance how the Application Service Providers (ASPs) should rent VMs from the clouds to serve their application needs, in order to minimize the cost while meeting their job demands over a long run. Cloud providers offer different pricing options to meet computing requirements of a variety of applications. However, the challenge facing an ASP is how these pricing options can be dynamically combined to serve arbitrary demands at the optimal cost. In this paper, we propose an online VM purchasing algorithm based on the Lyapunov optimization technique, for minimizing the long-term-averaged VM rental cost of an ASP with time-varying and delay-tolerant workloads, while bounding the maximum response delay of its jobs. In stark contrast with the existing studies, the proposed algorithm enables an ASP to optimally decide the amount of reserved, on-demand and spot instances to purchase simultaneously. Rigorous analysis shows that our algorithm can achieve a time-averaged resource cost close to the offline optimum. Trace-driven simulations further verify the efficacy of our algorithm.",
"Commercial cloud offerings let users allocate compute resources on demand, charging based on reserved time intervals. Users, however, lack guidance for assembling instance pools from different cloud instance types, in order to control completion time and monetary budget. BaTS, our budget-constrained scheduler uses tiny statistical samples of task executions in order to predict completion times (and associated costs) for given bags of tasks, allowing the user to favor either fast execution or low computation budget. BaTS' estimator, however, can not handle variably-priced spot instances appropriately. In this work, we present a new prediction module for BaTS that quickly computes accurate approximations to the Pareto set of mixed on-demand and spot instance pools, based on a genetic algorithm (GA). This new approach allows BaTS to react to changing spot instance prices at runtime by re-configuring the instance pool according to the user's runtime and budget constraints. Simulator-based results show that the GA can approximate the Pareto sets for machine configurations in about 30 seconds time, without noticeable loss of quality, compared to an exact solution, computed offline within 40 to 60 minutes time.",
"Infrastructure-as-a-service (IaaS) clouds offer diverse instance purchasing options. A user can either run instances on demand and pay only for what it uses, or it can prepay to reserve instances for a long period, during which a usage discount is entitled. An important problem facing a user is how these two instance options can be dynamically combined to serve time-varying demands at minimum cost. Existing strategies in the literature, however, require either exact knowledge or the distribution of demands in the long-term future, which significantly limits their use in practice. Unlike existing works, we propose two practical online algorithms , one deterministic and another randomized, that dynamically combine the two instance options online without any knowledge of the future. We show that the proposed deterministic (resp., randomized) algorithm incurs no more than @math (resp., @math ) times the minimum cost obtained by an optimal offline algorithm that knows the exact future a priori , where @math is the entitled discount after reservation. Our online algorithms achieve the best possible competitive ratios in both the deterministic and randomized cases, and can be easily extended to cases when short-term predictions are reliable. Simulations driven by a large volume of real-world traces show that significant cost savings can be achieved with prevalent IaaS prices.",
"In this paper, we investigate the power of online learning in stochastic network optimization with unknown system statistics a priori. We are interested in understanding how information and learning can be efficiently incorporated into system control techniques, and what are the fundamental benefits of doing so. We propose two Online Learning-Aided Control techniques, OLAC and OLAC2, that explicitly utilize the past system information in current system control via a learning procedure called dual learning. We prove strong performance guarantees of the proposed algorithms: OLAC and OLAC2 achieve the near-optimal [O(e), O([log(1 e)]2)] utility-delay tradeoff and OLAC2 possesses an O(e-2 3) convergence time. Simulation results also confirm the superior performance of the proposed algorithms in practice. To the best of our knowledge, OLAC and OLAC2 are the first algorithms that simultaneously possess explicit near-optimal delay guarantee and sub-linear convergence time, and our attempt is the first to explicitly incorporate online learning into stochastic network optimization and to demonstrate its power in both theory and practice."
]
} |
1607.05159 | 2952551898 | Software-defined networking (SDN) allows operators to control the behavior of a network by programatically managing the forwarding rules installed on switches. However, as is common in distributed systems, it can be difficult to ensure that certain consistency properties are preserved during periods of reconfiguration. The widely-accepted notion of PER-PACKET CONSISTENCY requires every packet to be forwarded using the new configuration or the old configuration, but not a mixture of the two. If switches can be updated in some (partial) order which guarantees that per-packet consistency is preserved, we call this order a CONSISTENT ORDER UPDATE. In particular, switches that are incomparable in this order can be updated in parallel. We call a consistent order update OPTIMAL if it allows maximal parallelism. This paper presents a polynomial-time algorithm for finding an optimal consistent order update. This contrasts with other recent results in the literature, which show that for other classes of properties (e.g., loop-freedom and waypoint enforcement), the optimal update problem is NP-complete. | There are various approaches for producing a sequence of switch updates guaranteed to respect certain path-based consistency properties (e.g., properties representable using temporal logic). For example, some systems use counter-example guided search and incremental LTL model checking, FLIP @cite_9 uses integer linear programming, and CCG @cite_5 uses custom reachability-based graph algorithms. Other works such as Dionysus @cite_7 and zUpdate @cite_6 seek to perform updates with respect to quantitative properties. | {
"cite_N": [
"@cite_5",
"@cite_9",
"@cite_6",
"@cite_7"
],
"mid": [
"",
"2519194043",
"2162547852",
"2137826183"
],
"abstract": [
"",
"We study complexity and algorithms for network updates in the setting of Software Defined Networks. Our focus lies on consistent updates for the case of updating forwarding rules in a loop free manner and the migration of flows without congestion. In both cases, we study how the power of two affects the respective problem setting. For loop freedom, we show that scheduling consistent updates for two destinations is NP-hard for a sublinear number of rounds. We also consider the dynamic case, and show that this problem is NP-hard as well via a reduction from Feedback Arc Set. While the power of two increases the complexity for loop freedom, the converse is true when allowing to split flows twice. For the NP-hard problem of consistently migrating unsplittable flows to new routes while respecting waypointing and service chains, we prove that two-splittability allows the problem to be tractable again.",
"Datacenter networks (DCNs) are constantly evolving due to various updates such as switch upgrades and VM migrations. Each update must be carefully planned and executed in order to avoid disrupting many of the mission-critical, interactive applications hosted in DCNs. The key challenge arises from the inherent difficulty in synchronizing the changes to many devices, which may result in unforeseen transient link load spikes or even congestions. We present one primitive, zUpdate, to perform congestion-free network updates under asynchronous switch and traffic matrix changes. We formulate the update problem using a network model and apply our model to a variety of representative update scenarios in DCNs. We develop novel techniques to handle several practical challenges in realizing zUpdate as well as implement the zUpdate prototype on OpenFlow switches and deploy it on a testbed that resembles real DCN topology. Our results, from both real-world experiments and large-scale trace-driven simulations, show that zUpdate can effectively perform congestion-free updates in production DCNs.",
"We present Dionysus, a system for fast, consistent network updates in software-defined networks. Dionysus encodes as a graph the consistency-related dependencies among updates at individual switches, and it then dynamically schedules these updates based on runtime differences in the update speeds of different switches. This dynamic scheduling is the key to its speed; prior update methods are slow because they pre-determine a schedule, which does not adapt to runtime conditions. Testbed experiments and data-driven simulations show that Dionysus improves the median update speed by 53--88 in both wide area and data center networks compared to prior methods."
]
} |
1607.05159 | 2952551898 | Software-defined networking (SDN) allows operators to control the behavior of a network by programatically managing the forwarding rules installed on switches. However, as is common in distributed systems, it can be difficult to ensure that certain consistency properties are preserved during periods of reconfiguration. The widely-accepted notion of PER-PACKET CONSISTENCY requires every packet to be forwarded using the new configuration or the old configuration, but not a mixture of the two. If switches can be updated in some (partial) order which guarantees that per-packet consistency is preserved, we call this order a CONSISTENT ORDER UPDATE. In particular, switches that are incomparable in this order can be updated in parallel. We call a consistent order update OPTIMAL if it allows maximal parallelism. This paper presents a polynomial-time algorithm for finding an optimal consistent order update. This contrasts with other recent results in the literature, which show that for other classes of properties (e.g., loop-freedom and waypoint enforcement), the optimal update problem is NP-complete. | introduce dependency-graphs for network updates, and propose properties which could be addressed via this general approach. They show how to handle one of the properties ( loop-freedom ) in a minimal way. detail general algorithms for building dependency graphs and using these graphs to perform a consistent update. extend @cite_11 , and show that for blackhole-freedom , computing an update with a minimal number of rounds is -hard (when memory limits are assumed on switches). They also show -hardness results for rule-granular loop-free updates with maximal parallelism. Per-packet consistency in our problem is stronger than loop freedom and blackhole freedom, but we only consider solutions where each switch is updated once , and where a switch update swaps the entire old forwarding table with the new one simultaneously. | {
"cite_N": [
"@cite_11"
],
"mid": [
"2479920125"
],
"abstract": [
"We study consistent migration of flows, with special focus on software defined networks. Given a current and a desired network flow configuration, we give the first polynomial-time algorithm to decide if a congestion-free migration is possible. However, if all flows must be integer or are unsplittable, this is NP-hard to decide. A similar problem is providing increased bandwidth to an application, while keeping all other flows in the network, but possibly migrating them consistently to other paths. We show that the maximum increase can be approximated arbitrarily well in polynomial time. Current methods as RSVP-TE consider unsplittable flows and remove flows of lesser importance in order to increase bandwidth for an application: We prove that deciding what flows need to be removed is an NP-hard optimization problem with no PTAS possible unless P = NP."
]
} |
1607.04648 | 2512784927 | Given the vast amounts of video available online, and recent breakthroughs in object detection with static images, object detection in video offers a promising new frontier. However, motion blur and compression artifacts cause substantial frame-level variability, even in videos that appear smooth to the eye. Additionally, video datasets tend to have sparsely annotated frames. We present a new framework for improving object detection in videos that captures temporal context and encourages consistency of predictions. First, we train a pseudo-labeler, that is, a domain-adapted convolutional neural network for object detection. The pseudo-labeler is first trained individually on the subset of labeled frames, and then subsequently applied to all frames. Then we train a recurrent neural network that takes as input sequences of pseudo-labeled frames and optimizes an objective that encourages both accuracy on the target frame and consistency across consecutive frames. The approach incorporates strong supervision of target frames, weak-supervision on context frames, and regularization via a smoothness penalty. Our approach achieves mean Average Precision (mAP) of 68.73, an improvement of 7.1 over the strongest image-based baselines for the Youtube-Video Objects dataset. Our experiments demonstrate that neighboring frames can provide valuable information, even absent labels. | Prest et al. @cite_24 utilize weak supervision for object detection in videos via category-level annotations of frames, absent localization ground truth. This method assumes that the target object is moving, outputting a spatio-temporal tube that captures the most salient moving object. That paper, however, does not consider context within video for detecting multiple objects. | {
"cite_N": [
"@cite_24"
],
"mid": [
"1973054923"
],
"abstract": [
"Object detectors are typically trained on a large set of still images annotated by bounding-boxes. This paper introduces an approach for learning object detectors from real-world web videos known only to contain objects of a target class. We propose a fully automatic pipeline that localizes objects in a set of videos of the class and learns a detector for it. The approach extracts candidate spatio-temporal tubes based on motion segmentation and then selects one tube per video jointly over all videos. To compare to the state of the art, we test our detector on still images, i.e., Pascal VOC 2007. We observe that frames extracted from web videos can differ significantly in terms of quality to still images taken by a good camera. Thus, we formulate the learning from videos as a domain adaptation task. We show that training from a combination of weakly annotated videos and fully annotated still images using domain adaptation improves the performance of a detector trained from still images alone."
]
} |
1607.04648 | 2512784927 | Given the vast amounts of video available online, and recent breakthroughs in object detection with static images, object detection in video offers a promising new frontier. However, motion blur and compression artifacts cause substantial frame-level variability, even in videos that appear smooth to the eye. Additionally, video datasets tend to have sparsely annotated frames. We present a new framework for improving object detection in videos that captures temporal context and encourages consistency of predictions. First, we train a pseudo-labeler, that is, a domain-adapted convolutional neural network for object detection. The pseudo-labeler is first trained individually on the subset of labeled frames, and then subsequently applied to all frames. Then we train a recurrent neural network that takes as input sequences of pseudo-labeled frames and optimizes an objective that encourages both accuracy on the target frame and consistency across consecutive frames. The approach incorporates strong supervision of target frames, weak-supervision on context frames, and regularization via a smoothness penalty. Our approach achieves mean Average Precision (mAP) of 68.73, an improvement of 7.1 over the strongest image-based baselines for the Youtube-Video Objects dataset. Our experiments demonstrate that neighboring frames can provide valuable information, even absent labels. | Recently, Kang et al. @cite_12 introduced tubelets with convolutional neural networks (T-CNN) for detecting objects in video. T-CNN uses spatio-temporal tubelet proposal generation followed by classification and re-scoring, incorporating temporal and contextual information from tubelets obtained in videos. T-CNN won the recently introduced ImageNet object-detection-from-video (VID) task, for which densely annotated video clips are provided. Although the method is effective for densely annotated training data, its behavior on sparsely labeled data is not evaluated. | {
"cite_N": [
"@cite_12"
],
"mid": [
"2335901184"
],
"abstract": [
"Deep Convolution Neural Networks (CNNs) have shown impressive performance in various vision tasks such as image classification, object detection and semantic segmentation. For object detection, particularly in still images, the performance has been significantly increased last year thanks to powerful deep networks (e.g. GoogleNet) and detection frameworks (e.g. Regions with CNN features (RCNN)). The lately introduced ImageNet [6] task on object detection from video (VID) brings the object detection task into the video domain, in which objects' locations at each frame are required to be annotated with bounding boxes. In this work, we introduce a complete framework for the VID task based on still-image object detection and general object tracking. Their relations and contributions in the VID task are thoroughly studied and evaluated. In addition, a temporal convolution network is proposed to incorporate temporal information to regularize the detection results and shows its effectiveness for the task. Code is available at https: github.com myfavouritekk vdetlib."
]
} |
1607.04564 | 2950806816 | Vehicle detection and annotation for streaming video data with complex scenes is an interesting but challenging task for urban traffic surveillance. In this paper, we present a fast framework of Detection and Annotation for Vehicles (DAVE), which effectively combines vehicle detection and attributes annotation. DAVE consists of two convolutional neural networks (CNNs): a fast vehicle proposal network (FVPN) for vehicle-like objects extraction and an attributes learning network (ALN) aiming to verify each proposal and infer each vehicle's pose, color and type simultaneously. These two nets are jointly optimized so that abundant latent knowledge learned from the ALN can be exploited to guide FVPN training. Once the system is trained, it can achieve efficient vehicle detection and annotation for real-world traffic surveillance data. We evaluate DAVE on a new self-collected UTS dataset and the public PASCAL VOC2007 car and LISA 2010 datasets, with consistent improvements over existing algorithms. | Vehicle detection is a fundamental objective of traffic surveillance. Traditional vehicle detection methods can be categorized into frame-based and motion-based approaches @cite_27 @cite_26 . For motion-based approaches, frame subtraction @cite_23 , adaptive background modeling @cite_1 and optical flow @cite_28 @cite_12 are often utilized. However, motion-based approaches may falsely detect non-vehicle moving objects, since little visual information is exploited. To achieve higher detection performance, the deformable part-based model (DPM) @cite_9 employs a star-structured architecture consisting of root and part filters with associated deformation models for object detection. DPM can successfully handle deformable object detection even when the target is partially occluded. However, it incurs heavy computational costs due to the use of the sliding-window procedure for appearance feature extraction and classification. | {
"cite_N": [
"@cite_26",
"@cite_28",
"@cite_9",
"@cite_1",
"@cite_27",
"@cite_23",
"@cite_12"
],
"mid": [
"2134576786",
"",
"2168356304",
"2102625004",
"2165065922",
"62293165",
"2073212499"
],
"abstract": [
"Automatic video analysis from urban surveillance cameras is a fast-emerging field based on computer vision techniques. We present here a comprehensive review of the state-of-the-art computer vision for traffic video with a critical analysis and an outlook to future research directions. This field is of increasing relevance for intelligent transport systems (ITSs). The decreasing hardware cost and, therefore, the increasing deployment of cameras have opened a wide application field for video analytics. Several monitoring objectives such as congestion, traffic rule violation, and vehicle interaction can be targeted using cameras that were typically originally installed for human operators. Systems for the detection and classification of vehicles on highways have successfully been using classical visual surveillance techniques such as background estimation and motion tracking for some time. The urban domain is more challenging with respect to traffic density, lower camera angles that lead to a high degree of occlusion, and the variety of road users. Methods from object categorization and 3-D modeling have inspired more advanced techniques to tackle these challenges. There is no commonly used data set or benchmark challenge, which makes the direct comparison of the proposed algorithms difficult. In addition, evaluation under challenging weather conditions (e.g., rain, fog, and darkness) would be desirable but is rarely performed. Future work should be directed toward robust combined detectors and classifiers for all road users, with a focus on realistic conditions during evaluation.",
"",
"We describe an object detection system based on mixtures of multiscale deformable part models. Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. Our system relies on new methods for discriminative training with partially labeled data. We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI--SVM in terms of latent variables. A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function.",
"A common method for real-time segmentation of moving regions in image sequences involves \"background subtraction\", or thresholding the error between an estimate of the image without moving objects and the current image. The numerous approaches to this problem differ in the type of background model used and the procedure used to update the model. This paper discusses modeling each pixel as a mixture of Gaussians and using an on-line approximation to update the model. The Gaussian, distributions of the adaptive mixture model are then evaluated to determine which are most likely to result from a background process. Each pixel is classified based on whether the Gaussian distribution which represents it most effectively is considered part of the background model. This results in a stable, real-time outdoor tracker which reliably deals with lighting changes, repetitive motions from clutter, and long-term scene changes. This system has been run almost continuously for 16 months, 24 hours a day, through rain and snow.",
"This paper provides a review of the literature in on-road vision-based vehicle detection, tracking, and behavior understanding. Over the past decade, vision-based surround perception has progressed from its infancy into maturity. We provide a survey of recent works in the literature, placing vision-based vehicle detection in the context of sensor-based on-road surround analysis. We detail advances in vehicle detection, discussing monocular, stereo vision, and active sensor-vision fusion for on-road vehicle detection. We discuss vision-based vehicle tracking in the monocular and stereo-vision domains, analyzing filtering, estimation, and dynamical models. We discuss the nascent branch of intelligent vehicles research concerned with utilizing spatiotemporal measurements, trajectories, and various features to characterize on-road behavior. We provide a discussion on the state of the art, detail common performance metrics and benchmarks, and provide perspective on future research directions in the field.",
"Novel 1-heterocyclyloxy- or 1-aryloxy-3-amidoalkylamino-2-propanol derivatives, processes for their manufacture, pharmaceutical compositions containing them and methods of using them in the treatment of heart diseases. The compounds possess beta -adrenergic blocking activity. Representative of the compounds disclosed is 1-(4-indolyloxy)-3- beta -isobutyramidoethylamino-2-propanol.",
"In this paper, we present a new approach for human action recognition based on key-pose selection and representation. Poses in video frames are described by the proposed extensive pyramidal features (EPFs), which include the Gabor, Gaussian, and wavelet pyramids. These features are able to encode the orientation, intensity, and contour information and therefore provide an informative representation of human poses. Due to the fact that not all poses in a sequence are discriminative and representative, we further utilize the AdaBoost algorithm to learn a subset of discriminative poses. Given the boosted poses for each video sequence, a new classifier named weighted local naive Bayes nearest neighbor is proposed for the final action classification, which is demonstrated to be more accurate and robust than other classifiers, e.g., support vector machine (SVM) and naive Bayes nearest neighbor. The proposed method is systematically evaluated on the KTH data set, the Weizmann data set, the multiview IXMAS data set, and the challenging HMDB51 data set. Experimental results manifest that our method outperforms the state-of-the-art techniques in terms of recognition rate."
]
} |
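The adaptive background-subtraction abstract above models each pixel as an online-updated mixture of Gaussians. A minimal sketch of the idea, simplified to a single Gaussian per pixel (the learning rate, threshold, and toy frames are illustrative, not the cited system's parameters):

```python
import numpy as np

def update_background(frame, mean, var, alpha=0.05, k=2.5):
    """One step of a running per-pixel Gaussian background model.

    Single-Gaussian simplification of the adaptive mixture model:
    a pixel is foreground if it lies more than k standard deviations
    from its current Gaussian; the model is then blended toward the
    new frame with learning rate alpha.
    """
    std = np.sqrt(var)
    foreground = np.abs(frame - mean) > k * std
    # Online update of mean and variance.
    mean = (1 - alpha) * mean + alpha * frame
    var = (1 - alpha) * var + alpha * (frame - mean) ** 2
    var = np.maximum(var, 1e-4)  # keep variance away from zero
    return foreground, mean, var

# Toy usage: a static 4x4 scene, then one bright "moving object" pixel.
rng = np.random.default_rng(0)
mean = np.full((4, 4), 100.0)
var = np.full((4, 4), 4.0)
for _ in range(20):  # burn-in on background-only frames
    frame = 100.0 + rng.normal(0, 1, (4, 4))
    _, mean, var = update_background(frame, mean, var)
frame = 100.0 + rng.normal(0, 1, (4, 4))
frame[2, 2] = 180.0  # object enters at pixel (2, 2)
fg, mean, var = update_background(frame, mean, var)
print(fg[2, 2], fg[0, 0])
```

A full tracker, as in the cited work, would keep several Gaussians per pixel and adapt only the distributions judged to be background.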
1607.04564 | 2950806816 | Vehicle detection and annotation for streaming video data with complex scenes is an interesting but challenging task for urban traffic surveillance. In this paper, we present a fast framework of Detection and Annotation for Vehicles (DAVE), which effectively combines vehicle detection and attributes annotation. DAVE consists of two convolutional neural networks (CNNs): a fast vehicle proposal network (FVPN) for vehicle-like objects extraction and an attributes learning network (ALN) aiming to verify each proposal and infer each vehicle's pose, color and type simultaneously. These two nets are jointly optimized so that abundant latent knowledge learned from the ALN can be exploited to guide FVPN training. Once the system is trained, it can achieve efficient vehicle detection and annotation for real-world traffic surveillance data. We evaluate DAVE on a new self-collected UTS dataset and the public PASCAL VOC2007 car and LISA 2010 datasets, with consistent improvements over existing algorithms. | With the wide success of deep networks on image classification @cite_0 @cite_30 @cite_18 @cite_22 @cite_19 , a Region-based CNN (RCNN) @cite_16 combines object proposals, CNN learned features and an SVM classifier for accurate object detection. To further increase the detection speed and accuracy, Fast RCNN @cite_21 adopts a region of interest (ROI) pooling layer and a multi-task loss to estimate object classes while predicting bounding-box positions. "Objectness" proposal methods such as Selective Search @cite_13 and Edgeboxes @cite_14 can be introduced into RCNN and Fast RCNN to improve efficiency compared to the traditional sliding-window fashion. Furthermore, Faster RCNN @cite_3 employs a Region Proposal Network (RPN) with shared convolutional features to enable nearly cost-free, effective proposals. All these deep models target general object detection. In our task, we aim for real-time detection of one specific object type: vehicles. | {
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_14",
"@cite_22",
"@cite_21",
"@cite_3",
"@cite_0",
"@cite_19",
"@cite_16",
"@cite_13"
],
"mid": [
"",
"2295038166",
"7746136",
"1985469025",
"",
"2953106684",
"",
"",
"2102605133",
"2088049833"
],
"abstract": [
"",
"This paper describes a novel method called Deep Dynamic Neural Networks (DDNN) for multimodal gesture recognition. A semi-supervised hierarchical dynamic framework based on a Hidden Markov Model (HMM) is proposed for simultaneous gesture segmentation and recognition where skeleton joint information, depth and RGB images, are the multimodal input observations. Unlike most traditional approaches that rely on the construction of complex handcrafted features, our approach learns high-level spatio-temporal representations using deep neural networks suited to the input modality: a Gaussian-Bernouilli Deep Belief Network ( DBN ) to handle skelet al dynamics, and a 3D Convolutional Neural Network ( 3DCNN ) to manage and fuse batches of depth and RGB images. This is achieved through the modeling and learning of the emission probabilities of the HMM required to infer the gesture sequence. This purely data driven approach achieves a Jaccard index score of 0.81 in the ChaLearn LAP gesture spotting challenge. The performance is on par with a variety of state-of-the-art hand-tuned feature-based approaches and other learning-based methods, therefore opening the door to the use of deep learning techniques in order to further explore multimodal time series data.",
"The use of object proposals is an effective recent approach for increasing the computational efficiency of object detection. We propose a novel method for generating object bounding box proposals using edges. Edges provide a sparse yet informative representation of an image. Our main observation is that the number of contours that are wholly contained in a bounding box is indicative of the likelihood of the box containing an object. We propose a simple box objectness score that measures the number of edges that exist in the box minus those that are members of contours that overlap the box’s boundary. Using efficient data structures, millions of candidate boxes can be evaluated in a fraction of a second, returning a ranked set of a few thousand top-scoring proposals. Using standard metrics, we show results that are significantly more accurate than the current state-of-the-art while being faster to compute. In particular, given just 1000 proposals we achieve over 96 object recall at overlap threshold of 0.5 and over 75 recall at the more challenging overlap of 0.7. Our approach runs in 0.25 seconds and we additionally demonstrate a near real-time variant with only minor loss in accuracy.",
"Over the last few years, with the immense popularity of the Kinect, there has been renewed interest in developing methods for human gesture and action recognition from 3D skelet al data. A number of approaches have been proposed to extract representative features from 3D skelet al data, most commonly hard wired geometric or bio-inspired shape context features. We propose a hierarchial dynamic framework that first extracts high level skelet al joints features and then uses the learned representation for estimating emission probability to infer action sequences. Currently gaussian mixture models are the dominant technique for modeling the emission distribution of hidden Markov models. We show that better action recognition using skelet al features can be achieved by replacing gaussian mixture models by deep neural networks that contain many layers of features to predict probability distributions over states of hidden Markov models. The framework can be easily extended to include a ergodic state to segment and recognize actions simultaneously.",
"",
"State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features---using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.",
"",
"",
"Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn.",
"This paper addresses the problem of generating possible object locations for use in object recognition. We introduce selective search which combines the strength of both an exhaustive search and segmentation. Like segmentation, we use the image structure to guide our sampling process. Like exhaustive search, we aim to capture all possible object locations. Instead of a single technique to generate possible object locations, we diversify our search and use a variety of complementary image partitionings to deal with as many image conditions as possible. Our selective search results in a small set of data-driven, class-independent, high quality locations, yielding 99 recall and a Mean Average Best Overlap of 0.879 at 10,097 locations. The reduced number of locations compared to an exhaustive search enables the use of stronger machine learning techniques and stronger appearance models for object recognition. In this paper we show that our selective search enables the use of the powerful Bag-of-Words model for recognition. The selective search software is made publicly available (Software: http: disi.unitn.it uijlings SelectiveSearch.html )."
]
} |
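The ROI pooling layer mentioned in the related work above turns an arbitrarily sized proposal into a fixed-size feature by max-pooling a grid of sub-windows, so every region yields the same-length representation. A minimal single-channel NumPy sketch (grid size and coordinates are illustrative, not Fast RCNN's exact implementation):

```python
import numpy as np

def roi_pool(feature_map, roi, output_size=(2, 2)):
    """Max-pool one region of interest to a fixed spatial size.

    feature_map: (H, W) array of conv features (one channel for clarity).
    roi: (x1, y1, x2, y2) in feature-map coordinates, end-exclusive.
    Each output cell max-pools over its sub-window of the ROI.
    """
    x1, y1, x2, y2 = roi
    region = feature_map[y1:y2, x1:x2]
    out_h, out_w = output_size
    h_edges = np.linspace(0, region.shape[0], out_h + 1).astype(int)
    w_edges = np.linspace(0, region.shape[1], out_w + 1).astype(int)
    out = np.empty(output_size)
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = region[h_edges[i]:h_edges[i + 1],
                               w_edges[j]:w_edges[j + 1]].max()
    return out

fm = np.arange(36, dtype=float).reshape(6, 6)
pooled = roi_pool(fm, (1, 1, 5, 5))  # 4x4 region -> fixed 2x2 output
print(pooled)
```

Because the output size is fixed, proposals of any shape can feed the same fully connected classification and bounding-box regression heads.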
1607.04209 | 2486374285 | Online surveys have the potential to support adaptive questions, where later questions depend on earlier responses. Past work has taken a rule-based approach, uniformly across all respondents. We envision a richer interpretation of adaptive questions, which we call dynamic question ordering (DQO), where question order is personalized. Such an approach could increase engagement, and therefore response rate, as well as imputation quality. We present a DQO framework to improve survey completion and imputation. In the general survey-taking setting, we want to maximize survey completion, and so we focus on ordering questions to engage the respondent and collect hopefully all information, or at least the information that most characterizes the respondent, for accurate imputations. In another scenario, our goal is to provide a personalized prediction. Since it is possible to give reasonable predictions with only a subset of questions, we are not concerned with motivating users to answer all questions. Instead, we want to order questions to get information that reduces prediction uncertainty, while not being too burdensome. We illustrate this framework with an example of providing energy estimates to prospective tenants. We also discuss DQO for national surveys and consider connections between our statistics-based question-ordering approach and cognitive survey methodology. | In the field of medical statistics, adaptive treatment strategies (also called dynamic treatment regimes) continually adjust treatments, according to decision rules, depending on an individual's responses to previous treatments as well as characteristics of the patient @cite_43 . This technique contrasts with the research standard of randomized controlled trials, but more closely matches real-world practice of medical intervention (since, when a treatment fails for a particular patient, that patient is reassigned to a new treatment, based on how they reacted). 
Adaptive treatment strategies are targeted for an individual, rather than basing future treatment decisions on outcomes of previous patients. | {
"cite_N": [
"@cite_43"
],
"mid": [
"1510413338"
],
"abstract": [
"Abstract In this article two new methods for building and evaluating eHealth interventions are described. The first is the Multiphase Optimization Strategy (MOST). It consists of a screening phase, in which intervention components are efficiently identified for inclusion in an intervention or for rejection, based on their performance; a refining phase, in which the selected components are fine tuned and issues such as optimal levels of each component are investigated; and a confirming phase, in which the optimized intervention, consisting of the selected components delivered at optimal levels, is evaluated in a standard randomized controlled trial. The second is the Sequential Multiple Assignment Randomized Trial (SMART), which is an innovative research design especially suited for building time-varying adaptive interventions. A SMART trial can be used to identify the best tailoring variables and decision rules for an adaptive intervention empirically. Both the MOST and SMART approaches use randomized experimentation to enable valid inferences. When properly implemented, these approaches will lead to the development of more potent eHealth interventions."
]
} |
1607.04209 | 2486374285 | Online surveys have the potential to support adaptive questions, where later questions depend on earlier responses. Past work has taken a rule-based approach, uniformly across all respondents. We envision a richer interpretation of adaptive questions, which we call dynamic question ordering (DQO), where question order is personalized. Such an approach could increase engagement, and therefore response rate, as well as imputation quality. We present a DQO framework to improve survey completion and imputation. In the general survey-taking setting, we want to maximize survey completion, and so we focus on ordering questions to engage the respondent and collect hopefully all information, or at least the information that most characterizes the respondent, for accurate imputations. In another scenario, our goal is to provide a personalized prediction. Since it is possible to give reasonable predictions with only a subset of questions, we are not concerned with motivating users to answer all questions. Instead, we want to order questions to get information that reduces prediction uncertainty, while not being too burdensome. We illustrate this framework with an example of providing energy estimates to prospective tenants. We also discuss DQO for national surveys and consider connections between our statistics-based question-ordering approach and cognitive survey methodology. | The design of the sequential multiple assignment randomized (SMAR) trial @cite_20 chooses a decision to make at each point according to what action will maximize the expected treatment outcome, given past information that has occurred. SMAR trials randomize individuals to different treatments at each decision time point. | {
"cite_N": [
"@cite_20"
],
"mid": [
"2027795129"
],
"abstract": [
"In adaptive treatment strategies, the treatment level and type is repeatedly adjusted according to ongoing individual response. Since past treatment may have delayed effects, the development of these treatment strategies is challenging. This paper advocates the use of sequential multiple assignment randomized trials in the development of adaptive treatment strategies. Both a simple ad hoc method for ascertaining sample sizes and simple analysis methods are provided. Copyright © 2004 John Wiley & Sons, Ltd."
]
} |
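The SMAR-trial decision rule described above (at each point, choose the action maximizing expected treatment outcome given the information so far) is commonly estimated by backward induction over the stages. The sketch below illustrates that general idea on simulated two-stage data with ordinary least squares per stage; it is an illustration only, not the analysis method of the cited trial, and all variable names and effect sizes are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Simulated two-stage trial: binary treatments a1, a2 randomized at each
# stage; x1, x2 are patient states; y is the final outcome.
x1 = rng.normal(size=n)
a1 = rng.integers(0, 2, n)
x2 = x1 + 0.5 * a1 + rng.normal(scale=0.5, size=n)
a2 = rng.integers(0, 2, n)
# True effect: a2 helps when x2 > 0 and hurts otherwise; a1 raises x2.
y = x2 + a2 * np.sign(x2) + rng.normal(scale=0.5, size=n)

def fit(X, y):
    """Ordinary least squares via numpy's least-squares solver."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Stage 2: model Q2(x2, a2) = E[y | x2, a2] with an x2*a2 interaction.
X2 = np.column_stack([np.ones(n), x2, a2, x2 * a2])
b2 = fit(X2, y)

def q2(x2v, a2v):
    return b2[0] + b2[1] * x2v + b2[2] * a2v + b2[3] * x2v * a2v

# Value under the optimal stage-2 rule: best predicted outcome per patient.
v2 = np.maximum(q2(x2, 0), q2(x2, 1))

# Stage 1: regress the stage-2 value (pseudo-outcome) on (x1, a1).
X1 = np.column_stack([np.ones(n), x1, a1])
b1 = fit(X1, v2)

def recommend_stage2(x2v):
    """Recommend the stage-2 action with the larger predicted outcome."""
    return int(q2(x2v, 1) > q2(x2v, 0))

print(recommend_stage2(1.0), recommend_stage2(-1.0))
```

The fitted rule recovers the simulated structure: treat at stage 2 when the interim state is favorable, and the stage-1 coefficient on a1 is positive because a1 improves the interim state.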
1607.04209 | 2486374285 | Online surveys have the potential to support adaptive questions, where later questions depend on earlier responses. Past work has taken a rule-based approach, uniformly across all respondents. We envision a richer interpretation of adaptive questions, which we call dynamic question ordering (DQO), where question order is personalized. Such an approach could increase engagement, and therefore response rate, as well as imputation quality. We present a DQO framework to improve survey completion and imputation. In the general survey-taking setting, we want to maximize survey completion, and so we focus on ordering questions to engage the respondent and collect hopefully all information, or at least the information that most characterizes the respondent, for accurate imputations. In another scenario, our goal is to provide a personalized prediction. Since it is possible to give reasonable predictions with only a subset of questions, we are not concerned with motivating users to answer all questions. Instead, we want to order questions to get information that reduces prediction uncertainty, while not being too burdensome. We illustrate this framework with an example of providing energy estimates to prospective tenants. We also discuss DQO for national surveys and consider connections between our statistics-based question-ordering approach and cognitive survey methodology. 
| Adaptive treatment strategies have been applied to treat depression, with the STAR*D (sequenced treatment alternatives to relieve depression) treatment @cite_21 , in which patients who did not respond to less-intensive therapies were randomly assigned to more intensive treatments at higher levels; to treat schizophrenia, with the CATIE (clinical antipsychotic trials of intervention effectiveness) design @cite_6 , a three-phase study where patients were randomly assigned to new treatments at successive phases if they did not respond to earlier treatments; to treat advanced prostate cancer @cite_34 by randomizing nonfavorably-responding patients to untried chemotherapy treatments at eight-week intervals, up to four times; and many other medical settings ( smoking cessation @cite_7 , pediatric generalized anxiety disorders @cite_18 , and mood disorders @cite_29 @cite_2 ). | {
"cite_N": [
"@cite_18",
"@cite_7",
"@cite_29",
"@cite_21",
"@cite_6",
"@cite_2",
"@cite_34"
],
"mid": [
"2019384544",
"",
"1987132730",
"2023634568",
"2162993517",
"1972388887",
"2167371237"
],
"abstract": [
"There is growing interest in how best to adapt and re-adapt treatments to individuals to maximize clinical benefit. In response, adaptive treatment strategies (ATS), which operationalize adaptive, sequential clinical decision making, have been developed. From a patient's perspective an ATS is a sequence of treatments, each individualized to the patient's evolving health status. From a clinician's perspective, an ATS is a sequence of decision rules that input the patient's current health status and output the next recommended treatment. Sequential multiple assignment randomized trials (SMART) have been developed to address the sequencing questions that arise in the development of ATSs, but SMARTs are relatively new in clinical research. This article provides an introduction to ATSs and SMART designs. This article also discusses the design of SMART pilot studies to address feasibility concerns, and to prepare investigators for a full-scale SMART. As an example, we consider an example SMART for the development of an ATS in the treatment of pediatric generalized anxiety disorders. Using the example SMART, we identify and discuss design issues unique to SMARTs that are best addressed in an external pilot study prior to the full-scale SMART. We also address the question of how many participants are needed in a SMART pilot study. A properly executed pilot study can be used to effectively address concerns about acceptability and feasibility in preparation for (that is, prior to) executing a full-scale SMART.",
"",
"Multiple treatments are available for nearly all the mood disorders. This range of treatment options adds a new dimension of choice to clinical decision making. In addition to prescribing the best initial treatment, clinicians should have an algorithm for deciding if and when to make subsequent changes in treatment to take advantage of second-line treatment options when necessary. This article aims to 1) show that a wide variety of clinical decisions can be framed as choices among adaptive (within-patient) threshold-based strategies or algorithms, illustrating the generality of the concept; 2) illustrate two ways to design randomized clinical trials to compare treatment strategies with each other to decide which strategy is best; and 3) discuss some of the advantages offered by these designs, in terms of both patient acceptability and adherence to experimental protocols.",
"Abstract STAR*D is a multisite, prospective, randomized, multistep clinical trial of outpatients with nonpsychotic major depressive disorder. The study compares various treatment options for those who do not attain a satisfactory response with citalopram, a selective serotonin reuptake inhibitor antidepressant. The study enrolls 4000 adults (ages 18–75) from both primary and specialty care practices who have not had either a prior inadequate response or clear-cut intolerance to a robust trial of protocol treatments during the current major depressive episode. After receiving citalopram (level 1), participants without sufficient symptomatic benefit are eligible for randomization to level 2 treatments, which entail four switch options (sertraline, bupropion, venlafaxine, cognitive therapy) and three citalopram augment options (bupropion, buspirone, cognitive therapy). Those who receive cognitive therapy (switch or augment options) at level 2 without sufficient improvement are eligible for randomization to one of two level 2A switch options (venlafaxine or bupropion). Level 2 and 2A participants without sufficient improvement are eligible for random assignment to two switch options (mirtazapine or nortriptyline) and to two augment options (lithium or thyroid hormone) added to the primary antidepressant (citalopram, bupropion, sertraline, or venlafaxine) (level 3). Those without sufficient improvement at level 3 are eligible for level 4 random assignment to one of two switch options (tranylcypromine or the combination of mirtazapine and venlafaxine). The primary outcome is the clinician-rated, 17-item Hamilton Rating Scale for Depression, administered at entry and exit from each treatment level through telephone interviews by assessors masked to treatment assignments. Secondary outcomes include self-reported depressive symptoms, physical and mental function, side-effect burden, client satisfaction, and health care utilization and cost. 
Participants with an adequate symptomatic response may enter the 12-month naturalistic follow-up phase with brief monthly and more complete quarterly assessments.",
"Abstract The National Institute of Mental Health initiated the Clinical Antipsychotic Trials of Intervention Effectiveness (CATIE) program to evaluate the effectiveness of antipsychotic drugs in typical settings and populations so that the study results will be maximally useful in routine clinical situations. The CATIE schizophrenia trial blends features of efficacy studies and large, simple trials to create a pragmatic trial that will provide extensive information about antipsychotic drug effectiveness over at least 18 months. The protocol allows for subjects who receive a study drug that is not effective to receive subsequent treatments within the context of the study. Medication dosages are adjusted within a defined range according to clinical judgment. The primary outcome is all-cause treatment discontinuation because it represents an important clinical endpoint that reflects both clinician and patient judgments about efficacy and tolerability. Secondary outcomes include symptoms, side effects, neurocognitive functioning, and cost-effectiveness. Approximately 50 clinical sites across the United States are seeking to enroll a total of 1,500 persons with schizophrenia. Phase 1 is a double-blinded randomized clinical trial comparing treatment with the second generation antipsychotics olanzapine, quetiapine, risperidone, and ziprasidone to perphenazine, a midpotency first generation antipsychotic. If the initially assigned medication is not effective, subjects may choose one of the following phase 2 trials: (1) randomization to open-label clozapine or a double-blinded second generation drug that was available but not assigned in phase 1; or (2) double-blinded randomization to ziprasidone or another second generation drug that was available but not assigned in phase 1. If the phase 2 study drug is discontinued, subjects may enter phase 3, in which clinicians help subjects select an open-label treatment based on individuals' experiences in phases 1 and 2.",
"Background: Despite the availability of psychosocial evidence-based practices (EBPs), treatment and outcomes for persons with mental disorders remain suboptimal. Replicating Effective Programs (REP), an effective implementation strategy, still resulted in less than half of sites using an EBP. The primary aim of this cluster randomized trial is to determine, among sites not initially responding to REP, the effect of adaptive implementation strategies that begin with an External Facilitator (EF) or with an External Facilitator plus an Internal Facilitator (IF) on improved EBP use and patient outcomes in 12 months. Methods Design: This study employs a sequential multiple assignment randomized trial (SMART) design to build an adaptive implementation strategy. The EBP to be implemented is life goals (LG) for patients with mood disorders across 80 community-based outpatient clinics (N = 1,600 patients) from different U.S. regions. Sites not initially responding to REP (defined as <50 patients receiving ≥3 EBP sessions) will be randomized to receive additional support from an EF or both EF IF. Additionally, sites randomized to EF and still not responsive will be randomized to continue with EF alone or to receive EF IF. The EF provides technical expertise in adapting LG in routine practice, whereas the on-site IF has direct reporting relationships to site leadership to support LG use in routine practice. The primary outcome is mental health-related quality of life; secondary outcomes include receipt of LG sessions, mood symptoms, implementation costs, and organizational change.",
"We present new statistical analyses of data arising from a clinical trial designed to compare two-stage dynamic treatment regimes (DTRs) for advanced prostate cancer. The trial protocol mandated that patients be initially randomized among four chemotherapies, and that those who responded poorly be re-randomized to one of the remaining candidate therapies. The primary aim was to compare the DTRs’ overall success rates, with success defined by the occurrence of successful responses in each of two consecutive courses of the patient’s therapy. Of the 150 study participants, 47 did not complete their therapy as per the algorithm. However, 35 of them did so for reasons that precluded further chemotherapy, that is, toxicity and or progressive disease. Consequently, rather than comparing the overall success rates of the DTRs in the unrealistic event that these patients had remained on their assigned chemotherapies, we conducted an analysis that compared viable switch rules defined by the per-protocol rules but with the additional provision that patients who developed toxicity or progressive disease switch to a non-prespecified therapeutic or palliative strategy. This modification involved consideration of bivariate per-course outcomes encoding both efficacy and toxicity. We used numerical scores elicited from the trial’s principal investigator to quantify the clinical desirability of each bivariate per-course outcome, and defined one endpoint as their average over all courses of treatment. Two other simpler sets of scores as well as log survival time were also used as endpoints. Estimation of each DTR-specific mean score was conducted using inverse probability weighted methods that assumed that missingness in the 12 remaining dropouts was informative but explainable in that it only depended on past recorded data. We conducted additional worst- and best-case analyses to evaluate sensitivity of our findings to extreme departures from the explainable dropout assumption."
]
} |
1607.04209 | 2486374285 | Online surveys have the potential to support adaptive questions, where later questions depend on earlier responses. Past work has taken a rule-based approach, uniformly across all respondents. We envision a richer interpretation of adaptive questions, which we call dynamic question ordering (DQO), where question order is personalized. Such an approach could increase engagement, and therefore response rate, as well as imputation quality. We present a DQO framework to improve survey completion and imputation. In the general survey-taking setting, we want to maximize survey completion, and so we focus on ordering questions to engage the respondent and collect hopefully all information, or at least the information that most characterizes the respondent, for accurate imputations. In another scenario, our goal is to provide a personalized prediction. Since it is possible to give reasonable predictions with only a subset of questions, we are not concerned with motivating users to answer all questions. Instead, we want to order questions to get information that reduces prediction uncertainty, while not being too burdensome. We illustrate this framework with an example of providing energy estimates to prospective tenants. We also discuss DQO for national surveys and consider connections between our statistics-based question-ordering approach and cognitive survey methodology. | For tests that measure ability or aptitude, adaptive testing selects test questions based on the respondent's answers to previous questions. The goal is to measure the examinee's achievement accurately, without making the examinee answer too many questions. 
Adaptive tests have been shown to be as reliable and valid as conventional tests (with static question orders), while reducing test length by up to 50%. Weiss and Kingsbury @cite_42 introduce adaptive mastery testing to assess a student's achievement level @math , specifically how the estimated achievement level compares to a "mastery level" @math . At each time point, the question that gives the maximum information at the student's current estimated achievement level is selected and asked. As the student answers questions, the estimate @math is updated, along with a confidence interval. Once the confidence interval for @math no longer includes @math , the test is finished and the student's mastery level is assigned as sufficient or not (depending on whether @math lies above or below the confidence interval for @math ). | {
"cite_N": [
"@cite_42"
],
"mid": [
"1982500367"
],
"abstract": [
"Three applications of computerized adaptive testing (CAT) to help solve problems encountered in educational settings are described and discussed. Each of these applications makes use of item response theory to select test questions from an item pool to estimate a student's achievement level and its precision. These estimates may then be used in conjunction with certain testing strategies to facilitate certain educational decisions. The three applications considered are (a) adaptive mastery testing for determining whether or not a student has mastered a particular content area, (b) adaptive grading for assigning grades to students, and (c) adaptive self-referenced testing for estimating change in a student's achievement level. Differences between currently used classroom procedures and these CAT procedures are discussed. For the adaptive mastery testing procedure, evidence from a series of studies comparing conventional and adaptive testing procedures is presented showing that the adaptive procedure results in more accurate mastery classifications than do conventional mastery tests, while using fewer test questions."
]
} |
1607.04209 | 2486374285 | Online surveys have the potential to support adaptive questions, where later questions depend on earlier responses. Past work has taken a rule-based approach, uniformly across all respondents. We envision a richer interpretation of adaptive questions, which we call dynamic question ordering (DQO), where question order is personalized. Such an approach could increase engagement, and therefore response rate, as well as imputation quality. We present a DQO framework to improve survey completion and imputation. In the general survey-taking setting, we want to maximize survey completion, and so we focus on ordering questions to engage the respondent and collect hopefully all information, or at least the information that most characterizes the respondent, for accurate imputations. In another scenario, our goal is to provide a personalized prediction. Since it is possible to give reasonable predictions with only a subset of questions, we are not concerned with motivating users to answer all questions. Instead, we want to order questions to get information that reduces prediction uncertainty, while not being too burdensome. We illustrate this framework with an example of providing energy estimates to prospective tenants. We also discuss DQO for national surveys and consider connections between our statistics-based question-ordering approach and cognitive survey methodology. | More recently, IRT-based adaptive testing has been used for diagnoses of mental health disorders through patient questionnaires @cite_27 . Their experiments show that their adaptive diagnosis process can, in only one minute of testing, arrive at the same diagnosis as a trained clinician in one hour. Montgomery and Cutler @cite_41 have also used IRT-based adaptive testing, but for public opinion surveys. 
In an empirical study using adaptive testing to measure respondents' political knowledge, the authors found that the adaptive testing approach could produce more accurate measurements than traditional test administration, at a roughly 40% reduction in the number of questions asked. | {
"cite_N": [
"@cite_41",
"@cite_27"
],
"mid": [
"2164632218",
"2107823993"
],
"abstract": [
"Survey researchers avoid using large multi-item scales to measure latent traits due to both the financial costs and the risk of driving up non-response rates. Typically, investigators select a subset of available scale items rather than asking the full battery. Reduced batteries, however, can sharply reduce measurement precision and introduce bias. In this article, we present computerized adaptive testing (CAT) as a method for minimizing the number of questions each respondent must answer while preserving measurement accuracy and precision. CAT algorithms respond to individuals’ previous answers to select subsequent questions that most efficiently reveal respondents’ position on a latent dimension. We introduce the basic stages of a CAT algorithm and present the details for one approach to item-selection appropriate for public opinion research. We then demonstrate the advantages of CAT via simulation and empirically comparing dynamic and static measures of political knowledge.",
"In this review we explore recent developments in computerized adaptive diagnostic screening and computerized adaptive testing for the presence and severity of mental health disorders such as depression, anxiety, and mania. The statistical methodology is unique in that it is based on multidimensional item response theory (severity) and random forests (diagnosis) instead of traditional mental health measurement based on classical test theory (a simple total score) or unidimensional item response theory. We show that the information contained in large item banks consisting of hundreds of symptom items can be efficiently calibrated using multidimensional item response theory, and the information contained in these large item banks can be precisely extracted using adaptive administration of a small set of items for each individual. In terms of diagnosis, computerized adaptive diagnostic screening can accurately track an hour-long face-to-face clinician diagnostic interview for major depressive disorder (as an example) in less than a minute using an average of four questions with unprecedented high sensitivity and specificity. Directions for future research and applications are discussed."
]
} |
1607.04347 | 2514300030 | Despite being one of the most basic tasks in software development, debugging is still performed in a mostly manual way, leading to high cost and low performance. To address this problem, researchers have studied promising approaches, such as Spectrum-based Fault Localization (SFL) techniques, which pinpoint program elements more likely to contain faults. This survey discusses the state-of-the-art of SFL, including the different techniques that have been proposed, the type and number of faults they address, the types of spectra they use, the programs they utilize in their validation, the testing data that support them, and their use at industrial settings. Notwithstanding the advances, there are still challenges for the industry to adopt these techniques, which we analyze in this paper. SFL techniques should propose new ways to generate reduced sets of suspicious entities, combine different spectra to fine-tune the fault localization ability, use strategies to collect fine-grained coverage levels from suspicious coarser levels for balancing execution costs and output precision, and propose new techniques to cope with multiple-fault programs. Moreover, additional user studies are needed to understand better how SFL techniques can be used in practice. We conclude by presenting a concept map about topics and challenges for future research in SFL. | Other studies were proposed to provide an overview of the fault localization area. The studies shown in @cite_54 @cite_31 @cite_13 @cite_32 evaluated and compared the performance of several ranking metrics. alipour2011 conducted a survey on fault localization . The author considered that the major fault localization approaches are program slicing, spectrum-based, statistical inference, delta debugging, dynamic, and model checking. In his survey, only six studies of SFL techniques were addressed---most of the studies are related to model checking-based techniques. 
The author concludes that such techniques are far from practical use due to difficulties related to execution time and scalability for large programs. Beyond these concerns, model checking techniques usually require formal specifications of programs, which are difficult to obtain for most programs. Agarwal and colleagues (2014) presented a literature review on fault localization, including studies from 2007 to 2013. They selected 30 papers from major Software Engineering journals and conferences. Most of the papers focus on test suite improvements for fault localization and on SFL techniques. The results are presented in a table describing the studies' characteristics, along with a description of the most frequent techniques and strategies in the area. | {
"cite_N": [
"@cite_13",
"@cite_54",
"@cite_31",
"@cite_32"
],
"mid": [
"2070249305",
"2010833880",
"2053060859",
"2036853814"
],
"abstract": [
"An important research area of Spectrum-Based Fault Localization (SBFL) is the effectiveness of risk evaluation formulas. Most previous studies have adopted an empirical approach, which can hardly be considered as sufficiently comprehensive because of the huge number of combinations of various factors in SBFL. Though some studies aimed at overcoming the limitations of the empirical approach, none of them has provided a completely satisfactory solution. Therefore, we provide a theoretical investigation on the effectiveness of risk evaluation formulas. We define two types of relations between formulas, namely, equivalent and better. To identify the relations between formulas, we develop an innovative framework for the theoretical investigation. Our framework is based on the concept that the determinant for the effectiveness of a formula is the number of statements with risk values higher than the risk value of the faulty statement. We group all program statements into three disjoint sets with risk values higher than, equal to, and lower than the risk value of the faulty statement, respectively. For different formulas, the sizes of their sets are compared using the notion of subset. We use this framework to identify the maximal formulas which should be the only formulas to be used in SBFL.",
"This article presents an improved approach to assist diagnosis of failures in software (fault localisation) by ranking program statements or blocks in accordance with to how likely they are to be buggy. We present a very simple single-bug program to model the problem. By examining different possible execution paths through this model program over a number of test cases, the effectiveness of different proposed spectral ranking methods can be evaluated in idealised conditions. The results are remarkably consistent to those arrived at empirically using the Siemens test suite and Space benchmarks. The model also helps identify groups of metrics that are equivalent for ranking. Due to the simplicity of the model, an optimal ranking method can be devised. This new method out-performs previously proposed methods for the model program, the Siemens test suite and Space. It also helps provide insight into other ranking methods.",
"Software fault localization is an expensive component of program debugging, and thus, many different types of fault localization techniques have been proposed over the recent years. Such techniques aim to rank program components (such as statements, blocks, functions, etc.) in decreasing order of their likelihood of being faulty, such that programmers may then examine the ranking starting from the top, until a fault is found. However, comparisons between fault localization techniques (to see which one is more effective) have generally been based on case studies and empirical data. In this paper we propose an equivalence relation by virtue of which two or more fault localization techniques may be considered equivalent if they produce identical rankings of program components, and are therefore, equally as effective. We then make use of the proposed equivalence relation to prove that several similarity coefficient-based fault localization techniques are in fact equivalent to one another. Furthermore, no case studies and or data were required for any of the proofs of equivalency provided in this paper.",
"Spectrum-based fault localization refers to the process of identifying program units that are buggy from two sets of execution traces: normal traces and faulty traces. These approaches use statistical formulas to measure the suspiciousness of program units based on the execution traces. There have been many spectrum-based fault localization approaches proposing various formulas in the literature. Two of the best performing and well-known ones are Tarantula and Ochiai. Recently, find that theoretically, under certain assumptions, two families of spectrum-based fault localization formulas outperform all other formulas including those of Tarantula and Ochiai. In this work, we empirically validate 's findings by comparing the performance of the theoretically best formulas against popular approaches on a dataset containing 199 buggy versions of 10 programs. Our empirical study finds that Ochiai and Tarantula statistically significantly outperforms 3 out of 5 theoretically best fault localization techniques. For the remaining two, Ochiai also outperforms them, albeit not statistically significantly. This happens because an assumption in 's work is not satisfied in many fault localization settings."
]
} |
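As a concrete illustration of the spectrum-based techniques compared in these studies, the sketch below computes two of the ranking metrics named in the abstracts, Tarantula and Ochiai, from a pass/fail coverage spectrum. A minimal sketch: the toy coverage matrix and the small epsilon guarding against division by zero are illustrative assumptions.

```python
import numpy as np

def sfl_scores(coverage, failing):
    """Tarantula and Ochiai suspiciousness per statement.

    coverage : (tests x statements) 0/1 matrix -- which statements each test executed
    failing  : boolean per test -- whether the test failed
    """
    cov = np.asarray(coverage, dtype=float)
    fail = np.asarray(failing, dtype=bool)
    ef = cov[fail].sum(axis=0)         # failing tests that execute the statement
    ep = cov[~fail].sum(axis=0)        # passing tests that execute the statement
    total_fail = fail.sum()
    total_pass = (~fail).sum()
    eps = 1e-12                        # avoid division by zero
    fail_rate = ef / (total_fail + eps)
    pass_rate = ep / (total_pass + eps)
    tarantula = fail_rate / (fail_rate + pass_rate + eps)
    ochiai = ef / (np.sqrt(total_fail * (ef + ep)) + eps)
    return tarantula, ochiai
```

In practice the spectra come from instrumented test runs; the developer then examines statements in decreasing order of suspiciousness until the fault is found.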
1607.04378 | 2508000579 | This paper presents a novel two-phase method for audio representation, Discriminative and Compact Audio Representation (DCAR), and evaluates its performance at detecting events in consumer-produced videos. In the first phase of DCAR, each audio track is modeled using a Gaussian mixture model (GMM) that includes several components to capture the variability within that track. The second phase takes into account both global structure and local structure. In this phase, the components are rendered more discriminative and compact by formulating an optimization problem on Grassmannian manifolds, which we found represents the structure of audio effectively. Our experiments used the YLI-MED dataset (an open TRECVID-style video corpus based on YFCC100M), which includes ten events. The results show that the proposed DCAR representation consistently outperforms state-of-the-art audio representations. DCAR's advantage over i-vector, mv-vector, and GMM representations is significant for both easier and harder discrimination tasks. We discuss how these performance differences across easy and hard cases follow from how each type of model leverages (or doesn't leverage) the intrinsic structure of the data. Furthermore, DCAR shows a particularly notable accuracy advantage on events where humans have more difficulty classifying the videos, i.e., events with lower mean annotator confidence. | Audio representations include low-level features (e.g., energy, cepstral, and harmonic features) and intermediate-level features obtained via further processing steps such as filtering, linear combination, unsupervised learning, and matrix factorization (see overview in 2015 @cite_24 ). | {
"cite_N": [
"@cite_24"
],
"mid": [
"2103235956"
],
"abstract": [
"In this article, we present an account of the state of the art in acoustic scene classification (ASC), the task of classifying environments from the sounds they produce. Starting from a historical review of previous research in this area, we define a general framework for ASC and present different implementations of its components. We then describe a range of different algorithms submitted for a data challenge that was held to provide a general and fair benchmark for ASC techniques. The data set recorded for this purpose is presented along with the performance metrics that are used to evaluate the algorithms and statistical significance tests to compare the submitted methods."
]
} |
1607.04378 | 2508000579 | This paper presents a novel two-phase method for audio representation, Discriminative and Compact Audio Representation (DCAR), and evaluates its performance at detecting events in consumer-produced videos. In the first phase of DCAR, each audio track is modeled using a Gaussian mixture model (GMM) that includes several components to capture the variability within that track. The second phase takes into account both global structure and local structure. In this phase, the components are rendered more discriminative and compact by formulating an optimization problem on Grassmannian manifolds, which we found represents the structure of audio effectively. Our experiments used the YLI-MED dataset (an open TRECVID-style video corpus based on YFCC100M), which includes ten events. The results show that the proposed DCAR representation consistently outperforms state-of-the-art audio representations. DCAR's advantage over i-vector, mv-vector, and GMM representations is significant for both easier and harder discrimination tasks. We discuss how these performance differences across easy and hard cases follow from how each type of model leverages (or doesn't leverage) the intrinsic structure of the data. Furthermore, DCAR shows a particularly notable accuracy advantage on events where humans have more difficulty classifying the videos, i.e., events with lower mean annotator confidence. | A typical audio representation method for event detection is to model each audio file as a vector so that traditional classification methods can be easily applied. The most popular low-level feature used is Mel-frequency cepstral coefficients (MFCCs) @cite_4 , which describe the local spectral envelope of audio signals. However, MFCC is a short-term frame-level representation, so it does not capture the whole structure hidden in each audio signal. 
As one means to address this, some researchers have used end-to-end classification methods (e.g., neural networks), for example to simultaneously learn intermediate-level audio concepts and train an event classifier @cite_0 . Several approaches have used first-order statistics derived from the frames' MFCC features, which empirically improves performance on audio-based event detection. For example, one line of work adopted a codebook model to define audio concepts @cite_8 . This method uses first-order statistics to represent audio: it quantizes low-level features into discrete codewords, generated via clustering, and provides a histogram of codeword counts for each audio file (i.e., it uses the mean of the data in each cluster). | {
"cite_N": [
"@cite_0",
"@cite_4",
"@cite_8"
],
"mid": [
"2018832017",
"2097187469",
"1600745603"
],
"abstract": [
"Multimedia Event Detection (MED) aims to identify events-also called scenes-in videos, such as a flash mob or a wedding ceremony. Audio content information complements cues such as visual content and text. In this paper, we explore the optimization of neural networks (NNs) for audio-based multimedia event classification, and discuss some insights towards more effectively using this paradigm for MED. We explore different architectures, in terms of number of layers and number of neurons. We also assess the performance impact of pre-training with Restricted Boltzmann Machines (RBMs) in contrast with random initialization, and explore the effect of varying the context window for the input to the NNs. Lastly, we compare the performance of Hidden Markov Models (HMMs) with a discriminative classifier for the event classification. We used the publicly available event-annotated YLI-MED dataset. Our results showed a performance improvement of more than 6 absolute accuracy compared to the latest results reported in the literature. Interestingly, these results were obtained with a single-layer neural network with random initialization, suggesting that standard approaches with deep learning and RBM pre-training are not fully adequate to address the high-level video event-classification task.",
"Summary form only given. The paper concerns the development of a system for the recognition of a context or an environment based on acoustic information only. Our system uses Mel-frequency cepstral coefficients and their derivatives as features, and continuous density hidden Markov models (HMM) as acoustic models. We evaluate different model topologies and training methods for HMMs and show that discriminative training can yield a 10 reduction in error rate compared to maximum-likelihood training. A listening test is made to study the human accuracy in the task and to obtain a base-line for the assessment of the performance of the system. Direct comparison to human performance indicates that the system performs somewhat worse than human subjects do in the recognition of 18 everyday contexts and almost comparably in recognizing six higher level categories.",
""
]
} |
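The codebook (bag-of-audio-words) representation described above, clustering frame-level features into codewords and then summarizing each file as a normalized histogram of codeword counts, can be sketched as below. A minimal sketch with a naive k-means implementation and 2-D toy "frames" standing in for MFCC vectors; the function names are illustrative.

```python
import numpy as np

def codebook(frames, k, iters=20, seed=0):
    # Learn k codewords from pooled frame-level features via plain k-means
    frames = np.asarray(frames, dtype=float)
    rng = np.random.default_rng(seed)
    centers = frames[rng.choice(len(frames), k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(frames[:, None, :] - centers[None, :, :], axis=2)
        assign = dists.argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = frames[assign == j].mean(axis=0)
    return centers

def bow_histogram(frames, centers):
    # Quantize each frame to its nearest codeword, then normalize the counts
    frames = np.asarray(frames, dtype=float)
    dists = np.linalg.norm(frames[:, None, :] - centers[None, :, :], axis=2)
    assign = dists.argmin(axis=1)
    hist = np.bincount(assign, minlength=len(centers)).astype(float)
    return hist / hist.sum()
```

Each audio file, however long, is thereby mapped to a fixed-length histogram over the shared codebook, which a standard classifier can consume.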
1607.04378 | 2508000579 | This paper presents a novel two-phase method for audio representation, Discriminative and Compact Audio Representation (DCAR), and evaluates its performance at detecting events in consumer-produced videos. In the first phase of DCAR, each audio track is modeled using a Gaussian mixture model (GMM) that includes several components to capture the variability within that track. The second phase takes into account both global structure and local structure. In this phase, the components are rendered more discriminative and compact by formulating an optimization problem on Grassmannian manifolds, which we found represents the structure of audio effectively. Our experiments used the YLI-MED dataset (an open TRECVID-style video corpus based on YFCC100M), which includes ten events. The results show that the proposed DCAR representation consistently outperforms state-of-the-art audio representations. DCAR's advantage over i-vector, mv-vector, and GMM representations is significant for both easier and harder discrimination tasks. We discuss how these performance differences across easy and hard cases follow from how each type of model leverages (or doesn't leverage) the intrinsic structure of the data. Furthermore, DCAR shows a particularly notable accuracy advantage on events where humans have more difficulty classifying the videos, i.e., events with lower mean annotator confidence. | However, such methods do not capture the complexity of real-life audio recordings. For event detection, researchers have therefore modeled audio using the second-order statistical covariance matrix of the low-level MFCC features @cite_18 @cite_9 @cite_7 @cite_1 . There are two ways to compute the second-order statistics. 
The first assumes that each audio file can be characterized by the mean and variance of the MFCC features of its frames, which are concatenated into a single vector @cite_1 ; this representation is referred to as a mean variance vector, or mv-vector. The other method is to model all training audio via a Gaussian mixture model and then compute the Baum-Welch statistics of each audio file with respect to the mixture components, as in GMM-supervector representations @cite_18 . Again, each audio file is represented by stacking the means and covariance matrices. However, such a vectorization process inevitably distorts the geometric structure of the data @cite_3 (by geometric structure, we mean intrinsic structure within the data, such as affine structure, projective structure, etc.). | {
"cite_N": [
"@cite_18",
"@cite_7",
"@cite_9",
"@cite_1",
"@cite_3"
],
"mid": [
"2164882318",
"2018951638",
"2294701319",
"2086384421",
"1930205234"
],
"abstract": [
"Given the exponential growth of videos published on the Internet, mechanisms for clustering, searching, and browsing large numbers of videos have become a major research area. More importantly, there is a demand for event detectors that go beyond the simple finding of objects but rather detect more abstract concepts, such as \"feeding an animal\" or a \"wedding ceremony\". This article presents an approach for event classification that enables searching for arbitrary events, including more abstract concepts, in found video collections based on the analysis of the audio track. The approach does not rely on speech processing, and is language-indepent, instead it generates models for a set of example query videos using a mixture of two types of audio features: Linear-Frequency Cepstral Coefficients and Modulation Spectrogram Features. This approach can be used in complement with video analysis and requires no domain specific tagging. Application of the approach to the TRECVid MED 2011 development set, which consists of more than 4000 random \"wild\" videos from the Internet, has shown a detection accuracy of 64 including those videos which do not contain an audio track.",
"Audio-based video event detection (VED) on user-generated content (UGC) aims to find videos that show an observable event such as a wedding ceremony or birthday party rather than a sound, such as music, clapping or singing. The difficulty of video content analysis on UGC lies in the acoustic variability and lack of structure of the data. The UGC task has been explored mainly by computer vision, but can be benefited by the used of audio. The i-vector system is state-of-the-art in Speaker Verification, and is outperforming a conventional Gaussian Mixture Model (GMM)-based approach. The system compensates for undesired acoustic variability and extracts information from the acoustic environment, making it a meaningful choice for detection on UGC. This paper employs the i-vector-based system for audio-based VED on UGC and expands the understanding of the system on the task. It also includes a performance comparison with the conventional GMM-based and state-of-the-art Random Forest (RF)-based systems. The i-vector system aids audio-based event detection by addressing UGC audio characteristics. It outperforms the GMM-based system, and is competitive with the RF-based system in terms of the Missed Detection (MD) rate at 4 and 2.8 False Alarm (FA) rates, and complements the RF-based system by demonstrating slightly improvement in combination over the standalone systems.",
"We propose a new blind segmentation approach to acoustic event detection (AED) based on i-vectors. Conventional approaches to AED often required well-segmented data with non-overlapping boundaries for competing events. Inspired by block-based automatic image annotation in image retrieval tasks, we blindly segment audio streams into equal-length pieces, label the underlying observed acoustic events with multiple categories and with no event boundary information, extract i-vector for them, and perform classification using support vector machine and maximal figure-of-merit based classifiers. Experiments on various sets of audio data show promising results with an average of 8 absolute gain in F1 over the conventional hidden Markov model based approach. An enhanced robustness at different noise levels is also observed. The key to the success lies in the enhanced discrimination power offered by the i-vector representation of the acoustic data. Index Terms: acoustic event detection, i-vector, blind segmentation, support vector machine, maximal figure-of-merit",
"For intelligent systems to make best use of the audio modality, it is important that they can recognize not just speech and music, which have been researched as specific tasks, but also general sounds in everyday environments. To stimulate research in this field we conducted a public research challenge: the IEEE Audio and Acoustic Signal Processing Technical Committee challenge on Detection and Classification of Acoustic Scenes and Events (DCASE). In this paper, we report on the state of the art in automatically classifying audio scenes, and automatically detecting and classifying audio events. We survey prior work as well as the state of the art represented by the submissions to the challenge from various research groups. We also provide detail on the organization of the challenge, so that our experience as challenge hosts may be useful to those organizing challenges in similar domains. We created new audio datasets and baseline systems for the challenge; these, as well as some submitted systems, are publicly available under open licenses, to serve as benchmarks for further research in general-purpose machine listening.",
""
]
} |
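The first of the two second-order approaches above, the mean variance (mv-)vector, amounts to concatenating per-dimension statistics over a file's frames. A minimal sketch; the toy two-dimensional frames stand in for MFCC feature vectors.

```python
import numpy as np

def mv_vector(frames):
    """Concatenate the per-dimension mean and variance of frame-level features
    into a single fixed-length representation of one audio file."""
    frames = np.asarray(frames, dtype=float)
    return np.concatenate([frames.mean(axis=0), frames.var(axis=0)])

# A clip with two 2-D frames yields a 4-D mv-vector: [means..., variances...]
rep = mv_vector([[0.0, 2.0], [2.0, 4.0]])  # -> [1., 3., 1., 1.]
```

Note that flattening the statistics this way discards the matrix (manifold) structure of the covariance information, which is the distortion the surrounding text refers to.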
1607.04378 | 2508000579 | This paper presents a novel two-phase method for audio representation, Discriminative and Compact Audio Representation (DCAR), and evaluates its performance at detecting events in consumer-produced videos. In the first phase of DCAR, each audio track is modeled using a Gaussian mixture model (GMM) that includes several components to capture the variability within that track. The second phase takes into account both global structure and local structure. In this phase, the components are rendered more discriminative and compact by formulating an optimization problem on Grassmannian manifolds, which we found represents the structure of audio effectively. Our experiments used the YLI-MED dataset (an open TRECVID-style video corpus based on YFCC100M), which includes ten events. The results show that the proposed DCAR representation consistently outperforms state-of-the-art audio representations. DCAR's advantage over i-vector, mv-vector, and GMM representations is significant for both easier and harder discrimination tasks. We discuss how these performance differences across easy and hard cases follow from how each type of model leverages (or doesn't leverage) the intrinsic structure of the data. Furthermore, DCAR shows a particularly notable accuracy advantage on events where humans have more difficulty classifying the videos, i.e., events with lower mean annotator confidence. | An exciting area of recent work is the i-vector approach, which uses latent factor analysis to compensate for foreground and background variability @cite_13 . The i-vector approach can be seen as an extension of the GMM-supervector. It assumes that these high-dimensional supervectors can be confined to a low-dimensional subspace; this can be implemented by applying probabilistic principal component analysis (PCA) to the supervectors. 
The advantage of an i-vector is that the system learns the total variability from the training data and then applies it to new data, so that the representation of the new data is as discriminative as the representation of the training data. I-vectors have shown promising performance in audio-based event detection @cite_7 @cite_9 . | {
"cite_N": [
"@cite_9",
"@cite_13",
"@cite_7"
],
"mid": [
"2294701319",
"2150769028",
"2018951638"
],
"abstract": [
"We propose a new blind segmentation approach to acoustic event detection (AED) based on i-vectors. Conventional approaches to AED often required well-segmented data with non-overlapping boundaries for competing events. Inspired by block-based automatic image annotation in image retrieval tasks, we blindly segment audio streams into equal-length pieces, label the underlying observed acoustic events with multiple categories and with no event boundary information, extract i-vector for them, and perform classification using support vector machine and maximal figure-of-merit based classifiers. Experiments on various sets of audio data show promising results with an average of 8 absolute gain in F1 over the conventional hidden Markov model based approach. An enhanced robustness at different noise levels is also observed. The key to the success lies in the enhanced discrimination power offered by the i-vector representation of the acoustic data. Index Terms: acoustic event detection, i-vector, blind segmentation, support vector machine, maximal figure-of-merit",
"This paper presents an extension of our previous work which proposes a new speaker representation for speaker verification. In this modeling, a new low-dimensional speaker- and channel-dependent space is defined using a simple factor analysis. This space is named the total variability space because it models both speaker and channel variabilities. Two speaker verification systems are proposed which use this new representation. The first system is a support vector machine-based system that uses the cosine kernel to estimate the similarity between the input data. The second system directly uses the cosine similarity as the final decision score. We tested three channel compensation techniques in the total variability space, which are within-class covariance normalization (WCCN), linear discriminate analysis (LDA), and nuisance attribute projection (NAP). We found that the best results are obtained when LDA is followed by WCCN. We achieved an equal error rate (EER) of 1.12 and MinDCF of 0.0094 using the cosine distance scoring on the male English trials of the core condition of the NIST 2008 Speaker Recognition Evaluation dataset. We also obtained 4 absolute EER improvement for both-gender trials on the 10 s-10 s condition compared to the classical joint factor analysis scoring.",
"Audio-based video event detection (VED) on user-generated content (UGC) aims to find videos that show an observable event such as a wedding ceremony or birthday party rather than a sound, such as music, clapping or singing. The difficulty of video content analysis on UGC lies in the acoustic variability and lack of structure of the data. The UGC task has been explored mainly by computer vision, but can be benefited by the used of audio. The i-vector system is state-of-the-art in Speaker Verification, and is outperforming a conventional Gaussian Mixture Model (GMM)-based approach. The system compensates for undesired acoustic variability and extracts information from the acoustic environment, making it a meaningful choice for detection on UGC. This paper employs the i-vector-based system for audio-based VED on UGC and expands the understanding of the system on the task. It also includes a performance comparison with the conventional GMM-based and state-of-the-art Random Forest (RF)-based systems. The i-vector system aids audio-based event detection by addressing UGC audio characteristics. It outperforms the GMM-based system, and is competitive with the RF-based system in terms of the Missed Detection (MD) rate at 4 and 2.8 False Alarm (FA) rates, and complements the RF-based system by demonstrating slightly improvement in combination over the standalone systems."
]
} |
1607.04423 | 2516196286 | Cloze-style queries are representative problems in reading comprehension. Over the past few months, we have seen much progress that utilizing neural network approach to solve Cloze-style questions. In this paper, we present a novel model called attention-over-attention reader for the Cloze-style reading comprehension task. Our model aims to place another attention mechanism over the document-level attention, and induces "attended attention" for final predictions. Unlike the previous works, our neural network model requires less pre-defined hyper-parameters and uses an elegant architecture for modeling. Experimental results show that the proposed attention-over-attention model significantly outperforms various state-of-the-art systems by a large margin in public datasets, such as CNN and Children's Book Test datasets. | Our work is primarily inspired by and , where the latter model is widely applied to many follow-up works @cite_8 @cite_20 @cite_10 . Unlike the CAS Reader @cite_10 , we do not apply any heuristics in our model, such as using merge functions: @math , @math etc. We use a mechanism called "attention-over-attention" to explicitly calculate the weights between the different individual document-level attentions, and obtain the final attention by computing their weighted sum. Also, we find that our model is more general and simpler than the recently proposed models, and brings significant improvements over these cutting-edge systems. | {
"cite_N": [
"@cite_10",
"@cite_20",
"@cite_8"
],
"mid": [
"2512457506",
"2416043263",
"2417356443"
],
"abstract": [
"Reading comprehension has embraced a booming in recent NLP research. Several institutes have released the Cloze-style reading comprehension data, and these have greatly accelerated the research of machine comprehension. In this work, we firstly present Chinese reading comprehension datasets, which consist of People Daily news dataset and Children's Fairy Tale (CFT) dataset. Also, we propose a consensus attention-based neural network architecture to tackle the Cloze-style reading comprehension problem, which aims to induce a consensus attention over every words in the query. Experimental results show that the proposed neural network significantly outperforms the state-of-the-art baselines in several public datasets. Furthermore, we setup a baseline for Chinese reading comprehension task, and hopefully this would speed up the process for future research.",
"We present the EpiReader, a novel model for machine comprehension of text. Machine comprehension of unstructured, real-world text is a major research goal for natural language processing. Current tests of machine comprehension pose questions whose answers can be inferred from some supporting text, and evaluate a model's response to the questions. The EpiReader is an end-to-end neural model comprising two components: the first component proposes a small set of candidate answers after comparing a question to its supporting text, and the second component formulates hypotheses using the proposed candidates and the question, then reranks the hypotheses based on their estimated concordance with the supporting text. We present experiments demonstrating that the EpiReader sets a new state-of-the-art on the CNN and Children's Book Test machine comprehension benchmarks, outperforming previous neural models by a significant margin.",
"Described herein are systems and methods for providing a natural language comprehension system (NLCS) that iteratively performs an alternating search to gather information that may be used to predict the answer to the question. The NLCS first attends to a query glimpse of the question, and then finds one or more corresponding matches by attending to a text glimpse of the text."
]
} |
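The attention-over-attention computation described in the related-work text above (a document-level attention per query word, weighted by a query-level attention) can be sketched in a few lines. This is a minimal pure-Python illustration, not the authors' implementation; the function names and the toy match matrix are our own:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention_over_attention(M):
    """M[i][j] is a match score between document word i and query word j."""
    nd, nq = len(M), len(M[0])
    # Column-wise softmax: one document-level attention per query word.
    alpha = [softmax([M[i][j] for i in range(nd)]) for j in range(nq)]
    # Row-wise softmax averaged over document words: the query-level
    # attention, i.e. the weights over the individual document-level attentions.
    beta_rows = [softmax(M[i]) for i in range(nd)]
    beta = [sum(row[j] for row in beta_rows) / nd for j in range(nq)]
    # Final "attended attention": weighted sum of document-level attentions.
    return [sum(beta[j] * alpha[j][i] for j in range(nq)) for i in range(nd)]

# Toy example: 3 document words, 2 query words.
s = attention_over_attention([[1.0, 0.2], [0.3, 0.8], [0.5, 0.1]])
assert abs(sum(s) - 1.0) < 1e-9  # a proper distribution over document words
```

Because each document-level attention sums to one and the query-level weights sum to one, the result is again a distribution over document words, which is what makes the final prediction step well defined.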
1607.04439 | 1582929720 | Autonomous Unmanned Aerial Vehicles (UAVs) have gained popularity due to their many potential application fields. Alongside sophisticated sensors, UAVs can be equipped with communication adaptors aimed for inter-UAV communication. Inter-communication of UAVs to form a UAV swarm raises questions on how to manage its communication structure and mobility. In this paper, we consider therefore the problem of establishing an efficient swarm movement model and a network topology between a collection of UAVs, which are specifically deployed for the scenario of high-quality forest-mapping. | Existing approaches for optimizing formation acquisition and maintenance mostly focus on @cite_18 @cite_3 . A variant of the coverage problem is the , where packing of a maximum number of circles is required. For the two dimensional case the problem has a polynomial time solution @cite_5 . | {
"cite_N": [
"@cite_5",
"@cite_18",
"@cite_3"
],
"mid": [
"2154730439",
"",
"2068268985"
],
"abstract": [
"One of the fundamental issues in sensor networks is the coverage problem, which reflects howwell a sensor network is monitored or tracked by sensors. In this paper, we formulate this problem as a decision problem, whose goal is to determine whether every point in the service area of the sensor network is covered by at least k sensors, where k is a given parameter. The sensing ranges of sensors can be unit disks or non-unit disks. We present polynomial-time algorithms, in terms of the number of sensors, that can be easily translated to distributed protocols. The result is a generalization of some earlier results where only k =1 is assumed. Applications of the result include determining insufficiently covered areas in a sensor network, enhancing fault-tolerant capability in hostile regions, and conserving energies of redundant sensors in a randomly deployed network. Our solutions can be easily translated to distributed protocols to solve the coverage problem.",
"",
"An Unmanned Aerial Vehicle (UAV) is an aircraft without onboard human pilot, which motion can be remotely and or autonomously controlled. Using multiple UAVs, i.e. a fleet, offers various advantages compared to the single UAV scenario, such as longer mission duration, bigger mission area or the load balancing of the mission payload. For collaboration purposes, it is assumed that the UAVs are equipped with ad hoc communication capabilities and thus form a special case of mobile ad hoc networks. However, the coordination of one or more fleets of UAVs, in order to fulfill collaborative missions, raises multiples issues in particular when UAVs are required to act in an autonomous fashion. Thus, we propose a decentralised and localised algorithm to control the mobility of the UAVs. This algorithm is designed to perform surveillance missions with network connectivity constraints, which are required in most practical use cases for security purposes as any UAV should be able to be contacted at any moment in case of an emergency. The connectivity is maintained via a tree-based overlay network, which root is the base station of the mission, and created by predicting the future positions of one-hop neighbours. This algorithm is compared to the state of the art contributions by introducing new quality metrics to quantify different aspect of the area coverage process (speed, exhaustivity and fairness). Numerical results obtained via simulations show that the maintenance of the connectivity has a slight negative impact on the coverage performances while the connectivity performances are significantly better."
]
} |
1607.04439 | 1582929720 | Autonomous Unmanned Aerial Vehicles (UAVs) have gained popularity due to their many potential application fields. Alongside sophisticated sensors, UAVs can be equipped with communication adaptors aimed for inter-UAV communication. Inter-communication of UAVs to form a UAV swarm raises questions on how to manage its communication structure and mobility. In this paper, we consider therefore the problem of establishing an efficient swarm movement model and a network topology between a collection of UAVs, which are specifically deployed for the scenario of high-quality forest-mapping. | When considering three dimensions, the question for optimal node positioning is called the @cite_14 . A related problem in geometry is the @cite_1 , which is the maximal number of non-overlapping unit spheres that can simultaneously touch another sphere of the same size. | {
"cite_N": [
"@cite_14",
"@cite_1"
],
"mid": [
"1993644547",
"2169164634"
],
"abstract": [
"Abstract The sphere packing problem asks whether any packing of spheres of equal radius in three dimensions has density exceeding that of the face-centered-cubic lattice packing (of density φ 18 ). This paper sketches a solution to this problem.",
"The kissing number k(3) is the maximal number of equal size nonoverlapping spheres in three dimensions that can touch another sphere of the same size. This number was the subject of a famous discussion between Isaac Newton and David Gregory in 1694. The first proof that k(3) = 12 was given by Schutte and van der Waerden only in 1953. In this paper we present a new solution of the Newton--Gregory problem that uses our extension of the Delsarte method. This proof relies on basic calculus and simple spherical geometry."
]
} |
1607.04439 | 1582929720 | Autonomous Unmanned Aerial Vehicles (UAVs) have gained popularity due to their many potential application fields. Alongside sophisticated sensors, UAVs can be equipped with communication adaptors aimed for inter-UAV communication. Inter-communication of UAVs to form a UAV swarm raises questions on how to manage its communication structure and mobility. In this paper, we consider therefore the problem of establishing an efficient swarm movement model and a network topology between a collection of UAVs, which are specifically deployed for the scenario of high-quality forest-mapping. | The properties of network topologies resulting from random deployment of nodes in a three-dimensional area are studied by Ravelomanana @cite_0 . Ravelomanana considers the , which looks for the lower bound of the transmission range @math , so that every node has at least @math direct neighbors. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2167140162"
],
"abstract": [
"We analyze various critical transmitting sensing ranges for connectivity and coverage in three-dimensional sensor networks. As in other large-scale complex systems, many global parameters of sensor networks undergo phase transitions. For a given property of the network, there is a critical threshold, corresponding to the minimum amount of the communication effort or power expenditure by individual nodes, above (respectively, below) which the property exists with high (respectively, a low) probability. For sensor networks, properties of interest include simple and multiple degrees of connectivity coverage. First, we investigate the network topology according to the region of deployment, the number of deployed sensors, and their transmitting sensing ranges. More specifically, we consider the following problems: assume that n nodes, each capable of sensing events within a radius of r, are randomly and uniformly distributed in a 3-dimensional region R of volume V, how large must the sensing range R sub SENSE be to ensure a given degree of coverage of the region to monitor? For a given transmission range R sub TRANS , what is the minimum (respectively, maximum) degree of the network? What is then the typical hop diameter of the underlying network? Next, we show how these results affect algorithmic aspects of the network by designing specific distributed protocols for sensor networks."
]
} |
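The k-neighborhood question above is easy to simulate: the smallest transmission range r for which every node has at least k direct neighbors is exactly the largest k-th nearest-neighbor distance over all nodes. The following is a small sketch under uniform random deployment in a unit cube (illustrative only; the cited work derives asymptotic thresholds analytically, and the function name is our own):

```python
import math
import random

def critical_range(points, k):
    """Smallest r such that every node has at least k neighbors within r.

    Equals the largest k-th nearest-neighbor distance over all nodes.
    """
    worst = 0.0
    for i, p in enumerate(points):
        dists = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        worst = max(worst, dists[k - 1])
    return worst

random.seed(7)
pts = [(random.random(), random.random(), random.random()) for _ in range(60)]
r = critical_range(pts, k=3)

# Verify: with transmission range r, every node has at least 3 direct neighbors.
degrees = [sum(1 for j, q in enumerate(pts) if j != i and math.dist(p, q) <= r)
           for i, p in enumerate(pts)]
assert min(degrees) >= 3
```

Repeating the experiment over many random deployments and node counts is one way to observe empirically the phase-transition behavior of r that the cited analysis characterizes.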
1607.04439 | 1582929720 | Autonomous Unmanned Aerial Vehicles (UAVs) have gained popularity due to their many potential application fields. Alongside sophisticated sensors, UAVs can be equipped with communication adaptors aimed for inter-UAV communication. Inter-communication of UAVs to form a UAV swarm raises questions on how to manage its communication structure and mobility. In this paper, we consider therefore the problem of establishing an efficient swarm movement model and a network topology between a collection of UAVs, which are specifically deployed for the scenario of high-quality forest-mapping. | @cite_3 propose a decentralized and localized approach for UAV mobility control, which optimizes the network connectivity. The approach maintains the connectivity via a tree-based overlay network, whereby the root is the base station. Their empirical results show that the maintenance of the connectivity can have a negative impact on the coverage while the overall connectivity can improve. | {
"cite_N": [
"@cite_3"
],
"mid": [
"2068268985"
],
"abstract": [
"An Unmanned Aerial Vehicle (UAV) is an aircraft without onboard human pilot, which motion can be remotely and or autonomously controlled. Using multiple UAVs, i.e. a fleet, offers various advantages compared to the single UAV scenario, such as longer mission duration, bigger mission area or the load balancing of the mission payload. For collaboration purposes, it is assumed that the UAVs are equipped with ad hoc communication capabilities and thus form a special case of mobile ad hoc networks. However, the coordination of one or more fleets of UAVs, in order to fulfill collaborative missions, raises multiples issues in particular when UAVs are required to act in an autonomous fashion. Thus, we propose a decentralised and localised algorithm to control the mobility of the UAVs. This algorithm is designed to perform surveillance missions with network connectivity constraints, which are required in most practical use cases for security purposes as any UAV should be able to be contacted at any moment in case of an emergency. The connectivity is maintained via a tree-based overlay network, which root is the base station of the mission, and created by predicting the future positions of one-hop neighbours. This algorithm is compared to the state of the art contributions by introducing new quality metrics to quantify different aspect of the area coverage process (speed, exhaustivity and fairness). Numerical results obtained via simulations show that the maintenance of the connectivity has a slight negative impact on the coverage performances while the connectivity performances are significantly better."
]
} |
1607.04439 | 1582929720 | Autonomous Unmanned Aerial Vehicles (UAVs) have gained popularity due to their many potential application fields. Alongside sophisticated sensors, UAVs can be equipped with communication adaptors aimed for inter-UAV communication. Inter-communication of UAVs to form a UAV swarm raises questions on how to manage its communication structure and mobility. In this paper, we consider therefore the problem of establishing an efficient swarm movement model and a network topology between a collection of UAVs, which are specifically deployed for the scenario of high-quality forest-mapping. | The traditional remote sensing techniques employed in forest resource assessment and monitoring rely on imagery data, often operating in the visible spectrum, which fails to provide useful information on cloudy days. Remedies have been found, such as creating composite pictures from visible and infrared light sources, an approach developed for land-use studies in the Congo Basin, where the ground was often obscured by clouds @cite_15 . A more direct approach, however, is the use of UAVs, specifically when flying under the canopy. A significant reduction in data acquisition costs, together with increased accuracy, is achieved when multiple UAVs are used instead of a single flying entity @cite_8 @cite_6 . | {
"cite_N": [
"@cite_15",
"@cite_6",
"@cite_8"
],
"mid": [
"2029058476",
"2045963103",
"2155430515"
],
"abstract": [
"In this paper we demonstrate a new approach that uses regional continental MODIS (MODerate Resolution Imaging Spectroradiometer) derived forest cover products to calibrate Landsat data for exhaustive high spatial resolution mapping of forest cover and clearing in the Congo River Basin. The approach employs multi-temporal Landsat acquisitions to account for cloud cover, a primary limiting factor in humid tropical forest mapping. A Basin-wide MODIS 250 m Vegetation Continuous Field (VCF) percent tree cover product is used as a regionally consistent reference data set to train Landsat imagery. The approach is automated and greatly shortens mapping time. Results for approximately one third of the Congo Basin are shown. Derived high spatial resolution forest change estimates indicate that less than 1 of the forests were cleared from 1990 to 2000. However, forest clearing is spatially pervasive and fragmented in the landscapes studied to date, with implications for sustaining the region's biodiversity. The forest cover and change data are being used by the Central African Regional Program for the Environment (CARPE) program to study deforestation and biodiversity loss in the Congo Basin forest zone. Data from this study are available at http: carpe.umd.edu.",
"This paper presents a cost-effective framework for the prototyping of vision-based quadrotor multi-robot systems, which core characteristics are: modularity, compatibility with different platforms and being flight-proven. The framework is fully operative, which is shown in the paper through simulations and real flight tests of up to 5 drones, and was demonstrated with the participation in an international micro-aerial vehicles competition 3 where it was awarded with the First Prize in the Indoors Autonomy Challenge. The motivation of this framework is to allow the developers to focus on their own research by decoupling the development of dependent modules, leading to a more cost-effective progress in the project. The basic instance of the framework that we propose, which is flight-proven with the cost-efficient and reliable platform Parrot AR Drone 2.0 and is open-source, includes several modules that can be reused and modified, such as: a basic sequential mission planner, a basic 2D trajectory planner, an odometry state estimator, localization and mapping modules which obtain absolute position measurements using visual markers, a trajectory controller and a visualization module.",
"Teams of autonomous cooperating vehicles are well-suited for meeting the challenges associated with mobile marine sensor networks. Swarms built using a physicomimetics approach exhibit predictable behavior - an important benefit for extended duration deployments of autonomous ocean platforms. By using a decentralized control framework, we minimize energy consumption via short-range communication and self-contained on-board data processing, all without a specified leader. We introduce the task of autonomous surface vehicle (ASV) navigation inside a bioluminescent plume to motivate future study of how the agility and scalability of our physics-based solution can benefit a mobile distributed sensor network."
]
} |
1607.04439 | 1582929720 | Autonomous Unmanned Aerial Vehicles (UAVs) have gained popularity due to their many potential application fields. Alongside sophisticated sensors, UAVs can be equipped with communication adaptors aimed for inter-UAV communication. Inter-communication of UAVs to form a UAV swarm raises questions on how to manage its communication structure and mobility. In this paper, we consider therefore the problem of establishing an efficient swarm movement model and a network topology between a collection of UAVs, which are specifically deployed for the scenario of high-quality forest-mapping. | While UAV swarm approaches are popular in military, communication, and marine applications @cite_4 @cite_2 , only a few applications focus on forestry, mainly on forest fire surveillance @cite_10 . The scenario considered throughout this paper aims at forest-mapping of healthy trees, and introduces a set of particular challenges to the UAV swarm movement, such as a continuous change of the communication topology due to the high density of trees in a relatively small area, which differentiates it from existing approaches. | {
"cite_N": [
"@cite_10",
"@cite_4",
"@cite_2"
],
"mid": [
"2103128941",
"1986607129",
"2003668616"
],
"abstract": [
"The objective of this paper is to explore the feasibility of using multiple low-altitude, short endurance (LASE) unmanned air vehicles (UAVs) to cooperatively monitor and track the propagation of large forest fires. A real-time algorithm is described for tracking the perimeter of fires with an on-board infrared sensor. Using this algorithm, we develop a decentralized multiple-UAV approach to monitoring the perimeter of a fire. The UAVs are assumed to have limited communication and sensing range. The effectiveness of the approach is demonstrated in simulation using a six degree-of-freedom dynamic model for the UAV and a numerical propagation model for the forest fire. Salient features of the approach include the ability to monitor a changing fire perimeter, the ability to systematically add and remove UAVs from the team, and the ability to supply time-critical information to fire fighters.",
"In recent years, Unmanned Aerial Vehicles (UAV), have been increasingly utilized by both military and civilian organizations because they are less expensive, provide greater flexibilities and remove the need for on-board pilot support. Largely due to their utility and increased capabilities, in the near future, swarms of UAVs will replace single UAV use. Efficient control of swarms opens a set of new challenges, such as automatic UAV coordination, efficient swarm monitoring and dynamic mission planning. In this paper, we investigate the problem of dynamic mission planning for a UAV swarm. A centralized-distributed hybrid control framework is proposed for mission assignment and scheduling. The Dynamic Data Driven Application System (DDDAS) principles are applied to the framework so that it can adapt to the changing nature of the environment and the missions. A prototype simulation program is implemented as a proof-ofconcept of the framework. Experimentation with the framework suggests the effectiveness of swarm control for several mission planning mechanisms.",
"In most swarm systems, agents are either aware of the position of their direct neighbors or they possess a substrate on which they can deposit information (stigmergy). However, such resources are not always obtainable in real-world applications because of hardware and environmental constraints. In this paper we study in 2D simulation the design of a swarm system which does not make use of positioning information or stigmergy. This endeavor is motivated by an application whereby a large number of Swarming Micro Air Vehicles (SMAVs), of fixed-wing configuration, must organize autonomously to establish a wireless communication network (SMAVNET) between users located on ground. Rather than relative or absolute positioning, agents must rely only on their own heading measurements and local communication with neighbors. Designing local interactions responsible for the emergence of the SMAVNET deployment and maintenance is a challenging task. For this reason, artificial evolution is used to automatically develop neuronal controllers for the swarm of homogenous agents. This approach has the advantage of yielding original and efficient swarming strategies. A detailed behavioral analysis is then performed on the fittest swarm to gain insight as to the behavior of the individual agents."
]
} |
1607.04492 | 2496570145 | Neural networks with recurrent or recursive architecture have shown promising results on various natural language processing (NLP) tasks. The recurrent and recursive architectures have their own strength and limitations. The recurrent networks process input text sequentially and model the conditional transition between word tokens. In contrast, the recursive networks explicitly model the compositionality and the recursive structure of natural language. Current recursive architecture is based on syntactic tree, thus limiting its practical applicability in different NLP applications. In this paper, we introduce a class of tree structured model, Neural Tree Indexers (NTI) that provides a middle ground between the sequential RNNs and the syntactic tree-based recursive models. NTI constructs a full n-ary tree by processing the input text with its node function in a bottom-up fashion. Attention mechanism can then be applied to both structure and different forms of node function. We demonstrated the effectiveness and the flexibility of a binary-tree model of NTI, showing the model achieved the state-of-the-art performance on three different NLP tasks: natural language inference, answer sentence selection, and sentence classification. | RNNs model input text sequentially by taking a single token at each time step and producing a corresponding hidden state. The hidden state is then passed along to the next time step to provide historical sequence information. Although they have achieved great success in a variety of tasks, RNNs have limitations @cite_23 @cite_28 . Among them, they are not efficient at memorizing long or distant sequences @cite_8 . This is frequently referred to as the information flow bottleneck. Approaches have therefore been developed to overcome these limitations.
For example, to mitigate the information flow bottleneck, extended RNNs with a soft attention mechanism in the context of neural machine translation, leading to improved results in translating longer sentences. | {
"cite_N": [
"@cite_28",
"@cite_23",
"@cite_8"
],
"mid": [
"2069143585",
"",
"2949888546"
],
"abstract": [
"Recurrent nets are in principle capable to store past inputs to produce the currently desired output. Because of this property recurrent nets are used in time series prediction and process control. Practical applications involve temporal dependencies spanning many time steps, e.g. between relevant inputs and desired outputs. In this case, however, gradient based learning methods take too much time. The extremely increased learning time arises because the error vanishes as it gets propagated back. In this article the de-caying error flow is theoretically analyzed. Then methods trying to overcome vanishing gradients are briefly discussed. Finally, experiments comparing conventional algorithms and alternative methods are presented. With advanced methods long time lag problems can be solved in reasonable time.",
"",
"Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT'14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous best result on this task. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier."
]
} |
1607.04492 | 2496570145 | Neural networks with recurrent or recursive architecture have shown promising results on various natural language processing (NLP) tasks. The recurrent and recursive architectures have their own strength and limitations. The recurrent networks process input text sequentially and model the conditional transition between word tokens. In contrast, the recursive networks explicitly model the compositionality and the recursive structure of natural language. Current recursive architecture is based on syntactic tree, thus limiting its practical applicability in different NLP applications. In this paper, we introduce a class of tree structured model, Neural Tree Indexers (NTI) that provides a middle ground between the sequential RNNs and the syntactic tree-based recursive models. NTI constructs a full n-ary tree by processing the input text with its node function in a bottom-up fashion. Attention mechanism can then be applied to both structure and different forms of node function. We demonstrated the effectiveness and the flexibility of a binary-tree model of NTI, showing the model achieved the state-of-the-art performance on three different NLP tasks: natural language inference, answer sentence selection, and sentence classification. | Unlike RNNs, recursive neural networks explicitly model the compositionality and the recursive structure of natural language over tree. The tree structure can be predefined by a syntactic parser @cite_16 . Each non-leaf tree node is associated with a node composition function which combines its children nodes and produces its own representation. The model is then trained by back-propagating error through structures @cite_0 . | {
"cite_N": [
"@cite_0",
"@cite_16"
],
"mid": [
"2104518905",
"2251939518"
],
"abstract": [
"While neural networks are very successfully applied to the processing of fixed-length vectors and variable-length sequences, the current state of the art does not allow the efficient processing of structured objects of arbitrary shape (like logical terms, trees or graphs). We present a connectionist architecture together with a novel supervised learning scheme which is capable of solving inductive inference tasks on complex symbolic structures of arbitrary size. The most general structures that can be handled are labeled directed acyclic graphs. The major difference of our approach compared to others is that the structure-representations are exclusively tuned for the intended inference task. Our method is applied to tasks consisting in the classification of logical terms. These range from the detection of a certain subterm to the satisfaction of a specific unification pattern. Compared to previously known approaches we obtained superior results in that domain.",
"Semantic word spaces have been very useful but cannot express the meaning of longer phrases in a principled way. Further progress towards understanding compositionality in tasks such as sentiment detection requires richer supervised training and evaluation resources and more powerful models of composition. To remedy this, we introduce a Sentiment Treebank. It includes fine grained sentiment labels for 215,154 phrases in the parse trees of 11,855 sentences and presents new challenges for sentiment compositionality. To address them, we introduce the Recursive Neural Tensor Network. When trained on the new treebank, this model outperforms all previous methods on several metrics. It pushes the state of the art in single sentence positive negative classification from 80 up to 85.4 . The accuracy of predicting fine-grained sentiment labels for all phrases reaches 80.7 , an improvement of 9.7 over bag of features baselines. Lastly, it is the only model that can accurately capture the effects of negation and its scope at various tree levels for both positive and negative phrases."
]
} |
1607.04492 | 2496570145 | Neural networks with recurrent or recursive architecture have shown promising results on various natural language processing (NLP) tasks. The recurrent and recursive architectures have their own strength and limitations. The recurrent networks process input text sequentially and model the conditional transition between word tokens. In contrast, the recursive networks explicitly model the compositionality and the recursive structure of natural language. Current recursive architecture is based on syntactic tree, thus limiting its practical applicability in different NLP applications. In this paper, we introduce a class of tree structured model, Neural Tree Indexers (NTI) that provides a middle ground between the sequential RNNs and the syntactic tree-based recursive models. NTI constructs a full n-ary tree by processing the input text with its node function in a bottom-up fashion. Attention mechanism can then be applied to both structure and different forms of node function. We demonstrated the effectiveness and the flexibility of a binary-tree model of NTI, showing the model achieved the state-of-the-art performance on three different NLP tasks: natural language inference, answer sentence selection, and sentence classification. | The node composition function can be varied. A single-layer network with @math non-linearity was adopted in recursive auto-associative memories @cite_4 and recursive autoencoders @cite_27 . extended this network with an additional matrix representation for each node to augment the expressive power of the model. Tensor networks have also been used as the composition function for the sentence-level sentiment analysis task @cite_16 . Recently, introduced S-LSTM which extends LSTM units to compose tree nodes in a recursive fashion. | {
"cite_N": [
"@cite_27",
"@cite_16",
"@cite_4"
],
"mid": [
"71795751",
"2251939518",
""
],
"abstract": [
"We introduce a novel machine learning framework based on recursive autoencoders for sentence-level prediction of sentiment label distributions. Our method learns vector space representations for multi-word phrases. In sentiment prediction tasks these representations outperform other state-of-the-art approaches on commonly used datasets, such as movie reviews, without using any pre-defined sentiment lexica or polarity shifting rules. We also evaluate the model's ability to predict sentiment distributions on a new dataset based on confessions from the experience project. The dataset consists of personal user stories annotated with multiple labels which, when aggregated, form a multinomial distribution that captures emotional reactions. Our algorithm can more accurately predict distributions over such labels compared to several competitive baselines.",
"Semantic word spaces have been very useful but cannot express the meaning of longer phrases in a principled way. Further progress towards understanding compositionality in tasks such as sentiment detection requires richer supervised training and evaluation resources and more powerful models of composition. To remedy this, we introduce a Sentiment Treebank. It includes fine grained sentiment labels for 215,154 phrases in the parse trees of 11,855 sentences and presents new challenges for sentiment compositionality. To address them, we introduce the Recursive Neural Tensor Network. When trained on the new treebank, this model outperforms all previous methods on several metrics. It pushes the state of the art in single sentence positive negative classification from 80 up to 85.4 . The accuracy of predicting fine-grained sentiment labels for all phrases reaches 80.7 , an improvement of 9.7 over bag of features baselines. Lastly, it is the only model that can accurately capture the effects of negation and its scope at various tree levels for both positive and negative phrases.",
""
]
} |
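The single-layer @math (tanh) composition function described in the related-work text above can be sketched as follows. This is a hypothetical minimal NumPy example, not any cited paper's implementation; the dimension `d`, the random initialization, and the nested-tuple tree encoding are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # embedding dimension (illustrative)

# Single-layer composition: parent = tanh(W [left; right] + b)
W = rng.normal(scale=0.1, size=(d, 2 * d))
b = np.zeros(d)

def compose(left, right):
    """Combine two child representations into one parent representation."""
    return np.tanh(W @ np.concatenate([left, right]) + b)

def encode(tree):
    """Bottom-up encoding of a binary tree given as nested tuples of leaf vectors."""
    if isinstance(tree, tuple):
        left, right = tree
        return compose(encode(left), encode(right))
    return tree  # leaf: already a d-dimensional vector

# Example tree ((w1 w2) w3), e.g. obtained from a syntactic parse.
w1, w2, w3 = (rng.normal(size=d) for _ in range(3))
root = encode(((w1, w2), w3))
```

In a trained model, `W` and `b` would be learned by back-propagating error through the tree structure; here they are random purely for illustration.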
1607.04149 | 2482076478 | In a combinatorial auction with item bidding, agents participate in multiple single-item second-price auctions at once. As some items might be substitutes, agents need to strategize in order to maximize their utilities. A number of results indicate that high welfare can be achieved this way, giving bounds on the welfare at equilibrium. Recently, however, criticism has been raised that equilibria are hard to compute and therefore unlikely to be attained. In this paper, we take a different perspective. We study simple best-response dynamics. That is, agents are activated one after the other and each activated agent updates his strategy myopically to a best response against the other agents' current strategies. Often these dynamics may take exponentially long before they converge or they may not converge at all. However, as we show, convergence is not even necessary for good welfare guarantees. Given that agents' bid updates are aggressive enough but not too aggressive, the game will remain in states of good welfare after each agent has updated his bid at least once. In more detail, we show that if agents have fractionally subadditive valuations, natural dynamics reach and remain in a state that provides a @math approximation to the optimal welfare after each agent has updated his bid at least once. For subadditive valuations, we can guarantee an @math approximation in case of @math items that applies after each agent has updated his bid at least once and at any point after that. The latter bound is complemented by a negative result, showing that no kind of best-response dynamics can guarantee more than an @math fraction of the optimal social welfare. | The study of the Price of Anarchy in combinatorial auctions with item bidding was initiated by @cite_10 , and subsequently refined and improved upon in @cite_0 @cite_21 @cite_2 @cite_15 @cite_13 . Some of these bounds are based on mechanism smoothness, others are not. 
They provide welfare guarantees for a broad range of equilibrium concepts ranging from pure Nash equilibria, over (coarse) correlated equilibria, to Bayes-Nash equilibria. For fractionally subadditive valuations there is a smoothness-based proof that shows that the Price of Anarchy with respect to pure Nash equilibria is at most @math @cite_10 @cite_2 . For subadditive valuations the Price of Anarchy with respect to pure Nash equilibria is also at most @math @cite_0 , but the best smoothness-based proof gives a bound of @math @cite_0 @cite_2 . In fact, as shown by Roughgarden @cite_12 , combinatorial auctions with item bidding achieve (near-)optimal Price of Anarchy among a broad class of simple'' mechanisms. | {
"cite_N": [
"@cite_13",
"@cite_21",
"@cite_0",
"@cite_2",
"@cite_15",
"@cite_10",
"@cite_12"
],
"mid": [
"",
"2060346604",
"2052719152",
"",
"2953389549",
"2326228838",
"2030208840"
],
"abstract": [
"",
"We study markets of indivisible items in which price-based (Walrasian) equilibria often do not exist due to the discrete non-convex setting. Instead we consider Nash equilibria of the market viewed as a game, where players bid for items, and where the highest bidder on an item wins it and pays his bid. We first observe that pure Nash-equilibria of this game excatly correspond to price-based equilibiria (and thus need not exist), but that mixed-Nash equilibria always do exist, and we analyze their structure in several simple cases where no price-based equilibrium exists. We also undertake an analysis of the welfare properties of these equilibria showing that while pure equilibria are always perfectly efficient (“first welfare theorem”), mixed equilibria need not be, and we provide upper and lower bounds on their amount of inefficiency.",
"We analyze the price of anarchy (POA) in a simple and practical non-truthful combinatorial auction when players have subadditive valuations for goods. We study the mechanism that sells every good in parallel with separate second-price auctions. We first prove that under a standard \"no overbidding\" assumption, for every subadditive valuation profile, every pure Nash equilibrium has welfare at least 50 of optimal --- i.e., the POA is at most 2. For the incomplete information setting, we prove that the POA with respect to Bayes-Nash equilibria is strictly larger than 2 --- an unusual separation from the full-information model --- and is at most 2 ln m, where m is the number of goods.",
"",
"The focus of classic mechanism design has been on truthful direct-revelation mechanisms. In the context of combinatorial auctions the truthful direct-revelation mechanism that maximizes social welfare is the VCG mechanism. For many valuation spaces computing the allocation and payments of the VCG mechanism, however, is a computationally hard problem. We thus study the performance of the VCG mechanism when bidders are forced to choose bids from a subspace of the valuation space for which the VCG outcome can be computed efficiently. We prove improved upper bounds on the welfare loss for restrictions to additive bids and upper and lower bounds for restrictions to non-additive bids. These bounds show that the welfare loss increases in expressiveness. All our bounds apply to equilibrium concepts that can be computed in polynomial time as well as to learning outcomes.",
"We study the following simple Bayesian auction setting: m items are sold to n selfish bidders in m independent second-price auctions. Each bidder has a private valuation function that specifies his or her complex preferences over all subsets of items. Bidders only have beliefs about the valuation functions of the other bidders, in the form of probability distributions. The objective is to allocate the items to the bidders in a way that provides a good approximation to the optimal social welfare value. We show that if bidders have submodular or, more generally, fractionally subadditive (aka XOS) valuation functions, every Bayes-Nash equilibrium of the resulting game provides a 2-approximation to the optimal social welfare. Moreover, we show that in the full-information game, a pure Nash always exists and can be found in time that is polynomial in both m and n.",
"This paper explains when and how communication and computational lower bounds for algorithms for an optimization problem translate to lower bounds on the worst-case quality of equilibria in games derived from the problem. We give three families of lower bounds on the quality of equilibria, each motivated by a different set of problems: congestion, scheduling, and distributed welfare games, welfare-maximization in combinatorial auctions with \"black-box\" bidder valuations, and welfare-maximization in combinatorial auctions with succinctly described valuations. The most straightforward use of our lower bound framework is to harness an existing computational or communication lower bound to derive a lower bound on the worst-case price of anarchy (POA) in a class of games. This is a new approach to POA lower bounds, which relies on reductions in lieu of explicit constructions. More generally, the POA lower bounds implied by our framework apply to all classes of games that share the same underlying optimization problem, independent of the details of players' utility functions. For this reason, our lower bounds are particularly significant for problems of game design--ranging from the design of simple combinatorial auctions to the computation of tolls for routing networks--where the goal is to design a game that has only near-optimal equilibria. For example, our results imply that the simultaneous first-price auction format is optimal among all \"simple combinatorial auctions\" in several settings."
]
} |
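As a concrete toy illustration of the mechanism analyzed above (m items sold in simultaneous single-item second-price auctions), the sketch below allocates each item to its highest bidder at the second-highest bid. This is a hypothetical simplification: each bidder submits one bid per item, and ties are broken by sort order:

```python
def run_item_auctions(bids):
    """Simultaneous second-price auctions.

    bids[i][j] is agent i's bid on item j. Each item goes to its highest
    bidder, who pays the second-highest bid on that item.
    """
    n, m = len(bids), len(bids[0])
    allocation, payments = {}, [0.0] * n
    for j in range(m):
        order = sorted(range(n), key=lambda i: bids[i][j], reverse=True)
        winner, runner_up = order[0], order[1]
        allocation[j] = winner
        payments[winner] += bids[runner_up][j]
    return allocation, payments

# Two agents, two items: agent 0 wins item 0 (pays 2), agent 1 wins item 1 (pays 1).
alloc, pay = run_item_auctions([[3.0, 1.0], [2.0, 4.0]])
```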
1607.04149 | 2482076478 | In a combinatorial auction with item bidding, agents participate in multiple single-item second-price auctions at once. As some items might be substitutes, agents need to strategize in order to maximize their utilities. A number of results indicate that high welfare can be achieved this way, giving bounds on the welfare at equilibrium. Recently, however, criticism has been raised that equilibria are hard to compute and therefore unlikely to be attained. In this paper, we take a different perspective. We study simple best-response dynamics. That is, agents are activated one after the other and each activated agent updates his strategy myopically to a best response against the other agents' current strategies. Often these dynamics may take exponentially long before they converge or they may not converge at all. However, as we show, convergence is not even necessary for good welfare guarantees. Given that agents' bid updates are aggressive enough but not too aggressive, the game will remain in states of good welfare after each agent has updated his bid at least once. In more detail, we show that if agents have fractionally subadditive valuations, natural dynamics reach and remain in a state that provides a @math approximation to the optimal welfare after each agent has updated his bid at least once. For subadditive valuations, we can guarantee an @math approximation in case of @math items that applies after each agent has updated his bid at least once and at any point after that. The latter bound is complemented by a negative result, showing that no kind of best-response dynamics can guarantee more than an @math fraction of the optimal social welfare. | Also relevant to our analysis in this context is that @cite_10 gave simple best-response dynamics for fractionally subadditive valuations, which they called the Potential Procedure. 
They showed that this procedure always converges to a pure Nash equilibrium, but also that it may take exponentially many steps before it converges. | {
"cite_N": [
"@cite_10"
],
"mid": [
"2326228838"
],
"abstract": [
"We study the following simple Bayesian auction setting: m items are sold to n selfish bidders in m independent second-price auctions. Each bidder has a private valuation function that specifies his or her complex preferences over all subsets of items. Bidders only have beliefs about the valuation functions of the other bidders, in the form of probability distributions. The objective is to allocate the items to the bidders in a way that provides a good approximation to the optimal social welfare value. We show that if bidders have submodular or, more generally, fractionally subadditive (aka XOS) valuation functions, every Bayes-Nash equilibrium of the resulting game provides a 2-approximation to the optimal social welfare. Moreover, we show that in the full-information game, a pure Nash always exists and can be found in time that is polynomial in both m and n."
]
} |
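A round-robin best-response loop of the kind studied in the rows above can be sketched as follows. This hypothetical example uses additive valuations (a special case of fractionally subadditive), for which bidding one's per-item value is a best response in simultaneous second-price auctions, so the dynamics stabilize after a single pass; it illustrates only the activation scheme, not the Potential Procedure itself:

```python
def best_response(values_i):
    """Additive valuations: bidding one's value on each item is a best
    response in simultaneous second-price auctions (per-item truthfulness)."""
    return list(values_i)

def best_response_dynamics(values, rounds=3):
    n, m = len(values), len(values[0])
    bids = [[0.0] * m for _ in range(n)]
    for _ in range(rounds):
        for i in range(n):              # activate agents one after the other
            bids[i] = best_response(values[i])
    # Welfare of the final allocation: each item to its highest bidder.
    winner = lambda j: max(range(n), key=lambda i: bids[i][j])
    return sum(values[winner(j)][j] for j in range(m))

welfare = best_response_dynamics([[3.0, 1.0], [2.0, 4.0]])
```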