| aid | mid | abstract | related_work | ref_abstract |
|---|---|---|---|---|
1608.04342 | 2951352790 | We present a method to automatically decompose a light field into its intrinsic shading and albedo components. Contrary to previous work targeted to 2D single images and videos, a light field is a 4D structure that captures non-integrated incoming radiance over a discrete angular domain. This higher dimensionality of the problem renders previous state-of-the-art algorithms impractical either due to their cost of processing a single 2D slice, or their inability to enforce proper coherence in additional dimensions. We propose a new decomposition algorithm that jointly optimizes the whole light field data for proper angular coherence. For efficiency, we extend Retinex theory, working on the gradient domain, where new albedo and occlusion terms are introduced. Results show our method provides 4D intrinsic decompositions difficult to achieve with previous state-of-the-art algorithms. We further provide a comprehensive analysis and comparisons with existing intrinsic image video decomposition methods on light field images. | * Multiple Images and Video. Several works leverage information from multiple images of the same scene from a fixed viewpoint under varying illumination @cite_20 @cite_39 @cite_42 @cite_18 . @cite_19 coarsely estimate a 3D point cloud of the scene from non-structured image collections. Pixels with similar chromaticity and orientation in the point cloud are then used as reflectance constraints within an optimization. Assuming outdoor environments, the work of @cite_14 estimates sunlight position and orientation and reconstructs a 3D model of the scene, taking as input several captures of the same scene under constant illumination. Although a light field can be seen as a structured collection of images, we make no assumptions about either the lighting or the scale of the captured scene. | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_42",
"@cite_39",
"@cite_19",
"@cite_20"
],
"mid": [
"",
"2057552653",
"2220470871",
"2024270601",
"2020733757",
"2136748901"
],
"abstract": [
"",
"We introduce a method to compute intrinsic images for a multiview set of outdoor photos with cast shadows, taken under the same lighting. We use an automatic 3D reconstruction from these photos and the sun direction as input and decompose each image into reflectance and shading layers, despite the inaccuracies and missing data of the 3D model. Our approach is based on two key ideas. First, we progressively improve the accuracy of the parameters of our image formation model by performing iterative estimation and combining 3D lighting simulation with 2D image optimization methods. Second, we use the image formation model to express reflectance as a function of discrete visibility values for shadow and light, which allows to introduce a robust visibility classifier for pairs of points in a scene. This classifier is used for shadow labeling, allowing to compute high-quality reflectance and shading layers. Our multiview intrinsic decomposition is of sufficient quality to allow relighting of the input images. We create shadow-caster geometry which preserves shadow silhouettes and, using the intrinsic layers, we can perform multiview relighting with moving cast shadows. We present results on several multiview datasets, and show how it is now possible to perform image-based rendering with changing illumination conditions.",
"We present a method for intrinsic image decomposition, which aims to decompose images into reflectance and shading layers. Our input is a sequence of images with varying illumination acquired by a static camera, e.g. an indoor scene with a moving light source or an outdoor timelapse. We leverage the local color variations observed over time to infer constraints on the reflectance and solve the ill-posed image decomposition problem. In particular, we derive an adaptive local energy from the observations of each local neighborhood over time, and integrate distant pairwise constraints to enforce coherent decomposition across all surfaces with consistent shading changes. Our method is solely based on multiple observations of a Lambertian scene under varying illumination and does not require user interaction, scene geometry, or an explicit lighting model. We compare our results with several intrinsic decomposition methods on a number of synthetic and captured datasets.",
"",
"An intrinsic image is a decomposition of a photo into an illumination layer and a reflectance layer, which enables powerful editing such as the alteration of an object's material independently of its illumination. However, decomposing a single photo is highly under-constrained and existing methods require user assistance or handle only simple scenes. In this paper, we compute intrinsic decompositions using several images of the same scene under different viewpoints and lighting conditions. We use multi-view stereo to automatically reconstruct 3D points and normals from which we derive relationships between reflectance values at different locations, across multiple views and consequently different lighting conditions. We use robust estimation to reliably identify reflectance ratios between pairs of points. From these, we infer constraints for our optimization and enforce a coherent solution across multiple views and illuminations. Our results demonstrate that this constrained optimization yields high-quality and coherent intrinsic decompositions of complex scenes. We illustrate how these decompositions can be used for image-based illumination transfer and transitions between views with consistent lighting.",
"Intrinsic images are a useful midlevel description of scenes proposed by H.G. Barrow and J.M. Tenenbaum (1978). An image is decomposed into two images: a reflectance image and an illumination image. Finding such a decomposition remains a difficult problem in computer vision. We focus on a slightly easier problem: given a sequence of T images where the reflectance is constant and the illumination changes, can we recover T illumination images and a single reflectance image? We show that this problem is still ill-posed and suggest approaching it as a maximum-likelihood estimation problem. Following recent work on the statistics of natural images, we use a prior that assumes that illumination images will give rise to sparse filter outputs. We show that this leads to a simple, novel algorithm for recovering reflectance images. We illustrate the algorithm's performance on real and synthetic image sequences."
]
} |
1608.04342 | 2951352790 | We present a method to automatically decompose a light field into its intrinsic shading and albedo components. Contrary to previous work targeted to 2D single images and videos, a light field is a 4D structure that captures non-integrated incoming radiance over a discrete angular domain. This higher dimensionality of the problem renders previous state-of-the-art algorithms impractical either due to their cost of processing a single 2D slice, or their inability to enforce proper coherence in additional dimensions. We propose a new decomposition algorithm that jointly optimizes the whole light field data for proper angular coherence. For efficiency, we extend Retinex theory, working on the gradient domain, where new albedo and occlusion terms are introduced. Results show our method provides 4D intrinsic decompositions difficult to achieve with previous state-of-the-art algorithms. We further provide a comprehensive analysis and comparisons with existing intrinsic image video decomposition methods on light field images. | * Video. A few methods dealing with intrinsic video have recently been presented. @cite_28 propose a probabilistic solution based on a causal-anticausal, coarse-to-fine iterative reflectance propagation. @cite_44 present an efficient gradient-based solver which allows interactive decompositions. @cite_7 rely on optical flow to estimate surface boundaries to guide the decomposition. Recently, @cite_5 presented a novel variational approach suitable for real-time processing, based on a hierarchical coarse-to-fine optimization. While this approach can provide coherent and stable results even when applied straightforwardly to light fields, the actual decomposition is performed on a per-frame basis, so it shares the limitations of previous 2D methods. | {
"cite_N": [
"@cite_28",
"@cite_44",
"@cite_5",
"@cite_7"
],
"mid": [
"1994246617",
"2146721395",
"2468336759",
""
],
"abstract": [
"We present a method to decompose a video into its intrinsic components of reflectance and shading, plus a number of related example applications in video editing such as segmentation, stylization, material editing, recolorization and color transfer. Intrinsic decomposition is an ill-posed problem, which becomes even more challenging in the case of video due to the need for temporal coherence and the potentially large memory requirements of a global approach. Additionally, user interaction should be kept to a minimum in order to ensure efficiency. We propose a probabilistic approach, formulating a Bayesian Maximum a Posteriori problem to drive the propagation of clustered reflectance values from the first frame, and defining additional constraints as priors on the reflectance and shading. We explicitly leverage temporal information in the video by building a causal-anticausal, coarse-to-fine iterative scheme, and by relying on optical flow information. We impose no restrictions on the input video, and show examples representing a varied range of difficult cases. Our method is the first one designed explicitly for video; moreover, it naturally ensures temporal consistency, and compares favorably against the state of the art in this regard.",
"Separating a photograph into its reflectance and illumination intrinsic images is a fundamentally ambiguous problem, and state-of-the-art algorithms combine sophisticated reflectance and illumination priors with user annotations to create plausible results. However, these algorithms cannot be easily extended to videos for two reasons: first, naively applying algorithms designed for single images to videos produces results that are temporally incoherent; second, effectively specifying user annotations for a video requires interactive feedback, and current approaches are orders of magnitude too slow to support this. We introduce a fast and temporally consistent algorithm to decompose video sequences into their reflectance and illumination components. Our algorithm uses a hybrid ℓ2-ℓp formulation that separates image gradients into smooth illumination and sparse reflectance gradients using look-up tables. We use a multi-scale parallelized solver to reconstruct the reflectance and illumination from these gradients while enforcing spatial and temporal reflectance constraints and user annotations. We demonstrate that our algorithm automatically produces reasonable results that can be interactively refined by users, at rates that are two orders of magnitude faster than existing tools, to produce high-quality decompositions for challenging real-world video sequences. We also show how these decompositions can be used for a number of video editing applications including recoloring, retexturing, illumination editing, and lighting-aware compositing.",
"Intrinsic video decomposition refers to the fundamentally ambiguous task of separating a video stream into its constituent layers, in particular reflectance and shading layers. Such a decomposition is the basis for a variety of video manipulation applications, such as realistic recoloring or retexturing of objects. We present a novel variational approach to tackle this underconstrained inverse problem at real-time frame rates, which enables on-line processing of live video footage. The problem of finding the intrinsic decomposition is formulated as a mixed variational ℓ2-ℓp optimization problem based on an objective function that is specifically tailored for fast optimization. To this end, we propose a novel combination of sophisticated local spatial and global spatio-temporal priors resulting in temporally coherent decompositions at real-time frame rates without the need for explicit correspondence search. We tackle the resulting high-dimensional, non-convex optimization problem via a novel data-parallel iteratively reweighted least squares solver that runs on commodity graphics hardware. Real-time performance is obtained by combining a local-global solution strategy with hierarchical coarse-to-fine optimization. Compelling real-time augmented reality applications, such as recoloring, material editing and retexturing, are demonstrated in a live setup. Our qualitative and quantitative evaluation shows that we obtain high-quality real-time decompositions even for challenging sequences. Our method is able to outperform state-of-the-art approaches in terms of runtime and result quality -- even without user guidance such as scribbles.",
""
]
} |
1608.04342 | 2951352790 | We present a method to automatically decompose a light field into its intrinsic shading and albedo components. Contrary to previous work targeted to 2D single images and videos, a light field is a 4D structure that captures non-integrated incoming radiance over a discrete angular domain. This higher dimensionality of the problem renders previous state-of-the-art algorithms impractical either due to their cost of processing a single 2D slice, or their inability to enforce proper coherence in additional dimensions. We propose a new decomposition algorithm that jointly optimizes the whole light field data for proper angular coherence. For efficiency, we extend Retinex theory, working on the gradient domain, where new albedo and occlusion terms are introduced. Results show our method provides 4D intrinsic decompositions difficult to achieve with previous state-of-the-art algorithms. We further provide a comprehensive analysis and comparisons with existing intrinsic image video decomposition methods on light field images. | * Light fields. Related work on intrinsic decomposition of light field images and videos has been published concurrently. @cite_45 present a general approach for stabilizing the results of image processing algorithms over an array of images and videos. Their approach can produce very stable results, but its generality does not exploit the 4D structure that can be used to handle complex non-Lambertian materials @cite_6 @cite_15 . On the other hand, Alperovich and Goldluecke @cite_41 present an approach similar to ours, posing the problem in ray space. By doing this, they ensure angular coherence and also handle non-Lambertian materials. While we do not handle such materials explicitly, our algorithm produces sharper and more stable results, with comparable reconstructions of reflectances under specular highlights. | {
"cite_N": [
"@cite_41",
"@cite_15",
"@cite_45",
"@cite_6"
],
"mid": [
"2594329099",
"2578266682",
"",
"2346725554"
],
"abstract": [
"We present a novel variational model for intrinsic light field decomposition, which is performed on four-dimensional ray space instead of a traditional 2D image. As most existing intrinsic image algorithms are designed for Lambertian objects, their performance suffers when considering scenes which exhibit glossy surfaces. In contrast, the rich structure of the light field with many densely sampled views allows us to cope with non-Lambertian objects by introducing an additional decomposition term that models specularity. Regularization along the epipolar plane images further encourages albedo and shading consistency across views. In evaluations of our method on real-world data sets captured with a Lytro Illum plenoptic camera, we demonstrate the advantages of our approach with respect to intrinsic image decomposition and specular removal.",
"We present a method to separate a dichromatic reflection component from diffuse object colors for the set of rays in a 4D light field such that the separation is consistent across all subaperture views. The separation model is based on explaining the observed light field as a sparse linear combination of a constant-color specular term and a small finite set of albedos. Consistency across the light field is achieved by embedding the ray-wise separation into a global optimization framework. On each individual epipolar plane image (EPI), the diffuse coefficients need to be constant along lines which are the projections of the same scene point, while the specular coefficient needs to be constant along the direction of the specular flow within the epipolar volume. We handle both constraints with depth-dependent anisotropic regularizers, and demonstrate promising performance on a number of real-world light fields captured with a Lytro Illum plenoptic camera.",
"",
"Light-field cameras have now become available in both consumer and industrial applications, and recent papers have demonstrated practical algorithms for depth recovery from a passive single-shot capture. However, current light-field depth estimation methods are designed for Lambertian objects and fail or degrade for glossy or specular surfaces. The standard Lambertian photoconsistency measure considers the variance of different views, effectively enforcing point-consistency , i.e., that all views map to the same point in RGB space. This variance or point-consistency condition is a poor metric for glossy surfaces. In this paper, we present a novel theory of the relationship between light-field data and reflectance from the dichromatic model. We present a physically-based and practical method to estimate the light source color and separate specularity. We present a new photo consistency metric, line-consistency , which represents how viewpoint changes affect specular points. We then show how the new metric can be used in combination with the standard Lambertian variance or point-consistency measure to give us results that are robust against scenes with glossy surfaces. With our analysis, we can also robustly estimate multiple light source colors and remove the specular component from glossy objects. We show that our method outperforms current state-of-the-art specular removal and depth estimation algorithms in multiple real world scenarios using the consumer Lytro and Lytro Illum light field cameras."
]
} |
1608.04366 | 2508135389 | Porous structures such as trabecular bone are widely seen in nature. These structures are lightweight and exhibit strong mechanical properties. In this paper, we present a method to generate bone-like porous structures as lightweight infill for additive manufacturing. Our method builds upon and extends voxel-wise topology optimization. In particular, for the purpose of generating sparse yet stable structures distributed in the interior of a given shape, we propose upper bounds on the localized material volume in the proximity of each voxel in the design domain. We then aggregate the local per-voxel constraints by their p-norm into an equivalent global constraint, in order to facilitate an efficient optimization process. Implemented on a high-resolution topology optimization framework, our results demonstrate mechanically optimized, detailed porous structures which mimic those found in nature. We further show variants of the optimized structures subject to different design specifications, and we analyze the optimality and robustness of the obtained structures. | With the ever-increasing popularity of consumer 3D printers, much research has been devoted to addressing geometric and physical modeling problems for computational fabrication, including toolpath generation @cite_10 . In this section, we review techniques related to the optimization of mechanical properties. For an overview of geometric and physical modeling for 3D printing, we refer the reader to a recent survey article @cite_40 and a tutorial @cite_26 . | {
"cite_N": [
"@cite_40",
"@cite_26",
"@cite_10"
],
"mid": [
"2080574972",
"2095025857",
"2473797470"
],
"abstract": [
"Additive manufacturing (AM) is poised to bring about a revolution in the way products are designed, manufactured, and distributed to end users. This technology has gained significant academic as well as industry interest due to its ability to create complex geometries with customizable material properties. AM has also inspired the development of the maker movement by democratizing design and manufacturing. Due to the rapid proliferation of a wide variety of technologies associated with AM, there is a lack of a comprehensive set of design principles, manufacturing guidelines, and standardization of best practices. These challenges are compounded by the fact that advancements in multiple technologies (for example materials processing, topology optimization) generate a \"positive feedback loop\" effect in advancing AM. In order to advance research interest and investment in AM technologies, some fundamental questions and trends about the dependencies existing in these avenues need highlighting. The goal of our review paper is to organize this body of knowledge surrounding AM, and present current barriers, findings, and future trends significantly to the researchers. We also discuss fundamental attributes of AM processes, evolution of the AM industry, and the affordances enabled by the emergence of AM in a variety of areas such as geometry processing, material design, and education. We conclude our paper by pointing out future directions such as the \"print-it-all\" paradigm, that have the potential to re-imagine current research and spawn completely new avenues for exploration. The fundamental attributes and challenges/barriers of Additive Manufacturing (AM). The evolution of research on AM with a focus on engineering capabilities. The affordances enabled by AM such as geometry, material and tools design. The developments in industry, intellectual property, and education-related aspects. The important future trends of AM technologies.",
"",
"We develop a new kind of \"space-filling\" curves, connected Fermat spirals, and show their compelling properties as a tool path fill pattern for layered fabrication. Unlike classical space-filling curves such as the Peano or Hilbert curves, which constantly wind and bind to preserve locality, connected Fermat spirals are formed mostly by long, low-curvature paths. This geometric property, along with continuity, influences the quality and efficiency of layered fabrication. Given a connected 2D region, we first decompose it into a set of sub-regions, each of which can be filled with a single continuous Fermat spiral. We show that it is always possible to start and end a Fermat spiral fill at approximately the same location on the outer boundary of the filled region. This special property allows the Fermat spiral fills to be joined systematically along a graph traversal of the decomposed sub-regions. The result is a globally continuous curve. We demonstrate that printing 2D layers following tool paths as connected Fermat spirals leads to efficient and quality fabrication, compared to conventional fill patterns."
]
} |
1608.04366 | 2508135389 | Porous structures such as trabecular bone are widely seen in nature. These structures are lightweight and exhibit strong mechanical properties. In this paper, we present a method to generate bone-like porous structures as lightweight infill for additive manufacturing. Our method builds upon and extends voxel-wise topology optimization. In particular, for the purpose of generating sparse yet stable structures distributed in the interior of a given shape, we propose upper bounds on the localized material volume in the proximity of each voxel in the design domain. We then aggregate the local per-voxel constraints by their p-norm into an equivalent global constraint, in order to facilitate an efficient optimization process. Implemented on a high-resolution topology optimization framework, our results demonstrate mechanically optimized, detailed porous structures which mimic those found in nature. We further show variants of the optimized structures subject to different design specifications, and we analyze the optimality and robustness of the obtained structures. | To assist users in the design of 3D printed shapes, Stava et al. @cite_27 presented a system to detect structural deficiencies by finite element analysis. A set of correction operations, including hollowing, thickening, and strut insertion, is proposed to improve the structural soundness. Targeting specifically the reduction of material usage inside a 3D model, @cite_5 introduced skin-frame structures to compose the shape interior, and optimized the layout and size of frames. @cite_18 proposed honeycomb-like Voronoi structures to hollow the interior volume, and optimized the distribution and size of Voronoi cells. In early work by @cite_52 , the layout of truss structures is optimized for designing bridges and towers. 
Manufacturing constraints regarding the avoidance of overhang surfaces and small geometric feature sizes have been addressed by @cite_44 via self-supporting rhombic structures for infill optimization. Manufacturable micro-structures have been investigated in graphics @cite_36 @cite_31 @cite_32 and in mechanical topology optimization approaches (e.g., @cite_11 @cite_47 , among others). Our work is inspired by these works, but builds upon topology optimization, which does not prescribe the structural composition and thus does not limit the design space. | {
"cite_N": [
"@cite_18",
"@cite_47",
"@cite_36",
"@cite_52",
"@cite_32",
"@cite_44",
"@cite_27",
"@cite_5",
"@cite_31",
"@cite_11"
],
"mid": [
"2046095156",
"2405385330",
"2082772288",
"2244285656",
"2462407094",
"2477717204",
"2052590126",
"2040459740",
"1981948516",
"2069041228"
],
"abstract": [
"The emergence of low-cost 3D printers steers the investigation of new geometric problems that control the quality of the fabricated object. In this paper, we present a method to reduce the material cost and weight of a given object while providing a durable printed model that is resistant to impact and external forces. We introduce a hollowing optimization algorithm based on the concept of honeycomb-cells structure. Honeycomb structures are known to be of minimal material cost while providing strength in tension. We utilize the Voronoi diagram to compute irregular honeycomb-like volume tessellations which define the inner structure. We formulate our problem as a strength-to-weight optimization and cast it as mutually finding an optimal interior tessellation and its maximal hollowing subject to relieve the interior stress. Thus, our system allows to build-to-last 3D printed objects with large control over their strength-to-weight ratio and easily model various interior structures. We demonstrate our method on a collection of 3D objects from different categories. Furthermore, we evaluate our method by printing our hollowed models and measure their stress and weights.",
"This paper applies topology optimisation to the design of structures with periodic and layered microstructural details without length scale separation, i.e. considering the complete macroscopic structure and its response, while resolving all microstructural details, as compared to the often used homogenisation approach. The approach takes boundary conditions into account and ensures connected and macroscopically optimised microstructures regardless of the difference in micro- and macroscopic length scales. This results in microstructures tailored for specific applications rather than specific properties. Manufacturability is further ensured by the use of robust topology optimisation. Dealing with the complete macroscopic structure and its response is computationally challenging as very fine discretisations are needed in order to resolve all microstructural details. Therefore, this paper shows the benefits of applying a contrast-independent spectral preconditioner based on the multiscale finite element method (MsFEM) to large structures with fully-resolved microstructural details. It is shown that a single preconditioner can be reused for many design iterations and used for several design realisations, in turn leading to massive savings in computational cost. The density-based topology optimisation approach combined with a Heaviside projection filter and a stochastic robust formulation is used on various problems, with both periodic and layered microstructures. The presented approach is shown to allow for the topology optimisation of very large problems in Matlab, specifically a problem with 26 million displacement degrees of freedom in 26 hours using a single computational thread.",
"We propose a method for fabricating deformable objects with spatially varying elasticity using 3D printing. Using a single, relatively stiff printer material, our method designs an assembly of small-scale microstructures that have the effect of a softer material at the object scale, with properties depending on the microstructure used in each part of the object. We build on work in the area of metamaterials, using numerical optimization to design tiled microstructures with desired properties, but with the key difference that our method designs families of related structures that can be interpolated to smoothly vary the material properties over a wide range. To create an object with spatially varying elastic properties, we tile the object's interior with microstructures drawn from these families, generating a different microstructure for each cell using an efficient algorithm to select compatible structures for neighboring cells. We show results computed for both 2D and 3D objects, validating several 2D and 3D printed structures using standard material tests as well as demonstrating various example applications.",
"We present a method for designing truss structures, a common and complex category of buildings, using non-linear optimization. Truss structures are ubiquitous in the industrialized world, appearing as bridges, towers, roof supports and building exoskeletons, yet are complex enough that modeling them by hand is time consuming and tedious. We represent trusses as a set of rigid bars connected by pin joints, which may change location during optimization. By including the location of the joints as well as the strength of individual beams in our design variables, we can simultaneously optimize the geometry and the mass of structures. We present the details of our technique together with examples illustrating its use, including comparisons with real structures.",
"Microstructures at the scale of tens of microns change the physical properties of objects, making them lighter or more flexible. While traditionally difficult to produce, additive manufacturing now lets us physically realize such microstructures at low cost. In this paper we propose to study procedural, aperiodic microstructures inspired by Voronoi open-cell foams. The absence of regularity affords for a simple approach to grade the foam geometry --- and thus its mechanical properties --- within a target object and its surface. Rather than requiring a global optimization process, the microstructures are directly generated to exhibit a specified elastic behavior. The implicit evaluation is akin to procedural textures in computer graphics, and locally adapts to follow the elasticity field. This allows very detailed structures to be generated in large objects without having to explicitly produce a full representation --- mesh or voxels --- of the complete object: the structures are added on the fly, just before each object slice is manufactured. We study the elastic behavior of the microstructures and provide a complete description of the procedure generating them. We explain how to determine the geometric parameters of the microstructures from a target elasticity, and evaluate the result on printed samples. Finally, we apply our approach to the fabrication of objects with spatially varying elasticity, including the implicit modeling of a frame following the object surface and seamlessly connecting to the microstructures.",
"Recent work has demonstrated that the interior material layout of a 3D model can be designed to make a fabricated replica satisfy application-specific demands on its physical properties, such as resistance to external loads. A widely used practice to fabricate such models is by layer-based additive manufacturing (AM), which however suffers from the problem of adding and removing interior supporting structures. In this paper, we present a novel method for generating application-specific infill structures on rhombic cells so that the resultant structures can automatically satisfy manufacturing requirements on overhang-angle and wall-thickness. Additional supporting structures can be avoided entirely in our framework. To achieve this, we introduce the usage of an adaptive rhombic grid, which is built from an input surface model. Starting from the initial sparse set of rhombic cells, via numerical optimization techniques an objective function can be improved by adaptively subdividing the rhombic grid and thus adding more walls in cells. We demonstrate the effectiveness of our method for generating interior designs in the applications of improving mechanical stiffness and static stability.",
"The use of 3D printing has rapidly expanded in the past couple of years. It is now possible to produce 3D-printed objects with exceptionally high fidelity and precision. However, although the quality of 3D printing has improved, both the time to print and the material costs have remained high. Moreover, there is no guarantee that a printed model is structurally sound. The printed product often does not survive cleaning, transportation, or handling, or it may even collapse under its own weight. We present a system that addresses this issue by providing automatic detection and correction of the problematic cases. The structural problems are detected by combining a lightweight structural analysis solver with 3D medial axis approximations. After areas with high structural stress are found, the model is corrected by combining three approaches: hollowing, thickening, and strut insertion. Both detection and correction steps are repeated until the problems have been eliminated. Our process is designed to create a model that is visually similar to the original model but possessing greater structural integrity.",
"3D printers have become popular in recent years and enable fabrication of custom objects for home users. However, the cost of the material used in printing remains high. In this paper, we present an automatic solution to design a skin-frame structure for the purpose of reducing the material cost in printing a given 3D object. The frame structure is designed by an optimization scheme which significantly reduces material volume and is guaranteed to be physically stable, geometrically approximate, and printable. Furthermore, the number of struts is minimized by solving an l0 sparsity optimization. We formulate it as a multi-objective programming problem and an iterative extension of the preemptive algorithm is developed to find a compromise solution. We demonstrate the applicability and practicability of our solution by printing various objects using both powder-type and extrusion-type 3D printers. Our method is shown to be more cost-effective than previous works.",
"We introduce elastic textures: a set of parametric, tileable, printable, cubic patterns achieving a broad range of isotropic elastic material properties: the softest pattern is over a thousand times softer than the stiffest, and the Poisson's ratios range from below zero to nearly 0.5. Using a combinatorial search over topologies followed by shape optimization, we explore a wide space of truss-like, symmetric 3D patterns to obtain a small family. This pattern family can be printed without internal support structure on a single-material 3D printer and can be used to fabricate objects with prescribed mechanical behavior. The family can be extended easily to create anisotropic patterns with target orthotropic properties. We demonstrate that our elastic textures are able to achieve a user-supplied varying material property distribution. We also present a material optimization algorithm to choose material properties at each point within an object to best fit a target deformation under a prescribed scenario. We show that, by fabricating these spatially varying materials with elastic textures, the desired behavior is achieved.",
"We present a method to design manufacturable extremal elastic materials. Extremal materials can possess interesting properties such as a negative Poisson’s ratio. The effective properties of the obtained microstructures are shown to be close to the theoretical limit given by mathematical bounds, and the deviations are due to the imposed manufacturing constraints. The designs are generated using topology optimization. Due to high resolution and the imposed robustness requirement they are manufacturable without any need for post-processing. This has been validated by the manufacturing of an isotropic material with a Poisson’s ratio of ν = -0.5 and a bulk modulus of 0.2 times the solid base material’s bulk modulus."
]
} |
1608.04366 | 2508135389 | Porous structures such as trabecular bone are widely seen in nature. These structures are lightweight and exhibit strong mechanical properties. In this paper, we present a method to generate bone-like porous structures as lightweight infill for additive manufacturing. Our method builds upon and extends voxel-wise topology optimization. In particular, for the purpose of generating sparse yet stable structures distributed in the interior of a given shape, we propose upper bounds on the localized material volume in the proximity of each voxel in the design domain. We then aggregate the local per-voxel constraints by their p-norm into an equivalent global constraint, in order to facilitate an efficient optimization process. Implemented on a high-resolution topology optimization framework, our results demonstrate mechanically optimized, detailed porous structures which mimic those found in nature. We further show variants of the optimized structures subject to different design specifications, and we analyze the optimality and robustness of the obtained structures. | Topology optimization is based on a volumetric element-wise parametrization of the design domain. This general formulation does not prescribe the topology, but allows structures to appear and adapt during the iterative optimization process. For a thorough review of topology optimization techniques, let us refer to recent survey articles @cite_28 @cite_22 . Our work is based on the density approach, which is known as Solid Isotropic Material with Penalization (SIMP) @cite_41 . The method is related to the length scale problem in the literature of topology optimization, where the interest is to control the minimal and/or maximal structure size for manufacturability @cite_33 @cite_14 . In particular, we follow the idea of the projection filter @cite_13 @cite_48 in our implementation to impose local volume constraints.
Different from exact length-scale control, we propose an approximate projection method that facilitates fast numerical solution. In our work we exploit numerical multigrid schemes to enable topology optimization at high resolution and efficiency @cite_1 . | {
"cite_N": [
"@cite_14",
"@cite_22",
"@cite_33",
"@cite_28",
"@cite_41",
"@cite_48",
"@cite_1",
"@cite_13"
],
"mid": [
"2291404204",
"2146674345",
"2139458377",
"2220999736",
"2137726847",
"2028947724",
"2260417192",
"2120191698"
],
"abstract": [
"Since its original introduction in structural design, density-based topology optimization has been applied to a number of other fields such as microelectromechanical systems, photonics, acoustics and fluid mechanics. The methodology has been well accepted in industrial design processes where it can provide competitive designs in terms of cost, materials and functionality under a wide set of constraints. However, the optimized topologies are often considered as conceptual due to loosely defined topologies and the need of postprocessing. Subsequent amendments can affect the optimized design performance and in many cases can completely destroy the optimality of the solution. Therefore, the goal of this paper is to review recent advancements in obtaining manufacturable topology-optimized designs. The focus is on methods for imposing minimum and maximum length scales, and ensuring manufacturable, well-defined designs with robust performances. The overview discusses the limitations, the advantages and the associated computational costs. The review is completed with optimized designs for minimum compliance, mechanism design and heat transfer.",
"Topology optimization is the process of determining the optimal layout of material and connectivity inside a design domain. This paper surveys topology optimization of continuum structures from the year 2000 to 2012. It focuses on new developments, improvements, and applications of finite element-based topology optimization, which include a maturation of classical methods, a broadening in the scope of the field, and the introduction of new methods for multiphysics problems. Four different types of topology optimization are reviewed: (1) density-based methods, which include the popular Solid Isotropic Material with Penalization (SIMP) technique, (2) hard-kill methods, including Evolutionary Structural Optimization (ESO), (3) boundary variation methods (level set and phase field), and (4) a new biologically inspired method based on cellular division rules. We hope that this survey will provide an update of the recent advances and novel applications of popular methods, provide exposure to lesser known, yet promising, techniques, and serve as a resource for those new to the field. The presentation of each method focuses on new developments and novel applications.",
"A new scheme for imposing a minimum length scale in topology optimization is presented. It guarantees the existence of an optimal design for a large class of topology optimization problems of practical interest. It is formulated as one constraint that is computationally cheap and for which sensitivities are also cheap to compute. The constraint value is ideally zero, but it can be relaxed to a positive value. The effect of the method is illustrated in topology optimization for minimum compliance and design of compliant mechanisms. Notably, the method produces compliant mechanisms with distributed flexibility, something that has previously been difficult to obtain using topology optimization for the design of compliant mechanisms. The term ‘MOLE method’ is suggested for the method. Copyright © 2003 John Wiley & Sons, Ltd.",
"Topology optimization has undergone a tremendous development since its introduction in the seminal paper by Bendsoe and Kikuchi in 1988. By now, the concept is developing in many different directions, including “density”, “level set”, “topological derivative”, “phase field”, “evolutionary” and several others. The paper gives an overview, comparison and critical review of the different approaches, their strengths, weaknesses, similarities and dissimilarities and suggests guidelines for future research.",
"The paper presents a compact Matlab implementation of a topology optimization code for compliance minimization of statically loaded structures. The total number of Matlab input lines is 99 including optimizer and Finite Element subroutine. The 99 lines are divided into 36 lines for the main program, 12 lines for the Optimality Criteria based optimizer, 16 lines for a mesh-independency filter and 35 lines for the finite element code. In fact, excluding comment lines and lines associated with output and finite element analysis, it is shown that only 49 Matlab input lines are required for solving a well-posed topology optimization problem. By adding three additional lines, the program can solve problems with multiple load cases. The code is intended for educational purposes. The complete Matlab code is given in the Appendix and can be downloaded from the web-site http://www.topopt.dtu.dk.",
"This paper presents a technique for imposing maximum length scale on features in continuum topology optimization. The design domain is searched and local constraints prevent the formation of features that are larger than the prescribed maximum length scale. The technique is demonstrated in the context of structural and fluid topology optimization. Specifically, maximum length scale criterion is applied to (a) the solid phase in minimum compliance design to restrict the size of structural (load-carrying) members, and (b) the fluid (void) phase in minimum dissipated power problems to limit the size of flow channels. Solutions are shown to be near 0/1 (void/solid) topologies that satisfy the maximum length scale criterion. When combined with an existing minimum length scale methodology, the designer gains complete control over member sizes that can influence cost and manufacturability. Further, results suggest restricting maximum length scale may provide a means for influencing performance characteristics, such as redundancy in structural design.",
"A key requirement in 3D fabrication is to generate objects with individual exterior shapes and their interior being optimized to application-specific force constraints and low material consumption. Accomplishing this task is challenging on desktop computers, due to the extreme model resolutions that are required to accurately predict the physical shape properties, requiring memory and computational capacities going beyond what is currently available. Moreover, fabrication-specific constraints need to be considered to enable printability. To address these challenges, we present a scalable system for generating 3D objects using topology optimization, which allows to efficiently evolve the topology of high-resolution solids towards printable and light-weight-high-resistance structures. To achieve this, the system is equipped with a high-performance GPU solver which can efficiently handle models comprising several millions of elements. A minimum thickness constraint is built into the optimization process to automatically enforce printability of the resulting shapes. We further shed light on the question how to incorporate geometric shape constraints, such as symmetry and pattern repetition, in the optimization process. We analyze the performance of the system and demonstrate its potential by a variety of different shapes such as interior structures within closed surfaces, exposed support structures, and surface models.",
"A methodology for imposing a minimum length scale on structural members in discretized topology optimization problems is described. Nodal variables are implemented as the design variables and are projected onto element space to determine the element volume fractions that traditionally define topology. The projection is made via mesh independent functions that are based upon the minimum length scale. A simple linear projection scheme and a non-linear scheme using a regularized Heaviside step function to achieve nearly 0–1 solutions are examined. The new approach is demonstrated on the minimum compliance problem and the popular SIMP method is used to penalize the stiffness of intermediate volume fraction elements. Solutions are shown to meet user-defined length scale criterion without additional constraints, penalty functions or sensitivity filters. No instances of mesh dependence or checkerboard patterns have been observed. Copyright © 2004 John Wiley & Sons, Ltd."
]
} |
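The local volume constraint idea in the row above — upper-bounding the material fraction in each voxel's neighborhood, then aggregating the per-voxel bounds by their p-norm into a single global constraint — can be sketched in a few lines. This is an illustrative sketch only, not code from any of the cited papers; the function name, neighborhood shape, and parameter values are assumptions.

```python
import numpy as np

def local_volume_pnorm(rho, radius=2, p=16):
    """Mean density in a cubic neighborhood of each voxel, aggregated
    into one scalar by a p-mean (a smooth surrogate for max()).

    rho    : 3D array of voxel densities in [0, 1]
    radius : half-width of the cubic averaging neighborhood
    p      : aggregation exponent; larger p tracks the maximum closer
    """
    nx, ny, nz = rho.shape
    local = np.empty_like(rho)
    for i in range(nx):
        for j in range(ny):
            for k in range(nz):
                # Clip the neighborhood at the domain boundary.
                block = rho[max(i - radius, 0):i + radius + 1,
                            max(j - radius, 0):j + radius + 1,
                            max(k - radius, 0):k + radius + 1]
                local[i, j, k] = block.mean()
    # p-mean aggregation of the per-voxel local volume fractions.
    aggregate = np.mean(local ** p) ** (1.0 / p)
    return local, aggregate

rho = np.random.default_rng(0).random((12, 12, 12))
local, g = local_volume_pnorm(rho)
```

Because the p-mean lies between the arithmetic mean and the maximum of the local volume fractions, constraining the single scalar `g` approximately constrains the worst-case neighborhood — which is what makes one global constraint an efficient stand-in for thousands of per-voxel ones during optimization.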
1608.04366 | 2508135389 | Porous structures such as trabecular bone are widely seen in nature. These structures are lightweight and exhibit strong mechanical properties. In this paper, we present a method to generate bone-like porous structures as lightweight infill for additive manufacturing. Our method builds upon and extends voxel-wise topology optimization. In particular, for the purpose of generating sparse yet stable structures distributed in the interior of a given shape, we propose upper bounds on the localized material volume in the proximity of each voxel in the design domain. We then aggregate the local per-voxel constraints by their p -norm into an equivalent global constraint, in order to facilitate an efficient optimization process. Implemented on a high-resolution topology optimization framework, our results demonstrate mechanically optimized, detailed porous structures which mimic those found in nature. We further show variants of the optimized structures subject to different design specifications, and we analyze the optimality and robustness of the obtained structures. | While we approach bone-like porous structures for their superior mechanical properties by using topology optimization, we note that other directions---considering different aspects of bone---exist for generating such structures. One such direction is material reconstruction. For instance, Liu and Shapiro @cite_23 proposed to reconstruct 3D micro-structures from 2D sample images using example-based texture synthesis @cite_39 , such that the synthesised structures preserve statistical features of the given sample. | {
"cite_N": [
"@cite_23",
"@cite_39"
],
"mid": [
"2073958030",
"1485415000"
],
"abstract": [
"Computer models of random heterogeneous materials are becoming increasingly important in order to support the latest advances in material science, biomedical applications and manufacturing. Such models usually take the form of a microstructure whose geometry is reconstructed from a small material sample, an exemplar. A widely used traditional approach to material reconstruction relies on stochastic optimization to approximate the material descriptors of the exemplar, such as volume fraction and two-point correlation functions. This approach is computationally intensive and is limited to certain types of isotropic materials. We show that formulating material reconstruction as a Markov Random Field (MRF) texture synthesis leads to a number of advantages over the traditional optimization-based approaches. These include improved computational efficiency, preservation of many material descriptors, including correlation functions and Minkowski functionals, ability to reconstruct anisotropic materials, and direct use of the gray-scale material images and two-dimensional cross-sections. Quantifying the quality of reconstruction in terms of correlation functions as material descriptors suggests a systematic procedure for selecting a size of neighborhood, a key parameter in the texture synthesis procedure. We support our observations by experiments using implementation with Gaussian pyramid and periodic boundary conditions.",
"Recent years have witnessed significant progress in example-based texture synthesis algorithms. Given an example texture, these methods produce a larger texture that is tailored to the user's needs. In this state-of-the-art report, we aim to achieve three goals: (1) provide a tutorial that is easy to follow for readers who are not already familiar with the subject, (2) make a comprehensive survey and comparisons of different methods, and (3) sketch a vision for future work that can help motivate and guide readers that are interested in texture synthesis research. We cover fundamental algorithms as well as extensions and applications of texture synthesis."
]
} |
1608.04426 | 2517056711 | Unsupervised neural networks, such as restricted Boltzmann machines (RBMs) and deep belief networks (DBNs), are powerful tools for feature selection and pattern recognition tasks. We demonstrate that overfitting occurs in such models just as in deep feedforward neural networks, and discuss possible regularization methods to reduce overfitting. We also propose a "partial" approach to improve the efficiency of Dropout/DropConnect in this scenario, and discuss the theoretical justification of these methods from model convergence and likelihood bounds. Finally, we compare the performance of these methods based on their likelihood and classification error rates for various pattern recognition data sets. | An intuitive extension of Dropout is DropConnect (DC) @cite_18 , which has the form below [ ] and thus masks the weights rather than the nodes. The objective @math has the same form as in (3). There are a number of related model averaging regularization methods, each of which averages over subsets of the original model. For instance, Standout varies Dropout probabilities for different nodes which constitute a binary belief network @cite_0 . Shakeout adds additional noise to Dropout so that it approximates elastic-net regularization @cite_2 . Fast Dropout accelerates Dropout with Gaussian approximation @cite_17 . Variational Dropout applies variational Bayes to infer the Dropout function @cite_13 . | {
"cite_N": [
"@cite_18",
"@cite_0",
"@cite_2",
"@cite_13",
"@cite_17"
],
"mid": [
"4919037",
"2136836265",
"2479750863",
"1826234144",
"35527955"
],
"abstract": [
"We introduce DropConnect, a generalization of Dropout (, 2012), for regularizing large fully-connected layers within neural networks. When training with Dropout, a randomly selected subset of activations are set to zero within each layer. DropConnect instead sets a randomly selected subset of weights within the network to zero. Each unit thus receives input from a random subset of units in the previous layer. We derive a bound on the generalization performance of both Dropout and DropConnect. We then evaluate DropConnect on a range of datasets, comparing to Dropout, and show state-of-the-art results on several image recognition benchmarks by aggregating multiple DropConnect-trained models.",
"Recently, it was shown that deep neural networks can perform very well if the activities of hidden units are regularized during learning, e.g., by randomly dropping out 50% of their activities. We describe a method called 'standout' in which a binary belief network is overlaid on a neural network and is used to regularize its hidden units by selectively setting activities to zero. This 'adaptive dropout network' can be trained jointly with the neural network by approximately computing local expectations of binary dropout variables, computing derivatives using back-propagation, and using stochastic gradient descent. Interestingly, experiments show that the learnt dropout network parameters recapitulate the neural network parameters, suggesting that a good dropout network regularizes activities according to magnitude. When evaluated on the MNIST and NORB datasets, we found that our method achieves lower classification error rates than other feature learning methods, including standard dropout, denoising auto-encoders, and restricted Boltzmann machines. For example, our method achieves 0.80% and 5.8% errors on the MNIST and NORB test sets, which is better than state-of-the-art results obtained using feature learning methods, including those that use convolutional architectures.",
"Recent years have witnessed the success of deep neural networks in dealing with plenty of practical problems. The invention of effective training techniques largely contributes to this success. The so-called \"Dropout\" training scheme is one of the most powerful tools to reduce over-fitting. From the statistical point of view, Dropout works by implicitly imposing an L2 regularizer on the weights. In this paper, we present a new training scheme: Shakeout. Instead of randomly discarding units as Dropout does at the training stage, our method randomly chooses to enhance or reverse the contributions of each unit to the next layer. We show that our scheme leads to a combination of L1 regularization and L2 regularization imposed on the weights, which has been proved effective by the Elastic Net models in practice. We have empirically evaluated the Shakeout scheme and demonstrated that sparse network weights are obtained via Shakeout training. Our classification experiments on real-life image datasets MNIST and CIFAR-10 show that Shakeout deals with over-fitting effectively.",
"We investigate a local reparameterization technique for greatly reducing the variance of stochastic gradients for variational Bayesian inference (SGVB) of a posterior over model parameters, while retaining parallelizability. This local reparameterization translates uncertainty about global parameters into local noise that is independent across datapoints in the minibatch. Such parameterizations can be trivially parallelized and have variance that is inversely proportional to the mini-batch size, generally leading to much faster convergence. Additionally, we explore a connection with dropout: Gaussian dropout objectives correspond to SGVB with local reparameterization, a scale-invariant prior and proportionally fixed posterior variance. Our method allows inference of more flexibly parameterized posteriors; specifically, we propose variational dropout, a generalization of Gaussian dropout where the dropout rates are learned, often leading to better models. The method is demonstrated through several experiments.",
"Preventing feature co-adaptation by encouraging independent contributions from different features often improves classification and regression performance. Dropout training (, 2012) does this by randomly dropping out (zeroing) hidden units and input features during training of neural networks. However, repeatedly sampling a random subset of input features makes training much slower. Based on an examination of the implied objective function of dropout training, we show how to do fast dropout training by sampling from or integrating a Gaussian approximation, instead of doing Monte Carlo optimization of this objective. This approximation, justified by the central limit theorem and empirical evidence, gives an order of magnitude speedup and more stability. We show how to do fast dropout training for classification, regression, and multilayer neural networks. Beyond dropout, our technique is extended to integrate out other types of noise and small image transformations."
]
} |
1608.04426 | 2517056711 | Unsupervised neural networks, such as restricted Boltzmann machines (RBMs) and deep belief networks (DBNs), are powerful tools for feature selection and pattern recognition tasks. We demonstrate that overfitting occurs in such models just as in deep feedforward neural networks, and discuss possible regularization methods to reduce overfitting. We also propose a "partial" approach to improve the efficiency of Dropout/DropConnect in this scenario, and discuss the theoretical justification of these methods from model convergence and likelihood bounds. Finally, we compare the performance of these methods based on their likelihood and classification error rates for various pattern recognition data sets. | We note that while Dropout has been discussed for RBMs @cite_8 , to the best of our knowledge, there is no literature extending common regularization methods to RBMs and unsupervised deep neural networks; for instance, adaptive @math regularization and DropConnect as mentioned. Therefore, below we discuss their implementations and examine their empirical performance. In addition to studying model convergence and likelihood bounds, we propose partial Dropout/DropConnect which iteratively drops a subset of nodes or edges based on a given calibrated model, therefore improving robustness in many situations. | {
"cite_N": [
"@cite_8"
],
"mid": [
"2095705004"
],
"abstract": [
"Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. During training, dropout samples from an exponential number of different \"thinned\" networks. At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights. This significantly reduces overfitting and gives major improvements over other regularization methods. We show that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets."
]
} |
1608.04260 | 2963194712 | In this paper, we consider the problem of location-dependent opportunistic bandwidth sharing between static and mobile (i.e., moving) downlink users in a cellular network. Each cell of the network has some fixed number of static users. Mobile users enter the cell, move inside the cell for some time, and then leave the cell. In order to provide higher data rate to the highly mobile users whose fast fading channel variation is difficult to track, we propose location dependent bandwidth sharing between the two classes of static and mobile users; the idea is to provide higher bandwidth to the mobile users at favourable locations, and provide higher bandwidth to the static users in other times. Our approach is agnostic to the way the bandwidth is further shared within the same class of users; it can be combined with any particular bandwidth allocation policy employed for one of these two classes of users. We formulate the problem as a long run average reward Markov decision process (MDP) where the per-step reward is a linear combination of instantaneous data volumes received by static and mobile users, and find the optimal policy. The optimal policy is binary in nature; it allocates the entire bandwidth either to the static users or to the mobile users at any given time. The reward structure of this MDP is not known in general, and it may change with time. To alleviate these issues, we propose a learning algorithm based on single timescale stochastic approximation. Also, noting that the MDP problem can be used to maximize the long run average data rate for mobile users subject to a constraint on the long run average data rate of static users, we provide a learning algorithm based on multi-timescale stochastic approximation. We prove asymptotic convergence of the bandwidth sharing policies under these learning algorithms to the optimal policy. 
The results are extended to address the issue of fair bandwidth sharing between the two classes of static and mobile users, where the notion of fairness is motivated by the popular notion of @math α-fairness in the literature. Numerical results exhibit significant performance improvement by our scheme, as well as fast convergence, and also demonstrate the trade-off between performance gain and fairness requirement. | There has been a vast literature on the impact of user mobility in wireless networks. The authors in @cite_13 have shown that mobility increases the capacity. @cite_5 has explored the trade-off between delay and throughput in ad-hoc networks in the presence of mobility. The papers @cite_8 , @cite_2 , @cite_9 , @cite_19 , @cite_3 study the impact of inter- and intra-cell mobility on capacity, and also the trade-off between throughput and fairness; these results show that mobility increases the capacity of cellular networks when base stations cooperate among themselves. | {
"cite_N": [
"@cite_8",
"@cite_9",
"@cite_3",
"@cite_19",
"@cite_2",
"@cite_5",
"@cite_13"
],
"mid": [
"2007089224",
"2121141468",
"2043542548",
"2068403222",
"2128510063",
"2155685934",
"2149959815"
],
"abstract": [
"The performance evaluation of wireless networks is severely complicated by the specific features of radio communication, such as highly variable channel conditions, interference issues, and possible hand-offs among base stations. The latter elements have no natural counterparts in wireline scenarios, and create a need for novel performance models that account for the impact of these characteristics on the service rates of users. Motivated by the above issues, we review several models for characterizing the capacity and evaluating the flow-level performance of wireless networks carrying elastic data transfers. We first examine the flow-level performance and stability of a wide family of so-called α-fair channel-aware scheduling strategies. We establish that these disciplines provide maximum stability, and describe how the special case of the Proportional Fair policy gives rise to a Processor-Sharing model with a state-dependent service rate. Next we turn attention to a network of several base stations with inter-cell interference. We derive both necessary and sufficient stability conditions and construct lower and upper bounds for the flow-level performance measures. Lastly we investigate the impact of user mobility that occurs on a slow timescale and causes possible hand-offs of active sessions. We show that the mobility tends to increase the capacity region, both in the case of globally optimal scheduling and local α-fair scheduling. It is additionally demonstrated that the capacity and user throughput improve with lower values of the fairness index α.",
"Abstract—The performance of wireless data systems has been thoroughly studied in the context of a single base station. In the present paper we analyze networks with several interacting base stations, and specifically examine the capacity impact of intraand inter-cell mobility. We consider a dynamic setting where users come and go over time as governed by random finite-size data transfers, and explicitly allow for users to roam around over the course of their service. We show that mobility tends to increase the capacity, not only in case of globally optimal scheduling, but also when each of the base stations operates according to a fair sharing policy. The latter approach offers the advantages that it avoids complex centralized control, and grants each user a fair share of the resources, preventing the potential starvation that may occur under a globally optimal strategy. An important implication is that a simple, conservative capacity estimate is obtained by ‘ignoring’ mobility, and assuming that users remain stationary for the duration of their service. We further demonstrate that the capacity region for globally optimal scheduling is in general strictly larger than the stability region for a fair sharing discipline. However, if the users distribute themselves so as to maximize their individual throughputs, thus enabling some implicit coordination, then a fair sharing policy is in fact guaranteed to achieve stability whenever a globally optimal strategy is able to do so.",
"Proportional Fair (PF) scheduling algorithms are the de facto standard in cellular networks. They exploit the users' channel state diversity (induced by fast-fading) and are optimal for stationary channel state distributions and an infinite time-horizon. However, mobile users experience a nonstationary channel, due to slow-fading (on the order of seconds), and are associated with base stations for short periods. Hence, we develop the Predictive Finite-horizon PF Scheduling ((PF)^2S) Framework that exploits mobility. We present extensive channel measurement results from a 3G network and characterize mobility-induced channel state trends. We show that a user's channel state is highly reproducible and leverage that to develop a data rate prediction mechanism. We then present a few channel allocation estimation algorithms that exploit the prediction mechanism. Our trace-based simulations consider instances of the ((PF)^2S) Framework composed of combinations of prediction and channel allocation estimation algorithms. They indicate that the framework can increase the throughput by 15-55% compared to traditional PF schedulers, while improving fairness.",
"The objective of the present paper is to give an analytic approximation of the performance of elastic traffic in wireless cellular networks accounting for user's mobility. To do so we build a Markovian model for users arrivals, departures and mobility in such networks; which we call WET model. We firstly consider intracell mobility where each user is confined to remain within its serving cell. Then we consider the complete mobility where users may either move within each cell or make a handover (i.e. change to another cell). We propose to approximate the WET model by a Whittle one for which the performance is expressed analytically. We validate the approximation by simulating an OFDMA cellular network. We observe that the Whittle approximation underestimates the throughput per user of the WET model. Thus it may be used for a conservative dimensioning of the cellular networks. Moreover, when the traffic demand and the user speed are moderate, the Whittle approximation is good and thus leads to a precise dimensioning.",
"The potential for exploiting rate variations to increase the capacity of wireless systems by opportunistic scheduling has been extensively studied at packet level. In the present paper, we examine how slower, mobility-induced rate variations impact performance at flow level, accounting for the random number of flows sharing the transmission resource. We identify two limit regimes, termed fluid and quasistationary, where the rate variations occur on an infinitely fast and an infinitely slow time scale, respectively. Using stochastic comparison techniques, we show that these limit regimes provide simple performance bounds that only depend on easily calculated load factors. Additionally, we prove that for a broad class of fading processes, performance varies monotically with the speed of the rate variations. These results are illustrated through numerical experiments, showing that the fluid and quasistationary bounds are remarkably tight in certain usual cases",
"Network throughput and packet delay are two important parameters in the design and the evaluation of routing protocols for ad-hoc networks. While mobility has been shown to increase the capacity of a network, it is not clear whether the delay can be kept low without trading off the throughput. We consider a theoretical framework and propose a routing algorithm which exploits the patterns in the mobility of nodes to provide guarantees on the delay. Moreover, the throughput achieved by the algorithm is only a poly-logarithmic factor off from the optimal. The algorithm itself is fairly simple. In order to analyze its feasibility and the performance guarantee, we used various techniques of probabilistic analysis of algorithms. The approach taken in this paper could be applied to the analyses of some other routing algorithms for mobile ad hoc networks proposed in the literature.",
"The capacity of ad hoc wireless networks is constrained by the mutual interference of concurrent transmissions between nodes. We study a model of an ad hoc network where n nodes communicate in random source-destination pairs. These nodes are assumed to be mobile. We examine the per-session throughput for applications with loose delay constraints, such that the topology changes over the time-scale of packet delivery. Under this assumption, the per-user throughput can increase dramatically when nodes are mobile rather than fixed. This improvement can be achieved by exploiting a form of multiuser diversity via packet relaying."
]
} |
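The α-fairness notion invoked in the row above has a standard closed form that is easy to state in code. A minimal sketch (the function name is illustrative; the formula is the usual α-fair utility family, not taken from the cited papers):

```python
import math

def alpha_fair_utility(x, alpha):
    """Utility of a positive rate x under the standard alpha-fair family.

    alpha = 0 gives linear utility (pure throughput maximization),
    alpha = 1 gives log(x) (proportional fairness), and large alpha
    approaches max-min fairness.
    """
    if x <= 0:
        raise ValueError("rate must be positive")
    if abs(alpha - 1.0) < 1e-12:
        return math.log(x)  # the alpha -> 1 limit of x^(1-a)/(1-a)
    return x ** (1.0 - alpha) / (1.0 - alpha)
```

For example, with alpha = 2 the utility of a rate of 2 is -1/2, so maximizing total utility strongly penalizes starving any single user — the fairness/throughput trade-off the row's numerical results explore.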
1608.04138 | 2515899054 | Conventional synthetic aperture radar (SAR) systems are limited in their ability to satisfy the increasing requirement for improved spatial resolution and wider coverage. The demand for high resolution requires high sampling rates, while coverage is limited by the pulse repetition frequency. Consequently, sampling rate reduction is of high practical value in SAR imaging. In this paper, we introduce a new algorithm, equivalent to the well-known range-Doppler method, to process SAR data using the Fourier series coefficients of the raw signals. We then demonstrate how to exploit the algorithm features to reduce sampling rate in both range and azimuth axes and process the signals at sub-Nyquist rates, by using compressive sensing (CS) tools. In particular, we demonstrate recovery of an image using only a portion of the received signal’s bandwidth and also while dropping a large percentage of the transmitted pulses. The complementary pulses may be used to capture other scenes within the same coherent processing interval. In addition, we propose exploiting the ability to reconstruct the image from narrow bands in order to dynamically adapt the transmitted waveform energy to vacant spectral bands, paving the way to cognitive SAR. The proposed recovery algorithms form a new CS-SAR imaging method that can be applied to high-resolution SAR data acquired at sub-Nyquist rates in range and azimuth. The performance of our method is assessed using simulated and real data sets. Finally, our approach is implemented in hardware using a previously suggested Xampling radar prototype. | CS theory has shown promising results in the field of sub-Nyquist sampling in radar applications. The use of Fourier series coefficients in pulse-Doppler radar enables practical sub-Nyquist sampling when the illuminated scene consists of moving targets that correspond to a sparse range-Doppler map @cite_25 @cite_31 @cite_23 . 
CS has also been explored in a wide range of radar imaging applications @cite_39 . In @cite_35 , the authors applied CS to SAR images by separating the processing into two decoupled one-dimensional operations. They showed that CS theory can then be applied in order to reduce the rate in azimuth. However, since RCMC is ignored, this method does not consider system setups with range-varying parameters; hence, the quality of some images might be degraded. | {
"cite_N": [
"@cite_35",
"@cite_39",
"@cite_23",
"@cite_31",
"@cite_25"
],
"mid": [
"2155473288",
"2082476922",
"1990646690",
"2132874190",
"2043422651"
],
"abstract": [
"Radar data have already proven to be compressible with no significant losses for most of the applications in which it is used. In the framework of information theory, the compressibility of a signal implies that it can be decomposed onto a reduced set of basic elements. Since the same quantity of information is carried by the original signal and its decomposition, it can be deduced that a certain degree of redundancy exists in the explicit representation. According to the theory of compressive sensing (CS), due to this redundancy, it is possible to infer an accurate representation of an unknown compressible signal through a highly incomplete set of measurements. Based on this assumption, this paper proposes a novel method for the focusing of raw data in the framework of radar imaging. The technique presented is introduced as an alternative option to the traditional matched filtering, and it suggests that the new modes of acquisition of data are more efficient in orbital configurations. In this paper, this method is first tested on 1-D simulated signals, and results are discussed. An experiment with synthetic aperture radar (SAR) raw data is also described. Its purpose is to show the potential of CS applied to SAR systems. In particular, we show that an image can be reconstructed, without the loss of resolution, after dropping a large percentage of the received pulses, which would allow the implementation of wide-swath modes without reducing the azimuth resolution.",
"Remote sensing with radar is typically an ill-posed linear inverse problem: a scene is to be inferred from limited measurements of scattered electric fields. Parsimonious models provide a compressed representation of the unknown scene and offer a means for regularizing the inversion task. The emerging field of compressed sensing combines nonlinear reconstruction algorithms and pseudorandom linear measurements to provide reconstruction guarantees for sparse solutions to linear inverse problems. This paper surveys the use of sparse reconstruction algorithms and randomized measurement strategies in radar processing. Although the two themes have a long history in radar literature, the accessible framework provided by compressed sensing illuminates the impact of joining these themes. Potential future directions are conjectured both for extension of theory motivated by practice and for modification of practice based on theoretical insights.",
"Recently, a new approach to sub-Nyquist sampling and processing in pulse-Doppler radar was introduced, based on the Xampling approach to reduced-rate sampling combined with Doppler focusing performed on the low-rate samples. This method imposes no restrictions on the transmitter, reduces both the sampling and processing rates and exhibits linear signal-to-noise ratio improvement with the number of pulses. Here we extend previous work on sub-Nyquist pulse-Doppler radar by incorporating a clutter removal algorithm operating on the sub-Nyquist samples, allowing to reject clutter modeled as a colored Gaussian random process. In particular, we show how to adapt standard clutter rejection techniques to work directly on the sub-Nyquist samples, allowing to preserve high detection rates even in the presence of strong clutter, while avoiding the need to first interpolate the samples to the Nyquist grid.",
"Traditional radar sensing typically employs matched filtering between the received signal and the shape of the transmitted pulse. Matched filtering (MF) is conventionally carried out digitally, after sampling the received analog signals. Here, principles from classic sampling theory are generally employed, requiring that the received signals be sampled at twice their baseband bandwidth. The resulting sampling rates necessary for correlation-based radar systems become quite high, as growing demands for target distinction capability and spatial resolution stretch the bandwidth of the transmitted pulse. The large amounts of sampled data also necessitate vast memory capacity. In addition, real-time data processing typically results in high power consumption. Recently, new approaches for radar sensing and estimation were introduced, based on the finite rate of innovation (FRI) and Xampling frameworks. Exploiting the parametric nature of radar signals, these techniques allow significant reduction in sampling rate, implying potential power savings, while maintaining the system's estimation capabilities at sufficiently high signal-to-noise ratios (SNRs). Here we present for the first time a design and implementation of an Xampling-based hardware prototype that allows sampling of radar signals at rates much lower than Nyquist. We demonstrate by real-time analog experiments that our system is able to maintain reasonable recovery capabilities, while sampling radar signals that require sampling at a rate of about 30 MHz at a total rate of 1 MHz.",
"We investigate the problem of a monostatic pulse-Doppler radar transceiver trying to detect targets sparsely populated in the radar's unambiguous time-frequency region. Several past works employ compressed sensing (CS) algorithms to this type of problem but either do not address sample rate reduction, impose constraints on the radar transmitter, propose CS recovery methods with prohibitive dictionary size, or perform poorly in noisy conditions. Here, we describe a sub-Nyquist sampling and recovery approach called Doppler focusing, which addresses all of these problems: it performs low rate sampling and digital processing, imposes no restrictions on the transmitter, and uses a CS dictionary with size, which does not increase with increasing number of pulses P. Furthermore, in the presence of noise, Doppler focusing enjoys a signal-to-noise ratio (SNR) improvement, which scales linearly with P, obtaining good detection performance even at SNR as low as - 25 dB. The recovery is based on the Xampling framework, which allows reduction of the number of samples needed to accurately represent the signal, directly in the analog-to-digital conversion process. After sampling, the entire digital recovery process is performed on the low rate samples without having to return to the Nyquist rate. Finally, our approach can be implemented in hardware using a previously suggested Xampling radar prototype."
]
} |
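The sub-Nyquist recovery idea running through this row — reconstruct a sparse scene from a small subset of Fourier-domain samples — can be illustrated in a few lines. A toy sketch using orthogonal matching pursuit (all sizes and the greedy solver are illustrative choices, not the paper's Xampling pipeline):

```python
import numpy as np

rng = np.random.default_rng(7)
n, k, m = 64, 3, 24            # scene length, sparsity, number of kept samples

# Sparse "scene": k nonzero reflectivities at random positions
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.uniform(1.0, 2.0, size=k)

# Keep only m randomly chosen rows of the DFT matrix (sub-Nyquist sampling)
rows = rng.choice(n, size=m, replace=False)
F = np.exp(-2j * np.pi * np.outer(rows, np.arange(n)) / n) / np.sqrt(n)
y = F @ x

# Orthogonal matching pursuit: greedily pick the best-correlated column,
# then least-squares refit on the selected support
residual, support = y.copy(), []
coef = np.zeros(0, dtype=complex)
for _ in range(m):
    if np.linalg.norm(residual) < 1e-10:
        break
    j = int(np.argmax(np.abs(F.conj().T @ residual)))
    if j not in support:
        support.append(j)
    coef, *_ = np.linalg.lstsq(F[:, support], y, rcond=None)
    residual = y - F[:, support] @ coef

x_hat = np.zeros(n, dtype=complex)
x_hat[support] = coef          # reconstructed scene on the recovered support
```

With these sizes the greedy loop typically locates the 3-term support in 3 iterations, i.e. the scene is recovered from 24 of the 64 Nyquist-rate samples.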
1608.04138 | 2515899054 | Conventional synthetic aperture radar (SAR) systems are limited in their ability to satisfy the increasing requirement for improved spatial resolution and wider coverage. The demand for high resolution requires high sampling rates, while coverage is limited by the pulse repetition frequency. Consequently, sampling rate reduction is of high practical value in SAR imaging. In this paper, we introduce a new algorithm, equivalent to the well-known range-Doppler method, to process SAR data using the Fourier series coefficients of the raw signals. We then demonstrate how to exploit the algorithm features to reduce sampling rate in both range and azimuth axes and process the signals at sub-Nyquist rates, by using compressive sensing (CS) tools. In particular, we demonstrate recovery of an image using only a portion of the received signal’s bandwidth and also while dropping a large percentage of the transmitted pulses. The complementary pulses may be used to capture other scenes within the same coherent processing interval. In addition, we propose exploiting the ability to reconstruct the image from narrow bands in order to dynamically adapt the transmitted waveform energy to vacant spectral bands, paving the way to cognitive SAR. The proposed recovery algorithms form a new CS-SAR imaging method that can be applied to high-resolution SAR data acquired at sub-Nyquist rates in range and azimuth. The performance of our method is assessed using simulated and real data sets. Finally, our approach is implemented in hardware using a previously suggested Xampling radar prototype. | The authors in @cite_38 and @cite_1 used CS in order to reduce the rate in both dimensions. In @cite_38 RDA and CS were combined in order to exploit RDA benefits, however, only linear interpolation was considered. 
To achieve accurate results, the data is normally oversampled, and the kernel of the interpolator may span many samples, which comes at the expense of efficiency and computational load. In @cite_1 , the authors suggest a compressive sensing algorithm based on the chirp scaling algorithm (CSA), chosen for its simplicity. This processing technique does not require interpolation @cite_19 . Unlike RDA, this method is based on the assumption that the transmitted signal has a chirp form, and it is known to be less robust to noise. Both methods apply random sampling in time without proposing a practical sampling mechanism that enables the extraction of the low-rate samples directly from the analog signals. | {
"cite_N": [
"@cite_19",
"@cite_38",
"@cite_1"
],
"mid": [
"2120024360",
"2047042135",
"2055452283"
],
"abstract": [
"A space-variant interpolation is required to compensate for the migration of signal energy through range resolution cells when processing synthetic aperture radar (SAR) data, using either the classical range/Doppler (R/D) algorithm or related frequency domain techniques. In general, interpolation requires significant computation time, and leads to loss of image quality, especially in the complex image. The new chirp scaling algorithm avoids interpolation, yet performs range cell migration correction accurately. The algorithm requires only complex multiplies and Fourier transforms to implement, is inherently phase preserving, and is suitable for wide-swath, large-beamwidth, and large-squint applications. This paper describes the chirp scaling algorithm, summarizes simulation results, presents imagery processed with the algorithm, and reviews quantitative measures of its performance. Based on quantitative comparison, the chirp scaling algorithm provides image quality equal to or better than the precision range Doppler processor. Over the range of parameters tested, image quality results approach the theoretical limit, as defined by the system bandwidth.",
"In recent years, compressed sensing (CS) has been applied in the field of synthetic aperture radar (SAR) imaging and shows great potential. The existing models are, however, based on application of the sensing matrix acquired by the exact observation functions. As a result, the corresponding reconstruction algorithms are much more time consuming than traditional matched filter (MF)-based focusing methods, especially in high resolution and wide swath systems. In this paper, we formulate a new CS-SAR imaging model based on the use of the approximated SAR observation deducted from the inverse of focusing procedures. We incorporate CS and MF within an sparse regularization framework that is then solved by a fast iterative thresholding algorithm. The proposed model forms a new CS-SAR imaging method that can be applied to high-quality and high-resolution imaging under sub-Nyquist rate sampling, while saving the computational cost substantially both in time and memory. Simulations and real SAR data applications support that the proposed method can perform SAR imaging effectively and efficiently under Nyquist rate, especially for large scale applications.",
"A novel compressive sensing (CS) algorithm for synthetic aperture radar (SAR) imaging is proposed which is called as the two-dimensional double CS algorithm (2-D-DCSA). We first derive the imaging operator for SAR, which is named as the chirp-scaling operator (CSO), from the chirp-scaling algorithm (CSA), then we show its inverse is a linear map, which transforms the SAR image to the received baseband radar signal. We show that the SAR image can be reconstructed simultaneously in the range and azimuth directions from a small number of the raw data. The proposed algorithm can handle large-scale data because both the CSO and its inverse allow fast matrix-vector multiplications. Both the simulated and real data are processed to test the algorithm and the results show that the 2-D-DCSA can be applied to reconstructing the SAR images effectively with much less data than regularly required."
]
} |
1608.04138 | 2515899054 | Conventional synthetic aperture radar (SAR) systems are limited in their ability to satisfy the increasing requirement for improved spatial resolution and wider coverage. The demand for high resolution requires high sampling rates, while coverage is limited by the pulse repetition frequency. Consequently, sampling rate reduction is of high practical value in SAR imaging. In this paper, we introduce a new algorithm, equivalent to the well-known range-Doppler method, to process SAR data using the Fourier series coefficients of the raw signals. We then demonstrate how to exploit the algorithm features to reduce sampling rate in both range and azimuth axes and process the signals at sub-Nyquist rates, by using compressive sensing (CS) tools. In particular, we demonstrate recovery of an image using only a portion of the received signal’s bandwidth and also while dropping a large percentage of the transmitted pulses. The complementary pulses may be used to capture other scenes within the same coherent processing interval. In addition, we propose exploiting the ability to reconstruct the image from narrow bands in order to dynamically adapt the transmitted waveform energy to vacant spectral bands, paving the way to cognitive SAR. The proposed recovery algorithms form a new CS-SAR imaging method that can be applied to high-resolution SAR data acquired at sub-Nyquist rates in range and azimuth. The performance of our method is assessed using simulated and real data sets. Finally, our approach is implemented in hardware using a previously suggested Xampling radar prototype. | Following subsampling, most of the existing CS imaging schemes stack the entire two-dimensional reflectivity map into a vector in order to apply CS recovery methods. For real SAR images, this vectorization operation results in large memory requirements and long reconstruction times. 
Alternatively, the authors in @cite_26 suggested splitting the image into segments and using several computing units to process the data in parallel and solve the vectorized CS problem. This approach achieves a better runtime but does not exploit the two-dimensional structure of the SAR sampling problem. | {
"cite_N": [
"@cite_26"
],
"mid": [
"2114213376"
],
"abstract": [
"The compressed sensing (CS) synthetic aperture radar (SAR) imaging scheme can use random undersampled data to reconstruct images of sparse or compressible targets. However, compared to Nyquist sampling, the cost of the CS imaging scheme is the long reconstruction time, particularly for the conventional reconstruction strategy, which reconstructs the whole scene in one process. It also needs a large memory to access the sensing matrix used for reconstruction. In this paper, a segmented reconstruction strategy for the CS SAR imaging scheme is proposed. The whole scene is split into a set of small subscenes, so that the reconstruction time can be reduced significantly. The proposed method also needs much less memory for computation than the conventional method. In this proposed method, the range profiles are reconstructed first, and then, the range profiles can be split into subpatches. Subscenes can be reconstructed by using the subpatch data, and the whole scene can be obtained by combining the reconstructed subscenes. Simulation and experimental results are shown to demonstrate the validity of the proposed method."
]
} |
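The segmented strategy of @cite_26 — reconstruct small subscenes instead of one huge vectorized problem — starts from a plain tiling step. A minimal sketch (non-overlapping tiles, dimensions assumed divisible by the tile size; names are illustrative):

```python
import numpy as np

def split_into_subscenes(scene, tile):
    """Split a 2D scene into non-overlapping tile x tile subscenes.

    Each subscene can then be reconstructed independently (and in
    parallel), with a much smaller sensing matrix per unit.
    """
    h, w = scene.shape
    assert h % tile == 0 and w % tile == 0, "padding not handled in this sketch"
    return [scene[i:i + tile, j:j + tile]
            for i in range(0, h, tile)
            for j in range(0, w, tile)]

# A 4x4 scene split into four 2x2 subscenes
tiles = split_into_subscenes(np.arange(16).reshape(4, 4), 2)
```

The memory argument is direct: a vectorized scene of n^2 pixels needs an m x n^2 sensing matrix, while each of the (n/t)^2 tiles needs only a matrix over t^2 unknowns.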
1608.04217 | 2517831099 | Given a set of @math elements separated by a pairwise distance matrix, the minimum differential dispersion problem (Min-Diff DP) aims to identify a subset of m elements (m < n) such that the difference between the maximum sum and the minimum sum of the inter-element distances between any two chosen elements is minimized. We propose an effective iterated local search (denoted by ILS_MinDiff) for Min-Diff DP. To ensure an effective exploration and exploitation of the search space, the proposed ILS_MinDiff algorithm iterates through three sequential search phases: a fast descent-based neighborhood search phase to find a local optimum from a given starting solution, a local optima exploring phase to visit nearby high-quality solutions around a given local optimum, and a local optima escaping phase to move away from the current search region. Experimental results on six data sets of 190 benchmark instances demonstrate that ILS_MinDiff competes favorably with the state-of-the-art algorithms by finding 130 improved best results (new upper bounds). | Based on the general ILS framework, several variants and extended approaches have recently been proposed in the literature, of which two representative examples are breakout local search (BLS) @cite_35 @cite_5 and three-phase search (TPS) @cite_18 . The effectiveness of BLS and TPS have been verified on a variety of hard optimization problems and applications (see examples of Table ). In the following, we present a brief review of these ILS variants. | {
"cite_N": [
"@cite_35",
"@cite_5",
"@cite_18"
],
"mid": [
"",
"2395540747",
"1881940886"
],
"abstract": [
"",
"In this paper, we propose the first heuristic approach for the vertex separator problem (VSP), based on Breakout Local Search (BLS). BLS is a recent meta-heuristic that follows the general framework of the popular Iterated Local Search (ILS) with a particular focus on the perturbation strategy. Based on some relevant information on search history, it tries to introduce the most suitable degree of diversification by determining adaptively the number and type of moves for the next perturbation phase. The proposed heuristic is highly competitive with the exact state-of-art approaches from the literature on the current VSP benchmark. Moreover, we present for the first time computational results for a set of large graphs with up to 3000 vertices, which constitutes a new challenging benchmark for VSP approaches.",
"Given an undirected graph with costs associated with each edge as well as each pair of edges, the quadratic minimum spanning tree problem (QMSTP) consists of determining a spanning tree of minimum cost. QMSTP is useful to model many real-life network design applications. We propose a three-phase search approach named TPS for solving QMSTP, which organizes the search process into three distinctive phases which are iterated: (1) a descent neighborhood search phase using two move operators to reach a local optimum from a given starting solution, (2) a local optima exploring phase to discover nearby local optima within a given regional area, and (3) a perturbation-based diversification phase to jump out of the current regional search area. TPS also introduces a pre-estimation criterion to significantly improve the efficiency of neighborhood evaluation, and develops a new swap-vertex neighborhood (as well as a swap-vertex based perturbation operator) which prove to be quite powerful for solving a series of special instances with particular structures. Computational experiments based on 7 sets of 659 popular benchmarks show that TPS produces highly competitive results compared to the best performing approaches in the literature. TPS discovers improved best known results (new upper bounds) for 33 open instances and matches the best known results for all the remaining instances. Critical elements and parameters of the TPS algorithm are analyzed to understand its behavior. Highlights: QMSTP is a general model able to formulate a number of network design problems. We propose a three phase search heuristic (TPS) for this problem. TPS is assessed on 7 groups of 659 representative benchmarks of the literature. TPS finds improved best solutions for 33 challenging instances. TPS finds all the optimal solutions for the 29 instances transformed from the QAP."
]
} |
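The Min-Diff DP objective from this row's abstract is compact enough to state directly. A small sketch under the usual reading of the objective (brute force, so only for tiny instances; names are illustrative):

```python
from itertools import combinations

def diff_dispersion(subset, d):
    """Spread of the per-element distance sums within `subset`.

    For each chosen element i, sum its distances to the other chosen
    elements; return (max sum) - (min sum), the quantity Min-Diff DP
    minimizes.
    """
    sums = [sum(d[i][j] for j in subset if j != i) for i in subset]
    return max(sums) - min(sums)

def min_diff_brute_force(d, m):
    """Exhaustively pick the m-subset with the smallest differential
    dispersion (exponential cost; a stand-in for the heuristic search)."""
    n = len(d)
    return min(combinations(range(n), m), key=lambda s: diff_dispersion(s, d))
```

On a 4-element instance with distances d[i][j] = |i - j|, the subset (0, 1, 2) has per-element sums (3, 2, 3), hence objective 1, and brute force confirms no 3-subset does better.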
1608.04217 | 2517831099 | Given a set of @math elements separated by a pairwise distance matrix, the minimum differential dispersion problem (Min-Diff DP) aims to identify a subset of m elements (m < n) such that the difference between the maximum sum and the minimum sum of the inter-element distances between any two chosen elements is minimized. We propose an effective iterated local search (denoted by ILS_MinDiff) for Min-Diff DP. To ensure an effective exploration and exploitation of the search space, the proposed ILS_MinDiff algorithm iterates through three sequential search phases: a fast descent-based neighborhood search phase to find a local optimum from a given starting solution, a local optima exploring phase to visit nearby high-quality solutions around a given local optimum, and a local optima escaping phase to move away from the current search region. Experimental results on six data sets of 190 benchmark instances demonstrate that ILS_MinDiff competes favorably with the state-of-the-art algorithms by finding 130 improved best results (new upper bounds). | Three-phase search proposed in @cite_18 follows and generalizes the basic ILS scheme. TPS iterates through three distinctive and sequential search phases. The basic idea of TPS is described as follows. Starting from an initial solution, a descent-based neighborhood search procedure is first employed to find a local optimal solution. Then, a local optima exploring phase is triggered with the purpose of discovering nearby local optima of better quality. When the search stagnates in the current search zone, TPS turns into a diversified perturbation phase, which strongly modifies the current solution to jump into a new search region. The process iteratively runs the above three phases until a given stopping condition is met. 
Compared to BLS, TPS further divides the perturbation phase into a local optima exploring phase (to discover more local optima within a given region) and a diversified perturbation phase (to displace the search to a new and distant search region). TPS has been successfully used to solve several optimization problems, as shown in the right column of Table . The general framework of TPS is outlined in Algorithm . Actually, the ILS algorithm proposed in this work follows the TPS framework. | {
"cite_N": [
"@cite_18"
],
"mid": [
"1881940886"
],
"abstract": [
"Given an undirected graph with costs associated with each edge as well as each pair of edges, the quadratic minimum spanning tree problem (QMSTP) consists of determining a spanning tree of minimum cost. QMSTP is useful to model many real-life network design applications. We propose a three-phase search approach named TPS for solving QMSTP, which organizes the search process into three distinctive phases which are iterated: (1) a descent neighborhood search phase using two move operators to reach a local optimum from a given starting solution, (2) a local optima exploring phase to discover nearby local optima within a given regional area, and (3) a perturbation-based diversification phase to jump out of the current regional search area. TPS also introduces a pre-estimation criterion to significantly improve the efficiency of neighborhood evaluation, and develops a new swap-vertex neighborhood (as well as a swap-vertex based perturbation operator) which prove to be quite powerful for solving a series of special instances with particular structures. Computational experiments based on 7 sets of 659 popular benchmarks show that TPS produces highly competitive results compared to the best performing approaches in the literature. TPS discovers improved best known results (new upper bounds) for 33 open instances and matches the best known results for all the remaining instances. Critical elements and parameters of the TPS algorithm are analyzed to understand its behavior. Highlights: QMSTP is a general model able to formulate a number of network design problems. We propose a three phase search heuristic (TPS) for this problem. TPS is assessed on 7 groups of 659 representative benchmarks of the literature. TPS finds improved best solutions for 33 challenging instances. TPS finds all the optimal solutions for the 29 instances transformed from the QAP."
]
} |
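The three-phase organization summarized in the row above (descent to a local optimum, local-optima exploring via a weak perturbation, diversified perturbation to jump to a distant region) can be sketched on a toy binary problem. This is a minimal illustrative sketch: the bit-flip neighborhood and perturbation strengths are assumptions standing in for the problem-specific move operators of the cited QMSTP work.

```python
import random

def three_phase_search(cost, n_bits, iters=50, seed=0):
    """Illustrative three-phase search loop over bitstrings:
    (1) descent search to a local optimum,
    (2) weak perturbation + descent to explore nearby local optima,
    (3) strong (diversified) perturbation + descent to jump far away."""
    rng = random.Random(seed)

    def descent(x):
        # First-improvement bit-flip descent to a local optimum.
        improved = True
        while improved:
            improved = False
            for i in range(n_bits):
                y = x[:]
                y[i] ^= 1
                if cost(y) < cost(x):
                    x, improved = y, True
        return x

    def perturb(x, strength):
        # Flip `strength` randomly chosen bits.
        y = x[:]
        for i in rng.sample(range(n_bits), strength):
            y[i] ^= 1
        return y

    best = descent([rng.randint(0, 1) for _ in range(n_bits)])
    for _ in range(iters):
        # Phase 2: local-optima exploring (weak kick).
        cand = descent(perturb(best, 1))
        if cost(cand) < cost(best):
            best = cand
        else:
            # Phase 3: diversified perturbation (strong kick).
            kicked = descent(perturb(best, max(1, n_bits // 2)))
            if cost(kicked) < cost(best):
                best = kicked
    return best
```

On a trivial cost such as the number of set bits, the loop drives the solution to the all-zero optimum.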
1608.03694 | 2507470618 | In this paper, we focus on the problem of inferring the underlying reward function of an expert given demonstrations, which is often referred to as inverse reinforcement learning (IRL). In particular, we propose a model-free density-based IRL algorithm, named density matching reward learning (DMRL), which does not require model dynamics. The performance of DMRL is analyzed theoretically and the sample complexity is derived. Furthermore, the proposed DMRL is extended to handle nonlinear IRL problems by assuming that the reward function is in the reproducing kernel Hilbert space (RKHS) and kernel DMRL (KDMRL) is proposed. The parameters for KDMRL can be computed analytically, which greatly reduces the computation time. The performance of KDMRL is extensively evaluated in two sets of experiments: grid world and track driving experiments. In grid world experiments, the proposed KDMRL method is compared with both model-based and model-free IRL methods and shows superior performance on a nonlinear reward setting and competitive performance on a linear reward setting in terms of expected value differences. Then we move on to more realistic experiments of learning different driving styles for autonomous navigation in complex and dynamic tracks using KDMRL and receding horizon control. | A margin-based approach aims to maximize the margin between the value of the expert's policy and that of other possible policies. The IRL problem was first introduced by Ng and Russell @cite_23 , where the parameters of the reward function are optimized by maximizing the margin between randomly sampled reward functions. Abbeel and Ng @cite_32 proposed an apprenticeship learning algorithm, which minimizes the difference between the empirical distribution of an expert and the distribution induced by the reward function. 
@cite_25 proposed the maximum margin planning (MMP) algorithm, where Bellman-flow constraints are utilized to maximize the margin between the expert's policy and all other policies. | {
"cite_N": [
"@cite_25",
"@cite_32",
"@cite_23"
],
"mid": [
"2169498096",
"1999874108",
"2061562262"
],
"abstract": [
"Imitation learning of sequential, goal-directed behavior by standard supervised techniques is often difficult. We frame learning such behaviors as a maximum margin structured prediction problem over a space of policies. In this approach, we learn mappings from features to cost so an optimal policy in an MDP with these cost mimics the expert's behavior. Further, we demonstrate a simple, provably efficient approach to structured maximum margin learning, based on the subgradient method, that leverages existing fast algorithms for inference. Although the technique is general, it is particularly relevant in problems where A* and dynamic programming approaches make learning policies tractable in problems beyond the limitations of a QP formulation. We demonstrate our approach applied to route planning for outdoor mobile robots, where the behavior a designer wishes a planner to execute is often clear, while specifying cost functions that engender this behavior is a much more difficult task.",
"We consider learning in a Markov decision process where we are not explicitly given a reward function, but where instead we can observe an expert demonstrating the task that we want to learn to perform. This setting is useful in applications (such as the task of driving) where it may be difficult to write down an explicit reward function specifying exactly how different desiderata should be traded off. We think of the expert as trying to maximize a reward function that is expressible as a linear combination of known features, and give an algorithm for learning the task demonstrated by the expert. Our algorithm is based on using \"inverse reinforcement learning\" to try to recover the unknown reward function. We show that our algorithm terminates in a small number of iterations, and that even though we may never recover the expert's reward function, the policy output by the algorithm will attain performance close to that of the expert, where here performance is measured with respect to the expert's unknown reward function.",
"Objective—To evaluate the pharmacokinetics of a novel commercial formulation of ivermectin after administration to goats. Animals—6 healthy adult goats. Procedure—Ivermectin (200 μg kg) was initially administered IV to each goat, and plasma samples were obtained for 36 days. After a washout period of 3 weeks, each goat received a novel commercial formulation of ivermectin (200 μg kg) by SC injection. Plasma samples were then obtained for 42 days. Drug concentrations were quantified by use of high-performance liquid chromatography with fluorescence detection. Results—Pharmacokinetics of ivermectin after IV administration were best described by a 2-compartment open model; values for main compartmental variables included volume of distribution at a steady state (9.94 L kg), clearance (1.54 L kg d), and area under the plasma concentration-time curve (AUC; 143 [ng•d] mL). Values for the noncompartmental variables included mean residence time (7.37 days), AUC (153 [ng•d] mL), and clearance (1.43 L kg d). After ..."
]
} |
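The quantity that Abbeel and Ng's apprenticeship learning matches between the expert and the learner, discounted feature expectations, can be estimated directly from trajectories. A minimal sketch follows; the function name, the `phi` feature map, and the toy one-hot features in the usage are illustrative assumptions.

```python
import numpy as np

def feature_expectations(trajectories, phi, gamma=0.9):
    """Empirical discounted feature expectations
    mu = (1/m) * sum_over_trajectories sum_t gamma^t * phi(s_t),
    the statistic apprenticeship learning matches between the
    expert's demonstrations and the learned policy."""
    mu = None
    for traj in trajectories:
        disc = sum((gamma ** t) * phi(s) for t, s in enumerate(traj))
        mu = disc if mu is None else mu + disc
    return mu / len(trajectories)
```

For example, with one-hot state features `phi = lambda s: np.eye(2)[s]` and gamma 0.5, the trajectories `[0, 1]` and `[0, 0]` give feature expectations `[1.25, 0.25]`.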
1608.03694 | 2507470618 | In this paper, we focus on the problem of inferring the underlying reward function of an expert given demonstrations, which is often referred to as inverse reinforcement learning (IRL). In particular, we propose a model-free density-based IRL algorithm, named density matching reward learning (DMRL), which does not require model dynamics. The performance of DMRL is analyzed theoretically and the sample complexity is derived. Furthermore, the proposed DMRL is extended to handle nonlinear IRL problems by assuming that the reward function is in the reproducing kernel Hilbert space (RKHS) and kernel DMRL (KDMRL) is proposed. The parameters for KDMRL can be computed analytically, which greatly reduces the computation time. The performance of KDMRL is extensively evaluated in two sets of experiments: grid world and track driving experiments. In grid world experiments, the proposed KDMRL method is compared with both model-based and model-free IRL methods and shows superior performance on a nonlinear reward setting and competitive performance on a linear reward setting in terms of expected value differences. Then we move on to more realistic experiments of learning different driving styles for autonomous navigation in complex and dynamic tracks using KDMRL and receding horizon control. | On the other hand, probabilistic model-based methods aim to find the reward function that maximizes the likelihood or the posterior distribution. @cite_2 proposed maximum entropy IRL (MaxEnt), which uses the principle of maximum entropy to alleviate the inherent ambiguity of the reward function. In @cite_20 , Bayesian IRL (BIRL) was proposed, where a Bayesian probabilistic model is defined and effectively solved using a Metropolis-Hastings sampling method. Levine and Koltun @cite_14 proposed Gaussian process inverse reinforcement learning (GPIRL), where a nonlinear reward function is effectively modeled by a sparse Gaussian process with inducing inputs. | {
"cite_N": [
"@cite_14",
"@cite_20",
"@cite_2"
],
"mid": [
"2117675763",
"1591675293",
"2098774185"
],
"abstract": [
"We present a probabilistic algorithm for nonlinear inverse reinforcement learning. The goal of inverse reinforcement learning is to learn the reward function in a Markov decision process from expert demonstrations. While most prior inverse reinforcement learning algorithms represent the reward as a linear combination of a set of features, we use Gaussian processes to learn the reward as a nonlinear function, while also determining the relevance of each feature to the expert's policy. Our probabilistic algorithm allows complex behaviors to be captured from suboptimal stochastic demonstrations, while automatically balancing the simplicity of the learned reward structure against its consistency with the observed actions.",
"Inverse Reinforcement Learning (IRL) is the problem of learning the reward function underlying a Markov Decision Process given the dynamics of the system and the behaviour of an expert. IRL is motivated by situations where knowledge of the rewards is a goal by itself (as in preference elicitation) and by the task of apprenticeship learning (learning policies from an expert). In this paper we show how to combine prior knowledge and evidence from the expert's actions to derive a probability distribution over the space of reward functions. We present efficient algorithms that find solutions for the reward learning and apprenticeship learning tasks that generalize well over these distributions. Experimental results show strong improvement for our methods over previous heuristic-based approaches.",
"Recent research has shown the benefit of framing problems of imitation learning as solutions to Markov Decision Problems. This approach reduces learning to the problem of recovering a utility function that makes the behavior induced by a near-optimal policy closely mimic demonstrated behavior. In this work, we develop a probabilistic approach based on the principle of maximum entropy. Our approach provides a well-defined, globally normalized distribution over decision sequences, while providing the same performance guarantees as existing methods. We develop our technique in the context of modeling real-world navigation and driving behaviors where collected data is inherently noisy and imperfect. Our probabilistic approach enables modeling of route preferences as well as a powerful new approach to inferring destinations and routes based on partial trajectories."
]
} |
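The MaxEnt formulation summarized in the row above has a simple gradient: expert feature counts minus the feature expectation under the Boltzmann distribution over trajectories. The sketch below assumes the simplified case of a finite candidate-trajectory set (the cited work instead normalizes over all paths via dynamic programming); names and shapes are illustrative.

```python
import numpy as np

def maxent_gradient(theta, traj_feats, expert_feats):
    """Gradient of the MaxEnt IRL log-likelihood when the partition
    function ranges over a finite set of candidate trajectories:
    expert feature counts minus the feature expectation under
    P(tau) proportional to exp(theta . f(tau))."""
    scores = traj_feats @ theta
    p = np.exp(scores - scores.max())   # numerically stable softmax
    p /= p.sum()
    return expert_feats - p @ traj_feats
```

At `theta = 0` the trajectory distribution is uniform, so the gradient simply points from the average candidate features toward the expert's features.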
1608.03694 | 2507470618 | In this paper, we focus on the problem of inferring the underlying reward function of an expert given demonstrations, which is often referred to as inverse reinforcement learning (IRL). In particular, we propose a model-free density-based IRL algorithm, named density matching reward learning (DMRL), which does not require model dynamics. The performance of DMRL is analyzed theoretically and the sample complexity is derived. Furthermore, the proposed DMRL is extended to handle nonlinear IRL problems by assuming that the reward function is in the reproducing kernel Hilbert space (RKHS) and kernel DMRL (KDMRL) is proposed. The parameters for KDMRL can be computed analytically, which greatly reduces the computation time. The performance of KDMRL is extensively evaluated in two sets of experiments: grid world and track driving experiments. In grid world experiments, the proposed KDMRL method is compared with both model-based and model-free IRL methods and shows superior performance on a nonlinear reward setting and competitive performance on a linear reward setting in terms of expected value differences. Then we move on to more realistic experiments of learning different driving styles for autonomous navigation in complex and dynamic tracks using KDMRL and receding horizon control. | Recently, several model-free IRL methods have been proposed. In @cite_22 , a relative entropy IRL (RelEnt) method was proposed, which minimizes the relative entropy between the empirical distribution of demonstrations from the baseline policy and the distribution under the learned policy. Structured classification based IRL @cite_27 was proposed by formulating the IRL problem as a structured multi-class classification problem, based on the observation that the parameters of the action-value function are shared with those of the reward function. 
In @cite_19 , a model-free IRL method was proposed that simultaneously estimates the reward and the dynamics, where the real transition model (dynamics) is separated from the agent's belief transition model to handle previously unseen states. @cite_13 proposed guided cost learning, which extends maximum entropy IRL to a model-free setting based on an importance-sampling approximation. In this paper, we propose a model-free IRL method that first computes the empirical state-action distribution of an expert and then finds the reward function that matches this density function. | {
"cite_N": [
"@cite_19",
"@cite_27",
"@cite_13",
"@cite_22"
],
"mid": [
"2338865890",
"",
"2290104316",
"2133068870"
],
"abstract": [
"Inverse Reinforcement Learning (IRL) describes the problem of learning an unknown reward function of a Markov Decision Process (MDP) from observed behavior of an agent. Since the agent's behavior originates in its policy and MDP policies depend on both the stochastic system dynamics as well as the reward function, the solution of the inverse problem is significantly influenced by both. Current IRL approaches assume that if the transition model is unknown, additional samples from the system's dynamics are accessible, or the observed behavior provides enough samples of the system's dynamics to solve the inverse problem accurately. These assumptions are often not satisfied. To overcome this, we present a gradient-based IRL approach that simultaneously estimates the system's dynamics. By solving the combined optimization problem, our approach takes into account the bias of the demonstrations, which stems from the generating policy. The evaluation on a synthetic MDP and a transfer learning task shows improvements regarding the sample efficiency as well as the accuracy of the estimated reward functions and transition models.",
"",
"Reinforcement learning can acquire complex behaviors from high-level specifications. However, defining a cost function that can be optimized effectively and encodes the correct task is challenging in practice. We explore how inverse optimal control (IOC) can be used to learn behaviors from demonstrations, with applications to torque control of high-dimensional robotic systems. Our method addresses two key challenges in inverse optimal control: first, the need for informative features and effective regularization to impose structure on the cost, and second, the difficulty of learning the cost function under unknown dynamics for high-dimensional continuous systems. To address the former challenge, we present an algorithm capable of learning arbitrary nonlinear cost functions, such as neural networks, without meticulous feature engineering. To address the latter challenge, we formulate an efficient sample-based approximation for MaxEnt IOC. We evaluate our method on a series of simulated tasks and real-world robotic manipulation problems, demonstrating substantial improvement over prior methods both in terms of task complexity and sample efficiency.",
"We consider the problem of imitation learning where the examples, demonstrated by an expert, cover only a small part of a large state space. Inverse Reinforcement Learning (IRL) provides an ecient tool for generalizing the demonstration, based on the assumption that the expert is optimally acting in a Markov Decision Process (MDP). Most of the past work on IRL requires that a (near)optimal policy can be computed for dierent reward functions. However, this requirement can hardly be satised in systems with a large, or continuous, state space. In this paper, we propose a model-free IRL algorithm, where the relative entropy between the empirical distribution of the state-action trajectories under a baseline policy and their distribution under the learned policy is minimized by stochastic gradient descent. We compare this new approach to well-known IRL algorithms using learned MDP models. Empirical results on simulated car racing, gridworld and ball-in-a-cup problems show that our approach is able to learn good policies from a small number of demonstrations."
]
} |
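The density-matching idea stated at the end of the related-work paragraph above can be sketched in a tabular setting. Taking the reward proportional to the normalized empirical state-action distribution is one simple instantiation of matching under a unit-norm constraint, not the exact DMRL/KDMRL estimator; the function name and toy demonstrations are assumptions.

```python
import numpy as np

def density_matching_reward(demos, n_states, n_actions):
    """Tabular sketch: estimate the expert's empirical state-action
    distribution mu from (state, action) pairs, then return the
    unit-norm reward proportional to mu (the maximizer of the inner
    product <r, mu> subject to ||r|| <= 1)."""
    counts = np.zeros((n_states, n_actions))
    for s, a in demos:
        counts[s, a] += 1.0
    mu = counts / counts.sum()      # empirical state-action distribution
    return mu / np.linalg.norm(mu)  # unit-norm reward aligned with mu
```

State-action pairs the expert visits more often receive proportionally larger reward, which is the intuition behind matching the reward to the expert's density.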
1608.03803 | 2509344220 | This paper studies how word embeddings trained on the British National Corpus interact with part of speech boundaries. Our work targets the Universal PoS tag set, which is currently actively being used for annotation of a range of languages. We experiment with training classifiers for predicting PoS tags for words based on their embeddings. The results show that the information about PoS affiliation contained in the distributional vectors allows us to discover groups of words with distributional patterns that differ from other words of the same part of speech. This data often reveals hidden inconsistencies of the annotation process or guidelines. At the same time, it supports the notion of 'soft' or 'graded' part of speech affiliations. Finally, we show that information about PoS is distributed among dozens of vector components, not limited to only one or two features. | Various types of distributional information have also played an important role in previous work done on the related problem of unsupervised PoS acquisition. As discussed in , we can separate at least three main directions within this line of work: approaches @cite_17 @cite_3 @cite_13 that start out from a dictionary providing possible tags for different words; approaches @cite_14 @cite_8 based on a small number of prototypical examples for each PoS; approaches that are completely unsupervised and make no use of prior knowledge. This is also the main focus of the comparative survey provided by @cite_8 . | {
"cite_N": [
"@cite_14",
"@cite_8",
"@cite_3",
"@cite_13",
"@cite_17"
],
"mid": [
"2078058974",
"2108622839",
"2251348291",
"2140460010",
"2112861996"
],
"abstract": [
"We investigate prototype-driven learning for primarily unsupervised sequence modeling. Prior knowledge is specified declaratively, by providing a few canonical examples of each target annotation label. This sparse prototype information is then propagated across a corpus using distributional similarity features in a log-linear generative model. On part-of-speech induction in English and Chinese, as well as an information extraction task, prototype features provide substantial error rate reductions over competitive baselines and outperform previous work. For example, we can achieve an English part-of-speech tagging accuracy of 80.5 using only three examples of each tag and no dictionary constraints. We also compare to semi-supervised learning and discuss the system's error trends.",
"Part-of-speech (POS) induction is one of the most popular tasks in research on unsupervised NLP. Many different methods have been proposed, yet comparisons are difficult to make since there is little consensus on evaluation framework, and many papers evaluate against only one or two competitor systems. Here we evaluate seven different POS induction systems spanning nearly 20 years of work, using a variety of measures. We show that some of the oldest (and simplest) systems stand up surprisingly well against more recent approaches. Since most of these systems were developed and tested using data from the WSJ corpus, we compare their generalization abilities by testing on both WSJ and the multilingual Multext-East corpus. Finally, we introduce the idea of evaluating systems based on their ability to produce cluster prototypes that are useful as input to a prototype-driven learner. In most cases, the prototype-driven learner outperforms the unsupervised system used to initialize it, yielding state-of-the-art results on WSJ and improvements on non-English corpora.",
"We propose a neural network approach to benefit from the non-linearity of corpuswide statistics for part-of-speech (POS) tagging. We investigated several types of corpus-wide information for the words, such as word embeddings and POS tag distributions. Since these statistics are encoded as dense continuous features, it is not trivial to combine these features comparing with sparse discrete features. Our tagger is designed as a combination of a linear model for discrete features and a feed-forward neural network that captures the non-linear interactions among the continuous features. By using several recent advances in the activation functions for neural networks, the proposed method marks new state-of-the-art accuracies for English POS tagging tasks.",
"We describe a novel method for the task of unsupervised POS tagging with a dictionary, one that uses integer programming to explicitly search for the smallest model that explains the data, and then uses EM to set parameter values. We evaluate our method on a standard test corpus using different standard tagsets (a 45-tagset as well as a smaller 17-tagset), and show that our approach performs better than existing state-of-the-art systems in both settings.",
"In this paper we present some experiments on the use of a probabilistic model to tag English text, i.e. to assign to each word the correct tag (part of speech) in the context of the sentence. The main novelty of these experiments is the use of untagged text in the training of the model. We have used a simple triclass Markov model and are looking for the best way to estimate the parameters of this model, depending on the kind and amount of training data provided. Two approaches in particular are compared and combined:using text that has been tagged by hand and computing relative frequency counts,using text without tags and training the model as a hidden Markov process, according to a Maximum Likelihood principle.Experminents show that the best training is obtained by using as much tagged text as possible. They also show that Maximum Likelihood training, the procedure that is routinely used to estimate hidden Markov models parameters from training data, will not necessarily improve the tagging accuracy. In fact, it will generally degrade this accuracy, except when only a limited amount of hand-tagged text is available."
]
} |
1608.03803 | 2509344220 | This paper studies how word embeddings trained on the British National Corpus interact with part of speech boundaries. Our work targets the Universal PoS tag set, which is currently actively being used for annotation of a range of languages. We experiment with training classifiers for predicting PoS tags for words based on their embeddings. The results show that the information about PoS affiliation contained in the distributional vectors allows us to discover groups of words with distributional patterns that differ from other words of the same part of speech. This data often reveals hidden inconsistencies of the annotation process or guidelines. At the same time, it supports the notion of 'soft' or 'graded' part of speech affiliations. Finally, we show that information about PoS is distributed among dozens of vector components, not limited to only one or two features. | Work on PoS induction has a long history -- including the use of distributional methods -- going back at least to , and recent work has demonstrated that word embeddings can be useful for this task as well @cite_15 @cite_7 @cite_16 . | {
"cite_N": [
"@cite_15",
"@cite_16",
"@cite_7"
],
"mid": [
"1831478036",
"2251830157",
"1801581093"
],
"abstract": [
"We investigate paradigmatic representations of word context in the domain of unsupervised syntactic category acquisition. Paradigmatic representations of word context are based on potential substitutes of a word in contrast to syntagmatic representations based on properties of neighboring words. We compare a bigram based baseline model with several paradigmatic models and demonstrate significant gains in accuracy. Our best model based on Euclidean co-occurrence embedding combines the paradigmatic context representation with morphological and orthographic features and achieves 80 many-to-one accuracy on a 45-tag 1M word corpus.",
"We introduce an extension to the bag-ofwords model for learning words representations that take into account both syntactic and semantic properties within language. This is done by employing an attention model that finds within the contextual words, the words that are relevant for each prediction. The general intuition of our model is that some words are only relevant for predicting local context (e.g. function words), while other words are more suited for determining global context, such as the topic of the document. Experiments performed on both semantically and syntactically oriented tasks show gains using our model over the existing bag of words model. Furthermore, compared to other more sophisticated models, our model scales better as we increase the size of the context of the model.",
"Unsupervised word embeddings have been shown to be valuable as features in supervised learning problems; however, their role in unsupervised problems has been less thoroughly explored. In this paper, we show that embeddings can likewise add value to the problem of unsupervised POS induction. In two representative models of POS induction, we replace multinomial distributions over the vocabulary with multivariate Gaussian distributions over word embeddings and observe consistent improvements in eight languages. We also analyze the effect of various choices while inducing word embeddings on \"downstream\" POS induction results."
]
} |
1608.03803 | 2509344220 | This paper studies how word embeddings trained on the British National Corpus interact with part of speech boundaries. Our work targets the Universal PoS tag set, which is currently actively being used for annotation of a range of languages. We experiment with training classifiers for predicting PoS tags for words based on their embeddings. The results show that the information about PoS affiliation contained in the distributional vectors allows us to discover groups of words with distributional patterns that differ from other words of the same part of speech. This data often reveals hidden inconsistencies of the annotation process or guidelines. At the same time, it supports the notion of 'soft' or 'graded' part of speech affiliations. Finally, we show that information about PoS is distributed among dozens of vector components, not limited to only one or two features. | It seems clear that one can infer data about PoS classes of words from distributional models in general, including embedding models. As a next step then, these models could also prove useful for deeper analysis of part of speech boundaries, leading to discovery of separate words or whole classes that tend to behave in non-typical ways. Discovering such cases is one possible way to improve the performance of existing automatic PoS taggers @cite_1 . These 'outliers' may signal the necessity to revise the annotation strategy or classification system in general. Section describes the process of constructing typical PoS clusters and detecting words that belong to a cluster different from their traditional annotation. | {
"cite_N": [
"@cite_1"
],
"mid": [
"2170206653"
],
"abstract": [
"This paper presents an algorithm for tagging words whose part-of-speech properties are unknown. Unlike previous work, the algorithm categorizes word tokens in context instead of word types. The algorithm is evaluated on the Brown Corpus."
]
} |
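The cluster-then-flag procedure described in the row above (build typical PoS clusters, then detect words whose embedding sits in a cluster different from their annotated tag) can be sketched with nearest-centroid assignment. The 2-D toy vectors and tag names in the usage are illustrative assumptions, not BNC data.

```python
import numpy as np

def pos_outliers(vectors, tags):
    """Flag words whose embedding lies closer to the centroid of a
    different PoS tag than to the centroid of their annotated tag.
    Returns (index, annotated_tag, nearest_tag) triples."""
    tagset = sorted(set(tags))
    centroids = {t: np.mean([v for v, g in zip(vectors, tags) if g == t], axis=0)
                 for t in tagset}
    flagged = []
    for i, (v, t) in enumerate(zip(vectors, tags)):
        nearest = min(tagset,
                      key=lambda c: np.linalg.norm(np.asarray(v) - centroids[c]))
        if nearest != t:
            flagged.append((i, t, nearest))
    return flagged
```

With two NOUN-like points near the origin, two VERB-like points near (5, 5), and one point annotated NOUN but embedded among the verbs, only the last point is flagged.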
1608.03628 | 2514213613 | We consider a collection of @math points in @math measured at @math times, which are encoded in an @math data tensor. Our objective is to define a single embedding of the @math points into Euclidean space which summarizes the geometry as described by the data tensor. In the case of a fixed data set, diffusion maps (and related graph Laplacian methods) define such an embedding via the eigenfunctions of a diffusion operator constructed on the data. Given a sequence of @math measurements of @math points, we construct a corresponding sequence of diffusion operators and study their product. Via this product, we introduce the notion of time coupled diffusion distance and time coupled diffusion maps which have natural geometric and probabilistic interpretations. To frame our method in the context of manifold learning, we model evolving data as samples from an underlying manifold with a time dependent metric, and we describe a connection of our method to the heat equation over a manifold with time dependent metric. | Our work extends the current diffusion maps literature by considering evolving dynamics rather than enhancing the analysis of a fixed system. We consider an @math data tensor describing a system of @math points in @math over @math times, and seek to construct a manifold model and diffusion geometry framework for this setting. A similar framework is considered by Banisch and Koltai in @cite_10 . However, rather than studying the product operator, they study the sum of the operators @math and prove a relation with the dynamic Laplacian. | {
"cite_N": [
"@cite_10"
],
"mid": [
"2299660775"
],
"abstract": [
"Dynamical systems often exhibit the emergence of long-lived coherent sets, which are regions in state space that keep their geometric integrity to a high extent and thus play an important role in transport. In this article, we provide a method for extracting coherent sets from possibly sparse Lagrangian trajectory data. Our method can be seen as an extension of diffusion maps to trajectory space, and it allows us to construct \"dynamical coordinates\" which reveal the intrinsic low-dimensional organization of the data. The only a priori knowledge about the dynamics that we require is a locally valid notion of distance, which renders our method highly suitable for automated data analysis. We show convergence of our method to the analytic transfer operator framework of coherence in the infinite data limit, and illustrate its potential on several two- and three-dimensional examples as well as real world data."
]
} |
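The product construction discussed in the row above (as opposed to the operator sum studied by Banisch and Koltai) can be sketched by building a row-stochastic diffusion operator per time slice and composing them. The Gaussian kernel, the bandwidth `eps`, and the toy data are assumptions; a product of row-stochastic matrices remains row-stochastic.

```python
import numpy as np

def diffusion_operator(X, eps=1.0):
    """Row-stochastic diffusion operator for one time slice of m points,
    built from a Gaussian kernel on pairwise squared distances."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    K = np.exp(-d2 / eps)
    return K / K.sum(axis=1, keepdims=True)

def time_coupled_operator(slices, eps=1.0):
    """Compose the per-time operators as P_n ... P_1 (later slices
    applied on the left), the product studied for time coupled
    diffusion distances."""
    P = None
    for X in slices:
        Pt = diffusion_operator(np.asarray(X), eps)
        P = Pt if P is None else Pt @ P
    return P
```

Each row of the resulting product is a probability distribution describing where mass starting at a point ends up after diffusing through all time slices in order.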
1608.03611 | 2514846057 | In this work, we present a new algorithm for maximizing a non-monotone submodular function subject to a general constraint. Our algorithm finds an approximate fractional solution for maximizing the multilinear extension of the function over a down-closed polytope. The approximation guarantee is 0.372 and it is the first improvement over the 1/e approximation achieved by the unified Continuous Greedy algorithm [, FOCS 2011]. | Submodular maximization problems are well-studied and it is hard to do justice to all previous literature on this subject. Starting with the seminal work @cite_14 , the classical approaches to these problems are largely combinatorial and based on greedy and local search algorithms. Over the years, with increasing sophistication, this direction has led to many tight results such as the algorithm of @cite_4 for a knapsack constraint, and the current best results for constraints such as multiple matroid constraints @cite_15 . In the last few years, another approach emerged @cite_0 that follows the popular paradigm in approximation algorithms of optimizing a continuous relaxation and rounding the resulting fractional solution. A key difficulty that separates the submodular setting from the classical setting is that even finding a good fractional solution may be quite challenging, and in particular it is @math -hard to solve the continuous relaxation for maximizing submodular functions that is based on the multilinear extension. Thus, a line of work has been developed to optimize this relaxation @cite_0 @cite_17 @cite_7 @cite_12 culminating in the work @cite_12 , which we extend here. | {
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_7",
"@cite_0",
"@cite_15",
"@cite_12",
"@cite_17"
],
"mid": [
"1997783781",
"2033885045",
"",
"2621717961",
"2802297243",
"2045492898",
"2026338082"
],
"abstract": [
"A real-valued function z whose domain is all of the subsets of N = 1,..., n is said to be submodular if zS + zT ≥ zS ∪ T + zS ∩ T, ∀S, T ⊆ N, and nondecreasing if zS ≤ zT, ∀S ⊂ T ⊆ N. We consider the problem maxS⊂N zS: |S| ≤ K, z submodular and nondecreasing, zO = 0 . Many combinatorial optimization problems can be posed in this framework. For example, a well-known location problem and the maximization of certain boolean polynomials are in this class. We present a family of algorithms that involve the partial enumeration of all sets of cardinality q and then a greedy selection of the remaining elements, q = 0,..., K-1. For fixed K, the qth member of this family requires Onq+1 computations and is guaranteed to achieve at least @math Our main result is that this is the best performance guarantee that can be obtained by any algorithm whose number of computations does not exceed Onq+1.",
"In this paper, we obtain a (1 - e^{-1})-approximation algorithm for maximizing a nondecreasing submodular set function subject to a knapsack constraint. This algorithm requires O(n^5) function value computations.",
"",
"",
"Submodular function maximization is a central problem in combinatorial optimization, generalizing many important NP-hard problems including max cut in digraphs, graphs, and hypergraphs; certain constraint satisfaction problems; maximum entropy sampling; and maximum facility location problems. Our main result is that for any k ≥ 2 and any ε > 0, there is a natural local search algorithm that has approximation guarantee of 1/(k + ε) for the problem of maximizing a monotone submodular function subject to k matroid constraints. This improves upon the 1/(k + 1)-approximation of Fisher, Nemhauser, and Wolsey obtained in 1978 [Fisher, M., G. Nemhauser, L. Wolsey. 1978. An analysis of approximations for maximizing submodular set functions---II. Math. Programming Stud. 8 73--87]. Also, our analysis can be applied to the problem of maximizing a linear objective function and even a general nonmonotone submodular function subject to k matroid constraints. We show that, in these cases, the approximation guarantees of our algorithms are 1/(k - 1 + ε) and 1/(k + 1 + 1/(k - 1) + ε), respectively. Our analyses are based on two new exchange properties for matroids. One is a generalization of the classical Rota exchange property for matroid bases, and another is an exchange property for two matroids based on the structure of matroid intersection.",
"The study of combinatorial problems with a submodular objective function has attracted much attention in recent years, and is partly motivated by the importance of such problems to economics, algorithmic game theory and combinatorial optimization. Classical works on these problems are mostly combinatorial in nature. Recently, however, many results based on continuous algorithmic tools have emerged. The main bottleneck of such continuous techniques is how to approximately solve a non-convex relaxation for the submodular problem at hand. Thus, the efficient computation of better fractional solutions immediately implies improved approximations for numerous applications. A simple and elegant method, called "continuous greedy", successfully tackles this issue for monotone submodular objective functions, however, only much more complex tools are known to work for general non-monotone submodular objectives. In this work we present a new unified continuous greedy algorithm which finds approximate fractional solutions for both the non-monotone and monotone cases, and improves on the approximation ratio for many applications. For general non-monotone submodular objective functions, our algorithm achieves an improved approximation ratio of about @math . For monotone submodular objective functions, our algorithm achieves an approximation ratio that depends on the density of the polytope defined by the problem at hand, which is always at least as good as the previously known best approximation ratio of @math . Some notable immediate implications are an improved @math -approximation for maximizing a non-monotone submodular function subject to a matroid or @math -knapsack constraints, and information-theoretic tight approximations for Submodular Max-SAT and Submodular Welfare with @math players, for any number of players @math . 
A framework for submodular optimization problems, called the , was introduced recently by The improved approximation ratio of the unified continuous greedy algorithm implies improved approximation ratios for many problems through this framework. Moreover, via a parameter called , our algorithm merges the relaxation solving and re-normalization steps of the framework, and achieves, for some applications, further improvements. We also describe new monotone balanced contention resolution schemes for various matching, scheduling and packing problems, thus, improving the approximations achieved for these problems via the framework.",
"Submodular function maximization is a central problem in combinatorial optimization, generalizing many important problems including Max Cut in directed/undirected graphs and in hypergraphs, certain constraint satisfaction problems, maximum entropy sampling, and maximum facility location problems. Unlike submodular minimization, submodular maximization is NP-hard. In this paper, we give the first constant-factor approximation algorithm for maximizing any non-negative submodular function subject to multiple matroid or knapsack constraints. We emphasize that our results are for non-monotone submodular functions. In particular, for any constant k, we present a 1/(k+2+1/k+ε)-approximation for the submodular maximization problem under k matroid constraints, and a (1/5 - ε)-approximation algorithm for this problem subject to k knapsack constraints (ε > 0 is any constant). We improve the approximation guarantee of our algorithm to 1/(k+1+1/(k-1)+ε) for k ≥ 2 partition matroid constraints. This idea also gives a 1/(k+ε)-approximation for maximizing a monotone submodular function subject to k ≥ 2 partition matroids, which improves over the previously best known guarantee of 1/(k+1)."
]
} |
1608.03905 | 2513005004 | We propose a document retrieval method for question answering that represents documents and questions as weighted centroids of word embeddings and reranks the retrieved documents with a relaxation of Word Mover's Distance. Using biomedical questions and documents from BIOASQ, we show that our method is competitive with PUBMED. With a top-k approximation, our method is fast, and easily portable to other domains and languages. | The dataset @cite_5 is often used in biomedical information retrieval experiments. It is much smaller (101 queries, approx. 350K documents) than the dataset that we used, but we plan to experiment with it in future work for completeness. | {
"cite_N": [
"@cite_5"
],
"mid": [
"2071664212"
],
"abstract": [
"A series of information retrieval experiments was carried out with a computer installed in a medical practice setting for relatively inexperienced physician end-users. Using a commercial MEDLINE product based on the vector space model, these physicians searched just as effectively as more experienced searchers using Boolean searching. The results of this experiment were subsequently used to create a new large medical test collection, which was used in experiments with the SMART retrieval system to obtain baseline performance data as well as compare SMART with the other searchers."
]
} |
1608.03914 | 2511423172 | In this paper, we explore deep learning methods for estimating when objects were made. Automatic methods for this task could potentially be useful for historians, collectors, or any individual interested in estimating when their artifact was created. Direct applications include large-scale data organization or retrieval. Toward this goal, we utilize features from existing deep networks and also fine-tune new networks for temporal estimation. In addition, we create two new datasets of 67,771 dated clothing items from Flickr and museum collections. Our method outperforms both a color-based baseline and previous state of the art methods for temporal estimation. We also provide several analyses of what our networks have learned, and demonstrate applications to identifying temporal inspiration in fashion collections. | The feature representations learned by these networks on ImageNet data have been shown to generalize well to other image classification tasks @cite_35 as well as related tasks such as object detection @cite_17 @cite_16 , pose estimation and action detection @cite_31 , or fine-grained category detection @cite_21 . Moreover, in a somewhat related task to ours, S. Karayev et al. @cite_14 show that using a pre-trained network @cite_35 as a generic feature extractor produces a better classifier for photo and painting style than hand-crafted features. We are not aware of any prior work on modeling historical visual style using deep-learning based methods. | {
"cite_N": [
"@cite_29",
"@cite_3",
"@cite_2",
"@cite_5",
"@cite_15",
"@cite_34",
"@cite_25"
],
"mid": [
"",
"",
"2108598243",
"2952020226",
"1686810756",
"2950179405",
"2949650786"
],
"abstract": [
"",
"",
"The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond.",
"The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the five years of the challenge, and propose future directions and improvements.",
"In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.",
"We propose a deep convolutional neural network architecture codenamed \"Inception\", which was responsible for setting the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC 2014). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. This was achieved by a carefully crafted design that allows for increasing the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC 2014 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.",
"Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation."
]
} |
1608.03914 | 2511423172 | In this paper, we explore deep learning methods for estimating when objects were made. Automatic methods for this task could potentially be useful for historians, collectors, or any individual interested in estimating when their artifact was created. Direct applications include large-scale data organization or retrieval. Toward this goal, we utilize features from existing deep networks and also fine-tune new networks for temporal estimation. In addition, we create two new datasets of 67,771 dated clothing items from Flickr and museum collections. Our method outperforms both a color-based baseline and previous state of the art methods for temporal estimation. We also provide several analyses of what our networks have learned, and demonstrate applications to identifying temporal inspiration in fashion collections. | The feature representations learned by these networks on ImageNet data have been shown to generalize well to other image classification tasks @cite_35 as well as related tasks such as object detection @cite_17 @cite_16 , pose estimation and action detection @cite_31 , or fine-grained category detection @cite_21 . Moreover, in a somewhat related task to ours, S. Karayev @cite_14 show that using a pre-trained network @cite_35 as a generic feature extractor, produces a better classifier for photo and painting style than hand-crafted features. We are not aware of any prior work on modeling historical visual style using deep-learning based methods. | {
"cite_N": [
"@cite_35",
"@cite_14",
"@cite_21",
"@cite_31",
"@cite_16",
"@cite_17"
],
"mid": [
"2953360861",
"2166242527",
"",
"259216465",
"2949966521",
"2102605133"
],
"abstract": [
"We evaluate whether features extracted from the activation of a deep convolutional network trained in a fully supervised fashion on a large, fixed set of object recognition tasks can be re-purposed to novel generic tasks. Our generic tasks may differ significantly from the originally trained tasks and there may be insufficient labeled or unlabeled data to conventionally train or adapt a deep architecture to the new tasks. We investigate and visualize the semantic clustering of deep convolutional features with respect to a variety of such tasks, including scene recognition, domain adaptation, and fine-grained recognition challenges. We compare the efficacy of relying on various network levels to define a fixed feature, and report novel results that significantly outperform the state-of-the-art on several important vision challenges. We are releasing DeCAF, an open-source implementation of these deep convolutional activation features, along with all associated network parameters to enable vision researchers to be able to conduct experimentation with deep representations across a range of visual concept learning paradigms.",
"The style of an image plays a significant role in how it is viewed, but style has received little attention in computer vision research. We describe an approach to predicting style of images, and perform a thorough evaluation of different image features for these tasks. We find that features learned in a multi-layer network generally perform best -- even when trained with object class (not style) labels. Our large-scale learning methods result in the best published performance on an existing dataset of aesthetic ratings and photographic style annotations. We present two novel datasets: 80K Flickr photographs annotated with 20 curated style labels, and 85K paintings annotated with 25 style genre labels. Our approach shows excellent classification performance on both datasets. We use the learned classifiers to extend traditional tag-based image search to consider stylistic constraints, and demonstrate cross-dataset understanding of style.",
"",
"We present convolutional neural networks for the tasks of keypoint (pose) prediction and action classification of people in unconstrained images. Our approach involves training an R-CNN detector with loss functions depending on the task being tackled. We evaluate our method on the challenging PASCAL VOC dataset and compare it to previous leading approaches. Our method gives state-of-the-art results for keypoint and action prediction. Additionally, we introduce a new dataset for action detection, the task of simultaneously localizing people and classifying their actions, and present results using our approach.",
"Pedestrian detection is a problem of considerable practical interest. Adding to the list of successful applications of deep learning methods to vision, we report state-of-the-art and competitive results on all major pedestrian datasets with a convolutional network model. The model uses a few new twists, such as multi-stage features, connections that skip layers to integrate global shape information with local distinctive motif information, and an unsupervised method based on convolutional sparse coding to pre-train the filters at each stage.",
"Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn."
]
} |
1608.03914 | 2511423172 | In this paper, we explore deep learning methods for estimating when objects were made. Automatic methods for this task could potentially be useful for historians, collectors, or any individual interested in estimating when their artifact was created. Direct applications include large-scale data organization or retrieval. Toward this goal, we utilize features from existing deep networks and also fine-tune new networks for temporal estimation. In addition, we create two new datasets of 67,771 dated clothing items from Flickr and museum collections. Our method outperforms both a color-based baseline and previous state of the art methods for temporal estimation. We also provide several analyses of what our networks have learned, and demonstrate applications to identifying temporal inspiration in fashion collections. | Analysis of CNNs: Unlike hand-crafted features such as SIFT @cite_37 or HOG @cite_36 , the representation learned by CNNs is not obviously interpretable. For many tasks the CNN is used as a black-box algorithm where it is not always clear why the CNN is outperforming previous approaches. Several recent works have attempted to peer into this box, to better understand the representations learned by CNNs. P. Fischer et al. @cite_4 compare the learned representation with SIFT in a descriptor matching task. M. D. Zeiler et al. @cite_12 propose several heuristic visualization techniques for units in the network. J. Long et al. @cite_32 study the effectiveness of convnet activation features for tasks requiring correspondence. Recently, B. Zhou et al. @cite_1 present a technique to visualize learned representations of each unit in the network. Here they focus on a large dataset of scene images and show that object detection is embedded in the network as a result of learning. We use variants of this approach to evaluate what our temporal networks have learned. | {
"cite_N": [
"@cite_37",
"@cite_4",
"@cite_36",
"@cite_1",
"@cite_32",
"@cite_12"
],
"mid": [
"2151103935",
"2128237624",
"2161969291",
"1899185266",
"2950124505",
"2952186574"
],
"abstract": [
"This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.",
"Latest results indicate that features learned via convolutional neural networks outperform previous descriptors on classification tasks by a large margin. It has been shown that these networks still work well when they are applied to datasets or recognition tasks different from those they were trained on. However, descriptors like SIFT are not only used in recognition but also for many correspondence problems that rely on descriptor matching. In this paper we compare features from various layers of convolutional neural nets to standard SIFT descriptors. We consider a network that was trained on ImageNet and another one that was trained without supervision. Surprisingly, convolutional neural networks clearly outperform SIFT on descriptor matching. This paper has been merged with arXiv:1406.6909",
"We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.",
"With the success of new computational architectures for visual processing, such as convolutional neural networks (CNN) and access to image databases with millions of labeled examples (e.g., ImageNet, Places), the state of the art in computer vision is advancing rapidly. One important factor for continued progress is to understand the representations that are learned by the inner layers of these deep architectures. Here we show that object detectors emerge from training CNNs to perform scene classification. As scenes are composed of objects, the CNN for scene classification automatically discovers meaningful objects detectors, representative of the learned scene categories. With object detectors emerging as a result of learning to recognize scenes, our work demonstrates that the same network can perform both scene recognition and object localization in a single forward-pass, without ever having been explicitly taught the notion of objects.",
"Convolutional neural nets (convnets) trained from massive labeled datasets have substantially improved the state-of-the-art in image classification and object detection. However, visual understanding requires establishing correspondence on a finer level than object category. Given their large pooling regions and training from whole-image labels, it is not clear that convnets derive their success from an accurate correspondence model which could be used for precise localization. In this paper, we study the effectiveness of convnet activation features for tasks requiring correspondence. We present evidence that convnet features localize at a much finer scale than their receptive field sizes, that they can be used to perform intraclass alignment as well as conventional hand-engineered features, and that they outperform conventional features in keypoint prediction on objects from PASCAL VOC 2011.",
"Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark. However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we address both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. We also perform an ablation study to discover the performance contribution from different model layers. This enables us to find model architectures that outperform Krizhevsky et al. on the ImageNet classification benchmark. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets."
]
} |
1608.03914 | 2511423172 | In this paper, we explore deep learning methods for estimating when objects were made. Automatic methods for this task could potentially be useful for historians, collectors, or any individual interested in estimating when their artifact was created. Direct applications include large-scale data organization or retrieval. Toward this goal, we utilize features from existing deep networks and also fine-tune new networks for temporal estimation. In addition, we create two new datasets of 67,771 dated clothing items from Flickr and museum collections. Our method outperforms both a color-based baseline and previous state of the art methods for temporal estimation. We also provide several analyses of what our networks have learned, and demonstrate applications to identifying temporal inspiration in fashion collections. | Visual Data Mining: Several visual data mining approaches have been used for tasks such as unsupervised discovery of object categories in image collections @cite_26 @cite_18 @cite_23 @cite_11 , or for finding discriminative parts of actions @cite_8 , cities @cite_13 , or objects @cite_27 . There has also been related work on discovering localized attributes @cite_28 @cite_19 @cite_24 for improved fine-grained recognition. Most of these approaches start from existing hand-crafted feature representations and pre-defined measures of visual similarity, then based on discovered patterns of features in the data, group visual elements into discovered entities. The most relevant work is from @cite_27 , which goes beyond simply detecting recurring visual elements to model the stylistic differences between objects over time or space. Unlike this prior work, we do not use hand-crafted visual representations. Instead, we use representations learned by CNN models and achieve significantly improved performance. Additionally, we analyze what the network has learned, temporally, and in comparison to the existing data mining approach @cite_27 . | {
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_8",
"@cite_28",
"@cite_24",
"@cite_19",
"@cite_27",
"@cite_23",
"@cite_13",
"@cite_11"
],
"mid": [
"2167652126",
"2119474464",
"2055753778",
"1528802670",
"1953590900",
"",
"2171322814",
"",
"2055132753",
"2103658758"
],
"abstract": [
"This paper presents an approach to object discovery in a given unlabeled image set, based on mining repetitive spatial configurations of image contours. Contours that similarly deform from one image to another are viewed as collaborating, or, otherwise, conflicting. This is captured by a graph over all pairs of matching contours, whose maximum a posteriori multicoloring assignment is taken to represent the shapes of discovered objects. Multicoloring is conducted by our new Coordinate Ascent Swendsen-Wang cut (CASW). CASW uses the Metropolis-Hastings (MH) reversible jumps to probabilistically sample graph edges, and color nodes. CASW extends SW cut by introducing a regularization in the posterior of multicoloring assignments that prevents the MH jumps to arrive at trivial solutions. Also, CASW seeks to learn parameters of the posterior via maximizing a lower bound of the MH acceptance rate. This speeds up multicoloring iterations, and facilitates MH jumps from local minima. On benchmark datasets, we outperform all existing approaches to unsupervised object discovery.",
"We present a method to automatically learn object categories from unlabeled images. Each image is represented by an unordered set of local features, and all sets are embedded into a space where they cluster according to their partial-match feature correspondences. After efficiently computing the pairwise affinities between the input images in this space, a spectral clustering technique is used to recover the primary groupings among the images. We introduce an efficient means of refining these groupings according to intra-cluster statistics over the subsets of features selected by the partial matches between the images, and based on an optional, variable amount of user supervision. We compute the consistent subsets of feature correspondences within a grouping to infer category feature masks. The output of the algorithm is a partition of the data into a set of learned categories, and a set of classifiers trained from these ranked partitions that can recognize the categories in novel images.",
"We describe a mid-level approach for action recognition. From an input video, we extract salient spatio-temporal structures by forming clusters of trajectories that serve as candidates for the parts of an action. The assembly of these clusters into an action class is governed by a graphical model that incorporates appearance and motion constraints for the individual parts and pairwise constraints for the spatio-temporal dependencies among them. During training, we estimate the model parameters discriminatively. During classification, we efficiently match the model to a video using discrete optimization. We validate the model's classification ability in standard benchmark datasets and illustrate its potential to support a fine-grained analysis that not only gives a label to a video, but also identifies and localizes its constituent parts.",
"It is common to use domain specific terminology - attributes - to describe the visual appearance of objects. In order to scale the use of these describable visual attributes to a large number of categories, especially those not well studied by psychologists or linguists, it will be necessary to find alternative techniques for identifying attribute vocabularies and for learning to recognize attributes without hand labeled training data. We demonstrate that it is possible to accomplish both these tasks automatically by mining text and image data sampled from the Internet. The proposed approach also characterizes attributes according to their visual representation: global or local, and type: color, texture, or shape. This work focuses on discovering attributes and their visual appearance, and is as agnostic as possible about the textual description.",
"We present images with binary codes in a way that balances discrimination and learnability of the codes. In our method, each image claims its own code in a way that maintains discrimination while being predictable from visual data. Category memberships are usually good proxies for visual similarity but should not be enforced as a hard constraint. Our method learns codes that maximize separability of categories unless there is strong visual evidence against it. Simple linear SVMs can achieve state-of-the-art results with our short codes. In fact, our method produces state-of-the-art results on Caltech256 with only 128-dimensional bit vectors and outperforms state of the art by using longer codes. We also evaluate our method on ImageNet and show that our method outperforms state-of-the-art binary code methods on this large scale dataset. Lastly, our codes can discover a discriminative set of attributes.",
"",
"We present a weakly-supervised visual data mining approach that discovers connections between recurring mid-level visual elements in historic (temporal) and geographic (spatial) image collections, and attempts to capture the underlying visual style. In contrast to existing discovery methods that mine for patterns that remain visually consistent throughout the dataset, our goal is to discover visual elements whose appearance changes due to change in time or location; i.e., exhibit consistent stylistic variations across the label space (date or geo-location). To discover these elements, we first identify groups of patches that are style-sensitive. We then incrementally build correspondences to find the same element across the entire dataset. Finally, we train style-aware regressors that model each element's range of stylistic differences. We apply our approach to date and geo-location prediction and show substantial improvement over several baselines that do not model visual style. We also demonstrate the method's effectiveness on the related task of fine-grained classification.",
"",
"Given a large repository of geotagged imagery, we seek to automatically find visual elements, e.g., windows, balconies, and street signs, that are most distinctive for a certain geo-spatial area, for example the city of Paris. This is a tremendously difficult task as the visual features distinguishing architectural elements of different places can be very subtle. In addition, we face a hard search problem: given all possible patches in all images, which of them are both frequently occurring and geographically informative? To address these issues, we propose to use a discriminative clustering approach able to take into account the weak geographic supervision. We show that geographically representative image elements can be discovered automatically from Google Street View imagery in a discriminative manner. We demonstrate that these elements are visually interpretable and perceptually geo-informative. The discovered visual elements can also support a variety of computational geography tasks, such as mapping architectural correspondences and influences within and across cities, finding representative elements at different geo-spatial scales, and geographically-informed image retrieval.",
"We seek to discover the object categories depicted in a set of unlabelled images. We achieve this using a model developed in the statistical text literature: probabilistic latent semantic analysis (pLSA). In text analysis, this is used to discover topics in a corpus using the bag-of-words document representation. Here we treat object categories as topics, so that an image containing instances of several categories is modeled as a mixture of topics. The model is applied to images by using a visual analogue of a word, formed by vector quantizing SIFT-like region descriptors. The topic discovery approach successfully translates to the visual domain: for a small set of objects, we show that both the object categories and their approximate spatial layout are found without supervision. Performance of this unsupervised method is compared to the supervised approach of (2003) on a set of unseen images containing only one object per image. We also extend the bag-of-words vocabulary to include 'doublets' which encode spatially local co-occurring regions. It is demonstrated that this extended vocabulary gives a cleaner image segmentation. Finally, the classification and segmentation methods are applied to a set of images containing multiple objects per image. These results demonstrate that we can successfully build object class models from an unsupervised analysis of images."
]
} |
1608.03932 | 2951242168 | Human pose estimation (i.e., locating the body parts/joints of a person) is a fundamental problem in human-computer interaction and multimedia applications. Significant progress has been made based on the development of depth sensors, i.e., accessible human pose prediction from still depth images [32]. However, most of the existing approaches to this problem involve several components/models that are independently designed and optimized, leading to suboptimal performances. In this paper, we propose a novel inference-embedded multi-task learning framework for predicting human pose from still depth images, which is implemented with a deep architecture of neural networks. Specifically, we handle two cascaded tasks: i) generating the heat (confidence) maps of body parts via a fully convolutional network (FCN); ii) seeking the optimal configuration of body parts based on the detected body part proposals via an inference built-in MatchNet [10], which measures the appearance and geometric kinematic compatibility of body parts and embodies the dynamic programming inference as an extra network layer. These two tasks are jointly optimized. Our extensive experiments show that the proposed deep model significantly improves the accuracy of human pose estimation over several other state-of-the-art methods or SDKs. We also release a large-scale dataset for comparison, which includes 100K depth images under challenging scenarios. | Estimating the human pose from unconstrained color data is an important but challenging problem. Many approaches have been recently developed @cite_8 @cite_28 @cite_5 @cite_39 @cite_25 @cite_29 @cite_17 @cite_1 @cite_0 @cite_57 . @cite_7 introduced an edge-based histogram, named "shape-context", to represent exemplar 2D views of the human body in a variety of different configurations and viewpoints with respect to the camera.
@cite_31 introduced silhouette shape features to infer 3D structure parameters using a probabilistic multi-view shape model. More recently, Pictorial Structures @cite_4 @cite_54 and Deformable Part Models @cite_30 @cite_55 @cite_56 were proposed and have achieved tractable and practical performance in handling large variances of the body parts. As a result, many related models have subsequently been developed. @cite_14 proposed to capture orientation, co-occurrence and spatial relations with a mixture of templates for each body part. In order to introduce richer high-level spatial relationships, @cite_18 presented a hierarchical spatial model that can capture an exponential number of poses with a compact mixture representation on each part. @cite_46 presented a binary conditional random field model to detect human body parts of articulated people in a single color image. | {
"cite_N": [
"@cite_30",
"@cite_29",
"@cite_54",
"@cite_5",
"@cite_18",
"@cite_4",
"@cite_8",
"@cite_39",
"@cite_46",
"@cite_17",
"@cite_7",
"@cite_28",
"@cite_55",
"@cite_56",
"@cite_57",
"@cite_25",
"@cite_14",
"@cite_1",
"@cite_0",
"@cite_31"
],
"mid": [
"2168356304",
"",
"",
"",
"171061157",
"2045798786",
"2026720449",
"",
"2086618980",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"2116735535"
],
"abstract": [
"We describe an object detection system based on mixtures of multiscale deformable part models. Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. Our system relies on new methods for discriminative training with partially labeled data. We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI-SVM in terms of latent variables. A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function.",
"",
"",
"",
"Human pose estimation requires a versatile yet well-constrained spatial model for grouping locally ambiguous parts together to produce a globally consistent hypothesis. Previous works either use local deformable models deviating from a certain template, or use a global mixture representation in the pose space. In this paper, we propose a new hierarchical spatial model that can capture an exponential number of poses with a compact mixture representation on each part. Using latent nodes, it can represent high-order spatial relationship among parts with exact inference. Different from recent hierarchical models that associate each latent node to a mixture of appearance templates (like HoG), we use the hierarchical structure as a pure spatial prior avoiding the large and often confounding appearance space. We verify the effectiveness of this model in three ways. First, samples representing human-like poses can be drawn from our model, showing its ability to capture high-order dependencies of parts. Second, our model achieves accurate reconstruction of unseen poses compared to a nearest neighbor pose representation. Finally, our model achieves state-of-art performance on three challenging datasets, and substantially outperforms recent hierarchical models.",
"The primary problem dealt with in this paper is the following. Given some description of a visual object, find that object in an actual photograph. Part of the solution to this problem is the specification of a descriptive scheme, and a metric on which to base the decision of \"goodness\" of matching or detection.",
"A system capable of analyzing image sequences of human motion is described. The system is structured as a feedback loop between high and low levels: predictions are made at the semantic level and verifications are sought at the image level. The domain of human motion lends itself to a model-driven analysis, and the system includes a detailed model of the human body. All information extracted from the image is interpreted through a constraint network based on the structure of the human model. A constraint propagation operator is defined and its theoretical properties outlined. An implementation of this operator is described, and results of the analysis system for short image sequences are presented.",
"",
"Random forests have been successfully applied to various high level computer vision tasks such as human pose estimation and object segmentation. These models are extremely efficient but work under the assumption that the output variables (such as body part locations or pixel labels) are independent. In this paper, we present a conditional regression forest model for human pose estimation that incorporates dependency relationships between output variables through a global latent variable while still maintaining a low computational cost. We show that the incorporation of a global latent variable encoding torso orientation, or human height, etc., can dramatically increase the accuracy of body joint location prediction. Our model also allows efficient and seamless incorporation of prior knowledge about the problem instance such as the height or orientation of the human subject which can be available from the problem context or via a temporal model. We show that our method significantly outperforms state-of-the-art methods for pose estimation from depth images. The conditional regression model proposed in the paper is general and can be applied to other problems where random forests are used.",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"We present an image-based approach to infer 3D structure parameters using a probabilistic \"shape+structure\" model. The 3D shape of an object class is represented by sets of contours from silhouette views simultaneously observed from multiple calibrated cameras, while structural features of interest on the object are denoted by a number of 3D locations. A prior density over the multiview shape and corresponding structure is constructed with a mixture of probabilistic principal components analyzers. Given a novel set of contours, we infer the unknown structure parameters from the new shape's Bayesian reconstruction. Model matching and parameter inference are done entirely in the image domain and require no explicit 3D construction. Our shape model enables accurate estimation of structure despite segmentation errors or missing views in the input silhouettes, and it works even with only a single input view. Using a training set of thousands of pedestrian images generated from a synthetic model, we can accurately infer the 3D locations of 19 joints on the body based on observed silhouette contours from real images."
]
} |
1608.03932 | 2951242168 | Human pose estimation (i.e., locating the body parts/joints of a person) is a fundamental problem in human-computer interaction and multimedia applications. Significant progress has been made based on the development of depth sensors, i.e., accessible human pose prediction from still depth images [32]. However, most of the existing approaches to this problem involve several components/models that are independently designed and optimized, leading to suboptimal performances. In this paper, we propose a novel inference-embedded multi-task learning framework for predicting human pose from still depth images, which is implemented with a deep architecture of neural networks. Specifically, we handle two cascaded tasks: i) generating the heat (confidence) maps of body parts via a fully convolutional network (FCN); ii) seeking the optimal configuration of body parts based on the detected body part proposals via an inference built-in MatchNet [10], which measures the appearance and geometric kinematic compatibility of body parts and embodies the dynamic programming inference as an extra network layer. These two tasks are jointly optimized. Our extensive experiments show that the proposed deep model significantly improves the accuracy of human pose estimation over several other state-of-the-art methods or SDKs. We also release a large-scale dataset for comparison, which includes 100K depth images under challenging scenarios. | Recently, deep convolutional neural networks (CNNs) @cite_12 with sufficient training data have achieved remarkable success in computer vision. Several approaches have been proposed to employ CNNs to learn feature representation for human pose estimation. @cite_50 formulated the pose estimation as a deep CNN based regression problem towards body joints. @cite_32 proposed a novel hybrid architecture that consists of a convolutional neural network and a Markov Random Field for human pose estimation in the color monocular images.
In @cite_11 , Tompson further proposed an efficient 'position refinement' model that is trained to estimate the joint offset location within a small region of the image. @cite_43 presented a deep graph model, which exploits deep CNNs to learn conditional probabilities for the presence of parts and their spatial relationships within these parts. | {
"cite_N": [
"@cite_32",
"@cite_43",
"@cite_50",
"@cite_12",
"@cite_11"
],
"mid": [
"2952422028",
"2155394491",
"2113325037",
"2204578866",
""
],
"abstract": [
"This paper proposes a new hybrid architecture that consists of a deep Convolutional Network and a Markov Random Field. We show how this architecture is successfully applied to the challenging problem of articulated human pose estimation in monocular images. The architecture can exploit structural domain constraints such as geometric relationships between body joint locations. We show that joint training of these two model paradigms improves performance and allows us to significantly outperform existing state-of-the-art techniques.",
"We present a method for estimating articulated human pose from a single static image based on a graphical model with novel pairwise relations that make adaptive use of local image measurements. More precisely, we specify a graphical model for human pose which exploits the fact that the local image measurements can be used both to detect parts (or joints) and also to predict the spatial relationships between them (Image Dependent Pairwise Relations). These spatial relationships are represented by a mixture model. We use Deep Convolutional Neural Networks (DCNNs) to learn conditional probabilities for the presence of parts and their spatial relationships within image patches. Hence our model combines the representational flexibility of graphical models with the efficiency and statistical power of DCNNs. Our method significantly outperforms the state of the art methods on the LSP and FLIC datasets and also performs very well on the Buffy dataset without any training.",
"We propose a method for human pose estimation based on Deep Neural Networks (DNNs). The pose estimation is formulated as a DNN-based regression problem towards body joints. We present a cascade of such DNN regressors which results in high precision pose estimates. The approach has the advantage of reasoning about pose in a holistic fashion and has a simple but yet powerful formulation which capitalizes on recent advances in Deep Learning. We present a detailed empirical analysis with state-of-art or better performance on four academic benchmarks of diverse real-world images.",
"In this work, we address the human parsing task with a novel Contextualized Convolutional Neural Network (Co-CNN) architecture, which well integrates the cross-layer context, global image-level context, within-super-pixel context and cross-super-pixel neighborhood context into a unified network. Given an input human image, Co-CNN produces the pixel-wise categorization in an end-to-end way. First, the cross-layer context is captured by our basic local-to-global-to-local structure, which hierarchically combines the global semantic structure and the local fine details within the cross-layers. Second, the global image-level label prediction is used as an auxiliary objective in the intermediate layer of the Co-CNN, and its outputs are further used for guiding the feature learning in subsequent convolutional layers to leverage the global image-level context. Finally, to further utilize the local super-pixel contexts, the within-super-pixel smoothing and cross-super-pixel neighbourhood voting are formulated as natural sub-components of the Co-CNN to achieve the local label consistency in both training and testing process. Comprehensive evaluations on two public datasets well demonstrate the significant superiority of our Co-CNN architecture over other state-of-the-arts for human parsing. In particular, the F-1 score on the large dataset [15] reaches 76.95 by Co-CNN, significantly higher than 62.81 and 64.38 by the state-of-the-art algorithms, M-CNN [21] and ATR [15], respectively.",
""
]
} |
1608.03220 | 2949268223 | We study a family of closely-related distributed graph problems, which we call degree splitting, where roughly speaking the objective is to partition (or orient) the edges such that each node's degree is split almost uniformly. Our findings lead to answers for a number of problems, a sampling of which includes: -- We present a @math round deterministic algorithm for @math -edge-coloring, where @math denotes the maximum degree. Modulo the @math factor, this settles one of the long-standing open problems of the area from the 1990's (see e.g. Panconesi and Srinivasan [PODC'92]). Indeed, a weaker requirement of @math -edge-coloring in @math rounds was asked for in the 4th open question in the Distributed Graph Coloring book by Barenboim and Elkin. -- We show that sinkless orientation---i.e., orienting edges such that each node has at least one outgoing edge---on @math -regular graphs can be solved in @math rounds randomized and in @math rounds deterministically. These prove the corresponding lower bounds by [STOC'16] and Chang, Kopelowitz, and Pettie [FOCS'16] to be tight. Moreover, these show that sinkless orientation exhibits an exponential separation between its randomized and deterministic complexities, akin to the results of for @math -coloring @math -regular trees. -- We present a randomized @math round algorithm for orienting @math -arboricity graphs with maximum out-degree @math . This can also be turned into a decomposition into @math forests when @math and into @math pseudo-forests when @math . Obtaining an efficient distributed decomposition into less than @math forests was stated as the 10th open problem in the book by Barenboim and Elkin. | Throughout, we work with the standard distributed model called @math , due to Linial @cite_12 : The network is abstracted as a graph @math . There is one processor on each vertex, which initially knows only its neighbors. Per round, each processor can send one message to each of its neighbors.
| {
"cite_N": [
"@cite_12"
],
"mid": [
"2054910423"
],
"abstract": [
"This paper concerns a number of algorithmic problems on graphs and how they may be solved in a distributed fashion. The computational model is such that each node of the graph is occupied by a processor which has its own ID. Processors are restricted to collecting data from others which are at a distance at most t away from them in t time units, but are otherwise computationally unbounded. This model focuses on the issue of locality in distributed processing, namely, to what extent a global solution to a computational problem can be obtained from locally available data.Three results are proved within this model: • A 3-coloring of an n-cycle requires time @math . This bound is tight, by previous work of Cole and Vishkin. • Any algorithm for coloring the d-regular tree of radius r which runs for time at most @math requires at least @math colors. • In an n-vertex graph of largest degree @math , an @math -coloring may be found in time @math ."
]
} |
1608.03220 | 2949268223 | We study a family of closely-related distributed graph problems, which we call degree splitting, where roughly speaking the objective is to partition (or orient) the edges such that each node's degree is split almost uniformly. Our findings lead to answers for a number of problems, a sampling of which includes: -- We present a @math round deterministic algorithm for @math -edge-coloring, where @math denotes the maximum degree. Modulo the @math factor, this settles one of the long-standing open problems of the area from the 1990's (see e.g. Panconesi and Srinivasan [PODC'92]). Indeed, a weaker requirement of @math -edge-coloring in @math rounds was asked for in the 4th open question in the Distributed Graph Coloring book by Barenboim and Elkin. -- We show that sinkless orientation---i.e., orienting edges such that each node has at least one outgoing edge---on @math -regular graphs can be solved in @math rounds randomized and in @math rounds deterministically. These prove the corresponding lower bounds by [STOC'16] and Chang, Kopelowitz, and Pettie [FOCS'16] to be tight. Moreover, these show that sinkless orientation exhibits an exponential separation between its randomized and deterministic complexities, akin to the results of for @math -coloring @math -regular trees. -- We present a randomized @math round algorithm for orienting @math -arboricity graphs with maximum out-degree @math . This can also be turned into a decomposition into @math forests when @math and into @math pseudo-forests when @math . Obtaining an efficient distributed decomposition into less than @math forests was stated as the 10th open problem in the book by Barenboim and Elkin. | One related problem which has received attention recently is , where the objective is to orient edges of a @math -regular graph such that each node has out-degree at least @math . Note that this is clearly a much weaker requirement than that of the directed degree splitting problem.
Recently, @cite_13 gave an elegant @math round lower bound for this problem, which was then extended by @cite_21 to an @math lower bound for deterministic algorithms. For this weaker orientation problem, we can achieve much better round complexities, which match the respective lower bounds. | {
"cite_N": [
"@cite_21",
"@cite_13"
],
"mid": [
"2279830512",
"2951615666"
],
"abstract": [
"Over the past 30 years numerous algorithms have been designed for symmetry breaking problems in the LOCAL model, such as maximal matching, MIS, vertex coloring, and edge-coloring. For most problems the best randomized algorithm is at least exponentially faster than the best deterministic algorithm. In this paper we prove that these exponential gaps are necessary and establish connections between the deterministic and randomized complexities in the LOCAL model. Each result has a very compelling take-away message: 1. Fast @math -coloring of trees requires random bits: Building on the recent lower bounds of , we prove that the randomized complexity of @math -coloring a tree with maximum degree @math is @math , whereas its deterministic complexity is @math for any @math . This also establishes a large separation between the deterministic complexity of @math -coloring and @math -coloring trees. 2. Randomized lower bounds imply deterministic lower bounds: We prove that any deterministic algorithm for a natural class of problems that runs in @math rounds can be transformed to run in @math rounds. If the transformed algorithm violates a lower bound (even allowing randomization), then one can conclude that the problem requires @math time deterministically. 3. Deterministic lower bounds imply randomized lower bounds: We prove that the randomized complexity of any natural problem on instances of size @math is at least its deterministic complexity on instances of size @math . This shows that a deterministic @math lower bound for any problem implies a randomized @math lower bound. It also illustrates that the graph shattering technique is absolutely essential to the LOCAL model.",
"We show that any randomised Monte Carlo distributed algorithm for the Lovász local lemma requires @math communication rounds, assuming that it finds a correct assignment with high probability. Our result holds even in the special case of @math , where @math is the maximum degree of the dependency graph. By prior work, there are distributed algorithms for the Lovász local lemma with a running time of @math rounds in bounded-degree graphs, and the best lower bound before our work was @math rounds [ 2014]."
]
} |
1608.03220 | 2949268223 | We study a family of closely-related distributed graph problems, which we call degree splitting, where roughly speaking the objective is to partition (or orient) the edges such that each node's degree is split almost uniformly. Our findings lead to answers for a number of problems, a sampling of which includes: -- We present a @math round deterministic algorithm for @math -edge-coloring, where @math denotes the maximum degree. Modulo the @math factor, this settles one of the long-standing open problems of the area from the 1990's (see e.g. Panconesi and Srinivasan [PODC'92]). Indeed, a weaker requirement of @math -edge-coloring in @math rounds was asked for in the 4th open question in the Distributed Graph Coloring book by Barenboim and Elkin. -- We show that sinkless orientation---i.e., orienting edges such that each node has at least one outgoing edge---on @math -regular graphs can be solved in @math rounds randomized and in @math rounds deterministically. These prove the corresponding lower bounds by [STOC'16] and Chang, Kopelowitz, and Pettie [FOCS'16] to be tight. Moreover, these show that sinkless orientation exhibits an exponential separation between its randomized and deterministic complexities, akin to the results of for @math -coloring @math -regular trees. -- We present a randomized @math round algorithm for orienting @math -arboricity graphs with maximum out-degree @math . This can also be turned into a decomposition into @math forests when @math and into @math pseudo-forests when @math . Obtaining an efficient distributed decomposition into less than @math forests was stated as the 10th open problem in the book by Barenboim and Elkin. | @cite_13 used sinkless orientation to prove an @math round lower bound on @math -algorithms for Lovász Local Lemma (LLL). Since LLL can provide much finer degree splits, studying these stronger degree-splits might expose higher lower bounds for LLL.
Moreover, @cite_21 recently presented the first exponential separation between randomized and deterministic distributed complexities, by showing that @math -vertex-coloring @math -regular trees requires @math rounds randomized and @math rounds deterministically. thm:sinkless , in conjunction with the aforementioned lower bounds, exhibits the same exponential separation on sinkless orientation. | {
"cite_N": [
"@cite_21",
"@cite_13"
],
"mid": [
"2279830512",
"2951615666"
],
"abstract": [
"Over the past 30 years numerous algorithms have been designed for symmetry breaking problems in the LOCAL model, such as maximal matching, MIS, vertex coloring, and edge-coloring. For most problems the best randomized algorithm is at least exponentially faster than the best deterministic algorithm. In this paper we prove that these exponential gaps are necessary and establish connections between the deterministic and randomized complexities in the LOCAL model. Each result has a very compelling take-away message: 1. Fast @math -coloring of trees requires random bits: Building on the recent lower bounds of , we prove that the randomized complexity of @math -coloring a tree with maximum degree @math is @math , whereas its deterministic complexity is @math for any @math . This also establishes a large separation between the deterministic complexity of @math -coloring and @math -coloring trees. 2. Randomized lower bounds imply deterministic lower bounds: We prove that any deterministic algorithm for a natural class of problems that runs in @math rounds can be transformed to run in @math rounds. If the transformed algorithm violates a lower bound (even allowing randomization), then one can conclude that the problem requires @math time deterministically. 3. Deterministic lower bounds imply randomized lower bounds: We prove that the randomized complexity of any natural problem on instances of size @math is at least its deterministic complexity on instances of size @math . This shows that a deterministic @math lower bound for any problem implies a randomized @math lower bound. It also illustrates that the graph shattering technique is absolutely essential to the LOCAL model.",
"We show that any randomised Monte Carlo distributed algorithm for the Lovász local lemma requires @math communication rounds, assuming that it finds a correct assignment with high probability. Our result holds even in the special case of @math , where @math is the maximum degree of the dependency graph. By prior work, there are distributed algorithms for the Lovász local lemma with a running time of @math rounds in bounded-degree graphs, and the best lower bound before our work was @math rounds [ 2014]."
]
} |
1608.03220 | 2949268223 | We study a family of closely-related distributed graph problems, which we call degree splitting, where roughly speaking the objective is to partition (or orient) the edges such that each node's degree is split almost uniformly. Our findings lead to answers for a number of problems, a sampling of which includes: -- We present a @math round deterministic algorithm for @math -edge-coloring, where @math denotes the maximum degree. Modulo the @math factor, this settles one of the long-standing open problems of the area from the 1990's (see e.g. Panconesi and Srinivasan [PODC'92]). Indeed, a weaker requirement of @math -edge-coloring in @math rounds was asked for in the 4th open question in the Distributed Graph Coloring book by Barenboim and Elkin. -- We show that sinkless orientation---i.e., orienting edges such that each node has at least one outgoing edge---on @math -regular graphs can be solved in @math rounds randomized and in @math rounds deterministically. These prove the corresponding lower bounds by [STOC'16] and Chang, Kopelowitz, and Pettie [FOCS'16] to be tight. Moreover, these show that sinkless orientation exhibits an exponential separation between its randomized and deterministic complexities, akin to the results of for @math -coloring @math -regular trees. -- We present a randomized @math round algorithm for orienting @math -arboricity graphs with maximum out-degree @math . This can also be turned into a decomposition into @math forests when @math and into @math pseudo-forests when @math . Obtaining an efficient distributed decomposition into less than @math forests was stated as the 10th open problem in the book by Barenboim and Elkin. | See thm:directedDegreeSplit and lem:forestDecomp for the formal statements. We note that efficient distributed orientation with out-degree less than @math had remained open. 
The best previously known results were as follows: an orientation with out-degree at most @math in @math rounds and an orientation with out-degree at most @math in @math rounds, both due to Barenboim and Elkin @cite_23 . The same authors state the closely-related problem of efficient distributed decomposition into less than @math forests as the 10th open problem in their book [Section 11] barenboim2013monograph . | {
"cite_N": [
"@cite_23"
],
"mid": [
"1987125980"
],
"abstract": [
"We study the distributed maximal independent set (henceforth, MIS) problem on sparse graphs. Currently, there are known algorithms with a sublogarithmic running time for this problem on oriented trees and graphs of bounded degrees. We devise the first sublogarithmic algorithm for computing MIS on graphs of bounded arboricity. This is a large family of graphs that includes graphs of bounded degree, planar graphs, graphs of bounded genus, graphs of bounded treewidth, graphs that exclude a fixed minor, and many other graphs. We also devise efficient algorithms for coloring graphs from these families. These results are achieved by the following technique that may be of independent interest. Our algorithm starts with computing a certain graph-theoretic structure, called Nash-Williams forests-decomposition. Then this structure is used to compute the MIS or coloring. Our results demonstrate that this methodology is very powerful. Finally, we show nearly-tight lower bounds on the running time of any distributed algorithm for computing a forests-decomposition."
]
} |
1608.03220 | 2949268223 | We study a family of closely-related distributed graph problems, which we call degree splitting, where roughly speaking the objective is to partition (or orient) the edges such that each node's degree is split almost uniformly. Our findings lead to answers for a number of problems, a sampling of which includes: -- We present a @math round deterministic algorithm for @math -edge-coloring, where @math denotes the maximum degree. Modulo the @math factor, this settles one of the long-standing open problems of the area from the 1990's (see e.g. Panconesi and Srinivasan [PODC'92]). Indeed, a weaker requirement of @math -edge-coloring in @math rounds was asked for in the 4th open question in the Distributed Graph Coloring book by Barenboim and Elkin. -- We show that sinkless orientation---i.e., orienting edges such that each node has at least one outgoing edge---on @math -regular graphs can be solved in @math rounds randomized and in @math rounds deterministically. These prove the corresponding lower bounds by [STOC'16] and Chang, Kopelowitz, and Pettie [FOCS'16] to be tight. Moreover, these show that sinkless orientation exhibits an exponential separation between its randomized and deterministic complexities, akin to the results of for @math -coloring @math -regular trees. -- We present a randomized @math round algorithm for orienting @math -arboricity graphs with maximum out-degree @math . This can also be turned into a decomposition into @math forests when @math and into @math pseudo-forests when @math . Obtaining an efficient distributed decomposition into less than @math forests was stated as the 10th open problem in the book by Barenboim and Elkin. | Distributed low out-degree orientation of low-arboricity graphs was first studied by @cite_23 and the same results have been used in a few subsequent works. 
This orientation was then turned into a forest decomposition which subsequently led to sublinear-time algorithms for maximal independent set, vertex coloring, edge coloring and maximal matching in graphs of low arboricity. See [Chapter 4 & Chapter 11.3] barenboim2013monograph . Sinkless orientation was recently introduced by @cite_13 and also studied by @cite_21 . | {
"cite_N": [
"@cite_21",
"@cite_13",
"@cite_23"
],
"mid": [
"2279830512",
"2951615666",
"1987125980"
],
"abstract": [
"Over the past 30 years numerous algorithms have been designed for symmetry breaking problems in the LOCAL model, such as maximal matching, MIS, vertex coloring, and edge-coloring. For most problems the best randomized algorithm is at least exponentially faster than the best deterministic algorithm. In this paper we prove that these exponential gaps are necessary and establish connections between the deterministic and randomized complexities in the LOCAL model. Each result has a very compelling take-away message: 1. Fast @math -coloring of trees requires random bits: Building on the recent lower bounds of , we prove that the randomized complexity of @math -coloring a tree with maximum degree @math is @math , whereas its deterministic complexity is @math for any @math . This also establishes a large separation between the deterministic complexity of @math -coloring and @math -coloring trees. 2. Randomized lower bounds imply deterministic lower bounds: We prove that any deterministic algorithm for a natural class of problems that runs in @math rounds can be transformed to run in @math rounds. If the transformed algorithm violates a lower bound (even allowing randomization), then one can conclude that the problem requires @math time deterministically. 3. Deterministic lower bounds imply randomized lower bounds: We prove that the randomized complexity of any natural problem on instances of size @math is at least its deterministic complexity on instances of size @math . This shows that a deterministic @math lower bound for any problem implies a randomized @math lower bound. It also illustrates that the graph shattering technique is absolutely essential to the LOCAL model.",
"We show that any randomised Monte Carlo distributed algorithm for the Lovász local lemma requires @math communication rounds, assuming that it finds a correct assignment with high probability. Our result holds even in the special case of @math , where @math is the maximum degree of the dependency graph. By prior work, there are distributed algorithms for the Lovász local lemma with a running time of @math rounds in bounded-degree graphs, and the best lower bound before our work was @math rounds [ 2014].",
"We study the distributed maximal independent set (henceforth, MIS) problem on sparse graphs. Currently, there are known algorithms with a sublogarithmic running time for this problem on oriented trees and graphs of bounded degrees. We devise the first sublogarithmic algorithm for computing MIS on graphs of bounded arboricity. This is a large family of graphs that includes graphs of bounded degree, planar graphs, graphs of bounded genus, graphs of bounded treewidth, graphs that exclude a fixed minor, and many other graphs. We also devise efficient algorithms for coloring graphs from these families. These results are achieved by the following technique that may be of independent interest. Our algorithm starts with computing a certain graph-theoretic structure, called Nash-Williams forests-decomposition. Then this structure is used to compute the MIS or coloring. Our results demonstrate that this methodology is very powerful. Finally, we show nearly-tight lower bounds on the running time of any distributed algorithm for computing a forests-decomposition."
]
} |
1608.03351 | 2060799023 | Consider a Gaussian multiple-input multiple-output (MIMO) multiple-access channel (MAC) with channel matrix @math and a Gaussian MIMO broadcast channel (BC) with channel matrix @math . For the MIMO MAC, the integer-forcing architecture consists of first decoding integer-linear combinations of the transmitted codewords, which are then solved for the original messages. For the MIMO BC, the integer-forcing architecture consists of pre-inverting the integer-linear combinations at the transmitter so that each receiver can obtain its desired codeword by decoding an integer-linear combination. In both cases, integer-forcing offers higher achievable rates than zero-forcing while maintaining a similar implementation complexity. This paper establishes an uplink-downlink duality relationship for integer-forcing, i.e., any sum rate that is achievable via integer-forcing on the MIMO MAC can be achieved via integer-forcing on the MIMO BC with the same sum power and vice versa. Using this duality relationship, it is shown that integer-forcing can operate within a constant gap of the MIMO BC sum capacity. Finally, the paper proposes a duality-based iterative algorithm for the non-convex problem of selecting optimal beamforming and equalization vectors, and establishes that it converges to a local optimum. | Prior work on integer-forcing @cite_9 @cite_34 @cite_50 @cite_3 has focused on the important special case where all codewords have the same effective power. This constraint is implicitly imposed by the original compute-and-forward framework @cite_24 . In order to establish uplink-downlink duality, we need the flexibility to allocate power unequally across codewords. We will thus employ the expanded compute-and-forward framework @cite_44 , which can handle unequal powers. 
Our achievability results draw upon capacity-achieving nested lattice codes, whose existence has been shown in a series of recent works @cite_27 @cite_46 @cite_4 @cite_18 @cite_19 @cite_5 . We refer interested readers to the textbook of Zamir for a detailed history as well as a comprehensive treatment of lattice codes @cite_7 . | {
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_7",
"@cite_9",
"@cite_3",
"@cite_24",
"@cite_44",
"@cite_27",
"@cite_19",
"@cite_50",
"@cite_5",
"@cite_46",
"@cite_34"
],
"mid": [
"2111992817",
"2144099979",
"",
"2035266863",
"2963217749",
"2005196269",
"1809190717",
"2129344187",
"2122621581",
"2027894224",
"",
"2151450116",
"2005762169"
],
"abstract": [
"Network information theory promises high gains over simple point-to-point communication techniques, at the cost of higher complexity. However, lack of structured coding schemes limited the practical application of these concepts so far. One of the basic elements of a network code is the binning scheme. Wyner (1974, 1978) and other researchers proposed various forms of coset codes for efficient binning, yet these schemes were applicable only for lossless source (or noiseless channel) network coding. To extend the algebraic binning approach to lossy source (or noisy channel) network coding, previous work proposed the idea of nested codes, or more specifically, nested parity-check codes for the binary case and nested lattices in the continuous case. These ideas connect network information theory with the rich areas of linear codes and lattice codes, and have strong potential for practical applications. We review these developments and explore their tight relation to concepts such as combined shaping and precoding, coding for memories with defects, and digital watermarking. We also propose a few novel applications adhering to a unified approach.",
"A simple sphere bound gives the best possible tradeoff between the volume per point of an infinite array Λ and its error probability on an additive white Gaussian noise (AWGN) channel. It is shown that the sphere bound can be approached by a large class of coset codes or multilevel coset codes with multistage decoding, including certain binary lattices. These codes have structure of the kind that has been found to be useful in practice. Capacity curves and design guidance for practical codes are given. Exponential error bounds for coset codes are developed, generalizing Poltyrev's (1994) bounds for lattices. These results are based on the channel coding theorems of information theory, rather than the Minkowski-Hlawka theorem of lattice theory.",
"",
"A new architecture called integer-forcing (IF) linear receiver has been recently proposed for multiple-input multiple-output (MIMO) fading channels, wherein an appropriate integer linear combination of the received symbols has to be computed as a part of the decoding process. In this paper, we propose a method based on Hermite-Korkine-Zolotareff (HKZ) and Minkowski lattice basis reduction algorithms to obtain the integer coefficients for the IF receiver. We show that the proposed method provides a lower bound on the ergodic rate, and achieves the full receive diversity. Suitability of complex Lenstra-Lenstra-Lovasz (LLL) lattice reduction algorithm (CLLL) to solve the problem is also investigated. Furthermore, we establish the connection between the proposed IF linear receivers and lattice reduction-aided MIMO detectors (with equivalent complexity), and point out the advantages of the former class of receivers over the latter. For the 2 × 2 and 4 × 4 MIMO channels, we compare the coded-block error rate and bit error rate of the proposed approach with that of other linear receivers. Simulation results show that the proposed approach outperforms the zero-forcing (ZF) receiver, minimum mean square error (MMSE) receiver, and the lattice reduction-aided MIMO detectors.",
"We study a distributed antenna system where L antenna terminals (ATs) are connected to a central processor (CP) via digital error-free links of finite capacity R0, and serve K user terminals (UTs). This model has been widely investigated both for the uplink (UTs to CP) and for the downlink (CP to UTs), which are instances of the general multiple-access relay and broadcast relay networks. We contribute to the subject in the following ways: 1) For the uplink, we consider the recently proposed “compute and forward” (CoF) approach and examine the corresponding system optimization at finite SNR. 2) For the downlink, we propose a novel precoding scheme nicknamed “reverse compute and forward” (RCoF). 3) In both cases, we present low-complexity versions of CoF and RCoF based on standard scalar quantization at the receivers, that lead to discrete-input discrete-output symmetric memoryless channel models for which near-optimal performance can be achieved by standard single-user linear coding. 4) We provide extensive numerical results and finite SNR comparison with other “state of the art” information theoretic techniques, in scenarios including fading and shadowing. The proposed uplink and downlink system optimization focuses specifically on the ATs and UTs selection problem. In both cases, for a given set of transmitters, the goal consists of selecting a subset of the receivers such that the corresponding system matrix has full rank and the sum rate is maximized. We present low-complexity ATs and UTs selection schemes and demonstrate through Monte Carlo simulation that the proposed schemes essentially eliminate the problem of rank deficiency of the system matrix and greatly mitigate the noninteger penalty affecting CoF/RCoF at high SNR. Comparison with other state-of-the-art information theoretic schemes shows competitive performance of the proposed approaches with significantly lower complexity.",
"Interference is usually viewed as an obstacle to communication in wireless networks. This paper proposes a new strategy, compute-and-forward, that exploits interference to obtain significantly higher rates between users in a network. The key idea is that relays should decode linear functions of transmitted messages according to their observed channel coefficients rather than ignoring the interference as noise. After decoding these linear equations, the relays simply send them towards the destinations, which, given enough equations, can recover their desired messages. The underlying codes are based on nested lattices whose algebraic structure ensures that integer combinations of codewords can be decoded reliably. Encoders map messages from a finite field to a lattice and decoders recover equations of lattice points which are then mapped back to equations over the finite field. This scheme is applicable even if the transmitters lack channel state information.",
"The compute-and-forward framework permits each receiver in a Gaussian network to directly decode a linear combination of the transmitted messages. The resulting linear combinations can then be employed as an end-to-end communication strategy for relaying, interference alignment, and other applications. Recent efforts have demonstrated the advantages of employing unequal powers at the transmitters and decoding more than one linear combination at each receiver. However, neither of these techniques fit naturally within the original formulation of compute-and-forward. This paper proposes an expanded compute-and-forward framework that incorporates both of these possibilities and permits an intuitive interpretation in terms of signal levels. Within this framework, recent achievability and optimality results are unified and generalized.",
"We present several results regarding the properties of a random vector, uniformly distributed over a lattice cell. This random vector is the quantization noise of a lattice quantizer at high resolution, or the noise of a dithered lattice quantizer at all distortion levels. We find that for the optimal lattice quantizers this noise is wide-sense-stationary and white. Any desirable noise spectra may be realized by an appropriate linear transformation (\"shaping\") of a lattice quantizer. As the dimension increases, the normalized second moment of the optimal lattice quantizer goes to 1/(2πe), and consequently the quantization noise approaches a white Gaussian process in the divergence sense. In entropy-coded dithered quantization, which can be modeled accurately as passing the source through an additive noise channel, this limit behavior implies that for large lattice dimension both the error and the bit rate approach the error and the information rate of an additive white Gaussian noise (AWGN) channel.",
"We define an ensemble of lattices, and show that for asymptotically high dimension most of its members are simultaneously good as sphere packings, sphere coverings, additive white Gaussian noise (AWGN) channel codes and mean-squared error (MSE) quantization codes. These lattices are generated by applying Construction A to a random linear code over a prime field of growing size, i.e., by \"lifting\" the code to ℝ^n.",
"We consider a distributed antenna system where L antenna terminals (ATs) are connected to a Central Processor (CP) via digital error-free links of finite capacity R0, and serve K user terminals (UTs). This system model has been widely investigated both for the uplink and the downlink, which are instances of the general multiple-access relay and broadcast relay networks. In this work we focus on the downlink, and propose a novel downlink precoding scheme nicknamed “Reverse Quantized Compute and Forward” (RQCoF). For this scheme we obtain achievable rates and compare with the state-of-the-art available in the literature. We also provide simulation results for a realistic network with fading with K > L UTs, and show that channel-based user selection produces large benefits and essentially removes the problem of rank deficiency in the system matrix.",
"",
"General random coding theorems for lattices are derived from the Minkowski-Hlawka theorem and their close relation to standard averaging arguments for linear codes over finite fields is pointed out. A new version of the Minkowski-Hlawka theorem itself is obtained as the limit, for p → ∞, of a simple lemma for linear codes over GF(p) used with p-level amplitude modulation. The relation between the combinatorial packing of solid bodies and the information-theoretic \"soft packing\" with arbitrarily small, but positive, overlap is illuminated. The \"soft-packing\" results are new. When specialized to the additive white Gaussian noise channel, they reduce to (a version of) the de Buda-Poltyrev result that spherically shaped lattice codes and a decoder that is unaware of the shaping can achieve the rate (1/2) log₂(P/N).",
"Integer-forcing receivers generalize traditional linear receivers for the multiple-input multiple-output channel by decoding integer-linear combinations of the transmitted streams, rather than the streams themselves. Previous works have shown that the additional degree of freedom in choosing the integer coefficients enables this receiver to approach the performance of maximum-likelihood decoding in various scenarios. Nonetheless, even for the optimal choice of integer coefficients, the additive noise at the equalizer's output is still correlated. In this work we study a variant of integer-forcing, termed successive integer-forcing, that exploits these noise correlations to improve performance. This scheme is the integer-forcing counterpart of successive interference cancellation for traditional linear receivers. Similarly to the latter, we show that successive integer-forcing is capacity-achieving when it is possible to optimize the rate allocation to the different streams. In comparison to standard successive interference cancellation receivers, the successive integer-forcing receiver offers more possibilities for capacity-achieving rate tuples, and in particular, ones that are more balanced."
]
} |
1608.03351 | 2060799023 | Consider a Gaussian multiple-input multiple-output (MIMO) multiple-access channel (MAC) with channel matrix @math and a Gaussian MIMO broadcast channel (BC) with channel matrix @math . For the MIMO MAC, the integer-forcing architecture consists of first decoding integer-linear combinations of the transmitted codewords, which are then solved for the original messages. For the MIMO BC, the integer-forcing architecture consists of pre-inverting the integer-linear combinations at the transmitter so that each receiver can obtain its desired codeword by decoding an integer-linear combination. In both cases, integer-forcing offers higher achievable rates than zero-forcing while maintaining a similar implementation complexity. This paper establishes an uplink-downlink duality relationship for integer-forcing, i.e., any sum rate that is achievable via integer-forcing on the MIMO MAC can be achieved via integer-forcing on the MIMO BC with the same sum power and vice versa. Using this duality relationship, it is shown that integer-forcing can operate within a constant gap of the MIMO BC sum capacity. Finally, the paper proposes a duality-based iterative algorithm for the non-convex problem of selecting optimal beamforming and equalization vectors, and establishes that it converges to a local optimum. | For the sake of notational simplicity, we will state all of our results for real-valued channels. Analogous results can be obtained for complex-valued channels via real-valued decompositions. Recent efforts have shown that compute-and-forward can also be realized for more general algebraic structures @cite_35 . For instance, building lattices from the Eisenstein integers yields better approximations for complex numbers on average, and can increase the average performance of compute-and-forward @cite_16 . | {
"cite_N": [
"@cite_35",
"@cite_16"
],
"mid": [
"2569741146",
"2562805420"
],
"abstract": [
"The problem of designing physical-layer network coding (PNC) schemes via nested lattices is considered. Building on the compute-and-forward (C&F) relaying strategy of Nazer and Gastpar, who demonstrated its asymptotic gain using information-theoretic tools, an algebraic approach is taken to show its potential in practical, nonasymptotic, settings. A general framework is developed for studying nested-lattice-based PNC schemes-called lattice network coding (LNC) schemes for short-by making a direct connection between C&F and module theory. In particular, a generic LNC scheme is presented that makes no assumptions on the underlying nested lattice code. C&F is reinterpreted in this framework, and several generalized constructions of LNC schemes are given. The generic LNC scheme naturally leads to a linear network coding channel over modules, based on which noncoherent network coding can be achieved. Next, performance complexity tradeoffs of LNC schemes are studied, with a particular focus on hypercube-shaped LNC schemes. The error probability of this class of LNC schemes is largely determined by the minimum intercoset distances of the underlying nested lattice code. Several illustrative hypercube-shaped LNC schemes are designed based on Constructions A and D, showing that nominal coding gains of 3 to 7.5 dB can be obtained with reasonable decoding complexity. Finally, the possibility of decoding multiple linear combinations is considered and related to the shortest independent vectors problem. A notion of dominant solutions is developed together with a suitable lattice-reduction-based algorithm.",
"In this paper, we consider the use of lattice codes over Eisenstein integers for implementing a compute-and-forward protocol in wireless networks when channel state information is not available at the transmitter. We extend the compute-and-forward paradigm of Nazer and Gastpar to decoding Eisenstein integer combinations of transmitted messages at relays by proving the existence of a sequence of pairs of nested lattices over Eisenstein integers in which the coarse lattice is good for covering and the fine lattice can achieve the Poltyrev limit. Using this result, we show that both the outage performance and error-correcting performance of the nested lattice codebooks over Eisenstein integers surpass those of lattice codebooks over integers considered by Nazer and Gastpar with no additional computational complexity."
]
} |
1608.03351 | 2060799023 | Consider a Gaussian multiple-input multiple-output (MIMO) multiple-access channel (MAC) with channel matrix @math and a Gaussian MIMO broadcast channel (BC) with channel matrix @math . For the MIMO MAC, the integer-forcing architecture consists of first decoding integer-linear combinations of the transmitted codewords, which are then solved for the original messages. For the MIMO BC, the integer-forcing architecture consists of pre-inverting the integer-linear combinations at the transmitter so that each receiver can obtain its desired codeword by decoding an integer-linear combination. In both cases, integer-forcing offers higher achievable rates than zero-forcing while maintaining a similar implementation complexity. This paper establishes an uplink-downlink duality relationship for integer-forcing, i.e., any sum rate that is achievable via integer-forcing on the MIMO MAC can be achieved via integer-forcing on the MIMO BC with the same sum power and vice versa. Using this duality relationship, it is shown that integer-forcing can operate within a constant gap of the MIMO BC sum capacity. Finally, the paper proposes a duality-based iterative algorithm for the non-convex problem of selecting optimal beamforming and equalization vectors, and establishes that it converges to a local optimum. | Finally, we note that there is a rich body of work on lattice-aided reduction @cite_47 @cite_26 @cite_20 @cite_40 @cite_39 @cite_31 @cite_32 for MIMO channels. For instance, in the uplink version of this strategy, each transmitter employs a lattice-based constellation (such as QAM). The decoder steers the channel to a full-rank integer matrix using equalization, makes hard estimates of the resulting integer-linear combinations of lattice symbols, and then applies the inverse integer matrix to obtain estimates of the emitted symbols. 
Roughly speaking, integer-forcing can be viewed as lattice-aided reduction that operates on the codeword, rather than symbol, level. This in turn allows us to write explicit achievable rate expressions for integer-forcing, whereas rates for lattice-aided reduction must be evaluated numerically. | {
"cite_N": [
"@cite_31",
"@cite_26",
"@cite_32",
"@cite_39",
"@cite_40",
"@cite_47",
"@cite_20"
],
"mid": [
"2158815787",
"",
"",
"2167709639",
"2015853863",
"1542869039",
"2158492447"
],
"abstract": [
"This paper identifies the first general, explicit, and nonrandom MIMO encoder-decoder structures that guarantee optimality with respect to the diversity-multiplexing tradeoff (DMT), without employing a computationally expensive maximum-likelihood (ML) receiver. Specifically, the work establishes the DMT optimality of a class of regularized lattice decoders, and more importantly the DMT optimality of their lattice-reduction (LR)-aided linear counterparts. The results hold for all channel statistics, for all channel dimensions, and most interestingly, irrespective of the particular lattice-code applied. As a special case, it is established that the LLL-based LR-aided linear implementation of the MMSE-GDFE lattice decoder facilitates DMT optimal decoding of any lattice code at a worst-case complexity that grows at most linearly in the data rate. This represents a fundamental reduction in the decoding complexity when compared to ML decoding whose complexity is generally exponential in the rate. The results' generality renders them applicable to a plethora of pertinent communication scenarios such as quasi-static MIMO, MIMO-OFDM, ISI, cooperative-relaying, and MIMO-ARQ channels, in all of which the DMT optimality of the LR-aided linear decoder is guaranteed. The adopted approach yields insight, and motivates further study, into joint transceiver designs with an improved SNR gap to ML decoding.",
"",
"",
"Recently, lattice-reduction-aided detectors have been proposed for multi-input multi-output (MIMO) systems to achieve performance with full diversity like the maximum likelihood receiver. However, these lattice-reduction-aided detectors are based on the traditional Lenstra-Lenstra-Lovasz (LLL) reduction algorithm that was originally introduced for reducing real lattice bases, in spite of the fact that the channel matrices are inherently complex-valued. In this paper, we introduce the complex LLL algorithm for direct application to reducing the basis of a complex lattice which is naturally defined by a complex-valued channel matrix. We derive an upper bound on proximity factors, which not only show the full diversity of complex LLL reduction-aided detectors, but also characterize the performance gap relative to the lattice decoder. Our analysis reveals that the complex LLL algorithm can reduce the complexity by nearly 50% compared to the traditional LLL algorithm, and this is confirmed by simulation. Interestingly, our simulation results suggest that the complex LLL algorithm has practically the same bit-error-rate performance as the traditional LLL algorithm, in spite of its lower complexity.",
"Diversity order is an important measure for the performance of communication systems over multiple-input-multiple-output (MIMO) fading channels. In this correspondence, we prove that in MIMO multiple- access systems (or MIMO point-to-point systems with V-BLAST transmission), lattice-reduction-aided decoding achieves the maximum receive diversity (which is equal to the number of receive antennas). Also, we prove that the naive lattice decoding (which discards the out-of-region decoded points) achieves the maximum diversity.",
"Lattice-reduction (LR) techniques are developed for enhancing the performance of multiple-input multiple-output (MIMO) digital communication systems. When used in conjunction with traditional linear and nonlinear detectors, LR techniques substantially close the gap to fundamental performance limits with little additional system complexity. Results for individual channels and ensembles are developed, and illustrated in detail for the case of small (2 spl times 2), uncoded, coherent systems. For example, we show that, relative to the maximum likelihood bound, LR techniques get us within 3dB for any Gaussian channel, and allow us to achieve the same diversity on the Rayleigh fading channel, when sufficiently large constellations are used.",
"A new viewpoint for adopting the lattice reduction in communication over multiple-input multiple-output (MIMO) broadcast channels is introduced. Lattice basis reduction helps us to reduce the average transmitted energy by modifying the region which includes the constellation points. The new viewpoint helps us to generalize the idea of lattice-reduction-aided (LRA) preceding for the case of unequal-rate transmission, and obtain analytic results for the asymptotic behavior (signal-to-noise ratio (SNR) rarr infin) of the symbol error rate for the LRA precoding and the perturbation technique. Also, the outage probability for both cases of fixed-rate users and fixed sum rate is analyzed. It is shown that the LRA method, using the Lenstra-Lenstra-Lovasz (LLL) algorithm, achieves the optimum asymptotic slope of symbol error rate (called the precoding diversity)."
]
} |
1608.03542 | 2950663335 | We present WikiReading, a large-scale natural language understanding task and publicly-available dataset with 18 million instances. The task is to predict textual values from the structured knowledge base Wikidata by reading the text of the corresponding Wikipedia articles. The task contains a rich variety of challenging classification and extraction sub-tasks, making it well-suited for end-to-end models such as deep neural networks (DNNs). We compare various state-of-the-art DNN-based architectures for document classification, information extraction, and question answering. We find that models supporting a rich answer space, such as word or character sequences, perform best. Our best-performing model, a word-level sequence to sequence model with a mechanism to copy out-of-vocabulary words, obtains an accuracy of 71.8%. | A substantial body of work in relation extraction (RE) follows the distant supervision paradigm @cite_12 , where sentences containing both arguments of a knowledge base (KB) triple are assumed to express the triple's relation. Broadly, these models use these distant labels to identify syntactic features relating the subject and object entities in text that are indicative of the relation. apply distant supervision to extracting Freebase triples @cite_2 from Wikipedia text, analogous to the relational part of . Extensions to distant supervision include explicitly modelling whether the relation is actually expressed in the sentence @cite_21 , and jointly reasoning over larger sets of sentences and relations @cite_5 . Recently, developed methods for reducing the number of distant supervision examples required by sharing information between relations. | {
"cite_N": [
"@cite_5",
"@cite_21",
"@cite_12",
"@cite_2"
],
"mid": [
"174427690",
"1604644367",
"1954715867",
"2094728533"
],
"abstract": [
"Distant supervision for relation extraction (RE) -- gathering training data by aligning a database of facts with text -- is an efficient approach to scale RE to thousands of different relations. However, this introduces a challenging learning scenario where the relation expressed by a pair of entities found in a sentence is unknown. For example, a sentence containing Balzac and France may express BornIn or Died, an unknown relation, or no relation at all. Because of this, traditional supervised learning, which assumes that each example is explicitly mapped to a label, is not appropriate. We propose a novel approach to multi-instance multi-label learning for RE, which jointly models all the instances of a pair of entities in text and all their labels using a graphical model with latent variables. Our model performs competitively on two difficult domains.",
"Several recent works on relation extraction have been applying the distant supervision paradigm: instead of relying on annotated text to learn how to predict relations, they employ existing knowledge bases (KBs) as source of supervision. Crucially, these approaches are trained based on the assumption that each sentence which mentions the two related entities is an expression of the given relation. Here we argue that this leads to noisy patterns that hurt precision, in particular if the knowledge base is not directly related to the text we are working with. We present a novel approach to distant supervision that can alleviate this problem based on the following two ideas: First, we use a factor graph to explicitly model the decision whether two entities are related, and the decision whether this relation is mentioned in a given sentence; second, we apply constraint-driven semi-supervision to train this model without any knowledge about which sentences express the relations in our training KB. We apply our approach to extract relations from the New York Times corpus and use Freebase as knowledge base. When compared to a state-of-the-art approach for relation extraction under distant supervision, we achieve 31 error reduction.",
"Recently, there has been much effort in making databases for Inolecular biology more accessible osld interoperable. However, information in text. form, such as MEDLINE records, remains a greatly underutilized source of biological information. We have begun a research effort aimed at automatically mapping information from text. sources into structured representations, such as knowledge bases. Our approach to this task is to use machine-learning methods to induce routines for extracting facts from text. We describe two learning methods that we have applied to this task -a statistical text classification method, and a relational learning method -and our initial experiments in learning such information-extraction routines. We also present an approach to decreasing the cost of learning information-extraction routines by learning from \"weakly\" labeled training data.",
"Freebase is a practical, scalable tuple database used to structure general human knowledge. The data in Freebase is collaboratively created, structured, and maintained. Freebase currently contains more than 125,000,000 tuples, more than 4000 types, and more than 7000 properties. Public read write access to Freebase is allowed through an HTTP-based graph-query API using the Metaweb Query Language (MQL) as a data query and manipulation language. MQL provides an easy-to-use object-oriented interface to the tuple data in Freebase and is designed to facilitate the creation of collaborative, Web-based data-oriented applications."
]
} |
1608.03487 | 2512562408 | This paper focuses on convex constrained optimization problems, where the solution is subject to a convex inequality constraint. In particular, we aim at challenging problems for which both projection into the constrained domain and a linear optimization under the inequality constraint are time-consuming, which render both projected gradient methods and conditional gradient methods (a.k.a. the Frank-Wolfe algorithm) expensive. In this paper, we develop projection reduced optimization algorithms for both smooth and non-smooth optimization with improved convergence rates. We first present a general theory of optimization with only one projection. Its application to smooth optimization with only one projection yields @math iteration complexity, which can be further reduced under strong convexity and improves over the @math iteration complexity established before for non-smooth optimization. Then we introduce the local error bound condition and develop faster convergent algorithms for non-strongly convex optimization at the price of a logarithmic number of projections. In particular, we achieve a convergence rate of @math for non-smooth optimization and @math for smooth optimization, where @math is a constant in the local error bound condition. An experiment on solving the constrained @math minimization problem in compressive sensing demonstrates that the proposed algorithm achieves a significant speed-up. | To tackle these complex constraints, the idea of optimization with a reduced number of projections was explored in several studies since @cite_0 . In a recent paper @cite_10 , the authors show that for stochastic strongly convex optimization, the optimal convergence rate can be achieved using a logarithmic number of projections.
In contrast, the developed theory in this paper implies that only one projection is sufficient to achieve the optimal convergence rate for strongly convex optimization, and a logarithmic number of projections can be used to accelerate convergence rates for non-strongly convex optimization. proposed a stochastic algorithm for solving heavily constrained problems with many constraint functions by extending the work of . Nonetheless, their focus is not to improve the convergence rates. studied smooth and strongly convex optimization and proposed a stochastic algorithm with @math projections and proved an @math convergence rate, where @math is the condition number and @math is the total number of iterations. Nonetheless, if the condition number is high, the number of projections could be very large. In addition, their algorithm utilizes mini-batches to avoid frequent projections in stochastic optimization, which is different from the present paper. | {
"cite_N": [
"@cite_0",
"@cite_10"
],
"mid": [
"2170262790",
"2500690480"
],
"abstract": [
"Although many variants of stochastic gradient descent have been proposed for large-scale convex optimization, most of them require projecting the solution at each iteration to ensure that the obtained solution stays within the feasible domain. For complex domains (e.g., positive semidefinite cone), the projection step can be computationally expensive, making stochastic gradient descent unattractive for large-scale optimization problems. We address this limitation by developing novel stochastic optimization algorithms that do not need intermediate projections. Instead, only one projection at the last iteration is needed to obtain a feasible solution in the given domain. Our theoretical analysis shows that with a high probability, the proposed algorithms achieve an O(1 √T) convergence rate for general convex optimization, and an O(ln T T) rate for strongly convex optimization under mild conditions about the domain and the objective function.",
"We consider stochastic strongly convex optimization with a complex inequality constraint. This complex inequality constraint may lead to computationally expensive projections in algorithmic iterations of the stochastic gradient descent (SGD) methods. To reduce the computation costs pertaining to the projections, we propose an Epoch-Projection Stochastic Gradient Descent (Epro-SGD) method. The proposed Epro-SGD method consists of a sequence of epochs; it applies SGD to an augmented objective function at each iteration within the epoch, and then performs a projection at the end of each epoch. Given a strongly convex optimization and for a total number of @math iterations, Epro-SGD requires only @math projections, and meanwhile attains an optimal convergence rate of @math , both in expectation and with a high probability. To exploit the structure of the optimization problem, we propose a proximal variant of Epro-SGD, namely Epro-ORDA, based on the optimal regularized dual averaging method. We apply the proposed methods on real-world applications; the empirical results demonstrate the effectiveness of our methods."
]
} |
1608.02824 | 2555464951 | Correspondences between 3D lines and their 2D images captured by a camera are often used to determine position and orientation of the camera in space. In this work, we propose a novel algebraic algorithm to estimate the camera pose. We parameterize 3D lines using Plucker coordinates that allow linear projection of the lines into the image. A line projection matrix is estimated using Linear Least Squares and the camera pose is then extracted from the matrix. An algebraic approach to handle mismatched line correspondences is also included. The proposed algorithm is an order of magnitude faster yet comparably accurate and robust to the state-of-the-art, it does not require initialization, and it yields only one solution. The described method requires at least 9 lines and is particularly suitable for scenarios with 25 and more lines, as also shown in the results. | The task of camera pose estimation from line correspondences has been receiving attention for more than two decades. Some of the earliest works are the ones of Liu et al. @cite_7 and Dhome et al. @cite_21 . They introduce two different ways of dealing with the PnL problem that can be traced to this day -- algebraic and iterative approaches. | {
"cite_N": [
"@cite_21",
"@cite_7"
],
"mid": [
"2153326665",
"2061246543"
],
"abstract": [
"A method for finding analytical solutions to the problem of determining the attitude of a 3D object in space from a single perspective image is presented. Its principle is based on the interpretation of a triplet of any image lines as the perspective projection of a triplet of linear ridges of the object model, and on the search for the model attitude consistent with these projections. The geometrical transformations to be applied to the model to bring it into the corresponding location are obtained by the resolution of an eight-degree equation in the general case. Using simple logical rules, it is shown on examples related to polyhedra that this approach leads to results useful for both location and recognition of 3D objects because few admissible hypotheses are retained from the interpolation of the three line segments. Line matching by the prediction-verification procedure is thus less complex. >",
"A method for the determination of camera location from two-dimensional (2-D) to three-dimensional (3-D) straight line or point correspondences is presented. With this method, the computations of the rotation matrix and the translation vector of the camera are separable. First, the rotation matrix is found by a linear algorithm using eight or more line correspondences, or by a nonlinear algorithm using three or more line correspondences, where the line correspondences are either given or derived from point correspondences. Then, the translation vector is obtained by solving a set of linear equations based on three or more line correspondences, or two or more point correspondences. Eight 2-D to 3-D line correspondences or six 2-D to 3-D point correspondences are needed for the linear approach; three 2-D to 3-D line or point correspondences for the nonlinear approach. Good results can be obtained in the presence of noise if more than the minimum required number of correspondences are used. >"
]
} |
1608.02824 | 2555464951 | Correspondences between 3D lines and their 2D images captured by a camera are often used to determine position and orientation of the camera in space. In this work, we propose a novel algebraic algorithm to estimate the camera pose. We parameterize 3D lines using Plucker coordinates that allow linear projection of the lines into the image. A line projection matrix is estimated using Linear Least Squares and the camera pose is then extracted from the matrix. An algebraic approach to handle mismatched line correspondences is also included. The proposed algorithm is an order of magnitude faster yet comparably accurate and robust to the state-of-the-art, it does not require initialization, and it yields only one solution. The described method requires at least 9 lines and is particularly suitable for scenarios with 25 and more lines, as also shown in the results. | The algebraic methods estimate the camera pose by solving a system of (usually polynomial) equations, minimizing an algebraic error. Dhome et al. @cite_21 and Chen @cite_15 solve the minimal problem of pose estimation from 3 line correspondences whereas Ansar and Daniilidis @cite_19 work with 4 or more lines. Their algorithm has quadratic computational complexity depending on the number of lines and it may fail if the polynomial system has more than 1 solution. A more crucial disadvantage of these methods is that they become unstable in the presence of image noise and must be plugged into a RANSAC or similar loop. | {
"cite_N": [
"@cite_19",
"@cite_15",
"@cite_21"
],
"mid": [
"2122612384",
"2098735323",
"2153326665"
],
"abstract": [
"Estimation of camera pose from an image of n points or lines with known correspondence is a thoroughly studied problem in computer vision. Most solutions are iterative and depend on nonlinear optimization of some geometric constraint, either on the world coordinates or on the projections to the image plane. For real-time applications, we are interested in linear or closed-form solutions free of initialization. We present a general framework which allows for a novel set of linear solutions to the pose estimation problem for both n points and n lines. We then analyze the sensitivity of our solutions to image noise and show that the sensitivity analysis can be used as a conservative predictor of error for our algorithms. We present a number of simulations which compare our results to two other recent linear algorithms, as well as to iterative approaches. We conclude with tests on real imagery in an augmented reality setup.",
"Consideration is given to a specific pose determination problem in which the sensory features are lines and the matched reference features are planes. The lines discussed are different from edge lines of an object in that they are not the intersection of boundary faces of the object. The author describes a polynomial method that, unlike previous methods, does not require prior knowledge about the location of the object. Closed-form solutions for orthogonal, coplanar, and parallel feature configurations of critical importance in real applications are derived. Basic findings concerning the necessary and sufficient conditions under which the pose determination problem can be solved are presented. >",
"A method for finding analytical solutions to the problem of determining the attitude of a 3D object in space from a single perspective image is presented. Its principle is based on the interpretation of a triplet of any image lines as the perspective projection of a triplet of linear ridges of the object model, and on the search for the model attitude consistent with these projections. The geometrical transformations to be applied to the model to bring it into the corresponding location are obtained by the resolution of an eight-degree equation in the general case. Using simple logical rules, it is shown on examples related to polyhedra that this approach leads to results useful for both location and recognition of 3D objects because few admissible hypotheses are retained from the interpolation of the three line segments. Line matching by the prediction-verification procedure is thus less complex. >"
]
} |
1608.02824 | 2555464951 | Correspondences between 3D lines and their 2D images captured by a camera are often used to determine position and orientation of the camera in space. In this work, we propose a novel algebraic algorithm to estimate the camera pose. We parameterize 3D lines using Plucker coordinates that allow linear projection of the lines into the image. A line projection matrix is estimated using Linear Least Squares and the camera pose is then extracted from the matrix. An algebraic approach to handle mismatched line correspondences is also included. The proposed algorithm is an order of magnitude faster yet comparably accurate and robust to the state-of-the-art, it does not require initialization, and it yields only one solution. The described method requires at least 9 lines and is particularly suitable for scenarios with 25 and more lines, as also shown in the results. | Recently, two major improvements of algebraic approaches have been achieved. First, Mirzaei and Roumeliotis @cite_4 proposed a method which is both efficient (linear computational complexity depending on the number of lines) and robust in the presence of image noise. The cases with 3 or more lines can be handled. A polynomial system with 27 candidate solutions is constructed and solved through the eigendecomposition of a multiplication matrix. Camera orientations having the least-squares error are considered to be the optimal ones. Camera positions are obtained separately using Linear Least Squares. Nonetheless, the problem of this algorithm is that it often yields multiple solutions. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2168241268"
],
"abstract": [
"Correspondences between 2D lines in an image and 3D lines in the surrounding environment can be exploited to determine the camera's position and attitude (pose). In this paper, we introduce a novel approach to estimate the camera's pose by directly solving the corresponding least-squares problem algebraically. Specifically, the optimality conditions of the least-squares problem form a system of polynomial equations, which we efficiently solve through the eigendecomposition of a so-called multiplication matrix. Contrary to existing methods, the proposed algorithm (i) is guaranteed to find the globally optimal estimate in the least-squares sense, (ii) does not require initialization, and (iii) has computational cost only linear in the number of measurements. The superior performance of the proposed algorithm compared to previous approaches is demonstrated through extensive simulations and experiments."
]
} |
1608.02824 | 2555464951 | Correspondences between 3D lines and their 2D images captured by a camera are often used to determine position and orientation of the camera in space. In this work, we propose a novel algebraic algorithm to estimate the camera pose. We parameterize 3D lines using Plucker coordinates that allow linear projection of the lines into the image. A line projection matrix is estimated using Linear Least Squares and the camera pose is then extracted from the matrix. An algebraic approach to handle mismatched line correspondences is also included. The proposed algorithm is an order of magnitude faster yet comparably accurate and robust to the state-of-the-art, it does not require initialization, and it yields only one solution. The described method requires at least 9 lines and is particularly suitable for scenarios with 25 and more lines, as also shown in the results. | The second recent improvement is the work of Zhang et al. @cite_6 . Their method works with 4 or more lines and is more accurate and robust than the method of Mirzaei and Roumeliotis. An intermediate model coordinate system is used in the method of Zhang et al., which is aligned with a 3D line of longest projection. The lines are divided into triples for each of which a P3L polynomial is formed. The optimal solution of the polynomial system is selected from the roots of its derivative in terms of a least squares residual. A drawback of this algorithm is that the computational time increases strongly for higher numbers of lines. | {
"cite_N": [
"@cite_6"
],
"mid": [
"1598416351"
],
"abstract": [
"We propose a non-iterative solution for the Perspective-n-Line (PnL) problem, which can efficiently and accurately estimate the camera pose for both small number and large number of line correspondences. By selecting a rotation axis in the camera framework, the reference lines are divided into triplets to form a sixteenth order cost function, and then the optimum is retrieved from the roots of the derivative of the cost function by evaluating the orthogonal errors and the reprojected errors of the local minima. The final pose estimation is normalized by a 3D alignment approach. The advantages of the proposed method are as follows: (1) it stably retrieves the optimum of the solution with very little computational complexity and high accuracy; (2) small line sets can be robustly handled to achieve highly accurate results and; (3) large line sets can be efficiently handled because it is O(n)."
]
} |
1608.03030 | 2482622595 | Social media messages' brevity and unconventional spelling pose a challenge to language identification. We introduce a hierarchical model that learns character and contextualized word-level representations for language identification. Our method performs well against strong baselines, and can also reveal code-switching. | Several other studies have investigated the use of character sequence models in language processing. These techniques were first applied only to create word embeddings @cite_15 @cite_13 and then later extended to have the word embeddings feed directly into a word-level RNN. Applications include part-of-speech (POS) tagging @cite_16 , language modeling @cite_2 , dependency parsing @cite_19 , translation @cite_16 , and slot filling text analysis @cite_12 . The work is divided in terms of whether the character sequence is modeled with an LSTM or CNN, though virtually all now leverage the resulting word vectors in a word-level RNN. We are not aware of prior results comparing LSTMs and CNNs on a specific task, but the reduction in model size compared to word-only systems is reported to be much higher for LSTM architectures. All analyses report that the greatest improvements in performance from character sequence models are for infrequent and previously unseen words, as expected. | {
"cite_N": [
"@cite_19",
"@cite_2",
"@cite_15",
"@cite_16",
"@cite_13",
"@cite_12"
],
"mid": [
"2951336364",
"2949563612",
"2101609803",
"2220350356",
"1951325712",
"2313437166"
],
"abstract": [
"We present extensions to a continuous-state dependency parsing method that makes it applicable to morphologically rich languages. Starting with a high-performance transition-based parser that uses long short-term memory (LSTM) recurrent neural networks to learn representations of the parser state, we replace lookup-based word representations with representations constructed from the orthographic representations of the words, also using LSTMs. This allows statistical sharing across word forms that are similar on the surface. Experiments for morphologically rich languages show that the parsing model benefits from incorporating the character-based encodings of words.",
"We introduce a model for constructing vector representations of words by composing characters using bidirectional LSTMs. Relative to traditional word representation models that have independent vectors for each word type, our model requires only a single vector per character type and a fixed set of parameters for the compositional model. Despite the compactness of this model and, more importantly, the arbitrary nature of the form-function relationship in language, our \"composed\" word representations yield state-of-the-art results in language modeling and part-of-speech tagging. Benefits over traditional baselines are particularly pronounced in morphologically rich languages (e.g., Turkish).",
"Distributed word representations have recently been proven to be an invaluable resource for NLP. These representations are normally learned using neural networks and capture syntactic and semantic information about words. Information about word morphology and shape is normally ignored when learning word representations. However, for tasks like part-of-speech tagging, intra-word information is extremely useful, specially when dealing with morphologically rich languages. In this paper, we propose a deep neural network that learns character-level representation of words and associate them with usual word representations to perform POS tagging. Using the proposed approach, while avoiding the use of any handcrafted feature, we produce state-of-the-art POS taggers for two languages: English, with 97.32 accuracy on the Penn Treebank WSJ corpus; and Portuguese, with 97.47 accuracy on the Mac-Morpho corpus, where the latter represents an error reduction of 12.2 on the best previous known result.",
"We introduce a neural machine translation model that views the input and output sentences as sequences of characters rather than words. Since word-level information provides a crucial source of bias, our input model composes representations of character sequences into representations of words (as determined by whitespace boundaries), and then these are translated using a joint attention translation model. In the target language, the translation is modeled as a sequence of word vectors, but each word is generated one character at a time, conditional on the previous character generations in each word. As the representation and generation of words is performed at the character level, our model is capable of interpreting and generating unseen word forms. A secondary benefit of this approach is that it alleviates much of the challenges associated with preprocessing tokenization of the source and target languages. We show that our model can achieve translation results that are on par with conventional word-based models.",
"Most state-of-the-art named entity recognition (NER) systems rely on handcrafted features and on the output of other NLP tasks such as part-of-speech (POS) tagging and text chunking. In this work we propose a language-independent NER system that uses automatically learned features only. Our approach is based on the CharWNN deep neural network, which uses word-level and character-level representations (embeddings) to perform sequential classification. We perform an extensive number of experiments using two annotated corpora in two different languages: HAREM I corpus, which contains texts in Portuguese; and the SPA CoNLL-2002 corpus, which contains texts in Spanish. Our experimental results shade light on the contribution of neural character embeddings for NER. Moreover, we demonstrate that the same neural network which has been successfully applied to POS tagging can also achieve state-of-the-art results for language-independet NER, using the same hyperparameters, and without any handcrafted features. For the HAREM I corpus, CharWNN outperforms the state-of-the-art system by 7.9 points in the F1-score for the total scenario (ten NE classes), and by 7.2 points in the F1 for the selective scenario (five NE classes).",
"The goal of this paper is to use multi-task learning to efficiently scale slot filling models for natural language understanding to handle multiple target tasks or domains. The key to scalability is reducing the amount of training data needed to learn a model for a new task. The proposed multi-task model delivers better performance with less data by leveraging patterns that it learns from the other tasks. The approach supports an open vocabulary, which allows the models to generalize to unseen words, which is particularly important when very little training data is used. A newly collected crowd-sourced data set, covering four different domains, is used to demonstrate the effectiveness of the domain adaptation and open vocabulary techniques."
]
} |
1608.03037 | 2515445237 | Maximum likelihood estimation (MLE) is a well-known estimation method used in many robotic and computer vision applications. Under Gaussian assumption, the MLE converts to a nonlinear least squares (NLS) problem. Efficient solutions to NLS exist and they are based on iteratively solving sparse linear systems until convergence. In general, the existing solutions provide only an estimation of the mean state vector, the resulting covariance being computationally too expensive to recover. Nevertheless, in many simultaneous localisation and mapping (SLAM) applications, knowing only the mean vector is not enough. Data association, obtaining reduced state representations, active decisions and next best view are only a few of the applications that require fast state covariance recovery. Furthermore, computer vision and robotic applications are in general performed online. In this case, the state is updated and recomputed every step and its size is continuously growing, therefore, the estimation process may become highly computationally demanding. This paper introduces a general framework for incremental MLE called SLAM++, which fully benefits from the incremental nature of the online applications, and provides efficient estimation of both the mean and the covariance of the estimate. Based on that, we propose a strategy for maintaining a sparse and scalable state representation for large scale mapping, which uses information theory measures to integrate only informative and non-redundant contributions to the state representation. SLAM++ differs from existing implementations by performing all the matrix operations by blocks. This led to extremely fast matrix manipulation and arithmetic operations. Even though this paper tests SLAM++ efficiency on SLAM problems, its applicability remains general. | Several approximations for marginal covariance recovery have been proposed in the literature. 
@cite_5 suggested using conditional covariances, which are inversions of sub-blocks of @math called the Markov blankets. The result is an overconfident approximation of the marginal covariances. Online, conservative approximations were proposed in @cite_33, where at every step, the covariances corresponding to the new variables are computed by solving the augmented system with a set of basis vectors. The recovered covariance column is passed to a Kalman filter bank, which updates the rest of the covariance matrix. The filtering is reported to run in constant time, and the recovery speed is bounded by the cost of the linear solve. In the context of MLE, belief propagation over a spanning tree or loopy intersection propagation can be used to obtain conservative approximations suitable for data association @cite_18. | {
"cite_N": [
"@cite_5",
"@cite_18",
"@cite_33"
],
"mid": [
"1977464512",
"2061372028",
""
],
"abstract": [
"In this paper we describe a scalable algorithm for the simultaneous mapping and localization (SLAM) problem. SLAM is the problem of acquiring a map of a static environment with a mobile robot. The vast majority of SLAM algorithms are based on the extended Kalman filter (EKF). In this paper we advocate an algorithm that relies on the dual of the EKF, the extended information filter (EIF). We show that when represented in the information form, map posteriors are dominated by a small number of links that tie together nearby features in the map. This insight is developed into a sparse variant of the EIF, called the sparse extended information filter (SEIF). SEIFs represent maps by graphical networks of features that are locally interconnected, where links represent relative information between pairs of nearby features, as well as information about the robot’s pose relative to the map. We show that all essential update equations in SEIFs can be executed in constant time, irrespective of the size of the map. We...",
"Smoothing and optimization approaches are an effective means for solving the simultaneous localization and mapping (SLAM) problem. Most of the existing techniques focus mainly on determining the most likely map and leave open how to efficiently compute the marginal covariances. These marginal covariances, however, are essential for solving the data association problem. In this paper we present a novel algorithm for computing an approximation of the marginal. In experiments we demonstrate that our approach outperforms two commonly used techniques, namely loopy belief propagation and belief propagation on a spanning tree. Compared to these approaches, our algorithm yields better estimates while preserving the same time complexity.",
""
]
} |
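The contrast drawn in the row above between exact marginal covariances and the overconfident Markov-blanket (conditional) approximation can be checked numerically: the exact marginal is a sub-block of the full inverse of the information matrix, while the conditional approximation inverts only the corresponding sub-block of the information matrix itself. A small illustration (the information matrix values are synthetic):

```python
import numpy as np

# Synthetic 4x4 information matrix Lambda (inverse covariance) over 4 scalar states.
Lambda = np.array([[4.0, 1.0, 0.0, 0.5],
                   [1.0, 3.0, 1.0, 0.0],
                   [0.0, 1.0, 5.0, 1.0],
                   [0.5, 0.0, 1.0, 2.0]])

# Exact marginal covariance of states {0, 1}: invert the whole matrix, read the sub-block.
Sigma = np.linalg.inv(Lambda)
marginal = Sigma[:2, :2]

# Markov-blanket (conditional) approximation: invert only the {0, 1} sub-block of
# Lambda, i.e. treat the remaining states as known.
conditional = np.linalg.inv(Lambda[:2, :2])

# Conditioning can only remove uncertainty, so the approximation is overconfident:
# its variances never exceed the exact marginal variances.
assert np.all(np.diag(conditional) <= np.diag(marginal) + 1e-12)
```

The inequality holds for any positive-definite information matrix, which is why the conditional shortcut is cheap but systematically optimistic.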
1608.03037 | 2515445237 | Maximum likelihood estimation (MLE) is a well-known estimation method used in many robotic and computer vision applications. Under Gaussian assumption, the MLE converts to a nonlinear least squares (NLS) problem. Efficient solutions to NLS exist and they are based on iteratively solving sparse linear systems until convergence. In general, the existing solutions provide only an estimation of the mean state vector, the resulting covariance being computationally too expensive to recover. Nevertheless, in many simultaneous localisation and mapping (SLAM) applications, knowing only the mean vector is not enough. Data association, obtaining reduced state representations, active decisions and next best view are only a few of the applications that require fast state covariance recovery. Furthermore, computer vision and robotic applications are in general performed online. In this case, the state is updated and recomputed every step and its size is continuously growing, therefore, the estimation process may become highly computationally demanding. This paper introduces a general framework for incremental MLE called SLAM++, which fully benefits from the incremental nature of the online applications, and provides efficient estimation of both the mean and the covariance of the estimate. Based on that, we propose a strategy for maintaining a sparse and scalable state representation for large scale mapping, which uses information theory measures to integrate only informative and non-redundant contributions to the state representation. SLAM++ differs from existing implementations by performing all the matrix operations by blocks. This led to extremely fast matrix manipulation and arithmetic operations. Even though this paper tests SLAM++ efficiency on SLAM problems, its applicability remains general. 
| In their paper, @cite_34 proposed a covariance factorization for calculating linearized updates to the covariance matrix over an arbitrary number of planning decision steps in a partially observable Markov decision process (POMDP). The method uses matrix inversion lemmas to efficiently calculate the updates. The idea of using factorizations to update a matrix inverse is not new, though. A discussion of applications of the Sherman-Morrison and Woodbury formulas is presented in @cite_16. Specifically, it states the usefulness of these formulas for updating a matrix inverse after a small-rank modification, where the rank must be kept low enough in order for the update to be faster than simply recalculating the inverse. In our latest work @cite_2, we proposed an algorithm which confirms this conclusion, and also proves its usefulness in online SLAM applications. | {
"cite_N": [
"@cite_34",
"@cite_16",
"@cite_2"
],
"mid": [
"",
"2139182243",
"1551362180"
],
"abstract": [
"",
"The Sherman–Morrison–Woodbury formulas relate the inverse of a matrix after a small-rank perturbation to the inverse of the original matrix. The history of these formulas is presented and various applications to statistics, networks, structural analysis, asymptotic analysis, optimization, and partial differential equations are discussed. The Sherman-Morrison-Woodbury formulas express the inverse of a matrix after a small-rank perturbation in terms of the inverse of the original matrix. This paper surveys the history of these formulas and we examine some applications where these formulas are helpful.",
"Many estimation problems in robotics rely on efficiently solving nonlinear least squares (NLS). For example, it is well known that the simultaneous localisation and mapping (SLAM) problem can be formulated as a maximum likelihood estimation (MLE) and solved using NLS, yielding a mean state vector. However, for many applications recovering only the mean vector is not enough. Data association, active decisions, next best view, are only few of the applications that require fast state covariance recovery. The problem is not simple since, in general, the covariance is obtained by inverting the system matrix and the result is dense. The main contribution of this paper is a novel algorithm for fast incremental covariance update, complemented by a highly efficient implementation of the covariance recovery. This combination yields to two orders of magnitude reduction in computation time, compared to the other state of the art solutions. The proposed algorithm is applicable to any NLS solver implementation, and does not depend on incremental strategies described in our previous papers, which are not a subject of this paper."
]
} |
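The Sherman-Morrison-Woodbury identity discussed in the row above recovers the inverse of a matrix after a rank-k modification from the previously computed inverse, inverting only a k x k system, so the update costs O(n^2 k) instead of a fresh O(n^3) inversion. A quick numerical check (all matrices synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 2

# Base matrix A with a precomputed inverse (well conditioned by construction).
A = 5.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))
A_inv = np.linalg.inv(A)

# Small-rank modification A + U C V (rank k).
U = 0.5 * rng.standard_normal((n, k))
C = np.eye(k)
V = 0.5 * rng.standard_normal((k, n))

# Woodbury identity: (A + U C V)^-1 = A^-1 - A^-1 U (C^-1 + V A^-1 U)^-1 V A^-1.
# Only the k x k matrix (C^-1 + V A^-1 U) is inverted from scratch.
inner = np.linalg.inv(np.linalg.inv(C) + V @ A_inv @ U)
updated_inv = A_inv - A_inv @ U @ inner @ V @ A_inv

# Agreement with a direct O(n^3) inversion of the modified matrix.
assert np.allclose(updated_inv, np.linalg.inv(A + U @ C @ V))
```

As the surveyed paper notes, the update only pays off while k stays small relative to n; for large k the direct inversion wins.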
1608.03037 | 2515445237 | Maximum likelihood estimation (MLE) is a well-known estimation method used in many robotic and computer vision applications. Under Gaussian assumption, the MLE converts to a nonlinear least squares (NLS) problem. Efficient solutions to NLS exist and they are based on iteratively solving sparse linear systems until convergence. In general, the existing solutions provide only an estimation of the mean state vector, the resulting covariance being computationally too expensive to recover. Nevertheless, in many simultaneous localisation and mapping (SLAM) applications, knowing only the mean vector is not enough. Data association, obtaining reduced state representations, active decisions and next best view are only a few of the applications that require fast state covariance recovery. Furthermore, computer vision and robotic applications are in general performed online. In this case, the state is updated and recomputed every step and its size is continuously growing, therefore, the estimation process may become highly computationally demanding. This paper introduces a general framework for incremental MLE called SLAM++, which fully benefits from the incremental nature of the online applications, and provides efficient estimation of both the mean and the covariance of the estimate. Based on that, we propose a strategy for maintaining a sparse and scalable state representation for large scale mapping, which uses information theory measures to integrate only informative and non-redundant contributions to the state representation. SLAM++ differs from existing implementations by performing all the matrix operations by blocks. This led to extremely fast matrix manipulation and arithmetic operations. Even though this paper tests SLAM++ efficiency on SLAM problems, its applicability remains general. 
| This paper shows how, based on the efficient covariance recovery strategy, a principled method that incrementally maintains a compact representation of a SLAM problem can be obtained. The system only keeps non-redundant poses and informative links. The result is a set of robot poses well distributed in the information space. This translates into highly scalable solutions and great speed-ups for large scale estimation problems, while the accuracy is only marginally affected. The idea was previously introduced in a filtering framework @cite_15 to maintain a compact representation of a 2D pose SLAM, and this paper extends it to large scale MLE estimation for 3D SLAM, addressing all the corresponding challenges. The method can be easily extended beyond pose SLAM, to problems with different types of measurements, for example trinary factors present in structure-less BA @cite_9, or different types of variables, for example landmark SLAM, with the constraint that variables can be eliminated as long as measurement composition is possible. | {
"cite_N": [
"@cite_9",
"@cite_15"
],
"mid": [
"2036396246",
"2096052462"
],
"abstract": [
"Presented at the 23rd British Machine Vision Conference (BMVC 2012), 3-7 September 2012, Guildford, Surrey, UK.",
"Pose SLAM is the variant of simultaneous localization and map building (SLAM) in which only the robot trajectory is estimated and where landmarks are only used to produce relative constraints between robot poses. To reduce the computational cost of the information filter form of Pose SLAM and, at the same time, to delay inconsistency as much as possible, we introduce an approach that takes into account only highly informative loop-closure links and nonredundant poses. This approach includes constant time procedures to compute the distance between poses, the expected information gain for each potential link, and the exact marginal covariances while moving in open loop, as well as a procedure to recover the state after a loop closure that, in practical situations, scales linearly in terms of both time and memory. Using these procedures, the robot operates most of the time in open loop, and the cost of the loop closure is amortized over long trajectories. This way, the computational bottleneck shifts to data association, which is the search over the set of previously visited poses to determine good candidates for sensor registration. To speed up data association, we introduce a method to search for neighboring poses whose complexity ranges from logarithmic in the usual case to linear in degenerate situations. The method is based on organizing the pose information in a balanced tree whose internal levels are defined using interval arithmetic. The proposed Pose-SLAM approach is validated through simulations, real mapping sessions, and experiments using standard SLAM data sets."
]
} |
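The informative-link selection described in the row above is commonly scored with the expected information gain of a candidate loop closure, I = 1/2 log(|S| / |Sigma_y|), where S is the innovation covariance and Sigma_y the measurement noise; links below a threshold are discarded as redundant. A toy sketch (measurement model, matrices, and threshold are all illustrative, and natural log is used):

```python
import numpy as np

def link_information_gain(marginal_cov, noise_cov):
    """Expected information gain of a candidate loop-closure link:
    I = 0.5 * log(det(S) / det(Sigma_y)), with innovation covariance
    S = marginal_cov + Sigma_y (an identity measurement model is assumed)."""
    S = marginal_cov + noise_cov
    _, logdet_S = np.linalg.slogdet(S)
    _, logdet_y = np.linalg.slogdet(noise_cov)
    return 0.5 * (logdet_S - logdet_y)

noise = np.diag([0.01, 0.01, 0.005])      # measurement noise Sigma_y (x, y, theta)
uncertain = np.diag([0.5, 0.5, 0.2])      # poorly constrained relative pose
redundant = np.diag([1e-4, 1e-4, 1e-4])   # pair that is already well constrained

gain_threshold = 1.0                      # illustrative acceptance threshold
assert link_information_gain(uncertain, noise) > gain_threshold   # keep this link
assert link_information_gain(redundant, noise) < gain_threshold   # discard this one
```

When the relative pose is already well known, S barely exceeds Sigma_y and the gain collapses toward zero, which is exactly why redundant closures can be skipped without hurting accuracy much.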
1608.02904 | 2507541756 | We describe TweeTIME, a temporal tagger for recognizing and normalizing time expressions in Twitter. Most previous work in social media analysis has to rely on temporal resolvers that are designed for well-edited text, and therefore suffer from reduced performance due to domain mismatch. We present a minimally supervised method that learns from large quantities of unlabeled data and requires no hand-engineered rules or hand-annotated training corpora. TweeTIME achieves 0.68 F1 score on the end-to-end task of resolving date expressions, outperforming a broad range of state-of-the-art systems. | Temporal resolvers primarily utilize either rule-based or probabilistic approaches. Notable rule-based systems such as TempEx @cite_17, SUTime @cite_18 and HeidelTime @cite_14 provide particularly competitive performance compared to state-of-the-art machine learning methods. Probabilistic approaches use supervised classifiers trained on in-domain annotated data @cite_38 @cite_6 @cite_20, or hybrid approaches that combine classifiers with hand-engineered rules @cite_40 @cite_15. UWTime @cite_15 is one of the most recent and competitive systems and uses Combinatory Categorial Grammar (CCG). | {
"cite_N": [
"@cite_38",
"@cite_18",
"@cite_14",
"@cite_6",
"@cite_40",
"@cite_15",
"@cite_20",
"@cite_17"
],
"mid": [
"2158390956",
"2251758222",
"2058070641",
"",
"1619424147",
"2147923392",
"2177402841",
"1978672522"
],
"abstract": [
"In this paper we describe a system for the recognition and normalization of temporal expressions (Task 13: TempEval-2, Task A). The recognition task is approached as a classification problem of sentence constituents and the normalization is implemented in a rule-based manner. One of the system features is extending positive annotations in the corpus by semantically similar words automatically obtained from a large unannotated textual corpus. The best results obtained by the system are 0.85 and 0.84 for precision and recall respectively for recognition of temporal expressions; the accuracy values of 0.91 and 0.55 were obtained for the feature values type and val respectively.",
"We describe SUTIME, a temporal tagger for recognizing and normalizing temporal expressions in English text. SUTIME is available as part of the Stanford CoreNLP pipeline and can be used to annotate documents with temporal information. It is a deterministic rule-based system designed for extensibility. Testing on the TempEval-2 evaluation corpus shows that this system outperforms state-of-the-art techniques.",
"Extraction and normalization of temporal expressions from documents are important steps towards deep text understanding and a prerequisite for many NLP tasks such as information extraction, question answering, and document summarization. There are different ways to express (the same) temporal information in documents. However, after identifying temporal expressions, they can be normalized according to some standard format. This allows the usage of temporal information in a term- and language-independent way. In this paper, we describe the challenges of temporal tagging in different domains, give an overview of existing annotated corpora, and survey existing approaches for temporal tagging. Finally, we present our publicly available temporal tagger HeidelTime, which is easily extensible to further languages due to its strict separation of source code and language resources like patterns and rules. We present a broad evaluation on multiple languages and domains on existing corpora as well as on a newly created corpus for a language domain combination for which no annotated corpus has been available so far.",
"",
"Extracting temporal information from raw text is fundamental for deep language understanding, and key to many applications like question answering, information extraction, and document summarization. In this paper, we describe two systems we submitted to the TempEval 2 challenge, for extracting temporal information from raw text. The systems use a combination of deep semantic parsing, Markov Logic Networks and Conditional Random Field classifiers. Our two submitted systems, TRIPS and TRIOS, approached all tasks and outperformed all teams in two tasks. Furthermore, TRIOS mostly had second-best performances in other tasks. TRIOS also outperformed the other teams that attempted all the tasks. Our system is notable in that for tasks C -- F, they operated on raw text while all other systems used tagged events and temporal expressions in the corpus as input.",
"We present an approach for learning context-dependent semantic parsers to identify and interpret time expressions. We use a Combinatory Categorial Grammar to construct compositional meaning representations, while considering contextual cues, such as the document creation time and the tense of the governing verb, to compute the final time values. Experiments on benchmark datasets show that our approach outperforms previous state-of-the-art systems, with error reductions of 13% to 21% in end-to-end performance.",
"This paper describes a temporal expression identification and normalization system, ManTIME, developed for the TempEval-3 challenge. The identification phase combines the use of conditional random fields along with a post-processing identification pipeline, whereas the normalization phase is carried out using NorMA, an open-source rule-based temporal normalizer. We investigate the performance variation with respect to different feature types. Specifically, we show that the use of WordNet-based features in the identification task negatively affects the overall performance, and that there is no statistically significant difference in using gazetteers, shallow parsing and propositional noun phrases labels on top of the morphological features. On the test data, the best run achieved 0.95 (P), 0.85 (R) and 0.90 (F1) in the identification phase. Normalization accuracies are 0.84 (type attribute) and 0.77 (value attribute). Surprisingly, the use of the silver data (alone or in addition to the gold annotated ones) does not improve the performance.",
"We introduce an annotation scheme for temporal expressions, and describe a method for resolving temporal expressions in print and broadcast news. The system, which is based on both hand-crafted and machine-learnt rules, achieves an 83.2 accuracy (F-measure) against hand-annotated data. Some initial steps towards tagging event chronologies are also described."
]
} |
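Rule-based taggers in the TempEx/SUTime/HeidelTime family discussed in the row above resolve relative expressions against the document creation time and emit normalized TIMEX3-style values. A toy normalizer in that spirit (the rule table and function names are illustrative, not any system's actual API):

```python
from datetime import date, timedelta

# Minimal rule table mapping relative day expressions to offsets from the
# document creation time (DCT).
RULES = {"today": 0, "tomorrow": 1, "yesterday": -1}

def normalize(expression, doc_creation_time):
    """Resolve a relative date expression to a TIMEX3-style ISO value."""
    expression = expression.lower().strip()
    if expression in RULES:
        return (doc_creation_time + timedelta(days=RULES[expression])).isoformat()
    if expression == "last week":
        # Anchor on the Monday of the preceding ISO week.
        monday = doc_creation_time - timedelta(days=doc_creation_time.weekday() + 7)
        year, week, _ = monday.isocalendar()
        return f"{year}-W{week:02d}"
    return None  # expression not covered by the rule table

dct = date(2016, 8, 10)  # a Wednesday
assert normalize("yesterday", dct) == "2016-08-09"
assert normalize("Tomorrow", dct) == "2016-08-11"
assert normalize("last week", dct) == "2016-W31"
```

Real systems layer hundreds of such patterns plus tense and context heuristics on top of this skeleton, which is exactly the hand engineering that TweeTIME's minimally supervised approach tries to avoid.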
1608.02971 | 2494907211 | The problem of Learning from Demonstration is targeted at learning to perform tasks based on observed examples. One approach to Learning from Demonstration is Inverse Reinforcement Learning, in which actions are observed to infer rewards. This work combines a feature based state evaluation approach to Inverse Reinforcement Learning with neuroevolution, a paradigm for modifying neural networks based on their performance on a given task. Neural networks are used to learn from a demonstrated expert policy and are evolved to generate a policy similar to the demonstration. The algorithm is discussed and evaluated against competitive feature-based Inverse Reinforcement Learning approaches. At the cost of execution time, neural networks allow for non-linear combinations of features in state evaluations. These valuations may correspond to state value or state reward. This results in better correspondence to observed examples as opposed to using linear combinations. This work also extends existing work on Bayesian Non-Parametric Feature Construction for Inverse Reinforcement Learning by using non-linear combinations of intermediate data to improve performance. The algorithm is observed to be specifically suitable for a linearly solvable non-deterministic Markov Decision Processes in which multiple rewards are sparsely scattered in state space. A conclusive performance hierarchy between evaluated algorithms is presented. | Feature construction for Inverse Reinforcement Learning (FIRL) uses regression trees and quadratic programming @cite_1. Optimization then involves selection of a sub-tree without significant loss in regression fitness. In @cite_22, FIRL is acknowledged to have limited capabilities in representing a reward function that uses a non-linear combination of features.
Another such technique, GPIRL, is based on Gaussian Process (GP) regression @cite_22 and models the reward function, through a GP kernel, as a non-linear function of state features. | {
"cite_N": [
"@cite_1",
"@cite_22"
],
"mid": [
"2162009473",
"2117675763"
],
"abstract": [
"The goal of inverse reinforcement learning is to find a reward function for a Markov decision process, given example traces from its optimal policy. Current IRL techniques generally rely on user-supplied features that form a concise basis for the reward. We present an algorithm that instead constructs reward features from a large collection of component features, by building logical conjunctions of those component features that are relevant to the example policy. Given example traces, the algorithm returns a reward function as well as the constructed features. The reward function can be used to recover a full, deterministic, stationary policy, and the features can be used to transplant the reward function into any novel environment on which the component features are well defined.",
"We present a probabilistic algorithm for nonlinear inverse reinforcement learning. The goal of inverse reinforcement learning is to learn the reward function in a Markov decision process from expert demonstrations. While most prior inverse reinforcement learning algorithms represent the reward as a linear combination of a set of features, we use Gaussian processes to learn the reward as a nonlinear function, while also determining the relevance of each feature to the expert's policy. Our probabilistic algorithm allows complex behaviors to be captured from suboptimal stochastic demonstrations, while automatically balancing the simplicity of the learned reward structure against its consistency with the observed actions."
]
} |
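GPIRL's central idea in the row above, representing the reward as a GP posterior over state features rather than a linear feature combination, can be illustrated with a plain squared-exponential kernel. A minimal numpy sketch (the 1-D features, length scale, and "true" nonlinear reward are all synthetic):

```python
import numpy as np

def rbf_kernel(A, B, length_scale):
    """Squared-exponential kernel between feature matrices A (n, d) and B (m, d)."""
    sq_dist = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq_dist / length_scale**2)

# Toy 1-D state features and a nonlinear "true" reward the GP should recover.
X_train = np.linspace(0.0, 1.0, 20)[:, None]
r_train = np.sin(2 * np.pi * X_train[:, 0])

# GP posterior mean at query features (small jitter for numerical stability).
K = rbf_kernel(X_train, X_train, 0.2) + 1e-6 * np.eye(len(X_train))
alpha = np.linalg.solve(K, r_train)

X_query = np.array([[0.25], [0.75]])
r_pred = rbf_kernel(X_query, X_train, 0.2) @ alpha

# At these inputs the true reward is sin(pi/2) = 1 and sin(3*pi/2) = -1;
# the GP mean recovers both well away from training points.
assert abs(r_pred[0] - 1.0) < 0.05 and abs(r_pred[1] + 1.0) < 0.05
```

No linear weighting of the raw feature could fit this reward, which is the representational gap GPIRL (and the neuroevolution approach of this paper) closes.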
1608.02971 | 2494907211 | The problem of Learning from Demonstration is targeted at learning to perform tasks based on observed examples. One approach to Learning from Demonstration is Inverse Reinforcement Learning, in which actions are observed to infer rewards. This work combines a feature based state evaluation approach to Inverse Reinforcement Learning with neuroevolution, a paradigm for modifying neural networks based on their performance on a given task. Neural networks are used to learn from a demonstrated expert policy and are evolved to generate a policy similar to the demonstration. The algorithm is discussed and evaluated against competitive feature-based Inverse Reinforcement Learning approaches. At the cost of execution time, neural networks allow for non-linear combinations of features in state evaluations. These valuations may correspond to state value or state reward. This results in better correspondence to observed examples as opposed to using linear combinations. This work also extends existing work on Bayesian Non-Parametric Feature Construction for Inverse Reinforcement Learning by using non-linear combinations of intermediate data to improve performance. The algorithm is observed to be specifically suitable for a linearly solvable non-deterministic Markov Decision Processes in which multiple rewards are sparsely scattered in state space. A conclusive performance hierarchy between evaluated algorithms is presented. | Recent techniques have also incorporated a non-parametric Bayesian framework to improve learning. This means that the number of parameters used by the models increases based on the training data. Composite features in @cite_33 are defined as logical conjunctions of state features. The IRL model extends @cite_27 by defining a prior on these composite features. In @cite_48 and @cite_10 , the reward function is additionally assumed to be generated by a composition of sub-tasks to be performed in the MDP space. 
This algorithm targets detection of sub-goals but does not estimate the final policy over all states in state space (it only targets states observed in the demonstration). | {
"cite_N": [
"@cite_48",
"@cite_27",
"@cite_10",
"@cite_33"
],
"mid": [
"",
"1591675293",
"2105156548",
"2177382477"
],
"abstract": [
"",
"Inverse Reinforcement Learning (IRL) is the problem of learning the reward function underlying a Markov Decision Process given the dynamics of the system and the behaviour of an expert. IRL is motivated by situations where knowledge of the rewards is a goal by itself (as in preference elicitation) and by the task of apprenticeship learning (learning policies from an expert). In this paper we show how to combine prior knowledge and evidence from the expert's actions to derive a probability distribution over the space of reward functions. We present efficient algorithms that find solutions for the reward learning and apprenticeship learning tasks that generalize well over these distributions. Experimental results show strong improvement for our methods over previous heuristic-based approaches.",
"Learning from demonstration provides an attractive solution to the problem of teaching autonomous systems how to perform complex tasks. Reward learning from demonstration is a promising method of inferring a rich and transferable representation of the demonstrator's intents, but current algorithms suffer from intractability and inefficiency in large domains due to the assumption that the demonstrator is maximizing a single reward function throughout the whole task. This paper takes a different perspective by assuming that the reward function behind an unsegmented demonstration is actually composed of several distinct subtasks chained together. Leveraging this assumption, a Bayesian nonparametric reward-learning framework is presented that infers multiple subgoals and reward functions within a single unsegmented demonstration. The new framework is developed for discrete state spaces and also general continuous demonstration domains using Gaussian process reward representations. The algorithm is shown to have both performance and computational advantages over existing inverse reinforcement learning methods. Experimental results are given in both cases, demonstrating the ability to learn challenging maneuvers from demonstration on a quadrotor and a remote-controlled car.",
"Most of the algorithms for inverse reinforcement learning (IRL) assume that the reward function is a linear function of the pre-defined state and action features. However, it is often difficult to manually specify the set of features that can make the true reward function representable as a linear function. We propose a Bayesian nonparametric approach to identifying useful composite features for learning the reward function. The composite features are assumed to be the logical conjunctions of the pre-defined atomic features so that we can represent the reward function as a linear function of the composite features. We empirically show that our approach is able to learn composite features that capture important aspects of the reward function on synthetic domains, and predict taxi drivers' behaviour with high accuracy on a real GPS trace dataset."
]
} |
1608.02971 | 2494907211 | The problem of Learning from Demonstration is targeted at learning to perform tasks based on observed examples. One approach to Learning from Demonstration is Inverse Reinforcement Learning, in which actions are observed to infer rewards. This work combines a feature based state evaluation approach to Inverse Reinforcement Learning with neuroevolution, a paradigm for modifying neural networks based on their performance on a given task. Neural networks are used to learn from a demonstrated expert policy and are evolved to generate a policy similar to the demonstration. The algorithm is discussed and evaluated against competitive feature-based Inverse Reinforcement Learning approaches. At the cost of execution time, neural networks allow for non-linear combinations of features in state evaluations. These valuations may correspond to state value or state reward. This results in better correspondence to observed examples as opposed to using linear combinations. This work also extends existing work on Bayesian Non-Parametric Feature Construction for Inverse Reinforcement Learning by using non-linear combinations of intermediate data to improve performance. The algorithm is observed to be specifically suitable for a linearly solvable non-deterministic Markov Decision Processes in which multiple rewards are sparsely scattered in state space. A conclusive performance hierarchy between evaluated algorithms is presented. | Bayesian Non-Parametric FIRL (BNP-FIRL) @cite_33 uses the Indian Buffet Process (IBP) @cite_11 to define priors over composite state features. The IBP defines a distribution over infinitely large (in number of columns) binary matrices. It is used to determine the number of composite features and the composite features themselves. Features and corresponding weights are recorded over several iterations of the algorithm. 
These values are then either averaged (a mean-based result) or used to estimate Maximum A-Posteriori (MAP) @cite_13 state reward values (a MAP-based result). While the two results provide competitive performance when estimating reward functions, the mean-based result performs significantly better than the MAP-based result when the focus is on action matching rather than reward matching. For this reason, MAP-based results are excluded from comparison. More importantly, this emphasizes the importance of how the state reward values computed per iteration are finally combined. A non-linear combination of this data intuitively provides better performance than a linear combination (as in the case of the mean). Experimental results in @cite_33 indicate the superiority of BNP-FIRL over GPIRL in a non-deterministic MDP (one in which a certain percentage of actions are random), such as the ones used to evaluate our work. | {
"cite_N": [
"@cite_13",
"@cite_33",
"@cite_11"
],
"mid": [
"2163037893",
"2177382477",
"2128002512"
],
"abstract": [
"The difficulty in inverse reinforcement learning (IRL) arises in choosing the best reward function since there are typically an infinite number of reward functions that yield the given behaviour data as optimal. Using a Bayesian framework, we address this challenge by using the maximum a posteriori (MAP) estimation for the reward function, and show that most of the previous IRL algorithms can be modeled into our framework. We also present a gradient method for the MAP estimation based on the (sub)differentiability of the posterior distribution. We show the effectiveness of our approach by comparing the performance of the proposed method to those of the previous algorithms.",
"Most of the algorithms for inverse reinforcement learning (IRL) assume that the reward function is a linear function of the pre-defined state and action features. However, it is often difficult to manually specify the set of features that can make the true reward function representable as a linear function. We propose a Bayesian nonparametric approach to identifying useful composite features for learning the reward function. The composite features are assumed to be the logical conjunctions of the pre-defined atomic features so that we can represent the reward function as a linear function of the composite features. We empirically show that our approach is able to learn composite features that capture important aspects of the reward function on synthetic domains, and predict taxi drivers' behaviour with high accuracy on a real GPS trace dataset.",
"We define a probability distribution over equivalence classes of binary matrices with a finite number of rows and an unbounded number of columns. This distribution is suitable for use as a prior in probabilistic models that represent objects using a potentially infinite array of features. We identify a simple generative process that results in the same distribution over equivalence classes, which we call the Indian buffet process. We illustrate the use of this distribution as a prior in an infinite latent feature model, deriving a Markov chain Monte Carlo algorithm for inference in this model and applying the algorithm to an image dataset."
]
} |
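The Indian Buffet Process prior described in the row above can be illustrated by its generative "restaurant" story: object i takes an existing feature k with probability m_k / i and then samples Poisson(alpha / i) new features. The sketch below is a hypothetical illustration of that sampling process only, not the BNP-FIRL algorithm itself.

```python
import numpy as np

def sample_ibp(num_objects, alpha, seed=0):
    """Sample a binary feature-assignment matrix Z from the Indian Buffet
    Process. Row i is an object; object i takes existing feature k with
    probability m_k / i (m_k = times feature k was taken so far), then
    samples Poisson(alpha / i) brand-new features. The number of columns
    (features) is not fixed in advance, which is what makes the IBP useful
    as a prior over composite-feature sets of unknown size."""
    rng = np.random.default_rng(seed)
    counts = []  # m_k for each feature seen so far
    rows = []
    for i in range(1, num_objects + 1):
        # Revisit existing features ("dishes") with probability m_k / i.
        row = [1 if rng.random() < m / i else 0 for m in counts]
        for k, taken in enumerate(row):
            counts[k] += taken
        # Try a Poisson(alpha / i) number of new features.
        new = rng.poisson(alpha / i)
        row += [1] * new
        counts += [1] * new
        rows.append(row)
    # Pad earlier rows (which saw fewer features) to a rectangular matrix.
    Z = np.zeros((num_objects, len(counts)), dtype=int)
    for i, row in enumerate(rows):
        Z[i, : len(row)] = row
    return Z
```

Because each new column is created by the object that first takes it, every column of the sampled matrix contains at least one 1.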
1608.02971 | 2494907211 | The problem of Learning from Demonstration is targeted at learning to perform tasks based on observed examples. One approach to Learning from Demonstration is Inverse Reinforcement Learning, in which actions are observed to infer rewards. This work combines a feature based state evaluation approach to Inverse Reinforcement Learning with neuroevolution, a paradigm for modifying neural networks based on their performance on a given task. Neural networks are used to learn from a demonstrated expert policy and are evolved to generate a policy similar to the demonstration. The algorithm is discussed and evaluated against competitive feature-based Inverse Reinforcement Learning approaches. At the cost of execution time, neural networks allow for non-linear combinations of features in state evaluations. These valuations may correspond to state value or state reward. This results in better correspondence to observed examples as opposed to using linear combinations. This work also extends existing work on Bayesian Non-Parametric Feature Construction for Inverse Reinforcement Learning by using non-linear combinations of intermediate data to improve performance. The algorithm is observed to be specifically suitable for a linearly solvable non-deterministic Markov Decision Processes in which multiple rewards are sparsely scattered in state space. A conclusive performance hierarchy between evaluated algorithms is presented. | In the expectation-maximization based approach of @cite_26 , the reward function is modeled as a weighted sum of functions. The model's policy parameters are defined as the probability distribution over actions for each state in the state space under the optimal policy. The algorithm then attempts to simultaneously estimate the weights and these parameters. The algorithm is not compared with @cite_22 , but the two algorithms have been individually compared with standard Maximum Entropy IRL.
Visual comparison of the performance of these two algorithms indicates that a GP-kernel-based approach is competitive with, if not better than, an expectation-maximization-based approach. | {
"cite_N": [
"@cite_26",
"@cite_22"
],
"mid": [
"1574859627",
"2117675763"
],
"abstract": [
"Reinforcement Learning (RL) is an attractive tool for learning optimal controllers in the sense of a given reward function. In conventional RL, usually an expert is required to design the reward function as the efficiency of RL strongly depends on the latter. An alternative has been presented by the concept of Inverse Reinforcement Learning (IRL), where the reward function is estimated from observed data. In this work, we propose a novel approach for IRL based on a generative probabilistic model of RL. We derive an Expectation Maximization algorithm that is able to simultaneously estimate the reward and the optimal policy for finite state and action spaces, which can be easily extended for the infinite cases. By means of two toy examples, we show that the proposed algorithm works well even with a low number of observations and converges after only a few iterations.",
"We present a probabilistic algorithm for nonlinear inverse reinforcement learning. The goal of inverse reinforcement learning is to learn the reward function in a Markov decision process from expert demonstrations. While most prior inverse reinforcement learning algorithms represent the reward as a linear combination of a set of features, we use Gaussian processes to learn the reward as a nonlinear function, while also determining the relevance of each feature to the expert's policy. Our probabilistic algorithm allows complex behaviors to be captured from suboptimal stochastic demonstrations, while automatically balancing the simplicity of the learned reward structure against its consistency with the observed actions."
]
} |
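The linear-in-features reward model assumed by the methods above can be sketched in a few lines. This is a hypothetical toy example (the feature functions and weights are invented for illustration), not the cited EM or GP algorithm:

```python
# Toy reward model: reward(s) = sum_k w_k * phi_k(s), the linear combination
# of feature functions that many IRL methods assume. Non-linear methods
# (GPIRL, neural networks) generalize exactly this form.
def make_reward(weights, feature_fns):
    def reward(state):
        return sum(w * phi(state) for w, phi in zip(weights, feature_fns))
    return reward

# Example: integer 1-D states with two hypothetical features.
phi = [
    lambda s: 1.0 if s == 0 else 0.0,  # indicator: at the goal state
    lambda s: float(abs(s)),           # distance from the goal
]
r = make_reward([10.0, -1.0], phi)     # reward at goal, penalty with distance
```

Inverse RL then amounts to recovering the weight vector (here `[10.0, -1.0]`) from demonstrated behavior, which is why a fixed linear form can be limiting when the true reward is not linear in the chosen features.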
1608.02971 | 2494907211 | The problem of Learning from Demonstration is targeted at learning to perform tasks based on observed examples. One approach to Learning from Demonstration is Inverse Reinforcement Learning, in which actions are observed to infer rewards. This work combines a feature based state evaluation approach to Inverse Reinforcement Learning with neuroevolution, a paradigm for modifying neural networks based on their performance on a given task. Neural networks are used to learn from a demonstrated expert policy and are evolved to generate a policy similar to the demonstration. The algorithm is discussed and evaluated against competitive feature-based Inverse Reinforcement Learning approaches. At the cost of execution time, neural networks allow for non-linear combinations of features in state evaluations. These valuations may correspond to state value or state reward. This results in better correspondence to observed examples as opposed to using linear combinations. This work also extends existing work on Bayesian Non-Parametric Feature Construction for Inverse Reinforcement Learning by using non-linear combinations of intermediate data to improve performance. The algorithm is observed to be specifically suitable for a linearly solvable non-deterministic Markov Decision Processes in which multiple rewards are sparsely scattered in state space. A conclusive performance hierarchy between evaluated algorithms is presented. | Since the function to be generated is unknown, so is its complexity. This implies uncertainty about the optimal number of layers and nodes in each layer to be used. Nodes in later layers of a neural network can represent functions that would require several nodes in earlier layers to express. There is therefore a trade-off between the number of hidden layers and the number of nodes in each layer.
Because of this, using a fixed network structure in this scenario may be sub-optimal. Neuroevolution addresses this problem by evolving the network structure itself, using techniques such as Genetic Programming (GP) @cite_9 , Evolutionary Programming (EP) @cite_25 , Simulated Annealing (SA) @cite_36 , Genetic Algorithms (GA) @cite_28 @cite_23 , Evolution Strategies (ES) @cite_19 @cite_17 , Evolutionary Algorithms (EA) @cite_42 and Memetic Algorithms (MA) @cite_37 . | {
"cite_N": [
"@cite_37",
"@cite_36",
"@cite_28",
"@cite_9",
"@cite_42",
"@cite_19",
"@cite_23",
"@cite_25",
"@cite_17"
],
"mid": [
"648129165",
"2134514463",
"2140685838",
"1574913457",
"",
"157687528",
"2148872333",
"2138784882",
"1564754565"
],
"abstract": [
"Handbook of Neuroevolution Through Erlang presents both the theory behind, and the methodology of, developing a neuroevolutionary-based computational intelligence system using Erlang. With a foreword written by Joe Armstrong, this handbook offersan extensivetutorial forcreating a state of the art Topology and Weight Evolving Artificial Neural Network (TWEANN) platform. In a step-by-step format, the reader is guided from a single simulated neuron to a complete system. By following these steps, the reader will be able to use novel technology to build a TWEANN system, which can be applied to Artificial Life simulation, and Forex trading. Because of Erlangs architecture, it perfectly matches that of evolutionary and neurocomptational systems. As a programming language, it is a concurrent, message passing paradigm which allows the developers to make full use of the multi-core & multi-cpu systems. Handbook of Neuroevolution Through Erlang explains how to leverage Erlangs features in the field of machine learning, and the systems real world applications, ranging from algorithmic financial trading to artificial life and robotics.",
"This paper presents a new evolutionary system, i.e., EPNet, for evolving artificial neural networks (ANNs). The evolutionary algorithm used in EPNet is based on Fogel's evolutionary programming (EP). Unlike most previous studies on evolving ANN's, this paper puts its emphasis on evolving ANN's behaviors. Five mutation operators proposed in EPNet reflect such an emphasis on evolving behaviors. Close behavioral links between parents and their offspring are maintained by various mutations, such as partial training and node splitting. EPNet evolves ANN's architectures and connection weights (including biases) simultaneously in order to reduce the noise in fitness evaluation. The parsimony of evolved ANN's is encouraged by preferring node connection deletion to addition. EPNet has been tested on a number of benchmark problems in machine learning and ANNs, such as the parity problem, the medical diagnosis problems, the Australian credit card assessment problem, and the Mackey-Glass time series prediction problem. The experimental results show that EPNet can produce very compact ANNs with good generalization ability in comparison with other algorithms.",
"In most modern video games, character behavior is scripted; no matter how many times the player exploits a weakness, that weakness is never repaired. Yet, if game characters could learn through interacting with the player, behavior could improve as the game is played, keeping it interesting. This paper introduces the real-time Neuroevolution of Augmenting Topologies (rtNEAT) method for evolving increasingly complex artificial neural networks in real time, as a game is being played. The rtNEAT method allows agents to change and improve during the game. In fact, rtNEAT makes possible an entirely new genre of video games in which the player trains a team of agents through a series of customized exercises. To demonstrate this concept, the Neuroevolving Robotic Operatives (NERO) game was built based on rtNEAT. In NERO, the player trains a team of virtual robots for combat against other players' teams. This paper describes results from this novel application of machine learning, and demonstrates that rtNEAT makes possible video games like NERO where agents evolve and adapt in real time. In the future, rtNEAT may allow new kinds of educational and training applications through interactive and adapting games.",
"",
"",
"In this paper we present a novel method, called Evolution- ary Acquisition of Neural Topologies (EANT), of evolving the structure and weights of neural networks. The method introduces an ecien t and compact genetic encoding of a neural network onto a linear genome that enables one to evaluate the network without decoding it. The method explores new structures whenever it is not possible to further exploit the structures found so far. This enables it to nd minimal neural structures for solving a given learning task. We tested the algorithm on a benchmark control task and found it to perform very well.",
"",
"Standard methods for simultaneously inducing the structure and weights of recurrent neural networks limit every task to an assumed class of architectures. Such a simplification is necessary since the interactions between network structure and function are not well understood. Evolutionary computations, which include genetic algorithms and evolutionary programming, are population-based search methods that have shown promise in many similarly complex tasks. This paper argues that genetic algorithms are inappropriate for network acquisition and describes an evolutionary program, called GNARL, that simultaneously acquires both the structure and weights for recurrent networks. GNARL's empirical acquisition method allows for the emergence of complex behaviors and topologies that are potentially excluded by the artificial architectural constraints imposed in standard network induction methods.",
"In this article we describe EANT, Evolutionary Acquisition of Neural Topologies, a method that creates neural networks by evolutionary reinforcement learning. The structure of the networks is developed using mutation operators, starting from a minimal structure. Their parameters are optimised using CMA-ES, Covariance Matrix Adaptation Evolution Strategy, a derandomised variant of evolution strategies. EANT can create neural networks that are very specialised; they achieve a very good performance while being relatively small. This can be seen in experiments where our method competes with a different one, called NEAT, NeuroEvolution of Augmenting Topologies, to create networks that control a robot in a visual servoing scenario."
]
} |
1608.02971 | 2494907211 | The problem of Learning from Demonstration is targeted at learning to perform tasks based on observed examples. One approach to Learning from Demonstration is Inverse Reinforcement Learning, in which actions are observed to infer rewards. This work combines a feature based state evaluation approach to Inverse Reinforcement Learning with neuroevolution, a paradigm for modifying neural networks based on their performance on a given task. Neural networks are used to learn from a demonstrated expert policy and are evolved to generate a policy similar to the demonstration. The algorithm is discussed and evaluated against competitive feature-based Inverse Reinforcement Learning approaches. At the cost of execution time, neural networks allow for non-linear combinations of features in state evaluations. These valuations may correspond to state value or state reward. This results in better correspondence to observed examples as opposed to using linear combinations. This work also extends existing work on Bayesian Non-Parametric Feature Construction for Inverse Reinforcement Learning by using non-linear combinations of intermediate data to improve performance. The algorithm is observed to be specifically suitable for a linearly solvable non-deterministic Markov Decision Processes in which multiple rewards are sparsely scattered in state space. A conclusive performance hierarchy between evaluated algorithms is presented. | NEAT evolves a population of neural networks governed by a GA @cite_5 @cite_32 , and therefore by a fitness function. There currently exist many extensions of NEAT, including rtNEAT @cite_28 (a real-time version of NEAT, which enforces perturbation to avoid stagnation of the fitness level) and FS-NEAT @cite_47 (NEAT tailored to feature selection). | {
"cite_N": [
"@cite_28",
"@cite_5",
"@cite_47",
"@cite_32"
],
"mid": [
"2140685838",
"1576818901",
"2096232175",
"1569757501"
],
"abstract": [
"In most modern video games, character behavior is scripted; no matter how many times the player exploits a weakness, that weakness is never repaired. Yet, if game characters could learn through interacting with the player, behavior could improve as the game is played, keeping it interesting. This paper introduces the real-time Neuroevolution of Augmenting Topologies (rtNEAT) method for evolving increasingly complex artificial neural networks in real time, as a game is being played. The rtNEAT method allows agents to change and improve during the game. In fact, rtNEAT makes possible an entirely new genre of video games in which the player trains a team of agents through a series of customized exercises. To demonstrate this concept, the Neuroevolving Robotic Operatives (NERO) game was built based on rtNEAT. In NERO, the player trains a team of virtual robots for combat against other players' teams. This paper describes results from this novel application of machine learning, and demonstrates that rtNEAT makes possible video games like NERO where agents evolve and adapt in real time. In the future, rtNEAT may allow new kinds of educational and training applications through interactive and adapting games.",
"Background on genetic algorithms, LISP, and genetic programming hierarchical problem-solving introduction to automatically-defined functions - the two-boxes problem problems that straddle the breakeven point for computational effort Boolean parity functions determining the architecture of the program the lawnmower problem the bumblebee problem the increasing benefits of ADFs as problems are scaled up finding an impulse response function artificial ant on the San Mateo trail obstacle-avoiding robot the minesweeper problem automatic discovery of detectors for letter recognition flushes and four-of-a-kinds in a pinochle deck introduction to biochemistry and molecular biology prediction of transmembrane domains in proteins prediction of omega loops in proteins lookahead version of the transmembrane problem evolutionary selection of the architecture of the program evolution of primitives and sufficiency evolutionary selection of terminals evolution of closure simultaneous evolution of architecture, primitive functions, terminals, sufficiency, and closure the role of representation and the lens effect. Appendices: list of special symbols list of special functions list of type fonts default parameters computer implementation annotated bibliography of genetic programming electronic mailing list and public repository.",
"Feature selection is the process of finding the set of inputs to a machine learning algorithm that will yield the best performance. Developing a way to solve this problem automatically would make current machine learning methods much more useful. Previous efforts to automate feature selection rely on expensive meta-learning or are applicable only when labeled training data is available. This paper presents a novel method called FS-NEAT which extends the NEAT neuroevolution method to automatically determine an appropriate set of inputs for the networks it evolves. By learning the network's inputs, topology, and weights simultaneously, FS-NEAT addresses the feature selection problem without relying on meta-learning or labeled data. Initial experiments in an autonomous car racing simulation demonstrate that FS-NEAT can learn better and faster than regular NEAT. In addition, the networks it evolves are smaller and require fewer inputs. Furthermore, FS-NEAT's performance remains robust even as the feature selection task it faces is made increasingly difficult.",
" Four appendices summarize valuable resources available for the reader: Appendix A contains printed and recorded resources, Appendix B suggests web-related resources, Appendix C discusses GP software tools, including Discipulus, the GP software developed by the authors, and Appendix D mentions events most closely related to the field of genetic programming. URLs can be found online at http: mkp.com GPIntro."
]
} |
1608.02971 | 2494907211 | The problem of Learning from Demonstration is targeted at learning to perform tasks based on observed examples. One approach to Learning from Demonstration is Inverse Reinforcement Learning, in which actions are observed to infer rewards. This work combines a feature based state evaluation approach to Inverse Reinforcement Learning with neuroevolution, a paradigm for modifying neural networks based on their performance on a given task. Neural networks are used to learn from a demonstrated expert policy and are evolved to generate a policy similar to the demonstration. The algorithm is discussed and evaluated against competitive feature-based Inverse Reinforcement Learning approaches. At the cost of execution time, neural networks allow for non-linear combinations of features in state evaluations. These valuations may correspond to state value or state reward. This results in better correspondence to observed examples as opposed to using linear combinations. This work also extends existing work on Bayesian Non-Parametric Feature Construction for Inverse Reinforcement Learning by using non-linear combinations of intermediate data to improve performance. The algorithm is observed to be specifically suitable for a linearly solvable non-deterministic Markov Decision Processes in which multiple rewards are sparsely scattered in state space. A conclusive performance hierarchy between evaluated algorithms is presented. | In @cite_45 , demonstration bias is introduced to NEAT-based agent learning. This is done by providing advice in the form of a rule-based grammar. This does not, however, provide an embedded understanding of preference for a state in state space. This is further explored by mapping states to actions in @cite_6 where different methods of demonstration are studied in a video game environment @cite_29 . NEAT-based IRL is also implemented in a multi-agent setting as in @cite_15 where groups of genomes share fitness information. | {
"cite_N": [
"@cite_29",
"@cite_45",
"@cite_6",
"@cite_15"
],
"mid": [
"2147596804",
"2168909409",
"2096645045",
"2099620844"
],
"abstract": [
"In most modern video games, character be- havior is scripted; no matter how many times the player exploits a weakness, that weakness is never repaired. Yet if game characters could learn through interacting with the player, behavior could improve during game- play, keeping it interesting. This paper introduces the real-time NeuroEvolution of Augmenting Topologies (rt- NEAT) method for evolving increasingly complex arti- ficial neural networks in real time, as a game is being played. The rtNEAT method allows agents to change and improve during the game. In fact, rtNEAT makes possible a new genre of video games in which the player teaches a team of agents through a series of customized training exercises. In order to demonstrate this concept in the NeuroEvolving Robotic Operatives (NERO) game, the player trains a team of robots for combat. This paper describes results from this novel application of machine learning, and also demonstrates how multiple agents can evolve and adapt in video games like NERO in real time using rtNEAT. In the future, rtNEAT may allow new kinds of educational and training applications that adapt online as the user gains new skills.",
"Neuroevolution is a promising learning method in tasks with extremely large state and action spaces and hidden states. Recent advances allow neuroevolution to take place in real time, making it possible to e.g. construct video games with adaptive agents. Often some of the desired behaviors for such agents are known, and it would make sense to prescribe them, rather than requiring evolution to discover them. This paper presents a technique for incorporating human-generated advice in real time into neuroevolution. The advice is given in a formal language and converted to a neural network structure through KBANN. The NEAT neuroevolution method then incorporates the structure into existing networks through evolution of network weights and topology. The method is evaluated in the NERO video game, where it makes learning faster even when the tasks change and novel ways of making use of the advice are required. Such ability to incorporate human knowledge into neuroevolution in real time may prove useful in several interactive adaptive domains in the future.",
"Many different methods for combining human expertise with machine learning in general, and evolutionary computation in particular, are possible. Which of these methods work best, and do they outperform human design and machine design alone? In order to answer this question, a human-subject experiment for comparing human-assisted machine learning methods was conducted. Three different approaches, i.e. advice, shaping, and demonstration, were employed to assist a powerful machine learning technique (neuroevolution) on a collection of agent training tasks, and contrasted with both a completely manual approach (scripting) and a completely hands-off one (neuroevolution alone). The results show that, (1) human-assisted evolution outperforms a manual scripting approach, (2) unassisted evolution performs consistently well across domains, and (3) different methods of assisting neuroevolution outperform unassisted evolution on different tasks. If done right, human-assisted neuroevolution can therefore be a powerful technique for constructing intelligent agents.",
"Neuroevolution is a promising approach for constructing intelligent agents in many complex tasks such as games, robotics, and decision making. It is also well suited for evolving team behavior for many multiagent tasks. However, new challenges and opportunities emerge in such tasks, including facilitating cooperation through reward sharing and communication, accelerating evolution through social learning, and measuring how good the resulting solutions are. This paper reviews recent progress in these three areas, and suggests avenues for future work."
]
} |
1608.02971 | 2494907211 | The problem of Learning from Demonstration is targeted at learning to perform tasks based on observed examples. One approach to Learning from Demonstration is Inverse Reinforcement Learning, in which actions are observed to infer rewards. This work combines a feature based state evaluation approach to Inverse Reinforcement Learning with neuroevolution, a paradigm for modifying neural networks based on their performance on a given task. Neural networks are used to learn from a demonstrated expert policy and are evolved to generate a policy similar to the demonstration. The algorithm is discussed and evaluated against competitive feature-based Inverse Reinforcement Learning approaches. At the cost of execution time, neural networks allow for non-linear combinations of features in state evaluations. These valuations may correspond to state value or state reward. This results in better correspondence to observed examples as opposed to using linear combinations. This work also extends existing work on Bayesian Non-Parametric Feature Construction for Inverse Reinforcement Learning by using non-linear combinations of intermediate data to improve performance. The algorithm is observed to be specifically suitable for a linearly solvable non-deterministic Markov Decision Processes in which multiple rewards are sparsely scattered in state space. A conclusive performance hierarchy between evaluated algorithms is presented. | State values are used to generate a corresponding policy. To incorporate learning by demonstration, this policy is matched against the demonstration and the neural network is evolved accordingly. Such directed neuroevolution can be considered an extension of @cite_45 @cite_6 with better insight into the evaluation of a state. For the purpose of this document, the proposed work is referred to as NEAT-IRL. | {
"cite_N": [
"@cite_45",
"@cite_6"
],
"mid": [
"2168909409",
"2096645045"
],
"abstract": [
"Neuroevolution is a promising learning method in tasks with extremely large state and action spaces and hidden states. Recent advances allow neuroevolution to take place in real time, making it possible to e.g. construct video games with adaptive agents. Often some of the desired behaviors for such agents are known, and it would make sense to prescribe them, rather than requiring evolution to discover them. This paper presents a technique for incorporating human-generated advice in real time into neuroevolution. The advice is given in a formal language and converted to a neural network structure through KBANN. The NEAT neuroevolution method then incorporates the structure into existing networks through evolution of network weights and topology. The method is evaluated in the NERO video game, where it makes learning faster even when the tasks change and novel ways of making use of the advice are required. Such ability to incorporate human knowledge into neuroevolution in real time may prove useful in several interactive adaptive domains in the future.",
"Many different methods for combining human expertise with machine learning in general, and evolutionary computation in particular, are possible. Which of these methods work best, and do they outperform human design and machine design alone? In order to answer this question, a human-subject experiment for comparing human-assisted machine learning methods was conducted. Three different approaches, i.e. advice, shaping, and demonstration, were employed to assist a powerful machine learning technique (neuroevolution) on a collection of agent training tasks, and contrasted with both a completely manual approach (scripting) and a completely hands-off one (neuroevolution alone). The results show that, (1) human-assisted evolution outperforms a manual scripting approach, (2) unassisted evolution performs consistently well across domains, and (3) different methods of assisting neuroevolution outperform unassisted evolution on different tasks. If done right, human-assisted neuroevolution can therefore be a powerful technique for constructing intelligent agents."
]
} |
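The core loop of evolving network weights toward a demonstration-matching policy, as described in the row above, can be sketched with a minimal selection-and-mutation scheme. This is a hypothetical fixed-topology illustration (NEAT additionally evolves topology and uses speciation); the fitness function and parameters are invented for the example:

```python
import random

def evolve_weights(fitness, dim, pop_size=20, generations=100, sigma=0.2, seed=3):
    """Minimal truncation-selection evolution of a flat weight vector.

    `fitness` scores a candidate weight vector, e.g. by how closely the
    policy induced by the resulting network matches a demonstrated policy.
    Each generation keeps the top half of the population (elitism) and
    refills the rest with Gaussian-perturbed copies of the survivors.
    """
    rnd = random.Random(seed)
    pop = [[rnd.gauss(0, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)          # best first
        parents = pop[: pop_size // 2]               # truncation selection
        children = [
            [w + rnd.gauss(0, sigma) for w in rnd.choice(parents)]
            for _ in range(pop_size - len(parents))  # mutate random parents
        ]
        pop = parents + children
    return max(pop, key=fitness)
```

In the NEAT-IRL setting the fitness would be a policy-similarity score against the demonstration; here any callable scoring a weight vector works, e.g. negative squared distance to a target vector.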
1608.03049 | 2951588086 | Visual fashion analysis has attracted many attentions in the recent years. Previous work represented clothing regions by either bounding boxes or human joints. This work presents fashion landmark detection or fashion alignment, which is to predict the positions of functional key points defined on the fashion items, such as the corners of neckline, hemline, and cuff. To encourage future studies, we introduce a fashion landmark dataset with over 120K images, where each image is labeled with eight landmarks. With this dataset, we study fashion alignment by cascading multiple convolutional neural networks in three stages. These stages gradually improve the accuracies of landmark predictions. Extensive experiments demonstrate the effectiveness of the proposed method, as well as its generalization ability to pose estimation. Fashion landmark is also compared to clothing bounding boxes and human joints in two applications, fashion attribute prediction and clothes retrieval, showing that fashion landmark is a more discriminative representation to understand fashion images. | Visual fashion understanding has been a long-pursued topic due to the many human-centric applications it enables. Recent advances include predicting semantic attributes @cite_39 @cite_23 @cite_33 @cite_37 @cite_16 , clothes recognition and retrieval @cite_1 @cite_0 @cite_7 @cite_30 @cite_27 @cite_16 and fashion trends discovery @cite_17 @cite_28 @cite_3 . To better capture discriminative information in fashion items, previous works have explored the use of the full image @cite_27 , general object proposals @cite_27 , bounding boxes @cite_33 @cite_37 and even masks @cite_20 @cite_0 @cite_36 @cite_35 . However, these representations either lack sufficient discriminative ability or are too expensive to obtain. To overcome these drawbacks, we introduce the problem of clothes alignment in this work, which is a necessary step toward robust fashion recognition. | {
"cite_N": [
"@cite_30",
"@cite_35",
"@cite_37",
"@cite_33",
"@cite_7",
"@cite_28",
"@cite_36",
"@cite_1",
"@cite_3",
"@cite_39",
"@cite_0",
"@cite_27",
"@cite_23",
"@cite_16",
"@cite_20",
"@cite_17"
],
"mid": [
"",
"2204578866",
"",
"",
"2143183660",
"2030628044",
"2038147967",
"2135367695",
"",
"2051926867",
"2121339428",
"2200092826",
"146395692",
"2471768434",
"2074621908",
""
],
"abstract": [
"",
"In this work, we address the human parsing task with a novel Contextualized Convolutional Neural Network (Co-CNN) architecture, which well integrates the cross-layer context, global image-level context, within-super-pixel context and cross-super-pixel neighborhood context into a unified network. Given an input human image, Co-CNN produces the pixel-wise categorization in an end-to-end way. First, the cross-layer context is captured by our basic local-to-global-to-local structure, which hierarchically combines the global semantic structure and the local fine details within the cross-layers. Second, the global image-level label prediction is used as an auxiliary objective in the intermediate layer of the Co-CNN, and its outputs are further used for guiding the feature learning in subsequent convolutional layers to leverage the global image-level context. Finally, to further utilize the local super-pixel contexts, the within-super-pixel smoothing and cross-super-pixel neighbourhood voting are formulated as natural sub-components of the Co-CNN to achieve the local label consistency in both training and testing process. Comprehensive evaluations on two public datasets well demonstrate the significant superiority of our Co-CNN architecture over other state-of-the-arts for human parsing. In particular, the F-1 score on the large dataset [15] reaches 76.95% by Co-CNN, significantly higher than 62.81% and 64.38% by the state-of-the-art algorithms, M-CNN [21] and ATR [15], respectively.",
"",
"",
"We present a scalable approach to automatically suggest relevant clothing products, given a single image without metadata. We formulate the problem as cross-scenario retrieval: the query is a real-world image, while the products from online shopping catalogs are usually presented in a clean environment. We divide our approach into two main stages: a) Starting from articulated pose estimation, we segment the person area and cluster promising image regions in order to detect the clothing classes present in the query image. b) We use image retrieval techniques to retrieve visually similar products from each of the detected classes. We achieve clothing detection performance comparable to the state-of-the-art on a very recent annotated dataset, while being more than 50 times faster. Finally, we present a large scale clothing suggestion scenario, where the product database contains over one million products.",
"From Flickr to Facebook to Pinterest, pictures are increasingly becoming a core content type in social networks. But, how important is this visual content and how does it influence behavior in the network? In this paper we study the effects of visual, textual, and social factors on popularity in a large real-world network focused on fashion. We make use of state of the art computer vision techniques for clothing representation, as well as network and text information to predict post popularity in both in-network and out-of-network scenarios. Our experiments find significant statistical evidence that social factors dominate the in-network scenario, but that combinations of content and social factors can be helpful for predicting popularity outside of the network. This in depth study of image popularity in social networks suggests that social factors should be carefully considered for research involving social network photos.",
"This paper aims at developing an integrated system of clothing co-parsing, in order to jointly parse a set of clothing images (unsegmented but annotated with tags) into semantic configurations. We propose a data-driven framework consisting of two phases of inference. The first phase, referred as \"image co-segmentation\", iterates to extract consistent regions on images and jointly refines the regions over all images by employing the exemplar-SVM (ESVM) technique [23]. In the second phase (i.e. \"region colabeling\"), we construct a multi-image graphical model by taking the segmented regions as vertices, and incorporate several contexts of clothing configuration (e.g., item location and mutual interactions). The joint label assignment can be solved using the efficient Graph Cuts algorithm. In addition to evaluate our framework on the Fashionista dataset [30], we construct a dataset called CCP consisting of 2098 high-resolution street fashion photos to demonstrate the performance of our system. We achieve 90.29% / 88.23% segmentation accuracy and 65.52% / 63.89% recognition rate on the Fashionista and the CCP datasets, respectively, which are superior compared with state-of-the-art methods.",
"We address a cross-scenario clothing retrieval problem- given a daily human photo captured in general environment, e.g., on street, finding similar clothing in online shops, where the photos are captured more professionally and with clean background. There are large discrepancies between daily photo scenario and online shopping scenario. We first propose to alleviate the human pose discrepancy by locating 30 human parts detected by a well trained human detector. Then, founded on part features, we propose a two-step calculation to obtain more reliable one-to-many similarities between the query daily photo and online shopping photos: 1) the within-scenario one-to-many similarities between a query daily photo and an extra auxiliary set are derived by direct sparse reconstruction; 2) by a cross-scenario many-to-many similarity transfer matrix inferred offline from the auxiliary set and the online shopping set, the reliable cross-scenario one-to-many similarities between the query daily photo and all online shopping photos are obtained.",
"",
"Automatic clothes search in consumer photos is not a trivial problem as photos are usually taken under completely uncontrolled realistic imaging conditions. In this paper, a novel framework is presented to tackle this issue by leveraging low-level features (e.g., color) and high-level features (attributes) of clothes. First, a content-based image retrieval (CBIR) approach based on the bag-of-visual-words (BOW) model is developed as our baseline system, in which a codebook is constructed from extracted dominant color patches. A reranking approach is then proposed to improve search quality by exploiting clothes attributes, including the type of clothes, sleeves, patterns, etc. The experiments on photo collections show that our approach is robust to large variations of images taken in unconstrained environment, and the reranking algorithm based on attribute learning significantly improves retrieval performance in combination with the proposed baseline.",
"Clothing recognition is an extremely challenging problem due to wide variation in clothing item appearance, layering, and style. In this paper, we tackle the clothing parsing problem using a retrieval based approach. For a query image, we find similar styles from a large database of tagged fashion images and use these examples to parse the query. Our approach combines parsing from: pre-trained global clothing models, local clothing models learned on the fly from retrieved examples, and transferred parse masks (paper doll item transfer) from retrieved examples. Experimental evaluation shows that our approach significantly outperforms state of the art in parsing accuracy.",
"In this paper, we define a new task, Exact Street to Shop, where our goal is to match a real-world example of a garment item to the same item in an online shop. This is an extremely challenging task due to visual differences between street photos (pictures of people wearing clothing in everyday uncontrolled settings) and online shop photos (pictures of clothing items on people, mannequins, or in isolation, captured by professionals in more controlled settings). We collect a new dataset for this application containing 404,683 shop photos collected from 25 different online retailers and 20,357 street photos, providing a total of 39,479 clothing item matches between street and shop photos. We develop three different methods for Exact Street to Shop retrieval, including two deep learning baseline methods, and a method to learn a similarity measure between the street and shop domains. Experiments demonstrate that our learned similarity significantly outperforms our baselines that use existing deep learning based representations.",
"Describing clothing appearance with semantic attributes is an appealing technique for many important applications. In this paper, we propose a fully automated system that is capable of generating a list of nameable attributes for clothes on human body in unconstrained images. We extract low-level features in a pose-adaptive manner, and combine complementary features for learning attribute classifiers. Mutual dependencies between the attributes are then explored by a Conditional Random Field to further improve the predictions from independent classifiers. We validate the performance of our system on a challenging clothing attribute dataset, and introduce a novel application of dressing style analysis that utilizes the semantic attributes produced by our system.",
"Recent advances in clothes recognition have been driven by the construction of clothes datasets. Existing datasets are limited in the amount of annotations and are difficult to cope with the various challenges in real-world applications. In this work, we introduce DeepFashion, a large-scale clothes dataset with comprehensive annotations. It contains over 800,000 images, which are richly annotated with massive attributes, clothing landmarks, and correspondence of images taken under different scenarios including store, street snapshot, and consumer. Such rich annotations enable the development of powerful algorithms in clothes recognition and facilitating future researches. To demonstrate the advantages of DeepFashion, we propose a new deep model, namely FashionNet, which learns clothing features by jointly predicting clothing attributes and landmarks. The estimated landmarks are then employed to pool or gate the learned features. It is optimized in an iterative manner. Extensive experiments demonstrate the effectiveness of FashionNet and the usefulness of DeepFashion.",
"In this paper we demonstrate an effective method for parsing clothing in fashion photographs, an extremely challenging problem due to the large number of possible garment items, variations in configuration, garment appearance, layering, and occlusion. In addition, we provide a large novel dataset and tools for labeling garment items, to enable future research on clothing estimation. Finally, we present intriguing initial results on using clothing estimates to improve pose identification, and demonstrate a prototype application for pose-independent visual garment retrieval.",
""
]
} |
1608.03045 | 2511950547 | We propose a new family of combinatorial inference problems for graphical models. Unlike classical statistical inference where the main interest is point estimation or parameter testing, combinatorial inference aims at testing the global structure of the underlying graph. Examples include testing the graph connectivity, the presence of a cycle of certain size, or the maximum degree of the graph. To begin with, we develop a unified theory for the fundamental limits of a large family of combinatorial inference problems. We propose new concepts including structural packing and buffer entropies to characterize how the complexity of combinatorial graph structures impacts the corresponding minimax lower bounds. On the other hand, we propose a family of novel and practical structural testing algorithms to match the lower bounds. We provide thorough numerical results on both synthetic graphical models and brain networks to illustrate the usefulness of these proposed methods. | Graphical model inference is relatively straightforward when @math , but becomes notoriously challenging when @math . In high-dimensions, estimation procedures were studied by @cite_26 @cite_4 @cite_25 @cite_34 among others, while for variable selection procedures see @cite_7 @cite_18 @cite_32 @cite_35 @cite_34 and references therein. Recently, motivated by @cite_0 , various inferential methods for high-dimensional graphical models were suggested [e.g.] liu2013, jankova2014confidence,chen2015asymptotically,ren2015asymptotic, neykov2015aunified, gu2015local , most of which focus on testing the presence of a single edge (except @cite_27 who took the FDR approach to conduct multiple tests and @cite_9 who developed procedures of edge testing in Gaussian copula models). None of the aforementioned works address the problem of combinatorial structure testing. | {
"cite_N": [
"@cite_35",
"@cite_18",
"@cite_26",
"@cite_4",
"@cite_7",
"@cite_9",
"@cite_32",
"@cite_0",
"@cite_27",
"@cite_34",
"@cite_25"
],
"mid": [
"",
"",
"2138019504",
"",
"2010824638",
"1599875939",
"2137892504",
"2075010327",
"1990551862",
"",
""
],
"abstract": [
"",
"",
"Summary. We consider the problem of selecting grouped variables (factors) for accurate prediction in regression. Such a problem arises naturally in many practical situations with the multifactor analysis-of-variance problem as the most important and well-known example. Instead of selecting factors by stepwise backward elimination, we focus on the accuracy of estimation and consider extensions of the lasso, the LARS algorithm and the non-negative garrotte for factor selection. The lasso, the LARS algorithm and the non-negative garrotte are recently proposed regression methods that can be used to select individual variables. We study and propose efficient algorithms for the extensions of these methods for factor selection and show that these extensions give superior performance to the traditional stepwise backward elimination method in factor selection problems. We study the similarities and the differences between these methods. Simulations and real examples are used to illustrate the methods.",
"",
"The pattern of zero entries in the inverse covariance matrix of a multivariate normal distribution corresponds to conditional independence restrictions between variables. Covariance selection aims at estimating those structural zeros from data. We show that neighborhood selection with the Lasso is a computationally attractive alternative to standard covariance selection for sparse high-dimensional graphs. Neighborhood selection estimates the conditional independence restrictions separately for each node in the graph and is hence equivalent to variable selection for Gaussian linear models. We show that the proposed neighborhood selection scheme is consistent for sparse high-dimensional graphs. Consistency hinges on the choice of the penalty parameter. The oracle value for optimal prediction does not lead to a consistent neighborhood estimate. Controlling instead the probability of falsely joining some distinct connectivity components of the graph, consistent estimation for sparse graphs is achieved (with exponential rates), even when the number of variables grows as the number of observations raised to an arbitrary power.",
"This paper proposes a unified framework to quantify local and global inferential uncertainty for high dimensional nonparanormal graphical models. In particular, we consider the problems of testing the presence of a single edge and constructing a uniform confidence subgraph. Due to the presence of unknown marginal transformations, we propose a pseudo likelihood based inferential approach. In sharp contrast to the existing high dimensional score test method, our method is free of tuning parameters given an initial estimator, and extends the scope of the existing likelihood based inferential framework. Furthermore, we propose a U-statistic multiplier bootstrap method to construct the confidence subgraph. We show that the constructed subgraph is contained in the true graph with probability greater than a given nominal level. Compared with existing methods for constructing confidence subgraphs, our method does not rely on Gaussian or sub-Gaussian assumptions. The theoretical properties of the proposed inferential methods are verified by thorough numerical experiments and real data analysis.",
"Recent methods for estimating sparse undirected graphs for real-valued data in high dimensional problems rely heavily on the assumption of normality. We show how to use a semiparametric Gaussian copula---or \"nonparanormal\"---for high dimensional inference. Just as additive models extend linear models by replacing linear functions with a set of one-dimensional smooth functions, the nonparanormal extends the normal by transforming the variables by smooth functions. We derive a method for estimating the nonparanormal, study the method's theoretical properties, and show that it works well in many examples.",
"Let @math be a zero mean Gaussian vector and @math be a subset of @math . Suppose we are given @math i.i.d. replications of the vector @math . We propose a new test for testing that @math is independent of @math conditionally to @math against the general alternative that it is not. This procedure does not depend on any prior information on the covariance of @math or the variance of @math and applies in a high-dimensional setting. It straightforwardly extends to test the neighbourhood of a Gaussian graphical model. The procedure is based on a model of Gaussian regression with random Gaussian covariates. We give non asymptotic properties of the test and we prove that it is rate optimal (up to a possible @math factor) over various classes of alternatives under some additional assumptions. Besides, it allows us to derive non asymptotic minimax rates of testing in this setting. Finally, we carry out a simulation study in order to evaluate the performance of our procedure.",
"",
"",
""
]
} |
1608.03045 | 2511950547 | We propose a new family of combinatorial inference problems for graphical models. Unlike classical statistical inference where the main interest is point estimation or parameter testing, combinatorial inference aims at testing the global structure of the underlying graph. Examples include testing the graph connectivity, the presence of a cycle of certain size, or the maximum degree of the graph. To begin with, we develop a unified theory for the fundamental limits of a large family of combinatorial inference problems. We propose new concepts including structural packing and buffer entropies to characterize how the complexity of combinatorial graph structures impacts the corresponding minimax lower bounds. On the other hand, we propose a family of novel and practical structural testing algorithms to match the lower bounds. We provide thorough numerical results on both synthetic graphical models and brain networks to illustrate the usefulness of these proposed methods. | In addition to estimation and model selection procedures, efforts have been made to understand the fundamental limits of these problems. Lower bounds on estimation were obtained by @cite_8 , where the authors show that the parametric estimation rate @math is unattainable unless @math . Lower bounds on the minimal sample size required for model selection in Ising models were established by @cite_10 , where it is shown that support recovery is unattainable when @math . In a follow up work, @cite_12 studied model selection limits on the sample size in Gaussian graphical models. The latter two works are remotely related to ours, in that both works exploit graph properties to obtain information-theoretic lower bounds. However, our problem differs significantly from theirs since we focus on developing lower bounds for testing graph structure, which is a fundamentally different problem from estimating the whole graph. | {
"cite_N": [
"@cite_10",
"@cite_12",
"@cite_8"
],
"mid": [
"2108829665",
"",
"1716904016"
],
"abstract": [
"The problem of graphical model selection is to estimate the graph structure of a Markov random field given samples from it. We analyze the information-theoretic limitations of the problem of graph selection for binary Markov random fields under high-dimensional scaling, in which the graph size p and the number of edges k, and/or the maximal node degree d, are allowed to increase to infinity as a function of the sample size n. For pair-wise binary Markov random fields, we derive both necessary and sufficient conditions for correct graph selection over the class Gp,k of graphs on p vertices with at most k edges, and over the class Gp,d of graphs on p vertices with maximum degree at most d. For the class Gp,k, we establish the existence of constants c and c' such that correct graph selection is achievable if n > c' k^2 log p. Similarly, for the class Gp,d, we exhibit constants c and c' such that correct graph selection is achievable for n > c' d^3 log p.",
"",
"The Gaussian graphical model, a popular paradigm for studying relationship among variables in a wide range of applications, has attracted great attention in recent years. This paper considers a fundamental question: When is it possible to estimate low-dimensional parameters at parametric square-root rate in a large Gaussian graphical model? A novel regression approach is proposed to obtain asymptotically efficient estimation of each entry of a precision matrix under a sparseness condition relative to the sample size. When the precision matrix is not sufficiently sparse, or equivalently the sample size is not sufficiently large, a lower bound is established to show that it is no longer possible to achieve the parametric rate in the estimation of each entry. This lower bound result, which provides an answer to the delicate sample size question, is established with a novel construction of a subset of sparse precision matrices in an application of Le Cam's lemma. Moreover, the proposed estimator is proven to have optimal convergence rate when the parametric rate cannot be achieved, under a minimal sample requirement. The proposed estimator is applied to test the presence of an edge in the Gaussian graphical model or to recover the support of the entire model, to obtain adaptive rate-optimal estimation of the entire precision matrix as measured by the matrix @math operator norm and to make inference in latent variables in the graphical model. All of this is achieved under a sparsity condition on the precision matrix and a side condition on the range of its spectrum. This significantly relaxes the commonly imposed uniform signal strength condition on the precision matrix, irrepresentability condition on the Hessian tensor operator of the covariance matrix or the @math constraint on the precision matrix. Numerical results confirm our theoretical findings. The ROC curve of the proposed algorithm, Asymptotic Normal Thresholding (ANT), for support recovery significantly outperforms that of the popular GLasso algorithm."
]
} |
1608.02653 | 2951867283 | Node overlap removal is a necessary step in many scenarios including laying out a graph, or visualizing a tag cloud. Our contribution is a new overlap removal algorithm that iteratively builds a Minimum Spanning Tree on a Delaunay triangulation of the node centers and removes the node overlaps by "growing" the tree. The algorithm is simple to implement yet produces high quality layouts. According to our experiments it runs several times faster than the current state-of-the-art methods. | There is vast research on node overlap removal. Some methods, including hierarchical layouts , incorporate the overlap removal with the layout step. Likewise, force-directed methods have been extended to take the node sizes into account , but it is difficult to guarantee overlap-free layouts without increasing the repulsive forces extensively. @cite_4 show how to avoid node overlaps with Stress Majorization . The method can remove node overlaps during the layout step, but it needs an initial state that is overlap free; sometimes such a state is not given. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2158378644"
],
"abstract": [
"Existing information-visualization techniques that target small screens are usually limited to exploring a few hundred items. In this article we present a scatterplot tool for personal digital assistants that allows the handling of many thousands of items. The application's scalability is achieved by incorporating two alternative interaction techniques: a geometric-semantic zoom that provides smooth transition between overview and detail, and a fisheye distortion that displays the focus and context regions of the scatterplot in a single view. A user study with 24 participants was conducted to compare the usability and efficiency of both techniques when searching a book database containing 7500 items. The study was run on a pen-driven Wacom board simulating a PDA interface. While the results showed no significant difference in task-completion times, a clear majority of 20 users preferred the fisheye view over the zoom interaction. In addition, other dependent variables such as user satisfaction and subjective rating of orientation and navigation support revealed a preference for the fisheye distortion. These findings partly contradict related research and indicate that, when using a small screen, users place higher value on the ability to preserve navigational context than they do on the ease of use of a simplistic, metaphor-based interaction style"
]
} |
1608.02653 | 2951867283 | Node overlap removal is a necessary step in many scenarios including laying out a graph, or visualizing a tag cloud. Our contribution is a new overlap removal algorithm that iteratively builds a Minimum Spanning Tree on a Delaunay triangulation of the node centers and removes the node overlaps by "growing" the tree. The algorithm is simple to implement yet produces high quality layouts. According to our experiments it runs several times faster than the current state-of-the-art methods. | Starting from the center of a node, RWorldle removes the overlaps by discovering the free space around a node by using a spiral curve and then utilizing this space. The approach requires a large number of intersection queries that are time consuming. This idea is extended by @cite_7 to discover available space by scanning the plane with a line or a circle. | {
"cite_N": [
"@cite_7"
],
"mid": [
"2028377123"
],
"abstract": [
"When representing 2D data points with spacious objects such as labels, overlap can occur. We present a simple algorithm which modifies the (Mani-) Wordle idea with scan-line based techniques to allow a better placement. We give an introduction to common placement techniques from different fields and compare our method to these techniques w.r.t. euclidean displacement, changes in orthogonal ordering as well as shape and size preservation. Especially in dense scenarios our method preserves the overall shape better than known techniques and allows a good trade-off between the other measures. Applications on real world data are given and discussed."
]
} |
1608.02653 | 2951867283 | Node overlap removal is a necessary step in many scenarios including laying out a graph, or visualizing a tag cloud. Our contribution is a new overlap removal algorithm that iteratively builds a Minimum Spanning Tree on a Delaunay triangulation of the node centers and removes the node overlaps by "growing" the tree. The algorithm is simple to implement yet produces high quality layouts. According to our experiments it runs several times faster than the current state-of-the-art methods. | Another set of node overlap removal algorithms focus on the idea of defining pairwise node constraints and translating the nodes to satisfy the constraints . These methods consider horizontal and vertical problems separately, which often leads to a distorted aspect ratio . A Force-transfer-algorithm is introduced by @cite_3 ; horizontal and vertical scans of overlapped nodes create forces moving nodes vertically and horizontally; the algorithm takes @math steps, where @math is the number of the nodes. @cite_1 develop Mixed Integer Optimization for Layout Arrangement to remove overlaps in a set of rectangles. The paper discusses the quality of the layout, which seems to be high, but not the effectiveness of the method, which relies on a mixed integer problem solver. @cite_5 reduce the overlap removal to a quadratic problem and solve it efficiently in @math steps. According to Gansner and Hu @cite_0 , the quality and the speed of the method of @cite_5 is very similar to the ones of PRISM. | {
"cite_N": [
"@cite_0",
"@cite_5",
"@cite_1",
"@cite_3"
],
"mid": [
"2098329200",
"1584926591",
"2069391844",
"2031908202"
],
"abstract": [
"When drawing graphs whose nodes contain text or graphics, the nontrivial node sizes must be taken into account, either as part of the initial layout or as a post-processing step. The core problem in avoiding or removing overlaps is to retain the structural information inherent in a layout while minimizing the additional area required. This paper presents a new node overlap removal algorithm that does well at retaining a graph’s shape while using little additional area and time. As part of the analysis, we consider and evaluate two measures of dissimilarity for two layouts of the same graph.",
"The problem of node overlap removal is to adjust the layout generated by typical graph drawing methods so that nodes of non-zero width and height do not overlap, yet are as close as possible to their original positions. We give an O(n log n) algorithm for achieving this assuming that the number of nodes overlapping any single node is bounded by some constant. This method has two parts, a constraint generation algorithm which generates a linear number of “separation” constraints and an algorithm for finding a solution to these constraints “close” to the original node placement values. We also extend our constraint solving algorithm to give an active set based algorithm which is guaranteed to find the optimal solution but which has considerably worse theoretical complexity. We compare our method with convex quadratic optimization and force scan approaches and find that it is faster than either, gives results of better quality than force scan methods and similar quality to the quadratic optimisation approach.",
"Arranging geometric entities in a two-dimensional layout is a common task for most information visualization applications, where existing algorithms typically rely on heuristics to position shapes such as boxes or discs in a visual space. Geometric entities are used as a visual resource to convey information contained in data such as textual documents or videos and the challenge is to place objects with similar content close to each other while still avoiding overlap. In this work we present a novel mechanism to arrange rectangular boxes in a two-dimensional layout which copes with the two properties above, that is, it keeps similar object close and prevents overlap. In contrast to heuristic techniques, our approach relies on mixed integer quadratic programming, resulting in well structured arrangements which can be easily be tuned to take different forms. We show the effectiveness of our methodology through a comprehensive set of comparisons against state-of-art methods. Moreover, we employ the proposed technique in video data visualization, attesting its usefulness in a practical application.",
"Techniques for drawing graphs have proven successful in producing good layouts of undirected graphs. When nodes must be labeled however, the problem of overlapping nodes arises, particularly in dynamic graph visualization. Providing a formal description of this problem, this paper presents a new approach called the Force-Transfer algorithm that removes node overlaps. Compared to other methods, our algorithm is usually able to achieve a compact adjusted layout within a reasonable running time."
]
} |
1608.02305 | 2471190594 | Unmanned aerial vehicles, or drones, have the potential to significantly reduce the cost and time of making last-mile deliveries and responding to emergencies. Despite this potential, little work has gone into developing vehicle routing problems (VRPs) specifically for drone delivery scenarios. Existing VRPs are insufficient for planning drone deliveries: either multiple trips to the depot are not permitted, leading to solutions with excess drones, or the effect of battery and payload weight on energy consumption is not considered, leading to costly or infeasible routes. We propose two multitrip VRPs for drone delivery that address both issues. One minimizes costs subject to a delivery time limit, while the other minimizes the overall delivery time subject to a budget constraint. We mathematically derive and experimentally validate an energy consumption model for multirotor drones, demonstrating that energy consumption varies approximately linearly with payload and battery weight. We use this approximation to derive mixed integer linear programs for our VRPs. We propose a cost function that considers our energy consumption model and drone reuse, and apply it in a simulated annealing (SA) heuristic for finding suboptimal solutions to practical scenarios. To assist drone delivery practitioners with balancing cost and delivery time, the SA heuristic is used to show that the minimum cost has an inverse exponential relationship with the delivery time limit, and the minimum overall delivery time has an inverse exponential relationship with the budget. Numerical results confirm the importance of reusing drones and optimizing battery size in drone delivery VRPs. | Drone routing papers, such as @cite_32 and @cite_7 , typically focus on surveillance applications. The FSTSP @cite_3 considers a delivery scenario based on combining drone delivery with a delivery truck. 
There do not appear to be papers that focus solely on using drones to make deliveries. This existing work on drone routing ignores factors important to drone delivery such as vehicle capacity, battery weight, changing payload weight, and reusing vehicles to reduce costs. | {
"cite_N": [
"@cite_3",
"@cite_32",
"@cite_7"
],
"mid": [
"1977319619",
"2964117926",
"2053083032"
],
"abstract": [
"Abstract Once limited to the military domain, unmanned aerial vehicles are now poised to gain widespread adoption in the commercial sector. One such application is to deploy these aircraft, also known as drones, for last-mile delivery in logistics operations. While significant research efforts are underway to improve the technology required to enable delivery by drone, less attention has been focused on the operational challenges associated with leveraging this technology. This paper provides two mathematical programming models aimed at optimal routing and scheduling of unmanned aircraft, and delivery trucks, in this new paradigm of parcel delivery. In particular, a unique variant of the classical vehicle routing problem is introduced, motivated by a scenario in which an unmanned aerial vehicle works in collaboration with a traditional delivery truck to distribute parcels. We present mixed integer linear programming formulations for two delivery-by-drone problems, along with two simple, yet effective, heuristic solution approaches to solve problems of practical size. Solutions to these problems will facilitate the adoption of unmanned aircraft for last-mile delivery. Such a delivery system is expected to provide faster receipt of customer orders at less cost to the distributor and with reduced environmental impacts. A numerical analysis demonstrates the effectiveness of the heuristics and investigates the tradeoffs between using drones with faster flight speeds versus longer endurance.",
"We consider a single Unmanned Aerial Vehicle (UAV) routing problem where there are multiple depots and the vehicle is allowed to refuel at any depot. The objective of the problem is to find a path for the UAV such that each target is visited at least once by the vehicle, the fuel constraint is never violated along the path for the UAV, and the total fuel required by the UAV is a minimum. We develop an approximation algorithm for the problem, and propose fast construction and improvement heuristics to solve the same. Computational results show that solutions whose costs are on an average within 1.4 of the optimum can be obtained relatively fast for the problem involving five depots and 25 targets.",
"Aerial robotics can be very useful to perform complex tasks in a distributed and cooperative fashion, such as localization of targets and search of point of interests (PoIs). In this work, we propose a distributed system of autonomous Unmanned Aerial Vehicles (UAVs), able to self-coordinate and cooperate in order to ensure both spatial and temporal coverage of specific time and spatial varying PoIs. In particular, we consider an UAVs system able to solve distributed dynamic scheduling problems, since each device is required to move towards a certain position in a certain time. We give a mathematical formulation of the problem as a multi-criteria optimization model, in which the total distances traveled by the UAVs (to be minimized), the customer satisfaction (to be maximized) and the number of used UAVs (to be minimized) are considered simultaneously. A dynamic variant of the basic optimization model, defined by considering the rolling horizon concept, is shown. We introduce a case study as an application scenario, where sport actions of a football match are filmed through a distributed UAVs system. The customer satisfaction and the traveled distance are used as performance parameters to evaluate the proposed approaches on the considered scenario."
]
} |
1608.02305 | 2471190594 | Unmanned aerial vehicles, or drones, have the potential to significantly reduce the cost and time of making last-mile deliveries and responding to emergencies. Despite this potential, little work has gone into developing vehicle routing problems (VRPs) specifically for drone delivery scenarios. Existing VRPs are insufficient for planning drone deliveries: either multiple trips to the depot are not permitted, leading to solutions with excess drones, or the effect of battery and payload weight on energy consumption is not considered, leading to costly or infeasible routes. We propose two multitrip VRPs for drone delivery that address both issues. One minimizes costs subject to a delivery time limit, while the other minimizes the overall delivery time subject to a budget constraint. We mathematically derive and experimentally validate an energy consumption model for multirotor drones, demonstrating that energy consumption varies approximately linearly with payload and battery weight. We use this approximation to derive mixed integer linear programs for our VRPs. We propose a cost function that considers our energy consumption model and drone reuse, and apply it in a simulated annealing (SA) heuristic for finding suboptimal solutions to practical scenarios. To assist drone delivery practitioners with balancing cost and delivery time, the SA heuristic is used to show that the minimum cost has an inverse exponential relationship with the delivery time limit, and the minimum overall delivery time has an inverse exponential relationship with the budget. Numerical results confirm the importance of reusing drones and optimizing battery size in drone delivery VRPs. | VRPs @cite_26 @cite_20 have been applied to solve delivery problems that, at first glance, appear similar to the DDPs proposed in this paper. A VRP attempts to find the optimal routes for one or more vehicles to deliver commodities to a set of locations. 
Each location may have a unique demand representing the number or size of commodities it requires, or a time window in which the vehicle should arrive. Vehicles typically leave from one or more depots, deliver their commodities, and then return to the depots. This section will explain why existing VRPs are not adequate for the drone delivery problem. | {
"cite_N": [
"@cite_26",
"@cite_20"
],
"mid": [
"1481088856",
"174230839"
],
"abstract": [
"Publisher Summary This chapter discusses some of the most important vehicle routing problem types. The vehicle routing problem lies at the heart of distribution management. It is faced each day by thousands of companies and organizations engaged in the delivery and collection of goods or people. Because conditions vary from one setting to the next, the objectives and constraints encountered in practice are highly variable. Most algorithmic research and software development in this area focus on a limited number of prototype problems. By building enough flexibility in optimization systems, these can be adapted to various practical contexts. The classical vehicle routing problem (VRP) is one of the most popular problems in combinatorial optimization, and its study has given rise to several exact and heuristic solution techniques of general applicability. The chapter presents a comprehensive overview of the available exact and heuristic algorithms for the VRP, most of which have been adapted to solve other variants.",
"This paper presents a methodology for classifying the literature of the Vehicle Routing Problem (VRP). VRP as a field of study and practice is defined quite broadly. It is considered to encompass all of the managerial, physical, geographical, and informational considerations as well as the theoretic disciplines impacting this ever emerging-field. Over its lifespan the VRP literature has become quite disjointed and disparate. Keeping track of its development has become difficult because its subject matter transcends several academic disciplines and professions that range from algorithm design to traffic management. Consequently, this paper defines VRP's domain in its entirety, accomplishes an all-encompassing taxonomy for the VRP literature, and delineates all of VRP's facets in a parsimonious and discriminating manner. Sample articles chosen for their disparity are classified to illustrate the descriptive power and parsimony of the taxonomy. Moreover, all previously published VRP taxonomies are shown to be relatively myopic; that is, they are subsumed by what is herein presented. Because the VRP literature encompasses esoteric and highly theoretical articles at one extremum and descriptions of actual applications at the other, the article sampling includes the entire range of the VRP literature."
]
} |
1608.02305 | 2471190594 | Unmanned aerial vehicles, or drones, have the potential to significantly reduce the cost and time of making last-mile deliveries and responding to emergencies. Despite this potential, little work has gone into developing vehicle routing problems (VRPs) specifically for drone delivery scenarios. Existing VRPs are insufficient for planning drone deliveries: either multiple trips to the depot are not permitted, leading to solutions with excess drones, or the effect of battery and payload weight on energy consumption is not considered, leading to costly or infeasible routes. We propose two multitrip VRPs for drone delivery that address both issues. One minimizes costs subject to a delivery time limit, while the other minimizes the overall delivery time subject to a budget constraint. We mathematically derive and experimentally validate an energy consumption model for multirotor drones, demonstrating that energy consumption varies approximately linearly with payload and battery weight. We use this approximation to derive mixed integer linear programs for our VRPs. We propose a cost function that considers our energy consumption model and drone reuse, and apply it in a simulated annealing (SA) heuristic for finding suboptimal solutions to practical scenarios. To assist drone delivery practitioners with balancing cost and delivery time, the SA heuristic is used to show that the minimum cost has an inverse exponential relationship with the delivery time limit, and the minimum overall delivery time has an inverse exponential relationship with the budget. Numerical results confirm the importance of reusing drones and optimizing battery size in drone delivery VRPs. | The Green VRP (GVRP) @cite_39 , first described by Erdoğan, allows vehicles to visit refuelling stations while making deliveries to extend their range.
@cite_37 created a version for battery-powered delivery vehicles that considers time windows, capacity constraints, and location demands. @cite_33 built on this work by allowing for a mixed fleet of various types of vehicles, perhaps with different capacities, battery sizes, and costs. They assume that charging infrastructure is in place, so vehicles can regain energy to extend their travel distance. While such infrastructure could increase drone range, it is debatable whether delivery companies will deploy charging infrastructure for their drones, and unlikely that such infrastructure would be available in emergency response situations. In GVRPs, vehicles can visit the same charging station multiple times to restore energy, but appear to be able to visit the depot only once. Drones tend to have limited carrying capacities, so if the depot can only be visited once, a GVRP will probably require a large number of drones to satisfy every customer's demand. In ssec:drone-reuse we show that reusing drones by allowing them to make multiple trips can significantly reduce costs. | {
"cite_N": [
"@cite_37",
"@cite_33",
"@cite_39"
],
"mid": [
"2094764575",
"2325312983",
"1965856911"
],
"abstract": [
"Driven by new laws and regulations concerning the emission of greenhouse gases, carriers are starting to use electric vehicles for last-mile deliveries. The limited battery capacities of these vehicles necessitate visits to recharging stations during delivery tours of industry-typical length, which have to be considered in the route planning to avoid inefficient vehicle routes with long detours. We introduce the electric vehicle-routing problem with time windows and recharging stations E-VRPTW, which incorporates the possibility of recharging at any of the available stations using an appropriate recharging scheme. Furthermore, we consider limited vehicle freight capacities as well as customer time windows, which are the most important constraints in real-world logistics applications. As a solution method, we present a hybrid heuristic that combines a variable neighborhood search algorithm with a tabu search heuristic. Tests performed on newly designed instances for the E-VRPTW as well as on benchmark instances of related problems demonstrate the high performance of the heuristic proposed as well as the positive effect of the hybridization.",
"Due to new regulations and further technological progress in the field of electric vehicles, the research community faces the new challenge of incorporating the electric energy based restrictions into vehicle routing problems. One of these restrictions is the limited battery capacity which makes detours to recharging stations necessary, thus requiring efficient tour planning mechanisms in order to sustain the competitiveness of electric vehicles compared to conventional vehicles. We introduce the Electric Fleet Size and Mix Vehicle Routing Problem with Time Windows and recharging stations (E-FSMFTW) to model decisions to be made with regards to fleet composition and the actual vehicle routes including the choice of recharging times and locations. The available vehicle types differ in their transport capacity, battery size and acquisition cost. Furthermore, we consider time windows at customer locations, which is a common and important constraint in real-world routing and planning problems. We solve this problem by means of branch-and-price as well as proposing a hybrid heuristic, which combines an Adaptive Large Neighbourhood Search with an embedded local search and labelling procedure for intensification. By solving a newly created set of benchmark instances for the E-FSMFTW and the existing single vehicle type benchmark using an exact method as well, we show the effectiveness of the proposed approach.",
"A Green Vehicle Routing Problem (G-VRP) is formulated and solution techniques are developed to aid organizations with alternative fuel-powered vehicle fleets in overcoming difficulties that exist as a result of limited vehicle driving range in conjunction with limited refueling infrastructure. The G-VRP is formulated as a mixed integer linear program. Two construction heuristics, the Modified Clarke and Wright Savings heuristic and the Density-Based Clustering Algorithm, and a customized improvement technique, are developed. Results of numerical experiments show that the heuristics perform well. Moreover, problem feasibility depends on customer and station location configurations. Implications of technology adoption on operations are discussed."
]
} |
1608.02305 | 2471190594 | Unmanned aerial vehicles, or drones, have the potential to significantly reduce the cost and time of making last-mile deliveries and responding to emergencies. Despite this potential, little work has gone into developing vehicle routing problems (VRPs) specifically for drone delivery scenarios. Existing VRPs are insufficient for planning drone deliveries: either multiple trips to the depot are not permitted, leading to solutions with excess drones, or the effect of battery and payload weight on energy consumption is not considered, leading to costly or infeasible routes. We propose two multitrip VRPs for drone delivery that address both issues. One minimizes costs subject to a delivery time limit, while the other minimizes the overall delivery time subject to a budget constraint. We mathematically derive and experimentally validate an energy consumption model for multirotor drones, demonstrating that energy consumption varies approximately linearly with payload and battery weight. We use this approximation to derive mixed integer linear programs for our VRPs. We propose a cost function that considers our energy consumption model and drone reuse, and apply it in a simulated annealing (SA) heuristic for finding suboptimal solutions to practical scenarios. To assist drone delivery practitioners with balancing cost and delivery time, the SA heuristic is used to show that the minimum cost has an inverse exponential relationship with the delivery time limit, and the minimum overall delivery time has an inverse exponential relationship with the budget. Numerical results confirm the importance of reusing drones and optimizing battery size in drone delivery VRPs. | A VRP that reuses vehicles is known as a multi-trip vehicle routing problem (MTVRP), first proposed by B. Fleischmann @cite_8 . 
While a number of approaches are available for solving the MTVRP, such as a large neighborhood search algorithm @cite_14 , a hybrid genetic algorithm with a local search operator @cite_35 , a variable neighborhood search algorithm @cite_30 , and a branch-and-price algorithm @cite_38 , none appear to consider energy consumption as a function of vehicle weight. Routes found by the above approaches may be infeasible or unnecessarily costly in a drone delivery scenario, with batteries that are larger than necessary or too heavy to carry. In ssec:battery-optimization we demonstrate that optimizing battery weight can reduce costs. Battery weight cannot be optimized without modeling energy consumption as a function of vehicle weight: the model is used to find the battery energy, and therefore weight, required to complete each route. The DDPs proposed in this paper apply our linear energy consumption model to optimize battery weight and payload weight, ensuring that the routes found are low-cost and feasible. | {
"cite_N": [
"@cite_30",
"@cite_35",
"@cite_38",
"@cite_14",
"@cite_8"
],
"mid": [
"2056203753",
"2081708500",
"1853087932",
"1965882376",
""
],
"abstract": [
"Abstract The vehicle routing problem with multiple trips (VRPMT) is a variant of the standard vehicle routing problem (VRP), where each vehicle can be used more than once during the working period. For this NP-hard problem, we propose a variable neighborhood search algorithm in which four neighborhood structures are designed to find the planning of trips. The algorithm was tested over a set of benchmark problems and the obtained solutions were compared with five previously proposed algorithms. Encouraging results are obtained.",
"We consider the Multi Trip Vehicle Routing Problem, in which a set of geographically scattered customers have to be served by a fleet of vehicles. Each vehicle can perform several trips during the working day. The objective is to minimize the total travel time while respecting temporal and capacity constraints.",
"We investigate the exact solution of the vehicle routing problem with time windows, where multiple trips are allowed for the vehicles. In contrast to previous works in the literature, we specifically consider the case in which it is mandatory to visit all customers and there is no limitation on duration. We develop two branch-and-price frameworks based on two set covering formulations: a traditional one where columns (variables) represent routes, that is, a sequence of consecutive trips, and a second one in which columns are single trips. One important difficulty related to the latter is the way mutual temporal exclusion of trips can be handled. It raises the issue of time discretization when solving the pricing problem. Our dynamic programming algorithm is based on the concept of groups of labels and representative labels. We provide computational results on modified small-sized instances (25 customers) from Solomon’s benchmarks in order to evaluate and compare the two methods. Results show that some difficult instances are out of reach for the first branch-and-price implementation, while they are consistently solved with the second.",
"The vehicle routing problem with multiple routes consists in determining the routing of a fleet of vehicles when each vehicle can perform multiple routes during its operations day. This problem is relevant in applications where the duration of each route is limited, for example when perishable goods are transported. In this work, we assume that a fixed-size fleet of vehicles is available and that it might not be possible to serve all customer requests, due to time constraints. Accordingly, the objective is first to maximize the number of served customers and then, to minimize the total distance traveled by the vehicles. An adaptive large neighborhood search, exploiting the ruin-and-recreate principle, is proposed for solving this problem. The various destruction and reconstruction operators take advantage of the hierarchical nature of the problem by working either at the customer, route or workday level. Computational results on Euclidean instances, derived from well-known benchmark instances, demonstrate the benefits of this multi-level approach.",
""
]
} |
1608.02305 | 2471190594 | Unmanned aerial vehicles, or drones, have the potential to significantly reduce the cost and time of making last-mile deliveries and responding to emergencies. Despite this potential, little work has gone into developing vehicle routing problems (VRPs) specifically for drone delivery scenarios. Existing VRPs are insufficient for planning drone deliveries: either multiple trips to the depot are not permitted, leading to solutions with excess drones, or the effect of battery and payload weight on energy consumption is not considered, leading to costly or infeasible routes. We propose two multitrip VRPs for drone delivery that address both issues. One minimizes costs subject to a delivery time limit, while the other minimizes the overall delivery time subject to a budget constraint. We mathematically derive and experimentally validate an energy consumption model for multirotor drones, demonstrating that energy consumption varies approximately linearly with payload and battery weight. We use this approximation to derive mixed integer linear programs for our VRPs. We propose a cost function that considers our energy consumption model and drone reuse, and apply it in a simulated annealing (SA) heuristic for finding suboptimal solutions to practical scenarios. To assist drone delivery practitioners with balancing cost and delivery time, the SA heuristic is used to show that the minimum cost has an inverse exponential relationship with the delivery time limit, and the minimum overall delivery time has an inverse exponential relationship with the budget. Numerical results confirm the importance of reusing drones and optimizing battery size in drone delivery VRPs. | Our DDPs consider payload weight, similar to the energy minimizing vehicle routing problem introduced in @cite_16 .
The authors of @cite_16 utilize a VRP that factors in the effect of a vehicle's payload on total costs, but is based solely on Newtonian physics, and is not verified against an actual vehicle. Unlike @cite_16 , the work by @cite_34 provides a linear fuel consumption model for trucks based on actual fuel consumption measurements from @cite_10 , along with an SA algorithm for minimizing the cost of routes. The load dependant VRP @cite_9 provides a local search heuristic for doing the same in pickup and delivery scenarios. While these VRPs consider the effect of payload weight on energy consumption, they appear to have been designed with vehicles that have relatively large capacities in mind, as they only let vehicles depart from the depot once. Drones have a limited carrying capacity, so if they can only leave the depot once, a large number will be required to satisfy demand in scenarios with many customers. To reduce the number and therefore cost of drones, we reuse them after they return to the depot. | {
"cite_N": [
"@cite_9",
"@cite_16",
"@cite_10",
"@cite_34"
],
"mid": [
"2049614971",
"1862661734",
"",
"2043459322"
],
"abstract": [
"The present paper examines a Vehicle Routing Problem (VRP) of major practical importance which is referred to as the Load-Dependent VRP (LDVRP). LDVRP is applicable for transportation activities where the weight of the transported cargo accounts for a significant part of the vehicle gross weight. Contrary to the basic VRP which calls for the minimization of the distance travelled, the LDVRP objective is aimed at minimizing the total product of the distance travelled and the gross weight carried along this distance. Thus, it is capable of producing sensible routing plans which take into account the variation of the cargo weight along the vehicle trips. The LDVRP objective is closely related to the total energy requirements of the vehicle fleet, making it a credible alternative when the environmental aspects of transportation activities are examined and optimized. A novel LDVRP extension which considers simultaneous pick-up and delivery service is introduced, formulated and solved for the first time. To deal with large-scale instances of the examined problems, we propose a local-search algorithm. Towards an efficient implementation, the local-search algorithm employs a computational scheme which calculates the complex weighted-distance objective changes in constant time. Solution results are presented for both problems on a variety of well-known test cases demonstrating the effectiveness of the proposed solution approach. The structure of the obtained LDVRP and VRP solutions is compared in pursuit of interesting conclusions on the relative suitability of the two routing models, when the decision maker must deal with the weighted distance objective. In addition, results of a branch-and-cut procedure for small-scale instances of the LDVRP with simultaneous pick-ups and deliveries are reported. Finally, extensive computational experiments have been performed to explore the managerial implications of three key problem characteristics, namely the deviation of customer demands, the cargo to tare weight ratio, as well as the size of the available vehicle fleet.",
"This paper proposes a new cost function based on distance and load of the vehicle for the Capacitated Vehicle Routing Problem. The vehicle-routing problem with this new load-based cost objective is called the Energy Minimizing Vehicle Routing Problem (EMVRP). Integer linear programming formulations with O(n^2) binary variables and O(n^2) constraints are developed for the collection and delivery cases, separately. The proposed models are tested and illustrated by classical Capacitated Vehicle Routing Problem (CVRP) instances from the literature using CPLEX 8.0.",
"",
"Fuel consumption accounts for a large and increasing part of transportation costs. In this paper, the Fuel Consumption Rate (FCR), a factor considered as a load dependant function, is added to the classical capacitated vehicle routing problem (CVRP) to extend traditional studies on CVRP with the objective of minimizing fuel consumption. We present a mathematical optimization model to formally characterize the FCR considered CVRP (FCVRP) as well as a string based version for calculation. A simulated annealing algorithm with a hybrid exchange rule is developed to solve FCVRP and shows good performance on both the traditional CVRP and the FCVRP in substantial computation experiments. The results of the experiments show that the FCVRP model can reduce fuel consumption by 5 on average compared to the CVRP model. Factors causing the variation in fuel consumption are also identified and discussed in this study."
]
} |
1608.02442 | 2950826972 | We present the SC-ABD algorithm that implements sequentially consistent distributed shared memory (DSM). The algorithm tolerates that less than half of the processes are faulty (crash-stop). Compared to the multi-writer ABD algorithm, SC-ABD requires one instead of two round-trips of communication to perform a write operation, and an equal number of round-trips (two) to perform a read operation. Although sequential consistency is not a compositional consistency condition, the provided correctness proof is compositional. | Lamport described sequential consistency @cite_8 . In multiprocessor systems, sequential consistency is widely regarded as the "gold standard", but most multiprocessor systems provide weaker consistency by default, and require that programs use memory fences to achieve sequentially consistent behavior. Proving that a shared memory implementation satisfies sequential consistency is a well-researched problem. Alur, McMillan, and Peled proved that, in general, the sequential consistency verification problem is undecidable @cite_2 . Bingham, Condon, and Hu suggested that the original formulation of sequential consistency, which is not prefix-closed, may be a reason why the verification problem is hard, and suggested two alternative variants of sequential consistency, Decisive Sequential Consistency (DSC) and Past-Time Sequential Consistency (PTSC), that are prefix-closed @cite_4 . Plakal, Sorin, Condon, and Hill use logical (Lamport) clocks as a tool to reason about correctness of their distributed shared memory protocol @cite_7 . Linearizability was described by Herlihy and Wing @cite_0 . Linearizability has the pleasant property that it is a compositional consistency condition. The cost of sequential consistency vs. linearizability was analyzed by Attiya and Welch @cite_5 . They proved that the cost of sequential consistency is lower than the cost of linearizability under reasonable assumptions. | {
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_8",
"@cite_0",
"@cite_2",
"@cite_5"
],
"mid": [
"2107208179",
"2114658430",
"1973501242",
"2101939036",
"2672830324",
"2114925953"
],
"abstract": [
"A memory model specifies a correctness requirement for a distributed shared memory protocol. Sequential consistency (SC) is the most widely researched model; previous work (Alur et al., 1996) has shown that, in general, the SC verification problem is undecidable. We identify two aspects of the formulation found in that work that we consider to be highly unnatural; we call these non-prefix-closedness and prophetic inheritance. We conjecture that preclusion of such behavior yields a decidable version of SC, which we call decisive sequential consistency (DSC). We also introduce a structure called a view window (VW), which retains information about a protocol's history, and we define the notion of a VW-bound, which essentially bounds the size of the VWs needed to maintain DSC. We prove that the class of DSC protocols with VW-bound k is decidable; left conjectured is the hypothesis that all DSC protocols have such a bound, and further that the bound is computable from the protocol description. This hypothesis is true for all real protocols known to us; we verify its truth for the Lazy Caching protocol (Afek et al., 1993).",
"Modern shared-memory multiprocessors use complex memory system implementations that include a variety of non-trivial and interacting optimizations. More time is spent in verifying the correctness of such implementations than in designing the system. In particular, large-scale Distributed Shared Memory (DSM) systems usually rely on a directory cache-coherence protocol to provide the illusion of a sequentially consistent shared address space. Verifying that such a distributed protocol satisfies sequential consistency is a difficult task. Current formal protocol verification techniques [18] complement simulation, but are somewhat nonintuitive to system designers and verifiers, and they do not scale well to practical systems. In this paper, we examine a new reasoning technique that is precise and (we find) intuitive. Our technique is based on Lamport's logical clocks, which were originally used in distributed systems. We make modest extensions to Lamport's logical clocking scheme to assign timestamps to relevant protocol events to construct a total ordering of such events. Such total orderings can be used to verify that the requirements of a particular memory consistency model have been satisfied. We apply Lamport clocks to prove that a non-trivial directory protocol implements sequential consistency. To do this, we describe an SGI Origin 2000-like protocol [12] in detail, provide a timestamping scheme that totally orders all protocol events, and then prove sequential consistency (i.e., a load always returns the value of the “last” store to the same address in timestamp order).",
"The concept of one event happening before another in a distributed system is examined, and is shown to define a partial ordering of the events. A distributed algorithm is given for synchronizing a system of logical clocks which can be used to totally order the events. The use of the total ordering is illustrated with a method for solving synchronization problems. The algorithm is then specialized for synchronizing physical clocks, and a bound is derived on how far out of synchrony the clocks can become.",
"A concurrent object is a data object shared by concurrent processes. Linearizability is a correctness condition for concurrent objects that exploits the semantics of abstract data types. It permits a high degree of concurrency, yet it permits programmers to specify and reason about concurrent objects using known techniques from the sequential domain. Linearizability provides the illusion that each operation applied by concurrent processes takes effect instantaneously at some point between its invocation and its response, implying that the meaning of a concurrent object's operations can be given by pre- and post-conditions. This paper defines linearizability, compares it to other correctness conditions, presents and demonstrates a method for proving the correctness of implementations, and shows how to reason about concurrent objects, given they are linearizable.",
"The notions of serializability, linearizability, and sequential consistency are used in the specification of concurrent systems. We show that the model checking problem for each of these properties can be cast in terms of the containment of one regular language in another regular language shuffled using a semicommutative alphabet. The three model checking problems are shown to be, respectively, in Pspace, in Expspace, and undecidable.",
"The power of two well-known consistency conditions for shared-memory multiprocessors, sequential consistency and linearizability , is compared. The cost measure studied is the worst-case response time in distributed implementations of virtual shared memory supporting one of the two conditions. Three types of shared-memory objects are considered: read write objects, FIFO queues, and stacks. If clocks are only approximately synchronized (or do not exist), then for all three object types it is shown that linearizability is more expensive than sequential consistency. We show that, for all three data types, the worst-case response time is very sensitive to the assumptions that are made about the timing information available to the system. Under the strong assumption that processes have perfectly synchronized clocks, it is shown that sequential consistency and linearizability are equally costly. We present upper bounds for linearizability and matching lower bounds for sequential consistency. The upper bounds are shown by presenting algorithms that use atomic broadcast in a modular fashion. The lower-bound proofs for the approximate case use the technique of “shifting,” first introduced for studying the clock synchronization problem."
]
} |
1608.02239 | 2949379496 | This paper presents a new method for parallel-jaw grasping of isolated objects from depth images, under large gripper pose uncertainty. Whilst most approaches aim to predict the single best grasp pose from an image, our method first predicts a score for every possible grasp pose, which we denote the grasp function. With this, it is possible to achieve grasping robust to the gripper's pose uncertainty, by smoothing the grasp function with the pose uncertainty function. Therefore, if the single best pose is adjacent to a region of poor grasp quality, that pose will no longer be chosen, and instead a pose will be chosen which is surrounded by a region of high grasp quality. To learn this function, we train a Convolutional Neural Network which takes as input a single depth image of an object, and outputs a score for each grasp pose across the image. Training data for this is generated by use of physics simulation and depth image simulation with 3D object meshes, to enable acquisition of sufficient data without requiring exhaustive real-world experiments. We evaluate with both synthetic and real experiments, and show that the learned grasp score is more robust to gripper pose uncertainty than when this uncertainty is not accounted for. | One challenge with deep learning is the need for a very large volume of training data, and the use of manually-labelled images is therefore not suitable for larger-scale training. One alternative approach, which we also adopt, has been to generate training data in simulation, and attempt to minimise the gap between synthetic data and real data. A popular example is the GraspIt! simulator @cite_11 , which processes 3D meshes of objects and computes the stability of a grasp based upon the grasp wrench space. Whilst these methods do not incorporate dynamic effects which are typically involved in real grasping, prediction of static grasps can be achieved to a high accuracy by close analysis of the object shape. 
In @cite_5 , this simulation was used to predict the suitability of a RGBD patch for finger locations in multi-fingered grasping, together with the suitability of each type of hand configuration. The work of @cite_15 used a similar static grasp metric and considered uncertainty in gripper pose, object pose, and frictional coefficients. In @cite_28 , grasping in clutter was achieved by using a static stability heuristic, based on a partial reconstruction of the objects. | {
"cite_N": [
"@cite_28",
"@cite_5",
"@cite_15",
"@cite_11"
],
"mid": [
"2290564286",
"1568751436",
"2414685554",
"1510186039"
],
"abstract": [
"This paper considers the problem of grasp pose detection in point clouds. We follow a general algorithmic structure that first generates a large set of 6-DOF grasp candidates and then classifies each of them as a good or a bad grasp. Our focus in this paper is on improving the second step by using depth sensor scans from large online datasets to train a convolutional neural network. We propose two new representations of grasp candidates, and we quantify the effect of using prior knowledge of two forms: instance or category knowledge of the object to be grasped, and pretraining the network on simulated depth data obtained from idealized CAD models. Our analysis shows that a more informative grasp candidate representation as well as pretraining and prior knowledge significantly improve grasp detection. We evaluate our approach on a Baxter Research Robot and demonstrate an average grasp success rate of 93% in dense clutter. This is a 20% improvement compared to our prior work.",
"This paper presents a method to transfer functional grasps among objects of the same category through contact warping and local replanning. The method transfers implicit knowledge that enables an action on a class of objects for which no explicit grasp or task information has been given in advance. Contact points on the source object are warped based on global and local shape similarities to the target object. These warped contacts are then used to define a hand posture that reaches close to them, while at the same time provides the desired functionality on the object. The approach is tested on different sets of objects with a success rate of 87.5%, and large benefits are shown when compared to a naive technique that only transfers a suitable hand pose to the novel object.",
"This paper presents the Dexterity Network (Dex-Net) 1.0, a dataset of 3D object models and a sampling-based planning algorithm to explore how Cloud Robotics can be used for robust grasp planning. The algorithm uses a Multi-Armed Bandit model with correlated rewards to leverage prior grasps and 3D object models in a growing dataset that currently includes over 10,000 unique 3D object models and 2.5 million parallel-jaw grasps. Each grasp includes an estimate of the probability of force closure under uncertainty in object and gripper pose and friction. Dex-Net 1.0 uses Multi-View Convolutional Neural Networks (MV-CNNs), a new deep learning method for 3D object classification, to provide a similarity metric between objects, and the Google Cloud Platform to simultaneously run up to 1,500 virtual cores, reducing experiment runtime by up to three orders of magnitude. Experiments suggest that correlated bandit techniques can use a cloud-based network of object models to significantly reduce the number of samples required for robust grasp planning. We report on system sensitivity to variations in similarity metrics and in uncertainty in pose and friction. Code and updated information is available at http://berkeleyautomation.github.io/dex-net.",
"A robotic grasping simulator, called Graspit!, is presented as versatile tool for the grasping community. The focus of the grasp analysis has been on force-closure grasps, which are useful for pick-and-place type tasks. This work discusses the different types of world elements and the general robot definition, and presented the robot library. The paper also describes the user interface of Graspit! and present the collision detection and contact determination system. The grasp analysis and visualization method were also presented that allow a user to evaluate a grasp and compute optimal grasping forces. A brief overview of the dynamic simulation system was provided."
]
} |
1608.02239 | 2949379496 | This paper presents a new method for parallel-jaw grasping of isolated objects from depth images, under large gripper pose uncertainty. Whilst most approaches aim to predict the single best grasp pose from an image, our method first predicts a score for every possible grasp pose, which we denote the grasp function. With this, it is possible to achieve grasping robust to the gripper's pose uncertainty, by smoothing the grasp function with the pose uncertainty function. Therefore, if the single best pose is adjacent to a region of poor grasp quality, that pose will no longer be chosen, and instead a pose will be chosen which is surrounded by a region of high grasp quality. To learn this function, we train a Convolutional Neural Network which takes as input a single depth image of an object, and outputs a score for each grasp pose across the image. Training data for this is generated by use of physics simulation and depth image simulation with 3D object meshes, to enable acquisition of sufficient data without requiring exhaustive real-world experiments. We evaluate with both synthetic and real experiments, and show that the learned grasp score is more robust to gripper pose uncertainty than when this uncertainty is not accounted for. | Static metrics for generating training data have their limitations though, due to the ignorance of motion as the object is lifted from the surface. In @cite_16 @cite_0 @cite_1 , it has been shown that dynamic physics simulations offer a more accurate prediction of grasp quality than the standard static metrics. Furthermore, @cite_16 illustrated how good-quality grasps predicted by physics simulations are highly correlated with those predicted by human labellers. | {
"cite_N": [
"@cite_0",
"@cite_16",
"@cite_1"
],
"mid": [
"2088043683",
"1503925285",
""
],
"abstract": [
"Grasp quality metrics which analyze the contact wrench space are commonly used to synthesize and analyze preplanned grasps. Preplanned grasping approaches rely on the robustness of stored solutions. Analyzing the robustness of such solutions for large databases of preplanned grasps is a limiting factor for the applicability of data driven approaches to grasping. In this work, we will focus on the stability of the widely used grasp wrench space epsilon quality metric over a large range of poses in simulation. We examine a large number of grasps from the Columbia Grasp Database for the Barrett hand. We find that in most cases the grasp with the most robust force closure with respect to pose error for a particular object is not the grasp with the highest epsilon quality. We demonstrate that grasps can be reranked by an estimate of the stability of their epsilon quality. We find that the grasps ranked best by this method are successful more often in physical experiments than grasps ranked best by the epsilon quality.",
"We propose a new large-scale database containing grasps that are applied to a large set of objects from numerous categories. These grasps are generated in simulation and are annotated with different grasp stability metrics. We use a descriptive and efficient representation of the local object shape at which each grasp is applied. Given this data, we present a two-fold analysis: (i) We use crowdsourcing to analyze the correlation of the metrics with grasp success as predicted by humans. The results show that the metric based on physics simulation is a more consistent predictor for grasp success than the standard ε-metric. The results also support the hypothesis that human labels are not required for good ground truth grasp data. Instead the physics-metric can be used to generate datasets in simulation that may then be used to bootstrap learning in the real world. (ii) We apply a deep learning method and show that it can better leverage the large-scale database for prediction of grasp success compared to logistic regression. Furthermore, the results suggest that labels based on the physics-metric are less noisy than those from the ε-metric and therefore lead to a better classification performance.",
""
]
} |
1608.02367 | 2953388015 | Our objective is video retrieval based on natural language queries. In addition, we consider the analogous problem of retrieving sentences or generating descriptions given an input video. Recent work has addressed the problem by embedding visual and textual inputs into a common space where semantic similarities correlate to distances. We also adopt the embedding approach, and make the following contributions: First, we utilize web image search in sentence embedding process to disambiguate fine-grained visual concepts. Second, we propose embedding models for sentence, image, and video inputs whose parameters are learned simultaneously. Finally, we show how the proposed model can be applied to description generation. Overall, we observe a clear improvement over the state-of-the-art methods in the video and sentence retrieval tasks. In description generation, the performance level is comparable to the current state-of-the-art, although our embeddings were trained for the retrieval tasks. | The recent success of deep convolutional neural networks (CNNs) together with large-scale visual datasets @cite_4 @cite_24 @cite_22 has resulted in several powerful representation models for images @cite_35 @cite_1 @cite_15 . These CNN-based methods have been successfully applied to various types of computer vision tasks, such as object detection @cite_14 @cite_27 , video summarization @cite_20 , and image description generation @cite_10 @cite_3 . | {
"cite_N": [
"@cite_35",
"@cite_14",
"@cite_4",
"@cite_22",
"@cite_1",
"@cite_3",
"@cite_24",
"@cite_27",
"@cite_15",
"@cite_10",
"@cite_20"
],
"mid": [
"2953360861",
"2102605133",
"",
"2117539524",
"219040644",
"",
"1889081078",
"2613718673",
"2950307714",
"2951912364",
"1904325426"
],
"abstract": [
"We evaluate whether features extracted from the activation of a deep convolutional network trained in a fully supervised fashion on a large, fixed set of object recognition tasks can be re-purposed to novel generic tasks. Our generic tasks may differ significantly from the originally trained tasks and there may be insufficient labeled or unlabeled data to conventionally train or adapt a deep architecture to the new tasks. We investigate and visualize the semantic clustering of deep convolutional features with respect to a variety of such tasks, including scene recognition, domain adaptation, and fine-grained recognition challenges. We compare the efficacy of relying on various network levels to define a fixed feature, and report novel results that significantly outperform the state-of-the-art on several important vision challenges. We are releasing DeCAF, an open-source implementation of these deep convolutional activation features, along with all associated network parameters to enable vision researchers to be able to conduct experimentation with deep representations across a range of visual concept learning paradigms.",
"Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.",
"",
"The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the 5 years of the challenge, and propose future directions and improvements.",
"Is strong supervision necessary for learning a good visual representation? Do we really need millions of semantically-labeled images to train a Convolutional Neural Network (CNN)? In this paper, we present a simple yet surprisingly powerful approach for unsupervised learning of CNN. Specifically, we use hundreds of thousands of unlabeled videos from the web to learn visual representations. Our key idea is that visual tracking provides the supervision. That is, two patches connected by a track should have similar visual representation in deep feature space since they probably belong to same object or object part. We design a Siamese-triplet network with a ranking loss function to train this CNN representation. Without using a single image from ImageNet, just using 100K unlabeled videos and the VOC 2012 dataset, we train an ensemble of unsupervised networks that achieves 52% mAP (no bounding box regression). This performance comes tantalizingly close to its ImageNet-supervised counterpart, an ensemble which achieves a mAP of 54.4%. We also show that our unsupervised network can perform competitively in other tasks such as surface-normal estimation.",
"",
"In this paper we describe the Microsoft COCO Caption dataset and evaluation server. When completed, the dataset will contain over one and a half million captions describing over 330,000 images. For the training and validation images, five independent human generated captions will be provided. To ensure consistency in evaluation of automatic caption generation algorithms, an evaluation server is used. The evaluation server receives candidate captions and scores them using several popular metrics, including BLEU, METEOR, ROUGE and CIDEr. Instructions for using the evaluation server are provided.",
"State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2% mAP) and 2012 (70.4% mAP) using 300 proposals per image. Code is available at https://github.com/ShaoqingRen/faster_rcnn.",
"Recent progress in using recurrent neural networks (RNNs) for image description has motivated the exploration of their application for video description. However, while images are static, working with videos requires modeling their dynamic temporal structure and then properly integrating that information into a natural language description. In this context, we propose an approach that successfully takes into account both the local and global temporal structure of videos to produce descriptions. First, our approach incorporates a spatial temporal 3-D convolutional neural network (3-D CNN) representation of the short temporal dynamics. The 3-D CNN representation is trained on video action recognition tasks, so as to produce a representation that is tuned to human motion and behavior. Second we propose a temporal attention mechanism that allows to go beyond local temporal modeling and learns to automatically select the most relevant temporal segments given the text-generating RNN. Our approach exceeds the current state-of-art for both BLEU and METEOR metrics on the Youtube2Text dataset. We also present results on a new, larger and more challenging dataset of paired video and natural language descriptions.",
"Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. In this paper, we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image. Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify both qualitatively and quantitatively. For instance, while the current state-of-the-art BLEU-1 score (the higher the better) on the Pascal dataset is 25, our approach yields 59, to be compared to human performance around 69. We also show BLEU-1 score improvements on Flickr30k, from 56 to 66, and on SBU, from 19 to 28. Lastly, on the newly released COCO dataset, we achieve a BLEU-4 of 27.7, which is the current state-of-the-art.",
"We present a novel method for summarizing raw, casually captured videos. The objective is to create a short summary that still conveys the story. It should thus be both interesting and representative for the input video. Previous methods often used simplified assumptions and only optimized for one of these goals. Alternatively, they used hand-defined objectives that were optimized sequentially by making consecutive hard decisions. This limits their use to a particular setting. Instead, we introduce a new method that (i) uses a supervised approach in order to learn the importance of global characteristics of a summary and (ii) jointly optimizes for multiple objectives and thus creates summaries that possess multiple properties of a good summary. Experiments on two challenging and very diverse datasets demonstrate the effectiveness of our method, where we outperform or match current state-of-the-art."
]
} |
1608.02367 | 2953388015 | Our objective is video retrieval based on natural language queries. In addition, we consider the analogous problem of retrieving sentences or generating descriptions given an input video. Recent work has addressed the problem by embedding visual and textual inputs into a common space where semantic similarities correlate to distances. We also adopt the embedding approach, and make the following contributions: First, we utilize web image search in sentence embedding process to disambiguate fine-grained visual concepts. Second, we propose embedding models for sentence, image, and video inputs whose parameters are learned simultaneously. Finally, we show how the proposed model can be applied to description generation. Overall, we observe a clear improvement over the state-of-the-art methods in the video and sentence retrieval tasks. In description generation, the performance level is comparable to the current state-of-the-art, although our embeddings were trained for the retrieval tasks. | Representation learning using deep neural networks is explored in many tasks @cite_16 @cite_8 @cite_30 @cite_32 @cite_18 @cite_11. Frome et al. @cite_30 proposed image classification by computing similarity between joint representations of images and labels, and Zhu et al. @cite_11 addressed alignment of a movie and sentences in a book using joint representations for video clips and sentences. Their approach also computes similarity between sentences and subtitles of video clips to improve the alignment of video clips and sentences. | {
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_8",
"@cite_32",
"@cite_16",
"@cite_11"
],
"mid": [
"2123024445",
"877909479",
"1946093182",
"2953276893",
"2157364932",
"2949433733"
],
"abstract": [
"Modern visual recognition systems are often limited in their ability to scale to large numbers of object categories. This limitation is in part due to the increasing difficulty of acquiring sufficient training data in the form of labeled images as the number of object categories grows. One remedy is to leverage data from other sources - such as text data - both to train visual models and to constrain their predictions. In this paper we present a new deep visual-semantic embedding model trained to identify visual objects using both labeled image data as well as semantic information gleaned from unannotated text. We demonstrate that this model matches state-of-the-art performance on the 1000-class ImageNet object recognition challenge while making more semantically reasonable errors, and also show that the semantic information can be exploited to make predictions about tens of thousands of image labels not observed during training. Semantic knowledge improves such zero-shot predictions achieving hit rates of up to 18% across thousands of novel labels never seen by the visual model.",
"Recently, joint video-language modeling has been attracting more and more attention. However, most existing approaches focus on exploring the language model upon on a fixed visual model. In this paper, we propose a unified framework that jointly models video and the corresponding text sentences. The framework consists of three parts: a compositional semantics language model, a deep video model and a joint embedding model. In our language model, we propose a dependency-tree structure model that embeds sentence into a continuous vector space, which preserves visually grounded meanings and word order. In the visual model, we leverage deep neural networks to capture essential semantic information from videos. In the joint embedding model, we minimize the distance of the outputs of the deep video model and compositional language model in the joint space, and update these two models jointly. Based on these three parts, our system is able to accomplish three tasks: 1) natural language generation, and 2) video retrieval and 3) language retrieval. In the experiments, the results show our approach outperforms SVM, CRF and CCA baselines in predicting Subject-Verb-Object triplet and natural sentence generation, and is better than CCA in video retrieval and language retrieval tasks.",
"The recent availability of geo-tagged images and rich geospatial data has inspired a number of algorithms for image based geolocalization. Most approaches predict the location of a query image by matching to ground-level images with known locations (e.g., street-view data). However, most of the Earth does not have ground-level reference photos available. Fortunately, more complete coverage is provided by oblique aerial or “bird's eye” imagery. In this work, we localize a ground-level query image by matching it to a reference database of aerial imagery. We use publicly available data to build a dataset of 78K aligned crossview image pairs. The primary challenge for this task is that traditional computer vision approaches cannot handle the wide baseline and appearance variation of these cross-view pairs. We use our dataset to learn a feature representation in which matching views are near one another and mismatched views are far apart. Our proposed approach, Where-CNN, is inspired by deep learning success in face verification and achieves significant improvements over traditional hand-crafted features and existing deep features learned from other large-scale databases. We show the effectiveness of Where-CNN in finding matches between street view and aerial view imagery and demonstrate the ability of our learned features to generalize to novel locations.",
"We introduce a model for bidirectional retrieval of images and sentences through a multi-modal embedding of visual and natural language data. Unlike previous models that directly map images or sentences into a common embedding space, our model works on a finer level and embeds fragments of images (objects) and fragments of sentences (typed dependency tree relations) into a common space. In addition to a ranking objective seen in previous work, this allows us to add a new fragment alignment objective that learns to directly associate these fragments across modalities. Extensive experimental evaluation shows that reasoning on both the global level of images and sentences and the finer level of their respective fragments significantly improves performance on image-sentence retrieval tasks. Additionally, our model provides interpretable predictions since the inferred inter-modal fragment alignment is explicit.",
"We present a method for training a similarity metric from data. The method can be used for recognition or verification applications where the number of categories is very large and not known during training, and where the number of training samples for a single category is very small. The idea is to learn a function that maps input patterns into a target space such that the L1 norm in the target space approximates the \"semantic\" distance in the input space. The method is applied to a face verification task. The learning process minimizes a discriminative loss function that drives the similarity metric to be small for pairs of faces from the same person, and large for pairs from different persons. The mapping from raw to the target space is a convolutional network whose architecture is designed for robustness to geometric distortions. The system is tested on the Purdue AR face database which has a very high degree of variability in the pose, lighting, expression, position, and artificial occlusions such as dark glasses and obscuring scarves.",
"Books are a rich source of both fine-grained information, how a character, an object or a scene looks like, as well as high-level semantics, what someone is thinking, feeling and how these states evolve through a story. This paper aims to align books to their movie releases in order to provide rich descriptive explanations for visual content that go semantically far beyond the captions available in current datasets. To align movies and books we exploit a neural sentence embedding that is trained in an unsupervised way from a large corpus of books, as well as a video-text neural embedding for computing similarities between movie clips and sentences in the book. We propose a context-aware CNN to combine information from multiple sources. We demonstrate good quantitative performance for movie book alignment and show several qualitative examples that showcase the diversity of tasks our model can be used for."
]
} |
1608.02367 | 2953388015 | Our objective is video retrieval based on natural language queries. In addition, we consider the analogous problem of retrieving sentences or generating descriptions given an input video. Recent work has addressed the problem by embedding visual and textual inputs into a common space where semantic similarities correlate to distances. We also adopt the embedding approach, and make the following contributions: First, we utilize web image search in sentence embedding process to disambiguate fine-grained visual concepts. Second, we propose embedding models for sentence, image, and video inputs whose parameters are learned simultaneously. Finally, we show how the proposed model can be applied to description generation. Overall, we observe a clear improvement over the state-of-the-art methods in the video and sentence retrieval tasks. In description generation, the performance level is comparable to the current state-of-the-art, although our embeddings were trained for the retrieval tasks. | Our approach is the closest to work by Xu al @cite_18 . They represent a sentence by a subject, verb, and object (SVO) triplet, and embed sentences as well as videos to a common vector space using deep neural networks. The main difference between ours and the work @cite_18 is the use of an RNN to encode a sentence and supplementary web images. The use of an RNN enables our model to encode all words in a sentence and capture details of the sentence, such as an object's attributes and scenes, together with corresponding web images. | {
"cite_N": [
"@cite_18"
],
"mid": [
"877909479"
],
"abstract": [
"Recently, joint video-language modeling has been attracting more and more attention. However, most existing approaches focus on exploring the language model on top of a fixed visual model. In this paper, we propose a unified framework that jointly models video and the corresponding text sentences. The framework consists of three parts: a compositional semantics language model, a deep video model and a joint embedding model. In our language model, we propose a dependency-tree structure model that embeds a sentence into a continuous vector space, which preserves visually grounded meanings and word order. In the visual model, we leverage deep neural networks to capture essential semantic information from videos. In the joint embedding model, we minimize the distance of the outputs of the deep video model and compositional language model in the joint space, and update these two models jointly. Based on these three parts, our system is able to accomplish three tasks: 1) natural language generation, 2) video retrieval, and 3) language retrieval. In the experiments, the results show our approach outperforms SVM, CRF and CCA baselines in predicting Subject-Verb-Object triplets and natural sentence generation, and is better than CCA in video retrieval and language retrieval tasks."
]
} |
1608.02367 | 2953388015 | Our objective is video retrieval based on natural language queries. In addition, we consider the analogous problem of retrieving sentences or generating descriptions given an input video. Recent work has addressed the problem by embedding visual and textual inputs into a common space where semantic similarities correlate to distances. We also adopt the embedding approach, and make the following contributions: First, we utilize web image search in sentence embedding process to disambiguate fine-grained visual concepts. Second, we propose embedding models for sentence, image, and video inputs whose parameters are learned simultaneously. Finally, we show how the proposed model can be applied to description generation. Overall, we observe a clear improvement over the state-of-the-art methods in the video and sentence retrieval tasks. In description generation, the performance level is comparable to the current state-of-the-art, although our embeddings were trained for the retrieval tasks. | * Exploiting Image Search: The idea of exploiting web image search is adopted in many tasks, including object classification @cite_21 and video summarization @cite_25 . These approaches collect a vast amount of images from the web and utilize them to extract canonical visual concepts. Recent label prediction for images by Johnson al @cite_19 infers tags of target images by mining relevant Flickr images based on their metadata, such as user tags and photo groups curated by users. The relevant images serve as priors on tags for the target image. A similar motivation drives us to utilize web images for each sentence, which can disambiguate visual concepts of the sentence and highlight relevant target videos. | {
"cite_N": [
"@cite_19",
"@cite_21",
"@cite_25"
],
"mid": [
"1908139891",
"2172191903",
""
],
"abstract": [
"Some images that are difficult to recognize on their own may become more clear in the context of a neighborhood of related images with similar social-network metadata. We build on this intuition to improve multilabel image annotation. Our model uses image metadata nonparametrically to generate neighborhoods of related images using Jaccard similarities, then uses a deep neural network to blend visual information from the image and its neighbors. Prior work typically models image metadata parametrically; in contrast, our nonparametric treatment allows our model to perform well even when the vocabulary of metadata changes between training and testing. We perform comprehensive experiments on the NUS-WIDE dataset, where we show that our model outperforms state-of-the-art methods for multilabel image annotation even when our model is forced to generalize to new types of metadata.",
"Current approaches to object category recognition require datasets of training images to be manually prepared, with varying degrees of supervision. We present an approach that can learn an object category from just its name, by utilizing the raw output of image search engines available on the Internet. We develop a new model, TSI-pLSA, which extends pLSA (as applied to visual words) to include spatial information in a translation and scale invariant manner. Our approach can handle the high intra-class variability and large proportion of unrelated images returned by search engines. We evaluate the models on standard test sets, showing performance competitive with existing methods trained on hand prepared datasets.",
""
]
} |
1608.02432 | 2950128111 | There has been a wide interest in designing distributed algorithms for tiny robots. In particular, it has been shown that the robots can complete certain tasks even in the presence of faulty robots. In this paper, we focus on gathering of all non-faulty robots at a single point in the presence of faulty robots. We propose a wait-free algorithm (i.e., no robot waits for another robot, and the algorithm instructs each robot to move in every step, unless it is already at the gathering location) that gathers all non-faulty robots in the semi-synchronous model without any agreement on the coordinate system and with weak multiplicity detection (i.e., a robot can only detect that there is either one or more than one robot at a location) in the presence of at most @math faulty robots for @math . We show that the required capability for gathering robots is minimal in the above model, since relaxing it further makes gathering impossible to solve. Also, we introduce an intermediate scheduling model between the asynchronous (i.e., no instantaneous movement or computation) and the semi-synchronous (i.e., both instantaneous movement and computation) models, namely the asynchronous model with instantaneous computation. Then we propose another algorithm in this model for gathering all non-faulty robots with weak multiplicity detection without any agreement on the coordinate system in the presence of at most @math faulty robots for @math . | Some of the common problems for these multi-robot systems include leader election: all robots agree on a leader among themselves @cite_12 @cite_3 @cite_15 ; gathering: all robots gather at a single point @cite_16 ; convergence: the robots come very close to each other @cite_6 ; and pattern formation: the robots imitate a given pattern on the plane @cite_16 . The gathering problem has been studied for different models, including the fully synchronous (FSYNC), semi-synchronous (SSYNC) and asynchronous (ASYNC) models. In the FSYNC model, the gathering problem has been solved without any additional assumptions beyond the basic model @cite_5 . 
In @cite_16 , the impossibility of gathering for @math without assumptions on local coordinate system agreement is proved for both the semi-synchronous and asynchronous models. Also, for @math it is impossible to solve gathering without assumptions on either coordinate system agreement or multiplicity detection @cite_14 . Gathering with multiplicity detection has been studied in @cite_8 . A practical implementation of non-transparent fat robots with omnidirectional cameras opened up several new algorithmic issues @cite_11 . The authors of @cite_12 proposed a deterministic algorithm for leader election and gathering for transparent fat robots without a common sense of direction or chirality. Common chirality is basically a common clockwise order. | {
"cite_N": [
"@cite_14",
"@cite_8",
"@cite_3",
"@cite_6",
"@cite_5",
"@cite_15",
"@cite_16",
"@cite_12",
"@cite_11"
],
"mid": [
"2119517221",
"2087073465",
"1936642258",
"2563454818",
"2041571902",
"2159467419",
"2044484214",
"",
""
],
"abstract": [
"Given a set of n autonomous mobile robots that can freely move on a two dimensional plane, they are required to gather in a position on the plane not fixed in advance (Gathering Problem). The main research question we address in this paper is: Under which conditions can this task be accomplished by the robots? The studied robots are quite simple: they are anonymous, totally asynchronous, they do not have any memory of past computations, they cannot explicitly communicate between each other. We show that this simple task cannot be in general accomplished by the considered system of robots.",
"Consider a set of @math identical mobile computational entities in the plane, called robots, operating in Look-Compute-Move cycles, without any means of direct communication. The Gathering Problem is the primitive task of all entities gathering in finite time at a point not fixed in advance, without any external control. The problem has been extensively studied in the literature under a variety of strong assumptions (e.g., synchronicity of the cycles, instantaneous movements, complete memory of the past, common coordinate system, etc.). In this paper we consider the setting without those assumptions, that is, when the entities are oblivious (i.e., they do not remember results and observations from previous cycles), disoriented (i.e., have no common coordinate system), and fully asynchronous (i.e., no assumptions exist on timing of cycles and activities within a cycle). The existing algorithmic contributions for such robots are limited to solutions for @math or for restricted sets of initial configura...",
"A leader election algorithm based on the performance and the operation rate of nodes and links is proposed. The performance of the existing leader election algorithms should be improved because a distributed system pauses until a node becomes a unique leader among all nodes in the system. The pre-election algorithm that elects a provisional leader while the leader is executing is also introduced.",
"The common theoretical model adopted in recent studies on algorithms for systems of autonomous mobile robots assumes that the positional input of the robots is obtained by perfectly accurate visual sensors, that robot movements are accurate, and that internal calculations performed by the robots on (real) coordinates are perfectly accurate as well. The current paper concentrates on the effect of weakening this rather strong set of assumptions, and replacing it with the more realistic assumption that the robot sensors, movement and internal calculations may have slight inaccuracies. Specifically, the paper concentrates on the ability of robot systems with inaccurate sensors, movements and calculations to carry out the task of convergence. The paper presents several impossibility results, limiting the inaccuracy allowing convergence. The main positive result is an algorithm for convergence under bounded measurement, movement and calculation errors.",
"This paper considers the convergence problem in autonomous mobile robot systems. A natural algorithm for the problem requires the robots to move towards their center of gravity. This paper proves the correctness of the gravitational algorithm in the fully asynchronous model. It also analyzes its convergence rate and establishes its convergence in the presence of crash faults.",
"In a previous paper, Garcia-Molina specifies the leader election problem for synchronous and asynchronous distributed systems with crash and link failures and gives an elegant algorithm for each type of system. This paper points out a flaw in Garcia-Molina's specification of leader election in asynchronous systems and proposes a new specification.",
"In this note we make a minor correction to a scheme for robots to broadcast their private information. All major results of the paper [I. Suzuki and M. Yamashita, SIAM J. Comput., 28 (1999), pp. 1347-1363] hold with this correction.",
"",
""
]
} |
1608.02495 | 2501650229 | We use @math algebras to define open Gromov-Witten invariants with both boundary and interior constraints, associated to a Lagrangian submanifold @math of arbitrary odd dimension. The boundary constraints are bounding chains, which are shown to behave like points. The interior constraints are arbitrary even degree classes in the cohomology of @math relative to @math . We show the invariants satisfy analogs of the axioms of closed Gromov-Witten theory. Our definition of invariants depends on the vanishing of a series of obstruction classes. One way to show vanishing is to impose certain cohomological conditions on @math . Alternatively, if there exists an anti-symplectic involution fixing @math , then part of the obstructions vanish a priori, and weaker cohomological conditions suffice to guarantee vanishing of the remaining obstructions. In particular, our definition generalizes both Welschinger's and Georgieva's real enumerative invariants. | Welschinger @cite_14 @cite_19 corrects disk bubbling by taking into account linking numbers, thus obtaining invariants in dimensions @math and @math . We expect this approach is closely related to the present paper. Another related approach is being developed by Tessler @cite_10 . Zernik @cite_20 @cite_34 follows an approach closely related to the present work to define equivariant open Gromov-Witten invariants and give an equivariant localization formula for them. Biran-Cornea @cite_27 define an invariant counting disks of a given degree through three points, arising as the discriminant of a quadratic form associated to the Lagrangian quantum product. Cho @cite_43 gives an example of an open Gromov-Witten invariant for the Clifford torus in @math . It would be interesting to find a connection between either of these invariants and the present paper. | {
"cite_N": [
"@cite_14",
"@cite_19",
"@cite_27",
"@cite_43",
"@cite_34",
"@cite_10",
"@cite_20"
],
"mid": [
"2964237766",
"2015059094",
"1968701374",
"2963868479",
"",
"",
"2219763529"
],
"abstract": [
"Given a closed orientable Lagrangian surface L in a closed symplectic four-manifold X together with a relative homology class d in H_2 (X , L ; Z) with vanishing boundary in H_1 (L ; Z), we prove that the algebraic number of J-holomorphic discs with boundary on L, homologous to d and passing through the adequate number of points neither depends on the choice of the points nor on the generic choice of the almost-complex structure J. We furthermore get analogous open Gromov-Witten invariants by counting, for every non-negative integer k, unions of k discs instead of single discs.",
"Let L be a closed orientable Lagrangian submanifold of a closed symplectic six-manifold (X, ω). We assume that the first homology group H_1(L; A) with coefficients in a commutative ring A injects into the group H_1(X; A) and that X contains no Maslov zero pseudo-holomorphic disc with boundary on L. Then, we prove that for every generic choice of a tame almost-complex structure J on X, every relative homology class d in H_2(X, L; Z) and adequate number of incidence conditions in L or X, the weighted number of J-holomorphic discs with boundary on L, homologous to d, and either irreducible or reducible disconnected, which satisfy the conditions, does not depend on the generic choice of J, provided that at least one incidence condition lies in L. These numbers thus define open Gromov-Witten invariants in dimension six, taking values in the ring A.",
"We use the \"pearl\" machinery in our previous work to study certain enumerative invariants associated to monotone Lagrangian submanifolds.",
"We first compute three-point open Gromov-Witten numbers of Lagrangian torus fibers in toric Fano manifolds, and show that they depend on the choice of three points, hence they are not invariants. We show that for cyclic A-infinity algebras, such counting may be defined up to Hochschild or cyclic boundary elements. In particular we obtain a well-defined function on the Hochschild or cyclic homology of a cyclic A-infinity algebra, which has an invariance property under cyclic A-infinity homomorphisms.",
"",
"",
"We set up an algebraic framework for the study of pseudoholomorphic discs bounding nonorientable Lagrangians, as well as equivariant extensions of such structures arising from a torus action. First, we define unital cyclic twisted @math algebras and prove some basic results about them, including a homological perturbation lemma which allows one to construct minimal models of such algebras. We then construct an equivariant extension of @math algebras which are invariant under a torus action on the underlying complex. Finally, we construct a homotopy retraction of the Cartan-Weil complex to equivariant cohomology, which allows us to construct minimal models for equivariant cyclic twisted @math algebras. In a forthcoming paper we will use these results to define and obtain fixed-point expressions for the open Gromov-Witten theory of @math , as well as its equivariant extension."
]
} |
1608.02165 | 2952684211 | We introduce a new method for location recovery from pair-wise directions that leverages an efficient convex program that comes with exact recovery guarantees, even in the presence of adversarial outliers. When pairwise directions represent scaled relative positions between pairs of views (estimated for instance with epipolar geometry) our method can be used for location recovery, that is the determination of relative pose up to a single unknown scale. For this task, our method yields performance comparable to the state-of-the-art with an order of magnitude speed-up. Our proposed numerical framework is flexible in that it accommodates other approaches to location recovery and can be used to speed up other methods. These properties are demonstrated by extensively testing against state-of-the-art methods for location recovery on 13 large, irregular collections of images of real scenes in addition to simulated data with ground truth. | There are many efficient and stable algorithms for estimating global camera rotations @cite_19 @cite_6 @cite_3 @cite_12 @cite_4 @cite_11 @cite_24 @cite_18 @cite_0 @cite_21 @cite_7 @cite_15 . Empirically, @cite_9 demonstrates that a combination of filtering, factorization, and local refinement can accurately estimate 3d rotations. Theoretically, @cite_16 prove that rotations can be exactly and stably recovered for a synthetic model by a least unsquared deviation approach on a semidefinite relaxation. Alternatively, in many applications, such as location services from mobile platforms, augmented reality, and robotics, orientation can be estimated far more reliably than location and scale due to the relatively small gyrometer bias compared to the doubly-integrated accelerometer bias and global orientation references provided by gravity and magnetic field. | {
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_7",
"@cite_9",
"@cite_21",
"@cite_3",
"@cite_6",
"@cite_24",
"@cite_19",
"@cite_0",
"@cite_15",
"@cite_16",
"@cite_12",
"@cite_11"
],
"mid": [
"1993228150",
"2107156416",
"2142753612",
"2464339273",
"2084613528",
"2058702778",
"2007806707",
"2155094592",
"2071616405",
"2171244244",
"117099968",
"2963537367",
"2153405862",
"2105413810"
],
"abstract": [
"We consider the problem of rotation averaging under the L1 norm. This problem is related to the classic Fermat-Weber problem for finding the geometric median of a set of points in R^n. We apply the classical Weiszfeld algorithm to this problem, adapting it iteratively in tangent spaces of SO(3) to obtain a provably convergent algorithm for finding the L1 mean. This results in an extremely simple and rapid averaging algorithm, without the need for line search. The choice of the L1 mean (also called geometric median) is motivated by its greater robustness compared with rotation averaging under the L2 norm (the usual averaging process). We apply this approach to both single-rotation averaging (under which the algorithm provably finds the global L1 optimum) and multiple rotation averaging (for which no such proof exists). The algorithm is demonstrated to give markedly improved results, compared with L2 averaging. We achieve a median rotation error of 0.82 degrees on the 595 images of the Notre Dame image set.",
"Multiple rotation averaging is an important problem in computer vision. The problem is challenging because of the nonlinear constraints required to represent the set of rotations. To our knowledge no one has proposed any globally optimal solution for the case of simultaneous updates of the rotations. In this paper we propose a simple procedure based on Lagrangian duality that can be used to verify global optimality of a local solution, by solving a linear system of equations. We show experimentally on real and synthetic data that unless the noise levels are extremely high this procedure always generates the globally optimal solution.",
"",
"We consider the problem of robust rotation optimization in Structure from Motion applications. A number of different approaches have been recently proposed, with solutions that are at times incompatible, and at times complementary. The goal of this paper is to survey and compare these ideas in a unified manner, and to benchmark their robustness against the presence of outliers. In all, we have tested more than forty variants of a these methods (including novel ones), and we find the best performing combination.",
"Multi-view structure from motion (SfM) estimates the position and orientation of pictures in a common 3D coordinate frame. When views are treated incrementally, this external calibration can be subject to drift, contrary to global methods that distribute residual errors evenly. We propose a new global calibration approach based on the fusion of relative motions between image pairs. We improve an existing method for robustly computing global rotations. We present an efficient a contrario trifocal tensor estimation method, from which stable and precise translation directions can be extracted. We also define an efficient translation registration method that recovers accurate camera positions. These components are combined into an original SfM pipeline. Our experiments show that, on most datasets, it outperforms in accuracy other existing incremental and global pipelines. It also achieves strikingly good running times: it is about 20 times faster than the other global method we could compare to, and as fast as the best incremental method. More importantly, it features better scalability properties.",
"Let G=(V, A) denote a simple connected directed graph, and let n=|V|, m=|A|, where n-1 ≤ m ≤ n(n-1)/2. A feedback arc set (FAS) of G, denoted R(G), is a (possibly empty) set of arcs whose reversal makes G acyclic. A minimum feedback arc set of G, denoted R∗(G), is a FAS of minimum cardinality r∗(G); the computation of R∗(G) is called the FAS problem. Berger and Shor have recently published an algorithm which, for a given digraph G, computes a FAS whose cardinality is at most m/2 - c1·m/Δ^(1/2), where Δ is the maximum degree of G and c1 is a constant. Further, they exhibited an infinite class of graphs with the property that for every G in this class and some constant c2, r∗(G) ≥ m/2 - c2·m/Δ^(1/2). Thus the Berger-Shor algorithm provides, in a certain asymptotic sense, an optimal solution to the FAS problem. Unfortunately, the Berger-Shor algorithm is complicated and requires running time O(mn). In this paper we present a simple FAS algorithm which guarantees a good (though not optimal) performance bound and executes in time O(m). Further, for the sparse graphs which arise frequently in graph drawing and other applications, our algorithm achieves the same asymptotic performance bound that Berger-Shor does.",
"In this paper we address the problem of robust and efficient averaging of relative 3D rotations. Apart from having an interesting geometric structure, robust rotation averaging addresses the need for a good initialization for large scale optimization used in structure-from-motion pipelines. Such pipelines often use unstructured image datasets harvested from the internet thereby requiring an initialization method that is robust to outliers. Our approach works on the Lie group structure of 3D rotations and solves the problem of large-scale robust rotation averaging in two ways. Firstly, we use modern l1 optimizers to carry out robust averaging of relative rotations that is efficient, scalable and robust to outliers. In addition, we also develop a two step method that uses the l1 solution as an initialisation for an iteratively reweighted least squares (IRLS) approach. These methods achieve excellent results on large-scale, real world datasets and significantly outperform existing methods, i.e. the state-of-the-art discrete-continuous optimization method of [3] as well as the Weiszfeld method of [8]. We demonstrate the efficacy of our method on two large scale real world datasets and also provide the results of the two aforementioned methods for comparison.",
"While motion estimation has been extensively studied in the computer vision literature, the inherent information redundancy in an image sequence has not been well utilised. In particular, as many as N(N-1)/2 pairwise relative motions can be estimated efficiently from a sequence of N images. This highly redundant set of observations can be efficiently averaged, resulting in fast motion estimation algorithms that are globally consistent. In this paper we demonstrate this using the underlying Lie-group structure of motion representations. The Lie-algebras of the Special Orthogonal and Special Euclidean groups are used to define averages on the Lie-group, which in turn gives statistically meaningful, efficient and accurate algorithms for fusing motion information. Using multiple constraints also controls the drift in the solution due to accumulating error. The performance of the method in estimating camera motion is demonstrated on image sequences.",
"Multiview structure recovery from a collection of images requires the recovery of the positions and orientations of the cameras relative to a global coordinate system. Our approach recovers camera motion as a sequence of two global optimizations. First, pair wise Essential Matrices are used to recover the global rotations by applying robust optimization using either spectral or semi definite programming relaxations. Then, we directly employ feature correspondences across images to recover the global translation vectors using a linear algorithm based on a novel decomposition of the Essential Matrix. Our method is efficient and, as demonstrated in our experiments, achieves highly accurate results on collections of real images for which ground truth measurements are available.",
"It is known that the problem of multiview reconstruction can be solved in two steps: first estimate camera rotations and then translations using them. This paper presents new robust techniques for both of these steps, (i) Given pair-wise relative rotations, global camera rotations are estimated linearly in least squares, (ii) Camera translations are estimated using a standard technique based on Second Order Cone Programming. Robustness is achieved by using only a subset of points according to a new criterion that diminishes the risk of choosing a mismatch. It is shown that only four points chosen in a special way are sufficient to represent a pairwise reconstruction almost equally as all points. This leads to a significant speedup. In image sets with repetitive or similar structures, non-existent epipolar geometries may be found. Due to them, some rotations and consequently translations may be estimated incorrectly. It is shown that iterative removal of pairwise reconstructions with the largest residual and reregistration removes most non-existent epipolar geometries. The performance of the proposed method is demonstrated on difficult wide base-line image sets.",
"We present a new structure from motion (Sfm) technique based on point and vanishing point (VP) matches in images. First, all global camera rotations are computed from VP matches as well as relative rotation estimates obtained from pairwise image matches. A new multi-staged linear technique is then used to estimate all camera translations and 3D points simultaneously. The proposed method involves first performing pairwise reconstructions, then robustly aligning these in pairs, and finally aligning all of them globally by simultaneously estimating their unknown relative scales and translations. In doing so, measurements inconsistent in three views are efficiently removed. Unlike sequential Sfm, the proposed method treats all images equally, is easy to parallelize and does not require intermediate bundle adjustments. There is also a reduction of drift and significant speedups up to two order of magnitude over sequential Sfm. We compare our method with a standard Sfm pipeline [1] and demonstrate that our linear estimates are accurate on a variety of datasets, and can serve as good initializations for final bundle adjustment. Because we exploit VPs when available, our approach is particularly well-suited to the reconstruction of man-made scenes.",
"The problem has found applications in computer vision, computer graphics, and sensor network localization, among others. Its least squares solution can be approximated by either spectral relaxation or semidefinite programming followed by a rounding procedure, analogous to the approximation algorithms of MAX-CUT. The contribution of this paper is three-fold: First, we introduce a robust penalty function involving the sum of unsquared deviations and derive a relaxation that leads to a convex optimization problem; Second, we apply the alternating direction method to minimize the penalty function; Finally, under a specific model of the measurement noise and for both complete and random measurement graphs, we prove that the rotations are exactly and stably recovered, exhibiting a phase transition behavior in terms of the proportion of noisy measurements. Numerical simulations confirm the phase transition behavior for our method as well as its improved accuracy compared to existing methods.",
"Prior work on multi-view structure from motion is dominated by sequential approaches starting from a single two-view reconstruction, then adding new images one by one. In contrast, we propose a non-sequential methodology based on rotational consistency and robust estimation using convex optimization. The resulting system is more robust with respect to (i) unreliable two-view estimations caused by short baselines, (ii) repetitive scenes with locally consistent structures that are not consistent with the global geometry and (iii) loop closing as errors are not propagated in a sequential manner. Both theoretical justifications and experimental comparisons are given to support these claims.1",
"In this paper we describe two methods for estimating the motion parameters of an image sequence. For a sequence of N images, the global motion can be described by N-1 independent motion models. On the other hand, in a sequence there exist as many as sub 2 sup N(N-1) pairwise relative motion constraints that can be solve for efficiently. In this paper we show how to linearly solve for consistent global motion models using this highly redundant set of constraints. In the first case, our method involves estimating all available pairwise relative motions and linearly fining a global motion model to these estimates. In the second instance, we exploit the fact that algebraic (i.e. epipolar) constraints between various image pairs are all related to each other by the global motion model. This results in an estimation method that directly computes the motion of the sequence by using all possible algebraic constraints. Unlike using reprojection error, our optimisation method does not solve for the structure of points resulting in a reduction of the dimensionality of the search space. Our algorithms are used for both 3D camera motion estimation and camera calibration. We provide real examples of both applications."
]
} |
1608.02165 | 2952684211 | We introduce a new method for location recovery from pair-wise directions that leverages an efficient convex program that comes with exact recovery guarantees, even in the presence of adversarial outliers. When pairwise directions represent scaled relative positions between pairs of views (estimated for instance with epipolar geometry) our method can be used for location recovery, that is the determination of relative pose up to a single unknown scale. For this task, our method yields performance comparable to the state-of-the-art with an order of magnitude speed-up. Our proposed numerical framework is flexible in that it accommodates other approaches to location recovery and can be used to speed up other methods. These properties are demonstrated by extensively testing against state-of-the-art methods for location recovery on 13 large, irregular collections of images of real scenes in addition to simulated data with ground truth. | We concentrate on the location recovery problem from relative directions based on known camera rotations. There have been many different approaches to this problem, such as least squares @cite_19 @cite_23 @cite_11 @cite_0 , second-order cone programs, @math methods @cite_2 @cite_20 @cite_0 @cite_21 @cite_13 , spectral methods @cite_23 , similarity transformations for pair alignment @cite_15 , Lie-algebraic averaging @cite_24 , Markov random fields @cite_1 , and several others @cite_14 @cite_7 @cite_15 @cite_17 . Unfortunately, many location recovery algorithms either lack robustness to mismatches, at times produce collapsed solutions @cite_7 , or suffer from convergence to local minima, in sum causing large errors in (or even complete degradation of) the recovered locations. | {
"cite_N": [
"@cite_14",
"@cite_7",
"@cite_21",
"@cite_1",
"@cite_17",
"@cite_0",
"@cite_19",
"@cite_24",
"@cite_23",
"@cite_2",
"@cite_15",
"@cite_13",
"@cite_20",
"@cite_11"
],
"mid": [
"2107181183",
"2142753612",
"2084613528",
"2013472030",
"2148262136",
"2171244244",
"2071616405",
"2155094592",
"1592908867",
"",
"117099968",
"2119088708",
"2043995994",
"2105413810"
],
"abstract": [
"We present a linear method for global camera pose registration from pair wise relative poses encoded in essential matrices. Our method minimizes an approximate geometric error to enforce the triangular relationship in camera triplets. This formulation does not suffer from the typical unbalanced scale' problem in linear methods relying on pair wise translation direction constraints, i.e. an algebraic error, nor the system degeneracy from collinear motion. In the case of three cameras, our method provides a good linear approximation of the trifocal tensor. It can be directly scaled up to register multiple cameras. The results obtained are accurate for point triangulation and can serve as a good initialization for final bundle adjustment. We evaluate the algorithm performance with different types of data and demonstrate its effectiveness. Our system produces good accuracy, robustness, and outperforms some well-known systems on efficiency.",
"",
"Multi-view structure from motion (SfM) estimates the position and orientation of pictures in a common 3D coordinate frame. When views are treated incrementally, this external calibration can be subject to drift, contrary to global methods that distribute residual errors evenly. We propose a new global calibration approach based on the fusion of relative motions between image pairs. We improve an existing method for robustly computing global rotations. We present an efficient a contrario trifocal tensor estimation method, from which stable and precise translation directions can be extracted. We also define an efficient translation registration method that recovers accurate camera positions. These components are combined into an original SfM pipeline. Our experiments show that, on most datasets, it outperforms in accuracy other existing incremental and global pipelines. It also achieves strikingly good running times: it is about 20 times faster than the other global method we could compare to, and as fast as the best incremental method. More importantly, it features better scalability properties.",
"Recent work in structure from motion (SfM) has successfully built 3D models from large unstructured collections of images downloaded from the Internet. Most approaches use incremental algorithms that solve progressively larger bundle adjustment problems. These incremental techniques scale poorly as the number of images grows, and can drift or fall into bad local minima. We present an alternative formulation for SfM based on finding a coarse initial solution using a hybrid discrete-continuous optimization, and then improving that solution using bundle adjustment. The initial optimization step uses a discrete Markov random field (MRF) formulation, coupled with a continuous Levenberg-Marquardt refinement. The formulation naturally incorporates various sources of information about both the cameras and the points, including noisy geotags and vanishing point estimates. We test our method on several large-scale photo collections, including one with measured camera positions, and show that it can produce models that are similar to or better than those produced with incremental bundle adjustment, but more robustly and in a fraction of the time.",
"We consider the problem of distributed estimation of the poses of N cameras in a camera sensor network using image measurements only. The relative rotation and translation (up to a scale factor) between pairs of neighboring cameras can be estimated using standard computer vision techniques. However, due to noise in the image measurements, these estimates may not be globally consistent. We address this problem by minimizing a cost function on SE(3)N in a distributed fashion using a generalization of the classical consensus algorithm for averaging Euclidean data. We also derive a condition for convergence, which relates the step-size of the consensus algorithm and the degree of the camera network graph. While our methods are designed with the camera sensor network application in mind, our results are applicable to other localization problems in a more general setting. We also provide synthetic simulations to test the validity of our approach.",
"It is known that the problem of multiview reconstruction can be solved in two steps: first estimate camera rotations and then translations using them. This paper presents new robust techniques for both of these steps, (i) Given pair-wise relative rotations, global camera rotations are estimated linearly in least squares, (ii) Camera translations are estimated using a standard technique based on Second Order Cone Programming. Robustness is achieved by using only a subset of points according to a new criterion that diminishes the risk of choosing a mismatch. It is shown that only four points chosen in a special way are sufficient to represent a pairwise reconstruction almost equally as all points. This leads to a significant speedup. In image sets with repetitive or similar structures, non-existent epipolar geometries may be found. Due to them, some rotations and consequently translations may be estimated incorrectly. It is shown that iterative removal of pairwise reconstructions with the largest residual and reregistration removes most non-existent epipolar geometries. The performance of the proposed method is demonstrated on difficult wide base-line image sets.",
"Multiview structure recovery from a collection of images requires the recovery of the positions and orientations of the cameras relative to a global coordinate system. Our approach recovers camera motion as a sequence of two global optimizations. First, pair wise Essential Matrices are used to recover the global rotations by applying robust optimization using either spectral or semi definite programming relaxations. Then, we directly employ feature correspondences across images to recover the global translation vectors using a linear algorithm based on a novel decomposition of the Essential Matrix. Our method is efficient and, as demonstrated in our experiments, achieves highly accurate results on collections of real images for which ground truth measurements are available.",
"While motion estimation has been extensively studied in the computer vision literature, the inherent information redundancy in an image sequence has not been well utilised. In particular as many as N(N-1) 2 pairwise relative motions can be estimated efficiently from a sequence of N images. This highly redundant set of observations can be efficiently averaged resulting in fast motion estimation algorithms that are globally consistent. In this paper we demonstrate this using the underlying Lie-group structure of motion representations. The Lie-algebras of the Special Orthogonal and Special Euclidean groups are used to define averages on the Lie-group which in turn gives statistically meaningful, efficient and accurate algorithms for fusing motion information. Using multiple constraints also controls the drift in the solution due to accumulating error. The performance of the method in estimating camera motion is demonstrated on image sequences.",
"Extrinsic calibration of large-scale ad hoc networks of cameras is posed as the following problem: Calculate the locations of N mobile, rotationally aligned cameras distributed over an urban region, subsets of which view some common environmental features. We show that this leads to a novel class of graph embedding problems that admit closed-form solutions in linear time via partial spectral decomposition of a quadratic form. The minimum squared error (mse) solution determines locations of cameras and or features in any number of dimensions. The spectrum also indicates insufficiently constrained problems, which can be decomposed into well-constrained rigid subproblems and analyzed to determine useful new views for missing constraints. We demonstrate the method with large networks of mobile cameras distributed over an urban environment, using directional constraints that have been extracted automatically from commonly viewed features. Spectral solutions yield layouts that are consistent in some cases to a fraction of a millimeter, substantially improving the state of the art. Global layout of large camera networks can be computed in a fraction of a second.",
"",
"We present a new structure from motion (Sfm) technique based on point and vanishing point (VP) matches in images. First, all global camera rotations are computed from VP matches as well as relative rotation estimates obtained from pairwise image matches. A new multi-staged linear technique is then used to estimate all camera translations and 3D points simultaneously. The proposed method involves first performing pairwise reconstructions, then robustly aligning these in pairs, and finally aligning all of them globally by simultaneously estimating their unknown relative scales and translations. In doing so, measurements inconsistent in three views are efficiently removed. Unlike sequential Sfm, the proposed method treats all images equally, is easy to parallelize and does not require intermediate bundle adjustments. There is also a reduction of drift and significant speedups up to two order of magnitude over sequential Sfm. We compare our method with a standard Sfm pipeline [1] and demonstrate that our linear estimates are accurate on a variety of datasets, and can serve as good initializations for final bundle adjustment. Because we exploit VPs when available, our approach is particularly well-suited to the reconstruction of man-made scenes.",
"Recently, there has been interest in formulating various geometric problems in Computer Vision as L optimization problems. The advantage of this approach is that under L norm, such problems typically have a single minimum, and may be efficiently solved using Second-Order Cone Programming (SOCP). This paper shows that such techniques may be used effectively on the problem of determining the track of a camera given observations of features in the environment. The approach to this problem involves two steps: determination of the orientation of the camera by estimation of relative orientation between pairs of views, followed by determination of the translation of the camera. This paper focusses on the second step, that of determining the motion of the camera. It is shown that it may be solved effectively by using SOCP to reconcile translation estimates obtained for pairs or triples of views. In addition, it is observed that the individual translation estimates are not known with equal certainty in all directions. To account for this anisotropy in uncertainty, we introduce the use of covariances into the L optimization framework.",
"This paper presents a new framework for solving geometric structure and motion problems based on the Linfin-norm. Instead of using the common sum-of-squares cost function, that is, the L2-norm, the model-fitting errors are measured using the Linfin-norm. Unlike traditional methods based on L2, our framework allows for the efficient computation of global estimates. We show that a variety of structure and motion problems, for example, triangulation, camera resectioning, and homography estimation, can be recast as quasi-convex optimization problems within this framework. These problems can be efficiently solved using second-order cone programming (SOCP), which is a standard technique in convex optimization. The methods have been implemented in Matlab and the resulting toolbox has been made publicly available. The algorithms have been validated on real data in different settings on problems with small and large dimensions and with excellent performance.",
"In this paper we describe two methods for estimating the motion parameters of an image sequence. For a sequence of N images, the global motion can be described by N-1 independent motion models. On the other hand, in a sequence there exist as many as sub 2 sup N(N-1) pairwise relative motion constraints that can be solve for efficiently. In this paper we show how to linearly solve for consistent global motion models using this highly redundant set of constraints. In the first case, our method involves estimating all available pairwise relative motions and linearly fining a global motion model to these estimates. In the second instance, we exploit the fact that algebraic (i.e. epipolar) constraints between various image pairs are all related to each other by the global motion model. This results in an estimation method that directly computes the motion of the sequence by using all possible algebraic constraints. Unlike using reprojection error, our optimisation method does not solve for the structure of points resulting in a reduction of the dimensionality of the search space. Our algorithms are used for both 3D camera motion estimation and camera calibration. We provide real examples of both applications."
]
} |
1608.02165 | 2952684211 | We introduce a new method for location recovery from pair-wise directions that leverages an efficient convex program that comes with exact recovery guarantees, even in the presence of adversarial outliers. When pairwise directions represent scaled relative positions between pairs of views (estimated for instance with epipolar geometry) our method can be used for location recovery, that is the determination of relative pose up to a single unknown scale. For this task, our method yields performance comparable to the state-of-the-art with an order of magnitude speed-up. Our proposed numerical framework is flexible in that it accommodates other approaches to location recovery and can be used to speed up other methods. These properties are demonstrated by extensively testing against state-of-the-art methods for location recovery on 13 large, irregular collections of images of real scenes in addition to simulated data with ground truth. | Recent advances have addressed some of these limitations: 1dSfM @cite_25 focuses on removing outliers by examining inconsistencies along one-dimensional projections, before attempting to recover camera locations. This method, however, does not reason about self-consistent outliers, which can occur due to repetitive structures, commonly found in man-made scenes. Also, @cite_22 introduced a method to filter outlier epipolar geometries based on inconsistent triplets of views. Özyeşil and Singer propose a convex program called Least Unsquared Deviations (LUD) and empirically demonstrate its robustness to outliers @cite_5 . While these methods exhibit good empirical performance, they lack theoretical guarantees in terms of robustness to outliers. | {
"cite_N": [
"@cite_5",
"@cite_22",
"@cite_25"
],
"mid": [
"1958116060",
"",
"2112634643"
],
"abstract": [
"3D structure recovery from a collection of 2D images requires the estimation of the camera locations and orientations, i.e. the camera motion. For large, irregular collections of images, existing methods for the location estimation part, which can be formulated as the inverse problem of estimating n locations t 1 , t 2 , …, tn in ℝ 3 from noisy measurements of a subset of the pairwise directions t i −t j ∥t i −t j ∥, are sensitive to outliers in direction measurements. In this paper, we firstly provide a complete characterization of well-posed instances of the location estimation problem, by presenting its relation to the existing theory of parallel rigidity. For robust estimation of camera locations, we introduce a two-step approach, comprised of a pairwise direction estimation method robust to outliers in point correspondences between image pairs, and a convex program to maintain robustness to outlier directions. In the presence of partially corrupted measurements, we empirically demonstrate that our convex formulation can even recover the locations exactly. Lastly, we demonstrate the utility of our formulations through experiments on Internet photo collections.",
"",
"We present a simple, effective method for solving structure from motion problems by averaging epipolar geometries. Based on recent successes in solving for global camera rotations using averaging schemes, we focus on the problem of solving for 3D camera translations given a network of noisy pairwise camera translation directions (or 3D point observations). To do this well, we have two main insights. First, we propose a method for removing outliers from problem instances by solving simpler low-dimensional subproblems, which we refer to as 1DSfM problems. Second, we present a simple, principled averaging scheme. We demonstrate this new method in the wild on Internet photo collections."
]
} |
1608.01766 | 2289121796 | We perform a thorough study of various characteristics of the asynchronous push–pull protocol for spreading a rumor on Erdős–Renyi random graphs (G_ n,p ), for any (p>c (n) n ) with (c>1 ). In particular, we provide a simple strategy for analyzing the asynchronous push–pull protocol on arbitrary graph topologies and apply this strategy to (G_ n,p ). We prove tight bounds of logarithmic order for the total time that is needed until the information has spread to all nodes. Surprisingly, the time required by the asynchronous push–pull protocol is asymptotically almost unaffected by the average degree of the graph. Similarly tight bounds for Erdős–Renyi random graphs have previously only been obtained for the synchronous push protocol, where it has been observed that the total running time increases significantly for sparse random graphs. Finally, we quantify the robustness of the protocol with respect to transmission and node failures. Our analysis suggests that the asynchronous protocols are particularly robust with respect to these failures compared to their synchronous counterparts. | There are many theoretical studies that are concerned with the performance of the push-pull algorithm @cite_13 @cite_22 @cite_25 @cite_0 @cite_2 @cite_3 . For example, the performance of rumor spreading on general graph topologies was made explicit in @cite_13 @cite_22 @cite_2 , where the number of rounds necessary to spread a rumor was related to the conductance of the graph. In particular, the upper bound @math was shown, where @math is the conductance of the graph. | {
"cite_N": [
"@cite_22",
"@cite_3",
"@cite_0",
"@cite_2",
"@cite_13",
"@cite_25"
],
"mid": [
"2167805066",
"2157004711",
"167256694",
"2103160469",
"2009356484",
"2621876985"
],
"abstract": [
"We show that if a connected graph with n nodes has conductance p then rumour spreading, also known as randomized broadcast, successfully broadcasts a message within O(log4 n p6) many steps, with high probability, using the PUSH-PULL strategy. An interesting feature of our approach is that it draws a connection between rumour spreading and the spectral sparsification procedure of Spielman and Teng [23].",
"Investigates the class of epidemic algorithms that are commonly used for the lazy transmission of updates to distributed copies of a database. These algorithms use a simple randomized communication mechanism to ensure robustness. Suppose n players communicate in parallel rounds in each of which every player calls a randomly selected communication partner. In every round, players can generate rumors (updates) that are to be distributed among all players. Whenever communication is established between two players, each one must decide which of the rumors to transmit. The major problem is that players might not know which rumors their partners have already received. For example, a standard algorithm forwarding each rumor form the calling to the called players for spl Theta (ln n) rounds needs to transmit the rumor spl Theta (n ln n) times in order to ensure that every player finally receives the rumor with high probability. We investigate whether such a large communication overhead is inherent to epidemic algorithms. On the positive side, we show that the communication overhead can be reduced significantly. We give an algorithm using only O(n ln ln n) transmissions and O(ln n) rounds. In addition, we prove the robustness of this algorithm. On the negative side, we show that any address-oblivious algorithm needs to send spl Omega (n ln ln n) messages for each rumor, regardless of the number of rounds. Furthermore, we give a general lower bound showing that time and communication optimality cannot be achieved simultaneously using random phone calls, i.e. every algorithm that distributes a rumor in O(ln n) rounds needs spl omega (n) transmissions.",
"We analyze the popular push-pull protocol for spreading a rumor in networks. Initially, a single node knows of a rumor. In each succeeding round, every node chooses a random neighbor, and the two nodes share the rumor if one of them is already aware of it. We present the first theoretical analysis of this protocol on random graphs that have a power law degree distribution with an arbitrary exponent β > 2. Our main findings reveal a striking dichotomy in the performance of the protocol that depends on the exponent of the power law. More specifically, we show that if 2 3, then Ω(log n) rounds are necessary. We also investigate the asynchronous version of the push-pull protocol, where the nodes do not operate in rounds, but exchange information according to a Poisson process with rate 1. Surprisingly, we are able to show that, if 2 < β < 3, the rumor spreads even in constant time, which is much smaller than the typical distance of two nodes. To the best of our knowledge, this is the first result that establishes a gap between the synchronous and the asynchronous protocol.",
"We study the connection between the rate at which a rumor spreads throughout a graph and the conductance of the graph—a standard measure of a graph’s expansion properties. We show that for any n-node graph with conductance , the classical PUSH-PULL algorithm distributes a rumor to all nodes of the graph in O( 1 logn) rounds with high probability (w.h.p.). This bound improves a recent result of Chierichetti, Lattanzi, and Panconesi [6], and it is tight in the sense that there exist graphs where ( 1 logn) rounds of the PUSH-PULL algorithm are required to distribute a rumor w.h.p. We also explore the PUSH and the PULL algorithms, and derive conditions that are both necessary and sucient for the above upper bound to hold for those algorithms as well. An",
"We show that if a connected graph with @math nodes has conductance φ then rumour spreading, also known as randomized broadcast, successfully broadcasts a message within O(φ-1 • log n), many rounds with high probability, regardless of the source, by using the PUSH-PULL strategy. The O(••) notation hides a polylog φ-1 factor. This result is almost tight since there exists graph of n nodes, and conductance φ, with diameter Ω(φ-1 • log n). If, in addition, the network satisfies some kind of uniformity condition on the degrees, our analysis implies that both both PUSH and PULL, by themselves, successfully broadcast the message to every node in the same number of rounds.",
"Abstract It has been observed that information spreads extremely fast in social networks. We model social networks with the preferential attachment model of Barabasi and Albert (Science 1999) and information spreading with the random phone call model of (FOCS 2000). In a recent paper (STOC 2011), we prove the following two results. (i) The random phone call model delivers a message to all nodes of graphs in the preferential attachment model within Θ ( log n ) rounds with high probability. The best known bound so far was O ( log 2 n ) . (ii) If we slightly modify the protocol so that contacts are chosen uniformly from all neighbors but the one Θ ( log n log log n ) , which is the diameter of the graph. This is the first time that a sublogarithmic broadcast time is proven for a natural setting. Also, this is the first time that avoiding doublecontacts reduces the run-time to a smaller order of magnitude."
]
} |
1608.01766 | 2289121796 | We perform a thorough study of various characteristics of the asynchronous push–pull protocol for spreading a rumor on Erdős–Renyi random graphs (G_ n,p ), for any (p>c (n) n ) with (c>1 ). In particular, we provide a simple strategy for analyzing the asynchronous push–pull protocol on arbitrary graph topologies and apply this strategy to (G_ n,p ). We prove tight bounds of logarithmic order for the total time that is needed until the information has spread to all nodes. Surprisingly, the time required by the asynchronous push–pull protocol is asymptotically almost unaffected by the average degree of the graph. Similarly tight bounds for Erdős–Renyi random graphs have previously only been obtained for the synchronous push protocol, where it has been observed that the total running time increases significantly for sparse random graphs. Finally, we quantify the robustness of the protocol with respect to transmission and node failures. Our analysis suggests that the asynchronous protocols are particularly robust with respect to these failures compared to their synchronous counterparts. | It is also known that push-pull is efficient on many classes of random graphs. For example, on classical preferential attachment graphs @cite_16 it was shown in @cite_25 that w.h.p. a rumor spreads in @math rounds. Moreover, the performance of the push-pull algorithm on classes of random graphs with a degree sequence that is a power law was studied in @cite_0 . In particular, they showed that if the degree sequence has unbounded variance, then the number of rounds until the information has reached almost all nodes is reduced to @math , while in all other cases it remains @math . | {
"cite_N": [
"@cite_0",
"@cite_16",
"@cite_25"
],
"mid": [
"167256694",
"2008620264",
"2621876985"
],
"abstract": [
"We analyze the popular push-pull protocol for spreading a rumor in networks. Initially, a single node knows of a rumor. In each succeeding round, every node chooses a random neighbor, and the two nodes share the rumor if one of them is already aware of it. We present the first theoretical analysis of this protocol on random graphs that have a power law degree distribution with an arbitrary exponent β > 2. Our main findings reveal a striking dichotomy in the performance of the protocol that depends on the exponent of the power law. More specifically, we show that if 2 3, then Ω(log n) rounds are necessary. We also investigate the asynchronous version of the push-pull protocol, where the nodes do not operate in rounds, but exchange information according to a Poisson process with rate 1. Surprisingly, we are able to show that, if 2 < β < 3, the rumor spreads even in constant time, which is much smaller than the typical distance of two nodes. To the best of our knowledge, this is the first result that establishes a gap between the synchronous and the asynchronous protocol.",
"Systems as diverse as genetic networks or the World Wide Web are best described as networks with complex topology. A common property of many large networks is that the vertex connectivities follow a scale-free power-law distribution. This feature was found to be a consequence of two generic mechanisms: (i) networks expand continuously by the addition of new vertices, and (ii) new vertices attach preferentially to sites that are already well connected. A model based on these two ingredients reproduces the observed stationary scale-free distributions, which indicates that the development of large networks is governed by robust self-organizing phenomena that go beyond the particulars of the individual systems.",
"Abstract It has been observed that information spreads extremely fast in social networks. We model social networks with the preferential attachment model of Barabási and Albert (Science 1999) and information spreading with the random phone call model of (FOCS 2000). In a recent paper (STOC 2011), we prove the following two results. (i) The random phone call model delivers a message to all nodes of graphs in the preferential attachment model within Θ(log n) rounds with high probability. The best known bound so far was O(log^2 n). (ii) If we slightly modify the protocol so that contacts are chosen uniformly from all neighbors but the one contacted in the round before, then the broadcast time reduces to Θ(log n / log log n), which is the diameter of the graph. This is the first time that a sublogarithmic broadcast time is proven for a natural setting. Also, this is the first time that avoiding double contacts reduces the run-time to a smaller order of magnitude."
]
} |
1608.01766 | 2289121796 | We perform a thorough study of various characteristics of the asynchronous push–pull protocol for spreading a rumor on Erdős–Rényi random graphs (G_{n,p}), for any (p > c ln(n)/n) with (c > 1). In particular, we provide a simple strategy for analyzing the asynchronous push–pull protocol on arbitrary graph topologies and apply this strategy to (G_{n,p}). We prove tight bounds of logarithmic order for the total time that is needed until the information has spread to all nodes. Surprisingly, the time required by the asynchronous push–pull protocol is asymptotically almost unaffected by the average degree of the graph. Similarly tight bounds for Erdős–Rényi random graphs have previously only been obtained for the synchronous push protocol, where it has been observed that the total running time increases significantly for sparse random graphs. Finally, we quantify the robustness of the protocol with respect to transmission and node failures. Our analysis suggests that the asynchronous protocols are particularly robust with respect to these failures compared to their synchronous counterparts. | For the synchronous push protocol there exist very accurate bounds of the spreading time @cite_5 @cite_18 @cite_26 . In @cite_26 it was shown that a rumor spreads in @math rounds w.h.p. on the complete graph. In @cite_5 it was shown that for @math with @math a rumor spreads w.h.p. in @math rounds. Furthermore, it was recently shown that the spreading time increases significantly when approaching the connectivity threshold more closely, namely that for @math with @math , @math , the spreading time is w.h.p. @math , where @math @cite_11 . This means in particular that the spreading time cannot be bounded by logarithmic time independent of the edge probability @math ; this is in contrast to the asynchronous push-pull algorithm, which is more robust with respect to variations in the average degree, cf. Theorem . 
On random regular graphs with degree @math , the spreading time has been shown to equal @math w.h.p. @cite_7 . | {
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_7",
"@cite_5",
"@cite_11"
],
"mid": [
"2059014957",
"1984977694",
"2137662992",
"2016123342",
"1992403724"
],
"abstract": [
"We consider the problem of finding the shortest distance between all pairs of vertices in a complete digraph on n vertices, whose arc-lengths are non-negative random variables. We describe an algorithm which solves this problem in O(n(m + n log n)) expected time, where m is the expected number of arcs with finite length. If m is small enough, this represents a small improvement over the bound in Bloniarz [3]. We consider also the case when the arc-lengths are random variables which are independently distributed with distribution function F, where F(0)=0 and F is differentiable at 0; for this case, we describe an algorithm which runs in O(n^2 log n) expected time. In our treatment of the shortest-path problem we consider the following problem in combinatorial probability theory. A town contains n people, one of whom knows a rumour. At the first stage he tells someone chosen randomly from the town; at each stage, each person who knows the rumour tells someone else, chosen randomly from the town and independently of all other choices. Let S_n be the number of stages before the whole town knows the rumour. We show that S_n / log_2 n → 1 + log_e 2 in probability as n → ∞, and estimate the probabilities of large deviations in S_n.",
"Suppose that one of n people knows a rumor. At the first stage, he passes the rumor to someone chosen at random; at each stage, each person already informed (“knower”) communicates the rumor to a person chosen at random and independently of all other past and present choices. Denote by @math the random number of stages before everybody is informed. How large is @math typically? Frieze and Grimmett, who introduced this problem, proved that, in probability, @math . In this paper we show that, in fact, @math in probability. Our proof demonstrates that the number @math of persons informed after t stages obeys very closely, with high probability, a deterministic equation @math , @math . A case when each knower passes the rumor to several members at every stage is also discussed.",
"Broadcasting algorithms are important building blocks of distributed systems. In this work we investigate the typical performance of the classical and well-studied push model. Assume that initially one node in a given network holds some piece of information. In each round, every one of the informed nodes chooses independently a neighbor uniformly at random and transmits the message to it. In this paper we consider random networks where each vertex has degree d ≥ 3, i.e., the underlying graph is drawn uniformly at random from the set of all d-regular graphs with n vertices. We show that with probability 1 - o(1) the push model broadcasts the message to all nodes within (1 + o(1)) C_d ln n rounds, where C_d is a constant depending only on d. Particularly, we can characterize precisely the effect of the node degree on the typical broadcast time of the push model. Moreover, we consider pseudo-random regular networks, where we assume that the degree of each node is very large. There we show that the broadcast time is (1 + o(1)) C ln n with probability 1 - o(1), where C is an explicit constant. © 2012 Wiley Periodicals, Inc. Random Struct. Alg., 2013",
"Broadcasting algorithms are of fundamental importance for distributed systems engineering. In this paper we revisit the classical and well-studied push protocol for message broadcasting and we investigate a faulty version of it. Assuming that initially only one node has some piece of information, at each stage every one of the informed nodes chooses randomly and independently one of its neighbors and passes the message to it with some probability q; that is, it fails to do so with probability 1-q. The performance of the push protocol on a fully connected network, where each node is joined by a link to every other node, with q=1 is very well understood. In particular, Frieze and Grimmett proved that with probability 1-o(1) the push protocol completes the broadcasting of the message within (1±ε)(log_2 n + ln n) stages, where n is the number of nodes in the network. However, there are no tight bounds for the broadcast time on networks that are significantly sparser than the complete graph. In this work we consider random networks on n nodes, where every edge is present with probability p, independently of every other edge. We show that if p ≥ α(n) ln n / n, where α(n) is any function that tends to infinity as n grows, then the push protocol with faulty transmissions broadcasts the message within (1±ε)(log_{1+q} n + (1/q) ln n) stages with probability 1-o(1). In other words, in almost every network of density d such that d ≥ α(n) ln n, the push protocol broadcasts a message as fast as in a fully connected network and the speed is only affected by the success probability q. This is quite surprising in the sense that the time needed remains essentially unaffected by the fact that most of the links are missing. Our results are accompanied by experimental evaluation.",
"We consider the popular and well-studied push model, which is used to spread information in a given network with n vertices. Initially, some vertex owns a rumour and passes it to one of its neighbours, which is chosen randomly. In each of the succeeding rounds, every vertex that knows the rumour informs a random neighbour. It has been shown on various network topologies that this algorithm succeeds in spreading the rumour within O(log n) rounds. However, many studies are quite coarse and involve huge constants that do not allow for a direct comparison between different network topologies. In this paper, we analyse the push model on several important families of graphs, and obtain tight runtime estimates. We first show that, for any almost-regular graph on n vertices with small spectral expansion, rumour spreading completes after log_2 n + log n + o(log n) rounds with high probability. This is the first result that exhibits a general graph class for which rumour spreading is essentially as fast as on complete graphs. Moreover, for the random graph G(n,p) with p = c log n / n, where c > 1, we determine the runtime of rumour spreading to be log_2 n + γ(c) log n with high probability, where γ(c) = c log(c/(c−1)). In particular, this shows that the assumption of almost regularity in our first result is necessary. Finally, for a hypercube on n = 2^d vertices, the runtime is with high probability at least (1+β) ⋅ (log_2 n + log n), where β > 0. This reveals that the push model on hypercubes is slower than on complete graphs, and thus shows that the assumption of small spectral expansion in our first result is also necessary. In addition, our results combined with the upper bound of O(log n) for the hypercube (see [11]) imply that the push model is faster on hypercubes than on a random graph G(n, c log n / n), where c is sufficiently close to 1."
]
} |
1608.01766 | 2289121796 | We perform a thorough study of various characteristics of the asynchronous push–pull protocol for spreading a rumor on Erdős–Rényi random graphs (G_{n,p}), for any (p > c ln(n)/n) with (c > 1). In particular, we provide a simple strategy for analyzing the asynchronous push–pull protocol on arbitrary graph topologies and apply this strategy to (G_{n,p}). We prove tight bounds of logarithmic order for the total time that is needed until the information has spread to all nodes. Surprisingly, the time required by the asynchronous push–pull protocol is asymptotically almost unaffected by the average degree of the graph. Similarly tight bounds for Erdős–Rényi random graphs have previously only been obtained for the synchronous push protocol, where it has been observed that the total running time increases significantly for sparse random graphs. Finally, we quantify the robustness of the protocol with respect to transmission and node failures. Our analysis suggests that the asynchronous protocols are particularly robust with respect to these failures compared to their synchronous counterparts. | In addition, the effect of transmission failures that occur independently for each contact at some constant rate @math was investigated in @cite_5 for the synchronous push protocol on dense random graphs. There, it was shown that the total time increases under-proportionally, namely that it is w.h.p. equal to @math @cite_5 . While the total time increases proportionally, i.e., by a factor of @math , in the asynchronous case, the asynchronous protocol is still faster and the speed difference increases with larger @math . | {
"cite_N": [
"@cite_5"
],
"mid": [
"2016123342"
],
"abstract": [
"Broadcasting algorithms are of fundamental importance for distributed systems engineering. In this paper we revisit the classical and well-studied push protocol for message broadcasting and we investigate a faulty version of it. Assuming that initially only one node has some piece of information, at each stage every one of the informed nodes chooses randomly and independently one of its neighbors and passes the message to it with some probability q that is, it fails to do so with probability 1-q. The performance of the push protocol on a fully connected network, where each node is joined by a link to every other node, with q=1 is very well understood. In particular, Frieze and Grimmett proved that with probability 1-o(1) the push protocol completes the broadcasting of the message within (1±e) (log_2 n + ln n) stages, where n is the number of nodes in the network. However, there are no tight bounds for the broadcast time on networks that are significantly sparser than the complete graph. In this work we consider random networks on n nodes, where every edge is present with probability p, independently of every other edge. We show that if p≥ α(n)ln n n, where α(n) is any function that tends to infinity as n grows, then the push protocol with faulty transmissions broadcasts the message within (1±±ε)(log_ 1+q n + 1 q ln n) stages with probability 1-o(1). In other words, in almost every network of density d such that d ≥ α(n) ln n, the push protocol broadcasts a message as fast as in a fully connected network and the speed is only affected by the success probability q. This is quite surprising in the sense that the time needed remains essentially unaffected by the fact that most of the links are missing. Our results are accompanied by experimental evaluation."
]
} |
1608.01766 | 2289121796 | We perform a thorough study of various characteristics of the asynchronous push–pull protocol for spreading a rumor on Erdős–Rényi random graphs (G_{n,p}), for any (p > c ln(n)/n) with (c > 1). In particular, we provide a simple strategy for analyzing the asynchronous push–pull protocol on arbitrary graph topologies and apply this strategy to (G_{n,p}). We prove tight bounds of logarithmic order for the total time that is needed until the information has spread to all nodes. Surprisingly, the time required by the asynchronous push–pull protocol is asymptotically almost unaffected by the average degree of the graph. Similarly tight bounds for Erdős–Rényi random graphs have previously only been obtained for the synchronous push protocol, where it has been observed that the total running time increases significantly for sparse random graphs. Finally, we quantify the robustness of the protocol with respect to transmission and node failures. Our analysis suggests that the asynchronous protocols are particularly robust with respect to these failures compared to their synchronous counterparts. | For the asynchronous push-pull protocol there exists much less literature @cite_10 @cite_0 , mostly devoted to models for scale-free networks. In @cite_10 it was shown that on preferential attachment graphs a rumor needs a time of @math w.h.p. to spread to almost all nodes. On power-law Chung-Lu random graphs @cite_4 (for power-law exponent @math ) it was shown in @cite_0 that a rumor initially located within the giant component spreads w.h.p. even in constant time to almost all nodes. Related to the asynchronous push-pull protocol is first-passage percolation, which, on regular graphs, is an equivalent process. There, it has been shown that the running time on the hypercube and the complete graph is @math @cite_6 @cite_23 @cite_21 . 
In a recent study, the ratio of the spreading time of the synchronous and asynchronous push-pull protocol was bounded by @math from below and by @math from above @cite_12 . In particular, examples of graphs in which the asynchronous version spreads the rumor in logarithmic time and the synchronous version needs polynomial time were given. | {
"cite_N": [
"@cite_4",
"@cite_21",
"@cite_6",
"@cite_0",
"@cite_23",
"@cite_10",
"@cite_12"
],
"mid": [
"2068902429",
"2031541804",
"2494932805",
"167256694",
"2005443809",
"2159732236",
"2885831468"
],
"abstract": [
"Random graph theory is used to examine the \"small-world phenomenon\"– any two strangers are connected through a short chain of mutual acquaintances. We will show that for certain families of random graphs with given expected degrees, the average distance is almost surely of order log n / log d where d is the weighted average of the sum of squares of the expected degrees. Of particular interest are power law random graphs in which the number of vertices of degree k is proportional to 1/k^β for some fixed exponent β. For the case of β > 3, we prove that the average distance of the power law graphs is almost surely of order log n / log d. However, many Internet, social, and citation networks are power law graphs with exponents in the range 2 < β < 3 for which the power law random graphs have average distance almost surely of order log log n, but have diameter of order log n (provided having some mild constraints for the average distance and maximum degree). In particular, these graphs contain a dense subgraph...",
"Consider the minimal weights of paths between two points in a complete graph K_n with random weights on the edges, the weights being, for instance, uniformly distributed. It is shown that, asymptotically, this is log n/n for two given points, that the maximum if one point is fixed and the other varies is 2 log n/n, and that the maximum over all pairs of points is 3 log n/n. Some further related results are given as well, including results on asymptotic distributions and moments, and on the number of edges in the minimal weight paths.",
"",
"We analyze the popular push-pull protocol for spreading a rumor in networks. Initially, a single node knows of a rumor. In each succeeding round, every node chooses a random neighbor, and the two nodes share the rumor if one of them is already aware of it. We present the first theoretical analysis of this protocol on random graphs that have a power law degree distribution with an arbitrary exponent β > 2. Our main findings reveal a striking dichotomy in the performance of the protocol that depends on the exponent of the power law. More specifically, we show that if 2 < β < 3, then with high probability O(log log n) rounds suffice to inform almost all nodes, while if β > 3, then Ω(log n) rounds are necessary. We also investigate the asynchronous version of the push-pull protocol, where the nodes do not operate in rounds, but exchange information according to a Poisson process with rate 1. Surprisingly, we are able to show that, if 2 < β < 3, the rumor spreads even in constant time, which is much smaller than the typical distance of two nodes. To the best of our knowledge, this is the first result that establishes a gap between the synchronous and the asynchronous protocol.",
"Percolation with edge-passage probability p and first-passage percolation are studied for the n-cube B_n = {0, 1}^n with nearest neighbor edges. For oriented and unoriented percolation, p = e/n and p = 1/n are the respective critical probabilities. For oriented first-passage percolation with i.i.d. edge-passage times having a density of 1 near the origin, the percolation time (time to reach the opposite corner of the cube) converges in probability to 1 as n → ∞. This resolves a conjecture of David Aldous. When the edge-passage distribution is standard exponential, the (smaller) percolation time for unoriented edges is at least 0.88. These results are applied to Richardson’s model on the (unoriented) n-cube. Richardson’s model, otherwise known as the contact process with no recoveries, models the spread of infection as a Poisson process on each edge connecting an infected node to an uninfected one. It is shown that the time to cover the entire n-cube is bounded between 1.41 and 14.05 in probability as n→∞.",
"We show that the asynchronous push-pull protocol spreads rumors in preferential attachment graphs (as defined by Barabási and Albert) in time @math to all but a lower order fraction of the nodes with high probability. This is significantly faster than what synchronized protocols can achieve; an obvious lower bound for these is the average distance, which is known to be Θ(log n / log log n).",
""
]
} |
1608.02071 | 2505457768 | In many domains such as medicine, training data is in short supply. In such cases, external knowledge is often helpful in building predictive models. We propose a novel method to incorporate publicly available domain expertise to build accurate models. Specifically, we use word2vec models trained on a domain-specific corpus to estimate the relevance of each feature's text description to the prediction problem. We use these relevance estimates to rescale the features, causing more important features to experience weaker regularization. We apply our method to predict the onset of five chronic diseases in the next five years in two genders and two age groups. Our rescaling approach improves the accuracy of the model, particularly when there are few positive examples. Furthermore, our method selects 60% fewer features, easing interpretation by physicians. Our method is applicable to other domains where feature and outcome descriptions are available. | Another body of related work can be found in the field of text classification. Some authors used expert-coded ontologies such as the Unified Medical Language System (UMLS) to engineer and extract features from clinical text and used these features to identify patients with cardiovascular diseases ( @cite_2 @cite_3 ). In non-clinical settings, others have used the ontology of the Open Directory Project ( @cite_7 ) and Wikipedia ( @cite_13 ). | {
"cite_N": [
"@cite_13",
"@cite_7",
"@cite_3",
"@cite_2"
],
"mid": [
"2100335205",
"103965747",
"",
"2105637130"
],
"abstract": [
"When humans approach the task of text categorization, they interpret the specific wording of the document in the much larger context of their background knowledge and experience. On the other hand, state-of-the-art information retrieval systems are quite brittle--they traditionally represent documents as bags of words, and are restricted to learning from individual word occurrences in the (necessarily limited) training set. For instance, given the sentence \"Wal-Mart supply chain goes real time\", how can a text categorization system know that Wal-Mart manages its stock with RFID technology? And having read that \"Ciprofloxacin belongs to the quinolones group\", how on earth can a machine know that the drug mentioned is an antibiotic produced by Bayer? In this paper we present algorithms that can do just that. We propose to enrich document representation through automatic use of a vast compendium of human knowledge--an encyclopedia. We apply machine learning techniques to Wikipedia, the largest encyclopedia to date, which surpasses in scope many conventional encyclopedias and provides a cornucopia of world knowledge. Each Wikipedia article represents a concept, and documents to be categorized are represented in the rich feature space of words and relevant Wikipedia concepts. Empirical results confirm that this knowledge-intensive representation brings text categorization to a qualitatively new level of performance across a diverse collection of datasets.",
"We enhance machine learning algorithms for text categorization with generated features based on domain-specific and common-sense knowledge. This knowledge is represented using publicly available ontologies that contain hundreds of thousands of concepts, such as the Open Directory; these ontologies are further enriched by several orders of magnitude through controlled Web crawling. Prior to text categorization, a feature generator analyzes the documents and maps them onto appropriate ontology concepts, which in turn induce a set of generated features that augment the standard bag of words. Feature generation is accomplished through contextual analysis of document text, implicitly performing word sense disambiguation. Coupled with the ability to generalize concepts using the ontology, this approach addresses the two main problems of natural language processing--synonymy and polysemy. Categorizing documents with the aid of knowledge-based features leverages information that cannot be deduced from the documents alone. Experimental results confirm improved performance, breaking through the plateau previously reached in the field.",
"",
"Objective Analysis of narrative (text) data from electronic health records (EHRs) can improve population-scale phenotyping for clinical and genetic research. Currently, selection of text features for phenotyping algorithms is slow and laborious, requiring extensive and iterative involvement by domain experts. This paper introduces a method to develop phenotyping algorithms in an unbiased manner by automatically extracting and selecting informative features, which can be comparable to expert-curated ones in classification accuracy. @PARASPLIT Materials and methods Comprehensive medical concepts were collected from publicly available knowledge sources in an automated, unbiased fashion. Natural language processing (NLP) revealed the occurrence patterns of these concepts in EHR narrative notes, which enabled selection of informative features for phenotype classification. When combined with additional codified features, a penalized logistic regression model was trained to classify the target phenotype. @PARASPLIT Results The authors applied our method to develop algorithms to identify patients with rheumatoid arthritis and coronary artery disease cases among those with rheumatoid arthritis from a large multi-institutional EHR. The area under the receiver operating characteristic curves (AUC) for classifying RA and CAD using models trained with automated features were 0.951 and 0.929, respectively, compared to the AUCs of 0.938 and 0.929 by models trained with expert-curated features. @PARASPLIT Discussion Models trained with NLP text features selected through an unbiased, automated procedure achieved comparable or slightly higher accuracy than those trained with expert-curated features. The majority of the selected model features were interpretable. @PARASPLIT Conclusion The proposed automated feature extraction method, generating highly accurate phenotyping algorithms with improved efficiency, is a significant step toward high-throughput phenotyping."
]
} |
1608.02071 | 2505457768 | In many domains such as medicine, training data is in short supply. In such cases, external knowledge is often helpful in building predictive models. We propose a novel method to incorporate publicly available domain expertise to build accurate models. Specifically, we use word2vec models trained on a domain-specific corpus to estimate the relevance of each feature's text description to the prediction problem. We use these relevance estimates to rescale the features, causing more important features to experience weaker regularization. We apply our method to predict the onset of five chronic diseases in the next five years in two genders and two age groups. Our rescaling approach improves the accuracy of the model, particularly when there are few positive examples. Furthermore, our method selects 60% fewer features, easing interpretation by physicians. Our method is applicable to other domains where feature and outcome descriptions are available. | Our work diverges from prior work because our "target task" uses structured medical data, but we transfer knowledge from unstructured external sources to understand the relationship between the descriptions of the features and the outcome. For each feature, we estimate its relevance to predicting the outcome, and use these relevance estimates to rescale the feature matrix. This rescaling procedure is equivalent to the feature selection methods, adaptive lasso ( @cite_9 ) and nonnegative garrote ( @cite_16 ). In the adaptive lasso, the adaptive scaling factors are usually obtained from the ordinary least squares estimate. By contrast, we inferred these scaling factors from an expert text corpus. This technique leverages auxiliary data and can thus be applied when the original training data are not sufficient to obtain reliable least squares estimates. 
Other approaches to feature selection for predicting disease onset include augmenting expert-derived risk factors with data-driven variables ( @cite_11 ) and performing regression in multiple dimensions simultaneously ( @cite_17 ). | {
"cite_N": [
"@cite_9",
"@cite_16",
"@cite_17",
"@cite_11"
],
"mid": [
"2020925091",
"2070094080",
"2137283541",
"1490846006"
],
"abstract": [
"The lasso is a popular technique for simultaneous estimation and variable selection. Lasso variable selection has been shown to be consistent under certain conditions. In this work we derive a necessary condition for the lasso variable selection to be consistent. Consequently, there exist certain scenarios where the lasso is inconsistent for variable selection. We then propose a new version of the lasso, called the adaptive lasso, where adaptive weights are used for penalizing different coefficients in the l1 penalty. We show that the adaptive lasso enjoys the oracle properties; namely, it performs as well as if the true underlying model were given in advance. Similar to the lasso, the adaptive lasso is shown to be near-minimax optimal. Furthermore, the adaptive lasso can be solved by the same efficient algorithm for solving the lasso. We also discuss the extension of the adaptive lasso in generalized linear models and show that the oracle properties still hold under mild regularity conditions. As a bypro...",
"A new method, called the nonnegative (nn) garrote, is proposed for doing subset regression. It both shrinks and zeroes coefficients. In tests on real and simulated data, it produces lower prediction error than ordinary subset selection. It is also compared to ridge regression. If the regression equations generated by a procedure do not change drastically with small changes in the data, the procedure is called stable. Subset selection is unstable, ridge is very stable, and the nn-garrote is intermediate. Simulation results illustrate the effects of instability on prediction error.",
"Logistic regression is one core predictive modeling technique that has been used extensively in health and biomedical problems. Recently a lot of research has been focusing on enforcing sparsity on the learned model to enhance its effectiveness and interpretability, which results in sparse logistic regression model. However, no matter the original or sparse logistic regression, they require the inputs to be in vector form. This limits the applicability of logistic regression in the problems when the data cannot be naturally represented vectors (e.g., functional magnetic resonance imaging and electroencephalography signals). To handle the cases when the data are in the form of multi-dimensional arrays, we propose MulSLR: Multilinear Sparse Logistic Regression. MulSLR can be viewed as a high order extension of sparse logistic regression. Instead of solving one classification vector as in conventional logistic regression, we solve for K classification vectors in MulSLR (K is the number of modes in the data). We propose a block proximal descent approach to solve the problem and prove its convergence. The convergence rate of the proposed algorithm is also analyzed. Finally we validate the efficiency and effectiveness of MulSLR on predicting the onset risk of patients with Alzheimer's disease and heart failure.",
"Background: The ability to identify the risk factors related to an adverse condition, e.g., heart failures (HF) diagnosis, is very important for improving care quality and reducing cost. Existing approaches for risk factor identification are either knowledge driven (from guidelines or literatures) or data driven (from observational data). No existing method provides a model to effectively combine expert knowledge with data driven insight for risk factor identification."
]
} |
1608.02128 | 2501012547 | Machine learning is increasingly used to make sense of the physical world yet may suffer from adversarial manipulation. We examine the Viola-Jones 2D face detection algorithm to study whether images can be created that humans do not notice as faces yet the algorithm detects as faces. We show that it is possible to construct images that Viola-Jones recognizes as containing faces yet no human would consider a face. Moreover, we show that it is possible to construct images that fool facial detection even when they are printed and then photographed. | Recent work on physical world attacks on object recognition measured the degradation in effectiveness of adversarial images when they are first printed and photographed with a cell phone @cite_0 . Though they measure the effects of components of that physical channel, they construct their images using knowledge of the object detection algorithm, and not the channel. Our work constructs adversarial images without knowledge of the detection algorithm. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2460937040"
],
"abstract": [
"Most existing machine learning classifiers are highly vulnerable to adversarial examples. An adversarial example is a sample of input data which has been modified very slightly in a way that is intended to cause a machine learning classifier to misclassify it. In many cases, these modifications can be so subtle that a human observer does not even notice the modification at all, yet the classifier still makes a mistake. Adversarial examples pose security concerns because they could be used to perform an attack on machine learning systems, even if the adversary has no access to the underlying model. Up to now, all previous work have assumed a threat model in which the adversary can feed data directly into the machine learning classifier. This is not always the case for systems operating in the physical world, for example those which are using signals from cameras and other sensors as an input. This paper shows that even in such physical world scenarios, machine learning systems are vulnerable to adversarial examples. We demonstrate this by feeding adversarial images obtained from cell-phone camera to an ImageNet Inception classifier and measuring the classification accuracy of the system. We find that a large fraction of adversarial examples are classified incorrectly even when perceived through the camera."
]
} |
1608.02171 | 2554997459 | An effective paradigm for simulating the dynamics of robots that locomote and manipulate is multi-rigid body simulation with rigid contact. This paradigm provides reasonable tradeoffs between accuracy, running time, and simplicity of parameter selection and identification. The Stewart-Trinkle Anitescu-Potra time stepping approach is the basis of many existing implementations. It successfully treats inconsistent (Painleve-type) contact configurations, efficiently handles many contact events occurring in short time intervals, and provably converges to the solution of the continuous time differential algebraic equations (DAEs) as the integration step size tends to zero. However, there is currently no means to determine when the solution has largely converged, i.e., when smaller integration steps would result in only small increases in accuracy. The present work describes an approach that computes the event times (when the set of active equations in a DAE changes) of all contact impact events for a multi-body simulation, toward using integration techniques with error control to compute a solution with desired accuracy. We also describe a first-order, variable integration approach that ensures that rigid bodies with convex polytopic geometries never interpenetrate. This approach permits taking large steps when possible and takes small steps when contact is complex. | We note that piecewise DAEs with unilateral constraints are generally modeled as differential variational inequalities (DVI) @cite_15 , which---for rigid body contact problems, at least---we find more challenging to pose, though existing solution approaches can work reasonably well. These approaches will be discussed in . | {
"cite_N": [
"@cite_15"
],
"mid": [
"2109783572"
],
"abstract": [
"This paper introduces and studies the class of differential variational inequalities (DVIs) in a finite-dimensional Euclidean space. The DVI provides a powerful modeling paradigm for many applied problems in which dynamics, inequalities, and discontinuities are present; examples of such problems include constrained time-dependent physical systems with unilateral constraints, differential Nash games, and hybrid engineering systems with variable structures. The DVI unifies several mathematical problem classes that include ordinary differential equations (ODEs) with smooth and discontinuous right-hand sides, differential algebraic equations (DAEs), dynamic complementarity systems, and evolutionary variational inequalities. Conditions are presented under which the DVI can be converted, either locally or globally, to an equivalent ODE with a Lipschitz continuous right-hand function. For DVIs that cannot be so converted, we consider their numerical resolution via an Euler time-stepping procedure, which involves the solution of a sequence of finite-dimensional variational inequalities. Borrowing results from differential inclusions (DIs) with upper semicontinuous, closed and convex valued multifunctions, we establish the convergence of such a procedure for solving initial-value DVIs. We also present a class of DVIs for which the theory of DIs is not directly applicable, and yet similar convergence can be established. Finally, we extend the method to a boundary-value DVI and provide conditions for the convergence of the method. The results in this paper pertain exclusively to systems with “index” not exceeding two and which have absolutely continuous solutions."
]
} |
1608.02171 | 2554997459 | An effective paradigm for simulating the dynamics of robots that locomote and manipulate is multi-rigid body simulation with rigid contact. This paradigm provides reasonable tradeoffs between accuracy, running time, and simplicity of parameter selection and identification. The Stewart-Trinkle Anitescu-Potra time stepping approach is the basis of many existing implementations. It successfully treats inconsistent (Painleve-type) contact configurations, efficiently handles many contact events occurring in short time intervals, and provably converges to the solution of the continuous time differential algebraic equations (DAEs) as the integration step size tends to zero. However, there is currently no means to determine when the solution has largely converged, i.e., when smaller integration steps would result in only small increases in accuracy. The present work describes an approach that computes the event times (when the set of active equations in a DAE changes) of all contact impact events for a multi-body simulation, toward using integration techniques with error control to compute a solution with desired accuracy. We also describe a first-order, variable integration approach that ensures that rigid bodies with convex polytopic geometries never interpenetrate. This approach permits taking large steps when possible and takes small steps when contact is complex. | Early works in simulating rigid bodies undergoing rigid contact were conducted by Lötstedt @cite_23 @cite_7 and Baraff @cite_27 @cite_9 @cite_19 , and used root finding approaches to locate events. These works described some challenges with theory---inconsistent configurations, exemplified by the Painlevé Paradox @cite_26 that was well known to the theoretical mechanics community---and computational complexity: for example, Baraff showed that the problem of classifying a contact configuration as inconsistent is NP-hard @cite_19 .
These works do not consider the challenges of locating all possible events or the failure to do so. | {
"cite_N": [
"@cite_26",
"@cite_7",
"@cite_9",
"@cite_19",
"@cite_27",
"@cite_23"
],
"mid": [
"",
"2157141568",
"2159418100",
"2234650683",
"2245259275",
"2024670401"
],
"abstract": [
"",
"A numerical method is given for the solution of a system of ordinary differential equations and algebraic, unilateral constraints. The equations govern the motion of a mechanical system of rigid bodies, where contacts between the bodies are created and disappear in the time interval of interest. The ordinary differential equations are discretized by linear multistep methods. In order to satisfy the constraints, a quadratic programming problem is solved at each time step. The fact that the variation of the objective function is small from step to step is utilized to save computing time. A discrete friction model, based on Coulomb’s law of friction and suitable for efficient computation, is proposed for planar problems where dry friction cannot be neglected. The normal forces and the friction forces are the optimal solution to a quadratic programming problem. The methods are tested on four model problems. A data structure and possible generalizations are discussed.",
"Algorithms and computational complexity measures for simulating the motion of contacting bodies with friction are presented. The bodies are restricted to be perfectly rigid bodies that contact at finitely many points. Contact forces between bodies must satisfy the Coulomb model of friction. A traditional principle of mechanics is that contact forces are impulsive if and only if non-impulsive contact forces are insufficient to maintain the non-penetration constraints between bodies. When friction is allowed, it is known that impulsive contact forces can be necessary even in the absence of collisions between bodies. This paper shows that computing contact forces according to this traditional principle is likely to require exponential time. An analysis of this result reveals that the principle for when impulses can occur is too restrictive, and a natural reformulation of the principle is proposed. Using the reformulated principle, an algorithm with expected polynomial time behaviour for computing contact forces is presented.",
"A new algorithm for computing contact forces between solid objects with friction is presented. The algorithm allows a mix of contact points with static and dynamic friction. In contrast to previous approaches, the problem of computing contact forces is not transformed into an optimization problem. Because of this, the need for sophisticated optimization software packages is eliminated. For both systems with and without friction, the algorithm has proven to be considerably faster, simple, and more reliable than previous approaches to the problem. In particular, implementation of the algorithm by nonspecialists in numerical programming is quite feasible.",
"A method for analytically calculating the forces between systems of rigid bodies in resting (non-colliding) contact is presented. The systems of bodies may either be in motion or static equilibrium and adjacent bodies may touch at multiple points. The analytic formulation of the forces between bodies in non-colliding contact can be modified to deal with colliding bodies. Accordingly, an improved method for analytically calculating the forces between systems of rigid bodies in colliding contact is also presented. Both methods can be applied to systems with arbitrary holonomic geometric constraints, such as linked figures. The analytical formulations used treat both holonomic and non-holonomic constraints in a consistent manner.",
"The properties of mechanical systems of rigid bodies subject to unilateral constraints are investigated. In particular, properties of interest for the digital simulation of the motion of such systems are studied. The constraints give rise to discontinuities in the solution. Under general assumptions on the system a unique solution is constructed using the linear complementarily theory of mathematical programming. A numerical method for solution of these problems and generalizations of the constraints studied in this paper are briefly discussed."
]
} |
1608.02171 | 2554997459 | An effective paradigm for simulating the dynamics of robots that locomote and manipulate is multi-rigid body simulation with rigid contact. This paradigm provides reasonable tradeoffs between accuracy, running time, and simplicity of parameter selection and identification. The Stewart-Trinkle Anitescu-Potra time stepping approach is the basis of many existing implementations. It successfully treats inconsistent (Painleve-type) contact configurations, efficiently handles many contact events occurring in short time intervals, and provably converges to the solution of the continuous time differential algebraic equations (DAEs) as the integration step size tends to zero. However, there is currently no means to determine when the solution has largely converged, i.e., when smaller integration steps would result in only small increases in accuracy. The present work describes an approach that computes the event times (when the set of active equations in a DAE changes) of all contact impact events for a multi-body simulation, toward using integration techniques with error control to compute a solution with desired accuracy. We also describe a first-order, variable integration approach that ensures that rigid bodies with convex polytopic geometries never interpenetrate. This approach permits taking large steps when possible and takes small steps when contact is complex. | Stewart and Trinkle @cite_10 and Anitescu and Potra @cite_0 described an approach (based on Moreau's "time stepping" @cite_13 discretization of the rigid body dynamics and differential inclusion theory) that provably mitigated the problem of inconsistent configurations. These authors proved that the complementarity problem @cite_30 based approaches used in these works do non-positive work and always possess a solution if the contact constraints are linearly independent.
Shortly after introducing this method, Stewart proved convergence of the approach to the solution to the continuous time dynamics @cite_17 as the integration step tends to zero. Stewart-Trinkle @cite_2 and Anitescu-Potra @cite_14 later described approaches that, respectively, prevent and minimize interpenetration; Figures and illustrate the similarities and differences. Proofs of convergence and constraint stabilization rely upon assumptions that the complementarity problem has unique solutions @cite_2 and the function @cite_14 is differentiable. | {
"cite_N": [
"@cite_30",
"@cite_13",
"@cite_14",
"@cite_0",
"@cite_2",
"@cite_10",
"@cite_17"
],
"mid": [
"2000151369",
"2335533",
"2050594090",
"1514487078",
"2113115376",
"2085897137",
"2093990123"
],
"abstract": [
"Preface to the Classics Edition Preface Glossary of notation Numbering system 1. Introduction 2. Background 3. Existence and multiplicity 4. Pivoting methods 5. Iterative methods 6. Geometry and degree theory 7. Sensitivity and stability analysis Bibliography Index.",
"This paper is devoted to mechanical systems with a finite number of degrees of freedom; let q1,...,qn denote (possibly local) coordinates in the configuration manifold Q. In addition to the constraints, bilateral and frictionless, which have permitted such a finite-dimensional parametrization of Q, we assume the system submitted to a finite family of unilateral constraints whose geometrical effect is expressed by v inequalities @math (1.1) defining a closed region L of Q. As every greek index in the sequel, α takes its values in the set 1,2,...,v . The v functions fα are supposed C1, with nonzero gradients, at least in some neighborhood of the respective surfaces fα = 0; for the sake of simplicity, we assume them independent of time.",
"We present a method for achieving geometrical constraint stabilization for a linear-complementarity-based time-stepping scheme for rigid multibody dynamics with joints, contact, and friction. The method requires the solution of only one linear complementarity problem per step. We prove that the velocity stays bounded and that the constraint infeasibility is uniformly bounded in terms of the size of the time step and the current value of the velocity. Several examples, including one for joint-only systems, are used to demonstrate the constraint stabilization effect. Copyright © 2004 John Wiley & Sons, Ltd.",
"A linear complementarity formulation for dynamic multi-rigid-body contact problems with Coulomb friction is presented. The formulation, based on explicit Euler integration and polygonal approximation of the friction cone, is guaranteed to have a solution for any number of contacts and contact configuration. A model with the same property, based on the Poisson hypothesis, is formulated for impact problems with friction and nonzero restitution coefficients. An explicit Euler scheme based on these formulations is presented and is proved to have uniformly bounded velocities as the stepsize tends to zero for the Newton–Euler formulation in body co-ordinates.",
"In this paper a new time-stepping method for simulating systems of rigid bodies is given. Unlike methods which take an instantaneous point of view, our method is based on impulse-momentum equations, and so does not need to explicitly resolve impulsive forces. On the other hand, our method is distinct from previous impulsive methods in that it does not require explicit collision checking and it can handle simultaneous impacts. Numerical results are given for one planar and one three dimensional example, which demonstrate the practicality of the method, and its convergence as the step size becomes small.",
"In this paper a new time-stepping method for simulating systems of rigid bodies is given which incorporates Coulomb friction and inelastic impacts and shocks. Unlike other methods which take an instantaneous point of view, this method does not need to identify explicitly impulsive forces. Instead, the treatment is similar to that of J. J. Moreau and Monteiro-Marques, except that the numerical formulation used here ensures that there is no inter-penetration of rigid bodies, unlike their velocity-based formulation. Numerical results are given for the method presented here for a spinning rod impacting a table in two dimensions, and a system of four balls colliding on a table in a fully three-dimensional way. These numerical results also show the practicality of the method, and convergence of the method as the step size becomes small.",
"This paper gives convergence theory for a new implicit time‐stepping scheme for general rigid‐body dynamics with Coulomb friction and purely inelastic collisions and shocks. An important consequence of this work is the proof of existence of solutions of rigid‐body problems which include the famous counterexamples of Painleve. The mathematical basis for this work is the formulation of the rigid‐body problem in terms of measure differential inclusions of Moreau and Monteiro Marques. The implicit time‐stepping method is based on complementarity problems, and is essentially a particular case of the algorithm described in Anitescu & Potra [2], which in turn is based on the formulation in Stewart & Trinkle [47]."
]
} |
1608.02171 | 2554997459 | An effective paradigm for simulating the dynamics of robots that locomote and manipulate is multi-rigid body simulation with rigid contact. This paradigm provides reasonable tradeoffs between accuracy, running time, and simplicity of parameter selection and identification. The Stewart-Trinkle Anitescu-Potra time stepping approach is the basis of many existing implementations. It successfully treats inconsistent (Painleve-type) contact configurations, efficiently handles many contact events occurring in short time intervals, and provably converges to the solution of the continuous time differential algebraic equations (DAEs) as the integration step size tends to zero. However, there is currently no means to determine when the solution has largely converged, i.e., when smaller integration steps would result in only small increases in accuracy. The present work describes an approach that computes the event times (when the set of active equations in a DAE changes) of all contact impact events for a multi-body simulation, toward using integration techniques with error control to compute a solution with desired accuracy. We also describe a first-order, variable integration approach that ensures that rigid bodies with convex polytopic geometries never interpenetrate. This approach permits taking large steps when possible and takes small steps when contact is complex. | Mirtich pioneered conservative advancement (CA), which uses the distance between bodies and bodies' velocities and accelerations, to integrate rigid bodies and multi-rigid body systems without missing contact events (a guarantee root finding approaches cannot make @cite_4 ). Assuming the typical first order integration process for multibody dynamics with contact, CA (depicted in Figure ) can use the formula below, where variables @math represent linear velocities, @math represent angular velocities, @math is the minimum bounding sphere for the body, and @math :
"cite_N": [
"@cite_4"
],
"mid": [
"1602382704"
],
"abstract": [
"Dynamic simulation is a powerful application of today's computers, with uses in fields ranging from engineering to animation to virtual reality. This thesis introduces a new paradigm for dynamic simulation, called impulse-based simulation. The paradigm is designed to meet the twin goals of physical accuracy and computational efficiency. Obtaining physically accurate results is often the whole reason for performing a simulation, however, in many applications, computational efficiency is equally important. Impulse-based simulation is designed to simulate moderately complex systems at interactive speeds. To achieve this performance, certain restrictions are made on the systems to be simulated. The strongest restriction is that they comprise only rigid bodies. The hardest part of rigid body simulation is modeling the interactions that occur between bodies in contact. The most commonly used approaches are penalty methods, followed by analytic methods. Both of these approaches are constraint-based, meaning that constraint forces at the contact points are continually computed and applied to determine the accelerations of the bodies. Impulse-based simulation is a departure from these approaches, in that there are no explicit constraints to be maintained at contact points. Rather, all contact interactions between bodies are affected through collisions; rolling, sliding, resting, and colliding contact are all modeled in this way. The approach has several advantages, including simplicity, robustness, parallelizability, and an ability to efficiently simulate classes of systems that are difficult to simulate using constraint-based methods. The accuracy of impulse-based simulation has been experimentally tested and is sufficient for many applications. The processing of collisions is a critical aspect of the impulse-based approach. Efficient algorithms are needed for detecting the large number of collisions that occur, without missing any. 
Furthermore, the physical accuracy of the simulator rests upon the accuracy of the collision response algorithms. This thesis describes these essential algorithms, and their underlying theory. It describes how the algorithms for simple rigid body simulation may be extended to systems of articulated rigid bodies. To prove the method is truly practical, the algorithms have been implemented in the prototype simulator, Impulse. Many experiments performed with Impulse are described."
]
} |
1608.02171 | 2554997459 | An effective paradigm for simulating the dynamics of robots that locomote and manipulate is multi-rigid body simulation with rigid contact. This paradigm provides reasonable tradeoffs between accuracy, running time, and simplicity of parameter selection and identification. The Stewart-Trinkle Anitescu-Potra time stepping approach is the basis of many existing implementations. It successfully treats inconsistent (Painleve-type) contact configurations, efficiently handles many contact events occurring in short time intervals, and provably converges to the solution of the continuous time differential algebraic equations (DAEs) as the integration step size tends to zero. However, there is currently no means to determine when the solution has largely converged, i.e., when smaller integration steps would result in only small increases in accuracy. The present work describes an approach that computes the event times (when the set of active equations in a DAE changes) of all contact impact events for a multi-body simulation, toward using integration techniques with error control to compute a solution with desired accuracy. We also describe a first-order, variable integration approach that ensures that rigid bodies with convex polytopic geometries never interpenetrate. This approach permits taking large steps when possible and takes small steps when contact is complex. | Proofs of correctness of conservative advancement (i.e., proof that it generates a lower bound on the time of impact) are provided in @cite_4 . The formula indicates that angular velocities parallel to @math ---the normal to the contact manifold for "kissing" bodies---do not decrease @math , the conservative bound. When bodies are contacting, the safe bound is zero, so a special strategy is necessary.
Mirtich kept bodies undergoing sustained contact separated, using a "microcollision" approach, which not only results in poor computation rate on some scenarios (like stacking), but also introduces error that does not disappear as the integration step tends to zero. The approach described in the present work instead leverages knowledge that the geometries are convex polyhedra to address this problem. Note that non-convex polytope geometries may be decomposed into unions of convex polytopes (see, e.g., @cite_24 ).
"cite_N": [
"@cite_24",
"@cite_4"
],
"mid": [
"1983687767",
"1602382704"
],
"abstract": [
"Decomposition is a technique commonly used to partition complex models into simpler components. While decomposition into convex components results in pieces that are easy to process, such decompositions can be costly to construct and can result in representations with an unmanageable number of components. In this paper we explore an alternative partitioning strategy that decomposes a given model into ''approximately convex'' pieces that may provide similar benefits as convex components, while the resulting decomposition is both significantly smaller (typically by orders of magnitude) and can be computed more efficiently. Indeed, for many applications, an approximate convex decomposition (acd) can more accurately represent the important structural features of the model by providing a mechanism for ignoring less significant features, such as surface texture. We describe a technique for computing acds of three-dimensional polyhedral solids and surfaces of arbitrary genus. We provide results illustrating that our approach results in high quality decompositions with very few components and applications showing that comparable or better results can be obtained using acd decompositions in place of exact convex decompositions (ecd) that are several orders of magnitude larger.",
"Dynamic simulation is a powerful application of today's computers, with uses in fields ranging from engineering to animation to virtual reality. This thesis introduces a new paradigm for dynamic simulation, called impulse-based simulation. The paradigm is designed to meet the twin goals of physical accuracy and computational efficiency. Obtaining physically accurate results is often the whole reason for performing a simulation, however, in many applications, computational efficiency is equally important. Impulse-based simulation is designed to simulate moderately complex systems at interactive speeds. To achieve this performance, certain restrictions are made on the systems to be simulated. The strongest restriction is that they comprise only rigid bodies. The hardest part of rigid body simulation is modeling the interactions that occur between bodies in contact. The most commonly used approaches are penalty methods, followed by analytic methods. Both of these approaches are constraint-based, meaning that constraint forces at the contact points are continually computed and applied to determine the accelerations of the bodies. Impulse-based simulation is a departure from these approaches, in that there are no explicit constraints to be maintained at contact points. Rather, all contact interactions between bodies are affected through collisions; rolling, sliding, resting, and colliding contact are all modeled in this way. The approach has several advantages, including simplicity, robustness, parallelizability, and an ability to efficiently simulate classes of systems that are difficult to simulate using constraint-based methods. The accuracy of impulse-based simulation has been experimentally tested and is sufficient for many applications. The processing of collisions is a critical aspect of the impulse-based approach. Efficient algorithms are needed for detecting the large number of collisions that occur, without missing any. 
Furthermore, the physical accuracy of the simulator rests upon the accuracy of the collision response algorithms. This thesis describes these essential algorithms, and their underlying theory. It describes how the algorithms for simple rigid body simulation may be extended to systems of articulated rigid bodies. To prove the method is truly practical, the algorithms have been implemented in the prototype simulator, Impulse. Many experiments performed with Impulse are described."
]
} |
1608.01961 | 2490986620 | One major deficiency of most semantic representation techniques is that they usually model a word type as a single point in the semantic space, hence conflating all the meanings that the word can have. Addressing this issue by learning distinct representations for individual meanings of words has been the subject of several research studies in the past few years. However, the generated sense representations are either not linked to any sense inventory or are unreliable for infrequent word senses. We propose a technique that tackles these problems by de-conflating the representations of words based on the deep knowledge it derives from a semantic network. Our approach provides multiple advantages in comparison to the past work, including its high coverage and the ability to generate accurate representations even for infrequent word senses. We carry out evaluations on six datasets across two semantic similarity tasks and report state-of-the-art results on most of them. | Another notable line of research incorporates knowledge from external resources, such as PPDB @cite_21 and WordNet, to improve word embeddings @cite_26 @cite_1 . Neither of the two techniques, however, provides representations for word senses.
"cite_N": [
"@cite_21",
"@cite_1",
"@cite_26"
],
"mid": [
"2251044566",
"",
"2250930514"
],
"abstract": [
"We present the 1.0 release of our paraphrase database, PPDB. Its English portion, PPDB:Eng, contains over 220 million paraphrase pairs, consisting of 73 million phrasal and 8 million lexical paraphrases, as well as 140 million paraphrase patterns, which capture many meaning-preserving syntactic transformations. The paraphrases are extracted from bilingual parallel corpora totaling over 100 million sentence pairs and over 2 billion English words. We also release PPDB:Spa, a collection of 196 million Spanish paraphrases. Each paraphrase pair in PPDB contains a set of associated scores, including paraphrase probabilities derived from the bitext data and a variety of monolingual distributional similarity scores computed from the Google n-grams and the Annotated Gigaword corpus. Our release includes pruning tools that allow users to determine their own precision recall tradeoff.",
"",
"Word embeddings learned on unlabeled data are a popular tool in semantics, but may not capture the desired semantics. We propose a new learning objective that incorporates both a neural language model objective (, 2013) and prior knowledge from semantic resources to learn improved lexical semantic embeddings. We demonstrate that our embeddings improve over those learned solely on raw text in three settings: language modeling, measuring semantic similarity, and predicting human judgements."
]
} |
1608.01760 | 2476528498 | This paper proposes the use of graph pattern matching for investigative graph search, which is the process of searching for and prioritizing persons of interest who may exhibit part or all of a pattern of suspicious behaviors or connections. While there are a variety of applications, our principal motivation is to aid law enforcement in the detection of homegrown violent extremists. We introduce investigative simulation, which consists of several necessary extensions to the existing dual simulation graph pattern matching scheme in order to make it appropriate for intelligence analysts and law enforcement officials. Specifically, we impose a categorical label structure on nodes consistent with the nature of indicators in investigations, as well as prune or complete search results to ensure sensibility and usefulness of partial matches to analysts. Lastly, we introduce a natural top-k ranking scheme that can help analysts prioritize investigative efforts. We demonstrate performance of investigative simulation on a real-world large dataset. | Our work builds upon advances in graph pattern matching in the static setting. Several surveys exist, including @cite_1 and @cite_8 . Of the two principal types of matching, exact and inexact, we focus our efforts on the state of the art in inexact matching due to its flexibility for returning results in the presence of noise or errors in the data @cite_8 . The notable works in static inexact matching include 'best-effort matching' @cite_27 , TALE @cite_20 , SIGMA @cite_4 , NeMa @cite_5 , and MAGE @cite_24 . The 'inexact' component of these works primarily involves the allowance for finding nearby matches for nodes in which an exact match does not exist. | {
"cite_N": [
"@cite_4",
"@cite_8",
"@cite_1",
"@cite_24",
"@cite_27",
"@cite_5",
"@cite_20"
],
"mid": [
"2101511590",
"2903174694",
"2323284252",
"2047944694",
"2048653843",
"1509240356",
"2147913014"
],
"abstract": [
"Network querying is a growing domain with vast applications ranging from screening compounds against a database of known molecules to matching sub-networks across species. Graph indexing is a powerful method for searching a large database of graphs. Most graph indexing methods to date tackle the exact matching (isomorphism) problem, limiting their applicability to specific instances in which such matches exist. Here we provide a novel graph indexing method to cope with the more general, inexact matching problem. Our method, SIGMA, builds on approximating a variant of the set-cover problem that concerns overlapping multi-sets. We extensively test our method and compare it to a baseline method and to the state-of-the-art Grafil. We show that SIGMA outperforms both, providing higher pruning power in all the tested scenarios.",
"",
"Most of the internet data that is available in public that are analyzed/archived is graph structured in nature. Graphs form a powerful modeling tool in many areas that include chemistry, biology, www etc. Hence there is a demand for efficiently querying such large graph data. Graph pattern matching problem is to find all the patterns from a large data graph that match the given graph pattern. The survey paper discusses tree pattern matching and graph pattern matching techniques and efficient computation of compressed transitive closure using 2-hop labeling. Join based algorithms are proposed which are two step filter(R-semijoin) and fetch(R-join) steps that are implemented using cluster-based index in relational database context. Optimization techniques like R-join order selection with R-semijoin enhancement and interleaving R-joins and R-semijoins are proposed which make graph pattern matching efficient.",
"Given a large graph with millions of nodes and edges, say a social network where both its nodes and edges have multiple attributes (e.g., job titles, tie strengths), how to quickly find subgraphs of interest (e.g., a ring of businessmen with strong ties)? We present MAGE, a scalable, multicore subgraph matching approach that supports expressive queries over large, richly-attributed graphs. Our major contributions include: (1) MAGE supports graphs with both node and edge attributes (most existing approaches handle either one, but not both); (2) it supports expressive queries, allowing multiple attributes on an edge, wildcards as attribute values (i.e., match any permissible values), and attributes with continuous values; and (3) it is scalable, supporting graphs with several hundred million edges. We demonstrate MAGE's effectiveness and scalability via extensive experiments on large real and synthetic graphs, such as a Google+ social network with 460 million edges.",
"We focus on large graphs where nodes have attributes, such as a social network where the nodes are labelled with each person's job title. In such a setting, we want to find subgraphs that match a user query pattern. For example, a \"star\" query would be, \"find a CEO who has strong interactions with a Manager, a Lawyer, and an Accountant, or another structure as close to that as possible\". Similarly, a \"loop\" query could help spot a money laundering ring. Traditional SQL-based methods, as well as more recent graph indexing methods, will return no answer when an exact match does not exist. This is the first main feature of our method. It can find exact-, as well as near-matches, and it will present them to the user in our proposed \"goodness\" order. For example, our method tolerates indirect paths between, say, the \"CEO\" and the \"Accountant\" of the above sample query, when direct paths don't exist. Its second feature is scalability. In general, if the query has nq nodes and the data graph has n nodes, the problem needs polynomial time complexity O(n^nq), which is prohibitive. Our G-Ray (\"Graph X-Ray\") method finds high-quality subgraphs in time linear on the size of the data graph. Experimental results on the DBLP author-publication graph (with 356K nodes and 1.9M edges) illustrate both the effectiveness and scalability of our approach. The results agree with our intuition, and the speed is excellent. It takes 4 seconds on average for a 4-node query on the DBLP graph.",
"It is increasingly common to find real-life data represented as networks of labeled, heterogeneous entities. To query these networks, one often needs to identify the matches of a given query graph in a (typically large) network modeled as a target graph. Due to noise and the lack of fixed schema in the target graph, the query graph can substantially differ from its matches in the target graph in both structure and node labels, thus bringing challenges to the graph querying tasks. In this paper, we propose NeMa (Network Match), a neighborhood-based subgraph matching technique for querying real-life networks. (1) To measure the quality of the match, we propose a novel subgraph matching cost metric that aggregates the costs of matching individual nodes, and unifies both structure and node label similarities. (2) Based on the metric, we formulate the minimum cost subgraph matching problem. Given a query graph and a target graph, the problem is to identify the (top-k) matches of the query graph with minimum costs in the target graph. We show that the problem is NP-hard, and also hard to approximate. (3) We propose a heuristic algorithm for solving the problem based on an inference model. In addition, we propose optimization techniques to improve the efficiency of our method. (4) We empirically verify that NeMa is both effective and efficient compared to the keyword search and various state-of-the-art graph querying techniques.",
"Large graph datasets are common in many emerging database applications, and most notably in large-scale scientific applications. To fully exploit the wealth of information encoded in graphs, effective and efficient graph matching tools are critical. Due to the noisy and incomplete nature of real graph datasets, approximate, rather than exact, graph matching is required. Furthermore, many modern applications need to query large graphs, each of which has hundreds to thousands of nodes and edges. This paper presents a novel technique for approximate matching of large graph queries. We propose a novel indexing method that incorporates graph structural information in a hybrid index structure. This indexing technique achieves high pruning power and the index size scales linearly with the database size. In addition, we propose an innovative matching paradigm to query large graphs. This technique distinguishes nodes by their importance in the graph structure. The matching algorithm first matches the important nodes of a query and then progressively extends these matches. Through experiments on several real datasets, this paper demonstrates the effectiveness and efficiency of the proposed method."
]
} |
1608.01760 | 2476528498 | This paper proposes the use of graph pattern matching for investigative graph search, which is the process of searching for and prioritizing persons of interest who may exhibit part or all of a pattern of suspicious behaviors or connections. While there are a variety of applications, our principal motivation is to aid law enforcement in the detection of homegrown violent extremists. We introduce investigative simulation, which consists of several necessary extensions to the existing dual simulation graph pattern matching scheme in order to make it appropriate for intelligence analysts and law enforcement officials. Specifically, we impose a categorical label structure on nodes consistent with the nature of indicators in investigations, as well as prune or complete search results to ensure sensibility and usefulness of partial matches to analysts. Lastly, we introduce a natural top-k ranking scheme that can help analysts prioritize investigative efforts. We demonstrate performance of investigative simulation on a real-world large dataset. | Of these, the work most closely related to ours in intention is @cite_24 , which first introduced a graph pattern matching method that supports exact and inexact queries on both node and edge attributes as well as wildcard matches. This matching method specifically cites intelligence analysis as a use-case and offers great flexibility in the query construction, allowing analysts to explore the unknown or uncertain connections. However, this matching scheme still does not truly support uncertain indicator-type matches or innocuous nodes that become significant only in the context of other indicators. | {
"cite_N": [
"@cite_24"
],
"mid": [
"2047944694"
],
"abstract": [
"Given a large graph with millions of nodes and edges, say a social network where both its nodes and edges have multiple attributes (e.g., job titles, tie strengths), how to quickly find subgraphs of interest (e.g., a ring of businessmen with strong ties)? We present MAGE, a scalable, multicore subgraph matching approach that supports expressive queries over large, richly-attributed graphs. Our major contributions include: (1) MAGE supports graphs with both node and edge attributes (most existing approaches handle either one, but not both); (2) it supports expressive queries, allowing multiple attributes on an edge, wildcards as attribute values (i.e., match any permissible values), and attributes with continuous values; and (3) it is scalable, supporting graphs with several hundred million edges. We demonstrate MAGE's effectiveness and scalability via extensive experiments on large real and synthetic graphs, such as a Google+ social network with 460 million edges."
]
} |
1608.01760 | 2476528498 | This paper proposes the use of graph pattern matching for investigative graph search, which is the process of searching for and prioritizing persons of interest who may exhibit part or all of a pattern of suspicious behaviors or connections. While there are a variety of applications, our principal motivation is to aid law enforcement in the detection of homegrown violent extremists. We introduce investigative simulation, which consists of several necessary extensions to the existing dual simulation graph pattern matching scheme in order to make it appropriate for intelligence analysts and law enforcement officials. Specifically, we impose a categorical label structure on nodes consistent with the nature of indicators in investigations, as well as prune or complete search results to ensure sensibility and usefulness of partial matches to analysts. Lastly, we introduce a natural top-k ranking scheme that can help analysts prioritize investigative efforts. We demonstrate performance of investigative simulation on a real-world large dataset. | Equally important are simulation-based matching schemes, starting with bounded graph simulation @cite_16 @cite_26 , to find meaningful matches given a pattern graph with arbitrary or specified path lengths in the connections. Later, dual and strong simulations @cite_26 @cite_25 were developed to preserve query graph topology through enforcement of both parent and child relationships in the match and the imposition of locality constraints. | {
"cite_N": [
"@cite_16",
"@cite_25",
"@cite_26"
],
"mid": [
"2123966888",
"2055898276",
"2132285256"
],
"abstract": [
"Graph pattern matching is typically defined in terms of subgraph isomorphism, which makes it an NP-complete problem. Moreover, it requires bijective functions, which are often too restrictive to characterize patterns in emerging applications. We propose a class of graph patterns, in which an edge denotes the connectivity in a data graph within a predefined number of hops. In addition, we define matching based on a notion of bounded simulation, an extension of graph simulation. We show that with this revision, graph pattern matching can be performed in cubic-time, by providing such an algorithm. We also develop algorithms for incrementally finding matches when data graphs are updated, with performance guarantees for DAG patterns. We experimentally verify that these algorithms scale well, and that the revised notion of graph pattern matching allows us to identify communities commonly found in real-world networks.",
"Graph pattern matching is finding all matches in a data graph for a given pattern graph and is often defined in terms of subgraph isomorphism, an NP-complete problem. To lower its complexity, various extensions of graph simulation have been considered instead. These extensions allow graph pattern matching to be conducted in cubic time. However, they fall short of capturing the topology of data graphs, that is, graphs may have a structure drastically different from pattern graphs they match, and the matches found are often too large to understand and analyze. To rectify these problems, this article proposes a notion of strong simulation, a revision of graph simulation for graph pattern matching. (1) We identify a set of criteria for preserving the topology of graphs matched. We show that strong simulation preserves the topology of data graphs and finds a bounded number of matches. (2) We show that strong simulation retains the same complexity as earlier extensions of graph simulation by providing a cubic-time algorithm for computing strong simulation. (3) We present the locality property of strong simulation which allows us to develop an effective distributed algorithm to conduct graph pattern matching on distributed graphs. (4) We experimentally verify the effectiveness and efficiency of these algorithms using both real-life and synthetic data.",
"Graph pattern matching is fundamental to social network analysis. Traditional techniques are subgraph isomorphism and graph simulation. However, these notions often impose too strong a topological constraint on graphs to find meaningful matches. Worse still, graphs in the real world are typically large, with millions of nodes and billions of edges. It is often prohibitively expensive to compute matches in such graphs. With these comes the need for revising the notions of graph pattern matching and for developing techniques of querying large graphs, to effectively and efficiently identify social communities or groups. This paper aims to provide an overview of recent advances in the study of graph pattern matching in social networks. (1) We present several revisions of the traditional notions of graph pattern matching to find sensible matches in social networks. (2) We provide boundedness analyses of incremental graph pattern matching, in response to frequent updates to social networks. (3) To cope with large real-life graphs, we propose a framework of query preserving graph compression, which retains only information necessary for answering a certain class of queries of users' choice. (4) We also address pattern matching in distributed graphs, and in particular, advocate the use of partial evaluation techniques. Finally, we identify directions for future research."
]
} |
1608.01760 | 2476528498 | This paper proposes the use of graph pattern matching for investigative graph search, which is the process of searching for and prioritizing persons of interest who may exhibit part or all of a pattern of suspicious behaviors or connections. While there are a variety of applications, our principal motivation is to aid law enforcement in the detection of homegrown violent extremists. We introduce investigative simulation, which consists of several necessary extensions to the existing dual simulation graph pattern matching scheme in order to make it appropriate for intelligence analysts and law enforcement officials. Specifically, we impose a categorical label structure on nodes consistent with the nature of indicators in investigations, as well as prune or complete search results to ensure sensibility and usefulness of partial matches to analysts. Lastly, we introduce a natural top-k ranking scheme that can help analysts prioritize investigative efforts. We demonstrate performance of investigative simulation on a real-world large dataset. | Lastly, because most graph queries end up returning many matches given a large graph, researchers have also devised ways to rank the most relevant matches using various goodness functions. Such criteria include social impact @cite_3 , social diversity @cite_3 , structural similarity @cite_15 @cite_5 @cite_27 , weighted attribute similarity @cite_15 , and label similarity @cite_5 . As sophisticated as these ranking methods are, we find that none account for intuitive red-flag indicators (i.e., those matches which demand the immediate attention of an analyst) that are relevant in investigative searches. | {
"cite_N": [
"@cite_5",
"@cite_15",
"@cite_27",
"@cite_3"
],
"mid": [
"1509240356",
"",
"2048653843",
"2167429959"
],
"abstract": [
"It is increasingly common to find real-life data represented as networks of labeled, heterogeneous entities. To query these networks, one often needs to identify the matches of a given query graph in a (typically large) network modeled as a target graph. Due to noise and the lack of fixed schema in the target graph, the query graph can substantially differ from its matches in the target graph in both structure and node labels, thus bringing challenges to the graph querying tasks. In this paper, we propose NeMa (Network Match), a neighborhood-based subgraph matching technique for querying real-life networks. (1) To measure the quality of the match, we propose a novel subgraph matching cost metric that aggregates the costs of matching individual nodes, and unifies both structure and node label similarities. (2) Based on the metric, we formulate the minimum cost subgraph matching problem. Given a query graph and a target graph, the problem is to identify the (top-k) matches of the query graph with minimum costs in the target graph. We show that the problem is NP-hard, and also hard to approximate. (3) We propose a heuristic algorithm for solving the problem based on an inference model. In addition, we propose optimization techniques to improve the efficiency of our method. (4) We empirically verify that NeMa is both effective and efficient compared to the keyword search and various state-of-the-art graph querying techniques.",
"",
"We focus on large graphs where nodes have attributes, such as a social network where the nodes are labelled with each person's job title. In such a setting, we want to find subgraphs that match a user query pattern. For example, a \"star\" query would be, \"find a CEO who has strong interactions with a Manager, a Lawyer, and an Accountant, or another structure as close to that as possible\". Similarly, a \"loop\" query could help spot a money laundering ring. Traditional SQL-based methods, as well as more recent graph indexing methods, will return no answer when an exact match does not exist. This is the first main feature of our method. It can find exact-, as well as near-matches, and it will present them to the user in our proposed \"goodness\" order. For example, our method tolerates indirect paths between, say, the \"CEO\" and the \"Accountant\" of the above sample query, when direct paths don't exist. Its second feature is scalability. In general, if the query has nq nodes and the data graph has n nodes, the problem needs polynomial time complexity O(n^nq), which is prohibitive. Our G-Ray (\"Graph X-Ray\") method finds high-quality subgraphs in time linear on the size of the data graph. Experimental results on the DBLP author-publication graph (with 356K nodes and 1.9M edges) illustrate both the effectiveness and scalability of our approach. The results agree with our intuition, and the speed is excellent. It takes 4 seconds on average for a 4-node query on the DBLP graph.",
"Graph pattern matching has been widely used in e.g., social data analysis. A number of matching algorithms have been developed that, given a graph pattern Q and a graph G, compute the set M(Q,G) of matches of Q in G. However, these algorithms often return an excessive number of matches, and are expensive on large real-life social graphs. Moreover, in practice many social queries are to find matches of a specific pattern node, rather than the entire M(Q,G). This paper studies top-k graph pattern matching. (1) We revise graph pattern matching defined in terms of simulation, by supporting a designated output node uo. Given G and Q, it is to find those nodes in M(Q,G) that match uo, instead of the large set M(Q,G). (2) We study two classes of functions for ranking the matches: relevance functions δr() based on, e.g., social impact, and distance functions δd() to cover diverse elements. (3) We develop two algorithms for computing top-k matches of uo based on δr(), with the early termination property, i.e., they find top-k matches without computing the entire M(Q,G). (4) We also study diversified top-k matching, a bi-criteria optimization problem based on both δr() and δd(). We show that its decision problem is NP-complete. Nonetheless, we provide an approximation algorithm with performance guarantees and a heuristic one with the early termination property. (5) Using real-life and synthetic data, we experimentally verify that our (diversified) top-k matching algorithms are effective, and outperform traditional matching algorithms in efficiency."
]
} |
1608.01302 | 2462851002 | We investigate learning heuristics for domain-specific planning. Prior work framed learning a heuristic as an ordinary regression problem. However, in a greedy best-first search, the ordering of states induced by a heuristic is more indicative of the resulting planner's performance than mean squared error. Thus, we instead frame learning a heuristic as a learning to rank problem which we solve using a RankSVM formulation. Additionally, we introduce new methods for computing features that capture temporal interactions in an approximate plan. Our experiments on recent International Planning Competition problems show that the RankSVM learned heuristics outperform both the original heuristics and heuristics learned through ordinary regression. | Prior work in learning for planning spans many types of domain-specific planning knowledge @cite_3 ; our focus in this paper is on learning heuristic functions. | {
"cite_N": [
"@cite_3"
],
"mid": [
"2291853497"
],
"abstract": [
"Recent discoveries in automated planning are broadening the scope of planners, from toy problems to real applications. However, applying automated planners to real-world problems is far from simple. On the one hand, the definition of accurate action models for planning is still a bottleneck. On the other hand, off-the-shelf planners fail to scale-up and to provide good solutions in many domains. In these problematic domains, planners can exploit domain-specific control knowledge to improve their performance in terms of both speed and quality of the solutions. However, manual definition of control knowledge is quite difficult. This paper reviews recent techniques in machine learning for the automatic definition of planning knowledge. It has been organized according to the target of the learning process: automatic definition of planning action models and automatic definition of planning control knowledge. In addition, the paper reviews the advances in the related field of reinforcement learning."
]
} |
1608.01302 | 2462851002 | We investigate learning heuristics for domain-specific planning. Prior work framed learning a heuristic as an ordinary regression problem. However, in a greedy best-first search, the ordering of states induced by a heuristic is more indicative of the resulting planner's performance than mean squared error. Thus, we instead frame learning a heuristic as a learning to rank problem which we solve using a RankSVM formulation. Additionally, we introduce new methods for computing features that capture temporal interactions in an approximate plan. Our experiments on recent International Planning Competition problems show that the RankSVM learned heuristics outperform both the original heuristics and heuristics learned through ordinary regression. | were the first to improve on a heuristic function using machine learning @cite_22 @cite_19 . They centered their learning on improving the FF Heuristic @cite_17 , using ordinary least-squares regression to learn the difference between the actual distance-to-go and the estimate given by the FF heuristic. Their key contribution was deriving features using the relaxed plan that FF produces when computing its estimate. Specifically, they used taxonomic syntax to identify unordered sets of actions and predicates on the relaxed plan that shared common object arguments. Because there are an exponential number of possible subsets of actions and predicates, they iteratively introduced a taxonomic expression that identifies a subset greedily based on which subset will give the largest decrease in mean squared error. This process resulted in an average of about 20 features per domain @cite_25 . In contrast, our features encode ordering information about the plan and can be successfully applied without any taxonomic syntax or iterative feature selection. | {
"cite_N": [
"@cite_19",
"@cite_25",
"@cite_22",
"@cite_17"
],
"mid": [
"2107218456",
"2146140624",
"174354460",
"1545688112"
],
"abstract": [
"A number of today's state-of-the-art planners are based on forward state-space search. The impressive performance can be attributed to progress in computing domain independent heuristics that perform well across many domains. However, it is easy to find domains where such heuristics provide poor guidance, leading to planning failure. Motivated by such failures, the focus of this paper is to investigate mechanisms for learning domain-specific knowledge to better control forward search in a given domain. While there has been a large body of work on inductive learning of control knowledge for AI planning, there is a void of work aimed at forward-state-space search. One reason for this may be that it is challenging to specify a knowledge representation for compactly representing important concepts across a wide range of domains. One of the main contributions of this work is to introduce a novel feature space for representing such control knowledge. The key idea is to define features in terms of information computed via relaxed plan extraction, which has been a major source of success for non-learning planners. This gives a new way of leveraging relaxed planning techniques in the context of learning. Using this feature space, we describe three forms of control knowledge---reactive policies (decision list rules and measures of progress) and linear heuristics---and show how to learn them and incorporate them into forward state-space search. Our empirical results show that our approaches are able to surpass state-of-the-art non-learning planners across a wide range of planning competition domains.",
"Beam search is commonly used to help maintain tractability in large search spaces at the expense of completeness and optimality. Here we study supervised learning of linear ranking functions for controlling beam search. The goal is to learn ranking functions that allow for beam search to perform nearly as well as unconstrained search, and hence gain computational efficiency without seriously sacrificing optimality. In this paper, we develop theoretical aspects of this learning problem and investigate the application of this framework to learning in the context of automated planning. We first study the computational complexity of the learning problem, showing that even for exponentially large search spaces the general consistency problem is in NP. We also identify tractable and intractable subclasses of the learning problem, giving insight into the problem structure. Next, we analyze the convergence of recently proposed and modified online learning algorithms, where we introduce several notions of problem margin that imply convergence for the various algorithms. Finally, we present empirical results in automated planning, where ranking functions are learned to guide beam search in a number of benchmark planning domains. The results show that our approach is often able to outperform an existing state-of-the-art planning heuristic as well as a recent approach to learning such heuristics.",
"We present a novel approach to learning heuristic functions for AI planning domains. Given a state, we view a relaxed plan (RP) found from that state as a relational database, which includes the current state and goal facts, the actions in the RP, and the actions' add and delete lists. We represent heuristic functions as linear combinations of generic features of the database, selecting features and weights using training data from solved problems in the target planning domain. Many recent competitive planners use RP-based heuristics, but focus exclusively on the length of the RP, ignoring other RP features. Since RP construction ignores delete lists, for many domains, RP length dramatically under-estimates the distance to a goal, providing poor guidance. By using features that depend on deleted facts and other RP properties, our learned heuristics can potentially capture patterns that describe where such under-estimation occurs. Experiments in the STRIPS domains of IPC 3 and 4 show that best-first search using the learned heuristic can outperform FF (Hoffmann & Nebel 2001), which provided our training data, and frequently outperforms the top performances in IPC 4.",
"We describe and evaluate the algorithmic techniques that are used in the FF planning system. Like the HSP system, FF relies on forward state space search, using a heuristic that estimates goal distances by ignoring delete lists. Unlike HSP's heuristic, our method does not assume facts to be independent. We introduce a novel search strategy that combines hill-climbing with systematic search, and we show how other powerful heuristic information can be extracted and used to prune the search space. FF was the most successful automatic planner at the recent AIPS-2000 planning competition. We review the results of the competition, give data for other benchmark domains, and investigate the reasons for the runtime performance of FF compared to HSP."
]
} |