| aid (string) | mid (string) | abstract (string) | related_work (string) | ref_abstract (dict) |
|---|---|---|---|---|
1604.03641 | 2951357319 | Dynamic languages such as Ruby, Python, and JavaScript have many compelling benefits, but the lack of static types means subtle errors can remain latent in code for a long time. While many researchers have developed various systems to bring some of the benefits of static types to dynamic languages, prior approaches have trouble dealing with metaprogramming, which generates code as the program executes. In this paper, we propose Hummingbird, a new system that uses a novel technique, just-in-time static type checking, to type check Ruby code even in the presence of metaprogramming. In Hummingbird, method type signatures are gathered dynamically at run-time, as those methods are created. When a method is called, Hummingbird statically type checks the method body against current type signatures. Thus, Hummingbird provides thorough static checks on a per-method basis, while also allowing arbitrarily complex metaprogramming. For performance, Hummingbird memoizes the static type checking pass, invalidating cached checks only if necessary. We formalize Hummingbird using a core, Ruby-like language and prove it sound. To evaluate Hummingbird, we applied it to six apps, including three that use Ruby on Rails, a powerful framework that relies heavily on metaprogramming. We found that all apps typecheck successfully using Hummingbird, and that Hummingbird’s performance overhead is reasonable. We applied Hummingbird to earlier versions of one Rails app and found several type errors that had been introduced and then fixed. Lastly, we demonstrate using Hummingbird in Rails development mode to typecheck an app as live updates are applied to it. | The staged program analysis of @cite_26 performs static analysis on as much code as possible at compile time, and then computes a set of remaining checks to be performed at run time. 
Hummingbird uses a related idea in which no static analysis is performed at compile time, but type checking is always done when methods are called. Hummingbird is simpler because it need not compute which checks are necessary, as it always performs the same kind of checking. | {
"cite_N": [
"@cite_26"
],
"mid": [
"2165304392"
],
"abstract": [
"Modern websites are powered by JavaScript, a flexible dynamic scripting language that executes in client browsers. A common paradigm in such websites is to include third-party JavaScript code in the form of libraries or advertisements. If this code were malicious, it could read sensitive information from the page or write to the location bar, thus redirecting the user to a malicious page, from which the entire machine could be compromised. We present an information-flow based approach for inferring the effects that a piece of JavaScript has on the website in order to ensure that key security properties are not violated. To handle dynamically loaded and generated JavaScript, we propose a framework for staging information flow properties. Our framework propagates information flow through the currently known code in order to compute a minimal set of syntactic residual checks that are performed on the remaining code when it is dynamically loaded. We have implemented a prototype framework for staging information flow. We describe our techniques for handling some difficult features of JavaScript and evaluate our system's performance on a variety of large real-world websites. Our experiments show that static information flow is feasible and efficient for JavaScript, and that our technique allows the enforcement of information-flow policies with almost no run-time overhead."
]
} |
1604.03641 | 2951357319 | Dynamic languages such as Ruby, Python, and JavaScript have many compelling benefits, but the lack of static types means subtle errors can remain latent in code for a long time. While many researchers have developed various systems to bring some of the benefits of static types to dynamic languages, prior approaches have trouble dealing with metaprogramming, which generates code as the program executes. In this paper, we propose Hummingbird, a new system that uses a novel technique, just-in-time static type checking, to type check Ruby code even in the presence of metaprogramming. In Hummingbird, method type signatures are gathered dynamically at run-time, as those methods are created. When a method is called, Hummingbird statically type checks the method body against current type signatures. Thus, Hummingbird provides thorough static checks on a per-method basis, while also allowing arbitrarily complex metaprogramming. For performance, Hummingbird memoizes the static type checking pass, invalidating cached checks only if necessary. We formalize Hummingbird using a core, Ruby-like language and prove it sound. To evaluate Hummingbird, we applied it to six apps, including three that use Ruby on Rails, a powerful framework that relies heavily on metaprogramming. We found that all apps typecheck successfully using Hummingbird, and that Hummingbird’s performance overhead is reasonable. We applied Hummingbird to earlier versions of one Rails app and found several type errors that had been introduced and then fixed. Lastly, we demonstrate using Hummingbird in Rails development mode to typecheck an app as live updates are applied to it. | Several researchers have explored other ways to bring the benefits of static typing to dynamic languages. Contracts @cite_27 check assertions at function or method entry and exit. In contrast, Hummingbird performs static analysis of method bodies, which can find bugs on paths before they are run. 
At the same time, contracts can encode richer properties than types. | {
"cite_N": [
"@cite_27"
],
"mid": [
"2128303158"
],
"abstract": [
"Assertions play an important role in the construction of robust software. Their use in programming languages dates back to the 1970s. Eiffel, an object-oriented programming language, wholeheartedly adopted assertions and developed the \"Design by Contract\" philosophy. Indeed, the entire object-oriented community recognizes the value of assertion-based contracts on methods.In contrast, languages with higher-order functions do not support assertion-based contracts. Because predicates on functions are, in general, undecidable, specifying such predicates appears to be meaningless. Instead, the functional languages community developed type systems that statically approximate interesting predicates.In this paper, we show how to support higher-order function contracts in a theoretically well-founded and practically viable manner. Specifically, we introduce λ con , a typed lambda calculus with assertions for higher-order functions. The calculus models the assertion monitoring system that we employ in DrScheme. We establish basic properties of the model (type soundness, etc.) and illustrate the usefulness of contract checking with examples from DrScheme's code base.We believe that the development of an assertion system for higher-order functions serves two purposes. On one hand, the system has strong practical potential because existing type systems simply cannot express many assertions that programmers would like to state. On the other hand, an inspection of a large base of invariants may provide inspiration for the direction of practical future type system research."
]
} |
1604.03641 | 2951357319 | Dynamic languages such as Ruby, Python, and JavaScript have many compelling benefits, but the lack of static types means subtle errors can remain latent in code for a long time. While many researchers have developed various systems to bring some of the benefits of static types to dynamic languages, prior approaches have trouble dealing with metaprogramming, which generates code as the program executes. In this paper, we propose Hummingbird, a new system that uses a novel technique, just-in-time static type checking, to type check Ruby code even in the presence of metaprogramming. In Hummingbird, method type signatures are gathered dynamically at run-time, as those methods are created. When a method is called, Hummingbird statically type checks the method body against current type signatures. Thus, Hummingbird provides thorough static checks on a per-method basis, while also allowing arbitrarily complex metaprogramming. For performance, Hummingbird memoizes the static type checking pass, invalidating cached checks only if necessary. We formalize Hummingbird using a core, Ruby-like language and prove it sound. To evaluate Hummingbird, we applied it to six apps, including three that use Ruby on Rails, a powerful framework that relies heavily on metaprogramming. We found that all apps typecheck successfully using Hummingbird, and that Hummingbird’s performance overhead is reasonable. We applied Hummingbird to earlier versions of one Rails app and found several type errors that had been introduced and then fixed. Lastly, we demonstrate using Hummingbird in Rails development mode to typecheck an app as live updates are applied to it. | Gradual typing @cite_10 lets developers add types gradually as programs evolve; gradual typing has recently been implemented for Python @cite_17. Like types @cite_6 bring some of the flexibility of dynamic typing to statically typed languages. The goal of these systems is to allow mixing of typed and untyped code. 
This is orthogonal to Hummingbird, which focuses on checking code with type annotations. | {
"cite_N": [
"@cite_10",
"@cite_6",
"@cite_17"
],
"mid": [
"1568983194",
"2111531191",
"2102473657"
],
"abstract": [
"Static and dynamic type systems have well-known strengths and weaknesses. In previous work we developed a gradual type system for a functional calculus named @math . Gradual typing provides the benefits of both static and dynamic checking in a single language by allowing the programmer to control whether a portion of the program is type checked at compile-time or run-time by adding or removing type annotations on variables. Several object-oriented scripting languages are preparing to add static checking. To support that work this paper develops @math , a gradual type system for object-based languages, extending the Ob < : calculus of Abadi and Cardelli. Our primary contribution is to show that gradual typing and subtyping are orthogonal and can be combined in a principled fashion. We also develop a small-step semantics, provide a machine-checked proof of type safety, and improve the space efficiency of higher-order casts.",
"Many large software systems originate from untyped scripting language code. While good for initial development, the lack of static type annotations can impact code-quality and performance in the long run. We present an approach for integrating untyped code and typed code in the same system to allow an initial prototype to smoothly evolve into an efficient and robust program. We introduce like types , a novel intermediate point between dynamic and static typing. Occurrences of like types variables are checked statically within their scope but, as they may be bound to dynamic values, their usage is checked dynamically. Thus like types provide some of the benefits of static typing without decreasing the expressiveness of the language. We provide a formal account of like types in a core object calculus and evaluate their applicability in the context of a new scripting language.",
"Combining static and dynamic typing within the same language offers clear benefits to programmers. It provides dynamic typing in situations that require rapid prototyping, heterogeneous data structures, and reflection, while supporting static typing when safety, modularity, and efficiency are primary concerns. Siek and Taha (2006) introduced an approach to combining static and dynamic typing in a fine-grained manner through the notion of type consistency in the static semantics and run-time casts in the dynamic semantics. However, many open questions remain regarding the semantics of gradually typed languages. In this paper we present Reticulated Python, a system for experimenting with gradual-typed dialects of Python. The dialects are syntactically identical to Python 3 but give static and dynamic semantics to the type annotations already present in Python 3. Reticulated Python consists of a typechecker and a source-to-source translator from Reticulated Python to Python 3. Using Reticulated Python, we evaluate a gradual type system and three approaches to the dynamic semantics of mutable objects: the traditional semantics based on Siek and Taha (2007) and (2007) and two new designs. We evaluate these designs in the context of several third-party Python programs."
]
} |
1604.03641 | 2951357319 | Dynamic languages such as Ruby, Python, and JavaScript have many compelling benefits, but the lack of static types means subtle errors can remain latent in code for a long time. While many researchers have developed various systems to bring some of the benefits of static types to dynamic languages, prior approaches have trouble dealing with metaprogramming, which generates code as the program executes. In this paper, we propose Hummingbird, a new system that uses a novel technique, just-in-time static type checking, to type check Ruby code even in the presence of metaprogramming. In Hummingbird, method type signatures are gathered dynamically at run-time, as those methods are created. When a method is called, Hummingbird statically type checks the method body against current type signatures. Thus, Hummingbird provides thorough static checks on a per-method basis, while also allowing arbitrarily complex metaprogramming. For performance, Hummingbird memoizes the static type checking pass, invalidating cached checks only if necessary. We formalize Hummingbird using a core, Ruby-like language and prove it sound. To evaluate Hummingbird, we applied it to six apps, including three that use Ruby on Rails, a powerful framework that relies heavily on metaprogramming. We found that all apps typecheck successfully using Hummingbird, and that Hummingbird’s performance overhead is reasonable. We applied Hummingbird to earlier versions of one Rails app and found several type errors that had been introduced and then fixed. Lastly, we demonstrate using Hummingbird in Rails development mode to typecheck an app as live updates are applied to it. | The authors of @cite_18 @cite_12 have explored how highly dynamic language features are used in JavaScript. They find that such features, including eval, are used extensively in a wide variety of ways, including supporting metaprogramming. | {
"cite_N": [
"@cite_18",
"@cite_12"
],
"mid": [
"1999753800",
"1777693579"
],
"abstract": [
"The JavaScript programming language is widely used for web programming and, increasingly, for general purpose computing. As such, improving the correctness, security and performance of JavaScript applications has been the driving force for research in type systems, static analysis and compiler techniques for this language. Many of these techniques aim to reign in some of the most dynamic features of the language, yet little seems to be known about how programmers actually utilize the language or these features. In this paper we perform an empirical study of the dynamic behavior of a corpus of widely-used JavaScript programs, and analyze how and why the dynamic features are used. We report on the degree of dynamism that is exhibited by these JavaScript programs and compare that with assumptions commonly made in the literature and accepted industry benchmark suites.",
"Transforming text into executable code with a function such as Java-Script's eval endows programmers with the ability to extend applications, at any time, and in almost any way they choose. But, this expressive power comes at a price: reasoning about the dynamic behavior of programs that use this feature becomes challenging. Any ahead-of-time analysis, to remain sound, is forced to make pessimistic assumptions about the impact of dynamically created code. This pessimism affects the optimizations that can be applied to programs and significantly limits the kinds of errors that can be caught statically and the security guarantees that can be enforced. A better understanding of how eval is used could lead to increased performance and security. This paper presents a large-scale study of the use of eval in JavaScript-based web applications. We have recorded the behavior of 337 MB of strings given as arguments to 550,358 calls to the eval function exercised in over 10,000 web sites. We provide statistics on the nature and content of strings used in eval expressions, as well as their provenance and data obtained by observing their dynamic behavior."
]
} |
1604.03880 | 2340604060 | Today's person detection methods work best when people are in common upright poses and appear reasonably well spaced out in the image. However, in many real images, that's not what people do. People often appear quite close to each other, e.g., with limbs linked or heads touching, and their poses are often not pedestrian-like. We propose an approach to detangle people in multi-person images. We formulate the task as a region assembly problem. Starting from a large set of overlapping regions from body part semantic segmentation and generic object proposals, our optimization approach reassembles those pieces together into multiple person instances. It enforces that the composed body part regions of each person instance obey constraints on relative sizes, mutual spatial relationships, foreground coverage, and exclusive label assignments when overlapping. Since optimal region assembly is a challenging combinatorial problem, we present a Lagrangian relaxation method to accelerate the lower bound estimation, thereby enabling a fast branch and bound solution for the global optimum. As output, our method produces a pixel-level map indicating both 1) the body part labels (arm, leg, torso, and head), and 2) which parts belong to which individual person. Our results on three challenging datasets show our method is robust to clutter, occlusion, and complex poses. It outperforms a variety of competing methods, including existing detector CRF methods and region CNN approaches. In addition, we demonstrate its impact on a proxemics recognition task, which demands a precise representation of "whose body part is where" in crowded images. | Most previous methods for human instance segmentation require a person detector. In @cite_37, a multi-part pedestrian detector is combined with an MCMC method for walking crowd segmentation. A pedestrian detector is used in @cite_44 @cite_0 @cite_9 to find person instances in bounding boxes before instance segmentation. 
Joint pose estimation and segmentation for single subjects has been proposed in @cite_2 @cite_33 @cite_17. Instance segmentation of multiple people in TV shows has been studied in @cite_21 @cite_34 using the detector-CRF scheme, which combines a person detector with a pixel-level CRF to achieve accurate results. In @cite_34, person detection bounding boxes are verified using face detections, and GrabCut is then used to refine the instance segmentation. In @cite_21, a pictorial structure method @cite_35 is used to detect candidate human instances, and sequential assignment fits the human instance masks to the image data. From the instance masks, detailed human segmentation and body part regions are further estimated using the CRF. | {
"cite_N": [
"@cite_35",
"@cite_37",
"@cite_33",
"@cite_9",
"@cite_21",
"@cite_44",
"@cite_0",
"@cite_2",
"@cite_34",
"@cite_17"
],
"mid": [
"2099305880",
"2101178587",
"2039800185",
"",
"2043217799",
"1969236304",
"",
"2145510178",
"2126854506",
"2120938053"
],
"abstract": [
"We describe a method for generating N-best configurations from part-based models, ensuring that they do not overlap according to some user-provided definition of overlap. We extend previous N-best algorithms from the speech community to incorporate non-maximal suppression cues, such that pixel-shifted copies of a single configuration are not returned. We use approximate algorithms that perform nearly identical to their exact counterparts, but are orders of magnitude faster. Our approach outperforms standard methods for generating multiple object configurations in an image. We use our method to generate multiple pose hypotheses for the problem of human pose estimation from video sequences. We present quantitative results that demonstrate that our framework significantly improves the accuracy of a state-of-the-art pose estimation algorithm.",
"The problem of segmenting individual humans in crowded situations from stationary video camera sequences is exacerbated by object inter-occlusion. We pose this problem as a \"model-based segmentation\" problem in which human shape models are used to interpret the foreground in a Bayesian framework. The solution is obtained by using an efficient Markov chain Monte Carlo (MCMC) method that uses domain knowledge as proposal probabilities. Knowledge of various aspects including human shape, human height, camera model, and image cues including human head candidates, foreground background separation are integrated in one theoretically sound framework. We show promising results and evaluations on some challenging data.",
"Combining information from the higher level and the lower level has long been recognized as an essential component in holistic image understanding. However, an efficient inference method for multi-level models remains an open problem. Moreover, modeling the complex relations within real world images often gives rise to energy terms that couple many variables in arbitrary ways. They make the inference problem even harder. In this paper, we construct an energy function over the pose of the human body and pixel-wise foreground background segmentation. The energy function incorporates terms both on the higher level, which models the human poses, and the lower level, which models the pixels. It also contains an intractable term that couples all body parts. We show how to optimize this energy in a principled way by relaxed dual decomposition, which proceeds by maximizing a concave lower bound on the energy function. Empirically, we show that our approach improves the state-of-the-art performance of human pose estimation on the Ramanan benchmark dataset.",
"",
"Our goal is to detect humans and estimate their 2D pose in single images. In particular, handling cases of partial visibility where some limbs may be occluded or one person is partially occluding another. Two standard, but disparate, approaches have developed in the field: the first is the part based approach for layout type problems, involving optimising an articulated pictorial structure, the second is the pixel based approach for image labelling involving optimising a random field graph defined on the image. Our novel contribution is a formulation for pose estimation which combines these two models in a principled way in one optimisation problem and thereby inherits the advantages of both of them. Inference on this joint model finds the set of instances of persons in an image, the location of their joints, and a pixel-wise body part labelling. We achieve near or state of the art results on standard human pose data sets, and demonstrate the correct estimation for cases of self-occlusion, person overlap and image truncation.",
"We present an automatic and efficient method to extract spatio-temporal human volumes from video, which combines top-down model-based and bottom-up appearance-based approaches. From the top-down perspective, our algorithm applies shape priors probabilistically to candidate image regions obtained by pedestrian detection, and provides accurate estimates of the human body areas which serve as important constraints for bottom-up processing. Temporal propagation of the identified region is performed with bottom-up cues in an efficient level-set framework, which takes advantage of the sparse top-down information that is available. Our formulation also optimizes the extracted human volume across frames through belief propagation and provides temporally coherent human regions. We demonstrate the ability of our method to extract human body regions efficiently and automatically from a large, challenging dataset collected from YouTube.",
"",
"We propose an on-line algorithm to extract a human by foreground background segmentation and estimate pose of the human from the videos captured by moving cameras. We claim that a virtuous cycle can be created by appropriate interactions between the two modules to solve individual problems. This joint estimation problem is divided into two sub problems, foreground background segmentation and pose tracking, which alternate iteratively for optimization, segmentation step generates foreground mask for human pose tracking, and human pose tracking step provides fore-ground response map for segmentation. The final solution is obtained when the iterative procedure converges. We evaluate our algorithm quantitatively and qualitatively in real videos involving various challenges, and present its outstanding performance compared to the state-of-the-art techniques for segmentation and pose estimation.",
"In this work, we propose a method for instance based human segmentation in images and videos, extending the recent detector-based conditional random field model of Ladicky et.al. Instance based human segmentation involves pixel level labeling of an image, partitioning it into distinct human instances and background. To achieve our goal, we add three new components to their framework. First, we include human partsbased detection potentials to take advantage of the structure present in human instances. Further, in order to generate a consistent segmentation from different human parts, we incorporate shape prior information, which biases the segmentation to characteristic overall human shapes. Also, we enhance the representative power of the energy function by adopting exemplar instance based matching terms, which helps our method to adapt easily to different human sizes and poses. Finally, we extensively evaluate our proposed method on the Buffy dataset with our new segmented ground truth images, and show a substantial improvement over existing CRF methods. These new annotations will be made available for future use as well.",
"This paper presents a novel algorithm for performing integrated segmentation and 3D pose estimation of a human body from multiple views. Unlike other state of the art methods which focus on either segmentation or pose estimation individually, our approach tackles these two tasks together. Our method works by optimizing a cost function based on a Conditional Random Field (CRF). This has the advantage that all information in the image (edges, background and foreground appearances), as well as the prior information on the shape and pose of the subject can be combined and used in a Bayesian framework. Optimizing such a cost function would have been computationally infeasible. However, our recent research in dynamic graph cuts allows this to be done much more efficiently than before. We demonstrate the efficacy of our approach on challenging motion sequences. Although we target the human pose inference problem in the paper, our method is completely generic and can be used to segment and infer the pose of any rigid, deformable or articulated object."
]
} |
1604.03880 | 2340604060 | Today's person detection methods work best when people are in common upright poses and appear reasonably well spaced out in the image. However, in many real images, that's not what people do. People often appear quite close to each other, e.g., with limbs linked or heads touching, and their poses are often not pedestrian-like. We propose an approach to detangle people in multi-person images. We formulate the task as a region assembly problem. Starting from a large set of overlapping regions from body part semantic segmentation and generic object proposals, our optimization approach reassembles those pieces together into multiple person instances. It enforces that the composed body part regions of each person instance obey constraints on relative sizes, mutual spatial relationships, foreground coverage, and exclusive label assignments when overlapping. Since optimal region assembly is a challenging combinatorial problem, we present a Lagrangian relaxation method to accelerate the lower bound estimation, thereby enabling a fast branch and bound solution for the global optimum. As output, our method produces a pixel-level map indicating both 1) the body part labels (arm, leg, torso, and head), and 2) which parts belong to which individual person. Our results on three challenging datasets show our method is robust to clutter, occlusion, and complex poses. It outperforms a variety of competing methods, including existing detector CRF methods and region CNN approaches. In addition, we demonstrate its impact on a proxemics recognition task, which demands a precise representation of "whose body part is where" in crowded images. | Part-voting approaches have been intensively studied for human or object instance segmentation. In @cite_4, boundary shape units vote for the centers of human subjects. In @cite_20 @cite_12, poselets vote for the centers of person instances. 
The poselets that cast the votes are then identified to obtain the object segmentation. In @cite_43, the object boundary is obtained by tracing back to the part activations used in the voting. As with the Hough transform, such a voting approach is best suited to targets with relatively fixed shapes. Our proposed method finds the optimal part assembly using articulation-invariant constraints instead of simply voting for the person center; it can therefore segment highly articulated human subjects. | {
"cite_N": [
"@cite_43",
"@cite_4",
"@cite_12",
"@cite_20"
],
"mid": [
"2144794286",
"1980468130",
"1864464506",
"2055349880"
],
"abstract": [
"We study the challenging problem of localizing and classifying category-specific object contours in real world images. For this purpose, we present a simple yet effective method for combining generic object detectors with bottom-up contours to identify object contours. We also provide a principled way of combining information from different part detectors and across categories. In order to study the problem and evaluate quantitatively our approach, we present a dataset of semantic exterior boundaries on more than 20, 000 object instances belonging to 20 categories, using the images from the VOC2011 PASCAL challenge [7].",
"We describe an approach for detecting and segmenting humans with extensive posture articulations in crowded video sequences. In our method we learn a set of mean posture clusters, and a codebook of local shape distributions for humans in various postures. Detection proceeds in two stages: first instances of the codebook entries cast votes for locations of humans in the video and their respective postures. Subsequently, consistent hypotheses are found as maxima within a voting space. The segmentation of humans in the scene is initialized by the corresponding posture clusters and contours are evolved to obtain precise and consistent segmentations. Our experimental results indicate that the framework provides a simple yet effective means for aggregating local and global shape-based cues. The proposed method is capable of detecting and segmenting humans in crowded scenes as they perform a diverse set of activities and undergo a wide range of articulations within different contexts.",
"Bourdev and Malik (ICCV 09) introduced a new notion of parts, poselets, constructed to be tightly clustered both in the configuration space of keypoints, as well as in the appearance space of image patches. In this paper we develop a new algorithm for detecting people using poselets. Unlike that work which used 3D annotations of keypoints, we use only 2D annotations which are much easier for naive human annotators. The main algorithmic contribution is in how we use the pattern of poselet activations. Individual poselet activations are noisy, but considering the spatial context of each can provide vital disambiguating information, just as object detection can be improved by considering the detection scores of nearby objects in the scene. This can be done by training a two-layer feed-forward network with weights set using a max margin technique. The refined poselet activations are then clustered into mutually consistent hypotheses where consistency is based on empirically determined spatial keypoint distributions. Finally, bounding boxes are predicted for each person hypothesis and shape masks are aligned to edges in the image to provide a segmentation. To the best of our knowledge, the resulting system is the current best performer on the task of people detection and segmentation with an average precision of 47.8 and 40.5 respectively on PASCAL VOC 2009.",
"In this paper, we propose techniques to make use of two complementary bottom-up features, image edges and texture patches, to guide top-down object segmentation towards higher precision. We build upon the part-based poselet detector, which can predict masks for numerous parts of an object. For this purpose we extend poselets to 19 other categories apart from person. We non-rigidly align these part detections to potential object contours in the image, both to increase the precision of the predicted object mask and to sort out false positives. We spatially aggregate object information via a variational smoothing technique while ensuring that object regions do not overlap. Finally, we propose to refine the segmentation based on self-similarity defined on small image patches. We obtain competitive results on the challenging Pascal VOC benchmark. On four classes we achieve the best numbers to-date."
]
} |
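The first abstract in the row above describes codebook-based detection: local shape entries cast votes for human locations, and consistent hypotheses emerge as maxima in a voting space. A minimal sketch of that voting step — the grid size, feature positions, and learned offsets below are invented toy values, not values from the paper:

```python
import numpy as np

# Hough-style voting: each matched codebook entry votes for an object
# center at (feature position + learned offset); detections are the
# maxima of the accumulated voting space.
votes = np.zeros((20, 20))

# (feature position, offset-to-center) pairs -- toy values standing in
# for matched codebook entries.
observations = [((4, 5), (1, 1)), ((6, 6), (-1, 0)), ((5, 7), (0, -1)),
                ((14, 3), (0, 2)), ((13, 6), (1, -1))]
for (fy, fx), (dy, dx) in observations:
    votes[fy + dy, fx + dx] += 1.0

# Strongest hypothesis = peak of the voting space.
peak = np.unravel_index(votes.argmax(), votes.shape)
```

Here the first three observations agree on one center, so the peak lands there; a real system votes with continuous offsets and weights and applies non-maximum suppression over scales.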
1604.03880 | 2340604060 | Today's person detection methods work best when people are in common upright poses and appear reasonably well spaced out in the image. However, in many real images, that's not what people do. People often appear quite close to each other, e.g., with limbs linked or heads touching, and their poses are often not pedestrian-like. We propose an approach to detangle people in multi-person images. We formulate the task as a region assembly problem. Starting from a large set of overlapping regions from body part semantic segmentation and generic object proposals, our optimization approach reassembles those pieces together into multiple person instances. It enforces that the composed body part regions of each person instance obey constraints on relative sizes, mutual spatial relationships, foreground coverage, and exclusive label assignments when overlapping. Since optimal region assembly is a challenging combinatorial problem, we present a Lagrangian relaxation method to accelerate the lower bound estimation, thereby enabling a fast branch and bound solution for the global optimum. As output, our method produces a pixel-level map indicating both 1) the body part labels (arm, leg, torso, and head), and 2) which parts belong to which individual person. Our results on three challenging datasets show our method is robust to clutter, occlusion, and complex poses. It outperforms a variety of competing methods, including existing detector CRF methods and region CNN approaches. In addition, we demonstrate its impact on a proxemics recognition task, which demands a precise representation of "whose body part is where" in crowded images. | Our method is also related to human region parsing, in that we segment and label each person's body part regions. Human region parsing has been mostly studied for analyzing body part regions of a single person @cite_36 @cite_6 @cite_15 @cite_29 . 
To handle multiple people, in @cite_9 a pedestrian detector is used to find the bounding box of each single person. Finding people with arbitrary poses using a bounding box detector is still a hard problem, whereas our method naturally handles multiple people with complex interactions and poses. Part segmentation has recently been used to improve semantic segmentation of animals in @cite_5 , but the pairwise CRF method cannot individuate multiple animal instances. In contrast, our method is able to individuate tangled people with complex poses. | {
"cite_N": [
"@cite_36",
"@cite_9",
"@cite_29",
"@cite_6",
"@cite_5",
"@cite_15"
],
"mid": [
"2162253476",
"",
"",
"2212333002",
"792160549",
"2018793343"
],
"abstract": [
"The goal of this work is to detect a human figure image and localize his joints and limbs along with their associated pixel masks. In this work we attempt to tackle this problem in a general setting. The dataset we use is a collection of sports news photographs of baseball players, varying dramatically in pose and clothing. The approach that we take is to use segmentation to guide our recognition algorithm to salient bits of the image. We use this segmentation approach to build limb and torso detectors, the outputs of which are assembled into human figures. We present quantitative results on torso localization, in addition to shortlisted full body configurations.",
"",
"",
"A scale, rotation and articulation invariant method is proposed to match human subjects in images. Different from the widely used pictorial structure scheme, the proposed method directly matches body parts to image regions which are obtained from object independent proposals and successively merged superpixels. Body part region matching is formulated as a graph matching problem. We globally assign a body part candidate to each node on the model graph so that the overall configuration satisfies the spatial layout of a human body plan, part regions have small overlap, and the part coverage follows proper area ratios. The proposed graph model is non-tree and contains high order hyper-edges. We propose an efficient method that finds global optimal solution to the matching problem with a sequence of branch and bound procedures. The experiments show that the proposed method is able to handle arbitrary scale, rotation, articulation and match human subjects in cluttered images.",
"Segmenting semantic objects from images and parsing them into their respective semantic parts are fundamental steps towards detailed object understanding in computer vision. In this paper, we propose a joint solution that tackles semantic object and part segmentation simultaneously, in which higher object-level context is provided to guide part segmentation, and more detailed part-level localization is utilized to refine object segmentation. Specifically, we first introduce the concept of semantic compositional parts (SCP) in which similar semantic parts are grouped and shared among different objects. A two-channel fully convolutional network (FCN) is then trained to provide the SCP and object potentials at each pixel. At the same time, a compact set of segments can also be obtained from the SCP predictions of the network. Given the potentials and the generated segments, in order to explore long-range context, we finally construct an efficient fully connected conditional random field (FCRF) to jointly predict the final object and part labels. Extensive evaluation on three different datasets shows that our approach can mutually enhance the performance of object and part segmentation, and outperforms the current state-of-the-art on both tasks.",
"Recognizing humans, estimating their pose and segmenting their body parts are key to high-level image understanding. Because humans are highly articulated, the range of deformations they undergo makes this task extremely challenging. Previous methods have focused largely on heuristics or pairwise part models in approaching this problem. We propose a bottom-up parsing of increasingly more complete partial body masks guided by a parse tree. At each level of the parsing process, we evaluate the partial body masks directly via shape matching with exemplars, without regard to how the parses are formed. The body is evaluated as a whole, not the sum of its constituent parses, unlike previous approaches. Multiple image segmentations are included at each of the levels of the parsing, to augment existing parses or to introduce ones. Our method yields both a pose estimate as well as a segmentation of the human. We demonstrate competitive results on this challenging task with relatively few training examples on a dataset of baseball players with wide pose variation. Our method is comparatively simple and could be easily extended to other objects."
]
} |
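The graph-matching abstract above finds a globally optimal part-to-region assignment with branch and bound, and the row's paper accelerates a similar combinatorial assembly with Lagrangian-relaxation bounds. A toy sketch of the branch-and-bound pattern — 3 parts, 4 candidate regions, random unary costs, all illustrative assumptions rather than the paper's actual objective:

```python
import numpy as np

rng = np.random.default_rng(5)

# Assign 3 body parts to 4 candidate regions, one region per part
# (exclusive assignment), minimizing total unary cost.
cost = rng.random((3, 4))
best = {"cost": np.inf, "assign": None}

def lower_bound(assigned):
    # Cost of the parts fixed so far plus the unconstrained minimum
    # for the remaining free parts (a valid optimistic bound).
    fixed = sum(cost[p, r] for p, r in enumerate(assigned))
    free = sum(cost[p].min() for p in range(len(assigned), 3))
    return fixed + free

def branch(assigned):
    if lower_bound(assigned) >= best["cost"]:
        return                          # prune: cannot beat the incumbent
    if len(assigned) == 3:
        best["cost"], best["assign"] = lower_bound(assigned), tuple(assigned)
        return
    for r in range(4):
        if r not in assigned:           # exclusive label assignment
            branch(assigned + [r])

branch([])
```

The bound lets any partial assignment that cannot beat the incumbent be discarded; the paper's Lagrangian relaxation plays the same role of cheap lower bounds, just for a much richer objective.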
1604.03880 | 2340604060 | Today's person detection methods work best when people are in common upright poses and appear reasonably well spaced out in the image. However, in many real images, that's not what people do. People often appear quite close to each other, e.g., with limbs linked or heads touching, and their poses are often not pedestrian-like. We propose an approach to detangle people in multi-person images. We formulate the task as a region assembly problem. Starting from a large set of overlapping regions from body part semantic segmentation and generic object proposals, our optimization approach reassembles those pieces together into multiple person instances. It enforces that the composed body part regions of each person instance obey constraints on relative sizes, mutual spatial relationships, foreground coverage, and exclusive label assignments when overlapping. Since optimal region assembly is a challenging combinatorial problem, we present a Lagrangian relaxation method to accelerate the lower bound estimation, thereby enabling a fast branch and bound solution for the global optimum. As output, our method produces a pixel-level map indicating both 1) the body part labels (arm, leg, torso, and head), and 2) which parts belong to which individual person. Our results on three challenging datasets show our method is robust to clutter, occlusion, and complex poses. It outperforms a variety of competing methods, including existing detector CRF methods and region CNN approaches. In addition, we demonstrate its impact on a proxemics recognition task, which demands a precise representation of "whose body part is where" in crowded images. | Our work is also distantly related to human pose estimation, which has been intensively studied on depth images @cite_10 and on color images using pictorial structure methods @cite_1 @cite_19 @cite_16 and CNNs @cite_13 @cite_18 @cite_30 . 
However, unlike our approach, human pose estimation methods usually do not directly give the instance and body part region segmentation. Our method produces multiple human segmentations without extracting human poses. | {
"cite_N": [
"@cite_13",
"@cite_30",
"@cite_18",
"@cite_1",
"@cite_19",
"@cite_16",
"@cite_10"
],
"mid": [
"2949447708",
"2113325037",
"2155394491",
"2030536784",
"1994529670",
"2131263044",
"2060280062"
],
"abstract": [
"We propose a new learning-based method for estimating 2D human pose from a single image, using Dual-Source Deep Convolutional Neural Networks (DS-CNN). Recently, many methods have been developed to estimate human pose by using pose priors that are estimated from physiologically inspired graphical models or learned from a holistic perspective. In this paper, we propose to integrate both the local (body) part appearance and the holistic view of each local part for more accurate human pose estimation. Specifically, the proposed DS-CNN takes a set of image patches (category-independent object proposals for training and multi-scale sliding windows for testing) as the input and then learns the appearance of each local part by considering their holistic views in the full body. Using DS-CNN, we achieve both joint detection, which determines whether an image patch contains a body joint, and joint localization, which finds the exact location of the joint in the image patch. Finally, we develop an algorithm to combine these joint detection localization results from all the image patches for estimating the human pose. The experimental results show the effectiveness of the proposed method by comparing to the state-of-the-art human-pose estimation methods based on pose priors that are estimated from physiologically inspired graphical models or learned from a holistic perspective.",
"We propose a method for human pose estimation based on Deep Neural Networks (DNNs). The pose estimation is formulated as a DNN-based regression problem towards body joints. We present a cascade of such DNN regressors which results in high precision pose estimates. The approach has the advantage of reasoning about pose in a holistic fashion and has a simple but yet powerful formulation which capitalizes on recent advances in Deep Learning. We present a detailed empirical analysis with state-of-art or better performance on four academic benchmarks of diverse real-world images.",
"We present a method for estimating articulated human pose from a single static image based on a graphical model with novel pairwise relations that make adaptive use of local image measurements. More precisely, we specify a graphical model for human pose which exploits the fact the local image measurements can be used both to detect parts (or joints) and also to predict the spatial relationships between them (Image Dependent Pairwise Relations). These spatial relationships are represented by a mixture model. We use Deep Convolutional Neural Networks (DCNNs) to learn conditional probabilities for the presence of parts and their spatial relationships within image patches. Hence our model combines the representational flexibility of graphical models with the efficiency and statistical power of DCNNs. Our method significantly outperforms the state of the art methods on the LSP and FLIC datasets and also performs very well on the Buffy dataset without any training.",
"In this paper we present a computationally efficient framework for part-based modeling and recognition of objects. Our work is motivated by the pictorial structure models introduced by Fischler and Elschlager. The basic idea is to represent an object by a collection of parts arranged in a deformable configuration. The appearance of each part is modeled separately, and the deformable configuration is represented by spring-like connections between pairs of parts. These models allow for qualitative descriptions of visual appearance, and are suitable for generic recognition problems. We address the problem of using pictorial structure models to find instances of an object in an image as well as the problem of learning an object model from training examples, presenting efficient algorithms in both cases. We demonstrate the techniques by learning models that represent faces and human bodies and using the resulting models to locate the corresponding objects in novel images.",
"We describe a method for human pose estimation in static images based on a novel representation of part models. Notably, we do not use articulated limb parts, but rather capture orientation with a mixture of templates for each part. We describe a general, flexible mixture model for capturing contextual co-occurrence relations between parts, augmenting standard spring models that encode spatial relations. We show that such relations can capture notions of local rigidity. When co-occurrence and spatial relations are tree-structured, our model can be efficiently optimized with dynamic programming. We present experimental results on standard benchmarks for pose estimation that indicate our approach is the state-of-the-art system for pose estimation, outperforming past work by 50% while being orders of magnitude faster.",
"Non-rigid object detection and articulated pose estimation are two related and challenging problems in computer vision. Numerous models have been proposed over the years and often address different special cases, such as pedestrian detection or upper body pose estimation in TV footage. This paper shows that such specialization may not be necessary, and proposes a generic approach based on the pictorial structures framework. We show that the right selection of components for both appearance and spatial modeling is crucial for general applicability and overall performance of the model. The appearance of body parts is modeled using densely sampled shape context descriptors and discriminatively trained AdaBoost classifiers. Furthermore, we interpret the normalized margin of each classifier as likelihood in a generative model. Non-Gaussian relationships between parts are represented as Gaussians in the coordinate system of the joint between parts. The marginal posterior of each part is inferred using belief propagation. We demonstrate that such a model is equally suitable for both detection and pose estimation tasks, outperforming the state of the art on three recently proposed datasets.",
"We propose a new method to quickly and accurately predict human pose---the 3D positions of body joints---from a single depth image, without depending on information from preceding frames. Our approach is strongly rooted in current object recognition strategies. By designing an intermediate representation in terms of body parts, the difficult pose estimation problem is transformed into a simpler per-pixel classification problem, for which efficient machine learning techniques exist. By using computer graphics to synthesize a very large dataset of training image pairs, one can train a classifier that estimates body part labels from test images invariant to pose, body shape, clothing, and other irrelevances. Finally, we generate confidence-scored 3D proposals of several body joints by reprojecting the classification result and finding local modes. The system runs in under 5ms on the Xbox 360. Our evaluation shows high accuracy on both synthetic and real test sets, and investigates the effect of several training parameters. We achieve state-of-the-art accuracy in our comparison with related work and demonstrate improved generalization over exact whole-skeleton nearest neighbor matching."
]
} |
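Several of the abstracts above rest on the pictorial-structures model: unary appearance costs per part plus spring costs between connected parts, solved exactly on tree-structured models by dynamic programming. A 1-D toy version — a chain of 3 parts over 10 candidate locations with invented costs and an assumed preferred offset:

```python
import numpy as np

rng = np.random.default_rng(4)

L = 10                       # candidate locations on a 1-D strip (toy)
unary = rng.random((3, L))   # appearance cost: part x location
spring = lambda a, b: 0.5 * (a - b - 2) ** 2   # spring prefers an offset of 2

# Dynamic programming from the leaf (part 2) up the chain to the root (part 0).
cost2 = unary[2]
cost1 = unary[1] + np.array([min(cost2[b] + spring(a, b) for b in range(L))
                             for a in range(L)])
cost0 = unary[0] + np.array([min(cost1[b] + spring(a, b) for b in range(L))
                             for a in range(L)])
best_root = int(cost0.argmin())
```

On a chain this costs O(parts x L^2) (or O(parts x L) with distance transforms) rather than the O(L^parts) of brute-force enumeration, which is why tree-structured models dominate this line of work.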
1604.03640 | 2337199865 | We discuss relations between Residual Networks (ResNet), Recurrent Neural Networks (RNNs) and the primate visual cortex. We begin with the observation that a shallow RNN is exactly equivalent to a very deep ResNet with weight sharing among the layers. A direct implementation of such a RNN, although having orders of magnitude fewer parameters, leads to a performance similar to the corresponding ResNet. We propose 1) a generalization of both RNN and ResNet architectures and 2) the conjecture that a class of moderately deep RNNs is a biologically-plausible model of the ventral stream in visual cortex. We demonstrate the effectiveness of the architectures by testing them on the CIFAR-10 dataset. | Our final model is deep and similar to a stacked RNN @cite_32 @cite_9 @cite_11 with several main differences: 1. our model has feedback transitions between hidden layers and self-transition from each hidden layer to itself. 2. our model has identity shortcut mappings inspired by residual learning. 3. our transition functions are deep and convolutional. | {
"cite_N": [
"@cite_9",
"@cite_32",
"@cite_11"
],
"mid": [
"",
"2036317923",
"1810943226"
],
"abstract": [
"",
"Previous neural network learning algorithms for sequence processing are computationally expensive and perform poorly when it comes to long time lags. This paper first introduces a simple principle for reducing the descriptions of event sequences without loss of information. A consequence of this principle is that only unexpected inputs can be relevant. This insight leads to the construction of neural architectures that learn to “divide and conquer” by recursively decomposing sequences. I describe two architectures. The first functions as a self-organizing multilevel hierarchy of recurrent networks. The second, involving only two recurrent networks, tries to collapse a multilevel predictor hierarchy into a single recurrent net. Experiments show that the system can require less computation per time step and many fewer training sequences than conventional training algorithms for recurrent nets.",
"This paper shows how Long Short-term Memory recurrent neural networks can be used to generate complex sequences with long-range structure, simply by predicting one data point at a time. The approach is demonstrated for text (where the data are discrete) and online handwriting (where the data are real-valued). It is then extended to handwriting synthesis by allowing the network to condition its predictions on a text sequence. The resulting system is able to generate highly realistic cursive handwriting in a wide variety of styles."
]
} |
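The row above turns on the observation that a shallow RNN with an identity self-transition is exactly a very deep ResNet whose layers share weights. Written as update rules the equivalence is immediate; the dimensions, depth, and random weights below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

# One residual transform f(h) = W2 @ relu(W1 @ h), with the SAME weights
# reused at every depth / time step.
W1 = rng.normal(scale=0.1, size=(8, 8))
W2 = rng.normal(scale=0.1, size=(8, 8))

def f(h):
    return W2 @ np.maximum(W1 @ h, 0.0)

x = rng.normal(size=8)
T = 10

# "Very deep ResNet with weight sharing": T residual layers, identical weights.
h_resnet = x.copy()
for _ in range(T):
    h_resnet = h_resnet + f(h_resnet)

# "Shallow RNN" with an identity self-transition, run for T time steps.
h_rnn = x.copy()
for _ in range(T):
    h_rnn = h_rnn + f(h_rnn)

# Same update rule, so the two trajectories coincide exactly.
assert np.allclose(h_resnet, h_rnn)
```

The practical difference is parameter count: the RNN stores one copy of f, while an unshared ResNet of the same depth stores T copies.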
1604.03640 | 2337199865 | We discuss relations between Residual Networks (ResNet), Recurrent Neural Networks (RNNs) and the primate visual cortex. We begin with the observation that a shallow RNN is exactly equivalent to a very deep ResNet with weight sharing among the layers. A direct implementation of such a RNN, although having orders of magnitude fewer parameters, leads to a performance similar to the corresponding ResNet. We propose 1) a generalization of both RNN and ResNet architectures and 2) the conjecture that a class of moderately deep RNNs is a biologically-plausible model of the ventral stream in visual cortex. We demonstrate the effectiveness of the architectures by testing them on the CIFAR-10 dataset. | As suggested by @cite_16 , the term depth in RNN could also refer to input-to-hidden, hidden-to-hidden or hidden-to-output connections. Our model is deep in all of these senses. See Section . | {
"cite_N": [
"@cite_16"
],
"mid": [
"1889624880"
],
"abstract": [
"In this paper, we explore different ways to extend a recurrent neural network (RNN) to a RNN. We start by arguing that the concept of depth in an RNN is not as clear as it is in feedforward neural networks. By carefully analyzing and understanding the architecture of an RNN, however, we find three points of an RNN which may be made deeper; (1) input-to-hidden function, (2) hidden-to-hidden transition and (3) hidden-to-output function. Based on this observation, we propose two novel architectures of a deep RNN which are orthogonal to an earlier attempt of stacking multiple recurrent layers to build a deep RNN (Schmidhuber, 1992; El Hihi and Bengio, 1996). We provide an alternative interpretation of these deep RNNs using a novel framework based on neural operators. The proposed deep RNNs are empirically evaluated on the tasks of polyphonic music prediction and language modeling. The experimental result supports our claim that the proposed deep RNNs benefit from the depth and outperform the conventional, shallow RNNs."
]
} |
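The abstract above distinguishes three places an RNN can be made deep: the input-to-hidden function, the hidden-to-hidden transition, and the hidden-to-output function. A sketch in which a small ReLU stack stands in for each "deep" sub-function — all layer sizes and weights are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def mlp(sizes):
    # A stack of dense ReLU layers standing in for a "deep" sub-function.
    Ws = [rng.normal(scale=0.1, size=(o, i)) for i, o in zip(sizes, sizes[1:])]
    def apply(v):
        for W in Ws:
            v = np.maximum(W @ v, 0.0)
        return v
    return apply

f_in = mlp([4, 16, 8])    # (1) deep input-to-hidden
f_hh = mlp([16, 16, 8])   # (2) deep hidden-to-hidden transition
f_out = mlp([8, 16, 3])   # (3) deep hidden-to-output

h = np.zeros(8)
xs = rng.normal(size=(5, 4))  # a length-5 sequence of 4-D inputs
for x in xs:
    h = f_hh(np.concatenate([h, f_in(x)]))  # transition acts on [h; phi(x)]
y = f_out(h)
```

A conventional "deep" RNN in the stacking sense would instead pile several recurrent layers on top of each other; the point of the cited taxonomy is that these three axes are orthogonal to stacking.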
1604.03640 | 2337199865 | We discuss relations between Residual Networks (ResNet), Recurrent Neural Networks (RNNs) and the primate visual cortex. We begin with the observation that a shallow RNN is exactly equivalent to a very deep ResNet with weight sharing among the layers. A direct implementation of such a RNN, although having orders of magnitude fewer parameters, leads to a performance similar to the corresponding ResNet. We propose 1) a generalization of both RNN and ResNet architectures and 2) the conjecture that a class of moderately deep RNNs is a biologically-plausible model of the ventral stream in visual cortex. We demonstrate the effectiveness of the architectures by testing them on the CIFAR-10 dataset. | When unfolding RNN into a feedforward network, the weights of many layers are tied. This is reminiscent of Recursive Neural Networks (Recursive NN), first proposed by @cite_12 . Recursive NN are characterized by applying the same operations recursively on a structure. The convolutional version was first studied by @cite_8 . Subsequent related work includes @cite_31 and @cite_25 . One characteristic that distinguishes our model and residual learning from Recursive NN and convolutional recurrent NN is whether there are identity shortcut mappings. This discrepancy seems to account for the superior performance of residual learning and of our model over the latter. | {
"cite_N": [
"@cite_31",
"@cite_25",
"@cite_12",
"@cite_8"
],
"mid": [
"2951277909",
"1934184906",
"1423339008",
"2167343029"
],
"abstract": [
"Scene parsing is a technique that consists of giving a label to all pixels in an image according to the class they belong to. To ensure a good visual coherence and a high class accuracy, it is essential for a scene parser to capture image long range dependencies. In a feed-forward architecture, this can be simply achieved by considering a sufficiently large input context patch, around each pixel to be labeled. We propose an approach consisting of a recurrent convolutional neural network which allows us to consider a large input context, while limiting the capacity of the model. Contrary to most standard approaches, our method does not rely on any segmentation methods, nor any task-specific features. The system is trained in an end-to-end manner over raw pixels, and models complex spatial dependencies with low inference cost. As the context size increases with the built-in recurrence, the system identifies and corrects its own errors. Our approach yields state-of-the-art performance on both the Stanford Background Dataset and the SIFT Flow Dataset, while remaining very fast at test time.",
"In recent years, the convolutional neural network (CNN) has achieved great success in many computer vision tasks. Partially inspired by neuroscience, CNN shares many properties with the visual system of the brain. A prominent difference is that CNN is typically a feed-forward architecture while in the visual system recurrent connections are abundant. Inspired by this fact, we propose a recurrent CNN (RCNN) for object recognition by incorporating recurrent connections into each convolutional layer. Though the input is static, the activities of RCNN units evolve over time so that the activity of each unit is modulated by the activities of its neighboring units. This property enhances the ability of the model to integrate the context information, which is important for object recognition. Like other recurrent neural networks, unfolding the RCNN through time can result in an arbitrarily deep network with a fixed number of parameters. Furthermore, the unfolded network has multiple paths, which can facilitate the learning process. The model is tested on four benchmark object recognition datasets: CIFAR-10, CIFAR-100, MNIST and SVHN. With fewer trainable parameters, RCNN outperforms the state-of-the-art models on all of these datasets. Increasing the number of parameters leads to even better performance. These results demonstrate the advantage of the recurrent structure over purely feed-forward structure for object recognition.",
"Recursive structure is commonly found in the inputs of different modalities such as natural scene images or natural language sentences. Discovering this recursive structure helps us to not only identify the units that an image or sentence contains but also how they interact to form a whole. We introduce a max-margin structure prediction architecture based on recursive neural networks that can successfully recover such structure both in complex scene images as well as sentences. The same algorithm can be used both to provide a competitive syntactic parser for natural language sentences from the Penn Treebank and to outperform alternative approaches for semantic scene segmentation, annotation and classification. For segmentation and annotation our algorithm obtains a new level of state-of-the-art performance on the Stanford background dataset (78.1%). The features from the image parse tree outperform Gist descriptors for scene classification by 4%.",
"A key challenge in designing convolutional network models is sizing them appropriately. Many factors are involved in these decisions, including number of layers, feature maps, kernel sizes, etc. Complicating this further is the fact that each of these influence not only the numbers and dimensions of the activation units, but also the total number of parameters. In this paper we focus on assessing the independent contributions of three of these linked variables: The numbers of layers, feature maps, and parameters. To accomplish this, we employ a recursive convolutional network whose weights are tied between layers; this allows us to vary each of the three factors in a controlled setting. We find that while increasing the numbers of layers and parameters each have clear benefit, the number of feature maps (and hence dimensionality of the representation) appears ancillary, and finds most of its benefit through the introduction of more weights. Our results (i) empirically confirm the notion that adding layers alone increases computational power, within the context of convolutional layers, and (ii) suggest that precise sizing of convolutional feature map dimensions is itself of little concern; more attention should be paid to the number of parameters in these layers instead."
]
} |
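The distinguishing feature named in the row above — identity shortcut mappings — changes the iterated map from h ← f(h) (recursive / tied-weight recurrent nets) to h ← h + f(h) (residual form). A toy iteration showing one intuition for why the shortcut matters when f is contractive; the weights and sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)
W = rng.normal(scale=0.05, size=(6, 6))

def f(h):
    # A small-norm (contractive) transform shared across all steps.
    return np.tanh(W @ h)

h0 = rng.normal(size=6)

# Recursive / tied-weight nets: the same transform applied repeatedly.
h_rec = h0.copy()
for _ in range(8):
    h_rec = f(h_rec)

# Residual form: an identity shortcut carries the state through every step.
h_res = h0.copy()
for _ in range(8):
    h_res = h_res + f(h_res)
```

Repeated h ← f(h) with a contractive f washes the input out toward zero, while h ← h + f(h) keeps the input present in every state — a sketch of one explanation for the performance gap the row attributes to shortcuts.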
1604.03628 | 2337374958 | In this paper, we propose a recurrent framework for Joint Unsupervised LEarning (JULE) of deep representations and image clusters. In our framework, successive operations in a clustering algorithm are expressed as steps in a recurrent process, stacked on top of representations output by a Convolutional Neural Network (CNN). During training, image clusters and representations are updated jointly: image clustering is conducted in the forward pass, while representation learning in the backward pass. Our key idea behind this framework is that good representations are beneficial to image clustering and clustering results provide supervisory signals to representation learning. By integrating two processes into a single model with a unified weighted triplet loss and optimizing it end-to-end, we can obtain not only more powerful representations, but also more precise image clusters. Extensive experiments show that our method outperforms the state-of-the-art on image clustering across a variety of image datasets. Moreover, the learned representations generalize well when transferred to other tasks. | A number of works have explored combining image clustering with representation learning. In @cite_2 , the authors proposed to learn a non-linear embedding of the undirected affinity graph using stacked autoencoder, and then ran K-means in the embedding space to obtain clusters. In @cite_13 , a deep semi-NMF model was used to factorize the input into multiple stacking factors which are initialized and updated layer by layer. Using the representations on the top layer, K-means was implemented to get the final results. Unlike our work, they do not jointly optimize for the representation learning and clustering. | {
"cite_N": [
"@cite_13",
"@cite_2"
],
"mid": [
"2103958034",
"2405933695"
],
"abstract": [
"Semi-NMF is a matrix factorization technique that learns a low-dimensional representation of a dataset that lends itself to a clustering interpretation. It is possible that the mapping between this new representation and our original features contains rather complex hierarchical information with implicit lower-level hidden attributes, that classical one level clustering methodologies can not interpret. In this work we propose a novel model, Deep Semi-NMF, that is able to learn such hidden representations that allow themselves to an interpretation of clustering according to different, unknown attributes of a given dataset. We show that by doing so, our model is able to learn low-dimensional representations that are better suited for clustering, outperforming Semi-NMF, but also other NMF variants.",
"Recently deep learning has been successfully adopted in many applications such as speech recognition and image classification. In this work, we explore the possibility of employing deep learning in graph clustering. We propose a simple method, which first learns a nonlinear embedding of the original graph by stacked autoencoder, and then runs k-means algorithm on the embedding to obtain clustering result. We show that this simple method has solid theoretical foundation, due to the similarity between autoencoder and spectral clustering in terms of what they actually optimize. Then, we demonstrate that the proposed method is more efficient and flexible than spectral clustering. First, the computational complexity of autoencoder is much lower than spectral clustering: the former can be linear to the number of nodes in a sparse graph while the latter is super quadratic due to eigenvalue decomposition. Second, when additional sparsity constraint is imposed, we can simply employ the sparse autoencoder developed in the literature of deep learning; however, it is nonstraightforward to implement a sparse spectral method. The experimental results on various graph datasets show that the proposed method significantly outperforms conventional spectral clustering, which clearly indicates the effectiveness of deep learning in graph clustering."
]
} |
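The stacked-autoencoder-plus-k-means pipeline described in the row above admits a minimal sketch. As a stand-in for a stacked autoencoder, this uses a single-hidden-layer MLP trained to reconstruct a graph's adjacency rows, then clusters the hidden activations; the function name and parameters are hypothetical, not from the cited paper.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

def embed_and_cluster(adj, dim, k, seed=0):
    """Learn a nonlinear embedding of a graph's adjacency rows with a
    one-hidden-layer autoencoder surrogate, then run k-means on it."""
    ae = MLPRegressor(hidden_layer_sizes=(dim,), activation="relu",
                      max_iter=3000, random_state=seed)
    ae.fit(adj, adj)  # train the network to reconstruct its input
    # Forward pass up to the hidden layer yields the learned embedding.
    emb = np.maximum(0.0, adj @ ae.coefs_[0] + ae.intercepts_[0])
    return KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(emb)
```

As the cited abstract notes, the appeal over spectral clustering is cost: this embedding is linear in the number of edges for sparse graphs, whereas eigendecomposition is super-quadratic.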
1604.03628 | 2337374958 | In this paper, we propose a recurrent framework for Joint Unsupervised LEarning (JULE) of deep representations and image clusters. In our framework, successive operations in a clustering algorithm are expressed as steps in a recurrent process, stacked on top of representations output by a Convolutional Neural Network (CNN). During training, image clusters and representations are updated jointly: image clustering is conducted in the forward pass, while representation learning in the backward pass. Our key idea behind this framework is that good representations are beneficial to image clustering and clustering results provide supervisory signals to representation learning. By integrating two processes into a single model with a unified weighted triplet loss and optimizing it end-to-end, we can obtain not only more powerful representations, but also more precise image clusters. Extensive experiments show that our method outperforms the state-of-the-art on image clustering across a variety of image datasets. Moreover, the learned representations generalize well when transferred to other tasks. | To connect image clustering and representation learning more closely, @cite_53 conducted image clustering and codebook learning iteratively. However, they learned the codebook over SIFT features @cite_59, and did not learn deep representations. Instead of using hand-crafted features, Chen @cite_60 used a DBN to learn representations, and then conducted a nonparametric maximum margin clustering upon the outputs of the DBN. Afterwards, they fine-tuned the top layer of the DBN based on clustering results. A more recent work on jointly optimizing the two tasks is found in @cite_55, where the authors trained a task-specific deep architecture for clustering. The deep architecture is composed of sparse coding modules which can be jointly trained through back propagation from a cluster-oriented loss.
However, they used sparse coding to extract representations for images, while we use a CNN. Instead of fixing the number of clusters to the number of categories and predicting labels from softmax outputs, we predict the labels using agglomerative clustering on the learned representations. In our experiments we show that our approach outperforms @cite_55. | {
"cite_N": [
"@cite_55",
"@cite_53",
"@cite_59",
"@cite_60"
],
"mid": [
"1949807611",
"2237964723",
"",
"1491343858"
],
"abstract": [
"While sparse coding-based clustering methods have shown to be successful, their bottlenecks in both efficiency and scalability limit the practical usage. In recent years, deep learning has been proved to be a highly effective, efficient and scalable feature learning tool. In this paper, we propose to emulate the sparse coding-based clustering pipeline in the context of deep learning, leading to a carefully crafted deep model benefiting from both. A feed-forward network structure, named TAGnet, is constructed based on a graph-regularized sparse coding algorithm. It is then trained with task-specific loss functions from end to end. We discover that connecting deep learning to sparse coding benefits not only the model performance, but also its initialization and interpretation. Moreover, by introducing auxiliary clustering tasks to the intermediate feature hierarchy, we formulate DTAGnet and obtain a further performance boost. Extensive experiments demonstrate that the proposed model gains remarkable margins over several state-of-the-art methods.",
"Image clustering and visual codebook learning are two fundamental problems in computer vision and they are tightly related. On one hand, a good codebook can generate effective feature representations which largely affect clustering performance. On the other hand, class labels obtained from image clustering can serve as supervised information to guide codebook learning. Traditionally, these two processes are conducted separately and their correlation is generally ignored. In this paper, we propose a Double Layer Gaussian Mixture Model (DLGMM) to simultaneously perform image clustering and codebook learning. In DLGMM, two tasks are seamlessly coupled and can mutually promote each other. Cluster labels and codebook are jointly estimated to achieve the overall best performance. To incorporate the spatial coherence between neighboring visual patches, we propose a Spatially Coherent DL-GMM which uses a Markov Random Field to encourage neighboring patches to share the same visual word label. We use variational inference to approximate the posterior of latent variables and learn model parameters. Experiments on two datasets demonstrate the effectiveness of two models.",
"",
"Clustering is an essential problem in machine learning and data mining. One vital factor that impacts clustering performance is how to learn or design the data representation (or features). Fortunately, recent advances in deep learning can learn unsupervised features effectively, and have yielded state of the art performance in many classification problems, such as character recognition, object recognition and document categorization. However, little attention has been paid to the potential of deep learning for unsupervised clustering problems. In this paper, we propose a deep belief network with nonparametric clustering. As an unsupervised method, our model first leverages the advantages of deep learning for feature representation and dimension reduction. Then, it performs nonparametric clustering under a maximum margin framework -- a discriminative clustering model and can be trained online efficiently in the code space. Lastly model parameters are refined in the deep belief network. Thus, this model can learn features for clustering and infer model complexity in an unified framework. The experimental results show the advantage of our approach over competitive baselines."
]
} |
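The forward/backward alternation in the JULE row above can be caricatured in a few lines: agglomerative clustering on the current features plays the forward pass, and a simple centroid-pull update stands in for CNN backpropagation. All names are hypothetical, and this is a toy surrogate for the actual end-to-end training, not the paper's method.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def joint_step(feats, n_clusters, lr=0.5):
    """One round of joint clustering and representation refinement:
    cluster in the forward pass, then pull each point toward its
    cluster centroid as a crude stand-in for representation learning."""
    labels = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(feats)
    refined = feats.copy()
    for k in range(n_clusters):
        idx = labels == k
        centroid = feats[idx].mean(axis=0)
        refined[idx] += lr * (centroid - feats[idx])  # surrogate for backprop
    return refined, labels
```

Iterating this step mirrors the paper's intuition: better representations sharpen the clusters, and cluster assignments in turn supervise the representation update.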
1604.03468 | 1475741780 | In this paper we address planning problems in high-dimensional hybrid configuration spaces, with a particular focus on manipulation planning problems involving many objects. We present the hybrid backward-forward (HBF) planning algorithm that uses a backward identification of constraints to direct the sampling of the infinite action space in a forward search from the initial state towards a goal configuration. The resulting planner is probabilistically complete and can effectively construct long manipulation plans requiring both prehensile and nonprehensile actions in cluttered environments. | In manipulation planning, the objective is for the robot to operate on objects in the world. The first treatments considered a continuous configuration space of both object placements and robot configurations, but discrete grasps @cite_26 @cite_3 @cite_0 ; they were more recently extended to selecting from a continuous set of grasps and formalized in terms of a manipulation graph @cite_24 @cite_23 . The approach was extended to complex problems with a single movable object, possibly requiring multiple regrasps, by using probabilistic roadmaps and a search decomposition in which a high-level sequence of transit and transfer paths is first identified, and then motion planning attempts to achieve it @cite_22 . | {
"cite_N": [
"@cite_26",
"@cite_22",
"@cite_3",
"@cite_0",
"@cite_24",
"@cite_23"
],
"mid": [
"2141825555",
"2044995998",
"2148933384",
"2044725657",
"1558116630",
""
],
"abstract": [
"The class of problems that involve finding where to place or how to move a solid object in the presence of obstacles is discussed. The solution to this class of problems is essential to the automatic planning of manipulator transfer movements, i.e., the motions to grasp a part and place it at some destination. For example, planning transfer movements requires the ability to plan paths for the manipulator that avoid collisions with objects in the workspace and the ability to choose safe grasp points on objects. The approach to these problems described here is based on a method of computing an explicit representation of the manipulator configurations that would bring about a collision.",
"This paper deals with motion planning for robots manipulating movable objects among obstacles. We propose a general manipulation planning approach capable of addressing continuous sets for modeling both the possible grasps and the stable placements of the movable object, rather than discrete sets generally assumed by the previous approaches. The proposed algorithm relies on a topological property that characterizes the existence of solutions in the subspace of configurations where the robot grasps the object placed at a stable position. It allows us to devise a manipulation planner that captures in a probabilistic roadmap the connectivity of sub-dimensional manifolds of the composite configuration space. Experiments conducted with the planner in simulated environments demonstrate its efficacy to solve complex manipulation problems.",
"We describe a robot system capable of locating a part in an unstructured pile of objects, choose a grasp on the part, plan a motion to reach the part safely, and plan a motion to place the part at a commanded position. The system requires as input a polyhedral world model including models of the part to be manipulated, the robot arm, and any other fixed objects in the environment. In addition, the system builds a depth map, using structured light, of the area where the part is to be found initially. Any other objects present in that area do not have to be modeled.",
"Motion planning algorithms have generally dealt with motion in a static environment, or more recently, with motion in an environment that changes in a known manner. We consider the problem of finding collision-free motions in a changeable environment. That is, we wish to find a motion for an object where the object is permitted to move some of the obstacles. In such an environment the final positions of the movable obstacles may or may not be part of the goal. In the case where the final positions of the obstacles are unspecified, the motion planning problem is shown to be NP-hard. An algorithm that runs in O ( n 2 log n ) time after O ( n 3 log 2 n ) preprocessing time is presented when the object to be moved is polygonal and there is only one movable polygonal obstacle in a polygonal environment of complexity O ( n ). In the case where the final positions of the obstacles are specified the general problem is shown to be PSPACE-hard and an algorithm is given when there is one movable obstacle with the same preprocessing time as the previous algorithm but with O ( n 2 ) query time.",
"This paper presents a new geometrical formulation of the manipulation task planning problem in robotics. The problem is shown to be a constrained instance of the coordinated motion planning problem for multiple moving bodies. The constraints are related to the placement and the motion of objects, and can be expressed geometrically. We give a general paradigm for building Manipulation Task Planners based on the proposed formulation. A manipulation task appears as a path in the configuration space of the robot and all movable objects. A manipulation path is a sequence of constrained paths: transit-paths , where the robot moves \"alone'', and transfer-paths , where the robot holds an object. The approach consists in building a Manipulation Graph that models the connectivity between certain regions in the global configuration space by transit-paths and transfer-paths. This approach is then applied to the case of a finite number of object placements and grasps. The nodes in the Manipulation Graph correspond to well identified configurations and the edges correspond to paths built from a series of configuration space slices. An implemented system is presented and discussed.",
""
]
} |
1604.03468 | 1475741780 | In this paper we address planning problems in high-dimensional hybrid configuration spaces, with a particular focus on manipulation planning problems involving many objects. We present the hybrid backward-forward (HBF) planning algorithm that uses a backward identification of constraints to direct the sampling of the infinite action space in a forward search from the initial state towards a goal configuration. The resulting planner is probabilistically complete and can effectively construct long manipulation plans requiring both prehensile and nonprehensile actions in cluttered environments. | Many recent approaches to manipulation planning integrate discrete task planning and continuous motion planning algorithms; they pre-discretize grasps and placements so that a discrete task planner can produce candidate high-level plans, then use a general-purpose robot motion planner to verify the feasibility of candidate task plans on a robot @cite_2 @cite_13 @cite_1 @cite_6 . Some other systems combine the task planner and motion planner more intimately; although they generally also rely on discretization, the sampling is generally driven by the task @cite_5 @cite_4 @cite_17 @cite_21 @cite_18 @cite_11 . | {
"cite_N": [
"@cite_18",
"@cite_11",
"@cite_4",
"@cite_21",
"@cite_1",
"@cite_6",
"@cite_2",
"@cite_5",
"@cite_13",
"@cite_17"
],
"mid": [
"2141841102",
"1852202999",
"2132467020",
"2050718309",
"2143602574",
"2057408106",
"1494534478",
"2075577325",
"",
"2402990501"
],
"abstract": [
"The need for combined task and motion planning in robotics is well understood. Solutions to this problem have typically relied on special purpose, integrated implementations of task planning and motion planning algorithms. We propose a new approach that uses off-the-shelf task planners and motion planners and makes no assumptions about their implementation. Doing so enables our approach to directly build on, and benefit from, the vast literature and latest advances in task planning and motion planning. It uses a novel representational abstraction and requires only that failures in computing a motion plan for a high-level action be identifiable and expressible in the form of logical predicates at the task level. We evaluate the approach and illustrate its robustness through a number of experiments using a state-of-the-art robotics simulator and a PR2 robot. These experiments show the system accomplishing a diverse set of challenging tasks such as taking advantage of a tray when laying out a table for dinner and picking objects from cluttered environments where other objects need to be re-arranged before the target object can be reached.",
"Manipulation problems involving many objects present substantial challenges for motion planning algorithms due to the high dimensionality and multi-modality of the search space. Symbolic task planners can efficiently construct plans involving many entities but cannot incorporate the constraints from geometry and kinematics. In this paper, we show how to extend the heuristic ideas from one of the most successful symbolic planners in recent years, the FastForward (FF) planner, to motion planning, and to compute it efficiently. We use a multi-query roadmap structure that can be conditionalized to model different placements of movable objects. The resulting tightly integrated planner is simple and performs efficiently in a collection of tasks involving manipulation of many objects.",
"To compute collision-free and dynamically-feasibile trajectories that satisfy high-level specifications given in a planning-domain definition language, this paper proposes to combine sampling-based motion planning with symbolic action planning. The proposed approach, Sampling-based Motion and Symbolic Action Planner (SMAP), leverages from sampling-based motion planning the underlying idea of searching for a solution trajectory by selectively sampling and exploring the continuous space of collision-free and dynamically-feasible motions. Drawing from AI, SMAP uses symbolic action planning to identify actions and regions of the continuous space that sampling-based motion planning can further explore to significantly advance the search. The planning layers interact with each-other through estimates on the utility of each action, which are computed based on information gathered during the search. Simulation experiments with dynamical models of vehicles carrying out tasks given by high-level STRIPS specifications provide promising initial validation, showing that SMAP efficiently solves challenging problems.",
"In a typical Human-Robot Interaction (HRI) scenario, the robot needs to perform various tasks for the human, hence should take into account human oriented constraints. In this context it is not sufficient that the robot selects grasp and placement of the object from the stability point of view only. Motivated from human behavioral psychology, in this paper we emphasize on the mutually depended nature of grasp and placement selections, which is further constrained by the task, the environment and the human's perspective. We will explore essential human oriented constraints on grasp and placement selections and present a framework to incorporate them in synthesizing key configurations of planning basic interactive manipulation tasks.",
"We present a formal framework that combines high-level representation and causality-based reasoning with low-level geometric reasoning and motion planning. The frame-work features bilateral interaction between task and motion planning, and embeds geometric reasoning in causal reasoning, thanks to several advantages inherited from its underlying components. In particular, our choice of using a causality-based high-level formalism for describing action domains allows us to represent ramifications and state transition constraints, and embed in such formal domain descriptions externally defined functions implemented in some programming language (e.g., C++). Moreover, given such a domain description, the causal reasoner based on this formalism (i.e., the Causal Calculator) allows us to compute optimal solutions (e.g., shortest plans) for elaborate planning prediction problems with temporal constraints. Utilizing these features of high-level representation and reasoning, we can combine causal reasoning, motion planning and geometric planning to find feasible kinematic solutions to task-level problems. In our framework, the causal reasoner guides the motion planner by finding an optimal task-plan; if there is no feasible kinematic solution for that task-plan then the motion planner guides the causal reasoner by modifying the planning problem with new temporal constraints. Furthermore, while computing a task-plan, the causal reasoner takes into account geometric models and kinematic relations by means of external predicates implemented for geometric reasoning (e.g., to check some collisions); in that sense the geometric reasoner guides the causal reasoner to find feasible kinematic solutions. We illustrate an application of this framework to robotic manipulation, with two pantograph robots on a complex assembly task that requires concurrent execution of actions. A short video of this application accompanies the paper.",
"The combination of task and motion planning presents us with a new problem that we call geometric backtracking. This problem arises from the fact that a single symbolic state or action may be geometrically instantiated in infinitely many ways. When a symbolic action cannot be geometrically validated, we may need to backtrack in the space of geometric configurations, which greatly increases the complexity of the whole planning process. In this paper, we address this problem using intervals to represent geometric configurations, and constraint propagation techniques to shrink these intervals according to the geometric constraints of the problem. After propagation, either (i) the intervals are shrunk, thus reducing the search space in which geometric backtracking may occur, or (ii) the constraints are inconsistent, indicating the non-feasibility of the sequence of actions without further effort. We illustrate our approach on scenarios in which a two-arm robot manipulates a set of objects, and report experiments that show how the search space is reduced.",
"Solving real-world problems using symbolic planning often requires a simplified formulation of the original problem, since certain subproblems cannot be represented at all or only in a way leading to inefficiency. For example, manipulation planning may appear as a subproblem in a robotic planning context or a packing problem can be part of a logistics task. In this paper we propose an extension of PDDL for specifying semantic attachments. This allows the evaluation of grounded predicates as well as the change of fluents by externally specified functions. Furthermore, we describe a general schema of integrating semantic attachments into a forward-chaining planner and report on our experience of adding this extension to the planners FF and Temporal Fast Downward. Finally, we present some preliminary experiments using semantic attachments.",
"We propose a representation and a planning algorithm able to deal with problems integrating task planning as well as motion and manipulation planning knowledge involving several robots and objects. Robot plans often include actions where the robot has to place itself in some position in order to perform some other action or to \"modify\" the configuration of its environment by displacing objects. Our approach aims at establishing a bridge between task planning and manipulation planning that allows a rigorous treatment of geometric preconditions and effects of robot actions in realistic environments. We show how links can be established between a symbolic description and its geometric counterpart and how they can be used in an integrated planning process that is able to deal with intricate symbolic and geometric constraints. Finally, we describe the main features of an implemented planner and discuss several examples of its use.",
"",
"In this paper we outline an approach to the integration of task planning and motion planning that has the following key properties: It is aggressively hierarchical. It makes choices and commits to them in a top-down fashion in an attempt to limit the length of plans that need to be constructed, and thereby exponentially decrease the amount of search required. Importantly, our approach also limits the need to project the effect of actions into the far future. It operates on detailed, continuous geometric representations and partial symbolic descriptions. It does not require a complete symbolic representation of the input geometry or of the geometric effect of the task-level operations."
]
} |
1604.03468 | 1475741780 | In this paper we address planning problems in high-dimensional hybrid configuration spaces, with a particular focus on manipulation planning problems involving many objects. We present the hybrid backward-forward (HBF) planning algorithm that uses a backward identification of constraints to direct the sampling of the infinite action space in a forward search from the initial state towards a goal configuration. The resulting planner is probabilistically complete and can effectively construct long manipulation plans requiring both prehensile and nonprehensile actions in cluttered environments. | Effective domain-independent search guidance has been a major contribution of research in the artificial intelligence planning community, which has focused on state-space search methods; they solve the exact problem, but do so using algorithmic heuristics that quickly solve approximations of the actual planning task to estimate the distance to the goal from an arbitrary state @cite_12 @cite_7. One effective approximation is the "delete relaxation," in which it is assumed that any effect, once achieved by the planner, can remain true for the duration of the plan, even if it ought to have been deleted by other actions @cite_8; the length of a plan to achieve the goal in this relaxed domain is an estimate that is the basis for the @math heuristic. | {
"cite_N": [
"@cite_7",
"@cite_12",
"@cite_8"
],
"mid": [
"2122054842",
"1781384914",
"1545688112"
],
"abstract": [
"In the AIPS98 Planning Contest, the HSP planner showed that heuristic search planners can be competitive with state-of-the-art Graphplan and SAT planners. Heuristic search planners like HSP transform planning problems into problems of heuristic search by automatically extracting heuristics from Strips encodings. They differ from specialized problem solvers such as those developed for the 24-Puzzle and Rubik’s Cube in that they use a general declarative language for stating problems and a general mechanism for extracting heuristics from these representations. In this paper, we study a family of heuristic search planners that are based on a simple and general heuristic that assumes that action preconditions are independent. The heuristic is then used in the context of best-first and hill-climbing search algorithms, and is tested over a large collection of domains. We then consider variations and extensions such as reversing the direction of the search for speeding node evaluation, and extracting information about propositional invariants for avoiding dead-ends. We analyze the resulting planners, evaluate their performance, and explain when they do best. We also compare the performance of these planners with two state-of-the-art planners, and show that the simplest planner based on a pure best-first search yields the most solid performance over a large set of problems. We also discuss the strengths and limitations of this approach, establish a correspondence between heuristic search planning and Graphplan, and briefly survey recent ideas that can reduce the current gap in performance between general heuristic search planners and specialized solvers. 2001 Elsevier Science B.V. All rights reserved.",
"In the recent AIPS98 Planning Competition, the hsp planner, based on a forward state search and a domain-independent heuristic, showed that heuristic search planners can be competitive with state of the art Graphplan and Satisfiability planners. hsp solved more problems than the other planners but it often took more time or produced longer plans. The main bottleneck in hsp is the computation of the heuristic for every new state. This computation may take up to 85 of the processing time. In this paper, we present a solution to this problem that uses a simple change in the direction of the search. The new planner, that we call hspr, is based on the same ideas and heuristic as hsp , but searches backward from the goal rather than forward from the initial state. This allows hspr to compute the heuristic estimates only once. As a result, hspr can produce better plans, often in less time. For example, hspr solves each of the 30 logistics problems from Kautz and Selman in less than 3 seconds. This is two orders of magnitude faster than blackbox. At the same time, in almost all cases, the plans are substantially smaller. hspr is also more robust than hsp as it visits a larger number of states, makes deterministic decisions, and relies on a single adjustable parameter than can be fixed for most domains. hspr, however, is not better than hsp accross all domains and in particular, in the blocks world, hspr fails on some large instances that hsp can solve. We discuss also the relation between hspr and Graphplan, and argue that Graphplan can also be understood as a heuristic search planner with a precise heuristic function and search algorithm.",
"We describe and evaluate the algorithmic techniques that are used in the FF planning system. Like the HSP system, FF relies on forward state space search, using a heuristic that estimates goal distances by ignoring delete lists. Unlike HSP's heuristic, our method does not assume facts to be independent. We introduce a novel search strategy that combines hill-climbing with systematic search, and we show how other powerful heuristic information can be extracted and used to prune the search space. FF was the most successful automatic planner at the recent AIPS-2000 planning competition. We review the results of the competition, give data for other benchmark domains, and investigate the reasons for the runtime performance of FF compared to HSP."
]
} |
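The delete relaxation discussed in the row above admits a very small sketch: ignore delete effects, chain add effects forward, and count the layers needed to cover the goal. This layer count is a crude lower-bound cousin of the relaxed-plan length used by the FF heuristic; the STRIPS-style encoding here is hypothetical.

```python
def relaxed_layers(state, goal, actions):
    """Actions are (preconditions, add_effects) pairs of fact sets.
    Delete effects are ignored, so facts once achieved stay true.
    Returns the number of forward-chaining layers needed to reach the
    goal, or None if it is unreachable even under the relaxation."""
    reached = set(state)
    layers = 0
    while not set(goal) <= reached:
        new = set()
        for pre, add in actions:
            if set(pre) <= reached:  # applicable in the relaxed state
                new |= set(add) - reached
        if not new:
            return None  # fixpoint reached without covering the goal
        reached |= new
        layers += 1
    return layers
```

Because unreachability under the relaxation implies unreachability in the real problem, a `None` result can safely prune a state from the search.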
1604.03468 | 1475741780 | In this paper we address planning problems in high-dimensional hybrid configuration spaces, with a particular focus on manipulation planning problems involving many objects. We present the hybrid backward-forward (HBF) planning algorithm that uses a backward identification of constraints to direct the sampling of the infinite action space in a forward search from the initial state towards a goal configuration. The resulting planner is probabilistically complete and can effectively construct long manipulation plans requiring both prehensile and nonprehensile actions in cluttered environments. | The most closely related approach to ours integrates symbolic and geometric search into one combined problem and provides search guidance using an adaptation of the @math heuristic to directly include geometric considerations @cite_11. It was able to solve larger pick-and-place problems than most previous approaches but suffered from the need to pre-sample its geometric roadmaps. | {
"cite_N": [
"@cite_11"
],
"mid": [
"1852202999"
],
"abstract": [
"Manipulation problems involving many objects present substantial challenges for motion planning algorithms due to the high dimensionality and multi-modality of the search space. Symbolic task planners can efficiently construct plans involving many entities but cannot incorporate the constraints from geometry and kinematics. In this paper, we show how to extend the heuristic ideas from one of the most successful symbolic planners in recent years, the FastForward (FF) planner, to motion planning, and to compute it efficiently. We use a multi-query roadmap structure that can be conditionalized to model different placements of movable objects. The resulting tightly integrated planner is simple and performs efficiently in a collection of tasks involving manipulation of many objects."
]
} |
1604.03211 | 2339638311 | There are billions of lines of sequential code inside nowadays' software which do not benefit from the parallelism available in modern multicore architectures. Automatically parallelizing sequential code, to promote an efficient use of the available parallelism, has been a research goal for some time now. This work proposes a new approach for achieving such goal. We created a new parallelizing compiler that analyses the read and write instructions, and control-flow modifications in programs to identify a set of dependencies between the instructions in the program. Afterwards, the compiler, based on the generated dependencies graph, rewrites and organizes the program in a task-oriented structure. Parallel tasks are composed by instructions that cannot be executed in parallel. A work-stealing-based parallel runtime is responsible for scheduling and managing the granularity of the generated tasks. Furthermore, a compile-time granularity control mechanism also avoids creating unnecessary data-structures. This work focuses on the Java language, but the techniques are general enough to be applied to other programming languages. We have evaluated our approach on 8 benchmark programs against OoOJava, achieving higher speedups. In some cases, values were close to those of a manual parallelization. The resulting parallel code also has the advantage of being readable and easily configured to improve further its performance manually. | Given the wide availability of multicore processors, GPUs and other accelerators such as FPGAs and the Xeon Phi, research on concurrent programming has increased in the last decade. New programming models, languages and runtime systems have been developed to improve the expression and execution of parallel programs. Much of this work has culminated in new languages, such as X10 @cite_13, Fortress @cite_0 and Chapel @cite_32, in which most language constructs are parallel by default (for loops, for instance).
These languages also provide constructs to explicitly inform the compiler that certain memory regions are independent and, therefore, accesses to them can be executed in parallel. Unlike these languages, which mostly target scientific computing, the Æminium language @cite_19 has focused on dependable systems programming. By annotating variables with access permissions, programs can be automatically parallelized with guarantees that execution will not break the defined contracts. | {
"cite_N": [
"@cite_0",
"@cite_19",
"@cite_13",
"@cite_32"
],
"mid": [
"1987607743",
"2015979616",
"2109065830",
"2090409324"
],
"abstract": [
"Summary form only given. The Programming Language Research Group at Sun Microsystems Laboratories seeks to apply lessons learned from the Java (TM) programming language to the next generation of programming languages. The Java language supports platform-independent parallel programming with explicit multithreading and explicit locks. As part of the DARPA program for High Productivity Computing Systems, we are developing Fortress, a language intended to support large-scale scientific computation. One of the design principles is that parallelism be encouraged everywhere (for example, it is intentionally just a little bit harder to write a sequential loop than a parallel loop). Another is to have fairly rich mechanisms for encapsulation and abstraction; the idea is to have a fairly complicated language for library writers that enable them to write libraries that present a relatively simple set of interfaces to the application programmer. We will discuss ideas for using a rich polymorphic type system to organize multithreading and data distribution on large parallel machines. The net result is similar in some ways to data distribution facilities in other languages such as HPF and Chapel, but more open-ended, because in Fortress the facilities are defined by user-replaceable libraries rather than wired into the compiler.",
"Writing concurrent applications is extremely challenging, not only in terms of producing bug-free and maintainable software, but also for enabling developer productivity. In this article we present the AEminium concurrent-by-default programming language. Using AEminium programmers express data dependencies rather than control flow between instructions. Dependencies are expressed using permissions, which are used by the type system to automatically parallelize the application. The AEminium approach provides a modular and composable mechanism for writing concurrent applications, preventing data races in a provable way. This allows programmers to shift their attention from low-level, error-prone reasoning about thread interleaving and synchronization to focus on the core functionality of their applications. We study the semantics of AEminium through μAEminium, a sound core calculus that leverages permission flow to enable concurrent-by-default execution. After discussing our prototype implementation we present several case studies of our system. Our case studies show up to 6.5X speedup on an eight-core machine when leveraging data group permissions to manage access to shared state, and more than 70p higher throughput in a Web server application.",
"It is now well established that the device scaling predicted by Moore's Law is no longer a viable option for increasing the clock frequency of future uniprocessor systems at the rate that had been sustained during the last two decades. As a result, future systems are rapidly moving from uniprocessor to multiprocessor configurations, so as to use parallelism instead of frequency scaling as the foundation for increased compute capacity. The dominant emerging multiprocessor structure for the future is a Non-Uniform Cluster Computing (NUCC) system with nodes that are built out of multi-core SMP chips with non-uniform memory hierarchies, and interconnected in horizontally scalable cluster configurations such as blade servers. Unlike previous generations of hardware evolution, this shift will have a major impact on existing software. Current OO language facilities for concurrent and distributed programming are inadequate for addressing the needs of NUCC systems because they do not support the notions of non-uniform data access within a node, or of tight coupling of distributed nodes.We have designed a modern object-oriented programming language, X10, for high performance, high productivity programming of NUCC systems. A member of the partitioned global address space family of languages, X10 highlights the explicit reification of locality in the form of places ; lightweight activities embodied in async, future, foreach, and ateach constructs; a construct for termination detection (finish); the use of lock-free synchronization (atomic blocks); and the manipulation of cluster-wide global data structures. We present an overview of the X10 programming model and language, experience with our reference implementation, and results from some initial productivity comparisons between the X10 and Java™ languages.",
"In this paper we consider productivity challenges for parallel programmers and explore ways that parallel language design might help improve end-user productivity. We offer a candidate list of desirable qualities for a parallel programming language, and describe how these qualities are addressed in the design of the Chapel language. In doing so, we provide an overview of Chapel's features and how they help address parallel productivity. We also survey current techniques for parallel programming and describe ways in which we consider them to fall short of our idealized productive programming model."
]
} |
1604.03211 | 2339638311 | There are billions of lines of sequential code inside today's software which do not benefit from the parallelism available in modern multicore architectures. Automatically parallelizing sequential code, to promote an efficient use of the available parallelism, has been a research goal for some time now. This work proposes a new approach for achieving this goal. We created a new parallelizing compiler that analyses the read and write instructions, and control-flow modifications in programs to identify a set of dependencies between the instructions in the program. Afterwards, the compiler, based on the generated dependencies graph, rewrites and organizes the program in a task-oriented structure. Parallel tasks are composed of instructions that cannot be executed in parallel. A work-stealing-based parallel runtime is responsible for scheduling and managing the granularity of the generated tasks. Furthermore, a compile-time granularity control mechanism also avoids creating unnecessary data-structures. This work focuses on the Java language, but the techniques are general enough to be applied to other programming languages. We have evaluated our approach on 8 benchmark programs against OoOJava, achieving higher speedups. In some cases, values were close to those of a manual parallelization. The resulting parallel code also has the advantage of being readable and easily configured to further improve its performance manually. | Another approach for writing parallel programs is semi-automatic parallelization. In this approach, programmers annotate existing sequential programs with enough information for the compiler to automatically parallelize parts of the code. Cilk @cite_6 and OpenMP @cite_24 are the two most common examples of this approach, and both work on top of the C language. Cilk focuses on divide-and-conquer recursive algorithms, while OpenMP focuses mostly on symmetrical parallelism in for loops.
OpenMP 3.0 introduced unstructured parallelism via the concept of tasks @cite_16 @cite_12 . More recently, OpenMP has also started to support code generation for GPUs @cite_5 . | {
"cite_N": [
"@cite_6",
"@cite_24",
"@cite_5",
"@cite_16",
"@cite_12"
],
"mid": [
"2072725684",
"1988888548",
"2170634604",
"2108801243",
"2159618460"
],
"abstract": [
"The fifth release of the multithreaded language Cilk uses a provably good \"work-stealing\" scheduling algorithm similar to the first system, but the language has been completely redesigned and the runtime system completely reengineered. The efficiency of the new implementation was aided by a clear strategy that arose from a theoretical analysis of the scheduling algorithm: concentrate on minimizing overheads that contribute to the work, even at the expense of overheads that contribute to the critical path. Although it may seem counterintuitive to move overheads onto the critical path, this \"work-first\" principle has led to a portable Cilk-5 implementation in which the typical cost of spawning a parallel thread is only between 2 and 6 times the cost of a C function call on a variety of contemporary machines. Many Cilk programs run on one processor with virtually no degradation compared to equivalent C programs. This paper describes how the work-first principle was exploited in the design of Cilk-5's compiler and its runtime system. In particular, we present Cilk-5's novel \"two-clone\" compilation strategy and its Dijkstra-like mutual-exclusion protocol for implementing the ready deque in the work-stealing scheduler.",
"At its most elemental level, OpenMP is a set of compiler directives and callable runtime library routines that extend Fortran (and separately, C and C++ to express shared memory parallelism. It leaves the base language unspecified, and vendors can implement OpenMP in any Fortran compiler. Naturally, to support pointers and allocatables, Fortran 90 and Fortran 95 require the OpenMP implementation to include additional semantics over Fortran 77. OpenMP leverages many of the X3H5 concepts while extending them to support coarse grain parallelism. The standard also includes a callable runtime library with accompanying environment variables.",
"GPGPUs have recently emerged as powerful vehicles for general-purpose high-performance computing. Although a new Compute Unified Device Architecture (CUDA) programming model from NVIDIA offers improved programmability for general computing, programming GPGPUs is still complex and error-prone. This paper presents a compiler framework for automatic source-to-source translation of standard OpenMP applications into CUDA-based GPGPU applications. The goal of this translation is to further improve programmability and make existing OpenMP applications amenable to execution on GPGPUs. In this paper, we have identified several key transformation techniques, which enable efficient GPU global memory access, to achieve high performance. Experimental results from two important kernels (JACOBI and SPMUL) and two NAS OpenMP Parallel Benchmarks (EP and CG) show that the described translator and compile-time optimizations work well on both regular and irregular applications, leading to performance improvements of up to 50X over the unoptimized translation (up to 328X over serial).",
"OpenMP has been very successful in exploiting structured parallelism in applications. With increasing application complexity, there is a growing need for addressing irregular parallelism in the presence of complicated control structures. This is evident in various efforts by the industry and research communities to provide a solution to this challenging problem. One of the primary goals of OpenMP 3.0 is to define a standard dialect to express and efficiently exploit unstructured parallelism. This paper presents the design of the OpenMP tasking model by members of the OpenMP 3.0 tasking sub-committee which was formed for this purpose. The paper summarizes the efforts of the sub-committee (spanning over two years) in designing, evaluating and seamlessly integrating the tasking model into the OpenMP specification. In this paper, we present the design goals and key features of the tasking model, including a rich set of examples and an in-depth discussion of the rationale behind various design choices. We compare a prototype implementation of the tasking model with existing models, and evaluate it on a wide range of applications. The comparison shows that the OpenMP tasking model provides expressiveness, flexibility, and huge potential for performance and scalability.",
"The OpenMP standard was conceived to parallelize dense array-based applications, and it has achieved much success with that. Recently, a novel tasking proposal to handle unstructured parallelism in OpenMP has been submitted to the OpenMP 3.0 Language Committee. We tested its expressiveness and flexibility, using it to parallelize a number of examples from a variety of different application areas. Furthermore, we checked whether the model can be implemented efficiently, evaluating the performance of an experimental implementation of the tasking proposal on an SGI Altix 4700, and comparing it to the performance achieved with Intel's Workqueueing model and other worksharing alternatives currently available in OpenMP 2.5. We conclude that the new OpenMP tasks allow the expression of parallelism for a broad range of applications and that they will not hamper application performance."
]
} |
1604.03227 | 2342171291 | Convolutional-deconvolution networks can be adopted to perform end-to-end saliency detection. However, they do not work well with objects of multiple scales. To overcome such a limitation, in this work, we propose a recurrent attentional convolutional-deconvolution network (RACDNN). Using spatial transformer and recurrent network units, RACDNN is able to iteratively attend to selected image sub-regions to perform saliency refinement progressively. Besides tackling the scale problem, RACDNN can also learn context-aware features from past iterations to enhance saliency refinement in future iterations. Experiments on several challenging saliency detection datasets validate the effectiveness of RACDNN, and show that RACDNN outperforms state-of-the-art saliency detection methods. | Saliency detection methods can be coarsely categorized into bottom-up and top-down methods. Bottom-up methods @cite_23 @cite_6 @cite_4 @cite_11 @cite_45 @cite_24 @cite_39 make use of low-level local visual cues like color, contrast, orientation and texture. Top-down methods @cite_34 @cite_12 @cite_41 are based on high-level task-specific prior knowledge. Recently, deep learning-based saliency detection methods @cite_1 @cite_19 @cite_0 @cite_37 @cite_48 have been very successful. Instead of manually defining and tuning saliency-specific features, these methods can learn both low-level features and high-level semantics useful for saliency detection straight from minimally processed images. However, these works employ neither an attention mechanism nor an RNN to improve saliency detection. To the best of our knowledge, ours is the first work to exploit recurrent attention along with deep learning for saliency detection. | {
"cite_N": [
"@cite_37",
"@cite_4",
"@cite_41",
"@cite_48",
"@cite_1",
"@cite_6",
"@cite_39",
"@cite_24",
"@cite_19",
"@cite_0",
"@cite_45",
"@cite_23",
"@cite_34",
"@cite_12",
"@cite_11"
],
"mid": [
"1894057436",
"2146103513",
"1510835000",
"2078903912",
"1947031653",
"2135957164",
"2122076510",
"2037954058",
"1914179642",
"1942214758",
"1996326832",
"2128272608",
"2133589685",
"2133858838",
"2100470808"
],
"abstract": [
"Visual saliency is a fundamental problem in both cognitive and computational sciences, including computer vision. In this paper, we discover that a high-quality visual saliency model can be learned from multiscale features extracted using deep convolutional neural networks (CNNs), which have had many successes in visual recognition tasks. For learning such saliency models, we introduce a neural network architecture, which has fully connected layers on top of CNNs responsible for feature extraction at three different scales. We then propose a refinement method to enhance the spatial coherence of our saliency results. Finally, aggregating multiple saliency maps computed for different levels of image segmentation can further boost the performance, yielding saliency maps better than those generated from a single segmentation. To promote further research and evaluation of visual saliency models, we also construct a new large database of 4447 challenging images and their pixelwise saliency annotations. Experimental results demonstrate that our proposed method is capable of achieving state-of-the-art performance on all public benchmarks, improving the F-Measure by 5.0 and 13.2 respectively on the MSRA-B dataset and our new dataset (HKU-IS), and lowering the mean absolute error by 5.7 and 35.1 respectively on these two datasets.",
"The ability of human visual system to detect visual saliency is extraordinarily fast and reliable. However, computational modeling of this basic intelligent behavior still remains a challenge. This paper presents a simple method for the visual saliency detection. Our model is independent of features, categories, or other forms of prior knowledge of the objects. By analyzing the log-spectrum of an input image, we extract the spectral residual of an image in spectral domain, and propose a fast method to construct the corresponding saliency map in spatial domain. We test this model on both natural pictures and artificial images such as psychological patterns. The result indicate fast and robust saliency detection of our method.",
"For many applications in graphics, design, and human computer interaction, it is essential to understand where humans look in a scene. Where eye tracking devices are not a viable option, models of saliency can be used to predict fixation locations. Most saliency approaches are based on bottom-up computation that does not consider top-down image semantics and often does not match actual eye movements. To address this problem, we collected eye tracking data of 15 viewers on 1003 images and use this database as training and testing examples to learn a model of saliency based on low, middle and high-level image features. This large database of eye tracking data is publicly available with this paper.",
"Saliency prediction typically relies on hand-crafted (multiscale) features that are combined in different ways to form a \"master\" saliency map, which encodes local image conspicuity. Recent improvements to the state of the art on standard benchmarks such as MIT1003 have been achieved mostly by incrementally adding more and more hand-tuned features (such as car or face detectors) to existing models. In contrast, we here follow an entirely automatic data-driven approach that performs a large-scale search for optimal features. We identify those instances of a richly-parameterized bio-inspired model family (hierarchical neuromorphic networks) that successfully predict image saliency. Because of the high dimensionality of this parameter space, we use automated hyperparameter optimization to efficiently guide the search. The optimal blend of such multilayer features combined with a simple linear classifier achieves excellent performance on several image saliency benchmarks. Our models outperform the state of the art on MIT1003, on which features and classifiers are learned. Without additional training, these models generalize well to two other image saliency data sets, Toronto and NUSEF, despite their different image content. Finally, our algorithm scores best of all the 23 models evaluated to date on the MIT300 saliency challenge, which uses a hidden test set to facilitate an unbiased comparison.",
"This paper presents a saliency detection algorithm by integrating both local estimation and global search. In the local estimation stage, we detect local saliency by using a deep neural network (DNN-L) which learns local patch features to determine the saliency value of each pixel. The estimated local saliency maps are further refined by exploring the high level object concepts. In the global search stage, the local saliency map together with global contrast and geometric information are used as global features to describe a set of object candidate regions. Another deep neural network (DNN-G) is trained to predict the saliency score of each object region based on the global features. The final saliency map is generated by a weighted sum of salient object regions. Our method presents two interesting insights. First, local features learned by a supervised scheme can effectively capture local contrast, texture and shape information for saliency detection. Second, the complex relationship between different global saliency cues can be captured by deep networks and exploited principally rather than heuristically. Quantitative and qualitative experiments on several benchmark data sets demonstrate that our algorithm performs favorably against the state-of-the-art methods.",
"A new bottom-up visual saliency model, Graph-Based Visual Saliency (GBVS), is proposed. It consists of two steps: first forming activation maps on certain feature channels, and then normalizing them in a way which highlights conspicuity and admits combination with other maps. The model is simple, and biologically plausible insofar as it is naturally parallelized. This model powerfully predicts human fixations on 749 variations of 108 natural images, achieving 98 of the ROC area of a human-based control, whereas the classical algorithms of Itti & Koch ([2], [3], [4]) achieve only 84 .",
"What makes an object salient? Most previous work assert that distinctness is the dominating factor. The difference between the various algorithms is in the way they compute distinctness. Some focus on the patterns, others on the colors, and several add high-level cues and priors. We propose a simple, yet powerful, algorithm that integrates these three factors. Our key contribution is a novel and fast approach to compute pattern distinctness. We rely on the inner statistics of the patches in the image for identifying unique patterns. We provide an extensive evaluation and show that our approach outperforms all state-of-the-art methods on the five most commonly-used datasets.",
"Automatic estimation of salient object regions across images, without any prior assumption or knowledge of the contents of the corresponding scenes, enhances many computer vision and computer graphics applications. We introduce a regional contrast based salient object detection algorithm, which simultaneously evaluates global contrast differences and spatial weighted coherence scores. The proposed algorithm is simple, efficient, naturally multi-scale, and produces full-resolution, high-quality saliency maps. These saliency maps are further used to initialize a novel iterative version of GrabCut, namely SaliencyCut, for high quality unsupervised salient object segmentation. We extensively evaluated our algorithm using traditional salient object detection datasets, as well as a more challenging Internet image dataset. Our experimental results demonstrate that our algorithm consistently outperforms 15 existing salient object detection and segmentation methods, yielding higher precision and better recall rates. We also show that our algorithm can be used to efficiently extract salient object masks from Internet images, enabling effective sketch-based image retrieval (SBIR) via simple shape comparisons. Despite such noisy internet images, where the saliency regions are ambiguous, our saliency guided image retrieval achieves a superior retrieval rate compared with state-of-the-art SBIR methods, and additionally provides important target object region information.",
"With the goal of effectively identifying common and salient objects in a group of relevant images, co-saliency detection has become essential for many applications such as video foreground extraction, surveillance, image retrieval, and image annotation. In this paper, we propose a unified co-saliency detection framework by introducing two novel insights: 1) looking deep to transfer higher-level representations by using the convolutional neural network with additional adaptive layers could better reflect the properties of the co-salient objects, especially their consistency among the image group; 2) looking wide to take advantage of the visually similar neighbors beyond a certain image group could effectively suppress the influence of the common background regions when formulating the intra-group consistency. In the proposed framework, the wide and deep information are explored for the object proposal windows extracted in each image, and the co-saliency scores are calculated by integrating the intra-image contrast and intra-group consistency via a principled Bayesian formulation. Finally the window-level co-saliency scores are converted to the superpixel-level co-saliency maps through a foreground region agreement strategy. Comprehensive experiments on two benchmark datasets have demonstrated the consistent performance gain of the proposed approach.",
"Low-level saliency cues or priors do not produce good enough saliency detection results especially when the salient object presents in a low-contrast background with confusing visual appearance. This issue raises a serious problem for conventional approaches. In this paper, we tackle this problem by proposing a multi-context deep learning framework for salient object detection. We employ deep Convolutional Neural Networks to model saliency of objects in images. Global context and local context are both taken into account, and are jointly modeled in a unified multi-context deep learning framework. To provide a better initialization for training the deep neural networks, we investigate different pre-training strategies, and a task-specific pre-training scheme is designed to make the multi-context modeling suited for saliency detection. Furthermore, recently proposed contemporary deep models in the ImageNet Image Classification Challenge are tested, and their effectiveness in saliency detection are investigated. Our approach is extensively evaluated on five public datasets, and experimental results show significant and consistent improvements over the state-of-the-art methods.",
"In this paper, we study the salient object detection problem for images. We formulate this problem as a binary labeling task where we separate the salient object from the background. We propose a set of novel features, including multiscale contrast, center-surround histogram, and color spatial distribution, to describe a salient object locally, regionally, and globally. A conditional random field is learned to effectively combine these features for salient object detection. Further, we extend the proposed approach to detect a salient object from sequential images by introducing the dynamic salient features. We collected a large image database containing tens of thousands of carefully labeled images by multiple users and a video segment database, and conducted a set of experiments over them to demonstrate the effectiveness of the proposed approach.",
"A visual attention system, inspired by the behavior and the neuronal architecture of the early primate visual system, is presented. Multiscale image features are combined into a single topographical saliency map. A dynamical neural network then selects attended locations in order of decreasing saliency. The system breaks down the complex problem of scene understanding by rapidly selecting, in a computationally efficient manner, conspicuous locations to be analyzed in detail.",
"We propose a definition of saliency by considering what the visual system is trying to optimize when directing attention. The resulting model is a Bayesian framework from which bottom-up saliency emerges naturally as the self-information of visual features, and overall saliency (incorporating top-down information with bottom-up saliency) emerges as the pointwise mutual information between the features and the target when searching for a target. An implementation of our framework demonstrates that our model’s bottom-up saliency maps perform as well as or better than existing algorithms in predicting people’s fixations in free viewing. Unlike existing saliency measures, which depend on the statistics of the particular image being viewed, our measure of saliency is derived from natural image statistics, obtained in advance from a collection of natural images. For this reason, we call our model SUN (Saliency Using Natural statistics). A measure of saliency based on natural image statistics, rather than based on a single test image, provides a straightforward explanation for many search asymmetries observed in humans; the statistics of a single test image lead to predictions that are not consistent with these asymmetries. In our model, saliency is computed locally, which is consistent with the neuroanatomy of the early visual system and results in an efficient algorithm with few free parameters.",
"Top-down visual saliency facilities object localization by providing a discriminative representation of target objects and a probability map for reducing the search space. In this paper, we propose a novel top-down saliency model that jointly learns a Conditional Random Field (CRF) and a discriminative dictionary. The proposed model is formulated based on a CRF with latent variables. By using sparse codes as latent variables, we train the dictionary modulated by CRF, and meanwhile a CRF with sparse coding. We propose a max-margin approach to train our model via fast inference algorithms. We evaluate our model on the Graz-02 and PASCAL VOC 2007 datasets. Experimental results show that our model performs favorably against the state-of-the-art top-down saliency methods. We also observe that the dictionary update significantly improves the model performance.",
"Detection of visually salient image regions is useful for applications like object segmentation, adaptive compression, and object recognition. In this paper, we introduce a method for salient region detection that outputs full resolution saliency maps with well-defined boundaries of salient objects. These boundaries are preserved by retaining substantially more frequency content from the original image than other existing techniques. Our method exploits features of color and luminance, is simple to implement, and is computationally efficient. We compare our algorithm to five state-of-the-art salient region detection methods with a frequency domain analysis, ground truth, and a salient object segmentation application. Our method outperforms the five algorithms both on the ground-truth evaluation and on the segmentation task by achieving both higher precision and better recall."
]
} |
1604.03227 | 2342171291 | Convolutional-deconvolution networks can be adopted to perform end-to-end saliency detection. However, they do not work well with objects of multiple scales. To overcome such a limitation, in this work, we propose a recurrent attentional convolutional-deconvolution network (RACDNN). Using spatial transformer and recurrent network units, RACDNN is able to iteratively attend to selected image sub-regions to perform saliency refinement progressively. Besides tackling the scale problem, RACDNN can also learn context-aware features from past iterations to enhance saliency refinement in future iterations. Experiments on several challenging saliency detection datasets validate the effectiveness of RACDNN, and show that RACDNN outperforms state-of-the-art saliency detection methods. | Attention models are a new variant of neural networks aiming to model visual attention. They are often used with recurrent neural networks to achieve sequential attention. @cite_17 formulates a recurrent attention model that surpasses CNNs on some image classification tasks. @cite_44 extends the work of @cite_17 by making the model deeper and applying it to the multi-object classification task. To overcome the training difficulty of recurrent attention models, @cite_27 propose a differentiable attention mechanism and apply it to image generation and image classification. @cite_46 propose a differentiable and efficient sampling-based spatial attention mechanism, in which any spatial transformation can be used. Unlike the above works @cite_17 @cite_28 @cite_18 , which mostly use small attention networks for low-resolution digit classification tasks, the attention mechanism used in our work is much more complex, as it is tied to a large CNN-DecNN for dense pixelwise saliency refinement. | {
"cite_N": [
"@cite_18",
"@cite_28",
"@cite_44",
"@cite_27",
"@cite_46",
"@cite_17"
],
"mid": [
"",
"",
"2964036520",
"2962741254",
"603908379",
"2147527908"
],
"abstract": [
"",
"",
"We present an attention-based model for recognizing multiple objects in images. The proposed model is a deep recurrent neural network trained with reinforcement learning to attend to the most relevant regions of the input image. We show that the model learns to both localize and recognize multiple objects despite being given only class labels during training. We evaluate the model on the challenging task of transcribing house number sequences from Google Street View images and show that it is both more accurate than the state-of-the-art convolutional networks and uses fewer parameters and less computation.",
"This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye.",
"Convolutional Neural Networks define an exceptionally powerful class of models, but are still limited by the lack of ability to be spatially invariant to the input data in a computationally and parameter efficient manner. In this work we introduce a new learnable module, the Spatial Transformer, which explicitly allows the spatial manipulation of data within the network. This differentiable module can be inserted into existing convolutional architectures, giving neural networks the ability to actively spatially transform feature maps, conditional on the feature map itself, without any extra training supervision or modification to the optimisation process. We show that the use of spatial transformers results in models which learn invariance to translation, scale, rotation and more generic warping, resulting in state-of-the-art performance on several benchmarks, and for a number of classes of transformations.",
"Applying convolutional neural networks to large images is computationally expensive because the amount of computation scales linearly with the number of image pixels. We present a novel recurrent neural network model that is capable of extracting information from an image or video by adaptively selecting a sequence of regions or locations and only processing the selected regions at high resolution. Like convolutional neural networks, the proposed model has a degree of translation invariance built-in, but the amount of computation it performs can be controlled independently of the input image size. While the model is non-differentiable, it can be trained using reinforcement learning methods to learn task-specific policies. We evaluate our model on several image classification tasks, where it significantly outperforms a convolutional neural network baseline on cluttered images, and on a dynamic visual control problem, where it learns to track a simple object without an explicit training signal for doing so."
]
} |
1604.03498 | 2335905583 | Fisher vector has been widely used in many multimedia retrieval and visual recognition applications with good performance. However, the computation complexity prevents its usage in real-time video monitoring. In this work, we proposed and implemented GPU-FV, a fast Fisher vector extraction method with the help of modern GPUs. The challenge of implementing Fisher vector on GPUs lies in the data dependency in feature extraction and expensive memory access in Fisher vector computing. To handle these challenges, we carefully designed GPU-FV in a way that utilizes the computing power of GPU as much as possible, and applied optimizations such as loop tiling to boost the performance. GPU-FV is about 12 times faster than the CPU version, and 50% faster than a non-optimized GPU implementation. For standard video input (320*240), GPU-FV can process each frame within 34ms on a model GPU. Our experiments show that GPU-FV obtains a similar recognition accuracy as traditional FV on VOC 2007 and Caltech 256 image sets. We also applied GPU-FV for realtime video monitoring tasks and found that GPU-FV outperforms a number of previous works. Especially, when the number of training examples are small, GPU-FV outperforms the recent popular deep CNN features borrowed from ImageNet. | SURF is an optimized robust feature extraction system @cite_16 . Cornelis and Van Gool @cite_0 implemented SURF on the GPU (Graphics Processing Unit) and obtained an order of magnitude speedup compared to a CPU implementation. Extracting SIFT descriptors on GPU has been studied by other researchers @cite_2 @cite_14 . Recent efforts were also made on accelerating Dense SIFT computation @cite_12 . SVM model training has been independently studied on the GPU before @cite_36 .
To address the bottlenecks in accurate visual categorization systems, Sande et al. @cite_30 did a detailed analysis and proposed two GPU-accelerated algorithms, GPU vector quantization and GPU kernel value precomputation, which result in a substantial acceleration of the complete visual categorization pipeline. However, their method does not involve Fisher vector encoding. | {
"cite_N": [
"@cite_30",
"@cite_14",
"@cite_36",
"@cite_0",
"@cite_2",
"@cite_16",
"@cite_12"
],
"mid": [
"2109370419",
"2057460344",
"2159350554",
"2150440820",
"2083822463",
"1677409904",
""
],
"abstract": [
"Visual categorization is important to manage large collections of digital images and video, where textual metadata is often incomplete or simply unavailable. The bag-of-words model has become the most powerful method for visual categorization of images and video. Despite its high accuracy, a severe drawback of this model is its high computational cost. As the trend to increase computational power in newer CPU and GPU architectures is to increase their level of parallelism, exploiting this parallelism becomes an important direction to handle the computational cost of the bag-of-words approach. When optimizing a system based on the bag-of-words approach, the goal is to minimize the time it takes to process batches of images. In this paper, we analyze the bag-of-words model for visual categorization in terms of computational cost and identify two major bottlenecks: the quantization step and the classification step. We address these two bottlenecks by proposing two efficient algorithms for quantization and classification by exploiting the GPU hardware and the CUDA parallel programming model. The algorithms are designed to (1) keep categorization accuracy intact, (2) decompose the problem, and (3) give the same numerical results. In the experiments on large scale datasets, it is shown that, by using a parallel implementation on the Geforce GTX260 GPU, classifying unseen images is 4.8 times faster than a quad-core CPU version on the Core i7 920, while giving the exact same numerical results. In addition, we show how the algorithms can be generalized to other applications, such as text retrieval and video retrieval. Moreover, when the obtained speedup is used to process extra video frames in a video retrieval benchmark, the accuracy of visual categorization is improved by 29%.",
"Scale Invariance Feature Transform (SIFT) is quite suitable for image matching because of its invariance to image scaling, rotation and slight changes in illumination or viewpoint. However, due to high computation complexity it's technically challenging to deploy SIFT in real time application situations. To address this problem, we propose CLSIFT, an OpenCL based highly speeded up and performance portable SIFT solution. Important optimization techniques employed in CLSIFT such as: (1) For less global memory traffic, independent logical functions are merged into the same kernel to reuse data. (2) Loop buffers are introduced for data and intermediate results reusing. (3) Task queue is used to schedule threads in the same branch to remove branch divergences. (4) Data partition is based on the statistic patterns for workload balance among workgroups. (5) Overlap of CPU time and better parallel strategies are used too. With all mentioned efforts, CLSIFT processes lena.jpg at 74.2 FPS and 43.4 FPS respectively on NVidia and AMD GPUs, much higher than CPU's nearly 10 FPS and the known fastest SIFTGPU's 39.8 FPS and 13 FPS. Moreover in a quantitative comparison approach we analyze those successful strategies beating SIFTGPU, a famous existing GPU implementation. Additionally, we observe and conclude that NVidia GPU achieves better occupancy and performance due to some factors. Finally, we summarize some techniques and empirical guiding principles that may be shared by other applications on GPU.",
"Recent developments in programmable, highly parallel Graphics Processing Units (GPUs) have enabled high performance implementations of machine learning algorithms. We describe a solver for Support Vector Machine training running on a GPU, using the Sequential Minimal Optimization algorithm and an adaptive first and second order working set selection heuristic, which achieves speedups of 9-35x over LIBSVM running on a traditional processor. We also present a GPU-based system for SVM classification which achieves speedups of 81-138x over LIBSVM (5-24x over our own CPU based SVM classifier).",
"Ever since the introduction of freely programmable hardware components into modern graphics hardware, graphics processing units (GPUs) have become increasingly popular for general purpose computations. Especially when applied to computer vision algorithms where a Single set of Instructions has to be executed on Multiple Data (SIMD), GPU-based algorithms can provide a major increase in processing speed compared to their CPU counterparts. This paper presents methods that take full advantage of modern graphics card hardware for real-time scale invariant feature detection and matching. The focus lies on the extraction of feature locations and the generation of feature descriptors from natural images. The generation of these feature-vectors is based on the Speeded Up Robust Features (SURF) method [1] due to its high stability against rotation, scale and changes in lighting condition of the processed images. With the presented methods feature detection and matching can be performed at framerates exceeding 100 frames per second for 640×480 images. The remaining time can then be spent on fast matching against large feature databases on the GPU while the CPU can be used for other tasks.",
"This paper describes novel implementations of the KLT feature tracking and SIFT feature extraction algorithms that run on the graphics processing unit (GPU) and is suitable for video analysis in real-time vision systems. While significant acceleration over standard CPU implementations is obtained by exploiting parallelism provided by modern programmable graphics hardware, the CPU is freed up to run other computations in parallel. Our GPU-based KLT implementation tracks about a thousand features in real-time at 30 Hz on 1,024 × 768 resolution video which is a 20 times improvement over the CPU. The GPU-based SIFT implementation extracts about 800 features from 640 × 480 video at 10 Hz which is approximately 10 times faster than an optimized CPU implementation.",
"In this paper, we present a novel scale- and rotation-invariant interest point detector and descriptor, coined SURF (Speeded Up Robust Features). It approximates or even outperforms previously proposed schemes with respect to repeatability, distinctiveness, and robustness, yet can be computed and compared much faster. This is achieved by relying on integral images for image convolutions; by building on the strengths of the leading existing detectors and descriptors (in casu, using a Hessian matrix-based measure for the detector, and a distribution-based descriptor); and by simplifying these methods to the essential. This leads to a combination of novel detection, description, and matching steps. The paper presents experimental results on a standard evaluation set, as well as on imagery obtained in the context of a real-life object recognition application. Both show SURF's strong performance.",
""
]
} |
1604.03498 | 2335905583 | Fisher vector has been widely used in many multimedia retrieval and visual recognition applications with good performance. However, the computation complexity prevents its usage in real-time video monitoring. In this work, we proposed and implemented GPU-FV, a fast Fisher vector extraction method with the help of modern GPUs. The challenge of implementing Fisher vector on GPUs lies in the data dependency in feature extraction and expensive memory access in Fisher vector computing. To handle these challenges, we carefully designed GPU-FV in a way that utilizes the computing power of GPU as much as possible, and applied optimizations such as loop tiling to boost the performance. GPU-FV is about 12 times faster than the CPU version, and 50% faster than a non-optimized GPU implementation. For standard video input (320*240), GPU-FV can process each frame within 34ms on a model GPU. Our experiments show that GPU-FV obtains a similar recognition accuracy as traditional FV on VOC 2007 and Caltech 256 image sets. We also applied GPU-FV for realtime video monitoring tasks and found that GPU-FV outperforms a number of previous works. Especially, when the number of training examples are small, GPU-FV outperforms the recent popular deep CNN features borrowed from ImageNet. | Efforts have been made to reduce the storage and computation overhead of Fisher vector @cite_22 , by compressing the Fisher vector, with some loss of precision. The widely used Vlfeat package @cite_33 provides a wonderful implementation of the Fisher vector, along with other popular computer vision algorithms. However, there was no GPU-based implementation in this package. A few years back, some researchers implemented Fisher vector on GPU @cite_15 , but it is based on a modified algorithm with a hierarchical GMM model, and the accuracy is lower than the state-of-the-art. | {
"cite_N": [
"@cite_15",
"@cite_22",
"@cite_33"
],
"mid": [
"",
"2071027807",
"2066941820"
],
"abstract": [
"",
"The problem of large-scale image search has been traditionally addressed with the bag-of-visual-words (BOV). In this article, we propose to use as an alternative the Fisher kernel framework. We first show why the Fisher representation is well-suited to the retrieval problem: it describes an image by what makes it different from other images. One drawback of the Fisher vector is that it is high-dimensional and, as opposed to the BOV, it is dense. The resulting memory and computational costs do not make Fisher vectors directly amenable to large-scale retrieval. Therefore, we compress Fisher vectors to reduce their memory footprint and speed-up the retrieval. We compare three binarization approaches: a simple approach devised for this representation and two standard compression techniques. We show on two publicly available datasets that compressed Fisher vectors perform very well using as little as a few hundreds of bits per image, and significantly better than a very recent compressed BOV approach.",
"VLFeat is an open and portable library of computer vision algorithms. It aims at facilitating fast prototyping and reproducible research for computer vision scientists and students. It includes rigorous implementations of common building blocks such as feature detectors, feature extractors, (hierarchical) k-means clustering, randomized kd-tree matching, and super-pixelization. The source code and interfaces are fully documented. The library integrates directly with MATLAB, a popular language for computer vision research."
]
} |
1604.03498 | 2335905583 | Fisher vector has been widely used in many multimedia retrieval and visual recognition applications with good performance. However, the computation complexity prevents its usage in real-time video monitoring. In this work, we proposed and implemented GPU-FV, a fast Fisher vector extraction method with the help of modern GPUs. The challenge of implementing Fisher vector on GPUs lies in the data dependency in feature extraction and expensive memory access in Fisher vector computing. To handle these challenges, we carefully designed GPU-FV in a way that utilizes the computing power of GPU as much as possible, and applied optimizations such as loop tiling to boost the performance. GPU-FV is about 12 times faster than the CPU version, and 50% faster than a non-optimized GPU implementation. For standard video input (320*240), GPU-FV can process each frame within 34ms on a model GPU. Our experiments show that GPU-FV obtains a similar recognition accuracy as traditional FV on VOC 2007 and Caltech 256 image sets. We also applied GPU-FV for realtime video monitoring tasks and found that GPU-FV outperforms a number of previous works. Especially, when the number of training examples are small, GPU-FV outperforms the recent popular deep CNN features borrowed from ImageNet. | The problem of abnormal event recognition in videos has attracted much attention @cite_7 @cite_5 @cite_9 . However, the approaches listed above did not use Fisher Vector due to its slowness. @cite_23 used MoSIFT and Fisher Vector for event detection, and obtained good performance on TRECVID data. However, their work did not consider how to speed up local feature extraction or Fisher vector encoding. As a result, their method relies on significant subsampling of one from 30/60/120 frames, and the time of encoding such a frame is about 0.4 second (with feature extraction it will be longer).
We believe our work in this paper can be easily employed by the framework of @cite_23 to provide a similar speed-up. Fisher Vector has also been used in action localization and event recognition @cite_19 , at a speed 2.4 times slower than real-time. In this paper, we demonstrate that with a GPU-based Fisher Vector, we can handle some abnormal event recognition very well at a real-time speed. | {
"cite_N": [
"@cite_7",
"@cite_9",
"@cite_19",
"@cite_23",
"@cite_5"
],
"mid": [
"2021659075",
"2163612318",
"2131042978",
"2010783580",
"2125105611"
],
"abstract": [
"Real-time unusual event detection in video stream has been a difficult challenge due to the lack of sufficient training information, volatility of the definitions for both normality and abnormality, time constraints, and statistical limitation of the fitness of any parametric models. We propose a fully unsupervised dynamic sparse coding approach for detecting unusual events in videos based on online sparse re-constructibility of query signals from an atomically learned event dictionary, which forms a sparse coding bases. Based on an intuition that usual events in a video are more likely to be reconstructible from an event dictionary, whereas unusual events are not, our algorithm employs a principled convex optimization formulation that allows both a sparse reconstruction code, and an online dictionary to be jointly inferred and updated. Our algorithm is completely un-supervised, making no prior assumptions of what unusual events may look like and the settings of the cameras. The fact that the bases dictionary is updated in an online fashion as the algorithm observes more data, avoids any issues with concept drift. Experimental results on hours of real world surveillance video and several Youtube videos show that the proposed algorithm could reliably locate the unusual events in the video sequence, outperforming the current state-of-the-art methods.",
"Speedy abnormal event detection meets the growing demand to process an enormous number of surveillance videos. Based on inherent redundancy of video structures, we propose an efficient sparse combination learning framework. It achieves decent performance in the detection phase without compromising result quality. The short running time is guaranteed because the new method effectively turns the original complicated problem to one in which only a few costless small-scale least square optimization steps are involved. Our method reaches high detection rates on benchmark datasets at a speed of 140-150 frames per second on average when computing on an ordinary desktop PC using MATLAB.",
"Action recognition in uncontrolled video is an important and challenging computer vision problem. Recent progress in this area is due to new local features and models that capture spatio-temporal structure between local features, or human-object interactions. Instead of working towards more complex models, we focus on the low-level features and their encoding. We evaluate the use of Fisher vectors as an alternative to bag-of-word histograms to aggregate a small set of state-of-the-art low-level descriptors, in combination with linear classifiers. We present a large and varied set of evaluations, considering (i) classification of short actions in five datasets, (ii) localization of such actions in feature-length movies, and (iii) large-scale recognition of complex events. We find that for basic action recognition and localization MBH features alone are enough for state-of-the-art performance. For complex events we find that SIFT and MFCC features provide complementary cues. On all three problems we obtain state-of-the-art results, while using fewer features and less complex models.",
"We present a generic event detection system evaluated in the Surveillance Event Detection (SED) task of TRECVID 2012. We investigate a statistical approach with spatio-temporal features applied to seven event classes, which were defined by the SED task. This approach is based on local spatio-temporal descriptors, called MoSIFT and generated by pair-wise video frames. A Gaussian Mixture Model(GMM) is learned to model the distribution of the low level features. Then for each sliding window, the Fisher vector encoding [improvedFV] is used to generate the sample representation. The model is learnt using a Linear SVM for each event. The main novelty of our system is the introduction of Fisher vector encoding into video event detection. Fisher vector encoding has demonstrated great success in image classification. The key idea is to model the low level visual features as a Gaussian Mixture Model and to generate an intermediate vector representation for bag of features. FV encoding uses higher order statistics in place of histograms in the standard BoW. FV has several good properties: (a) it can naturally separate the video specific information from the noisy local features and (b) we can use a linear model for this representation. We build an efficient implementation for FV encoding which can attain a 10 times speed-up over real-time. We also take advantage of non-trivial object localization techniques to feed into the video event detection, e.g. multi-scale detection and non-maximum suppression. This approach outperformed the results of all other teams submissions in TRECVID SED 2012 on four of the seven event types.",
"Extremely crowded scenes present unique challenges to video analysis that cannot be addressed with conventional approaches. We present a novel statistical framework for modeling the local spatio-temporal motion pattern behavior of extremely crowded scenes. Our key insight is to exploit the dense activity of the crowded scene by modeling the rich motion patterns in local areas, effectively capturing the underlying intrinsic structure they form in the video. In other words, we model the motion variation of local space-time volumes and their spatial-temporal statistical behaviors to characterize the overall behavior of the scene. We demonstrate that by capturing the steady-state motion behavior with these spatio-temporal motion pattern models, we can naturally detect unusual activity as statistical deviations. Our experiments show that local spatio-temporal motion pattern modeling offers promising results in real-world scenes with complex activities that are hard for even human observers to analyze."
]
} |
1604.03498 | 2335905583 | Fisher vector has been widely used in many multimedia retrieval and visual recognition applications with good performance. However, the computation complexity prevents its usage in real-time video monitoring. In this work, we proposed and implemented GPU-FV, a fast Fisher vector extraction method with the help of modern GPUs. The challenge of implementing Fisher vector on GPUs lies in the data dependency in feature extraction and expensive memory access in Fisher vector computing. To handle these challenges, we carefully designed GPU-FV in a way that utilizes the computing power of GPU as much as possible, and applied optimizations such as loop tiling to boost the performance. GPU-FV is about 12 times faster than the CPU version, and 50% faster than a non-optimized GPU implementation. For standard video input (320*240), GPU-FV can process each frame within 34ms on a model GPU. Our experiments show that GPU-FV obtains a similar recognition accuracy as traditional FV on VOC 2007 and Caltech 256 image sets. We also applied GPU-FV for realtime video monitoring tasks and found that GPU-FV outperforms a number of previous works. Especially, when the number of training examples are small, GPU-FV outperforms the recent popular deep CNN features borrowed from ImageNet. | In recent years, deep neural networks have enjoyed remarkable success as efficient and effective tools in a number of visual recognition tasks @cite_21 @cite_32 . Especially, @cite_20 showed that by simply borrowing the CNN-based AlexNet model @cite_1 trained for ImageNet, an SVM model using CNN features can obtain state-of-the-art results in many applications. The deep CNN features could be sped up significantly by GPUs. We believe Fisher vector can be sped up with the same hardware, and in this paper we show that GPU-FV can outperform deep CNN features in some applications with a limited amount of training samples. | {
"cite_N": [
"@cite_21",
"@cite_1",
"@cite_32",
"@cite_20"
],
"mid": [
"2156303437",
"2618530766",
"1522734439",
"2062118960"
],
"abstract": [
"We investigate architectures of discriminatively trained deep Convolutional Networks (ConvNets) for action recognition in video. The challenge is to capture the complementary information on appearance from still frames and motion between frames. We also aim to generalise the best performing hand-crafted features within a data-driven learning framework. Our contribution is three-fold. First, we propose a two-stream ConvNet architecture which incorporates spatial and temporal networks. Second, we demonstrate that a ConvNet trained on multi-frame dense optical flow is able to achieve very good performance in spite of limited training data. Finally, we show that multitask learning, applied to two different action classification datasets, can be used to increase the amount of training data and improve the performance on both. Our architecture is trained and evaluated on the standard video actions benchmarks of UCF-101 and HMDB-51, where it is competitive with the state of the art. It also exceeds by a large margin previous attempts to use deep nets for video classification.",
"We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, respectively, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully connected layers we employed a recently developed regularization method called \"dropout\" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.",
"We propose a simple, yet effective approach for spatiotemporal feature learning using deep 3-dimensional convolutional networks (3D ConvNets) trained on a large scale supervised video dataset. Our findings are three-fold: 1) 3D ConvNets are more suitable for spatiotemporal feature learning compared to 2D ConvNets, 2) A homogeneous architecture with small 3x3x3 convolution kernels in all layers is among the best performing architectures for 3D ConvNets, and 3) Our learned features, namely C3D (Convolutional 3D), with a simple linear classifier outperform state-of-the-art methods on 4 different benchmarks and are comparable with current best methods on the other 2 benchmarks. In addition, the features are compact: achieving 52.8% accuracy on UCF101 dataset with only 10 dimensions and also very efficient to compute due to the fast inference of ConvNets. Finally, they are conceptually very simple and easy to train and use.",
"Recent results indicate that the generic descriptors extracted from the convolutional neural networks are very powerful. This paper adds to the mounting evidence that this is indeed the case. We report on a series of experiments conducted for different recognition tasks using the publicly available code and model of the OverFeat network which was trained to perform object classification on ILSVRC13. We use features extracted from the OverFeat network as a generic image representation to tackle the diverse range of recognition tasks of object image classification, scene recognition, fine grained recognition, attribute detection and image retrieval applied to a diverse set of datasets. We selected these tasks and datasets as they gradually move further away from the original task and data the OverFeat network was trained to solve. Astonishingly, we report consistent superior results compared to the highly tuned state-of-the-art systems in all the visual classification tasks on various datasets. For instance retrieval it consistently outperforms low memory footprint methods except for sculptures dataset. The results are achieved using a linear SVM classifier (or L2 distance in case of retrieval) applied to a feature representation of size 4096 extracted from a layer in the net. The representations are further modified using simple augmentation techniques e.g. jittering. The results strongly suggest that features obtained from deep learning with convolutional nets should be the primary candidate in most visual recognition tasks."
]
} |
1604.03175 | 2952875946 | We study the problem of optimal content placement over a network of caches, a problem naturally arising in several networking applications, including ICNs, CDNs, and P2P systems. Given a demand of content request rates and paths followed, we wish to determine the content placement that maximizes the expected caching gain, i.e., the reduction of routing costs due to intermediate caching. The offline version of this problem is NP-hard and, in general, the demand and topology may be a priori unknown. Hence, a distributed, adaptive, constant approximation content placement algorithm is desired. We show that path replication, a simple algorithm frequently encountered in literature, can be arbitrarily suboptimal when combined with traditional eviction policies, like LRU, LFU, or FIFO. We propose a distributed, adaptive algorithm that performs stochastic gradient ascent on a concave relaxation of the expected caching gain, and constructs a probabilistic content placement within a 1-1/e factor from the optimal, in expectation. Motivated by our analysis, we also propose a novel greedy eviction policy to be used with path replication, and show through numerical evaluations that both algorithms significantly outperform path replication with traditional eviction policies over a broad array of network topologies. | Path replication is best known as the de facto caching mechanism in content-centric networking @cite_15 , but has a long history in networking literature. In their seminal paper, Cohen and Shenker @cite_24 show that path replication, combined with a constant rate of evictions, leads to an allocation that is optimal, in equilibrium, when nodes are visited through uniform sampling. This is one of the few results on path replication's optimality (see also @cite_9 ); our work (cf., Theorem ) proves that, unfortunately, this result does not generalize to routing over arbitrary topologies.
Many studies provide numerical evaluations of path replication combined with simple eviction policies, like LRU, LFU, etc., over different topologies (see, e.g., @cite_0 @cite_2 @cite_37). In the context of CDNs and ICNs, @cite_32 study conditions under which path replication with LRU, FIFO, and other variants, under fixed paths, leads to an ergodic chain. @cite_36 approximate the LRU policy hit probability through a TTL-based eviction scheme, an approach that has been refined and extended in several recent works to model many traditional eviction policies @cite_12 @cite_18 @cite_13 @cite_29; alternative analytical models are explored in @cite_6 @cite_1. None of the above works, however, studies optimality issues or guarantees. | {
"cite_N": [
"@cite_37",
"@cite_18",
"@cite_36",
"@cite_9",
"@cite_29",
"@cite_1",
"@cite_32",
"@cite_6",
"@cite_24",
"@cite_0",
"@cite_2",
"@cite_15",
"@cite_13",
"@cite_12"
],
"mid": [
"2121605376",
"2400388591",
"2150495639",
"2150947497",
"1549860141",
"1535296432",
"1974066983",
"1982134210",
"2149721632",
"2165766008",
"",
"2014952121",
"2156068340",
"2156588926"
],
"abstract": [
"Buffer caches are commonly used in servers to reduce the number of slow disk accesses or network messages. These buffer caches form a multilevel buffer cache hierarchy. In such a hierarchy, second-level buffer caches have different access patterns from first-level buffer caches because accesses to a second-level are actually misses from a first-level. Therefore, commonly used cache management algorithms such as the least recently used (LRU) replacement algorithm that work well for single-level buffer caches may not work well for second-level. We investigate multiple approaches to effectively manage second-level buffer caches. In particular, we report our research results in 1) second-level buffer cache access pattern characterization, 2) a new local algorithm called multi-queue (MQ) that performs better than nine tested alternative algorithms for second-level buffer caches, 3) a set of global algorithms that manage a multilevel buffer cache hierarchy globally and significantly improve second-level buffer cache hit ratios over corresponding local algorithms, and 4) implementation and evaluation of these algorithms in a real storage system connected with commercial database servers (Microsoft SQL server and Oracle) running industrial-strength online transaction processing benchmarks.",
"",
"This paper aims at finding fundamental design principles for hierarchical Web caching. An analytical modeling technique is developed to characterize an uncooperative two-level hierarchical caching system where the least recently used (LRU) algorithm is locally run at each cache. With this modeling technique, we are able to identify a characteristic time for each cache, which plays a fundamental role in understanding the caching processes. In particular, a cache can be viewed roughly as a low-pass filter with its cutoff frequency equal to the inverse of the characteristic time. Documents with access frequencies lower than this cutoff frequency have good chances to pass through the cache without cache hits. This viewpoint enables us to take any branch of the cache tree as a tandem of low-pass filters at different cutoff frequencies, which further results in the finding of two fundamental design principles. Finally, to demonstrate how to use the principles to guide the caching algorithm design, we propose a cooperative hierarchical Web caching architecture based on these principles. Both model-based and real trace simulation studies show that the proposed cooperative architecture results in more than 50 memory saving and substantial central processing unit (CPU) power saving for the management and update of cache entries compared with the traditional uncooperative hierarchical caching architecture.",
"We propose a novel search mechanism for unstructured p2p networks, and show that it is both scalable, i.e., it leads to a bounded query traffic load per peer as the peer population grows, and reliable, i.e., it successfully locates all files that are (sufficiently often) brought into the system. To the best of our knowledge, this is the first time that a search mechanism for unstructured p2p networks has been shown to be both scalable and reliable. We provide both a formal analysis and a numerical case study to illustrate this result. Our analysis is based on a random graph model for the overlay graph topology and uses a mean-field approximation to characterize the evolution of how files are replicated in the network.",
"Many researchers have been working on the performance analysis of caching in Information-Centric Networks (ICNs) under various replacement policies like Least Recently Used (LRU), FIFO or Random (RND). However, no exact results are provided, and many approximate models do not scale even for the simple network of two caches connected in tandem. In this paper, we introduce a Time-To-Live based policy (TTL), that assigns a timer to each content stored in the cache and redraws the timer each time the content is requested (at each hit miss). We show that our TTL policy is more general than LRU, FIFO or RND, since it is able to mimic their behavior under an appropriate choice of its parameters. Moreover, the analysis of networks of TTL-based caches appears simpler not only under the Independent Reference Model (IRM, on which many existing results rely) but also with the Renewal Model for requests. In particular, we determine exact formulas for the performance metrics of interest for a linear network and a tree network with one root cache and N leaf caches. For more general networks, we propose an approximate solution with the relative errors smaller than 10−3 and 10−2 for exponentially distributed and constant TTLs respectively.",
"Content-centric networking proposals, as Parc's CCN, have recently emerged to define new network architectures where content, and not its location, becomes the core of the communication model. These new paradigms push data storage and delivery at network layer and are designed to better deal with current Internet usage, mainly centered around content dissemination and retrieval. In this paper, we develop an analytical model of CCN in-network storage and receiver-driven transport, that more generally applies to a class of content ori ented networks identified by chunk-based communication. We derive a closed-form expression for the mean stationary throughput as a function of hit miss probabilities at the caches along the path, of content popularity and of content cache size. Our analytical results, supported by chunk level simulations, can be used to analyze fundamental trade-offs in current CCN architecture, and provide an essential building block for the design and evaluation of enhanced CCN protocols.",
"Over the past few years Content-Centric Networking, a networking model in which host-to-content communication protocols are introduced, has been gaining much attention. A central component of such an architecture is a large-scale interconnected caching system. To date, the way these Cache Networks operate and perform is still poorly understood. In this work, we demonstrate that certain cache networks are non-ergodic in that their steady-state characterization depends on the initial state of the system. We then establish several important properties of cache networks, in the form of three independently-sufficient conditions for a cache network to comprise a single ergodic component. Each property targets a different aspect of the system - topology, admission control and cache replacement policies. Perhaps most importantly we demonstrate that cache replacement can be grouped into equivalence classes, such that the ergodicity (or lack-thereof) of one policy implies the same property holds for all policies in the class.",
"The overall performance of content distribution networks as well as recently proposed information-centric networks rely on both memory and bandwidth capacities. The hit ratio is the key performance indicator which captures the bandwidth memory tradeoff for a given global performance. This paper focuses on the estimation of the hit ratio in a network of caches that employ the Random replacement policy (RND). Assuming that requests are independent and identically distributed, general expressions of miss probabilities for a single RND cache are provided as well as exact results for specific popularity distributions (such results also hold for the FIFO replacement policy). Moreover, for any Zipf popularity distribution with exponent @a>1, we obtain asymptotic equivalents for the miss probability in the case of large cache size. We extend the analysis to networks of RND caches, when the topology is either a line or a homogeneous tree. In that case, approximations for miss probabilities across the network are derived by neglecting time correlations between miss events at any node; the obtained results are compared to the same network using the Least-Recently-Used discipline, already addressed in the literature. We further analyze the case of a mixed tandem cache network where the two nodes employ either Random or Least-Recently-Used policies. In all scenarios, asymptotic formulas and approximations are extensively compared to simulation results and shown to be very accurate. Finally, our results enable us to propose recommendations for cache replacement disciplines in a network dedicated to content distribution.",
"The Peer-to-Peer (P2P) architectures that are most prevalent in today's Internet are decentralized and unstructured. Search is blind in that it is independent of the query and is thus not more effective than probing randomly chosen peers. One technique to improve the effectiveness of blind search is to proactively replicate data. We evaluate and compare different replication strategies and reveal interesting structure: Two very common but very different replication strategies - uniform and proportional - yield the same average performance on successful queries, and are in fact worse than any replication strategy which lies between them. The optimal strategy lies between the two and can be achieved by simple distributed algorithms. These fundamental results o.er a new understanding of replication and show that currently deployed replication strategies are far from optimal and that optimal replication is attainable by protocols that resemble existing ones in simplicity and operation.",
"Large scale hierarchical caches for Web content have been deployed widely in an attempt to reduce delivery delays and bandwidth consumption and also to improve the scalability of content dissemination through the World Wide Web. Irrespective of the specific replacement algorithm employed in each cache, a de facto characteristic of contemporary hierarchical caches is that a hit for a document at an l-level cache leads to the caching of the document in all intermediate caches (levels l-1,..., 1) on the path towards the leaf cache that received the initial request. This paper presents various algorithms that revises this standard behavior and attempts to be more selective in choosing the caches that gets to store a local copy of the requested document. As these algorithms operate independently of the actual replacement algorithm running in each individual cache, they are referred to as meta algorithms. Three new meta algorithms are proposed and compared against the de facto one and a recently proposed one by means of synthetic and trace-driven simulations. The best of the new meta algorithms appears to be leading to improved performance under most simulated scenarios, especially under a low availability of storage. The latter observation makes the presented meta algorithms particularly favorable for the handling of large data objects such as stored music files or short video clips. Additionally, a simple load balancing algorithm that is based on the concept of meta algorithms is proposed and evaluated. The algorithm is shown to be able to provide for an effective balancing of load thus possibly addressing the recently discovered \"filtering-effect\" in hierarchical Web caches.",
"",
"Network use has evolved to be dominated by content distribution and retrieval, while networking technology still speaks only of connections between hosts. Accessing content and services requires mapping from the what that users care about to the network's where. We present Content-Centric Networking (CCN) which treats content as a primitive - decoupling location from identity, security and access, and retrieving content by name. Using new approaches to routing named content, derived heavily from IP, we can simultaneously achieve scalability, security and performance. We implemented our architecture's basic features and demonstrate resilience and performance with secure file downloads and VoIP calls.",
"TTL caching models have recently regained significant research interest due to their connection to popular caching policies such as LRU. This paper advances the state-of-the-art analysis of TTL-based cache networks by developing two exact methods with orthogonal generality and computational complexity. The first method generalizes existing results for line networks under renewal requests to the broad class of caching policies whereby evictions are driven by stopping times; in addition to classical policies used in DNS and web caching, our stopping time model captures an emerging new policy implemented in SDN switches and Amazon web services. The second method further generalizes these results to feedforward networks with Markov arrival process (MAP) requests. MAPs are particularly suitable for non-line networks because they are closed not only under superposition and splitting, as known, but also under caching operations with phase-type (PH) TTL distributions. The crucial benefit of the two closure properties is that they jointly enable the first exact analysis of TTL feedforward cache networks in great generality. Moreover, numerical results highlight that existing Poisson approximations in binary-tree topologies are subject to relative errors as large as 30 , depending on the tree depth.",
"In a 2002 paper, Che and co-authors proposed a simple approach for estimating the hit rates of a cache operating the least recently used (LRU) replacement policy. The approximation proves remarkably accurate and is applicable to quite general distributions of object popularity. This paper provides a mathematical explanation for the success of the approximation, notably in configurations where the intuitive arguments of clearly do not apply. The approximation is particularly useful in evaluating the performance of current proposals for an information centric network where other approaches fail due to the very large populations of cacheable objects to be taken into account and to their complex popularity law, resulting from the mix of different content types and the filtering effect induced by the lower layers in a cache hierarchy."
]
} |
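The TTL/Che approximation referenced in the abstracts above computes a characteristic time T_C at which the expected cache occupancy equals the cache size, and then estimates each item's LRU hit probability as 1 - exp(-lambda_i * T_C). A short numerical sketch follows; the Zipf popularity and cache size are illustrative toy numbers:

```python
import math

def che_characteristic_time(rates, cache_size, tol=1e-9):
    """Bisection solve for T_C in sum_i (1 - exp(-lambda_i * T_C)) = C.
    Requires cache_size < len(rates); occupancy is increasing in T_C."""
    def occupancy(t):
        return sum(1.0 - math.exp(-lam * t) for lam in rates)
    lo, hi = 0.0, 1.0
    while occupancy(hi) < cache_size:   # grow bracket until it contains T_C
        hi *= 2.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if occupancy(mid) < cache_size:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def lru_hit_probabilities(rates, cache_size):
    """Che approximation: item i hits with probability 1 - exp(-lambda_i * T_C)."""
    t_c = che_characteristic_time(rates, cache_size)
    return [1.0 - math.exp(-lam * t_c) for lam in rates]

# Zipf(1) request rates over 100 items, cache holding 10 items (toy numbers)
rates = [1.0 / (i + 1) for i in range(100)]
hits = lru_hit_probabilities(rates, 10)
```

By construction the hit probabilities sum to the cache size, and more popular items (larger lambda_i) get strictly higher hit probability, matching the "low-pass filter" intuition described in the hierarchical-caching abstract above.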
1604.03175 | 2952875946 | We study the problem of optimal content placement over a network of caches, a problem naturally arising in several networking applications, including ICNs, CDNs, and P2P systems. Given a demand of content request rates and paths followed, we wish to determine the content placement that maximizes the expected caching gain, i.e., the reduction of routing costs due to intermediate caching. The offline version of this problem is NP-hard and, in general, the demand and topology may be a priori unknown. Hence, a distributed, adaptive, constant approximation content placement algorithm is desired. We show that path replication, a simple algorithm frequently encountered in literature, can be arbitrarily suboptimal when combined with traditional eviction policies, like LRU, LFU, or FIFO. We propose a distributed, adaptive algorithm that performs stochastic gradient ascent on a concave relaxation of the expected caching gain, and constructs a probabilistic content placement within a 1-1/e factor from the optimal, in expectation. Motivated by our analysis, we also propose a novel greedy eviction policy to be used with path replication, and show through numerical evaluations that both algorithms significantly outperform path replication with traditional eviction policies over a broad array of network topologies. | All of the above complexity papers @cite_38 @cite_20 @cite_10 @cite_28, including @cite_39, study versions of their respective caching problems. Instead, we focus on providing algorithms that operate without any prior knowledge of the demand or topology. In doing so, we produce distributed algorithms for (a) performing projected gradient ascent over the concave objective used in pipage rounding, and (b) rounding the objective across nodes; combined, these lead to a distributed, adaptive caching algorithm with provable guarantees (Thm. ). | {
"cite_N": [
"@cite_38",
"@cite_28",
"@cite_39",
"@cite_10",
"@cite_20"
],
"mid": [
"2018682510",
"2072291569",
"1972738071",
"1966079129",
"1972442117"
],
"abstract": [
"We develop approximation algorithms for the problem of placing replicated data in arbitrary networks, where the nodes may both issue requests for data objects and have capacity for storing data objects so as to minimize the average data-access cost. We introduce the data placement problem to model this problem. We have a set of caches @math , a set of clients @math , and a set of data objects @math . Each cache @math can store at most @math data objects. Each client @math has demand @math for a specific data object @math and has to be assigned to a cache that stores that object. Storing an object @math in cache @math incurs a storage cost of @math , and assigning client @math to cache @math incurs an access cost of @math . The goal is to find a placement of the data objects to caches respecting the capacity constraints, and an assignment of clients to caches so as to minimize the total storage and client access costs. We present a 10-approximation algorithm for this problem. Our algorithm is based on rounding an optimal solution to a natural linear-programming relaxation of the problem. One of the main technical challenges encountered during rounding is to preserve the cache capacities while incurring only a constant-factor increase in the solution cost. We also introduce the connected data placement problem to capture settings where write-requests are also issued for data objects, so that one requires a mechanism to maintain consistency of data. We model this by requiring that all caches containing a given object be connected by a Steiner tree to a root for that object, which issues a multicast message upon a write to (any copy of) that object. The total cost now includes the cost of these Steiner trees. We devise a 14-approximation algorithm for this problem. 
We show that our algorithms can be adapted to handle two variants of the problem: (a) a @math -median variant, where there is a specified bound on the number of caches that may contain a given object, and (b) a generalization where objects have lengths and the total length of the objects stored in any cache must not exceed its capacity.",
"The paper presents a general method of designing constant-factor approximation algorithms for some discrete optimization problems with assignment-type constraints. The core of the method is a simple deterministic procedure of rounding of linear relaxations (referred to as pipage rounding). With the help of the method we design approximation algorithms with better performance guarantees for some well-known problems including MAXIMUM COVERAGE, MAX CUT with given sizes of parts and some of their generalizations.",
"Video on-demand streaming from Internet-based servers is becoming one of the most important services offered by wireless networks today. In order to improve the area spectral efficiency of video transmission in cellular systems, small cells heterogeneous architectures (e.g., femtocells, WiFi off-loading) are being proposed, such that video traffic to nomadic users can be handled by short-range links to the nearest small cell access points (referred to as “helpers”). As the helper deployment density increases, the backhaul capacity becomes the system bottleneck. In order to alleviate such bottleneck we propose a system where helpers with low-rate backhaul but high storage capacity cache popular video files. Files not available from helpers are transmitted by the cellular base station. We analyze the optimum way of assigning files to the helpers, in order to minimize the expected downloading time for files. We distinguish between the uncoded case (where only complete files are stored) and the coded case, where segments of Fountain-encoded versions of the video files are stored at helpers. We show that the uncoded optimum file assignment is NP-hard, and develop a greedy strategy that is provably within a factor 2 of the optimum. Further, for a special case we provide an efficient algorithm achieving a provably better approximation ratio of 1-(1-1 d )d, where d is the maximum number of helpers a user can be connected to. We also show that the coded optimum cache assignment problem is convex that can be further reduced to a linear program. We present numerical results comparing the proposed schemes.",
"A separable assignment problem (SAP) is defined by a set of bins and a set of items to pack in each bin; a value, f ij , for assigning item j to bin i; and a separate packing constraint for each bin - i.e. for bin i, a family L i of subsets of items that fit in bin i. The goal is to pack items into bins to maximize the aggregate value. This class of problems includes the maximum generalized assignment problem (GAP)1) and a distributed caching problem (DCP) described in this paper.Given a β-approximation algorithm for finding the highest value packing of a single bin, we give1. A polynomial-time LP-rounding based ((1 − 1 e)β)-approximation algorithm.2. A simple polynomial-time local search (β β+1 - e) - approximation algorithm, for any e > 0.Therefore, for all examples of SAP that admit an approximation scheme for the single-bin problem, we obtain an LP-based algorithm with (1 - 1 e - e)-approximation and a local search algorithm with (1 2-e)-approximation guarantee. Furthermore, for cases in which the subproblem admits a fully polynomial approximation scheme (such as for GAP), the LP-based algorithm analysis can be strengthened to give a guarantee of 1 - 1 e. The best previously known approximation algorithm for GAP is a 1 2-approximation by Shmoys and Tardos; and Chekuri and Khanna. Our LP algorithm is based on rounding a new linear programming relaxation, with a provably better integrality gap.To complement these results, we show that SAP and DCP cannot be approximated within a factor better than 1 -1 e unless NP⊆ DTIME(nO(log log n)), even if there exists a polynomial-time exact algorithm for the single-bin problem.We extend the (1 - 1 e)-approximation algorithm to a nonseparable assignment problem with applications in maximizing revenue for budget-constrained combinatorial auctions and the AdWords assignment problem. We generalize the local search algorithm to yield a 1 2-e approximation algorithm for the k-median problem with hard capacities. 
Finally, we study naturally defined game-theoretic versions of these problems, and show that they have price of anarchy of 2. We also prove the existence of cycles of best response moves, and exponentially long best-response paths to (pure or sink) equilibria.",
"We deal with the competitive analysis of algorithms for managing data in a distributed environment. We deal with the file allocation problem, where copies of a file may be be stored in the local storage of some subsets of processors. Copies may be replicated and discarded over time so as to optimize communication costs, but multiple copies must be kept consistent and at least one copy must be stored somewhere in the network at all times. We deal with competitive algorithms for minimizing communication costs, over arbitrary sequences of reads and writes, and arbitrary network topologies. We define the constrained file allocation problem to be the solution of many individual file allocation problems simultaneously, subject to the constraints of local memory size. We give competitive algorithms for this problem on the uniform network topology. We then introduce distributed competitive algorithms for on-line data tracking (a generalization of mobile user tracking) to transform our competitive data management algorithms into distributed algorithms themselves."
]
} |
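Several of the constant-factor guarantees cited above (the 1-1/e bounds, pipage rounding, greedy within factor 2) build on the classic greedy pattern for monotone submodular maximization. A minimal illustration on max k-coverage, where greedy achieves the (1 - 1/e) guarantee, is sketched below; the toy sets are illustrative, not the papers' caching objective:

```python
def greedy_max_coverage(sets, k):
    """Greedy picks the set with the largest marginal coverage k times;
    the classic (1 - 1/e)-approximation for max k-coverage."""
    covered, chosen = set(), []
    for _ in range(k):
        # marginal gain of each unchosen set against current coverage
        gains = [len(s - covered) if i not in chosen else -1
                 for i, s in enumerate(sets)]
        best = max(range(len(sets)), key=lambda i: gains[i])
        if gains[best] <= 0:            # nothing left to gain
            break
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 2}]
chosen, covered = greedy_max_coverage(sets, 2)
```

With these toy sets the greedy first takes {1, 2, 3} and then {4, 5, 6}, covering all six elements; the caching papers apply the same marginal-gain principle to far richer objectives and constraints.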
1604.03175 | 2952875946 | We study the problem of optimal content placement over a network of caches, a problem naturally arising in several networking applications, including ICNs, CDNs, and P2P systems. Given a demand of content request rates and paths followed, we wish to determine the content placement that maximizes the expected caching gain, i.e., the reduction of routing costs due to intermediate caching. The offline version of this problem is NP-hard and, in general, the demand and topology may be a priori unknown. Hence, a distributed, adaptive, constant approximation content placement algorithm is desired. We show that path replication, a simple algorithm frequently encountered in literature, can be arbitrarily suboptimal when combined with traditional eviction policies, like LRU, LFU, or FIFO. We propose a distributed, adaptive algorithm that performs stochastic gradient ascent on a concave relaxation of the expected caching gain, and constructs a probabilistic content placement within a 1-1/e factor from the optimal, in expectation. Motivated by our analysis, we also propose a novel greedy eviction policy to be used with path replication, and show through numerical evaluations that both algorithms significantly outperform path replication with traditional eviction policies over a broad array of network topologies. | Adaptive replication schemes exist for asymptotically large, single-hop CDNs @cite_4 @cite_16 @cite_25, but these works do not explicitly model a graph structure. The dynamics of the greedy path replication algorithm we propose resemble the greedy algorithm used to make caching decisions in @cite_25, though our objective is different, and we cannot rely on a mean-field approximation in our argument.
The dynamics are also similar (but not identical) to those of the "continuous greedy" algorithm used for submodular maximization @cite_34 and the Frank-Wolfe algorithm @cite_30; these can potentially serve as a basis for formally establishing its convergence, which we leave as future work. | {
"cite_N": [
"@cite_30",
"@cite_4",
"@cite_34",
"@cite_16",
"@cite_25"
],
"mid": [
"2621075174",
"1675432311",
"1989453388",
"",
"2164237594"
],
"abstract": [
"The problem of maximizing a concave function f(x) in a simplex S can be solved approximately by a simple greedy algorithm. For given k, the algorithm can find a point x(k) on a k-dimensional face of S, such that f(x(k)) ≥ f(x*) - O(1 k). Here f(x*) is the maximum value of f in S. This algorithm and analysis were known before, and related to problems of statistics and machine learning, such as boosting, regression, and density mixture estimation. In other work, coming from computational geometry, the existence of e-coresets was shown for the minimum enclosing ball problem, by means of a simple greedy algorithm. Similar greedy algorithms, that are special cases of the Frank-Wolfe algorithm, were described for other enclosure problems. Here these results are tied together, stronger convergence results are reviewed, and several coreset bounds are generalized or strengthened.",
"We address the problem of content replication in large distributed content delivery networks, composed of a data center assisted by many small servers with limited capabilities and located at the edge of the network. We aim at optimizing the placement of contents on the servers to offload the data center as much as possible. We model the sub-system constituted by the small servers as a loss network, each loss corresponding to a request to the data center. Based on large system storage behavior, we obtain an asymptotic formula for the optimal replication of contents and propose adaptive schemes to attain it by reacting to losses, as well as faster algorithms which can react before losses occur. We show through simulations that our adaptive schemes outperform significantly standard replication strategies both in terms of loss rates and adaptation speed.",
"In the Submodular Welfare Problem, m items are to be distributed among n players with utility functions wi: 2[m] → R+. The utility functions are assumed to be monotone and submodular. Assuming that player i receives a set of items Si, we wish to maximize the total utility ∑i=1n wi(Si). In this paper, we work in the value oracle model where the only access to the utility functions is through a black box returning wi(S) for a given set S. Submodular Welfare is in fact a special case of the more general problem of submodular maximization subject to a matroid constraint: max f(S): S ∈ I , where f is monotone submodular and I is the collection of independent sets in some matroid. For both problems, a greedy algorithm is known to yield a 1 2-approximation [21, 16]. In special cases where the matroid is uniform (I = S: |S| ≤ k) [20] or the submodular function is of a special type [4, 2], a (1-1 e)-approximation has been achieved and this is optimal for these problems in the value oracle model [22, 6, 15]. A (1-1 e)-approximation for the general Submodular Welfare Problem has been known only in a stronger demand oracle model [4], where in fact 1-1 e can be improved [9]. In this paper, we develop a randomized continuous greedy algorithm which achieves a (1-1 e)-approximation for the Submodular Welfare Problem in the value oracle model. We also show that the special case of n equal players is approximation resistant, in the sense that the optimal (1-1 e)-approximation is achieved by a uniformly random solution. Using the pipage rounding technique [1, 2], we obtain a (1-1 e)-approximation for submodular maximization subject to any matroid constraint. The continuous greedy algorithm has a potential of wider applicability, which we demonstrate on the examples of the Generalized Assignment Problem and the AdWords Assignment Problem.",
"",
"Sharing content over a mobile network through opportunistic contacts has recently received considerable attention. In proposed scenarios, users store content they download in a local cache and share it with other users they meet, e.g., via Bluetooth or WiFi. The storage capacity of mobile devices is typically limited; therefore, identifying which content a user should store in her cache is a fundamental problem in the operation of any such content distribution system. In this work, we propose Psephos, a novel mechanism for determining the caching policy of each mobile user. Psephos is fully distributed: users compute their own policies individually, in the absence of a central authority. Moreover, it is designed for a heterogeneous environment, in which demand for content, access to resources, and mobility characteristics may vary across different users. Most importantly, the caching policies computed by our mechanism are optimal: we rigorously show that Psephos maximizes the system's social welfare. Our results are derived formally using techniques from stochastic approximation and convex optimization; to the best of our knowledge, our work is the first to address caching with heterogeneity in a fully distributed manner."
]
} |
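The Frank-Wolfe / continuous-greedy iteration mentioned above repeatedly solves a linear subproblem over the feasible set and moves a diminishing step toward its optimizer. A minimal sketch over the probability simplex follows; the toy concave objective and step rule are illustrative, not the caching-gain relaxation from the paper:

```python
def frank_wolfe_simplex(grad, dim, iters=20000):
    """Frank-Wolfe ascent for a concave objective over the probability simplex.
    The linear subproblem max_{s in simplex} <grad(x), s> is solved by the
    vertex e_i with i = argmax_j grad(x)_j."""
    x = [1.0 / dim] * dim                 # start at the barycenter
    for t in range(iters):
        g = grad(x)
        i = max(range(dim), key=lambda j: g[j])   # best simplex vertex
        gamma = 2.0 / (t + 2)                     # standard diminishing step
        x = [(1.0 - gamma) * xj for xj in x]      # convex combination keeps
        x[i] += gamma                             # x inside the simplex
    return x

# Toy concave objective f(x) = -||x - target||^2, maximized at target,
# which lies inside the simplex
target = [0.3, 0.7]
x_star = frank_wolfe_simplex(
    lambda x: [-2.0 * (x[j] - target[j]) for j in range(2)], 2)
```

Each iterate is a convex combination of simplex vertices, so feasibility is maintained without projections; with the O(1/t) step rule the iterate approaches the interior optimum (0.3, 0.7).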
1604.03175 | 2952875946 | We study the problem of optimal content placement over a network of caches, a problem naturally arising in several networking applications, including ICNs, CDNs, and P2P systems. Given a demand of content request rates and paths followed, we wish to determine the content placement that maximizes the expected caching gain, i.e., the reduction of routing costs due to intermediate caching. The offline version of this problem is NP-hard and, in general, the demand and topology may be a priori unknown. Hence, a distributed, adaptive, constant approximation content placement algorithm is desired. We show that path replication, a simple algorithm frequently encountered in literature, can be arbitrarily suboptimal when combined with traditional eviction policies, like LRU, LFU, or FIFO. We propose a distributed, adaptive algorithm that performs stochastic gradient ascent on a concave relaxation of the expected caching gain, and constructs a probabilistic content placement within a 1-1/e factor from the optimal, in expectation. Motivated by our analysis, we also propose a novel greedy eviction policy to be used with path replication, and show through numerical evaluations that both algorithms significantly outperform path replication with traditional eviction policies over a broad array of network topologies. | The path replication eviction policy we propose also relates to greedy maximization techniques used in throughput-optimal backpressure algorithms---see, e.g., Stolyar @cite_21 and, more recently, @cite_26, for an application to throughput-optimal caching in ICN networks. We minimize routing costs and ignore throughput issues, as we do not model congestion. Investigating how to combine these two research directions, capitalizing on commonalities between these greedy algorithms, is an interesting open problem. | {
"cite_N": [
"@cite_21",
"@cite_26"
],
"mid": [
"2163634514",
"2155611422"
],
"abstract": [
"We study a model of controlled queueing network, which operates and makes control decisions in discrete time. An underlying random network mode determines the set of available controls in each time slot. Each control decision \"produces\" a certain vector of \"commodities\"; it also has associated \"traditional\" queueing control effect, i.e., it determines traffic (customer) arrival rates, service rates at the nodes, and random routing of processed customers among the nodes. The problem is to find a dynamic control strategy which maximizes a concave utility function H(X), where X is the average value of commodity vector, subject to the constraint that network queues remain stable. We introduce a dynamic control algorithm, which we call Greedy Primal-Dual (GPD) algorithm, and prove its asymptotic optimality. We show that our network model and GPD algorithm accommodate a wide range of applications. As one example, we consider the problem of congestion control of networks where both traffic sources and network processing nodes may be randomly time-varying and interdependent. We also discuss a variety of resource allocation problems in wireless networks, which in particular involve average power consumption constraints and or optimization, as well as traffic rate constraints.",
"Emerging information-centric networking architectures seek to optimally utilize both bandwidth and storage for efficient content distribution. This highlights the need for joint design of traffic engineering and caching strategies, in order to optimize network performance in view of both current traffic loads and future traffic demands. We present a systematic framework for joint dynamic interest request forwarding and dynamic cache placement and eviction, within the context of the Named Data Networking (NDN) architecture. The framework employs a virtual control plane which operates on the user demand rate for data objects in the network, and an actual plane which handles Interest Packets and Data Packets. We develop distributed algorithms within the virtual plane to achieve network load balancing through dynamic forwarding and caching, thereby maximizing the user demand rate that the NDN network can satisfy. Numerical experiments within a number of network settings demonstrate the superior performance of the resulting algorithms for the actual plane in terms of low user delay and high rate of cache hits."
]
} |
1604.03526 | 2340164423 | We propose an online spatiotemporal articulation model estimation framework that estimates both articulated structure as well as a temporal prediction model solely using passive observations. The resulting model can predict future motions of an articulated object with high confidence because of the spatial and temporal structure. We demonstrate the effectiveness of the predictive model by incorporating it within a standard simultaneous localization and mapping (SLAM) pipeline for mapping and robot localization in previously unexplored dynamic environments. Our method is able to localize the robot and map a dynamic scene by explaining the observed motion in the world. We demonstrate the effectiveness of the proposed framework for both simulated and real-world dynamic environments. | Articulation structure estimation methods attempt to recover the linking structure, or kinematic chain, of rigid bodies, essentially discovering the articulated joints that constrain the motion of rigid bodies to a subspace of @math . The early approaches to address this problem extended multibody structure from motion @cite_17 ideas to understand articulation by clustering feature trajectories into individual rigid bodies @cite_6 . With the introduction of the commercial depth camera, the feature trajectories could be directly represented in @math , thus avoiding the need to estimate shape @cite_20 @cite_12 . However, these methods implicitly assume that a large number of feature trajectories is available, whereas our model can estimate the articulation structure from a single trajectory. | {
"cite_N": [
"@cite_20",
"@cite_12",
"@cite_6",
"@cite_17"
],
"mid": [
"1548410568",
"2159461149",
"2168224686",
"2118154608"
],
"abstract": [
"",
"We present an interactive perceptual skill for segmenting, tracking, and modeling the kinematic structure of 3D articulated objects. This skill is a prerequisite for general manipulation in unstructured environments. Robot-environment interactions are used to move an unknown object, creating a perceptual signal that reveals the kinematic properties of the object. The resulting perceptual information can then inform and facilitate further manipulation. The algorithm is computationally efficient, handles partial occlusions, and depends on little object motion; it only requires sufficient texture for visual feature tracking. We conducted experiments with everyday objects on a robotic manipulation platform equipped with an RGB-D sensor. The results demonstrate the robustness of the proposed method to lighting conditions, object appearance, size, structure, and configuration.",
"We investigate the problem of learning the structure of an articulated object, i.e. its kinematic chain, from feature trajectories under affine projections. We demonstrate this possibility by proposing an algorithm which first segments the trajectories by local sampling and spectral clustering, then builds the kinematic chain as a minimum spanning tree of a graph constructed from the segmented motion subspaces. We test our method in challenging data sets and demonstrate the ability to automatically build the kinematic chain of an articulated object from feature trajectories. The algorithm also works when there are multiple articulated objects in the scene. Furthermore, we take into account non-rigid articulated parts that exist in human motions. We believe this advance will have impact on articulated object tracking and dynamical structure from motion.",
"The structure-from-motion problem has been extensively studied in the field of computer vision. Yet, the bulk of the existing work assumes that the scene contains only a single moving object. The more realistic case where an unknown number of objects move in the scene has received little attention, especially for its theoretical treatment. In this paper we present a new method for separating and recovering the motion and shape of multiple independently moving objects in a sequence of images. The method does not require prior knowledge of the number of objects, nor is dependent on any grouping of features into an object at the image level. For this purpose, we introduce a mathematical construct of object shapes, called the shape interaction matrix, which is invariant to both the object motions and the selection of coordinate systems. This invariant structure is computable solely from the observed trajectories of image features without grouping them into individual objects. Once the matrix is computed, it allows for segmenting features into objects by the process of transforming it into a canonical form, as well as recovering the shape and motion of each object. The theory works under a broad set of projection models (scaled orthography, paraperspective and affine) but they must be linear, so it excludes projective “cameras”."
]
} |
1604.03526 | 2340164423 | We propose an online spatiotemporal articulation model estimation framework that estimates both articulated structure as well as a temporal prediction model solely using passive observations. The resulting model can predict future motions of an articulated object with high confidence because of the spatial and temporal structure. We demonstrate the effectiveness of the predictive model by incorporating it within a standard simultaneous localization and mapping (SLAM) pipeline for mapping and robot localization in previously unexplored dynamic environments. Our method is able to localize the robot and map a dynamic scene by explaining the observed motion in the world. We demonstrate the effectiveness of the proposed framework for both simulated and real-world dynamic environments. | These prior works in estimating articulation structure have mostly relied on collecting data from demonstrations and performing articulation estimation offline, e.g., @cite_32 @cite_1 . In contrast to these state-of-the-art methods, we do online articulation estimation. Online estimation not only enables evolving beliefs with more observations but also allows for inclusion in online tracking and mapping algorithms. The closest work to ours is @cite_23 who propose a framework for online estimation; however, their method has no explicit probabilistic measure for model confidence to select an articulation model. Furthermore, the state-of-the-art in articulation estimation does not model the temporal evolution of motion. | {
"cite_N": [
"@cite_1",
"@cite_32",
"@cite_23"
],
"mid": [
"",
"1517508454",
"2026373015"
],
"abstract": [
"",
"Robots operating in domestic environments generally need to interact with articulated objects, such as doors, cabinets, dishwashers or fridges. In this work, we present a novel, probabilistic framework for modeling articulated objects as kinematic graphs. Vertices in this graph correspond to object parts, while edges between them model their kinematic relationship. In particular, we present a set of parametric and non-parametric edge models and how they can robustly be estimated from noisy pose observations. We furthermore describe how to estimate the kinematic structure and how to use the learned kinematic models for pose prediction and for robotic manipulation tasks. We finally present how the learned models can be generalized to new and previously unseen objects. In various experiments using real robots with different camera systems as well as in simulation, we show that our approach is valid, accurate and efficient. Further, we demonstrate that our approach has a broad set of applications, in particular for the emerging fields of mobile manipulation and service robotics.",
"To successfully manipulate in unknown environments, a robot must be able to perceive degrees of freedom of objects in its environment. Based on the resulting kinematic model and joint configurations, the robot is able to select and adapt actions, recognize their successful completion and detect failure. We present an RGB-D-based online algorithm for the interactive perception of articulated objects. The algorithm decomposes the perception problem into three interconnected levels of recursive estimation. The estimation problems at each level are much simpler than the original problem and their robustness is improved by level-specific priors that help reject noise in the measurements. These three estimators mutually inform each other to further improve the convergence properties of the three estimation solutions. We demonstrate that the resulting algorithm is robust, accurate, and versatile in realworld experiments. We also show how the perceptual skill can be used online to control the robot’s behavior in real-world manipulation tasks."
]
} |
1604.03526 | 2340164423 | We propose an online spatiotemporal articulation model estimation framework that estimates both articulated structure as well as a temporal prediction model solely using passive observations. The resulting model can predict future motions of an articulated object with high confidence because of the spatial and temporal structure. We demonstrate the effectiveness of the predictive model by incorporating it within a standard simultaneous localization and mapping (SLAM) pipeline for mapping and robot localization in previously unexplored dynamic environments. Our method is able to localize the robot and map a dynamic scene by explaining the observed motion in the world. We demonstrate the effectiveness of the proposed framework for both simulated and real-world dynamic environments. | Our model directly addresses this lack of temporal modeling (for example, acceleration/deceleration of a door) in articulation estimation. We propose an explicit temporal model for each articulation type, which is necessary to make good long-term future predictions. Temporal modeling of arbitrary order allows us to: i) track new parts/objects that enter/exit the scene @cite_23 ; ii) model the entire scene and as a result explore dependencies between neighboring objects; and, iii) assimilate articulated object motion in SLAM @cite_13 . Apart from the applications presented in this paper, temporal models associated with articulated structure will help in robot-environment interaction, specifically obtaining dynamic characteristics of the objects in the environment @cite_32 . | {
"cite_N": [
"@cite_13",
"@cite_32",
"@cite_23"
],
"mid": [
"2146881125",
"1517508454",
"2026373015"
],
"abstract": [
"This paper describes the simultaneous localization and mapping (SLAM) problem and the essential methods for solving the SLAM problem and summarizes key implementations and demonstrations of the method. While there are still many practical issues to overcome, especially in more complex outdoor environments, the general SLAM method is now a well understood and established part of robotics. Another part of the tutorial summarized more recent works in addressing some of the remaining issues in SLAM, including computation, feature representation, and data association",
"Robots operating in domestic environments generally need to interact with articulated objects, such as doors, cabinets, dishwashers or fridges. In this work, we present a novel, probabilistic framework for modeling articulated objects as kinematic graphs. Vertices in this graph correspond to object parts, while edges between them model their kinematic relationship. In particular, we present a set of parametric and non-parametric edge models and how they can robustly be estimated from noisy pose observations. We furthermore describe how to estimate the kinematic structure and how to use the learned kinematic models for pose prediction and for robotic manipulation tasks. We finally present how the learned models can be generalized to new and previously unseen objects. In various experiments using real robots with different camera systems as well as in simulation, we show that our approach is valid, accurate and efficient. Further, we demonstrate that our approach has a broad set of applications, in particular for the emerging fields of mobile manipulation and service robotics.",
"To successfully manipulate in unknown environments, a robot must be able to perceive degrees of freedom of objects in its environment. Based on the resulting kinematic model and joint configurations, the robot is able to select and adapt actions, recognize their successful completion and detect failure. We present an RGB-D-based online algorithm for the interactive perception of articulated objects. The algorithm decomposes the perception problem into three interconnected levels of recursive estimation. The estimation problems at each level are much simpler than the original problem and their robustness is improved by level-specific priors that help reject noise in the measurements. These three estimators mutually inform each other to further improve the convergence properties of the three estimation solutions. We demonstrate that the resulting algorithm is robust, accurate, and versatile in realworld experiments. We also show how the perceptual skill can be used online to control the robot’s behavior in real-world manipulation tasks."
]
} |
1604.03526 | 2340164423 | We propose an online spatiotemporal articulation model estimation framework that estimates both articulated structure as well as a temporal prediction model solely using passive observations. The resulting model can predict future motions of an articulated object with high confidence because of the spatial and temporal structure. We demonstrate the effectiveness of the predictive model by incorporating it within a standard simultaneous localization and mapping (SLAM) pipeline for mapping and robot localization in previously unexplored dynamic environments. Our method is able to localize the robot and map a dynamic scene by explaining the observed motion in the world. We demonstrate the effectiveness of the proposed framework for both simulated and real-world dynamic environments. | Stachniss and Burgard @cite_33 considered a graphical model similar to ours to update the map of a dynamic environment by using local patch maps and modeling transitions between patch maps. However, such an approach is only suitable for quasi-static environments, as the number of maps required would increase exponentially with the number and pose characterizations of dynamic objects in the world. A recent work has extended dense tracking and mapping to dynamic scenes by estimating a dense warp field @cite_19 . However, that approach disregards the object-level rigid nature of the scene, which limits its applicability to topological changes such as a door going from open to closed. In the context of manipulating doors, @cite_29 used a model of the door and a prior low-resolution static map of the environment to track objects for manipulation tasks. In contrast, our approach does not need any prior maps of the environment or articulated objects, and does not suffer from an exponential increase in the number of maps for a dynamic environment. | {
"cite_N": [
"@cite_19",
"@cite_29",
"@cite_33"
],
"mid": [
"1938204631",
"46519310",
"1540474423"
],
"abstract": [
"We present the first dense SLAM system capable of reconstructing non-rigidly deforming scenes in real-time, by fusing together RGBD scans captured from commodity sensors. Our DynamicFusion approach reconstructs scene geometry whilst simultaneously estimating a dense volumetric 6D motion field that warps the estimated geometry into a live frame. Like KinectFusion, our system produces increasingly denoised, detailed, and complete reconstructions as more measurements are fused, and displays the updated model in real time. Because we do not require a template or other prior scene model, the approach is applicable to a wide range of moving objects and scenes.",
"In recent years, probabilistic approaches have found many successful applications to mobile robot localization, and to object state estimation for manipulation. In this paper, we propose a unified approach to these two problems that dynamically models the objects to be manipulated and localizes the robot at the same time. Our approach applies in the common setting where only a low-resolution (10cm) grid-map of a building is available, but we also have a high-resolution (0.1cm) model of the object to be manipulated. Our method is based on defining a unifying probabilistic model over these two representations. The resulting algorithm works in real-time, and estimates the position of objects with sufficient precision for manipulation tasks. We apply our approach to the task of navigating from one office to another (including manipulating doors). Our approach, successfully tested on multiple doors, allows the robot to navigate through a hallway to an office door, grasp and turn the door handle, and continuously manipulate the door as it moves into the office.",
"Whenever mobile robots act in the real world, they need to be able to deal with non-static objects. In the context of mapping, a common technique to deal with dynamic objects is to filter out the spurious measurements corresponding to such objects. In this paper, we present a novel approach to estimate typical configurations of dynamic areas in the environment of a mobile robot. Our approach clusters local grid maps to identify the possible configurations. We furthermore describe how these clusters can be utilized within a Rao-Blackwellized particle filter to localize a mobile robot in a non-static environment. In practical experiments carried out with a mobile robot in a typical office environment, we demonstrate the advantages of our approach compared to alternative techniques for mapping and localization in dynamic environments."
]
} |
1604.03266 | 2601273947 | We consider online learning of ensembles of portfolio selection algorithms and aim to regularize risk by encouraging diversification with respect to a predefined risk-driven grouping of stocks. Our procedure uses online convex optimization to control capital allocation to underlying investment algorithms while encouraging non-sparsity over the given grouping. We prove a logarithmic regret for this procedure with respect to the best-in-hindsight ensemble. We applied the procedure with known mean-reversion portfolio selection algorithms using the standard GICS industry sector grouping. Experimental results showed an impressive percentage increase of risk-adjusted return (Sharpe ratio). | Although our procedure relies on the same @math group norm to encourage diversification, our algorithm differs from that of Johnson and Banerjee in two ways: first, rather than generating portfolios on the stocks themselves, we generate a weighted ensemble over investment algorithms. Second, our learning algorithm exploits the exp-concavity of our loss function, allowing the use of online Newton steps as in @cite_16 to guarantee @math regret w.r.t. the best fixed ensemble in hindsight. A direct application of our procedure over the stocks themselves yields exponential improvement in regret relative to the result of JohnsonB2015 . Also worth mentioning is that our implementation utilizes a fixed grouping of the stocks given by the standard GICS industry taxonomy, whereas JohnsonB2015 employ a correlation-based heuristic to group the stocks on the fly. Our numerical examples in Section 3 include a direct comparison with the method of JohnsonB2015 , showing an overwhelming advantage to our method (see, e.g., Figure ). | {
"cite_N": [
"@cite_16"
],
"mid": [
"2068643490"
],
"abstract": [
"We experimentally study on-line investment algorithms first proposed by Agarwal and Hazan and extended by which achieve almost the same wealth as the best constant-rebalanced portfolio determined in hindsight. These algorithms are the first to combine optimal logarithmic regret bounds with efficient deterministic computability. They are based on the Newton method for offline optimization which, unlike previous approaches, exploits second order information. After analyzing the algorithm using the potential function introduced by Agarwal and Hazan, we present extensive experiments on actual financial data. These experiments confirm the theoretical advantage of our algorithms, which yield higher returns and run considerably faster than previous algorithms with optimal regret. Additionally, we perform financial analysis using mean-variance calculations and the Sharpe ratio."
]
} |
1604.03454 | 2338689841 | Detection of non-overlapping and overlapping communities are essentially the same problem. However, current algorithms focus either on finding overlapping or non-overlapping communities. We present a generalized framework that can identify both non-overlapping and overlapping communities, without any prior input about the network or its community distribution. To do so, we introduce a vertex-based metric, GenPerm , that quantifies by how much a vertex belongs to each of its constituent communities. Our community detection algorithm is based on maximizing the GenPerm over all the vertices in the network. We demonstrate, through experiments over synthetic and real-world networks, that GenPerm is more effective than other metrics in evaluating community structure. Further, we show that due to its vertex-centric property, GenPerm can be used to unfold several inferences beyond community detection, such as core-periphery analysis and message spreading. Our algorithm for maximizing GenPerm outperforms six state-of-the-art algorithms in accurately predicting the ground-truth labels. Finally, we discuss the problem of resolution limit in overlapping communities and demonstrate that maximizing GenPerm can mitigate this problem. | When the true community structure is not known, a quality function such as modularity @cite_15 is used to evaluate the performance of the clustering algorithms. @cite_41 introduced a variant of modularity for overlapping communities, which was later modified by Lázár @cite_48 . @cite_11 proposed another variant on the basis that a maximal clique only belongs to one community. @cite_46 extended the definition to directed graphs with overlapping communities. @cite_47 proposed an extension of modularity density for overlapping community structure. Several other extensions of modularity @cite_45 @cite_32 have also been proposed.
It is worth mentioning that the formulation of GenPerm is an extension of our earlier work, where we proposed permanence, a new scoring metric for non-overlapping communities @cite_19 . | {
"cite_N": [
"@cite_47",
"@cite_41",
"@cite_48",
"@cite_46",
"@cite_32",
"@cite_19",
"@cite_45",
"@cite_15",
"@cite_11"
],
"mid": [
"2040858714",
"2091202730",
"2120698362",
"2000560314",
"",
"1968656203",
"",
"2095293504",
"2033507223"
],
"abstract": [
"Modularity is widely used to effectively measure the strength of the disjoint community structure found by community detection algorithms. Although several overlapping extensions of modularity were proposed to measure the quality of overlapping community structure, there is lack of systematic comparison of different extensions. To fill this gap, we overview overlapping extensions of modularity to select the best. In addition, we extend the Modularity Density metric to enable its usage for overlapping communities. The experimental results on four real networks using overlapping extensions of modularity, overlapping modularity density, and six other community quality metrics show that the best results are obtained when the product of the belonging coefficients of two nodes is used as the belonging function. Moreover, our experiments indicate that overlapping modularity density is a better measure of the quality of overlapping community structure than other metrics considered.",
"Clustering and community structure is crucial for many network systems and the related dynamic processes. It has been shown that communities are usually overlapping and hierarchical. However, previous methods investigate these two properties of community structure separately. This paper proposes an algorithm (EAGLE) to detect both the overlapping and hierarchical properties of complex community structure together. This algorithm deals with the set of maximal cliques and adopts an agglomerative framework. The quality function of modularity is extended to evaluate the goodness of a cover. The examples of application to real world networks give excellent results.",
"In this paper we introduce a non-fuzzy measure which has been designed to rank the partitions of a network's nodes into overlapping communities. Such a measure can be useful for both quantifying clusters detected by various methods and during finding the overlapping community structure by optimization methods. The theoretical problem referring to the separation of overlapping modules is discussed, and an example for possible applications is given as well.",
"Complex network topologies present interesting and surprising properties, such as community structures, which can be exploited to optimize communication, to find new efficient and context-aware routing algorithms or simply to understand the dynamics and meaning of relationships among nodes. Complex networks are gaining more and more importance as a reference model and are a powerful interpretation tool for many different kinds of natural, biological and social networks, where directed relationships and contextual belonging of nodes to many different communities is a matter of fact. This paper starts from the definition of a modularity function, given by Newman to evaluate the goodness of network community decompositions, and extends it to the more general case of directed graphs with overlapping community structures. Interesting properties of the proposed extension are discussed, a method for finding overlapping communities is proposed and results of its application to benchmark case-studies are reported. We also propose a new data set which could be used as a reference benchmark for overlapping community structures identification.",
"",
"Despite the prevalence of community detection algorithms, relatively less work has been done on understanding whether a network is indeed modular and how resilient the community structure is under perturbations. To address this issue, we propose a new vertex-based metric called \"permanence\", that can quantitatively give an estimate of the community-like structure of the network. The central idea of permanence is based on the observation that the strength of membership of a vertex to a community depends upon the following two factors: (i) the distribution of external connectivity of the vertex to individual communities and not the total external connectivity, and (ii) the strength of its internal connectivity and not just the total internal edges. In this paper, we demonstrate that compared to other metrics, permanence provides (i) a more accurate estimate of a derived community structure to the ground-truth community and (ii) is more sensitive to perturbations in the network. As a by-product of this study, we have also developed a community detection algorithm based on maximizing permanence. For a modular network structure, the results of our algorithm match well with ground-truth communities.",
"",
"We propose and study a set of algorithms for discovering community structure in networks-natural divisions of network nodes into densely connected subgroups. Our algorithms all share two definitive features: first, they involve iterative removal of edges from the network to split it into communities, the edges removed being identified using any one of a number of possible \"betweenness\" measures, and second, these measures are, crucially, recalculated after each removal. We also propose a measure for the strength of the community structure found by our algorithms, which gives us an objective metric for choosing the number of communities into which a network should be divided. We demonstrate that our algorithms are highly effective at discovering community structure in both computer-generated and real-world network data, and show how they can be used to shed light on the sometimes dauntingly complex structure of networked systems.",
"It has been shown that the communities of complex networks often overlap with each other. However, there is no effective method to quantify the overlapping community structure. In this paper, we propose a metric to address this problem. Instead of assuming that one node can only belong to one community, our metric assumes that a maximal clique only belongs to one community. In this way, the overlaps between communities are allowed. To identify the overlapping community structure, we construct a maximal clique network from the original network, and prove that the optimization of our metric on the original network is equivalent to the optimization of Newman's modularity on the maximal clique network. Thus the overlapping community structure can be identified through partitioning the maximal clique network using any modularity optimization method. The effectiveness of our metric is demonstrated by extensive tests on both artificial networks and real world networks with a known community structure. The application to the word association network also reproduces excellent results."
]
} |
1604.03454 | 2338689841 | Detection of non-overlapping and overlapping communities are essentially the same problem. However, current algorithms focus either on finding overlapping or non-overlapping communities. We present a generalized framework that can identify both non-overlapping and overlapping communities, without any prior input about the network or its community distribution. To do so, we introduce a vertex-based metric, GenPerm , that quantifies by how much a vertex belongs to each of its constituent communities. Our community detection algorithm is based on maximizing the GenPerm over all the vertices in the network. We demonstrate, through experiments over synthetic and real-world networks, that GenPerm is more effective than other metrics in evaluating community structure. Further, we show that due to its vertex-centric property, GenPerm can be used to unfold several inferences beyond community detection, such as core-periphery analysis and message spreading. Our algorithm for maximizing GenPerm outperforms six state-of-the-art algorithms in accurately predicting the ground-truth labels. Finally, we discuss the problem of resolution limit in overlapping communities and demonstrate that maximizing GenPerm can mitigate this problem. | There has been a class of algorithms for network clustering which allow nodes to belong to more than one community. Palla proposed "CFinder" @cite_28 , the seminal and most popular method based on the clique-percolation technique. However, due to the clique requirement and the sparseness of real networks, the communities discovered by CFinder are usually of low quality @cite_1 . The idea of partitioning links instead of nodes to discover community structure has also been explored @cite_42 . | {
"cite_N": [
"@cite_28",
"@cite_42",
"@cite_1"
],
"mid": [
"2164928285",
"2110620844",
"1974487050"
],
"abstract": [
"A network is a network — be it between words (those associated with ‘bright’ in this case) or protein structures. Many complex systems in nature and society can be described in terms of networks capturing the intricate web of connections among the units they are made of. A key question is how to interpret the global organization of such networks as the coexistence of their structural subunits (communities) associated with more highly interconnected parts. Identifying these a priori unknown building blocks (such as functionally related proteins, industrial sectors and groups of people) is crucial to the understanding of the structural and functional properties of networks. The existing deterministic methods used for large networks find separated communities, whereas most of the actual networks are made of highly overlapping cohesive groups of nodes. Here we introduce an approach to analysing the main statistical features of the interwoven sets of overlapping communities that makes a step towards uncovering the modular structure of complex systems. After defining a set of new characteristic quantities for the statistics of communities, we apply an efficient technique for exploring overlapping communities on a large scale. We find that overlaps are significant, and the distributions we introduce reveal universal features of networks. Our studies of collaboration, word-association and protein interaction graphs show that the web of communities has non-trivial correlations and specific scaling properties.",
"Network theory has become pervasive in all sectors of biology, from biochemical signalling to human societies, but identification of relevant functional communities has been impaired by many nodes belonging to several overlapping groups at once, and by hierarchical structures. These authors offer a radically different viewpoint, focusing on links rather than nodes, which allows them to demonstrate that overlapping communities and network hierarchies are two faces of the same issue.",
"Uncovering the community structure exhibited by real networks is a crucial step towards an understanding of complex systems that goes beyond the local organization of their constituents. Many algorithms have been proposed so far, but none of them has been subjected to strict tests to evaluate their performance. Most of the sporadic tests performed so far involved small networks with known community structure and/or artificial graphs with a simplified structure, which is very uncommon in real systems. Here we test several methods against a recently introduced class of benchmark graphs, with heterogeneous distributions of degree and community size. The methods are also tested against the benchmark by Girvan and Newman and on random graphs. As a result of our analysis, three recent algorithms introduced by Rosvall and Bergstrom, and Ronhovde and Nussinov, respectively, have an excellent performance, with the additional advantage of low computational complexity, which enables one to analyze large systems."
]
} |
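The clique-percolation idea behind CFinder can be illustrated compactly: two k-cliques are adjacent when they share k-1 vertices, and each connected (percolated) set of k-cliques forms one community, so vertices sitting in cliques of several components overlap. A hedged sketch for small graphs (the brute-force k-clique enumeration is O(n^k) and purely illustrative; real implementations enumerate cliques far more cleverly):

```python
from itertools import combinations

def k_clique_communities(edges, k=3):
    """CFinder-style clique percolation: k-cliques are adjacent when they
    share k-1 vertices; each percolation component is one community."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    # brute-force k-clique enumeration (fine for a sketch, not for real use)
    cliques = [frozenset(c) for c in combinations(sorted(adj), k)
               if all(b in adj[a] for a, b in combinations(c, 2))]
    parent = {c: c for c in cliques}
    def find(c):                      # union-find with path halving
        while parent[c] != c:
            parent[c] = parent[parent[c]]
            c = parent[c]
        return c
    for a, b in combinations(cliques, 2):
        if len(a & b) == k - 1:
            parent[find(a)] = find(b)
    comms = {}
    for c in cliques:
        comms.setdefault(find(c), set()).update(c)
    return sorted(comms.values(), key=min)

# Bowtie: two triangles share only vertex 2 (< k-1 vertices), so the
# communities stay separate and vertex 2 belongs to both.
bowtie = k_clique_communities([(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (2, 4)])
# Two triangles sharing an edge percolate into a single community.
merged = k_clique_communities([(0, 1), (1, 2), (0, 2), (1, 3), (2, 3)])
```

The clique requirement is also why CFinder struggles on sparse networks, as noted above: vertices in no k-clique are simply left unassigned.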
1604.03454 | 2338689841 | Detection of non-overlapping and overlapping communities are essentially the same problem. However, current algorithms focus either on finding overlapping or non-overlapping communities. We present a generalized framework that can identify both non-overlapping and overlapping communities, without any prior input about the network or its community distribution. To do so, we introduce a vertex-based metric, GenPerm, that quantifies by how much a vertex belongs to each of its constituent communities. Our community detection algorithm is based on maximizing the GenPerm over all the vertices in the network. We demonstrate, through experiments over synthetic and real-world networks, that GenPerm is more effective than other metrics in evaluating community structure. Further, we show that due to its vertex-centric property, GenPerm can be used to unfold several inferences beyond community detection, such as core-periphery analysis and message spreading. Our algorithm for maximizing GenPerm outperforms six state-of-the-art algorithms in accurately predicting the ground-truth labels. Finally, we discuss the problem of resolution limit in overlapping communities and demonstrate that maximizing GenPerm can mitigate this problem. | On the other hand, a set of algorithms utilized local expansion and optimization to detect overlapping communities. For instance, @cite_21 proposed "RankRemoval" using a local density function. MONC @cite_14 uses the modified fitness function of LFM, which allows a single node to be considered a community by itself. OSLOM @cite_23 tests the statistical significance of a cluster with respect to a global null model (i.e., the random graph generated by the configuration model) during community expansion. @cite_29 proposed selecting a node with maximal node strength based on two quantities: belonging degree and the modified modularity.
EAGLE @cite_41 and GCE @cite_34 use the agglomerative framework to produce overlapping communities. | {
"cite_N": [
"@cite_14",
"@cite_41",
"@cite_29",
"@cite_21",
"@cite_23",
"@cite_34"
],
"mid": [
"2083818595",
"2091202730",
"2008015797",
"189016807",
"1970301364",
"2136576902"
],
"abstract": [
"We propose a new local, deterministic and parameter-free algorithm that detects fuzzy and crisp overlapping communities in a weighted network and simultaneously reveals their hierarchy. Using a local fitness function, the algorithm greedily expands natural communities of seeds until the whole graph is covered. The hierarchy of communities is obtained analytically by calculating resolution levels at which communities grow rather than numerically by testing different resolution levels. This analytic procedure is not only more exact than its numerical alternatives such as LFM and GCE but also much faster. Critical resolution levels can be identified by searching for intervals in which large changes of the resolution do not lead to growth of communities. We tested our algorithm on benchmark graphs and on a network of 492 papers in information science. Combined with a specific post-processing, the algorithm gives much more precise results on LFR benchmarks with high overlap compared to other algorithms and performs very similarly to GCE.",
"Clustering and community structure is crucial for many network systems and the related dynamic processes. It has been shown that communities are usually overlapping and hierarchical. However, previous methods investigate these two properties of community structure separately. This paper proposes an algorithm (EAGLE) to detect both the overlapping and hierarchical properties of complex community structure together. This algorithm deals with the set of maximal cliques and adopts an agglomerative framework. The quality function of modularity is extended to evaluate the goodness of a cover. The examples of application to real world networks give excellent results.",
"Identification of communities is significant in understanding the structures and functions of networks. Since some nodes naturally belong to several communities, the study of overlapping communities has attracted increasing attention recently, and many algorithms have been designed to detect overlapping communities. In this paper, an overlapping communities detecting algorithm is proposed whose main strategies are finding an initial partial community from a node with maximal node strength and adding tight nodes to expand the partial community. Seven real-world complex networks and one synthetic network are used to evaluate the algorithm. Experimental results demonstrate that the algorithm proposed is efficient for detecting overlapping communities in weighted networks.",
"We present a new approach to the problem of finding communities: a community is a subset of actors who induce a locally optimal subgraph with respect to a density function defined on subsets of actors. Two different subsets with significant overlap can both be locally optimal, and in this way we may obtain overlapping communities. We design, implement, and test two novel efficient algorithms, RaRe and IS, which find communities according to our definition. These algorithms are shown to work effectively on both synthetic and real-world graphs, and also are shown to outperform a well-known k-neighborhood heuristic.",
"Community structure is one of the main structural features of networks, revealing both their internal organization and the similarity of their elementary units. Despite the large variety of methods proposed to detect communities in graphs, there is a big need for multi-purpose techniques, able to handle different types of datasets and the subtleties of community structure. In this paper we present OSLOM (Order Statistics Local Optimization Method), the first method capable to detect clusters in networks accounting for edge directions, edge weights, overlapping communities, hierarchies and community dynamics. It is based on the local optimization of a fitness function expressing the statistical significance of clusters with respect to random fluctuations, which is estimated with tools of Extreme and Order Statistics. OSLOM can be used alone or as a refinement procedure of partitions/covers delivered by other techniques. We have also implemented sequential algorithms combining OSLOM with other fast techniques, so that the community structure of very large networks can be uncovered. Our method has a comparable performance as the best existing algorithms on artificial benchmark graphs. Several applications on real networks are shown as well. OSLOM is implemented in a freely available software (http://www.oslom.org), and we believe it will be a valuable tool in the analysis of networks.",
"In complex networks it is common for each node to belong to several communities, implying a highly overlapping community structure. Recent advances in benchmarking indicate that the existing community assignment algorithms that are capable of detecting overlapping communities perform well only when the extent of community overlap is kept to modest levels. To overcome this limitation, we introduce a new community assignment algorithm called Greedy Clique Expansion (GCE). The algorithm identifies distinct cliques as seeds and expands these seeds by greedily optimizing a local fitness function. We perform extensive benchmarks on synthetic data to demonstrate that GCE’s good performance is robust across diverse graph topologies. Significantly, GCE is the only algorithm to perform well on these synthetic graphs, in which every node belongs to multiple communities. Furthermore, when put to the task of identifying functional modules in protein interaction data, and college dorm assignments in Facebook friendship data, we find that GCE performs competitively."
]
} |
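The local-expansion family described above (LFM, GCE, MONC) shares one core loop: grow a seed set by greedily adding the neighbor that most improves a local fitness function, and stop at a local optimum, which is taken to be one natural community; overlaps arise because expansions from different seeds may claim the same nodes. A sketch using the LFM-style fitness k_in / (k_in + k_out)^alpha, where k_in and k_out are the internal and outgoing degree sums of the community (the example graph and deterministic tie-breaking are illustrative assumptions):

```python
def lfm_fitness(comm, adj, alpha=1.0):
    """LFM fitness: k_in / (k_in + k_out)**alpha, where k_in counts
    endpoints of internal edges and k_out edges leaving the community."""
    k_in = k_out = 0
    for u in comm:
        for v in adj[u]:
            if v in comm:
                k_in += 1
            else:
                k_out += 1
    return k_in / (k_in + k_out) ** alpha if k_in + k_out else 0.0

def expand_seed(seed, adj, alpha=1.0):
    """Greedily add the neighbour that most improves fitness; stop when
    no addition helps (a local optimum = one natural community)."""
    comm = {seed}
    while True:
        frontier = {v for u in comm for v in adj[u]} - comm
        best, best_f = None, lfm_fitness(comm, adj, alpha)
        for v in sorted(frontier):        # sorted for deterministic ties
            f = lfm_fitness(comm | {v}, adj, alpha)
            if f > best_f:
                best, best_f = v, f
        if best is None:
            return comm
        comm.add(best)

# Two triangles joined by the bridge edge (2, 3).
edges = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (4, 5), (3, 5)]
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)
left = expand_seed(0, adj)
right = expand_seed(5, adj)
```

The resolution parameter alpha controls community size; the methods above differ mainly in the fitness function, the seeding rule, and the stopping test (e.g., OSLOM replaces the raw fitness with a statistical-significance score).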
1604.03454 | 2338689841 | Detection of non-overlapping and overlapping communities are essentially the same problem. However, current algorithms focus either on finding overlapping or non-overlapping communities. We present a generalized framework that can identify both non-overlapping and overlapping communities, without any prior input about the network or its community distribution. To do so, we introduce a vertex-based metric, GenPerm, that quantifies by how much a vertex belongs to each of its constituent communities. Our community detection algorithm is based on maximizing the GenPerm over all the vertices in the network. We demonstrate, through experiments over synthetic and real-world networks, that GenPerm is more effective than other metrics in evaluating community structure. Further, we show that due to its vertex-centric property, GenPerm can be used to unfold several inferences beyond community detection, such as core-periphery analysis and message spreading. Our algorithm for maximizing GenPerm outperforms six state-of-the-art algorithms in accurately predicting the ground-truth labels. Finally, we discuss the problem of resolution limit in overlapping communities and demonstrate that maximizing GenPerm can mitigate this problem. | A few fuzzy community detection algorithms have been proposed that quantify the strength of association between all pairs of nodes and communities @cite_24. @cite_40 modeled overlapping community detection as a nonlinear constrained optimization problem, which can be solved by simulated annealing methods. Due to their probabilistic nature, mixture models provide an appropriate framework for overlapping community detection @cite_31. MOSES @cite_43 uses a local optimization scheme in which the fitness function is defined based on the observed conditional distribution. @cite_2 employed the affinity propagation clustering algorithm for overlapping community detection.
@cite_9 developed an overlapping community detection algorithm using a seed set expansion approach. Recently, the BIGCLAM algorithm @cite_17 was also built on the nonnegative matrix factorization (NMF) framework. | {
"cite_N": [
"@cite_9",
"@cite_24",
"@cite_43",
"@cite_40",
"@cite_2",
"@cite_31",
"@cite_17"
],
"mid": [
"2066090568",
"2001017535",
"2159984870",
"2142170653",
"1993236420",
"2139818818",
"2139694940"
],
"abstract": [
"Community detection is an important task in network analysis. A community (also referred to as a cluster) is a set of cohesive vertices that have more connections inside the set than outside. In many social and information networks, these communities naturally overlap. For instance, in a social network, each vertex in a graph corresponds to an individual who usually participates in multiple communities. One of the most successful techniques for finding overlapping communities is based on local optimization and expansion of a community metric around a seed set of vertices. In this paper, we propose an efficient overlapping community detection algorithm using a seed set expansion approach. In particular, we develop new seeding strategies for a personalized PageRank scheme that optimizes the conductance community score. The key idea of our algorithm is to find good seeds, and then expand these seed sets using the personalized PageRank clustering procedure. Experimental results show that this seed set expansion approach outperforms other state-of-the-art overlapping community detection methods. We also show that our new seeding strategies are better than previous strategies, and are thus effective in finding good overlapping clusters in a graph.",
"Networks commonly exhibit a community structure, whereby groups of vertices are more densely connected to each other than to other vertices. Often these communities overlap, such that each vertex may occur in more than one community. However, two distinct types of overlapping are possible: crisp (where each vertex belongs fully to each community of which it is a member) and fuzzy (where each vertex belongs to each community to a different extent). We investigate the effects of the fuzziness of community overlap. We find that it has a strong effect on the performance of community detection methods: some algorithms perform better with fuzzy overlapping while others favour crisp overlapping. We also evaluate the performance of some algorithms that recover the belonging coefficients when the overlap is fuzzy. Finally, we investigate whether real networks contain fuzzy or crisp overlapping.",
"As research into community finding in social networks progresses, there is a need for algorithms capable of detecting overlapping community structure. Many algorithms have been proposed in recent years that are capable of assigning each node to more than a single community. The performance of these algorithms tends to degrade when the ground-truth contains a more highly overlapping community structure, with nodes assigned to more than two communities. Such highly overlapping structure is likely to exist in many social networks, such as Facebook friendship networks. In this paper we present a scalable algorithm, MOSES, based on a statistical model of community structure, which is capable of detecting highly overlapping community structure, especially when there is variance in the number of communities each node is in. In evaluation on synthetic data MOSES is found to be superior to existing algorithms, especially at high levels of overlap. We demonstrate MOSES on real social network data by analyzing the networks of friendship links between students of five US universities.",
"We consider the problem of fuzzy community detection in networks, which complements and expands the concept of overlapping community structure. Our approach allows each vertex of the graph to belong to multiple communities at the same time, determined by exact numerical membership degrees, even in the presence of uncertainty in the data being analyzed. We create an algorithm for determining the optimal membership degrees with respect to a given goal function. Based on the membership degrees, we introduce a measure that is able to identify outlier vertices that do not belong to any of the communities, bridge vertices that have significant membership in more than one single community, and regular vertices that fundamentally restrict their interactions within their own community, while also being able to quantify the centrality of a vertex with respect to its dominant community. The method can also be used for prediction in case of uncertainty in the data set analyzed. The number of communities can be given in advance, or determined by the algorithm itself, using a fuzzified variant of the modularity function. The technique is able to discover the fuzzy community structure of different real world networks including, but not limited to, social networks, scientific collaboration networks, and cortical networks, with high confidence.",
"Community structure is one of the important topological characteristics of many complex networks. Detecting communities from networks has been intensively investigated in recent years. In most previous methods for community detection, the overlapping property of communities, which is common in many real-world networks, is ignored. By combining a commute-time kernel based distance measure and fuzzy affinity propagation, we present a new community detection algorithm CDKFAP for overlapping communities. Based on a newly proposed index that measures the fuzziness of nodes, the algorithm can rank and extract overlapping nodes of communities. The applications to computer-generated networks and real-world networks demonstrate the effectiveness of our algorithm.",
"Networks are widely used in the biological, physical, and social sciences as a concise mathematical representation of the topology of systems of interacting components. Understanding the structure of these networks is one of the outstanding challenges in the study of complex systems. Here we describe a general technique for detecting structural features in large-scale network data that works by dividing the nodes of a network into classes such that the members of each class have similar patterns of connection to other nodes. Using the machinery of probabilistic mixture models and the expectation–maximization algorithm, we show that it is possible to detect, without prior knowledge of what we are looking for, a very broad range of types of structure in networks. We give a number of examples demonstrating how the method can be used to shed light on the properties of real-world networks, including social and information networks.",
"Network communities represent basic structures for understanding the organization of real-world networks. A community (also referred to as a module or a cluster) is typically thought of as a group of nodes with more connections amongst its members than between its members and the remainder of the network. Communities in networks also overlap as nodes belong to multiple clusters at once. Due to the difficulties in evaluating the detected communities and the lack of scalable algorithms, the task of overlapping community detection in large networks largely remains an open problem. In this paper we present BIGCLAM (Cluster Affiliation Model for Big Networks), an overlapping community detection method that scales to large networks of millions of nodes and edges. We build on a novel observation that overlaps between communities are densely connected. This is in sharp contrast with present community detection methods which implicitly assume that overlaps between communities are sparsely connected and thus cannot properly extract overlapping communities in networks. In this paper, we develop a model-based community detection algorithm that can detect densely overlapping, hierarchically nested as well as non-overlapping communities in massive networks. We evaluate our algorithm on 6 large social, collaboration and information networks with ground-truth community information. Experiments show state of the art performance both in terms of the quality of detected communities as well as in speed and scalability of our algorithm."
]
} |
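Affiliation-style detectors such as BIGCLAM factor the network into a nonnegative node-by-community matrix F, so each row holds one node's soft community affiliations and a node can load on several communities at once. BIGCLAM itself fits a probabilistic affiliation model by maximum likelihood; the sketch below substitutes a plain least-squares symmetric NMF with damped multiplicative updates purely to illustrate the idea (graph, iteration count, and damping are assumptions for the example):

```python
import numpy as np

def affiliation_factors(A, k, iters=500, seed=0):
    """Fit A ~ F @ F.T with F >= 0 via damped multiplicative updates.
    Row i of F is node i's soft community-affiliation vector."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    F = rng.random((n, k))
    for _ in range(iters):
        # damped update keeps F nonnegative and stabilises convergence
        F *= 0.5 + 0.5 * (A @ F) / (F @ (F.T @ F) + 1e-9)
    return F

# Bowtie graph: vertex 2 bridges two triangles.
A = np.zeros((5, 5))
for u, v in [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (2, 4)]:
    A[u, v] = A[v, u] = 1.0
F = affiliation_factors(A, k=2)
fit_err = np.linalg.norm(A - F @ F.T)
```

Thresholding the entries of F (e.g., keeping F[i, c] above some delta) then yields an overlapping cover; BIGCLAM's observation that overlaps are densely connected is what its likelihood, unlike this least-squares stand-in, is designed to exploit.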
1604.03454 | 2338689841 | Detection of non-overlapping and overlapping communities are essentially the same problem. However, current algorithms focus either on finding overlapping or non-overlapping communities. We present a generalized framework that can identify both non-overlapping and overlapping communities, without any prior input about the network or its community distribution. To do so, we introduce a vertex-based metric, GenPerm, that quantifies by how much a vertex belongs to each of its constituent communities. Our community detection algorithm is based on maximizing the GenPerm over all the vertices in the network. We demonstrate, through experiments over synthetic and real-world networks, that GenPerm is more effective than other metrics in evaluating community structure. Further, we show that due to its vertex-centric property, GenPerm can be used to unfold several inferences beyond community detection, such as core-periphery analysis and message spreading. Our algorithm for maximizing GenPerm outperforms six state-of-the-art algorithms in accurately predicting the ground-truth labels. Finally, we discuss the problem of resolution limit in overlapping communities and demonstrate that maximizing GenPerm can mitigate this problem. | The label propagation algorithm has been extended to overlapping community detection by allowing a node to have multiple labels. In COPRA @cite_18, each node updates its belonging coefficients by averaging the coefficients from all its neighbors at each time step in a synchronous fashion. SLPA @cite_10 @cite_3 spreads labels between nodes according to pairwise interaction rules. A game-theoretic framework was proposed by @cite_36, in which a community is associated with a local Nash equilibrium. | {
"cite_N": [
"@cite_36",
"@cite_18",
"@cite_10",
"@cite_3"
],
"mid": [
"2119222198",
"2037096232",
"1774824195",
""
],
"abstract": [
"In this paper, we introduce a game-theoretic framework to address the community detection problem based on the structures of social networks. We formulate the dynamics of community formation as a strategic game called community formation game: Given an underlying social graph, we assume that each node is a selfish agent who selects communities to join or leave based on her own utility measurement. A community structure can be interpreted as an equilibrium of this game. We formulate the agents' utility by the combination of a gain function and a loss function. We allow each agent to select multiple communities, which naturally captures the concept of \"overlapping communities\". We propose a gain function based on the modularity concept introduced by Newman (Proc Natl Acad Sci 103(23):8577–8582, 2006), and a simple loss function that reflects the intrinsic costs incurred when people join the communities. We conduct extensive experiments under this framework, and our results show that our algorithm is effective in identifying overlapping communities, and is often better than other algorithms we evaluated, especially when many people belong to multiple communities. To the best of our knowledge, this is the first time the community detection problem is addressed by a game-theoretic framework that considers community formation as the result of individual agents' rational behaviors.",
"We propose an algorithm for finding overlapping community structure in very large networks. The algorithm is based on the label propagation technique of Raghavan, Albert and Kumara, but is able to detect communities that overlap. Like the original algorithm, vertices have labels that propagate between neighbouring vertices so that members of a community reach a consensus on their community membership. Our main contribution is to extend the label and propagation step to include information about more than one community: each vertex can now belong to up to v communities, where v is the parameter of the algorithm. Our algorithm can also handle weighted and bipartite networks. Tests on an independently designed set of benchmarks, and on real networks, show the algorithm to be highly effective in recovering overlapping communities. It is also very fast and can process very large and dense networks in a short time.",
"Membership diversity is a characteristic aspect of social networks in which a person may belong to more than one social group. For this reason, discovering overlapping structures is necessary for realistic social analysis. In this paper, we present a fast algorithm, called SLPA, for overlapping community detection in large-scale networks. SLPA spreads labels according to dynamic interaction rules. It can be applied to both unipartite and bipartite networks. It is also able to uncover overlapping nested hierarchy. The time complexity of SLPA scales linearly with the number of edges in the network. Experiments in both synthetic and real-world networks show that SLPA has an excellent performance in identifying both node and community level overlapping structures.",
""
]
} |
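COPRA's update rule is easy to state concretely: at each synchronous step every node averages its neighbors' belonging coefficients, discards labels whose coefficient falls below 1/v (so a node keeps at most v community labels), and renormalizes. A minimal sketch of one such step (the initialization, iteration count, and fallback rule for nodes whose every label drops below the threshold are illustrative assumptions):

```python
def copra_step(labels, adj, v=2):
    """One synchronous COPRA-style update: average neighbours' belonging
    coefficients, drop labels below 1/v, renormalise to sum to 1."""
    new = {}
    for u in adj:
        agg = {}
        for nb in adj[u]:
            for lab, w in labels[nb].items():
                agg[lab] = agg.get(lab, 0.0) + w / len(adj[u])
        kept = {lab: w for lab, w in agg.items() if w >= 1.0 / v}
        if not kept:                       # fall back to the strongest label
            lab = max(agg, key=agg.get)
            kept = {lab: 1.0}
        total = sum(kept.values())
        new[u] = {lab: w / total for lab, w in kept.items()}
    return new

edges = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (2, 4)]
adj = {}
for a, b in edges:
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)
labels = {u: {u: 1.0} for u in adj}        # every node starts with its own label
for _ in range(5):
    labels = copra_step(labels, adj)
```

Since the averaged coefficients sum to one, at most v labels can clear the 1/v threshold, which is exactly how COPRA bounds per-node overlap; SLPA instead keeps a memory of received labels and derives overlaps from label frequencies.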
1604.03454 | 2338689841 | Detection of non-overlapping and overlapping communities are essentially the same problem. However, current algorithms focus either on finding overlapping or non-overlapping communities. We present a generalized framework that can identify both non-overlapping and overlapping communities, without any prior input about the network or its community distribution. To do so, we introduce a vertex-based metric, GenPerm, that quantifies by how much a vertex belongs to each of its constituent communities. Our community detection algorithm is based on maximizing the GenPerm over all the vertices in the network. We demonstrate, through experiments over synthetic and real-world networks, that GenPerm is more effective than other metrics in evaluating community structure. Further, we show that due to its vertex-centric property, GenPerm can be used to unfold several inferences beyond community detection, such as core-periphery analysis and message spreading. Our algorithm for maximizing GenPerm outperforms six state-of-the-art algorithms in accurately predicting the ground-truth labels. Finally, we discuss the problem of resolution limit in overlapping communities and demonstrate that maximizing GenPerm can mitigate this problem. | Besides these, @cite_25 proposed an iterative process that reinforces the network topology and a proximity measure that is interpreted as the probability of a pair of nodes belonging to the same community. István @cite_8 proposed an approach focusing on centrality-based influence functions. Gopalan and Blei @cite_35 proposed an algorithm that naturally interleaves subsampling from the network and updating an estimate of its communities. Recently, @cite_39 proposed a fuzzy-clustering-based non-overlapping community detection technique that can be extended to the overlapping case. However, none of these algorithms can work equally well for both overlapping and non-overlapping cases. | {
"cite_N": [
"@cite_39",
"@cite_35",
"@cite_25",
"@cite_8"
],
"mid": [
"1992190284",
"2066828202",
"2119625792",
"2010371279"
],
"abstract": [
"This paper proposes a novel method based on fuzzy clustering to detect community structure in complex networks. In contrast to previous studies, our method does not focus on a graph model, but rather on a fuzzy relation model, which uses the operations of fuzzy relation to replace a traversal search of the graph for identifying community structure. In our method, we first use a fuzzy relation to describe the relation between vertices as well as the similarity in network topology to determine the membership grade of the relation. Then, we transform this fuzzy relation into a fuzzy equivalence relation. Finally, we map the non-overlapping communities as equivalence classes that satisfy a certain equivalence relation. Because most real-world networks are made of overlapping communities (e.g., in social networks, people may belong to multiple communities), we can consider the equivalence classes above as the skeletons of overlapping communities and extend our method by adding vertices to the skeletons to identify overlapping communities. We evaluated our method on artificial networks with built-in communities and real-world networks with known and unknown communities. The experimental results show that our method works well for detecting these communities and gives a new understanding of network division and community formation.",
"Detecting overlapping communities is essential to analyzing and exploring natural networks such as social networks, biological networks, and citation networks. However, most existing approaches do not scale to the size of networks that we regularly observe in the real world. In this paper, we develop a scalable approach to community detection that discovers overlapping communities in massive real-world networks. Our approach is based on a Bayesian model of networks that allows nodes to participate in multiple communities, and a corresponding algorithm that naturally interleaves subsampling from the network and updating an estimate of its communities. We demonstrate how we can discover the hidden community structure of several real-world networks, including 3.7 million US patents, 575,000 physics articles from the arXiv preprint server, and 875,000 connected Web pages from the Internet. Furthermore, we demonstrate on large simulated networks that our algorithm accurately discovers the true community structure. This paper opens the door to using sophisticated statistical models to analyze massive networks.",
"Graphs or networks can be used to model complex systems. Detecting community structures from large network data is a classic and challenging task. In this paper, we propose a novel community detection algorithm, which utilizes a dynamic process by contradicting the network topology and the topology-based propinquity, where the propinquity is a measure of the probability for a pair of nodes involved in a coherent community structure. Through several rounds of mutual reinforcement between topology and propinquity, the community structures are expected to naturally emerge. The overlapping vertices shared between communities can also be easily identified by an additional simple postprocessing. To achieve better efficiency, the propinquity is incrementally calculated. We implement the algorithm on a vertex-oriented bulk synchronous parallel (BSP) model so that the mining load can be distributed on thousands of machines. We obtained interesting experimental results on several real network data.",
"Background: Network communities help the functional organization and evolution of complex networks. However, the development of a method, which is both fast and accurate, provides modular overlaps and partitions of a heterogeneous network, has proven to be rather difficult. Methodology/Principal Findings: Here we introduce the novel concept of ModuLand, an integrative method family determining overlapping network modules as hills of an influence function-based, centrality-type community landscape, and including several widely used modularization methods as special cases. As various adaptations of the method family, we developed several algorithms, which provide an efficient analysis of weighted and directed networks, and (1) determine pervasively overlapping modules with high resolution; (2) uncover a detailed hierarchical network structure allowing an efficient, zoom-in analysis of large networks; (3) allow the determination of key network nodes and (4) help to predict network dynamics. Conclusions/Significance: The concept opens a wide range of possibilities to develop new approaches and applications including network routing, classification, comparison and prediction."
]
} |
1604.03344 | 2346704290 | We consider training-based channel estimation for a cloud radio access network (CRAN), in which a large amount of remote radio heads (RRHs) and users are randomly scattered over the service area. In this model, assigning orthogonal training sequences to all users will incur a substantial overhead to the overall network, and is even impossible when the number of users is large. Therefore, in this paper, we introduce the notion of local orthogonality, under which the training sequence of a user is orthogonal to those of the other users in its neighborhood. We model the design of locally orthogonal training sequences as a graph coloring problem. Then, based on the theory of random geometric graph, we show that the minimum training length scales in the order of lnK, where K is the number of users covered by a CRAN. This indicates that the proposed training design yields a scalable solution to sustain the need of large-scale cooperation in CRANs. Numerical results show that the proposed scheme outperforms other reference schemes. | It is also worth mentioning that training-based CRAN has been previously studied in the literature @cite_18 @cite_25 . In @cite_18 , the authors proposed a coded pilot design where RRHs can be turned on or off to avoid pilot collisions, which may degrade the system performance. In @cite_25 , in each transmission block, only a portion of users is allowed to transmit pilots for channel training, and the channels of the other users are not updated. This scheme can only accommodate a relatively small number of users to avoid an unaffordable training overhead. Therefore, training design for CRAN deserves further effort, which is the main focus of this work. | {
"cite_N": [
"@cite_18",
"@cite_25"
],
"mid": [
"2963409499",
"2289552691"
],
"abstract": [
"Dense large-scale antenna deployments are one of the most promising technologies for delivering very large throughputs per unit area in the downlink (DL) of cellular networks. We consider such a dense deployment involving a distributed system formed by multi-antenna remote radio head (RRH) units connected to the same fronthaul serving a geographical area. Knowledge of the DL channel between each active user and its nearby RRH antennas is most efficiently obtained at the RRHs via reciprocity based training, that is, by estimating a user's channel using uplink (UL) pilots transmitted by the user, and exploiting the UL DL channel reciprocity. We consider aggressive pilot reuse across an RRH system, whereby a single pilot dimension is simultaneously assigned to multiple active users. We introduce a novel coded pilot approach, which allows each RRH unit to detect pilot collisions, i.e., when more than a single user in its proximity uses the same pilot dimensions. Thanks to the proposed coded pilot approach, pilot contamination can be substantially avoided. As shown, such a strategy can yield densification benefits in the form of increased multiplexing gains per UL pilot dimension with respect to conventional reuse schemes and some recent approaches assigning pseudorandom pilot vectors to the active users.",
"As a promising technique to meet the drastically growing demand for both high throughput and uniform coverage in the fifth generation (5G) wireless networks, massive multiple-input multiple-output (MIMO) systems have attracted significant attention in recent years. However, in massive MIMO systems, as the density of mobile users (MUs) increases, conventional uplink training methods will incur prohibitively high training overhead, which is proportional to the number of MUs. In this paper, we propose a selective uplink training method for massive MIMO systems, where in each channel block only part of the MUs will send uplink pilots for channel training, and the channel states of the remaining MUs are predicted from the estimates in previous blocks, taking advantage of the channels' temporal correlation. We propose an efficient algorithm to dynamically select the MUs to be trained within each block and determine the optimal uplink training length. Simulation results show that the proposed training method provides significant throughput gains compared to the existing methods, while much lower estimation complexity is achieved. It is observed that the throughput gain becomes higher as the MU density increases."
]
} |
1604.03278 | 2342007355 | Decision tree classifiers are a widely used tool in data stream mining. The use of confidence intervals to estimate the gain associated with each split leads to very effective methods, like the popular Hoeffding tree algorithm. From a statistical viewpoint, the analysis of decision tree classifiers in a streaming setting requires knowing when enough new information has been collected to justify splitting a leaf. Although some of the issues in the statistical analysis of Hoeffding trees have been already clarified, a general and rigorous study of confidence intervals for splitting criteria is missing. We fill this gap by deriving accurate confidence intervals to estimate the splitting gain in decision tree learning with respect to three criteria: entropy, Gini index, and a third index proposed by Kearns and Mansour. Our confidence intervals depend in a more detailed way on the tree parameters. We also extend our confidence analysis to a selective sampling setting, in which the decision tree learner adaptively decides which labels to query in the stream. We furnish theoretical guarantee bounding the probability that the classification is non-optimal learning the decision tree via our selective sampling strategy. Experiments on real and synthetic data in a streaming setting show that our trees are indeed more accurate than trees with the same number of leaves generated by other techniques and our active learning module permits to save labeling cost. In addition, comparing our labeling strategy with recent methods, we show that our approach is more robust and consistent respect all the other techniques applied to incremental decision trees. | In this work, we significantly simplify the approach of @cite_6 and extend it to a third splitting criterion. Moreover, we also solve the bias problem, controlling the deviations of @math from the real quantity of interest (i.e., @math rather than @math ). 
Moreover, unlike @cite_29 and @cite_27 , our bounds apply to the standard splitting criteria. Our analysis shows that the confidence intervals associated with the choice of a suboptimal split not only depend on the number of leaf examples @math ---as in bounds ) and )--- but also on other problem dependent parameters, as the dimension of the feature space, the depth of the leaves, and the overall number of examples seen so far by the algorithm. As revealed by the experiments in , this allows a more cautious and accurate splitting in complex problems. Furthermore, we point out that our technique can be easily applied to all extensions of VFDT (see ) yielding similar improvements, as these extensions all share the same Hoeffding-based confidence analysis as the Hoeffding tree algorithm. | {
"cite_N": [
"@cite_27",
"@cite_29",
"@cite_6"
],
"mid": [
"2050806103",
"20971561",
"2124357902"
],
"abstract": [
"Decision trees are the commonly applied tools in the task of data stream classification. The most critical point in decision tree construction algorithm is the choice of the splitting attribute. In majority of algorithms existing in literature the splitting criterion is based on statistical bounds derived for split measure functions. In this paper we propose a totally new kind of splitting criterion. We derive statistical bounds for arguments of split measure function instead of deriving it for split measure function itself. This approach allows us to properly use the Hoeffding's inequality to obtain the required bounds. Based on this theoretical results we propose the Decision Trees based on the Fractions Approximation algorithm (DTFA). The algorithm exhibits satisfactory results of classification accuracy in numerical experiments. It is also compared with other existing in literature methods, demonstrating noticeably better performance.",
"Many stream classification algorithms use the Hoeffding Inequality to identify the best split attribute during tree induction. We show that the prerequisites of the Inequality are violated by these algorithms, and we propose corrective steps. The new stream classification core, correctedVFDT , satisfies the prerequisites of the Hoeffding Inequality and thus provides the expected performance guarantees. The goal of our work is not to improve accuracy, but to guarantee a reliable and interpretable error bound. Nonetheless, we show that our solution achieves lower error rates regarding split attributes and sooner split decisions while maintaining a similar level of accuracy.",
"In mining data streams the most popular tool is the Hoeffding tree algorithm. It uses the Hoeffding's bound to determine the smallest number of examples needed at a node to select a splitting attribute. In the literature the same Hoeffding's bound was used for any evaluation function (heuristic measure), e.g., information gain or Gini index. In this paper, it is shown that the Hoeffding's inequality is not appropriate to solve the underlying problem. We prove two theorems presenting the McDiarmid's bound for both the information gain, used in ID3 algorithm, and for Gini index, used in Classification and Regression Trees (CART) algorithm. The results of the paper guarantee that a decision tree learning system, applied to data streams and based on the McDiarmid's bound, has the property that its output is nearly identical to that of a conventional learner. The results of the paper have a great impact on the state of the art of mining data streams and various developed so far methods and algorithms should be reconsidered."
]
} |
1604.03413 | 2336099476 | We propose a formalism to model database-driven systems, called database manipulating systems (DMS). The actions of a DMS modify the current instance of a relational database by adding new elements into the database, deleting tuples from the relations and adding tuples to the relations. The elements which are modified by an action are chosen by (full) first-order queries. DMS is a highly expressive model and can be thought of as a succinct representation of an infinite state relational transition system, in line with similar models proposed in the literature. We propose monadic second order logic (MSO-FO) to reason about sequences of database instances appearing along a run. Unsurprisingly, the linear-time model checking problem of DMS against MSO-FO is undecidable. Towards decidability, we propose under-approximate model checking of DMS, where the under-approximation parameter is the "bound on recency". In a @math -recency-bounded run, only the most recent @math elements in the current active domain may be modified by an action. More runs can be verified by increasing the bound on recency. Our main result shows that recency-bounded model checking of DMS against MSO-FO is decidable, by a reduction to the satisfiability problem of MSO over nested words. | Both in @cite_11 and @cite_0 , decidability is obtained by constructing a faithful, finite-state abstraction that preserves the properties to be verified. This shows that state-bounded dynamic systems are an interesting class of @cite_12 . On the other hand, state-boundedness is a too restrictive requirement when dealing with systems such as that of Example . In fact, allowing for unboundedly many tuples to be stored in the database is required to deal with , whose behavior is influenced by the presence of certain patterns in the (unbounded) history of the system (cf. the definition of in Example ). 
It is also essential to capture , where the currently executed task may be interrupted by a task with a higher-priority, and so on, resuming the execution of the original task only when the (unbounded) chain of higher-priority tasks is completed. See, e.g., the pre-emptive offer handling adopted in Example . Notably, as argued in Example , such classes of unbounded systems can all be subject to model checking, by choosing a sufficiently large bound for recency. | {
"cite_N": [
"@cite_0",
"@cite_12",
"@cite_11"
],
"mid": [
"",
"2050591418",
"2204935731"
],
"abstract": [
"",
"In this paper, we give a step by step introduction to the theory of well quasi-ordered transition systems. The framework combines two concepts, namely (i) transition systems which are monotonic wrt ...",
"We explore the paradigm of artifact-centric systems from a knowledge-based perspective. We provide a semantics based on interpreted-systems to interpret a first-order temporal-epistemic language with identity in a multi-agent setting. We consider the model checking problem for this language and provide abstraction results. We isolate a natural subclass of artifact-systems for which the model checking problem is decidable. We give an upper bound on the complexity of the model checking problem."
]
} |
1604.02808 | 2342311830 | Recent approaches in depth-based human activity analysis achieved outstanding performance and proved the effectiveness of 3D representation for classification of action classes. Currently available depth-based and RGB+D-based action recognition benchmarks have a number of limitations, including the lack of training samples, distinct class labels, camera views and variety of subjects. In this paper we introduce a large-scale dataset for RGB+D human action recognition with more than 56 thousand video samples and 4 million frames, collected from 40 distinct subjects. Our dataset contains 60 different action classes including daily, mutual, and health-related actions. In addition, we propose a new recurrent neural network structure to model the long-term temporal correlation of the features for each body part, and utilize them for better action classification. Experimental results show the advantages of applying deep learning methods over state-of-the-art hand-crafted features on the suggested cross-subject and cross-view evaluation criteria for our dataset. The introduction of this large scale dataset will enable the community to apply, develop and adapt various data-hungry learning techniques for the task of depth-based and RGB+D-based human activity analysis. | After the release of Microsoft Kinect @cite_41 , several datasets have been collected by different groups to perform research on 3D action recognition and to evaluate different methods in this field. | {
"cite_N": [
"@cite_41"
],
"mid": [
"2056898157"
],
"abstract": [
"Recent advances in 3D depth cameras such as Microsoft Kinect sensors (www.xbox.com en-US kinect) have created many opportunities for multimedia computing. The Kinect sensor lets the computer directly sense the third dimension (depth) of the players and the environment. It also understands when users talk, knows who they are when they walk up to it, and can interpret their movements and translate them into a format that developers can use to build new experiences. While the Kinect sensor incorporates several advanced sensing hardware, this article focuses on the vision aspect of the Kinect sensor and its impact beyond the gaming industry."
]
} |
1604.02808 | 2342311830 | Recent approaches in depth-based human activity analysis achieved outstanding performance and proved the effectiveness of 3D representation for classification of action classes. Currently available depth-based and RGB+D-based action recognition benchmarks have a number of limitations, including the lack of training samples, distinct class labels, camera views and variety of subjects. In this paper we introduce a large-scale dataset for RGB+D human action recognition with more than 56 thousand video samples and 4 million frames, collected from 40 distinct subjects. Our dataset contains 60 different action classes including daily, mutual, and health-related actions. In addition, we propose a new recurrent neural network structure to model the long-term temporal correlation of the features for each body part, and utilize them for better action classification. Experimental results show the advantages of applying deep learning methods over state-of-the-art hand-crafted features on the suggested cross-subject and cross-view evaluation criteria for our dataset. The introduction of this large scale dataset will enable the community to apply, develop and adapt various data-hungry learning techniques for the task of depth-based and RGB+D-based human activity analysis. | MSR-Action3D dataset @cite_0 was one of the earliest ones which opened up the research in depth-based action analysis. The samples of this dataset were limited to depth sequences of gaming actions. Later, the body joint data was added to the dataset. Joint information includes the 3D locations of 20 different body joints in each frame. A decent number of methods have been evaluated on this benchmark and recent ones reported close to saturation accuracies @cite_17 @cite_10 @cite_3 . | {
"cite_N": [
"@cite_0",
"@cite_10",
"@cite_3",
"@cite_17"
],
"mid": [
"2144380653",
"2086663212",
"2217325140",
"2162033023"
],
"abstract": [
"This paper presents a method to recognize human actions from sequences of depth maps. Specifically, we employ an action graph to model explicitly the dynamics of the actions and a bag of 3D points to characterize a set of salient postures that correspond to the nodes in the action graph. In addition, we propose a simple, but effective projection based sampling scheme to sample the bag of 3D points from the depth maps. Experimental results have shown that over 90 recognition accuracy were achieved by sampling only about 1 3D points from the depth maps. Compared to the 2D silhouette based recognition, the recognition errors were halved. In addition, we demonstrate the potential of the bag of points posture model to deal with occlusions through simulation.",
"We propose binary range-sample feature in depth. It is based on τ tests and achieves reasonable invariance with respect to possible change in scale, viewpoint, and background. It is robust to occlusion and data corruption as well. The descriptor works in a high speed thanks to its binary property. Working together with standard learning algorithms, the proposed descriptor achieves state-of-theart results on benchmark datasets in our experiments. Impressively short running time is also yielded.",
"The articulated and complex nature of human actions makes the task of action recognition difficult. One approach to handle this complexity is dividing it to the kinetics of body parts and analyzing the actions based on these partial descriptors. We propose a joint sparse regression based learning method which utilizes the structured sparsity to model each action as a combination of multimodal features from a sparse set of body parts. To represent dynamics and appearance of parts, we employ a heterogeneous set of depth and skeleton based features. The proper structure of multimodal multipart features are formulated into the learning framework via the proposed hierarchical mixed norm, to regularize the structured features of each part and to apply sparsity between them, in favor of a group feature selection. Our experimental results expose the effectiveness of the proposed learning method in which it outperforms other methods in all three tested datasets while saturating one of them by achieving perfect accuracy.",
"Human action recognition based on the depth information provided by commodity depth sensors is an important yet challenging task. The noisy depth maps, different lengths of action sequences, and free styles in performing actions, may cause large intra-class variations. In this paper, a new framework based on sparse coding and temporal pyramid matching (TPM) is proposed for depth-based human action recognition. Especially, a discriminative class-specific dictionary learning algorithm is proposed for sparse coding. By adding the group sparsity and geometry constraints, features can be well reconstructed by the sub-dictionary belonging to the same class, and the geometry relationships among features are also kept in the calculated coefficients. The proposed approach is evaluated on two benchmark datasets captured by depth cameras. Experimental results show that the proposed algorithm repeatedly achieves superior performance to the state of the art algorithms. Moreover, the proposed dictionary learning method also outperforms classic dictionary learning approaches."
]
} |
1604.02808 | 2342311830 | Recent approaches in depth-based human activity analysis achieved outstanding performance and proved the effectiveness of 3D representation for classification of action classes. Currently available depth-based and RGB+D-based action recognition benchmarks have a number of limitations, including the lack of training samples, distinct class labels, camera views and variety of subjects. In this paper we introduce a large-scale dataset for RGB+D human action recognition with more than 56 thousand video samples and 4 million frames, collected from 40 distinct subjects. Our dataset contains 60 different action classes including daily, mutual, and health-related actions. In addition, we propose a new recurrent neural network structure to model the long-term temporal correlation of the features for each body part, and utilize them for better action classification. Experimental results show the advantages of applying deep learning methods over state-of-the-art hand-crafted features on the suggested cross-subject and cross-view evaluation criteria for our dataset. The introduction of this large scale dataset will enable the community to apply, develop and adapt various data-hungry learning techniques for the task of depth-based and RGB+D-based human activity analysis. | RGBD-HuDaAct @cite_34 was one of the largest datasets. It contains RGB and depth sequences of 1189 videos of 12 human daily actions (plus one background class), with high variation in time lengths. The special characteristic of this dataset was the synced and aligned RGB and depth channels which enabled local multimodal analysis of RGBD signals. We emphasize the difference between RGBD and RGB+D terms: we suggest using RGBD when the two modalities are aligned pixel-wise, and RGB+D when the resolutions of the two are different and frames are not aligned. | {
"cite_N": [
"@cite_34"
],
"mid": [
"1989665047"
],
"abstract": [
"In this paper, we present a home-monitoring oriented human activity recognition benchmark database, based on the combination of a color video camera and a depth sensor. Our contributions are two-fold: 1) We have created a publicly releasable human activity video database (i.e., named as RGBD-HuDaAct), which contains synchronized color-depth video streams, for the task of human daily activity recognition. This database aims at encouraging more research efforts on human activity recognition based on multi-modality sensor combination (e.g., color plus depth). 2) Two multi-modality fusion schemes, which naturally combine color and depth information, have been developed from two state-of-the-art feature representation methods for action recognition, i.e., spatio-temporal interest points (STIPs) and motion history images (MHIs). These depth-extended feature representation methods are evaluated comprehensively and superior recognition performances over their uni-modality (e.g., color only) counterparts are demonstrated."
]
} |
1604.02808 | 2342311830 | Recent approaches in depth-based human activity analysis achieved outstanding performance and proved the effectiveness of 3D representation for classification of action classes. Currently available depth-based and RGB+D-based action recognition benchmarks have a number of limitations, including the lack of training samples, distinct class labels, camera views and variety of subjects. In this paper we introduce a large-scale dataset for RGB+D human action recognition with more than 56 thousand video samples and 4 million frames, collected from 40 distinct subjects. Our dataset contains 60 different action classes including daily, mutual, and health-related actions. In addition, we propose a new recurrent neural network structure to model the long-term temporal correlation of the features for each body part, and utilize them for better action classification. Experimental results show the advantages of applying deep learning methods over state-of-the-art hand-crafted features on the suggested cross-subject and cross-view evaluation criteria for our dataset. The introduction of this large scale dataset will enable the community to apply, develop and adapt various data-hungry learning techniques for the task of depth-based and RGB+D-based human activity analysis. | 3D Action Pairs @cite_32 was proposed to provide multiple pairs of action classes. Each pair contains very closely related actions with differences along the temporal axis. State-of-the-art methods @cite_8 @cite_3 @cite_2 achieved perfect accuracy on this benchmark. | {
"cite_N": [
"@cite_2",
"@cite_32",
"@cite_3",
"@cite_8"
],
"mid": [
"2309561466",
"2085735683",
"2217325140",
"1895914852"
],
"abstract": [
"Single modality action recognition on RGB or depth sequences has been extensively explored recently. It is generally accepted that each of these two modalities has different strengths and limitations for the task of action recognition. Therefore, analysis of the RGB+D videos can help us to better study the complementary properties of these two types of modalities and achieve higher levels of performance. In this paper, we propose a new deep autoencoder based shared-specific feature factorization network to separate input multimodal signals into a hierarchy of components. Further, based on the structure of the features, a structured sparsity learning machine is proposed which utilizes mixed norms to apply regularization within components and group selection between them for better classification performance. Our experimental results show the effectiveness of our cross-modality feature analysis framework by achieving state-of-the-art accuracy for action classification on five challenging benchmark datasets.",
"We present a new descriptor for activity recognition from videos acquired by a depth sensor. Previous descriptors mostly compute shape and motion features independently, thus, they often fail to capture the complex joint shape-motion cues at pixel-level. In contrast, we describe the depth sequence using a histogram capturing the distribution of the surface normal orientation in the 4D space of time, depth, and spatial coordinates. To build the histogram, we create 4D projectors, which quantize the 4D space and represent the possible directions for the 4D normal. We initialize the projectors using the vertices of a regular polychoron. Consequently, we refine the projectors using a discriminative density measure, such that additional projectors are induced in the directions where the 4D normals are more dense and discriminative. Through extensive experiments, we demonstrate that our descriptor better captures the joint shape-motion cues in the depth sequence, and thus outperforms the state-of-the-art on all relevant benchmarks.",
"The articulated and complex nature of human actions makes the task of action recognition difficult. One approach to handle this complexity is dividing it to the kinetics of body parts and analyzing the actions based on these partial descriptors. We propose a joint sparse regression based learning method which utilizes the structured sparsity to model each action as a combination of multimodal features from a sparse set of body parts. To represent dynamics and appearance of parts, we employ a heterogeneous set of depth and skeleton based features. The proper structure of multimodal multipart features are formulated into the learning framework via the proposed hierarchical mixed norm, to regularize the structured features of each part and to apply sparsity between them, in favor of a group feature selection. Our experimental results expose the effectiveness of the proposed learning method in which it outperforms other methods in all three tested datasets while saturating one of them by achieving perfect accuracy.",
"This paper proposes a novel approach to action recognition from RGB-D cameras, in which depth features and RGB visual features are jointly used. Rich heterogeneous RGB and depth data are effectively compressed and projected to a learned shared space, in order to reduce noise and capture useful information for recognition. Knowledge from various sources can then be shared with others in the learned space to learn cross-modal features. This guides the discovery of valuable information for recognition. To capture complex spatiotemporal structural relationships in visual and depth features, we represent both RGB and depth data in a matrix form. We formulate the recognition task as a low-rank bilinear model composed of row and column parameter matrices. The rank of the model parameter is minimized to build a low-rank classifier, which is beneficial for improving the generalization power. The proposed method is extensively evaluated on two public RGB-D action datasets, and achieves state-of-the-art results. It also shows promising results if RGB or depth data are missing in training or testing procedure."
]
} |
1604.02808 | 2342311830 | Recent approaches in depth-based human activity analysis achieved outstanding performance and proved the effectiveness of 3D representation for classification of action classes. Currently available depth-based and RGB+D-based action recognition benchmarks have a number of limitations, including the lack of training samples, distinct class labels, camera views and variety of subjects. In this paper we introduce a large-scale dataset for RGB+D human action recognition with more than 56 thousand video samples and 4 million frames, collected from 40 distinct subjects. Our dataset contains 60 different action classes including daily, mutual, and health-related actions. In addition, we propose a new recurrent neural network structure to model the long-term temporal correlation of the features for each body part, and utilize them for better action classification. Experimental results show the advantages of applying deep learning methods over state-of-the-art hand-crafted features on the suggested cross-subject and cross-view evaluation criteria for our dataset. The introduction of this large scale dataset will enable the community to apply, develop and adapt various data-hungry learning techniques for the task of depth-based and RGB+D-based human activity analysis. | Multiview 3D event @cite_53 and Northwestern-UCLA @cite_44 datasets used more than one Kinect camera at the same time to collect multi-view representations of the same action, and to scale up the number of samples. | {
"cite_N": [
"@cite_44",
"@cite_53"
],
"mid": [
"2949462896",
"1972283961"
],
"abstract": [
"Existing methods on video-based action recognition are generally view-dependent, i.e., performing recognition from the same views seen in the training data. We present a novel multiview spatio-temporal AND-OR graph (MST-AOG) representation for cross-view action recognition, i.e., the recognition is performed on the video from an unknown and unseen view. As a compositional model, MST-AOG compactly represents the hierarchical combinatorial structures of cross-view actions by explicitly modeling the geometry, appearance and motion variations. This paper proposes effective methods to learn the structure and parameters of MST-AOG. The inference based on MST-AOG enables action recognition from novel views. The training of MST-AOG takes advantage of the 3D human skeleton data obtained from Kinect cameras to avoid annotating enormous multi-view video frames, which is error-prone and time-consuming, but the recognition does not need 3D information and is based on 2D video input. A new Multiview Action3D dataset has been created and will be released. Extensive experiments have demonstrated that this new action representation significantly improves the accuracy and robustness for cross-view action recognition on 2D videos.",
"Recognizing the events and objects in the video sequence are two challenging tasks due to the complex temporal structures and the large appearance variations. In this paper, we propose a 4D human-object interaction model, where the two tasks jointly boost each other. Our human-object interaction is defined in 4D space: i) the co occurrence and geometric constraints of human pose and object in 3D space, ii) the sub-events transition and objects coherence in 1D temporal dimension. We represent the structure of events, sub-events and objects in a hierarchical graph. For an input RGB-depth video, we design a dynamic programming beam search algorithm to: i) segment the video, ii) recognize the events, and iii) detect the objects simultaneously. For evaluation, we built a large-scale multiview 3D event dataset which contains 3815 video sequences and 383,036 RGBD frames captured by the Kinect cameras. The experiment results on this dataset show the effectiveness of our method."
]
} |
1604.02808 | 2342311830 | Recent approaches in depth-based human activity analysis achieved outstanding performance and proved the effectiveness of 3D representation for classification of action classes. Currently available depth-based and RGB+D-based action recognition benchmarks have a number of limitations, including the lack of training samples, distinct class labels, camera views and variety of subjects. In this paper we introduce a large-scale dataset for RGB+D human action recognition with more than 56 thousand video samples and 4 million frames, collected from 40 distinct subjects. Our dataset contains 60 different action classes including daily, mutual, and health-related actions. In addition, we propose a new recurrent neural network structure to model the long-term temporal correlation of the features for each body part, and utilize them for better action classification. Experimental results show the advantages of applying deep learning methods over state-of-the-art hand-crafted features on the suggested cross-subject and cross-view evaluation criteria for our dataset. The introduction of this large scale dataset will enable the community to apply, develop and adapt various data-hungry learning techniques for the task of depth-based and RGB+D-based human activity analysis. | Oreifej et al. @cite_32 calculated the four-dimensional normals (X-Y-depth-time) from depth sequences and accumulated them on spatio-temporal cubes as quantized histograms over 120 vertices of a regular polychoron. The work of @cite_36 proposed histograms of oriented principal components of depth cloud points, in order to extract robust features against viewpoint variations. Lu et al. @cite_10 applied @math-test-based binary range-sample features on depth maps and achieved robust representation against noise, scaling, camera views, and background clutter. 
Yang and Tian @cite_50 proposed super normal vectors as aggregated dictionary-based codewords of four-dimensional normals over space-time grids. | {
"cite_N": [
"@cite_36",
"@cite_10",
"@cite_32",
"@cite_50"
],
"mid": [
"1777221758",
"2086663212",
"2085735683",
"2091911422"
],
"abstract": [
"Existing techniques for 3D action recognition are sensitive to viewpoint variations because they extract features from depth images which are viewpoint dependent. In contrast, we directly process pointclouds for cross-view action recognition from unknown and unseen views. We propose the histogram of oriented principal components (HOPC) descriptor that is robust to noise, viewpoint, scale and action speed variations. At a 3D point, HOPC is computed by projecting the three scaled eigenvectors of the pointcloud within its local spatio-temporal support volume onto the vertices of a regular dodecahedron. HOPC is also used for the detection of spatio-temporal keypoints (STK) in 3D pointcloud sequences so that view-invariant STK descriptors (or Local HOPC descriptors) at these key locations only are used for action recognition. We also propose a global descriptor computed from the normalized spatio-temporal distribution of STKs in 4-D, which we refer to as STK-D. We have evaluated the performance of our proposed descriptors against nine existing techniques on two cross-view and three single-view human action recognition datasets. The experimental results show that our techniques provide significant improvement over state-of-the-art methods.",
"We propose binary range-sample feature in depth. It is based on τ tests and achieves reasonable invariance with respect to possible change in scale, viewpoint, and background. It is robust to occlusion and data corruption as well. The descriptor works in a high speed thanks to its binary property. Working together with standard learning algorithms, the proposed descriptor achieves state-of-the-art results on benchmark datasets in our experiments. Impressively short running time is also yielded.",
"We present a new descriptor for activity recognition from videos acquired by a depth sensor. Previous descriptors mostly compute shape and motion features independently, thus, they often fail to capture the complex joint shape-motion cues at pixel-level. In contrast, we describe the depth sequence using a histogram capturing the distribution of the surface normal orientation in the 4D space of time, depth, and spatial coordinates. To build the histogram, we create 4D projectors, which quantize the 4D space and represent the possible directions for the 4D normal. We initialize the projectors using the vertices of a regular polychoron. Consequently, we refine the projectors using a discriminative density measure, such that additional projectors are induced in the directions where the 4D normals are more dense and discriminative. Through extensive experiments, we demonstrate that our descriptor better captures the joint shape-motion cues in the depth sequence, and thus outperforms the state-of-the-art on all relevant benchmarks.",
"This paper presents a new framework for human activity recognition from video sequences captured by a depth camera. We cluster hypersurface normals in a depth sequence to form the polynormal which is used to jointly characterize the local motion and shape information. In order to globally capture the spatial and temporal orders, an adaptive spatio-temporal pyramid is introduced to subdivide a depth video into a set of space-time grids. We then propose a novel scheme of aggregating the low-level polynormals into the super normal vector (SNV) which can be seen as a simplified version of the Fisher kernel representation. In the extensive experiments, we achieve classification results superior to all previous published results on the four public benchmark datasets, i.e., MSRAction3D, MSRDailyActivity3D, MSRGesture3D, and MSRActionPairs3D."
]
} |
1604.02808 | 2342311830 | Recent approaches in depth-based human activity analysis achieved outstanding performance and proved the effectiveness of 3D representation for classification of action classes. Currently available depth-based and RGB+D-based action recognition benchmarks have a number of limitations, including the lack of training samples, distinct class labels, camera views and variety of subjects. In this paper we introduce a large-scale dataset for RGB+D human action recognition with more than 56 thousand video samples and 4 million frames, collected from 40 distinct subjects. Our dataset contains 60 different action classes including daily, mutual, and health-related actions. In addition, we propose a new recurrent neural network structure to model the long-term temporal correlation of the features for each body part, and utilize them for better action classification. Experimental results show the advantages of applying deep learning methods over state-of-the-art hand-crafted features on the suggested cross-subject and cross-view evaluation criteria for our dataset. The introduction of this large scale dataset will enable the community to apply, develop and adapt various data-hungry learning techniques for the task of depth-based and RGB+D-based human activity analysis. | To have a view-invariant representation of the actions, features can be extracted from the 3D body joint positions, which are available for each frame. Evangelidis et al. @cite_19 divided the body into part-based joint quadruples and encoded the configuration of each part with a succinct 6D feature vector, the so-called skeletal quads. To aggregate the skeletal quads, they applied Fisher vectors and classified the samples with a linear SVM. In @cite_28, different skeleton configurations were represented as points on a Lie group. Actions, as time-series of skeletal configurations, were encoded as curves on this manifold. 
The work of @cite_17 utilized group sparsity based class-specific dictionary coding with geometric constraints to extract skeleton-based features. Rahmani and Mian @cite_38 introduced a nonlinear knowledge transfer model to transform different views of human actions to a canonical view. To apply ConvNet-based learning to this domain, @cite_48 used synthetically generated data and fitted them to real mocap data. Their learning method was able to recognize actions from novel poses and viewpoints. | {
"cite_N": [
"@cite_38",
"@cite_28",
"@cite_48",
"@cite_19",
"@cite_17"
],
"mid": [
"1926974744",
"2048821851",
"2465488276",
"2021150171",
"2162033023"
],
"abstract": [
"This paper concerns action recognition from unseen and unknown views. We propose unsupervised learning of a non-linear model that transfers knowledge from multiple views to a canonical view. The proposed Non-linear Knowledge Transfer Model (NKTM) is a deep network, with weight decay and sparsity constraints, which finds a shared high-level virtual path from videos captured from different unknown viewpoints to the same canonical view. The strength of our technique is that we learn a single NKTM for all actions and all camera viewing directions. Thus, NKTM does not require action labels during learning and knowledge of the camera viewpoints during training or testing. NKTM is learned once only from dense trajectories of synthetic points fitted to mocap data and then applied to real video data. Trajectories are coded with a general codebook learned from the same mocap data. NKTM is scalable to new action classes and training data as it does not require re-learning. Experiments on the IXMAS and N-UCLA datasets show that NKTM outperforms existing state-of-the-art methods for cross-view action recognition.",
"Recently introduced cost-effective depth sensors coupled with the real-time skeleton estimation algorithm of [16] have generated a renewed interest in skeleton-based human action recognition. Most of the existing skeleton-based approaches use either the joint locations or the joint angles to represent a human skeleton. In this paper, we propose a new skeletal representation that explicitly models the 3D geometric relationships between various body parts using rotations and translations in 3D space. Since 3D rigid body motions are members of the special Euclidean group SE(3), the proposed skeletal representation lies in the Lie group SE(3)×…×SE(3), which is a curved manifold. Using the proposed representation, human actions can be modeled as curves in this Lie group. Since classification of curves in this Lie group is not an easy task, we map the action curves from the Lie group to its Lie algebra, which is a vector space. We then perform classification using a combination of dynamic time warping, Fourier temporal pyramid representation and linear SVM. Experimental results on three action datasets show that the proposed representation performs better than many existing skeletal representations. The proposed approach also outperforms various state-of-the-art skeleton-based human action recognition approaches.",
"We propose a human pose representation model that transfers human poses acquired from different unknown views to a view-invariant high-level space. The model is a deep convolutional neural network and requires a large corpus of multiview training data which is very expensive to acquire. Therefore, we propose a method to generate this data by fitting synthetic 3D human models to real motion capture data and rendering the human poses from numerous viewpoints. While learning the CNN model, we do not use action labels but only the pose labels after clustering all training poses into k clusters. The proposed model is able to generalize to real depth images of unseen poses without the need for re-training or fine-tuning. Real depth videos are passed through the model frame-wise to extract view-invariant features. For spatio-temporal representation, we propose group sparse Fourier Temporal Pyramid which robustly encodes the action specific most discriminative output features of the proposed human pose model. Experiments on two multiview and three single-view benchmark datasets show that the proposed method dramatically outperforms existing state-of-the-art in action recognition.",
"Recent advances on human motion analysis have made the extraction of human skeleton structure feasible, even from single depth images. This structure has been proven quite informative for discriminating actions in a recognition scenario. In this context, we propose a local skeleton descriptor that encodes the relative position of joint quadruples. Such a coding implies a similarity normalisation transform that leads to a compact (6D) view-invariant skeletal feature, referred to as skeletal quad. Further, the use of a Fisher kernel representation is suggested to describe the skeletal quads contained in a (sub)action. A Gaussian mixture model is learnt from training data, so that the generation of any set of quads is encoded by its Fisher vector. Finally, a multi-level representation of Fisher vectors leads to an action description that roughly carries the order of sub-action within each action sequence. Efficient classification is here achieved by linear SVMs. The proposed action representation is tested on widely used datasets, MSRAction3D and HDM05. The experimental evaluation shows that the proposed method outperforms state-of-the-art algorithms that rely only on joints, while it competes with methods that combine joints with extra cues.",
"Human action recognition based on the depth information provided by commodity depth sensors is an important yet challenging task. The noisy depth maps, different lengths of action sequences, and free styles in performing actions, may cause large intra-class variations. In this paper, a new framework based on sparse coding and temporal pyramid matching (TPM) is proposed for depth-based human action recognition. Especially, a discriminative class-specific dictionary learning algorithm is proposed for sparse coding. By adding the group sparsity and geometry constraints, features can be well reconstructed by the sub-dictionary belonging to the same class, and the geometry relationships among features are also kept in the calculated coefficients. The proposed approach is evaluated on two benchmark datasets captured by depth cameras. Experimental results show that the proposed algorithm repeatedly achieves superior performance to the state of the art algorithms. Moreover, the proposed dictionary learning method also outperforms classic dictionary learning approaches."
]
} |
1604.02808 | 2342311830 | Recent approaches in depth-based human activity analysis achieved outstanding performance and proved the effectiveness of 3D representation for classification of action classes. Currently available depth-based and RGB+D-based action recognition benchmarks have a number of limitations, including the lack of training samples, distinct class labels, camera views and variety of subjects. In this paper we introduce a large-scale dataset for RGB+D human action recognition with more than 56 thousand video samples and 4 million frames, collected from 40 distinct subjects. Our dataset contains 60 different action classes including daily, mutual, and health-related actions. In addition, we propose a new recurrent neural network structure to model the long-term temporal correlation of the features for each body part, and utilize them for better action classification. Experimental results show the advantages of applying deep learning methods over state-of-the-art hand-crafted features on the suggested cross-subject and cross-view evaluation criteria for our dataset. The introduction of this large scale dataset will enable the community to apply, develop and adapt various data-hungry learning techniques for the task of depth-based and RGB+D-based human activity analysis. | In most 3D action recognition scenarios, there is more than one modality of information, and combining them helps to improve the classification accuracy. Ohn-Bar and Trivedi @cite_43 combined second-order joint-angle similarity representations of skeletons with a modified two-step HOG feature on spatio-temporal depth maps to build a global representation of each video sample, and utilized a linear SVM to classify the actions. Wang et al. @cite_37 combined Fourier temporal pyramids of skeletal information with local occupancy pattern features extracted from depth maps, and applied a data mining framework to discover the most discriminative combinations of body joints. 
A structured sparsity based multimodal feature fusion technique was introduced by @cite_20 for action recognition in the RGB+D domain. In @cite_6, random decision forests were utilized for learning and feature pruning over a combination of depth and skeleton-based features. The work of @cite_3 proposed hierarchical mixed norms to fuse different features and select the most informative body parts in a joint learning framework. Hu et al. @cite_26 proposed dynamic skeletons as Fourier temporal pyramids of spline-based interpolated skeleton points and their gradients, and HOG-based dynamic color and depth patterns, to be used in an RGB+D joint-learning model for action classification. | {
"cite_N": [
"@cite_37",
"@cite_26",
"@cite_3",
"@cite_6",
"@cite_43",
"@cite_20"
],
"mid": [
"2110819057",
"1893516992",
"2217325140",
"2010676632",
"2007057255",
"2085900439"
],
"abstract": [
"Human action recognition is an important yet challenging task. Human actions usually involve human-object interactions, highly articulated motions, high intra-class variations, and complicated temporal structures. The recently developed commodity depth sensors open up new possibilities of dealing with this problem by providing 3D depth data of the scene. This information not only facilitates a rather powerful human motion capturing technique, but also makes it possible to efficiently model human-object interactions and intra-class variations. In this paper, we propose to characterize the human actions with a novel actionlet ensemble model, which represents the interaction of a subset of human joints. The proposed model is robust to noise, invariant to translational and temporal misalignment, and capable of characterizing both the human motion and the human-object interactions. We evaluate the proposed approach on three challenging action recognition datasets captured by Kinect devices, a multiview action recognition dataset captured with Kinect device, and a dataset captured by a motion capture system. The experimental evaluations show that the proposed approach achieves superior performance to the state-of-the-art algorithms.",
"In this paper, we focus on heterogeneous feature learning for RGB-D activity recognition. Considering that features from different channels could share some similar hidden structures, we propose a joint learning model to simultaneously explore the shared and feature-specific components as an instance of heterogeneous multi-task learning. The proposed model in a unified framework is capable of: 1) jointly mining a set of subspaces with the same dimensionality to enable the multi-task classifier learning, and 2) meanwhile, quantifying the shared and feature-specific components of features in the subspaces. To efficiently train the joint model, a three-step iterative optimization algorithm is proposed, followed by two inference models. Extensive results on three activity datasets have demonstrated the efficacy of the proposed method. In addition, a novel RGB-D activity dataset focusing on human-object interaction is collected for evaluating the proposed method, which will be made available to the community for RGB-D activity benchmarking and analysis.",
"The articulated and complex nature of human actions makes the task of action recognition difficult. One approach to handle this complexity is dividing it into the kinetics of body parts and analyzing the actions based on these partial descriptors. We propose a joint sparse regression based learning method which utilizes the structured sparsity to model each action as a combination of multimodal features from a sparse set of body parts. To represent dynamics and appearance of parts, we employ a heterogeneous set of depth and skeleton based features. The proper structure of multimodal multipart features is formulated into the learning framework via the proposed hierarchical mixed norm, to regularize the structured features of each part and to apply sparsity between them, in favor of a group feature selection. Our experimental results expose the effectiveness of the proposed learning method in which it outperforms other methods in all three tested datasets while saturating one of them by achieving perfect accuracy.",
"We propose an algorithm which combines the discriminative information from depth images as well as from 3D joint positions to achieve high action recognition accuracy. To avoid the suppression of subtle discriminative information and also to handle local occlusions, we compute a vector of many independent local features. Each feature encodes spatiotemporal variations of depth and depth gradients at a specific space-time location in the action volume. Moreover, we encode the dominant skeleton movements by computing a local 3D joint position difference histogram. For each joint, we compute a 3D space-time motion volume which we use as an importance indicator and incorporate in the feature vector for improved action discrimination. To retain only the discriminant features, we train a random decision forest (RDF). The proposed algorithm is evaluated on three standard datasets and compared with nine state-of-the-art algorithms. Experimental results show that, on the average, the proposed algorithm outperforms all other algorithms in accuracy and has a processing speed of over 112 frames per second.",
"We propose a set of features derived from skeleton tracking of the human body and depth maps for the purpose of action recognition. The descriptors proposed are easy to implement, produce relatively small-sized feature sets, and the multi-class classification scheme is fast and suitable for real-time applications. We intuitively characterize actions using pairwise affinities between view-invariant joint angles features over the performance of an action. Additionally, a new descriptor for spatio-temporal feature extraction from color and depth images is introduced. This descriptor involves an application of a modified histogram of oriented gradients (HOG) algorithm. The application produces a feature set at every frame, and these features are collected into a 2D array which then the same algorithm is applied to again (the approach is termed HOG2). Both feature sets are evaluated in a bag-of-words scheme using a linear SVM, showing state-of-the-art results on public datasets from different domains of human-computer interaction.",
"Microsoft Kinect's output is a multi-modal signal which gives RGB videos, depth sequences and skeleton information simultaneously. Various action recognition techniques focused on different single modalities of the signals and built their classifiers over the features extracted from one of these channels. For better recognition performance, it's desirable to fuse these multi-modal information into an integrated set of discriminative features. Most of current fusion methods merged heterogeneous features in a holistic manner and ignored the complementary properties of these modalities in finer levels. In this paper, we proposed a new hierarchical bag-of-words feature fusion technique based on multi-view structured sparsity learning to fuse atomic features from RGB and skeletons for the task of action recognition."
]
} |
1604.02808 | 2342311830 | Recent approaches in depth-based human activity analysis achieved outstanding performance and proved the effectiveness of 3D representation for classification of action classes. Currently available depth-based and RGB+D-based action recognition benchmarks have a number of limitations, including the lack of training samples, distinct class labels, camera views and variety of subjects. In this paper we introduce a large-scale dataset for RGB+D human action recognition with more than 56 thousand video samples and 4 million frames, collected from 40 distinct subjects. Our dataset contains 60 different action classes including daily, mutual, and health-related actions. In addition, we propose a new recurrent neural network structure to model the long-term temporal correlation of the features for each body part, and utilize them for better action classification. Experimental results show the advantages of applying deep learning methods over state-of-the-art hand-crafted features on the suggested cross-subject and cross-view evaluation criteria for our dataset. The introduction of this large scale dataset will enable the community to apply, develop and adapt various data-hungry learning techniques for the task of depth-based and RGB+D-based human activity analysis. | Differential RNN @cite_16 added a new gating mechanism to the traditional LSTM to extract the derivatives of internal state (DoS). The derived DoS was fed to the LSTM gates to learn salient dynamic patterns in 3D skeleton data. | {
"cite_N": [
"@cite_16"
],
"mid": [
"2951208315"
],
"abstract": [
"The long short-term memory (LSTM) neural network is capable of processing complex sequential information since it utilizes special gating schemes for learning representations from long input sequences. It has the potential to model any sequential time-series data, where the current hidden state has to be considered in the context of the past hidden states. This property makes LSTM an ideal choice to learn the complex dynamics of various actions. Unfortunately, the conventional LSTMs do not consider the impact of spatio-temporal dynamics corresponding to the given salient motion patterns, when they gate the information that ought to be memorized through time. To address this problem, we propose a differential gating scheme for the LSTM neural network, which emphasizes the change in information gain caused by the salient motions between the successive frames. This change in information gain is quantified by Derivative of States (DoS), and thus the proposed LSTM model is termed as differential Recurrent Neural Network (dRNN). We demonstrate the effectiveness of the proposed model by automatically recognizing actions from the real-world 2D and 3D human action datasets. Our study is one of the first works towards demonstrating the potential of learning complex time-series representations via high-order derivatives of states."
]
} |
1604.02808 | 2342311830 | Recent approaches in depth-based human activity analysis achieved outstanding performance and proved the effectiveness of 3D representation for classification of action classes. Currently available depth-based and RGB+D-based action recognition benchmarks have a number of limitations, including the lack of training samples, distinct class labels, camera views and variety of subjects. In this paper we introduce a large-scale dataset for RGB+D human action recognition with more than 56 thousand video samples and 4 million frames, collected from 40 distinct subjects. Our dataset contains 60 different action classes including daily, mutual, and health-related actions. In addition, we propose a new recurrent neural network structure to model the long-term temporal correlation of the features for each body part, and utilize them for better action classification. Experimental results show the advantages of applying deep learning methods over state-of-the-art hand-crafted features on the suggested cross-subject and cross-view evaluation criteria for our dataset. The introduction of this large scale dataset will enable the community to apply, develop and adapt various data-hungry learning techniques for the task of depth-based and RGB+D-based human activity analysis. | HBRNN-L @cite_30 proposed a multilayer RNN framework for action recognition on a hierarchy of skeleton-based inputs. At the first layer, each subnetwork received the inputs from one body part. In subsequent layers, the combined hidden representations of the previous layers were fed as inputs, following a hierarchical combination of body parts. | {
"cite_N": [
"@cite_30"
],
"mid": [
"1950788856"
],
"abstract": [
"Human actions can be represented by the trajectories of skeleton joints. Traditional methods generally model the spatial structure and temporal dynamics of human skeleton with hand-crafted features and recognize human actions by well-designed classifiers. In this paper, considering that recurrent neural network (RNN) can model the long-term contextual information of temporal sequences well, we propose an end-to-end hierarchical RNN for skeleton based action recognition. Instead of taking the whole skeleton as the input, we divide the human skeleton into five parts according to human physical structure, and then separately feed them to five subnets. As the number of layers increases, the representations extracted by the subnets are hierarchically fused to be the inputs of higher layers. The final representations of the skeleton sequences are fed into a single-layer perceptron, and the temporally accumulated output of the perceptron is the final decision. We compare with five other deep RNN architectures derived from our model to verify the effectiveness of the proposed network, and also compare with several other methods on three publicly available datasets. Experimental results demonstrate that our model achieves the state-of-the-art performance with high computational efficiency."
]
} |
1604.02808 | 2342311830 | Recent approaches in depth-based human activity analysis achieved outstanding performance and proved the effectiveness of 3D representation for classification of action classes. Currently available depth-based and RGB+D-based action recognition benchmarks have a number of limitations, including the lack of training samples, distinct class labels, camera views and variety of subjects. In this paper we introduce a large-scale dataset for RGB+D human action recognition with more than 56 thousand video samples and 4 million frames, collected from 40 distinct subjects. Our dataset contains 60 different action classes including daily, mutual, and health-related actions. In addition, we propose a new recurrent neural network structure to model the long-term temporal correlation of the features for each body part, and utilize them for better action classification. Experimental results show the advantages of applying deep learning methods over state-of-the-art hand-crafted features on the suggested cross-subject and cross-view evaluation criteria for our dataset. The introduction of this large scale dataset will enable the community to apply, develop and adapt various data-hungry learning techniques for the task of depth-based and RGB+D-based human activity analysis. | The work of @cite_18 introduced an internal dropout mechanism applied to LSTM gates for stronger regularization in the RNN-based 3D action learning network. To further regularize the learning, a co-occurrence-inducing norm was added to the network's cost function, which encouraged the network to discover groups of co-occurring and discriminative joints for better action recognition. | {
"cite_N": [
"@cite_18"
],
"mid": [
"2307035320"
],
"abstract": [
"Skeleton based action recognition distinguishes human actions using the trajectories of skeleton joints, which provide a very good representation for describing actions. Considering that recurrent neural networks (RNNs) with Long Short-Term Memory (LSTM) can learn feature representations and model long-term temporal dependencies automatically, we propose an end-to-end fully connected deep LSTM network for skeleton based action recognition. Inspired by the observation that the co-occurrences of the joints intrinsically characterize human actions, we take the skeleton as the input at each time slot and introduce a novel regularization scheme to learn the co-occurrence features of skeleton joints. To train the deep LSTM network effectively, we propose a new dropout algorithm which simultaneously operates on the gates, cells, and output responses of the LSTM neurons. Experimental results on three human action recognition datasets consistently demonstrate the effectiveness of the proposed model."
]
} |
1604.03114 | 2342255891 | Public debates are a common platform for presenting and juxtaposing diverging views on important issues. In this work we propose a methodology for tracking how ideas flow between participants throughout a debate. We use this approach in a case study of Oxford-style debates---a competitive format where the winner is determined by audience votes---and show how the outcome of a debate depends on aspects of conversational flow. In particular, we find that winners tend to make better use of a debate's interactive component than losers, by actively pursuing their opponents' points rather than promoting their own ideas over the course of the conversation. | Previous work on conversational structure has proposed approaches to model dialogue acts @cite_0 @cite_11 @cite_19 or disentangle interleaved conversations @cite_21 @cite_23 . Other research has considered the problem of detecting conversation-level traits such as the presence of disagreements @cite_2 @cite_7 or the likelihood of relation dissolution @cite_6 . At the participant level, several studies present approaches to identify ideological stances @cite_20 @cite_8 , using features based on participant interactions @cite_5 @cite_13 , or extracting words and reasons characterizing a stance @cite_14 @cite_22 @cite_4 . In our setting, both the stances and the turn structure of a debate are known, allowing us to instead focus on the debate's outcome. | {
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_22",
"@cite_7",
"@cite_8",
"@cite_21",
"@cite_6",
"@cite_0",
"@cite_19",
"@cite_23",
"@cite_2",
"@cite_5",
"@cite_13",
"@cite_20",
"@cite_11"
],
"mid": [
"2038411619",
"2150248423",
"2129229917",
"2251147786",
"2251911493",
"",
"2143976867",
"2949473767",
"1472827061",
"2121912514",
"2145747781",
"1967807490",
"2252133813",
"2098420572",
"1654173042"
],
"abstract": [
"Entries in the burgeoning “text-as-data” movement are often accompanied by lists or visualizations of how word (or other lexical feature) usage differs across some pair or set of documents. These are intended either to establish some target semantic concept (like the content of partisan frames) to estimate word-specific measures that feed forward into another analysis (like locating parties in ideological space) or both. We discuss a variety of techniques for selecting words that capture partisan, or other, differences in political speech and for evaluating the relative importance of those words. We introduce and emphasize several new approaches based on Bayesian shrinkage and regularization. We illustrate the relative utility of these approaches with analyses of partisan, gender, and distributive speech in the U.S. Senate.",
"Recent years have seen a surge of interest in stance classification in online debates. Oftentimes, however, it is important to determine not only the stance expressed by an author in her debate posts, but also the reasons behind her supporting or opposing the issue under debate. We therefore examine the new task of reason classification in this paper. Given the close interplay between stance classification and reason classification, we design computational models for examining how automatically computed stance information can be profitably exploited for reason classification. Experiments on our reason-annotated corpus of ideological debate posts from four domains demonstrate that sophisticated models of stances and reasons can indeed yield more accurate reason and stance classification results than their simpler counterparts.",
"In this paper we investigate the effect of the context of interaction on the extent to which a contributor's perspective bias is displayed through their lexical choice. We present a series of experiments on political discussion data. Our experiments indicate that (i) when people quote contributors with an opposing view, they tend to quote the words that are less strongly associated with the opposing view. (ii) Nevertheless, in quoting their opponents, the displayed bias of their word distributions shifts towards that of their opponents. (iii) The personal bias of the speaker is displayed most clearly through the words that are not quoted, (iv) although characteristics of the quoted message do have a measurable effect on the words that are included in the contribution. And, finally, (v) posts are influenced by the displayed bias of previous posts in a thread.",
"We investigate the novel task of online dispute detection and propose a sentiment analysis solution to the problem: we aim to identify the sequence of sentence-level sentiments expressed during a discussion and to use them as features in a classifier that predicts the DISPUTE NON-DISPUTE label for the discussion as a whole. We evaluate dispute detection approaches on a newly created corpus of Wikipedia Talk page disputes and find that classifiers that rely on our sentiment tagging features outperform those that do not. The best model achieves a very promising F1 score of 0.78 and an accuracy of 0.80.",
"Determining when conversational participants agree or disagree is instrumental for broader conversational analysis; it is necessary, for example, in deciding when a group has reached consensus. In this paper, we describe three main contributions. We show how different aspects of conversational structure can be used to detect agreement and disagreement in discussion forums. In particular, we exploit information about meta-thread structure and accommodation between participants. Second, we demonstrate the impact of the features using 3-way classification, including sentences expressing disagreement, agreement or neither. Finally, we show how to use a naturally occurring data set with labels derived from the sides that participants choose in debates on createdebate.com. The resulting new agreement corpus, Agreement by Create Debaters (ABCD) is 25 times larger than any prior corpus. We demonstrate that using this data enables us to outperform the same system trained on prior existing in-domain smaller annotated datasets.",
"",
"Interpersonal relations are fickle, with close friendships often dissolving into enmity. In this work, we explore linguistic cues that presage such transitions by studying dyadic interactions in an online strategy game where players form alliances and break those alliances through betrayal. We characterize friendships that are unlikely to last and examine temporal patterns that foretell betrayal. We reveal that subtle signs of imminent betrayal are encoded in the conversational patterns of the dyad, even if the victim is not aware of the relationship's fate. In particular, we find that lasting friendships exhibit a form of balance that manifests itself through language. In contrast, sudden changes in the balance of certain conversational attributes---such as positive sentiment, politeness, or focus on future planning---signal impending betrayal.",
"For the task of recognizing dialogue acts, we are applying the Transformation-Based Learning (TBL) machine learning algorithm. To circumvent a sparse data problem, we extract values of well-motivated features of utterances, such as speaker direction, punctuation marks, and a new feature, called dialogue act cues, which we find to be more effective than cue phrases and word n-grams in practice. We present strategies for constructing a set of dialogue act cues automatically by minimizing the entropy of the distribution of dialogue acts in a training corpus, filtering out irrelevant dialogue act cues, and clustering semantically-related words. In addition, to address limitations of TBL, we introduce a Monte Carlo strategy for training efficiently and a committee method for computing confidence measures. These ideas are combined in our working implementation, which labels held-out data as accurately as any other reported system for the dialogue act tagging task.",
"In this paper, we propose an annotation schema for the discourse analysis of Wikipedia Talk pages aimed at the coordination efforts for article improvement. We apply the annotation schema to a corpus of 100 Talk pages from the Simple English Wikipedia and make the resulting dataset freely available for download. Furthermore, we perform automatic dialog act classification on Wikipedia discussions and achieve an average F1-score of 0.82 with our classification pipeline.",
"We evaluate several popular models of local discourse coherence for domain and task generality by applying them to chat disentanglement. Using experiments on synthetic multiparty conversations, we show that most models transfer well from text to dialogue. Coherence models improve results overall when good parses and topic models are available, and on a constrained task for real chat data.",
"Casual online forums such as Reddit, Slashdot and Digg, are continuing to increase in popularity as a means of communication. Detecting disagreement in this domain is a considerable challenge. Many topics are unique to the conversation on the forum, and the appearance of disagreement may be much more subtle than on political blogs or social media sites such as twitter. In this analysis we present a crowd-sourced annotated corpus for topic level disagreement detection in Slashdot, showing that disagreement detection in this domain is difficult even for humans. We then proceed to show that a new set of features determined from the rhetorical structure of the conversation significantly improves the performance on disagreement detection over a baseline consisting of unigram bigram features, discourse markers, structural features and meta-post features.",
"We investigate whether one can determine from the transcripts of U.S. Congressional floor debates whether the speeches represent support of or opposition to proposed legislation. To address this problem, we exploit the fact that these speeches occur as part of a discussion; this allows us to use sources of information regarding relationships between discourse segments, such as whether a given utterance indicates agreement with the opinion expressed by another. We find that the incorporation of such information yields substantial improvements over classifying speeches in isolation.",
"Online debate forums present a valuable opportunity for the understanding and modeling of dialogue. To understand these debates, a key challenge is inferring the stances of the participants, all of which are interrelated and dependent. While collectively modeling users’ stances has been shown to be effective (, 2012c; Hasan and Ng, 2013), there are many modeling decisions whose ramifications are not well understood. To investigate these choices and their effects, we introduce a scalable unified probabilistic modeling framework for stance classification models that 1) are collective, 2) reason about disagreement, and 3) can model stance at either the author level or at the post level. We comprehensively evaluate the possible modeling choices on eight topics across two online debate corpora, finding accuracy improvements of up to 11.5 percentage points over a local classifier. Our results highlight the importance of making the correct modeling choices for online dialogues, and having a unified probabilistic modeling framework that makes this possible.",
"This work explores the utility of sentiment and arguing opinions for classifying stances in ideological debates. In order to capture arguing opinions in ideological stance taking, we construct an arguing lexicon automatically from a manually annotated corpus. We build supervised systems employing sentiment and arguing opinions and their targets as features. Our systems perform substantially better than a distribution-based baseline. Additionally, by employing both types of opinion features, we are able to perform better than a unigram-based system.",
"We propose the first unsupervised approach to the problem of modeling dialogue acts in an open domain. Trained on a corpus of noisy Twitter conversations, our method discovers dialogue acts by clustering raw utterances. Because it accounts for the sequential behaviour of these acts, the learned model can provide insight into the shape of communication in a new medium. We address the challenge of evaluating the emergent model with a qualitative visualization and an intrinsic conversation ordering task. This work is inspired by a corpus of 1.3 million Twitter conversations, which will be made publicly available. This huge amount of data, available only because Twitter blurs the line between chatting and publishing, highlights the need to be able to adapt quickly to a new medium."
]
} |
1604.03114 | 2342255891 | Public debates are a common platform for presenting and juxtaposing diverging views on important issues. In this work we propose a methodology for tracking how ideas flow between participants throughout a debate. We use this approach in a case study of Oxford-style debates---a competitive format where the winner is determined by audience votes---and show how the outcome of a debate depends on aspects of conversational flow. In particular, we find that winners tend to make better use of a debate's interactive component than losers, by actively pursuing their opponents' points rather than promoting their own ideas over the course of the conversation. | Existing research on argumentation strategies has largely focused on exploiting the structure of monologic arguments @cite_3 , like those of persuasive essays @cite_12 @cite_9 . In addition, prior work has examined the effectiveness of arguments in the context of a forum where people invite others to challenge their opinions. We complement this line of work by looking at the relative persuasiveness of participants in extended conversations as they exchange arguments over multiple turns. | {
"cite_N": [
"@cite_9",
"@cite_12",
"@cite_3"
],
"mid": [
"2250309026",
"2119707623",
""
],
"abstract": [
"In this paper, we present a novel approach for identifying argumentative discourse structures in persuasive essays. The structure of argumentation consists of several components (i.e. claims and premises) that are connected with argumentative relations. We consider this task in two consecutive steps. First, we identify the components of arguments using multiclass classification. Second, we classify a pair of argument components as either support or non-support for identifying the structure of argumentative discourse. For both tasks, we evaluate several classifiers and propose novel feature sets including structural, lexical, syntactic and contextual features. In our experiments, we obtain a macro F1-score of 0.726 for identifying argument components and 0.722 for argumentative relations.",
"Argumentation schemes are structures or templates for various kinds of arguments. Given the text of an argument with premises and conclusion identified, we classify it as an instance of one of five common schemes, using features specific to each scheme. We achieve accuracies of 63--91 in one-against-others classification and 80--94 in pairwise classification (baseline = 50 in both cases).",
""
]
} |
1604.03114 | 2342255891 | Public debates are a common platform for presenting and juxtaposing diverging views on important issues. In this work we propose a methodology for tracking how ideas flow between participants throughout a debate. We use this approach in a case study of Oxford-style debates---a competitive format where the winner is determined by audience votes---and show how the outcome of a debate depends on aspects of conversational flow. In particular, we find that winners tend to make better use of a debate's interactive component than losers, by actively pursuing their opponents' points rather than promoting their own ideas over the course of the conversation. | Previous studies of influence in extended conversations have largely dealt with the political domain, examining moderated but relatively unstructured settings such as talk shows or presidential debates, and suggesting features like topic control @cite_18 , linguistic style matching @cite_17 and turn-taking @cite_15 . With persuasion in mind, our work extends these studies to explore a new dynamic, the flow of ideas between speakers, in a highly structured setting that controls for confounding factors. | {
"cite_N": [
"@cite_18",
"@cite_15",
"@cite_17"
],
"mid": [
"1985741469",
"2250299432",
"1927374806"
],
"abstract": [
"Identifying influential speakers in multi-party conversations has been the focus of research in communication, sociology, and psychology for decades. It has been long acknowledged qualitatively that controlling the topic of a conversation is a sign of influence. To capture who introduces new topics in conversations, we introduce SITS--Speaker Identity for Topic Segmentation--a nonparametric hierarchical Bayesian model that is capable of discovering (1) the topics used in a set of conversations, (2) how these topics are shared across conversations, (3) when these topics change during conversations, and (4) a speaker-specific measure of \"topic control\". We validate the model via evaluations using multiple datasets, including work meetings, online discussions, and political debates. Experimental results confirm the effectiveness of SITS in both intrinsic and extrinsic evaluations.",
"In this paper, we present an automatic system to rank participants of an interaction in terms of their relative power. We find several linguistic and structural features to be effective in predicting these rankings. We conduct our study in the domain of political debates, specifically the 2012 Republican presidential primary debates. Our dataset includes textual transcripts of 20 debates with 4-9 candidates as participants per debate. We model the power index of each candidate in terms of their relative poll standings in the state and national polls. We find that the candidates’ power indices affect the way they interact with others and the way others interact with them. We obtained encouraging results in our experiments and we expect these findings to carry across to other genres of multi-party conversations.",
"The current research used the contexts of U.S. presidential debates and negotiations to examine whether matching the linguistic style of an opponent in a two-party exchange affects the reactions of third-party observers. Building off communication accommodation theory (CAT), interaction alignment theory (IAT), and processing fluency, we propose that language style matching (LSM) will improve subsequent third-party evaluations because matching an opponent’s linguistic style reflects greater perspective taking and will make one’s arguments easier to process. In contrast, research on status inferences predicts that LSM will negatively impact third-party evaluations because LSM implies followership. We conduct two studies to test these competing hypotheses. Study 1 analyzed transcripts of U.S. presidential debates between 1976 and 2012 and found that candidates who matched their opponent’s linguistic style increased their standing in the polls. Study 2 demonstrated a causal relationship between LSM and third-..."
]
} |
1604.02715 | 2337635495 | In this work, we propose a novel way of efficiently localizing a soccer field from a single broadcast image of the game. Related work in this area relies on manually annotating a few key frames and extending the localization to similar images, or installing fixed specialized cameras in the stadium from which the layout of the field can be obtained. In contrast, we formulate this problem as a branch and bound inference in a Markov random field where an energy function is defined in terms of field cues such as grass, lines and circles. Moreover, our approach is fully automatic and depends only on single images from the broadcast video of the game. We demonstrate the effectiveness of our method by applying it to various games and obtain promising results. Finally, we posit that our approach can be applied easily to other sports such as hockey and basketball. | In @cite_28 , the authors proposed an approach that matches images of the game to 3D models of the stadium for initial camera parameter estimation. However, these 3D models only exist in well-known stadiums, limiting the applicability of the proposed approach. | {
"cite_N": [
"@cite_28"
],
"mid": [
"188479045"
],
"abstract": [
"In this paper we present ASPOGAMO, a vision system capable of es- timating motion trajectories of soccer players taped on video. The system per- forms well in a multitude of application scenarios because of its adaptivity to various camera setups, such as single or multiple camera settings, static or dy- namic ones. Furthermore, ASPOGAMO can directly process image streams taken from TV broadcast, and extract all valuable information despite scene interrup- tions and cuts between different cameras. The system achieves a high level of robustness through the use of modelbased vision algorithms for camera estima- tion and player recognition and a probabilistic multi-player tracking framework capable of dealing with occlusion situations typical in team-sports. The continu- ous interplay between these submodules is adding to both the reliability and the efficiency of the overall system."
]
} |
1604.02847 | 2950251060 | We present SymNet, a network static analysis tool based on symbolic execution. SymNet quickly analyzes networks by injecting symbolic packets and tracing their path through the network. Our key novelty is SEFL, a language we designed for network processing that is symbolic-execution friendly. SymNet is easy to use: we have developed parsers that automatically generate SEFL models from router and switch tables, firewall configurations and arbitrary Click modular router configurations. Most of our models are exact and have optimal branching factor. Finally, we built a testing tool that checks SEFL models conform to the real implementation. SymNet can check networks containing routers with hundreds of thousands of prefixes and NATs in seconds, while ensuring packet header memory-safety and capturing network functionality such as dynamic tunneling, stateful processing and encryption. We used SymNet to debug middlebox interactions documented in the literature, to check our department’s network and the Stanford backbone network. Results show that symbolic execution is fast and more accurate than existing static analysis tools. | Static network analysis is a well-established topic, with many available tools @cite_12 @cite_11 @cite_1 @cite_14 @cite_19 . AntEater @cite_1 models network boxes as boolean formulae. Network Optimized Datalog @cite_14 is the most complete tool to date and relies on Datalog both for network models and policy constraints. The work of @cite_19 uses a model checker to verify networks containing stateful middleboxes. Our NAT model is similar in spirit to their proposal. | {
"cite_N": [
"@cite_14",
"@cite_1",
"@cite_19",
"@cite_12",
"@cite_11"
],
"mid": [
"2188073520",
"2115526539",
"223955670",
"2140069682",
"1882012874"
],
"abstract": [
"Network Verification is a form of model checking in which a model of the network is checked for properties stated using a specification language. Existing network verification tools lack a general specification language and hardcode the network model. Hence they cannot, for example, model policies at a high level of abstraction. Neither can they model dynamic networks; even a simple packet format change requires changes to internals. Standard verification tools (e.g., model checkers) have expressive specification and modeling languages but do not scale to large header spaces. We introduce Network Optimized Datalog (NoD) as a tool for network verification in which both the specification language and modeling languages are Datalog. NoD can also scale to large header spaces because of a new filter-project operator and a symbolic header representation. As a consequence, NoD allows checking for beliefs about network reachability policies in dynamic networks. A belief is a high-level invariant (e.g., \"Internal controllers cannot be accessed from the Internet\") that a network operator thinks is true. Beliefs may not hold, but checking them can uncover bugs or policy exceptions with little manual effort. Refuted beliefs can be used as a basis for revised beliefs. Further, in real networks, machines are added and links fail; on a longer term, packet formats and even forwarding behaviors can change, enabled by OpenFlow and P4. NoD allows the analyst to model such dynamic networks by adding new Datalog rules. For a large Singapore data center with 820K rules, NoD checks if any guest VM can access any controller (the equivalent of 5K specific reachability invariants) in 12 minutes. NoD checks for loops in an experimental SWAN backbone network with new headers in a fraction of a second. NoD generalizes a specialized system, SecGuru, we currently use in production to catch hundreds of configuration bugs a year. NoD has been released as part of the publicly available Z3 SMT solver.",
"Diagnosing problems in networks is a time-consuming and error-prone process. Existing tools to assist operators primarily focus on analyzing control plane configuration. Configuration analysis is limited in that it cannot find bugs in router software, and is harder to generalize across protocols since it must model complex configuration languages and dynamic protocol behavior. This paper studies an alternate approach: diagnosing problems through static analysis of the data plane. This approach can catch bugs that are invisible at the level of configuration files, and simplifies unified analysis of a network across many protocols and implementations. We present Anteater, a tool for checking invariants in the data plane. Anteater translates high-level network invariants into boolean satisfiability problems (SAT), checks them against network state using a SAT solver, and reports counterexamples if violations have been found. Applied to a large university network, Anteater revealed 23 bugs, including forwarding loops and stale ACL rules, with only five false positives. Nine of these faults are being fixed by campus network operators.",
"Great progress has been made recently in verifying the correctness of router forwarding tables [17, 19, 20, 26]. However, these approaches do not work for networks containing middleboxes such as caches and firewalls whose forwarding behavior depends on previously observed traffic. We explore how to verify isolation properties in networks that include such “dynamic datapath” elements using model checking. Our work leverages recent advances in SMT solvers, and the main challenge lies in scaling the approach to handle large and complicated networks. While the straightforward application of model checking to this problem can only handle very small networks (if at all), our approach can verify simple realistic invariants on networks containing 30,000 middleboxes in a few minutes.",
"The primary purpose of a network is to provide reachability between applications running on end hosts. In this paper, we describe how to compute the reachability a network provides from a snapshot of the configuration state from each of the routers. Our primary contribution is the precise definition of the potential reachability of a network and a substantial simplification of the problem through a unified modeling of packet filters and routing protocols. In the end, we reduce a complex, important practical problem to computing the transitive closure to set union and intersection operations on reachability set representations. We then extend our algorithm to model the influence of packet transformations (e.g., by NATs or ToS remapping) along the path. Our technique for static analysis of network reachability is valuable for verifying the intent of the network designer, troubleshooting reachability problems, and performing \"what-if\" analysis of failure scenarios.",
"Today's networks typically carry or deploy dozens of protocols and mechanisms simultaneously such as MPLS, NAT, ACLs and route redistribution. Even when individual protocols function correctly, failures can arise from the complex interactions of their aggregate, requiring network administrators to be masters of detail. Our goal is to automatically find an important class of failures, regardless of the protocols running, for both operational and experimental networks. To this end we developed a general and protocol-agnostic framework, called Header Space Analysis (HSA). Our formalism allows us to statically check network specifications and configurations to identify an important class of failures such as Reachability Failures, Forwarding Loops and Traffic Isolation and Leakage problems. In HSA, protocol header fields are not first class entities; instead we look at the entire packet header as a concatenation of bits without any associated meaning. Each packet is a point in the 0,1 L space where L is the maximum length of a packet header, and networking boxes transform packets from one point in the space to another point or set of points (multicast). We created a library of tools, called Hassel, to implement our framework, and used it to analyze a variety of networks and protocols. Hassel was used to analyze the Stanford University backbone network, and found all the forwarding loops in less than 10 minutes, and verified reachability constraints between two subnets in 13 seconds. It also found a large and complex loop in an experimental loose source routing protocol in 4 minutes."
]
} |
1604.02847 | 2950251060 | We present SymNet, a network static analysis tool based on symbolic execution. SymNet quickly analyzes networks by injecting symbolic packets and tracing their path through the network. Our key novelty is SEFL, a language we designed for network processing that is symbolic-execution friendly. SymNet is easy to use: we have developed parsers that automatically generate SEFL models from router and switch tables, firewall configurations and arbitrary Click modular router configurations. Most of our models are exact and have optimal branching factor. Finally, we built a testing tool that checks SEFL models conform to the real implementation. SymNet can check networks containing routers with hundreds of thousands of prefixes and NATs in seconds, while ensuring packet header memory-safety and capturing network functionality such as dynamic tunneling, stateful processing and encryption. We used SymNet to debug middlebox interactions documented in the literature, to check our department’s network and the Stanford backbone network. Results show that symbolic execution is fast and more accurate than existing static analysis tools. | @PARASPLIT Symbolic execution. We are not the first to propose using symbolic execution to analyze networks. @cite_4 used symbolic execution to check selected Click elements' source code for bugs, aiming to prove crash-freedom and bounded execution. We have shown that using C as a modelling language does not scale, and have proposed SEFL and as scalable alternatives. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2155216527"
],
"abstract": [
"Software dataplanes are emerging as an alternative to traditional hardware switches and routers, promising programmability and short time to market. These advantages are set against the risk of disrupting the network with bugs, unpredictable performance, or security vulnerabilities. We explore the feasibility of verifying software dataplanes to ensure smooth network operation. For general programs, verifiability and performance are competing goals; we argue that software dataplanes are different--we can write them in a way that enables verification and preserves performance. We present a verification tool that takes as input a software dataplane, written in a way that meets a given set of conditions, and (dis)proves that the dataplane satisfies crash-freedom, bounded-execution, and filtering properties. We evaluate our tool on stateless and simple stateful Click pipelines; we perform complete and sound verification of these pipelines within tens of minutes, whereas a state-of-the-art general-purpose tool fails to complete the same task within several hours."
]
} |
1604.02847 | 2950251060 | We present SymNet, a network static analysis tool based on symbolic execution. SymNet quickly analyzes networks by injecting symbolic packets and tracing their path through the network. Our key novelty is SEFL, a language we designed for network processing that is symbolic-execution friendly. SymNet is easy to use: we have developed parsers that automatically generate SEFL models from router and switch tables, firewall configurations and arbitrary Click modular router configurations. Most of our models are exact and have optimal branching factor. Finally, we built a testing tool that checks SEFL models conform to the real implementation. SymNet can check networks containing routers with hundreds of thousands of prefixes and NATs in seconds, while ensuring packet header memory-safety and capturing network functionality such as dynamic tunneling, stateful processing and encryption. We used SymNet to debug middlebox interactions documented in the literature, to check our department’s network and the Stanford backbone network. Results show that symbolic execution is fast and more accurate than existing static analysis tools. | Online verification. Veriflow @cite_0 and NetPlumber @cite_5 aim to perform live validation of all network configuration changes. They work underneath an SDN controller and verify all state updates. NICE uses symbolic model checking to verify the correctness of Openflow programs @cite_15 . More recently, Armstrong @cite_9 uses Klee on middlebox models written in C to guide the generation of test packets for networks. is orthogonal to these works. | {
"cite_N": [
"@cite_0",
"@cite_5",
"@cite_15",
"@cite_9"
],
"mid": [
"1675033504",
"158224344",
"",
"1537567235"
],
"abstract": [
"Networks are complex and prone to bugs. Existing tools that check network configuration files and the data-plane state operate offline at timescales of seconds to hours, and cannot detect or prevent bugs as they arise. Is it possible to check network-wide invariants in real time, as the network state evolves? The key challenge here is to achieve extremely low latency during the checks so that network performance is not affected. In this paper, we present a design, VeriFlow, which achieves this goal. VeriFlow is a layer between a software-defined networking controller and network devices that checks for network-wide invariant violations dynamically as each forwarding rule is inserted, modified or deleted. VeriFlow supports analysis over multiple header fields, and an API for checking custom invariants. Based on a prototype implementation integrated with the NOX OpenFlow controller, and driven by a Mininet OpenFlow network and Route Views trace data, we find that VeriFlow can perform rigorous checking within hundreds of microseconds per rule insertion or deletion.",
"Network state may change rapidly in response to customer demands, load conditions or configuration changes. But the network must also ensure correctness conditions such as isolating tenants from each other and from critical services. Existing policy checkers cannot verify compliance in real time because of the need to collect \"state\" from the entire network and the time it takes to analyze this state. SDNs provide an opportunity in this respect as they provide a logically centralized view from which every proposed change can be checked for compliance with policy. But there remains the need for a fast compliance checker. Our paper introduces a real time policy checking tool called NetPlumber based on Header Space Analysis (HSA) [8]. Unlike HSA, however, NetPlumber incrementally checks for compliance of state changes, using a novel set of conceptual tools that maintain a dependency graph between rules. While NetPlumber is a natural fit for SDNs, its abstract intermediate form is conceptually applicable to conventional networks as well. We have tested NetPlumber on Google's SDN, the Stanford backbone and Internet 2. With NetPlumber, checking the compliance of a typical rule update against a single policy on these networks takes 50-500µs on average.",
"",
"Network operators today spend significant manual effort in ensuring and checking that the network meets their intended policies. While recent work in network verification has made giant strides to reduce this effort, they focus on simple reachability properties and cannot handle context-dependent policies (e.g., how many connections has a host spawned) that operators realize using stateful network functions (NFs). Together, these introduce new expressiveness and scalability challenges that fall outside the scope of existing network verification mechanisms. To address these challenges, we present Armstrong, a system that enables operators to test if network with stateful data plane elements correctly implements a given context-dependent policy. Our design makes three key contributions to address expressiveness and scalability: (1) An abstract I O unit for modeling network I O that encodes policy-relevant context information; (2) A practical representation of complex NFs via an ensemble of finite state machines abstraction; and (3) A scalable application of symbolic execution to tackle state space explosion. We demonstrate that Armstrong is several orders of magnitude faster than existing mechanisms."
]
} |
1604.02847 | 2950251060 | We present SymNet, a network static analysis tool based on symbolic execution. SymNet quickly analyzes networks by injecting symbolic packets and tracing their path through the network. Our key novelty is SEFL, a language we designed for network processing that is symbolic-execution friendly. SymNet is easy to use: we have developed parsers that automatically generate SEFL models from router and switch tables, firewall configurations and arbitrary Click modular router configurations. Most of our models are exact and have optimal branching factor. Finally, we built a testing tool that checks SEFL models conform to the real implementation. SymNet can check networks containing routers with hundreds of thousands of prefixes and NATs in seconds, while ensuring packet header memory-safety and capturing network functionality such as dynamic tunneling, stateful processing and encryption. We used SymNet to debug middlebox interactions documented in the literature, to check our department’s network and the Stanford backbone network. Results show that symbolic execution is fast and more accurate than existing static analysis tools. | NetKAT @cite_17 and Frenetic @cite_6 are novel specification languages optimized for specifying OpenFlow-like rules in networks. SEFL is strictly more general as it can model middlebox behaviours too, not just layer two behaviour. | {
"cite_N": [
"@cite_6",
"@cite_17"
],
"mid": [
"2099501333",
"2130210899"
],
"abstract": [
"Modern networks provide a variety of interrelated services including routing, traffic monitoring, load balancing, and access control. Unfortunately, the languages used to program today's networks lack modern features - they are usually defined at the low level of abstraction supplied by the underlying hardware and they fail to provide even rudimentary support for modular programming. As a result, network programs tend to be complicated, error-prone, and difficult to maintain. This paper presents Frenetic, a high-level language for programming distributed collections of network switches. Frenetic provides a declarative query language for classifying and aggregating network traffic as well as a functional reactive combinator library for describing high-level packet-forwarding policies. Unlike prior work in this domain, these constructs are - by design - fully compositional, which facilitates modular reasoning and enables code reuse. This important property is enabled by Frenetic's novel run-time system which manages all of the details related to installing, uninstalling, and querying low-level packet-processing rules on physical switches. Overall, this paper makes three main contributions: (1) We analyze the state-of-the art in languages for programming networks and identify the key limitations; (2) We present a language design that addresses these limitations, using a series of examples to motivate and validate our choices; (3) We describe an implementation of the language and evaluate its performance on several benchmarks.",
"Recent years have seen growing interest in high-level languages for programming networks. But the design of these languages has been largely ad hoc, driven more by the needs of applications and the capabilities of network hardware than by foundational principles. The lack of a semantic foundation has left language designers with little guidance in determining how to incorporate new features, and programmers without a means to reason precisely about their code. This paper presents NetKAT, a new network programming language that is based on a solid mathematical foundation and comes equipped with a sound and complete equational theory. We describe the design of NetKAT, including primitives for filtering, modifying, and transmitting packets; union and sequential composition operators; and a Kleene star operator that iterates programs. We show that NetKAT is an instance of a canonical and well-studied mathematical structure called a Kleene algebra with tests (KAT) and prove that its equational theory is sound and complete with respect to its denotational semantics. Finally, we present practical applications of the equational theory including syntactic techniques for checking reachability, proving non-interference properties that ensure isolation between programs, and establishing the correctness of compilation algorithms."
]
} |
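The NetKAT abstract above names its primitives: filters, field modifications, union, and sequential composition over packets. A toy interpretation of that algebra in plain Python (not NetKAT syntax or its Kleene-algebra semantics; the field names and policy are invented):

```python
# Toy NetKAT-style policy combinators: a policy maps one packet (a dict)
# to a list of result packets, so dropping is the empty list and
# multicast is a multi-element list. A sketch of the algebra only.

def flt(field, value):
    """Filter: keep the packet iff pkt[field] == value."""
    return lambda pkt: [pkt] if pkt[field] == value else []

def mod(field, value):
    """Modification: rewrite one header field."""
    return lambda pkt: [{**pkt, field: value}]

def seq(p, q):
    """Sequential composition p; q."""
    return lambda pkt: [out for mid in p(pkt) for out in q(mid)]

def union(p, q):
    """Union p + q (take both behaviours)."""
    return lambda pkt: p(pkt) + q(pkt)

# Forward port-80 traffic to host h2; everything else is dropped.
policy = seq(flt('dport', 80), mod('dst', 'h2'))
assert policy({'dst': 'h1', 'dport': 80}) == [{'dst': 'h2', 'dport': 80}]
assert policy({'dst': 'h1', 'dport': 22}) == []
```

The compositionality is the point: `seq` and `union` build larger policies from smaller ones without the policies knowing about each other, which is what enables the equational reasoning the abstract describes.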
1604.02993 | 2342155211 | There is a small but growing body of research on statistical scripts, models of event sequences that allow probabilistic inference of implicit events from documents. These systems operate on structured verb-argument events produced by an NLP pipeline. We compare these systems with recent Recurrent Neural Net models that directly operate on raw tokens to predict sentences, finding the latter to be roughly comparable to the former in terms of predicting missing events in documents. | The use of scripts in AI dates back to the 1970s @cite_20 @cite_13 ; in this conception, scripts were composed of complex events with no probabilistic semantics, which were difficult to learn automatically. In recent years, a growing body of research has investigated learning probabilistic co-occurrence models with simpler events. propose a model of co-occurrence of (verb, dependency) pairs, which can be used to infer such pairs from documents; give a superior model in the same general framework. give a method of generalizing from single sequences of pair events to collections of such sequences. apply a discriminative language model to the (verb, dependency) sequence modeling task, raising the question of to what extent event inference can be performed with standard language models applied to event sequences. describe a method of learning a co-occurrence based model of verbs with multiple coreference-based entity arguments. | {
"cite_N": [
"@cite_13",
"@cite_20"
],
"mid": [
"2000900121",
"2121773050"
],
"abstract": [
"For both people and machines, each in their own way, there is a serious problem in common of making sense out of what they hear, see, or are told about the world. The conceptual apparatus necessary to perform even a partial feat of understanding is formidable and fascinating. Our analysis of this apparatus is what this book is about. Roger C. Schank and Robert P. Abelson from the Introduction (http: www.psypress.com scripts-plans-goals-and-understanding-9780898591385)",
"Abstract : A partial theory is presented of thinking, combining a number of classical and modern concepts from psychology, linguistics, and AI. In a new situation one selects from memory a structure called a frame: a remembered framework to be adapted to fit reality by changing details as necessary, and a data-structure for representing a stereotyped situation. Attached to each frame are several kinds of information -- how to use the frame, what one can expect to happen next, and what to do if these expectations are not confirmed. The report discusses collections of related frames that are linked together into frame-systems."
]
} |
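The statistical-script rows above infer implicit events from co-occurrence statistics over (verb, dependency) pairs. A minimal count-based sketch of that idea (the event chains below are invented; published models add smoothing, longer contexts, or discriminative scoring):

```python
# Minimal co-occurrence script model: count adjacent (verb, dependency)
# event pairs in training chains, then rank candidate next events for an
# observed event. Training chains here are invented toy data.
from collections import Counter

chains = [
    [('arrest', 'obj'), ('charge', 'obj'), ('convict', 'obj')],
    [('arrest', 'obj'), ('charge', 'obj'), ('acquit', 'obj')],
    [('sue', 'subj'), ('settle', 'subj')],
]

bigrams = Counter()
for chain in chains:
    for a, b in zip(chain, chain[1:]):
        bigrams[a, b] += 1

def predict_next(event):
    """Rank candidate next events by bigram count with the observed event."""
    cands = Counter({b: n for (a, b), n in bigrams.items() if a == event})
    return [e for e, _ in cands.most_common()]

assert predict_next(('arrest', 'obj')) == [('charge', 'obj')]
assert set(predict_next(('charge', 'obj'))) == {('convict', 'obj'), ('acquit', 'obj')}
```

Replacing these raw counts with an RNN over raw tokens is precisely the alternative the 1604.02993 abstract compares against.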
1604.02993 | 2342155211 | There is a small but growing body of research on statistical scripts, models of event sequences that allow probabilistic inference of implicit events from documents. These systems operate on structured verb-argument events produced by an NLP pipeline. We compare these systems with recent Recurrent Neural Net models that directly operate on raw tokens to predict sentences, finding the latter to be roughly comparable to the former in terms of predicting missing events in documents. | There is a body of related work focused on learning models of co-occurring events to automatically induce templates of complex events comprising multiple verbs and arguments, aimed ultimately at maximizing coherency of templates @cite_26 @cite_24 @cite_22 . give a model integrating various levels of event information of increasing abstraction, evaluating both on coherence of induced templates and log-likelihood of predictions of held-out events. describe a system that learns a model of co-occurring events and uses this model to automatically generate stories via a Genetic Algorithm. | {
"cite_N": [
"@cite_24",
"@cite_26",
"@cite_22"
],
"mid": [
"2950340514",
"2250836735",
"2252139350"
],
"abstract": [
"In natural-language discourse, related events tend to appear near each other to describe a larger scenario. Such structures can be formalized by the notion of a frame (a.k.a. template), which comprises a set of related events and prototypical participants and event transitions. Identifying frames is a prerequisite for information extraction and natural language generation, and is usually done manually. Methods for inducing frames have been proposed recently, but they typically use ad hoc procedures and are difficult to diagnose or extend. In this paper, we propose the first probabilistic approach to frame induction, which incorporates frames, events, participants as latent topics and learns those frame and event transitions that best explain the text. The number of frames is inferred by a novel application of a split-merge method from syntactic parsing. In end-to-end evaluations from text to induced frames and extracted facts, our method produced state-of-the-art results while substantially reducing engineering effort.",
"Event schema induction is the task of learning high-level representations of complex events (e.g., a bombing) and their entity roles (e.g., perpetrator and victim) from unlabeled text. Event schemas have important connections to early NLP research on frames and scripts, as well as modern applications like template extraction. Recent research suggests event schemas can be learned from raw text. Inspired by a pipelined learner based on named entity coreference, this paper presents the first generative model for schema induction that integrates coreference chains into learning. Our generative model is conceptually simpler than the pipelined approach and requires far less training data. It also provides an interesting contrast with a recent HMM-based model. We evaluate on a common dataset for template schema extraction. Our generative model matches the pipeline’s performance, and outperforms the HMM by 7 F1 points (20 ).",
"Chambers and Jurafsky (2009) demonstrated that event schemas can be automatically induced from text corpora. However, our analysis of their schemas identifies several weaknesses, e.g., some schemas lack a common topic and distinct roles are incorrectly mixed into a single actor. It is due in part to their pair-wise representation that treats subjectverb independently from verb-object. This often leads to subject-verb-object triples that are not meaningful in the real-world. We present a novel approach to inducing open-domain event schemas that overcomes these limitations. Our approach uses cooccurrence statistics of semantically typed relational triples, which we call Rel-grams (relational n-grams). In a human evaluation, our schemas outperform Chambers’s schemas by wide margins on several evaluation criteria. Both Rel-grams and event schemas are freely available to the research community."
]
} |
1604.02546 | 2338696281 | This paper presents a novel retrieval pipeline for video collections, which aims to retrieve the most significant parts of an edited video for a given query, and represent them with thumbnails which are at the same time semantically meaningful and aesthetically remarkable. Videos are first segmented into coherent and story-telling scenes, then a retrieval algorithm based on deep learning is proposed to retrieve the most significant scenes for a textual query. A ranking strategy based on deep features is finally used to tackle the problem of visualizing the best thumbnail. Qualitative and quantitative experiments are conducted on a collection of edited videos to demonstrate the effectiveness of our approach. | The process of producing thumbnails to represent video content has been widely studied. Most conventional methods for video thumbnail selection have focused on learning visual representativeness purely from visual content @cite_13 @cite_19 ; however, more recent research has focused on choosing query-dependent thumbnails to supply specific thumbnails for different queries. Craggs et al. @cite_22 introduced the concept that thumbnails are surrogates for videos, as they take the place of a video in search results. Therefore, they may not accurately represent the content of the video, and create an intention gap, i.e. a discrepancy between the information sought by the user and the actual content of the video. To reduce the intention gap, they propose a new kind of animated preview, constructed of frames taken from a full video, and a crowdsourced tagging process which enables the matching between query terms and videos. Their system, while going in the right direction, suffers from the need for manual annotations, which are often expensive and difficult to obtain. | {
"cite_N": [
"@cite_19",
"@cite_13",
"@cite_22"
],
"mid": [
"2163527813",
"1966451496",
"2054235944"
],
"abstract": [
"The power of video over still images is the ability to represent dynamic activities. But video browsing and retrieval are inconvenient due to inherent spatio-temporal redundancies, where some time intervals may have no activity, or have activities that occur in a small image region. Video synopsis aims to provide a compact video representation, while preserving the essential activities of the original video. We present dynamic video synopsis, where most of the activity in the video is condensed by simultaneously showing several actions, even when they originally occurred at different times. For example, we can create a \"stroboscopic movie\", where multiple dynamic instances of a moving object are played simultaneously. This is an extension of the still stroboscopic picture. Previous approaches for video abstraction addressed mostly the temporal redundancy by selecting representative key-frames or time intervals. In dynamic video synopsis the activity is shifted into a significantly shorter period, in which the activity is much denser. Video examples can be found online in http: www.vision.huji.ac.il synopsis",
"With the rapid explosion of video data, compact representation of videos is becoming more and more desirable for efficient browsing and communication, which leads to a number of research works on video summarization in recent years. Among these works, summaries based on a set of still frames are frequently studied and applied due to its high compactness. However, the representativeness of the selected frames, which are taken as the compact representation of the video or video segment, has not been well studied. It is observed that frame representativeness is highly related to the following elements: image quality, user attention measure, visual details, and displaying duration. It is also observed that users have similar tendency in selecting the most representative frame for a certain video segment. In this paper, we developed a method to examine and evaluate the representativeness of video frames based on learning users' perceptive evaluations.",
"During online search, the user's expectations often differ from those of the author. This is known as the \"intention gap\" and is particularly problematic when searching for and discriminating between online video content. An author uses description and meta-data tags to label their content, but often cannot predict alternate interpretations or appropriations of their work. To address this intention gap, we present ThumbReels, a concept for query-sensitive video previews generated from crowdsourced, temporally defined semantic tagging. Further, we supply an open-source tool that supports on-the-fly temporal tagging of videos, whose output can be used for later search queries. A first user study validates the tool and concept. We then present a second study that shows participants found ThumbReels to better represent search terms than contemporary preview techniques."
]
} |
1604.02546 | 2338696281 | This paper presents a novel retrieval pipeline for video collections, which aims to retrieve the most significant parts of an edited video for a given query, and represent them with thumbnails which are at the same time semantically meaningful and aesthetically remarkable. Videos are first segmented into coherent and story-telling scenes, then a retrieval algorithm based on deep learning is proposed to retrieve the most significant scenes for a textual query. A ranking strategy based on deep features is finally used to tackle the problem of visualizing the best thumbnail. Qualitative and quantitative experiments are conducted on a collection of edited videos to demonstrate the effectiveness of our approach. | In @cite_15 , instead, the authors proposed a method to enforce the representativeness of a selected thumbnail given a user query, by using a reinforcement algorithm to rank frames in each video and a relevance model to calculate the similarity between the video frames and the query keywords. Recently, Liu et al. @cite_21 trained a deep visual-semantic embedding to retrieve query-dependent video thumbnails. Their method employs a deeply-learned model to directly compute the similarity between a query and video thumbnails, by mapping them into a common latent semantic space. | {
"cite_N": [
"@cite_15",
"@cite_21"
],
"mid": [
"1990089425",
"1958932515"
],
"abstract": [
"With the fast rising of the video sharing websites, the online video becomes an important media for people to share messages, interests, ideas, beliefs, etc. In this paper, we propose a novel approach to dynamically generate the web video thumbnails according to user's query. Two issues are addressed: the video content representativeness of the selected video thumbnail, and the relationship between the selected video thumbnail and the user's query. For the first issue the reinforcement based algorithm is adopted to rank the frames in each video. For the second issue the relevance model based method is employed to calculate the similarity between the video frames and the query keywords. The final video thumbnail is generated by linear fusion of the above two scores. Compared with the existing web video thumbnails, which only reflect the preference of the video owner, the thumbnails generated in our approach not only consider the video content representativeness of the frame, but also reflect the intention of the video searcher. In order to show the effectiveness of the proposed method, experiments are conducted on the videos selected from the video sharing website. Experimental results and subjective evaluations demonstrate that the proposed method is effective and can meet the user's intention requirement.",
"Given the tremendous growth of online videos, video thumbnail, as the common visualization form of video content, is becoming increasingly important to influence user's browsing and searching experience. However, conventional methods for video thumbnail selection often fail to produce satisfying results as they ignore the side semantic information (e.g., title, description, and query) associated with the video. As a result, the selected thumbnail cannot always represent video semantics and the click-through rate is adversely affected even when the retrieved videos are relevant. In this paper, we have developed a multi-task deep visual-semantic embedding model, which can automatically select query-dependent video thumbnails according to both visual and side information. Different from most existing methods, the proposed approach employs the deep visual-semantic embedding model to directly compute the similarity between the query and video thumbnails by mapping them into a common latent semantic space, where even unseen query-thumbnail pairs can be correctly matched. In particular, we train the embedding model by exploring the large-scale and freely accessible click-through video and image data, as well as employing a multi-task learning strategy to holistically exploit the query-thumbnail relevance from these two highly related datasets. Finally, a thumbnail is selected by fusing both the representative and query relevance scores. The evaluations on 1,000 query-thumbnail dataset labeled by 191 workers in Amazon Mechanical Turk have demonstrated the effectiveness of our proposed method."
]
} |
1604.02546 | 2338696281 | This paper presents a novel retrieval pipeline for video collections, which aims to retrieve the most significant parts of an edited video for a given query, and represent them with thumbnails which are at the same time semantically meaningful and aesthetically remarkable. Videos are first segmented into coherent and story-telling scenes, then a retrieval algorithm based on deep learning is proposed to retrieve the most significant scenes for a textual query. A ranking strategy based on deep features is finally used to tackle the problem of visualizing the best thumbnail. Qualitative and quantitative experiments are conducted on a collection of edited videos to demonstrate the effectiveness of our approach. | On a different note, a lot of work has also been proposed for video retrieval: with the explosive growth of online videos, this has become a hot topic in computer vision. In their seminal work, Sivic et al. proposed @cite_16 , a system that retrieves videos from a database via bag-of-words matching. Lew et al. @cite_11 reviewed earlier efforts in video retrieval, which mostly relied on feature-based relevance feedback or similar methods. | {
"cite_N": [
"@cite_16",
"@cite_11"
],
"mid": [
"2131846894",
"2147069236"
],
"abstract": [
"We describe an approach to object and scene retrieval which searches for and localizes all the occurrences of a user outlined object in a video. The object is represented by a set of viewpoint invariant region descriptors so that recognition can proceed successfully despite changes in viewpoint, illumination and partial occlusion. The temporal continuity of the video within a shot is used to track the regions in order to reject unstable regions and reduce the effects of noise in the descriptors. The analogy with text retrieval is in the implementation where matches on descriptors are pre-computed (using vector quantization), and inverted file systems and document rankings are used. The result is that retrieved is immediate, returning a ranked list of key frames shots in the manner of Google. The method is illustrated for matching in two full length feature films.",
"Extending beyond the boundaries of science, art, and culture, content-based multimedia information retrieval provides new paradigms and methods for searching through the myriad variety of media all over the world. This survey reviews 100p recent articles on content-based multimedia information retrieval and discusses their role in current research directions which include browsing and search paradigms, user studies, affective computing, learning, semantic queries, new features and media types, high performance indexing, and evaluation techniques. Based on the current state of the art, we discuss the major challenges for the future."
]
} |
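The bag-of-words video retrieval of @cite_16 in the row above borrows from text retrieval: quantized region descriptors become "visual words", and keyframes are ranked with an inverted file. A sketch with invented data (the frame names and word IDs are hypothetical; the real system adds tf-idf weighting and spatial verification):

```python
# Sketch of the text-retrieval analogy for video frames: each keyframe is
# a bag of quantized "visual words"; retrieval uses an inverted index and
# cosine similarity over term-frequency vectors. All data is invented.
from collections import Counter
from math import sqrt

frames = {
    'shot1': ['w3', 'w7', 'w7', 'w9'],
    'shot2': ['w1', 'w2', 'w3'],
    'shot3': ['w7', 'w9', 'w9', 'w9'],
}

# Inverted index: visual word -> set of frames containing it.
index = {}
for frame, words in frames.items():
    for w in words:
        index.setdefault(w, set()).add(frame)

def cosine(a, b):
    num = sum(a[w] * b[w] for w in a.keys() & b.keys())
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den

def search(query_words):
    """Rank only the frames the inverted index says share a word with the query."""
    q = Counter(query_words)
    candidates = set().union(*(index.get(w, set()) for w in q))
    return sorted(candidates,
                  key=lambda f: cosine(q, Counter(frames[f])),
                  reverse=True)

assert search(['w7', 'w9'])[0] == 'shot3'
```

The inverted index is what makes retrieval "immediate" in the Google-like sense: scoring touches only frames sharing at least one visual word with the query.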
1604.02546 | 2338696281 | This paper presents a novel retrieval pipeline for video collections, which aims to retrieve the most significant parts of an edited video for a given query, and represent them with thumbnails which are at the same time semantically meaningful and aesthetically remarkable. Videos are first segmented into coherent and story-telling scenes, then a retrieval algorithm based on deep learning is proposed to retrieve the most significant scenes for a textual query. A ranking strategy based on deep features is finally used to tackle the problem of visualizing the best thumbnail. Qualitative and quantitative experiments are conducted on a collection of edited videos to demonstrate the effectiveness of our approach. | Recently, concept-based methods have emerged as a popular approach to video retrieval. Snoek et al. @cite_9 proposed a method based on a set of concept detectors, with the aim of bridging the semantic gap between visual features and high level concepts. In @cite_2 , the authors proposed a video retrieval approach based on tag propagation: given an input video with user-defined tags, Flickr, Google Images and Bing are mined to collect images with similar tags: these are used to label each temporal segment of the video, so that the method increases the number of tags originally proposed by the users, and localizes them temporally. Our method, in contrast, does not need any kind of manual annotation, but is applicable to edited video only. | {
"cite_N": [
"@cite_9",
"@cite_2"
],
"mid": [
"2139882085",
"2294432969"
],
"abstract": [
"In this paper, we propose an automatic video retrieval method based on high-level concept detectors. Research in video analysis has reached the point where over 100 concept detectors can be learned in a generic fashion, albeit with mixed performance. Such a set of detectors is very small still compared to ontologies aiming to capture the full vocabulary a user has. We aim to throw a bridge between the two fields by building a multimedia thesaurus, i.e., a set of machine learned concept detectors that is enriched with semantic descriptions and semantic structure obtained from WordNet. Given a multimodal user query, we identify three strategies to select a relevant detector from this thesaurus, namely: text matching, ontology querying, and semantic visual querying. We evaluate the methods against the automatic search task of the TRECVID 2005 video retrieval benchmark, using a news video archive of 85 h in combination with a thesaurus of 363 machine learned concept detectors. We assess the influence of thesaurus size on video search performance, evaluate and compare the multimodal selection strategies for concept detectors, and finally discuss their combined potential using oracle fusion. The set of queries in the TRECVID 2005 corpus is too small for us to be definite in our conclusions, but the results suggest promising new lines of research.",
"Our approach locates the temporal positions of tags in videos at the keyframe level.We deal with a scenario in which there is no pre-defined set of tags.We report experiments about the use of different web sources (Flickr, Google, Bing).We show state-of-the-art results on DUT-WEBV, a large dataset of YouTube videos.We show results in a real-world scenario to perform open vocabulary tag annotation. Tagging of visual content is becoming more and more widespread as web-based services and social networks have popularized tagging functionalities among their users. These user-generated tags are used to ease browsing and exploration of media collections, e.g.?using tag clouds, or to retrieve multimedia content. However, not all media are equally tagged by users. Using the current systems is easy to tag a single photo, and even tagging a part of a photo, like a face, has become common in sites like Flickr and Facebook. On the other hand, tagging a video sequence is more complicated and time consuming, so that users just tag the overall content of a video. In this paper we present a method for automatic video annotation that increases the number of tags originally provided by users, and localizes them temporally, associating tags to keyframes. Our approach exploits collective knowledge embedded in user-generated tags and web sources, and visual similarity of keyframes and images uploaded to social sites like YouTube and Flickr, as well as web sources like Google and Bing. Given a keyframe, our method is able to select \"on the fly\" from these visual sources the training exemplars that should be the most relevant for this test sample, and proceeds to transfer labels across similar images. Compared to existing video tagging approaches that require training classifiers for each tag, our system has few parameters, is easy to implement and can deal with an open vocabulary scenario. 
We demonstrate the approach on tag refinement and localization on DUT-WEBV, a large dataset of web videos, and show state-of-the-art results."
]
} |
1604.02271 | 2340793067 | This paper addresses a fundamental problem of scene understanding: How to parse the scene image into a structured configuration (i.e., a semantic object hierarchy with object interaction relations) that finely accords with human perception. We propose a deep architecture consisting of two networks: i) a convolutional neural network (CNN) extracting the image representation for pixelwise object labeling and ii) a recursive neural network (RNN) discovering the hierarchical object structure and the inter-object relations. Rather than relying on elaborative user annotations (e.g., manually labeling semantic maps and relations), we train our deep model in a weakly-supervised manner by leveraging the descriptive sentences of the training images. Specifically, we decompose each sentence into a semantic tree consisting of nouns and verb phrases, and facilitate these trees discovering the configurations of the training images. Once these scene configurations are determined, then the parameters of both the CNN and RNN are updated accordingly by back propagation. The entire model training is accomplished through an Expectation-Maximization method. Extensive experiments suggest that our model is capable of producing meaningful and structured scene configurations and achieving more favorable scene labeling performance on PASCAL VOC 2012 over other state-of-the-art weakly-supervised methods. | Scene understanding is arguably considered as the most fundamental problem in computer vision, which actually involves several tasks of different level. In current research, a myriad of different methods focus on what general scene type the image shows (classification) @cite_37 @cite_19 @cite_4 , what objects and their locations are in a scene (semantic labeling or segmentation) @cite_14 @cite_34 @cite_30 @cite_21 . These methods, however, ignore or over-simplified the compositional object representations and would fail to gain a deeper scene understanding. | {
"cite_N": [
"@cite_30",
"@cite_37",
"@cite_14",
"@cite_4",
"@cite_21",
"@cite_19",
"@cite_34"
],
"mid": [
"1938976761",
"2125560515",
"",
"2119525058",
"2076874408",
"2937970997",
"2545985378"
],
"abstract": [
"We introduce a purely feed-forward architecture for semantic segmentation. We map small image elements (superpixels) to rich feature representations extracted from a sequence of nested regions of increasing extent. These regions are obtained by “zooming out” from the superpixel all the way to scene-level resolution. This approach exploits statistical structure in the image and in the label space without setting up explicit structured prediction mechanisms, and thus avoids complex and expensive inference. Instead superpixels are classified by a feedforward multilayer network. Our architecture achieves 69.6% average accuracy on the PASCAL VOC 2012 test set.",
"We present a probabilistic generative model of visual attributes, together with an efficient learning algorithm. Attributes are visual qualities of objects, such as 'red', 'striped', or 'spotted'. The model sees attributes as patterns of image segments, repeatedly sharing some characteristic properties. These can be any combination of appearance, shape, or the layout of segments within the pattern. Moreover, attributes with general appearance are taken into account, such as the pattern of alternation of any two colors which is characteristic for stripes. To enable learning from unsegmented training images, the model is learnt discriminatively, by optimizing a likelihood ratio. As demonstrated in the experimental evaluation, our model can learn in a weakly supervised setting and encompasses a broad range of attributes. We show that attributes can be learnt starting from a text query to Google image search, and can then be used to recognize the attribute and determine its spatial extent in novel real-world images.",
"",
"Fine-grained categorization refers to the task of classifying objects that belong to the same basic-level class (e.g. different bird species) and share similar shape or visual appearances. Most of the state-of-the-art basic-level object classification algorithms have difficulties in this challenging problem. One reason for this can be attributed to the popular codebook-based image representation, often resulting in loss of subtle image information that are critical for fine-grained classification. Another way to address this problem is to introduce human annotations of object attributes or key points, a tedious process that is also difficult to generalize to new tasks. In this work, we propose a codebook-free and annotation-free approach for fine-grained image categorization. Instead of using vector-quantized codewords, we obtain an image representation by running a high throughput template matching process using a large number of randomly generated image templates. We then propose a novel bagging-based algorithm to build a final classifier by aggregating a set of discriminative yet largely uncorrelated classifiers. Experimental results show that our method outperforms state-of-the-art classification approaches on the Caltech-UCSD Birds dataset.",
"This work proposes a method to interpret a scene by assigning a semantic label at every pixel and inferring the spatial extent of individual object instances together with their occlusion relationships. Starting with an initial pixel labeling and a set of candidate object masks for a given test image, we select a subset of objects that explain the image well and have valid overlap relationships and occlusion ordering. This is done by minimizing an integer quadratic program either using a greedy method or a standard solver. Then we alternate between using the object predictions to refine the pixel labels and vice versa. The proposed system obtains promising results on two challenging subsets of the LabelMe and SUN datasets, the largest of which contains 45,676 images and 232 classes.",
"Many state-of-the-art approaches for object recognition reduce the problem to a 0-1 classification task. This allows one to leverage sophisticated machine learning techniques for training classifiers from labeled examples. However, these models are typically trained independently for each class using positive and negative examples cropped from images. At test-time, various post-processing heuristics such as non-maxima suppression (NMS) are required to reconcile multiple detections within and between different classes for each image. Though crucial to good performance on benchmarks, this post-processing is usually defined heuristically. We introduce a unified model for multi-class object recognition that casts the problem as a structured prediction task. Rather than predicting a binary label for each image window independently, our model simultaneously predicts a structured labeling of the entire image (Fig. 1). Our model learns statistics that capture the spatial arrangements of various object classes in real images, both in terms of which arrangements to suppress through NMS and which arrangements to favor through spatial co-occurrence statistics. We formulate parameter estimation in our model as a max-margin learning problem. Given training images with ground-truth object locations, we show how to formulate learning as a convex optimization problem. We employ the cutting plane algorithm of (Mach. Learn. 2009) to efficiently learn a model from thousands of training images. We show state-of-the-art results on the PASCAL VOC benchmark that indicate the benefits of learning a global model encapsulating the spatial layout of multiple object classes (a preliminary version of this work appeared in ICCV 2009, , IEEE international conference on computer vision, 2009).",
"We propose a method to identify and localize object classes in images. Instead of operating at the pixel level, we advocate the use of superpixels as the basic unit of a class segmentation or pixel localization scheme. To this end, we construct a classifier on the histogram of local features found in each superpixel. We regularize this classifier by aggregating histograms in the neighborhood of each superpixel and then refine our results further by using the classifier in a conditional random field operating on the superpixel graph. Our proposed method exceeds the previously published state-of-the-art on two challenging datasets: Graz-02 and the PASCAL VOC 2007 Segmentation Challenge."
]
} |
1604.02271 | 2340793067 | This paper addresses a fundamental problem of scene understanding: How to parse the scene image into a structured configuration (i.e., a semantic object hierarchy with object interaction relations) that finely accords with human perception. We propose a deep architecture consisting of two networks: i) a convolutional neural network (CNN) extracting the image representation for pixelwise object labeling and ii) a recursive neural network (RNN) discovering the hierarchical object structure and the inter-object relations. Rather than relying on elaborative user annotations (e.g., manually labeling semantic maps and relations), we train our deep model in a weakly-supervised manner by leveraging the descriptive sentences of the training images. Specifically, we decompose each sentence into a semantic tree consisting of nouns and verb phrases, and facilitate these trees discovering the configurations of the training images. Once these scene configurations are determined, then the parameters of both the CNN and RNN are updated accordingly by back propagation. The entire model training is accomplished through an Expectation-Maximization method. Extensive experiments suggest that our model is capable of producing meaningful and structured scene configurations and achieving more favorable scene labeling performance on PASCAL VOC 2012 over other state-of-the-art weakly-supervised methods. | Meanwhile, as a higher-level task, structured scene parsing has also attracted much attention. A pioneer work was proposed by , @cite_20 , in which they mainly focused on faces and texture patterns by a Bayesian inference framework. In @cite_13 , , proposed to hierarchically parse the indoor scene images by developing a generative grammar model. A hierarchical model was proposed in @cite_3 to represent the image recursively by contextualized templates at multiple scales, and the rapid inference was realized based on dynamic programming. 
, @cite_27 developed a connected segmentation tree for object and scene parsing. Some other related works @cite_2 @cite_5 investigated the approaches for RGB-D scene understanding, achieving impressive results. | {
"cite_N": [
"@cite_3",
"@cite_27",
"@cite_2",
"@cite_5",
"@cite_13",
"@cite_20"
],
"mid": [
"1980124385",
"2154086615",
"125693051",
"2067912884",
"2139381543",
"2056860348"
],
"abstract": [
"In this paper, we propose a Hierarchical Image Model (HIM) which parses images to perform segmentation and object recognition. The HIM represents the image recursively by segmentation and recognition templates at multiple levels of the hierarchy. This has advantages for representation, inference, and learning. First, the HIM has a coarse-to-fine representation which is capable of capturing long-range dependency and exploiting different levels of contextual information (similar to how natural language models represent sentence structure in terms of hierarchical representations such as verb and noun phrases). Second, the structure of the HIM allows us to design a rapid inference algorithm, based on dynamic programming, which yields the first polynomial time algorithm for image labeling. Third, we learn the HIM efficiently using machine learning methods from a labeled data set. We demonstrate that the HIM is comparable with the state-of-the-art methods by evaluation on the challenging public MSRC and PASCAL VOC 2007 image data sets.",
"This paper proposes a new object representation, called connected segmentation tree (CST), which captures canonical characteristics of the object in terms of the photometric, geometric, and spatial adjacency and containment properties of its constituent image regions. CST is obtained by augmenting the object's segmentation tree (ST) with inter-region neighbor links, in addition to their recursive embedding structure already present in ST. This makes CST a hierarchy of region adjacency graphs. A region's neighbors are computed using an extension to regions of the Voronoi diagram for point patterns. Unsupervised learning of the CST model of a category is formulated as matching the CST graph representations of unlabeled training images, and fusing their maximally matching subgraphs. A new learning algorithm is proposed that optimizes the model structure by simultaneously searching for both the most salient nodes (regions) and the most salient edges (containment and neighbor relationships of regions) across the image graphs. Matching of the category model to the CST of a new image results in simultaneous detection, segmentation and recognition of all occurrences of the category, and a semantic explanation of these results.",
"We present an approach to interpret the major surfaces, objects, and support relations of an indoor scene from an RGBD image. Most existing work ignores physical interactions or is applied only to tidy rooms and hallways. Our goal is to parse typical, often messy, indoor scenes into floor, walls, supporting surfaces, and object regions, and to recover support relationships. One of our main interests is to better understand how 3D cues can best inform a structured 3D interpretation. We also contribute a novel integer programming formulation to infer physical support relations. We offer a new dataset of 1449 RGBD images, capturing 464 diverse indoor scenes, with detailed annotations. Our experiments demonstrate our ability to infer support relations in complex scenes and verify that our 3D scene cues and inferred support lead to better object segmentation.",
"We address the problems of contour detection, bottom-up grouping and semantic segmentation using RGB-D data. We focus on the challenging setting of cluttered indoor scenes, and evaluate our approach on the recently introduced NYU-Depth V2 (NYUD2) dataset [27]. We propose algorithms for object boundary detection and hierarchical segmentation that generalize the gPb-ucm approach of [2] by making effective use of depth information. We show that our system can label each contour with its type (depth, normal or albedo). We also propose a generic method for long-range amodal completion of surfaces and show its effectiveness in grouping. We then turn to the problem of semantic segmentation and propose a simple approach that classifies super pixels into the 40 dominant object categories in NYUD2. We use both generic and class-specific features to encode the appearance and geometry of objects. We also show how our approach can be used for scene classification, and how this contextual information in turn improves object recognition. In all of these tasks, we report significant improvements over the state-of-the-art.",
"This paper presents a simple attribute graph grammar as a generative representation for man-made scenes, such as buildings, hallways, kitchens, and living rooms, and studies an effective top-down bottom-up inference algorithm for parsing images in the process of maximizing a Bayesian posterior probability or equivalently minimizing a description length (MDL). Given an input image, the inference algorithm computes (or constructs) a parse graph, which includes a parse tree for the hierarchical decomposition and a number of spatial constraints. In the inference algorithm, the bottom-up step detects an excessive number of rectangles as weighted candidates, which are sorted in certain order and activate top-down predictions of occluded or missing components through the grammar rules. In the experiment, we show that the grammar and top-down inference can largely improve the performance of bottom-up detection.",
"In this paper we present a Bayesian framework for parsing images into their constituent visual patterns. The parsing algorithm optimizes the posterior probability and outputs a scene representation as a \"parsing graph\", in a spirit similar to parsing sentences in speech and natural language. The algorithm constructs the parsing graph and re-configures it dynamically using a set of moves, which are mostly reversible Markov chain jumps. This computational framework integrates two popular inference approaches--generative (top-down) methods and discriminative (bottom-up) methods. The former formulates the posterior probability in terms of generative models for images defined by likelihood functions and priors. The latter computes discriminative probabilities based on a sequence (cascade) of bottom-up tests/filters. In our Markov chain algorithm design, the posterior probability, defined by the generative models, is the invariant (target) probability for the Markov chain, and the discriminative probabilities are used to construct proposal probabilities to drive the Markov chain. Intuitively, the bottom-up discriminative probabilities activate top-down generative models. In this paper, we focus on two types of visual patterns--generic visual patterns, such as texture and shading, and object patterns including human faces and text. These types of patterns compete and cooperate to explain the image and so image parsing unifies image segmentation, object detection, and recognition (if we use generic visual patterns only then image parsing will correspond to image segmentation (Tu and Zhu, 2002. IEEE Trans. PAMI, 24(5):657--673). We illustrate our algorithm on natural images of complex city scenes and show examples where image segmentation can be improved by allowing object specific knowledge to disambiguate low-level segmentation cues, and conversely where object detection can be improved by using generic visual patterns to explain away shadows and occlusions."
]
} |
1604.02271 | 2340793067 | This paper addresses a fundamental problem of scene understanding: How to parse the scene image into a structured configuration (i.e., a semantic object hierarchy with object interaction relations) that finely accords with human perception. We propose a deep architecture consisting of two networks: i) a convolutional neural network (CNN) extracting the image representation for pixelwise object labeling and ii) a recursive neural network (RNN) discovering the hierarchical object structure and the inter-object relations. Rather than relying on elaborative user annotations (e.g., manually labeling semantic maps and relations), we train our deep model in a weakly-supervised manner by leveraging the descriptive sentences of the training images. Specifically, we decompose each sentence into a semantic tree consisting of nouns and verb phrases, and facilitate these trees discovering the configurations of the training images. Once these scene configurations are determined, then the parameters of both the CNN and RNN are updated accordingly by back propagation. The entire model training is accomplished through an Expectation-Maximization method. Extensive experiments suggest that our model is capable of producing meaningful and structured scene configurations and achieving more favorable scene labeling performance on PASCAL VOC 2012 over other state-of-the-art weakly-supervised methods. | With the resurgence of neural network models, the performances of scene understanding have been improved substantially. The representative works, the fully convolutional network (FCN) @cite_32 and its extensions @cite_22 , demonstrate effectiveness in pixel-wise scene labeling. A recurrent neural network model was proposed in @cite_11 , which improves the segmentation performance by incorporating the mean-field approximate inference, and similar idea was also explored in @cite_38 . 
For the problem of structured scene parsing, recursive neural networks (RNNs) were studied in @cite_29 @cite_23 . For example, @cite_29 proposed to predict hierarchical scene structures by using a max-margin RNN model. The differences between these existing RNN-based parsing models and our model are two-fold. First, they mainly focused on parsing only the semantic entities ( , buildings, bikes, trees) and the scene configurations generated by ours include not only the objects but also the interaction relations of objects. Second, we incorporate convolutional feature learning into our deep model for joint optimization. | {
"cite_N": [
"@cite_38",
"@cite_22",
"@cite_29",
"@cite_32",
"@cite_23",
"@cite_11"
],
"mid": [
"2111077768",
"1923697677",
"1423339008",
"2952632681",
"2951221812",
""
],
"abstract": [
"This paper addresses semantic image segmentation by incorporating rich information into Markov Random Field (MRF), including high-order relations and mixture of label contexts. Unlike previous works that optimized MRFs using iterative algorithm, we solve MRF by proposing a Convolutional Neural Network (CNN), namely Deep Parsing Network (DPN), which enables deterministic end-to-end computation in a single forward pass. Specifically, DPN extends a contemporary CNN architecture to model unary terms and additional layers are carefully devised to approximate the mean field algorithm (MF) for pairwise terms. It has several appealing properties. First, different from the recent works that combined CNN and MRF, where many iterations of MF were required for each training image during back-propagation, DPN is able to achieve high performance by approximating one iteration of MF. Second, DPN represents various types of pairwise terms, making many existing works as its special cases. Third, DPN makes MF easier to be parallelized and speeded up in Graphical Processing Unit (GPU). DPN is thoroughly evaluated on the PASCAL VOC 2012 dataset, where a single DPN model yields a new state-of-the-art segmentation accuracy of 77.5%.",
"Deep Convolutional Neural Networks (DCNNs) have recently shown state of the art performance in high level vision tasks, such as image classification and object detection. This work brings together methods from DCNNs and probabilistic graphical models for addressing the task of pixel-level classification (also called \"semantic image segmentation\"). We show that responses at the final layer of DCNNs are not sufficiently localized for accurate object segmentation. This is due to the very invariance properties that make DCNNs good for high level tasks. We overcome this poor localization property of deep networks by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF). Qualitatively, our \"DeepLab\" system is able to localize segment boundaries at a level of accuracy which is beyond previous methods. Quantitatively, our method sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 71.6% IOU accuracy in the test set. We show how these results can be obtained efficiently: Careful network re-purposing and a novel application of the 'hole' algorithm from the wavelet community allow dense computation of neural net responses at 8 frames per second on a modern GPU.",
"Recursive structure is commonly found in the inputs of different modalities such as natural scene images or natural language sentences. Discovering this recursive structure helps us to not only identify the units that an image or sentence contains but also how they interact to form a whole. We introduce a max-margin structure prediction architecture based on recursive neural networks that can successfully recover such structure both in complex scene images as well as sentences. The same algorithm can be used both to provide a competitive syntactic parser for natural language sentences from the Penn Treebank and to outperform alternative approaches for semantic scene segmentation, annotation and classification. For segmentation and annotation our algorithm obtains a new level of state-of-the-art performance on the Stanford background dataset (78.1%). The features from the image parse tree outperform Gist descriptors for scene classification by 4%.",
"Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build \"fully convolutional\" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a novel architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20% relative improvement to 62.2% mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes one third of a second for a typical image.",
"This paper proposes a learning-based approach to scene parsing inspired by the deep Recursive Context Propagation Network (RCPN). RCPN is a deep feed-forward neural network that utilizes the contextual information from the entire image, through bottom-up followed by top-down context propagation via random binary parse trees. This improves the feature representation of every super-pixel in the image for better classification into semantic categories. We analyze RCPN and propose two novel contributions to further improve the model. We first analyze the learning of RCPN parameters and discover the presence of bypass error paths in the computation graph of RCPN that can hinder contextual propagation. We propose to tackle this problem by including the classification loss of the internal nodes of the random parse trees in the original RCPN loss function. Secondly, we use an MRF on the parse tree nodes to model the hierarchical dependency present in the output. Both modifications provide performance boosts over the original RCPN and the new system achieves state-of-the-art performance on Stanford Background, SIFT-Flow and Daimler urban datasets.",
""
]
} |
1604.02271 | 2340793067 | This paper addresses a fundamental problem of scene understanding: How to parse the scene image into a structured configuration (i.e., a semantic object hierarchy with object interaction relations) that finely accords with human perception. We propose a deep architecture consisting of two networks: i) a convolutional neural network (CNN) extracting the image representation for pixelwise object labeling and ii) a recursive neural network (RNN) discovering the hierarchical object structure and the inter-object relations. Rather than relying on elaborative user annotations (e.g., manually labeling semantic maps and relations), we train our deep model in a weakly-supervised manner by leveraging the descriptive sentences of the training images. Specifically, we decompose each sentence into a semantic tree consisting of nouns and verb phrases, and facilitate these trees discovering the configurations of the training images. Once these scene configurations are determined, then the parameters of both the CNN and RNN are updated accordingly by back propagation. The entire model training is accomplished through an Expectation-Maximization method. Extensive experiments suggest that our model is capable of producing meaningful and structured scene configurations and achieving more favorable scene labeling performance on PASCAL VOC 2012 over other state-of-the-art weakly-supervised methods. | Most of the existing scene labeling parsing models are studied in the context of supervised learning, and they rely on expensive annotations. To overcome this issue, one can develop alternative methods that train the models from weakly annotated training data, e.g., image-level tags and contexts @cite_17 @cite_18 @cite_6 . Among these methods, one inspiring us is @cite_6 , which adopts an EM learning algorithm for training the model with image-level semantic labels. 
This algorithm alternates between predicting the latent pixel labels subject to the weak annotation constraints and optimizing the neural network parameters. | {
"cite_N": [
"@cite_18",
"@cite_6",
"@cite_17"
],
"mid": [
"1931270512",
"2221898772",
"2026581312"
],
"abstract": [
"Multiple instance learning (MIL) can reduce the need for costly annotation in tasks such as semantic segmentation by weakening the required degree of supervision. We propose a novel MIL formulation of multi-class semantic segmentation learning by a fully convolutional network. In this setting, we seek to learn a semantic segmentation model from just weak image-level labels. The model is trained end-to-end to jointly optimize the representation while disambiguating the pixel-image label assignment. Fully convolutional training accepts inputs of any size, does not need object proposal pre-processing, and offers a pixelwise loss map for selecting latent instances. Our multi-class MIL loss exploits the further supervision given by images with multiple labels. We evaluate this approach through preliminary experiments on the PASCAL VOC segmentation challenge.",
"Deep convolutional neural networks (DCNNs) trained on a large number of images with strong pixel-level annotations have recently significantly pushed the state-of-art in semantic image segmentation. We study the more challenging problem of learning DCNNs for semantic image segmentation from either (1) weakly annotated training data such as bounding boxes or image-level labels or (2) a combination of few strongly labeled and many weakly labeled images, sourced from one or multiple datasets. We develop Expectation-Maximization (EM) methods for semantic image segmentation model training under these weakly supervised and semi-supervised settings. Extensive experimental evaluation shows that the proposed techniques can learn models delivering competitive results on the challenging PASCAL VOC 2012 image segmentation benchmark, while requiring significantly less annotation effort. We share source code implementing the proposed system at https: bitbucket.org deeplab deeplab-public.",
"We address the problem of weakly supervised semantic segmentation. The training images are labeled only by the classes they contain, not by their location in the image. On test images instead, the method must predict a class label for every pixel. Our goal is to enable segmentation algorithms to use multiple visual cues in this weakly supervised setting, analogous to what is achieved by fully supervised methods. However, it is difficult to assess the relative usefulness of different visual cues from weakly supervised training data. We define a parametric family of structured models, were each model weights visual cues in a different way. We propose a Maximum Expected Agreement model selection principle that evaluates the quality of a model from the family without looking at superpixel labels. Searching for the best model is a hard optimization problem, which has no analytic gradient and multiple local optima. We cast it as a Bayesian optimization problem and propose an algorithm based on Gaussian processes to efficiently solve it. Our second contribution is an Extremely Randomized Hashing Forest that represents diverse superpixel features as a sparse binary vector. It enables using appearance models of visual classes that are fast at training and testing and yet accurate. Experiments on the SIFT-flow dataset show a significant improvement over previous weakly supervised methods and even over some fully supervised methods."
]
} |
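The alternation described in the related-work text above — predict the latent pixel labels subject to the weak, image-level annotation constraint, then optimize the network parameters toward them — can be sketched as one EM-style step. This is a toy illustration, not code from any cited system; the scores, the helper name, and the one-hot M-step targets stand in for a real network update:

```python
import numpy as np

def em_weak_seg_step(scores, image_labels):
    """One EM-style step for weakly supervised segmentation:
    the E-step predicts latent pixel labels subject to the image-level
    constraint; the returned one-hot targets would drive the M-step
    (a gradient update of the network parameters, omitted here)."""
    # E-step: a pixel may only take a class present in the image.
    masked = np.full_like(scores, -np.inf)
    masked[:, image_labels] = scores[:, image_labels]
    pixel_labels = masked.argmax(axis=1)
    # Pseudo-label targets for the M-step.
    targets = np.zeros_like(scores)
    targets[np.arange(len(pixel_labels)), pixel_labels] = 1.0
    return pixel_labels, targets

# Toy image: 4 pixels, 3 classes; the weak annotation says only
# classes {0, 2} appear in this image.
scores = np.array([[0.1, 0.9, 0.3],
                   [0.8, 0.2, 0.1],
                   [0.2, 0.3, 0.7],
                   [0.4, 0.6, 0.5]])
labels, targets = em_weak_seg_step(scores, [0, 2])
print(labels)  # → [2 0 2 2]; class 1 is never assigned despite high scores
```

Iterating this step with a real segmentation loss recovers the alternating scheme the text describes.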
1604.02509 | 2341077648 | We propose a computational model of situated language comprehension based on the Indexical Hypothesis that generates meaning representations by translating amodal linguistic symbols to modal representations of beliefs, knowledge, and experience external to the linguistic system. This Indexical Model incorporates multiple information sources including perceptions, domain knowledge, and short-term and long-term experiences during comprehension. We show that exploiting diverse information sources can alleviate ambiguities that arise from contextual use of underspecific referring expressions and unexpressed argument alternations of verbs. The model is being used to support linguistic interactions in Rosie, an agent implemented in Soar that learns from instruction. | In the robotics community, grounded comprehension has been studied in the context of describing a visual scene @cite_4 , understanding descriptions of a scene @cite_1 , understanding spatial directions @cite_9 , and understanding natural language commands for navigation @cite_15 . These comprehension models work with the complex state and action representations required for reasoning about physical worlds (D1). Their primary focus has been on the acquisition of grounding models through batch-learning from human-generated descriptions of a robot's perceptions or behavior. However, generating an annotated corpus is expensive. The agents are prone to failure if their training is insufficient for grounding a novel instruction. An interactive agent, on the other hand, will switch to learning mode if it is unable to comprehend the instruction. It is unclear whether such data-intensive, corpus-based, batch-learning mechanisms can be effectively incorporated in online and incremental human-agent interactions, allowing the agent to guide communication. Additionally, these mechanisms do not address the challenges arising from ambiguities in natural language. | {
"cite_N": [
"@cite_1",
"@cite_9",
"@cite_4",
"@cite_15"
],
"mid": [
"2111807093",
"2069809153",
"2156050092",
"2236233024"
],
"abstract": [
"We present a visually-grounded language understanding model based on a study of how people verbally describe objects in scenes. The emphasis of the model is on the combination of individual word meanings to produce meanings for complex referring expressions. The model has been implemented, and it is able to understand a broad range of spatial referring expressions. We describe our implementation of word level visually-grounded semantics and their embedding in a compositional parsing framework. The implemented system selects the correct referents in response to natural language expressions for a large percentage of test cases. In an analysis of the system's successes and failures we reveal how visual context influences the semantics of utterances and propose future extensions to the model that take such context into account.",
"Speaking using unconstrained natural language is an intuitive and flexible way for humans to interact with robots. Understanding this kind of linguistic input is challenging because diverse words and phrases must be mapped into structures that the robot can understand, and elements in those structures must be grounded in an uncertain environment. We present a system that follows natural language directions by extracting a sequence of spatial description clauses from the linguistic input and then infers the most probable path through the environment given only information about the environmental geometry and detected visible objects. We use a probabilistic graphical model that factors into three key components. The first component grounds landmark phrases such as \"the computers\" in the perceptual frame of the robot by exploiting co-occurrence statistics from a database of tagged images such as Flickr. Second, a spatial reasoning component judges how well spatial relations such as \"past the computers\" describe a path. Finally, verb phrases such as \"turn right\" are modeled according to the amount of change in orientation in the path. Our system follows 60% of the directions in our corpus to within 15 meters of the true destination, significantly outperforming other approaches.",
"A spoken language generation system has been developed that learns to describe objects in computer-generated visual scenes. The system is trained by a \"show-and-tell\" procedure in which visual scenes are paired with natural language descriptions. Learning algorithms acquire probabilistic structures which encode the visual semantics of phrase structure, word classes, and individual words. Using these structures, a planning algorithm integrates syntactic, semantic, and contextual constraints to generate natural and unambiguous descriptions of objects in novel scenes. The system generates syntactically well-formed compound adjective noun phrases, as well as relative spatial clauses. The acquired linguistic structures generalize from training data, enabling the production of novel word sequences which were never observed during training. The output of the generation system is synthesized using word-based concatenative synthesis drawing from the original training speech corpus. In evaluations of semantic comprehension by human judges, the performance of automatically generated spoken descriptions was comparable to human-generated descriptions. This work is motivated by our long-term goal of developing spoken language processing systems which ground semantics in machine perception and action. © 2002 Elsevier Science Ltd. All rights reserved.",
"This paper describes a new model for understanding natural language commands given to autonomous systems that perform navigation and mobile manipulation in semi-structured environments. Previous approaches have used models with fixed structure to infer the likelihood of a sequence of actions given the environment and the command. In contrast, our framework, called Generalized Grounding Graphs (G3), dynamically instantiates a probabilistic graphical model for a particular natural language command according to the command's hierarchical and compositional semantic structure. Our system performs inference in the model to successfully find and execute plans corresponding to natural language commands such as \"Put the tire pallet on the truck.\" The model is trained using a corpus of commands collected using crowdsourcing. We pair each command with robot actions and use the corpus to learn the parameters of the model. We evaluate the robot's performance by inferring plans from natural language commands, executing each plan in a realistic robot simulator, and asking users to evaluate the system's performance. We demonstrate that our system can successfully follow many natural language commands from the corpus."
]
} |
1604.02509 | 2341077648 | We propose a computational model of situated language comprehension based on the Indexical Hypothesis that generates meaning representations by translating amodal linguistic symbols to modal representations of beliefs, knowledge, and experience external to the linguistic system. This Indexical Model incorporates multiple information sources including perceptions, domain knowledge, and short-term and long-term experiences during comprehension. We show that exploiting diverse information sources can alleviate ambiguities that arise from contextual use of underspecific referring expressions and unexpressed argument alternations of verbs. The model is being used to support linguistic interactions in Rosie, an agent implemented in Soar that learns from instruction. | SHRDLU @cite_8 is a well-known early attempt to design an intelligent agent that could understand and generate natural language referring to objects and actions in a simple virtual blocks world (D1). It performed semantic interpretation by attaching short procedures to lexical units. It demonstrated simple learning as the user could define compositions of blocks (such as a tower) that the system would remember and could construct and answer questions about (D3). The system was not physically grounded, did not learn new procedures (D4) and, therefore, was constrained to pre-programmed behaviors. | {
"cite_N": [
"@cite_8"
],
"mid": [
"2005814556"
],
"abstract": [
"Abstract This paper describes a computer system for understanding English. The system answers questions, executes commands, and accepts information in an interactive English dialog. It is based on the belief that in modeling language understanding, we must deal in an integrated way with all of the aspects of language—syntax, semantics, and inference. The system contains a parser, a recognition grammar of English, programs for semantic analysis, and a general problem solving system. We assume that a computer cannot deal reasonably with language unless it can understand the subject it is discussing. Therefore, the program is given a detailed model of a particular domain. In addition, the system has a simple model of its own mentality. It can remember and discuss its plans and actions as well as carrying them out. It enters into a dialog with a person, responding to English sentences with actions and English replies, asking for clarification when its heuristic programs cannot understand a sentence through the use of syntactic, semantic, contextual, and physical knowledge. Knowledge in the system is represented in the form of procedures, rather than tables of rules or lists of patterns. By developing special procedural representations for syntax, semantics, and inference, we gain flexibility and power. Since each piece of knowledge can be a procedure, it can call directly on any other piece of knowledge in the system."
]
} |
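The procedural-attachment idea described above — word meanings represented as runnable procedures over a blocks world rather than static rules — can be illustrated with a deliberately tiny sketch. Everything here (the world, the lexicon entries, `put_on`) is hypothetical and vastly simpler than SHRDLU itself:

```python
# Toy blocks world: each lexical unit is attached to a short procedure,
# so "understanding" a word means executing its procedure.
world = {"b1": {"color": "red", "on": "table"},
         "b2": {"color": "green", "on": "table"}}

lexicon = {
    "red":   lambda w: [b for b, p in w.items() if p["color"] == "red"],
    "green": lambda w: [b for b, p in w.items() if p["color"] == "green"],
}

def put_on(w, noun_a, noun_b):
    """Procedure attached to the verb pattern 'put X on Y': resolve each
    noun by running its lexical procedure, then update the world."""
    (a,), (b,) = lexicon[noun_a](w), lexicon[noun_b](w)
    w[a]["on"] = b
    return w

put_on(world, "red", "green")
print(world["b1"]["on"])  # → b2
```

Because meanings are procedures, the same mechanism lets the user define new composite concepts (e.g., a "tower") as new procedures over existing ones.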
1604.02380 | 2951198456 | We consider a group of @math trusted and authenticated nodes that aim to create a shared secret key @math over a wireless channel in the presence of an eavesdropper Eve. We assume that there exists a state dependent wireless broadcast channel from one of the honest nodes to the rest of them including Eve. All of the trusted nodes can also discuss over a cost-free, noiseless and unlimited rate public channel which is also overheard by Eve. For this setup, we develop an information-theoretically secure secret key agreement protocol. We show the optimality of this protocol for "linear deterministic" wireless broadcast channels. This model generalizes the packet erasure model studied in literature for wireless broadcast channels. For "state-dependent Gaussian" wireless broadcast channels, we propose an achievability scheme based on a multi-layer wiretap code. Finding the best achievable secret key generation rate leads to solving a non-convex power allocation problem. We show that using a dynamic programming algorithm, one can obtain the best power allocation for this problem. Moreover, we prove the optimality of the proposed achievability scheme for the regime of high-SNR and large-dynamic range over the channel states in the (generalized) degrees of freedom sense. | The main contributions of the paper can be summarized as follows: For the secret key sharing problem among @math trusted nodes that have access to a deterministic broadcast channel and to a public discussion channel, we completely characterize the secret key generation capacity. This result can be considered an extension of the erasure channel case @cite_17 and produces an information-theoretically secure key regardless of the computational power of the eavesdropper Eve. By using ideas from the code design for deterministic broadcast channels, we devise a coding scheme based on a nested message set, degraded channel wiretap code (also see @cite_10 ).
In general, we characterize the achievable secret key generation rate, which is given by a non-convex power allocation optimization problem. Moreover, we derive an upper bound on the secrecy rate and show that for the high-SNR, high-dynamic-range regime, our proposed scheme is optimal in a degrees-of-freedom sense. Although, in the proposed scheme, the best achievable secrecy rate is described by the solution of a non-convex optimization problem, we solve this optimization problem using a dynamic programming technique. | {
"cite_N": [
"@cite_10",
"@cite_17"
],
"mid": [
"2162821768",
"2546022252"
],
"abstract": [
"A (layered) broadcast approach is studied for fading wiretap channels. The basic idea is to employ superposition coding to encode information into a number of layers and use stochastic encoding for each layer to keep the corresponding information secret from an eavesdropper. The legitimate receiver successively decodes information one layer after another by canceling the interference caused by the layers that the receiver has already decoded. The advantage of this approach is that the transmitter does not need to know the channel states to the legitimate receiver and the eavesdropper, but can still securely transmit certain layers of information to the legitimate receiver. The layers that can be securely transmitted are determined by the channel states to the legitimate receiver and the eavesdropper. The Gaussian wiretap channel with fixed channel gains is first studied to illustrate the idea of the broadcast approach. Three cases of block fading wiretap channels with a stringent delay constraint are then studied, in which either the legitimate receiver's channel, the eavesdropper's channel, or both channels are fading. For each case, the secrecy rate that can be achieved by using the broadcast approach is obtained, and the optimal power allocation over the layers (or the conditions on the optimal power allocation) is also derived.",
"We consider a group of m trusted nodes that aim to create a shared secret key K over a wireless channel in the presence of an eavesdropper Eve. We assume an erasure broadcast channel from one of the honest nodes to the rest of them including Eve. All of the trusted nodes can also discuss over a cost-free public channel which is observed by Eve. For this setup we characterize the secret key generation capacity and propose an achievability scheme that is computationally efficient and employs techniques from network coding. Surprisingly, whether we have m = 2 nodes, or an arbitrary number m of nodes, we can establish a shared secret key among them at the same rate, independently of m."
]
} |
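A standard building block behind the multi-layer wiretap coding mentioned above is the secrecy capacity of the degraded Gaussian wiretap channel, [log2(1 + SNR_b) − log2(1 + SNR_e)]^+. The sketch below just evaluates this textbook formula; the function name and SNR values are ours, not the paper's:

```python
import math

def gaussian_wiretap_secrecy_capacity(snr_legit, snr_eve):
    """Secrecy capacity (bits/channel use) of the degraded Gaussian
    wiretap channel: [log2(1 + SNR_b) - log2(1 + SNR_e)]^+."""
    return max(0.0, math.log2(1.0 + snr_legit) - math.log2(1.0 + snr_eve))

print(gaussian_wiretap_secrecy_capacity(15.0, 3.0))  # → 2.0 (log2 16 - log2 4)
print(gaussian_wiretap_secrecy_capacity(3.0, 15.0))  # → 0.0: Eve's channel is stronger
```

In the paper's state-dependent setting, a multi-layer code in effect applies such a rate gap per layer, with the transmit power split across layers — which is what makes the power allocation a non-convex optimization problem.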
1604.02223 | 2337195104 | In this paper, we propose a novel multichannel network with infrastructure support, which is called an MC-IS network, that has not been studied in the literature. To the best of our knowledge, we are the first to study such an MC-IS network. Our proposed MC-IS network has a number of advantages over three existing conventional networks: a single-channel wireless ad hoc network (called an SC-AH network), a multichannel wireless ad hoc network (called an MC-AH network), and a single-channel network with infrastructure support (called an SC-IS network). In particular, the network capacity of our proposed MC-IS network is @math times higher than that of an SC-AH network and an MC-AH network and the same as that of an SC-IS network, where @math is the number of nodes in the network. The average delay of our MC-IS network is @math times lower than that of an SC-AH network and an MC-AH network and @math times lower than the average delay of an SC-IS network, where @math and @math denote the number of channels dedicated for infrastructure communications and the number of interfaces mounted at each infrastructure node, respectively. Our analysis on an MC-IS network equipped with omnidirectional antennas has been extended to an MC-IS network equipped with directional antennas only, which are named as an MC-IS-DA network. We show that an MC-IS-DA network has an even lower delay of @math compared with an SC-IS network and our MC-IS network. For example, when @math and @math , an MC-IS-DA can further reduce the delay by 24 times lower than that of an MC-IS network and by 288 times lower than that of an SC-IS network. | We summarize the related works to our study in this section. The first network related to our proposed network is an SC-AH network.
An SC-AH network has poor performance for the following reasons: (i) interference among multiple concurrent transmissions, (ii) the limited number of simultaneous transmissions on a single interface, and (iii) multi-hop communications @cite_44 @cite_31 @cite_39 . | {
"cite_N": [
"@cite_44",
"@cite_31",
"@cite_39"
],
"mid": [
"2137775453",
"2161725792",
"2115678412"
],
"abstract": [
"When n identical randomly located nodes, each capable of transmitting at W bits per second and using a fixed range, form a wireless network, the throughput λ(n) obtainable by each node for a randomly chosen destination is Θ(W/√(n log n)) bits per second under a noninterference protocol. If the nodes are optimally placed in a disk of unit area, traffic patterns are optimally assigned, and each transmission's range is optimally chosen, the bit-distance product that can be transported by the network per second is Θ(W√(An)) bit-meters per second. Thus even under optimal circumstances, the throughput is only Θ(W/√n) bits per second for each node for a destination nonvanishingly far away. Similar results also hold under an alternate physical model where a required signal-to-interference ratio is specified for successful receptions. Fundamentally, it is the need for every node all over the domain to share whatever portion of the channel it is utilizing with nodes in its local neighborhood that is the reason for the constriction in capacity. Splitting the channel into several subchannels does not change any of the results. Some implications may be worth considering by designers. Since the throughput furnished to each user diminishes to zero as the number of users is increased, perhaps networks connecting smaller numbers of users, or featuring connections mostly with nearby neighbors, may be more likely to find acceptance.",
"Gupta and Kumar (2000) introduced a random network model for studying the way throughput scales in a wireless network when the nodes are fixed, and showed that the throughput per source-destination pair is Θ(1/√(n log n)). Grossglauser and Tse (2001) showed that when nodes are mobile it is possible to have a constant, or Θ(1), throughput scaling per source-destination pair. The focus of this paper is on characterizing the delay and determining the throughput-delay trade-off in such fixed and mobile ad hoc networks. For the Gupta-Kumar fixed network model, we show that the optimal throughput-delay trade-off is given by D(n) = Θ(nT(n)), where T(n) and D(n) are the throughput and delay respectively. For the Grossglauser-Tse mobile network model, we show that the delay scales as Θ(√n/v(n)), where v(n) is the velocity of the mobile nodes. We then describe a scheme that achieves the optimal order of delay for any given throughput. The scheme varies (i) the number of hops, (ii) the transmission range and (iii) the degree of node mobility to achieve the optimal throughput-delay trade-off. The scheme produces a range of models that capture the Gupta-Kumar model at one extreme and the Grossglauser-Tse model at the other. In the course of our work, we recover previous results of Gupta and Kumar, and Grossglauser and Tse using simpler techniques, which might be of a separate interest.",
"Gupta and Kumar (2000) introduced a random model to study throughput scaling in a wireless network with static nodes, and showed that the throughput per source-destination pair is Θ(1/√(n log n)). Grossglauser and Tse (2001) showed that when nodes are mobile it is possible to have a constant throughput scaling per source-destination pair. In most applications, delay is also a key metric of network performance. It is expected that high throughput is achieved at the cost of high delay and that one can be improved at the cost of the other. The focus of this paper is on studying this tradeoff for wireless networks in a general framework. Optimal throughput-delay scaling laws for static and mobile wireless networks are established. For static networks, it is shown that the optimal throughput-delay tradeoff is given by D(n) = Θ(nT(n)), where T(n) and D(n) are the throughput and delay scaling, respectively. For mobile networks, a simple proof of the throughput scaling of Θ(1) for the Grossglauser-Tse scheme is given and the associated delay scaling is shown to be Θ(n log n). The optimal throughput-delay tradeoff for mobile networks is also established. To capture physical movement in the real world, a random-walk (RW) model for node mobility is assumed. It is shown that for throughput of O(1/√(n log n)), which can also be achieved in static networks, the throughput-delay tradeoff is the same as in static networks, i.e., D(n) = Θ(nT(n)). Surprisingly, for almost any throughput of a higher order, the delay is shown to be Θ(n log n), which is the delay for throughput of Θ(1). Our result, thus, suggests that the use of mobility to increase throughput, even slightly, in real-world networks would necessitate an abrupt and very large increase in delay."
]
} |
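The scaling laws quoted in the abstracts above — per-node throughput Θ(W/√(n log n)) and the static-network tradeoff D(n) = Θ(nT(n)) — can be made concrete with a small numeric sketch; the constants (and W = 1) are illustrative only, since Θ(·) hides them:

```python
import math

def per_node_throughput(n, W=1.0):
    """Gupta-Kumar per-node throughput in a random ad hoc network,
    Θ(W / sqrt(n log n)); the constant factor is illustrative."""
    return W / math.sqrt(n * math.log(n))

def per_node_delay(n, throughput):
    """Optimal static-network throughput-delay tradeoff D(n) = Θ(n T(n))."""
    return n * throughput

for n in (100, 10_000, 1_000_000):
    t = per_node_throughput(n)
    print(f"n={n:>9}: T(n)~{t:.5f}, D(n)~{per_node_delay(n, t):.2f}")
```

Per-node throughput vanishes and delay grows as n increases, which is exactly the degradation the MC-IS paper cites as motivation for adding infrastructure support.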
1604.02223 | 2337195104 | In this paper, we propose a novel multichannel network with infrastructure support, which is called an MC-IS network, that has not been studied in the literature. To the best of our knowledge, we are the first to study such an MC-IS network. Our proposed MC-IS network has a number of advantages over three existing conventional networks: a single-channel wireless ad hoc network (called an SC-AH network), a multichannel wireless ad hoc network (called an MC-AH network), and a single-channel network with infrastructure support (called an SC-IS network). In particular, the network capacity of our proposed MC-IS network is @math times higher than that of an SC-AH network and an MC-AH network and the same as that of an SC-IS network, where @math is the number of nodes in the network. The average delay of our MC-IS network is @math times lower than that of an SC-AH network and an MC-AH network and @math times lower than the average delay of an SC-IS network, where @math and @math denote the number of channels dedicated for infrastructure communications and the number of interfaces mounted at each infrastructure node, respectively. Our analysis on an MC-IS network equipped with omnidirectional antennas has been extended to an MC-IS network equipped with directional antennas only, which are named as an MC-IS-DA network. We show that an MC-IS-DA network has an even lower delay of @math compared with an SC-IS network and our MC-IS network. For example, when @math and @math , an MC-IS-DA can further reduce the delay by 24 times lower than that of an MC-IS network and by 288 times lower than that of an SC-IS network. | The second network related to our network is an MC-AH network, in which multiple channels instead of a single channel are used. Besides, each node in such a network is equipped with multiple network interfaces instead of a single network interface.
This network has a higher throughput than an SC-AH network because each node can support multiple concurrent transmissions over multiple channels. However, this network suffers from high delay and increased deployment complexity. The average delay of an MC-AH network is the same as that of an SC-AH network, which increases significantly with the number of nodes. The deployment complexity is mainly due to the condition @cite_17 that each channel (up to @math channels) must be utilized by a dedicated interface at a node so that all the channels are fully utilized simultaneously and thus the network capacity can be maximized. When this condition is not fulfilled, the capacity degrades significantly. | {
"cite_N": [
"@cite_17"
],
"mid": [
"2163418239"
],
"abstract": [
"This paper studies how the capacity of a static multi-channel network scales as the number of nodes, n, increases. Gupta and Kumar have determined the capacity of single-channel networks, and those bounds are applicable to multi-channel networks as well, provided each node in the network has a dedicated interface per channel. In this work, we establish the capacity of general multi-channel networks wherein the number of interfaces, m, may be smaller than the number of channels, c. We show that the capacity of multi-channel networks exhibits different bounds that are dependent on the ratio between c and m. When the number of interfaces per node is smaller than the number of channels, there is a degradation in the network capacity in many scenarios. However, one important exception is a random network with up to O(log n) channels, wherein the network capacity remains at the Gupta and Kumar bound of Θ(W√(n/log n)) bits/sec, independent of the number of interfaces available at each node. Since in many practical networks, the number of channels available is small (e.g., IEEE 802.11 networks), this bound is of practical interest. This implies that it may be possible to build capacity-optimal multi-channel networks with as few as one interface per node. We also extend our model to consider the impact of interface switching delay, and show that in a random network with up to O(log n) channels, switching delay may not affect capacity if multiple interfaces are used."
]
} |
1604.02223 | 2337195104 | In this paper, we propose a novel multichannel network with infrastructure support, which is called an MC-IS network, that has not been studied in the literature. To the best of our knowledge, we are the first to study such an MC-IS network. Our proposed MC-IS network has a number of advantages over three existing conventional networks: a single-channel wireless ad hoc network (called an SC-AH network), a multichannel wireless ad hoc network (called an MC-AH network), and a single-channel network with infrastructure support (called an SC-IS network). In particular, the network capacity of our proposed MC-IS network is @math times higher than that of an SC-AH network and an MC-AH network and the same as that of an SC-IS network, where @math is the number of nodes in the network. The average delay of our MC-IS network is @math times lower than that of an SC-AH network and an MC-AH network and @math times lower than the average delay of an SC-IS network, where @math and @math denote the number of channels dedicated for infrastructure communications and the number of interfaces mounted at each infrastructure node, respectively. Our analysis on an MC-IS network equipped with omnidirectional antennas has been extended to an MC-IS network equipped with directional antennas only, which are named as an MC-IS-DA network. We show that an MC-IS-DA network has an even lower delay of @math compared with an SC-IS network and our MC-IS network. For example, when @math and @math , an MC-IS-DA can further reduce the delay by 24 times lower than that of an MC-IS network and by 288 times lower than that of an SC-IS network. | The third network related to our network is an SC-IS network @cite_1 @cite_46 @cite_41 @cite_49 @cite_24 @cite_47 @cite_53 @cite_25 @cite_30 . It is shown in @cite_1 @cite_24 that an SC-IS network can significantly improve the network capacity and reduce the average delay.
However, an infrastructure node in such a network equipped with a single interface cannot transmit and receive at the same time (i.e., the half-duplex constraint is still enforced). Thus, the communication delay in such an SC-IS network is still not minimized. Besides, such networks also suffer from poor spectrum reuse. | {
"cite_N": [
"@cite_30",
"@cite_47",
"@cite_41",
"@cite_53",
"@cite_1",
"@cite_24",
"@cite_49",
"@cite_46",
"@cite_25"
],
"mid": [
"2017654695",
"2166314437",
"2124309564",
"2128465088",
"2163565176",
"2152301416",
"2057411153",
"2113134966",
"2102457283"
],
"abstract": [
"In this paper, we study the multicast capacity of wireless ad hoc networks with infrastructure support. The network under study is termed as hybrid wireless network, where L-Maximum-Hop resource allocation strategy is adopted. There are n uniformly deployed normal wireless nodes and m regularly placed base stations dividing the network region into m cells. We show that the maximum capacity O(n^{1/2}/(k^{1/2}(log n)^{1/2}) W1) + O(mW2) is achieved when the hop number L = Θ(n^{1/4}/(k^{1/4}(log n)^{3/4})) with the number of destinations k = O(a^2/r^2), where a is the side length of network region and r is transmission range of wireless terminals. This result provides a meaningful guide for the design of hybrid wireless networks. Moreover, we demonstrate that it is more efficient to adopt Infrastructure Mode than Ad Hoc Mode when k = Ω(a^2/r^2), because infrastructure nodes can cover the whole cell and broadcast to nodes more efficiently. In this case, maximum capacity is O(W1) + O(mW2), when L = Θ(1). Furthermore, we reveal that the per-node capacity does not vanish to zero only if the number of base stations m = Ω(n).",
"We study the throughput capacity of hybrid wireless networks with a directional antenna. The hybrid wireless network consists of n randomly distributed nodes equipped with a directional antenna, and m regularly placed base stations connected by optical links. We investigate the ad hoc mode throughput capacity when each node is equipped with a directional antenna under an L-maximum-hop resource allocation. That is, a source node transmits to its destination only with the help of normal nodes within L hops. Otherwise, the transmission will be carried out in the infrastructure mode, i.e., with the help of base stations. We find that the throughput capacity of a hybrid wireless network greatly depends on the maximum hop L, the number of base stations m, and the beamwidth of directional antenna θ. Assuming the total bandwidth W bits sec of the network is split into three parts, i.e., W1 for ad hoc mode, W2 for uplink in the infrastructure mode, and W3 for downlink in the infrastructure mode. We show that the throughput capacity of the hybrid directional wireless network is Θ(nW1 θ2 L log n) + Θ(mW2), if L = Ω(n1 3 θ4 3 log2 3 n); and Θ((θ2L2 log2 3 n); and + Θ(mW2), if L = o(n1 3 θ4 3 log2 3 n), respectively. Finally, we analyze the impact of L, m and θ on the throughput capacity of the hybrid networks.",
"We determine the asymptotic scaling for the per user throughput in a large hybrid ad hoc network, i.e., a network with both ad hoc nodes, which communicate with each other via shared wireless links of capacity W bits/s, and infrastructure nodes which in addition are interconnected with each other via high capacity links. Specifically, we consider a network model where ad hoc nodes are randomly spatially distributed and choose to communicate with a random destination. We identify three scaling regimes, depending on the growth of the number of infrastructure nodes, m, relative to the number of ad hoc nodes n, and show the asymptotic scaling for the per user throughput as n becomes large. We show that when m ≲ √(n/log n) the per user throughput is of order W/√(n log n) and could be realized by allowing only ad hoc communications, i.e., not deploying the infrastructure nodes at all. Whenever √(n/log n) ≲ m ≲ n/log n, the order for the per user throughput is Wm/n and, thus, the total additional bandwidth provided by m infrastructure nodes is effectively shared among ad hoc nodes. Finally, whenever m ≳ n/log n, the order of the per user throughput is only W/log n, suggesting that further investments in infrastructure nodes will not lead to improvement in throughput. The results are shown through an upper bound which is independent of the routing strategy, and by constructing scenarios showing that the upper bound is asymptotically tight.",
"Although capacity has been extensively studied in wireless networks, most of the results are for homogeneous wireless networks where all nodes are assumed identical. In this paper, we investigate the capacity of heterogeneous wireless networks with general network settings. Specifically, we consider a dense network with n normal nodes and m = n^b (0 < b < 1) more powerful helping nodes in a rectangular area with width b(n) and length 1/b(n), where b(n) = n^w and -1/2 < w ≤ 0. We assume there are n flows in the network. All the n normal nodes are sources while only randomly chosen n^d (0 < d < 1) normal nodes are destinations. We further assume the n normal nodes are uniformly and independently distributed, while the m helping nodes are either regularly placed or uniformly and independently distributed, resulting in two different kinds of networks called Regular Heterogeneous Wireless Networks and Random Heterogeneous Wireless Networks, respectively. In this paper, we attempt to find out what a heterogeneous wireless network with general network settings can do by deriving a lower bound on the capacity. We also explore the conditions under which heterogeneous wireless networks can provide throughput higher than traditional homogeneous wireless networks.",
"This paper involves the study of the throughput capacity of hybrid wireless networks. A hybrid network is formed by placing a sparse network of base stations in an ad hoc network. These base stations are assumed to be connected by a high-bandwidth wired network and act as relays for wireless nodes. They are neither data sources nor data receivers. Hybrid networks present a tradeoff between traditional cellular networks and pure ad hoc networks in that data may be forwarded in a multihop fashion or through the infrastructure. It has been shown that the capacity of a random ad hoc network does not scale well with the number of nodes in the system. In this work, we consider two different routing strategies and study the scaling behavior of the throughput capacity of a hybrid network. Analytical expressions of the throughput capacity are obtained. For a hybrid network of n nodes and m base stations, the results show that if m grows asymptotically slower than √n, the benefit of adding base stations on capacity is insignificant. However, if m grows faster than √n, the throughput capacity increases linearly with the number of base stations, providing an effective improvement over a pure ad hoc network. Therefore, in order to achieve nonnegligible capacity gain, the investment in the wired infrastructure should be high enough.",
"An optical network is too costly to act as a broadband access network. On the other hand, a pure wireless ad hoc network with n nodes and total bandwidth of W bits per second cannot provide satisfactory broadband services since the pernode throughput diminishes as the number of users goes large. In this paper, we propose a hybrid wireless network, which is an integrated wireless and optical network, as the broadband access network. Specifically, we assume a hybrid wireless network consisting of n randomly distributed normal nodes, and m regularly placed base stations connected via an optical network. A source node transmits to its destination only with the help of normal nodes, i.e., in the ad hoc mode, if the destination can be reached within L (L ≥ 1) hops from the source. Otherwise, the transmission will be carried out in the infrastructure mode, i.e., with the help of base stations. Two transmission modes share the same bandwidth of W bits/sec. We first study the throughput capacity of such a hybrid wireless network, and observe that the throughput capacity greatly depends on the maximum hop count L and the number of base stations m. We show that the throughput capacity of a hybrid wireless network can scale linearly with n only if m = Ω(n), and when we assign all the bandwidth to the infrastructure mode traffics. We then investigate the delay in hybrid wireless networks. We find that the average packet delay can be maintained as low as Θ(1) even when the per-node throughput capacity is Θ(W).",
"In this paper we study the capacity of wireless ad hoc networks with infrastructure support of an overlay of wired base stations. Such a network architecture is often referred to as hybrid wireless network or multihop cellular network. Previous studies on this topic are all focused on the two-dimensional disk model proposed by Gupta and Kumar in their original work on the capacity of wireless ad hoc networks. We further consider a one-dimensional network model and a two-dimensional strip model to investigate the impact of network dimensionality and geometry on the capacity of such networks. Our results show that different network dimensions lead to significantly different capacity scaling laws. Specifically, for a one-dimensional network of n nodes and b base stations, even with a small number of base stations, the gain in capacity is substantial, increasing linearly with the number of base stations as long as b log b ≤ n. However, a two-dimensional square (or disk) network requires a large number of base stations b = Ω(√n) before we see such a capacity increase. For a 2-dimensional strip network, if the width of the strip is at least on the order of the logarithm of its length, the capacity follows the same scaling law as in the 2-dimensional square case. Otherwise the capacity exhibits the same scaling behavior as in the 1-dimensional network. We find that the different capacity scaling behaviors are attributed to the percolation properties of the respective network models.",
"In this paper, we consider the transport capacity of ad hoc networks with a random flat topology under the present support of an infinite capacity infrastructure network. Such a network architecture allows ad hoc nodes to communicate with each other by purely using the remaining ad hoc nodes as their relays. In addition, ad hoc nodes can also utilize the existing infrastructure fully or partially by reaching any access point (or gateway) of the infrastructure network in a single or multi-hop fashion. Using the same tools as in [1], we show that the per source node capacity of Θ(W/log(N)) can be achieved in a random network scenario with the following assumptions: (i) The number of ad hoc nodes per access point is bounded above, (ii) each wireless node, including the access points, is able to transmit at W bits/sec using a fixed transmission range, and (iii) N ad hoc nodes, excluding the access points, constitute a connected topology graph. This is a significant improvement over the capacity of random ad hoc networks with no infrastructure support which is found as Θ(W/√(N log(N))) in [1]. Although better capacity figures may be obtained by complex network coding or exploiting mobility in the network, infrastructure approach provides a simpler mechanism that has more practical aspects. We also show that even when less stringent requirements are imposed on topology connectivity, a per source node capacity figure that is arbitrarily close to Θ(1) cannot be obtained. Nevertheless, under these weak conditions, we can further improve per node throughput significantly.",
"How much information can one send through a random ad hoc network of n nodes, if overlaid with a cellular architecture of m base stations? This network model is commonly referred to as hybrid wireless networks and our paper analyzes the above question by characterizing its throughput capacity. Although several research efforts related to throughput capacity exist in the area of hybrid wireless networks, most of these solutions under-explore the capacity analysis. Their results particularly indicate that one can realize only a less than log or no gain on capacity, as compared to pure ad hoc networks, when m scales slower than some threshold. This unsatisfying capacity gain is due to the fact that the base stations were not properly exploited while formulating the capacity analysis. Moreover, these research efforts also assume a one-hop wireless uplink between a node and its associated base station. Nevertheless, with those power-constrained wireless nodes, this assumption clearly indicates an unrealistic scenario. In this paper, we establish the bounds on capacity and delay by resolving the issues in existing efforts and at the heart of our analysis lies a simple routing policy known as same cell routing policy. Our findings particularly stipulate that whether m = O(n/log n) or Ω(n/log n), each node can realize a throughput that scales, sublinearly or linearly, with m. This is in fact a significant result as opposed to previous efforts which claim that if m grows slower than some threshold, the benefit of augmenting those base stations to the original ad hoc network is insignificant. Our analysis also shows that for a maximum per node throughput Λ(n,m), the average end-to-end delay in a hybrid network can be bounded by Θ(Λ(n,m)·n/m), which has an inverse relationship to m."
]
} |
1604.02223 | 2337195104 | In this paper, we propose a novel multichannel network with infrastructure support, which is called an MC-IS network, that has not been studied in the literature. To the best of our knowledge, we are the first to study such an MC-IS network. Our proposed MC-IS network has a number of advantages over three existing conventional networks: a single-channel wireless ad hoc network (called an SC-AH network), a multichannel wireless ad hoc network (called an MC-AH network), and a single-channel network with infrastructure support (called an SC-IS network). In particular, the network capacity of our proposed MC-IS network is @math times higher than that of an SC-AH network and an MC-AH network and the same as that of an SC-IS network, where @math is the number of nodes in the network. The average delay of our MC-IS network is @math times lower than that of an SC-AH network and an MC-AH network and @math times lower than the average delay of an SC-IS network, where @math and @math denote the number of channels dedicated for infrastructure communications and the number of interfaces mounted at each infrastructure node, respectively. Our analysis on an MC-IS network equipped with omnidirectional antennas has been extended to an MC-IS network equipped with directional antennas only, which are named as an MC-IS-DA network. We show that an MC-IS-DA network has an even lower delay of @math compared with an SC-IS network and our MC-IS network. For example, when @math and @math , an MC-IS-DA can further reduce the delay by 24 times lower than that of an MC-IS network and by 288 times lower than that of an SC-IS network. | The fourth network related to our network is a multi-channel wireless mesh network with infrastructure support (called an network) @cite_50 @cite_32 @cite_37 @cite_26 @cite_12 @cite_5 , which is the evolution of multi-channel multi-interface wireless mesh networks (called an network) @cite_7 @cite_27 . 
An network is different from our network due to the following characteristics of an network: (i) a typical network consists of mesh clients, mesh routers and mesh gateways while an network consists of common nodes and infrastructure nodes. (ii) different types of communications exist in the multi-tier hierarchical network, which are far more complicated than those in an network. For example, there are communications between mesh clients, communications between mesh gateways, and communications between a mesh gateway and a mesh router. (iii) an network uses wireless links to connect the backbone networks (corresponding to the infrastructure network in an network). As a result, the assumption of the unlimited capacity and the interference-free infrastructure communications in an network does not hold for an network. (iv) the traffic source of an network is either from a mesh client or from the Internet while the traffic always originates from the same network in an network. | {
"cite_N": [
"@cite_37",
"@cite_26",
"@cite_7",
"@cite_32",
"@cite_27",
"@cite_50",
"@cite_5",
"@cite_12"
],
"mid": [
"2021851294",
"1980168610",
"2150825860",
"2054952342",
"2092511902",
"2145803479",
"2323365737",
"2053616686"
],
"abstract": [
"In conventional Wireless Mesh Networks (WMNs), multihop relays are performed in the backbone comprising interconnected Mesh Routers (MRs), and this causes capacity degradation. This paper proposes a hybrid WMN architecture in which the backbone is able to utilize random connections to Access Points (APs) of Wireless Local Area Network (WLAN). In such a proposed hierarchical architecture, capacity enhancement can be achieved by letting the traffic take advantage of the wired connections through APs. Theoretical analysis has been conducted for the asymptotic capacity of three-tier hybrid WMN, where per-MR capacity in the backbone is first derived and per-MC capacity is then obtained. Besides being related to the number of MR cells as in a conventional WMN, the analytical results reveal that the asymptotic capacity of a hybrid WMN is also strongly affected by the number of cells having AP connections, the ratio of access link bandwidth to backbone link bandwidth, etc. Appropriate configuration of the network can drastically improve the network capacity in our proposed network architecture. It also shows that the traffic balance among the MRs with AP access is very important to have a tighter asymptotic capacity bound. The results and conclusions justify the perspective of having such a hybrid WMN utilizing widely deployed WLANs.",
"This paper studies the asymptotic throughput capacity of a random infrastructure wireless mesh network (RndInfWMN). Generally an infrastructure wireless mesh network (InfWMN) comprises mesh clients, routers, gateways and has hierarchical structures. An InfWMN can be divided into two categories, which are arbitrary InfWMN (ArbInfWMN) and random InfWMN (RndInfWMN). In an ArbInfWMN, the locations of the WMRs are arbitrary while in an RndInfWMN, WMRs are distributed randomly. The latter is more interesting when randomly distributed WLANs are desired to be connected through a wired network of gateways. There is some analytical research on the asymptotic capacity of ArbInfWMN where the number of interfaces per-infrastructure node, m, is at the same order of the number of available channels for the network, c, i.e. c/m = Θ(1). In our previous research, we investigated the asymptotic throughput capacity of ArbInfWMNs for a more general case in which c/m = O(1). However, to date, analytical facility has been limited by the absence of analysis in RndInfWMNs, especially for the general case in which c/m = O(1). In this paper, we carry out an original analysis of the asymptotic per-client throughput capacity of multi-channel multi-interface RndInfWMNs for the case in which c/m = O(1). Our analysis shows that by identifying c/m in different scaling regimes, the asymptotic per-client throughput capacity of multi-channel multi-interface RndInfWMNs exhibits different bounds, depending on the ratio between c and m.",
"Wireless mesh networks (WMNs) consist of mesh routers and mesh clients, where mesh routers have minimal mobility and form the backbone of WMNs. They provide network access for both mesh and conventional clients. The integration of WMNs with other networks such as the Internet, cellular, IEEE 802.11, IEEE 802.15, IEEE 802.16, sensor networks, etc., can be accomplished through the gateway and bridging functions in the mesh routers. Mesh clients can be either stationary or mobile, and can form a client mesh network among themselves and with mesh routers. WMNs are anticipated to resolve the limitations and to significantly improve the performance of ad hoc networks, wireless local area networks (WLANs), wireless personal area networks (WPANs), and wireless metropolitan area networks (WMANs). They are undergoing rapid progress and inspiring numerous deployments. WMNs will deliver wireless services for a large variety of applications in personal, local, campus, and metropolitan areas. Despite recent advances in wireless mesh networking, many research challenges remain in all protocol layers. This paper presents a detailed study on recent advances and open research issues in WMNs. System architectures and applications of WMNs are described, followed by discussing the critical factors influencing protocol design. Theoretical network capacity and the state-of-the-art protocols for WMNs are explored with an objective to point out a number of open research issues. Finally, testbeds, industrial practice, and current standard activities related to WMNs are highlighted.",
"Research into the analytical solutions for the capacity of the infrastructure wireless mesh networks (InfWMN) is highly interesting. An InfWMN is a hierarchical network consisting of mesh clients, mesh routers and gateways. The mesh routers form a wireless mesh infrastructure to which the mesh clients are connected through the use of star topology. The previous analytical solutions have only investigated the asymptotic per-client throughput capacity of either single-channel InfWMNs or multi-channel InfWMNs under conditions in which each infrastructure node (i.e. wireless routers and gateways) has a dedicated interface per-channel. The results of previous analytical studies show that there are quite few studies that have addressed the more practical cases where the number of interfaces per-node is less than the number of channels. In this paper, we derive an original analysis of the asymptotic per-client throughput capacity of multi-channel InfWMNs in which the number of interfaces per-infrastructure node, denoted by m, is less than or equal to the number of channels, denoted by c. Our analysis reveals that the asymptotic per-client throughput capacity of multi-channel InfWMNs has different bounds, which depend on the ratio between c and m. In addition, in the case that m < c, there is a reduction in the capacity of the InfWMN compared to the case in which c = m. Our analytical solutions also prove that when c/m = Ω(N_g^2/N_r), where Ng and Nr denote the number of gateways and mesh routers respectively, gateways cannot effectively increase the throughput capacity of the multi-channel InfWMNs.",
"Next generation fixed wireless broadband networks are being increasingly deployed as mesh networks in order to provide and extend access to the internet. These networks are characterized by the use of multiple orthogonal channels and nodes with the ability to simultaneously communicate with many neighbors using multiple radios (interfaces) over orthogonal channels. Networks based on the IEEE 802.11a b g and 802.16 standards are examples of these systems. However, due to the limited number of available orthogonal channels, interference is still a factor in such networks. In this paper, we propose a network model that captures the key practical aspects of such systems and characterize the constraints binding their behavior. We provide necessary conditions to verify the feasibility of rate vectors in these networks, and use them to derive upper bounds on the capacity in terms of achievable throughput, using a fast primal-dual algorithm. We then develop two link channel assignment schemes, one static and the other dynamic, in order to derive lower bounds on the achievable throughput. We demonstrate through simulations that the dynamic link channel assignment scheme performs close to optimal on the average, while the static link channel assignment algorithm also performs very well. The methods proposed in this paper can be a valuable tool for network designers in planning network deployment and for optimizing different performance objectives.",
"An infrastructure wireless mesh network (WMN) is a hierarchical network consisting of mesh clients, mesh routers and gateways. Mesh routers constitute a wireless mesh backbone, to which mesh clients are connected as a star topology, and gateways are chosen among mesh routers providing Internet access. In this paper, the throughput capacity of infrastructure WMNs is studied. For such a network with Nc randomly distributed mesh clients, Nr regularly placed mesh routers and Ng gateways, assuming that each mesh router can transmit at W bits/s, the per-client throughput capacity has been derived as a function of Nc, Nr, Ng and W. The result illustrates that, in order to achieve high capacity performance, the number of mesh routers and the number of gateways must be properly chosen. It also reveals that an infrastructure WMN can achieve the same asymptotic throughput capacity as that of a hybrid ad hoc network by choosing only a small number of mesh routers as gateways. This property makes WMNs a very promising solution for future wireless networking.",
"In this paper, we study the asymptotic throughput capacity of a static multi-channel multi-interface infrastructure wireless mesh network (InfWMN) wherein each infrastructure node has m interfaces and c channels of unequal bandwidth are available. First, an upper bound on the InfWMN per-user capacity is established. Then, the feasible lower bound is derived by construction. We prove that both lower and upper bounds are tight. We limit our analysis for more practical case of @math . However, for the asymptotic upper bound, our analysis can be used for the general case in which there is no constraint on m and c. Our study shows that in such a network with Nc randomly distributed mesh clients, Nr regularly placed mesh routers, and Ng gateways, the asymptotic per-client throughput capacity has different bounds, which depend on the ratio between the total available bandwidth for the network and the sum of m first greatest data rates of c available channels, i.e., @math . The results of this paper are more general compared to the existing published researches. In addition, in the case that @math , our results reduce to the previously reported studies. This implies that our study is comprehensive compared to the formerly published researches.",
"Compared to single-hop networks such as WiFi, multihop infrastructure wireless mesh networks (WMNs) can potentially embrace the broadcast benefits of a wireless medium in a more flexible manner. Rather than being point-to-point, links in the WMNs may originate from a single node and reach more than one other node. Nodes located farther than a one-hop distance and overhearing such transmissions may opportunistically help relay packets for previous hops. This phenomenon is called opportunistic overhearing/listening. With multiple radios, a node can also improve its capacity by transmitting over multiple radios simultaneously using orthogonal channels. Capitalizing on these potential advantages requires effective routing and efficient mapping of channels to radios (channel assignment (CA)). While efficient channel assignment can greatly reduce interference from nearby transmitters, effective routing can potentially relieve congestion on paths to the infrastructure. Routing, however, requires that only packets pertaining to a particular connection be routed on a predetermined route. Random network coding (RNC) breaks this constraint by allowing nodes to randomly mix packets overheard so far before forwarding. A relay node thus only needs to know how many packets, and not which packets, it should send. We mathematically formulate the joint problem of random network coding, channel assignment, and broadcast link scheduling, taking into account opportunistic overhearing, the interference constraints, the coding constraints, the number of orthogonal channels, the number of radios per node, and fairness among unicast connections. Based on this formulation, we develop a suboptimal, auction-based solution for overall network throughput optimization. Performance evaluation results show that our algorithm can effectively exploit multiple radios and channels and can cope with fairness issues arising from auctions. Our algorithm also shows promising gains over traditional routing solutions in which various channel assignment strategies are used."
]
} |
1604.02223 | 2337195104 | In this paper, we propose a novel multichannel network with infrastructure support, which is called an MC-IS network, that has not been studied in the literature. To the best of our knowledge, we are the first to study such an MC-IS network. Our proposed MC-IS network has a number of advantages over three existing conventional networks: a single-channel wireless ad hoc network (called an SC-AH network), a multichannel wireless ad hoc network (called an MC-AH network), and a single-channel network with infrastructure support (called an SC-IS network). In particular, the network capacity of our proposed MC-IS network is @math times higher than that of an SC-AH network and an MC-AH network and the same as that of an SC-IS network, where @math is the number of nodes in the network. The average delay of our MC-IS network is @math times lower than that of an SC-AH network and an MC-AH network and @math times lower than the average delay of an SC-IS network, where @math and @math denote the number of channels dedicated for infrastructure communications and the number of interfaces mounted at each infrastructure node, respectively. Our analysis on an MC-IS network equipped with omnidirectional antennas has been extended to an MC-IS network equipped with directional antennas only, which are named as an MC-IS-DA network. We show that an MC-IS-DA network has an even lower delay of @math compared with an SC-IS network and our MC-IS network. For example, when @math and @math , an MC-IS-DA can further reduce the delay by 24 times lower than that of an MC-IS network and by 288 times lower than that of an SC-IS network. | In this paper, we analyze the capacity and the delay of an network. 
Although parts of the results on the capacity and the delay contributed by ad hoc communications have appeared in @cite_40 , our analysis in this paper significantly differs from the previous work in the following aspects: We derive the capacity and the delay of an network contributed by infrastructure communications in this paper while @cite_40 only addresses the capacity and the delay contributed by ad hoc communications. We fully investigate the capacity and the delay of an network with consideration of both infrastructure communications and ad hoc communications. Specifically, we also analyze the average delay and the optimality of our results, none of which has been addressed in @cite_40 . We also compare our results with other existing networks, such as an network, an network and an network, and analyze the generality of our network in this paper. We extend our analysis with consideration of using directional antennas in an network. Discussions on mobility are also presented in this paper (see Section for more details). | {
"cite_N": [
"@cite_40"
],
"mid": [
"2005582408"
],
"abstract": [
"In this paper, we propose a novel multi-channel wireless network with infrastructure support, called an MC-IS network. To the best of our knowledge, we are the first to study the capacity and the delay of such an MC-IS network. In particular, we derive the upper bounds and the lower bounds on the network capacity of such MC-IS networks contributed by ad hoc communications, where the orders of the upper bounds are the same as the orders of the lower bounds, implying that the bounds are tight. We also found that the capacity of MC-IS networks contributed by ad hoc communications is mainly limited by connectivity requirement, interference requirement, destination-bottleneck requirement and interface-bottleneck requirement. In addition, we also derive the average delay of MC-IS networks contributed by ad hoc communications, which is bounded by the maximum number of hops."
]
} |
1604.01952 | 2341356815 | Methods from convex optimization are widely used as building blocks for deep learning algorithms. However, the reasons for their empirical success are unclear, since modern convolutional networks (convnets), incorporating rectifier units and max-pooling, are neither smooth nor convex. Standard guarantees therefore do not apply. This paper provides the first convergence rates for gradient descent on rectifier convnets. The proof utilizes the particular structure of rectifier networks which consists in binary active inactive gates applied on top of an underlying linear network. The approach generalizes to max-pooling, dropout and maxout. In other words, to precisely the neural networks that perform best empirically. The key step is to introduce gated games, an extension of convex games with similar convergence properties that capture the gating function of rectifiers. The main result is that rectifier convnets converge to a critical point at a rate controlled by the gated-regret of the units in the network. Corollaries of the main result include: (i) a game-theoretic description of the representations learned by a neural network; (ii) a logarithmic-regret algorithm for training neural nets; and (iii) a formal setting for analyzing conditional computation in neural nets that can be applied to recently developed models of attention. | A number of papers have brought techniques from convex optimization into the analysis of neural networks. A line of work initiated by Bengio in @cite_83 shows that allowing the learning algorithm to choose the number of hidden units can convert neural network optimization in a convex problem, see also @cite_62 . A convex multi-layer architecture is developed in @cite_82 @cite_23 . Although these methods are interesting, they have not achieved the practical success of convnets. In this paper, we analyze convnets rather than proposing a more tractable, but potentially less useful, model. | {
"cite_N": [
"@cite_62",
"@cite_83",
"@cite_23",
"@cite_82"
],
"mid": [
"2107822587",
"2167967601",
"2106662462",
""
],
"abstract": [
"We consider neural networks with a single hidden layer and non-decreasing homogeneous activation functions like the rectified linear units. By letting the number of hidden units grow unbounded and using classical non-Euclidean regularization tools on the output weights, we provide a detailed theoretical analysis of their generalization performance, with a study of both the approximation and the estimation errors. We show in particular that they are adaptive to unknown underlying linear structures, such as the dependence on the projection of the input variables onto a low-dimensional subspace. Moreover, when using sparsity-inducing norms on the input weights, we show that high-dimensional non-linear variable selection may be achieved, without any strong assumption regarding the data and with a total number of variables potentially exponential in the number of observations. In addition, we provide a simple geometric interpretation to the non-convex problem of addition of a new unit, which is the core potentially hard computational element in the framework of learning from continuously many basis functions. We provide simple conditions for convex relaxations to achieve the same generalization error bounds, even when constant-factor approximations cannot be found (e.g., because it is NP-hard such as for the zero-homogeneous activation function). We were not able to find strong enough convex relaxations and leave open the existence or non-existence of polynomial-time algorithms.",
"Convexity has recently received a lot of attention in the machine learning community, and the lack of convexity has been seen as a major disadvantage of many learning algorithms, such as multi-layer artificial neural networks. We show that training multi-layer neural networks in which the number of hidden units is learned can be viewed as a convex optimization problem. This problem involves an infinite number of variables, but can be solved by incrementally inserting a hidden unit at a time, each time finding a linear classifier that minimizes a weighted sum of errors.",
"Deep learning has been a long standing pursuit in machine learning, which until recently was hampered by unreliable training methods before the discovery of improved heuristics for embedded layer training. A complementary research strategy is to develop alternative modeling architectures that admit efficient training methods while expanding the range of representable structures toward deep models. In this paper, we develop a new architecture for nested nonlinearities that allows arbitrarily deep compositions to be trained to global optimality. The approach admits both parametric and nonparametric forms through the use of normalized kernels to represent each latent layer. The outcome is a fully convex formulation that is able to capture compositions of trainable nonlinear layers to arbitrary depth.",
""
]
} |
1604.01952 | 2341356815 | Methods from convex optimization are widely used as building blocks for deep learning algorithms. However, the reasons for their empirical success are unclear, since modern convolutional networks (convnets), incorporating rectifier units and max-pooling, are neither smooth nor convex. Standard guarantees therefore do not apply. This paper provides the first convergence rates for gradient descent on rectifier convnets. The proof utilizes the particular structure of rectifier networks which consists in binary active/inactive gates applied on top of an underlying linear network. The approach generalizes to max-pooling, dropout and maxout. In other words, to precisely the neural networks that perform best empirically. The key step is to introduce gated games, an extension of convex games with similar convergence properties that capture the gating function of rectifiers. The main result is that rectifier convnets converge to a critical point at a rate controlled by the gated-regret of the units in the network. Corollaries of the main result include: (i) a game-theoretic description of the representations learned by a neural network; (ii) a logarithmic-regret algorithm for training neural nets; and (iii) a formal setting for analyzing conditional computation in neural nets that can be applied to recently developed models of attention. | Game theory was developed to model interactions between humans @cite_72 . However, it may be more directly applicable as a toolbox for analyzing -- that is, interacting populations of algorithms that are optimizing objective functions @cite_89 . We go one step further, and develop a game-theoretic analysis of the internal structure of backpropagation. | {
"cite_N": [
"@cite_72",
"@cite_89"
],
"mid": [
"2144846366",
"1505251695"
],
"abstract": [
"This is the classic work upon which modern-day game theory is based. What began more than sixty years ago as a modest proposal that a mathematician and an economist write a short paper together blossomed, in 1944, when Princeton University Press published \"Theory of Games and Economic Behavior.\" In it, John von Neumann and Oskar Morgenstern conceived a groundbreaking mathematical theory of economic and social organization, based on a theory of games of strategy. Not only would this revolutionize economics, but the entirely new field of scientific inquiry it yielded--game theory--has since been widely used to analyze a host of real-world phenomena from arms races to optimal policy choices of presidential candidates, from vaccination policy to major league baseball salary negotiations. And it is today established throughout both the social sciences and a wide range of other sciences.",
"The field of artificial intelligence (AI) strives to build rational agents capable of perceiving the world around them and taking actions to advance specified goals. Put another way, AI researchers aim to construct a synthetic homo economicus, the mythical perfectly rational agent of neoclassical economics. We review progress toward creating this new species of machine, machina economicus, and discuss some challenges in designing AIs that can reason effectively in economic contexts. Supposing that AI succeeds in this quest, or at least comes close enough that it is useful to think about AIs in rationalistic terms, we ask how to design the rules of interaction in multi-agent systems that come to represent an economy of AIs. Theories of normative design from economics may prove more relevant for artificial agents than human agents, with AIs that better respect idealized assumptions of rationality than people, interacting through novel rules and incentive systems quite distinct from those tailored for people."
]
} |
1604.01952 | 2341356815 | Methods from convex optimization are widely used as building blocks for deep learning algorithms. However, the reasons for their empirical success are unclear, since modern convolutional networks (convnets), incorporating rectifier units and max-pooling, are neither smooth nor convex. Standard guarantees therefore do not apply. This paper provides the first convergence rates for gradient descent on rectifier convnets. The proof utilizes the particular structure of rectifier networks which consists in binary active/inactive gates applied on top of an underlying linear network. The approach generalizes to max-pooling, dropout and maxout. In other words, to precisely the neural networks that perform best empirically. The key step is to introduce gated games, an extension of convex games with similar convergence properties that capture the gating function of rectifiers. The main result is that rectifier convnets converge to a critical point at a rate controlled by the gated-regret of the units in the network. Corollaries of the main result include: (i) a game-theoretic description of the representations learned by a neural network; (ii) a logarithmic-regret algorithm for training neural nets; and (iii) a formal setting for analyzing conditional computation in neural nets that can be applied to recently developed models of attention. | The idea of decomposing deep learning algorithms into cooperating modules dates back to at least the work of Bottou @cite_85 . A related line of work modeling biological neural networks from a game-theoretic perspective can be found in @cite_49 @cite_20 @cite_54 @cite_51 . | {
"cite_N": [
"@cite_54",
"@cite_85",
"@cite_49",
"@cite_51",
"@cite_20"
],
"mid": [
"2963635822",
"2137440383",
"2963582189",
"2112952404",
"1628827775"
],
"abstract": [
"We investigate cortical learning from the perspective of mechanism design. First, we show that discretizing standard models of neurons and synaptic plasticity leads to rational agents maximizing simple scoring rules. Second, our main result is that the scoring rules are proper, implying that neurons faithfully encode expected utilities in their synaptic weights and encode high-scoring outcomes in their spikes. Third, with this foundation in hand, we propose a biologically plausible mechanism whereby neurons backpropagate incentives which allows them to optimize their usefulness to the rest of cortex. Finally, experiments show that networks that backpropagate incentives can learn simple tasks.",
"We introduce a framework for training architectures composed of several modules. This framework, which uses a statistical formulation of learning systems, provides a unique formalism for describing many classical connectionist algorithms as well as complex systems where several algorithms interact. It allows to design hybrid systems which combine the advantages of connectionist algorithms as well as other learning algorithms.",
"This paper suggests a learning-theoretic perspective on how synaptic plasticity benefits global brain functioning. We introduce a model, the selectron, that (i) arises as the fast time constant limit of leaky integrate-and-fire neurons equipped with spiking timing dependent plasticity (STDP) and (ii) is amenable to theoretical analysis. We show that the selectron encodes reward estimates into spikes and that an error bound on spikes is controlled by a spiking margin and the sum of synaptic weights. Moreover, the efficacy of spikes (their usefulness to other reward maximizing selectrons) also depends on total synaptic strength. Finally, based on our analysis, we propose a regularized version of STDP, and show the regularization improves the robustness of neuronal learning when faced with multiple stimuli.",
"Error backpropagation is an extremely effective algorithm for assigning credit in artificial neural networks. However, weight updates under Backprop depend on lengthy recursive computations and require separate output and error messages - features not shared by biological neurons, that are perhaps unnecessary. In this paper, we revisit Backprop and the credit assignment problem. We first decompose Backprop into a collection of interacting learning algorithms; provide regret bounds on the performance of these sub-algorithms; and factorize Backprop's error signals. Using these results, we derive a new credit assignment algorithm for nonparametric regression, Kickback, that is significantly simpler than Backprop. Finally, we provide a sufficient condition for Kickback to follow error gradients, and show that Kickback matches Backprop's performance on real-world regression benchmarks.",
"Despite its size and complexity, the human cortex exhibits striking anatomical regularities, suggesting there may simple meta-algorithms underlying cortical learning and computation. We expect such meta-algorithms to be of interest since they need to operate quickly, scalably and effectively with little-to-no specialized assumptions. This note focuses on a specific question: How can neurons use vast quantities of unlabeled data to speed up learning from the comparatively rare labels provided by reward systems? As a partial answer, we propose randomized co-training as a biologically plausible meta-algorithm satisfying the above requirements. As evidence, we describe a biologically-inspired algorithm, Correlated Nystrom Views (XNV) that achieves state-of-the-art performance in semi-supervised learning, and sketch work in progress on a neuronal implementation."
]
} |
1604.02133 | 2339860986 | Proceedings of the 16th International Workshop on Non-Monotonic Reasoning (NMR), 22-24 April 2016, Cape Town, South Africa | @cite_0 propose a probabilistic revision operation for imprecise probabilistic beliefs in the framework of Probabilistic Logic Programming (PLP). New evidence may be a probabilistic (conditional) formula and need not be consistent with the original beliefs. Revision via imaging (e.g., @math ) also overcomes this consistency issue. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2114185696"
],
"abstract": [
"Probabilistic logic programming is a powerful technique to represent and reason with imprecise probabilistic knowledge. A probabilistic logic program (PLP) is a knowledge base which contains a set of conditional events with probability intervals. In this paper, we investigate the issue of revising such a PLP in light of receiving new information. We propose postulates for revising PLPs when a new piece of evidence is also a probabilistic conditional event. Our postulates lead to Jeffrey's rule and Bayesian conditioning when the original PLP defines a single probability distribution. Furthermore, we prove that our postulates are extensions to Darwiche and Pearl (DP) postulates when new evidence is a propositional formula. We also give the representation theorem for the postulates and provide an instantiation of revision operators satisfying the proposed postulates."
]
} |
1604.01802 | 2340000481 | Machine learning techniques are often used in computer vision due to their ability to leverage large amounts of training data to improve performance. Unfortunately, most generic object trackers are still trained from scratch online and do not benefit from the large number of videos that are readily available for offline training. We propose a method for offline training of neural networks that can track novel objects at test-time at 100 fps. Our tracker is significantly faster than previous methods that use neural networks for tracking, which are typically very slow to run and not practical for real-time applications. Our tracker uses a simple feed-forward network with no online training required. The tracker learns a generic relationship between object motion and appearance and can be used to track novel objects that do not appear in the training set. We test our network on a standard tracking benchmark to demonstrate our tracker's state-of-the-art performance. Further, our performance improves as we add more videos to our offline training set. To the best of our knowledge, our tracker is the first neural-network tracker that learns to track generic objects at 100 fps. | Trackers for generic object tracking are typically trained entirely online, starting from the first frame of a video @cite_34 @cite_35 @cite_38 @cite_32 . A typical tracker will sample patches near the target object, which are considered as "foreground" @cite_35 . Some patches farther from the target object are also sampled, and these are considered as "background." These patches are then used to train a foreground-background classifier, and this classifier is used to score patches from the next frame to estimate the new location of the target object @cite_38 @cite_32 .
Unfortunately, since these trackers are trained entirely online, they cannot take advantage of the large number of videos readily available for offline training, which could potentially be used to improve their performance. | {
"cite_N": [
"@cite_35",
"@cite_34",
"@cite_32",
"@cite_38"
],
"mid": [
"2167089254",
"2098941887",
"",
"2953047851"
],
"abstract": [
"In this paper, we address the problem of learning an adaptive appearance model for object tracking. In particular, a class of tracking techniques called “tracking by detection” have been shown to give promising results at real-time speeds. These methods train a discriminative classifier in an online manner to separate the object from the background. This classifier bootstraps itself by using the current tracker state to extract positive and negative examples from the current frame. Slight inaccuracies in the tracker can therefore lead to incorrectly labeled training examples, which degrades the classifier and can cause further drift. In this paper we show that using Multiple Instance Learning (MIL) instead of traditional supervised learning avoids these problems, and can therefore lead to a more robust tracker with fewer parameter tweaks. We present a novel online MIL algorithm for object tracking that achieves superior results with real-time performance.",
"Adaptive tracking-by-detection methods are widely used in computer vision for tracking arbitrary objects. Current approaches treat the tracking problem as a classification task and use online learning techniques to update the object model. However, for these updates to happen one needs to convert the estimated object position into a set of labelled training examples, and it is not clear how best to perform this intermediate step. Furthermore, the objective for the classifier (label prediction) is not explicitly coupled to the objective for the tracker (accurate estimation of object position). In this paper, we present a framework for adaptive visual object tracking based on structured output prediction. By explicitly allowing the output space to express the needs of the tracker, we are able to avoid the need for an intermediate classification step. Our method uses a kernelized structured output support vector machine (SVM), which is learned online to provide adaptive tracking. To allow for real-time application, we introduce a budgeting mechanism which prevents the unbounded growth in the number of support vectors which would otherwise occur during tracking. Experimentally, we show that our algorithm is able to outperform state-of-the-art trackers on various benchmark videos. Additionally, we show that we can easily incorporate additional features and kernels into our framework, which results in increased performance.",
"",
"Several benchmark datasets for visual tracking research have been proposed in recent years. Despite their usefulness, whether they are sufficient for understanding and diagnosing the strengths and weaknesses of different trackers remains questionable. To address this issue, we propose a framework by breaking a tracker down into five constituent parts, namely, motion model, feature extractor, observation model, model updater, and ensemble post-processor. We then conduct ablative experiments on each component to study how it affects the overall result. Surprisingly, our findings are discrepant with some common beliefs in the visual tracking research community. We find that the feature extractor plays the most important role in a tracker. On the other hand, although the observation model is the focus of many studies, we find that it often brings no significant improvement. Moreover, the motion model and model updater contain many details that could affect the result. Also, the ensemble post-processor can improve the result substantially when the constituent trackers have high diversity. Based on our findings, we put together some very elementary building blocks to give a basic tracker which is competitive in performance to the state-of-the-art trackers. We believe our framework can provide a solid baseline when conducting controlled experiments for visual tracking research."
]
} |
1604.01802 | 2340000481 | Machine learning techniques are often used in computer vision due to their ability to leverage large amounts of training data to improve performance. Unfortunately, most generic object trackers are still trained from scratch online and do not benefit from the large number of videos that are readily available for offline training. We propose a method for offline training of neural networks that can track novel objects at test-time at 100 fps. Our tracker is significantly faster than previous methods that use neural networks for tracking, which are typically very slow to run and not practical for real-time applications. Our tracker uses a simple feed-forward network with no online training required. The tracker learns a generic relationship between object motion and appearance and can be used to track novel objects that do not appear in the training set. We test our network on a standard tracking benchmark to demonstrate our tracker's state-of-the-art performance. Further, our performance improves as we add more videos to our offline training set. To the best of our knowledge, our tracker is the first neural-network tracker that learns to track generic objects at 100 fps. | Some researchers have also attempted to use neural networks for tracking within the traditional online training framework @cite_29 @cite_31 @cite_12 @cite_2 @cite_21 @cite_30 @cite_26 @cite_13 @cite_9 @cite_17 , showing state-of-the-art results @cite_30 @cite_13 @cite_6 . Unfortunately, neural networks are very slow to train, and if online training is required, then the resulting tracker will be very slow at test time. Such trackers range from 0.8 fps @cite_29 to 15 fps @cite_2 , with the top performing neural-network trackers running at 1 fps on a GPU @cite_30 @cite_13 @cite_6 . Hence, these trackers are not usable for most practical applications. 
Because our tracker is trained offline in a generic manner, no online training is required, enabling us to track at 100 fps. | {
"cite_N": [
"@cite_30",
"@cite_26",
"@cite_29",
"@cite_21",
"@cite_9",
"@cite_6",
"@cite_2",
"@cite_31",
"@cite_13",
"@cite_12",
"@cite_17"
],
"mid": [
"2950410377",
"2280723172",
"2069332137",
"1497265063",
"1982344527",
"2130026429",
"",
"1554825167",
"",
"2211629196",
"2951157758"
],
"abstract": [
"We propose a novel visual tracking algorithm based on the representations from a discriminatively trained Convolutional Neural Network (CNN). Our algorithm pretrains a CNN using a large set of videos with tracking ground-truths to obtain a generic target representation. Our network is composed of shared layers and multiple branches of domain-specific layers, where domains correspond to individual training sequences and each branch is responsible for binary classification to identify the target in each domain. We train the network with respect to each domain iteratively to obtain generic target representations in the shared layers. When tracking a target in a new sequence, we construct a new network by combining the shared layers in the pretrained CNN with a new binary classification layer, which is updated online. Online tracking is performed by evaluating the candidate windows randomly sampled around the previous target state. The proposed algorithm illustrates outstanding performance compared with state-of-the-art methods in existing tracking benchmarks.",
"Deep networks have been successfully applied to visual tracking by learning a generic representation offline from numerous training images. However the offline training is time-consuming and the learned generic representation may be less discriminative for tracking specific objects. In this paper we present that, even without offline training with a large amount of auxiliary data, simple two-layer convolutional networks can be powerful enough to develop a robust representation for visual tracking. In the first frame, we employ the k-means algorithm to extract a set of normalized patches from the target region as fixed filters, which integrate a series of adaptive contextual filters surrounding the target to define a set of feature maps in the subsequent frames. These maps measure similarities between each filter and the useful local intensity patterns across the target, thereby encoding its local structural information. Furthermore, all the maps form together a global representation, which is built on mid-level features, thereby remaining close to image-level information, and hence the inner geometric layout of the target is also well preserved. A simple soft shrinkage method with an adaptive threshold is employed to de-noise the global representation, resulting in a robust sparse representation. The representation is updated via a simple and effective online strategy, allowing it to robustly adapt to target appearance variations. Our convolution networks have surprisingly lightweight structure, yet perform favorably against several state-of-the-art methods on the CVPR2013 tracking benchmark dataset with 50 challenging videos.",
"Defining hand-crafted feature representations needs expert knowledge, requires timeconsuming manual adjustments, and besides, it is arguably one of the limiting factors of object tracking. In this paper, we propose a novel solution to automatically relearn the most useful feature representations during the tracking process in order to accurately adapt appearance changes, pose and scale variations while preventing from drift and tracking failures. We employ a candidate pool of multiple Convolutional Neural Networks (CNNs) as a data-driven model of different instances of the target object. Individually, each CNN maintains a specific set of kernels that favourably discriminate object patches from their surrounding background using all available low-level cues. These kernels are updated in an online manner at each frame after being trained with just one instance at the initialization of the corresponding CNN. Given a frame, the most promising CNNs in the pool are selected to evaluate the hypothesises for the target object. The hypothesis with the highest score is assigned as the current detection window and the selected models are retrained using a warm-start back-propagation which optimizes a structural loss function. In addition to the model-free tracker, we introduce a class-specific version of the proposed method that is tailored for tracking of a particular object class such as human faces. Our experiments on a large selection of videos from the recent benchmarks demonstrate that our method outperforms the existing state-of-the-art algorithms and rarely loses the track of the target object.",
"Convolutional neural network (CNN) models have demonstrated great success in various computer vision tasks including image classification and object detection. However, some equally important tasks such as visual tracking remain relatively unexplored. We believe that a major hurdle that hinders the application of CNN to visual tracking is the lack of properly labeled training data. While existing applications that liberate the power of CNN often need an enormous amount of training data in the order of millions, visual tracking applications typically have only one labeled example in the first frame of each video. We address this research issue here by pre-training a CNN offline and then transferring the rich feature hierarchies learned to online tracking. The CNN is also fine-tuned during online tracking to adapt to the appearance of the tracked target specified in the first video frame. To fit the characteristics of object tracking, we first pre-train the CNN to recognize what is an object, and then propose to generate a probability map instead of producing a simple class label. Using two challenging open benchmarks for performance evaluation, our proposed tracker has demonstrated substantial improvement over other state-of-the-art trackers.",
"Visual representation is crucial for visual tracking method's performance. Conventionally, visual representations adopted in visual tracking rely on hand-crafted computer vision descriptors. These descriptors were developed generically without considering tracking-specific information. In this paper, we propose to learn complex-valued invariant representations from tracked sequential image patches, via strong temporal slowness constraint and stacked convolutional autoencoders. The deep slow local representations are learned offline on unlabeled data and transferred to the observational model of our proposed tracker. The proposed observational model retains old training samples to alleviate drift, and collects negative samples which are coherent with target's motion pattern for better discriminative tracking. With the learned representation and online training samples, a logistic regression classifier is adopted to distinguish target from background, and retrained online to adapt to appearance changes. Subsequently, the observational model is integrated into a particle filter framework to perform visual tracking. Experimental results on various challenging benchmark sequences demonstrate that the proposed tracker performs favorably against several state-of-the-art trackers. Highlights: Temporal slowness principle is exploited for learning tracking representation. Learned invariant representation is decomposed into amplitude and phase features. Higher-level features are learned by stacking autoencoders convolutionally. A novel observational model to counter drift and collect relevant samples online. Tracking experiments show our method is superior to state-of-the-art trackers.",
"The Visual Object Tracking challenge 2015, VOT2015, aims at comparing short-term single-object visual trackers that do not apply pre-learned models of object appearance. Results of 62 trackers are presented. The number of tested trackers makes VOT 2015 the largest benchmark on short-term tracking to date. For each participating tracker, a short description is provided in the appendix. Features of the VOT2015 challenge that go beyond its VOT2014 predecessor are: (i) a new VOT2015 dataset twice as large as in VOT2014 with full annotation of targets by rotated bounding boxes and per-frame attribute, (ii) extensions of the VOT2014 evaluation methodology by introduction of a new performance measure. The dataset, the evaluation kit as well as the results are publicly available at the challenge website.",
"",
"Deep neural networks, albeit their great success on feature learning in various computer vision tasks, are usually considered as impractical for online visual tracking, because they require very long training time and a large number of training samples. In this paper, we present an efficient and very robust tracking algorithm using a single convolutional neural network (CNN) for learning effective feature representations of the target object in a purely online manner. Our contributions are multifold. First, we introduce a novel truncated structural loss function that maintains as many training samples as possible and reduces the risk of tracking error accumulation. Second, we enhance the ordinary stochastic gradient descent approach in CNN training with a robust sample selection mechanism. The sampling mechanism randomly generates positive and negative samples from different temporal distributions, which are generated by taking the temporal relations and label noise into account. Finally, a lazy yet effective updating scheme is designed for CNN training. Equipped with this novel updating algorithm, the CNN model is robust to some long-existing difficulties in visual tracking, such as occlusion or incorrect detections, without loss of the effective adaption for significant appearance changes. In the experiment, our CNN tracker outperforms all compared state-of-the-art methods on two recently proposed benchmarks, which in total involve over 60 video sequences. The remarkable performance improvement over the existing trackers illustrates the superiority of the feature representations, which are learned purely online via the proposed deep learning framework.",
"",
"We propose a new approach for general object tracking with fully convolutional neural network. Instead of treating convolutional neural network (CNN) as a black-box feature extractor, we conduct in-depth study on the properties of CNN features offline pre-trained on massive image data and classification task on ImageNet. The discoveries motivate the design of our tracking system. It is found that convolutional layers in different levels characterize the target from different perspectives. A top layer encodes more semantic features and serves as a category detector, while a lower layer carries more discriminative information and can better separate the target from distracters with similar appearance. Both layers are jointly used with a switch mechanism during tracking. It is also found that for a tracking target, only a subset of neurons are relevant. A feature map selection method is developed to remove noisy and irrelevant feature maps, which can reduce computation redundancy and improve tracking accuracy. Extensive evaluation on the widely used tracking benchmark [36] shows that the proposed tacker outperforms the state-of-the-art significantly.",
"We propose an online visual tracking algorithm by learning discriminative saliency map using Convolutional Neural Network (CNN). Given a CNN pre-trained on a large-scale image repository in offline, our algorithm takes outputs from hidden layers of the network as feature descriptors since they show excellent representation performance in various general visual recognition problems. The features are used to learn discriminative target appearance models using an online Support Vector Machine (SVM). In addition, we construct target-specific saliency map by backpropagating CNN features with guidance of the SVM, and obtain the final tracking result in each frame based on the appearance model generatively constructed with the saliency map. Since the saliency map visualizes spatial configuration of target effectively, it improves target localization accuracy and enable us to achieve pixel-level target segmentation. We verify the effectiveness of our tracking algorithm through extensive experiment on a challenging benchmark, where our method illustrates outstanding performance compared to the state-of-the-art tracking algorithms."
]
} |
1604.01802 | 2340000481 | Machine learning techniques are often used in computer vision due to their ability to leverage large amounts of training data to improve performance. Unfortunately, most generic object trackers are still trained from scratch online and do not benefit from the large number of videos that are readily available for offline training. We propose a method for offline training of neural networks that can track novel objects at test-time at 100 fps. Our tracker is significantly faster than previous methods that use neural networks for tracking, which are typically very slow to run and not practical for real-time applications. Our tracker uses a simple feed-forward network with no online training required. The tracker learns a generic relationship between object motion and appearance and can be used to track novel objects that do not appear in the training set. We test our network on a standard tracking benchmark to demonstrate our tracker's state-of-the-art performance. Further, our performance improves as we add more videos to our offline training set. To the best of our knowledge, our tracker is the first neural-network tracker that learns to track generic objects at 100 fps. | A separate class of trackers is the model-based trackers, which are designed to track a specific class of objects @cite_4 @cite_19 @cite_15 . For example, if one is only interested in tracking pedestrians, then one can train a pedestrian detector. During test-time, these detections can be linked together using temporal information. These trackers are trained offline, but they are limited because they can only track a specific class of objects. Our tracker is trained offline in a generic fashion and can be used to track novel objects at test time. | {
"cite_N": [
"@cite_19",
"@cite_15",
"@cite_4"
],
"mid": [
"",
"2168117308",
"2526455154"
],
"abstract": [
"",
"In this paper, we treat tracking as a learning problem of estimating the location and the scale of an object given its previous location, scale, as well as current and previous image frames. Given a set of examples, we train convolutional neural networks (CNNs) to perform the above estimation task. Different from other learning methods, the CNNs learn both spatial and temporal features jointly from image pairs of two adjacent frames. We introduce multiple pathways in the CNN to better fuse local and global information. A creative shift-variant CNN architecture is designed so as to alleviate the drift problem when the distracting objects are similar to the target in a cluttered environment. Furthermore, we employ CNNs to estimate the scale through the accurate localization of some key points. These techniques are object-independent so that the proposed method can be applied to track other types of objects. The capability of the tracker to handle complex situations is demonstrated in many testing sequences.",
"This work is a contribution to understanding multi-object traffic scenes from video sequences. All data is provided by a camera system which is mounted on top of the autonomous driving platform AnnieWAY. The proposed probabilistic generative model reasons jointly about the 3D scene layout as well as the 3D location and orientation of objects in the scene. In particular, the scene topology, geometry as well as traffic activities are inferred from short video sequences."
]
} |
1604.01802 | 2340000481 | Machine learning techniques are often used in computer vision due to their ability to leverage large amounts of training data to improve performance. Unfortunately, most generic object trackers are still trained from scratch online and do not benefit from the large number of videos that are readily available for offline training. We propose a method for offline training of neural networks that can track novel objects at test-time at 100 fps. Our tracker is significantly faster than previous methods that use neural networks for tracking, which are typically very slow to run and not practical for real-time applications. Our tracker uses a simple feed-forward network with no online training required. The tracker learns a generic relationship between object motion and appearance and can be used to track novel objects that do not appear in the training set. We test our network on a standard tracking benchmark to demonstrate our tracker's state-of-the-art performance. Further, our performance improves as we add more videos to our offline training set. To the best of our knowledge, our tracker is the first neural-network tracker that learns to track generic objects at 100 fps. | A related area of research is patch matching @cite_28 @cite_7 , which was recently used for tracking in @cite_3 , running at 4 fps. In such an approach, many candidate patches are passed through the network, and the patch with the highest matching score is selected as the tracking output. In contrast, our network only passes two images through the network, and the network regresses directly to the bounding box location of the target object. By avoiding the need to score many candidate patches, we are able to track objects at 100 fps. | {
"cite_N": [
"@cite_28",
"@cite_3",
"@cite_7"
],
"mid": [
"1929856797",
"",
"2949213045"
],
"abstract": [
"Motivated by recent successes on learning feature representations and on learning feature comparison functions, we propose a unified approach to combining both for training a patch matching system. Our system, dubbed MatchNet, consists of a deep convolutional network that extracts features from patches and a network of three fully connected layers that computes a similarity between the extracted features. To ensure experimental repeatability, we train MatchNet on standard datasets and employ an input sampler to augment the training set with synthetic exemplar pairs that reduce overfitting. Once trained, we achieve better computational efficiency during matching by disassembling MatchNet and separately applying the feature computation and similarity networks in two sequential stages. We perform a comprehensive set of experiments on standard datasets to carefully study the contributions of each aspect of MatchNet, with direct comparisons to established methods. Our results confirm that our unified approach improves accuracy over previous state-of-the-art results on patch matching datasets, while reducing the storage requirement for descriptors. We make pre-trained MatchNet publicly available.",
"",
"In this paper we show how to learn directly from image data (i.e., without resorting to manually-designed features) a general similarity function for comparing image patches, which is a task of fundamental importance for many computer vision problems. To encode such a function, we opt for a CNN-based model that is trained to account for a wide variety of changes in image appearance. To that end, we explore and study multiple neural network architectures, which are specifically adapted to this task. We show that such an approach can significantly outperform the state-of-the-art on several problems and benchmark datasets."
]
} |
1604.01802 | 2340000481 | Machine learning techniques are often used in computer vision due to their ability to leverage large amounts of training data to improve performance. Unfortunately, most generic object trackers are still trained from scratch online and do not benefit from the large number of videos that are readily available for offline training. We propose a method for offline training of neural networks that can track novel objects at test-time at 100 fps. Our tracker is significantly faster than previous methods that use neural networks for tracking, which are typically very slow to run and not practical for real-time applications. Our tracker uses a simple feed-forward network with no online training required. The tracker learns a generic relationship between object motion and appearance and can be used to track novel objects that do not appear in the training set. We test our network on a standard tracking benchmark to demonstrate our tracker's state-of-the-art performance. Further, our performance improves as we add more videos to our offline training set. To the best of our knowledge, our tracker is the first neural-network tracker that learns to track generic objects at 100 fps. | Prior attempts have been made to use neural networks for tracking in various other ways @cite_27 , including visual attention models @cite_5 @cite_11 . However, these approaches are not competitive with other state-of-the-art trackers when evaluated on difficult tracker datasets. | {
"cite_N": [
"@cite_5",
"@cite_27",
"@cite_11"
],
"mid": [
"2183231851",
"2014170934",
"2951527505"
],
"abstract": [
"We propose a novel attentional model for simultaneous object tracking and recognition that is driven by gaze data. Motivated by theories of the human perceptual system, the model consists of two interacting pathways: ventral and dorsal. The ventral pathway models object appearance and classification using deep (factored)-restricted Boltzmann machines. At each point in time, the observations consist of retinal images, with decaying resolution toward the periphery of the gaze. The dorsal pathway models the location, orientation, scale and speed of the attended object. The posterior distribution of these states is estimated with particle filtering. Deeper in the dorsal pathway, we encounter an attentional mechanism that learns to control gazes so as to minimize tracking uncertainty. The approach is modular (with each module easily replaceable with more sophisticated algorithms), straightforward to implement, practically efficient, and works well in simple video sequences.",
"We present deep neural network models applied to tracking objects of interest. Deep neural networks trained for general-purpose use are introduced to conduct long-term tracking, which requires scale-invariant feature extraction even when the object dramatically changes shape as it moves in the scene. We use two-layer networks trained using either supervised or unsupervised learning techniques. The networks, augmented with a radial basis function classifier, are able to track objects based on a single example. We tested the networks' tracking capability on the TLD dataset, one of the most difficult sets of tracking tasks, and real-time tracking is achieved in 0.074 seconds per frame for a 320×240 pixel image on a 2-core 2.7GHz Intel i7 laptop.",
"Applying convolutional neural networks to large images is computationally expensive because the amount of computation scales linearly with the number of image pixels. We present a novel recurrent neural network model that is capable of extracting information from an image or video by adaptively selecting a sequence of regions or locations and only processing the selected regions at high resolution. Like convolutional neural networks, the proposed model has a degree of translation invariance built-in, but the amount of computation it performs can be controlled independently of the input image size. While the model is non-differentiable, it can be trained using reinforcement learning methods to learn task-specific policies. We evaluate our model on several image classification tasks, where it significantly outperforms a convolutional neural network baseline on cluttered images, and on a dynamic visual control problem, where it learns to track a simple object without an explicit training signal for doing so."
]
} |
1604.01904 | 2536575466 | Recently, neural models have been proposed for headline generation by learning to map documents to headlines with recurrent neural networks. Nevertheless, as traditional neural networks utilize maximum likelihood estimation for parameter optimization, they essentially constrain the expected training objective to the word level rather than the sentence level. Moreover, the performance of model prediction significantly relies on the training data distribution. To overcome these drawbacks, we employ a minimum risk training strategy in this paper, which directly optimizes model parameters at the sentence level with respect to evaluation metrics and leads to significant improvements for headline generation. Experimental results show that our model outperforms state-of-the-art systems on both English and Chinese headline generation tasks. | Headline generation is a well-defined task standardized in DUC-2003 and DUC-2004. Various approaches have been proposed for headline generation: rule-based, statistical-based and neural-based. The rule-based models create a headline for a news article using handcrafted and linguistically motivated rules to guide the choice of a potential headline. Hedge Trimmer @cite_6 is a representative example of this approach, which creates a headline by removing constituents from the parse tree of the first sentence until it reaches a specific length limit. Statistical-based methods make use of large-scale training data to learn correlations between words in headlines and articles @cite_12 . The best system on DUC-2004, TOPIARY @cite_8 , combines both linguistic and statistical information to generate headlines. There are also methods that make use of knowledge bases to generate better headlines. With the advances of deep neural networks, there is a growing body of work designing neural networks for headline generation. @cite_17 proposes an attention-based model to generate headlines.
@cite_5 proposes a recurrent neural network with long short-term memory (LSTM) @cite_11 for headline generation. @cite_9 introduces a copying mechanism into the encoder-decoder architecture, inspired by Pointer Networks @cite_4 . | {
"cite_N": [
"@cite_11",
"@cite_4",
"@cite_8",
"@cite_9",
"@cite_6",
"@cite_5",
"@cite_12",
"@cite_17"
],
"mid": [
"",
"",
"2103164118",
"2964165364",
"2081265723",
"2251654079",
"2028339364",
"1843891098"
],
"abstract": [
"",
"",
"This paper reports our results at DUC2004 and describes our approach, implemented in a system called Topiary. We will show that the combination of linguistically motivated sentence compression with statistically selected topic terms performs better than either alone, according to some automatic summary evaluation measures.",
"",
"This paper presents Hedge Trimmer, a HEaDline GEneration system that creates a headline for a newspaper story using linguistically-motivated heuristics to guide the choice of a potential headline. We present feasibility tests used to establish the validity of an approach that constructs a headline by selecting words in order from a story. In addition, we describe experimental results that demonstrate the effectiveness of our linguistically-motivated approach over a HMM-based model, using both human evaluation and automatic metrics for comparing the two approaches.",
"We present an LSTM approach to deletion-based sentence compression where the task is to translate a sentence into a sequence of zeros and ones, corresponding to token deletion decisions. We demonstrate that even the most basic version of the system, which is given no syntactic information (no PoS or NE tags, or dependencies) or desired compression length, performs surprisingly well: around 30% of the compressions from a large test set could be regenerated. We compare the LSTM system with a competitive baseline which is trained on the same amount of data but is additionally provided with all kinds of linguistic features. In an experiment with human raters the LSTM-based model outperforms the baseline achieving 4.5 in readability and 3.8 in informativeness.",
"Extractive summarization techniques cannot generate document summaries shorter than a single sentence, something that is often required. An ideal summarization system would understand each document and generate an appropriate summary directly from the results of that understanding. A more practical approach to this problem results in the use of an approximation: viewing summarization as a problem analogous to statistical machine translation. The issue then becomes one of generating a target document in a more concise language from a source document in a more verbose language. This paper presents results on experiments using this approach, in which statistical models of the term selection and term ordering are jointly applied to produce summaries in a style learned from a training corpus.",
"Summarization based on text extraction is inherently limited, but generation-style abstractive methods have proven challenging to build. In this work, we propose a fully data-driven approach to abstractive sentence summarization. Our method utilizes a local attention-based model that generates each word of the summary conditioned on the input sentence. While the model is structurally simple, it can easily be trained end-to-end and scales to a large amount of training data. The model shows significant performance gains on the DUC-2004 shared task compared with several strong baselines."
]
} |
1604.01904 | 2536575466 | Recently, neural models have been proposed for headline generation by learning to map documents to headlines with recurrent neural networks. Nevertheless, as traditional neural networks utilize maximum likelihood estimation for parameter optimization, they essentially constrain the expected training objective to the word level rather than the sentence level. Moreover, the performance of model prediction significantly relies on the training data distribution. To overcome these drawbacks, we employ a minimum risk training strategy in this paper, which directly optimizes model parameters at the sentence level with respect to evaluation metrics and leads to significant improvements for headline generation. Experimental results show that our model outperforms state-of-the-art systems on both English and Chinese headline generation tasks. | In this work, we propose the NHG model realized by a bidirectional recurrent neural network with gated recurrent units. We also propose to apply minimum risk training (MRT) to optimize the parameters of the NHG model. MRT has been widely used in machine translation @cite_25 @cite_24 @cite_26 @cite_7 , but has been less explored in document summarization. To the best of our knowledge, this work is the first attempt to utilize MRT in neural headline generation. | {
"cite_N": [
"@cite_24",
"@cite_26",
"@cite_25",
"@cite_7"
],
"mid": [
"2137143056",
"2250445771",
"2146574666",
"2195405088"
],
"abstract": [
"When training the parameters for a natural language system, one would prefer to minimize 1-best loss (error) on an evaluation set. Since the error surface for many natural language problems is piecewise constant and riddled with local minima, many systems instead optimize log-likelihood, which is conveniently differentiable and convex. We propose training instead to minimize the expected loss, or risk. We define this expectation using a probability distribution over hypotheses that we gradually sharpen (anneal) to focus on the 1-best hypothesis. Besides the linear loss functions used in previous work, we also describe techniques for optimizing nonlinear functions such as precision or the BLEU metric. We present experiments training log-linear combinations of models for dependency parsing and for machine translation. In machine translation, annealed minimum risk training achieves significant improvements in BLEU over standard minimum error training. We also show improvements in labeled dependency parsing.",
"This paper tackles the sparsity problem in estimating phrase translation probabilities by learning continuous phrase representations, whose distributed nature enables the sharing of related phrases in their representations. A pair of source and target phrases are projected into continuous-valued vector representations in a low-dimensional latent space, where their translation score is computed by the distance between the pair in this new space. The projection is performed by a neural network whose weights are learned on parallel training data. Experimental evaluation has been performed on two WMT translation tasks. Our best result improves the performance of a state-of-the-art phrase-based statistical machine translation system trained on WMT 2012 French-English data by up to 1.3 BLEU points.",
"Often, the training procedure for statistical machine translation models is based on maximum likelihood or related criteria. A general problem of this approach is that there is only a loose relation to the final translation quality on unseen text. In this paper, we analyze various training criteria which directly optimize translation quality. These training criteria make use of recently proposed automatic evaluation metrics. We describe a new algorithm for efficient training an unsmoothed error count. We show that significantly better results can often be obtained if the final evaluation criterion is taken directly into account as part of the training procedure.",
"We propose minimum risk training for end-to-end neural machine translation. Unlike conventional maximum likelihood estimation, minimum risk training is capable of optimizing model parameters directly with respect to arbitrary evaluation metrics, which are not necessarily differentiable. Experiments show that our approach achieves significant improvements over maximum likelihood estimation on a state-of-the-art neural machine translation system across various languages pairs. Transparent to architectures, our approach can be applied to more neural networks and potentially benefit more NLP tasks."
]
} |
1604.01985 | 2340811439 | Many different approaches for estimating the Interaction Quality (IQ) of Spoken Dialogue Systems have been investigated. While dialogues clearly have a sequential nature, statistical classification approaches designed for sequential problems do not seem to work better on automatic IQ estimation than static approaches, i.e., regarding each turn as being independent of the corresponding dialogue. Hence, we analyse this effect by investigating the subset of temporal features used as input for statistical classification of IQ. We extend the set of temporal features to contain the system and the user view. We determine the contribution of each feature sub-group showing that temporal features contribute most to the classification performance. Furthermore, for the feature sub-group modeling the temporal effects with a window, we modify the window size increasing the overall performance significantly by +15.69%. | Much work on predicting User Satisfaction as a single-valued rating task for each system-user-exchange has been performed using both static and sequential approaches. @cite_4 derived turn-level ratings from an overall score applied by the users after the dialogue. Using n-gram models reflecting the dialogue history, the estimation results for US on a 5-point scale proved to be hardly above chance. | {
"cite_N": [
"@cite_4"
],
"mid": [
"194823000"
],
"abstract": [
"In this paper, we propose an estimation method of user satisfaction for a spoken dialog system using an N-gram-based dialog history model. We have collected a large amount of spoken dialog data accompanied by usability evaluation scores by users in real environments. The database is made by a field-test in which naive users used a client-server music retrieval system with a spoken dialog interface on their own PCs. An N-gram model is trained from the sequences that consist of users’ dialog acts and/or the system’s dialog acts for each one of six user satisfaction levels: from 1 to 5 and φ (task not completed). Then, the satisfaction level is estimated based on the N-gram likelihood. Experiments were conducted on the large real data and the results show that our proposed method achieved good classification performance; the classification accuracy was 94.7% in the experiment on a classification into dialogs with task completion and those without task completion. Even if the classifier detected all of the task incomplete dialog correctly, our proposed method achieved the false detection rate of only 6%."
]
} |
1604.01985 | 2340811439 | Many different approaches for estimating the Interaction Quality (IQ) of Spoken Dialogue Systems have been investigated. While dialogues clearly have a sequential nature, statistical classification approaches designed for sequential problems do not seem to work better on automatic IQ estimation than static approaches, i.e., regarding each turn as being independent of the corresponding dialogue. Hence, we analyse this effect by investigating the subset of temporal features used as input for statistical classification of IQ. We extend the set of temporal features to contain the system and the user view. We determine the contribution of each feature sub-group showing that temporal features contribute most to the classification performance. Furthermore, for the feature sub-group modeling the temporal effects with a window, we modify the window size increasing the overall performance significantly by +15.69%. | @cite_22 proposed a model to predict turn-wise ratings for human-human dialogues (transcribed conversation) and human-machine dialogues (text from a chat system). Ratings ranging from 1-7 were applied by two expert raters labeling "Smoothness", "Closeness", and "Willingness", not achieving a Match Rate per Rating (MR R) of more than 0.2-0.24 applying Hidden Markov Models as well as Conditional Random Fields. These results are only slightly above the random baseline of 0.14. Further work by @cite_25 uses ratings for overall dialogues to predict ratings for each system-user-exchange using HMMs. Again, evaluating in the three user satisfaction categories "Smoothness", "Closeness", and "Willingness" with ratings ranging from 1-7, it achieved a best performance of 0.19 MR R. An approach presented by @cite_19 uses Hidden Markov Models (HMMs) to model the SDS as a process evolving over time. User Satisfaction was predicted at any point within the dialogue on a 5-point scale.
Evaluation was performed based on labels the users applied themselves during the dialogue. | {
"cite_N": [
"@cite_19",
"@cite_25",
"@cite_22"
],
"mid": [
"1975016129",
"1565373308",
"1586218930"
],
"abstract": [
"Models for predicting judgments about the quality of Spoken Dialog Systems have been used as overall evaluation metric or as optimization functions in adaptive systems. We describe a new approach to such models, using Hidden Markov Models (HMMs). The user's opinion is regarded as a continuous process evolving over time. We present the data collection method and results achieved with the HMM model.",
"This paper proposes a novel approach for predicting user satisfaction transitions during a dialogue only from the ratings given to entire dialogues, with the aim of reducing the cost of creating reference ratings for utterances dialogue-acts that have been necessary in conventional approaches. In our approach, we first train hidden Markov models (HMMs) of dialogue-act sequences associated with each overall rating. Then, we combine such rating-related HMMs into a single HMM to decode a sequence of dialogue-acts into state sequences representing to which overall rating each dialogue-act is most related, which leads to our rating predictions. Experimental results in two dialogue domains show that our approach can make reasonable predictions; it significantly outperforms a baseline and nears the upper bound of a supervised approach in some evaluation criteria. We also show that introducing states that represent dialogue-act sequences that occur commonly in all ratings into an HMM significantly improves prediction accuracy.",
"This paper addresses three important issues in automatic prediction of user satisfaction transitions in dialogues. The first issue concerns the individual differences in user satisfaction ratings and how they affect the possibility of creating a user-independent prediction model. The second issue concerns how to determine appropriate evaluation criteria for predicting user satisfaction transitions. The third issue concerns how to train suitable prediction models. We present our findings for these issues on the basis of the experimental results using dialogue data in two domains."
]
} |
1604.01985 | 2340811439 | Many different approaches for estimating the Interaction Quality (IQ) of Spoken Dialogue Systems have been investigated. While dialogues clearly have a sequential nature, statistical classification approaches designed for sequential problems do not seem to work better on automatic IQ estimation than static approaches, i.e., regarding each turn as being independent of the corresponding dialogue. Hence, we analyse this effect by investigating the subset of temporal features used as input for statistical classification of IQ. We extend the set of temporal features to contain the system and the user view. We determine the contribution of each feature sub-group showing that temporal features contribute most to the classification performance. Furthermore, for the feature sub-group modeling the temporal effects with a window, we modify the window size increasing the overall performance significantly by +15.69%. | Work by @cite_9 deals with determining User Satisfaction from ratings applied by the users themselves during the dialogues. A Support Vector Machine (SVM) was trained using automatically derived interaction parameters to predict User Satisfaction for each system-user-exchange on a 5-point scale, achieving an MR R of 0.49. | {
"cite_N": [
"@cite_9"
],
"mid": [
"2083042912"
],
"abstract": [
"This paper addresses a new approach for statistical modeling of user satisfaction in Spoken Dialogue Systems (SDS) and thereby allows an online monitoring of spoken human-machine interaction. The presented technique relies on a large set of input variables originating from system log files that quantify the ongoing spoken human-machine interaction. The target variable, user satisfaction (US), is captured in a lab study on a 5 point scale with 46 users interacting with an SDS. The model, which is based on Support Vector Machines (SVM) yields a performance of 49.2% unweighted average recall (Cohen's κ = .442, Spearman's ρ = .668) and significantly outperforms related work in that field."
]
} |
1604.01985 | 2340811439 | Many different approaches for estimating the Interaction Quality (IQ) of Spoken Dialogue Systems have been investigated. While dialogues clearly have a sequential nature, statistical classification approaches designed for sequential problems do not seem to work better on automatic IQ estimation than static approaches, i.e., regarding each turn as being independent of the corresponding dialogue. Hence, we analyse this effect by investigating the subset of temporal features used as input for statistical classification of IQ. We extend the set of temporal features to contain the system and the user view. We determine the contribution of each feature sub-group showing that temporal features contribute most to the classification performance. Furthermore, for the feature sub-group modeling the temporal effects with a window, we modify the window size increasing the overall performance significantly by +15.69%. | To improve the performance of static classifiers for IQ recognition, @cite_24 proposed a hierarchical approach: first, IQ is predicted using a static classifier. Then, the prediction error is calculated and a second classifier is trained targeting the error value. In a final step, the initial hypothesis may then be corrected by the estimated error. This approach has been successfully applied, improving the recognition performance relatively by up to +4.1%. Work on rendering IQ prediction as a sequential task analyzing HMMs and Conditioned Hidden Markov Models has been performed by @cite_10 . They achieved a UAR of 0.39 for CHMMs. This was outperformed by regular HMMs (0.44 UAR) using Gaussian mixture models for modeling the observation probability for both approaches. Replacing the observation probability model with the confidence scores of static classification methods, @cite_5 achieved a significant improvement over the baseline with a UAR of 0.51. | {
"cite_N": [
"@cite_24",
"@cite_5",
"@cite_10"
],
"mid": [
"2114707621",
"2250390455",
"291351333"
],
"abstract": [
"Determining the quality of an ongoing interaction in the field of Spoken Dialogue Systems is a hard task. While existing methods employing automatic estimation already achieve reasonable results, still there is a lot of room for improvement. Hence, we aim at tackling the task by estimating the error of the applied statistical classification algorithms in a two-stage approach. Correcting the hypotheses using the estimated model error increases performance by up to 4.1% relative improvement in Unweighted Average Recall.",
"Research trends on SDS evaluation are recently focusing on objective assessment methods. Most existing methods, which derive quality for each system-user-exchange, do not consider temporal dependencies on the quality of previous exchanges. In this work, we investigate an approach for determining Interaction Quality for human-machine dialogue based on methods modeling the sequential characteristics using HMM modeling. Our approach significantly outperforms conventional approaches by up to 4.5% relative improvement based on Unweighted Average Recall metrics.",
"The interaction quality (IQ) metric has recently been introduced for measuring the quality of spoken dialogue systems (SDSs) on the exchange level. While previous work relied on support vector machines (SVMs), we evaluate a conditioned hidden Markov model (CHMM) which accounts for the sequential character of the data and, in contrast to a regular hidden Markov model (HMM), provides class probabilities. While the CHMM achieves an unweighted average recall (UAR) of 0.39, it is outperformed by regular HMM with an UAR of 0.44 and a SVM with an UAR of 0.49, both trained and evaluated under the same conditions."
]
} |
1604.01999 | 2342369154 | We study online optimization of smoothed piecewise constant functions over the domain [0, 1). This is motivated by the problem of adaptively picking parameters of learning algorithms as in the recently introduced framework by Gupta and Roughgarden (2016). Majority of the machine learning literature has focused on Lipschitz-continuous functions or functions with bounded gradients. This is with good reason—any learning algorithm suffers linear regret even against piecewise constant functions that are chosen adversarially, arguably the simplest of non-Lipschitz continuous functions. The smoothed setting we consider is inspired by the seminal work of Spielman and Teng (2004) and the recent work of Gupta and Roughgarden (2016)—in this setting, the sequence of functions may be chosen by an adversary, however, with some uncertainty in the location of discontinuities. We give algorithms that achieve sublinear regret in the full information and bandit settings. | The work most closely related to ours is that of @cite_4 . In the context of online learning, our work improves upon theirs in providing bounds for online learning of algorithm parameters that are bounds; in their paper they only provide @math -regret bounds, in that one can guarantee that for any given @math the algorithm will achieve average regret of @math . The algorithms we present in this paper are more natural and achieve a significant improvement in running time. We also give results in the bandit setting, which is in many ways more appropriate for the applications under consideration. The approach considered in their paper does not yield a bandit algorithm. With some effort, one may be able to adapt ideas from the @math algorithm of @cite_5 to achieve a non-trivial regret bound in the bandit case; however, the resulting algorithm would be computationally expensive. | {
"cite_N": [
"@cite_5",
"@cite_4"
],
"mid": [
"2098339418",
"1881419322"
],
"abstract": [
"In the multi-armed bandit problem, a gambler must decide which arm of non-identical slot machines to play in a sequence of trials so as to maximize his reward. This classical problem has received much attention because of the simple model it provides of the trade-off between exploration (trying out each arm to nd the best one) and exploitation (playing the arm believed to give the best payoff). Past solutions for the bandit problem have almost always relied on assumptions about the statistics of the slot machines. In this work, we make no statistical assumptions whatsoever about the nature of the process generating the payoffs of the slot machines. We give a solution to the bandit problem in which an adversary, rather than a well-behaved stochastic process, has complete control over the payoffs. In a sequence of plays, we prove that the per-round payoff of our algorithm approaches that of the best arm at the rate . We show by a matching lower bound that this is best possible. We also prove that our algorithm approaches the per-round payoff of any set of strategies at a similar rate: if the best strategy is chosen from a pool of strategies then our algorithm approaches the per-round payoff of the strategy at the rate . Finally, we apply our results to the problem of playing an unknown repeated matrix game. We show that our algorithm approaches the minimax payoff of the unknown game at the rate",
"We consider the framework of stochastic multi-armed bandit problems and study the possibilities and limitations of strategies that perform an online exploration of the arms. The strategies are assessed in terms of their simple regret, a regret notion that captures the fact that exploration is only constrained by the number of available rounds (not necessarily known in advance), in contrast to the case when the cumulative regret is considered and when exploitation needs to be performed at the same time.We believe that this performance criterion is suited to situations when the cost of pulling an arm is expressed in terms of resources rather than rewards. We discuss the links between the simple and the cumulative regret. The main result is that the required exploration-exploitation trade-offs are qualitatively different, in view of a general lower bound on the simple regret in terms of the cumulative regret."
]
} |
1604.01999 | 2342369154 | We study online optimization of smoothed piecewise constant functions over the domain [0, 1). This is motivated by the problem of adaptively picking parameters of learning algorithms as in the recently introduced framework by Gupta and Roughgarden (2016). Majority of the machine learning literature has focused on Lipschitz-continuous functions or functions with bounded gradients. This is with good reason—any learning algorithm suffers linear regret even against piecewise constant functions that are chosen adversarially, arguably the simplest of non-Lipschitz continuous functions. The smoothed setting we consider is inspired by the seminal work of Spielman and Teng (2004) and the recent work of Gupta and Roughgarden (2016)—in this setting, the sequence of functions may be chosen by an adversary, however, with some uncertainty in the location of discontinuities. We give algorithms that achieve sublinear regret in the full information and bandit settings. | There is a substantial body of work that seeks to use learning mechanisms to choose the parameters or hyperparameters of algorithms. @cite_2 @cite_1 suggest using Bayesian optimization techniques to choose hyperparameters effectively. Yet other papers (see ) suggest various techniques to choose parameters for algorithms (not necessarily in the context of learning). However, except for the work of @cite_4 , most work is not theoretical in nature. | {
"cite_N": [
"@cite_1",
"@cite_4",
"@cite_2"
],
"mid": [
"",
"1881419322",
"1533803232"
],
"abstract": [
"",
"We consider the framework of stochastic multi-armed bandit problems and study the possibilities and limitations of strategies that perform an online exploration of the arms. The strategies are assessed in terms of their simple regret, a regret notion that captures the fact that exploration is only constrained by the number of available rounds (not necessarily known in advance), in contrast to the case when the cumulative regret is considered and when exploitation needs to be performed at the same time.We believe that this performance criterion is suited to situations when the cost of pulling an arm is expressed in terms of resources rather than rewards. We discuss the links between the simple and the cumulative regret. The main result is that the required exploration-exploitation trade-offs are qualitatively different, in view of a general lower bound on the simple regret in terms of the cumulative regret.",
"Bayesian optimization has proven to be a highly effective methodology for the global optimization of unknown, expensive and multimodal functions. The ability to accurately model distributions over functions is critical to the effectiveness of Bayesian optimization. Although Gaussian processes provide a flexible prior over functions, there are various classes of functions that remain difficult to model. One of the most frequently occurring of these is the class of non-stationary functions. The optimization of the hyperparameters of machine learning algorithms is a problem domain in which parameters are often manually transformed a priori, for example by optimizing in \"log-space,\" to mitigate the effects of spatially-varying length scale. We develop a methodology for automatically learning a wide family of bijective transformations or warpings of the input space using the Beta cumulative distribution function. We further extend the warping framework to multi-task Bayesian optimization so that multiple tasks can be warped into a jointly stationary space. On a set of challenging benchmark optimization tasks, we observe that the inclusion of warping greatly improves on the state-of-the-art, producing better results faster and more reliably."
]
} |
1604.01507 | 2338544672 | This paper considers the problem of manipulating a uniformly rotating chain: the chain is rotated at a constant angular speed around a fixed axis using a robotic manipulator. Manipulation is quasi-static in the sense that transitions are slow enough for the chain to be always in "rotational" equilibrium. The curve traced by the chain in a rotating plane -- its shape function -- can be determined by a simple force analysis, yet it possesses complex multi-solutions behavior typical of non-linear systems. We prove that the configuration space of the uniformly rotating chain is homeomorphic to a two-dimensional surface embedded in @math . Using that representation, we devise a manipulation strategy for transiting between different rotation modes in a stable and controlled manner. We demonstrate the strategy on a physical robotic arm manipulating a rotating chain. Finally, we discuss how the ideas developed here might find fruitful applications in the study of other flexible objects, such as elastic rods or concentric tubes. | Within the field of robotics, the manipulation of flexible objects is studied along two main directions. A first direction is topological: one is mainly interested in the order and sequence of the manipulation rather than in the precise behavior of the flexible object. Examples include origami folding @cite_14 , laundry folding @cite_1 or rope-knotting @cite_23 @cite_20 . | {
"cite_N": [
"@cite_20",
"@cite_14",
"@cite_1",
"@cite_23"
],
"mid": [
"",
"2132634958",
"1982920857",
"2126998703"
],
"abstract": [
"",
"Origami, the art of paper sculpture, is a fresh challenge for the field of robotic manipulation, and provides a concrete example of the many difficulties and general manipulation problems faced in robotics. This paper describes our initial exploration, and highlights key problems in the manipulation, modeling, and design of foldable structures. Results include the design of the first origami-folding robot, a complete fold-sequence planner for a simple class of origami, and analysis of the kinematics of more complicated folds, including the common paper shopping bag.",
"We consider the problem of autonomous robotic laundry folding, and propose a solution to the perception and manipulation challenges inherent to the task. At the core of our approach is a quasi-static cloth model which allows us to neglect the complex dynamics of cloth under significant parts of the state space, allowing us to reason instead in terms of simple geometry. We present an algorithm which, given a 2D cloth polygon and a desired sequence of folds, outputs a motion plan for executing the corresponding manipulations, deemed g-folds, on a minimal number of robot grippers. We define parametrized fold sequences for four clothing categories: towels, pants, short-sleeved shirts, and long-sleeved shirts, each represented as polygons. We then devise a model-based optimization approach for visually inferring the class and pose of a spread-out or folded clothing article from a single image, such that the resulting polygon provides a parse suitable for these folding primitives. We test the manipulation and perception tasks individually, and combine them to implement an autonomous folding system on the Willow Garage PR2. This enables the PR2 to identify a clothing article spread out on a table, execute the computed folding sequence, and visually track its progress over successive folds.",
"Here, we propose a planning method for knotting unknotting of deformable linear objects. First, we propose a topological description of the state of a linear object. Second, transitions between these states are defined by introducing four basic operations. Then, possible sequences of crossing state transitions, i.e. possible manipulation processes, can be generated once the initial and the objective states are given. Third, a method for determining grasping points and their directions of movement is proposed to realize derived manipulation processes. Our proposed method indicated that it is theoretically possible for any knotting manipulation of a linear object placed on a table to be realized by a one-handed robot with three translational DOF and one rotational DOF. Furthermore, criteria for evaluation of generated plans are introduced to reduce the candidates of manipulation plans. Fourth, a planning method for tying knots tightly is established because they fulfill their fixing function by tightening them. Finally, we report knotting unknotting manipulation performed by a vision-guided system to demonstrate the usefulness of our approach."
]
} |
1604.01529 | 2338144305 | Committee scoring rules form a rich class of aggregators of voters' preferences for the purpose of selecting subsets of objects with desired properties, e.g., a shortlist of candidates for an interview, a representative collective body such as a parliament, or a set of locations for a set of public facilities. In the spirit of celebrated Young's characterization result that axiomatizes single-winner scoring rules, we provide an axiomatic characterization of multiwinner committee scoring rules. We show that committee scoring rules---despite forming a remarkably general class of rules---are characterized by the set of four standard axioms, anonymity, neutrality, consistency and continuity, and by one axiom specific to multiwinner rules which we call committee dominance. In the course of our proof, we develop several new notions and techniques. In particular, we introduce and axiomatically characterize multiwinner decision scoring rules, a class of rules that broadly generalizes the well-known majority relation. | Probabilistic single-winner election rules have also been a subject of axiomatic studies. For instance, Gibbard @cite_3 investigated strategyproofness of probabilistic election systems and his result can be seen as an axiomatic characterization of the random dictatorship rule. Brandl et al. @cite_30 , by studying different types of consistency of probabilistic single-winner election rules, characterized the function returning maximal lotteries, first proposed by Fishburn @cite_34 . | {
"cite_N": [
"@cite_30",
"@cite_34",
"@cite_3"
],
"mid": [
"2963715374",
"1966938374",
"1977126954"
],
"abstract": [
"Two fundamental axioms in social choice theory are consistency with respect to a variable electorate and consistency with respect to components of similar alternatives. In the context of traditional non‐probabilistic social choice, these axioms are incompatible with each other. We show that in the context of probabilistic social choice, these axioms uniquely characterize a function proposed by Fishburn (1984). Fishburn's function returns so‐called maximal lotteries, that is, lotteries that correspond to optimal mixed strategies in the symmetric zero‐sum game induced by the pairwise majority margins. Maximal lotteries are guaranteed to exist due to von Neumann's Minimax Theorem, are almost always unique, and can be efficiently computed using linear programming.",
"A social choice procedure is developed for selecting an alternative from a finite set on the basis of paired-comparison voting. Ballot data are used to construct a lottery on the alternatives that is socially as preferred as every other lottery. The constructed lottery is then used to select a winner. An axiomatization of social preferences among lotteries that justifies the procedure is included. The procedure will always select a consensus majority alternative when one exists, and it will never select an alternative that is Pareto dominated by another alternative.",
""
]
} |
1604.01529 | 2338144305 | Committee scoring rules form a rich class of aggregators of voters' preferences for the purpose of selecting subsets of objects with desired properties, e.g., a shortlist of candidates for an interview, a representative collective body such as a parliament, or a set of locations for a set of public facilities. In the spirit of celebrated Young's characterization result that axiomatizes single-winner scoring rules, we provide an axiomatic characterization of multiwinner committee scoring rules. We show that committee scoring rules---despite forming a remarkably general class of rules---are characterized by the set of four standard axioms, anonymity, neutrality, consistency and continuity, and by one axiom specific to multiwinner rules which we call committee dominance. In the course of our proof, we develop several new notions and techniques. In particular, we introduce and axiomatically characterize multiwinner decision scoring rules, a class of rules that broadly generalizes the well-known majority relation. | The state of research on axiomatic characterizations of multiwinner voting rules is far less advanced. Indeed, we are aware of only one unconditional characterization of a multiwinner rule: Debord has characterized the @math -Borda rule as the only rule that satisfies neutrality, faithfulness, consistency, and the cancellation property @cite_19 . Yet, there exists an interesting line of research, where the properties of multiwinner election rules are studied. A large bulk of this literature focuses on the principle of Condorcet consistency @cite_58 @cite_40 @cite_23 @cite_64 , and on approval-based multiwinner rules @cite_0 @cite_59 @cite_65 @cite_44 . Properties of other types of multiwinner election rules have been studied by Felsenthal and Maoz @cite_49 , Elkind et al. , and---in a somewhat different context---Skowron @cite_26 . | {
"cite_N": [
"@cite_64",
"@cite_26",
"@cite_65",
"@cite_0",
"@cite_19",
"@cite_40",
"@cite_44",
"@cite_23",
"@cite_59",
"@cite_49",
"@cite_58"
],
"mid": [
"",
"2950697607",
"",
"967083285",
"1969569988",
"",
"",
"",
"",
"1980240034",
"2127882486"
],
"abstract": [
"",
"We present a new model that describes the process of electing a group of representatives (e.g., a parliament) for a group of voters. In this model, called the voting committee model, the elected group of representatives runs a number of ballots to make final decisions regarding various issues. The satisfaction of voters comes from the final decisions made by the elected committee. Our results suggest that depending on a decision system used by the committee to make these final decisions, different multi-winner election rules are most suitable for electing the committee. Furthermore, we show that if we allow not only a committee, but also an election rule used to make final decisions, to depend on the voters' preferences, we can obtain an even better representation of the voters.",
"",
"Approval voting is a well-known voting procedure for single-winner elections. Voters approve of as many candidates as they like, and the candidate with the most approvals wins (Brams and Fishburn 1978, 1983, 2005). But Merrill and Nagel (1987) point out that there are many ways to aggregate approval votes to determine a winner, justifying a distinction between approval balloting, in which each voter submits a ballot that identifies the candidates the voter approves of, and approval voting, the procedure of ranking the candidates according to their total numbers of approvals.",
"This paper presents an extension of Borda's choice function to k-choice function (the k-choice set may be understood as the set of equally best “elites” of k alternatives). Young's axiomatic conditions are then generalized to k-choice functions and a simple combinatorial proof is exposed for the axiomatic characterization of Borda's k-choice function.",
"",
"",
"",
"",
"This article focuses on decision-making by voting in systems at the levels of the organization, the community, the society, and the international system. It examines the compatibility of four voting procedures (plurality, approval voting, Borda's count, and Hare's single transferable vote) with ten normative properties commonly used to evaluate the desirability of voting procedures by social choice theorists. The analysis alternately assumes that one candidate, as well as more than one candidate, must be elected under each of the procedures. It thus extends previous analyses of the same issues which were mainly restricted to situations where only a single candidate must be elected. We show that: (i) under all four procedures, if a property is violated when only a single candidate must be elected it is also violated when more than one candidate must be elected; (ii) all properties that are satisfied under the AV procedure when a single candidate must be elected are also satisfied when more than one candidate must be elected; (iii) at least one property that is satisfied under each of the remaining three procedures when only a single candidate must be elected, may be violated when the number of available slots is larger than one. Theoretical issues and practical implications of these findings are discussed.",
"Barbera and Coelho (WP 264, CREA-Barcelona Economics, 2007) documented six screening rules associated with the rule of k names that are used by diferent institutions around the world. Here, we study whether these screening rules satisfy stability. A set is said to be a weak Condorcet set a la Gehrlein (Math Soc Sci 10:199–209) if no candidate in this set can be defeated by any candidate from outside the set on the basis of simple majority rule. We say that a screening rule is stable if it always selects a weak Condorcet set whenever such set exists. We show that all of the six procedures which are used in reality do violate stability if the voters do not act strategically. We then show that there are screening rules which satisfy stability. Finally, we provide two results that can explain the widespread use of unstable screening rules."
]
} |