aid stringlengths 9 15 | mid stringlengths 7 10 | abstract stringlengths 78 2.56k | related_work stringlengths 92 1.77k | ref_abstract dict |
|---|---|---|---|---|
1903.09189 | 2923164151 | We present a coarse-to-fine, vision-guided semi-autonomous teleoperation system. The system is optimized for long-range teleoperation tasks under time-delayed network conditions and does not require prior knowledge of the remote scene. Our system initializes with a self-exploration behavior that senses the remote surroundings through a freely mounted eye-in-hand webcam. The self-exploration stage estimates the hand-eye calibration and provides a telepresence interface via real-time 3D geometric reconstruction. The human operator specifies a visual task through the interface, and a coarse-to-fine controller guides the remote robot, enabling our system to work over high-latency networks. Large motions are guided by coarse 3D estimation, whereas fine motions use image cues (IBVS). Network data transmission cost is minimized by sending only sparse points and a final image to the human side. Experiments on multiple tasks conducted from Singapore to Canada demonstrate our system's capability for long-range teleoperation. | For 3D reconstruction, there are other approaches (e.g., a multi-camera method @cite_11 and RGB-D based methods @cite_21) that can create a usable telepresence model; however, they typically require high-performance network conditions and machines. For robot-side semi-autonomous control using vision guidance, @cite_1 proposed an interface for task specification based on geometric constraints, with error mapping represented in image space. A successive uncalibrated IBVS @cite_0 controller is used for robot-side autonomy. | {
"cite_N": [
"@cite_0",
"@cite_21",
"@cite_1",
"@cite_11"
],
"mid": [
"2082991751",
"2071906076",
"",
"2062330262"
],
"abstract": [
"This paper is the first of a two-part series on the topic of visual servo control using computer vision data in the servo loop to control the motion of a robot. In this paper, we describe the basic techniques that are by now well established in the field. We first give a general overview of the formulation of the visual servo control problem. We then describe the two archetypal visual servo control schemes: image-based and position-based visual servo control. Finally, we discuss performance and stability issues that pertain to these two schemes, motivating the second article in the series, in which we consider advanced techniques",
"Online 3D reconstruction is gaining newfound interest due to the availability of real-time consumer depth cameras. The basic problem takes live overlapping depth maps as input and incrementally fuses these into a single 3D model. This is challenging particularly when real-time performance is desired without trading quality or scale. We contribute an online system for large and fine scale volumetric reconstruction based on a memory and speed efficient data structure. Our system uses a simple spatial hashing scheme that compresses space, and allows for real-time access and updates of implicit surface data, without the need for a regular or hierarchical grid data structure. Surface data is only stored densely where measurements are observed. Additionally, data can be streamed efficiently in or out of the hash table, allowing for further scalability during sensor motion. We show interactive reconstructions of a variety of scenes, reconstructing both fine-grained details and large scale environments. We illustrate how all parts of our pipeline from depth map pre-processing, camera pose estimation, depth map fusion, and surface rendering are performed at real-time rates on commodity graphics hardware. We conclude with a comparison to current state-of-the-art online systems, illustrating improved performance and reconstruction quality.",
"",
"We present a multicamera real-time 3D modeling system that aims at enabling new immersive and interactive environments. This system, called Grimage, allows to retrieve in real-time a 3D mesh of the observed scene as well as the associated textures. This information enables a strong visual presence of the user into virtual worlds. The 3D shape information is also used to compute collisions and reaction forces with virtual objects, enforcing the mechanical presence of the user in the virtual world. The innovation is a fully integrated system with both immersive and interactive capabilities. It embeds a parallel version of the EPVH modeling algorithm inside a distributed vision pipeline. It also adopts the hierarchical component approach of the FlowVR middleware to enforce software modularity and enable distributed executions. Results show high refresh rates and low latencies obtained by taking advantage of the I/O and computing resources of PC clusters. The applications we have developed demonstrate the quality of the visual and mechanical presence with a single platform and with a dual platform that allows telecollaboration."
]
} |
1903.09174 | 2950751761 | Well-established libraries typically have API documentation. However, they frequently lack examples and explanations, possibly making their effective reuse difficult. Stack Overflow is a question-and-answer website oriented to issues related to software development. Despite the increasing adoption of Stack Overflow, the information related to a particular topic (e.g., an API) is spread across the website. Thus, Stack Overflow still lacks organization of the crowd knowledge available on it. Our goal is to address the problem of poor-quality API documentation by providing an alternative artifact to document APIs based on the crowd knowledge available on Stack Overflow, called a crowd cookbook. A cookbook is a recipe-oriented book, and we refer to ours as a crowd cookbook since it contains content generated by a crowd. The cookbooks are meant to be used through an exploration process, i.e., browsing. In this paper, we present a semi-automatic approach that organizes the crowd knowledge available on Stack Overflow to build cookbooks for APIs. We have generated cookbooks for three APIs widely used by the software development community: SWT, LINQ and QT. We have also defined desired properties that crowd cookbooks must meet, and we conducted an evaluation of the cookbooks against these properties with human subjects. The results showed that the cookbooks built using our approach, in general, meet those properties. As a highlight, most of the recipes were considered appropriate to be in the cookbooks and to have self-contained information. We concluded that our approach can automatically produce adequate cookbooks, which can be as useful as manually produced ones. This opens an opportunity for API designers to enrich existing cookbooks with different points of view from the crowd, or even to generate initial versions of new cookbooks.
| Hen et al. @cite_48 presented a semi-automated technique to build FAQs (Frequently Asked Questions) from data available in discussions, such as mailing lists and forums. There are similarities between their approach and ours regarding the method for producing the documentation. For instance, both strategies apply similar data preprocessing before applying LDA. However, there are differences between cookbooks and FAQs. First, cookbooks are designed to contain practical problems that developers may encounter, and we included only questions in cookbooks. In contrast, FAQs include different types of questions. Second, the content available on Stack Overflow is semi-structured in questions and answers, while the content in mailing lists and forums is not structured. Third, the recipes of the cookbooks must contain source code examples, since the goal of the recipes is to show how to solve programming tasks using an API. On the other hand, the goal of FAQs is to organize knowledge scattered in natural language text. | {
"cite_N": [
"@cite_48"
],
"mid": [
"2120294318"
],
"abstract": [
"Frequently asked questions (FAQs) are a popular way to document software development knowledge. As creating such documents is expensive, this paper presents an approach for automatically extracting FAQs from sources of software development discussion, such as mailing lists and Internet forums, by combining techniques of text mining and natural language processing. We apply the approach to popular mailing lists and carry out a survey among software developers to show that it is able to extract high-quality FAQs that may be further improved by experts."
]
} |
1903.09174 | 2950751761 | Well-established libraries typically have API documentation. However, they frequently lack examples and explanations, possibly making their effective reuse difficult. Stack Overflow is a question-and-answer website oriented to issues related to software development. Despite the increasing adoption of Stack Overflow, the information related to a particular topic (e.g., an API) is spread across the website. Thus, Stack Overflow still lacks organization of the crowd knowledge available on it. Our goal is to address the problem of poor-quality API documentation by providing an alternative artifact to document APIs based on the crowd knowledge available on Stack Overflow, called a crowd cookbook. A cookbook is a recipe-oriented book, and we refer to ours as a crowd cookbook since it contains content generated by a crowd. The cookbooks are meant to be used through an exploration process, i.e., browsing. In this paper, we present a semi-automatic approach that organizes the crowd knowledge available on Stack Overflow to build cookbooks for APIs. We have generated cookbooks for three APIs widely used by the software development community: SWT, LINQ and QT. We have also defined desired properties that crowd cookbooks must meet, and we conducted an evaluation of the cookbooks against these properties with human subjects. The results showed that the cookbooks built using our approach, in general, meet those properties. As a highlight, most of the recipes were considered appropriate to be in the cookbooks and to have self-contained information. We concluded that our approach can automatically produce adequate cookbooks, which can be as useful as manually produced ones. This opens an opportunity for API designers to enrich existing cookbooks with different points of view from the crowd, or even to generate initial versions of new cookbooks.
| The content available on Stack Overflow has also been used in previous works to support developers in using APIs. We find two different lines of work: API documentation, which is the focus of this work, and recommendation systems. For the former, Treude and Robillard @cite_28 presented an automatic approach to mine insight sentences from Stack Overflow and augment existing API documentation (Javadoc) with them. These insight sentences are related to a particular API type (e.g., a class) and provide insight not contained in the API documentation of that type. The main differences between their work and ours are: 1) our approach constructs documentation from scratch instead of improving existing documentation; and 2) their focus is on documenting API elements individually (each mined insight sentence is about an API element), while our focus is on documenting APIs with problem-solution recipes, which may include several API elements. | {
"cite_N": [
"@cite_28"
],
"mid": [
"2366532918"
],
"abstract": [
"Software developers need access to different kinds of information which is often dispersed among different documentation sources, such as API documentation or Stack Overflow. We present an approach to automatically augment API documentation with \"insight sentences\" from Stack Overflow -- sentences that are related to a particular API type and that provide insight not contained in the API documentation of that type. Based on a development set of 1,574 sentences, we compare the performance of two state-of-the-art summarization techniques as well as a pattern-based approach for insight sentence extraction. We then present SISE, a novel machine learning based approach that uses as features the sentences themselves, their formatting, their question, their answer, and their authors as well as part-of-speech tags and the similarity of a sentence to the corresponding API documentation. With SISE, we were able to achieve a precision of 0.64 and a coverage of 0.7 on the development set. In a comparative study with eight software developers, we found that SISE resulted in the highest number of sentences that were considered to add useful information not found in the API documentation. These results indicate that taking into account the meta data available on Stack Overflow as well as part-of-speech tags can significantly improve unsupervised extraction approaches when applied to Stack Overflow data."
]
} |
1903.08926 | 2923680873 | Some biological experiments show that the tubular structures of Physarum polycephalum are often analogous to those of Steiner trees. Therefore, the emerging Physarum-inspired Algorithms (PAs) have the potential of computing Steiner trees. In this paper, we propose two PAs to solve the Steiner Tree Problem in Graphs (STPG). We apply some widely-used artificial and real-world VLSI design instances to evaluate the performance of our PAs. The experimental results show that: 1) for instances with hundreds of vertices, our first PA can find feasible solutions with an average error of 0.19, while the Genetic Algorithm (GA), the Discrete Particle Swarm Optimization (DPSO) algorithm and a widely-used Steiner tree approximation algorithm, the Shortest Path Heuristic (SPH), can only find feasible solutions with an average error above 4.96; and 2) for larger instances with up to tens of thousands of vertices, where our first PA, GA and DPSO are too slow to be used, our second PA can find feasible solutions with an average error of 3.69, while SPH can only find feasible solutions with an average error of 6.42. These experimental results indicate that PAs can compute Steiner trees, and it may be preferable to apply our PAs to solve STPG in some cases. | Besides the attempts to solve STPG, some PAs have already solved the shortest path problem @cite_26. A well-known example is the Physarum Solver @cite_6, which is the basis of many PAs @cite_28 @cite_0 @cite_38. We introduce it in this section. | {
"cite_N": [
"@cite_38",
"@cite_26",
"@cite_28",
"@cite_6",
"@cite_0"
],
"mid": [
"2557242167",
"1964894406",
"",
"1987048169",
"2015244507"
],
"abstract": [
"Recently it has been shown that Physarum-inspired algorithms can solve some network optimization problems. However, it is not yet shown that Physarum-inspired algorithm can solve Node Weighted Steiner Tree Problem (NWSTP). Two new Physarum-inspired algorithms are proposed in this paper to solve NWSTP for the first time. Since all the existing NWSTP benchmark instances have an empty terminal set, new benchmark instances with non-empty terminal sets are generated to cover the shortage of existing benchmark instances. Both proposed algorithms are compared with Genetic Algorithm (GA) and Discrete Particle Swarm Optimization (DPSO) in these benchmark instances. Furthermore, an adapted Dijkstra's algorithm is proposed to provide the optimal solutions for part of these benchmark instances where there are two terminals and the node weights are negative. Simulation results show that our first proposed algorithm can find the optimal solutions for NWSTP with two terminals in graphs with negative node weights, and our second proposed algorithm can find close approximate solutions for NWSTP with multiple terminals in any node weighted graph. Both proposed algorithms provide faster and better NWSTP solutions than GA and DPSO.",
"Physarum polycephalum is a unicellular, multinucleate slime mold in the Mycetozoa group that exhibits remarkable capabilities, including the ability to construct efficient networks when foraging food [1]. The Physarum cell grows, as long as nutrition is abundant. When nutrition is limited, Physarum forms a network of interconnected veins; the veins are gel-like tubes in which the cytoplasm flows. It has been experimentally observed that, with two food sources, Physarum's tubular network often retracts to the shortest path between the food sources. Tero, Kobayashi and Nakagaki [2] proposed a model for the dynamics of the Physarum, which in the computer simulations converged to the shortest path on any initial network. We analytically prove that, under this model, the mass of the mold has to eventually converge to the shortest path in the initial network between the two food sources, independently of the structure of the initial network or of the initial mass distribution. This presentation is based on the work in [3,4] and is supported by the Italian Flagship Initiative \"InterOmics\" (PB.P05). References [1] Toshiyuki Nakagaki, Hiroyasu Yamada, Agota Toth: Maze-solving by an amoeboid organism. Nature 407:470 (2000) [2] Atsushi Tero, Ryo Kobayashi, Toshiyuki Nakagaki: A mathematical model for adaptive transport network in path finding by true slime mold. Journal of Theoretical Biology 244: 553-564 (2007) [3] Vincenzo Bonifaci, Kurt Mehlhorn, Girish Varma: Physarum can compute shortest paths. Journal of Theoretical Biology 309: 121-133 (2012) [4] Vincenzo Bonifaci: Physarum can compute shortest paths: A short proof. Information Processing Letters 113(1-2): 4-7 (2013)",
"",
"We have proposed a mathematical model for the adaptive dynamics of the transport network in an amoeba-like organism, the true slime mold Physarum polycephalum. The model is based on physiological observations of this species, but can also be used for path-finding in the complicated networks of mazes and road maps. In this paper, we describe the physiological basis and the formulation of the model, as well as the results of simulations of some complicated networks. The path-finding method used by Physarum is a good example of cellular computation.",
"Using insights from biological processes could help to design new optimization techniques for long-standing computational problems. This paper exploits a cellular computing model in the slime mold physarum polycephalum to solve the Steiner tree problem which is an important NP-hard problem in various applications, especially in network design. Inspired by the path-finding and network formation capability of physarum, we develop a new optimization algorithm, named as the physarum optimization, with low complexity and high parallelism. To validate and evaluate our proposed models and algorithm, we further apply the physarum optimization to the minimal exposure problem which is a fundamental problem corresponding to the worst-case coverage in wireless sensor networks. Complexity analysis and simulation results show that our proposed algorithm could achieve good performance with low complexity. Moreover, the core mechanism of our physarum optimization also may provide a useful starting point to develop some practical distributed algorithms for network design."
]
} |
1903.08926 | 2923680873 | Some biological experiments show that the tubular structures of Physarum polycephalum are often analogous to those of Steiner trees. Therefore, the emerging Physarum-inspired Algorithms (PAs) have the potential of computing Steiner trees. In this paper, we propose two PAs to solve the Steiner Tree Problem in Graphs (STPG). We apply some widely-used artificial and real-world VLSI design instances to evaluate the performance of our PAs. The experimental results show that: 1) for instances with hundreds of vertices, our first PA can find feasible solutions with an average error of 0.19, while the Genetic Algorithm (GA), the Discrete Particle Swarm Optimization (DPSO) algorithm and a widely-used Steiner tree approximation algorithm, the Shortest Path Heuristic (SPH), can only find feasible solutions with an average error above 4.96; and 2) for larger instances with up to tens of thousands of vertices, where our first PA, GA and DPSO are too slow to be used, our second PA can find feasible solutions with an average error of 3.69, while SPH can only find feasible solutions with an average error of 6.42. These experimental results indicate that PAs can compute Steiner trees, and it may be preferable to apply our PAs to solve STPG in some cases. | Let @math denote the threshold value of edge conductivity. Edges with conductivities smaller than @math are cut from the network. Ultimately, if there is a unique shortest path between the source node and the sink node, then this path can be found by iteratively updating edge conductivities and cutting edges @cite_26. However, even though Physarum Solver can solve the shortest path problem, it cannot solve STPG directly, because its model has only two terminals, the source node and the sink node, while STPG has multiple terminals. Thus, new PAs are required to solve STPG. | {
"cite_N": [
"@cite_26"
],
"mid": [
"1964894406"
],
"abstract": [
"Physarum polycephalum is a unicellular, multinucleate slime mold in the Mycetozoa group that exhibits remarkable capabilities, including the ability to construct efficient networks when foraging food [1]. The Physarum cell grows, as long as nutrition is abundant. When nutrition is limited, Physarum forms a network of interconnected veins; the veins are gel-like tubes in which the cytoplasm flows. It has been experimentally observed that, with two food sources, Physarum's tubular network often retracts to the shortest path between the food sources. Tero, Kobayashi and Nakagaki [2] proposed a model for the dynamics of the Physarum, which in the computer simulations converged to the shortest path on any initial network. We analytically prove that, under this model, the mass of the mold has to eventually converge to the shortest path in the initial network between the two food sources, independently of the structure of the initial network or of the initial mass distribution. This presentation is based on the work in [3,4] and is supported by the Italian Flagship Initiative \"InterOmics\" (PB.P05). References [1] Toshiyuki Nakagaki, Hiroyasu Yamada, Agota Toth: Maze-solving by an amoeboid organism. Nature 407:470 (2000) [2] Atsushi Tero, Ryo Kobayashi, Toshiyuki Nakagaki: A mathematical model for adaptive transport network in path finding by true slime mold. Journal of Theoretical Biology 244: 553-564 (2007) [3] Vincenzo Bonifaci, Kurt Mehlhorn, Girish Varma: Physarum can compute shortest paths. Journal of Theoretical Biology 309: 121-133 (2012) [4] Vincenzo Bonifaci: Physarum can compute shortest paths: A short proof. Information Processing Letters 113(1-2): 4-7 (2013)"
]
} |
1903.09109 | 2924907019 | Multitask learning aims at solving a set of related tasks simultaneously, by exploiting the shared knowledge for improving the performance on individual tasks. Hence, an important aspect of multitask learning is to understand the similarities within a set of tasks. Previous works have incorporated this similarity information explicitly (e.g., weighted loss for each task) or implicitly (e.g., adversarial loss for feature adaptation), for achieving good empirical performances. However, the theoretical motivations for adding task similarity knowledge are often missing or incomplete. In this paper, we give a different perspective from a theoretical point of view to understand this practice. We first provide an upper bound on the generalization error of multitask learning, showing the benefit of explicit and implicit task similarity knowledge. We systematically derive the bounds based on two distinct task similarity metrics: H divergence and Wasserstein distance. From these theoretical results, we revisit the Adversarial Multi-task Neural Network, proposing a new training algorithm to learn the task relation coefficients and neural network parameters iteratively. We assess our new algorithm empirically on several benchmarks, showing not only that we find interesting and robust task relations, but that the proposed approach outperforms the baselines, reaffirming the benefits of theoretical insight in algorithm design. | Some survey papers @cite_10 @cite_1 @cite_2 have presented broad and detailed overviews of general MTL. More specifically related to our work, on the practical side, we note several approaches that use task relationships to improve empirical performance: solving a convex optimization problem in the original space or a Reproducing Kernel Hilbert Space to extract task relationships, proposing probabilistic models through the construction of a task covariance matrix, or estimating the multitask likelihood with a deep Bayesian model.
On the theoretical side, some works analyze the weighted-sum-loss algorithm and its applications in online learning, active learning and transductive learning; others analyze the generalization error of representation-based approaches and the algorithmic stability in MTL. | {
"cite_N": [
"@cite_1",
"@cite_10",
"@cite_2"
],
"mid": [
"2624871570",
"2742079690",
"2204678271"
],
"abstract": [
"Multi-task learning (MTL) has led to successes in many applications of machine learning, from natural language processing and speech recognition to computer vision and drug discovery. This article aims to give a general overview of MTL, particularly in deep neural networks. It introduces the two most common methods for MTL in Deep Learning, gives an overview of the literature, and discusses recent advances. In particular, it seeks to help ML practitioners apply MTL by shedding light on how MTL works and providing guidelines for choosing appropriate auxiliary tasks.",
"Multi-Task Learning (MTL) is a learning paradigm in machine learning and its aim is to leverage useful information contained in multiple related tasks to help improve the generalization performance of all the tasks. In this paper, we give a survey for MTL. First, we classify different MTL algorithms into several categories, including feature learning approach, low-rank approach, task clustering approach, task relation learning approach, and decomposition approach, and then discuss the characteristics of each approach. In order to improve the performance of learning tasks further, MTL can be combined with other learning paradigms including semi-supervised learning, active learning, unsupervised learning, reinforcement learning, multi-view learning and graphical models. When the number of tasks is large or the data dimensionality is high, batch MTL models are difficult to handle this situation and online, parallel and distributed MTL models as well as dimensionality reduction and feature hashing are reviewed to reveal their computational and storage advantages. Many real-world applications use MTL to boost their performance and we review representative works. Finally, we present theoretical analyses and discuss several future directions for MTL.",
"In this paper, we study multi-task algorithms from the perspective of the algorithmic stability. We give a definition of the multi-task uniform stability, a generalization of the conventional uniform stability, which measures the maximum difference between the loss of a multi-task algorithm trained on a data set and that of the multitask algorithm trained on the same data set but with a data point removed in each task. In order to analyze multi-task algorithms based on multi-task uniform stability, we prove a generalized McDiarmid's inequality which assumes the difference bound condition holds by changing multiple input arguments instead of only one in the conventional McDiarmid's inequality. By using the generalized McDiarmid's inequality as a tool, we can analyze the generalization performance of general multitask algorithms in terms of the multi-task uniform stability. Moreover, as applications, we prove generalization bounds of several representative regularized multi-task algorithms."
]
} |
1903.09109 | 2924907019 | Multitask learning aims at solving a set of related tasks simultaneously, by exploiting the shared knowledge for improving the performance on individual tasks. Hence, an important aspect of multitask learning is to understand the similarities within a set of tasks. Previous works have incorporated this similarity information explicitly (e.g., weighted loss for each task) or implicitly (e.g., adversarial loss for feature adaptation), for achieving good empirical performances. However, the theoretical motivations for adding task similarity knowledge are often missing or incomplete. In this paper, we give a different perspective from a theoretical point of view to understand this practice. We first provide an upper bound on the generalization error of multitask learning, showing the benefit of explicit and implicit task similarity knowledge. We systematically derive the bounds based on two distinct task similarity metrics: H divergence and Wasserstein distance. From these theoretical results, we revisit the Adversarial Multi-task Neural Network, proposing a new training algorithm to learn the task relation coefficients and neural network parameters iteratively. We assess our new algorithm empirically on several benchmarks, showing not only that we find interesting and robust task relations, but that the proposed approach outperforms the baselines, reaffirming the benefits of theoretical insight in algorithm design. | The adversarial loss (or distribution distance, distribution discrepancy) is currently used in deep generative models, domain adaptation, robust learning and meta-learning. In transfer learning, adversarial losses are widely used for feature adaptation, since the transfer procedure is much more efficient on a shared representation. In applied transfer learning, the @math divergence and the Wasserstein distance are widely used as adversarial losses.
As for MTL applications, to our knowledge, prior works apply the @math divergence in natural language processing for text classification and speech recognition. @cite_0 are the first to use the Wasserstein distance to estimate the similarity of linear parameters instead of the data generation distributions. As for the theoretical understanding, @cite_18 analyzed the minimax statistical properties of the Wasserstein distance, and @cite_13 analyzed an adaptation bound based on a generalized discrepancy. | {
"cite_N": [
"@cite_0",
"@cite_18",
"@cite_13"
],
"mid": [
"2803204576",
"2953362272",
"2593597837"
],
"abstract": [
"Two important elements have driven recent innovation in the field of regression: sparsity-inducing regularization, to cope with high-dimensional problems; multi-task learning through joint parameter estimation, to augment the number of training samples. Both approaches complement each other in the sense that a joint estimation results in more samples, which are needed to estimate sparse models accurately, whereas sparsity promotes models that act on subsets of related variables. This idea has driven the proposal of block regularizers such as L1 Lq norms, which however effective, require that active regressors strictly overlap. In this paper, we propose a more flexible convex regularizer based on unbalanced optimal transport (OT) theory. That regularizer promotes parameters that are close, according to the OT geometry, which takes into account a prior geometric knowledge on the regressor variables. We derive an efficient algorithm based on a regularized formulation of optimal transport, which iterates through applications of Sinkhorn's algorithm along with coordinate descent iterations. The performance of our model is demonstrated on regular grids and complex triangulated geometries of the cortex with an application in neuroimaging.",
"As opposed to standard empirical risk minimization (ERM), distributionally robust optimization aims to minimize the worst-case risk over a larger ambiguity set containing the original empirical distribution of the training data. In this work, we describe a minimax framework for statistical learning with ambiguity sets given by balls in Wasserstein space. In particular, we prove generalization bounds that involve the covering number properties of the original ERM problem. As an illustrative example, we provide generalization guarantees for transport-based domain adaptation problems where the Wasserstein distance between the source and target domain distributions can be reliably estimated from unlabeled samples.",
"We present a new algorithm for domain adaptation improving upon a discrepancy minimization algorithm, (DM), previously shown to outperform a number of algorithms for this problem. Unlike many previously proposed solutions for domain adaptation, our algorithm does not consist of a fixed reweighting of the losses over the training sample. Instead, the reweighting depends on the hypothesis sought. The algorithm is derived from a less conservative notion of discrepancy than the DM algorithm called generalized discrepancy. We present a detailed description of our algorithm and show that it can be formulated as a convex optimization problem. We also give a detailed theoretical analysis of its learning guarantees which helps us select its parameters. Finally, we report the results of experiments demonstrating that it improves upon discrepancy minimization in several tasks."
]
} |
1903.08839 | 2924460655 | Recent studies have shown remarkable advances in 3D human pose estimation from monocular images, with the help of large-scale in-door 3D datasets and sophisticated network architectures. However, the generalizability to different environments remains an elusive goal. In this work, we propose a geometry-aware 3D representation for the human pose to address this limitation by using multiple views in a simple auto-encoder model at the training stage and only 2D keypoint information as supervision. A view synthesis framework is proposed to learn the shared 3D representation between viewpoints with synthesizing the human pose from one viewpoint to the other one. Instead of performing a direct transfer in the raw image-level, we propose a skeleton-based encoder-decoder mechanism to distil only pose-related representation in the latent space. A learning-based representation consistency constraint is further introduced to facilitate the robustness of latent 3D representation. Since the learnt representation encodes 3D geometry information, mapping it to 3D pose will be much easier than conventional frameworks that use an image or 2D coordinates as the input of 3D pose estimator. We demonstrate our approach on the task of 3D human pose estimation. Comprehensive experiments on three popular benchmarks show that our model can significantly improve the performance of state-of-the-art methods with simply injecting the representation as a robust 3D prior. | To capture the intrinsic structure of objects, existing studies @cite_38 @cite_32 @cite_28 @cite_6 typically disentangle visual content into multiple predefined factors like camera viewpoints, appearance and motion. Some works @cite_43 @cite_0 leverage correspondences among instances within an object category to encode the structure representation. @cite_0 discovers landmark structures as an intermediate representation for image autoencoding under several constraints. 
Other approaches utilize multiple views either to directly learn the geometry representation @cite_41 @cite_19 @cite_16 through object reconstruction, or to take advantage of view synthesis @cite_4 to learn the structure with a shared latent representation between views. For example, @cite_4 learns a 3D hand pose representation by synthesizing depth maps under different views. @cite_28 conditionally generates an image of the object from another one, where the generated image differs in acquisition time or viewpoint, to encourage the representation to be distilled into object landmarks. These methods mainly focus on the structure representation of generic objects or hand/face poses, whereas the human body is articulated and much more deformable. How to capture the geometry representation of the human body with less data and simpler constraints is still an open question. | {
"cite_N": [
"@cite_38",
"@cite_4",
"@cite_28",
"@cite_41",
"@cite_32",
"@cite_6",
"@cite_0",
"@cite_43",
"@cite_19",
"@cite_16"
],
"mid": [
"2188956040",
"2797784912",
"2809334950",
"2952069407",
"2784125538",
"",
"2796896492",
"2950701417",
"2798998662",
"2796350382"
],
"abstract": [
"An important problem for both graphics and vision is to synthesize novel views of a 3D object from a single image. This is particularly challenging due to the partial observability inherent in projecting a 3D object onto the image space, and the ill-posedness of inferring object shape and pose. However, we can train a neural network to address the problem if we restrict our attention to specific object categories (in our case faces and chairs) for which we can gather ample training data. In this paper, we propose a novel recurrent convolutional encoder-decoder network that is trained end-to-end on the task of rendering rotated objects starting from a single image. The recurrent structure allows our model to capture long-term dependencies along a sequence of transformations. We demonstrate the quality of its predictions for human faces on the Multi-PIE dataset and for a dataset of 3D chair models, and also show its ability to disentangle latent factors of variation (e.g., identity and pose) without using full supervision.",
"The labeled data required to learn pose estimation for articulated objects is difficult to provide in the desired quantity, realism, density, and accuracy. To address this issue, we develop a method to learn representations, which are very specific for articulated poses, without the need for labeled training data. We exploit the observation that the object pose of a known object is predictive for the appearance in any known view. That is, given only the pose and shape parameters of a hand, the hand's appearance from any viewpoint can be approximated. To exploit this observation, we train a model that -- given input from one view -- estimates a latent representation, which is trained to be predictive for the appearance of the object when captured from another viewpoint. Thus, the only necessary supervision is the second view. The training process of this model reveals an implicit pose representation in the latent space. Importantly, at test time the pose representation can be inferred using only a single view. In qualitative and quantitative experiments we show that the learned representations capture detailed pose information. Moreover, when training the proposed method jointly with labeled and unlabeled data, it consistently surpasses the performance of its fully supervised counterpart, while reducing the amount of needed labeled samples by at least one order of magnitude.",
"",
"This paper presents KeypointNet, an end-to-end geometric reasoning framework to learn an optimal set of category-specific 3D keypoints, along with their detectors. Given a single image, KeypointNet extracts 3D keypoints that are optimized for a downstream task. We demonstrate this framework on 3D pose estimation by proposing a differentiable objective that seeks the optimal set of keypoints for recovering the relative pose between two views of an object. Our model discovers geometrically and semantically consistent keypoints across viewing angles and instances of an object category. Importantly, we find that our end-to-end framework using no ground-truth keypoint annotations outperforms a fully supervised baseline using the same neural network architecture on the task of pose estimation. The discovered 3D keypoints on the car, chair, and plane categories of ShapeNet are visualized at this http URL",
"We present a framework for learning single-view shape and pose prediction without using direct supervision for either. Our approach allows leveraging multi-view observations from unknown poses as supervisory signal during training. Our proposed training setup enforces geometric consistency between the independently predicted shape and pose from two views of the same instance. We consequently learn to predict shape in an emergent canonical (view-agnostic) frame along with a corresponding pose predictor. We show empirical and qualitative results using the ShapeNet dataset and observe encouragingly competitive performance to previous techniques which rely on stronger forms of supervision. We also demonstrate the applicability of our framework in a realistic setting which is beyond the scope of existing techniques: using a training dataset comprised of online product images where the underlying shape and pose are unknown.",
"",
"Deep neural networks can model images with rich latent representations, but they cannot naturally conceptualize structures of object categories in a human-perceptible way. This paper addresses the problem of learning object structures in an image modeling process without supervision. We propose an autoencoding formulation to discover landmarks as explicit structural representations. The encoding module outputs landmark coordinates, whose validity is ensured by constraints that reflect the necessary properties for landmarks. The decoding module takes the landmarks as a part of the learnable input representations in an end-to-end differentiable framework. Our discovered landmarks are semantically meaningful and more predictive of manually annotated landmarks than those discovered by previous methods. The coordinates of our landmarks are also complementary features to pretrained deep-neural-network representations in recognizing visual attributes. In addition, the proposed method naturally creates an unsupervised, perceptible interface to manipulate object shapes and decode images with controllable structures. The project webpage is at this http URL",
"Understanding the 3D world is a fundamental problem in computer vision. However, learning a good representation of 3D objects is still an open problem due to the high dimensionality of the data and many factors of variation involved. In this work, we investigate the task of single-view 3D object reconstruction from a learning agent's perspective. We formulate the learning process as an interaction between 3D and 2D representations and propose an encoder-decoder network with a novel projection loss defined by the perspective transformation. More importantly, the projection loss enables the unsupervised learning using 2D observation without explicit 3D supervision. We demonstrate the ability of the model in generating 3D volume from a single 2D image with three sets of experiments: (1) learning from single-class objects; (2) learning from multi-class objects and (3) testing on novel object classes. Results show superior performance and better generalization ability for 3D object reconstruction when the projection loss is involved.",
"View-based methods have achieved considerable success in 3D object recognition tasks. Different from existing view-based methods pooling the view-wise features, we tackle this problem from the perspective of patches-to-patches similarity measurement. By exploiting the relationship between polynomial kernel and bilinear pooling, we obtain an effective 3D object representation by aggregating local convolutional features through bilinear pooling. Meanwhile, we harmonize different components inherited in the bilinear feature to obtain a more discriminative representation. To achieve an end-to-end trainable framework, we incorporate the harmonized bilinear pooling as a layer of a network, constituting the proposed Multi-view Harmonized Bilinear Network (MHBN). Systematic experiments conducted on two public benchmark datasets demonstrate the efficacy of the proposed methods in 3D object recognition.",
"We present DeepMVS, a deep convolutional neural network (ConvNet) for multi-view stereo reconstruction. Taking an arbitrary number of posed images as input, we first produce a set of plane-sweep volumes and use the proposed DeepMVS network to predict high-quality disparity maps. The key contributions that enable these results are (1) supervised pretraining on a photorealistic synthetic dataset, (2) an effective method for aggregating information across a set of unordered images, and (3) integrating multi-layer feature activations from the pre-trained VGG-19 network. We validate the efficacy of DeepMVS using the ETH3D Benchmark. Our results show that DeepMVS compares favorably against state-of-the-art conventional MVS algorithms and other ConvNet based methods, particularly for near-textureless regions and thin structures."
]
} |
1903.08839 | 2924460655 | Recent studies have shown remarkable advances in 3D human pose estimation from monocular images, with the help of large-scale in-door 3D datasets and sophisticated network architectures. However, the generalizability to different environments remains an elusive goal. In this work, we propose a geometry-aware 3D representation for the human pose to address this limitation by using multiple views in a simple auto-encoder model at the training stage and only 2D keypoint information as supervision. A view synthesis framework is proposed to learn the shared 3D representation between viewpoints with synthesizing the human pose from one viewpoint to the other one. Instead of performing a direct transfer in the raw image-level, we propose a skeleton-based encoder-decoder mechanism to distil only pose-related representation in the latent space. A learning-based representation consistency constraint is further introduced to facilitate the robustness of latent 3D representation. Since the learnt representation encodes 3D geometry information, mapping it to 3D pose will be much easier than conventional frameworks that use an image or 2D coordinates as the input of 3D pose estimator. We demonstrate our approach on the task of 3D human pose estimation. Comprehensive experiments on three popular benchmarks show that our model can significantly improve the performance of state-of-the-art methods with simply injecting the representation as a robust 3D prior. | A vast number of fully-supervised 3D pose estimation methods from monocular images exist in the literature @cite_36 @cite_40 @cite_9 @cite_13 . Despite the performance these methods achieve, modeling the 3D mapping from a given dataset limits their generalizability due to the constrained lab environment, limited motion and inter-dataset variation. Inter-dataset variation refers to bias among different datasets in viewpoints, environments, the definition of 3D key points, etc. | {
"cite_N": [
"@cite_36",
"@cite_9",
"@cite_40",
"@cite_13"
],
"mid": [
"2612706635",
"2583372902",
"2557698284",
"2803914169"
],
"abstract": [
"Following the success of deep convolutional networks, state-of-the-art methods for 3d human pose estimation have focused on deep end-to-end systems that predict 3d joint locations given raw image pixels. Despite their excellent performance, it is often not easy to understand whether their remaining error stems from a limited 2d pose (visual) understanding, or from a failure to map 2d poses into 3- dimensional positions.,,With the goal of understanding these sources of error, we set out to build a system that given 2d joint locations predicts 3d positions. Much to our surprise, we have found that, with current technology, \"lifting\" ground truth 2d joint locations to 3d space is a task that can be solved with a remarkably low error rate: a relatively simple deep feedforward network outperforms the best reported result by about 30 on Human3.6M, the largest publicly available 3d pose estimation benchmark. Furthermore, training our system on the output of an off-the-shelf state-of-the-art 2d detector (i.e., using images as input) yields state of the art results – this includes an array of systems that have been trained end-to-end specifically for this task. Our results indicate that a large portion of the error of modern deep 3d pose estimation systems stems from their visual analysis, and suggests directions to further advance the state of the art in 3d human pose estimation.",
"We explore 3D human pose estimation from a single RGB image. While many approaches try to directly predict 3D pose from image measurements, we explore a simple architecture that reasons through intermediate 2D pose predictions. Our approach is based on two key observations (1) Deep neural nets have revolutionized 2D pose estimation, producing accurate 2D predictions even for poses with self-occlusions (2) Big-datasets of 3D mocap data are now readily available, making it tempting to lift predicted 2D poses to 3D through simple memorization (e.g., nearest neighbors). The resulting architecture is straightforward to implement with off-the-shelf 2D pose estimation systems and 3D mocap libraries. Importantly, we demonstratethatsuchmethodsoutperformalmostallstate-of-theart 3D pose estimation systems, most of which directly try to regress 3D pose from 2D measurements.",
"This paper addresses the problem of 3D human pose estimation from a single image. We follow a standard two-step pipeline by first detecting the 2D position of the N body joints, and then using these observations to infer 3D pose. For the first step, we use a recent CNN-based detector. For the second step, most existing approaches perform 2N-to-3N regression of the Cartesian joint coordinates. We show that more precise pose estimates can be obtained by representing both the 2D and 3D human poses using NxN distance matrices, and formulating the problem as a 2D-to-3D distance matrix regression. For learning such a regressor we leverage on simple Neural Network architectures, which by construction, enforce positivity and symmetry of the predicted matrices. The approach has also the advantage to naturally handle missing observations and allowing to hypothesize the position of non-observed joints. Quantitative results on Humaneva and Human3.6M datasets demonstrate consistent performance gains over state-of-the-art. Qualitative evaluation on the images in-the-wild of the LSP dataset, using the regressor learned on Human3.6M, reveals very promising generalization results.",
"In this paper, we propose a two-stage depth ranking based method (DRPose3D) to tackle the problem of 3D human pose estimation. Instead of accurate 3D positions, the depth ranking can be identified by human intuitively and learned using the deep neural network more easily by solving classification problems. Moreover, depth ranking contains rich 3D information. It prevents the 2D-to-3D pose regression in two-stage methods from being ill-posed. In our method, firstly, we design a Pairwise Ranking Convolutional Neural Network (PRCNN) to extract depth rankings of human joints from images. Secondly, a coarse-to-fine 3D Pose Network(DPNet) is proposed to estimate 3D poses from both depth rankings and 2D human joint locations. Additionally, to improve the generality of our model, we introduce a statistical method to augment depth rankings. Our approach outperforms the state-of-the-art methods in the Human3.6M benchmark for all three testing protocols, indicating that depth ranking is an essential geometric feature which can be learned to improve the 3D pose estimation."
]
} |
1903.08839 | 2924460655 | Recent studies have shown remarkable advances in 3D human pose estimation from monocular images, with the help of large-scale in-door 3D datasets and sophisticated network architectures. However, the generalizability to different environments remains an elusive goal. In this work, we propose a geometry-aware 3D representation for the human pose to address this limitation by using multiple views in a simple auto-encoder model at the training stage and only 2D keypoint information as supervision. A view synthesis framework is proposed to learn the shared 3D representation between viewpoints with synthesizing the human pose from one viewpoint to the other one. Instead of performing a direct transfer in the raw image-level, we propose a skeleton-based encoder-decoder mechanism to distil only pose-related representation in the latent space. A learning-based representation consistency constraint is further introduced to facilitate the robustness of latent 3D representation. Since the learnt representation encodes 3D geometry information, mapping it to 3D pose will be much easier than conventional frameworks that use an image or 2D coordinates as the input of 3D pose estimator. We demonstrate our approach on the task of 3D human pose estimation. Comprehensive experiments on three popular benchmarks show that our model can significantly improve the performance of state-of-the-art methods with simply injecting the representation as a robust 3D prior. | Several works focus on weakly-supervised learning to increase the diversity of samples while restraining the usage of labeled 3D-annotated data, for example, by synthesizing training data through deforming a human template model with known 3D ground truth @cite_3 , or by generating various foregrounds and backgrounds @cite_23 . @cite_22 proposes to transfer knowledge from a 2D pose network to a 3D pose estimation network with a re-projection constraint on the 2D results. 
A converse strategy is employed in @cite_8 to distil the 3D pose structure to the unconstrained domain under an adversarial learning framework. @cite_15 proposes to learn the parameters of the statistical model SMPL @cite_1 to obtain a 3D mesh from an image with an end-to-end network, and regresses 3D coordinates from the mesh. Other approaches @cite_21 @cite_39 exploit view consistency by using multiple viewpoints of the same person. Nevertheless, these methods still rely on a large quantity of 3D training samples or auxiliary annotations, like silhouettes @cite_34 and depth @cite_39 , to initialize or constrain the models. | {
"cite_N": [
"@cite_22",
"@cite_8",
"@cite_21",
"@cite_1",
"@cite_3",
"@cite_39",
"@cite_23",
"@cite_15",
"@cite_34"
],
"mid": [
"2756050327",
"2953333925",
"2793814912",
"1967554269",
"2576289912",
"2774618166",
"2797184202",
"",
"2519469348"
],
"abstract": [
"In this paper, we study the task of 3D human pose estimation in the wild. This task is challenging due to lack of training data, as existing datasets are either in the wild images with 2D pose or in the lab images with 3D pose.,, We propose a weakly-supervised transfer learning method that uses mixed 2D and 3D labels in a unified deep neutral network that presents two-stage cascaded structure. Our network augments a state-of-the-art 2D pose estimation sub-network with a 3D depth regression sub-network. Unlike previous two stage approaches that train the two sub-networks sequentially and separately, our training is end-to-end and fully exploits the correlation between the 2D pose and depth estimation sub-tasks. The deep features are better learnt through shared representations. In doing so, the 3D pose labels in controlled lab environments are transferred to in the wild images. In addition, we introduce a 3D geometric constraint to regularize the 3D pose prediction, which is effective in the absence of ground truth depth labels. Our method achieves competitive results on both 2D and 3D benchmarks.",
"Recently, remarkable advances have been achieved in 3D human pose estimation from monocular images because of the powerful Deep Convolutional Neural Networks (DCNNs). Despite their success on large-scale datasets collected in the constrained lab environment, it is difficult to obtain the 3D pose annotations for in-the-wild images. Therefore, 3D human pose estimation in the wild is still a challenge. In this paper, we propose an adversarial learning framework, which distills the 3D human pose structures learned from the fully annotated dataset to in-the-wild images with only 2D pose annotations. Instead of defining hard-coded rules to constrain the pose estimation results, we design a novel multi-source discriminator to distinguish the predicted 3D poses from the ground-truth, which helps to enforce the pose estimator to generate anthropometrically valid poses even with images in the wild. We also observe that a carefully designed information source for the discriminator is essential to boost the performance. Thus, we design a geometric descriptor, which computes the pairwise relative locations and distances between body joints, as a new information source for the discriminator. The efficacy of our adversarial learning framework with the new geometric descriptor has been demonstrated through extensive experiments on widely used public benchmarks. Our approach significantly improves the performance compared with previous state-of-the-art approaches.",
"Accurate 3D human pose estimation from single images is possible with sophisticated deep-net architectures that have been trained on very large datasets. However, this still leaves open the problem of capturing motions for which no such database exists. Manual annotation is tedious, slow, and error-prone. In this paper, we propose to replace most of the annotations by the use of multiple views, at training time only. Specifically, we train the system to predict the same pose in all views. Such a consistency constraint is necessary but not sufficient to predict accurate poses. We therefore complement it with a supervised loss aiming to predict the correct pose in a small set of labeled images, and with a regularization term that penalizes drift from initial predictions. Furthermore, we propose a method to estimate camera pose jointly with human pose, which lets us utilize multi-view footage where calibration is difficult, e.g., for pan-tilt or moving handheld cameras. We demonstrate the effectiveness of our approach on established benchmarks, as well as on a new Ski dataset with rotating cameras and expert ski motion, for which annotations are truly hard to obtain.",
"We present a learned model of human body shape and pose-dependent shape variation that is more accurate than previous models and is compatible with existing graphics pipelines. Our Skinned Multi-Person Linear model (SMPL) is a skinned vertex-based model that accurately represents a wide variety of body shapes in natural human poses. The parameters of the model are learned from data including the rest pose template, blend weights, pose-dependent blend shapes, identity-dependent blend shapes, and a regressor from vertices to joint locations. Unlike previous models, the pose-dependent blend shapes are a linear function of the elements of the pose rotation matrices. This simple formulation enables training the entire model from a relatively large number of aligned 3D meshes of different people in different poses. We quantitatively evaluate variants of SMPL using linear or dual-quaternion blend skinning and show that both are more accurate than a Blend-SCAPE model trained on the same data. We also extend SMPL to realistically model dynamic soft-tissue deformations. Because it is based on blend skinning, SMPL is compatible with existing rendering engines and we make it available for research purposes.",
"Estimating human pose, shape, and motion from images and videos are fundamental challenges with many applications. Recent advances in 2D human pose estimation use large amounts of manually-labeled training data for learning convolutional neural networks (CNNs). Such data is time consuming to acquire and difficult to extend. Moreover, manual labeling of 3D pose, depth and motion is impractical. In this work we present SURREAL (Synthetic hUmans foR REAL tasks): a new large-scale dataset with synthetically-generated but realistic images of people rendered from 3D sequences of human motion capture data. We generate more than 6 million frames together with ground truth pose, depth maps, and segmentation masks. We show that CNNs trained on our synthetic dataset allow for accurate human depth estimation and human part segmentation in real RGB images. Our results and the new dataset open up new possibilities for advancing person analysis using cheap and large-scale synthetic data.",
"In this paper, we introduce a novel unsupervised domain adaptation technique for the task of 3D keypoint prediction from a single depth scan image. Our key idea is to utilize the fact that predictions from different views of the same or similar objects should be consistent with each other. Such view consistency provides effective regularization for keypoint prediction on unlabeled instances. In addition, we introduce a geometric alignment term to regularize predictions in the target domain. The resulting loss function can be effectively optimized via alternating minimization. We demonstrate the effectiveness of our approach on real datasets and present experimental results showing that our approach is superior to state-of-the-art general-purpose domain adaptation techniques.",
"We propose a CNN-based approach for 3D human body pose estimation from single RGB images that addresses the issue of limited generalizability of models trained solely on the starkly limited publicly available 3D pose data. Using only the existing 3D pose data and 2D pose data, we show state-of-the-art performance on established benchmarks through transfer of learned features, while also generalizing to in-the-wild scenes. We further introduce a new training set for human body pose estimation from monocular images of real humans that has the ground truth captured with a multi-camera marker-less motion capture system. It complements existing corpora with greater diversity in pose, human appearance, clothing, occlusion, and viewpoints, and enables an increased scope of augmentation. We also contribute a new benchmark that covers outdoor and indoor scenes, and demonstrate that our 3D pose dataset shows better in-the-wild performance than existing annotated data, which is further improved in conjunction with transfer learning from 2D pose data. All in all, we argue that the use of transfer learning of representations in tandem with algorithmic and data contributions is crucial for general 3D body pose estimation.",
"",
"The recovery of 3D human pose with monocular camera is an inherently ill-posed problem due to the large number of possible projections from the same 2D image to 3D space. Aimed at improving the accuracy of 3D motion reconstruction, we introduce the additional built-in knowledge, namely height-map, into the algorithmic scheme of reconstructing the 3D pose motion under a single-view calibrated camera. Our novel proposed framework consists of two major contributions. Firstly, the RGB image and its calculated height-map are combined to detect the landmarks of 2D joints with a dual-stream deep convolution network. Secondly, we formulate a new objective function to estimate 3D motion from the detected 2D joints in the monocular image sequence, which reinforces the temporal coherence constraints on both the camera and 3D poses. Experiments with HumanEva, Human3.6M, and MCAD dataset validate that our method outperforms the state-of-the-art algorithms on both 2D joints localization and 3D motion recovery. Moreover, the evaluation results on HumanEva indicates that the performance of our proposed single-view approach is comparable to that of the multi-view deep learning counterpart."
]
} |
1903.08839 | 2924460655 | Recent studies have shown remarkable advances in 3D human pose estimation from monocular images, with the help of large-scale in-door 3D datasets and sophisticated network architectures. However, the generalizability to different environments remains an elusive goal. In this work, we propose a geometry-aware 3D representation for the human pose to address this limitation by using multiple views in a simple auto-encoder model at the training stage and only 2D keypoint information as supervision. A view synthesis framework is proposed to learn the shared 3D representation between viewpoints with synthesizing the human pose from one viewpoint to the other one. Instead of performing a direct transfer in the raw image-level, we propose a skeleton-based encoder-decoder mechanism to distil only pose-related representation in the latent space. A learning-based representation consistency constraint is further introduced to facilitate the robustness of latent 3D representation. Since the learnt representation encodes 3D geometry information, mapping it to 3D pose will be much easier than conventional frameworks that use an image or 2D coordinates as the input of 3D pose estimator. We demonstrate our approach on the task of 3D human pose estimation. Comprehensive experiments on three popular benchmarks show that our model can significantly improve the performance of state-of-the-art methods with simply injecting the representation as a robust 3D prior. | In contrast to the above approaches, our framework aims at discovering a robust geometry-aware 3D representation of the human pose in latent space, with only 2D annotations in hand. This allows us to train the subsequent monocular 3D pose estimation network with much less labeled 3D data. Recently, a concurrent work with a similar spirit has been published in the community. 
In contrast to @cite_11 , which can only handle one particular dataset due to its dependency on appearance and inter-frame information during training, our framework tries to bridge the gap of inter-dataset variation, which permits more practical usage. Moreover, our framework is complementary to previous 3D pose estimation works, and can use current approaches as the baseline with the injection of the learnt representation as a 3D structure prior. | {
"cite_N": [
"@cite_11"
],
"mid": [
"2795942369"
],
"abstract": [
"Modern 3D human pose estimation techniques rely on deep networks, which require large amounts of training data. While weakly-supervised methods require less supervision, by utilizing 2D poses or multi-view imagery without annotations, they still need a sufficiently large set of samples with 3D annotations for learning to succeed. In this paper, we propose to overcome this problem by learning a geometry-aware body representation from multi-view images without annotations. To this end, we use an encoder-decoder that predicts an image from one viewpoint given an image from another viewpoint. Because this representation encodes 3D geometry, using it in a semi-supervised setting makes it easier to learn a mapping from it to 3D human pose. As evidenced by our experiments, our approach significantly outperforms fully-supervised methods given the same amount of labeled data, and improves over other semi-supervised methods while using as little as 1 of the labeled data."
]
} |
1903.08863 | 2923918895 | In this paper, we investigate how to learn a suitable representation of satellite image time series in an unsupervised manner by leveraging large amounts of unlabeled data. Additionally, we aim to disentangle the representation of time series into two representations: a shared representation that captures the common information between the images of a time series and an exclusive representation that contains the specific information of each image of the time series. To address these issues, we propose a model that combines a novel component called cross-domain autoencoders with the variational autoencoder (VAE) and generative adversarial network (GAN) methods. In order to learn disentangled representations of time series, our model learns the multimodal image-to-image translation task. We train our model using satellite image time series from the Sentinel-2 mission. Several experiments are carried out to evaluate the obtained representations. We show that these disentangled representations can be very useful to perform multiple tasks such as image classification, image retrieval, image segmentation and change detection. | . It is one of the most popular applications using conditional GANs @cite_21 . The image-to-image translation task consists of learning a mapping function between an input image domain and an output image domain. Impressive results have been achieved by the pix2pix @cite_13 and cycleGAN @cite_8 models. Nevertheless, most of these models are monomodal. That is, there is a unique output image for a given input image. | {
"cite_N": [
"@cite_21",
"@cite_13",
"@cite_8"
],
"mid": [
"2125389028",
"",
"2962793481"
],
"abstract": [
"Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this model can generate MNIST digits conditioned on class labels. We also illustrate how this model could be used to learn a multi-modal model, and provide preliminary examples of an application to image tagging in which we demonstrate how this approach can generate descriptive tags which are not part of training labels.",
"",
"Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples. Our goal is to learn a mapping G : X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping F : Y → X and introduce a cycle consistency loss to push F(G(X)) ≈ X (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach."
]
} |
1903.08863 | 2923918895 | In this paper, we investigate how to learn a suitable representation of satellite image time series in an unsupervised manner by leveraging large amounts of unlabeled data. Additionally, we aim to disentangle the representation of time series into two representations: a shared representation that captures the common information between the images of a time series and an exclusive representation that contains the specific information of each image of the time series. To address these issues, we propose a model that combines a novel component called cross-domain autoencoders with the variational autoencoder (VAE) and generative adversarial network (GAN) methods. In order to learn disentangled representations of time series, our model learns the multimodal image-to-image translation task. We train our model using satellite image time series from the Sentinel-2 mission. Several experiments are carried out to evaluate the obtained representations. We show that these disentangled representations can be very useful to perform multiple tasks such as image classification, image retrieval, image segmentation and change detection. | . One of the limitations of previous models is the lack of diversity of generated images. Certain models address this problem by combining the GAN and VAE methods. On the one hand, GANs are used to generate realistic images while VAE is used to provide diversity in the output domain. Recent work that deals with multimodal output is presented by Gonzalez-Garcia et al. @cite_22 , Zhu et al. @cite_0 , Huang et al. @cite_4 , Lee et al. @cite_25 and Ma et al. @cite_10 . In particular, to be able to generate an entire time series from a single image, we adopt the principle of the BicycleGAN model proposed by Zhu et al. @cite_0 where a low-dimensional latent vector represents the diversity of the output domain.
However, while the BicycleGAN model mainly focuses on image generation, we only consider the image-to-image translation task as a way to learn suitable feature representations. For image generation purposes, the output diversity is conditioned at the encoder input level in the BicycleGAN model. Instead, the output diversity is conditioned at the decoder input level in our model. | {
"cite_N": [
"@cite_4",
"@cite_22",
"@cite_0",
"@cite_10",
"@cite_25"
],
"mid": [
"2797650215",
"2803398058",
"",
"2897699732",
"2796303840"
],
"abstract": [
"Unsupervised image-to-image translation is an important and challenging problem in computer vision. Given an image in the source domain, the goal is to learn the conditional distribution of corresponding images in the target domain, without seeing any pairs of corresponding images. While this conditional distribution is inherently multimodal, existing approaches make an overly simplified assumption, modeling it as a deterministic one-to-one mapping. As a result, they fail to generate diverse outputs from a given source domain image. To address this limitation, we propose a Multimodal Unsupervised Image-to-image Translation (MUNIT) framework. We assume that the image representation can be decomposed into a content code that is domain-invariant, and a style code that captures domain-specific properties. To translate an image to another domain, we recombine its content code with a random style code sampled from the style space of the target domain. We analyze the proposed framework and establish several theoretical results. Extensive experiments with comparisons to the state-of-the-art approaches further demonstrates the advantage of the proposed framework. Moreover, our framework allows users to control the style of translation outputs by providing an example style image. Code and pretrained models are available at this https URL",
"Deep image translation methods have recently shown excellent results, outputting high-quality images covering multiple modes of the data distribution. There has also been increased interest in disentangling the internal representations learned by deep methods to further improve their performance and achieve a finer control. In this paper, we bridge these two objectives and introduce the concept of cross-domain disentanglement. We aim to separate the internal representation into three parts. The shared part contains information for both domains. The exclusive parts, on the other hand, contain only factors of variation that are particular to each domain. We achieve this through bidirectional image translation based on Generative Adversarial Networks and cross-domain autoencoders, a novel network component. Our model offers multiple advantages. We can output diverse samples covering multiple modes of the distributions of both domains, perform domain-specific image transfer and interpolation, and cross-domain retrieval without the need of labeled data, only paired images. We compare our model to the state-of-the-art in multi-modal image translation and achieve better results for translation on challenging datasets as well as for cross-domain retrieval on realistic datasets.",
"",
"Image-to-image translation has recently received significant attention due to advances in deep learning. Most works focus on learning either a one-to-one mapping in an unsupervised way or a many-to-many mapping in a supervised way. However, a more practical setting is many-to-many mapping in an unsupervised way, which is harder due to the lack of supervision and the complex inner- and cross-domain variations. To alleviate these issues, we propose the Exemplar Guided & Semantically Consistent Image-to-image Translation (EGSC-IT) network which conditions the translation process on an exemplar image in the target domain. We assume that an image comprises of a content component which is shared across domains, and a style component specific to each domain. Under the guidance of an exemplar from the target domain we apply Adaptive Instance Normalization to the shared content component, which allows us to transfer the style information of the target domain to the source domain. To avoid semantic inconsistencies during translation that naturally appear due to the large inner- and cross-domain variations, we introduce the concept of feature masks that provide coarse semantic guidance without requiring the use of any semantic labels. Experimental results on various datasets show that EGSC-IT does not only translate the source image to diverse instances in the target domain, but also preserves the semantic consistency during the process.",
"Being able to predict what may happen in the future requires an in-depth understanding of the physical and causal rules that govern the world. A model that is able to do so has a number of appealing applications, from robotic planning to representation learning. However, learning to predict raw future observations, such as frames in a video, is exceedingly challenging -- the ambiguous nature of the problem can cause a naively designed model to average together possible futures into a single, blurry prediction. Recently, this has been addressed by two distinct approaches: (a) latent variational variable models that explicitly model underlying stochasticity and (b) adversarially-trained models that aim to produce naturalistic images. However, a standard latent variable model can struggle to produce realistic results, and a standard adversarially-trained model underutilizes latent variables and fails to produce diverse predictions. We show that these distinct methods are in fact complementary. Combining the two produces predictions that look more realistic to human raters and better cover the range of possible futures. Our method outperforms prior and concurrent work in these aspects."
]
} |
1903.08682 | 2950384556 | We propose a high-quality photo-to-pencil translation method with fine-grained control over the drawing style. This is a challenging task due to multiple stroke types (e.g., outline and shading), structural complexity of pencil shading (e.g., hatching), and the lack of aligned training data pairs. To address these challenges, we develop a two-branch model that learns separate filters for generating sketchy outlines and tonal shading from a collection of pencil drawings. We create training data pairs by extracting clean outlines and tonal illustrations from original pencil drawings using image filtering techniques, and we manually label the drawing styles. In addition, our model creates different pencil styles (e.g., line sketchiness and shading style) in a user-controllable manner. Experimental results on different types of pencil drawings show that the proposed algorithm performs favorably against existing methods in terms of quality, diversity and user evaluations. | There is a rich literature on procedural (non-learning) stylization in Non-Photorealistic Rendering (NPR) @cite_12 @cite_30 . Early work focuses on interactive pen-and-ink drawing and hatching of 2D inputs @cite_32 @cite_19 and 3D models @cite_5 @cite_31 @cite_3 . Pencil drawing is similar to pen-and-ink drawing, but it has more degrees-of-freedom since individual pencil strokes may have varying tone, width, and texture. For 2D images, several procedural image stylization approaches have simulated pencil drawings @cite_44 @cite_49 . These methods use hand-crafted algorithms and features for outlines and a pre-defined set of pencil texture examples for shading. While procedural approaches can be fast and interpretable, accurately capturing a wide range of illustration styles with purely procedural methods is still challenging. | {
"cite_N": [
"@cite_30",
"@cite_32",
"@cite_3",
"@cite_44",
"@cite_19",
"@cite_49",
"@cite_5",
"@cite_31",
"@cite_12"
],
"mid": [
"",
"1989800297",
"2137391696",
"",
"2163189457",
"50562987",
"2153968339",
"",
"2288799730"
],
"abstract": [
"",
"We present an interactive system for creating pen-and-ink illustrations. The system uses stroke textures —collections of strokes arranged in different patterns—to generate texture and tone. The user “paints” with a desired stroke texture to achieve a desired tone, and the computer draws all of the individual strokes. The system includes support for using scanned or rendered images for reference to provide the user with guides for outline and tone. By following these guides closely, the illustration system can be used for interactive digital halftoning, in which stroke textures are applied to convey details that would otherwise be lost in this black-and-white medium. By removing the burden of placing individual strokes from the user, the illustration system makes it possible to create fine stroke work with a purely mouse-based interface. Thus, this approach holds promise for bringing high-quality black-and-white illustration to the world of personal computing and desktop publishing.",
"This paper presents new algorithms and techniques for rendering parametric free-form surfaces in pen and ink. In particular, we introduce the idea of “controlled-density hatching” for conveying tone, texture, and shape. The fine control over tone this method provides allows the use of traditional texture mapping techniques for specifying the tone of pen-and-ink illustrations. We also show how a planar map, a data structure central to our rendering algorithm, can be constructed from parametric surfaces, and used for clipping strokes and generating outlines. Finally, we show how curved shadows can be cast onto curved objects for this style of illustration. CR",
"",
"We present an interactive system for creating pen-and-ink-style line drawings from greyscale images in which the strokes of the rendered illustration follow the features of the original image. The user, via new interaction techniques for editing a direction field, specifies an orientation for each region of the image; the computer draws oriented strokes, based on a user-specified set of example strokes, that achieve the same tone as the image via a new algorithm that compares an adaptively-blurred version of the current illustration to the target tone image. By aligning the direction field with surface orientations of the objects in the image, the user can create textures that appear attached to those objects instead of merely conveying their darkness. The result is a more compelling pen-and-ink illustration than was previously possible from 2D reference imagery. CR",
"",
"This article presents an algorithm for learning hatching styles from line drawings. An artist draws a single hatching illustration of a 3D object. Her strokes are analyzed to extract the following per-pixel properties: hatching level (hatching, cross-hatching, or no strokes), stroke orientation, spacing, intensity, length, and thickness. A mapping is learned from input geometric, contextual, and shading features of the 3D object to these hatching properties, using classification, regression, and clustering techniques. Then, a new illustration can be generated in the artist's style, as follows. First, given a new view of a 3D object, the learned mapping is applied to synthesize target stroke properties for each pixel. A new illustration is then generated by synthesizing hatching strokes according to the target properties.",
"",
"Non-photorealistic rendering (NPR) is a combination of computer graphics and computer vision that produces renderings in various artistic, expressive or stylized ways such as painting and drawing. This book focuses on image and video based NPR, where the input is a 2D photograph or a video rather than a 3D model. 2D NPR techniques have application in areas as diverse as consumer and professional digital photography and visual effects for TV and film production. The book covers the full range of the state of the art of NPR with every chapter authored by internationally renowned experts in the field, covering both classical and contemporary techniques. It will enable both graduate students in computer graphics, computer vision or image processing and professional developers alike to quickly become familiar with contemporary techniques, enabling them to apply 2D NPR algorithms in their own projects."
]
} |
1903.08682 | 2950384556 | We propose a high-quality photo-to-pencil translation method with fine-grained control over the drawing style. This is a challenging task due to multiple stroke types (e.g., outline and shading), structural complexity of pencil shading (e.g., hatching), and the lack of aligned training data pairs. To address these challenges, we develop a two-branch model that learns separate filters for generating sketchy outlines and tonal shading from a collection of pencil drawings. We create training data pairs by extracting clean outlines and tonal illustrations from original pencil drawings using image filtering techniques, and we manually label the drawing styles. In addition, our model creates different pencil styles (e.g., line sketchiness and shading style) in a user-controllable manner. Experimental results on different types of pencil drawings show that the proposed algorithm performs favorably against existing methods in terms of quality, diversity and user evaluations. | . Left: the paired training data created by applying an abstraction procedure to pencil drawings. Right: the testing phase (including network details). The two branches output an outline drawing and a shading drawing respectively, which can be combined through pixel-wise multiplication as a third pencil drawing result. The edge module in gray in the outline branch (top) is a boundary detector @cite_21 , which is optional at test time. For highly textured photos, it is suggested to use this module to detect boundaries only. See for technical details. | {
"cite_N": [
"@cite_21"
],
"mid": [
"2129587342"
],
"abstract": [
"Edge detection is a critical component of many vision systems, including object detectors and image segmentation algorithms. Patches of edges exhibit well-known forms of local structure, such as straight lines or T-junctions. In this paper we take advantage of the structure present in local image patches to learn both an accurate and computationally efficient edge detector. We formulate the problem of predicting local edge masks in a structured learning framework applied to random decision forests. Our novel approach to learning decision trees robustly maps the structured labels to a discrete space on which standard information gain measures may be evaluated. The result is an approach that obtains real time performance that is orders of magnitude faster than many competing state-of-the-art approaches, while also achieving state-of-the-art edge detection results on the BSDS500 Segmentation dataset and NYU Depth dataset. Finally, we show the potential of our approach as a general purpose edge detector by showing our learned edge models generalize well across datasets."
]
} |
1903.08682 | 2950384556 | We propose a high-quality photo-to-pencil translation method with fine-grained control over the drawing style. This is a challenging task due to multiple stroke types (e.g., outline and shading), structural complexity of pencil shading (e.g., hatching), and the lack of aligned training data pairs. To address these challenges, we develop a two-branch model that learns separate filters for generating sketchy outlines and tonal shading from a collection of pencil drawings. We create training data pairs by extracting clean outlines and tonal illustrations from original pencil drawings using image filtering techniques, and we manually label the drawing styles. In addition, our model creates different pencil styles (e.g., line sketchiness and shading style) in a user-controllable manner. Experimental results on different types of pencil drawings show that the proposed algorithm performs favorably against existing methods in terms of quality, diversity and user evaluations. | A third approach is to transfer deep texture statistics of a style exemplar, which does not employ paired training data. Since Gatys et al. @cite_40 proposed an algorithm for artistic stylization based on matching the correlations (Gram matrix) between deep features, numerous methods have been developed for improvements in different aspects @cite_38 , e.g., efficiency @cite_16 @cite_22 , generality @cite_23 @cite_2 @cite_17 @cite_1 @cite_48 , quality @cite_14 @cite_42 @cite_11 , diversity @cite_48 , high-resolution @cite_35 , and photorealism @cite_0 @cite_43 . However, these methods do not perform well for pencil drawing. The rendered results (Figure (c)) in the pencil style only capture the overall gray tones, without capturing distinctive hatching or outline styles well. | {
"cite_N": [
"@cite_38",
"@cite_35",
"@cite_14",
"@cite_11",
"@cite_22",
"@cite_48",
"@cite_42",
"@cite_1",
"@cite_0",
"@cite_43",
"@cite_40",
"@cite_23",
"@cite_2",
"@cite_16",
"@cite_17"
],
"mid": [
"2613099748",
"2884041121",
"",
"",
"2952226636",
"",
"2611605760",
"2949960002",
"2604721644",
"",
"2475287302",
"2751689814",
"",
"2950689937",
"2962772087"
],
"abstract": [
"The seminal work of demonstrated the power of Convolutional Neural Networks (CNNs) in creating artistic imagery by separating and recombining image content and style. This process of using CNNs to render a content image in different styles is referred to as Neural Style Transfer (NST). Since then, NST has become a trending topic both in academic literature and industrial applications. It is receiving increasing attention and a variety of approaches are proposed to either improve or extend the original NST algorithm. In this paper, we aim to provide a comprehensive overview of the current progress towards NST. We first propose a taxonomy of current algorithms in the field of NST. Then, we present several evaluation methods and compare different NST algorithms both qualitatively and quantitatively. The review concludes with a discussion of various applications of NST and open problems for future research. A list of papers discussed in this review, corresponding codes, pre-trained models and more comparison results are publicly available at this https URL.",
"Recently, style transfer has received a lot of attention. While much of this research has aimed at speeding up processing, the approaches are still lacking from a principled, art historical standpoint: a style is more than just a single image or an artist, but previous work is limited to only a single instance of a style or shows no benefit from more images. Moreover, previous work has relied on a direct comparison of art in the domain of RGB images or on CNNs pre-trained on ImageNet, which requires millions of labeled object bounding boxes and can introduce an extra bias, since it has been assembled without artistic consideration. To circumvent these issues, we propose a style-aware content loss, which is trained jointly with a deep encoder-decoder network for real-time, high-resolution stylization of images and videos. We propose a quantitative measure for evaluating the quality of a stylized image and also have art historians rank patches from our approach against those from previous work. These and our qualitative results ranging from small image patches to megapixel stylistic images and videos show that our approach better captures the subtle nature in which a style affects content.",
"",
"",
"recently demonstrated that deep networks can generate beautiful textures and stylized images from a single texture example. However, their methods requires a slow and memory-consuming optimization process. We propose here an alternative approach that moves the computational burden to a learning stage. Given a single example of a texture, our approach trains compact feed-forward convolutional networks to generate multiple samples of the same texture of arbitrary size and to transfer artistic style from a given image to any other image. The resulting networks are remarkably light-weight and can generate textures of quality comparable to Gatys et al., but hundreds of times faster. More generally, our approach highlights the power and flexibility of generative feed-forward models trained with complex and expressive loss functions.",
"",
"We propose a new technique for visual attribute transfer across images that may have very different appearance but have perceptually similar semantic structure. By visual attribute transfer, we mean transfer of visual information (such as color, tone, texture, and style) from one image to another. For example, one image could be that of a painting or a sketch while the other is a photo of a real scene, and both depict the same type of scene. Our technique finds semantically-meaningful dense correspondences between two input images. To accomplish this, it adapts the notion of \"image analogy\" [ 2001] with features extracted from a Deep Convolutional Neutral Network for matching; we call our technique deep image analogy. A coarse-to-fine strategy is used to compute the nearest-neighbor field for generating the results. We validate the effectiveness of our proposed method in a variety of cases, including style texture transfer, color style swap, sketch painting to photo, and time lapse.",
"We propose StyleBank, which is composed of multiple convolution filter banks and each filter bank explicitly represents one style, for neural image style transfer. To transfer an image to a specific style, the corresponding filter bank is operated on top of the intermediate feature embedding produced by a single auto-encoder. The StyleBank and the auto-encoder are jointly learnt, where the learning is conducted in such a way that the auto-encoder does not encode any style information thanks to the flexibility introduced by the explicit filter bank representation. It also enables us to conduct incremental learning to add a new image style by learning a new filter bank while holding the auto-encoder fixed. The explicit style representation along with the flexible network design enables us to fuse styles at not only the image level, but also the region level. Our method is the first style transfer network that links back to traditional texton mapping methods, and hence provides new understanding on neural style transfer. Our method is easy to train, runs in real-time, and produces results that qualitatively better or at least comparable to existing methods.",
"This paper introduces a deep-learning approach to photographic style transfer that handles a large variety of image content while faithfully transferring the reference style. Our approach builds upon the recent work on painterly transfer that separates style from the content of an image by considering different layers of a neural network. However, as is, this approach is not suitable for photorealistic style transfer. Even when both the input and reference images are photographs, the output still exhibits distortions reminiscent of a painting. Our contribution is to constrain the transformation from the input to the output to be locally affine in colorspace, and to express this constraint as a custom fully differentiable energy term. We show that this approach successfully suppresses distortion and yields satisfying photorealistic style transfers in a broad variety of scenarios, including transfer of the time of day, weather, season, and artistic edits.",
"",
"Rendering the semantic content of an image in different styles is a difficult image processing task. Arguably, a major limiting factor for previous approaches has been the lack of image representations that explicitly represent semantic information and, thus, allow to separate image content from style. Here we use image representations derived from Convolutional Neural Networks optimised for object recognition, which make high level image information explicit. We introduce A Neural Algorithm of Artistic Style that can separate and recombine the image content and style of natural images. The algorithm allows us to produce new images of high perceptual quality that combine the content of an arbitrary photograph with the appearance of numerous wellknown artworks. Our results provide new insights into the deep image representations learned by Convolutional Neural Networks and demonstrate their potential for high level image synthesis and manipulation.",
"We present an approach to accelerating a wide variety of image processing operators. Our approach uses a fully-convolutional network that is trained on input-output pairs that demonstrate the operator's action. After training, the original operator need not be run at all. The trained network operates at full resolution and runs in constant time. We investigate the effect of network architecture on approximation accuracy, runtime, and memory footprint, and identify a specific architecture that balances these considerations. We evaluate the presented approach on ten advanced image processing operators, including multiple variational models, multiscale tone and detail manipulation, photographic style transfer, nonlocal dehazing, and nonphotorealistic stylization. All operators are approximated by the same model. Experiments demonstrate that the presented approach is significantly more accurate than prior approximation schemes. It increases approximation accuracy as measured by PSNR across the evaluated operators by 8.5 dB on the MIT-Adobe dataset (from 27.5 to 36 dB) and reduces DSSIM by a multiplicative factor of 3 compared to the most accurate prior approximation scheme, while being the fastest. We show that our models generalize across datasets and across resolutions, and investigate a number of extensions of the presented approach. The results are shown in the supplementary video at this https URL",
"",
"We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing loss functions based on high-level features extracted from pretrained networks. We combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by Gatys et al. in real-time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results.",
"Universal style transfer aims to transfer arbitrary visual styles to content images. Existing feed-forward based methods, while enjoying the inference efficiency, are mainly limited by inability of generalizing to unseen styles or compromised visual quality. In this paper, we present a simple yet effective method that tackles these limitations without training on any pre-defined styles. The key ingredient of our method is a pair of feature transforms, whitening and coloring, that are embedded to an image reconstruction network. The whitening and coloring transforms reflect direct matching of feature covariance of the content image to a given style image, which shares similar spirits with the optimization of Gram matrix based cost in neural style transfer. We demonstrate the effectiveness of our algorithm by generating high-quality stylized images with comparisons to a number of recent methods. We also analyze our method by visualizing the whitened features and synthesizing textures by simple feature coloring."
]
} |
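The Gram-matrix style representation that several of the abstracts above rely on (and that the whitening-coloring method compares itself to) is compact enough to sketch directly. The snippet below is an illustrative NumPy version, not code from any of the cited papers; the `1 / (c * h * w)` normalization is one common convention and the feature maps are random stand-ins for CNN activations.

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (channels, height, width) feature map.

    Channel-channel correlations of CNN activations serve as the
    style representation in Gram-matrix-based neural style transfer.
    """
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def style_loss(feats_a, feats_b):
    """Squared Frobenius distance between the two Gram matrices."""
    ga, gb = gram_matrix(feats_a), gram_matrix(feats_b)
    return float(np.sum((ga - gb) ** 2))

# Random stand-ins for two layers' worth of CNN features.
rng = np.random.default_rng(0)
a = rng.standard_normal((8, 4, 4))
b = rng.standard_normal((8, 4, 4))
assert style_loss(a, a) == 0.0  # identical feature statistics: zero loss
assert style_loss(a, b) > 0.0   # different feature statistics: positive loss
```

In the full algorithm this loss is summed over several layers and minimized with respect to the pixels of the generated image (or used as a training signal for a feed-forward network).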
1903.08682 | 2950384556 | We propose a high-quality photo-to-pencil translation method with fine-grained control over the drawing style. This is a challenging task due to multiple stroke types (e.g., outline and shading), structural complexity of pencil shading (e.g., hatching), and the lack of aligned training data pairs. To address these challenges, we develop a two-branch model that learns separate filters for generating sketchy outlines and tonal shading from a collection of pencil drawings. We create training data pairs by extracting clean outlines and tonal illustrations from original pencil drawings using image filtering techniques, and we manually label the drawing styles. In addition, our model creates different pencil styles (e.g., line sketchiness and shading style) in a user-controllable manner. Experimental results on different types of pencil drawings show that the proposed algorithm performs favorably against existing methods in terms of quality, diversity and user evaluations. | Procedural methods often provide fine-grained style control, e.g., @cite_37 , but at the cost of considerable effort and difficulty in mastering certain styles. Image-to-image translation @cite_27 @cite_45 and neural style transfer methods provide only high-level control, e.g., by selecting training inputs in a different style, interpolating between unrelated styles @cite_7 @cite_20 , or selecting among high-level transfer parameters @cite_20 . In this work, we focus on developing a method with fine-grained style control that allows subtle adjustments to pencil drawing. | {
"cite_N": [
"@cite_37",
"@cite_7",
"@cite_27",
"@cite_45",
"@cite_20"
],
"mid": [
"2072438795",
"2953054324",
"2797650215",
"2952056941",
"2950078543"
],
"abstract": [
"This article introduces a programmable approach to nonphotorealistic line drawings from 3D models, inspired by programmable shaders in traditional rendering. This approach relies on the assumption generally made in NPR that style attributes (color, thickness, etc.) are chosen depending on generic properties of the scene such as line characteristics or depth discontinuities, etc. We propose a new image creation model where all operations are controlled through user-defined procedures in which the relations between style attributes and scene properties are specified. A view map describing all relevant support lines in the drawing and their topological arrangement is first created from the 3D model so as to ensure the continuity of all scene properties along its edges; a number of style modules operate on this map, by procedurally selecting, chaining, or splitting lines, before creating strokes and assigning drawing attributes. Consistent access to properties of the scene is provided from the different elements of the map that are manipulated throughout the whole process. The resulting drawing system permits flexible control of all elements of drawing style: First, different style modules can be applied to different types of lines in a view; second, the topology and geometry of strokes are entirely controlled from the programmable modules; and third, stroke attributes are assigned procedurally and can be correlated at will with various scene or view properties. We illustrate the components of our system and show how style modules successfully encode stylized visual characteristics that can be applied across a wide range of models.",
"The diversity of painting styles represents a rich visual vocabulary for the construction of an image. The degree to which one may learn and parsimoniously capture this visual vocabulary measures our understanding of the higher level features of paintings, if not images in general. In this work we investigate the construction of a single, scalable deep network that can parsimoniously capture the artistic style of a diversity of paintings. We demonstrate that such a network generalizes across a diversity of artistic styles by reducing a painting to a point in an embedding space. Importantly, this model permits a user to explore new painting styles by arbitrarily combining the styles learned from individual paintings. We hope that this work provides a useful step towards building rich models of paintings and offers a window on to the structure of the learned representation of artistic style.",
"Unsupervised image-to-image translation is an important and challenging problem in computer vision. Given an image in the source domain, the goal is to learn the conditional distribution of corresponding images in the target domain, without seeing any pairs of corresponding images. While this conditional distribution is inherently multimodal, existing approaches make an overly simplified assumption, modeling it as a deterministic one-to-one mapping. As a result, they fail to generate diverse outputs from a given source domain image. To address this limitation, we propose a Multimodal Unsupervised Image-to-image Translation (MUNIT) framework. We assume that the image representation can be decomposed into a content code that is domain-invariant, and a style code that captures domain-specific properties. To translate an image to another domain, we recombine its content code with a random style code sampled from the style space of the target domain. We analyze the proposed framework and establish several theoretical results. Extensive experiments with comparisons to the state-of-the-art approaches further demonstrates the advantage of the proposed framework. Moreover, our framework allows users to control the style of translation outputs by providing an example style image. Code and pretrained models are available at this https URL",
"Image-to-image translation aims to learn the mapping between two visual domains. There are two main challenges for many applications: 1) the lack of aligned training pairs and 2) multiple possible outputs from a single input image. In this work, we present an approach based on disentangled representation for producing diverse outputs without paired training images. To achieve diversity, we propose to embed images onto two spaces: a domain-invariant content space capturing shared information across domains and a domain-specific attribute space. Our model takes the encoded content features extracted from a given input and the attribute vectors sampled from the attribute space to produce diverse outputs at test time. To handle unpaired training data, we introduce a novel cross-cycle consistency loss based on disentangled representations. Qualitative results show that our model can generate diverse and realistic images on a wide range of tasks without paired training data. For quantitative comparisons, we measure realism with user study and diversity with a perceptual distance metric. We apply the proposed model to domain adaptation and show competitive performance when compared to the state-of-the-art on the MNIST-M and the LineMod datasets.",
"Neural Style Transfer has shown very exciting results enabling new forms of image manipulation. Here we extend the existing method to introduce control over spatial location, colour information and across spatial scale. We demonstrate how this enhances the method by allowing high-resolution controlled stylisation and helps to alleviate common failure cases such as applying ground textures to sky regions. Furthermore, by decomposing style into these perceptual factors we enable the combination of style information from multiple sources to generate new, perceptually appealing styles from existing ones. We also describe how these methods can be used to more efficiently produce large size, high-quality stylisation. Finally we show how the introduced control measures can be applied in recent methods for Fast Neural Style Transfer."
]
} |
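The row above describes creating training pairs by extracting clean outlines from pencil drawings with image filtering techniques. A difference-of-Gaussians (DoG) filter is one standard outline extractor and gives a feel for that step; this is a hypothetical sketch (the paper does not specify its exact filters), and `sigma`, `k`, and `thresh` are illustrative choices.

```python
import numpy as np

def gauss1d(sigma):
    """Normalized 1-D Gaussian kernel with radius ~3*sigma."""
    r = int(3 * sigma + 1)
    x = np.arange(-r, r + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def blur(img, sigma):
    """Separable Gaussian blur (zero padding at the borders)."""
    k = gauss1d(sigma)
    out = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 0, img)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, out)

def dog_outline(img, sigma=1.0, k=1.6, thresh=0.01):
    """Difference-of-Gaussians edge map: responds near intensity steps,
    stays silent in flat regions, yielding sketch-like outlines."""
    d = blur(img, sigma) - blur(img, k * sigma)
    return (np.abs(d) > thresh).astype(np.uint8)

# A bright square on a dark background: only the boundary should respond.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
edges = dog_outline(img)
assert edges[0, 0] == 0 and edges[16, 16] == 0  # flat regions: no edge
assert edges[8, 8:24].any()                     # square boundary responds
```

A binarized map like this would then be paired with the original drawing to form the aligned outline data the two-branch model trains on.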
1903.08856 | 2922790685 | Blockchain has become one of the most attractive technologies for applications, with a large range of deployments such as production, economy, or banking. Under the hood, Blockchain technology is a type of distributed database that supports untrusted parties. In this paper we focus on Hyperledger Fabric, the first blockchain in the market tailored for a private environment, allowing businesses to create a permissioned network. Hyperledger Fabric implements a PBFT consensus in order to maintain a non-forking blockchain at the application level. We deployed this framework over an area network between France and Germany in order to evaluate its performance when potentially large network delays are observed. Overall we found that when network delay increases significantly (i.e. up to 3.5 seconds at the network layer between two clouds), the blocks added to our blockchain had up to a 134-second offset after the 100th block from one cloud to the other. Thus, by delaying block propagation, we demonstrated that Hyperledger Fabric does not provide sufficient consistency guarantees to be deployed in critical environments. Our work is the first to evidence the negative impact of network delays on a PBFT-based blockchain. | In @cite_22 the authors describe two methods for hijacking Bitcoin: one by BGP hijacking and the other by delaying block propagation. They demonstrated that any network attacker can hijack a few ( @math 100) BGP prefixes to isolate @math 50 of the mining power. Hijacking is also the premise for full-fledged attacks on the Ethereum blockchain to steal coins @cite_22 : an attacker can multiply an asset by 200,000 in just 10 hours in a consortium or private context. | {
"cite_N": [
"@cite_22"
],
"mid": [
"2952662648"
],
"abstract": [
"As the most successful cryptocurrency to date, Bitcoin constitutes a target of choice for attackers. While many attack vectors have already been uncovered, one important vector has been left out though: attacking the currency via the Internet routing infrastructure itself. Indeed, by manipulating routing advertisements (BGP hijacks) or by naturally intercepting traffic, Autonomous Systems (ASes) can intercept and manipulate a large fraction of Bitcoin traffic. This paper presents the first taxonomy of routing attacks and their impact on Bitcoin, considering both small-scale attacks, targeting individual nodes, and large-scale attacks, targeting the network as a whole. While challenging, we show that two key properties make routing attacks practical: (i) the efficiency of routing manipulation; and (ii) the significant centralization of Bitcoin in terms of mining and routing. Specifically, we find that any network attacker can hijack few (<100) BGP prefixes to isolate 50% of the mining power---even when considering that mining pools are heavily multi-homed. We also show that on-path network attackers can considerably slow down block propagation by interfering with few key Bitcoin messages. We demonstrate the feasibility of each attack against the deployed Bitcoin software. We also quantify their effectiveness on the current Bitcoin topology using data collected from a Bitcoin supernode combined with BGP routing data. The potential damage to Bitcoin is worrying. By isolating parts of the network or delaying block propagation, attackers can cause a significant amount of mining power to be wasted, leading to revenue losses and enabling a wide range of exploits such as double spending. To prevent such effects in practice, we provide both short and long-term countermeasures, some of which can be deployed immediately."
]
} |
1903.08857 | 2924210329 | Motivated by recent developments in serverless systems for large-scale machine learning as well as improvements in scalable randomized matrix algorithms, we develop OverSketched Newton, a randomized Hessian-based optimization algorithm to solve large-scale smooth and strongly-convex problems in serverless systems. OverSketched Newton leverages matrix sketching ideas from Randomized Numerical Linear Algebra to compute the Hessian approximately. These sketching methods lead to inbuilt resiliency against stragglers that are a characteristic of serverless architectures. We establish that OverSketched Newton has a linear-quadratic convergence rate, and we empirically validate our results by solving large-scale supervised learning problems on real-world datasets. Experiments demonstrate a reduction of 50% in total running time on AWS Lambda, compared to state-of-the-art distributed optimization schemes. | Existing Straggler Mitigation Schemes: Strategies like speculative execution have traditionally been used to mitigate stragglers in popular distributed computing frameworks like Hadoop MapReduce @cite_48 and Apache Spark @cite_21 . Speculative execution works by detecting workers that are running slower than expected and then allocating their tasks to new workers, without shutting down the original straggling task. The worker that finishes first communicates its results. This has several drawbacks: constant monitoring of tasks is required, with each worker pausing its job to report its running status, and a worker may straggle only at the end of the task, say, while communicating the results. By the time the task is reallocated, the overall efficiency of the system has already suffered. | {
"cite_N": [
"@cite_48",
"@cite_21"
],
"mid": [
"2173213060",
"2189465200"
],
"abstract": [
"MapReduce is a programming model and an associated implementation for processing and generating large datasets that is amenable to a broad variety of real-world tasks. Users specify the computation in terms of a map and a reduce function, and the underlying runtime system automatically parallelizes the computation across large-scale clusters of machines, handles machine failures, and schedules inter-machine communication to make efficient use of the network and disks. Programmers find the system easy to use: more than ten thousand distinct MapReduce programs have been implemented internally at Google over the past four years, and an average of one hundred thousand MapReduce jobs are executed on Google's clusters every day, processing a total of more than twenty petabytes of data per day.",
"MapReduce and its variants have been highly successful in implementing large-scale data-intensive applications on commodity clusters. However, most of these systems are built around an acyclic data flow model that is not suitable for other popular applications. This paper focuses on one such class of applications: those that reuse a working set of data across multiple parallel operations. This includes many iterative machine learning algorithms, as well as interactive data analysis tools. We propose a new framework called Spark that supports these applications while retaining the scalability and fault tolerance of MapReduce. To achieve these goals, Spark introduces an abstraction called resilient distributed datasets (RDDs). An RDD is a read-only collection of objects partitioned across a set of machines that can be rebuilt if a partition is lost. Spark can outperform Hadoop by 10x in iterative machine learning jobs, and can be used to interactively query a 39 GB dataset with sub-second response time."
]
} |
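The speculative-execution strategy described in the row above can be sketched with a thread pool: wait on the primary attempt up to a deadline, then launch a duplicate of the still-running task and take whichever copy finishes first. This is an illustrative stdlib sketch, not how Hadoop or Spark actually implement it (their schedulers use progress-based heuristics); `deadline` and `slow_square` are made up for the example.

```python
import concurrent.futures as cf
import time

def speculative_run(task, arg, deadline, executor):
    """Run `task(arg)`; if it has not finished within `deadline` seconds,
    launch a backup copy and return whichever attempt finishes first.
    The original (possibly straggling) attempt is not cancelled."""
    primary = executor.submit(task, arg)
    done, _ = cf.wait([primary], timeout=deadline)
    if done:
        return primary.result()
    backup = executor.submit(task, arg)  # duplicate the straggling task
    done, _ = cf.wait([primary, backup], return_when=cf.FIRST_COMPLETED)
    return done.pop().result()

def slow_square(x):
    time.sleep(0.05)  # stand-in for a long-running worker task
    return x * x

with cf.ThreadPoolExecutor(max_workers=4) as ex:
    # Fast path: the primary finishes before the deadline.
    assert speculative_run(slow_square, 7, deadline=1.0, executor=ex) == 49
    # Straggler path: the deadline expires, so a backup copy is launched.
    assert speculative_run(slow_square, 7, deadline=0.001, executor=ex) == 49
```

The sketch makes the drawbacks noted above concrete: the monitor must poll the primary attempt, and a task that straggles only at the very end still pays the full duplicated cost.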
1903.08857 | 2924210329 | Motivated by recent developments in serverless systems for large-scale machine learning as well as improvements in scalable randomized matrix algorithms, we develop OverSketched Newton, a randomized Hessian-based optimization algorithm to solve large-scale smooth and strongly-convex problems in serverless systems. OverSketched Newton leverages matrix sketching ideas from Randomized Numerical Linear Algebra to compute the Hessian approximately. These sketching methods lead to inbuilt resiliency against stragglers that are a characteristic of serverless architectures. We establish that OverSketched Newton has a linear-quadratic convergence rate, and we empirically validate our results by solving large-scale supervised learning problems on real-world datasets. Experiments demonstrate a reduction of 50% in total running time on AWS Lambda, compared to state-of-the-art distributed optimization schemes. | Recently, many coding-theoretic ideas have been proposed to introduce redundancy into the distributed computation for straggler mitigation @cite_47 @cite_36 @cite_42 @cite_13 @cite_16 @cite_37 @cite_34 , many of them catering to distributed matrix-vector multiplication @cite_47 @cite_0 @cite_9 . In general, the idea of coded computation is to generate redundant copies of the result of distributed computation by encoding the input data using error-correcting codes. These redundant copies can then be used to decode the output of the missing stragglers. We use tools from @cite_36 to compute gradients in a distributed, straggler-resilient manner using codes, and we compare the performance with speculative execution. | {
"cite_N": [
"@cite_37",
"@cite_36",
"@cite_9",
"@cite_42",
"@cite_34",
"@cite_0",
"@cite_47",
"@cite_16",
"@cite_13"
],
"mid": [
"2962850796",
"2885072262",
"",
"2751625010",
"2963726510",
"",
"2268702383",
"2745045892",
""
],
"abstract": [
"We consider a large-scale matrix multiplication problem where the computation is carried out using a distributed system with a master node and multiple worker nodes, where each worker can store parts of the input matrices. We propose a computation strategy that leverages ideas from coding theory to design intermediate computations at the worker nodes, in order to optimally deal with straggling workers. The proposed strategy, named polynomial codes, achieves the optimum recovery threshold, defined as the minimum number of workers that the master needs to wait for in order to compute the output. This is the first code that achieves the optimal utilization of redundancy for tolerating stragglers or failures in distributed matrix multiplication. Furthermore, by leveraging the algebraic structure of polynomial codes, we can map the reconstruction problem of the final output to a polynomial interpolation problem, which can be solved efficiently. Polynomial codes provide order-wise improvement over the state of the art in terms of recovery threshold, and are also optimal in terms of several other metrics including computation latency and communication load. Moreover, we extend this code to distributed convolution and show its order-wise optimality.",
"Distributed computing allows for large-scale computation and machine learning tasks by enabling parallel computing at massive scale. A critical challenge to speeding up distributed computing comes from stragglers, a crippling bottleneck to system performance [1]. Recently, coding theory has offered an attractive paradigm dubbed as coded computation [2] for addressing this challenge through the judicious introduction of redundant computing to combat stragglers. However, most existing approaches have limited applicability if the system scales to hundreds or thousands of workers, as is the trend in computing platforms. At these scales, previously proposed algorithms based on Maximum Distance Separable (MDS) codes are too expensive due to their hidden cost, i.e., computing and communication costs associated with the encoding decoding procedures. Motivated by this limitation, we present a novel coded matrix-matrix multiplication scheme based on d-dimensional product codes. We show that our scheme allows for order-optimal computation communication costs for the encoding decoding procedures while achieving near-optimal compute time.",
"",
"Computationally intensive distributed and parallel computing is often bottlenecked by a small set of slow workers known as stragglers. In this paper, we utilize the emerging idea of coded computation'' to design a novel error-correcting-code inspired technique for solving linear inverse problems under specific iterative methods in a parallelized implementation affected by stragglers. Example machine-learning applications include inverse problems such as personalized PageRank and sampling on graphs. We provably show that our coded-computation technique can reduce the mean-squared error under a computational deadline constraint. In fact, the ratio of mean-squared error of replication-based and coded techniques diverges to infinity as the deadline increases. Our experiments for personalized PageRank performed on real systems and real social networks show that this ratio can be as large as @math . Further, unlike coded-computation techniques proposed thus far, our strategy combines outputs of all workers, including the stragglers, to produce more accurate estimates at the computational deadline. This also ensures that the accuracy degrades gracefully'' in the event that the number of stragglers is large.",
"Building on the previous work of [2] and [3] on coded computation, we propose a sequential approximation framework for solving optimization problems in a distributed manner. In a distributed computation system, latency caused by individual processors (“stragglers”) usually causes a significant delay in the overall process. The proposed method is powered by a sequential computation scheme, which is designed specifically for systems with stragglers. This scheme has the desirable property that the user is guaranteed to receive useful (approximate) computation results whenever a processor finishes its subtask, even in the presence of uncertain latency. In this paper, we give a coding theorem for sequentially computing matrix-vector multiplications, and the optimality of this coding scheme is also established. As an application of the results, we demonstrate solving optimization problems using a sequential approximation approach, which accelerates the algorithm in a distributed system with stragglers.",
"",
"Codes are widely used in many engineering applications to offer robustness against noise . In large-scale systems, there are several types of noise that can affect the performance of distributed machine learning algorithms—straggler nodes, system failures, or communication bottlenecks—but there has been little interaction cutting across codes, machine learning, and distributed systems. In this paper, we provide theoretical insights on how coded solutions can achieve significant gains compared with uncoded ones. We focus on two of the most basic building blocks of distributed learning algorithms: matrix multiplication and data shuffling . For matrix multiplication, we use codes to alleviate the effect of stragglers and show that if the number of homogeneous workers is @math , and the runtime of each subtask has an exponential tail, coded computation can speed up distributed matrix multiplication by a factor of @math . For data shuffling, we use codes to reduce communication bottlenecks, exploiting the excess in storage. We show that when a constant fraction @math of the data matrix can be cached at each worker, and @math is the number of workers, coded shuffling reduces the communication cost by a factor of @math compared with uncoded shuffling, where @math is the ratio of the cost of unicasting @math messages to @math users to multicasting a common message (of the same size) to @math users. For instance, @math if multicasting a message to @math users is as cheap as unicasting a message to one user. We also provide experimental results, corroborating our theoretical gains of the coded algorithms.",
"Coded computation is a framework for providing redundancy in distributed computing systems to make them robust to slower nodes, or stragglers. In [1], the authors propose a coded computation scheme based on maximum distance separable (MDS) codes for computing the product ATB, and this scheme is suitable for the case where one of the matrices is small enough to fit into a single compute node. In this work, we study coded computation involving large matrix multiplication where both matrices are large, and propose a new coded computation scheme, which we call product-coded matrix multiplication. Our analysis reveals interesting insights into which schemes perform best in which regimes. When the number of backup nodes scales sub-linearly in the size of the product, the product-coded scheme achieves the best run-time performance. On the other hand, when the number of backup nodes scales linearly in the size of the product, the MDS-coded scheme achieves the fundamental limit on the run-time performance. Further, we propose a novel application of low-density-parity-check (LDPC) codes to achieve linear-time decoding complexity, thus allowing our proposed solutions to scale gracefully.",
""
]
} |
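The coded-computation idea summarized in the row above can be made concrete with a toy (3, 2) MDS code for matrix-vector multiplication: split A row-wise into two blocks plus a parity block, give each of three workers one block, and decode A @ x from any two results. This is an illustrative NumPy sketch, not the exact construction of any cited paper (which use larger MDS and polynomial codes).

```python
import numpy as np

# Toy (3, 2) MDS-coded matrix-vector multiply: A is split row-wise into
# A1, A2, plus a parity block A1 + A2. Each of three workers computes one
# block times x, and the results from ANY two workers recover A @ x, so a
# single straggler can be ignored entirely.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))
x = rng.standard_normal(4)

A1, A2 = A[:3], A[3:]
tasks = [A1 @ x, A2 @ x, (A1 + A2) @ x]  # one result per worker

def decode(results):
    """Recover A @ x from any two of the three worker results."""
    if 0 in results and 1 in results:      # both systematic blocks arrived
        return np.concatenate([results[0], results[1]])
    if 0 in results:                       # worker 1 straggled: parity - A1 @ x
        return np.concatenate([results[0], results[2] - results[0]])
    return np.concatenate([results[2] - results[1], results[1]])  # worker 0 straggled

for straggler in range(3):                 # any single straggler is tolerated
    received = {i: r for i, r in enumerate(tasks) if i != straggler}
    assert np.allclose(decode(received), A @ x)
```

The redundancy overhead here is 50% (three tasks for two blocks of useful work); the cited schemes trade this overhead against the number of stragglers tolerated.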
1903.08857 | 2924210329 | Motivated by recent developments in serverless systems for large-scale machine learning as well as improvements in scalable randomized matrix algorithms, we develop OverSketched Newton, a randomized Hessian-based optimization algorithm to solve large-scale smooth and strongly-convex problems in serverless systems. OverSketched Newton leverages matrix sketching ideas from Randomized Numerical Linear Algebra to compute the Hessian approximately. These sketching methods lead to inbuilt resiliency against stragglers that are a characteristic of serverless architectures. We establish that OverSketched Newton has a linear-quadratic convergence rate, and we empirically validate our results by solving large-scale supervised learning problems on real-world datasets. Experiments demonstrate a reduction of 50% in total running time on AWS Lambda, compared to state-of-the-art distributed optimization schemes. | Distributed Second-Order Methods: There has been a growing research interest in designing and analyzing distributed (synchronous) implementations of second-order methods @cite_20 @cite_4 @cite_2 @cite_18 @cite_40 @cite_1 . However, these implementations are tailored for server-based distributed systems. Our focus, on the other hand, is on serverless systems. Our motivation behind considering serverless systems stems from their usability benefits, cost efficiency, and extensive and inexpensive commercial offerings @cite_33 @cite_51 . We implement our algorithms using the recently developed serverless framework called PyWren @cite_33 . While there are works that evaluate existing algorithms on serverless systems @cite_7 @cite_28 , this is the first work that proposes a large-scale distributed optimization algorithm for serverless systems. We exploit the advantages offered by serverless systems while mitigating drawbacks such as stragglers and the additional overhead per invocation of workers. | {
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_33",
"@cite_7",
"@cite_28",
"@cite_1",
"@cite_40",
"@cite_2",
"@cite_51",
"@cite_20"
],
"mid": [
"2962798535",
"2556660792",
"2591324491",
"2910909327",
"",
"",
"2755278809",
"1697545848",
"2896903681",
"2963992805"
],
"abstract": [
"We study optimization algorithms based on variance reduction for stochastic gradient descent (SGD). Remarkable recent progress has been made in this direction through development of algorithms like SAG, SVRG, SAGA. These algorithms have been shown to outperform SGD, both theoretically and empirically. However, asynchronous versions of these algorithms—a crucial requirement for modern large-scale applications—have not been studied. We bridge this gap by presenting a unifying framework for many variance reduction techniques. Subsequently, we propose an asynchronous algorithm grounded in our framework, and prove its fast convergence. An important consequence of our general approach is that it yields asynchronous versions of variance reduction algorithms such as SVRG and SAGA as a byproduct. Our method achieves near linear speedup in sparse settings common to machine learning. We demonstrate the empirical performance of our method through a concrete realization of asynchronous SVRG.",
"The scale of modern datasets necessitates the development of efficient distributed optimization methods for machine learning. We present a general-purpose framework for distributed computing environments, CoCoA, that has an efficient communication scheme and is applicable to a wide variety of problems in machine learning and signal processing. We extend the framework to cover general non-strongly-convex regularizers, including L1-regularized problems like lasso, sparse logistic regression, and elastic net regularization, and show how earlier work can be derived as a special case. We provide convergence guarantees for the class of convex regularized loss minimization objectives, leveraging a novel approach in handling non-strongly-convex regularizers and non-smooth loss functions. The resulting framework has markedly improved performance over state-of-the-art methods, as we illustrate with an extensive set of experiments on real distributed datasets.",
"Distributed computing remains inaccessible to a large number of users, in spite of many open source platforms and extensive commercial offerings. While distributed computation frameworks have moved beyond a simple map-reduce model, many users are still left to struggle with complex cluster management and configuration tools, even for running simple embarrassingly parallel jobs. We argue that stateless functions represent a viable platform for these users, eliminating cluster management overhead, fulfilling the promise of elasticity. Furthermore, using our prototype implementation, PyWren, we show that this model is general enough to implement a number of distributed computing models, such as BSP, efficiently. Extrapolating from recent trends in network bandwidth and the advent of disaggregated storage, we suggest that stateless functions are a natural fit for data processing in future computing environments.",
"The event-driven and elastic nature of serverless runtimes makes them a very efficient and cost-effective alternative for scaling up computations. So far, they have mostly been used for stateless, data parallel and ephemeral computations. In this work, we propose using serverless runtimes to solve generic, large-scale optimization problems. Specifically, we build a master-worker setup using AWS Lambda as the source of our workers, implement a parallel optimization algorithm to solve a regularized logistic regression problem, and show that relative speedups up to 256 workers and efficiencies above 70% up to 64 workers can be expected. We also identify possible algorithmic and system-level bottlenecks, propose improvements, and discuss the limitations and challenges in realizing these improvements.",
"",
"",
"For distributed computing environments, we consider the canonical machine learning problem of empirical risk minimization (ERM) with quadratic regularization, and we propose a distributed and communication-efficient Newton-type optimization method. At every iteration, each worker locally finds an Approximate NewTon (ANT) direction, and then it sends this direction to the main driver. The driver, then, averages all the ANT directions received from workers to form a Globally Improved ANT (GIANT) direction. GIANT naturally exploits the trade-offs between local computations and global communications in that more local computations result in fewer overall rounds of communications. GIANT is highly communication efficient in that, for @math -dimensional data uniformly distributed across @math workers, it has @math or @math rounds of communication and @math communication complexity per iteration. Theoretically, we show that GIANT's convergence rate is faster than first-order methods and existing distributed Newton-type methods. From a practical point-of-view, a highly beneficial feature of GIANT is that it has only one tuning parameter---the iterations of the local solver for computing an ANT direction. This is indeed in sharp contrast with many existing distributed Newton-type methods, as well as popular first-order methods, which have several tuning parameters, and whose performance can be greatly affected by the specific choices of such parameters. In this light, we empirically demonstrate the superior performance of GIANT compared with other competing methods.",
"We propose a new distributed algorithm for empirical risk minimization in machine learning. The algorithm is based on an inexact damped Newton method, where the inexact Newton steps are computed by a distributed preconditioned conjugate gradient method. We analyze its iteration complexity and communication efficiency for minimizing self-concordant empirical loss functions, and discuss the results for distributed ridge regression, logistic regression and binary classification with a smoothed hinge loss. In a standard setting for supervised learning, where the n data points are i.i.d. sampled and the regularization parameter scales as 1/√n, we show that the proposed algorithm is communication efficient: the required number of communication rounds does not increase with the sample size n, and only grows slowly with the number of machines.",
"Linear algebra operations are widely used in scientific computing and machine learning applications. However, it is challenging for scientists and data analysts to run linear algebra at scales beyond a single machine. Traditional approaches either require access to supercomputing clusters, or impose configuration and cluster management challenges. In this paper we show how the disaggregation of storage and compute resources in so-called \"serverless\" environments, combined with compute-intensive workload characteristics, can be exploited to achieve elastic scalability and ease of management. We present numpywren, a system for linear algebra built on a serverless architecture. We also introduce LAmbdaPACK, a domain-specific language designed to implement highly parallel linear algebra algorithms in a serverless setting. We show that, for certain linear algebra algorithms such as matrix multiply, singular value decomposition, and Cholesky decomposition, numpywren's performance (completion time) is within 33% of ScaLAPACK, and its compute efficiency (total CPU-hours) is up to 240% better due to elasticity, while providing an easier to use interface and better fault tolerance. At the same time, we show that the inability of serverless runtimes to exploit locality across the cores in a machine fundamentally limits their network efficiency, which limits performance on other algorithms such as QR factorization. This highlights how cloud providers could better support these types of computations through small changes in their infrastructure.",
"We present a novel Newton-type method for distributed optimization, which is particularly well suited for stochastic optimization and learning problems. For quadratic objectives, the method enjoys a linear rate of convergence which provably improves with the data size, requiring an essentially constant number of iterations under reasonable assumptions. We provide theoretical and empirical evidence of the advantages of our method compared to other approaches, such as one-shot parameter averaging and ADMM."
]
} |
1903.08445 | 2923547786 | We propose a novel transition-based algorithm that straightforwardly parses sentences from left to right by building @math attachments, with @math being the length of the input sentence. Similarly to the recent stack-pointer parser by (2018), we use the pointer network framework that, given a word, can directly point to a position from the sentence. However, our left-to-right approach is simpler than the original top-down stack-pointer parser (not requiring a stack) and reduces transition sequence length in half, from 2 @math -1 actions to @math . This results in a quadratic non-projective parser that runs twice as fast as the original while achieving the best accuracy to date on the English PTB dataset (96.04 UAS, 94.43 LAS) among fully-supervised single-model dependency parsers, and improves over the former top-down transition system in the majority of languages tested. | Before presented their top-down parser, had already employed pointer networks @cite_3 for dependency parsing. Concretely, they developed a pointer-network-based neural architecture with multitask learning able to perform pre-processing, tagging and dependency parsing exclusively by reading tokens from an input sentence, without needing POS tags or pre-trained word embeddings. Like our approach, they also use the capabilities provided by pointer networks to undertake the parsing task as a simple process of attaching each word as a dependent of another. They also try to improve the network performance with POS tag prediction as an auxiliary task and with different approaches to perform label prediction. They do not exclude cycles, neither by forbidding them at parsing time nor by removing them by post-processing, as they report that their system produces parses with a negligible amount of cycles, even with greedy decoding (matching our observation for our own system, in our case with beam-search decoding).
Finally, the system developed by is constrained to projective dependencies, while our approach can handle unrestricted non-projective structures. | {
"cite_N": [
"@cite_3"
],
"mid": [
"2250861254"
],
"abstract": [
"Almost all current dependency parsers classify based on millions of sparse indicator features. Not only do these features generalize poorly, but the cost of feature computation restricts parsing speed significantly. In this work, we propose a novel way of learning a neural network classifier for use in a greedy, transition-based dependency parser. Because this classifier learns and uses just a small number of dense features, it can work very fast, while achieving about a 2% improvement in unlabeled and labeled attachment scores on both English and Chinese datasets. Concretely, our parser is able to parse more than 1000 sentences per second at 92.2% unlabeled attachment score on the English Penn Treebank."
]
} |
1903.08636 | 2923619092 | It holds great implications for practical applications to enable centimeter-accuracy positioning for mobile and wearable sensor systems. In this paper, we propose a novel, high-precision, efficient visual-inertial (VI)-SLAM algorithm, termed Schmidt-EKF VI-SLAM (SEVIS), which optimally fuses IMU measurements and monocular images in a tightly-coupled manner to provide 3D motion tracking with bounded error. In particular, we adapt the Schmidt Kalman filter formulation to selectively include informative features in the state vector while treating them as nuisance parameters (or Schmidt states) once they become matured. This change in modeling allows for significant computational savings by no longer needing to constantly update the Schmidt states (or their covariance), while still allowing the EKF to correctly account for their cross-correlations with the active states. As a result, we achieve linear computational complexity in terms of map size, instead of quadratic as in the standard SLAM systems. In order to fully exploit the map information to bound navigation drifts, we advocate efficient keyframe-aided 2D-to-2D feature matching to find reliable correspondences between current 2D visual measurements and 3D map features. The proposed SEVIS is extensively validated in both simulations and experiments. | While SLAM estimators -- by jointly estimating the location of the sensor platform and the features in the surrounding environment -- are able to easily incorporate loop closure constraints to bound localization errors and have attracted much research attention in the past three decades @cite_46 @cite_45 @cite_47 @cite_6 , there are also significant research efforts devoted to open-loop VIO systems (e.g., @cite_36 @cite_40 @cite_3 @cite_17 @cite_27 @cite_29 @cite_7 @cite_35 @cite_38 @cite_18 @cite_20 @cite_44 @cite_43 ). 
For example, a hybrid MSCKF SLAM estimator was developed for VIO @cite_5 , which retains features that can be continuously tracked beyond the sliding window in the state as SLAM features while removing them when they get lost. | {
"cite_N": [
"@cite_35",
"@cite_38",
"@cite_18",
"@cite_7",
"@cite_46",
"@cite_36",
"@cite_29",
"@cite_6",
"@cite_3",
"@cite_44",
"@cite_43",
"@cite_40",
"@cite_45",
"@cite_27",
"@cite_5",
"@cite_47",
"@cite_20",
"@cite_17"
],
"mid": [
"2592781794",
"2569720095",
"",
"2736926039",
"2146881125",
"",
"",
"2750632489",
"2056298239",
"2800595980",
"2745859992",
"2051349034",
"2127578024",
"2072986918",
"2293973993",
"2461937780",
"",
"2056358962"
],
"abstract": [
"The main contribution of this paper is an invariant extended Kalman filter (EKF) for visual inertial navigation systems (VINS). It is demonstrated that the conventional EKF based VINS is not invariant under the stochastic unobservable transformation, associated with a translation and a rotation about the gravitational direction. This can lead to inconsistent state estimates as the estimator does not obey a fundamental property of the physical system. To address this issue, we use a novel uncertainty representation to derive a Right Invariant error extended Kalman filter (RIEKF-VINS) that preserves this invariance property. RIEKF-VINS is then adapted to the multi-state constraint Kalman filter framework to obtain a consistent state estimator. Both Monte Carlo simulations and real-world experiments are used to validate the proposed method.",
"In this letter, we investigate the convergence and consistency properties of an invariant-extended Kalman filter (RI-EKF) based simultaneous localization and mapping (SLAM) algorithm. Basic convergence properties of this algorithm are proven. These proofs do not require the restrictive assumption that the Jacobians of the motion and observation models need to be evaluated at the ground truth. It is also shown that the output of RI-EKF is invariant under any stochastic rigid body transformation in contrast to the SO(3)-based EKF SLAM algorithm (SO(3)-EKF) that is only invariant under deterministic rigid body transformation. Implications of these invariance properties on the consistency of the estimator are also discussed. Monte Carlo simulation results demonstrate that RI-EKF outperforms SO(3)-EKF, Robocentric-EKF and the “First Estimates Jacobian” EKF, for three-dimensional point feature-based SLAM.",
"",
"In this paper, a sliding-window two-camera vision-aided inertial navigation system (VINS) is presented in the square-root inverse domain. The performance of the system is assessed for the cases where feature matches across the two-camera images are processed with or without any stereo constraints (i.e., stereo vs. binocular). To support the comparison results, a theoretical analysis on the information gain when transitioning from binocular to stereo is also presented. Additionally, the advantage of using a two-camera (both stereo and binocular) system over a monocular VINS is assessed. Furthermore, the impact on the achieved accuracy of different image-processing frontends and estimator design choices is quantified. Finally, a thorough evaluation of the algorithm's processing requirements, which runs in real-time on a mobile processor, as well as its achieved accuracy as compared to alternative approaches is provided, for various scenes and motion profiles.",
"This paper describes the simultaneous localization and mapping (SLAM) problem and the essential methods for solving the SLAM problem and summarizes key implementations and demonstrations of the method. While there are still many practical issues to overcome, especially in more complex outdoor environments, the general SLAM method is now a well understood and established part of robotics. Another part of the tutorial summarized more recent works in addressing some of the remaining issues in SLAM, including computation, feature representation, and data association",
"",
"",
"In this paper, we propose a survey of the Simultaneous Localization And Mapping (SLAM) field when considering the recent evolution of autonomous driving. The growing interest regarding self-driving cars has given new directions to localization and mapping techniques. In this survey, we give an overview of the different branches of SLAM before going into the details of specific trends that are of interest when considered with autonomous applications in mind. We first present the limits of classical approaches for autonomous driving and discuss the criteria that are essential for this kind of application. We then review the methods where the identified challenges are tackled. We mostly focus on approaches building and reusing long-term maps in various conditions (weather, season, etc.). We also go through the emerging domain of multivehicle SLAM and its link with self-driving cars. We survey the different paradigms of that field (centralized and distributed) and the existing solutions. Finally, we conclude by giving an overview of the various large-scale experiments that have been carried out until now and discuss the remaining challenges and future orientations.",
"This work investigates the relationship between system observability properties and estimator inconsistency for a Vision-aided Inertial Navigation System (VINS). In particular, first we introduce a new methodology for determining the unobservable directions of nonlinear systems by factorizing the observability matrix according to the observable and unobservable modes. Subsequently, we apply this method to the VINS nonlinear model and determine its unobservable directions analytically. We leverage our analysis to improve the accuracy and consistency of linearized estimators applied to VINS. Our key findings are evaluated through extensive simulations and experimental validation on real-world data, demonstrating the superior accuracy and consistency of the proposed VINS framework compared to standard approaches.",
"In this paper, we propose a novel robocentric formulation of the visual-inertial navigation system (VINS) within a sliding-window filtering framework and design an efficient, lightweight, robocentric visual-inertial odometry (R-VIO) algorithm for consistent motion tracking even in challenging environments using only a monocular camera and a 6-axis IMU. The key idea is to deliberately reformulate the VINS with respect to a moving local frame, rather than a fixed global frame of reference as in the standard world-centric VINS, in order to obtain relative motion estimates of higher accuracy for updating global poses. As an immediate advantage of this robocentric formulation, the proposed R-VIO can start from an arbitrary pose, without the need to align the initial orientation with the global gravitational direction. More importantly, we analytically show that the linearized robocentric VINS does not undergo the observability mismatch issue as in the standard world-centric counterpart which was identified in the literature as the main cause of estimation inconsistency. Additionally, we investigate in-depth the special motions that degrade the performance in the world-centric formulation and show that such degenerate cases can be easily compensated in the proposed robocentric formulation, without resorting to additional sensors as in the world-centric formulation, thus leading to better robustness. The proposed R-VIO algorithm has been extensively tested through both Monte Carlo simulations and real-world experiments with different sensor platforms navigating in different environments, and shown to achieve better (or competitive at least) performance than the state-of-the-art VINS, in terms of consistency, accuracy and efficiency.",
"One camera and one low-cost inertial measurement unit (IMU) form a monocular visual-inertial system (VINS), which is the minimum sensor suite (in size, weight, and power) for the metric six degrees-of-freedom (DOF) state estimation. In this paper, we present VINS-Mono: a robust and versatile monocular visual-inertial state estimator. Our approach starts with a robust procedure for estimator initialization. A tightly coupled, nonlinear optimization-based method is used to obtain highly accurate visual-inertial odometry by fusing preintegrated IMU measurements and feature observations. A loop detection module, in combination with our tightly coupled formulation, enables relocalization with minimum computation. We additionally perform 4-DOF pose graph optimization to enforce the global consistency. Furthermore, the proposed system can reuse a map by saving and loading it in an efficient way. The current and previous maps can be merged together by the global pose graph optimization. We validate the performance of our system on public datasets and real-world experiments and compare against other state-of-the-art algorithms. We also perform an onboard closed-loop autonomous flight on the microaerial-vehicle platform and port the algorithm to an iOS-based demonstration. We highlight that the proposed work is a reliable, complete, and versatile system that is applicable for different applications that require high accuracy in localization. We open source our implementations for both PCs ( https: github.com HKUST-Aerial-Robotics VINS-Mono ) and iOS mobile devices ( https: github.com HKUST-Aerial-Robotics VINS-Mobile ).",
"In this paper, we study estimator inconsistency in vision-aided inertial navigation systems (VINS) from the standpoint of system's observability. We postulate that a leading cause of inconsistency is the gain of spurious information along unobservable directions, which results in smaller uncertainties, larger estimation errors, and divergence. We develop an observability constrained VINS (OC-VINS), which explicitly enforces the unobservable directions of the system, hence preventing spurious information gain and reducing inconsistency. This framework is applicable to several variants of the VINS problem such as visual simultaneous localization and mapping (V-SLAM), as well as visual-inertial odometry using the multi-state constraint Kalman filter (MSC-KF). Our analysis, along with the proposed method to reduce inconsistency, are extensively validated with simulation trials and real-world experimentation.",
"This paper discusses the recursive Bayesian formulation of the simultaneous localization and mapping (SLAM) problem in which probability distributions or estimates of absolute or relative locations of landmarks and vehicle pose are obtained. The paper focuses on three key areas: computational complexity; data association; and environment representation",
"United States. Office of Naval Research (N00014-12-1-0093, N00014-10-1-0936, N00014-11-1-0688, and N00014-13-1-0588)",
"This paper focuses on the problem of real-time pose tracking using visual and inertial sensors in systems with limited processing power. Our main contribution is a novel approach to the design of estimators for these systems, which optimally utilizes the available resources. Specifically, we design a hybrid estimator that integrates two algorithms with complementary computational characteristics, namely a sliding-window EKF and EKF-SLAM. To decide which algorithm is best suited to process each of the available features at runtime, we learn the distribution of the feature number and of the lengths of the feature tracks. We show that using this information, we can predict the expected computational cost of each feature-allocation policy, and formulate an objective function whose minimization determines the optimal way to process the feature data. Our results demonstrate that the hybrid algorithm outperforms each individual method (EKF-SLAM and sliding-window EKF) by a wide margin, and allows processing the sensor data at real-time speed on the processor of a mobile phone.",
"Simultaneous localization and mapping (SLAM) consists in the concurrent construction of a model of the environment (the map ), and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM and consider future directions. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and tutorial to those who are users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues, that still deserve careful scientific investigation. The paper also contains the authors’ take on two questions that often animate discussions during robotics conferences: Do robots need SLAM? and Is SLAM solved?",
"",
"In this paper, we focus on the problem of motion tracking in unknown environments using visual and inertial sensors. We term this estimation task visual-inertial odometry (VIO), in analogy to the well-known visual-odometry problem. We present a detailed study of extended Kalman filter (EKF)-based VIO algorithms, by comparing both their theoretical properties and empirical performance. We show that an EKF formulation where the state vector comprises a sliding window of poses (the multi-state-constraint Kalman filter (MSCKF)) attains better accuracy, consistency, and computational efficiency than the simultaneous localization and mapping (SLAM) formulation of the EKF, in which the state vector contains the current pose and the features seen by the camera. Moreover, we prove that both types of EKF approaches are inconsistent, due to the way in which Jacobians are computed. Specifically, we show that the observability properties of the EKF's linearized system models do not match those of the underlying system, which causes the filters to underestimate the uncertainty in the state estimates. Based on our analysis, we propose a novel, real-time EKF-based VIO algorithm, which achieves consistent estimation by (i) ensuring the correct observability properties of its linearized system model, and (ii) performing online estimation of the camera-to-inertial measurement unit (IMU) calibration parameters. This algorithm, which we term MSCKF 2.0, is shown to achieve accuracy and consistency higher than even an iterative, sliding-window fixed-lag smoother, in both Monte Carlo simulations and real-world testing."
]
} |
1903.08385 | 2924397824 | Compact convolutional neural networks gain efficiency mainly through depthwise convolutions, expanded channels and complex topologies, which contrarily aggravate the training efforts. In this work, we identify the shift problem occurs in even-sized kernel (2x2, 4x4) convolutions, and eliminate it by proposing symmetric padding on each side of the feature maps (C2sp, C4sp). Symmetric padding enlarges the receptive fields of even-sized kernels with little computational cost. In classification tasks, C2sp outperforms the conventional 3x3 convolution and obtains comparable accuracies to existing compact convolution blocks, but consumes less memory and time during training. In generation tasks, C2sp and C4sp both achieve improved image qualities and stabilized training. Symmetric padding coupled with even-sized convolution is easy to be implemented into deep learning frameworks, providing promising building units for architecture designs that emphasize training efforts on online and continual learning occasions. | Since even-sized kernels are integer multiples of 2, they are mostly applied together with stride 2 to rescale images, which can avoid the checkerboard artifact @cite_35 . For example, the shallower models of @cite_34 use 4 @math 4 kernels and stride 2 in the discriminators and generators to down-sample and up-sample images. However, 3 @math 3 kernel is mostly preferred when it comes to deep and large-scale GANs @cite_40 @cite_32 @cite_29 . In segmentation tasks, U-Net @cite_41 and its followers use 2 @math 2 kernels to up-sample images. Except for scaling, very few works have implemented even-sized kernels as basic building blocks for their CNN models. In relational reinforcement learning @cite_7 , two C2 layers are adopted to achieve reasoning and planning of objects represented by 4 pixels. It is discussed that factorizing a C3 into two C2s only provides 11 | {
"cite_N": [
"@cite_35",
"@cite_7",
"@cite_41",
"@cite_29",
"@cite_32",
"@cite_40",
"@cite_34"
],
"mid": [
"2535388113",
"2807340089",
"2952232639",
"2843598537",
"2962760235",
"",
"2963836885"
],
"abstract": [
"",
"We introduce an approach for deep reinforcement learning (RL) that improves upon the efficiency, generalization capacity, and interpretability of conventional approaches through structured perception and relational reasoning. It uses self-attention to iteratively reason about the relations between entities in a scene and to guide a model-free policy. Our results show that in a novel navigation and planning task called Box-World, our agent finds interpretable solutions that improve upon baselines in terms of sample complexity, ability to generalize to more complex scenes than experienced during training, and overall performance. In the StarCraft II Learning Environment, our agent achieves state-of-the-art performance on six mini-games -- surpassing human grandmaster performance on four. By considering architectural inductive biases, our work opens new directions for overcoming important, but stubborn, challenges in deep RL.",
"There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at this http URL .",
"Generative adversarial networks (GANs) are a class of deep generative models which aim to learn a target distribution in an unsupervised fashion. While they were successfully applied to many problems, training a GAN is a notoriously challenging task and requires a significant amount of hyperparameter tuning, neural architecture engineering, and a non-trivial amount of \"tricks\". The success in many practical applications coupled with the lack of a measure to quantify the failure modes of GANs resulted in a plethora of proposed losses, regularization and normalization schemes, and neural architectures. In this work we take a sober view of the current state of GANs from a practical perspective. We reproduce the current state of the art and go beyond fairly exploring the GAN landscape. We discuss common pitfalls and reproducibility issues, open-source our code on Github, and provide pre-trained models on TensorFlow Hub.",
"We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.",
"",
"One of the challenges in the study of generative adversarial networks is the instability of its training. In this paper, we propose a novel weight normalization technique called spectral normalization to stabilize the training of the discriminator. Our new normalization technique is computationally light and easy to incorporate into existing implementations. We tested the efficacy of spectral normalization on CIFAR10, STL-10, and ILSVRC2012 dataset, and we experimentally confirmed that spectrally normalized GANs (SN-GANs) is capable of generating images of better or equal quality relative to the previous training stabilization techniques."
]
} |
1903.08385 | 2924397824 | Compact convolutional neural networks gain efficiency mainly through depthwise convolutions, expanded channels and complex topologies, which contrarily aggravate the training efforts. In this work, we identify the shift problem occurs in even-sized kernel (2x2, 4x4) convolutions, and eliminate it by proposing symmetric padding on each side of the feature maps (C2sp, C4sp). Symmetric padding enlarges the receptive fields of even-sized kernels with little computational cost. In classification tasks, C2sp outperforms the conventional 3x3 convolution and obtains comparable accuracies to existing compact convolution blocks, but consumes less memory and time during training. In generation tasks, C2sp and C4sp both achieve improved image qualities and stabilized training. Symmetric padding coupled with even-sized convolution is easy to be implemented into deep learning frameworks, providing promising building units for architecture designs that emphasize training efforts on online and continual learning occasions. | In @cite_33 , the authors randomly shift 3 @math 3 kernels in the down-sampling layers (strided convolution or pooling) to reduce information loss. It can be viewed as a feature augmentation method applied during training. ShiftNet @cite_30 sidesteps spatial convolutions entirely by using shift operations followed with pointwise convolutions. Though shift kernels contain no parameter or FLOP, ShiftNet expands too many FMs and is not such efficient regarding their evaluations. Deformable convolution @cite_23 augments the spatial sampling locations of kernels by additional 2D offsets, and learning the offsets directly from the target tasks. Therefore, deformable kernels shift at pixel level and are more focused on geometric transformations. | {
"cite_N": [
"@cite_30",
"@cite_33",
"@cite_23"
],
"mid": [
"2963844898",
"2741127129",
"2601564443"
],
"abstract": [
"Neural networks rely on convolutions to aggregate spatial information. However, spatial convolutions are expensive in terms of model size and computation, both of which grow quadratically with respect to kernel size. In this paper, we present a parameter-free, FLOP-free \"shift\" operation as an alternative to spatial convolutions. We fuse shifts and point-wise convolutions to construct end-to-end trainable shift-based modules, with a hyperparameter characterizing the tradeoff between accuracy and efficiency. To demonstrate the operation's efficacy, we replace ResNet's 3x3 convolutions with shift-based modules for improved CIFAR10 and CIFAR100 accuracy using 60 fewer parameters; we additionally demonstrate the operation's resilience to parameter reduction on ImageNet, outperforming ResNet family members. We finally show the shift operation's applicability across domains, achieving strong performance with fewer parameters on image classification, face verification and style transfer.",
"Down-sampling is widely adopted in deep convolutional neural networks (DCNN) for reducing the number of network parameters while preserving the transformation invariance. However, it cannot utilize information effectively because it only adopts a fixed stride strategy, which may result in poor generalization ability and information loss. In this paper, we propose a novel random strategy to alleviate these problems by embedding random shifting in the down-sampling layers during the training process. Random shifting can be universally applied to diverse DCNN models to dynamically adjust receptive fields by shifting kernel centers on feature maps in different directions. Thus, it can generate more robust features in networks and further enhance the transformation invariance of down-sampling operators. In addition, random shifting cannot only be integrated in all down-sampling layers including strided convolutional layers and pooling layers, but also improve performance of DCNN with negligible additional computational cost. We evaluate our method in different tasks (e.g., image classification and segmentation) with various network architectures (i.e., AlexNet, FCN and DFNMR). Experimental results demonstrate the effectiveness of our proposed method.",
"Convolutional neural networks (CNNs) are inherently limited to model geometric transformations due to the fixed geometric structures in their building modules. In this work, we introduce two new modules to enhance the transformation modeling capability of CNNs, namely, deformable convolution and deformable RoI pooling. Both are based on the idea of augmenting the spatial sampling locations in the modules with additional offsets and learning the offsets from the target tasks, without additional supervision. The new modules can readily replace their plain counterparts in existing CNNs and can be easily trained end-to-end by standard back-propagation, giving rise to deformable convolutional networks. Extensive experiments validate the performance of our approach. For the first time, we show that learning dense spatial transformation in deep CNNs is effective for sophisticated vision tasks such as object detection and semantic segmentation. The code is released at https: github.com msracver Deformable-ConvNets."
]
} |
1903.08385 | 2924397824 | Compact convolutional neural networks gain efficiency mainly through depthwise convolutions, expanded channels and complex topologies, which contrarily aggravate the training efforts. In this work, we identify the shift problem occurs in even-sized kernel (2x2, 4x4) convolutions, and eliminate it by proposing symmetric padding on each side of the feature maps (C2sp, C4sp). Symmetric padding enlarges the receptive fields of even-sized kernels with little computational cost. In classification tasks, C2sp outperforms the conventional 3x3 convolution and obtains comparable accuracies to existing compact convolution blocks, but consumes less memory and time during training. In generation tasks, C2sp and C4sp both achieve improved image qualities and stabilized training. Symmetric padding coupled with even-sized convolution is easy to be implemented into deep learning frameworks, providing promising building units for architecture designs that emphasize training efforts on online and continual learning occasions. | Depthwise convolution (DWConv) is an extreme case of group convolution in which the number of groups is equal to the number of channels. In practice, DWConv is usually coupled with a pointwise convolution, named the depthwise-separable convolution @cite_12 . The fundamental hypothesis behind this is that spatial and channel correlations can be sufficiently decoupled and realized separately. The overhead of DWConv is only linear in the channel number, so an inverted bottleneck @cite_17 further improves accuracy by expanding channels. DWConv and its extensions have now become popular components for compact CNNs @cite_4 @cite_11 @cite_0 . | {
"cite_N": [
"@cite_4",
"@cite_17",
"@cite_0",
"@cite_12",
"@cite_11"
],
"mid": [
"2883780447",
"2963163009",
"2785430118",
"2531409750",
"2964081807"
],
"abstract": [
"Currently, the neural network architecture design is mostly guided by the indirect metric of computation complexity, i.e., FLOPs. However, the direct metric, e.g., speed, also depends on the other factors such as memory access cost and platform characterics. Thus, this work proposes to evaluate the direct metric on the target platform, beyond only considering FLOPs. Based on a series of controlled experiments, this work derives several practical guidelines for efficient network design. Accordingly, a new architecture is presented, called ShuffleNet V2. Comprehensive ablation experiments verify that our model is the state-of-the-art in terms of speed and accuracy tradeoff.",
"In this paper we describe a new mobile architecture, MobileNetV2, that improves the state of the art performance of mobile models on multiple tasks and benchmarks as well as across a spectrum of different model sizes. We also describe efficient ways of applying these mobile models to object detection in a novel framework we call SSDLite. Additionally, we demonstrate how to build mobile semantic segmentation models through a reduced form of DeepLabv3 which we call Mobile DeepLabv3. is based on an inverted residual structure where the shortcut connections are between the thin bottleneck layers. The intermediate expansion layer uses lightweight depthwise convolutions to filter features as a source of non-linearity. Additionally, we find that it is important to remove non-linearities in the narrow layers in order to maintain representational power. We demonstrate that this improves performance and provide an intuition that led to this design. Finally, our approach allows decoupling of the input output domains from the expressiveness of the transformation, which provides a convenient framework for further analysis. We measure our performance on ImageNet [1] classification, COCO object detection [2], VOC image segmentation [3]. We evaluate the trade-offs between accuracy, and number of operations measured by multiply-adds (MAdd), as well as actual latency, and the number of parameters.",
"The effort devoted to hand-crafting image classifiers has motivated the use of architecture search to discover them automatically. Reinforcement learning and evolution have both shown promise for this purpose. This study introduces a regularized version of a popular asynchronous evolutionary algorithm. We rigorously compare it to the non-regularized form and to a highly-successful reinforcement learning baseline. Using the same hardware, compute effort and neural network training code, we conduct repeated experiments side-by-side, exploring different datasets, search spaces and scales. We show regularized evolution consistently produces models with similar or higher accuracy, across a variety of contexts without need for re-tuning parameters. In addition, regularized evolution exhibits considerably better performance than reinforcement learning at early search stages, suggesting it may be the better choice when fewer compute resources are available. This constitutes the first controlled comparison of the two search algorithms in this context. Finally, we present new architectures discovered with regularized evolution that we nickname AmoebaNets. These models set a new state of the art for CIFAR-10 (mean test error = 2.13 ) and mobile-size ImageNet (top-5 accuracy = 92.1 with 5.06M parameters), and reach the current state of the art for ImageNet (top-5 accuracy = 96.2 ).",
"We present an interpretation of Inception modules in convolutional neural networks as being an intermediate step in-between regular convolution and the depthwise separable convolution operation (a depthwise convolution followed by a pointwise convolution). In this light, a depthwise separable convolution can be understood as an Inception module with a maximally large number of towers. This observation leads us to propose a novel deep convolutional neural network architecture inspired by Inception, where Inception modules have been replaced with depthwise separable convolutions. We show that this architecture, dubbed Xception, slightly outperforms Inception V3 on the ImageNet dataset (which Inception V3 was designed for), and significantly outperforms Inception V3 on a larger image classification dataset comprising 350 million images and 17,000 classes. Since the Xception architecture has the same number of parameters as Inception V3, the performance gains are not due to increased capacity but rather to a more efficient use of model parameters.",
"Developing neural network image classification models often requires significant architecture engineering. In this paper, we study a method to learn the model architectures directly on the dataset of interest. As this approach is expensive when the dataset is large, we propose to search for an architectural building block on a small dataset and then transfer the block to a larger dataset. The key contribution of this work is the design of a new search space (which we call the \"NASNet search space\") which enables transferability. In our experiments, we search for the best convolutional layer (or \"cell\") on the CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking together more copies of this cell, each with their own parameters to design a convolutional architecture, which we name a \"NASNet architecture\". We also introduce a new regularization technique called ScheduledDropPath that significantly improves generalization in the NASNet models. On CIFAR-10 itself, a NASNet found by our method achieves 2.4 error rate, which is state-of-the-art. Although the cell is not searched for directly on ImageNet, a NASNet constructed from the best cell achieves, among the published works, state-of-the-art accuracy of 82.7 top-1 and 96.2 top-5 on ImageNet. Our model is 1.2 better in top-1 accuracy than the best human-invented architectures while having 9 billion fewer FLOPS - a reduction of 28 in computational demand from the previous state-of-the-art model. When evaluated at different levels of computational cost, accuracies of NASNets exceed those of the state-of-the-art human-designed models. For instance, a small version of NASNet also achieves 74 top-1 accuracy, which is 3.1 better than equivalently-sized, state-of-the-art models for mobile platforms. Finally, the image features learned from image classification are generically useful and can be transferred to other computer vision problems. 
On the task of object detection, the learned features by NASNet used with the Faster-RCNN framework surpass state-of-the-art by 4.0 achieving 43.1 mAP on the COCO dataset."
]
} |
1903.08289 | 2924355039 | Online reviews have become a vital source of information in purchasing a service (product). Opinion spammers manipulate reviews, affecting the overall perception of the service. A key challenge in detecting opinion spam is obtaining ground truth. Though there exists a large set of reviews online, only a few of them have been labeled spam or non-spam. In this paper, we propose spamGAN, a generative adversarial network which relies on limited set of labeled data as well as unlabeled data for opinion spam detection. spamGAN improves the state-of-the-art GAN based techniques for text classification. Experiments on TripAdvisor dataset show that spamGAN outperforms existing spam detection techniques when limited labeled data is used. Apart from detecting spam reviews, spamGAN can also generate reviews with reasonable perplexity. | Most existing opinion spam detection techniques are supervised methods based on pre-defined features. @cite_14 used logistic regression with product, review and reviewer-centric features. @cite_21 used n-gram features to train a Naive Bayes and SVM classifier. @cite_6 @cite_24 @cite_18 used part-of-speech tags and context-free grammar parse trees, behavioral features, and spatio-temporal features, respectively. @cite_25 @cite_27 used graph-based algorithms. | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_21",
"@cite_6",
"@cite_24",
"@cite_27",
"@cite_25"
],
"mid": [
"2192783609",
"2047756776",
"2161283199",
"2124637344",
"",
"2282288858",
"2112213600"
],
"abstract": [
"Although opinion spam (or fake review) detection has attracted significant research attention in recent years, the problem is far from solved. One key reason is that there is no large-scale ground truth labeled dataset available for model building. Some review hosting sites such as Yelp.com and Dianping.com have built fake review filtering systems to ensure the quality of their reviews, but their algorithms are trade secrets. Working with Dianping, we present the first large-scale analysis of restaurant reviews filtered by Dianping's fake review filtering system. Along with the analysis, we also propose some novel temporal and spatial features for supervised opinion spam detection. Our results show that these features significantly outperform existing state-of-art features.",
"Evaluative texts on the Web have become a valuable source of opinions on products, services, events, individuals, etc. Recently, many researchers have studied such opinion sources as product reviews, forum posts, and blogs. However, existing research has been focused on classification and summarization of opinions using natural language processing and data mining techniques. An important issue that has been neglected so far is opinion spam or trustworthiness of online opinions. In this paper, we study this issue in the context of product reviews, which are opinion rich and are widely used by consumers and product manufacturers. In the past two years, several startup companies also appeared which aggregate opinions from product reviews. It is thus high time to study spam in reviews. To the best of our knowledge, there is still no published study on this topic, although Web spam and email spam have been investigated extensively. We will see that opinion spam is quite different from Web spam and email spam, and thus requires different detection techniques. Based on the analysis of 5.8 million reviews and 2.14 million reviewers from amazon.com, we show that opinion spam in reviews is widespread. This paper analyzes such spam activities and presents some novel techniques to detect them",
"Consumers increasingly rate, review and research products online (Jansen, 2010; , 2008). Consequently, websites containing consumer reviews are becoming targets of opinion spam. While recent work has focused primarily on manually identifiable instances of opinion spam, in this work we study deceptive opinion spam---fictitious opinions that have been deliberately written to sound authentic. Integrating work from psychology and computational linguistics, we develop and compare three approaches to detecting deceptive opinion spam, and ultimately develop a classifier that is nearly 90 accurate on our gold-standard opinion spam dataset. Based on feature analysis of our learned models, we additionally make several theoretical contributions, including revealing a relationship between deceptive opinions and imaginative writing.",
"Most previous studies in computerized deception detection have relied only on shallow lexico-syntactic patterns. This paper investigates syntactic stylometry for deception detection, adding a somewhat unconventional angle to prior literature. Over four different datasets spanning from the product review to the essay domain, we demonstrate that features driven from Context Free Grammar (CFG) parse trees consistently improve the detection performance over several baselines that are based only on shallow lexico-syntactic features. Our results improve the best published result on the hotel review data (, 2011) reaching 91.2 accuracy with 14 error reduction.",
"",
"User-generated online reviews can play a significant role in the success of retail products, hotels, restaurants, etc. However,review systems are often targeted by opinion spammers who seek to distort the perceived quality of a product by creating fraudulent reviews. We propose a fast and effective framework, FRAUDEAGLE, for spotting fraudsters and fake reviews in online review datasets. Our method has several advantages: (1) it exploits the network effect among reviewers and products, unlike the vast majority of existing methods that focus on review text or behavioral analysis, (2) it consists of two complementary steps; scoring users and reviews for fraud detection, and grouping for visualization and sensemaking, (3) it operates in a completely unsupervised fashion requiring no labeled data, while still incorporating side information if available, and (4) it is scalable to large datasets as its run time grows linearly with network size. We demonstrate the effectiveness of our framework on syntheticand real datasets; where FRAUDEAGLE successfully reveals fraud-bots in a large online app review database.",
"Online reviews provide valuable information about products and services to consumers. However, spammers are joining the community trying to mislead readers by writing fake reviews. Previous attempts for spammer detection used reviewers' behaviors, text similarity, linguistics features and rating patterns. Those studies are able to identify certain types of spammers, e.g., those who post many similar reviews about one target entity. However, in reality, there are other kinds of spammers who can manipulate their behaviors to act just like genuine reviewers, and thus cannot be detected by the available techniques. In this paper, we propose a novel concept of a heterogeneous review graph to capture the relationships among reviewers, reviews and stores that the reviewers have reviewed. We explore how interactions between nodes in this graph can reveal the cause of spam and propose an iterative model to identify suspicious reviewers. This is the first time such intricate relationships have been identified for review spam detection. We also develop an effective computation method to quantify the trustiness of reviewers, the honesty of reviews, and the reliability of stores. Different from existing approaches, we don't use review text information. Our model is thus complementary to existing approaches and able to find more difficult and subtle spamming activities, which are agreed upon by human judges after they evaluate our results."
]
} |
1903.08289 | 2924355039 | Online reviews have become a vital source of information in purchasing a service (product). Opinion spammers manipulate reviews, affecting the overall perception of the service. A key challenge in detecting opinion spam is obtaining ground truth. Though there exists a large set of reviews online, only a few of them have been labeled spam or non-spam. In this paper, we propose spamGAN, a generative adversarial network which relies on limited set of labeled data as well as unlabeled data for opinion spam detection. spamGAN improves the state-of-the-art GAN based techniques for text classification. Experiments on TripAdvisor dataset show that spamGAN outperforms existing spam detection techniques when limited labeled data is used. Apart from detecting spam reviews, spamGAN can also generate reviews with reasonable perplexity. | Neural network methods for spam detection consider the reviews as input without specific feature extraction. GRNN @cite_23 used a gated recurrent neural network to study the contextual information of review sentences. DRI-RCNN @cite_12 used a recurrent network for learning the contextual information of the words in the reviews. DRI-RCNN extends RCNN @cite_16 by learning embedding vectors with respect to both spam and non-spam labels for the words in the reviews. Since RCNN and DRI-RCNN use neural networks for spam classification, we will use these supervised methods for comparison in our experiments. | {
"cite_N": [
"@cite_16",
"@cite_12",
"@cite_23"
],
"mid": [
"2265846598",
"2794855269",
"2569238137"
],
"abstract": [
"Text classification is a foundational task in many NLP applications. Traditional text classifiers often rely on many human-designed features, such as dictionaries, knowledge bases and special tree kernels. In contrast to traditional methods, we introduce a recurrent convolutional neural network for text classification without human-designed features. In our model, we apply a recurrent structure to capture contextual information as far as possible when learning word representations, which may introduce considerably less noise compared to traditional window-based neural networks. We also employ a max-pooling layer that automatically judges which words play key roles in text classification to capture the key components in texts. We conduct experiments on four commonly used datasets. The experimental results show that the proposed method outperforms the state-of-the-art methods on several datasets, particularly on document-level datasets.",
"Abstract With the widespread of deceptive opinions in the Internet, how to identify online deceptive reviews automatically has become an attractive topic in research field. Traditional methods concentrate on extracting different features from online reviews and training machine learning classifiers to produce models to decide whether an incoming review is deceptive or not. This paper proposes an approach called DRI-RCNN (Deceptive Review Identification by Recurrent Convolutional Neural Network) to identify deceptive reviews by using word contexts and deep learning. The basic idea is that since deceptive reviews and truthful reviews are written by writers without and with real experience respectively, the writers of the reviews should have different contextual knowledge on their target objectives under description. In order to differentiate the deceptive and truthful contextual knowledge embodied in the online reviews, we represent each word in a review with six components as a recurrent convolutional vector. The first and second components are two numerical word vectors derived from training deceptive and truthful reviews, respectively. The third and fourth components are left neighboring deceptive and truthful context vectors derived by training a recurrent convolutional neural network on context vectors and word vectors of left words. The fifth and six components are right neighboring deceptive and truthful context vectors of right words. Further, we employ max-pooling and ReLU (Rectified Linear Unit) filter to transfer recurrent convolutional vectors of words in a review to a review vector by extracting positive maximum feature elements in recurrent convolutional vectors of words in the review. Experiment results on the spam dataset and the deception dataset demonstrate that the proposed DRI-RCNN approach outperforms the state-of-the-art techniques in deceptive review identification.",
"The products reviews are increasingly used by individuals and organizations for purchase and business decisions. Driven by the desire of profit, spammers produce synthesized reviews to promote some products or demote competitors products. So deceptive opinion spam detection has attracted significant attention from both business and research communities in recent years. Existing approaches mainly focus on traditional discrete features, which are based on linguistic and psychological cues. However, these methods fail to encode the semantic meaning of a document from the discourse perspective, which limits the performance. In this work, we empirically explore a neural network model to learn document-level representation for detecting deceptive opinion spam. First, the model learns sentence representation with convolutional neural network. Then, sentence representations are combined using a gated recurrent neural network, which can model discourse information and yield a document vector. Finally, the document representations are directly used as features to identify deceptive opinion spam. Based on three domains datasets, the results on in-domain and cross-domain experiments show that our proposed method outperforms state-of-the-art methods."
]
} |
1903.08289 | 2924355039 | Online reviews have become a vital source of information in purchasing a service (product). Opinion spammers manipulate reviews, affecting the overall perception of the service. A key challenge in detecting opinion spam is obtaining ground truth. Though there exists a large set of reviews online, only a few of them have been labeled spam or non-spam. In this paper, we propose spamGAN, a generative adversarial network which relies on limited set of labeled data as well as unlabeled data for opinion spam detection. spamGAN improves the state-of-the-art GAN based techniques for text classification. Experiments on TripAdvisor dataset show that spamGAN outperforms existing spam detection techniques when limited labeled data is used. Apart from detecting spam reviews, spamGAN can also generate reviews with reasonable perplexity. | The ongoing research on GANs for text classification aims to address the drawbacks of GANs in generating sentences with respect to the gradients and the sparse rewards problem. SeqGAN @cite_20 addresses them by considering sequence generation as a reinforcement learning problem. Monte Carlo Tree Search (MCTS) is used to overcome the issue of sparse rewards; however, it is computationally intractable. StepGAN @cite_26 and MaskGAN @cite_28 use the actor-critic @cite_29 method to learn the rewards; however, MaskGAN is limited by the length of the sequence. Further, all of them focus on sentence generation. CSGAN @cite_10 deals with sentence classification, but it uses MCTS and character-level embeddings. spamGAN differs from CSGAN in using the actor-critic reinforcement learning method for sequence generation and word-level embeddings, suitable for longer sentences. | {
"cite_N": [
"@cite_26",
"@cite_28",
"@cite_29",
"@cite_10",
"@cite_20"
],
"mid": [
"2786039921",
"2784823820",
"",
"2792593939",
"2964268978"
],
"abstract": [
"Sequence generative adversarial networks (SeqGAN) have been used to improve conditional sequence generation tasks, for example, chit-chat dialogue generation. To stabilize the training of SeqGAN, Monte Carlo tree search (MCTS) or reward at every generation step (REGS) is used to evaluate the goodness of a generated subsequence. MCTS is computationally intensive, but the performance of REGS is worse than MCTS. In this paper, we propose stepwise GAN (StepGAN), in which the discriminator is modified to automatically assign scores quantifying the goodness of each subsequence at every generation step. StepGAN has significantly less computational costs than MCTS. We demonstrate that StepGAN outperforms previous GAN-based methods on both synthetic experiment and chit-chat dialogue generation.",
"Neural text generation models are often autoregressive language models or seq2seq models. These models generate text by sampling words sequentially, with each word conditioned on the previous word, and are state-of-the-art for several machine translation and summarization benchmarks. These benchmarks are often defined by validation perplexity even though this is not a direct measure of the quality of the generated text. Additionally, these models are typically trained via maxi- mum likelihood and teacher forcing. These methods are well-suited to optimizing perplexity but can result in poor sample quality since generating text requires conditioning on sequences of words that may have never been observed at training time. We propose to improve sample quality using Generative Adversarial Networks (GANs), which explicitly train the generator to produce high quality samples and have shown a lot of success in image generation. GANs were originally designed to output differentiable values, so discrete language generation is challenging for them. We claim that validation perplexity alone is not indicative of the quality of text generated by a model. We introduce an actor-critic conditional GAN that fills in missing text conditioned on the surrounding context. We show qualitatively and quantitatively, evidence that this produces more realistic conditional and unconditional text samples compared to a maximum likelihood trained model.",
"",
"The neural network model has been the fulcrum of the so-called AI revolution. Although very powerful for pattern-recognition tasks, however, the model has two main drawbacks: it tends to overfit when the training dataset is small, and it is unable to accurately capture category information when the class number is large. In this paper, we combine reinforcement learning, generative adversarial networks, and recurrent neural networks to build a new model, termed category sentence generative adversarial network (CS-GAN). Not only the proposed model is able to generate category sentences that enlarge the original dataset, but also it helps improve its generalization capability during supervised training. We evaluate the performance of CS-GAN for the task of sentiment analysis. Quantitative evaluation exhibits the accuracy improvement in polarity detection on a small dataset with high category information.",
"As a new way of training generative models, Generative Adversarial Net (GAN) that uses a discriminative model to guide the training of the generative model has enjoyed considerable success in generating real-valued data. However, it has limitations when the goal is for generating sequences of discrete tokens. A major reason lies in that the discrete outputs from the generative model make it difficult to pass the gradient update from the discriminative model to the generative model. Also, the discriminative model can only assess a complete sequence, while for a partially generated sequence, it is nontrivial to balance its current score and the future one once the entire sequence has been generated. In this paper, we propose a sequence generation framework, called SeqGAN, to solve the problems. Modeling the data generator as a stochastic policy in reinforcement learning (RL), SeqGAN bypasses the generator differentiation problem by directly performing gradient policy update. The RL reward signal comes from the GAN discriminator judged on a complete sequence, and is passed back to the intermediate state-action steps using Monte Carlo search. Extensive experiments on synthetic data and real-world tasks demonstrate significant improvements over strong baselines."
]
} |
1903.08294 | 2950467987 | ICMP timestamp request and response packets have been standardized for nearly 40 years, but have no modern practical application, having been superseded by NTP. However, ICMP timestamps are not deprecated, suggesting that while hosts must support them, little attention is paid to their implementation and use. In this work, we perform active measurements and find 2.2 million hosts on the Internet responding to ICMP timestamp requests from over 42,500 unique autonomous systems. We develop a methodology to classify timestamp responses, and find 13 distinct classes of behavior. Not only do these behaviors enable a new fingerprinting vector, some behaviors leak important information about the host e.g., OS, kernel version, and local timezone. | Several TCP/IP protocols utilize timestamps, and significant prior work has examined TCP timestamps in the context of fingerprinting @cite_22 . TCP timestamps have since been used to infer whether IPv4 and IPv6 server addresses map to the same physical machine in @cite_2 and combined with clock skew to identify server "siblings" on a large scale in @cite_16 . | {
"cite_N": [
"@cite_16",
"@cite_22",
"@cite_2"
],
"mid": [
"2594351066",
"2104599106",
"2118695882"
],
"abstract": [
"Linking the growing IPv6 deployment to existing IPv4 addresses is an interesting field of research, be it for network forensics, structural analysis, or reconnaissance. In this work, we focus on classifying pairs of server IPv6 and IPv4 addresses as siblings, i.e., running on the same machine. Our methodology leverages active measurements of TCP timestamps and other network characteristics, which we measure against a diverse ground truth of 682 hosts. We define and extract a set of features, including estimation of variable (opposed to constant) remote clock skew. On these features, we train a manually crafted algorithm as well as a machine-learned decision tree. By conducting several measurement runs and training in cross-validation rounds, we aim to create models that generalize well and do not overfit our training data. We find both models to exceed 99% precision in train and test performance. We validate scalability by classifying 149k siblings in a large-scale measurement of 371k sibling candidates. We argue that this methodology, thoroughly cross-validated and likely to generalize well, can aid comparative studies of IPv6 and IPv4 behavior in the Internet. Striving for applicability and replicability, we release ready-to-use source code and raw data from our study.",
"We introduce the area of remote physical device fingerprinting, or fingerprinting a physical device, as opposed to an operating system or class of devices, remotely, and without the fingerprinted device's known cooperation. We accomplish this goal by exploiting small, microscopic deviations in device hardware: clock skews. Our techniques do not require any modification to the fingerprinted devices. Our techniques report consistent measurements when the measurer is thousands of miles, multiple hops, and tens of milliseconds away from the fingerprinted device and when the fingerprinted device is connected to the Internet from different locations and via different access technologies. Further, one can apply our passive and semipassive techniques when the fingerprinted device is behind a NAT or firewall, and. also when the device's system time is maintained via NTP or SNTP. One can use our techniques to obtain information about whether two devices on the Internet, possibly shifted in time or IP addresses, are actually the same physical device. Example applications include: computer forensics; tracking, with some probability, a physical device as it connects to the Internet from different public access points; counting the number of devices behind a NAT even when the devices use constant or random IP IDs; remotely probing a block of addresses to determine if the addresses correspond to virtual hosts, e.g., as part of a virtual honeynet; and unanonymizing anonymized network traces.",
"We present, validate, and apply an active measurement technique that ascertains whether candidate IPv4 and IPv6 server addresses are “siblings,” i.e., assigned to the same physical machine. In contrast to prior efforts limited to passive monitoring, opportunistic measurements, or end-client populations, we propose an active methodology that generalizes to all TCP-reachable devices, including servers. Our method extends prior device fingerprinting techniques to improve their feasibility in modern environments, and uses them to support measurement-based detection of sibling interfaces. We validate our technique against a diverse set of 61 web servers with known sibling addresses and find it to be over 97% accurate with 99% precision. Finally, we apply the technique to characterize the top @math 6,400 Alexa IPv6-capable web domains, and discover that a DNS name in common does not imply that the corresponding IPv4 and IPv6 addresses are on the same machine, network, or even autonomous system. Understanding sibling and non-sibling relationships gives insight not only into IPv6 deployment and evolution, but also helps characterize the potential for correlated failures and susceptibility to certain attacks."
]
} |
1903.08294 | 2950467987 | ICMP timestamp request and response packets have been standardized for nearly 40 years, but have no modern practical application, having been superseded by NTP. However, ICMP timestamps are not deprecated, suggesting that while hosts must support them, little attention is paid to their implementation and use. In this work, we perform active measurements and find 2.2 million hosts on the Internet responding to ICMP timestamp requests from over 42,500 unique autonomous systems. We develop a methodology to classify timestamp responses, and find 13 distinct classes of behavior. Not only do these behaviors enable a new fingerprinting vector, some behaviors leak important information about the host e.g., OS, kernel version, and local timezone. | Buchholz and Tjaden leveraged timestamps in the context of forensic reconstruction and correlation @cite_7 . Similar to our results, they find a wide variety of clock behaviors. However, while they probe @math 8,000 web servers, we perform an Internet-wide survey including 2.2M hosts more than a decade later, and demonstrate novel fingerprinting and geolocation uses of timestamps. | {
"cite_N": [
"@cite_7"
],
"mid": [
"2129680786"
],
"abstract": [
"In this paper we describe the first large-scale, long-term study of how hosts connected to the Internet manage their clocks. This is important for forensic investigations when there is a need for correlation of events collected from disparate sources, as well as for the correlation of computer events to “real” time. We have sampled over 8000 web servers on the Internet on a regular basis for a period of over six months. We have found that only about 74% of the hosts we observed were within 10s of our reference time (UTC). The other hosts exhibited a large variety of different clock behaviors, some of which are explainable by existing clock models, some not, warranting further research in the area of forensic time and clock analysis."
]
} |
1903.08066 | 2972918064 | We propose a method of training quantization thresholds (TQT) for uniform symmetric quantizers using standard backpropagation and gradient descent. Contrary to prior work, we show that a careful analysis of the straight-through estimator for threshold gradients allows for a natural range-precision trade-off leading to better optima. Our quantizers are constrained to use power-of-2 scale-factors and per-tensor scaling of weights and activations to make it amenable for hardware implementations. We present analytical support for the general robustness of our methods and empirically validate them on various CNNs for ImageNet classification. We are able to achieve near-floating-point accuracy on traditionally difficult networks such as MobileNets with less than 5 epochs of quantized (8-bit) retraining. Finally, we present Graffitist, a framework that enables automatic quantization of TensorFlow graphs for TQT. Available at https: github.com Xilinx graffitist . | Historically, some of the earlier work in quantization looked at low bit-width weights, and in some cases, activations. BinaryNet @cite_1 proposed quantizing weights and activations to binary values of +1 and -1 and showed that weights could be trained using a straight-through estimator (STE) @cite_11 , where quantization is applied in the forward pass but approximated to a clipped identity function in the backward pass. XNOR-Nets @cite_28 uses a different network and binarization method with scale-factors based on the maximum per-channel activation values, to achieve better ImageNet performance. Ternary networks @cite_47 @cite_38 add another quantization level at 0 suggesting this helps significantly with accuracy. TTQ @cite_23 also suggest using a codebook to map the two non-zero quantization levels to arbitrary values that can be trained with gradient descent by averaging the gradients of the weights within each quantization bucket. | {
"cite_N": [
"@cite_38",
"@cite_28",
"@cite_1",
"@cite_23",
"@cite_47",
"@cite_11"
],
"mid": [
"",
"2300242332",
"2319920447",
"2796438033",
"",
"2242818861"
],
"abstract": [
"",
"We propose two efficient approximations to standard convolutional neural networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks, the filters are approximated with binary values resulting in 32× memory saving. In XNOR-Networks, both the filters and the input to convolutional layers are binary. XNOR-Networks approximate convolutions using primarily binary operations. This results in 58× faster convolutional operations (in terms of number of the high precision operations) and 32× memory savings. XNOR-Nets offer the possibility of running state-of-the-art networks on CPUs (rather than GPUs) in real-time. Our binary networks are simple, accurate, efficient, and work on challenging visual tasks. We evaluate our approach on the ImageNet classification task. The classification accuracy with a Binary-Weight-Network version of AlexNet is the same as the full-precision AlexNet. We compare our method with recent network binarization methods, BinaryConnect and BinaryNets, and outperform these methods by large margins on ImageNet, more than 16% in top-1 accuracy. Our code is available at: http: allenai.org plato xnornet.",
"We introduce a method to train Binarized Neural Networks (BNNs) - neural networks with binary weights and activations at run-time. At training-time the binary weights and activations are used for computing the parameters gradients. During the forward pass, BNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operations, which is expected to substantially improve power-efficiency. To validate the effectiveness of BNNs we conduct two sets of experiments on the Torch7 and Theano frameworks. On both, BNNs achieved nearly state-of-the-art results over the MNIST, CIFAR-10 and SVHN datasets. Last but not least, we wrote a binary matrix multiplication GPU kernel with which it is possible to run our MNIST BNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy. The code for training and running our BNNs is available on-line.",
"In this paper we describe a new mobile architecture, MobileNetV2, that improves the state of the art performance of mobile models on multiple tasks and benchmarks as well as across a spectrum of different model sizes. We also describe efficient ways of applying these mobile models to object detection in a novel framework we call SSDLite. Additionally, we demonstrate how to build mobile semantic segmentation models through a reduced form of DeepLabv3 which we call Mobile DeepLabv3. The MobileNetV2 architecture is based on an inverted residual structure where the input and output of the residual block are thin bottleneck layers opposite to traditional residual models which use expanded representations in the input. MobileNetV2 uses lightweight depthwise convolutions to filter features in the intermediate expansion layer. Additionally, we find that it is important to remove non-linearities in the narrow layers in order to maintain representational power. We demonstrate that this improves performance and provide an intuition that led to this design. Finally, our approach allows decoupling of the input/output domains from the expressiveness of the transformation, which provides a convenient framework for further analysis. We measure our performance on Imagenet classification, COCO object detection, VOC image segmentation. We evaluate the trade-offs between accuracy, and number of operations measured by multiply-adds (MAdd), as well as the number of parameters.",
"",
"Stochastic neurons and hard non-linearities can be useful for a number of reasons in deep learning models, but in many cases they pose a challenging problem: how to estimate the gradient of a loss function with respect to the input of such stochastic or non-smooth neurons? I.e., can we \"back-propagate\" through these stochastic neurons? We examine this question, existing approaches, and compare four families of solutions, applicable in different settings. One of them is the minimum variance unbiased gradient estimator for stochatic binary neurons (a special case of the REINFORCE algorithm). A second approach, introduced here, decomposes the operation of a binary stochastic neuron into a stochastic binary part and a smooth differentiable part, which approximates the expected effect of the pure stochatic binary neuron to first order. A third approach involves the injection of additive or multiplicative noise in a computational graph that is otherwise differentiable. A fourth approach heuristically copies the gradient with respect to the stochastic output directly as an estimator of the gradient with respect to the sigmoid argument (we call this the straight-through estimator). To explore a context where these estimators are useful, we consider a small-scale version of conditional computation , where sparse stochastic units form a distributed representation of gaters that can turn off in combinatorially many ways large chunks of the computation performed in the rest of the neural network. In this case, it is important that the gating units produce an actual 0 most of the time. The resulting sparsity can be potentially be exploited to greatly reduce the computational cost of large deep networks for which conditional computation would be useful."
]
} |
1903.08066 | 2972918064 | We propose a method of training quantization thresholds (TQT) for uniform symmetric quantizers using standard backpropagation and gradient descent. Contrary to prior work, we show that a careful analysis of the straight-through estimator for threshold gradients allows for a natural range-precision trade-off leading to better optima. Our quantizers are constrained to use power-of-2 scale-factors and per-tensor scaling of weights and activations to make it amenable for hardware implementations. We present analytical support for the general robustness of our methods and empirically validate them on various CNNs for ImageNet classification. We are able to achieve near-floating-point accuracy on traditionally difficult networks such as MobileNets with less than 5 epochs of quantized (8-bit) retraining. Finally, we present Graffitist, a framework that enables automatic quantization of TensorFlow graphs for TQT. Available at https: github.com Xilinx graffitist . | Continuing the trend for higher accuracy, researchers revisited multi-bit quantization. Initially, the quantization range tended to be fixed, such as in DoReFa-Net @cite_16 where elements are limited to [-1, 1] with the tanh function, or WRPN @cite_30 which limits weights to [-1, 1] and activations to [0, 1] even during floating point training. To improve accuracy further, researchers considered non-fixed quantization ranges. HWGQ @cite_15 uses ReLU activation and learns a clipping range by minimizing L2 loss of pre- and post-quantized values. PACT @cite_35 learns this ReLU clipping parameter (α) through gradient descent, using the gradient | {
"cite_N": [
"@cite_30",
"@cite_35",
"@cite_15",
"@cite_16"
],
"mid": [
"",
"2786771851",
"2586654419",
"2469490737"
],
"abstract": [
"",
"Deep learning algorithms achieve high classification accuracy at the expense of significant computation cost. To address this cost, a number of quantization schemes have been proposed - but most of these techniques focused on quantizing weights, which are relatively smaller in size compared to activations. This paper proposes a novel quantization scheme for activations during training - that enables neural networks to work well with ultra low precision weights and activations without any significant accuracy degradation. This technique, PArameterized Clipping acTivation (PACT), uses an activation clipping parameter α that is optimized during training to find the right quantization scale. PACT allows quantizing activations to arbitrary bit precisions, while achieving much better accuracy relative to published state-of-the-art quantization schemes. We show, for the first time, that both weights and activations can be quantized to 4-bits of precision while still achieving accuracy comparable to full precision networks across a range of popular models and datasets. We also show that exploiting these reduced-precision computational units in hardware can enable a super-linear improvement in inferencing performance due to a significant reduction in the area of accelerator compute engines coupled with the ability to retain the quantized model and activation data in on-chip memories.",
"The problem of quantizing the activations of a deep neural network is considered. An examination of the popular binary quantization approach shows that this consists of approximating a classical non-linearity, the hyperbolic tangent, by two functions: a piecewise constant sign function, which is used in feedforward network computations, and a piecewise linear hard tanh function, used in the backpropagation step during network learning. The problem of approximating the widely used ReLU non-linearity is then considered. An half-wave Gaussian quantizer (HWGQ) is proposed for forward approximation and shown to have efficient implementation, by exploiting the statistics of network activations and batch normalization operations. To overcome the problem of gradient mismatch, due to the use of different forward and backward approximations, several piece-wise backward approximators are then investigated. The implementation of the resulting quantized network, denoted as HWGQ-Net, is shown to achieve much closer performance to full precision networks, such as AlexNet, ResNet, GoogLeNet and VGG-Net, than previously available low-precision networks, with 1-bit binary weights and 2-bit quantized activations.",
"We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. As convolutions during forward/backward passes can now operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFa-Net opens the way to accelerate training of low bitwidth neural network on these hardware. Our experiments on SVHN and ImageNet datasets prove that DoReFa-Net can achieve comparable prediction accuracy as 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights, 2-bit activations, can be trained from scratch using 6-bit gradients to get 46.1% top-1 accuracy on ImageNet validation set. The DoReFa-Net AlexNet model is released publicly."
]
} |
1903.08066 | 2972918064 | We propose a method of training quantization thresholds (TQT) for uniform symmetric quantizers using standard backpropagation and gradient descent. Contrary to prior work, we show that a careful analysis of the straight-through estimator for threshold gradients allows for a natural range-precision trade-off leading to better optima. Our quantizers are constrained to use power-of-2 scale-factors and per-tensor scaling of weights and activations to make it amenable for hardware implementations. We present analytical support for the general robustness of our methods and empirically validate them on various CNNs for ImageNet classification. We are able to achieve near-floating-point accuracy on traditionally difficult networks such as MobileNets with less than 5 epochs of quantized (8-bit) retraining. Finally, we present Graffitist, a framework that enables automatic quantization of TensorFlow graphs for TQT. Available at https: github.com Xilinx graffitist . | derived using the STE. LQ-Nets @cite_44 achieve state-of-the-art accuracy through a custom quantization error minimization (QEM) algorithm and non-uniform quantization scheme with decision levels derived from a small binary encoding basis. QIL @cite_43 likewise introduces a custom quantization scheme based on trainable quantization intervals. | {
"cite_N": [
"@cite_44",
"@cite_43"
],
"mid": [
"2950458216",
"2954582488"
],
"abstract": [
"Although weight and activation quantization is an effective approach for Deep Neural Network (DNN) compression and has a lot of potentials to increase inference speed leveraging bit-operations, there is still a noticeable gap in terms of prediction accuracy between the quantized model and the full-precision model. To address this gap, we propose to jointly train a quantized, bit-operation-compatible DNN and its associated quantizers, as opposed to using fixed, handcrafted quantization schemes such as uniform or logarithmic quantization. Our method for learning the quantizers applies to both network weights and activations with arbitrary-bit precision, and our quantizers are easy to train. The comprehensive experiments on CIFAR-10 and ImageNet datasets show that our method works consistently well for various network structures such as AlexNet, VGG-Net, GoogLeNet, ResNet, and DenseNet, surpassing previous quantization methods in terms of accuracy by an appreciable margin. Code available at this https URL",
"Reducing bit-widths of activations and weights of deep networks makes it efficient to compute and store them in memory, which is crucial in their deployments to resource-limited devices, such as mobile phones. However, decreasing bit-widths with quantization generally yields drastically degraded accuracy. To tackle this problem, we propose to learn to quantize activations and weights via a trainable quantizer that transforms and discretizes them. Specifically, we parameterize the quantization intervals and obtain their optimal values by directly minimizing the task loss of the network. This quantization-interval-learning (QIL) allows the quantized networks to maintain the accuracy of the full-precision (32-bit) networks with bit-width as low as 4-bit and minimize the accuracy degeneration with further bit-width reduction (i.e., 3 and 2-bit). Moreover, our quantizer can be trained on a heterogeneous dataset, and thus can be used to quantize pretrained networks without access to their training data. We demonstrate the effectiveness of our trainable quantizer on ImageNet dataset with various network architectures such as ResNet-18, -34 and AlexNet, on which it outperforms existing methods to achieve the state-of-the-art accuracy."
]
} |
1903.08066 | 2972918064 | We propose a method of training quantization thresholds (TQT) for uniform symmetric quantizers using standard backpropagation and gradient descent. Contrary to prior work, we show that a careful analysis of the straight-through estimator for threshold gradients allows for a natural range-precision trade-off leading to better optima. Our quantizers are constrained to use power-of-2 scale-factors and per-tensor scaling of weights and activations to make it amenable for hardware implementations. We present analytical support for the general robustness of our methods and empirically validate them on various CNNs for ImageNet classification. We are able to achieve near-floating-point accuracy on traditionally difficult networks such as MobileNets with less than 5 epochs of quantized (8-bit) retraining. Finally, we present Graffitist, a framework that enables automatic quantization of TensorFlow graphs for TQT. Available at https: github.com Xilinx graffitist . | FAT @cite_8 does propose training the quantization thresholds through gradient descent while keeping the weights unchanged. They use an unlabeled dataset and train on a root-mean-square-error loss between the original and quantized networks. NICE @cite_22 starts with a clamping parameter (c_a) located @math standard deviations from the mean of the input distribution, and trains it using gradient descent on a derivative found using the STE in a formulation similar to . | {
"cite_N": [
"@cite_22",
"@cite_8"
],
"mid": [
"2893251308",
"2904908132"
],
"abstract": [
"Convolutional Neural Networks (CNN) are very popular in many fields including computer vision, speech recognition, natural language processing, to name a few. Though deep learning leads to groundbreaking performance in these domains, the networks used are very demanding computationally and are far from real-time even on a GPU, which is not power efficient and therefore does not suit low power systems such as mobile devices. To overcome this challenge, some solutions have been proposed for quantizing the weights and activations of these networks, which accelerate the runtime significantly. Yet, this acceleration comes at the cost of a larger error. The method proposed in this work trains quantized neural networks by noise injection and a learned clamping, which improve the accuracy. This leads to state-of-the-art results on various regression and classification tasks, e.g., ImageNet classification with architectures such as ResNet-18 34 50 with low as 3-bit weights and activations. We implement the proposed solution on an FPGA to demonstrate its applicability for low power real-time applications. The implementation of the paper is available at this https URL",
"Neural network quantization procedure is the necessary step for porting of neural networks to mobile devices. Quantization allows accelerating the inference, reducing memory consumption and model size. It can be performed without fine-tuning using calibration procedure (calculation of parameters necessary for quantization), or it is possible to train the network with quantization from scratch. Training with quantization from scratch on the labeled data is rather long and resource-consuming procedure. Quantization of network without fine-tuning leads to accuracy drop because of outliers which appear during the calibration. In this article we suggest to simplify the quantization procedure significantly by introducing the trained scale factors for quantization thresholds. It allows speeding up the process of quantization with fine-tuning up to 8 epochs as well as reducing the requirements to the set of train images. By our knowledge, the proposed method allowed us to get the first public available quantized version of MNAS without significant accuracy reduction - 74.8% vs 75.3% for original full-precision network. Model and code are ready for use and available at: this https URL."
]
} |
1903.08066 | 2972918064 | We propose a method of training quantization thresholds (TQT) for uniform symmetric quantizers using standard backpropagation and gradient descent. Contrary to prior work, we show that a careful analysis of the straight-through estimator for threshold gradients allows for a natural range-precision trade-off leading to better optima. Our quantizers are constrained to use power-of-2 scale-factors and per-tensor scaling of weights and activations to make it amenable for hardware implementations. We present analytical support for the general robustness of our methods and empirically validate them on various CNNs for ImageNet classification. We are able to achieve near-floating-point accuracy on traditionally difficult networks such as MobileNets with less than 5 epochs of quantized (8-bit) retraining. Finally, we present Graffitist, a framework that enables automatic quantization of TensorFlow graphs for TQT. Available at https: github.com Xilinx graffitist . | Independently of our work, IBM's LSQ @cite_31 found very similar gradient definition for the quantizer and uses backpropagation to train them. However, our works differ in several interesting ways. For example, they learn the scale-factors directly and do not restrict them to power-of-2. Besides the evident implications for accuracy and hardware implementation, we show in that this also has major implications for training stability due to scale dependence of learning rate. As a workaround to these stability issues, they require careful fine-tuning of hyperparameters and consequently retrain for 90 epochs compared to 5 epochs in our case. They also use high precision in the first and last layers to retain performance, as is common in the field. We suspect the high precision and lack of power-of-2 limitations allow for very high accuracy in their low bit-width experiments. Further, they do not explore quantization on more difficult networks such as MobileNets @cite_24 @cite_42 . 
We address these issues with a different gradient formulation in and justify it analytically in . | {
"cite_N": [
"@cite_24",
"@cite_31",
"@cite_42"
],
"mid": [
"2612445135",
"2916954108",
""
],
"abstract": [
"We present a class of efficient models called MobileNets for mobile and embedded vision applications. MobileNets are based on a streamlined architecture that uses depth-wise separable convolutions to build light weight deep neural networks. We introduce two simple global hyper-parameters that efficiently trade off between latency and accuracy. These hyper-parameters allow the model builder to choose the right sized model for their application based on the constraints of the problem. We present extensive experiments on resource and accuracy tradeoffs and show strong performance compared to other popular models on ImageNet classification. We then demonstrate the effectiveness of MobileNets across a wide range of applications and use cases including object detection, finegrain classification, face attributes and large scale geo-localization.",
"We present here Learned Step Size Quantization, a method for training deep networks such that they can run at inference time using low precision integer matrix multipliers, which offer power and space advantages over high precision alternatives. The essence of our approach is to learn the step size parameter of a uniform quantizer by backpropagation of the training loss, applying a scaling factor to its learning rate, and computing its associated loss gradient by ignoring the discontinuity present in the quantizer. This quantization approach can be applied to activations or weights, using different levels of precision as needed for a given system, and requiring only a simple modification of existing training code. As demonstrated on the ImageNet dataset, our approach achieves better accuracy than all previous published methods for creating quantized networks on several ResNet network architectures at 2-, 3- and 4-bits of precision.",
""
]
} |
1903.08072 | 2949400323 | Following recent advances in morphological neural networks, we propose to study in more depth how Max-plus operators can be exploited to define morphological units and how they behave when incorporated in layers of conventional neural networks. Besides showing that they can be easily implemented with modern machine learning frameworks, we confirm and extend the observation that a Max-plus layer can be used to select important filters and reduce redundancy in its previous layer, without incurring performance loss. Experimental results demonstrate that the filter selection strategy enabled by a Max-plus is highly efficient and robust, through which we successfully performed model pruning on different neural network architectures. We also point out that there is a close connection between Maxout networks and our pruned Max-plus networks by comparing their respective characteristics. The code for reproducing our experiments is available online. | were defined almost simultaneously in two different ways at the end of the 1980s @cite_15 @cite_4 . Davidson @cite_15 introduced neural units that can be seen as pure dilations or erosions, whereas Wilson @cite_4 focused on a more general formulation based on rank filters, in which @math and @math operators are two particular cases. Davidson's definition interprets morphological neurons as bounding boxes in the feature space @cite_7 @cite_11 @cite_27 . In the latter studies, networks were trained to perform perfectly on training sets after few iterations, but little attention was drawn to generalization. Only recently, a backpropagation-based algorithm was adopted and improved over constructive ones @cite_14 . Still, the “bounding-box” approach does not seem to generalize well to the test set when faced with high-dimensional problems like image analysis. | {
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_7",
"@cite_27",
"@cite_15",
"@cite_11"
],
"mid": [
"2613427450",
"",
"1966639131",
"2031888308",
"2073074703",
"2104622509"
],
"abstract": [
"Abstract Dendrite morphological neurons are a type of artificial neural network that works with min and max operators instead of algebraic products. These morphological operators build hyperboxes in N -dimensional space. These hyperboxes allow the proposal of training methods based on heuristics without using an optimisation method. In literature, it has been claimed that these heuristic-based trainings have advantages: there are no convergence problems, perfect classification can always be reached and training is performed in only one epoch. In this paper, we show that these assumed advantages come with a cost: these heuristics increase classification errors in the test set because they are not optimal and learning generalisation is poor. To solve these problems, we introduce a novel method to train dendrite morphological neurons based on stochastic gradient descent for classification tasks, using these heuristics just for initialisation of learning parameters. Experiments show that we can enhance the testing error in comparison with solely heuristic-based training methods. This approach can reach competitive performance with respect to other popular machine learning algorithms.",
"",
"The theory of artificial neural networks has been successfully applied to a wide variety of pattern recognition problems. In this paper we introduce a novel class of neural networks, called morphological neural networks, in which the operations of multiplication and addition are replaced by addition and maximum (or minimum), respectively. By taking the maximum (or minimum) of sums instead of the sum of products, morphological network computation is nonlinear before thresholding. As a consequence, the properties of morphological neural networks are drastically different than those of traditional neural network models. In this paper we consider some of these differences and examine the computing capabilities of morphological neural networks. As particular examples of a morphological neural network we discuss morphological associative memories and morphological perceptrons.",
"A morphological neural network is generally defined as a type of artificial neural network that performs an elementary operation of mathematical morphology at every node, possibly followed by the application of an activation function. The underlying framework of mathematical morphology can be found in lattice theory. With the advent of granular computing, lattice-based neurocomputing models such as morphological neural networks and fuzzy lattice neurocomputing models are becoming increasingly important since many information granules such as fuzzy sets and their extensions, intervals, and rough sets are lattice ordered. In this paper, we present the lattice-theoretical background and the learning algorithms for morphological perceptrons with competitive learning which arise by incorporating a winner-take-all output layer into the original morphological perceptron model. Several well-known classification problems that are available on the internet are used to compare our new model with a range of classifiers such as conventional multi-layer perceptrons, fuzzy lattice neurocomputing models, k-nearest neighbors, and decision trees.",
"The theory of classical artificial neural networks has been used to solve pattern recognition problems in image processing that is different from traditional pattern recognition approaches. In standard neural network theory, the first step in performing a neural network calculation involves the linear operation of multiplying neural values by their synaptic strengths and adding the results. Thresholding usually follows the linear operation in order to provide for non-linearity of the network. This paper presents the fundamental theory for a morphological neural network which, instead of multiplication and summation, uses the non-linear operation of addition and maximum. Several basic applications which are distinctly different from pattern recognition techniques are given, including a net which performs a sieving algorithm.© (1990) COPYRIGHT SPIE--The International Society for Optical Engineering. Downloading of the abstract is permitted for personal use only.",
"Recent advances in the biophysics of computation and neurocomputing models have brought to the foreground the importance of dendritic structures in a single neuron cell. Dendritic structures are now viewed as the primary autonomous computational units capable of realizing logical operations. By changing the classic simplified model of a single neuron with a more realistic one that incorporates the dendritic processes, a novel paradigm in artificial neural networks is being established. In this work, we introduce and develop a mathematical model of dendrite computation in a morphological neuron based on lattice algebra. The computational capabilities of this enriched neuron model are demonstrated by means of several illustrative examples and by proving that any single layer morphological perceptron endowed with dendrites and their corresponding input and output synaptic processes is able to approximate any compact region in higher dimensional Euclidean space to within any desired degree of accuracy. Based on this result, we describe a training algorithm for single layer morphological perceptrons and apply it to some well-known nonlinear problems in order to exhibit its performance."
]
} |
1903.08072 | 2949400323 | Following recent advances in morphological neural networks, we propose to study in more depth how Max-plus operators can be exploited to define morphological units and how they behave when incorporated in layers of conventional neural networks. Besides showing that they can be easily implemented with modern machine learning frameworks, we confirm and extend the observation that a Max-plus layer can be used to select important filters and reduce redundancy in its previous layer, without incurring performance loss. Experimental results demonstrate that the filter selection strategy enabled by a Max-plus layer is highly efficient and robust, through which we successfully performed model pruning on different neural network architectures. We also point out that there is a close connection between Maxout networks and our pruned Max-plus networks by comparing their respective characteristics. The code for reproducing our experiments is available online. | Various structured pruning methods were proposed to overcome this subtlety in practical applications by pruning at the level of channels or even layers. Filters with smaller @math norm were pruned based on a predefined pruning ratio for each layer in @cite_19 . Model pruning was transformed into an optimization problem in @cite_16 and the channels to remove were determined by minimizing the next-layer reconstruction error. A branch of algorithms in this category employs @math regularization to induce shrinkage and sparsity in model parameters. Sparsity constraints were imposed in @cite_5 on channel-wise scaling factors and pruning was based on their magnitude, while in @cite_13 group-sparsity was leveraged to learn compact CNNs via a combination of @math and @math regularization. One minor drawback of these regularization-based methods is that the training phase generally requires more iterations to converge. 
Our approach also falls into the structured pruning category and thus no dedicated hardware or libraries are required to achieve speedup, yet no regularization is imposed during model training. Moreover, in contrast to most existing pruning algorithms, our method does not need fine-tuning to regain performance. | {
"cite_N": [
"@cite_19",
"@cite_5",
"@cite_16",
"@cite_13"
],
"mid": [
"2962965870",
"2962851801",
"2964233199",
"2963000224"
],
"abstract": [
"The success of CNNs in various applications is accompanied by a significant increase in the computation and parameter storage costs. Recent efforts toward reducing these overheads involve pruning and compressing the weights of various layers without hurting original accuracy. However, magnitude-based pruning of weights reduces a significant number of parameters from the fully connected layers and may not adequately reduce the computation costs in the convolutional layers due to irregular sparsity in the pruned networks. We present an acceleration method for CNNs, where we prune filters from CNNs that are identified as having a small effect on the output accuracy. By removing whole filters in the network together with their connecting feature maps, the computation costs are reduced significantly. In contrast to pruning weights, this approach does not result in sparse connectivity patterns. Hence, it does not need the support of sparse convolution libraries and can work with existing efficient BLAS libraries for dense matrix multiplications. We show that even simple filter pruning techniques can reduce inference costs for VGG-16 by up to 34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the original accuracy by retraining the networks.",
"The deployment of deep convolutional neural networks (CNNs) in many real world applications is largely hindered by their high computational cost. In this paper, we propose a novel learning scheme for CNNs to simultaneously 1) reduce the model size; 2) decrease the run-time memory footprint; and 3) lower the number of computing operations, without compromising accuracy. This is achieved by enforcing channel-level sparsity in the network in a simple but effective way. Different from many existing approaches, the proposed method directly applies to modern CNN architectures, introduces minimum overhead to the training process, and requires no special software/hardware accelerators for the resulting models. We call our approach network slimming, which takes wide and large networks as input models, but during training insignificant channels are automatically identified and pruned afterwards, yielding thin and compact models with comparable accuracy. We empirically demonstrate the effectiveness of our approach with several state-of-the-art CNN models, including VGGNet, ResNet and DenseNet, on various image classification datasets. For VGGNet, a multi-pass version of network slimming gives a 20× reduction in model size and a 5× reduction in computing operations.",
"We propose an efficient and unified framework, namely ThiNet, to simultaneously accelerate and compress CNN models in both training and inference stages. We focus on the filter level pruning, i.e., the whole filter would be discarded if it is less important. Our method does not change the original network structure, thus it can be perfectly supported by any off-the-shelf deep learning libraries. We formally establish filter pruning as an optimization problem, and reveal that we need to prune filters based on statistics information computed from its next layer, not the current layer, which differentiates ThiNet from existing methods. Experimental results demonstrate the effectiveness of this strategy, which has advanced the state-of-the-art. We also show the performance of ThiNet on ILSVRC-12 benchmark. ThiNet achieves 3.31× FLOPs reduction and 16.63× compression on VGG-16, with only 0.52% top-5 accuracy drop. Similar experiments with ResNet-50 reveal that even for a compact network, ThiNet can also reduce more than half of the parameters and FLOPs, at the cost of roughly 1% top-5 accuracy drop. Moreover, the original VGG-16 model can be further pruned into a very small model with only 5.05MB model size, preserving AlexNet level accuracy but showing much stronger generalization ability.",
"High demand for computation resources severely hinders deployment of large-scale Deep Neural Networks (DNN) in resource constrained devices. In this work, we propose a Structured Sparsity Learning (SSL) method to regularize the structures (i.e., filters, channels, filter shapes, and layer depth) of DNNs. SSL can: (1) learn a compact structure from a bigger DNN to reduce computation cost; (2) obtain a hardware-friendly structured sparsity of DNN to efficiently accelerate the DNN's evaluation. Experimental results show that SSL achieves on average 5.1× and 3.1× speedups of convolutional layer computation of AlexNet against CPU and GPU, respectively, with off-the-shelf libraries. These speedups are about twice speedups of non-structured sparsity; (3) regularize the DNN structure to improve classification accuracy. The results show that for CIFAR-10, regularization on layer depth reduces a 20-layer Deep Residual Network (ResNet) to 18 layers while improving the accuracy from 91.25% to 92.60%, which is still higher than that of original ResNet with 32 layers. For AlexNet, SSL reduces the error by 1%."
]
} |
1903.08206 | 2964163406 | The metadata about scientific experiments published in online repositories have been shown to suffer from a high degree of representational heterogeneity—there are often many ways to represent the same type of information, such as a geographical location via its latitude and longitude. To harness the potential that metadata have for discovering scientific data, it is crucial that they be represented in a uniform way that can be queried effectively. One step toward uniformly-represented metadata is to normalize the multiple, distinct field names used in metadata (e.g., lat lon, lat and long) to describe the same type of value. To that end, we present a new method based on clustering and embeddings (i.e., vector representations of words) to align metadata field names with ontology terms. We apply our method to biomedical metadata by generating embeddings for terms in biomedical ontologies from the BioPortal repository. We carried out a comparative study between our method and the NCBO Annotator, which revealed that our method yields more and substantially better alignments between metadata and ontology terms. | The NCBO Annotator is a reference service for annotating biomedical data and text with terms from biomedical ontologies. The service works directly with ontologies hosted in the BioPortal ontology repository. The NCBO Annotator relies on the Mgrep concept recognizer, developed at the University of Michigan, to match arbitrary text against a set of dictionary terms provided by BioPortal ontologies @cite_11 . The NCBO Annotator additionally exploits is-a relations between terms to expand the annotations found with Mgrep and to rank the alignments. | {
"cite_N": [
"@cite_11"
],
"mid": [
"2106797966"
],
"abstract": [
"The National Center for Biomedical Ontology (NCBO) is developing a system for automated, ontology-based access to online biomedical resources. The system's indexing workflow processes the text metadata of diverse resources such as datasets from GEO and ArrayExpress to annotate and index them with concepts from appropriate ontologies. This indexing requires the use of a concept-recognition tool to identify ontology concepts in the resource's textual metadata. In this paper, we present a comparison of two concept recognizers – NLM's MetaMap and the University of Michigan's Mgrep. We utilize a number of data sources and dictionaries to evaluate the concept recognizers in terms of precision, recall, speed of execution, scalability and customizability. Our evaluations demonstrate that Mgrep has a clear edge over MetaMap for large-scale service oriented applications. Based on our analysis we also suggest areas of potential improvements for Mgrep. We have subsequently used Mgrep to build the Open Biomedical Annotator service. The Annotator service has access to a large dictionary of biomedical terms derived from the Unified Medical Language System (UMLS) and NCBO ontologies. The Annotator also leverages the hierarchical structure of the ontologies and their mappings to expand annotations. The Annotator service is available to the community as a REST Web service for creating ontology-based annotations of their data."
]
} |
1903.08206 | 2964163406 | The metadata about scientific experiments published in online repositories have been shown to suffer from a high degree of representational heterogeneity—there are often many ways to represent the same type of information, such as a geographical location via its latitude and longitude. To harness the potential that metadata have for discovering scientific data, it is crucial that they be represented in a uniform way that can be queried effectively. One step toward uniformly-represented metadata is to normalize the multiple, distinct field names used in metadata (e.g., lat lon, lat and long) to describe the same type of value. To that end, we present a new method based on clustering and embeddings (i.e., vector representations of words) to align metadata field names with ontology terms. We apply our method to biomedical metadata by generating embeddings for terms in biomedical ontologies from the BioPortal repository. We carried out a comparative study between our method and the NCBO Annotator, which revealed that our method yields more and substantially better alignments between metadata and ontology terms. | There are several recent methods to generate embeddings for entities in an RDF graph or OWL ontology. These embeddings, often called knowledge graph embeddings, require triples of the form @math @math . Translation-based methods represent each @math or @math entity as a point in a vector space, and the @math represents a vector translation function in a hyperplane (i.e., the @math entity vector can be translated to the @math entity vector, using a geometrical function over the @math vector) @cite_4 @cite_14 @cite_8 . 
Other methods to generate these knowledge graph embeddings are inspired by language modeling approaches, which rely on sequences of words in a text corpus---that is, random walks are carried out on an RDF graph to generate sequences of entities in nearby proximity, and these sequences can be used to generate latent numerical representations of entities @cite_1 . While these methods have been effective for link prediction in knowledge graphs, triple classification, and fact extraction, they cannot be used to assess semantic similarity of different terms by examining how they are used in the context of literature. | {
"cite_N": [
"@cite_1",
"@cite_14",
"@cite_4",
"@cite_8"
],
"mid": [
"2523679382",
"2184957013",
"2283196293",
"2127426251"
],
"abstract": [
"Linked Open Data has been recognized as a valuable source for background information in data mining. However, most data mining tools require features in propositional form, i.e., a vector of nominal or numerical features associated with an instance, while Linked Open Data sources are graphs by nature. In this paper, we present RDF2Vec, an approach that uses language modeling approaches for unsupervised feature extraction from sequences of words, and adapts them to RDF graphs. We generate sequences by leveraging local information from graph sub-structures, harvested by Weisfeiler-Lehman Subtree RDF Graph Kernels and graph walks, and learn latent numerical representations of entities in RDF graphs. Our evaluation shows that such vector representations outperform existing techniques for the propositionalization of RDF graphs on a variety of different predictive machine learning tasks, and that feature vector representations of general knowledge graphs such as DBpedia and Wikidata can be easily reused for different tasks.",
"Knowledge graph completion aims to perform link prediction between entities. In this paper, we consider the approach of knowledge graph embeddings. Recently, models such as TransE and TransH build entity and relation embeddings by regarding a relation as translation from head entity to tail entity. We note that these models simply put both entities and relations within the same semantic space. In fact, an entity may have multiple aspects and various relations may focus on different aspects of entities, which makes a common space insufficient for modeling. In this paper, we propose TransR to build entity and relation embeddings in separate entity space and relation spaces. Afterwards, we learn embeddings by first projecting entities from entity space to corresponding relation space and then building translations between projected entities. In experiments, we evaluate our models on three tasks including link prediction, triple classification and relational fact extraction. Experimental results show significant and consistent improvements compared to state-of-the-art baselines including TransE and TransH. The source code of this paper can be obtained from https://github.com/mrlyk423/relation_extraction.",
"We deal with embedding a large scale knowledge graph composed of entities and relations into a continuous vector space. TransE is a promising method proposed recently, which is very efficient while achieving state-of-the-art predictive performance. We discuss some mapping properties of relations which should be considered in embedding, such as reflexive, one-to-many, many-to-one, and many-to-many. We note that TransE does not do well in dealing with these properties. Some complex models are capable of preserving these mapping properties but sacrifice efficiency in the process. To make a good trade-off between model capacity and efficiency, in this paper we propose TransH which models a relation as a hyperplane together with a translation operation on it. In this way, we can well preserve the above mapping properties of relations with almost the same model complexity of TransE. Additionally, as a practical knowledge graph is often far from completed, how to construct negative examples to reduce false negative labels in training is very important. Utilizing the one-to-many many-to-one mapping property of a relation, we propose a simple trick to reduce the possibility of false negative labeling. We conduct extensive experiments on link prediction, triplet classification and fact extraction on benchmark datasets like WordNet and Freebase. Experiments show TransH delivers significant improvements over TransE on predictive accuracy with comparable capability to scale up.",
"Knowledge bases are an important resource for question answering and other tasks but often suffer from incompleteness and lack of ability to reason over their discrete entities and relationships. In this paper we introduce an expressive neural tensor network suitable for reasoning over relationships between two entities. Previous work represented entities as either discrete atomic units or with a single entity vector representation. We show that performance can be improved when entities are represented as an average of their constituting word vectors. This allows sharing of statistical strength between, for instance, facts involving the \"Sumatran tiger\" and \"Bengal tiger.\" Lastly, we demonstrate that all models improve when these word vectors are initialized with vectors learned from unsupervised large corpora. We assess the model by considering the problem of predicting additional true relations between entities given a subset of the knowledge base. Our model outperforms previous models and can classify unseen relationships in WordNet and FreeBase with an accuracy of 86.2% and 90.0%, respectively."
]
} |
1903.07994 | 2963858328 | Bitcoin is a cryptocurrency that features a distributed, decentralized and trustworthy mechanism, which has made Bitcoin a popular global transaction platform. The transaction efficiency among nations and the privacy benefiting from address anonymity of the Bitcoin network have attracted many activities such as payments, investments, gambling, and even money laundering in the past decade. Unfortunately, some criminal behaviors which took advantage of this platform were not identified. This has discouraged many governments from supporting cryptocurrency. Thus, the capability to identify criminal addresses becomes an important issue in the cryptocurrency network. In this paper, we propose new features in addition to those commonly used in the literature to build a classification model for detecting abnormality of Bitcoin network addresses. These features include various high orders of moments of transaction time (represented by block height) which summarize the transaction history in an efficient way. The extracted features are trained by supervised machine learning methods on a labeled category data set. The experimental evaluation shows that these features have improved the performance of Bitcoin address classification significantly. We evaluate the results under eight classifiers and achieve the highest Micro-F1/Macro-F1 of 87%/86% with LightGBM. | Other studies solve the entity identification problem by supervised learning methods. @cite_12 classifies cybercriminal entities by supervised learning methods on collected labeled Bitcoin addresses. @cite_27 train classifiers to detect Ponzi schemes in Bitcoin. To deal with imbalanced data, a sampling-based approach and a cost-sensitive approach are considered simultaneously in @cite_27 . To reduce the anonymity of Bitcoin by predicting yet-unidentified addresses, @cite_8 trained classifiers with the synthetic minority over-sampling technique (SMOTE) @cite_32 on imbalanced data. | {
"cite_N": [
"@cite_27",
"@cite_32",
"@cite_12",
"@cite_8"
],
"mid": [
"2962831337",
"2148143831",
"2783095957",
"2782052028"
],
"abstract": [
"Soon after its introduction in 2009, Bitcoin has been adopted by cyber-criminals, which rely on its pseudonymity to implement virtually untraceable scams. One of the typical scams that operate on Bitcoin are the so-called Ponzi schemes. These are fraudulent investments which repay users with the funds invested by new users that join the scheme, and implode when it is no longer possible to find new investments. Despite being illegal in many countries, Ponzi schemes are now proliferating on Bitcoin, and they keep alluring new victims, who are plundered of millions of dollars. We apply data mining techniques to detect Bitcoin addresses related to Ponzi schemes. Our starting point is a dataset of features of real-world Ponzi schemes, that we construct by analysing, on the Bitcoin blockchain, the transactions used to perform the scams. We use this dataset to experiment with various machine learning algorithms, and we assess their effectiveness through standard validation protocols and performance metrics. The best of the classifiers we have experimented can identify most of the Ponzi schemes in the dataset, with a low number of false positives.",
"An approach to the construction of classifiers from imbalanced datasets is described. A dataset is imbalanced if the classification categories are not approximately equally represented. Often real-world data sets are predominately composed of \"normal\" examples with only a small percentage of \"abnormal\" or \"interesting\" examples. It is also the case that the cost of misclassifying an abnormal (interesting) example as a normal example is often much higher than the cost of the reverse error. Under-sampling of the majority (normal) class has been proposed as a good means of increasing the sensitivity of a classifier to the minority class. This paper shows that a combination of our method of oversampling the minority (abnormal) class and under-sampling the majority (normal) class can achieve better classifier performance (in ROC space) than only under-sampling the majority class. This paper also shows that a combination of our method of over-sampling the minority class and under-sampling the majority class can achieve better classifier performance (in ROC space) than varying the loss ratios in Ripper or class priors in Naive Bayes. Our method of over-sampling the minority class involves creating synthetic minority class examples. Experiments are performed using C4.5, Ripper and a Naive Bayes classifier. The method is evaluated using the area under the Receiver Operating Characteristic curve (AUC) and the ROC convex hull strategy.",
"Bitcoin, a peer-to-peer payment system and digital currency, is often involved in illicit activities such as scamming, ransomware attacks, illegal goods trading, and thievery. At the time of writing, the Bitcoin ecosystem has not yet been mapped and as such there is no estimate of the share of illicit activities. This paper provides the first estimation of the portion of cyber-criminal entities in the Bitcoin ecosystem. Our dataset consists of 854 observations categorised into 12 classes (out of which 5 are cybercrime-related) and a total of 100,000 uncategorised observations. The dataset was obtained from the data provider who applied three types of clustering of Bitcoin transactions to categorise entities: co-spend, intelligence-based, and behaviour-based. Thirteen supervised learning classifiers were then tested, of which four prevailed with a cross-validation accuracy of 77.38%, 76.47%, 78.46%, 80.76% respectively. From the top four classifiers, Bagging and Gradient Boosting classifiers were selected based on their weighted average and per class precision on the cybercrime-related categories. Both models were used to classify 100,000 uncategorised entities, showing that the share of cybercrime-related is 29.81% according to Bagging, and 10.95% according to Gradient Boosting with number of entities as the metric. With regard to the number of addresses and current coins held by this type of entities, the results are: 5.79% and 10.02% according to Bagging; and 3.16% and 1.45% according to Gradient Boosting.",
""
]
} |
1903.07994 | 2963858328 | Bitcoin is a cryptocurrency that features a distributed, decentralized and trustworthy mechanism, which has made Bitcoin a popular global transaction platform. The transaction efficiency among nations and the privacy benefiting from address anonymity of the Bitcoin network have attracted many activities such as payments, investments, gambling, and even money laundering in the past decade. Unfortunately, some criminal behaviors which took advantage of this platform were not identified. This has discouraged many governments from supporting cryptocurrency. Thus, the capability to identify criminal addresses becomes an important issue in the cryptocurrency network. In this paper, we propose new features in addition to those commonly used in the literature to build a classification model for detecting abnormality of Bitcoin network addresses. These features include various high orders of moments of transaction time (represented by block height) which summarize the transaction history in an efficient way. The extracted features are trained by supervised machine learning methods on a labeled category data set. The experimental evaluation shows that these features have improved the performance of Bitcoin address classification significantly. We evaluate the results under eight classifiers and achieve the highest Micro-F1/Macro-F1 of 87%/86% with LightGBM. | @cite_20 introduces the idea of motifs in directed hypergraphs, defining exchange patterns of addresses. In @cite_7 , the graph-based features are then combined with address features, entity features, temporal features, and centrality features to identify Bitcoin entity categories. | {
"cite_N": [
"@cite_7",
"@cite_20"
],
"mid": [
"2898334543",
"2768418903"
],
"abstract": [
"Bitcoin has created a new exchange paradigm within which financial transactions can be trusted without an intermediary. This premise of a free decentralized transactional network however requires, in its current implementation, unrestricted access to the ledger for peer-based transaction verification. A number of studies have shown that, in this pseudonymous context, identities can be leaked based on transaction features or off-network information. In this work, we analyze the information revealed by the pattern of transactions in the neighborhood of a given entity transaction. By definition, these features which pertain to an extended network are not directly controllable by the entity, but might enable leakage of information about transacting entities. We define a number of new features relevant to entity characterization on the Bitcoin Blockchain and study their efficacy in practice. We show that even a weak attacker with shallow data mining knowledge is able to leverage these features to characterize the entity properties.",
"Bitcoin exchanges operate between digital and fiat currency networks, thus providing an opportunity to connect real-world identities to pseudonymous addresses, an important task for anti-money laundering efforts. We seek to characterize, understand, and identify patterns centered around exchanges in the context of a directed hypergraph model for Bitcoin transactions. We introduce the idea of motifs in directed hypergraphs, considering a particular 2-motif as a potential laundering pattern. We identify distinct statistical properties of exchange addresses related to the acquisition and spending of bitcoin. We then leverage this to build classification models to learn a set of discriminating features, and are able to predict if an address is owned by an exchange with (>80%) accuracy using purely structural features of the graph. Applying this classifier to the 2-motif patterns reveals a preponderance of inter-exchange activity, while not necessarily significant laundering patterns."
]
} |
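As a rough illustration of the moment-based features described in the row above (a hypothetical sketch, not the paper's actual feature extractor; the address's block heights below are made up):

```python
def moment_features(heights, max_order=4):
    """Mean plus central moments (orders 2..max_order) of an address's
    transaction block heights, summarizing its transaction history."""
    n = len(heights)
    mean = sum(heights) / n
    feats = [mean]
    for k in range(2, max_order + 1):
        feats.append(sum((h - mean) ** k for h in heights) / n)
    return feats

# A made-up address that transacted at these block heights.
heights = [100, 150, 150, 400, 900]
print(moment_features(heights))  # → [340.0, 89400.0, 29658000.0, 20856420000.0]
```

In the paper's pipeline such per-address vectors would then be fed to a supervised classifier (e.g. LightGBM); here only the feature step is sketched.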
1903.07903 | 2921283045 | Despite the huge success of Long Short-Term Memory networks, their applications in environmental sciences are scarce. We argue that one reason is the difficulty to interpret the internals of trained networks. In this study, we look at the application of LSTMs for rainfall-runoff forecasting, one of the central tasks in the field of hydrology, in which the river discharge has to be predicted from meteorological observations. LSTMs are particularly well-suited for this problem since memory cells can represent dynamic reservoirs and storages, which are essential components in state-space modelling approaches of the hydrological system. On basis of two different catchments, one with snow influence and one without, we demonstrate how the trained model can be analyzed and interpreted. In the process, we show that the network internally learns to represent patterns that are consistent with our qualitative understanding of the hydrological system. | In the field of water resources and hydrology, a lot of effort has been made on interpreting neural networks and analyzing the importance of input variables (see @cite_21 for an overview). However, so far only feed-forward neural networks have been applied in these studies. Only recently, @cite_17 have demonstrated the potential use of LSTMs for the task of rainfall-runoff modelling. In their work they have also shown that memory cells with interpretable functions exist, which were found by visual inspection. | {
"cite_N": [
"@cite_21",
"@cite_17"
],
"mid": [
"2004630602",
"2800819102"
],
"abstract": [
"The use of artificial neural network (ANN) models in water resources applications has grown considerably over the last decade. However, an important step in the ANN modelling methodology that has received little attention is the selection of appropriate model inputs. This article is the first in a two-part series published in this issue and addresses the lack of a suitable input determination methodology for ANN models in water resources applications. The current state of input determination is reviewed and two input determination methodologies are presented. The first method is a model-free approach, which utilises a measure of the mutual information criterion to characterise the dependence between a potential model input and the output variable. To facilitate the calculation of dependence in the case of multiple inputs, a partial measure of the mutual information criterion is used. In the second method, a self-organizing map (SOM) is used to reduce the dimensionality of the input space and obtain independent inputs. To determine which inputs have a significant relationship with the output (dependent) variable, a hybrid genetic algorithm and general regression neural network (GAGRNN) is used. Both input determination techniques are tested on a number of synthetic data sets, where the dependence attributes were known a priori. In the second paper of the series, the input determination methodology is applied to a real-world case study in order to determine suitable model inputs for forecasting salinity in the River Murray, South Australia, 14 days in advance.",
"Abstract. Rainfall–runoff modelling is one of the key challenges in the field of hydrology. Various approaches exist, ranging from physically based over conceptual to fully data-driven models. In this paper, we propose a novel data-driven approach, using the Long Short-Term Memory (LSTM) network, a special type of recurrent neural network. The advantage of the LSTM is its ability to learn long-term dependencies between the provided input and output of the network, which are essential for modelling storage effects in e.g. catchments with snow influence. We use 241 catchments of the freely available CAMELS data set to test our approach and also compare the results to the well-known Sacramento Soil Moisture Accounting Model (SAC-SMA) coupled with the Snow-17 snow routine. We also show the potential of the LSTM as a regional hydrological model in which one model predicts the discharge for a variety of catchments. In our last experiment, we show the possibility to transfer process understanding, learned at regional scale, to individual catchments and thereby increasing model performance when compared to a LSTM trained only on the data of single catchments. Using this approach, we were able to achieve better model performance as the SAC-SMA + Snow-17, which underlines the potential of the LSTM for hydrological modelling applications."
]
} |
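The storage analogy above (LSTM memory cells acting as dynamic reservoirs) can be sketched with a single scalar LSTM cell; the gate activations here are fixed by hand purely for illustration, not learned:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, c):
    # Hand-set gates so the cell behaves like a water storage:
    f = sigmoid(5.0)                      # forget gate ~1: stored water is retained
    i = sigmoid(5.0)                      # input gate ~1: new inflow is admitted
    g = math.tanh(x)                      # candidate update from precipitation x
    c_new = f * c + i * g                 # cell state = the "reservoir" level
    h = sigmoid(5.0) * math.tanh(c_new)   # output gate -> discharge-like signal
    return c_new, h

c = 0.0
states = []
for precip in [0.5, 0.5, 0.5, 0.5, 0.0]:  # four wet steps, then a dry step
    c, h = lstm_step(precip, c)
    states.append(c)
# The cell state rises while precipitation is positive and decays on the
# dry step, mimicking a storage in a state-space hydrological model.
```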
1903.07803 | 2921558605 | Segmenting the retinal vasculature entails a trade-off between how much of the overall vascular structure we identify vs. how precisely we segment individual vessels. In particular, state-of-the-art methods tend to under-segment faint vessels, as well as pixels that lie on the edges of thicker vessels. Thus, they underestimate the width of individual vessels, as well as the ratio of large to small vessels. More generally, many crucial bio-markers---including the artery-vein (AV) ratio, branching angles, number of bifurcation, fractal dimension, tortuosity, vascular length-to-diameter ratio and wall-to-lumen length---require precise measurements of individual vessels. To address this limitation, we propose a novel, stochastic training scheme for deep neural networks that better classifies the faint, ambiguous regions of the image. Our approach relies on two key innovations. First, we train our deep networks with dynamic weights that fluctuate during each training iteration. This stochastic approach forces the network to learn a mapping that robustly balances precision and recall. Second, we decouple the segmentation process into two steps. In the first half of our pipeline, we estimate the likelihood of every pixel and then use these likelihoods to segment pixels that are clearly vessel or background. In the latter part of our pipeline, we use a second network to classify the ambiguous regions in the image. Our proposed method obtained state-of-the-art results on five retinal datasets---DRIVE, STARE, CHASE-DB, AV-WIDE, and VEVIO---by learning a robust balance between false positive and false negative rates. In addition, we are the first to report segmentation results on the AV-WIDE dataset, and we have made the ground-truth annotations for this dataset publicly available. | Vessel segmentation has a long history, although the advent of deep neural networks has yielded significant improvements in recent years. 
Earlier techniques relied primarily on handcrafted features, including matched filters @cite_32 , quadrature filters @cite_8 , and Gabor filters @cite_17 @cite_5 . The latter approach, in particular, uses a predefined kernel bank to incorporate all vessel widths and orientations. However, handcrafted features are limited by our ability to analytically model the segmentation process. Other classic techniques, as surveyed further in @cite_7 , include piece-wise thresholding @cite_4 , region growing @cite_20 , and concavity measurements @cite_16 . In addition to handcrafted features, some prior approaches used graph-theoretic techniques to trace the vascular structure, including shortest-path tracking @cite_13 and the fast marching algorithm @cite_25 . | {
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_8",
"@cite_32",
"@cite_5",
"@cite_16",
"@cite_13",
"@cite_25",
"@cite_20",
"@cite_17"
],
"mid": [
"2145305441",
"2188292956",
"2096301059",
"2166524747",
"",
"2112783556",
"2057739630",
"2048316306",
"1579287100",
"2060295888"
],
"abstract": [
"Describes an automated method to locate and outline blood vessels in images of the ocular fundus. Such a tool should prove useful to eye care specialists for purposes of patient screening, treatment evaluation, and clinical study. The authors' method differs from previously known methods in that it uses local and global vessel features cooperatively to segment the vessel network. The authors evaluate their method using hand-labeled ground truth segmentations of 20 images. A plot of the operating characteristic shows that the authors' method reduces false positives by as much as 15 times over basic thresholding of a matched filter response (MFR), at up to a 75% true positive rate. For a baseline, they also compared the ground truth against a second hand-labeling, yielding a 90% true positive and a 4% false positive detection rate, on average. These numbers suggest there is still room for a 15% true positive rate improvement, with the same false positive rate, over the authors' method. They are making all their images and hand labelings publicly available for interested researchers to use in evaluating related methods.",
"The Image segmentation is referred to as one of the most important processes of image processing. Image segmentation is the technique of dividing or partitioning an image into parts, called segments. It is mostly useful for applications like image compression or object recognition, because for these types of applications, it is inefficient to process the whole image. So, image segmentation is used to segment the parts from image for further processing. There exist several image segmentation techniques, which partition the image into several parts based on certain image features like pixel intensity value, color, texture, etc. These all techniques are categorized based on the segmentation method used. In this paper the various image segmentation techniques are reviewed, discussed and finally a comparison of their advantages and disadvantages is listed.",
"The segmentation of blood vessels is a common problem in medical imaging and various applications are found in diagnostics, surgical planning, training and more. Among many different techniques, the use of multiple scales and line detectors is a popular approach. However, the typical line filters used are sensitive to intensity variations and do not target the detection of vessel walls explicitly. In this article, we combine both line and edge detection using quadrature filters across multiple scales. The filter result gives well defined vessels as linear structures, while distinct edges facilitate a robust segmentation. We apply the filter output to energy optimization techniques for segmentation and show promising results in 2D and 3D to illustrate the behavior of our method. The conference version of this article received the best paper award in the bioinformatics and biomedical applications track at ICPR 2008.",
"Blood vessels usually have poor local contrast, and the application of existing edge detection algorithms yield results which are not satisfactory. An operator for feature extraction based on the optical and spatial properties of objects to be recognized is introduced. The gray-level profile of the cross section of a blood vessel is approximated by a Gaussian-shaped curve. The concept of matched filter detection of signals is used to detect piecewise linear segments of blood vessels in these images. Twelve different templates that are used to search for vessel segments along all possible directions are constructed. Various issues related to the implementation of these matched filters are discussed. The results are compared to those obtained with other methods. >",
"",
"Detecting blood vessels in retinal images with the presence of bright and dark lesions is a challenging unsolved problem. In this paper, a novel multiconcavity modeling approach is proposed to handle both healthy and unhealthy retinas simultaneously. The differentiable concavity measure is proposed to handle bright lesions in a perceptive space. The line-shape concavity measure is proposed to remove dark lesions which have an intensity structure different from the line-shaped vessels in a retina. The locally normalized concavity measure is designed to deal with unevenly distributed noise due to the spherical intensity variation in a retinal image. These concavity measures are combined together according to their statistical distributions to detect vessels in general retinal images. Very encouraging experimental results demonstrate that the proposed method consistently yields the best performance over existing state-of-the-art methods on the abnormal retinas and its accuracy outperforms the human observer, which has not been achieved by any of the state-of-the-art benchmark methods. Most importantly, unlike existing methods, the proposed method shows very attractive performances not only on healthy retinas but also on a mixture of healthy and pathological retinas.",
"We present a methodology for extracting the vascular network in the human retina using Dijkstra’s shortest-path algorithm. Our method preserves vessel thickness, requires no manual intervention, and follows vessel branching naturally and efficiently. To test our method, we constructed a retinal video indirect ophthalmoscopy (VIO) image database from pediatric patients and compared the segmentations achieved by our method and state-of-the-art approaches to a human-drawn gold standard. Our experimental results show that our algorithm outperforms prior state-of-the-art methods, for both single VIO frames and automatically generated, large field-of-view enhanced mosaics. We have made the corresponding dataset and source code freely available online.",
"We present a new interactive method for tubular structure extraction. The main application and motivation for this work is vessel tracking in 2D and 3D images. The basic tools are minimal paths solved using the fast marching algorithm. This allows interactive tools for the physician by clicking on a small number of points in order to obtain a minimal path between two points or a set of paths in the case of a tree structure. Our method is based on a variant of the minimal path method that models the vessel as a centerline and surface. This is done by adding one dimension for the local radius around the centerline. The crucial step of our method is the definition of the local metrics to minimize. We have chosen to exploit the tubular structure of the vessels one wants to extract to built an anisotropic metric. The designed metric is well oriented along the direction of the vessel, admits higher velocity on the centerline, and provides a good estimate of the vessel radius. Based on the optimally oriented flux this measure is required to be robust against the disturbance introduced by noise or adjacent structures with intensity similar to the target vessel. We obtain promising results on noisy synthetic and real 2D and 3D images and we present a clinical validation.",
"We present a method for retinal blood vessel segmentation based upon the scale-space analysis of the first and second derivative of the intensity image which gives information about its topology and overcomes the problem of variations in contrast inherent in these images. We use the local maxima over scales of the magnitude of the gradient and the maximum principal curvature as the two features used in a region growing procedure. In the first stage, the growth is constrained to regions of low gradient magnitude. In the final stage this constraint is relaxed to allow borders between regions to be defined. The algorithm is tested in both red-free and fluorescein retinal images.",
"Retinal blood vessel segmentation is a widely used process in diagnosis of various diseases such as diabetic retinopathy, glaucoma and arteriosclerosis. Therefore, an automated tool developed for vessel segmentation could be employed in diagnosis of those illnesses to help ophthalmologists. In this paper, we suggest a method to segment retinal blood vessels automatically. In the method, we apply top-hat transform after Gabor filter to enhance blood vessels. Later on, the output of the transformation is converted to a binary image with p-tile thresholding. In order to test the developed system, 20 images obtained from the STARE database are used for performance evaluation. The results show 86.31% true positive rate (sensitivity) and 92.90% accuracy, which is promising."
]
} |
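The matched-filter idea from @cite_32 — a Gaussian-shaped kernel modelling a vessel's cross-section — can be sketched in one dimension; the intensity profile, kernel width, and sigma below are illustrative assumptions, not values from the cited paper:

```python
import math

def matched_filter_kernel(sigma=1.0, half_width=3):
    # Vessels are darker than background, so the template is an inverted
    # Gaussian, shifted to zero mean as in matched-filter design.
    xs = range(-half_width, half_width + 1)
    g = [-math.exp(-(x * x) / (2 * sigma * sigma)) for x in xs]
    mean = sum(g) / len(g)
    return [v - mean for v in g]

def convolve1d(signal, kernel):
    k = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = i + j - k
            if 0 <= idx < len(signal):
                acc += w * signal[idx]
        out.append(acc)
    return out

# A 1D intensity profile: bright background (1.0) with a dark vessel dip.
profile = [1.0] * 8 + [0.9, 0.4, 0.1, 0.4, 0.9] + [1.0] * 8
response = convolve1d(profile, matched_filter_kernel())
print(response.index(max(response)))  # → 10, the vessel centre
```

The 2D version used for retinal images rotates this kernel over a bank of orientations and keeps the maximum response per pixel.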
1903.07803 | 2921558605 | Segmenting the retinal vasculature entails a trade-off between how much of the overall vascular structure we identify vs. how precisely we segment individual vessels. In particular, state-of-the-art methods tend to under-segment faint vessels, as well as pixels that lie on the edges of thicker vessels. Thus, they underestimate the width of individual vessels, as well as the ratio of large to small vessels. More generally, many crucial bio-markers---including the artery-vein (AV) ratio, branching angles, number of bifurcation, fractal dimension, tortuosity, vascular length-to-diameter ratio and wall-to-lumen length---require precise measurements of individual vessels. To address this limitation, we propose a novel, stochastic training scheme for deep neural networks that better classifies the faint, ambiguous regions of the image. Our approach relies on two key innovations. First, we train our deep networks with dynamic weights that fluctuate during each training iteration. This stochastic approach forces the network to learn a mapping that robustly balances precision and recall. Second, we decouple the segmentation process into two steps. In the first half of our pipeline, we estimate the likelihood of every pixel and then use these likelihoods to segment pixels that are clearly vessel or background. In the latter part of our pipeline, we use a second network to classify the ambiguous regions in the image. Our proposed method obtained state-of-the-art results on five retinal datasets---DRIVE, STARE, CHASE-DB, AV-WIDE, and VEVIO---by learning a robust balance between false positive and false negative rates. In addition, we are the first to report segmentation results on the AV-WIDE dataset, and we have made the ground-truth annotations for this dataset publicly available. | There are numerous extensions to the original U-net architecture. 
One extension increased the performance of U-net by introducing residual connections and a recursive training strategy @cite_28 , albeit at the expense of increased training time due to the extra connections and recursion. Another arranged two U-net architectures in serial fashion, enabling the architecture to learn more abstract features @cite_22 . This serialization significantly increases the training time and requires heavy computational infrastructure to train well. Jin used deformable CNNs to construct better vascular features @cite_29 , but also with added training complexity. Finally, Yun combined conditional generative adversarial networks (GANs) with U-net @cite_9 , achieving results comparable to the other U-net extensions. In contrast to these extensions, our approach not only yielded better results, but is also simpler and faster to train, as detailed in the following section. | {
"cite_N": [
"@cite_28",
"@cite_9",
"@cite_29",
"@cite_22"
],
"mid": [
"2788906943",
"2800306316",
"2898910301",
"2896194744"
],
"abstract": [
"Deep learning (DL) based semantic segmentation methods have been providing state-of-the-art performance in the last few years. More specifically, these techniques have been successfully applied to medical image classification, segmentation, and detection tasks. One deep learning technique, U-Net, has become one of the most popular for these applications. In this paper, we propose a Recurrent Convolutional Neural Network (RCNN) based on U-Net as well as a Recurrent Residual Convolutional Neural Network (RRCNN) based on U-Net models, which are named RU-Net and R2U-Net respectively. The proposed models utilize the power of U-Net, Residual Network, as well as RCNN. There are several advantages of these proposed architectures for segmentation tasks. First, a residual unit helps when training deep architecture. Second, feature accumulation with recurrent residual convolutional layers ensures better feature representation for segmentation tasks. Third, it allows us to design better U-Net architecture with same number of network parameters with better performance for medical image segmentation. The proposed models are tested on three benchmark datasets such as blood vessel segmentation in retina images, skin cancer segmentation, and lung lesion segmentation. The experimental results show superior performance on segmentation tasks compared to equivalent models including U-Net and residual U-Net (ResU-Net).",
"The segmentation of retinal vessels is of significance for doctors to diagnose the fundus diseases. However, existing methods have various problems in the segmentation of the retinal vessels, such as insufficient segmentation of retinal vessels, weak anti-noise interference ability, and sensitivity to lesions, etc. Aiming to the shortcomings of existed methods, this paper proposes the use of conditional deep convolutional generative adversarial networks to segment the retinal vessels. We mainly improve the network structure of the generator. The introduction of the residual module at the convolutional layer for residual learning makes the network structure sensitive to changes in the output, as to better adjust the weight of the generator. In order to reduce the number of parameters and calculations, using a small convolution to halve the number of channels in the input signature before using a large convolution kernel. By used skip connection to connect the output of the convolutional layer with the output of the deconvolution layer to avoid low-level information sharing. By verifying the method on the DRIVE and STARE datasets, the segmentation accuracy rate is 96.08% and 97.71%, the sensitivity reaches 82.74% and 85.34% respectively, and the F-measure reaches 82.08% and 85.02% respectively. The sensitivity is 4.82% and 2.4% higher than that of R2U-Net.",
"Automatic segmentation of retinal vessels in fundus images plays an important role in the diagnosis of some diseases such as diabetes and hypertension. In this paper, we propose Deformable U-Net (DUNet), which exploits the retinal vessels’ local features with a U-shape architecture, in an end to end manner for retinal vessel segmentation. Inspired by the recently introduced deformable convolutional networks, we integrate the deformable convolution into the proposed network. The DUNet, with upsampling operators to increase the output resolution, is designed to extract context information and enable precise localization by combining low-level features with high-level ones. Furthermore, DUNet captures the retinal vessels at various shapes and scales by adaptively adjusting the receptive fields according to vessels’ scales and shapes. Public datasets: DRIVE, STARE, CHASE_DB1 and HRF are used to test our models. Detailed comparisons between the proposed network and the deformable neural network, U-Net are provided in our study. Results show that more detailed vessels can be extracted by DUNet and it exhibits state-of-the-art performance for retinal vessel segmentation with a global accuracy of 0.9566/0.9641/0.9610/0.9651 and AUC of 0.9802/0.9832/0.9804/0.9831 on DRIVE, STARE, CHASE_DB1 and HRF respectively. Moreover, to show the generalization ability of the DUNet, we use another two retinal vessel data sets, i.e., WIDE and SYNTHE, to qualitatively and quantitatively analyze and compare with other methods. Extensive cross-training evaluations are used to further assess the extendibility of DUNet. The proposed method has the potential to be applied to the early diagnosis of diseases.",
"U-Net has been providing state-of-the-art performance in many medical image segmentation problems. Many modifications have been proposed for U-Net, such as attention U-Net, recurrent residual convolutional U-Net (R2-UNet), and U-Net with residual blocks or blocks with dense connections. However, all these modifications have an encoder-decoder structure with skip connections, and the number of paths for information flow is limited. We propose LadderNet in this paper, which can be viewed as a chain of multiple U-Nets. Instead of only one pair of encoder branch and decoder branch in U-Net, a LadderNet has multiple pairs of encoder-decoder branches, and has skip connections between every pair of adjacent decoder and decoder branches in each level. Inspired by the success of ResNet and R2-UNet, we use modified residual blocks where two convolutional layers in one block share the same weights. A LadderNet has more paths for information flow because of skip connections and residual blocks, and can be viewed as an ensemble of Fully Convolutional Networks (FCN). The equivalence to an ensemble of FCNs improves segmentation accuracy, while the shared weights within each residual block reduce parameter number. Semantic segmentation is essential for retinal disease detection. We tested LadderNet on two benchmark datasets for blood vessel segmentation in retinal images, and achieved superior performance over methods in the literature. The implementation is provided https: github.com juntang-zhuang LadderNet"
]
} |
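The dynamic-weight training scheme described in this row's abstract (class weights that fluctuate during each iteration to balance precision and recall) can be sketched as a weighted cross-entropy whose positive-class weight is redrawn every iteration; the weight range and toy per-pixel predictions are assumptions, not the paper's actual settings:

```python
import math
import random

def weighted_bce(preds, labels, w_pos):
    """Binary cross-entropy with a weight on the vessel (positive) class."""
    eps = 1e-7
    total = 0.0
    for p, y in zip(preds, labels):
        total += -(w_pos * y * math.log(max(p, eps))
                   + (1 - y) * math.log(max(1.0 - p, eps)))
    return total / len(preds)

random.seed(0)
preds = [0.9, 0.2, 0.7, 0.1]   # toy per-pixel vessel probabilities
labels = [1, 0, 1, 0]
for it in range(3):
    w_pos = random.uniform(0.5, 2.0)   # weight redrawn each training iteration
    loss = weighted_bce(preds, labels, w_pos)
    # ...backpropagate `loss` here in a real training loop...
```

A larger `w_pos` penalizes missed vessel pixels more heavily (favoring recall); letting it fluctuate forces the network toward mappings that are robust across the precision/recall trade-off.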
1903.08051 | 2921821444 | In this paper, we proposed a novel Identity-free conditional Generative Adversarial Network (IF-GAN) to explicitly reduce inter-subject variations for facial expression recognition. Specifically, for any given input face image, a conditional generative model was developed to transform an average neutral face, which is calculated from various subjects showing neutral expressions, to an average expressive face with the same expression as the input image. Since the transformed images have the same synthetic "average" identity, they differ from each other by only their expressions and thus, can be used for identity-free expression classification. In this work, an end-to-end system was developed to perform expression transformation and expression recognition in the IF-GAN framework. Experimental results on three facial expression datasets have demonstrated that the proposed IF-GAN outperforms the baseline CNN model and achieves comparable or better performance compared with the state-of-the-art methods for facial expression recognition. | Facial expression recognition has been widely studied in the past decades as detailed in the recent surveys @cite_4 @cite_8 @cite_2 . One of the major steps is to capture the most discriminative features that characterize appearance and geometric facial changes caused by target expressions. These features can be roughly divided into two main categories: human-designed and learned features. Recently, deep CNNs have achieved promising results for facial expression recognition @cite_2 . However, the learned expression-related deep features are often affected by individual differences in facial attributes affected by gender, race, age, etc. As a result, performance of expression recognition usually degrades on unseen subjects. Although great progress has been achieved in feature classifier selections, the challenge caused by inter-subject variations still remains for facial expression recognition. | {
"cite_N": [
"@cite_2",
"@cite_4",
"@cite_8"
],
"mid": [
"2799041689",
"1965947362",
"2737559518"
],
"abstract": [
"With the transition of facial expression recognition (FER) from laboratory-controlled to challenging in-the-wild conditions and the recent success of deep learning techniques in various fields, deep neural networks have increasingly been leveraged to learn discriminative representations for automatic FER. Recent deep FER systems generally focus on two important issues: overfitting caused by a lack of sufficient training data and expression-unrelated variations, such as illumination, head pose and identity bias. In this paper, we provide a comprehensive survey on deep FER, including datasets and algorithms that provide insights into these intrinsic problems. First, we describe the standard pipeline of a deep FER system with the related background knowledge and suggestions of applicable implementations for each stage. We then introduce the available datasets that are widely used in the literature and provide accepted data selection and evaluation principles for these datasets. For the state of the art in deep FER, we review existing novel deep neural networks and related training strategies that are designed for FER based on both static images and dynamic image sequences, and discuss their advantages and limitations. Competitive performances on widely used benchmarks are also summarized in this section. We then extend our survey to additional related issues and application scenarios. Finally, we review the remaining challenges and corresponding opportunities in this field as well as future directions for the design of robust deep FER systems.",
"Automatic affect analysis has attracted great interest in various contexts including the recognition of action units and basic or non-basic emotions. In spite of major efforts, there are several open questions on what the important cues to interpret facial expressions are and how to encode them. In this paper, we review the progress across a range of affect recognition applications to shed light on these fundamental questions. We analyse the state-of-the-art solutions by decomposing their pipelines into fundamental components, namely face registration, representation, dimensionality reduction and recognition. We discuss the role of these components and highlight the models and new trends that are followed in their design. Moreover, we provide a comprehensive analysis of facial representations by uncovering their advantages and limitations; we elaborate on the type of information they encode and discuss how they deal with the key challenges of illumination variations, registration errors, head-pose variations, occlusions, and identity bias. This survey allows us to identify open issues and to define future directions for designing real-world affect recognition systems.",
"As one of the most comprehensive and objective ways to describe facial expressions, the Facial Action Coding System (FACS) has recently received significant attention. Over the past 30 years, extensive research has been conducted by psychologists and neuroscientists on various aspects of facial expression analysis using FACS. Automating FACS coding would make this research faster and more widely applicable, opening up new avenues to understanding how we communicate through facial expressions. Such an automated process can also potentially increase the reliability, precision and temporal resolution of coding. This paper provides a comprehensive survey of research into machine analysis of facial actions. We systematically review all components of such systems: pre-processing, feature extraction and machine coding of facial actions. In addition, the existing FACS-coded facial expression databases are summarised. Finally, challenges that have to be addressed to make automatic facial action analysis applicable in real-life situations are extensively discussed. There are two underlying motivations for us to write this survey paper: the first is to provide an up-to-date review of the existing literature, and the second is to offer some insights into the future of machine recognition of facial actions: what are the challenges and opportunities that researchers in the field face."
]
} |
1903.08051 | 2921821444 | In this paper, we proposed a novel Identity-free conditional Generative Adversarial Network (IF-GAN) to explicitly reduce inter-subject variations for facial expression recognition. Specifically, for any given input face image, a conditional generative model was developed to transform an average neutral face, which is calculated from various subjects showing neutral expressions, to an average expressive face with the same expression as the input image. Since the transformed images have the same synthetic "average" identity, they differ from each other by only their expressions and thus, can be used for identity-free expression classification. In this work, an end-to-end system was developed to perform expression transformation and expression recognition in the IF-GAN framework. Experimental results on three facial expression datasets have demonstrated that the proposed IF-GAN outperforms the baseline CNN model and achieves comparable or better performance compared with the state-of-the-art methods for facial expression recognition. | Approaches @cite_25 @cite_19 were designed to learn discriminative features for facial expression recognition by reducing the intra-class variations while increasing the inter-class differences simultaneously. More recently, there are a few approaches focusing on explicitly improving person-independent facial expression recognition. An Identity-Aware CNN (IACNN) @cite_15 was proposed to alleviate variations introduced by identity-related information using an expression-sensitive contrastive loss and an identity-sensitive contrastive loss. However, the contrastive loss suffers from drastic data expansion when constructing image pairs from the training data. 
An Identity-Adaptive Generation (IA-gen) method @cite_17 was developed to generate person-dependent facial expression images such that any given input facial image is transferred to six expressive images of the same subject using six conditional GANs (cGANs). Then, expression classification is performed by comparing the input image with the six generated expressive images. De-expression Residue Learning (DeRL) @cite_31 also utilized the cGAN to synthesize a neutral facial image of the same identity from any input expressive image, while the person-independent expression information can be extracted from the intermediate layers of the generative model. However, the aforementioned cGAN-based models @cite_17 @cite_31 are not end-to-end networks and suffer from expensive computational cost. | {
"cite_N": [
"@cite_31",
"@cite_19",
"@cite_15",
"@cite_25",
"@cite_17"
],
"mid": [
"2798583514",
"2963712289",
"2730601341",
"2738672149",
"2805080735"
],
"abstract": [
"A facial expression is a combination of an expressive component and a neutral component of a person. In this paper, we propose to recognize facial expressions by extracting information of the expressive component through a de-expression learning procedure, called De-expression Residue Learning (DeRL). First, a generative model is trained by cGAN. This model generates the corresponding neutral face image for any input face image. We call this procedure de-expression because the expressive information is filtered out by the generative model; however, the expressive information is still recorded in the intermediate layers. Given the neutral face image, unlike previous works using pixel-level or feature-level difference for facial expression classification, our new method learns the deposition (or residue) that remains in the intermediate layers of the generative model. Such a residue is essential as it contains the expressive component deposited in the generative model from any input facial expression images. Seven public facial expression databases are employed in our experiments. With two databases (BU-4DFE and BP4D-spontaneous) for pre-training, the DeRL method has been evaluated on five databases, CK+, Oulu-CASIA, MMI, BU-3DFE, and BP4D+. The experimental results demonstrate the superior performance of the proposed method.",
"Over the past few years, Convolutional Neural Networks (CNNs) have shown promise on facial expression recognition. However, the performance degrades dramatically under real-world settings due to variations introduced by subtle facial appearance changes, head pose variations, illumination changes, and occlusions. In this paper, a novel island loss is proposed to enhance the discriminative power of deeply learned features. Specifically, the island loss is designed to reduce the intra-class variations while enlarging the inter-class differences simultaneously. Experimental results on four benchmark expression databases have demonstrated that the CNN with the proposed island loss (IL-CNN) outperforms the baseline CNN models with either traditional softmax loss or center loss and achieves comparable or better performance compared with the state-of-the-art methods for facial expression recognition.",
"Facial expression recognition suffers under real-world conditions, especially on unseen subjects due to high inter-subject variations. To alleviate variations introduced by personal attributes and achieve better facial expression recognition performance, a novel identity-aware convolutional neural network (IACNN) is proposed. In particular, a CNN with a new architecture is employed as individual streams of a bi-stream identity-aware network. An expression-sensitive contrastive loss is developed to measure the expression similarity to ensure the features learned by the network are invariant to expression variations. More importantly, an identity-sensitive contrastive loss is proposed to learn identity-related information from identity labels to achieve identity-invariant expression recognition. Extensive experiments on three public databases including a spontaneous facial expression database have shown that the proposed IACNN achieves promising results in real world.",
"Past research on facial expressions have used relatively limited datasets, which makes it unclear whether current methods can be employed in real world. In this paper, we present a novel database, RAF-DB, which contains about 30000 facial images from thousands of individuals. Each image has been individually labeled about 40 times, then EM algorithm was used to filter out unreliable labels. Crowdsourcing reveals that real-world faces often express compound emotions, or even mixture ones. For all we know, RAF-DB is the first database that contains compound expressions in the wild. Our cross-database study shows that the action units of basic emotions in RAF-DB are much more diverse than, or even deviate from, those of lab-controlled ones. To address this problem, we propose a new DLP-CNN (Deep Locality-Preserving CNN) method, which aims to enhance the discriminative power of deep features by preserving the locality closeness while maximizing the inter-class scatters. The benchmark experiments on the 7-class basic expressions and 11-class compound expressions, as well as the additional experiments on SFEW and CK+ databases, show that the proposed DLP-CNN outperforms the state-of-the-art handcrafted features and deep learning based methods for the expression recognition in the wild.",
"Subject variation is a challenging issue for facial expression recognition, especially when handling unseen subjects with small-scale labeled facial expression databases. Although transfer learning has been widely used to tackle the problem, the performance degrades on new data. In this paper, we present a novel approach (so-called IA-gen) to alleviate the issue of subject variations by regenerating expressions from any input facial images. First of all, we train conditional generative models to generate six prototypic facial expressions from any given query face image while keeping the identity related information unchanged. Generative Adversarial Networks are employed to train the conditional generative models, and each of them is designed to generate one of the prototypic facial expression images. Second, a regular CNN (FER-Net) is fine-tuned for expression classification. After the corresponding prototypic facial expressions are regenerated from each facial image, we output the last FC layer of FER-Net as features for both the input image and the generated images. Based on the minimum distance between the input image and the generated expression images in the feature space, the input image is classified as one of the prototypic expressions consequently. Our proposed method can not only alleviate the influence of inter-subject variations, but will also be flexible enough to integrate with any other FER CNNs for person-independent facial expression recognition. Our method has been evaluated on CK+, Oulu-CASIA, BU-3DFE and BU-4DFE databases, and the results demonstrate the effectiveness of our proposed method."
]
} |
1903.07748 | 2921130045 | Joining trajectory datasets is a significant operation in mobility data analytics and the cornerstone of various methods that aim to extract knowledge out of them. In the era of Big Data, the production of mobility data has become massive and, consequently, performing such an operation in a centralized way is not feasible. In this paper, we address the problem of Distributed Subtrajectory Join processing by utilizing the MapReduce programming model. Compared to traditional trajectory join queries, this problem is even more challenging since the goal is to retrieve all the "maximal" portions of trajectories that are "similar". We propose three solutions: (i) a well-designed basic solution, coined DTJb, (ii) a solution that uses a preprocessing step that repartitions the data, labeled DTJr, and (iii) a solution that, additionally, employs an indexing scheme, named DTJi. In our experimental study, we utilize a 56GB dataset of real trajectories from the maritime domain, which, to the best of our knowledge, is the largest real dataset used for experimentation in the literature of trajectory data management. The results show that DTJi performs up to 16x faster compared with DTJb, 10x faster than DTJr and 3x faster than the closest related state of the art algorithm. | A similar but different problem is the one of trajectory similarity join, where the goal is to retrieve all pairs of trajectories that exceed a given similarity threshold as in @cite_32 and @cite_20 . However, both of them return as a result pairs of trajectories and not subtrajectories, thus they cannot support some of the scenarios presented in . An approach very similar to ours is presented in @cite_23 , where, given a pair of trajectories they try to perform partial matching, finding the most similar subtrajectories between these two trajectories. Different variations of the problem are presented, where the duration of the "match" is specified beforehand or not. 
Nevertheless, the problem in @cite_23 is not a join operation and temporal tolerance is not considered. To sum up, all of the above approaches are centralized and applying them to a parallel and distributed environment is non-trivial. | {
"cite_N": [
"@cite_23",
"@cite_32",
"@cite_20"
],
"mid": [
"2042968733",
"2576109540",
"2106659141"
],
"abstract": [
"A natural time-dependent similarity measure for two trajectories is their average distance at corresponding times. We give algorithms for computing the most similar subtrajectories under this measure, assuming the two trajectories are given as two polygonal, possibly self-intersecting lines with time stamps. For the case when a minimum duration of the subtrajectories is specified and the subtrajectories must start at corresponding times, we give a linear-time algorithm. The algorithm is based on a result of independent interest: We present a linear-time algorithm to find, for a piece-wise monotone function, an interval of at least a given length that has minimum average value. In the case that the subtrajectories may start at non-corresponding times, it appears difficult to give exact algorithms, even if the duration of the subtrajectories is fixed. For this case, we give (1+ε)-approximation algorithms, for both fixed duration and when only a minimum duration is specified.",
"Emerging vehicular trajectory data have opened up opportunities to benefit many real-world applications, e.g., frequent trajectory based navigation systems, road planning, car pooling, etc. The similarity join is a key operation to enable such applications, which finds similar trajectory pairs from two large collections of trajectories. Existing similarity metrics on trajectories rely on aligning sampling points of two trajectories. However, due to different sampling rates or different vehicular speeds, the sample points in similar trajectories may not be aligned. To address this problem, we propose a new bi-directional mapping similarity ( @math ), which allows a sample point of a trajectory to align to the closest location (which may not be a sample point) on the other trajectory, and vice versa. Since it is expensive to enumerate every two trajectories and compute their similarity, we propose Strain-Join , a signature-based trajectory similarity join framework. Strain-Join first generates signatures for each trajectory such that if two trajectories do not share common signatures, they cannot be similar. In order to utilize this property to prune dissimilar pairs, we devise several techniques to generate high-quality signatures and propose an efficient filtering algorithm to prune dissimilar pairs. For the pairs not pruned by the filtering algorithm, we propose effective verification algorithms to verify whether they are similar. Experimental results on real datasets show that our algorithm outperforms state-of-the-art techniques in terms of both effectiveness and efficiency.",
"We address the problem of performing efficient similarity join for large sets of moving objects trajectories. Unlike previous approaches which use a dedicated index in a transformed space, our premise is that in many applications of location-based services, the trajectories are already indexed in their native space, in order to facilitate the processing of common spatio-temporal queries, e.g., range, nearest neighbor etc. We introduce a novel distance measure adapted from the classic Frechet distance, which can be naturally extended to support lower upper bounding using the underlying indices of moving object databases in the native space. This, in turn, enables efficient implementation of various trajectory similarity joins. We report on extensive experiments demonstrating that our methodology provides performance speed-up of trajectory similarity join by more than 50 on average, while maintaining effectiveness comparable to the well-known approaches for identifying trajectory similarity based on time-series analysis."
]
} |
1903.07971 | 2922223813 | In this paper we present a convergence rate analysis of inexact variants of several randomized iterative methods. Among the methods studied are: stochastic gradient descent, stochastic Newton, stochastic proximal point and stochastic subspace ascent. A common feature of these methods is that in their update rule a certain sub-problem needs to be solved exactly. We relax this requirement by allowing for the sub-problem to be solved inexactly. In particular, we propose and analyze inexact randomized iterative methods for solving three closely related problems: a convex stochastic quadratic optimization problem, a best approximation problem and its dual, a concave quadratic maximization problem. We provide iteration complexity results under several assumptions on the inexactness error. Inexact variants of many popular and some more exotic methods, including randomized block Kaczmarz, randomized Gaussian Kaczmarz and randomized block coordinate descent, can be cast as special cases. Numerical experiments demonstrate the benefits of allowing inexactness. | In the area of deterministic algorithms, the inexact variant of the full gradient descent method, @math , has received a lot of attention @cite_19 @cite_52 @cite_46 @cite_64 @cite_39 . It has been analyzed for the cases of convex and strongly convex functions under several meaningful assumptions on the inexactness error @math and its practical benefit compared to the exact gradient descent is apparent. For further deterministic inexact methods check @cite_24 for Inexact Newton methods, @cite_5 @cite_13 for Inexact Proximal Point methods and @cite_32 for Inexact Fixed point methods. | {
"cite_N": [
"@cite_64",
"@cite_52",
"@cite_32",
"@cite_39",
"@cite_24",
"@cite_19",
"@cite_5",
"@cite_46",
"@cite_13"
],
"mid": [
"",
"",
"1855683893",
"2033121805",
"2107501462",
"2168914046",
"2084777452",
"2962834995",
""
],
"abstract": [
"",
"",
"We analyze inexact fixed-point iterations where the generating function contains an inexact solve of an equation system to answer the question of how tolerances for the inner solves influence the iteration error of the outer fixed-point iteration. Important applications are the Picard iteration and partitioned fluid-structure interaction. For the analysis, the iteration is modeled as a perturbed fixed-point iteration, and existing analysis is extended to the nested case x=F(S(x)). We prove that if the iteration converges, it converges to the exact solution irrespective of the tolerance in the inner systems, provided that a nonstandard relative termination criterion is employed, whereas standard relative and absolute criteria do not have this property. Numerical results demonstrate the effectiveness of the approach with the nonstandard termination criterion.",
"We propose and analyze two dual methods based on inexact gradient information and averaging that generate approximate primal solutions for smooth convex problems. The complicating constraints are moved into the cost using the Lagrange multipliers. The dual problem is solved by inexact first-order methods based on approximate gradients for which we prove sublinear rate of convergence. In particular, we provide a complete rate analysis and estimates on the primal feasibility violation and primal and dual suboptimality of the generated approximate primal and dual solutions. Moreover, we solve approximately the inner problems with a linearly convergent parallel coordinate descent algorithm. Our analysis relies on the Lipschitz property of the dual function and inexact dual gradients. Further, we combine these methods with dual decomposition and constraint tightening and apply this framework to linear model predictive control obtaining a suboptimal and feasible control scheme.",
"A classical algorithm for solving the system of nonlinear equations @math is Newton’s method: x_{k+1} = x_k + s_k, where F'(x_k) s_k = -F(x_k), with x_0 given...",
"We consider the problem of optimizing the sum of a smooth convex function and a non-smooth convex function using proximal-gradient methods, where an error is present in the calculation of the gradient of the smooth term or in the proximity operator with respect to the non-smooth term. We show that both the basic proximal-gradient method and the accelerated proximal-gradient method achieve the same convergence rate as in the error-free case, provided that the errors decrease at appropriate rates. Using these rates, we perform as well as or better than a carefully chosen fixed error level on a set of structured sparsity problems.",
"We present a unified framework for the design and convergence analysis of a class of algorithms based on approximate solution of proximal point subproblems. Our development further enhances the constructive approximation approach of the recently proposed hybrid projection–proximal and extragradient–proximal methods. Specifically, we introduce an even more flexible error tolerance criterion, as well as provide a unified view of these two algorithms. Our general method possesses global convergence and local (super)linear rate of convergence under standard assumptions, while using a constructive approximation criterion suitable for a number of specific implementations. For example, we show that close to a regular solution of a monotone system of semismooth equations, two Newton iterations are sufficient to solve the proximal subproblem within the required error tolerance. Such systems of equations arise naturally when reformulating the nonlinear complementarity problem. *Research of the first author is suppo...",
"Many recent applications in machine learning and data fitting call for the algorithmic solution of structured smooth convex optimization problems. Although the gradient descent method is a natural choice for this task, it requires exact gradient computations and hence can be inefficient when the problem size is large or the gradient is difficult to evaluate. Therefore, there has been much interest in inexact gradient methods (IGMs), in which an efficiently computable approximate gradient is used to perform the update in each iteration. Currently, non-asymptotic linear convergence results for IGMs are typically established under the assumption that the objective function is strongly convex, which is not satisfied in many applications of interest; while linear convergence results that do not require the strong convexity assumption are usually asymptotic in nature. In this paper, we combine the best of these two types of results by developing a framework for analysing the non-asymptotic convergence rates of ...",
""
]
} |
1903.07971 | 2922223813 | In this paper we present a convergence rate analysis of inexact variants of several randomized iterative methods. Among the methods studied are: stochastic gradient descent, stochastic Newton, stochastic proximal point and stochastic subspace ascent. A common feature of these methods is that in their update rule a certain sub-problem needs to be solved exactly. We relax this requirement by allowing for the sub-problem to be solved inexactly. In particular, we propose and analyze inexact randomized iterative methods for solving three closely related problems: a convex stochastic quadratic optimization problem, a best approximation problem and its dual, a concave quadratic maximization problem. We provide iteration complexity results under several assumptions on the inexactness error. Inexact variants of many popular and some more exotic methods, including randomized block Kaczmarz, randomized Gaussian Kaczmarz and randomized block coordinate descent, can be cast as special cases. Numerical experiments demonstrate the benefits of allowing inexactness. | Finally an analysis of approximate stochastic gradient descent for solving the empirical risk minimization problem using quadratic constraints and sequential semi-definite programs has been presented in @cite_61 . | {
"cite_N": [
"@cite_61"
],
"mid": [
"2766326610"
],
"abstract": [
"We present convergence rate analysis for the approximate stochastic gradient method, where individual gradient updates are corrupted by computation errors. We develop stochastic quadratic constraints to formulate a small linear matrix inequality (LMI) whose feasible set characterizes convergence properties of the approximate stochastic gradient. Based on this LMI condition, we develop a sequential minimization approach to analyze the intricate trade-offs that couple stepsize selection, convergence rate, optimization accuracy, and robustness to gradient inaccuracy. We also analytically solve this LMI condition and obtain theoretical formulas that quantify the convergence properties of the approximate stochastic gradient under various assumptions on the loss functions."
]
} |
1903.07757 | 2922235480 | Distance distributions are a key building block in stochastic geometry modelling of wireless networks and in many other fields in mathematics and science. In this paper, we propose a novel framework for analytically computing the closed form probability density function (PDF) of the distance between two random nodes each uniformly randomly distributed in respective arbitrary (convex or concave) polygon regions (which may be disjoint or overlap or coincide). The proposed framework is based on measure theory and uses polar decomposition for simplifying and calculating the integrals to obtain closed form results. We validate our proposed framework by comparison with simulations and published closed form results in the literature for simple cases. We illustrate the versatility and advantage of the proposed framework by deriving closed form results for a case not yet reported in the literature. Finally, we also develop a Mathematica implementation of the proposed framework which allows a user to define any two arbitrary polygons and conveniently determine the distance distribution numerically. | In general, there are two types of distance distributions that are needed in the stochastic geometry modelling @cite_26 @cite_15 : (i) the distribution of the distance between a given reference node (located inside or outside the cell) and a random node located inside a cell, and (ii) the distribution of the distance between two random nodes (located in the same or different cells). An example of the former is the nearest neighbour distance distribution when the reference node (e.g., a base station) is located at the center of the cell. An example of the latter is the distribution of the distance between two randomly located device or machine type nodes in the same or different cells. | {
"cite_N": [
"@cite_15",
"@cite_26"
],
"mid": [
"1989451648",
"2752397115"
],
"abstract": [
"This paper derives the exact cumulative density function (cdf) of the distance between a randomly located node and any arbitrary reference point inside a regular L-sided polygon. Using this result, we obtain the closed-form probability density function of the Euclidean distance between any arbitrary reference point and its nth neighbor node when N nodes are uniformly and independently distributed inside a regular L-sided polygon. First, we exploit the rotational symmetry of the regular polygons and quantify the effect of polygon sides and vertices on the distance distributions. Then, we propose an algorithm to determine the distance distributions, given any arbitrary location of the reference point inside the polygon. For the special case when the arbitrary reference point is located at the center of the polygon, our framework reproduces the existing result in the literature.",
"Most performance metrics in wireless networks, such as outage probability, link capacity, etc., are functions of the distances between communicating interfering nodes. A probabilistic distance-based model is definitely needed in quantifying these metrics, which eventually involves the nodal distance distribution (NDD) in a finite network intrinsically depending on the network coverage and nodal spatial distribution. Recently, the NDD from a reference node to a uniformly distributed node has been extended to the networks in the shape of arbitrary polygons. In contrast, the NDD between two uniformly distributed nodes (Ran2Ran) is still confined to the networks in certain specific shapes, including disks, triangles, rectangles, rhombuses, trapezoids, and regular polygons, which greatly limits its applicable network scenarios. By extending a tool in integral geometry, called Kinematic Measure, and using decomposition and recursion methods, this paper shows a systematic, algorithmic approach to Ran2Ran NDDs, which can handle arbitrarily-shaped networks, including convex, concave, disjoint, and tiered networks. Besides validating our approach through extensive simulations and comparisons with the known results if applicable, we also demonstrate its potentials in handling nonuniform nodal distributions, and in modeling two wireless networks of particular interest in the current literature, where the existing approaches are inapplicable."
]
} |
1903.07563 | 2921334407 | Video activity Recognition has recently gained a lot of momentum with the release of massive Kinetics (400 and 600) data. Architectures such as I3D and C3D networks have shown state-of-the-art performances for activity recognition. The one major pitfall with these state-of-the-art networks is that they require a lot of compute. In this paper we explore how we can achieve comparable results to these state-of-the-art networks for devices-on-edge. We primarily explore two architectures - I3D and Temporal Segment Network. We show that comparable results can be achieved using one tenth the memory usage by changing the testing procedure. We also report our results on Resnet architecture as our backbone apart from the original Inception architecture. Specifically, we achieve 84.54 top-1 accuracy on UCF-101 dataset using only RGB frames. | While LSTMs or RNNs may be able to encode high-level temporal information, they fail to attend to minor changes in sequences. Two-stream networks @cite_2 achieve good response to small motion as well by using two streams: one for the RGB input and the other, namely the flow stream. The flow stream is computed from the optical flow of the original video. Optical flow can capture minute variations in the input and can help learn low-level motion, which can be critical in many cases. | {
"cite_N": [
"@cite_2"
],
"mid": [
"2952186347"
],
"abstract": [
"We investigate architectures of discriminatively trained deep Convolutional Networks (ConvNets) for action recognition in video. The challenge is to capture the complementary information on appearance from still frames and motion between frames. We also aim to generalise the best performing hand-crafted features within a data-driven learning framework. Our contribution is three-fold. First, we propose a two-stream ConvNet architecture which incorporates spatial and temporal networks. Second, we demonstrate that a ConvNet trained on multi-frame dense optical flow is able to achieve very good performance in spite of limited training data. Finally, we show that multi-task learning, applied to two different action classification datasets, can be used to increase the amount of training data and improve the performance on both. Our architecture is trained and evaluated on the standard video actions benchmarks of UCF-101 and HMDB-51, where it is competitive with the state of the art. It also exceeds by a large margin previous attempts to use deep nets for video classification."
]
} |
1903.07427 | 2920816188 | Research in neural networks in the field of computer vision has achieved remarkable accuracy for point estimation. However, the uncertainty in the estimation is rarely addressed. Uncertainty quantification accompanied by point estimation can lead to a more informed decision, and even improve the prediction quality. In this work, we focus on uncertainty estimation in the domain of crowd counting. We propose a scalable neural network framework with quantification of decomposed uncertainty using a bootstrap ensemble. We demonstrate that the proposed uncertainty quantification method provides additional insight to the crowd counting problem and is simple to implement. We also show that our proposed method outperforms the current state of the art method in many benchmark data sets. To the best of our knowledge, we have the best system for ShanghaiTech part A and B, UCF CC 50, UCSD, and UCF-QNRF datasets. | Density maps, originally proposed in @cite_32 , preserve both the count and spatial distribution of the crowd, and have been shown effective at object counting in crowd scenes. In an object density map, the integral over any sub-region is the number of objects within the corresponding region in the image. Density-based methods are generally better at handling cases where objects are severely occluded by bypassing the hard detection of every object, while also maintaining some spatial information about the crowd. @cite_32 proposes a method which learns a linear mapping between the image feature and the density map. @cite_69 proposes learning a non-linear mapping using random forest regression. However, earlier approaches still depended on hand-crafted features. | {
"cite_N": [
"@cite_69",
"@cite_32"
],
"mid": [
"2207893099",
"2145983039"
],
"abstract": [
"This paper presents a patch-based approach for crowd density estimation in public scenes. We formulate the problem of estimating density in a structured learning framework applied to random decision forests. Our approach learns the mapping between patch features and relative locations of all objects inside each patch, which contribute to generate the patch density map through Gaussian kernel density estimation. We build the forest in a coarse-to-fine manner with two split node layers, and further propose a crowdedness prior and an effective forest reduction method to improve the estimation accuracy and speed. Moreover, we introduce a semi-automatic training method to learn the estimator for a specific scene. We achieved state-of-the-art results on the public Mall dataset and UCSD dataset, and also proposed two potential applications in traffic counts and scene understanding with promising results.",
"We propose a new supervised learning framework for visual object counting tasks, such as estimating the number of cells in a microscopic image or the number of humans in surveillance video frames. We focus on the practically-attractive case when the training images are annotated with dots (one dot per object). Our goal is to accurately estimate the count. However, we evade the hard task of learning to detect and localize individual object instances. Instead, we cast the problem as that of estimating an image density whose integral over any image region gives the count of objects within that region. Learning to infer such density can be formulated as a minimization of a regularized risk quadratic cost function. We introduce a new loss function, which is well-suited for such learning, and at the same time can be computed efficiently via a maximum subarray algorithm. The learning can then be posed as a convex quadratic program solvable with cutting-plane optimization. The proposed framework is very flexible as it can accept any domain-specific visual features. Once trained, our system provides accurate object counts and requires a very small time overhead over the feature extraction step, making it a good candidate for applications involving real-time processing or dealing with huge amount of visual data."
]
} |
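The density-map property described in the row above — the integral over any sub-region equals the object count in that region — can be sketched with plain NumPy. This is an illustrative toy, not the cited papers' pipeline: each dot annotation contributes a Gaussian normalized to unit mass, and the head coordinates are made up for the example.

```python
import numpy as np

def density_map(points, shape, sigma=4.0):
    """Build an object density map from dot annotations (one dot per head).

    Each dot contributes a Gaussian normalized to unit mass over the
    image, so the integral of the map over the whole image equals the
    object count, and the integral over a sub-region approximates the
    count inside that region.
    """
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    dm = np.zeros(shape)
    for y, x in points:
        g = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2.0 * sigma ** 2))
        dm += g / g.sum()          # unit mass per annotated object
    return dm

heads = [(20, 20), (30, 40), (45, 15)]      # hypothetical dot annotations
dm = density_map(heads, (64, 64))
print(round(float(dm.sum()), 6))            # integral equals the count: 3.0
```

A network trained on such targets regresses the map and recovers the count by summation, bypassing per-object detection as the text notes.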
1903.07427 | 2920816188 | Research in neural networks in the field of computer vision has achieved remarkable accuracy for point estimation. However, the uncertainty in the estimation is rarely addressed. Uncertainty quantification accompanied by point estimation can lead to a more informed decision, and even improve the prediction quality. In this work, we focus on uncertainty estimation in the domain of crowd counting. We propose a scalable neural network framework with quantification of decomposed uncertainty using a bootstrap ensemble. We demonstrate that the proposed uncertainty quantification method provides additional insight to the crowd counting problem and is simple to implement. We also show that our proposed method outperforms the current state of the art method in many benchmark data sets. To the best of our knowledge, we have the best system for ShanghaiTech part A and B, UCF CC 50, UCSD, and UCF-QNRF datasets. | In recent years, the CNN-based methods with density targets have shown performance superior to the traditional methods based on handcrafted features @cite_38 @cite_3 @cite_67 . To address perspective issues, @cite_1 leverages a multi-column network using convolution filters with different sizes in each column to generate the density map. As a different approach to address perspective issues, @cite_62 proposes taking a pyramid of input patches into a network. @cite_24 improves over @cite_1 and uses a switching layer to classify the crowd into three classes depending on crowd density and to select one of three regressor networks for actual counting. @cite_60 incorporates a multi-task objective, jointly estimating the density map and the total count by connecting fully convolutional networks and recurrent networks (LSTM). @cite_6 uses global and local context to generate high-quality density maps. 
@cite_35 introduces the dilated convolution to aggregate multi-scale contextual information and utilizes a much deeper architecture from VGG-16 @cite_28 . @cite_8 proposes an encoder-decoder network with the encoder extracting multi-scale features with scale aggregation modules and the decoder generating density maps by using a set of transposed convolutions. | {
"cite_N": [
"@cite_38",
"@cite_67",
"@cite_62",
"@cite_35",
"@cite_60",
"@cite_28",
"@cite_8",
"@cite_1",
"@cite_3",
"@cite_6",
"@cite_24"
],
"mid": [
"2058907003",
"",
"2519281173",
"2964209782",
"2962854645",
"1686810756",
"2895051362",
"2463631526",
"1978232622",
"2743112477",
"2741077351"
],
"abstract": [
"As an effective way for crowd control and management, crowd density estimation is an important research topic in artificial intelligence applications. Since the existing methods are hard to satisfy the accuracy and speed requirements of engineering applications, we propose to estimate crowd density by an optimized convolutional neural network (ConvNet). The contributions are twofold: first, convolutional neural network is first introduced for crowd density estimation. The estimation speed is significantly accelerated by removing some network connections according to the observation of the existence of similar feature maps. Second, a cascade of two ConvNet classifier has been designed, which improves both of the accuracy and speed. The method is tested on three data sets: PETS_2009, a Subway image sequence and a ground truth image sequence. Experiments confirm the good performance of the method on the same data sets compared with the state of the art works.",
"",
"In this paper we address the problem of counting objects instances in images. Our models are able to precisely estimate the number of vehicles in a traffic congestion, or to count the humans in a very crowded scene. Our first contribution is the proposal of a novel convolutional neural network solution, named Counting CNN (CCNN). Essentially, the CCNN is formulated as a regression model where the network learns how to map the appearance of the image patches to their corresponding object density maps. Our second contribution consists in a scale-aware counting model, the Hydra CNN, able to estimate object densities in different very crowded scenarios where no geometric information of the scene can be provided. Hydra CNN learns a multiscale non-linear regression model which uses a pyramid of image patches extracted at multiple scales to perform the final density prediction. We report an extensive experimental evaluation, using up to three different object counting benchmarks, where we show how our solutions achieve a state-of-the-art performance.",
"We propose a network for Congested Scene Recognition called CSRNet to provide a data-driven and deep learning method that can understand highly congested scenes and perform accurate count estimation as well as present high-quality density maps. The proposed CSRNet is composed of two major components: a convolutional neural network (CNN) as the front-end for 2D feature extraction and a dilated CNN for the back-end, which uses dilated kernels to deliver larger reception fields and to replace pooling operations. CSRNet is an easy-trained model because of its pure convolutional structure. We demonstrate CSRNet on four datasets (ShanghaiTech dataset, the UCF_CC_50 dataset, the WorldEXPO'10 dataset, and the UCSD dataset) and we deliver the state-of-the-art performance. In the ShanghaiTech Part_B dataset, CSRNet achieves 47.3 lower Mean Absolute Error (MAE) than the previous state-of-the-art method. We extend the targeted applications for counting other objects, such as the vehicle in TRANCOS dataset. Results show that CSRNet significantly improves the output quality with 15.4 lower MAE than the previous state-of-the-art approach.",
"In this paper, we develop deep spatio-temporal neural networks to sequentially count vehicles from low quality videos captured by city cameras (citycams). Citycam videos have low resolution, low frame rate, high occlusion and large perspective, making most existing methods lose their efficacy. To overcome limitations of existing methods and incorporate the temporal information of traffic video, we design a novel FCN-rLSTM network to jointly estimate vehicle density and vehicle count by connecting fully convolutional neural networks (FCN) with long short term memory networks (LSTM) in a residual learning fashion. Such design leverages the strengths of FCN for pixel-level prediction and the strengths of LSTM for learning complex temporal dynamics. The residual learning connection reformulates the vehicle count regression as learning residual functions with reference to the sum of densities in each frame, which significantly accelerates the training of networks. To preserve feature map resolution, we propose a Hyper-Atrous combination to integrate atrous convolution in FCN and combine feature maps of different convolution layers. FCN-rLSTM enables refined feature representation and a novel end-to-end trainable mapping from pixels to vehicle count. We extensively evaluated the proposed method on different counting tasks with three datasets, with experimental results demonstrating their effectiveness and robustness. In particular, FCN-rLSTM reduces the mean absolute error (MAE) from 5.31 to 4.21 on TRANCOS; and reduces the MAE from 2.74 to 1.53 on WebCamT. Training process is accelerated by 5 times on average.",
"In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.",
"In this paper, we propose a novel encoder-decoder network, called Scale Aggregation Network (SANet), for accurate and efficient crowd counting. The encoder extracts multi-scale features with scale aggregation modules and the decoder generates high-resolution density maps by using a set of transposed convolutions. Moreover, we find that most existing works use only Euclidean loss which assumes independence among each pixel but ignores the local correlation in density maps. Therefore, we propose a novel training loss, combining of Euclidean loss and local pattern consistency loss, which improves the performance of the model in our experiments. In addition, we use normalization layers to ease the training process and apply a patch-based test scheme to reduce the impact of statistic shift problem. To demonstrate the effectiveness of the proposed method, we conduct extensive experiments on four major crowd counting datasets and our method achieves superior performance to state-of-the-art methods while with much less parameters.",
"This paper aims to develop a method than can accurately estimate the crowd count from an individual image with arbitrary crowd density and arbitrary perspective. To this end, we have proposed a simple but effective Multi-column Convolutional Neural Network (MCNN) architecture to map the image to its crowd density map. The proposed MCNN allows the input image to be of arbitrary size or resolution. By utilizing filters with receptive fields of different sizes, the features learned by each column CNN are adaptive to variations in people head size due to perspective effect or image resolution. Furthermore, the true density map is computed accurately based on geometry-adaptive kernels which do not need knowing the perspective map of the input image. Since exiting crowd counting datasets do not adequately cover all the challenging situations considered in our work, we have collected and labelled a large new dataset that includes 1198 images with about 330,000 heads annotated. On this challenging new dataset, as well as all existing datasets, we conduct extensive experiments to verify the effectiveness of the proposed model and method. In particular, with the proposed simple MCNN model, our method outperforms all existing methods. In addition, experiments show that our model, once trained on one dataset, can be readily transferred to a new dataset.",
"People counting in extremely dense crowds is an important step for video surveillance and anomaly warning. The problem becomes especially more challenging due to the lack of training samples, severe occlusions, cluttered scenes and variation of perspective. Existing methods either resort to auxiliary human and face detectors or surrogate by estimating the density of crowds. Most of them rely on hand-crafted features, such as SIFT, HOG etc, and thus are prone to fail when density grows or the training sample is scarce. In this paper we propose an end-to-end deep convolutional neural networks (CNN) regression model for counting people of images in extremely dense crowds. Our method has following characteristics. Firstly, it is a deep model built on CNN to automatically learn effective features for counting. Besides, to weaken influence of background like buildings and trees, we purposely enrich the training data with expanded negative samples whose ground truth counting is set as zero. With these negative samples, the robustness can be enhanced. Extensive experimental results show that our method achieves superior performance than the state-of-the-arts in term of the mean and variance of absolute difference.",
"We present a novel method called Contextual Pyramid CNN (CP-CNN) for generating high-quality crowd density and count estimation by explicitly incorporating global and local contextual information of crowd images. The proposed CP-CNN consists of four modules: Global Context Estimator (GCE), Local Context Estimator (LCE), Density Map Estimator (DME) and a Fusion-CNN (F-CNN). GCE is a VGG-16 based CNN that encodes global context and it is trained to classify input images into different density classes, whereas LCE is another CNN that encodes local context information and it is trained to perform patch-wise classification of input images into different density classes. DME is a multi-column architecture-based CNN that aims to generate high-dimensional feature maps from the input image which are fused with the contextual information estimated by GCE and LCE using F-CNN. To generate high resolution and high-quality density maps, F-CNN uses a set of convolutional and fractionally-strided convolutional layers and it is trained along with the DME in an end-to-end fashion using a combination of adversarial loss and pixel-level Euclidean loss. Extensive experiments on highly challenging datasets show that the proposed method achieves significant improvements over the state-of-the-art methods.",
"We propose a novel crowd counting model that maps a given crowd scene to its density. Crowd analysis is compounded by myriad of factors like inter-occlusion between people due to extreme crowding, high similarity of appearance between people and background elements, and large variability of camera view-points. Current state-of-the art approaches tackle these factors by using multi-scale CNN architectures, recurrent networks and late fusion of features from multi-column CNN with different receptive fields. We propose switching convolutional neural network that leverages variation of crowd density within an image to improve the accuracy and localization of the predicted crowd count. Patches from a grid within a crowd scene are relayed to independent CNN regressors based on crowd count prediction quality of the CNN established during training. The independent CNN regressors are designed to have different receptive fields and a switch classifier is trained to relay the crowd scene patch to the best CNN regressor. We perform extensive experiments on all major crowd counting datasets and evidence better performance compared to current state-of-the-art methods. We provide interpretable representations of the multichotomy of space of crowd scene patches inferred from the switch. It is observed that the switch relays an image patch to a particular CNN column based on density of crowd."
]
} |
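The bootstrap-ensemble uncertainty quantification named in the paper's abstract above can be sketched on a toy problem. This is a minimal stand-in, not the paper's method: a 1-D linear regressor replaces the counting network, and the synthetic data and variable names are assumptions. Each ensemble member is fit on a resample of the data; disagreement across members estimates the epistemic (model) uncertainty accompanying the point estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the counting task: noisy 1-D data and a linear model,
# in place of crowd images and a neural network.
x = np.linspace(0.0, 1.0, 200)
y = 3.0 * x + rng.normal(0.0, 0.1, size=x.shape)

# Bootstrap ensemble: each member is trained on a resample of the
# training set drawn with replacement.
n_members = 20
member_preds = []
for _ in range(n_members):
    idx = rng.integers(0, len(x), size=len(x))   # bootstrap resample
    coef = np.polyfit(x[idx], y[idx], deg=1)
    member_preds.append(np.polyval(coef, x))
member_preds = np.stack(member_preds)

point_estimate = member_preds.mean(axis=0)   # ensemble point estimate
epistemic_var = member_preds.var(axis=0)     # spread across members
```

The same two quantities — ensemble mean and across-member variance — are what a bootstrap ensemble of counting networks would report per image.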
1903.07705 | 2922289793 | Visual object recognition under situations in which the direct line-of-sight is blocked, such as when it is occluded around the corner, is of practical importance in a wide range of applications. With coherent illumination, the light scattered from diffusive walls forms speckle patterns that contain information of the hidden object. It is possible to realize non-line-of-sight (NLOS) recognition with these speckle patterns. We introduce a novel approach based on speckle pattern recognition with deep neural network, which is simpler and more robust than other NLOS recognition methods. Simulations and experiments are performed to verify the feasibility and performance of this approach. | Recently, a few interesting works have demonstrated the formation of images of objects that are without direct line-of-sight. These methods overcome the traditional limitations of imaging optics, which cannot form clear images in the absence of direct line-of-sight conditions. These experiments are usually extensive, requiring non-traditional measurement of light, for example, by measuring time-of-flight (TOF) or in a setup that preserves the memory effect. However, imaging is not always needed for recognition. To perform imaging without line-of-sight, one would require expensive hardware and suffer from practical limitations such as a narrow field-of-view. In this study, we perform direct recognition without imaging the object. This aspect allows our method to overcome some of the critical practical limitations of the related imaging methods. The proposed method requires hardware (mostly consumer-grade electronics) that is far less expensive than that required for the TOF-based NLOS imaging @cite_39 @cite_3 @cite_5 @cite_31 @cite_35 @cite_32 @cite_14 @cite_28 @cite_22 , and the method is more robust than the memory-effect based imaging techniques that have a limited field-of-view @cite_27 @cite_11 . 
Moreover, a recent publication @cite_0 also uses only ordinary digital cameras but would require a very specific scene setup (an accidental occlusion) to obtain better performance. | {
"cite_N": [
"@cite_35",
"@cite_14",
"@cite_22",
"@cite_28",
"@cite_32",
"@cite_3",
"@cite_39",
"@cite_0",
"@cite_27",
"@cite_5",
"@cite_31",
"@cite_11"
],
"mid": [
"",
"2465446308",
"2118258703",
"2112858385",
"2464830972",
"2768431869",
"2299392318",
"2913322182",
"2786471625",
"1937534317",
"2015555006",
"2796125917"
],
"abstract": [
"",
"We propose a material classification method using raw time-of-flight (ToF) measurements. ToF cameras capture the correlation between a reference signal and the temporal response of material to incident illumination. Such measurements encode unique signatures of the material, i.e. the degree of subsurface scattering inside a volume. Subsequently, it offers an orthogonal domain of feature representation compared to conventional spatial and angular reflectance-based approaches. We demonstrate the effectiveness, robustness, and efficiency of our method through experiments and comparisons of real-world materials.",
"Global light transport is composed of direct and indirect components. In this paper, we take the first steps toward analyzing light transport using high temporal resolution information via time of flight (ToF) images. The time profile at each pixel encodes complex interactions between the incident light and the scene geometry with spatially-varying material properties. We exploit the time profile to decompose light transport into its constituent direct, subsurface scattering, and interreflection components. We show that the time profile is well modelled using a Gaussian function for the direct and interreflection components, and a decaying exponential function for the subsurface scattering component. We use our direct, subsurface scattering, and interreflection separation algorithm for four computer vision applications: recovering projective depth maps, identifying subsurface scattering objects, measuring parameters of analytical subsurface scattering models, and performing edge detection using ToF images.",
"An important goal in optics is to image objects hidden by turbid media, although line-of-sight techniques fail when the obscuring medium becomes opaque. use ultrafast imaging techniques to recover three-dimensional shapes of non-line-of-sight objects after reflection from diffuse surfaces.",
"Depth cameras are a ubiquitous technology used in a wide range of applications, including robotic and machine vision, human-computer interaction, autonomous vehicles as well as augmented and virtual reality. In this paper, we explore the design and applications of phased multi-camera time-of-flight (ToF) systems. We develop a reproducible hardware system that allows for the exposure times and waveforms of up to three cameras to be synchronized. Using this system, we analyze waveform interference between multiple light sources in ToF applications and propose simple solutions to this problem. Building on the concept of orthogonal frequency design, we demonstrate state-of-the-art results for instantaneous radial velocity capture via Doppler time-of-flight imaging and we explore new directions for optically probing global illumination, for example by de-scattering dynamic scenes and by non-line-of-sight motion detection via frequency gating.",
"Time-of-flight (ToF) imaging has become a widespread technique for depth estimation, allowing affordable off-the-shelf cameras to provide depth maps in real time. However, multipath interference (MPI) resulting from indirect illumination significantly degrades the captured depth. Most previous works have tried to solve this problem by means of complex hardware modifications or costly computations. In this work, we avoid these approaches and propose a new technique to correct errors in depth caused by MPI, which requires no camera modifications and takes just 10 milliseconds per frame. Our observations about the nature of MPI suggest that most of its information is available in image space; this allows us to formulate the depth imaging process as a spatially-varying convolution and use a convolutional neural network to correct MPI errors. Since the input and output data present similar structure, we base our network on an autoencoder, which we train in two stages. First, we use the encoder (convolution filters) to learn a suitable basis to represent MPI-corrupted depth images; then, we train the decoder (deconvolution filters) to correct depth from synthetic scenes, generated by using a physically-based, time-resolved renderer. This approach allows us to tackle a key problem in ToF, the lack of ground-truth data, by using a large-scale captured training set with MPI-corrupted depth to train the encoder, and a smaller synthetic training set with ground truth depth to train the decoder stage of the network. We demonstrate and validate our method on both synthetic and real complex scenarios, using an off-the-shelf ToF camera, and with only the captured, incorrect depth as input.",
"We explore the question of whether phase-based time-of-flight (TOF) range cameras can be used for looking around corners and through scattering diffusers. By connecting TOF measurements with theory from array signal processing, we conclude that performance depends on two primary factors: camera modulation frequency and the width of the specular lobe (“shininess”) of the wall. For purely Lambertian walls, commodity TOF sensors achieve resolution on the order of meters between targets. For seemingly diffuse walls, such as posterboard, the resolution is drastically reduced, to the order of 10cm. In particular, we find that the relationship between reflectance and resolution is nonlinear—a slight amount of shininess can lead to a dramatic improvement in resolution. Since many realistic scenes exhibit a slight amount of shininess, we believe that off-the-shelf TOF cameras can look around corners.",
"Computing the amounts of light arriving from different directions enables a diffusely reflecting surface to play the part of a mirror in a periscope—that is, perform non-line-of-sight imaging around an obstruction. Because computational periscopy has so far depended on light-travel distances being proportional to the times of flight, it has mostly been performed with expensive, specialized ultrafast optical systems1–12. Here we introduce a two-dimensional computational periscopy technique that requires only a single photograph captured with an ordinary digital camera. Our technique recovers the position of an opaque object and the scene behind (but not completely obscured by) the object, when both the object and scene are outside the line of sight of the camera, without requiring controlled or time-varying illumination. Such recovery is based on the visible penumbra of the opaque object having a linear dependence on the hidden scene that can be modelled through ray optics. Non-line-of-sight imaging using inexpensive, ubiquitous equipment may have considerable value in monitoring hazardous environments, navigation and detecting hidden adversaries.",
"Tracking moving targets behind a scattering medium is a challenge, and it has many important applications in various fields. Owing to the multiple scattering, instead of the object image, only a random speckle pattern can be received on the camera when light is passing through highly scattering layers. Significantly, an important feature of a speckle pattern has been found, and it showed the target information can be derived from the speckle correlation. In this work, inspired by the notions used in computer vision and deformation detection, by specific simulations and experiments, we demonstrate a simple object tracking method, in which by using the speckle correlation, the movement of a hidden object can be tracked in the lateral direction and axial direction. In addition, the rotation state of the moving target can also be recognized by utilizing the autocorrelation of a speckle. This work will be beneficial for biomedical applications in the fields of quantitative analysis of the working mechanisms of a micro-object and the acquisition of dynamical information of the micro-object motion.",
"Continuous-wave Time-of-flight (TOF) range imaging has become a commercially viable technology with many applications in computer vision and graphics. However, the depth images obtained from TOF cameras contain scene dependent errors due to multipath interference (MPI). Specifically, MPI occurs when multiple optical reflections return to a single spatial location on the imaging sensor. Many prior approaches to rectifying MPI rely on sparsity in optical reflections, which is an extreme simplification. In this paper, we correct MPI by combining the standard measurements from a TOF camera with information from direct and global light transport. We report results on both simulated experiments and physical experiments (using the Kinect sensor). Our results, evaluated against ground truth, demonstrate a quantitative improvement in depth accuracy.",
"This paper introduces the concept of time-of-flight reflectance estimation, and demonstrates a new technique that allows a camera to rapidly acquire reflectance properties of objects from a single view-point, over relatively long distances and without encircling equipment. We measure material properties by indirectly illuminating an object by a laser source, and observing its reflected light indirectly using a time-of-flight camera. The configuration collectively acquires dense angular, but low spatial sampling, within a limited solid angle range - all from a single viewpoint. Our ultra-fast imaging approach captures space-time \"streak images\" that can separate out different bounces of light based on path length. Entanglements arise in the streak images mixing signals from multiple paths if they have the same total path length. We show how reflectances can be recovered by solving for a linear system of equations and assuming parametric material models; fitting to lower dimensional reflectance models enables us to disentangle measurements. We demonstrate proof-of-concept results of parametric reflectance models for homogeneous and discretized heterogeneous patches, both using simulation and experimental hardware. As compared to lengthy or highly calibrated BRDF acquisition techniques, we demonstrate a device that can rapidly, on the order of seconds, capture meaningful reflectance information. We expect hardware advances to improve the portability and speed of this device.",
"We propose to measure intensity transmission matrices or point-spread-function (PSF) of diffusers via spatial-correlation, with no scanning or interferometric detection required. With the measured PSF, we report optical imaging based on the memory effect that allows tracking of moving objects through a scattering medium. Our technique enlarges the limited effective range of traditional imaging techniques based on the memory effect, and substitutes time-consuming iterative algorithms by a fast cross-correlation deconvolution method to greatly reduce time consumption for image reconstruction."
]
} |
1903.07705 | 2922289793 | Visual object recognition under situations in which the direct line-of-sight is blocked, such as when it is occluded around the corner, is of practical importance in a wide range of applications. With coherent illumination, the light scattered from diffusive walls forms speckle patterns that contain information of the hidden object. It is possible to realize non-line-of-sight (NLOS) recognition with these speckle patterns. We introduce a novel approach based on speckle pattern recognition with deep neural network, which is simpler and more robust than other NLOS recognition methods. Simulations and experiments are performed to verify the feasibility and performance of this approach. | NLOS imaging with TOF has recently received considerable attention. It is a range imaging system that resolves the distance based on measuring the TOF of a light signal between the object and the camera for each point of the image. The mechanism of TOF measurement without line-of-sight is as follows @cite_28 : A laser pulse hits a wall that scatters the light diffusely to a hidden object; then, the light returns to the wall and is captured by a camera. By changing the position of the laser beam on the wall with a set of galvanometer-actuated mirrors, the shape of the hidden object can be determined. | {
"cite_N": [
"@cite_28"
],
"mid": [
"2112858385"
],
"abstract": [
"An important goal in optics is to image objects hidden by turbid media, although line-of-sight techniques fail when the obscuring medium becomes opaque. use ultrafast imaging techniques to recover three-dimensional shapes of non-line-of-sight objects after reflection from diffuse surfaces."
]
} |
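The TOF mechanism in the row above rests on a simple geometric fact: the measured arrival time fixes the total length of the laser-wall-object-wall-camera path. A small sketch, with an entirely hypothetical geometry:

```python
import math

C = 299_792_458.0   # speed of light, m/s

def path_length(points):
    """Total geometric length of a multi-bounce light path."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

# Hypothetical geometry (metres): laser spot on the wall, hidden
# object, and the wall point observed by the camera.
laser_spot = (0.0, 0.0, 0.0)
hidden_obj = (1.0, 2.0, 0.5)
camera_spot = (0.5, 0.0, 0.0)

L = path_length([laser_spot, hidden_obj, camera_spot])
t = L / C   # arrival time; fixing t constrains the hidden object to an
            # ellipsoid whose foci are the two wall spots
```

Scanning the laser spot over the wall yields many such ellipsoidal constraints, whose intersection determines the hidden object's shape.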
1903.07705 | 2922289793 | Visual object recognition under situations in which the direct line-of-sight is blocked, such as when it is occluded around the corner, is of practical importance in a wide range of applications. With coherent illumination, the light scattered from diffusive walls forms speckle patterns that contain information of the hidden object. It is possible to realize non-line-of-sight (NLOS) recognition with these speckle patterns. We introduce a novel approach based on speckle pattern recognition with deep neural network, which is simpler and more robust than other NLOS recognition methods. Simulations and experiments are performed to verify the feasibility and performance of this approach. | Imaging via speckle correlation is another method that was recently developed. When a rough surface is illuminated by coherent light (e.g., a laser beam), a speckle pattern is observed in the image plane. The key principle of this method is that the auto-correlation of the speckle pattern is essentially identical to the original object's auto-correlation, as if it were imaged by a perfect diffraction-limited optical system that has replaced the scattering medium. Consequently, the object's image can be obtained from its auto-correlation by an iterative phase retrieval algorithm @cite_21 . In particular, for seeing without line-of-sight, the light back-scattered from a diffusive wall is used to image the hidden objects. | {
"cite_N": [
"@cite_21"
],
"mid": [
"2000443157"
],
"abstract": [
"Diffraction-limited imaging in a variety of complex media is realized based on analysis of speckle correlations in light captured using a camera phone."
]
} |
1903.07705 | 2922289793 | Visual object recognition under situations in which the direct line-of-sight is blocked, such as when it is occluded around the corner, is of practical importance in a wide range of applications. With coherent illumination, the light scattered from diffusive walls forms speckle patterns that contain information of the hidden object. It is possible to realize non-line-of-sight (NLOS) recognition with these speckle patterns. We introduce a novel approach based on speckle pattern recognition with deep neural network, which is simpler and more robust than other NLOS recognition methods. Simulations and experiments are performed to verify the feasibility and performance of this approach. | In 2014, Singh et al. proposed a holographic approach for visualizing objects without line-of-sight, based on the numerical reconstruction of 3D objects by digital holography in which a hologram is formed on a reflectively scattering surface @cite_34. A coherent light source is divided into two parts: One beam illuminates the object, while the other is set as the reference beam. The interference between the two beams forms an aerial hologram immediately in front of the scattering surface. Then, the hologram is recorded by a remote digital camera that focuses on the scattering surface. | {
"cite_N": [
"@cite_34"
],
"mid": [
"2075338436"
],
"abstract": [
"Retrieving the information about the object hidden around a corner or obscured by a diffused surface has a vast range of applications. Over the time many techniques have been tried to make this goal realizable. Here, we are presenting yet another approach to retrieve a 3-D object from the scattered field using digital holography with statistical averaging. The methods are simple, easy to implement and allow fast image reconstruction because they do not require phase correction, complicated image processing, scanning of the object or any kind of wave shaping. The methods inherit the merit of digital holography that the micro deformation and displacement of the hidden object can also be detected."
]
} |
1903.07518 | 2955678903 | We consider the problem of path inference: given a path prefix, i.e., a partially observed sequence of nodes in a graph, we want to predict which nodes are in the missing suffix. In particular, we focus on natural paths occurring as a by-product of the interaction of an agent with a network---a driver on the transportation network, an information seeker in Wikipedia, or a client in an online shop. Our interest is sparked by the realization that, in contrast to shortest-path problems, natural paths are usually not optimal in any graph-theoretic sense, but might still follow predictable patterns. Our main contribution is a graph neural network called Gretel. Conditioned on a path prefix, this network can efficiently extrapolate path suffixes, evaluate path likelihood, and sample from the future path distribution. Our experiments with GPS traces on a road network and user-navigation paths in Wikipedia confirm that Gretel is able to adapt to graphs with very different properties, while also comparing favorably to previous solutions. | To the extent of our knowledge, this is the first time the generalized path inference problem has been considered. An interesting related work proposed to classify nodes belonging to the shortest path between a source and a target @cite_14, but this is a combinatorial problem optimizing a well-known graph metric, rather than naturally occurring agents' paths. | {
"cite_N": [
"@cite_14"
],
"mid": [
"2805516822"
],
"abstract": [
"Artificial intelligence (AI) has undergone a renaissance recently, making major progress in key domains such as vision, language, control, and decision-making. This has been due, in part, to cheap data and cheap compute resources, which have fit the natural strengths of deep learning. However, many defining characteristics of human intelligence, which developed under much different pressures, remain out of reach for current approaches. In particular, generalizing beyond one's experiences--a hallmark of human intelligence from infancy--remains a formidable challenge for modern AI. The following is part position paper, part review, and part unification. We argue that combinatorial generalization must be a top priority for AI to achieve human-like abilities, and that structured representations and computations are key to realizing this objective. Just as biology uses nature and nurture cooperatively, we reject the false choice between \"hand-engineering\" and \"end-to-end\" learning, and instead advocate for an approach which benefits from their complementary strengths. We explore how using relational inductive biases within deep learning architectures can facilitate learning about entities, relations, and rules for composing them. We present a new building block for the AI toolkit with a strong relational inductive bias--the graph network--which generalizes and extends various approaches for neural networks that operate on graphs, and provides a straightforward interface for manipulating structured knowledge and producing structured behaviors. We discuss how graph networks can support relational reasoning and combinatorial generalization, laying the foundation for more sophisticated, interpretable, and flexible patterns of reasoning. As a companion to this paper, we have released an open-source software library for building graph networks, with demonstrations of how to use them in practice."
]
} |
1903.07518 | 2955678903 | We consider the problem of path inference: given a path prefix, i.e., a partially observed sequence of nodes in a graph, we want to predict which nodes are in the missing suffix. In particular, we focus on natural paths occurring as a by-product of the interaction of an agent with a network---a driver on the transportation network, an information seeker in Wikipedia, or a client in an online shop. Our interest is sparked by the realization that, in contrast to shortest-path problems, natural paths are usually not optimal in any graph-theoretic sense, but might still follow predictable patterns. Our main contribution is a graph neural network called Gretel. Conditioned on a path prefix, this network can efficiently extrapolate path suffixes, evaluate path likelihood, and sample from the future path distribution. Our experiments with GPS traces on a road network and user-navigation paths in Wikipedia confirm that Gretel is able to adapt to graphs with very different properties, while also comparing favorably to previous solutions. | Random walks on graphs have been used previously in a deep learning context in order to sample paths from graphs and extract node representations @cite_3 @cite_4 using the skip-gram model @cite_5. We can see the pseudo-coordinates as node representations with regard to the observations, but the similarity stops there. | {
"cite_N": [
"@cite_5",
"@cite_4",
"@cite_3"
],
"mid": [
"2153579005",
"2154851992",
"2962756421"
],
"abstract": [
"The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of \"Canada\" and \"Air\" cannot be easily combined to obtain \"Air Canada\". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible.",
"We present DeepWalk, a novel approach for learning latent representations of vertices in a network. These latent representations encode social relations in a continuous vector space, which is easily exploited by statistical models. DeepWalk generalizes recent advancements in language modeling and unsupervised feature learning (or deep learning) from sequences of words to graphs. DeepWalk uses local information obtained from truncated random walks to learn latent representations by treating walks as the equivalent of sentences. We demonstrate DeepWalk's latent representations on several multi-label network classification tasks for social networks such as BlogCatalog, Flickr, and YouTube. Our results show that DeepWalk outperforms challenging baselines which are allowed a global view of the network, especially in the presence of missing information. DeepWalk's representations can provide F1 scores up to 10 higher than competing methods when labeled data is sparse. In some experiments, DeepWalk's representations are able to outperform all baseline methods while using 60 less training data. DeepWalk is also scalable. It is an online learning algorithm which builds useful incremental results, and is trivially parallelizable. These qualities make it suitable for a broad class of real world applications such as network classification, and anomaly detection.",
"Prediction tasks over nodes and edges in networks require careful effort in engineering features used by learning algorithms. Recent research in the broader field of representation learning has led to significant progress in automating prediction by learning the features themselves. However, present feature learning approaches are not expressive enough to capture the diversity of connectivity patterns observed in networks. Here we propose node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks. In node2vec, we learn a mapping of nodes to a low-dimensional space of features that maximizes the likelihood of preserving network neighborhoods of nodes. We define a flexible notion of a node's network neighborhood and design a biased random walk procedure, which efficiently explores diverse neighborhoods. Our algorithm generalizes prior work which is based on rigid notions of network neighborhoods, and we argue that the added flexibility in exploring neighborhoods is the key to learning richer representations. We demonstrate the efficacy of node2vec over existing state-of-the-art techniques on multi-label classification and link prediction in several real-world networks from diverse domains. Taken together, our work represents a new way for efficiently learning state-of-the-art task-independent representations in complex networks."
]
} |
1903.07593 | 2921984677 | We introduce a self-supervised method for learning visual correspondence from unlabeled video. The main idea is to use cycle-consistency in time as free supervisory signal for learning visual representations from scratch. At training time, our model learns a feature map representation to be useful for performing cycle-consistent tracking. At test time, we use the acquired representation to find nearest neighbors across space and time. We demonstrate the generalizability of the representation -- without finetuning -- across a range of visual correspondence tasks, including video object segmentation, keypoint tracking, and optical flow. Our approach outperforms previous self-supervised methods and performs competitively with strongly supervised methods. | Temporal structure serves as a useful signal for learning because the visual world is continuous and smoothly-varying. Spatio-temporal stability is thought to play a crucial role in the development of invariant representations in biological vision @cite_31 @cite_62 @cite_55 @cite_29. For example, Wood @cite_67 showed that for newborn chicks raised in a visual world that was not temporally smooth, object recognition abilities were severely impaired. Computational approaches for unsupervised learning have sought to leverage this continuity, such as continuous transformation learning @cite_58 @cite_87, "slow" feature learning @cite_38 @cite_79 @cite_63 and information maximization between neighbouring patches in time @cite_35. Our work can be seen as slow feature learning with fixation, learned end-to-end without supervision. | {
"cite_N": [
"@cite_38",
"@cite_67",
"@cite_62",
"@cite_35",
"@cite_87",
"@cite_29",
"@cite_55",
"@cite_79",
"@cite_63",
"@cite_31",
"@cite_58"
],
"mid": [
"2146444479",
"2401964427",
"",
"2842511635",
"",
"",
"",
"",
"",
"2119802329",
"2096388912"
],
"abstract": [
"Invariant features of temporally varying signals are useful for analysis and classification. Slow feature analysis (SFA) is a new method for learning invariant or slowly varying features from a vectorial input signal. It is based on a nonlinear expansion of the input signal and application of principal component analysis to this expanded signal and its time derivative. It is guaranteed to find the optimal solution within a family of functions directly and can learn to extract a large number of decorrelated features, which are ordered by their degree of invariance. SFA can be applied hierarchically to process high-dimensional input signals and extract complex features. SFA is applied first to complex cell tuning properties based on simple cell output, including disparity and motion. Then more complicated input-output functions are learned by repeated application of SFA. Finally, a hierarchical network of SFA modules is presented as a simple model of the visual system. The same unstructured network can learn translation, size, rotation, contrast, or, to a lesser degree, illumination invariance for one-dimensional objects, depending on only the training stimulus. Surprisingly, only a few training objects suffice to achieve good generalization to new objects. The generated representation is suitable for object recognition. Performance degrades if the network is trained to learn multiple invariances simultaneously.",
"Abstract Understanding how the brain learns to recognize objects is one of the ultimate goals in the cognitive sciences. To date, however, we have not yet characterized the environmental factors that cause object recognition to emerge in the newborn brain. Here, I present the results of a high-throughput controlled-rearing experiment that examined whether the development of object recognition requires experience with temporally smooth visual objects. When newborn chicks ( Gallus gallus ) were raised with virtual objects that moved smoothly over time, the chicks developed accurate color recognition, shape recognition, and color-shape binding abilities. In contrast, when newborn chicks were raised with virtual objects that moved non-smoothly over time, the chicks’ object recognition abilities were severely impaired. These results provide evidence for a “smoothness constraint” on newborn object recognition. Experience with temporally smooth objects facilitates the development of object recognition.",
"",
"While supervised learning has enabled great progress in many applications, unsupervised learning has not seen such widespread adoption, and remains an important and challenging endeavor for artificial intelligence. In this work, we propose a universal unsupervised learning approach to extract useful representations from high-dimensional data, which we call Contrastive Predictive Coding. The key insight of our model is to learn such representations by predicting the future in latent space by using powerful autoregressive models. We use a probabilistic contrastive loss which induces the latent space to capture information that is maximally useful to predict future samples. It also makes the model tractable by using negative sampling. While most prior work has focused on evaluating representations for a particular modality, we demonstrate that our approach is able to learn useful representations achieving strong performance on four distinct domains: speech, images, text and reinforcement learning in 3D environments.",
"",
"",
"",
"",
"",
"Coherent visual experience requires that objects be represented as the same persisting individuals over time and motion. Cognitive science research has identified a powerful principle that guides such processing: Objects must trace continuous paths through space and time. Little is known, however, about how neural representations of objects, typically defined by visual features, are influenced by spatiotemporal continuity. Here, we report the consequences of spatiotemporally continuous vs. discontinuous motion on perceptual representations in human ventral visual cortex. In experiments using both dynamic occlusion and apparent motion, face-selective cortical regions exhibited significantly less activation when faces were repeated in continuous vs. discontinuous trajectories, suggesting that discontinuity caused featurally identical objects to be represented as different individuals. These results indicate that spatiotemporal continuity modulates neural representations of object identity, influencing judgments of object persistence even in the most staunchly “featural” areas of ventral visual cortex.",
"The visual system can reliably identify objects even when the retinal image is transformed considerably by commonly occurring changes in the environment. A local learning rule is proposed, which allows a network to learn to generalize across such transformations. During the learning phase, the network is exposed to temporal sequences of patterns undergoing the transformation. An application of the algorithm is presented in which the network learns invariance to shift in retinal position. Such a principle may be involved in the development of the characteristic shift invariance property of complex cells in the primary visual cortex, and also in the development of more complicated invariance properties of neurons in higher visual areas."
]
} |
1903.07593 | 2921984677 | We introduce a self-supervised method for learning visual correspondence from unlabeled video. The main idea is to use cycle-consistency in time as free supervisory signal for learning visual representations from scratch. At training time, our model learns a feature map representation to be useful for performing cycle-consistent tracking. At test time, we use the acquired representation to find nearest neighbors across space and time. We demonstrate the generalizability of the representation -- without finetuning -- across a range of visual correspondence tasks, including video object segmentation, keypoint tracking, and optical flow. Our approach outperforms previous self-supervised methods and performs competitively with strongly supervised methods. | Learning representations from video using time as supervision has been extensively studied, both as a future prediction task @cite_36 @cite_43 @cite_46 @cite_66 as well as motion estimation @cite_20 @cite_90 @cite_19 @cite_28 @cite_4. Our approach is most related to the methods of @cite_5 @cite_11 and @cite_50, which use off-the-shelf tools for tracking and optical flow respectively, to provide a supervisory signal for training. However, representations learned in this way are inherently limited by the power of these off-the-shelf tools as well as their failure modes. We address this issue by learning the representation and the tracker jointly, and find the two learning problems to be complementary. Our work is also inspired by the innovative approach of @cite_12 where video colorization is used as a pretext self-supervised task for learning to track. While the idea is very intriguing, we find that colorization is a weaker source of supervision for correspondence than cycle-consistency, potentially due to the abundance of constant-color regions in natural scenes. | {
"cite_N": [
"@cite_12",
"@cite_4",
"@cite_36",
"@cite_90",
"@cite_28",
"@cite_43",
"@cite_19",
"@cite_50",
"@cite_5",
"@cite_46",
"@cite_66",
"@cite_20",
"@cite_11"
],
"mid": [
"2809836812",
"2963598128",
"1836533770",
"2198618282",
"2962958090",
"2952453038",
"",
"2575671312",
"219040644",
"2248556341",
"2950661620",
"1520997877",
"2743157634"
],
"abstract": [
"We use large amounts of unlabeled video to learn models for visual tracking without manual human supervision. We leverage the natural temporal coherency of color to create a model that learns to colorize gray-scale videos by copying colors from a reference frame. Quantitative and qualitative experiments suggest that this task causes the model to automatically learn to track visual regions. Although the model is trained without any ground-truth labels, our method learns to track well enough to outperform the latest methods based on optical flow. Moreover, our results suggest that failures to track are correlated with failures to colorize, indicating that advancing video colorization may further improve self-supervised visual tracking.",
"Videos contain highly redundant information between frames. Such redundancy has been studied extensively in video compression and encoding, but is less explored for more advanced video processing. In this paper, we propose a learnable unified framework for propagating a variety of visual properties of video images, including but not limited to color, high dynamic range (HDR), and segmentation mask, where the properties are available for only a few key-frames. Our approach is based on a temporal propagation network (TPN), which models the transition-related affinity between a pair of frames in a purely data-driven manner. We theoretically prove two essential properties of TPN: (a) by regularizing the global transformation matrix as orthogonal, the “style energy” of the property can be well preserved during propagation; and (b) such regularization can be achieved by the proposed switchable TPN with bi-directional training on pairs of frames. We apply the switchable TPN to three tasks: colorizing a gray-scale video based on a few colored key-frames, generating an HDR video from a low dynamic range (LDR) video and a few HDR frames, and propagating a segmentation mask from the first frame in videos. Experimental results show that our approach is significantly more accurate and efficient than the state-of-the-art methods.",
"Current state-of-the-art classification and detection algorithms train deep convolutional networks using labeled data. In this work we study unsupervised feature learning with convolutional networks in the context of temporally coherent unlabeled data. We focus on feature learning from unlabeled video data, using the assumption that adjacent video frames contain semantically similar information. This assumption is exploited to train a convolutional pooling auto-encoder regularized by slowness and sparsity priors. We establish a connection between slow feature learning and metric learning. Using this connection we define \"temporal coherence\" -- a criterion which can be used to set hyper-parameters in a principled and automated manner. In a transfer learning experiment, we show that the resulting encoder can be used to define a more semantically coherent metric without the use of labels.",
"Understanding how images of objects and scenes behave in response to specific ego-motions is a crucial aspect of proper visual development, yet existing visual learning methods are conspicuously disconnected from the physical source of their images. We propose to exploit proprioceptive motor signals to provide unsupervised regularization in convolutional neural networks to learn visual representations from egocentric video. Specifically, we enforce that our learned features exhibit equivariance, i.e, they respond predictably to transformations associated with distinct ego-motions. With three datasets, we show that our unsupervised feature learning approach significantly outperforms previous approaches on visual recognition and next-best-view prediction tasks. In the most challenging test, we show that features learned from video captured on an autonomous driving platform improve large-scale scene recognition in static images from a disjoint domain.",
"Data-driven approaches for edge detection have proven effective and achieve top results on modern benchmarks. However, all current data-driven edge detectors require manual supervision for training in the form of hand-labeled region segments or object boundaries. Specifically, human annotators mark semantically meaningful edges which are subsequently used for training. Is this form of strong, highlevel supervision actually necessary to learn to accurately detect edges? In this work we present a simple yet effective approach for training edge detectors without human supervision. To this end we utilize motion, and more specifically, the only input to our method is noisy semi-dense matches between frames. We begin with only a rudimentary knowledge of edges (in the form of image gradients), and alternate between improving motion estimation and edge detection in turn. Using a large corpus of video data, we show that edge detectors trained using our unsupervised scheme approach the performance of the same methods trained with full supervision (within 3-5 ). Finally, we show that when using a deep network for the edge detector, our approach provides a novel pre-training scheme for object detection.",
"We use multilayer Long Short Term Memory (LSTM) networks to learn representations of video sequences. Our model uses an encoder LSTM to map an input sequence into a fixed length representation. This representation is decoded using single or multiple decoder LSTMs to perform different tasks, such as reconstructing the input sequence, or predicting the future sequence. We experiment with two kinds of input sequences - patches of image pixels and high-level representations (\"percepts\") of video frames extracted using a pretrained convolutional net. We explore different design choices such as whether the decoder LSTMs should condition on the generated output. We analyze the outputs of the model qualitatively to see how well the model can extrapolate the learned video representation into the future and into the past. We try to visualize and interpret the learned features. We stress test the model by running it on longer time scales and on out-of-domain data. We further evaluate the representations by finetuning them for a supervised learning problem - human action recognition on the UCF-101 and HMDB-51 datasets. We show that the representations help improve classification accuracy, especially when there are only a few training examples. Even models pretrained on unrelated datasets (300 hours of YouTube videos) can help action recognition performance.",
"",
"This paper presents a novel yet intuitive approach to unsupervised feature learning. Inspired by the human visual system, we explore whether low-level motion-based grouping cues can be used to learn an effective visual representation. Specifically, we use unsupervised motion-based segmentation on videos to obtain segments, which we use as pseudo ground truth to train a convolutional network to segment objects from a single frame. Given the extensive evidence that motion plays a key role in the development of the human visual system, we hope that this straightforward approach to unsupervised learning will be more effective than cleverly designed pretext tasks studied in the literature. Indeed, our extensive experiments show that this is the case. When used for transfer learning on object detection, our representation significantly outperforms previous unsupervised approaches across multiple settings, especially when training data for the target task is scarce.",
"Is strong supervision necessary for learning a good visual representation? Do we really need millions of semantically-labeled images to train a Convolutional Neural Network (CNN)? In this paper, we present a simple yet surprisingly powerful approach for unsupervised learning of CNN. Specifically, we use hundreds of thousands of unlabeled videos from the web to learn visual representations. Our key idea is that visual tracking provides the supervision. That is, two patches connected by a track should have similar visual representation in deep feature space since they probably belong to same object or object part. We design a Siamese-triplet network with a ranking loss function to train this CNN representation. Without using a single image from ImageNet, just using 100K unlabeled videos and the VOC 2012 dataset, we train an ensemble of unsupervised networks that achieves 52 mAP (no bounding box regression). This performance comes tantalizingly close to its ImageNet-supervised counterpart, an ensemble which achieves a mAP of 54.4 . We also show that our unsupervised network can perform competitively in other tasks such as surface-normal estimation.",
"Learning to predict future images from a video sequence involves the construction of an internal representation that models the image evolution accurately, and therefore, to some degree, its content and dynamics. This is why pixel-space video prediction may be viewed as a promising avenue for unsupervised feature learning. In addition, while optical flow has been a very studied problem in computer vision for a long time, future frame prediction is rarely approached. Still, many vision applications could benefit from the knowledge of the next frames of videos, that does not require the complexity of tracking every pixel trajectories. In this work, we train a convolutional network to generate future frames given an input sequence. To deal with the inherently blurry predictions obtained from the standard Mean Squared Error (MSE) loss function, we propose three different and complementary feature learning strategies: a multi-scale architecture, an adversarial training method, and an image gradient difference loss function. We compare our predictions to different published results based on recurrent neural networks on the UCF101 dataset",
"We present an unsupervised representation learning approach that compactly encodes the motion dependencies in videos. Given a pair of images from a video clip, our framework learns to predict the long-term 3D motions. To reduce the complexity of the learning framework, we propose to describe the motion as a sequence of atomic 3D flows computed with RGB-D modality. We use a Recurrent Neural Network based Encoder-Decoder framework to predict these sequences of flows. We argue that in order for the decoder to reconstruct these sequences, the encoder must learn a robust video representation that captures long-term motion dependencies and spatial-temporal relations. We demonstrate the effectiveness of our learned temporal representations on activity classification across multiple modalities and datasets such as NTU RGB+D and MSR Daily Activity 3D. Our framework is generic to any input modality, i.e., RGB, Depth, and RGB-D videos.",
"The current dominant paradigm for feature learning in computer vision relies on training neural networks for the task of object recognition using millions of hand labelled images. Is it also possible to learn features for a diverse set of visual tasks using any other form of supervision? In biology, living organisms developed the ability of visual perception for the purpose of moving and acting in the world. Drawing inspiration from this observation, in this work we investigated if the awareness of egomotion(i.e. self motion) can be used as a supervisory signal for feature learning. As opposed to the knowledge of class labels, information about egomotion is freely available to mobile agents. We found that using the same number of training images, features learnt using egomotion as supervision compare favourably to features learnt using class-label as supervision on the tasks of scene recognition, object recognition, visual odometry and keypoint matching.",
"Learning visual representations with self-supervised learning has become popular in computer vision. The idea is to design auxiliary tasks where labels are free to obtain. Most of these tasks end up providing data to learn specific kinds of invariance useful for recognition. In this paper, we propose to exploit different self-supervised approaches to learn representations invariant to (i) inter-instance variations (two objects in the same class should have similar features) and (ii) intra-instance variations (viewpoint, pose, deformations, illumination, etc). Instead of combining two approaches with multi-task learning, we argue to organize and reason the data with multiple variations. Specifically, we propose to generate a graph with millions of objects mined from hundreds of thousands of videos. The objects are connected by two types of edges which correspond to two types of invariance: \"different instances but a similar viewpoint and category\" and \"different viewpoints of the same instance\". By applying simple transitivity on the graph with these edges, we can obtain pairs of images exhibiting richer visual invariance. We use this data to train a Triplet-Siamese network with VGG16 as the base architecture and apply the learned representations to different recognition tasks. For object detection, we achieve 63.2% mAP on PASCAL VOC 2007 using Fast R-CNN (compare to 67.3% with ImageNet pre-training). For the challenging COCO dataset, our method is surprisingly close (23.5%) to the ImageNet-supervised counterpart (24.4%) using the Faster R-CNN framework. We also show that our network can perform significantly better than the ImageNet network in the surface normal estimation task."
]
} |
1903.07593 | 2921984677 | We introduce a self-supervised method for learning visual correspondence from unlabeled video. The main idea is to use cycle-consistency in time as free supervisory signal for learning visual representations from scratch. At training time, our model learns a feature map representation to be useful for performing cycle-consistent tracking. At test time, we use the acquired representation to find nearest neighbors across space and time. We demonstrate the generalizability of the representation -- without finetuning -- across a range of visual correspondence tasks, including video object segmentation, keypoint tracking, and optical flow. Our approach outperforms previous self-supervised methods and performs competitively with strongly supervised methods. | Classic approaches to tracking treat it as a matching problem, where the goal is to find a given object patch in the next frame (see @cite_15 for overview), and the key challenge is to track reliably over extended time periods @cite_60 @cite_85 @cite_56 @cite_6 . Starting with the seminal work of @cite_47 , researchers largely turned to tracking as repeated recognition'', where trained object detectors are applied to each frame independently @cite_57 @cite_68 @cite_88 @cite_49 @cite_93 @cite_45 @cite_9 . Our work harks back to the classic tracking-by-matching methods in treating it as a correspondence problem, but uses learning to obtain a robust representation that is able to model wide range of appearance changes. | {
"cite_N": [
"@cite_47",
"@cite_60",
"@cite_9",
"@cite_85",
"@cite_6",
"@cite_56",
"@cite_57",
"@cite_45",
"@cite_49",
"@cite_88",
"@cite_93",
"@cite_15",
"@cite_68"
],
"mid": [
"2157939923",
"1979059554",
"2962824803",
"",
"",
"",
"",
"2799058067",
"2118097920",
"2089961441",
"2964253307",
"590253179",
""
],
"abstract": [
"We develop an algorithm for finding and kinematically tracking multiple people in long sequences. Our basic assumption is that people tend to take on certain canonical poses, even when performing unusual activities like throwing a baseball or figure skating. We build a person detector that quite accurately detects and localizes limbs of people in lateral walking poses. We use the estimated limbs from a detection to build a discriminative appearance model; we assume the features that discriminate a figure in one frame will discriminate the figure in other frames. We then use the models as limb detectors in a pictorial structure framework, detecting figures in unrestricted poses in both previous and successive frames. We have run our tracker on hundreds of thousands of frames, and present and apply a methodology for evaluating tracking on such a large scale. We test our tracker on real sequences including a feature-length film, an hour of footage from a public park, and various sports sequences. We find that we can quite accurately automatically find and track multiple people interacting with each other while performing fast and unusual motions.",
"Identifying the same physical point in more than one image, the correspondence problem, is vital in motion analysis. Most research for establishing correspondence uses only two frames of a sequence to solve this problem. By using a sequence of frames, it is possible to exploit the fact that due to inertia the motion of an object cannot change instantaneously. By using smoothness of motion, it is possible to solve the correspondence problem for arbitrary motion of several nonrigid objects in a scene. We formulate the correspondence problem as an optimization problem and propose an iterative algorithm to find trajectories of points in a monocular image sequence. A modified form of this algorithm is useful in case of occlusion also. We demonstrate the efficacy of this approach considering synthetic, laboratory, and real scenes.",
"The Correlation Filter is an algorithm that trains a linear template to discriminate between images and their translations. It is well suited to object tracking because its formulation in the Fourier domain provides a fast solution, enabling the detector to be re-trained once per frame. Previous works that use the Correlation Filter, however, have adopted features that were either manually designed or trained for a different task. This work is the first to overcome this limitation by interpreting the Correlation Filter learner, which has a closed-form solution, as a differentiable layer in a deep neural network. This enables learning deep features that are tightly coupled to the Correlation Filter. Experiments illustrate that our method has the important practical benefit of allowing lightweight architectures to achieve state-of-the-art performance at high framerates.",
"",
"",
"",
"",
"Visual object tracking has been a fundamental topic in recent years and many deep learning based trackers have achieved state-of-the-art performance on multiple benchmarks. However, most of these trackers can hardly get top performance with real-time speed. In this paper, we propose the Siamese region proposal network (Siamese-RPN) which is end-to-end trained off-line with large-scale image pairs. Specifically, it consists of Siamese subnetwork for feature extraction and region proposal subnetwork including the classification branch and regression branch. In the inference phase, the proposed framework is formulated as a local one-shot detection task. We can pre-compute the template branch of the Siamese subnetwork and formulate the correlation layers as trivial convolution layers to perform online tracking. Benefiting from the proposal refinement, traditional multi-scale test and online fine-tuning can be discarded. The Siamese-RPN runs at 160 FPS while achieving leading performance in VOT2015, VOT2016 and VOT2017 real-time challenges.",
"In this paper, we study the challenging problem of tracking the trajectory of a moving object in a video with possibly very complex background. In contrast to most existing trackers which only learn the appearance of the tracked object online, we take a different approach, inspired by recent advances in deep learning architectures, by putting more emphasis on the (unsupervised) feature learning problem. Specifically, by using auxiliary natural images, we train a stacked de-noising autoencoder offline to learn generic image features that are more robust against variations. This is then followed by knowledge transfer from offline training to the online tracking process. Online tracking involves a classification neural network which is constructed from the encoder part of the trained autoencoder as a feature extractor and an additional classification layer. Both the feature extractor and the classifier can be further tuned to adapt to appearance changes of the moving object. Comparison with the state-of-the-art trackers on some challenging benchmark video sequences shows that our deep learning tracker is more accurate while maintaining low computational cost with real-time performance when our MATLAB implementation of the tracker is used with a modest graphics processing unit (GPU).",
"Object tracking is one of the most important components in numerous applications of computer vision. While much progress has been made in recent years with efforts on sharing code and datasets, it is of great importance to develop a library and benchmark to gauge the state of the art. After briefly reviewing recent advances of online object tracking, we carry out large scale experiments with various evaluation criteria to understand how these algorithms perform. The test image sequences are annotated with different attributes for performance evaluation and analysis. By analyzing quantitative results, we identify effective approaches for robust tracking and provide potential future research directions in this field.",
"Machine learning techniques are often used in computer vision due to their ability to leverage large amounts of training data to improve performance. Unfortunately, most generic object trackers are still trained from scratch online and do not benefit from the large number of videos that are readily available for offline training. We propose a method for offline training of neural networks that can track novel objects at test-time at 100 fps. Our tracker is significantly faster than previous methods that use neural networks for tracking, which are typically very slow to run and not practical for real-time applications. Our tracker uses a simple feed-forward network with no online training required. The tracker learns a generic relationship between object motion and appearance and can be used to track novel objects that do not appear in the training set. We test our network on a standard tracking benchmark to demonstrate our tracker’s state-of-the-art performance. Further, our performance improves as we add more videos to our offline training set. To the best of our knowledge, our tracker (Our tracker is available at http://davheld.github.io/GOTURN/GOTURN.html) is the first neural-network tracker that learns to track generic objects at 100 fps.",
"This extraordinary book gives a uniquely modern view of computer vision. Offering a general survey of the whole computer vision enterprise along with sufficient detail for readers to be able to build useful applications, this book is invaluable in providing a strategic overview of computer vision. With extensive use of probabilistic methods-- topics have been selected for their importance, both practically and theoretically--the book gives the most coherent possible synthesis of current views, emphasizing techniques that have been successful in building applications. Readers engaged in computer graphics, robotics, image processing, and imaging in general will find this text an informative reference.",
""
]
} |
1903.07593 | 2921984677 | We introduce a self-supervised method for learning visual correspondence from unlabeled video. The main idea is to use cycle-consistency in time as free supervisory signal for learning visual representations from scratch. At training time, our model learns a feature map representation to be useful for performing cycle-consistent tracking. At test time, we use the acquired representation to find nearest neighbors across space and time. We demonstrate the generalizability of the representation -- without finetuning -- across a range of visual correspondence tasks, including video object segmentation, keypoint tracking, and optical flow. Our approach outperforms previous self-supervised methods and performs competitively with strongly supervised methods. | Correspondence at the pixel level -- mapping where each pixel goes in the next frame -- is the optical flow estimation problem. Since the energy minimization framework of Horn and Schunck @cite_3 and coarse-to-fine image warping by Lucas and Kanade @cite_10 , much progress has been made in optical flow estimation @cite_33 @cite_73 @cite_70 @cite_65 @cite_71 @cite_84 @cite_53 . However, these methods still struggle to scale to long-range correspondence in dynamic scenes with partial observability. These issues have driven researchers to study methods for estimating long-range optical flow @cite_72 @cite_32 @cite_83 @cite_64 @cite_92 @cite_81 . For example, Brox and Malik @cite_32 introduced a descriptor that matches region hierarchies and provides dense and subpixel-level estimation of flow. Our work can be viewed as enabling mid-level optical flow estimation. | {
"cite_N": [
"@cite_64",
"@cite_33",
"@cite_70",
"@cite_92",
"@cite_53",
"@cite_65",
"@cite_32",
"@cite_3",
"@cite_84",
"@cite_72",
"@cite_83",
"@cite_81",
"@cite_71",
"@cite_73",
"@cite_10"
],
"mid": [
"1951289974",
"2156177012",
"2033959528",
"1762798876",
"2963782415",
"2951309005",
"2171857946",
"1578285471",
"2548527721",
"2144476963",
"2280177137",
"2068994826",
"2560474170",
"1867429401",
"2118877769"
],
"abstract": [
"We propose a novel approach for optical flow estimation, targeted at large displacements with significant occlusions. It consists of two steps: i) dense matching by edge-preserving interpolation from a sparse set of matches; ii) variational energy minimization initialized with the dense matches. The sparse-to-dense interpolation relies on an appropriate choice of the distance, namely an edge-aware geodesic distance. This distance is tailored to handle occlusions and motion boundaries - two common and difficult issues for optical flow computation. We also propose an approximation scheme for the geodesic distance to allow fast computation without loss of performance. Subsequent to the dense interpolation step, standard one-level variational energy minimization is carried out on the dense matches to obtain the final flow estimation. The proposed approach, called Edge-Preserving Interpolation of Correspondences (EpicFlow) is fast and robust to large displacements. It significantly outperforms the state of the art on MPI-Sintel and performs on par on Kitti and Middlebury.",
"We address the issue of recovering and segmenting the apparent velocity field in sequences of images. As for motion estimation, we minimize an objective function involving two robust terms. The first one cautiously captures the optical flow constraint, while the second (a priori) term incorporates a discontinuity-preserving smoothness constraint. To cope with the nonconvex minimization problem thus defined, we design an efficient deterministic multigrid procedure. It converges fast toward estimates of good quality, while revealing the large discontinuity structures of flow fields. We then propose an extension of the model by attaching to it a flexible object-based segmentation device based on deformable closed curves (different families of curve equipped with different kinds of prior can be easily supported). Experimental results on synthetic and natural sequences are presented, including an analysis of sensitivity to parameter tuning.",
"The accuracy of optical flow estimation algorithms has been improving steadily as evidenced by results on the Middlebury optical flow benchmark. The typical formulation, however, has changed little since the work of Horn and Schunck. We attempt to uncover what has made recent advances possible through a thorough analysis of how the objective function, the optimization method, and modern implementation practices influence accuracy. We discover that “classical” flow formulations perform surprisingly well when combined with modern optimization and implementation techniques. Moreover, we find that while median filtering of intermediate flow fields during optimization is a key to recent performance gains, it leads to higher energy solutions. To understand the principles behind this phenomenon, we derive a new objective that formalizes the median filtering heuristic. This objective includes a nonlocal term that robustly integrates flow estimates over large spatial neighborhoods. By modifying this new term to include information about flow and image boundaries we develop a method that ranks at the top of the Middlebury benchmark.",
"We introduce a novel matching algorithm, called DeepMatching, to compute dense correspondences between images. DeepMatching relies on a hierarchical, multi-layer, correlational architecture designed for matching images and was inspired by deep convolutional approaches. The proposed matching algorithm can handle non-rigid deformations and repetitive textures and efficiently determines dense correspondences in the presence of significant changes between images. We evaluate the performance of DeepMatching, in comparison with state-of-the-art matching algorithms, on the Mikolajczyk (A comparison of affine region detectors, 2005), the MPI-Sintel (A naturalistic open source movie for optical flow evaluation, 2012) and the Kitti (Vision meets robotics: The KITTI dataset, 2013) datasets. DeepMatching outperforms the state-of-the-art algorithms and shows excellent results in particular for repetitive textures. We also apply DeepMatching to the computation of optical flow, called DeepFlow, by integrating it in the large displacement optical flow (LDOF) approach of Brox and Malik (Large displacement optical flow: descriptor matching in variational motion estimation, 2011). Additional robustness to large displacements and complex motion is obtained thanks to our matching approach. DeepFlow obtains competitive performance on public benchmarks for optical flow estimation.",
"We present a compact but effective CNN model for optical flow, called PWC-Net. PWC-Net has been designed according to simple and well-established principles: pyramidal processing, warping, and the use of a cost volume. Cast in a learnable feature pyramid, PWC-Net uses the current optical flow estimate to warp the CNN features of the second image. It then uses the warped features and features of the first image to construct a cost volume, which is processed by a CNN to estimate the optical flow. PWC-Net is 17 times smaller in size and easier to train than the recent FlowNet2 model. Moreover, it outperforms all published optical flow methods on the MPI Sintel final pass and KITTI 2015 benchmarks, running at about 35 fps on Sintel resolution (1024 × 436) images. Our models are available on our project website.",
"Convolutional neural networks (CNNs) have recently been very successful in a variety of computer vision tasks, especially on those linked to recognition. Optical flow estimation has not been among the tasks where CNNs were successful. In this paper we construct appropriate CNNs which are capable of solving the optical flow estimation problem as a supervised learning task. We propose and compare two architectures: a generic architecture and another one including a layer that correlates feature vectors at different image locations. Since existing ground truth data sets are not sufficiently large to train a CNN, we generate a synthetic Flying Chairs dataset. We show that networks trained on this unrealistic data still generalize very well to existing datasets such as Sintel and KITTI, achieving competitive accuracy at frame rates of 5 to 10 fps.",
"The literature currently provides two ways to establish point correspondences between images with moving objects. On one side, there are energy minimization methods that yield very accurate, dense flow fields, but fail as displacements get too large. On the other side, there is descriptor matching that allows for large displacements, but correspondences are very sparse, have limited accuracy, and due to missing regularity constraints there are many outliers. In this paper we propose a method that can combine the advantages of both matching strategies. A region hierarchy is established for both images. Descriptor matching on these regions provides a sparse set of hypotheses for correspondences. These are integrated into a variational approach and guide the local optimization to large displacement solutions. The variational optimization selects among the hypotheses and provides dense and subpixel accurate estimates, making use of geometric constraints and all available image information.",
"Optical flow cannot be computed locally, since only one independent measurement is available from the image sequence at a point, while the flow velocity has two components. A second constraint is needed. A method for finding the optical flow pattern is presented which assumes that the apparent velocity of the brightness pattern varies smoothly almost everywhere in the image. An iterative implementation is shown which successfully computes the optical flow for a number of synthetic image sequences. The algorithm is robust in that it can handle image sequences that are quantized rather coarsely in space and time. It is also insensitive to quantization of brightness levels and additive noise. Examples are included where the assumption of smoothness is violated at singular points or along lines in the image.",
"We learn to compute optical flow by combining a classical spatial-pyramid formulation with deep learning. This estimates large motions in a coarse-to-fine approach by warping one image of a pair at each pyramid level by the current flow estimate and computing an update to the flow. Instead of the standard minimization of an objective function at each pyramid level, we train one deep network per level to compute the flow update. Unlike the recent FlowNet approach, the networks do not need to deal with large motions, these are dealt with by the pyramid. This has several advantages. First, our Spatial Pyramid Network (SPyNet) is much simpler and 96% smaller than FlowNet in terms of model parameters. This makes it more efficient and appropriate for embedded applications. Second, since the flow at each pyramid level is small (",
"This paper describes a new approach to motion estimation in video. We represent video motion using a set of particles. Each particle is an image point sample with a long-duration trajectory and other properties. To optimize these particles, we measure point-based matching along the particle trajectories and distortion between the particles. The resulting motion representation is useful for a variety of applications and cannot be directly obtained using existing methods such as optical flow or feature tracking. We demonstrate the algorithm on challenging real-world videos that include complex scene geometry, multiple types of occlusion, regions with low texture, and non-rigid deformations.",
"Although dense, long-range, motion trajectories are a prominent representation of motion in videos, there is still no good solution for constructing dense motion tracks in a truly long-range fashion. Ideally, we would want every scene feature that appears in multiple, not necessarily contiguous, parts of the sequence to be associated with the same motion track. Despite this reasonable and clearly stated objective, there has been surprisingly little work on general-purpose algorithms that can accomplish that task. State-of-the-art dense motion trackers process the sequence incrementally in a frame-by-frame manner, and associate, by design, features that disappear and reappear in the video, with different tracks, thereby losing important information of the long-term motion signal. In this paper, we propose a novel divide and conquer approach to long-range motion estimation. Given a long video or image sequence, we first produce high-accuracy local track estimates, or tracklets, and later propagate them into a global solution, while incorporating information from throughout the video. Tracklets are computed using state-of-the-art motion trackers [2, 3] that have become quite accurate for short sequences as demonstrated by standard evaluations. Our algorithm then constructs the long-range tracks by linking the short tracks in an optimal manner. This induces a combinatorial matching problem that we solve simultaneously for all tracklets in the sequence. The main contributions of this paper are: (a) a novel divide-and-conquer style algorithm for constructing dense, long-range motion tracks from a single monocular video, and (b) Novel criteria for evaluating long-range tracking results with and without ground-truth motion trajectory data. We evaluate our approach on a set of synthetic and natural videos, and explore the utilization of long-range tracks for action recognition.",
"Video provides not only rich visual cues such as motion and appearance, but also much less explored long-range temporal interactions among objects. We aim to capture such interactions and to construct a powerful intermediate-level video representation for subsequent recognition. Motivated by this goal, we seek to obtain spatio-temporal oversegmentation of a video into regions that respect object boundaries and, at the same time, associate object pixels over many video frames. The contributions of this paper are two-fold. First, we develop an efficient spatiotemporal video segmentation algorithm, which naturally incorporates long-range motion cues from the past and future frames in the form of clusters of point tracks with coherent motion. Second, we devise a new track clustering cost function that includes occlusion reasoning, in the form of depth ordering constraints, as well as motion similarity along the tracks. We evaluate the proposed approach on a challenging set of video sequences of office scenes from feature length movies.",
"The FlowNet demonstrated that optical flow estimation can be cast as a learning problem. However, the state of the art with regard to the quality of the flow has still been defined by traditional methods. Particularly on small displacements and real-world data, FlowNet cannot compete with variational methods. In this paper, we advance the concept of end-to-end learning of optical flow and make it work really well. The large improvements in quality and speed are caused by three major contributions: first, we focus on the training data and show that the schedule of presenting data during training is very important. Second, we develop a stacked architecture that includes warping of the second image with intermediate optical flow. Third, we elaborate on small displacements by introducing a subnetwork specializing on small motions. FlowNet 2.0 is only marginally slower than the original FlowNet but decreases the estimation error by more than 50%. It performs on par with state-of-the-art methods, while running at interactive frame rates. Moreover, we present faster variants that allow optical flow computation at up to 140fps with accuracy matching the original FlowNet.",
"We study an energy functional for computing optical flow that combines three assumptions: a brightness constancy assumption, a gradient constancy assumption, and a discontinuity-preserving spatio-temporal smoothness constraint. In order to allow for large displacements, linearisations in the two data terms are strictly avoided. We present a consistent numerical scheme based on two nested fixed point iterations. By proving that this scheme implements a coarse-to-fine warping strategy, we give a theoretical foundation for warping which has been used on a mainly experimental basis so far. Our evaluation demonstrates that the novel method gives significantly smaller angular errors than previous techniques for optical flow estimation. We show that it is fairly insensitive to parameter variations, and we demonstrate its excellent robustness under noise.",
"Image registration finds a variety of applications in computer vision. Unfortunately, traditional image registration techniques tend to be costly. We present a new image registration technique that makes use of the spatial intensity gradient of the images to find a good match using a type of Newton-Raphson iteration. Our technique is faster because it examines far fewer potential matches between the images than existing techniques. Furthermore, this registration technique can be generalized to handle rotation, scaling and shearing. We show how our technique can be adapted for use in a stereo vision system."
]
} |
1903.07593 | 2921984677 | We introduce a self-supervised method for learning visual correspondence from unlabeled video. The main idea is to use cycle-consistency in time as free supervisory signal for learning visual representations from scratch. At training time, our model learns a feature map representation to be useful for performing cycle-consistent tracking. At test time, we use the acquired representation to find nearest neighbors across space and time. We demonstrate the generalizability of the representation -- without finetuning -- across a range of visual correspondence tasks, including video object segmentation, keypoint tracking, and optical flow. Our approach outperforms previous self-supervised methods and performs competitively with strongly supervised methods. | Given our focus on finding correspondence at the patch level, our method is also related to the classic SIFT Flow @cite_41 algorithm and other methods for finding mid-level correspondences between regions across different scenes @cite_74 @cite_52 @cite_2 . More recently, researchers have studied modeling correspondence in deep feature space @cite_23 @cite_91 @cite_69 @cite_34 @cite_27 @cite_30 . In particular, our work draws from @cite_27 @cite_30 , who propose a differentiable soft inlier score for evaluating quality of alignment between spatial features and provides a loss for learning semantic correspondences. Most of these methods rely on learning from simulated or large-scale labeled datasets such as ImageNet, or smaller custom human-annotated data with narrow scope. We address the challenge of learning representations of correspondence without human annotations. | {
"cite_N": [
"@cite_30",
"@cite_69",
"@cite_91",
"@cite_41",
"@cite_2",
"@cite_52",
"@cite_27",
"@cite_23",
"@cite_74",
"@cite_34"
],
"mid": [
"",
"2593948489",
"2964213755",
"2090518410",
"1926639317",
"",
"2604233003",
"2747550417",
"2124861766",
"2963325280"
],
"abstract": [
"",
"We present a descriptor, called fully convolutional self-similarity (FCSS), for dense semantic correspondence. To robustly match points among different instances within the same object class, we formulate FCSS using local self-similarity (LSS) within a fully convolutional network. In contrast to existing CNN-based descriptors, FCSS is inherently insensitive to intra-class appearance variations because of its LSS-based structure, while maintaining the precise localization ability of deep neural networks. The sampling patterns of local structure and the self-similarity measure are jointly learned within the proposed network in an end-to-end and multi-scale manner. As training data for semantic correspondence is rather limited, we propose to leverage object candidate priors provided in existing image datasets and also correspondence consistency between object pairs to enable weakly-supervised learning. Experiments demonstrate that FCSS outperforms conventional handcrafted descriptors and CNN-based descriptors on various benchmarks.",
"Despite significant progress of deep learning in recent years, state-of-the-art semantic matching methods still rely on legacy features such as SIFT or HoG. We argue that the strong invariance properties that are key to the success of recent deep architectures on the classification task make them unfit for dense correspondence tasks, unless a large amount of supervision is used. In this work, we propose a deep network, termed AnchorNet, that produces image representations that are well-suited for semantic matching. It relies on a set of filters whose response is geometrically consistent across different object instances, even in the presence of strong intra-class, scale, or viewpoint variations. Trained only with weak image-level labels, the final representation successfully captures information about the object structure and improves results of state-of-the-art semantic matching methods such as the Deformable Spatial Pyramid or the Proposal Flow methods. We show positive results on the cross-instance matching task where different instances of the same object category are matched as well as on a new cross-category semantic matching task aligning pairs of instances each from a different object class.",
"While image alignment has been studied in different areas of computer vision for decades, aligning images depicting different scenes remains a challenging problem. Analogous to optical flow, where an image is aligned to its temporally adjacent frame, we propose SIFT flow, a method to align an image to its nearest neighbors in a large image corpus containing a variety of scenes. The SIFT flow algorithm consists of matching densely sampled, pixelwise SIFT features between two images while preserving spatial discontinuities. The SIFT features allow robust matching across different scene object appearances, whereas the discontinuity-preserving spatial model allows matching of objects located at different parts of the scene. Experiments show that the proposed approach robustly aligns complex scene pairs containing significant spatial differences. Based on SIFT flow, we propose an alignment-based large database framework for image analysis and synthesis, where image information is transferred from the nearest neighbors to a query image according to the dense scene correspondence. This framework is demonstrated through concrete applications such as motion field prediction from a single image, motion synthesis via object transfer, satellite image registration, and face recognition.",
"Given a set of poorly aligned images of the same visual concept without any annotations, we propose an algorithm to jointly bring them into pixel-wise correspondence by estimating a FlowWeb representation of the image set. FlowWeb is a fully-connected correspondence flow graph with each node representing an image, and each edge representing the correspondence flow field between a pair of images, i.e. a vector field indicating how each pixel in one image can find a corresponding pixel in the other image. Correspondence flow is related to optical flow but allows for correspondences between visually dissimilar regions if there is evidence they correspond transitively on the graph. Our algorithm starts by initializing all edges of this complete graph with an off-the-shelf, pairwise flow method. We then iteratively update the graph to force it to be more self-consistent. Once the algorithm converges, dense, globally-consistent correspondences can be read off the graph. Our results suggest that FlowWeb improves alignment accuracy over previous pairwise as well as joint alignment methods.",
"",
"We address the problem of determining correspondences between two images in agreement with a geometric model such as an affine or thin-plate spline transformation, and estimating its parameters. The contributions of this work are three-fold. First, we propose a convolutional neural network architecture for geometric matching. The architecture is based on three main components that mimic the standard steps of feature extraction, matching and simultaneous inlier detection and model parameter estimation, while being trainable end-to-end. Second, we demonstrate that the network parameters can be trained from synthetically generated imagery without the need for manual annotation and that our matching layer significantly increases generalization capabilities to never seen before images. Finally, we show that the same model can perform both instance-level and category-level matching giving state-of-the-art results on the challenging Proposal Flow dataset.",
"Estimating dense visual correspondences between objects with intra-class variation, deformations and background clutter remains a challenging problem. Thanks to the breakthrough of CNNs there are new powerful features available. Despite their easy accessibility and great success, existing semantic flow methods could not significantly benefit from these without extensive additional training. We introduce a novel method for semantic matching with pre-trained CNN features which is based on convolutional feature pyramids and activation guided feature selection. For the final matching we propose a sparse graph matching framework where each salient feature selects among a small subset of nearest neighbors in the target image. To improve our method in the unconstrained setting without bounding box annotations we introduce novel object proposal based matching constraints. Furthermore, we show that the sparse matching can be transformed into a dense correspondence field. Extensive experimental evaluations on benchmark datasets show that our method significantly outperforms existing semantic matching methods.",
"We introduce a fast deformable spatial pyramid (DSP) matching algorithm for computing dense pixel correspondences. Dense matching methods typically enforce both appearance agreement between matched pixels as well as geometric smoothness between neighboring pixels. Whereas the prevailing approaches operate at the pixel level, we propose a pyramid graph model that simultaneously regularizes match consistency at multiple spatial extents-ranging from an entire image, to coarse grid cells, to every single pixel. This novel regularization substantially improves pixel-level matching in the face of challenging image variations, while the \"deformable\" aspect of our model overcomes the strict rigidity of traditional spatial pyramids. Results on LabelMe and Caltech show our approach outperforms state-of-the-art methods (SIFT Flow [15] and PatchMatch [2]), both in terms of accuracy and run time.",
"This paper addresses the problem of establishing semantic correspondences between images depicting different instances of the same object or scene category. Previous approaches focus on either combining a spatial regularizer with hand-crafted features, or learning a correspondence model for appearance only. We propose instead a convolutional neural network architecture, called SCNet, for learning a geometrically plausible model for semantic correspondence. SCNet uses region proposals as matching primitives, and explicitly incorporates geometric consistency in its loss function. It is trained on image pairs obtained from the PASCAL VOC 2007 keypoint dataset, and a comparative evaluation on several standard benchmarks demonstrates that the proposed approach substantially outperforms both recent deep learning architectures and previous methods based on hand-crafted features."
]
} |
1903.07593 | 2921984677 | We introduce a self-supervised method for learning visual correspondence from unlabeled video. The main idea is to use cycle-consistency in time as free supervisory signal for learning visual representations from scratch. At training time, our model learns a feature map representation to be useful for performing cycle-consistent tracking. At test time, we use the acquired representation to find nearest neighbors across space and time. We demonstrate the generalizability of the representation -- without finetuning -- across a range of visual correspondence tasks, including video object segmentation, keypoint tracking, and optical flow. Our approach outperforms previous self-supervised methods and performs competitively with strongly supervised methods. | Our work is influenced by the classic idea of forward-backward consistency in tracking @cite_60 @cite_85 @cite_56 @cite_6 , which has long been used as an evaluation metric for tracking @cite_17 as well as a measure of uncertainty @cite_51 . Recent work on optical flow estimation @cite_21 @cite_8 @cite_1 @cite_59 @cite_82 also utilizes forward-backward consistency as an optimization goal. For example, @cite_82 combines one-step forward and backward consistency check with pixel reconstruction loss for learning optical flows. Compared to pixel reconstruction, modeling correspondence in feature space allows us to follow and learn from longer cycles. Forward-backward consistency is a specific case of cycle-consistency, which has been widely applied as a learning objective for 3D shape matching @cite_37 , image alignment @cite_2 @cite_24 @cite_39 , depth estimation @cite_44 @cite_14 @cite_75 , and image-to-image translation @cite_16 @cite_42 . For example @cite_39 used 3D CAD models to render two synthetic views for pairs of training images and construct a correspondence flow 4-cycle. To the best of our knowledge, our work is the first to employ cycle-consistency across multiple steps in time. 
| {
"cite_N": [
"@cite_85",
"@cite_42",
"@cite_44",
"@cite_2",
"@cite_75",
"@cite_60",
"@cite_8",
"@cite_21",
"@cite_39",
"@cite_17",
"@cite_37",
"@cite_6",
"@cite_56",
"@cite_16",
"@cite_82",
"@cite_14",
"@cite_1",
"@cite_24",
"@cite_59",
"@cite_51"
],
"mid": [
"",
"2963917969",
"2609883120",
"1926639317",
"2963583471",
"1979059554",
"2894983388",
"1965004103",
"2474531669",
"2165737454",
"2143133882",
"",
"",
"2962793481",
"2963891416",
"2520707372",
"2608018946",
"2189538311",
"2770424797",
"2151282921"
],
"abstract": [
"",
"We introduce a data-driven approach for unsupervised video retargeting that translates content from one domain to another while preserving the style native to a domain, i.e., if contents of John Oliver’s speech were to be transferred to Stephen Colbert, then the generated content/speech should be in Stephen Colbert’s style. Our approach combines both spatial and temporal information along with adversarial losses for content translation and style preservation. In this work, we first study the advantages of using spatiotemporal constraints over spatial constraints for effective retargeting. We then demonstrate the proposed approach for the problems where information in both space and time matters such as face-to-face translation, flower-to-flower, wind and cloud synthesis, sunrise and sunset.",
"We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. In common with recent work [10, 14, 16], we use an end-to-end learning approach with view synthesis as the supervisory signal. In contrast to the previous work, our method is completely unsupervised, requiring only monocular video sequences for training. Our method uses single-view depth and multiview pose networks, with a loss based on warping nearby views to the target using the computed depth and pose. The networks are thus coupled by the loss during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performs comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performs favorably compared to established SLAM systems under comparable input settings.",
"Given a set of poorly aligned images of the same visual concept without any annotations, we propose an algorithm to jointly bring them into pixel-wise correspondence by estimating a FlowWeb representation of the image set. FlowWeb is a fully-connected correspondence flow graph with each node representing an image, and each edge representing the correspondence flow field between a pair of images, i.e. a vector field indicating how each pixel in one image can find a corresponding pixel in the other image. Correspondence flow is related to optical flow but allows for correspondences between visually dissimilar regions if there is evidence they correspond transitively on the graph. Our algorithm starts by initializing all edges of this complete graph with an off-the-shelf, pairwise flow method. We then iteratively update the graph to force it to be more self-consistent. Once the algorithm converges, dense, globally-consistent correspondences can be read off the graph. Our results suggest that FlowWeb improves alignment accuracy over previous pairwise as well as joint alignment methods.",
"We propose GeoNet, a jointly unsupervised learning framework for monocular depth, optical flow and egomotion estimation from videos. The three components are coupled by the nature of 3D scene geometry, jointly learned by our framework in an end-to-end manner. Specifically, geometric relationships are extracted over the predictions of individual modules and then combined as an image reconstruction loss, reasoning about static and dynamic scene parts separately. Furthermore, we propose an adaptive geometric consistency loss to increase robustness towards outliers and non-Lambertian regions, which resolves occlusions and texture ambiguities effectively. Experimentation on the KITTI driving dataset reveals that our scheme achieves state-of-the-art results in all of the three tasks, performing better than previously unsupervised methods and comparably with supervised ones.",
"Identifying the same physical point in more than one image, the correspondence problem, is vital in motion analysis. Most research for establishing correspondence uses only two frames of a sequence to solve this problem. By using a sequence of frames, it is possible to exploit the fact that due to inertia the motion of an object cannot change instantaneously. By using smoothness of motion, it is possible to solve the correspondence problem for arbitrary motion of several nonrigid objects in a scene. We formulate the correspondence problem as an optimization problem and propose an iterative algorithm to find trajectories of points in a monocular image sequence. A modified form of this algorithm is useful in case of occlusion also. We demonstrate the efficacy of this approach considering synthetic, laboratory, and real scenes.",
"Learning optical flow with neural networks is hampered by the need for obtaining training data with associated ground truth. Unsupervised learning is a promising direction, yet the performance of current unsupervised methods is still limited. In particular, the lack of proper occlusion handling in commonly used data terms constitutes a major source of error. While most optical flow methods process pairs of consecutive frames, more advanced occlusion reasoning can be realized when considering multiple frames. In this paper, we propose a framework for unsupervised learning of optical flow and occlusions over multiple frames. More specifically, we exploit the minimal configuration of three frames to strengthen the photometric loss and explicitly reason about occlusions. We demonstrate that our multi-frame, occlusion-sensitive formulation outperforms existing unsupervised two-frame methods and even produces results on par with some fully supervised methods.",
"We describe a method for plausible interpolation of images, with a wide range of applications like temporal up-sampling for smooth playback of lower frame rate video, smooth view interpolation, and animation of still images. The method is based on the intuitive idea, that a given pixel in the interpolated frames traces out a path in the source images. Therefore, we simply move and copy pixel gradients from the input images along this path. A key innovation is to allow arbitrary (asymmetric) transition points, where the path moves from one image to the other. This flexible transition preserves the frequency content of the originals without ghosting or blurring, and maintains temporal coherence. Perhaps most importantly, our framework makes occlusion handling particularly simple. The transition points allow for matches away from the occluded regions, at any suitable point along the path. Indeed, occlusions do not need to be handled explicitly at all in our initial graph-cut optimization. Moreover, a simple comparison of computed path lengths after the optimization, allows us to robustly identify occluded regions, and compute the most plausible interpolation in those areas. Finally, we show that significant improvements are obtained by moving gradients and using Poisson reconstruction.",
"Discriminative deep learning approaches have shown impressive results for problems where human-labeled ground truth is plentiful, but what about tasks where labels are difficult or impossible to obtain? This paper tackles one such problem: establishing dense visual correspondence across different object instances. For this task, although we do not know what the ground-truth is, we know it should be consistent across instances of that category. We exploit this consistency as a supervisory signal to train a convolutional neural network to predict cross-instance correspondences between pairs of images depicting objects of the same category. For each pair of training images we find an appropriate 3D CAD model and render two synthetic views to link in with the pair, establishing a correspondence flow 4-cycle. We use ground-truth synthetic-to-synthetic correspondences, provided by the rendering engine, to train a ConvNet to predict synthetic-to-real, real-to-real and real-to-synthetic correspondences that are cycle-consistent with the ground-truth. At test time, no CAD models are required. We demonstrate that our end-to-end trained ConvNet supervised by cycle-consistency outperforms state-of-the-art pairwise matching methods in correspondence-related tasks.",
"This paper proposes a novel method for tracking failure detection. The detection is based on the Forward-Backward error, i.e. the tracking is performed forward and backward in time and the discrepancies between these two trajectories are measured. We demonstrate that the proposed error enables reliable detection of tracking failures and selection of reliable trajectories in video sequences. We demonstrate that the approach is complementary to commonly used normalized cross-correlation (NCC). Based on the error, we propose a novel object tracker called Median Flow. State-of-the-art performance is achieved on challenging benchmark video sequences which include non-rigid objects.",
"Recent advances in shape matching have shown that jointly optimizing the maps among the shapes in a collection can lead to significant improvements when compared to estimating maps between pairs of shapes in isolation. These methods typically invoke a cycle-consistency criterion --- the fact that compositions of maps along a cycle of shapes should approximate the identity map. This condition regularizes the network and allows for the correction of errors and imperfections in individual maps. In particular, it encourages the estimation of maps between dissimilar shapes by compositions of maps along a path of more similar shapes. In this paper, we introduce a novel approach for obtaining consistent shape maps in a collection that formulates the cycle-consistency constraint as the solution to a semidefinite program (SDP). The proposed approach is based on the observation that, if the ground truth maps between the shapes are cycle-consistent, then the matrix that stores all pair-wise maps in blocks is low-rank and positive semidefinite. Motivated by recent advances in techniques for low-rank matrix recovery via semidefinite programming, we formulate the problem of estimating cycle-consistent maps as finding the closest positive semidefinite matrix to an input matrix that stores all the initial maps. By analyzing the Karush-Kuhn-Tucker (KKT) optimality condition of this program, we derive theoretical guarantees for the proposed algorithm, ensuring the correctness of the recovery when the errors in the inputs maps do not exceed certain thresholds. Besides this theoretical guarantee, experimental results on benchmark datasets show that the proposed approach outperforms state-of-the-art multiple shape matching methods.",
"",
"",
"Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples. Our goal is to learn a mapping G : X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping F : Y → X and introduce a cycle consistency loss to push F(G(X)) ≈ X (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach.",
"",
"Learning based methods have shown very promising results for the task of depth estimation in single images. However, most existing approaches treat depth prediction as a supervised regression problem and as a result, require vast quantities of corresponding ground truth depth data for training. Just recording quality depth data in a range of environments is a challenging problem. In this paper, we innovate beyond existing approaches, replacing the use of explicit depth data during training with easier-to-obtain binocular stereo footage. We propose a novel training objective that enables our convolutional neural network to learn to perform single image depth estimation, despite the absence of ground truth depth data. Exploiting epipolar geometry constraints, we generate disparity images by training our network with an image reconstruction loss. We show that solving for image reconstruction alone results in poor quality depth images. To overcome this problem, we propose a novel training loss that enforces consistency between the disparities produced relative to both the left and right images, leading to improved performance and robustness compared to existing approaches. Our method produces state of the art results for monocular depth estimation on the KITTI driving dataset, even outperforming supervised methods that have been trained with ground truth depth.",
"We propose SfM-Net, a geometry-aware neural network for motion estimation in videos that decomposes frame-to-frame pixel motion in terms of scene and object depth, camera motion and 3D object rotations and translations. Given a sequence of frames, SfM-Net predicts depth, segmentation, camera and rigid object motions, converts those into a dense frame-to-frame motion field (optical flow), differentiably warps frames in time to match pixels and back-propagates. The model can be trained with various degrees of supervision: 1) self-supervised by the re-projection photometric error (completely unsupervised), 2) supervised by ego-motion (camera motion), or 3) supervised by depth (e.g., as provided by RGBD sensors). SfM-Net extracts meaningful depth estimates and successfully estimates frame-to-frame camera rotations and translations. It often successfully segments the moving objects in the scene, even though such supervision is never provided.",
"In this paper we propose a global optimization-based approach to jointly matching a set of images. The estimated correspondences simultaneously maximize pairwise feature affinities and cycle consistency across multiple images. Unlike previous convex methods relying on semidefinite programming, we formulate the problem as a low-rank matrix recovery problem and show that the desired semidefiniteness of a solution can be spontaneously fulfilled. The low-rank formulation enables us to derive a fast alternating minimization algorithm in order to handle practical problems with thousands of features. Both simulation and real experiments demonstrate that the proposed algorithm can achieve a competitive performance with an order of magnitude speedup compared to the state-of-the-art algorithm. In the end, we demonstrate the applicability of the proposed method to match the images of different object instances and as a result the potential to reconstruct category-specific object models from those images.",
"It has been recently shown that a convolutional neural network can learn optical flow estimation with unsupervised learning. However, the performance of the unsupervised methods still has a relatively large gap compared to its supervised counterpart. Occlusion and large motion are some of the major factors that limit the current unsupervised learning of optical flow methods. In this work we introduce a new method which models occlusion explicitly and a new warping way that facilitates the learning of large motion. Our method shows promising results on Flying Chairs, MPI-Sintel and KITTI benchmark datasets. Especially on KITTI dataset where abundant unlabeled samples exist, our unsupervised method outperforms its counterpart trained with supervised learning.",
"Automatic evaluation of visual tracking algorithms in the absence of ground truth is a very challenging and important problem. In the context of online appearance modeling, there is an additional ambiguity involving the correctness of the appearance model. In this paper, we propose a novel performance evaluation strategy for tracking systems based on particle filter using a time reversed Markov chain. Starting from the latest observation, the time reversed chain is propagated back till the starting time t = 0 of the tracking algorithm. The posterior density of the time reversed chain is also computed. The distance between the posterior density of the time reversed chain (at t = 0) and the prior density used to initialize the tracking algorithm forms the decision statistic for evaluation. It is postulated that when the data is generated true to the underlying models, the decision statistic takes a low value. We empirically demonstrate the performance of the algorithm against various common failure modes in the generic visual tracking problem. Finally, we derive a small frame approximation that allows for very efficient computation of the decision statistic."
]
} |
1903.07137 | 2922338294 | We propose a topic-guided variational autoencoder (TGVAE) model for text generation. Distinct from existing variational autoencoder (VAE) based approaches, which assume a simple Gaussian prior for the latent code, our model specifies the prior as a Gaussian mixture model (GMM) parametrized by a neural topic module. Each mixture component corresponds to a latent topic, which provides guidance to generate sentences under the topic. The neural topic module and the VAE-based neural sequence module in our model are learned jointly. In particular, a sequence of invertible Householder transformations is applied to endow the approximate posterior of the latent code with high flexibility during model inference. Experimental results show that our TGVAE outperforms alternative approaches on both unconditional and conditional text generation, which can generate semantically-meaningful sentences with various topics. | The VAE was proposed by , and since then, it has been applied successfully in a variety of applications @cite_25 @cite_12 @cite_4 @cite_21 @cite_47 . Focusing on text generation, the methods in represent texts as bag-of-words, and proposed the usage of an RNN as the encoder and decoder, and found some negative results. In order to improve the performance, different convolutional designs @cite_0 @cite_39 @cite_30 have been proposed. A VAE variant was further developed in to control the sentiment and tense of generated sentences. Additionally, the VAE has also been considered for conditional text generation tasks, including machine translation @cite_54 , image captioning @cite_36 , dialogue generation @cite_51 @cite_20 @cite_55 and text summarization @cite_56 @cite_22 . In particular, distinct from the above works, we propose the usage of a topic-dependent prior to explicitly incorporate topic guidance into the text-generation framework. | {
"cite_N": [
"@cite_30",
"@cite_4",
"@cite_22",
"@cite_36",
"@cite_54",
"@cite_21",
"@cite_55",
"@cite_39",
"@cite_0",
"@cite_56",
"@cite_47",
"@cite_20",
"@cite_51",
"@cite_25",
"@cite_12"
],
"mid": [
"2594538354",
"2951183280",
"2951652470",
"2527569769",
"2394571815",
"",
"2605246398",
"2756946152",
"2951575317",
"2740593609",
"",
"2611714756",
"2399880602",
"1850742715",
"2949416428"
],
"abstract": [
"Recent work on generative modeling of text has found that variational auto-encoders (VAE) incorporating LSTM decoders perform worse than simpler LSTM language models (, 2015). This negative result is so far poorly understood, but has been attributed to the propensity of LSTM decoders to ignore conditioning information from the encoder. In this paper, we experiment with a new type of decoder for VAE: a dilated CNN. By changing the decoder's dilation architecture, we control the effective context from previously generated words. In experiments, we find that there is a trade-off between the contextual capacity of the decoder and the amount of encoding information used. We show that with the right decoder, VAE can outperform LSTM language models. We demonstrate perplexity gains on two datasets, representing the first positive experimental result on the use of VAE for generative modeling of text. Further, we conduct an in-depth investigation of the use of VAE (with our new decoding architecture) for semi-supervised and unsupervised labeling tasks, demonstrating gains over several strong baselines.",
"Two fundamental problems in unsupervised learning are efficient inference for latent-variable models and robust density estimation based on large amounts of unlabeled data. Algorithms for the two tasks, such as normalizing flows and generative adversarial networks (GANs), are often developed independently. In this paper, we propose the concept of continuous-time flows (CTFs), a family of diffusion-based methods that are able to asymptotically approach a target distribution. Distinct from normalizing flows and GANs, CTFs can be adopted to achieve the above two goals in one framework, with theoretical guarantees. Our framework includes distilling knowledge from a CTF for efficient inference, and learning an explicit energy-based distribution with CTFs for density estimation. Both tasks rely on a new technique for distribution matching within amortized learning. Experiments on various tasks demonstrate promising performance of the proposed CTF framework, compared to related techniques.",
"In this work we explore deep generative models of text in which the latent representation of a document is itself drawn from a discrete language model distribution. We formulate a variational auto-encoder for inference in this model and apply it to the task of compressing sentences. In this application the generative model first draws a latent summary sentence from a background language model, and then subsequently draws the observed sentence conditioned on this latent summary. In our empirical evaluation we show that generative formulations of both abstractive and extractive compression yield state-of-the-art results when trained on a large amount of supervised data. Further, we explore semi-supervised compression scenarios where we show that it is possible to achieve performance competitive with previously proposed supervised models while training on a fraction of the supervised data.",
"A novel variational autoencoder is developed to model images, as well as associated labels or captions. The Deep Generative Deconvolutional Network (DGDN) is used as a decoder of the latent image features, and a deep Convolutional Neural Network (CNN) is used as an image encoder; the CNN is used to approximate a distribution for the latent DGDN features code. The latent code is also linked to generative models for labels (Bayesian support vector machine) or captions (recurrent neural network). When predicting a label/caption for a new image at test, averaging is performed across the distribution of latent codes; this is computationally efficient as a consequence of the learned CNN-based encoder. Since the framework is capable of modeling the image in the presence/absence of associated labels/captions, a new semi-supervised setting is manifested for CNN learning with images; the framework even allows unsupervised CNN learning, based on images alone.",
"Models of neural machine translation are often from a discriminative family of encoderdecoders that learn a conditional distribution of a target sentence given a source sentence. In this paper, we propose a variational model to learn this conditional distribution for neural machine translation: a variational encoderdecoder model that can be trained end-to-end. Different from the vanilla encoder-decoder model that generates target translations from hidden representations of source sentences alone, the variational model introduces a continuous latent variable to explicitly model underlying semantics of source sentences and to guide the generation of target translations. In order to perform efficient posterior inference and large-scale training, we build a neural posterior approximator conditioned on both the source and the target sides, and equip it with a reparameterization technique to estimate the variational lower bound. Experiments on both Chinese-English and English- German translation tasks show that the proposed variational neural machine translation achieves significant improvements over the vanilla neural machine translation baselines.",
"",
"While recent neural encoder-decoder models have shown great promise in modeling open-domain conversations, they often generate dull and generic responses. Unlike past work that has focused on diversifying the output of the decoder at word-level to alleviate this problem, we present a novel framework based on conditional variational autoencoders that captures the discourse-level diversity in the encoder. Our model uses latent variables to learn a distribution over potential conversational intents and generates diverse responses using only greedy decoders. We have further developed a novel variant that is integrated with linguistic prior knowledge for better performance. Finally, the training procedure is improved by introducing a bag-of-word loss. Our proposed models have been validated to generate significantly more diverse responses than baseline approaches and exhibit competence in discourse-level decision-making.",
"A latent-variable model is introduced for text matching, inferring sentence representations by jointly optimizing generative and discriminative objectives. To alleviate typical optimization challenges in latent-variable models for text, we employ deconvolutional networks as the sequence decoder (generator), providing learned latent codes with more semantic information and better generalization. Our model, trained in an unsupervised manner, yields stronger empirical predictive performance than a decoder based on Long Short-Term Memory (LSTM), with less parameters and considerably faster training. Further, we apply it to text sequence-matching problems. The proposed model significantly outperforms several strong sentence-encoding baselines, especially in the semi-supervised setting.",
"In this paper we explore the effect of architectural choices on learning a Variational Autoencoder (VAE) for text generation. In contrast to the previously introduced VAE model for text where both the encoder and decoder are RNNs, we propose a novel hybrid architecture that blends fully feed-forward convolutional and deconvolutional components with a recurrent language model. Our architecture exhibits several attractive properties such as faster run time and convergence, ability to better handle long sequences and, more importantly, it helps to avoid some of the major difficulties posed by training VAE models on textual data.",
"We propose a new framework for abstractive text summarization based on a sequence-to-sequence oriented encoder-decoder model equipped with a deep recurrent generative decoder (DRGN). Latent structure information implied in the target summaries is learned based on a recurrent latent random model for improving the summarization quality. Neural variational inference is employed to address the intractable posterior inference for the recurrent latent variables. Abstractive summaries are generated based on both the generative latent variables and the discriminative deterministic states. Extensive experiments on some benchmark datasets in different languages show that DRGN achieves improvements over the state-of-the-art methods.",
"",
"Deep latent variable models have been shown to facilitate the response generation for open-domain dialog systems. However, these latent variables are highly randomized, leading to uncontrollable generated responses. In this paper, we propose a framework allowing conditional response generation based on specific attributes. These attributes can be either manually assigned or automatically detected. Moreover, the dialog states for both speakers are modeled separately in order to reflect personal features. We validate this framework on two different scenarios, where the attribute refers to genericness and sentiment states respectively. The experiment result testified the potential of our model, where meaningful responses can be generated in accordance with the specified attributes.",
"Sequential data often possesses a hierarchical structure with complex dependencies between subsequences, such as found between the utterances in a dialogue. In an effort to model this kind of generative process, we propose a neural network-based generative architecture, with latent stochastic variables that span a variable number of time steps. We apply the proposed model to the task of dialogue response generation and compare it with recent neural network architectures. We evaluate the model performance through automatic evaluation metrics and by carrying out a human evaluation. The experiments demonstrate that our model improves upon recently proposed models and that the latent variables facilitate the generation of long outputs and maintain the context.",
"This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye.",
"The ever-increasing size of modern data sets combined with the difficulty of obtaining label information has made semi-supervised learning one of the problems of significant practical importance in modern data analysis. We revisit the approach to semi-supervised learning with generative models and develop new models that allow for effective generalisation from small labelled data sets to large unlabelled ones. Generative approaches have thus far been either inflexible, inefficient or non-scalable. We show that deep generative models and approximate Bayesian inference exploiting recent advances in variational methods can be used to provide significant improvements, making generative approaches highly competitive for semi-supervised learning."
]
} |
1903.07137 | 2922338294 | We propose a topic-guided variational autoencoder (TGVAE) model for text generation. Distinct from existing variational autoencoder (VAE) based approaches, which assume a simple Gaussian prior for the latent code, our model specifies the prior as a Gaussian mixture model (GMM) parametrized by a neural topic module. Each mixture component corresponds to a latent topic, which provides guidance to generate sentences under the topic. The neural topic module and the VAE-based neural sequence module in our model are learned jointly. In particular, a sequence of invertible Householder transformations is applied to endow the approximate posterior of the latent code with high flexibility during model inference. Experimental results show that our TGVAE outperforms alternative approaches on both unconditional and conditional text generation, which can generate semantically-meaningful sentences with various topics. | The idea of using learned topics to improve NLP tasks has been explored previously, including methods combining topic and neural language models @cite_23 @cite_10 @cite_41 @cite_5 @cite_33 , as well as leveraging topic and word embeddings @cite_45 @cite_7 . Distinct from them, we propose the use of topics to guide the prior of a VAE, rather than only the language model (i.e., the decoder in a VAE setup). This provides more flexibility in text modeling and also the ability to infer the posterior on latent codes, which could be useful for visualization and downstream tasks. | {
"cite_N": [
"@cite_33",
"@cite_7",
"@cite_41",
"@cite_45",
"@cite_23",
"@cite_5",
"@cite_10"
],
"mid": [
"2778817245",
"",
"2608962050",
"2238728730",
"2476140796",
"1999965501",
"2952723479"
],
"abstract": [
"We propose a Topic Compositional Neural Language Model (TCNLM), a novel method designed to simultaneously capture both the global semantic meaning and the local word ordering structure in a document. The TCNLM learns the global semantic coherence of a document via a neural topic model, and the probability of each learned latent topic is further used to build a Mixture-of-Experts (MoE) language model, where each expert (corresponding to one topic) is a recurrent neural network (RNN) that accounts for learning the local structure of a word sequence. In order to train the MoE model efficiently, a matrix factorization method is applied, by extending each weight matrix of the RNN to be an ensemble of topic-dependent weight matrices. The degree to which each member of the ensemble is used is tied to the document-dependent probability of the corresponding topics. Experimental results on several corpora show that the proposed approach outperforms both a pure RNN-based model and other topic-guided language models. Further, our model yields sensible topics, and also has the capacity to generate meaningful sentences conditioned on given topics.",
"",
"Language models are typically applied at the sentence level, without access to the broader document context. We present a neural language model that incorporates document context in the form of a topic model-like architecture, thus providing a succinct representation of the broader document context outside of the current sentence. Experiments over a range of datasets demonstrate that our model outperforms a pure sentence-based model in terms of language model perplexity, and leads to topics that are potentially more coherent than those produced by a standard LDA topic model. Our model also has the ability to generate related sentences for a topic, providing another way to interpret topics.",
"Most word embedding models typically represent each word using a single vector, which makes these models indiscriminative for ubiquitous homonymy and polysemy. In order to enhance discriminativeness, we employ latent topic models to assign topics for each word in the text corpus, and learn topical word embeddings (TWE) based on both words and their topics. In this way, contextual word embeddings can be flexibly obtained to measure contextual word similarity. We can also build document representations, which are more expressive than some widely-used document models such as latent topic models. In the experiments, we evaluate the TWE models on two tasks, contextual word similarity and text classification. The experimental results show that our models outperform typical word embedding models including the multi-prototype version on contextual word similarity, and also exceed latent topic models and other representative document models on text classification. The source code of this paper can be obtained from https: github.com largelymfs topical_word_embeddings.",
"Current language models have significant limitations in their ability to encode and decode knowledge. This is mainly because they acquire knowledge based on statistical co-occurrences, even if most of the knowledge words are rarely observed named entities. In this paper, we propose a Neural Knowledge Language Model (NKLM) which combines symbolic knowledge provided by a knowledge graph with the RNN language model. At each time step, the model predicts a fact on which the observed word is to be based. Then, a word is either generated from the vocabulary or copied from the knowledge graph. We train and test the model on a new dataset, WikiFacts. In experiments, we show that the NKLM significantly improves the perplexity while generating a much smaller number of unknown words. In addition, we demonstrate that the sampled descriptions include named entities which were used to be the unknown words in RNN language models.",
"Recurrent neural network language models (RNNLMs) have recently demonstrated state-of-the-art performance across a variety of tasks. In this paper, we improve their performance by providing a contextual real-valued input vector in association with each word. This vector is used to convey contextual information about the sentence being modeled. By performing Latent Dirichlet Allocation using a block of preceding text, we achieve a topic-conditioned RNNLM. This approach has the key advantage of avoiding the data fragmentation associated with building multiple topic models on different data subsets. We report perplexity results on the Penn Treebank data, where we achieve a new state-of-the-art. We further apply the model to the Wall Street Journal speech recognition task, where we observe improvements in word-error-rate.",
"In this paper, we propose TopicRNN, a recurrent neural network (RNN)-based language model designed to directly capture the global semantic meaning relating words in a document via latent topics. Because of their sequential nature, RNNs are good at capturing the local structure of a word sequence - both semantic and syntactic - but might face difficulty remembering long-range dependencies. Intuitively, these long-range dependencies are of semantic nature. In contrast, latent topic models are able to capture the global underlying semantic structure of a document but do not account for word ordering. The proposed TopicRNN model integrates the merits of RNNs and latent topic models: it captures local (syntactic) dependencies using an RNN and global (semantic) dependencies using latent topics. Unlike previous work on contextual RNN language modeling, our model is learned end-to-end. Empirical results on word prediction show that TopicRNN outperforms existing contextual RNN baselines. In addition, TopicRNN can be used as an unsupervised feature extractor for documents. We do this for sentiment analysis on the IMDB movie review dataset and report an error rate of @math . This is comparable to the state-of-the-art @math resulting from a semi-supervised approach. Finally, TopicRNN also yields sensible topics, making it a useful alternative to document models such as latent Dirichlet allocation."
]
} |
1903.07137 | 2922338294 | We propose a topic-guided variational autoencoder (TGVAE) model for text generation. Distinct from existing variational autoencoder (VAE) based approaches, which assume a simple Gaussian prior for the latent code, our model specifies the prior as a Gaussian mixture model (GMM) parametrized by a neural topic module. Each mixture component corresponds to a latent topic, which provides guidance to generate sentences under the topic. The neural topic module and the VAE-based neural sequence module in our model are learned jointly. In particular, a sequence of invertible Householder transformations is applied to endow the approximate posterior of the latent code with high flexibility during model inference. Experimental results show that our TGVAE outperforms alternative approaches on both unconditional and conditional text generation, which can generate semantically-meaningful sentences with various topics. | Neural abstractive summarization was pioneered in , and it was followed and extended by . Currently the RNN-based encoder-decoder framework with attention @cite_34 @cite_28 remains popular in this area. Attention models typically work as a keyword detector, which is similar to topic modeling in spirit. This fact motivated us to extend our topic-guided VAE model to text summarization. | {
"cite_N": [
"@cite_28",
"@cite_34"
],
"mid": [
"2952913664",
"2963929190"
],
"abstract": [
"Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.",
"In this work, we model abstractive text summarization using Attentional EncoderDecoder Recurrent Neural Networks, and show that they achieve state-of-the-art performance on two different corpora. We propose several novel models that address critical problems in summarization that are not adequately modeled by the basic architecture, such as modeling key-words, capturing the hierarchy of sentence-toword structure, and emitting words that are rare or unseen at training time. Our work shows that many of our proposed models contribute to further improvement in performance. We also propose a new dataset consisting of multi-sentence summaries, and establish performance benchmarks for further research."
]
} |
1903.07227 | 2952428868 | Machine learning models of music typically break up the task of composition into a chronological process, composing a piece of music in a single pass from beginning to end. On the contrary, human composers write music in a nonlinear fashion, scribbling motifs here and there, often revisiting choices previously made. In order to better approximate this process, we train a convolutional neural network to complete partial musical scores, and explore the use of blocked Gibbs sampling as an analogue to rewriting. Neither the model nor the generative procedure are tied to a particular causal direction of composition. Our model is an instance of orderless NADE (, 2014), which allows more direct ancestral sampling. However, we find that Gibbs sampling greatly improves sample quality, which we demonstrate to be due to some conditional distributions being poorly modeled. Moreover, we show that even the cheap approximate blocked Gibbs procedure from (2014) yields better samples than ancestral sampling, based on both log-likelihood and human evaluation. | Computer music researchers have taken inspiration from this pedagogical scheme by first teaching computers to write species counterpoint as opposed to full-fledged counterpoint. Farbood @cite_30 uses Markov chains to capture transition probabilities of different melodic and harmonic transitions rules. Herremans @cite_3 @cite_8 takes an optimization approach by writing down an objective function that consists of existing rules of counterpoint and using a variable neighbourhood search (VNS) algorithm to optimize it. | {
"cite_N": [
"@cite_30",
"@cite_3",
"@cite_8"
],
"mid": [
"147294073",
"2059927004",
""
],
"abstract": [
"This paper presents a novel approach to computergenerated Palestrina-style counterpoint using probabilistic Markov Chains. It is shown how Markov Chains adequately capture the rules of species counterpoint and how they can be used to synthesize species counterpoint given a cantus firmus. It is also shown how such rules can be inferred from given counterpoint examples.",
"In this article, a variable neighbourhood search (VNS) algorithm is developed that can generate musical fragments consisting of a melody for the cantus firmus and the first species counterpoint. Th...",
""
]
} |
1903.07227 | 2952428868 | Machine learning models of music typically break up the task of composition into a chronological process, composing a piece of music in a single pass from beginning to end. On the contrary, human composers write music in a nonlinear fashion, scribbling motifs here and there, often revisiting choices previously made. In order to better approximate this process, we train a convolutional neural network to complete partial musical scores, and explore the use of blocked Gibbs sampling as an analogue to rewriting. Neither the model nor the generative procedure are tied to a particular causal direction of composition. Our model is an instance of orderless NADE (, 2014), which allows more direct ancestral sampling. However, we find that Gibbs sampling greatly improves sample quality, which we demonstrate to be due to some conditional distributions being poorly modeled. Moreover, we show that even the cheap approximate blocked Gibbs procedure from (2014) yields better samples than ancestral sampling, based on both log-likelihood and human evaluation. | J.S. Bach chorales has been the main corpus in computer music that serves as a starting point to tackle full-fledged counterpoint. A wide range of approaches have been used to generate music in the style of Bach chorales, for example rule-based and instance-based approaches such as Cope's recombinancy method @cite_1 . This method involves first segmenting existing Bach chorales into smaller chunks based on music theory, analyzing their function and stylistic signatures and then re-concatenating the chunks into new coherent works. Other approaches range from constraint-based @cite_14 to statistical methods @cite_36 . In addition, @cite_39 gives a comprehensive survey of AI methods used not just for generating Bach chorales, but also algorithmic composition in general. | {
"cite_N": [
"@cite_36",
"@cite_14",
"@cite_1",
"@cite_39"
],
"mid": [
"",
"2153628411",
"2053952057",
"2103498773"
],
"abstract": [
"",
"We survey works on the musical problem of automatic harmonization. This problem, which consists in creating musical scores which satisfy given rules of harmony, has been the object of numerous studies, most of them using constraint techniques in one way or another. We outline the main results obtained and the current status of this category of problems.",
"A brief background of automated music Musical style and linguistics LISP programming and object orientation Style replication Musical examples Cybernetic composition.",
"Algorithmic composition is the partial or total automation of the process of music composition by using computers. Since the 1950s, different computational techniques related to Artificial Intelligence have been used for algorithmic composition, including grammatical representations, probabilistic methods, neural networks, symbolic rule-based systems, constraint programming and evolutionary algorithms. This survey aims to be a comprehensive account of research on algorithmic composition, presenting a thorough view of the field for researchers in Artificial Intelligence."
]
} |
1903.07227 | 2952428868 | Machine learning models of music typically break up the task of composition into a chronological process, composing a piece of music in a single pass from beginning to end. On the contrary, human composers write music in a nonlinear fashion, scribbling motifs here and there, often revisiting choices previously made. In order to better approximate this process, we train a convolutional neural network to complete partial musical scores, and explore the use of blocked Gibbs sampling as an analogue to rewriting. Neither the model nor the generative procedure are tied to a particular causal direction of composition. Our model is an instance of orderless NADE (, 2014), which allows more direct ancestral sampling. However, we find that Gibbs sampling greatly improves sample quality, which we demonstrate to be due to some conditional distributions being poorly modeled. Moreover, we show that even the cheap approximate blocked Gibbs procedure from (2014) yields better samples than ancestral sampling, based on both log-likelihood and human evaluation. | Sequence models such as HMMs and RNNs are natural choices for modeling music. Successful application of such models to polyphonic music often requires serializing or otherwise re-representing the music to fit the sequence paradigm. For instance, Liang in BachBot @cite_28 serializes four-part Bach chorales by interleaving the parts, while Allan and Williams @cite_27 construct a chord vocabulary. @cite_4 adopt a piano roll representation, a binary matrix @math where @math iff some instrument is playing pitch @math at time @math . To model the joint probability distribution of the multi-hot pitch vector @math , they employ a Restricted Boltzmann Machine (RBM @cite_7 @cite_33 ) or Neural Autoregressive Distribution Estimator (NADE @cite_22 ) at each time step. Similarly @cite_0 employ a Deep Belief Network @cite_33 on top of an RNN. | {
"cite_N": [
"@cite_4",
"@cite_33",
"@cite_7",
"@cite_22",
"@cite_28",
"@cite_0",
"@cite_27"
],
"mid": [
"2962968839",
"2136922672",
"1813659000",
"2135181320",
"",
"1931432374",
"2161850243"
],
"abstract": [
"We investigate the problem of modeling symbolic sequences of polyphonic music in a completely general piano-roll representation. We introduce a probabilistic model based on distribution estimators conditioned on a recurrent neural network that is able to discover temporal dependencies in high-dimensional sequences. Our approach outperforms many traditional models of polyphonic music on a variety of realistic datasets. We show how our musical language model can serve as a symbolic prior to improve the accuracy of polyphonic transcription.",
"We show how to use \"complementary priors\" to eliminate the explaining-away effects that make inference difficult in densely connected belief nets that have many hidden layers. Using complementary priors, we derive a fast, greedy algorithm that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory. The fast, greedy algorithm is used to initialize a slower learning procedure that fine-tunes the weights using a contrastive version of the wake-sleep algorithm. After fine-tuning, a network with three hidden layers forms a very good generative model of the joint distribution of handwritten digit images and their labels. This generative model gives better digit classification than the best discriminative learning algorithms. The low-dimensional manifolds on which the digits lie are modeled by long ravines in the free-energy landscape of the top-level associative memory, and it is easy to explore these ravines by using the directed connections to display what the associative memory has in mind.",
"Abstract : At this early stage in the development of cognitive science, methodological issues are both open and central. There may have been times when developments in neuroscience, artificial intelligence, or cognitive psychology seduced researchers into believing that their discipline was on the verge of discovering the secret of intelligence. But a humbling history of hopes disappointed has produced the realization that understanding the mind will challenge the power of all these methodologies combined. The work reported in this chapter rests on the conviction that a methodology that has a crucial role to play in the development of cognitive science is mathematical analysis. The success of cognitive science, like that of many other sciences, will, I believe, depend upon the construction of a solid body of theoretical results: results that express in a mathematical language the conceptual insights of the field; results that squeeze all possible implications out of those insights by exploiting powerful mathematical techniques. This body of results, which I will call the theory of information processing, exists because information is a concept that lends itself to mathematical formalization. One part of the theory of information processing is already well-developed. The classical theory of computation provides powerful and elegant results about the notion of effective procedure, including languages for precisely expressing them and theoretical machines for realizing them.",
"We describe a new approach for modeling the distribution of high-dimensional vectors of discrete variables. This model is inspired by the restricted Boltzmann machine (RBM), which has been shown to be a powerful model of such distributions. However, an RBM typically does not provide a tractable distribution estimator, since evaluating the probability it assigns to some given observation requires the computation of the so-called partition function, which itself is intractable for RBMs of even moderate size. Our model circumvents this diculty by decomposing the joint distribution of observations into tractable conditional distributions and modeling each conditional using a non-linear function similar to a conditional of an RBM. Our model can also be interpreted as an autoencoder wired such that its output can be used to assign valid probabilities to observations. We show that this new model outperforms other multivariate binary distribution estimators on several datasets and performs similarly to a large (but intractable) RBM.",
"",
"In this paper, we propose a generic technique to model temporal dependencies and sequences using a combination of a recurrent neural network and a Deep Belief Network. Our technique, RNN-DBN, is an amalgamation of the memory state of the RNN that allows it to provide temporal information and a multi-layer DBN that helps in high level representation of the data. This makes RNN-DBNs ideal for sequence generation. Further, the use of a DBN in conjunction with the RNN makes this model capable of significantly more complex data representation than a Restricted Boltzmann Machine (RBM). We apply this technique to the task of polyphonic music generation.",
"We describe how we used a data set of chorale harmonisations composed by Johann Sebastian Bach to train Hidden Markov Models. Using a probabilistic framework allows us to create a harmonisation system which learns from examples, and which can compose new harmonisations. We make a quantitative comparison of our system's harmonisation performance against simpler models, and provide example harmonisations."
]
} |
1903.07227 | 2952428868 | Machine learning models of music typically break up the task of composition into a chronological process, composing a piece of music in a single pass from beginning to end. On the contrary, human composers write music in a nonlinear fashion, scribbling motifs here and there, often revisiting choices previously made. In order to better approximate this process, we train a convolutional neural network to complete partial musical scores, and explore the use of blocked Gibbs sampling as an analogue to rewriting. Neither the model nor the generative procedure are tied to a particular causal direction of composition. Our model is an instance of orderless NADE (, 2014), which allows more direct ancestral sampling. However, we find that Gibbs sampling greatly improves sample quality, which we demonstrate to be due to some conditional distributions being poorly modeled. Moreover, we show that even the cheap approximate blocked Gibbs procedure from (2014) yields better samples than ancestral sampling, based on both log-likelihood and human evaluation. | @cite_38 instead employ an undirected Markov model to learn pairwise relationships between neighboring notes up to a specified number of steps away in a score. Sampling involves Markov Chain Monte Carlo (MCMC) using the model as a Metropolis-Hastings (MH) objective. The model permits constraints on the state space to support tasks such as melody harmonization. However, the Markov assumption can limit the expressivity of the model. | {
"cite_N": [
"@cite_38"
],
"mid": [
"2523097914"
],
"abstract": [
"Modeling polyphonic music is a particularly challenging task because of the intricate interplay between melody and harmony. A good model should satisfy three requirements: statistical accuracy (capturing faithfully the statistics of correlations at various ranges, horizontally and vertically), flexibility (coping with arbitrary user constraints), and generalization capacity (inventing new material, while staying in the style of the training corpus). Models proposed so far fail on at least one of these requirements. We propose a statistical model of polyphonic music, based on the maximum entropy principle. This model is able to learn and reproduce pairwise statistics between neighboring note events in a given corpus. The model is also able to invent new chords and to harmonize unknown melodies. We evaluate the invention capacity of the model by assessing the amount of cited, re-discovered, and invented chords on a corpus of Bach chorales. We discuss how the model enables the user to specify and enforce user-defined constraints, which makes it useful for style-based, interactive music generation."
]
} |
1903.07227 | 2952428868 | Machine learning models of music typically break up the task of composition into a chronological process, composing a piece of music in a single pass from beginning to end. On the contrary, human composers write music in a nonlinear fashion, scribbling motifs here and there, often revisiting choices previously made. In order to better approximate this process, we train a convolutional neural network to complete partial musical scores, and explore the use of blocked Gibbs sampling as an analogue to rewriting. Neither the model nor the generative procedure are tied to a particular causal direction of composition. Our model is an instance of orderless NADE (, 2014), which allows more direct ancestral sampling. However, we find that Gibbs sampling greatly improves sample quality, which we demonstrate to be due to some conditional distributions being poorly modeled. Moreover, we show that even the cheap approximate blocked Gibbs procedure from (2014) yields better samples than ancestral sampling, based on both log-likelihood and human evaluation. | Hadjeres and Pachet in DeepBach @cite_16 model note predictions by breaking down its full context into three parts, with the past and the future modeled by stacked LSTMs going in the forward and backward directions respectively, and the present harmonic context modeled by a third neural network. The three are then combined by a fourth neural network and used in Gibbs sampling for generation. | {
"cite_N": [
"@cite_16"
],
"mid": [
"2560316200"
],
"abstract": [
"This paper introduces DeepBach, a graphical model aimed at modeling polyphonic music and specifically hymn-like pieces. We claim that, after being trained on the chorale harmonizations by Johann Sebastian Bach, our model is capable of generating highly convincing chorales in the style of Bach. DeepBach's strength comes from the use of pseudo-Gibbs sampling coupled with an adapted representation of musical data. This is in contrast with many automatic music composition approaches which tend to compose music sequentially. Our model is also steerable in the sense that a user can constrain the generation by imposing positional constraints such as notes, rhythms or cadences in the generated score. We also provide a plugin on top of the MuseScore music editor making the interaction with DeepBach easy to use."
]
} |
1903.07227 | 2952428868 | Machine learning models of music typically break up the task of composition into a chronological process, composing a piece of music in a single pass from beginning to end. On the contrary, human composers write music in a nonlinear fashion, scribbling motifs here and there, often revisiting choices previously made. In order to better approximate this process, we train a convolutional neural network to complete partial musical scores, and explore the use of blocked Gibbs sampling as an analogue to rewriting. Neither the model nor the generative procedure are tied to a particular causal direction of composition. Our model is an instance of orderless NADE (, 2014), which allows more direct ancestral sampling. However, we find that Gibbs sampling greatly improves sample quality, which we demonstrate to be due to some conditional distributions being poorly modeled. Moreover, we show that even the cheap approximate blocked Gibbs procedure from (2014) yields better samples than ancestral sampling, based on both log-likelihood and human evaluation. | imposes higher-level structure by interleaving selective Gibbs sampling on a convolutional RBM @cite_9 and gradient descent that minimizes cost to template piece on features such as self-similarity. This procedure itself is wrapped in simulated annealing to ensure steps do not lower the solution quality too much. | {
"cite_N": [
"@cite_9"
],
"mid": [
"2579406683"
],
"abstract": [
"We introduce a method for imposing higher-level structure on generated, polyphonic music. A Convolutional Restricted Boltzmann Machine (C-RBM) as a generative model is combined with gradient descent constraint optimisation to provide further control over the generation process. Among other things, this allows for the use of a “template” piece, from which some structural properties can be extracted, and transferred as constraints to the newly generated material. The sampling process is guided with Simulated Annealing to avoid local optima, and to find solutions that both satisfy the constraints, and are relatively stable with respect to the C-RBM. Results show that with this approach it is possible to control the higher-level self-similarity structure, the meter, and the tonal properties of the resulting musical piece, while preserving its local musical coherence."
]
} |
1903.07227 | 2952428868 | Machine learning models of music typically break up the task of composition into a chronological process, composing a piece of music in a single pass from beginning to end. On the contrary, human composers write music in a nonlinear fashion, scribbling motifs here and there, often revisiting choices previously made. In order to better approximate this process, we train a convolutional neural network to complete partial musical scores, and explore the use of blocked Gibbs sampling as an analogue to rewriting. Neither the model nor the generative procedure are tied to a particular causal direction of composition. Our model is an instance of orderless NADE (, 2014), which allows more direct ancestral sampling. However, we find that Gibbs sampling greatly improves sample quality, which we demonstrate to be due to some conditional distributions being poorly modeled. Moreover, we show that even the cheap approximate blocked Gibbs procedure from (2014) yields better samples than ancestral sampling, based on both log-likelihood and human evaluation. | We opt for an orderless training procedure which enables us to train a mixture of all possible directed models simultaneously. Finally, an approximate blocked Gibbs sampling procedure @cite_37 allows fast generation from the model. | {
"cite_N": [
"@cite_37"
],
"mid": [
"2166942303"
],
"abstract": [
"Neural Autoregressive Distribution Estimators (NADEs) have recently been shown as successful alternatives for modeling high dimensional multimodal distributions. One issue associated with NADEs is that they rely on a particular order of factorization for P(x). This issue has been recently addressed by a variant of NADE called Orderless NADEs and its deeper version, Deep Orderless NADE. Orderless NADEs are trained based on a criterion that stochastically maximizes P(x) with all possible orders of factorizations. Unfortunately, ancestral sampling from deep NADE is very expensive, corresponding to running through a neural net separately predicting each of the visible variables given some others. This work makes a connection between this criterion and the training criterion for Generative Stochastic Networks (GSNs). It shows that training NADEs in this way also trains a GSN, which defines a Markov chain associated with the NADE model. Based on this connection, we show an alternative way to sample from a trained Orderless NADE that allows to trade-off computing time and quality of the samples: a 3 to 10-fold speedup (taking into account the waste due to correlations between consecutive samples of the chain) can be obtained without noticeably reducing the quality of the samples. This is achieved using a novel sampling procedure for GSNs called annealed GSN sampling, similar to tempering methods that combines fast mixing (obtained thanks to steps at high noise levels) with accurate samples (obtained thanks to steps at low noise levels)."
]
} |
1903.07269 | 2921082920 | In this work, we formulate the process of generating explanations as model reconciliation for planning problems as one of planning with explanatory actions. We show that these problems could be better understood within the framework of epistemic planning and that, in fact, most earlier works on explanation as model reconciliation correspond to tractable subsets of epistemic planning problems. We empirically show how our approach is computationally more efficient than existing techniques for explanation generation and we end the paper with a discussion of how this formulation could be extended to generate novel explanatory behaviors. | It's widely accepted in the social sciences literature that explanations must be generated while keeping in mind the beliefs of the agent receiving the explanation @cite_16 @cite_10 . As such, epistemic planning makes for an excellent framework for studying the problem of generating these explanations. While the most general formulation of epistemic planning has been shown to be undecidable, many simpler fragments have been identified @cite_14 . Recently, there has been a lot of interest in developing efficient methods for planning in such domains @cite_2 @cite_21 @cite_3 @cite_1 @cite_0 . In our base scenario, we will assume (1) a finite nesting of beliefs, (2) the human is merely an observer, and (3) all actions are public. The specific problems discussed in our paper hardly exercise most of the capabilities provided by epistemic planning. It's important to note that given the epistemic nature of the explanatory actions, solving the general model reconciliation problem would require leveraging all those capabilities. Our hope is that by presenting model reconciliation in this more general setting, the community would be motivated to start looking at more general and complex versions of these problems. | {
"cite_N": [
"@cite_14",
"@cite_21",
"@cite_1",
"@cite_3",
"@cite_0",
"@cite_2",
"@cite_16",
"@cite_10"
],
"mid": [
"2295427940",
"2261365476",
"2811384011",
"2759266872",
"2740698270",
"2188435450",
"",
"2154988829"
],
"abstract": [
"Epistemic planning is a very expressive framework that extends automated planning by the incorporation of dynamic epistemic logic (DEL). We provide complexity results on the plan existence problem for multi-agent planning tasks, focusing on purely epistemic actions with propositional preconditions. We show that moving from epistemic preconditions to propositional preconditions makes it decidable, more precisely in EXPSPACE. The plan existence problem is PSPACE-complete when the underlying graphs are trees and NP-complete when they are chains (including singletons). We also show PSPACE-hardness of the plan verification problem, which strengthens previous results on the complexity of DEL model checking.",
"Single-agent planning in partially observable settings is a well understood problem and existing planners can represent and solve a wide variety of meaningful instances. In the most common formulation, the problem is cast as a non-deterministic search problem in belief space where beliefs are sets of states that the agent regards as possible. In this work, we build on the methods developed for representing beliefs in single-agent planning to introduce a simple but expressive formulation for handling beliefs in multi-agent settings. The resulting formulation deals with multiple agents that can act on the world (physical or ontic actions), and can sense either the state of the world (truth of objective formulas) or the mental state of other agents (truth of epistemic formulas). The formulation captures and defines a fragment of dynamic epistemic logics that is simple and expressive but which does not involve event models or product updates, and has the same complexity of belief tracking in the single agent setting and can benefit from the use of similar techniques. We show indeed that the problem of computing multiagent linear plans can be actually compiled into a classical planning problem using the techniques that have been developed for compiling conformant and contingent problems in the single agent setting and report experimental results.",
"",
"",
"In recent years, multi-agent epistemic planning has received attention from both dynamic logic and planning communities. Existing implementations of multi-agent epistemic planning are based on compilation into classical planning and suffer from various limitations, such as generating only linear plans, restriction to public actions, and incapability to handle disjunctive beliefs. In this paper, we propose a general representation language for multi-agent epistemic planning where the initial KB and the goal, the preconditions and effects of actions can be arbitrary multi-agent epistemic formulas, and the solution is an action tree branching on sensing results. To support efficient reasoning in the multi-agent KD45 logic, we make use of a normal form called alternating cover disjunctive formulas (ACDFs). We propose basic revision and update algorithms for ACDFs. We also handle static propositional common knowledge, which we call constraints. Based on our reasoning, revision and update algorithms, adapting the PrAO algorithm for contingent planning from the literature, we implemented a multi-agent epistemic planner called MEPK. Our experimental results show the viability of our approach.",
"Many AI applications involve the interaction of multiple autonomous agents, requiring those agents to reason about their own beliefs, as well as those of other agents. However, planning involving nested beliefs is known to be computationally challenging. In this work, we address the task of synthesizing plans that necessitate reasoning about the beliefs of other agents. We plan from the perspective of a single agent with the potential for goals and actions that involve nested beliefs, non-homogeneous agents, co-present observations, and the ability for one agent to reason as if it were another. We formally characterize our notion of planning with nested belief, and subsequently demonstrate how to automatically convert such problems into problems that appeal to classical planning technology. Our approach represents an important first step towards applying the well-established field of automated planning to the challenging task of planning involving nested beliefs of multiple agents.",
"",
"Attribution theorists typically have conceived the attribution process in terms of universal laws of cognitive functioning, independent of social interaction. In this paper we argue for the notion, grounded in recent ordinary language philosophy, that any consideration of the form of everyday explanation must take into account its function as an answer to a ‘why’ question within a conversational framework. Experiment 1 provides support for the idea that speakers should identify as causally relevant that necessary condition for the occurrence of an event about which the enquirer is ignorant. Experiment 2 replicates this basic finding and further demonstrates that speakers will change their explanations to enquirers believed to be sharing different knowledge about the same target event. Experiment 2 also assessed the role of individual differences in conversational rule-following, and found in apparent contrast some previous predictions that high self-monitoring individuals were no more likely than lows to tailor their explanations to suit the enquirer's knowledge state. If anything, the reverse occurred. Taken together, these experiments support the central contention of the abnormal conditions focus model (Hilton and Slugoski, 1986), that the common sense criterion of causality is that of an ‘abnormal condition’ rather than constant conjunction as instantiated in the ANOVA model of causal attribution (Kelley, 1967, 1973)."
]
} |
1903.07269 | 2921082920 | In this work, we formulate the process of generating explanations as model reconciliation for planning problems as one of planning with explanatory actions. We show that these problems could be better understood within the framework of epistemic planning and that, in fact, most earlier works on explanation as model reconciliation correspond to tractable subsets of epistemic planning problems. We empirically show how our approach is computationally more efficient than existing techniques for explanation generation and we end the paper with a discussion of how this formulation could be extended to generate novel explanatory behaviors. | Our work also looks at the use of explanatory actions as a means of communicating information to the human observer. The most obvious types of such explanatory actions include purely communicative actions such as speech @cite_5 or the use of mixed reality projections @cite_18 @cite_4 , but recent works have shown that physical agents could also use movements to relay information such as intention @cite_6 @cite_17 and incapability @cite_22 . Our framework could be easily adapted to any of these explanatory actions and would naturally allow for a trade-off between these different types of communication. | {
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_22",
"@cite_6",
"@cite_5",
"@cite_17"
],
"mid": [
"2910655551",
"",
"2789347656",
"2808033819",
"2904313270",
"1992154343"
],
"abstract": [
"Recent advances in mixed-reality technologies have renewed interest in alternative modes of communication for human-robot interaction. However, most of the work in this direction has been confined to tasks such as teleoperation, simulation or explication of individual actions of a robot. In this paper, we will discuss how the capability to project intentions affect the task planning capabilities of a robot. Specifically, we will start with a discussion on how projection actions can be used to reveal information regarding the future intentions of the robot at the time of task execution. We will then pose a new planning paradigm - projection-aware planning - whereby a robot can trade off its plan cost with its ability to reveal its intentions using its projection actions. We will demonstrate each of these scenarios with the help of a joint human-robot activity using the HoloLens.",
"",
"Our goal is to enable robots to express their incapability, and to do so in a way that communicates both what they are trying to accomplish and why they are unable to accomplish it. We frame this as a trajectory optimization problem: maximize the similarity between the motion expressing incapability and what would amount to successful task execution, while obeying the physical limits of the robot. We introduce and evaluate candidate similarity measures, and show that one in particular generalizes to a range of tasks, while producing expressive motions that are tailored to each task. Our user study supports that our approach automatically generates motions expressing incapability that communicate both what and why to end-users, and improve their overall perception of the robot and willingness to collaborate with it in the future.",
"We introduce a novel framework to formalize and solve transparent planning tasks by executing actions selected in a suitable and timely fashion. A transparent planning task is defined as a task where the objective of the agent is to communicate its true goal to observers, thereby making its intentions and its action selection transparent. We formally define and model these tasks as Goal POMDPs where the state space is the Cartesian product of the states of the world and a given set of hypothetical goals. Action effects are deterministic in the world states of the problem but probabilistic in the observer's beliefs. Transition probabilities are obtained from making a call to a model-based plan recognition algorithm, which we refer to as an observer stereotype. We propose an action selection strategy via on-line planning that seeks actions to quickly convey the goal being pursued to an observer assumed to fit a given stereotype. In order to keep run-times feasible, we propose a novel model-based plan recognition algorithm that approximates well-known probabilistic plan recognition methods. The resulting on-line planner, after being evaluated over a diverse set of domains and three different observer stereotypes, is found to convey goal information faster than purely goal-directed planners.",
"",
"A key requirement for seamless human-robot collaboration is for the robot to make its intentions clear to its human collaborator. A collaborative robot's motion must be legible, or intent-expressive. Legibility is often described in the literature as and effect of predictable, unsurprising, or expected motion. Our central insight is that predictability and legibility are fundamentally different and often contradictory properties of motion. We develop a formalism to mathematically define and distinguish predictability and legibility of motion. We formalize the two based on inferences between trajectories and goals in opposing directions, drawing the analogy to action interpretation in psychology. We then propose mathematical models for these inferences based on optimizing cost, drawing the analogy to the principle of rational action. Our experiments validate our formalism's prediction that predictability and legibility can contradict, and provide support for our models. Our findings indicate that for robots to seamlessly collaborate with humans, they must change the way they plan their motion."
]
} |
1903.07113 | 2921939948 | In this paper, we describe a dataset and baseline result for a question answering that utilizes web tables. It contains commonly asked questions on the web and their corresponding answers found in tables on websites. Our dataset is novel in that every question is paired with a table of a different signature. In particular, the dataset contains two classes of tables: entity-instance tables and the key-value tables. Each QA instance comprises a table of either kind, a natural language question, and a corresponding structured SQL query. We build our model by dividing question answering into several tasks, including table retrieval and question element classification, and conduct experiments to measure the performance of each task. We extract various features specific to each task and compose a full pipeline which constructs the SQL query from its parts. Our work provides qualitative results and error analysis for each task, and identifies in detail the reasoning required to generate SQL expressions from natural language questions. This analysis of reasoning informs future models based on neural machine learning. | WikiSQL and Seq2SQL () @cite_4 use a deep neural network for translating natural language questions to corresponding SQL queries. WikiSQL consists of 80654 hand-annotated examples of questions and SQL queries distributed across 24241 tables from Wikipedia, which is an order of magnitude larger than other comparable datasets. The Seq2SQL model uses rewards from in-the-loop query execution over the database to learn a policy to generate the query. A pointer network limits the output space of the generated sequence to the union of the table schema, question utterance, and SQL keywords. Our dataset is similar to WikiSQL but distinguishes between different types of tables. Because our dataset is smaller, we break the approach down into tasks that can be solved with per-task machine learning models.
This approach essentially eliminates logical form errors that are prevalent in neural models. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2751448157"
],
"abstract": [
"Relational databases store a significant amount of the world's data. However, accessing this data currently requires users to understand a query language such as SQL. We propose Seq2SQL, a deep neural network for translating natural language questions to corresponding SQL queries. Our model uses rewards from in-the-loop query execution over the database to learn a policy to generate the query, which contains unordered parts that are less suitable for optimization via cross entropy loss. Moreover, Seq2SQL leverages the structure of SQL to prune the space of generated queries and significantly simplify the generation problem. In addition to the model, we release WikiSQL, a dataset of 80654 hand-annotated examples of questions and SQL queries distributed across 24241 tables from Wikipedia that is an order of magnitude larger than comparable datasets. By applying policy based reinforcement learning with a query execution environment to WikiSQL, Seq2SQL outperforms a state-of-the-art semantic parser, improving execution accuracy from 35.9 to 59.4 and logical form accuracy from 23.4 to 48.3."
]
} |
1903.07171 | 2921707824 | The ethical decisions behind the acquisition and analysis of audio, video or physiological human data, harnessed for (deep) machine learning algorithms, is an increasing concern for the Artificial Intelligence (AI) community. In this regard, herein we highlight the growing need for responsible, and representative data collection and analysis, through a discussion of modality diversification. Factors such as Auditability, Benchmarking, Confidence, Data-reliance, and Explainability (ABCDE), have been touched upon within the machine learning community, and here we lay out these ABCDE sub-categories in relation to the acquisition and analysis of multimodal data, to weave through the high priority ethical concerns currently under discussion for AI. To this end, we propose how these five subcategories can be included in early planning of such acquisition paradigms. | With AI efforts increasing, the ethical demands related to the needed Big Data are expanding in parallel @cite_30 , a general consideration which is not being completely overlooked, particularly in the field of Natural Language Processing @cite_27 . With a vast amount of data sourced online through social media platforms, the legal aspects in terms of user privacy are at the forefront @cite_38 . | {
"cite_N": [
"@cite_30",
"@cite_38",
"@cite_27"
],
"mid": [
"2594646398",
"2774332045",
"2742139733"
],
"abstract": [
"Abstract Big Data is a digital phenomenon that enables the collection and use of massive amounts of data derived from both man and machine. This data is characterized in terms of its volume, variety, velocity, veracity, variability, and its complexity. While Big Data allows firms to rapidly capture, analyze, and exploit information, it can also enable access to data that compromises an individual's privacy. And this can happen either deliberately or inadvertently. Either way, Big Data fosters a discussion of ethical issues relative to the sharing and usage of data. Ethical debates are typically articulated within the context of ethical theories. These theories help to frame our understanding of moral issues. Their use affords insight into the context and the logic of the moral arguments being presented, thereby providing us with a rational mechanism by which to better evaluate whether an intended action or actual outcome is morally right or wrong. Four ethical theories are briefly reviewed in this paper: Kantianism, Utilitarianism, Social Contract Theory, and Virtue Theory. Each theory is than examined to show how it might be employed to examine Big Data issues.",
"One of the most used sources of information for fast and flexible crisis information is social media or crowdsourced data, as the information is rapidly disseminated, can reach a large amount of target audience and covers a wide variety of topics. However, the agility that these new methodologies enable comes at a price: ethics and privacy. This paper presents an analysis of the ethical risks and implications of using automated system that learn from social media data to provide intelligence in crisis management. The paper presents a short overview on the use of social media data in crisis management to then highlight ethical implication of machine learning and social media data using an example scenario. In conclusion general mitigation strategies and specific implementation guidelines for the scenario under analysis are presented.",
""
]
} |
1903.07171 | 2921707824 | The ethical decisions behind the acquisition and analysis of audio, video or physiological human data, harnessed for (deep) machine learning algorithms, is an increasing concern for the Artificial Intelligence (AI) community. In this regard, herein we highlight the growing need for responsible, and representative data collection and analysis, through a discussion of modality diversification. Factors such as Auditability, Benchmarking, Confidence, Data-reliance, and Explainability (ABCDE), have been touched upon within the machine learning community, and here we lay out these ABCDE sub-categories in relation to the acquisition and analysis of multimodal data, to weave through the high priority ethical concerns currently under discussion for AI. To this end, we propose how these five subcategories can be included in early planning of such acquisition paradigms. | Bias is currently a popular topic (https://developers.google.com/machine-learning/fairness-overview). With three core biases discussed -- Interaction Bias, Latent Bias, and Selection Bias -- in this paper we focus primarily on Selection Bias, as we propose that true multimodal and representative data could assist in avoiding this. Selection Bias -- the by-product of decisions made during collection and analysis, including misrepresentation through unbalanced gender classification @cite_26 -- is an important concern for technology companies. Such biases can quite easily propagate into the resulting system, making it not only ethically problematic, but also commercially limited: who does the model represent, and who will buy it? | {
"cite_N": [
"@cite_26"
],
"mid": [
"1593989786"
],
"abstract": [
"In this paper, we target at face gender classification on consumer images in a multiethnic environment. The consumer images are much more challenging, since the faces captured in the real situation vary in pose, illumination and expression in a much larger extent than that captured in the constrained environments such as the case of snapshot images. To overcome the non-uniformity, a robust Active Shape Model (ASM) is used for face texture normalization. The probabilistic boosting tree approach is presented which achieves a more accurate classification boundary on consumer images. Besides that, we also take into consideration the ethnic factor in gender classification and prove that ethnicity specific gender classifiers could remarkably improve the gender classification accuracy in a multiethnic environment. Experiments show that our methods achieve better accuracy and robustness on consumer images in a multiethnic environment."
]
} |
1903.07171 | 2921707824 | The ethical decisions behind the acquisition and analysis of audio, video or physiological human data, harnessed for (deep) machine learning algorithms, is an increasing concern for the Artificial Intelligence (AI) community. In this regard, herein we highlight the growing need for responsible, and representative data collection and analysis, through a discussion of modality diversification. Factors such as Auditability, Benchmarking, Confidence, Data-reliance, and Explainability (ABCDE), have been touched upon within the machine learning community, and here we lay out these ABCDE sub-categories in relation to the acquisition and analysis of multimodal data, to weave through the high priority ethical concerns currently under discussion for AI. To this end, we propose how these five subcategories can be included in early planning of such acquisition paradigms. | Identity-representation itself is a prominent topic in AI today @cite_15 , and like many other human traits its manifestation within AI can be limited. The warnings are quite realistic, as bias created from the developers themselves @cite_2 , or through pre-existing archival ( data from historic sources) data @cite_20 , data-driven machine learning algorithms will simply replicate this. In this regard, the need for multimodal, representative data is more prevalent than ever. Not only, due to the aforementioned demographic biasses, including for gender-based variables @cite_29 , which can occur during collection, but also for representation and usability on a global scale, improving the overall impact of HCIs. As well as this, for effective Human Computer Interactions (HCIs) social robotics require observation of a scene through multiple modalities by for example enhancing the ability for gesture recognition @cite_21 . In the age of deep learning, multimodal interactions are the next stage for enhancing the usability of such algorithms @cite_35 @cite_18 . 
This offers benefits across domains, including conditions with a broad variety of population needs, such as Autism @cite_22 . | {
"cite_N": [
"@cite_35",
"@cite_18",
"@cite_22",
"@cite_29",
"@cite_21",
"@cite_2",
"@cite_15",
"@cite_20"
],
"mid": [
"2782916306",
"2619383789",
"2786031771",
"2740983644",
"2097384738",
"1904875463",
"",
"2781999423"
],
"abstract": [
"The new method is proposed to monitor the level of current physical load and accumulated fatigue by several objective and subjective characteristics. It was applied to the dataset targeted to estimate the physical load and fatigue by several statistical and machine learning methods. The data from peripheral sensors (accelerometer, GPS, gyroscope, magnetometer) and brain-computing interface (electroencephalography) were collected, integrated, and analyzed by several statistical and machine learning methods (moment analysis, cluster analysis, principal component analysis, etc.). The hypothesis 1 was presented and proved that physical activity can be classified not only by objective parameters, but by subjective parameters also. The hypothesis 2 (experienced physical load and subsequent restoration as fatigue level can be estimated quantitatively and distinctive patterns can be recognized) was presented and some ways to prove it were demonstrated. Several \"physical load\" and \"fatigue\" metrics were proposed. The results presented allow to extend application of the machine learning methods for characterization of complex human activity patterns (for example, to estimate their actual physical load and fatigue, and give cautions and advice).",
"Our experience of the world is multimodal - we see objects, hear sounds, feel texture, smell odors, and taste flavors. Modality refers to the way in which something happens or is experienced and a research problem is characterized as multimodal when it includes multiple such modalities. In order for Artificial Intelligence to make progress in understanding the world around us, it needs to be able to interpret such multimodal signals together. Multimodal machine learning aims to build models that can process and relate information from multiple modalities. It is a vibrant multi-disciplinary field of increasing importance and with extraordinary potential. Instead of focusing on specific multimodal applications, this paper surveys the recent advances in multimodal machine learning itself and presents them in a common taxonomy. We go beyond the typical early and late fusion categorization and identify broader challenges that are faced by multimodal machine learning, namely: representation, translation, alignment, fusion, and co-learning. This new taxonomy will enable researchers to better understand the state of the field and identify directions for future research.",
"In this paper, we propose a “Response to Name Dataset” for autism spectrum disorder (ASD) study as well as a multimodal ASD auxiliary screening system based on machine learning. ASD children are characterized by their impaired interpersonal communication abilities and lack of response. In the proposed dataset, the reactions of children are recorded by cameras upon calling their names. The responsiveness of each child is then evaluated by a clinician with a score among 0, 1 and 2 following the Autism Diagnostic Observation Schedule (ADOS). We then develop a rule-based multimodal framework to quantitatively evaluate each child. Our system involves speech recognition based automatic name calling detection, face detection alignment, head pose estimation, and considers the response speed, eye contact duration and head orientation to output the final prediction. Compared to existing work, our dataset characterizes a more precise and detailed scoring system with clinical trial standards, as well as a more spontaneous setting by incorporating less lab-controlled sessions with dynamic cluttered environments, multi-pose mobile captured videos, and flexible number of accompanying adults. Experiments show that our machine predicted scores align closely with human professional diagnosis, showing promising potential in early screening of ASD, and shedding light on future clinical applications.",
"",
"We present a new framework for multimodal gesture recognition that is based on a multiple hypotheses rescoring fusion scheme. We specifically deal with a demanding Kinect-based multimodal data set, introduced in a recent gesture recognition challenge (ChaLearn 2013), where multiple subjects freely perform multimodal gestures. We employ multiple modalities, that is, visual cues, such as skeleton data, color and depth images, as well as audio, and we extract feature descriptors of the hands' movement, handshape, and audio spectral properties. Using a common hidden Markov model framework we build single-stream gesture models based on which we can generate multiple single stream-based hypotheses for an unknown gesture sequence. By multimodally rescoring these hypotheses via constrained decoding and a weighted combination scheme, we end up with a multimodally-selected best hypothesis. This is further refined by means of parallel fusion of the monomodal gesture models applied at a segmental level. In this setup, accurate gesture modeling is proven to be critical and is facilitated by an activity detection system that is also presented. The overall approach achieves 93.3 gesture recognition accuracy in the ChaLearn Kinect-based multimodal data set, significantly outperforming all recently published approaches on the same challenging multimodal gesture recognition task, providing a relative error rate reduction of at least 47.6 .",
"Nowadays, many decisions are made using predictive models built on historical data.Predictive models may systematically discriminate groups of people even if the computing process is fair and well-intentioned. Discrimination-aware data mining studies how to make predictive models free from discrimination, when historical data, on which they are built, may be biased, incomplete, or even contain past discriminatory decisions. Discrimination refers to disadvantageous treatment of a person based on belonging to a category rather than on individual merit. In this survey we review and organize various discrimination measures that have been used for measuring discrimination in data, as well as in evaluating performance of discrimination-aware predictive models. We also discuss related measures from other disciplines, which have not been used for measuring discrimination, but potentially could be suitable for this purpose. We computationally analyze properties of selected measures. We also review and discuss measuring procedures, and present recommendations for practitioners. The primary target audience is data mining, machine learning, pattern recognition, statistical modeling researchers developing new methods for non-discriminatory predictive modeling. In addition, practitioners and policy makers would use the survey for diagnosing potential discrimination by predictive models.",
"",
"In the age of algorithms, I focus on the question of how to ensure algorithms that will take over many of our familiar archival and library tasks, will behave according to human ethical norms that have evolved over many years. I start by characterizing physical archives in the context of related institutions such as libraries and museums. In this setting I analyze how ethical principles, in particular about access to information, have been formalized and communicated in the form of ethical codes, or: codes of conducts. After that I describe two main developments: digitalization, in which physical aspects of the world are turned into digital data, and algorithmization, in which intelligent computer programs turn this data into predictions and decisions. Both affect interactions that were once physical but now digital. In this new setting I survey and analyze the ethical aspects of algorithms and how they shape a vision on the future of archivists and librarians, in the form of algorithmic documentalists, or: codementalists. Finally I outline a general research strategy, called IntERMEeDIUM, to obtain algorithms that obey are human ethical values encoded in code of ethics."
]
} |
1903.07072 | 2921982115 | Partial person re-identification (ReID) is a challenging task because only partial information of person images is available for matching target persons. Few studies, especially on deep learning, have focused on matching partial person images with holistic person images. This study presents a novel deep partial ReID framework based on pairwise spatial transformer networks (STNReID), which can be trained on existing holistic person datasets. STNReID includes a spatial transformer network (STN) module and a ReID module. The STN module samples an affined image (a semantically corresponding patch) from the holistic image to match the partial image. The ReID module extracts the features of the holistic, partial, and affined images. Competition (or confrontation) is observed between the STN module and the ReID module, and two-stage training is applied to acquire a strong STNReID for partial ReID. Experimental results show that our STNReID obtains 66.7 and 54.6 rank-1 accuracies on partial ReID and partial iLIDS datasets, respectively. These values are at par with those obtained with state-of-the-art methods. | In this section, deep learning-based person ReID methods are summarized, and the existing relevant studies on partial ReID are reviewed because partial ReID is a sub-topic of person ReID. Then, STNs @cite_18 and their application to person ReID are investigated. | {
"cite_N": [
"@cite_18"
],
"mid": [
"2951005624"
],
"abstract": [
"Convolutional Neural Networks define an exceptionally powerful class of models, but are still limited by the lack of ability to be spatially invariant to the input data in a computationally and parameter efficient manner. In this work we introduce a new learnable module, the Spatial Transformer, which explicitly allows the spatial manipulation of data within the network. This differentiable module can be inserted into existing convolutional architectures, giving neural networks the ability to actively spatially transform feature maps, conditional on the feature map itself, without any extra training supervision or modification to the optimisation process. We show that the use of spatial transformers results in models which learn invariance to translation, scale, rotation and more generic warping, resulting in state-of-the-art performance on several benchmarks, and for a number of classes of transformations."
]
} |
1903.07072 | 2921982115 | Partial person re-identification (ReID) is a challenging task because only partial information of person images is available for matching target persons. Few studies, especially on deep learning, have focused on matching partial person images with holistic person images. This study presents a novel deep partial ReID framework based on pairwise spatial transformer networks (STNReID), which can be trained on existing holistic person datasets. STNReID includes a spatial transformer network (STN) module and a ReID module. The STN module samples an affined image (a semantically corresponding patch) from the holistic image to match the partial image. The ReID module extracts the features of the holistic, partial, and affined images. Competition (or confrontation) is observed between the STN module and the ReID module, and two-stage training is applied to acquire a strong STNReID for partial ReID. Experimental results show that our STNReID obtains 66.7 and 54.6 rank-1 accuracies on partial ReID and partial iLIDS datasets, respectively. These values are at par with those obtained with state-of-the-art methods. | Deep learning-based person ReID uses deep CNNs (DCNNs) to represent the features of person images. On the basis of the loss functions used to train DCNNs, most existing studies focus on two approaches, namely, robust representation learning and deep metric learning. Representation learning-based methods @cite_20 @cite_4 aim to learn robust features for person ReID by using the Softmax loss (ID loss). An ID embedding network (IDENet) @cite_20 @cite_4 regards each person ID as a distinct category in a classification problem. In addition, Fan @cite_7 derived variants of the SoftMax function and achieved superior performance in ReID. | {
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_20"
],
"mid": [
"2963901085",
"2810967842",
"2531440880"
],
"abstract": [
"This paper presents a novel large-scale dataset and comprehensive baselines for end-to-end pedestrian detection and person recognition in raw video frames. Our baselines address three issues: the performance of various combinations of detectors and recognizers, mechanisms for pedestrian detection to help improve overall re-identification (re-ID) accuracy and assessing the effectiveness of different detectors for re-ID. We make three distinct contributions. First, a new dataset, PRW, is introduced to evaluate Person Re-identification in the Wild, using videos acquired through six synchronized cameras. It contains 932 identities and 11,816 frames in which pedestrians are annotated with their bounding box positions and identities. Extensive benchmarking results are presented on this dataset. Second, we show that pedestrian detection aids re-ID through two simple yet effective improvements: a cascaded fine-tuning strategy that trains a detection model first and then the classification model, and a Confidence Weighted Similarity (CWS) metric that incorporates detection scores into similarity measurement. Third, we derive insights in evaluating detector performance for the particular scenario of accurate person re-ID.",
"Abstract Many current successful Person Re-Identification (ReID) methods train a model with the softmax loss function to classify images of different persons and obtain the feature vectors at the same time. However, the underlying feature embedding space is ignored. In this paper, we use a modified softmax function, termed Sphere Softmax, to solve the classification problem and learn a hypersphere manifold embedding simultaneously. A balanced sampling strategy is also introduced. Finally, we propose a convolutional neural network called SphereReID adopting Sphere Softmax and training a single model end-to-end with a new warming-up learning rate schedule on four challenging datasets including Market-1501, DukeMTMC-reID, CHHK-03, and CUHK-SYSU. Experimental results demonstrate that this single model outperforms the state-of-the-art methods on all four datasets without fine-tuning or re-ranking. For example, it achieves 94.4 rank-1 accuracy on Market-1501 and 83.9 rank-1 accuracy on DukeMTMC-reID. The code and trained weights of our model will be released.",
"Person re-identification (re-ID) has become increasingly popular in the community due to its application and research significance. It aims at spotting a person of interest in other cameras. In the early days, hand-crafted algorithms and small-scale evaluation were predominantly reported. Recent years have witnessed the emergence of large-scale datasets and deep learning systems which make use of large data volumes. Considering different tasks, we classify most current re-ID methods into two classes, i.e., image-based and video-based; in both tasks, hand-crafted and deep learning systems will be reviewed. Moreover, two new re-ID tasks which are much closer to real-world applications are described and discussed, i.e., end-to-end re-ID and fast re-ID in very large galleries. This paper: 1) introduces the history of person re-ID and its relationship with image classification and instance retrieval; 2) surveys a broad selection of the hand-crafted systems and the large-scale methods in both image- and video-based re-ID; 3) describes critical future directions in end-to-end re-ID and fast retrieval in large galleries; and 4) finally briefs some important yet under-developed issues."
]
} |
1903.07072 | 2921982115 | Partial person re-identification (ReID) is a challenging task because only partial information of person images is available for matching target persons. Few studies, especially on deep learning, have focused on matching partial person images with holistic person images. This study presents a novel deep partial ReID framework based on pairwise spatial transformer networks (STNReID), which can be trained on existing holistic person datasets. STNReID includes a spatial transformer network (STN) module and a ReID module. The STN module samples an affined image (a semantically corresponding patch) from the holistic image to match the partial image. The ReID module extracts the features of the holistic, partial, and affined images. Competition (or confrontation) is observed between the STN module and the ReID module, and two-stage training is applied to acquire a strong STNReID for partial ReID. Experimental results show that our STNReID obtains 66.7 and 54.6 rank-1 accuracies on partial ReID and partial iLIDS datasets, respectively. These values are at par with those obtained with state-of-the-art methods. | Compared with representation learning, deep metric learning-based algorithms directly learn the distance of an image pair in the feature embedding space. The typical metric learning method is the triplet loss @cite_10 , which reduces the distance of a positive pair and enlarges the distance of a negative pair. However, the triplet loss is easily influenced by the selected samples. Hard mining techniques @cite_21 @cite_13 @cite_8 are therefore widely used to train the triplet loss to high accuracies. Improved triplet loss @cite_9 and quadruplet loss @cite_17 are variants of the original triplet loss. At present, the combination of ID loss with triplet loss has attracted considerable attention due to its remarkable performance. | {
"cite_N": [
"@cite_8",
"@cite_10",
"@cite_9",
"@cite_21",
"@cite_13",
"@cite_17"
],
"mid": [
"",
"2432402544",
"",
"2598634450",
"2762262611",
"2606377603"
],
"abstract": [
"",
"Person re-identification across disjoint camera views has been widely applied in video surveillance yet it is still a challenging problem. One of the major challenges lies in the lack of spatial and temporal cues, which makes it difficult to deal with large variations of lighting conditions, viewing angles, body poses, and occlusions. Recently, several deep-learning-based person re-identification approaches have been proposed and achieved remarkable performance. However, most of those approaches extract discriminative features from the whole frame at one glimpse without differentiating various parts of the persons to identify. It is essentially important to examine multiple highly discriminative local regions of the person images in details through multiple glimpses for dealing with the large appearance variance. In this paper, we propose a new soft attention-based model, i.e. , the end-to-end comparative attention network (CAN), specifically tailored for the task of person re-identification. The end-to-end CAN learns to selectively focus on parts of pairs of person images after taking a few glimpses of them and adaptively comparing their appearance. The CAN model is able to learn which parts of images are relevant for discerning persons and automatically integrates information from different parts to determine whether a pair of images belongs to the same person. In other words, our proposed CAN model simulates the human perception process to verify whether two images are from the same person. Extensive experiments on four benchmark person re-identification data sets, including CUHK01, CHUHK03, Market-1501, and VIPeR, clearly demonstrate that our proposed end-to-end CAN for person re-identification outperforms well established baselines significantly and offer the new state-of-the-art performance.",
"",
"In the past few years, the field of computer vision has gone through a revolution fueled mainly by the advent of large datasets and the adoption of deep convolutional neural networks for end-to-end learning. The person re-identification subfield is no exception to this. Unfortunately, a prevailing belief in the community seems to be that the triplet loss is inferior to using surrogate losses (classification, verification) followed by a separate metric learning step. We show that, for models trained from scratch as well as pretrained ones, using a variant of the triplet loss to perform end-to-end deep metric learning outperforms most other published methods by a large margin.",
"Person re-identification (ReID) is an important task in computer vision. Recently, deep learning with a metric learning loss has become a common framework for ReID. In this paper, we also propose a new metric learning loss with hard sample mining called margin smaple mining loss (MSML) which can achieve better accuracy compared with other metric learning losses, such as triplet loss. In experi- ments, our proposed methods outperforms most of the state-of-the-art algorithms on Market1501, MARS, CUHK03 and CUHK-SYSU.",
"Person re-identification (ReID) is an important task in wide area video surveillance which focuses on identifying people across different cameras. Recently, deep learning networks with a triplet loss become a common framework for person ReID. However, the triplet loss pays main attentions on obtaining correct orders on the training set. It still suffers from a weaker generalization capability from the training set to the testing set, thus resulting in inferior performance. In this paper, we design a quadruplet loss, which can lead to the model output with a larger inter-class variation and a smaller intra-class variation compared to the triplet loss. As a result, our model has a better generalization ability and can achieve a higher performance on the testing set. In particular, a quadruplet deep network using a margin-based online hard negative mining is proposed based on the quadruplet loss for the person ReID. In extensive experiments, the proposed network outperforms most of the state-of-the-art algorithms on representative datasets which clearly demonstrates the effectiveness of our proposed method."
]
} |
1903.07072 | 2921982115 | Partial person re-identification (ReID) is a challenging task because only partial information of person images is available for matching target persons. Few studies, especially on deep learning, have focused on matching partial person images with holistic person images. This study presents a novel deep partial ReID framework based on pairwise spatial transformer networks (STNReID), which can be trained on existing holistic person datasets. STNReID includes a spatial transformer network (STN) module and a ReID module. The STN module samples an affined image (a semantically corresponding patch) from the holistic image to match the partial image. The ReID module extracts the features of the holistic, partial, and affined images. Competition (or confrontation) is observed between the STN module and the ReID module, and two-stage training is applied to acquire a strong STNReID for partial ReID. Experimental results show that our STNReID obtains 66.7 and 54.6 rank-1 accuracies on partial ReID and partial iLIDS datasets, respectively. These values are at par with those obtained with state-of-the-art methods. | The Spatial Transformer Network (STN), a deep learning module proposed in @cite_18 , consists of a localization network and a grid generator. The localization network takes feature maps as input and outputs the parameters of a 2D affine transformation through several hidden layers. These predicted transformation parameters are then passed to the grid generator to create a sampling grid, a set of points at which the input feature map is sampled to produce the transformed output. An STN can perform 2D affine transformations such as reflection, rotation, scaling, and translation. For person ReID, Zheng @cite_1 propose the Pedestrian Alignment Network (PAN), which combines an STN with a deep ReID network.
However, like the original STN, PAN predicts transformation parameters from the feature maps of a single image and thus only corrects the weak spatial misalignments of holistic person images. | {
"cite_N": [
"@cite_18",
"@cite_1"
],
"mid": [
"2951005624",
"2963383990"
],
"abstract": [
"Convolutional Neural Networks define an exceptionally powerful class of models, but are still limited by the lack of ability to be spatially invariant to the input data in a computationally and parameter efficient manner. In this work we introduce a new learnable module, the Spatial Transformer, which explicitly allows the spatial manipulation of data within the network. This differentiable module can be inserted into existing convolutional architectures, giving neural networks the ability to actively spatially transform feature maps, conditional on the feature map itself, without any extra training supervision or modification to the optimisation process. We show that the use of spatial transformers results in models which learn invariance to translation, scale, rotation and more generic warping, resulting in state-of-the-art performance on several benchmarks, and for a number of classes of transformations.",
"Person re-identification (re-ID) is mostly viewed as an image retrieval problem. This task aims to search a query person in a large image pool. In practice, person re-ID usually adopts automatic detectors to obtain cropped pedestrian images. However, this process suffers from two types of detector errors: excessive background and part missing. Both errors deteriorate the quality of pedestrian alignment and may compromise pedestrian matching due to the position and scale variances. To address the misalignment problem, we propose that alignment be learned from an identification procedure. We introduce the pedestrian alignment network (PAN) which allows discriminative embedding learning pedestrian alignment without extra annotations. We observe that when the convolutional neural network (CNN) learns to discriminate between different identities, the learned feature maps usually exhibit strong activations on the human body rather than the background. The proposed network thus takes advantage of this attention mechanism to adaptively locate and align pedestrians within a bounding box. Visual examples show that pedestrians are better aligned with PAN. Experiments on three large-scale re-ID datasets confirm that PAN improves the discriminative ability of the feature embeddings and yields competitive accuracy with the state-of-the-art methods."
]
} |
1903.06934 | 2951273644 | This paper reports our efforts on swCaffe, a highly efficient parallel framework for accelerating deep neural networks (DNNs) training on Sunway TaihuLight, the current fastest supercomputer in the world that adopts a unique many-core heterogeneous architecture, with 40,960 SW26010 processors connected through a customized communication network. First, we point out some insightful principles to fully exploit the performance of the innovative many-core architecture. Second, we propose a set of optimization strategies for redesigning a variety of neural network layers based on Caffe. Third, we put forward a topology-aware parameter synchronization scheme to scale the synchronous Stochastic Gradient Descent (SGD) method to multiple processors efficiently. We evaluate our framework by training a variety of widely used neural networks with the ImageNet dataset. On a single node, swCaffe can achieve 23%-119% of the overall performance of Caffe running on a K40m GPU. Compared with Caffe on CPU, swCaffe runs 3.04x-7.84x faster on all the networks. Finally, we present the scalability of swCaffe for the training of ResNet-50 and AlexNet on the scale of 1024 nodes. | The work in @cite_19 was the first to train DNN models on a CPU-GPU hybrid HPC system. Since then, a large number of works have focused on scaling DNN training on GPU supercomputers and HPC clusters. Inspur-Caffe @cite_23 is an MPI-based Caffe fork that exploits a parameter-server approach with stale asynchronous gradient updates. FireCaffe @cite_18 discusses scaling of DNN models on a cluster of 128 GPUs connected with Infiniband interconnects. It also adopts an allreduce-based parameter synchronization scheme implemented with reduction trees. S-Caffe @cite_14 co-designs Caffe with a CUDA-Aware MPI runtime to reduce broadcast overheads on modern multi-GPU clusters and scales DNN training to 160 GPUs. | {
"cite_N": [
"@cite_19",
"@cite_18",
"@cite_14",
"@cite_23"
],
"mid": [
"2162390675",
"2962911728",
"2580688187",
""
],
"abstract": [
"Scaling up deep learning algorithms has been shown to lead to increased performance in benchmark tasks and to enable discovery of complex high-level features. Recent efforts to train extremely large networks (with over 1 billion parameters) have relied on cloudlike computing infrastructure and thousands of CPU cores. In this paper, we present technical details and results from our own system based on Commodity Off-The-Shelf High Performance Computing (COTS HPC) technology: a cluster of GPU servers with Infiniband interconnects and MPI. Our system is able to train 1 billion parameter networks on just 3 machines in a couple of days, and we show that it can scale to networks with over 11 billion parameters using just 16 machines. As this infrastructure is much more easily marshaled by others, the approach enables much wider-spread research with extremely large neural networks.",
"Long training times for high-accuracy deep neural networks (DNNs) impede research into new DNN architectures and slow the development of high-accuracy DNNs. In this paper we present FireCaffe, which successfully scales deep neural network training across a cluster of GPUs. We also present a number of best practices to aid in comparing advancements in methods for scaling and accelerating the training of deep neural networks. The speed and scalability of distributed algorithms is almost always limited by the overhead of communicating between servers, DNN training is not an exception to this rule. Therefore, the key consideration here is to reduce communication overhead wherever possible, while not degrading the accuracy of the DNN models that we train. Our approach has three key pillars. First, we select network hardware that achieves high bandwidth between GPU servers – Infiniband or Cray interconnects are ideal for this. Second, we consider a number of communication algorithms, and we find that reduction trees are more efficient and scalable than the traditional parameter server approach. Third, we optionally increase the batch size to reduce the total quantity of communication during DNN training, and we identify hyperparameters that allow us to reproduce the small-batch accuracy while training with large batch sizes. When training GoogLeNet and Network-in-Network on ImageNet, we achieve a 47x and 39x speedup, respectively, when training on a cluster of 128 GPUs.",
"Availability of large data sets like ImageNet and massively parallel computation support in modern HPC devices like NVIDIA GPUs have fueled a renewed interest in Deep Learning (DL) algorithms. This has triggered the development of DL frameworks like Caffe, Torch, TensorFlow, and CNTK. However, most DL frameworks have been limited to a single node. In order to scale out DL frameworks and bring HPC capabilities to the DL arena, we propose, S-Caffe; a scalable and distributed Caffe adaptation for modern multi-GPU clusters. With an in-depth analysis of new requirements brought forward by the DL frameworks and limitations of current communication runtimes, we present a co-design of the Caffe framework and the MVAPICH2-GDR MPI runtime. Using the co-design methodology, we modify Caffe's workflow to maximize the overlap of computation and communication with multi-stage data propagation and gradient aggregation schemes. We bring DL-Awareness to the MPI runtime by proposing a hierarchical reduction design that benefits from CUDA-Aware features and provides up to a massive 133x speedup over OpenMPI and 2.6x speedup over MVAPICH2 for 160 GPUs. S-Caffe successfully scales up to 160 K-80 GPUs for GoogLeNet (ImageNet) with a speedup of 2.5x over 32 GPUs. To the best of our knowledge, this is the first framework that scales up to 160 GPUs. Furthermore, even for single node training, S-Caffe shows an improvement of 14 and 9 over Nvidia's optimized Caffe for 8 and 16 GPUs, respectively. In addition, S-Caffe achieves up to 1395 samples per second for the AlexNet model, which is comparable to the performance of Microsoft CNTK.",
""
]
} |
1903.06904 | 2922380854 | The k-means for lines is a set of k centers (points) that minimizes the sum of squared distances to a given set of n lines in R^d. This is a straightforward generalization of the k-means problem where the input is a set of n points. Related problems minimize sum of (non-squared) distances, other norms, m-estimators or ignore the t farthest points (outliers) from the k centers. We suggest the first provable PTAS algorithms for these problems that compute a (1+epsilon)-approximation in time O(n log(n) / epsilon^2) for any given epsilon in (0, 1), and constant integers k, d, t >= 1, including support for streaming and distributed input. Experimental results on Amazon EC2 cloud and open source are also provided. | Langberg and Schulman @cite_35 addressed the @math -center problem (i.e., the case @math to find a single ball intersecting all input lines). From a computational point of view, the @math -center problem significantly differs from the general @math -center problem: the @math -center problem is a convex optimization problem and is therefore fundamentally easier than the cases of @math -center for @math . | {
"cite_N": [
"@cite_35"
],
"mid": [
"2131472136"
],
"abstract": [
"This article introduces yaImpute, an R package for nearest neighbor search and imputation. Although nearest neighbor imputation is used in a host of disciplines, the methods implemented in the yaImpute package are tailored to imputation-based forest attribute estimation and mapping. The impetus to writing the yaImpute is a growing interest in nearest neighbor imputation methods for spatially explicit forest inventory, and a need within this research community for software that facilitates comparison among different nearest neighbor search algorithms and subsequent imputation techniques. yaImpute provides directives for defining the search space, subsequent distance calculation, and imputation rules for a given number of nearest neighbors. Further, the package offers a suite of diagnostics for comparison among results generated from different imputation analyses and a set of functions for mapping imputation results."
]
} |
1903.06904 | 2922380854 | The k-means for lines is a set of k centers (points) that minimizes the sum of squared distances to a given set of n lines in R^d. This is a straightforward generalization of the k-means problem where the input is a set of n points. Related problems minimize sum of (non-squared) distances, other norms, m-estimators or ignore the t farthest points (outliers) from the k centers. We suggest the first provable PTAS algorithms for these problems that compute a (1+epsilon)-approximation in time O(n log(n) / epsilon^2) for any given epsilon in (0, 1), and constant integers k, d, t >= 1, including support for streaming and distributed input. Experimental results on Amazon EC2 cloud and open source are also provided. | Also, there has been work on clustering points with "@math lines" @cite_2 , @cite_19 , @cite_5 , where one finds a set of lines @math such that the set of cylinders with radius @math about these lines covers all the input points @math . | {
"cite_N": [
"@cite_19",
"@cite_5",
"@cite_2"
],
"mid": [
"2050831206",
"1984675750",
"2029432209"
],
"abstract": [
"We consider the following instance of projective clustering, known as the 2-line-center problem: Given a set S of n points in R2, cover S by two congruent strips of minimum width. Algorithms that find the optimal solution for this problem have near-quadratic running time. In this paper we present an algorithm that, for any e > 0, computes in time O(n(logn + e-2 log(1 e)) + e-7 2 log(1 e)) a cover of S by two strips of width at most (1 + e)w*.",
"(MATH) Let P be a set of n points in @math k (P) denote the minimum over all k-flats @math of max peP Dist(p, ). We present an algorithm that computes, for any 0 k (P) from each point of P. The running time of the algorithm is dnO(k e5log(1 e)). The crucial step in obtaining this algorithm is a structural result that says that there is a near-optimal flat that lies in an affine subspace spanned by a small subset of points in P. The size of this \"core-set\" depends on k and e but is independent of the dimension.This approach also extends to the case where we want to find a k-flat that is close to a prescribed fraction of the entire point set, and to the case where we want to find j flats, each of dimension k, that are close to the point set. No efficient approximation schemes were known for these problems in high-dimensions, when k>1 or j>1.",
"We consider the following two instances of the projective clustering problem: Given a set S of n points in Rd and an integer k > 0, cover S by k slabs (respectively d-cylinders) so that the maximum width of a slab (respectively the maximum diameter of a d-cylinder) is minimized. Let w* be the smallest value so that S can be covered by k slabs (respectively d-cylinders), each of width (respectively diameter) at most w*. This paper contains three main results: (i) For d = 2, we present a randomized algorithm that computes O(k log k) strips of width at most w* that cover S. Its expected running time is O(nk2log4n) if k2 log k ≤ n; for larger values of k, the expected running time is O(n2 3k8 3log14 3n). (ii) For d = 3, a cover of S by O(k log k) slabs of width at most w* can be computed in expected time O(n3 2k9 4polygon(n)).(iii) We compute a cover of S ⊂ Rd by O(dk log k) d-cylinders of diameter at most 8w* in expected time O(dnk3 log4 n). We also present a few extensions of this result."
]
} |
1903.06657 | 2967746214 | The present study investigates users' movement behavior in a virtual environment when they attempted to avoid a virtual character. At each iteration of the experiment, four conditions (Self-Avatar LookAt, No Self-Avatar LookAt, Self-Avatar No LookAt, and No Self-Avatar No LookAt) were applied to examine users' movement behavior based on kinematic measures. During the experiment, 52 participants were asked to walk from a starting position to a target position. A virtual character was placed at the midpoint. Participants were asked to wear a head-mounted display throughout the task, and their locomotion was captured using a motion capture suit. We analyzed the captured trajectories of the participants' routes on four kinematic measures to explore whether the four experimental conditions influenced the paths they took. The results indicated that the Self-Avatar LookAt condition affected the path the participants chose more significantly than the other three conditions in terms of length, duration, and deviation, but not in terms of speed. Overall, the length and duration of the task, as well as the deviation of the trajectory from the straight line, were greater when a self-avatar represented participants. An additional effect on kinematic measures was found in the LookAt (Gaze) conditions. Implications for future research are discussed. | The effects of gaze interaction between users and virtual characters has also been examined in the past. It has been found that during interactions with humans, the gaze @cite_17 and mutual eye contact @cite_36 can be interpreted as a core social interaction mechanism and main social interaction factor. | {
"cite_N": [
"@cite_36",
"@cite_17"
],
"mid": [
"1987411016",
"2112800929"
],
"abstract": [
"Research on gaze and eye contact was organized within the framework of Patterson's (1982) sequential functional model of nonverbal exchange. Studies were reviewed showing how gaze functions to (a) provide information, (b) regulate interaction, (c) express intimacy, (d) exercise social control, and (",
"Abstract Tracking eye-movements provides easy access to cognitive processes involved in visual and sensorimotor processing. More recently, the underlying neural mechanisms have been examined by combining eye-tracking and functional neuroimaging methods. Apart from extracting visual information, gaze also serves important functions in social interactions. As a deictic cue, gaze can be used to direct the attention of another person to an object. Conversely, by following other persons’ gaze we gain access to their attentional focus, which is essential for understanding their mental states. Social gaze has therefore been studied extensively to understand the social brain. In this endeavor, gaze has mostly been investigated from an observational perspective using static displays of faces and eyes. However, there is growing consent that observational paradigms are insufficient for an understanding of the neural mechanisms of social gaze behavior, which typically involve active engagement in social interactions. Recent methodological advances have allowed increasing ecological validity by studying gaze in face-to-face encounters in real-time. Such improvements include interactions using virtual agents in gaze-contingent eye-tracking paradigms, live interactions via video feeds, and dual eye-tracking in two-person setups. These novel approaches can be used to analyze brain activity related to social gaze behavior. This review introduces these methodologies and discusses recent findings on the behavioral functions and neural mechanisms of gaze processing in social interaction."
]
} |
1903.06657 | 2967746214 | The present study investigates users' movement behavior in a virtual environment when they attempted to avoid a virtual character. At each iteration of the experiment, four conditions (Self-Avatar LookAt, No Self-Avatar LookAt, Self-Avatar No LookAt, and No Self-Avatar No LookAt) were applied to examine users' movement behavior based on kinematic measures. During the experiment, 52 participants were asked to walk from a starting position to a target position. A virtual character was placed at the midpoint. Participants were asked to wear a head-mounted display throughout the task, and their locomotion was captured using a motion capture suit. We analyzed the captured trajectories of the participants' routes on four kinematic measures to explore whether the four experimental conditions influenced the paths they took. The results indicated that the Self-Avatar LookAt condition affected the path the participants chose more significantly than the other three conditions in terms of length, duration, and deviation, but not in terms of speed. Overall, the length and duration of the task, as well as the deviation of the trajectory from the straight line, were greater when a self-avatar represented participants. An additional effect on kinematic measures was found in the LookAt (Gaze) conditions. Implications for future research are discussed. | In addition, gaze interaction has been examined during walking tasks. The study conducted by @cite_26 indicated that more personal space was given to virtual characters by the users who engaged in mutual gaze. @cite_37 found that the gaze of a virtual character toward a walking user improved the sense of immersion. @cite_9 found that participants used their gaze as a cue to avoid collision by changing their path to the opposite side of the character's gaze. Finally, the virtual reality study conducted by @cite_32 examined the effect of gaze interception during collision avoidance between two walkers. The authors concluded that the mutual gaze can be considered as a form of nonverbal communication between participants and virtual characters. | {
"cite_N": [
"@cite_9",
"@cite_37",
"@cite_26",
"@cite_32"
],
"mid": [
"2038770833",
"2548968337",
"2123834143",
"2889428601"
],
"abstract": [
"This study shows that humans (a) infer other people's movement trajectories from their gaze direction and (b) use this information to guide their own visual scanning of the environment and plan their own movement. In two eye-tracking experiments, participants viewed an animated character walking directly toward them on a street. The character looked constantly to the left or to the right (Experiment 1) or suddenly shifted his gaze from direct to the left or to the right (Experiment 2). Participants had to decide on which side they would skirt the character. They shifted their gaze toward the direction in which the character was not gazing, that is, away from his gaze, and chose to skirt him on that side. Gaze following is not always an obligatory social reflex; social-cognitive evaluations of gaze direction can lead to reversed gaze-following behavior.",
"We present a novel interactive approach, PedVR, to generate plausible behaviors for a large number of virtual humans, and to enable natural interaction between the real user and virtual agents. Our formulation is based on a coupled approach that combines a 2D multi-agent navigation algorithm with 3D human motion synthesis. The coupling can result in plausible movement of virtual agents and can generate gazing behaviors, which can considerably increase the believability. We have integrated our formulation with the DK-2 HMD and demonstrate the benefits of our crowd simulation algorithm over prior decoupled approaches. Our user evaluation suggests that the combination of coupled methods and gazing behavior can considerably increase the behavioral plausibility.",
"Digital immersive virtual environment technology (IVET) enables behavioral scientists to conduct ecologically realistic experiments with near-perfect experimental control. The authors employed IVET to study the interpersonal distance maintained between participants and virtual humans. In Study 1, participants traversed a three-dimensional virtual room in which a virtual human stood. In Study 2, a virtual human approached participants. In both studies, participant gender, virtual human gender, virtual human gaze behavior, and whether virtual humans were allegedly controlled by humans (i.e., avatars) or computers (i.e., agents) were varied. Results indicated that participants maintained greater distance from virtual humans when approaching their fronts compared to their backs. In addition, participants gave more personal space to virtual agents who engaged them in mutual gaze. Moreover, when virtual humans invaded their personal space, participants moved farthest from virtual human agents. The advantages an...",
"This paper presents a study performed in virtual reality on the effect of gaze interception during collision avoidance between two walkers. In such a situation, mutual gaze can be considered as a form of nonverbal communication. Additionally, gaze is believed to detail future path intentions and to be part of the nonverbal negotiation to achieve avoidance collaboratively. We considered an avoidance task between a real subject and a virtual human character and studied the influence of the character's gaze direction on the avoidance behaviour of the participant. Virtual reality provided an accurate control of the situation: seventeen participants were immersed in a virtual environment, instructed to navigate across a virtual space using a joystick and to avoid a virtual character that would appear from either side. The character would either gaze or not towards the participant. Further, the character would either perform or not a reciprocal adaptation of its trajectory to avoid a potential collision with the participant. The findings of this paper were that during an orthogonal collision avoidance task, gaze behaviour did not influence the collision avoidance behaviour of the participants. Further, the addition of reciprocal collision avoidance with gaze did not modify the collision behaviour of participants. These results suggest that for the duration of interaction in such a task, body motion cues were sufficient for coordination and regulation. We discuss the possible exploitation of these results to improve the design of virtual characters for populated virtual environments and user interaction."
]
} |
1903.06657 | 2967746214 | The present study investigates users' movement behavior in a virtual environment when they attempted to avoid a virtual character. At each iteration of the experiment, four conditions (Self-Avatar LookAt, No Self-Avatar LookAt, Self-Avatar No LookAt, and No Self-Avatar No LookAt) were applied to examine users' movement behavior based on kinematic measures. During the experiment, 52 participants were asked to walk from a starting position to a target position. A virtual character was placed at the midpoint. Participants were asked to wear a head-mounted display throughout the task, and their locomotion was captured using a motion capture suit. We analyzed the captured trajectories of the participants' routes on four kinematic measures to explore whether the four experimental conditions influenced the paths they took. The results indicated that the Self-Avatar LookAt condition affected the path the participants chose more significantly than the other three conditions in terms of length, duration, and deviation, but not in terms of speed. Overall, the length and duration of the task, as well as the deviation of the trajectory from the straight line, were greater when a self-avatar represented participants. An additional effect on kinematic measures was found in the LookAt (Gaze) conditions. Implications for future research are discussed. | A number of studies have used distance metrics between trajectories @cite_33 @cite_6 @cite_10 . @cite_40 proposed a set of metrics, namely the mean radius of curvature along the full path, the maximum Euclidean distance from a straight line between the origin and the target, and the minimum Euclidean distance between the path and the obstacles of the virtual environment. Principal component analysis of a set of trajectories has also been used @cite_35 . The stride length, step width, variability in stride velocity, and variability in step width have also been used to evaluate and compare trajectories generated in virtual and real environments based on the gait cycle of walkers @cite_29 @cite_11 . Finally, @cite_5 proposed nine metrics related to the shape, performance, and kinematic features that could be used to compare virtual and real trajectories. To evaluate the participants' trajectories in the current study, we adopted metrics proposed in @cite_5 . | {
"cite_N": [
"@cite_35",
"@cite_33",
"@cite_29",
"@cite_6",
"@cite_40",
"@cite_5",
"@cite_10",
"@cite_11"
],
"mid": [
"2119468631",
"2543099233",
"2160400986",
"",
"2081509522",
"2033837003",
"",
""
],
"abstract": [
"To compare and evaluate locomotion interfaces for users who are (virtually) moving on foot in VEs, we performed a study to characterize task behavior and task performance with different visual and locomotion interfaces. In both a computer-generated environment and a corresponding real environment, study participants walked to targets on walls and stopped as close to them as they could without making contact. In each of five experimental conditions participants used a combination of one of three locomotion interfaces (really walking, walking-in-place, and joystick flying), and one of three visual conditions (head-mounted display, unrestricted natural vision, or field-of-view-restricted natural vision). We identified metrics and collected data that captured task performance and the underlying kinematics of the task. Our results show: 1) Over 95 of the variance in simple motion paths is captured in three critical values: peak velocity; when, in the course of a motion, the peak velocity occurs; and peak deceleration. 2) Correlations of those critical value data for the conditions taken pairwise suggest a coarse ordering of locomotion interfaces by \"naturalness.\" 3) Task performance varies with interface condition, but correlations of that value for conditions taken pairwise do not cluster by naturalness. 4) The perceptual variable, r (also known as the time-to-contact) calculated at the point of peak deceleration has higher correlation with task performance than r calculated at peak velocity.",
"This paper addresses the problem of understanding the shape of the locomotor trajectories for a human being walking in an empty space to reach a goal defined both in position and in direction. Among all the possible trajectories reaching a given goal what are the fundamental reasons to chose one trajectory instead of another (see Fig. (1))? The underlying idea to attack this question has been to relate this problem to an optimal control problem: the trajectory is chosen according to some optimization principle. This is our basic starting assumption. The subject being viewed as a controlled system, the question becomes what criteria is optimized? Is it the time to perform the trajectory? the length of the path? the minimum jerk along the path?... In this study we show that the human locomotor trajectories are well approximated by the geodesics of a differential system minimizing the L2 norm of the control. Such geodesics are made of arcs of clothoids. The clothoid or Cornu spiral is a curve, whose curvature grows with the distance from the origin. The study is based on an experimental protocol involving 7 subjects. They had to walk within a motion capture room from a fixed starting point, and to cross over distant porches from which both position in the room and orientation were changing over trials.",
"Previous research suggests that postural sway in standing increases in virtual reality (VR) environments. This study was conducted to examine whether gait instability is prevalent when people walk in a VR environment. Ten healthy adults participated in the study. Subjects walked on a treadmill in a VR environment and a non-VR environment at each of three walking speeds: 0.9, 1.1, and 1.3 m s. In the VR environment, an endless corridor with colored vertical stripes comprising the walls was projected onto a hemispherical screen placed in front of the treadmill. The speed of the moving corridor image was matched to the speed of the treadmill to create the illusion that subjects were walking through the endless corridor. Spatiotemporal data during gait were collected with an instrumented treadmill housing two piezoelectric force platforms. Gait parameters reflective of gait instability (stride length, step width, variability in stride velocity, and variability in step width) were compared between the VR and non-VR environments. Subjects walked in the VR environment with reduced stride lengths (p = 0.001), increased step widths (p = 0.001), and with increased variability in stride velocity (p < 0.001) and step width (p = 0.002). The gait deviations suggest that walking in a VR environment may induce gait instability in healthy subjects.",
"",
"Immersive virtual environments are a promising research tool for the study of perception and action, on the assumption that visual--motor behavior in virtual and real environments is essentially similar. We investigated this issue for locomotor behavior and tested the generality of Fajen and Warren's [2003] steering dynamics model. Participants walked to a stationary goal while avoiding a stationary obstacle in matched physical and virtual environments. There were small, but reliable, differences in locomotor paths, with a larger maximum deviation (Δ e 0.16 m), larger obstacle clearance (Δ e 0.16 m), and slower walking speed (Δ e 0.13 m s) in the virtual environment. Separate model fits closely captured the mean virtual and physical paths (R2 > 0.98). Simulations implied that the path differences are not because of walking speed or a 50p distance compression in virtual environments, but might be a result of greater uncertainty about the egocentric location of virtual obstacles. On the other hand, paths had similar shapes in the two environments with no difference in median curvature and could be modeled with a single set of parameter values (R2 > 0.95). Fajen and Warren's original parameters successfully generalized to new virtual and physical object configurations (R2 > 0.95). These results justify the use of virtual environments to study locomotor behavior.",
"Virtual walking, a fundamental task in Virtual Reality (VR), is greatly influenced by the locomotion interface being used, by the specificities of input and output devices, and by the way the virtual environment is represented. No matter how virtual walking is controlled, the generation of realistic virtual trajectories is absolutely required for some applications, especially those dedicated to the study of walking behaviors in VR, navigation through virtual places for architecture, rehabilitation and training. Previous studies focused on evaluating the realism of locomotion trajectories have mostly considered the result of the locomotion task (efficiency, accuracy) and its subjective perception (presence, cybersickness). Few focused on the locomotion trajectory itself, but in situation of geometrically constrained task. In this paper, we study the realism of unconstrained trajectories produced during virtual walking by addressing the following question: did the user reach his destination by virtually walking along a trajectory he would have followed in similar real conditions? To this end, we propose a comprehensive evaluation framework consisting on a set of trajectographical criteria and a locomotion model to generate reference trajectories. We consider a simple locomotion task where users walk between two oriented points in space. The travel path is analyzed both geometrically and temporally in comparison to simulated reference trajectories. In addition, we demonstrate the framework over a user study which considered an initial set of common and frequent virtual walking conditions, namely different input devices, output display devices, control laws, and visualization modalities. The study provides insight into the relative contributions of each condition to the overall realism of the resulting virtual trajectories.",
"",
""
]
} |
1903.06701 | 2922527104 | Training complex machine learning models in parallel is an increasingly important workload. We accelerate distributed parallel training by designing a communication primitive that uses a programmable switch dataplane to execute a key step of the training process. Our approach, SwitchML, reduces the volume of exchanged data by aggregating the model updates from multiple workers in the network. We co-design the switch processing with the end-host protocols and ML frameworks to provide a robust, efficient solution that speeds up training by up to 300%, and at least by 20% for a number of real-world benchmark models. | In-network aggregation. We are not the first to propose aggregating data in the network. Targeting partition-aggregate and big data (MapReduce) applications, NetAgg @cite_46 and CamDoop @cite_51 demonstrated significant performance advantages, by performing application-specific data aggregation at switch-attached high-performance middleboxes or at servers in a direct-connect network topology, respectively. Parameter Hub @cite_28 does the same with a rack-scale parameter server. Historically, some specialized supercomputer networks @cite_36 @cite_14 offloaded MPI collective operators (e.g., all-reduce) to the network. Certain Mellanox Infiniband switches support collective offloads through SHArP @cite_26 , which builds a reduction tree embedding laid over the actual network topology. Operators are applied to data as it traverses the reduction tree; however, a tree node's operator waits to execute until all its children's data has been received. SwitchML differs from all of these approaches in that it performs in-network data reduction using a streaming aggregation protocol. Moreover, as we operate over Ethernet instead of Infiniband, we develop a failure recovery protocol. SwitchML keeps the network architecture unmodified and exploits programmable switches; its low resource usage implies it can coexist with standard Ethernet switch functionality. | {
"cite_N": [
"@cite_14",
"@cite_26",
"@cite_28",
"@cite_36",
"@cite_46",
"@cite_51"
],
"mid": [
"2164352834",
"2563521659",
"",
"1971952282",
"2143295630",
"1969805974"
],
"abstract": [
"The IBM Blue Gene P (BG P) system is a massively parallel supercomputer succeeding BG L, and it is based on orders of magnitude in system size and significant power consumption efficiency. BG P comes with many enhancements to the machine design and new architectural features at the hardware and software levels. In this work, we demonstrate techniques to leverage the architectural features of BG P to deliver high performance MPI collective communication primitives.",
"Increased system size and a greater reliance on utilizing system parallelism to achieve computational needs, requires innovative system architectures to meet the simulation challenges. As a step towards a new network class of co-processors — intelligent network devices, which manipulate data traversing the data-center network, this paper describes the SHArP technology designed to offload collective operation processing to the network. This is implemented in Mellanox's SwitchIB-2 ASIC, using innetwork trees to reduce data from a group of sources, and to distribute the result. Multiple parallel jobs with several partially overlapping groups are supported each with several reduction operations in-flight. Large performance enhancements are obtained, with an improvement of a factor of 2.1 for an eight byte MPI_Allreduce() operation on 128 hosts, going from 6.01 to 2.83 microseconds. Pipelining is used for an improvement of a factor of 3.24 in the latency of a 4096 byte MPI_Allreduce() operations, declining from 46.93 to 14.48 microseconds.",
"",
"This paper gives an overview of the BlueGene L Supercomputer. This is a jointly funded research partnership between IBM and the Lawrence Livermore National Laboratory as part of the United States Department of Energy ASCI Advanced Architecture Research Program. Application performance and scaling studies have recently been initiated with partners at a number of academic and government institutions, including the San Diego Supercomputer Center and the California Institute of Technology. This massively parallel system of 65,536 nodes is based on a new architecture that exploits system-on-a-chip technology to deliver target peak processing power of 360 teraFLOPS (trillion floating-point operations per second). The machine is scheduled to be operational in the 2004--2005 time frame, at price performance and power consumption performance targets unobtainable with conventional architectures.",
"Data centre applications for batch processing (e.g. map reduce frameworks) and online services (e.g. search engines) scale by distributing data and computation across many servers. They typically follow a partition aggregation pattern: tasks are first partitioned across servers that process data locally, and then those partial results are aggregated. This data aggregation step, however, shifts the performance bottleneck to the network, which typically struggles to support many-to-few, high-bandwidth traffic between servers. Instead of performing data aggregation at edge servers, we show that it can be done more efficiently along network paths. We describe NETAGG, a software platform that supports on-path aggregation for network-bound partition aggregation applications. NETAGG exploits a middlebox-like design, in which dedicated servers (agg boxes) are connected by high-bandwidth links to network switches. Agg boxes execute aggregation functions provided by applications, which alleviates network hotspots because only a fraction of the incoming traffic is forwarded at each hop. NETAGG requires only minimal application changes: it uses shim layers on edge servers to redirect application traffic transparently to the agg boxes. Our experimental results show that NETAGG improves substantially the throughput of two sample applications, the Solr distributed search engine and the Hadoop batch processing framework. Its design allows for incremental deployment in existing data centres and incurs only a modest investment cost.",
"At the core of Machine Learning (ML) analytics is often an expert-suggested model, whose parameters are refined by iteratively processing a training dataset until convergence. The completion time (i.e. convergence time) and quality of the learned model not only depends on the rate at which the refinements are generated but also the quality of each refinement. While data-parallel ML applications often employ a loose consistency model when updating shared model parameters to maximize parallelism, the accumulated error may seriously impact the quality of refinements and thus delay completion time, a problem that usually gets worse with scale. Although more immediate propagation of updates reduces the accumulated error, this strategy is limited by physical network bandwidth. Additionally, the performance of the widely used stochastic gradient descent (SGD) algorithm is sensitive to step size. Simply increasing communication often fails to bring improvement without tuning step size accordingly and tedious hand tuning is usually needed to achieve optimal performance. This paper presents Bosen, a system that maximizes the network communication efficiency under a given inter-machine network bandwidth budget to minimize parallel error, while ensuring theoretical convergence guarantees for large-scale data-parallel ML applications. Furthermore, Bosen prioritizes messages most significant to algorithm convergence, further enhancing algorithm convergence. Finally, Bosen is the first distributed implementation of the recently presented adaptive revision algorithm, which provides orders of magnitude improvement over a carefully tuned fixed schedule of step size refinements for some SGD algorithms. Experiments on two clusters with up to 1024 cores show that our mechanism significantly improves upon static communication schedules."
]
} |
1903.06701 | 2922527104 | Training complex machine learning models in parallel is an increasingly important workload. We accelerate distributed parallel training by designing a communication primitive that uses a programmable switch dataplane to execute a key step of the training process. Our approach, SwitchML, reduces the volume of exchanged data by aggregating the model updates from multiple workers in the network. We co-design the switch processing with the end-host protocols and ML frameworks to provide a robust, efficient solution that speeds up training by up to 300%, and at least by 20% for a number of real-world benchmark models. | The closest work to ours is DAIET @cite_60 . Sapio et al. also proposed in-network aggregation for minimizing communication overhead of exchanging ML model updates. However, their short paper does not describe a complete design, does not address the major challenges of supporting ML applications, and provides only a simple proof-of-concept prototype for MapReduce applications running on a P4 emulator. It is not clear it could be made to work with a real programmable switch. | {
"cite_N": [
"@cite_60"
],
"mid": [
"2769986458"
],
"abstract": [
"Programmable data plane hardware creates new opportunities for infusing intelligence into the network. This raises a fundamental question: what kinds of computation should be delegated to the network? In this paper, we discuss the opportunities and challenges for co-designing data center distributed systems with their network layer. We believe that the time has finally come for offloading part of their computation to execute in-network. However, in-network computation tasks must be judiciously crafted to match the limitations of the network machine architecture of programmable devices. With the help of our experiments on machine learning and graph analytics workloads, we identify that aggregation functions raise opportunities to exploit the limited computation power of networking hardware to lessen network congestion and improve the overall application performance. Moreover, as a proof-of-concept, we propose Daiet, a system that performs in-network data aggregation. Experimental results with an initial prototype show a large data reduction ratio (86.9%-89.3%) and a similar decrease in the workers' computation time."
]
} |
1903.06701 | 2922527104 | Training complex machine learning models in parallel is an increasingly important workload. We accelerate distributed parallel training by designing a communication primitive that uses a programmable switch dataplane to execute a key step of the training process. Our approach, SwitchML, reduces the volume of exchanged data by aggregating the model updates from multiple workers in the network. We co-design the switch processing with the end-host protocols and ML frameworks to provide a robust, efficient solution that speeds up training by up to 300%, and at least by 20% for a number of real-world benchmark models. | Accelerating DNN training. A large body of work has proposed improvements to hardware and software systems, as well as algorithmic advances, to train DNN models faster. We only discuss a few relevant prior approaches. Improving training performance via data or model parallelism has been explored by numerous deep learning systems @cite_55 @cite_17 @cite_49 @cite_11 @cite_0 @cite_22 @cite_10 . Of the two strategies, data parallelism is the most common approach, but it can be advantageous to devise strategies that combine the two. Recent work even shows how to automatically find a fast parallelization strategy for a specific parallel machine @cite_62 . Underpinning any distributed training strategy lies parameter synchronization. Gibiansky @cite_18 was among the first to research using fast collective algorithms in lieu of the PS approach, which has been a traditional mechanism in many ML frameworks. This approach is now commonly used in many platforms @cite_21 @cite_18 @cite_1 @cite_31 @cite_4 . Following this line of work, we view SwitchML as a further advancement -- one that pushes the boundary by co-designing networking functions with ML applications. | {
"cite_N": [
"@cite_18",
"@cite_62",
"@cite_4",
"@cite_22",
"@cite_55",
"@cite_21",
"@cite_1",
"@cite_17",
"@cite_0",
"@cite_49",
"@cite_31",
"@cite_10",
"@cite_11"
],
"mid": [
"",
"2884700152",
"2513383847",
"",
"2168231600",
"2170135819",
"",
"",
"",
"",
"2787998955",
"",
""
],
"abstract": [
"",
"The computational requirements for training deep neural networks (DNNs) have grown to the point that it is now standard practice to parallelize training. Existing deep learning systems commonly use data or model parallelism, but unfortunately, these strategies often result in suboptimal parallelization performance. In this paper, we define a more comprehensive search space of parallelization strategies for DNNs called SOAP, which includes strategies to parallelize a DNN in the Sample, Operation, Attribute, and Parameter dimensions. We also propose FlexFlow, a deep learning framework that uses guided randomized search of the SOAP space to find a fast parallelization strategy for a specific parallel machine. To accelerate this search, FlexFlow introduces a novel execution simulator that can accurately predict a parallelization strategy's performance and is three orders of magnitude faster than prior approaches that have to execute each strategy. We evaluate FlexFlow with six real-world DNN benchmarks on two GPU clusters and show that FlexFlow can increase training throughput by up to 3.8x over state-of-the-art approaches, even when including its search time, and also improves scalability.",
"This tutorial will introduce the Computational Network Toolkit, or CNTK, Microsoft's cutting-edge open-source deep-learning toolkit for Windows and Linux. CNTK is a powerful computation-graph based deep-learning toolkit for training and evaluating deep neural networks. Microsoft product groups use CNTK, for example to create the Cortana speech models and web ranking. CNTK supports feed-forward, convolutional, and recurrent networks for speech, image, and text workloads, also in combination. Popular network types are supported either natively (convolution) or can be described as a CNTK configuration (LSTM, sequence-to-sequence). CNTK scales to multiple GPU servers and is designed around efficiency. The tutorial will give an overview of CNTK's general architecture and describe the specific methods and algorithms used for automatic differentiation, recurrent-loop inference and execution, memory sharing, on-the-fly randomization of large corpora, and multi-server parallelization. We will then show how typical uses look for relevant tasks like image recognition, sequence-to-sequence modeling, and speech recognition.",
"",
"Recent work in unsupervised feature learning and deep learning has shown that being able to train large models can dramatically improve performance. In this paper, we consider the problem of training a deep network with billions of parameters using tens of thousands of CPU cores. We have developed a software framework called DistBelief that can utilize computing clusters with thousands of machines to train large models. Within this framework, we have developed two algorithms for large-scale distributed training: (i) Downpour SGD, an asynchronous stochastic gradient descent procedure supporting a large number of model replicas, and (ii) Sandblaster, a framework that supports a variety of distributed batch optimization procedures, including a distributed implementation of L-BFGS. Downpour SGD and Sandblaster L-BFGS both increase the scale and speed of deep network training. We have successfully used our system to train a deep network 30x larger than previously reported in the literature, and achieves state-of-the-art performance on ImageNet, a visual object recognition task with 16 million images and 21k categories. We show that these same techniques dramatically accelerate the training of a more modestly- sized deep network for a commercial speech recognition service. Although we focus on and report performance of these methods as applied to training large neural networks, the underlying algorithms are applicable to any gradient-based machine learning algorithm.",
"Long training times for high-accuracy deep neural networks (DNNs) impede research into new DNN architectures and slow the development of high-accuracy DNNs. In this paper we present FireCaffe, which successfully scales deep neural network training across a cluster of GPUs. We also present a number of best practices to aid in comparing advancements in methods for scaling and accelerating the training of deep neural networks. The speed and scalability of distributed algorithms is almost always limited by the overhead of communicating between servers; DNN training is not an exception to this rule. Therefore, the key consideration here is to reduce communication overhead wherever possible, while not degrading the accuracy of the DNN models that we train. Our approach has three key pillars. First, we select network hardware that achieves high bandwidth between GPU servers -- Infiniband or Cray interconnects are ideal for this. Second, we consider a number of communication algorithms, and we find that reduction trees are more efficient and scalable than the traditional parameter server approach. Third, we optionally increase the batch size to reduce the total quantity of communication during DNN training, and we identify hyperparameters that allow us to reproduce the small-batch accuracy while training with large batch sizes. When training GoogLeNet and Network-in-Network on ImageNet, we achieve a 47x and 39x speedup, respectively, when training on a cluster of 128 GPUs.",
"",
"",
"",
"",
"Training modern deep learning models requires large amounts of computation, often provided by GPUs. Scaling computation from one GPU to many can enable much faster training and research progress but entails two complications. First, the training library must support inter-GPU communication. Depending on the particular methods employed, this communication may entail anywhere from negligible to significant overhead. Second, the user must modify his or her training code to take advantage of inter-GPU communication. Depending on the training library's API, the modification required may be either significant or minimal. Existing methods for enabling multi-GPU training under the TensorFlow library entail non-negligible communication overhead and require users to heavily modify their model-building code, leading many researchers to avoid the whole mess and stick with slower single-GPU training. In this paper we introduce Horovod, an open source library that improves on both obstructions to scaling: it employs efficient inter-GPU communication via ring reduction and requires only a few lines of modification to user code, enabling faster, easier distributed training in TensorFlow. Horovod is available under the Apache 2.0 license at this https URL",
"",
""
]
} |
1903.06869 | 2950820225 | We formulate notions of opacity for cyberphysical systems modeled as discrete-time linear time-invariant systems. A set of secret states is @math -ISO with respect to a set of nonsecret states if, starting from these sets at time @math , the outputs at time @math are indistinguishable to an adversarial observer. Necessary and sufficient conditions to ensure that a secret specification is @math -ISO are established in terms of sets of reachable states. We also show how to adapt techniques for computing under-approximations and over-approximations of the set of reachable states of dynamical systems in order to soundly approximate k-ISO. Further, we provide a condition for output controllability, if @math -ISO holds, and show that the converse holds under an additional assumption. We extend the theory of opacity for single-adversary systems to the case of multiple adversaries and develop several notions of decentralized opacity. We study the following scenarios: i) the presence or lack of a centralized coordinator, and ii) the presence or absence of collusion among adversaries. In the case of colluding adversaries, we derive a condition for nonopacity that depends on the structure of the directed graph representing the communication between adversaries. Finally, we relax the condition that the outputs be indistinguishable and define a notion of @math -opacity, and also provide an extension to the case of nonlinear systems. | Opacity was first presented as a tool to study cryptographic protocols in @cite_42 . The intruder was modeled as a passive observer who could read messages exchanged between two parties, but could not modify, block, or send a message. The aim of the parties was to exchange secret information without making it accessible to the intruder. A theory of supervisory control for DESs represented by finite state automata (FSA) and regular languages was formulated in @cite_54 @cite_39 . 
This framework spawned research in many areas including fault diagnosis @cite_45 , hybrid systems @cite_46 , and robotics @cite_27 . | {
"cite_N": [
"@cite_54",
"@cite_42",
"@cite_39",
"@cite_27",
"@cite_45",
"@cite_46"
],
"mid": [
"1979349468",
"1562284768",
"2059057465",
"2132714442",
"2099395647",
"1594130515"
],
"abstract": [
"The paper studies the control of a class of discrete event processes, i.e., processes that are discrete, asynchronous and possibly nondeterministic. The controlled process is described as the generator of a formal language, while the controller, or supervisor, is constructed from a recognizer for a specified target language that incorporates the desired closed-loop system behavior. The existence problem for a supervisor is reduced to finding the largest controllable language contained in a given legal language. Two examples are provided.",
"The most studied property, secrecy, is not always sufficient to prove the security of a protocol. Other properties such as anonymity, privacy or opacity could be useful. Here, we give a simple definition of opacity by looking at the possible traces of the protocol. Our approach draws on a new property over messages called similarity. Then, using rewriting methods close to those used in unification, we demonstrate the decidability of our opacity property. This is only achieved in the case of atomic keys using a method called Key Quantification.",
"A discrete event system (DES) is a dynamic system that evolves in accordance with the abrupt occurrence, at possibly unknown irregular intervals, of physical events. Such systems arise in a variety of contexts ranging from computer operating systems to the control of complex multimode processes. A control theory for the logical aspects of such DESs is surveyed. The focus is on the qualitative aspects of control, but computation and the related issue of computational complexity are also considered. Automata and formal language models for DESs are surveyed. >",
"In this thesis we present a technique for the composition of robot control laws in dynamical environments. We propose a challenging robotic task, called Dynamical Pick and Place, in which a robot equipped with merely a soft paddle must capture and contain a ball, safely negotiate it past obstacles, and bring it to rest at a desired location. We develop a composition technique for local controllers that provides a formal guarantee of the stability of the switching behavior required in this task, and provide descriptive statistics of a working implementation. Our robotic system displays unusually dexterous behavior in the face of significant system noise, and recovers gracefully from large unexpected perturbations caused by the experimenters. Our approach to controller composition makes use of the funnel as a metaphor for asymptotic stability, is motivated by the pre-image backchaining techniques developed by Lozano-Perez, Mason and Taylor, and extends their ideas from quasi-static environments to systems with full dynamics. We introduce the concepts of \"dynamical obstacle avoidance\" and \"dynamical safety\" for systems with only intermittent control of their environment, and show that it is important not only that the system avoid obstacles directly, but also that the system will never reach an obstacle before getting another chance to effect control. The Dynamical Pick and Place problem addressed by this thesis is a difficult control problem, but an easy planning problem. The system we develop provides a way to engage more powerful AI planning tools without sacrificing access to the stability arguments of dynamical systems theory.",
"Detection and isolation of failures in large, complex systems is a crucial and challenging task. The increasingly stringent requirements on performance and reliability of complex technological systems have necessitated the development of sophisticated and systematic methods for the timely and accurate diagnosis of system failures. We propose a discrete-event systems (DES) approach to the failure diagnosis problem. This approach is applicable to systems that fall naturally in the class of DES; moreover, for the purpose of diagnosis, continuous-variable dynamic systems can often be viewed as DES at a higher level of abstraction. We present a methodology for modeling physical systems in a DES framework and illustrate this method with examples. We discuss the notion of diagnosability, the construction procedure of the diagnoser, and necessary and sufficient conditions for diagnosability. Finally, we illustrate our approach using realistic models of two different heating, ventilation, and air conditioning (HVAC) systems, one diagnosable and the other not diagnosable. While the modeling methodology presented here has been developed for the purpose of failure diagnosis, its scope is not restricted to this problem; it can also be used to develop DES models for other purposes such as control.",
"Modeling of hybrid systems.- Examples of hybrid dynamical systems.- Variable-structure systems.- Complementarity systems.- Analysis of hybrid systems.- Hybrid control design."
]
} |
1903.06869 | 2950820225 | We formulate notions of opacity for cyberphysical systems modeled as discrete-time linear time-invariant systems. A set of secret states is @math -ISO with respect to a set of nonsecret states if, starting from these sets at time @math , the outputs at time @math are indistinguishable to an adversarial observer. Necessary and sufficient conditions to ensure that a secret specification is @math -ISO are established in terms of sets of reachable states. We also show how to adapt techniques for computing under-approximations and over-approximations of the set of reachable states of dynamical systems in order to soundly approximate k-ISO. Further, we provide a condition for output controllability, if @math -ISO holds, and show that the converse holds under an additional assumption. We extend the theory of opacity for single-adversary systems to the case of multiple adversaries and develop several notions of decentralized opacity. We study the following scenarios: i) the presence or lack of a centralized coordinator, and ii) the presence or absence of collusion among adversaries. In the case of colluding adversaries, we derive a condition for nonopacity that depends on the structure of the directed graph representing the communication between adversaries. Finally, we relax the condition that the outputs be indistinguishable and define a notion of @math -opacity, and also provide an extension to the case of nonlinear systems. | Opacity was compared with detectability and diagnosability of DESs, and other privacy properties like secrecy and anonymity in @cite_28 . A subsequent paper @cite_0 defined opacity for DESs in a decentralized framework with multiple adversaries, each carrying out its own observation of the system. The authors of @cite_26 characterized language-based notions of opacity under unions and intersections. They demonstrated the existence of supremal and minimal opaque sublanguages and superlanguages. | {
"cite_N": [
"@cite_28",
"@cite_26",
"@cite_0"
],
"mid": [
"",
"2144982432",
"2058876757"
],
"abstract": [
"",
"Opacity describes the inability for an external observer to know what happened in a system. Recently, opacity has been investigated in the framework of discrete event systems. In our previous paper, we define two types of opacities: strong opacity and weak opacity. Given a general observation mapping, a language is strongly opaque if all strings in the language are confused with some strings in another language and it is weakly opaque if some strings in the language are confused with some strings in another language. In this paper, we investigate properties of opacities. We show that opacities are closed under union, but may not be closed under intersection. Based on these properties, we discuss how to modify languages to satisfy the strong opacity, weak opacity, and no opacity by investigating the sublanguages and superlanguages that are strongly opaque, weakly opaque, and not opaque respectively. We find the largest sublanguages and smallest superlanguages. Examples are given to illustrate results.",
"In this paper, we investigate opacity of discrete event systems in a decentralized framework with several agents, each of them performing its observation of the system. We consider two cases, one without coordination among agents and one with coordination. Both cases are useful because many systems used today are distributed over a network, some with agents coordinating among themselves and some without. We introduce general definitions of decentralized opacity for both cases. The definitions are based on languages. Therefore, they are flexible and can include other properties of discrete event systems as special cases. In particular, we show that co-observability used in supervisory control is a special case of decentralized opacity. We illustrate the usefulness of decentralized opacity by applying it in solving an interesting security problem in computer systems."
]
} |
1903.06869 | 2950820225 | We formulate notions of opacity for cyberphysical systems modeled as discrete-time linear time-invariant systems. A set of secret states is @math -ISO with respect to a set of nonsecret states if, starting from these sets at time @math , the outputs at time @math are indistinguishable to an adversarial observer. Necessary and sufficient conditions to ensure that a secret specification is @math -ISO are established in terms of sets of reachable states. We also show how to adapt techniques for computing under-approximations and over-approximations of the set of reachable states of dynamical systems in order to soundly approximate k-ISO. Further, we provide a condition for output controllability, if @math -ISO holds, and show that the converse holds under an additional assumption. We extend the theory of opacity for single-adversary systems to the case of multiple adversaries and develop several notions of decentralized opacity. We study the following scenarios: i) the presence or lack of a centralized coordinator, and ii) the presence or absence of collusion among adversaries. In the case of colluding adversaries, we derive a condition for nonopacity that depends on the structure of the directed graph representing the communication between adversaries. Finally, we relax the condition that the outputs be indistinguishable and define a notion of @math -opacity, and also provide an extension to the case of nonlinear systems. | Enforcement of opacity using techniques from supervisory control was studied in @cite_3 @cite_15 . The authors of @cite_14 formulated an alternate method of opacity enforcement using insertion functions, which are entities that modify the output behavior of the system in order to keep a secret. The model-checking and verification of notions of opacity at run time in online setups was presented in @cite_37 . A scheme for the verification of opacity in DESs using two-way observers was proposed in @cite_22 . 
This enabled a unified framework to verify multiple notions of opacity. There is a large body of literature focused on developing techniques to compute overapproximations and underapproximations of sets of reachable states. These techniques typically depend on how the initial set of states is represented, e.g., by support functions @cite_23 , zonotopes @cite_11 , or ellipsoids @cite_12 . A method to compute overapproximations of reachable sets of states for linear systems with uncertain, time-varying parameters and inputs was presented in @cite_31 . The reader is referred to @cite_16 for a succinct presentation of some of the techniques used in computing reachable sets.
"cite_N": [
"@cite_37",
"@cite_14",
"@cite_31",
"@cite_22",
"@cite_3",
"@cite_23",
"@cite_15",
"@cite_16",
"@cite_12",
"@cite_11"
],
"mid": [
"1976202267",
"2040603575",
"1994826906",
"2601738991",
"2064886960",
"2026019548",
"",
"",
"1975630497",
"1532859487"
],
"abstract": [
"We are interested in the validation of opacity. Opacity models the impossibility for an attacker to retrieve the value of a secret in a system of interest. Roughly speaking, ensuring opacity provides confidentiality of a secret on the system that must not leak to an attacker. More specifically, we study how we can model-check, verify and enforce at system runtime, several levels of opacity. Besides existing notions of opacity, we also introduce K-step strong opacity, a more practical notion of opacity that provides a stronger level of confidentiality.",
"Abstract Opacity is a confidentiality property that characterizes whether a “secret” of a system can be inferred by an outside observer called an “intruder”. In this paper, we consider the problem of enforcing opacity in systems modeled as partially-observed finite-state automata. We propose a novel enforcement mechanism based on the use of insertion functions. An insertion function is a monitoring interface at the output of the system that changes the system’s output behavior by inserting additional observable events. We define the property of “i-enforceability” that an insertion function needs to satisfy in order to enforce opacity. I-enforceability captures an insertion function’s ability to respond to every system’s observed behavior and to output only modified behaviors that look like existing non-secret behaviors. Given an insertion function, we provide an algorithm that verifies whether it is i-enforcing. More generally, given an opacity notion, we determine whether it is i-enforceable or not by constructing a structure called the “All Insertion Structure” (AIS). The AIS enumerates all i-enforcing insertion functions in a compact state transition structure. If a given opacity notion has been verified to be i-enforceable, we show how to use the AIS to synthesize an i-enforcing insertion function.",
"This paper presents a method for using set-based approximations to the Peano-Baker series to compute overapproximations of reachable sets for linear systems with uncertain, time-varying parameters and inputs. Alternative representations for sets of uncertain system matrices are considered, including matrix polytopes, matrix zonotopes, and interval matrices. For each representation, the computational efficiency and resulting approximation error for reachable set computations are evaluated analytically and empirically. As an application, reachable sets are computed for a truck with hybrid dynamics due to a gain-scheduled yaw controller. As an alternative to computing reachable sets for the hybrid model, for which switching introduces an additional overapproximation error, the gain-scheduled controller is approximated with uncertain time-varying parameters, which leads to more efficient and more accurate reachable set computations.",
"Abstract In the context of security analysis for information flow properties, where a potentially malicious observer (intruder) tracks the observed behavior of a given system, infinite-step opacity (respectively, K-step opacity) holds if the intruder can never determine for sure that the system was in a secret state for any instant within infinite steps (respectively, K steps) prior to that particular instant. We present new algorithms for the verification of the properties of infinite-step opacity and K-step opacity for partially-observed discrete event systems modeled as finite-state automata. Our new algorithms are based on a novel separation principle for state estimates that characterizes the information dependence in opacity verification problems, and they have lower computational complexity than previously-proposed ones in the literature. Specifically, we propose a new information structure, called the two-way observer, that is used for the verification of infinite-step and K-step opacity. Based on the two-way observer, a new upper bound for the delay in K-step opacity is derived, which also improves previously-known results.",
"In the field of computer security, a problem that received little attention so far is the enforcement of confidentiality properties by supervisory control. Given a critical system G that may leak confidential information, the problem consists in designing a controller C, possibly disabling occurrences of a fixed subset of events of G, so that the closed-loop system G/C does not leak confidential information. We consider this problem in the case where G is a finite transition system with set of events Σ and an inquisitive user, called the adversary, observes a subset Σa of Σ. The confidential information is the fact (when it is true) that the trace of the execution of G on Σ* belongs to a regular set S ⊆ Σ*, called the secret. The secret S is said to be opaque w.r.t. G (respectively, G/C) and Σa if the adversary cannot safely infer this fact from the trace of the execution of G (respectively, G/C) on Σa*. In the converse case, the secret can be disclosed. We present an effective algorithm for computing the most permissive controller C such that S is opaque w.r.t. G/C and Σa.",
"This work is concerned with the algorithmic reachability analysis of linear systems with constrained initial states and inputs. In this paper, we present a new approach for the computation of tight polyhedral over-approximations of the reachable sets of a linear system. The main contribution over our previous work is that it makes it possible to consider systems whose sets of initial states and inputs are given by arbitrary compact convex sets represented by their support functions. We first consider the discrete-time setting and then we show how our algorithm can be extended to handle continuous-time linear systems. Finally, the effectiveness of our approach is demonstrated through several examples.",
"",
"",
"This paper describes the computation of reach sets for discrete-time linear control systems with time-varying coefficients and ellipsoidal bounds on the controls and initial conditions. The algorithms construct external and internal ellipsoidal approximations that touch the reach set boundary from outside and from inside. Recurrence relations describe the time evolution of these approximations. An essential part of the paper deals with singular discrete-time linear systems",
"This work is concerned with the problem of computing the set of reachable states for linear time-invariant systems with bounded inputs. Our main contribution is a novel algorithm which improves significantly the computational complexity of reachability analysis. Algorithms to compute over and under-approximations of the reachable sets are proposed as well. These algorithms are not subject to the wrapping effect and therefore our approximations are tight. We show that these approximations are useful in the context of hybrid systems verification and control synthesis. The performance of a prototype implementation of the algorithm confirms its qualities and gives hope for scaling up verification technology for continuous and hybrid systems."
]
} |
1903.06814 | 2922139758 | In order to operate autonomously, a robot should explore the environment and build a model of each of the surrounding objects. A common approach is to carefully scan the whole workspace. This is time-consuming. It is also often impossible to reach all the viewpoints required to acquire full knowledge about the environment. Humans can perform shape completion of occluded objects by relying on past experience. Therefore, we propose a method that generates images of an object from various viewpoints using a single input RGB image. A deep neural network is trained to imagine the object appearance from many viewpoints. We present the whole pipeline, which takes a single RGB image as input and returns a sequence of RGB and depth images of the object. The method utilizes a CNN-based object detector to extract the object from the natural scene. Then, the proposed network generates a set of RGB and depth images. We show the results both on a synthetic dataset and on real images. | Single-view images can be used for effective planning of grasping points for vacuum-based end effectors because only a single visible point of contact of suitable surface geometry is required @cite_22 . Along with a greater number of fingers in a gripper, the estimation of grasping points becomes more difficult. A wide variety of grasp planning methods are available. For example, Kopicki @cite_3 presented a method for computing grasp contact points for a multi-finger robot given a partial 3D point cloud model. The grasp success rate decreases when this model is obtained from a single view. The proposed method for images generation can provide missing data and improve the grasping success rate. Another solution is to recover the 3D model and then apply grasp planning. Given a full 3D model a grasp can also be transferred to another novel object via contact warping @cite_27 . | {
"cite_N": [
"@cite_27",
"@cite_22",
"@cite_3"
],
"mid": [
"2068641309",
"2754340109",
"2300618187"
],
"abstract": [
"We present a method for transferring grasps between objects of the same functional category. This transfer is intended to preserve the functionality of a grasp constructed for one of the objects, thus enabling the analogous action to be performed on a novel object for which no grasp has been specified. Manipulation knowledge is hence generalized from a single example to a class of objects with a significant amount of shape variability. The transfer is achieved through warping the surface geometry of the source object onto the target object, and along with it the contact points of a grasp. The warped contacts are locally replanned, if necessary, to ensure grasp stability, and a suitable grasp pose is computed. We present extensive results of experiments with a database of four-finger grasps, designed to systematically cover variations on grasping the mugs of the Princeton Shape Benchmark.",
"Vacuum-based end effectors are widely used in industry and are often preferred over parallel-jaw and multifinger grippers due to their ability to lift objects with a single point of contact. Suction grasp planners often target planar surfaces on point clouds near the estimated centroid of an object. In this paper, we propose a compliant suction contact model that computes the quality of the seal between the suction cup and local target surface and a measure of the ability of the suction grasp to resist an external gravity wrench. To characterize grasps, we estimate robustness to perturbations in end-effector and object pose, material properties, and external wrenches. We analyze grasps across 1,500 3D object models to generate Dex-Net 3.0, a dataset of 2.8 million point clouds, suction grasps, and grasp robustness labels. We use Dex-Net 3.0 to train a Grasp Quality Convolutional Neural Network (GQ-CNN) to classify robust suction targets in point clouds containing a single object. We evaluate the resulting system in 350 physical trials on an ABB YuMi fitted with a pneumatic suction gripper. When evaluated on novel objects that we categorize as Basic (prismatic or cylindrical), Typical (more complex geometry), and Adversarial (with few available suction-grasp points) Dex-Net 3.0 achieves success rates of 98 @math , 82 @math , and 58 @math respectively, improving to 81 @math in the latter case when the training set includes only adversarial objects. Code, datasets, and supplemental material can be found at this http URL .",
"This paper presents a method for one-shot learning of dexterous grasps and grasp generation for novel objects. A model of each grasp type is learned from a single kinesthetic demonstration and several types are taught. These models are used to select and generate grasps for unfamiliar objects. Both the learning and generation stages use an incomplete point cloud from a depth camera, so no prior model of an object shape is used. The learned model is a product of experts, in which experts are of two types. The first type is a contact model and is a density over the pose of a single hand link relative to the local object surface. The second type is the hand-configuration model and is a density over the whole-hand configuration. Grasp generation for an unfamiliar object optimizes the product of these two model types, generating thousands of grasp candidates in under 30 seconds. The method is robust to incomplete data at both training and testing stages. When several grasp types are considered the method selects the highest-likelihood grasp across all the types. In an experiment, the training set consisted of five different grasps and the test set of 45 previously unseen objects. The success rate of the first-choice grasp is 84.4 or 77.7 if seven views or a single view of the test object are taken, respectively."
]
} |
1903.06814 | 2922139758 | In order to operate autonomously, a robot should explore the environment and build a model of each of the surrounding objects. A common approach is to carefully scan the whole workspace. This is time-consuming. It is also often impossible to reach all the viewpoints required to acquire full knowledge about the environment. Humans can perform shape completion of occluded objects by relying on past experience. Therefore, we propose a method that generates images of an object from various viewpoints using a single input RGB image. A deep neural network is trained to imagine the object appearance from many viewpoints. We present the whole pipeline, which takes a single RGB image as input and returns a sequence of RGB and depth images of the object. The method utilizes a CNN-based object detector to extract the object from the natural scene. Then, the proposed network generates a set of RGB and depth images. We show the results both on a synthetic dataset and on real images. | It is possible to recover the pose and shape of a known object from a single view using a Convolutional Neural Network (CNN), applied to the single-shot object pose estimation problem @cite_32 . However, most methods for object reconstruction focus on end-to-end learning a 3D voxel model of the object from a single image. A general approach, which enables the completion of a 3D shape from a single-view 3D point cloud using a CNN, was proposed by @cite_15 . The network generates a 3D voxel occupancy grid from a partial point cloud and can also generalize to novel objects. The detailed mesh of the object is obtained by further post-processing of both the input point cloud and a 3D occupancy grid @cite_15 . A similar approach to object reconstruction, based on the 3D Recurrent Reconstruction Neural Network architecture, is proposed by @cite_9 . In this case, the 3D occupancy grid is obtained from an RGB image. 
Another approach to 3D object reconstruction is based on a set of algorithms for object detection, segmentation, and pose estimation, which fit a deformable 3D shape to the image to produce the 3D reconstruction of the object @cite_16 . | {
"cite_N": [
"@cite_9",
"@cite_15",
"@cite_16",
"@cite_32"
],
"mid": [
"",
"2524140598",
"1893912098",
"2767032778"
],
"abstract": [
"",
"This work provides an architecture to enable robotic grasp planning via shape completion. Shape completion is accomplished through the use of a 3D convolutional neural network (CNN). The network is trained on our own new open source dataset of over 440,000 3D exemplars captured from varying viewpoints. At runtime, a 2.5D pointcloud captured from a single point of view is fed into the CNN, which fills in the occluded regions of the scene, allowing grasps to be planned and executed on the completed object. Runtime shape completion is very rapid because most of the computational costs of shape completion are borne during offline training. We explore how the quality of completions vary based on several factors. These include whether or not the object being completed existed in the training data and how many object models were used to train the network. We also look at the ability of the network to generalize to novel objects allowing the system to complete previously unseen objects at runtime. Finally, experimentation is done both in simulation and on actual robotic hardware to explore the relationship between completion quality and the utility of the completed mesh model for grasping.",
"Object reconstruction from a single image - in the wild - is a problem where we can make progress and get meaningful results today. This is the main message of this paper, which introduces an automated pipeline with pixels as inputs and 3D surfaces of various rigid categories as outputs in images of realistic scenes. At the core of our approach are deformable 3D models that can be learned from 2D annotations available in existing object detection datasets, that can be driven by noisy automatic object segmentations and which we complement with a bottom-up module for recovering high-frequency shape details. We perform a comprehensive quantitative analysis and ablation study of our approach using the recently introduced PASCAL 3D+ dataset and show very encouraging automatic reconstructions on PASCAL VOC.",
"Estimating the 6D pose of known objects is important for robots to interact with the real world. The problem is challenging due to the variety of objects as well as the complexity of a scene caused by clutter and occlusions between objects. In this work, we introduce PoseCNN, a new Convolutional Neural Network for 6D object pose estimation. PoseCNN estimates the 3D translation of an object by localizing its center in the image and predicting its distance from the camera. The 3D rotation of the object is estimated by regressing to a quaternion representation. We also introduce a novel loss function that enables PoseCNN to handle symmetric objects. In addition, we contribute a large scale video dataset for 6D object pose estimation named the YCB-Video dataset. Our dataset provides accurate 6D poses of 21 objects from the YCB dataset observed in 92 videos with 133,827 frames. We conduct extensive experiments on our YCB-Video dataset and the OccludedLINEMOD dataset to show that PoseCNN is highly robust to occlusions, can handle symmetric objects, and provide accurate pose estimation using only color images as input. When using depth data to further refine the poses, our approach achieves state-of-the-art results on the challenging OccludedLINEMOD dataset. Our code and dataset are available at this https URL."
]
} |