aid | mid | abstract | related_work | ref_abstract |
|---|---|---|---|---|
1903.00729 | 2969966929 | Sketches are probabilistic data structures that can provide approximate results within mathematically proven error bounds while using orders of magnitude less memory than traditional approaches. They are tailored for streaming data analysis even on architectures with limited memory, such as single-board computers that are widely exploited for IoT and edge computing. Since these devices offer multiple cores, they can manage high volumes of data streams with efficient parallel sketching schemes. However, since their caches are relatively small, careful parallelization is required. | CMS was proposed by Cormode and Muthukrishnan to summarize data streams @cite_9. Later, they commented on its parallelization @cite_15 and briefly mentioned the single-table and multi-table approaches. There are studies in the literature employing synchronization primitives such as atomic operations for frequency counting @cite_10. However, synchronization-free approaches are more popular; an augmented frequency sketch has been proposed for time-faded heavy hitters @cite_7, whose authors divided the stream into sub-streams and generated multiple sketches instead of a single one. A similar approach using multiple sketches is also taken by Mandal et al. @cite_16. CMS has also been used as an underlying structure to design advanced sketches. Recently, ASketch was developed, which filters highly frequent items first and handles the rest with a sketch such as CMS, the structure used in its implementation @cite_3. However, its parallelization also employs multiple filter-sketch pairs. Another advanced sketch employing multiple CMSs for parallelization is FCM @cite_11. | {
"cite_N": [
"@cite_7",
"@cite_9",
"@cite_3",
"@cite_15",
"@cite_16",
"@cite_10",
"@cite_11"
],
"mid": [
"2573038016",
"2080234606",
"2439904216",
"2012005986",
"2891910862",
"2117689707",
""
],
"abstract": [
"Abstract In this paper we present PFDCMSS (Parallel Forward Decay Count–Min Space Saving) which, to the best of our knowledge, is the world first message–passing parallel algorithm for mining time–faded heavy hitters. The algorithm is a parallel version of the recently published FDCMSS (Forward Decay Count–Min Space Saving) sequential algorithm. We formally prove its correctness by showing that the underlying data structure, a sketch augmented with a Space Saving stream summary holding exactly two counters, is mergeable. Whilst mergeability of traditional sketches derives immediately from theory, we show that, instead, merging our augmented sketch is non trivial. Nonetheless, the resulting parallel algorithm is fast and simple to implement. The very large volumes of modern datasets in the context of Big Data present new challenges that current sequential algorithms can not cope with; on the contrary, parallel computing enables near real time processing of very large datasets, which are growing at an unprecedented scale. Our algorithm’s implementation, taking advantage of the MPI (Message Passing Interface) library, is portable, reliable and provides cutting–edge performance. Extensive experimental results confirm that PFDCMSS retains the extreme accuracy and error bound provided by FDCMSS whilst providing excellent parallel scalability. Our contributions are three-fold: (i) we prove the non trivial mergeability of the augmented sketch used in the FDCMSS algorithm; (ii) we derive PFDCMSS, a novel message–passing parallel algorithm; (iii) we experimentally prove that PFDCMSS is extremely accurate and scalable, allowing near real time processing of large datasets. The result supports both casual users and seasoned, professional scientists working on expert and intelligent systems.",
"We introduce a new sublinear space data structure--the count-min sketch--for summarizing data streams. Our sketch allows fundamental queries in data stream summarization such as point, range, and inner product queries to be approximately answered very quickly; in addition, it can be applied to solve several important problems in data streams such as finding quantiles, frequent items, etc. The time and space bounds we show for using the CM sketch to solve these problems significantly improve those previously known--typically from 1/ε² to 1/ε in factor.",
"Approximated algorithms are often used to estimate the frequency of items on high volume, fast data streams. The most common ones are variations of Count-Min sketch, which use sub-linear space for the count, but can produce errors in the counts of the most frequent items and can misclassify low-frequency items. In this paper, we improve the accuracy of sketch-based algorithms by increasing the frequency estimation accuracy of the most frequent items and reducing the possible misclassification of low-frequency items, while also improving the overall throughput. Our solution, called Augmented Sketch (ASketch), is based on a pre-filtering stage that dynamically identifies and aggregates the most frequent items. Items overflowing the pre-filtering stage are processed using a conventional sketch algorithm, thereby making the solution general and applicable in a wide range of contexts. The pre-filtering stage can be efficiently implemented with SIMD instructions on multi-core machines and can be further parallelized through pipeline parallelism where the filtering stage runs in one core and the sketch algorithm runs in another core.",
"Faced with handling multiple large data sets in modern data-processing settings, researchers have proposed sketch data structures that capture salient properties while occupying little memory and that update or probe quickly. In particular, the Count-Min sketch has proven effective for a variety of applications. It concurrently tracks many item counts with surprisingly strong accuracy.",
"Identifying the top-K frequent items is one of the most common and important operations in large data processing systems. As a result, several solutions have been proposed to solve this problem approximately. In this paper, we identify that in modern distributed settings with both multi-node as well as multi-core parallelism, existing algorithms, although theoretically sound, are suboptimal from the performance perspective. In particular, for identifying top-K frequent items, Count-Min Sketch (CMS) has fantastic update time but lacks the important property of reducibility which is needed for exploiting available massive data parallelism. On the other end, the popular Frequent algorithm (FA) leads to reducible summaries but the update costs are significant. In this paper, we present Topkapi, a fast and parallel algorithm for finding top-K frequent items, which gives the best of both worlds, i.e., it is reducible as well as efficient update time similar to CMS. Topkapi possesses strong theoretical guarantees and leads to significant performance gains due to increased parallelism, relative to past work. Topkapi also demonstrates the power of carefully tailored randomized algorithms accelerated over high-performance computing in obtaining disruptive speedups over distributed word counting benchmarks over the popular Spark frameworks.",
"Many real-world data stream analysis applications such as network monitoring, click stream analysis, and others require combining multiple streams of data arriving from multiple sources. This is referred to as multi-stream analysis. To deal with high stream arrival rates, it is desirable that such systems be capable of supporting very high processing throughput. The advent of multicore processors and powerful servers driven by these processors calls for efficient parallel designs that can effectively utilize the parallelism of the multicores, since performance improvement is possible only through effective parallelism. In this paper, we address the problem of parallelizing multi-stream analysis in the context of multicore processors. Specifically, we concentrate on parallelizing frequent elements, top-k, and frequency counting over multiple streams. We discuss the challenges in designing an efficient parallel system for multi-stream processing. Our evaluation and analysis reveals that traditional \"contention\" based locking results in excessive overhead and wait, which in turn leads to severe performance degradation in modern multicore architectures. Based on our analysis, we propose a \"cooperation\" based locking paradigm for efficient parallelization of frequency counting. The proposed \"cooperation\" based paradigm removes waits associated with synchronization, and allows replacing locks by much cheaper atomic synchronization primitives. Our implementation of the proposed paradigm to parallelize a well known frequency counting algorithm shows the benefits of the proposed \"cooperation\" based locking paradigm when compared to the traditional \"contention\" based locking paradigm. In our experiments, the proposed \"cooperation\" based design outperforms the traditional \"contention\" based design by a factor of 2--5.5X for synthetic zipfian data sets.",
""
]
} |
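The count-min sketch discussed in this row (@cite_9) is compact enough to sketch directly. The following is a minimal illustration, not any cited implementation: the table dimensions and the affine (2-universal-style) hash family over integer items are illustrative assumptions.

```python
import random

class CountMinSketch:
    """Count-min sketch: d rows of w counters. A point query returns the
    minimum over the d hashed counters, which overestimates the true
    frequency within a provable error bound. Items are assumed to be
    integers (hash strings to ints first)."""
    def __init__(self, width=2048, depth=4, seed=42):
        rng = random.Random(seed)
        self.p = 2**61 - 1  # a Mersenne prime for the affine hash family
        self.w, self.d = width, depth
        self.rows = [[0] * width for _ in range(depth)]
        self.coeffs = [(rng.randrange(1, self.p), rng.randrange(self.p))
                       for _ in range(depth)]

    def _hash(self, i, x):
        a, b = self.coeffs[i]
        return ((a * x + b) % self.p) % self.w

    def update(self, x, count=1):
        # Increment one counter per row.
        for i in range(self.d):
            self.rows[i][self._hash(i, x)] += count

    def query(self, x):
        # Minimum over rows: never underestimates the true count.
        return min(self.rows[i][self._hash(i, x)] for i in range(self.d))
```

A single-table layout like this is what the row above contrasts with multi-table parallel variants, where each thread keeps its own sketch and results are merged.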
1903.00729 | 2969966929 | Sketches are probabilistic data structures that can provide approximate results within mathematically proven error bounds while using orders of magnitude less memory than traditional approaches. They are tailored for streaming data analysis even on architectures with limited memory, such as single-board computers that are widely exploited for IoT and edge computing. Since these devices offer multiple cores, they can manage high volumes of data streams with efficient parallel sketching schemes. However, since their caches are relatively small, careful parallelization is required. | Although other hash functions can also be used, we employ tabulation hashing, which has recently been shown to provide good statistical properties and reported to be fast @cite_8 @cite_1. When multiple hashes of the same item are required, as is the case for many sketches, our merging technique will be useful for algorithms using tabulation hashing. | {
"cite_N": [
"@cite_1",
"@cite_8"
],
"mid": [
"2752494670",
"2963339365"
],
"abstract": [
"Hashing is a basic tool for dimensionality reduction employed in several aspects of machine learning. However, the performance analysis is often carried out under the abstract assumption that a truly random unit cost hash function is used, without concern for which concrete hash function is employed. The concrete hash function may work fine on sufficiently random input. The question is if it can be trusted in the real world when faced with more structured input. In this paper we focus on two prominent applications of hashing, namely similarity estimation with the one permutation hashing (OPH) scheme of [NIPS'12] and feature hashing (FH) of [ICML'09], both of which have found numerous applications, e.g. in approximate near-neighbour search with LSH and large-scale classification with SVM. We consider mixed tabulation hashing of [FOCS'15] which was proved to perform like a truly random hash function in many applications, including OPH. Here we first show improved concentration bounds for FH with truly random hashing and then argue that mixed tabulation performs similar for sparse input. Our main contribution, however, is an experimental comparison of different hashing schemes when used inside FH, OPH, and LSH. We find that mixed tabulation hashing is almost as fast as the multiply-mod-prime scheme ax+b mod p. Multiply-mod-prime is guaranteed to work well on sufficiently random data, but we demonstrate that in the above applications, it can lead to bias and poor concentration on both real-world and synthetic data. We also compare with the popular MurmurHash3, which has no proven guarantees. Mixed tabulation and MurmurHash3 both perform similar to truly random hashing in our experiments. However, mixed tabulation is 40% faster than MurmurHash3, and it has the proven guarantee of good performance on all possible input.",
"Randomized algorithms are often enjoyed for their simplicity, but the hash functions employed to yield the desired probabilistic guarantees are often too complicated to be practical. Here we survey recent results on how simple hashing schemes based on tabulation provide unexpectedly strong guarantees. Simple tabulation hashing dates back to Zobrist [1970]. Keys are viewed as consisting of c characters and we have precomputed character tables h1,...,hc mapping characters to random hash values. A key x = (x1,...,xc) is hashed to h1[x1] ⊕ h2[x2] ⊕ ... ⊕ hc[xc]. This scheme is very fast with character tables in cache. While simple tabulation is not even 4-independent, it does provide many of the guarantees that are normally obtained via higher independence, e.g., linear probing and Cuckoo hashing. Next we consider twisted tabulation where one character is \"twisted\" with some simple operations. The resulting hash function has powerful distributional properties: Chernoff-Hoeffding type tail bounds and a very small bias for min-wise hashing. Finally, we consider double tabulation where we compose two simple tabulation functions, applying one to the output of the other, and show that this yields very high independence in the classic framework of Carter and Wegman [1977]. In fact, w.h.p., for a given set of size proportional to that of the space consumed, double tabulation gives fully-random hashing. While these tabulation schemes are all easy to implement and use, their analysis is not. This keynote talk surveys results from the papers in the reference list."
]
} |
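The simple tabulation scheme described in the second abstract above (a key split into c characters, hashed by XORing precomputed random table entries) can be illustrated in a few lines. The 32-bit key width, byte-sized characters, and seeding below are illustrative assumptions, not details from the cited papers.

```python
import random

def make_simple_tabulation(num_chars=4, seed=1):
    """Simple tabulation hashing (Zobrist-style): view a key as c
    characters (here, the 4 bytes of a 32-bit key) and XOR one
    precomputed random table entry per character position."""
    rng = random.Random(seed)
    # One table of 256 random 32-bit values per character position.
    tables = [[rng.getrandbits(32) for _ in range(256)]
              for _ in range(num_chars)]

    def h(key):
        out = 0
        for i in range(num_chars):
            byte = (key >> (8 * i)) & 0xFF  # extract the i-th character
            out ^= tables[i][byte]
        return out

    return h
```

The tables fit comfortably in cache (4 × 256 × 4 bytes here), which is the source of the speed the abstracts report; this cache footprint is also why the row's merging remark matters when a sketch needs several independent hashes per item.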
1903.00729 | 2969966929 | Sketches are probabilistic data structures that can provide approximate results within mathematically proven error bounds while using orders of magnitude less memory than traditional approaches. They are tailored for streaming data analysis even on architectures with limited memory, such as single-board computers that are widely exploited for IoT and edge computing. Since these devices offer multiple cores, they can manage high volumes of data streams with efficient parallel sketching schemes. However, since their caches are relatively small, careful parallelization is required. | To the best of our knowledge, our work is the first cache-focused, synchronization-free, single-table CMS generation algorithm specifically tuned for limited-memory multicore architectures such as SBCs. Our techniques can also be employed for other table-based sketches such as Count Sketch @cite_14 and CMS with conservative updates. | {
"cite_N": [
"@cite_14"
],
"mid": [
"1493892051"
],
"abstract": [
"We present a 1-pass algorithm for estimating the most frequent items in a data stream using limited storage space. Our method relies on a data structure called a COUNT SKETCH, which allows us to reliably estimate the frequencies of frequent items in the stream. Our algorithm achieves better space bounds than the previously known best algorithms for this problem for several natural distributions on the item frequencies. In addition, our algorithm leads directly to a 2-pass algorithm for the problem of estimating the items with the largest (absolute) change in frequency between two data streams. To our knowledge, this latter problem has not been previously studied in the literature."
]
} |
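Count Sketch (@cite_14), mentioned above as another table-based target for these techniques, differs from CMS by attaching a random sign to each update and estimating with a median rather than a minimum, which makes the estimate unbiased. A minimal sketch follows; the affine hash family and the parameters are illustrative assumptions, not the cited paper's implementation.

```python
import random
import statistics

class CountSketch:
    """Count sketch: each row adds sign(x) ∈ {-1, +1} at position h(x);
    the frequency estimate is the median of the signed row readings.
    Items are assumed to be integers."""
    def __init__(self, width=1024, depth=5, seed=7):
        rng = random.Random(seed)
        self.p = 2**61 - 1
        self.w, self.d = width, depth
        self.rows = [[0] * width for _ in range(depth)]
        # Separate illustrative affine hashes for position and sign.
        self.hc = [(rng.randrange(1, self.p), rng.randrange(self.p))
                   for _ in range(depth)]
        self.sc = [(rng.randrange(1, self.p), rng.randrange(self.p))
                   for _ in range(depth)]

    def _pos(self, i, x):
        a, b = self.hc[i]
        return ((a * x + b) % self.p) % self.w

    def _sign(self, i, x):
        a, b = self.sc[i]
        return 1 if ((a * x + b) % self.p) % 2 == 0 else -1

    def update(self, x, count=1):
        for i in range(self.d):
            self.rows[i][self._pos(i, x)] += self._sign(i, x) * count

    def query(self, x):
        # Unsigning each row reading, then taking the median, cancels
        # collision noise in expectation.
        return int(statistics.median(
            self._sign(i, x) * self.rows[i][self._pos(i, x)]
            for i in range(self.d)))
```

Because updates and queries touch the same d-row table layout as CMS, a cache-focused single-table parallelization of the kind the row describes would apply here with the same access pattern.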
1903.00948 | 2918797311 | Motion planning under uncertainty for an autonomous system can be formulated as a Markov Decision Process. In this paper, we propose a solution to this decision-theoretic planning problem using a continuous approximation of the underlying discrete value function and leveraging finite element methods. This approach allows us to obtain an accurate and continuous form of the value function even with a small number of states from a very low-resolution state space. We achieve this by taking advantage of the second-order Taylor expansion to approximate the value function, where the value function is modeled as a boundary-conditioned partial differential equation which can be naturally solved using a finite element method. We have validated our approach via extensive simulations, and the evaluations reveal that our solution provides continuous value functions, leading to better path results in terms of path smoothness, travel distance and time costs, even with a smaller state space. | In addition to formulating motion uncertainty via MDPs, environmental uncertainty should also be integrated into planning. Environmental model uncertainty has been incorporated into stochastic planning algorithms such as Rapidly-exploring Random Trees (RRT) @cite_28. Prior knowledge of the motion model can be leveraged in a linear-quadratic Gaussian controller to account for other sources of uncertainty @cite_8. | {
"cite_N": [
"@cite_28",
"@cite_8"
],
"mid": [
"2106716938",
"2002201291"
],
"abstract": [
"The ability of mobile robots to generate feasible trajectories online is an important requirement for their autonomous operation in unstructured environments. Many path generation techniques focus on generation of time- or distance-optimal paths while obeying dynamic constraints, and often assume precise knowledge of robot and or environmental (i.e. terrain) properties. In uneven terrain, it is essential that the robot mobility over the terrain be explicitly considered in the planning process. Further, since significant uncertainty is often associated with robot and or terrain parameter knowledge, this should also be accounted for in a path generation algorithm. Here, extensions to the rapidly exploring random tree (RRT) algorithm are presented that explicitly consider robot mobility and robot parameter uncertainty based on the stochastic response surface method (SRSM). Simulation results suggest that the proposed approach can be used for generating safe paths on uncertain, uneven terrain.",
"In this paper we present LQG-MP (linear-quadratic Gaussian motion planning), a new approach to robot motion planning that takes into account the sensors and the controller that will be used during the execution of the robot's path. LQG-MP is based on the linear-quadratic controller with Gaussian models of uncertainty, and explicitly characterizes in advance (i.e. before execution) the a priori probability distributions of the state of the robot along its path. These distributions can be used to assess the quality of the path, for instance by computing the probability of avoiding collisions. Many methods can be used to generate the required ensemble of candidate paths from which the best path is selected; in this paper we report results using rapidly exploring random trees (RRT). We study the performance of LQG-MP with simulation experiments in three scenarios: (A) a kinodynamic car-like robot, (B) multi-robot planning with differential-drive robots, and (C) a 6-DOF serial manipulator. We also present a method that applies Kalman smoothing to make paths C^k-continuous and apply LQG-MP to precomputed roadmaps using a variant of Dijkstra's algorithm to efficiently find high-quality paths."
]
} |
1903.00948 | 2918797311 | Motion planning under uncertainty for an autonomous system can be formulated as a Markov Decision Process. In this paper, we propose a solution to this decision-theoretic planning problem using a continuous approximation of the underlying discrete value function and leveraging finite element methods. This approach allows us to obtain an accurate and continuous form of the value function even with a small number of states from a very low-resolution state space. We achieve this by taking advantage of the second-order Taylor expansion to approximate the value function, where the value function is modeled as a boundary-conditioned partial differential equation which can be naturally solved using a finite element method. We have validated our approach via extensive simulations, and the evaluations reveal that our solution provides continuous value functions, leading to better path results in terms of path smoothness, travel distance and time costs, even with a smaller state space. | To extend discrete-time actions to continuous time, Semi-Markov Decision Processes (SMDPs) @cite_10 @cite_27 have been designed based on the temporal-abstraction concept so that actions can be performed in a more flexible manner. However, without relying on function approximation, the basic form of the state space is still assumed to be discrete and tabular. Though the framework of continuous stochastic control can provide a continuous value function and continuous actions, it requires Brownian-motion-driven dynamics @cite_16. Nevertheless, the Hamilton-Jacobi-Bellman equation for infinite-horizon stochastic control shares a similar diffusion-type equation with our approach. | {
"cite_N": [
"@cite_27",
"@cite_16",
"@cite_10"
],
"mid": [
"2119567691",
"1586251222",
"2109910161"
],
"abstract": [
"From the Publisher: The past decade has seen considerable theoretical and applied research on Markov decision processes, as well as the growing use of these models in ecology, economics, communications engineering, and other fields where outcomes are uncertain and sequential decision-making processes are needed. A timely response to this increased activity, Martin L. Puterman's new work provides a uniquely up-to-date, unified, and rigorous treatment of the theoretical, computational, and applied research on Markov decision process models. It discusses all major research directions in the field, highlights many significant applications of Markov decision processes models, and explores numerous important topics that have previously been neglected or given cursory coverage in the literature. Markov Decision Processes focuses primarily on infinite horizon discrete time models and models with discrete time spaces while also examining models with arbitrary state spaces, finite horizon models, and continuous-time discrete state models. The book is organized around optimality criteria, using a common framework centered on the optimality (Bellman) equation for presenting results. The results are presented in a \"theorem-proof\" format and elaborated on through both discussion and examples, including results that are not available in any other book. A two-state Markov decision process model, presented in Chapter 3, is analyzed repeatedly throughout the book and demonstrates many results and algorithms. Markov Decision Processes covers recent research advances in such areas as countable state space models with average reward criterion, constrained models, and models with risk sensitive optimality criteria. It also explores several topics that have received little or no attention in other books, including modified policy iteration, multichain models with average reward criterion, and sensitive optimality. 
In addition, a Bibliographic Remarks section in each chapter comments on relevant historic",
"",
"Learning, planning, and representing knowledge at multiple levels of temporal abstraction are key, longstanding challenges for AI. In this paper we consider how these challenges can be addressed within the mathematical framework of reinforcement learning and Markov decision processes (MDPs). We extend the usual notion of action in this framework to include options—closed-loop policies for taking action over a period of time. Examples of options include picking up an object, going to lunch, and traveling to a distant city, as well as primitive actions such as muscle twitches and joint torques. Overall, we show that options enable temporally abstract knowledge and action to be included in the reinforcement learning framework in a natural and general way. In particular, we show that options may be used interchangeably with primitive actions in planning methods such as dynamic programming and in learning methods such as Q-learning. Formally, a set of options defined over an MDP constitutes a semi-Markov decision process (SMDP), and the theory of SMDPs provides the foundation for the theory of options. However, the most interesting issues concern the interplay between the underlying MDP and the SMDP and are thus beyond SMDP theory. We present results for three such cases: 1) we show that the results of planning with options can be used during execution to interrupt options and thereby perform even better than planned, 2) we introduce new intra-option methods that are able to learn about an option from fragments of its execution, and 3) we propose a notion of subgoal that can be used to improve the options themselves. All of these results have precursors in the existing literature; the contribution of this paper is to establish them in a simpler and more general setting with fewer changes to the existing reinforcement learning framework. 
In particular, we show that these results can be obtained without committing to (or ruling out) any particular approach to state abstraction, hierarchy, function approximation, or the macro-utility problem."
]
} |
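The tabular, discrete value function that this row's paper contrasts with its continuous finite-element approximation can be made concrete with plain value iteration on a toy grid MDP. The grid size, the slip-based motion-uncertainty model, and the rewards below are illustrative assumptions, not the paper's setup.

```python
def value_iteration(n=5, goal=(4, 4), gamma=0.95, p_intended=0.8,
                    step_cost=-1.0, goal_reward=10.0, tol=1e-6):
    """Tabular value iteration on an n-by-n grid: each move succeeds
    with probability p_intended and otherwise slips to one of the other
    three directions uniformly (a simple motion-uncertainty model)."""
    moves = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    states = [(r, c) for r in range(n) for c in range(n)]
    V = {s: 0.0 for s in states}

    def step(s, m):
        r, c = s[0] + m[0], s[1] + m[1]
        return (r, c) if 0 <= r < n and 0 <= c < n else s  # walls block

    while True:
        delta = 0.0
        for s in states:
            if s == goal:
                v_new = goal_reward  # absorbing goal state
            else:
                # Bellman backup: best action under the slip model.
                v_new = max(
                    sum((p_intended if m == a else (1 - p_intended) / 3)
                        * (step_cost + gamma * V[step(s, m)])
                        for m in moves)
                    for a in moves)
            delta = max(delta, abs(v_new - V[s]))
            V[s] = v_new
        if delta < tol:
            return V
```

The resulting V is defined only at grid points and is piecewise constant between them, which is exactly the limitation the row's continuous, PDE-based approximation targets: a coarse grid here yields jagged value surfaces and correspondingly jagged paths.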
1903.00745 | 2919979355 | We study robot construction problems where multiple autonomous robots rearrange stacks of prefabricated blocks to build stable structures. These problems are challenging due to ramifications of actions, true concurrency, and requirements of supportedness of blocks by other blocks and stability of the structure at all times. We propose a formal hybrid planning framework to solve a wide range of robot construction problems, based on Answer Set Programming. This framework not only decides for a stable final configuration of the structure, but also computes the order of manipulation tasks for multiple autonomous robots to build the structure from an initial configuration, while simultaneously ensuring the stability, supportedness and other desired properties of the partial construction at each step of the plan. We prove the soundness and completeness of our formal method with respect to these properties. We introduce a set of challenging robot construction benchmark instances, including bridge building and stack overhanging scenarios, discuss the usefulness of our framework over these instances, and demonstrate the applicability of our method using a bimanual Baxter robot. | The well-known blocks world problems @cite_22 have been widely studied by the AI community; the problem is proven to be NP-complete for polynomially bounded plans @cite_28. Blocks world problems are quite restricted compared to robot construction problems since, in proposing the problem, Winograd's interest was in language rather than in construction. For instance, the blocks world deals with identical blocks and allows a block to be placed on a flat surface or on another block, but not on multiple blocks as necessitated by robot construction problems. It does not allow manipulation of subassemblies, use of counterweights and scaffolds, or concurrent placement of blocks, either.
Also, there is no consideration of feasibility checks to ensure the stability of the stack at each step of a plan. | {
"cite_N": [
"@cite_28",
"@cite_22"
],
"mid": [
"2054497239",
"2005814556"
],
"abstract": [
"Abstract In this paper, we show that in the best-known version of the blocks world (and several related versions), planning is difficult, in the sense that finding an optimal plan is NP-hard. However, the NP-hardness is not due to deleted-condition interactions, but instead due to a situation which we call a deadlock. For problems that do not contain deadlocks, there is a simple hill-climbing strategy that can easily find an optimal plan, regardless of whether or not the problem contains any deleted-condition interactions. The above result is rather surprising, since one of the primary roles of the blocks world in the planning literature has been to provide examples of deleted-condition interactions such as creative destruction and Sussman's anomaly. However, we can explain why deadlocks are hard to handle in terms of a domain-independent goal interaction which we call an enabling-condition interaction, in which an action invoked to achieve one goal has a side-effect of making it easier to achieve other goals. If different actions have different useful side-effects, then it can be difficult to determine which set of actions will produce the best plan.",
"Abstract This paper describes a computer system for understanding English. The system answers questions, executes commands, and accepts information in an interactive English dialog. It is based on the belief that in modeling language understanding, we must deal in an integrated way with all of the aspects of language—syntax, semantics, and inference. The system contains a parser, a recognition grammar of English, programs for semantic analysis, and a general problem solving system. We assume that a computer cannot deal reasonably with language unless it can understand the subject it is discussing. Therefore, the program is given a detailed model of a particular domain. In addition, the system has a simple model of its own mentality. It can remember and discuss its plans and actions as well as carrying them out. It enters into a dialog with a person, responding to English sentences with actions and English replies, asking for clarification when its heuristic programs cannot understand a sentence through the use of syntactic, semantic, contextual, and physical knowledge. Knowledge in the system is represented in the form of procedures, rather than tables of rules or lists of patterns. By developing special procedural representations for syntax, semantics, and inference, we gain flexibility and power. Since each piece of knowledge can be a procedure, it can call directly on any other piece of knowledge in the system."
]
} |
1903.00745 | 2919979355 | We study robot construction problems where multiple autonomous robots rearrange stacks of prefabricated blocks to build stable structures. These problems are challenging due to ramifications of actions, true concurrency, and requirements of supportedness of blocks by other blocks and stability of the structure at all times. We propose a formal hybrid planning framework to solve a wide range of robot construction problems, based on Answer Set Programming. This framework not only decides for a stable final configuration of the structure, but also computes the order of manipulation tasks for multiple autonomous robots to build the structure from an initial configuration, while simultaneously ensuring the stability, supportedness and other desired properties of the partial construction at each step of the plan. We prove the soundness and completeness of our formal method with respect to these properties. We introduce a set of challenging robot construction benchmark instances, including bridge building and stack overhanging scenarios, discuss the usefulness of our framework over these instances, and demonstrate the applicability of our method using a bimanual Baxter robot. | Later, Fahlman @cite_29 has introduced a set of robot construction problems where the goal is for a robot to build specified structures out of simple blocks of different shapes and sizes. These problems allow incorporation of subassemblies into the final design, and the use of extra blocks as temporary supports or counterweights during construction; they also consider collisions of blocks and instability of the structures, but not motion planning. Since Fahlman's main interest was in maximizing common sense (rather than soundness, completeness or optimality), he implemented a planning system guided with heuristics to solve some of these problems. These problems have not been investigated with a formal approach since then. | {
"cite_N": [
"@cite_29"
],
"mid": [
"2147072697"
],
"abstract": [
"This paper describes BUILD, a computer program which generates plans for building specified structures out of simple objects such as toy blocks. A powerful heuristic control structure enables BUILD to use a number of sophisticated construction techniques in its plans. Among these are the incorporation of pre-existing structure into the final design, pre-assembly of movable sub-structures on the table, and the use of extra blocks as temporary supports and counterweights in the course of the construction. BUILD does its planning in a modeled 3-space in which blocks of various shapes and sizes can be represented in any orientation and location. The modeling system can maintain several world models at once, and contains modules for displaying states, testing them for inter-object contact and collision, and for checking the stability of complex structures involving frictional forces. Suggestions are included for the extension of BUILD-like systems to other domains. Also discussed are the merits of BUILD's implementation language, conniver, for this type of problem solving."
]
} |
1903.00745 | 2919979355 | We study robot construction problems where multiple autonomous robots rearrange stacks of prefabricated blocks to build stable structures. These problems are challenging due to ramifications of actions, true concurrency, and requirements of supportedness of blocks by other blocks and stability of the structure at all times. We propose a formal hybrid planning framework to solve a wide range of robot construction problems, based on Answer Set Programming. This framework not only decides for a stable final configuration of the structure, but also computes the order of manipulation tasks for multiple autonomous robots to build the structure from an initial configuration, while simultaneously ensuring the stability, supportedness and other desired properties of the partial construction at each step of the plan. We prove the soundness and completeness of our formal method with respect to these properties. We introduce a set of challenging robot construction benchmark instances, including bridge building and stack overhanging scenarios, discuss the usefulness of our framework over these instances, and demonstrate the applicability of our method using a bimanual Baxter robot. | Toussaint @cite_8 has utilized stability checks for building some tallest stable tower from a set of unlabeled cylinders and blocks; no goal condition is specified. His method applies a restricted version of task planning to decide for the order of manipulation actions, based on simple Strips operators and Monte Carlo tree search, and considers a restricted form of stability check that depends on whether the objects are placed on support areas of other objects. Due to these restrictions, his method is limited to building towers with sequential plans. | {
"cite_N": [
"@cite_8"
],
"mid": [
"2403069916"
],
"abstract": [
"We consider problems of sequential robot manipulation (aka. combined task and motion planning) where the objective is primarily given in terms of a cost function over the final geometric state, rather than a symbolic goal description. In this case we should leverage optimization methods to inform search over potential action sequences. We propose to formulate the problem holistically as a 1st- order logic extension of a mathematical program: a non-linear constrained program over the full world trajectory where the symbolic state-action sequence defines the (in-)equality constraints. We tackle the challenge of solving such programs by proposing three levels of approximation: The coarsest level introduces the concept of the effective end state kinematics, parametrically describing all possible end state configurations conditional to a given symbolic action sequence. Optimization on this level is fast and can inform symbolic search. The other two levels optimize over interaction keyframes and eventually over the full world trajectory across interactions. We demonstrate the approach on a problem of maximizing the height of a physically stable construction from an assortment of boards, cylinders and blocks."
]
} |
1903.00745 | 2919979355 | We study robot construction problems where multiple autonomous robots rearrange stacks of prefabricated blocks to build stable structures. These problems are challenging due to ramifications of actions, true concurrency, and requirements of supportedness of blocks by other blocks and stability of the structure at all times. We propose a formal hybrid planning framework to solve a wide range of robot construction problems, based on Answer Set Programming. This framework not only decides for a stable final configuration of the structure, but also computes the order of manipulation tasks for multiple autonomous robots to build the structure from an initial configuration, while simultaneously ensuring the stability, supportedness and other desired properties of the partial construction at each step of the plan. We prove the soundness and completeness of our formal method with respect to these properties. We introduce a set of challenging robot construction benchmark instances, including bridge building and stack overhanging scenarios, discuss the usefulness of our framework over these instances, and demonstrate the applicability of our method using a bimanual Baxter robot. | Note that for sophisticated constructions that involve temporary scaffolding, counterweights, and subassemblies, it is required to express ramifications of actions as well as true concurrency. However, expressing ramifications directly by simple Strips operators is not possible [Theorem 3] thiebauxHN05 due to lack of logical inference. Also, expressing true concurrency is not possible unless the description is extended with exponential number of new operators, where each operator characterizes a concurrent action. Due to these theoretical results, other studies @cite_27 @cite_13 @cite_44 that rely on simple Strips operators, do not present general methods for such sophisticated constructions either. 
It is important to note that these methods do not cover sophisticated structures, like bridges or overhangs, since objects are not necessarily placed on support areas of other objects. Such sophisticated structures require definition of transitive closure to ensure supportedness or connectedness. Transitive closure is not definable in first-order logic [Theorem 5] Fagin75 ; it is not directly supported by Strips either @cite_25 . | {
"cite_N": [
"@cite_44",
"@cite_27",
"@cite_13",
"@cite_25"
],
"mid": [
"2739341730",
"2025908595",
"1584484540",
"2134141465"
],
"abstract": [
"Joint symbolic and geometric planning is one of the core challenges in robotics. We address the problem of multi-agent cooperative manipulation, where we aim for jointly optimal paths for all agents and over the full manipulation sequence. This joint optimization problem can be framed as a logic-geometric program. Existing solvers lack several features (such as consistently handling kinematic switches) and efficiency to handle the cooperative manipulation domain. We propose a new approximate solver scheme, combining ideas from branch-and-bound and MCTS and exploiting multiple levels of bounds to better direct the search. We demonstrate the method in a scenario where a Baxter robot needs to help a human to reach for objects.",
"On the path to full autonomy, robotic agents have to learn how to manipulate their environments for their benefit. In particular, the ability to design structures that are functional in overcoming challenges is imperative. The problem of automated design of functional structures (ADFS) addresses the question of whether the objects in the environment can be placed in a useful configuration. In this work, we first make the observation that the ADFS problem represents a class of problems in high dimensional, continuous spaces that can be broken down into simpler subproblems with semantically meaningful actions. Next, we propose a framework where discrete actions that induce constraints can partition the solution space effectively. Subsequently, we solve the original class of problems by searching over the available actions, where the evaluation criteria for the search is the feasibility test of the accumulated constraints. We prove that with a sound feasibility test, our algorithm is complete. Additionally, we argue that a convexity requirement on the constraints leads to significant efficiency gains. Finally, we present successful results to the ADFS problem.",
"We present an algorithm that enables a humanoid robot to reason about its environment and use the available objects to build bridges, stairs and lever-fulcrum systems. Facing a challenge that is otherwise intractable, such as climbing a height or moving a heavy object, the proposed planner reasons about the physical limitations of the robot to design functional structures. Inducing constraints on the space of possible designs within a classical planning framework, the algorithm outputs feasible structures that can be used towards accomplishing goals. We present results in dynamic simulation with Golem Hubo, walking on a bridge to cross a hazardous area, and in real-world with Golem Krang, overturning 100 kg loads and pushing 240 kg obstacles.",
"There is controversy as to whether explicit support for pddl-like axioms and derived predicates is needed for planners to handle real-world domains effectively. Many researchers have deplored the lack of precise semantics for such axioms, while others have argued that it might be best to compile them away. We propose an adequate semantics for pddl axioms and show that they are an essential feature by proving that it is impossible to compile them away if we restrict the growth of plans and domain descriptions to be polynomial. These results suggest that adding a reasonable implementation to handle axioms inside the planner is beneficial for the performance. Our experiments confirm this suggestion."
]
} |
1903.00745 | 2919979355 | We study robot construction problems where multiple autonomous robots rearrange stacks of prefabricated blocks to build stable structures. These problems are challenging due to ramifications of actions, true concurrency, and requirements of supportedness of blocks by other blocks and stability of the structure at all times. We propose a formal hybrid planning framework to solve a wide range of robot construction problems, based on Answer Set Programming. This framework not only decides for a stable final configuration of the structure, but also computes the order of manipulation tasks for multiple autonomous robots to build the structure from an initial configuration, while simultaneously ensuring the stability, supportedness and other desired properties of the partial construction at each step of the plan. We prove the soundness and completeness of our formal method with respect to these properties. We introduce a set of challenging robot construction benchmark instances, including bridge building and stack overhanging scenarios, discuss the usefulness of our framework over these instances, and demonstrate the applicability of our method using a bimanual Baxter robot. | In automated manufacturing, assembly plans aim to determine the proper order of assembly operations to build a coherent object. During assembly planning, the goal configuration is well-defined and the problem is generally approached by starting with the goal configuration and working backwards to disassemble all parts. Object stability has also been considered within this context @cite_49 @cite_32 @cite_15 @cite_58 @cite_42 @cite_38 @cite_33 . The assembly planning problem is significantly different from the robotic construction problems: on the one hand, it allows assembly of irregular objects; on the other hand, the goal configuration is pre-determined and solutions are commonly restricted to monotone plans. | {
"cite_N": [
"@cite_38",
"@cite_33",
"@cite_42",
"@cite_32",
"@cite_49",
"@cite_15",
"@cite_58"
],
"mid": [
"2151075362",
"2521537405",
"1967234152",
"2040110103",
"",
"2044271772",
"1969196471"
],
"abstract": [
"The aim of object pile deconstruction is to safely remove elements one by one without compromising stability. The number of combinations of removal sequences increases dramatically with the number of objects and thus testing every combination is intractable in practical scenarios. We model the deconstruction sequencing problem using a disassembly graph, and investigate and discuss search strategies for discovery of stable sequences in an architectural context. We run and compare techniques in a large-scale experiment, on various virtual scenes of architectural models composed of different shapes, sizes and number of elements.",
"Purpose: The purpose of this paper is to develop a planner for finding an optimal assembly sequence for robots to assemble objects. Each manipulated object in the optimal sequence is stable during assembly. They are easy to grasp and robust to motion uncertainty. Design/methodology/approach: The input to the planner is the mesh models of the objects, the relative poses between the objects in the assembly and the final pose of the assembly. The output is an optimal assembly sequence, namely, in which order should one assemble the objects, from which directions should the objects be dropped and candidate grasps of each object. The proposed planner finds the optimal solution by automatically permuting, evaluating and searching the possible assembly sequences considering stability, graspability and assemblability qualities. Findings: The proposed planner could plan an optimal sequence to guide robots to do assembly using translational motion. The sequence provides initial and goal configurations to motion planning algorithms and is ready to be used by robots. The usefulness of the proposed method is verified by both simulation and real-world executions. Originality/value: The paper proposes an assembly planner which can find an optimal assembly sequence automatically without teaching of the assembly orders and directions by skilled human technicians. The planner is highly expected to improve teachingless robotic manufacturing.",
"High-level assembly planning systems generate plans for the automated assembly of mechanical products by robots. The sequences to be generated underlie several physical and geometrical constraints, and in addition have to be efficient to increase productivity. The challenges still facing the field are to develop efficient and robust analysis tools, and to develop planners capable of finding optimal or near-optimal sequences rather than just feasible sequences. The presented high level assembly planning system High LAP automatically considers physical and geometrical constraints to generate and to evaluate stable assembly sequences. In this paper we propose a relational assembly model including a CAD description and the specification of features and relations of the assembly components. We use an optional specification of an arbitrary hierarchy of assemblies to speed up and guide the generation of sequences. High LAP evaluates all feasible assembly sequences considering several criteria like separability and manipulability of the generated (sub)assemblies. Furthermore, the necessity of reorientation for a mating operation and parallelism during plan execution are considered. Another important criterion is the stability of the generated (sub)assemblies. Most of the assembly planners developed up to date use heuristical or user-defined criteria to determine assembly stability for plan evaluation. The presented system is the first assembly planning system which automatically determines the range of all stable orientations of an assembly for plan evaluation. Therefore, we introduce a stability metric and an algorithm to calculate all stable orientations of an assembly considering friction. Experimental results are presented to demonstrate the efficiency of our assembly planning system.",
"This paper presents the application of geometric reasoning to the automatic construction of an assembly partial order from an attributed liaison graph representation of an assembly. The construction is based on the principle of assembly by disassembly and on the extraction of preferred subassemblies. On the basis of accessibility and manipulability criteria, the system first decomposes the given assembly into clusters of mutually inseparable parts. An abstract liaison graph is then generated wherein each cluster of mutually inseparable parts is represented as a supernode. A set of subassemblies is then generated by decomposing the abstract liaison graph into subgraphs, verifying the disassemblability of individual subgraphs, and applying the criteria for selecting preferred subassemblies to the disassemblable subgraphs. The recursive application of this process to the selected subassemblies results in a Hierarchical Partial-Order Graph (HPOG). A HPOG not only provides the precedence relations among assembly operations but also presents the temporal and spatial parallelism for implementing distributed and cooperative assembly. The system is organized under the “cooperative problem solving (CPS)” paradigm.",
"",
"This article describes a fully functional autonomous system with two cooperating robots that disassembles complex Duplo structures in a restricted environment. The system operates on a table-top world with any number of Duplo structures for which object models have been given. The structures are disassembled down to individual parts, using basic operations of single part removal, object partitioning, and the addition of stabilizing supports to structures that would otherwise fall over. All aspects are automatically planned, including operation selection, path and grasp planning, and simultaneous cooperative robot motion. Operations are chosen so as to not create unstable structures and to not risk breakage in areas of low structural integrity. Overall planning is done with a process mechanism that heuristically generates efficient disassembly sequences, without searching a space of all possible operations. Several examples of actual system operation, using real robots, are presented.",
"In which order can a product be assembled or disassembled? How many hands are required? How many degrees of freedom? What parts should be withdrawn to allow the removal of a specified subassembly? To answer such questions automatically, important theoretical issues in geometric reasoning must be addressed. This paper investigates the planning of assembly algorithms specifying (dis)assembly operations on the components of a product and the ordering of these operations. It also presents measures to evaluate the complexity of these algorithms and techniques to estimate the inherent complexity of a product. The central concept underlying these planning and complexity evaluation techniques is that of a “non-directional blocking graph”, a qualitative representation of the internal structure of an assembly product. This representation describes the combinatorial set of parts interactions in polynomial space. It is obtained by identifying physical criticalities where geometric interferences among parts change. It is generated from an input geometric description of the product. The main application considered in the paper is the creation of smart environments to help designers create products that are easier to manufacture and service. Other possible applications include planning for rapid prototyping and autonomous robots."
]
} |
1903.00745 | 2919979355 | We study robot construction problems where multiple autonomous robots rearrange stacks of prefabricated blocks to build stable structures. These problems are challenging due to ramifications of actions, true concurrency, and requirements of supportedness of blocks by other blocks and stability of the structure at all times. We propose a formal hybrid planning framework to solve a wide range of robot construction problems, based on Answer Set Programming. This framework not only decides for a stable final configuration of the structure, but also computes the order of manipulation tasks for multiple autonomous robots to build the structure from an initial configuration, while simultaneously ensuring the stability, supportedness and other desired properties of the partial construction at each step of the plan. We prove the soundness and completeness of our formal method with respect to these properties. We introduce a set of challenging robot construction benchmark instances, including bridge building and stack overhanging scenarios, discuss the usefulness of our framework over these instances, and demonstrate the applicability of our method using a bimanual Baxter robot. | Geometric rearrangement with multiple movable objects and its variations (like navigation among movable obstacles @cite_63 @cite_20 ) have been studied in literature. Since even a simplified variant with only one movable obstacle has been proved to be NP-hard @cite_0 @cite_50 , many studies introduce several important restrictions to the problem, like monotonicity of plans @cite_23 @cite_31 @cite_52 @cite_9 @cite_37 @cite_35 @cite_3 . While a few can handle nonmonotone plans @cite_12 @cite_53 ; these studies do not allow stacking either. Recently, @cite_56 study rearrangement of objects in stack-like containers (by pushes and pops); these problems do not require stability checks. | {
"cite_N": [
"@cite_35",
"@cite_37",
"@cite_9",
"@cite_53",
"@cite_52",
"@cite_3",
"@cite_56",
"@cite_0",
"@cite_50",
"@cite_23",
"@cite_63",
"@cite_31",
"@cite_12",
"@cite_20"
],
"mid": [
"2418783521",
"2139039898",
"2141753064",
"2015241127",
"1989604857",
"2293698677",
"",
"2044725657",
"1999708036",
"2109447584",
"2540258482",
"",
"2087191078",
"2104991009"
],
"abstract": [
"Manipulating multiple movable obstacles is a hard problem that involves searching high-dimensional C-spaces. A milestone method for this problem was able to compute solutions for monotone instances. These are problems where every object needs to be transferred at most once to achieve a desired arrangement. The method uses backtracking search to find the order with which objects should be moved. This paper first proposes an approximate but significantly faster alternative for monotone rearrangement instances. The method defines a dependency graph between objects given minimum constraint removal paths (MCR) to transfer each object to its target. From this graph, the approach discovers the order of moving objects by performing topological sorting without backtracking search. The approximation arises from the limitation to consider only MCR paths, which minimize, however, the number of conflicts between objects. To solve non-monotone instances, this primitive is incorporated in a higher-level incremental search algorithm for general rearrangement planning, which operates similar to Bi-RRT. Given a start and a goal object arrangement, tree structures of reachable new arrangements are generated by using the primitive as an expansion procedure. The integrated solution achieves probabilistic completeness for the general non-monotone case and based on simulated experiments it achieves very good success ratios, solution times and path quality relative to alternatives.",
"In this paper, we describe a planner for a humanoid robot that is capable of finding a path in an environment with movable objects, whereas previous motion planner only deals with an environment with fixed objects. We address an environment manipulation problem for a humanoid robot that finds a walking path from the given start location to the goal location while displacing obstructing objects on the walking path. This problem requires more complex configuration space than previous researches using a mobile robot especially in a manipulation phase, since a humanoid robot has many degrees of freedom in its arm than a forklift type robot. Our approach is to build environment manipulation task graph that decompose the given task into subtasks which are solved using navigation path planner or whole body motion planner. We also propose a standing location search and a displacing obstacle location search for connecting subtasks. Efficient method to solve manipulation planning that relies on whole body inverse kinematics and motion planning technology is also shown. Finally, we show experimental results in an environment with movable objects such as chairs and trash boxes. The planner finds an action sequence consists of walking paths and manipulating obstructing objects to walk from the start position to the goal position.",
"We present DARRT, a sampling-based algorithm for planning with multiple types of manipulation. Given a robot, a set of movable objects, and a set of actions for manipulating the objects, DARRT returns a sequence of manipulation actions that move the robot and objects from an initial configuration to a final configuration. The manipulation actions may be non-prehensile, meaning that the object is not rigidly attached to the robot, such as push, tilt, or pull. We describe a simple extension to the RRT algorithm to search the combined space of robot and objects and present an implementation of DARRT on the Willow Garage PR2 robot.",
"This work proposes a method for efficiently computing manipulation paths to rearrange similar objects in a cluttered space. Rearrangement is a challenging problem as it involves combinatorially large, continuous configuration spaces due to the presence of multiple bodies and kinematically complex manipulators. This work leverages ideas from multi-robot motion planning and manipulation planning to propose appropriate graphical representations for this challenge. These representations allow to quickly reason whether manipulation paths allow the transition between entire sets of object arrangements without having to explicitly store these arrangements. The proposed method also takes advantage of precomputation given a manipulation roadmap for transferring a single object in the space. The approach is evaluated in simulation for a realistic model of a Baxter robot and executed on the real system, showing that the method solves complex instances and is promising in terms of scalability and success ratio.",
"This paper presents the resolve spatial constraints (RSC) algorithm for manipulation planning in a domain with movable obstacles. Empirically we show that our algorithm quickly generates plans for simulated articulated robots in a highly nonlinear search space of exponential dimension. RSC is a reverse-time search that samples future robot actions and constrains the space of prior object displacements. To optimize the efficiency of RSC, we identify methods for sampling object surfaces and generating connecting paths between grasps and placements. In addition to experimental analysis of RSC, this paper looks into object placements and task-space motion constraints among other unique features of the three dimensional manipulation planning domain.",
"",
"",
"Motion planning algorithms have generally dealt with motion in a static environment, or more recently, with motion in an environment that changes in a known manner. We consider the problem of finding collision-free motions in a changeable environment. That is, we wish to find a motion for an object where the object is permitted to move some of the obstacles. In such an environment the final positions of the movable obstacles may or may not be part of the goal. In the case where the final positions of the obstacles are unspecified, the motion planning problem is shown to be NP-hard. An algorithm that runs in O ( n 2 log n ) time after O ( n 3 log 2 n ) preprocessing time is presented when the object to be moved is polygonal and there is only one movable polygonal obstacle in a polygonal environment of complexity O ( n ). In the case where the final positions of the obstacles are specified the general problem is shown to be PSPACE-hard and an algorithm is given when there is one movable obstacle with the same preprocessing time as the previous algorithm but with O ( n 2 ) query time.",
"We prove NP-hardness of a wide class of pushing-block puzzles similar to the classic Sokoban, generalizing several previous results [E.D. , in: Proc. 12th Canad. Conf. Comput. Geom., 2000, pp. 211-219; E.D. , Technical Report, January 2000; A Dhagat, J. O'Rourke, in: Proc. 4th Canad. Conf. Comput. Geom., 1992, pp. 188-191; D. Dor, U. Zwick, Computational Geometry 13 (4) (1999) 215-228; J. O'Rourke, Technical Report, November 1999; G. Wilfong, Ann. Math. Artif. Intell. 3 (1991) 131-150]. The puzzles consist of unit square blocks on an integer lattice; all blocks are movable. The robot may move horizontally and vertically in order to reach a specified goal position. The puzzle variants differ in the number of blocks that the robot can push at once, ranging from at most one (PUSH-1) up to arbitrarily many (PUSH-*). Other variations were introduced to make puzzles more tractable, in which blocks must slide their maximal extent when pushed (PUSHPUSH), and in which the robot's path must not revisit itself (PUSH-X). We prove that all of these puzzles are NP-hard.",
"We present a novel planning algorithm for the problem of placing objects on a cluttered surface such as a table, counter or floor. The planner (1) selects a placement for the target object and (2) constructs a sequence of manipulation actions that create space for the object. When no continuous space is large enough for direct placement, the planner leverages means-end analysis and dynamic simulation to find a sequence of linear pushes that clears the necessary space. Our heuristic for determining candidate placement poses for the target object is used to guide the manipulation search. We show successful results for our algorithm in simulation.",
"In this paper, we address the problem of navigation among movable obstacles (NAMO): a practical extension to navigation for humanoids and other dexterous mobile robots. The robot is permitted to reconfigure the environment by moving obstacles and clearing free space for a path. Simpler problems have been shown to be P-SPACE hard. For real-world scenarios with large numbers of movable obstacles, complete motion planning techniques are largely intractable. This paper presents a resolution complete planner for a subclass of NAMO problems. Our planner takes advantage of the navigational structure through state-space decomposition and heuristic search. The planning complexity is reduced to the difficulty of the specific navigation task, rather than the dimensionality of the multi-object domain. We demonstrate real-time results for spaces that contain large numbers of movable obstacles. We also present a practical framework for single-agent search that can be used in algorithmic reasoning about this domain.",
"",
"We introduce a novel computational method for geometric rearrangement of multiple movable objects on a cluttered surface, where objects can change locations more than once by pick and/or push actions. This method consists of four stages: (i) finding tentative collision-free final configurations for all objects (all the new objects together with all other objects in the clutter) while also trying to minimize the number of object relocations, (ii) gridization of the continuous plane for a discrete placement of the initial configurations and the tentative final configurations of objects on the cluttered surface, (iii) finding a sequence of feasible pick and push actions to achieve the final discrete placement for the objects in the clutter from their initial discrete place, while simultaneously minimizing the number of object relocations, and (iv) finding feasible final configurations for all objects according to the optimal task plan calculated in stage (iii). For (i) and (iv), we introduce algorithms that utilize local search with random restarts; for (ii), we introduce a mathematical modeling of the discretization problem and use the state-of-the-art ASP reasoners to solve it; for (iii) we introduce a formal hybrid reasoning framework that allows embedding of geometric reasoning in task planning, and use the expressive formalisms and reasoners of ASP. We illustrate the usefulness of our integrated AI approach with several scenarios that cannot be solved by the existing approaches. We also provide a dynamic simulation for one of the scenarios, as supplementary material.",
"This paper presents artificial constraints as a method for guiding heuristic search in the computationally challenging domain of motion planning among movable obstacles. The robot is permitted to manipulate unspecified obstacles in order to create space for a path. A plan is an ordered sequence of paths for robot motion and object manipulation. We show that under monotone assumptions, anticipating future manipulation paths results in constraints on both the choice of objects and their placements at earlier stages in the plan. We present an algorithm that uses this observation to incrementally reduce the search space and quickly find solutions to previously unsolved classes of movable obstacle problems. Our planner is developed for arbitrary robot geometry and kinematics. It is presented with an implementation for the domain of navigation among movable obstacles."
]
} |
1903.00963 | 2961036874 | Designing face recognition systems that are capable of matching face images obtained in the thermal spectrum with those obtained in the visible spectrum is a challenging problem. In this work, we propose the use of semantic-guided generative adversarial network (SG-GAN) to automatically synthesize visible face images from their thermal counterparts. Specifically, semantic labels, extracted by a face parsing network, are used to compute a semantic loss function to regularize the adversarial network during training. These semantic cues denote high-level facial component information associated with each pixel. Further, an identity extraction network is leveraged to generate multi-scale features to compute an identity loss function. To achieve photo-realistic results, a perceptual loss function is introduced during network training to ensure that the synthesized visible face is perceptually similar to the target visible face image. We extensively evaluate the benefits of individual loss functions, and combine them effectively to learn the mapping from thermal to visible face images. Experiments involving two multispectral face datasets show that the proposed method achieves promising results in both face synthesis and cross-spectral face matching. | Chen and Ross @cite_10 demonstrated that a VIS image could be reconstructed from a THM image using hidden factor analysis. @cite_2 performed VIS image reconstruction using a two-stage process, consisting of feature extraction and feature regression using CNNs. However, these reconstructed results were observed to be blurry. In a subsequent work, @cite_29 used a fully convolutional neural network to learn a global mapping between THM and VIS images, as well as the regions around the eyes, nose and mouth. The final synthesized image was a combination of global and local mapping functions resulting in better quality output. | {
"cite_N": [
"@cite_29",
"@cite_10",
"@cite_2"
],
"mid": [
"2963294002",
"919364087",
"2566614872"
],
"abstract": [
"Synthesis of visible spectrum faces from thermal facial imagery is a promising approach for heterogeneous face recognition; enabling existing face recognition software trained on visible imagery to be leveraged, and allowing human analysts to verify cross-spectrum matches more effectively. We propose a new synthesis method to enhance the discriminative quality of synthesized visible face imagery by leveraging both global (e.g., entire face) and local regions (e.g., eyes, nose, and mouth). Here, each region provides (1) an independent representation for the corresponding area, and (2) additional regularization terms, which impact the overall quality of synthesized images. We analyze the effects of using multiple regions to synthesize a visible face image from a thermal face. We demonstrate that our approach improves cross-spectrum verification rates over recently published synthesis approaches. Moreover, using our synthesized imagery, we report the results on facial landmark detection—commonly used for image registration— which is a critical part of the face recognition process.",
"Cascaded subspace learning scheme is used for matching visible against thermal faces.Whitening transform, factor analysis and common discriminant analysis are employed.Cross-database evaluation is adopted to convey the effectiveness of the approach. Matching thermal (THM) face images against visible (VIS) face images poses a significant challenge to automated face recognition systems. In this work, we introduce a Heterogeneous Face Recognition (HFR) matching framework, which uses multiple sets of subspaces generated by sampling patches from VIS and THM face images and subjecting them to a sequence of transformations. In the training phase of the proposed scheme, face images from VIS and THM are subjected to three different filters separately and then tessellated into patches. Each patch is represented by either a Pyramid Scale Invariant Feature Transform (PSIFT) or Histograms of Principal Oriented Gradients (HPOG). Then, a cascaded subspace learning process consisting of whitening transformation, factor analysis, and common discriminant analysis is used to construct multiple common subspaces between VIS and THM facial images. During the testing phase, the projected feature vectors from individual subspaces are concatenated to form a final feature vector. Nearest Neighbor (NN) classifier is used to compare feature vectors and the resulting scores corresponding to three filtered images are combined via the sum-rule. The proposed face matching algorithm is evaluated on two multispectral face datasets and is shown to achieve very promising results.",
"A method for synthesizing visible spectrum face imagery from polarimetric-thermal face imagery is presented. This work extends recent within-spectrum (i.e., visible-to-visible) reconstruction techniques for image representation understanding using convolutional neural networks. Despite the challenging task, we effectively demonstrate the ability to produce a visible image from a probe polarimetric-thermal image. Moreover, we are able to demonstrate the same capability with conventional thermal imagery, but we report a significant improvement by incorporating polarization-state information. These reconstructions, or estimates, can be used to aid human examiners performing one-to-one verification of matches retrieved from automated cross-spectrum face recognition algorithms."
]
} |
1903.00963 | 2961036874 | Designing face recognition systems that are capable of matching face images obtained in the thermal spectrum with those obtained in the visible spectrum is a challenging problem. In this work, we propose the use of semantic-guided generative adversarial network (SG-GAN) to automatically synthesize visible face images from their thermal counterparts. Specifically, semantic labels, extracted by a face parsing network, are used to compute a semantic loss function to regularize the adversarial network during training. These semantic cues denote high-level facial component information associated with each pixel. Further, an identity extraction network is leveraged to generate multi-scale features to compute an identity loss function. To achieve photo-realistic results, a perceptual loss function is introduced during network training to ensure that the synthesized visible face is perceptually similar to the target visible face image. We extensively evaluate the benefits of individual loss functions, and combine them effectively to learn the mapping from thermal to visible face images. Experiments involving two multispectral face datasets show that the proposed method achieves promising results in both face synthesis and cross-spectral face matching. | However, all the aforementioned methods do not explicitly consider the semantic information of the face to regularize network training. The main difference between our proposed method and @cite_13 is that we use a semantic loss function to regularize the training process, whereas the latter uses an attribute loss function to guide the learning process. Further, we demonstrate that the use of the semantic loss function is able to reduce the per-pixel loss value, calculated between the synthesized VIS face and the target VIS face. Another notable difference is that SG-GAN extracts multi-scale features using an identity feature extraction network (see Figure ). | {
"cite_N": [
"@cite_13"
],
"mid": [
"2963156682"
],
"abstract": [
"Thermal to visible face verification is a challenging problem due to the large domain discrepancy between the modalities. Existing approaches either attempt to synthesize visible faces from thermal faces or extract robust features from these modalities for cross-modal matching. In this paper, we take a different approach in which we make use of the attributes extracted from the visible image to synthesize the attribute-preserved visible image from the input thermal image for cross-modal matching. A pre-trained VGG-Face network is used to extract the attributes from the visible image. Then, a novel Attribute Preserved Generative Adversarial Network (AP-GAN) is proposed to synthesize the visible image from the thermal image guided by the extracted attributes. Finally, a deep network is used to extract features from the synthesized image and the input visible image for verification. Extensive experiments on the ARL Polarimetric face dataset show that the proposed method achieves significant improvements over the state-of-the-art methods."
]
} |
1903.00793 | 2918543823 | With a good image understanding capability, can we manipulate an image's high-level semantic representation? Such a transformation operation can be used to generate or retrieve similar images but with a desired modification (for example, changing a beach background to a street background); a similar ability has been demonstrated in zero-shot learning, attribute composition and attribute manipulation image search. In this work we show how one can learn transformations with no training examples by learning them on another domain and then transferring them to the target domain. This is feasible if: first, transformation training data is more accessible in the other domain and, second, both domains share similar semantics such that one can learn transformations in a shared embedding space. We demonstrate this on an image retrieval task where the search query is an image plus an additional transformation specification (for example: search for images similar to this one but where the background is a street instead of a beach). In one experiment, we transfer transformations from synthesized 2D blob images to 3D rendered images, and in the other, we transfer from the text domain to the natural image domain. | Besides traditional text-to-image @cite_16 or image-to-image retrieval @cite_31 tasks, there are many image retrieval applications with other types of search query, such as: sketches @cite_34 , scene layouts @cite_13 , relevance feedback @cite_38 @cite_35 , product attribute feedback @cite_14 @cite_5 @cite_8 @cite_6 , dialog interaction @cite_27 and image-text combination queries @cite_28 . In this work, the image search query will be a combination of a reference image and a transformation specification. In our setup, labeled retrieval examples are not available, hence a standard training procedure like @cite_5 @cite_28 does not work. | {
"cite_N": [
"@cite_38",
"@cite_35",
"@cite_14",
"@cite_8",
"@cite_28",
"@cite_34",
"@cite_6",
"@cite_27",
"@cite_5",
"@cite_31",
"@cite_16",
"@cite_13"
],
"mid": [
"2101498401",
"1979246310",
"2033365921",
"",
"2949436286",
"2466618734",
"",
"2798503981",
"2735001949",
"1975517671",
"",
""
],
"abstract": [
"Content-based image retrieval (CBIR) has become one of the most active research areas in the past few years. Many visual feature representations have been explored and many systems built. While these research efforts establish the basis of CBIR, the usefulness of the proposed approaches is limited. Specifically, these efforts have relatively ignored two distinct characteristics of CBIR systems: (1) the gap between high-level concepts and low-level features, and (2) the subjectivity of human perception of visual content. This paper proposes a relevance feedback based interactive retrieval approach, which effectively takes into account the above two characteristics in CBIR. During the retrieval process, the user's high-level query and perception subjectivity are captured by dynamically updated weights based on the user's feedback. The experimental results over more than 70000 images show that the proposed approach greatly reduces the user's effort of composing a query, and captures the user's information need more precisely.",
"This paper addresses the challenge of Multimedia Event Detection by proposing a novel method for high-level and low-level features fusion based on collective classification. Generally, the method consists of three steps: training a classifier from low-level features; encoding high-level features into graphs; and diffusing the scores on the established graph to obtain the final prediction. The final prediction is derived from multiple graphs each of which corresponds to a high-level feature. The paper investigates two graph construction methods using logarithmic and exponential loss functions, respectively and two collective classification algorithms, i.e. Gibbs sampling and Markov random walk. The theoretical analysis demonstrates that the proposed method converges and is computationally scalable and the empirical analysis on TRECVID 2011 Multimedia Event Detection dataset validates its outstanding performance compared to state-of-the-art methods, with an added benefit of interpretability.",
"We propose a novel mode of feedback for image search, where a user describes which properties of exemplar images should be adjusted in order to more closely match his her mental model of the image(s) sought. For example, perusing image results for a query “black shoes”, the user might state, “Show me shoe images like these, but sportier.” Offline, our approach first learns a set of ranking functions, each of which predicts the relative strength of a nameable attribute in an image (‘sportiness’, ‘furriness’, etc.). At query time, the system presents an initial set of reference images, and the user selects among them to provide relative attribute feedback. Using the resulting constraints in the multi-dimensional attribute space, our method updates its relevance function and re-ranks the pool of images. This procedure iterates using the accumulated constraints until the top ranked images are acceptably close to the user's envisioned target. In this way, our approach allows a user to efficiently “whittle away” irrelevant portions of the visual feature space, using semantic language to precisely communicate her preferences to the system. We demonstrate the technique for refining image search for people, products, and scenes, and show it outperforms traditional binary relevance feedback in terms of search speed and accuracy.",
"",
"In this paper, we study the task of image retrieval, where the input query is specified in the form of an image plus some text that describes desired modifications to the input image. For example, we may present an image of the Eiffel tower, and ask the system to find images which are visually similar but are modified in small ways, such as being taken at nighttime instead of during the day. To tackle this task, we learn a similarity metric between a target image and a source image plus source text, an embedding and composing function such that target image feature is close to the source image plus text composition feature. We propose a new way to combine image and text using such function that is designed for the retrieval task. We show this outperforms existing approaches on 3 different datasets, namely Fashion-200k, MIT-States and a new synthetic dataset we create based on CLEVR. We also show that our approach can be used to classify input queries, in addition to image retrieval.",
"We present the Sketchy database, the first large-scale collection of sketch-photo pairs. We ask crowd workers to sketch particular photographic objects sampled from 125 categories and acquire 75,471 sketches of 12,500 objects. The Sketchy database gives us fine-grained associations between particular photos and sketches, and we use this to train cross-domain convolutional networks which embed sketches and photographs in a common feature space. We use our database as a benchmark for fine-grained retrieval and show that our learned representation significantly outperforms both hand-crafted features as well as deep features trained for sketch or photo classification. Beyond image retrieval, we believe the Sketchy database opens up new opportunities for sketch and image understanding and synthesis.",
"",
"Existing methods for interactive image retrieval have demonstrated the merit of integrating user feedback, improving retrieval results. However, most current systems rely on restricted forms of user feedback, such as binary relevance responses, or feedback based on a fixed set of relative attributes, which limits their impact. In this paper, we introduce a new approach to interactive image search that enables users to provide feedback via natural language, allowing for more natural and effective interaction. We formulate the task of dialog-based interactive image retrieval as a reinforcement learning problem, and reward the dialog system for improving the rank of the target image during each dialog turn. To avoid the cumbersome and costly process of collecting human-machine conversations as the dialog system learns, we train our system with a user simulator, which is itself trained to describe the differences between target and candidate images. The efficacy of our approach is demonstrated in a footwear retrieval application. Extensive experiments on both simulated and real-world data show that 1) our proposed learning framework achieves better accuracy than other supervised and reinforcement learning baselines and 2) user feedback based on natural language rather than pre-specified attributes leads to more effective retrieval results, and a more natural and expressive communication interface.",
"We introduce a new fashion search protocol where attribute manipulation is allowed within the interaction between users and search engines, e.g. manipulating the color attribute of the clothing from red to blue. It is particularly useful for image-based search when the query image cannot perfectly match users expectation of the desired product. To build such a search engine, we propose a novel memory-augmented Attribute Manipulation Network (AMNet) which can manipulate image representation at the attribute level. Given a query image and some attributes that need to modify, AMNet can manipulate the intermediate representation encoding the unwanted attributes and change them to the desired ones through following four novel components: (1) a dual-path CNN architecture for discriminative deep attribute representation learning, (2) a memory block with an internal memory and a neural controller for prototype attribute representation learning and hosting, (3) an attribute manipulation network to modify the representation of the query image with the prototype feature retrieved from the memory block, (4) a loss layer which jointly optimizes the attribute classification loss and a triplet ranking loss over triplet images for facilitating precise attribute manipulation and image retrieving. Extensive experiments conducted on two large-scale fashion search datasets, i.e. DARN and DeepFashion, have demonstrated that AMNet is able to achieve remarkably good performance compared with well-designed baselines in terms of effectiveness of attribute manipulation and search accuracy.",
"Learning fine-grained image similarity is a challenging task. It needs to capture between-class and within-class image differences. This paper proposes a deep ranking model that employs deep learning techniques to learn similarity metric directly from images. It has higher learning capability than models based on hand-crafted features. A novel multiscale network structure has been developed to describe the images effectively. An efficient triplet sampling algorithm is also proposed to learn the model with distributed asynchronized stochastic gradient. Extensive experiments show that the proposed algorithm outperforms models based on hand-crafted visual features and deep classification models.",
"",
""
]
} |
1903.00793 | 2918543823 | With a good image understanding capability, can we manipulate an image's high-level semantic representation? Such a transformation operation can be used to generate or retrieve similar images but with a desired modification (for example, changing a beach background to a street background); a similar ability has been demonstrated in zero-shot learning, attribute composition and attribute manipulation image search. In this work we show how one can learn transformations with no training examples by learning them on another domain and then transferring them to the target domain. This is feasible if: first, transformation training data is more accessible in the other domain and, second, both domains share similar semantics such that one can learn transformations in a shared embedding space. We demonstrate this on an image retrieval task where the search query is an image plus an additional transformation specification (for example: search for images similar to this one but where the background is a street instead of a beach). In one experiment, we transfer transformations from synthesized 2D blob images to 3D rendered images, and in the other, we transfer from the text domain to the natural image domain. | Zero-shot learning aims to recognize novel concepts by relying on side data such as attributes @cite_1 @cite_32 @cite_22 or textual descriptions @cite_23 @cite_2 . This side data represents high-level semantics with structure and can therefore be manipulated, composed or transformed easily by humans. On the other hand, the corresponding manipulation in the low-level feature domain (such as raw images) is more difficult. | {
"cite_N": [
"@cite_22",
"@cite_1",
"@cite_32",
"@cite_23",
"@cite_2"
],
"mid": [
"64813323",
"2134270519",
"",
"2398118205",
"2123024445"
],
"abstract": [
"In this paper we study how to perform object classification in a principled way that exploits the rich structure of real world labels. We develop a new model that allows encoding of flexible relations between labels. We introduce Hierarchy and Exclusion (HEX) graphs, a new formalism that captures semantic relations between any two labels applied to the same object: mutual exclusion, overlap and subsumption. We then provide rigorous theoretical analysis that illustrates properties of HEX graphs such as consistency, equivalence, and computational implications of the graph structure. Next, we propose a probabilistic classification model based on HEX graphs and show that it enjoys a number of desirable properties. Finally, we evaluate our method using a large-scale benchmark. Empirical results demonstrate that our model can significantly improve object classification by exploiting the label relations.",
"We study the problem of object classification when training and test classes are disjoint, i.e. no training examples of the target classes are available. This setup has hardly been studied in computer vision research, but it is the rule rather than the exception, because the world contains tens of thousands of different object classes and for only a very few of them image, collections have been formed and annotated with suitable class labels. In this paper, we tackle the problem by introducing attribute-based classification. It performs object detection based on a human-specified high-level description of the target objects instead of training images. The description consists of arbitrary semantic attributes, like shape, color or even geographic information. Because such properties transcend the specific learning task at hand, they can be pre-learned, e.g. from image datasets unrelated to the current task. Afterwards, new classes can be detected based on their attribute representation, without the need for a new training phase. In order to evaluate our method and to facilitate research in this area, we have assembled a new large-scale dataset, “Animals with Attributes”, of over 30,000 animal images that match the 50 classes in Osherson's classic table of how strongly humans associate 85 semantic attributes with animal classes. Our experiments show that by using an attribute layer it is indeed possible to build a learning object detection system that does not require any training images of the target classes.",
"",
"State-of-the-art methods for zero-shot visual recognition formulate learning as a joint embedding problem of images and side information. In these formulations the current best complement to visual features are attributes: manuallyencoded vectors describing shared characteristics among categories. Despite good performance, attributes have limitations: (1) finer-grained recognition requires commensurately more attributes, and (2) attributes do not provide a natural language interface. We propose to overcome these limitations by training neural language models from scratch, i.e. without pre-training and only consuming words and characters. Our proposed models train end-to-end to align with the fine-grained and category-specific content of images. Natural language provides a flexible and compact way of encoding only the salient visual aspects for distinguishing categories. By training on raw text, our model can do inference on raw text as well, providing humans a familiar mode both for annotation and retrieval. Our model achieves strong performance on zero-shot text-based image retrieval and significantly outperforms the attribute-based state-of-the-art for zero-shot classification on the Caltech-UCSD Birds 200-2011 dataset.",
"Modern visual recognition systems are often limited in their ability to scale to large numbers of object categories. This limitation is in part due to the increasing difficulty of acquiring sufficient training data in the form of labeled images as the number of object categories grows. One remedy is to leverage data from other sources - such as text data - both to train visual models and to constrain their predictions. In this paper we present a new deep visual-semantic embedding model trained to identify visual objects using both labeled image data as well as semantic information gleaned from unannotated text. We demonstrate that this model matches state-of-the-art performance on the 1000-class ImageNet object recognition challenge while making more semantically reasonable errors, and also show that the semantic information can be exploited to make predictions about tens of thousands of image labels not observed during training. Semantic knowledge improves such zero-shot predictions achieving hit rates of up to 18 across thousands of novel labels never seen by the visual model."
]
} |
1903.00793 | 2918543823 | With a good image understanding capability, can we manipulate an image's high-level semantic representation? Such a transformation operation can be used to generate or retrieve similar images but with a desired modification (for example, changing a beach background to a street background); a similar ability has been demonstrated in zero-shot learning, attribute composition and attribute manipulation image search. In this work we show how one can learn transformations with no training examples by learning them on another domain and then transferring them to the target domain. This is feasible if: first, transformation training data is more accessible in the other domain and, second, both domains share similar semantics such that one can learn transformations in a shared embedding space. We demonstrate this on an image retrieval task where the search query is an image plus an additional transformation specification (for example: search for images similar to this one but where the background is a street instead of a beach). In one experiment, we transfer transformations from synthesized 2D blob images to 3D rendered images, and in the other, we transfer from the text domain to the natural image domain. | Image synthesis is an active research area in which high-level semantic modification or synthesis of images is performed @cite_25 @cite_24 @cite_15 @cite_37 @cite_9 @cite_17 . For example, "style" can represent a high-level semantic feature that one wants to enforce on the output. @cite_25 generate images from reference images with a textual description of a new "style". Another relevant research area is work on translation between scene images, scene graphs and text captions @cite_19 @cite_20 . | {
"cite_N": [
"@cite_37",
"@cite_9",
"@cite_24",
"@cite_19",
"@cite_15",
"@cite_25",
"@cite_20",
"@cite_17"
],
"mid": [
"2560481159",
"2963201933",
"2963073614",
"2951343884",
"2768626898",
"2949999304",
"2796341166",
"2895668250"
],
"abstract": [
"Several recent works have used deep convolutional networks to generate realistic imagery. These methods sidestep the traditional computer graphics rendering pipeline and instead generate imagery at the pixel level by learning from large collections of photos (e.g. faces or bedrooms). However, these methods are of limited utility because it is difficult for a user to control what the network produces. In this paper, we propose a deep adversarial image synthesis architecture that is conditioned on sketched boundaries and sparse color strokes to generate realistic cars, bedrooms, or faces. We demonstrate a sketch based image synthesis system which allows users to scribble over the sketch to indicate preferred color for objects. Our network can then generate convincing images that satisfy both the color and the sketch constraints of user. The network is feed-forward which allows users to see the effect of their edits in real time. We compare to recent work on sketch to image synthesis and show that our approach generates more realistic, diverse, and controllable outputs. The architecture is also effective at user-guided colorization of grayscale images.",
"In this paper, we investigate deep image synthesis guided by sketch, color, and texture. Previous image synthesis methods can be controlled by sketch and color strokes but we are the first to examine texture control. We allow a user to place a texture patch on a sketch at arbitrary locations and scales to control the desired output texture. Our generative network learns to synthesize objects consistent with these texture suggestions. To achieve this, we develop a local texture loss in addition to adversarial and content loss to train the generative network. We conduct experiments using sketches generated from real images and textures sampled from a separate texture database and results show that our proposed algorithm is able to generate plausible images that are faithful to user controls. Ablation studies show that our proposed pipeline can generate more realistic images than adapting existing methods directly.",
"We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Moreover, since the release of the pix2pix software associated with this paper, hundreds of twitter users have posted their own artistic experiments using our system. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without handengineering our loss functions either.",
"Understanding a visual scene goes beyond recognizing individual objects in isolation. Relationships between objects also constitute rich semantic information about the scene. In this work, we explicitly model the objects and their relationships using scene graphs, a visually-grounded graphical structure of an image. We propose a novel end-to-end model that generates such structured scene representation from an input image. The model solves the scene graph inference problem using standard RNNs and learns to iteratively improves its predictions via message passing. Our joint inference model can take advantage of contextual cues to make better predictions on objects and their relationships. The experiments show that our model significantly outperforms previous methods for generating scene graphs using Visual Genome dataset and inferring support relations with NYU Depth v2 dataset.",
"Recent studies have shown remarkable success in image-to-image translation for two domains. However, existing approaches have limited scalability and robustness in handling more than two domains, since different models should be built independently for every pair of image domains. To address this limitation, we propose StarGAN, a novel and scalable approach that can perform image-to-image translations for multiple domains using only a single model. Such a unified model architecture of StarGAN allows simultaneous training of multiple datasets with different domains within a single network. This leads to StarGAN's superior quality of translated images compared to existing models as well as the novel capability of flexibly translating an input image to any desired target domain. We empirically demonstrate the effectiveness of our approach on a facial attribute transfer and a facial expression synthesis tasks.",
"Automatic synthesis of realistic images from text would be interesting and useful, but current AI systems are still far from this goal. However, in recent years generic and powerful recurrent neural network architectures have been developed to learn discriminative text feature representations. Meanwhile, deep convolutional generative adversarial networks (GANs) have begun to generate highly compelling images of specific categories, such as faces, album covers, and room interiors. In this work, we develop a novel deep architecture and GAN formulation to effectively bridge these advances in text and image modeling, translating visual concepts from characters to pixels. We demonstrate the capability of our model to generate plausible images of birds and flowers from detailed text descriptions.",
"To truly understand the visual world our models should be able not only to recognize images but also generate them. To this end, there has been exciting recent progress on generating images from natural language descriptions. These methods give stunning results on limited domains such as descriptions of birds or flowers, but struggle to faithfully reproduce complex sentences with many objects and relationships. To overcome this limitation we propose a method for generating images from scene graphs, enabling explicitly reasoning about objects and their relationships. Our model uses graph convolution to process input graphs, computes a scene layout by predicting bounding boxes and segmentation masks for objects, and converts the layout to an image with a cascaded refinement network. The network is trained adversarially against a pair of discriminators to ensure realistic outputs. We validate our approach on Visual Genome and COCO-Stuff, where qualitative results, ablations, and user studies demonstrate our method's ability to generate complex images with multiple objects.",
"We present Swapnet, a framework to transfer garments across images of people with arbitrary body pose, shape, and clothing. Garment transfer is a challenging task that requires (i) disentangling the features of the clothing from the body pose and shape and (ii) realistic synthesis of the garment texture on the new body. We present a neural network architecture that tackles these sub-problems with two task-specific sub-networks. Since acquiring pairs of images showing the same clothing on different bodies is difficult, we propose a novel weakly-supervised approach that generates training pairs from a single image via data augmentation. We present the first fully automatic method for garment transfer in unconstrained images without solving the difficult 3D reconstruction problem. We demonstrate a variety of transfer results and highlight our advantages over traditional image-to-image and analogy pipelines."
]
} |
1903.00793 | 2918543823 | With a good image understanding capability, can we manipulate an image's high-level semantic representation? Such a transformation operation can be used to generate or retrieve similar images but with a desired modification (for example, changing a beach background to a street background); a similar ability has been demonstrated in zero-shot learning, attribute composition and attribute-manipulation image search. In this work we show how one can learn transformations with no training examples by learning them on another domain and then transferring them to the target domain. This is feasible if: first, transformation training data is more accessible in the other domain, and second, both domains share similar semantics, such that one can learn transformations in a shared embedding space. We demonstrate this on an image retrieval task where the search query is an image plus an additional transformation specification (for example: search for images similar to this one, but with a street background instead of a beach). In one experiment, we transfer transformations from synthesized 2D blob images to 3D rendered images, and in the other, we transfer from the text domain to the natural image domain. | is one of the main approaches for many retrieval and zero-shot learning tasks. It usually relies on metric learning @cite_26 @cite_31 @cite_4 in the retrieval context, though other supervised settings or even unsupervised learning @cite_29 can also work. The result is an encoder that embeds raw input into a high-level semantic feature space, where retrieval or recognition is performed. Our work concerns performing transformations within such a space. In @cite_29 , it is demonstrated that walking or performing vector arithmetic operations in that space can translate to corresponding high-level semantic changes in the raw image space. | {
"cite_N": [
"@cite_29",
"@cite_31",
"@cite_26",
"@cite_4"
],
"mid": [
"2173520492",
"1975517671",
"2106053110",
"2790210700"
],
"abstract": [
"In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations.",
"Learning fine-grained image similarity is a challenging task. It needs to capture between-class and within-class image differences. This paper proposes a deep ranking model that employs deep learning techniques to learn similarity metric directly from images. It has higher learning capability than models based on hand-crafted features. A novel multiscale network structure has been developed to describe the images effectively. An efficient triplet sampling algorithm is also proposed to learn the model with distributed asynchronized stochastic gradient. Extensive experiments show that the proposed algorithm outperforms models based on hand-crafted visual features and deep classification models.",
"The accuracy of k-nearest neighbor (kNN) classification depends significantly on the metric used to compute distances between different examples. In this paper, we show how to learn a Mahalanobis distance metric for kNN classification from labeled examples. The Mahalanobis metric can equivalently be viewed as a global linear transformation of the input space that precedes kNN classification using Euclidean distances. In our approach, the metric is trained with the goal that the k-nearest neighbors always belong to the same class while examples from different classes are separated by a large margin. As in support vector machines (SVMs), the margin criterion leads to a convex optimization based on the hinge loss. Unlike learning in SVMs, however, our approach requires no modification or extension for problems in multiway (as opposed to binary) classification. In our framework, the Mahalanobis distance metric is obtained as the solution to a semidefinite program. On several data sets of varying size and difficulty, we find that metrics trained in this way lead to significant improvements in kNN classification. Sometimes these results can be further improved by clustering the training examples and learning an individual metric within each cluster. We show how to learn and combine these local metrics in a globally integrated manner.",
"Many recent works advancing deep learning tend to focus on large scale setting with the goal of more effective training and better fitting. This goal might be less applicable to the case of small to medium scale. Studying deep metric learning under such setting, we reason that better generalization could be a big contributing factor to improvement of previous works, as well as the goal for further improvement. We investigate using other layers in a deep metric learning system (beside the embedding layer) for feature extraction and analyze how well they perform on training data and generalize to testing data. From this study, we suggest a new regularization practice and demonstrate state-of-the-art performance on 3 fine-grained image retrieval benchmarks: Cars-196, CUB-200-2011 and Stanford Online Product."
]
} |
1903.00793 | 2918543823 | With a good image understanding capability, can we manipulate an image's high-level semantic representation? Such a transformation operation can be used to generate or retrieve similar images but with a desired modification (for example, changing a beach background to a street background); a similar ability has been demonstrated in zero-shot learning, attribute composition and attribute-manipulation image search. In this work we show how one can learn transformations with no training examples by learning them on another domain and then transferring them to the target domain. This is feasible if: first, transformation training data is more accessible in the other domain, and second, both domains share similar semantics, such that one can learn transformations in a shared embedding space. We demonstrate this on an image retrieval task where the search query is an image plus an additional transformation specification (for example: search for images similar to this one, but with a street background instead of a beach). In one experiment, we transfer transformations from synthesized 2D blob images to 3D rendered images, and in the other, we transfer from the text domain to the natural image domain. | these areas are at a high level similar to what we want to do: perform learning on another domain where labels are available and apply the result to the target domain @cite_36 @cite_0 @cite_7 . Here the source and target domains are similar, and the goal is to fine-tune a model trained on one domain for another by bridging the gap between the two domains. In contrast, the task we are studying requires transferring between two completely different domains (i.e., image and text), and so we provide similarity supervision to facilitate that. | {
"cite_N": [
"@cite_36",
"@cite_0",
"@cite_7"
],
"mid": [
"2487365028",
"2963709863",
"2593768305"
],
"abstract": [
"Recent progress in computer vision has been driven by high-capacity models trained on large datasets. Unfortunately, creating large datasets with pixel-level labels has been extremely costly due to the amount of human effort required. In this paper, we present an approach to rapidly creating pixel-accurate semantic label maps for images extracted from modern computer games. Although the source code and the internal operation of commercial games are inaccessible, we show that associations between image patches can be reconstructed from the communication between the game and the graphics hardware. This enables rapid propagation of semantic labels within and across images synthesized by the game, with no access to the source code or the content. We validate the presented approach by producing dense pixel-level semantic annotations for 25 thousand images synthesized by a photorealistic open-world computer game. Experiments on semantic segmentation datasets show that using the acquired data to supplement real-world images significantly increases accuracy and that the acquired data enables reducing the amount of hand-labeled real-world data: models trained with game data and just ( 1 3 ) of the CamVid training set outperform models trained on the complete CamVid training set.",
"With recent progress in graphics, it has become more tractable to train models on synthetic images, potentially avoiding the need for expensive annotations. However, learning from synthetic images may not achieve the desired performance due to a gap between synthetic and real image distributions. To reduce this gap, we propose Simulated+Unsupervised (S+U) learning, where the task is to learn a model to improve the realism of a simulator's output using unlabeled real data, while preserving the annotation information from the simulator. We develop a method for S+U learning that uses an adversarial network similar to Generative Adversarial Networks (GANs), but with synthetic images as inputs instead of random vectors. We make several key modifications to the standard GAN algorithm to preserve annotations, avoid artifacts, and stabilize training: (i) a self-regularization term, (ii) a local adversarial loss, and (iii) updating the discriminator using a history of refined images. We show that this enables generation of highly realistic images, which we demonstrate both qualitatively and with a user study. We quantitatively evaluate the generated images by training models for gaze estimation and hand pose estimation. We show a significant improvement over using synthetic images, and achieve state-of-the-art results on the MPIIGaze dataset without any labeled real data.",
"Adversarial learning methods are a promising approach to training robust deep networks, and can generate complex samples across diverse domains. They can also improve recognition despite the presence of domain shift or dataset bias: recent adversarial approaches to unsupervised domain adaptation reduce the difference between the training and test domain distributions and thus improve generalization performance. However, while generative adversarial networks (GANs) show compelling visualizations, they are not optimal on discriminative tasks and can be limited to smaller shifts. On the other hand, discriminative approaches can handle larger domain shifts, but impose tied weights on the model and do not exploit a GAN-based loss. In this work, we first outline a novel generalized framework for adversarial adaptation, which subsumes recent state-of-the-art approaches as special cases, and use this generalized view to better relate prior approaches. We then propose a previously unexplored instance of our general framework which combines discriminative modeling, untied weight sharing, and a GAN loss, which we call Adversarial Discriminative Domain Adaptation (ADDA). We show that ADDA is more effective yet considerably simpler than competing domain-adversarial methods, and demonstrate the promise of our approach by exceeding state-of-the-art unsupervised adaptation results on standard domain adaptation tasks as well as a difficult cross-modality object classification task."
]
} |
1903.00763 | 2920770442 | Recent years have witnessed the significant progress on convolutional neural networks (CNNs) in dynamic scene deblurring. While CNN models are generally learned by the reconstruction loss defined on training data, incorporating suitable image priors as well as regularization terms into the network architecture could boost the deblurring performance. In this work, we propose an Extreme Channel Prior embedded Network (ECPeNet) to plug the extreme channel priors (i.e., priors on dark and bright channels) into a network architecture for effective dynamic scene deblurring. A novel trainable extreme channel prior embedded layer (ECPeL) is developed to aggregate both extreme channel and blurry image representations, and sparse regularization is introduced to regularize the ECPeNet model learning. Furthermore, we present an effective multi-scale network architecture that works in both coarse-to-fine and fine-to-coarse manners for better exploiting information flow across scales. Experimental results on GoPro and Kohler datasets show that our proposed ECPeNet performs favorably against state-of-the-art deep image deblurring methods in terms of both quantitative metrics and visual quality. | Although those algorithms demonstrate their effectiveness in image deblurring, the simplified assumptions on the blur model and the time-consuming parameter-tuning strategy are two critical problems that hinder their performance in real-world cases. In this work, we utilize the realistic GoPro dataset @cite_40 to train, end-to-end, a new multi-scale network for latent sharp image restoration. | {
"cite_N": [
"@cite_40"
],
"mid": [
"2560533888"
],
"abstract": [
"Non-uniform blind deblurring for general dynamic scenes is a challenging computer vision problem as blurs arise not only from multiple object motions but also from camera shake, scene depth variation. To remove these complicated motion blurs, conventional energy optimization based methods rely on simple assumptions such that blur kernel is partially uniform or locally linear. Moreover, recent machine learning based methods also depend on synthetic blur datasets generated under these assumptions. This makes conventional deblurring methods fail to remove blurs where blur kernel is difficult to approximate or parameterize (e.g. object motion boundaries). In this work, we propose a multi-scale convolutional neural network that restores sharp images in an end-to-end manner where blur is caused by various sources. Together, we present multi-scale loss function that mimics conventional coarse-to-fine approaches. Furthermore, we propose a new large-scale dataset that provides pairs of realistic blurry image and the corresponding ground truth sharp image that are obtained by a high-speed camera. With the proposed model trained on this dataset, we demonstrate empirically that our method achieves the state-of-the-art performance in dynamic scene deblurring not only qualitatively, but also quantitatively."
]
} |
1903.00987 | 2920232832 | Detailed 3D reconstruction is an important challenge with application to robotics, augmented and virtual reality, which has seen impressive progress throughout the past years. Advancements were driven by the availability of depth cameras (RGB-D), as well as increased compute power, e.g. in the form of GPUs -- but also thanks to inclusion of machine learning in the process. Here, we propose X-Section, an RGB-D 3D reconstruction approach that leverages deep learning to make object-level predictions about thicknesses that can be readily integrated into a volumetric multi-view fusion process, where we propose an extension to the popular KinectFusion approach. In essence, our method allows to complete shape in general indoor scenes behind what is sensed by the RGB-D camera, which may be crucial e.g. for robotic manipulation tasks or efficient scene exploration. Predicting object thicknesses rather than volumes allows us to work with comparably high spatial resolution without exploding memory and training data requirements on the employed Convolutional Neural Networks. In a series of qualitative and quantitative evaluations, we demonstrate how we accurately predict object thickness and reconstruct general 3D scenes containing multiple objects. | The most popular approach for reconstructing scenes from RGB-D images involves registering and fusing multiple frames into a 3D voxel grid. This volumetric fusion approach, popularised by KinectFusion @cite_39 , works by first tracking the camera pose and then using the integration approach of Curless and Levoy @cite_10 to fuse the depth images into the volume. Various improvements have been introduced, mainly focused on reducing tracking drift @cite_8 and increasing the size of scenes that can be reconstructed. Kintinuous @cite_11 , for example, uses a sliding volume to map large spaces. 
BundleFusion @cite_34 reduces tracking drift by global bundle-adjustment and re-integration into the mapping process. @cite_6 tackles the efficiency bottleneck by means of a tree data structure. With the advent of deep learning there has been much interest in learning geometrical, structural and semantic priors to enhance the reconstruction process. For example, @cite_14 makes use of surface normal predictions to improve a monocular reconstruction. @cite_40 uses semantic segmentation along with RGB-D reconstruction to create annotated maps of indoor scenes. More recently, Fusion++ @cite_21 introduced an object-centric approach to large scale mapping which builds a map consisting of multiple TSDFs, each representing a single object instance. | {
"cite_N": [
"@cite_14",
"@cite_8",
"@cite_21",
"@cite_6",
"@cite_39",
"@cite_40",
"@cite_34",
"@cite_10",
"@cite_11"
],
"mid": [
"2737940453",
"2895289727",
"2888144883",
"2784112303",
"1987648924",
"2963357556",
"2336961836",
"2009422376",
"1716229439"
],
"abstract": [
"This paper presents an efficient framework for dense 3D scene reconstruction using input from a moving monocular camera. Visual SLAM (Simultaneous Localisation and Mapping) approaches based solely on geometric methods have proven to be quite capable of accurately tracking the pose of a moving camera and simultaneously building a map of the environment in real-time. However, most of them suffer from the 3D map being too sparse for practical use. The missing points in the generated map correspond mainly to areas lacking texture in the input images, and dense mapping systems often rely on hand-crafted priors like piecewise-planarity or piecewise-smooth depth. These priors do not always provide the required level of scene understanding to accurately fill the map. On the other hand, Convolutional Neural Networks (CNNs) have had great success in extracting high-level information from images and regressing pixel-wise surface normals, semantics, and even depth. In this work we leverage this high-level scene context learned by a deep CNN in the form of a surface normal prior. We show, in particular, that using the surface normal prior leads to better reconstructions than the weaker smoothness prior.",
"Sum-of-squares objective functions are very popular in computer vision algorithms. However, these objective functions are not always easy to optimize. The underlying assumptions made by solvers are often not satisfied and many problems are inherently ill-posed. In this paper, we propose a neural nonlinear least squares optimization algorithm which learns to effectively optimize these cost functions even in the presence of adversities. Unlike traditional approaches, the proposed solver requires no hand-crafted regularizers or priors as these are implicitly learned from the data. We apply our method to the problem of motion stereo ie. jointly estimating the motion and scene geometry from pairs of images of a monocular sequence. We show that our learned optimizer is able to efficiently and effectively solve this challenging optimization problem.",
"We propose an online object-level SLAM system which builds a persistent and accurate 3D graph map of arbitrary reconstructed objects. As an RGB-D camera browses a cluttered indoor scene, Mask-RCNN instance segmentations are used to initialise compact per-object Truncated Signed Distance Function (TSDF) reconstructions with object size-dependent resolutions and a novel 3D foreground mask. Reconstructed objects are stored in an optimisable 6DoF pose graph which is our only persistent map representation. Objects are incrementally refined via depth fusion, and are used for tracking, relocalisation and loop closure detection. Loop closures cause adjustments in the relative pose estimates of object instances, but no intra-object warping. Each object also carries semantic information which is refined over time and an existence probability to account for spurious instance predictions. We demonstrate our approach on a hand-held RGB-D sequence from a cluttered office scene with a large number and variety of object instances, highlighting how the system closes loops and makes good use of existing objects on repeated loops. We quantitatively evaluate the trajectory error of our system against a baseline approach on the RGB-D SLAM benchmark, and qualitatively compare reconstruction quality of discovered objects on the YCB video dataset. Performance evaluation shows our approach is highly memory efficient and runs online at 4-8Hz (excluding relocalisation) despite not being optimised at the software level.",
"We present a dense volumetric simultaneous localisation and mapping (SLAM) framework that uses an octree representation for efficient fusion and rendering of either a truncated signed distance field (TSDF) or an occupancy map. The primary aim of this letter is to use one single representation of the environment that can be used not only for robot pose tracking and high-resolution mapping, but seamlessly for planning. We show that our highly efficient octree representation of space fits SLAM and planning purposes in a real-time control loop. In a comprehensive evaluation, we demonstrate dense SLAM accuracy and runtime performance on-par with flat hashing approaches when using TSDF-based maps, and considerable speed-ups when using occupancy mapping compared to standard occupancy maps frameworks. Our SLAM system can run at 10–40 Hz on a modern quadcore CPU, without the need for massive parallelization on a GPU. We, furthermore, demonstrate a probabilistic occupancy mapping as an alternative to TSDF mapping in dense SLAM and show its direct applicability to online motion planning, using the example of informed rapidly-exploring random trees (RRT @math ).",
"We present a system for accurate real-time mapping of complex and arbitrary indoor scenes in variable lighting conditions, using only a moving low-cost depth camera and commodity graphics hardware. We fuse all of the depth data streamed from a Kinect sensor into a single global implicit surface model of the observed scene in real-time. The current sensor pose is simultaneously obtained by tracking the live depth frame relative to the global model using a coarse-to-fine iterative closest point (ICP) algorithm, which uses all of the observed depth data available. We demonstrate the advantages of tracking against the growing full surface model compared with frame-to-frame tracking, obtaining tracking and mapping results in constant time within room sized scenes with limited drift and high accuracy. We also show both qualitative and quantitative results relating to various aspects of our tracking and mapping system. Modelling of natural scenes, in real-time with only commodity sensor and GPU hardware, promises an exciting step forward in augmented reality (AR), in particular, it allows dense surfaces to be reconstructed in real-time, with a level of detail and robustness beyond any solution yet presented using passive computer vision.",
"For intelligent robots to interact in meaningful ways with their environment, they must understand both the geometric and semantic properties of the scene surrounding them. The majority of research to date has addressed these mapping challenges separately, focusing on either geometric or semantic mapping. In this paper we address the problem of building environmental maps that include both semantically meaningful, object-level entities and point- or mesh-based geometrical representations. We simultaneously build geometric point cloud models of previously unseen instances of known object classes and create a map that contains these object models as central entities. Our system leverages sparse, feature-based RGB-D SLAM, image-based deep-learning object detection and 3D unsupervised segmentation.",
"Real-time, high-quality, 3D scanning of large-scale scenes is key to mixed reality and robotic applications. However, scalability brings challenges of drift in pose estimation, introducing significant errors in the accumulated model. Approaches often require hours of offline processing to globally correct model errors. Recent online methods demonstrate compelling results but suffer from (1) needing minutes to perform online correction, preventing true real-time use; (2) brittle frame-to-frame (or frame-to-model) pose estimation, resulting in many tracking failures; or (3) supporting only unstructured point-based representations, which limit scan quality and applicability. We systematically address these issues with a novel, real-time, end-to-end reconstruction framework. At its core is a robust pose estimation strategy, optimizing per frame for a global set of camera poses by considering the complete history of RGB-D input with an efficient hierarchical approach. We remove the heavy reliance on temporal tracking and continually localize to the globally optimized frames instead. We contribute a parallelizable optimization framework, which employs correspondences based on sparse features and dense geometric and photometric matching. Our approach estimates globally optimized (i.e., bundle adjusted) poses in real time, supports robust tracking with recovery from gross tracking failures (i.e., relocalization), and re-estimates the 3D model in real time to ensure global consistency, all within a single framework. Our approach outperforms state-of-the-art online systems with quality on par to offline methods, but with unprecedented speed and scan completeness. Our framework leads to a comprehensive online scanning solution for large indoor environments, enabling ease of use and high-quality results.",
"A number of techniques have been developed for reconstructing surfaces by integrating groups of aligned range images. A desirable set of properties for such algorithms includes: incremental updating, representation of directional uncertainty, the ability to fill gaps in the reconstruction, and robustness in the presence of outliers. Prior algorithms possess subsets of these properties. In this paper, we present a volumetric method for integrating range images that possesses all of these properties. Our volumetric representation consists of a cumulative weighted signed distance function. Working with one range image at a time, we first scan-convert it to a distance function, then combine this with the data already acquired using a simple additive scheme. To achieve space efficiency, we employ a run-length encoding of the volume. To achieve time efficiency, we resample the range image to align with the voxel grid and traverse the range and voxel scanlines synchronously. We generate the final manifold by extracting an isosurface from the volumetric grid. We show that under certain assumptions, this isosurface is optimal in the least squares sense. To fill gaps in the model, we tessellate over the boundaries between regions seen to be empty and regions never observed. Using this method, we are able to integrate a large number of range images (as many as 70) yielding seamless, high-detail models of up to 2.6 million triangles.",
"In this paper we present an extension to the KinectFusion algorithm that permits dense mesh-based mapping of extended scale environments in real-time. This is achieved through (i) altering the original algorithm such that the region of space being mapped by the KinectFusion algorithm can vary dynamically, (ii) extracting a dense point cloud from the regions that leave the KinectFusion volume due to this variation, and, (iii) incrementally adding the resulting points to a triangular mesh representation of the environment. The system is implemented as a set of hierarchical multi-threaded components which are capable of operating in real-time. The architecture facilitates the creation and integration of new modules with minimal impact on the performance on the dense volume tracking and surface reconstruction modules. We provide experimental results demonstrating the system’s ability to map areas considerably beyond the scale of the original KinectFusion algorithm including a two story apartment and an extended sequence taken from a car at night. In order to overcome failure of the iterative closest point (ICP) based odometry in areas of low geometric features we have evaluated the Fast Odometry from Vision (FOVIS) system as an alternative. We provide a comparison between the two approaches where we show a trade off between the reduced drift of the visual odometry approach and the higher local mesh quality of the ICP-based approach. Finally we present ongoing work on incorporating full simultaneous localisation and mapping (SLAM) pose-graph optimisation."
]
} |
1903.00987 | 2920232832 | Detailed 3D reconstruction is an important challenge with application to robotics, augmented and virtual reality, which has seen impressive progress throughout the past years. Advancements were driven by the availability of depth cameras (RGB-D), as well as increased compute power, e.g. in the form of GPUs -- but also thanks to inclusion of machine learning in the process. Here, we propose X-Section, an RGB-D 3D reconstruction approach that leverages deep learning to make object-level predictions about thicknesses that can be readily integrated into a volumetric multi-view fusion process, where we propose an extension to the popular KinectFusion approach. In essence, our method allows to complete shape in general indoor scenes behind what is sensed by the RGB-D camera, which may be crucial e.g. for robotic manipulation tasks or efficient scene exploration. Predicting object thicknesses rather than volumes allows us to work with comparably high spatial resolution without exploding memory and training data requirements on the employed Convolutional Neural Networks. In a series of qualitative and quantitative evaluations, we demonstrate how we accurately predict object thickness and reconstruct general 3D scenes containing multiple objects. | A number of approaches propose to complete the scene starting from RGB-D information. @cite_42 and ScanComplete @cite_15 infer the missing voxels in an occupancy grid map along with the semantic labels. OctNetFusion @cite_27 describes a deep learnt fusion process using an octree data structure for efficient inference. Their scheme can be seen as learning an implicit surface from the depth maps, helping with noise reduction and outlier suppression when fusing. Voxlets @cite_20 operates on partially reconstructed 3D voxel grids. Other approaches @cite_3 use GANs to train an RGB-D to voxel predictor.
The main disadvantage of these approaches is that they are inefficient for fusing multiple views, as their 3D convolutions are both memory and compute intensive, restricting their use in real-time applications. | {
"cite_N": [
"@cite_42",
"@cite_3",
"@cite_27",
"@cite_15",
"@cite_20"
],
"mid": [
"2557465155",
"2963735494",
"2609754928",
"2777356020",
"2444097022"
],
"abstract": [
"This paper focuses on semantic scene completion, a task for producing a complete 3D voxel representation of volumetric occupancy and semantic labels for a scene from a single-view depth map observation. Previous work has considered scene completion and semantic labeling of depth maps separately. However, we observe that these two problems are tightly intertwined. To leverage the coupled nature of these two tasks, we introduce the semantic scene completion network (SSCNet), an end-to-end 3D convolutional network that takes a single depth image as input and simultaneously outputs occupancy and semantic labels for all voxels in the camera view frustum. Our network uses a dilation-based 3D context module to efficiently expand the receptive field and enable 3D context learning. To train our network, we construct SUNCG - a manually created large-scale dataset of synthetic 3D scenes with dense volumetric annotations. Our experiments demonstrate that the joint model outperforms methods addressing each task in isolation and outperforms alternative approaches on the semantic scene completion task. The dataset and code is available at http://sscnet.cs.princeton.edu.",
"In this paper, we propose a novel 3D-RecGAN approach, which reconstructs the complete 3D structure of a given object from a single arbitrary depth view using generative adversarial networks. Unlike the existing work which typically requires multiple views of the same object or class labels to recover the full 3D geometry, the proposed 3D-RecGAN only takes the voxel grid representation of a depth view of the object as input, and is able to generate the complete 3D occupancy grid by filling in the occluded missing regions. The key idea is to combine the generative capabilities of autoencoders and the conditional Generative Adversarial Networks (GAN) framework, to infer accurate and fine-grained 3D structures of objects in high-dimensional voxel space. Extensive experiments on large synthetic datasets show that the proposed 3D-RecGAN significantly outperforms the state of the art in single view 3D object reconstruction, and is able to reconstruct unseen types of objects. Our code and data are available at: https://github.com/Yang7879/3D-RecGAN.",
"In this paper, we present a learning based approach to depth fusion, i.e., dense 3D reconstruction from multiple depth images. The most common approach to depth fusion is based on averaging truncated signed distance functions, which was originally proposed by Curless and Levoy in 1996. While this method is simple and provides great results, it is not able to reconstruct (partially) occluded surfaces and requires a large number of frames to filter out sensor noise and outliers. Motivated by the availability of large 3D model repositories and recent advances in deep learning, we present a novel 3D CNN architecture that learns to predict an implicit surface representation from the input depth maps. Our learning based method significantly outperforms the traditional volumetric fusion approach in terms of noise reduction and outlier suppression. By learning the structure of real world 3D objects and scenes, our approach is further able to reconstruct occluded regions and to fill in gaps in the reconstruction. We demonstrate that our learning based approach outperforms both vanilla TSDF fusion as well as TV-L1 fusion on the task of volumetric fusion. Further, we demonstrate state-of-the-art 3D shape completion results.",
"We introduce ScanComplete, a novel data-driven approach for taking an incomplete 3D scan of a scene as input and predicting a complete 3D model along with per-voxel semantic labels. The key contribution of our method is its ability to handle large scenes with varying spatial extent, managing the cubic growth in data size as scene size increases. To this end, we devise a fully-convolutional generative 3D CNN model whose filter kernels are invariant to the overall scene size. The model can be trained on scene subvolumes but deployed on arbitrarily large scenes at test time. In addition, we propose a coarse-to-fine inference strategy in order to produce high-resolution output while also leveraging large input context sizes. In an extensive series of experiments, we carefully evaluate different model design choices, considering both deterministic and probabilistic models for completion and semantic inference. Our results show that we outperform other methods not only in the size of the environments handled and processing efficiency, but also with regard to completion quality and semantic segmentation performance by a significant margin.",
"Building a complete 3D model of a scene, given only a single depth image, is underconstrained. To gain a full volumetric model, one needs either multiple views, or a single view together with a library of unambiguous 3D models that will fit the shape of each individual object in the scene. We hypothesize that objects of dissimilar semantic classes often share similar 3D shape components, enabling a limited dataset to model the shape of a wide range of objects, and hence estimate their hidden geometry. Exploring this hypothesis, we propose an algorithm that can complete the unobserved geometry of tabletop-sized objects, based on a supervised model trained on already available volumetric elements. Our model maps from a local observation in a single depth image to an estimate of the surface shape in the surrounding neighborhood. We validate our approach both qualitatively and quantitatively on a range of indoor object collections and challenging real scenes."
]
} |
1903.00987 | 2920232832 | Detailed 3D reconstruction is an important challenge with application to robotics, augmented and virtual reality, which has seen impressive progress throughout the past years. Advancements were driven by the availability of depth cameras (RGB-D), as well as increased compute power, e.g. in the form of GPUs -- but also thanks to inclusion of machine learning in the process. Here, we propose X-Section, an RGB-D 3D reconstruction approach that leverages deep learning to make object-level predictions about thicknesses that can be readily integrated into a volumetric multi-view fusion process, where we propose an extension to the popular KinectFusion approach. In essence, our method allows to complete shape in general indoor scenes behind what is sensed by the RGB-D camera, which may be crucial e.g. for robotic manipulation tasks or efficient scene exploration. Predicting object thicknesses rather than volumes allows us to work with comparably high spatial resolution without exploding memory and training data requirements on the employed Convolutional Neural Networks. In a series of qualitative and quantitative evaluations, we demonstrate how we accurately predict object thickness and reconstruct general 3D scenes containing multiple objects. | More closely related to our approach is @cite_36 , where the authors extract curves along the silhouette and reconstruct the object by finding the smooth surface which adheres to the edge curves. This method, however, requires that the object is symmetric and that the silhouette image is taken perpendicular to the symmetry axis. | {
"cite_N": [
"@cite_36"
],
"mid": [
"171386943"
],
"abstract": [
"We show how a 3D model of a complex curved object can be easily extracted from a single 2D image. A userdefined silhouette is the key input; and we show that finding the smoothest 3D surface which projects exactly to this silhouette can be expressed as a quadratic optimization, a result which has not previously appeared in the large literature on the shape-from-silhouette problem. For simple models, this process can immediately yield a usable 3D model; but for more complex geometries the user will wish to further shape the surface. We show that a variety of editing operations—which can be defined either in the image or in 3D—can also be expressed as linear constraints on the 3D shape parameters. We extend the system to fit higher genus surfaces. Our method has several advantages over the system of Zhanget al [ZDPSS01] and over systems such asSKETCH and Teddy."
]
} |
1903.00987 | 2920232832 | Detailed 3D reconstruction is an important challenge with application to robotics, augmented and virtual reality, which has seen impressive progress throughout the past years. Advancements were driven by the availability of depth cameras (RGB-D), as well as increased compute power, e.g. in the form of GPUs -- but also thanks to inclusion of machine learning in the process. Here, we propose X-Section, an RGB-D 3D reconstruction approach that leverages deep learning to make object-level predictions about thicknesses that can be readily integrated into a volumetric multi-view fusion process, where we propose an extension to the popular KinectFusion approach. In essence, our method allows to complete shape in general indoor scenes behind what is sensed by the RGB-D camera, which may be crucial e.g. for robotic manipulation tasks or efficient scene exploration. Predicting object thicknesses rather than volumes allows us to work with comparably high spatial resolution without exploding memory and training data requirements on the employed Convolutional Neural Networks. In a series of qualitative and quantitative evaluations, we demonstrate how we accurately predict object thickness and reconstruct general 3D scenes containing multiple objects. | The advent of deep learning has led to a major boost in the complexity and quality of scenes and objects that can be reconstructed from a single view. Approaches like @cite_41 @cite_16 @cite_33 @cite_26 @cite_31 @cite_9 @cite_13 @cite_23 @cite_12 @cite_44 @cite_33 all attempt to reconstruct 3D objects from 2D views and or silhouettes. In the best case, these methods provide a view-centred reconstruction requiring to recover the translation and scale of the object, a challenging task itself. In the case the prediction is in a canonical pose, the full pose and scale has to be estimated. | {
"cite_N": [
"@cite_26",
"@cite_33",
"@cite_41",
"@cite_9",
"@cite_44",
"@cite_23",
"@cite_31",
"@cite_16",
"@cite_13",
"@cite_12"
],
"mid": [
"2748348302",
"2962912205",
"2342277278",
"2546066744",
"2963453931",
"2963547760",
"2964137676",
"2338532005",
"",
"2511691466"
],
"abstract": [
"3D reconstruction from a single image is a key problem in multiple applications ranging from robotic manipulation to augmented reality. Prior methods have tackled this problem through generative models which predict 3D reconstructions as voxels or point clouds. However, these methods can be computationally expensive and miss fine details. We introduce a new differentiable layer for 3D data deformation and use it in DeformNet to learn a model for 3D reconstruction-through-deformation. DeformNet takes an image input, searches the nearest shape template from a database, and deforms the template to match the query image. We evaluate our approach on the ShapeNet dataset and show that - (a) the Free-Form Deformation layer is a powerful new building block for Deep Learning models that manipulate 3D data (b) DeformNet uses this FFD layer combined with shape retrieval for smooth and detail-preserving 3D reconstruction of qualitatively plausible point clouds with respect to a single query image (c) compared to other state-of-the-art 3D reconstruction methods, DeformNet quantitatively matches or outperforms their benchmarks by significant margins. For more information, visit: this https URL .",
"We present a framework for learning single-view shape and pose prediction without using direct supervision for either. Our approach allows leveraging multi-view observations from unknown poses as supervisory signal during training. Our proposed training setup enforces geometric consistency between the independently predicted shape and pose from two views of the same instance. We consequently learn to predict shape in an emergent canonical (view-agnostic) frame along with a corresponding pose predictor. We show empirical and qualitative results using the ShapeNet dataset and observe encouragingly competitive performance to previous techniques which rely on stronger forms of supervision. We also demonstrate the applicability of our framework in a realistic setting which is beyond the scope of existing techniques: using a training dataset comprised of online product images where the underlying shape and pose are unknown.",
"Inspired by the recent success of methods that employ shape priors to achieve robust 3D reconstructions, we propose a novel recurrent neural network architecture that we call the 3D Recurrent Reconstruction Neural Network (3D-R2N2). The network learns a mapping from images of objects to their underlying 3D shapes from a large collection of synthetic data [13]. Our network takes in one or more images of an object instance from arbitrary viewpoints and outputs a reconstruction of the object in the form of a 3D occupancy grid. Unlike most of the previous works, our network does not require any image annotations or object class labels for training or testing. Our extensive experimental analysis shows that our reconstruction framework (i) outperforms the state-of-the-art methods for single view reconstruction, and (ii) enables the 3D reconstruction of objects in situations when traditional SFM SLAM methods fail (because of lack of texture and or wide baseline).",
"We study the problem of 3D object generation. We propose a novel framework, namely 3D Generative Adversarial Network (3D-GAN), which generates 3D objects from a probabilistic space by leveraging recent advances in volumetric convolutional networks and generative adversarial nets. The benefits of our model are three-fold: first, the use of an adversarial criterion, instead of traditional heuristic criteria, enables the generator to capture object structure implicitly and to synthesize high-quality 3D objects; second, the generator establishes a mapping from a low-dimensional probabilistic space to the space of 3D objects, so that we can sample objects without a reference image or CAD models, and explore the 3D object manifold; third, the adversarial discriminator provides a powerful 3D shape descriptor which, learned without supervision, has wide applications in 3D object recognition. Experiments demonstrate that our method generates high-quality 3D objects, and our unsupervisedly learned features achieve impressive performance on 3D object recognition, comparable with those of supervised learning methods.",
"Supervised 3D reconstruction has witnessed a significant progress through the use of deep neural networks. However, this increase in performance requires large scale annotations of 2D 3D data. In this paper, we explore inexpensive 2D supervision as an alternative for expensive 3D CAD annotation. Specifically, we use foreground masks as weak supervision through a raytrace pooling layer that enables perspective projection and backpropagation. Additionally, since the 3D reconstruction from masks is an ill posed problem, we propose to constrain the 3D reconstruction to the manifold of unlabeled realistic 3D shapes that match mask observations. We demonstrate that learning a log-barrier solution to this constrained optimization problem resembles the GAN objective, enabling the use of existing tools for training GANs. We evaluate and analyze the manifold constrained reconstruction on various datasets for single and multi-view reconstruction of both synthetic and real images.",
"From a single image, humans are able to perceive the full 3D shape of an object by exploiting learned shape priors from everyday life. Contemporary single-image 3D reconstruction algorithms aim to solve this task in a similar fashion, but often end up with priors that are highly biased by training classes. Here we present an algorithm, Generalizable Reconstruction (GenRe), designed to capture more generic, class-agnostic shape priors. We achieve this with an inference network and training procedure that combine 2.5D representations of visible surfaces (depth and silhouette), spherical shape representations of both visible and non-visible surfaces, and 3D voxel-based representations, in a principled manner that exploits the causal structure of how 3D shapes give rise to 2D images. Experiments demonstrate that GenRe performs well on single-view shape reconstruction, and generalizes to diverse novel objects from categories not seen during training.",
"What is a good vector representation of an object? We believe that it should be generative in 3D, in the sense that it can produce new 3D objects; as well as be predictable from 2D, in the sense that it can be perceived from 2D images. We propose a novel architecture, called the TL-embedding network, to learn an embedding space with these properties. The network consists of two components: (a) an autoencoder that ensures the representation is generative; and (b) a convolutional network that ensures the representation is predictable. This enables tackling a number of tasks including voxel prediction from 2D images and 3D model retrieval. Extensive experimental analysis demonstrates the usefulness and versatility of this embedding.",
"With the advent of affordable depth sensors, 3D capture becomes more and more ubiquitous and already has made its way into commercial products. Yet, capturing the geometry or complete shapes of everyday objects using scanning devices (e.g. Kinect) still comes with several challenges that result in noise or even incomplete shapes.",
"",
"When working with three-dimensional data, choice of representation is key. We explore voxel-based models, and present evidence for the viability of voxellated representations in applications including shape modeling and object classification. Our key contributions are methods for training voxel-based variational autoencoders, a user interface for exploring the latent space learned by the autoencoder, and a deep convolutional neural network architecture for object classification. We address challenges unique to voxel-based representations, and empirically evaluate our models on the ModelNet benchmark, where we demonstrate a 51.5 relative improvement in the state of the art for object classification."
]
} |
1903.00865 | 2918958546 | Face images captured through the glass are usually contaminated by reflections. The non-transmitted reflections make the reflection removal more challenging than for general scenes, because important facial features are completely occluded. In this paper, we propose and solve the face image reflection removal problem. We remove non-transmitted reflections by incorporating inpainting ideas into a guided reflection removal framework and recover facial features by considering various face-specific priors. We use a newly collected face reflection image dataset to train our model and compare with state-of-the-art methods. The proposed method shows advantages in estimating reflection-free face images for improving face recognition. | Previous works on reflection removal can be roughly classified into two categories. The first category solves it using non-learning-based methods. For example, Li et al. @cite_10 and Nikolas et al. @cite_0 made use of the different blur levels of the background and reflection layers. Shih et al. @cite_37 used a GMM patch prior to remove reflections with visible ghosting effects. The handcrafted priors adopted by these methods are based on observations of special properties of the background and reflection (e.g., different blur levels @cite_18 @cite_10), which are often violated in general scenes, especially when these properties are weakly observed. | {
"cite_N": [
"@cite_0",
"@cite_37",
"@cite_10",
"@cite_18"
],
"mid": [
"2608451532",
"1918869474",
"1980212291",
"2518628096"
],
"abstract": [
"Reflections are a common artifact in images taken through glass windows. Automatically removing the reflection artifacts after the picture is taken is an ill-posed problem. Attempts to solve this problem using optimization schemes therefore rely on various prior assumptions from the physical world. Instead of removing reflections from a single image, which has met with limited success so far, we propose a novel approach to suppress reflections. It is based on a Laplacian data fidelity term and an l-zero gradient sparsity term imposed on the output. With experiments on artificial and real-world images we show that our reflection suppression method performs better than the state-of-the-art reflection removal techniques.",
"Photographs taken through glass windows often contain both the desired scene and undesired reflections. Separating the reflection and transmission layers is an important but ill-posed problem that has both aesthetic and practical applications. In this work, we introduce the use of ghosting cues that exploit asymmetry between the layers, thereby helping to reduce the ill-posedness of the problem. These cues arise from shifted double reflections of the reflected scene off the glass surface. In double-pane windows, each pane reflects shifted and attenuated versions of objects on the same side of the glass as the camera. For single-pane windows, ghosting cues arise from shifted reflections on the two surfaces of the glass pane. Even though the ghosting is sometimes barely perceptible by humans, we can still exploit the cue for layer separation. In this work, we model the ghosted reflection using a double-impulse convolution kernel, and automatically estimate the spatial separation and relative attenuation of the ghosted reflection components. To separate the layers, we propose an algorithm that uses a Gaussian Mixture Model for regularization. Our method is automatic and requires only a single input image. We demonstrate that our approach removes a large fraction of reflections on both synthetic and real-world inputs.",
"This paper addresses extracting two layers from an image where one layer is smoother than the other. This problem arises most notably in intrinsic image decomposition and reflection interference removal. Layer decomposition from a single-image is inherently ill-posed and solutions require additional constraints to be enforced. We introduce a novel strategy that regularizes the gradients of the two layers such that one has a long tail distribution and the other a short tail distribution. While imposing the long tail distribution is a common practice, our introduction of the short tail distribution on the second layer is unique. We formulate our problem in a probabilistic framework and describe an optimization scheme to solve this regularization with only a few iterations. We apply our approach to the intrinsic image and reflection removal problems and demonstrate high quality layer separation on par with other techniques but being significantly faster than prevailing methods.",
"Reflection removal aims at separating the mixture of the desired scene and the undesired reflections. Locating reflection and background edges is a key step for reflection removal. In this paper, we present a visual depth guided method to remove reflections. Our idea is to use Depth of Field (DoF) to label the background and reflection edges. We propose a DoF confidence map where pixels with higher DoF values are assumed to belong to the desired background components. Moreover, we observe that images with different resolutions show different properties in the DoF map. Thus, we introduce a multi-scale DoF computing strategy to classify edge pixels more efficiently. Based on the results of edge classification, the background and reflection layers can be separated. Experimental results validate the effectiveness of our method using real-world photos."
]
} |
1903.00865 | 2918958546 | Face images captured through the glass are usually contaminated by reflections. The non-transmitted reflections make the reflection removal more challenging than for general scenes, because important facial features are completely occluded. In this paper, we propose and solve the face image reflection removal problem. We remove non-transmitted reflections by incorporating inpainting ideas into a guided reflection removal framework and recover facial features by considering various face-specific priors. We use a newly collected face reflection image dataset to train our model and compare with state-of-the-art methods. The proposed method shows advantages in estimating reflection-free face images for improving face recognition. | Deep learning frameworks also benefit the reflection removal problem. For example, Fan et al. @cite_6 proposed a two-stage deep learning approach to learn the mapping between the mixture images and the estimated clean images. Recently, Wan et al. @cite_33 also proposed a concurrent model to better preserve the background details. The method proposed by Zhang et al. @cite_27 first utilized a generative model to better learn the mappings from the mixture image to the clean images. However, existing methods are all designed for general scenes and have difficulty preserving facial details in the face image reflection removal problem. | {
"cite_N": [
"@cite_27",
"@cite_33",
"@cite_6"
],
"mid": [
"2798966709",
"2963177105",
"2963676366"
],
"abstract": [
"We present an approach to separating reflection from a single image. The approach uses a fully convolutional network trained end-to-end with losses that exploit low-level and high-level image information. Our loss function includes two perceptual losses: a feature loss from a visual perception network, and an adversarial loss that encodes characteristics of images in the transmission layers. We also propose a novel exclusion loss that enforces pixel-level layer separation. We create a dataset of real-world images with reflection and corresponding ground-truth transmission layers for quantitative evaluation and model training. We validate our method through comprehensive quantitative experiments and show that our approach outperforms state-of-the-art reflection removal methods in PSNR, SSIM, and perceptual user study. We also extend our method to two other image enhancement tasks to demonstrate the generality of our approach.",
"Removing the undesired reflections from images taken through the glass is of broad application to various computer vision tasks. Non-learning based methods utilize different handcrafted priors such as the separable sparse gradients caused by different levels of blurs, which often fail due to their limited description capability to the properties of real-world reflections. In this paper, we propose the Concurrent Reflection Removal Network (CRRN) to tackle this problem in a unified framework. Our proposed network integrates image appearance information and multi-scale gradient information with human perception inspired loss function, and is trained on a new dataset with 3250 reflection images taken under diverse real-world scenes. Extensive experiments on a public benchmark dataset show that the proposed method performs favorably against state-of-the-art methods.",
"This paper proposes a deep neural network structure that exploits edge information in addressing representative low-level vision tasks such as layer separation and image filtering. Unlike most other deep learning strategies applied in this context, our approach tackles these challenging problems by estimating edges and reconstructing images using only cascaded convolutional layers arranged such that no handcrafted or application-specific image-processing components are required. We apply the resulting transferrable pipeline to two different problem domains that are both sensitive to edges, namely, single image reflection removal and image smoothing. For the former, using a mild reflection smoothness assumption and a novel synthetic data generation method that acts as a type of weak supervision, our network is able to solve much more difficult reflection cases that cannot be handled by previous methods. For the latter, we also exceed the state-of-the-art quantitative and qualitative results by wide margins. In all cases, the proposed framework is simple, fast, and easy to transfer across disparate domains."
]
} |
1903.00865 | 2918958546 | Face images captured through the glass are usually contaminated by reflections. The non-transmitted reflections make the reflection removal more challenging than for general scenes, because important facial features are completely occluded. In this paper, we propose and solve the face image reflection removal problem. We remove non-transmitted reflections by incorporating inpainting ideas into a guided reflection removal framework and recover facial features by considering various face-specific priors. We use a newly collected face reflection image dataset to train our model and compare with state-of-the-art methods. The proposed method shows advantages in estimating reflection-free face images for improving face recognition. | Numerous methods have been proposed during the past decades to solve different face image enhancement problems, including face hallucination @cite_20, face deblurring @cite_35, and face inpainting @cite_24. Recently, end-to-end deep learning frameworks have been introduced to solve these problems in a data-driven manner. For example, Li et al. @cite_22 proposed a method based on a generative model to solve the face inpainting problem. Chen et al. @cite_19 made full use of the geometry prior to solve the face super-resolution problem. Shen et al. @cite_2 also proposed a method to solve the face deblurring problem by using face semantic priors. However, the face image reflection removal problem has never been explicitly modeled and solved. | {
"cite_N": [
"@cite_35",
"@cite_22",
"@cite_24",
"@cite_19",
"@cite_2",
"@cite_20"
],
"mid": [
"2187997753",
"2611104282",
"2096771748",
"2963676087",
"2792906307",
"2003749430"
],
"abstract": [
"The human face is one of the most interesting subjects involved in numerous applications. Significant progress has been made towards the image deblurring problem, however, existing generic deblurring methods are not able to achieve satisfying results on blurry face images. The success of the state-of-the-art image deblurring methods stems mainly from implicit or explicit restoration of salient edges for kernel estimation. When there is not much texture in the blurry image (e.g., face images), existing methods are less effective as only few edges can be used for kernel estimation. Moreover, recent methods are usually jeopardized by selecting ambiguous edges, which are imaged from the same edge of the object after blur, for kernel estimation due to local edge selection strategies. In this paper, we address these problems of deblurring face images by exploiting facial structures. We propose a maximum a posteriori (MAP) deblurring algorithm based on an exemplar dataset, without using the coarse-to-fine strategy or ad-hoc edge selections. Extensive evaluations against state-of-the-art methods demonstrate the effectiveness of the proposed algorithm for deblurring face images. We also show the extendability of our method to other specific deblurring tasks.",
"In this paper, we propose an effective face completion algorithm using a deep generative model. Different from well-studied background completion, the face completion task is more challenging as it often requires to generate semantically new pixels for the missing key components (e.g., eyes and mouths) that contain large appearance variations. Unlike existing nonparametric algorithms that search for patches to synthesize, our algorithm directly generates contents for missing regions based on a neural network. The model is trained with a combination of a reconstruction loss, two adversarial losses and a semantic parsing loss, which ensures pixel faithfulness and local-global contents consistency. With extensive experimental results, we demonstrate qualitatively and quantitatively that our model is able to deal with a large area of missing pixels in arbitrary shapes and generate realistic face completion results.",
"This paper presents a framework to automatically detect and recover the occluded facial region. We first derive a Bayesian formulation unifying the occlusion detection and recovery stages. Then a quality assessment model is developed to drive both the detection and recovery processes, which captures the face priors in both global correlation and local patterns. Based on this formulation, we further propose GraphCut-based detection and confidence-oriented sampling to attain optimal detection and recovery respectively. Compared to traditional works in image repairing, our approach is distinct in three aspects: (1) it frees the user from marking the occlusion area by incorporating an automatic occlusion detector; (2) it learns a face quality model as a criterion to guide the whole procedure; (3) it couples the detection and occlusion stages to simultaneously achieve two goals: accurate occlusion detection and high quality recovery. The comparative experiments show that our method can recover the occluded faces with both the global coherence and local details well preserved.",
"Face Super-Resolution (SR) is a domain-specific superresolution problem. The facial prior knowledge can be leveraged to better super-resolve face images. We present a novel deep end-to-end trainable Face Super-Resolution Network (FSRNet), which makes use of the geometry prior, i.e., facial landmark heatmaps and parsing maps, to super-resolve very low-resolution (LR) face images without well-aligned requirement. Specifically, we first construct a coarse SR network to recover a coarse high-resolution (HR) image. Then, the coarse HR image is sent to two branches: a fine SR encoder and a prior information estimation network, which extracts the image features, and estimates landmark heatmaps parsing maps respectively. Both image features and prior information are sent to a fine SR decoder to recover the HR image. To generate realistic faces, we also propose the Face Super-Resolution Generative Adversarial Network (FSRGAN) to incorporate the adversarial loss into FSRNet. Further, we introduce two related tasks, face alignment and parsing, as the new evaluation metrics for face SR, which address the inconsistency of classic metrics w.r.t. visual perception. Extensive experiments show that FSRNet and FSRGAN significantly outperforms state of the arts for very LR face SR, both quantitatively and qualitatively.",
"In this paper, we present an effective and efficient face deblurring algorithm by exploiting semantic cues via deep convolutional neural networks (CNNs). As face images are highly structured and share several key semantic components (e.g., eyes and mouths), the semantic information of a face provides a strong prior for restoration. As such, we propose to incorporate global semantic priors as input and impose local structure losses to regularize the output within a multi-scale deep CNN. We train the network with perceptual and adversarial losses to generate photo-realistic results and develop an incremental training strategy to handle random blur kernels in the wild. Quantitative and qualitative evaluations demonstrate that the proposed face deblurring algorithm restores sharp images with more facial details and performs favorably against state-of-the-art methods in terms of restoration quality, face recognition and execution speed.",
"In this paper, we study face hallucination, or synthesizing a high-resolution face image from an input low-resolution image, with the help of a large collection of other high-resolution face images. Our theoretical contribution is a two-step statistical modeling approach that integrates both a global parametric model and a local nonparametric model. At the first step, we derive a global linear model to learn the relationship between the high-resolution face images and their smoothed and down-sampled lower resolution ones. At the second step, we model the residue between an original high-resolution image and the reconstructed high-resolution image after applying the learned linear model by a patch-based non-parametric Markov network to capture the high-frequency content. By integrating both global and local models, we can generate photorealistic face images. A practical contribution is a robust warping algorithm to align the low-resolution face images to obtain good hallucination results. The effectiveness of our approach is demonstrated by extensive experiments generating high-quality hallucinated face images from low-resolution input with no manual alignment."
]
} |
1708.01715 | 2743992351 | This paper proposes a novel model for the rating prediction task in recommender systems which significantly outperforms previous state-of-the-art models on a time-split Netflix data set. Our model is based on a deep autoencoder with 6 layers and is trained end-to-end without any layer-wise pre-training. We empirically demonstrate that: a) deep autoencoder models generalize much better than the shallow ones, b) non-linear activation functions with negative parts are crucial for training deep models, and c) heavy use of regularization techniques such as dropout is necessary to prevent over-fitting. We also propose a new training algorithm based on iterative output re-feeding to overcome the natural sparseness of collaborative filtering. The new algorithm significantly speeds up training and improves model performance. Our code is available at this https URL | Deep learning has led to breakthroughs in image recognition, natural language understanding, and reinforcement learning. Naturally, these successes fuel an interest in using deep learning for recommender systems. First attempts at using deep learning for recommender systems involved restricted Boltzmann machines (RBM) . Several recent approaches use autoencoders , feed-forward neural networks and recurrent recommender networks . Many popular matrix factorization techniques can be thought of as a form of dimensionality reduction. It is, therefore, natural to adapt deep autoencoders for this task as well. (item-based autoencoder) and (user-based autoencoder) are first successful attempts to do so @cite_0 . | {
"cite_N": [
"@cite_0"
],
"mid": [
"2076566842"
],
"abstract": [
"Nonnegative matrix factorization (NMF) determines a lower rank approximation of a matrix @math where an integer @math is given and nonnegativity is imposed on all components of the factors @math and @math . NMF has attracted much attention for over a decade and has been successfully applied to numerous data analysis problems. In applications where the components of the data are necessarily nonnegative, such as chemical concentrations in experimental results or pixels in digital images, NMF provides a more relevant interpretation of the results since it gives nonsubtractive combinations of nonnegative basis vectors. In this paper, we introduce an algorithm for NMF based on alternating nonnegativity constrained least squares (NMF ANLS) and the active set-based fast algorithm for nonnegativity constrained least squares with multiple right-hand side vectors, and we discuss its convergence properties and a rigorous convergence criterion based on the Karush-Kuhn-Tucker (KKT) conditions. In addition, we also describe algorithms for sparse NMFs and regularized NMF. We show how we impose a sparsity constraint on one of the factors by @math -norm minimization and discuss its convergence properties. Our algorithms are compared to other commonly used NMF algorithms in the literature on several test data sets in terms of their convergence behavior."
]
} |
1708.01715 | 2743992351 | This paper proposes a novel model for the rating prediction task in recommender systems which significantly outperforms previous state-of-the-art models on a time-split Netflix data set. Our model is based on a deep autoencoder with 6 layers and is trained end-to-end without any layer-wise pre-training. We empirically demonstrate that: a) deep autoencoder models generalize much better than the shallow ones, b) non-linear activation functions with negative parts are crucial for training deep models, and c) heavy use of regularization techniques such as dropout is necessary to prevent over-fitting. We also propose a new training algorithm based on iterative output re-feeding to overcome the natural sparseness of collaborative filtering. The new algorithm significantly speeds up training and improves model performance. Our code is available at this https URL | Note that the Netflix Prize data also includes a temporal signal - the time when each rating was made. Thus, several classic CF approaches have been extended to incorporate temporal information, such as TimeSVD++ @cite_2 , as well as more recent RNN-based techniques such as recurrent recommender networks @cite_7 . | {
"cite_N": [
"@cite_7",
"@cite_2"
],
"mid": [
"2583674722",
"2099866409"
],
"abstract": [
"Recommender systems traditionally assume that user profiles and movie attributes are static. Temporal dynamics are purely reactive, that is, they are inferred after they are observed, e.g. after a user's taste has changed or based on hand-engineered temporal bias corrections for movies. We propose Recurrent Recommender Networks (RRN) that are able to predict future behavioral trajectories. This is achieved by endowing both users and movies with a Long Short-Term Memory (LSTM) autoregressive model that captures dynamics, in addition to a more traditional low-rank factorization. On multiple real-world datasets, our model offers excellent prediction accuracy and it is very compact, since we need not learn latent state but rather just the state transition function.",
"Most of the existing approaches to collaborative filtering cannot handle very large data sets. In this paper we show how a class of two-layer undirected graphical models, called Restricted Boltzmann Machines (RBM's), can be used to model tabular data, such as user's ratings of movies. We present efficient learning and inference procedures for this class of models and demonstrate that RBM's can be successfully applied to the Netflix data set, containing over 100 million user movie ratings. We also show that RBM's slightly outperform carefully-tuned SVD models. When the predictions of multiple RBM models and multiple SVD models are linearly combined, we achieve an error rate that is well over 6 better than the score of Netflix's own system."
]
} |
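The two 1708.01715 rows above describe autoencoder-based rating prediction, where the key implementation detail is that the reconstruction loss must be computed only over observed ratings, since the input vectors are extremely sparse. A minimal NumPy sketch of such a masked loss; the toy rating matrix and the function name are invented for illustration and are not taken from the paper:

```python
import numpy as np

def masked_mse(pred, ratings):
    """Mean squared error over observed (non-zero) ratings only.

    Unobserved entries are encoded as 0 and must not contribute to the
    loss, otherwise the model is pushed toward predicting zeros."""
    mask = ratings != 0
    return np.sum(((pred - ratings) * mask) ** 2) / np.sum(mask)

# Toy user-item rating matrix: 0 marks an unobserved entry.
ratings = np.array([[5.0, 0.0, 3.0],
                    [0.0, 4.0, 0.0]])
pred = np.array([[4.5, 2.0, 3.0],
                 [1.0, 4.0, 5.0]])
# Only the three observed entries contribute: (4.5-5)^2 / 3 = 0.25/3.
loss = masked_mse(pred, ratings)
```

The iterative output re-feeding mentioned in the abstract can then be sketched as treating a dense prediction `pred` as the target of a second forward pass, which gives every entry a training signal.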
1708.01676 | 2745166287 | Given a textual description of an image, phrase grounding localizes objects in the image referred to by query phrases in the description. State-of-the-art methods address the problem by ranking a set of proposals based on the relevance to each query, which are limited by the performance of independent proposal generation systems and ignore useful cues from context in the description. In this paper, we adopt a spatial regression method to break the performance limit, and introduce reinforcement learning techniques to further leverage semantic context information. We propose a novel Query-guided Regression network with Context policy (QRC Net) which jointly learns a Proposal Generation Network (PGN), a Query-guided Regression Network (QRN) and a Context Policy Network (CPN). Experiments show QRC Net provides a significant improvement in accuracy on two popular datasets: Flickr30K Entities and Referit Game, with 14.25% and 17.14% increase over the state-of-the-art respectively. | Phrase grounding requires learning correlation between visual and language modalities. Karpathy et al. @cite_18 propose to align sentence fragments and image regions in a subspace, and later replace the dependency tree with a bi-directional RNN in @cite_17 . Hu et al. @cite_24 propose an SCRC model which adopts a 2-layer LSTM to rank proposals using encoded query and visual features. Rohrbach et al. @cite_0 employ a latent attention network conditioned on the query which ranks proposals in an unsupervised scenario. Other approaches learn the correlation between visual and language modalities based on Canonical Correlation Analysis (CCA) @cite_1 methods. Plummer et al. @cite_12 first propose a CCA model to learn the multimodal correlation. Wang et al. @cite_19 employ structured matching and use phrase pairs to boost performance. Recently, Plummer et al. @cite_29 augment the CCA model to leverage extensive linguistic cues in the phrases. 
All of the above approaches rely on external object proposal systems and are hence bounded by their performance limits. | {
"cite_N": [
"@cite_18",
"@cite_29",
"@cite_1",
"@cite_24",
"@cite_0",
"@cite_19",
"@cite_12",
"@cite_17"
],
"mid": [
"2953276893",
"",
"2100235303",
"2963735856",
"2247513039",
"2520141964",
"",
"2951805548"
],
"abstract": [
"We introduce a model for bidirectional retrieval of images and sentences through a multi-modal embedding of visual and natural language data. Unlike previous models that directly map images or sentences into a common embedding space, our model works on a finer level and embeds fragments of images (objects) and fragments of sentences (typed dependency tree relations) into a common space. In addition to a ranking objective seen in previous work, this allows us to add a new fragment alignment objective that learns to directly associate these fragments across modalities. Extensive experimental evaluation shows that reasoning on both the global level of images and sentences and the finer level of their respective fragments significantly improves performance on image-sentence retrieval tasks. Additionally, our model provides interpretable predictions since the inferred inter-modal fragment alignment is explicit.",
"",
"We present a general method using kernel canonical correlation analysis to learn a semantic representation to web images and their associated text. The semantic space provides a common representation and enables a comparison between the text and images. In the experiments, we look at two approaches of retrieving images based on only their content from a text query. We compare orthogonalization approaches against a standard cross-representation retrieval technique known as the generalized vector space model.",
"In this paper, we address the task of natural language object retrieval, to localize a target object within a given image based on a natural language query of the object. Natural language object retrieval differs from text-based image retrieval task as it involves spatial information about objects within the scene and global scene context. To address this issue, we propose a novel Spatial Context Recurrent ConvNet (SCRC) model as scoring function on candidate boxes for object retrieval, integrating spatial configurations and global scene-level contextual information into the network. Our model processes query text, local image descriptors, spatial configurations and global context features through a recurrent network, outputs the probability of the query text conditioned on each candidate box as a score for the box, and can transfer visual-linguistic knowledge from image captioning domain to our task. Experimental results demonstrate that our method effectively utilizes both local and global information, outperforming previous baseline methods significantly on different datasets and scenarios, and can exploit large scale vision and language datasets for knowledge transfer.",
"Grounding (i.e. localizing) arbitrary, free-form textual phrases in visual content is a challenging problem with many applications for human-computer interaction and image-text reference resolution. Few datasets provide the ground truth spatial localization of phrases, thus it is desirable to learn from data with no or little grounding supervision. We propose a novel approach which learns grounding by reconstructing a given phrase using an attention mechanism, which can be either latent or optimized directly. During training our approach encodes the phrase using a recurrent network language model and then learns to attend to the relevant image region in order to reconstruct the input phrase. At test time, the correct attention, i.e., the grounding, is evaluated. If grounding supervision is available it can be directly applied via a loss over the attention mechanism. We demonstrate the effectiveness of our approach on the Flickr30k Entities and ReferItGame datasets with different levels of supervision, ranging from no supervision over partial supervision to full supervision. Our supervised variant improves by a large margin over the state-of-the-art on both datasets.",
"In this paper we introduce a new approach to phrase localization: grounding phrases in sentences to image regions. We propose a structured matching of phrases and regions that encourages the semantic relations between phrases to agree with the visual relations between regions. We formulate structured matching as a discrete optimization problem and relax it to a linear program. We use neural networks to embed regions and phrases into vectors, which then define the similarities (matching weights) between regions and phrases. We integrate structured matching with neural networks to enable end-to-end training. Experiments on Flickr30K Entities demonstrate the empirical effectiveness of our approach.",
"",
"We present a model that generates natural language descriptions of images and their regions. Our approach leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between language and visual data. Our alignment model is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Multimodal Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate that our alignment model produces state of the art results in retrieval experiments on Flickr8K, Flickr30K and MSCOCO datasets. We then show that the generated descriptions significantly outperform retrieval baselines on both full images and on a new dataset of region-level annotations."
]
} |
1708.01676 | 2745166287 | Given a textual description of an image, phrase grounding localizes objects in the image referred to by query phrases in the description. State-of-the-art methods address the problem by ranking a set of proposals based on the relevance to each query, which are limited by the performance of independent proposal generation systems and ignore useful cues from context in the description. In this paper, we adopt a spatial regression method to break the performance limit, and introduce reinforcement learning techniques to further leverage semantic context information. We propose a novel Query-guided Regression network with Context policy (QRC Net) which jointly learns a Proposal Generation Network (PGN), a Query-guided Regression Network (QRN) and a Context Policy Network (CPN). Experiments show QRC Net provides a significant improvement in accuracy on two popular datasets: Flickr30K Entities and Referit Game, with 14.25% and 17.14% increase over the state-of-the-art respectively. | Proposal generation systems are widely used in object detection and phrase grounding tasks. Two popular methods, Selective Search @cite_16 and EdgeBoxes @cite_22 , employ efficient low-level features to produce proposals for possible object locations. Based on proposals, the spatial regression method is successfully applied in object detection. Fast R-CNN @cite_36 first employs a regression network to regress proposals generated by Selective Search @cite_16 . Based on this, Ren et al. @cite_4 incorporate the proposal generation system by introducing a Region Proposal Network (RPN), which improves both accuracy and speed in object detection. Redmon et al. @cite_5 employ the regression method at grid level and use non-maximal suppression to improve the detection speed. Liu et al. @cite_28 integrate proposal generation into a single network and use outputs discretized over different ratios and scales of feature maps to further increase the performance. 
Inspired by the success of RPN in object detection, we build a PGN and regress proposals conditioned on the input query. | {
"cite_N": [
"@cite_4",
"@cite_22",
"@cite_36",
"@cite_28",
"@cite_5",
"@cite_16"
],
"mid": [
"2953106684",
"7746136",
"",
"2193145675",
"",
"2088049833"
],
"abstract": [
"State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features---using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.",
"The use of object proposals is an effective recent approach for increasing the computational efficiency of object detection. We propose a novel method for generating object bounding box proposals using edges. Edges provide a sparse yet informative representation of an image. Our main observation is that the number of contours that are wholly contained in a bounding box is indicative of the likelihood of the box containing an object. We propose a simple box objectness score that measures the number of edges that exist in the box minus those that are members of contours that overlap the box’s boundary. Using efficient data structures, millions of candidate boxes can be evaluated in a fraction of a second, returning a ranked set of a few thousand top-scoring proposals. Using standard metrics, we show results that are significantly more accurate than the current state-of-the-art while being faster to compute. In particular, given just 1000 proposals we achieve over 96 object recall at overlap threshold of 0.5 and over 75 recall at the more challenging overlap of 0.7. Our approach runs in 0.25 seconds and we additionally demonstrate a near real-time variant with only minor loss in accuracy.",
"",
"We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For (300 300 ) input, SSD achieves 74.3 mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for (512 512 ) input, SSD achieves 76.9 mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https: github.com weiliu89 caffe tree ssd.",
"",
"This paper addresses the problem of generating possible object locations for use in object recognition. We introduce selective search which combines the strength of both an exhaustive search and segmentation. Like segmentation, we use the image structure to guide our sampling process. Like exhaustive search, we aim to capture all possible object locations. Instead of a single technique to generate possible object locations, we diversify our search and use a variety of complementary image partitionings to deal with as many image conditions as possible. Our selective search results in a small set of data-driven, class-independent, high quality locations, yielding 99 recall and a Mean Average Best Overlap of 0.879 at 10,097 locations. The reduced number of locations compared to an exhaustive search enables the use of stronger machine learning techniques and stronger appearance models for object recognition. In this paper we show that our selective search enables the use of the powerful Bag-of-Words model for recognition. The selective search software is made publicly available (Software: http: disi.unitn.it uijlings SelectiveSearch.html )."
]
} |
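The proposal-generation and regression rows above all build on two small primitives: intersection-over-union (IoU) for matching proposals to ground truth, and the (tx, ty, tw, th) bounding-box regression parameterization popularized by the R-CNN family. A self-contained sketch under the usual (x1, y1, x2, y2) corner convention; this is an illustrative reimplementation, not code from any of the cited papers:

```python
import math

def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def regression_targets(proposal, gt):
    """R-CNN-style targets mapping a proposal onto a ground-truth box:
    center offsets normalized by proposal size, log scale ratios."""
    pw, ph = proposal[2] - proposal[0], proposal[3] - proposal[1]
    px, py = proposal[0] + 0.5 * pw, proposal[1] + 0.5 * ph
    gw, gh = gt[2] - gt[0], gt[3] - gt[1]
    gx, gy = gt[0] + 0.5 * gw, gt[1] + 0.5 * gh
    return ((gx - px) / pw, (gy - py) / ph,
            math.log(gw / pw), math.log(gh / ph))

# A proposal that exactly matches the ground truth gives IoU 1
# and all-zero regression targets.
box = (10.0, 10.0, 50.0, 30.0)
assert iou(box, box) == 1.0
assert regression_targets(box, box) == (0.0, 0.0, 0.0, 0.0)
```

Ranking-based grounding methods threshold the IoU to decide which proposals count as hits, while regression-based ones like the QRN train a network to predict these four targets from query-conditioned features.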
1708.01676 | 2745166287 | Given a textual description of an image, phrase grounding localizes objects in the image referred to by query phrases in the description. State-of-the-art methods address the problem by ranking a set of proposals based on the relevance to each query, which are limited by the performance of independent proposal generation systems and ignore useful cues from context in the description. In this paper, we adopt a spatial regression method to break the performance limit, and introduce reinforcement learning techniques to further leverage semantic context information. We propose a novel Query-guided Regression network with Context policy (QRC Net) which jointly learns a Proposal Generation Network (PGN), a Query-guided Regression Network (QRN) and a Context Policy Network (CPN). Experiments show QRC Net provides a significant improvement in accuracy on two popular datasets: Flickr30K Entities and Referit Game, with 14.25% and 17.14% increase over the state-of-the-art respectively. | Reinforcement learning is first introduced to deep neural networks in Deep Q-learning (DQN) @cite_8 , which teaches an agent to play ATARI games. Lillicrap et al. @cite_14 modify DQN by introducing deep deterministic policy gradients, which enables the reinforcement learning framework to be optimized in continuous space. Recently, Yu et al. @cite_33 adopt a reinforcer to guide a speaker-listener network to sample more discriminative expressions in referring tasks. Liang et al. @cite_9 introduce reinforcement learning to traverse a directed semantic action graph to learn visual relationships and attributes of objects in images. Inspired by the successful applications of reinforcement learning, we propose a CPN to assign rewards as policy gradients to leverage context information in the training stage. | {
"cite_N": [
"@cite_9",
"@cite_14",
"@cite_33",
"@cite_8"
],
"mid": [
"2963650529",
"2173248099",
"2571175805",
"2145339207"
],
"abstract": [
"Computers still struggle to understand the interdependency of objects in the scene as a whole, e.g., relations between objects or their attributes. Existing methods often ignore global context cues capturing the interactions among different object instances, and can only recognize a handful of types by exhaustively training individual detectors for all possible relationships. To capture such global interdependency, we propose a deep Variation-structured Re-inforcement Learning (VRL) framework to sequentially discover object relationships and attributes in the whole image. First, a directed semantic action graph is built using language priors to provide a rich and compact representation of semantic correlations between object categories, predicates, and attributes. Next, we use a variation-structured traversal over the action graph to construct a small, adaptive action set for each step based on the current state and historical actions. In particular, an ambiguity-aware object mining scheme is used to resolve semantic ambiguity among object categories that the object detector fails to distinguish. We then make sequential predictions using a deep RL framework, incorporating global context cues and semantic embeddings of previously extracted phrases in the state vector. Our experiments on the Visual Relationship Detection (VRD) dataset and the large-scale Visual Genome dataset validate the superiority of VRL, which can achieve significantly better detection results on datasets involving thousands of relationship and attribute types. We also demonstrate that VRL is able to predict unseen types embedded in our action graph by learning correlations on shared graph nodes.",
"We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.",
"Referring expressions are natural language constructions used to identify particular objects within a scene. In this paper, we propose a unified framework for the tasks of referring expression comprehension and generation. Our model is composed of three modules: speaker, listener, and reinforcer. The speaker generates referring expressions, the listener comprehends referring expressions, and the reinforcer introduces a reward function to guide sampling of more discriminative expressions. The listener-speaker modules are trained jointly in an end-to-end learning framework, allowing the modules to be aware of one another during learning while also benefiting from the discriminative reinforcer’s feedback. We demonstrate that this unified framework and training achieves state-of-the-art results for both comprehension and generation on three referring expression datasets.",
"An artificial agent is developed that learns to play a diverse range of classic Atari 2600 computer games directly from sensory experience, achieving a performance comparable to that of an expert human player; this work paves the way to building general-purpose learning algorithms that bridge the divide between perception and action."
]
} |
1708.01783 | 2742526081 | In the scenario of one/multi-shot learning, conventional end-to-end learning strategies without sufficient supervision are usually not powerful enough to learn correct patterns from noisy signals. Thus, given a CNN pre-trained for object classification, this paper proposes a method that first summarizes the knowledge hidden inside the CNN into a dictionary of latent activation patterns, and then builds a new model for part localization by manually assembling latent patterns related to the target part via human interactions. We use very few (e.g., three) annotations of a semantic object part to retrieve certain latent patterns from conv-layers to represent the target part. We then visualize these latent patterns and ask users to further remove incorrect patterns, in order to refine part representation. With the guidance of human interactions, our method exhibited superior performance of part localization in experiments. | | | In recent years, many methods have been developed to explain the semantics hidden in the CNN. Studies of @cite_9 @cite_16 @cite_14 passively visualized the content of some given CNN units. @cite_19 analyzed statistics of CNN features. | {
"cite_N": [
"@cite_19",
"@cite_14",
"@cite_9",
"@cite_16"
],
"mid": [
"1661149683",
"2962851944",
"2952186574",
"2949987032"
],
"abstract": [
"We introduce an approach for analyzing the variation of features generated by convolutional neural networks (CNNs) trained on large image datasets with respect to scene factors that occur in natural images. Such factors may include object style, 3D viewpoint, color, and scene lighting configuration. Our approach analyzes CNN feature responses with respect to different scene factors by controlling for them via rendering using a large database of 3D CAD models. The rendered images are presented to a trained CNN and responses for different layers are studied with respect to the input scene factors. We perform a linear decomposition of the responses based on knowledge of the input scene factors and analyze the resulting components. In particular, we quantify their relative importance in the CNN responses and visualize them using principal component analysis. We show qualitative and quantitative results of our study on three trained CNNs: AlexNet [18], Places [43], and Oxford VGG [8]. We observe important differences across the different networks and CNN layers with respect to different scene factors and object categories. Finally, we demonstrate that our analysis based on computer-generated imagery translates to the network representation of natural images.",
"This paper addresses the visualisation of image classification models, learnt using deep Convolutional Networks (ConvNets). We consider two visualisation techniques, based on computing the gradient of the class score with respect to the input image. The first one generates an image, which maximises the class score [5], thus visualising the notion of the class, captured by a ConvNet. The second technique computes a class saliency map, specific to a given image and class. We show that such maps can be employed for weakly supervised object segmentation using classification ConvNets. Finally, we establish the connection between the gradient-based ConvNet visualisation methods and deconvolutional networks [13].",
"Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark. However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we address both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. We also perform an ablation study to discover the performance contribution from different model layers. This enables us to find model architectures that outperform Krizhevsky et al. on the ImageNet classification benchmark. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets.",
"Image representations, from SIFT and Bag of Visual Words to Convolutional Neural Networks (CNNs), are a crucial component of almost any image understanding system. Nevertheless, our understanding of them remains limited. In this paper we conduct a direct analysis of the visual information contained in representations by asking the following question: given an encoding of an image, to which extent is it possible to reconstruct the image itself? To answer this question we contribute a general framework to invert representations. We show that this method can invert representations such as HOG and SIFT more accurately than recent alternatives while being applicable to CNNs too. We then use this technique to study the inverse of recent state-of-the-art CNN image representations for the first time. Among our findings, we show that several layers in CNNs retain photographically accurate information about the image, with different degrees of geometric and photometric invariance."
]
} |
1708.01783 | 2742526081 | In the scenario of one/multi-shot learning, conventional end-to-end learning strategies without sufficient supervision are usually not powerful enough to learn correct patterns from noisy signals. Thus, given a CNN pre-trained for object classification, this paper proposes a method that first summarizes the knowledge hidden inside the CNN into a dictionary of latent activation patterns, and then builds a new model for part localization by manually assembling latent patterns related to the target part via human interactions. We use very few (e.g., three) annotations of a semantic object part to retrieve certain latent patterns from conv-layers to represent the target part. We then visualize these latent patterns and ask users to further remove incorrect patterns, in order to refine part representation. With the guidance of human interactions, our method exhibited superior performance of part localization in experiments. | Unlike passive CNN visualization, we hope to actively semanticize CNNs by discovering patterns related to the target part, which is more challenging. Given CNN feature maps, Zhou @cite_2 @cite_18 discovered latent "scene" semantics. Simon discovered objects @cite_15 from CNN activations in an unsupervised manner, and learned part concepts in a supervised fashion @cite_7 . @cite_21 mined CNN patterns for a part concept and transformed the pattern knowledge into an AOG model. | {
"cite_N": [
"@cite_18",
"@cite_7",
"@cite_21",
"@cite_2",
"@cite_15"
],
"mid": [
"2950328304",
"2949194058",
"2954346764",
"1899185266",
"2949820118"
],
"abstract": [
"In this work, we revisit the global average pooling layer proposed in [13], and shed light on how it explicitly enables the convolutional neural network to have remarkable localization ability despite being trained on image-level labels. While this technique was previously proposed as a means for regularizing training, we find that it actually builds a generic localizable deep representation that can be applied to a variety of tasks. Despite the apparent simplicity of global average pooling, we are able to achieve 37.1% top-5 error for object localization on ILSVRC 2014, which is remarkably close to the 34.2% top-5 error achieved by a fully supervised CNN approach. We demonstrate that our network is able to localize the discriminative image regions on a variety of tasks despite not being trained for them",
"Current fine-grained classification approaches often rely on a robust localization of object parts to extract localized feature representations suitable for discrimination. However, part localization is a challenging task due to the large variation of appearance and pose. In this paper, we show how pre-trained convolutional neural networks can be used for robust and efficient object part discovery and localization without the necessity to actually train the network on the current dataset. Our approach called \"part detector discovery\" (PDD) is based on analyzing the gradient maps of the network outputs and finding activation centers spatially related to annotated semantic parts or bounding boxes. This allows us not just to obtain excellent performance on the CUB200-2011 dataset, but in contrast to previous approaches also to perform detection and bird classification jointly without requiring a given bounding box annotation during testing and ground-truth parts during training. The code is available at this http URL and this https URL",
"This paper proposes a learning strategy that extracts object-part concepts from a pre-trained convolutional neural network (CNN), in an attempt to 1) explore explicit semantics hidden in CNN units and 2) gradually grow a semantically interpretable graphical model on the pre-trained CNN for hierarchical object understanding. Given part annotations on very few (e.g., 3-12) objects, our method mines certain latent patterns from the pre-trained CNN and associates them with different semantic parts. We use a four-layer And-Or graph to organize the mined latent patterns, so as to clarify their internal semantic hierarchy. Our method is guided by a small number of part annotations, and it achieves superior performance (about 13%-107% improvement) in part center prediction on the PASCAL VOC and ImageNet datasets.",
"With the success of new computational architectures for visual processing, such as convolutional neural networks (CNN) and access to image databases with millions of labeled examples (e.g., ImageNet, Places), the state of the art in computer vision is advancing rapidly. One important factor for continued progress is to understand the representations that are learned by the inner layers of these deep architectures. Here we show that object detectors emerge from training CNNs to perform scene classification. As scenes are composed of objects, the CNN for scene classification automatically discovers meaningful objects detectors, representative of the learned scene categories. With object detectors emerging as a result of learning to recognize scenes, our work demonstrates that the same network can perform both scene recognition and object localization in a single forward-pass, without ever having been explicitly taught the notion of objects.",
"Part models of object categories are essential for challenging recognition tasks, where differences in categories are subtle and only reflected in appearances of small parts of the object. We present an approach that is able to learn part models in a completely unsupervised manner, without part annotations and even without given bounding boxes during learning. The key idea is to find constellations of neural activation patterns computed using convolutional neural networks. In our experiments, we outperform existing approaches for fine-grained recognition on the CUB200-2011, NA birds, Oxford PETS, and Oxford Flowers dataset in case no part or bounding box annotations are available and achieve state-of-the-art performance for the Stanford Dog dataset. We also show the benefits of neural constellation models as a data augmentation technique for fine-tuning. Furthermore, our paper unites the areas of generic and fine-grained classification, since our approach is suitable for both scenarios. The source code of our method is available online at this http URL"
]
} |
1708.01783 | 2742526081 | In the scenario of one/multi-shot learning, conventional end-to-end learning strategies without sufficient supervision are usually not powerful enough to learn correct patterns from noisy signals. Thus, given a CNN pre-trained for object classification, this paper proposes a method that first summarizes the knowledge hidden inside the CNN into a dictionary of latent activation patterns, and then builds a new model for part localization by manually assembling latent patterns related to the target part via human interactions. We use very few (e.g., three) annotations of a semantic object part to retrieve certain latent patterns from conv-layers to represent the target part. We then visualize these latent patterns and ask users to further remove incorrect patterns, in order to refine part representation. With the guidance of human interactions, our method exhibited superior performance of part localization in experiments. | | | In many studies, people used AOGs to represent the semantic hierarchy of objects or scenes @cite_12 @cite_3 . We use the AOG to associate the latent patterns with part semantics, which eases the visualization of CNN patterns and enables semantic-level interactions on CNN patterns. | {
"cite_N": [
"@cite_3",
"@cite_12"
],
"mid": [
"1999160507",
"347936517"
],
"abstract": [
"This paper presents a framework for unsupervised learning of a hierarchical reconfigurable image template - the AND-OR Template (AOT) for visual objects. The AOT includes: 1) hierarchical composition as \"AND\" nodes, 2) deformation and articulation of parts as geometric \"OR\" nodes, and 3) multiple ways of composition as structural \"OR\" nodes. The terminal nodes are hybrid image templates (HIT) [17] that are fully generative to the pixels. We show that both the structures and parameters of the AOT model can be learned in an unsupervised way from images using an information projection principle. The learning algorithm consists of two steps: 1) a recursive block pursuit procedure to learn the hierarchical dictionary of primitives, parts, and objects, and 2) a graph compression procedure to minimize model structure for better generalizability. We investigate the factors that influence how well the learning algorithm can identify the underlying AOT. And we propose a number of ways to evaluate the performance of the learned AOTs through both synthesized examples and real-world images. Our model advances the state of the art for object detection by improving the accuracy of template matching.",
"We present a novel structure learning method,Max Margin AND OR Graph (MM-AOG), for parsing the human body into parts and recovering their poses. Our method represents the human body and its parts by an AND OR graph, which is a multi-level mixture of Markov Random Fields (MRFs). Max margin learning, which is a generalization of the training algorithm for support vector machines (SVMs), is used to learn the parameters of the AND OR graph model discriminatively. There are four advantages from this combination of AND OR graphs and max-margin learning. Firstly, the AND OR graph allows us to handle enormous articulated poses with a compact graphical model. Secondly, max-margin learning has more discriminative power than the traditional maximum likelihood approach. Thirdly, the parameters of the AND OR graph model are optimized globally. In particular, the weights of the appearancemodel for individual nodes and the relative importance of spatial relationships between nodes are learnt simultaneously. Finally, the kernel trick can be used to handle high dimensional features and to enable complex similarity measure of shapes. We perform comparison experiments on the baseball datasets, showing significant improvements over state of the art methods."
]
} |
1708.01783 | 2742526081 | In the scenario of one/multi-shot learning, conventional end-to-end learning strategies without sufficient supervision are usually not powerful enough to learn correct patterns from noisy signals. Thus, given a CNN pre-trained for object classification, this paper proposes a method that first summarizes the knowledge hidden inside the CNN into a dictionary of latent activation patterns, and then builds a new model for part localization by manually assembling latent patterns related to the target part via human interactions. We use very few (e.g., three) annotations of a semantic object part to retrieve certain latent patterns from conv-layers to represent the target part. We then visualize these latent patterns and ask users to further remove incorrect patterns, in order to refine part representation. With the guidance of human interactions, our method exhibited superior performance of part localization in experiments. | | | Unsupervised object discovery @cite_15 was formulated as a problem of mining common foreground patterns from images, and many sophisticated methods were developed for this problem. In contrast, given a pre-trained object-level model, un-/weakly-supervised learning of part representations is a different problem. | {
"cite_N": [
"@cite_15"
],
"mid": [
"2949820118"
],
"abstract": [
"Part models of object categories are essential for challenging recognition tasks, where differences in categories are subtle and only reflected in appearances of small parts of the object. We present an approach that is able to learn part models in a completely unsupervised manner, without part annotations and even without given bounding boxes during learning. The key idea is to find constellations of neural activation patterns computed using convolutional neural networks. In our experiments, we outperform existing approaches for fine-grained recognition on the CUB200-2011, NA birds, Oxford PETS, and Oxford Flowers dataset in case no part or bounding box annotations are available and achieve state-of-the-art performance for the Stanford Dog dataset. We also show the benefits of neural constellation models as a data augmentation technique for fine-tuning. Furthermore, our paper unites the areas of generic and fine-grained classification, since our approach is suitable for both scenarios. The source code of our method is available online at this http URL"
]
} |
1708.01797 | 2743461834 | In many lock-free algorithms, threads help one another, and each operation creates a descriptor that describes how other threads should help it. Allocating and reclaiming descriptors introduces significant space and time overhead. We introduce the first descriptor abstract data type (ADT), which captures the usage of descriptors by lock-free algorithms. We then develop a weak descriptor ADT which has weaker semantics, but can be implemented significantly more efficiently. We show how a large class of lock-free algorithms can be transformed to use weak descriptors, and demonstrate our technique by transforming several algorithms, including the leading k-compare-and-swap (k-CAS) algorithm. The original k-CAS algorithm allocates at least k+1 new descriptors per k-CAS. In contrast, our implementation allocates two descriptors per process, and each process simply reuses its two descriptors. Experiments on a variety of workloads show significant performance improvements over implementations that reclaim descriptors, and reductions of up to three orders of magnitude in peak memory usage. | The long-lived renaming (LLR) problem is related to our work (see @cite_30 for a survey), but its solutions do not solve our problem. LLR provides processes with operations to acquire one unique resource from a pool of resources, and subsequently release it. One could imagine a scheme in which processes use LLR to reuse a small set of descriptors by invoking acquire instead of allocating a new descriptor, and eventually invoking release. Note, however, that a descriptor can safely be released only once it can no longer be accessed by any other process. Determining when it is safe to release a descriptor is as hard as performing general memory reclamation, and would also require delaying the release (and subsequent acquisition) of a descriptor (which would increase the number of descriptors needed). In contrast, our weak descriptors eliminate the need for memory reclamation, and allow immediate reuse. | {
"cite_N": [
"@cite_30"
],
"mid": [
"2005147105"
],
"abstract": [
"Long-lived renaming allows processes to repeatedly get distinct names from a small name space and release these names. This paper presents two long-lived renaming algorithms in which the name a process gets is bounded above by the number of processes currently occupying a name or performing one of these operations. The first algorithm is asynchronous, uses LL SC objects, and has step complexity that is linear in the number of processes, c, currently getting or releasing a name. The second is synchronous, uses registers and counters, and has step complexity that is polylogarithmic in c. Both tolerate any number of process crashes."
]
} |
1708.01806 | 2743405448 | Noteheads are the interface between the written score and music. Each notehead on the page signifies one note to be played, and detecting noteheads is thus an unavoidable step for Optical Music Recognition. Noteheads are clearly distinct objects, however, the variety of music notation handwriting makes noteheads harder to identify, and while handwritten music notation symbol classification is a well-studied task, symbol detection has usually been limited to heuristics and rule-based systems instead of machine learning methods better suited to deal with the uncertainties in handwriting. We present ongoing work on a simple notehead detector using convolutional neural networks for pixel classification and bounding box regression that achieves a detection f-score of 0.97 on binary score images in the MUSCIMA++ dataset, does not require staff removal, and is applicable to a variety of handwriting styles and levels of musical complexity. | Convolutional neural networks have previously been applied successfully to music scores by Calvo-Zaragoza et al., for segmentation into staffline, notation, and text regions @cite_12 or binarization @cite_10 , with convincing results that generalize over various input modes. | {
"cite_N": [
"@cite_10",
"@cite_12"
],
"mid": [
"2721978941",
"2613159350"
],
"abstract": [
"Musical documents may contain heterogeneous information such as music symbols, text, staff lines, ornaments, annotations, and editorial data. Before any attempt at automatically recognizing the information on scores, it is usually necessary to detect and classify each constituent layer of information into different categories. The greatest obstacle of this classification process is the high heterogeneity among music collections, which makes it difficult to propose methods that can be generalizable to a broad range of sources. In this paper we propose a novel machine learning framework that focuses on extracting the different layers within musical documents by categorizing the image at pixel level. The main advantage of our approach is that it can be used regardless of the type of document provided, as long as training data is available. We illustrate some of the capabilities of the framework by showing examples of common tasks that are frequently performed on images of musical documents, such as binarization, staff-line removal, symbol isolation, and complete layout analysis. All these are tasks for which our approach has shown promising performance. We believe our framework will allow the development of generalizable and scalable automatic music recognition systems, thus facilitating the creation of large-scale browsable and searchable repositories of music documents.",
"Staff-line removal is an important preprocessing stage for most optical music recognition systems. Common procedures to solve this task involve image processing techniques. In contrast to these traditional methods based on hand-engineered transformations, the problem can also be approached as a classification task in which each pixel is labeled as either staff or symbol, so that only those that belong to symbols are kept in the image. In order to perform this classification, we propose the use of convolutional neural networks, which have demonstrated an outstanding performance in image retrieval tasks. The initial features of each pixel consist of a square patch from the input image centered at that pixel. The proposed network is trained by using a dataset which contains pairs of scores with and without the staff lines. Our results in both binary and grayscale images show that the proposed technique is very accurate, outperforming both other classifiers and the state-of-the-art strategies considered. In addition, several advantages of the presented methodology with respect to traditional procedures proposed so far are discussed."
]
} |
1903.00440 | 2919233950 | Super-resolution reconstruction (SRR) is a process aimed at enhancing spatial resolution of images, either from a single observation, based on the learned relation between low and high resolution, or from multiple images presenting the same scene. SRR is particularly valuable, if it is infeasible to acquire images at desired resolution, but many images of the same scene are available at lower resolution---this is inherent to a variety of remote sensing scenarios. Recently, we have witnessed substantial improvement in single-image SRR attributed to the use of deep neural networks for learning the relation between low and high resolution. Importantly, deep learning has not been exploited for multiple-image SRR, which benefits from information fusion and in general allows for achieving higher reconstruction accuracy. In this letter, we introduce a new method which combines the advantages of multiple-image fusion with learning the low-to-high resolution mapping using deep networks. The reported experimental results indicate that our algorithm outperforms the state-of-the-art SRR methods, including these that operate from a single image, as well as those that perform multiple-image fusion. | Recently, we proposed the EvoIM method @cite_10 @cite_19 , which employs a genetic algorithm to optimize the hyper-parameters that control the IM used in FRSR @cite_13 , and to evolve the convolution kernels instead of the Gaussian blur used in FRSR. We showed that the reconstruction process can be effectively adapted to different imaging conditions---in particular, we used Sentinel-2 images at original resolution as LR inputs, and compared the reconstruction outcome with SPOT images presenting the same region. | {
"cite_N": [
"@cite_13",
"@cite_19",
"@cite_10"
],
"mid": [
"2165939075",
"2794369679",
"2811720408"
],
"abstract": [
"Super-resolution reconstruction produces one or a set of high-resolution images from a set of low-resolution images. In the last two decades, a variety of super-resolution methods have been proposed. These methods are usually very sensitive to their assumed model of data and noise, which limits their utility. This paper reviews some of these methods and addresses their shortcomings. We propose an alternate approach using L1 norm minimization and robust regularization based on a bilateral prior to deal with different data and noise models. This computationally inexpensive method is robust to errors in motion and blur estimation and results in images with sharp edges. Simulation results confirm the effectiveness of our method and demonstrate its superiority to other super-resolution methods.",
"Super-resolution reconstruction (SRR) allows for producing a high-resolution (HR) image from a set of low-resolution (LR) observations. The majority of existing methods require tuning a number of hyper-parameters which control the reconstruction process and configure the imaging model that is supposed to reflect the relation between high and low resolution. In this paper, we demonstrate that the reconstruction process is very sensitive to the actual relation between LR and HR images, and we argue that this is a substantial obstacle in deploying SRR in practice. We propose to search the hyper-parameter space using a genetic algorithm (GA), thus adapting to the actual relation between LR and HR, which has not been reported in the literature so far. The results of our extensive experimental study clearly indicate that our GA improves the capacities of SRR. Importantly, the GA converges to different values of the hyper-parameters depending on the applied degradation procedure, which is confirmed using statistical tests.",
"Super-resolution reconstruction (SRR) allows for enhancing image spatial resolution from low-resolution (LR) observations, which are assumed to have been derived from a hypothetical high-resolution image by applying a certain imaging model (IM). However, if the actual degradation is different from the assumed IM, which is often the case in real-world scenarios, then the reconstruction quality is affected. We introduce a genetic algorithm to optimize the SRR hyper-parameters and to discover the actual IM by evolving the kernels exploited in the IM. The reported experimental results indicate that our approach outperforms the state of the art for a variety of images, including difficult real-life satellite data."
]
} |
1903.00502 | 2918695860 | Zero-shot learning extends the conventional object classification to the unseen class recognition by introducing semantic representations of classes. Existing approaches predominantly focus on learning the proper mapping function for visual-semantic embedding, while neglecting the effect of learning discriminative visual features. In this paper, we study the significance of the discriminative region localization. We propose a semantic-guided multi-attention localization model, which automatically discovers the most discriminative parts of objects for zero-shot learning without any human annotations. Our model jointly learns cooperative global and local features from the whole object as well as the detected parts to categorize objects based on semantic descriptions. Moreover, with the joint supervision of embedding softmax loss and class-center triplet loss, the model is encouraged to learn features with high inter-class dispersion and intra-class compactness. Through comprehensive experiments on three widely used zero-shot learning benchmarks, we show the efficacy of the multi-attention localization and our proposed approach improves the state-of-the-art results by a considerable margin. | While several early works of zero-shot learning @cite_5 make use of attributes as intermediate information to infer the label of an image, the current majority of zero-shot learning approaches treat the problem as a visual-semantic embedding one. A bilinear compatibility function between the images and the attribute space is learned using the ranking loss in ALE @cite_0 or the ridge regression loss in ESZSL @cite_15 . Some other zero-shot learning approaches learn non-linear multi-modal embeddings. LatEm @cite_27 learns a piecewise linear model by selecting among multiple learned linear mappings. DEM @cite_9 presents a deep zero-shot learning model that increases expressiveness by adding the non-linear ReLU activation. | {
"cite_N": [
"@cite_9",
"@cite_0",
"@cite_27",
"@cite_5",
"@cite_15"
],
"mid": [
"2950652153",
"2171061940",
"2334493732",
"2128532956",
"652269744"
],
"abstract": [
"Zero-shot learning (ZSL) models rely on learning a joint embedding space where both textual semantic description of object classes and visual representation of object images can be projected to for nearest neighbour search. Despite the success of deep neural networks that learn an end-to-end model between text and images in other vision problems such as image captioning, very few deep ZSL model exists and they show little advantage over ZSL models that utilise deep feature representations but do not learn an end-to-end embedding. In this paper we argue that the key to make deep ZSL models succeed is to choose the right embedding space. Instead of embedding into a semantic space or an intermediate space, we propose to use the visual space as the embedding space. This is because that in this space, the subsequent nearest neighbour search would suffer much less from the hubness problem and thus become more effective. This model design also provides a natural mechanism for multiple semantic modalities (e.g., attributes and sentence descriptions) to be fused and optimised jointly in an end-to-end manner. Extensive experiments on four benchmarks show that our model significantly outperforms the existing models.",
"Attributes act as intermediate representations that enable parameter sharing between classes, a must when training data is scarce. We propose to view attribute-based image classification as a label-embedding problem: each class is embedded in the space of attribute vectors. We introduce a function that measures the compatibility between an image and a label embedding. The parameters of this function are learned on a training set of labeled samples to ensure that, given an image, the correct classes rank higher than the incorrect ones. Results on the Animals With Attributes and Caltech-UCSD-Birds datasets show that the proposed framework outperforms the standard Direct Attribute Prediction baseline in a zero-shot learning scenario. Label embedding enjoys a built-in ability to leverage alternative sources of information instead of or in addition to attributes, such as, e.g., class hierarchies or textual descriptions. Moreover, label embedding encompasses the whole range of learning settings from zero-shot learning to regular learning with a large number of labeled examples.",
"We present a novel latent embedding model for learning a compatibility function between image and class embeddings, in the context of zero-shot classification. The proposed method augments the state-of-the-art bilinear compatibility model by incorporating latent variables. Instead of learning a single bilinear map, it learns a collection of maps with the selection, of which map to use, being a latent variable for the current image-class pair. We train the model with a ranking based objective function which penalizes incorrect rankings of the true class for a given image. We empirically demonstrate that our model improves the state-of-the-art for various class embeddings consistently on three challenging publicly available datasets for the zero-shot setting. Moreover, our method leads to visually highly interpretable results with clear clusters of different fine-grained object properties that correspond to different latent variable maps.",
"We study the problem of object recognition for categories for which we have no training examples, a task also called zero--data or zero-shot learning. This situation has hardly been studied in computer vision research, even though it occurs frequently; the world contains tens of thousands of different object classes, and image collections have been formed and suitably annotated for only a few of them. To tackle the problem, we introduce attribute-based classification: Objects are identified based on a high-level description that is phrased in terms of semantic attributes, such as the object's color or shape. Because the identification of each such property transcends the specific learning task at hand, the attribute classifiers can be prelearned independently, for example, from existing image data sets unrelated to the current task. Afterward, new classes can be detected based on their attribute representation, without the need for a new training phase. In this paper, we also introduce a new data set, Animals with Attributes, of over 30,000 images of 50 animal classes, annotated with 85 semantic attributes. Extensive experiments on this and two more data sets show that attribute-based classification indeed is able to categorize images without access to any training images of the target classes.",
"Zero-shot learning consists in learning how to recognise new concepts by just having a description of them. Many sophisticated approaches have been proposed to address the challenges this problem comprises. In this paper we describe a zero-shot learning approach that can be implemented in just one line of code, yet it is able to outperform state of the art approaches on standard datasets. The approach is based on a more general framework which models the relationships between features, attributes, and classes as a two linear layers network, where the weights of the top layer are not learned but are given by the environment. We further provide a learning bound on the generalisation error of this kind of approaches, by casting them as domain adaptation methods. In experiments carried out on three standard real datasets, we found that our approach is able to perform significantly better than the state of art on all of them, obtaining a ratio of improvement up to 17%."
]
} |
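The ESZSL abstract above describes a two-linear-layer model whose top-layer weights are given by the class attributes, solvable in closed form. A hedged NumPy sketch of one common statement of that closed form follows; all shapes, regularizer values, and the synthetic random data are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
d, a, m, z = 64, 16, 200, 10            # feature dim, attribute dim, samples, seen classes
X = rng.standard_normal((d, m))          # training features, one column per sample
S = rng.standard_normal((a, z))          # per-class attribute signatures
labels = rng.integers(0, z, size=m)
Y = -np.ones((m, z))                     # one-vs-rest +/-1 label matrix
Y[np.arange(m), labels] = 1.0

gamma, lam = 1.0, 1.0
# Ridge-regularized closed form for the compatibility matrix V (the "one line"):
# V = (X X^T + gamma I)^{-1} X Y S^T (S S^T + lam I)^{-1}
V = np.linalg.solve(X @ X.T + gamma * np.eye(d),
                    X @ Y @ S.T) @ np.linalg.inv(S @ S.T + lam * np.eye(a))

# Zero-shot prediction: score a test sample against *unseen* class signatures.
S_unseen = rng.standard_normal((a, 5))
x_test = rng.standard_normal(d)
pred = int(np.argmax(x_test @ V @ S_unseen))
```

The point of the closed form is that no iterative training is needed: swapping in unseen-class signatures at test time is what makes the bilinear compatibility zero-shot.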
1903.00502 | 2918695860 | Zero-shot learning extends the conventional object classification to the unseen class recognition by introducing semantic representations of classes. Existing approaches predominantly focus on learning the proper mapping function for visual-semantic embedding, while neglecting the effect of learning discriminative visual features. In this paper, we study the significance of the discriminative region localization. We propose a semantic-guided multi-attention localization model, which automatically discovers the most discriminative parts of objects for zero-shot learning without any human annotations. Our model jointly learns cooperative global and local features from the whole object as well as the detected parts to categorize objects based on semantic descriptions. Moreover, with the joint supervision of embedding softmax loss and class-center triplet loss, the model is encouraged to learn features with high inter-class dispersion and intra-class compactness. Through comprehensive experiments on three widely used zero-shot learning benchmarks, we show the efficacy of the multi-attention localization and our proposed approach improves the state-of-the-art results by a considerable margin. | More related to our work, several end-to-end learning methods are proposed to address the pitfall that discriminative feature learning is neglected. SCoRe @cite_13 combines two semantic constraints to supervise attribute prediction and visual-semantic embedding respectively. LDF @cite_18 takes one step further and integrates a zoom network in the model to discover significant regions automatically, and learn discriminative visual feature representation. But the zoom mechanism can only discover the whole object by cropping out the background with a square shape, still being restricted to the global feature. 
In contrast, our multi-attention localization network can help find multiple finer part regions (e.g., head, tail) that are discriminative for zero-shot learning. | {
"cite_N": [
"@cite_18",
"@cite_13"
],
"mid": [
"2791906491",
"2605805765"
],
"abstract": [
"Zero-shot learning (ZSL) aims to recognize unseen image categories by learning an embedding space between image and semantic representations. For years, among existing works, it has been the center task to learn the proper mapping matrices aligning the visual and semantic space, whilst the importance to learn discriminative representations for ZSL is ignored. In this work, we retrospect existing methods and demonstrate the necessity to learn discriminative representations for both visual and semantic instances of ZSL. We propose an end-to-end network that is capable of 1) automatically discovering discriminative regions by a zoom network; and 2) learning discriminative semantic representations in an augmented space introduced for both user-defined and latent attributes. Our proposed method is tested extensively on two challenging ZSL datasets, and the experiment results show that the proposed method significantly outperforms state-of-the-art methods.",
"The role of semantics in zero-shot learning is considered. The effectiveness of previous approaches is analyzed according to the form of supervision provided. While some learn semantics independently, others only supervise the semantic subspace explained by training classes. Thus, the former is able to constrain the whole space but lacks the ability to model semantic correlations. The latter addresses this issue but leaves part of the semantic space unsupervised. This complementarity is exploited in a new convolutional neural network (CNN) framework, which proposes the use of semantics as constraints for recognition. Although a CNN trained for classification has no transfer ability, this can be encouraged by learning an hidden semantic layer together with a semantic code for classification. Two forms of semantic constraints are then introduced. The first is a loss-based regularizer that introduces a generalization constraint on each semantic predictor. The second is a codeword regularizer that favors semantic-to-class mappings consistent with prior semantic knowledge while allowing these to be learned from data. Significant improvements over the state-of-the-art are achieved on several datasets."
]
} |
1903.00452 | 2920517911 | There is increasing interest in using multicore processors to accelerate stream processing. For example, indexing sliding window content to enhance the performance of streaming queries is greatly improved by utilizing the computational capabilities of a multicore processor. However, designing an effective concurrency control mechanism that addresses the problem of concurrent indexing in highly dynamic settings remains a challenge. In this paper, we introduce an index data structure, called the Partitioned In-memory Merge-Tree, to address the challenges that arise when indexing highly dynamic data, which are common in streaming settings. To complement the index, we design an algorithm to realize a parallel index-based stream join that exploits the computational power of multicore processors. Our experiments using an octa-core processor show that our parallel stream join achieves up to 5.5 times higher throughput than a single-threaded approach. | -- A class of related work proposes accelerating window queries by utilizing an index. @cite_18 evaluated different sliding window indexing approaches, such as hash-based and tree-based indexing, for different types of stream operators. @cite_4 evaluated the performance of an asymmetric sliding stream join using different algorithms, such as nested loop join, hash-based join, and index-based join. @cite_29 and Ya- @cite_36 proposed approaches to accelerate index-based stream joins utilizing coarse-grained tuple disposal. However, all of these approaches considered only single-threaded sliding window indexing, thus avoiding concurrency issues resulting from parallel update processing, which is central to the focus of our work. | {
"cite_N": [
"@cite_36",
"@cite_18",
"@cite_29",
"@cite_4"
],
"mid": [
"2318726294",
"2138531903",
"2008888174",
"2140894225"
],
"abstract": [
"Processing a join over unbounded input streams requires unbounded memory, since every tuple in one infinite stream must be compared with every tuple in the other. In fact, most join queries over unbounded input streams are restricted to finite memory due to sliding window constraints. So far, non-indexed and indexed stream equijoin algorithms based on sliding windows have been proposed in many literatures. However, none of them takes non-equijoin into consideration. In many cases, non-equijoin queries occur frequently. Hence, it is worth to discuss how to process non-equijoin queries effectively and efficiently. In this paper, we propose an indexed join algorithm for supporting non-equijoin queries. The experimental results show that our indexed non-equijoin techniques are more efficient than those without index.",
"We consider indexing sliding windows in main memory over on-line data streams. Our proposed data structures and query semantics are based on a division of the sliding window into sub-windows. By classifying windowed operators according to their method of execution, we motivate the need for two types of windowed indices: those which provide a list of attribute values and their counts for answering set-valued queries, and those which provide direct access to tuples for answering attribute-valued queries. We propose and evaluate indices for both of these cases and show that our techniques are more efficient than executing windowed queries without an index.",
"Efficient and scalable stream joins play an important role in performing real-time analytics for many cloud applications. However, like in conventional database processing, online theta-joins over data streams are computationally expensive and moreover, being memory-based processing, they impose high memory requirement on the system. In this paper, we propose a novel stream join model, called join-biclique, which organizes a large cluster as a complete bipartite graph. Join-biclique has several strengths over state-of-the-art techniques, including memory-efficiency, elasticity and scalability. These features are essential for building efficient and scalable streaming systems. Based on join-biclique, we develop a scalable distributed stream join system, BiStream, over a large-scale commodity cluster. Specifically, BiStream is designed to support efficient full-history joins, window-based joins and online data aggregation. BiStream also supports adaptive resource management to dynamically scale out and down the system according to its application workloads. We provide both theoretical cost analysis and extensive experimental evaluations to evaluate the efficiency, elasticity and scalability of BiStream.",
"We investigate algorithms for evaluating sliding window joins over pairs of unbounded streams. We introduce a unit-time-basis cost model to analyze the expected performance of these algorithms. Using this cost model, we propose strategies for maximizing the efficiency of processing joins in three scenarios. First, we consider the case where one stream is much faster than the other. We show that asymmetric combinations of join algorithms, (e.g., hash join on one input, nested-loops join on the other) can outperform symmetric join algorithm implementations. Second, we investigate the case where system resources are insufficient to keep up with the input streams. We show that we can maximize the number of join result tuples produced in this case by properly allocating computing resources across the two input streams. Finally, we investigate strategies for maximizing the number of result tuples produced when memory is limited, and show that proper memory allocation across the two input streams can result in significantly lower resource usage and or more result tuples produced."
]
} |
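The hash-based sliding-window indexing and single-threaded index-based window join evaluated in the record above can be sketched as follows. This is an illustrative Python toy under stated assumptions: `IndexedWindow` and `stream_join` are my own names, count-based expiry stands in for time-based windows, and no cited system is being reproduced.

```python
from collections import defaultdict, deque

class IndexedWindow:
    """Single-threaded sliding window with a hash index on the join key."""
    def __init__(self, size):
        self.size = size
        self.window = deque()              # (key, value) pairs in arrival order
        self.index = defaultdict(list)     # key -> values currently in the window

    def insert(self, key, value):
        if len(self.window) == self.size:  # expire the oldest tuple and unindex it
            old_key, old_value = self.window.popleft()
            self.index[old_key].remove(old_value)
        self.window.append((key, value))
        self.index[key].append(value)

    def probe(self, key):
        return list(self.index.get(key, ()))

def stream_join(events, size=4):
    """Symmetric equijoin: each arriving tuple probes the opposite window's index."""
    R, S, out = IndexedWindow(size), IndexedWindow(size), []
    for side, key, value in events:
        own, other = (R, S) if side == "R" else (S, R)
        for match in other.probe(key):     # index probe replaces a full window scan
            out.append((key, value, match))
        own.insert(key, value)
    return out

events = [("R", 1, "r1"), ("S", 1, "s1"), ("S", 2, "s2"), ("R", 2, "r2")]
print(stream_join(events))  # [(1, 's1', 'r1'), (2, 'r2', 's2')]
```

The probe-then-insert order keeps each matching pair from being reported twice; making `insert` safe under parallel updates is exactly the concurrency problem the record's paper targets.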
1903.00452 | 2920517911 | There is increasing interest in using multicore processors to accelerate stream processing. For example, indexing sliding window content to enhance the performance of streaming queries is greatly improved by utilizing the computational capabilities of a multicore processor. However, designing an effective concurrency control mechanism that addresses the problem of concurrent indexing in highly dynamic settings remains a challenge. In this paper, we introduce an index data structure, called the Partitioned In-memory Merge-Tree, to address the challenges that arise when indexing highly dynamic data, which are common in streaming settings. To complement the index, we design an algorithm to realize a parallel index-based stream join that exploits the computational power of multicore processors. Our experiments using an octa-core processor show that our parallel stream join achieves up to 5.5 times higher throughput than a single-threaded approach. | -- Window join processing has received considerable attention in recent years due to its computational complexity and importance in various data management applications. Several approaches explore parallel window join processing. Cell-join is a parallel stream join operator designed to exploit the computing power of the cell processor @cite_3 . Handshake join is a scalable stream join that propagates stream tuples along a linear chain of cores in opposing directions @cite_9 . @cite_23 enhanced the handshake join by proposing a fast-forward tuple propagation to attain lower latency. SplitJoin is based on a top-down data flow model that splits the join operation into independent store and process steps to reduce the dependency among processing units @cite_1 . @cite_29 proposed a real-time and scalable join model for a computing cluster by organizing processing units into a bipartite graph to reduce memory requirements and the dependency among processing units. 
All these approaches are based on context-insensitive window partitioning. Although such methods are effective when using a nested-loop join or for memory-bounded joins with high selectivity, context-insensitive window partitioning causes redundant index operations under IBWJ, which limits system efficiency. | {
"cite_N": [
"@cite_9",
"@cite_29",
"@cite_1",
"@cite_3",
"@cite_23"
],
"mid": [
"2120276090",
"2008888174",
"2359704875",
"2115622196",
""
],
"abstract": [
"In spite of the omnipresence of parallel (multi-core) systems, the predominant strategy to evaluate window-based stream joins is still strictly sequential, mostly just straightforward along the definition of the operation semantics. In this work we present handshake join, a way of describing and executing window-based stream joins that is highly amenable to parallelized execution. Handshake join naturally leverages available hardware parallelism, which we demonstrate with an implementation on a modern multi-core system and on top of field-programmable gate arrays (FPGAs), an emerging technology that has shown distinctive advantages for high-throughput data processing. On the practical side, we provide a join implementation that substantially outperforms CellJoin (the fastest published result) and that will directly turn any degree of parallelism into higher throughput or larger supported window sizes. On the semantic side, our work gives a new intuition of window semantics, which we believe could inspire other stream processing algorithms or ongoing standardization efforts for stream query languages.",
"Efficient and scalable stream joins play an important role in performing real-time analytics for many cloud applications. However, like in conventional database processing, online theta-joins over data streams are computationally expensive and moreover, being memory-based processing, they impose high memory requirement on the system. In this paper, we propose a novel stream join model, called join-biclique, which organizes a large cluster as a complete bipartite graph. Join-biclique has several strengths over state-of-the-art techniques, including memory-efficiency, elasticity and scalability. These features are essential for building efficient and scalable streaming systems. Based on join-biclique, we develop a scalable distributed stream join system, BiStream, over a large-scale commodity cluster. Specifically, BiStream is designed to support efficient full-history joins, window-based joins and online data aggregation. BiStream also supports adaptive resource management to dynamically scale out and down the system according to its application workloads. We provide both theoretical cost analysis and extensive experimental evaluations to evaluate the efficiency, elasticity and scalability of BiStream.",
"There is a rising interest in accelerating stream processing through modern parallel hardware, yet it remains a challenge as how to exploit the available resources to achieve higher throughput without sacrificing latency due to the increased length of processing pipeline and communication path and the need for central coordination. To achieve these objectives, we introduce a novel top-down data flow model for stream join processing (arguably, one of the most resource-intensive operators in stream processing), called SplitJoin, that operates by splitting the join operation into independent storing and processing steps that gracefully scale with respect to the number of cores. Furthermore, SplitJoin eliminates the need for global coordination while preserving the order of input streams by re-thinking how streams are channeled into distributed join computation cores and maintaining the order of output streams by proposing a novel distributed punctuation technique. Throughout our experimental analysis, SplitJoin offered up to 60% improvement in throughput while reducing latency by up to 3.3X compared to state-of-the-art solutions.",
"Low-latency and high-throughput processing are key requirements of data stream management systems (DSMSs). Hence, multi-core processors that provide high aggregate processing capacity are ideal matches for executing costly DSMS operators. The recently developed Cell processor is a good example of a heterogeneous multi-core architecture and provides a powerful platform for executing data stream operators with high-performance. On the down side, exploiting the full potential of a multi-core processor like Cell is often challenging, mainly due to the heterogeneous nature of the processing elements, the software managed local memory at the co-processor side, and the unconventional programming model in general. In this paper, we study the problem of scalable execution of windowed stream join operators on multi-core processors, and specifically on the Cell processor. By examining various aspects of join execution flow, we determine the right set of techniques to apply in order to minimize the sequential segments and maximize parallelism. Concretely, we show that basic windows coupled with low-overhead pointer-shifting techniques can be used to achieve efficient join window partitioning, column-oriented join window organization can be used to minimize scattered data transfers, delay-optimized double buffering can be used for effective pipelining, rate-aware batching can be used to balance join throughput and tuple delay, and finally single-instruction multiple-data (SIMD) optimized operator code can be used to exploit data parallelism. 
Our experimental results show that, following the design guidelines and implementation techniques outlined in this paper, windowed stream joins can achieve high scalability (linear in the number of co-processors) by making efficient use of the extensive hardware parallelism provided by the Cell processor (reaching data processing rates of ~13 GB/s) and significantly surpass the performance obtained from conventional high-end processors (supporting a combined input stream rate of 2,000 tuples/s using 15 min windows and without dropping any tuples, resulting in ~8.3 times higher output rate compared to an SSE implementation on dual 3.2 GHz Intel Xeon).",
""
]
} |
1903.00650 | 2918569444 | In this paper, we focus on the challenging perception problem in robotic pouring. Most of the existing approaches either leverage visual or haptic information. However, these techniques may suffer from poor generalization performances on opaque containers or concerning measuring precision. To tackle these drawbacks, we propose to make use of audio vibration sensing and design a deep neural network PouringNet to predict the liquid height from the audio fragment during the robotic pouring task. PouringNet is trained on our collected real-world pouring dataset with multimodal sensing data, which contains more than 3000 recordings of audio, force feedback, video and trajectory data of the human hand that performs the pouring task. Each record represents a complete pouring procedure. We conduct several evaluations on PouringNet with our dataset and robotic hardware. The results demonstrate that our PouringNet generalizes well across different liquid containers, positions of the audio receiver, initial liquid heights and types of liquid, and facilitates a more robust and accurate audio-based perception for robotic pouring. | Robotic pouring has usually been implemented through generating motion trajectories or estimating specified features of the liquids or the containers as the guidance for pouring tasks. Brandl et al. @cite_23 suggested learning pouring tasks using kinaesthetic teaching and then generalized pouring actions by computing warped parameters. Learning dynamic pouring tasks from human demonstration was implemented in @cite_25 @cite_21 . Since transferring compliant manipulation skills from humans to robots is always difficult, Pan et al. solved the online trajectories of the source container in simulation using a receding-horizon optimization method to handle the fluid dynamics @cite_26 .
More recently, Do et al. @cite_24 solved this problem by learning a pouring policy using deep deterministic policy gradients in simulation and transferring the learned policy from simulation to a real robot. On the other hand, some researchers focus on using different modalities to detect viscosity @cite_4 , height @cite_15 , the volume of the liquid or granular material @cite_12 , etc., thus relying on the perception results to perform pouring by a simple controller on the real robot. | {
"cite_N": [
"@cite_26",
"@cite_4",
"@cite_21",
"@cite_24",
"@cite_23",
"@cite_15",
"@cite_25",
"@cite_12"
],
"mid": [
"2964227849",
"2199040891",
"",
"2909112693",
"2076235166",
"2564989727",
"1844329104",
"2899466959"
],
"abstract": [
"We present an optimization-based motion planning algorithm to compute a smooth, collision-free trajectory for a manipulator used to transfer a liquid from a source to a target container. We take into account fluid dynamics constraints as part of the trajectory computation. In order to avoid the high complexity of exact fluid simulation, we introduce a simplified dynamics model based on physically inspired approximations and system identification. Our optimization approach can incorporate various other constraints such as collision avoidance with obstacles, kinematic and dynamics constraints of the manipulator, and fluid dynamics characteristics. We demonstrate the performance of our planner on different benchmarks corresponding to various obstacles and container shapes. We also evaluate its accuracy by validating the motion plan using an accurate but computationally costly Navier-Stokes fluid simulation.",
"A necessary skill when using liquids in the preparation of food is to be able to estimate viscosity, e.g. in order to control the pouring velocity or to determine the thickness of a sauce. We introduce a method to allow a robotic kitchen assistant discriminate between different but visually similar liquids. Using a Kinect depth camera, surface changes, induced by a simple pushing motion, are recorded and used as input to nearest neighbour and polynomial regression classification models. Results reveal that even when the classifier is trained on a relatively small dataset it generalises well to unknown containers and liquid fill rates. Furthermore, the regression model allows us to determine the approximate viscosity of unknown liquids.",
"",
"Pouring is a fundamental skill for robots in both domestic and industrial environments. Ideally, a robot should be able to pour with high accuracy to specific, pre-defined heights and without spilling. However, due to the complex dynamics of liquids, it is difficult to learn how to pour to achieve these goals. In this paper we present an approach to learn a policy for pouring using Deep Deterministic Policy Gradients (DDPG). We remove the need for collecting training experiences on a real robot, by using a state-of-the-art liquid simulator, which allows for learning the liquid dynamics. We show through our experiments, performed with a PR2 robot, that it is possible to successfully transfer the learned policy to a real robot and even apply it to different liquids.",
"One of the key challenges for learning manipulation skills is generalizing between different objects. The robot should adapt both its actions and the task constraints to the geometry of the object being manipulated. In this paper, we propose computing geometric parameters of novel objects by warping known objects to match their shape. We refer to the parameters computed in this manner as warped parameters, as they are defined as functions of the warped object's point cloud. The warped parameters form the basis of the features for the motor skill learning process, and they are used to generalize between different objects. The proposed method was successfully evaluated on a pouring task both in simulation and on a real robot.",
"Robotic assistants have the potential to greatly improve our quality of life by supporting us in our daily activities. A service robot acting autonomously in an indoor environment is faced with very complex tasks. Consider the problem of pouring a liquid into a cup, the robot should first determine if the cup is empty or partially filled. RGB-D cameras provide noisy depth measurements which depend on the opaqueness and refraction index of the liquid. In this paper, we present a novel probabilistic approach for estimating the fill-level of a liquid in a cup using an RGB-D camera. Our approach does not make any assumptions about the properties of the liquid like its opaqueness or its refraction index. We develop a probabilistic model using features extracted from RGB and depth data. Our experiments demonstrate the robustness of our method and an improvement over the state of the art.",
"We explore how to represent, plan and learn robot pouring. This is a case study of a complex task that has many variations and involves manipulating non-rigid materials such as liquids and granular substances. Variations of pouring we consider are the type of pouring (such as pouring into a glass or spreading a sauce on an object), material, container shapes, initial poses of containers and target amounts. The robot learns to select appropriate behaviors from a library of skills, such as tipping, shaking and tapping, to pour a range of materials from a variety of containers. The robot also learns to select behavioral parameters. Planning methods are used to adapt skills for some variations such as initial poses of containers. We show using simulation and experiments on a PR2 robot that our pouring behavior model is able to plan and learn to handle a wide variety of pouring tasks. This case study is a step towards enabling humanoid robots to perform tasks of daily living.",
""
]
} |
1903.00650 | 2918569444 | In this paper, we focus on the challenging perception problem in robotic pouring. Most of the existing approaches either leverage visual or haptic information. However, these techniques may suffer from poor generalization performances on opaque containers or concerning measuring precision. To tackle these drawbacks, we propose to make use of audio vibration sensing and design a deep neural network PouringNet to predict the liquid height from the audio fragment during the robotic pouring task. PouringNet is trained on our collected real-world pouring dataset with multimodal sensing data, which contains more than 3000 recordings of audio, force feedback, video and trajectory data of the human hand that performs the pouring task. Each record represents a complete pouring procedure. We conduct several evaluations on PouringNet with our dataset and robotic hardware. The results demonstrate that our PouringNet generalizes well across different liquid containers, positions of the audio receiver, initial liquid heights and types of liquid, and facilitates a more robust and accurate audio-based perception for robotic pouring. | Vision is one of the commonly used modalities while pouring and in everyday life, humans also highly rely on vision when pouring water. Obviously, vision-based perception highly depends on the lighting conditions, the color of the liquids and the shape of the target containers. Do et al. advocated a probabilistic approach to estimate the liquid height based on an RGB-D camera @cite_15 . They further switched the analytical estimation approaches depending on the type of liquid and utilized a Kalman filter dealing with the uncertainties of the vision data @cite_16 . However, the mean height errors for ten pours of 3 transparent liquids were larger than 4 mm.
Instead of directly predicting the absolute height of liquid, another popular method estimates the input volume of the liquid by analyzing the visual information of the water flow @cite_2 . Schenck al @cite_2 used a thermal camera to generate pixel-level groundtruth data of heated water using thermal imagery. The estimation result was used to determine the water volume using both a model-based method and a neural network method. However, this method suffers from poor estimation error due to the varied liquid types and the complex shapes of water flow. | {
"cite_N": [
"@cite_15",
"@cite_16",
"@cite_2"
],
"mid": [
"2564989727",
"2949317431",
"2964237810"
],
"abstract": [
"Robotic assistants have the potential to greatly improve our quality of life by supporting us in our daily activities. A service robot acting autonomously in an indoor environment is faced with very complex tasks. Consider the problem of pouring a liquid into a cup, the robot should first determine if the cup is empty or partially filled. RGB-D cameras provide noisy depth measurements which depend on the opaqueness and refraction index of the liquid. In this paper, we present a novel probabilistic approach for estimating the fill-level of a liquid in a cup using an RGB-D camera. Our approach does not make any assumptions about the properties of the liquid like its opaqueness or its refraction index. We develop a probabilistic model using features extracted from RGB and depth data. Our experiments demonstrate the robustness of our method and an improvement over the state of the art.",
"Robotic assistants in a home environment are expected to perform various complex tasks for their users. One particularly challenging task is pouring drinks into cups, which for successful completion, requires the detection and tracking of the liquid level during a pour to determine when to stop. In this paper, we present a novel approach to autonomous pouring that tracks the liquid level using an RGB-D camera and adapts the rate of pouring based on the liquid level feedback. We thoroughly evaluate our system on various types of liquids and under different conditions, conducting over 250 pours with a PR2 robot. The results demonstrate that our approach is able to pour liquids to a target height with an accuracy of a few millimeters.",
"Pouring a specific amount of liquid is a challenging task. In this paper we develop methods for robots to use visual feedback to perform closed-loop control for pouring liquids. We propose both a model-based and a model-free method utilizing deep learning for estimating the volume of liquid in a container. Our results show that the model-free method is better able to estimate the volume. We combine this with a simple PID controller to pour specific amounts of liquid, and show that the robot is able to achieve an average 38ml deviation from the target amount. To our knowledge, this is the first use of raw visual feedback to pour liquids in robotics."
]
} |
1903.00650 | 2918569444 | In this paper, we focus on the challenging perception problem in robotic pouring. Most of the existing approaches either leverage visual or haptic information. However, these techniques may suffer from poor generalization performances on opaque containers or concerning measuring precision. To tackle these drawbacks, we propose to make use of audio vibration sensing and design a deep neural network PouringNet to predict the liquid height from the audio fragment during the robotic pouring task. PouringNet is trained on our collected real-world pouring dataset with multimodal sensing data, which contains more than 3000 recordings of audio, force feedback, video and trajectory data of the human hand that performs the pouring task. Each record represents a complete pouring procedure. We conduct several evaluations on PouringNet with our dataset and robotic hardware. The results demonstrate that our PouringNet generalizes well across different liquid containers, positions of the audio receiver, initial liquid heights and types of liquid, and facilitates a more robust and accurate audio-based perception for robotic pouring. | Besides, haptic sensing, especially force and torque sensing, is also popular in the perception of robotic pouring. Specifically, force data has been used to generate pouring trajectories by predicting the angular velocity of the pouring container in simulation @cite_14 . Rozo et al. @cite_17 used a parametric hidden Markov model to retrieve joint-level commands given the force-torque inputs from a human demonstration. Hannes et al. @cite_11 examined viscosity estimation of various liquids from tactile sensory data. Although force from the pouring container can explicitly represent the volume of the poured-out liquid, it cannot measure the liquid height in an unseen target container. | {
"cite_N": [
"@cite_14",
"@cite_11",
"@cite_17"
],
"mid": [
"2619531488",
"2058444317",
"2081857580"
],
"abstract": [
"Pouring is a simple task people perform daily. It is the second most frequently executed motion in cooking scenarios, after pick-and-place. We present a pouring trajectory generation approach, which uses force feedback from the cup to determine the future velocity of pouring. The approach uses recurrent neural networks as its building blocks. We collected the pouring demonstrations which we used for training. To test our approach in simulation, we also created and trained a force estimation system. The simulated experiments show that the system is able to generalize to single unseen element of the pouring characteristics.",
"The estimation of parameters that affect the dynamics of objects—such as viscosity or internal degree of freedom—is an important step in autonomous and dexterous robotic manipulation of objects. However, accurate and efficient estimation of these object parameters may be challenging due to complex, highly nonlinear underlying physical processes. To improve on the quality of otherwise hand-crafted solutions, automatic generation of control strategies can be helpful. We present a framework that uses active learning to help with sequential gathering of data samples,using information-theoretic ciriteria to find the optimal actions to perform at each time step. We demonstrate the usefulness of our approach on a robotic hand-arm setup, where the task involves shaking bottles of different liquids in order to determine the liquid's viscosity from only tactile feedback. We optimize the shaking frequency and the rotation angle of shaking in an online manner in order to speed up convergence of estimates.",
"Robot learning from demonstration faces new challenges when applied to tasks in which forces play a key role. Pouring liquid from a bottle into a glass is one such task, where not just a motion with a certain force profile needs to be learned, but the motion is subtly conditioned by the amount of liquid in the bottle. In this paper, the pouring skill is taught to a robot as follows. In a training phase, the human teleoperates the robot using a haptic device, and data from the demonstrations are statistically encoded by a parametric hidden Markov model, which compactly encapsulates the relation between the task parameter (dependent on the bottle weight) and the force-torque traces. Gaussian mixture regression is then used at the reproduction stage for retrieving the suitable robot actions based on the force perceptions. Computational and experimental results show that the robot is able to learn to pour drinks using the proposed framework, outperforming other approaches such as the classical hidden Markov models in that it requires less training, yields more compact encodings and shows better generalization capabilities."
]
} |
1903.00395 | 2919331634 | We present a method to restore a clear image from a haze-affected image using a Wasserstein generative adversarial network. As the problem is ill-conditioned, previous methods have required a prior on natural images or multiple images of the same scene. We train a generative adversarial network to learn the probability distribution of clear images conditioned on the haze-affected images using the Wasserstein loss function, using a gradient penalty to enforce the Lipschitz constraint. The method is data-adaptive, end-to-end, and requires no further processing or tuning of parameters. We also incorporate the use of a texture-based loss metric and the L1 loss to improve results, and show that our results are better than the current state-of-the-art. | @cite_1 proposed the generative adversarial network (GAN) to generate images (or text) from random noise samples. GANs consist of a generator and a discriminator. The generator tries to learn the probability distribution of the training samples and generate samples that can fool the discriminator into thinking they came from the training set. The discriminator tries to correctly identify samples as whether they come from the generator or from the training set. This is similar to a cop and counterfeiter game, where the counterfeiter tries to pass off counterfeited notes as real, while the cop tries to identify whether the notes he is shown are real or not. As the cop (the discriminator) and the counterfeiter (the generator) compete with each other, both get better at their tasks. GANs are difficult to train and face problems such as mode collapse and instability while training. Conditioning the output on some prior improves the training process. In a conditional GAN used for fog removal, the objective function is | {
"cite_N": [
"@cite_1"
],
"mid": [
"2099471712"
],
"abstract": [
"We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples."
]
} |
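The cop-and-counterfeiter game described above can be written down concretely. The following is a minimal NumPy sketch (not taken from the paper) of the two losses implied by the original minimax objective, evaluated on hypothetical discriminator outputs:

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    """Loss the discriminator minimizes: the negated minimax value
    -E[log D(x)] - E[log(1 - D(G(z)))], averaged over a batch."""
    return -np.mean(np.log(d_real)) - np.mean(np.log(1.0 - d_fake))

def generator_loss(d_fake):
    """Non-saturating generator loss -E[log D(G(z))]: the generator is
    rewarded when the discriminator scores its samples as real."""
    return -np.mean(np.log(d_fake))

# At the GAN equilibrium the discriminator outputs 1/2 everywhere,
# giving a discriminator loss of exactly 2*log(2).
d_eq = np.full(8, 0.5)
```

At equilibrium (D equal to 1/2 everywhere, as the @cite_1 abstract states) the discriminator loss equals 2 log 2 ≈ 1.386; a confident discriminator drives its own loss toward 0 while the generator's loss grows, which is the competition that improves both players.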
1903.00395 | 2919331634 | We present a method to restore a clear image from a haze-affected image using a Wasserstein generative adversarial network. As the problem is ill-conditioned, previous methods have required a prior on natural images or multiple images of the same scene. We train a generative adversarial network to learn the probability distribution of clear images conditioned on the haze-affected images using the Wasserstein loss function, using a gradient penalty to enforce the Lipschitz constraint. The method is data-adaptive, end-to-end, and requires no further processing or tuning of parameters. We also incorporate the use of a texture-based loss metric and the L1 loss to improve results, and show that our results are better than the current state-of-the-art. | @math describes the 1-Lipschitz family of functions. In this formulation, @math is no longer called the discriminator, but the critic. This is because it does not perform classification, but simply produces a score that contributes to the objective function, which is to be maximized with respect to @math and minimized with respect to @math . The original WGAN formulation proposed to clip the weights of the critic in order to enforce the 1-Lipschitz constraint, which biases the critic towards simpler functions and requires careful tuning of the clipping parameter. Instead, @cite_6 proposed an alternative way to enforce the Lipschitz constraint, based on the observation that a function is 1-Lipschitz if and only if it has gradients of norm at most 1 everywhere. The new conditional WGAN objective then becomes: | {
"cite_N": [
"@cite_6"
],
"mid": [
"2962879692"
],
"abstract": [
"Generative Adversarial Networks (GANs) are powerful generative models, but suffer from training instability. The recently proposed Wasserstein GAN (WGAN) makes progress toward stable training of GANs, but sometimes can still generate only poor samples or fail to converge. We find that these problems are often due to the use of weight clipping in WGAN to enforce a Lipschitz constraint on the critic, which can lead to undesired behavior. We propose an alternative to clipping weights: penalize the norm of gradient of the critic with respect to its input. Our proposed method performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning, including 101-layer ResNets and language models with continuous generators. We also achieve high quality generations on CIFAR-10 and LSUN bedrooms."
]
} |
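As a toy illustration of the gradient penalty (a sketch, not the cited implementation): for a *linear* critic f(x) = w·x + b the input gradient is w everywhere, so the penalty λ(‖∇ₓf‖ − 1)² can be evaluated in closed form. Real WGAN-GP code instead computes the gradient by automatic differentiation at random interpolates between real and generated samples:

```python
import numpy as np

def gradient_penalty_linear(w, lam=10.0):
    """WGAN-GP penalty lam * (||grad_x f(x)|| - 1)^2 for a linear critic
    f(x) = w @ x + b, whose gradient w.r.t. x is simply w everywhere --
    so, in this special case only, no interpolation points are needed."""
    return lam * (np.linalg.norm(w) - 1.0) ** 2

# A unit-norm critic is exactly 1-Lipschitz and incurs no penalty;
# a steeper critic is penalized back toward gradient norm 1.
w_unit = np.array([0.6, 0.8])    # ||w|| = 1
w_steep = np.array([3.0, 4.0])   # ||w|| = 5
```

The penalty is soft: unlike weight clipping it does not restrict the critic's function class outright, it only discourages gradients whose norm deviates from 1.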
1903.00640 | 2919820522 | The decision and planning system for autonomous driving in urban environments is hard to design. Most current methods are to manually design the driving policy, which can be sub-optimal and expensive to develop and maintain at scale. Instead, with imitation learning we only need to collect data and then the computer will learn and improve the driving policy automatically. However, existing imitation learning methods for autonomous driving are hardly performing well for complex urban scenarios. Moreover, the safety is not guaranteed when we use a deep neural network policy. In this paper, we proposed a framework to learn the driving policy in urban scenarios efficiently given offline connected driving data, with a safety controller incorporated to guarantee safety at test time. The experiments show that our method can achieve high performance in realistic three-dimensional simulations of urban driving scenarios, with only hours of data collection and training on a single consumer GPU. | The first imitation learning algorithm applied to autonomous driving dates back 30 years, when the ALVINN system @cite_6 used a 3-layer neural network to perform road following from raw camera sensor data. Aided by recent progress in deep learning, NVIDIA developed an end-to-end driving system using deep convolutional neural networks @cite_16 @cite_2 , which can perform good lane-following behaviors even in challenging environments where no road markings can be recognized. Researchers have also trained deep neural networks to predict the control command from camera images and evaluated their open-loop performance (e.g., the prediction error). @cite_0 used an FCN-LSTM architecture with a segmentation mask to train a deep driving policy. @cite_12 proposed an object-centric model to predict the vehicle action with higher accuracy.
Although both @cite_0 and @cite_12 can achieve good prediction performance for complex urban scenarios, they did not provide closed-loop evaluation on either real-world or simulated vehicles. | {
"cite_N": [
"@cite_6",
"@cite_0",
"@cite_2",
"@cite_16",
"@cite_12"
],
"mid": [
"2167224731",
"2559767995",
"2611430843",
"2342840547",
"2901057675"
],
"abstract": [
"ALVINN (Autonomous Land Vehicle In a Neural Network) is a 3-layer back-propagation network designed for the task of road following. Currently ALVINN takes images from a camera and a laser range finder as input and produces as output the direction the vehicle should travel in order to follow the road. Training has been conducted using simulated road images. Successful tests on the Carnegie Mellon autonomous navigation test vehicle indicate that the network can effectively follow real roads under certain field conditions. The representation developed to perform the task differs dramatically when the network is trained under various conditions, suggesting the possibility of a novel adaptive autonomous navigation system capable of tailoring its processing to the conditions at hand.",
"Robust perception-action models should be learned from training data with diverse visual appearances and realistic behaviors, yet current approaches to deep visuomotor policy learning have been generally limited to in-situ models learned from a single vehicle or simulation environment. We advocate learning a generic vehicle motion model from large scale crowd-sourced video data, and develop an end-to-end trainable architecture for learning to predict a distribution over future vehicle egomotion from instantaneous monocular camera observations and previous vehicle state. Our model incorporates a novel FCN-LSTM architecture, which can be learned from large-scale crowd-sourced vehicle action data, and leverages available scene segmentation side tasks to improve performance under a privileged learning paradigm. We provide a novel large-scale dataset of crowd-sourced driving behavior suitable for training our model, and report results predicting the driver action on held out sequences across diverse conditions.",
"As part of a complete software stack for autonomous driving, NVIDIA has created a neural-network-based system, known as PilotNet, which outputs steering angles given images of the road ahead. PilotNet is trained using road images paired with the steering angles generated by a human driving a data-collection car. It derives the necessary domain knowledge by observing human drivers. This eliminates the need for human engineers to anticipate what is important in an image and foresee all the necessary rules for safe driving. Road tests demonstrated that PilotNet can successfully perform lane keeping in a wide variety of driving conditions, regardless of whether lane markings are present or not. The goal of the work described here is to explain what PilotNet learns and how it makes its decisions. To this end we developed a method for determining which elements in the road image most influence PilotNet's steering decision. Results show that PilotNet indeed learns to recognize relevant objects on the road. In addition to learning the obvious features such as lane markings, edges of roads, and other cars, PilotNet learns more subtle features that would be hard to anticipate and program by engineers, for example, bushes lining the edge of the road and atypical vehicle classes.",
"We trained a convolutional neural network (CNN) to map raw pixels from a single front-facing camera directly to steering commands. This end-to-end approach proved surprisingly powerful. With minimum training data from humans the system learns to drive in traffic on local roads with or without lane markings and on highways. It also operates in areas with unclear visual guidance such as in parking lots and on unpaved roads. The system automatically learns internal representations of the necessary processing steps such as detecting useful road features with only the human steering angle as the training signal. We never explicitly trained it to detect, for example, the outline of roads. Compared to explicit decomposition of the problem, such as lane marking detection, path planning, and control, our end-to-end system optimizes all processing steps simultaneously. We argue that this will eventually lead to better performance and smaller systems. Better performance will result because the internal components self-optimize to maximize overall system performance, instead of optimizing human-selected intermediate criteria, e.g., lane detection. Such criteria understandably are selected for ease of human interpretation which doesn't automatically guarantee maximum system performance. Smaller networks are possible because the system learns to solve the problem with the minimal number of processing steps. We used an NVIDIA DevBox and Torch 7 for training and an NVIDIA DRIVE(TM) PX self-driving car computer also running Torch 7 for determining where to drive. The system operates at 30 frames per second (FPS).",
"While learning visuomotor skills in an end-to-end manner is appealing, deep neural networks are often uninterpretable and fail in surprising ways. For robotics tasks, such as autonomous driving, models that explicitly represent objects may be more robust to new scenes and provide intuitive visualizations. We describe a taxonomy of \"object-centric\" models which leverage both object instances and end-to-end learning. In the Grand Theft Auto V simulator, we show that object-centric models outperform object-agnostic methods in scenes with other vehicles and pedestrians, even with an imperfect detector. We also demonstrate that our architectures perform well on real-world environments by evaluating on the Berkeley DeepDrive Video dataset, where an object-centric model outperforms object-agnostic models in the low-data regimes."
]
} |
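To make the imitation-learning recipe above concrete, here is a hypothetical behavior-cloning update on a linear stand-in for the convolutional networks in the cited work: regress the expert's control commands from observation features under an MSE loss. The model, features, and "expert" are all illustrative, not from any of the cited systems:

```python
import numpy as np

def bc_step(W, feats, targets, lr=0.1):
    """One behavior-cloning gradient step: predict expert controls
    (e.g. steering, throttle) from features and descend the MSE loss.
    A linear model stands in for the CNNs used in the cited work."""
    preds = feats @ W                                    # (N, 2)
    loss = np.mean((preds - targets) ** 2)
    grad = 2.0 * feats.T @ (preds - targets) / len(feats)
    return W - lr * grad, loss

rng = np.random.default_rng(0)
feats = rng.normal(size=(64, 5))     # fake observation features
W_expert = rng.normal(size=(5, 2))   # fake "expert" policy
targets = feats @ W_expert           # expert demonstrations
W = np.zeros((5, 2))
losses = []
for _ in range(100):
    W, loss = bc_step(W, feats, targets)
    losses.append(loss)
```

Driving the open-loop prediction error to zero, as here, is exactly what @cite_0 and @cite_12 evaluate; as the text notes, it does not by itself guarantee good closed-loop driving behavior.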
1903.00640 | 2919820522 | The decision and planning system for autonomous driving in urban environments is hard to design. Most current methods are to manually design the driving policy, which can be sub-optimal and expensive to develop and maintain at scale. Instead, with imitation learning we only need to collect data and then the computer will learn and improve the driving policy automatically. However, existing imitation learning methods for autonomous driving are hardly performing well for complex urban scenarios. Moreover, the safety is not guaranteed when we use a deep neural network policy. In this paper, we proposed a framework to learn the driving policy in urban scenarios efficiently given offline connected driving data, with a safety controller incorporated to guarantee safety at test time. The experiments show that our method can achieve high performance in realistic three-dimensional simulations of urban driving scenarios, with only hours of data collection and training on a single consumer GPU. | The CARLA simulator @cite_21 was developed and open-sourced recently. It enables training and testing autonomous driving systems in a realistic three-dimensional urban driving simulation environment. Based on CARLA, @cite_14 used conditional imitation learning to learn an end-to-end deep policy that follows high-level commands such as "go straight" and "turn left/right". @cite_15 defined several intermediate affordances, such as distance to objects, learned a deep neural network to map camera images to these affordances, and then performed model-based control based on them. | {
"cite_N": [
"@cite_14",
"@cite_15",
"@cite_21"
],
"mid": [
"2962894046",
"2808272600",
"2767621168"
],
"abstract": [
"Deep networks trained on demonstrations of human driving have learned to follow roads and avoid obstacles. However, driving policies trained via imitation learning cannot be controlled at test time. A vehicle trained end-to-end to imitate an expert cannot be guided to take a specific turn at an upcoming intersection. This limits the utility of such systems. We propose to condition imitation learning on high-level command input. At test time, the learned driving policy functions as a chauffeur that handles sensorimotor coordination but continues to respond to navigational commands. We evaluate different architectures for conditional imitation learning in vision-based driving. We conduct experiments in realistic three-dimensional simulations of urban driving and on a 1 5 scale robotic truck that is trained to drive in a residential area. Both systems drive based on visual input yet remain responsive to high-level navigational commands.",
"Most existing approaches to autonomous driving fall into one of two categories: modular pipelines, that build an extensive model of the environment, and imitation learning approaches, that map images directly to control outputs. A recently proposed third paradigm, direct perception, aims to combine the advantages of both by using a neural network to learn appropriate low-dimensional intermediate representations. However, existing direct perception approaches are restricted to simple highway situations, lacking the ability to navigate intersections, stop at traffic lights or respect speed limits. In this work, we propose a direct perception approach which maps video input to intermediate representations suitable for autonomous navigation in complex urban environments given high-level directional inputs. Compared to state-of-the-art reinforcement and conditional imitation learning approaches, we achieve an improvement of up to 68 in goal-directed navigation on the challenging CARLA simulation benchmark. In addition, our approach is the first to handle traffic lights and speed signs by using image-level labels only, as well as smooth car-following, resulting in a significant reduction of traffic accidents in simulation.",
"We introduce CARLA, an open-source simulator for autonomous driving research. CARLA has been developed from the ground up to support development, training, and validation of autonomous urban driving systems. In addition to open-source code and protocols, CARLA provides open digital assets (urban layouts, buildings, vehicles) that were created for this purpose and can be used freely. The simulation platform supports flexible specification of sensor suites and environmental conditions. We use CARLA to study the performance of three approaches to autonomous driving: a classic modular pipeline, an end-to-end model trained via imitation learning, and an end-to-end model trained via reinforcement learning. The approaches are evaluated in controlled scenarios of increasing difficulty, and their performance is examined via metrics provided by CARLA, illustrating the platform's utility for autonomous driving research. The supplementary video can be viewed at this https URL"
]
} |
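A minimal sketch of the command-conditioned branching used in conditional imitation learning (shapes and weights here are made up for illustration): the high-level navigational command acts as a switch that selects one output head over shared features, rather than being fed in as an extra input:

```python
import numpy as np

# One output head per high-level command, over a shared feature encoder.
COMMANDS = ("follow_lane", "go_straight", "turn_left", "turn_right")

def branched_policy(features, heads, command):
    """features: (d,) vector from a shared encoder (a stand-in here).
    heads: dict mapping each command to a (2, d) matrix that maps
    features to [steering, throttle]. The command selects the head."""
    return heads[command] @ features

rng = np.random.default_rng(1)
heads = {c: rng.normal(size=(2, 4)) for c in COMMANDS}
obs = np.ones(4)
action = branched_policy(obs, heads, "turn_left")
```

This switch structure is what lets the learned chauffeur keep handling sensorimotor control while remaining steerable by a navigation module at test time.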
1903.00112 | 2918922367 | In this work we present a self-supervised learning framework to simultaneously train two Convolutional Neural Networks (CNNs) to predict depth and surface normals from a single image. In contrast to most existing frameworks which represent outdoor scenes as fronto-parallel planes at piece-wise smooth depth, we propose to predict depth with surface orientation while assuming that natural scenes have piece-wise smooth normals. We show that a simple depth-normal consistency as a soft-constraint on the predictions is sufficient and effective for training both these networks simultaneously. The trained normal network provides state-of-the-art predictions while the depth network, relying on much realistic smooth normal assumption, outperforms the traditional self-supervised depth prediction network by a large margin on the KITTI benchmark. Demo video: this https URL | Recently, deep learning-based methods have come to dominate this area @cite_21 @cite_34 @cite_33 . For example, @cite_21 train a multi-scale Convolutional Neural Network, operating at coarse and fine image resolutions, to regress a depth map from a single image, and in @cite_46 they extend their network to a three-scale architecture and regress depth maps, normal maps, and semantic labels in real time from a single image. The semantic label maps were predicted from a single RGB-D image, as the additional depth channel improved results. In @cite_38 the latter work was extended to jointly predict depth, surface normals and surface curvature, which improved the results of all three tasks. | {
"cite_N": [
"@cite_38",
"@cite_33",
"@cite_21",
"@cite_46",
"@cite_34"
],
"mid": [
"2674324363",
"2600383743",
"2171740948",
"1905829557",
""
],
"abstract": [
"Understanding the 3D structure of a scene is of vital importance, when it comes to developing fully autonomous robots. To this end, we present a novel deep learning based framework that estimates depth, surface normals and surface curvature by only using a single RGB image. To the best of our knowledge this is the first work to estimate surface curvature from colour using a machine learning approach. Additionally, we demonstrate that by tuning the network to infer well designed features, such as surface curvature, we can achieve improved performance at estimating depth and normals. This indicates that network guidance is still a useful aspect of designing and training a neural network. We run extensive experiments where the network is trained to infer different tasks while the model capacity is kept constant resulting in different feature maps based on the tasks at hand. We outperform the previous state-of-the-art benchmarks which jointly estimate depths and surface normals while predicting surface curvature in parallel.",
"There are two major types of uncertainty one can model. Aleatoric uncertainty captures noise inherent in the observations. On the other hand, epistemic uncertainty accounts for uncertainty in the model - uncertainty which can be explained away given enough data. Traditionally it has been difficult to model epistemic uncertainty in computer vision, but with new Bayesian deep learning tools this is now possible. We study the benefits of modeling epistemic vs. aleatoric uncertainty in Bayesian deep learning models for vision tasks. For this we present a Bayesian deep learning framework combining input-dependent aleatoric uncertainty together with epistemic uncertainty. We study models under the framework with per-pixel semantic segmentation and depth regression tasks. Further, our explicit uncertainty formulation leads to new loss functions for these tasks, which can be interpreted as learned attenuation. This makes the loss more robust to noisy data, also giving new state-of-the-art results on segmentation and depth regression benchmarks.",
"Predicting depth is an essential component in understanding the 3D geometry of a scene. While for stereo images local correspondence suffices for estimation, finding depth relations from a single image is less straightforward, requiring integration of both global and local information from various cues. Moreover, the task is inherently ambiguous, with a large source of uncertainty coming from the overall scale. In this paper, we present a new method that addresses this task by employing two deep network stacks: one that makes a coarse global prediction based on the entire image, and another that refines this prediction locally. We also apply a scale-invariant error to help measure depth relations rather than scale. By leveraging the raw datasets as large sources of training data, our method achieves state-of-the-art results on both NYU Depth and KITTI, and matches detailed depth boundaries without the need for superpixelation.",
"In this paper we address three different computer vision tasks using a single basic architecture: depth prediction, surface normal estimation, and semantic labeling. We use a multiscale convolutional network that is able to adapt easily to each task using only small modifications, regressing from the input image to the output map directly. Our method progressively refines predictions using a sequence of scales, and captures many image details without any superpixels or low-level segmentation. We achieve state-of-the-art performance on benchmarks for all three tasks.",
""
]
} |
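The depth-normal consistency at the heart of this line of work can be illustrated with a toy orthographic version (the paper itself works with inverse depth and the perspective camera; this sketch only shows the finite-difference link between a depth map and its normals):

```python
import numpy as np

def normals_from_depth(depth):
    """Unit surface normals from a depth map via finite differences,
    under a simplified orthographic projection. Returns (H, W, 3)
    normals with the +z axis pointing back toward the camera."""
    dz_dy, dz_dx = np.gradient(depth)
    n = np.dstack([-dz_dx, -dz_dy, np.ones_like(depth)])
    return n / np.linalg.norm(n, axis=2, keepdims=True)

# A fronto-parallel plane (constant depth) yields the normal (0, 0, 1)
# everywhere; a slanted plane does not. A depth-normal consistency term
# couples the two predictions so piece-wise smooth normals are preferred
# over piece-wise constant (fronto-parallel) depth.
flat = np.full((4, 4), 5.0)
slanted = np.tile(np.arange(4.0), (4, 1))   # depth grows along x
```

For the slanted plane, dz/dx = 1 everywhere, so each normal is (−1, 0, 1)/√2: tilted away from the camera axis exactly as the surface is.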
1903.00112 | 2918922367 | In this work we present a self-supervised learning framework to simultaneously train two Convolutional Neural Networks (CNNs) to predict depth and surface normals from a single image. In contrast to most existing frameworks which represent outdoor scenes as fronto-parallel planes at piece-wise smooth depth, we propose to predict depth with surface orientation while assuming that natural scenes have piece-wise smooth normals. We show that a simple depth-normal consistency as a soft-constraint on the predictions is sufficient and effective for training both these networks simultaneously. The trained normal network provides state-of-the-art predictions while the depth network, relying on much realistic smooth normal assumption, outperforms the traditional self-supervised depth prediction network by a large margin on the KITTI benchmark. Demo video: this https URL | Liu al @cite_16 proposed to formulate depth estimation as a deep continuous Conditional Random Fields (CRF) learning problem. Given the continuous nature of the depth values, they learn the unary depth values and weightings for the pairwise smoothness potential functions via CNNs in an end-to-end framework. @cite_30 used a fully convolutional network architecture based on ResNet @cite_9 with a novel upsampler for decoding the depth map at input resolution. Kendall al @cite_33 adapted the DenseNet architecture for several regression tasks including depth prediction, and showed that jointly predicting pixelwise depths and confidences, where the output is modeled as a multivariate Gaussian distribution, improves depth estimation results. @cite_0 combined shallow convolutional networks with regression forests to reduce the need for large training sets. Recently in @cite_13 , it was proposed that sharper predictions at depth boundaries can be achieved by emphasizing local depth error gradients. 
This same phenomenon was observed in @cite_46 @cite_2 , which also emphasized local depth errors during training. Our inverse-depth normal consistency terms also emphasize local depth errors based on the predicted normals and achieve a similar effect, but in a more implicit, unsupervised fashion (i.e. with no dedicated sensory depth data). | {
"cite_N": [
"@cite_30",
"@cite_33",
"@cite_9",
"@cite_0",
"@cite_2",
"@cite_46",
"@cite_16",
"@cite_13"
],
"mid": [
"2963591054",
"2600383743",
"",
"2436453945",
"2561074213",
"1905829557",
"1803059841",
"2790104265"
],
"abstract": [
"This paper addresses the problem of estimating the depth map of a scene given a single RGB image. We propose a fully convolutional architecture, encompassing residual learning, to model the ambiguous mapping between monocular images and depth maps. In order to improve the output resolution, we present a novel way to efficiently learn feature map up-sampling within the network. For optimization, we introduce the reverse Huber loss that is particularly suited for the task at hand and driven by the value distributions commonly present in depth maps. Our model is composed of a single architecture that is trained end-to-end and does not rely on post-processing techniques, such as CRFs or other additional refinement steps. As a result, it runs in real-time on images or videos. In the evaluation, we show that the proposed model contains fewer parameters and requires fewer training data than the current state of the art, while outperforming all approaches on depth estimation. Code and models are publicly available.",
"There are two major types of uncertainty one can model. Aleatoric uncertainty captures noise inherent in the observations. On the other hand, epistemic uncertainty accounts for uncertainty in the model - uncertainty which can be explained away given enough data. Traditionally it has been difficult to model epistemic uncertainty in computer vision, but with new Bayesian deep learning tools this is now possible. We study the benefits of modeling epistemic vs. aleatoric uncertainty in Bayesian deep learning models for vision tasks. For this we present a Bayesian deep learning framework combining input-dependent aleatoric uncertainty together with epistemic uncertainty. We study models under the framework with per-pixel semantic segmentation and depth regression tasks. Further, our explicit uncertainty formulation leads to new loss functions for these tasks, which can be interpreted as learned attenuation. This makes the loss more robust to noisy data, also giving new state-of-the-art results on segmentation and depth regression benchmarks.",
"",
"This paper presents a novel deep architecture, called neural regression forest (NRF), for depth estimation from a single image. NRF combines random forests and convolutional neural networks (CNNs). Scanning windows extracted from the image represent samples which are passed down the trees of NRF for predicting their depth. At every tree node, the sample is filtered with a CNN associated with that node. Results of the convolutional filtering are passed to left and right children nodes, i.e., corresponding CNNs, with a Bernoulli probability, until the leaves, where depth estimations are made. CNNs at every node are designed to have fewer parameters than seen in recent work, but their stacked processing along a path in the tree effectively amounts to a deeper CNN. NRF allows for parallelizable training of all \"shallow\" CNNs, and efficient enforcing of smoothness in depth estimation results. Our evaluation on the benchmark Make3D and NYUv2 datasets demonstrates that NRF outperforms the state of the art, and gracefully handles gradually decreasing training datasets.",
"In this paper we formulate structure from motion as a learning problem. We train a convolutional network end-to-end to compute depth and camera motion from successive, unconstrained image pairs. The architecture is composed of multiple stacked encoder-decoder networks, the core part being an iterative network that is able to improve its own predictions. The network estimates not only depth and motion, but additionally surface normals, optical flow between the images and confidence of the matching. A crucial component of the approach is a training loss based on spatial relative differences. Compared to traditional two-frame structure from motion methods, results are more accurate and more robust. In contrast to the popular depth-from-single-image networks, DeMoN learns the concept of matching and, thus, better generalizes to structures not seen during training.",
"In this paper we address three different computer vision tasks using a single basic architecture: depth prediction, surface normal estimation, and semantic labeling. We use a multiscale convolutional network that is able to adapt easily to each task using only small modifications, regressing from the input image to the output map directly. Our method progressively refines predictions using a sequence of scales, and captures many image details without any superpixels or low-level segmentation. We achieve state-of-the-art performance on benchmarks for all three tasks.",
"In this article, we tackle the problem of depth estimation from single monocular images. Compared with depth estimation using multiple images such as stereo depth perception, depth from monocular images is much more challenging. Prior work typically focuses on exploiting geometric priors or additional sources of information, most using hand-crafted features. Recently, there is mounting evidence that features from deep convolutional neural networks (CNN) set new records for various vision applications. On the other hand, considering the continuous characteristic of the depth values, depth estimation can be naturally formulated as a continuous conditional random field (CRF) learning problem. Therefore, here we present a deep convolutional neural field model for estimating depths from single monocular images, aiming to jointly explore the capacity of deep CNN and continuous CRF. In particular, we propose a deep structured learning scheme which learns the unary and pairwise potentials of continuous CRF in a unified deep CNN framework. We then further propose an equally effective model based on fully convolutional networks and a novel superpixel pooling method, which is about 10 times faster, to speedup the patch-wise convolutions in the deep model. With this more efficient model, we are able to design deeper networks to pursue better performance. Our proposed method can be used for depth estimation of general scenes with no geometric priors nor any extra information injected. In our case, the integral of the partition function can be calculated in a closed form such that we can exactly solve the log-likelihood maximization. Moreover, solving the inference problem for predicting depths of a test image is highly efficient as closed-form solutions exist. Experiments on both indoor and outdoor scene datasets demonstrate that the proposed method outperforms state-of-the-art depth estimation approaches.",
"This paper considers the problem of single image depth estimation. The employment of convolutional neural networks (CNNs) has recently brought about significant advancements in the research of this problem. However, most existing methods suffer from loss of spatial resolution in the estimated depth maps; a typical symptom is distorted and blurry reconstruction of object boundaries. In this paper, toward more accurate estimation with a focus on depth maps with higher spatial resolution, we propose two improvements to existing approaches. One is about the strategy of fusing features extracted at different scales, for which we propose an improved network architecture consisting of four modules: an encoder, decoder, multi-scale feature fusion module, and refinement module. The other is about loss functions for measuring inference errors used in training. We show that three loss terms, which measure errors in depth, gradients and surface normals, respectively, contribute to improvement of accuracy in an complementary fashion. Experimental results show that these two improvements enable to attain higher accuracy than the current state-of-the-arts, which is given by finer resolution reconstruction, for example, with small objects and object boundaries."
]
} |
1903.00112 | 2918922367 | In this work we present a self-supervised learning framework to simultaneously train two Convolutional Neural Networks (CNNs) to predict depth and surface normals from a single image. In contrast to most existing frameworks which represent outdoor scenes as fronto-parallel planes at piece-wise smooth depth, we propose to predict depth with surface orientation while assuming that natural scenes have piece-wise smooth normals. We show that a simple depth-normal consistency as a soft-constraint on the predictions is sufficient and effective for training both these networks simultaneously. The trained normal network provides state-of-the-art predictions while the depth network, relying on much realistic smooth normal assumption, outperforms the traditional self-supervised depth prediction network by a large margin on the KITTI benchmark. Demo video: this https URL | Using stereo pairs for training, @cite_6 @cite_42 deploy an auto-encoder-like framework in which the authors propose to predict the disparity (inverse depth) of the left image, using which the right image of the stereo pair can be warped to synthesize the left image. The photometric difference between the input (left) image and the warped image is minimized to train the single-view depth predictor. An inverse-depth smoothness prior on the predicted depths is used to regularize the solution, encouraging piece-wise smooth depth maps. @cite_4 extended the above framework to jointly estimate depth and ego-motion using monocular videos - up to scale. Methods like @cite_29 @cite_26 proposed to combine the advantages of using both the spatial and temporal information available in KITTI sequences to improve depth predictions while resolving the scale ambiguity. A large body of work since has targeted better loss functions, in particular the image alignment loss: @cite_42 and @cite_3 propose to use SSIM and GANs, respectively, for image matching.
Enforcing temporal consistency in the predicted depths by aligning the back-projected depth maps via a differentiable approximation of ICP has been studied in @cite_10 . | {
"cite_N": [
"@cite_26",
"@cite_4",
"@cite_29",
"@cite_42",
"@cite_6",
"@cite_3",
"@cite_10"
],
"mid": [
"2962816904",
"2609883120",
"2760477413",
"2520707372",
"2949634581",
"2913483780",
"2785512290"
],
"abstract": [
"Despite learning based methods showing promising results in single view depth estimation and visual odometry, most existing approaches treat the tasks in a supervised manner. Recent approaches to single view depth estimation explore the possibility of learning without full supervision via minimizing photometric error. In this paper, we explore the use of stereo sequences for learning depth and visual odometry. The use of stereo sequences enables the use of both spatial (between left-right pairs) and temporal (forward backward) photometric warp error, and constrains the scene depth and camera motion to be in a common, real-world scale. At test time our framework is able to estimate single view depth and two-view odometry from a monocular sequence. We also show how we can improve on a standard photometric warp loss by considering a warp of deep features. We show through extensive experiments that: (i) jointly training for single view depth and visual odometry improves depth prediction because of the additional constraint imposed on depths and achieves competitive results for visual odometry; (ii) deep feature-based warping loss improves upon simple photometric warp loss for both single view depth estimation and visual odometry. Our method outperforms existing learning based methods on the KITTI driving dataset in both tasks. The source code is available at https: github.com Huangying-Zhan Depth-VO-Feat.",
"We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. In common with recent work [10, 14, 16], we use an end-to-end learning approach with view synthesis as the supervisory signal. In contrast to the previous work, our method is completely unsupervised, requiring only monocular video sequences for training. Our method uses single-view depth and multiview pose networks, with a loss based on warping nearby views to the target using the computed depth and pose. The networks are thus coupled by the loss during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performs comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performs favorably compared to established SLAM systems under comparable input settings.",
"We propose a novel monocular visual odometry (VO) system called UnDeepVO in this paper. UnDeepVO is able to estimate the 6-DoF pose of a monocular camera and the depth of its view by using deep neural networks. There are two salient features of the proposed UnDeepVO: one is the unsupervised deep learning scheme, and the other is the absolute scale recovery. Specifically, we train UnDeepVO by using stereo image pairs to recover the scale but test it by using consecutive monocular images. Thus, UnDeepVO is a monocular system. The loss function defined for training the networks is based on spatial and temporal dense information. A system overview is shown in Fig. 1. The experiments on KITTI dataset show our UnDeepVO achieves good performance in terms of pose accuracy.",
"Learning based methods have shown very promising results for the task of depth estimation in single images. However, most existing approaches treat depth prediction as a supervised regression problem and as a result, require vast quantities of corresponding ground truth depth data for training. Just recording quality depth data in a range of environments is a challenging problem. In this paper, we innovate beyond existing approaches, replacing the use of explicit depth data during training with easier-to-obtain binocular stereo footage. We propose a novel training objective that enables our convolutional neural network to learn to perform single image depth estimation, despite the absence of ground truth depth data. Ex-ploiting epipolar geometry constraints, we generate disparity images by training our network with an image reconstruction loss. We show that solving for image reconstruction alone results in poor quality depth images. To overcome this problem, we propose a novel training loss that enforces consistency between the disparities produced relative to both the left and right images, leading to improved performance and robustness compared to existing approaches. Our method produces state of the art results for monocular depth estimation on the KITTI driving dataset, even outperforming supervised methods that have been trained with ground truth depth.",
"A significant weakness of most current deep Convolutional Neural Networks is the need to train them using vast amounts of manu- ally labelled data. In this work we propose a unsupervised framework to learn a deep convolutional neural network for single view depth predic- tion, without requiring a pre-training stage or annotated ground truth depths. We achieve this by training the network in a manner analogous to an autoencoder. At training time we consider a pair of images, source and target, with small, known camera motion between the two such as a stereo pair. We train the convolutional encoder for the task of predicting the depth map for the source image. To do so, we explicitly generate an inverse warp of the target image using the predicted depth and known inter-view displacement, to reconstruct the source image; the photomet- ric error in the reconstruction is the reconstruction loss for the encoder. The acquisition of this training data is considerably simpler than for equivalent systems, requiring no manual annotation, nor calibration of depth sensor to camera. We show that our network trained on less than half of the KITTI dataset (without any further augmentation) gives com- parable performance to that of the state of art supervised methods for single view depth estimation.",
"Estimating depth from a single image is a very challenging and exciting topic in computer vision with implications in several application domains. Recently proposed deep learning approaches achieve outstanding results by tackling it as an image reconstruction task and exploiting geometry constraints (e.g., epipolar geometry) to obtain supervisory signals for training. Inspired by these works and compelling results achieved by Generative Adversarial Network (GAN) on image reconstruction and generation tasks, in this paper we propose to cast unsupervised monocular depth estimation within a GAN paradigm. The generator network learns to infer depth from the reference image to generate a warped target image. At training time, the discriminator network learns to distinguish between fake images generated by the generator and target frames acquired with a stereo rig. To the best of our knowledge, our proposal is the first successful attempt to tackle monocular depth estimation with a GAN paradigm and the extensive evaluation on CityScapes and KITTI datasets confirm that it enables to improve traditional approaches. Additionally, we highlight a major issue with data deployed by a standard evaluation protocol widely used in this field and fix this problem using a more reliable dataset recently made available by the KITTI evaluation benchmark.",
"We present a novel approach for unsupervised learning of depth and ego-motion from monocular video. Unsupervised learning removes the need for separate supervisory signals (depth or ego-motion ground truth, or multi-view video). Prior work in unsupervised depth learning uses pixel-wise or gradient-based losses, which only consider pixels in small local neighborhoods. Our main contribution is to explicitly consider the inferred 3D geometry of the scene, enforcing consistency of the estimated 3D point clouds and ego-motion across consecutive frames. This is a challenging task and is solved by a novel (approximate) backpropagation algorithm for aligning 3D structures. We combine this novel 3D-based loss with 2D losses based on photometric quality of frame reconstructions using estimated depth and ego-motion from adjacent frames. We also incorporate validity masks to avoid penalizing areas in which no useful information exists. We test our algorithm on the KITTI dataset and on a video dataset captured on an uncalibrated mobile phone camera. Our proposed approach consistently improves depth estimates on both datasets, and outperforms the state-of-the-art for both depth and ego-motion. Because we only require a simple video, learning depth and ego-motion on large and varied datasets becomes possible. We demonstrate this by training on the low quality uncalibrated video dataset and evaluating on KITTI, ranking among top performing prior methods which are trained on KITTI itself."
]
} |
1903.00112 | 2918922367 | In this work we present a self-supervised learning framework to simultaneously train two Convolutional Neural Networks (CNNs) to predict depth and surface normals from a single image. In contrast to most existing frameworks which represent outdoor scenes as fronto-parallel planes at piece-wise smooth depth, we propose to predict depth with surface orientation while assuming that natural scenes have piece-wise smooth normals. We show that a simple depth-normal consistency as a soft-constraint on the predictions is sufficient and effective for training both these networks simultaneously. The trained normal network provides state-of-the-art predictions while the depth network, relying on much realistic smooth normal assumption, outperforms the traditional self-supervised depth prediction network by a large margin on the KITTI benchmark. Demo video: this https URL | While most of the self-supervised approaches have mainly focused on producing accurate depth maps, little attention has been devoted to other scene representations. We are aware of two recent works @cite_27 @cite_44 which incorporate surface orientation (normal) estimation for single-view geometric understanding. Similar to @cite_4 , @cite_27 @cite_44 learn depth from monocular sequences using a self-supervised photometric loss, but additionally they compute surface normals from the predicted depths using a weighted mean cross product @cite_41 . They propose to regularize the inverse depths and the normals computed from the depth predictions simultaneously. We believe that this is redundant and that a separate normal prediction is more beneficial than relying on normals computed from the predicted depth. The differences between our work and @cite_27 @cite_44 are detailed in section and an extensive comparison of the proposed work with these methods is described in section . | {
"cite_N": [
"@cite_44",
"@cite_41",
"@cite_27",
"@cite_4"
],
"mid": [
"2963549785",
"2119759375",
"",
"2609883120"
],
"abstract": [
"Learning to estimate 3D geometry in a single image by watching unlabeled videos via deep convolutional network is attracting significant attention. In this paper, we introduce a \"3D as-smooth-as-possible (3D-ASAP)\" prior inside the pipeline, which enables joint estimation of edges and 3D scene, yielding results with significant improvement in accuracy for fine detailed structures. Specifically, we define the 3D-ASAP prior by requiring that any two points recovered in 3D from an image should lie on an existing planar surface if no other cues provided. We design an unsupervised framework that Learns Edges and Geometry (depth, normal) all at Once (LEGO). The predicted edges are embedded into depth and surface normal smoothness terms, where pixels without edges in-between are constrained to satisfy the prior. In our framework, the predicted depths, normals and edges are forced to be consistent all the time. We conduct experiments on KITTI to evaluate our estimated geometry and CityScapes to perform edge evaluation. We show that in all of the tasks, i.e. depth, normal and edge, our algorithm vastly outperforms other state-of-the-art (SOTA) algorithms, demonstrating the benefits of our approach.",
"This paper concerns accurate computation of the singular value decomposition (SVD) of an (m n ) matrix (A ). As is well known, cross-product matrix based SVD algorithms compute large singular values accurately but generally deliver poor small singular values. A new novel cross-product matrix based SVD method is proposed: (a) Use a backward stable algorithm to compute the eigenpairs of (A^ T A ) and take the square roots of the large eigenvalues of it as the large singular values of (A ) ; (b) form the Rayleigh quotient of (A^ T A ) with respect to the matrix consisting of the computed eigenvectors associated with the computed small eigenvalues of (A^ T A ) ; (c) compute the eigenvalues of the Rayleigh quotient and take the square roots of them as the small singular values of (A ). A detailed quantitative error analysis is conducted on the method. It is proved that if small singular values are well separated from the large ones then the method can compute the small ones accurately up to the order of the unit roundoff ( ). An algorithm is developed that is not only cheaper than the standard Golub–Reinsch and Chan SVD algorithms but also can update or downdate a new SVD by adding or deleting a row and compute certain refined Ritz vectors for large matrix eigenproblems at very low cost. Several variants of the algorithm are proposed that compute some or all parts of the SVD. Typical numerical examples confirm the high accuracy of our algorithm.",
"",
"We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. In common with recent work [10, 14, 16], we use an end-to-end learning approach with view synthesis as the supervisory signal. In contrast to the previous work, our method is completely unsupervised, requiring only monocular video sequences for training. Our method uses single-view depth and multiview pose networks, with a loss based on warping nearby views to the target using the computed depth and pose. The networks are thus coupled by the loss during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performs comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performs favorably compared to established SLAM systems under comparable input settings."
]
} |
1903.00278 | 2920073645 | Continuous integration is an indispensable step of modern software engineering practices to systematically manage the life cycles of system development. Developing a machine learning model is no difference - it is an engineering process with a life cycle, including design, implementation, tuning, testing, and deployment. However, most, if not all, existing continuous integration engines do not support machine learning as first-class citizens. In this paper, we present ease.ml ci, to our best knowledge, the first continuous integration system for machine learning. The challenge of building ease.ml ci is to provide rigorous guarantees, e.g., single accuracy point error tolerance with 0.999 reliability, with a practical amount of labeling effort, e.g., 2K labels per test. We design a domain specific language that allows users to specify integration conditions with reliability constraints, and develop simple novel optimizations that can lower the number of labels required by up to two orders of magnitude for test conditions popularly used in real production systems. | The baseline implementation of ease.ml/ci builds on extensive previous work on generalization and adaptive analysis. The non-adaptive version of the system is based on simple concentration inequalities @cite_7 , and the fully adaptive version of the system is inspired by Ladder @cite_11 . Compared to the latter, ease.ml/ci is less restrictive on the feedback and more expressive given the specification of the test conditions. This leads to a higher number of test samples needed in general. It is well known that the @math sample complexity of Hoeffding's inequality becomes @math when the variance of the random variable @math is of the same order as @math @cite_7 . In this paper, we develop techniques to adapt the same observation to a real-world scenario (Pattern 1).
The technique of only labeling the difference between models is inspired by disagreement-based active learning @cite_0 , which illustrates the potential of taking advantage of the overlapping structure between models to decrease labeling complexity. In fact, the technique we develop implies that one can achieve @math label complexity when the overlapping ratio between two models @math . | {
"cite_N": [
"@cite_0",
"@cite_7",
"@cite_11"
],
"mid": [
"2056138823",
"568673721",
"1798749056"
],
"abstract": [
"Active learning is a protocol for supervised machine learning, in which a learning algorithm sequentially requests the labels of selected data points from a large pool of unlabeled data. This contrasts with passive learning, where the labeled data are taken at random. The objective in active learning is to produce a highly-accurate classifier, ideally using fewer labels than the number of random labeled data sufficient for passive learning to achieve the same. This article describes recent advances in our understanding of the theoretical benefits of active learning, and implications for the design of effective active learning algorithms. Much of the article focuses on a particular technique, namely disagreement-based active learning, which by now has amassed a mature and coherent literature. It also briefly surveys several alternative approaches from the literature. The emphasis is on theorems regarding the performance of a few general algorithms, including rigorous proofs where appropriate. However, the presentation is intended to be pedagogical, focusing on results that illustrate fundamental ideas, rather than obtaining the strongest or most general known theorems. The intended audience includes researchers and advanced graduate students in machine learning and statistics, interested in gaining a deeper understanding of the recent and ongoing developments in the theory of active learning.",
"Concentration inequalities for functions of independent random variables is an area of probability theory that has witnessed a great revolution in the last few decades, and has applications in a wide variety of areas such as machine learning, statistics, discrete mathematics, and high-dimensional geometry. Roughly speaking, if a function of many independent random variables does not depend too much on any of the variables then it is concentrated in the sense that with high probability, it is close to its expected value. This book offers a host of inequalities to illustrate this rich theory in an accessible way by covering the key developments and applications in the field. The authors describe the interplay between the probabilistic structure (independence) and a variety of tools ranging from functional inequalities to transportation arguments to information theory. Applications to the study of empirical processes, random projections, random matrix theory, and threshold phenomena are also presented. A self-contained introduction to concentration inequalities, it includes a survey of concentration of sums of independent random variables, variance bounds, the entropy method, and the transportation method. Deep connections with isoperimetric problems are revealed whilst special attention is paid to applications to the supremum of empirical processes. Written by leading experts in the field and containing extensive exercise sections this book will be an invaluable resource for researchers and graduate students in mathematics, theoretical computer science, and engineering.",
"The organizer of a machine learning competition faces the problem of maintaining an accurate leaderboard that faithfully represents the quality of the best submission of each competing team. What makes this estimation problem particularly challenging is its sequential and adaptive nature. As participants are allowed to repeatedly evaluate their submissions on the leaderboard, they may begin to overfit to the holdout data that supports the leaderboard. Few theoretical results give actionable advice on how to design a reliable leaderboard. Existing approaches therefore often resort to poorly understood heuristics such as limiting the bit precision of answers and the rate of resubmission. In this work, we introduce a notion of leaderboard accuracy tailored to the format of a competition. We introduce a natural algorithm called the Ladder and demonstrate that it simultaneously supports strong theoretical guarantees in a fully adaptive model of estimation, withstands practical adversarial attacks, and achieves high utility on real submission files from an actual competition hosted by Kaggle. Notably, we are able to sidestep a powerful recent hardness result for adaptive risk estimation that rules out algorithms such as ours under a seemingly very similar notion of accuracy. On a practical note, we provide a completely parameter-free variant of our algorithm that can be deployed in a real competition with no tuning required whatsoever."
]
} |
1903.00278 | 2920073645 | Continuous integration is an indispensable step of modern software engineering practices to systematically manage the life cycles of system development. Developing a machine learning model is no different - it is an engineering process with a life cycle, including design, implementation, tuning, testing, and deployment. However, most, if not all, existing continuous integration engines do not support machine learning as first-class citizens. In this paper, we present ease.ml ci, to our best knowledge, the first continuous integration system for machine learning. The challenge of building ease.ml ci is to provide rigorous guarantees, e.g., single accuracy point error tolerance with 0.999 reliability, with a practical amount of labeling effort, e.g., 2K labels per test. We design a domain specific language that allows users to specify integration conditions with reliability constraints, and develop simple novel optimizations that can lower the number of labels required by up to two orders of magnitude for test conditions popularly used in real production systems. | The key difference between and a differential privacy approach @cite_3 for answering statistical queries lies in the optimization techniques we design. By knowing the structure of the queries, we are able to considerably lower the number of samples needed. | {
"cite_N": [
"@cite_3"
],
"mid": [
"2027595342"
],
"abstract": [
"The problem of privacy-preserving data analysis has a long history spanning multiple disciplines. As electronic data about individuals becomes increasingly detailed, and as technology enables ever more powerful collection and curation of these data, the need increases for a robust, meaningful, and mathematically rigorous definition of privacy, together with a computationally rich class of algorithms that satisfy this definition. Differential Privacy is such a definition.After motivating and discussing the meaning of differential privacy, the preponderance of this monograph is devoted to fundamental techniques for achieving differential privacy, and application of these techniques in creative combinations, using the query-release problem as an ongoing example. A key point is that, by rethinking the computational goal, one can often obtain far better results than would be achieved by methodically replacing each step of a non-private computation with a differentially private implementation. Despite some astonishingly powerful computational results, there are still fundamental limitations — not just on what can be achieved with differential privacy but on what can be achieved with any method that protects against a complete breakdown in privacy. Virtually all the algorithms discussed herein maintain differential privacy against adversaries of arbitrary computational power. Certain algorithms are computationally intensive, others are efficient. Computational complexity for the adversary and the algorithm are both discussed.We then turn from fundamentals to applications other than queryrelease, discussing differentially private methods for mechanism design and machine learning. The vast majority of the literature on differentially private algorithms considers a single, static, database that is subject to many analyses. 
Differential privacy in other models, including distributed databases and computations on data streams is discussed.Finally, we note that this work is meant as a thorough introduction to the problems and techniques of differential privacy, but is not intended to be an exhaustive survey — there is by now a vast amount of work in differential privacy, and we can cover only a small portion of it."
]
} |
1903.00041 | 2950069022 | We consider the problem of making efficient use of heterogeneous training data in neural machine translation (NMT). Specifically, given a training dataset with a sentence-level feature such as noise, we seek an optimal curriculum, or order for presenting examples to the system during training. Our curriculum framework allows examples to appear an arbitrary number of times, and thus generalizes data weighting, filtering, and fine-tuning schemes. Rather than relying on prior knowledge to design a curriculum, we use reinforcement learning to learn one automatically, jointly with the NMT system, in the course of a single training run. We show that this approach can beat uniform and filtering baselines on Paracrawl and WMT English-to-French datasets by up to +3.4 BLEU, and match the performance of a hand-designed, state-of-the-art curriculum. | The idea of a curriculum was popularized by , who viewed it as a way to improve convergence by presenting heuristically-identified easy examples first. Two recent papers @cite_13 @cite_12 explore similar ideas for NMT, and verify that this strategy can reduce training time and improve quality. | {
"cite_N": [
"@cite_13",
"@cite_12"
],
"mid": [
"2741838462",
"2898846200"
],
"abstract": [
"We examine the effects of particular orderings of sentence pairs on the on-line training of neural machine translation (NMT). We focus on two types of such orderings: (1) ensuring that each minibatch contains sentences similar in some aspect and (2) gradual inclusion of some sentence types as the training progresses (so called \"curriculum learning\"). In our English-to-Czech experiments, the internal homogeneity of minibatches has no effect on the training but some of our \"curricula\" achieve a small improvement over the baseline.",
"Machine translation systems based on deep neural networks are expensive to train. Curriculum learning aims to address this issue by choosing the order in which samples are presented during training to help train better models faster. We adopt a probabilistic view of curriculum learning, which lets us flexibly evaluate the impact of curricula design, and perform an extensive exploration on a German-English translation task. Results show that it is possible to improve convergence time at no loss in translation quality. However, results are highly sensitive to the choice of sample difficulty criteria, curriculum schedule and other hyperparameters."
]
} |
1903.00058 | 2920195923 | Neural Networks trained with gradient descent are known to be susceptible to catastrophic forgetting caused by parameter shift during the training process. In the context of Neural Machine Translation (NMT) this results in poor performance on heterogeneous datasets and on sub-tasks like rare phrase translation. On the other hand, non-parametric approaches are immune to forgetting, perfectly complementing the generalization ability of NMT. However, attempts to combine non-parametric or retrieval based approaches with NMT have only been successful on narrow domains, possibly due to over-reliance on sentence level retrieval. We propose a novel n-gram level retrieval approach that relies on local phrase level similarities, allowing us to retrieve neighbors that are useful for translation even when overall sentence similarity is low. We complement this with an expressive neural network, allowing our model to extract information from the noisy retrieved context. We evaluate our semi-parametric NMT approach on a heterogeneous dataset composed of WMT, IWSLT, JRC-Acquis and OpenSubtitles, and demonstrate gains on all 4 evaluation sets. The semi-parametric nature of our approach opens the door for non-parametric domain adaptation, demonstrating strong inference-time adaptation performance on new domains without the need for any parameter updates. | Tools incorporating information from individual translation pairs, or translation memories @cite_30 @cite_36 , have been widely utilized by human translators in the industry. There have been a few efforts attempting to combine non-parametric methods with NMT @cite_39 @cite_12 @cite_32 , but the key difference of our approach is the introduction of local, sub-sentence level similarity in the retrieval process, via n-gram level retrieval. 
Combined with our architectural improvements, motivated by the target encoder and gated attention from @cite_32 and the extended transformer model from @cite_46 , our semi-parametric NMT model is able to outperform purely neural models in broad multi-domain settings. | {
"cite_N": [
"@cite_30",
"@cite_36",
"@cite_32",
"@cite_39",
"@cite_46",
"@cite_12"
],
"mid": [
"2187442798",
"2276085145",
"2891713103",
"2618463334",
"2962712961",
"2795933031"
],
"abstract": [
"Translation Memory (TM) systems have been under the spotlight of translation technology research led by both software developers and academic institutions. Both ends try to find ways to maximize the benefits deriving from the use of these tools, whether those translate into productivity enhancements or cost savings. The involvement of the user in these efforts has always been problematic. It is usually too costly, it delays the development of the product because it takes time, and it requires a well designed mechanism to be in place that facilitates the communication between the user and the developer. Naturally, many developers cannot afford to set up such capability, thus they risk producing TM tools that fail to correspond to the needs of translation professionals. The Translation Memories Survey 2006 (abbr. TM Survey 2006), reported in this paper, was initiated with a view to acting as this very channel of information deriving from users (or potential users) of TM systems. The main purpose behind it is to present the users' perspective about TM systems and to supply data on the application domain, that is, information on the procedural aspects of the translation activity, on frequent work practices and on the tasks related to TM systems. It reports on the factors that affect TM use and offers an evaluation of the most commonly used systems according to functional and non-functional criteria. The results also reveal a range of future directions in TM research as those are envisioned by translation professionals.",
"Commercial Translation Memory systems (TM) have been available on the market for over two decades now. They have become the major language technology to support the translation and localization industries. The following paper will provide an overview of the state of the art in TM technology, explaining the major concepts and looking at recent trends in both commercial systems and research. The paper will start with a short overview of the history of TM systems and a description of their main components and types. It will then discuss the relation between TM and machine translation (MT) as well as ways of integrating the two types of translation technologies. After taking a closer look at data exchange standards relevant to TM environments, the focus of the paper then shifts towards approaches to enhance the retrieval performance of TM systems, looking at both non-linguistic and linguistic approaches.",
"",
"In this paper, we extend an attention-based neural machine translation (NMT) model by allowing it to access an entire training set of parallel sentence pairs even after training. The proposed approach consists of two stages. In the first stage--retrieval stage--, an off-the-shelf, black-box search engine is used to retrieve a small subset of sentence pairs from a training set given a source sentence. These pairs are further filtered based on a fuzzy matching score based on edit distance. In the second stage--translation stage--, a novel translation model, called translation memory enhanced NMT (TM-NMT), seamlessly uses both the source sentence and a set of retrieved sentence pairs to perform the translation. Empirical evaluation on three language pairs (En-Fr, En-De, and En-Es) shows that the proposed approach significantly outperforms the baseline approach and the improvement is more significant when more relevant sentence pairs were retrieved.",
"",
"One of the difficulties of neural machine translation (NMT) is the recall and appropriate translation of low-frequency words or phrases. In this paper, we propose a simple, fast, and effective method for recalling previously seen translation examples and incorporating them into the NMT decoding process. Specifically, for an input sentence, we use a search engine to retrieve sentence pairs whose source sides are similar with the input sentence, and then collect @math -grams that are both in the retrieved target sentences and aligned with words that match in the source sentences, which we call \"translation pieces\". We compute pseudo-probabilities for each retrieved sentence based on similarities between the input sentence and the retrieved source sentences, and use these to weight the retrieved translation pieces. Finally, an existing NMT model is used to translate the input sentence, with an additional bonus given to outputs that contain the collected translation pieces. We show our method improves NMT translation results up to 6 BLEU points on three narrow domain translation tasks where repetitiveness of the target sentences is particularly salient. It also causes little increase in the translation time, and compares favorably to another alternative retrieval-based method with respect to accuracy, speed, and simplicity of implementation."
]
} |
1903.00058 | 2920195923 | Neural Networks trained with gradient descent are known to be susceptible to catastrophic forgetting caused by parameter shift during the training process. In the context of Neural Machine Translation (NMT) this results in poor performance on heterogeneous datasets and on sub-tasks like rare phrase translation. On the other hand, non-parametric approaches are immune to forgetting, perfectly complementing the generalization ability of NMT. However, attempts to combine non-parametric or retrieval based approaches with NMT have only been successful on narrow domains, possibly due to over-reliance on sentence level retrieval. We propose a novel n-gram level retrieval approach that relies on local phrase level similarities, allowing us to retrieve neighbors that are useful for translation even when overall sentence similarity is low. We complement this with an expressive neural network, allowing our model to extract information from the noisy retrieved context. We evaluate our semi-parametric NMT approach on a heterogeneous dataset composed of WMT, IWSLT, JRC-Acquis and OpenSubtitles, and demonstrate gains on all 4 evaluation sets. The semi-parametric nature of our approach opens the door for non-parametric domain adaptation, demonstrating strong inference-time adaptation performance on new domains without the need for any parameter updates. | Some works have proposed using phrase tables or the outputs of Phrase based MT within NMT @cite_17 @cite_25 @cite_2 . While this reduces the noise present within the retrieved translation pairs, it requires training and maintaining a separate SMT system which might introduce errors of its own. | {
"cite_N": [
"@cite_2",
"@cite_25",
"@cite_17"
],
"mid": [
"2608870981",
"2765271678",
"2743229121"
],
"abstract": [
"Neural machine translation (NMT) becomes a new approach to machine translation and generates much more fluent results compared to statistical machine translation (SMT). However, SMT is usually better than NMT in translation adequacy. It is therefore a promising direction to combine the advantages of both NMT and SMT. In this paper, we propose a neural system combination framework leveraging multi-source NMT, which takes as input the outputs of NMT and SMT systems and produces the final translation. Extensive experiments on the Chinese-to-English translation task show that our model achieves significant improvement by 5.3 BLEU points over the best single system output and 3.4 BLEU points over the state-of-the-art traditional system combination methods.",
"Compared to traditional statistical machine translation (SMT), neural machine translation (NMT) often sacrifices adequacy for the sake of fluency. We propose a method to combine the advantages of traditional SMT and NMT by exploiting an existing phrase-based SMT model to compute the phrase-based decoding cost for an NMT output and then using this cost to rerank the n-best NMT outputs. The main challenge in implementing this approach is that NMT outputs may not be in the search space of the standard phrase-based decoding algorithm, because the search space of phrase-based SMT is limited by the phrase-based translation rule table. We propose a soft forced decoding algorithm, which can always successfully find a decoding path for any NMT output. We show that using the forced decoding cost to rerank the NMT outputs can successfully improve translation quality on four different language pairs.",
"In this paper, we introduce a hybrid search for attention-based neural machine translation (NMT). A target phrase learned with statistical MT models extends a hypothesis in the NMT beam search when the attention of the NMT model focuses on the source words translated by this phrase. Phrases added in this way are scored with the NMT model, but also with SMT features including phrase-level translation probabilities and a target language model. Experimental results on German->English news domain and English->Russian e-commerce domain translation tasks show that using phrase-based models in NMT search improves MT quality by up to 2.3 BLEU absolute as compared to a strong NMT baseline."
]
} |
1903.00058 | 2920195923 | Neural Networks trained with gradient descent are known to be susceptible to catastrophic forgetting caused by parameter shift during the training process. In the context of Neural Machine Translation (NMT) this results in poor performance on heterogeneous datasets and on sub-tasks like rare phrase translation. On the other hand, non-parametric approaches are immune to forgetting, perfectly complementing the generalization ability of NMT. However, attempts to combine non-parametric or retrieval based approaches with NMT have only been successful on narrow domains, possibly due to over-reliance on sentence level retrieval. We propose a novel n-gram level retrieval approach that relies on local phrase level similarities, allowing us to retrieve neighbors that are useful for translation even when overall sentence similarity is low. We complement this with an expressive neural network, allowing our model to extract information from the noisy retrieved context. We evaluate our semi-parametric NMT approach on a heterogeneous dataset composed of WMT, IWSLT, JRC-Acquis and OpenSubtitles, and demonstrate gains on all 4 evaluation sets. The semi-parametric nature of our approach opens the door for non-parametric domain adaptation, demonstrating strong inference-time adaptation performance on new domains without the need for any parameter updates. | Beyond NMT, there have been a few other attempts to incorporate non-parametric approaches into neural generative models @cite_3 @cite_28 @cite_5 . This strong trend towards combining neural generative models with non-parametric methods is an attempt to counter the weaknesses of neural networks, especially their failure to remember information from individual training instances and the diversity problem of seq2seq models @cite_23 @cite_35 . | {
"cite_N": [
"@cite_35",
"@cite_28",
"@cite_3",
"@cite_23",
"@cite_5"
],
"mid": [
"",
"2949734169",
"2963018920",
"2949555952",
"2950179609"
],
"abstract": [
"",
"In models to generate program source code from natural language, representing this code in a tree structure has been a common approach. However, existing methods often fail to generate complex code correctly due to a lack of ability to memorize large and complex structures. We introduce ReCode, a method based on subtree retrieval that makes it possible to explicitly reference existing code examples within a neural code generation model. First, we retrieve sentences that are similar to input sentences using a dynamic-programming-based sentence similarity scoring method. Next, we extract n-grams of action sequences that build the associated abstract syntax tree. Finally, we increase the probability of actions that cause the retrieved n-gram action subtree to be in the predicted code. We show that our approach improves the performance on two code generation tasks by up to +2.6 BLEU.",
"We propose a new generative language model for sentences that first samples a prototype sentence from the training corpus and then edits it into a new sentence. Compared to traditional language models that generate from scratch either left-to-right or by first sampling a latent sentence vector, our prototype-then-edit model improves perplexity on language modeling and generates higher quality outputs according to human evaluation. Furthermore, the model gives rise to a latent edit vector that captures interpretable semantics such as sentence similarity and sentence-level analogies.",
"Neural sequence models are widely used to model time-series data. Equally ubiquitous is the usage of beam search (BS) as an approximate inference algorithm to decode output sequences from these models. BS explores the search space in a greedy left-right fashion retaining only the top-B candidates - resulting in sequences that differ only slightly from each other. Producing lists of nearly identical sequences is not only computationally wasteful but also typically fails to capture the inherent ambiguity of complex AI tasks. To overcome this problem, we propose Diverse Beam Search (DBS), an alternative to BS that decodes a list of diverse outputs by optimizing for a diversity-augmented objective. We observe that our method finds better top-1 solutions by controlling for the exploration and exploitation of the search space - implying that DBS is a better search algorithm. Moreover, these gains are achieved with minimal computational or memory overhead as compared to beam search. To demonstrate the broad applicability of our method, we present results on image captioning, machine translation and visual question generation using both standard quantitative metrics and qualitative human studies. Further, we study the role of diversity for image-grounded language generation tasks as the complexity of the image changes. We observe that our method consistently outperforms BS and previously proposed techniques for diverse decoding from neural sequence models.",
"Sequence generation models for dialogue are known to have several problems: they tend to produce short, generic sentences that are uninformative and unengaging. Retrieval models on the other hand can surface interesting responses, but are restricted to the given retrieval set leading to erroneous replies that cannot be tuned to the specific context. In this work we develop a model that combines the two approaches to avoid both their deficiencies: first retrieve a response and then refine it -- the final sequence generator treating the retrieval as additional context. We show on the recent CONVAI2 challenge task our approach produces responses superior to both standard retrieval and generation models in human evaluations."
]
} |
1903.00252 | 2949063233 | Recently, learning to hash has been widely studied for image retrieval thanks to the computation and storage efficiency of binary codes. For most existing learning to hash methods, sufficient training images are required and used to learn precise hashing codes. However, in some real-world applications, there are not always sufficient training images in the domain of interest. In addition, some existing supervised approaches need a large amount of labeled data, which is an expensive process in terms of time, labor and human expertise. To handle such problems, inspired by transfer learning, we propose a simple yet effective unsupervised hashing method named Optimal Projection Guided Transfer Hashing (GTH) where we borrow the images of another different but related domain, i.e., the source domain, to help learn precise hashing codes for the domain of interest, i.e., the target domain. Besides, we propose to seek the maximum likelihood estimation (MLE) solution of the hashing functions of the target and source domains due to the domain gap. Furthermore, an alternating optimization method is adopted to obtain the two projections of the target and source domains such that the domain hashing disparity is reduced gradually. Extensive experiments on various benchmark databases verify that our method outperforms many state-of-the-art learning to hash methods. The implementation details are available at this https URL. | In the past 10 years, various hashing methods have been proposed. Based on whether prior semantic information is used, they can be categorized into two major groups: supervised hashing and unsupervised hashing.
There are many supervised hashing methods, such as LDA hashing @cite_3 , Minimal Loss Hashing @cite_0 , FastHash @cite_29 , Kernel-based Supervised Hashing (KSH) @cite_26 , Supervised Discrete Hashing (SDH) @cite_21 , Kernel-based Supervised Discrete Hashing (KSDH) @cite_34 , and Supervised Quantization for similarity search (SQ) @cite_24 , which preserve the similarity/dissimilarity of intra-class/inter-class images by using semantic information. However, label information is often lacking for model learning due to the high cost of labour and finance in many real-world applications. | {
"cite_N": [
"@cite_26",
"@cite_29",
"@cite_21",
"@cite_3",
"@cite_0",
"@cite_24",
"@cite_34"
],
"mid": [
"1992371516",
"2153273131",
"1910300841",
"2134514757",
"2221852422",
"2473499128",
"2519631826"
],
"abstract": [
"Recent years have witnessed the growing popularity of hashing in large-scale vision problems. It has been shown that the hashing quality could be boosted by leveraging supervised information into hash function learning. However, the existing supervised methods either lack adequate performance or often incur cumbersome model training. In this paper, we propose a novel kernel-based supervised hashing model which requires a limited amount of supervised information, i.e., similar and dissimilar data pairs, and a feasible training cost in achieving high quality hashing. The idea is to map the data to compact binary codes whose Hamming distances are minimized on similar pairs and simultaneously maximized on dissimilar pairs. Our approach is distinct from prior works by utilizing the equivalence between optimizing the code inner products and the Hamming distances. This enables us to sequentially and efficiently train the hash functions one bit at a time, yielding very short yet discriminative codes. We carry out extensive experiments on two image benchmarks with up to one million samples, demonstrating that our approach significantly outperforms the state-of-the-arts in searching both metric distance neighbors and semantically similar neighbors, with accuracy gains ranging from 13% to 46%.",
"Supervised hashing aims to map the original features to compact binary codes that are able to preserve label based similarity in the Hamming space. Non-linear hash functions have demonstrated their advantage over linear ones due to their powerful generalization capability. In the literature, kernel functions are typically used to achieve non-linearity in hashing, which achieve encouraging retrieval performance at the price of slow evaluation and training time. Here we propose to use boosted decision trees for achieving non-linearity in hashing, which are fast to train and evaluate, hence more suitable for hashing with high dimensional data. In our approach, we first propose sub-modular formulations for the hashing binary code inference problem and an efficient GraphCut based block search method for solving large-scale inference. Then we learn hash functions by training boosted decision trees to fit the binary codes. Experiments demonstrate that our proposed method significantly outperforms most state-of-the-art methods in retrieval precision and training time. Especially for high-dimensional data, our method is orders of magnitude faster than many methods in terms of training time.",
"Recently, learning based hashing techniques have attracted broad research interests because they can support efficient storage and retrieval for high-dimensional data such as images, videos, documents, etc. However, a major difficulty of learning to hash lies in handling the discrete constraints imposed on the pursued hash codes, which typically makes hash optimizations very challenging (NP-hard in general). In this work, we propose a new supervised hashing framework, where the learning objective is to generate the optimal binary hash codes for linear classification. By introducing an auxiliary variable, we reformulate the objective such that it can be solved substantially efficiently by employing a regularization algorithm. One of the key steps in this algorithm is to solve a regularization sub-problem associated with the NP-hard binary optimization. We show that the sub-problem admits an analytical solution via cyclic coordinate descent. As such, a high-quality discrete solution can eventually be obtained in an efficient computing manner, therefore enabling to tackle massive datasets. We evaluate the proposed approach, dubbed Supervised Discrete Hashing (SDH), on four large image datasets and demonstrate its superiority to the state-of-the-art hashing methods in large-scale image retrieval.",
"SIFT-like local feature descriptors are ubiquitously employed in computer vision applications such as content-based retrieval, video analysis, copy detection, object recognition, photo tourism, and 3D reconstruction. Feature descriptors can be designed to be invariant to certain classes of photometric and geometric transformations, in particular, affine and intensity scale transformations. However, real transformations that an image can undergo can only be approximately modeled in this way, and thus most descriptors are only approximately invariant in practice. Second, descriptors are usually high dimensional (e.g., SIFT is represented as a 128-dimensional vector). In large-scale retrieval and matching problems, this can pose challenges in storing and retrieving descriptor data. We map the descriptor vectors into the Hamming space in which the Hamming metric is used to compare the resulting representations. This way, we reduce the size of the descriptors by representing them as short binary strings and learn descriptor invariance from examples. We show extensive experimental validation, demonstrating the advantage of the proposed approach.",
"We propose a method for learning similarity-preserving hash functions that map high-dimensional data onto binary codes. The formulation is based on structured prediction with latent variables and a hinge-like loss function. It is efficient to train for large datasets, scales well to large code lengths, and outperforms state-of-the-art methods.",
"In this paper, we address the problem of searching for semantically similar images from a large database. We present a compact coding approach, supervised quantization. Our approach simultaneously learns feature selection that linearly transforms the database points into a low-dimensional discriminative subspace, and quantizes the data points in the transformed space. The optimization criterion is that the quantized points not only approximate the transformed points accurately, but also are semantically separable: the points belonging to a class lie in a cluster that is not overlapped with other clusters corresponding to other classes, which is formulated as a classification problem. The experiments on several standard datasets show the superiority of our approach over the state-of-the-art supervised hashing and unsupervised quantization algorithms.",
"Recently hashing has become an important tool to tackle the problem of large-scale nearest neighbor searching in computer vision. However, learning discrete hashing codes is a very challenging task due to the NP hard optimization problem. In this paper, we propose a novel yet simple kernel-based supervised discrete hashing method via an asymmetric relaxation strategy. Specifically, we present an optimization model with preserving the hashing function and the relaxed linear function simultaneously to reduce the accumulated quantization error between hashing and linear functions. Furthermore, we improve the hashing model by relaxing the hashing function into a general binary code matrix and introducing an additional regularization term. Then we solve these two optimization models via an alternative strategy, which can effectively and stably preserve the similarity of neighbors in a low-dimensional Hamming space. The proposed hashing method can produce informative short binary codes that require less storage volume and lower optimization time cost. Extensive experiments on multiple benchmark databases demonstrate the effectiveness of the proposed hashing method with short binary codes and its superior performance over the state of the art."
]
} |
1903.00252 | 2949063233 | Recently, learning to hash has been widely studied for image retrieval thanks to the computation and storage efficiency of binary codes. For most existing learning to hash methods, sufficient training images are required and used to learn precise hashing codes. However, in some real-world applications, there are not always sufficient training images in the domain of interest. In addition, some existing supervised approaches need a large amount of labeled data, which is an expensive process in terms of time, labeling and human expertise. To handle such problems, inspired by transfer learning, we propose a simple yet effective unsupervised hashing method named Optimal Projection Guided Transfer Hashing (GTH) where we borrow the images of another different but related domain, i.e., the source domain, to help learn precise hashing codes for the domain of interest, i.e., the target domain. Besides, we propose to seek the maximum likelihood estimation (MLE) solution of the hashing functions of the target and source domains due to the domain gap. Furthermore, an alternating optimization method is adopted to obtain the two projections of the target and source domains such that the domain hashing disparity is reduced gradually. Extensive experiments on various benchmark databases verify that our method outperforms many state-of-the-art learning to hash methods. The implementation details are available at this https URL. | Unsupervised hashing methods aim to explore the intrinsic structure of data to preserve the similarity of neighbors without any supervised information. A number of unsupervised hashing methods have been developed in recent years. Locality-sensitive Hashing (LSH) @cite_9 , a typical data-independent method, uses a set of randomly generated projections to transform the image features into hashing codes.
The representative unsupervised and data-dependent hashing methods include Spectral Hashing (SH) @cite_17 , Anchor Graph Hashing (AGH) @cite_4 , Iterative Quantization (ITQ) @cite_27 , Density Sensitive Hashing (DSH) @cite_33 , Circulant Binary Embedding (CBE) @cite_2 , etc. Several ranking-preserving hashing algorithms have been proposed recently to learn more discriminative binary codes, e.g., Scalable Graph Hashing (SGH) @cite_6 and Ordinal Constraint Hashing (OCH) @cite_25 . | {
"cite_N": [
"@cite_4",
"@cite_33",
"@cite_9",
"@cite_6",
"@cite_27",
"@cite_2",
"@cite_25",
"@cite_17"
],
"mid": [
"",
"2040918046",
"1502916507",
"2402125293",
"2084363474",
"138476454",
"2794673239",
""
],
"abstract": [
"",
"Nearest neighbor search is a fundamental problem in various research fields like machine learning, data mining and pattern recognition. Recently, hashing-based approaches, for example, locality sensitive hashing (LSH), are proved to be effective for scalable high dimensional nearest neighbor search. Many hashing algorithms found their theoretic root in random projection. Since these algorithms generate the hash tables (projections) randomly, a large number of hash tables (i.e., long codewords) are required in order to achieve both high precision and recall. To address this limitation, we propose a novel hashing algorithm called density sensitive hashing (DSH) in this paper. DSH can be regarded as an extension of LSH. By exploring the geometric structure of the data, DSH avoids the purely random projections selection and uses those projective functions which best agree with the distribution of the data. Extensive experimental results on real-world data sets have shown that the proposed method achieves better performance compared to the state-of-the-art hashing approaches.",
"The nearestor near-neighbor query problems arise in a large variety of database applications, usually in the context of similarity searching. Of late, there has been increasing interest in building search index structures for performing similarity search over high-dimensional data, e.g., image databases, document collections, time-series databases, and genome databases. Unfortunately, all known techniques for solving this problem fall prey to the of dimensionality.\" That is, the data structures scale poorly with data dimensionality; in fact, if the number of dimensions exceeds 10 to 20, searching in k-d trees and related structures involves the inspection of a large fraction of the database, thereby doing no better than brute-force linear search. It has been suggested that since the selection of features and the choice of a distance metric in typical applications is rather heuristic, determining an approximate nearest neighbor should su ce for most practical purposes. In this paper, we examine a novel scheme for approximate similarity search based on hashing. The basic idea is to hash the points Supported by NAVY N00014-96-1-1221 grant and NSF Grant IIS-9811904. Supported by Stanford Graduate Fellowship and NSF NYI Award CCR-9357849. Supported by ARO MURI Grant DAAH04-96-1-0007, NSF Grant IIS-9811904, and NSF Young Investigator Award CCR9357849, with matching funds from IBM, Mitsubishi, Schlumberger Foundation, Shell Foundation, and Xerox Corporation. Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the VLDB copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Very Large Data Base Endowment. To copy otherwise, or to republish, requires a fee and or special permission from the Endowment. Proceedings of the 25th VLDB Conference, Edinburgh, Scotland, 1999. 
from the database so as to ensure that the probability of collision is much higher for objects that are close to each other than for those that are far apart. We provide experimental evidence that our method gives significant improvement in running time over other methods for searching in high-dimensional spaces based on hierarchical tree decomposition. Experimental results also indicate that our scheme scales well even for a relatively large number of dimensions (more than 50).",
"Hashing has been widely used for approximate nearest neighbor (ANN) search in big data applications because of its low storage cost and fast retrieval speed. The goal of hashing is to map the data points from the original space into a binary-code space where the similarity (neighborhood structure) in the original space is preserved. By directly exploiting the similarity to guide the hashing code learning procedure, graph hashing has attracted much attention. However, most existing graph hashing methods cannot achieve satisfactory performance in real applications due to the high complexity for graph modeling. In this paper, we propose a novel method, called scalable graph hashing with feature transformation (SGH), for large-scale graph hashing. Through feature transformation, we can effectively approximate the whole graph without explicitly computing the similarity graph matrix, based on which a sequential learning method is proposed to learn the hash functions in a bitwise manner. Experiments on two datasets with one million data points show that our SGH method can outperform the state-of-the-art methods in terms of both accuracy and scalability.",
"This paper addresses the problem of learning similarity-preserving binary codes for efficient retrieval in large-scale image collections. We propose a simple and efficient alternating minimization scheme for finding a rotation of zero-centered data so as to minimize the quantization error of mapping this data to the vertices of a zero-centered binary hypercube. This method, dubbed iterative quantization (ITQ), has connections to multi-class spectral clustering and to the orthogonal Procrustes problem, and it can be used both with unsupervised data embeddings such as PCA and supervised embeddings such as canonical correlation analysis (CCA). Our experiments show that the resulting binary coding schemes decisively outperform several other state-of-the-art methods.",
"Binary embedding of high-dimensional data requires long codes to preserve the discriminative power of the input space. Traditional binary coding methods often suffer from very high computation and storage costs in such a scenario. To address this problem, we propose Circulant Binary Embedding (CBE) which generates binary codes by projecting the data with a circulant matrix. The circulant structure enables the use of Fast Fourier Transformation to speed up the computation. Compared to methods that use unstructured matrices, the proposed method improves the time complexity from O(d2) to O(d log d), and the space complexity from O(d2) to O(d) where d is the input dimensionality. We also propose a novel time-frequency alternating optimization to learn data-dependent circulant projections, which alternatively minimizes the objective in original and Fourier domains. We show by extensive experiments that the proposed approach gives much better performance than the state-of-the-art approaches for fixed time, and provides much faster computation with no performance degradation for fixed number of bits.",
"Binary code learning, a.k.a. hashing, has been successfully applied to the approximate nearest neighbor search in large-scale image collections. The key challenge lies in reducing the quantization error from the original real-valued feature space to a discrete Hamming space. Recent advances in unsupervised hashing advocate the preservation of ranking information, which is achieved by constraining the binary code learning to be correlated with pairwise similarity. However, few unsupervised methods consider the preservation of ordinal relations in the learning process, which serves as a more basic cue to learn optimal binary codes. In this paper, we propose a novel hashing scheme, termed Ordinal Constraint Hashing (OCH), which embeds the ordinal relation among data points to preserve ranking into binary codes. The core idea is to construct an ordinal graph via tensor product, and then train the hash function over this graph to preserve the permutation relations among data points in the Hamming space. Subsequently, an in-depth acceleration scheme, termed Ordinal Constraint Projection (OCP), is introduced, which approximates the @math -pair ordinal graph by @math -pair anchor-based ordinal graph, and reduce the corresponding complexity from @math to @math ( @math ). Finally, to make the optimization tractable, we further relax the discrete constrains and design a customized stochastic gradient decent algorithm on the Stiefel manifold. Experimental results on serval large-scale benchmarks demonstrate that the proposed OCH method can achieve superior performance over the state-of-the-art approaches.",
""
]
} |
1903.00252 | 2949063233 | Recently, learning to hash has been widely studied for image retrieval thanks to the computation and storage efficiency of binary codes. For most existing learning to hash methods, sufficient training images are required and used to learn precise hashing codes. However, in some real-world applications, there are not always sufficient training images in the domain of interest. In addition, some existing supervised approaches need a large amount of labeled data, which is an expensive process in terms of time, labeling and human expertise. To handle such problems, inspired by transfer learning, we propose a simple yet effective unsupervised hashing method named Optimal Projection Guided Transfer Hashing (GTH) where we borrow the images of another different but related domain, i.e., the source domain, to help learn precise hashing codes for the domain of interest, i.e., the target domain. Besides, we propose to seek the maximum likelihood estimation (MLE) solution of the hashing functions of the target and source domains due to the domain gap. Furthermore, an alternating optimization method is adopted to obtain the two projections of the target and source domains such that the domain hashing disparity is reduced gradually. Extensive experiments on various benchmark databases verify that our method outperforms many state-of-the-art learning to hash methods. The implementation details are available at this https URL. | Transfer learning (TL) @cite_19 , a newly proposed learning concept, aims to transfer knowledge across two different domains such that rich source domain knowledge can be utilized to generate better classifiers on a target domain. In transfer learning, the transferred knowledge can be labels @cite_32 , @cite_11 , features @cite_14 , @cite_31 , @cite_22 , @cite_5 and cross domain correspondences @cite_23 , @cite_10 . Transfer learning has shown promising results in many machine learning tasks, such as classification and regression.
To the best of our knowledge, there are few works studying transfer learning for hashing. Most of them are based on deep learning @cite_7 . The recent work @cite_20 proposes a transfer hashing from shallow to deep. Different from their works, we focus on how to transfer knowledge across hashing projections in an unsupervised manner. It is worth noting that the labels of neither the target nor the source domain are used in our GTH. | {
"cite_N": [
"@cite_14",
"@cite_22",
"@cite_7",
"@cite_32",
"@cite_19",
"@cite_23",
"@cite_5",
"@cite_31",
"@cite_10",
"@cite_20",
"@cite_11"
],
"mid": [
"2517537544",
"2435696002",
"2951111165",
"2062518264",
"",
"2283717164",
"2770824758",
"2752674409",
"",
"2799787995",
"2608045553"
],
"abstract": [
"",
"Hashing has shown its efficiency and effectiveness in facilitating large-scale multimedia applications. Supervised knowledge (, semantic labels or pair-wise relationship) associated to data is capable of significantly improving the quality of hash codes and hash functions. However, confronted with the rapid growth of newly-emerging concepts and multimedia data on the Web, existing supervised hashing approaches may easily suffer from the scarcity and validity of supervised information due to the expensive cost of manual labelling. In this paper, we propose a novel hashing scheme, termed (ZSH), which compresses images of \"unseen\" categories to binary codes with hash functions learned from limited training data of \"seen\" categories. Specifically, we project independent data labels (i.e., 0 1-form label vectors) into semantic embedding space, where semantic relationships among all the labels can be precisely characterized and thus seen supervised knowledge can be transferred to unseen classes. Moreover, in order to cope with the semantic shift problem, we rotate the embedded space to more suitably align the embedded semantics with the low-level visual feature space, thereby alleviating the influence of semantic gap. In the meantime, to exert positive effects on learning high-quality hash functions, we further propose to preserve local structural property and discrete nature in binary codes. Besides, we develop an efficient alternating algorithm to solve the ZSH model. Extensive experiments conducted on various real-life datasets show the superior zero-shot image retrieval performance of ZSH as compared to several state-of-the-art hashing methods.",
"In recent years, deep neural networks have emerged as a dominant machine learning tool for a wide variety of application domains. However, training a deep neural network requires a large amount of labeled data, which is an expensive process in terms of time, labor and human expertise. Domain adaptation or transfer learning algorithms address this challenge by leveraging labeled data in a different, but related source domain, to develop a model for the target domain. Further, the explosive growth of digital data has posed a fundamental challenge concerning its storage and retrieval. Due to its storage and retrieval efficiency, recent years have witnessed a wide application of hashing in a variety of computer vision applications. In this paper, we first introduce a new dataset, Office-Home, to evaluate domain adaptation algorithms. The dataset contains images of a variety of everyday objects from multiple domains. We then propose a novel deep learning framework that can exploit labeled source data and unlabeled target data to learn informative hash codes, to accurately classify unseen target data. To the best of our knowledge, this is the first research effort to exploit the feature learning capabilities of deep neural networks to learn representative hash codes to address the domain adaptation problem. Our extensive empirical studies on multiple transfer tasks corroborate the usefulness of the framework in learning efficient hash codes which outperform existing competitive baselines for unsupervised domain adaptation.",
"Most previous heterogeneous transfer learning methods learn a cross-domain feature mapping between heterogeneous feature spaces based on a few cross-domain instance-correspondences, and these corresponding instances are assumed to be representative in the source and target domains respectively. However, in many real-world scenarios, this assumption may not hold. As a result, the constructed feature mapping may not be precise due to the bias issue of the correspondences in the target or (and) source domain(s). In this case, a classifier trained on the labeled transformed-source-domain data may not be useful for the target domain. In this paper, we present a new transfer learning framework called Hybrid Heterogeneous Transfer Learning (HHTL), which allows the corresponding instances across domains to be biased in either the source or target domain. Specifically, we propose a deep learning approach to learn a feature mapping between cross-domain heterogeneous features as well as a better feature representation for mapped data to reduce the bias issue caused by the cross-domain correspondences. Extensive experiments on several multilingual sentiment classification tasks verify the effectiveness of our proposed approach compared with some baseline methods.",
"",
"We propose a novel reconstruction-based transfer learning method called latent sparse domain transfer (LSDT) for domain adaptation and visual categorization of heterogeneous data. For handling cross-domain distribution mismatch, we advocate reconstructing the target domain data with the combined source and target domain data points based on @math -norm sparse coding. Furthermore, we propose a joint learning model for simultaneous optimization of the sparse coding and the optimal subspace representation. In addition, we generalize the proposed LSDT model into a kernel-based linear nonlinear basis transformation learning framework for tackling nonlinear subspace shifts in reproduced kernel Hilbert space. The proposed methods have three advantages: 1) the latent space and the reconstruction are jointly learned for pursuit of an optimal subspace transfer; 2) with the theory of sparse subspace clustering, a few valuable source and target data points are formulated to reconstruct the target data with noise (outliers) from source domain removed during domain adaptation, such that the robustness is guaranteed; and 3) a nonlinear projection of some latent space with kernel is easily generalized for dealing with highly nonlinear domain shift (e.g., face poses). Extensive experiments on several benchmark vision data sets demonstrate that the proposed approaches outperform other state-of-the-art representation-based domain adaptation methods.",
"Subspace learning and reconstruction have been widely explored in recent transfer learning work and generally a specially designed projection and reconstruction transfer matrix are wanted. However, existing subspace reconstruction based algorithms neglect the class prior such that the learned transfer function is biased, especially when data scarcity of some class is encountered. Different from those previous methods, in this paper, we propose a novel reconstruction-based transfer learning method called Class-specific Reconstruction Transfer Learning (CRTL), which optimizes a well-designed transfer loss function without class bias. Using a class-specific reconstruction matrix to align the source domain with the target domain which provides help for classification with class prior modeling. Furthermore, to keep the intrinsic relationship between data and labels after feature augmentation, a projected Hilbert-Schmidt Independence Criterion (pHSIC), that measures the dependency between two sets, is first proposed by mapping the data from original space to RKHS in transfer learning. In addition, combining low-rank and sparse constraints on the class-specific reconstruction coefficient matrix, the global and local data structures can be effectively preserved. Extensive experiments demonstrate that the proposed method outperforms conventional representation-based domain adaptation methods.",
"Hashing has been recognized as one of the most promising ways in indexing and retrieving high-dimensional data due to the excellent merits in efficiency and effectiveness. Nevertheless, most existing approaches inevitably suffer from the problem of “semantic gap”, especially when facing the rapid evolution of newly-emerging “unseen” categories on the Web. In this work, we propose an innovative approach, termed Attribute Hashing (AH), to facilitate zero-shot image retrieval (i.e., query by “unseen” images). In particular, we propose a multi-layer hierarchy for hashing, which fully exploits attributes to model the relationships among visual features, binary codes and labels. Besides, we deliberately preserve the nature of hash codes (i.e., discreteness and local structure) to the greatest extent. We conduct extensive experiments on several real-world image datasets to show the superiority of our proposed AH approach as compared to the state-of-the-arts.",
"",
"One major assumption used in most existing hashing approaches is that the domain of interest (i.e., the target domain) could provide sufficient training data, either labeled or unlabeled. However, this assumption may be violated in practice. To address this so-called data sparsity issue in hashing, a new framework termed transfer hashing with privileged information (THPI) is proposed, which marriages hashing and transfer learning (TL). To show the efficacy of THPI, we propose three variants of the well-known iterative quantization (ITQ) [11] as a showcase. The proposed methods, ITQ+, LapITQ+, and deep transfer hashing (DTH), solve the aforementioned data sparsity issue from different aspects. Specifically, ITQ+ is a shallow model, which makes ITQ achieve hashing in a TL manner. ITQ+ learns a new slack function from the source domain to approximate the quantization error on the target domain given by ITQ. To further improve the performance of ITQ+, LapITQ+ is proposed by embedding the geometric relationship of the source domain into the target domain. Moreover, DTH is proposed to show the generality of our framework by utilizing the powerful representative capacity of deep learning. To the best of our knowledge, this could be one of the first DTH works. Extensive experiments on several popular data sets demonstrate the effectiveness of our shallow and DTH approaches comparing with several state-of-the-art hashing approaches.",
"Despite the promising progress made in recent years, person re-identification (re-ID) remains a challenging task due to the complex variations in human appearances from different camera views. For this challenging problem, a large variety of algorithms have been developed in the fully supervised setting, requiring access to a large amount of labeled training data. However, the main bottleneck for fully supervised re-ID is the limited availability of labeled training samples. To address this problem, we propose a self-trained subspace learning paradigm for person re-ID that effectively utilizes both labeled and unlabeled data to learn a discriminative subspace where person images across disjoint camera views can be easily matched. The proposed approach first constructs pseudo-pairwise relationships among unlabeled persons using the k-nearest neighbors algorithm. Then, with the pseudo-pairwise relationships, the unlabeled samples can be easily combined with the labeled samples to learn a discriminative projection by solving an eigenvalue problem. In addition, we refine the pseudo-pairwise relationships iteratively, which further improves learning performance. A multi-kernel embedding strategy is also incorporated into the proposed approach to cope with the non-linearity in a person’s appearance and explore the complementation of multiple kernels. In this way, the performance of person re-ID can be greatly enhanced when training data are insufficient. Experimental results on six widely used datasets demonstrate the effectiveness of our approach, and its performance can be comparable to the reported results of most state-of-the-art fully supervised methods while using much fewer labeled data."
]
} |
1903.00049 | 2920492183 | Bayes factors, in many cases, have been proven to bridge the classic p-value based significance testing and Bayesian analysis of posterior odds. This paper discusses this phenomenon within the binomial A/B testing setup (applicable for example to conversion testing). It is shown that the Bayes factor is controlled by the Jensen-Shannon divergence of success ratios in the two tested groups, which can be further bounded by the Welch statistic. As a result, Bayesian sample bounds almost match frequentist sample bounds. The link between Jensen-Shannon divergence and Welch's test as well as the derivation are an elegant application of tools from information geometry. | Bounds eq:1 and eq:2 are close to each other up to a constant factor (a different small factor is necessary to make the bound small in both the Bayesian credibility and p-value sense). The difference (under the normalized constant) is illustrated in fig:example , for the case when one wants to test a relative uplift of @math ; since the difference scales as $e^{-z^2/2}$, this is true for p-values much lower than the standard threshold of 0.05. In some sense, the Bayesian approach is more conservative and more reluctant to reject than frequentist tests; this conclusion is shared with other works @cite_1 . | {
"cite_N": [
"@cite_1"
],
"mid": [
"2030346622"
],
"abstract": [
"Bayesian inference is usually presented as a method for determining how scientific belief should be modified by data. Although Bayesian methodology has been one of the most active areas of statistical development in the past 20 years, medical researchers have been reluctant to embrace what they perceive as a subjective approach to data analysis. It is little understood that Bayesian methods have a data-based core, which can be used as a calculus of evidence. This core is the Bayes factor, which in its simplest form is also called a likelihood ratio. The minimum Bayes factor is objective and can be used in lieu of the P value as a measure of the evidential strength. Unlike P values, Bayes factors have a sound theoretical foundation and an interpretation that allows their use in both inference and decision making. Bayes factors show that P values greatly overstate the evidence against the null hypothesis. Most important, Bayes factors require the addition of background knowledge to be transformed into inferences-probabilities that a given conclusion is right or wrong. They make the distinction clear between experimental evidence and inferential conclusions while providing a framework in which to combine prior with current evidence."
]
} |
1903.00159 | 2952054750 | The problem of localization on a geo-referenced satellite map given a query ground view image is useful yet remains challenging due to the drastic change in viewpoint. To this end, in this paper we work on the extension of our earlier work on the Cross-View Matching Network (CVM-Net) for the ground-to-aerial image matching task since the traditional image descriptors fail due to the drastic viewpoint change. In particular, we show more extensive experimental results and analyses of the network architecture on our CVM-Net. Furthermore, we propose a Markov localization framework that enforces the temporal consistency between image frames to enhance the geo-localization results in the case where a video stream of ground view images is available. Experimental results show that our proposed Markov localization framework can continuously localize the vehicle within a small error on our Singapore dataset. | In the early stage, traditional features that were commonly used in the computer vision community were utilized to do the cross-view image matching ( @cite_35 ; @cite_18 ; @cite_45 ; @cite_4 ; @cite_36 ; @cite_12 ). However, due to the huge difference in viewpoint, the aerial image and ground view image of the same location appeared to be very different. This caused direct matching with traditional local features to fail. Hence, a number of approaches warped the ground image to the top-down view to improve feature matching ( @cite_35 ; @cite_45 ; @cite_12 ). In cases where building facades are visible from oblique aerial images, geo-localization can be achieved with facade patch-matching @cite_18 . | {
"cite_N": [
"@cite_35",
"@cite_18",
"@cite_4",
"@cite_36",
"@cite_45",
"@cite_12"
],
"mid": [
"14517147",
"2061153075",
"2065250506",
"",
"1981154748",
"1988873224"
],
"abstract": [
"Obtaining an accurate vehicle position is important for intelligent vehicles in supporting driver safety and comfort. This paper proposes an accurate ego-localization method by matching in-vehicle camera images to an aerial image. There are two major problems in performing an accurate matching: (1) image difference between the aerial image and the in-vehicle camera image due to view-point and illumination conditions, and (2) occlusions in the in-vehicle camera image. To solve the first problem, we use the SURF image descriptor, which achieves robust feature-point matching for the various image differences. Additionally, we extract appropriate feature-points from each road-marking region on the road plane in both images. For the second problem, we utilize sequential multiple in-vehicle camera frames in the matching. The experimental results demonstrate that the proposed method improves both ego-localization accuracy and stability.",
"We study the feasibility of solving the challenging problem of geolocalizing ground level images in urban areas with respect to a database of images captured from the air such as satellite and oblique aerial images. We observe that comprehensive aerial image databases are widely available while complete coverage of urban areas from the ground is at best spotty. As a result, localization of ground level imagery with respect to aerial collections is a technically important and practically significant problem. We exploit two key insights: (1) satellite image to oblique aerial image correspondences are used to extract building facades, and (2) building facades are matched between oblique aerial and ground images for geo-localization. Key contributions include: (1) A novel method for extracting building facades using building outlines; (2) Correspondence of building facades between oblique aerial and ground images without direct matching; and (3) Position and orientation estimation of ground images. We show results of ground image localization in a dense urban area.",
"In this paper, we present a novel computer vision framework for precise localization of a mobile robot on sidewalks. In our framework, we combine stereo camera images, visual odometry, satellite map matching, and a sidewalk probability transfer function obtained from street maps in order to attain globally corrected localization results. The framework is capable of precisely localizing a mobile robot platform that navigates on sidewalks, without the use of traditional wheel odometry, GPS or INS inputs. On a complex 570-meter sidewalk route, we show that we obtain superior localization results compared to visual odometry and GPS.",
"",
"We present a framework for global vehicle localization and 3D point cloud reconstruction that combines stereo visual odometry, satellite images, and road maps under a particle-filtering architecture. The framework focuses on the general vehicle localization scenario without the use of global positioning system for urban and rural environments and with the presence of moving objects. The main novelties of our approach are using road maps and rendering accurate top views using stereo reconstruction, and match these views with the satellite images in order to eliminate drifts and obtain accurate global localization. We show that our method is practicable by presenting experimental results on a 2 km road where mostly specific road features do not exist.",
""
]
} |
1903.00159 | 2952054750 | The problem of localization on a geo-referenced satellite map given a query ground view image is useful yet remains challenging due to the drastic change in viewpoint. To this end, in this paper we work on the extension of our earlier work on the Cross-View Matching Network (CVM-Net) for the ground-to-aerial image matching task since the traditional image descriptors fail due to the drastic viewpoint change. In particular, we show more extensive experimental results and analyses of the network architecture on our CVM-Net. Furthermore, we propose a Markov localization framework that enforces the temporal consistency between image frames to enhance the geo-localization results in the case where a video stream of ground view images is available. Experimental results show that our proposed Markov localization framework can continuously localize the vehicle within a small error on our Singapore dataset. | As deep learning approaches have proven to be extremely successful in image and video classification and recognition tasks, many efforts have been made to introduce deep learning into the domain of cross-view image matching and retrieval. Workman and Jacobs @cite_24 conducted experiments on the AlexNet @cite_32 model trained on ImageNet @cite_49 and Places @cite_33 . They showed that deep features for common image classification significantly outperformed hand-crafted features. Later on, @cite_11 further improved the matching accuracy by training the convolutional neural network on the aerial branch. Vo and Hays @cite_38 conducted thorough experiments on existing classification and retrieval networks, including binary classification, Siamese and Triplet networks. With a novel soft-margin Triplet loss and an exhaustive mini-batch training strategy, they achieved a significant improvement in retrieval accuracy. On the other hand, @cite_15 proposed a weakly supervised training network to obtain the semantic layout of satellite images. 
These layouts were then used as image descriptors for retrieval from the database. | {
"cite_N": [
"@cite_38",
"@cite_33",
"@cite_32",
"@cite_24",
"@cite_49",
"@cite_15",
"@cite_11"
],
"mid": [
"2479919622",
"2134670479",
"2163605009",
"1938020354",
"2108598243",
"2572697301",
"2199890863"
],
"abstract": [
"In this paper we aim to determine the location and orientation of a ground-level query image by matching to a reference database of overhead (e.g. satellite) images. For this task we collect a new dataset with one million pairs of street view and overhead images sampled from eleven U.S. cities. We explore several deep CNN architectures for cross-domain matching – Classification, Hybrid, Siamese, and Triplet networks. Classification and Hybrid architectures are accurate but slow since they allow only partial feature precomputation. We propose a new loss function which significantly improves the accuracy of Siamese and Triplet embedding networks while maintaining their applicability to large-scale retrieval tasks like image geolocalization. This image matching task is challenging not just because of the dramatic viewpoint difference between ground-level and overhead imagery but because the orientation (i.e. azimuth) of the street views is unknown making correspondence even more difficult. We examine several mechanisms to match in spite of this – training for rotation invariance, sampling possible rotations at query time, and explicitly predicting relative rotation of ground and overhead images with our deep networks. It turns out that explicit orientation supervision also improves location prediction accuracy. Our best performing architectures are roughly 2.5 times as accurate as the commonly used Siamese network baseline.",
"Scene recognition is one of the hallmark tasks of computer vision, allowing definition of a context for object recognition. Whereas the tremendous recent progress in object recognition tasks is due to the availability of large datasets like ImageNet and the rise of Convolutional Neural Networks (CNNs) for learning high-level features, performance at scene recognition has not attained the same level of success. This may be because current deep features trained from ImageNet are not competitive enough for such tasks. Here, we introduce a new scene-centric database called Places with over 7 million labeled pictures of scenes. We propose new methods to compare the density and diversity of image datasets and show that Places is as dense as other scene datasets and has more diversity. Using CNN, we learn deep features for scene recognition tasks, and establish new state-of-the-art results on several scene-centric datasets. A visualization of the CNN layers' responses allows us to show differences in the internal representations of object-centric and scene-centric networks.",
"We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called \"dropout\" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.",
"As the availability of geotagged imagery has increased, so has the interest in geolocation-related computer vision applications, ranging from wide-area image geolocalization to the extraction of environmental data from social network imagery. Encouraged by the recent success of deep convolutional networks for learning high-level features, we investigate the usefulness of deep learned features for such problems. We compare features extracted from various layers of convolutional neural networks and analyze their discriminative ability with regards to location. Our analysis spans several problem settings, including region identification, visualizing land cover in aerial imagery, and ground-image localization in regions without ground-image reference data (where we achieve state-of-the-art performance on a benchmark dataset). We present results on multiple datasets, including a new dataset we introduce containing hundreds of thousands of ground-level and aerial images in a large region centered around San Francisco.",
"The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond.",
"We introduce a novel strategy for learning to extract semantically meaningful features from aerial imagery. Instead of manually labeling the aerial imagery, we propose to predict (noisy) semantic features automatically extracted from co-located ground imagery. Our network architecture takes an aerial image as input, extracts features using a convolutional neural network, and then applies an adaptive transformation to map these features into the ground-level perspective. We use an end-to-end learning approach to minimize the difference between the semantic segmentation extracted directly from the ground image and the semantic segmentation predicted solely based on the aerial image. We show that a model learned using this strategy, with no additional training, is already capable of rough semantic labeling of aerial imagery. Furthermore, we demonstrate that by finetuning this model we can achieve more accurate semantic segmentation than two baseline initialization strategies. We use our network to address the task of estimating the geolocation and geo-orientation of a ground image. Finally, we show how features extracted from an aerial image can be used to hallucinate a plausible ground-level panorama.",
"We propose to use deep convolutional neural networks to address the problem of cross-view image geolocalization, in which the geolocation of a ground-level query image is estimated by matching to georeferenced aerial images. We use state-of-the-art feature representations for ground-level images and introduce a cross-view training approach for learning a joint semantic feature representation for aerial images. We also propose a network architecture that fuses features extracted from aerial images at multiple spatial scales. To support training these networks, we introduce a massive database that contains pairs of aerial and ground-level images from across the United States. Our methods significantly out-perform the state of the art on two benchmark datasets. We also show, qualitatively, that the proposed feature representations are discriminative at both local and continental spatial scales."
]
} |
1903.00159 | 2952054750 | The problem of localization on a geo-referenced satellite map given a query ground view image is useful yet remains challenging due to the drastic change in viewpoint. To this end, in this paper we work on the extension of our earlier work on the Cross-View Matching Network (CVM-Net) for the ground-to-aerial image matching task since the traditional image descriptors fail due to the drastic viewpoint change. In particular, we show more extensive experimental results and analyses of the network architecture on our CVM-Net. Furthermore, we propose a Markov localization framework that enforces the temporal consistency between image frames to enhance the geo-localization results in the case where a video stream of ground view images is available. Experimental results show that our proposed Markov localization framework can continuously localize the vehicle within a small error on our Singapore dataset. | The most important part of image retrieval is to find a good descriptor of an image which is discriminative and fast for comparison. Sivic and Zisserman @cite_44 proposed the Bag-of-Visual-Word descriptors to aggregate a set of local features into a histogram of visual words, i.e. the global descriptor. They showed that the descriptor is partially viewpoint and occlusion invariant, and outperformed local feature matching. Nister and Stewenius @cite_46 created a tree structure vocabulary to support more visual words. @cite_3 proposed the VLAD descriptor. Instead of a histogram, they aggregated the residuals of the local features to cluster centroids. Based on that work, @cite_8 proposed a learnable layer of VLAD, i.e. NetVLAD, that could be embedded into the deep network for end-to-end training. In their extended paper @cite_13 , they illustrated that NetVLAD was better than multiple fully connected layers, max pooling and VLAD. Due to the superior performance of NetVLAD, we adopt the NetVLAD layer in our proposed network. | {
"cite_N": [
"@cite_8",
"@cite_3",
"@cite_44",
"@cite_46",
"@cite_13"
],
"mid": [
"2620629206",
"2012592962",
"2131846894",
"2128017662",
""
],
"abstract": [
"We tackle the problem of large scale visual place recognition, where the task is to quickly and accurately recognize the location of a given query photograph. We present the following four principal contributions. First, we develop a convolutional neural network (CNN) architecture that is trainable in an end-to-end manner directly for the place recognition task. The main component of this architecture, NetVLAD, is a new generalized VLAD layer, inspired by the “Vector of Locally Aggregated Descriptors” image representation commonly used in image retrieval. The layer is readily pluggable into any CNN architecture and amenable to training via backpropagation. Second, we create a new weakly supervised ranking loss, which enables end-to-end learning of the architecture's parameters from images depicting the same places over time downloaded from Google Street View Time Machine. Third, we develop an efficient training procedure which can be applied on very large-scale weakly labelled tasks. Finally, we show that the proposed architecture and training procedure significantly outperform non-learnt image representations and off-the-shelf CNN descriptors on challenging place recognition and image retrieval benchmarks.",
"We address the problem of image search on a very large scale, where three constraints have to be considered jointly: the accuracy of the search, its efficiency, and the memory usage of the representation. We first propose a simple yet efficient way of aggregating local image descriptors into a vector of limited dimension, which can be viewed as a simplification of the Fisher kernel representation. We then show how to jointly optimize the dimension reduction and the indexing algorithm, so that it best preserves the quality of vector comparison. The evaluation shows that our approach significantly outperforms the state of the art: the search accuracy is comparable to the bag-of-features approach for an image representation that fits in 20 bytes. Searching a 10 million image dataset takes about 50ms.",
"We describe an approach to object and scene retrieval which searches for and localizes all the occurrences of a user outlined object in a video. The object is represented by a set of viewpoint invariant region descriptors so that recognition can proceed successfully despite changes in viewpoint, illumination and partial occlusion. The temporal continuity of the video within a shot is used to track the regions in order to reject unstable regions and reduce the effects of noise in the descriptors. The analogy with text retrieval is in the implementation where matches on descriptors are pre-computed (using vector quantization), and inverted file systems and document rankings are used. The result is that retrieval is immediate, returning a ranked list of key frames/shots in the manner of Google. The method is illustrated for matching in two full length feature films.",
"A recognition scheme that scales efficiently to a large number of objects is presented. The efficiency and quality is exhibited in a live demonstration that recognizes CD-covers from a database of 40000 images of popular music CDs. The scheme builds upon popular techniques of indexing descriptors extracted from local regions, and is robust to background clutter and occlusion. The local region descriptors are hierarchically quantized in a vocabulary tree. The vocabulary tree allows a larger and more discriminatory vocabulary to be used efficiently, which we show experimentally leads to a dramatic improvement in retrieval quality. The most significant property of the scheme is that the tree directly defines the quantization. The quantization and the indexing are therefore fully integrated, essentially being one and the same. The recognition quality is evaluated through retrieval on a database with ground truth, showing the power of the vocabulary tree approach, going as high as 1 million images.",
""
]
} |
1903.00138 | 2920117846 | Neural machine translation systems have become state-of-the-art approaches for Grammatical Error Correction (GEC) task. In this paper, we propose a copy-augmented architecture for the GEC task by copying the unchanged words from the source sentence to the target sentence. Since the GEC suffers from not having enough labeled training data to achieve high accuracy. We pre-train the copy-augmented architecture with a denoising auto-encoder using the unlabeled One Billion Benchmark and make comparisons between the fully pre-trained model and a partially pre-trained model. It is the first time copying words from the source context and fully pre-training a sequence to sequence model are experimented on the GEC task. Moreover, We add token-level and sentence-level multi-task learning for the GEC task. The evaluation results on the CoNLL-2014 test set show that our approach outperforms all recently published state-of-the-art results by a large margin. The code and pre-trained models are released at this https URL. | Early published works in GEC developed specific classifiers for different error types and then used them to build hybrid systems. Later, leveraging the progress of statistical machine translation (SMT) and large-scale error-corrected data, GEC systems were further improved by treating the task as a translation problem. SMT systems can remember phrase-based correction pairs, but they are hard to generalize beyond what was seen in training. The CoNLL-14 shared task overview paper @cite_5 provides a comparative evaluation of approaches. @cite_0 detailed classification and machine translation approaches to grammatical error correction problems, and combined the strengths of both methods. | {
"cite_N": [
"@cite_0",
"@cite_5"
],
"mid": [
"2515384205",
"2098297786"
],
"abstract": [
"We focus on two leading state-of-the-art approaches to grammatical error correction – machine learning classification and machine translation. Based on the comparative study of the two learning frameworks and through error analysis of the output of the state-of-the-art systems, we identify key strengths and weaknesses of each of these approaches and demonstrate their complementarity. In particular, the machine translation method learns from parallel data without requiring further linguistic input and is better at correcting complex mistakes. The classification approach possesses other desirable characteristics, such as the ability to easily generalize beyond what was seen in training, the ability to train without human-annotated data, and the flexibility to adjust knowledge sources for individual error types. Based on this analysis, we develop an algorithmic approach that combines the strengths of both methods. We present several systems based on resources used in previous work with a relative improvement of over 20% (and 7.4 F score points) over the previous state-of-the-art.",
"The CoNLL-2014 shared task was devoted to grammatical error correction of all error types. In this paper, we give the task definition, present the data sets, and describe the evaluation metric and scorer used in the shared task. We also give an overview of the various approaches adopted by the participating teams, and present the evaluation results. Compared to the CoNLL2013 shared task, we have introduced the following changes in CoNLL-2014: (1) A participating system is expected to detect and correct grammatical errors of all types, instead of just the five error types in CoNLL-2013; (2) The evaluation metric was changed from F1 to F0.5, to emphasize precision over recall; and (3) We have two human annotators who independently annotated the test essays, compared to just one human annotator in CoNLL-2013."
]
} |
1903.00138 | 2920117846 | Neural machine translation systems have become state-of-the-art approaches for Grammatical Error Correction (GEC) task. In this paper, we propose a copy-augmented architecture for the GEC task by copying the unchanged words from the source sentence to the target sentence. Since the GEC suffers from not having enough labeled training data to achieve high accuracy. We pre-train the copy-augmented architecture with a denoising auto-encoder using the unlabeled One Billion Benchmark and make comparisons between the fully pre-trained model and a partially pre-trained model. It is the first time copying words from the source context and fully pre-training a sequence to sequence model are experimented on the GEC task. Moreover, We add token-level and sentence-level multi-task learning for the GEC task. The evaluation results on the CoNLL-2014 test set show that our approach outperforms all recently published state-of-the-art results by a large margin. The code and pre-trained models are released at this https URL. | Recently, neural machine translation approaches have been shown to be very powerful. @cite_19 developed a neural sequence-labeling model for error detection to calculate the probability of each token in a sentence being correct or incorrect, and then used the error detection model's result as a feature to re-rank the N-best hypotheses. @cite_36 proposed a hybrid neural model incorporating both word- and character-level information. @cite_12 used a multilayer convolutional encoder-decoder neural network and outperformed all prior neural and statistical systems on this task. @cite_28 tried deep RNN @cite_37 and Transformer @cite_8 encoder-decoder models and achieved a higher result by using the Transformer together with a set of model-independent methods for neural GEC. | {
"cite_N": [
"@cite_37",
"@cite_8",
"@cite_36",
"@cite_28",
"@cite_19",
"@cite_12"
],
"mid": [
"2737711067",
"2963403868",
"2726264694",
"2797371199",
"2758774757",
"2785047343"
],
"abstract": [
"It has been shown that increasing model depth improves the quality of neural machine translation. However, different architectural variants to increase model depth have been proposed, and so far, there has been no thorough comparative study. In this work, we describe and evaluate several existing approaches to introduce depth in neural machine translation. Additionally, we explore novel architectural variants, including deep transition RNNs, and we vary how attention is used in the deep decoder. We introduce a novel \"BiDeep\" RNN architecture that combines deep transition RNNs and stacked RNNs. Our evaluation is carried out on the English to German WMT news translation dataset, using a single-GPU machine for both training and inference. We find that several of our proposed architectures improve upon existing approaches in terms of speed and translation quality. We obtain best improvements with a BiDeep RNN of combined depth 8, obtaining an average improvement of 1.5 BLEU over a strong shallow baseline. We release our code for ease of adoption.",
"The dominant sequence transduction models are based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.0 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature.",
"Grammatical error correction (GEC) systems strive to correct both global errors in word order and usage, and local errors in spelling and inflection. Further developing upon recent work on neural machine translation, we propose a new hybrid neural model with nested attention layers for GEC. Experiments show that the new model can effectively correct errors of both types by incorporating word and character-level information,and that the model significantly outperforms previous neural models for GEC as measured on the standard CoNLL-14 benchmark dataset. Further analysis also shows that the superiority of the proposed model can be largely attributed to the use of the nested attention mechanism, which has proven particularly effective in correcting local errors that involve small edits in orthography.",
"Previously, neural methods in grammatical error correction (GEC) did not reach state-of-the-art results compared to phrase-based statistical machine translation (SMT) baselines. We demonstrate parallels between neural GEC and low-resource neural MT and successfully adapt several methods from low-resource MT to neural GEC. We further establish guidelines for trustable results in neural GEC and propose a set of model-independent methods for neural GEC that can be easily applied in most GEC settings. Proposed methods include adding source-side noise, domain-adaptation techniques, a GEC-specific training-objective, transfer learning with monolingual data, and ensembling of independently trained GEC models and language models. The combined effects of these methods result in better than state-of-the-art neural GEC models that outperform previously best neural GEC systems by more than 10 M @math on the CoNLL-2014 benchmark and 5.9 on the JFLEG test set. Non-neural state-of-the-art systems are outperformed by more than 2 on the CoNLL-2014 benchmark and by 4 on JFLEG.",
"",
"We improve automatic correction of grammatical, orthographic, and collocation errors in text using a multilayer convolutional encoder-decoder neural network. The network is initialized with embeddings that make use of character N-gram information to better suit this task. When evaluated on common benchmark test data sets (CoNLL-2014 and JFLEG), our model substantially outperforms all prior neural approaches on this task as well as strong statistical machine translation-based systems with neural and task-specific features trained on the same data. Our analysis shows the superiority of convolutional neural networks over recurrent neural networks such as long short-term memory (LSTM) networks in capturing the local context via attention, and thereby improving the coverage in correcting grammatical errors. By ensembling multiple models, and incorporating an N-gram language model and edit features via rescoring, our novel method becomes the first neural approach to outperform the current state-of-the-art statistical machine translation-based approach, both in terms of grammaticality and fluency."
]
} |
1903.00271 | 2919455813 | The task of video prediction is forecasting the next frames given some previous frames. Despite much recent progress, this task is still challenging mainly due to high nonlinearity in the spatial domain. To address this issue, we propose a novel architecture, Frequency Domain Transformer Network (FDTN), which is an end-to-end learnable model that estimates and uses the transformations of the signal in the frequency domain. Experimental evaluations show that this approach can outperform some widely used video prediction methods like Video Ladder Network (VLN) and Predictive Gated Pyramids (PGP). | Although many approaches to the video prediction task have been explored, the most successful approaches utilize deep learning models. @cite_6 proposed to add recurrent lateral connections in Ladder Networks to capture the temporal dynamics of video. These recurrent connections, as well as lateral shortcuts, relieve the deeper layers from modeling spatial detail. The VLN architecture achieves results competitive with Video Pixel Networks @cite_7 , the state-of-the-art on the Moving MNIST dataset, using far fewer parameters. | {
"cite_N": [
"@cite_7",
"@cite_6"
],
"mid": [
"2529769424",
"2568597297"
],
"abstract": [
"We propose a probabilistic video model, the Video Pixel Network (VPN), that estimates the discrete joint distribution of the raw pixel values in a video. The model and the neural architecture reflect the time, space and color structure of video tensors and encode it as a four-dimensional dependency chain. The VPN approaches the best possible performance on the Moving MNIST benchmark, a leap over the previous state of the art, and the generated videos show only minor deviations from the ground truth. The VPN also produces detailed samples on the action-conditional Robotic Pushing benchmark and generalizes to the motion of novel objects.",
"We present the Video Ladder Network (VLN) for efficiently generating future video frames. VLN is a neural encoder-decoder model augmented at all layers by both recurrent and feedforward lateral connections. At each layer, these connections form a lateral recurrent residual block, where the feedforward connection represents a skip connection and the recurrent connection represents the residual. Thanks to the recurrent connections, the decoder can exploit temporal summaries generated from all layers of the encoder. This way, the top layer is relieved from the pressure of modeling lower-level spatial and temporal details. Furthermore, we extend the basic version of VLN to incorporate ResNet-style residual blocks in the encoder and decoder, which help improving the prediction results. VLN is trained in self-supervised regime on the Moving MNIST dataset, achieving competitive results while having very simple structure and providing fast inference."
]
} |
1903.00271 | 2919455813 | The task of video prediction is forecasting the next frames given some previous frames. Despite much recent progress, this task is still challenging mainly due to high nonlinearity in the spatial domain. To address this issue, we propose a novel architecture, Frequency Domain Transformer Network (FDTN), which is an end-to-end learnable model that estimates and uses the transformations of the signal in the frequency domain. Experimental evaluations show that this approach can outperform some widely used video prediction methods like Video Ladder Network (VLN) and Predictive Gated Pyramids (PGP). | Another well-known model is PGP @cite_8 , which is based on a gated auto-encoder and the bilinear transformation model of RAE @cite_4 . PGP assumes that two temporally consecutive frames are related by a linear transformation. In the PGP model, the hidden layer of mapping units encodes this transformation via a bi-linear model. These transformation encodings are then used to predict the next frame. Conv-PGP @cite_5 reduces the number of parameters significantly by utilizing convolutional layers. | {
"cite_N": [
"@cite_5",
"@cite_4",
"@cite_8"
],
"mid": [
"2610668384",
"1970819022",
"2138960858"
],
"abstract": [
"In this thesis, a recently proposed bilinear model for predicting spatiotemporal data has been implemented and extended. The model was trained in an unsupervised manner and uses spatiotemporal synchrony to encode transformations between inputs of a sequence up to a time t, in order to predict the next input at t + 1. A convolutional version of the model was developed in order to reduce the number of parameters and improve the predictive capabilities. The original and the convolutional models were tested and compared on a dataset containing videos of bouncing balls and both versions are able to predict the motion of the balls. The developed convolutional version halved the 4-step prediction loss while reducing the number of parameters by a factor of 159 compared to the original model. Some important differences between the models are discussed in the thesis and suggestions for further improvements of the convolutional model are identified and presented.",
"A fundamental operation in many vision tasks, including motion understanding, stereopsis, visual odometry, or invariant recognition, is establishing correspondences between images or between images and data from other modalities. Recently, there has been increasing interest in learning to infer correspondences from data using relational, spatiotemporal, and bilinear variants of deep learning methods. These methods use multiplicative interactions between pixels or between features to represent correlation patterns across multiple images. In this paper, we review the recent work on relational feature learning, and we provide an analysis of the role that multiplicative interactions play in learning to encode relations. We also discuss how square-pooling and complex cell models can be viewed as a way to represent multiplicative interactions and thereby as a way to encode relations.",
"We propose modeling time series by representing the transformations that take a frame at time t to a frame at time t+1. To this end we show how a bi-linear model of transformations, such as a gated autoencoder, can be turned into a recurrent network, by training it to predict future frames from the current one and the inferred transformation using backprop-through-time. We also show how stacking multiple layers of gating units in a recurrent pyramid makes it possible to represent the \"syntax\" of complicated time series, and that it can outperform standard recurrent neural networks in terms of prediction accuracy on a variety of tasks."
]
} |
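The bilinear (gated autoencoder) idea behind PGP can be made concrete with a tiny inference-pass sketch: mapping units pool multiplicative interactions between two frames, and re-applying that code to the newer frame yields a prediction. The weights below are random and untrained, and all names and sizes are ours, so this illustrates only the dataflow, not the trained model:

```python
import numpy as np

rng = np.random.default_rng(1)
D, F, M = 64, 32, 16          # pixels, factors, mapping units (toy sizes)

# random (untrained) factor matrices of a factored gated autoencoder
U = rng.normal(0, 0.1, (F, D))   # projects the frame at time t
V = rng.normal(0, 0.1, (F, D))   # projects the frame at time t+1
W = rng.normal(0, 0.1, (M, F))   # pools factor products into mapping units

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def encode_transformation(x, y):
    """Mapping units: multiplicative (bilinear) interaction of two frames."""
    return sigmoid(W @ ((U @ x) * (V @ y)))

def predict_next(y, m):
    """Re-apply the encoded transformation to the newer frame."""
    return V.T @ ((U @ y) * (W.T @ m))

x, y = rng.random(D), rng.random(D)
m = encode_transformation(x, y)      # inferred transformation code
y_next = predict_next(y, m)          # predicted frame at t+2
```

Stacking such layers recurrently is what gives PGP its pyramid of transformation codes.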
1903.00271 | 2919455813 | The task of video prediction is forecasting the next frames given some previous frames. Despite much recent progress, this task is still challenging mainly due to high nonlinearity in the spatial domain. To address this issue, we propose a novel architecture, Frequency Domain Transformer Network (FDTN), which is an end-to-end learnable model that estimates and uses the transformations of the signal in the frequency domain. Experimental evaluations show that this approach can outperform some widely used video prediction methods like Video Ladder Network (VLN) and Predictive Gated Pyramids (PGP). | Image registration is a fundamental task in image processing that estimates the relative transformation between two similar images. A well-known method for image registration using a Fourier domain representation is Phase Correlation. Phase Correlation can be used to calculate the relative translational offset between two similar images. @cite_0 demonstrated that rotation and scaling differences between two images can be estimated by converting them to log-polar coordinates. @cite_2 extended this method to work with subpixel translations. @cite_3 proposed an extended version of phase correlation that is more robust and works at larger scale changes. Inspired by the phase correlation method, we designed FDTN. | {
"cite_N": [
"@cite_0",
"@cite_3",
"@cite_2"
],
"mid": [
"2131372145",
"2325063050",
"2158453800"
],
"abstract": [
"This correspondence discusses an extension of the well-known phase correlation technique to cover translation, rotation, and scaling. Fourier scaling properties and Fourier rotational properties are used to find scale and rotational movement. The phase correlation technique determines the translational movement. This method shows excellent robustness against random noise.",
"",
"In this paper, we have derived analytic expressions for the phase correlation of downsampled images. We have shown that for downsampled images the signal power in the phase correlation is not concentrated in a single peak, but rather in several coherent peaks mostly adjacent to each other. These coherent peaks correspond to the polyphase transform of a filtered unit impulse centered at the point of registration. The analytic results provide a closed-form solution to subpixel translation estimation, and are used for detailed error analysis. Excellent results have been obtained for subpixel translation estimation of images of different nature and across different spectral bands."
]
} |
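The phase correlation procedure summarized in this record (normalized cross-power spectrum, inverse FFT, peak search) fits in a few lines of NumPy. This is a minimal sketch of the classic translation estimator, not the FDTN model; the function name is ours:

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the circular shift d with a(x) == b(x - d) from the
    normalized cross-power spectrum (classic phase correlation)."""
    R = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    R /= np.abs(R) + 1e-12            # keep phase only, drop magnitude
    corr = np.fft.ifft2(R).real       # ideally a single impulse at d
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return tuple(int(i) for i in peak)

# toy check: a random image rolled by (3, 5) is recovered exactly
rng = np.random.default_rng(0)
img = rng.random((32, 32))
shifted = np.roll(img, shift=(3, 5), axis=(0, 1))
offset = phase_correlation(shifted, img)   # (3, 5)
```

The log-polar extension of @cite_0 reuses exactly this peak search after resampling the magnitude spectra, which turns rotation and scale into translations.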
1903.00228 | 2969018702 | Given the task of learning robotic grasping solely based on a depth camera input and gripper force feedback, we derive a learning algorithm from an applied point of view to significantly reduce the amount of required training data. Major improvements in time and data efficiency are achieved by: Firstly, we exploit the geometric consistency between the undistorted depth images and the task space. Using a relatively small, fully-convolutional neural network, we predict grasp and gripper parameters with great advantages in training as well as inference performance. Secondly, motivated by the small random grasp success rate of around 3 %, the grasp space was explored in a systematic manner. The final system was learned with 23000 grasp attempts in around 60 h, improving current solutions by an order of magnitude. For typical bin picking scenarios, we measured a grasp success rate of @math . Further experiments showed that the system is able to generalize and transfer knowledge to novel objects and environments. | In recent years, research on robotic grasping in unsystematic environments has gained a lot of traction. @cite_3 divided possible approaches into the grasping of known, familiar, or unknown objects. For known objects, grasping is mostly reduced to object recognition and pose estimation. In a more flexible manner, this process can be extended by matching the similarity of known objects to new, familiar objects. In the case of unknown objects, the approaches can be split into analytical and empirical methods @cite_0 . Analytical grasp synthesis relies on the definition of a grasp quality measure, such as force closure or other geometrically motivated measures. However, these approaches usually need high-quality and flawless 3D maps of the scene. Data-driven methods intrinsically develop generalization and error robustness, but suffer from the need for large datasets. First implementations sampled grasp candidates in a simulator and evaluated them based on an analytical measure @cite_18 . In 2008, @cite_14 applied a probabilistic model to a mostly simulated dataset to identify grasping points in image patches. More recently, @cite_4 built a large synthetic grasping database with analytic metrics, used supervised learning to train an NN, and applied it to bin picking @cite_2 . | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_4",
"@cite_3",
"@cite_0",
"@cite_2"
],
"mid": [
"1510186039",
"2041376653",
"2600030077",
"2950303304",
"2005824379",
"2773721443"
],
"abstract": [
"A robotic grasping simulator, called Graspit!, is presented as versatile tool for the grasping community. The focus of the grasp analysis has been on force-closure grasps, which are useful for pick-and-place type tasks. This work discusses the different types of world elements and the general robot definition, and presented the robot library. The paper also describes the user interface of Graspit! and present the collision detection and contact determination system. The grasp analysis and visualization method were also presented that allow a user to evaluate a grasp and compute optimal grasping forces. A brief overview of the dynamic simulation system was provided.",
"We consider the problem of grasping novel objects, specifically objects that are being seen for the first time through vision. Grasping a previously unknown object, one for which a 3-d model is not available, is a challenging problem. Furthermore, even if given a model, one still has to decide where to grasp the object. We present a learning algorithm that neither requires nor tries to build a 3-d model of the object. Given two (or more) images of an object, our algorithm attempts to identify a few points in each image corresponding to good locations at which to grasp the object. This sparse set of points is then triangulated to obtain a 3-d location at which to attempt a grasp. This is in contrast to standard dense stereo, which tries to triangulate every single point in an image (and often fails to return a good 3-d model). Our algorithm for identifying grasp locations from an image is trained by means of supervised learning, using synthetic images for the training set. We demonstrate this approach on two robotic manipulation platforms. Our algorithm successfully grasps a wide variety of objects, such as plates, tape rolls, jugs, cellphones, keys, screwdrivers, staplers, a thick coil of wire, a strangely shaped power horn and others, none of which were seen in the training set. We also apply our method to the task of unloading items from dishwashers.",
"To reduce data collection time for deep learning of robust robotic grasp plans, we explore training from a synthetic dataset of 6.7 million point clouds, grasps, and analytic grasp metrics generated from thousands of 3D models from Dex-Net 1.0 in randomized poses on a table. We use the resulting dataset, Dex-Net 2.0, to train a Grasp Quality Convolutional Neural Network (GQ-CNN) model that rapidly predicts the probability of success of grasps from depth images, where grasps are specified as the planar position, angle, and depth of a gripper relative to an RGB-D sensor. Experiments with over 1,000 trials on an ABB YuMi comparing grasp planning methods on singulated objects suggest that a GQ-CNN trained with only synthetic data from Dex-Net 2.0 can be used to plan grasps in 0.8sec with a success rate of 93 on eight known objects with adversarial geometry and is 3x faster than registering point clouds to a precomputed dataset of objects and indexing grasps. The Dex-Net 2.0 grasp planner also has the highest success rate on a dataset of 10 novel rigid objects and achieves 99 precision (one false positive out of 69 grasps classified as robust) on a dataset of 40 novel household objects, some of which are articulated or deformable. Code, datasets, videos, and supplementary material are available at http: berkeleyautomation.github.io dex-net .",
"",
"This overview presents computational algorithms for generating 3D object grasps with autonomous multi-fingered robotic hands. Robotic grasping has been an active research subject for decades, and a great deal of effort has been spent on grasp synthesis algorithms. Existing papers focus on reviewing the mechanics of grasping and the finger-object contact interactions Bicchi and Kumar (2000) [12] or robot hand design and their control Al- (1993) [70]. Robot grasp synthesis algorithms have been reviewed in Shimoga (1996) [71], but since then an important progress has been made toward applying learning techniques to the grasping problem. This overview focuses on analytical as well as empirical grasp synthesis approaches.",
""
]
} |
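The "geometrically motivated measures" of analytical grasp synthesis can be illustrated with a deliberately naive score on a depth image: rate each pixel by how much the surface rises between two antipodal gripper-jaw positions. This toy heuristic is ours (it is not Dex-Net's grasp metric or any published measure) and only conveys the flavor of a per-pixel grasp quality map:

```python
import numpy as np

def grasp_quality_map(depth, half_width=2):
    """Toy per-pixel grasp score: depth contrast between the center and two
    antipodal side points, mimicking a parallel-jaw gripper closing along x."""
    h, w = depth.shape
    q = np.zeros_like(depth)
    for y in range(h):
        for x in range(half_width, w - half_width):
            left = depth[y, x - half_width]
            right = depth[y, x + half_width]
            # high score: near object surface flanked by two farther side points
            q[y, x] = min(left, right) - depth[y, x]
    return np.clip(q, 0.0, None)

# depth image: flat table at depth 1.0 with a raised block (depth 0.6)
depth = np.ones((8, 8))
depth[3:5, 3:5] = 0.6
q = grasp_quality_map(depth)
best = np.unravel_index(np.argmax(q), depth.shape)   # lands on the block
```

Learned approaches such as GQ-CNN effectively replace this hand-written scoring loop with a network trained on millions of labeled candidates.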
1903.00228 | 2969018702 | Given the task of learning robotic grasping solely based on a depth camera input and gripper force feedback, we derive a learning algorithm from an applied point of view to significantly reduce the amount of required training data. Major improvements in time and data efficiency are achieved by: Firstly, we exploit the geometric consistency between the undistorted depth images and the task space. Using a relatively small, fully-convolutional neural network, we predict grasp and gripper parameters with great advantages in training as well as inference performance. Secondly, motivated by the small random grasp success rate of around 3 %, the grasp space was explored in a systematic manner. The final system was learned with 23000 grasp attempts in around 60 h, improving current solutions by an order of magnitude. For typical bin picking scenarios, we measured a grasp success rate of @math . Further experiments showed that the system is able to generalize and transfer knowledge to novel objects and environments. | Alternatively to simulation, the system can be automated and trained in a large-scale self-supervised manner. Our work is closely related to the research of Pinto and Gupta @cite_17 . They were able to retrain an NN within 700 h and achieved a grasping rate of 73 % for seen objects. Our work is based on the theoretical framework of RL in combination with convolutional NNs for function approximation @cite_6 . While deep RL showed impressive results in learning from visual input in simple simulated environments @cite_15 @cite_8 , direct applications of RL to robot learning have proven to be more difficult. On real robots, continuous policy-based methods have been applied to visuomotor learning, either in an end-to-end fashion @cite_12 or by using spatial autoencoders @cite_13 . @cite_9 showed in a recent comparison of RL algorithms for simulated grasping that simple value-based approaches performed best. | {
"cite_N": [
"@cite_8",
"@cite_9",
"@cite_6",
"@cite_15",
"@cite_13",
"@cite_12",
"@cite_17"
],
"mid": [
"",
"2788575380",
"2121863487",
"2534269850",
"2963634205",
"2964161785",
"2201912979"
],
"abstract": [
"",
"In this paper, we explore deep reinforcement learning algorithms for vision-based robotic grasping. Model-free deep reinforcement learning (RL) has been successfully applied to a range of challenging environments, but the proliferation of algorithms makes it difficult to discern which particular approach would be best suited for a rich, diverse task like grasping. To answer this question, we propose a simulated benchmark for robotic grasping that emphasizes off-policy learning and generalization to unseen objects. Off-policy learning enables utilization of grasping data over a wide variety of objects, and diversity is important to enable the method to generalize to new objects that were not seen during training. We evaluate the benchmark tasks against a variety of Q-function estimation methods, a method previously proposed for robotic grasping with deep neural network models, and a novel approach based on a combination of Monte Carlo return estimation and an off-policy correction. Our results indicate that several simple methods provide a surprisingly strong competitor to popular algorithms such as double Q-learning, and our analysis of stability sheds light on the relative tradeoffs between the algorithms.",
"Reinforcement learning, one of the most active research areas in artificial intelligence, is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives when interacting with a complex, uncertain environment. In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the key ideas and algorithms of reinforcement learning. Their discussion ranges from the history of the field's intellectual foundations to the most recent developments and applications. The only necessary mathematical background is familiarity with elementary concepts of probability. The book is divided into three parts. Part I defines the reinforcement learning problem in terms of Markov decision processes. Part II provides basic solution methods: dynamic programming, Monte Carlo methods, and temporal-difference learning. Part III presents a unified view of the solution methods and incorporates artificial neural networks, eligibility traces, and planning; the two final chapters present case studies and consider the future of reinforcement learning.",
"",
"Reinforcement learning provides a powerful and flexible framework for automated acquisition of robotic motion skills. However, applying reinforcement learning requires a sufficiently detailed representation of the state, including the configuration of task-relevant objects. We present an approach that automates state-space construction by learning a state representation directly from camera images. Our method uses a deep spatial autoencoder to acquire a set of feature points that describe the environment for the current task, such as the positions of objects, and then learns a motion skill with these feature points using an efficient reinforcement learning method based on local linear models. The resulting controller reacts continuously to the learned feature points, allowing the robot to dynamically manipulate objects in the world with closed-loop control. We demonstrate our method with a PR2 robot on tasks that include pushing a free-standing toy block, picking up a bag of rice using a spatula, and hanging a loop of rope on a hook at various positions. In each task, our method automatically learns to track task-relevant objects and manipulate their configuration with the robot's arm.",
"Policy search methods can allow robots to learn control policies for a wide range of tasks, but practical applications of policy search often require hand-engineered components for perception, state estimation, and low-level control. In this paper, we aim to answer the following question: does training the perception and control systems jointly end-to-end provide better performance than training each component separately? To this end, we develop a method that can be used to learn policies that map raw image observations directly to torques at the robot's motors. The policies are represented by deep convolutional neural networks (CNNs) with 92,000 parameters, and are trained using a guided policy search method, which transforms policy search into supervised learning, with supervision provided by a simple trajectory-centric reinforcement learning method. We evaluate our method on a range of real-world manipulation tasks that require close coordination between vision and control, such as screwing a cap onto a bottle, and present simulated comparisons to a range of prior policy search methods.",
"Current model free learning-based robot grasping approaches exploit human-labeled datasets for training the models. However, there are two problems with such a methodology: (a) since each object can be grasped in multiple ways, manually labeling grasp locations is not a trivial task; (b) human labeling is biased by semantics. While there have been attempts to train robots using trial-and-error experiments, the amount of data used in such experiments remains substantially low and hence makes the learner prone to over-fitting. In this paper, we take the leap of increasing the available training data to 40 times more than prior work, leading to a dataset size of 50K data points collected over 700 hours of robot grasping attempts. This allows us to train a Convolutional Neural Network (CNN) for the task of predicting grasp locations without severe overfitting. In our formulation, we recast the regression problem to an 18-way binary classification over image patches. We also present a multi-stage learning approach where a CNN trained in one stage is used to collect hard negatives in subsequent stages. Our experiments clearly show the benefit of using large-scale datasets (and multi-stage training) for the task of grasping. We also compare to several baselines and show state-of-the-art performance on generalization to unseen objects for grasping."
]
} |
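The "simple value-based approaches" this record refers to can be illustrated by tabular Q-learning on a toy chain MDP. This is a sketch of the value-based idea only, far removed from visuomotor grasping; all names and sizes are ours:

```python
import random

def q_learning_chain(n_states=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.2):
    """Tabular Q-learning on a chain MDP: moving right reaches the goal
    (reward 1). Epsilon-greedy exploration, standard TD target."""
    Q = [[0.0, 0.0] for _ in range(n_states)]   # actions: 0 = left, 1 = right
    rng = random.Random(0)
    for _ in range(episodes):
        s = 0
        while s < n_states - 1:
            if rng.random() < eps:
                a = rng.randrange(2)             # explore
            else:
                a = max((0, 1), key=lambda act: Q[s][act])   # exploit
            s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s2 == n_states - 1 else 0.0
            # terminal state has value 0; otherwise bootstrap from max Q
            target = r + gamma * (0.0 if s2 == n_states - 1 else max(Q[s2]))
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q

Q = q_learning_chain()
policy = [max((0, 1), key=lambda act: Q[s][act]) for s in range(4)]
```

After training, the greedy policy moves right in every non-terminal state, with values decaying by gamma per step from the goal — the same max-Q bootstrapping that deep value-based grasping methods apply to image inputs.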
1708.01377 | 2742359097 | Static visualizations have analytic and expressive value. However, many interactive tasks cannot be completed using static visualizations. As datasets grow in size and complexity, static visualizations start losing their analytic and expressive power for interactive data exploration. Despite this limitation of static visualizations, there are still many cases where visualizations are limited to being static (e.g., visualizations on presentation slides or posters). We believe in many of these cases, static visualizations will benefit from allowing users to perform interactive tasks on them. Inspired by the introduction of numerous commercial personal augmented reality (AR) devices, we propose an AR solution that allows interactive data exploration of datasets on static visualizations. In particular, we present a prototype system named VisAR that uses the Microsoft Hololens to enable users to complete interactive tasks on static visualizations. | Augmented reality allows users to have a seamless experience between their world and content others have created. One interesting aspect of AR is that it can breathe life into static content. In the AR domain, there have been numerous projects that animate static objects by adding interactivity to the once-inanimate experience. For example, Billinghurst's MagicBook was a novel AR interface that allowed static components of a physical book to be interactive to the reader @cite_14 . This led to various follow-up studies that experimented with adding interactivity to physical books @cite_20 . Researchers investigated how AR solutions affect user experience while performing tasks. For example, in one study, researchers observed that students using AR-enhanced books were more motivated and more engaged in the material @cite_17 compared to using other methods. | {
"cite_N": [
"@cite_14",
"@cite_20",
"@cite_17"
],
"mid": [
"1912205781",
"2067689756",
"2123634452"
],
"abstract": [
"The MagicBook project is an early attempt to explore how we can use a physical object to smoothly transport users between reality and virtuality. Young children often fantasize about flying into the pages of a fairy tale and becoming part of the story. The MagicBook project makes this fantasy a reality using a normal book as the main interface object. People can turn the pages of the book, look at the pictures, and read the text without any additional technology. However, if a person looks at the pages through an augmented reality display, they see 3D virtual models appearing out of the pages. The models appear attached to the real page so users can see the augmented reality scene from any perspective by moving themselves or the book. The virtual content can be any size and is animated, so the augmented reality view is an enhanced version of a traditional 3D pop-up book.",
"We are introducing a new type of digitally enhanced book which symbiotically merges different type of media in a seamless approach. By keeping the traditional book (and its affordance) and enhancing it visually and aurally, we provide a highly efficient combination of the physical and digital world. Our solution utilizes recent developments in computer vision tracking, advanced GPU technology and spatial sound rendering. The systems' collaboration capabilities also allow other users to be part of the story.",
"Evaluations of AR experiences in an educational setting provide insights into how this technology can enhance traditional learning models and what obstacles stand in the way of its broader use. A related video can be seen here: http: youtu.be ndUjLwcBIOw. It shows examples of augmented reality experiences in an educational setting."
]
} |
1708.01377 | 2742359097 | Static visualizations have analytic and expressive value. However, many interactive tasks cannot be completed using static visualizations. As datasets grow in size and complexity, static visualizations start losing their analytic and expressive power for interactive data exploration. Despite this limitation of static visualizations, there are still many cases where visualizations are limited to being static (e.g., visualizations on presentation slides or posters). We believe in many of these cases, static visualizations will benefit from allowing users to perform interactive tasks on them. Inspired by the introduction of numerous commercial personal augmented reality (AR) devices, we propose an AR solution that allows interactive data exploration of datasets on static visualizations. In particular, we present a prototype system named VisAR that uses the Microsoft Hololens to enable users to complete interactive tasks on static visualizations. | In the interaction space, coined the concept of spatio-data coordination which defines the mapping between the physical interaction space and the virtual visualization space @cite_0 . When structuring our approach through this concept, we see that bystanders of visualizations cannot interact with the visualizations, having no coordination between their interaction space and the visualization space. By introducing additional elements in the visualization space that have spatio-data coordination with personal interaction spaces, we allow users to directly interact with the visualization and view personalized content in their own display space. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2755214627"
],
"abstract": [
"We introduce the concept of “spatio-data coordination” (SD coordination) which defines the mapping of user actions in physical space into the space of data in a visualisation. SD coordination is intended to lower the user's cognitive load when exploring complex multi-dimensional data such as biomedical data, multiple data attributes vs time in a space-time-cube visualisation, or three-dimensional projections of three-or-higher-dimensional data sets. To inform the design of interaction devices to allow for SD coordination, we define a design space and demonstrate it with sketches and early prototypes of three exemplar devices for SD coordinated interaction."
]
} |
1708.01377 | 2742359097 | Static visualizations have analytic and expressive value. However, many interactive tasks cannot be completed using static visualizations. As datasets grow in size and complexity, static visualizations start losing their analytic and expressive power for interactive data exploration. Despite this limitation of static visualizations, there are still many cases where visualizations are limited to being static (e.g., visualizations on presentation slides or posters). We believe in many of these cases, static visualizations will benefit from allowing users to perform interactive tasks on them. Inspired by the introduction of numerous commercial personal augmented reality (AR) devices, we propose an AR solution that allows interactive data exploration of datasets on static visualizations. In particular, we present a prototype system named VisAR that uses the Microsoft Hololens to enable users to complete interactive tasks on static visualizations. | Our work is also inspired by the information-rich virtual environment concept introduced by @cite_16 . The concept looks into interactions between virtual environments and abstract information. If we consider a synthetic static visualization as a virtual representation of data, our solution provides abstract information about that environment. Users can perform interaction tasks that point from the virtual environment to abstract information; that is, users can retrieve details associated with a synthetic visualization. | {
"cite_N": [
"@cite_16"
],
"mid": [
"2099805804"
],
"abstract": [
"Virtual environments (VEs) allow users to experience and interact with a rich sensory environment, but most virtual worlds contain only sensory information similar to that which we experience in the physical world. Information-rich virtual environments (IRVEs) combine the power of VEs and information visualization, augmenting VEs with additional abstract information such as text, numbers, or graphs. IRVEs can be useful in many contexts, such as education, medicine, or construction. In our work, we are developing a theoretical foundation for the study of IRVEs and tools for their development and evaluation. We present a working definition of IRVEs, a discussion of information display and interaction in IRVEs. We also describe a software framework for IRVE development and a testbed enabling evaluation of text display techniques for IRVEs. Finally, we present a research agenda for this area."
]
} |
1708.01476 | 2745228278 | System monitoring is an established tool to measure the utilization and health of HPC systems. Usually, system monitoring infrastructures make no connection to job information and do not utilize hardware performance monitoring (HPM) data. To increase the efficient use of HPC systems, automatic and continuous performance monitoring of jobs is an essential component. It can help to identify pathological cases, provides instant performance feedback to the users, offers initial data to judge the optimization potential of applications, and helps to build a statistical foundation about application-specific system usage. The LIKWID monitoring stack is a modular framework built on top of the LIKWID tools library. It aims at enabling job-specific performance monitoring using HPM data, system metrics, and application-level data for small- to medium-sized commodity clusters. Moreover, it is designed to integrate into existing monitoring infrastructures to speed up the change from pure system monitoring to job-aware monitoring. | Job-specific performance monitoring using hardware performance counting facilities has been in the focus of tool developers since the early 2000s. Especially large HPC centers, such as the National Labs in the US and the Gauss Supercomputing Centers in Germany, are active in this field. Examples are the Monitoring tool developed at PNNL @cite_13 , efforts at LANL @cite_6 , the tool @cite_8 and its successor @cite_16 developed at LRZ Garching, and the tool collection @cite_9 developed at Jülich Supercomputing Centre. Most of this work focuses on the technical challenges in scaling out a measurement infrastructure on large machines without disturbing production runs while keeping the generated data volume under control. The only commercial vendor offering built-in HPM job monitoring with user feedback is Cray. Many of these solutions are site- or vendor-specific and are thus not easy to deploy at other sites. A solution that also targets small- to medium-sized clusters is @cite_2 , which is also used as part of the larger project @cite_15 . Recent and current efforts include the project @cite_10 , from which the approach presented in this paper also originates, and the just-started @cite_4 . | {
"cite_N": [
"@cite_4",
"@cite_8",
"@cite_10",
"@cite_9",
"@cite_6",
"@cite_2",
"@cite_15",
"@cite_16",
"@cite_13"
],
"mid": [
"",
"1498372445",
"2747580110",
"1489554222",
"1952564456",
"2011516616",
"2151316504",
"248527484",
"2155475498"
],
"abstract": [
"",
"Developing efficient parallel programs for supercomputers is a challenging task. It requires insight into the application, the parallelization concepts, as well as the parallel architectures. Performance analysis tools such as Periscope, an automatic performance analysis tool currently under development at Technische Universität München, help the programmer in detecting performance bottlenecks. The goal of the ISAR project is to enhance the existing Periscope research prototype and deliver a production version. This paper focuses on the evaluation of Periscope's main features based on two large scale simulation codes.",
"This thesis presents concepts for systemwide monitoring and performance analysis of HPC systems, which are aimed at a preliminary detection of inefficient applications. On-line analyses without instrumentation of user codes are performed with codified expert knowledge designed to reveal bottlenecks in running applications. Novel optimizations to collect data and data reduction techniques ensure an efficient and scalable monitoring of HPC architectures. Detailed results are provided for a petaflop system.",
"To maximise the scientific output of a high-performance computing system, different stakeholders pursue different strategies. While individual application developers are trying to shorten the time to solution by optimising their codes, system administrators are tuning the configuration of the overall system to increase its throughput. Yet, the complexity of today’s machines with their strong interrelationship between application and system performance presents serious challenges to achieving these goals. The HOPSA project (HOlistic Performance System Analysis) therefore sets out to create an integrated diagnostic infrastructure for combined application and system-level tuning – with the former provided by the EU and the latter by the Russian project partners. Starting from system-wide basic performance screening of individual jobs, an automated workflow routes findings on potential bottlenecks either to application developers or system administrators with recommendations on how to identify their root cause using more powerful diagnostic tools. Developers can choose from a variety of mature performance-analysis tools developed by our consortium. Within this project, the tools will be further integrated and enhanced with respect to scalability, depth of analysis, and support for asynchronous tasking, a node-level paradigm playing an increasingly important role in hybrid programs on emerging hierarchical and heterogeneous systems.",
"Monitoring High Performance Computing clusters is currently geared towards providing system administrators the information they need to make informed decisions on the resources used in the cluster. However, this emphasis leaves out the End User, those who utilize the cluster resources towards projects and programs, as they are not given the information of how their workflow is impacting the cluster. By providing a subset of monitoring data in a format End Users can easily interpret and utilize, they can help make better use of the computing resources provided to them.",
"This paper reports on a comprehensive, fully automated resource use monitoring package, TACC Stats, which enables both consultants, users and other stakeholders in an HPC system to systematically and actively identify jobs/applications that could benefit from expert support and to aid in the diagnosis of software and hardware issues. TACC Stats continuously collects and analyzes resource usage data for every job run on a system and differs significantly from conventional profilers because it requires no action on the part of the user or consultants -- it is always collecting data on every node for every job. TACC Stats is open source and downloadable, configurable and compatible with general Linux-based computing platforms, and extensible to new CPU architectures and hardware devices. It is meant to provide a comprehensive resource usage monitoring solution. In addition to describing TACC Stats, the paper illustrates its application to identifying production jobs which have inefficient resource use characteristics.",
"Open XDMoD is an open source tool designed to facilitate the management of high-performance computing (HPC) systems. The Open XDMoD portal provides a rich set of analysis and charting tools that let users quickly display a wide variety of job accounting metrics over any desired timeframe. Two additional tools, which provide quality-of-service metrics and job-level performance data, have been developed and integrated with Open XDMoD to extend its functionality. These tools, combined in an integrated package through Open XDMoD, enable the comprehensive management of HPC resources, allowing HPC center personnel to ensure that the resource is operating efficiently and to determine what applications are running, how efficiently they're running, and what resources they're consuming, all of which are important to optimizing the HPC system.",
"This paper presents a systemwide monitoring and analysis tool for high performance computers with several features aimed at minimizing the transport of performance data along a network of agents. The aim of the tool is to do a preliminary detection of performance bottlenecks on user applications running in HPC systems with a negligible impact on production runs. Continuous systemwide monitoring can lead to large volumes of data, if the data is required to be stored permanently to be available for queries. At the system monitoring level, we need to store the monitoring data synchronously. We retain the descriptive qualities by using quantiles; an aggregation with respect to the number of cores used by the application at every measuring interval. The optimization of the transport route for the performance data enables us to precisely calculate quantiles as opposed to quantile estimation.",
"We present NWPerf, a new system for analyzing fine granularity performance metric data on large-scale supercomputing clusters. This tool is able to measure application efficiency on a system-wide basis from both a global system perspective as well as providing a detailed view of individual applications. NWPerf provides this service while minimizing the impact on the performance of user applications. We describe the type of information that can be derived from the system, and demonstrate how the system was used to detect and eliminate a performance problem in an application, improving performance by up to several thousand percent. The NWPerf architecture has proven to be a stable and scalable platform for gathering performance data on a large 1954-CPU production Linux cluster at PNNL."
]
} |
1708.01292 | 2743065247 | People spend considerable effort managing the impressions they give others. Social psychologists have shown that people manage these impressions differently depending upon their personality. Facebook and other social media provide a new forum for this fundamental process; hence, understanding people's behaviour on social media could provide interesting insights on their personality. In this paper we investigate automatic personality recognition from Facebook profile pictures. We analyze the effectiveness of four families of visual features and we discuss some human interpretable patterns that explain the personality traits of the individuals. For example, extroverts and agreeable individuals tend to have warm colored pictures and to exhibit many faces in their portraits, mirroring their inclination to socialize; while neurotic ones have a prevalence of pictures of indoor places. Then, we propose a classification approach to automatically recognize personality traits from these visual features. Finally, we compare the performance of our classification approach to the one obtained by human raters and we show that computer-based classifications are significantly more accurate than averaged human-based classifications for Extraversion and Neuroticism. | In recent years, interest in automatic personality recognition has grown (see Vinciarelli and Mohammadi for a comprehensive survey @cite_0 ) and several works have started exploiting the wealth of data made available by social media @cite_41 @cite_22 @cite_44 @cite_12 , microphones and cameras @cite_39 @cite_59 @cite_43 @cite_33 , and mobile phones @cite_8 @cite_1 . Two works addressed the automatic recognition of personality traits from self-presentation videos @cite_13 @cite_37 . 
Biel @cite_13 used a dataset of 442 vlogs and asked external observers to rate vloggers' personality types, while Batrinca @cite_55 recorded video self-presentations of 89 subjects in a lab setting, asking them to complete a self-assessed personality test. | {
"cite_N": [
"@cite_37",
"@cite_22",
"@cite_33",
"@cite_8",
"@cite_41",
"@cite_55",
"@cite_1",
"@cite_39",
"@cite_0",
"@cite_44",
"@cite_43",
"@cite_59",
"@cite_13",
"@cite_12"
],
"mid": [
"",
"",
"",
"2130815877",
"2544158810",
"1994745305",
"",
"1986714255",
"2089809964",
"",
"",
"",
"2028422222",
""
],
"abstract": [
"",
"",
"",
"In this work, we investigate the relationships between social network structure and personality; we assess the performances of different subsets of structural network features, and in particular those concerned with ego-networks, in predicting the Big-5 personality traits. In addition to traditional survey-based data, this work focuses on social networks derived from real-life data gathered through smartphones. Besides showing that the latter are superior to the former for the task at hand, our results provide a fine-grained analysis of the contribution the various feature sets are able to provide to personality classification, along with an assessment of the relative merits of the various networks exploited.",
"Psychological personality has been shown to affect a variety of aspects: preferences for interaction styles in the digital world and for music genres, for example. Consequently, the design of personalized user interfaces and music recommender systems might benefit from understanding the relationship between personality and use of social media. Since there has not been a study between personality and use of Twitter at large, we set out to analyze the relationship between personality and different types of Twitter users, including popular users and influentials. For 335 users, we gather personality data, analyze it, and find that both popular users and influentials are extroverts and emotionally stable (low in the trait of Neuroticism). Interestingly, we also find that popular users are 'imaginative' (high in Openness), while influentials tend to be 'organized' (high in Conscientiousness). We then show a way of accurately predicting a user's personality simply based on three counts publicly available on profiles: following, followers, and listed counts. Knowing these three quantities about an active user, one can predict the user's five personality traits with a root-mean-squared error below 0.88 on a @math scale. Based on these promising results, we argue that being able to predict user personality goes well beyond our initial goal of informing the design of new personalized applications as it, for example, expands current studies on privacy in social media.",
"Personality plays an important role in the way people manage the images they convey in self-presentations and employment interviews, trying to affect the other's first impressions and increase effectiveness. This paper addresses the automatic detection of the Big Five personality traits from short (30-120 seconds) self-presentations, by investigating the effectiveness of 29 simple acoustic and visual non-verbal features. Our results show that Conscientiousness and Emotional Stability / Neuroticism are the best recognizable traits. The lower accuracy levels for Extraversion and Agreeableness are explained through the interaction between situational characteristics and the differential activation of the behavioral dispositions underlying those traits.",
"",
"This paper targets the automatic detection of personality traits in a meeting environment by means of audio and visual features; information about the relational context is captured by means of acoustic features designed to that purpose. Two personality traits are considered: Extraversion (from the Big Five) and the Locus of Control. The classification task is applied to thin slices of behaviour, in the form of 1-minute sequences. SVMs were used to test the performances of several training and testing instance setups, including a restricted set of audio features obtained through feature selection. The outcomes improve considerably over existing results, provide evidence about the feasibility of the multimodal analysis of personality, the role of social context, and pave the way to further studies addressing different feature setups and/or targeting different personality traits.",
"Personality is a psychological construct aimed at explaining the wide variety of human behaviors in terms of a few, stable and measurable individual characteristics. In this respect, any technology involving understanding, prediction and synthesis of human behavior is likely to benefit from Personality Computing approaches, i.e. from technologies capable of dealing with human personality. This paper is a survey of such technologies and it aims at providing not only a solid knowledge base about the state-of-the-art, but also a conceptual model underlying the three main problems addressed in the literature, namely Automatic Personality Recognition (inference of the true personality of an individual from behavioral evidence), Automatic Personality Perception (inference of personality others attribute to an individual based on her observable behavior) and Automatic Personality Synthesis (generation of artificial personalities via embodied agents). Furthermore, the article highlights the issues still open in the field and identifies potential application areas.",
"",
"",
"",
"Despite an increasing interest in understanding human perception in social media through the automatic analysis of users' personality, existing attempts have explored user profiles and text blog data only. We approach the study of personality impressions in social media from the novel perspective of crowdsourced impressions, social attention, and audiovisual behavioral analysis on slices of conversational vlogs extracted from YouTube. Conversational vlogs are a unique case study to understand users in social media, as vloggers implicitly or explicitly share information about themselves that words, either written or spoken, cannot convey. In addition, research in vlogs may become a fertile ground for the study of video interactions, as conversational video expands to innovative applications. In this work, we first investigate the feasibility of crowdsourcing personality impressions from vlogging as a way to obtain judgements from a varied audience that consumes social media video. Then, we explore how these personality impressions mediate the online video watching experience and relate to measures of attention in YouTube. Finally, we investigate the use of automatic nonverbal cues as a suitable lens through which impressions are made, and we address the task of automatic prediction of vloggers' personality impressions using nonverbal cues and machine learning techniques. Our study, conducted on a dataset of 442 YouTube vlogs and 2210 annotations collected in Amazon's Mechanical Turk, provides new findings regarding the suitability of collecting personality impressions from crowdsourcing, the types of personality impressions that emerge through vlogging, their association with social attention, and the level of utilization of nonverbal cues in this particular setting. In addition, it constitutes a first attempt to address the task of automatic vlogger personality impression prediction using nonverbal cues, with promising results.",
""
]
} |
1708.01388 | 2612008866 | The current virtualization solution in the Cloud widely relies on hypervisor-based technologies. Along with the recent popularity of Docker, the container-based virtualization starts receiving more attention for being a promising alternative. Since both of the virtualization solutions are not resource-free, their performance overheads would lead to negative impacts on the quality of Cloud services. To help fundamentally understand the performance difference between these two types of virtualization solutions, we use a physical machine with "just-enough" resource as a baseline to investigate the performance overhead of a standalone Docker container against a standalone virtual machine (VM). With findings contrary to the related work, our evaluation results show that the virtualization's performance overhead could vary not only on a feature-by-feature basis but also on a job-to-job basis. Although the container-based solution is undoubtedly lightweight, the hypervisor-based technology does not come with higher performance overhead in every case. For example, Docker containers particularly exhibit lower QoS in terms of storage transaction speed. | Although the performance advantages of containers were investigated in several pioneering studies @cite_11 @cite_15 @cite_16 , the container-based virtualization solution did not gain significant popularity until the recent underlying improvements in the Linux kernel, and especially until the emergence of Docker @cite_13 . Starting from an open-source project in early 2013 @cite_5 , Docker quickly became the most popular container solution @cite_18 by significantly facilitating the management of containers. Technically, by offering a unified tool set and API, Docker reduces the complexity of utilizing the relevant kernel-level techniques, including LXC, cgroups and a copy-on-write filesystem. 
To examine the performance of Docker containers, molecular modeling simulation software @cite_7 and a PostgreSQL database-based Joomla application @cite_8 have been used to benchmark the Docker environment against the VM environment. | {
"cite_N": [
"@cite_18",
"@cite_7",
"@cite_8",
"@cite_5",
"@cite_15",
"@cite_16",
"@cite_13",
"@cite_11"
],
"mid": [
"1959671196",
"1623427068",
"1569469665",
"2296335794",
"2105534358",
"2106317613",
"",
"2125187067"
],
"abstract": [
"Containerization is widely discussed as a lightweight virtualization solution. Apart from exhibiting benefits over traditional virtual machines in the cloud, containers are especially relevant for platform-as-a-service (PaaS) clouds to manage and orchestrate applications through containers as an application packaging mechanism. This article discusses the requirements that arise from having to facilitate applications through distributed multicloud platforms.",
"High Performance Computing (HPC) applications require systems with environments for maximum use of limited resources to facilitate efficient computations. However, these systems are faced with a large trade-off between efficient resource allocation and minimum execution times for the applications executed on them. Also, deploying applications in newer environments is exacting. To alleviate this challenge, container-based systems are recently being deployed to reduce the trade-off. In this paper, we investigate container-based technology as an efficient virtualization technology for running high performance scientific applications. We select Docker as the container-based technology for our test bed. We execute autodock3, a molecular modeling simulation software mostly used for Protein-ligand docking, in Docker containers and VMs created using OpenStack. We compare the execution times of the docking process in both Docker containers and in VMs.",
"With the advent of cloud computing and virtualization, modern distributed applications run on virtualized environments for hardware resource utilization and flexibility of operations in an infrastructure. However, when it comes to virtualization, resource overhead is involved. Linux containers can be an alternative to traditional virtualization technologies because of its high resource utilization and less overhead. This paper provides a comparison between Linux containers and virtual machines in terms of performance and scalability.",
"Docker promises the ability to package applications and their dependencies into lightweight containers that move easily between different distros, start up quickly and are isolated from each other.",
"Although virtualization holds numerous merits, it meanwhile incurs some performance loss. As the pivotal component of a virtualization system, the efficiency of virtual machine monitor(VMM) will largely impact the performance of the whole system. Therefore, it's indispensable to evaluate the performance of virtual machine monitors with different virtualization technologies. In this paper, we measure and analyze the performance of three open source virtual machine monitors-OpenVZ, Xen and KVM, which adopt the container-based virtualization, para-virtualization and full-virtualization respectively. We first measure them as a black box about their macro-performance on the virtualization of processor, memory, disk, network, server applications(including web, database and Java) and their micro-performance on the virtualization of system operation and context switch with several canonical benchmarks, and then analyze these testing results by examining their design and implementation as a white box. The experimental data not only show some valuable information for designers, but also provide a comprehensive performance understanding for users.",
"Virtualization as a platform for resource-intensive applications, such as MapReduce (MR), has been the subject of many studies in recent years, as it has brought benefits such as better manageability, overall resource utilization, security and scalability. Nevertheless, because of the performance overheads, virtualization has traditionally been avoided in computing environments where performance is a critical factor. In this context, container-based virtualization can be considered a lightweight alternative to the traditional hypervisor-based virtualization systems. In fact, there is a trend towards using containers in MR clusters in order to provide resource sharing and performance isolation (e.g., Mesos and YARN). However, there are still no studies evaluating the performance overhead of the current container-based systems and their ability to provide performance isolation when running MR applications. In this work, we conducted experiments to effectively compare and contrast the current container-based systems (Linux VServer, OpenVZ and Linux Containers (LXC)) in terms of performance and manageability when running on MR clusters. Our results showed that although all container-based systems reach a near-native performance for MapReduce workloads, LXC is the one that offers the best relationship between performance and management capabilities (especially regarding performance isolation).",
"",
"Virtualization is a common strategy for improving the utilization of existing computing resources, particularly within data centers. However, its use for high performance computing (HPC) applications is currently limited despite its potential for both improving resource utilization as well as providing resource guarantees to its users. This paper systematically evaluates various VMs for computationally intensive HPC applications using various standard benchmarks. Using VMWare Server, xen, and OpenVZ we examine the suitability of full virtualization, paravirtualization, and operating system-level virtualization in terms of network utilization SMP performance, file system performance, and MPI scalability. We show that the operating system-level virtualization provided by OpenVZ provides the best overall performance, particularly for MPI scalability."
]
} |
1708.01388 | 2612008866 | The current virtualization solution in the Cloud widely relies on hypervisor-based technologies. Along with the recent popularity of Docker, the container-based virtualization starts receiving more attention for being a promising alternative. Since both of the virtualization solutions are not resource-free, their performance overheads would lead to negative impacts on the quality of Cloud services. To help fundamentally understand the performance difference between these two types of virtualization solutions, we use a physical machine with "just-enough" resource as a baseline to investigate the performance overhead of a standalone Docker container against a standalone virtual machine (VM). With findings contrary to the related work, our evaluation results show that the virtualization's performance overhead could vary not only on a feature-by-feature basis but also on a job-to-job basis. Although the container-based solution is undoubtedly lightweight, the hypervisor-based technology does not come with higher performance overhead in every case. For example, Docker containers particularly exhibit lower QoS in terms of storage transaction speed. | The closest work to ours is the CPU-oriented study @cite_0 and the IBM research report @cite_17 on the performance comparison of VM and Linux containers. However, both studies are incomplete (e.g., the former was not concerned with the non-CPU features, and the latter did not finish the container's network evaluation). More importantly, our work denies the IBM report's finding that containers and VMs impose almost no overhead on CPU and memory usage" and also doubts about Docker equals or exceeds KVM performance in every case". Furthermore, in addition to the average performance overhead of virtualization technologies, we are more concerned with their overhead in performance variability. | {
"cite_N": [
"@cite_0",
"@cite_17"
],
"mid": [
"2181936530",
"2075174112"
],
"abstract": [
"Cloud computing is nowadays provided through various types of services, and it can be practically implemented and served via virtualized environments. As cloud computing techniques develop, many companies propose different types of platforms based on research into the relevant techniques. Among these platforms, this paper presents a performance comparison analysis of Linux Containers and Virtual Machines. We first built cloud environments on Docker, which is based on Linux Containers, and on a hypervisor-based Virtual Machine, and analyzed the size, boot speed, and CPU performance of each. With these analysis results, users will be able to understand the characteristics of each platform and reasonably choose the platform they need.",
"Cloud computing makes extensive use of virtual machines because they permit workloads to be isolated from one another and for the resource usage to be somewhat controlled. In this paper, we explore the performance of traditional virtual machine (VM) deployments, and contrast them with the use of Linux containers. We use KVM as a representative hypervisor and Docker as a container manager. Our results show that containers result in equal or better performance than VMs in almost all cases. Both VMs and containers require tuning to support I/O-intensive applications. We also discuss the implications of our performance results for future cloud architectures."
]
} |
1708.01388 | 2612008866 | The current virtualization solution in the Cloud widely relies on hypervisor-based technologies. Along with the recent popularity of Docker, the container-based virtualization starts receiving more attention for being a promising alternative. Since both of the virtualization solutions are not resource-free, their performance overheads would lead to negative impacts on the quality of Cloud services. To help fundamentally understand the performance difference between these two types of virtualization solutions, we use a physical machine with "just-enough" resource as a baseline to investigate the performance overhead of a standalone Docker container against a standalone virtual machine (VM). With findings contrary to the related work, our evaluation results show that the virtualization's performance overhead could vary not only on a feature-by-feature basis but also on a job-to-job basis. Although the container-based solution is undoubtedly lightweight, the hypervisor-based technology does not come with higher performance overhead in every case. For example, Docker containers particularly exhibit lower QoS in terms of storage transaction speed. | Note that, although there are also performance studies on deploying containers inside VMs (e.g., @cite_4 @cite_21 ), such a redundant structure might not be suitable for an apple-to-apple" comparison between Docker containers and VMs, and thus we do not include this virtualization scenario in this study. | {
"cite_N": [
"@cite_21",
"@cite_4"
],
"mid": [
"1519061084",
"2056198910"
],
"abstract": [
"There has been a growing effort in decreasing energy consumption of large-scale cloud data centers via maximization of host-level utilization and load balancing techniques. However, with the recent introduction of Container as a Service (CaaS) by cloud providers, maximizing the utilization at virtual machine (VM) level becomes essential. To this end, this paper focuses on finding efficient virtual machine sizes for hosting containers in such a way that the workload is executed with minimum wastage of resources on VM level. Suitable VM sizes for containers are calculated, and application tasks are grouped and clustered based on their usage patterns obtained from historical data. Furthermore, tasks are mapped to containers and containers are hosted on their associated VM types. We analyzed clouds' trace logs from Google cluster and consider the cloud workload variances, which is crucial for testing and validating our proposed solutions. Experimental results showed up to 7.55% improvement in the average energy consumption compared to baseline scenarios where the virtual machine sizes are fixed. In addition, comparing to the baseline scenarios, the total number of VMs instantiated for hosting the containers is also improved by 68% on average.",
"PaaS vendors face challenges in efficiently providing services with the growth of their offerings. In this paper, we explore how PaaS vendors are using containers as a means of hosting Apps. The paper starts with a discussion of PaaS Use case and the current adoption of Container based PaaS architectures with the existing vendors. We explore various container implementations - Linux Containers, Docker, Warden Container, lmctfy and OpenVZ. We look at how each of these implementations handles Process, FileSystem and Namespace isolation. We look at some of the unique features of each container and how some of them reuse the base Linux Container implementation or differ from it. We also explore how the IaaS layer itself has started providing support for container lifecycle management along with Virtual Machines. In the end, we look at factors affecting container implementation choices and some of the features missing from the existing implementations for the next generation PaaS"
]
} |
1708.01341 | 2743650536 | The growing demands of processing massive datasets have promoted irresistible trends of running machine learning applications on MapReduce. When processing large input data, it is often of greater values to produce fast and accurate enough approximate results than slow exact results. Existing techniques produce approximate results by processing parts of the input data, thus incurring large accuracy losses when using short job execution times, because all the skipped input data potentially contributes to result accuracy. We address this limitation by proposing AccurateML that aggregates information of input data in each map task to create small aggregated data points. These aggregated points enable all map tasks producing initial outputs quickly to save computation times and decrease the outputs' size to reduce communication times. Our approach further identifies the parts of input data most related to result accuracy, thus first using these parts to improve the produced outputs to minimize accuracy losses. We evaluated AccurateML using real machine learning applications and datasets. The results show: (i) it reduces execution times by 30 times with small accuracy losses compared to exact results; (ii) when using the same execution times, it achieves 2.71 times reductions in accuracy losses compared to existing approximate processing techniques. | Reducing execution times of MapReduce jobs has attracted much attention in recent years. Many approaches have been proposed based on producing and they typically fall into three categories. The first category of approaches dynamically manages the execution orders of multiple MapReduce jobs based on their cost models @cite_10 , past execution logs @cite_24 , or utilities of violating deadlines @cite_8 . 
The second category of approaches proposes new MapReduce schedulers to address two key problems in job execution @cite_2 @cite_1 : data locality (placing tasks on nodes that have their input data) @cite_14 and the dependence between map and reduce tasks @cite_23 @cite_29 . The third category of approaches focuses on improving the performance of data transfer (e.g. the shuffle phase) in MapReduce jobs @cite_4 @cite_17 . Our approximate processing approach forms a complement to the above techniques. In this section, we discuss related work based on producing . | {
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_8",
"@cite_29",
"@cite_1",
"@cite_24",
"@cite_23",
"@cite_2",
"@cite_10",
"@cite_17"
],
"mid": [
"",
"2110514750",
"1525444182",
"",
"",
"1968160647",
"",
"110612056",
"2164507334",
""
],
"abstract": [
"",
"Cluster computing applications like MapReduce and Dryad transfer massive amounts of data between their computation stages. These transfers can have a significant impact on job performance, accounting for more than 50% of job completion times. Despite this impact, there has been relatively little work on optimizing the performance of these data transfers, with networking researchers traditionally focusing on per-flow traffic management. We address this limitation by proposing a global management architecture and a set of algorithms that (1) improve the transfer times of common communication patterns, such as broadcast and shuffle, and (2) allow scheduling policies at the transfer level, such as prioritizing a transfer over other transfers. Using a prototype implementation, we show that our solution improves broadcast completion times by up to 4.5X compared to the status quo in Hadoop. We also show that transfer-level scheduling can reduce the completion time of high-priority transfers by 1.7X.",
"There is an increasing need for cloud service performance that can be tailored to customer requirements. In the context of jobs submitted to cloud computing clusters, a crucial requirement is the specification of job completion-times. A natural way to model this specification, is through client job utility functions that are dependent on job completion-times. We present a method to allocate and schedule heterogeneous resources to jointly optimize the utilities of jobs in a cloud. Specifically: (i) we formulate a completion-time optimal resource allocation (CORA) problem to apportion cluster resources across the jobs that enforces max-min fairness among job utilities, and (ii) starting with an integer programming problem, we perform a series of steps to transform it into an equivalent linear programming problem, and (iii) we implement the proposed framework as a utility-aware resource scheduler in the widely used Hadoop data processing framework, and finally (iv) through extensive experiments with real-world datasets, we show that our prototype achieves significant performance improvement over existing resource-allocation policies.",
"",
"",
"Large-scale MapReduce clusters that routinely process petabytes of unstructured and semi-structured data represent a new entity in the changing landscape of clouds. A key challenge is to increase the utilization of these MapReduce clusters. In this work, we consider a subset of the production workload that consists of MapReduce jobs with no dependencies. We observe that the order in which these jobs are executed can have a significant impact on their overall completion time and the cluster resource utilization. Our goal is to automate the design of a job schedule that minimizes the completion time (makespan) of such a set of MapReduce jobs. We offer a novel abstraction framework and a heuristic, called BalancedPools, that efficiently utilizes performance properties of MapReduce jobs in a given workload for constructing an optimized job schedule. Simulations performed over a realistic workload demonstrate that 15%-38% makespan improvements are achievable by simply processing the jobs in the right order.",
"",
"Sharing a MapReduce cluster between users is attractive because it enables statistical multiplexing (lowering costs) and allows users to share a common large data set. However, we find that traditional scheduling algorithms can perform very poorly in MapReduce due to two aspects of the MapReduce setting: the need for data locality (running computation where the data is) and the dependence between map and reduce tasks. We illustrate these problems through our experience designing a fair scheduler for MapReduce at Facebook, which runs a 600-node multiuser data warehouse on Hadoop. We developed two simple techniques, delay scheduling and copy-compute splitting, which improve throughput and response times by factors of 2 to 10. Although we focus on multi-user workloads, our techniques can also raise throughput in a single-user, FIFO workload by a factor of 2.",
"In online aggregation, a database system processes a user’s aggregation query in an online fashion. At all times during processing, the system gives the user an estimate of the final query result, with the confidence bounds that become tighter over time. In this paper, we consider how online aggregation can be built into a MapReduce system for large-scale data processing. Given the MapReduce paradigm’s close relationship with cloud computing (in that one might expect a large fraction of MapReduce jobs to be run in the cloud), online aggregation is a very attractive technology. Since large-scale cloud computations are typically pay-as-you-go, a user can monitor the accuracy obtained in an online fashion, and then save money by killing the computation early once sufficient accuracy has been obtained.",
""
]
} |
1708.01410 | 2743254697 | With the recent rise in the amount of structured data available, there has been considerable interest in methods for machine learning with graphs. Many of these approaches have been kernel methods, which focus on measuring the similarity between graphs. These generally involve measuring the similarity of structural elements such as walks or paths. Borgwardt and Kriegel proposed the all-paths kernel but emphasized that it is NP-hard to compute and infeasible in practice, favouring instead the shortest-path kernel. In this paper, we introduce a new algorithm for computing the all-paths kernel which is very efficient and enrich it further by including the simple cycles as well. We demonstrate how it is feasible even on large datasets to compute all the paths and simple cycles up to a moderate length. We show how to count labelled paths/simple cycles between vertices of a graph and evaluate a labelled path and simple cycles kernel. Extensive evaluations on a variety of graph datasets demonstrate that the all-paths and cycles kernel has superior performance to the shortest-path kernel and state-of-the-art performance overall. | Graph kernels based on walks have also been proposed, the simplest example of which is the random walk kernel @cite_13 which counts the number of matching walks between two graphs where @math is the set of all walks in a graph, @math is the delta function kernel, i.e. 1 when the walks match and 0 otherwise, and @math is a weighting function dependent on the length of the walk. Two walks match if they are the same length or, in the case of labelled graphs, if they have the same label sequence. This kernel is of interest because it can be efficiently evaluated in cubic time from the adjacency matrix of the product graph, i.e. @math .
A number of suggestions have been made to improve the performance of the random walk kernel, for example to remove backtracking steps from the walk @cite_14 @cite_1 since this eliminates double-counting of structure. | {
"cite_N": [
"@cite_1",
"@cite_14",
"@cite_13"
],
"mid": [
"2030339748",
"2115412287",
""
],
"abstract": [
"The aim of this paper is to explore the use of backtrackless walks and prime cycles for characterizing both labeled and unlabeled graphs. The reason for using backtrackless walks and prime cycles is that they avoid tottering, and can increase the discriminative power of the resulting graph representation. However, the use of such methods is limited in practice because of their computational cost. In this paper, we present efficient methods for computing graph kernels, which are based on backtrackless walks in a labeled graph and whose worst case running time is the same as that of kernels based on random walks. For clustering unlabeled graphs, we construct feature vectors using Ihara coefficients, since these coefficients are related to the frequencies of prime cycles in the graph. To efficiently compute the low order coefficients, we present an O(|V|^3) algorithm which is better than the O(|V|^6) worst case running time of previously known algorithms. In the experimental evaluation, we apply the proposed method to clustering both labeled and unlabeled graphs. The results show that using backtrackless walks and prime cycles instead of random walks can increase the accuracy of recognition.",
"Positive definite kernels between labeled graphs have recently been proposed. They enable the application of kernel methods, such as support vector machines, to the analysis and classification of graphs, for example, chemical compounds. These graph kernels are obtained by marginalizing a kernel between paths with respect to a random walk model on the graph vertices along the edges. We propose two extensions of these graph kernels, with the double goal to reduce their computation time and increase their relevance as measure of similarity between graphs. First, we propose to modify the label of each vertex by automatically adding information about its environment with the use of the Morgan algorithm. Second, we suggest a modification of the random walk model to prevent the walk from coming back to a vertex that was just visited. These extensions are then tested on benchmark experiments of chemical compounds classification, with promising results.",
""
]
} |
1708.01410 | 2743254697 | With the recent rise in the amount of structured data available, there has been considerable interest in methods for machine learning with graphs. Many of these approaches have been kernel methods, which focus on measuring the similarity between graphs. These generally involve measuring the similarity of structural elements such as walks or paths. Borgwardt and Kriegel proposed the all-paths kernel but emphasized that it is NP-hard to compute and infeasible in practice, favouring instead the shortest-path kernel. In this paper, we introduce a new algorithm for computing the all-paths kernel which is very efficient and enrich it further by including the simple cycles as well. We demonstrate how it is feasible even on large datasets to compute all the paths and simple cycles up to a moderate length. We show how to count labelled paths/simple cycles between vertices of a graph and evaluate a labelled path and simple cycles kernel. Extensive evaluations on a variety of graph datasets demonstrate that the all-paths and cycles kernel has superior performance to the shortest-path kernel and state-of-the-art performance overall. | Another alternative to avoid double-counting structure and increase the discriminative power of the kernel is to use a decomposition into . A path is a sequence of edges such that no vertex is repeated in the path. The All-Paths (AP) kernel was proposed by Borgwardt and Kriegel @cite_0 : where @math is the set of all paths in a graph and @math is a base kernel for paths, typically the delta-function kernel. Borgwardt and Kriegel define a path to be an edge path, one which may not repeat edges but may repeat vertices, which is different from our definition but this has little practical impact. They note that computing all the paths is NP-hard in principle and difficult in practice, making this kernel impractical.
Instead they adopt the kernel, where @math only contains the paths which are geodesically the shortest between any pair of vertices. Paths are labelled by their end-point labels. The shortest-path kernel is efficiently computable and practical to implement but ignores most of the paths present in a graph. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2147286743"
],
"abstract": [
"Data mining algorithms are facing the challenge to deal with an increasing number of complex objects. For graph data, a whole toolbox of data mining algorithms becomes available by defining a kernel function on instances of graphs. Graph kernels based on walks, subtrees and cycles in graphs have been proposed so far. As a general problem, these kernels are either computationally expensive or limited in their expressiveness. We try to overcome this problem by defining expressive graph kernels which are based on paths. As the computation of all paths and longest paths in a graph is NP-hard, we propose graph kernels based on shortest paths. These kernels are computable in polynomial time, retain expressivity and are still positive definite. In experiments on classification of graph models of proteins, our shortest-path kernels show significantly higher classification accuracy than walk-based kernels."
]
} |
1708.01410 | 2743254697 | With the recent rise in the amount of structured data available, there has been considerable interest in methods for machine learning with graphs. Many of these approaches have been kernel methods, which focus on measuring the similarity between graphs. These generally involve measuring the similarity of structural elements such as walks or paths. Borgwardt and Kriegel proposed the all-paths kernel but emphasized that it is NP-hard to compute and infeasible in practice, favouring instead the shortest-path kernel. In this paper, we introduce a new algorithm for computing the all-paths kernel which is very efficient and enrich it further by including the simple cycles as well. We demonstrate how it is feasible even on large datasets to compute all the paths and simple cycles up to a moderate length. We show how to count labelled paths/simple cycles between vertices of a graph and evaluate a labelled path and simple cycles kernel. Extensive evaluations on a variety of graph datasets demonstrate that the all-paths and cycles kernel has superior performance to the shortest-path kernel and state-of-the-art performance overall. | The kernel which we implement in this work is an extension of the all-paths kernel which includes the simple cycles as well. Recall that a simple cycle is a cycle @math such that all @math are distinct. Denoting by @math the set of all paths and simple cycles on @math , we defined the All-Paths and Cycles (APC) kernel to be Just as for the paths, counting simple cycles is a @math P-complete problem, and the problem of counting paths and simple cycles of length at most @math parametrised by @math is @math W[1]-complete @cite_8 , so the APC kernel is, in principle, hard to compute. | {
"cite_N": [
"@cite_8"
],
"mid": [
"2108136433"
],
"abstract": [
"We develop a parameterized complexity theory for counting problems. As the basis of this theory, we introduce a hierarchy of parameterized counting complexity classes #W[t], for t ≥ 1, that corresponds to Downey and Fellows' (1999) W-hierarchy and show that a few central W-completeness results for decision problems translate to #W-completeness results for the corresponding counting problems. Counting complexity gets interesting with problems whose decision version is tractable, but whose counting version is hard. Our main result states that counting cycles and paths of length k in both directed and undirected graphs, parameterized by k, are #W[1]-complete. This makes it highly unlikely that any of these problems is fixed-parameter tractable, even though their decision versions are. More explicitly, our result shows that most likely there is no f(k)·n^c algorithm for counting cycles or paths of length k in a graph of size n for any computable function f: N → N and constant c, even though there is a 2^O(k)·n^2.376 algorithm for finding a cycle or path of length k (2)."
]
} |
1708.01571 | 2952159401 | Explaining to what extent the real power of genetic algorithms lies in the ability of crossover to recombine individuals into higher quality solutions is an important problem in evolutionary computation. In this paper we show how the interplay between mutation and crossover can make genetic algorithms hillclimb faster than their mutation-only counterparts. We devise a Markov Chain framework that allows to rigorously prove an upper bound on the runtime of standard steady state genetic algorithms to hillclimb the OneMax function. The bound establishes that the steady-state genetic algorithms are 25% faster than all standard bit mutation-only evolutionary algorithms with static mutation rate up to lower order terms for moderate population sizes. The analysis also suggests that larger populations may be faster than populations of size 2. We present a lower bound for a greedy (2+1) GA that matches the upper bound for populations larger than 2, rigorously proving that 2 individuals cannot outperform larger population sizes under greedy selection and greedy crossover up to lower order terms. In complementary experiments the best population size is greater than 2 and the greedy genetic algorithms are faster than standard ones, further suggesting that the derived lower bound also holds for the standard steady state (2+1) GA. | The first rigorous groundbreaking proof that crossover can considerably improve the performance of EAs was given by Jansen and Wegener for the with an unrealistically low crossover probability @cite_35 . A series of subsequent works on the analysis of the function has made the algorithm characteristics increasingly realistic @cite_10 @cite_29 . Today it has been rigorously proved that the standard steady state with realistic parameter settings does not require artificial diversity enforcement to outperform its standard bit mutation-only counterpart to escape the plateau of local optima of the function @cite_34 . | {
"cite_N": [
"@cite_35",
"@cite_29",
"@cite_34",
"@cite_10"
],
"mid": [
"1606499289",
"2011559870",
"",
"2487146608"
],
"abstract": [
"There is a lot of experimental evidence that crossover is, for some functions, an essential operator of evolutionary algorithms. Nevertheless, it was an open problem to prove for some function that an evolutionary algorithm using crossover is essentially more efficient than evolutionary algorithms without crossover. In this paper, such an example is presented and its properties are proved.",
"Understanding the impact of crossover on performance is a major problem in the theory of genetic algorithms (GAs). We present new insight on working principles of crossover by analyzing the performance of crossover-based GAs on the simple functions OneMax and Jump. First, we assess the potential speedup by crossover when combined with a fitness-invariant bit shuffling operator that simulates a lineage of independent evolution on a function of unitation. Theoretical and empirical results show drastic speedups for both functions. Second, we consider a simple GA without shuffling and investigate the interplay of mutation and crossover on Jump. If the crossover probability is small, subsequent mutations create sufficient diversity, even for very small populations. Contrarily, with high crossover probabilities crossover tends to lose diversity more quickly than mutation can create it. This has a drastic impact on the performance on Jump. We complement our theoretical findings by Monte Carlo simulations on the population diversity.",
"",
"Population diversity is essential for the effective use of any crossover operator. We compare seven commonly used diversity mechanisms and prove rigorous run time bounds for the (μ+1) GA using uniform crossover on the fitness function Jump_k. All previous results in this context only hold for unrealistically low crossover probability pc = O(k/n), while we give analyses for the setting of constant pc < 1. For μ ≥ 2 and constant pc, we can compare the resulting expected optimisation times for different diversity mechanisms assuming an optimal choice of μ: O(n^(k-1)) for duplicate elimination/minimisation, O(n^2 log n) for maximising the convex hull, O(n log n) for det. crowding (assuming pc = k/n), O(n log n) for maximising the Hamming distance, O(n log n) for fitness sharing, O(n log n) for the single-receiver island model. This proves a sizeable advantage of all variants of the (μ+1) GA compared to the (1+1) EA, which requires Θ(n^k). In a short empirical study we confirm that the asymptotic differences can also be observed experimentally."
]
} |
1708.01571 | 2952159401 | Explaining to what extent the real power of genetic algorithms lies in the ability of crossover to recombine individuals into higher quality solutions is an important problem in evolutionary computation. In this paper we show how the interplay between mutation and crossover can make genetic algorithms hillclimb faster than their mutation-only counterparts. We devise a Markov Chain framework that allows to rigorously prove an upper bound on the runtime of standard steady state genetic algorithms to hillclimb the OneMax function. The bound establishes that the steady-state genetic algorithms are 25% faster than all standard bit mutation-only evolutionary algorithms with static mutation rate up to lower order terms for moderate population sizes. The analysis also suggests that larger populations may be faster than populations of size 2. We present a lower bound for a greedy (2+1) GA that matches the upper bound for populations larger than 2, rigorously proving that 2 individuals cannot outperform larger population sizes under greedy selection and greedy crossover up to lower order terms. In complementary experiments the best population size is greater than 2 and the greedy genetic algorithms are faster than standard ones, further suggesting that the derived lower bound also holds for the standard steady state (2+1) GA. | Proofs that crossover may make a difference between polynomial and exponential time for escaping local optima have also been available for some time @cite_20 @cite_41 . The authors devised example functions where, if sufficient diversity was enforced by some mechanism, then crossover could efficiently combine different individuals into an optimal solution. Mutation, on the other hand, required a long time because of the great Hamming distance between the local and global optima.
The authors chose to call the artificially designed functions Real Royal Road functions because the Royal Road functions devised to support the building block hypothesis had failed to do so @cite_13 . The Real Royal Road functions, though, bore no resemblance to the schemata structures required by the building block hypothesis. | {
"cite_N": [
"@cite_41",
"@cite_13",
"@cite_20"
],
"mid": [
"1994258343",
"1527462214",
"2148770681"
],
"abstract": [
"Mutation and crossover are the main search operators of different variants of evolutionary algorithms. Despite the many discussions on the importance of crossover nobody has proved rigorously for some explicitly defined fitness functions f_n: {0,1}^n → R that a genetic algorithm with crossover can optimize f_n in expected polynomial time while all evolution strategies based only on mutation (and selection) need expected exponential time. Here such functions and proofs are presented for a genetic algorithm without any idealization. For some functions one-point crossover is appropriate while for others uniform crossover is the right choice.",
"",
"Evolutionary and genetic algorithms (EAs and GAs) are quite successful randomized function optimizers. This success is mainly based on the interaction of different operators like selection, mutation, and crossover. Since this interaction is still not well understood, one is interested in the analysis of the single operators. Jansen and Wegener [Proceedings of GECCO'2001, 2001, pp. 375-382] have described so-called real royal road functions where simple steady-state GAs have a polynomial expected optimization time while the success probability of mutation-based EAs is exponentially small even after an exponential number of steps. This success of the GA is based on the crossover operator and a population whose size is moderately increasing with the dimension of the search space. Here new real royal road functions are presented where crossover leads to a small optimization time, although the GA works with the smallest possible population size--namely 2."
]
} |
1708.01571 | 2952159401 | Explaining to what extent the real power of genetic algorithms lies in the ability of crossover to recombine individuals into higher quality solutions is an important problem in evolutionary computation. In this paper we show how the interplay between mutation and crossover can make genetic algorithms hillclimb faster than their mutation-only counterparts. We devise a Markov Chain framework that allows to rigorously prove an upper bound on the runtime of standard steady state genetic algorithms to hillclimb the OneMax function. The bound establishes that the steady-state genetic algorithms are 25% faster than all standard bit mutation-only evolutionary algorithms with static mutation rate up to lower order terms for moderate population sizes. The analysis also suggests that larger populations may be faster than populations of size 2. We present a lower bound for a greedy (2+1) GA that matches the upper bound for populations larger than 2, rigorously proving that 2 individuals cannot outperform larger population sizes under greedy selection and greedy crossover up to lower order terms. In complementary experiments the best population size is greater than 2 and the greedy genetic algorithms are faster than standard ones, further suggesting that the derived lower bound also holds for the standard steady state (2+1) GA. | The utility of crossover has also been proved for less artificial problems such as coloring problems inspired by the Ising model from physics @cite_39 , computing input-output sequences in finite state machines @cite_3 , shortest path problems @cite_33 , vertex cover @cite_27 and multi-objective optimization problems @cite_16 . The above works show that crossover allows the algorithm to escape from local optima that have large basins of attraction for the mutation operator. Hence, they establish the usefulness of crossover as an operator to enhance the exploration capabilities of the algorithm. | {
"cite_N": [
"@cite_33",
"@cite_3",
"@cite_39",
"@cite_27",
"@cite_16"
],
"mid": [
"2064545488",
"1981241191",
"2160241726",
"",
"2015353056"
],
"abstract": [
"We show that a natural evolutionary algorithm for the all-pairs shortest path problem is significantly faster with a crossover operator than without. This is the first theoretical analysis proving the usefulness of crossover for a non-artificial problem.",
"Unique input–output (UIO) sequences have important applications in conformance testing of finite state machines (FSMs). Previous experimental and theoretical research has shown that evolutionary algorithms (EAs) can compute UIOs efficiently on many FSM instance classes, but fail on others. However, it has been unclear how and to what degree EA parameter settings influence the runtime on the UIO problem. This paper investigates the choice of acceptance criterion in the (1 + 1) EA and the use of crossover in the @math Steady State Genetic Algorithm. It is rigorously proved that changing these parameters can reduce the runtime from exponential to polynomial for some instance classes of the UIO problem.",
"Due to experimental evidence it is incontestable that crossover is essential for some fitness functions. However, theoretical results without assumptions are difficult. So-called real royal road functions are known where crossover is proved to be essential, i.e., mutation-based algorithms have an exponential expected runtime while the expected runtime of a genetic algorithm is polynomially bounded. However, these functions are artificial and have been designed in such a way that crossover is essential only at the very end (or at other well-specified points) of the optimization process. Here, a more natural fitness function based on a generalized Ising model is presented where crossover is essential throughout the whole optimization process. Mutation-based algorithms such as (μ+λ) EAs with constant population size are proved to have an exponential expected runtime while the expected runtime of a simple genetic algorithm with population size 2 and fitness sharing is polynomially bounded.",
"",
"Evolutionary algorithms (EAs) are increasingly popular approaches to multi-objective optimization. One of their significant advantages is that they can directly optimize the Pareto front by evolving a population of solutions, where the recombination (also called crossover) operators are usually employed to reproduce new and potentially better solutions by mixing up solutions in the population. Recombination in multi-objective evolutionary algorithms is, however, mostly applied heuristically. In this paper, we investigate how from a theoretical viewpoint a recombination operator will affect a multi-objective EA. First, we employ artificial benchmark problems: the Weighted LPTNO problem (a generalization of the well-studied LOTZ problem), and the well-studied COCZ problem, for studying the effect of recombination. Our analysis discloses that recombination may accelerate the filling of the Pareto front by recombining diverse solutions and thus help solve multi-objective optimization. Because of this, for these two problems, we find that a multi-objective EA with recombination enabled achieves a better expected running time than any known EAs with recombination disabled. We further examine the effect of recombination on solving the multi-objective minimum spanning tree problem, which is an NP-hard problem. Following our finding on the artificial problems, our analysis shows that recombination also helps accelerate filling the Pareto front and thus helps find approximate solutions faster."
]
} |
1708.01571 | 2952159401 | Explaining to what extent the real power of genetic algorithms lies in the ability of crossover to recombine individuals into higher quality solutions is an important problem in evolutionary computation. In this paper we show how the interplay between mutation and crossover can make genetic algorithms hillclimb faster than their mutation-only counterparts. We devise a Markov Chain framework that allows to rigorously prove an upper bound on the runtime of standard steady state genetic algorithms to hillclimb the OneMax function. The bound establishes that the steady-state genetic algorithms are 25% faster than all standard bit mutation-only evolutionary algorithms with static mutation rate up to lower order terms for moderate population sizes. The analysis also suggests that larger populations may be faster than populations of size 2. We present a lower bound for a greedy (2+1) GA that matches the upper bound for populations larger than 2, rigorously proving that 2 individuals cannot outperform larger population sizes under greedy selection and greedy crossover up to lower order terms. In complementary experiments the best population size is greater than 2 and the greedy genetic algorithms are faster than standard ones, further suggesting that the derived lower bound also holds for the standard steady state (2+1) GA. | The interplay between crossover and mutation may also produce a speed-up in the exploitation phase, for instance when the algorithm is hillclimbing. Research in this direction has recently appeared. The design of the (1+( @math )) GA was theoretically driven to beat the @math lower bound of all unary unbiased black box algorithms. Since the dynamics of the algorithm differ considerably from those of standard GAs, it is difficult to draw more general conclusions about the performance of GAs from the analysis of the (1+( @math )) GA.
From this point of view, the work of Sudholt is more revealing: he shows that any standard ( @math ) GA outperforms its standard bit mutation-only counterpart for hillclimbing the function @cite_28 . The only caveat is that the selection stage enforces diversity artificially, similarly to how Jansen and Wegener had enforced diversity for the Real Royal Road function analysis. In this paper we rigorously prove that it is not necessary to enforce diversity artificially for standard steady-state GAs to outperform their standard bit mutation-only counterpart. | {
"cite_N": [
"@cite_28"
],
"mid": [
"2187785713"
],
"abstract": [
"We reinvestigate a fundamental question: How effective is crossover in genetic algorithms in combining building blocks of good solutions? Although this has been discussed controversially for decades, we are still lacking a rigorous and intuitive answer. We provide such answers for royal road functions and OneMax, where every bit is a building block. For the latter, we show that using crossover makes every (μ+λ) genetic algorithm at least twice as fast as the fastest evolutionary algorithm using only standard bit mutation, up to small-order terms and for moderate μ and λ. Crossover is beneficial because it can capitalize on mutations that have both beneficial and disruptive effects on building blocks: crossover is able to repair the disruptive effects of mutation in later generations. Compared to mutation-based evolutionary algorithms, this makes multibit mutations more useful. Introducing crossover changes the optimal mutation rate on OneMax from to . This holds both for uniform crossover and k-point crossover. Experiments and statistical tests confirm that our findings apply to a broad class of building block functions."
]
} |
1708.01565 | 2742601054 | We present a Lipreading system, i.e. a speech recognition system using only visual features, which uses domain-adversarial training for speaker independence. Domain-adversarial training is integrated into the optimization of a lipreader based on a stack of feedforward and LSTM (Long Short-Term Memory) recurrent neural networks, yielding an end-to-end trainable system which only requires a very small number of frames of untranscribed target data to substantially improve the recognition accuracy on the target speaker. On pairs of different source and target speakers, we achieve a relative accuracy improvement of around 40% with only 15 to 20 seconds of untranscribed target speech data. On multi-speaker training setups, the accuracy improvements are smaller but still substantial. | Versatile lipreading features have been proposed, such as Active Appearance Models @cite_17 , Local Binary Patterns @cite_4 , and PCA-based @cite_31 and @cite_36 . For tackling speaker dependency, diverse scaling and normalization techniques have been employed @cite_5 @cite_29 . Classification is often done with Hidden Markov Models (HMMs) @cite_16 @cite_21 @cite_12 @cite_7 . Mouth tracking is done as a preprocessing step @cite_7 @cite_21 @cite_39 . For a comprehensive review see @cite_37 . | {
"cite_N": [
"@cite_37",
"@cite_4",
"@cite_7",
"@cite_36",
"@cite_29",
"@cite_21",
"@cite_39",
"@cite_5",
"@cite_31",
"@cite_16",
"@cite_12",
"@cite_17"
],
"mid": [
"",
"2136155248",
"1908325895",
"2129160496",
"",
"",
"",
"",
"2157190406",
"",
"",
"2113814270"
],
"abstract": [
"",
"Visual speech information plays an important role in lipreading under noisy conditions or for listeners with a hearing impairment. In this paper, we present local spatiotemporal descriptors to represent and recognize spoken isolated phrases based solely on visual input. Spatiotemporal local binary patterns extracted from mouth regions are used for describing isolated phrase sequences. In our experiments with 817 sequences from ten phrases and 20 speakers, promising accuracies of 62% and 70% were obtained in speaker-independent and speaker-dependent recognition, respectively. In comparison with other methods on AVLetters database, the accuracy, 62.8%, of our method clearly outperforms the others. Analysis of the confusion matrix for 26 English letters shows the good clustering characteristics of visemes for the proposed descriptors. The advantages of our approach include local processing and robustness to monotonic gray-scale changes. Moreover, no error prone segmentation of moving lips is needed.",
"For automatic lipreading, there are many competing methods for feature extraction. Often, because of the complexity of the task these methods are tested on only quite restricted datasets, such as the letters of the alphabet or digits, and from only a few speakers. In this paper we compare some of the leading methods for lip feature extraction and compare them on the GRID dataset which uses a constrained vocabulary over, in this case, 15 speakers. Previously the GRID data has had restricted attention because of the requirements to track the face and lips accurately. We overcome this via the use of a novel linear predictor (LP) tracker which we use to control an Active Appearance Model (AAM). By ignoring shape and/or appearance parameters from the AAM we can quantify the effect of appearance and/or shape when lip-reading. We find that shape alone is a useful cue for lipreading (which is consistent with human experiments). However, the incremental effect of shape on appearance appears to be not significant which implies that the inner appearance of the mouth contains more information than the shape.",
"The article compares two approaches to the description of ultrasound vocal tract images for application in a \"silent speech interface,\" one based on tongue contour modeling, and a second, global coding approach in which images are projected onto a feature space of Eigentongues. A curvature-based lip profile feature extraction method is also presented. Extracted visual features are input to a neural network which learns the relation between the vocal tract configuration and line spectrum frequencies (LSF) contained in a one-hour speech corpus. An examination of the quality of LSFs derived from the two approaches demonstrates that the Eigentongues approach has a more efficient implementation and provides superior results based on a normalized mean squared error criterion.",
"",
"",
"",
"",
"We improve the performance of a hybrid connectionist speech recognition system by incorporating visual information about the corresponding lip movements. Specifically, we investigate the benefits of adding visual features in the presence of additive noise and crosstalk (cocktail party effect). Our study extends our previous experiments by using a new visual front end, and an alternative architecture for combining the visual and acoustic information. Furthermore, we have extended our recognizer to a multi-speaker, connected letters recognizer. Our results show a significant improvement for the combined architecture (acoustic and visual information) over just the acoustic system in the presence of additive noise and crosstalk.",
"",
"",
"The multimodal nature of speech is often ignored in human-computer interaction, but lip deformations and other body motion, such as those of the head, convey additional information. We integrate speech cues from many sources and this improves intelligibility, especially when the acoustic signal is degraded. The paper shows how this additional, often complementary, visual speech information can be used for speech recognition. Three methods for parameterizing lip image sequences for recognition using hidden Markov models are compared. Two of these are top-down approaches that fit a model of the inner and outer lip contours and derive lipreading features from a principal component analysis of shape or shape and appearance, respectively. The third, bottom-up, method uses a nonlinear scale-space analysis to form features directly from the pixel intensity. All methods are compared on a multitalker visual speech recognition task of isolated letters."
]
} |
1708.01565 | 2742601054 | We present a Lipreading system, i.e. a speech recognition system using only visual features, which uses domain-adversarial training for speaker independence. Domain-adversarial training is integrated into the optimization of a lipreader based on a stack of feedforward and LSTM (Long Short-Term Memory) recurrent neural networks, yielding an end-to-end trainable system which only requires a very small number of frames of untranscribed target data to substantially improve the recognition accuracy on the target speaker. On pairs of different source and target speakers, we achieve a relative accuracy improvement of around 40% with only 15 to 20 seconds of untranscribed target speech data. On multi-speaker training setups, the accuracy improvements are smaller but still substantial. | Neural networks were applied to the Lipreading task early on @cite_0 ; however, they have become widespread only in recent years, with the advent of state-of-the-art learning techniques (and the necessary hardware). The first deep neural network for lipreading was a seven-layer convolutional net as a preprocessing stage for an HMM-based word recognizer @cite_39 . Since then, several end-to-end trainable systems were presented @cite_24 @cite_9 @cite_19 . The current state-of-the-art accuracy on the GRID corpus is 3.3%. In domain adaptation, it is assumed that a learning task exhibits a domain shift between the training (or source) and test (or target) data. This can be mitigated in several ways @cite_8 ; we apply @cite_22 , where an intermediate layer in a multi-layer network is driven to learn a representation of the input data which is optimized to be domain-agnostic, to make it difficult to detect whether an input sample is from the source or the target domain. A great advantage of this approach is the end-to-end trainability of the entire system. For a summary of further approaches to domain adaptation with neural networks, we refer to the excellent overview in @cite_22 . | {
"cite_N": [
"@cite_22",
"@cite_8",
"@cite_9",
"@cite_39",
"@cite_0",
"@cite_19",
"@cite_24"
],
"mid": [
"1882958252",
"2165698076",
"2578229578",
"",
"",
"2952746495",
"2951015274"
],
"abstract": [
"Top-performing deep architectures are trained on massive amounts of labeled data. In the absence of labeled data for a certain task, domain adaptation often provides an attractive option given that labeled data of similar nature but from a different domain (e.g. synthetic images) are available. Here, we propose a new approach to domain adaptation in deep architectures that can be trained on large amount of labeled data from the source domain and large amount of unlabeled data from the target domain (no labeled target-domain data is necessary). As the training progresses, the approach promotes the emergence of \"deep\" features that are (i) discriminative for the main learning task on the source domain and (ii) invariant with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with few standard layers and a simple new gradient reversal layer. The resulting augmented architecture can be trained using standard backpropagation. Overall, the approach can be implemented with little effort using any of the deep-learning packages. The method performs very well in a series of image classification experiments, achieving adaptation effect in the presence of big domain shifts and outperforming previous state-of-the-art on Office datasets.",
"A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.",
"Lipreading is the task of decoding text from the movement of a speaker's mouth. Traditional approaches separated the problem into two stages: designing or learning visual features, and prediction. More recent deep lipreading approaches are end-to-end trainable (, 2016; Chung & Zisserman, 2016a). However, existing work on models trained end-to-end perform only word classification, rather than sentence-level sequence prediction. Studies have shown that human lipreading performance increases for longer words (Easton & Basala, 1982), indicating the importance of features capturing temporal context in an ambiguous communication channel. Motivated by this observation, we present LipNet, a model that maps a variable-length sequence of video frames to text, making use of spatiotemporal convolutions, a recurrent network, and the connectionist temporal classification loss, trained entirely end-to-end. To the best of our knowledge, LipNet is the first end-to-end sentence-level lipreading model that simultaneously learns spatiotemporal visual features and a sequence model. On the GRID corpus, LipNet achieves 95.2% accuracy in sentence-level, overlapped speaker split task, outperforming experienced human lipreaders and the previous 86.4% word-level state-of-the-art accuracy (, 2016).",
"",
"",
"The goal of this work is to recognise phrases and sentences being spoken by a talking face, with or without the audio. Unlike previous works that have focussed on recognising a limited number of words or phrases, we tackle lip reading as an open-world problem - unconstrained natural language sentences, and in the wild videos. Our key contributions are: (1) a 'Watch, Listen, Attend and Spell' (WLAS) network that learns to transcribe videos of mouth motion to characters; (2) a curriculum learning strategy to accelerate training and to reduce overfitting; (3) a 'Lip Reading Sentences' (LRS) dataset for visual speech recognition, consisting of over 100,000 natural sentences from British television. The WLAS model trained on the LRS dataset surpasses the performance of all previous work on standard lip reading benchmark datasets, often by a significant margin. This lip reading performance beats a professional lip reader on videos from BBC television, and we also demonstrate that visual information helps to improve speech recognition performance even when the audio is available.",
"Lipreading, i.e. speech recognition from visual-only recordings of a speaker's face, can be achieved with a processing pipeline based solely on neural networks, yielding significantly better accuracy than conventional methods. Feed-forward and recurrent neural network layers (namely Long Short-Term Memory; LSTM) are stacked to form a single structure which is trained by back-propagating error gradients through all the layers. The performance of such a stacked network was experimentally evaluated and compared to a standard Support Vector Machine classifier using conventional computer vision features (Eigenlips and Histograms of Oriented Gradients). The evaluation was performed on data from 19 speakers of the publicly available GRID corpus. With 51 different words to classify, we report a best word accuracy on held-out evaluation speakers of 79.6% using the end-to-end neural network-based solution (11.6% improvement over the best feature-based solution evaluated)."
]
} |
1708.01348 | 2773835151 | While page views are often sold instantly through real-time auctions when users visit websites, they can also be sold in advance via guaranteed contracts. In this paper, we present a dynamic programming model to study how an online publisher should optimally allocate and price page views between guaranteed and spot markets. The problem is challenging because the allocation and pricing of guaranteed contracts affect advertisers' purchase between the two markets, and the terminal value of the model is endogenously determined by the updated dual force of supply and demand in auctions. We take the advertisers' purchasing behaviour into consideration, i.e., risk aversion and stochastic demand arrivals, and present a scalable and efficient algorithm for the optimal solution. The model is also empirically validated with a commercial dataset. The experimental results show that selling page views via both guaranteed contracts and auctions can increase the publisher's expected total revenue, and the optimal pricing and allocation strategies are robust to different market and advertiser types. | Our paper is also related to the literature in revenue management, in which many studies focus on how a seller uses dynamic pricing models to produce or offer a menu of products or services to its customers. For example, @cite_17 used intensity control to sell a given stock of products by a deadline when demand is price sensitive and stochastic and the seller's objective is to maximise his expected revenue. Their model fits many applications, such as selling single-route flight tickets and booking hotel rooms. @cite_15 @cite_6 proposed a dynamic pricing framework for selling flight tickets under the assumption of static demand. Our problem setting for PG is similar to the existing literature; however, the terminal value in our case is uncertain because the remaining impressions are auctioned off in RTB. | {
"cite_N": [
"@cite_15",
"@cite_6",
"@cite_17"
],
"mid": [
"2165746909",
"1965370893",
"2035518103"
],
"abstract": [
"The paper describes a methodology that has been implemented in a major British airline to find the optimal price to charge for airline tickets under one-way pricing. An analytical model has been developed to describe the buying behaviour of customers for flights over the selling period. Using this model and a standard analytical method for constrained optimization, we can find an expression for the optimal price structure for a flight. The expected number of bookings made on each day of the selling period and in each fare class given these prices can then be easily calculated. A simulation model is used to find the confidence ranges on the numbers of bookings and these ranges can be used to regulate the sale of tickets. A procedure to update the price structure based on the remaining capacity has also been developed.",
"In many industrial settings, managers face the problem of establishing a pricing policy that maximises the revenue from selling a given inventory of items by a fixed deadline, with the full inventory of items being available for sale from the beginning of the selling period. This problem arises in a variety of industries, including the sale of fashion garments, flight seats, and hotel rooms. We present a family of continuous pricing functions for which the optimal pricing strategy can be explicitly characterised and easily implemented. These pricing functions are the basis for a general pricing methodology which is particularly well suited for application in the context of an increasing role for the Internet as a means to market goods and services.",
"In many industries, managers face the problem of selling a given stock of items by a deadline. We investigate the problem of dynamically pricing such inventories when demand is price sensitive and stochastic and the firm's objective is to maximize expected revenues. Examples that fit this framework include retailers selling fashion and seasonal goods and the travel and leisure industry, which markets space such as seats on airline flights, cabins on vacation cruises, and rooms in hotels that become worthless if not sold by a specific time. We formulate this problem using intensity control and obtain structural monotonicity results for the optimal intensity (resp., price) as a function of the stock level and the length of the horizon. For a particular exponential family of demand functions, we find the optimal pricing policy in closed form. For general demand functions, we find an upper bound on the expected revenue based on analyzing the deterministic version of the problem and use this bound to prove that simple, fixed price policies are asymptotically optimal as the volume of expected sales tends to infinity. Finally, we extend our results to the case where demand is compound Poisson; only a finite number of prices is allowed; the demand rate is time varying; holding costs are incurred and cash flows are discounted; the initial stock is a decision variable; and reordering, overbooking, and random cancellations are allowed."
]
} |
1708.01348 | 2773835151 | While page views are often sold instantly through real-time auctions when users visit websites, they can also be sold in advance via guaranteed contracts. In this paper, we present a dynamic programming model to study how an online publisher should optimally allocate and price page views between guaranteed and spot markets. The problem is challenging because the allocation and pricing of guaranteed contracts affect advertisers' purchase between the two markets, and the terminal value of the model is endogenously determined by the updated dual force of supply and demand in auctions. We take the advertisers' purchasing behaviour into consideration, i.e., risk aversion and stochastic demand arrivals, and present a scalable and efficient algorithm for the optimal solution. The model is also empirically validated with a commercial dataset. The experimental results show that selling page views via both guaranteed contracts and auctions can increase the publisher's expected total revenue, and the optimal pricing and allocation strategies are robust to different market and advertiser types. | Existing literature has also studied selling products or services via both auctions and posted prices. For example, @cite_2 discussed a two-channel selling problem, where the products can be sold through either an auction or an alternative channel with a posted price. They considered two scenarios of this dual-channel optimisation problem: in the first scenario, the posted price is an external channel run by another company; in the second scenario, the seller manages both auction and posted price channels. The second scenario is similar to our model setting. However, their discussion is mainly about the static posted price and they assume that the original values are uniformly distributed and there is no penalty cost.
@cite_14 studied a hybrid model that unifies both futures and spot markets for dynamic spectrum access, in which buyers can purchase under-utilized licensed spectrum either through predefined contracts or through spot transactions with a VCG-like auction model. Their work is similar to ours; however, the seller does not optimise the contract price dynamically. | {
"cite_N": [
"@cite_14",
"@cite_2"
],
"mid": [
"153719418",
"2146113145"
],
"abstract": [
"Dynamic spectrum access is a new paradigm of secondary spectrum utilization and sharing. It allows unlicensed secondary users (SUs) to exploit the opportunistically underutilized licensed spectrum. Market mechanism is a widely used promising means to regulate the consuming behaviours of users and, hence, achieve the efficient allocation and consumption of limited resources. In this paper, we propose and study a hybrid secondary spectrum market consisting of both the futures market and the spot market, in which SUs (buyers) purchase underutilized licensed spectrum from a spectrum regulator (SR), either through predefined contracts via the futures market, or through spot transactions via the spot market. We focus on the optimal spectrum allocation among SUs in an exogenous hybrid market that maximizes the secondary spectrum utilization efficiency. The problem is challenging because of the stochasticity and asymmetry of network information. To solve this problem, we first derive an off-line optimal allocatio...",
"We analyze a revenue management problem in which a seller facing a Poisson arrival stream of consumers operates an online multiunit auction. Consumers can get the product from an alternative list price channel. We consider two variants of this problem: In the first variant, the list price is an external channel run by another firm. In the second one, the seller manages both the auction and the list price channels. Each consumer, trying to maximize his own surplus, must decide either to buy at the posted price and get the item at no risk, or to join the auction and wait until its end, when the winners are revealed and the auction price is disclosed. Our approach consists of two parts. First, we study structural properties of the problem, and show that the equilibrium strategy for both versions of this game is of the threshold type, meaning that a consumer will join the auction only if his arrival time is above a function of his own valuation. This consumer's strategy can be computed using an iterative algorithm in a function space, provably convergent under some conditions. Unfortunately, this procedure is computationally intensive. Second, and to overcome this limitation, we formulate an asymptotic version of the problem, in which the demand rate and the initial number of units grow proportionally large. We obtain a simple closed-form expression for the equilibrium strategy in this regime, which is then used as an approximate solution to the original problem. Numerical computations show that this heuristic is very accurate. The asymptotic solution culminates in simple and precise recipes of how bidders should behave, as well as how the seller should structure the auction, and price the product in the dual-channel case."
]
} |
1708.01348 | 2773835151 | While page views are often sold instantly through real-time auctions when users visit websites, they can also be sold in advance via guaranteed contracts. In this paper, we present a dynamic programming model to study how an online publisher should optimally allocate and price page views between guaranteed and spot markets. The problem is challenging because the allocation and pricing of guaranteed contracts affect advertisers' purchase between the two markets, and the terminal value of the model is endogenously determined by the updated dual force of supply and demand in auctions. We take the advertisers' purchasing behaviour into consideration, i.e., risk aversion and stochastic demand arrivals, and present a scalable and efficient algorithm for the optimal solution. The model is also empirically validated with a commercial dataset. The experimental results show that selling page views via both guaranteed contracts and auctions can increase the publisher's expected total revenue, and the optimal pricing and allocation strategies are robust to different market and advertiser types. | Guaranteed contract pricing has also been discussed in several recent studies. @cite_22 presented two algorithms to compute the price of a guaranteed contract based on the statistics of users' visits to the web pages. @cite_19 and @cite_20 used queueing systems and discussed two different pricing schemes for a publisher who promises to deliver a certain number of clicks or impressions on the ads posted, where uncertain demand, traffic and click behaviour are considered. @cite_10 , @cite_8 and @cite_4 discussed several pricing methods for various flexible guaranteed contracts tailored to display advertising, called ad options. The ideas came from financial and real options. Simply, if an advertiser pays a small fee to buy an ad option, he is guaranteed a priority right, but not an obligation, to buy his targeted future impressions.
He can then decide to pay the fixed price in the future to advertise. | {
"cite_N": [
"@cite_4",
"@cite_22",
"@cite_8",
"@cite_19",
"@cite_10",
"@cite_20"
],
"mid": [
"622482665",
"2073358075",
"1924186583",
"",
"2095916875",
"2169265988"
],
"abstract": [
"",
"We consider the problem of pricing guaranteed contracts in online display advertising. This problem has two key characteristics that when taken together distinguish it from related offline and online pricing problems: (1) the guaranteed contracts are sold months in advance, and at various points in time, and (2) the inventory that is sold to guaranteed contracts - user visits - is very high-dimensional, having hundreds of possible attributes, and advertisers can potentially buy any of the very large number (many trillions) of combinations of these attributes. Consequently, traditional pricing methods such as real-time or combinatorial auctions, or optimization-based pricing based on self- and cross-elasticities are not directly applicable to this problem. We hence propose a new pricing method, whereby the price of a guaranteed contract is computed based on the prices of the individual user visits that the contract is expected to get. The price of each individual user visit is in turn computed using historical sales prices that are negotiated between a sales person and an advertiser, and we propose two different variants in this context. Our evaluation using real guaranteed contracts shows that the proposed pricing method is accurate in the sense that it can effectively predict the prices of other (out-of-sample) historical contracts.",
"A new advertisement option that allows an advertiser to pay a fixed CPM/CPC to purchase impressions or clicks. The fixed payment can be different to the underlying ad format. The proposed option can be priced under the lattice framework for both SV and GBM underlying models. The studied model is validated by two advertising datasets. Advertisement (abbreviated ad) options are a recent development in online advertising. Simply, an ad option is a first look contract in which a publisher or search engine grants an advertiser a right but not obligation to enter into transactions to purchase impressions or clicks from a specific ad slot at a pre-specified price on a specific delivery date. Such a structure provides advertisers with more flexibility of their guaranteed deliveries. The valuation of ad options is an important topic and previous studies on ad options pricing have been mostly restricted to the situations where the underlying prices follow a geometric Brownian motion (GBM). This assumption is reasonable for sponsored search; however, some studies have also indicated that it is not valid for display advertising. In this paper, we address this issue by employing a stochastic volatility (SV) model and discuss a lattice framework to approximate the proposed SV model in option pricing. Our developments are validated by experiments with real advertising data: (i) we find that the SV model has a better fitness over the GBM model; (ii) we validate the proposed lattice model via two sequential Monte Carlo simulation methods; (iii) we demonstrate that advertisers are able to flexibly manage their guaranteed deliveries by using the proposed options, and publishers can have an increased revenue when some of their inventories are sold via ad options.",
"",
"Many online advertising slots are sold through bidding mechanisms by publishers and search engines. Highly affected by the dual force of supply and demand, the prices of advertising slots vary significantly over time. This then influences the businesses whose major revenues are driven by online advertising, particularly for publishers and search engines. To address the problem, we propose to sell the future advertising slots via option contracts (also called ad options). The ad option can give its buyer the right to buy the future advertising slots at a prefixed price. The pricing model of ad options is developed in order to reduce the volatility of the income of publishers or search engines. Our experimental results confirm the validity of ad options and the embedded risk management mechanisms.",
"Display advertising is a $25 billion business with a promising upward revenue trend. In this paper, we consider an online display advertising setting in which a web publisher posts display ads on its website and charges based on the cost-per-click (CPC) pricing scheme while promising to deliver a certain number of clicks to the ads posted. The publisher is faced with uncertain demand for advertising slots and uncertain traffic to its website as well as uncertain click behavior of visitors. We formulate the problem as a novel queueing system, where the slots correspond to service channels with the service rate of each server inversely related to the number of active servers. We obtain the closed-form solution for the steady-state probabilities of the number of ads in the publisher's system. We determine the publisher's optimal price to charge per click and show that it can increase in the number of advertising slots and the number of promised clicks. We show that the common heuristic used by many web publishers to convert between the cost-per-click and cost-per-impression pricing schemes using the so-called click-through-rate can be misleading as it may incur web publishers substantial revenue loss. We provide an alternative explanation for the phenomenon observed by several publishers that the click-through-rate tends to drop when they switch from the cost-per-click to cost-per-impression pricing scheme."
]
} |
1708.01348 | 2773835151 | While page views are often sold instantly through real-time auctions when users visit websites, they can also be sold in advance via guaranteed contracts. In this paper, we present a dynamic programming model to study how an online publisher should optimally allocate and price page views between guaranteed and spot markets. The problem is challenging because the allocation and pricing of guaranteed contracts affect advertisers' purchase between the two markets, and the terminal value of the model is endogenously determined by the updated dual force of supply and demand in auctions. We take the advertisers' purchasing behaviour into consideration, i.e., risk aversion and stochastic demand arrivals, and present a scalable and efficient algorithm for the optimal solution. The model is also empirically validated with a commercial dataset. The experimental results show that selling page views via both guaranteed contracts and auctions can increase the publisher's expected total revenue, and the optimal pricing and allocation strategies are robust to different market and advertiser types. | Our research in this paper concerns both pricing and allocation so the optimal solution includes and reflects their interaction effects. Our problem setup is similar to @cite_0 . However, both the model and the analysis are significantly different from theirs in three important aspects. First, we consider stochastic demand for buying guaranteed contracts. We use a Poisson process to model the arrival of advertisers and allow unfulfilled demand to be backlogged. While @cite_0 assumes the demand for advertising in the future period can be shifted in advance by using a deterministic exponential decay function, the unfulfilled demand in their setting is not explicitly considered at later time points. Second, we devise an optimal pricing and allocation solution to maximise the publisher's expected total revenue, by extending an algorithm for the Knapsack problem.
Different from @cite_0 where the optimal solution is linearly searched, our solution is a greedy algorithm, and is relatively scalable and efficient. Third, we further analyse the model's robustness by incorporating supply and demand uncertainty and customising optimal pricing and allocation for different advertiser segments. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2069696398"
],
"abstract": [
"There are two major ways of selling impressions in display advertising. They are either sold in spot through auction mechanisms or in advance via guaranteed contracts. The former has achieved a significant automation via real-time bidding (RTB); however, the latter is still mainly done over the counter through direct sales. This paper proposes a mathematical model that allocates and prices the future impressions between real-time auctions and guaranteed contracts. Under conventional economic assumptions, our model shows that the two ways can be seamless combined programmatically and the publisher's revenue can be maximized via price discrimination and optimal allocation. We consider advertisers are risk-averse, and they would be willing to purchase guaranteed impressions if the total costs are less than their private values. We also consider that an advertiser's purchase behavior can be affected by both the guaranteed price and the time interval between the purchase time and the impression delivery date. Our solution suggests an optimal percentage of future impressions to sell in advance and provides an explicit formula to calculate at what prices to sell. We find that the optimal guaranteed prices are dynamic and are non-decreasing over time. We evaluate our method with RTB datasets and find that the model adopts different strategies in allocation and pricing according to the level of competition. From the experiments we find that, in a less competitive market, lower prices of the guaranteed contracts will encourage the purchase in advance and the revenue gain is mainly contributed by the increased competition in future RTB. In a highly competitive market, advertisers are more willing to purchase the guaranteed contracts and thus higher prices are expected. The revenue gain is largely contributed by the guaranteed selling."
]
} |
1902.11134 | 2916970320 | In spite of achieving revolutionary successes in machine learning, deep convolutional neural networks have been recently found to be vulnerable to adversarial attacks and difficult to generalize to novel test images with reasonably large geometric transformations. Inspired by a recent neuroscience discovery revealing that primate brain employs disentangled shape and appearance representations for object recognition, we propose a general disentangled deep autoencoding regularization framework that can be easily applied to any deep embedding based classification model for improving the robustness of deep neural networks. Our framework effectively learns disentangled appearance code and geometric code for robust image classification, which is the first disentangling based method defending against adversarial attacks and complementary to standard defense methods. Extensive experiments on several benchmark datasets show that our proposed regularization framework leveraging disentangled embedding significantly outperforms traditional unregularized convolutional neural networks for image classification on robustness against adversarial attacks and generalization to novel test data. | For ease of comparison with related work, we choose FGSM and BIM. Black-box attacks are based on limited knowledge of the target model. The greedy local search method @cite_25 uses an iterative search procedure, where in each round a local neighborhood is used to refine the current image and to optimize some objective function depending on the network output. The second black-box attack approach @cite_30 leverages the transferability property, where a different known model is used to generate adversarial examples. We evaluate our model's robustness using both types of black-box attacks.
"cite_N": [
"@cite_30",
"@cite_25"
],
"mid": [
"2603766943",
"2745565856"
],
"abstract": [
"Machine learning (ML) models, e.g., deep neural networks (DNNs), are vulnerable to adversarial examples: malicious inputs modified to yield erroneous model outputs, while appearing unmodified to human observers. Potential attacks include having malicious content like malware identified as legitimate or controlling vehicle behavior. Yet, all existing adversarial example attacks require knowledge of either the model internals or its training data. We introduce the first practical demonstration of an attacker controlling a remotely hosted DNN with no such knowledge. Indeed, the only capability of our black-box adversary is to observe labels given by the DNN to chosen inputs. Our attack strategy consists in training a local model to substitute for the target DNN, using inputs synthetically generated by an adversary and labeled by the target DNN. We use the local substitute to craft adversarial examples, and find that they are misclassified by the targeted DNN. To perform a real-world and properly-blinded evaluation, we attack a DNN hosted by MetaMind, an online deep learning API. We find that their DNN misclassifies 84.24 of the adversarial examples crafted with our substitute. We demonstrate the general applicability of our strategy to many ML techniques by conducting the same attack against models hosted by Amazon and Google, using logistic regression substitutes. They yield adversarial examples misclassified by Amazon and Google at rates of 96.19 and 88.94 . We also find that this black-box attack strategy is capable of evading defense strategies previously found to make adversarial example crafting harder.",
"Deep neural networks are powerful and popular learning models that achieve state-of-the-art pattern recognition performance on many computer vision, speech, and language processing tasks. However, these networks have also been shown susceptible to crafted adversarial perturbations which force misclassification of the inputs. Adversarial examples enable adversaries to subvert the expected system behavior leading to undesired consequences and could pose a security risk when these systems are deployed in the real world.,,,,,, In this work, we focus on deep convolutional neural networks and demonstrate that adversaries can easily craft adversarial examples even without any internal knowledge of the target network. Our attacks treat the network as an oracle (black-box) and only assume that the output of the network can be observed on the probed inputs. Our attacks utilize a novel local-search based technique to construct numerical approximation to the network gradient, which is then carefully used to construct a small set of pixels in an image to perturb. We demonstrate how this underlying idea can be adapted to achieve several strong notions of misclassification. The simplicity and effectiveness of our proposed schemes mean that they could serve as a litmus test for designing robust networks."
]
} |
1902.11102 | 2915761715 | Wireless communication systems operate in complex time-varying environments. Therefore, selecting the optimal configuration parameters in these systems is a challenging problem. For wireless links, rate selection is used to select the optimal data transmission rate that maximizes the link throughput subject to an application-defined latency constraint. We model rate selection as a stochastic multi-armed bandit (MAB) problem, where a finite set of transmission rates are modeled as independent bandit arms. For this setup, we propose Con-TS, a novel constrained version of the Thompson sampling algorithm, where the latency requirement is modeled by a linear constraint on arm selection probabilities. Since our algorithm learns a Bayesian model of the wireless link, it can be adapted to exploit prior knowledge often available in practical wireless networks. Through numerical results from simulated experiments, we demonstrate that Con-TS significantly outperforms state-of-the-art bandit algorithms proposed in the literature. Further, we compare Con-TS with the outer loop link adaptation (OLLA) scheme, which is the state-of-the-art in practical wireless networks and relies on carefully tuned offline link models. We show that Con-TS outperforms OLLA in simulations, further, it can elegantly incorporate information from the offline link models to substantially improve performance. | Constrained bandit problems have recently been studied in the context of revenue maximization under a finite inventory setting, termed bandits with knapsacks (BwK). In @cite_4 , an upper confidence bound (UCB)-based approach was introduced that was shown to be optimal for the stochastic BwK problem. Further, in @cite_15 , a Thompson sampling algorithm for budgeted multi-armed bandits was proposed that outperforms the UCB BwK algorithm. Subsequently, in @cite_25 , Thompson sampling was studied for revenue optimization for a finite inventory that contains multiple, non-identical products.
In this paper, we formulate the rate selection problem as a knapsack problem. To the best of our knowledge, this is a novel formulation of the rate selection problem that provides new insights into its analysis. Further, unlike finite-inventory problems where the inventory costs accrue over time, we show that the latency constraint can be formulated in terms of the expected transmission success probability. | {
"cite_N": [
"@cite_15",
"@cite_4",
"@cite_25"
],
"mid": [
"",
"2110005947",
"2107550635"
],
"abstract": [
"",
"Multi-armed bandit problems are the predominant theoretical model of exploration-exploitation tradeoffs in learning, and they have countless applications ranging from medical trials, to communication networks, to Web search and advertising. In many of these application domains the learner may be constrained by one or more supply (or budget) limits, in addition to the customary limitation on the time horizon. The literature lacks a general model encompassing these sorts of problems. We introduce such a model, called \"bandits with knapsacks\", that combines aspects of stochastic integer programming with online learning. A distinctive feature of our problem, in comparison to the existing regret-minimization literature, is that the optimal policy for a given latent distribution may significantly outperform the policy that plays the optimal fixed arm. Consequently, achieving sub linear regret in the bandits-with-knapsacks problem is significantly more challenging than in conventional bandit problems. We present two algorithms whose reward is close to the information-theoretic optimum: one is based on a novel \"balanced exploration\" paradigm, while the other is a primal-dual algorithm that uses multiplicative updates. Further, we prove that the regret achieved by both algorithms is optimal up to polylogarithmic factors. We illustrate the generality of the problem by presenting applications in a number of different domains including electronic commerce, routing, and scheduling. As one example of a concrete application, we consider the problem of dynamic posted pricing with limited supply and obtain the first algorithm whose regret, with respect to the optimal dynamic policy, is sub linear in the supply.",
"We consider a network revenue management problem where an online retailer aims to maximize revenue from multiple products with limited inventory constraints. As common in practice, the retailer does not know the consumer's purchase probability at each price and must learn the mean demand from sales data. We propose an efficient and effective dynamic pricing algorithm, which builds upon the Thompson sampling algorithm used for multi-armed bandit problems by incorporating inventory constraints into the model and algorithm. Our algorithm proves to have both strong theoretical performance guarantees as well as promising numerical performance results when compared to other algorithms developed for the same setting. More broadly, our paper contributes to the literature on the multi-armed bandit problem with resource constraints, since our algorithm applies directly to this setting when the inventory constraint is interpreted as general resource constraints.\u0000"
]
} |
1902.11102 | 2915761715 | Wireless communication systems operate in complex time-varying environments. Therefore, selecting the optimal configuration parameters in these systems is a challenging problem. For wireless links, rate selection is used to select the optimal data transmission rate that maximizes the link throughput subject to an application-defined latency constraint. We model rate selection as a stochastic multi-armed bandit (MAB) problem, where a finite set of transmission rates are modeled as independent bandit arms. For this setup, we propose Con-TS, a novel constrained version of the Thompson sampling algorithm, where the latency requirement is modeled by a linear constraint on arm selection probabilities. Since our algorithm learns a Bayesian model of the wireless link, it can be adapted to exploit prior knowledge often available in practical wireless networks. Through numerical results from simulated experiments, we demonstrate that Con-TS significantly outperforms state-of-the-art bandit algorithms proposed in the literature. Further, we compare Con-TS with the outer loop link adaptation (OLLA) scheme, which is the state-of-the-art in practical wireless networks and relies on carefully tuned offline link models. We show that Con-TS outperforms OLLA in simulations, further, it can elegantly incorporate information from the offline link models to substantially improve performance. | In @cite_23 , a constrained multi-play UCB approach, ConUCB, was developed to estimate a probabilistic selection vector for determining the optimal subset of links that are displayed to the user in each round. The ConUCB algorithm learns the click-through-rate (CTR) for each link, and the expected revenue collected after a user clicks a particular link.
Subsequently, ConUCB calculates a probabilistic selection vector in each round to select the optimal subset of links, i.e., the subset of links which exceed the CTR constraint and maximize the expected cumulative revenue over a finite time horizon. The rate selection problem can be interpreted as a single-play variant of the problem studied in @cite_23 , where the second-level reward is the fixed throughput associated with a given rate. In Sec. , we show that our novel Bayesian algorithm for the rate selection problem, (i) empirically outperforms a suitably adapted version of the ConUCB approach, and (ii) allows us to incorporate valuable prior information available in wireless communication networks. | {
"cite_N": [
"@cite_23"
],
"mid": [
"2799539834"
],
"abstract": [
"The web link selection problem is to select a small subset of web links from a large web link pool, and to place the selected links on a web page that can only accommodate a limited number of links, e.g., advertisements, recommendations, or news feeds. Despite the long concerned click-through rate which reflects the attractiveness of the link itself, the revenue can only be obtained from user actions after clicks, e.g., purchasing after being directed to the product pages by recommendation links. Thus, the web links have an intrinsic . With this observation, we consider the context-free web link selection problem, where the objective is to maximize revenue while ensuring that the attractiveness is no less than a preset threshold. The key challenge of the problem is that each link's multi-level feedbacks are stochastic, and unobservable unless the link is selected. We model this problem with a constrained stochastic multi-armed bandit formulation, and design an efficient link selection algorithm, called Constrained Upper Confidence Bound algorithm (), and prove @math bounds on both the regret and the violation of the attractiveness constraint. We conduct extensive experiments on three real-world datasets, and show that outperforms state-of-the-art context-free bandit algorithms concerning the multi-level feedback structure."
]
} |
1902.11102 | 2915761715 | Wireless communication systems operate in complex time-varying environments. Therefore, selecting the optimal configuration parameters in these systems is a challenging problem. For wireless links, rate selection is used to select the optimal data transmission rate that maximizes the link throughput subject to an application-defined latency constraint. We model rate selection as a stochastic multi-armed bandit (MAB) problem, where a finite set of transmission rates are modeled as independent bandit arms. For this setup, we propose Con-TS, a novel constrained version of the Thompson sampling algorithm, where the latency requirement is modeled by a linear constraint on arm selection probabilities. Since our algorithm learns a Bayesian model of the wireless link, it can be adapted to exploit prior knowledge often available in practical wireless networks. Through numerical results from simulated experiments, we demonstrate that Con-TS significantly outperforms state-of-the-art bandit algorithms proposed in the literature. Further, we compare Con-TS with the outer loop link adaptation (OLLA) scheme, which is the state-of-the-art in practical wireless networks and relies on carefully tuned offline link models. We show that Con-TS outperforms OLLA in simulations, further, it can elegantly incorporate information from the offline link models to substantially improve performance. | State-of-the-art wireless networks address rate selection through a combination of offline and online approaches. An offline model is carefully tuned for a broad class of wireless channels and rates @cite_14 @cite_11 . The transmitter uses this offline model to obtain initial estimates of the transmission success probabilities for each candidate rate for the current channel conditions. The transmitter then refines these estimates through outer loop link adaptation (OLLA) step adjustments based on the observed transmission successes and failures.
However, OLLA suffers from two major shortcomings: (i) OLLA converges to match the latency constraint @cite_10 ; however, in Sec. , we show that such a rate selection strategy is not necessarily optimal. (ii) The OLLA step sizes are heuristically defined and often lead to poor convergence and steady-state performance; the steady-state performance of OLLA depends on the scaling factor of the step sizes @cite_19 . In Sec. , we describe the OLLA algorithm and in Sec. , we show that Con-TS significantly outperforms OLLA for simulated experiments. | {
"cite_N": [
"@cite_19",
"@cite_14",
"@cite_10",
"@cite_11"
],
"mid": [
"1985853617",
"2145850537",
"2099059301",
""
],
"abstract": [
"In this letter, a comprehensive analysis of throughput performance statistics in a live LTE network is presented. The analysis shows the relationship between several widely accepted throughput performance indicators, i.e., the user throughput, the cell throughput, and the radio link throughput, and how these indicators are related to signal quality statistics. The analysis is performed on a per-cell and per-connection basis. For this pur- pose, throughput and signal quality statistics are collected from network performance counters and call traces in cells of a live LTE system. Results show that all throughput measures are strongly affected by chatty applications dominating current LTE networks due to the last transmission time interval transmissions and the outer loop link adaptation mechanism. Index Terms—LTE, throughput, CQI, live cellular network.",
"This paper gives an overview of some so-called link performance models used in system level simulations to determine the link packet error rate (PER) at reduced complexity. A subset of link performance models is evaluated in terms of PER prediction accuracy focusing on a single receive and transmit antenna OFDM link with different coding options and channel characteristics. The results demonstrate that a mutual-information based metric which accounts for the modulation alphabet is preferable in the considered cases and, furthermore, applicable to the large class of MIMO-OFDM transmission techniques with linear pre- and post-processing",
"This paper studies the tradeoff between channel coding and ARQ (automatic repeat request) in Rayleigh blockfading channels. A heavily coded system corresponds to a low transmission rate with few ARQ retransmissions, whereas lighter coding corresponds to a higher transmitted rate but more retransmissions. The optimum error probability, where optimum refers to the operating point that maximizes the average successful throughput, is derived and is shown to be a decreasing function of the average signal-to-noise ratio and of the channel diversity order. A general conclusion of the work is that the optimum error probability is quite large (e.g., 10 or larger) for reasonable channel parameters, and that operating at a very small error probability can lead to a significantly reduced throughput.",
""
]
} |
1902.11154 | 2916790266 | In this paper we propose a robust visual odometry system for a wide-baseline camera rig with wide field-of-view (FOV) fisheye lenses, which provides full omnidirectional stereo observations of the environment. For more robust and accurate ego-motion estimation we add three components to the standard VO pipeline, 1) the hybrid projection model for improved feature matching, 2) multi-view P3P RANSAC algorithm for pose estimation, and 3) online update of rig extrinsic parameters. The hybrid projection model combines the perspective and cylindrical projection to maximize the overlap between views and minimize the image distortion that degrades feature matching performance. The multi-view P3P RANSAC algorithm extends the conventional P3P RANSAC to multi-view images so that all feature matches in all views are considered in the inlier counting for robust pose estimation. Finally the online extrinsic calibration is seamlessly integrated in the backend optimization framework so that the changes in camera poses due to shocks or vibrations can be corrected automatically. The proposed system is extensively evaluated with synthetic datasets with ground-truth and real sequences of highly dynamic environments, and its superior performance is demonstrated. | In the VO and visual SLAM literature, many different camera configurations have been researched. There are various monocular systems @cite_1 @cite_4 @cite_13 that are point feature-based, direct (optimizing poses with image contents), or hybrid. They show outstanding performance, but due to the fundamental limitation of the monocular setup, metric poses cannot be estimated. For robotic applications, stereo-based systems @cite_14 @cite_5 have been proposed. Another limitation of the conventional systems is the small FOV, which can make a VO system unstable due to a lack of features or the presence of dynamic objects. For this practical reason, fisheye camera based methods have been researched recently.
@cite_0 proposed a fisheye visual SLAM system with direct methods. @cite_2 used a fisheye stereo camera and recovered the metric-scale trajectory. Most recently, @cite_6 proposed omnidirectional visual odometry with the direct sparse method. | {
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_1",
"@cite_6",
"@cite_0",
"@cite_2",
"@cite_5",
"@cite_13"
],
"mid": [
"2535547924",
"2474281075",
"1612997784",
"2809451359",
"2202251471",
"2564804471",
"2751489730",
"1970504153"
],
"abstract": [
"We present ORB-SLAM2, a complete simultaneous localization and mapping (SLAM) system for monocular, stereo and RGB-D cameras, including map reuse, loop closing, and relocalization capabilities. The system works in real time on standard central processing units in a wide variety of environments from small hand-held indoors sequences, to drones flying in industrial environments and cars driving around a city. Our back-end, based on bundle adjustment with monocular and stereo observations, allows for accurate trajectory estimation with metric scale. Our system includes a lightweight localization mode that leverages visual odometry tracks for unmapped regions and matches with map points that allow for zero-drift localization. The evaluation on 29 popular public sequences shows that our method achieves state-of-the-art accuracy, being in most cases the most accurate SLAM solution. We publish the source code, not only for the benefit of the SLAM community, but with the aim of being an out-of-the-box SLAM solution for researchers in other fields.",
"Direct Sparse Odometry (DSO) is a visual odometry method based on a novel, highly accurate sparse and direct structure and motion formulation. It combines a fully direct probabilistic model (minimizing a photometric error) with consistent, joint optimization of all model parameters, including geometry-represented as inverse depth in a reference frame-and camera motion. This is achieved in real time by omitting the smoothness prior used in other direct methods and instead sampling pixels evenly throughout the images. Since our method does not depend on keypoint detectors or descriptors, it can naturally sample pixels from across all image regions that have intensity gradient, including edges or smooth intensity variations on essentially featureless walls. The proposed model integrates a full photometric calibration, accounting for exposure time, lens vignetting, and non-linear response functions. We thoroughly evaluate our method on three different datasets comprising several hours of video. The experiments show that the presented approach significantly outperforms state-of-the-art direct and indirect methods in a variety of real-world settings, both in terms of tracking accuracy and robustness.",
"This paper presents ORB-SLAM, a feature-based monocular simultaneous localization and mapping (SLAM) system that operates in real time, in small and large indoor and outdoor environments. The system is robust to severe motion clutter, allows wide baseline loop closing and relocalization, and includes full automatic initialization. Building on excellent algorithms of recent years, we designed from scratch a novel system that uses the same features for all SLAM tasks: tracking, mapping, relocalization, and loop closing. A survival of the fittest strategy that selects the points and keyframes of the reconstruction leads to excellent robustness and generates a compact and trackable map that only grows if the scene content changes, allowing lifelong operation. We present an exhaustive evaluation in 27 sequences from the most popular datasets. ORB-SLAM achieves unprecedented performance with respect to other state-of-the-art monocular SLAM approaches. For the benefit of the community, we make the source code public.",
"We propose a novel real-time direct monocular visual odometry for omnidirectional cameras. Our method extends direct sparse odometry by using the unified omnidirectional model as a projection function, which can be applied to fisheye cameras with a field-of-view (FoV) well above 180 @math . This formulation allows for using the full area of the input image even with strong distortion, while most existing visual odometry methods can only use a rectified and cropped part of it. Model parameters within an active keyframe window are jointly optimized, including the intrinsic extrinsic camera parameters, three-dimensional position of points, and affine brightness parameters. Thanks to the wide FoV, image overlap between frames becomes bigger and points are more spatially distributed. Our results demonstrate that our method provides increased accuracy and robustness over state-of-the-art visual odometry algorithms.",
"We propose a real-time, direct monocular SLAM method for omnidirectional or wide field-of-view fisheye cameras. Both tracking (direct image alignment) and mapping (pixel-wise distance filtering) are directly formulated for the unified omnidirectional model, which can model central imaging devices with a field of view above 180 °. This is in contrast to existing direct mono-SLAM approaches like DTAM or LSD-SLAM, which operate on rectified images, in practice limiting the field of view to around 130 ° diagonally. Not only does this allows to observe - and reconstruct - a larger portion of the surrounding environment, but it also makes the system more robust to degenerate (rotation-only) movement. The two main contribution are (1) the formulation of direct image alignment for the unified omnidirectional model, and (2) a fast yet accurate approach to incremental stereo directly on distorted images. We evaluated our framework on real-world sequences taken with a 185 ° fisheye lens, and compare it to a rectified and a piecewise rectified approach.",
"We present a direct visual odometry algorithm for a fisheye-stereo camera. Our algorithm performs simultaneous camera motion estimation and semi-dense reconstruction. The pipeline consists of two threads: a tracking thread and a mapping thread. In the tracking thread, we estimate the camera pose via semi-dense direct image alignment. To have a wider field of view (FoV) which is important for robotic perception, we use fisheye images directly without converting them to conventional pinhole images which come with a limited FoV. To address the epipolar curve problem, plane-sweeping stereo is used for stereo matching and depth initialization. Multiple depth hypotheses are tracked for selected pixels to better capture the uncertainty characteristics of stereo matching. Temporal motion stereo is then used to refine the depth and remove false positive depth hypotheses. Our implementation runs at an average of 20 Hz on a low-end PC. We run experiments in outdoor environments to validate our algorithm, and discuss the experimental results. We experimentally show that we are able to estimate 6D poses with low drift, and at the same time, do semi-dense 3D reconstruction with high accuracy. To the best of our knowledge, there is no other existing semi-dense direct visual odometry algorithm for a fisheye-stereo camera.",
"We propose Stereo Direct Sparse Odometry (Stereo DSO) as a novel method for highly accurate real-time visual odometry estimation of large-scale environments from stereo cameras. It jointly optimizes for all the model parameters within the active window, including the intrinsic/extrinsic camera parameters of all keyframes and the depth values of all selected pixels. In particular, we propose a novel approach to integrate constraints from static stereo into the bundle adjustment pipeline of temporal multi-view stereo. Real-time optimization is realized by sampling pixels uniformly from image regions with sufficient intensity gradient. Fixed-baseline stereo resolves scale drift. It also reduces the sensitivities to large optical flow and to rolling shutter effect which are known shortcomings of direct image alignment methods. Quantitative evaluation demonstrates that the proposed Stereo DSO outperforms existing state-of-the-art visual odometry methods both in terms of tracking accuracy and robustness. Moreover, our method delivers a more precise metric 3D reconstruction than previous dense/semi-dense direct approaches while providing a higher reconstruction density than feature-based methods.",
"We propose a semi-direct monocular visual odometry algorithm that is precise, robust, and faster than current state-of-the-art methods. The semi-direct approach eliminates the need of costly feature extraction and robust matching techniques for motion estimation. Our algorithm operates directly on pixel intensities, which results in subpixel precision at high frame-rates. A probabilistic mapping method that explicitly models outlier measurements is used to estimate 3D points, which results in fewer outliers and more reliable points. Precise and high frame-rate motion estimation brings increased robustness in scenes of little, repetitive, and high-frequency texture. The algorithm is applied to micro-aerial-vehicle state-estimation in GPS-denied environments and runs at 55 frames per second on the onboard embedded computer and at more than 300 frames per second on a consumer laptop. We call our approach SVO (Semi-direct Visual Odometry) and release our implementation as open-source software."
]
} |