| aid | mid | abstract | related_work | ref_abstract |
|---|---|---|---|---|
1506.00842 | 1790519466 | Heterogeneous computing, which combines devices with different architectures, is rising in popularity, and promises increased performance combined with reduced energy consumption. OpenCL has been proposed as a standard for programming such systems, and offers functional portability. It does, however, suffer from poor performance portability: code tuned for one device must be re-tuned to achieve good performance on another device. In this paper, we use machine learning-based auto-tuning to address this problem. Benchmarks are run on a random subset of the entire tuning parameter configuration space, and the results are used to build an artificial neural network based model. The model can then be used to find interesting parts of the parameter space for further search. We evaluate our method with different benchmarks, on several devices, including an Intel i7 3770 CPU, an Nvidia K40 GPU and an AMD Radeon HD 7970 GPU. Our model achieves a mean relative error as low as 6.1%, and is able to find configurations as little as 1.3% worse than the global minimum. | Auto-tuning is a well-established technique, which has been successfully applied in a number of widely used high-performance libraries, such as FFTW @cite_14 @cite_33 for fast Fourier transforms, OSKI @cite_29 for sparse matrices and ATLAS @cite_6 for linear algebra @cite_38. | {
"cite_N": [
"@cite_38",
"@cite_14",
"@cite_33",
"@cite_29",
"@cite_6"
],
"mid": [
"2116477028",
"2102182691",
"2166247098",
"2099625934",
"2119395117"
],
"abstract": [
"Well-written scientific simulations typically get tremendous performance gains by using highly optimized library routines. Some of the most fundamental of these routines perform matrix-matrix multiplications and related routines, known as BLAS (Basic Linear Algebra Subprograms). Optimizing these library routines for efficiency is therefore of tremendous importance for many scientific simulations. In fact, some of them are often hand-optimized in assembly language for a given processor, in order to get the best possible performance. In this paper, we present a new tuning approach, combining a small snippet of assembly code with an auto-tuner. For our preliminary test-case, the symmetric rank-2 update, the resulting routine outperforms the best auto-tuner and vendor supplied code on our target machine, an Intel quad-core processor. It also performs less than 1.2% slower than the best hand-coded library. Our novel approach shows a lot of promise for further performance gains on modern multi-core and many-core processors.",
"FFTW is an implementation of the discrete Fourier transform (DFT) that adapts to the hardware in order to maximize performance. This paper shows that such an approach can yield an implementation that is competitive with hand-optimized libraries, and describes the software structure that makes our current FFTW3 version flexible and adaptive. We further discuss a new algorithm for real-data DFTs of prime size, a new way of implementing DFTs by means of machine-specific single-instruction, multiple-data (SIMD) instructions, and how a special-purpose compiler can derive optimized implementations of the discrete cosine and sine transforms automatically from a DFT algorithm.",
"Fast bit-reversal algorithms have been of strong interest for many decades, especially after Cooley and Tukey introduced their FFT implementation in 1965. Many recent algorithms, including FFTW, try to avoid the bit-reversal altogether by doing in-place algorithms within their FFTs. We therefore motivate our work by showing that for FFTs of up to 65,536 points, a minimally tuned Cooley-Tukey FFT in C using our bit-reversal algorithm performs comparable or better than the default FFTW algorithm. In this paper, we present an extremely fast linear bit-reversal adapted for modern multithreaded architectures. Our bit-reversal algorithm takes advantage of recursive calls combined with the fact that it only generates pairs of indices for which the corresponding elements need to be exchanged, thereby avoiding any explicit tests. In addition we have implemented an adaptive approach which explores the trade-off between compile-time and run-time work load. By generating look-up tables at compile time, our algorithm becomes even faster at run-time. Our results also show that by using more than one thread on tightly coupled architectures, further speed-up can be achieved.",
"The Optimized Sparse Kernel Interface (OSKI) is a collection of low-level primitives that provide automatically tuned computational kernels on sparse matrices, for use by solver libraries and applications. These kernels include sparse matrix-vector multiply and sparse triangular solve, among others. The primary aim of this interface is to hide the complex decision-making process needed to tune the performance of a kernel implementation for a particular user's sparse matrix and machine, while also exposing the steps and potentially non-trivial costs of tuning at run-time. This paper provides an overview of OSKI, which is based on our research on automatically tuned sparse kernels for modern cache-based superscalar machines.",
"The Basic Linear Algebra Subprograms (BLAS) define one of the most heavily used performance-critical APIs in scientific computing today. It has long been understood that the most important of these routines, the dense Level 3 BLAS, may be written efficiently given a highly optimized general matrix multiply routine. In this paper, however, we show that an even larger set of operations can be efficiently maintained using a much simpler matrix multiply kernel. Indeed, this is how our own project, ATLAS (which provides one of the most widely used BLAS implementations in use today), supports a large variety of performance-critical routines. Copyright © 2004 John Wiley & Sons, Ltd."
]
} |
1506.00842 | 1790519466 | Heterogeneous computing, which combines devices with different architectures, is rising in popularity, and promises increased performance combined with reduced energy consumption. OpenCL has been proposed as a standard for programming such systems, and offers functional portability. It does, however, suffer from poor performance portability: code tuned for one device must be re-tuned to achieve good performance on another device. In this paper, we use machine learning-based auto-tuning to address this problem. Benchmarks are run on a random subset of the entire tuning parameter configuration space, and the results are used to build an artificial neural network based model. The model can then be used to find interesting parts of the parameter space for further search. We evaluate our method with different benchmarks, on several devices, including an Intel i7 3770 CPU, an Nvidia K40 GPU and an AMD Radeon HD 7970 GPU. Our model achieves a mean relative error as low as 6.1%, and is able to find configurations as little as 1.3% worse than the global minimum. | There have also been examples of application-specific empirical auto-tuning on GPUs, e.g. for stencil computations @cite_17, matrix multiplication @cite_0 and FFTs @cite_32. Furthermore, analytical performance models for GPUs and heterogeneous systems have been developed @cite_21 @cite_30 @cite_1 @cite_3 and used for auto-tuning @cite_18. | {
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_21",
"@cite_1",
"@cite_32",
"@cite_3",
"@cite_0",
"@cite_17"
],
"mid": [
"",
"2153637321",
"2081146773",
"2148802605",
"2107483876",
"2167334577",
"1863336885",
"2016618963"
],
"abstract": [
"",
"Empirical program optimizers estimate the values of key optimization parameters by generating different program versions and running them on the actual hardware to determine which values give the best performance. In contrast, conventional compilers use models of programs and machines to choose these parameters. It is widely believed that model-driven optimization does not compete with empirical optimization, but few quantitative comparisons have been done to date. To make such a comparison, we replaced the empirical optimization engine in ATLAS (a system for generating a dense numerical linear algebra library called the BLAS) with a model-driven optimization engine that used detailed models to estimate values for optimization parameters, and then measured the relative performance of the two systems on three different hardware platforms. Our experiments show that model-driven optimization can be surprisingly effective, and can generate code whose performance is comparable to that of code generated by empirical optimizers for the BLAS.",
"Predicting how well applications may run on modern systems is becoming increasingly challenging. It is no longer sufficient to look at number of floating point operations and communication costs, but one also needs to model the underlying systems and how their topology, heterogeneity, system loads, etc, may impact performance. This work focuses on developing a practical model for heterogeneous computing by looking at the older BSP model, which attempts to model communication costs on homogeneous systems, and looks at how its library implementations can be extended to include a run-time system that may be useful for heterogeneous systems. Our extensions of BSPlib with MPI and GASnet mechanisms at the communication layer should provide useful tools for evaluating applications with respect to how they may run on heterogeneous systems.",
"Node level heterogeneous architectures have become attractive during the last decade for several reasons: compared to traditional symmetric CPUs, they offer high peak performance and are energy and or cost efficient. With the increase of fine-grained parallelism in high-performance computing, as well as the introduction of parallelism in workstations, there is an acute need for a good overview and understanding of these architectures. We give an overview of the state-of-the-art in heterogeneous computing, focusing on three commonly found architectures: the Cell Broadband Engine Architecture, graphics processing units (GPUs), and field programmable gate arrays (FPGAs). We present a review of hardware, available software tools, and an overview of state-of-the-art techniques and algorithms. Furthermore, we present a qualitative and quantitative comparison of the architectures, and give our view on the future of heterogeneous computing.",
"Existing implementations of FFTs on GPUs are optimized for specific transform sizes like powers of two, and exhibit unstable and peaky performance, i.e., they do not perform as well for other sizes that appear in practice. Our new auto-tuning 3-D FFT on CUDA generates high performance CUDA kernels for FFTs of varying transform sizes, alleviating this problem. Although auto-tuning has been implemented on GPUs for dense kernels such as DGEMM and stencils, this is the first instance that has been applied comprehensively to bandwidth intensive and complex kernels such as 3-D FFTs. Bandwidth intensive optimizations such as selecting the number of threads and inserting padding to avoid bank conflicts on shared memory are systematically applied. Our resulting auto-tuner is fast and results in performance that essentially beats all 3-D FFT implementations on a single processor to date, and moreover exhibits stable performance irrespective of problem sizes or the underlying GPU hardware.",
"GPU architectures are increasingly important in the multi-core era due to their high number of parallel processors. Programming thousands of massively parallel threads is a big challenge for software engineers, but understanding the performance bottlenecks of those parallel programs on GPU architectures to improve application performance is even more difficult. Current approaches rely on programmers to tune their applications by exploiting the design space exhaustively without fully understanding the performance characteristics of their applications. To provide insights into the performance bottlenecks of parallel applications on GPU architectures, we propose a simple analytical model that estimates the execution time of massively parallel programs. The key component of our model is estimating the number of parallel memory requests (we call this the memory warp parallelism) by considering the number of running threads and memory bandwidth. Based on the degree of memory warp parallelism, the model estimates the cost of memory requests, thereby estimating the overall execution time of a program. Comparisons between the outcome of the model and the actual execution time in several GPUs show that the geometric mean of absolute error of our model on micro-benchmarks is 5.4% and on GPU computing applications is 13.3%. All the applications are written in the CUDA programming language.",
"The development of high performance dense linear algebra (DLA) critically depends on highly optimized BLAS, and especially on the matrix multiplication routine (GEMM). This is especially true for Graphics Processing Units (GPUs), as evidenced by recently published results on DLA for GPUs that rely on highly optimized GEMM. However, the current best GEMM performance, e.g. of up to 375 GFlop/s in single precision and of up to 75 GFlop/s in double precision arithmetic on NVIDIA's GTX 280, is difficult to achieve. The development involves extensive GPU knowledge and even backward engineering to understand some undocumented insides about the architecture that have been of key importance in the development. In this paper, we describe some GPU GEMM auto-tuning optimization techniques that allow us to keep up with changing hardware by rapidly reusing, rather than reinventing, the existing ideas. Auto-tuning, as we show in this paper, is a very practical solution where in addition to getting an easy portability, we can often get substantial speedups even on current GPUs (e.g. up to 27% in certain cases for both single and double precision GEMMs on the GTX 280).",
"This paper develops and evaluates search and optimization techniques for auto-tuning 3D stencil (nearest-neighbor) computations on GPUs. Observations indicate that parameter tuning is necessary for heterogeneous GPUs to achieve optimal performance with respect to a search space. Our proposed framework takes a most concise specification of stencil behavior from the user as a single formula, auto-generates tunable code from it, systematically searches for the best configuration and generates the code with optimal parameter configurations for different GPUs. This auto-tuning approach guarantees adaptive performance for different generations of GPUs while greatly enhancing programmer productivity. Experimental results show that the delivered floating point performance is very close to previous handcrafted work and outperforms other auto-tuned stencil codes by a large margin."
]
} |
1506.00842 | 1790519466 | Heterogeneous computing, which combines devices with different architectures, is rising in popularity, and promises increased performance combined with reduced energy consumption. OpenCL has been proposed as a standard for programming such systems, and offers functional portability. It does, however, suffer from poor performance portability: code tuned for one device must be re-tuned to achieve good performance on another device. In this paper, we use machine learning-based auto-tuning to address this problem. Benchmarks are run on a random subset of the entire tuning parameter configuration space, and the results are used to build an artificial neural network based model. The model can then be used to find interesting parts of the parameter space for further search. We evaluate our method with different benchmarks, on several devices, including an Intel i7 3770 CPU, an Nvidia K40 GPU and an AMD Radeon HD 7970 GPU. Our model achieves a mean relative error as low as 6.1%, and is able to find configurations as little as 1.3% worse than the global minimum. | Much work has been done on machine learning-based auto-tuning, e.g. to determine loop unroll factors @cite_4, whether to perform SIMD vectorization @cite_5 and general compiler optimizations @cite_13. @cite_19 developed a method to determine a good ordering of the compiler optimization phases, on a per-function basis. Their method uses a neural network to determine the best optimization phase to apply next, given characteristics of the current, partially optimized code. They evaluated their method in a dynamic compilation setting, using Java. @cite_35 used a method similar to ours, where they trained an artificial neural network performance model. However, they focus on large-scale parallel platforms such as the BlueGene/L, and do not use their model as part of an auto-tuner. 
@cite_7 also adopt an approach similar to ours, building a machine learning-based performance model for MapReduce with Hadoop and using it in an auto-tuner. In contrast to these works, our method uses the values of tuning parameters to directly predict execution time, as part of an auto-tuner, using OpenCL in a heterogeneous setting. | {
"cite_N": [
"@cite_35",
"@cite_4",
"@cite_7",
"@cite_19",
"@cite_5",
"@cite_13"
],
"mid": [
"2131230754",
"1967846636",
"2038361169",
"2147623728",
"2052156072",
"2156560068"
],
"abstract": [
"National Science Foundation Grant Number CCF-0444413; United States Department of Energy Grant Number W-7405-Eng-48",
"Compilers base many critical decisions on abstracted architectural models. While recent research has shown that modeling is effective for some compiler problems, building accurate models requires a great deal of human time and effort. This paper describes how machine learning techniques can be leveraged to help compiler writers model complex systems. Because learning techniques can effectively make sense of high dimensional spaces, they can be a valuable tool for clarifying and discerning complex decision boundaries. In this work we focus on loop unrolling, a well-known optimization for exposing instruction level parallelism. Using the Open Research Compiler as a testbed, we demonstrate how one can use supervised learning techniques to determine the appropriateness of loop unrolling. We use more than 2,500 loops - drawn from 72 benchmarks - to train two different learning algorithms to predict unroll factors (i.e., the amount by which to unroll a loop) for any novel loop. The technique correctly predicts the unroll factor for 65% of the loops in our dataset, which leads to a 5% overall improvement for the SPEC 2000 benchmark suite (9% for the SPEC 2000 floating point benchmarks).",
"MapReduce, which is the de facto programming model for large-scale distributed data processing, and its most popular implementation Hadoop have enjoyed widespread adoption in industry during the past few years. Unfortunately, from a performance point of view getting the most out of Hadoop is still a big challenge due to the large number of configuration parameters. Currently these parameters are tuned manually by trial and error, which is ineffective due to the large parameter space and the complex interactions among the parameters. Even worse, the parameters have to be re-tuned for different MapReduce applications and clusters. To make the parameter tuning process more effective, in this paper we explore machine learning-based performance models that we use to auto-tune the configuration parameters. To this end, we first evaluate several machine learning models with diverse MapReduce applications and cluster configurations, and we show that support vector regression model (SVR) has good accuracy and is also computationally efficient. We further assess our auto-tuning approach, which uses the SVR performance model, against the Starfish auto-tuner, which uses a cost-based performance model. Our findings reveal that our auto-tuning approach can provide comparable or in some cases better performance improvements than Starfish with a smaller number of parameters. Finally, we propose and discuss a complete and practical end-to-end auto-tuning flow that combines our machine learning-based performance models with smart search algorithms for the effective training of the models and the effective exploration of the parameter space.",
"Today's compilers have a plethora of optimizations to choose from, and the correct choice of optimizations can have a significant impact on the performance of the code being optimized. Furthermore, choosing the correct order in which to apply those optimizations has been a long standing problem in compilation research. Each of these optimizations interacts with the code and in turn with all other optimizations in complicated ways. Traditional compilers typically apply the same set of optimizations in a fixed order to all functions in a program, without regard to the code being optimized. Understanding the interactions of optimizations is very important in determining a good solution to the phase-ordering problem. This paper develops a new approach that automatically selects good optimization orderings on a per method basis within a dynamic compiler. Our approach formulates the phase-ordering problem as a Markov process and uses a characterization of the current state of the code being optimized to create a better solution to the phase ordering problem. Our technique uses neuro-evolution to construct an artificial neural network that is capable of predicting beneficial optimization orderings for a piece of code that is being optimized. We implemented our technique in Jikes RVM and achieved significant improvements on a set of standard Java benchmarks over a well-engineered fixed order.",
"",
"Tuning compiler optimizations for rapidly evolving hardware makes porting and extending an optimizing compiler for each new platform extremely challenging. Iterative optimization is a popular approach to adapting programs to a new architecture automatically using feedback-directed compilation. However, the large number of evaluations required for each program has prevented iterative compilation from widespread take-up in production compilers. Machine learning has been proposed to tune optimizations across programs systematically but is currently limited to a few transformations, long training phases and critically lacks publicly released, stable tools. Our approach is to develop a modular, extensible, self-tuning optimization infrastructure to automatically learn the best optimizations across multiple programs and architectures based on the correlation between program features, run-time behavior and optimizations. In this paper we describe Milepost GCC, the first publicly-available open-source machine learning-based compiler. It consists of an Interactive Compilation Interface (ICI) and plugins to extract program features and exchange optimization data with the cTuning.org open public repository. It automatically adapts the internal optimization heuristic at function-level granularity to improve execution time, code size and compilation time of a new program on a given architecture. Part of the MILEPOST technology together with low-level ICI-inspired plugin framework is now included in the mainline GCC. We developed machine learning plugins based on probabilistic and transductive approaches to predict good combinations of optimizations. Our preliminary experimental results show that it is possible to automatically reduce the execution time of individual MiBench programs, some by more than a factor of 2, while also improving compilation time and code size. 
On average we are able to reduce the execution time of the MiBench benchmark suite by 11% for the ARC reconfigurable processor. We also present a realistic multi-objective optimization scenario for Berkeley DB library using Milepost GCC and improve execution time by approximately 17%, while reducing compilation time and code size by 12% and 7% respectively on Intel Xeon processor."
]
} |
1506.00842 | 1790519466 | Heterogeneous computing, which combines devices with different architectures, is rising in popularity, and promises increased performance combined with reduced energy consumption. OpenCL has been proposed as a standard for programming such systems, and offers functional portability. It does, however, suffer from poor performance portability: code tuned for one device must be re-tuned to achieve good performance on another device. In this paper, we use machine learning-based auto-tuning to address this problem. Benchmarks are run on a random subset of the entire tuning parameter configuration space, and the results are used to build an artificial neural network based model. The model can then be used to find interesting parts of the parameter space for further search. We evaluate our method with different benchmarks, on several devices, including an Intel i7 3770 CPU, an Nvidia K40 GPU and an AMD Radeon HD 7970 GPU. Our model achieves a mean relative error as low as 6.1%, and is able to find configurations as little as 1.3% worse than the global minimum. | The two works most closely related to ours are @cite_28 @cite_39. In @cite_28, a model based on boosted regression trees was used to build an auto-tuner, evaluated with a single GPU benchmark, filterbank correlation. The Starchart @cite_39 system builds a regression tree model which can be used to partition the design space, discover its structure and find optimal parameter values within the different regions. It is then used to develop an auto-tuner for several GPU benchmarks. In contrast, our work uses a different machine learning model, has more parameters for each kernel, and uses OpenCL to tune applications for both CPUs and GPUs. | {
"cite_N": [
"@cite_28",
"@cite_39"
],
"mid": [
"2033088400",
"2070544163"
],
"abstract": [
"The rapidly evolving landscape of multicore architectures makes the construction of efficient libraries a daunting task. A family of methods known collectively as “auto-tuning” has emerged to address this challenge. Two major approaches to auto-tuning are empirical and model-based: empirical autotuning is a generic but slow approach that works by measuring runtimes of candidate implementations, model-based auto-tuning predicts those runtimes using simplified abstractions designed by hand. We show that machine learning methods for non-linear regression can be used to estimate timing models from data, capturing the best of both approaches. A statistically-derived model offers the speed of a model-based approach, with the generality and simplicity of empirical auto-tuning. We validate our approach using the filterbank correlation kernel described in Pinto and Cox [2012], where we find that 0.1 seconds of hill climbing on the regression model (“predictive auto-tuning”) can achieve almost the same speed-up as is brought by minutes of empirical auto-tuning. Our approach is not specific to filterbank correlation, nor even to GPU kernel auto-tuning, and can be applied to almost any templated-code optimization problem, spanning a wide variety of problem types, kernel types, and platforms.",
"Graphics processing units (GPUs) are in increasingly wide use, but significant hurdles lie in selecting the appropriate algorithms, runtime parameter settings, and hardware configurations to achieve power and performance goals with them. Exploring hardware and software choices requires time-consuming simulations or extensive real-system measurements. While some auto-tuning support has been proposed, it is often narrow in scope and heuristic in operation. This paper proposes and evaluates a statistical analysis technique, Starchart, that partitions the GPU hardware/software tuning space by automatically discerning important inflection points in design parameter values. Unlike prior methods, Starchart can identify the best parameter choices within different regions of the space. Our tool is efficient - evaluating at most 0.3% of the tuning space, and often much less - and is robust enough to analyze highly variable real-system measurements, not just simulation. In one case study, we use it to automatically find platform-specific parameter settings that are 6.3× faster (for AMD) and 1.3× faster (for NVIDIA) than a single general setting. We also show how power-optimized parameter settings can save 47W (26% of total GPU power) with little performance loss. Overall, Starchart can serve as a foundation for a range of GPU compiler optimizations, auto-tuners, and programmer tools. Furthermore, because Starchart does not rely on specific GPU features, we expect it to be useful for broader CPU/GPU studies as well."
]
} |
1506.00842 | 1790519466 | Heterogeneous computing, which combines devices with different architectures, is rising in popularity, and promises increased performance combined with reduced energy consumption. OpenCL has been proposed as a standard for programming such systems, and offers functional portability. It does, however, suffer from poor performance portability: code tuned for one device must be re-tuned to achieve good performance on another device. In this paper, we use machine learning-based auto-tuning to address this problem. Benchmarks are run on a random subset of the entire tuning parameter configuration space, and the results are used to build an artificial neural network based model. The model can then be used to find interesting parts of the parameter space for further search. We evaluate our method with different benchmarks, on several devices, including an Intel i7 3770 CPU, an Nvidia K40 GPU and an AMD Radeon HD 7970 GPU. Our model achieves a mean relative error as low as 6.1%, and is able to find configurations as little as 1.3% worse than the global minimum. | Work has also been done on OpenCL performance portability. @cite_40 identify a number of parameters, or tuning knobs, which affect the performance of OpenCL codes on different platforms, and show how setting the appropriate values can improve performance. @cite_36 use iterative optimization to adapt OpenCL kernels to different hardware by picking the optimal tiling sizes. @cite_23 take a different approach, and attempt to determine application settings which will achieve good performance on different devices, rather than optimal performance on any single device. | {
"cite_N": [
"@cite_36",
"@cite_40",
"@cite_23"
],
"mid": [
"2051277525",
"110609005",
"2081245617"
],
"abstract": [
"Nowadays, computers include several computational devices with parallel capacities, such as multicore processors and Graphic Processing Units (GPUs). OpenCL enables the programming of all these kinds of devices. An OpenCL program consists of a host code which discovers the computational devices available in the host system and it queues up commands to the devices, and the kernel code which defines the core of the parallel computation executed in the devices. This work addresses two of the most important problems faced by an OpenCL programmer: (1) hosts codes are quite verbose but they can be automatically generated if some parameters are known; (2) OpenCL codes that are hand-optimized for a given device do not get necessarily a good performance in a different one. This paper presents a source-to-source iterative optimization tool, called OCLoptimizer, that aims to generate host codes automatically and to optimize OpenCL kernels taking as inputs an annotated version of the original kernel and a configuration file. Iterative optimization is a well-known technique which allows to optimize a given code by exploring different configuration parameters in a systematic manner. For example, we can apply tiling on one loop and the iterative optimizer would select the optimal tile size by exploring the space of possible tile sizes. The experimental results show that the tool can automatically optimize a set of OpenCL kernels for multicore processors.",
"We study the performance portability of OpenCL across diverse architectures including NVIDIA GPU, Intel Ivy Bridge CPU, and AMD Fusion APU. We present detailed performance analysis at assembly level on three exemplar OpenCL benchmarks: SGEMM, SpMV, and FFT. We also identify a number of tuning knobs that are critical to performance portability, including threads-data mapping, data layout, tiling size, data caching, and operation-specific factors. We further demonstrate that proper tuning could improve the OpenCL portable performance from the current 15% to a potential 67% of the state-of-the-art performance on the Ivy Bridge CPU. Finally, we evaluate the current OpenCL programming model, and propose a list of extensions that improve performance portability.",
"This paper reports on the development of an MPI OpenCL implementation of LU, an application-level benchmark from the NAS Parallel Benchmark Suite. An account of the design decisions addressed during the development of this code is presented, demonstrating the importance of memory arrangement and work-item work-group distribution strategies when applications are deployed on different device types. The resulting platform-agnostic, single source application is benchmarked on a number of different architectures, and is shown to be 1.3-1.5x slower than native FORTRAN 77 or CUDA implementations on a single node and 1.3-3.1x slower on multiple nodes. We also explore the potential performance gains of OpenCL's device fissioning capability, demonstrating up to a 3x speed-up over our original OpenCL implementation."
]
} |
1506.00799 | 1947624690 | A significant performance reduction is often observed in speech recognition when the rate of speech (ROS) is too low or too high. Most present approaches to addressing ROS variation focus on the changes in the dynamic properties of speech signals caused by ROS, and accordingly modify the dynamic model, e.g., the transition probabilities of the hidden Markov model (HMM). However, an abnormal ROS changes not only the dynamic but also the static properties of speech signals, and thus cannot be compensated for purely by modifying the dynamic model. This paper proposes an ROS learning approach based on deep neural networks (DNN), which involves an ROS feature as the input of the DNN model so that the spectrum distortion caused by ROS can be learned and compensated for. The experimental results show that this approach delivers better performance for too-slow and too-fast utterances, demonstrating our conjecture that ROS impacts both the dynamic and the static properties of speech. In addition, the proposed approach can be combined with the conventional HMM transition adaptation method, offering additional performance gains. | This paper is related to previous work on ROS compensation, most of which has been mentioned in the introduction. It should be highlighted that the frame rate normalization approach proposed in @cite_11 is similar to our method in the sense that both change the feature extraction according to the ROS. The difference is that our method introduces the ROS feature to regularize the acoustic model learning, while the work in @cite_11 changes the frame step size and so is still an implicit way to adjust the dynamic model. | {
"cite_N": [
"@cite_11"
],
"mid": [
"2163089518"
],
"abstract": [
"This paper describes a speaking rate adaptation technique for automatic speech recognition. The technique aims to reduce speaking rate variations by applying temporal warping in front-end processing so that the average phone duration in terms of feature frames remains constant. Speaking rate estimates are given by timing information from unadapted decoding outputs. We implement the proposed continuous frame rate normalization (CFRN) technique on a state-of-the-art speech recognition architecture, and evaluate it on the most recent GALE broadcast transcription tasks. Results show that CFRN gives consistent improvement on all four separate systems and two different languages. In fact, the reported numbers represent the best decoding error rates of the corresponding test sets. It is further shown that the technique is effective without retraining, and adds little overhead to the multi-pass recognition pipeline found in state-of-the-art transcription systems."
]
} |
1506.00799 | 1947624690 | A significant performance reduction is often observed in speech recognition when the rate of speech (ROS) is too low or too high. Most present approaches to addressing ROS variation focus on the changes in the dynamic properties of speech signals caused by ROS, and accordingly modify the dynamic model, e.g., the transition probabilities of the hidden Markov model (HMM). However, an abnormal ROS changes not only the dynamic but also the static properties of speech signals, and thus cannot be compensated for purely by modifying the dynamic model. This paper proposes an ROS learning approach based on deep neural networks (DNN), which involves an ROS feature as the input of the DNN model so that the spectrum distortion caused by ROS can be learned and compensated for. The experimental results show that this approach delivers better performance for too-slow and too-fast utterances, demonstrating our conjecture that ROS impacts both the dynamic and the static properties of speech. In addition, the proposed approach can be combined with the conventional HMM transition adaptation method, offering additional performance gains. | Finally, this work is related to DNN adaptation. For example, in @cite_12 @cite_2 , a speaker indicator in the form of an i-vector is involved in the model training and provides better performance. This is quite similar to our approach; the only difference is that the i-vector is replaced by ROS in our work. | {
"cite_N": [
"@cite_12",
"@cite_2"
],
"mid": [
"2015633636",
"2404901536"
],
"abstract": [
"State of the art speaker recognition systems are based on the i-vector representation of speech segments. In this paper we show how this representation can be used to perform blind speaker adaptation of a hybrid DNN-HMM speech recognition system, and we report excellent results on a French language audio transcription task. The implementation is very simple. An audio file is first diarized and each speaker cluster is represented by an i-vector. Acoustic feature vectors are augmented by the corresponding i-vectors before being presented to the DNN. (The same i-vector is used for all acoustic feature vectors aligned with a given speaker.) This supplementary information improves the DNN's ability to discriminate between phonetic events in a speaker independent way without having to make any modification to the DNN training algorithms. We report results on the ETAPE 2011 transcription task, and show that i-vector based speaker adaptation is effective irrespective of whether cross-entropy or sequence training is used. For cross-entropy training, we obtained a word error rate (WER) reduction from 22.16% to 20.67%, whereas for sequence training the WER reduces from 19.93% to 18.40%.",
"Deep neural networks (DNN) are currently very successful for acoustic modeling in ASR systems. One of the main challenges with DNNs is unsupervised speaker adaptation from an initial speaker clustering, because DNNs have a very large number of parameters. Recently, a method has been proposed to adapt DNNs to speakers by combining speaker-specific information (in the form of i-vectors computed at the speaker-cluster level) with fMLLR-transformed acoustic features. In this paper we try to gain insight into what kind of adaptation is performed on DNNs when stacking i-vectors with acoustic features and what information exactly is carried by i-vectors. We observe on the REPERE corpus that DNNs trained on i-vector features concatenated with fMLLR-transformed acoustic features lead to a gain of 0.7 points. The experiments show that using i-vector stacking in DNN acoustic models is not only performing speaker adaptation, but also adaptation to acoustic conditions."
]
} |
1506.00961 | 640057443 | Given a compact set of real numbers, a random \(C^{m+\alpha}\)-diffeomorphism is constructed such that the image of any measure concentrated on the set and satisfying a certain condition involving a real number \(s\), almost surely has Fourier dimension greater than or equal to \(s/(m+\alpha)\). This is used to show that every Borel subset of the real numbers of Hausdorff dimension \(s\) is \(C^{m+\alpha}\)-equivalent to a set of Fourier dimension greater than or equal to \(s/(m+\alpha)\). In particular every Borel set is diffeomorphic to a Salem set, and the Fourier dimension is not invariant under \(C^{m}\)-diffeomorphisms for any \(m\). | In @cite_4 , Bluhm gave a method for randomly perturbing a class of self-similar measures on @math , such that the perturbed measure almost surely has Fourier dimension equal to the similarity dimension of the original measure. For @math , the uniform measures on Cantor sets with constant contraction ratio are among the measures considered by Bluhm, and if the parameters in the construction are chosen suitably then the perturbation is a bi-Lipschitz map. Thus it follows from Bluhm's result that such Cantor sets are bi-Lipschitz equivalent to Salem sets. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2021822733"
],
"abstract": [
"In this paper we investigate the pointwise Fourier decay of some self-similar random measures. As an application we construct statistically self-similar Salem sets. For example, our result shows that a “slight” random perturbation of the classical Cantor set becomes a “nice” set in the sense that its Fourier dimension equals its Hausdorff dimension."
]
} |
1506.00961 | 640057443 | Given a compact set of real numbers, a random \(C^{m+\alpha}\)-diffeomorphism is constructed such that the image of any measure concentrated on the set and satisfying a certain condition involving a real number \(s\), almost surely has Fourier dimension greater than or equal to \(s/(m+\alpha)\). This is used to show that every Borel subset of the real numbers of Hausdorff dimension \(s\) is \(C^{m+\alpha}\)-equivalent to a set of Fourier dimension greater than or equal to \(s/(m+\alpha)\). In particular every Borel set is diffeomorphic to a Salem set, and the Fourier dimension is not invariant under \(C^{m}\)-diffeomorphisms for any \(m\). | In @cite_2 , subsets @math and @math of the real line were constructed such that @math and they can be taken to be included in @math . If @math then @math and @math since @math and @math are separated (see [Theorem 2] EPS15 ). Thus @math changes the Fourier dimension of at least one of @math , @math and @math , showing that the Fourier dimension is not in general invariant under @math -functions. | {
"cite_N": [
"@cite_2"
],
"mid": [
"2243579221"
],
"abstract": [
"The Fourier dimension is not, in general, stable under finite unions of sets. Moreover, the stability of the Fourier dimension on particular pairs of sets is independent from the stability of the compact Fourier dimension."
]
} |
1506.00395 | 2294219430 | We describe a hierarchical structure-from-motion pipeline. No information is needed besides the images themselves. The pipeline has proved successful in real-world tasks. This paper addresses the structure-and-motion problem, which requires finding camera motion and 3D structure from point matches. A new pipeline, dubbed Samantha, is presented that departs from the prevailing sequential paradigm and instead embraces a hierarchical approach. This method has several advantages, such as a provably lower computational complexity, which is necessary to achieve true scalability, and better error containment, leading to more stability and less drift. Moreover, a practical autocalibration procedure makes it possible to process images without ancillary information. Experiments with real data assess the accuracy and the computational efficiency of the method. | One class of proposed solutions is the so-called partitioning methods @cite_36 . They break the problem into smaller and better-conditioned subproblems which can be optimized effectively. Within this approach, two main strategies can be distinguished. | {
"cite_N": [
"@cite_36"
],
"mid": [
"1543549646"
],
"abstract": [
"We describe progress in completely automatically recovering 3D scene structure together with 3D camera positions from a sequence of images acquired by an unknown camera undergoing unknown movement."
]
} |
1506.00395 | 2294219430 | We describe a hierarchical structure-from-motion pipeline. No information is needed besides the images themselves. The pipeline has proved successful in real-world tasks. This paper addresses the structure-and-motion problem, which requires finding camera motion and 3D structure from point matches. A new pipeline, dubbed Samantha, is presented that departs from the prevailing sequential paradigm and instead embraces a hierarchical approach. This method has several advantages, such as a provably lower computational complexity, which is necessary to achieve true scalability, and better error containment, leading to more stability and less drift. Moreover, a practical autocalibration procedure makes it possible to process images without ancillary information. Experiments with real data assess the accuracy and the computational efficiency of the method. | The first strategy is to tackle the bundle adjustment algorithm directly, exploiting its properties and regularities. The idea is to split the optimization problem into smaller, more tractable components. The subproblems can be selected analytically, as in @cite_9 , where spectral partitioning has been applied to structure from motion, or they can emerge from the underlying 3D structure of the problem, as described in @cite_6 . The computational gain of such methods is obtained by limiting the combinatorial explosion of the algorithm's complexity as the number of images and points increases. | {
"cite_N": [
"@cite_9",
"@cite_6"
],
"mid": [
"1974260711",
"2143116602"
],
"abstract": [
"We propose a spectral partitioning approach for large-scale optimization problems, specifically structure from motion. In structure from motion, partitioning methods reduce the problem into smaller and better conditioned subproblems which can be efficiently optimized. Our partitioning method uses only the Hessian of the reprojection error and its eigenvector. We show that partitioned systems that preserve the eigenvectors corresponding to small eigenvalues result in lower residual error when optimized. We create partitions by clustering the entries of the eigenvectors of the Hessian corresponding to small eigenvalues. This is a more general technique than relying on domain knowledge and heuristics such as bottom-up structure from motion approaches. Simultaneously, it takes advantage of more information than generic matrix partitioning algorithms.",
"Large-scale 3D reconstruction has recently received much attention from the computer vision community. Bundle adjustment is a key component of 3D reconstruction problems. However, traditional bundle adjustment algorithms require a considerable amount of memory and computational resources. In this paper, we present an extremely efficient, inherently out-of-core bundle adjustment algorithm. We decouple the original problem into several submaps that have their own local coordinate systems and can be optimized in parallel. A key contribution to our algorithm is making as much progress towards optimizing the global non-linear cost function as possible using the fragments of the reconstruction that are currently in core memory. This allows us to converge with very few global sweeps (often only two) through the entire reconstruction. We present experimental results on large-scale 3D reconstruction datasets, both synthetic and real."
]
} |
1506.00395 | 2294219430 | We describe a hierarchical structure-from-motion pipeline. No information is needed besides the images themselves. The pipeline has proved successful in real-world tasks. This paper addresses the structure-and-motion problem, which requires finding camera motion and 3D structure from point matches. A new pipeline, dubbed Samantha, is presented that departs from the prevailing sequential paradigm and instead embraces a hierarchical approach. This method has several advantages, such as a provably lower computational complexity, which is necessary to achieve true scalability, and better error containment, leading to more stability and less drift. Moreover, a practical autocalibration procedure makes it possible to process images without ancillary information. Experiments with real data assess the accuracy and the computational efficiency of the method. | Among the solutions aimed at reducing the impact of the bundle adjustment phase, hierarchical approaches include @cite_74 @cite_27 @cite_45 and this paper. The first can be considered the paper where the idea was originally set forth: a spanning tree is built to establish the order in which the images must be processed. After that, however, the images are processed in a standard incremental way. The approach described in @cite_27 is based on recursive partitioning of the problem into fully-constrained sub-problems, exploiting the bipartite structure of the visibility graph. That partitioning operates on the problem variables, whereas our approach works on the input images. | {
"cite_N": [
"@cite_27",
"@cite_45",
"@cite_74"
],
"mid": [
"",
"1980635903",
"1541642243"
],
"abstract": [
"",
"We present a completely automated Structure and Motion pipeline capable of working with uncalibrated images with varying internal parameters and no ancillary information. The system is based on a novel hierarchical scheme which reduces the total complexity by one order of magnitude. We assess the quality of our approach analytically by comparing the recovered point clouds with laser scans, which serve as ground truth data.",
"There has been considerable success in automated reconstruction for image sequences where small baseline algorithms can be used to establish matches across a number of images. In contrast, in the case of widely separated views, methods have generally been restricted to two or three views. In this paper we investigate the problem of establishing relative viewpoints given a large number of images where no ordering information is provided. A typical application would be where images are obtained from different sources or at different times: both the viewpoint (position, orientation, scale) and lighting conditions may vary significantly over the data set. Such a problem is not fundamentally amenable to exhaustive pairwise and triplet wide baseline matching because this would be prohibitively expensive as the number of views increases. Instead, we investigate how a combination of image invariants, covariants, and multiple view relations can be used in concord to enable efficient multiple view matching. The result is a matching algorithm which is linear in the number of views. The methods are illustrated on several real image data sets. The output enables an image-based technique for navigating in a 3D scene, moving from one image to whichever image is the next most appropriate."
]
} |
1506.00395 | 2294219430 | We describe a hierarchical structure-from-motion pipeline. No information is needed besides the images themselves. The pipeline has proved successful in real-world tasks. This paper addresses the structure-and-motion problem, which requires finding camera motion and 3D structure from point matches. A new pipeline, dubbed Samantha, is presented that departs from the prevailing sequential paradigm and instead embraces a hierarchical approach. This method has several advantages, such as a provably lower computational complexity, which is necessary to achieve true scalability, and better error containment, leading to more stability and less drift. Moreover, a practical autocalibration procedure makes it possible to process images without ancillary information. Experiments with real data assess the accuracy and the computational efficiency of the method. | Orthogonally to the aforementioned approaches, one solution to the computational complexity issue is to throw additional computational power at the problem @cite_16 . Within such a framework, the former algorithmic challenges are replaced by load balancing and subdivision of tasks. This direction of research strongly suggests that current monolithic pipelines should be modified to accommodate ways to parallelize and optimally split the workflow of tasks. In @cite_46 , image selection (via clustering) is combined with a highly parallel implementation that exploits graphics processors and multi-core architectures. | {
"cite_N": [
"@cite_46",
"@cite_16"
],
"mid": [
"2099443716",
"2163446794"
],
"abstract": [
"This paper introduces an approach for dense 3D reconstruction from unregistered Internet-scale photo collections with about 3 million images within the span of a day on a single PC (\"cloudless\"). Our method advances image clustering, stereo, stereo fusion and structure from motion to achieve high computational performance. We leverage geometric and appearance constraints to obtain a highly parallel implementation on modern graphics processors and multi-core architectures. This leads to two orders of magnitude higher performance on an order of magnitude larger dataset than competing state-of-the-art approaches.",
"We present a system that can reconstruct 3D geometry from large, unorganized collections of photographs such as those found by searching for a given city (e.g., Rome) on Internet photo-sharing sites. Our system is built on a set of new, distributed computer vision algorithms for image matching and 3D reconstruction, designed to maximize parallelism at each stage of the pipeline and to scale gracefully with both the size of the problem and the amount of available computation. Our experimental results demonstrate that it is now possible to reconstruct city-scale image collections with more than a hundred thousand images in less than a day."
]
} |
1506.00395 | 2294219430 | We describe a hierarchical structure-from-motion pipeline. No information is needed besides the images themselves. The pipeline has proved successful in real-world tasks. This paper addresses the structure-and-motion problem, which requires finding camera motion and 3D structure from point matches. A new pipeline, dubbed Samantha, is presented that departs from the prevailing sequential paradigm and instead embraces a hierarchical approach. This method has several advantages, such as a provably lower computational complexity, which is necessary to achieve true scalability, and better error containment, leading to more stability and less drift. Moreover, a practical autocalibration procedure makes it possible to process images without ancillary information. Experiments with real data assess the accuracy and the computational efficiency of the method. | Another relevant issue is the level of generality, i.e., the number of assumptions made about the input images or, equivalently, the amount of extra information required in addition to pixel values. Existing pipelines either assume known internal parameters @cite_17 @cite_15 @cite_44 @cite_49 @cite_13 @cite_69 @cite_55 , or constant internal parameters @cite_76 , or rely on EXIF data plus external information (camera CCD dimensions) @cite_41 @cite_47 . Methods working in large-scale environments usually rely on a great deal of additional information, such as camera calibration and GPS/INS navigation systems @cite_58 @cite_0 or geotags @cite_68 . | {
"cite_N": [
"@cite_69",
"@cite_47",
"@cite_41",
"@cite_55",
"@cite_58",
"@cite_44",
"@cite_0",
"@cite_68",
"@cite_49",
"@cite_15",
"@cite_76",
"@cite_13",
"@cite_17"
],
"mid": [
"2071616405",
"2156598602",
"1503420277",
"2084613528",
"2151992422",
"2171244244",
"2059496137",
"2013472030",
"",
"2145645587",
"2127016242",
"1823013803",
"2106364358"
],
"abstract": [
"Multiview structure recovery from a collection of images requires the recovery of the positions and orientations of the cameras relative to a global coordinate system. Our approach recovers camera motion as a sequence of two global optimizations. First, pairwise Essential Matrices are used to recover the global rotations by applying robust optimization using either spectral or semidefinite programming relaxations. Then, we directly employ feature correspondences across images to recover the global translation vectors using a linear algorithm based on a novel decomposition of the Essential Matrix. Our method is efficient and, as demonstrated in our experiments, achieves highly accurate results on collections of real images for which ground truth measurements are available.",
"We present a system for interactively browsing and exploring large unstructured collections of photographs of a scene using a novel 3D interface. Our system consists of an image-based modeling front end that automatically computes the viewpoint of each photograph as well as a sparse 3D model of the scene and image to model correspondences. Our photo explorer uses image-based rendering techniques to smoothly transition between photographs, while also enabling full 3D navigation and exploration of the set of images and world geometry, along with auxiliary information such as overhead maps. Our system also makes it easy to construct photo tours of scenic or historic locations, and to annotate image details, which are automatically transferred to other relevant images. We demonstrate our system on several large personal photo collections as well as images gathered from Internet photo sharing sites.",
"We present an automatic pipeline for recovering the geometry of a 3D scene from a set of unordered, uncalibrated images. The contributions in the paper are the presentation of the system as a whole, from images to geometry, the estimation of the local scale for various scene components in the orientation-topology module, the procedure for orienting the cloud components, and the method for dealing with points of contact. The methods are designed to process complex scenes and non-uniformly sampled, noisy data sets.",
"Multi-view structure from motion (SfM) estimates the position and orientation of pictures in a common 3D coordinate frame. When views are treated incrementally, this external calibration can be subject to drift, contrary to global methods that distribute residual errors evenly. We propose a new global calibration approach based on the fusion of relative motions between image pairs. We improve an existing method for robustly computing global rotations. We present an efficient a contrario trifocal tensor estimation method, from which stable and precise translation directions can be extracted. We also define an efficient translation registration method that recovers accurate camera positions. These components are combined into an original SfM pipeline. Our experiments show that, on most datasets, it outperforms in accuracy other existing incremental and global pipelines. It also achieves strikingly good running times: it is about 20 times faster than the other global method we could compare to, and as fast as the best incremental method. More importantly, it features better scalability properties.",
"Supplying realistically textured 3D city models at ground level promises to be useful for pre-visualizing upcoming traffic situations in car navigation systems. Because this pre-visualization can be rendered from the expected future viewpoints of the driver, the required maneuver will be more easily understandable. 3D city models can be reconstructed from the imagery recorded by surveying vehicles. The vastness of image material gathered by these vehicles, however, puts extreme demands on vision algorithms to ensure their practical usability. Algorithms need to be as fast as possible and should result in compact, memory efficient 3D city models for future ease of distribution and visualization. For the considered application, these are not contradictory demands. Simplified geometry assumptions can speed up vision algorithms while automatically guaranteeing compact geometry models. In this paper, we present a novel city modeling framework which builds upon this philosophy to create 3D content at high speed. Objects in the environment, such as cars and pedestrians, may however disturb the reconstruction, as they violate the simplified geometry assumptions, leading to visually unpleasant artifacts and degrading the visual realism of the resulting 3D city model. Unfortunately, such objects are prevalent in urban scenes. We therefore extend the reconstruction framework by integrating it with an object recognition module that automatically detects cars in the input video streams and localizes them in 3D. The two components of our system are tightly integrated and benefit from each other's continuous input. 3D reconstruction delivers geometric scene context, which greatly helps improve detection precision. The detected car locations, on the other hand, are used to instantiate virtual placeholder models which augment the visual realism of the reconstructed city model.",
"It is known that the problem of multiview reconstruction can be solved in two steps: first estimate camera rotations and then translations using them. This paper presents new robust techniques for both of these steps, (i) Given pair-wise relative rotations, global camera rotations are estimated linearly in least squares, (ii) Camera translations are estimated using a standard technique based on Second Order Cone Programming. Robustness is achieved by using only a subset of points according to a new criterion that diminishes the risk of choosing a mismatch. It is shown that only four points chosen in a special way are sufficient to represent a pairwise reconstruction almost equally as all points. This leads to a significant speedup. In image sets with repetitive or similar structures, non-existent epipolar geometries may be found. Due to them, some rotations and consequently translations may be estimated incorrectly. It is shown that iterative removal of pairwise reconstructions with the largest residual and reregistration removes most non-existent epipolar geometries. The performance of the proposed method is demonstrated on difficult wide base-line image sets.",
"The paper presents a system for automatic, geo-registered, real-time 3D reconstruction from video of urban scenes. The system collects video streams, as well as GPS and inertial measurements, in order to place the reconstructed models in geo-registered coordinates. It is designed using current state of the art real-time modules for all processing steps. It employs commodity graphics hardware and standard CPUs to achieve real-time performance. We present the main considerations in designing the system and the steps of the processing pipeline. Our system extends existing algorithms to meet the robustness and variability necessary to operate out of the lab. To account for the large dynamic range of outdoor videos, the processing pipeline estimates global camera gain changes in the feature tracking stage and efficiently compensates for these in stereo estimation without impacting the real-time performance. The required accuracy for many applications is achieved with a two-step stereo reconstruction process exploiting the redundancy across frames. We show results on real video sequences comprising hundreds of thousands of frames.",
"Recent work in structure from motion (SfM) has successfully built 3D models from large unstructured collections of images downloaded from the Internet. Most approaches use incremental algorithms that solve progressively larger bundle adjustment problems. These incremental techniques scale poorly as the number of images grows, and can drift or fall into bad local minima. We present an alternative formulation for SfM based on finding a coarse initial solution using a hybrid discrete-continuous optimization, and then improving that solution using bundle adjustment. The initial optimization step uses a discrete Markov random field (MRF) formulation, coupled with a continuous Levenberg-Marquardt refinement. The formulation naturally incorporates various sources of information about both the cameras and the points, including noisy geotags and vanishing point estimates. We test our method on several large-scale photo collections, including one with measured camera positions, and show that it can produce models that are similar to or better than those produced with incremental bundle adjustment, but more robustly and in a fraction of the time.",
"",
"This work reports on the advances and on the current status of a terrestrial city modeling approach, which uses images contributed by end-users as input. Hence, the Wiki principle well known from textual knowledge databases is transferred to the goal of incrementally building a virtual representation of the occupied habitat. In order to achieve this objective, many state-of-the-art computer vision methods must be applied and modified according to this task. We describe the utilized 3D vision methods and show initial results obtained from the current image database acquired by in-house participants.",
"The use of 3D information in the field of cultural heritage is increasing year by year. From this field comes a large demand for cheaper and more flexible ways of 3D reconstruction. This paper describes a web-based 3D reconstruction service, developed to relieve those needs of the cultural heritage field. This service consists of a pipeline that starts with the user uploading images of an object or scene(s) he wants to reconstruct in 3D. The automatic reconstruction process, running on a server connected to a cluster of computers, computes the camera calibration, as well as dense depth (or range-) maps for the images. This result can be downloaded from an ftp server and visualized with a specific tool running on the user’s PC.",
"We present a non-incremental approach to structure from motion. Our solution is based on robustly computing global rotations from relative geometries and feeding these into the known-rotation framework to create an initial solution for bundle adjustment. To increase robustness we present a new method for constructing reliable point tracks from pairwise matches. We show that our method can be seen as maximizing the reliability of a point track if the quality of the weakest link in the track is used to evaluate reliability. To estimate the final geometry we alternate between bundle adjustment and a robust version of the known-rotation formulation. The ability to compute both structure and camera translations independent of initialization makes our algorithm insensitive to degenerate epipolar geometries. We demonstrate the performance of our system on a number of image collections.",
"This paper presents a system for fully automatic recognition and reconstruction of 3D objects in image databases. We pose the object recognition problem as one of finding consistent matches between all images, subject to the constraint that the images were taken from a perspective camera. We assume that the objects or scenes are rigid. For each image, we associate a camera matrix, which is parameterised by rotation, translation and focal length. We use invariant local features to find matches between all images, and the RANSAC algorithm to find those that are consistent with the fundamental matrix. Objects are recognised as subsets of matching images. We then solve for the structure and motion of each object, using a sparse bundle adjustment algorithm. Our results demonstrate that it is possible to recognise and reconstruct 3D objects from an unordered image database with no user input at all."
]
} |
1506.00395 | 2294219430 | We describe a hierarchical structure-from-motion pipeline. No information is needed besides the images themselves. The pipeline has proved successful in real-world tasks. This paper addresses the structure-and-motion problem, which requires finding camera motion and 3D structure from point matches. A new pipeline, dubbed Samantha, is presented that departs from the prevailing sequential paradigm and instead embraces a hierarchical approach. This method has several advantages, such as a provably lower computational complexity, which is necessary to achieve true scalability, and better error containment, leading to more stability and less drift. Moreover, a practical autocalibration procedure allows images to be processed without ancillary information. Experiments with real data assess the accuracy and the computational efficiency of the method. | Autocalibration (a.k.a. self-calibration) has generated a lot of theoretical interest since its introduction in the seminal paper by Maybank and Faugeras @cite_75 . The attention created by the problem is, however, inherently practical, since autocalibration eliminates the need for off-line calibration and enables the use of content acquired in an uncontrolled setting. Modern computer vision has partly sidestepped the issue by using ancillary information, such as EXIF tags embedded in some image formats. Unfortunately, such data are not always guaranteed to be present or consistent with their medium, and their use does not eliminate the need for reliable autocalibration procedures. | {
"cite_N": [
"@cite_75"
],
"mid": [
"2065592949"
],
"abstract": [
"There is a close connection between the calibration of a single camera and the epipolar transformation obtained when the camera undergoes a displacement. The epipolar transformation imposes two algebraic constraints on the camera calibration. If two epipolar transformations, arising from different camera displacements, are available then the compatible camera calibrations are parameterized by an algebraic curve of genus four. The curve can be represented either by a space curve of degree seven contained in the intersection of two cubic surfaces, or by a curve of degree six in the dual of the image plane. The curve in the dual plane has one singular point of order three and three singular points of order two."
]
} |
1506.00395 | 2294219430 | We describe a hierarchical structure-from-motion pipeline. No information is needed besides the images themselves. The pipeline has proved successful in real-world tasks. This paper addresses the structure-and-motion problem, which requires finding camera motion and 3D structure from point matches. A new pipeline, dubbed Samantha, is presented that departs from the prevailing sequential paradigm and instead embraces a hierarchical approach. This method has several advantages, such as a provably lower computational complexity, which is necessary to achieve true scalability, and better error containment, leading to more stability and less drift. Moreover, a practical autocalibration procedure allows images to be processed without ancillary information. Experiments with real data assess the accuracy and the computational efficiency of the method. | Many published methods rely on equations involving the dual image of the absolute quadric (DIAQ), introduced by Triggs in @cite_63 . Earlier approaches for variable focal lengths were based on linear, weighted systems @cite_34 @cite_14 , solved directly or iteratively @cite_24 . Their reliability has been improved by more recent algorithms, such as @cite_61 , solving super-linear systems while directly enforcing the positive definiteness of the DIAQ. Such enhancements were necessary because of the structural non-linearity of the task: for this reason the problem has also been approached using branch and bound schemes, based either on the Kruppa equations @cite_35 , dual linear autocalibration @cite_79 or the modulus constraint @cite_42 . | {
"cite_N": [
"@cite_61",
"@cite_35",
"@cite_14",
"@cite_42",
"@cite_24",
"@cite_79",
"@cite_63",
"@cite_34"
],
"mid": [
"2115108008",
"2132674797",
"1488694087",
"2149888534",
"2159345128",
"2099679793",
"1960801737",
"2151285962"
],
"abstract": [
"We present an autocalibration algorithm for upgrading a projective reconstruction to a metric reconstruction by estimating the absolute dual quadric. The algorithm enforces the rank degeneracy and the positive semidefiniteness of the dual quadric as part of the estimation procedure, rather than as a post-processing step. Furthermore, the method allows the user, if he or she so desires, to enforce conditions on the plane at infinity so that the reconstruction satisfies the chirality constraints. The algorithm works by constructing low degree polynomial optimization problems, which are solved to their global optimum using a series of convex linear matrix inequality relaxations. The algorithm is fast, stable, robust and has time complexity independent of the number of views. We show extensive results on synthetic as well as real datasets to validate our algorithm.",
"We address the problem of autocalibration of a moving camera with unknown constant intrinsic parameters. Existing autocalibration techniques use numerical optimization algorithms whose convergence to the correct result cannot be guaranteed, in general. To address this problem, we have developed a method where an interval branch-and-bound method is employed for numerical minimization. Thanks to the properties of interval analysis this method converges to the global solution with mathematical certainty and arbitrary accuracy and the only input information it requires from the user are a set of point correspondences and a search interval. The cost function is based on the Huang-Faugeras constraint of the essential matrix. A recently proposed interval extension based on Bernstein polynomial forms has been investigated to speed up the search for the solution. Finally, experimental results are presented.",
"In this paper we address the problem of uncalibrated structure and motion recovery from image sequences that contain dominant planes in some of the views. Traditional approaches fail when the features common to three consecutive views are all located on a plane. This happens because in the uncalibrated case there is a fundamental ambiguity in relating the structure before and after the plane. This is, however, a situation that is often hard to avoid in man-made environments. We propose a complete approach that detects the problem and defers the computation of parameters that are ambiguous in projective space (i.e. the registration between partial reconstructions only sharing a common plane and poses of cameras only seeing planar features) till after self-calibration. Also a new linear self-calibration algorithm is proposed that couples the intrinsics between multiple subsequences. The final result is a complete metric 3D reconstruction of both structure and motion for the whole sequence. Experimental results on real image sequences show that the approach yields very good results.",
"We present a practical, stratified autocalibration algorithm with theoretical guarantees of global optimality. Given a projective reconstruction, the first stage of the algorithm upgrades it to affine by estimating the position of the plane at infinity. The plane at infinity is computed by globally minimizing a least squares formulation of the modulus constraints. In the second stage, the algorithm upgrades this affine reconstruction to a metric one by globally minimizing the infinite homography relation to compute the dual image of the absolute conic (DIAC). The positive semidefiniteness of the DIAC is explicitly enforced as part of the optimization process, rather than as a post-processing step. For each stage, we construct and minimize tight convex relaxations of the highly non-convex objective functions in a branch and bound optimization framework. We exploit the problem structure to restrict the search space for the DIAC and the plane at infinity to a small, fixed number of branching dimensions, independent of the number of views. Experimental evidence of the accuracy, speed and scalability of our algorithm is presented on synthetic and real data. MATLAB code for the implementation is made available to the community.",
"In this paper, an iterative algorithm for auto-calibration is presented. The proposed algorithm switches between linearly estimating the dual of the absolute conic and the intrinsic parameters, while also incorporating the rank-3 constraint on the intrinsic parameters. The most important property of the algorithm is that it is completely general in the sense that any type of constraint on the intrinsic parameters might be used. The proposed algorithm lies in between a non-linear optimization and an initial linear computation, and provides robust and sufficiently accurate initial values for a bundle adjustment routine. The performance of the algorithm is shown for both simulated and real data, especially in the important case of natural (zero skew and unit aspect ratio) cameras.",
"We investigate the problem of finding the metric structure of a general 3D scene viewed by a moving camera with square pixels and constant unknown focal length. While the problem has a concise and well-understood formulation in the stratified framework thanks to the absolute dual quadric, two open issues remain. The first issue concerns the generic Critical Motion Sequences, i.e. camera motions for which self-calibration is ambiguous. Most of the previous work focuses on the varying focal length case. We provide a thorough study of the constant focal length case. The second issue is to solve the nonlinear set of equations in four unknowns arising from the dual quadric formulation. Most of the previous work either does local nonlinear optimization, thereby requiring an initial solution, or linearizes the problem, which introduces artificial degeneracies, most of which are likely to arise in practice. We use interval analysis to solve this problem. The resulting algorithm is guaranteed to find the solution and is not subject to artificial degeneracies. Directly using interval analysis usually results in computationally expensive algorithms. We propose a carefully chosen set of inclusion functions, making it possible to find the solution within a few seconds. Comparisons of the proposed algorithm with existing ones are reported for simulated and real data.",
"The author describes a new method for camera autocalibration and scaled Euclidean structure and motion, from three or more views taken by a moving camera with fixed but unknown intrinsic parameters. The motion constancy of these is used to rectify an initial projective reconstruction. Euclidean scene structure is formulated in terms of the absolute quadric-the singular dual 3D quadric (4x4 rank-3 matrix) giving the Euclidean dot-product between plane normals. This is equivalent to the traditional absolute conic but simpler to use. It encodes both affine and Euclidean structure, and projects very simply to the dual absolute image conic which encodes camera calibration. Requiring the projection to be constant gives a bilinear constraint between the absolute quadric and image conic, from which both can be recovered nonlinearly from m >= 3 images, or quasi-linearly from m >= 4. Calibration and Euclidean structure follow easily. The nonlinear method is stabler, faster, more accurate and more general than the quasi-linear one. It is based on a general constrained optimization technique-sequential quadratic programming-that may well be useful in other vision problems.",
"In this paper the feasibility of self-calibration in the presence of varying internal camera parameters is under investigation. A self-calibration method is presented which efficiently deals with all kinds of constraints on the internal camera parameters. Within this framework a practical method is proposed which can retrieve metric reconstruction from image sequences obtained with uncalibrated zooming focusing cameras. The feasibility of the approach is illustrated on real and synthetic examples."
]
} |
1506.00395 | 2294219430 | We describe a hierarchical structure-from-motion pipeline. No information is needed besides the images themselves. The pipeline has proved successful in real-world tasks. This paper addresses the structure-and-motion problem, which requires finding camera motion and 3D structure from point matches. A new pipeline, dubbed Samantha, is presented that departs from the prevailing sequential paradigm and instead embraces a hierarchical approach. This method has several advantages, such as a provably lower computational complexity, which is necessary to achieve true scalability, and better error containment, leading to more stability and less drift. Moreover, a practical autocalibration procedure allows images to be processed without ancillary information. Experiments with real data assess the accuracy and the computational efficiency of the method. | The algorithm described in @cite_20 shares, with the branch and bound approaches, the guarantee of convergence; the non-linear part, corresponding to the localization of the plane at infinity, is solved exhaustively after having used the cheiral inequalities to compute explicit bounds on its location. | {
"cite_N": [
"@cite_20"
],
"mid": [
"2018296774"
],
"abstract": [
"This paper considers the problem of self-calibration of a camera from an image sequence in the case where the camera's internal parameters (most notably focal length) may change. The problem of camera self-calibration from a sequence of images has proven to be a difficult one in practice, due to the need ultimately to resort to non-linear methods, which have often proven to be unreliable. In a stratified approach to self-calibration, a projective reconstruction is obtained first and this is successively refined first to an affine and then to a Euclidean (or metric) reconstruction. It has been observed that the difficult step is to obtain the affine reconstruction, or equivalently to locate the plane at infinity in the projective coordinate frame. The problem is inherently non-linear and requires iterative methods that risk not finding the optimal solution. The present paper overcomes this difficulty by imposing chirality constraints to limit the search for the plane at infinity to a 3-dimensional cubic region of parameter space. It is then possible to carry out a dense search over this cube in reasonable time. For each hypothesised placement of the plane at infinity, the calibration problem is reduced to one of calibration of a nontranslating camera, for which fast non-iterative algorithms exist. A cost function based on the result of the trial calibration is used to determine the best placement of the plane at infinity. Because of the simplicity of each trial, speeds of over 10,000 trials per second are achieved on a 256 MHz processor. It is shown that this dense search allows one to avoid areas of local minima effectively and find global minima of the cost function."
]
} |
1506.00278 | 627986001 | In this paper, we introduce a new dataset consisting of 360,001 focused natural language descriptions for 10,738 images. This dataset, the Visual Madlibs dataset, is collected using automatically produced fill-in-the-blank templates designed to gather targeted descriptions about: people and objects, their appearances, activities, and interactions, as well as inferences about the general scene or its broader context. We provide several analyses of the Visual Madlibs dataset and demonstrate its applicability to two new description generation tasks: focused description generation, and multiple-choice question-answering for images. Experiments using joint-embedding and deep learning methods show promising results on these tasks. | Recently, there has been an explosion of interest in methods for producing natural language descriptions for images or video. Early work in this area generally explored two complementary directions. The first type of approach focused on detecting content elements such as objects, attributes, activities, or spatial relationships and then composing captions for images @cite_21 @cite_33 @cite_7 @cite_25 or videos @cite_29 using linguistically inspired templates. The second type of approach explored methods to make use of existing text either directly associated with an image @cite_28 @cite_1 or retrieved from visually similar images @cite_38 @cite_32 @cite_30 . | {
"cite_N": [
"@cite_30",
"@cite_38",
"@cite_33",
"@cite_7",
"@cite_28",
"@cite_29",
"@cite_21",
"@cite_1",
"@cite_32",
"@cite_25"
],
"mid": [
"",
"2109586012",
"1858383477",
"8316075",
"1750831471",
"2152984213",
"",
"2114841702",
"2149172860",
"1897761818"
],
"abstract": [
"",
"We develop and demonstrate automatic image description methods using a large captioned photo collection. One contribution is our technique for the automatic collection of this new dataset – performing a huge number of Flickr queries and then filtering the noisy results down to 1 million images with associated visually relevant captions. Such a collection allows us to approach the extremely challenging problem of description generation using relatively simple non-parametric methods and produces surprisingly effective results. We also develop methods incorporating many state of the art, but fairly noisy, estimates of image content to produce even more pleasing results. Finally we introduce a new objective performance measure for image captioning.",
"We propose a sentence generation strategy that describes images by predicting the most likely nouns, verbs, scenes and prepositions that make up the core sentence structure. The inputs are initial noisy estimates of the objects and scenes detected in the image using state-of-the-art trained detectors. As predicting actions from still images directly is unreliable, we use a language model trained from the English Gigaword corpus to obtain their estimates, together with probabilities of co-located nouns, scenes and prepositions. We use these estimates as parameters of an HMM that models the sentence generation process, with hidden nodes as sentence components and image detections as the emissions. Experimental results show that our strategy of combining vision and language produces readable and descriptive sentences compared to naive strategies that use vision alone.",
"This paper introduces a novel generation system that composes humanlike descriptions of images from computer vision detections. By leveraging syntactically informed word co-occurrence statistics, the generator filters and constrains the noisy detections output from a vision system to generate syntactic trees that detail what the computer vision system sees. Results show that the generation system outperforms state-of-the-art systems, automatically generating some of the most natural image descriptions to date.",
"Image annotation, the task of automatically generating description words for a picture, is a key component in various image search and retrieval applications. Creating image databases for model development is, however, costly and time consuming, since the keywords must be hand-coded and the process repeated for new collections. In this work we exploit the vast resource of images and documents available on the web for developing image annotation models without any human involvement. We describe a probabilistic model based on the assumption that images and their co-occurring textual data are generated by mixtures of latent topics. We show that this model outperforms previously proposed approaches when applied to image annotation and the related task of text illustration despite the noisy nature of our dataset.",
"We present a holistic data-driven technique that generates natural-language descriptions for videos. We combine the output of state-of-the-art object and activity detectors with \"real-world\" knowledge to select the most probable subject-verb-object triplet for describing a video. We show that this knowledge, automatically mined from web-scale text corpora, enhances the triplet selection algorithm by providing it contextual information and leads to a four-fold increase in activity identification. Unlike previous methods, our approach can annotate arbitrary videos without requiring the expensive collection and annotation of a similar training video corpus. We evaluate our technique against a baseline that does not use text-mined knowledge and show that humans prefer our descriptions 61% of the time.",
"",
"This paper presents a novel approach to automatic captioning of geo-tagged images by summarizing multiple web-documents that contain information related to an image's location. The summarizer is biased by dependency pattern models towards sentences which contain features typically provided for different scene types such as those of churches, bridges, etc. Our results show that summaries biased by dependency pattern models lead to significantly higher ROUGE scores than both n-gram language models reported in previous work and also Wikipedia baseline summaries. Summaries generated using dependency patterns also lead to more readable summaries than those generated without dependency patterns.",
"We present a holistic data-driven approach to image description generation, exploiting the vast amount of (noisy) parallel image data and associated natural language descriptions available on the web. More specifically, given a query image, we retrieve existing human-composed phrases used to describe visually similar images, then selectively combine those phrases to generate a novel description for the query image. We cast the generation process as constraint optimization problems, collectively incorporating multiple interconnected aspects of language composition for content planning, surface realization and discourse structure. Evaluation by human annotators indicates that our final system generates more semantically correct and linguistically appealing descriptions than two nontrivial baselines.",
"Humans can prepare concise descriptions of pictures, focusing on what they find important. We demonstrate that automatic methods can do so too. We describe a system that can compute a score linking an image to a sentence. This score can be used to attach a descriptive sentence to a given image, or to obtain images that illustrate a given sentence. The score is obtained by comparing an estimate of meaning obtained from the image to one obtained from the sentence. Each estimate of meaning comes from a discriminative procedure that is learned using data. We evaluate on a novel dataset consisting of human-annotated images. While our underlying estimate of meaning is impoverished, it is sufficient to produce very good quantitative results, evaluated with a novel score that can account for synecdoche."
]
} |
1506.00278 | 627986001 | In this paper, we introduce a new dataset consisting of 360,001 focused natural language descriptions for 10,738 images. This dataset, the Visual Madlibs dataset, is collected using automatically produced fill-in-the-blank templates designed to gather targeted descriptions about: people and objects, their appearances, activities, and interactions, as well as inferences about the general scene or its broader context. We provide several analyses of the Visual Madlibs dataset and demonstrate its applicability to two new description generation tasks: focused description generation, and multiple-choice question-answering for images. Experiments using joint-embedding and deep learning methods show promising results on these tasks. | With the advancement of deep learning for content estimation, there have been many exciting recent attempts to generate image descriptions using neural network based approaches. Some methods first detect words or phrases using Convolutional Neural Network (CNN) features, then generate and re-rank candidate sentences @cite_24 @cite_11 . Other approaches take a more end-to-end approach to generate output descriptions directly from images. Kiros @cite_12 learn a joint image-sentence embedding using visual CNNs and Long Short Term Memory (LSTM) networks. Similarly, several other methods have made use of CNN features and LSTM or recurrent neural networks (RNN) for generation with a variety of different architectures @cite_5 @cite_13 @cite_37 . These new methods have shown great promise for image description generation under some measures (e.g. BLEU-1) achieving near-human performance levels. We look at related, but more focused description generation tasks. | {
"cite_N": [
"@cite_37",
"@cite_24",
"@cite_5",
"@cite_13",
"@cite_12",
"@cite_11"
],
"mid": [
"2122180654",
"2949769367",
"2951912364",
"2951805548",
"1527575280",
""
],
"abstract": [
"In this paper we explore the bi-directional mapping between images and their sentence-based descriptions. We propose learning this mapping using a recurrent neural network. Unlike previous approaches that map both sentences and images to a common embedding, we enable the generation of novel sentences given an image. Using the same model, we can also reconstruct the visual features associated with an image given its visual description. We use a novel recurrent visual memory that automatically learns to remember long-term visual concepts to aid in both sentence generation and visual feature reconstruction. We evaluate our approach on several tasks. These include sentence generation, sentence retrieval and image retrieval. State-of-the-art results are shown for the task of generating novel image descriptions. When compared to human generated captions, our automatically generated captions are preferred by humans over @math of the time. Results are better than or comparable to state-of-the-art results on the image and sentence retrieval tasks for methods using similar visual features.",
"This paper presents a novel approach for automatically generating image descriptions: visual detectors, language models, and multimodal similarity models learnt directly from a dataset of image captions. We use multiple instance learning to train visual detectors for words that commonly occur in captions, including many different parts of speech such as nouns, verbs, and adjectives. The word detector outputs serve as conditional inputs to a maximum-entropy language model. The language model learns from a set of over 400,000 image descriptions to capture the statistics of word usage. We capture global semantics by re-ranking caption candidates using sentence-level features and a deep multimodal similarity model. Our system is state-of-the-art on the official Microsoft COCO benchmark, producing a BLEU-4 score of 29.1%. When human judges compare the system captions to ones written by other people on our held-out test set, the system captions have equal or better quality 34% of the time.",
"Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. In this paper, we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image. Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify both qualitatively and quantitatively. For instance, while the current state-of-the-art BLEU-1 score (the higher the better) on the Pascal dataset is 25, our approach yields 59, to be compared to human performance around 69. We also show BLEU-1 score improvements on Flickr30k, from 56 to 66, and on SBU, from 19 to 28. Lastly, on the newly released COCO dataset, we achieve a BLEU-4 of 27.7, which is the current state-of-the-art.",
"We present a model that generates natural language descriptions of images and their regions. Our approach leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between language and visual data. Our alignment model is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Multimodal Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate that our alignment model produces state of the art results in retrieval experiments on Flickr8K, Flickr30K and MSCOCO datasets. We then show that the generated descriptions significantly outperform retrieval baselines on both full images and on a new dataset of region-level annotations.",
"Inspired by recent advances in multimodal learning and machine translation, we introduce an encoder-decoder pipeline that learns (a): a multimodal joint embedding space with images and text and (b): a novel language model for decoding distributed representations from our space. Our pipeline effectively unifies joint image-text embedding models with multimodal neural language models. We introduce the structure-content neural language model that disentangles the structure of a sentence to its content, conditioned on representations produced by the encoder. The encoder allows one to rank images and sentences while the decoder can generate novel descriptions from scratch. Using LSTM to encode sentences, we match the state-of-the-art performance on Flickr8K and Flickr30K without using object detections. We also set new best results when using the 19-layer Oxford convolutional network. Furthermore we show that with linear encoders, the learned embedding space captures multimodal regularities in terms of vector space arithmetic e.g. *image of a blue car* - \"blue\" + \"red\" is near images of red cars. Sample captions generated for 800 images are made available for comparison.",
""
]
} |
1506.00278 | 627986001 | In this paper, we introduce a new dataset consisting of 360,001 focused natural language descriptions for 10,738 images. This dataset, the Visual Madlibs dataset, is collected using automatically produced fill-in-the-blank templates designed to gather targeted descriptions about: people and objects, their appearances, activities, and interactions, as well as inferences about the general scene or its broader context. We provide several analyses of the Visual Madlibs dataset and demonstrate its applicability to two new description generation tasks: focused description generation, and multiple-choice question-answering for images. Experiments using joint-embedding and deep learning methods show promising results on these tasks. | Along with the development of image captioning algorithms there have been a number of datasets collected for this task. One of the first datasets collected for this problem was the UIUC Pascal Sentence data set @cite_25 which contains 1,000 images with 5 sentences per image written by workers on Amazon Mechanical Turk. As the description problem gained popularity larger and richer datasets were collected, including the Flickr8K @cite_8 and Flickr30K @cite_16 datasets, containing 8,000 and 30,000 images respectively. In an alternative approach, the SBU Captioned photo dataset @cite_38 contains 1 million images with existing captions collected from Flickr. This dataset is larger, but the text tends to contain more contextual information since captions were written by the photo owners. Most recently, Microsoft released the MS COCO @cite_17 dataset. MS COCO contains 120,000 images depicting 80 common object classes, with object segmentations and 5 turker written descriptions per image. These datasets have been one of the driving forces in improving methods for description generation, but are currently limited to a single description about the general content of an image. 
We make use of MS COCO data, extending the types of descriptions associated with images. | {
"cite_N": [
"@cite_38",
"@cite_8",
"@cite_16",
"@cite_25",
"@cite_17"
],
"mid": [
"2109586012",
"2119775030",
"2185175083",
"1897761818",
""
],
"abstract": [
"We develop and demonstrate automatic image description methods using a large captioned photo collection. One contribution is our technique for the automatic collection of this new dataset – performing a huge number of Flickr queries and then filtering the noisy results down to 1 million images with associated visually relevant captions. Such a collection allows us to approach the extremely challenging problem of description generation using relatively simple non-parametric methods and produces surprisingly effective results. We also develop methods incorporating many state of the art, but fairly noisy, estimates of image content to produce even more pleasing results. Finally we introduce a new objective performance measure for image captioning.",
"Crowd-sourcing approaches such as Amazon's Mechanical Turk (MTurk) make it possible to annotate or collect large amounts of linguistic data at a relatively low cost and high speed. However, MTurk offers only limited control over who is allowed to participate in a particular task. This is particularly problematic for tasks requiring free-form text entry. Unlike multiple-choice tasks there is no correct answer, and therefore control items for which the correct answer is known cannot be used. Furthermore, MTurk has no effective built-in mechanism to guarantee workers are proficient English writers. We describe our experience in creating corpora of images annotated with multiple one-sentence descriptions on MTurk and explore the effectiveness of different quality control strategies for collecting linguistic data using MTurk. We find that the use of a qualification test provides the highest improvement of quality, whereas refining the annotations through follow-up tasks works rather poorly. Using our best setup, we construct two image corpora, totaling more than 40,000 descriptive captions for 9000 images.",
"We propose to use the visual denotations of linguistic expressions (i.e. the set of images they describe) to define novel denotational similarity metrics, which we show to be at least as beneficial as distributional similarities for two tasks that require semantic inference. To compute these denotational similarities, we construct a denotation graph, i.e. a subsumption hierarchy over constituents and their denotations, based on a large corpus of 30K images and 150K descriptive captions.",
"Humans can prepare concise descriptions of pictures, focusing on what they find important. We demonstrate that automatic methods can do so too. We describe a system that can compute a score linking an image to a sentence. This score can be used to attach a descriptive sentence to a given image, or to obtain images that illustrate a given sentence. The score is obtained by comparing an estimate of meaning obtained from the image to one obtained from the sentence. Each estimate of meaning comes from a discriminative procedure that is learned using data. We evaluate on a novel dataset consisting of human-annotated images. While our underlying estimate of meaning is impoverished, it is sufficient to produce very good quantitative results, evaluated with a novel score that can account for synecdoche.",
""
]
} |
1506.00051 | 1766449538 | This paper presents a higher level representation for videos aiming at video genre retrieval. In video genre retrieval, there is a challenge that videos may comprise multiple categories, for instance, news videos may be composed of sports, documentary, and action. Therefore, it is interesting to encode the distribution of such genres in a compact and effective manner. We propose to create a visual dictionary using a genre classifier. Each visual word in the proposed model corresponds to a region in the classification space determined by the classifier's model learned on the training frames. Therefore, the video feature vector contains a summary of the activations of each genre in its contents. We evaluate the bag-of-genres model for video genre retrieval, using the dataset of MediaEval Tagging Task of 2012. Results show that the proposed model increases the quality of the representation being more compact than existing features. | Many solutions exist in the literature aiming at including semantics in the representation. There are techniques in which an image is represented as a scale-invariant response map of a large number of pre-trained generic object detectors @cite_17 , which could be seen as a dictionary of objects. Poselets have also been used similarly to a dictionary of poses for recognizing people poses @cite_12 . Labeled local patches have also been used for having a dictionary with more semantics @cite_10 . Boureau et al. @cite_0 also present a way to supervise the dictionary creation. Other approaches can also be considered as related to the intention of having dictionaries with more meaningful visual words @cite_15 @cite_2 @cite_11 | {
"cite_N": [
"@cite_11",
"@cite_0",
"@cite_2",
"@cite_15",
"@cite_10",
"@cite_12",
"@cite_17"
],
"mid": [
"",
"2090042335",
"",
"1981998038",
"2146022472",
"2535410496",
"2169177311"
],
"abstract": [
"",
"Many successful models for scene or object recognition transform low-level descriptors (such as Gabor filter responses, or SIFT descriptors) into richer representations of intermediate complexity. This process can often be broken down into two steps: (1) a coding step, which performs a pointwise transformation of the descriptors into a representation better adapted to the task, and (2) a pooling step, which summarizes the coded features over larger neighborhoods. Several combinations of coding and pooling schemes have been proposed in the literature. The goal of this paper is threefold. We seek to establish the relative importance of each step of mid-level feature extraction through a comprehensive cross evaluation of several types of coding modules (hard and soft vector quantization, sparse coding) and pooling schemes (by taking the average, or the maximum), which obtains state-of-the-art performance or better on several recognition benchmarks. We show how to improve the best performing coding scheme by learning a supervised discriminative dictionary for sparse coding. We provide theoretical and empirical insight into the remarkable performance of max pooling. By teasing apart components shared by modern mid-level feature extractors, our approach aims to facilitate the design of better recognition architectures.",
"",
"We propose a new approach for constructing mid-level visual features for image classification. We represent an image using the outputs of a collection of binary classifiers. These binary classifiers are trained to differentiate pairs of object classes in an object hierarchy. Our feature representation implicitly captures the hierarchical structure in object classes. We show that our proposed approach outperforms other baseline methods in image classification.",
"In this paper, we present a novel image representation that renders it possible to access natural scenes by local semantic description. Our work is motivated by the continuing effort in content-based image retrieval to extract and to model the semantic content of images. The basic idea of the semantic modeling is to classify local image regions into semantic concept classes such as water, rocks, or foliage. Images are represented through the frequency of occurrence of these local concepts. Through extensive experiments, we demonstrate that the image representation is well suited for modeling the semantic content of heterogeneous scene categories, and thus for categorization and retrieval. The image representation also allows us to rank natural scenes according to their semantic similarity relative to certain scene categories. Based on human ranking data, we learn a perceptually plausible distance measure that leads to a high correlation between the human and the automatically obtained typicality ranking. This result is especially valuable for content-based image retrieval where the goal is to present retrieval results in descending semantic similarity from the query.",
"We address the classic problems of detection, segmentation and pose estimation of people in images with a novel definition of a part, a poselet. We postulate two criteria (1) It should be easy to find a poselet given an input image (2) it should be easy to localize the 3D configuration of the person conditioned on the detection of a poselet. To permit this we have built a new dataset, H3D, of annotations of humans in 2D photographs with 3D joint information, inferred using anthropometric constraints. This enables us to implement a data-driven search procedure for finding poselets that are tightly clustered in both 3D joint configuration space as well as 2D image appearance. The algorithm discovers poselets that correspond to frontal and profile faces, pedestrians, head and shoulder views, among others. Each poselet provides examples for training a linear SVM classifier which can then be run over the image in a multiscale scanning mode. The outputs of these poselet detectors can be thought of as an intermediate layer of nodes, on top of which one can run a second layer of classification or regression. We show how this permits detection and localization of torsos or keypoints such as left shoulder, nose, etc. Experimental results show that we obtain state of the art performance on people detection in the PASCAL VOC 2007 challenge, among other datasets. We are making publicly available both the H3D dataset as well as the poselet parameters for use by other researchers.",
"Robust low-level image features have been proven to be effective representations for a variety of visual recognition tasks such as object recognition and scene classification; but pixels, or even local image patches, carry little semantic meanings. For high level visual tasks, such low-level image representations are potentially not enough. In this paper, we propose a high-level image representation, called the Object Bank, where an image is represented as a scale-invariant response map of a large number of pre-trained generic object detectors, blind to the testing dataset or visual task. Leveraging on the Object Bank representation, superior performances on high level visual recognition tasks can be achieved with simple off-the-shelf classifiers such as logistic regression and linear SVM. Sparsity algorithms make our representation more efficient and scalable for large scene datasets, and reveal semantically meaningful feature patterns."
]
} |
1506.00490 | 2761230608 | We consider a communication network consisting of nodes and directed edges that connect the nodes. The network may contain cycles. The communications are slotted where the duration of each time slot is equal to the maximum propagation delay experienced by the edges. The edges with negligible delays are allowed to be operated before the other edges in each time slot. For any pair of adjacent edges @math and @math , where @math terminates at node @math and @math originates from node @math , we say @math incurs zero delay on @math if @math is operated before @math ; otherwise, we say @math incurs a unit delay on @math . In the classical model, every edge incurs a unit delay on every adjacent edge and the cut-set bound is a well-known outer bound on the capacity region. In this paper, we investigate the multimessage multicast network (MMN) consisting of independent channels, where each channel is associated with a set of edges and each edge may incur zero delay on some other edges. Our result reveals that the capacity region of the MMN with independent channels and zero-delay edges lies within the classical cut-set bound despite a violation of the unit-delay assumption. | The main contribution of this paper is twofold: First, we establish an edge-delay model for the MMN consisting of independent channels which may contain zero-delay edges. Our model subsumes the classical model which assumes that every edge incurs a unit delay on every adjacent edge. Second, we prove that for each DM-MMN consisting of independent channels with zero-delay edges, the capacity region always lies within the classical cut-set bound despite a violation of the classical unit-delay assumption. 
Combining our cut-set bound result with existing achievability results from network equivalence theory @cite_6 and noisy network coding (NNC) @cite_0 @cite_7 , we establish the tightness of our cut-set bound under our edge-delay model for the MMN with independent DMCs, and hence fully characterize the capacity region. More specifically, we show that the capacity region is the same as the set of achievable rate tuples under the classical unit-delay assumption. The capacity region result is then generalized to the MMN consisting of independent additive white Gaussian noise (AWGN) channels with zero-delay edges. | {
"cite_N": [
"@cite_0",
"@cite_7",
"@cite_6"
],
"mid": [
"2156567124",
"2132100261",
"2054405836"
],
"abstract": [
"This paper deals with the problem of multicasting a set of discrete memoryless correlated sources (DMCS) over a cooperative relay network. Necessary conditions with cut-set interpretation are presented. A Joint source-Wyner-Ziv encoding sliding window decoding scheme is proposed, in which decoding at each receiver is done with respect to an ordered partition of other nodes. For each ordered partition a set of feasibility constraints is derived. Then, utilizing the submodular property of the entropy function and a novel geometrical approach, the results of different ordered partitions are consolidated, which lead to sufficient conditions for our problem. The proposed scheme achieves operational separation between source coding and channel coding. It is shown that sufficient conditions are indeed necessary conditions in two special cooperative networks, namely, Aref network and finite-field deterministic network. Also, in Gaussian cooperative networks, it is shown that reliable transmission of all DMCS whose Slepian-Wolf region intersects the cut-set bound region within a constant number of bits, is feasible. In particular, all results of the paper are specialized to obtain an achievable rate region for cooperative relay networks which includes relay networks and two-way relay networks.",
"A noisy network coding scheme for communicating messages between multiple sources and destinations over a general noisy network is presented. For multi-message multicast networks, the scheme naturally generalizes network coding over noiseless networks by Ahlswede, Cai, Li, and Yeung, and compress-forward coding for the relay channel by Cover and El Gamal to discrete memoryless and Gaussian networks. The scheme also extends the results on coding for wireless relay networks and deterministic networks by Avestimehr, Diggavi, and Tse, and coding for wireless erasure networks by Dana, Gowaikar, Palanki, Hassibi, and Effros. The scheme involves lossy compression by the relay as in the compress-forward coding scheme for the relay channel. However, unlike previous compress-forward schemes in which independent messages are sent over multiple blocks, the same message is sent multiple times using independent codebooks as in the network coding scheme for cyclic networks. Furthermore, the relays do not use Wyner-Ziv binning as in previous compress-forward schemes, and each decoder performs simultaneous decoding of the received signals from all the blocks without uniquely decoding the compression indices. A consequence of this new scheme is that achievability is proved simply and more generally without resorting to time expansion to extend results for acyclic networks to networks with cycles. The noisy network coding scheme is then extended to general multi-message networks by combining it with decoding techniques for the interference channel. For the Gaussian multicast network, noisy network coding improves the previously established gap to the cutset bound. We also demonstrate through two popular Gaussian network examples that noisy network coding can outperform conventional compress-forward, amplify-forward, and hash-forward coding schemes.",
"A family of equivalence tools for bounding network capacities is introduced. Given a network N with node set V, the capacity of N is a set of non-negative vectors with elements corresponding to all possible multicast connections in N; a vector ℜ is in the capacity region for N if and only if it is possible to simultaneously and reliably establish all multicast connections across N at the given rates. Any other demand type with independent messages is a special case of this multiple multicast problem, and is therefore included in the given rate region. In Part I, we show that the capacity of a network N is unchanged if any independent, memoryless, point-to-point channel in N is replaced by a noiseless bit pipe with throughput equal to the removed channel's capacity. It follows that the capacity of a network comprised entirely of such point-to-point channels equals the capacity of an error-free network that replaces each channel by a noiseless bit pipe of the corresponding capacity. A related separation result was known previously for a single multicast connection over an acyclic network of independent, memoryless, point-to-point channels; our result treats general connections (e.g., a collection of simultaneous unicasts) and allows cyclic or acyclic networks."
]
} |
1506.00490 | 2761230608 | We consider a communication network consisting of nodes and directed edges that connect the nodes. The network may contain cycles. The communications are slotted where the duration of each time slot is equal to the maximum propagation delay experienced by the edges. The edges with negligible delays are allowed to be operated before the other edges in each time slot. For any pair of adjacent edges @math and @math , where @math terminates at node @math and @math originates from node @math , we say @math incurs zero delay on @math if @math is operated before @math ; otherwise, we say @math incurs a unit delay on @math . In the classical model, every edge incurs a unit delay on every adjacent edge and the cut-set bound is a well-known outer bound on the capacity region. In this paper, we investigate the multimessage multicast network (MMN) consisting of independent channels, where each channel is associated with a set of edges and each edge may incur zero delay on some other edges. Our result reveals that the capacity region of the MMN with independent channels and zero-delay edges lies within the classical cut-set bound despite a violation of the unit-delay assumption. | It was shown by Effros @cite_4 that under the positive-delay assumption in the classical setting, the set of achievable rate tuples for the MMN with independent channels does not depend on the amount of positive delay incurred by each edge on each other edge. (Effros's framework does not consider zero-delay edges, which can be seen from the encoding rules stated in [Def. 1] of @cite_4, which assume @math is a function of @math .) Our capacity result for the MMN with independent DMCs (as well as AWGNs) complements Effros's finding as follows: The set of achievable rate tuples for the MMN with independent DMCs (as well as AWGNs) does not depend on the amount of delay incurred by each edge on each other edge, even in the presence of zero-delay edges.
From a practical point of view, the capacity region of the MMN with independent DMCs (as well as AWGNs) is not affected by the way of handling delays among the channels or how the channels are synchronized, even when zero-delay edges are present. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2051200480"
],
"abstract": [
"This paper investigates the impact of noise dependence and signal delay on the capacities of networks. It is shown that statistical dependence between the noise in a network's component channels is helpful for communication. In particular, the capacity region of a network whose component channels exhibit dependent noise is a superset of the capacity region of a network built from the same component channels when the noise in those channels is independent. It is also shown that delay has no impact on capacity. That is, the capacity of a network of memoryless channels is unchanged by the addition of finite delays anywhere in the network. Both results are proven for all possible networks of memoryless point-to-point and multi-terminal channels under multiple multicast connections. All other connection types (e.g., single unicast, multiple unicast, single multicast, and mixed unicast and multicast connections) are special cases of the multiple multicast problem, and are therefore included in the given results. The results also generalize from reliable communication of independent sources to lossy or lossy description of possibly dependent sources (i.e., joint source-channel coding) across the same network."
]
} |
1506.00379 | 2952854166 | Representation learning of knowledge bases (KBs) aims to embed both entities and relations into a low-dimensional space. Most existing methods only consider direct relations in representation learning. We argue that multiple-step relation paths also contain rich inference patterns between entities, and propose a path-based representation learning model. This model considers relation paths as translations between entities for representation learning, and addresses two key challenges: (1) Since not all relation paths are reliable, we design a path-constraint resource allocation algorithm to measure the reliability of relation paths. (2) We represent relation paths via semantic composition of relation embeddings. Experimental results on real-world datasets show that, as compared with baselines, our model achieves significant and consistent improvements on knowledge base completion and relation extraction from text. | Recent years have witnessed great advances of modeling multi-relational data such as social networks and KBs. Many works cope with relational learning as a multi-relational representation learning problem, encoding both entities and relations in a low-dimensional latent space, based on Bayesian clustering @cite_3 @cite_4 @cite_14 @cite_25 , energy-based models @cite_17 @cite_28 @cite_16 @cite_29 @cite_0 , matrix factorization @cite_27 @cite_20 @cite_31 . Among existing representation models, TransE @cite_29 regards a relation as translation between head and tail entities for optimization, which achieves a good trade-off between prediction accuracy and computational efficiency. All existing representation learning methods of knowledge bases only use direct relations between entities, ignoring rich information in relation paths. | {
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_28",
"@cite_29",
"@cite_3",
"@cite_0",
"@cite_27",
"@cite_31",
"@cite_16",
"@cite_25",
"@cite_20",
"@cite_17"
],
"mid": [
"2123228027",
"",
"1771625187",
"2127795553",
"",
"2951131188",
"2117420919",
"2099752825",
"2127426251",
"2158535911",
"205829674",
"2156954687"
],
"abstract": [
"We consider the problem of learning probabilistic models for complex relational structures between various types of objects. A model can help us \"understand\" a dataset of relational facts in at least two ways, by finding interpretable structure in the data, and by supporting predictions, or inferences about whether particular unobserved relations are likely to be true. Often there is a tradeoff between these two aims: cluster-based models yield more easily interpretable representations, while factorization-based approaches have given better predictive performance on large data sets. We introduce the Bayesian Clustered Tensor Factorization (BCTF) model, which embeds a factorized representation of relations in a nonparametric Bayesian clustering framework. Inference is fully Bayesian but scales well to large data sets. The model simultaneously discovers interpretable clusters and yields predictive performance that matches or beats previous probabilistic models for relational data.",
"",
"Knowledge bases provide applications with the benefit of easily accessible, systematic relational knowledge but often suffer in practice from their incompleteness and lack of knowledge of new entities and relations. Much work has focused on building or extending them by finding patterns in large unannotated text corpora. In contrast, here we mainly aim to complete a knowledge base by predicting additional true relationships between entities, based on generalizations that can be discerned in the given knowledgebase. We introduce a neural tensor network (NTN) model which predicts new relationship entries that can be added to the database. This model can be improved by initializing entity representations with word vectors learned in an unsupervised fashion from text, and when doing this, existing relations can even be queried for entities that were not present in the database. Our model generalizes and outperforms existing models for this problem, and can classify unseen relationships in WordNet with an accuracy of 75.8 .",
"We consider the problem of embedding entities and relationships of multi-relational data in low-dimensional vector spaces. Our objective is to propose a canonical model which is easy to train, contains a reduced number of parameters and can scale up to very large databases. Hence, we propose TransE, a method which models relationships by interpreting them as translations operating on the low-dimensional embeddings of the entities. Despite its simplicity, this assumption proves to be powerful since extensive experiments show that TransE significantly outperforms state-of-the-art methods in link prediction on two knowledge bases. Besides, it can be successfully trained on a large scale data set with 1M entities, 25k relationships and more than 17M training samples.",
"",
"Large-scale relational learning becomes crucial for handling the huge amounts of structured data generated daily in many application domains ranging from computational biology or information retrieval, to natural language processing. In this paper, we present a new neural network architecture designed to embed multi-relational graphs into a flexible continuous vector space in which the original data is kept and enhanced. The network is trained to encode the semantics of these graphs in order to assign high probabilities to plausible components. We empirically show that it reaches competitive performance in link prediction on standard datasets from the literature.",
"Relational learning is concerned with predicting unknown values of a relation, given a database of entities and observed relations among entities. An example of relational learning is movie rating prediction, where entities could include users, movies, genres, and actors. Relations encode users' ratings of movies, movies' genres, and actors' roles in movies. A common prediction technique given one pairwise relation, for example a #users x #movies ratings matrix, is low-rank matrix factorization. In domains with multiple relations, represented as multiple matrices, we may improve predictive accuracy by exploiting information from one relation while predicting another. To this end, we propose a collective matrix factorization model: we simultaneously factor several matrices, sharing parameters among factors when an entity participates in multiple relations. Each relation can have a different value type and error distribution; so, we allow nonlinear relationships between the parameters and outputs, using Bregman divergences to measure error. We extend standard alternating projection algorithms to our model, and derive an efficient Newton update for the projection. Furthermore, we propose stochastic optimization methods to deal with large, sparse matrices. Our model generalizes several existing matrix factorization methods, and therefore yields new large-scale optimization algorithms for these problems. Our model can handle any pairwise relational schema and a wide variety of error models. We demonstrate its efficiency, as well as the benefit of sharing parameters among relations.",
"Vast amounts of structured information have been published in the Semantic Web's Linked Open Data (LOD) cloud and their size is still growing rapidly. Yet, access to this information via reasoning and querying is sometimes difficult, due to LOD's size, partial data inconsistencies and inherent noisiness. Machine Learning offers an alternative approach to exploiting LOD's data with the advantages that Machine Learning algorithms are typically robust to both noise and data inconsistencies and are able to efficiently utilize non-deterministic dependencies in the data. From a Machine Learning point of view, LOD is challenging due to its relational nature and its scale. Here, we present an efficient approach to relational learning on LOD data, based on the factorization of a sparse tensor that scales to data consisting of millions of entities, hundreds of relations and billions of known facts. Furthermore, we show how ontological knowledge can be incorporated in the factorization to improve learning results and how computation can be distributed across multiple nodes. We demonstrate that our approach is able to factorize the YAGO 2 core ontology and globally predict statements for this large knowledge base using a single dual-core desktop computer. Furthermore, we show experimentally that our approach achieves good results in several relational learning tasks that are relevant to Linked Data. Once a factorization has been computed, our model is able to predict efficiently, and without any additional training, the likelihood of any of the 4.3 ⋅ 1014 possible triples in the YAGO 2 core ontology.",
"Knowledge bases are an important resource for question answering and other tasks but often suffer from incompleteness and lack of ability to reason over their discrete entities and relationships. In this paper we introduce an expressive neural tensor network suitable for reasoning over relationships between two entities. Previous work represented entities as either discrete atomic units or with a single entity vector representation. We show that performance can be improved when entities are represented as an average of their constituting word vectors. This allows sharing of statistical strength between, for instance, facts involving the \"Sumatran tiger\" and \"Bengal tiger.\" Lastly, we demonstrate that all models improve when these word vectors are initialized with vectors learned from unsupervised large corpora. We assess the model by considering the problem of predicting additional true relations between entities given a subset of the knowledge base. Our model outperforms previous models and can classify unseen relationships in WordNet and FreeBase with an accuracy of 86.2 and 90.0 , respectively.",
"As the availability and importance of relational data—such as the friendships summarized on a social networking website—increases, it becomes increasingly important to have good models for such data. The kinds of latent structure that have been considered for use in predicting links in such networks have been relatively limited. In particular, the machine learning community has focused on latent class models, adapting Bayesian nonparametric methods to jointly infer how many latent classes there are while learning which entities belong to each class. We pursue a similar approach with a richer kind of latent variable—latent features—using a Bayesian nonparametric approach to simultaneously infer the number of features at the same time we learn which entities have each feature. Our model combines these inferred features with known covariates in order to perform link prediction. We demonstrate that the greater expressiveness of this approach allows us to improve performance on three datasets.",
"Relational learning is becoming increasingly important in many areas of application. Here, we present a novel approach to relational learning based on the factorization of a three-way tensor. We show that unlike other tensor approaches, our method is able to perform collective learning via the latent components of the model and provide an efficient algorithm to compute the factorization. We substantiate our theoretical considerations regarding the collective learning capabilities of our model by the means of experiments on both a new dataset and a dataset commonly used in entity resolution. Furthermore, we show on common benchmark datasets that our approach achieves better or on-par results, if compared to current state-of-the-art relational learning solutions, while it is significantly faster to compute.",
""
]
} |
1506.00379 | 2952854166 | Representation learning of knowledge bases (KBs) aims to embed both entities and relations into a low-dimensional space. Most existing methods only consider direct relations in representation learning. We argue that multiple-step relation paths also contain rich inference patterns between entities, and propose a path-based representation learning model. This model considers relation paths as translations between entities for representation learning, and addresses two key challenges: (1) Since not all relation paths are reliable, we design a path-constraint resource allocation algorithm to measure the reliability of relation paths. (2) We represent relation paths via semantic composition of relation embeddings. Experimental results on real-world datasets show that, as compared with baselines, our model achieves significant and consistent improvements on knowledge base completion and relation extraction from text. | Relation paths have already been widely considered in social networks and recommender systems. Most of these works regard each relation and path as discrete symbols, and deal with them using graph-based algorithms, such as random walks with restart @cite_15. Relation paths have also been used for inference on large-scale KBs, such as the Path Ranking Algorithm (PRA) @cite_5, which has been adopted for expert finding @cite_5 and information retrieval @cite_18. PRA has also been used for relation extraction based on KB structure @cite_6 @cite_2. @cite_7 further learns a recurrent neural network (RNN) to represent unseen relation paths according to involved relations. We note that these methods focus on modeling relation paths for relation extraction without considering any information of entities.
In contrast, by successfully integrating the merits of modeling entities and relation paths, PTransE can learn superior representations of both entities and relations for knowledge graph completion and relation extraction as shown in our experiments. | {
"cite_N": [
"@cite_18",
"@cite_7",
"@cite_6",
"@cite_2",
"@cite_5",
"@cite_15"
],
"mid": [
"1950142954",
"2584683887",
"1756422141",
"114118985",
"2029249040",
"2133299088"
],
"abstract": [
"We study how to extend a large knowledge base (Freebase) by reading relational information from a large Web text corpus. Previous studies on extracting relational knowledge from text show the potential of syntactic patterns for extraction, but they do not exploit background knowledge of other relations in the knowledge base. We describe a distributed, Web-scale implementation of a path-constrained random walk model that learns syntactic-semantic inference rules for binary relations from a graph representation of the parsed text and the knowledge base. Experiments show significant accuracy improvements in binary relation prediction over methods that consider only text, or only the existing knowledge base.",
"Traditional approaches to knowledge base completion have been based on symbolic representations. Lowdimensional vector embedding models proposed recently for this task are attractive since they generalize to possibly unlimited sets of relations. A significant drawback of previous embedding models for KB completion is that they merely support reasoning on individual relations (e.g., bornIn(X, Y ) ⇒ nationality(X, Y )). In this work, we develop models for KB completion that support chains of reasoning on paths of any length using compositional vector space models. We construct compositional vector representations for the paths in the KB graph from the semantic vector representations of the binary relations in that path and perform inference directly in the vector space. Unlike previous methods, our approach can generalize to paths that are unseen in training and, in a zero-shot setting, predict target relations without supervised training data for that relation.",
"We consider the problem of performing learning and inference in a large scale knowledge base containing imperfect knowledge with incomplete coverage. We show that a soft inference procedure based on a combination of constrained, weighted, random walks through the knowledge base graph can be used to reliably infer new beliefs for the knowledge base. More specifically, we show that the system can learn to infer different target relations by tuning the weights associated with random walks that follow different paths through the graph, using a version of the Path Ranking Algorithm (Lao and Cohen, 2010b). We apply this approach to a knowledge base of approximately 500,000 beliefs extracted imperfectly from the web by NELL, a never-ending language learner (, 2010). This new system improves significantly over NELL's earlier Horn-clause learning and inference method: it obtains nearly double the precision at rank 100, and the new learning method is also applicable to many more inference tasks.",
"Automatically constructed Knowledge Bases (KBs) are often incomplete and there is a genuine need to improve their coverage. Path Ranking Algorithm (PRA) is a recently proposed method which aims to improve KB coverage by performing inference directly over the KB graph. For the first time, we demonstrate that addition of edges labeled with latent features mined from a large dependency parsed corpus of 500 million Web documents can significantly outperform previous PRAbased approaches on the KB inference task. We present extensive experimental results validating this finding. The resources presented in this paper are publicly available.",
"Scientific literature with rich metadata can be represented as a labeled directed graph. This graph representation enables a number of scientific tasks such as ad hoc retrieval or named entity recognition (NER) to be formulated as typed proximity queries in the graph. One popular proximity measure is called Random Walk with Restart (RWR), and much work has been done on the supervised learning of RWR measures by associating each edge label with a parameter. In this paper, we describe a novel learnable proximity measure which instead uses one weight per edge label sequence: proximity is defined by a weighted combination of simple \"path experts\", each corresponding to following a particular sequence of labeled edges. Experiments on eight tasks in two subdomains of biology show that the new learning method significantly outperforms the RWR model (both trained and untrained). We also extend the method to support two additional types of experts to model intrinsic properties of entities: query-independent experts, which generalize the PageRank measure, and popular entity experts which allow rankings to be adjusted for particular entities that are especially important.",
"How closely related are two nodes in a graph? How to compute this score quickly, on huge, disk-resident, real graphs? Random walk with restart (RWR) provides a good relevance score between two nodes in a weighted graph, and it has been successfully used in numerous settings, like automatic captioning of images, generalizations to the \"connection subgraphs\", personalized PageRank, and many more. However, the straightforward implementations of RWR do not scale for large graphs, requiring either quadratic space and cubic pre-computation time, or slow response time on queries. We propose fast solutions to this problem. The heart of our approach is to exploit two important properties shared by many real graphs: (a) linear correlations and (b) block- wise, community-like structure. We exploit the linearity by using low-rank matrix approximation, and the community structure by graph partitioning, followed by the Sherman- Morrison lemma for matrix inversion. Experimental results on the Corel image and the DBLP dabasets demonstrate that our proposed methods achieve significant savings over the straightforward implementations: they can save several orders of magnitude in pre-computation and storage cost, and they achieve up to 150x speed up with 90 + quality preservation."
]
} |
1506.00528 | 2951765406 | In this paper, we present a novel approach for medical synonym extraction. We aim to integrate the term embedding with the medical domain knowledge for healthcare applications. One advantage of our method is that it is very scalable. Experiments on a dataset with more than 1M term pairs show that the proposed approach outperforms the baseline approaches by a large margin. | A wide range of techniques has been applied to synonym detection, including the use of lexico-syntactic patterns @cite_24, clustering @cite_13, graph-based models @cite_7 @cite_17 @cite_9 @cite_21 and distributional semantics @cite_4 @cite_0 @cite_30 @cite_12 @cite_18. There are also efforts to improve the detection performance using multiple sources or ensemble methods @cite_23 @cite_25 @cite_29. | {
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_4",
"@cite_7",
"@cite_9",
"@cite_21",
"@cite_29",
"@cite_24",
"@cite_0",
"@cite_23",
"@cite_13",
"@cite_25",
"@cite_12",
"@cite_17"
],
"mid": [
"80719611",
"80286018",
"1983578042",
"2113367658",
"",
"2006163720",
"2081016167",
"2068737686",
"2160732112",
"1998324505",
"2121227244",
"2123294466",
"322824424",
"1491314197"
],
"abstract": [
"Medical terminologies and ontologies are important tools for natural language processing of health record narratives. To account for the variability of language use, synonyms need to be stored in a semantic resource as textual instantiations of a concept. Developing such resources manually is, however, prohibitively expensive and likely to result in low coverage. To facilitate and expedite the process of lexical resource development, distributional analysis of large corpora provides a powerful data-driven means of (semi-)automatically identifying semantic relations, including synonymy, between terms. In this paper, we demonstrate how distributional analysis of a large corpus of electronic health records – the MIMIC-II database – can be employed to extract synonyms of SNOMED CT preferred terms. A distinctive feature of our method is its ability to identify synonymous relations between terms of varying length.",
"We present a study that developed and tested three query expansion methods for the retrieval of clinical documents. Finding relevant documents in a large clinical data warehouse is a challenging task. To address this issue, first, we implemented a synonym expansion strategy that used a few selected vocabularies. Second, we trained a topic model on a large set of clinical documents, which was then used to identify related terms for query expansion. Third, we obtained related terms from a large predicate database derived from Medline abstracts for query expansion. The three expansion methods were tested on a set of clinical notes. All three methods successfully achieved higher average recalls and average F-measures when compared with the baseline method. The average precisions and precision at 10, however, decreased with all expansions. Amongst the three expansion methods, the topic model-based method performed the best in terms of recall and F-measure.",
"How do people know as much as they do with as little information as they get? The problem takes many forms; learning vocabulary from text is an especially dramatic and convenient case for research. A new general theory of acquired similarity and knowledge representation, latent semantic analysis (LSA), is presented and used to successfully simulate such learning and several other psycholinguistic phenomena. By inducing global knowledge indirectly from local co-occurrence data in a large body of representative text, LSA acquired knowledge about the full vocabulary of English at a comparable rate to schoolchildren. LSA uses no prior linguistic or perceptual similarity knowledge; it is based solely on a general mathematical learning method that achieves powerful inductive effects by extracting the right number of dimensions (e.g., 300) to represent objects and contexts. Relations to other theories, phenomena, and problems are sketched.",
"We introduce a concept of similarity between vertices of directed graphs. Let GA and GB be two directed graphs with, respectively, nA and nB vertices. We define an nB nA similarity matrix S whose real entry sij expresses how similar vertex j (in GA) is to vertex i (in GB): we say that sij is their similarity score. The similarity matrix can be obtained as the limit of the normalized even iterates of Sk+1 = BSkAT + BTSkA, where A and B are adjacency matrices of the graphs and S0 is a matrix whose entries are all equal to 1. In the special case where GA = GB = G, the matrix S is square and the score sij is the similarity score between the vertices i and j of G. We point out that Kleinberg's \"hub and authority\" method to identify web-pages relevant to a given query can be viewed as a special case of our definition in the case where one of the graphs has two vertices and a unique directed edge between them. In analogy to Kleinberg, we show that our similarity scores are given by the components of a dominant eigenvector of a nonnegative matrix. Potential applications of our similarity concept are numerous. We illustrate an application for the automatic extraction of synonyms in a monolingual dictionary.",
"",
"We consider a parsed text corpus as an instance of a labelled directed graph, where nodes represent words and weighted directed edges represent the syntactic relations between them. We show that graph walks, combined with existing techniques of supervised learning, can be used to derive a task-specific word similarity measure in this graph. We also propose a new path-constrained graph walk method, in which the graph walk process is guided by high-level knowledge about meaningful edge sequences (paths). Empirical evaluation on the task of named entity coordinate term extraction shows that this framework is preferable to vector-based models for small-sized corpora. It is also shown that the path-constrained graph walk algorithm yields both performance and scalability gains.",
"Current approaches to the prediction of associations rely on just one type of information, generally taking the form of either word space models or collocation measures. At the moment, it is an open question how these approaches compare to one another. In this paper, we will investigate the performance of these two types of models and that of a new approach based on compounding. The best single predictor is the log-likelihood ratio, followed closely by the document-based word space model. We will show, however, that an ensemble method that combines these two best approaches with the compounding algorithm achieves an increase in performance of almost 30 over the current state of the art.",
"We describe a method for the automatic acquisition of the hyponymy lexical relation from unrestricted text. Two goals motivate the approach: (i) avoidance of the need for pre-encoded knowledge and (ii) applicability across a wide range of text. We identify a set of lexico-syntactic patterns that are easily recognizable, that occur frequently and across text genre boundaries, and that indisputably indicate the lexical relation of interest. We describe a method for discovering these patterns and suggest that other lexical relations will also be acquirable in this way. A subset of the acquisition algorithm is implemented and the results are used to augment and critique the structure of a large hand-built thesaurus. Extensions and applications to areas such as information retrieval are suggested.",
"Terminologies that account for variation in language use by linking synonyms and abbreviations to their corresponding concept are important enablers of high-quality information extraction from medical texts. Due to the use of specialized sub-languages in the medical domain, manual construction of semantic resources that accurately reflect language use is both costly and challenging, often resulting in low coverage. Although models of distributional semantics applied to large corpora provide a potential means of supporting development of such resources, their ability to isolate synonymy from other semantic relations is limited. Their application in the clinical domain has also only recently begun to be explored. Combining distributional models and applying them to different types of corpora may lead to enhanced performance on the tasks of automatically extracting synonyms and abbreviation-expansion pairs. A combination of two distributional models – Random Indexing and Random Permutation – employed in conjunction with a single corpus outperforms using either of the models in isolation. Furthermore, combining semantic spaces induced from different types of corpora – a corpus of clinical text and a corpus of medical journal articles – further improves results, outperforming a combination of semantic spaces induced from a single source, as well as a single semantic space induced from the conjoint corpus. A combination strategy that simply sums the cosine similarity scores of candidate terms is generally the most profitable out of the ones explored. Finally, applying simple post-processing filtering rules yields substantial performance gains on the tasks of extracting abbreviation-expansion pairs, but not synonyms. The best results, measured as recall in a list of ten candidate terms, for the three tasks are: 0.39 for abbreviations to long forms, 0.33 for long forms to abbreviations, and 0.47 for synonyms. 
This study demonstrates that ensembles of semantic spaces can yield improved performance on the tasks of automatically extracting synonyms and abbreviation-expansion pairs. This notion, which merits further exploration, allows different distributional models – with different model parameters – and different types of corpora to be combined, potentially allowing enhanced performance to be obtained on a wide range of natural language processing tasks.",
"Ensemble methods are state of the art for many NLP tasks. Recent work by Banko and Brill (2001) suggests that this would not necessarily be true if very large training corpora were available. However, their results are limited by the simplicity of their evaluation task and individual classifiers.Our work explores ensemble efficacy for the more complex task of automatic thesaurus extraction on up to 300 million words. We examine our conflicting results in terms of the constraints on, and complexity of, different contextual representations, which contribute to the sparseness-and noise-induced bias behaviour of NLP systems on very large corpora.",
"We address the problem of predicting a word from previous words in a sample of text. In particular, we discuss n-gram models based on classes of words. We also discuss several statistical algorithms for assigning words to classes based on the frequency of their co-occurrence with other words. We find that we are able to extract classes that have the flavor of either syntactically based groupings or semantically based groupings, depending on the nature of the underlying statistics.",
"Automatically acquiring synonymous words (synonyms) from corpora is a challenging task. For this task, methods that use only one kind of resources are inadequate because of low precision or low recall. To improve the performance of synonym extraction, we propose a method to extract synonyms with multiple resources including a monolingual dictionary, a bilingual corpus, and a large monolingual corpus. This approach uses an ensemble to combine the synonyms extracted by individual extractors which use the three resources. Experimental results prove that the three resources are complementary to each other on synonym extraction, and that the ensemble method we used is very effective to improve both precisions and recalls of extracted synonyms.",
"The various ways in which one can refer to the same clinical concept needs to be accounted for in a semantic resource such as SNOMED CT. Developing terminological resources manually is, however, prohibitively expensive and likely to result in low coverage, especially given the high variability of language use in clinical text. To support this process, distributional methods can be employed in conjunction with a large corpus of electronic health records to extract synonym candidates for clinical terms. In this paper, we exemplify the potential of our proposed method using the Swedish version of SNOMED CT, which currently lacks synonyms. A medical expert inspects two thousand term pairs generated by two semantic spaces ‐ one of which models multiword terms in addition to single words ‐ for one hundred preferred terms of the semantic types disorder and finding.",
"Wikipedia has become a huge phenomenon on the WWW. As a corpus for knowledge extraction, it has various impressive characteristics such as a huge amount of articles, live updates, a dense link structure, brief link texts and URL identification for concepts. In this paper, we propose an efficient link mining method pfibf (Path Frequency - Inversed Backward link Frequency) and the extension method \"forward backward link weighting (FB weighting)\" in order to construct a huge scale association thesaurus. We proved the effectiveness of our proposed methods compared with other conventional methods such as cooccurrence analysis and TF-IDF."
]
} |
1506.00528 | 2951765406 | In this paper, we present a novel approach for medical synonym extraction. We aim to integrate the term embedding with the medical domain knowledge for healthcare applications. One advantage of our method is that it is very scalable. Experiments on a dataset with more than 1M term pairs show that the proposed approach outperforms the baseline approaches by a large margin. | Vector space models are directly related to synonym extraction. Some approaches use the low-rank approximation idea to decompose large matrices that capture the statistical information of the corpus. The most representative method in this category is Latent Semantic Analysis (LSA) @cite_26. Newer models, such as Hellinger PCA @cite_27 and GloVe @cite_8, also follow this approach. | {
"cite_N": [
"@cite_27",
"@cite_26",
"@cite_8"
],
"mid": [
"2951943225",
"2147152072",
"2250539671"
],
"abstract": [
"Word embeddings resulting from neural language models have been shown to be successful for a large variety of NLP tasks. However, such architecture might be difficult to train and time-consuming. Instead, we propose to drastically simplify the word embeddings computation through a Hellinger PCA of the word co-occurence matrix. We compare those new word embeddings with some well-known embeddings on NER and movie review tasks and show that we can reach similar or even better performance. Although deep learning is not really necessary for generating good word embeddings, we show that it can provide an easy way to adapt embeddings to specific tasks.",
"A new method for automatic indexing and retrieval is described. The approach is to take advantage of implicit higher-order structure in the association of terms with documents (“semantic structure”) in order to improve the detection of relevant documents on the basis of terms found in queries. The particular technique used is singular-value decomposition, in which a large term by document matrix is decomposed into a set of ca. 100 orthogonal factors from which the original matrix can be approximated by linear combination. Documents are represented by ca. 100 item vectors of factor weights. Queries are represented as pseudo-document vectors formed from weighted combinations of terms, and documents with supra-threshold cosine values are returned. initial tests find this completely automatic method for retrieval to be promising.",
"Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. We analyze and make explicit the model properties needed for such regularities to emerge in word vectors. The result is a new global logbilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. Our model efficiently leverages statistical information by training only on the nonzero elements in a word-word cooccurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus. The model produces a vector space with meaningful substructure, as evidenced by its performance of 75 on a recent word analogy task. It also outperforms related models on similarity tasks and named entity recognition."
]
} |
1506.00528 | 2951765406 | In this paper, we present a novel approach for medical synonym extraction. We aim to integrate the term embedding with the medical domain knowledge for healthcare applications. One advantage of our method is that it is very scalable. Experiments on a dataset with more than 1M term pairs show that the proposed approach outperforms the baseline approaches by a large margin. | Neural network based representation learning has attracted a lot of attention recently. Some of the earliest work was done in @cite_31. This idea was then applied to language modeling @cite_20, which motivated a number of machine learning research projects to construct vector representations for natural language processing tasks @cite_1 @cite_2 @cite_6 @cite_11 @cite_5 @cite_32. | {
"cite_N": [
"@cite_1",
"@cite_32",
"@cite_6",
"@cite_2",
"@cite_5",
"@cite_31",
"@cite_20",
"@cite_11"
],
"mid": [
"2117130368",
"1570587036",
"22861983",
"1662133657",
"",
"",
"2132339004",
"1423339008"
],
"abstract": [
"We describe a single convolutional neural network architecture that, given a sentence, outputs a host of language processing predictions: part-of-speech tags, chunks, named entity tags, semantic roles, semantically similar words and the likelihood that the sentence makes sense (grammatically and semantically) using a language model. The entire network is trained jointly on all these tasks using weight-sharing, an instance of multitask learning. All the tasks use labeled data except the language model which is learnt from unlabeled text and represents a novel form of semi-supervised learning for the shared tasks. We show how both multitask learning and semi-supervised learning improve the generalization of the shared tasks, resulting in state-of-the-art-performance.",
"Most state-of-the-art approaches for named-entity recognition (NER) use semi supervised information in the form of word clusters and lexicons. Recently neural network-based language models have been explored, as they as a byproduct generate highly informative vector representations for words, known as word embeddings. In this paper we present two contributions: a new form of learning word embeddings that can leverage information from relevant lexicons to improve the representations, and the first system to use neural word embeddings to achieve state-of-the-art results on named-entity recognition in both CoNLL and Ontonotes NER. Our system achieves an F1 score of 90.90 on the test set for CoNLL 2003---significantly better than any previous system trained on public data, and matching a system employing massive private industrial query-log data.",
"The exponential increase in the availability of online reviews and recommendations makes sentiment classification an interesting topic in academic and industrial research. Reviews can span so many different domains that it is difficult to gather annotated training data for all of them. Hence, this paper studies the problem of domain adaptation for sentiment classifiers, hereby a system is trained on labeled reviews from one source domain but is meant to be deployed on another. We propose a deep learning approach which learns to extract a meaningful representation for each review in an unsupervised fashion. Sentiment classifiers trained with this high-level feature representation clearly outperform state-of-the-art methods on a benchmark composed of reviews of 4 types of Amazon products. Furthermore, this method scales well and allowed us to successfully perform domain adaptation on a larger industrial-strength dataset of 22 domains.",
"Computers understand very little of the meaning of human language. This profoundly limits our ability to give instructions to computers, the ability of computers to explain their actions to us, and the ability of computers to analyse and process text. Vector space models (VSMs) of semantics are beginning to address these limits. This paper surveys the use of VSMs for semantic processing of text. We organize the literature on VSMs according to the structure of the matrix in a VSM. There are currently three broad classes of VSMs, based on term-document, word-context, and pair-pattern matrices, yielding three classes of applications. We survey a broad range of applications in these three categories and we take a detailed look at a specific open source project in each category. Our goal in this survey is to show the breadth of applications of VSMs for semantics, to provide a new perspective on VSMs for those who are already familiar with the area, and to provide pointers into the literature for those who are less familiar with the field.",
"",
"",
"A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach allows to take advantage of longer contexts.",
"Recursive structure is commonly found in the inputs of different modalities such as natural scene images or natural language sentences. Discovering this recursive structure helps us to not only identify the units that an image or sentence contains but also how they interact to form a whole. We introduce a max-margin structure prediction architecture based on recursive neural networks that can successfully recover such structure both in complex scene images as well as sentences. The same algorithm can be used both to provide a competitive syntactic parser for natural language sentences from the Penn Treebank and to outperform alternative approaches for semantic scene segmentation, annotation and classification. For segmentation and annotation our algorithm obtains a new level of state-of-the-art performance on the Stanford background dataset (78.1 ). The features from the image parse tree outperform Gist descriptors for scene classification by 4 ."
]
} |
1506.00326 | 1985007871 | Securely sharing and managing personal content is a challenging task in multi-device environments. In this paper, we design and implement a new platform called personal content networking (PCN). Our work is inspired by content-centric networking (CCN) because we aim to enable access to personal content using its name instead of its location. The unique challenge of PCN is to support secure file operations such as replication, updates, and access control over distributed untrusted devices. The primary contribution of this work is the design and implementation of a secure content management platform that supports secure updates, replications, and fine-grained content-centric access control of files. Furthermore, we demonstrate its feasibility through a prototype implementation on the CCNx skeleton. | Several systems have been designed for multi-device environments. Unmanaged Internet Architecture (UIA) provides zero-configuration connectivity among mobile devices through personal names @cite_45 . Unlike the existing work, UIA assumes that each device has its own persistent namespace, and a user must track all files located across multiple devices. In contrast, Eyo @cite_30 offers a device transparency model in which users view and manage their entire data collection of all devices through periodically flooding metadata everywhere. PersonalRAID @cite_39 supports optimistic replication at a volume level, and a mobile storage device is used to provide the abstraction of a single coherent storage name space that is available everywhere, and it ensures reliability through maintaining data redundancy on a number of storage devices. Footloose @cite_4 is a user-centered data store that can share data and reconcile conflicts across diverse devices. Footloose supports application-specific optimistic replication with eventual consistency (e.g. address books), and yet it uses a persistent flat namespace (called ObjectID). | {
"cite_N": [
"@cite_30",
"@cite_45",
"@cite_4",
"@cite_39"
],
"mid": [
"2008335092",
"2008427387",
"2122351934",
"2096414445"
],
"abstract": [
"This paper proposes a new storage model, device transparency, in which users view and manage their entire data collection from any of their devices, even from disconnected storage-limited devices holding only a subset of the entire collection.",
"The Unmanaged Internet Architecture (UIA) provides zero-configuration connectivity among mobile devices through personal names. Users assign personal names through an ad hoc device introduction process requiring no central allocation. Once assigned, names bind securely to the global identities of their target devices independent of network location. Each user manages one namespace, shared among all the user's devices and always available on each device. Users can also name other users to share resources with trusted acquaintances. Devices with naming relationships automatically arrange connectivity when possible, both in ad hoc networks and using global infrastructure when available. A UIA prototype demonstrates these capabilities using optimistic replication for name resolution and group management and a routing algorithm exploiting the user's social network for connectivity.",
"Users are increasingly inundated with small devices with communication and storage capabilities. Unfortunately, the user is still responsible for reconciling all of the devices whenever a change is made. We present Footloose, a user-centered data store that can share data and reconcile conflicts across diverse devices. Footloose is an optimistic system based on physical eventual consistency: consistency based on the movement of devices, and selective conflict resolution, which allows conflicts to flow through devices that cannot resolve the conflict to devices that can. Using these techniques, Footloose can present consistent views of data on the devices closest to the user without user interaction.",
"This paper presents the design and implementation of a mobile storage system called a Personal-RAID. PersonalRAID manages a number of disconnected storage devices. At the heart of a Personal-RAID system is a mobile storage device that transparently propagates data to ensure eventual consistency. Using this mobile device, a PersonalRAID provides the abstraction of a single coherent storage name space that is available everywhere, and it ensures reliability by maintaining data redundancy on a number of storage devices. One central aspect of the PersonalRAID design is that the entire storage system consists solely of a collection of storage logs; the log-structured design not only provides an efficient means for update propagation, but also allows efficient direct I O accesses to the logs without incurring unnecessary log replay delays. The PersonalRAID prototype demonstrates that the system provides the desired transparency and reliability functionalities without imposing any serious performance penalty on a mobile storage user."
]
} |
1506.00326 | 1985007871 | Securely sharing and managing personal content is a challenging task in multi-device environments. In this paper, we design and implement a new platform called personal content networking (PCN). Our work is inspired by content-centric networking (CCN) because we aim to enable access to personal content using its name instead of its location. The unique challenge of PCN is to support secure file operations such as replication, updates, and access control over distributed untrusted devices. The primary contribution of this work is the design and implementation of a secure content management platform that supports secure updates, replications, and fine-grained content-centric access control of files. Furthermore, we demonstrate its feasibility through a prototype implementation on the CCNx skeleton. | : Wide area P2P storage can be classified based on the overlay structure: (1) a structured system (e.g. PAST @cite_21 , CFS, Ivy) forms a structured overlay network using a distributed hash table (DHT), and (2) a structure-less scheme (e.g. Gnutella and eDonkey) forms a structure-less overlay network where the overlay links are arbitrarily established. Unlike unstructured P2P networks, DHTs provide better performance for searching for items over a large number of distributed nodes, and they have been widely adopted to implement wide area P2P storage. Most P2P storage systems assume wired Internet scenarios and support strong consistency, which is less suitable for personal content networking. As a recent work, Plethora focuses on semi-static peers with strong network connectivity and a partially persistent network state. In a semi-static P2P network, peers are likely to remain participants in the network over long periods of time (e.g., compute servers), and are capable of providing reasonably high availability and response-time guarantees @cite_2 . | {
"cite_N": [
"@cite_21",
"@cite_2"
],
"mid": [
"1957582590",
"1502216169"
],
"abstract": [
"This paper sketches the design of PAST, a large-scale, Internet-based, global storage utility that provides scalability, high availability, persistence and security. PAST is a peer-to-peer Internet application and is entirely self-organizing. PAST nodes serve as access points for clients, participate in the routing of client requests, and contribute storage to the system. Nodes are not trusted, they may join the system at any time and may silently leave the system without warning. Yet, the system is able to provide strong assurances, efficient storage access, load balancing and scalability. Among the most interesting aspects of PAST's design are (1) the Pastry location and routing scheme, which reliably and efficiently routes client requests among the PAST nodes, has good network locality properties and automatically resolves node failures and node additions; (2) the use of randomization to ensure diversity in the set of nodes that store a file's replicas and to provide load balancing; and (3) the optional use of smartcards, which are held by each PAST user and issued by a third party called a broker The smartcards support a quota system that balances supply and demand of storage in the system.",
"Trends in conventional storage infrastructure motivate the development of foundational technologies for building a wide-area read-write storage repository capable of providing a single image of a distributed storage resource The overarching design goals of such an infrastructure include client performance, global resource utilization, system scalability (providing a single logical view of larger resource and user pools) and application scalability (enabling single applications with large resource requirements) Such a storage infrastructure forms the basis for second generation data-grid efforts underlying massive data handling in high-energy physics, nanosciences, and bioinformatics, among others. This paper describes some of the foundational technologies underlying such a repository, Plethora, for semi-static peer-to-peer (P2P) networks implemented on a wide-area Internet testbed In contrast to many current efforts that focus entirely on unstructured dynamic P2P environments, Plethora focuses on semi-static peers with strong network connectivity and a partially persistent network state In a semi-static P2P network, peers are likely to remain participants in the network over long periods of time (e.g., compute servers), and are capable of providing reasonably high availability and response-time guarantees The repository integrates novel concepts in locality enhancing overlay networks, transactional semantics for read-write data coupled with hierarchical versioning, and novel erasure codes for robustness While mentioning approaches taken by Plethora to other problems, this paper focuses on the problem of routing data request to blocks, while integrating caching and locality enhancing overlays into a single framework We show significant performance improvements resulting from our routing techniques."
]
} |
1506.00326 | 1985007871 | Securely sharing and managing personal content is a challenging task in multi-device environments. In this paper, we design and implement a new platform called personal content networking (PCN). Our work is inspired by content-centric networking (CCN) because we aim to enable access to personal content using its name instead of its location. The unique challenge of PCN is to support secure file operations such as replication, updates, and access control over distributed untrusted devices. The primary contribution of this work is the design and implementation of a secure content management platform that supports secure updates, replications, and fine-grained content-centric access control of files. Furthermore, we demonstrate its feasibility through a prototype implementation on the CCNx skeleton. | Wide area P2P storage can be classified based on the overlay structure: (1) a structured system (e.g. PAST @cite_21 , CFS, Ivy) forms a structured overlay network using a distributed hash table (DHT), and (2) a structure-less scheme (e.g. Gnutella and eDonkey) forms a structure-less overlay network where the overlay links are arbitrarily established. Unlike unstructured P2P networks, DHTs provide better performance for searching for items over a large number of distributed nodes, and they have been widely adopted to implement wide area P2P storage. Most P2P storage systems assume wired Internet scenarios and support strong consistency, which is less suitable for personal content networking. As a recent work, Plethora focuses on semi-static peers with strong network connectivity and a partially persistent network state. In a semi-static P2P network, peers are likely to remain participants in the network over long periods of time (e.g., compute servers), and are capable of providing reasonably high availability and response-time guarantees @cite_2 . | {
"cite_N": [
"@cite_38",
"@cite_26",
"@cite_46",
"@cite_1",
"@cite_32",
"@cite_27",
"@cite_47",
"@cite_10"
],
"mid": [
"2104112849",
"",
"100863554",
"1572593068",
"2159339961",
"1496748351",
"1524977650",
"2158978506"
],
"abstract": [
"No secure network file system has ever grown to span the Internet. Existing systems all lack adequate key management for security at a global scale. Given the diversity of the Internet, any particular mechanism a file system employs to manage keys will fail to support many types of use.We propose separating key management from file system security, letting the world share a single global file system no matter how individuals manage keys. We present SFS, a secure file system that avoids internal key management. While other file systems need key management to map file names to encryption keys, SFS file names effectively contain public keys, making them self-certifying pathnames. Key management in SFS occurs outside of the file system, in whatever procedure users choose to generate file names.Self-certifying pathnames free SFS clients from any notion of administrative realm, making inter-realm file sharing trivial. They let users authenticate servers through a number of different techniques. The file namespace doubles as a key certification namespace, so that people can realize many key management schemes using only standard file utilities. Finally, with self-certifying pathnames, people can bootstrap one key management mechanism using another. These properties make SFS more versatile than any file system with built-in key management.",
"",
"This paper presents SiRiUS, a secure file system designed to be layered over insecure network and P2P file systems such as NFS, CIFS, OceanStore, and Yahoo! Briefcase. SiRiUS assumes the network storage is untrusted and provides its own read-write cryptographic access control for file level sharing. Key management and revocation is simple with minimal out-of-band communication. File system freshness guarantees are supported by SiRiUS using hash tree constructions. SiRiUS contains a novel method of performing file random access in a cryptographic file system without the use of a block server. Extensions to SiRiUS include large scale group sharing using the NNL key revocation construction. Our implementation of SiRiUS performs well relative to the underlying file system despite using cryptographic operations.",
"Plutus is a cryptographic storage system that enables secure file sharing without placing much trust on the file servers. In particular, it makes novel use of cryptographic primitives to protect and share files. Plutus features highly scalable key management while allowing individual users to retain direct control over who gets access to their files. We explain the mechanisms in Plutus to reduce the number of cryptographic keys exchanged between users by using filegroups, distinguish file read and write access, handle user revocation efficiently, and allow an untrusted server to authorize file writes. We have built a prototype of Plutus on OpenAFS. Measurements of this prototype show that Plutus achieves strong security with overhead comparable to systems that encrypt all network traffic.",
"Although cryptographic techniques are playing an increasingly important role in modern computing system security, user-level tools for encrypting file data are cumbersome and suffer from a number of inherent vulnerabilities. The Cryptographic File System (CFS) pushes encryption services into the file system itself. CFS supports secure storage at the system level through a standard Unix file system interface to encrypted files. Users associate a cryptographic key with the directories they wish to protect. Files in these directories (as well as their pathname components) are transparently encrypted and decrypted with the specified key without further user intervention; cleartext is never stored on a disk or sent to a remote file server. CFS can use any available file system for its underlying storage without modification, including remote file servers such as NFS. System management functions, such as file backup, work in a normal manner and without knowledge of the key. This paper describes the design and implementation of CFS under Unix. Encryption techniques for file system-level encryption are described, and general issues of cryptographic system interfaces to support routine secure computing are discussed.",
"Trust management credentials directly authorize actions, rather than divide the authorization task into authentication and access control. Unlike traditional credentials, which bind keys to principals, trust management credentials bind keys to the authorization to perform certain tasks. The Distributed Credential FileSystem (DisCFS) uses trust management credentials to identify: (1) files being stored; (2) users; and (3) conditions under which their file access is allowed. Users share files by delegating access rights, issuing credentials in the style of traditional capabilities. Credentials permit, for example, access by remote users not known in advance to the file server, which simply enforces sharing policies rather than entangling itself in their management. Throughput and latency benchmarks of our prototype DisCFS implementation indicate performance roughly comparable to NFS version 2, while preserving the advantages of credentials for distributed control.",
"How should a distributed file system manage access to protected content? On one hand, distributed storage should make data access pervasive: authorized users should be able to access their data from any location. On the other hand, content protection is designed to restrict access -- this is often accomplished by limiting the set of computers from which content can be accessed. In this paper, we propose a new method for storing content in distributed storage called Cobalt. Rather than grant access to data based on the computer that reads the data, Cobalt grants access based on the physical proximity of authorized users. Protected content is stored encrypted in the distributed Blue File System; files can only be decrypted through the cooperation of a personal, mobile device such as cell phone. The Cobalt device is verified by content providers: it acts as a proxy that protects their interests by only decrypting data when policies specified during content acquisition are satisfied. Wireless communication with the device is used to determine the physical proximity of its user; when the Cobalt device moves out of range, protected content is made inaccessible. Our results show that Cobalt adds only modest overhead to content acquisition and playback, yet it enables new forms of interaction such as the ability to access protected content on ad hoc media players and create playlists that adapt to the tastes of nearby users.",
"The Internet enables global sharing of data across organizational boundaries. Distributed file systems facilitate data sharing in the form of remote file access. However, traditional access control mechanisms used in distributed file systems are intended for machines under common administrative control, and rely on maintaining a centralized database of user identities. They fail to scale to a large user base distributed across multiple organizations. We provide a survey of decentralized access control mechanisms in distributed file systems intended for large scale, in both administrative domains and users. We identify essential properties of such access control mechanisms. We analyze both popular production and experimental distributed file systems in the context of our survey."
]
} |
1506.00326 | 1985007871 | Securely sharing and managing personal content is a challenging task in multi-device environments. In this paper, we design and implement a new platform called personal content networking (PCN). Our work is inspired by content-centric networking (CCN) because we aim to enable access to personal content using its name instead of its location. The unique challenge of PCN is to support secure file operations such as replication, updates, and access control over distributed untrusted devices. The primary contribution of this work is the design and implementation of a secure content management platform that supports secure updates, replications, and fine-grained content-centric access control of files. Furthermore, we demonstrate its feasibility through a prototype implementation on the CCNx skeleton. | The following concepts are closely related: user authentication, access control authorization, and data confidentiality. Existing access control systems can be classified based on their authentication method. When AUTH (UNIX's default) and Kerberos are used, systems mostly provide UNIX-style ACL (e.g. Network File System (NFS), Andrew File System (AFS), xFS @cite_26 ). When public key cryptography is used, systems typically support either UNIX-style ACL (e.g. SFS @cite_38 ) or certificate authorization (e.g. DisCFS @cite_27 ). These systems assume that the servers are trusted, but the network is not secure; thus, data confidentiality is guaranteed through securing the channel (e.g. SSL). If the servers are not trustworthy, we can either rely on other semi-trusted servers as in Cobalt @cite_47 or use cryptographic encryption to preserve data confidentiality as in Cryptographic File System (CFS) @cite_32 , Plutus @cite_1 , and SiRiUS @cite_46 . A detailed survey of recent decentralized access control has been presented in this survey paper @cite_10 . | {
"cite_N": [
"@cite_46",
"@cite_1",
"@cite_32"
],
"mid": [
"100863554",
"1572593068",
"2159339961"
],
"abstract": [
"This paper presents SiRiUS, a secure file system designed to be layered over insecure network and P2P file systems such as NFS, CIFS, OceanStore, and Yahoo! Briefcase. SiRiUS assumes the network storage is untrusted and provides its own read-write cryptographic access control for file level sharing. Key management and revocation is simple with minimal out-of-band communication. File system freshness guarantees are supported by SiRiUS using hash tree constructions. SiRiUS contains a novel method of performing file random access in a cryptographic file system without the use of a block server. Extensions to SiRiUS include large scale group sharing using the NNL key revocation construction. Our implementation of SiRiUS performs well relative to the underlying file system despite using cryptographic operations.",
"Plutus is a cryptographic storage system that enables secure file sharing without placing much trust on the file servers. In particular, it makes novel use of cryptographic primitives to protect and share files. Plutus features highly scalable key management while allowing individual users to retain direct control over who gets access to their files. We explain the mechanisms in Plutus to reduce the number of cryptographic keys exchanged between users by using filegroups, distinguish file read and write access, handle user revocation efficiently, and allow an untrusted server to authorize file writes. We have built a prototype of Plutus on OpenAFS. Measurements of this prototype show that Plutus achieves strong security with overhead comparable to systems that encrypt all network traffic.",
"Although cryptographic techniques are playing an increasingly important role in modern computing system security, user-level tools for encrypting file data are cumbersome and suffer from a number of inherent vulnerabilities. The Cryptographic File System (CFS) pushes encryption services into the file system itself. CFS supports secure storage at the system level through a standard Unix file system interface to encrypted files. Users associate a cryptographic key with the directories they wish to protect. Files in these directories (as well as their pathname components) are transparently encrypted and decrypted with the specified key without further user intervention; cleartext is never stored on a disk or sent to a remote file server. CFS can use any available file system for its underlying storage without modification, including remote file servers such as NFS. System management functions, such as file backup, work in a normal manner and without knowledge of the key. This paper describes the design and implementation of CFS under Unix. Encryption techniques for file system-level encryption are described, and general issues of cryptographic system interfaces to support routine secure computing are discussed."
]
} |
1506.00326 | 1985007871 | Securely sharing and managing personal content is a challenging task in multi-device environments. In this paper, we design and implement a new platform called personal content networking (PCN). Our work is inspired by content-centric networking (CCN) because we aim to enable access to personal content using its name instead of its location. The unique challenge of PCN is to support secure file operations such as replication, updates, and access control over distributed untrusted devices. The primary contribution of this work is the design and implementation of a secure content management platform that supports secure updates, replications, and fine-grained content-centric access control of files. Furthermore, we demonstrate its feasibility through a prototype implementation on the CCNx skeleton. | Given that Attribute-Based Encryption (ABE) is designed to provide fine-grained, expressive access control, several existing works have used ABE for content sharing over untrusted storage @cite_0 @cite_7 @cite_18 . In particular, @cite_18 used a key policy ABE (KP-ABE) to provide privacy-aware content sharing over untrusted cloud storage and Proxy Re-Encryption (PRE) to delegate the task of re-encryption to cloud servers. While PCN is considered to be a cryptographic file system, unlike existing systems, PCN provides fine-grained expressive access control using CP-ABE in a fully distributed environment with untrusted nodes and it allows file owners to set up expressive access policies based on attributes (e.g. college friends, family members, etc.). Furthermore, none of the aforementioned systems provide a secure binding between the name and data; thus, the channel must be secured in order to prevent man-in-the-middle attacks. | {
"cite_N": [
"@cite_0",
"@cite_18",
"@cite_7"
],
"mid": [
"2123558779",
"2116626915",
"2116648071"
],
"abstract": [
"Access control in content distribution networks (CDNs) is a long-standing problem and has attracted extensive research. Traditional centralized access control approaches, such as reference monitor based approach, do not suit for CDNs as such networks are of large scale and geographically distributed in nature. Current CDNs usually resort to cryptographic-based distributed approaches for better fulfilling the goal of access control. Hence, it is highly critical to design and adapt appropriate cryptographic primitives for such purpose. In this paper, we propose a novel distributed access control approach for CDNs by exploiting a new cryptographic primitive called Ciphertext Policy Attributed-Based Encryption (CP-ABE). Our approach provides flexible yet fine-grained access control (per file level) so that the contents are available only to the authorized users. We further consider the protection of user privacy and enhance the current design of CP-ABE so that not only the cAccess control in content distribution networks (CDNs) is a long-standing problem and has attracted extensive research. Traditional centralized access control approaches, such as reference monitor based approach, do not suit for CDNs as such networks are of large scale and geographically distributed in nature. Current CDNs usually resort to cryptographic-based distributed approaches for better fulfilling the goal of access control. Hence, it is highly critical to design and adapt appropriate cryptographic primitives for such purpose. In this paper, we propose a novel distributed access control approach for CDNs by exploiting a new cryptographic primitive called ciphertext policy attributed-based encryption (CP-ABE). Our approach provides flexible yet fine-grained access control (per file level) so that the contents are available only to the authorized users. 
We further consider the protection of user privacy and enhance the current design of CP-ABE so that not only the contents themselves but also the access policies, which could lead to the revelation of sensitive user information, are well protected.ontents themselves but also the access policies, which could lead to the revelation of sensitive user information, are well protected.",
"",
"Online social networks (OSNs) are immensely popular, with some claiming over 200 million users. Users share private content, such as personal information or photographs, using OSN applications. Users must trust the OSN service to protect personal information even as the OSN provider benefits from examining and sharing that information. We present Persona, an OSN where users dictate who may access their information. Persona hides user data with attribute-based encryption (ABE), allowing users to apply fine-grained policies over who may view their data. Persona provides an effective means of creating applications in which users, not the OSN, define policy over access to private data. We demonstrate new cryptographic mechanisms that enhance the general applicability of ABE. We show how Persona provides the functionality of existing online social networks with additional privacy benefits. We describe an implementation of Persona that replicates Facebook applications and show that Persona provides acceptable performance when browsing privacy-enhanced web pages, even on mobile devices."
]
} |
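The CP-ABE approach summarized in the record above hinges on whether a user's attribute set satisfies the access policy attached to a ciphertext. As a hedged sketch of that satisfaction check only (real CP-ABE enforces this cryptographically; the policy-tree encoding here is a hypothetical illustration, not the scheme's actual data structure):

```python
# Minimal sketch of CP-ABE-style access-policy evaluation: a ciphertext
# carries a boolean policy over attributes, and a user's key embeds an
# attribute set. Decryption succeeds only if the set satisfies the policy.
# This boolean check is illustrative; real CP-ABE enforces the same
# condition cryptographically rather than with a plaintext comparison.

def satisfies(policy, attributes):
    """Evaluate an (op, children) policy tree against a set of attributes."""
    if isinstance(policy, str):          # leaf: a single required attribute
        return policy in attributes
    op, children = policy
    results = (satisfies(c, attributes) for c in children)
    return all(results) if op == "AND" else any(results)

# Policy: (doctor AND cardiology) OR admin
policy = ("OR", [("AND", ["doctor", "cardiology"]), "admin"])
print(satisfies(policy, {"doctor", "cardiology"}))  # True
print(satisfies(policy, {"doctor"}))                # False
```

This per-file policy tree is what gives the approach its fine-grained (per-file-level) access control.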
1506.00529 | 2295185209 | We establish an equivalence between two seemingly different theories: one is the traditional axiomatisation of incomplete preferences on horse lotteries based on the mixture independence axiom; the other is the theory of desirable gambles developed in the context of imprecise probability. The equivalence allows us to revisit incomplete preferences from the viewpoint of desirability and through the derived notion of coherent lower previsions. On this basis, we obtain new results and insights: in particular, we show that the theory of incomplete preferences can be developed assuming only the existence of a worst act---no best act is needed---, and that a weakened Archimedean axiom suffices too; this axiom allows us also to address some controversy about the regularity assumption (that probabilities should be positive---they need not), which enables us also to deal with uncountable possibility spaces; we show that it is always possible to extend in a minimal way a preference relation to one with a worst act, and yet the resulting relation is never Archimedean, except in a trivial case; we show that the traditional notion of state independence coincides with the notion called strong independence in imprecise probability---this leads us to give much a weaker definition of state independence than the traditional one; we rework and uniform the notions of complete preferences, beliefs, values; we argue that Archimedeanity does not capture all the problems that can be modelled with sets of expected utilities and we provide a new notion that does precisely that. Perhaps most importantly, we argue throughout that desirability is a powerful and natural setting to model, and work with, incomplete preferences, even in case of non-Archimedean problems. This leads us to suggest that desirability, rather than preference, should be the primitive notion at the basis of decision-theoretic axiomatisations. 
| The link between desirability and preference has been surfacing in the literature in a number of cases, but has apparently gone unnoticed. That it has surfaced is not surprising, because the theoretical study of preferences based on the mixture-independence axiom results in, and is worked out using, cones; cones are also the fundamental tool in desirability. That it has not been remarked and exploited explicitly, as we do in this paper, is. Perhaps the most evident case where the two theories have nearly touched each other is in Galaabaatar and Karni's work @cite_10 . In the next sections, we discuss this and two other main approaches in the literature that have dealt with the axiomatisation of incomplete preferences, and compare our approach with them. | {
"cite_N": [
"@cite_10"
],
"mid": [
"2154367906"
],
"abstract": [
"This paper extends the subjective expected utility model of decision making under uncertainty to include incomplete beliefs and tastes. The main results are two axiomatizations of the multiprior expected multiutility representations of preference relations under uncertainty. The paper also introduces new axiomatizations of Knightian uncertainty and the expected multiutility model with complete beliefs."
]
} |
1506.00529 | 2295185209 | We establish an equivalence between two seemingly different theories: one is the traditional axiomatisation of incomplete preferences on horse lotteries based on the mixture independence axiom; the other is the theory of desirable gambles developed in the context of imprecise probability. The equivalence allows us to revisit incomplete preferences from the viewpoint of desirability and through the derived notion of coherent lower previsions. On this basis, we obtain new results and insights: in particular, we show that the theory of incomplete preferences can be developed assuming only the existence of a worst act---no best act is needed---, and that a weakened Archimedean axiom suffices too; this axiom allows us also to address some controversy about the regularity assumption (that probabilities should be positive---they need not), which enables us also to deal with uncountable possibility spaces; we show that it is always possible to extend in a minimal way a preference relation to one with a worst act, and yet the resulting relation is never Archimedean, except in a trivial case; we show that the traditional notion of state independence coincides with the notion called strong independence in imprecise probability---this leads us to give much a weaker definition of state independence than the traditional one; we rework and uniform the notions of complete preferences, beliefs, values; we argue that Archimedeanity does not capture all the problems that can be modelled with sets of expected utilities and we provide a new notion that does precisely that. Perhaps most importantly, we argue throughout that desirability is a powerful and natural setting to model, and work with, incomplete preferences, even in case of non-Archimedean problems. This leads us to suggest that desirability, rather than preference, should be the primitive notion at the basis of decision-theoretic axiomatisations. 
| One of the most influential works for this paper has been the one carried out by Galaabaatar and Karni (GK) in @cite_10 . | {
"cite_N": [
"@cite_10"
],
"mid": [
"2154367906"
],
"abstract": [
"This paper extends the subjective expected utility model of decision making under uncertainty to include incomplete beliefs and tastes. The main results are two axiomatizations of the multiprior expected multiutility representations of preference relations under uncertainty. The paper also introduces new axiomatizations of Knightian uncertainty and the expected multiutility model with complete beliefs."
]
} |
1506.00481 | 2952861847 | This paper presents a computationally efficient yet powerful binary framework for robust facial representation based on image gradients. It is termed as structural binary gradient patterns (SBGP). To discover underlying local structures in the gradient domain, we compute image gradients from multiple directions and simplify them into a set of binary strings. The SBGP is derived from certain types of these binary strings that have meaningful local structures and are capable of resembling fundamental textural information. They detect micro orientational edges and possess strong orientation and locality capabilities, thus enabling great discrimination. The SBGP also benefits from the advantages of the gradient domain and exhibits profound robustness against illumination variations. The binary strategy realized by pixel correlations in a small neighborhood substantially simplifies the computational complexity and achieves extremely efficient processing with only 0.0032s in Matlab for a typical face image. Furthermore, the discrimination power of the SBGP can be enhanced on a set of defined orientational image gradient magnitudes, further enforcing locality and orientation. Results of extensive experiments on various benchmark databases illustrate significant improvements of the SBGP based representations over the existing state-of-the-art local descriptors in the terms of discrimination, robustness and complexity. Codes for the SBGP methods will be available at this http URL | Local histograms built on IGO statistics have been considered as visually prominent features and have favorable properties such as invariance against illumination. In @cite_39 , Zhang et al. computed Gradientfaces by using the IGO representation instead of intensities to obtain an illumination-insensitive measure.
They showed that features extracted from the gradient domain are more discriminative and robust than those in the intensity domain, and are even more tolerant to illumination variations than methods based on the reflectance model @cite_63 @cite_54 @cite_29 . Similarly, Tzimiropoulos et al. @cite_50 presented a simple yet robust similarity measure based on the IGO representation and the cosine kernel of IGO differences between images ( @math ). A PCA subspace is then learned in the IGO space to generate a more compact, discriminative and robust representation, referred to as the @math @cite_50 . | {
"cite_N": [
"@cite_54",
"@cite_29",
"@cite_39",
"@cite_50",
"@cite_63"
],
"mid": [
"2075268548",
"2146772029",
"2103056131",
"2099629511",
"2109486640"
],
"abstract": [
"This paper presents a new approach for face recognition based on the fusion of tensors of census transform histograms from Local Gaussian features maps. Local Gaussian feature maps encode the most relevant information from Gaussian derivative features. Census Transform (CT) histograms are calculated and concatenated to form a tensor for each class of Gaussian map. Multi-linear Principal Component Analysis (MPCA) is applied to each tensor to reduce the number of dimensions as well as the correlation between neighboring pixels due to the Census Transform. We then train Kernel Discriminative Common Vectors (KDCV) to generate a discriminative vector using the results of the MPCA. Results of recognition using MPCA of tensors-CT histograms from Gaussian features maps with KDCV is shown to compare favorably with competing techniques that use more complex features maps like for example Gabor features maps in the FERET and Yale datasets. Additional experiments were done in the Yale B+ extended Yale B Faces dataset to show the performance of Gaussian features map with hard illumination changes.",
"In this paper, we present the logarithmic total variation (LTV) model for face recognition under varying illumination, including natural lighting conditions, where we rarely know the strength, direction, or number of light sources. The proposed LTV model has the ability to factorize a single face image and obtain the illumination invariant facial structure, which is then used for face recognition. Our model is inspired by the SQI model but has better edge-preserving ability and simpler parameter selection. The merit of this model is that neither does it require any lighting assumption nor does it need any training. The LTV model reaches very high recognition rates in the tests using both Yale and CMU PIE face databases as well as a face database containing 765 subjects under outdoor lighting conditions",
"In this correspondence, we propose a novel method to extract illumination insensitive features for face recognition under varying lighting called the gradient faces. Theoretical analysis shows gradient faces is an illumination insensitive measure, and robust to different illumination, including uncontrolled, natural lighting. In addition, gradient faces is derived from the image gradient domain such that it can discover underlying inherent structure of face images since the gradient domain explicitly considers the relationships between neighboring pixel points. Therefore, gradient faces has more discriminating power than the illumination insensitive measure extracted from the pixel domain. Recognition rates of 99.83 achieved on PIE database of 68 subjects, 98.96 achieved on Yale B of ten subjects, and 95.61 achieved on Outdoor database of 132 subjects under uncontrolled natural lighting conditions show that gradient faces is an effective method for face recognition under varying illumination. Furthermore, the experimental results on Yale database validate that gradient faces is also insensitive to image noise and object artifacts (such as facial expressions).",
"We introduce the notion of subspace learning from image gradient orientations for appearance-based object recognition. As image data are typically noisy and noise is substantially different from Gaussian, traditional subspace learning from pixel intensities very often fails to estimate reliably the low-dimensional subspace of a given data population. We show that replacing pixel intensities with gradient orientations and the l2 norm with a cosine-based distance measure offers, to some extend, a remedy to this problem. Within this framework, which we coin Image Gradient Orientations (IGO) subspace learning, we first formulate and study the properties of Principal Component Analysis of image gradient orientations (IGO-PCA). We then show its connection to previously proposed robust PCA techniques both theoretically and experimentally. Finally, we derive a number of other popular subspace learning techniques, namely, Linear Discriminant Analysis (LDA), Locally Linear Embedding (LLE), and Laplacian Eigenmaps (LE). Experimental results show that our algorithms significantly outperform popular methods such as Gabor features and Local Binary Patterns and achieve state-of-the-art performance for difficult problems such as illumination and occlusion-robust face recognition. In addition to this, the proposed IGO-methods require the eigendecomposition of simple covariance matrices and are as computationally efficient as their corresponding l2 norm intensity-based counterparts. Matlab code for the methods presented in this paper can be found at http: ibug.doc.ic.ac.uk resources.",
"A face image can be represented by a combination of large-and small-scale features. It is well-known that the variations of illumination mainly affect the large-scale features (low-frequency components), and not so much the small-scale features. Therefore, in relevant existing methods only the small-scale features are extracted as illumination-invariant features for face recognition, while the large-scale intrinsic features are always ignored. In this paper, we argue that both large-and small-scale features of a face image are important for face restoration and recognition. Moreover, we suggest that illumination normalization should be performed mainly on the large-scale features of a face image rather than on the original face image. A novel method of normalizing both the Small-and Large-scale (S&L) features of a face image is proposed. In this method, a single face image is first decomposed into large-and small-scale features. After that, illumination normalization is mainly performed on the large-scale features, and only a minor correction is made on the small-scale features. Finally, a normalized face image is generated by combining the processed large-and small-scale features. In addition, an optional visual compensation step is suggested for improving the visual quality of the normalized image. Experiments on CMU-PIE, Extended Yale B, and FRGC 2.0 face databases show that by using the proposed method significantly better recognition performance and visual results can be obtained as compared to related state-of-the-art methods."
]
} |
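The Gradientfaces measure discussed in the record above is concrete enough to sketch: the face is represented by the gradient orientation arctan(Gy/Gx) of a smoothed image, which cancels a locally multiplicative illumination term shared by both derivatives. A minimal sketch assuming numpy, with illustrative filter choices rather than the paper's exact operators:

```python
# Sketch of the Gradientfaces idea: represent a face by the gradient
# orientation arctan(Gy / Gx) of a Gaussian-smoothed image. A locally
# multiplicative illumination term scales Gy and Gx equally, so the
# orientation map is (largely) illumination insensitive. Kernel size and
# the use of np.gradient are illustrative, not the paper's exact operators.
import numpy as np

def gradientfaces(image, sigma=0.75):
    """Return the illumination-insensitive orientation map of a 2-D image."""
    radius = int(3 * sigma) + 1
    t = np.arange(-radius, radius + 1)
    g = np.exp(-t**2 / (2 * sigma**2))
    g /= g.sum()
    # Separable Gaussian smoothing: filter rows, then columns.
    smoothed = np.apply_along_axis(lambda r: np.convolve(r, g, mode="same"), 1, image)
    smoothed = np.apply_along_axis(lambda c: np.convolve(c, g, mode="same"), 0, smoothed)
    gy, gx = np.gradient(smoothed)        # derivatives along rows, columns
    return np.arctan2(gy, gx)             # orientation in (-pi, pi]

# A global rescaling of the lighting leaves the orientation map unchanged.
yy, xx = np.mgrid[0:32, 0:32]
face = np.sin(xx / 5.0) + 0.3 * yy
print(np.allclose(gradientfaces(face), gradientfaces(0.2 * face)))  # True
```

The final check shows the claimed invariance: scaling the whole image by a constant leaves every orientation value unchanged, since arctan2 depends only on the ratio of the two gradients.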
1506.00481 | 2952861847 | This paper presents a computationally efficient yet powerful binary framework for robust facial representation based on image gradients. It is termed as structural binary gradient patterns (SBGP). To discover underlying local structures in the gradient domain, we compute image gradients from multiple directions and simplify them into a set of binary strings. The SBGP is derived from certain types of these binary strings that have meaningful local structures and are capable of resembling fundamental textural information. They detect micro orientational edges and possess strong orientation and locality capabilities, thus enabling great discrimination. The SBGP also benefits from the advantages of the gradient domain and exhibits profound robustness against illumination variations. The binary strategy realized by pixel correlations in a small neighborhood substantially simplifies the computational complexity and achieves extremely efficient processing with only 0.0032s in Matlab for a typical face image. Furthermore, the discrimination power of the SBGP can be enhanced on a set of defined orientational image gradient magnitudes, further enforcing locality and orientation. Results of extensive experiments on various benchmark databases illustrate significant improvements of the SBGP based representations over the existing state-of-the-art local descriptors in the terms of discrimination, robustness and complexity. Codes for the SBGP methods will be available at this http URL | Recently, a number of local facial descriptors have been derived from the Gabor or LBP features or their combinations. In @cite_65 , the local Gabor binary pattern histogram sequence (LGBPHS) was proposed by first running Gabor filters on face images and then building LBP histogram features on the resulting Gabor magnitude faces. Similar methods include the histogram of Gabor phase patterns (HGPP) @cite_33 and Gabor volume based LBP (GV-LBP) @cite_36 .
These methods inherit the virtues of both the Gabor and LBP descriptors. However, they also share the drawbacks of Gabor-based representations, i.e., high computational complexity and high dimensionality. | {
"cite_N": [
"@cite_36",
"@cite_65",
"@cite_33"
],
"mid": [
"2075772568",
"",
"2054891869"
],
"abstract": [
"Information jointly contained in image space, scale and orientation domains can provide rich important clues not seen in either individual of these domains. The position, spatial frequency and orientation selectivity properties are believed to have an important role in visual perception. This paper proposes a novel face representation and recognition approach by exploring information jointly in image space, scale and orientation domains. Specifically, the face image is first decomposed into different scale and orientation responses by convolving multiscale and multi-orientation Gabor filters. Second, local binary pattern analysis is used to describe the neighboring relationship not only in image space, but also in different scale and orientation responses. This way, information from different domains is explored to give a good face representation for recognition. Discriminant classification is then performed based upon weighted histogram intersection or conditional mutual information with linear discriminant analysis techniques. Extensive experimental results on FERET, AR, and FRGC ver 2.0 databases show the significant advantages of the proposed method over the existing ones.",
"",
"A novel object descriptor, histogram of Gabor phase pattern (HGPP), is proposed for robust face recognition. In HGPP, the quadrant-bit codes are first extracted from faces based on the Gabor transformation. Global Gabor phase pattern (GGPP) and local Gabor phase pattern (LGPP) are then proposed to encode the phase variations. GGPP captures the variations derived from the orientation changing of Gabor wavelet at a given scale (frequency), while LGPP encodes the local neighborhood variations by using a novel local XOR pattern (LXP) operator. They are both divided into the nonoverlapping rectangular regions, from which spatial histograms are extracted and concatenated into an extended histogram feature to represent the original image. Finally, the recognition is performed by using the nearest-neighbor classifier with histogram intersection as the similarity measurement. The features of HGPP lie in two aspects: 1) HGPP can describe the general face images robustly without the training procedure; 2) HGPP encodes the Gabor phase information, while most previous face recognition methods exploit the Gabor magnitude information. In addition, Fisher separation criterion is further used to improve the performance of HGPP by weighing the subregions of the image according to their discriminative powers. The proposed methods are successfully applied to face recognition, and the experiment results on the large-scale FERET and CAS-PEAL databases show that the proposed algorithms significantly outperform other well-known systems in terms of recognition rate"
]
} |
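The "Gabor magnitude faces" on which LGBPHS, HGPP and GV-LBP build come from convolving the image with a bank of Gabor kernels and taking the response magnitude. A hedged sketch of one such kernel, with illustrative parameter values rather than the multi-scale, multi-orientation bank of the cited papers:

```python
# Sketch of one 2-D Gabor kernel of the kind these descriptors build on:
# a complex sinusoid (carrier) modulated by a Gaussian envelope. Filtering
# a face with such kernels and taking abs() of the response yields the
# "Gabor magnitude faces" on which the LBP histograms are then computed.
# Parameter values are illustrative, not the bank used in the cited papers.
import numpy as np

def gabor_kernel(size=11, wavelength=4.0, theta=0.0, sigma=2.5):
    """Return a complex Gabor kernel; convolve and take abs() for magnitude."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)          # rotated coordinate
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))  # Gaussian window
    carrier = np.exp(2j * np.pi * xr / wavelength)      # complex sinusoid
    return envelope * carrier

k = gabor_kernel()
print(k.shape, k[5, 5])  # (11, 11) (1+0j)
```

Sweeping `theta` and `wavelength` over several orientations and scales is what produces a filter bank, and also what drives up the dimensionality and cost noted above.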
1506.00481 | 2952861847 | This paper presents a computationally efficient yet powerful binary framework for robust facial representation based on image gradients. It is termed as structural binary gradient patterns (SBGP). To discover underlying local structures in the gradient domain, we compute image gradients from multiple directions and simplify them into a set of binary strings. The SBGP is derived from certain types of these binary strings that have meaningful local structures and are capable of resembling fundamental textural information. They detect micro orientational edges and possess strong orientation and locality capabilities, thus enabling great discrimination. The SBGP also benefits from the advantages of the gradient domain and exhibits profound robustness against illumination variations. The binary strategy realized by pixel correlations in a small neighborhood substantially simplifies the computational complexity and achieves extremely efficient processing with only 0.0032s in Matlab for a typical face image. Furthermore, the discrimination power of the SBGP can be enhanced on a set of defined orientational image gradient magnitudes, further enforcing locality and orientation. Results of extensive experiments on various benchmark databases illustrate significant improvements of the SBGP based representations over the existing state-of-the-art local descriptors in the terms of discrimination, robustness and complexity. Codes for the SBGP methods will be available at this http URL | As a simpler approach, Jie @cite_11 proposed a Weber local descriptor (WLD) based on the Weber's Law of human perception system, which states that the noticeable change of a stimulus is a constant ratio of the original stimulus. In @cite_62 , Tan and Triggs presented local ternary patterns (LTP) by extending LBP to 3-valued codes for increasing its robustness to noise in the near-uniform image regions. 
Both methods have been shown to be highly discriminative and resistant to illumination changes, extending the advantages of LBP. However, similar to LBP, both descriptors build local relationships in the intensity domain, which can be seriously affected by dramatic changes of pixel intensity. | {
"cite_N": [
"@cite_62",
"@cite_11"
],
"mid": [
"2131081720",
"2130258210"
],
"abstract": [
"Making recognition more reliable under uncontrolled lighting conditions is one of the most important challenges for practical face recognition systems. We tackle this by combining the strengths of robust illumination normalization, local texture-based face representations, distance transform based matching, kernel-based feature extraction and multiple feature fusion. Specifically, we make three main contributions: 1) we present a simple and efficient preprocessing chain that eliminates most of the effects of changing illumination while still preserving the essential appearance details that are needed for recognition; 2) we introduce local ternary patterns (LTP), a generalization of the local binary pattern (LBP) local texture descriptor that is more discriminant and less sensitive to noise in uniform regions, and we show that replacing comparisons based on local spatial histograms with a distance transform based similarity metric further improves the performance of LBP LTP based face recognition; and 3) we further improve robustness by adding Kernel principal component analysis (PCA) feature extraction and incorporating rich local appearance cues from two complementary sources-Gabor wavelets and LBP-showing that the combination is considerably more accurate than either feature set alone. The resulting method provides state-of-the-art performance on three data sets that are widely used for testing recognition under difficult illumination conditions: Extended Yale-B, CAS-PEAL-R1, and Face Recognition Grand Challenge version 2 experiment 4 (FRGC-204). For example, on the challenging FRGC-204 data set it halves the error rate relative to previously published methods, achieving a face verification rate of 88.1 at 0.1 false accept rate. Further experiments show that our preprocessing method outperforms several existing preprocessors for a range of feature sets, data sets and lighting conditions.",
"Inspired by Weber's Law, this paper proposes a simple, yet very powerful and robust local descriptor, called the Weber Local Descriptor (WLD). It is based on the fact that human perception of a pattern depends not only on the change of a stimulus (such as sound, lighting) but also on the original intensity of the stimulus. Specifically, WLD consists of two components: differential excitation and orientation. The differential excitation component is a function of the ratio between two terms: One is the relative intensity differences of a current pixel against its neighbors, the other is the intensity of the current pixel. The orientation component is the gradient orientation of the current pixel. For a given image, we use the two components to construct a concatenated WLD histogram. Experimental results on the Brodatz and KTH-TIPS2-a texture databases show that WLD impressively outperforms the other widely used descriptors (e.g., Gabor and SIFT). In addition, experimental results on human face detection also show a promising performance comparable to the best known results on the MIT+CMU frontal face test set, the AR face data set, and the CMU profile test set."
]
} |
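The LTP coding described in the record above is simple to make concrete: each neighbor is quantized to {-1, 0, +1} around the center value with a tolerance t, and the ternary string splits into a positive and a negative binary code. A minimal sketch for a single 8-neighborhood (the neighbor ordering and the value of t are illustrative choices):

```python
# Sketch of the local ternary pattern (LTP) coding for one pixel: each of
# the 8 neighbors is quantized to {-1, 0, +1} around the center value with
# a tolerance t, and the ternary string is split into the usual pair of
# binary codes (positive half, negative half). The neighborhood ordering
# and the value of t are illustrative choices.
def ltp_codes(center, neighbors, t=5):
    """Return (upper, lower) LTP binary codes for one 8-neighborhood."""
    ternary = [1 if p >= center + t else -1 if p <= center - t else 0
               for p in neighbors]
    upper = sum((s == 1) << i for i, s in enumerate(ternary))   # +1 bits
    lower = sum((s == -1) << i for i, s in enumerate(ternary))  # -1 bits
    return upper, lower

# Neighbors within +/-t of the center map to 0, which is what makes LTP
# less noise-sensitive than LBP in near-uniform regions.
print(ltp_codes(50, [58, 54, 50, 46, 42, 50, 61, 39]))  # (65, 144)
```

With t = 0 the upper code degenerates to plain LBP, which is exactly the noise sensitivity in near-uniform regions that the 3-valued extension addresses.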
1506.00481 | 2952861847 | This paper presents a computationally efficient yet powerful binary framework for robust facial representation based on image gradients. It is termed as structural binary gradient patterns (SBGP). To discover underlying local structures in the gradient domain, we compute image gradients from multiple directions and simplify them into a set of binary strings. The SBGP is derived from certain types of these binary strings that have meaningful local structures and are capable of resembling fundamental textural information. They detect micro orientational edges and possess strong orientation and locality capabilities, thus enabling great discrimination. The SBGP also benefits from the advantages of the gradient domain and exhibits profound robustness against illumination variations. The binary strategy realized by pixel correlations in a small neighborhood substantially simplifies the computational complexity and achieves extremely efficient processing with only 0.0032s in Matlab for a typical face image. Furthermore, the discrimination power of the SBGP can be enhanced on a set of defined orientational image gradient magnitudes, further enforcing locality and orientation. Results of extensive experiments on various benchmark databases illustrate significant improvements of the SBGP based representations over the existing state-of-the-art local descriptors in the terms of discrimination, robustness and complexity. Codes for the SBGP methods will be available at this http URL | The proposed SBGP is closely related to the center-symmetric local binary pattern (CS-LBP) @cite_61 which computes local binary from symmetric neighboring pixels. However, the SBGP differs distinctly in three aspects. First, structural patterns and multiple spatial resolutions are defined in the SBGP. We show some theoretical insights that the structural patterns of SBGP work as oriented edge detectors, a key to discriminative and compact representation. 
The multiple-spatial-resolution strategy increases the descriptor's flexibility and gives it stronger discriminative power. Second, motivated by the multi-channel strategy of invariant descriptors such as SIFT @cite_69 and POEM @cite_59 , we compute the SBGP descriptor on a set of orientational image gradient magnitudes, which further enhances its discriminative power. Finally, the CS-LBP was originally developed for image matching, whereas the SBGP is proposed for face recognition; the latter task often requires more detailed and robust local features than general matching features. As will be shown in Section 6.2.1, the CS-LBP is highly sensitive to significant illumination variations. | {
"cite_N": [
"@cite_61",
"@cite_69",
"@cite_59"
],
"mid": [
"2172196609",
"2151103935",
"2009094591"
],
"abstract": [
"This paper presents a novel method for interest region description. We adopted the idea that the appearance of an interest region can be well characterized by the distribution of its local features. The most well-known descriptor built on this idea is the SIFT descriptor that uses gradient as the local feature. Thus far, existing texture features are not widely utilized in the context of region description. In this paper, we introduce a new texture feature called center-symmetric local binary pattern (CS-LBP) that is a modified version of the well-known local binary pattern (LBP) feature. To combine the strengths of the SIFT and LBP, we use the CS-LBP as the local feature in the SIFT algorithm. The resulting descriptor is called the CS-LBP descriptor. In the matching and object category classification experiments, our descriptor performs favorably compared to the SIFT. Furthermore, the CS-LBP descriptor is computationally simpler than the SIFT.",
"This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.",
"A good feature descriptor is desired to be discriminative, robust, and computationally inexpensive in both terms of time and storage requirement. In the domain of face recognition, these properties allow the system to quickly deliver high recognition results to the end user. Motivated by the recent feature descriptor called Patterns of Oriented Edge Magnitudes (POEM), which balances the three concerns, this paper aims at enhancing its performance with respect to all these criteria. To this end, we first optimize the parameters of POEM and then apply the whitened principal-component-analysis dimensionality reduction technique to get a more compact, robust, and discriminative descriptor. For face recognition, the efficiency of our algorithm is proved by strong results obtained on both constrained (Face Recognition Technology, FERET) and unconstrained (Labeled Faces in the Wild, LFW) data sets in addition with the low complexity. Impressively, our algorithm is about 30 times faster than those based on Gabor filters. Furthermore, by proposing an additional technique that makes our descriptor robust to rotation, we validate its efficiency for the task of image matching."
]
} |
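The CS-LBP construction mentioned in the record above compares the four center-symmetric neighbor pairs rather than all eight neighbors against the center, yielding a 4-bit code per pixel. A minimal sketch (neighbor ordering and threshold are illustrative choices):

```python
# Sketch of the center-symmetric LBP (CS-LBP) code: instead of comparing
# all 8 neighbors to the center pixel (as plain LBP does), only the 4
# center-symmetric neighbor pairs are compared, giving a compact 4-bit
# code per pixel. Neighbor ordering and threshold are illustrative.
def cs_lbp(neighbors, threshold=0.0):
    """4-bit CS-LBP code from an 8-neighbor ring (opposite pairs compared)."""
    assert len(neighbors) == 8
    return sum((neighbors[i] - neighbors[i + 4] > threshold) << i
               for i in range(4))

ring = [90, 40, 70, 55, 30, 80, 70, 20]   # one sampled 8-pixel ring
print(cs_lbp(ring))  # 9: pairs (90,30) and (55,20) exceed the threshold
```

The halved code length (16 histogram bins instead of LBP's 256) is what makes the resulting descriptor so compact, at the cost of discarding the center pixel entirely.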
1505.08075 | 2949952998 | We propose a technique for learning representations of parser states in transition-based dependency parsers. Our primary innovation is a new control structure for sequence-to-sequence neural networks---the stack LSTM. Like the conventional stack data structures used in transition-based parsing, elements can be pushed to or popped from the top of the stack in constant time, but, in addition, an LSTM maintains a continuous space embedding of the stack contents. This lets us formulate an efficient parsing model that captures three facets of a parser's state: (i) unbounded look-ahead into the buffer of incoming words, (ii) the complete history of actions taken by the parser, and (iii) the complete contents of the stack of partially built tree fragments, including their internal structures. Standard backpropagation techniques are used for training and yield state-of-the-art parsing performance. | A variety of authors have used neural networks to predict parser actions in shift-reduce parsers. The earliest attempt we are aware of is due to . The resurgence of interest in neural networks has resulted in several applications to transition-based dependency parsers @cite_37 @cite_4 @cite_21 . In these works, the conditioning structure was manually crafted and sensitive to only certain properties of the state, while we are conditioning on the global state object. Like us, used recursively composed representations of the tree fragments (a head and its dependents). Neural networks have also been used to learn representations for use in chart parsing @cite_38 @cite_39 @cite_41 @cite_7 . | {
"cite_N": [
"@cite_38",
"@cite_37",
"@cite_4",
"@cite_7",
"@cite_41",
"@cite_21",
"@cite_39"
],
"mid": [
"",
"2951588135",
"2250861254",
"2251628756",
"2133280805",
"",
"2149337887"
],
"abstract": [
"",
"We present structured perceptron training for neural network transition-based dependency parsing. We learn the neural network representation using a gold corpus augmented by a large number of automatically parsed sentences. Given this fixed network representation, we learn a final layer using the structured perceptron with beam-search decoding. On the Penn Treebank, our parser reaches 94.26% unlabeled and 92.41% labeled attachment accuracy, which to our knowledge is the best accuracy on Stanford Dependencies to date. We also provide in-depth ablative analysis to determine which aspects of our model provide the largest gains in accuracy.",
"Almost all current dependency parsers classify based on millions of sparse indicator features. Not only do these features generalize poorly, but the cost of feature computation restricts parsing speed significantly. In this work, we propose a novel way of learning a neural network classifier for use in a greedy, transition-based dependency parser. Because this classifier learns and uses just a small number of dense features, it can work very fast, while achieving an about 2% improvement in unlabeled and labeled attachment scores on both English and Chinese datasets. Concretely, our parser is able to parse more than 1000 sentences per second at 92.2% unlabeled attachment score on the English Penn Treebank.",
"We propose the first implementation of an infinite-order generative dependency model. The model is based on a new recursive neural network architecture, the Inside-Outside Recursive Neural Network. This architecture allows information to flow not only bottom-up, as in traditional recursive neural networks, but also top-down. This is achieved by computing content as well as context representations for any constituent, and letting these representations interact. Experimental results on the English section of the Universal Dependency Treebank show that the infinite-order model achieves a perplexity seven times lower than the traditional third-order model using counting, and tends to choose more accurate parses in k-best lists. In addition, reranking with this model achieves state-of-the-art unlabelled attachment scores and unlabelled exact match scores.",
"Natural language parsing has typically been done with small sets of discrete categories such as NP and VP, but this representation does not capture the full syntactic nor semantic richness of linguistic phrases, and attempts to improve on this by lexicalizing phrases or splitting categories only partly address the problem at the cost of huge feature spaces and sparseness. Instead, we introduce a Compositional Vector Grammar (CVG), which combines PCFGs with a syntactically untied recursive neural network that learns syntactico-semantic, compositional vector representations. The CVG improves the PCFG of the Stanford Parser by 3.8% to obtain an F1 score of 90.4%. It is fast to train and implemented approximately as an efficient reranker it is about 20% faster than the current Stanford factored parser. The CVG learns a soft notion of head words and improves performance on the types of ambiguities that require semantic information such as PP attachments.",
"",
"We introduce a framework for syntactic parsing with latent variables based on a form of dynamic Sigmoid Belief Networks called Incremental Sigmoid Belief Networks. We demonstrate that a previous feed-forward neural network parsing model can be viewed as a coarse approximation to inference with this class of graphical model. By constructing a more accurate but still tractable approximation, we significantly improve parsing accuracy, suggesting that ISBNs provide a good idealization for parsing. This generative model of parsing achieves state-of-theart results on WSJ text and 8 error reduction over the baseline neural network parser."
]
} |
1505.08075 | 2949952998 | We propose a technique for learning representations of parser states in transition-based dependency parsers. Our primary innovation is a new control structure for sequence-to-sequence neural networks---the stack LSTM. Like the conventional stack data structures used in transition-based parsing, elements can be pushed to or popped from the top of the stack in constant time, but, in addition, an LSTM maintains a continuous space embedding of the stack contents. This lets us formulate an efficient parsing model that captures three facets of a parser's state: (i) unbounded look-ahead into the buffer of incoming words, (ii) the complete history of actions taken by the parser, and (iii) the complete contents of the stack of partially built tree fragments, including their internal structures. Standard backpropagation techniques are used for training and yield state-of-the-art parsing performance. | Finally, our work can be understood as a progression toward using larger contexts in parsing. An exhaustive summary is beyond the scope of this paper, but some of the important milestones in this tradition are the use of cube pruning to efficiently include nonlocal features in discriminative chart reranking @cite_34 , approximate decoding techniques based on LP relaxations in graph-based parsing to include higher-order features @cite_44 , and randomized hill-climbing methods that enable arbitrary nonlocal features in global discriminative parsing models @cite_29 . Since our parser is sensitive to any part of the input, its history, or its stack contents, it is similar in spirit to the last approach, which permits truly arbitrary features. | {
"cite_N": [
"@cite_44",
"@cite_29",
"@cite_34"
],
"mid": [
"1598566484",
"2252840007",
"2134729743"
],
"abstract": [
"A computer for implementing an event driven algorithm which is used in conjunction with a master computer is disclosed. The computer includes a plurality of processors coupled in a ring arrangement each of which is microprogrammable. Each processor includes a memory and a memory address generator. The generator can generate addresses based on a combination of signals from both the microcode and signals on the data bus.",
"Dependency parsing with high-order features results in a provably hard decoding problem. A lot of work has gone into developing powerful optimization methods for solving these combinatorial problems. In contrast, we explore, analyze, and demonstrate that a substantially simpler randomized greedy inference algorithm already suffices for near optimal parsing: a) we analytically quantify the number of local optima that the greedy method has to overcome in the context of first-order parsing; b) we show that, as a decoding algorithm, the greedy method surpasses dual decomposition in second-order parsing; c) we empirically demonstrate that our approach with up to third-order and global features outperforms the state-of-the-art dual decomposition and MCMC sampling methods when evaluated on 14 languages of non-projective CoNLL datasets.",
"Conventional n-best reranking techniques often suffer from the limited scope of the nbest list, which rules out many potentially good alternatives. We instead propose forest reranking, a method that reranks a packed forest of exponentially many parses. Since exact inference is intractable with non-local features, we present an approximate algorithm inspired by forest rescoring that makes discriminative training practical over the whole Treebank. Our final result, an F-score of 91.7, outperforms both 50-best and 100-best reranking baselines, and is better than any previously reported systems trained on the Treebank."
]
} |
1505.08097 | 2232222953 | This paper presents the first complete, integrated and end-to-end solution for ad hoc cloud computing environments. Ad hoc clouds harvest resources from existing sporadically available, non-exclusive (i.e. primarily used for some other purpose) and unreliable infrastructures. In this paper we discuss the problems ad hoc cloud computing solves and outline our architecture which is based on BOINC. | Chandra propose a similar idea using Nebulas (synonymous to an ad hoc cloud) where volunteer resources are used to create a cloud platform @cite_7 . They note that Nebulas are particularly useful for applications that do not have strong performance guarantees and hence the authors focus on the performance and reliability of such platforms. | {
"cite_N": [
"@cite_7"
],
"mid": [
"67629752"
],
"abstract": [
"Current cloud services are deployed on well-provisioned and centrally controlled infrastructures. However, there are several classes of services for which the current cloud model may not fit well: some do not need strong performance guarantees, the pricing may be too expensive for some, and some may be constrained by the data movement costs to the cloud. To satisfy the requirements of such services, we propose the idea of using distributed voluntary resources--those donated by end-user hosts--to form nebulas: more dispersed, less-managed clouds. We first discuss the requirements of cloud services and the challenges in meeting these requirements in such voluntary clouds. We then present some possible solutions to these challenges and also discuss opportunities for further improvements to make nebulas a viable cloud paradigm."
]
} |
1505.07765 | 1574502566 | A recurring problem when building probabilistic latent variable models is regularization and model selection, for instance, the choice of the dimensionality of the latent space. In the context of belief networks with latent variables, this problem has been addressed with Automatic Relevance Determination (ARD) employing Monte Carlo inference. We present a variational inference approach to ARD for Deep Generative Models using doubly stochastic variational inference to provide fast and scalable learning. We show empirical results on a standard dataset illustrating the effects of contracting the latent space automatically. We show that the resulting latent representations are significantly more compact without loss of expressive power of the learned models. | An initial Bayesian treatment for regularization of Neural Networks was performed in seminal work by @cite_3 and @cite_5 . They introduced a notion called Automatic Relevance Determination (ARD), which consists of the idea of using a prior distribution on generative weights attached to the latent variable which encourages the weights to be zero. Effectively, by integrating over such priors using Monte Carlo, settings for the variances of the prior can be inferred from data leading to pruning of unnecessary latent dimensions. ARD was also notably used in a variational treatment of the relevance vector machine @cite_8 to infer a mask over the data features needed for a predictive model. The idea of relevance determination for belief networks in combination with variational inference has also been explored before in @cite_1 , but is different from our model. | {
"cite_N": [
"@cite_5",
"@cite_1",
"@cite_3",
"@cite_8"
],
"mid": [
"1567512734",
"",
"2139701068",
"2952550066"
],
"abstract": [
"From the Publisher: Artificial \"neural networks\" are now widely used as flexible models for regression and classification applications, but questions remain regarding what these models mean, and how they can safely be used when training data is limited. Bayesian Learning for Neural Networks shows that Bayesian methods allow complex neural network models to be used without fear of the \"overfitting\" that can occur with traditional neural network learning methods. Insight into the nature of these complex Bayesian models is provided by a theoretical investigation of the priors over functions that underlie them. Use of these models in practice is made possible using Markov chain Monte Carlo techniques. Both the theoretical and computational aspects of this work are of wider statistical interest, as they contribute to a better understanding of how Bayesian methods can be applied to complex problems. Presupposing only the basic knowledge of probability and statistics, this book should be of interest to many researchers in statistics, engineering, and artificial intelligence. Software for Unix systems that implements the methods described is freely available over the Internet.",
"",
"Bayesian probability theory provides a unifying framework for data modelling. In this framework the overall aims are to find models that are well-matched to the data, and to use these models to make optimal predictions. Neural network learning is interpreted as an inference of the most probable parameters for the model, given the training data. The search in model space (i.e., the space of architectures, noise models, preprocessings, regularizers and weight decay constants) can then also be treated as an inference problem, in which we infer the relative probability of alternative models, given the data. This review describes practical techniques based on Gaussian approximations for implementation of these powerful methods for controlling, comparing and using adaptive networks.",
"The Support Vector Machine (SVM) of Vapnik (1998) has become widely established as one of the leading approaches to pattern recognition and machine learning. It expresses predictions in terms of a linear combination of kernel functions centred on a subset of the training data, known as support vectors. Despite its widespread success, the SVM suffers from some important limitations, one of the most significant being that it makes point predictions rather than generating predictive distributions. Recently Tipping (1999) has formulated the Relevance Vector Machine (RVM), a probabilistic model whose functional form is equivalent to the SVM. It achieves comparable recognition accuracy to the SVM, yet provides a full predictive distribution, and also requires substantially fewer kernel functions. The original treatment of the RVM relied on the use of type II maximum likelihood (the 'evidence framework') to provide point estimates of the hyperparameters which govern model sparsity. In this paper we show how the RVM can be formulated and solved within a completely Bayesian paradigm through the use of variational inference, thereby giving a posterior distribution over both parameters and hyperparameters. We demonstrate the practicality and performance of the variational RVM using both synthetic and real world examples."
]
} |
1505.07765 | 1574502566 | A recurring problem when building probabilistic latent variable models is regularization and model selection, for instance, the choice of the dimensionality of the latent space. In the context of belief networks with latent variables, this problem has been addressed with Automatic Relevance Determination (ARD) employing Monte Carlo inference. We present a variational inference approach to ARD for Deep Generative Models using doubly stochastic variational inference to provide fast and scalable learning. We show empirical results on a standard dataset illustrating the effects of contracting the latent space automatically. We show that the resulting latent representations are significantly more compact without loss of expressive power of the learned models. | Additionally, ARD is a key component of many Gaussian Process models @cite_18 , where it is used to select features of the input data to be passed through a covariance function. A similarly inspired model to ours uses Bayesian Gaussian Process Latent Variable Models with ARD @cite_7 to learn rich latent spaces on multiple views, with the main difference being the use of Gaussian Processes instead of nonlinear parametric latent variable models. Finally, a related inference method was presented in @cite_4 , where doubly stochastic variational inference was used for relevance determination on the input weights in logistic regression, but not in the context of deep generative models. | {
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_7"
],
"mid": [
"2161767008",
"",
"2951741291"
],
"abstract": [
"The Bayesian analysis of neural networks is difficult because a simple prior over weights implies a complex prior distribution over functions. In this paper we investigate the use of Gaussian process priors over functions, which permit the predictive Bayesian analysis for fixed values of hyperparameters to be carried out exactly using matrix operations. Two methods, using optimization and averaging (via Hybrid Monte Carlo) over hyperparameters have been tested on a number of challenging problems and have produced excellent results.",
"",
"In this paper we present a fully Bayesian latent variable model which exploits conditional nonlinear(in)-dependence structures to learn an efficient latent representation. The latent space is factorized to represent shared and private information from multiple views of the data. In contrast to previous approaches, we introduce a relaxation to the discrete segmentation and allow for a \"softly\" shared latent space. Further, Bayesian techniques allow us to automatically estimate the dimensionality of the latent spaces. The model is capable of capturing structure underlying extremely high dimensional spaces. This is illustrated by modelling unprocessed images with tens of thousands of pixels. This also allows us to directly generate novel images from the trained model by sampling from the discovered latent spaces. We also demonstrate the model by prediction of human pose in an ambiguous setting. Our Bayesian framework allows us to perform disambiguation in a principled manner by including latent space priors which incorporate the dynamic nature of the data."
]
} |
1505.07765 | 1574502566 | A recurring problem when building probabilistic latent variable models is regularization and model selection, for instance, the choice of the dimensionality of the latent space. In the context of belief networks with latent variables, this problem has been addressed with Automatic Relevance Determination (ARD) employing Monte Carlo inference. We present a variational inference approach to ARD for Deep Generative Models using doubly stochastic variational inference to provide fast and scalable learning. We show empirical results on a standard dataset illustrating the effects of contracting the latent space automatically. We show that the resulting latent representations are significantly more compact without loss of expressive power of the learned models. | Apart from ARD, there is other recent work regularizing and manipulating deep generative models to add structure to the latent space. In @cite_16 a penalty term is introduced to force latent variables to decorrelate, while in @cite_10 known factors of variation are manipulated in controlled manual fashion during gradient descent to encourage learning of disentangled latent variables. Such approaches bring benefits in learning more interpretably shaped latent spaces and result in impressive performance gains. | {
"cite_N": [
"@cite_16",
"@cite_10"
],
"mid": [
"1915328033",
"1691728462"
],
"abstract": [
"Deep learning has enjoyed a great deal of success because of its ability to learn useful features for tasks such as classification. But there has been less exploration in learning the factors of variation apart from the classification signal. By augmenting autoencoders with simple regularization terms during training, we demonstrate that standard deep architectures can discover and explicitly represent factors of variation beyond those relevant for categorization. We introduce a cross-covariance penalty (XCov) as a method to disentangle factors like handwriting style for digits and subject identity in faces. We demonstrate this on the MNIST handwritten digit database, the Toronto Faces Database (TFD) and the Multi-PIE dataset by generating manipulated instances of the data. Furthermore, we demonstrate these deep networks can extrapolate 'hidden' variation in the supervised signal.",
"This paper presents the Deep Convolution Inverse Graphics Network (DC-IGN), a model that aims to learn an interpretable representation of images, disentangled with respect to three-dimensional scene structure and viewing transformations such as depth rotations and lighting variations. The DC-IGN model is composed of multiple layers of convolution and de-convolution operators and is trained using the Stochastic Gradient Variational Bayes (SGVB) algorithm [10]. We propose a training procedure to encourage neurons in the graphics code layer to represent a specific transformation (e.g. pose or light). Given a single input image, our model can generate new images of the same object with variations in pose and lighting. We present qualitative and quantitative tests of the model's efficacy at learning a 3D rendering engine for varied object classes including faces and chairs."
]
} |
1505.07922 | 2950940417 | We address the problem of cross-domain image retrieval, considering the following practical application: given a user photo depicting a clothing image, our goal is to retrieve the same or attribute-similar clothing items from online shopping stores. This is a challenging problem due to the large discrepancy between online shopping images, usually taken in ideal lighting pose background conditions, and user photos captured in uncontrolled conditions. To address this problem, we propose a Dual Attribute-aware Ranking Network (DARN) for retrieval feature learning. More specifically, DARN consists of two sub-networks, one for each domain, whose retrieval feature representations are driven by semantic attribute learning. We show that this attribute-guided learning is a key factor for retrieval accuracy improvement. In addition, to further align with the nature of the retrieval problem, we impose a triplet visual similarity constraint for learning to rank across the two sub-networks. Another contribution of our work is a large-scale dataset which makes the network learning feasible. We exploit customer review websites to crawl a large set of online shopping images and corresponding offline user photos with fine-grained clothing attributes, i.e., around 450,000 online shopping images and about 90,000 exact offline counterpart images of those online ones. All these images are collected from real-world consumer websites reflecting the diversity of the data modality, which makes this dataset unique and rare in the academic community. We extensively evaluate the retrieval performance of networks in different configurations. The top-20 retrieval accuracy is doubled when using the proposed DARN other than the current popular solution using pre-trained CNN features only (0.570 vs. 0.268). | Fashion Datasets . 
Recently, several datasets containing a wide variety of clothing images captured from fashion websites have been carefully annotated with attribute labels @cite_29 @cite_7 @cite_34 @cite_5 . These datasets are primarily designed for training and evaluation of clothing parsing and attribute estimation algorithms. In contrast, our data is comprised of a large set of clothing image pairs depicting user photos and corresponding garments from online shopping, in addition to fine-grained attributes. Notably, this real-world data is essential to bridge the gap between the two domains. | {
"cite_N": [
"@cite_5",
"@cite_29",
"@cite_34",
"@cite_7"
],
"mid": [
"2146851707",
"2074621908",
"2006471615",
""
],
"abstract": [
"We describe a completely automated large scale visual recommendation system for fashion. Our focus is to efficiently harness the availability of large quantities of online fashion images and their rich meta-data. Specifically, we propose two classes of data driven models in the Deterministic Fashion Recommenders (DFR) and Stochastic Fashion Recommenders (SFR) for solving this problem. We analyze relative merits and pitfalls of these algorithms through extensive experimentation on a large-scale data set and baseline them against existing ideas from color science. We also illustrate key fashion insights learned through these experiments and show how they can be employed to design better recommendation systems. The industrial applicability of proposed models is in the context of mobile fashion shopping. Finally, we also outline a large-scale annotated data set of fashion images (Fashion-136K) that can be exploited for future research in data driven visual fashion.",
"In this paper we demonstrate an effective method for parsing clothing in fashion photographs, an extremely challenging problem due to the large number of possible garment items, variations in configuration, garment appearance, layering, and occlusion. In addition, we provide a large novel dataset and tools for labeling garment items, to enable future research on clothing estimation. Finally, we present intriguing initial results on using clothing estimates to improve pose identification, and demonstrate a prototype application for pose-independent visual garment retrieval.",
"In this work, we present a new social image dataset related to the fashion and clothing domain. The dataset contains more than 32000 images, their context and social metadata. Furthermore the dataset is enriched with several types of annotations collected from the Amazon Mechanical Turk (AMT) crowdsourcing platform, which can serve as ground truth for various content analysis algorithms. This dataset has been successfully used at the Crowdsourcing task of the 2013 MediaEval Multimedia Benchmarking initiative. The dataset contributes to several research areas such as Crowdsourcing, multimedia content and context analysis as well as hybrid human automatic approaches. In this paper, the dataset is described in detail and the dataset collection strategy, statistics, applications of dataset and its contribution to MediaEval 2013 is discussed.",
""
]
} |
1505.07922 | 2950940417 | We address the problem of cross-domain image retrieval, considering the following practical application: given a user photo depicting a clothing image, our goal is to retrieve the same or attribute-similar clothing items from online shopping stores. This is a challenging problem due to the large discrepancy between online shopping images, usually taken in ideal lighting pose background conditions, and user photos captured in uncontrolled conditions. To address this problem, we propose a Dual Attribute-aware Ranking Network (DARN) for retrieval feature learning. More specifically, DARN consists of two sub-networks, one for each domain, whose retrieval feature representations are driven by semantic attribute learning. We show that this attribute-guided learning is a key factor for retrieval accuracy improvement. In addition, to further align with the nature of the retrieval problem, we impose a triplet visual similarity constraint for learning to rank across the two sub-networks. Another contribution of our work is a large-scale dataset which makes the network learning feasible. We exploit customer review websites to crawl a large set of online shopping images and corresponding offline user photos with fine-grained clothing attributes, i.e., around 450,000 online shopping images and about 90,000 exact offline counterpart images of those online ones. All these images are collected from real-world consumer websites reflecting the diversity of the data modality, which makes this dataset unique and rare in the academic community. We extensively evaluate the retrieval performance of networks in different configurations. The top-20 retrieval accuracy is doubled when using the proposed DARN other than the current popular solution using pre-trained CNN features only (0.570 vs. 0.268). | Visual Analysis of Clothing . Many methods have been recently proposed for automated analysis of clothing images, spanning a wide range of application domains. 
In particular, clothing recognition has been used for context-aided people identification @cite_20 , fashion style recognition @cite_36 , occupation recognition @cite_16 , and social tribe prediction @cite_14 . Clothing parsing methods, which produce semantic labels for each pixel in the input image, have received significant attention in the past few years @cite_29 @cite_25 . In the surveillance domain, matching clothing images across cameras is a fundamental task for the well-known person re-identification problem @cite_31 @cite_9 . | {
"cite_N": [
"@cite_14",
"@cite_36",
"@cite_29",
"@cite_9",
"@cite_31",
"@cite_16",
"@cite_25",
"@cite_20"
],
"mid": [
"1993636005",
"181871703",
"2074621908",
"",
"",
"2155171143",
"",
"2156001867"
],
"abstract": [
"",
"The clothing we wear and our identities are closely tied, revealing to the world clues about our wealth, occupation, and socio-identity. In this paper we examine questions related to what our clothing reveals about our personal style. We first design an online competitive Style Rating Game called Hipster Wars to crowd source reliable human judgments of style. We use this game to collect a new dataset of clothing outfits with associated style ratings for 5 style categories: hipster, bohemian, pinup, preppy, and goth. Next, we train models for between-class and within-class classification of styles. Finally, we explore methods to identify clothing elements that are generally discriminative for a style, and methods for identifying items in a particular outfit that may indicate a style.",
"In this paper we demonstrate an effective method for parsing clothing in fashion photographs, an extremely challenging problem due to the large number of possible garment items, variations in configuration, garment appearance, layering, and occlusion. In addition, we provide a large novel dataset and tools for labeling garment items, to enable future research on clothing estimation. Finally, we present intriguing initial results on using clothing estimates to improve pose identification, and demonstrate a prototype application for pose-independent visual garment retrieval.",
"",
"",
"Predicting human occupations in photos has great application potentials in intelligent services and systems. However, using traditional classification methods cannot reliably distinguish different occupations due to the complex relations between occupations and the low-level image features. In this paper, we investigate the human occupation prediction problem by modeling the appearances of human clothing as well as surrounding context. The human clothing, regarding its complex details and variant appearances, is described via part-based modeling on the automatically aligned patches of human body parts. The image patches are represented with semantic-level patterns such as clothes and haircut styles using methods based on sparse coding towards informative and noise-tolerant capacities. This description of human clothing is proved to be more effective than traditional methods. Different kinds of surrounding context are also investigated as a complementarity of human clothing features in the cases that the background information is available. Experiments are conducted on a well labeled image database that contains more than 5,000 images from 20 representative occupation categories. The preliminary study shows the human occupation is reasonably predictable using the proposed clothing features and possible context.",
"",
"Research has verified that clothing provides information about the identity of the individual. To extract features from the clothing, the clothing region first must be localized or segmented in the image. At the same time, given multiple images of the same person wearing the same clothing, we expect to improve the effectiveness of clothing segmentation. Therefore, the identity recognition and clothing segmentation problems are intertwined; a good solution for one aids in the solution for the other. We build on this idea by analyzing the mutual information between pixel locations near the face and the identity of the person to learn a global clothing mask. We segment the clothing region in each image using graph cuts based on a clothing model learned from one or multiple images believed to be the same person wearing the same clothing. We use facial features and clothing features to recognize individuals in other images. The results show that clothing segmentation provides a significant improvement in recognition accuracy for large image collections, and useful clothing masks are simultaneously produced. A further significant contribution is that we introduce a publicly available consumer image collection where each individual is identified. We hope this dataset allows the vision community to more easily compare results for tasks related to recognizing people in consumer image collections."
]
} |
1505.07922 | 2950940417 | We address the problem of cross-domain image retrieval, considering the following practical application: given a user photo depicting a clothing image, our goal is to retrieve the same or attribute-similar clothing items from online shopping stores. This is a challenging problem due to the large discrepancy between online shopping images, usually taken in ideal lighting/pose/background conditions, and user photos captured in uncontrolled conditions. To address this problem, we propose a Dual Attribute-aware Ranking Network (DARN) for retrieval feature learning. More specifically, DARN consists of two sub-networks, one for each domain, whose retrieval feature representations are driven by semantic attribute learning. We show that this attribute-guided learning is a key factor for retrieval accuracy improvement. In addition, to further align with the nature of the retrieval problem, we impose a triplet visual similarity constraint for learning to rank across the two sub-networks. Another contribution of our work is a large-scale dataset which makes the network learning feasible. We exploit customer review websites to crawl a large set of online shopping images and corresponding offline user photos with fine-grained clothing attributes, i.e., around 450,000 online shopping images and about 90,000 exact offline counterpart images of those online ones. All these images are collected from real-world consumer websites reflecting the diversity of the data modality, which makes this dataset unique and rare in the academic community. We extensively evaluate the retrieval performance of networks in different configurations. The top-20 retrieval accuracy is doubled when using the proposed DARN other than the current popular solution using pre-trained CNN features only (0.570 vs. 0.268). | Recently, there has been growing interest in methods for clothing retrieval @cite_28 @cite_4 @cite_1 @cite_49 and outfit recommendation @cite_5 .
Most of those methods do not model the discrepancy between the user photos and online clothing images. An exception is the work of @cite_1 , which follows a very different methodology than ours and does not exploit the richness of our data obtained by mining images from customer reviews. | {
"cite_N": [
"@cite_4",
"@cite_28",
"@cite_1",
"@cite_49",
"@cite_5"
],
"mid": [
"",
"2143183660",
"2135367695",
"2051926867",
"2146851707"
],
"abstract": [
"",
"We present a scalable approach to automatically suggest relevant clothing products, given a single image without metadata. We formulate the problem as cross-scenario retrieval: the query is a real-world image, while the products from online shopping catalogs are usually presented in a clean environment. We divide our approach into two main stages: a) Starting from articulated pose estimation, we segment the person area and cluster promising image regions in order to detect the clothing classes present in the query image. b) We use image retrieval techniques to retrieve visually similar products from each of the detected classes. We achieve clothing detection performance comparable to the state-of-the-art on a very recent annotated dataset, while being more than 50 times faster. Finally, we present a large scale clothing suggestion scenario, where the product database contains over one million products.",
"We address a cross-scenario clothing retrieval problem- given a daily human photo captured in general environment, e.g., on street, finding similar clothing in online shops, where the photos are captured more professionally and with clean background. There are large discrepancies between daily photo scenario and online shopping scenario. We first propose to alleviate the human pose discrepancy by locating 30 human parts detected by a well trained human detector. Then, founded on part features, we propose a two-step calculation to obtain more reliable one-to-many similarities between the query daily photo and online shopping photos: 1) the within-scenario one-to-many similarities between a query daily photo and an extra auxiliary set are derived by direct sparse reconstruction; 2) by a cross-scenario many-to-many similarity transfer matrix inferred offline from the auxiliary set and the online shopping set, the reliable cross-scenario one-to-many similarities between the query daily photo and all online shopping photos are obtained.",
"Automatic clothes search in consumer photos is not a trivial problem as photos are usually taken under completely uncontrolled realistic imaging conditions. In this paper, a novel framework is presented to tackle this issue by leveraging low-level features (e.g., color) and high-level features (attributes) of clothes. First, a content-based image retrieval (CBIR) approach based on the bag-of-visual-words (BOW) model is developed as our baseline system, in which a codebook is constructed from extracted dominant color patches. A reranking approach is then proposed to improve search quality by exploiting clothes attributes, including the type of clothes, sleeves, patterns, etc. The experiments on photo collections show that our approach is robust to large variations of images taken in unconstrained environment, and the reranking algorithm based on attribute learning significantly improves retrieval performance in combination with the proposed baseline.",
"We describe a completely automated large scale visual recommendation system for fashion. Our focus is to efficiently harness the availability of large quantities of online fashion images and their rich meta-data. Specifically, we propose two classes of data driven models in the Deterministic Fashion Recommenders (DFR) and Stochastic Fashion Recommenders (SFR) for solving this problem. We analyze relative merits and pitfalls of these algorithms through extensive experimentation on a large-scale data set and baseline them against existing ideas from color science. We also illustrate key fashion insights learned through these experiments and show how they can be employed to design better recommendation systems. The industrial applicability of proposed models is in the context of mobile fashion shopping. Finally, we also outline a large-scale annotated data set of fashion images (Fashion-136K) that can be exploited for future research in data driven visual fashion."
]
} |
1505.07922 | 2950940417 | We address the problem of cross-domain image retrieval, considering the following practical application: given a user photo depicting a clothing image, our goal is to retrieve the same or attribute-similar clothing items from online shopping stores. This is a challenging problem due to the large discrepancy between online shopping images, usually taken in ideal lighting/pose/background conditions, and user photos captured in uncontrolled conditions. To address this problem, we propose a Dual Attribute-aware Ranking Network (DARN) for retrieval feature learning. More specifically, DARN consists of two sub-networks, one for each domain, whose retrieval feature representations are driven by semantic attribute learning. We show that this attribute-guided learning is a key factor for retrieval accuracy improvement. In addition, to further align with the nature of the retrieval problem, we impose a triplet visual similarity constraint for learning to rank across the two sub-networks. Another contribution of our work is a large-scale dataset which makes the network learning feasible. We exploit customer review websites to crawl a large set of online shopping images and corresponding offline user photos with fine-grained clothing attributes, i.e., around 450,000 online shopping images and about 90,000 exact offline counterpart images of those online ones. All these images are collected from real-world consumer websites reflecting the diversity of the data modality, which makes this dataset unique and rare in the academic community. We extensively evaluate the retrieval performance of networks in different configurations. The top-20 retrieval accuracy is doubled when using the proposed DARN other than the current popular solution using pre-trained CNN features only (0.570 vs. 0.268). | Visual Attributes.
Research on attribute-based visual representations has received renewed attention from the computer vision community in the past few years @cite_2 @cite_37 @cite_18 @cite_43 . Attributes are usually referred to as semantic properties of objects or scenes that are shared across categories. Among other applications, attributes have been used for zero-shot learning @cite_2 , image ranking and retrieval @cite_44 @cite_21 @cite_45 , fine-grained categorization @cite_19 , scene understanding @cite_8 , and sentence generation from images @cite_40 . | {
"cite_N": [
"@cite_37",
"@cite_18",
"@cite_8",
"@cite_21",
"@cite_44",
"@cite_43",
"@cite_19",
"@cite_45",
"@cite_40",
"@cite_2"
],
"mid": [
"",
"",
"2070148066",
"2033365921",
"2085660690",
"",
"",
"2007085438",
"2066134726",
"2134270519"
],
"abstract": [
"",
"",
"In this paper we present the first large-scale scene attribute database. First, we perform crowd-sourced human studies to find a taxonomy of 102 discriminative attributes. Next, we build the “SUN attribute database” on top of the diverse SUN categorical database. Our attribute database spans more than 700 categories and 14,000 images and has potential for use in high-level scene understanding and fine-grained scene recognition. We use our dataset to train attribute classifiers and evaluate how well these relatively simple classifiers can recognize a variety of attributes related to materials, surface properties, lighting, functions and affordances, and spatial envelope properties.",
"We propose a novel mode of feedback for image search, where a user describes which properties of exemplar images should be adjusted in order to more closely match his her mental model of the image(s) sought. For example, perusing image results for a query “black shoes”, the user might state, “Show me shoe images like these, but sportier.” Offline, our approach first learns a set of ranking functions, each of which predicts the relative strength of a nameable attribute in an image (‘sportiness’, ‘furriness’, etc.). At query time, the system presents an initial set of reference images, and the user selects among them to provide relative attribute feedback. Using the resulting constraints in the multi-dimensional attribute space, our method updates its relevance function and re-ranks the pool of images. This procedure iterates using the accumulated constraints until the top ranked images are acceptably close to the user's envisioned target. In this way, our approach allows a user to efficiently “whittle away” irrelevant portions of the visual feature space, using semantic language to precisely communicate her preferences to the system. We demonstrate the technique for refining image search for people, products, and scenes, and show it outperforms traditional binary relevance feedback in terms of search speed and accuracy.",
"We propose a novel approach for ranking and retrieval of images based on multi-attribute queries. Existing image retrieval methods train separate classifiers for each word and heuristically combine their outputs for retrieving multiword queries. Moreover, these approaches also ignore the interdependencies among the query terms. In contrast, we propose a principled approach for multi-attribute retrieval which explicitly models the correlations that are present between the attributes. Given a multi-attribute query, we also utilize other attributes in the vocabulary which are not present in the query, for ranking/retrieval. Furthermore, we integrate ranking and retrieval within the same formulation, by posing them as structured prediction problems. Extensive experimental evaluation on the Labeled Faces in the Wild (LFW), FaceTracer and PASCAL VOC datasets show that our approach significantly outperforms several state-of-the-art ranking and retrieval methods.",
"",
"",
"Taking the shoe as a concrete example, we present an innovative product retrieval system that leverages object detection and retrieval techniques to support a brand-new online shopping experience in this article. The system, called Circle & Search, enables users to naturally indicate any preferred product by simply circling the product in images as the visual query, and then returns visually and semantically similar products to the users. The system is characterized by introducing attributes in both the detection and retrieval of the shoe. Specifically, we first develop an attribute-aware part-based shoe detection model. By maintaining the consistency between shoe parts and attributes, this shoe detector has the ability to model high-order relations between parts and thus the detection performance can be enhanced. Meanwhile, the attributes of this detected shoe can also be predicted as the semantic relations between parts. Based on the result of shoe detection, the system ranks all the shoes in the repository using an attribute refinement retrieval model that takes advantage of query-specific information and attribute correlation to provide an accurate and robust shoe retrieval. To evaluate this retrieval system, we build a large dataset with 17,151 shoe images, in which each shoe is annotated with 10 shoe attributes (e.g., heel height, heel shape, sole shape, etc.). According to the experimental result and the user study, our Circle & Search system achieves promising shoe retrieval performance and thus significantly improves the users' online shopping experience.",
"We posit that visually descriptive language offers computer vision researchers both information about the world, and information about how people describe the world. The potential benefit from this source is made more significant due to the enormous amount of language data easily available today. We present a system to automatically generate natural language descriptions from images that exploits both statistics gleaned from parsing large quantities of text data and recognition algorithms from computer vision. The system is very effective at producing relevant sentences for images. It also generates descriptions that are notably more true to the specific image content than previous work.",
"We study the problem of object classification when training and test classes are disjoint, i.e. no training examples of the target classes are available. This setup has hardly been studied in computer vision research, but it is the rule rather than the exception, because the world contains tens of thousands of different object classes and for only a very few of them image collections have been formed and annotated with suitable class labels. In this paper, we tackle the problem by introducing attribute-based classification. It performs object detection based on a human-specified high-level description of the target objects instead of training images. The description consists of arbitrary semantic attributes, like shape, color or even geographic information. Because such properties transcend the specific learning task at hand, they can be pre-learned, e.g. from image datasets unrelated to the current task. Afterwards, new classes can be detected based on their attribute representation, without the need for a new training phase. In order to evaluate our method and to facilitate research in this area, we have assembled a new large-scale dataset, “Animals with Attributes”, of over 30,000 animal images that match the 50 classes in Osherson's classic table of how strongly humans associate 85 semantic attributes with animal classes. Our experiments show that by using an attribute layer it is indeed possible to build a learning object detection system that does not require any training images of the target classes."
]
} |
1505.07922 | 2950940417 | We address the problem of cross-domain image retrieval, considering the following practical application: given a user photo depicting a clothing image, our goal is to retrieve the same or attribute-similar clothing items from online shopping stores. This is a challenging problem due to the large discrepancy between online shopping images, usually taken in ideal lighting/pose/background conditions, and user photos captured in uncontrolled conditions. To address this problem, we propose a Dual Attribute-aware Ranking Network (DARN) for retrieval feature learning. More specifically, DARN consists of two sub-networks, one for each domain, whose retrieval feature representations are driven by semantic attribute learning. We show that this attribute-guided learning is a key factor for retrieval accuracy improvement. In addition, to further align with the nature of the retrieval problem, we impose a triplet visual similarity constraint for learning to rank across the two sub-networks. Another contribution of our work is a large-scale dataset which makes the network learning feasible. We exploit customer review websites to crawl a large set of online shopping images and corresponding offline user photos with fine-grained clothing attributes, i.e., around 450,000 online shopping images and about 90,000 exact offline counterpart images of those online ones. All these images are collected from real-world consumer websites reflecting the diversity of the data modality, which makes this dataset unique and rare in the academic community. We extensively evaluate the retrieval performance of networks in different configurations. The top-20 retrieval accuracy is doubled when using the proposed DARN other than the current popular solution using pre-trained CNN features only (0.570 vs. 0.268). | Existing approaches for image retrieval based on deep learning have outperformed previous methods based on other image representations @cite_48 .
However, they are not designed to handle the problem of cross-domain image retrieval. Several domain adaptation methods based on deep learning have been recently proposed @cite_35 @cite_12 . Related to our work, @cite_32 uses a double-path network with alignment cost layers for attribute prediction. In contrast, our work addresses the problem of cross-domain retrieval, proposing a novel network architecture that learns effective features for measuring visual similarity across domains. | {
"cite_N": [
"@cite_48",
"@cite_32",
"@cite_12",
"@cite_35"
],
"mid": [
"204268067",
"1946323491",
"2186639548",
"2963449250"
],
"abstract": [
"It has been shown that the activations invoked by an image within the top layers of a large convolutional neural network provide a high-level descriptor of the visual content of the image. In this paper, we investigate the use of such descriptors (neural codes) within the image retrieval application. In the experiments with several standard retrieval benchmarks, we establish that neural codes perform competitively even when the convolutional neural network has been trained for an unrelated classification task (e.g. Image-Net). We also evaluate the improvement in the retrieval performance of neural codes, when the network is retrained on a dataset of images that are similar to images encountered at test time.",
"We address the problem of describing people based on fine-grained clothing attributes. This is an important problem for many practical applications, such as identifying target suspects or finding missing people based on detailed clothing descriptions in surveillance videos or consumer photos. We approach this problem by first mining clothing images with fine-grained attribute labels from online shopping stores. A large-scale dataset is built with about one million images and fine-detailed attribute sub-categories, such as various shades of color (e.g., watermelon red, rosy red, purplish red), clothing types (e.g., down jacket, denim jacket), and patterns (e.g., thin horizontal stripes, houndstooth). As these images are taken in ideal pose/lighting/background conditions, it is unreliable to directly use them as training data for attribute prediction in the domain of unconstrained images captured, for example, by mobile phones or surveillance cameras. In order to bridge this gap, we propose a novel double-path deep domain adaptation network to model the data from the two domains jointly. Several alignment cost layers placed in between the two columns ensure the consistency of the two domain features and the feasibility to predict unseen attribute categories in one of the domains. Finally, to achieve a working system with automatic human body alignment, we trained an enhanced RCNN-based detector to localize human bodies in images. Our extensive experimental evaluation demonstrates the effectiveness of the proposed approach for describing people based on fine-grained clothing attributes.",
"In many real world applications of machine learning, the distribution of the training data (on which the machine learning model is trained) is different from the distribution of the test data (where the learnt model is actually deployed). This is known as the problem of Domain Adaptation. We propose a novel deep learning model for domain adaptation which attempts to learn a predictively useful representation of the data by taking into account information from the distribution shift between the training and test data. Our key proposal is to successively learn multiple intermediate representations along an \"interpolating path\" between the train and test domains. Our experiments on a standard object recognition dataset show a significant performance improvement over the state-of-the-art.",
""
]
} |
1505.07987 | 2949483861 | This paper describes SEPIA, a tool for automated proof generation in Coq. SEPIA combines model inference with interactive theorem proving. Existing proof corpora are modelled using state-based models inferred from tactic sequences. These can then be traversed automatically to identify proofs. The SEPIA system is described and its performance evaluated on three Coq datasets. Our results show that SEPIA provides a useful complement to existing automated tactics in Coq. | Jamnik et al. have previously applied an Inductive Logic Programming technique to examples of proofs in the Omega system @cite_6 . Given a collection of well chosen proof method sequences, Jamnik et al. perform a method of least generalisation to infer what are ultimately regular grammars. The value of even basic models is intuitive. Proofs could be derived automatically using the technique. However, the proof steps learned do not contain any parameters. The parameters required are reconstructed after running the learning technique. | {
"cite_N": [
"@cite_6"
],
"mid": [
"2063371232"
],
"abstract": [
"In this paper we present an approach to automated learning within mathematical reasoning systems. In particular, the approach enables proof planning systems to automatically learn new proof methods from well-chosen examples of proofs which use a similar reasoning pattern to prove related theorems. Our approach consists of an abstract representation for methods and a machine learning technique which can learn methods using this representation formalism. We present an implementation of the approach within the Omega proof planning system, which we call LearnOmatic. We also present the results of the experiments that we ran on this implementation in order to evaluate if and how it improves the power of proof planning systems."
]
} |
1505.07987 | 2949483861 | This paper describes SEPIA, a tool for automated proof generation in Coq. SEPIA combines model inference with interactive theorem proving. Existing proof corpora are modelled using state-based models inferred from tactic sequences. These can then be traversed automatically to identify proofs. The SEPIA system is described and its performance evaluated on three Coq datasets. Our results show that SEPIA provides a useful complement to existing automated tactics in Coq. | Another approach that concentrated on Isabelle proofs was implemented by Duncan @cite_11 . Duncan's approach was to identify commonly occurring sequences of tactics from a given corpus. After eliciting these tactic sequences, evolutionary algorithms were used to automatically formulate new tactics. The evaluation showed that simple properties could be derived automatically using the technique; however, the parameter information was left out of the learning approach. | {
"cite_N": [
"@cite_11"
],
"mid": [
"1533817411"
],
"abstract": [
"This paper discusses the use of data-mining for the automatic formation of tactics. It was presented at the Workshop on Computer-Supported Mathematical Theory Development held at IJCAR in 2004. The aim of this project is to evaluate the applicability of data-mining techniques to the automatic formation of tactics from large corpuses of proofs. We data-mine information from large proof corpuses to find commonly occurring patterns. These patterns are then evolved into tactics using genetic programming techniques."
]
} |
1505.07427 | 2951336016 | We present a robust and real-time monocular six degree of freedom relocalization system. Our system trains a convolutional neural network to regress the 6-DOF camera pose from a single RGB image in an end-to-end manner with no need of additional engineering or graph optimisation. The algorithm can operate indoors and outdoors in real time, taking 5ms per frame to compute. It obtains approximately 2m and 6 degree accuracy for large scale outdoor scenes and 0.5m and 10 degree accuracy indoors. This is achieved using an efficient 23 layer deep convnet, demonstrating that convnets can be used to solve complicated out of image plane regression problems. This was made possible by leveraging transfer learning from large scale classification data. We show the convnet localizes from high level features and is robust to difficult lighting, motion blur and different camera intrinsics where point based SIFT registration fails. Furthermore we show how the pose feature that is produced generalizes to other scenes allowing us to regress pose with only a few dozen training examples. PoseNet code, dataset and an online demonstration is available on our project webpage, at this http URL | There are generally two approaches to localization: metric and appearance-based. Metric SLAM localizes a mobile robot by focusing on creating a sparse @cite_21 @cite_16 or dense @cite_8 @cite_22 map of the environment. Metric SLAM estimates the camera's continuous pose, given a good initial pose estimate. Appearance-based localization provides this coarse estimate by classifying the scene among a limited number of discrete locations. Scalable appearance-based localizers have been proposed such as @cite_28 which uses SIFT features @cite_11 in a bag of words approach to probabilistically recognize previously viewed scenery. Convnets have also been used to classify a scene into one of several location labels @cite_13 . 
Our approach combines the strengths of these approaches: it does not need an initial pose estimate, and produces a continuous pose. Note we do not build a map, rather we train a neural network, whose size, unlike a map, does not require memory linearly proportional to the size of the scene (see fig. ). | {
"cite_N": [
"@cite_22",
"@cite_8",
"@cite_28",
"@cite_21",
"@cite_16",
"@cite_13",
"@cite_11"
],
"mid": [
"612478963",
"2108134361",
"2144824356",
"2151290401",
"1968315983",
"2951399172",
""
],
"abstract": [
"We propose a direct (feature-less) monocular SLAM algorithm which, in contrast to current state-of-the-art regarding direct methods, allows to build large-scale, consistent maps of the environment. Along with highly accurate pose estimation based on direct image alignment, the 3D environment is reconstructed in real-time as pose-graph of keyframes with associated semi-dense depth maps. These are obtained by filtering over a large number of pixelwise small-baseline stereo comparisons. The explicitly scale-drift aware formulation allows the approach to operate on challenging sequences including large variations in scene scale. Major enablers are two key novelties: (1) a novel direct tracking method which operates on sim(3), thereby explicitly detecting scale-drift, and (2) an elegant probabilistic solution to include the effect of noisy depth values into tracking. The resulting direct monocular SLAM system runs in real-time on a CPU.",
"DTAM is a system for real-time camera tracking and reconstruction which relies not on feature extraction but dense, every pixel methods. As a single hand-held RGB camera flies over a static scene, we estimate detailed textured depth maps at selected keyframes to produce a surface patchwork with millions of vertices. We use the hundreds of images available in a video stream to improve the quality of a simple photometric data term, and minimise a global spatially regularised energy functional in a novel non-convex optimisation framework. Interleaved, we track the camera's 6DOF motion precisely by frame-rate whole image alignment against the entire dense model. Our algorithms are highly parallelisable throughout and DTAM achieves real-time performance using current commodity GPU hardware. We demonstrate that a dense model permits superior tracking performance under rapid motion compared to a state of the art method using features; and also show the additional usefulness of the dense model for real-time scene interaction in a physics-enhanced augmented reality application.",
"This paper describes a probabilistic approach to the problem of recognizing places based on their appearance. The system we present is not limited to localization, but can determine that a new observation comes from a previously unseen place, and so augment its map. Effectively this is a SLAM system in the space of appearance. Our probabilistic approach allows us to explicitly account for perceptual aliasing in the environment—identical but indistinctive observations receive a low probability of having come from the same place. We achieve this by learning a generative model of place appearance. By partitioning the learning problem into two parts, new place models can be learned online from only a single observation of a place. The algorithm complexity is linear in the number of places in the map, and is particularly suitable for online loop closure detection in mobile robotics.",
"This paper presents a method of estimating camera pose in an unknown scene. While this has previously been attempted by adapting SLAM algorithms developed for robotic exploration, we propose a system specifically designed to track a hand-held camera in a small AR workspace. We propose to split tracking and mapping into two separate tasks, processed in parallel threads on a dual-core computer: one thread deals with the task of robustly tracking erratic hand-held motion, while the other produces a 3D map of point features from previously observed video frames. This allows the use of computationally expensive batch optimisation techniques not usually associated with real-time operation: The result is a system that produces detailed maps with thousands of landmarks which can be tracked at frame-rate, with an accuracy and robustness rivalling that of state-of-the-art model-based systems.",
"We present a novel data structure, the Bayes tree, that provides an algorithmic foundation enabling a better understanding of existing graphical model inference algorithms and their connection to sparse matrix factorization methods. Similar to a clique tree, a Bayes tree encodes a factored probability density, but unlike the clique tree it is directed and maps more naturally to the square root information matrix of the simultaneous localization and mapping (SLAM) problem. In this paper, we highlight three insights provided by our new data structure. First, the Bayes tree provides a better understanding of the matrix factorization in terms of probability densities. Second, we show how the fairly abstract updates to a matrix factorization translate to a simple editing of the Bayes tree and its conditional densities. Third, we apply the Bayes tree to obtain a completely novel algorithm for sparse nonlinear incremental optimization, named iSAM2, which achieves improvements in efficiency through incremental variable re-ordering and fluid relinearization, eliminating the need for periodic batch steps. We analyze various properties of iSAM2 in detail, and show on a range of real and simulated datasets that our algorithm compares favorably with other recent mapping algorithms in both quality and efficiency.",
"After the incredible success of deep learning in the computer vision domain, there has been much interest in applying Convolutional Network (ConvNet) features in robotic fields such as visual navigation and SLAM. Unfortunately, there are fundamental differences and challenges involved. Computer vision datasets are very different in character to robotic camera data, real-time performance is essential, and performance priorities can be different. This paper comprehensively evaluates and compares the utility of three state-of-the-art ConvNets on the problems of particular relevance to navigation for robots; viewpoint-invariance and condition-invariance, and for the first time enables real-time place recognition performance using ConvNets with large maps by integrating a variety of existing (locality-sensitive hashing) and novel (semantic search space partitioning) optimization techniques. We present extensive experiments on four real world datasets cultivated to evaluate each of the specific challenges in place recognition. The results demonstrate that speed-ups of two orders of magnitude can be achieved with minimal accuracy degradation, enabling real-time performance. We confirm that networks trained for semantic place categorization also perform better at (specific) place recognition when faced with severe appearance changes and provide a reference for which networks and layers are optimal for different aspects of the place recognition problem.",
""
]
} |
1505.07427 | 2951336016 | We present a robust and real-time monocular six degree of freedom relocalization system. Our system trains a convolutional neural network to regress the 6-DOF camera pose from a single RGB image in an end-to-end manner with no need of additional engineering or graph optimisation. The algorithm can operate indoors and outdoors in real time, taking 5ms per frame to compute. It obtains approximately 2m and 6 degree accuracy for large scale outdoor scenes and 0.5m and 10 degree accuracy indoors. This is achieved using an efficient 23 layer deep convnet, demonstrating that convnets can be used to solve complicated out of image plane regression problems. This was made possible by leveraging transfer learning from large scale classification data. We show the convnet localizes from high level features and is robust to difficult lighting, motion blur and different camera intrinsics where point based SIFT registration fails. Furthermore we show how the pose feature that is produced generalizes to other scenes allowing us to regress pose with only a few dozen training examples. PoseNet code, dataset and an online demonstration is available on our project webpage, at this http URL | Our work most closely follows from the Scene Coordinate Regression Forests for relocalization proposed in @cite_1 . This algorithm uses depth images to create scene coordinate labels which map each pixel from camera coordinates to global scene coordinates. This was then used to train a regression forest to regress these labels and localize the camera. However, unlike our approach, this algorithm is limited to RGB-D images to generate the scene coordinate label, in practice constraining its use to indoor scenes. | {
"cite_N": [
"@cite_1"
],
"mid": [
"1989476314"
],
"abstract": [
"We address the problem of inferring the pose of an RGB-D camera relative to a known 3D scene, given only a single acquired image. Our approach employs a regression forest that is capable of inferring an estimate of each pixel's correspondence to 3D points in the scene's world coordinate frame. The forest uses only simple depth and RGB pixel comparison features, and does not require the computation of feature descriptors. The forest is trained to be capable of predicting correspondences at any pixel, so no interest point detectors are required. The camera pose is inferred using a robust optimization scheme. This starts with an initial set of hypothesized camera poses, constructed by applying the forest at a small fraction of image pixels. Preemptive RANSAC then iterates sampling more pixels at which to evaluate the forest, counting inliers, and refining the hypothesized poses. We evaluate on several varied scenes captured with an RGB-D camera and observe that the proposed technique achieves highly accurate relocalization and substantially out-performs two state of the art baselines."
]
} |
1505.07427 | 2951336016 | We present a robust and real-time monocular six degree of freedom relocalization system. Our system trains a convolutional neural network to regress the 6-DOF camera pose from a single RGB image in an end-to-end manner with no need of additional engineering or graph optimisation. The algorithm can operate indoors and outdoors in real time, taking 5ms per frame to compute. It obtains approximately 2m and 6 degree accuracy for large scale outdoor scenes and 0.5m and 10 degree accuracy indoors. This is achieved using an efficient 23 layer deep convnet, demonstrating that convnets can be used to solve complicated out of image plane regression problems. This was made possible by leveraging transfer learning from large scale classification data. We show the convnet localizes from high level features and is robust to difficult lighting, motion blur and different camera intrinsics where point based SIFT registration fails. Furthermore we show how the pose feature that is produced generalizes to other scenes allowing us to regress pose with only a few dozen training examples. PoseNet code, dataset and an online demonstration is available on our project webpage, at this http URL | Previous research such as @cite_0 @cite_12 @cite_4 @cite_24 has also used SIFT-like point based features to match and localize from landmarks. However these methods require a large database of features and efficient retrieval methods. A method which uses these point features is structure from motion (SfM) @cite_2 @cite_15 @cite_25 which we use here as an offline tool to automatically label video frames with camera pose. We use @cite_18 to generate a dense visualisation of our relocalization results. | {
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_0",
"@cite_24",
"@cite_2",
"@cite_15",
"@cite_25",
"@cite_12"
],
"mid": [
"",
"1970702722",
"",
"2147854204",
"2105303354",
"2163446794",
"2156598602",
"1616969904"
],
"abstract": [
"",
"In this paper, we study the problem of landmark recognition and propose to leverage 3D visual phrases to improve the performance. A 3D visual phrase is a triangular facet on the surface of a reconstructed 3D landmark model. In contrast to existing 2D visual phrases which are mainly based on co-occurrence statistics in 2D image planes, such 3D visual phrases explicitly characterize the spatial structure of a 3D object (landmark), and are highly robust to projective transformations due to viewpoint changes. We present an effective solution to discover, describe, and detect 3D visual phrases. The experiments on 10 landmarks have achieved promising results, which demonstrate that our approach provides a good balance between precision and recall of landmark recognition while reducing the dependence on post-verification to reject false positives.",
"",
"In this paper we propose a new technique for learning a discriminative codebook for local feature descriptors, specifically designed for scalable landmark classification. The key contribution lies in exploiting the knowledge of correspondences within sets of feature descriptors during code-book learning. Feature correspondences are obtained using structure from motion (SfM) computation on Internet photo collections which serve as the training data. Our codebook is defined by a random forest that is trained to map corresponding feature descriptors into identical codes. Unlike prior forest-based codebook learning methods, we utilize fine-grained descriptor labels and address the challenge of training a forest with an extremely large number of labels. Our codebook is used with various existing feature encoding schemes and also a variant we propose for importance-weighted aggregation of local features. We evaluate our approach on a public dataset of 25 landmarks and our new dataset of 620 landmarks (614K images). Our approach significantly outperforms the state of the art in landmark classification. Furthermore, our method is memory efficient and scalable.",
"The time complexity of incremental structure from motion (SfM) is often known as O(n^4) with respect to the number of cameras. As bundle adjustment (BA) being significantly improved recently by preconditioned conjugate gradient (PCG), it is worth revisiting how fast incremental SfM is. We introduce a novel BA strategy that provides good balance between speed and accuracy. Through algorithm analysis and extensive experiments, we show that incremental SfM requires only O(n) time on many major steps including BA. Our method maintains high accuracy by regularly re-triangulating the feature matches that initially fail to triangulate. We test our algorithm on large photo collections and long video sequences with various settings, and show that our method offers state of the art performance for large-scale reconstructions. The presented algorithm is available as part of VisualSFM at http: homes.cs.washington.edu ccwu vsfm .",
"We present a system that can reconstruct 3D geometry from large, unorganized collections of photographs such as those found by searching for a given city (e.g., Rome) on Internet photo-sharing sites. Our system is built on a set of new, distributed computer vision algorithms for image matching and 3D reconstruction, designed to maximize parallelism at each stage of the pipeline and to scale gracefully with both the size of the problem and the amount of available computation. Our experimental results demonstrate that it is now possible to reconstruct city-scale image collections with more than a hundred thousand images in less than a day.",
"We present a system for interactively browsing and exploring large unstructured collections of photographs of a scene using a novel 3D interface. Our system consists of an image-based modeling front end that automatically computes the viewpoint of each photograph as well as a sparse 3D model of the scene and image to model correspondences. Our photo explorer uses image-based rendering techniques to smoothly transition between photographs, while also enabling full 3D navigation and exploration of the set of images and world geometry, along with auxiliary information such as overhead maps. Our system also makes it easy to construct photo tours of scenic or historic locations, and to annotate image details, which are automatically transferred to other relevant images. We demonstrate our system on several large personal photo collections as well as images gathered from Internet photo sharing sites.",
"We address the problem of determining where a photo was taken by estimating a full 6-DOF-plus-intrinsics camera pose with respect to a large geo-registered 3D point cloud, bringing together research on image localization, landmark recognition, and 3D pose estimation. Our method scales to datasets with hundreds of thousands of images and tens of millions of 3D points through the use of two new techniques: a co-occurrence prior for RANSAC and bidirectional matching of image features with 3D points. We evaluate our method on several large data sets, and show state-of-the-art results on landmark recognition as well as the ability to locate cameras to within meters, requiring only seconds per query."
]
} |
1505.07427 | 2951336016 | We present a robust and real-time monocular six degree of freedom relocalization system. Our system trains a convolutional neural network to regress the 6-DOF camera pose from a single RGB image in an end-to-end manner with no need of additional engineering or graph optimisation. The algorithm can operate indoors and outdoors in real time, taking 5ms per frame to compute. It obtains approximately 2m and 6 degree accuracy for large scale outdoor scenes and 0.5m and 10 degree accuracy indoors. This is achieved using an efficient 23 layer deep convnet, demonstrating that convnets can be used to solve complicated out of image plane regression problems. This was made possible by leveraging transfer learning from large scale classification data. We show the convnet localizes from high level features and is robust to difficult lighting, motion blur and different camera intrinsics where point based SIFT registration fails. Furthermore we show how the pose feature that is produced generalizes to other scenes allowing us to regress pose with only a few dozen training examples. PoseNet code, dataset and an online demonstration is available on our project webpage, at this http URL | Despite their ability in classifying spatio-temporal data, convolutional neural networks are only just beginning to be used for regression. They have advanced the state of the art in object detection @cite_26 and human pose regression @cite_20 . However these have limited their regression targets to lie in the 2-D image plane. Here we demonstrate regressing the full 6-DOF camera pose transform including depth and out-of-plane rotation. Furthermore, we show we are able to learn regression as opposed to being a very fine resolution classifier. | {
"cite_N": [
"@cite_26",
"@cite_20"
],
"mid": [
"2950179405",
"2113325037"
],
"abstract": [
"We propose a deep convolutional neural network architecture codenamed \"Inception\", which was responsible for setting the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC 2014). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. This was achieved by a carefully crafted design that allows for increasing the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC 2014 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.",
"We propose a method for human pose estimation based on Deep Neural Networks (DNNs). The pose estimation is formulated as a DNN-based regression problem towards body joints. We present a cascade of such DNN regressors which results in high precision pose estimates. The approach has the advantage of reasoning about pose in a holistic fashion and has a simple but yet powerful formulation which capitalizes on recent advances in Deep Learning. We present a detailed empirical analysis with state-of-the-art or better performance on four academic benchmarks of diverse real-world images."
]
} |
1505.07427 | 2951336016 | We present a robust and real-time monocular six degree of freedom relocalization system. Our system trains a convolutional neural network to regress the 6-DOF camera pose from a single RGB image in an end-to-end manner with no need of additional engineering or graph optimisation. The algorithm can operate indoors and outdoors in real time, taking 5ms per frame to compute. It obtains approximately 2m and 6 degree accuracy for large scale outdoor scenes and 0.5m and 10 degree accuracy indoors. This is achieved using an efficient 23 layer deep convnet, demonstrating that convnets can be used to solve complicated out of image plane regression problems. This was made possible by leveraging transfer learning from large scale classification data. We show the convnet localizes from high level features and is robust to difficult lighting, motion blur and different camera intrinsics where point based SIFT registration fails. Furthermore we show how the pose feature that is produced generalizes to other scenes allowing us to regress pose with only a few dozen training examples. PoseNet code, dataset and an online demonstration is available on our project webpage, at this http URL | It has been shown that convnet representations trained on classification problems generalize well to other tasks @cite_6 @cite_3 @cite_23 @cite_27 . We show that you can apply these representations of classification to 6-DOF regression problems. Using these pre-learned representations allows convnets to be used on smaller datasets without overfitting. | {
"cite_N": [
"@cite_27",
"@cite_3",
"@cite_23",
"@cite_6"
],
"mid": [
"2953360861",
"2161381512",
"2163922914",
"2953391683"
],
"abstract": [
"We evaluate whether features extracted from the activation of a deep convolutional network trained in a fully supervised fashion on a large, fixed set of object recognition tasks can be re-purposed to novel generic tasks. Our generic tasks may differ significantly from the originally trained tasks and there may be insufficient labeled or unlabeled data to conventionally train or adapt a deep architecture to the new tasks. We investigate and visualize the semantic clustering of deep convolutional features with respect to a variety of such tasks, including scene recognition, domain adaptation, and fine-grained recognition challenges. We compare the efficacy of relying on various network levels to define a fixed feature, and report novel results that significantly outperform the state-of-the-art on several important vision challenges. We are releasing DeCAF, an open-source implementation of these deep convolutional activation features, along with all associated network parameters to enable vision researchers to be able to conduct experimentation with deep representations across a range of visual concept learning paradigms.",
"Convolutional neural networks (CNN) have recently shown outstanding image classification performance in the large-scale visual recognition challenge (ILSVRC2012). The success of CNNs is attributed to their ability to learn rich mid-level image representations as opposed to hand-designed low-level features used in other image classification methods. Learning CNNs, however, amounts to estimating millions of parameters and requires a very large number of annotated image samples. This property currently prevents application of CNNs to problems with limited training data. In this work we show how image representations learned with CNNs on large-scale annotated datasets can be efficiently transferred to other visual recognition tasks with limited amount of training data. We design a method to reuse layers trained on the ImageNet dataset to compute mid-level image representation for images in the PASCAL VOC dataset. We show that despite differences in image statistics and tasks in the two datasets, the transferred representation leads to significantly improved results for object and action classification, outperforming the current state of the art on Pascal VOC 2007 and 2012 datasets. We also show promising results for object and action localization.",
"The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks. This motivates longer term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation, and manifold learning.",
"Recent results indicate that the generic descriptors extracted from the convolutional neural networks are very powerful. This paper adds to the mounting evidence that this is indeed the case. We report on a series of experiments conducted for different recognition tasks using the publicly available code and model of the network which was trained to perform object classification on ILSVRC13. We use features extracted from the network as a generic image representation to tackle the diverse range of recognition tasks of object image classification, scene recognition, fine grained recognition, attribute detection and image retrieval applied to a diverse set of datasets. We selected these tasks and datasets as they gradually move further away from the original task and data the network was trained to solve. Astonishingly, we report consistent superior results compared to the highly tuned state-of-the-art systems in all the visual classification tasks on various datasets. For instance retrieval it consistently outperforms low memory footprint methods except for sculptures dataset. The results are achieved using a linear SVM classifier (or @math distance in case of retrieval) applied to a feature representation of size 4096 extracted from a layer in the net. The representations are further modified using simple augmentation techniques e.g. jittering. The results strongly suggest that features obtained from deep learning with convolutional nets should be the primary candidate in most visual recognition tasks."
]
} |
1505.07522 | 2105175049 | To choose restaurants and coffee shops, people are increasingly relying on social-networking sites. In a popular site such as Foursquare or Yelp, a place comes with descriptions and reviews, and with profile pictures of people who frequent them. Descriptions and reviews have been widely explored in the research area of data mining. By contrast, profile pictures have received little attention. Previous work showed that people are able to partly guess a place's ambiance, clientele, and activities not only by observing the place itself but also by observing the profile pictures of its visitors. Here we further that work by determining which visual cues people may have relied upon to make their guesses; showing that a state-of-the-art algorithm could make predictions more accurately than humans at times; and demonstrating that the visual cues people relied upon partly differ from those of the algorithm. | In the context of location-based services, place reviews have been widely explored. Tips and descriptions have been explored to study the popularity of Foursquare places @cite_11 @cite_4 ; to explain the demand of restaurants in Yelp; @cite_6 ; and to learn their relationship with check-ins and photos at Foursquare venues @cite_38 . | {
"cite_N": [
"@cite_38",
"@cite_4",
"@cite_6",
"@cite_11"
],
"mid": [
"1977705987",
"2151731441",
"2128383499",
"2079264505"
],
"abstract": [
"Location-based social networking platform (e.g., Foursquare), as a popular scenario of participatory sensing system that collects heterogeneous information (such as tips and photos) of venues from users, has attracted much attention recently. In this paper, we study the distribution of these information and their relationship, based on a large dataset crawled from Foursquare, which consists of 2,728,411 photos, 1,212,136 tips and 148,924,749 check-ins of 190,649 venues, contributed by 508,467 users. We analyze the distribution of user-generated check-ins, venue photos and venue tips, and show interesting category patterns and correlation among these information. In addition, we make the following observations: i) Venue photos in Foursquare are able to significantly make venues more social and popular. ii) Users share venue photos highly related to food category. iii) Category dynamics of venue photo sharing have similar patterns as that of venue tips and user check-ins at the venues. iv) Users tend to share photos rather than tips. We distribute our data and source codes under the request of research purposes (email: yi.yu.yy@gmail.com).",
"Online Location Based Social Networks (LBSNs), which combine social network features with geographic information sharing, are becoming increasingly popular. One such application is Foursquare, which doubled its user population in less than six months. Among other features, Foursquare allows users to leave tips (i.e., reviews or recommendations) at specific venues as well as to give feedback on previously posted tips by adding them to their to-do lists or marking them as done. In this paper, we analyze how Foursquare users exploit these three features - tips, dones and to-dos - uncovering different behavior profiles. Our study reveals the existence of very active and influential users, some of which are famous businesses and brands, that seem engaged in posting tips at a large variety of venues while also receiving a great amount of user feedback on them. We also provide evidence of spamming, showing the existence of users that post tips whose contents are unrelated to the nature or domain of the venue where the tips were left.",
"Do online consumer reviews affect restaurant demand? I investigate this question using a novel dataset combining reviews from the website Yelp.com and restaurant data from the Washington State Department of Revenue. Because Yelp prominently displays a restaurant's rounded average rating, I can identify the causal impact of Yelp ratings on demand with a regression discontinuity framework that exploits Yelp's rounding thresholds. I present three findings about the impact of consumer reviews on the restaurant industry: (1) a one-star increase in Yelp rating leads to a 5-9 percent increase in revenue, (2) this effect is driven by independent restaurants; ratings do not affect restaurants with chain affiliation, and (3) chain restaurants have declined in market share as Yelp penetration has increased. This suggests that online consumer reviews substitute for more traditional forms of reputation. I then test whether consumers use these reviews in a way that is consistent with standard learning models. I present two additional findings: (4) consumers do not use all available information and are more responsive to quality changes that are more visible and (5) consumers respond more strongly when a rating contains more information. Consumer response to a restaurant's average rating is affected by the number of reviews and whether the reviewers are certified as \"elite\" by Yelp, but is unaffected by the size of the reviewers' Yelp friends network.",
"Foursquare, the currently most popular location-based social network, allows users not only to share the places (venues) they visit but also post micro-reviews (tips) about their previous experiences at specific venues as well as \"like\" previously posted tips. The number of \"likes\" a tip receives ultimately reflects its popularity among users, providing valuable feedback to venue owners and other users. In this paper, we provide an extensive analysis of the popularity dynamics of Foursquare tips using a large dataset containing over 10 million tips and 9 million likes posted by over 13.5 million users. Our results show that, unlike other types of online content such as news and photos, Foursquare tips experience very slow popularity evolution, attracting user likes through longer periods of time. Moreover, we find that the social network of the user who posted the tip plays an important role on the tip popularity throughout its lifetime, but particularly at earlier periods after posting time. We also find that most tips experience their daily popularity peaks within the first month in the system, although most of their likes are received after the peak. Moreover, compared to other types of online content (e.g., videos), we observe a weaker presence of the rich-get-richer effect in our data, demonstrating a lower correlation between the early and late popularities. Finally, we evaluate the stability of the tip popularity ranking over time, assessing to which extent the current popularity ranking of a set of tips can be used to predict their popularity ranking at a future time. To that end, we compare a prediction approach based solely on the current popularity ranking against a method that exploits a linear regression model using a multidimensional set of predictors as input. Our results show that use of the richer set of features can indeed improve the prediction accuracy, provided that enough data is available to train the regression model."
]
} |
1505.07522 | 2105175049 | To choose restaurants and coffee shops, people are increasingly relying on social-networking sites. In a popular site such as Foursquare or Yelp, a place comes with descriptions and reviews, and with profile pictures of people who frequent them. Descriptions and reviews have been widely explored in the research area of data mining. By contrast, profile pictures have received little attention. Previous work showed that people are able to partly guess a place's ambiance, clientele, and activities not only by observing the place itself but also by observing the profile pictures of its visitors. Here we further that work by determining which visual cues people may have relied upon to make their guesses; showing that a state-of-the-art algorithm could make predictions more accurately than humans at times; and demonstrating that the visual cues people relied upon partly differ from those of the algorithm. | Face images have been studied in different disciplines. Computer vision researchers have analyzed faces for several decades. Researchers did so to automatically recognize face ovals @cite_25 @cite_30 , identify face expressions @cite_37 , predict personality traits @cite_40 @cite_7 , assess political competence @cite_19 , infer visual persuasion @cite_33 , and score portraits for photographic beauty @cite_0 . | {
"cite_N": [
"@cite_30",
"@cite_37",
"@cite_33",
"@cite_7",
"@cite_0",
"@cite_19",
"@cite_40",
"@cite_25"
],
"mid": [
"2098693229",
"2033773055",
"2073775149",
"2001658220",
"1606532349",
"2170954611",
"2118265251",
"2137401668"
],
"abstract": [
"An approach to the detection and identification of human faces is presented, and a working, near-real-time face recognition system which tracks a subject's head and then recognizes the person by comparing characteristics of the face to those of known individuals is described. This approach treats face recognition as a two-dimensional recognition problem, taking advantage of the fact that faces are normally upright and thus may be described by a small set of 2-D characteristic views. Face images are projected onto a feature space ('face space') that best encodes the variation among known face images. The face space is defined by the 'eigenfaces', which are the eigenvectors of the set of faces; they do not necessarily correspond to isolated features such as eyes, ears, and noses. The framework provides the ability to learn to recognize new faces in an unsupervised manner.",
"Over the last decade, automatic facial expression analysis has become an active research area that finds potential applications in areas such as more engaging human-computer interfaces, talking heads, image retrieval and human emotion analysis. Facial expressions reflect not only emotions, but other mental activities, social interaction and physiological signals. In this survey we introduce the most prominent automatic facial expression analysis methods and systems presented in the literature. Facial motion and deformation extraction approaches as well as classification methods are discussed with respect to issues such as face normalization, facial expression dynamics and facial expression intensity, but also with regard to their robustness towards environmental changes.",
"In this paper we introduce the novel problem of understanding visual persuasion. Modern mass media make extensive use of images to persuade people to make commercial and political decisions. These effects and techniques are widely studied in the social sciences, but behavioral studies do not scale to massive datasets. Computer vision has made great strides in building syntactical representations of images, such as detection and identification of objects. However, the pervasive use of images for communicative purposes has been largely ignored. We extend the significant advances in syntactic analysis in computer vision to the higher-level challenge of understanding the underlying communicative intent implied in images. We begin by identifying nine dimensions of persuasive intent latent in images of politicians, such as \"socially dominant, \" \"energetic, \" and \"trustworthy, \" and propose a hierarchical model that builds on the layer of syntactical attributes, such as \"smile\" and \"waving hand, \" to predict the intents presented in the images. To facilitate progress, we introduce a new dataset of 1, 124 images of politicians labeled with ground-truth intents in the form of rankings. This study demonstrates that a systematic focus on visual persuasion opens up the field of computer vision to a new class of investigations around mediated images, intersecting with media analysis, psychology, and political communication.",
"Despite the evidence that social video conveys rich human personality information, research investigating the automatic prediction of personality impressions in vlogging has shown that, amongst the Big-Five traits, automatic nonverbal behavioral cues are useful to predict mainly the Extraversion trait. This finding, also reported in other conversational settings, indicates that personality information may be coded in other behavioral dimensions like the verbal channel, which has been less studied in multimodal interaction research. In this paper, we address the task of predicting personality impressions from vloggers based on what they say in their YouTube videos. First, we use manual transcripts of vlogs and verbal content analysis techniques to understand the ability of verbal content for the prediction of crowdsourced Big-Five personality impressions. Second, we explore the feasibility of a fully-automatic framework in which transcripts are obtained using automatic speech recognition (ASR). Our results show that the analysis of error-free verbal content is useful to predict four of the Big-Five traits, three of them better than using nonverbal cues, and that the errors caused by the ASR system decrease the performance significantly.",
"Digital portrait photographs are everywhere, and while the number of face pictures keeps growing, not much work has been done on automatic portrait beauty assessment. In this paper, we design a specific framework to automatically evaluate the beauty of digital portraits. To this end, we procure a large dataset of face images annotated not only with aesthetic scores but also with information about the traits of the subject portrayed. We design a set of visual features based on portrait photography literature, and extensively analyze their relation with portrait beauty, exposing interesting findings about what makes a portrait beautiful. We find that the beauty of a portrait is linked to its artistic value, and independent from age, race and gender of the subject. We also show that a classifier trained with our features to separate beautiful portraits from non-beautiful portraits outperforms generic aesthetic classifiers.",
"Recent research has shown that rapid judgments about the personality traits of political candidates, based solely on their appearance, can predict their electoral success. This suggests that voters rely heavily on appearances when choosing which candidate to elect. Here we review this literature and examine the determinants of the relationship between appearance-based trait inferences and voting. We also reanalyze previous data to show that facial competence is a highly robust and specific predictor of political preferences. Finally, we introduce a computer model of face-based competence judgments, which we use to derive some of the facial features associated with these judgments.",
"Despite the crucial role of physical appearance in forming first impressions, little research has examined the accuracy of personality impressions based on appearance alone. This study examined the accuracy of observers’ impressions on 10 personality traits based on full-body photographs using criterion measures based on self and peer reports. When targets’ posture and expression were constrained (standardized condition), observers’ judgments were accurate for extraversion, self-esteem, and religiosity. When targets were photographed with a spontaneous pose and facial expression (spontaneous condition), observers’ judgments were accurate for almost all of the traits examined. Lens model analyses demonstrated that both static cues (e.g., clothing style) and dynamic cues (e.g., facial expression, posture) offered valuable personality-relevant information. These results suggest that personality is manifested through both static and expressive channels of appearance, and observers use this information to form...",
"This paper describes a face detection framework that is capable of processing images extremely rapidly while achieving high detection rates. There are three key contributions. The first is the introduction of a new image representation called the “Integral Image” which allows the features used by our detector to be computed very quickly. The second is a simple and efficient classifier which is built using the AdaBoost learning algorithm (Freund and Schapire, 1995) to select a small number of critical visual features from a very large set of potential features. The third contribution is a method for combining classifiers in a “cascade” which allows background regions of the image to be quickly discarded while spending more computation on promising face-like regions. A set of experiments in the domain of face detection is presented. The system yields face detection performance comparable to the best previous systems (Sung and Poggio, 1998; , 1998; Schneiderman and Kanade, 2000; , 2000). Implemented on a conventional desktop, face detection proceeds at 15 frames per second."
]
} |
1505.07522 | 2105175049 | To choose restaurants and coffee shops, people are increasingly relying on social-networking sites. In a popular site such as Foursquare or Yelp, a place comes with descriptions and reviews, and with profile pictures of people who frequent them. Descriptions and reviews have been widely explored in the research area of data mining. By contrast, profile pictures have received little attention. Previous work showed that people are able to partly guess a place's ambiance, clientele, and activities not only by observing the place itself but also by observing the profile pictures of its visitors. Here we further that work by determining which visual cues people may have relied upon to make their guesses; showing that a state-of-the-art algorithm could make predictions more accurately than humans at times; and demonstrating that the visual cues people relied upon partly differ from those of the algorithm. | More recently, faces have been also studied in the context of social-networking sites. It has been found that, on Facebook, faces engage users more than other subjects @cite_32 , and that faces partly reflect personality traits @cite_8 @cite_27 @cite_23 . | {
"cite_N": [
"@cite_27",
"@cite_32",
"@cite_23",
"@cite_8"
],
"mid": [
"2159855467",
"2168117362",
"2137351972",
"2101790396"
],
"abstract": [
"Abstract Online social networking sites have revealed an entirely new method of self-presentation. This cyber social tool provides a new site of analysis to examine personality and identity. The current study examines how narcissism and self-esteem are manifested on the social networking Web site Facebook.com. Self-esteem and narcissistic personality self-reports were collected from 100 Facebook users at York University. Participant Web pages were also coded based on self-promotional content features. Correlation analyses revealed that individuals higher in narcissism and lower in self-esteem were related to greater online activity as well as some self-promotional content. Gender differences were found to influence the type of self-promotional content presented by individual Facebook users. Implications and future research directions of narcissism and self-esteem on social networking Web sites are discussed.",
"Photos are becoming prominent means of communication online. Despite photos' pervasive presence in social media and online world, we know little about how people interact and engage with their content. Understanding how photo content might signify engagement, can impact both science and design, influencing production and distribution. One common type of photo content that is shared on social media, is the photos of people. From studies of offline behavior, we know that human faces are powerful channels of non-verbal communication. In this paper, we study this behavioral phenomena online. We ask how presence of a face, it's age and gender might impact social engagement on the photo. We use a corpus of 1 million Instagram images and organize our study around two social engagement feedback factors, likes and comments. Our results show that photos with faces are 38% more likely to receive likes and 32% more likely to receive comments, even after controlling for social network reach and activity. We find, however, that the number of faces, their age and gender do not have an effect. This work presents the first results on how photos with human faces relate to engagement on large scale image sharing communities. In addition to contributing to the research around online user behavior, our findings offer a new line of future work using visual analysis.",
"",
"Face perception, perhaps the most highly developed visual skill in humans, is mediated by a distributed neural system in humans that is comprised of multiple, bilateral regions. We propose a model for the organization of this system that emphasizes a distinction between the representation of invariant and changeable aspects of faces. The representation of invariant aspects of faces underlies the recognition of individuals, whereas the representation of changeable aspects of faces, such as eye gaze, expression, and lip movement, underlies the perception of information that facilitates social communication. The model is also hierarchical insofar as it is divided into a core system and an extended system. The core system is comprised of occipitotemporal regions in extrastriate visual cortex that mediate the visual analysis of faces. In the core system, the representation of invariant aspects is mediated more by the face-responsive region in the fusiform gyrus, whereas the representation of changeable aspects is mediated more by the face-responsive region in the superior temporal sulcus. The extended system is comprised of regions from neural systems for other cognitive functions that can be recruited to act in concert with the regions in the core system to extract meaning from faces."
]
} |
1505.07499 | 405323996 | Camouflaging data by generating fake information is a well-known obfuscation technique for protecting data privacy. In this paper, we focus on a very sensitive and increasingly exposed type of data: location data. There are two main scenarios in which fake traces are of extreme value to preserve location privacy: publishing datasets of location trajectories, and using location-based services. Despite advances in protecting (location) data privacy, there is no quantitative method to evaluate how realistic a synthetic trace is, and how much utility and privacy it provides in each scenario. Also, the lack of a methodology to generate privacy-preserving fake traces is evident. In this paper, we fill this gap and propose the first statistical metric and model to generate fake location traces such that both the utility of data and the privacy of users are preserved. We build upon the fact that, although geographically they visit distinct locations, people have strongly semantically similar mobility patterns, for example, their transition pattern across activities (e.g., working, driving, staying at home) is similar. We define a statistical metric and propose an algorithm that automatically discovers the hidden semantic similarities between locations from a bag of real location traces as seeds, without requiring any initial semantic annotations. We guarantee that fake traces are geographically dissimilar to their seeds, so they do not leak sensitive location information. We also protect contributors to seed traces against membership attacks. Interleaving fake traces with mobile users' traces is a prominent location privacy defense mechanism. We quantitatively show the effectiveness of our methodology in protecting against localization inference attacks while preserving utility of sharing publishing traces. | Hiding the user's true location among fake locations is a promising yet very little-explored approach to protecting location privacy. 
There are a few simple techniques proposed so far: adding independently selected fake locations drawn from the population's location distribution @cite_22 , generating dummy locations at random as a random walk on a grid @cite_14 @cite_23 , constructing fake driving trips by building the path between two random locations on the map given the more probable paths traveled by drivers @cite_27 , or adding noise to the paths generated by road trip planner algorithms @cite_18 . These solutions lack a formal model for human mobility and do not consider the semantics associated with the sequence of locations visited by people over time. Thus, the generated traces can be distinguished from real location traces. | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_22",
"@cite_27",
"@cite_23"
],
"mid": [
"2040246974",
"1689932539",
"1565026843",
"1544471796",
""
],
"abstract": [
"The amount of contextual data collected, stored, mined, and shared is increasing exponentially. Street cameras, credit card transactions, chat and Twitter logs, e-mail, web site visits, phone logs and recordings, social networking sites, all are examples of data that persists in a manner not under individual control, leading some to declare the death of privacy. We argue here that the ability to generate convincing fake contextual data can be a basic tool in the fight to preserve privacy. One use for the technology is for an individual to make his actual data indistinguishable amongst a pile of false data. In this paper we consider two examples of contextual data, search engine query data and location data. We describe the current state of faking these types of data and our own efforts in this direction.",
"Recently, highly accurate positioning devices enable us to provide various types of location-based services. On the other hand, because such position data include deeply personal information, the protection of location privacy is one of the most significant problems in location-based services. In this paper, we propose an anonymous communication technique to protect the location privacy of the users of location-based services. In our proposed technique, such users generate several false position data (dummies) to send to service providers with the true position data of users. Because service providers cannot distinguish the true position data, user location privacy is protected. We also describe a cost reduction technique for communication between a client and a server. Moreover, we conducted performance study experiments on our proposed technique using practical position data. As a result of the experiments, we observed that our proposed technique protects the location privacy of people and can sufficiently reduce communication costs so that our communication techniques can be applied in practical location-based services.",
"Mobile users expose their location to potentially untrusted entities by using location-based services. Based on the frequency of location exposure in these applications, we divide them into two main types: Continuous and Sporadic. These two location exposure types lead to different threats. For example, in the continuous case, the adversary can track users over time and space, whereas in the sporadic case, his focus is more on localizing users at certain points in time. We propose a systematic way to quantify users' location privacy by modeling both the location-based applications and the location-privacy preserving mechanisms (LPPMs), and by considering a well-defined adversary model. This framework enables us to customize the LPPMs to the employed location-based application, in order to provide higher location privacy for the users. In this paper, we formalize localization attacks for the case of sporadic location exposure, using Bayesian inference for Hidden Markov Processes. We also quantify user location privacy with respect to the adversaries with two different forms of background knowledge: Those who only know the geographical distribution of users over the considered regions, and those who also know how users move between the regions (i.e., their mobility pattern). Using the Location-Privacy Meter tool, we examine the effectiveness of the following techniques in increasing the expected error of the adversary in the localization attack: Location obfuscation and fake location injection mechanisms for anonymous traces.",
"Simulated, false location reports can be an effective way to confuse a privacy attacker. When a mobile user must transmit his or her location to a central server, these location reports can be accompanied by false reports that, ideally, cannot be distinguished from the true one. The realism of the false reports is important, because otherwise an attacker could filter out all but the real data. Using our database of GPS tracks from over 250 volunteer drivers, we developed probabilistic models of driving behavior and applied the models to create realistic driving trips. The simulations model realistic start and end points, slightly non-optimal routes, realistic driving speeds, and spatially varying GPS noise.",
""
]
} |
1505.07548 | 1606019685 | Stackelberg security game models and associated computational tools have seen deployment in a number of high-consequence security settings, such as LAX canine patrols and Federal Air Marshal Service. These models focus on isolated systems with only one defender, despite being part of a more complex system with multiple players. Furthermore, many real systems such as transportation networks and the power grid exhibit interdependencies between targets and, consequently, between decision makers jointly charged with protecting them. To understand such multidefender strategic interactions present in security, we investigate game theoretic models of security games with multiple defenders. Unlike most prior analysis, we focus on the situations in which each defender must protect multiple targets, so that even a single defender's best response decision is, in general, highly non-trivial. We start with an analytical investigation of multidefender security games with independent targets, offering an equilibrium and price-of-anarchy analysis of three models with increasing generality. In all models, we find that defenders have the incentive to over-protect targets, at times significantly. Additionally, in the simpler models, we find that the price of anarchy is unbounded, linearly increasing both in the number of defenders and the number of targets per defender. Considering interdependencies among targets, we develop a novel mixed-integer linear programming formulation to compute a defender's best response, and make use of this formulation in approximating Nash equilibria of the game. We apply this approach towards computational strategic analysis of several models of networks representing interdependencies, including real-world power networks. Our analysis shows how network structure and the probability of failure spread determine the propensity of defenders to over- or under-invest in security. 
| Our work, like much work in the recent security game literature, builds on the notion of Stackelberg games @cite_4 , which model commitment in strategic settings. The first thorough computational treatment of randomized (mixed strategy) commitment was due to Conitzer and Sandholm (2006). In this line of work, of greatest relevance to our effort are multiple-leader Stackelberg games. In many cases, these approaches leverage specialized problem structure, and are not immediately applicable to our setting. In particular, Sherali (1984) and DeMiguel and Xu (2009) focus on relatively simple models with firms setting production quantity (a single variable), aiming to maximize profit. Both show existence and uniqueness of equilibria in their setting, and leverage these characterization results to obtain solutions to the games. Similarly, Rodoplu (2010) considers a relatively simple model of network competition in which leaders are nodes setting prices for packets transmitted through them; again, each leader only sets a single variable, the utility functions are problem-specific, and algorithms are specialized to the particular problem structure (and are inapplicable to our setting). | {
"cite_N": [
"@cite_4"
],
"mid": [
"2109100253"
],
"abstract": [
"A Course in Game Theory presents the main ideas of game theory at a level suitable for graduate students and advanced undergraduates, emphasizing the theory's foundations and interpretations of its basic concepts. The authors provide precise definitions and full proofs of results, sacrificing generalities and limiting the scope of the material in order to do so. The text is organized in four parts: strategic games, extensive games with perfect information, extensive games with imperfect information, and coalitional games. It includes over 100 exercises."
]
} |
1505.07548 | 1606019685 | Stackelberg security game models and associated computational tools have seen deployment in a number of high-consequence security settings, such as LAX canine patrols and Federal Air Marshal Service. These models focus on isolated systems with only one defender, despite being part of a more complex system with multiple players. Furthermore, many real systems such as transportation networks and the power grid exhibit interdependencies between targets and, consequently, between decision makers jointly charged with protecting them. To understand such multidefender strategic interactions present in security, we investigate game theoretic models of security games with multiple defenders. Unlike most prior analysis, we focus on the situations in which each defender must protect multiple targets, so that even a single defender's best response decision is, in general, highly non-trivial. We start with an analytical investigation of multidefender security games with independent targets, offering an equilibrium and price-of-anarchy analysis of three models with increasing generality. In all models, we find that defenders have the incentive to over-protect targets, at times significantly. Additionally, in the simpler models, we find that the price of anarchy is unbounded, linearly increasing both in the number of defenders and the number of targets per defender. Considering interdependencies among targets, we develop a novel mixed-integer linear programming formulation to compute a defender's best response, and make use of this formulation in approximating Nash equilibria of the game. We apply this approach towards computational strategic analysis of several models of networks representing interdependencies, including real-world power networks. Our analysis shows how network structure and the probability of failure spread determine the propensity of defenders to over- or under-invest in security. 
| Another somewhat related line of work considers the problem of coordination and teamwork among multiple defenders in a purely cooperative setting @cite_3 @cite_2 . This work, however, is entirely unlike ours: in particular, our primary focus is on the impact of competition among the defenders with different (though certainly related) motivations, rather than coordination issues and teamwork. While effective coordination among multiple defenders can often be achieved, just as often (if not predominantly) the decentralization of decision making processes and resources inherently gives rise to distinct, and often conflicting, incentives among defenders. | {
"cite_N": [
"@cite_3",
"@cite_2"
],
"mid": [
"2150744284",
"1818018820"
],
"abstract": [
"We examine security domains where defenders choose their security levels in the face of a possible attack by an adversary who attempts to destroy as many of them as possible. Though the attacker only selects one target, and only has a certain probability of destroying it depending on that defender's security level, a successful attack may infect other defenders. By choosing a higher security level the defenders increase their probability of survival, but incur a higher cost of security. We assume that the adversary observes the security levels chosen by the defenders before selecting whom to attack. We show that under this assumption the defenders over-protect themselves, exhausting all their surplus, so optimal policy requires taxing security, as opposed to the subsidies recommended by alternative models for contagious attacks which do not take into account the attacker's ability to observe the defenders' choices.",
"We study security games with multiple defenders. To achieve maximum security, defenders must perfectly synchronize their randomized allocations of resources. However, in real-life scenarios (such as protection of the port of Boston) this is not the case. Our goal is to quantify the loss incurred by miscoordination between defenders, both theoretically and empirically. We introduce two notions that capture this loss under different assumptions: the price of miscoordination, and the price of sequential commitment. Generally speaking, our theoretical bounds indicate that the loss may be extremely high in the worst case, while our simulations establish a smaller yet significant loss in practice."
]
} |
1505.07204 | 432772781 | The paper presents several results that address a fundamental question in low-rank matrix recovery: how many measurements are needed to recover low rank matrices? We begin by investigating the complex matrices case and show that @math generic measurements are both necessary and sufficient for the recovery of rank- @math matrices in @math by algebraic tools. Thus, we confirm a conjecture raised by Eldar, Needell and Plan for the complex case. We next consider the real case and prove that the bound @math is tight provided @math . Motivated by Vinzant's work, we construct @math matrices in @math by computer random search and prove they define injective measurements on rank- @math matrices in @math . This disproves the conjecture raised by Eldar, Needell and Plan for the real case. Finally, we use the results in this paper to investigate the phase retrieval by projection and show fewer than @math orthogonal projections are possible for the recovery of @math from the norm of them. | In the context of low-rank matrix recovery, it was Eldar, Needell and Plan @cite_5 who showed that @math Gaussian matrices @math have the low-rank matrix recovery property for @math with probability 1 (see also @cite_0 @cite_14 @cite_4 ), provided @math . Naturally, one may be interested in whether the number @math is tight. In @cite_5 , the authors made the following conjecture: | {
"cite_N": [
"@cite_0",
"@cite_5",
"@cite_14",
"@cite_4"
],
"mid": [
"1657130172",
"1968843188",
"1485629353",
""
],
"abstract": [
"This paper establishes information-theoretic limits for estimating a finite-field low-rank matrix given random linear measurements of it. These linear measurements are obtained by taking inner products of the low-rank matrix with random sensing matrices. Necessary and sufficient conditions on the number of measurements required are provided. It is shown that these conditions are sharp and the minimum-rank decoder is asymptotically optimal. The reliability function of this decoder is also derived by appealing to de Caen's lower bound on the probability of a union. The sufficient condition also holds when the sensing matrices are sparse, a scenario that may be amenable to efficient decoding. More precisely, it is shown that if the n × n sensing matrices contain, on average, Ω(n log n) entries, the number of measurements required is the same as that when the sensing matrices are dense and contain entries drawn uniformly at random from the field. Analogies are drawn between the aforementioned results and rank-metric codes in the coding theory literature. In fact, we are also strongly motivated by understanding when minimum rank distance decoding of random rank-metric codes succeeds. To this end, we derive minimum distance properties of equiprobable and sparse rank-metric codes. These distance properties provide a precise geometric interpretation of the fact that the sparse ensemble requires as few measurements as the dense one.",
"Abstract Low-rank matrix recovery addresses the problem of recovering an unknown low-rank matrix from few linear measurements. There has been a large influx of literature deriving conditions under which certain tractable methods will succeed in recovery, demonstrating that m ⩾ Cnr Gaussian measurements are often sufficient to recover any rank-r n × n matrix. In this paper we address the theoretical question of how many measurements are needed via any method whatsoever — tractable or not. We show that for a family of random measurement ensembles, m ⩾ 4nr − 4r² and m ⩾ 2nr − r² + 1 measurements are sufficient to guarantee strong recovery and weak recovery, respectively, by rank minimization. These results give a benchmark to which we may compare the efficacy of tractable methods such as nuclear-norm minimization.",
"Nuclear norm minimization (NNM) has recently gained significant attention for its use in rank minimization problems. Similar to compressed sensing, using null space characterizations, recovery thresholds for NNM have been studied in. However simulations show that the thresholds are far from optimal, especially in the low rank region. In this paper we apply the recent analysis of Stojnic for compressed sensing to the null space conditions of NNM. The resulting thresholds are significantly better and in particular our weak threshold appears to match with simulation results. Further our curves suggest for any rank growing linearly with matrix size n we need only three times of oversampling (the model complexity) for weak recovery. Similar to we analyze the conditions for weak, sectional and strong thresholds. Additionally a separate analysis is given for special case of positive semidefinite matrices. We conclude by discussing simulation results and future research directions.",
""
]
} |
1505.07409 | 2133440863 | This paper explores novel approaches for improving the spatial codification for the pooling of local descriptors to solve the semantic segmentation problem. We propose to partition the image into three regions for each object to be described: Figure, Border and Ground. This partition aims at minimizing the influence of the image context on the object description and vice versa by introducing an intermediate zone around the object contour. Furthermore, we also propose a richer visual descriptor of the object by applying a Spatial Pyramid over the Figure region. Two novel Spatial Pyramid configurations are explored: Cartesian-based and crown-based Spatial Pyramids. We test these approaches with state-of-the-art techniques and show that they improve the Figure-Ground based pooling in the Pascal VOC 2011 and 2012 semantic segmentation challenges. | To analyze our approaches for improving the spatial codification in semantic segmentation in a real context, we have adopted a solution based on the architecture proposed and released by in @cite_18 , which is briefly described next. 150 CPMC object candidates @cite_10 are extracted per image and each object candidate is described by its Figure and Ground features. Three types of enriched local features (eSIFT, eMSIFT and eLBP) are densely extracted and pooled using O2P @cite_18 . | {
"cite_N": [
"@cite_18",
"@cite_10"
],
"mid": [
"78159342",
"2046382188"
],
"abstract": [
"Feature extraction, coding and pooling, are important components on many contemporary object recognition paradigms. In this paper we explore novel pooling techniques that encode the second-order statistics of local descriptors inside a region. To achieve this effect, we introduce multiplicative second-order analogues of average and max-pooling that together with appropriate non-linearities lead to state-of-the-art performance on free-form region recognition, without any type of feature coding. Instead of coding, we found that enriching local descriptors with additional image information leads to large performance gains, especially in conjunction with the proposed pooling methodology. We show that second-order pooling over free-form regions produces results superior to those of the winning systems in the Pascal VOC 2011 semantic segmentation challenge, with models that are 20,000 times faster.",
"We present a novel framework to generate and rank plausible hypotheses for the spatial extent of objects in images using bottom-up computational processes and mid-level selection cues. The object hypotheses are represented as figure-ground segmentations, and are extracted automatically, without prior knowledge of the properties of individual object classes, by solving a sequence of Constrained Parametric Min-Cut problems (CPMC) on a regular image grid. In a subsequent step, we learn to rank the corresponding segments by training a continuous model to predict how likely they are to exhibit real-world regularities (expressed as putative overlap with ground truth) based on their mid-level region properties, then diversify the estimated overlap score using maximum marginal relevance measures. We show that this algorithm significantly outperforms the state of the art for low-level segmentation in the VOC 2009 and 2010 data sets. In our companion papers [1], [2], we show that the algorithm can be used, successfully, in a segmentation-based visual object category recognition pipeline. This architecture ranked first in the VOC2009 and VOC2010 image segmentation and labeling challenges."
]
} |
1505.07184 | 2103471677 | Meaning of a word varies from one domain to another. Despite this important domain dependence in word semantics, existing word representation learning methods are bound to a single domain. Given a pair of source-target domains, we propose an unsupervised method for learning domain-specific word representations that accurately capture the domain-specific aspects of word semantics. First, we select a subset of frequent words that occur in both domains as pivots. Next, we optimize an objective function that enforces two constraints: (a) for both source and target domain documents, pivots that appear in a document must accurately predict the co-occurring non-pivots, and (b) word representations learnt for pivots must be similar in the two domains. Moreover, we propose a method to perform domain adaptation using the learnt word representations. Our proposed method significantly outperforms competitive baselines including the state-of-the-art domain-insensitive word representations, and reports best sentiment classification accuracies for all domain-pairs in a benchmark dataset. | Representing the semantics of a word using some algebraic structure such as a vector (more generally a tensor) is a common first step in many NLP tasks @cite_4 . By applying algebraic operations on the word representations, we can perform numerous tasks in NLP, such as composing representations for larger textual units beyond individual words, such as phrases @cite_3 . Moreover, word representations are found to be useful for measuring semantic similarity, and for solving proportional analogies @cite_22 . Two main approaches for computing word representations can be identified in prior work @cite_18 : counting-based and prediction-based. | {
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_22",
"@cite_3"
],
"mid": [
"2251803266",
"1662133657",
"",
"2137607259"
],
"abstract": [
"Context-predicting models (more commonly known as embeddings or neural language models) are the new kids on the distributional semantics block. Despite the buzz surrounding these models, the literature is still lacking a systematic comparison of the predictive models with classic, count-vector-based distributional semantic approaches. In this paper, we perform such an extensive evaluation, on a wide range of lexical semantics tasks and across many parameter settings. The results, to our own surprise, show that the buzz is fully justified, as the context-predicting models obtain a thorough and resounding victory against their count-based counterparts.",
"Computers understand very little of the meaning of human language. This profoundly limits our ability to give instructions to computers, the ability of computers to explain their actions to us, and the ability of computers to analyse and process text. Vector space models (VSMs) of semantics are beginning to address these limits. This paper surveys the use of VSMs for semantic processing of text. We organize the literature on VSMs according to the structure of the matrix in a VSM. There are currently three broad classes of VSMs, based on term-document, word-context, and pair-pattern matrices, yielding three classes of applications. We survey a broad range of applications in these three categories and we take a detailed look at a specific open source project in each category. Our goal in this survey is to show the breadth of applications of VSMs for semantics, to provide a new perspective on VSMs for those who are already familiar with the area, and to provide pointers into the literature for those who are less familiar with the field.",
"",
"This paper proposes a framework for representing the meaning of phrases and sentences in vector space. Central to our approach is vector composition which we operationalize in terms of additive and multiplicative functions. Under this framework, we introduce a wide range of composition models which we evaluate empirically on a sentence similarity task. Experimental results demonstrate that the multiplicative models are superior to the additive alternatives when compared against human judgments."
]
} |
1505.07184 | 2103471677 | Meaning of a word varies from one domain to another. Despite this important domain dependence in word semantics, existing word representation learning methods are bound to a single domain. Given a pair of source-target domains, we propose an unsupervised method for learning domain-specific word representations that accurately capture the domain-specific aspects of word semantics. First, we select a subset of frequent words that occur in both domains as pivots. Next, we optimize an objective function that enforces two constraints: (a) for both source and target domain documents, pivots that appear in a document must accurately predict the co-occurring non-pivots, and (b) word representations learnt for pivots must be similar in the two domains. Moreover, we propose a method to perform domain adaptation using the learnt word representations. Our proposed method significantly outperforms competitive baselines including the state-of-the-art domain-insensitive word representations, and reports best sentiment classification accuracies for all domain-pairs in a benchmark dataset. | In counting-based approaches @cite_33 , a word @math is represented by a vector @math that contains other words that co-occur with @math in a corpus. Numerous methods for selecting co-occurrence contexts, such as proximity or dependency relations, have been proposed @cite_4 . Despite the numerous successful applications of co-occurrence counting-based distributional word representations, their high dimensionality and sparsity are often problematic in practice. Consequently, further post-processing steps such as dimensionality reduction and feature selection are often required when using counting-based word representations. | {
"cite_N": [
"@cite_4",
"@cite_33"
],
"mid": [
"1662133657",
"2128870637"
],
"abstract": [
"Computers understand very little of the meaning of human language. This profoundly limits our ability to give instructions to computers, the ability of computers to explain their actions to us, and the ability of computers to analyse and process text. Vector space models (VSMs) of semantics are beginning to address these limits. This paper surveys the use of VSMs for semantic processing of text. We organize the literature on VSMs according to the structure of the matrix in a VSM. There are currently three broad classes of VSMs, based on term-document, word-context, and pair-pattern matrices, yielding three classes of applications. We survey a broad range of applications in these three categories and we take a detailed look at a specific open source project in each category. Our goal in this survey is to show the breadth of applications of VSMs for semantics, to provide a new perspective on VSMs for those who are already familiar with the area, and to provide pointers into the literature for those who are less familiar with the field.",
"Research into corpus-based semantics has focused on the development of ad hoc models that treat single tasks, or sets of closely related tasks, as unrelated challenges to be tackled by extracting different kinds of distributional information from the corpus. As an alternative to this \"one task, one model\" approach, the Distributional Memory framework extracts distributional information once and for all from the corpus, in the form of a set of weighted word-link-word tuples arranged into a third-order tensor. Different matrices are then generated from the tensor, and their rows and columns constitute natural spaces to deal with different semantic problems. In this way, the same distributional information can be shared across tasks such as modeling word similarity judgments, discovering synonyms, concept categorization, predicting selectional preferences of verbs, solving analogy problems, classifying relations between word pairs, harvesting qualia structures with patterns or example pairs, predicting the typical properties of concepts, and classifying verbs into alternation classes. Extensive empirical testing in all these domains shows that a Distributional Memory implementation performs competitively against task-specific algorithms recently reported in the literature for the same tasks, and against our implementations of several state-of-the-art methods. The Distributional Memory approach is thus shown to be tenable despite the constraints imposed by its multi-purpose nature."
]
} |
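The counting-based representation summarized in the record above (a word described by the counts of its co-occurring context words) can be sketched in a few lines; the toy corpus and the symmetric window size are illustrative assumptions, not details from the cited works.

```python
from collections import Counter, defaultdict

def cooccurrence_vectors(corpus, window=2):
    """Counting-based word vectors: represent each word by the counts
    of words co-occurring within +/- `window` positions of it."""
    vectors = defaultdict(Counter)
    for sentence in corpus:
        for i, word in enumerate(sentence):
            lo, hi = max(0, i - window), min(len(sentence), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    vectors[word][sentence[j]] += 1
    return vectors

# Toy corpus: "sat" co-occurs with "the" twice, "cat" and "dog" once each.
vecs = cooccurrence_vectors([["the", "cat", "sat"], ["the", "dog", "sat"]])
```

The resulting vectors are as high-dimensional and sparse as the vocabulary itself, which is exactly the practical drawback the record above notes as motivating dimensionality reduction and feature selection.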
1505.07184 | 2103471677 | Meaning of a word varies from one domain to another. Despite this important domain dependence in word semantics, existing word representation learning methods are bound to a single domain. Given a pair of source-target domains, we propose an unsupervised method for learning domain-specific word representations that accurately capture the domain-specific aspects of word semantics. First, we select a subset of frequent words that occur in both domains as pivots. Next, we optimize an objective function that enforces two constraints: (a) for both source and target domain documents, pivots that appear in a document must accurately predict the co-occurring non-pivots, and (b) word representations learnt for pivots must be similar in the two domains. Moreover, we propose a method to perform domain adaptation using the learnt word representations. Our proposed method significantly outperforms competitive baselines including the state-of-the-art domain-insensitive word representations, and reports best sentiment classification accuracies for all domain-pairs in a benchmark dataset. | On the other hand, prediction-based approaches first assign each word a @math -dimensional real vector, and learn the elements of those vectors by applying them in an auxiliary task such as language modeling, where the goal is to predict the next word in a given sequence. The dimensionality @math is fixed for all the words in the vocabulary, and, unlike for counting-based word representations, is much smaller (e.g. @math in practice) than the vocabulary size. The neural network language model (NNLM) @cite_35 uses a multi-layer feed-forward neural network to predict the next word in a sequence, and uses backpropagation to update the word vectors such that the prediction error is minimized. | {
"cite_N": [
"@cite_35"
],
"mid": [
"2132339004"
],
"abstract": [
"A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach allows to take advantage of longer contexts."
]
} |
1505.07184 | 2103471677 | Meaning of a word varies from one domain to another. Despite this important domain dependence in word semantics, existing word representation learning methods are bound to a single domain. Given a pair of source-target domains, we propose an unsupervised method for learning domain-specific word representations that accurately capture the domain-specific aspects of word semantics. First, we select a subset of frequent words that occur in both domains as pivots. Next, we optimize an objective function that enforces two constraints: (a) for both source and target domain documents, pivots that appear in a document must accurately predict the co-occurring non-pivots, and (b) word representations learnt for pivots must be similar in the two domains. Moreover, we propose a method to perform domain adaptation using the learnt word representations. Our proposed method significantly outperforms competitive baselines including the state-of-the-art domain-insensitive word representations, and reports best sentiment classification accuracies for all domain-pairs in a benchmark dataset. | Although NNLMs learn word representations as a by-product, the main focus of language modeling is to predict the next word in a sentence given the previous words, rather than to learn word representations that capture semantics. Moreover, training multi-layer neural networks using large text corpora is time consuming. To overcome those limitations, methods that specifically focus on learning word representations that model word co-occurrences in large corpora have been proposed @cite_32 @cite_12 @cite_1 @cite_24 . Unlike the NNLM, these methods use the words in a contextual window in the prediction task. Methods that use one or no hidden layers have been proposed to improve the scalability of the learning algorithms. For example, the skip-gram model @cite_23 predicts the words @math that appear in the local context of a word @math , whereas the continuous bag-of-words model (CBOW) predicts a word @math conditioned on all the words @math that appear in @math 's local context @cite_32 . Methods that use global co-occurrences in the entire corpus to learn word representations have been shown to outperform methods that use only local co-occurrences @cite_1 @cite_24 . Overall, prediction-based methods have been shown to outperform counting-based methods @cite_18 . | {
"cite_N": [
"@cite_18",
"@cite_1",
"@cite_32",
"@cite_24",
"@cite_23",
"@cite_12"
],
"mid": [
"2251803266",
"2164019165",
"",
"2250539671",
"2950133940",
""
],
"abstract": [
"Context-predicting models (more commonly known as embeddings or neural language models) are the new kids on the distributional semantics block. Despite the buzz surrounding these models, the literature is still lacking a systematic comparison of the predictive models with classic, count-vector-based distributional semantic approaches. In this paper, we perform such an extensive evaluation, on a wide range of lexical semantics tasks and across many parameter settings. The results, to our own surprise, show that the buzz is fully justified, as the context-predicting models obtain a thorough and resounding victory against their count-based counterparts.",
"Unsupervised word representations are very useful in NLP tasks both as inputs to learning algorithms and as extra word features in NLP systems. However, most of these models are built with only local context and one representation per word. This is problematic because words are often polysemous and global context can also provide useful information for learning word meanings. We present a new neural network architecture which 1) learns word embeddings that better capture the semantics of words by incorporating both local and global document context, and 2) accounts for homonymy and polysemy by learning multiple embeddings per word. We introduce a new dataset with human judgments on pairs of words in sentential context, and evaluate our model on it, showing that our model outperforms competitive baselines and other neural language models.",
"",
"Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. We analyze and make explicit the model properties needed for such regularities to emerge in word vectors. The result is a new global logbilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. Our model efficiently leverages statistical information by training only on the nonzero elements in a word-word cooccurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus. The model produces a vector space with meaningful substructure, as evidenced by its performance of 75 on a recent word analogy task. It also outperforms related models on similarity tasks and named entity recognition.",
"The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of \"Canada\" and \"Air\" cannot be easily combined to obtain \"Air Canada\". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible.",
""
]
} |
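The skip-gram vs. CBOW contrast drawn in the record above is a difference in how the prediction task is framed over the same local context windows. The following sketch makes that framing concrete; the example sentence and window size are illustrative assumptions.

```python
def skipgram_pairs(sentence, window=1):
    """Skip-gram framing: each word w separately predicts every context
    word c within +/- `window` positions, yielding (w, c) pairs."""
    pairs = []
    for i, word in enumerate(sentence):
        for j in range(max(0, i - window), min(len(sentence), i + window + 1)):
            if j != i:
                pairs.append((word, sentence[j]))
    return pairs

def cbow_examples(sentence, window=1):
    """CBOW framing: the whole context window jointly predicts the
    centre word, yielding (context_words, w) examples."""
    examples = []
    for i, word in enumerate(sentence):
        ctx = [sentence[j]
               for j in range(max(0, i - window), min(len(sentence), i + window + 1))
               if j != i]
        examples.append((ctx, word))
    return examples

sentence = ["we", "learn", "word", "vectors"]
sg = skipgram_pairs(sentence)   # one training pair per (target, context) combination
cb = cbow_examples(sentence)    # one training example per centre word
```

The learned vectors themselves come from optimizing a prediction objective (e.g. negative sampling) over such pairs, which this sketch deliberately omits.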
1505.07184 | 2103471677 | Meaning of a word varies from one domain to another. Despite this important domain dependence in word semantics, existing word representation learning methods are bound to a single domain. Given a pair of source-target domains, we propose an unsupervised method for learning domain-specific word representations that accurately capture the domain-specific aspects of word semantics. First, we select a subset of frequent words that occur in both domains as pivots. Next, we optimize an objective function that enforces two constraints: (a) for both source and target domain documents, pivots that appear in a document must accurately predict the co-occurring non-pivots, and (b) word representations learnt for pivots must be similar in the two domains. Moreover, we propose a method to perform domain adaptation using the learnt word representations. Our proposed method significantly outperforms competitive baselines including the state-of-the-art domain-insensitive word representations, and reports best sentiment classification accuracies for all domain-pairs in a benchmark dataset. | Although in this paper we focus on the monolingual setting where source and target domains belong to the same language, the related setting of learning representations for words that are translational pairs across languages has also been studied @cite_7 @cite_8 @cite_30 . Such representations are particularly useful for cross-lingual information retrieval @cite_13 . It will be an interesting future research direction to extend our proposed method to learn such cross-lingual word representations. | {
"cite_N": [
"@cite_30",
"@cite_13",
"@cite_7",
"@cite_8"
],
"mid": [
"2952037945",
"2053301011",
"1562955078",
"2251033195"
],
"abstract": [
"We introduce BilBOWA (Bilingual Bag-of-Words without Alignments), a simple and computationally-efficient model for learning bilingual distributed representations of words which can scale to large monolingual datasets and does not require word-aligned parallel training data. Instead it trains directly on monolingual data and extracts a bilingual signal from a smaller set of raw-text sentence-aligned data. This is achieved using a novel sampled bag-of-words cross-lingual objective, which is used to regularize two noise-contrastive language models for efficient cross-lingual feature learning. We show that bilingual embeddings learned using the proposed model outperform state-of-the-art methods on a cross-lingual document classification task as well as a lexical translation task on WMT11 data.",
"Latent relational search is a new search paradigm based on the degree of analogy between two word pairs. A latent relational search engine is expected to return the word Paris as an answer to the question mark (?) in the query (Japan, Tokyo), (France, ?) because the relation between Japan and Tokyo is highly similar to that between France and Paris. We propose an approach for exploring and indexing word pairs to efficiently retrieve candidate answers for a latent relational search query. Representing relations between two words in a word pair by lexical patterns allows our search engine to achieve a high MRR and high precision for the top 1 ranked result. When evaluating with a Web corpus, the proposed method achieves an MRR of 0.963 and it retrieves correct answer in the top 1 for 95.0 of queries.",
"Distributed representations of meaning are a natural way to encode covariance relationships between words and phrases in NLP. By overcoming data sparsity problems, as well as providing information about semantic relatedness which is not available in discrete representations, distributed representations have proven useful in many NLP tasks. Recent work has shown how compositional semantic representations can successfully be applied to a number of monolingual applications such as sentiment analysis. At the same time, there has been some initial success in work on learning shared word-level representations across languages. We combine these two approaches by proposing a method for learning distributed representations in a multilingual setup. Our model learns to assign similar embeddings to aligned sentences and dissimilar ones to sentence which are not aligned while not requiring word alignments. We show that our representations are semantically informative and apply them to a cross-lingual document classification task where we outperform the previous state of the art. Further, by employing parallel corpora of multiple language pairs we find that our model learns representations that capture semantic relationships across languages for which no parallel data was used.",
"Distributed representations of words have proven extremely useful in numerous natural language processing tasks. Their appeal is that they can help alleviate data sparsity problems common to supervised learning. Methods for inducing these representations require only unlabeled language data, which are plentiful for many natural languages. In this work, we induce distributed representations for a pair of languages jointly. We treat it as a multitask learning problem where each task corresponds to a single word, and task relatedness is derived from co-occurrence statistics in bilingual parallel data. These representations can be used for a number of crosslingual learning tasks, where a learner can be trained on annotations present in one language and applied to test data in another. We show that our representations are informative by using them for crosslingual document classification, where classifiers trained on these representations substantially outperform strong baselines (e.g. machine translation) when applied to a new language."
]
} |
1505.06973 | 2949078765 | Formulations of the Image Decomposition Problem as a Multicut Problem (MP) w.r.t. a superpixel graph have received considerable attention. In contrast, instances of the MP w.r.t. a pixel grid graph have received little attention, firstly, because the MP is NP-hard and instances w.r.t. a pixel grid graph are hard to solve in practice, and, secondly, due to the lack of long-range terms in the objective function of the MP. We propose a generalization of the MP with long-range terms (LMP). We design and implement two efficient algorithms (primal feasible heuristics) for the MP and LMP which allow us to study instances of both problems w.r.t. the pixel grid graphs of the images in the BSDS-500 benchmark. The decompositions we obtain do not differ significantly from the state of the art, suggesting that the LMP is a competitive formulation of the Image Decomposition Problem. To demonstrate the generality of the LMP, we apply it also to the Mesh Decomposition Problem posed by the Princeton benchmark, obtaining state-of-the-art decompositions. | A generalization of the MP by a higher-order objective function, called the Higher-Order Multicut Problem (HMP), was proposed in @cite_2 and is studied in detail in @cite_11 @cite_20 . In principle, the HMP subsumes all optimization problems whose feasible solutions coincide with the multicuts of a graph, including the LMP we propose. In fact, the HMP is strictly more general than the LMP; its objective function can assign an objective value to all decompositions for which any set of edges is cut, unlike the objective function of the LMP which is limited to single edges. However, the instances of the HMP that are equivalent to the instances of the LMP we propose have an objective function whose order is equal to the number of edges in the graph and are hence impractical. Thus, the HMP and LMP are complementary in practice. | {
"cite_N": [
"@cite_11",
"@cite_20",
"@cite_2"
],
"mid": [
"",
"2165423399",
"2102338614"
],
"abstract": [
"",
"In this paper, a hypergraph-based image segmentation framework is formulated in a supervised manner for many high-level computer vision tasks. To consider short- and long-range dependency among various regions of an image and also to incorporate wider selection of features, a higher-order correlation clustering (HO-CC) is incorporated in the framework. Correlation clustering (CC), which is a graph-partitioning algorithm, was recently shown to be effective in a number of applications such as natural language processing, document clustering, and image segmentation. It derives its partitioning result from a pairwise graph by optimizing a global objective function such that it simultaneously maximizes both intra-cluster similarity and inter-cluster dissimilarity. In the HO-CC, the pairwise graph which is used in the CC is generalized to a hypergraph which can alleviate local boundary ambiguities that can occur in the CC. Fast inference is possible by linear programming relaxation, and effective parameter learning by structured support vector machine is also possible by incorporating a decomposable structured loss function. Experimental results on various data sets show that the proposed HO-CC outperforms other state-of-the-art image segmentation algorithms. The HO-CC framework is therefore an efficient and flexible image segmentation framework.",
"For many of the state-of-the-art computer vision algorithms, image segmentation is an important preprocessing step. As such, several image segmentation algorithms have been proposed, however, with certain reservation due to high computational load and many hand-tuning parameters. Correlation clustering, a graph-partitioning algorithm often used in natural language processing and document clustering, has the potential to perform better than previously proposed image segmentation algorithms. We improve the basic correlation clustering formulation by taking into account higher-order cluster relationships. This improves clustering in the presence of local boundary ambiguities. We first apply the pairwise correlation clustering to image segmentation over a pairwise superpixel graph and then develop higher-order correlation clustering over a hypergraph that considers higher-order relations among superpixels. Fast inference is possible by linear programming relaxation, and also effective parameter learning framework by structured support vector machine is possible. Experimental results on various datasets show that the proposed higher-order correlation clustering outperforms other state-of-the-art image segmentation algorithms."
]
} |
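The Multicut Problem objective referred to in the records above can be stated concretely as the total weight of edges whose endpoints fall in different components of a decomposition. The tiny graph, edge weights, and node-label encoding of a decomposition below are illustrative assumptions.

```python
def multicut_cost(edges, weights, labels):
    """Multicut objective: sum the weight of every edge whose endpoints
    lie in different components. Assumes each label class induces a
    connected component, so the labeling encodes a valid decomposition."""
    return sum(weights[e]
               for e, (u, v) in enumerate(edges)
               if labels[u] != labels[v])

# Triangle graph with one attractive (negative) and two repulsive edges.
edges = [(0, 1), (1, 2), (0, 2)]
weights = [1.5, -2.0, 0.5]
cost = multicut_cost(edges, weights, [0, 0, 1])  # cuts (1,2) and (0,2)
```

Heuristics such as Kernighan-Lin-style moves search over such labelings for one that minimizes this cost; with the weights above, merging all nodes (cost 0.0) is worse than splitting node 2 off (cost -1.5).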
1505.06973 | 2949078765 | Formulations of the Image Decomposition Problem as a Multicut Problem (MP) w.r.t. a superpixel graph have received considerable attention. In contrast, instances of the MP w.r.t. a pixel grid graph have received little attention, firstly, because the MP is NP-hard and instances w.r.t. a pixel grid graph are hard to solve in practice, and, secondly, due to the lack of long-range terms in the objective function of the MP. We propose a generalization of the MP with long-range terms (LMP). We design and implement two efficient algorithms (primal feasible heuristics) for the MP and LMP which allow us to study instances of both problems w.r.t. the pixel grid graphs of the images in the BSDS-500 benchmark. The decompositions we obtain do not differ significantly from the state of the art, suggesting that the LMP is a competitive formulation of the Image Decomposition Problem. To demonstrate the generality of the LMP, we apply it also to the Mesh Decomposition Problem posed by the Princeton benchmark, obtaining state-of-the-art decompositions. | Efficient algorithms (primal feasible heuristics) for the MP are proposed and analyzed in @cite_5 @cite_37 @cite_32 @cite_12 . The algorithms we design and implement are compared here to the state of the art @cite_37 . Our implementation of (an extension of) the Kernighan-Lin Algorithm (KL) @cite_12 is compared here, in addition, to the implementation of KL in @cite_36 @cite_1 . | {
"cite_N": [
"@cite_37",
"@cite_36",
"@cite_1",
"@cite_32",
"@cite_5",
"@cite_12"
],
"mid": [
"",
"1585614163",
"1971861370",
"1937315027",
"2141985162",
"2161455936"
],
"abstract": [
"",
"OpenGM is a C++ template library for defining discrete graphical models and performing inference on these models, using a wide range of state-of-the-art algorithms. No restrictions are imposed on the factor graph to allow for higher-order factors and arbitrary neighborhood structures. Large models with repetitive structure are handled efficiently because (i) functions that occur repeatedly need to be stored only once, and (ii) distinct functions can be implemented differently, using different encodings alongside each other in the same model. Several parametric functions (e.g. metrics), sparse and dense value tables are provided and so is an interface for custom C++ code. Algorithms are separated by design from the representation of graphical models and are easily exchangeable. OpenGM, its algorithms, HDF5 file format and command line tools are modular and extendible.",
"Szeliski et al. published an influential study in 2006 on energy minimization methods for Markov random fields. This study provided valuable insights in choosing the best optimization technique for certain classes of problems. While these insights remain generally useful today, the phenomenal success of random field models means that the kinds of inference problems that have to be solved changed significantly. Specifically, the models today often include higher order interactions, flexible connectivity structures, large label-spaces of different cardinalities, or learned energy tables. To reflect these changes, we provide a modernized and enlarged study. We present an empirical comparison of more than 27 state-of-the-art optimization techniques on a corpus of 2453 energy minimization instances from diverse applications in computer vision. To ensure reproducibility, we evaluate all methods in the OpenGM 2 framework and report extensive results regarding runtime and solution quality. Key insights from our study agree with the results of Szeliski et al. for the types of models they studied. However, on new and challenging types of models our findings disagree and suggest that polyhedral methods and integer programming solvers are competitive in terms of runtime and solution quality over a large range of model types.",
"Correlation clustering, or multicut partitioning, is widely used in image segmentation for partitioning an undirected graph or image with positive and negative edge weights such that the sum of cut edge weights is minimized. Due to its NP-hardness, exact solvers do not scale and approximative solvers often give unsatisfactory results. We investigate scalable methods for correlation clustering. To this end we define fusion moves for the correlation clustering problem. Our algorithm iteratively fuses the current and a proposed partitioning which monotonously improves the partitioning and maintains a valid partitioning at all times. Furthermore, it scales to larger datasets, gives near optimal solutions, and at the same time shows a good anytime performance.",
"Clustering is a fundamental task in unsupervised learning. The focus of this paper is the Correlation Clustering functional which combines positive and negative affinities between the data points. The contribution of this paper is two fold: (i) Provide a theoretic analysis of the functional. (ii) New optimization algorithms which can cope with large scale problems (>100K variables) that are infeasible using existing methods. Our theoretic analysis provides a probabilistic generative interpretation for the functional, and justifies its intrinsic \"model-selection\" capability. Furthermore, we draw an analogy between optimizing this functional and the well known Potts energy minimization. This analogy allows us to suggest several new optimization algorithms, which exploit the intrinsic \"model-selection\" capability of the functional to automatically recover the underlying number of clusters. We compare our algorithms to existing methods on both synthetic and real data. In addition we suggest two new applications that are made possible by our algorithms: unsupervised face identification and interactive multi-object segmentation by rough boundary delineation.",
"We consider the problem of partitioning the nodes of a graph with costs on its edges into subsets of given sizes so as to minimize the sum of the costs on all edges cut. This problem arises in several physical situations — for example, in assigning the components of electronic circuits to circuit boards to minimize the number of connections between boards. This paper presents a heuristic method for partitioning arbitrary graphs which is both effective in finding optimal partitions, and fast enough to be practical in solving large problems."
]
} |
1505.06973 | 2949078765 | Formulations of the Image Decomposition Problem as a Multicut Problem (MP) w.r.t. a superpixel graph have received considerable attention. In contrast, instances of the MP w.r.t. a pixel grid graph have received little attention, firstly, because the MP is NP-hard and instances w.r.t. a pixel grid graph are hard to solve in practice, and, secondly, due to the lack of long-range terms in the objective function of the MP. We propose a generalization of the MP with long-range terms (LMP). We design and implement two efficient algorithms (primal feasible heuristics) for the MP and LMP which allow us to study instances of both problems w.r.t. the pixel grid graphs of the images in the BSDS-500 benchmark. The decompositions we obtain do not differ significantly from the state of the art, suggesting that the LMP is a competitive formulation of the Image Decomposition Problem. To demonstrate the generality of the LMP, we apply it also to the Mesh Decomposition Problem posed by the Princeton benchmark, obtaining state-of-the-art decompositions. | Toward image decomposition @cite_28 , the state of the art in boundary detection is @cite_13 @cite_18 , followed closely by @cite_23 @cite_4 . Our experiments are based on @cite_23 which is publicly available and outperformed marginally by @cite_13 @cite_18 . The state of the art in image decomposition is @cite_26 , followed closely by @cite_28 @cite_4 . Our results are compared quantitatively to @cite_26 . | {
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_4",
"@cite_28",
"@cite_23",
"@cite_13"
],
"mid": [
"",
"1991367009",
"",
"2110158442",
"2129587342",
"1930528368"
],
"abstract": [
"",
"We propose a unified approach for bottom-up hierarchical image segmentation and object candidate generation for recognition, called Multiscale Combinatorial Grouping (MCG). For this purpose, we first develop a fast normalized cuts algorithm. We then propose a high-performance hierarchical segmenter that makes effective use of multiscale information. Finally, we propose a grouping strategy that combines our multiscale regions into highly-accurate object candidates by exploring efficiently their combinatorial space. We conduct extensive experiments on both the BSDS500 and on the PASCAL 2012 segmentation datasets, showing that MCG produces state-of-the-art contours, hierarchical regions and object candidates.",
"",
"This paper investigates two fundamental problems in computer vision: contour detection and image segmentation. We present state-of-the-art algorithms for both of these tasks. Our contour detector combines multiple local cues into a globalization framework based on spectral clustering. Our segmentation algorithm consists of generic machinery for transforming the output of any contour detector into a hierarchical region tree. In this manner, we reduce the problem of image segmentation to that of contour detection. Extensive experimental evaluation demonstrates that both our contour detection and segmentation methods significantly outperform competing algorithms. The automatically generated hierarchical segmentations can be interactively refined by user-specified annotations. Computation at multiple image resolutions provides a means of coupling our system to recognition applications.",
"Edge detection is a critical component of many vision systems, including object detectors and image segmentation algorithms. Patches of edges exhibit well-known forms of local structure, such as straight lines or T-junctions. In this paper we take advantage of the structure present in local image patches to learn both an accurate and computationally efficient edge detector. We formulate the problem of predicting local edge masks in a structured learning framework applied to random decision forests. Our novel approach to learning decision trees robustly maps the structured labels to a discrete space on which standard information gain measures may be evaluated. The result is an approach that obtains real time performance that is orders of magnitude faster than many competing state-of-the-art approaches, while also achieving state-of-the-art edge detection results on the BSDS500 Segmentation dataset and NYU Depth dataset. Finally, we show the potential of our approach as a general purpose edge detector by showing our learned edge models generalize well across datasets.",
"Contour detection has been a fundamental component in many image segmentation and object detection systems. Most previous work utilizes low-level features such as texture or saliency to detect contours and then use them as cues for a higher-level task such as object detection. However, we claim that recognizing objects and predicting contours are two mutually related tasks. Contrary to traditional approaches, we show that we can invert the commonly established pipeline: instead of detecting contours with low-level cues for a higher-level recognition task, we exploit object-related features as high-level cues for contour detection."
]
} |
1505.06973 | 2949078765 | Formulations of the Image Decomposition Problem as a Multicut Problem (MP) w.r.t. a superpixel graph have received considerable attention. In contrast, instances of the MP w.r.t. a pixel grid graph have received little attention, firstly, because the MP is NP-hard and instances w.r.t. a pixel grid graph are hard to solve in practice, and, secondly, due to the lack of long-range terms in the objective function of the MP. We propose a generalization of the MP with long-range terms (LMP). We design and implement two efficient algorithms (primal feasible heuristics) for the MP and LMP which allow us to study instances of both problems w.r.t. the pixel grid graphs of the images in the BSDS-500 benchmark. The decompositions we obtain do not differ significantly from the state of the art, suggesting that the LMP is a competitive formulation of the Image Decomposition Problem. To demonstrate the generality of the LMP, we apply it also to the Mesh Decomposition Problem posed by the Princeton benchmark, obtaining state-of-the-art decompositions. | Toward mesh decomposition @cite_10 , the state of the art is @cite_27 @cite_41 @cite_21 , followed closely by @cite_19 @cite_38 . Our experiments are based on @cite_27 @cite_19 @cite_38 . In prior work, methods based on learning mostly rely on a unary term which requires components to be labeled semantically @cite_27 @cite_21 . One method based on edge probabilities was introduced previously @cite_41 . It applies a complex post-process (contour thinning and completion, snake movement) to obtain a decomposition. We show the first mesh decompositions based on multicuts. | {
"cite_N": [
"@cite_38",
"@cite_41",
"@cite_21",
"@cite_19",
"@cite_27",
"@cite_10"
],
"mid": [
"2132582440",
"1991337803",
"2027069615",
"",
"2106723645",
"1994509172"
],
"abstract": [
"This paper presents a simple and efficient automatic mesh segmentation algorithm that solely exploits the shape concavity information. The method locates concave creases and seams using a set of concavity-sensitive scalar fields. These fields are computed by solving a Laplacian system with a novel concavity-sensitive weighting scheme. Isolines sampled from the concavity-aware fields naturally gather at concave seams, serving as good cutting boundary candidates. In addition, the fields provide sufficient information allowing efficient evaluation of the candidate cuts. We perform a summarization of all field gradient magnitudes to define a score for each isoline and employ a score-based greedy algorithm to select the best cuts. Extensive experiments and quantitative analysis have shown that the quality of our segmentations are better than or comparable with existing state-of-the-art more complex approaches.",
"This paper presents a 3D-mesh segmentation algorithm based on a learning approach. A large database of manually segmented 3D-meshes is used to learn a boundary edge function. The function is learned using a classifier which automatically selects from a pool of geometric features the most relevant ones to detect candidate boundary edges. We propose a processing pipeline that produces smooth closed boundaries using this edge function. This pipeline successively selects a set of candidate boundary contours, closes them and optimizes them using a snake movement. Our algorithm was evaluated quantitatively using two different segmentation benchmarks and was shown to outperform most recent algorithms from the state-of-the-art.",
"We propose a fast method for 3D shape segmentation and labeling via Extreme Learning Machine (ELM). Given a set of example shapes with labeled segmentation, we train an ELM classifier and use it to produce initial segmentation for test shapes. Based on the initial segmentation, we compute the final smooth segmentation through a graph-cut optimization constrained by the super-face boundaries obtained by over-segmentation and the active contours computed from ELM segmentation. Experimental results show that our method achieves comparable results against the state-of-the-arts, but reduces the training time by approximately two orders of magnitude, both for face-level and super-face-level, making it scale well for large datasets. Based on such notable improvement, we demonstrate the application of our method for fast online sequential learning for 3D shape segmentation at face level, as well as realtime sequential learning at super-face level.",
"",
"This paper presents a data-driven approach to simultaneous segmentation and labeling of parts in 3D meshes. An objective function is formulated as a Conditional Random Field model, with terms assessing the consistency of faces with labels, and terms between labels of neighboring faces. The objective function is learned from a collection of labeled training meshes. The algorithm uses hundreds of geometric and contextual label features and learns different types of segmentations for different tasks, without requiring manual parameter tuning. Our algorithm achieves a significant improvement in results over the state-of-the-art when evaluated on the Princeton Segmentation Benchmark, often producing segmentations and labelings comparable to those produced by humans.",
"An overview of the state-of-the-art in 3D mesh segmentation.An overview of performance evaluation frameworks for 3D mesh segmentation.New categorization for 3D mesh segmentation methodologies.Pros and cons in performance evaluation frameworks. 3D mesh segmentation has become a crucial part of many applications in 3D shape analysis. In this paper, a comprehensive survey on 3D mesh segmentation methods is presented. Analysis of the existing methodologies is addressed taking into account a new categorization along with the performance evaluation frameworks which aim to support meaningful benchmarks not only qualitatively but also in a quantitative manner. This survey aims to capture the essence of current trends in 3D mesh segmentation."
]
} |
1505.06897 | 1752795171 | At the light of regularized dynamic time warping kernels, this paper reconsider the concept of time elastic centroid (TEC) for a set of time series. From this perspective, we show first how TEC can easily be addressed as a preimage problem. Unfortunately this preimage problem is ill-posed, may suffer from over-fitting especially for long time series and getting a sub-optimal solution involves heavy computational costs. We then derive two new algorithms based on a probabilistic interpretation of kernel alignment matrices that expresses in terms of probabilistic distributions over sets of alignment paths. The first algorithm is an iterative agglomerative heuristics inspired from the state of the art DTW barycenter averaging (DBA) algorithm proposed specifically for the Dynamic Time Warping measure. The second proposed algorithm achieves a classical averaging of the aligned samples but also implements an averaging of the time of occurrences of the aligned samples. It exploits a straightforward progressive agglomerative heuristics. An experimentation that compares for 45 time series datasets classification error rates obtained by first near neighbors classifiers exploiting a single medoid or centroid estimate to represent each categories show that: i) centroids based approaches significantly outperform medoids based approaches, ii) on the considered experience, the two proposed algorithms outperform the state of the art DBA algorithm, and iii) the second proposed algorithm that implements an averaging jointly in the sample space and along the time axes emerges as the most significantly robust time elastic averaging heuristic with an interesting noise reduction capability. Index Terms—Time series averaging, Time elastic kernel, Dynamic Time Warping, Time series clustering and classification. | Time series averaging in the context of (multiple) time elastic distance alignments has been mainly addressed in the scope of the Dynamic Time Warping (DTW) measure @cite_26 , @cite_21 . Although other time elastic distance measures such as the Edit Distance With Real Penalty (ERP) @cite_12 or the Time Warp Edit Distance (TWED) @cite_28 could be considered instead, without loss of generality, we remain focused throughout this paper on DTW and its kernelization. | {
"cite_N": [
"@cite_28",
"@cite_26",
"@cite_21",
"@cite_12"
],
"mid": [
"2143325592",
"1986092967",
"54230203",
"1597504361"
],
"abstract": [
"In a way similar to the string-to-string correction problem, we address discrete time series similarity in light of a time-series-to-time-series-correction problem for which the similarity between two time series is measured as the minimum cost sequence of edit operations needed to transform one time series into another. To define the edit operations, we use the paradigm of a graphical editing process and end up with a dynamic programming algorithm that we call time warp edit distance (TWED). TWED is slightly different in form from dynamic time warping (DTW), longest common subsequence (LCSS), or edit distance with real penalty (ERP) algorithms. In particular, it highlights a parameter that controls a kind of stiffness of the elastic measure along the time axis. We show that the similarity provided by TWED is a potentially useful metric in time series retrieval applications since it could benefit from the triangular inequality property to speed up the retrieval process while tuning the parameters of the elastic measure. In that context, a lower bound is derived to link the matching of time series into down sampled representation spaces to the matching into the original space. The empiric quality of the TWED distance is evaluated on a simple classification task. Compared to edit distance, DTW, LCSS, and ERP, TWED has proved to be quite effective on the considered experimental task.",
"Experiments on the automatic recognition of 203 Russian words are described. The experimental vocabulary includes terms of the language, ALGOL -60 together with others. The logarithmic characteristics of acoustic signal in five bands are extracted as features. The measure of similarity between the words of standard and control sequences is calculated by the words maximizing a definite functional using dynamic programming. The average reliability of recognition for one speaker obtained for experiments using 5000 words is 0·95. The computational time for recognition is 2-4 sec.",
"",
"Existing studies on time series are based on two categories of distance functions. The first category consists of the Lp-norms. They are metric distance functions but cannot support local time shifting. The second category consists of distance functions which are capable of handling local time shifting but are nonmetric. The first contribution of this paper is the proposal of a new distance function, which we call ERP (\"Edit distance with Real Penalty\"). Representing a marriage of L1- norm and the edit distance, ERP can support local time shifting, and is a metric. The second contribution of the paper is the development of pruning strategies for large time series databases. Given that ERP is a metric, one way to prune is to apply the triangle inequality. Another way to prune is to develop a lower bound on the ERP distance. We propose such a lower bound, which has the nice computational property that it can be efficiently indexed with a standard B+- tree. Moreover, we show that these two ways of pruning can be used simultaneously for ERP distances. Specifically, the false positives obtained from the B+-tree can be further minimized by applying the triangle inequality. Based on extensive experimentation with existing benchmarks and techniques, we show that this combination delivers superb pruning power and search time performance, and dominates all existing strategies."
]
} |
1505.06897 | 1752795171 | At the light of regularized dynamic time warping kernels, this paper reconsider the concept of time elastic centroid (TEC) for a set of time series. From this perspective, we show first how TEC can easily be addressed as a preimage problem. Unfortunately this preimage problem is ill-posed, may suffer from over-fitting especially for long time series and getting a sub-optimal solution involves heavy computational costs. We then derive two new algorithms based on a probabilistic interpretation of kernel alignment matrices that expresses in terms of probabilistic distributions over sets of alignment paths. The first algorithm is an iterative agglomerative heuristics inspired from the state of the art DTW barycenter averaging (DBA) algorithm proposed specifically for the Dynamic Time Warping measure. The second proposed algorithm achieves a classical averaging of the aligned samples but also implements an averaging of the time of occurrences of the aligned samples. It exploits a straightforward progressive agglomerative heuristics. An experimentation that compares for 45 time series datasets classification error rates obtained by first near neighbors classifiers exploiting a single medoid or centroid estimate to represent each categories show that: i) centroids based approaches significantly outperform medoids based approaches, ii) on the considered experience, the two proposed algorithms outperform the state of the art DBA algorithm, and iii) the second proposed algorithm that implements an averaging jointly in the sample space and along the time axes emerges as the most significantly robust time elastic averaging heuristic with an interesting noise reduction capability. Index Terms—Time series averaging, Time elastic kernel, Dynamic Time Warping, Time series clustering and classification. | A single alignment path is required to calculate the time elastic centroid of a pair of time series (Def. ). However, multiple path alignments need to be considered to evaluate the centroid of a larger set of time series. Multiple alignments have been widely studied in bioinformatics @cite_19 , and it has been shown that the computational complexity of determining the optimal alignment of a set of sequences under the sum of all pairs (SP) score scheme is a NP-complete problem @cite_7 @cite_27 . The time and space complexity of this problem is @math , where @math is the number of sequences in the set and @math is the length of the sequences when using dynamic programming to search for an optimal solution @cite_1 . This latter result applies to the estimation of the time elastic centroid of a set of @math time series with respect to the DTW measure. Since the search for an optimal solution becomes rapidly intractable with increasing @math , sub-optimal heuristic solutions have been subsequently proposed, most of them falling into one of the following three categories. | {
"cite_N": [
"@cite_19",
"@cite_27",
"@cite_1",
"@cite_7"
],
"mid": [
"2908362408",
"2053663417",
"1974326986",
"2002638840"
],
"abstract": [
"",
"It is shown that the multiple alignment problem with SP-score is NP-hard for each scoring matrix in a broad class M that includes most scoring matrices actually used in biological applications. The problem remains NP-hard even if sequences can only be shifted relative to each other and no internal gaps are allowed. It is also shown that there is a scoring matrix M0 such that the multiple alignment problem for M0 is MAX-SNP-hard, regardless of whether or not internal gaps are allowed.",
"The study and comparison of sequences of characters from a finite alphabet is relevant to various areas of science, notably molecular biology. The measurement of sequence similarity involves the consideration of the different possible sequence alignments in order to find an optimal one for which the “distance” between sequences is minimum. By associating a path in a lattice to each alignment, a geometric insight can be brought into the problem of finding an optimal alignment. This problem can then be solved by applying a dynamic programming algorithm. However, the computational effort grows rapidly with the number N of sequences to be compared @math , where l is the mean length of the sequences to be compared).It is proved here that knowledge of the measure of an arbitrarily chosen alignment can be used in combination with information from the pairwise alignments to considerably restrict the size of the region of the lattice in consideration. This reduction implies fewer computations and less memory ...",
"ABSTRACT We study the computational complexity of two popular problems in multiple sequence alignment: multiple alignment with SP-score and multiple tree alignment. It is shown that the first problem is NP-complete and the second is MAX SNP-hard. The complexity of tree alignment with a given phylogeny is also considered."
]
} |
1505.06851 | 1563122285 | Smell has a huge influence over how we perceive places. Despite its importance, smell has been crucially overlooked by urban planners and scientists alike, not least because it is difficult to record and analyze at scale. One of the authors of this paper has ventured out in the urban world and conducted smellwalks in a variety of cities: participants were exposed to a range of different smellscapes and asked to record their experiences. As a result, smell-related words have been collected and classified, creating the first dictionary for urban smell. Here we explore the possibility of using social media data to reliably map the smells of entire cities. To this end, for both Barcelona and London, we collect geo-referenced picture tags from Flickr and Instagram, and geo-referenced tweets from Twitter. We match those tags and tweets with the words in the smell dictionary. We find that smell-related words are best classified in ten categories. We also find that specific categories (e.g., industry, transport, cleaning) correlate with governmental air quality indicators, adding validity to our study. | People are able to detect up to 1 trillion smells @cite_2 . Despite that, there are limited maps of this potentially vast urban smellscape. One reason is that smell is problematic to record, to analyse and to depict visually. Here we review a variety of methodological approaches for recording urban smells. | {
"cite_N": [
"@cite_2"
],
"mid": [
"1969133910"
],
"abstract": [
"Humans can discriminate several million different colors and almost half a million different tones, but the number of discriminable olfactory stimuli remains unknown. The lay and scientific literature typically claims that humans can discriminate 10,000 odors, but this number has never been empirically validated. We determined the resolution of the human sense of smell by testing the capacity of humans to discriminate odor mixtures with varying numbers of shared components. On the basis of the results of psychophysical testing, we calculated that humans can discriminate at least 1 trillion olfactory stimuli. This is far more than previous estimates of distinguishable olfactory stimuli. It demonstrates that the human olfactory system, with its hundreds of different olfactory receptors, far outperforms the other senses in the number of physically different stimuli it can discriminate."
]
} |
1505.06851 | 1563122285 | Smell has a huge influence over how we perceive places. Despite its importance, smell has been crucially overlooked by urban planners and scientists alike, not least because it is difficult to record and analyze at scale. One of the authors of this paper has ventured out in the urban world and conducted smellwalks in a variety of cities: participants were exposed to a range of different smellscapes and asked to record their experiences. As a result, smell-related words have been collected and classified, creating the first dictionary for urban smell. Here we explore the possibility of using social media data to reliably map the smells of entire cities. To this end, for both Barcelona and London, we collect geo-referenced picture tags from Flickr and Instagram, and geo-referenced tweets from Twitter. We match those tags and tweets with the words in the smell dictionary. We find that smell-related words are best classified in ten categories. We also find that specific categories (e.g., industry, transport, cleaning) correlate with governmental air quality indicators, adding validity to our study. | Online participatory mapping allows web users to annotate pre-designed base maps with odor markers @cite_25 . This method promises to be scalable, but engaging enough people to participate is hard. | {
"cite_N": [
"@cite_25"
],
"mid": [
"1580732488"
],
"abstract": [
"List of Figures and Charts Acknowledgements Glossary Chapter One: Introduction PART I: SMELL, SOCIETY AND CITIES Chapter Two: Perspectives on Smell and the City: Part I Chapter Three: Perspectives on Smell and the City: Part II Chapter Four: Smellwalking and Representing Urban Smellscapes PART 2: SMELL SOURCES IN THE CITY Chapter Five: Air Quality, Pollution and Smell Chapter Six: Food and Smell Chapter Seven: Urban Policy and Smell PART 3: SMELLSCAPE CONTROL, DESIGN AND PLACEMAKING Chapter Eight: Processes of Odour Management and Control in the City Chapter Nine: Designing with Smell - Restorative Environments and Design Tools Chapter Ten: Odour, Placemaking and Urban Smellscape Design Conclusion References"
]
} |
1505.06455 | 2590435974 | A visceral structure on a model is given by a definable base for a uniform topology on its universe M in which all basic open sets are infinite and any infinite definable subset X of M has non-empty interior. Assuming only viscerality, we show that the definable sets in M satisfy some desirable topological tameness conditions. For example, any definable unary function has a finite set of discontinuities; any definable function from some Cartesian power of M into M is continuous on an open set; and assuming definable finite choice, we obtain a cell decomposition result for definable sets. Under an additional topological assumption ("no space-filling functions"), we prove that the natural notion of topological dimension is invariant under definable bijections. These results generalize some of the theorems proved by Simon and Walsberg, who assumed dp-minimality in addition to viscerality. In the final two sections, we construct new examples of visceral structures a subclass of which are dp-minimal yet not weakly o-minimal. | Simon and Walsberg @cite_25 recently proved some similar results for visceral dp-minimal theories (although they did not call them such; what we call viscerality, they called ()''). For instance, they also proved that definable functions are continuous almost everywhere and that the natural topological dimension function is invariant under definable bijections. We do not assume dp-minimality or even NIP, and in that sense our results are more general; on the other hand, we needed Definable Finite Choice for our cell decomposition theorem and a few other results, whereas Simon and Walsberg compensate for the lack of DFC by decomposing definable sets into graphs of continuous multi-valued functions.'' | {
"cite_N": [
"@cite_25"
],
"mid": [
"2272254781"
],
"abstract": [
"We develop tame topology over dp-minimal structures equipped with definable uniformities satisfying certain assumptions. Our assumptions are enough to ensure that definable sets are tame: there is a good notion of dimension on definable sets, definable functions are almost everywhere continuous, and definable sets are finite unions of graphs of definable continuous \"multi-valued functions\". This generalizes known statements about weakly o-minimal, C-minimal and P-minimal theories."
]
} |
1505.06455 | 2590435974 | A visceral structure on a model is given by a definable base for a uniform topology on its universe M in which all basic open sets are infinite and any infinite definable subset X of M has non-empty interior. Assuming only viscerality, we show that the definable sets in M satisfy some desirable topological tameness conditions. For example, any definable unary function has a finite set of discontinuities; any definable function from some Cartesian power of M into M is continuous on an open set; and assuming definable finite choice, we obtain a cell decomposition result for definable sets. Under an additional topological assumption ("no space-filling functions"), we prove that the natural notion of topological dimension is invariant under definable bijections. These results generalize some of the theorems proved by Simon and Walsberg, who assumed dp-minimality in addition to viscerality. In the final two sections, we construct new examples of visceral structures a subclass of which are dp-minimal yet not weakly o-minimal. | In William Johnson's Ph.D. thesis @cite_18 , it is shown that any dp-minimal, not strongly minimal field has a definable uniform structure which is visceral in our sense, furnishing many interesting examples of visceral theories. | {
"cite_N": [
"@cite_18"
],
"mid": [
"2550342065"
],
"abstract": [
"Author(s): Johnson, William Andrew | Advisor(s): Scanlon, Thomas | Abstract: This dissertation is a collection of results in model theory, related in one way or another to fields, NIP theories, and elimination of imaginaries. The most important result is a classification of dp-minimal fields, presented in Chapter 9. We construct in a canonical fashion a non-trivial Hausdorff definable field topology on any unstable dp-minimal field. Usingthis we classify the dp-minimal pure fields and valued fields up to elementary equivalence. Furthermore we prove that every VC-minimal field is real closed or algebraically closed.In Chapter 11, we analyze the theories of existentially closed fields with several valuations and orderings, as studied by van den Dries. We show that these model complete theories are NTP2, and analyze forking, dividing, and burden in these theories. The theory of algebraically closed fields with n independent valuation rings turns out to be an example of such a theory. This provides a new and natural example of an NTP2 theory which is neithersimple nor NIP, nor even a conceptual hybrid of something simple and something NIP.In Chapter 8, we exhibit a bad failure of elimination of imaginaries in a dense o-minimal structure. We produce an exotic interpretable set which cannot be put in definable bijection with a definable set, after naming any amount of parameters. However, we show that these exotic interpretable sets are still amenable to some of the tools of tame topology: they mustadmit nice definable topologies locally homeomorphic to definable sets.Chapter 12 proves the existence of Z nZ-valued definable strong Euler characteristics on pseudofinite fields, which measure the non-standard “size” of definable sets, mod n. The non-trivial result is that these “sizes” are definable in families of definable sets. 
This could probably be proven using étale cohomology, but we give a more elementary proof relying heavily on the theory of abelian varieties. We also present simplified and new proofs of several model-theoretic facts, including the definability of irreducibility and Zariski closure in ACF (Chapter 10), and elimination of imaginaries in ACVF (Chapter 6). This latter fact was originally proven by Haskell, Hrushovski, and Macpherson. We give a proof that is drastically simpler, inspired by Poizat’s proofs of elimination of imaginaries in ACF and DCF."
]
} |
1505.06241 | 411845244 | Private information retrieval (PIR) protocols allow a user to retrieve a data item from a database without revealing any information about the identity of the item being retrieved. Specifically, in information-theoretic @math -server PIR, the database is replicated among @math non-communicating servers, and each server learns nothing about the item retrieved by the user. The cost of PIR protocols is usually measured in terms of their communication complexity, which is the total number of bits exchanged between the user and the servers, and storage overhead, which is the ratio between the total number of bits stored on all the servers and the number of bits in the database. Since single-server information-theoretic PIR is impossible, the storage overhead of all existing PIR protocols is at least @math . In this work, we show that information-theoretic PIR can be achieved with storage overhead arbitrarily close to the optimal value of @math , without sacrificing the communication complexity. Specifically, we prove that all known @math -server PIR protocols can be efficiently emulated, while preserving both privacy and communication complexity but significantly reducing the storage overhead. To this end, we distribute the @math bits of the database among @math servers, each storing @math coded bits (rather than replicas). For every fixed @math , the resulting storage overhead @math approaches @math as @math grows; explicitly we have @math . Moreover, in the special case @math , the storage overhead is only @math . In order to achieve these results, we introduce and study a new kind of binary linear codes, called here @math -server PIR codes. We then show how such codes can be constructed, and we establish several bounds on the parameters of @math -server PIR codes. Finally, we briefly discuss extensions of our results to nonbinary alphabets, to robust PIR, and to @math -private PIR. 
| There are several previous works that construct coded schemes for the purpose of fast or private retrieval. The first work we know of for the purpose of coded private retrieval is the recent work by @cite_23 . The authors showed how to encode files in multiple servers with very low communication complexity. However, their constructions require an exponentially large number of servers, which may depend on the number of files or their size. Another recent work @cite_24 studied the tradeoff between storage overhead and communication complexity, though only for setups in which the size of each file is relatively large. A similar approach to ours was studied by @cite_36 , where the authors also partitioned the database into several parts in order to avoid repetition and thereby reduce the storage overhead. However, their construction works only for the PIR scheme using the multiplicity codes by @cite_0 , and they did not encode the parts of the database as we study in this work. | {
"cite_N": [
"@cite_24",
"@cite_0",
"@cite_36",
"@cite_23"
],
"mid": [
"1962818100",
"",
"2161788512",
"2007725949"
],
"abstract": [
"Private information retrieval scheme for coded data storage is considered in this paper. We focus on the case where the size of each data record is large and hence only the download cost (but not the upload cost for transmitting retrieval queries) is of interest. We prove that the tradeoff between storage cost and retrieval download cost depends on the number of data records in the system. We propose a class of linear storage codes and retrieval schemes, and derive conditions under which our schemes are error-free and private. Tradeoffs between the storage cost and retrieval costs are also obtained.",
"",
"Since the concept of locally decodable codes was introduced by Katz and Trevisan in 2000 [11], it is well-known that information theoretically secure private information retrieval schemes can be built using locally decodable codes [15]. In this paper, we construct a Byzantine robust PIR scheme using the multiplicity codes introduced by [12]. Our main contributions are on the one hand to avoid full replication of the database on each server; this significantly reduces the global redundancy. On the other hand, to have a much lower locality in the PIR context than in the LDC context. This shows that there exists two different notions: LDC-locality and PIR-locality. This is made possible by exploiting geometric properties of multiplicity codes.",
"Private information retrieval (PIR) systems allow a user to retrieve a record from a public database without revealing to the server which record is being retrieved. The literature on PIR considers only replication-based systems, wherein each storage node stores a copy of the entire data. However, systems based on erasure codes are gaining increasing popularity due to a variety of reasons. This paper initiates an investigation into PIR in erasure-coded systems by establishing its capacity and designing explicit codes and algorithms. The notion of privacy considered here is information-theoretic, and the metric optimized is the amount of data downloaded by the user during PIR. In this paper, we present four main results. First, we design an explicit erasure code and PIR algorithm that requires only one extra bit of download to provide perfect privacy. In contrast, all existing PIR algorithms require a download of at least twice the size of the requisite data. Second, we derive lower bounds proving the necessity of downloading at least one additional bit. This establishes the precise capacity of PIR with respect to the metric of download. These results are also applicable to PIR in replication-based systems, which are a special case of erasure codes. Our third contribution is a negative result showing that capacity-achieving codes necessitate super-linear storage overheads. This motivates the fourth contribution of this paper: an erasure code and PIR algorithm that requires a linear storage overhead, provides high reliability to the data, and is a small factor away from the capacity."
]
} |
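As a concrete baseline for the replication-based protocols this row discusses, here is a minimal sketch of the classic information-theoretic 2-server XOR PIR protocol (function names are ours, for illustration). Each server stores a full replica, so the storage overhead is exactly 2 — the overhead the paper's k-server PIR codes reduce toward 1:

```python
import secrets

def xor_pir_query(n, i):
    """User side of 2-server XOR PIR: pick a uniformly random subset S of
    {0..n-1}; send S to server 1 and S xor {i} to server 2.  Each query in
    isolation is a uniform random subset, so neither server learns i."""
    s1 = {j for j in range(n) if secrets.randbits(1)}
    s2 = s1 ^ {i}  # symmetric difference: flip membership of index i
    return s1, s2

def xor_pir_answer(db, subset):
    """Server side: XOR together the database bits indexed by the query."""
    ans = 0
    for j in subset:
        ans ^= db[j]
    return ans

def xor_pir_retrieve(db, i):
    """Combine the two answers: every bit cancels except db[i]."""
    s1, s2 = xor_pir_query(len(db), i)
    return xor_pir_answer(db, s1) ^ xor_pir_answer(db, s2)

db = [1, 0, 1, 1, 0, 0, 1, 0]
assert all(xor_pir_retrieve(db, i) == db[i] for i in range(len(db)))
```

The communication here is linear in n; the known k-server protocols the paper emulates achieve much lower communication, but the replication (and hence the ≥2 storage overhead) is the same.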
1505.06241 | 411845244 | Private information retrieval (PIR) protocols allow a user to retrieve a data item from a database without revealing any information about the identity of the item being retrieved. Specifically, in information-theoretic @math -server PIR, the database is replicated among @math non-communicating servers, and each server learns nothing about the item retrieved by the user. The cost of PIR protocols is usually measured in terms of their communication complexity, which is the total number of bits exchanged between the user and the servers, and storage overhead, which is the ratio between the total number of bits stored on all the servers and the number of bits in the database. Since single-server information-theoretic PIR is impossible, the storage overhead of all existing PIR protocols is at least @math . In this work, we show that information-theoretic PIR can be achieved with storage overhead arbitrarily close to the optimal value of @math , without sacrificing the communication complexity. Specifically, we prove that all known @math -server PIR protocols can be efficiently emulated, while preserving both privacy and communication complexity but significantly reducing the storage overhead. To this end, we distribute the @math bits of the database among @math servers, each storing @math coded bits (rather than replicas). For every fixed @math , the resulting storage overhead @math approaches @math as @math grows; explicitly we have @math . Moreover, in the special case @math , the storage overhead is only @math . In order to achieve these results, we introduce and study a new kind of binary linear codes, called here @math -server PIR codes. We then show how such codes can be constructed, and we establish several bounds on the parameters of @math -server PIR codes. Finally, we briefly discuss extensions of our results to nonbinary alphabets, to robust PIR, and to @math -private PIR. 
| Batch codes @cite_3 are another method to store coded data in a distributed storage for the purpose of fast retrieval of multiple bits. Under this setup, the database is encoded into an @math -tuple of strings, called buckets, such that each batch of @math bits from the database can be recovered by reading at most some predetermined @math bits from each bucket. They are also useful in trading the storage overhead in exchange for load-balancing or lowering the computational complexity in private information retrieval. Another recent work on batch codes appeared in @cite_8 . | {
"cite_N": [
"@cite_3",
"@cite_8"
],
"mid": [
"2088336724",
"2951944286"
],
"abstract": [
"A batch code encodes a string x into an m-tuple of strings, called buckets, such that each batch of k bits from x can be decoded by reading at most one (more generally, t) bits from each bucket. Batch codes can be viewed as relaxing several combinatorial objects, including expanders and locally decodable codes. We initiate the study of these codes by presenting some constructions, connections with other problems, and lower bounds. We also demonstrate the usefulness of batch codes by presenting two types of applications: trading maximal load for storage in certain load-balancing scenarios, and amortizing the computational cost of private information retrieval (PIR) and related cryptographic protocols.",
"Consider a large database of @math data items that need to be stored using @math servers. We study how to encode information so that a large number @math of read requests can be performed in parallel while the rate remains constant (and ideally approaches one). This problem is equivalent to the design of multiset Batch Codes introduced by Ishai, Kushilevitz, Ostrovsky and Sahai [17]. We give families of multiset batch codes with asymptotically optimal rates of the form @math and a number of servers @math scaling polynomially in the number of read requests @math . An advantage of our batch code constructions over most previously known multiset batch codes is explicit and deterministic decoding algorithms and asymptotically optimal fault tolerance. Our main technical innovation is a graph-theoretic method of designing multiset batch codes using dense bipartite graphs with no small cycles. We modify prior graph constructions of dense, high-girth graphs to obtain our batch code results. We achieve close to optimal tradeoffs between the parameters for bipartite graph based batch codes."
]
} |
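To make the bucket mechanism above concrete, here is a sketch of the simplest "subcube" batch code — a standard textbook construction, not code from the cited papers. The database is split into two halves plus their XOR, so any two bits can be decoded while reading at most one bit from each of the three buckets:

```python
def encode_batch(x):
    """Simplest subcube batch code: split the database in half and add an
    XOR bucket.  n bits are stored in 3 buckets of n/2 bits each (storage
    overhead 1.5), and any batch of k=2 bits can be decoded by reading at
    most one bit from each bucket."""
    assert len(x) % 2 == 0
    h = len(x) // 2
    left, right = x[:h], x[h:]
    parity = [l ^ r for l, r in zip(left, right)]
    return left, right, parity

def decode_pair(buckets, i, j):
    """Recover bits x[i] and x[j], reading at most one bit per bucket."""
    left, right, parity = buckets
    h = len(left)
    if (i < h) != (j < h):            # different halves: read each directly
        a = left[i] if i < h else right[i - h]
        b = left[j] if j < h else right[j - h]
    elif i < h:                       # both in the left half
        a = left[i]
        b = right[j] ^ parity[j]      # reconstruct left[j] from the other two buckets
    else:                             # both in the right half
        a = right[i - h]
        b = left[j - h] ^ parity[j - h]
    return a, b

x = [1, 0, 1, 1, 0, 1, 0, 0]
b = encode_batch(x)
assert decode_pair(b, 2, 1) == (x[2], x[1])   # both requests in one half
```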
1505.06319 | 1547082951 | In computer vision, we have the problem of creating graphs out of unstructured point-sets, i.e. the data graph. A common approach for this problem consists of building a triangulation which might not always lead to the best solution. Small changes in the location of the points might generate graphs with unstable configurations and the topology of the graph could change significantly. After building the data-graph, one could apply Graph Matching techniques to register the original point-sets. In this paper, we propose a data graph technique based on the Minimum Spanning Tree of Maximum Entropy (MSTME). We aim at a data graph construction which could be more stable than the Delaunay triangulation with respect to small variations in the neighborhood of points. Our technique aims at creating data graphs which could help the point-set registration process. We propose an algorithm with a single free parameter that weighs the importance between the total weight cost and the entropy of the current spanning tree. We compare our algorithm on a number of different databases with the Delaunay triangulation. | Our cost function is built upon a Minimum Spanning Tree formulation and entropy maximization. In this sense, our work is closely related to @cite_9 as the authors also formulate an optimization problem focusing on the entropy of a Minimum Spanning Tree. However, they are dealing with a different problem by proposing an entropy estimator for clustering. In our work, we are interested in the data graph construction problem, and our cost function differs from theirs since our entropy is calculated on the degree distribution of the generated MST while they estimate the data entropy based on the length of the spanning tree. Our problem can also be seen as a bi-criteria optimization problem. proposed a Lagrangian relaxation to solve the constrained minimum spanning tree problem, which is also a bi-criteria optimization proven to be NP-Hard by . 
Neighborhood search and adjacency search heuristics for the bicriterion minimum spanning tree problem were proposed by . | {
"cite_N": [
"@cite_9"
],
"mid": [
"1576317141"
],
"abstract": [
"Given an undirected graph with two different nonnegative costs associated with every edge e (say, w e for the weight and l e for the length of edge e) and a budget L, consider the problem of finding a spanning tree of total edge length at most L and minimum total weight under this restriction. This constrained minimum spanning tree problem is weakly NP-hard. We present a polynomial-time approximation scheme for this problem. This algorithm always produces a spanning tree of total length at most (1 + e)L and of total weight at most that of any spanning tree of total length at most L, for any fixed e >0. The algorithm uses Lagrangean relaxation, and exploits adjacency relations for matroids."
]
} |
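The two ingredients of the cost function described above — the total MST weight and the entropy of the tree's degree distribution — can be sketched as follows. This is an illustrative reimplementation, not the authors' code:

```python
import math
from collections import Counter

def kruskal_mst(n, edges):
    """Minimum spanning tree via Kruskal's algorithm with union-find.
    edges: list of (weight, u, v) triples over vertices 0..n-1."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    mst = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            mst.append((w, u, v))
    return mst

def degree_entropy(tree):
    """Shannon entropy of the degree distribution of a spanning tree:
    the fraction of vertices having each degree value."""
    deg = Counter()
    for _, u, v in tree:
        deg[u] += 1
        deg[v] += 1
    counts = Counter(deg.values())       # degree value -> number of vertices
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

# A path tree has a more uniform degree distribution (higher entropy)
# than a star rooted at one hub.
path = kruskal_mst(4, [(1.0, 0, 1), (1.0, 1, 2), (1.0, 2, 3), (5.0, 0, 3)])
assert abs(degree_entropy(path) - 1.0) < 1e-9
```

A single-parameter objective in the spirit of the paper would then trade off `sum(w for w, _, _ in tree)` against `degree_entropy(tree)` over candidate spanning trees.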
1505.06556 | 2952181314 | Online learning has been in the spotlight from the machine learning society for a long time. To handle massive data in Big Data era, one single learner could never efficiently finish this heavy task. Hence, in this paper, we propose a novel distributed online learning algorithm to solve the problem. Comparing to typical centralized online learner, the distributed learners optimize their own learning parameters based on local data sources and timely communicate with neighbors. However, communication may lead to a privacy breach. Thus, we use differential privacy to preserve the privacy of learners, and study the influence of guaranteeing differential privacy on the utility of the distributed online learning algorithm. Furthermore, by using the results from Kakade and Tewari (2009), we use the regret bounds of online learning to achieve fast convergence rates for offline learning algorithms in distributed scenarios, which provides tighter utility performance than the existing state-of-the-art results. In simulation, we demonstrate that the differentially private offline learning algorithm has high variance, but we can use mini-batch to improve the performance. Finally, the simulations show that the analytical results of our proposed theorems are right and our private distributed online learning algorithm is a general framework. | @cite_10 studied differentially private centralized online learning. They provided a generic differentially private framework for online algorithms. They showed that using their generic framework, Implicit Gradient Descent (IGD) and Generalized Infinitesimal Gradient Ascent (GIGA) can be transformed into differentially private online learning algorithms. Their work motivates our study of differentially private online learning in distributed scenarios. | {
"cite_N": [
"@cite_10"
],
"mid": [
"1658965807"
],
"abstract": [
"In this paper, we consider the problem of preserving privacy in the online learning setting. We study the problem in the online convex programming (OCP) framework---a popular online learning setting with several interesting theoretical and practical implications---while using differential privacy as the formal privacy measure. For this problem, we distill two critical attributes that a private OCP algorithm should have in order to provide reasonable privacy as well as utility guarantees: 1) linearly decreasing sensitivity, i.e., as new data points arrive their effect on the learning model decreases, 2) sub-linear regret bound---regret bound is a popular goodness utility measure of an online learning algorithm. Given an OCP algorithm that satisfies these two conditions, we provide a general framework to convert the given algorithm into a privacy preserving OCP algorithm with good (sub-linear) regret. We then illustrate our approach by converting two popular online learning algorithms into their differentially private variants while guaranteeing sub-linear regret ( @math ). Next, we consider the special case of online linear regression problems, a practically important class of online learning problems, for which we generalize an approach by to provide a differentially private algorithm with just @math regret. Finally, we show that our online learning framework can be used to provide differentially private algorithms for offline learning as well. For the offline learning problem, our approach obtains better error bounds as well as can handle larger class of problems than the existing state-of-the-art methods"
]
} |
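One standard way to make an online gradient method differentially private, in the spirit of the framework described above, is to clip each per-round gradient (bounding its sensitivity) and perturb it with Laplace noise before the update. The sketch below is illustrative only: the parameter choices and the noise scale `clip / epsilon` are our assumptions, not the cited paper's calibration, and a full privacy proof would need a sensitivity analysis for the specific loss:

```python
import math
import random

def sample_laplace(scale):
    """Zero-mean Laplace sample via inverse-CDF (no numpy needed)."""
    u = random.random() - 0.5
    u = max(u, -0.499999999)  # guard the measure-zero endpoint u == -0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_ogd(stream, dim, epsilon, lr=0.05, clip=1.0):
    """Online gradient descent on squared loss; each round's gradient is
    clipped to norm `clip` and perturbed with per-coordinate Laplace noise
    of scale clip/epsilon before the parameter update."""
    w = [0.0] * dim
    for x, y in stream:
        pred = sum(wi * xi for wi, xi in zip(w, x))
        grad = [2.0 * (pred - y) * xi for xi in x]
        gnorm = math.sqrt(sum(g * g for g in grad))
        if gnorm > clip:  # gradient clipping bounds per-round sensitivity
            grad = [g * clip / gnorm for g in grad]
        w = [wi - lr * (g + sample_laplace(clip / epsilon))
             for wi, g in zip(w, grad)]
    return w
```

With weak noise (large epsilon) the iterates track ordinary online gradient descent; as epsilon shrinks, the added noise grows and the regret degrades, which is the privacy/utility tradeoff studied in this line of work.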
1505.06556 | 2952181314 | Online learning has been in the spotlight from the machine learning society for a long time. To handle massive data in Big Data era, one single learner could never efficiently finish this heavy task. Hence, in this paper, we propose a novel distributed online learning algorithm to solve the problem. Comparing to typical centralized online learner, the distributed learners optimize their own learning parameters based on local data sources and timely communicate with neighbors. However, communication may lead to a privacy breach. Thus, we use differential privacy to preserve the privacy of learners, and study the influence of guaranteeing differential privacy on the utility of the distributed online learning algorithm. Furthermore, by using the results from Kakade and Tewari (2009), we use the regret bounds of online learning to achieve fast convergence rates for offline learning algorithms in distributed scenarios, which provides tighter utility performance than the existing state-of-the-art results. In simulation, we demonstrate that the differentially private offline learning algorithm has high variance, but we can use mini-batch to improve the performance. Finally, the simulations show that the analytical results of our proposed theorems are right and our private distributed online learning algorithm is a general framework. | Recently, growing research effort has been devoted to distributed online learning. @cite_13 has proposed a DOLA to handle decentralized data. A fixed network topology was used to conduct the communications among the learners in their system. They analyzed the regret bounds for convex and strongly convex functions respectively. Further, they studied the privacy-preserving problem, and showed that the communication network made their algorithm have intrinsic privacy-preserving properties. Unlike differential privacy, however, their method cannot protect the privacy of all learners absolutely: its privacy-preserving properties depend on the connectivity between nodes, and not all nodes can have the same connectivity in a fixed communication matrix. Besides, @cite_7 is closely related to our work. In their paper, they presented a differentially private distributed optimization algorithm. While guaranteeing the convergence of the algorithm, they used differential privacy to protect the privacy of the agents. Finally, they observed that to guarantee @math -differential privacy, their algorithm had the accuracy of the order of @math . Compared to this accuracy, we obtain not only @math rates for convex functions, but also @math rates for strongly convex functions, if our regret bounds of the differentially private DOLA are converted to convergence rates. | {
"cite_N": [
"@cite_13",
"@cite_7"
],
"mid": [
"2149778463",
"2953279152"
],
"abstract": [
"Online learning has become increasingly popular on handling massive data. The sequential nature of online learning, however, requires a centralized learner to store data and update parameters. In this paper, we consider online learning with distributed data sources. The autonomous learners update local parameters based on local data sources and periodically exchange information with a small subset of neighbors in a communication network. We derive the regret bound for strongly convex functions that generalizes the work by for convex functions. More importantly, we show that our algorithm has intrinsic privacy-preserving properties, and we prove the sufficient and necessary conditions for privacy preservation in the network. These conditions imply that for networks with greater-than-one connectivity, a malicious learner cannot reconstruct the subgradients (and sensitive raw data) of other learners, which makes our algorithm appealing in privacy-sensitive applications.",
"In distributed optimization and iterative consensus literature, a standard problem is for @math agents to minimize a function @math over a subset of Euclidean space, where the cost function is expressed as a sum @math . In this paper, we study the private distributed optimization (PDOP) problem with the additional requirement that the cost function of the individual agents should remain differentially private. The adversary attempts to infer information about the private cost functions from the messages that the agents exchange. Achieving differential privacy requires that any change of an individual's cost function only results in unsubstantial changes in the statistics of the messages. We propose a class of iterative algorithms for solving PDOP, which achieves differential privacy and convergence to the optimal value. Our analysis reveals the dependence of the achieved accuracy and the privacy levels on the the parameters of the algorithm. We observe that to achieve @math -differential privacy the accuracy of the algorithm has the order of @math ."
]
} |
1505.06556 | 2952181314 | Online learning has been in the spotlight from the machine learning society for a long time. To handle massive data in Big Data era, one single learner could never efficiently finish this heavy task. Hence, in this paper, we propose a novel distributed online learning algorithm to solve the problem. Comparing to typical centralized online learner, the distributed learners optimize their own learning parameters based on local data sources and timely communicate with neighbors. However, communication may lead to a privacy breach. Thus, we use differential privacy to preserve the privacy of learners, and study the influence of guaranteeing differential privacy on the utility of the distributed online learning algorithm. Furthermore, by using the results from Kakade and Tewari (2009), we use the regret bounds of online learning to achieve fast convergence rates for offline learning algorithms in distributed scenarios, which provides tighter utility performance than the existing state-of-the-art results. In simulation, we demonstrate that the differentially private offline learning algorithm has high variance, but we can use mini-batch to improve the performance. Finally, the simulations show that the analytical results of our proposed theorems are right and our private distributed online learning algorithm is a general framework. | The method to solve distributed online learning was pioneered in distributed optimization. Hazan has studied online convex optimization in his book @cite_5 . He proposed that the framework of convex online learning is closely tied to statistical learning theory and convex optimization. @cite_11 developed an efficient algorithm for distributed optimization based on the dual averaging of subgradients method. They demonstrated that their algorithm could work even if the communication matrix is random and not fixed. 
Nedic and Ozdaglar @cite_6 considered a subgradient method for distributed convex optimization, where the functions are convex but not necessarily smooth. They demonstrated that a time-variant communication could ensure the convergence of the distributed optimization algorithm. @cite_1 tried to analyze the influence of stochastic subgradient errors on distributed convex optimization based on a time-variant network topology. They studied the convergence rate of their distributed optimization algorithm. Our work extends the works of Nedic and Ozdaglar @cite_6 and @cite_1 . All these papers have made great contributions to distributed convex optimization, but they did not consider the privacy-preserving problem. | {
"cite_N": [
"@cite_5",
"@cite_1",
"@cite_6",
"@cite_11"
],
"mid": [
"",
"2066332749",
"2044212084",
"2556913586"
],
"abstract": [
"",
"We consider a distributed multi-agent network system where the goal is to minimize a sum of convex objective functions of the agents subject to a common convex constraint set. Each agent maintains an iterate sequence and communicates the iterates to its neighbors. Then, each agent combines weighted averages of the received iterates with its own iterate, and adjusts the iterate by using subgradient information (known with stochastic errors) of its own function and by projecting onto the constraint set.",
"We study a distributed computation model for optimizing a sum of convex objective functions corresponding to multiple agents. For solving this (not necessarily smooth) optimization problem, we consider a subgradient method that is distributed among the agents. The method involves every agent minimizing his her own objective function while exchanging information locally with other agents in the network over a time-varying topology. We provide convergence results and convergence rate estimates for the subgradient method. Our convergence rate results explicitly characterize the tradeoff between a desired accuracy of the generated approximate optimal solutions and the number of iterations needed to achieve the accuracy.",
"The focus of this paper is the development and analysis of distributed algorithms for solving convex optimization problems that are defined over networks. Such network-structured optimization problems arise in a variety of application domains within the information sciences and engineering. For instance, problems such as multi-agent coordination, distributed tracking and localization, estimation problems in sensor networks and packet routing are all naturally cast as distributed convex minimization [2], [11], [10], [16], [20]. Common to these problems is the necessity for completely decentralized computation that is locally light — so as to avoid overburdening small sensors or flooding busy networks — and robust to periodic link or node failures. As a second example, data sets that are too large to be processed quickly by any single processor present related challenges. A canonical example that arises in statistical machine learning is the problem of minimizing a loss function averaged over a large dataset (e.g., optimization in support vector machines [5]). With terabytes of data, it is desirable to assign smaller subsets of the data to different processors, and the processors must communicate to find parameters that minimize the loss over the entire dataset. However, the communication should be efficient enough that network latencies do not offset computational gains."
]
} |
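The distributed subgradient scheme described in this row — each agent mixes its neighbors' iterates through a weight matrix and then steps along the subgradient of its own local cost — can be sketched as follows. This is an illustrative toy in the spirit of Nedic and Ozdaglar's method, with our own parameter choices:

```python
def distributed_subgradient(local_grads, W, dim, rounds, lr=0.05):
    """Distributed subgradient method: each agent i first performs a
    consensus step (weighted average of neighbors' iterates via the
    doubly stochastic matrix W), then a local subgradient step."""
    n = len(local_grads)
    x = [[0.0] * dim for _ in range(n)]
    for _ in range(rounds):
        # consensus step: weighted average over the communication network
        mixed = [[sum(W[i][j] * x[j][d] for j in range(n)) for d in range(dim)]
                 for i in range(n)]
        # local subgradient step on each agent's own objective
        x = [[mixed[i][d] - lr * local_grads[i](mixed[i])[d]
              for d in range(dim)]
             for i in range(n)]
    return x

# Toy example: 3 agents minimize sum_i (x - a_i)^2; the network-wide
# minimizer is the mean of the a_i.
a_vals = (1.0, 2.0, 3.0)
grads = [lambda x, a=a: [2.0 * (x[0] - a)] for a in a_vals]
W = [[1.0 / 3.0] * 3 for _ in range(3)]  # complete graph, uniform weights
iterates = distributed_subgradient(grads, W, dim=1, rounds=500)
assert all(abs(xi[0] - 2.0) < 0.15 for xi in iterates)
```

With a fixed step size the agents settle near (not exactly at) the global minimizer; the convergence-rate analyses cited above characterize this accuracy/step-size tradeoff, including under time-varying W and stochastic subgradient errors.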
1505.06556 | 2952181314 | Online learning has been in the spotlight from the machine learning society for a long time. To handle massive data in Big Data era, one single learner could never efficiently finish this heavy task. Hence, in this paper, we propose a novel distributed online learning algorithm to solve the problem. Comparing to typical centralized online learner, the distributed learners optimize their own learning parameters based on local data sources and timely communicate with neighbors. However, communication may lead to a privacy breach. Thus, we use differential privacy to preserve the privacy of learners, and study the influence of guaranteeing differential privacy on the utility of the distributed online learning algorithm. Furthermore, by using the results from Kakade and Tewari (2009), we use the regret bounds of online learning to achieve fast convergence rates for offline learning algorithms in distributed scenarios, which provides tighter utility performance than the existing state-of-the-art results. In simulation, we demonstrate that the differentially private offline learning algorithm has high variance, but we can use mini-batch to improve the performance. Finally, the simulations show that the analytical results of our proposed theorems are right and our private distributed online learning algorithm is a general framework. | As for the study of differential privacy, much research effort has been devoted to how differential privacy can be used in existing learning algorithms. For example, @cite_3 presented the output perturbation and objective perturbation ideas about differential privacy in empirical risk minimization (ERM) classification. They achieved good utility for the ERM algorithm while guaranteeing @math -differential privacy. Rajkumar and Agarwal @cite_16 extended differentially private ERM classification @cite_3 to differentially private ERM multiparty classification. 
More importantly, they analyzed the sequential and parallel composability problems while the algorithm guaranteed @math -differential privacy. @cite_18 proposed more efficient algorithms and tighter error bounds for ERM classification on the basis of @cite_3 . | {
"cite_N": [
"@cite_18",
"@cite_16",
"@cite_3"
],
"mid": [
"1937834619",
"",
"2119874464"
],
"abstract": [
"In this paper, we initiate a systematic investigation of differentially private algorithms for convex empirical risk minimization. Various instantiations of this problem have been studied before. We provide new algorithms and matching lower bounds for private ERM assuming only that each data point's contribution to the loss function is Lipschitz bounded and that the domain of optimization is bounded. We provide a separate set of algorithms and matching lower bounds for the setting in which the loss functions are known to also be strongly convex. Our algorithms run in polynomial time, and in some cases even match the optimal non-private running time (as measured by oracle complexity). We give separate algorithms (and lower bounds) for @math - and @math -differential privacy; perhaps surprisingly, the techniques used for designing optimal algorithms in the two cases are completely different. Our lower bounds apply even to very simple, smooth function families, such as linear and quadratic functions. This implies that algorithms from previous work can be used to obtain optimal error rates, under the additional assumption that the contributions of each data point to the loss function is smooth. We show that simple approaches to smoothing arbitrary loss functions (in order to apply previous techniques) do not yield optimal error rates. In particular, optimal algorithms were not previously known for problems such as training support vector machines and the high-dimensional median.",
"",
"Privacy-preserving machine learning algorithms are crucial for the increasingly common setting in which personal data, such as medical or financial records, are analyzed. We provide general techniques to produce privacy-preserving approximations of classifiers learned via (regularized) empirical risk minimization (ERM). These algorithms are private under the e-differential privacy definition due to (2006). First we apply the output perturbation ideas of (2006), to ERM classification. Then we propose a new method, objective perturbation, for privacy-preserving machine learning algorithm design. This method entails perturbing the objective function before optimizing over classifiers. If the loss and regularizer satisfy certain convexity and differentiability criteria, we prove theoretical results showing that our algorithms preserve privacy, and provide generalization bounds for linear and nonlinear kernels. We further present a privacy-preserving technique for tuning the parameters in general machine learning algorithms, thereby providing end-to-end privacy guarantees for the training process. We apply these results to produce privacy-preserving analogues of regularized logistic regression and support vector machines. We obtain encouraging results from evaluating their performance on real demographic and benchmark data sets. Our results show that both theoretically and empirically, objective perturbation is superior to the previous state-of-the-art, output perturbation, in managing the inherent tradeoff between privacy and learning performance."
]
} |
1505.06556 | 2952181314 | Online learning has long been in the spotlight of the machine learning community. To handle massive data in the Big Data era, no single learner can efficiently finish this heavy task. Hence, in this paper, we propose a novel distributed online learning algorithm to solve the problem. Compared to a typical centralized online learner, the distributed learners optimize their own learning parameters based on local data sources and timely communicate with neighbors. However, communication may lead to a privacy breach. Thus, we use differential privacy to preserve the privacy of learners, and study the influence of guaranteeing differential privacy on the utility of the distributed online learning algorithm. Furthermore, by using the results from Kakade and Tewari (2009), we use the regret bounds of online learning to achieve fast convergence rates for offline learning algorithms in distributed scenarios, which provides tighter utility performance than the existing state-of-the-art results. In simulation, we demonstrate that the differentially private offline learning algorithm has high variance, but we can use mini-batches to improve the performance. Finally, the simulations show that the analytical results of our proposed theorems are correct and our private distributed online learning algorithm is a general framework. | Some papers have discussed the application of online learning with good regret bounds to offline learning. Kakade and Tewari @cite_17 established several properties of online learning algorithms when the loss function is Lipschitz and strongly convex. They found that recent online algorithms with logarithmic regret guarantees could help to achieve fast convergence rates for the excess risk with high probability. Subsequently, @cite_10 used the results in @cite_17 to analyze the utility of differentially private offline learning algorithms. | {
"cite_N": [
"@cite_10",
"@cite_17"
],
"mid": [
"1658965807",
"2138682935"
],
"abstract": [
"In this paper, we consider the problem of preserving privacy in the online learning setting. We study the problem in the online convex programming (OCP) framework---a popular online learning setting with several interesting theoretical and practical implications---while using differential privacy as the formal privacy measure. For this problem, we distill two critical attributes that a private OCP algorithm should have in order to provide reasonable privacy as well as utility guarantees: 1) linearly decreasing sensitivity, i.e., as new data points arrive their effect on the learning model decreases, 2) sub-linear regret bound---regret bound is a popular goodness utility measure of an online learning algorithm. Given an OCP algorithm that satisfies these two conditions, we provide a general framework to convert the given algorithm into a privacy preserving OCP algorithm with good (sub-linear) regret. We then illustrate our approach by converting two popular online learning algorithms into their differentially private variants while guaranteeing sub-linear regret ( @math ). Next, we consider the special case of online linear regression problems, a practically important class of online learning problems, for which we generalize an approach by to provide a differentially private algorithm with just @math regret. Finally, we show that our online learning framework can be used to provide differentially private algorithms for offline learning as well. For the offline learning problem, our approach obtains better error bounds as well as can handle larger class of problems than the existing state-of-the-art methods",
"This paper examines the generalization properties of online convex programming algorithms when the loss function is Lipschitz and strongly convex. Our main result is a sharp bound, that holds with high probability, on the excess risk of the output of an online algorithm in terms of the average regret. This allows one to use recent algorithms with logarithmic cumulative regret guarantees to achieve fast convergence rates for the excess risk with high probability. As a corollary, we characterize the convergence rate of PEGASOS (with high probability), a recently proposed method for solving the SVM optimization problem."
]
} |
1505.06427 | 2201142001 | Recent research shows that deep neural networks (DNNs) can be used to extract deep speaker vectors (d-vectors) that preserve speaker characteristics and can be used in speaker verification. This new method has been tested on text-dependent speaker verification tasks, and improvement was reported when combined with the conventional i-vector method. This paper extends the d-vector approach to semi text-independent speaker verification tasks, i.e., the text of the speech is in a limited set of short phrases. We explore various settings of the DNN structure used for d-vector extraction, and present a phone-dependent training which employs the posterior features obtained from an ASR system. The experimental results show that it is possible to apply d-vectors on semi text-independent speaker recognition, and the phone-dependent training improves system performance. | This paper follows the work in @cite_9 . The difference is that we extend the application of the DNN-based feature learning approach to semi text-independent tasks, and we introduce a phone-dependent training. Due to the mismatched content of the enrollment and test speech, our task is more challenging. | {
"cite_N": [
"@cite_9"
],
"mid": [
"2046056978"
],
"abstract": [
"In this paper we investigate the use of deep neural networks (DNNs) for a small footprint text-dependent speaker verification task. At development stage, a DNN is trained to classify speakers at the frame-level. During speaker enrollment, the trained DNN is used to extract speaker specific features from the last hidden layer. The average of these speaker features, or d-vector, is taken as the speaker model. At evaluation stage, a d-vector is extracted for each utterance and compared to the enrolled speaker model to make a verification decision. Experimental results show the DNN based speaker verification system achieves good performance compared to a popular i-vector system on a small footprint text-dependent speaker verification task. In addition, the DNN based system is more robust to additive noise and outperforms the i-vector system at low False Rejection operating points. Finally the combined system outperforms the i-vector system by 14% and 25% relative in equal error rate (EER) for clean and noisy conditions respectively."
]
} |
1505.06289 | 2952229272 | The ability to map descriptions of scenes to 3D geometric representations has many applications in areas such as art, education, and robotics. However, prior work on the text to 3D scene generation task has used manually specified object categories and language that identifies them. We introduce a dataset of 3D scenes annotated with natural language descriptions and learn from this data how to ground textual descriptions to physical objects. Our method successfully grounds a variety of lexical terms to concrete referents, and we show quantitatively that our method improves 3D scene generation over previous work using purely rule-based methods. We evaluate the fidelity and plausibility of 3D scenes generated with our grounding approach through human judgments. To ease evaluation on this task, we also introduce an automated metric that strongly correlates with human judgments. | Prior work has generated sentences that describe 2D images @cite_12 @cite_10 @cite_13 and referring expressions for specific objects in images @cite_16 @cite_7 . However, generating scenes is currently out of reach for purely image-based approaches. 3D scene representations serve as an intermediate level of structure between raw image pixels and simpler microcosms (e.g., grid and block worlds). This level of structure is amenable to the generation task but still realistic enough to present a variety of challenges associated with natural scenes. | {
"cite_N": [
"@cite_7",
"@cite_10",
"@cite_16",
"@cite_13",
"@cite_12"
],
"mid": [
"2251512949",
"2066134726",
"2159149613",
"2953276893",
"1897761818"
],
"abstract": [
"In this paper we introduce a new game to crowd-source natural language referring expressions. By designing a two player game, we can both collect and verify referring expressions directly within the game. To date, the game has produced a dataset containing 130,525 expressions, referring to 96,654 distinct objects, in 19,894 photographs of natural scenes. This dataset is larger and more varied than previous REG datasets and allows us to study referring expressions in real-world scenes. We provide an in depth analysis of the resulting dataset. Based on our findings, we design a new optimization based model for generating referring expressions and perform experimental evaluations on 3 test sets.",
"We posit that visually descriptive language offers computer vision researchers both information about the world, and information about how people describe the world. The potential benefit from this source is made more significant due to the enormous amount of language data easily available today. We present a system to automatically generate natural language descriptions from images that exploits both statistics gleaned from parsing large quantities of text data and recognition algorithms from computer vision. The system is very effective at producing relevant sentences for images. It also generates descriptions that are notably more true to the specific image content than previous work.",
"We present a new approach to referring expression generation, casting it as a density estimation problem where the goal is to learn distributions over logical expressions identifying sets of objects in the world. Despite an extremely large space of possible expressions, we demonstrate effective learning of a globally normalized log-linear distribution. This learning is enabled by a new, multi-stage approximate inference technique that uses a pruning model to construct only the most likely logical forms. We train and evaluate the approach on a new corpus of references to sets of visual objects. Experiments show the approach is able to learn accurate models, which generate over 87% of the expressions people used. Additionally, on the previously studied special case of single object reference, we show a 35% relative error reduction over previous state of the art.",
"We introduce a model for bidirectional retrieval of images and sentences through a multi-modal embedding of visual and natural language data. Unlike previous models that directly map images or sentences into a common embedding space, our model works on a finer level and embeds fragments of images (objects) and fragments of sentences (typed dependency tree relations) into a common space. In addition to a ranking objective seen in previous work, this allows us to add a new fragment alignment objective that learns to directly associate these fragments across modalities. Extensive experimental evaluation shows that reasoning on both the global level of images and sentences and the finer level of their respective fragments significantly improves performance on image-sentence retrieval tasks. Additionally, our model provides interpretable predictions since the inferred inter-modal fragment alignment is explicit.",
"Humans can prepare concise descriptions of pictures, focusing on what they find important. We demonstrate that automatic methods can do so too. We describe a system that can compute a score linking an image to a sentence. This score can be used to attach a descriptive sentence to a given image, or to obtain images that illustrate a given sentence. The score is obtained by comparing an estimate of meaning obtained from the image to one obtained from the sentence. Each estimate of meaning comes from a discriminative procedure that is learned us-ingdata. We evaluate on a novel dataset consisting of human-annotated images. While our underlying estimate of meaning is impoverished, it is sufficient to produce very good quantitative results, evaluated with a novel score that can account for synecdoche."
]
} |
1505.06289 | 2952229272 | The ability to map descriptions of scenes to 3D geometric representations has many applications in areas such as art, education, and robotics. However, prior work on the text to 3D scene generation task has used manually specified object categories and language that identifies them. We introduce a dataset of 3D scenes annotated with natural language descriptions and learn from this data how to ground textual descriptions to physical objects. Our method successfully grounds a variety of lexical terms to concrete referents, and we show quantitatively that our method improves 3D scene generation over previous work using purely rule-based methods. We evaluate the fidelity and plausibility of 3D scenes generated with our grounding approach through human judgments. To ease evaluation on this task, we also introduce an automated metric that strongly correlates with human judgments. | A related line of work focuses on grounding referring expressions to referents in 3D worlds with simple colored geometric shapes @cite_5 @cite_4 . More recent work grounds text to object attributes such as color and shape in images @cite_2 @cite_9 . ground spatial relationship language in 3D scenes (e.g., to the left of, behind) by learning from pairwise object relations provided by crowd-workers. In contrast, we ground general descriptions to a wide variety of possible objects. The objects in our scenes represent a broader space of possible referents than the first two lines of work. Unlike the latter work, our descriptions are provided as unrestricted free-form text, rather than filling in specific templates of object references and fixed spatial relationships. | {
"cite_N": [
"@cite_5",
"@cite_9",
"@cite_4",
"@cite_2"
],
"mid": [
"2111807093",
"2186283982",
"2140164026",
"2949559657"
],
"abstract": [
"We present a visually-grounded language understanding model based on a study of how people verbally describe objects in scenes. The emphasis of the model is on the combination of individual word meanings to produce meanings for complex referring expressions. The model has been implemented, and it is able to understand a broad range of spatial referring expressions. We describe our implementation of word level visually-grounded semantics and their embedding in a compositional parsing framework. The implemented system selects the correct referents in response to natural language expressions for a large percentage of test cases. In an analysis of the system's successes and failures we reveal how visual context influences the semantics of utterances and propose future extensions to the model that take such context into account.",
"This paper introduces Logical Semantics with Perception (LSP), a model for grounded language acquisition that learns to map natural language statements to their referents in a physical environment. For example, given an image, LSP can map the statement “blue mug on the table” to the set of image segments showing blue mugs on tables. LSP learns physical representations for both categorical (“blue,” “mug”) and relational (“on”) language, and also learns to compose these representations to produce the referents of entire statements. We further introduce a weakly supervised training procedure that estimates LSP’s parameters using annotated referents for entire statements, without annotated referents for individual words or the parse structure of the statement. We perform experiments on two applications: scene understanding and geographical question answering. We find that LSP outperforms existing, less expressive models that cannot represent relational language. We further find that weakly supervised training is competitive with fully supervised training while requiring significantly less annotation effort.",
"Situated, spontaneous speech may be ambiguous along acoustic, lexical, grammatical and semantic dimensions. To understand such a seemingly difficult signal, we propose to model the ambiguity inherent in acoustic signals and in lexical and grammatical choices using compact, probabilistic representations of multiple hypotheses. To resolve semantic ambiguities we propose a situation model that captures aspects of the physical context of an utterance as well as the speaker's intentions, in our case represented by recognized plans. In a single, coherent Framework for Understanding Situated Speech (FUSS) we show how these two influences, acting on an ambiguous representation of the speech signal, complement each other to disambiguate form and content of situated speech. This method produces promising results in a game playing environment and leaves room for other types of situation models.",
"As robots become more ubiquitous and capable, it becomes ever more important to enable untrained users to easily interact with them. Recently, this has led to study of the language grounding problem, where the goal is to extract representations of the meanings of natural language tied to perception and actuation in the physical world. In this paper, we present an approach for joint learning of language and perception models for grounded attribute induction. Our perception model includes attribute classifiers, for example to detect object color and shape, and the language model is based on a probabilistic categorial grammar that enables the construction of rich, compositional meaning representations. The approach is evaluated on the task of interpreting sentences that describe sets of objects in a physical workspace. We demonstrate accurate task performance and effective latent-variable concept induction in physical grounded scenes."
]
} |
1505.06169 | 2950189471 | We present paired learning and inference algorithms for significantly reducing computation and increasing speed of the vector dot products in the classifiers that are at the heart of many NLP components. This is accomplished by partitioning the features into a sequence of templates which are ordered such that high confidence can often be reached using only a small fraction of all features. Parameter estimation is arranged to maximize accuracy and early confidence in this sequence. Our approach is simpler and better suited to NLP than other related cascade methods. We present experiments in left-to-right part-of-speech tagging, named entity recognition, and transition-based dependency parsing. On the typical benchmarking datasets we can preserve POS tagging accuracy above 97% and parsing LAS above 88.5%, both with over a five-fold reduction in run-time, and NER F1 above 88% with more than a 2x increase in speed. | We pose and address the question of whether a single, interacting set of parameters can be learned such that they efficiently both (1) provide high accuracy and (2) yield good confidence estimates throughout their use in the lengthening prefixes of the feature template sequence. (These two requirements are both incorporated into our novel parameter estimation algorithm.) In contrast, other work @cite_18 @cite_17 learns a separate classifier to determine when to add features. Such heavier-weight approaches are unsuitable for our setting, where the core classifier's features and scoring are already so cheap that adding complex decision-making would cause too much computational overhead. | {
"cite_N": [
"@cite_18",
"@cite_17"
],
"mid": [
"2108599331",
"2117598836"
],
"abstract": [
"Discriminative methods for learning structured models have enabled wide-spread use of very rich feature representations. However, the computational cost of feature extraction is prohibitive for large-scale or time-sensitive applications, often dominating the cost of inference in the models. Significant efforts have been devoted to sparsity-based model selection to decrease this cost. Such feature selection methods control computation statically and miss the opportunity to fine-tune feature extraction to each input at run-time. We address the key challenge of learning to control fine-grained feature extraction adaptively, exploiting non-homogeneity of the data. We propose an architecture that uses a rich feedback loop between extraction and prediction. The run-time control policy is learned using efficient value-function approximation, which adaptively determines the value of information of features at the level of individual variables for each input. We demonstrate significant speedups over state-of-the-art methods on two challenging datasets. For articulated pose estimation in video, we achieve a more accurate state-of-the-art model that is also faster, with similar results on an OCR task.",
"Feature computation and exhaustive search have significantly restricted the speed of graph-based dependency parsing. We propose a faster framework of dynamic feature selection, where features are added sequentially as needed, edges are pruned early, and decisions are made online for each sentence. We model this as a sequential decision-making problem and solve it by imitation learning techniques. We test our method on 7 languages. Our dynamic parser can achieve accuracies comparable or even superior to parsers using a full set of features, while computing fewer than 30% of the feature templates."
]
} |
1505.06169 | 2950189471 | We present paired learning and inference algorithms for significantly reducing computation and increasing speed of the vector dot products in the classifiers that are at the heart of many NLP components. This is accomplished by partitioning the features into a sequence of templates which are ordered such that high confidence can often be reached using only a small fraction of all features. Parameter estimation is arranged to maximize accuracy and early confidence in this sequence. Our approach is simpler and better suited to NLP than other related cascade methods. We present experiments in left-to-right part-of-speech tagging, named entity recognition, and transition-based dependency parsing. On the typical benchmarking datasets we can preserve POS tagging accuracy above 97% and parsing LAS above 88.5%, both with over a five-fold reduction in run-time, and NER F1 above 88% with more than a 2x increase in speed. | Our work is also related to the field of learning and inference under test-time budget constraints @cite_7 @cite_14 . However, common approaches to this problem also employ auxiliary models to rank which feature to add next, and are generally suited for problems where features are expensive to compute (e.g., vision) and the extra computation of an auxiliary pruning-decision model is offset by substantial reduction in feature computations @cite_18 . Our method uses confidence scores directly from the model, and so requires no additional computation, making it suitable for speeding up classifier-based NLP methods that are already very fast and have relatively cheap features. | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_7"
],
"mid": [
"2108599331",
"195866714",
"2169961240"
],
"abstract": [
"Discriminative methods for learning structured models have enabled wide-spread use of very rich feature representations. However, the computational cost of feature extraction is prohibitive for large-scale or time-sensitive applications, often dominating the cost of inference in the models. Significant efforts have been devoted to sparsity-based model selection to decrease this cost. Such feature selection methods control computation statically and miss the opportunity to fine-tune feature extraction to each input at run-time. We address the key challenge of learning to control fine-grained feature extraction adaptively, exploiting non-homogeneity of the data. We propose an architecture that uses a rich feedback loop between extraction and prediction. The run-time control policy is learned using efficient value-function approximation, which adaptively determines the value of information of features at the level of individual variables for each input. We demonstrate significant speedups over state-of-the-art methods on two challenging datasets. For articulated pose estimation in video, we achieve a more accurate state-of-the-art model that is also faster, with similar results on an OCR task.",
"In this paper we develop a framework for a sequential decision making under budget constraints for multi-class classification. In many classification systems, such as medical diagnosis and homeland security, sequential decisions are often warranted. For each instance, a sensor is first chosen for acquiring measurements and then based on the available information one decides (rejects) to seek more measurements from a new sensor modality or to terminate by classifying the example based on the available information. Different sensors have varying costs for acquisition, and these costs account for delay, throughput or monetary value. Consequently, we seek methods for maximizing performance of the system subject to budget constraints. We formulate a multi-stage multi-class empirical risk objective and learn sequential decision functions from training data. We show that reject decision at each stage can be posed as supervised binary classification. We derive bounds for the VC dimension of the multi-stage system to quantify the generalization error. We compare our approach to alternative strategies on several multi-class real world datasets.",
"We present SpeedBoost, a natural extension of functional gradient descent, for learning anytime predictors, which automatically trade computation time for predictive accuracy by selecting from a set of simpler candidate predictors. These anytime predictors not only generate approximate predictions rapidly, but are capable of using extra resources at prediction time, when available, to improve performance. We also demonstrate how our framework can be used to select weak predictors which target certain subsets of the data, allowing for efficient use of computational resources on difficult examples. We also show that variants of the SpeedBoost algorithm produce predictors which are provably competitive with any possible sequence of weak predictors with the same total complexity."
]
} |
1505.06169 | 2950189471 | We present paired learning and inference algorithms for significantly reducing computation and increasing speed of the vector dot products in the classifiers that are at the heart of many NLP components. This is accomplished by partitioning the features into a sequence of templates which are ordered such that high confidence can often be reached using only a small fraction of all features. Parameter estimation is arranged to maximize accuracy and early confidence in this sequence. Our approach is simpler and better suited to NLP than other related cascade methods. We present experiments in left-to-right part-of-speech tagging, named entity recognition, and transition-based dependency parsing. On the typical benchmarking datasets we can preserve POS tagging accuracy above 97% and parsing LAS above 88.5%, both with over a five-fold reduction in run-time, and NER F1 above 88% with more than a 2x increase in speed. | While our comparisons above focus on other methods of feature selection, there also exists related work in the field of general (static) feature selection. The most relevant results come from the applications of , such as the work of in for NLP problems. The Group Lasso regularizer @cite_12 sparsifies groups of feature weights (e.g., feature templates), and has been used to speed up test-time prediction by removing entire templates from the model. The key difference between this work and ours is that we select our templates based on the test-time difficulty of the inference problem, while the Group Lasso must do so at train time. In Appendix , we compare against Group Lasso and show improvements in accuracy and speed. | {
"cite_N": [
"@cite_12"
],
"mid": [
"2138019504"
],
"abstract": [
"Summary. We consider the problem of selecting grouped variables (factors) for accurate prediction in regression. Such a problem arises naturally in many practical situations with the multifactor analysis-of-variance problem as the most important and well-known example. Instead of selecting factors by stepwise backward elimination, we focus on the accuracy of estimation and consider extensions of the lasso, the LARS algorithm and the non-negative garrotte for factor selection. The lasso, the LARS algorithm and the non-negative garrotte are recently proposed regression methods that can be used to select individual variables. We study and propose efficient algorithms for the extensions of these methods for factor selection and show that these extensions give superior performance to the traditional stepwise backward elimination method in factor selection problems. We study the similarities and the differences between these methods. Simulations and real examples are used to illustrate the methods."
]
} |
1505.05667 | 2144160171 | In this work, we address the problem to model all the nodes (words or phrases) in a dependency tree with the dense representations. We propose a recursive convolutional neural network (RCNN) architecture to capture syntactic and compositional-semantic representations of phrases and words in a dependency tree. Different with the original recursive neural network, we introduce the convolution and pooling layers, which can model a variety of compositions by the feature maps and choose the most informative compositions by the pooling layers. Based on RCNN, we use a discriminative model to re-rank a @math -best list of candidate dependency parsing trees. The experiments show that RCNN is very effective to improve the state-of-the-art dependency parsing on both English and Chinese datasets. | Specific to the re-ranking model, proposed a generative re-ranking model with Inside-Outside Recursive Neural Network (IORNN), which can process trees both bottom-up and top-down. However, IORNN works in generative way and just estimates the probability of a given tree, so IORNN cannot fully utilize the incorrect trees in @math -best candidate results. Besides, IORNN treats dependency tree as a sequence, which can be regarded as a generalization of simple recurrent neural network (SRNN) @cite_4 . Unlike IORNN, our proposed RCNN is a discriminative model and can optimize the re-ranking strategy for a particular base parser. Another difference is that RCNN computes the score of tree in a recursive way, which is more natural for the hierarchical structure of natural language. Besides, the RCNN can not only be used for the re-ranking, but also be regarded as general model to represent sentence with its dependency tree. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2110485445"
],
"abstract": [
"Time underlies many interesting human behaviors. Thus, the question of how to represent time in connectionist models is very important. One approach is to represent time implicitly by its effects on processing rather than explicitly (as in a spatial representation). The current report develops a proposal along these lines first described by Jordan (1986) which involves the use of recurrent links in order to provide networks with a dynamic memory. In this approach, hidden unit patterns are fed back to themselves; the internal representations which develop thus reflect task demands in the context of prior internal states. A set of simulations is reported which range from relatively simple problems (temporal version of XOR) to discovering syntactic semantic features for words. The networks are able to learn interesting internal representations which incorporate task demands with memory demands; indeed, in this approach the notion of memory is inextricably bound up with task processing. These representations reveal a rich structure, which allows them to be highly context-dependent while also expressing generalizations across classes of items. These representations suggest a method for representing lexical categories and the type token distinction."
]
} |
1505.05788 | 2109047384 | Display advertising normally charges advertisers for every single ad impression. Specifically, if an ad in a webpage has been loaded in the browser, an ad impression is counted. However, due to the position and size of the ad slot, lots of ads are actually not viewed but still measured as impressions and charged. These fraud ad impressions indeed undermine the efficacy of display advertising. A perfect ad impression viewability measurement should match what the user has really viewed with a short memory. In this paper, we conduct extensive investigations on display ad impression viewability measurements on dimensions of ad creative displayed pixel percentage and exposure time to find which measurement provides the most accurate ad impression counting. The empirical results show that the most accurate measurement counts one ad impression if more than 75 of the ad creative pixels have been exposed for at least 2 continuous seconds. | Since the origin of online advertising, the researchers have been seeking a way to answer whether or not the user has viewed an ad in one webpage. In @cite_7 , the relationship between the ad creative animation and users' recognition were studied but the authors found such relationship was not significant. The authors in @cite_5 @cite_3 leveraged the eye-tracking techniques to study the users' attention area on the screen and check how likely the user truly viewed a particular ad. Different users and webpages had much different high attention areas @cite_3 . However, such eye-tracking techniques are impractical to be used in the production. The most practical and straightforward method is to define the pixel percentage and exposure time as the thresholds of measuring a viewed impression across all the webpages and users. In 2013, Google announced that the advertisers were charged only for the viewed ad impressions, where an ad was considered as viewed only if the pixel percentage was no less than 50 | {
"cite_N": [
"@cite_5",
"@cite_3",
"@cite_7"
],
"mid": [
"2093163898",
"2019658628",
"1997566826"
],
"abstract": [
"Abstract Click-through rates are still the de facto measure of Internet advertising effectiveness. Unfortunately, click-through rates have plummeted. This decline prompts two critical questions: (1) Why do banner ads seem to be ineffective and (2) what can advertisers do to improve their effectiveness? To address these questions, we utilized an eye-tracking device to investigate online surfers’ attention to online advertising. Then we conducted a large-scale survey of Internet users’ recall, recognition, and awareness of banner advertising. Our research suggests that the reason why click-through rates are low is that surfers actually avoid looking at banner ads during their online activities. This implies that the larger part of a surfer's processing of banners will probably be done at the pre-attentive level. If such is the case, click-through rate is an ineffective measure of banner ad performance. Our research also shows that banner ads do have an impact on traditional memory-based measure of effectiveness. Thus, we claim that advertisers should rely more on traditional brand equity measures such as brand awareness and advertising recall. Using such measures, we show that repetition affects unaided advertising recall, brand recognition, and brand awareness and that a banner's message influences both aided advertising recall and brand recognition.",
"An understanding of how people allocate their visual attention when viewing Web pages is very important for Web authors, interface designers, advertisers and others. Such knowledge opens the door to a variety of innovations, ranging from improved Web page design to the creation of compact, yet recognizable, visual representations of long pages. We present an eye-tracking study in which 20 users viewed 361 Web pages while engaged in information foraging and page recognition tasks. From this data, we describe general location-based characteristics of visual attention for Web pages dependent on different tasks and demographics, and generate a model for predicting the visual attention that individual page elements may receive. Finally, we introduce the concept of fixation impact, a new method for mapping gaze data to visual scenes that is motivated by findings in vision research.",
"A common medium for advertising on the Internet is the use of banner ads. This study investigates recall and recognition of animated banner advertisements in an attempt to identify design guidelines. It was hypothesized that animation would increase recall and recognition of novel ads by increasing user awareness. No significant relationships were found between the use of animation and ability to recall and recognize banner ads. Results indicate that animation does not enhance user memory of online banner advertisements."
]
} |
1505.05921 | 2950983995 | In light of growing attention of intelligent vehicle systems, we propose developing a driver model that uses a hybrid system formulation to capture the intent of the driver. This model hopes to capture human driving behavior in a way that can be utilized by semi- and fully autonomous systems in heterogeneous environments. We consider a discrete set of high level goals or intent modes, that is designed to encompass the decision making process of the human. A driver model is derived using a dataset of lane changes collected in a realistic driving simulator, in which the driver actively labels data to give us insight into her intent. By building the labeled dataset, we are able to utilize classification tools to build the driver model using features of based on her perception of the environment, and achieve high accuracy in identifying driver intent. Multiple algorithms are presented and compared on the dataset, and a comparison of the varying behaviors between drivers is drawn. Using this modeling methodology, we present a model that can be used to assess driver behaviors and to develop human-inspired safety metrics that can be utilized in intelligent vehicular systems. | There are many works that consider predicting driver behavior by monitoring the driver @cite_15 , and have shown promising results in terms of human-in-the-loop and shared control for semi-autonomous frameworks @cite_4 . When it comes to driver modeling intent, many rely heavily on the driver state @cite_9 or on driver input @cite_0 . While effective, these methods rely on heuristics to determine when a lane change begins use windowing to select features that will train the classifier @cite_9 . This ultimately assumes that these high level decisions are made as a function of time, and not by the dynamic state of the environment, which is difficult to predict due to the high variability over long time horizons @cite_4 . | {
"cite_N": [
"@cite_0",
"@cite_9",
"@cite_15",
"@cite_4"
],
"mid": [
"2104425135",
"2139851562",
"",
"2016284836"
],
"abstract": [
"A method for detecting drivers’ intentions is essential to facilitate operating mode transitions between driver and driver assistance systems. We propose a driver behavior recognition method using Hidden Markov Models (HMMs) to characterize and detect driving maneuvers and place it in the framework of a cognitive model of human behavior. HMM-based steering behavior models for emergency and normal lane changes as well as for lane keeping were developed using a moving base driving simulator. Analysis of these models after training and recognition tests showed that driver behavior modeling and recognition of different types of lane changes is possible using HMMs.",
"By predicting a driver's maneuvers before they occur, a driver-assistance system can prepare for or avoid dangerous situations. This article describes a real-time, on-road lane-change-intent detector that can enhance driver safety.",
"",
"Threat assessment during semiautonomous driving is used to determine when correcting a driver's input is required. Since current semiautonomous systems perform threat assessment by predicting a vehicle's future state while treating the driver's input as a disturbance, autonomous controller intervention is limited to a restricted regime. Improving vehicle safety demands threat assessment that occurs over longer prediction horizons wherein a driver cannot be treated as a malicious agent. In this paper, we describe a real-time semiautonomous system that utilizes empirical observations of a driver's pose to inform an autonomous controller that corrects a driver's input when possible in a safe manner. We measure the performance of our system using several metrics that evaluate the informativeness of the prediction and the utility of the intervention procedure. A multisubject driving experiment illustrates the usefulness, with respect to these metrics, of incorporating the driver's pose while designing a semiautonomous system."
]
} |
1505.05921 | 2950983995 | In light of growing attention of intelligent vehicle systems, we propose developing a driver model that uses a hybrid system formulation to capture the intent of the driver. This model hopes to capture human driving behavior in a way that can be utilized by semi- and fully autonomous systems in heterogeneous environments. We consider a discrete set of high level goals or intent modes, that is designed to encompass the decision making process of the human. A driver model is derived using a dataset of lane changes collected in a realistic driving simulator, in which the driver actively labels data to give us insight into her intent. By building the labeled dataset, we are able to utilize classification tools to build the driver model using features of based on her perception of the environment, and achieve high accuracy in identifying driver intent. Multiple algorithms are presented and compared on the dataset, and a comparison of the varying behaviors between drivers is drawn. Using this modeling methodology, we present a model that can be used to assess driver behaviors and to develop human-inspired safety metrics that can be utilized in intelligent vehicular systems. | One of the desired outcomes of this work is to identify a model that does not depend on human input and can be used in human driven or autonomous systems. Ideally, the model could be used in human-inspired driving applications. Driving styles in terms of discrete control actions were mimicked in @cite_7 , using inverse reinforcement learning. | {
"cite_N": [
"@cite_7"
],
"mid": [
"1999874108"
],
"abstract": [
"We consider learning in a Markov decision process where we are not explicitly given a reward function, but where instead we can observe an expert demonstrating the task that we want to learn to perform. This setting is useful in applications (such as the task of driving) where it may be difficult to write down an explicit reward function specifying exactly how different desiderata should be traded off. We think of the expert as trying to maximize a reward function that is expressible as a linear combination of known features, and give an algorithm for learning the task demonstrated by the expert. Our algorithm is based on using \"inverse reinforcement learning\" to try to recover the unknown reward function. We show that our algorithm terminates in a small number of iterations, and that even though we may never recover the expert's reward function, the policy output by the algorithm will attain performance close to that of the expert, where here performance is measured with respect to the expert's unknown reward function."
]
} |
1505.05612 | 1488163396 | In this paper, we present the mQA model, which is able to answer questions about the content of an image. The answer can be a sentence, a phrase or a single word. Our model contains four components: a Long Short-Term Memory (LSTM) to extract the question representation, a Convolutional Neural Network (CNN) to extract the visual representation, an LSTM for storing the linguistic context in an answer, and a fusing component to combine the information from the first three components and generate the answer. We construct a Freestyle Multilingual Image Question Answering (FM-IQA) dataset to train and evaluate our mQA model. It contains over 150,000 images and 310,000 freestyle Chinese question-answer pairs and their English translations. The quality of the generated answers of our mQA model on this dataset is evaluated by human judges through a Turing Test. Specifically, we mix the answers provided by humans and our model. The human judges need to distinguish our model from the human. They will also provide a score (i.e. 0, 1, 2, the larger the better) indicating the quality of the answer. We propose strategies to monitor the quality of this evaluation process. The experiments show that in 64.7 of cases, the human judges cannot distinguish our model from humans. The average score is 1.454 (1.918 for human). The details of this work, including the FM-IQA dataset, can be found on the project page: this http URL | Recent work has made significant progress using deep neural network models in both the fields of computer vision and natural language. For computer vision, methods based on Convolutional Neural Network (CNN @cite_42 ) achieve the state-of-the-art performance in various tasks, such as object classification @cite_2 @cite_18 @cite_2 , detection @cite_11 @cite_0 and segmentation @cite_4 . For natural language, the Recurrent Neural Network (RNN @cite_8 @cite_37 ) and the Long Short-Term Memory network (LSTM @cite_15 ) are also widely used in machine translation @cite_32 @cite_12 @cite_19 and speech recognition @cite_13 . | {
"cite_N": [
"@cite_18",
"@cite_37",
"@cite_4",
"@cite_8",
"@cite_42",
"@cite_32",
"@cite_0",
"@cite_19",
"@cite_2",
"@cite_15",
"@cite_13",
"@cite_12",
"@cite_11"
],
"mid": [
"1686810756",
"2118776487",
"2964288706",
"2110485445",
"",
"1753482797",
"2144616054",
"2949888546",
"",
"",
"179875071",
"2950635152",
"2102605133"
],
"abstract": [
"In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.",
"Recurrent neural network is a powerful model that learns temporal patterns in sequential data. For a long time, it was believed that recurrent networks are difficult to train using simple optimizers, such as stochastic gradient descent, due to the so-called vanishing gradient problem. In this paper, we show that learning longer term patterns in real data, such as in natural language, is perfectly possible using gradient descent. This is achieved by using a slight structural modification of the simple recurrent neural network architecture. We encourage some of the hidden units to change their state slowly by making part of the recurrent weight matrix close to identity, thus forming kind of a longer term memory. We evaluate our model in language modeling experiments, where we obtain similar performance to the much more complex Long Short Term Memory (LSTM) networks (Hochreiter & Schmidhuber, 1997).",
"Abstract: Deep Convolutional Neural Networks (DCNNs) have recently shown state of the art performance in high level vision tasks, such as image classification and object detection. This work brings together methods from DCNNs and probabilistic graphical models for addressing the task of pixel-level classification (also called \"semantic image segmentation\"). We show that responses at the final layer of DCNNs are not sufficiently localized for accurate object segmentation. This is due to the very invariance properties that make DCNNs good for high level tasks. We overcome this poor localization property of deep networks by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF). Qualitatively, our \"DeepLab\" system is able to localize segment boundaries at a level of accuracy which is beyond previous methods. Quantitatively, our method sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 71.6 IOU accuracy in the test set. We show how these results can be obtained efficiently: Careful network re-purposing and a novel application of the 'hole' algorithm from the wavelet community allow dense computation of neural net responses at 8 frames per second on a modern GPU.",
"Time underlies many interesting human behaviors. Thus, the question of how to represent time in connectionist models is very important. One approach is to represent time implicitly by its effects on processing rather than explicitly (as in a spatial representation). The current report develops a proposal along these lines first described by Jordan (1986) which involves the use of recurrent links in order to provide networks with a dynamic memory. In this approach, hidden unit patterns are fed back to themselves; the internal representations which develop thus reflect task demands in the context of prior internal states. A set of simulations is reported which range from relatively simple problems (temporal version of XOR) to discovering syntactic semantic features for words. The networks are able to learn interesting internal representations which incorporate task demands with memory demands; indeed, in this approach the notion of memory is inextricably bound up with task processing. These representations reveal a rich structure, which allows them to be highly context-dependent while also expressing generalizations across classes of items. These representations suggest a method for representing lexical categories and the type token distinction.",
"",
"We introduce a class of probabilistic continuous translation models called Recurrent Continuous Translation Models that are purely based on continuous representations for words, phrases and sentences and do not rely on alignments or phrasal translation units. The models have a generation and a conditioning aspect. The generation of the translation is modelled with a target Recurrent Language Model, whereas the conditioning on the source sentence is modelled with a Convolutional Sentence Model. Through various experiments, we show first that our models obtain a perplexity with respect to gold translations that is > 43 lower than that of stateof-the-art alignment-based translation models. Secondly, we show that they are remarkably sensitive to the word order, syntax, and meaning of the source sentence despite lacking alignments. Finally we show that they match a state-of-the-art system when rescoring n-best lists of translations.",
"In many situations we have some measurement of confidence on \"positiveness\" for a binary label. The \"positiveness\" is a continuous value whose range is a bounded interval. It quantifies the affiliation of each training data to the positive class. We propose a novel learning algorithm called expectation loss SVM (e-SVM) that is devoted to the problems where only the \"positiveness\" instead of a binary label of each training sample is available. Our e-SVM algorithm can also be readily extended to learn segment classifiers under weak supervision where the exact positiveness value of each training example is unobserved. In experiments, we show that the e-SVM algorithm can effectively address the segment proposal classification task under both strong supervision (e.g. the pixel-level annotations are available) and the weak supervision (e.g. only bounding-box annotations are available), and outperforms the alternative approaches. Besides, we further validate this method on two major tasks of computer vision: semantic segmentation and object detection. Our method achieves the state-of-the-art object detection performance on PASCAL VOC 2007 dataset.",
"Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT'14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous best result on this task. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier.",
"",
"",
"A new recurrent neural network based language model (RNN LM) with applications to speech recognition is presented. Results indicate that it is possible to obtain around 50 reduction of perplexity by using mixture of several RNN LMs, compared to a state of the art backoff language model. Speech recognition experiments show around 18 reduction of word error rate on the Wall Street Journal task when comparing models trained on the same amount of data, and around 5 on the much harder NIST RT05 task, even when the backoff model is trained on much more data than the RNN LM. We provide ample empirical evidence to suggest that connectionist language models are superior to standard n-gram techniques, except their high computational (training) complexity. Index Terms: language modeling, recurrent neural networks, speech recognition",
"In this paper, we propose a novel neural network model called RNN Encoder-Decoder that consists of two recurrent neural networks (RNN). One RNN encodes a sequence of symbols into a fixed-length vector representation, and the other decodes the representation into another sequence of symbols. The encoder and decoder of the proposed model are jointly trained to maximize the conditional probability of a target sequence given a source sequence. The performance of a statistical machine translation system is empirically found to improve by using the conditional probabilities of phrase pairs computed by the RNN Encoder-Decoder as an additional feature in the existing log-linear model. Qualitatively, we show that the proposed model learns a semantically and syntactically meaningful representation of linguistic phrases.",
"Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn."
]
} |
1505.05612 | 1488163396 | In this paper, we present the mQA model, which is able to answer questions about the content of an image. The answer can be a sentence, a phrase or a single word. Our model contains four components: a Long Short-Term Memory (LSTM) to extract the question representation, a Convolutional Neural Network (CNN) to extract the visual representation, an LSTM for storing the linguistic context in an answer, and a fusing component to combine the information from the first three components and generate the answer. We construct a Freestyle Multilingual Image Question Answering (FM-IQA) dataset to train and evaluate our mQA model. It contains over 150,000 images and 310,000 freestyle Chinese question-answer pairs and their English translations. The quality of the generated answers of our mQA model on this dataset is evaluated by human judges through a Turing Test. Specifically, we mix the answers provided by humans and our model. The human judges need to distinguish our model from the human. They will also provide a score (i.e. 0, 1, 2, the larger the better) indicating the quality of the answer. We propose strategies to monitor the quality of this evaluation process. The experiments show that in 64.7 of cases, the human judges cannot distinguish our model from humans. The average score is 1.454 (1.918 for human). The details of this work, including the FM-IQA dataset, can be found on the project page: this http URL | The structure of our mQA model is inspired by the m-RNN model @cite_1 for the image captioning and image-sentence retrieval tasks. It adopts a deep CNN for vision and a RNN for language. We extend the model to handle the input of question and image pairs, and generate answers. In the experiments, we find that we can learn how to ask a good question about an image using the m-RNN model and this question can be answered by our mQA model. | {
"cite_N": [
"@cite_1"
],
"mid": [
"1811254738"
],
"abstract": [
"In this paper, we present a multimodal Recurrent Neural Network (m-RNN) model for generating novel image captions. It directly models the probability distribution of generating a word given previous words and an image. Image captions are generated by sampling from this distribution. The model consists of two sub-networks: a deep recurrent neural network for sentences and a deep convolutional network for images. These two sub-networks interact with each other in a multimodal layer to form the whole m-RNN model. The effectiveness of our model is validated on four benchmark datasets (IAPR TC-12, Flickr 8K, Flickr 30K and MS COCO). Our model outperforms the state-of-the-art methods. In addition, we apply the m-RNN model to retrieval tasks for retrieving images or sentences, and achieves significant performance improvement over the state-of-the-art methods which directly optimize the ranking objective function for retrieval. The project page of this work is: www.stat.ucla.edu junhua.mao m-RNN.html ."
]
} |