1509.08172
2226779634
This article extends the results of Fang & Zeitouni (2012a) on branching random walks (BRWs) with Gaussian increments in time-inhomogeneous environments. We treat the case where the variance of the increments changes a finite number of times at different scales in [0,1], under a slight restriction. We find the asymptotics of the maximum up to an O_P(1) error and show how the profile of the variance influences the leading order and the logarithmic correction term. A more general result was independently obtained by Mallein (2015b) when the law of the increments is not necessarily Gaussian. However, the proof we present here generalizes the approach of Fang & Zeitouni (2012a) instead of using the spinal decomposition of the BRW. As such, the proof is easier to understand and more robust in the presence of an approximate branching structure.
One shortfall of the spinal decomposition is that it relies entirely on the presence of a branching structure. Specifically, a crucial step in @cite_21 is the proof of a time-inhomogeneous version of the classical many-to-one lemma, which is a direct consequence of the comparison made there between the size-biased law of the BRW (the usual change of measure) and a certain projection of a law on the set of planar rooted marked trees with spine.
{ "cite_N": [ "@cite_21" ], "mid": [ "1550653623" ], "abstract": [ "In this article, we study a branching random walk in an environment which depends on the time. This time-inhomogeneous environment consists of a sequence of macroscopic time intervals, in each of which the law of reproduction remains constant. We prove that the asymptotic behaviour of the maximal displacement in this process consists of a first ballistic order, given by the solution of an optimization problem under constraints, a negative logarithmic correction, plus stochastically bounded fluctuations." ] }
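For reference, one standard statement of the time-inhomogeneous many-to-one lemma alluded to above can be sketched as follows; the notation (V(u), ψ_k, S_n) is generic and is an assumption of this sketch, not taken from @cite_21.

```latex
% Sketch of a standard time-inhomogeneous many-to-one lemma.
% Suppose particles at generation k reproduce according to a point process with
% log-Laplace transform \psi_k(\theta) = \log E\big[\sum_i e^{\theta \ell_i^{(k)}}\big].
% Then for every measurable g \ge 0 and suitable \theta,
\begin{equation*}
  E\Big[\sum_{|u|=n} g\big(V(u)\big)\Big]
  = E\Big[\exp\Big(\textstyle\sum_{k=1}^{n} \psi_k(\theta) - \theta S_n\Big)\, g(S_n)\Big],
\end{equation*}
% where S_n = X_1 + \dots + X_n is the "spine" random walk whose step X_k follows
% the \theta-tilted law of the displacements at time k. The spinal decomposition
% identifies S_n with the trajectory of a distinguished particle under the
% size-biased measure.
```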
1509.08172
2226779634
For other recent and relevant results on branching processes in time-inhomogeneous environments, the reader is referred to @cite_0 @cite_25 @cite_16 @cite_26 @cite_7 @cite_20 @cite_18 @cite_23 .
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_7", "@cite_0", "@cite_23", "@cite_16", "@cite_25", "@cite_20" ], "mid": [ "805655559", "", "1975874480", "2150441560", "2950312311", "2019219533", "1937448836", "2122057480" ], "abstract": [ "Consider a branching random walk evolving in a macroscopic time-inhomogeneous environment that scales with the length n of the process under study. We compute the first two terms of the asymptotic of the maximal displacement at time n. The coefficient of the first (ballistic) order is obtained as the solution of an optimization problem, while the second term, of order n^{1/3}, comes from time-inhomogeneous random walk estimates, which may be of independent interest. This result partially answers a conjecture of Fang and Zeitouni. The same techniques are used to obtain the asymptotic of other quantities, such as the consistent maximal displacement.", "", "We consider the maximal displacement of one-dimensional branching Brownian motion with (macroscopically) time-varying profiles. For monotone decreasing variances, we show that the correction from linear displacement is not logarithmic but rather proportional to T^{1/3}. We conjecture that this is the worst-case correction possible.", "We construct and describe the extremal process for variable speed branching Brownian motion, studied recently by Fang and Zeitouni, for the case of piecewise constant speeds; in fact for simplicity we concentrate on the case when the speed is @math for @math and @math when @math . In the case @math , the process is the concatenation of two BBM extremal processes, as expected. In the case @math , a new family of cluster point processes arises, that are similar, but distinctively different from the BBM process. Our proofs follow the strategy of Arguin, Bovier, and Kistler.", "The behavior of the maximal displacement of a supercritical branching random walk has been a subject of intense studies for a long time. 
But only recently the case of time-inhomogeneous branching has gained focus. The contribution of this paper is to analyze a time-inhomogeneous model with two levels of randomness. In the first step a sequence of branching laws is sampled independently according to a distribution on the set of point measures’ laws. Conditionally on the realization of this sequence (called environment) we define a branching random walk and find the asymptotic behavior of its maximal particle. It is of the form V_n − φ log n + o_P(log n), where V_n is a function of the environment that behaves as a random walk and φ > 0 is a deterministic constant, which turns out to be bigger than the usual logarithmic correction of the homogeneous branching random walk.", "The Random Energy Model is generalized to treat arbitrary correlations between pairs of energy levels. This Generalized Random Energy Model remains soluble. The sudden freezing which occurs in the RE model can, in the generalized version, become a progressive freezing when temperature decreases.", "We prove the convergence of the extremal processes for variable speed branching Brownian motions where the \"speed functions\", which describe the time-inhomogeneous variance, lie strictly below their concave hull and satisfy a certain weak regularity condition. These limiting objects are universal in the sense that they only depend on the slope of the speed function at 0 and the final time t. 
The proof is based on previous results for two-speed BBM obtained in Bovier and Hartung (2014) and uses Gaussian comparison arguments to extend these to the general case.", "We consider the distribution of the maximum M_T of branching Brownian motion with time-inhomogeneous variance of the form σ^2(t/T), where σ(·) is a strictly decreasing function. This corresponds to the study of the time-inhomogeneous Fisher–Kolmogorov–Petrovskii–Piskunov (FKPP) equation F_t(x,t) = σ^2(1−t/T) F_xx(x,t)/2 + g(F(x,t)), for appropriate nonlinearities g(·). Fang and Zeitouni (2012) showed that M_T − vT is negative, of order T^{1/3}, where v = ∫_0^1 σ(s) ds. In this paper, we show the existence of a function m_T, such that M_T − m_T converges in law, as T → ∞. Furthermore, m_T = vT − wT^{1/3} − σ(1) log T + O(1) with w = 2^{1/3} α_1 ∫_0^1 σ(s)^{1/3} |σ′(s)|^{2/3} ds. Here," ] }
1509.07859
2274540040
We study the problem of recovering a hidden community of cardinality @math from an @math symmetric data matrix @math , where for distinct indices @math , @math if @math both belong to the community and @math otherwise, for two known probability distributions @math and @math depending on @math . If @math and @math with @math , it reduces to the problem of finding a densely-connected @math -subgraph planted in a large Erdős–Rényi graph; if @math and @math with @math , it corresponds to the problem of locating a @math principal submatrix of elevated means in a large Gaussian random matrix. We focus on two types of asymptotic recovery guarantees as @math : (1) weak recovery: expected number of classification errors is @math ; (2) exact recovery: probability of classifying all indices correctly converges to one. Under mild assumptions on @math and @math , and allowing the community size to scale sublinearly with @math , we derive a set of sufficient conditions and a set of necessary conditions for recovery, which are asymptotically tight with sharp constants. The results hold in particular for the Gaussian case, and for the case of bounded log-likelihood ratio, including the Bernoulli case whenever @math and @math are bounded away from zero and infinity. An important algorithmic implication is that, whenever exact recovery is information-theoretically possible, any algorithm that provides weak recovery when the community size is concentrated near @math can be upgraded to achieve exact recovery in linear additional time by a simple voting procedure.
The paper of @cite_0 gives sharp results for a Gaussian submatrix recovery problem similar to the one considered here -- see rmk:Butucea_sharp for details.
{ "cite_N": [ "@cite_0" ], "mid": [ "2963050003" ], "abstract": [ "We observe an N × M matrix of independent, identically distributed Gaussian random variables which are centered except for elements of some submatrix of size n × m where the mean is larger than some a > 0. The submatrix is sparse in the sense that n/N and m/M tend to 0, whereas n, m, N and M tend to infinity. We consider the problem of selecting the random variables with significantly large mean values, as was also considered by [M. Kolar, S. Balakrishnan, A. Rinaldo and A. Singh, NIPS (2011)]. We give sufficient conditions on a as a function of n, m, N and M and construct a uniformly consistent procedure in order to do sharp variable selection. We also prove the minimax lower bounds under necessary conditions which are complementary to the previous conditions. The critical values a∗ separating the necessary and sufficient conditions are sharp (we show exact constants), whereas [M. Kolar, S. Balakrishnan, A. Rinaldo and A. Singh, NIPS (2011)] only prove rate optimality and focus on suboptimal computationally feasible selectors. Note that rate optimality in this problem leaves out a large set of possible parameters, where we do not know whether consistent selection is possible." ] }
1509.07469
2962739446
Massive MIMO is a variant of multiuser MIMO where the number of base-station antennas M is very large (typically ≈ 100), and generally much larger than the number of spatially multiplexed data streams (typically ≈ 10). The benefits of such an approach have been intensively investigated in the past few years, and all-digital experimental implementations have also been demonstrated. Unfortunately, the front-end A/D conversion necessary to drive hundreds of antennas, with a signal bandwidth of the order of 10 to 100 MHz, requires a very large sampling bitrate and power consumption. In order to reduce such implementation requirements, Hybrid Digital-Analog architectures have been proposed. In particular, our work in this paper is motivated by one such scheme, named Joint Spatial Division and Multiplexing (JSDM), where the downlink precoder (resp., uplink linear receiver) is split into the product of a baseband linear projection (digital) and an RF reconfigurable beamforming network (analog), such that only a reduced number m ≪ M of A/D converters and RF modulation/demodulation chains is needed. In JSDM, users are grouped according to similarity of their channel dominant subspaces, and these groups are separated by the analog beamforming stage, where multiplexing gain in each group is achieved using the digital precoder. Therefore, it is apparent that extracting the channel subspace information of the M-dim channel vectors from snapshots of m-dim projections, with m ≪ M, plays a fundamental role in JSDM implementation. In this paper, we develop novel efficient algorithms that require sampling only m = O(2√M) specific array elements according to a coprime sampling scheme, and for a given p ≤ M, return a p-dim beamformer that has a performance comparable with the best p-dim beamformer that can be designed from the full knowledge of the exact channel covariance matrix. 
We assess the performance of our proposed estimators both analytically and empirically via numerical simulations. We also demonstrate by simulation that the proposed subspace estimation methods provide near-ideal performance for a massive MIMO JSDM system, by comparing with the case where the user channel covariances are perfectly known.
Let @math be the singular value decomposition (SVD) of @math , where @math denotes the diagonal matrix of singular values (sorted in non-increasing order). Denoting by @math the @math matrix consisting of the first @math columns of @math , we have that the columns of @math form an orthonormal basis for the signal subspace. The goal of subspace tracking from incomplete observations is to estimate this subspace from the noisy low-dim sketches @math , revealed to the estimator sequentially for @math . The noiseless version of this problem was studied by Chi et al. in @cite_17 , who proposed the PETRELS algorithm. Another algorithm, named GROUSE, was proposed by Balzano et al. in @cite_48 . The main focus of both algorithms is to optimize the computational complexity rather than the data size, and therefore they are mainly suited to the case where both @math and @math are large.
{ "cite_N": [ "@cite_48", "@cite_17" ], "mid": [ "2962834831", "2075406189" ], "abstract": [ "This work presents GROUSE (Grassmanian Rank-One Update Subspace Estimation), an efficient online algorithm for tracking subspaces from highly incomplete observations. GROUSE requires only basic linear algebraic manipulations at each iteration, and each subspace update can be performed in linear time in the dimension of the subspace. The algorithm is derived by analyzing incremental gradient descent on the Grassmannian manifold of subspaces. With a slight modification, GROUSE can also be used as an online incremental algorithm for the matrix completion problem of imputing missing entries of a low-rank matrix. GROUSE performs exceptionally well in practice both in tracking subspaces and as an online algorithm for matrix completion.", "Many real world datasets exhibit an embedding of low-dimensional structure in a high-dimensional manifold. Examples include images, videos and internet traffic data. It is of great significance to estimate and track the low-dimensional structure with small storage requirements and computational complexity when the data dimension is high. Therefore we consider the problem of reconstructing a data stream from a small subset of its entries, where the data is assumed to lie in a low-dimensional linear subspace, possibly corrupted by noise. We further consider tracking the change of the underlying subspace, which can be applied to applications such as video denoising, network monitoring and anomaly detection. Our setting can be viewed as a sequential low-rank matrix completion problem in which the subspace is learned in an online fashion. The proposed algorithm, dubbed Parallel Estimation and Tracking by REcursive Least Squares (PETRELS), first identifies the underlying low-dimensional subspace, and then reconstructs the missing entries via least-squares estimation if required. 
Subspace identification is performed via a recursive procedure for each row of the subspace matrix in parallel with discounting for previous observations. Numerical examples are provided for direction-of-arrival estimation and matrix completion, comparing PETRELS with state of the art batch algorithms." ] }
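To make the incomplete-observation setting concrete, here is a minimal GROUSE-style sketch on synthetic data: a rank-one Grassmannian update driven by partially observed snapshots. All dimensions, the data model, and the greedy step rule are illustrative assumptions of this sketch, not details taken from the cited papers.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, T, m = 50, 3, 2000, 25   # ambient dim, rank, snapshots, observed entries (illustrative)

# Hypothetical ground-truth subspace; each snapshot is x_t = U_true @ a_t (noiseless).
U_true, _ = np.linalg.qr(rng.standard_normal((n, d)))
U, _ = np.linalg.qr(rng.standard_normal((n, d)))   # random orthonormal initial estimate

def gap(U_est):
    # Frobenius distance between the orthogonal projectors onto the two subspaces
    return np.linalg.norm(U_true @ U_true.T - U_est @ U_est.T)

gap0 = gap(U)
for _ in range(T):
    x = U_true @ rng.standard_normal(d)
    omega = rng.choice(n, size=m, replace=False)    # entry indices observed this round

    # Least-squares fit of the observed entries to the current subspace estimate
    w, *_ = np.linalg.lstsq(U[omega], x[omega], rcond=None)
    p = U @ w                                       # in-subspace prediction
    r = np.zeros(n)
    r[omega] = x[omega] - p[omega]                  # zero-filled residual (orthogonal to range(U))
    rn, pn = np.linalg.norm(r), np.linalg.norm(p)
    if rn < 1e-12 or pn < 1e-12:
        continue

    # Greedy rank-one rotation within span{p, r}; this update keeps U orthonormal
    theta = np.arctan(rn / pn)
    U = U + np.outer((np.cos(theta) - 1) * p / pn + np.sin(theta) * r / rn,
                     w / np.linalg.norm(w))
```

After the stream is processed, `gap(U)` should be far below its initial value `gap0`, reflecting convergence of the tracked subspace to the true one.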
1509.07469
2962739446
For @math and for a high signal-to-noise ratio (SNR), the covariance matrix @math is nearly low-rank. Recovery of low-rank matrices from a few, possibly noisy, samples is of great importance in signal processing and machine learning. Recently, it has been shown that this can be achieved via nuclear-norm minimization, which is a convex problem and can be solved efficiently @cite_47 . For a symmetric matrix @math , the nuclear norm @math is given by the sum of the absolute values of the eigenvalues of @math , and reduces to @math when @math is positive semi-definite (PSD). In our case, we have only a collection of @math snapshots @math as defined before. Let @math be the sample covariance of the full and projected signal. A natural extension of matrix completion by nuclear-norm minimization to our case is readily given by: where @math is an estimate of the @math -norm of the error.
{ "cite_N": [ "@cite_47" ], "mid": [ "2611328865" ], "abstract": [ "We consider a problem of considerable practical interest: the recovery of a data matrix from a sampling of its entries. Suppose that we observe m entries selected uniformly at random from a matrix M. Can we complete the matrix and recover the entries that we have not seen? We show that one can perfectly recover most low-rank matrices from what appears to be an incomplete set of entries. We prove that if the number m of sampled entries obeys @math for some positive numerical constant C, then with very high probability, most n×n matrices of rank r can be perfectly recovered by solving a simple convex optimization program. This program finds the matrix with minimum nuclear norm that fits the data. The condition above assumes that the rank is not too large. However, if one replaces the 1.2 exponent with 1.25, then the result holds for all values of the rank. Similar results hold for arbitrary rectangular matrices as well. Our results are connected with the recent literature on compressed sensing, and show that objects other than signals and images can be perfectly reconstructed from very limited information." ] }
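As a concrete illustration of nuclear-norm recovery, the sketch below runs proximal gradient descent with singular-value soft-thresholding on a small synthetic completion problem, and checks the PSD fact quoted above (that the nuclear norm reduces to the trace). The sizes, the sampling rate, and the regularization weight are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 30, 2                                                   # illustrative sizes
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-r target matrix
observed = rng.random((n, n)) < 0.6                            # ~60% of entries revealed

def svt(X, tau):
    """Singular-value soft-thresholding: the prox operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

# Proximal gradient on  min_X  tau*||X||_* + 0.5*||P_Omega(X - M)||_F^2
X, tau = np.zeros((n, n)), 0.1
for _ in range(500):
    X = svt(X - observed * (X - M), tau)   # step size 1: the smooth part is 1-Lipschitz

rel_err = np.linalg.norm(X - M) / np.linalg.norm(M)

# Sanity check of the PSD fact: for PSD matrices the nuclear norm equals the trace.
B = rng.standard_normal((n, n))
P = B @ B.T
nuc = np.abs(np.linalg.eigvalsh(P)).sum()
```

With far more observed entries than degrees of freedom of a rank-2 matrix, the iterate `X` ends up close to `M`, which is the behavior the nuclear-norm relaxation is designed to deliver.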
1509.07469
2962739446
We will compare the performance of our algorithms with a grid-based MMV approach as in @cite_34 , where the channel coefficients @math are estimated by where @math is a quantized dictionary over a grid of AoAs of size @math , and where the so-called @math -norm of the matrix @math is defined as @math , with @math , @math , denoting the rows of @math . The signal subspace is eventually given by @math , where @math contains the indices of the "active" columns of @math , i.e., those indexed by the support set of @math .
{ "cite_N": [ "@cite_34" ], "mid": [ "2074054045" ], "abstract": [ "A simultaneous sparse approximation problem requests a good approximation of several input signals at once using different linear combinations of the same elementary signals. At the same time, the problem balances the error in approximation against the total number of elementary signals that participate. These elementary signals typically model coherent structures in the input signals, and they are chosen from a large, linearly dependent collection.The first part of this paper presents theoretical and numerical results for a greedy pursuit algorithm, called simultaneous orthogonal matching pursuit.The second part of the paper develops another algorithmic approach called convex relaxation. This method replaces the combinatorial simultaneous sparse approximation problem with a closely related convex program that can be solved efficiently with standard mathematical programming software. The paper develops conditions under which convex relaxation computes good solutions to simultaneous sparse approximation problems." ] }
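The grid-based MMV idea can be sketched with the greedy simultaneous OMP variant developed in the cited work (which also gives a convex l2,1 relaxation). Everything below, including the ULA steering-vector dictionary, the grid size, and the well-separated support, is an illustrative assumption rather than the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(2)
m, G, T, k = 20, 64, 30, 3    # antennas, grid size, snapshots, active AoAs (illustrative)

# Hypothetical dictionary: ULA steering vectors quantized over a grid of AoAs.
grid = np.linspace(-np.pi / 2, np.pi / 2, G)
A = np.exp(1j * np.pi * np.outer(np.arange(m), np.sin(grid))) / np.sqrt(m)

support_true = [5, 30, 55]                          # well-separated active columns
X = np.zeros((G, T), dtype=complex)
X[support_true] = rng.standard_normal((k, T)) + 1j * rng.standard_normal((k, T))
Y = A @ X                                           # noiseless MMV snapshots

def somp(A, Y, k):
    """Simultaneous OMP: greedy row-sparse recovery for the MMV problem."""
    residual, support = Y.copy(), []
    for _ in range(k):
        # Aggregate the correlation of each dictionary column across all snapshots
        corr = np.linalg.norm(A.conj().T @ residual, axis=1)
        support.append(int(np.argmax(corr)))
        coef, *_ = np.linalg.lstsq(A[:, support], Y, rcond=None)
        residual = Y - A[:, support] @ coef
    return support

est = somp(A, Y, k)
# The signal subspace estimate is then spanned by the selected columns A[:, est].
```

Aggregating correlations over the T snapshots is what distinguishes the MMV setting from single-snapshot OMP: the shared row support makes the true grid columns stand out against the coherent neighbors of the quantized dictionary.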
1509.07821
2148498187
This paper introduces the Wave Transactional Filesystem (WTF), a novel, transactional, POSIX-compatible filesystem based on a new file slicing API that enables efficient file transformations. WTF provides transactional access to a distributed filesystem, eliminating the possibility of inconsistencies across multiple files. Further, the file slicing API enables applications to construct files from the contents of other files without having to rewrite or relocate data. Combined, these enable a new class of high-performance applications. Experiments show that WTF can qualitatively outperform the industry-standard HDFS distributed filesystem, up to a factor of four in a sorting benchmark, by reducing I/O costs. Microbenchmarks indicate that the new features of WTF impose only a modest overhead on top of the POSIX-compatible API.
Distributed filesystems expose one or more units of storage over a network to clients. AFS @cite_40 exports a uniform namespace to workstations, and stores all data on centralized servers. Other systems @cite_33 @cite_14 @cite_17 , most notably xFS @cite_20 and Swift @cite_32 , stripe data across multiple servers for higher performance than can be achieved with a single disk. Petal @cite_34 provides a virtual disk abstraction that clients may use as a traditional block device. Frangipani @cite_44 builds a filesystem abstraction on top of Petal. NASD @cite_18 and Panasas @cite_12 employ customized storage devices that attach to the network to store the bulk of the metadata. In contrast to these systems, WTF provides transactional guarantees that can span hundreds or thousands of disks because its metadata storage scales independently of the number of storage servers.
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_33", "@cite_32", "@cite_44", "@cite_40", "@cite_34", "@cite_12", "@cite_20", "@cite_17" ], "mid": [ "2137808089", "2131645490", "2613324619", "2149609142", "2115457697", "2005373714", "2124288146", "2131894525", "2025413686", "" ], "abstract": [ "This paper describes the Network-Attached Secure Disk (NASD) storage architecture, prototype implementations oj NASD drives, array management for our architecture, and three, filesystems built on our prototype. NASD provides scalable storage bandwidth without the cost of servers used primarily, for transferring data from peripheral networks (e.g. SCSI) to client networks (e.g. ethernet). Increasing datuset sizes, new attachment technologies, the convergence of peripheral and interprocessor switched networks, and the increased availability of on-drive transistors motivate and enable this new architecture. NASD is based on four main principles: direct transfer to clients, secure interfaces via cryptographic support, asynchronous non-critical-path oversight, and variably-sized data objects. Measurements of our prototype system show that these services can be cost-effectively integrated into a next generation disk drive ASK. End-to-end measurements of our prototype drive andfilesysterns suggest that NASD cun support conventional distributed filesystems without performance degradation. More importantly, we show scaluble bandwidth for NASD-specialized filesystems. Using a parallel data mining application, NASD drives deliver u linear scaling of 6.2 MB s per clientdrive pair, tested with up to eight pairs in our lab.", "GPFS is IBM's parallel, shared-disk file system for cluster computers, available on the RS 6000 SP parallel supercomputer and on Linux clusters. GPFS is used on many of the largest supercomputers in the world. 
GPFS was built on many of the ideas that were developed in the academic community over the last several years, particularly distributed locking and recovery technology. To date it has been a matter of conjecture how well these ideas scale. We have had the opportunity to test those limits in the context of a product that runs on the largest systems in existence. While in many cases existing ideas scaled well, new approaches were necessary in many key areas. This paper describes GPFS, and discusses how distributed locking and recovery techniques were extended to scale to large clusters.", "Zebra is a network file system that increases throughput by striping the file data across multiple servers. Rather than striping each file separately, Zebra forms all the new data from each client into a single stream, which it then stripes using an approach similar to a log-structured file system. This provides high performance for writes of small files as well as for reads and writes of large files. Zebra also writes parity information in each stripe in the style of RAID disk arrays; this increases storage costs slightly, but allows the system to continue operation while a single storage server is unavailable. A prototype implementation of Zebra, built in the Sprite operating system, provides 4–5 times the throughput of the standard Sprite file system or NFS for large files and a 15–300 improvement for writing small files.", "We present an I O architecture, called Swift, that addresses the problem of data rate mismatches between the requirements of an application, storage devices, and the interconnection medium. The goal of Swift is to support high data rates in general purpose distributed systems. Swift uses a high-speed interconnection medium to provide high data rate transfers by using multiple slower storage devices in parallel. 
It scales well when using multiple storage devices and interconnections, and can use any appropriate storage technology, including high-performance devices such as disk arrays. To address the problem of partial failures, Swift stores data redundantly. Using the UNIX operating system, we have constructed a simplified prototype of the Swift architecture. The prototype provides data rates that are significantly faster than access to the local SCSI disk, limited by the capacity of a single Ethernet segment, or in the case of multiple Ethernet segments by the ability of the client to drive them. We have constructed a simulation model to demonstrate how the Swift architecture can exploit advances in processor, communication and storage technology. We consider the effects of processor speed, interconnection capacity, and multiple storage agents on the utilization of the components and the data rate of the system. We show that the data rates scale well in the number of storage devices, and that by replacing the most highly stressed components by more powerful ones the data rates of the entire system increase significantly.", "The ideal distributed file system would provide all its users with coherent, shared access to the same set of files, yet would be arbitrarily scalable to provide more storage space and higher performance to a growing user community. It would be highly available in spite of component failures. It would require minimal human administration, and administration would not become more complex as more components were added. Frangipani is a new file system that approximates this ideal, yet was relatively easy to build because of its two-layer structure. The lower layer is Pet al (described in an earlier paper), a distributed storage service that provides incrementally scalable, highly available, automatically managed virtual disks. 
In the upper layer, multiple machines run the same Frangipani file system code on top of a shared Pet al virtual disk, using a distributed lock service to ensure coherence. Frangipani is meant to run in a cluster of machines that are under a common administration and can communicate securely. Thus the machines trust one another and the shared virtual disk approach is practical. Of course, a Frangipani file system can be exported to untrusted machines using ordinary network file access protocols. We have implemented Frangipani on a collection of Alphas running DIGITAL Unix 4.0. Initial measurements indicate that Frangipani has excellent single-server performance and scales well as servers are added.", "The Andrew File System is a location-transparent distributed tile system that will eventually span more than 5000 workstations at Carnegie Mellon University. Large scale affects performance and complicates system operation. In this paper we present observations of a prototype implementation, motivate changes in the areas of cache validation, server process structure, name translation, and low-level storage representation, and quantitatively demonstrate Andrews ability to scale gracefully. We establish the importance of whole-file transfer and caching in Andrew by comparing its performance with that of Sun Microsystems NFS tile system. We also show how the aggregation of files into volumes improves the operability of the system.", "The ideal storage system is globally accessible, always available, provides unlimited performance and capacity for a large number of clients, and requires no management. This paper describes the design, implementation, and performance of Pet al, a system that attempts to approximate this ideal in practice through a novel combination of features. Pet al consists of a collection of network-connected servers that cooperatively manage a pool of physical disks. 
To a Petal client, this collection appears as a highly available block-level storage system that provides large abstract containers called virtual disks. A virtual disk is globally accessible to all Petal clients on the network. A client can create a virtual disk on demand to tap the entire capacity and performance of the underlying physical resources. Furthermore, additional resources, such as servers and disks, can be automatically incorporated into Petal. We have an initial Petal prototype consisting of four 225 MHz DEC 3000/700 workstations running Digital Unix and connected by a 155 Mbit/s ATM network. The prototype provides clients with virtual disks that tolerate and recover from disk, server, and network failures. Latency is comparable to a locally attached disk, and throughput scales with the number of servers. The prototype can achieve I/O rates of up to 3150 requests/sec and bandwidth up to 43.1 Mbytes/sec.", "The Panasas file system uses parallel and redundant access to object storage devices (OSDs), per-file RAID, distributed metadata management, consistent client caching, file locking services, and internal cluster management to provide a scalable, fault tolerant, high performance distributed file system. The clustered design of the storage system and the use of client-driven RAID provide scalable performance to many concurrent file system clients through parallel access to file data that is striped across OSD storage nodes. RAID recovery is performed in parallel by the cluster of metadata managers, and declustered data placement yields scalable RAID rebuild rates as the storage system grows larger. This paper presents performance measures of I/O, metadata, and recovery operations for storage clusters that range in size from 10 to 120 storage nodes, 1 to 12 metadata nodes, and with file system client counts ranging from 1 to 100 compute nodes.
Production installations are as large as 500 storage nodes, 50 metadata managers, and 5000 clients.", "In this paper, we propose a new paradigm for network file system design, serverless network file systems. While traditional network file systems rely on a central server machine, a serverless system utilizes workstations cooperating as peers to provide all file system services. Any machine in the system can store, cache, or control any block of data. Our approach uses this location independence, in combination with fast local area networks, to provide better performance and scalability than traditional file systems. Further, because any machine in the system can assume the responsibilities of a failed component, our serverless design also provides high availability via redundant data storage. To demonstrate our approach, we have implemented a prototype serverless network file system called xFS. Preliminary performance measurements suggest that our architecture achieves its goal of scalability. For instance, in a 32-node xFS system with 32 active clients, each client receives nearly as much read or write throughput as it would see if it were the only active client.", "" ] }
1509.07821
2148498187
This paper introduces the Wave Transactional Filesystem (WTF), a novel, transactional, POSIX-compatible filesystem based on a new file slicing API that enables efficient file transformations. WTF provides transactional access to a distributed filesystem, eliminating the possibility of inconsistencies across multiple files. Further, the file slicing API enables applications to construct files from the contents of other files without having to rewrite or relocate data. Combined, these enable a new class of high-performance applications. Experiments show that WTF can qualitatively outperform the industry-standard HDFS distributed filesystem, up to a factor of four in a sorting benchmark, by reducing I/O costs. Microbenchmarks indicate that the new features of WTF impose only a modest overhead on top of the POSIX-compatible API.
Recent work has focused on building large-scale datacenter-centric filesystems. GFS @cite_1 and HDFS @cite_26 employ a centralized master server that maintains the metadata, mediates client access, and coordinates the storage servers. Salus @cite_5 improves HDFS to support storage and computation failures without loss of data, but retains the central metadata server. This centralized master approach, however, suffers from scalability bottlenecks inherent to the limits of a single server @cite_41 . WTF overcomes the metadata scalability bottleneck using the scalable HyperDex key-value store @cite_42 .
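The metadata-scaling contrast above can be made concrete with a toy key-value layout for file metadata. The `MetadataStore` class, its path-keyed schema, and the record fields are illustrative assumptions for this sketch, not WTF's or HyperDex's actual API; a plain dict stands in for the distributed key-value store.

```python
# Sketch: file metadata kept in a key-value store rather than on a
# single master server. The schema below is an assumption for
# illustration, not WTF's or HyperDex's real data model.

class MetadataStore:
    """Maps each path to a metadata record. In a real deployment the
    keyspace would be sharded across many servers, so no single
    machine mediates every metadata operation."""

    def __init__(self):
        self.kv = {}  # stand-in for a distributed key-value store

    def create(self, path, size=0):
        self.kv[path] = {"size": size, "slices": []}

    def lookup(self, path):
        return self.kv.get(path)

store = MetadataStore()
store.create("/data/log")
assert store.lookup("/data/log")["size"] == 0
assert store.lookup("/missing") is None
```

Because lookups and updates touch only the shard owning a given path, metadata throughput can grow with the number of servers instead of saturating a single master.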
{ "cite_N": [ "@cite_26", "@cite_41", "@cite_42", "@cite_1", "@cite_5" ], "mid": [ "", "2023502337", "2282592920", "2119565742", "1910816777" ], "abstract": [ "", "A discussion between Kirk McKusick and Sean Quinlan about the origin and evolution of the Google File System", "Traditional NoSQL systems scale by sharding data across multiple servers and by performing each operation on a small number of servers. Because transactions on multiple keys necessarily require coordination across multiple servers, NoSQL systems often explicitly avoid making transactional guarantees in order to avoid such coordination. Past work on transactional systems control this coordination by either increasing the granularity at which transactions are ordered, sacrificing serializability, or by making clock synchronicity assumptions. This paper presents a novel protocol for providing serializable transactions on top of a sharded data store. Called acyclic transactions, this protocol allows multiple transactions to prepare and commit simultaneously, improving concurrency in the system, while ensuring that no cycles form between concurrently-committing transactions. We have fully implemented acyclic transactions in a document store called Warp. Experiments show that Warp achieves 4 times higher throughput than Sinfonia's mini-transactions on the standard TPC-C benchmark with no aborts. Further, the system achieves 75% of the throughput of the non-transactional key-value store it builds upon.", "We have designed and implemented the Google File System, a scalable distributed file system for large distributed data-intensive applications. It provides fault tolerance while running on inexpensive commodity hardware, and it delivers high aggregate performance to a large number of clients.
While sharing many of the same goals as previous distributed file systems, our design has been driven by observations of our application workloads and technological environment, both current and anticipated, that reflect a marked departure from some earlier file system assumptions. This has led us to reexamine traditional choices and explore radically different design points. The file system has successfully met our storage needs. It is widely deployed within Google as the storage platform for the generation and processing of data used by our service as well as research and development efforts that require large data sets. The largest cluster to date provides hundreds of terabytes of storage across thousands of disks on over a thousand machines, and it is concurrently accessed by hundreds of clients. In this paper, we present file system interface extensions designed to support distributed applications, discuss many aspects of our design, and report measurements from both micro-benchmarks and real world use.", "This paper describes Salus, a block store that seeks to maximize simultaneously both scalability and robustness. Salus provides strong end-to-end correctness guarantees for read operations, strict ordering guarantees for write operations, and strong durability and availability guarantees despite a wide range of server failures (including memory corruptions, disk corruptions, firmware bugs, etc.). Such increased protection does not come at the cost of scalability or performance: indeed, Salus often actually outperforms HBase (the codebase from which Salus descends). For example, Salus' active replication allows it to halve network bandwidth while increasing aggregate write throughput by a factor of 1.74 compared to HBase in a well-provisioned system." ] }
1509.07821
2148498187
This paper introduces the Wave Transactional Filesystem (WTF), a novel, transactional, POSIX-compatible filesystem based on a new file slicing API that enables efficient file transformations. WTF provides transactional access to a distributed filesystem, eliminating the possibility of inconsistencies across multiple files. Further, the file slicing API enables applications to construct files from the contents of other files without having to rewrite or relocate data. Combined, these enable a new class of high-performance applications. Experiments show that WTF can qualitatively outperform the industry-standard HDFS distributed filesystem, up to a factor of four in a sorting benchmark, by reducing I/O costs. Microbenchmarks indicate that the new features of WTF impose only a modest overhead on top of the POSIX-compatible API.
CalvinFS @cite_30 focuses on fast metadata management using distributed transactions in the Calvin @cite_10 transaction processing system. Transactions in CalvinFS are limited, and cannot perform read-modify-write operations on the filesystem without additional mechanisms. Further, CalvinFS addresses file fragmentation with a heavyweight garbage collection mechanism that entirely rewrites fragmented files; in the worst case, a sequential writer could incur I/O that scales quadratically in the size of the file. In contrast, WTF provides fully general transactions and carefully arranges data to improve sequential write performance.
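The quadratic worst case mentioned above follows from simple arithmetic: if garbage collection rewrites the whole file after each of n unit-sized appends, total I/O is 1 + 2 + ... + n = n(n+1)/2, versus n for in-place appends. A minimal sketch with synthetic unit costs (not CalvinFS's actual mechanism):

```python
# Synthetic cost model for the worst case described above: a full
# rewrite after every append is quadratic, appending is linear.

def rewrite_cost(n):
    # after the i-th append, the whole i-unit file is rewritten
    return sum(range(1, n + 1))  # n * (n + 1) / 2

def append_cost(n):
    return n  # each append writes exactly one new unit

assert rewrite_cost(100) == 5050  # grows quadratically
assert append_cost(100) == 100    # grows linearly
```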
{ "cite_N": [ "@cite_30", "@cite_10" ], "mid": [ "1751683294", "2060440895" ], "abstract": [ "Existing file systems, even the most scalable systems that store hundreds of petabytes (or more) of data across thousands of machines, store file metadata on a single server or via a shared-disk architecture in order to ensure consistency and validity of the metadata. This paper describes a completely different approach for the design of replicated, scalable file systems, which leverages a high-throughput distributed database system for metadata management. This results in improved scalability of the metadata layer of the file system, as file metadata can be partitioned (and replicated) across a (shared-nothing) cluster of independent servers, and operations on file metadata transformed into distributed transactions. In addition, our file system is able to support standard file system semantics--including fully linearizable random writes by concurrent users to arbitrary byte offsets within the same file--across wide geographic areas. Such high performance, fully consistent, geographically distributed files systems do not exist today. We demonstrate that our approach to file system design can scale to billions of files and handle hundreds of thousands of updates and millions of reads per second-- while maintaining consistently low read latencies. Furthermore, such a deployment can survive entire datacenter outages with only small performance hiccups and no loss of availability.", "Many distributed storage systems achieve high data access throughput via partitioning and replication, each system with its own advantages and tradeoffs. In order to achieve high scalability, however, today's systems generally reduce transactional support, disallowing single transactions from spanning multiple partitions. 
Calvin is a practical transaction scheduling and data replication layer that uses a deterministic ordering guarantee to significantly reduce the normally prohibitive contention costs associated with distributed transactions. Unlike previous deterministic database system prototypes, Calvin supports disk-based storage, scales near-linearly on a cluster of commodity machines, and has no single point of failure. By replicating transaction inputs rather than effects, Calvin is also able to support multiple consistency levels---including Paxos-based strong consistency across geographically distant replicas---at no cost to transactional throughput." ] }
1509.07821
2148498187
This paper introduces the Wave Transactional Filesystem (WTF), a novel, transactional, POSIX-compatible filesystem based on a new file slicing API that enables efficient file transformations. WTF provides transactional access to a distributed filesystem, eliminating the possibility of inconsistencies across multiple files. Further, the file slicing API enables applications to construct files from the contents of other files without having to rewrite or relocate data. Combined, these enable a new class of high-performance applications. Experiments show that WTF can qualitatively outperform the industry-standard HDFS distributed filesystem, up to a factor of four in a sorting benchmark, by reducing I/O costs. Microbenchmarks indicate that the new features of WTF impose only a modest overhead on top of the POSIX-compatible API.
Another approach to scalability is demonstrated by Flat Datacenter Storage @cite_31 , which enables applications to access any disk in a cluster via a Clos network with full bisection bandwidth. To eliminate the scalability bottlenecks inherent to a single-master design, FDS stores metadata on its tract servers and uses a centralized master solely to maintain the list of servers in the system. Blizzard @cite_39 builds block storage, visible to applications as a standard block device, on top of FDS, using nested striping and eventual durability to service the smaller writes typical of POSIX applications. These systems are complementary to WTF and could implement its storage-server abstraction.
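Deterministic striping of the sort these systems rely on can be sketched as a pure function from byte offset to a (server, tract) pair, so any client can locate data by arithmetic alone with no master on the data path. The 8 MB tract size and round-robin placement below are assumptions for illustration, not FDS's or Blizzard's actual placement algorithm:

```python
# Sketch of deterministic striping: a byte offset maps to a tract,
# and tracts are spread round-robin across servers. Constants are
# hypothetical, chosen only to make the arithmetic concrete.

TRACT_SIZE = 8 << 20  # 8 MB per tract (assumption)
NUM_SERVERS = 4       # hypothetical cluster size

def locate(offset):
    tract = offset // TRACT_SIZE
    server = tract % NUM_SERVERS
    return server, tract

assert locate(0) == (0, 0)
assert locate(TRACT_SIZE) == (1, 1)      # next tract, next server
assert locate(5 * TRACT_SIZE) == (1, 5)  # wraps around the servers
```

Because placement is a pure function of the offset, clients never consult the master for reads or writes, which is what removes the single-master bottleneck.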
{ "cite_N": [ "@cite_31", "@cite_39" ], "mid": [ "1900515362", "201983520" ], "abstract": [ "Flat Datacenter Storage (FDS) is a high-performance, fault-tolerant, large-scale, locality-oblivious blob store. Using a novel combination of full bisection bandwidth networks, data and metadata striping, and flow control, FDS multiplexes an application's large-scale I/O across the available throughput and latency budget of every disk in a cluster. FDS therefore makes many optimizations around data locality unnecessary. Disks also communicate with each other at their full bandwidth, making recovery from disk failures extremely fast. FDS is designed for datacenter scale, fully distributing metadata operations that might otherwise become a bottleneck. FDS applications achieve single-process read and write performance of more than 2 GB/s. We measure recovery of 92 GB of data lost to disk failure in 6.2 s and recovery from a total machine failure with 655 GB of data in 33.7 s. Application performance is also high: we describe our FDS-based sort application which set the 2012 world record for disk-to-disk sorting.", "Blizzard is a high-performance block store that exposes cloud storage to cloud-oblivious POSIX and Win32 applications. Blizzard connects clients and servers using a network with full-bisection bandwidth, allowing clients to access any remote disk as fast as if it were local. Using a novel striping scheme, Blizzard exposes high disk parallelism to both sequential and random workloads; also, by decoupling the durability and ordering requirements expressed by flush requests, Blizzard can commit writes out-of-order, providing high performance and crash consistency to applications that issue many small, random IOs. Blizzard's virtual disk drive, which clients mount like a normal physical one, provides maximum throughputs of 1200 MB/s, and can improve the performance of unmodified, cloud-oblivious applications by 2x-10x.
Compared to EBS, a commercially available, state-of-the-art virtual drive for cloud applications, Blizzard can improve SQL server IOPS by seven-fold while still providing crash consistency." ] }
1509.07821
2148498187
This paper introduces the Wave Transactional Filesystem (WTF), a novel, transactional, POSIX-compatible filesystem based on a new file slicing API that enables efficient file transformations. WTF provides transactional access to a distributed filesystem, eliminating the possibility of inconsistencies across multiple files. Further, the file slicing API enables applications to construct files from the contents of other files without having to rewrite or relocate data. Combined, these enable a new class of high-performance applications. Experiments show that WTF can qualitatively outperform the industry-standard HDFS distributed filesystem, up to a factor of four in a sorting benchmark, by reducing I/O costs. Microbenchmarks indicate that the new features of WTF impose only a modest overhead on top of the POSIX-compatible API.
Power-proportional filesystems are elastic, in that they dynamically scale a cluster's resource usage with demand and decrease its power consumption during periods of low utilization @cite_13 @cite_24 @cite_4 . WTF's design does not consider power-proportionality, but it could incorporate allocation techniques from these systems to become more elastic.
{ "cite_N": [ "@cite_24", "@cite_13", "@cite_4" ], "mid": [ "2058181149", "2099204199", "1482713870" ], "abstract": [ "Online services hosted in data centers show significant diurnal variation in load levels. Thus, there is significant potential for saving power by powering down excess servers during the troughs. However, while techniques like VM migration can consolidate computational load, storage state has always been the elephant in the room preventing this powering down. Migrating storage is not a practical way to consolidate I/O load. This paper presents Sierra, a power-proportional distributed storage subsystem for data centers. Sierra allows powering down of a large fraction of servers during troughs without migrating data and without imposing extra capacity requirements. It addresses the challenges of maintaining read and write availability, no performance degradation, consistency, and fault tolerance for general I/O workloads through a set of techniques including power-aware layout, a distributed virtual log, recovery and migration techniques, and predictive gear scheduling. Replaying live traces from a large, real service (Hotmail) on a cluster shows power savings of 23%. Savings of 40-50% are possible with more complex optimizations.", "Power-proportional cluster-based storage is an important component of an overall cloud computing infrastructure. With it, substantial subsets of nodes in the storage cluster can be turned off to save power during periods of low utilization. Rabbit is a distributed file system that arranges its data-layout to provide ideal power-proportionality down to very low minimum number of powered-up nodes (enough to store a primary replica of available datasets). Rabbit addresses the node failure rates of large-scale clusters with data layouts that minimize the number of nodes that must be powered-up if a primary fails.
Rabbit also allows different datasets to use different subsets of nodes as a building block for interference avoidance when the infrastructure is shared by multiple tenants. Experiments with a Rabbit prototype demonstrate its power-proportionality, and simulation experiments demonstrate its properties at scale.", "Elastic storage systems can be expanded or contracted to meet current demand, allowing servers to be turned off or used for other tasks. However, the usefulness of an elastic distributed storage system is limited by its agility: how quickly it can increase or decrease its number of servers. Due to the large amount of data they must migrate during elastic resizing, state-of-the-art designs usually have to make painful tradeoffs among performance, elasticity and agility. This paper describes an elastic storage system, called SpringFS, that can quickly change its number of active servers, while retaining elasticity and performance goals. SpringFS uses a novel technique, termed bounded write offloading, that restricts the set of servers where writes to overloaded servers are redirected. This technique, combined with the read offloading and passive migration policies used in SpringFS, minimizes the work needed before deactivation or activation of servers. Analysis of real-world traces from Hadoop deployments at Facebook and various Cloudera customers and experiments with the SpringFS prototype confirm SpringFS's agility, show that it reduces the amount of data migrated for elastic resizing by up to two orders of magnitude, and show that it cuts the percentage of active servers required by 67-82%, outdoing state-of-the-art designs by 6-120%." ] }
1509.07821
2148498187
This paper introduces the Wave Transactional Filesystem (WTF), a novel, transactional, POSIX-compatible filesystem based on a new file slicing API that enables efficient file transformations. WTF provides transactional access to a distributed filesystem, eliminating the possibility of inconsistencies across multiple files. Further, the file slicing API enables applications to construct files from the contents of other files without having to rewrite or relocate data. Combined, these enable a new class of high-performance applications. Experiments show that WTF can qualitatively outperform the industry-standard HDFS distributed filesystem, up to a factor of four in a sorting benchmark, by reducing I/O costs. Microbenchmarks indicate that the new features of WTF impose only a modest overhead on top of the POSIX-compatible API.
Other "blob" storage systems behave similarly to file systems, but with a restricted interface that permits creating, retrieving, and deleting blobs, without efficient support for arbitrarily changing or resizing blobs. Facebook's f4 @cite_38 ensures that infrequently accessed files remain readily available. Pelican @cite_2 enables power-efficient cold storage by overprovisioning storage space and selectively turning on subsets of the disks to service requests. The design goals of these systems are different from the interactive, online applications that WTF enables; WTF could be used in front of these systems to generate, maintain, and modify data before placing it in warm or cold storage.
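The restricted blob interface described above can be made concrete with a small sketch: whole-blob put/get/delete only, with no partial update or resize. The class and method names are hypothetical, not the API of f4 or Pelican.

```python
# Sketch of a restricted "blob" interface: whole objects only.
# Contrast with a filesystem, which supports in-place writes and
# resizing. Names are hypothetical, for illustration only.

class BlobStore:
    def __init__(self):
        self._blobs = {}

    def put(self, name, data):
        self._blobs[name] = bytes(data)  # whole-blob write only

    def get(self, name):
        return self._blobs[name]

    def delete(self, name):
        del self._blobs[name]

b = BlobStore()
b.put("img", b"\x89PNG")
assert b.get("img") == b"\x89PNG"
b.delete("img")
assert "img" not in b._blobs
```

Modifying a stored blob under this interface means re-uploading it in full, which is why such systems suit warm or cold data rather than interactive workloads.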
{ "cite_N": [ "@cite_38", "@cite_2" ], "mid": [ "1441763937", "1471723175" ], "abstract": [ "Facebook's corpus of photos, videos, and other Binary Large OBjects (BLOBs) that need to be reliably stored and quickly accessible is massive and continues to grow. As the footprint of BLOBs increases, storing them in our traditional storage system, Haystack, is becoming increasingly inefficient. To increase our storage efficiency, measured in the effective-replication-factor of BLOBs, we examine the underlying access patterns of BLOBs and identify temperature zones that include hot BLOBs that are accessed frequently and warm BLOBs that are accessed far less often. Our overall BLOB storage system is designed to isolate warm BLOBs and enable us to use a specialized warm BLOB storage system, f4. f4 is a new system that lowers the effective-replication-factor of warm BLOBs while remaining fault tolerant and able to support the lower throughput demands. f4 currently stores over 65 PB of logical BLOBs and reduces their effective-replication-factor from 3.6 to either 2.8 or 2.1. f4 provides low latency; is resilient to disk, host, rack, and datacenter failures; and provides sufficient throughput for warm BLOBs.", "A significant fraction of data stored in cloud storage is rarely accessed. This data is referred to as cold data; cost-effective storage for cold data has become a challenge for cloud providers. Pelican is a rack-scale hard-disk based storage unit designed as the basic building block for exabyte scale storage for cold data. In Pelican, server, power, cooling and interconnect bandwidth resources are provisioned by design to support cold data workloads; this right-provisioning significantly reduces Pelican's total cost of ownership compared to traditional disk-based storage. Resource right-provisioning in Pelican means only 8% of the drives can be concurrently spinning. This introduces complex resource management to be handled by the Pelican storage stack.
Resource restrictions are expressed as constraints over the hard drives. The data layout and IO scheduling ensures that these constraints are not violated. We evaluate the performance of a prototype Pelican, and compare against a traditional resource overprovisioned storage rack using a cross-validated simulator. We show that compared to this overprovisioned storage rack Pelican performs well for cold workloads, providing high throughput with acceptable latency." ] }
1509.07821
2148498187
This paper introduces the Wave Transactional Filesystem (WTF), a novel, transactional, POSIX-compatible filesystem based on a new file slicing API that enables efficient file transformations. WTF provides transactional access to a distributed filesystem, eliminating the possibility of inconsistencies across multiple files. Further, the file slicing API enables applications to construct files from the contents of other files without having to rewrite or relocate data. Combined, these enable a new class of high-performance applications. Experiments show that WTF can qualitatively outperform the industry-standard HDFS distributed filesystem, up to a factor of four in a sorting benchmark, by reducing I/O costs. Microbenchmarks indicate that the new features of WTF impose only a modest overhead on top of the POSIX-compatible API.
Transactional filesystems enable applications to offload much of the hard work relating to update consistency and durability to the filesystem. The QuickSilver operating system shows that transactions across the filesystem simplify application development @cite_11 . Further work showed that transactions could be easily added to LFS, exploiting properties of the already-log-structured data to simplify the design @cite_9 . Valor @cite_7 builds transaction support into the Linux kernel by interposing a lock manager between the kernel's VFS calls and existing VFS implementations. In contrast to the transactions provided by WTF, and the underlying HyperDex transactions, these systems adopt traditional pessimistic locking techniques that hinder concurrency.
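A minimal sketch of the pessimistic approach these systems take: every transaction must acquire a lock before touching a file, so a second transaction is forced to wait even when the accesses might not truly conflict. The `LockManager` below is an illustration of the general technique, not any of these systems' actual lock protocol.

```python
# Sketch of pessimistic locking: locks are taken up front, before
# any access, which serializes transactions that touch the same file.
# This is an illustration, not Valor's or QuickSilver's actual design.

import threading

class LockManager:
    def __init__(self):
        self.locks = {}                # path -> holding transaction
        self.guard = threading.Lock()  # protects the lock table

    def acquire(self, txn, path):
        with self.guard:
            holder = self.locks.get(path)
            if holder is not None and holder != txn:
                return False  # a real system would block here
            self.locks[path] = txn
            return True

    def release(self, txn, path):
        with self.guard:
            if self.locks.get(txn and path) is None:
                pass
            if self.locks.get(path) == txn:
                del self.locks[path]

lm = LockManager()
assert lm.acquire("t1", "/a")
assert not lm.acquire("t2", "/a")  # t2 must wait: lost concurrency
lm.release("t1", "/a")
assert lm.acquire("t2", "/a")
```

The returned `False` models the blocking that hinders concurrency: t2 makes no progress until t1 releases, regardless of whether the two transactions would actually have conflicted.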
{ "cite_N": [ "@cite_9", "@cite_7", "@cite_11" ], "mid": [ "2151843600", "195676583", "2097589646" ], "abstract": [ "The design and implementation of a transaction manager embedded in a log-structured file system are described. Measurements indicate that transaction support on a log-structured file system offers a 10% performance improvement over transaction support on a conventional read-optimized file system. When the transaction manager is embedded in the log-structured file system, the resulting performance is comparable to that of a more traditional, user-level system. The performance results also indicate that embedding transactions in the file system need not impact the performance of nontransaction applications.", "Transactions offer a powerful data-access method used in many databases today through a specialized query API. User applications, however, use a different file-access API (POSIX) which does not offer transactional guarantees. Applications using transactions can become simpler, smaller, easier to develop and maintain, more reliable, and more secure. We explored several techniques for providing transactional file access with minimal impact on existing programs. Our first prototype was a standalone kernel component within the Linux kernel, but it complicated the kernel considerably and duplicated some of Linux's existing facilities. Our second prototype was all in user level, and while it was easier to develop, it suffered from high overheads. In this paper we describe our latest prototype and the evolution that led to it. We implemented a transactional file API inside the Linux kernel which integrates easily and seamlessly with existing kernel facilities. This design is easier to maintain, simpler to integrate into existing OSs, and efficient. We evaluated our prototype and other systems under a variety of workloads.
We demonstrate that our prototype's performance is better than comparable systems and comes close to the theoretical lower bound for a log-based transaction manager.", "All programs in the QuickSilver distributed system behave atomically with respect to their updates to permanent data. Operating system support for transactions provides the framework required to support this, as well as a mechanism that unifies reclamation of resources after failures or normal process termination. This paper evaluates the use of transactions for these purposes in a general purpose operating system and presents some of the lessons learned from our experience with a complete running system based on transactions. Examples of how transactions are used in QuickSilver and measurements of their use demonstrate that the transaction mechanism provides an efficient and powerful means for solving many of the problems introduced by operating system extensibility and distribution." ] }
1509.07821
2148498187
This paper introduces the Wave Transactional Filesystem (WTF), a novel, transactional, POSIX-compatible filesystem based on a new file slicing API that enables efficient file transformations. WTF provides transactional access to a distributed filesystem, eliminating the possibility of inconsistencies across multiple files. Further, the file slicing API enables applications to construct files from the contents of other files without having to rewrite or relocate data. Combined, these enable a new class of high-performance applications. Experiments show that WTF can qualitatively outperform the industry-standard HDFS distributed filesystem, up to a factor of four in a sorting benchmark, by reducing I/O costs. Microbenchmarks indicate that the new features of WTF impose only a modest overhead on top of the POSIX-compatible API.
Optimistic concurrency control schemes often enable more concurrency for lightly-contended workloads. PerDiS FS adopts an optimistic concurrency control scheme that relies upon external components to reconcile concurrent changes to a file @cite_19 . This allows users and applications to concurrently work on the same file; according to the authors, the most commonly adopted technique is selecting one version and throwing the rest away. Liskov and Rodrigues show that much of the overhead of a serializable filesystem can be avoided by running read-only transactions in the recent past, and employing an optimistic protocol for read-write transactions @cite_37 . WTF builds on top of HyperDex's optimistic concurrency and supports operations such as append that avoid creating conflicts between concurrent transactions.
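The optimistic pattern above (read freely, validate at commit time) can be sketched with per-key versions: a writer that raced with another commit aborts and retries instead of holding locks while it works. The class, its versioning scheme, and the method names are illustrative assumptions, not HyperDex's actual protocol.

```python
# Sketch of optimistic concurrency control: reads record a version,
# and commit succeeds only if that version is still current.
# Illustration only, not HyperDex's real validation protocol.

class OptimisticStore:
    def __init__(self):
        self.data, self.version = {}, {}

    def read(self, key):
        return self.data.get(key), self.version.get(key, 0)

    def commit(self, key, value, seen_version):
        # validate: abort if someone else committed since our read
        if self.version.get(key, 0) != seen_version:
            return False
        self.data[key] = value
        self.version[key] = seen_version + 1
        return True

s = OptimisticStore()
_, v = s.read("f")
assert s.commit("f", "x", v)      # first writer wins
assert not s.commit("f", "y", v)  # stale writer aborts and retries
```

Under light contention most validations succeed, so transactions proceed without ever waiting on one another; only a genuine conflict costs a retry.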
{ "cite_N": [ "@cite_19", "@cite_37" ], "mid": [ "2056012812", "2058068178" ], "abstract": [ "Companies cooperating in the framework of a virtual enterprise have increasing demands for systems on which to base applications for their particular environment: groups of workers on distant independent LANs. In this paper, we present a transactional file system for a distributed persistent store designed to support cooperative engineering applications. The system integrates techniques, such as optimistic consistency protocols and versioning, tailored to provide efficient sharing of data, between users (at LAN scale) and between companies (at WAN scale).", "Transactions ensure simple and correct handling of concurrency and failures but are often considered too expensive for use in file systems. This paper argues that performance is not a barrier to running transactions. It presents a simple mechanism that substantially lowers the cost of read-only transactions (which constitute the bulk of operations in a file system). The approach is inexpensive: it requires modest additional storage, but storage is cheap. It causes read-only transactions to run slightly in the past, but guarantees that they nevertheless see a consistent state." ] }
1509.07821
2148498187
This paper introduces the Wave Transactional Filesystem (WTF), a novel, transactional, POSIX-compatible filesystem based on a new file slicing API that enables efficient file transformations. WTF provides transactional access to a distributed filesystem, eliminating the possibility of inconsistencies across multiple files. Further, the file slicing API enables applications to construct files from the contents of other files without having to rewrite or relocate data. Combined, these enable a new class of high-performance applications. Experiments show that WTF can qualitatively outperform the industry-standard HDFS distributed filesystem, up to a factor of four in a sorting benchmark, by reducing I/O costs. Microbenchmarks indicate that the new features of WTF impose only a modest overhead on top of the POSIX-compatible API.
WTF is not the first system to employ a transactional database as part of its design. Inversion @cite_15 builds on PostgreSQL to maintain a complete filesystem. KBDBFS @cite_7 and Amino @cite_3 both build on top of BerkeleyDB; the former is an in-kernel implementation of BerkeleyDB, while the latter eschews that complexity and takes a performance hit with a userspace implementation. WTF differs from these designs in that it stores only the metadata in the transactional data store; data is stored elsewhere and is not managed by the transactional component. Further, its design ensures that transactions on metadata are sufficient to provide filesystem-level transactions.
{ "cite_N": [ "@cite_15", "@cite_3", "@cite_7" ], "mid": [ "41850619", "2141423292", "195676583" ], "abstract": [ "This paper describes the design, implementation, and performance of the Inversion file system. Inversion provides a rich set of services to file system users, and manages a large tertiary data store. Inversion is built on top of the POSTGRES database system, and takes advantage of low-level DBMS services to provide transaction protection, fine-grained time travel, and fast crash recovery for user files and file system metadata. Inversion gets between 30 and 80 of the throughput of ULTRIX NFS backed by a non-volatile RAM cache. In addition, Inversion allows users to provide code for execution directly in the file system manager, yielding performance as much as seven times better than that of ULTRIX NFS.", "An organization's data is often its most valuable asset, but today's file systems provide few facilities to ensure its safety. Databases, on the other hand, have long provided transactions. Transactions are useful because they provide atomicity, consistency, isolation, and durability (ACID). Many applications could make use of these semantics, but databases have a wide variety of nonstandard interfaces. For example, applications like mail servers currently perform elaborate error handling to ensure atomicity and consistency, because it is easier than using a DBMS. A transaction-oriented programming model eliminates complex error-handling code because failed operations can simply be aborted without side effects. We have designed a file system that exports ACID transactions to user-level applications, while preserving the ubiquitous and convenient POSIX interface. In our prototype ACID file system, called Amino, updated applications can protect arbitrary sequences of system calls within a transaction. Unmodified applications operate without any changes, but each system call is transaction protected. 
We also built a recoverable memory library with support for nested transactions to allow applications to keep their in-memory data structures consistent with the file system. Our performance evaluation shows that ACID semantics can be added to applications with acceptable overheads. When Amino adds atomicity, consistency, and isolation functionality to an application, it performs close to Ext3. Amino achieves durability up to 46% faster than Ext3, thanks to improved locality.", "Transactions offer a powerful data-access method used in many databases today through a specialized query API. User applications, however, use a different file-access API (POSIX) which does not offer transactional guarantees. Applications using transactions can become simpler, smaller, easier to develop and maintain, more reliable, and more secure. We explored several techniques how to provide transactional file access with minimal impact on existing programs. Our first prototype was a standalone kernel component within the Linux kernel, but it complicated the kernel considerably and duplicated some of Linux's existing facilities. Our second prototype was all in user level, and while it was easier to develop, it suffered from high overheads. In this paper we describe our latest prototype and the evolution that led to it. We implemented a transactional file API inside the Linux kernel which integrates easily and seamlessly with existing kernel facilities. This design is easier to maintain, simpler to integrate into existing OSs, and efficient. We evaluated our prototype and other systems under a variety of workloads. We demonstrate that our prototype's performance is better than comparable systems and comes close to the theoretical lower bound for a log-based transaction manager." ] }
1509.07821
2148498187
This paper introduces the Wave Transactional Filesystem (WTF), a novel, transactional, POSIX-compatible filesystem based on a new file slicing API that enables efficient file transformations. WTF provides transactional access to a distributed filesystem, eliminating the possibility of inconsistencies across multiple files. Further, the file slicing API enables applications to construct files from the contents of other files without having to rewrite or relocate data. Combined, these enable a new class of high-performance applications. Experiments show that WTF can qualitatively outperform the industry-standard HDFS distributed filesystem, up to a factor of four in a sorting benchmark, by reducing I/O costs. Microbenchmarks indicate that the new features of WTF impose only a modest overhead on top of the POSIX-compatible API.
Stasis @cite_25 makes the argument that no single design supports all use cases, and that transactional components should be building blocks for applications. WTF's approach is similar: HyperDex's transactions are used as a base primitive for managing WTF's state, and WTF in turn supports a transactional API. Applications built on WTF can use this API to achieve their own transactional behavior.
{ "cite_N": [ "@cite_25" ], "mid": [ "2170892031" ], "abstract": [ "An increasing range of applications requires robust support for atomic, durable and concurrent transactions. Databases provide the default solution, but force applications to interact via SQL and to forfeit control over data layout and access mechanisms. We argue there is a gap between DBMSs and file systems that limits designers of data-oriented applications. Stasis is a storage framework that incorporates ideas from traditional write-ahead logging algorithms and file systems. It provides applications with flexible control over data structures, data layout, robustness, and performance. Stasis enables the development of unforeseen variants on transactional storage by generalizing write-ahead logging algorithms. Our partial implementation of these ideas already provides specialized (and cleaner) semantics to applications. We evaluate the performance of a traditional transactional storage system based on Stasis, and show that it performs favorably relative to existing systems. We present examples that make use of custom access methods, modified buffer manager semantics, direct log file manipulation, and LSN-free pages. These examples facilitate sophisticated performance optimizations such as zero-copy I/O. These extensions are composable, easy to implement and significantly improve performance." ] }
1509.07473
2953040449
With the rapid proliferation of smart mobile devices, users now take millions of photos every day. These include large numbers of clothing and accessory images. We would like to answer questions like 'What outfit goes well with this pair of shoes?' To answer these types of questions, one has to go beyond learning visual similarity and learn a visual notion of compatibility across categories. In this paper, we propose a novel learning framework to help answer these types of questions. The main idea of this framework is to learn a feature transformation from images of items into a latent space that expresses compatibility. For the feature transformation, we use a Siamese Convolutional Neural Network (CNN) architecture, where training examples are pairs of items that are either compatible or incompatible. We model compatibility based on co-occurrence in large-scale user behavior data; in particular co-purchase data from Amazon.com. To learn cross-category fit, we introduce a strategic method to sample training data, where pairs of items are heterogeneous dyads, i.e., the two elements of a pair belong to different high-level categories. While this approach is applicable to a wide variety of settings, we focus on the representative problem of learning compatible clothing style. Our results indicate that the proposed framework is capable of learning semantic information about visual style and is able to generate outfits of clothes, with items from different categories, that go well together.
Metric learning is used to learn a high-dimensional embedding space. This research field is wide, and we refer to the work of Kulis @cite_10 for a comprehensive survey. A different approach is the use of attributes that assign semantic labels to specific dimensions or regions in the feature space. An example is Whittle search, which uses relative attributes to guide product search @cite_7 . In contrast with these works, we want to learn a feature transformation from the input image to a similarity metric that does not rely on discrete and pre-defined attributes.
{ "cite_N": [ "@cite_10", "@cite_7" ], "mid": [ "2121949863", "2033365921" ], "abstract": [ "The metric learning problem is concerned with learning a distance function tuned to a particular task, and has been shown to be useful when used in conjunction with nearest-neighbor methods and other techniques that rely on distances or similarities. This survey presents an overview of existing research in metric learning, including recent progress on scaling to high-dimensional feature spaces and to data sets with an extremely large number of data points. A goal of the survey is to present as unified as possible a framework under which existing research on metric learning can be cast. The first part of the survey focuses on linear metric learning approaches, mainly concentrating on the class of Mahalanobis distance learning methods. We then discuss nonlinear metric learning approaches, focusing on the connections between the nonlinear and linear approaches. Finally, we discuss extensions of metric learning, as well as applications to a variety of problems in computer vision, text analysis, program analysis, and multimedia. Full text available at: http: dx.doi.org 10.1561 2200000019", "We propose a novel mode of feedback for image search, where a user describes which properties of exemplar images should be adjusted in order to more closely match his her mental model of the image(s) sought. For example, perusing image results for a query “black shoes”, the user might state, “Show me shoe images like these, but sportier.” Offline, our approach first learns a set of ranking functions, each of which predicts the relative strength of a nameable attribute in an image (‘sportiness’, ‘furriness’, etc.). At query time, the system presents an initial set of reference images, and the user selects among them to provide relative attribute feedback. 
Using the resulting constraints in the multi-dimensional attribute space, our method updates its relevance function and re-ranks the pool of images. This procedure iterates using the accumulated constraints until the top ranked images are acceptably close to the user's envisioned target. In this way, our approach allows a user to efficiently “whittle away” irrelevant portions of the visual feature space, using semantic language to precisely communicate her preferences to the system. We demonstrate the technique for refining image search for people, products, and scenes, and show it outperforms traditional binary relevance feedback in terms of search speed and accuracy." ] }
1509.07473
2953040449
With the rapid proliferation of smart mobile devices, users now take millions of photos every day. These include large numbers of clothing and accessory images. We would like to answer questions like 'What outfit goes well with this pair of shoes?' To answer these types of questions, one has to go beyond learning visual similarity and learn a visual notion of compatibility across categories. In this paper, we propose a novel learning framework to help answer these types of questions. The main idea of this framework is to learn a feature transformation from images of items into a latent space that expresses compatibility. For the feature transformation, we use a Siamese Convolutional Neural Network (CNN) architecture, where training examples are pairs of items that are either compatible or incompatible. We model compatibility based on co-occurrence in large-scale user behavior data; in particular co-purchase data from Amazon.com. To learn cross-category fit, we introduce a strategic method to sample training data, where pairs of items are heterogeneous dyads, i.e., the two elements of a pair belong to different high-level categories. While this approach is applicable to a wide variety of settings, we focus on the representative problem of learning compatible clothing style. Our results indicate that the proposed framework is capable of learning semantic information about visual style and is able to generate outfits of clothes, with items from different categories, that go well together.
In this stream of research, the closest work to ours is that of Bell and Bala @cite_12 . Although they focus on learning correspondences between photos of objects in context and iconic photos, they also discover a space that represents some notion of style. However, their notion of style is based only on visual similarity. Our work builds upon this approach but extends it, because we want to learn a notion of style that goes beyond visual similarity. In particular, we want to learn the compatibility of bundles of items from different categories. Since this compatibility cannot be reduced to visual similarity alone, we face a harder learning problem. To learn this compatibility we propose a novel strategic sampling approach for the training data, based on heterogeneous dyads of co-occurrences. To compare our framework to this approach, we include naïve sampling as a baseline in our evaluations. In particular, among the architectures presented in @cite_12 , we choose architecture B as the baseline, because it gives the best results for cross-category search.
{ "cite_N": [ "@cite_12" ], "mid": [ "2021354639" ], "abstract": [ "Popular sites like Houzz, Pinterest, and LikeThatDecor, have communities of users helping each other answer questions about products in images. In this paper we learn an embedding for visual search in interior design. Our embedding contains two different domains of product images: products cropped from internet scenes, and products in their iconic form. With such a multi-domain embedding, we demonstrate several applications of visual search including identifying products in scenes and finding stylistically similar products. To obtain the embedding, we train a convolutional neural network on pairs of images. We explore several training architectures including re-purposing object classifiers, using siamese networks, and using multitask learning. We evaluate our search quantitatively and qualitatively and demonstrate high quality results for search across multiple visual domains, enabling new applications in interior design." ] }
1509.07473
2953040449
With the rapid proliferation of smart mobile devices, users now take millions of photos every day. These include large numbers of clothing and accessory images. We would like to answer questions like 'What outfit goes well with this pair of shoes?' To answer these types of questions, one has to go beyond learning visual similarity and learn a visual notion of compatibility across categories. In this paper, we propose a novel learning framework to help answer these types of questions. The main idea of this framework is to learn a feature transformation from images of items into a latent space that expresses compatibility. For the feature transformation, we use a Siamese Convolutional Neural Network (CNN) architecture, where training examples are pairs of items that are either compatible or incompatible. We model compatibility based on co-occurrence in large-scale user behavior data; in particular co-purchase data from Amazon.com. To learn cross-category fit, we introduce a strategic method to sample training data, where pairs of items are heterogeneous dyads, i.e., the two elements of a pair belong to different high-level categories. While this approach is applicable to a wide variety of settings, we focus on the representative problem of learning compatible clothing style. Our results indicate that the proposed framework is capable of learning semantic information about visual style and is able to generate outfits of clothes, with items from different categories, that go well together.
There is a growing body of research that aims at learning a notion of style from images. For example, Murillo et al. @cite_1 consider photos of groups of people to learn which groups are more likely to socialize with one another. This implies learning a distance metric between images. However, they require manually specified styles, called 'urban tribes'. Similarly, Bossard et al. @cite_20 , who use a random forest approach to classify the style of clothing images, require pre-specified classes of style. In contrast, our learning framework learns a high-dimensional space of style that does not require specified classes of styles. In a different approach, Vittayakorn et al. @cite_17 learn outfit similarity based on specific descriptors for color, texture and shape. While they are able to retrieve outfits similar to a query image, they do not learn compatibility between parts of outfits and, as opposed to our work, are not able to build outfits from compatible clothing items.
{ "cite_N": [ "@cite_1", "@cite_20", "@cite_17" ], "mid": [ "2093254778", "", "2021541806" ], "abstract": [ "The explosive growth in image sharing via social networks has produced exciting opportunities for the computer vision community in areas including face, text, product and scene recognition. In this work we turn our attention to group photos of people and ask the question: what can we determine about the social subculture or urban tribe to which these people belong? To this end, we propose a framework employing low- and mid-level features to capture the visual attributes distinctive to a variety of urban tribes. We proceed in a semi-supervised manner, employing a metric that allows us to extrapolate from a small number of pairwise image similarities to induce a set of groups that visually correspond to familiar urban tribes such as biker, hipster or goth. Automatic recognition of such information in group photos offers the potential to improve recommendation services, context sensitive advertising and other social analysis applications. We present promising preliminary experimental results that demonstrate our ability to categorize group photos in a socially meaningful manner.", "", "Clothing and fashion are an integral part of our everyday lives. In this paper we present an approach to studying fashion both on the runway and in more real-world settings, computationally, and at large scale, using computer vision. Our contributions include collecting a new runway dataset, designing features suitable for capturing outfit appearance, collecting human judgments of outfit similarity, and learning similarity functions on the features to mimic those judgments. We provide both intrinsic and extrinsic evaluations of our learned models to assess performance on outfit similarity prediction as well as season, year, and brand estimation. An example application tracks visual trends as runway fashions filter down to \"real way\" street fashions." ] }
1509.07892
2275975620
Classifier evasion consists in finding for a given instance @math the nearest instance @math such that the classifier predictions of @math and @math are different. We present two novel algorithms for systematically computing evasions for tree ensembles such as boosted trees and random forests. Our first algorithm uses a Mixed Integer Linear Program solver and finds the optimal evading instance under an expressive set of constraints. Our second algorithm trades off optimality for speed by using symbolic prediction, a novel algorithm for fast finite differences on tree ensembles. On a digit recognition task, we demonstrate that both gradient boosted trees and random forests are extremely susceptible to evasions. Finally, we harden a boosted tree model without loss of predictive accuracy by augmenting the training set of each boosting round with evading instances, a technique we call adversarial boosting.
We contrast our paper with a few related papers on deep neural networks, as these are the closest in spirit to the ideas developed here. @cite_21 hypothesize that evasion in practical deep neural networks is possible because these models are locally linear. However, this paper demonstrates that despite their extreme non-linearity, boosted trees are even more susceptible to evasion than neural networks. On the hardening side, @cite_21 introduce a regularization penalty term which simulates the presence of evading instances at training time, and show limited improvements in both test accuracy and robustness. @cite_8 show preliminary results by augmenting deep neural networks with a pre-filtering layer based on a form of contractive auto-encoding. Most recently, @cite_18 shows the strong positive effect of distillation on evasion robustness for neural networks. In this paper, we demonstrate a large increase in robustness for a boosted tree model hardened by adversarial boosting. We empirically show that our method does not degrade accuracy and creates the most robust model in our benchmark problem.
{ "cite_N": [ "@cite_18", "@cite_21", "@cite_8" ], "mid": [ "2174868984", "1945616565", "" ], "abstract": [ "Deep learning algorithms have been shown to perform extremely well on many classical machine learning problems. However, recent studies have shown that deep learning, like other machine learning techniques, is vulnerable to adversarial samples: inputs crafted to force a deep neural network (DNN) to provide adversary-selected outputs. Such attacks can seriously undermine the security of the system supported by the DNN, sometimes with devastating consequences. For example, autonomous vehicles can be crashed, illicit or illegal content can bypass content filters, or biometric authentication systems can be manipulated to allow improper access. In this work, we introduce a defensive mechanism called defensive distillation to reduce the effectiveness of adversarial samples on DNNs. We analytically investigate the generalizability and robustness properties granted by the use of defensive distillation when training DNNs. We also empirically study the effectiveness of our defense mechanisms on two DNNs placed in adversarial settings. The study shows that defensive distillation can reduce effectiveness of sample creation from 95% to less than 0.5% on a studied DNN. Such dramatic gains can be explained by the fact that distillation leads gradients used in adversarial sample creation to be reduced by a factor of 10^30. We also find that distillation increases the average minimum number of features that need to be modified to create adversarial samples by about 800% on one of the DNNs we tested.", "Several machine learning models, including neural networks, consistently misclassify adversarial examples---inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. 
Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset.", "" ] }
1509.07845
2226011429
Complex event retrieval is a challenging research problem, especially when no training videos are available. An alternative to collecting training videos is to train a large semantic concept bank a priori. Given a text description of an event, event retrieval is performed by selecting concepts linguistically related to the event description and fusing the concept responses on unseen videos. However, defining an exhaustive concept lexicon and pre-training it requires vast computational resources. Therefore, recent approaches automate concept discovery and training by leveraging large amounts of weakly annotated web data. Compact visually salient concepts are automatically obtained by the use of concept pairs or, more generally, n-grams. However, not all visually salient n-grams are necessarily useful for an event query--some combinations of concepts may be visually compact but irrelevant--and this drastically affects performance. We propose an event retrieval algorithm that constructs pairs of automatically discovered concepts and then prunes those concepts that are unlikely to be helpful for retrieval. Pruning depends both on the query and on the specific video instance being evaluated. Our approach also addresses calibration and domain adaptation issues that arise when applying concept detectors to unseen videos. We demonstrate large improvements over other vision based systems on the TRECVID MED 13 dataset.
Large-scale video retrieval commonly employs a concept-based video representation @cite_2 @cite_3 @cite_4 @cite_17 , especially when only a few or no training examples of the events are available. In this setting, complex events are represented in terms of a large set of concepts that are either event-driven (generated once the event description is known) @cite_12 @cite_30 @cite_1 or pre-defined @cite_22 @cite_11 @cite_5 . A test query description is mapped to a set of concepts whose detectors are then applied to videos to perform retrieval. However, methods based on pre-defined concepts need to train an exhaustive set of concept detectors a priori, or the semantic gap between the query description and the concept database might be too large. This is computationally expensive and currently infeasible for real-world video retrieval systems. Instead, in this paper, given the textual description of the event to be retrieved, our approach leverages web image data to discover event-driven concepts and train detectors that are relevant to this specific event.
{ "cite_N": [ "@cite_30", "@cite_11", "@cite_4", "@cite_22", "@cite_1", "@cite_3", "@cite_2", "@cite_5", "@cite_12", "@cite_17" ], "mid": [ "2127944900", "2112856309", "2018668305", "2021899621", "2002657139", "2060566762", "2083598512", "1982795953", "2048261712", "2043727559" ], "abstract": [ "We consider automated detection of events in video without the use of any visual training examples. A common approach is to represent videos as classification scores obtained from a vocabulary of pre-trained concept classifiers. Where others construct the vocabulary by training individual concept classifiers, we propose to train classifiers for combination of concepts composed by Boolean logic operators. We call these concept combinations composite concepts and contribute an algorithm that automatically discovers them from existing video-level concept annotations. We discover composite concepts by jointly optimizing the accuracy of concept classifiers and their effectiveness for detecting events. We demonstrate that by combining concepts into composite concepts, we can train more accurate classifiers for the concept vocabulary, which leads to improved zero-shot event detection. Moreover, we demonstrate that by using different logic operators, namely \"AND\", \"OR\", we discover different types of composite concepts, which are complementary for zero-shot event detection. We perform a search for 20 events in 41K web videos from two test sets of the challenging TRECVID Multimedia Event Detection 2013 corpus. The experiments demonstrate the superior performance of the discovered composite concepts, compared to present-day alternatives, for zero-shot event detection.", "Concept-based video representation has proven to be effective in complex event detection. However, existing methods either manually design concepts or directly adopt concept libraries not specifically designed for events. 
In this paper, we propose to build Concept Bank, the largest concept library consisting of 4,876 concepts specifically designed to cover 631 real-world events. To construct the Concept Bank, we first gather a comprehensive event collection from WikiHow, a collaborative writing project that aims to build the world’s largest manual for any possible How-To event. For each event, we then search Flickr and discover relevant concepts from the tags of the returned images. We train a Multiple Kernel Linear SVM for each discovered concept as a concept detector in Concept Bank. We organize the concepts into a five-layer tree structure, in which the higher-level nodes correspond to the event categories while the leaf nodes are the event-specific concepts discovered for each event. Based on such tree ontology, we develop a semantic matching method to select relevant concepts for each textual event query, and then apply the corresponding concept detectors to generate concept-based video representations. We use TRECVID Multimedia Event Detection 2013 and Columbia Consumer Video open source event definitions and videos as our test sets and show very promising results on two video event detection tasks: event modeling over concept space and zero-shot event retrieval. To the best of our knowledge, this is the largest concept library covering the largest number of real-world events.", "We propose semantic model vectors, an intermediate level semantic representation, as a basis for modeling and detecting complex events in unconstrained real-world videos, such as those from YouTube. The semantic model vectors are extracted using a set of discriminative semantic classifiers, each being an ensemble of SVM models trained from thousands of labeled web images, for a total of 280 generic concepts. Our study reveals that the proposed semantic model vectors representation outperforms-and is complementary to-other low-level visual descriptors for video event modeling. 
We hence present an end-to-end video event detection system, which combines semantic model vectors with other static or dynamic visual descriptors, extracted at the frame, segment, or full clip level. We perform a comprehensive empirical study on the 2010 TRECVID Multimedia Event Detection task (http://www.nist.gov/itl/iad/mig/med10.cfm), which validates the semantic model vectors representation not only as the best individual descriptor, outperforming state-of-the-art global and local static features as well as spatio-temporal HOG and HOF descriptors, but also as the most compact. We also study early and late feature fusion across the various approaches, leading to a 15% performance boost and an overall system performance of 0.46 mean average precision. In order to promote further research in this direction, we made our semantic model vectors for the TRECVID MED 2010 set publicly available for the community to use (http://www1.cs.columbia.edu/mmerler/SMV.html).", "Current state-of-the-art systems for visual content analysis require large training sets for each class of interest, and performance degrades rapidly with fewer examples. In this paper, we present a general framework for the zero-shot learning problem of performing high-level event detection with no training exemplars, using only textual descriptions. This task goes beyond the traditional zero-shot framework of adapting a given set of classes with training data to unseen classes. We leverage video and image collections with free-form text descriptions from widely available web sources to learn a large bank of concepts, in addition to using several off-the-shelf concept detectors, speech, and video text for representing videos. We utilize natural language processing technologies to generate event description features. The extracted features are then projected to a common high-dimensional space using text expansion, and similarity is computed in this space. 
We present extensive experimental results on the large TRECVID MED [26] corpus to demonstrate our approach. Our results show that the proposed concept detection methods significantly outperform current attribute classifiers such as Classemes [34], ObjectBank [21], and SUN attributes [28]. Further, we find that fusion, both within as well as between modalities, is crucial for optimal performance.", "We propose to use action, scene and object concepts as semantic attributes for classification of video events in InTheWild content, such as YouTube videos. We model events using a variety of complementary semantic attribute features developed in a semantic concept space. Our contribution is to systematically demonstrate the advantages of this concept-based event representation (CBER) in applications of video event classification and understanding. Specifically, CBER has better generalization capability, which enables recognizing events with a few training examples. In addition, CBER makes it possible to recognize a novel event without training examples (i.e., zero-shot learning). We further show our proposed enhanced event model can further improve the zero-shot learning. Furthermore, CBER provides a straightforward way for event recounting understanding. We use the TRECVID Multimedia Event Detection (MED11) open source event definitions and datasets as our test bed and show results on over 1400 hours of videos.", "We aim to query web video for complex events using only a handful of video query examples, where the standard approach learns a ranker from hundreds of examples. We consider a semantic signature representation, consisting of off-the-shelf concept detectors, to capture the variance in semantic appearance of events. Since it is unknown what similarity metric and query fusion to use in such an event retrieval setting, we perform three experiments on unconstrained web videos from the TRECVID event detection task. 
It reveals that: retrieval with semantic signatures using normalized correlation as similarity metric outperforms a low-level bag-of-words alternative, multiple queries are best combined using late fusion with an average operator, and event retrieval is preferred over event classification when less than eight positive video examples are available.", "We address the problem of classifying complex videos based on their content. A typical approach to this problem is performing the classification using semantic attributes, commonly termed concepts, which occur in the video. In this paper, we propose a contextual approach to video classification based on Generalized Maximum Clique Problem (GMCP) which uses the co-occurrence of concepts as the context model. To be more specific, we propose to represent a class based on the co-occurrence of its concepts and classify a video based on matching its semantic co-occurrence pattern to each class representation. We perform the matching using GMCP which finds the strongest clique of co-occurring concepts in a video. We argue that, in principal, the co-occurrence of concepts yields a richer representation of a video compared to most of the current approaches. Additionally, we propose a novel optimal solution to GMCP based on Mixed Binary Integer Programming (MBIP). The evaluations show our approach, which opens new opportunities for further research in this direction, outperforms several well established video classification methods.", "Recent research in video retrieval has been successful at finding videos when the query consists of tens or hundreds of sample relevant videos for training supervised models. Instead, we investigate unsupervised zero-shot retrieval where no training videos are provided: a query consists only of a text statement. 
For retrieval, we use text extracted from images in the videos, text recognized in the speech of its audio track, as well as automatically detected semantically meaningful visual video concepts identified with widely varying confidence in the videos. In this work we introduce a new method for automatically identifying relevant concepts given a text query using the Markov Random Field (MRF) retrieval framework. We use source expansion to build rich textual representations of semantic video concepts from large external sources such as the web. We find that concept-based retrieval significantly outperforms text based approaches in recall. Using an evaluation derived from the TRECVID MED'11 track, we present early results that an approach using multi-modal fusion can compensate for inadequacies in each modality, resulting in substantial effectiveness gains. With relevance feedback, our approach provides additional improvements of over 50%.", "Analysis and detection of complex events in videos require a semantic representation of the video content. Existing video semantic representation methods typically require users to pre-define an exhaustive concept lexicon and manually annotate the presence of the concepts in each video, which is infeasible for real-world video event detection problems. In this paper, we propose an automatic semantic concept discovery scheme by exploiting Internet images and their associated tags. Given a target event and its textual descriptions, we crawl a collection of images and their associated tags by performing text based image search using the noun and verb pairs extracted from the event textual descriptions. The system first identifies the candidate concepts for an event by measuring whether a tag is a meaningful word and visually detectable. Then a concept visual model is built for each candidate concept using a SVM classifier with probabilistic output. 
Finally, the concept models are applied to generate concept based video representations. We use the TRECVID Multimedia Event Detection (MED) 2013 as our video test set and crawl 400K Flickr images to automatically discover 2,000 visual concepts. We show significant performance gains of the proposed concept discovery method over different video event detection tasks including supervised event modeling over concept space and semantic based zero-shot retrieval without training examples. Importantly, we show the proposed method of automatic concept discovery outperforms other well-known concept library construction approaches such as Classemes and ImageNet by a large margin (228%) in zero-shot event retrieval. Finally, subjective evaluation by humans also confirms clear superiority of the proposed method in discovering concepts for event representation.", "Multimedia event detection has drawn a lot of attention in recent years. Given a recognized event, in this paper, we conduct a pilot study of the multimedia event recounting problem, which answers the question why this video is recognized as this event, i.e. what evidence this decision is made on. In order to provide a semantic recounting of the multimedia event, we adopt a concept-based event representation for learning a discriminative event model. Then, we present a recounting approach that exactly recovers the contribution of semantic evidence to the event classification decision. This approach can be applied on any additive discriminative classifiers. The promising result is shown on the MED11 dataset that contains 15 events in thousands of YouTube-like videos." ] }
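Several of the abstracts above hinge on the same step: per-frame concept detector scores are aggregated into a single concept-based representation per video. As a minimal sketch of that aggregation (the pooling choices and the toy scores below are illustrative assumptions, not the pipeline of any cited system):

```python
import numpy as np

def video_concept_representation(frame_scores, pooling="max"):
    """Aggregate per-frame concept detector scores of shape
    (n_frames, n_concepts) into one video-level concept vector."""
    frame_scores = np.asarray(frame_scores, dtype=float)
    if pooling == "max":
        return frame_scores.max(axis=0)   # strongest evidence per concept
    if pooling == "avg":
        return frame_scores.mean(axis=0)  # average evidence per concept
    raise ValueError("unknown pooling: " + pooling)

# Toy example: 3 frames, 4 concepts, detector scores in [0, 1].
scores = [[0.1, 0.8, 0.0, 0.3],
          [0.6, 0.2, 0.1, 0.3],
          [0.2, 0.9, 0.0, 0.3]]
video_vec = video_concept_representation(scores, pooling="max")
```

The resulting vector can then be fed to an event classifier (supervised setting) or compared against concepts selected from the event query (zero-shot setting).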
1509.07845
2226011429
Complex event retrieval is a challenging research problem, especially when no training videos are available. An alternative to collecting training videos is to train a large semantic concept bank a priori. Given a text description of an event, event retrieval is performed by selecting concepts linguistically related to the event description and fusing the concept responses on unseen videos. However, defining an exhaustive concept lexicon and pre-training it requires vast computational resources. Therefore, recent approaches automate concept discovery and training by leveraging large amounts of weakly annotated web data. Compact visually salient concepts are automatically obtained by the use of concept pairs or, more generally, n-grams. However, not all visually salient n-grams are necessarily useful for an event query--some combinations of concepts may be visually compact but irrelevant--and this drastically affects performance. We propose an event retrieval algorithm that constructs pairs of automatically discovered concepts and then prunes those concepts that are unlikely to be helpful for retrieval. Pruning depends both on the query and on the specific video instance being evaluated. Our approach also addresses calibration and domain adaptation issues that arise when applying concept detectors to unseen videos. We demonstrate large improvements over other vision based systems on the TRECVID MED 13 dataset.
Recently, web (Internet) data has been widely used for knowledge discovery @cite_14 @cite_29 @cite_26 @cite_6 @cite_7 @cite_18 . @cite_33 use web data to weakly label images, and to learn and exploit common-sense relationships. @cite_24 automatically discover attributes from unlabeled Internet images and their associated textual descriptions. @cite_14 describe a system that uses a large amount of weakly labeled web videos for visual event recognition, based on a new distance measure between videos and a new transfer learning method. @cite_20 obtain textual descriptions of videos from the web and learn a multimedia embedding for few-example event recognition. For concept training, given a list of concepts, each corresponding to a word or short phrase, web search is commonly used to construct weakly annotated training sets @cite_12 @cite_6 @cite_26 . We use the concept name as a query to a search engine, and train the concept detector based on the returned images.
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_26", "@cite_33", "@cite_7", "@cite_29", "@cite_6", "@cite_24", "@cite_12", "@cite_20" ], "mid": [ "", "2099501835", "", "1964763677", "", "", "", "1528802670", "2048261712", "2007510844" ], "abstract": [ "", "We propose a visual event recognition framework for consumer videos by leveraging a large amount of loosely labeled web videos (e.g., from YouTube). Observing that consumer videos generally contain large intraclass variations within the same type of events, we first propose a new method, called Aligned Space-Time Pyramid Matching (ASTPM), to measure the distance between any two video clips. Second, we propose a new transfer learning method, referred to as Adaptive Multiple Kernel Learning (A-MKL), in order to 1) fuse the information from multiple pyramid levels and features (i.e., space-time features and static SIFT features) and 2) cope with the considerable variation in feature distributions between videos from two domains (i.e., web video domain and consumer video domain). For each pyramid level and each type of local features, we first train a set of SVM classifiers based on the combined training set from two domains by using multiple base kernels from different kernel types and parameters, which are then fused with equal weights to obtain a prelearned average classifier. In A-MKL, for each event class we learn an adapted target classifier based on multiple base kernels and the prelearned average classifiers from this event class or all the event classes by minimizing both the structural risk functional and the mismatch between data distributions of two domains. Extensive experiments demonstrate the effectiveness of our proposed framework that requires only a small number of labeled consumer videos by leveraging web data. 
We also conduct an in-depth investigation on various aspects of the proposed method A-MKL, such as the analysis on the combination coefficients on the prelearned classifiers, the convergence of the learning algorithm, and the performance variation by using different proportions of labeled consumer videos. Moreover, we show that A-MKL using the prelearned classifiers from all the event classes leads to better performance when compared with A-MKL using the prelearned classifiers only from each individual event class.", "", "We propose NEIL (Never Ending Image Learner), a computer program that runs 24 hours per day and 7 days per week to automatically extract visual knowledge from Internet data. NEIL uses a semi-supervised learning algorithm that jointly discovers common sense relationships (e.g., \"Corolla is a kind of looks similar to Car\", \"Wheel is a part of Car\") and labels instances of the given visual categories. It is an attempt to develop the world's largest visual structured knowledge base with minimum human labeling effort. As of 10th October 2013, NEIL has been continuously running for 2.5 months on 200 core cluster (more than 350K CPU hours) and has an ontology of 1152 object categories, 1034 scene categories and 87 attributes. During this period, NEIL has discovered more than 1700 relationships and has labeled more than 400K visual instances.", "", "", "", "It is common to use domain specific terminology - attributes - to describe the visual appearance of objects. In order to scale the use of these describable visual attributes to a large number of categories, especially those not well studied by psychologists or linguists, it will be necessary to find alternative techniques for identifying attribute vocabularies and for learning to recognize attributes without hand labeled training data. We demonstrate that it is possible to accomplish both these tasks automatically by mining text and image data sampled from the Internet. 
The proposed approach also characterizes attributes according to their visual representation: global or local, and type: color, texture, or shape. This work focuses on discovering attributes and their visual appearance, and is as agnostic as possible about the textual description.", "Analysis and detection of complex events in videos require a semantic representation of the video content. Existing video semantic representation methods typically require users to pre-define an exhaustive concept lexicon and manually annotate the presence of the concepts in each video, which is infeasible for real-world video event detection problems. In this paper, we propose an automatic semantic concept discovery scheme by exploiting Internet images and their associated tags. Given a target event and its textual descriptions, we crawl a collection of images and their associated tags by performing text based image search using the noun and verb pairs extracted from the event textual descriptions. The system first identifies the candidate concepts for an event by measuring whether a tag is a meaningful word and visually detectable. Then a concept visual model is built for each candidate concept using a SVM classifier with probabilistic output. Finally, the concept models are applied to generate concept based video representations. We use the TRECVID Multimedia Event Detection (MED) 2013 as our video test set and crawl 400K Flickr images to automatically discover 2,000 visual concepts. We show significant performance gains of the proposed concept discovery method over different video event detection tasks including supervised event modeling over concept space and semantic based zero-shot retrieval without training examples. Importantly, we show the proposed method of automatic concept discovery outperforms other well-known concept library construction approaches such as Classemes and ImageNet by a large margin (228%) in zero-shot event retrieval. 
Finally, subjective evaluation by humans also confirms clear superiority of the proposed method in discovering concepts for event representation.", "This paper proposes a new video representation for few-example event recognition and translation. Different from existing representations, which rely on either low-level features, or pre-specified attributes, we propose to learn an embedding from videos and their descriptions. In our embedding, which we call VideoStory, correlated term labels are combined if their combination improves the video classifier prediction. Our proposed algorithm prevents the combination of correlated terms which are visually dissimilar by optimizing a joint-objective balancing descriptiveness and predictability. The algorithm learns from textual descriptions of video content, which we obtain for free from the web by a simple spidering procedure. We use our VideoStory representation for few-example recognition of events on more than 65K challenging web videos from the NIST TRECVID event detection task and the Columbia Consumer Video collection. Our experiments establish that i) VideoStory outperforms an embedding without joint-objective and alternatives without any embedding, ii) The varying quality of input video descriptions from the web is compensated by harvesting more data, iii) VideoStory sets a new state-of-the-art for few-example event recognition, outperforming very recent attribute and low-level motion encodings. What is more, VideoStory translates a previously unseen video to its most likely description from visual content only." ] }
1509.07845
2226011429
Complex event retrieval is a challenging research problem, especially when no training videos are available. An alternative to collecting training videos is to train a large semantic concept bank a priori. Given a text description of an event, event retrieval is performed by selecting concepts linguistically related to the event description and fusing the concept responses on unseen videos. However, defining an exhaustive concept lexicon and pre-training it requires vast computational resources. Therefore, recent approaches automate concept discovery and training by leveraging large amounts of weakly annotated web data. Compact visually salient concepts are automatically obtained by the use of concept pairs or, more generally, n-grams. However, not all visually salient n-grams are necessarily useful for an event query--some combinations of concepts may be visually compact but irrelevant--and this drastically affects performance. We propose an event retrieval algorithm that constructs pairs of automatically discovered concepts and then prunes those concepts that are unlikely to be helpful for retrieval. Pruning depends both on the query and on the specific video instance being evaluated. Our approach also addresses calibration and domain adaptation issues that arise when applying concept detectors to unseen videos. We demonstrate large improvements over other vision based systems on the TRECVID MED 13 dataset.
Recent work has also explored multiple modalities--e.g., automatic speech recognition (ASR), optical character recognition (OCR), audio, and vision--for event detection @cite_28 @cite_27 @cite_22 to achieve better performance over vision alone. @cite_27 propose MultiModal Pseudo Relevance Feedback (MMPRF), which selects several feedback videos for each modality to train a joint model. Applied to test videos, the model yields a new ranked video list that is used as feedback to retrain the model. @cite_22 represent a video by using a large concept bank, speech information, and video text. These features are projected to a high-dimensional concept space, where event-video similarity scores are computed to rank videos. While multi-modal techniques achieve good performance, their visual components alone significantly underperform the system as a whole.
{ "cite_N": [ "@cite_28", "@cite_27", "@cite_22" ], "mid": [ "1995137594", "2013075750", "2021899621" ], "abstract": [ "Reranking has been a focal technique in multimedia retrieval due to its efficacy in improving initial retrieval results. Current reranking methods, however, mainly rely on heuristic weighting. In this paper, we propose a novel reranking approach called Self-Paced Reranking (SPaR) for multimodal data. As its name suggests, SPaR utilizes samples from easy to more complex ones in a self-paced fashion. SPaR is special in that it has a concise mathematical objective to optimize and useful properties that can be theoretically verified. It on one hand offers a unified framework providing theoretical justifications for current reranking methods, and on the other hand generates a spectrum of new reranking schemes. This paper also advances the state-of-the-art self-paced learning research which potentially benefits applications in other fields. Experimental results validate the efficacy and the efficiency of the proposed method on both image and video search tasks. Notably, SPaR achieves by far the best result on the challenging TRECVID multimedia event search task.", "We propose a novel method MultiModal Pseudo Relevance Feedback (MMPRF) for event search in video, which requires no search examples from the user. Pseudo Relevance Feedback has shown great potential in retrieval tasks, but previous works are limited to unimodal tasks with only a single ranked list. To tackle the event search task which is inherently multimodal, our proposed MMPRF takes advantage of multiple modalities and multiple ranked lists to enhance event search performance in a principled way. The approach is unique in that it leverages not only semantic features, but also non-semantic low-level features for event search in the absence of training data. Evaluated on the TRECVID MEDTest dataset, the approach improves the baseline by up to 158% in terms of the mean average precision. 
It also significantly contributes to CMU Team's final submission in TRECVID-13 Multimedia Event Detection.", "Current state-of-the-art systems for visual content analysis require large training sets for each class of interest, and performance degrades rapidly with fewer examples. In this paper, we present a general framework for the zero-shot learning problem of performing high-level event detection with no training exemplars, using only textual descriptions. This task goes beyond the traditional zero-shot framework of adapting a given set of classes with training data to unseen classes. We leverage video and image collections with free-form text descriptions from widely available web sources to learn a large bank of concepts, in addition to using several off-the-shelf concept detectors, speech, and video text for representing videos. We utilize natural language processing technologies to generate event description features. The extracted features are then projected to a common high-dimensional space using text expansion, and similarity is computed in this space. We present extensive experimental results on the large TRECVID MED [26] corpus to demonstrate our approach. Our results show that the proposed concept detection methods significantly outperform current attribute classifiers such as Classemes [34], ObjectBank [21], and SUN attributes [28]. Further, we find that fusion, both within as well as between modalities, is crucial for optimal performance." ] }
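The record above repeatedly points at the same combination recipe: scores from several modalities (or several queries) are combined by late fusion with an average operator. As a hedged sketch, assuming per-modality z-score normalization before averaging (the normalization choice and the toy scores are illustrative assumptions, not the cited papers' exact procedure):

```python
import numpy as np

def late_fusion(score_lists):
    """Average-operator late fusion: z-normalize each modality's scores
    over the set of test videos, then average across modalities, so no
    single miscalibrated modality dominates the fused ranking."""
    fused = np.zeros(len(score_lists[0]))
    for scores in score_lists:
        s = np.asarray(scores, dtype=float)
        s = (s - s.mean()) / (s.std() + 1e-12)  # per-modality normalization
        fused += s
    return fused / len(score_lists)

# Toy example: visual and ASR scores for 3 test videos.
visual = [0.9, 0.1, 0.5]
asr    = [0.2, 0.1, 0.9]
fused  = late_fusion([visual, asr])
ranking = list(np.argsort(-fused))  # best video first
```

Here video 2, which is mediocre visually but strong in ASR, ends up ranked first, illustrating how fusion can compensate for inadequacies in a single modality.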
1509.07845
2226011429
Complex event retrieval is a challenging research problem, especially when no training videos are available. An alternative to collecting training videos is to train a large semantic concept bank a priori. Given a text description of an event, event retrieval is performed by selecting concepts linguistically related to the event description and fusing the concept responses on unseen videos. However, defining an exhaustive concept lexicon and pre-training it requires vast computational resources. Therefore, recent approaches automate concept discovery and training by leveraging large amounts of weakly annotated web data. Compact visually salient concepts are automatically obtained by the use of concept pairs or, more generally, n-grams. However, not all visually salient n-grams are necessarily useful for an event query--some combinations of concepts may be visually compact but irrelevant--and this drastically affects performance. We propose an event retrieval algorithm that constructs pairs of automatically discovered concepts and then prunes those concepts that are unlikely to be helpful for retrieval. Pruning depends both on the query and on the specific video instance being evaluated. Our approach also addresses calibration and domain adaptation issues that arise when applying concept detectors to unseen videos. We demonstrate large improvements over other vision based systems on the TRECVID MED 13 dataset.
All these methods suffer from calibration and domain adaptation issues, since CBRE methods fuse multiple concept detector responses and are usually trained and tested on different domains. To deal with calibration issues, most related work uses SVMs with probabilistic outputs @cite_10 . However, the domain shift between web training data and test videos is usually not addressed by calibration alone. To reduce this effect, some ranking-based re-scoring schemes @cite_28 @cite_27 replace raw detector confidences with the confidence rank in a list of videos. To further adapt to new domains (e.g., from images to videos), easy samples have been used to update detector models @cite_13 @cite_28 . Similar to these approaches, we use a rank-based re-scoring scheme to address calibration issues and update models using the most confident detections to adapt to new domains. [Figure: example events (b) changing a vehicle tire and (c) getting a vehicle unstuck. The first and second rows show the results of running unary concepts on test videos. The third row combines two unary concept detectors by adding their scores. The fourth row shows the results of our proposed pair-concept detectors. Pair-concepts are more effective at discovering frames that are more semantically relevant to the event.]
{ "cite_N": [ "@cite_28", "@cite_27", "@cite_10", "@cite_13" ], "mid": [ "1995137594", "2013075750", "2056983531", "2133434696" ], "abstract": [ "Reranking has been a focal technique in multimedia retrieval due to its efficacy in improving initial retrieval results. Current reranking methods, however, mainly rely on the heuristic weighting. In this paper, we propose a novel reranking approach called Self-Paced Reranking (SPaR) for multimodal data. As its name suggests, SPaR utilizes samples from easy to more complex ones in a self-paced fashion. SPaR is special in that it has a concise mathematical objective to optimize and useful properties that can be theoretically verified. It on one hand offers a unified framework providing theoretical justifications for current reranking methods, and on the other hand generates a spectrum of new reranking schemes. This paper also advances the state-of-the-art self-paced learning research which potentially benefits applications in other fields. Experimental results validate the efficacy and the efficiency of the proposed method on both image and video search tasks. Notably, SPaR achieves by far the best result on the challenging TRECVID multimedia event search task.", "We propose a novel method MultiModal Pseudo Relevance Feedback (MMPRF) for event search in video, which requires no search examples from the user. Pseudo Relevance Feedback has shown great potential in retrieval tasks, but previous works are limited to unimodal tasks with only a single ranked list. To tackle the event search task which is inherently multimodal, our proposed MMPRF takes advantage of multiple modalities and multiple ranked lists to enhance event search performance in a principled way. The approach is unique in that it leverages not only semantic features, but also non-semantic low-level features for event search in the absence of training data. 
Evaluated on the TRECVID MEDTest dataset, the approach improves the baseline by up to 158% in terms of the mean average precision. It also significantly contributes to CMU Team's final submission in TRECVID-13 Multimedia Event Detection.", "Platt's probabilistic outputs for Support Vector Machines (Platt, J. in Smola, A., et al (eds.) Advances in large margin classifiers. Cambridge, 2000) has been popular for applications that require posterior class probabilities. In this note, we propose an improved algorithm that theoretically converges and avoids numerical difficulties. A simple and ready-to-use pseudo code is included.", "Typical object detectors trained on images perform poorly on video, as there is a clear distinction in domain between the two types of data. In this paper, we tackle the problem of adapting object detectors learned from images to work well on videos. We treat the problem as one of unsupervised domain adaptation, in which we are given labeled data from the source domain (image), but only unlabeled data from the target domain (video). Our approach, self-paced domain adaptation, seeks to iteratively adapt the detector by re-training the detector with automatically discovered target domain examples, starting with the easiest first. At each iteration, the algorithm adapts by considering an increased number of target domain examples, and a decreased number of source domain examples. To discover target domain examples from the vast amount of video data, we introduce a simple, robust approach that scores trajectory tracks instead of bounding boxes. We also show how rich and expressive features specific to the target domain can be incorporated under the same framework. We show promising results on the 2011 TRECVID Multimedia Event Detection [1] and LabelMe Video [2] datasets that illustrate the benefit of our approach to adapt object detectors to video." ] }
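The related-work paragraph in this record describes rank-based re-scoring: raw detector confidences are replaced by their rank over the list of test videos, so that differently calibrated detectors become comparable before fusion. A minimal sketch (the exact rank-to-score mapping below is an assumption, not the cited papers' formula; ties are broken arbitrarily):

```python
def rank_rescore(confidences):
    """Replace each raw detector confidence with a rank-based score in
    (0, 1]: the highest-confidence video gets 1.0, the lowest gets 1/n.
    This sidesteps miscalibrated raw scores when fusing many detectors."""
    n = len(confidences)
    # order[r] = index of the video with the (r+1)-th highest confidence
    order = sorted(range(n), key=lambda i: -confidences[i])
    scores = [0.0] * n
    for rank, idx in enumerate(order):
        scores[idx] = (n - rank) / n
    return scores

raw = [3.2, -0.5, 0.9, 0.1]      # uncalibrated SVM margins
rescored = rank_rescore(raw)     # → [1.0, 0.25, 0.75, 0.5]
```

Because only the ordering of the raw scores matters, the output distribution is identical for every detector, regardless of how its SVM margins are scaled.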
1509.06939
2952449568
The importance of depth perception in the interactions that humans have within their nearby space is a well-established fact. Consequently, it is also well known that the possibility of exploiting good stereo information would ease and, in many cases, enable a large variety of attentional and interactive behaviors on humanoid robotic platforms. However, the difficulty of computing real-time and robust binocular disparity maps from moving stereo cameras often prevents relying on this kind of cue to visually guide robots' attention and actions in real-world scenarios. The contribution of this paper is two-fold: first, we show that the Efficient Large-scale Stereo Matching algorithm (ELAS) by A. Geiger et al. (2010) for computation of the disparity map is well suited to be used on a humanoid robotic platform such as the iCub robot; second, we show how, provided with a fast and reliable stereo system, implementing relatively challenging visual behaviors in natural settings can require much less effort. As a case study we consider the common situation where the robot is asked to focus its attention on one object close in the scene, showing how a simple but effective disparity-based segmentation solves the problem in this case. Indeed, this example paves the way to a variety of other similar applications.
Depth is a natural cue to be used when the robot's attention needs to be focused on close entities in its workspace. For example, consider a very common situation for a humanoid robotic platform, like the one where a human stands in front of the robot showing it an object to be recognized or grasped. Both motion- and appearance-based approaches to focus the robot's attention on the object of interest would impose many constraints on even this simple Human-Robot Interaction (HRI) scenario. Indeed, color-based methods work under strict assumptions on the lighting conditions and kind of background (preferably a table or a wall) and generally fail in cluttered settings. Model-based methods, beyond being affected to some extent by the same limitations, need a model of the object to be known a priori. Motion-based methods (see, e.g., @cite_12 ) work under the obvious assumption that the objects are moving, with the speed of the object often being critical for the detection. Instead, when the robot is required to look at something we are showing it, or at something located nearby, the most distinguishing feature is simply the fact that the object of interest is closer to the robot than the background.
{ "cite_N": [ "@cite_12" ], "mid": [ "2170549527" ], "abstract": [ "Visual motion is a simple yet powerful cue widely used by biological systems to improve their perception and adaptation to the environment. Examples of tasks that greatly benefit from the ability to detect movement are object segmentation, 3D scene reconstruction and control of attention. In computer vision several algorithms for computing visual motion and optic flow exist. However their application in robotics is not straightforward as in these platforms visual motion is often dominated by (self) motion produced by the movement of the robot (egomotion) making it difficult to disambiguate between motion induced by the scene dynamics or by the own actions of the robot. Independent motion detection is an active field in computer vision and robotics, however approaches in this area typically require that some models of both the environment and the robot visual system are available and are hardly suitable for real-time control. In this paper we describe the motionCUT, a derivation of the Lucas-Kanade optical flow algorithm that allows detecting moving objects, irrespectively of the egomotion produced by the robot. Our method is purely visual and does not require information other than the images coming from the cameras. As such it can be easily adapted to any robotic platform. The system was tested on a stereo tracking task on the iCub humanoid robot, demonstrating that the algorithm performs well and can easily execute in real-time." ] }
1509.06939
2952449568
The importance of depth perception in the interactions that humans have within their nearby space is a well-established fact. Consequently, it is also well known that the possibility of exploiting good stereo information would ease and, in many cases, enable a large variety of attentional and interactive behaviors on humanoid robotic platforms. However, the difficulty of computing real-time and robust binocular disparity maps from moving stereo cameras often prevents this kind of cue from being used to visually guide robots' attention and actions in real-world scenarios. The contribution of this paper is two-fold: first, we show that the Efficient Large-scale Stereo Matching (ELAS) algorithm by Geiger et al. (2010) for computation of the disparity map is well suited to be used on a humanoid robotic platform such as the iCub robot; second, we show how, provided with a fast and reliable stereo system, implementing relatively challenging visual behaviors in natural settings can require much less effort. As a case study we consider the common situation where the robot is asked to focus its attention on one close object in the scene, showing how a simple but effective disparity-based segmentation solves the problem in this case. Indeed, this example paves the way to a variety of other similar applications.
For this reason, depth information has been exploited in a variety of robotics applications in the past @cite_16 @cite_11 @cite_1 @cite_5 @cite_8 @cite_9 . However, it is not easy to find methods for depth estimation from a stereo pair that strike a good trade-off between robustness (e.g., to lighting conditions) and speed, two requirements that are key for working in real-world robotic scenarios. Therefore, alternative solutions such as Kinect RGB-D sensors have been adopted, even when working on the iCub humanoid, which is equipped with a human-like stereo camera system (see @cite_19 ). The main motivation of this work is thus to "upgrade" the iCub robot's depth perception and to show that this improvement opens the way to a range of possible applications where disparity and depth can be successfully used to visually guide the robot's attention and actions without relying on a Kinect.
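As a minimal illustration of what a stereo matcher computes (not ELAS itself, which uses a far more efficient search strategy), here is a naive sum-of-absolute-differences block-matching sketch. The window size, disparity range, and toy images are assumptions chosen only to make the example self-contained.

```python
import numpy as np

def block_match(left, right, max_disp=4, win=1):
    """Naive SAD block matching along horizontal scanlines.

    For each pixel in the left image, try `max_disp` shifts into the
    right image and pick the shift with the smallest sum of absolute
    differences over a (2*win+1)^2 window. Real-time systems (ELAS,
    semi-global matching) use far smarter search; this only shows the
    underlying matching cost.
    """
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int64)
    for y in range(win, h - win):
        for x in range(win + max_disp, w - win):
            patch = left[y - win:y + win + 1, x - win:x + win + 1]
            costs = [
                np.abs(patch - right[y - win:y + win + 1,
                                     x - d - win:x - d + win + 1]).sum()
                for d in range(max_disp + 1)
            ]
            disp[y, x] = int(np.argmin(costs))
    return disp

# Toy scene: a bright vertical stripe shifted by 2 px between the views
right = np.zeros((7, 12)); right[:, 5] = 100.0
left = np.zeros((7, 12)); left[:, 7] = 100.0
disp = block_match(left, right)                # disparity 2 recovered at the stripe
```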
{ "cite_N": [ "@cite_8", "@cite_9", "@cite_1", "@cite_19", "@cite_5", "@cite_16", "@cite_11" ], "mid": [ "2542911972", "2155338019", "2007944162", "", "1480710408", "2125524302", "1527435454" ], "abstract": [ "In this paper we present the current improvements of our biologically motivated interacting and learning vision system for humanoids. Building on the work presented by , 2005 the system features now a very natural gaze selection and interaction for learning freely presented complex objects in realtime. The new features are facilitated by two major contributions. First, by the introduction of an internal needs dynamics based on unspecific and specific rewards governing and exploring the parameterization of the basic behaviors. And second, by extending the object recognition pathway by sensory and object memory pathways as well as speech input output for interactive confirmation and object labeling", "In this paper, we present a unifying approach for learning and recognition of objects in unstructured environments through exploration. Taking inspiration from how young infants learn objects, we establish four principles for object learning. First, early object detection is based on an attention mechanism detecting salient parts in the scene. Second, motion of the object allows more accurate object localization. Next, acquiring multiple observations of the object through manipulation allows a more robust representation of the object. And last, object recognition benefits from a multi-modal representation. Using these principles, we developed a unifying method including visual attention, smooth pursuit of the object, and a multi-view and multi-modal object representation. 
Our results indicate the effectiveness of this approach and the improvement of the system when multiple observations are acquired from active object manipulation.", "We present a biologically motivated architecture for object recognition that is capable of online learning of several objects based on interaction with a human teacher. The system combines biological principles such as appearance-based representation in topographical feature detection hierarchies and context-driven transfer between different levels of object memory. Training can be performed in an unconstrained environment by presenting objects in front of a stereo camera system and labeling them by speech input. The learning is fully online and thus avoids an artificial separation of the interaction into training and test phases. We demonstrate the performance on a challenging ensemble of 50 objects.", "", "We present a modular architecture for recognition and localization of objects in a scene that is motivated from coupling the ventral (\"what\") and dorsal (\"where\") pathways of human visual processing. Our main target is to demonstrate how online learning can be used to bootstrap the representation from nonspecific cues like stereo depth towards object-specific representations for recognition and detection. We show the realization of the system learning objects in a complex realworld environment and investigate its performance.", "This work is concerned with a framework for visual object recognition in real world tasks. Our approach is motivated by biological findings of the representation of space around the body, the so-called peripersonal space. We show that the principles behind those findings can lead to a natural structuring of object recognition tasks in artificial systems. 
We demonstrate this by the supervised learning and recognition of 20 complex-shaped objects from unsegmented visual input", "We present a biologically motivated system for object recognition that is capable of online learning of several objects based on interaction with a human teacher. The training is unconstrained in the sense that arbitrary objects can be freely presented in front of a stereo camera system and labeled by speech input. The architecture unites biological principles such as appearance-based representation in topographical feature detection hierarchies and context-driven transfer between different levels of object memory. The learning is fully online and thus avoids an artificial separation of the interaction into training and test phases." ] }
1509.07022
2415615063
This paper solves the rendezvous problem for a network of underactuated rigid bodies such as quadrotor helicopters. A control strategy is presented that makes the centres of mass of the vehicles converge to an arbitrarily small neighborhood of one another. The convergence is global, and each vehicle can compute its own control input using only an on-board camera and a three-axis rate gyroscope. No global positioning system is required, nor any information about the vehicles' attitudes.
Typical coordination problems include attitude synchronization, rendezvous, flocking, and formation control. For networks of single- or double-integrator systems, the rendezvous problem is referred to as consensus or agreement, and it has been investigated by many researchers, for instance @cite_34 @cite_8 @cite_2 @cite_14 @cite_12 @cite_22 @cite_5 @cite_30 .
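In their simplest single-integrator form, the consensus protocols cited above reduce to Laplacian dynamics, x' = -Lx, under which all states converge to a common value on a connected graph. A minimal discrete-time sketch follows; the step size and the path graph are illustrative assumptions, not taken from any of the cited works.

```python
import numpy as np

def consensus_step(x, A, eps=0.1):
    """One discrete-time consensus update x <- x - eps * L x,
    where L = D - A is the graph Laplacian of adjacency matrix A."""
    L = np.diag(A.sum(axis=1)) - A
    return x - eps * (L @ x)

# Undirected path graph 0-1-2: connected, so states converge to the average
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
x = np.array([0.0, 3.0, 9.0])
avg = x.mean()                      # invariant for undirected graphs
for _ in range(200):
    x = consensus_step(x, A)
# x is now close to avg = 4.0 at every node
```

For an undirected graph the Laplacian has zero row and column sums, so the state average is preserved at every step and is the value agreed upon.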
{ "cite_N": [ "@cite_30", "@cite_14", "@cite_22", "@cite_8", "@cite_2", "@cite_5", "@cite_34", "@cite_12" ], "mid": [ "2114482313", "2099175737", "2107396783", "2143843955", "", "", "2050505323", "2168132433" ], "abstract": [ "The paper investigates the synchronization of a network of identical linear state-space models under a possibly time-varying and directed interconnection structure. The main result is the construction of a dynamic output feedback coupling that achieves synchronization if the decoupled systems have no exponentially unstable mode and if the communication graph is uniformly connected. The result can be interpreted as a generalization of classical consensus algorithms. Stronger conditions are shown to be sufficient-but to some extent, also necessary-to ensure synchronization with the diffusive static output coupling often considered in the literature.", "This note considers the problem of information consensus among multiple agents in the presence of limited and unreliable information exchange with dynamically changing interaction topologies. Both discrete and continuous update schemes are proposed for information consensus. This note shows that information consensus under dynamically changing interaction topologies can be achieved asymptotically if the union of the directed interaction graphs have a spanning tree frequently enough as the system evolves.", "In this paper, we discuss consensus problems for networks of dynamic agents with fixed and switching topologies. We analyze three cases: 1) directed networks with fixed topology; 2) directed networks with switching topology; and 3) undirected networks with communication time-delays and fixed topology. We introduce two consensus protocols for networks with and without time-delays and provide a convergence analysis in all three cases. 
We establish a direct connection between the algebraic connectivity (or Fiedler eigenvalue) of the network and the performance (or negotiation speed) of a linear consensus protocol. This required the generalization of the notion of algebraic connectivity of undirected graphs to digraphs. It turns out that balanced digraphs play a key role in addressing average-consensus problems. We introduce disagreement functions for convergence analysis of consensus protocols. A disagreement function is a Lyapunov function for the disagreement network dynamics. We proposed a simple disagreement function that is a common Lyapunov function for the disagreement dynamics of a directed network with switching topology. A distinctive feature of this work is to address consensus problems for networks with directed information flow. We provide analytical tools that rely on algebraic graph theory, matrix theory, and control theory. Simulations are provided that demonstrate the effectiveness of our theoretical results.", "This paper studies some necessary and sufficient conditions for second-order consensus in multi-agent dynamical systems. First, basic theoretical analysis is carried out for the case where for each agent the second-order dynamics are governed by the position and velocity terms and the asymptotic velocity is constant. A necessary and sufficient condition is given to ensure second-order consensus and it is found that both the real and imaginary parts of the eigenvalues of the Laplacian matrix of the corresponding network play key roles in reaching consensus. Based on this result, a second-order consensus algorithm is derived for the multi-agent system facing communication delays. A necessary and sufficient condition is provided, which shows that consensus can be achieved in a multi-agent system whose network topology contains a directed spanning tree if and only if the time delay is less than a critical value. 
Finally, simulation examples are given to verify the theoretical analysis.", "", "", "This paper describes a distributed coordination scheme with local information exchange for multiple vehicle systems. We introduce second-order consensus protocols that take into account motions of the information states and their derivatives, extending first-order protocols from the literature. We also derive necessary and sufficient conditions under which consensus can be reached in the context of unidirectional information exchange topologies. This work takes into account the general case where information flow may be unidirectional due to sensors with limited fields of view or vehicles with directed, power-constrained communication links. Unlike the first-order case, we show that having a (directed) spanning tree is a necessary rather than a sufficient condition for consensus seeking with second-order dynamics. This work focuses on a formal analysis of information exchange topologies that permit second-order consensus. Given its importance to the stability of the coordinated system, an analysis of the consensus term control gains is also presented, specifically the strength of the information states relative to their derivatives. As an illustrative example, consensus protocols are applied to coordinate the movements of multiple mobile robots. Copyright © 2006 John Wiley & Sons, Ltd.", "We study the stability properties of linear time-varying systems in continuous time whose system matrix is Metzler with zero row sums. This class of systems arises naturally in the context of distributed decision problems, coordination and rendezvous tasks and synchronization problems. The equilibrium set contains all states with identical state components. We present sufficient conditions guaranteeing uniform exponential stability of this equilibrium set, implying that all state components converge to a common value as time grows unbounded. 
Furthermore it is shown that this convergence result is robust with respect to an arbitrary delay, provided that the delay affects only the off-diagonal terms in the differential equation." ] }
1509.07022
2415615063
This paper solves the rendezvous problem for a network of underactuated rigid bodies such as quadrotor helicopters. A control strategy is presented that makes the centres of mass of the vehicles converge to an arbitrarily small neighborhood of one another. The convergence is global, and each vehicle can compute its own control input using only an on-board camera and a three-axis rate gyroscope. No global positioning system is required, nor any information about the vehicles' attitudes.
A passivity-based solution of the attitude synchronization problem for kinematic vehicle models is proposed in @cite_0 . In @cite_26 @cite_21 @cite_11 , the same problem is investigated for dynamic vehicle models. The proposed controllers do not require measurements of the angular velocity, but they do require absolute attitude measurements. In @cite_37 , the authors use the energy-shaping approach to design local and distributed controllers for attitude synchronization. The same approach is adopted in @cite_1 to design two attitude synchronization controllers, both local and distributed. The first controller achieves almost-global synchronization for directed connected graphs. However, the controller design is based on distributed observers @cite_9 , and therefore requires auxiliary states to be communicated among neighboring vehicles. It also employs an angular velocity dissipation term that forces all vehicle angular velocities to zero in steady state. The second controller in @cite_1 does not restrict the final angular velocities and does not require communication, but it requires an undirected sensing graph and guarantees only local convergence.
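A kinematic caricature of attitude synchronization is phase synchronization on the circle (the setting of @cite_9 ): each agent steers toward its neighbors through sinusoidal coupling. The gain, step size, graph, and initial headings below are illustrative assumptions, not any of the cited controllers.

```python
import numpy as np

def sync_step(theta, A, k=0.5, dt=0.1):
    """Euler step of theta_i' = k * sum_j a_ij * sin(theta_j - theta_i),
    a standard sinusoidal coupling for synchronization on the circle."""
    diff = theta[None, :] - theta[:, None]     # diff[i, j] = theta_j - theta_i
    return theta + dt * k * (A * np.sin(diff)).sum(axis=1)

A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)         # complete graph on 3 agents
theta = np.array([0.2, 0.9, 1.6])              # headings within a half-circle
for _ in range(500):
    theta = sync_step(theta, A)
spread = theta.max() - theta.min()             # shrinks toward zero
```

For initial headings confined to a half-circle and a connected graph, the headings contract to a common value; the odd coupling also preserves the mean heading on a symmetric graph.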
{ "cite_N": [ "@cite_37", "@cite_26", "@cite_9", "@cite_21", "@cite_1", "@cite_0", "@cite_11" ], "mid": [ "1968976970", "2030056867", "2116630866", "2165756371", "2119497361", "1134107867", "" ], "abstract": [ "We address stable synchronization of a network of rotating and translating rigid bodies in three-dimensional space. Motivated by applications that require coordinated spinning spacecraft or diving underwater vehicles, we prove control laws that stably couple and coordinate the dynamics of multiple rigid bodies. We design decentralized, energy shaping control laws for each individual rigid body that depend on the relative orientation and relative position of its neighbors. Energy methods are used to prove stability of the coordinated multi-body dynamical system. To prove exponential stability, we break symmetry and consider a controlled dissipation term that requires each individual to measure its own velocity. The control laws are illustrated in simulation for a network of spinning rigid bodies.", "In this paper, distributed cooperative attitude synchronization and tracking problems are considered for multiple rigid bodies with attitudes represented by modified rodriguez parameters. Two distributed control laws are proposed and analyzed. The first control law applies a passivity approach for distributed attitude synchronization. The control law guarantees attitude synchronization without the requirement for absolute angular velocity measurements and relative angular velocity measurements between neighboring rigid bodies. The second control law incorporates a time-varying reference attitude, where the reference attitude is allowed to be available to only a subset of the group members under general directed information exchange. The control law guarantees that all rigid bodies track the time-varying reference attitude as long as a virtual leader whose attitude is the time-varying reference attitude has a directed path to all other rigid bodies in the group. 
Simulation results are presented to demonstrate the effectiveness of the two control laws.", "In this paper, we study the behavior of a network of N agents, each evolving on the circle. We propose a novel algorithm that achieves synchronization or balancing in phase models under mild connectedness assumptions on the (possibly time-varying and unidirectional) communication graphs. The global convergence analysis on the N-torus is a distinctive feature of the present work with respect to previous results that have focused on convergence in the Euclidean space.", "We consider the coordinated attitude control problem for a group of spacecraft, without velocity measurements. Our approach is based on the introduction of auxiliary dynamical systems (playing the role of velocity observers in a certain sense) to generate the individual and relative damping terms in the absence of the actual angular velocities and relative angular velocities. Our main focus, in this technical note, is to address the following two problems: 1) Design a velocity-free attitude tracking and synchronization control scheme, that allows the team members to align their attitudes and track a time-varying reference trajectory (simultaneously). 2) Design a velocity-free synchronization control scheme, in the case where no reference attitude is specified, and all spacecraft are required to reach a consensus by aligning their attitudes with the same final time-varying attitude. In this work, one important and novel feature (besides the non-requirement of the angular velocity measurements), consists in the fact that the control torques are naturally bounded and the designer can arbitrarily assign the desired bounds on the control torques, a priori, through the control gains, regardless of the angular velocities. Throughout this technical note, the communication flow between spacecraft is assumed to be undirected. 
Simulation results of a scenario of four spacecraft are provided to show the effectiveness of the proposed control schemes.", "Control laws to synchronize attitudes in a swarm of fully actuated rigid bodies, in the absence of a common reference attitude or hierarchy in the swarm, are proposed in [Smith, T. R., Hanssmann, H., & Leonard, N.E. (2001). Orientation control of multiple underwater vehicles with symmetry-breaking potentials. In Proc. 40th IEEE conf. decision and control (pp. 4598-4603); Nair, S., Leonard, N. E. (2007). Stable synchronization of rigid body networks. Networks and Heterogeneous Media, 2(4), 595-624]. The present paper studies two separate extensions with the same energy shaping approach: (i) locally synchronizing the rigid bodies' attitudes, but without restricting their final motion and (ii) relaxing the communication topology from undirected, fixed and connected to directed, varying and uniformly connected. The specific strategies that must be developed for these extensions illustrate the limitations of attitude control with reduced information.", "Highlighting the control of networked robotic systems, this book synthesizes a unified passivity-based approach to an emerging cross-disciplinary subject. Thanks to this unified approach, readers can access various state-of-the-art research fields by studying only the background foundations associated with passivity. In addition to the theoretical results and techniques,the authors provide experimental case studies on testbeds of robotic systems including networked haptic devices, visual robotic systems, robotic network systems and visual sensor network systems.The text begins with an introduction to passivity and passivity-based control together with the other foundations needed in this book. The main body of the book consists of three parts. 
The first examines how passivity can be utilized for bilateral teleoperation and demonstrates the inherent robustness of the passivity-based controller against communication delays. The second part emphasizes passivitys usefulness for visual feedback control and estimation. Convergence is rigorously proved even when other passive components are interconnected. The passivity approach is also differentiated from other methodologies. The third part presents the unified passivity-based control-design methodology for multi-agent systems. This scheme is shown to be either immediately applicable or easily extendable to the solution of various motion coordination problems including 3-D attitude pose synchronization, flocking control and cooperative motion estimation.Academic researchers and practitioners working in systems and control and or robotics will appreciate the potential of the elegant and novel approach to the control of networked robots presented here. The limited background required and the case-study work described also make the text appropriate for and, it is hoped, inspiring to students.", "" ] }
1509.06451
2950557924
In this paper, we propose a novel deep convolutional network (DCN) that achieves outstanding performance on FDDB, PASCAL Face, and AFW. Specifically, our method achieves a high recall rate of 90.99% on the challenging FDDB benchmark, outperforming the state-of-the-art method by a large margin of 2.91%. Importantly, we consider finding faces from a new perspective through scoring facial parts responses by their spatial structure and arrangement. The scoring mechanism is carefully formulated considering challenging cases where faces are only partially visible. This consideration allows our network to detect faces under severe occlusion and unconstrained pose variation, which are the main difficulty and bottleneck of most existing face detection approaches. We show that despite the use of DCN, our network can achieve practical runtime speed.
Over the last decades, cascade-based @cite_34 @cite_1 @cite_16 @cite_22 and deformable part model (DPM) detectors have dominated face detection. Viola and Jones @cite_22 introduced fast Haar-like feature computation via the integral image and a boosted cascade classifier. Various studies thereafter follow a similar pipeline. Amongst the variants, SURF cascade @cite_16 was one of the top performers. Later, Chen et al. @cite_34 demonstrated state-of-the-art face detection performance by learning face detection and face alignment jointly in the same cascade framework. Deformable part models define a face as a collection of parts. A latent Support Vector Machine is typically used to find the parts and their relationships. DPM is shown to be more robust to occlusion than the cascade-based methods. A recent study @cite_33 demonstrates state-of-the-art performance with just a vanilla DPM, achieving better results than more sophisticated DPM variants @cite_25 @cite_41 .
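The integral-image trick behind Viola and Jones can be shown in a few lines: once the summed-area table is built, any rectangular (Haar-like) sum costs four lookups, independent of the rectangle's size. A minimal sketch, with a toy image as the only assumption:

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[y, x] = sum of img[:y, :x] (zero border row/col)."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def box_sum(ii, y0, x0, y1, x1):
    """Sum of img[y0:y1, x0:x1] in O(1) via four lookups --
    the operation that makes Haar-like features cheap to evaluate."""
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]

img = np.arange(16).reshape(4, 4)
ii = integral_image(img)
# e.g. box_sum(ii, 1, 1, 3, 3) equals img[1:3, 1:3].sum()
```

A Haar-like feature is then just a signed combination of a few such box sums (e.g., the sum of one rectangle minus the sum of an adjacent one).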
{ "cite_N": [ "@cite_33", "@cite_22", "@cite_41", "@cite_1", "@cite_16", "@cite_34", "@cite_25" ], "mid": [ "", "2137401668", "2047508432", "2169696215", "2100807570", "204612701", "2034025266" ], "abstract": [ "", "This paper describes a face detection framework that is capable of processing images extremely rapidly while achieving high detection rates. There are three key contributions. The first is the introduction of a new image representation called the “Integral Image” which allows the features used by our detector to be computed very quickly. The second is a simple and efficient classifier which is built using the AdaBoost learning algorithm (Freund and Schapire, 1995) to select a small number of critical visual features from a very large set of potential features. The third contribution is a method for combining classifiers in a “cascade” which allows background regions of the image to be quickly discarded while spending more computation on promising face-like regions. A set of experiments in the domain of face detection is presented. The system yields face detection performance comparable to the best previous systems (Sung and Poggio, 1998; , 1998; Schneiderman and Kanade, 2000; , 2000). Implemented on a conventional desktop, face detection proceeds at 15 frames per second.", "We present a unified model for face detection, pose estimation, and landmark estimation in real-world, cluttered images. Our model is based on a mixtures of trees with a shared pool of parts; we model every facial landmark as a part and use global mixtures to capture topological changes due to viewpoint. We show that tree-structured models are surprisingly effective at capturing global elastic deformation, while being easy to optimize unlike dense graph structures. We present extensive results on standard face benchmarks, as well as a new “in the wild” annotated dataset, that suggests our system advances the state-of-the-art, sometimes considerably, for all three tasks. 
Though our model is modestly trained with hundreds of faces, it compares favorably to commercial systems trained with billions of examples (such as Google Picasa and face.com).", "Rotation invariant multiview face detection (MVFD) aims to detect faces with arbitrary rotation-in-plane (RIP) and rotation-off-plane (ROP) angles in still images or video sequences. MVFD is crucial as the first step in automatic face processing for general applications since face images are seldom upright and frontal unless they are taken cooperatively. In this paper, we propose a series of innovative methods to construct a high-performance rotation invariant multiview face detector, including the width-first-search (WFS) tree detector structure, the vector boosting algorithm for learning vector-output strong classifiers, the domain-partition-based weak learning method, the sparse feature in granular space, and the heuristic search for sparse feature selection. As a result of that, our multiview face detector achieves low computational complexity, broad detection scope, and high detection accuracy on both standard testing sets and real-life images", "This paper presents a novel learning framework for training boosting cascade based object detector from large scale dataset. The framework is derived from the well-known Viola-Jones (VJ) framework but distinguished by three key differences. First, the proposed framework adopts multi-dimensional SURF features instead of single dimensional Haar features to describe local patches. In this way, the number of used local patches can be reduced from hundreds of thousands to several hundreds. Second, it adopts logistic regression as weak classifier for each local patch instead of decision trees in the VJ framework. Third, we adopt AUC as a single criterion for the convergence test during cascade training rather than the two trade-off criteria (false-positive-rate and hit-rate) in the VJ framework. 
The benefit is that the false-positive-rate can be adaptive among different cascade stages, and thus yields much faster convergence speed of SURF cascade. Combining these points together, the proposed approach has three good properties. First, the boosting cascade can be trained very efficiently. Experiments show that the proposed approach can train object detectors from billions of negative samples within one hour even on personal computers. Second, the built detector is comparable to the state-of-the-art algorithm not only on the accuracy but also on the processing speed. Third, the built detector is small in model-size due to short cascade stages.", "We present a new state-of-the-art approach for face detection. The key idea is to combine face alignment with detection, observing that aligned face shapes provide better features for face classification. To make this combination more effective, our approach learns the two tasks jointly in the same cascade framework, by exploiting recent advances in face alignment. Such joint learning greatly enhances the capability of cascade detection and still retains its realtime performance. Extensive experiments show that our approach achieves the best accuracy on challenging datasets, where all existing solutions are either inaccurate or too slow.", "Despite the successes in the last two decades, the state-of-the-art face detectors still have problems in dealing with images in the wild due to large appearance variations. Instead of leaving appearance variations directly to statistical learning algorithms, we propose a hierarchical part based structural model to explicitly capture them. The model enables part subtype option to handle local appearance variations such as closed and open month, and part deformation to capture the global appearance variations such as pose and expression. 
In detection, candidate window is fitted to the structural model to infer the part location and part subtype, and detection score is then computed based on the fitted configuration. In this way, the influence of appearance variation is reduced. Besides the face model, we exploit the co-occurrence between face and body, which helps to handle large variations, such as heavy occlusions, to further boost the face detection performance. We present a phrase based representation for body detection, and propose a structural context model to jointly encode the outputs of face detector and body detector. Benefit from the rich structural face and body information, as well as the discriminative structural learning algorithm, our method achieves state-of-the-art performance on FDDB, AFW and a self-annotated dataset, under wide comparisons with commercial and academic methods. (C) 2013 Elsevier B.V. All rights reserved." ] }
1509.06451
2950557924
In this paper, we propose a novel deep convolutional network (DCN) that achieves outstanding performance on FDDB, PASCAL Face, and AFW. Specifically, our method achieves a high recall rate of 90.99% on the challenging FDDB benchmark, outperforming the state-of-the-art method by a large margin of 2.91%. Importantly, we consider finding faces from a new perspective through scoring facial parts responses by their spatial structure and arrangement. The scoring mechanism is carefully formulated considering challenging cases where faces are only partially visible. This consideration allows our network to detect faces under severe occlusion and unconstrained pose variation, which are the main difficulty and bottleneck of most existing face detection approaches. We show that despite the use of DCN, our network can achieve practical runtime speed.
A recent study @cite_38 shows that face detection can be further improved by deep learning, leveraging the high capacity of deep convolutional networks. In this study, we push the performance limit further. Specifically, the network proposed in @cite_38 has no explicit mechanism to handle occlusion, so the detector fails on faces with heavy occlusions, as acknowledged by the authors. In contrast, our two-stage architecture has its first stage designated to handle partial occlusions. In addition, our network gains improved efficiency by adopting the more recent fully convolutional architecture, in contrast to the previous work, which relies on the conventional sliding-window approach to obtain the final face detector.
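The efficiency argument for fully convolutional evaluation over per-window sliding evaluation is that dense window scores are exactly a cross-correlation, computed once over the whole image instead of once per window. A toy sketch of that equivalence follows; a single linear filter stands in for the network, which is an assumption made only for illustration.

```python
import numpy as np

def sliding_window_scores(img, w):
    """Score every window by a dot product with filter `w` --
    the per-window loop a conventional sliding-window detector runs."""
    kh, kw = w.shape
    H = img.shape[0] - kh + 1
    W = img.shape[1] - kw + 1
    out = np.empty((H, W))
    for y in range(H):
        for x in range(W):
            out[y, x] = (img[y:y + kh, x:x + kw] * w).sum()
    return out

def convolutional_scores(img, w):
    """The same scores computed as one dense cross-correlation over the
    full image -- the view behind fully convolutional detectors."""
    kh, kw = w.shape
    # Gather all windows at once (a strided view, no copy) and contract
    # each window with the filter in a single einsum.
    s = np.lib.stride_tricks.sliding_window_view(img, (kh, kw))
    return np.einsum('yxij,ij->yx', s, w)

img = np.arange(36, dtype=float).reshape(6, 6)
w = np.ones((3, 3))
# Both paths produce identical score maps; the convolutional one shares work.
```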
{ "cite_N": [ "@cite_38" ], "mid": [ "1970456555" ], "abstract": [ "In this paper we consider the problem of multi-view face detection. While there has been significant research on this problem, current state-of-the-art approaches for this task require annotation of facial landmarks, e.g. TSM [25], or annotation of face poses [28, 22]. They also require training dozens of models to fully capture faces in all orientations, e.g. 22 models in HeadHunter method [22]. In this paper we propose Deep Dense Face Detector (DDFD), a method that does not require pose landmark annotation and is able to detect faces in a wide range of orientations using a single model based on deep convolutional neural networks. The proposed method has minimal complexity; unlike other recent deep learning object detection methods [9], it does not require additional components such as segmentation, bounding-box regression, or SVM classifiers. Furthermore, we analyzed scores of the proposed face detector for faces in different orientations and found that 1) the proposed method is able to detect faces from different angles and can handle occlusion to some extent, 2) there seems to be a correlation between distribution of positive examples in the training set and scores of the proposed face detector. The latter suggests that the proposed method's performance can be further improved by using better sampling strategies and more sophisticated data augmentation techniques. Evaluations on popular face detection benchmark datasets show that our single-model face detector algorithm has similar or better performance compared to the previous methods, which are more complex and require annotations of either different poses or facial landmarks." ] }
1509.06451
2950557924
In this paper, we propose a novel deep convolutional network (DCN) that achieves outstanding performance on FDDB, PASCAL Face, and AFW. Specifically, our method achieves a high recall rate of 90.99% on the challenging FDDB benchmark, outperforming the state-of-the-art method by a large margin of 2.91%. Importantly, we consider finding faces from a new perspective through scoring facial parts responses by their spatial structure and arrangement. The scoring mechanism is carefully formulated considering challenging cases where faces are only partially visible. This consideration allows our network to detect faces under severe occlusion and unconstrained pose variation, which are the main difficulty and bottleneck of most existing face detection approaches. We show that despite the use of DCN, our network can achieve practical runtime speed.
The first stage of our model is partially inspired by the generic object proposal approaches @cite_5 @cite_13 @cite_21 . Generic object proposal generators are now an indispensable component of standard object detection algorithms, providing high-quality, category-independent bounding boxes. These generic methods, however, are devoted to generic objects and are therefore not suited to proposing windows specific to faces. In particular, applying a generic proposal generator directly would produce an enormous number of candidate windows, only a minority of which contain faces. In addition, a generic method does not consider the unique structure and parts of the face, so there is no principled mechanism to recall faces when a face is only partially visible. These shortcomings motivate us to formulate the new faceness measure, which achieves high recall on faces whilst reducing the number of candidate windows to half the original.
{ "cite_N": [ "@cite_5", "@cite_21", "@cite_13" ], "mid": [ "1991367009", "7746136", "2088049833" ], "abstract": [ "We propose a unified approach for bottom-up hierarchical image segmentation and object candidate generation for recognition, called Multiscale Combinatorial Grouping (MCG). For this purpose, we first develop a fast normalized cuts algorithm. We then propose a high-performance hierarchical segmenter that makes effective use of multiscale information. Finally, we propose a grouping strategy that combines our multiscale regions into highly-accurate object candidates by exploring efficiently their combinatorial space. We conduct extensive experiments on both the BSDS500 and on the PASCAL 2012 segmentation datasets, showing that MCG produces state-of-the-art contours, hierarchical regions and object candidates.", "The use of object proposals is an effective recent approach for increasing the computational efficiency of object detection. We propose a novel method for generating object bounding box proposals using edges. Edges provide a sparse yet informative representation of an image. Our main observation is that the number of contours that are wholly contained in a bounding box is indicative of the likelihood of the box containing an object. We propose a simple box objectness score that measures the number of edges that exist in the box minus those that are members of contours that overlap the box’s boundary. Using efficient data structures, millions of candidate boxes can be evaluated in a fraction of a second, returning a ranked set of a few thousand top-scoring proposals. Using standard metrics, we show results that are significantly more accurate than the current state-of-the-art while being faster to compute. In particular, given just 1000 proposals we achieve over 96% object recall at overlap threshold of 0.5 and over 75% recall at the more challenging overlap of 0.7. 
Our approach runs in 0.25 seconds and we additionally demonstrate a near real-time variant with only minor loss in accuracy.", "This paper addresses the problem of generating possible object locations for use in object recognition. We introduce selective search which combines the strength of both an exhaustive search and segmentation. Like segmentation, we use the image structure to guide our sampling process. Like exhaustive search, we aim to capture all possible object locations. Instead of a single technique to generate possible object locations, we diversify our search and use a variety of complementary image partitionings to deal with as many image conditions as possible. Our selective search results in a small set of data-driven, class-independent, high quality locations, yielding 99% recall and a Mean Average Best Overlap of 0.879 at 10,097 locations. The reduced number of locations compared to an exhaustive search enables the use of stronger machine learning techniques and stronger appearance models for object recognition. In this paper we show that our selective search enables the use of the powerful Bag-of-Words model for recognition. The selective search software is made publicly available (Software: http: disi.unitn.it uijlings SelectiveSearch.html )." ] }
1509.06807
2400131829
The scarcity of data annotated at the desired level of granularity is a recurring issue in many applications. Significant amounts of effort have been devoted to developing weakly supervised methods tailored to each individual setting, which are often carefully designed to take advantage of the particular properties of the weak supervision regime, the form of available data, and prior knowledge of the task at hand. Unfortunately, it is difficult to adapt these methods to new tasks and/or forms of data, which often require different weak supervision regimes or models. We present a general-purpose method that can solve any weakly supervised learning problem irrespective of the weak supervision regime or the model. The proposed method turns any off-the-shelf strongly supervised classifier into a weakly supervised classifier and allows the user to specify any arbitrary weak supervision regime via a loss function. We apply the method to several different weak supervision regimes and demonstrate competitive results compared to methods specifically engineered for those settings.
One other notable setting that has been studied is the learning with label proportions (LLP) regime, where the bag label is the proportion of positive instances in the bag. A variety of methods have been developed, such as approaches based on graphical model formulations @cite_20 , @math -means based methods @cite_3 , support vector machines @cite_21 @cite_19 , and estimations of the mean operator @cite_17 . The LLP regime has found applications in fraud detection @cite_21 and video event detection @cite_4 .
{ "cite_N": [ "@cite_4", "@cite_21", "@cite_3", "@cite_19", "@cite_20", "@cite_17" ], "mid": [ "2063438554", "", "", "2185563688", "2166886337", "1607038179" ], "abstract": [ "Video event detection allows intelligent indexing of video content based on events. Traditional approaches extract features from video frames or shots, then quantize and pool the features to form a single vector representation for the entire video. Though simple and efficient, the final pooling step may lead to loss of temporally local information, which is important in indicating which part in a long video signifies presence of the event. In this work, we propose a novel instance-based video event detection approach. We represent each video as multiple 'instances', defined as video segments of different temporal intervals. The objective is to learn an instance-level event detection model based on only video-level labels. To solve this problem, we propose a large-margin formulation which treats the instance labels as hidden latent variables, and simultaneously infers the instance labels as well as the instance-level classification model. Our framework infers optimal solutions that assume positive videos have a large number of positive instances while negative videos have the fewest ones. Extensive experiments on large-scale video event datasets demonstrate significant performance gains. The proposed method is also useful in explaining the detection results by localizing the temporal segments in a video which is responsible for the positive detection.", "", "", "We study the problem of learning with label proportions in which the training data is provided in groups and only the proportion of each class in each group is known. We propose a new method called proportion-SVM, or ∝SVM, which explicitly models the latent unknown instance labels together with the known group label proportions in a large-margin framework. 
Unlike the existing works, our approach avoids making restrictive assumptions about the data. The ∝SVM model leads to a non-convex integer programming problem. In order to solve it efficiently, we propose two algorithms: one based on simple alternating optimization and the other based on a convex relaxation. Extensive experiments on standard datasets show that ∝SVM outperforms the state-of-the-art, especially for larger group sizes.", "We propose a new problem formulation which is similar to, but more informative than, the binary multiple-instance learning problem. In this setting, we are given groups of instances (described by feature vectors) along with estimates of the fraction of positively-labeled instances per group. The task is to learn an instance level classifier from this information. That is, we are trying to estimate the unknown binary labels of individuals from knowledge of group statistics. We propose a principled probabilistic model to solve this problem that accounts for uncertainty in the parameters and in the unknown individual labels. This model is trained with an efficient MCMC algorithm. Its performance is demonstrated on both synthetic and real-world data arising in general object recognition.", "Consider the following problem: given sets of unlabeled observations, each set with known label proportions, predict the labels of another set of observations, possibly with known label proportions. This problem occurs in areas like e-commerce, politics, spam filtering and improper content detection. We present consistent estimators which can reconstruct the correct labels with high probability in a uniform convergence sense. Experiments show that our method works well in practice." ] }
1509.06720
2273943137
One major challenge for 3D pose estimation from a single RGB image is the acquisition of sufficient training data. In particular, collecting large amounts of training data that contain unconstrained images and are annotated with accurate 3D poses is infeasible. We therefore propose to use two independent training sources. The first source consists of images with annotated 2D poses and the second source consists of accurate 3D motion capture data. To integrate both sources, we propose a dual-source approach that combines 2D pose estimation with efficient and robust 3D pose retrieval. In our experiments, we show that our approach achieves state-of-the-art results when both sources are from the same dataset, but it also achieves competitive results when the motion capture data is taken from a different dataset.
The 3D pictorial structure model (PSM) proposed in @cite_30 combines generative and discriminative methods. Regression forests are trained to estimate the probabilities of 3D joint locations and the final 3D pose is inferred by the PSM. Since inference is performed in 3D, the bounding volume of the 3D pose space needs to be known and the inference requires a few minutes per frame.
{ "cite_N": [ "@cite_30" ], "mid": [ "2054820429" ], "abstract": [ "In this work we address the problem of estimating the 3D human pose from a single RGB image, which is a challenging problem since different 3D poses may have similar 2D projections. Following the success of regression forests for 3D pose estimation from depth data or 2D pose estimation from RGB images, we extend regression forests to infer missing depth data of image features and 3D pose simultaneously. Since we do not observe depth for inference or training directly, we hypothesize the depth of the features by sweeping with a plane through the 3D volume of potential joint locations. The regression forests are then combined with a pictorial structure framework, which is extended to 3D. The approach is evaluated on two challenging benchmarks where stateof-the-art performance is achieved." ] }
1509.06720
2273943137
One major challenge for 3D pose estimation from a single RGB image is the acquisition of sufficient training data. In particular, collecting large amounts of training data that contain unconstrained images and are annotated with accurate 3D poses is infeasible. We therefore propose to use two independent training sources. The first source consists of images with annotated 2D poses and the second source consists of accurate 3D motion capture data. To integrate both sources, we propose a dual-source approach that combines 2D pose estimation with efficient and robust 3D pose retrieval. In our experiments, we show that our approach achieves state-of-the-art results when both sources are from the same dataset, but it also achieves competitive results when the motion capture data is taken from a different dataset.
Action specific priors learned from the MoCap data have also been proposed for 3D pose tracking @cite_28 @cite_8 . These approaches, however, are more constrained by assuming that the type of motion is known in advance.
{ "cite_N": [ "@cite_28", "@cite_8" ], "mid": [ "2097412577", "1997500560" ], "abstract": [ "We advocate the use of Gaussian Process Dynamical Models (GPDMs) for learning human pose and motion priors for 3D people tracking. A GPDM provides a lowdimensional embedding of human motion data, with a density function that gives higher probability to poses and motions close to the training data. With Bayesian model averaging a GPDM can be learned from relatively small amounts of data, and it generalizes gracefully to motions outside the training set. Here we modify the GPDM to permit learning from motions with significant stylistic variation. The resulting priors are effective for tracking a range of human walking styles, despite weak and noisy image measurements and significant occlusions.", "Automatic recovery of 3D human pose from monocular image sequences is a challenging and important research topic with numerous applications. Although current methods are able to recover 3D pose for a single person in controlled environments, they are severely challenged by real-world scenarios, such as crowded street scenes. To address this problem, we propose a three-stage process building on a number of recent advances. The first stage obtains an initial estimate of the 2D articulation and viewpoint of the person from single frames. The second stage allows early data association across frames based on tracking-by-detection. These two stages successfully accumulate the available 2D image evidence into robust estimates of 2D limb positions over short image sequences (= tracklets). The third and final stage uses those tracklet-based estimates as robust image observations to reliably recover 3D pose. We demonstrate state-of-the-art performance on the HumanEva II benchmark, and also show the applicability of our approach to articulated 3D tracking in realistic street conditions." ] }
1509.06658
2953141364
We propose a novel image representation, termed Attribute-Graph, to rank images by their semantic similarity to a given query image. An Attribute-Graph is an undirected fully connected graph, incorporating both local and global image characteristics. The graph nodes characterise objects as well as the overall scene context using mid-level semantic attributes, while the edges capture the object topology. We demonstrate the effectiveness of Attribute-Graphs by applying them to the problem of image ranking. We benchmark the performance of our algorithm on the 'rPascal' and 'rImageNet' datasets, which we have created in order to evaluate the ranking performance on complex queries containing multiple objects. Our experimental evaluation shows that modelling images as Attribute-Graphs results in improved ranking performance over existing techniques.
Mid-level features have long been used for various tasks in the field of computer vision. The recent surge in the use of attributes has reiterated the efficacy of such features. Farhadi @cite_2 showed the usefulness of attributes for image description, object classification and abnormality detection. Extending the applicability of attributes beyond objects, Patterson @cite_25 demonstrated their use for scene description and classification. Attributes have also been employed for image understanding @cite_37 @cite_6 , web search @cite_44 @cite_26 , zero-shot learning @cite_38 @cite_31 , action recognition @cite_33 and human-computer interaction @cite_24 .
{ "cite_N": [ "@cite_38", "@cite_37", "@cite_26", "@cite_33", "@cite_6", "@cite_44", "@cite_24", "@cite_2", "@cite_31", "@cite_25" ], "mid": [ "2950497525", "2032699694", "2155855695", "2117103983", "2914449749", "2097053051", "2150793046", "2098411764", "", "2070148066" ], "abstract": [ "In principle, zero-shot learning makes it possible to train a recognition model simply by specifying the category's attributes. For example, with classifiers for generic attributes like and , one can construct a classifier for the zebra category by enumerating which properties it possesses---even without providing zebra training images. In practice, however, the standard zero-shot paradigm suffers because attribute predictions in novel images are hard to get right. We propose a novel random forest approach to train zero-shot models that explicitly accounts for the unreliability of attribute predictions. By leveraging statistics about each attribute's error tendencies, our method obtains more robust discriminative models for the unseen classes. We further devise extensions to handle the few-shot scenario and unreliable attribute descriptions. On three datasets, we demonstrate the benefit for visual category learning with zero or few training examples, a critical domain for rare categories or categories defined on the fly.", "In this paper we present the first large-scale scene attribute database. First, we perform crowdsourced human studies to find a taxonomy of 102 discriminative attributes. We discover attributes related to materials, surface properties, lighting, affordances, and spatial layout. Next, we build the \"SUN attribute database\" on top of the diverse SUN categorical database. We use crowdsourcing to annotate attributes for 14,340 images from 707 scene categories. We perform numerous experiments to study the interplay between scene attributes and scene categories. 
We train and evaluate attribute classifiers and then study the feasibility of attributes as an intermediate scene representation for scene classification, zero shot learning, automatic image captioning, semantic image search, and parsing natural images. We show that when used as features for these tasks, low dimensional scene attributes can compete with or improve on the state of the art performance. The experiments suggest that scene attributes are an effective low-dimensional feature for capturing high-level context and semantics in scenes.", "In interactive image search, a user iteratively refines his results by giving feedback on exemplar images. Active selection methods aim to elicit useful feedback, but traditional approaches suffer from expensive selection criteria and cannot predict in formativeness reliably due to the imprecision of relevance feedback. To address these drawbacks, we propose to actively select \"pivot\" exemplars for which feedback in the form of a visual comparison will most reduce the system's uncertainty. For example, the system might ask, \"Is your target image more or less crowded than this image?\" Our approach relies on a series of binary search trees in relative attribute space, together with a selection function that predicts the information gain were the user to compare his envisioned target to the next node deeper in a given attribute's tree. It makes interactive search more efficient than existing strategies-both in terms of the system's selection time as well as the user's feedback effort.", "We propose a new model for recognizing human attributes (e.g. wearing a suit, sitting, short hair) and actions (e.g. running, riding a horse) in still images. The proposed model relies on a collection of part templates which are learnt discriminatively to explain specific scale-space locations in the images (in human centric coordinates). It avoids the limitations of highly structured models, which consist of a few (i.e. 
a mixture of) 'average' templates. To learn our model, we propose an algorithm which automatically mines out parts and learns corresponding discriminative templates with their respective locations from a large number of candidate parts. We validate the method on recent challenging datasets: (i) Willow 7 actions [7], (ii) 27 Human Attributes (HAT) [25], and (iii) Stanford 40 actions [37]. We obtain convincing qualitative and state-of-the-art quantitative results on the three datasets.", "", "Current methods learn monolithic attribute predictors, with the assumption that a single model is sufficient to reflect human understanding of a visual attribute. However, in reality, humans vary in how they perceive the association between a named property and image content. For example, two people may have slightly different internal models for what makes a shoe look \"formal\", or they may disagree on which of two scenes looks \"more cluttered\". Rather than discount these differences as noise, we propose to learn user-specific attribute models. We adapt a generic model trained with annotations from multiple users, tailoring it to satisfy user-specific labels. Furthermore, we propose novel techniques to infer user-specific labels based on transitivity and contradictions in the user's search history. 
We demonstrate that adapted attributes improve accuracy over both existing monolithic models as well as models that learn from scratch with user-specific data alone. In addition, we show how adapted attributes are useful to personalize image search, whether with binary or relative attributes.", "User feedback helps an image search system refine its relevance predictions, tailoring the search towards the user's preferences. Existing methods simply take feedback at face value: clicking on an image means the user wants things like it, commenting that an image lacks a specific attribute means the user wants things that have it. However, we expect there is actually more information behind the user's literal feedback. In particular, a user's (possibly subconscious) search strategy leads him to comment on certain images rather than others, based on how any of the visible candidate images compare to the desired content. For example, he may be more likely to give negative feedback on an irrelevant image that is relatively close to his target, as opposed to bothering with one that is altogether different. We introduce novel features to capitalize on such implied feedback cues, and learn a ranking function that uses them to improve the system's relevance estimates. We validate the approach with real users searching for shoes, faces, or scenes using two different modes of feedback: binary relevance feedback and relative attributes-based feedback. The results show that retrieval improves significantly when the system accounts for the learned behaviors. We show that the nuances learned are domain-invariant, and useful for both generic user-independent search as well as personalized user-specific search.", "We propose to shift the goal of recognition from naming to describing. 
Doing so allows us not only to name familiar objects, but also: to report unusual aspects of a familiar object (“spotty dog”, not just “dog”); to say something about unfamiliar objects (“hairy and four-legged”, not just “unknown”); and to learn how to recognize new objects with few or no visual examples. Rather than focusing on identity assignment, we make inferring attributes the core problem of recognition. These attributes can be semantic (“spotty”) or discriminative (“dogs have it but sheep do not”). Learning attributes presents a major new challenge: generalization across object categories, not just across instances within a category. In this paper, we also introduce a novel feature selection method for learning attributes that generalize well across categories. We support our claims by thorough evaluation that provides insights into the limitations of the standard recognition paradigm of naming and demonstrates the new abilities provided by our attribute-based framework.", "", "In this paper we present the first large-scale scene attribute database. First, we perform crowd-sourced human studies to find a taxonomy of 102 discriminative attributes. Next, we build the “SUN attribute database” on top of the diverse SUN categorical database. Our attribute database spans more than 700 categories and 14,000 images and has potential for use in high-level scene understanding and fine-grained scene recognition. We use our dataset to train attribute classifiers and evaluate how well these relatively simple classifiers can recognize a variety of attributes related to materials, surface properties, lighting, functions and affordances, and spatial envelope properties." ] }
1509.06658
2953141364
We propose a novel image representation, termed Attribute-Graph, to rank images by their semantic similarity to a given query image. An Attribute-Graph is an undirected fully connected graph, incorporating both local and global image characteristics. The graph nodes characterise objects as well as the overall scene context using mid-level semantic attributes, while the edges capture the object topology. We demonstrate the effectiveness of Attribute-Graphs by applying them to the problem of image ranking. We benchmark the performance of our algorithm on the 'rPascal' and 'rImageNet' datasets, which we have created in order to evaluate the ranking performance on complex queries containing multiple objects. Our experimental evaluation shows that modelling images as Attribute-Graphs results in improved ranking performance over existing techniques.
Content-based image retrieval techniques such as @cite_18 @cite_41 use image queries. Zheng @cite_41 couple complementary SIFT and colour features in a multidimensional inverted index to improve precision, while adopting multiple assignment to improve recall. Douze @cite_18 use attributes in combination with Fisher vectors of a query image to perform retrieval. These techniques obtain a single global representation for an image, and fail to consider the objects in the image and their local characteristics. Cao @cite_42 perform image ranking by constructing triangular object structures with attribute features. However, they fail to take into account other important aspects such as the global scene context. We compare the proposed method with the works of Douze and Cao in Sec. .
{ "cite_N": [ "@cite_41", "@cite_18", "@cite_42" ], "mid": [ "2031332477", "", "2160892771" ], "abstract": [ "In Bag-of-Words (BoW) based image retrieval, the SIFT visual word has a low discriminative power, so false positive matches occur prevalently. Apart from the information loss during quantization, another cause is that the SIFT feature only describes the local gradient distribution. To address this problem, this paper proposes a coupled Multi-Index (c-MI) framework to perform feature fusion at indexing level. Basically, complementary features are coupled into a multi-dimensional inverted index. Each dimension of c-MI corresponds to one kind of feature, and the retrieval process votes for images similar in both SIFT and other feature spaces. Specifically, we exploit the fusion of local color feature into c-MI. While the precision of visual match is greatly enhanced, we adopt Multiple Assignment to improve recall. The joint cooperation of SIFT and color features significantly reduces the impact of false positive matches. Extensive experiments on several benchmark datasets demonstrate that c-MI improves the retrieval accuracy significantly, while consuming only half of the query time compared to the baseline. Importantly, we show that c-MI is well complementary to many prior techniques. Assembling these methods, we have obtained an mAP of 85.8 and N-S score of 3.85 on Holidays and Ukbench datasets, respectively, which compare favorably with the state-of-the-arts.", "", "In image retrieval, users' search intention is usually specified by textual queries, exemplar images, concept maps, and even sketches, which can only express the search intention partially. These query strategies lack the abilities to indicate the Regions Of Interests (ROIs) and represent the spatial or semantic correlations among the ROIs, which results in the so-called semantic gap between users' search intention and images' low-level visual content. 
In this paper, we propose a novel image search method, which allows the users to indicate any number of Regions Of Interest (ROIs) within the query as well as utilize various semantic concepts and spatial relations to search images. Specifically, we firstly propose a structured descriptor to jointly represent the categories, attributes, and spatial relations among objects. Then, based on the defined descriptor, our method ranks the images in the database according to the matching scores w.r.t. the category, attribute, and spatial relations. We conduct the experiments on the aPascal and aYahoo datasets, and experimental results show the advantage of the proposed method compared to the state of the arts." ] }
1509.06053
2287841014
Text classification is a widely studied problem, and it can be considered solved for some domains and under certain circumstances. There are scenarios, however, that have received little or no attention at all, despite their relevance and applicability. One such scenario is early text classification, where one needs to know the category of a document by using partial information only. A document is processed as a sequence of terms, and the goal is to devise a method that can make predictions as fast as possible. The importance of this variant of the text classification problem is evident in domains like sexual predator detection, where one wants to identify an offender as early as possible. This paper analyzes the suitability of the standard naive Bayes classifier for approaching this problem. Specifically, we assess its performance when classifying documents after seeing an increasing number of terms. A simple modification to the standard naive Bayes implementation allows us to make predictions with partial information. To the best of our knowledge, naive Bayes has not been used for this purpose before. Through an extensive experimental evaluation we show the effectiveness of the classifier for early text classification. What is more, we show that this simple solution is very competitive when compared with more elaborate state-of-the-art methodologies. We foresee that our work will pave the way for the development of more effective early text classification techniques based on the naive Bayes formulation.
Naive Bayes has been used extensively in text mining and within machine learning in general because of its high performance in several domains; accordingly, several modifications and extensions have been proposed to augment the scope of the classifier. Related to our work, the following extensions have been reported in the literature. Relaxing the independence assumption is perhaps the most studied topic in terms of extending the mentioned classifier. The independence assumption may be too strong for some domains and applications; therefore, several works have been proposed that try to relax it. Most notably, the TAN @cite_18 , AODE @cite_15 , and WANBIA @cite_6 extensions have reported outstanding results. Nevertheless, the focus there is on relaxing the attribute independence assumption, not on working with partial information. One should note, however, that these extended versions of naive Bayes can be well suited for early text classification, as attribute-dependency information can help the algorithm classify texts earlier.
{ "cite_N": [ "@cite_18", "@cite_6", "@cite_15" ], "mid": [ "", "2108626773", "2111574971" ], "abstract": [ "", "Despite the simplicity of the Naive Bayes classifier, it has continued to perform well against more sophisticated newcomers and has remained, therefore, of great interest to the machine learning community. Of numerous approaches to refining the naive Bayes classifier, attribute weighting has received less attention than it warrants. Most approaches, perhaps influenced by attribute weighting in other machine learning algorithms, use weighting to place more emphasis on highly predictive attributes than those that are less predictive. In this paper, we argue that for naive Bayes attribute weighting should instead be used to alleviate the conditional independence assumption. Based on this premise, we propose a weighted naive Bayes algorithm, called WANBIA, that selects weights to minimize either the negative conditional log likelihood or the mean squared error objective functions. We perform extensive evaluations and find that WANBIA is a competitive alternative to state of the art classifiers like Random Forest, Logistic Regression and A1DE.", "Of numerous proposals to improve the accuracy of naive Bayes by weakening its attribute independence assumption, both LBR and Super-Parent TAN have demonstrated remarkable error performance. However, both techniques obtain this outcome at a considerable computational cost. We present a new approach to weakening the attribute independence assumption by averaging all of a constrained class of classifiers. In extensive experiments this technique delivers comparable prediction accuracy to LBR and Super-Parent TAN with substantially improved computational efficiency at test time relative to the former and at training time relative to the latter. The new algorithm is shown to have low variance and is suited to incremental learning." ] }
1509.06086
2201691068
This paper studies deep network architectures to address the problem of video classification. A multi-stream framework is proposed to fully utilize the rich multimodal information in videos. Specifically, we first train three Convolutional Neural Networks to model spatial, short-term motion and audio clues respectively. Long Short Term Memory networks are then adopted to explore long-term temporal dynamics. With the outputs of the individual streams, we propose a simple and effective fusion method to generate the final predictions, where the optimal fusion weights are learned adaptively for each class, and the learning process is regularized by automatically estimated class relationships. Our contributions are two-fold. First, the proposed multi-stream framework is able to exploit multimodal features that are more comprehensive than those previously attempted. Second, we demonstrate that the adaptive fusion method using the class relationship as a regularizer outperforms traditional alternatives that estimate the weights in a "free" fashion. Our framework produces significantly better results than the state of the art on two popular benchmarks, 92.2% on UCF-101 (without using audio) and 84.9% on Columbia Consumer Videos.
Motivated by the promising results of deep networks (particularly ConvNets) on image analysis tasks @cite_47 @cite_18 @cite_8 , several works have exploited deep architectures for video classification. Ji et al. extended CNN models into spatial-temporal space by operating on stacked video frames @cite_38 . Karpathy et al. compared several architectures for action recognition @cite_41 . Tran et al. proposed to learn generic spatial-temporal features which can be computed efficiently @cite_22 . Simonyan and Zisserman @cite_9 introduced an interesting two-stream approach, where two ConvNets are trained to explicitly capture spatial and short-term motion information using frames and stacked optical flows as inputs, respectively. Final predictions can be obtained by linearly averaging the prediction scores of the two ConvNets. In this paper, we also adopt two similar ConvNets as in @cite_9 . However, as the two-stream approach is not able to model the auditory and the long-term temporal clues, we adopt additional networks to build a more comprehensive framework. A novel fusion method is also proposed to combine the multi-stream outputs, which is better than the simple linear fusion used in @cite_9 .
{ "cite_N": [ "@cite_38", "@cite_18", "@cite_22", "@cite_8", "@cite_41", "@cite_9", "@cite_47" ], "mid": [ "1983364832", "1686810756", "2122476475", "2102605133", "2308045930", "2952186347", "2950179405" ], "abstract": [ "We consider the automated recognition of human actions in surveillance videos. Most current methods build classifiers based on complex handcrafted features computed from the raw inputs. Convolutional neural networks (CNNs) are a type of deep model that can act directly on the raw inputs. However, such models are currently limited to handling 2D inputs. In this paper, we develop a novel 3D CNN model for action recognition. This model extracts features from both the spatial and the temporal dimensions by performing 3D convolutions, thereby capturing the motion information encoded in multiple adjacent frames. The developed model generates multiple channels of information from the input frames, and the final feature representation combines information from all channels. To further boost the performance, we propose regularizing the outputs with high-level features and combining the predictions of a variety of different models. We apply the developed models to recognize human actions in the real-world environment of airport surveillance videos, and they achieve superior performance in comparison to baseline methods.", "In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. 
We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.", "", "Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.
Second, we demonstrate that a ConvNet trained on multi-frame dense optical flow is able to achieve very good performance in spite of limited training data. Finally, we show that multi-task learning, applied to two different action classification datasets, can be used to increase the amount of training data and improve the performance on both. Our architecture is trained and evaluated on the standard video actions benchmarks of UCF-101 and HMDB-51, where it is competitive with the state of the art. It also exceeds by a large margin previous attempts to use deep nets for video classification.", "We propose a deep convolutional neural network architecture codenamed \"Inception\", which was responsible for setting the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC 2014). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. This was achieved by a carefully crafted design that allows for increasing the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC 2014 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection." ] }
1509.06086
2201691068
This paper studies deep network architectures to address the problem of video classification. A multi-stream framework is proposed to fully utilize the rich multimodal information in videos. Specifically, we first train three Convolutional Neural Networks to model spatial, short-term motion and audio clues respectively. Long Short Term Memory networks are then adopted to explore long-term temporal dynamics. With the outputs of the individual streams, we propose a simple and effective fusion method to generate the final predictions, where the optimal fusion weights are learned adaptively for each class, and the learning process is regularized by automatically estimated class relationships. Our contributions are two-fold. First, the proposed multi-stream framework is able to exploit multimodal features that are more comprehensive than those previously attempted. Second, we demonstrate that the adaptive fusion method using the class relationship as a regularizer outperforms traditional alternatives that estimate the weights in a "free" fashion. Our framework produces significantly better results than the state of the art on two popular benchmarks, 92.2% on UCF-101 (without using audio) and 84.9% on Columbia Consumer Videos.
The RNN has been shown to be effective on many sequential modeling tasks, such as speech recognition @cite_11 and image/video description @cite_10 @cite_24 . For long-term temporal modeling of the video data, Srivastava et al. proposed an LSTM encoder-decoder framework to learn video representations in an unsupervised manner @cite_14 . Donahue et al. @cite_10 and Wu et al. @cite_25 trained a two-layer LSTM network for action classification. Ng et al. @cite_42 further demonstrated that a five-layer LSTM network is slightly better.
{ "cite_N": [ "@cite_14", "@cite_42", "@cite_24", "@cite_10", "@cite_25", "@cite_11" ], "mid": [ "2952453038", "", "1492731187", "2951183276", "2161565164", "2950689855" ], "abstract": [ "We use multilayer Long Short Term Memory (LSTM) networks to learn representations of video sequences. Our model uses an encoder LSTM to map an input sequence into a fixed length representation. This representation is decoded using single or multiple decoder LSTMs to perform different tasks, such as reconstructing the input sequence, or predicting the future sequence. We experiment with two kinds of input sequences - patches of image pixels and high-level representations (\"percepts\") of video frames extracted using a pretrained convolutional net. We explore different design choices such as whether the decoder LSTMs should condition on the generated output. We analyze the outputs of the model qualitatively to see how well the model can extrapolate the learned video representation into the future and into the past. We try to visualize and interpret the learned features. We stress test the model by running it on longer time scales and on out-of-domain data. We further evaluate the representations by finetuning them for a supervised learning problem - human action recognition on the UCF-101 and HMDB-51 datasets. We show that the representations help improve classification accuracy, especially when there are only a few training examples. Even models pretrained on unrelated datasets (300 hours of YouTube videos) can help action recognition performance.", "", "Recent progress in using recurrent neural networks (RNNs) for image description has motivated us to explore the application of RNNs to video description. Recent work has also suggested that attention mechanisms may be able to increase performance. To this end, we apply a long short-term memory (LSTM) network in two configurations: with a recently introduced soft-attention mechanism, and without. Our results suggest two things. 
First, incorporating a soft-attention mechanism into the text generation RNN significantly improves the quality of the descriptions. Second, using a combination of still frame features and dynamic motion-based features can also help. Ultimately, our combined approach exceeds the state-of-art on both BLEU and Meteor on the Youtube2Text dataset. We also present results on a new, larger and more complex dataset of paired video and natural language descriptions based on the use of Descriptive Video Service (DVS) annotations which are now widely available as an additional audio track on many DVDs.", "Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent, or \"temporally deep\", are effective for tasks involving sequences, visual and otherwise. We develop a novel recurrent convolutional architecture suitable for large-scale visual learning which is end-to-end trainable, and demonstrate the value of these models on benchmark video recognition tasks, image description and retrieval problems, and video narration challenges. In contrast to current models which assume a fixed spatio-temporal receptive field or simple temporal averaging for sequential processing, recurrent convolutional models are \"doubly deep\"' in that they can be compositional in spatial and temporal \"layers\". Such models may have advantages when target concepts are complex and or training data are limited. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Long-term RNN models are appealing in that they directly can map variable-length inputs (e.g., video frames) to variable length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. 
Our recurrent long-term models are directly connected to modern visual convnet models and can be jointly trained to simultaneously learn temporal dynamics and convolutional perceptual representations. Our results show such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined and or optimized.", "Classifying videos according to content semantics is an important problem with a wide range of applications. In this paper, we propose a hybrid deep learning framework for video classification, which is able to model static spatial information, short-term motion, as well as long-term temporal clues in the videos. Specifically, the spatial and the short-term motion features are extracted separately by two Convolutional Neural Networks (CNN). These two types of CNN-based features are then combined in a regularized feature fusion network for classification, which is able to learn and utilize feature relationships for improved performance. In addition, Long Short Term Memory (LSTM) networks are applied on top of the two features to further model longer-term temporal clues. The main contribution of this work is the hybrid learning framework that can model several important aspects of the video data. We also show that (1) combining the spatial and the short-term motion features in the regularized fusion network is better than direct classification and fusion using the CNN with a softmax layer, and (2) the sequence-based LSTM is highly complementary to the traditional classification strategy without considering the temporal frame orders. Extensive experiments are conducted on two popular and challenging benchmarks, the UCF-101 Human Actions and the Columbia Consumer Videos (CCV). On both benchmarks, our framework achieves very competitive performance: 91.3 on the UCF-101 and 83.5 on the CCV.", "Recurrent neural networks (RNNs) are a powerful model for sequential data. 
End-to-end training methods such as Connectionist Temporal Classification make it possible to train RNNs for sequence labelling problems where the input-output alignment is unknown. The combination of these methods with the Long Short-term Memory RNN architecture has proved particularly fruitful, delivering state-of-the-art results in cursive handwriting recognition. However, RNN performance in speech recognition has so far been disappointing, with better results returned by deep feedforward networks. This paper investigates deep recurrent neural networks, which combine the multiple levels of representation that have proved so effective in deep networks with the flexible use of long range context that empowers RNNs. When trained end-to-end with suitable regularisation, we find that deep Long Short-term Memory RNNs achieve a test set error of 17.7% on the TIMIT phoneme recognition benchmark, which to our knowledge is the best recorded score." ] }
1509.06086
2201691068
This paper studies deep network architectures to address the problem of video classification. A multi-stream framework is proposed to fully utilize the rich multimodal information in videos. Specifically, we first train three Convolutional Neural Networks to model spatial, short-term motion and audio clues respectively. Long Short Term Memory networks are then adopted to explore long-term temporal dynamics. With the outputs of the individual streams, we propose a simple and effective fusion method to generate the final predictions, where the optimal fusion weights are learned adaptively for each class, and the learning process is regularized by automatically estimated class relationships. Our contributions are two-fold. First, the proposed multi-stream framework is able to exploit multimodal features that are more comprehensive than those previously attempted. Second, we demonstrate that the adaptive fusion method using the class relationship as a regularizer outperforms traditional alternatives that estimate the weights in a "free" fashion. Our framework produces significantly better results than the state of the art on two popular benchmarks, 92.2% on UCF-101 (without using audio) and 84.9% on Columbia Consumer Videos.
Fusion is needed to combine the outputs of separate prediction models. The simplest solution is linear weighted fusion, which has been adopted in many recent approaches like @cite_9 . Nandakumar et al. performed score fusion using a method called the likelihood ratio test @cite_28 . More recently, Xu et al. @cite_44 and Ye et al. @cite_29 proposed robust late fusion methods by seeking a low-rank matrix to remove the noise of individually trained classifiers. The authors of @cite_36 proposed to predict sample-specific weights in the fusion process.
{ "cite_N": [ "@cite_28", "@cite_36", "@cite_9", "@cite_29", "@cite_44" ], "mid": [ "2136885397", "", "2952186347", "2024051019", "2142521298" ], "abstract": [ "Multibiometric systems fuse information from different sources to compensate for the limitations in performance of individual matchers. We propose a framework for the optimal combination of match scores that is based on the likelihood ratio test. The distributions of genuine and impostor match scores are modeled as finite Gaussian mixture model. The proposed fusion approach is general in its ability to handle 1) discrete values in biometric match score distributions, 2) arbitrary scales and distributions of match scores, 3) correlation between the scores of multiple matchers, and 4) sample quality of multiple biometric sources. Experiments on three multibiometric databases indicate that the proposed fusion framework achieves consistently high performance compared to commonly used score fusion techniques based on score transformation and classification.", "", "We investigate architectures of discriminatively trained deep Convolutional Networks (ConvNets) for action recognition in video. The challenge is to capture the complementary information on appearance from still frames and motion between frames. We also aim to generalise the best performing hand-crafted features within a data-driven learning framework. Our contribution is three-fold. First, we propose a two-stream ConvNet architecture which incorporates spatial and temporal networks. Second, we demonstrate that a ConvNet trained on multi-frame dense optical flow is able to achieve very good performance in spite of limited training data. Finally, we show that multi-task learning, applied to two different action classification datasets, can be used to increase the amount of training data and improve the performance on both. 
Our architecture is trained and evaluated on the standard video actions benchmarks of UCF-101 and HMDB-51, where it is competitive with the state of the art. It also exceeds by a large margin previous attempts to use deep nets for video classification.", "In this paper, we propose a rank minimization method to fuse the predicted confidence scores of multiple models, each of which is obtained based on a certain kind of feature. Specifically, we convert each confidence score vector obtained from one model into a pairwise relationship matrix, in which each entry characterizes the comparative relationship of scores of two test samples. Our hypothesis is that the relative score relations are consistent among component models up to certain sparse deviations, despite the large variations that may exist in the absolute values of the raw scores. Then we formulate the score fusion problem as seeking a shared rank-2 pairwise relationship matrix based on which each original score matrix from individual model can be decomposed into the common rank-2 matrix and sparse deviation errors. A robust score vector is then extracted to fit the recovered low rank score relation matrix. We formulate the problem as a nuclear norm and l 1 norm optimization objective function and employ the Augmented Lagrange Multiplier (ALM) method for the optimization. Our method is isotonic (i.e., scale invariant) to the numeric scales of the scores originated from different models. We experimentally show that the proposed method achieves significant performance gains on various tasks including object categorization and video event detection.", "Fusion of multiple features can boost the performance of large-scale visual classification and detection tasks like TRECVID Multimedia Event Detection (MED) competition [1]. In this paper, we propose a novel feature fusion approach, namely Feature Weighting via Optimal Thresholding (FWOT) to effectively fuse various features. 
FWOT learns the weights, thresholding and smoothing parameters in a joint framework to combine the decision values obtained from all the individual features and the early fusion. To the best of our knowledge, this is the first work to consider the weight and threshold factors of fusion problem simultaneously. Compared to state-of-the-art fusion algorithms, our approach achieves promising improvements on HMDB [8] action recognition dataset and CCV [5] video classification dataset. In addition, experiments on two TRECVID MED 2011 collections show that our approach outperforms the state-of-the-art fusion methods for complex event detection." ] }
1509.06095
2235771077
We apply the multilayer bootstrap network (MBN), a recently proposed unsupervised learning method, to unsupervised speaker recognition. The proposed method first extracts supervectors from an unsupervised universal background model, then reduces the dimension of the high-dimensional supervectors by the multilayer bootstrap network, and finally conducts unsupervised speaker recognition by clustering the low-dimensional data. Comparison results with two unsupervised and one supervised speaker recognition techniques demonstrate the effectiveness and robustness of the proposed method.
The proposed method learns multilayer nonlinear transforms, which relates it to deep learning (a.k.a. multilayer neural networks)---a recently advanced topic in many speech processing fields, e.g., speaker recognition @cite_11 @cite_3 , speech recognition @cite_2 , speech separation and enhancement @cite_18 @cite_14 @cite_0 @cite_6 , speech synthesis @cite_15 , and voice activity detection @cite_7 @cite_21 . The aforementioned deep learning methods are all supervised and limited to neural networks, whereas the proposed method is unsupervised and different from neural networks.
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_7", "@cite_21", "@cite_3", "@cite_6", "@cite_0", "@cite_2", "@cite_15", "@cite_11" ], "mid": [ "", "2044893557", "1985242443", "2197404611", "", "2480088621", "", "2147768505", "2020024436", "" ], "abstract": [ "", "In contrast to the conventional minimum mean square error (MMSE)-based noise reduction techniques, we propose a supervised method to enhance speech by means of finding a mapping function between noisy and clean speech signals based on deep neural networks (DNNs). In order to be able to handle a wide range of additive noises in real-world situations, a large training set that encompasses many possible combinations of speech and noise types, is first designed. A DNN architecture is then employed as a nonlinear regression function to ensure a powerful modeling capability. Several techniques have also been proposed to improve the DNN-based speech enhancement system, including global variance equalization to alleviate the over-smoothing problem of the regression model, and the dropout and noise-aware training strategies to further improve the generalization capability of DNNs to unseen noise conditions. Experimental results demonstrate that the proposed framework can achieve significant improvements in both objective and subjective measures over the conventional MMSE based technique. It is also interesting to observe that the proposed DNN approach can well suppress highly nonstationary noise, which is tough to handle in general. Furthermore, the resulting DNN model, trained with artificial synthesized data, is also effective in dealing with noisy speech data recorded in real-world scenarios without the generation of the annoying musical artifact commonly observed in conventional enhancement methods.", "Fusing the advantages of multiple acoustic features is important for the robustness of voice activity detection (VAD). 
Recently, the machine-learning-based VADs have shown a superiority to traditional VADs on multiple feature fusion tasks. However, existing machine-learning-based VADs only utilize shallow models, which cannot explore the underlying manifold of the features. In this paper, we propose to fuse multiple features via a deep model, called deep belief network (DBN). DBN is a powerful hierarchical generative model for feature extraction. It can describe highly variant functions and discover the manifold of the features. We take the multiple serially-concatenated features as the input layer of DBN, and then extract a new feature by transferring these features through multiple nonlinear hidden layers. Finally, we predict the class of the new feature by a linear classifier. We further analyze that even a single-hidden-layer-based belief network is as powerful as the state-of-the-art models in the machine-learning-based VADs. In our empirical comparison, ten common features are used for performance analysis. Extensive experimental results on the AURORA2 corpus show that the DBN-based VAD not only outperforms eleven referenced VADs, but also can meet the real-time detection demand of VAD. The results also show that the DBN-based VAD can fuse the advantages of multiple features effectively.", "Voice activity detection (VAD) is an important topic in audio signal processing. Contextual information is important for improving the performance of VAD at low signal-to-noise ratios. Here we explore contextual information by machine learning methods at three levels. At the top level, we employ an ensemble learning framework, named multi-resolution stacking (MRS), which is a stack of ensemble classifiers. Each classifier in a building block inputs the concatenation of the predictions of its lower building blocks and the expansion of the raw acoustic feature by a given window (called a resolution). 
At the middle level, we describe a base classifier in MRS, named boosted deep neural network (bDNN). bDNN first generates multiple base predictions from different contexts of a single frame by only one DNN and then aggregates the base predictions for a better prediction of the frame, and it is different from computationally-expensive boosting methods that train ensembles of classifiers for multiple base predictions. At the bottom level, we employ the multi-resolution cochleagram feature, which incorporates the contextual information by concatenating the cochleagram features at multiple spectrotemporal resolutions. Experimental results show that the MRS-based VAD outperforms other VADs by a considerable margin. Moreover, when trained on a large amount of noise types and a wide range of signal-to-noise ratios, the MRS-based VAD demonstrates surprisingly good generalization performance on unseen test scenarios, approaching the performance with noise-dependent training.", "", "Abstract : Monaural speech separation is a fundamental problem in robust speech processing. Recently, deep neural network (DNN) based speech separation methods, which predict either clean speech or an ideal time-frequency mask, have demonstrated remarkable performance improvement. However, a single DNN with a given window length does not leverage contextual information sufficiently, and the differences between the two optimization objectives are not well understood. In this paper, we propose to stack ensembles of DNNs, named multi-resolution stacking, to address monaural speech separation. Each DNN in a module of the stack takes the concatenation of original acoustic features and expansion of the soft output of the lower module as its input, and predicts the ideal ratio mask of the target speaker. The DNNs in the same module explore different contexts by employing different window lengths. We have conducted extensive experiments with three speech corpora. 
The results demonstrate the effectiveness of the proposed method. We have also compared the two optimization objectives systematically and found that predicting the ideal time-frequency mask is more efficient in utilizing clean training speech, while predicting clean speech is less sensitive to SNR variations.", "", "We propose a novel context-dependent (CD) model for large-vocabulary speech recognition (LVSR) that leverages recent advances in using deep belief networks for phone recognition. We describe a pre-trained deep neural network hidden Markov model (DNN-HMM) hybrid architecture that trains the DNN to produce a distribution over senones (tied triphone states) as its output. The deep belief network pre-training algorithm is a robust and often helpful way to initialize deep neural networks generatively that can aid in optimization and reduce generalization error. We illustrate the key components of our model, describe the procedure for applying CD-DNN-HMMs to LVSR, and analyze the effects of various modeling choices on performance. Experiments on a challenging business search dataset demonstrate that CD-DNN-HMMs can significantly outperform the conventional context-dependent Gaussian mixture model (GMM)-HMMs, with an absolute sentence accuracy improvement of 5.8 and 9.2 (or relative error reduction of 16.0 and 23.2 ) over the CD-GMM-HMMs trained using the minimum phone error rate (MPE) and maximum-likelihood (ML) criteria, respectively.", "This paper presents a new spectral modeling method for statistical parametric speech synthesis. In the conventional methods, high-level spectral parameters, such as mel-cepstra or line spectral pairs, are adopted as the features for hidden Markov model (HMM)-based parametric speech synthesis. Our proposed method described in this paper improves the conventional method in two ways. 
First, distributions of low-level, un-transformed spectral envelopes (extracted by the STRAIGHT vocoder) are used as the parameters for synthesis. Second, instead of using single Gaussian distribution, we adopt the graphical models with multiple hidden variables, including restricted Boltzmann machines (RBM) and deep belief networks (DBN), to represent the distribution of the low-level spectral envelopes at each HMM state. At the synthesis time, the spectral envelopes are predicted from the RBM-HMMs or the DBN-HMMs of the input sentence following the maximum output probability parameter generation criterion with the constraints of the dynamic features. A Gaussian approximation is applied to the marginal distribution of the visible stochastic variables in the RBM or DBN at each HMM state in order to achieve a closed-form solution to the parameter generation problem. Our experimental results show that both RBM-HMM and DBN-HMM are able to generate spectral envelope parameter sequences better than the conventional Gaussian-HMM with superior generalization capabilities and that DBN-HMM and RBM-HMM perform similarly due possibly to the use of Gaussian approximation. As a result, our proposed method can significantly alleviate the over-smoothing effect and improve the naturalness of the conventional HMM-based speech synthesis system using mel-cepstra.", "" ] }
1509.06109
2296638628
In real settings, natural body movements can be erroneously recognized by whole-body input systems as explicit input actions. We call body activity not intended as input actions "background activity." We argue that understanding background activity is crucial to the success of always-available whole-body input in the real world. To operationalize this argument, we contribute a reusable study methodology and software tools to generate standardized background activity datasets composed of data from multiple Kinect cameras, a Vicon tracker, and two high-definition video cameras. Using our methodology, we create an example background activity dataset for a television-oriented living room setting. We use this dataset to demonstrate how it can be used to redesign a gestural interaction vocabulary to minimize conflicts with the real world. The software tools and initial living room dataset are publicly available (this http URL).
Large datasets of naturally occurring body movements are useful for conducting post hoc observational inquiries, modelling phenomena, motivating technique designs, training algorithms, testing individual techniques, and comparing multiple techniques with a common baseline. Examples of well-established datasets include the MNIST handwritten digit database @cite_14 for handwriting recognition, the MacKenzie Phrase Set @cite_35 to evaluate text entry techniques, and datasets of static objects captured by depth cameras @cite_22 @cite_30 for computer graphics algorithms. Dataset corpora have a strong tradition in natural language processing and have been leveraged to make speech input classification robust to background speech @cite_1 @cite_31 . In the field of gesture recognition, algorithms are trained and tested using datasets similar to Marcel's @cite_24 compilation of hand gesture and posture images, and to the Cambridge Gesture Database @cite_19 of image sequences showing various hand motions. More recently, the Chalearn gesture challenge dataset was established as part of a competition in ICMI 2012 to recognize gestures consisting of motion and hand shapes in 320x240 Kinect RGB-D data @cite_34 .
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_14", "@cite_22", "@cite_1", "@cite_24", "@cite_19", "@cite_31", "@cite_34" ], "mid": [ "2156222070", "2099287431", "2310919327", "2098883970", "1981688737", "", "", "193308610", "" ], "abstract": [ "Over the last decade, the availability of public image repositories and recognition benchmarks has enabled rapid progress in visual object category and instance detection. Today we are witnessing the birth of a new generation of sensing technologies capable of providing high quality synchronized videos of both color and depth, the RGB-D (Kinect-style) camera. With its advanced sensing capabilities and the potential for mass adoption, this technology represents an opportunity to dramatically increase robotic object recognition, manipulation, navigation, and interaction capabilities. In this paper, we introduce a large-scale, hierarchical multi-view object dataset collected using an RGB-D camera. The dataset contains 300 objects organized into 51 categories and has been made publicly available to the research community so as to enable rapid progress based on this promising technology. This paper describes the dataset collection procedure and introduces techniques for RGB-D based object recognition and detection, demonstrating that combining color and depth information substantially improves quality of results.", "In evaluations of text entry methods, participants enter phrases of text using a technique of interest while performance data are collected. This paper describes and publishes (via the internet) a collection of 500 phrases for such evaluations. Utility programs are also provided to compute statistical properties of the phrase set, or any other phrase set. 
The merits of using a pre-defined phrase set are described as are methodological considerations, such as attaining results that are generalizable and the possible addition of punctuation and other characters.", "", "Recent proliferation of a cheap but quality depth sensor, the Microsoft Kinect, has brought the need for a challenging category-level 3D object detection dataset to the fore. We review current 3D datasets and find them lacking in variation of scenes, categories, instances, and viewpoints. Here we present our dataset of color and depth image pairs, gathered in real domestic and office environments. It currently includes over 50 classes, with more images added continuously by a crowd-sourced collection effort. We establish baseline performance in a PASCAL VOC-style detection task, and suggest two ways that inferred world size of the object may be used to improve detection. The dataset and annotations can be downloaded at http: www.kinectdata.com.", "We describe a machine learning approach that allows an open-world spoken dialog system to learn to predict engagement intentions in situ, from interaction. The proposed approach does not require any developer supervision, and leverages spatiotemporal and attentional features automatically extracted from a visual analysis of people coming into the proximity of the system to produce models that are attuned to the characteristics of the environment the system is placed in. Experimental results indicate that a system using the proposed approach can learn to recognize engagement intentions at low false positive rates (e.g. 2--4 ) up to 3--4 seconds prior to the actual moment of engagement.", "", "", "Although speech, derived from reading texts, and similar types of speech, e.g. that from reading newspapers or that from news broadcast, can be recognized with high accuracy, recognition accuracy drastically decreases for spontaneous speech. 
This is due to the fact that spontaneous speech and read speech are significantly different acoustically as well as linguistically. This paper reports analysis and recognition of spontaneous speech using a large-scale spontaneous speech database “Corpus of Spontaneous Japanese (CSJ)”. Recognition results in this experiment show that recognition accuracy significantly increases as a function of the size of acoustic as well as language model training data and the improvement levels off at approximately 7M words of training data. This means that acoustic and linguistic variation of spontaneous speech is so large that we need a very large corpus in order to encompass the variations. Spectral analysis using various styles of utterances in the CSJ shows that the spectral distribution difference of phonemes is significantly reduced in spontaneous speech compared to read speech. Experimental results also show that there is a strong correlation between mean spectral distance between phonemes and phoneme recognition accuracy. This indicates that spectral reduction is one major reason for the decrease of recognition accuracy of spontaneous speech.", "" ] }
1509.06109
2296638628
In real settings, natural body movements can be erroneously recognized by whole-body input systems as explicit input actions. We call body activity not intended as input actions "background activity." We argue that understanding background activity is crucial to the success of always-available whole-body input in the real world. To operationalize this argument, we contribute a reusable study methodology and software tools to generate standardized background activity datasets composed of data from multiple Kinect cameras, a Vicon tracker, and two high-definition video cameras. Using our methodology, we create an example background activity dataset for a television-oriented living room setting. We use this dataset to demonstrate how it can be used to redesign a gestural interaction vocabulary to minimize conflicts with the real world. The software tools and initial living room dataset are publicly available (this http URL).
Datasets of whole-body motion exist, but these focus primarily on short sequences of high-energy actions performed by actors in a motion capture studio @cite_38 @cite_11 @cite_36 @cite_21 . More recently, the CMU Quality of Life Technology Centre created a multimodal capture database of people cooking in a simulated kitchen @cite_6 . With an average of 5 minutes per clip, the sequences are too short and too task focused to provide general background activity.
{ "cite_N": [ "@cite_38", "@cite_36", "@cite_21", "@cite_6", "@cite_11" ], "mid": [ "2135643110", "2086509056", "2106996050", "", "2130287054" ], "abstract": [ "Recent technological advances in connected-speech recognition and position sensing in space have encouraged the notion that voice and gesture inputs at the graphics interface can converge to provide a concerted, natural user modality. The work described herein involves the user commanding simple shapes about a large-screen graphics display surface. Because voice can be augmented with simultaneous pointing, the free usage of pronouns becomes possible, with a corresponding gain in naturalness and economy of expression. Conversely, gesture aided by voice gains precision in its power to reference.", "Over the years, a large number of methods have been proposed to analyze human pose and motion information from images, videos, and recently from depth data. Most methods, however, have been evaluated on datasets that were too specific to each application, limited to a particular modality, and more importantly, captured under unknown conditions. To address these issues, we introduce the Berkeley Multimodal Human Action Database (MHAD) consisting of temporally synchronized and geometrically calibrated data from an optical motion capture system, multi-baseline stereo cameras from multiple views, depth sensors, accelerometers and microphones. This controlled multimodal dataset provides researchers an inclusive testbed to develop and benchmark new algorithms across multiple modalities under known capture conditions in various research domains. To demonstrate possible use of MHAD for action recognition, we compare results using the popular Bag-of-Words algorithm adapted to each modality independently with the results of various combinations of modalities using the Multiple Kernel Learning. 
Our comparative results show that multimodal analysis of human motion yields better action recognition rates than unimodal analysis.", "Vision-based human action recognition is the process of labeling image sequences with action labels. Robust solutions to this problem have applications in domains such as visual surveillance, video retrieval and human-computer interaction. The task is challenging due to variations in motion performance, recording settings and inter-personal differences. In this survey, we explicitly address these challenges. We provide a detailed overview of current advances in the field. Image representations and the subsequent classification process are discussed separately to focus on the novelties of recent research. Moreover, we discuss limitations of the state of the art and outline promising directions of research.", "", "We present a first study of using RGB-D (Kinect-style) cameras for fine-grained recognition of kitchen activities. Our prototype system combines depth (shape) and color (appearance) to solve a number of perception problems crucial for smart space applications: locating hands, identifying objects and their functionalities, recognizing actions and tracking object state changes through actions. Our proof-of-concept results demonstrate great potentials of RGB-D perception: without need for instrumentation, our system can robustly track and accurately recognize detailed steps through cooking activities, for instance how many spoons of sugar are in a cake mix, or how long it has been mixing. A robust RGB-D based solution to fine-grained activity recognition in real-world conditions will bring the intelligence of pervasive and interactive systems to the next level." ] }
1509.06109
2296638628
In real settings, natural body movements can be erroneously recognized by whole-body input systems as explicit input actions. We call body activity not intended as input actions "background activity." We argue that understanding background activity is crucial to the success of always-available whole-body input in the real world. To operationalize this argument, we contribute a reusable study methodology and software tools to generate standardized background activity datasets composed of data from multiple Kinect cameras, a Vicon tracker, and two high-definition video cameras. Using our methodology, we create an example background activity dataset for a television-oriented living room setting. We use this dataset to demonstrate how it can be used to redesign a gestural interaction vocabulary to minimize conflicts with the real world. The software tools and initial living room dataset are publicly available (this http URL).
Using body gestures for explicit input has been extensively studied @cite_33 @cite_23 @cite_20 @cite_7 @cite_21 @cite_37 @cite_2 @cite_26 . With always-available body input, the difference between a gesture and a non-gesture can be subtle, introducing false positives @cite_11 @cite_9 . Baudel and Beaudouin-Lafon @cite_32 describe systems that interpret every gesture of the user as potentially meaningful as suffering from immersion syndrome: such systems ignore that interacting with the system is not the user's only ongoing activity.
{ "cite_N": [ "@cite_37", "@cite_26", "@cite_33", "@cite_7", "@cite_9", "@cite_21", "@cite_32", "@cite_23", "@cite_2", "@cite_20", "@cite_11" ], "mid": [ "2120860586", "2098339052", "1983705368", "2019366372", "2165093411", "2106996050", "2067870229", "2113696470", "", "2109618101", "2130287054" ], "abstract": [ "LightGuide is a system that explores a new approach to gesture guidance where we project guidance hints directly on a user's body. These projected hints guide the user in completing the desired motion with their body part which is particularly useful for performing movements that require accuracy and proper technique, such as during exercise or physical therapy. Our proof-of-concept implementation consists of a single low-cost depth camera and projector and we present four novel interaction techniques that are focused on guiding a user's hand in mid-air. Our visualizations are designed to incorporate both feedback and feedforward cues to help guide users through a range of movements. We quantify the performance of LightGuide in a user study comparing each of our on-body visualizations to hand animation videos on a computer display in both time and accuracy. Exceeding our expectations, participants performed movements with an average error of 21.6mm, nearly 85 more accurately than when guided by video.", "Action recognition has become a very important topic in computer vision, with many fundamental applications, in robotics, video surveillance, human-computer interaction, and multimedia retrieval among others and a large variety of approaches have been described. The purpose of this survey is to give an overview and categorization of the approaches used. 
We concentrate on approaches that aim on classification of full-body motions, such as kicking, punching, and waving, and we categorize them according to how they represent the spatial and temporal structure of actions; how they segment actions from an input stream of visual data; and how they learn a view-invariant representation of actions.", "Human activity recognition is an important area of computer vision research. Its applications include surveillance systems, patient monitoring systems, and a variety of systems that involve interactions between persons and electronic devices such as human-computer interfaces. Most of these applications require an automated recognition of high-level activities, composed of multiple simple (or atomic) actions of persons. This article provides a detailed overview of various state-of-the-art research papers on human activity recognition. We discuss both the methodologies developed for simple human actions and those for high-level activities. An approach-based taxonomy is chosen that compares the advantages and limitations of each approach. Recognition methodologies for an analysis of the simple actions of a single person are first presented in the article. Space-time volume approaches and sequential approaches that represent and recognize activities directly from input images are discussed. Next, hierarchical recognition methodologies for high-level activities are presented and compared. Statistical approaches, syntactic approaches, and description-based approaches for hierarchical recognition are discussed in the article. In addition, we further discuss the papers on the recognition of human-object interactions and group activities. Public datasets designed for the evaluation of the recognition methodologies are illustrated in our article as well, comparing the methodologies' performances. 
This review will provide the impetus for future research in more productive areas.", "3D interfaces use motion sensing, physical input, and spatial interaction techniques to effectively control highly dynamic virtual content. Now, with the advent of the Nintendo Wii, Sony Move, and Microsoft Kinect, game developers and researchers must create compelling interface techniques and game-play mechanics that make use of these technologies. At the same time, it is becoming increasingly clear that emerging game technologies are not just going to change the way we play games, they are also going to change the way we make and view art, design new products, analyze scientific datasets, and more. This introduction to 3D spatial interfaces demystifies the workings of modern videogame motion controllers and provides an overview of how it is used to create 3D interfaces for tasks such as 2D and 3D navigation, object selection and manipulation, and gesture-based application control. Topics include the strengths and limitations of various motion-controller sensing technologies in today's peripherals, useful techniques for working with these devices, and current and future applications of these technologies to areas beyond games. The course presents valuable information on how to utilize existing 3D user-interface techniques with emerging technologies, how to develop interface techniques, and how to learn from the successes and failures of spatial interfaces created for a variety of application domains.", "Depth cameras have become a fixture of millions of living rooms thanks to the Microsoft Kinect. Yet to be seen is whether they can succeed as widely in other areas of the home. This research takes the Kinect into real-life kitchens, where touchless gestural control could be a boon for messy hands, but where commands are interspersed with the movements of cooking. 
We implement a recipe navigator, timer and music player and, experimentally, allow users to change the control scheme at runtime and navigate with other limbs when their hands are full. We tested our system with five subjects who baked a cookie recipe in their own kitchens, and found that placing the Kinect was simple and that subjects felt successful. However, testing in real kitchens underscored the challenge of preventing accidental commands in tasks with sporadic input.", "Vision-based human action recognition is the process of labeling image sequences with action labels. Robust solutions to this problem have applications in domains such as visual surveillance, video retrieval and human-computer interaction. The task is challenging due to variations in motion performance, recording settings and inter-personal differences. In this survey, we explicitly address these challenges. We provide a detailed overview of current advances in the field. Image representations and the subsequent classification process are discussed separately to focus on the novelties of recent research. Moreover, we discuss limitations of the state of the art and outline promising directions of research.", "This paper presents an application that uses hand gesture input to control a computer while giving a presentation. In order to develop a prototype of this application, we have defined an interaction model, a notation for gestures, and a set of guidelines to design gestural command sets. This works aims to define interaction styles that work in computerized reality environments. In our application, gestures are used for interacting with the computer as well as for communicating with other people or operating other devices.", "Instrumented with a single depth camera, a stereoscopic projector, and a curved screen, MirageTable is an interactive system designed to merge real and virtual worlds into a single spatially registered experience on top of a table. 
Our depth camera tracks the user's eyes and performs a real-time capture of both the shape and the appearance of any object placed in front of the camera (including user's body and hands). This real-time capture enables perspective stereoscopic 3D visualizations to a single user that account for deformations caused by physical objects on the table. In addition, the user can interact with virtual objects through physically-realistic freehand actions without any gloves, trackers, or instruments. We illustrate these unique capabilities through three application examples: virtual 3D model creation, interactive gaming with real and virtual objects, and a 3D teleconferencing experience that not only presents a 3D view of a remote person, but also a seamless 3D shared task space. We also evaluated the user's perception of projected 3D objects in our system, which confirmed that the users can correctly perceive such objects even when they are projected over different background colors and geometries (e.g., gaps, drops).", "", "HoloDesk is an interactive system combining an optical see through display and Kinect camera to create the illusion that users are directly interacting with 3D graphics. A virtual image of a 3D scene is rendered through a half silvered mirror and spatially aligned with the real-world for the viewer. Users easily reach into an interaction volume displaying the virtual image. This allows the user to literally get their hands into the virtual display and to directly interact with an spatially aligned 3D virtual world, without the need for any specialized head-worn hardware or input device. We introduce a new technique for interpreting raw Kinect data to approximate and track rigid (e.g., books, cups) and non-rigid (e.g., hands, paper) physical objects and support a variety of physics-inspired interactions between virtual and real. 
In particular the algorithm models natural human grasping of virtual objects with more fidelity than previously demonstrated. A qualitative study highlights rich emergent 3D interactions, using hands and real-world objects. The implementation of HoloDesk is described in full, and example application scenarios explored. Finally, HoloDesk is quantitatively evaluated in a 3D target acquisition task, comparing the system with indirect and glasses-based variants.", "We present a first study of using RGB-D (Kinect-style) cameras for fine-grained recognition of kitchen activities. Our prototype system combines depth (shape) and color (appearance) to solve a number of perception problems crucial for smart space applications: locating hands, identifying objects and their functionalities, recognizing actions and tracking object state changes through actions. Our proof-of-concept results demonstrate great potentials of RGB-D perception: without need for instrumentation, our system can robustly track and accurately recognize detailed steps through cooking activities, for instance how many spoons of sugar are in a cake mix, or how long it has been mixing. A robust RGB-D based solution to fine-grained activity recognition in real-world conditions will bring the intelligence of pervasive and interactive systems to the next level." ] }
1509.06109
2296638628
In real settings, natural body movements can be erroneously recognized by whole-body input systems as explicit input actions. We call body activity not intended as input actions "background activity." We argue that understanding background activity is crucial to the success of always-available whole-body input in the real world. To operationalize this argument, we contribute a reusable study methodology and software tools to generate standardized background activity datasets composed of data from multiple Kinect cameras, a Vicon tracker, and two high-definition video cameras. Using our methodology, we create an example background activity dataset for a television-oriented living room setting. We use this dataset to demonstrate how it can be used to redesign a gestural interaction vocabulary to minimize conflicts with the real world. The software tools and initial living room dataset are publicly available (this http URL).
Detecting gestures in a continuous stream of input is known as the Gesture Spotting Problem. A common approach is to model each gesture type as a Hidden Markov Model (HMM) and detect gestures when their likelihood exceeds that of a thresholding HMM, synthesized from the trained gesture HMMs @cite_27 . The limitation of this approach is that the thresholding HMM does not explicitly model the background.
{ "cite_N": [ "@cite_27" ], "mid": [ "2120701504" ], "abstract": [ "A new method is developed using the hidden Markov model (HMM) based technique. To handle nongesture patterns, we introduce the concept of a threshold model that calculates the likelihood threshold of an input pattern and provides a confirmation mechanism for the provisionally matched gesture patterns. The threshold model is a weak model for all trained gestures in the sense that its likelihood is smaller than that of the dedicated gesture model for a given gesture. Consequently, the likelihood can be used as an adaptive threshold for selecting proper gesture model. It has, however, a large number of states and needs to be reduced because the threshold model is constructed by collecting the states of all gesture models in the system. To overcome this problem, the states with similar probability distributions are merged, utilizing the relative entropy measure. Experimental results show that the proposed method can successfully extract trained gestures from continuous hand motion with 93.14 reliability." ] }
1509.05909
2279895976
We present a robust and real-time monocular six degree of freedom visual relocalization system. We use a Bayesian convolutional neural network to regress the 6-DOF camera pose from a single RGB image. It is trained in an end-to-end manner with no need of additional engineering or graph optimisation. The algorithm can operate indoors and outdoors in real time, taking under 6ms to compute. It obtains approximately 2m and 6 degrees accuracy for very large scale outdoor scenes and 0.5m and 10 degrees accuracy indoors. Using a Bayesian convolutional neural network implementation we obtain an estimate of the model's relocalization uncertainty and improve state of the art localization accuracy on a large scale outdoor dataset. We leverage the uncertainty measure to estimate metric relocalization error and to detect the presence or absence of the scene in the input image. We show that the model's uncertainty is caused by images being dissimilar to the training dataset in either pose or appearance.
Visual SLAM @cite_20 @cite_9 @cite_21 @cite_6 and relocalization research @cite_1 @cite_14 have largely focused on registering viewpoints using point-based landmark features, such as SIFT @cite_13 or ORB @cite_22 . The problem with local point-based features is that they are not robust in many real-life scenarios: tracking typically fails under rapid motion, large changes in viewpoint, or significant appearance changes.
{ "cite_N": [ "@cite_14", "@cite_22", "@cite_9", "@cite_21", "@cite_1", "@cite_6", "@cite_13", "@cite_20" ], "mid": [ "1616969904", "", "2108134361", "612478963", "", "", "2151103935", "2151290401" ], "abstract": [ "We address the problem of determining where a photo was taken by estimating a full 6-DOF-plus-intrincs camera pose with respect to a large geo-registered 3D point cloud, bringing together research on image localization, landmark recognition, and 3D pose estimation. Our method scales to datasets with hundreds of thousands of images and tens of millions of 3D points through the use of two new techniques: a co-occurrence prior for RANSAC and bidirectional matching of image features with 3D points. We evaluate our method on several large data sets, and show state-of-the-art results on landmark recognition as well as the ability to locate cameras to within meters, requiring only seconds per query.", "", "DTAM is a system for real-time camera tracking and reconstruction which relies not on feature extraction but dense, every pixel methods. As a single hand-held RGB camera flies over a static scene, we estimate detailed textured depth maps at selected keyframes to produce a surface patchwork with millions of vertices. We use the hundreds of images available in a video stream to improve the quality of a simple photometric data term, and minimise a global spatially regularised energy functional in a novel non-convex optimisation framework. Interleaved, we track the camera's 6DOF motion precisely by frame-rate whole image alignment against the entire dense model. Our algorithms are highly parallelisable throughout and DTAM achieves real-time performance using current commodity GPU hardware. 
We demonstrate that a dense model permits superior tracking performance under rapid motion compared to a state of the art method using features; and also show the additional usefulness of the dense model for real-time scene interaction in a physics-enhanced augmented reality application.", "We propose a direct (feature-less) monocular SLAM algorithm which, in contrast to current state-of-the-art regarding direct methods, allows to build large-scale, consistent maps of the environment. Along with highly accurate pose estimation based on direct image alignment, the 3D environment is reconstructed in real-time as pose-graph of keyframes with associated semi-dense depth maps. These are obtained by filtering over a large number of pixelwise small-baseline stereo comparisons. The explicitly scale-drift aware formulation allows the approach to operate on challenging sequences including large variations in scene scale. Major enablers are two key novelties: (1) a novel direct tracking method which operates on ( sim (3) ), thereby explicitly detecting scale-drift, and (2) an elegant probabilistic solution to include the effect of noisy depth values into tracking. The resulting direct monocular SLAM system runs in real-time on a CPU.", "", "", "This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. 
The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.", "This paper presents a method of estimating camera pose in an unknown scene. While this has previously been attempted by adapting SLAM algorithms developed for robotic exploration, we propose a system specifically designed to track a hand-held camera in a small AR workspace. We propose to split tracking and mapping into two separate tasks, processed in parallel threads on a dual-core computer: one thread deals with the task of robustly tracking erratic hand-held motion, while the other produces a 3D map of point features from previously observed video frames. This allows the use of computationally expensive batch optimisation techniques not usually associated with real-time operation: The result is a system that produces detailed maps with thousands of landmarks which can be tracked at frame-rate, with an accuracy and robustness rivalling that of state-of-the-art model-based systems." ] }
1509.05909
2279895976
We present a robust and real-time monocular six degree of freedom visual relocalization system. We use a Bayesian convolutional neural network to regress the 6-DOF camera pose from a single RGB image. It is trained in an end-to-end manner with no need of additional engineering or graph optimisation. The algorithm can operate indoors and outdoors in real time, taking under 6ms to compute. It obtains approximately 2m and 6 degrees accuracy for very large scale outdoor scenes and 0.5m and 10 degrees accuracy indoors. Using a Bayesian convolutional neural network implementation we obtain an estimate of the model's relocalization uncertainty and improve state of the art localization accuracy on a large scale outdoor dataset. We leverage the uncertainty measure to estimate metric relocalization error and to detect the presence or absence of the scene in the input image. We show that the model's uncertainty is caused by images being dissimilar to the training dataset in either pose or appearance.
A learning approach to relocalization has also been proposed previously, using random forests to regress scene coordinate labels @cite_2 . It was extended with a probabilistic approach in @cite_15 . However, this algorithm has not been demonstrated beyond a scale of a few @math and requires RGB-D images to generate the scene coordinate labels, in practice constraining its use to indoor scenes.
{ "cite_N": [ "@cite_15", "@cite_2" ], "mid": [ "1893935112", "1989476314" ], "abstract": [ "Recent advances in camera relocalization use predictions from a regression forest to guide the camera pose optimization procedure. In these methods, each tree associates one pixel with a point in the scene's 3D world coordinate frame. In previous work, these predictions were point estimates and the subsequent camera pose optimization implicitly assumed an isotropic distribution of these estimates. In this paper, we train a regression forest to predict mixtures of anisotropic 3D Gaussians and show how the predicted uncertainties can be taken into account for continuous pose optimization. Experiments show that our proposed method is able to relocalize up to 40 more frames than the state of the art.", "We address the problem of inferring the pose of an RGB-D camera relative to a known 3D scene, given only a single acquired image. Our approach employs a regression forest that is capable of inferring an estimate of each pixel's correspondence to 3D points in the scene's world coordinate frame. The forest uses only simple depth and RGB pixel comparison features, and does not require the computation of feature descriptors. The forest is trained to be capable of predicting correspondences at any pixel, so no interest point detectors are required. The camera pose is inferred using a robust optimization scheme. This starts with an initial set of hypothesized camera poses, constructed by applying the forest at a small fraction of image pixels. Preemptive RANSAC then iterates sampling more pixels at which to evaluate the forest, counting inliers, and refining the hypothesized poses. We evaluate on several varied scenes captured with an RGB-D camera and observe that the proposed technique achieves highly accurate relocalization and substantially out-performs two state of the art baselines." ] }
1509.05909
2279895976
We present a robust and real-time monocular six degree of freedom visual relocalization system. We use a Bayesian convolutional neural network to regress the 6-DOF camera pose from a single RGB image. It is trained in an end-to-end manner with no need of additional engineering or graph optimisation. The algorithm can operate indoors and outdoors in real time, taking under 6ms to compute. It obtains approximately 2m and 6 degrees accuracy for very large scale outdoor scenes and 0.5m and 10 degrees accuracy indoors. Using a Bayesian convolutional neural network implementation we obtain an estimate of the model's relocalization uncertainty and improve state of the art localization accuracy on a large scale outdoor dataset. We leverage the uncertainty measure to estimate metric relocalization error and to detect the presence or absence of the scene in the input image. We show that the model's uncertainty is caused by images being dissimilar to the training dataset in either pose or appearance.
Although many modern SLAM algorithms do not consider localization uncertainty @cite_20 @cite_21 @cite_14 , probabilistic approaches have been proposed previously. Bayesian approaches include extended Kalman filters and particle filter methods such as FastSLAM @cite_16 . However, these approaches estimate uncertainty from sensor noise models, not from the model's ability to represent the data. Our proposed framework does not assume any input noise but instead measures the model's uncertainty for localization.
{ "cite_N": [ "@cite_14", "@cite_21", "@cite_20", "@cite_16" ], "mid": [ "1616969904", "612478963", "2151290401", "" ], "abstract": [ "We address the problem of determining where a photo was taken by estimating a full 6-DOF-plus-intrincs camera pose with respect to a large geo-registered 3D point cloud, bringing together research on image localization, landmark recognition, and 3D pose estimation. Our method scales to datasets with hundreds of thousands of images and tens of millions of 3D points through the use of two new techniques: a co-occurrence prior for RANSAC and bidirectional matching of image features with 3D points. We evaluate our method on several large data sets, and show state-of-the-art results on landmark recognition as well as the ability to locate cameras to within meters, requiring only seconds per query.", "We propose a direct (feature-less) monocular SLAM algorithm which, in contrast to current state-of-the-art regarding direct methods, allows to build large-scale, consistent maps of the environment. Along with highly accurate pose estimation based on direct image alignment, the 3D environment is reconstructed in real-time as pose-graph of keyframes with associated semi-dense depth maps. These are obtained by filtering over a large number of pixelwise small-baseline stereo comparisons. The explicitly scale-drift aware formulation allows the approach to operate on challenging sequences including large variations in scene scale. Major enablers are two key novelties: (1) a novel direct tracking method which operates on ( sim (3) ), thereby explicitly detecting scale-drift, and (2) an elegant probabilistic solution to include the effect of noisy depth values into tracking. The resulting direct monocular SLAM system runs in real-time on a CPU.", "This paper presents a method of estimating camera pose in an unknown scene. 
While this has previously been attempted by adapting SLAM algorithms developed for robotic exploration, we propose a system specifically designed to track a hand-held camera in a small AR workspace. We propose to split tracking and mapping into two separate tasks, processed in parallel threads on a dual-core computer: one thread deals with the task of robustly tracking erratic hand-held motion, while the other produces a 3D map of point features from previously observed video frames. This allows the use of computationally expensive batch optimisation techniques not usually associated with real-time operation: The result is a system that produces detailed maps with thousands of landmarks which can be tracked at frame-rate, with an accuracy and robustness rivalling that of state-of-the-art model-based systems.", "" ] }
1509.05909
2279895976
We present a robust and real-time monocular six degree of freedom visual relocalization system. We use a Bayesian convolutional neural network to regress the 6-DOF camera pose from a single RGB image. It is trained in an end-to-end manner with no need of additional engineering or graph optimisation. The algorithm can operate indoors and outdoors in real time, taking under 6ms to compute. It obtains approximately 2m and 6 degrees accuracy for very large scale outdoor scenes and 0.5m and 10 degrees accuracy indoors. Using a Bayesian convolutional neural network implementation we obtain an estimate of the model's relocalization uncertainty and improve state of the art localization accuracy on a large scale outdoor dataset. We leverage the uncertainty measure to estimate metric relocalization error and to detect the presence or absence of the scene in the input image. We show that the model's uncertainty is caused by images being dissimilar to the training dataset in either pose or appearance.
Neural networks that consider uncertainty are known as Bayesian neural networks @cite_18 @cite_4 . They offer a probabilistic interpretation of deep learning models by inferring distributions over the networks' weights. They are often very computationally expensive, increasing the number of model parameters without significantly increasing model capacity. Performing inference in Bayesian neural networks is a difficult task, so approximations to the model posterior are often used, such as variational inference @cite_17 .
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_17" ], "mid": [ "2127538960", "2111051539", "2108677974" ], "abstract": [ "(1) The outputs of a typical multi-output classification network do not satisfy the axioms of probability; probabilities should be positive and sum to one. This problem can be solved by treating the trained network as a preprocessor that produces a feature vector that can be further processed, for instance by classical statistical estimation techniques. (2) We present a method for computing the first two moments of the probability distribution indicating the range of outputs that are consistent with the input and the training data. It is particularly useful to combine these two ideas: we implement the ideas of section 1 using Parzen windows, where the shape and relative size of each window is computed using the ideas of section 2. This allows us to make contact between important theoretical ideas (e.g. the ensemble formalism) and practical techniques (e.g. back-prop). Our results also shed new light on and generalize the well-known \"softmax\" scheme.", "A quantitative and practical Bayesian framework is described for learning of mappings in feedforward networks. The framework makes possible (1) objective comparisons between solutions using alternative network architectures, (2) objective stopping rules for network pruning or growing procedures, (3) objective choice of magnitude and type of weight decay terms or additive regularizers (for penalizing large weights, etc.), (4) a measure of the effective number of well-determined parameters in a model, (5) quantified estimates of the error bars on network parameters and on network output, and (6) objective comparisons with alternative learning and interpolation models such as splines and radial basis functions. The Bayesian \"evidence\" automatically embodies \"Occam's razor,\" penalizing overflexible and overcomplex models. 
The Bayesian approach helps detect poor underlying assumptions in learning models. For learning models well matched to a problem, a good correlation between generalization ability and the Bayesian evidence is obtained.", "Variational methods have been previously explored as a tractable approximation to Bayesian inference for neural networks. However the approaches proposed so far have only been applicable to a few simple network architectures. This paper introduces an easy-to-implement stochastic variational method (or equivalently, minimum description length loss function) that can be applied to most neural networks. Along the way it revisits several common regularisers from a variational perspective. It also provides a simple pruning heuristic that can both drastically reduce the number of network weights and lead to improved generalisation. Experimental results are provided for a hierarchical multidimensional recurrent neural network applied to the TIMIT speech corpus." ] }
1509.05909
2279895976
We present a robust and real-time monocular six degree of freedom visual relocalization system. We use a Bayesian convolutional neural network to regress the 6-DOF camera pose from a single RGB image. It is trained in an end-to-end manner with no need of additional engineering or graph optimisation. The algorithm can operate indoors and outdoors in real time, taking under 6ms to compute. It obtains approximately 2m and 6 degrees accuracy for very large scale outdoor scenes and 0.5m and 10 degrees accuracy indoors. Using a Bayesian convolutional neural network implementation we obtain an estimate of the model's relocalization uncertainty and improve state of the art localization accuracy on a large scale outdoor dataset. We leverage the uncertainty measure to estimate metric relocalization error and to detect the presence or absence of the scene in the input image. We show that the model's uncertainty is caused by images being dissimilar to the training dataset in either pose or appearance.
Recent work @cite_7 has extended dropout to approximate Bayesian inference over the network's weights. Gal and Ghahramani @cite_12 show that dropout can be used at test time to impose a Bernoulli distribution over the convolutional network's filter weights, without requiring any additional model parameters. This is achieved by sampling the network with randomly dropped-out connections at test time. These forward passes can be viewed as Monte Carlo samples from the posterior distribution over models.
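The Monte Carlo dropout procedure described above can be sketched in a few lines. This is a toy illustration, not the cited implementation: the single-layer "network", its weights, and the helper names (`forward`, `mc_dropout_predict`) are all hypothetical, and a real system would apply the same per-pass Bernoulli masking inside a trained CNN.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-layer "network"; in a real CNN these would be conv filters.
W = rng.normal(size=(8, 4))

def forward(x, drop_mask):
    # One stochastic forward pass: units are dropped at *test* time,
    # so each pass samples one network from the approximate posterior.
    h = np.maximum(x @ (W * drop_mask[:, None]), 0.0)
    return h.sum(axis=-1)

def mc_dropout_predict(x, n_samples=100, keep_prob=0.5):
    outs = []
    for _ in range(n_samples):
        mask = (rng.random(W.shape[0]) < keep_prob).astype(float)
        outs.append(forward(x, mask))
    outs = np.stack(outs)
    # Predictive mean, and spread across samples as a model
    # (epistemic) uncertainty estimate.
    return outs.mean(axis=0), outs.std(axis=0)

x = rng.normal(size=(3, 8))
mean, std = mc_dropout_predict(x)
```

Inputs far from the training data tend to produce larger spread across the sampled passes, which is the uncertainty signal the relocalization system exploits.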
{ "cite_N": [ "@cite_12", "@cite_7" ], "mid": [ "601603264", "582134693" ], "abstract": [ "Convolutional neural networks (CNNs) work well on large datasets. But labelled data is hard to collect, and in some applications larger amounts of data are not available. The problem then is how to use CNNs with small data -- as CNNs overfit quickly. We present an efficient Bayesian CNN, offering better robustness to over-fitting on small data than traditional approaches. This is by placing a probability distribution over the CNN's kernels. We approximate our model's intractable posterior with Bernoulli variational distributions, requiring no additional model parameters. On the theoretical side, we cast dropout network training as approximate inference in Bayesian neural networks. This allows us to implement our model using existing tools in deep learning with no increase in time complexity, while highlighting a negative result in the field. We show a considerable improvement in classification accuracy compared to standard techniques and improve on published state-of-the-art results for CIFAR-10.", "Deep learning tools have gained tremendous attention in applied machine learning. However such tools for regression and classification do not capture model uncertainty. In comparison, Bayesian models offer a mathematically grounded framework to reason about model uncertainty, but usually come with a prohibitive computational cost. In this paper we develop a new theoretical framework casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes. A direct result of this theory gives us tools to model uncertainty with dropout NNs -- extracting information from existing models that has been thrown away so far. This mitigates the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy. We perform an extensive study of the properties of dropout's uncertainty. 
Various network architectures and non-linearities are assessed on tasks of regression and classification, using MNIST as an example. We show a considerable improvement in predictive log-likelihood and RMSE compared to existing state-of-the-art methods, and finish by using dropout's uncertainty in deep reinforcement learning." ] }
1509.06243
2953384527
The goal of this work is to bring semantics into the tasks of text recognition and retrieval in natural images. Although text recognition and retrieval have received a lot of attention in recent years, previous works have focused on recognizing or retrieving exactly the same word used as a query, without taking the semantics into consideration. In this paper, we ask the following question: For this goal we propose a convolutional neural network (CNN) with a weighted ranking loss objective that ensures that the concepts relevant to the query image are ranked ahead of those that are not relevant. This can also be interpreted as learning a Euclidean space where word images and concepts are jointly embedded. This model is learned in an end-to-end manner, from image pixels to semantic concepts, using a dataset of synthetically generated word images and concepts mined from a lexical database (WordNet). Our results show that, despite the complexity of the task, word images and concepts can indeed be associated with a high degree of accuracy
On a slightly different line, and closely related to our approach, Jaderberg et al. @cite_6 learn to classify word images into a set of @math possible transcriptions. This is achieved in an end-to-end manner using Convolutional Neural Networks (CNNs) and synthetic training data, and the approach obtained outstanding recognition results on standard benchmarks. Interestingly, the activations of the penultimate layer of the network can also be used as word-image features for retrieval purposes.
{ "cite_N": [ "@cite_6" ], "mid": [ "1491389626" ], "abstract": [ "In this work we present a framework for the recognition of natural scene text. Our framework does not require any human-labelled data, and performs word recognition on the whole image holistically, departing from the character based recognition systems of the past. The deep neural network models at the centre of this framework are trained solely on data produced by a synthetic text generation engine -- synthetic data that is highly realistic and sufficient to replace real data, giving us infinite amounts of training data. This excess of data exposes new possibilities for word recognition models, and here we consider three models, each one \"reading\" words in a different way: via 90k-way dictionary encoding, character sequence encoding, and bag-of-N-grams encoding. In the scenarios of language based and completely unconstrained text recognition we greatly improve upon state-of-the-art performance on standard datasets, using our fast, simple machinery and requiring zero data-acquisition costs." ] }
1509.06243
2953384527
The goal of this work is to bring semantics into the tasks of text recognition and retrieval in natural images. Although text recognition and retrieval have received a lot of attention in recent years, previous works have focused on recognizing or retrieving exactly the same word used as a query, without taking the semantics into consideration. In this paper, we ask the following question: For this goal we propose a convolutional neural network (CNN) with a weighted ranking loss objective that ensures that the concepts relevant to the query image are ranked ahead of those that are not relevant. This can also be interpreted as learning a Euclidean space where word images and concepts are jointly embedded. This model is learned in an end-to-end manner, from image pixels to semantic concepts, using a dataset of synthetically generated word images and concepts mined from a lexical database (WordNet). Our results show that, despite the complexity of the task, word images and concepts can indeed be associated with a high degree of accuracy
Several works have considered the problem of jointly embedding images and semantic categories in an intermediate Euclidean space. A simple way to do so is to perform a Canonical Correlation Analysis (CCA) on image representations and their tags @cite_24 . Weston et al. @cite_8 proposed WSABIE, which can be understood as a neural architecture with a single hidden layer; its objective function is a weighted ranking loss. We use a similar loss to learn our joint word-image and semantic-concept embedding. An issue with WSABIE is that it cannot deal with zero-shot recognition. To address this problem, Frome et al. @cite_18 proposed DeViSE, an embedding model that learns to map natural images to text embeddings learned with Word2Vec. Other recent works also use text embeddings as output embeddings @cite_27 @cite_3 , but focus only on natural images. By contrast, we do not leverage Word2Vec representations of text, and instead rely on the graph taxonomy provided by WordNet @cite_12 to learn our embeddings.
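A weighted ranking loss of the WSABIE/WARP family can be sketched as follows. This is a minimal pedagogical sketch, not the cited implementation: the rank-dependent weight sums 1/i over the rank of the violated positive, so mistakes near the top of the ranked list are penalized most, and the function names and margin value are illustrative assumptions.

```python
def warp_rank_weight(rank):
    # Rank-dependent weight: sum_{i<=rank} 1/i. Larger when a relevant
    # label is ranked low, pushing positives toward the top of the list.
    return sum(1.0 / i for i in range(1, rank + 1))

def weighted_ranking_loss(scores, positives, margin=1.0):
    # scores: similarity of one image to each label embedding.
    # positives: set of indices of relevant labels.
    loss = 0.0
    negatives = [n for n in range(len(scores)) if n not in positives]
    for p in positives:
        # Margin-rank of positive p: negatives scoring within the margin.
        rank = sum(1 for n in negatives if margin - scores[p] + scores[n] > 0)
        for n in negatives:
            violation = margin - scores[p] + scores[n]
            if violation > 0:
                loss += warp_rank_weight(rank) * violation
    return loss
```

For example, `weighted_ranking_loss([2.0, 0.5, 0.1], {0})` is zero because the relevant label already beats every negative by the margin, while `weighted_ranking_loss([0.0, 1.0], {0})` is positive and would drive a gradient step pulling the image toward its concept embedding.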
{ "cite_N": [ "@cite_18", "@cite_8", "@cite_3", "@cite_24", "@cite_27", "@cite_12" ], "mid": [ "2123024445", "21006490", "204268067", "2070753207", "1990550880", "" ], "abstract": [ "Modern visual recognition systems are often limited in their ability to scale to large numbers of object categories. This limitation is in part due to the increasing difficulty of acquiring sufficient training data in the form of labeled images as the number of object categories grows. One remedy is to leverage data from other sources - such as text data - both to train visual models and to constrain their predictions. In this paper we present a new deep visual-semantic embedding model trained to identify visual objects using both labeled image data as well as semantic information gleaned from unannotated text. We demonstrate that this model matches state-of-the-art performance on the 1000-class ImageNet object recognition challenge while making more semantically reasonable errors, and also show that the semantic information can be exploited to make predictions about tens of thousands of image labels not observed during training. Semantic knowledge improves such zero-shot predictions achieving hit rates of up to 18 across thousands of novel labels never seen by the visual model.", "Image annotation datasets are becoming larger and larger, with tens of millions of images and tens of thousands of possible annotations. We propose a strongly performing method that scales to such datasets by simultaneously learning to optimize precision at the top of the ranked list of annotations for a given image and learning a low-dimensional joint embedding space for both images and annotations. Our method, called WSABIE, both outperforms several baseline methods and is faster and consumes less memory.", "It has been shown that the activations invoked by an image within the top layers of a large convolutional neural network provide a high-level descriptor of the visual content of the image. 
In this paper, we investigate the use of such descriptors (neural codes) within the image retrieval application. In the experiments with several standard retrieval benchmarks, we establish that neural codes perform competitively even when the convolutional neural network has been trained for an unrelated classification task (e.g. Image-Net). We also evaluate the improvement in the retrieval performance of neural codes, when the network is retrained on a dataset of images that are similar to images encountered at test time.", "This paper investigates the problem of modeling Internet images and associated text or tags for tasks such as image-to-image search, tag-to-image search, and image-to-tag search (image annotation). We start with canonical correlation analysis (CCA), a popular and successful approach for mapping visual and textual features to the same latent space, and incorporate a third view capturing high-level image semantics, represented either by a single category or multiple non-mutually-exclusive concepts. We present two ways to train the three-view embedding: supervised, with the third view coming from ground-truth labels or search keywords; and unsupervised, with semantic themes automatically obtained by clustering the tags. To ensure high accuracy for retrieval tasks while keeping the learning process scalable, we combine multiple strong visual features and use explicit nonlinear kernel mappings to efficiently approximate kernel CCA. To perform retrieval, we use a specially designed similarity function in the embedded space, which substantially outperforms the Euclidean distance. 
The resulting system produces compelling qualitative results and outperforms a number of two-view baselines on retrieval tasks on three large-scale Internet image datasets.", "The standard approach to recognizing text in images consists in first classifying local image regions into candidate characters and then combining them with high-level word models such as conditional random fields. This paper explores a new paradigm that departs from this bottom-up view. We propose to embed word labels and word images into a common Euclidean space. Given a word image to be recognized, the text recognition problem is cast as one of retrieval: find the closest word label in this space. This common space is learned using the Structured SVM framework by enforcing matching label-image pairs to be closer than non-matching pairs. This method presents several advantages: it does not require ad-hoc or costly pre- post-processing operations, it can build on top of any state-of-the-art image descriptor (Fisher vectors in our case), it allows for the recognition of never-seen-before words (zero-shot recognition) and the recognition process is simple and efficient, as it amounts to a nearest neighbor search. Experiments are performed on challenging datasets of license plates and scene text. The main conclusion of the paper is that with such a frugal approach it is possible to obtain results which are competitive with standard bottom-up approaches, thus establishing label embedding as an interesting and simple to compute baseline for text recognition.", "" ] }
1509.06041
2253891449
Sentiment analysis of online user generated content is important for many social media analytics tasks. Researchers have largely relied on textual sentiment analysis to develop systems to predict political elections, measure economic indicators, and so on. Recently, social media users are increasingly using images and videos to express their opinions and share their experiences. Sentiment analysis of such large scale visual content can help better extract user sentiments toward events or topics, such as those in image tweets, so that prediction of sentiment from visual content is complementary to textual sentiment analysis. Motivated by the needs in leveraging large scale yet noisy training data to solve the extremely challenging problem of image sentiment analysis, we employ Convolutional Neural Networks (CNN). We first design a suitable CNN architecture for image sentiment analysis. We obtain half a million training samples by using a baseline sentiment algorithm to label Flickr images. To make use of such noisy machine labeled data, we employ a progressive strategy to fine-tune the deep network. Furthermore, we improve the performance on Twitter images by inducing domain transfer with a small number of manually labeled Twitter images. We have conducted extensive experiments on manually labeled Twitter images. The results show that the proposed CNN can achieve better performance in image sentiment analysis than competing algorithms.
Sentiment analysis is a very challenging task @cite_17 @cite_22 . Researchers from natural language processing and information retrieval have developed different approaches to solve this problem, achieving promising or satisfying results @cite_28 . In the context of social media, there are several additional unique challenges. First, there are huge amounts of data available. Second, messages on social networks are by nature informal and short. Third, people use not only textual messages, but also images and videos to express themselves.
{ "cite_N": [ "@cite_28", "@cite_22", "@cite_17" ], "mid": [ "", "2105010330", "2112251034" ], "abstract": [ "", "We study the online micro-blog sentiment detection problem, which aims to determine whether a micro-blog post expresses emotions. This problem is challenging because a micro-blog post is very short and individuals have distinct ways of expressing emotions. A single classification model trained on the entire corpus may fail to capture characteristics unique to each user. On the other hand, a personalized model for each user may be inaccurate due to the scarcity of training data, especially at the very beginning where users have just posted a few entries. To overcome these challenges, we propose learning a global model over all micro-bloggers, which is then leveraged to continuously refine the individual models through a collaborative online learning way. We evaluate our algorithm on a real-life micro-blog dataset collected from the popular micro-blog site – Twitter. Results show that our algorithm is effective and efficient for timely sentiment detection in real micro-blogging applications.", "Automated identification of diverse sentiment types can be beneficial for many NLP systems such as review summarization and public media analysis. In some of these systems there is an option of assigning a sentiment value to a single sentence or a very short text. In this paper we propose a supervised sentiment classification framework which is based on data from Twitter, a popular microblogging service. By utilizing 50 Twitter tags and 15 smileys as sentiment labels, this framework avoids the need for labor intensive manual annotation, allowing identification and classification of diverse sentiment types of short texts. We evaluate the contribution of different feature types for sentiment classification and show that our framework successfully identifies sentiment types of untagged sentences. 
The quality of the sentiment identification was also confirmed by human judges. We also explore dependencies and overlap between different sentiment types represented by smileys and Twitter hashtags." ] }
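The "progressive strategy" for fine-tuning on noisy machine-labeled data, described in the abstract above, amounts to filtering the training set between rounds so that only samples the current model confidently agrees with are kept. A rough, hypothetical sketch of that selection step (function names and the threshold are invented for illustration; the paper itself trains a CNN end-to-end):

```python
def select_for_next_round(samples, predict_proba, threshold=0.9):
    """Keep machine-labeled samples the current model confidently agrees with.

    samples: list of (features, machine_label) pairs, machine_label in {0, 1};
    predict_proba: the current model, mapping features -> P(label == 1).
    Mirrors the progressive fine-tuning idea: noisy labels the model
    contradicts (or is unsure about) are dropped before retraining.
    """
    kept = []
    for x, y in samples:
        p = predict_proba(x)
        confidence = p if y == 1 else 1.0 - p
        if confidence >= threshold:
            kept.append((x, y))
    return kept
```

In each round the model is retrained on the filtered set, so label noise is progressively reduced while the sample count shrinks only where the baseline labeler and the model disagree.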
1509.06041
2253891449
Sentiment analysis of online user generated content is important for many social media analytics tasks. Researchers have largely relied on textual sentiment analysis to develop systems to predict political elections, measure economic indicators, and so on. Recently, social media users are increasingly using images and videos to express their opinions and share their experiences. Sentiment analysis of such large scale visual content can help better extract user sentiments toward events or topics, such as those in image tweets, so that prediction of sentiment from visual content is complementary to textual sentiment analysis. Motivated by the needs in leveraging large scale yet noisy training data to solve the extremely challenging problem of image sentiment analysis, we employ Convolutional Neural Networks (CNN). We first design a suitable CNN architecture for image sentiment analysis. We obtain half a million training samples by using a baseline sentiment algorithm to label Flickr images. To make use of such noisy machine labeled data, we employ a progressive strategy to fine-tune the deep network. Furthermore, we improve the performance on Twitter images by inducing domain transfer with a small number of manually labeled Twitter images. We have conducted extensive experiments on manually labeled Twitter images. The results show that the proposed CNN can achieve better performance in image sentiment analysis than competing algorithms.
There are also several recent works on visual sentiment analysis. proposes a machine learning algorithm to predict the sentiment of images using pixel-level features. Motivated by the fact that sentiment involves high-level abstraction, which may be easier to explain by objects or attributes in images, both @cite_12 and @cite_16 propose to employ visual entities or attributes as features for visual sentiment analysis. In @cite_12 , 1200 adjective noun pairs (ANP), which may correspond to different levels of different emotions, are extracted. These ANPs are used as queries to crawl images from Flickr. Next, pixel-level features of images in each ANP are employed to train 1200 ANP detectors. The responses of these 1200 classifiers can then be considered as mid-level features for visual sentiment analysis. The work in @cite_16 employs a similar mechanism. The main difference is that 102 scene attributes are used instead.
{ "cite_N": [ "@cite_16", "@cite_12" ], "mid": [ "2046682605", "2048783874" ], "abstract": [ "Visual content analysis has always been important yet challenging. Thanks to the popularity of social networks, images become an convenient carrier for information diffusion among online users. To understand the diffusion patterns and different aspects of the social images, we need to interpret the images first. Similar to textual content, images also carry different levels of sentiment to their viewers. However, different from text, where sentiment analysis can use easily accessible semantic and context information, how to extract and interpret the sentiment of an image remains quite challenging. In this paper, we propose an image sentiment prediction framework, which leverages the mid-level attributes of an image to predict its sentiment. This makes the sentiment classification results more interpretable than directly using the low-level features of an image. To obtain a better performance on images containing faces, we introduce eigenface-based facial expression detection as an additional mid-level attributes. An empirical study of the proposed framework shows improved performance in terms of prediction accuracy. More importantly, by inspecting the prediction results, we are able to discover interesting relationships between mid-level attribute and image sentiment.", "A picture is worth one thousand words, but what words should be used to describe the sentiment and emotions conveyed in the increasingly popular social multimedia? We demonstrate a novel system which combines sound structures from psychology and the folksonomy extracted from social multimedia to develop a large visual sentiment ontology consisting of 1,200 concepts and associated classifiers called SentiBank. 
Each concept, defined as an Adjective Noun Pair (ANP), is made of an adjective strongly indicating emotions and a noun corresponding to objects or scenes that have a reasonable prospect of automatic detection. We believe such large-scale visual classifiers offer a powerful mid-level semantic representation enabling high-level sentiment analysis of social multimedia. We demonstrate novel applications made possible by SentiBank including live sentiment prediction of social media and visualization of visual content in a rich intuitive semantic space." ] }
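The SentiBank-style pipeline in this record turns an image into a vector of ANP detector responses and reads sentiment off that mid-level representation. A toy sketch, with invented detector functions and polarity weights standing in for the 1,200 trained ANP classifiers (not the actual SentiBank code):

```python
def anp_features(image, detectors):
    """Mid-level representation: one detector response per ANP concept."""
    return [d(image) for d in detectors]

def sentiment_score(responses, polarities):
    """Simple linear read-out over ANP responses.

    In SentiBank a trained classifier plays this role; a positive score
    here stands for positive predicted sentiment.
    """
    return sum(r * s for r, s in zip(responses, polarities))
```

For example, with a "happy dog" detector firing strongly and a "dark clouds" detector firing weakly, the weighted sum comes out positive, which is the interpretable property the mid-level representation is designed to provide.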
1509.06361
2261564226
Let @math be a fixed graph on @math vertices. Let @math iff the input graph @math on @math vertices contains @math as a (not necessarily induced) subgraph. Let @math denote the cardinality of a maximum independent set of @math . In this paper we show: [ Q(f_H) = Ω(√(α n)), ] where @math denotes the quantum query complexity of @math . As a consequence we obtain lower bounds for @math in terms of several other parameters of @math such as the average degree, minimum vertex cover, chromatic number, and the critical probability. We also use the above bound to show that @math for any @math , improving on the previously best known bound of @math . Until very recently, it was believed that the quantum query complexity is at least the square root of the randomized one. Our @math bound for @math matches the square root of the current best known bound for the randomized query complexity of @math , which is @math due to Gröger. Interestingly, the randomized bound of @math for @math still remains open. We also study the Subgraph Homomorphism Problem, denoted by @math , and show that @math . Finally, we extend our results to the @math -uniform hypergraphs. In particular, we show an @math bound for the quantum query complexity of the Subgraph Isomorphism, improving on the previously known @math bound. For the Subgraph Homomorphism, we obtain an @math bound for the same.
Understanding the query complexity of monotone graph properties has a long history. In the deterministic setting, the Aanderaa-Rosenberg-Karp Conjecture asserts that one must query all @math edges in the worst case. The randomized complexity of monotone graph properties is conjectured to be @math . Yao @cite_4 obtained the first super-linear lower bound in the randomized setting using graph packing arguments. Subsequently, his bound was improved by King @cite_6 and later by Hajnal @cite_13 . The current best known bound is @math , due to Chakrabarti and Khot @cite_18 . Moreover, O'Donnell, Saks, Schramm, and Servedio @cite_3 also obtained an @math bound via a more generic approach for monotone transitive functions. Friedgut, Kahn, and Wigderson @cite_16 obtain an @math bound, where @math is the critical probability of the property. In the quantum setting, Buhrman, Cleve, de Wolf and Zalka @cite_10 were the first to study the quantum complexity of graph properties. Santha and Yao @cite_14 obtain an @math bound for general properties. Their proof follows along the lines of Hajnal's proof.
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_4", "@cite_10", "@cite_3", "@cite_6", "@cite_16", "@cite_13" ], "mid": [ "", "", "2041763757", "", "2950902318", "1965171998", "1582919494", "2101724873" ], "abstract": [ "", "", "For any property P on n-vertex graphs, let C(P) be the minimum number of edges that need to be examined by any decision tree algorithm for determining P. In 1975 Rivest and Vuillemin settled the Aanderaa-Rosenberg Conjecture, proving that C(P) = Ω(n^2) for every nontrivial monotone graph property P. An intriguing open question is whether the theorem remains true when randomized algorithms are allowed. In this paper we report progress on this problem, showing that Ω(n (log n)^{1/12}) edges must be examined by a randomized algorithm for determining any nontrivial monotone graph property.", "", "We prove that for any decision tree calculating a boolean function @math , \[ \mathrm{Var}[f] \le \sum_{i=1}^n \delta_i \, \mathrm{Inf}_i(f), \] where @math is the probability that the @math th input variable is read and @math is the influence of the @math th variable on @math . The variance, influence and probability are taken with respect to an arbitrary product measure on @math . It follows that the minimum depth of a decision tree calculating a given balanced function is at least the reciprocal of the largest influence of any input variable. Likewise, any balanced boolean function with a decision tree of depth @math has a variable with influence at least @math . The only previous nontrivial lower bound known was @math . Our inequality has many generalizations, allowing us to prove influence lower bounds for randomized decision trees, decision trees on arbitrary product probability spaces, and decision trees with non-boolean outputs. As an application of our results we give a very easy proof that the randomized query complexity of nontrivial monotone graph properties is at least @math , where @math is the number of vertices and @math is the critical threshold probability. 
This supersedes the milestone @math bound of Hajnal and is sometimes superior to the best known lower bounds of Chakrabarti-Khot and Friedgut-Kahn-Wigderson.", "In this simple model, a decision tree algorithm must determine whether an unknown digraph on nodes 1, 2, …, n has a given property by asking questions of the form “Is edge in the graph?”. The complexity of a property is the number of questions which must be asked in the worst case. Aanderaa and Rosenberg conjectured that any monotone, nontrivial, (isomorphism-invariant) n-node digraph property has complexity Ω(n^2). This bound was proved by Rivest and Vuillemin and subsequently improved to n^2/4 + o(n^2). In Part I, we give a bound of n^2/2 + o(n^2). Whether these properties are evasive remains open. In Part II, we investigate the power of randomness in recognizing these properties by considering randomized decision tree algorithms in which coins may be flipped to determine the next edge to be queried. Yao's lower bound on the randomized complexity of any monotone nontrivial graph property is improved from Ω(n log^{1/12} n) to Ω(n^{5/4}), and improved bounds for the complexity of monotone, nontrivial bipartite graph properties are shown.", "We prove a new lower bound on the randomized decision tree complexity of monotone graph properties. For a monotone property A of graphs on n vertices, let p = p(A) denote the threshold probability of A, namely the value of p for which a random graph from G(n,p) has property A with probability 1/2. Then the expected number of queries made by any decision tree for A on such a random graph is at least Ω(n^2 / max{pn, log n}).", "We improve King's Ω(n^{5/4}) lower bound on the randomized decision tree complexity of monotone graph properties to Ω(n^{4/3}). The proof follows Yao's approach and improves it in a different direction from King's. At the heart of the proof is a duality argument combined with a new packing lemma for bipartite graphs." ] }
1509.06094
2252481337
Inverting the hash values by performing brute force computation is one of the latest security threats on password based authentication technique. New technologies are being developed for brute force computation and these increase the success rate of inversion attack. Honeyword base authentication protocol can successfully mitigate this threat by making password cracking detectable. However, the existing schemes have several limitations like Multiple System Vulnerability, Weak DoS Resistivity, Storage Overhead, etc. In this paper we have proposed a new honeyword generation approach, identified as Paired Distance Protocol (PDP) which overcomes almost all the drawbacks of previously proposed honeyword generation approaches. The comprehensive analysis shows that PDP not only attains a high detection rate of 97.23 but also reduces the storage cost to a great extent.
Modern password-cracking algorithms use the concept of probabilistic context-free grammars @cite_0 . In @cite_5 , the authors characterize the vulnerability of passwords under the same threat model as @cite_0 by considering different password-composition policies. One such weak password-composition policy is "basic8", in which users are instructed "Password must have at least 8 characters". One billion guesses are sufficient to guess @math of the passwords created under this policy. In 2006, Fred Cohen made the first contribution in this domain @cite_11 . Thereafter, many methodologies have been proposed in this direction, and the idea has been deployed in many password-related domains. Herley and Florencio @cite_4 use this concept to protect online banking accounts from brute-force attacks. The authors of @cite_8 propose the concept of "Kamouflage", in which the real password of the user is stored along with fake passwords, but without a "honeychecker" server. Later, in @cite_14 , the authors introduce the "honeychecker" server to detect password-cracking attempts. Recently, Chakraborty and Mondal showed how honeypots can be used to detect shoulder-surfing attacks @cite_9 .
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_8", "@cite_9", "@cite_0", "@cite_5", "@cite_11" ], "mid": [ "2093397575", "", "1811376871", "93892664", "2135359429", "2054626033", "" ], "abstract": [ "We propose a simple method for improving the security of hashed passwords: the maintenance of additional \"honeywords\" (false passwords) associated with each user's account. An adversary who steals a file of hashed passwords and inverts the hash function cannot tell if he has found the password or a honeyword. The attempted use of a honeyword for login sets off an alarm. An auxiliary server (the \"honeychecker\") can distinguish the user password from honeywords for the login routine, and will set off an alarm if a honeyword is submitted.", "", "We introduce Kamouflage: a new architecture for building theft-resistant password managers. An attacker who steals a laptop or cell phone with a Kamouflage-based password manager is forced to carry out a considerable amount of online work before obtaining any user credentials. We implemented our proposal as a replacement for the built-in Firefox password manager, and provide performance measurements and the results from experiments with large real-world password sets to evaluate the feasibility and effectiveness of our approach. Kamouflage is well suited to become a standard architecture for password managers on mobile devices.", "Traditional password based authentication scheme is vulnerable to shoulder surfing attack. So if an attacker sees a legitimate user to enter password then it is possible for the attacker to use that credentials later to illegally login into the system and may do some malicious activities. Many methodologies exist to prevent such attack. These methods are either partially observable or fully observable to the attacker. In this paper we have focused on detection of shoulder surfing attack rather than prevention. We have introduced the concept of tag digit to create a trap known as honeypot. 
Using the proposed methodology if the shoulder surfers try to login using others’ credentials then there is a high chance that they will be caught red handed. Comparative analysis shows that unlike the existing preventive schemes, the proposed methodology does not require much computation from users end. Thus from security and usability perspective the proposed scheme is quite robust and powerful.", "Choosing the most effective word-mangling rules to use when performing a dictionary-based password cracking attack can be a difficult task. In this paper we discuss a new method that generates password structures in highest probability order. We first automatically create a probabilistic context-free grammar based upon a training set of previously disclosed passwords. This grammar then allows us to generate word-mangling rules, and from them, password guesses to be used in password cracking. We will also show that this approach seems to provide a more effective way to crack passwords as compared to traditional methods by testing our tools and techniques on real password sets. In one series of experiments, training on a set of disclosed passwords, our approach was able to crack 28 to 129 more passwords than John the Ripper, a publicly available standard password cracking program.", "Text-based passwords remain the dominant authentication method in computer systems, despite significant advancement in attackers' capabilities to perform password cracking. In response to this threat, password composition policies have grown increasingly complex. However, there is insufficient research defining metrics to characterize password strength and using them to evaluate password-composition policies. In this paper, we analyze 12,000 passwords collected under seven composition policies via an online study. We develop an efficient distributed method for calculating how effectively several heuristic password-guessing algorithms guess passwords. 
Leveraging this method, we investigate (a) the resistance of passwords created under different conditions to guessing, (b) the performance of guessing algorithms under different training sets, (c) the relationship between passwords explicitly created under a given composition policy and other passwords that happen to meet the same requirements, and (d) the relationship between guessability, as measured with password-cracking algorithms, and entropy estimates. Our findings advance understanding of both password-composition policies and metrics for quantifying password security.", "" ] }
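The honeyword/honeychecker mechanism that this record's abstracts describe splits knowledge between two servers. A minimal illustrative sketch, with invented class and function names (this is the generic Juels-Rivest idea, not the PDP construction from the paper): the login server stores only hashed "sweetwords" per account, while the honeychecker stores only the index of the real password.

```python
import hashlib
import hmac
import secrets

class Honeychecker:
    """Knows only which sweetword index is real -- never the passwords."""
    def __init__(self):
        self._index = {}

    def set_index(self, user, i):
        self._index[user] = i

    def check(self, user, i):
        return self._index[user] == i

def hash_pw(pw, salt):
    # Slow salted hash; iteration count kept low for the demo.
    return hashlib.pbkdf2_hmac("sha256", pw.encode(), salt, 10_000)

class LoginServer:
    def __init__(self, checker):
        self.checker = checker
        self.accounts = {}  # user -> (salt, list of sweetword hashes)

    def register(self, user, password, honeywords):
        sweet = honeywords + [password]
        secrets.SystemRandom().shuffle(sweet)
        salt = secrets.token_bytes(16)
        self.accounts[user] = (salt, [hash_pw(s, salt) for s in sweet])
        # Only the index leaves the login server.
        self.checker.set_index(user, sweet.index(password))

    def login(self, user, attempt):
        salt, hashes = self.accounts[user]
        h = hash_pw(attempt, salt)
        for i, stored in enumerate(hashes):
            if hmac.compare_digest(h, stored):
                if self.checker.check(user, i):
                    return "ok"
                # A honeyword matched: the hash file was likely stolen
                # and inverted, exactly the detectable event above.
                return "ALARM: honeyword submitted"
        return "wrong password"
```

An attacker who inverts the stolen hash file recovers all sweetwords but cannot tell which is real, so a guess triggers the alarm with high probability.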
1509.05377
1941846586
In this paper, we consider the rectilinear one-center problem on uncertain points in the plane. In this problem, we are given a set @math of @math (weighted) uncertain points in the plane and each uncertain point has @math possible locations each associated with a probability for the point appearing at that location. The goal is to find a point @math in the plane which minimizes the maximum expected rectilinear distance from @math to all uncertain points of @math , and @math is called a rectilinear center. We present an algorithm that solves the problem in @math time. Since the input size of the problem is @math , our algorithm is optimal.
The problem of finding a one-center among uncertain points on a line was considered in our previous work @cite_11 , where an @math -time algorithm was given. An algorithm for computing @math centers for general @math was also given in @cite_11 , with running time @math . In fact, in @cite_11 we considered the @math -center problem under a more general uncertainty model in which each uncertain point can appear in @math intervals. We also studied the one-center problem for uncertain points on tree networks in @cite_2 , where a linear-time algorithm was proposed.
{ "cite_N": [ "@cite_2", "@cite_11" ], "mid": [ "2401996954", "1628490320" ], "abstract": [ "Uncertain data has been very common in many applications. In this paper, we consider the one-center problem for uncertain data on tree networks. In this problem, we are given a tree T and n (weighted) uncertain points each of which has m possible locations on T associated with probabilities. The goal is to find a point (x^* ) on T such that the maximum (weighted) expected distance from (x^* ) to all uncertain points is minimized. To the best of our knowledge, this problem has not been studied before. We propose a refined prune-and-search technique that solves the problem in linear time.", "Problems on uncertain data have attracted significant attention due to the imprecise nature of many measurement data. In this paper, we consider the k-center problem on one-dimensional uncertain data. The input is a set P of (weighted) uncertain points on a real line, and each uncertain point is specified by its probability density function (pdf) which is a piecewise-uniform function (i.e., a histogram). The goal is to find a set Q of k points on the line to minimize the maximum expected distance from the uncertain points of P to their expected closest points in Q. We present efficient algorithms for this uncertain k-center problem and their running times almost match those for the \"deterministic\" k-center problem." ] }
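For the one-dimensional special case discussed above, each expected distance E[|x - P_i|] of an uncertain point with finitely many candidate locations is convex piecewise linear, so the maximum over all uncertain points is convex and the center can be located numerically. The sketch below uses plain ternary search purely as an illustration of the objective; the cited algorithms are asymptotically faster, and all names here are invented:

```python
def expected_distance(x, locations, probs):
    """E[|x - P|] for an uncertain point P with a discrete distribution."""
    return sum(p * abs(x - y) for y, p in zip(locations, probs))

def uncertain_one_center_1d(points, lo, hi, iters=200):
    """points: list of (weight, locations, probs). Returns (x*, objective).

    Minimizes max_i w_i * E[|x - P_i|] over x in [lo, hi] by ternary
    search, valid because the objective is convex in x.
    """
    def objective(x):
        return max(w * expected_distance(x, ys, ps) for w, ys, ps in points)
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if objective(m1) < objective(m2):
            hi = m2
        else:
            lo = m1
    x = (lo + hi) / 2.0
    return x, objective(x)
```

With two certain points at 0 and 10 this recovers the midpoint 5; with an uncertain point split between 0 and 2 against a certain point at 10, the balance point shifts to where the two expected-distance envelopes cross.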
1509.05377
1941846586
In this paper, we consider the rectilinear one-center problem on uncertain points in the plane. In this problem, we are given a set @math of @math (weighted) uncertain points in the plane and each uncertain point has @math possible locations each associated with a probability for the point appearing at that location. The goal is to find a point @math in the plane which minimizes the maximum expected rectilinear distance from @math to all uncertain points of @math , and @math is called a rectilinear center. We present an algorithm that solves the problem in @math time. Since the input size of the problem is @math , our algorithm is optimal.
There is also a lot of other work on facility location problems for uncertain data. For instance, Cormode and McGregor @cite_19 proved that the @math -center problem on uncertain points, each associated with multiple locations in high-dimensional space, is NP-hard, and gave approximation algorithms for different problem models. Foul @cite_0 considered the Euclidean one-center problem on uncertain points each of which has a uniform distribution in a given rectangle in the plane. de Berg et al. @cite_1 studied the Euclidean 2-center problem for a set of moving points in the plane (the moving points can be considered uncertain).
{ "cite_N": [ "@cite_0", "@cite_19", "@cite_1" ], "mid": [ "2080818711", "2110553125", "" ], "abstract": [ "Center problems or minimax facility location problems are among the most active research areas in location theory. In this paper, we find the best unique location for a facility in the plane such that the maximum expected weighted distance to all random demand points is minimized.", "There is an increasing quantity of data with uncertainty arising from applications such as sensor network measurements, record linkage, and as output of mining algorithms. This uncertainty is typically formalized as probability density functions over tuple values. Beyond storing and processing such data in a DBMS, it is necessary to perform other data analysis tasks such as data mining. We study the core mining problem of clustering on uncertain data, and define appropriate natural generalizations of standard clustering optimization criteria. Two variations arise, depending on whether a point is automatically associated with its optimal center, or whether it must be assigned to a fixed cluster no matter where it is actually located. For uncertain versions of k-means and k-median, we show reductions to their corresponding weighted versions on data with no uncertainties. These are simple in the unassigned case, but require some care for the assigned version. Our most interesting results are for uncertain k-center, which generalizes both traditional k-center and k-median objectives. We show a variety of bicriteria approximation algorithms. One picks O(k ε^{-1} log^2 n) centers and achieves a (1 + ε)-approximation to the best uncertain k-centers. Another picks 2k centers and achieves a constant factor approximation. Collectively, these results are the first known guaranteed approximation algorithms for the problems of clustering uncertain data.", "" ] }
1509.05377
1941846586
In this paper, we consider the rectilinear one-center problem on uncertain points in the plane. In this problem, we are given a set @math of @math (weighted) uncertain points in the plane and each uncertain point has @math possible locations each associated with a probability for the point appearing at that location. The goal is to find a point @math in the plane which minimizes the maximum expected rectilinear distance from @math to all uncertain points of @math , and @math is called a rectilinear center. We present an algorithm that solves the problem in @math time. Since the input size of the problem is @math , our algorithm is optimal.
The @math -center problems on deterministic points are classical problems and have been studied extensively. When all points are in the plane, the problems on most distance metrics are NP-hard @cite_15 . However, some special cases can be solved in polynomial time, e.g., the one-center problem @cite_7 , the two-center problem @cite_8 , the rectilinear three-center problem @cite_20 , the line-constrained @math -center problems (where all centers are restricted to be on a given line in the plane) @cite_3 @cite_4 @cite_17 .
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_8", "@cite_3", "@cite_15", "@cite_20", "@cite_17" ], "mid": [ "2755780781", "2258388518", "2046812523", "2158196818", "2133962824", "2087653366", "113922551" ], "abstract": [ "Given a set P of n points and a straight line L, we study three important variations of minimum enclosing circle problem as follows:", "", "Abstract This paper considers the planar Euclidean two-center problem: given a planar n-point set S, find two congruent circular disks of the smallest radius covering S. The main result is a deterministic algorithm with running time O(n log^2 n log^2 log n), improving the previous O(n log^9 n) bound of Sharir and almost matching the randomized O(n log^2 n) bound of Eppstein. If a point in the intersection of the two disks is given, then we can solve the problem in O(n log n) time with high probability.", "In this paper we study several instances of the aligned k-center problem where the goal is, given a set of points S in the plane and a parameter k ⩾ 1, to find k disks with centers on a line l such that their union covers S and the maximum radius of the disks is minimized. This problem is a constrained version of the well-known k-center problem in which the centers are constrained to lie in a particular region such as a segment, a line, or a polygon. We first consider the simplest version of the problem where the line l is given in advance; we can solve this problem in time O(n log^2 n). In the case where only the direction of l is fixed, we give an O(n^2 log^2 n)-time algorithm. When l is an arbitrary line, we give a randomized algorithm with expected running time O(n^4 log^2 n). Then we present (1+ε)-approximation algorithms for these three problems. When we denote T(k, ε) = (k/ε^2 + (k/ε) log k) log(1/ε), these algorithms run in O(n log k + T(k, ε)) time, O(n log k + T(k, ε)/ε) time, and O(n log k + T(k, ε)/ε^2) time, respectively. 
For k = O(n^{1/3}/log n), we also give randomized algorithms with expected running times O(n + (k/ε^2) log(1/ε)), O(n + (k/ε^3) log(1/ε)), and O(n + (k/ε^4) log(1/ε)), respectively.", "Given n demand points in the plane, the p-center problem is to find p supply points (anywhere in the plane) so as to minimize the maximum distance from a demand point to its respective nearest supply point. The p-median problem is to minimize the sum of distances from demand points to their respective nearest supply points. We prove that the p-center and the p-median problems relative to both the Euclidean and the rectilinear metrics are NP-hard. In fact, we prove that it is NP-hard even to approximate the p-center problems sufficiently closely. The reductions are from 3-satisfiability.", "Rectilinear k-centers of a finite point set P ⊂ R^2 are the centers of at most k congruent axis-parallel squares of minimal size whose union covers P. This paper describes a linear time algorithm based on the prune-and-search paradigm to compute rectilinear 3-centers. The algorithm is elementary in the sense that it does not build on any sophisticated data structures or other algorithms, except for linear time median finding. An implementation is publicly available as part of the Computational Geometry Algorithms Library (CGAL).", "The (weighted) k-median, k-means, and k-center problems in the plane are known to be NP-hard. In this paper, we study these problems with an additional constraint that requires the sought k facilities to be on a given line. We present efficient algorithms for various distance metrics such as L_1, L_2, L_∞. Assume all n weighted points are given sorted by their projections on the given line. For k-median, our algorithms for the L_1 and L_∞ metrics run in @math time and @math time, respectively. 
For k-means, which is defined only on the L_2 metric, we give an @math -time algorithm. For k-center, our algorithms run in O(n log n) time for all three metrics, and in O(n) time for the unweighted version under the L_1 and L_∞ metrics." ] }
1509.05488
2243512948
Recently, knowledge graph embedding, which projects symbolic entities and relations into continuous vector space, has become a new, hot topic in artificial intelligence. This paper addresses a new issue of multiple relation semantics that a relation may have multiple meanings revealed by the entity pairs associated with the corresponding triples, and proposes a novel Gaussian mixture model for embedding, TransG. The new model can discover latent semantics for a relation and leverage a mixture of relation component vectors for embedding a fact triple. To the best of our knowledge, this is the first generative model for knowledge graph embedding, which is able to deal with multiple relation semantics. Extensive experiments show that the proposed model achieves substantial improvements against the state-of-the-art baselines.
Existing translation-based embedding methods share the same translation principle @math , and the score function is designed as @math , where @math are the entity embedding vectors projected into the relation-specific space. @cite_19 lays the entities in the original entity space: @math . @cite_2 projects entities onto a hyperplane to address the issue of complex relation embedding: @math . To address the same issue, @cite_18 transforms the entity embeddings by the same relation-specific matrix: @math . TransR also proposes an ad-hoc clustering-based method, CTransR, where the entity pairs for a relation are clustered into different groups and the pairs in the same group share the same relation vector. In comparison, our model addresses this issue in a theoretically more elegant way and does not require clustering as a pre-processing step. Furthermore, our model performs much better than CTransR, as expected. @cite_20 leverages the structure of the knowledge graph by pre-computing a distinct weight for each training triple to enhance embedding. @cite_9 is a probabilistic embedding method that models the uncertainty in the knowledge graph.
{ "cite_N": [ "@cite_18", "@cite_9", "@cite_19", "@cite_2", "@cite_20" ], "mid": [ "2184957013", "2073587810", "2127795553", "2283196293", "2172684358" ], "abstract": [ "Knowledge graph completion aims to perform link prediction between entities. In this paper, we consider the approach of knowledge graph embeddings. Recently, models such as TransE and TransH build entity and relation embeddings by regarding a relation as translation from head entity to tail entity. We note that these models simply put both entities and relations within the same semantic space. In fact, an entity may have multiple aspects and various relations may focus on different aspects of entities, which makes a common space insufficient for modeling. In this paper, we propose TransR to build entity and relation embeddings in separate entity space and relation spaces. Afterwards, we learn embeddings by first projecting entities from entity space to corresponding relation space and then building translations between projected entities. In experiments, we evaluate our models on three tasks including link prediction, triple classification and relational fact extraction. Experimental results show significant and consistent improvements compared to state-of-the-art baselines including TransE and TransH. The source code of this paper can be obtained from https: github.com mrlyk423 relation_extraction.", "The representation of a knowledge graph (KG) in a latent space recently has attracted more and more attention. To this end, some proposed models (e.g., TransE) embed entities and relations of a KG into a \"point\" vector space by optimizing a global loss function which ensures the scores of positive triplets are higher than negative ones. We notice that these models always regard all entities and relations in a same manner and ignore their (un)certainties. In fact, different entities and relations may contain different certainties, which makes identical certainty insufficient for modeling. 
Therefore, this paper switches to density-based embedding and propose KG2E for explicitly modeling the certainty of entities and relations, which learn the representations of KGs in the space of multi-dimensional Gaussian distributions. Each entity relation is represented by a Gaussian distribution, where the mean denotes its position and the covariance (currently with diagonal covariance) can properly represent its certainty. In addition, compared with the symmetric measures used in point-based methods, we employ the KL-divergence for scoring triplets, which is a natural asymmetry function for effectively modeling multiple types of relations. We have conducted extensive experiments on link prediction and triplet classification with multiple benchmark datasets (WordNet and Freebase). Our experimental results demonstrate that our method can effectively model the (un)certainties of entities and relations in a KG, and it significantly outperforms state-of-the-art methods (including TransH and TransR).", "We consider the problem of embedding entities and relationships of multi-relational data in low-dimensional vector spaces. Our objective is to propose a canonical model which is easy to train, contains a reduced number of parameters and can scale up to very large databases. Hence, we propose TransE, a method which models relationships by interpreting them as translations operating on the low-dimensional embeddings of the entities. Despite its simplicity, this assumption proves to be powerful since extensive experiments show that TransE significantly outperforms state-of-the-art methods in link prediction on two knowledge bases. Besides, it can be successfully trained on a large scale data set with 1M entities, 25k relationships and more than 17M training samples.", "We deal with embedding a large scale knowledge graph composed of entities and relations into a continuous vector space. 
TransE is a promising method proposed recently, which is very efficient while achieving state-of-the-art predictive performance. We discuss some mapping properties of relations which should be considered in embedding, such as reflexive, one-to-many, many-to-one, and many-to-many. We note that TransE does not do well in dealing with these properties. Some complex models are capable of preserving these mapping properties but sacrifice efficiency in the process. To make a good trade-off between model capacity and efficiency, in this paper we propose TransH which models a relation as a hyperplane together with a translation operation on it. In this way, we can well preserve the above mapping properties of relations with almost the same model complexity of TransE. Additionally, as a practical knowledge graph is often far from completed, how to construct negative examples to reduce false negative labels in training is very important. Utilizing the one-to-many many-to-one mapping property of a relation, we propose a simple trick to reduce the possibility of false negative labeling. We conduct extensive experiments on link prediction, triplet classification and fact extraction on benchmark datasets like WordNet and Freebase. Experiments show TransH delivers significant improvements over TransE on predictive accuracy with comparable capability to scale up.", "Many knowledge repositories nowadays contain billions of triplets, i.e. (head-entity, relationship, tail-entity), as relation instances. These triplets form a directed graph with entities as nodes and relationships as edges. However, this kind of symbolic and discrete storage structure makes it difficult for us to exploit the knowledge to enhance other intelligenceacquired applications (e.g. the QuestionAnswering System), as many AI-related algorithms prefer conducting computation on continuous data. 
Therefore, a series of emerging approaches have been proposed to facilitate knowledge computing via encoding the knowledge graph into a low-dimensional embedding space. TransE is the latest and most promising approach among them, and can achieve a higher performance with fewer parameters by modeling the relationship as a transitional vector from the head entity to the tail entity. Unfortunately, it is not flexible enough to tackle well with the various mapping properties of triplets, even though its authors spot the harm on performance. In this paper, we thus propose a superior model called TransM to leverage the structure of the knowledge graph via pre-calculating the distinct weight for each training triplet according to its relational mapping property. In this way, the optimal function deals with each triplet depending on its own weight. We carry out extensive experiments to compare TransM with the state-of-the-art method TransE and other prior arts. The performance of each approach is evaluated within two different application scenarios on several benchmark datasets. Results show that the model we proposed significantly outperforms the former ones with lower parameter complexity as TransE." ] }
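The shared translation principle behind TransE, TransH, and TransR can be sketched in a few lines. This is our own illustration, not the papers' code: entities and relations are plain Python vectors, the hyperplane normal `w` and projection matrix `M` are illustrative stand-ins for learned parameters, and a lower score means a more plausible triple.

```python
def l2(v):
    return sum(x * x for x in v) ** 0.5

def score_transe(h, r, t):
    # TransE: ||h + r - t||, entities kept in the original space
    return l2([hi + ri - ti for hi, ri, ti in zip(h, r, t)])

def project_hyperplane(v, w):
    # TransH: v_perp = v - (w . v) w, with w a unit hyperplane normal
    dot = sum(wi * vi for wi, vi in zip(w, v))
    return [vi - dot * wi for vi, wi in zip(v, w)]

def score_transh(h, r, t, w):
    # Same translation check, but on the hyperplane projections
    return l2([a + ri - b for a, ri, b in
               zip(project_hyperplane(h, w), r, project_hyperplane(t, w))])

def matvec(M, v):
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in M]

def score_transr(h, r, t, M):
    # TransR: map entities into the relation space with M before translating
    return l2([a + ri - b for a, ri, b in zip(matvec(M, h), r, matvec(M, t))])
```

With t = h + r exactly, the TransE score is 0; TransH and TransR differ only in how the entity vectors are mapped before the same translation check is applied.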
1509.05488
2243512948
Recently, knowledge graph embedding, which projects symbolic entities and relations into continuous vector space, has become a new, hot topic in artificial intelligence. This paper addresses a new issue of multiple relation semantics that a relation may have multiple meanings revealed by the entity pairs associated with the corresponding triples, and proposes a novel Gaussian mixture model for embedding, TransG. The new model can discover latent semantics for a relation and leverage a mixture of relation component vectors for embedding a fact triple. To the best of our knowledge, this is the first generative model for knowledge graph embedding, which is able to deal with multiple relation semantics. Extensive experiments show that the proposed model achieves substantial improvements against the state-of-the-art baselines.
@cite_17 aims at further discovering the geometric structure of the embedding space to make it semantically smooth. Another line of work focuses on bridging the gap between knowledge and text, with a joint loss function over the knowledge graph and a text corpus, while other work incorporates rules related to relation types such as 1-N and N-1. @cite_7 is a path-based embedding model that simultaneously considers the information and the confidence level of paths in the knowledge graph.
{ "cite_N": [ "@cite_7", "@cite_17" ], "mid": [ "2952854166", "2250376704" ], "abstract": [ "Representation learning of knowledge bases (KBs) aims to embed both entities and relations into a low-dimensional space. Most existing methods only consider direct relations in representation learning. We argue that multiple-step relation paths also contain rich inference patterns between entities, and propose a path-based representation learning model. This model considers relation paths as translations between entities for representation learning, and addresses two key challenges: (1) Since not all relation paths are reliable, we design a path-constraint resource allocation algorithm to measure the reliability of relation paths. (2) We represent relation paths via semantic composition of relation embeddings. Experimental results on real-world datasets show that, as compared with baselines, our model achieves significant and consistent improvements on knowledge base completion and relation extraction from text.", "This paper considers the problem of embedding Knowledge Graphs (KGs) consisting of entities and relations into lowdimensional vector spaces. Most of the existing methods perform this task based solely on observed facts. The only requirement is that the learned embeddings should be compatible within each individual fact. In this paper, aiming at further discovering the intrinsic geometric structure of the embedding space, we propose Semantically Smooth Embedding (SSE). The key idea of SSE is to take full advantage of additional semantic information and enforce the embedding space to be semantically smooth, i.e., entities belonging to the same semantic category will lie close to each other in the embedding space. Two manifold learning algorithms Laplacian Eigenmaps and Locally Linear Embedding are used to model the smoothness assumption. Both are formulated as geometrically based regularization terms to constrain the embedding task. 
We empirically evaluate SSE in two benchmark tasks of link prediction and triple classification, and achieve significant and consistent improvements over state-of-the-art methods. Furthermore, SSE is a general framework. The smoothness assumption can be imposed to a wide variety of embedding models, and it can also be constructed using other information besides entities’ semantic categories." ] }
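The "semantically smooth" regularization described above can be made concrete with a toy penalty. This is a hedged sketch of the general Laplacian-Eigenmaps idea, not SSE's actual objective: the penalty is small when entities connected in the (here hand-written) semantic adjacency structure are embedded near each other.

```python
def smoothness_penalty(embeddings, adjacency):
    """Laplacian-style penalty: sum of a_ij * ||e_i - e_j||^2.

    `embeddings` maps entity ids to vectors; `adjacency` maps pairs of
    entity ids to nonnegative weights (e.g. 1.0 for same-category pairs).
    """
    total = 0.0
    for (i, j), weight in adjacency.items():
        total += weight * sum((a - b) ** 2
                              for a, b in zip(embeddings[i], embeddings[j]))
    return total
```

Adding such a term to an embedding loss pulls same-category entities together, which is the smoothness assumption SSE formalizes as a geometrically based regularizer.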
1509.05366
2950434305
This paper presents a novel approach in a rarely studied area of computer vision: Human interaction recognition in still images. We explore whether the facial regions and their spatial configurations contribute to the recognition of interactions. In this respect, our method involves extraction of several visual features from the facial regions, as well as incorporation of scene characteristics and deep features to the recognition. Extracted multiple features are utilized within a discriminative learning framework for recognizing interactions between people. Our designed facial descriptors are based on the observation that relative positions, size and locations of the faces are likely to be important for characterizing human interactions. Since there is no available dataset in this relatively new domain, a comprehensive new dataset which includes several images of human interactions is collected. Our experimental results show that faces and scene characteristics contain important information to recognize interactions between people.
There is a vast literature on human action and activity recognition (for a recent survey, see @cite_8 ), whereas human interaction recognition is a less studied topic. A number of studies propose models for interaction recognition in videos, and in general two types of interactions are considered: human-object and human-human interactions. For human-object interaction recognition, @cite_25 propose probabilistic models for simultaneous object and action recognition. For recognizing human-human interactions, @cite_2 propose to simultaneously segment and track multiple body parts of interacting humans in videos. @cite_19 build their model on the matching of local spatio-temporal features. @cite_12 focus on a single type of interaction, i.e., people looking at each other, and propose several methods for detecting this interaction effectively in videos. Recently, @cite_18 utilize the detection of upper-body configurations for better interaction recognition in edited TV material.
{ "cite_N": [ "@cite_18", "@cite_8", "@cite_19", "@cite_2", "@cite_25", "@cite_12" ], "mid": [ "1990992394", "1983705368", "2533503513", "", "2169393274", "1971029019" ], "abstract": [ "The objective of this work is to accurately and efficiently detect configurations of one or more people in edited TV material. Such configurations often appear in standard arrangements due to cinematic style, and we take advantage of this to provide scene context. We make the following contributions: first, we introduce a new learnable context aware configuration model for detecting sets of people in TV material that predicts the scale and location of each upper body in the configuration, second, we show that inference of the model can be solved globally and efficiently using dynamic programming, and implement a maximum margin learning framework, and third, we show that the configuration model substantially outperforms a Deformable Part Model (DPM) for predicting upper body locations in video frames, even when the DPM is equipped with the context of other upper bodies. Experiments are performed over two datasets: the TV Human Interaction dataset, and 150 episodes from four different TV shows. We also demonstrate the benefits of the model in recognizing interactions in TV shows.", "Human activity recognition is an important area of computer vision research. Its applications include surveillance systems, patient monitoring systems, and a variety of systems that involve interactions between persons and electronic devices such as human-computer interfaces. Most of these applications require an automated recognition of high-level activities, composed of multiple simple (or atomic) actions of persons. This article provides a detailed overview of various state-of-the-art research papers on human activity recognition. We discuss both the methodologies developed for simple human actions and those for high-level activities. 
An approach-based taxonomy is chosen that compares the advantages and limitations of each approach. Recognition methodologies for an analysis of the simple actions of a single person are first presented in the article. Space-time volume approaches and sequential approaches that represent and recognize activities directly from input images are discussed. Next, hierarchical recognition methodologies for high-level activities are presented and compared. Statistical approaches, syntactic approaches, and description-based approaches for hierarchical recognition are discussed in the article. In addition, we further discuss the papers on the recognition of human-object interactions and group activities. Public datasets designed for the evaluation of the recognition methodologies are illustrated in our article as well, comparing the methodologies' performances. This review will provide the impetus for future research in more productive areas.", "Human activity recognition is a challenging task, especially when its background is unknown or changing, and when scale or illumination differs in each video. Approaches utilizing spatio-temporal local features have proved that they are able to cope with such difficulties, but they mainly focused on classifying short videos of simple periodic actions. In this paper, we present a new activity recognition methodology that overcomes the limitations of the previous approaches using local features. We introduce a novel matching, spatio-temporal relationship match, which is designed to measure structural similarity between sets of features extracted from two videos. Our match hierarchically considers spatio-temporal relationships among feature points, thereby enabling detection and localization of complex non-periodic activities. In contrast to previous approaches to ‘classify’ videos, our approach is designed to ‘detect and localize’ all occurring activities from continuous videos where multiple actors and pedestrians are present. 
We implement and test our methodology on a newly-introduced dataset containing videos of multiple interacting persons and individual pedestrians. The results confirm that our system is able to recognize complex non-periodic activities (e.g. ‘push’ and ‘hug’) from sets of spatio-temporal features even when multiple activities are present in the scene", "", "Interpretation of images and videos containing humans interacting with different objects is a daunting task. It involves understanding scene or event, analyzing human movements, recognizing manipulable objects, and observing the effect of the human movement on those objects. While each of these perceptual tasks can be conducted independently, recognition rate improves when interactions between them are considered. Motivated by psychological studies of human perception, we present a Bayesian approach which integrates various perceptual tasks involved in understanding human-object interactions. Previous approaches to object and action recognition rely on static shape or appearance feature matching and motion analysis, respectively. Our approach goes beyond these traditional approaches and applies spatial and functional constraints on each of the perceptual elements for coherent semantic interpretation. Such constraints allow us to recognize objects and actions when the appearances are not discriminative enough. We also demonstrate the use of such constraints in recognition of actions from static images without using any motion information.", "The objective of this work is to determine if people are interacting in TV video by detecting whether they are looking at each other or not. We determine both the temporal period of the interaction and also spatially localize the relevant people. 
We make the following four contributions: (i) head detection with implicit coarse pose information (front, profile, back); (ii) continuous head pose estimation in unconstrained scenarios (TV video) using Gaussian process regression; (iii) propose and evaluate several methods for assessing whether and when pairs of people are looking at each other in a video shot; and (iv) introduce new ground truth annotation for this task, extending the TV human interactions dataset (Patron- 2010) The performance of the methods is evaluated on this dataset, which consists of 300 video clips extracted from TV shows. Despite the variety and difficulty of this video material, our best method obtains an average precision of 87.6 in a fully automatic manner." ] }
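The facial descriptors in the paper above build on relative positions, sizes, and locations of detected faces. As a hedged illustration of that kind of cue (our own sketch, not the paper's exact descriptor), the function below encodes a pair of face bounding boxes by their normalized center offset and relative scale.

```python
def pair_descriptor(box_a, box_b):
    """Encode two face boxes (x, y, w, h) by relative geometry.

    Returns (dx, dy, ratio): the offset between face centers normalized
    by the first face's size, and the relative size of the second face.
    """
    xa, ya, wa, ha = box_a
    xb, yb, wb, hb = box_b
    cax, cay = xa + wa / 2, ya + ha / 2          # center of face A
    cbx, cby = xb + wb / 2, yb + hb / 2          # center of face B
    scale = (wa * ha) ** 0.5                     # reference face size
    dx, dy = (cbx - cax) / scale, (cby - cay) / scale
    ratio = (wb * hb) ** 0.5 / scale
    return dx, dy, ratio
```

Such pairwise features are translation-invariant and roughly scale-invariant, which matters because interaction classes (e.g. a handshake vs. a hug) constrain how close and how aligned two faces can be.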
1509.05366
2950434305
This paper presents a novel approach in a rarely studied area of computer vision: Human interaction recognition in still images. We explore whether the facial regions and their spatial configurations contribute to the recognition of interactions. In this respect, our method involves extraction of several visual features from the facial regions, as well as incorporation of scene characteristics and deep features to the recognition. Extracted multiple features are utilized within a discriminative learning framework for recognizing interactions between people. Our designed facial descriptors are based on the observation that relative positions, size and locations of the faces are likely to be important for characterizing human interactions. Since there is no available dataset in this relatively new domain, a comprehensive new dataset which includes several images of human interactions is collected. Our experimental results show that faces and scene characteristics contain important information to recognize interactions between people.
One of the earliest works on recognizing human interactions in still images is that of @cite_17 . In their paper, four classes are defined: shaking hands, pointing at the opposite person, standing hand-in-hand, and an intermediate transitional state between them; a K-nearest-neighbor classifier is used to recognize the interactions. More recently, @cite_16 focused on how people interact by investigating the proxemics between them. They claim that complex interactions can be modeled as a single representation and that a joint model of body poses can be learned. @cite_13 look into the problem of detecting social roles in videos in a weakly supervised setting via a CRF model. In our work, we approach the human-human interaction recognition problem by means of several descriptors that encode facial region configurations.
{ "cite_N": [ "@cite_16", "@cite_13", "@cite_17" ], "mid": [ "", "1989004008", "2164295580" ], "abstract": [ "", "We deal with the problem of recognizing social roles played by people in an event. Social roles are governed by human interactions, and form a fundamental component of human event description. We focus on a weakly supervised setting, where we are provided different videos belonging to an event class, without training role labels. Since social roles are described by the interaction between people in an event, we propose a Conditional Random Field to model the inter-role interactions, along with person specific social descriptors. We develop tractable variational inference to simultaneously infer model weights, as well as role assignment to all people in the videos. We also present a novel YouTube social roles dataset with ground truth role annotations, and introduce annotations on a subset of videos from the TRECVID-MED11 [1] event kits for evaluation purposes. The performance of the model is compared against different baseline methods on these datasets.", "This paper presents a recognition system that classifies four kinds of human interactions: shaking hands, pointing at the opposite person, standing hand-in-hand, and an intermediate transitional state between them. Our system achieves recognition by applying the K-nearest neighbor classifier to the parametric human-interaction model, which describes the interpersonal configuration with multiple features from gray scale images (i.e., binary blob, silhouette contour, and intensity distribution). Unlike the algorithms that use temporal information about motion, our system independently classifies each frame by estimating the relative poses of the interacting persons. The system provides a tool to detect the initiation and the termination of an interaction with no parsing procedure for sequential data. Experimental results are presented and illustrated." ] }
1509.05366
2950434305
This paper presents a novel approach in a rarely studied area of computer vision: Human interaction recognition in still images. We explore whether the facial regions and their spatial configurations contribute to the recognition of interactions. In this respect, our method involves extraction of several visual features from the facial regions, as well as incorporation of scene characteristics and deep features to the recognition. Extracted multiple features are utilized within a discriminative learning framework for recognizing interactions between people. Our designed facial descriptors are based on the observation that relative positions, size and locations of the faces are likely to be important for characterizing human interactions. Since there is no available dataset in this relatively new domain, a comprehensive new dataset which includes several images of human interactions is collected. Our experimental results show that faces and scene characteristics contain important information to recognize interactions between people.
Another research area related to our work is event recognition in still images ( @cite_1 , @cite_9 , @cite_10 ), which aims to recognize a certain scene or event in images or videos. Datasets in this field differ from ours: event recognition datasets depict events such as Christmas or weddings, where the main focus is not the people but the visual elements of the event. In the multi-person interaction recognition problem, by contrast, we focus on the people present and try to infer the interaction from images of people.
{ "cite_N": [ "@cite_10", "@cite_9", "@cite_1" ], "mid": [ "2123654294", "2169917630", "2038032460" ], "abstract": [ "The problem of adaptively selecting pooling regions for the classification of complex video events is considered. Complex events are defined as events composed of several characteristic behaviors, whose temporal configuration can change from sequence to sequence. A dynamic pooling operator is defined so as to enable a unified solution to the problems of event specific video segmentation, temporal structure modeling, and event detection. Video is decomposed into segments, and the segments most informative for detecting a given event are identified, so as to dynamically determine the pooling operator most suited for each sequence. This dynamic pooling is implemented by treating the locations of characteristic segments as hidden information, which is inferred, on a sequence-by-sequence basis, via a large-margin classification rule with latent variables. Although the feasible set of segment selections is combinatorial, it is shown that a globally optimal solution to the inference problem can be obtained efficiently, through the solution of a series of linear programs. Besides the coarse-level location of segments, a finer model of video structure is implemented by jointly pooling features of segment-tuples. Experimental evaluation demonstrates that the resulting event detector has state-of-the-art performance on challenging video datasets.", "The task of recognizing events in photo collections is central for automatically organizing images. It is also very challenging, because of the ambiguity of photos across different event classes and because many photos do not convey enough relevant information. Unfortunately, the field still lacks standard evaluation data sets to allow comparison of different approaches. 
In this paper, we introduce and release a novel data set of personal photo collections containing more than 61,000 images in 807 collections, annotated with 14 diverse social event classes. Casting collections as sequential data, we build upon recent and state-of-the-art work in event recognition in videos to propose a latent sub-event approach for event recognition in photo collections. However, photos in collections are sparsely sampled over time and come in bursts from which transpires the importance of specific moments for the photographers. Thus, we adapt a discriminative hidden Markov model to allow the transitions between states to be a function of the time gap between consecutive images, which we coin as Stopwatch Hidden Markov model (SHMM). In our experiments, we show that our proposed model outperforms approaches based only on feature pooling or a classical hidden Markov model. With an average accuracy of 56 , we also highlight the difficulty of the data set and the need for future advances in event recognition in photo collections.", "Since high-level events in images (e.g. “dinner”, “motorcycle stunt”, etc.) may not be directly correlated with their visual appearance, low-level visual features do not carry enough semantics to classify such events satisfactorily. This paper explores a fully compositional approach for event based image retrieval which is able to overcome this shortcoming. Furthermore, the approach is fully scalable in both adding new events and new primitives. Using the Pascal VOC 2007 dataset, our contributions are the following: (i) We apply the Faceted Analysis-Synthesis Theory (FAST) to build a hierarchy of 228 high-level events. (ii) We show that rule-based classifiers are better suited for compositional recognition of events than SVMs. In addition, rule-based classifiers provide semantically meaningful event descriptions which help bridging the semantic gap. 
(iii) We demonstrate that compositionality enables unseen event recognition: we can use rules learned from non-visual cues, together with object detectors to get reasonable performance on unseen event categories." ] }
1509.05490
2249040818
Knowledge representation is a major topic in AI, and many studies attempt to represent entities and relations of knowledge base in a continuous vector space. Among these attempts, translation-based methods build entity and relation vectors by minimizing the translation loss from a head entity to a tail one. In spite of the success of these methods, translation-based methods also suffer from the oversimplified loss metric, and are not competitive enough to model various and complex entities relations in knowledge bases. To address this issue, we propose , an adaptive metric approach for embedding, utilizing the metric learning ideas to provide a more flexible embedding method. Experiments are conducted on the benchmark datasets and our proposed method makes significant and consistent improvements over the state-of-the-art baselines.
@cite_19 lays the entities in the original space: @math , @math . @cite_4 projects the entities onto a hyperplane to address the issue of complex relation embedding: @math , @math . @cite_17 transforms the entities by the same relation-specific matrix, also to address the issue of complex relation embedding: @math , @math .
{ "cite_N": [ "@cite_19", "@cite_4", "@cite_17" ], "mid": [ "2127795553", "2283196293", "2184957013" ], "abstract": [ "We consider the problem of embedding entities and relationships of multi-relational data in low-dimensional vector spaces. Our objective is to propose a canonical model which is easy to train, contains a reduced number of parameters and can scale up to very large databases. Hence, we propose TransE, a method which models relationships by interpreting them as translations operating on the low-dimensional embeddings of the entities. Despite its simplicity, this assumption proves to be powerful since extensive experiments show that TransE significantly outperforms state-of-the-art methods in link prediction on two knowledge bases. Besides, it can be successfully trained on a large scale data set with 1M entities, 25k relationships and more than 17M training samples.", "We deal with embedding a large scale knowledge graph composed of entities and relations into a continuous vector space. TransE is a promising method proposed recently, which is very efficient while achieving state-of-the-art predictive performance. We discuss some mapping properties of relations which should be considered in embedding, such as reflexive, one-to-many, many-to-one, and many-to-many. We note that TransE does not do well in dealing with these properties. Some complex models are capable of preserving these mapping properties but sacrifice efficiency in the process. To make a good trade-off between model capacity and efficiency, in this paper we propose TransH which models a relation as a hyperplane together with a translation operation on it. In this way, we can well preserve the above mapping properties of relations with almost the same model complexity of TransE. Additionally, as a practical knowledge graph is often far from completed, how to construct negative examples to reduce false negative labels in training is very important. 
Utilizing the one-to-many many-to-one mapping property of a relation, we propose a simple trick to reduce the possibility of false negative labeling. We conduct extensive experiments on link prediction, triplet classification and fact extraction on benchmark datasets like WordNet and Freebase. Experiments show TransH delivers significant improvements over TransE on predictive accuracy with comparable capability to scale up.", "Knowledge graph completion aims to perform link prediction between entities. In this paper, we consider the approach of knowledge graph embeddings. Recently, models such as TransE and TransH build entity and relation embeddings by regarding a relation as translation from head entity to tail entity. We note that these models simply put both entities and relations within the same semantic space. In fact, an entity may have multiple aspects and various relations may focus on different aspects of entities, which makes a common space insufficient for modeling. In this paper, we propose TransR to build entity and relation embeddings in separate entity space and relation spaces. Afterwards, we learn embeddings by first projecting entities from entity space to corresponding relation space and then building translations between projected entities. In experiments, we evaluate our models on three tasks including link prediction, triple classification and relational fact extraction. Experimental results show significant and consistent improvements compared to state-of-the-art baselines including TransE and TransH. The source code of this paper can be obtained from https: github.com mrlyk423 relation_extraction." ] }
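The three translation-based score functions described above (TransE, TransH, TransR) can be sketched numerically. This is a minimal illustrative toy in numpy, assuming L2 distance and randomly initialised embeddings; the function names and shapes are ours, not the authors' code, and no training is performed.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
h, r, t = rng.normal(size=(3, d))  # head, relation, tail embeddings

def transe(h, r, t):
    # TransE: relations act as translations, score = ||h + r - t||.
    return np.linalg.norm(h + r - t)

def transh(h, r, t, w):
    # TransH: project h and t onto the relation hyperplane with unit
    # normal w before translating, so complex relations can be modelled.
    h_p = h - (h @ w) * w
    t_p = t - (t @ w) * w
    return np.linalg.norm(h_p + r - t_p)

def transr(h, r, t, M):
    # TransR: map both entities into the relation space with the
    # same matrix M before translating.
    return np.linalg.norm(M @ h + r - M @ t)

w = rng.normal(size=d)
w /= np.linalg.norm(w)       # hyperplane normal for TransH
M = rng.normal(size=(d, d))  # projection matrix for TransR
```

A lower score means a more plausible triple; for instance, a tail placed exactly at `h + r` scores zero under TransE.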
1509.05490
2249040818
Knowledge representation is a major topic in AI, and many studies attempt to represent entities and relations of knowledge base in a continuous vector space. Among these attempts, translation-based methods build entity and relation vectors by minimizing the translation loss from a head entity to a tail one. In spite of the success of these methods, translation-based methods also suffer from the oversimplified loss metric, and are not competitive enough to model various and complex entities relations in knowledge bases. To address this issue, we propose , an adaptive metric approach for embedding, utilizing the metric learning ideas to provide a more flexible embedding method. Experiments are conducted on the benchmark datasets and our proposed method makes significant and consistent improvements over the state-of-the-art baselines.
The UM model @cite_10 is a simplified version of TransE, obtained by setting all the relation vectors to zero @math . Obviously, relations are not considered in this model.
{ "cite_N": [ "@cite_10" ], "mid": [ "1596986901" ], "abstract": [ "Open-text semantic parsers are designed to interpret any statement in natural language by inferring a corresponding meaning representation (MR – a formal representation of its sense). Unfortunately, large scale systems cannot be easily machine-learned due to a lack of directly supervised data. We propose a method that learns to assign MRs to a wide range of text (using a dictionary of more than 70,000 words mapped to more than 40,000 entities) thanks to a training scheme that combines learning from knowledge bases (e.g. WordNet) with learning from raw text. The model jointly learns representations of words, entities and MRs via a multi-task training process operating on these diverse sources of data. Hence, the system ends up providing methods for knowledge acquisition and wordsense disambiguation within the context of semantic parsing in a single elegant framework. Experiments on these various tasks indicate the promise of the approach." ] }
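As a toy check of the remark above, the Unstructured Model's score is just TransE with the relation vector zeroed, so every relation yields the same score for a given entity pair; a minimal numpy sketch (illustrative only, with random embeddings):

```python
import numpy as np

rng = np.random.default_rng(1)
h, t = rng.normal(size=(2, 6))  # head and tail embeddings

def um_score(h, t):
    # UM: TransE with every relation vector fixed to zero,
    # so the relation plays no role in the score.
    r = np.zeros_like(h)
    return np.linalg.norm(h + r - t)
```

The score collapses to the plain distance `||h - t||`, which is why relations cannot be distinguished in this model.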
1509.05490
2249040818
Knowledge representation is a major topic in AI, and many studies attempt to represent entities and relations of knowledge base in a continuous vector space. Among these attempts, translation-based methods build entity and relation vectors by minimizing the translation loss from a head entity to a tail one. In spite of the success of these methods, translation-based methods also suffer from the oversimplified loss metric, and are not competitive enough to model various and complex entities relations in knowledge bases. To address this issue, we propose , an adaptive metric approach for embedding, utilizing the metric learning ideas to provide a more flexible embedding method. Experiments are conducted on the benchmark datasets and our proposed method makes significant and consistent improvements over the state-of-the-art baselines.
SLM applies a neural network to knowledge graph embedding. The score function is defined as follows. Note that SLM is a special case of NTN in which the tensors are set to zero. @cite_12 proposed a similar method, but applied it to language modelling.
{ "cite_N": [ "@cite_12" ], "mid": [ "2117130368" ], "abstract": [ "We describe a single convolutional neural network architecture that, given a sentence, outputs a host of language processing predictions: part-of-speech tags, chunks, named entity tags, semantic roles, semantically similar words and the likelihood that the sentence makes sense (grammatically and semantically) using a language model. The entire network is trained jointly on all these tasks using weight-sharing, an instance of multitask learning. All the tasks use labeled data except the language model which is learnt from unlabeled text and represents a novel form of semi-supervised learning for the shared tasks. We show how both multitask learning and semi-supervised learning improve the generalization of the shared tasks, resulting in state-of-the-art-performance." ] }
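The single-layer score just described can be sketched as a one-hidden-layer network over the two entity vectors. This is an illustrative toy with random relation-specific weights, assuming tanh as the nonlinearity; the names `W1`, `W2`, `u` are our notation, not the original implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
d, k = 6, 4                          # entity and hidden dimensions
h, t = rng.normal(size=(2, d))
W1, W2 = rng.normal(size=(2, k, d))  # relation-specific weight matrices
u = rng.normal(size=k)               # relation-specific output vector

def slm_score(h, t, W1, W2, u):
    # Single Layer Model: u^T tanh(W1 h + W2 t).
    return u @ np.tanh(W1 @ h + W2 @ t)
```

Because tanh is bounded in [-1, 1], the score magnitude is bounded by the L1 norm of `u`.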
1509.05490
2249040818
Knowledge representation is a major topic in AI, and many studies attempt to represent entities and relations of knowledge base in a continuous vector space. Among these attempts, translation-based methods build entity and relation vectors by minimizing the translation loss from a head entity to a tail one. In spite of the success of these methods, translation-based methods also suffer from the oversimplified loss metric, and are not competitive enough to model various and complex entities relations in knowledge bases. To address this issue, we propose , an adaptive metric approach for embedding, utilizing the metric learning ideas to provide a more flexible embedding method. Experiments are conducted on the benchmark datasets and our proposed method makes significant and consistent improvements over the state-of-the-art baselines.
The SME model @cite_10 @cite_1 attempts to capture the correlations between entities and relations via matrix products and the Hadamard product. The score functions are defined as follows: where @math and @math are weight matrices, @math is the Hadamard product, and @math and @math are bias vectors. In more recent work @cite_1 , the second form of the score function is re-defined with 3-way tensors instead of matrices.
{ "cite_N": [ "@cite_1", "@cite_10" ], "mid": [ "2951131188", "1596986901" ], "abstract": [ "Large-scale relational learning becomes crucial for handling the huge amounts of structured data generated daily in many application domains ranging from computational biology or information retrieval, to natural language processing. In this paper, we present a new neural network architecture designed to embed multi-relational graphs into a flexible continuous vector space in which the original data is kept and enhanced. The network is trained to encode the semantics of these graphs in order to assign high probabilities to plausible components. We empirically show that it reaches competitive performance in link prediction on standard datasets from the literature.", "Open-text semantic parsers are designed to interpret any statement in natural language by inferring a corresponding meaning representation (MR – a formal representation of its sense). Unfortunately, large scale systems cannot be easily machine-learned due to a lack of directly supervised data. We propose a method that learns to assign MRs to a wide range of text (using a dictionary of more than 70,000 words mapped to more than 40,000 entities) thanks to a training scheme that combines learning from knowledge bases (e.g. WordNet) with learning from raw text. The model jointly learns representations of words, entities and MRs via a multi-task training process operating on these diverse sources of data. Hence, the system ends up providing methods for knowledge acquisition and wordsense disambiguation within the context of semantic parsing in a single elegant framework. Experiments on these various tasks indicate the promise of the approach." ] }
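The two SME combination forms above can be sketched side by side: the linear form mixes entity and relation by matrix products, the bilinear form by the Hadamard (element-wise) product. A toy numpy sketch with random parameters, assuming a dot product between the two combined sides; variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
d = 6
h, r, t = rng.normal(size=(3, d))
Wl1, Wl2, Wr1, Wr2 = rng.normal(size=(4, d, d))  # weight matrices
bl, br = rng.normal(size=(2, d))                 # bias vectors

def sme_linear(h, r, t):
    # Linear form: entity and relation combined by matrix products.
    g_left = Wl1 @ h + Wl2 @ r + bl
    g_right = Wr1 @ t + Wr2 @ r + br
    return g_left @ g_right

def sme_bilinear(h, r, t):
    # Bilinear form: entity and relation combined by the
    # Hadamard (element-wise) product.
    g_left = (Wl1 @ h) * (Wl2 @ r) + bl
    g_right = (Wr1 @ t) * (Wr2 @ r) + br
    return g_left @ g_right
```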
1509.05490
2249040818
Knowledge representation is a major topic in AI, and many studies attempt to represent entities and relations of knowledge base in a continuous vector space. Among these attempts, translation-based methods build entity and relation vectors by minimizing the translation loss from a head entity to a tail one. In spite of the success of these methods, translation-based methods also suffer from the oversimplified loss metric, and are not competitive enough to model various and complex entities relations in knowledge bases. To address this issue, we propose , an adaptive metric approach for embedding, utilizing the metric learning ideas to provide a more flexible embedding method. Experiments are conducted on the benchmark datasets and our proposed method makes significant and consistent improvements over the state-of-the-art baselines.
The NTN model @cite_6 defines an expressive score function for graph embedding that combines SLM and LFM, where @math is a relation-specific linear layer, @math is the @math function, and @math is a 3-way tensor. However, the high complexity of NTN may limit its applicability to large-scale knowledge bases.
{ "cite_N": [ "@cite_6" ], "mid": [ "2127426251" ], "abstract": [ "Knowledge bases are an important resource for question answering and other tasks but often suffer from incompleteness and lack of ability to reason over their discrete entities and relationships. In this paper we introduce an expressive neural tensor network suitable for reasoning over relationships between two entities. Previous work represented entities as either discrete atomic units or with a single entity vector representation. We show that performance can be improved when entities are represented as an average of their constituting word vectors. This allows sharing of statistical strength between, for instance, facts involving the \"Sumatran tiger\" and \"Bengal tiger.\" Lastly, we demonstrate that all models improve when these word vectors are initialized with vectors learned from unsupervised large corpora. We assess the model by considering the problem of predicting additional true relations between entities given a subset of the knowledge base. Our model outperforms previous models and can classify unseen relationships in WordNet and FreeBase with an accuracy of 86.2 and 90.0 , respectively." ] }
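The tensor score above, and the remark that SLM is NTN with the tensor zeroed, can be checked with a small sketch. This assumes tanh as the nonlinearity and random relation-specific parameters; it is a toy, not the trained model.

```python
import numpy as np

rng = np.random.default_rng(4)
d, k = 5, 3
h, t = rng.normal(size=(2, d))
W = rng.normal(size=(k, d, d))   # relation-specific 3-way tensor
V = rng.normal(size=(k, 2 * d))  # relation-specific linear layer
b = rng.normal(size=k)
u = rng.normal(size=k)

def ntn_score(h, t, W, V, b, u):
    # One bilinear term h^T W_i t per tensor slice i, plus a
    # linear layer over [h; t], squashed by tanh.
    bilinear = np.einsum('i,kij,j->k', h, W, t)
    return u @ np.tanh(bilinear + V @ np.concatenate([h, t]) + b)
```

Setting `W` to zero removes the bilinear term, recovering exactly the single-layer (SLM-style) score.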
1509.05490
2249040818
Knowledge representation is a major topic in AI, and many studies attempt to represent entities and relations of knowledge base in a continuous vector space. Among these attempts, translation-based methods build entity and relation vectors by minimizing the translation loss from a head entity to a tail one. In spite of the success of these methods, translation-based methods also suffer from the oversimplified loss metric, and are not competitive enough to model various and complex entities relations in knowledge bases. To address this issue, we propose , an adaptive metric approach for embedding, utilizing the metric learning ideas to provide a more flexible embedding method. Experiments are conducted on the benchmark datasets and our proposed method makes significant and consistent improvements over the state-of-the-art baselines.
@cite_4 jointly embeds knowledge and texts. @cite_11 incorporates logical rules into the embedding. @cite_8 incorporates the relation paths of the knowledge graph into the embedding.
{ "cite_N": [ "@cite_8", "@cite_4", "@cite_11" ], "mid": [ "2952854166", "2283196293", "2274308990" ], "abstract": [ "Representation learning of knowledge bases (KBs) aims to embed both entities and relations into a low-dimensional space. Most existing methods only consider direct relations in representation learning. We argue that multiple-step relation paths also contain rich inference patterns between entities, and propose a path-based representation learning model. This model considers relation paths as translations between entities for representation learning, and addresses two key challenges: (1) Since not all relation paths are reliable, we design a path-constraint resource allocation algorithm to measure the reliability of relation paths. (2) We represent relation paths via semantic composition of relation embeddings. Experimental results on real-world datasets show that, as compared with baselines, our model achieves significant and consistent improvements on knowledge base completion and relation extraction from text.", "We deal with embedding a large scale knowledge graph composed of entities and relations into a continuous vector space. TransE is a promising method proposed recently, which is very efficient while achieving state-of-the-art predictive performance. We discuss some mapping properties of relations which should be considered in embedding, such as reflexive, one-to-many, many-to-one, and many-to-many. We note that TransE does not do well in dealing with these properties. Some complex models are capable of preserving these mapping properties but sacrifice efficiency in the process. To make a good trade-off between model capacity and efficiency, in this paper we propose TransH which models a relation as a hyperplane together with a translation operation on it. In this way, we can well preserve the above mapping properties of relations with almost the same model complexity of TransE. 
Additionally, as a practical knowledge graph is often far from completed, how to construct negative examples to reduce false negative labels in training is very important. Utilizing the one-to-many many-to-one mapping property of a relation, we propose a simple trick to reduce the possibility of false negative labeling. We conduct extensive experiments on link prediction, triplet classification and fact extraction on benchmark datasets like WordNet and Freebase. Experiments show TransH delivers significant improvements over TransE on predictive accuracy with comparable capability to scale up.", "Knowledge bases (KBs) are often greatly incomplete, necessitating a demand for KB completion. A promising approach is to embed KBs into latent spaces and make inferences by learning and operating on latent representations. Such embedding models, however, do not make use of any rules during inference and hence have limited accuracy. This paper proposes a novel approach which incorporates rules seamlessly into embedding models for KB completion. It formulates inference as an integer linear programming (ILP) problem, with the objective function generated from embedding models and the constraints translated from rules. Solving the ILP problem results in a number of facts which 1) are the most preferred by the embedding models, and 2) comply with all the rules. By incorporating rules, our approach can greatly reduce the solution space and significantly improve the inference accuracy of embedding models. We further provide a slacking technique to handle noise in KBs, by explicitly modeling the noise with slack variables. Experimental results on two publicly available data sets show that our approach significantly and consistently outperforms state-of-the-art embedding models in KB completion. Moreover, the slacking technique is effective in identifying erroneous facts and ambiguous entities, with a precision higher than 90 ." ] }
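The path-based idea of @cite_8 can be illustrated by composing the relation embeddings along a multi-step path, here by simple addition (one of the semantic compositions considered in that line of work), so the translation assumption extends from single relations to paths. A toy sketch under that additive-composition assumption:

```python
import numpy as np

rng = np.random.default_rng(5)
d = 6
h = rng.normal(size=d)
r1, r2 = rng.normal(size=(2, d))
t = h + r1 + r2  # tail reached via the two-step path (r1, r2)

def path_score(h, path, t):
    # Additive composition: a relation path acts like the sum
    # of its relation vectors in the translation score.
    return np.linalg.norm(h + np.sum(path, axis=0) - t)
```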
1509.05096
2950253947
Twitter is one of the most popular social media. Due to the ease of availability of data, Twitter is used significantly for research purposes. Twitter is known to evolve in many aspects from what it was at its birth; nevertheless, how it evolved its own linguistic style is still relatively unknown. In this paper, we study the evolution of various sociolinguistic aspects of Twitter over large time scales. To the best of our knowledge, this is the first comprehensive study on the evolution of such aspects of this OSN. We performed quantitative analysis both on the word level as well as on the hashtags since it is perhaps one of the most important linguistic units of this social media. We studied the (in)formality aspects of the linguistic styles in Twitter and find that it is neither fully formal nor completely informal; while on one hand, we observe that Out-Of-Vocabulary words are decreasing over time (pointing to a formal style), on the other hand it is quite evident that whitespace usage is getting reduced with a huge prevalence of running texts (pointing to an informal style). We also analyze and propose quantitative reasons for repetition and coalescing of hashtags in Twitter. We believe that such phenomena may be strongly tied to different evolutionary aspects of human languages.
* Cognitive and linguistic studies in CMC There have been various works analyzing linguistic style, the structure of language, and its cognitive aspects. Some early works include the analysis of the cognitive process involved in picking words and the linguistic style @cite_17 , the variations across different registers @cite_5 , and the correlation between style and gender @cite_6 . With the advent of the Internet, the research focus shifted towards the language of computer-mediated-communication (CMC) systems such as online chat and IM. Paolillo @cite_3 investigates linguistic variations associated with strong and weak ties in an early Internet relay chat system. @cite_18 studies linguistic styles in SMS. Similar research on understanding linguistic styles has been carried out subsequently in various other media: @cite_28 studies the IM medium, while emails and blogs have been studied by @cite_16 and @cite_29 respectively. There have also been studies analyzing the linguistic content and structure of deceptive CMC interactions and the linguistic profiles of the sender and receiver @cite_14 @cite_19 .
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_28", "@cite_29", "@cite_3", "@cite_6", "@cite_19", "@cite_5", "@cite_16", "@cite_17" ], "mid": [ "60041609", "2101707957", "2134460304", "2156589869", "2006537090", "", "2106815140", "2093585241", "2041394582", "1514520715" ], "abstract": [ "An expanded register rack assembly is for a processor based device. The expanded register rack assembly has a first board register rack assembly having multiple slots for receiving multiple electronic modules including a programmable controller module. The expanded register rack also has a second register rack having multiple slots for receiving multiple electronic modules. The expanded register rack assembly further has a communicative coupling between the first and second board racks, the communicative coupling permitting the programmable controller module to selectively access any of the other electronic modules received in the first and second multiple slots without the need for communication hardware mounted in any of the slots.", "Current techniques towards information security have limited capabilities to detect and counter attacks that involve different kinds of masquerade and spread of misinformation executed over long time periods to achieve malicious goals. Detection of such deceptive information obtained during online interactions (emails, chat room conversations) is the first step before counter strategies can be developed. With the large-scale use of information technologies as a general communication medium, facilitating deception detection is a key enabler to utilizing information systems to their fullest potential. This article presents a framework for computer-aided deception detection building on the interpersonal deception theory (IDT) of human interpersonal communication research and text-processing techniques for facilitating deception analysis of text-oriented communication. 
A state-transition diagram based framework is proposed to model the dynamic evolution of an interpersonal conversation between a sender and receiver based on the IDT-based process schemata. The framework is then utilized to develop a deception detection agent to process textual information. Deception detection is defined as a process of model verification. Architecture of a prototype under development and open problems for further research in this area are outlined.", "This article presents an analysis of Instant Messaging (IM), a one-to-one synchronous medium of computer-mediated communication. Innumerable articles in the popular press suggest that increasing use of IM by teens is leading to a break- down in the English language. The analyses presented here are based on a unique corpus involving 72 teenagers and over a million words of natural, unmonitored IM. In addition, a corpus of speech from the same teenagers is examined for comparison. Targeting well-known IM features and four areas of grammar, we show that IM is firmly rooted in the model of the extant language. It reflects the same structured heteroge- neity (variation) and the same dynamic, ongoing processes of linguistic change that are currently under way in contemporary varieties of English. At the same time, IM is a unique new hybrid register, exhibiting a fusion of the full range of variants from the speech community—formal, informal, and highly vernacular. Teenagers in the early twenty-first century are using home com- puters for communication at unprecedented rates in ever-expanding virtual communities. A particularly favorite medium, at least when we conducted this research, was Instant Messaging (IM). IM is \"a one-to-one synchronous form of computer-mediated communication\" (Baron 2004, 13). It is \"direct, immediate, casual online contact\" ( 2002). 
In essence, IM is real-time \"interactive written discourse\" (Ferrara, Brunner, and Whittemore", "Weblogs (blogs) - frequently modified Web pages in which dated entries are listed in reverse chronological sequence - are the latest genre of Internet communication to attain widespread popularity, yet their characteristics have not been systematically described. This paper presents the results of a content analysis of 203 randomly-selected Weblogs, comparing the empirically observable features of the corpus with popular claims about the nature of Weblogs, and finding them to differ in a number of respects. Notably, blog authors, journalists and scholars alike exaggerate the extent to which blogs are interlinked, interactive, and oriented towards external events, and underestimate the importance of blogs as individualistic, intimate forms of self-expression. Based on the profile generated by the empirical analysis, we consider the likely antecedents of the blog genre, situate it with respect to the dominant forms of digital communication on the Internet today, and advance predictions about its long-term impacts.", "This paper examines linguistic variation on an Internet Relay Chat channel with respect to the hypothesis, based on the model of Milroy and Milroy (1992), that standard variants tend to be associated with weak social network ties, while vernacular variants are associated with strong network ties. An analysis of frequency of contact as a measure of tie strength reveals a structured relationship between tie strength and several linguistic variants. However, the variant features are associated with social positions in a way that does not correlate neatly with tie strength. 
An account of these results is proposed in terms of the social functions of the different variables and the larger social context of IRC affecting tie strength.", "", "The present study investigates changes in both the sender's and the target's linguistic style across truthful and deceptive dyadic communication in a synchronous text-based setting. A computer-based analysis of 242 transcripts revealed that senders produced more words overall, decreased their use of self-oriented pronouns but increased other-oriented pronouns, and used more sense-based descriptions (e.g., seeing, touching) when lying than when telling the truth. In addition, motivated senders avoided causal terms during deception, while unmotivated senders relied more heavily on simple negations. Receivers used more words when being deceived, but they also asked more questions and used shorter sentences when being lied to than when being told the truth, especially when the sender was unmotivated. These findings are discussed in terms of their implications for linguistic style matching and interpersonal deception theory.", "Part I. Background Concepts and Issues: 1. Introduction: textual dimensions and relations 2. Situations and functions 3. Previous linguistic research on speech and writing Part II. Methodology: 4. Methodological overview of the study 5. Statistical analysis Part III. Dimensions and Relations in English: 6. Textual dimensions in speech and writing 7. Textual relations in speech and writing 8. Extending the description: variations within genres 9. 
Afterword: applying the model Appendices.", "", "There is a venerable tradition in rhetoric and composition which sees the composing process as a series of decisions and choices.1 However, it is no longer easy simply to assert this position, unless you are prepared to answer a number of questions, the most pressing of which probably is: \"What then are the criteria which govern that choice?\" Or we could put it another way: \"What guides the decisions writers make as they write?\" In a recent survey of composition research, Odell, Cooper, and Courts noticed that some of the most thoughtful people in the field are giving us two reasonable but somewhat different answers:" ] }
1509.04954
2272449372
In this paper we present a method for localisation of facial landmarks on human and sheep. We introduce a new feature extraction scheme called triplet-interpolated feature used at each iteration of the cascaded shape regression framework. It is able to extract features from similar semantic location given an estimated shape, even when head pose variations are large and the facial landmarks are very sparsely distributed. Furthermore, we study the impact of training data imbalance on model performance and propose a training sample augmentation scheme that produces more initialisations for training samples from the minority. More specifically, the augmentation number for a training sample is made to be negatively correlated to the value of the fitted probability density function at the sample's position. We evaluate the proposed scheme on both human and sheep facial landmarks localisation. On the benchmark 300w human face dataset, we demonstrate the benefits of our proposed methods and show very competitive performance when comparing to other methods. On a newly created sheep face dataset, we get very good performance despite the fact that we only have a limited number of training samples and a set of sparse landmarks are annotated.
Local-based methods usually consist of two parts: local experts and spatial shape models. The former describes what the image around each facial landmark looks like in terms of local intensity or colour patterns, while the latter describes how the face shape varies. There are three main types of local feature detection. (1) Classification methods include Support Vector Machine (SVM) classifiers @cite_1 @cite_43 based on various image features such as Gabor @cite_6 and SIFT @cite_12 , Discriminative Response Map Fitting by dictionary learning @cite_19 , and multichannel correlation filter responses @cite_32 . (2) Regression-based approaches include Support Vector Regressors (SVRs) @cite_36 with a probabilistic MRF-based shape model and Continuous Conditional Neural Fields @cite_27 . (3) Voting-based approaches have also been introduced in recent years, including regression-forest-based voting methods @cite_39 @cite_33 @cite_22 and exemplar-based voting methods @cite_25 @cite_2 . One typical shape model is the Constrained Local Model (CLM) @cite_3 . There are other shape models as well, such as RANSAC in @cite_43 , graph matching in @cite_13 , the Gauss-Newton Deformable Part Model (GN-DPM) @cite_8 , and mixtures of trees @cite_7 .
{ "cite_N": [ "@cite_33", "@cite_22", "@cite_8", "@cite_36", "@cite_7", "@cite_1", "@cite_32", "@cite_6", "@cite_39", "@cite_3", "@cite_19", "@cite_43", "@cite_27", "@cite_2", "@cite_13", "@cite_25", "@cite_12" ], "mid": [ "2135132101", "2170800077", "2130563197", "2148739392", "2047508432", "2064399342", "", "2167955297", "2009621568", "1977821862", "2101866605", "2032558548", "", "2015268479", "2014322185", "2025684930", "2151103935" ], "abstract": [ "Although facial feature detection from 2D images is a well-studied field, there is a lack of real-time methods that estimate feature points even on low quality images. Here we propose conditional regression forest for this task. While regression forest learn the relations between facial image patches and the location of feature points from the entire set of faces, conditional regression forest learn the relations conditional to global face properties. In our experiments, we use the head pose as a global property and demonstrate that conditional regression forests outperform regression forests for facial feature detection. We have evaluated the method on the challenging Labeled Faces in the Wild [20] database where close-to-human accuracy is achieved while processing images in real-time.", "In this paper we propose a method for the localization of multiple facial features on challenging face images. In the regression forests (RF) framework, observations (patches) that are extracted at several image locations cast votes for the localization of several facial features. In order to filter out votes that are not relevant, we pass them through two types of sieves, that are organised in a cascade, and which enforce geometric constraints. The first sieve filters out votes that are not consistent with a hypothesis for the location of the face center. Several sieves of the second type, one associated with each individual facial point, filter out distant votes. 
We propose a method that adjusts on-the-fly the proximity threshold of each second type sieve by applying a classifier which, based on middle-level features extracted from voting maps for the facial feature in question, makes a sequence of decisions on whether the threshold should be reduced or not. We validate our proposed method on two challenging datasets with images collected from the Internet in which we obtain state of the art results without resorting to explicit facial shape models. We also show the benefits of our method for proximity threshold adjustment especially on 'difficult' face images.", "Arguably, Deformable Part Models (DPMs) are one of the most prominent approaches for face alignment with impressive results being recently reported for both controlled lab and unconstrained settings. Fitting in most DPM methods is typically formulated as a two-step process during which discriminatively trained part templates are first correlated with the image to yield a filter response for each landmark and then shape optimization is performed over these filter responses. This process, although computationally efficient, is based on fixed part templates which are assumed to be independent, and has been shown to result in imperfect filter responses and detection ambiguities. To address this limitation, in this paper, we propose to jointly optimize a part-based, trained in-the-wild, flexible appearance model along with a global shape model which results in a joint translational motion model for the model parts via Gauss-Newton (GN) optimization. We show how significant computational reductions can be achieved by building a full model during training but then efficiently optimizing the proposed cost function on a sparse grid using weighted least-squares during fitting. We coin the proposed formulation Gauss-Newton Deformable Part Model (GN-DPM). 
Finally, we compare its performance against the state-of-the-art and show that the proposed GN-DPM outperforms it, in some cases, by a large margin. Code for our method is available from http://ibug.doc.ic.ac.uk/resources", "We propose a new algorithm to detect facial points in frontal and near-frontal face images. It combines a regression-based approach with a probabilistic graphical model-based face shape model that restricts the search to anthropomorphically consistent regions. While most regression-based approaches perform a sequential approximation of the target location, our algorithm detects the target location by aggregating the estimates obtained from stochastically selected local appearance information into a single robust prediction. The underlying assumption is that by aggregating the different estimates, their errors will cancel out as long as the regressor inputs are uncorrelated. Once this new perspective is adopted, the problem is reformulated as how to optimally select the test locations over which the regressors are evaluated. We propose to extend the regression-based model to provide a quality measure of each prediction, and use the shape model to restrict and correct the sampling region. Our approach combines the low computational cost typical of regression-based approaches with the robustness of exhaustive-search approaches. The proposed algorithm was tested on over 7,500 images from five databases. Results showed significant improvement over the current state of the art.", "We present a unified model for face detection, pose estimation, and landmark estimation in real-world, cluttered images. Our model is based on a mixture of trees with a shared pool of parts; we model every facial landmark as a part and use global mixtures to capture topological changes due to viewpoint. We show that tree-structured models are surprisingly effective at capturing global elastic deformation, while being easy to optimize unlike dense graph structures.
We present extensive results on standard face benchmarks, as well as a new “in the wild” annotated dataset, that suggests our system advances the state-of-the-art, sometimes considerably, for all three tasks. Though our model is modestly trained with hundreds of faces, it compares favorably to commercial systems trained with billions of examples (such as Google Picasa and face.com).", "In this paper we present a robust and accurate method to detect 17 facial landmarks in expressive face images. We introduce a new multi-resolution framework based on the recent multiple kernel algorithm. Low resolution patches carry the global information of the face and give a coarse but robust detection of the desired landmark. High resolution patches, using local details, refine this location. This process is combined with a bootstrap process and a statistical validation, both improving the system robustness. Combining independent point detection and prior knowledge on the point distribution, the proposed detector is robust to variable lighting conditions and facial expressions. This detector is tested on several databases and the results reported can be compared favorably with the current state of the art point detectors.", "", "Locating facial feature points in images of faces is an important stage for numerous facial image interpretation tasks. In this paper we present a method for fully automatic detection of 20 facial feature points in images of expressionless faces using Gabor feature based boosted classifiers. The method adopts fast and robust face detection algorithm, which represents an adapted version of the original Viola-Jones face detector. The detected face region is then divided into 20 relevant regions of interest, each of which is examined further to predict the location of the facial feature points. The proposed facial feature point detection method uses individual feature patch templates to detect points in the relevant region of interest. 
These feature models are GentleBoost templates built from both gray level intensities and Gabor wavelet features. When tested on the Cohn-Kanade database, the method has achieved average recognition rates of 93%.", "A widely used approach for locating points on deformable objects in images is to generate feature response images for each point, and then to fit a shape model to these response images. We demonstrate that Random Forest regression-voting can be used to generate high quality response images quickly. Rather than using a generative or a discriminative model to evaluate each pixel, a regressor is used to cast votes for the optimal position of each point. We show that this leads to fast and accurate shape model matching when applied in the Constrained Local Model framework. We evaluate the technique in detail, and compare it with a range of commonly used alternatives across application areas: the annotation of the joints of the hands in radiographs and the detection of feature points in facial images. We show that our approach outperforms alternative techniques, achieving what we believe to be the most accurate results yet published for hand joint annotation and state-of-the-art performance for facial feature point detection.", "We present an efficient and robust model matching method which uses a joint shape and texture appearance model to generate a set of region template detectors. The model is fitted to an unseen image in an iterative manner by generating templates using the joint model and the current parameter estimates, correlating the templates with the target image to generate response images and optimising the shape parameters so as to maximise the sum of responses. The appearance model is similar to that used in the Active Appearance Model. However, in our approach the appearance model is used to generate likely feature templates, instead of trying to approximate the image pixels directly.
We show that when applied to human faces, our constrained local model (CLM) algorithm is more robust and more accurate than the original AAM search method, which relies on the image reconstruction error to update the model parameters. We demonstrate improved localisation accuracy on two publicly available face data sets and improved tracking on a challenging set of in-car face sequences.", "We present a novel discriminative regression based approach for the Constrained Local Models (CLMs) framework, referred to as the Discriminative Response Map Fitting (DRMF) method, which shows impressive performance in the generic face fitting scenario. The motivation behind this approach is that, unlike the holistic texture based features used in the discriminative AAM approaches, the response map can be represented by a small set of parameters and these parameters can be very efficiently used for reconstructing unseen response maps. Furthermore, we show that by adopting very simple off-the-shelf regression techniques, it is possible to learn robust functions from response maps to the shape parameters updates. The experiments, conducted on Multi-PIE, XM2VTS and LFPW database, show that the proposed DRMF method outperforms state-of-the-art algorithms for the task of generic face fitting. Moreover, the DRMF method is computationally very efficient and is real-time capable. The current MATLAB implementation takes 1 second per image. To facilitate future comparisons, we release the MATLAB code and the pre-trained models for research purposes.", "We present a novel approach to localizing parts in images of human faces. The approach combines the output of local detectors with a non-parametric set of global models for the part locations based on over one thousand hand-labeled exemplar images. By assuming that the global models generate the part locations as hidden variables, we derive a Bayesian objective function. 
This function is optimized using a consensus of models for these hidden variables. The resulting localizer handles a much wider range of expression, pose, lighting and occlusion than prior ones. We show excellent performance on a new dataset gathered from the internet and show that our localizer achieves state-of-the-art performance on the less challenging BioID dataset.", "", "Detecting faces in uncontrolled environments continues to be a challenge to traditional face detection methods due to the large variation in facial appearances, as well as occlusion and clutter. In order to overcome these challenges, we present a novel and robust exemplar-based face detector that integrates image retrieval and discriminative learning. A large database of faces with bounding rectangles and facial landmark locations is collected, and simple discriminative classifiers are learned from each of them. A voting-based method is then proposed to let these classifiers cast votes on the test image through an efficient image retrieval technique. As a result, faces can be very efficiently detected by selecting the modes from the voting maps, without resorting to exhaustive sliding window-style scanning. Moreover, due to the exemplar-based framework, our approach can detect faces under challenging conditions without explicitly modeling their variations. Evaluation on two public benchmark datasets shows that our new face detection approach is accurate and efficient, and achieves the state-of-the-art performance. We further propose to use image retrieval for face validation (in order to remove false positives) and for face alignment landmark localization. The same methodology can also be easily generalized to other face-related tasks, such as attribute recognition, as well as general object detection.", "Localizing facial landmarks is a fundamental step in facial image analysis. 
However, the problem is still challenging due to the large variability in pose and appearance, and the existence of occlusions in real-world face images. In this paper, we present exemplar-based graph matching (EGM), a robust framework for facial landmark localization. Compared to conventional algorithms, EGM has three advantages: (1) an affine-invariant shape constraint is learned online from similar exemplars to better adapt to the test face, (2) the optimal landmark configuration can be directly obtained by solving a graph matching problem with the learned shape constraint, (3) the graph matching problem can be optimized efficiently by linear programming. To our best knowledge, this is the first attempt to apply a graph matching technique for facial landmark localization. Experiments on several challenging datasets demonstrate the advantages of EGM over state-of-the-art methods.", "We propose a data-driven approach to facial landmark localization that models the correlations between each landmark and its surrounding appearance features. At runtime, each feature casts a weighted vote to predict landmark locations, where the weight is precomputed to take into account the feature's discriminative power. The feature votingbased landmark detection is more robust than previous local appearance-based detectors, we combine it with nonparametric shape regularization to build a novel facial landmark localization pipeline that is robust to scale, in-plane rotation, occlusion, expression, and most importantly, extreme head pose. We achieve state-of-the-art performance on two especially challenging in-the-wild datasets populated by faces with extreme head pose and expression.", "This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. 
The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance." ] }
1509.04783
2186708012
The group membership prediction (GMP) problem involves predicting whether or not a collection of instances share a certain semantic property. For instance, in kinship verification given a collection of images, the goal is to predict whether or not they share a familial relationship. In this context we propose a novel probability model and introduce latent view-specific and view-shared random variables to jointly account for the view-specific appearance and cross-view similarities among data instances. Our model posits that data from each view is independent conditioned on the shared variables. This postulate leads to a parametric probability model that decomposes group membership likelihood into a tensor product of data-independent parameters and data-dependent factors. We propose learning the data-independent parameters in a discriminative way with bilinear classifiers, and test our prediction algorithm on challenging visual recognition tasks such as multi-camera person re-identification and kinship verification. On most benchmark datasets, our method can significantly outperform the current state-of-the-art.
The GMP problem is closely related to multi-view learning (MVL). Indeed, our perspective of shared variables has been used before in the context of MVL @cite_22 @cite_19 @cite_2 . Nevertheless, the goal of MVL, specifically in visual recognition, is different from ours. Namely, the objective of MVL is to leverage multiple sources of data (texts, images, videos) corresponding to the same underlying object (persons, events) to improve recognition performance @cite_33 @cite_36 @cite_6 @cite_2 . On the other hand, our goal is to predict group membership among the multiple sources.
{ "cite_N": [ "@cite_22", "@cite_33", "@cite_36", "@cite_6", "@cite_19", "@cite_2" ], "mid": [ "", "1980176153", "2047779269", "1989529706", "2111645492", "1670132599" ], "abstract": [ "", "Images and videos are often characterized by multiple types of local descriptors such as SIFT, HOG and HOF, each of which describes certain aspects of object feature. Recognition systems benefit from fusing multiple types of these descriptors. Two widely applied fusion pipelines are descriptor concatenation and kernel average. The first one is effective when different descriptors are strongly correlated, while the second one is probably better when descriptors are relatively independent. In practice, however, different descriptors are neither fully independent nor fully correlated, and previous fusion methods may not be satisfying. In this paper, we propose a new global representation, Multi-View Super Vector (MVSV), which is composed of relatively independent components derived from a pair of descriptors. Kernel average is then applied on these components to produce recognition result. To obtain MVSV, we develop a generative mixture model of probabilistic canonical correlation analyzers (M-PCCA), and utilize the hidden factors and gradient vectors of M-PCCA to construct MVSV for video representation. Experiments on video based action recognition tasks show that MVSV achieves promising results, and outperforms FV and VLAD with descriptor concatenation or kernel average fusion strategy.", "The real world image databases such as Flickr are characterized by continuous addition of new images. The recent approaches for image annotation, i.e. the problem of assigning tags to images, have two major drawbacks. First, either models are learned using the entire training data, or to handle the issue of dataset imbalance, tag-specific discriminative models are trained. Such models become obsolete and require relearning when new images and tags are added to database. 
Second, the task of feature-fusion is typically dealt using ad-hoc approaches. In this paper, we present a weighted extension of Multi-view Non-negative Matrix Factorization (NMF) to address the aforementioned drawbacks. The key idea is to learn query-specific generative model on the features of nearest-neighbors and tags using the proposed NMF-KNN approach which imposes consensus constraint on the coefficient matrices across different features. This results in coefficient vectors across features to be consistent and, thus, naturally solves the problem of feature fusion, while the weight matrices introduced in the proposed formulation alleviate the issue of dataset imbalance. Furthermore, our approach, being query-specific, is unaffected by addition of images and tags in a database. We tested our method on two datasets used for evaluation of image annotation and obtained competitive results.", "Multiview representations reveal the fundamental attributes of the studied instances from different perspectives. Some common perspectives are reviewed by multiple views simultaneously, while some specific ones are reflected by individual views. That is, there are two kinds of properties embedded in the multiview data: 1) consistency and 2) complementarity. Different from most multiview learning approaches only focusing on either consistency or complementarity, this paper proposes a novel semisupervised multiview learning algorithm, called partially shared latent factor (PSLF) learning, which jointly exploits both consistent and complementary information among multiple views. In PSLF, a nonnegative matrix factorization (NMF)-based formulation is adopted to learn a compact and comprehensive partially shared latent representation, which is composed of common latent factors shared by multiple views and some specific latent factors to each view. 
With the learned representations of multiview data, we introduce a robust sparse regression model to predict the cluster labels of labeled data. By integrating the NMF-based model and the regression model, we obtain a unified formulation and propose a multiplicative-based alternative algorithm for optimization. In addition, PSLF can learn the weights of different views adaptively according to the reconstruction precisions of data matrices. Our experimental study indicates different multiview data that contains consistent and complementary information in different degrees. In addition, the encouraging results of the proposed algorithm are achieved in comparison with the state-of-the-art algorithms on real-world data sets.", "Many human action recognition tasks involve data that can be factorized into multiple views such as body postures and hand shapes. These views often interact with each other over time, providing important cues to understanding the action. We present multi-view latent variable discriminative models that jointly learn both view-shared and view-specific sub-structures to capture the interaction between views. Knowledge about the underlying structure of the data is formulated as a multi-chain structured latent conditional model, explicitly learning the interaction between multiple views using disjoint sets of hidden variables in a discriminative manner. The chains are tied using a predetermined topology that repeats over time. We present three topologies — linked, coupled, and linked-coupled — that differ in the type of interaction between views that they model. We evaluate our approach on both segmented and unsegmented human action recognition tasks, using the ArmGesture, the NATOPS, and the ArmGesture-Continuous data. 
Experimental results show that our approach outperforms previous state-of-the-art action recognition models.", "In recent years, a great many methods of learning from multi-view data by considering the diversity of different views have been proposed. These views may be obtained from multiple sources or different feature subsets. In trying to organize and highlight similarities and differences between the variety of multi-view learning approaches, we review a number of representative multi-view learning algorithms in different areas and classify them into three groups: 1) co-training, 2) multiple kernel learning, and 3) subspace learning. Notably, co-training style algorithms train alternately to maximize the mutual agreement on two distinct views of the data; multiple kernel learning algorithms exploit kernels that naturally correspond to different views and combine kernels either linearly or non-linearly to improve learning performance; and subspace learning algorithms aim to obtain a latent subspace shared by multiple views by assuming that the input views are generated from this latent subspace. Though there is significant variance in the approaches to integrating multiple views to improve learning performance, they mainly exploit either the consensus principle or the complementary principle to ensure the success of multi-view learning. Since accessing multiple views is the fundament of multi-view learning, with the exception of study on learning a model from multiple views, it is also valuable to study how to construct multiple views and how to evaluate these views. Overall, by exploring the consistency and complementary properties of different views, multi-view learning is rendered more effective, more promising, and has better generalization ability than single-view learning." ] }
1509.04399
2953375280
Studies from neuroscience show that part-mapping computations are employed by the human visual system in the process of object recognition. In this work, we present an approach for analyzing semantic-part characteristics of object category representations. For our experiments, we use category-epitome, a recently proposed sketch-based spatial representation for objects. To enable part-importance analysis, we first obtain semantic-part annotations of hand-drawn sketches originally used to construct the corresponding epitomes. We then examine the extent to which the semantic-parts are present in the epitomes of a category and visualize the relative importance of parts as a word cloud. Finally, we show how such word cloud visualizations provide an intuitive understanding of category-level structural trends that exist in the category-epitome object representations.
Determining the relative importance of part-level structural primitives for object category understanding has been explored only to a limited extent. @cite_6 present an importance measure of shape parts based on their ability to reconstruct the whole object shape of 2-D silhouettes. However, the authors interpret parts to mean segments on the contour of the object. @cite_3 propose a perception-based method to segment a sketch into semantically meaningful parts. Interestingly, they demonstrate the effectiveness of utilizing semantic parts rather than just considering parts as "unnamed regions" of the object. To the best of our knowledge, the relative importance of the semantic parts has not been studied.
{ "cite_N": [ "@cite_3", "@cite_6" ], "mid": [ "2125130833", "2114846024" ], "abstract": [ "In this paper, we propose a novel perception-based shape decomposition method which aims to decompose a shape into semantically meaningful parts. In addition to three popular perception rules (the Minima rule, the Short-cut rule and the Convexity rule) in shape decomposition, we propose a new rule named part-similarity rule to encourage consistent partition of similar parts. The problem is formulated as a quadratic ally constrained quadratic program (QCQP) problem and is solved by a trust-region method. Experiment results on MPEG-7 dataset show that we can get a more consistent shape decomposition with human perception compared with other state-of-the-art methods both qualitatively and quantitatively. Finally, we show the advantage of semantic parts over non-meaningful parts in object detection on the ETHZ dataset.", "We propose a computational model which computes the importance of 2-D object shape parts, and we apply it to detect and localize objects with and without occlusions. The importance of a shape part (a localized contour fragment) is considered from the perspective of its contribution to the perception and recognition of the global shape of the object. Accordingly, the part importance measure is defined based on the ability to estimate recall the global shapes of objects from the local part, namely the part's \"shape reconstructability\". More precisely, the shape reconstructability of a part is determined by two factors---part variation and part uniqueness. (i) Part variation measures the precision of the global shape reconstruction, i.e. the consistency of the reconstructed global shape with the true object shape; and (ii) part uniqueness quantifies the ambiguity of matching the part to the object, i.e. taking into account that the part could be matched to the object at several different locations. 
Taking both these factors into consideration, an information theoretic formulation is proposed to measure part importance by the conditional entropy of the reconstruction of the object shape from the part. Experimental results demonstrate the benefit of the proposed part importance in object detection, including the improvement of detection rate, localization accuracy, and detection efficiency. By comparing with other state-of-the-art object detectors in a challenging but common scenario, object detection with occlusions, we show a considerable improvement using the proposed importance measure, with the detection rate increased over @math 10%. On a subset of the challenging PASCAL dataset, the Interpolated Average Precision (as used in the PASCAL VOC challenge) is improved by 4-8%. Moreover, we perform a psychological experiment which provides evidence suggesting that humans use a similar measure for part importance when perceiving and recognizing shapes." ] }
1509.04227
2230452966
User communities in social networks are usually identified by considering explicit structural social connections between users. While such communities can reveal important information about their members such as family or friendship ties and geographical proximity, they do not necessarily succeed at pulling like-minded users that share the same interests together. In this paper, we are interested in identifying communities of users that share similar topical interests over time, regardless of whether they are explicitly connected to each other on the social network. More specifically, we tackle the problem of identifying temporal topic-based communities from Twitter, i.e., communities of users who have similar temporal inclination towards the current emerging topics on Twitter. We model each topic as a collection of highly correlated semantic concepts observed in tweets and identify them by clustering the time-series based representation of each concept built based on each concept's observation frequency over time. Based on the identified emerging topics in a given time period, we utilize multivariate time series analysis to model the contributions of each user towards the identified topics, which allows us to detect latent user communities. Through our experiments on Twitter data, we demonstrate i) the effectiveness of our topic detection method to detect real world topics and ii) the effectiveness of our approach compared to well-established approaches for community detection.
Existing user community detection approaches can be broadly classified into two categories @cite_5 : topology-based and topic-based. Topology-based community detection approaches represent the social network as a graph whose nodes are users and whose edges indicate explicit user relationships. These approaches rely only on the structure of the social network graph and depend on structural graph concepts to extract latent communities @cite_28 . On the other hand, topic-based approaches mainly focus on the information content of the users in the social network to detect latent communities. Since the goal of our proposed approach is to detect communities formed around the topics extracted from users' information content, we review topic-based community detection methods in this subsection.
{ "cite_N": [ "@cite_28", "@cite_5" ], "mid": [ "2127048411", "1989742824" ], "abstract": [ "The modern science of networks has brought significant advances to our understanding of complex systems. One of the most relevant features of graphs representing real systems is community structure, or clustering, i.e. the organization of vertices in clusters, with many edges joining vertices of the same cluster and comparatively few edges joining vertices of different clusters. Such clusters, or communities, can be considered as fairly independent compartments of a graph, playing a similar role like, e.g., the tissues or the organs in the human body. Detecting communities is of great importance in sociology, biology and computer science, disciplines where systems are often represented as graphs. This problem is very hard and not yet satisfactorily solved, despite the huge effort of a large interdisciplinary community of scientists working on it over the past few years. We will attempt a thorough exposition of the topic, from the definition of the main elements of the problem, to the presentation of most methods developed, with a special focus on techniques designed by statistical physicists, from the discussion of crucial issues like the significance of clustering and how methods should be tested and compared against each other, to the description of applications to real networks.", "The evolution of the Web has promoted a growing interest in social network analysis, such as community detection. Among many different community detection approaches, there are two kinds that we want to address: one considers the graph structure of the network (topology-based community detection approach); the other one takes the textual information of the network nodes into consideration (topic-based community detection approach). 
This paper conducted a systematic analysis of applying a topology-based community detection approach and a topic-based community detection approach to the coauthorship networks of the information retrieval area and found that: (1) communities detected by the topology-based community detection approach tend to contain different topics within each community; and (2) communities detected by the topic-based community detection approach tend to contain topologically-diverse sub-communities within each community. Future community detection approaches should not only emphasize the relationship between communities and topics, but also consider the dynamic changes of communities and topics." ] }
1509.04227
2230452966
User communities in social networks are usually identified by considering explicit structural social connections between users. While such communities can reveal important information about their members such as family or friendship ties and geographical proximity, they do not necessarily succeed at pulling like-minded users that share the same interests together. In this paper, we are interested in identifying communities of users that share similar topical interests over time, regardless of whether they are explicitly connected to each other on the social network. More specifically, we tackle the problem of identifying temporal topic-based communities from Twitter, i.e., communities of users who have similar temporal inclination towards the current emerging topics on Twitter. We model each topic as a collection of highly correlated semantic concepts observed in tweets and identify them by clustering the time-series based representation of each concept built based on each concept's observation frequency over time. Based on the identified emerging topics in a given time period, we utilize multivariate time series analysis to model the contributions of each user towards the identified topics, which allows us to detect latent user communities. Through our experiments on Twitter data, we demonstrate i) the effectiveness of our topic detection method to detect real world topics and ii) the effectiveness of our approach compared to well-established approaches for community detection.
Most of these works have proposed probabilistic models to detect topic-based user communities based on textual content, either alone or jointly with social connections @cite_26 @cite_6 @cite_7 @cite_27 @cite_16 . For example, @cite_26 identified users' topics of interest and extracted latent communities based on those topics utilizing a Gaussian Restricted Boltzmann Machine. The authors of @cite_23 integrated community discovery with topic modeling in a unified generative model to detect communities of users who are coherent in both structural relationships and latent topics. In their framework, a community can be formed around multiple topics and a topic can be shared between multiple communities. The authors of @cite_24 proposed probabilistic schemes that incorporate users' posts, social connections and interaction types to discover latent user communities in social networks. In their paper, they considered three types of interactions: a conventional tweet, a reply tweet and a re-tweet. Other authors have also proposed variations of Latent Dirichlet Allocation (LDA), for example, the Author-Topic model @cite_29 and the Community-User-Topic model @cite_4 , to identify latent communities.
{ "cite_N": [ "@cite_26", "@cite_4", "@cite_7", "@cite_29", "@cite_6", "@cite_24", "@cite_27", "@cite_23", "@cite_16" ], "mid": [ "2078525157", "1982167371", "", "2949169239", "", "2025901387", "", "2059047669", "" ], "abstract": [ "Online social networks have been wildly spread in recent years. They enable users to identify other users with common interests, exchange their opinions, and expertise. Discovering user communities from social networks have become one of the major challenges which help its members to interact with relevant people who have similar interests. Community detection approaches fall into two categories: the first one considers user' networks while the other utilizes usergenerated content. In this paper, a multi-layer community detection model based on identifying topics of interest from user published content is presented. This model applies Gaussian Restricted Boltzmann Machine for modeling user's posts within a social network which yields to identify their topics of interest, and finally construct communities. The effectiveness of the proposed multi-layer model is measured using KL divergence which measures similarity between users of the same community. Experiments on the real Twitter dataset show that the proposed deep model outperforms traditional community detection models that directly maps users into corresponding communities using several baseline techniques.", "The increasing amount of communication between individuals in e-formats (e.g. email, Instant messaging and the Web) has motivated computational research in social network analysis (SNA). Previous work in SNA has emphasized the social network (SN) topology measured by communication frequencies while ignoring the semantic information in SNs. In this paper, we propose two generative Bayesian models for semantic community discovery in SNs, combining probabilistic modeling with community detection in SNs. 
To simulate the generative models, an EnF-Gibbs sampling algorithm is proposed to address the efficiency and performance problems of traditional methods. Experimental studies on Enron email corpus show that our approach successfully detects the communities of individuals and in addition provides semantic topic descriptions of these communities.", "", "We introduce the author-topic model, a generative model for documents that extends Latent Dirichlet Allocation (LDA; Blei, Ng, & Jordan, 2003) to include authorship information. Each author is associated with a multinomial distribution over topics and each topic is associated with a multinomial distribution over words. A document with multiple authors is modeled as a distribution over topics that is a mixture of the distributions associated with the authors. We apply the model to a collection of 1,700 NIPS conference papers and 160,000 CiteSeer abstracts. Exact inference is intractable for these datasets and we use Gibbs sampling to estimate the topic and author distributions. We compare the performance with two other generative models for documents, which are special cases of the author-topic model: LDA (a topic model) and a simple author model in which each author is associated with a distribution over words rather than a distribution over topics. We show topics recovered by the author-topic model, and demonstrate applications to computing similarity between authors and entropy of author output.", "", "In recent years, social networking sites have not only enabled people to connect with each other using social links but have also allowed them to share, communicate and interact over diverse geographical regions. Social network provide a rich source of heterogeneous data which can be exploited to discover previously unknown relationships and interests among groups of people. In this paper, we address the problem of discovering topically meaningful communities from a social network. 
We assume that a persons' membership in a community is conditioned on its social relationship, the type of interaction and the information communicated with other members of that community. We propose generative models that can discover communities based on the discussed topics, interaction types and the social connections among people. In our models a person can belong to multiple communities and a community can participate in multiple topics. This allows us to discover both community interests and user interests based on the information and linked associations. We demonstrate the effectiveness of our model on two real word data sets and show that it performs better than existing community discovery models.", "", "This article studies the problem of latent community topic analysis in text-associated graphs. With the development of social media, a lot of user-generated content is available with user networks. Along with rich information in networks, user graphs can be extended with text information associated with nodes. Topic modeling is a classic problem in text mining and it is interesting to discover the latent topics in text-associated graphs. Different from traditional topic modeling methods considering links, we incorporate community discovery into topic analysis in text-associated graphs to guarantee the topical coherence in the communities so that users in the same community are closely linked to each other and share common latent topics. We handle topic modeling and community discovery in the same framework. In our model we separate the concepts of community and topic, so one community can correspond to multiple topics and multiple communities can share the same topic. We compare different methods and perform extensive experiments on two real datasets. The results confirm our hypothesis that topics could help understand community structure, while community structure could help model topics.", "" ] }
1509.03778
2208965674
Analytic performance models are essential for understanding the performance characteristics of loop kernels, which consume a major part of CPU cycles in computational science. Starting from a validated performance model one can infer the relevant hardware bottlenecks and promising optimization opportunities. Unfortunately, analytic performance modeling is often tedious even for experienced developers since it requires in-depth knowledge about the hardware and how it interacts with the software. We present the "Kerncraft" tool, which eases the construction of analytic performance models for streaming kernels and stencil loop nests. Starting from the loop source code, the problem size, and a description of the underlying hardware, Kerncraft can ideally predict the single-core performance and scaling behavior of loops on multicore processors using the Roofline or the Execution-Cache-Memory (ECM) model. We describe the operating principles of Kerncraft with its capabilities and limitations, and we show how it may be used to quickly gain insights by accelerated analytic modeling.
There is, to our knowledge, no tool comparable to IACA for non-Intel CPUs. Fallback support for in-core predictions based on automatic code analysis and architectural information (as done, e.g., in @cite_11 ) is work in progress.
{ "cite_N": [ "@cite_11" ], "mid": [ "2134495056" ], "abstract": [ "One of the emerging challenges to designing HPC systems is understanding and projecting the requirements of exascale applications. In order to determine the performance consequences of different hardware designs, analytic models are essential because they can provide fast feedback to the co-design centers and chip designers without costly simulations. However, current attempts to analytically model program performance typically rely on the user manually specifying a performance model. We introduce the ExaSAT framework that automates the extraction of parameterized performance models directly from source code using compiler analysis. The parameterized analytic model enables quantitative evaluation of a broad range of hardware design trade-offs and software optimizations on a variety of different performance metrics, with a primary focus on data movement as a metric. We demonstrate the ExaSAT framework's ability to perform deep code analysis of a proxy application from the Department of Energy Combustion Co-design Center to illustrate its value to the exascale co-design process. ExaSAT analysis provides insights into the hardware and software trade-offs and lays the groundwork for exploring a more targeted set of design points using cycle-accurate architectural simulators." ] }
1509.03699
2949065830
Green data centers have become increasingly popular recently due to their sustainability. The resource management module within a green data center, which is in charge of dispatching jobs and scheduling energy, is especially critical as it directly affects a center's profit and sustainability. The difficulty of managing a green data center's machine and energy resources lies in the uncertainty of incoming job requests and of future green energy supplies. Thus, resource scheduling decisions have to be made in an online manner. Some heuristic deterministic online algorithms have been proposed in the recent literature. In this paper, we consider online algorithms for green data centers and introduce a randomized solution with the objective of maximizing net profit. Competitive analysis is employed to measure online algorithms' theoretical performance. Our algorithm is theoretically sound and outperforms the previously known deterministic algorithms in many settings using real traces. To complement our study, optimal offline algorithms are also designed.
How to schedule green energy in an efficient and effective manner has been investigated extensively. Although green energy has the advantages of being cost-effective and environmentally friendly, using it is challenging due to its daily and seasonal variability. Another challenge comes from customers' workload fluctuations @cite_33 . This can lead to a temporal mismatch between the green energy supply and the workload's energy demand --- a heavy workload may arrive when the green energy supply is low. One solution is to "bank" green energy in batteries for later use. However, this approach incurs significant energy loss and high additional maintenance costs @cite_3 . Thus, a run-time online algorithm that matches workload with energy is highly desirable for green data centers.
{ "cite_N": [ "@cite_33", "@cite_3" ], "mid": [ "2096092966", "1983322859" ], "abstract": [ "Batched stream processing is a new distributed data processing paradigm that models recurring batch computations on incrementally bulk-appended data streams. The model is inspired by our empirical study on a trace from a large-scale production data-processing cluster; it allows a set of effective query optimizations that are not possible in a traditional batch processing model. We have developed a query processing system called Comet that embraces batched stream processing and integrates with DryadLINQ. We used two complementary methods to evaluate the effectiveness of optimizations that Comet enables. First, a prototype system deployed on a 40-node cluster shows an I/O reduction of over 40% using our benchmark. Second, when applied to a real production trace covering over 19 million machine-hours, our simulator shows an estimated I/O saving of over 50%.", "Interest has been growing in powering data centers (at least partially) with renewable or "green" sources of energy, such as solar or wind. However, it is challenging to use these sources because, unlike the "brown" (carbon-intensive) energy drawn from the electrical grid, they are not always available. In this keynote talk, I will first discuss the tradeoffs involved in leveraging green energy today and the prospects for the future. I will then discuss the main research challenges and questions involved in managing the use of green energy in data centers. Next, I will describe some of the software and hardware that researchers are building to explore these challenges and questions. Specifically, I will overview systems that match a data center's computational workload to the green energy supply. I will also describe Parasol, the solar-powered micro-data center we have just built at Rutgers University. Finally, I will discuss some potential avenues for future research on this topic." ] }
1509.03699
2949065830
Green data centers have become increasingly popular recently due to their sustainability. The resource management module within a green data center, which is in charge of dispatching jobs and scheduling energy, is especially critical as it directly affects a center's profit and sustainability. The difficulty of managing a green data center's machine and energy resources lies in the uncertainty of incoming job requests and of future green energy supplies. Thus, resource scheduling decisions have to be made in an online manner. Some heuristic deterministic online algorithms have been proposed in the recent literature. In this paper, we consider online algorithms for green data centers and introduce a randomized solution with the objective of maximizing net profit. Competitive analysis is employed to measure online algorithms' theoretical performance. Our algorithm is theoretically sound and outperforms the previously known deterministic algorithms in many settings using real traces. To complement our study, optimal offline algorithms are also designed.
Among the work on centralized data centers, @cite_38 @cite_24 studied a model that is the same as the one presented in this paper. @cite_6 aimed to improve green energy usage and @cite_36 had the goal of reducing brown energy costs. The solutions mentioned above are all based on greedy algorithmic ideas. All prior work focuses on either maximizing green energy consumption or minimizing brown energy consumption cost, except @cite_17 , which studied the net profit maximization problem for centralized data center service providers. @cite_17 proposed a systematic approach to maximize a green data center's profit under a stochastic assumption on the workload --- the workload that they studied is restricted to online service requests with variable arrival rates. In this paper, we study the profit maximization problem in a more general setting. In particular, we make no specific assumptions about the workload's stochastic properties. In addition, we incorporate dynamic brown energy pricing in our model, which is a widely used energy charging scheme in data centers.
{ "cite_N": [ "@cite_38", "@cite_36", "@cite_6", "@cite_24", "@cite_17" ], "mid": [ "2055964748", "2107128713", "2066238258", "", "2021689699" ], "abstract": [ "In this paper, we propose GreenSlot, a parallel batch job scheduler for a datacenter powered by a photovoltaic solar array and the electrical grid (as a backup). GreenSlot predicts the amount of solar energy that will be available in the near future, and schedules the workload to maximize the green energy consumption while meeting the jobs' deadlines. If grid energy must be used to avoid deadline violations, the scheduler selects times when it is cheap. Our results for production scientific workloads demonstrate that Green-Slot can increase green energy consumption by up to 117 and decrease energy cost by up to 39 , compared to a conventional scheduler. Based on these positive results, we conclude that green datacenters and green-energy-aware scheduling can have a significant role in building a more sustainable IT ecosystem.", "Recently, the demand for data center computing has surged, increasing the total energy footprint of data centers worldwide. Data centers typically comprise three subsystems: IT equipment provides services to customers; power infrastructure supports the IT and cooling equipment; and the cooling infrastructure removes heat generated by these subsystems. This work presents a novel approach to model the energy flows in a data center and optimize its operation. Traditionally, supply-side constraints such as energy or cooling availability were treated independently from IT workload management. This work reduces electricity cost and environmental impact using a holistic approach that integrates renewable supply, dynamic pricing, and cooling supply including chiller and outside air cooling, with IT workload planning to improve the overall sustainability of data center operations. Specifically, we first predict renewable energy as well as IT demand. 
Then we use these predictions to generate an IT workload management plan that schedules IT workload and allocates IT resources within a data center according to time varying power supply and cooling efficiency. We have implemented and evaluated our approach using traces from real data centers and production systems. The results demonstrate that our approach can reduce both the recurring power costs and the use of non-renewable energy by as much as 60% compared to existing techniques, while still meeting the Service Level Agreements.", "As brown energy costs grow, renewable energy becomes more widely used. Previous work focused on using immediately available green energy to supplement the non-renewable, or brown energy at the cost of canceling and rescheduling jobs whenever the green energy availability is too low [16]. In this paper we design an adaptive data center job scheduler which utilizes short term prediction of solar and wind energy production. This enables us to scale the number of jobs to the expected energy availability, thus reducing the number of cancelled jobs by 4x and improving green energy usage efficiency by 3x over just utilizing the immediately available green energy.", "", "While a large body of work has recently focused on reducing data center's energy expenses, there exists no prior work on investigating the trade-off between minimizing data center's energy expenditure and maximizing their revenue for various Internet and cloud computing services that they may offer. In this paper, we seek to tackle this shortcoming by proposing a systematic approach to maximize green data center's profit, i.e., revenue minus cost. In this regard, we explicitly take into account practical service-level agreements (SLAs) that currently exist between data centers and their customers. Our model also incorporates various other factors such as availability of local renewable power generation at data centers and the stochastic nature of data centers' workload. 
Furthermore, we propose a novel optimization-based profit maximization strategy for data centers for two different cases, without and with behind-the-meter renewable generators. We show that the formulated optimization problems in both cases are convex programs; therefore, they are tractable and appropriate for practical implementation. Using various experimental data and via computer simulations, we assess the performance of the proposed optimization-based profit maximization strategy and show that it significantly outperforms two comparable energy and performance management algorithms that are recently proposed in the literature." ] }
1509.03603
2235706989
We propose a Green Cloudlet Network (GCN) architecture to provide seamless Mobile Cloud Computing (MCC) services to User Equipments (UEs) with low latency, in which each cloudlet is powered by both green and brown energy. Fully utilizing green energy can significantly reduce the operational cost of cloudlet providers. However, owing to the spatial dynamics of energy demand and green energy generation, the energy gap among different cloudlets in the network is unbalanced, i.e., some cloudlets' energy demands can be fully provided by green energy but others need to utilize on-grid energy (i.e., brown energy) to satisfy their energy demands. We propose a Green-energy awarE Avatar migRation (GEAR) strategy to minimize the on-grid energy consumption in GCN by redistributing the energy demands via Avatar migration among cloudlets according to cloudlets' green energy generation. Furthermore, GEAR ensures the Service Level Agreement (SLA) in terms of the maximum Avatar propagation delay by avoiding Avatars being hosted in remote cloudlets. We formulate the GEAR strategy as a mixed integer linear programming problem, which is NP-hard, and thus apply the Branch and Bound search to find its sub-optimal solution. Simulation results demonstrate that GEAR can save on-grid energy consumption significantly as compared to the Follow me AvataR (FAR) migration strategy, which aims to minimize the propagation delay between a UE and its Avatar.
Previous works @cite_14 @cite_2 have shown that cloudlets can significantly reduce the communication latency between UEs and the VMs in the cloudlet. Taleb and Ksentini @cite_18 introduced Follow-Me Cloud, i.e., a UE's service is continuously migrated to a data center closer to the UE. Follow-Me Cloud tries to minimize the propagation delay between a UE and its VM in the data center, but it does not capitalize on green energy to optimize the energy consumption.
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_2" ], "mid": [ "2032129959", "", "2089227674" ], "abstract": [ "This article introduces the Follow-Me Cloud concept and proposes its framework. The proposed framework is aimed at smooth migration of all or only a required portion of an ongoing IP service between a data center and user equipment of a 3GPP mobile network to another optimal DC with no service disruption. The service migration and continuity is supported by replacing IP addressing with service identification. Indeed, an FMC service application is identified, upon establishment, by a session service ID, dynamically changing along with the service being delivered over the session; it consists of a unique identifier of UE within the 3GPP mobile network, an identifier of the cloud service, and dynamically changing characteristics of the cloud service. Service migration in FMC is triggered by change in the IP address of the UE due to a change of data anchor gateway in the mobile network, in turn due to UE mobility and or for load balancing. An optimal DC is then selected based on the features of the new data anchor gateway. Smooth service migration and continuity are supported thanks to logic installed at UE and DCs that maps features of IP flows to the session service ID.", "", "Improving performance of a mobile application by offloading its computation onto a cloudlet has become a prevalent paradigm. Among mobile applications, the category of interactive data-streaming applications is emerging while having not yet received sufficient attention. During computation offloading, the performance of this category of applications (including response time and throughput) depends on network latency and bandwidth between the mobile device and the cloudlet. Although a single cloudlet can provide satisfactory network latency, the bandwidth is always the bottleneck of the throughput. 
To address this issue, we propose to use multiple cloudlets for computation offloading so as to alleviate the bandwidth bottleneck. In addition, we propose to use multiple module instances to complete a module, enabling more fine-grained computation partitioning, since data processing in many modules of data-streaming applications could be highly parallelized. Specifically, at first we apply a fine-grained data-flow model to characterize mobile interactive data-streaming applications. Then we build a unified optimization framework that achieves maximization of the overall utilities of all mobile users, and design an efficient heuristic for the optimization problem, which is able to make trade-off between throughput and energy consumption at each mobile device. At the end we verify our algorithm with extensive simulation. The results show that the overall utility achieved by our heuristic is close to the precise optimum, and our multiple-cloudlet mechanism significantly outperforms the single-cloudlet mechanism." ] }
1509.03946
2266977765
The proximal problem for structured penalties obtained via convex relaxations of submodular functions is known to be equivalent to minimizing separable convex functions over the corresponding submodular polyhedra. In this paper, we reveal a comprehensive class of structured penalties for which this problem can be solved via an efficiently solvable class of parametric maxflow optimizations. We then show that the parametric maxflow algorithm and its variants, which run, in the worst case, at the cost of only a constant factor times a single computation of the corresponding maxflow optimization, can be adapted to solve the proximal problems for those penalties. Several existing structured penalties satisfy these conditions; thus, regularized learning with these penalties can be carried out quickly using the parametric maxflow algorithm. We also investigate the empirical runtime performance of the proposed framework.
Learning with structured sparsity-inducing regularization has been actively discussed in machine learning for a decade. Typical instances include the (generalized) fused Lasso @cite_38 @cite_0 and the group Lasso @cite_21 @cite_22 @cite_4 . The generalized fused Lasso is closely related to the so-called total variation, which has often been discussed in computer vision @cite_6 . Recently, group penalties have been applied to more complex group structures, such as the hierarchical penalty @cite_47 @cite_8 and the path penalty @cite_29 . The total variation regularization is known to be solvable with an efficient parametric maxflow algorithm @cite_48 @cite_9 . In addition, the proximal problem for the @math -group penalty can be calculated via parametric maxflow optimization @cite_33 . The proposed optimization formulation includes these formulations as special cases. Bach (2010) @cite_30 and Bach & Obozinski (2012) @cite_36 revealed that many of the existing structured penalties are obtained as convex relaxations of submodular functions, and that their proximal problems can be formulated as separable convex minimization problems.
{ "cite_N": [ "@cite_30", "@cite_38", "@cite_4", "@cite_22", "@cite_33", "@cite_8", "@cite_36", "@cite_48", "@cite_29", "@cite_21", "@cite_9", "@cite_6", "@cite_0", "@cite_47" ], "mid": [ "2953028742", "2140514146", "2138265962", "2244252827", "", "1539012881", "2222461823", "1986753844", "2120470041", "2138019504", "", "2103559027", "1977777127", "1994309289" ], "abstract": [ "Sparse methods for supervised learning aim at finding good linear predictors from as few variables as possible, i.e., with small cardinality of their supports. This combinatorial selection problem is often turned into a convex optimization problem by replacing the cardinality function by its convex envelope (tightest convex lower bound), in this case the L1-norm. In this paper, we investigate more general set-functions than the cardinality, that may incorporate prior knowledge or structural constraints which are common in many applications: namely, we show that for nondecreasing submodular set-functions, the corresponding convex envelope can be obtained from its extension, a common tool in submodular analysis. This defines a family of polyhedral norms, for which we provide generic algorithmic tools (subgradients and proximal operators) and theoretical results (conditions for support recovery or high-dimensional inference). By selecting specific submodular functions, we can give a new interpretation to known norms, such as those based on rank-statistics or grouped norms with potentially overlapping groups; we also define new norms, in particular ones that can be used as non-factorial priors for supervised learning.", "Summary. The lasso penalizes a least squares regression by the sum of the absolute values (L1-norm) of the coefficients. The form of this penalty encourages sparse solutions (with many coefficients equal to 0). We propose the ‘fused lasso’, a generalization that is designed for problems with features that can be ordered in some meaningful way. 
The fused lasso penalizes the L1-norm of both the coefficients and their successive differences. Thus it encourages sparsity of the coefficients and also sparsity of their differences—i.e. local constancy of the coefficient profile. The fused lasso is especially useful when the number of features p is much greater than N, the sample size. The technique is also extended to the ‘hinge’ loss function that underlies the support vector classifier.We illustrate the methods on examples from protein mass spectroscopy and gene expression data.", "Sparse estimation methods are aimed at using or obtaining parsimonious representations of data or models. While naturally cast as a combinatorial optimization problem, variable or feature selection admits a convex relaxation through the regularization by the @math -norm. In this paper, we consider situations where we are not only interested in sparsity, but where some structural prior knowledge is available as well. We show that the @math -norm can then be extended to structured norms built on either disjoint or overlapping groups of variables, leading to a flexible framework that can deal with various structures. We present applications to unsupervised learning, for structured sparse principal component analysis and hierarchical dictionary learning, and to supervised learning in the context of non-linear variable selection.", "This paper investigates a learning formulation called structured sparsity, which is a natural extension of the standard sparsity concept in statistical learning and compressive sensing. By allowing arbitrary structures on the feature set, this concept generalizes the group sparsity idea that has become popular in recent years. A general theory is developed for learning with structured sparsity, based on the notion of coding complexity associated with the structure. 
It is shown that if the coding complexity of the target signal is small, then one can achieve improved performance by using coding complexity regularization methods, which generalize the standard sparse regularization. Moreover, a structured greedy algorithm is proposed to efficiently solve the structured sparsity problem. It is shown that the greedy algorithm approximately solves the coding complexity optimization problem under appropriate conditions. Experiments are included to demonstrate the advantage of structured sparsity over standard sparsity on some real applications.", "", "Sparse coding consists in representing signals as sparse linear combinations of atoms selected from a dictionary. We consider an extension of this framework where the atoms are further assumed to be embedded in a tree. This is achieved using a recently introduced tree-structured sparse regularization norm, which has proven useful in several applications. This norm leads to regularized problems that are difficult to optimize, and in this paper, we propose efficient algorithms for solving them. More precisely, we show that the proximal operator associated with this norm is computable exactly via a dual approach that can be viewed as the composition of elementary proximal operators. Our procedure has a complexity linear, or close to linear, in the number of atoms, and allows the use of accelerated gradient techniques to solve the tree-structured sparse approximation problem at the same computational cost as traditional ones using the l1-norm. Our method is efficient and scales gracefully to millions of variables, which we illustrate in two types of applications: first, we consider fixed hierarchical dictionaries of wavelets to denoise natural images. 
Then, we apply our optimization tools in the context of dictionary learning, where learned dictionary elements naturally self-organize in a prespecified arborescent structure, leading to better performance in reconstruction of natural image patches. When applied to text documents, our method learns hierarchies of topics, thus providing a competitive alternative to probabilistic topic models.", "In this paper, we propose an unifying view of several recently proposed structured sparsity-inducing norms. We consider the situation of a model simultaneously (a) penalized by a set- function de ned on the support of the unknown parameter vector which represents prior knowledge on supports, and (b) regularized in Lp-norm. We show that the natural combinatorial optimization problems obtained may be relaxed into convex optimization problems and introduce a notion, the lower combinatorial envelope of a set-function, that characterizes the tightness of our relaxations. We moreover establish links with norms based on latent representations including the latent group Lasso and block-coding, and with norms obtained from submodular functions.", "In a recent paper (LNCS, Vol. 3953, pp. 409---422, 2006) propose an approach for computing curve and surface evolution using a variational approach and the geo-cuts method of Boykov and Kolmogorov (International conference on computer vision, pp. 26---33, 2003). We recall in this paper how this is related to well-known approaches for mean curvature motion, introduced by (SIAM Journal on Control and Optimization 31(2):387---438, 1993) and Luckhaus and Sturzenhecker (Calculus of Variations and Partial Differential Equations 3(2):253---271, 1995), and show how the corresponding problems can be solved with sub-pixel accuracy using Parametric Maximum Flow techniques. 
This provides interesting algorithms for computing crystalline curvature motion, possibly with a forcing term.", "We consider supervised learning problems where the features are embedded in a graph, such as gene expressions in a gene network. In this context, it is of much interest to automatically select a subgraph with few connected components; by exploiting prior knowledge, one can indeed improve the prediction performance or obtain results that are easier to interpret. Regularization or penalty functions for selecting features in graphs have recently been proposed, but they raise new algorithmic challenges. For example, they typically require solving a combinatorially hard selection problem among all connected subgraphs. In this paper, we propose computationally feasible strategies to select a sparse and well-connected subset of features sitting on a directed acyclic graph (DAG). We introduce structured sparsity penalties over paths on a DAG called \"path coding\" penalties. Unlike existing regularization functions that model long-range interactions between features in a graph, path coding penalties are tractable. The penalties and their proximal operators involve path selection problems, which we efficiently solve by leveraging network flow optimization. We experimentally show on synthetic, image, and genomic data that our approach is scalable and leads to more connected subgraphs than other regularization functions for graphs.", "Summary. We consider the problem of selecting grouped variables (factors) for accurate prediction in regression. Such a problem arises naturally in many practical situations with the multifactor analysis-of-variance problem as the most important and well-known example. Instead of selecting factors by stepwise backward elimination, we focus on the accuracy of estimation and consider extensions of the lasso, the LARS algorithm and the non-negative garrotte for factor selection. 
The lasso, the LARS algorithm and the non-negative garrotte are recently proposed regression methods that can be used to select individual variables. We study and propose efficient algorithms for the extensions of these methods for factor selection and show that these extensions give superior performance to the traditional stepwise backward elimination method in factor selection problems. We study the similarities and the differences between these methods. Simulations and real examples are used to illustrate the methods.", "", "A constrained optimization type of numerical algorithm for removing noise from images is presented. The total variation of the image is minimized subject to constraints involving the statistics of the noise. The constraints are imposed using Lagrange multipliers. The solution is obtained using the gradient-projection method. This amounts to solving a time dependent partial differential equation on a manifold determined by the constraints. As t--- 0o the solution converges to a steady state which is the denoised image. The numerical algorithm is simple and relatively fast. The results appear to be state-of-the-art for very noisy images. The method is noninvasive, yielding sharp edges in the image. The technique could be interpreted as a first step of moving each level set of the image normal to itself with velocity equal to the curvature of the level set divided by the magnitude of the gradient of the image, and a second step which projects the image back onto the constraint set.", "We present a path algorithm for the generalized lasso problem. This problem penalizes the @math norm of a matrix D times the coefficient vector, and has a wide range of applications, dictated by the choice of D. Our algorithm is based on solving the dual of the generalized lasso, which greatly facilitates computation of the path. For @math (the usual lasso), we draw a connection between our approach and the well-known LARS algorithm. 
For an arbitrary D, we derive an unbiased estimate of the degrees of freedom of the generalized lasso fit. This estimate turns out to be quite intuitive in many applications.", "Extracting useful information from high-dimensional data is an important focus of today's statistical research and practice. Penalized loss function minimization has been shown to be effective for this task both theoretically and empirically. With the virtues of both regularization and sparsity, the L 1 -penalized squared error minimization method Lasso has been popular in regression models and beyond. In this paper, we combine different norms including L 1 to form an intelligent penalty in order to add side information to the fitting of a regression or classification model to obtain reasonable estimates. Specifically, we introduce the Composite Absolute Penalties (CAP) family, which allows given grouping and hierarchical relationships between the predictors to be expressed. CAP penalties are built by defining groups and combining the properties of norm penalties at the across-group and within-group levels. Grouped selection occurs for nonoverlapping groups. Hierarchical variable selection is reached by defining groups with particular overlapping patterns. We propose using the BLASSO and cross-validation to compute CAP estimates in general. For a subfamily of CAP estimates involving only the L 1 and L ∞ norms, we introduce the iCAP algorithm to trace the entire regularization path for the grouped selection problem. Within this subfamily, unbiased estimates of the degrees of freedom (df) are derived so that the regularization parameter is selected without cross-validation. CAP is shown to improve on the predictive performance of the LASSO in a series of simulated experiments, including cases with p » n and possibly mis-specified groupings. When the complexity of a model is properly calculated, iCAP is seen to be parsimonious in the experiments." ] }
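The fused lasso penalty described in the first abstract above admits a compact statement. As a sketch only — the symbols λ₁, λ₂ for the two regularization weights are my notation, since the abstract names none:

```latex
\hat{\beta} \;=\; \arg\min_{\beta \in \mathbb{R}^{p}} \;
  \frac{1}{2}\sum_{i=1}^{N}\Big(y_i - \sum_{j=1}^{p} x_{ij}\beta_j\Big)^{2}
  \;+\; \lambda_1 \sum_{j=1}^{p}\lvert\beta_j\rvert
  \;+\; \lambda_2 \sum_{j=2}^{p}\lvert\beta_j-\beta_{j-1}\rvert
```

The λ₁ term drives individual coefficients to zero; the λ₂ term drives successive differences to zero, producing the locally constant coefficient profile the abstract mentions. Setting D to the first-difference matrix recovers this as a special case of the generalized lasso penalty ||Dβ||₁ discussed in the later abstract.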
1509.03946
2266977765
The proximal problem for structured penalties obtained via convex relaxations of submodular functions is known to be equivalent to minimizing separable convex functions over the corresponding submodular polyhedra. In this paper, we reveal a comprehensive class of structured penalties for which this problem can be solved via an efficiently solvable class of parametric maxflow optimization. We then show that the parametric maxflow algorithm proposed by and its variants, which runs, in the worst case, at the cost of only a constant factor of a single computation of the corresponding maxflow optimization, can be adapted to solve the proximal problems for those penalties. Several existing structured penalties satisfy these conditions; thus, regularized learning with these penalties is solvable quickly using the parametric maxflow algorithm. We also investigate the empirical runtime performance of the proposed framework.
The sufficient condition in is closely related to the class of energy minimization problems solvable by graph-cut algorithms @cite_3 @cite_37 . Energy minimization is a formulation of maximum a posteriori (MAP) estimation on MRFs (see, for example, @cite_52 ). Similar results are found in the context of the realization of a submodular function as a cut function in combinatorial optimization @cite_43 @cite_7 .
{ "cite_N": [ "@cite_37", "@cite_7", "@cite_52", "@cite_3", "@cite_43" ], "mid": [ "2101309634", "2000478304", "2120340025", "2143516773", "2021902776" ], "abstract": [ "In the last few years, several new algorithms based on graph cuts have been developed to solve energy minimization problems in computer vision. Each of these techniques constructs a graph such that the minimum cut on the graph also minimizes the energy. Yet, because these graph constructions are complex and highly specific to a particular energy function, graph cuts have seen limited application to date. In this paper, we give a characterization of the energy functions that can be minimized by graph cuts. Our results are restricted to functions of binary variables. However, our work generalizes many previous constructions and is easily applicable to vision problems that involve large numbers of labels, such as stereo, motion, image restoration, and scene reconstruction. We give a precise characterization of what energy functions can be minimized using graph cuts, among the energy functions that can be written as a sum of terms containing three or fewer binary variables. We also provide a general-purpose construction to minimize such an energy function. Finally, we give a necessary condition for any energy function of binary variables to be minimized by graph cuts. Researchers who are considering the use of graph cuts to optimize a particular energy function can use our results to determine if this is possible and then follow our construction to create the appropriate graph. A software implementation is freely available.", "Abstract We consider the problems of realizing set functions as cut functions on graphs and hypergraphs. We give necessary and sufficient conditions for set functions to be realized as cut functions on nonnegative networks, symmetric networks and nonnegative hypernetworks. The symmetry significantly simplifies the characterization of set functions of nonnegative network type. 
Set functions of nonnegative hypernetwork type generalize those of nonnegative network type (cut functions of ordinary networks) and are submodular functions. For any constant integer k⩾2 , we can discern in polynomial time by a max-flow algorithm whether a given set function of order k is of nonnegative hypernetwork type.", "The formalism of probabilistic graphical models provides a unifying framework for capturing complex dependencies among random variables, and building large-scale multivariate statistical models. Graphical models have become a focus of research in many statistical, computational and mathematical fields, including bioinformatics, communication theory, statistical physics, combinatorial optimization, signal and image processing, information retrieval and statistical machine learning. Many problems that arise in specific instances — including the key problems of computing marginals and modes of probability distributions — are best studied in the general setting. Working with exponential family representations, and exploiting the conjugate duality between the cumulant function and the entropy for exponential families, we develop general variational representations of the problems of computing likelihoods, marginal probabilities and most probable configurations. We describe how a wide variety of algorithms — among them sum-product, cluster variational methods, expectation-propagation, mean field methods, max-product and linear programming relaxation, as well as conic programming relaxations — can all be understood in terms of exact or approximate forms of these variational representations. The variational approach provides a complementary alternative to Markov chain Monte Carlo as a general source of approximation methods for inference in large-scale statistical models.", "Many tasks in computer vision involve assigning a label (such as disparity) to every pixel. 
A common constraint is that the labels should vary smoothly almost everywhere while preserving sharp discontinuities that may exist, e.g., at object boundaries. These tasks are naturally stated in terms of energy minimization. The authors consider a wide class of energies with various smoothness constraints. Global minimization of these energy functions is NP-hard even in the simplest discontinuity-preserving case. Therefore, our focus is on efficient approximation algorithms. We present two algorithms based on graph cuts that efficiently find a local minimum with respect to two types of large moves, namely expansion moves and swap moves. These moves can simultaneously change the labels of arbitrarily large sets of pixels. In contrast, many standard algorithms (including simulated annealing) use small moves where only one pixel changes its label at a time. Our expansion algorithm finds a labeling within a known factor of the global minimum, while our swap algorithm handles more general energy functions. Both of these algorithms allow important cases of discontinuity preserving energies. We experimentally demonstrate the effectiveness of our approach for image restoration, stereo and motion. On real data with ground truth, we achieve 98 percent accuracy.", "Abstract The problem of maximizing a pseudoboolean function (or equivalently a set function) which is supermodular, has many interesting applications e.g. in combinatorial optimization, Operations Research etc. Up to now, a number of special cases of pseudoboolean functions have been known, the maximization of which can be converted into the search for a maximum flow in an associated network. These were essentially the so-called negative-positive pseudoboolean functions (which, as will be noted here, turn out to be supermodular). First it is shown here how these results on negative-positive functions can be more easily derived by using the concept of conflict graph . 
The conflict graph approach is then generalized to extend the class of problems amenable to maximum network flow problems to the whole set of cubic supermodular pseudoboolean functions." ] }
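The graph-cut reduction that the record above revolves around — realizing a submodular binary energy as a cut function — can be sketched concretely. The construction below is the standard one for binary energies with non-negative pairwise terms, not code from any of the cited papers: each variable becomes a graph node, unary costs become terminal edges, pairwise weights become inter-node edges, and an s-t min cut (here via a plain Edmonds-Karp max-flow) yields the minimizing labelling.

```python
from collections import deque

def min_cut_labels(unary, pairwise):
    """Minimize E(x) = sum_i unary[i][x_i] + sum_{ij} w_ij * [x_i != x_j]
    over binary labels x_i in {0,1} via an s-t min cut.  Valid when all
    pairwise weights w_ij are non-negative (the submodular case).
    Returns (labels, cut_value); the cut value equals the minimum energy
    under this encoding."""
    S, T = '_s', '_t'
    cap = {}  # residual capacities: cap[u][v]

    def add(u, v, c):
        cap.setdefault(u, {}).setdefault(v, 0)
        cap.setdefault(v, {}).setdefault(u, 0)
        cap[u][v] += c

    for i, (cost0, cost1) in unary.items():
        add(S, i, cost1)   # cutting s->i puts i on the sink side (label 1)
        add(i, T, cost0)   # cutting i->t puts i on the source side (label 0)
    for (i, j), w in pairwise.items():
        add(i, j, w)       # cut once in each direction when labels disagree
        add(j, i, w)

    flow = 0
    while True:            # Edmonds-Karp: BFS augmenting paths
        parent = {S: None}
        q = deque([S])
        while q and T not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if T not in parent:
            break
        path, v = [], T    # recover the path, push the bottleneck flow
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= b
            cap[v][u] += b
        flow += b

    reach = {S}            # nodes still reachable from s take label 0
    q = deque([S])
    while q:
        u = q.popleft()
        for v, c in cap[u].items():
            if c > 0 and v not in reach:
                reach.add(v)
                q.append(v)
    return {i: 0 if i in reach else 1 for i in unary}, flow
```

For instance, with two nodes where `a` prefers label 1 (unary costs `(5, 0)`), `b` prefers label 0 (`(0, 5)`), and a unit smoothness weight between them, the cut assigns `a` to 1 and `b` to 0 at energy 1, paying only the disagreement cost.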
1509.03739
2950940666
Distant supervision is a widely applied approach to automatic training of relation extraction systems and has the advantage that it can generate large amounts of labelled data with minimal effort. However, this data may contain errors and consequently systems trained using distant supervision tend not to perform as well as those based on manually labelled data. This work proposes a novel method for detecting potential false negative training examples using a knowledge inference method. Results show that our approach improves the performance of relation extraction systems trained using distantly supervised data.
Distant supervision is widely used to train relation extraction systems, with Freebase and Wikipedia commonly being used as knowledge bases, e.g. @cite_2 @cite_10 @cite_14 @cite_8 @cite_11 @cite_1 . The main advantage is its ability to generate large amounts of training data automatically. On the other hand, this automatically labelled data is noisy and usually yields lower performance than approaches trained using manually labelled data. A range of filtering approaches have been applied to address this problem, including multi-class SVMs @cite_18 and multi-instance learning methods @cite_10 @cite_9 . These approaches take into account the fact that entities might occur in different relations at the same time and that a sentence mentioning an entity pair may not necessarily express the target relation. Other approaches focus directly on the noise in the data: for instance, one uses a generative model to predict incorrect data, while another uses a range of heuristics, including PMI, to remove noise; others apply techniques to detect highly ambiguous entity pairs and discard them from the labelled training set.
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_8", "@cite_9", "@cite_1", "@cite_2", "@cite_10", "@cite_11" ], "mid": [ "2251419334", "1515300998", "2128851024", "174427690", "2104411075", "2107598941", "1604644367", "2251960799" ], "abstract": [ "In this paper, we extend distant supervision (DS) based on Wikipedia for Relation Extraction (RE) by considering (i) relations defined in external repositories, e.g. YAGO, and (ii) any subset of Wikipedia documents. We show that training data constituted by sentences containing pairs of named entities in target relations is enough to produce reliable supervision. Our experiments with state-of-the-art relation extraction models, trained on the above data, show a meaningful F1 of 74.29 on a manually annotated test set: this highly improves the state-of-art in RE using DS. Additionally, our end-to-end experiments demonstrated that our extractors can be applied to any general text document.", "We present a large-scale relation extraction (RE) system which learns grammar-based RE rules from the Web by utilizing large numbers of relation instances as seed. Our goal is to obtain rule sets large enough to cover the actual range of linguistic variation, thus tackling the long-tail problem of real-world applications. A variant of distant supervision learns several relations in parallel, enabling a new method of rule filtering. The system detects both binary and n-ary relations. We target 39 relations from Freebase, for which 3M sentences extracted from 20M web pages serve as the basis for learning an average of 40K distinctive rules per relation. Employing an efficient dependency parser, the average run time for each relation is only 19 hours. 
We compare these rules with ones learned from local corpora of different sizes and demonstrate that the Web is indeed needed for a good coverage of linguistic variation.", "Distant supervision (DS) is an appealing learning method which learns from existing relational facts to extract more from a text corpus. However, the accuracy is still not satisfying. In this paper, we point out and analyze some critical factors in DS which have great impact on accuracy, including valid entity type detection, negative training examples construction and ensembles. We propose an approach to handle these factors. By experimenting on Wikipedia articles to extract the facts in Freebase (the top 92 relations), we show the impact of these three factors on the accuracy of DS and the remarkable improvement led by the proposed approach.", "Distant supervision for relation extraction (RE) -- gathering training data by aligning a database of facts with text -- is an efficient approach to scale RE to thousands of different relations. However, this introduces a challenging learning scenario where the relation expressed by a pair of entities found in a sentence is unknown. For example, a sentence containing Balzac and France may express BornIn or Died, an unknown relation, or no relation at all. Because of this, traditional supervised learning, which assumes that each example is explicitly mapped to a label, is not appropriate. We propose a novel approach to multi-instance multi-label learning for RE, which jointly models all the instances of a pair of entities in text and all their labels using a graphical model with latent variables. Our model performs competitively on two difficult domains.", "Distant supervision algorithms learn information extraction models given only large readily available databases and text collections. 
Most previous work has used heuristics for generating labeled data, for example assuming that facts not contained in the database are not mentioned in the text, and facts in the database must be mentioned at least once. In this paper, we propose a new latent-variable approach that models missing data. This provides a natural way to incorporate side information, for instance modeling the intuition that text will often mention rare entities which are likely to be missing in the database. Despite the added complexity introduced by reasoning about missing data, we demonstrate that a carefully designed local search approach to inference is very accurate and scales to large datasets. Experiments demonstrate improved performance for binary and unary relation extraction when compared to learning with heuristic labels, including on average a 27 increase in area under the precision recall curve in the binary case.", "Modern models of relation extraction for tasks like ACE are based on supervised learning of relations from small hand-labeled corpora. We investigate an alternative paradigm that does not require labeled corpora, avoiding the domain dependence of ACE-style algorithms, and allowing the use of corpora of any size. Our experiments use Freebase, a large semantic database of several thousand relations, to provide distant supervision. For each pair of entities that appears in some Freebase relation, we find all sentences containing those entities in a large unlabeled corpus and extract textual features to train a relation classifier. Our algorithm combines the advantages of supervised IE (combining 400,000 noisy pattern features in a probabilistic classifier) and unsupervised IE (extracting large numbers of relations from large corpora of any domain). Our model is able to extract 10,000 instances of 102 relations at a precision of 67.6 . 
We also analyze feature performance, showing that syntactic parse features are particularly helpful for relations that are ambiguous or lexically distant in their expression.", "Several recent works on relation extraction have been applying the distant supervision paradigm: instead of relying on annotated text to learn how to predict relations, they employ existing knowledge bases (KBs) as source of supervision. Crucially, these approaches are trained based on the assumption that each sentence which mentions the two related entities is an expression of the given relation. Here we argue that this leads to noisy patterns that hurt precision, in particular if the knowledge base is not directly related to the text we are working with. We present a novel approach to distant supervision that can alleviate this problem based on the following two ideas: First, we use a factor graph to explicitly model the decision whether two entities are related, and the decision whether this relation is mentioned in a given sentence; second, we apply constraint-driven semi-supervision to train this model without any knowledge about which sentences express the relations in our training KB. We apply our approach to extract relations from the New York Times corpus and use Freebase as knowledge base. When compared to a state-of-the-art approach for relation extraction under distant supervision, we achieve 31 error reduction.", "Distant supervision, heuristically labeling a corpus using a knowledge base, has emerged as a popular choice for training relation extractors. In this paper, we show that a significant number of “negative“ examples generated by the labeling process are false negatives because the knowledge base is incomplete. Therefore the heuristic for generating negative examples has a seriousflaw. Building on a state-of-the-art distantly-supervised extraction algorithm, we proposed an algorithm that learns from only positive and unlabeled labels at the pair-of-entity level. 
Experimental results demonstrate its advantage over existing algorithms." ] }
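The distant supervision heuristic that the record above revolves around is simple to state: a sentence mentioning an entity pair found in the knowledge base is labelled a positive example of the corresponding relation, while pairs absent from the KB become negatives — which is exactly where false negatives arise when the KB is incomplete. A minimal sketch (the toy KB and corpus are hypothetical; the Balzac/France pair echoes the example in the abstracts above):

```python
def distant_labels(sentences, kb):
    """Heuristic distant-supervision labelling: align entity pairs in
    sentences with a KB mapping (e1, e2) -> relation.  Pairs not in the
    KB are labelled NO_RELATION, i.e. treated as negative examples."""
    labelled = []
    for text, e1, e2 in sentences:
        if e1 in text and e2 in text:
            labelled.append((text, e1, e2, kb.get((e1, e2), 'NO_RELATION')))
    return labelled

# Toy, hypothetical data.
kb = {('Balzac', 'France'): 'born_in'}
corpus = [
    ('Balzac was born in France.', 'Balzac', 'France'),
    ('Balzac admired Dickens.', 'Balzac', 'Dickens'),  # absent from KB
]
```

Note that the first sentence would be labelled `born_in` even if it merely mentioned both entities without expressing the relation — the noise that the multi-instance and filtering approaches above try to correct — and the second becomes a negative that may in fact be a false negative.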
1509.03531
2950379400
Compromising social network accounts has become a profitable course of action for cybercriminals. By hijacking control of a popular media or business account, attackers can distribute their malicious messages or disseminate fake information to a large user base. The impacts of these incidents range from a tarnished reputation to multi-billion dollar monetary losses on financial markets. In our previous work, we demonstrated how we can detect large-scale compromises (i.e., so-called campaigns) of regular online social network users. In this work, we show how we can use similar techniques to identify compromises of individual high-profile accounts. High-profile accounts frequently have one characteristic that makes this detection reliable -- they show consistent behavior over time. We show that our system, were it deployed, would have been able to detect and prevent three real-world attacks against popular companies and news agencies. Furthermore, our system, in contrast to popular media, would not have fallen for a staged compromise instigated by a US restaurant chain for publicity reasons.
@cite_27 ran an experiment on the propagation of spam on Twitter. Their goal was to study how spammers use popular topics in their messages to reach more victims. To do this, they created a hashtag and made it trending, and observed that spammers started using the hashtag in their messages.
{ "cite_N": [ "@cite_27" ], "mid": [ "2074835059" ], "abstract": [ "Spam becomes a problem as soon as an online communication medium becomes popular. Twitter’s behavioral and structural properties make it a fertile breeding ground for spammers to proliferate. In this article we examine spam around a one-time Twitter meme—“robotpickuplines”. We show the existence of structural network differences between spam accounts and legitimate users. We conclude by highlighting challenges in disambiguating spammers from legitimate users." ] }
1509.03531
2950379400
Compromising social network accounts has become a profitable course of action for cybercriminals. By hijacking control of a popular media or business account, attackers can distribute their malicious messages or disseminate fake information to a large user base. The impacts of these incidents range from a tarnished reputation to multi-billion dollar monetary losses on financial markets. In our previous work, we demonstrated how we can detect large-scale compromises (i.e., so-called campaigns) of regular online social network users. In this work, we show how we can use similar techniques to identify compromises of individual high-profile accounts. High-profile accounts frequently have one characteristic that makes this detection reliable -- they show consistent behavior over time. We show that our system, were it deployed, would have been able to detect and prevent three real-world attacks against popular companies and news agencies. Furthermore, our system, in contrast to popular media, would not have fallen for a staged compromise instigated by a US restaurant chain for publicity reasons.
@cite_1 built Monarch to detect malicious messages on social networks based on URLs that link to malicious sites. By relying only on URLs, Monarch misses other types of malicious messages; for example, our previous work @cite_36 describes scams based on phone numbers and XSS worms that spread without linking to a malicious URL.
{ "cite_N": [ "@cite_36", "@cite_1" ], "mid": [ "2397135192", "2163764145" ], "abstract": [ "As social networking sites have risen in popularity, cyber-criminals started to exploit these sites to spread malware and to carry out scams. Previous work has extensively studied the use of fake (Sybil) accounts that attackers set up to distribute spam messages (mostly messages that contain links to scam pages or drive-by download sites). Fake accounts typically exhibit highly anomalous behavior, and hence, are relatively easy to detect. As a response, attackers have started to compromise and abuse legitimate accounts. Compromising legitimate accounts is very effective, as attackers can leverage the trust relationships that the account owners have established in the past. Moreover, compromised accounts are more difficult to clean up because a social network provider cannot simply delete the correspond-", "On the heels of the widespread adoption of web services such as social networks and URL shorteners, scams, phishing, and malware have become regular threats. Despite extensive research, email-based spam filtering techniques generally fall short for protecting other web services. To better address this need, we present Monarch, a real-time system that crawls URLs as they are submitted to web services and determines whether the URLs direct to spam. We evaluate the viability of Monarch and the fundamental challenges that arise due to the diversity of web service spam. We show that Monarch can provide accurate, real-time protection, but that the underlying characteristics of spam do not generalize across web services. In particular, we find that spam targeting email qualitatively differs in significant ways from spam campaigns targeting Twitter. We explore the distinctions between email and Twitter spam, including the abuse of public web hosting and redirector services. 
Finally, we demonstrate Monarch's scalability, showing our system could protect a service such as Twitter -- which needs to process 15 million URLs per day -- for a bit under $800 per day." ] }
1509.03531
2950379400
Compromising social network accounts has become a profitable course of action for cybercriminals. By hijacking control of a popular media or business account, attackers can distribute their malicious messages or disseminate fake information to a large user base. The impacts of these incidents range from a tarnished reputation to multi-billion dollar monetary losses on financial markets. In our previous work, we demonstrated how we can detect large-scale compromises (i.e., so-called campaigns) of regular online social network users. In this work, we show how we can use similar techniques to identify compromises of individual high-profile accounts. High-profile accounts frequently have one characteristic that makes this detection reliable -- they show consistent behavior over time. We show that our system, were it deployed, would have been able to detect and prevent three real-world attacks against popular companies and news agencies. Furthermore, our system, in contrast to popular media, would not have fallen for a staged compromise instigated by a US restaurant chain for publicity reasons.
@cite_25 is a system that detects spam links posted on Twitter by analyzing the characteristics of HTTP redirection chains that lead to a final spam page.
{ "cite_N": [ "@cite_25" ], "mid": [ "2398757235" ], "abstract": [ "Twitter can suffer from malicious tweets containing suspicious URLs for spam, phishing, and malware distribution. Previous Twitter spam detection schemes have used account features such as the ratio of tweets containing URLs and the account creation date, or relation features in the Twitter graph. Malicious users, however, can easily fabricate account features. Moreover, extracting relation features from the Twitter graph is time and resource consuming. Previous suspicious URL detection schemes have classified URLs using several features including lexical features of URLs, URL redirection, HTML content, and dynamic behavior. However, evading techniques exist, such as time-based evasion and crawler evasion. In this paper, we propose WARNINGBIRD, a suspicious URL detection system for Twitter. Instead of focusing on the landing pages of individual URLs in each tweet, we consider correlated redirect chains of URLs in a number of tweets. Because attackers have limited resources and thus have to reuse them, a portion of their redirect chains will be shared. We focus on these shared resources to detect suspicious URLs. We have collected a large number of tweets from the Twitter public timeline and trained a statistical classifier with features derived from correlated URLs and tweet context information. Our classifier has high accuracy and low false-positive and falsenegative rates. We also present WARNINGBIRD as a realtime system for classifying suspicious URLs in the Twitter stream. ∗This research was supported by the MKE (The Ministry of Knowledge Economy), Korea, under the ITRC (Information Technology Research Center) support program supervised by the NIPA (National IT Industry Promotion Agency) (NIPA-2011-C1090-1131-0009) and World Class University program funded by the Ministry of Education, Science and Technology through the National Research Foundation of Korea(R31-10100)." ] }
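The intuition in the abstract above — attackers have limited resources, so correlated spam redirect chains share intermediate hops — can be sketched as a simple frequency count over intermediate domains. This is an illustration of the idea only, not the cited system's actual feature set; the threshold and the toy chains are made up:

```python
from collections import Counter

def shared_intermediates(chains, min_count=2):
    """Flag intermediate hops (everything between the entry URL and the
    landing page) that recur across at least `min_count` distinct chains."""
    seen = Counter()
    for chain in chains:
        for hop in set(chain[1:-1]):   # intermediate hops, deduplicated per chain
            seen[hop] += 1
    return {hop for hop, n in seen.items() if n >= min_count}

# Hypothetical redirect chains: two spam chains reusing one redirector,
# one benign chain with its own intermediary.
chains = [
    ['t.co/a', 'redir.example', 'spam1.example'],
    ['t.co/b', 'redir.example', 'spam2.example'],
    ['t.co/c', 'news.example', 'article.example'],
]
```

Here `redir.example` appears in two chains and is flagged as shared infrastructure, while `news.example` is not; a real system would combine such signals with tweet-context features, as the abstract describes.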
1509.03531
2950379400
Compromising social network accounts has become a profitable course of action for cybercriminals. By hijacking control of a popular media or business account, attackers can distribute their malicious messages or disseminate fake information to a large user base. The impacts of these incidents range from a tarnished reputation to multi-billion dollar monetary losses on financial markets. In our previous work, we demonstrated how we can detect large-scale compromises (i.e., so-called campaigns) of regular online social network users. In this work, we show how we can use similar techniques to identify compromises of individual high-profile accounts. High-profile accounts frequently have one characteristic that makes this detection reliable -- they show consistent behavior over time. We show that our system, were it deployed, would have been able to detect and prevent three real-world attacks against popular companies and news agencies. Furthermore, our system, in contrast to popular media, would not have fallen for a staged compromise instigated by a US restaurant chain for publicity reasons.
@cite_31 present a system that, by monitoring a small number of nodes, detects worms propagating on social networks. This paper does not directly address the problem of compromised accounts, but could detect large-scale infections such as @cite_20 . @cite_29 analyze three categories of Twitter users: humans, bots, and cyborgs, which are software-aided humans that share characteristics of both bots and humans. To this end, the authors use a classifier that examines how regularly an account tweets, as well as other account features such as the application that is used to post updates. Using this paper's terminology, compromised accounts would fall in the cyborg category. However, the paper does not provide a way of reliably detecting them, since these accounts are oftentimes misclassified as either bots or humans. More precisely, their true positive ratio for cyborg accounts is only 82.8%, whereas our system detects such accounts much more reliably. Also, the authors in @cite_29 do not provide a clear distinction between compromised accounts and legitimate ones that use third-party applications to post updates on Twitter.
{ "cite_N": [ "@cite_31", "@cite_29", "@cite_20" ], "mid": [ "1982691206", "1969568357", "" ], "abstract": [ "Worms propagating in online social networking (OSN) websites have become a major security threat to both the websites and their users in recent years. Since these worms exhibit unique propagation vectors, existing Internet worm detection mechanisms cannot be applied to them. In this work, we propose an early warning OSN worms detection system, which leverages both the propagation characteristics of these worms and the topological properties of online social networks. Our system can effectively monitor the entire social graph by keeping only a small number of user accounts under surveillance. Moreover, the system applies a two-level correlation scheme to reduce the noise from normal user communications such that infected user accounts can be identified with a higher accuracy. Our evaluation on the real social graph data obtained from Flickr indicates that by monitoring five hundreds users out of 1.8 million users, the proposed detection system can detect the burst of an OSN worm when less than 0.13 of total user accounts are infected. Besides, by adopting simple countermeasures, the detection system is also shown to be very helpful for worm containment.", "Twitter is a new web application playing dual roles of online social networking and micro-blogging. Users communicate with each other by publishing text-based posts. The popularity and open structure of Twitter have attracted a large number of automated programs, known as bots, which appear to be a double-edged sword to Twitter. Legitimate bots generate a large amount of benign tweets delivering news and updating feeds, while malicious bots spread spam or malicious contents. More interestingly, in the middle between human and bot, there has emerged cyborg referred to either bot-assisted human or human-assisted bot. 
To assist human users in identifying who they are interacting with, this paper focuses on the classification of human, bot and cyborg accounts on Twitter. We first conduct a set of large-scale measurements with a collection of over 500,000 accounts. We observe the difference among human, bot and cyborg in terms of tweeting behavior, tweet content, and account properties. Based on the measurement results, we propose a classification system that includes the following four parts: (1) an entropy-based component, (2) a machine-learning-based component, (3) an account properties component, and (4) a decision maker. It uses the combination of features extracted from an unknown user to determine the likelihood of being a human, bot or cyborg. Our experimental evaluation demonstrates the efficacy of the proposed classification system.", "" ] }
1509.03531
2950379400
Compromising social network accounts has become a profitable course of action for cybercriminals. By hijacking control of a popular media or business account, attackers can distribute their malicious messages or disseminate fake information to a large user base. The impacts of these incidents range from a tarnished reputation to multi-billion dollar monetary losses on financial markets. In our previous work, we demonstrated how we can detect large-scale compromises (i.e., so-called campaigns) of regular online social network users. In this work, we show how we can use similar techniques to identify compromises of individual high-profile accounts. High-profile accounts frequently have one characteristic that makes this detection reliable -- they show consistent behavior over time. We show that our system, were it deployed, would have been able to detect and prevent three real-world attacks against popular companies and news agencies. Furthermore, our system, in contrast to popular media, would not have fallen for a staged compromise instigated by a US restaurant chain for publicity reasons.
@cite_13 studied new Twitter spammers that act in a stealthy way to avoid detection. In their system, they use advanced features such as the topology of the network that surrounds the spammer. They do not try to distinguish compromised from spam accounts.
{ "cite_N": [ "@cite_13" ], "mid": [ "1233141674" ], "abstract": [ "Due to the significance and indispensability of detecting and suspending Twitter spammers, many researchers along with the engineers in Twitter Corporation have devoted themselves to keeping Twitter as spam-free online communities. Meanwhile, Twitter spammers are also evolving to evade existing detection techniques. In this paper, we make an empirical analysis of the evasion tactics utilized by Twitter spammers, and then design several new and robust features to detect Twitter spammers. Finally, we formalize the robustness of 24 detection features that are commonly utilized in the literature as well as our proposed ones. Through our experiments, we show that our new designed features are effective to detect Twitter spammers, achieving a much higher detection rate than three state-of-the-art approaches [35,32,34] while keeping an even lower false positive rate." ] }
1509.03389
1807958600
We consider the following problem in which a given number of items has to be chosen from a predefined set. Each item is described by a vector of attributes and for each attribute there is a desired distribution that the selected set should have. We look for a set that fits as much as possible the desired distributions on all attributes. Examples of applications include choosing members of a representative committee, where candidates are described by attributes such as sex, age and profession, and where we look for a committee that for each attribute offers a certain representation, i.e., a single committee that contains a certain number of young and old people, certain number of men and women, certain number of people with different professions, etc. With a single attribute the problem collapses to the apportionment problem for party-list proportional representation systems (in such case the value of the single attribute would be a political affiliation of a candidate). We study the properties of the associated subset selection rules, as well as their computation complexity.
In particular, our model is related to the problem of finding a fully proportional representation @cite_2 @cite_6 . There, the voters vote directly for candidates and do not consider attributes that characterize them. Thus, in this literature, the term ``proportional representation'' has a different meaning: these methods are ``representative'' because each voter feels represented by some member of the elected committee. The computational aspects of fully proportional representation and its extensions have attracted a lot of attention lately @cite_18 @cite_10 @cite_19 @cite_5 @cite_14 . Our study of the properties of multi-attribute proportional representation is close in spirit to the work of Elkind et al. @cite_16 , who give a normative study of multiwinner election rules. Budgeted social choice @cite_13 is technically close to committee elections, but it has a different motivation: the aim is to make a collective choice about a set of objects to be consumed by the group (perhaps, subject to some constraints) rather than about the set of candidates to represent voters.
{ "cite_N": [ "@cite_13", "@cite_18", "@cite_14", "@cite_6", "@cite_19", "@cite_2", "@cite_5", "@cite_16", "@cite_10" ], "mid": [ "1238745702", "", "", "", "2123389191", "2322270235", "2168205737", "", "2124837549" ], "abstract": [ "We develop a general framework for social choice problems in which a limited number of alternatives can be recommended to an agent population. In our budgeted social choice model, this limit is determined by a budget, capturing problems that arise naturally in a variety of contexts, and spanning the continuum from pure consensus decision making (i.e., standard social choice) to fully personalized recommendation. Our approach applies a form of segmentation to social choice problems-- requiring the selection of diverse options tailored to different agent types--and generalizes certain multiwinner election schemes. We show that standard rank aggregation methods perform poorly, and that optimization in our model is NP-complete; but we develop fast greedy algorithms with some theoretical guarantees. Experiments on real-world datasets demonstrate the effectiveness of our algorithms.", "", "", "", "This paper is devoted to the proportional representation (PR) problem when the preferences are clustered single-peaked. PR is a \"multi-winner\" election problem, that we study in Chamberlin and Courant's scheme [6]. We define clustered single-peakedness as a form of single-peakedness with respect to clusters of candidates, i.e. subsets of candidates that are consecutive (in arbitrary order) in the preferences of all voters. We show that the PR problem becomes polynomial when the size of the largest cluster of candidates (width) is bounded. 
Furthermore, we establish the polynomiality of determining the single-peaked width of a preference profile (minimum width for a partition of candidates into clusters compatible with clustered single-peakedness) when the preferences are narcissistic (i.e., every candidate is the most preferred one for some voter).", "The development of social choice theory over the past three decades has brought many new insights into democratic theory. Surprisingly, the theory of representation has gone almost untouched by social choice theorists. This article redresses this neglect and provides an axiomatic study of one means of implementing proportional representation. The distinguishing feature of proportional representation is its concern for the representativeness of deliberations as well as decisions. We define a representative in a way that is particularly attentive to this feature and then define a method of selecting representatives (a variant of the Borda rule) which selects a maximally representative body. We also prove that this method of selection meets four social choice axioms that are met by a number of other important social choicefunctions (including pairwise majority decision and the Borda rule). For over two hundred years, methods of selecting representative bodies have been a major topic of debate among democratic theorists. One important view of the goals and functions of repre", "We study the complexity of (approximate) winner determination under the Monroe and Chamberlin-Courant multiwinner voting rules, which determine the set of representatives by optimizing the total satisfaction or dissatisfaction of the voters with their representatives. The total (dis)satisfaction is calculated either as the sum of individual (dis)satisfactions in the utilitarian case or as the (dis)satisfaction of the worst off voter in the egalitarian case. 
We provide good approximation algorithms for the satisfaction-based utilitarian versions of the Monroe and Chamberlin-Courant rules, and inapproximability results for the dissatisfaction-based utilitarian versions of these rules and also for all egalitarian cases. Our algorithms are applicable and particularly appealing when voters submit truncated ballots. We provide experimental evaluation of the algorithms both on real-life preference-aggregation data and on synthetic preference data. These experiments show that our simple and fast algorithms can, in many cases, find near-perfect solutions.", "", "We investigate two systems of fully proportional representation suggested by Chamberlin & Courant and Monroe. Both systems assign a representative to each voter so that the \"sum of misrepresentations\" is minimized. The winner determination problem for both systems is known to be NP-hard, hence this work aims at investigating whether there are variants of the proposed rules and or specific electorates for which these problems can be solved efficiently. As a variation of these rules, instead of minimizing the sum of misrepresentations, we considered minimizing the maximal misrepresentation introducing effectively two new rules. In the general case these \"minimax\" versions of classical rules appeared to be still NP-hard. We investigated the parameterized complexity of winner determination of the two classical and two new rules with respect to several parameters. Here we have a mixture of positive and negative results: e.g., we proved fixed-parameter tractability for the parameter the number of candidates but fixed-parameter intractability for the number of winners. For single-peaked electorates our results are overwhelmingly positive: we provide polynomial-time algorithms for most of the considered problems. The only rule that remains NP-hard for single-peaked electorates is the classical Monroe rule." ] }
1509.03453
2234617421
We propose a new algorithm for fast approximate nearest neighbor search based on the properties of ordered vectors. Data vectors are classified based on the index and sign of their largest components, thereby partitioning the space in a number of cones centered in the origin. The query is itself classified, and the search starts from the selected cone and proceeds to neighboring ones. Overall, the proposed algorithm corresponds to locality sensitive hashing in the space of directions, with hashing based on the order of components. Thanks to the statistical features emerging through ordering, it deals very well with the challenging case of unstructured data, and is a valuable building block for more complex techniques dealing with structured data. Experiments on both simulated and real-world data prove the proposed algorithm to provide a state-of-the-art performance.
The main innovation in ROSANNA is the use of order statistics. Sorting has already been exploited for ANN search by Chavez et al. @cite_43 . This technique, however, is pivot-based rather than partition-based. A number of anchor vectors, or ``pivots'', are chosen in advance. Then, for each database vector the distances to all pivots are computed and sorted, keeping track of the order in a permutation vector. At search time, a permutation vector is computed also for the query and compared with those of the database points to select a short list of candidate NNs, assuming that close vectors have similar permutation vectors. In summary, sorting is used for very different goals than in ROSANNA.
{ "cite_N": [ "@cite_43" ], "mid": [ "2099812803" ], "abstract": [ "We introduce a new probabilistic proximity search algorithm for range and A\"-nearest neighbor (A\"-NN) searching in both coordinate and metric spaces. Although there exist solutions for these problems, they boil down to a linear scan when the space is intrinsically high dimensional, as is the case in many pattern recognition tasks. This, for example, renders the A\"-NN approach to classification rather slow in large databases. Our novel idea is to predict closeness between elements according to how they order their distances toward a distinguished set of anchor objects. Each element in the space sorts the anchor objects from closest to farthest to it and the similarity between orders turns out to be an excellent predictor of the closeness between the corresponding elements. We present extensive experiments comparing our method against state-of-the-art exact and approximate techniques, both in synthetic and real, metric and nonmetric databases, measuring both CPU time and distance computations. The experiments demonstrate that our technique almost always improves upon the performance of alternative techniques, in some cases by a wide margin." ] }
1509.03453
2234617421
We propose a new algorithm for fast approximate nearest neighbor search based on the properties of ordered vectors. Data vectors are classified based on the index and sign of their largest components, thereby partitioning the space in a number of cones centered in the origin. The query is itself classified, and the search starts from the selected cone and proceeds to neighboring ones. Overall, the proposed algorithm corresponds to locality sensitive hashing in the space of directions, with hashing based on the order of components. Thanks to the statistical features emerging through ordering, it deals very well with the challenging case of unstructured data, and is a valuable building block for more complex techniques dealing with structured data. Experiments on both simulated and real-world data prove the proposed algorithm to provide a state-of-the-art performance.
In Concomitant LSH @cite_0 , instead, a Voronoi partition of the space of directions is built based on a set of @math points taken at random on the unit sphere. Although originally proposed for the cosine distance, it presents some similarities with ROSANNA, namely the use of directions and sorting, and can be easily adapted to deal with the Euclidean distance. However, to classify the query, @math distances must be computed at search time, before inspecting the candidates, which is a severe overhead for large hashing tables. To reduce this burden, several variants are also proposed in @cite_0 which, however, tend to produce a worse partition of the space. Irrespective of the implementation details, the regular partition of the space of directions provided by ROSANNA can be expected to be more effective than the random partition used in Concomitant LSH. Moreover, in ROSANNA, the hashing requires only a vector sorting, with no distance computation. As for concomitants (related to order statistics), they are only used to carry out a theoretical analysis of performance, but are not considered in the algorithm.
{ "cite_N": [ "@cite_0" ], "mid": [ "2019336343" ], "abstract": [ "Locality Sensitive Hash functions are invaluable tools for approximate near neighbor problems in high dimensional spaces. In this work, we are focused on LSH schemes where the similarity metric is the cosine measure. The contribution of this work is a new class of locality sensitive hash functions for the cosine similarity measure based on the theory of concomitants, which arises in order statistics. Consider n i.i.d sample pairs, (X1; Y1); (X2; Y2); : : : ;(Xn; Yn) obtained from a bivariate distribution f(X, Y). Concomitant theory captures the relation between the order statistics of X and Y in the form of a rank distribution given by Prob(Rank(Yi)=j-Rank(Xi)=k). We exploit properties of the rank distribution towards developing a locality sensitive hash family that has excellent collision rate properties for the cosine measure. The computational cost of the basic algorithm is high for high hash lengths. We introduce several approximations based on the properties of concomitant order statistics and discrete transforms that perform almost as well, with significantly reduced computational cost. We demonstrate the practical applicability of our algorithms by using it for finding similar images in an image repository." ] }
1509.03302
2175022647
One significant challenge to scaling entity resolution algorithms to massive datasets is understanding how performance changes after moving beyond the realm of small, manually labeled reference datasets. Unlike traditional machine learning tasks, when an entity resolution algorithm performs well on small hold-out datasets, there is no guarantee this performance holds on larger hold-out datasets. We prove simple bounding properties between the performance of a match function on a small validation set and the performance of a pairwise entity resolution algorithm on arbitrarily sized datasets. Thus, our approach enables optimization of pairwise entity resolution algorithms for large datasets, using a small set of labeled data.
Entity resolution encompasses a broad set of approaches, including many adapted from the machine learning, optimization, and graph theory domains. Strategies appropriate for ER include hierarchical clustering @cite_2 , integer linear programming @cite_9 , latent Dirichlet allocation @cite_0 , pairwise match-merge @cite_1 , Markov logic @cite_15 , and hybrid human-machine systems @cite_3 . Pairwise entity resolution approaches are appealing because they use an intuitive and easy-to-implement iterative match and merge process between pairs of records. Further, under certain assumptions, pairwise algorithms will perform the optimal number of record comparisons @cite_1 .
{ "cite_N": [ "@cite_9", "@cite_1", "@cite_3", "@cite_0", "@cite_2", "@cite_15" ], "mid": [ "2020018978", "2141253686", "2056748234", "2148019918", "2102350406", "2171472464" ], "abstract": [ "", "It sometimes happens (for instance in case control studies) that a classifier is trained on a data set that does not reflect the true a priori probabilities of the target classes on real-world data. This may have a negative effect on the classification accuracy obtained on the real-world data set, especially when the classifier's decisions are based on the a posteriori probabilities of class membership. Indeed, in this case, the trained classifier provides estimates of the a posteriori probabilities that are not valid for this real-world data set (they rely on the a priori probabilities of the training set). Applying the classifier as is (without correcting its outputs with respect to these new conditions) on this new data set may thus be suboptimal. In this note, we present a simple iterative procedure for adjusting the outputs of the trained classifier with respect to these new a priori probabilities without having to refit the model, even when these probabilities are not known in advance. As a by-product, estimates of the new a priori probabilities are also obtained. This iterative algorithm is a straightforward instance of the expectation-maximization (EM) algorithm and is shown to maximize the likelihood of the new data. Thereafter, we discuss a statistical test that can be applied to decide if the a priori class probabilities have changed from the training set to the real-world data. The procedure is illustrated on different classification problems involving a multilayer neural network, and comparisons with a standard procedure for a priori probability estimation are provided. Our original method, based on the EM algorithm, is shown to be superior to the standard one for a priori probability estimation. 
Experimental results also indicate that the classifier with adjusted outputs always performs better than the original one in terms of classification accuracy, when the a priori probability conditions differ from the training set to the real-world data. The gain in classification accuracy can be significant.", "Entity resolution is central to data integration and data cleaning. Algorithmic approaches have been improving in quality, but remain far from perfect. Crowdsourcing platforms offer a more accurate but expensive (and slow) way to bring human insight into the process. Previous work has proposed batching verification tasks for presentation to human workers but even with batching, a human-only approach is infeasible for data sets of even moderate size, due to the large numbers of matches to be tested. Instead, we propose a hybrid human-machine approach in which machines are used to do an initial, coarse pass over all the data, and people are used to verify only the most likely matching pairs. We show that for such a hybrid system, generating the minimum number of verification tasks of a given size is NP-Hard, but we develop a novel two-tiered heuristic approach for creating batched tasks. We describe this method, and present the results of extensive experiments on real data sets using a popular crowdsourcing platform. The experiments show that our hybrid approach achieves both good efficiency and high accuracy compared to machine-only or human-only alternatives.", "Many databases contain uncertain and imprecise references to real-world entities. The absence of identifiers for the underlying entities often results in a database which contains multiple references to the same entity. This can lead not only to data redundancy, but also inaccuracies in query processing and knowledge extraction. These problems can be alleviated through the use of entity resolution. 
Entity resolution involves discovering the underlying entities and mapping each database reference to these entities. Traditionally, entities are resolved using pairwise similarity over the attributes of references. However, there is often additional relational information in the data. Specifically, references to different entities may cooccur. In these cases, collective entity resolution, in which entities for cooccurring references are determined jointly rather than independently, can improve entity resolution accuracy. We propose a novel relational clustering algorithm that uses both attribute and relational information for determining the underlying domain entities, and we give an efficient implementation. We investigate the impact that different relational similarity measures have on entity resolution quality. We evaluate our collective entity resolution algorithm on multiple real-world databases. We show that it improves entity resolution performance over both attribute-based baselines and over algorithms that consider relational information but do not resolve entities collectively. In addition, we perform detailed experiments on synthetically generated data to identify data characteristics that favor collective relational resolution over purely attribute-based algorithms.", "The problem of record linkage focuses on determining whether two object descriptions refer to the same underlying entity. Addressing this problem effectively has many practical applications, e.g., elimination of duplicate records in databases and citation matching for scholarly articles. In this paper, we consider a new domain where the record linkage problem is manifested: Internet comparison shopping. We address the resulting linkage setting that requires learning a similarity function between record pairs from streaming data. The learned similarity function is subsequently used in clustering to determine which records are co-referent and should be linked. 
We present an online machine learning method for addressing this problem, where a composite similarity function based on a linear combination of basis functions is learned incrementally. We illustrate the efficacy of this approach on several real-world datasets from an Internet comparison shopping site, and show that our method is able to effectively learn various distance functions for product data with differing characteristics. We also provide experimental results that show the importance of considering multiple performance measures in record linkage evaluation.", "Entity resolution is the problem of determining which records in a database refer to the same entities, and is a crucial and expensive step in the data mining process. Interest in it has grown rapidly in recent years, and many approaches have been proposed. However, they tend to address only isolated aspects of the problem, and are often ad hoc. This paper proposes a well-founded, integrated solution to the entity resolution problem based on Markov logic. Markov logic combines first-order logic and probabilistic graphical models by attaching weights to first-order formulas, and viewing them as templates for features of Markov networks. We show how a number of previous approaches can be formulated and seamlessly combined in Markov logic, and how the resulting learning and inference problems can be solved efficiently. Experiments on two citation databases show the utility of this approach, and evaluate the contribution of the different components." ] }
1509.03302
2175022647
One significant challenge to scaling entity resolution algorithms to massive datasets is understanding how performance changes after moving beyond the realm of small, manually labeled reference datasets. Unlike traditional machine learning tasks, when an entity resolution algorithm performs well on small hold-out datasets, there is no guarantee this performance holds on larger hold-out datasets. We prove simple bounding properties between the performance of a match function on a small validation set and the performance of a pairwise entity resolution algorithm on arbitrarily sized datasets. Thus, our approach enables optimization of pairwise entity resolution algorithms for large datasets, using a small set of labeled data.
Perhaps the most general framework for pairwise entity resolution was presented by @cite_1 . They outlined a theoretically disciplined approach, wherein certain properties of the match and merge function guarantee a deterministic output in the optimal number of record comparisons. We explore the use of some of these properties in the derivation of our bounds. Collectively, these properties are referred to by their acronym ICAR:
{ "cite_N": [ "@cite_1" ], "mid": [ "2141253686" ], "abstract": [ "It sometimes happens (for instance in case control studies) that a classifier is trained on a data set that does not reflect the true a priori probabilities of the target classes on real-world data. This may have a negative effect on the classification accuracy obtained on the real-world data set, especially when the classifier's decisions are based on the a posteriori probabilities of class membership. Indeed, in this case, the trained classifier provides estimates of the a posteriori probabilities that are not valid for this real-world data set (they rely on the a priori probabilities of the training set). Applying the classifier as is (without correcting its outputs with respect to these new conditions) on this new data set may thus be suboptimal. In this note, we present a simple iterative procedure for adjusting the outputs of the trained classifier with respect to these new a priori probabilities without having to refit the model, even when these probabilities are not known in advance. As a by-product, estimates of the new a priori probabilities are also obtained. This iterative algorithm is a straightforward instance of the expectation-maximization (EM) algorithm and is shown to maximize the likelihood of the new data. Thereafter, we discuss a statistical test that can be applied to decide if the a priori class probabilities have changed from the training set to the real-world data. The procedure is illustrated on different classification problems involving a multilayer neural network, and comparisons with a standard procedure for a priori probability estimation are provided. Our original method, based on the EM algorithm, is shown to be superior to the standard one for a priori probability estimation. 
Experimental results also indicate that the classifier with adjusted outputs always performs better than the original one in terms of classification accuracy, when the a priori probability conditions differ from the training set to the real-world data. The gain in classification accuracy can be significant." ] }
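The match/merge properties referenced above can be made concrete with a small sketch. The ICAR checks below use a hypothetical record representation (sets of attribute-value pairs, union merge, overlap match) invented for illustration; they are not the framework of @cite_1 itself, only a minimal instance of the properties it names.

```python
# Minimal ICAR sketch for a hypothetical match/merge pair.
# Records are frozensets of (attribute, value) pairs.

def match(r1, r2):
    # Two records match if they share at least one attribute value.
    return bool(r1 & r2)

def merge(r1, r2):
    # Union merge: keep every attribute value seen in either record.
    return r1 | r2

r1 = frozenset({("name", "J. Smith"), ("email", "js@x.org")})
r2 = frozenset({("name", "John Smith"), ("email", "js@x.org")})
r3 = frozenset({("name", "John Smith"), ("phone", "555-0100")})

# Idempotence: a record matches itself and merging it with itself is a no-op.
assert match(r1, r1) and merge(r1, r1) == r1
# Commutativity: match and merge are symmetric in their arguments.
assert match(r1, r2) == match(r2, r1) and merge(r1, r2) == merge(r2, r1)
# Associativity: the order in which records are merged does not matter.
assert merge(r1, merge(r2, r3)) == merge(merge(r1, r2), r3)
# Representativity: the merged record matches anything r1 matched.
assert not match(r1, r3) or match(merge(r1, r2), r3)
print("ICAR properties hold for this match/merge pair")
```

These four properties are exactly what allows a resolution algorithm to process comparisons in any order and still produce a deterministic result.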
1509.03001
2223602738
Sign language recognition is important for natural and convenient communication between the deaf community and the hearing majority. We take the highly efficient initial step of an automatic fingerspelling recognition system using convolutional neural networks (CNNs) from depth maps. In this work, we consider a relatively larger number of classes compared with the previous literature. We train CNNs for the classification of 31 alphabets and numbers using a subset of collected depth data from multiple subjects. Using different learning configurations, such as hyper-parameter selection with and without validation, we achieve 99.99% accuracy for observed signers and 83.58% to 85.49% accuracy for new signers. The results show that accuracy improves as we include more data from different subjects during training. The processing time is 3 ms for the prediction of a single image. To the best of our knowledge, the system achieves the highest accuracy and speed. The trained model and dataset are available in our repository.
Although gesture recognition only considers well-specified hand gestures, some approaches are related to sign language recognition. Nagi et al. @cite_1 proposed a gesture recognition system for human-robot interaction using CNNs. Van den Bergh et al. @cite_6 proposed a hand gesture recognition system using Haar wavelets and database searching. The system extracts features using Haar wavelets and classifies an input image by finding the nearest match in the database. Although both systems show good results, these methods consider only six gesture classes.
{ "cite_N": [ "@cite_1", "@cite_6" ], "mid": [ "2074772891", "2097299079" ], "abstract": [ "Automatic recognition of gestures using computer vision is important for many real-world applications such as sign language recognition and human-robot interaction (HRI). Our goal is a real-time hand gesture-based HRI interface for mobile robots. We use a state-of-the-art big and deep neural network (NN) combining convolution and max-pooling (MPCNN) for supervised feature learning and classification of hand gestures given by humans to mobile robots using colored gloves. The hand contour is retrieved by color segmentation, then smoothened by morphological image processing which eliminates noisy edges. Our big and deep MPCNN classifies 6 gesture classes with 96 accuracy, nearly three times better than the nearest competitor. Experiments with mobile robots using an ARM 11 533MHz processor achieve real-time gesture recognition performance.", "Time-of-Flight (ToF) and other IR-based cameras that register depth are becoming more and more affordable in consumer electronics. This paper aims to improve a realtime hand gesture interaction system by augmenting it with a ToF camera. First, the ToF camera and the RGB camera are calibrated, and a mapping is made from the depth data to the RGB image. Then, a novel hand detection algorithm is introduced based on depth and color. This not only improves detection rates, but also allows for the hand to overlap with the face, or with hands from other persons in the background. The hand detection algorithm is evaluated in these settings, and compared to previous algorithms. Furthermore, the depth information allows us to track the position of the hand in 3D, allowing for more interesting modes of interaction. Finally, the hand gesture recognition algorithm is applied to the depth data as well, and compared to the recognition based on the RGB images. 
The result is a real-time hand gesture interaction system that allows for complex 3D gestures and is not disturbed by objects or persons in the background." ] }
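The Haar-wavelet-plus-nearest-match pipeline described above can be sketched in a few lines. Everything here is an illustrative assumption, far smaller than the system of @cite_6 : the 4x4 toy "depth images", the single-level transform, and the Euclidean nearest-neighbour lookup over the coarse coefficients.

```python
import math

def haar_1d(row):
    # One level of the 1-D Haar transform: pairwise averages, then details.
    avg = [(row[i] + row[i + 1]) / 2 for i in range(0, len(row), 2)]
    det = [(row[i] - row[i + 1]) / 2 for i in range(0, len(row), 2)]
    return avg + det

def haar_2d(img):
    # One level of the 2-D Haar transform: rows first, then columns.
    rows = [haar_1d(r) for r in img]
    cols = [haar_1d([rows[r][c] for r in range(len(rows))])
            for c in range(len(rows[0]))]
    # Transpose back so coefficients are indexed [row][col].
    return [[cols[c][r] for c in range(len(cols))] for r in range(len(cols[0]))]

def features(img):
    # Flatten the coarse (top-left) quadrant as the feature vector.
    h, n = haar_2d(img), len(img) // 2
    return [h[r][c] for r in range(n) for c in range(n)]

def nearest(db, img):
    # Classify by the closest database entry in Euclidean distance.
    f = features(img)
    return min(db, key=lambda item: math.sqrt(
        sum((a - b) ** 2 for a, b in zip(f, item[1]))))[0]

# Hypothetical 4x4 depth templates for two gestures.
open_hand = [[9, 9, 9, 9], [9, 1, 1, 9], [9, 1, 1, 9], [9, 9, 9, 9]]
fist      = [[1, 1, 1, 1], [1, 9, 9, 1], [1, 9, 9, 1], [1, 1, 1, 1]]
db = [("open", features(open_hand)), ("fist", features(fist))]

query = [[9, 9, 9, 8], [9, 2, 1, 9], [8, 1, 1, 9], [9, 9, 8, 9]]
print(nearest(db, query))  # prints "open"
```

A real system would use several wavelet levels and a large labeled database, but the classification step is exactly this nearest-match search.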
1509.03001
2223602738
Sign language recognition is important for natural and convenient communication between the deaf community and the hearing majority. We take the highly efficient initial step of an automatic fingerspelling recognition system using convolutional neural networks (CNNs) from depth maps. In this work, we consider a relatively larger number of classes compared with the previous literature. We train CNNs for the classification of 31 alphabets and numbers using a subset of collected depth data from multiple subjects. Using different learning configurations, such as hyper-parameter selection with and without validation, we achieve 99.99% accuracy for observed signers and 83.58% to 85.49% accuracy for new signers. The results show that accuracy improves as we include more data from different subjects during training. The processing time is 3 ms for the prediction of a single image. To the best of our knowledge, the system achieves the highest accuracy and speed. The trained model and dataset are available in our repository.
ASL sentence recognition and verification have also been explored. Zafrulla et al. @cite_3 proposed a system which recognizes sentences of three to five words, where each word must be one of the 19 signs in their dictionary. They also applied Hidden Markov Models to extracted features.
{ "cite_N": [ "@cite_3" ], "mid": [ "1969724184" ], "abstract": [ "We investigate the potential of the Kinect depth-mapping camera for sign language recognition and verification for educational games for deaf children. We compare a prototype Kinect-based system to our current CopyCat system which uses colored gloves and embedded accelerometers to track children's hand movements. If successful, a Kinect-based approach could improve interactivity, user comfort, system robustness, system sustainability, cost, and ease of deployment. We collected a total of 1000 American Sign Language (ASL) phrases across both systems. On adult data, the Kinect system resulted in 51.5 and 76.12 sentence verification rates when the users were seated and standing respectively. These rates are comparable to the 74.82 verification rate when using the current(seated) CopyCat system. While the Kinect computer vision system requires more tuning for seated use, the results suggest that the Kinect may be a viable option for sign verification." ] }
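Sentence recognizers of this kind score word sequences with HMMs. The standard Viterbi recursion below decodes the most likely hidden sign sequence for a toy two-sign model; the states, observations, and probabilities are invented for illustration and have nothing to do with the actual CopyCat feature set of @cite_3 .

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    # V[t][s] = (best probability of a path ending in state s at time t,
    #            predecessor state on that path).
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p][0] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states)
            V[t][s] = (prob, prev)
    # Backtrack from the most probable final state.
    best = max(states, key=lambda s: V[-1][s][0])
    path = [best]
    for t in range(len(obs) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return list(reversed(path))

# Toy model: two signs, observed hand-position features "up"/"down".
states = ("HELLO", "WORLD")
start = {"HELLO": 0.6, "WORLD": 0.4}
trans = {"HELLO": {"HELLO": 0.7, "WORLD": 0.3},
         "WORLD": {"HELLO": 0.4, "WORLD": 0.6}}
emit = {"HELLO": {"up": 0.8, "down": 0.2},
        "WORLD": {"up": 0.3, "down": 0.7}}

print(viterbi(("up", "up", "down"), states, start, trans, emit))
# prints ['HELLO', 'HELLO', 'WORLD']
```

Verification against a fixed reference sentence, as in CopyCat, amounts to comparing the decoded sequence (or its likelihood) with the prompted one.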
1509.03075
2228745896
Over the past decade, many works on the modeling of wireless networks using stochastic geometry have been proposed. Results about the probability of coverage, throughput, or mean interference have been provided for a wide variety of networks (cellular, ad hoc, cognitive, sensor, etc.). These results notably make it possible to tune network protocol parameters. Nevertheless, in their vast majority, these works assume that the wireless network deployment is flat: nodes are placed on the Euclidean plane. However, this assumption does not hold in dense urban environments, where many nodes are deployed in high buildings. In this letter, we derive the exact form of the probability of coverage for the cases where the interferers form a 3D Poisson Point Process (PPP) or a 3D Modified Matern Process (MMP), and compare the results with the 2D case. The main goal of this letter is to show that the 2D model, although the most common, can lead to either an optimistic or a pessimistic evaluation of the probability of coverage, depending on the parameters of the model.
Even if most research efforts have focused on 2D networks, 3D networks have been emerging over the past few years. This is notably the case for WSNs @cite_17 @cite_5 @cite_15 , underwater mobile networks @cite_2 , and Unmanned Aerial Vehicle (UAV) networks @cite_1 .
{ "cite_N": [ "@cite_1", "@cite_2", "@cite_5", "@cite_15", "@cite_17" ], "mid": [ "2087211251", "1604225584", "2114533345", "1465761504", "2154078937" ], "abstract": [ "This paper presents a collaborative system made up of a Wireless Sensor Network (WSN) and an aerial robot, which is applied to real-time frost monitoring in vineyards. The core feature of our system is a dynamic mobile node carried by an aerial robot, which ensures communication between sparse clusters located at fragmented parcels and a base station. This system overcomes some limitations of the wireless networks in areas with such characteristics. The use of a dedicated communication channel enables data routing to from unlimited distances.", "The large-scale mobile underwater wireless sensor network (UWSN) is a novel networking paradigm to explore aqueous environments. However, the characteristics of mobile UWSNs, such as low communication bandwidth, large propagation delay, floating node mobility, and high error probability, are significantly different from ground-based wireless sensor networks. The novel networking paradigm poses interdisciplinary challenges that will require new technological solutions. In particular, in this article we adopt a top-down approach to explore the research challenges in mobile UWSN design. Along the layered protocol stack, we proceed roughly from the top application layer to the bottom physical layer. At each layer, a set of new design intricacies is studied. The conclusion is that building scalable mobile UWSNs is a challenge that must be answered by interdisciplinary efforts of acoustic communications, signal processing, and mobile acoustic network protocol design.", "In a wireless sensor network (WSN), connectivity enables the sensors to communicate with each other, while sensing coverage reflects the quality of surveillance. 
Although the majority of studies on coverage and connectivity in WSNs consider 2D space, 3D settings represent more accurately the network design for real-world applications. As an example, underwater sensor networks require design in 3D rather than 2D space. In this paper, we focus on the connectivity and k-coverage issues in 3D WSNs, where each point is covered by at least k sensors (the maximum value of k is called the coverage degree). Precisely, we propose the Reuleaux tetrahedron model to characterize k-coverage of a 3D field and investigate the corresponding minimum sensor spatial density. We prove that a 3D field is guaranteed to be k-covered if any Reuleaux tetrahedron region of the field contains at least k sensors. We also compute the connectivity of 3D k-covered WSNs. Based on the concepts of conditional connectivity and forbidden faulty sensor set, which cannot include all the neighbors of a sensor, we prove that 3D k-covered WSNs can sustain a large number of sensor failures. Precisely, we prove that 3D k-covered WSNs have connectivity higher than their coverage degree k. Then, we relax some widely used assumptions in coverage and connectivity in WSNs, such as sensor homogeneity and unit sensing and communication model, so as to promote the practicality of our results in real-world scenarios. Also, we propose a placement strategy of sensors to achieve full k-coverage of a 3D field. This strategy can be used in the design of energy-efficient scheduling protocols for 3D k-covered WSNs to extend the network lifetime.", "This paper presents Indriya, a large-scale, low-cost wireless sensor network testbed deployed at the National University of Singapore. Indriya uses TelosB devices and it is built on an active-USB infrastructure. The infrastructure acts as a remote programming back-channel and it also supplies electric power to sensor devices. Indriya is designed to reduce the costs of both deployment and maintenance of a large-scale testbed. 
Indriya has been in use by over 100 users with its maintenance incurring less than US$500 for almost 2 years of its usage.", "This research focuses on distributed and localized algorithms for precise boundary detection in 3D wireless networks. Our objectives are in two folds. First, we aim to identify the nodes on the boundaries of a 3D network, which serve as a key attribute that characterizes the network, especially in such geographic exploration tasks as terrain and underwater reconnaissance. Second, we construct locally planarized 2-manifold surfaces for inner and outer boundaries, in order to enable available graph theory tools to be applied on 3D surfaces, such as embedding, localization, partition, and greedy routing among many others. To achieve the first objective, we propose a Unit Ball Fitting (UBF) algorithm that discovers a set of potential boundary nodes, followed by a refinement algorithm, named Isolated Fragment Filtering (IFF), which removes isolated nodes that are misinterpreted as boundary nodes by UBF. Based on the identified boundary nodes, we develop an algorithm that constructs a locally planarized triangular mesh surface for each 3D boundary. Our proposed scheme is localized, requiring information within one-hop neighborhood only. Our simulation results demonstrate that the proposed algorithms can effectively identify boundary nodes and surfaces, even under high measurement errors. As far as we know, this is the first work for discovering boundary nodes and constructing boundary surfaces in 3D wireless networks." ] }
1509.03075
2228745896
Over the past decade, many works on the modeling of wireless networks using stochastic geometry have been proposed. Results about the probability of coverage, throughput, or mean interference have been provided for a wide variety of networks (cellular, ad hoc, cognitive, sensor, etc.). These results notably make it possible to tune network protocol parameters. Nevertheless, in their vast majority, these works assume that the wireless network deployment is flat: nodes are placed on the Euclidean plane. However, this assumption does not hold in dense urban environments, where many nodes are deployed in high buildings. In this letter, we derive the exact form of the probability of coverage for the cases where the interferers form a 3D Poisson Point Process (PPP) or a 3D Modified Matern Process (MMP), and compare the results with the 2D case. The main goal of this letter is to show that the 2D model, although the most common, can lead to either an optimistic or a pessimistic evaluation of the probability of coverage, depending on the parameters of the model.
From a more theoretical point of view, 3D networks have been investigated in terms of capacity @cite_0 @cite_14 , and scaling laws have been provided. Nevertheless, in the present work we focus on the stochastic geometric approach to the study of wireless networks in the sense of @cite_16 @cite_13 . This approach has the advantage of providing tractable results for the probability of coverage in the case of the PPP model and good approximations for other models @cite_16 .
{ "cite_N": [ "@cite_0", "@cite_14", "@cite_13", "@cite_16" ], "mid": [ "2143443272", "2151690436", "2133685514", "635250944" ], "abstract": [ "Consider n nodes located in a sphere of volume V m sup 3 , each capable of transmitting at a data rate of W bits sec. Under a protocol based model for successful receptions, the entire network can carry only spl Theta (WV1 3n2 3) bit-meters sec, where 1 bit carried a distance of 1 meter is counted as 1 bit-meter. This is the best possible performance even assuming the node locations, traffic patterns, and the range power timing of each transmission, are all optimally chosen. If the node locations and their destinations are randomly chosen, and all transmissions employ the same power range, then each node only obtains a throughput of spl Theta (w (nlog sup 2 n)1 3) bits sec, if the network is optimally operated. Similar results hold under an alternate physical model where a minimum signal-to-interference ratio is specified for successful receptions. The proofs of these results require determination of the VC-dimensions of certain geometric sets, which may be of independent interest.", "Network capacity investigation has been intensive in the past few years. A large body of work has appeared in the literature. However, so far most of the effort has been made on two-dimensional wireless networks only. With the great development of wireless technologies, wireless networks are envisioned to extend from two-dimensional space to three-dimensional space. In this paper, we investigate for the first time the throughput capacity of 3D regular ad hoc networks (RANETs) and of 3D heterogeneous ad hoc networks (HANETs), respectively, by employing a generalized physical model. In 3D RANETs, we assume that the nodes are regularly placed, while in 3D HANETs, we consider that the nodes are distributed according to a general Nonhomogeneous Poisson Process (NPP). 
We find both lower and upper bounds in both types of networks in a broad power propagation regime, i.e., when the path loss exponent is no less than 2.", "Since interference is the main performance-limiting factor in most wireless networks, it is crucial to characterize the interference statistics. The two main determinants of the interference are the network geometry (spatial distribution of concurrently transmitting nodes) and the path loss law (signal attenuation with distance). For certain classes of node distributions, most notably Poisson point processes, and attenuation laws, closed-form results are available, for both the interference itself as well as the signal-to-interference ratios, which determine the network performance. This monograph presents an overview of these results and gives an introduction to the analytical techniques used in their derivation. The node distribution models range from lattices to homogeneous and clustered Poisson models to general motion-invariant ones. The analysis of the more general models requires the use of Palm theory, in particular conditional probability generating functionals, which are briefly introduced in the appendix.", "Preface. Preface to Volume II. Contents of Volume II. Part IV Medium Access Control 1 Spatial Aloha: the Bipole Model 2 Receiver Selection in Spatial 3 Carrier Sense Multiple 4 Code Division Multiple Access in Cellular Networks Bibliographical Notes on Part IV. Part V Multihop Routing in Mobile ad Hoc Networks: 5 Optimal Routing 6 Greedy Routing 7 Time-Space Routing Bibliographical Notes on Part V. Part VI Appendix:Wireless Protocols and Architectures: 8 RadioWave Propagation 9 Signal Detection 10 Wireless Network Architectures and Protocols Bibliographical Notes on Part VI Bibliography Table of Notation Index." ] }
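The 2D-versus-3D comparison that the letter carries out analytically can be approximated by brute force. The sketch below estimates the SIR coverage probability with PPP interferers in a finite disk or ball around the receiver; the path-loss exponent, SIR threshold, intensity, window radius, and the absence of fading are all simplifying assumptions, not the letter's exact model.

```python
import math, random

def poisson(mu):
    # Knuth's Poisson sampler (fine for the moderate means used here).
    L, k, p = math.exp(-mu), 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

def coverage(dim, lam, d=1.0, alpha=4.0, theta=10.0, R=10.0, trials=3000):
    # Monte Carlo estimate of P(SIR > theta) for a receiver at the origin,
    # its transmitter at distance d, and PPP interferers of intensity lam
    # in a disk (dim=2) or ball (dim=3) of radius R. No fading, unit powers.
    vol = math.pi * R ** 2 if dim == 2 else 4.0 / 3.0 * math.pi * R ** 3
    hits = 0
    for _ in range(trials):
        interference = 0.0
        for _ in range(poisson(lam * vol)):
            # Inverse-CDF sampling of the distance to a uniform point.
            r = max(R * random.random() ** (1.0 / dim), 1e-9)
            interference += r ** -alpha
        if d ** -alpha > theta * interference:
            hits += 1
    return hits / trials

random.seed(1)
p2, p3 = coverage(2, 0.01), coverage(3, 0.01)
print(f"2D coverage ~ {p2:.2f}, 3D coverage ~ {p3:.2f}")
```

At equal intensity, a 3D window contains far more interferers than a 2D one, so the estimated coverage drops; this is the qualitative gap between the two models that the letter quantifies exactly.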
1509.03075
2228745896
Over the past decade, many works on the modeling of wireless networks using stochastic geometry have been proposed. Results about probability of coverage, throughput or mean interference, have been provided for a wide variety of networks (cellular, ad-hoc, cognitive, sensors, etc). These results notably allow to tune network protocol parameters. Nevertheless, in their vast majority, these works assume that the wireless network deployment is flat: nodes are placed on the Euclidean plane. However, this assumption is disproved in dense urban environments where many nodes are deployed in high buildings. In this letter, we derive the exact form of the probability of coverage for the cases where the interferers form a 3D Poisson Point Process (PPP) and a 3D Modified Matern Process (MMP), and compare the results with the 2D case. The main goal of this letter is to show that the 2D model, although being the most common, can lead either to an optimistic or a pessimistic evaluation of the probability of coverage depending on the parameters of the model.
Many works modeling CSMA access through stochastic geometry have emerged in the literature over the past ten years @cite_18 @cite_10 @cite_19 . These works are based on a modified version of the Matern Point Process, in which points cannot lie too close to each other. This makes it possible to model the contention radius of CSMA as follows. The nodes of the network form a PPP on the Euclidean plane and contend for access to the medium. Each node picks a mark uniformly at random between zero and one; a node is retained in the process if it does not detect any node with a smaller mark (this models the backoff procedure). The resulting process is the Modified Matern Process (MMP), which will be detailed in section . As there is no known exact formulation of the interference in such a process, @cite_18 @cite_10 and @cite_19 use approximation techniques. They show that their approximations lie close to simulation results. We extend these works by considering a 3D distribution of the nodes and compare the resulting model to simulations. To the best of our knowledge, this has not been considered in the literature.
{ "cite_N": [ "@cite_19", "@cite_18", "@cite_10" ], "mid": [ "2028070124", "2146115135", "2154233760" ], "abstract": [ "For spectrum sharing and avoidance of mutual interference, carrier-sense multiple access (CSMA) protocols are very popular in distributed wireless networks. CSMA protocols aim to maximize the spatial frequency reuse while limiting the mutual interference and outage. The hard core point process (HCPP) is a very popular tool for modeling and analysis of random CSMA networks. However, the traditional HCPP suffers from the node intensity (and hence the interference) underestimation flaw. Therefore, we propose a modified hard core point process to mitigate this flaw. The proposed modified HCPP is generalized for any fading environment. To this end, we derive a closed-form expression for the intensity of simultaneously active transmitters in a random wireless CSMA network. Then, we derive a closed-form expression for approximating the outage probability experienced by a generic receiver in the network, and subsequently, use it to obtain the transmission capacity of the network. Finally, we show the existence of an optimal carrier-sensing threshold for the CSMA protocol that maximizes the transmission capacity of the network. Simulation results validate the analysis and also provide interesting insights into the design of practical CSMA networks.", "This paper presents a stochastic geometry model for the performance analysis and the planning of dense IEEE 802.11 networks. This model allows one to propose heuristic formulas for various properties of such networks like the probability for users to be covered, the probability for access points to be granted access to the channel or the average long term throughput provided to end-users. The main merit of this model is to take the effect of interferences and that of CSMA into account within this dense network context. 
This analytic model, which is based on Matern point processes, is partly validated against simulation. It is then used to assess various properties of such networks. We show for instance how the long term throughput obtained by end-users behaves when the access point density increases. We also briefly show how to use this model for the planning of managed networks and for the economic modeling of unplanned networks.", "In this paper, the performance of the ALOHA and CSMA MAC protocols are analyzed in spatially distributed wireless networks. The main system objective is correct reception of packets, and thus the analysis is performed in terms of outage probability. In our network model, packets belonging to specific transmitters arrive randomly in space and time according to a 3-D Poisson point process, and are then transmitted to their intended destinations using a fully-distributed MAC protocol. A packet transmission is considered successful if the received SINR is above a predefined threshold for the duration of the packet. Accurate bounds on the outage probabilities are derived as a function of the transmitter density, the number of backoffs and retransmissions, and in the case of CSMA, also the sensing threshold. The analytical expressions are validated with simulation results. For continuous-time transmissions, CSMA with receiver sensing (which involves adding a feedback channel to the conventional CSMA protocol) is shown to yield the best performance. Moreover, the sensing threshold of CSMA is optimized. It is shown that introducing sensing for lower densities (i.e., in sparse networks) is not beneficial, while for higher densities (i.e., in dense networks), using an optimized sensing threshold provides significant gain." ] }
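The selection rule described above (uniform marks, retain a node only if no node within the contention radius carries a smaller mark) is the classical Matern type-II thinning, sketched here on a 2D window. The intensity, window size, and contention radius are arbitrary illustration values, and the letter's actual MMP extends this construction to 3D.

```python
import math, random

def ppp_2d(lam, size):
    # Homogeneous PPP on [0, size]^2: Poisson count, then uniform points.
    mu, k, p = lam * size * size, 0, 1.0
    L = math.exp(-mu)
    while p > L:
        k += 1
        p *= random.random()
    return [(random.uniform(0, size), random.uniform(0, size))
            for _ in range(k - 1)]

def matern_ii(points, r):
    # Each node draws a uniform backoff mark; it is retained iff no node
    # within the contention radius r has a smaller mark.
    marked = [(random.random(), p) for p in points]
    kept = []
    for m, (x, y) in marked:
        if all(m < m2 or (x - x2) ** 2 + (y - y2) ** 2 > r * r
               for m2, (x2, y2) in marked if (x2, y2) != (x, y)):
            kept.append((x, y))
    return kept

random.seed(2)
pts = ppp_2d(0.5, 10.0)
kept = matern_ii(pts, 1.0)
print(f"{len(pts)} contenders, {len(kept)} granted access")
# By construction, no two retained nodes are within the contention radius.
```

Note that elimination is checked against all marked nodes, not only retained ones, which is precisely why this process tends to underestimate the density of simultaneous transmitters, as discussed in @cite_19 .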
1509.03374
2232794761
QoS-aware networking applications such as real-time streaming and video surveillance systems require a nearly fixed average end-to-end delay over long periods to communicate efficiently, although they may tolerate some delay variation over short periods. This variability exhibits complex dynamics that make rate control of such applications a formidable task. This paper addresses rate allocation for heterogeneous QoS-aware applications that preserves the long-term end-to-end delay constraint while, similar to Dynamic Network Utility Maximization (DNUM), striving to achieve the maximum network utility aggregated over a fixed time interval. Since our system model allows the QoS requirements of sources to vary over time, we incorporate a novel time-coupling constraint that accounts for the delay-sensitivity of sources, such that a certain end-to-end average delay for each source over a pre-specified time interval is satisfied. We propose the DA-DNUM algorithm, a dual-based solution, which allocates source rates for the next time interval in a distributed fashion, given the knowledge of network parameters in advance. Through numerical experiments, we show that DA-DNUM attains higher average link utilization and a wider range of feasible scenarios in comparison with the best, to our knowledge, rate control schemes that can guarantee such constraints on delay.
In recent years, many studies have employed the NUM framework to propose efficient protocols and algorithms for network applications under different types of traffic, assumptions, and constraints (see @cite_13 and the references therein). In particular, by extending the basic single-period version of the NUM framework, a number of studies have incorporated end-to-end delay @cite_3 @cite_12 @cite_6 @cite_8 @cite_14 @cite_0 @cite_5 @cite_9 to address the requirements of delay-sensitive traffic and applications. In these works, end-to-end delay is either included in the objective function of the NUM problem (e.g., @cite_12 @cite_5 @cite_9 @cite_18 ) or imposed as a constraint on the underlying optimization problem (e.g., @cite_3 @cite_8 @cite_14 ).
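The dual-based machinery these schemes share can be illustrated with a bare-bones NUM example: log utilities, a per-link congestion price updated by a subgradient step, and sources reacting to the total price along their path. This is generic Kelly-style rate control on an invented two-link topology; DA-DNUM's delay-coupling constraints are deliberately omitted.

```python
# Two sources share link A (capacity 1); source 1 also traverses
# link B (capacity 2). Log utilities give the closed-form source
# response x_s = 1 / (sum of prices on the source's path).

routes = {1: ["A", "B"], 2: ["A"]}       # links used by each source
cap = {"A": 1.0, "B": 2.0}
price = {"A": 0.1, "B": 0.1}
step = 0.01                              # subgradient step size

for _ in range(5000):
    # Source update: maximize log(x) - x * (path price).
    x = {s: 1.0 / sum(price[l] for l in routes[s]) for s in routes}
    # Link (dual) update: raise the price if demand exceeds capacity.
    for l in cap:
        load = sum(x[s] for s in routes if l in routes[s])
        price[l] = max(1e-6, price[l] + step * (load - cap[l]))

print({s: round(x[s], 2) for s in x})    # both rates near the fair 0.5
```

At the optimum, link A is the bottleneck (its price converges to 2, so each source gets about 0.5), while link B's price decays to its floor. The delay-aware variants surveyed above augment this same price-and-react loop with delay prices or time-coupled delay constraints.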
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_8", "@cite_9", "@cite_3", "@cite_6", "@cite_0", "@cite_5", "@cite_13", "@cite_12" ], "mid": [ "2130339513", "2075343554", "2054629214", "", "2097878592", "2951024449", "2171028981", "2077252365", "2158893758", "2018446679" ], "abstract": [ "TCP IP can be interpreted as a distributed primal-dual algorithm to maximize aggregate utility over source rates. It has recently been shown that an equilibrium of TCP IP, if it exists, maximizes the same delay-insensitive utility over both source rates and routes, provided pure congestion prices are used as link costs in the shortest-path calculation of IP. In practice, however, pure dynamic routing is never used and link costs are weighted sums of both static as well as dynamic components. In this paper, we introduce delay-sensitive utility functions and identify a class of utility functions that such a TCP IP equilibrium optimizes. We exhibit some counter-intuitive properties that any class of delay-sensitive utility functions optimized by TCP IP necessarily possess. We prove a sufficient condition for global stability of routing updates for general networks. We construct example networks that defy conventional wisdom on the effect of link cost parameters on network stability and utility.", "This paper investigates the optimal rate allocation problem with end-to-end delay constraints in multi-hop wireless networks. We introduce Virtual Link Capacity Margin (VLCM), which is the gap between the schedulable link capacity and the maximum allowable flow rate over a link, for link delay control. We formulate the problem as a utility maximization framework with two sets of constraints: 1) capacity and schedulability constraints and 2) end-to-end delay constraints. 
By dual decomposition of the original optimization problem, we present a control algorithm that jointly tunes the flow rates and VLCMs, through a double-price scheme derived with regard to the constraints: the link congestion price reflecting the traffic load of a link, and the flow delay price reflecting the margin between the average packet delay and the delay requirement of a flow. We prove that the algorithm converges to a global optimum where the aggregate network utility defined over the flow rate set is maximized, while the delay constraints are satisfied. A key feature of our algorithm is it does not rely on a specific traffic model. The algorithm is implemented distributedly via joint wireless link scheduling and congestion control. Simulation results show that our algorithm outperforms heuristic rate allocation algorithms while satisfying the end-to-end delay constraints.", "Allocating limited resources such as bandwidth and power in a multi-hop wireless network can be formulated as a Network Utility Maximization (NUM) problem. In this approach, both transmitting source nodes and relaying link nodes exchange information allowing for the NUM problem to be solved in an iterative distributed manner. Some previous NUM formulations of wireless network problems have considered the parameters of data rate, reliability, and transmitter powers either in the source utility function which measures an application's performance or as constraints. However, delay is also an important factor in the performance of many applications. In this paper, we consider an additional constraint based on the average queueing delay requirements of the sources. In particular, we examine an augmented NUM formulation in which rate and power control in a wireless network are balanced to achieve bounded average queueing delays for sources. With the additional delay constraints, the augmented NUM problem is non-convex. 
Therefore, we present a change of variable to transform the problem to a convex problem and we develop a solution which results in a distributed rate and power control algorithm tailored to achieving bounded average queueing delays. Simulation results demonstrate the efficacy of the distributed algorithm.", "", "A fluid model of multi-class flows with priority packet scheduling is considered for controlling the flow rates, their end-to-end delays and their packet losses. We derive a globally stable distributed rate and delay combined control when no information time lags are present. By properly sizing the router buffers, the stable rates attain the end-to- end delay requirements without any packet loss. By further enhancing the network with bandwidth reservation and admission control, we also show that minimum rate is guaranteed. The stability properties of the discrete time version of our control are also derived when no information time lags are present. The stability in the presence of information time lags is studied numerically by computing the delay and rate trajectories for a real test-bed network.", "This paper studies the problem of utility maximization for clients with delay based QoS requirements in wireless networks. We adopt a model used in a previous work that characterizes the QoS requirements of clients by their delay constraints, channel reliabilities, and delivery ratio requirements. In this work, we assume that the utility of a client is a function of the delivery ratio it obtains. We treat the delivery ratio for a client as a tunable parameter by the access point (AP), instead of a given value as in the previous work. We then study how the AP should assign delivery ratios to clients so that the total utility of all clients is maximized. We apply the techniques introduced in two previous papers to decompose the utility maximization problem into two simpler problems, a CLIENT problem and an ACCESS-POINT problem. 
We show that this decomposition actually describes a bidding game, where clients bid for the service time from the AP. We prove that although all clients behave selfishly in this game, the resulting equilibrium point of the game maximizes the total utility. In addition, we also establish an efficient scheduling policy for the AP to reach the optimal point of the ACCESS-POINT problem. We prove that the policy not only approaches the optimal point but also achieves some forms of fairness among clients. Finally, simulation results show that our proposed policy does achieve higher utility than all other compared policies.", "We investigate the problem of designing delay-aware joint flow control, routing, and scheduling algorithms in general multi-hop networks for maximizing network utilization. Since the end-to-end delay performance has a complex dependence on the high-order statistics of cross-layer algorithms, earlier optimization-based design methodologies that optimize the long term network utilization are not immediately well-suited for delay-aware design. This motivates us in this work to develop a novel design framework and alternative methods that take advantage of several unexploited design choices in the routing and the scheduling strategy spaces. In particular, we reveal and exploit a crucial characteristic of back pressure-type controllers that enables us to develop a novel link rate allocation strategy that not only optimizes long-term network utilization, but also yields loop free multi-path routes between each source-destination pair. Moreover, we propose a regulated scheduling strategy, based on a token-based service discipline, for shaping the per-hop delay distribution to obtain highly desirable end-to-end delay performance. We establish that our joint flow control, routing, and scheduling algorithm achieves loop-free routes and optimal network utilization. 
Our extensive numerical studies support our theoretical results, and further show that our joint design leads to substantial end-to-end delay performance improvements in multi-hop networks compared to earlier solutions.", "Wired and wireless data networks have witnessed a rapid proliferation of multimedia applications such as Internet video streaming, video conferencing, etc. A desirable key feature for multimedia transmission over multiuser environments is the ability of adapting rate and quality of video stream to different QoS conditions. The most efficient approach to address the scalability of multimedia applications is to encode video streams in compliance with the scalable video coding (SVC) standard, which is proposed as an extension to H.264 AVC standard. This paper addresses the rate control and bandwidth sharing for multimedia applications that are relying on SVC-encoded video streams. In previous studies, idealistic utility functions, mainly in the form of a staircase function, were used to cast the rate allocation for SVC-encoded streams as an optimization problem. These utility functions make the optimization problem non-convex and even non-differentiable, for which achieving optimality through dual-based approaches proves quite challenging. Towards this goal, we introduce an accurate analytical model of the utility function for SVC-encoded video streams. Using the abovementioned utility model and adopting the utility-proportional optimization approach, we come up with a convex formulation and propose a dual-based distributed algorithm for rate allocation of SVC-encoded streams. To the best of our knowledge, this is the first work that focuses on an accurate utility function modeling for SVC-encoded streams. 
Simulation experiments show that the proposed algorithm is quite efficient in achieving the convergence towards the global optimality.", "Network protocols in layered architectures have historically been obtained on an ad hoc basis, and many of the recent cross-layer designs are also conducted through piecemeal approaches. Network protocol stacks may instead be holistically analyzed and systematically designed as distributed solutions to some global optimization problems. This paper presents a survey of the recent efforts towards a systematic understanding of layering as optimization decomposition, where the overall communication network is modeled by a generalized network utility maximization problem, each layer corresponds to a decomposed subproblem, and the interfaces among layers are quantified as functions of the optimization variables coordinating the subproblems. There can be many alternative decompositions, leading to a choice of different layering architectures. This paper surveys the current status of horizontal decomposition into distributed computation, and vertical decomposition into functional modules such as congestion control, routing, scheduling, random access, power control, and channel coding. Key messages and methods arising from many recent works are summarized, and open issues discussed. Through case studies, it is illustrated how layering as Optimization Decomposition provides a common language to think about modularization in the face of complex, networked interactions, a unifying, top-down approach to design protocol stacks, and a mathematical theory of network architectures", "We consider congestion control in a network with delay sensitive/insensitive traffic, modelled by adding explicit delay terms to the utility function measuring user's happiness on the Quality of Service (QoS).
A new Network Utility Maximization (NUM) problem is formulated and solved in a decentralized way via appropriate algorithms implemented at the users (primal) and/or links (dual). For the dual algorithm, delay-independent and delay-dependent stability conditions are derived when propagation delays are taken into account. A system with voice and data traffic is considered as an example and the properties of the congestion control algorithm are assessed." ] }
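Several of the abstracts above describe the same dual-decomposition recipe: every link maintains a congestion price, every source picks the rate that maximizes its utility minus the total price along its path, and the prices are updated by projected (sub)gradient ascent. A minimal runnable sketch of that recipe, on an invented 2-link/3-flow topology with log utilities (the topology, capacities, and step size are made up for illustration and do not come from any of the cited papers):

```python
# Toy NUM instance solved by dual decomposition: maximize sum_s log(x_s)
# subject to per-link capacity constraints. routes[s] lists the links
# traversed by flow s; one congestion price (dual variable) per link.
routes = {0: [0, 1], 1: [0], 2: [1]}
capacity = [1.0, 2.0]
price = [0.0, 0.0]
step = 0.01

for _ in range(20000):
    # Source subproblem: max_x log(x) - x * path_price has the closed
    # form x = 1 / path_price; cap the rate while prices are still ~0.
    x = {}
    for s, path in routes.items():
        path_price = sum(price[l] for l in path)
        x[s] = min(1.0 / max(path_price, 1e-6), 10.0)
    # Link subproblem: raise the price of an overloaded link, lower it
    # on an underused one, and keep prices non-negative.
    for l, c in enumerate(capacity):
        load = sum(x[s] for s in routes if l in routes[s])
        price[l] = max(price[l] + step * (load - c), 0.0)

print({s: round(r, 3) for s, r in x.items()})
```

With log utilities the per-source step is a one-line closed form, which is what makes the scheme cheap to run distributedly; any other strictly concave utility slots into the same loop with a different inner maximizer.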
1509.03374
2232794761
QoS-aware networking applications such as real-time streaming and video surveillance systems require a nearly fixed average end-to-end delay over long periods to communicate efficiently, although they may tolerate some delay variation over short periods. This variability exhibits complex dynamics that make rate control of such applications a formidable task. This paper addresses rate allocation for heterogeneous QoS-aware applications that preserves the long-term end-to-end delay constraint while, similar to Dynamic Network Utility Maximization (DNUM), striving to achieve the maximum network utility aggregated over a fixed time interval. Since our system model allows for capturing temporal dynamics in the QoS requirements of sources, we incorporate a novel time-coupling constraint that accounts for the delay-sensitivity of sources, ensuring that each source satisfies a certain end-to-end average delay over a pre-specified time interval. We propose the DA-DNUM algorithm, a dual-based solution that allocates source rates for the next time interval in a distributed fashion, given knowledge of the network parameters in advance. Through numerical experiments, we show that DA-DNUM achieves higher average link utilization and covers a wider range of feasible scenarios than the best rate control schemes we are aware of that can guarantee such delay constraints.
In @cite_12 , delay is incorporated into the objective function, where it acts as a penalty on the utility; the goal is therefore to simultaneously maximize the aggregate utility of all sources and reduce the end-to-end delays. Building on the delay-sensitive utility function introduced in @cite_17 , the authors of @cite_5 @cite_9 propose application-oriented rate allocation schemes based on an alternative utility definition. Neither approach, however, can provide any guarantee on delay, so both fail to serve QoS-aware applications with hard long-term average delay requirements.
{ "cite_N": [ "@cite_5", "@cite_9", "@cite_12", "@cite_17" ], "mid": [ "2077252365", "", "2018446679", "2156568423" ], "abstract": [ "Wired and wireless data networks have witnessed a rapid proliferation of multimedia applications such as Internet video streaming, video conferencing, etc. A desirable key feature for multimedia transmission over multiuser environments is the ability of adapting rate and quality of video stream to different QoS conditions. The most efficient approach to address the scalability of multimedia applications is to encode video streams in compliance with the scalable video coding (SVC) standard, which is proposed as an extension to H.264 AVC standard. This paper addresses the rate control and bandwidth sharing for multimedia applications that are relying on SVC-encoded video streams. In previous studies, idealistic utility functions, mainly in the form of a staircase function, were used to cast the rate allocation for SVC-encoded streams as an optimization problem. These utility functions make the optimization problem non-convex and even non-differentiable, for which achieving optimality through dual-based approaches proves quite challenging. Towards this goal, we introduce an accurate analytical model of the utility function for SVC-encoded video streams. Using the abovementioned utility model and adopting the utility-proportional optimization approach, we come up with a convex formulation and propose a dual-based distributed algorithm for rate allocation of SVC-encoded streams. To the best of our knowledge, this is the first work that focuses on an accurate utility function modeling for SVC-encoded streams. 
Simulation experiments show that the proposed algorithm is quite efficient in achieving the convergence towards the global optimality.", "", "We consider congestion control in a network with delay sensitive/insensitive traffic, modelled by adding explicit delay terms to the utility function measuring user's happiness on the Quality of Service (QoS). A new Network Utility Maximization (NUM) problem is formulated and solved in a decentralized way via appropriate algorithms implemented at the users (primal) and/or links (dual). For the dual algorithm, delay-independent and delay-dependent stability conditions are derived when propagation delays are taken into account. A system with voice and data traffic is considered as an example and the properties of the congestion control algorithm are assessed.", "The Internet has been a startling and dramatic success. Originally designed to link together a small group of researchers, the Internet is now used by many millions of people. However, multimedia applications, with their novel traffic characteristics and service requirements, pose an interesting challenge to the technical foundations of the Internet. We address some of the fundamental architectural design issues facing the future Internet. In particular, we discuss whether the Internet should adopt a new service model, how this service model should be invoked, and whether this service model should include admission control. These architectural issues are discussed in a nonrigorous manner, through the use of a utility function formulation and some simple models. While we do advocate some design choices over others, the main purpose here is to provide a framework for discussing the various architectural alternatives." ] }
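The first abstract in this chunk prices its two kinds of constraints separately: a link congestion price for capacity and a flow delay price for the average-delay requirement. The toy script below sketches that double-price structure on a single bottleneck link shared by two log-utility flows, using an assumed M/M/1 delay model D(y) = 1/(c - y) and invented parameters throughout; it illustrates the pattern only and is not a reconstruction of the cited algorithm (or of DA-DNUM):

```python
# Double-price sketch: congestion price lam for the capacity constraint
# y <= c, delay price mu for the M/M/1 delay constraint 1/(c - y) <= DMAX.
c, DMAX = 2.0, 2.0
x = [0.1, 0.1]                # flow rates
lam, mu = 0.0, 0.0            # congestion price, delay price
GAMMA, STEP = 0.05, 0.005     # primal damping, dual step size

for _ in range(50000):
    y = min(sum(x), c - 0.05)          # keep the delay model well-defined
    # Effective price seen by a source: congestion price plus delay price
    # weighted by the marginal delay d/dy [1/(c - y)] = 1/(c - y)**2.
    p = lam + mu / (c - y) ** 2
    for s in range(2):
        target = min(1.0 / max(p, 1e-6), 0.95)   # argmax of log(x) - p*x
        x[s] += GAMMA * (target - x[s])          # damped move toward it
    lam = max(lam + STEP * (sum(x) - c), 0.0)          # capacity price
    mu = max(mu + STEP * (1.0 / (c - y) - DMAX), 0.0)  # delay price

print([round(r, 3) for r in x], round(1.0 / (c - sum(x)), 3))
```

At the fixed point the delay constraint is tight (total load settles at c - 1/DMAX = 1.5) while the capacity constraint is slack, so the delay price alone throttles both flows and the congestion price stays at zero; this mirrors the claim above that the delay requirement is met while utility is maximized over the rates that remain feasible.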