Columns:
aid: string (9-15 chars)
mid: string (7-10 chars)
abstract: string (78-2.56k chars)
related_work: string (92-1.77k chars)
ref_abstract: dict
1503.01514
2949138836
Internet services are traditionally priced at flat rates; however, many Internet service providers (ISPs) have recently shifted towards two-part tariffs where a data cap is imposed to restrain data demand from heavy users. Although the two-part tariff could generally increase the revenue for ISPs and has been supported by the US FCC, the role of data cap and its optimal pricing structures are not well understood. In this article, we study the impact of data cap on the optimal two-part pricing schemes for congestion-prone service markets. We model users' demand and preferences over pricing and congestion alternatives and derive the market share and congestion of service providers under a market equilibrium. Based on the equilibrium model, we characterize the two-part structures of the revenue- and welfare-optimal pricing schemes. Our results reveal that 1) the data cap provides a mechanism for ISPs to transition from the flat-rate to pay-as-you-go type of schemes, 2) both the revenue and welfare objectives of the ISP will drive the optimal pricing towards usage-based schemes with diminishing data caps, and 3) the welfare-optimal tariff comprises lower fees than the revenue-optimal counterpart, suggesting that regulators might want to promote usage-based pricing but regulate the lump-sum and per-unit fees.
More generally, several works have studied usage-based Internet pricing. Hande @cite_12 characterized the economic loss due to ISPs' inability or unwillingness to price broadband access based on the time of use. Li @cite_15 studied optimal price differentiation under complete and incomplete information. Basar @cite_27 devised a revenue-maximizing pricing scheme under varying user scale and network capacity. Shen @cite_6 investigated optimal nonlinear pricing policy design for a monopolistic service provider and showed that the introduction of nonlinear pricing provides a large profit improvement over linear pricing. In this paper, we focus on two-part pricing. Besides optimizing revenue from the provider's perspective, we also look into the welfare-optimal solution, from which we derive regulatory implications.
{ "cite_N": [ "@cite_27", "@cite_15", "@cite_6", "@cite_12" ], "mid": [ "2120126835", "2973000641", "2100853709", "1964897391" ], "abstract": [ "We consider a network where each user is charged a fixed price per unit of bandwidth used, but where there is no congestion-dependent pricing. However, the transmission rate of each user is assumed to be a function of network congestion (like TCP), and the price per unit bandwidth. We are interested in answering the following question: how should the network choose the price to maximize its overall revenue? To obtain a tractable solution, we consider a single link accessed by many users where the capacity is increased in proportion to the number of users. We show the following result: as the number of users increases, the optimal price per unit bandwidth charged by the service provider may increase or decrease depending upon the bandwidth of the link. However, for all values of the link capacity, the service provider's revenue per unit bandwidth increases and the overall performance of each user (measured in terms of a function of its throughput, the network congestion and the cost incurred by the user for bandwidth usage) improves. Since the revenue per unit bandwidth increases, it provides an incentive for the service provider to increase the available bandwidth in proportion to the number of users.", "", "In the communication network pricing literature, it is the linear pricing schemes that have been largely adopted as the means of controlling network usage or generating profits for network service providers. This paper extends the framework to nonlinear pricing and investigates optimal nonlinear pricing policy design for a monopolistic service provider. The problem is formulated as an incentive-design problem, and incentive (pricing) policies are obtained for a many-users regime, which enable the service provider to approach arbitrarily close to Pareto-optimal solutions.
Under the assumption that the service provider knows the true user types, analytical and numerical results indicate a profit improvement exceeding 38% over linear pricing by the introduction of nonlinear pricing. We also consider the scenario where the service provider has incomplete information on user types. A comparative study of the results for complete information and incomplete information is carried out as well, with numerical results pointing to a 25%-40% loss of profit by the service provider due to incompleteness of information on the user types.", "This paper investigates pricing of Internet connectivity services in the context of a monopoly ISP selling broadband access to consumers. We first study the optimal combination of flat-rate and usage-based access price components for maximization of ISP revenue, subject to a capacity constraint on the data-rate demand. Next, we consider time-varying consumer utilities for broadband data rates that can result in uneven demand for data-rate over time. Practical considerations limit the viability of altering prices over time to smoothen out the demanded data-rate. Despite such constraints on pricing, our analysis reveals that the ISP can retain the revenue by setting a low usage fee and dropping packets of consumer demanded data that exceed capacity. Regulatory attention on ISP congestion management discourages such \"technical\" practices and promotes economics-based approaches. We characterize the loss in ISP revenue from an economics-based approach. Regulatory requirements further impose limitations on price discrimination across consumers, and we derive the revenue loss to the ISP from such restrictions. We then develop partial recovery of revenue loss through non-linear pricing that does not explicitly discriminate across consumers.
While determination of the access price is ultimately based on additional considerations beyond the scope of this paper, the analysis here can serve as a benchmark to structure access price in broadband access networks." ] }
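The two-part tariff structure analyzed in the row above (a lump-sum fee, a data cap, and a per-unit overage fee) can be made concrete with a minimal billing sketch; the function name and all numbers are illustrative assumptions, not taken from the paper:

```python
def two_part_charge(usage, lump_sum, data_cap, per_unit_fee):
    """Two-part tariff: the lump-sum fee covers usage up to the data
    cap; any usage beyond the cap is billed at the per-unit fee."""
    overage = max(0.0, usage - data_cap)
    return lump_sum + per_unit_fee * overage

# Shrinking the data cap moves the scheme from flat-rate toward
# pay-as-you-go, the transition the paper attributes to the cap:
print(two_part_charge(50, 30, float("inf"), 2))  # flat rate: 30
print(two_part_charge(50, 30, 0, 2))             # pure pay-as-you-go: 130
```

With an infinite cap the charge reduces to the flat lump sum; with a zero cap every unit is billed, i.e. the usage-based limit the paper studies.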
From a modeling perspective, Chander @cite_10 , Reitman @cite_16 , Ma @cite_14 and our work all consider service markets with congestion externalities. Chander @cite_10 studied the quality differentiation strategy of a monopoly provider, and Reitman @cite_16 studied price competition among multiple providers. Both modeled the market as a continuum of non-atomic users, each characterized by a quality-sensitivity parameter. However, this one-dimensional model only applies to flat-rate pricing, and the distribution of users was often assumed to be uniform for analytical tractability. To faithfully characterize the utility of users under two-part tiered pricing, we establish a novel two-dimensional model that describes users by their data demand and their valuation of data usage. Furthermore, we analyze a class of distributions, including the uniform distribution, to understand how users' demand and valuation shape the providers' optimal pricing structures. Ma @cite_14 also considered a two-dimensional user model; however, the author focused only on pay-as-you-go pricing, a special case of the two-part tariff structure studied in this paper.
{ "cite_N": [ "@cite_14", "@cite_16", "@cite_10" ], "mid": [ "2087607694", "2003845297", "2135483574" ], "abstract": [ "As Internet traffic grows exponentially due to the pervasive Internet accesses via mobile devices and increasing adoptions of cloud-based applications, broadband providers start to shift from flat-rate to usage-based pricing, which has gained support from regulators such as the FCC. We consider generic congestion-prone network services, including cloud services, and study the pay-as-you-go type of usage-based pricing of service providers under market competition. Based on a novel model that captures users' preferences over usage price and congestion alternatives, we derive the induced congestion and market share of the service providers under a market equilibrium and design algorithms to calculate them. By analyzing different market structures, we reveal how users' value on usage and sensitivity to congestion influence the optimal price, revenue, and competition of service providers, as well as the social welfare. We also obtain the conditions under which monopolistic providers have strong incentives to implement service differentiation via Paris Metro Pricing and whether regulators should encourage such practices.", "Firms selling a product with congestion externalities to a heterogeneous population of customers have an incentive to offer differentiated levels of quality. In a price competitive market, differentiation arises endogenously through the prices chosen by firms. In equilibrium, firms offer a range of prices, which induce an efficiency improving range of quality levels and allow customers to self-select their preferred price-quality combinations. Additional results suggest that, with many firms in the market, the choice of prices dominates the choice of service capacities in determining congestion levels. 
Copyright 1991 by Blackwell Publishing Ltd.", "The paper develops a model of product differentiation in which the quality of a product may be negatively affected by the number of consumers buying it, as it is the case for any good affected by congestion. It is shown that for any positive degree of heterogeneity among the consumers, a monopolist will always find it more profitable to differentiate, i.e., to sell more than one quality of the product at different prices." ] }
1503.01170
2157077021
We study the effect of addition on the Hamming weight of a positive integer. Consider the first @math positive integers, and fix an @math among them. We show that if the binary representation of @math consists of @math blocks of zeros and ones, then addition by @math causes a constant fraction of low Hamming weight integers to become high Hamming weight integers. This result has applications in complexity theory to the hardness of computing powering maps using bounded-depth arithmetic circuits over @math . Our result implies that powering by @math composed of many blocks requires exponential-size, bounded-depth arithmetic circuits over @math .
Kopparty gave a different condition for when @math has the @math -shifting property: its binary representation consists mostly of a repeating constant-length string that is not all zeros or ones @cite_3 . Note that any integer expressible as @math , where @math , @math is odd, and @math , has binary representation of this form. As a consequence, taking @math -th roots and computing @math -th residue symbols cannot be done with polynomial-size @math circuits. Our main result generalizes Kopparty's condition, as the periodic strings form a small subset of the strings with @math blocks. Beck and Li showed that the @math -th residue map is hard to compute in @math by using the concept of algebraic immunity @cite_5 . It is worth noting that their method does not say anything about the complexity of the @math -th root map in @math . So in this regard, there is something to be gained by analyzing the @math -shifting property condition. A more detailed history of the complexity of arithmetic operations using low-depth circuits can be found in @cite_3 .
{ "cite_N": [ "@cite_5", "@cite_3" ], "mid": [ "30524954", "2159268377" ], "abstract": [ "In this paper, we prove tight lower bounds on the smallest degree of a nonzero polynomial in the ideal generated by @math or @math in the polynomial ring @math , @math are coprime, which is called over @math . The immunity of @math is lower bounded by @math , which is achievable when @math is a multiple of @math ; the immunity of @math is exactly @math for every @math and @math . Our result improves the previous bound @math by Green. We observe how immunity over @math is related to @math circuit lower bound. For example, if the immunity of @math over @math is lower bounded by @math , and @math , then @math requires @math circuit of exponential size to compute.", "We study the complexity of computing the kth-power of an element of F2n by constant depth arithmetic circuits over F2 (also known as ACP). Our study encompasses the complexity of basic arithmetic operations such as computing cube-root and computing cubic-residuosity of elements of F2n. Our main result is that these problems require exponential size circuits. We also derive strong average-case versions of these results. For example, we show that no subexponential-size, constant-depth, arithmetic circuit over F2 can correctly compute the cubic residue symbol for more than 1 3 + o(1) fraction of the elements of F2n. As a corollary, we deduce a character sum bound showing that the cubic residue character over F2n is uncorrelated with all degree-d n-variate F2 polynomials (viewed as functions over F2n in a natural way), provided d l ne for some universal e > 0. Classical methods (based on van der Corput differencing and the Weil bounds) show this only for d l log(n). Our proof revisits the classical Razborov-Smolensky method for circuit lower bounds, and executes an analogue of it in the land of univariate polynomials over F2n. The tools we use come from both F2n and F2n. 
In recent years, this interplay between F2n and F2n has played an important role in many results in pseudorandomness, property testing and coding theory." ] }
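The notions of Hamming weight and of the number of blocks in a binary representation, which the result above turns on, can be sketched directly (the helper names are ours, not from the paper):

```python
def hamming_weight(m):
    """Number of ones in the binary representation of m."""
    return bin(m).count("1")

def num_blocks(m):
    """Number of maximal runs (blocks) of equal bits in bin(m);
    e.g. 52 = 0b110100 splits into 11 | 0 | 1 | 00, i.e. 4 blocks."""
    bits = bin(m)[2:]
    return 1 + sum(bits[i] != bits[i - 1] for i in range(1, len(bits)))

print(hamming_weight(52))  # 3
print(num_blocks(52))      # 4
```

A periodic string such as 101010... maximizes the block count, which is why the periodic condition of Kopparty is a special case of the many-blocks condition.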
1503.01205
2115084976
In a diffusion-based molecular communication network, transmitters and receivers communicate by using signalling molecules (or ligands) in a fluid medium. This paper assumes that the transmitter uses different chemical reactions to generate different emission patterns of signalling molecules to represent different transmission symbols, and the receiver consists of receptors. When the signalling molecules arrive at the receiver, they may react with the receptors to form ligand-receptor complexes. Our goal is to study the demodulation in this setup assuming that the transmitter and receiver are synchronised. We derive an optimal demodulator using the continuous history of the number of complexes at the receiver as the input to the demodulator. We do that by first deriving a communication model which includes the chemical reactions in the transmitter, diffusion in the transmission medium and the ligand-receptor process in the receiver. This model, which takes the form of a continuous-time Markov process, captures the noise in the receiver signal due to the stochastic nature of chemical reactions and diffusion. We then adopt a maximum a posteriori framework and use Bayesian filtering to derive the optimal demodulator. We use numerical examples to illustrate the properties of this optimal demodulator.
There is growing interest in understanding molecular communication from a communication-engineering point of view. For recent surveys of the field, see @cite_51 @cite_22 @cite_25 @cite_35 @cite_32 . We divide the discussion under these headings: transmitters, receivers, models, and others.
{ "cite_N": [ "@cite_35", "@cite_22", "@cite_32", "@cite_51", "@cite_25" ], "mid": [ "2333907923", "1997193784", "2005822392", "2116194016", "2018036523" ], "abstract": [ "Molecular communication is an emerging communication paradigm for biological nanomachines. It allows biological nanomachines to communicate through exchanging molecules in an aqueous environment and to perform collaborative tasks through integrating functionalities of individual biological nanomachines. This paper develops the layered architecture of molecular communication and describes research issues that molecular communication faces at each layer of the architecture. Specifically, this paper applies a layered architecture approach, traditionally used in communication networks, to molecular communication, decomposes complex molecular communication functionality into a set of manageable layers, identifies basic functionalities of each layer, and develops a descriptive model consisting of key components of the layer for each layer. This paper also discusses open research issues that need to be addressed at each layer. In addition, this paper provides an example design of targeted drug delivery, a nanomedical application, to illustrate how the layered architecture helps design an application of molecular communication. The primary contribution of this paper is to provide an in-depth architectural view of molecular communication. Establishing a layered architecture of molecular communication helps organize various research issues and design concerns into layers that are relatively independent of each other, and thus accelerates research in each layer and facilitates the design and development of applications of molecular communication.", "Abstract Molecular communication uses molecules (i.e., biochemical signals) as an information medium and allows biologically and artificially created nano- or microscale entities to communicate over a short distance. 
It is a new communication paradigm; it is different from the traditional communication paradigm, which uses electromagnetic waves (i.e., electronic and optical signals) as an information medium. Key research challenges in molecular communication include design of system components (i.e., a sender, a molecular propagation system, a receiver, and a molecular communication interface) and mathematical modeling of each system component as well as entire systems. We review all research activities in molecular communication to date, from its origin to recent experimental studies and theoretical approaches for each system component. As a model molecular communication system, we describe an integrated system that combines a molecular communication interface (using a lipid vesicle embedded with channel-forming proteins), a molecular propagation system (using microtubule motility on kinesin molecular motors and DNA hybridization), and a sender receiver (using giant lipid vesicles embedded with gemini-peptide lipids). We also present potential applications and the future outlook of molecular communication.", "This article presents a branch of research where the use of molecules to encode and transmit information among nanoscale devices (nanomachines) is investigated as a bio-inspired viable solution to realize nano-communication networks. Unlike traditional technologies, molecular communication is a radically new paradigm, which demands novel solutions, including the identification of naturally existing molecular communication mechanisms, the establishment of the foundations of a molecular information theory, or the development of architectures and networking protocols for nanomachines. The tight connection of this cutting edge engineering research field with biology will ultimately enable both the bio-inspired study of molecular nanonetwork architectures and their realization with tools already available in nature. 
The testbed described in this article, which is based on a microfluidic device hosting intercommunicating populations of genetically engineered bacteria, is a clear example of this research direction.", "Nanotechnologies promise new solutions for several applications in biomedical, industrial and military fields. At nano-scale, a nano-machine can be considered as the most basic functional unit. Nano-machines are tiny components consisting of an arranged set of molecules, which are able to perform very simple tasks. Nanonetworks, i.e., the interconnection of nano-machines, are expected to expand the capabilities of single nano-machines by allowing them to cooperate and share information. Traditional communication technologies are not suitable for nanonetworks mainly due to the size and power consumption of transceivers, receivers and other components. The use of molecules, instead of electromagnetic or acoustic waves, to encode and transmit the information represents a new communication paradigm that demands novel solutions such as molecular transceivers, channel models or protocols for nanonetworks. In this paper, first the state-of-the-art in nano-machines, including architectural aspects, expected features of future nano-machines, and current developments are presented for a better understanding of nanonetwork scenarios. Moreover, nanonetworks features and components are explained and compared with traditional communication networks. Also some interesting and important applications for nanonetworks are highlighted to motivate the communication needs between the nano-machines. Furthermore, nanonetworks for short-range communication based on calcium signaling and molecular motors as well as for long-range communication based on pheromones are explained in detail. 
Finally, open research challenges, such as the development of network components, molecular communication theory, and the development of new architectures and protocols, are presented which need to be solved in order to pave the way for the development and deployment of nanonetworks within the next couple of decades.", "The ability of engineered biological nanomachines to communicate with biological systems at the molecular level is anticipated to enable future applications such as monitoring the condition of a human body, regenerating biological tissues and organs, and interfacing artificial devices with neural systems. From the viewpoint of communication theory and engineering, molecular communication is proposed as a new paradigm for engineered biological nanomachines to communicate with the natural biological nanomachines which form a biological system. Distinct from the current telecommunication paradigm, molecular communication uses molecules as the carriers of information; sender biological nanomachines encode information on molecules and release the molecules in the environment, the molecules then propagate in the environment to receiver biological nanomachines, and the receiver biological nanomachines biochemically react with the molecules to decode information. Current molecular communication research is limited to small-scale networks of several biological nanomachines. Key challenges to bridge the gap between current research and practical applications include developing robust and scalable techniques to create a functional network from a large number of biological nanomachines. Developing networking mechanisms and communication protocols is anticipated to introduce new avenues into integrating engineered and natural biological nanomachines into a single networked system. 
In this paper, we present the state-of-the-art in the area of molecular communication by discussing its architecture, features, applications, design, engineering, and physical modeling. We then discuss challenges and opportunities in developing networking mechanisms and communication protocols to create a network from a large number of bio-nanomachines for future applications." ] }
Transmitters. A number of different types of transmission signals have been considered in the molecular communication literature. The papers @cite_28 @cite_2 assume that the transmitter releases the signalling molecules in a burst, which can be modelled as either an impulse or a pulse of finite duration. A recent work @cite_26 assumes that the transmitter releases the molecules according to a Poisson process. In this paper, we instead assume that the transmitter uses different sets of chemical reactions to generate different transmission symbols, and we use continuous-time Markov processes (CTMPs) to model these transmission symbols. Since a Poisson process can also be modelled by a CTMP, the transmission process in this paper is more general than that of @cite_26 . Our CTMP model can also handle an impulsive input through an appropriate initial condition. To our knowledge, a CTMP has not previously been used as an end-to-end model that includes the transmitter, the medium and the receiver.
{ "cite_N": [ "@cite_28", "@cite_26", "@cite_2" ], "mid": [ "1984222522", "2963922654", "2093287389" ], "abstract": [ "Abstract In this paper, we study a molecular communication system operating over a moving propagation medium. Using the convection–diffusion equation, we present the first separate models for the channel response and the corrupting noise. The flow-based molecular channel is shown to be linear but time-varying and the noise corrupting the signal is additive white Gaussian with a signal dependent magnitude. By modelling the ligand–receptor binding process, it is shown that the molecular communication reception process in this channel has a low-pass characteristic that colours the additive noise. A whitening filter is proposed to compensate for this low-pass characteristic. Simulation results demonstrate the benefit of the whitening filter and the effect of medium motion on bit error rate.", "In this paper, a diffusion-based molecular communication channel between two nano-machines is considered. The effect of the amount of memory on performance is characterized, and a simple memory-limited decoder is proposed; its performance is shown to be close to that of the best possible decoder (without any restrictions on the computational complexity or its functional form), using genie-aided upper bounds. This effect is adapted to the case of Molecular Concentration Shift Keying; it is shown that a four-bit memory achieves nearly the same performance as infinite memory for all of the examples considered. A general class of threshold decoders is considered and shown to be suboptimal for a Poisson channel with memory, unless the SNR is higher than a computed threshold. During each symbol duration (symbol period), the probability that a released molecule hits the receiver changes over the duration of the period; thus, we also consider a receiver that samples at a rate higher than the transmission rate (a multi-read system). 
A multi-read system improves performance. The associated decision rule for this system is shown to be a weighted sum of the samples during each symbol interval. The performance of the system is analyzed using the saddle point approximation. The best performance gains are achieved for an oversampling factor of three for the examples considered.", "Abstract In this study, nanoscale communication networks have been investigated in the context of binary concentration-encoded unicast molecular communication suitable for numerous emerging applications, for example in healthcare and nanobiomedicine. The main focus of the paper has been given to the spatiotemporal distribution of signal strength and modulation schemes suitable for short-range, medium-range, and long-range molecular communication between two communicating nanomachines in a nanonetwork. This paper has principally focused on bio-inspired transmission techniques for concentration-encoded molecular communication systems. Spatiotemporal distributions of a carrier signal in the form of the concentration of diffused molecules over the molecular propagation channel and diffusion-dependent communication ranges have been explained for various scenarios. Finally, the performance analysis of modulation schemes has been evaluated in the form of the steady-state loss of amplitude of the received concentration signals and its dependence on the transmitter–receiver distance." ] }
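The continuous-time Markov process view of the ligand-receptor process can be illustrated with a standard Gillespie-type stochastic simulation. This is a generic sketch of a binding/unbinding CTMP, not the authors' end-to-end model; the function name, rate constants and molecule counts are made-up values:

```python
import random

def simulate_complexes(ligands, receptors, k_on, k_off, t_end, seed=0):
    """Simulate the number of ligand-receptor complexes as a CTMP:
    binding fires at rate k_on*L*R, unbinding at rate k_off*C.
    Returns the piecewise-constant trajectory as a list of (t, C)."""
    rng = random.Random(seed)
    L, R, C, t = ligands, receptors, 0, 0.0
    traj = [(t, C)]
    while t < t_end:
        a_bind, a_unbind = k_on * L * R, k_off * C
        a_total = a_bind + a_unbind
        if a_total == 0:                     # no reaction can fire
            break
        t += rng.expovariate(a_total)        # exponential waiting time
        if rng.random() < a_bind / a_total:  # binding event
            L, R, C = L - 1, R - 1, C + 1
        else:                                # unbinding event
            L, R, C = L + 1, R + 1, C - 1
        traj.append((t, C))
    return traj

traj = simulate_complexes(ligands=20, receptors=5, k_on=0.1, k_off=0.5, t_end=1.0)
```

The complex-count history produced by such a simulation is, in spirit, the continuous-time receiver signal that the demodulator in the paper observes.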
Receivers. Demodulation methods for diffusion-based molecular communication have been studied in @cite_14 @cite_56 . Both papers use the MAP framework with discrete-time samples of the number of output molecules as the input to the demodulator. In this paper, we instead consider demodulation using the continuous-time history of the number of complexes. Demodulation from the ligand-receptor signal has also been considered in @cite_28 . The key difference is that @cite_28 uses a linear approximation of the ligand-receptor process, whereas we use a non-linear reaction rate.
{ "cite_N": [ "@cite_28", "@cite_14", "@cite_56" ], "mid": [ "1984222522", "2031515082", "1991545742" ], "abstract": [ "Abstract In this paper, we study a molecular communication system operating over a moving propagation medium. Using the convection–diffusion equation, we present the first separate models for the channel response and the corrupting noise. The flow-based molecular channel is shown to be linear but time-varying and the noise corrupting the signal is additive white Gaussian with a signal dependent magnitude. By modelling the ligand–receptor binding process, it is shown that the molecular communication reception process in this channel has a low-pass characteristic that colours the additive noise. A whitening filter is proposed to compensate for this low-pass characteristic. Simulation results demonstrate the benefit of the whitening filter and the effect of medium motion on bit error rate.", "In this paper, we perform receiver design for a diffusive molecular communication environment. Our model includes flow in any direction, sources of information molecules in addition to the transmitter, and enzymes in the propagation environment to mitigate intersymbol interference. We characterize the mutual information between receiver observations to show how often independent observations can be made. We derive the maximum likelihood sequence detector to provide a lower bound on the bit error probability. We propose the family of weighted sum detectors for more practical implementation and derive their expected bit error probability. Under certain conditions, the performance of the optimal weighted sum detector is shown to be equivalent to a matched filter. Receiver simulation results show the tradeoff in detector complexity versus achievable bit error probability, and that a slow flow in any direction can improve the performance of a weighted sum detector.", "Abstract In this paper, a strength-based optimum signal detection scheme for binary concentration-encoded molecular communication (CEMC) system has been presented. In CEMC, a single type of information molecule is assumed to carry the information from the transmitting nanomachine (TN), through the propagation medium, to the receiving nanomachine (RN) in the form of received concentration of information molecules at the location of the RN. We consider a pair of nanomachines communicating by means of on–off keying (OOK) transmission protocol in a three-dimensional ideal (i.e. free) diffusion-based unbounded propagation environment. First, based on stochastic chemical kinetics of the reaction events between ligand molecules and receptors, we develop a mathematical receiver model of strength-based detection scheme for OOK CEMC system. Using an analytical approach, we explain the receiver operating characteristic (ROC) curves of the receiver thus developed. Finally, we propose a variable threshold-based detection scheme and explain its communication range and rate dependent characteristics. We show that it provides an improvement in the communication ranges compared to fixed threshold-based detection scheme. (Part of this paper has been peer-reviewed and published in BWCCA-2012 conference in Victoria, BC, 12–14 November, 2012 [20] .)" ] }
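The MAP demodulation from discrete-time samples discussed in the related-work paragraph above can be illustrated with a minimal sketch. This is not the method of any cited paper: purely for illustration it assumes the per-sample molecule counts are independent and Poisson with symbol-dependent means, and the symbol means and priors below are made up.

```python
import math

def log_likelihood(samples, means):
    # log P(samples | symbol) for independent Poisson-distributed counts
    return sum(k * math.log(m) - m - math.lgamma(k + 1)
               for k, m in zip(samples, means))

def map_demodulate(samples, symbol_means, priors):
    # MAP rule: pick the symbol maximising log prior + log likelihood
    return max(symbol_means,
               key=lambda s: math.log(priors[s]) + log_likelihood(samples, symbol_means[s]))

# toy alphabet: symbol 1 produces higher expected counts than symbol 0
symbol_means = {0: [2.0, 2.0, 2.0], 1: [8.0, 8.0, 8.0]}
priors = {0: 0.5, 1: 0.5}
print(map_demodulate([7, 9, 6], symbol_means, priors))  # high counts -> decides 1
```

With equal priors this reduces to maximum-likelihood detection; unequal priors shift the decision boundary toward the more probable symbol.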
1503.01205
2115084976
In a diffusion-based molecular communication network, transmitters and receivers communicate by using signalling molecules (or ligands) in a fluid medium. This paper assumes that the transmitter uses different chemical reactions to generate different emission patterns of signalling molecules to represent different transmission symbols, and the receiver consists of receptors. When the signalling molecules arrive at the receiver, they may react with the receptors to form ligand-receptor complexes. Our goal is to study the demodulation in this setup assuming that the transmitter and receiver are synchronised. We derive an optimal demodulator using the continuous history of the number of complexes at the receiver as the input to the demodulator. We do that by first deriving a communication model which includes the chemical reactions in the transmitter, diffusion in the transmission medium and the ligand-receptor process in the receiver. This model, which takes the form of a continuous-time Markov process, captures the noise in the receiver signal due to the stochastic nature of chemical reactions and diffusion. We then adopt a maximum a posteriori framework and use Bayesian filtering to derive the optimal demodulator. We use numerical examples to illustrate the properties of this optimal demodulator.
The capacity of molecular communication based on ligand-receptor binding has been studied in @cite_20 @cite_3 , assuming that discrete samples of the number of complexes are available. A recent work @cite_15 considers the capacity of such systems in the continuous-time limit. Our work focuses on demodulation rather than capacity.
{ "cite_N": [ "@cite_15", "@cite_3", "@cite_20" ], "mid": [ "1996262444", "2150275923", "1968363642" ], "abstract": [ "We model the ligand-receptor molecular communication channel with a discrete-time Markov model, and show how to obtain the capacity of this channel. We show that the capacity-achieving input distribution is iid; further, unusually for a channel with memory, we show that feedback does not increase the capacity of this channel.", "In diffusion-based molecular communications, messages can be conveyed via the variation in the concentration of molecules in the medium. In this paper, we intend to analyze the achievable capacity in transmission of information from one node to another in a diffusion channel. We observe that because of the molecular diffusion in the medium, the channel possesses memory. We then model the memory of the channel by a two-step Markov chain and obtain the equations describing the capacity of the diffusion channel. By performing a numerical analysis, we obtain the maximum achievable rate for different levels of the transmitter power, i.e., the molecule production rate.", "A diffusion-based molecular communication system has two major components: the diffusion in the medium, and the ligand-reception. Information bits, encoded in the time variations of the concentration of molecules, are conveyed to the receiver front through the molecular diffusion in the medium. The receiver, in turn, measures the concentration of the molecules in its vicinity in order to retrieve the information. This is done via ligand-reception process. In this paper, we develop models to study the constraints imposed by the concentration sensing at the receiver side and derive the maximum rate by which a ligand-receiver can receive information. Therefore, the overall capacity of the diffusion channel with the ligand receptors can be obtained by combining the results presented in this paper with our previous work on the achievable information rate of molecular communication over the diffusion channel." ] }
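As a simple illustration of computing channel capacity numerically, the sketch below implements the standard Blahut–Arimoto algorithm for a discrete memoryless channel. The cited works treat channels with memory (Markov ligand-receptor models), which this memoryless sketch does not capture; the binary symmetric channel example is illustrative only.

```python
import math

def blahut_arimoto(P, tol=1e-9, max_iter=10000):
    # P[x][y]: transition probabilities of a discrete memoryless channel.
    # Returns the channel capacity in bits per channel use.
    n_x, n_y = len(P), len(P[0])
    p = [1.0 / n_x] * n_x                       # current input distribution
    cap_lo = 0.0
    for _ in range(max_iter):
        q = [sum(p[x] * P[x][y] for x in range(n_x)) for y in range(n_y)]
        # c[x] = exp(KL divergence between row x of P and the output distribution q)
        c = [math.exp(sum(P[x][y] * math.log(P[x][y] / q[y])
                          for y in range(n_y) if P[x][y] > 0))
             for x in range(n_x)]
        z = sum(p[x] * c[x] for x in range(n_x))
        cap_lo, cap_hi = math.log(z), math.log(max(c))  # nested capacity bounds
        p = [p[x] * c[x] / z for x in range(n_x)]
        if cap_hi - cap_lo < tol:
            break
    return cap_lo / math.log(2)

# binary symmetric channel, crossover 0.1: capacity = 1 - H(0.1) ≈ 0.531 bits
print(round(blahut_arimoto([[0.9, 0.1], [0.1, 0.9]]), 3))
```

The iteration alternately updates the output distribution and reweights the input distribution, and the gap between the two capacity bounds gives a built-in stopping criterion.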
1503.01205
2115084976
In a diffusion-based molecular communication network, transmitters and receivers communicate by using signalling molecules (or ligands) in a fluid medium. This paper assumes that the transmitter uses different chemical reactions to generate different emission patterns of signalling molecules to represent different transmission symbols, and the receiver consists of receptors. When the signalling molecules arrive at the receiver, they may react with the receptors to form ligand-receptor complexes. Our goal is to study the demodulation in this setup assuming that the transmitter and receiver are synchronised. We derive an optimal demodulator using the continuous history of the number of complexes at the receiver as the input to the demodulator. We do that by first deriving a communication model which includes the chemical reactions in the transmitter, diffusion in the transmission medium and the ligand-receptor process in the receiver. This model, which takes the form of a continuous-time Markov process, captures the noise in the receiver signal due to the stochastic nature of chemical reactions and diffusion. We then adopt a maximum a posteriori framework and use Bayesian filtering to derive the optimal demodulator. We use numerical examples to illustrate the properties of this optimal demodulator.
Receiver design is an important topic in molecular communication and has been studied in many papers; some examples are @cite_21 @cite_14 @cite_47 @cite_56 @cite_30 . These papers use either one sample or a number of discrete samples of the count of a specific molecule to compute the likelihood of observing a certain input symbol. This paper takes a different approach and uses continuous-time signals.
{ "cite_N": [ "@cite_30", "@cite_14", "@cite_21", "@cite_56", "@cite_47" ], "mid": [ "2004388769", "2031515082", "2059488492", "1991545742", "1984920148" ], "abstract": [ "Diffusion-based communication refers to the transfer of information using molecules as message carriers whose propagation is governed by the laws of molecular diffusion. It has been identified that diffusion-based communication is one of the most promising solutions for end-to-end communication between nanoscale devices. In this paper, the design of a diffusion-based communication system considering stochastic signaling, arbitrary orders of channel memory, and noisy reception is proposed. The diffusion in the cases of one, two, and three dimensions are all considered. Three signal processing techniques for the molecular concentration with low computational complexity are proposed. For the detector design, both a low-complexity one-shot optimal detector for mutual information maximization and a near Maximum Likelihood (ML) sequence detector are proposed. To the best of our knowledge, our paper is the first that gives an analytical treatment of the signal processing, estimation, and detection problems for diffusion-based communication in the presence of ISI and reception noise. Numerical results indicate that the proposed signal processing technique followed by the one-shot detector achieves near-optimal throughput without the need of a priori information in both short-range and long-range diffusion-based communication scenarios, which suggests an ML sequence detector is not necessary. Furthermore, the proposed receiver design guarantees diffusion-based communication to operate without failure even in the case of infinite channel memory. A channel capacity of 1 bit per channel utilization can be ultimately achieved by extending the duration of the signaling interval.", "In this paper, we perform receiver design for a diffusive molecular communication environment. Our model includes flow in any direction, sources of information molecules in addition to the transmitter, and enzymes in the propagation environment to mitigate intersymbol interference. We characterize the mutual information between receiver observations to show how often independent observations can be made. We derive the maximum likelihood sequence detector to provide a lower bound on the bit error probability. We propose the family of weighted sum detectors for more practical implementation and derive their expected bit error probability. Under certain conditions, the performance of the optimal weighted sum detector is shown to be equivalent to a matched filter. Receiver simulation results show the tradeoff in detector complexity versus achievable bit error probability, and that a slow flow in any direction can improve the performance of a weighted sum detector.", "This paper studies the mitigation of intersymbol interference in a diffusive molecular communication system using enzymes that freely diffuse in the propagation environment. The enzymes form reaction intermediates with information molecules and then degrade them so that they cannot interfere with future transmissions. A lower bound expression on the expected number of molecules measured at the receiver is derived. A simple binary receiver detection scheme is proposed where the number of observed molecules is sampled at the time when the maximum number of molecules is expected. Insight is also provided into the selection of an appropriate bit interval. The expected bit error probability is derived as a function of the current and all previously transmitted bits. Simulation results show the accuracy of the bit error probability expression and the improvement in communication performance by having active enzymes present.", "Abstract In this paper, a strength-based optimum signal detection scheme for binary concentration-encoded molecular communication (CEMC) system has been presented. In CEMC, a single type of information molecule is assumed to carry the information from the transmitting nanomachine (TN), through the propagation medium, to the receiving nanomachine (RN) in the form of received concentration of information molecules at the location of the RN. We consider a pair of nanomachines communicating by means of on–off keying (OOK) transmission protocol in a three-dimensional ideal (i.e. free) diffusion-based unbounded propagation environment. First, based on stochastic chemical kinetics of the reaction events between ligand molecules and receptors, we develop a mathematical receiver model of strength-based detection scheme for OOK CEMC system. Using an analytical approach, we explain the receiver operating characteristic (ROC) curves of the receiver thus developed. Finally, we propose a variable threshold-based detection scheme and explain its communication range and rate dependent characteristics. We show that it provides an improvement in the communication ranges compared to fixed threshold-based detection scheme. (Part of this paper has been peer-reviewed and published in BWCCA-2012 conference in Victoria, BC, 12–14 November, 2012 [20] .)", "In the Molecular Communication (MC), molecules are utilized to encode, transmit, and receive information. Transmission of the information is achieved by means of diffusion of molecules and the information is recovered based on the molecule concentration variations at the receiver location. The MC is very prone to intersymbol interference (ISI) due to residual molecules emitted previously. Furthermore, the stochastic nature of the molecule movements adds noise to the MC. For the first time, we propose four methods for a receiver in the MC to recover the transmitted information distorted by both ISI and noise. We introduce sequence detection methods based on maximum a posteriori (MAP) and maximum likelihood (ML) criterions, a linear equalizer based on minimum mean-square error (MMSE) criterion, and a decision-feedback equalizer (DFE) which is a nonlinear equalizer. We present a channel estimator to estimate time varying MC channel at the receiver. The performances of the proposed methods based on bit error rates are evaluated. The sequence detection methods reveal the best performance at the expense of computational complexity. However, the MMSE equalizer has the lowest performance with the lowest computational complexity. The results show that using these methods significantly increases the information transmission rate in the MC." ] }
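The family of weighted-sum detectors mentioned in the quoted abstract above can be sketched in a few lines; the weights and threshold below are illustrative, not values from the cited work.

```python
def weighted_sum_detect(samples, weights, threshold):
    # weighted-sum detector: decide symbol 1 if the weighted sum of the
    # observed molecule counts exceeds the threshold, otherwise symbol 0
    stat = sum(w * s for w, s in zip(weights, samples))
    return 1 if stat > threshold else 0

# with equal weights this reduces to a simple count-and-threshold detector
print(weighted_sum_detect([3, 4, 2], [1, 1, 1], threshold=5))  # 9 > 5 -> 1
print(weighted_sum_detect([0, 1, 0], [1, 1, 1], threshold=5))  # 1 <= 5 -> 0
```

Choosing the weights to match the expected signal shape is what makes the optimal weighted-sum detector behave like a matched filter, as the quoted abstract notes.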
1503.01205
2115084976
In a diffusion-based molecular communication network, transmitters and receivers communicate by using signalling molecules (or ligands) in a fluid medium. This paper assumes that the transmitter uses different chemical reactions to generate different emission patterns of signalling molecules to represent different transmission symbols, and the receiver consists of receptors. When the signalling molecules arrive at the receiver, they may react with the receptors to form ligand-receptor complexes. Our goal is to study the demodulation in this setup assuming that the transmitter and receiver are synchronised. We derive an optimal demodulator using the continuous history of the number of complexes at the receiver as the input to the demodulator. We do that by first deriving a communication model which includes the chemical reactions in the transmitter, diffusion in the transmission medium and the ligand-receptor process in the receiver. This model, which takes the form of a continuous-time Markov process, captures the noise in the receiver signal due to the stochastic nature of chemical reactions and diffusion. We then adopt a maximum a posteriori framework and use Bayesian filtering to derive the optimal demodulator. We use numerical examples to illustrate the properties of this optimal demodulator.
Another approach to receiver design for molecular communication is to derive molecular circuits that can be used for decoding. An attempt is made in @cite_27 to design a molecular circuit that can decode frequency-modulated signals; however, that work does not take diffusion and reaction noise into consideration. A recent work @cite_12 analyses end-to-end molecular communication biological circuits from a linear time-invariant systems point of view. The work in @cite_36 compares the information-theoretic capacity of a number of different types of linear molecular circuits. This paper differs from the previous work in that it uses a non-linear ligand-receptor binding model.
{ "cite_N": [ "@cite_36", "@cite_27", "@cite_12" ], "mid": [ "2003495286", "2044579330", "2070597721" ], "abstract": [ "We consider diffusion-based molecular communication networks where the receivers consist of a set of chemical reactions or a molecular circuit. At the receivers of these networks, the signalling molecules react with the molecular circuit to produce output molecules. The counts of output molecules over time is the output signal of the receiver. The aim of this paper is to investigate the impact of different molecular circuits on the noise properties and information transmission capacity of molecular communication networks. In particular, we show that some molecular circuits have lower noise and higher information transmission capacity.", "Abstract A key research question in the design of molecular nano-communication networks is how the information is to be encoded and decoded. One particular encoding method is to use different frequencies to represent different symbols. This paper will investigate the decoding of such frequency coded signals. To the best of our knowledge, the current literature on molecular communication has only used simple ligand–receptor models as decoders and the decoding of frequency coded signals has not been studied. There are two key issues in the design of such decoders. First, the decoder must exhibit frequency selective behaviour which means that encoder symbol of a specific frequency causes a bigger response at the decoder than symbols of other frequencies. Second, the decoder must take into account inter-symbol interference which earlier studies on concentration coding have pointed out to be a major performance issue. In order to study the design of decoder, we propose a system of reaction–diffusion and reaction kinetic equations to model the system of encoder, channel and decoder. We use this model to show that enzymatic circuit of a particular inter-connection has frequency selective properties. We also explore how decoder can be designed to avoid inter-symbol interference.", "Molecular Communication (MC), i.e., the exchange of information through the emission, propagation, and reception of molecules, is a promising paradigm for the interconnection of autonomous nanoscale devices, known as nanomachines. Synthetic biology techniques, and in particular the engineering of biological circuits, are enabling research towards the programming of functions within biological cells, thus paving the way for the realization of biological nanomachines. The design of MC systems built upon biological circuits is particularly interesting since cells naturally employ the MC paradigm in their interactions, and possess many of the elements required to realize this type of communication. This paper focuses on the identification and systems-theoretic modeling of a minimal subset of biological circuit elements necessary to be included in an MC system design where the message-bearing molecules are propagated via free diffusion between two cells. The system-theoretic models are here detailed in terms of transfer functions, from which analytical expressions are derived for the attenuation and the delay experienced by an information signal through the MC system. Numerical results are presented to evaluate the attenuation and delay expressions as functions of realistic biological parameters." ] }
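The attenuation/delay view of the systems-theoretic work above, and the low-pass characteristic attributed to ligand-receptor reception elsewhere in this section, can be illustrated with a generic first-order transfer function H(s) = 1/(1 + s/(2πf_c)). This is a textbook model chosen for illustration, not the specific transfer functions derived in @cite_12.

```python
import math

def first_order_response(f, f_c):
    # magnitude (attenuation) and phase delay of H(s) = 1 / (1 + s / (2*pi*f_c)),
    # evaluated at frequency f (both frequencies in Hz)
    ratio = f / f_c
    magnitude = 1.0 / math.sqrt(1.0 + ratio * ratio)
    phase = -math.atan(ratio)                   # phase in radians
    delay = -phase / (2.0 * math.pi * f)        # phase delay in seconds
    return magnitude, delay

mag, delay = first_order_response(f=1.0, f_c=1.0)
# at the cut-off frequency the magnitude is 1/sqrt(2), i.e. the -3 dB point
print(round(mag, 4), round(delay, 4))
```

Signals well above the cut-off are strongly attenuated, which is why a whitening or equalising filter can help recover them.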
1503.01205
2115084976
In a diffusion-based molecular communication network, transmitters and receivers communicate by using signalling molecules (or ligands) in a fluid medium. This paper assumes that the transmitter uses different chemical reactions to generate different emission patterns of signalling molecules to represent different transmission symbols, and the receiver consists of receptors. When the signalling molecules arrive at the receiver, they may react with the receptors to form ligand-receptor complexes. Our goal is to study the demodulation in this setup assuming that the transmitter and receiver are synchronised. We derive an optimal demodulator using the continuous history of the number of complexes at the receiver as the input to the demodulator. We do that by first deriving a communication model which includes the chemical reactions in the transmitter, diffusion in the transmission medium and the ligand-receptor process in the receiver. This model, which takes the form of a continuous-time Markov process, captures the noise in the receiver signal due to the stochastic nature of chemical reactions and diffusion. We then adopt a maximum a posteriori framework and use Bayesian filtering to derive the optimal demodulator. We use numerical examples to illustrate the properties of this optimal demodulator.
The noise properties of the ligand-receptor process for molecular communication have been characterised in @cite_44 . The non-linear ligand-receptor binding case does not appear to admit an analytical solution, and @cite_44 derives an approximate characterisation using a linear reaction rate, assuming that the number of signalling molecules around the receptor is large. This paper uses a non-linear ligand-receptor binding model, and no approximation is used in solving the filtering problem.
{ "cite_N": [ "@cite_44" ], "mid": [ "2125863473" ], "abstract": [ "Molecular communication (MC) will enable the exchange of information among nanoscale devices. In this novel bio-inspired communication paradigm, molecules are employed to encode, transmit and receive information. In the most general case, these molecules are propagated in the medium by means of free diffusion. An information theoretical analysis of diffusion-based MC is required to better understand the potential of this novel communication mechanism. The study and the modeling of the noise sources is of utmost importance for this analysis. The objective of this paper is to provide a mathematical study of the noise at the reception of the molecular information in a diffusion-based MC system when the ligand-binding reception is employed. The reference diffusion-based MC system for this analysis is the physical end-to-end model introduced in a previous work by the same authors, where the reception process is realized through ligand-binding chemical receptors. The reception noise is modeled in this paper by following two different approaches, namely, through the ligand-receptor kinetics and through the stochastic chemical kinetics. The ligand-receptor kinetics allows to simulate the random perturbations in the chemical processes of the reception, while the stochastic chemical kinetics provides the tools to derive a closed-form solution to the modeling of the reception noise. The ligand-receptor kinetics model is expressed through a block scheme, while the stochastic chemical kinetics results in the characterization of the reception noise using stochastic differential equations. Numerical results are provided to demonstrate that the analytical formulation of the reception noise in terms of stochastic chemical kinetics is compliant with the reception noise behavior resulting from the ligand-receptor kinetics simulations." ] }
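The stochastic chemical kinetics underlying the ligand-receptor noise analysis can be simulated exactly with Gillespie's stochastic simulation algorithm. The sketch below simulates reversible binding L + R ⇌ C with illustrative rate constants and molecule counts; it is a generic SSA, not the specific model of @cite_44.

```python
import random

def gillespie_binding(n_l, n_r, k_on, k_off, t_end, seed=1):
    # Gillespie SSA for the reversible reaction L + R <-> C.
    # Returns the trajectory of the complex count C as (time, count) pairs.
    rng = random.Random(seed)
    t, c = 0.0, 0
    traj = [(t, c)]
    while t < t_end:
        a_bind = k_on * (n_l - c) * (n_r - c)   # binding propensity
        a_unbind = k_off * c                    # unbinding propensity
        a_total = a_bind + a_unbind
        if a_total == 0:
            break
        t += rng.expovariate(a_total)           # exponential waiting time
        c += 1 if rng.random() < a_bind / a_total else -1
        traj.append((t, c))
    return traj

traj = gillespie_binding(n_l=50, n_r=20, k_on=0.01, k_off=1.0, t_end=10.0)
assert all(0 <= c <= 20 for _, c in traj)       # count stays within physical bounds
```

The fluctuating complex count produced by such runs is exactly the reception noise that the cited analysis characterises in closed form.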
1503.01205
2115084976
In a diffusion-based molecular communication network, transmitters and receivers communicate by using signalling molecules (or ligands) in a fluid medium. This paper assumes that the transmitter uses different chemical reactions to generate different emission patterns of signalling molecules to represent different transmission symbols, and the receiver consists of receptors. When the signalling molecules arrive at the receiver, they may react with the receptors to form ligand-receptor complexes. Our goal is to study the demodulation in this setup assuming that the transmitter and receiver are synchronised. We derive an optimal demodulator using the continuous history of the number of complexes at the receiver as the input to the demodulator. We do that by first deriving a communication model which includes the chemical reactions in the transmitter, diffusion in the transmission medium and the ligand-receptor process in the receiver. This model, which takes the form of a continuous-time Markov process, captures the noise in the receiver signal due to the stochastic nature of chemical reactions and diffusion. We then adopt a maximum a posteriori framework and use Bayesian filtering to derive the optimal demodulator. We use numerical examples to illustrate the properties of this optimal demodulator.
Models. This paper uses the Reaction-Diffusion Master Equation (RDME) framework @cite_38 to model the reactions and diffusion in molecular communication networks. The RDME assumes that time is continuous while the diffusion medium is discretised into voxels, which results in a continuous-time Markov process (CTMP) with a finite number of discrete states. The RDME has been used to model the stochastic dynamics of cells in the biology literature @cite_8 . An attraction of the RDME is its Markov property, which means that one can leverage the rich theory of Markov processes.
{ "cite_N": [ "@cite_38", "@cite_8" ], "mid": [ "2124924107", "2122087743" ], "abstract": [ "We outline our perspective on stochastic chemical kinetics, paying particular attention to numerical simulation algorithms. We first focus on dilute, well-mixed systems, whose description using ordinary differential equations has served as the basis for traditional chemical kinetics for the past 150 years. For such systems, we review the physical and mathematical rationale for a discrete-stochastic approach, and for the approximations that need to be made in order to regain the traditional continuous-deterministic description. We next take note of some of the more promising strategies for dealing stochastically with stiff systems, rare events, and sensitivity analysis. Finally, we review some recent efforts to adapt and extend the discrete-stochastic approach to systems that are not well-mixed. In that currently developing area, we focus mainly on the strategy of subdividing the system into well-mixed subvolumes, and then simulating diffusional transfers of reactant molecules between adjacent subvolumes together with chemical reactions inside the subvolumes.", "Although cell polarity is an essential feature of living cells, it is far from being well-understood. Using a combination of computational modeling and biological experiments we closely examine an important prototype of cell polarity: the pheromone-induced formation of the yeast polarisome. Focusing on the role of noise and spatial heterogeneity, we develop and investigate two mechanistic spatial models of polarisome formation, one deterministic and the other stochastic, and compare the contrasting predictions of these two models against experimental phenotypes of wild-type and mutant cells. We find that the stochastic model can more robustly reproduce two fundamental characteristics observed in wild-type cells: a highly polarized phenotype via a mechanism that we refer to as spatial stochastic amplification, and the ability of the polarisome to track a moving pheromone input. Moreover, we find that only the stochastic model can simultaneously reproduce these characteristics of the wild-type phenotype and the multi-polarisome phenotype of a deletion mutant of the scaffolding protein Spa2. Significantly, our analysis also demonstrates that higher levels of stochastic noise results in increased robustness of polarization to parameter variation. Furthermore, our work suggests a novel role for a polarisome protein in the stabilization of actin cables. These findings elucidate the intricate role of spatial stochastic effects in cell polarity, giving support to a cellular model where noise and spatial heterogeneity combine to achieve robust biological function." ] }
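A minimal sketch of the RDME's voxel-based view of diffusion: space is split into voxels and each molecule jumps to a neighbouring voxel at a fixed rate, giving a continuous-time Markov chain that can be simulated with Gillespie-style sampling. Diffusion only, no reactions; all parameters are illustrative.

```python
import random

def rdme_diffusion(n_voxels, d_rate, counts, t_end, seed=7):
    # RDME with diffusion only: each molecule jumps to a neighbouring voxel
    # at rate d_rate per direction; simulated as a CTMC (Gillespie sampling).
    rng = random.Random(seed)
    counts = list(counts)
    t = 0.0
    while t < t_end:
        # one event per (occupied voxel, direction); propensity = d_rate * occupancy
        events = [(i, j, d_rate * counts[i])
                  for i in range(n_voxels) if counts[i] > 0
                  for j in (i - 1, i + 1) if 0 <= j < n_voxels]
        a_total = sum(a for _, _, a in events)
        if a_total == 0:
            break
        t += rng.expovariate(a_total)
        r = rng.random() * a_total
        for src, dst, a in events:              # pick an event proportionally to a
            r -= a
            if r <= 0:
                break
        counts[src] -= 1                        # molecule jumps src -> dst
        counts[dst] += 1
    return counts

final = rdme_diffusion(n_voxels=5, d_rate=1.0, counts=[100, 0, 0, 0, 0], t_end=50.0)
assert sum(final) == 100                        # diffusion conserves molecules
```

Adding per-voxel reaction events to the same event list yields the full RDME; the CTMC structure is what the paper's filtering machinery relies on.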
1503.01205
2115084976
In a diffusion-based molecular communication network, transmitters and receivers communicate by using signalling molecules (or ligands) in a fluid medium. This paper assumes that the transmitter uses different chemical reactions to generate different emission patterns of signalling molecules to represent different transmission symbols, and the receiver consists of receptors. When the signalling molecules arrive at the receiver, they may react with the receptors to form ligand-receptor complexes. Our goal is to study the demodulation in this setup assuming that the transmitter and receiver are synchronised. We derive an optimal demodulator using the continuous history of the number of complexes at the receiver as the input to the demodulator. We do that by first deriving a communication model which includes the chemical reactions in the transmitter, diffusion in the transmission medium and the ligand-receptor process in the receiver. This model, which takes the form of a continuous-time Markov process, captures the noise in the receiver signal due to the stochastic nature of chemical reactions and diffusion. We then adopt a maximum a posteriori framework and use Bayesian filtering to derive the optimal demodulator. We use numerical examples to illustrate the properties of this optimal demodulator.
The author of this paper has previously used an extension of the RDME model, called the RDME with exogenous input (RDMEX) model, to study molecular communication networks in @cite_11 @cite_9 @cite_4 @cite_33 . The RDMEX assumes that the times at which the transmitter emits signalling molecules are deterministic. This results in a stochastic process that is only piecewise Markov, i.e., the Markov property holds only between two consecutive emissions by the transmitter. In this paper, we assume the transmitter uses chemical reactions to generate the signalling molecules; the emission timings are therefore not deterministic but governed by a stochastic process.
{ "cite_N": [ "@cite_9", "@cite_4", "@cite_33", "@cite_11" ], "mid": [ "2046221159", "2145401736", "2093072521", "1967221240" ], "abstract": [ "Abstract Molecular communication networks consist of transmitters and receivers distributed in a fluid medium. The communication in these networks is realised by the transmitters emitting signalling molecules, which are diffused in the medium to reach the receivers. This paper investigates the properties of noise, or the variance of the receiver output, in molecular communication networks. The noise in these networks come from multiple sources: stochastic emission of signalling molecules by the transmitters, diffusion in the fluid medium and stochastic reaction kinetics at the receivers. We model these stochastic fluctuations by using an extension of the master equation. We show that, under certain conditions, the receiver outputs of linear molecular communication networks are Poisson distributed. The derivation also shows that noise in these networks is a nonlinear function of the network parameters and is non-additive. Numerical examples are provided to illustrate the properties of this type of Poisson channels.", "Molecular communication networks can be used to realise communication between nanoscale devices. In a molecular communication network, transmitters and receivers communicate by using signalling molecules. At the receivers, the signalling molecules react, via a chain of chemical reactions, to produce output molecules. The counts of output molecules over time is the output signal of the receiver. The output signal is noisy due to the stochastic nature of diffusion and chemical reactions. This paper aims to characterise the properties of the output signal. We do this by modelling the transmission medium, transmitter and receiver. In order to simplify the analysis, we model the transmitter as a sequence which specifies the number of molecules emitted by the transmitter over time. This paper considers two receiver reaction mechanisms, reversible conversion and linear catalytic, which can be used to approximate, respectively, ligand-receptor binding and enzymatic reactions. These two mechanisms are chosen because, if we consider them on their own (i.e. without the transmitter and diffusion), the ordinary differential equations describing the mean behaviour of these two reaction mechanisms have the same form; however, if we consider the end-to-end behaviour from the transmitter signal to the mean variance of the number of output molecules, then these two receiver reaction mechanisms have very different behaviours. We show this by deriving analytical expressions for the mean, variance and frequency properties of the number of output molecules of these two receiver reaction mechanisms. In addition, for reversible conversion, we are able to derive the exact probability distribution of the number of output molecules. Our model allows us to study the impact of design parameters on the communication performance. For example, we assume that our receiver is enclosed by a membrane and we study the impact of the diffusibility of molecules across this membrane on the communication performance.", "Molecular communication is a promising approach to realize the communication between nanoscale devices. In a diffusion-based molecular communication network, transmitters and receivers communicate by using signalling molecules. The transmitter uses different time-varying functions of concentration of signalling molecules (called emission patterns) to represent different transmission symbols. The signalling molecules diffuse freely in the medium. The receiver is assumed to consist of a number of receptors, which can be in ON or OFF state. When the signalling molecules arrive at the receiver, they react with the receptors and switch them from OFF to ON state probabilistically. The receptors remain ON for a random amount of time before reverting to the OFF state. This paper assumes that the receiver uses the continuous history of receptor state to infer the transmitted symbol. Furthermore, it assumes that the transmitter uses two transmission symbols and approaches the decoding problem from the maximum a posteriori (MAP) framework. Specifically, the decoding is realized by calculating the logarithm of the ratio of the posteriori probabilities of the two transmission symbols, or log-MAP ratio. A contribution of this paper is to show that the computation of log-MAP ratio can be performed by an analog filter. The receiver can therefore use the output of this filter to decide which symbol has been sent. This analog filter provides insight on what information is important for decoding. In particular, the timing at which the receptors switch from OFF to ON state, the number of OFF receptors and the mean number of signalling molecules at the receiver are important. Numerical examples are used to illustrate the property of this decoding method.", "We consider molecular communication networks consisting of transmitters and receivers distributed in a fluidic medium. In such networks, a transmitter sends one or more signaling molecules, which are diffused over the medium, to the receiver to realize the communication. In order to be able to engineer synthetic molecular communication networks, mathematical models for these networks are required. This paper proposes a new stochastic model for molecular communication networks called reaction-diffusion master equation with exogenous input (RDMEX). The key idea behind RDMEX is to model the transmitters as time series of signaling molecule counts, while diffusion in the medium and chemical reactions at the receivers are modeled as Markov processes using master equation. An advantage of RDMEX is that it can readily be used to model molecular communication networks with multiple transmitters and receivers. 
For the case where the reaction kinetics at the receivers is linear, we show how RDMEX can be used to determine the mean and covariance of the receiver output signals, and derive closed-form expressions for the mean receiver output signal of the RDMEX model. These closed-form expressions reveal that the output signal of a receiver can be affected by the presence of other receivers. Numerical examples are provided to demonstrate the properties of the model." ] }
1503.01205
2115084976
In a diffusion-based molecular communication network, transmitters and receivers communicate by using signalling molecules (or ligands) in a fluid medium. This paper assumes that the transmitter uses different chemical reactions to generate different emission patterns of signalling molecules to represent different transmission symbols, and the receiver consists of receptors. When the signalling molecules arrive at the receiver, they may react with the receptors to form ligand-receptor complexes. Our goal is to study the demodulation in this setup assuming that the transmitter and receiver are synchronised. We derive an optimal demodulator using the continuous history of the number of complexes at the receiver as the input to the demodulator. We do that by first deriving a communication model which includes the chemical reactions in the transmitter, diffusion in the transmission medium and the ligand-receptor process in the receiver. This model, which takes the form of a continuous-time Markov process, captures the noise in the receiver signal due to the stochastic nature of chemical reactions and diffusion. We then adopt a maximum a posteriori framework and use Bayesian filtering to derive the optimal demodulator. We use numerical examples to illustrate the properties of this optimal demodulator.
In this paper, we assume that the propagation medium is discretised into voxels. An alternative modelling paradigm, used in a number of molecular communication network papers @cite_2 @cite_28 @cite_26 , is that the transmitter or receiver has a non-zero spatial dimension (commonly modelled by a sphere) while the propagation medium is assumed to be continuous. (Note that although @cite_26 does not explicitly state the dimension of the receiver, one can infer that the receiver must have a non-zero dimension from the fact that it has a non-zero probability of receiving the signalling molecules.) We believe the technique in this paper can be adapted to this alternative modelling paradigm, and we do not expect that it would change the results in this paper; we will explain this in Section .
{ "cite_N": [ "@cite_28", "@cite_26", "@cite_2" ], "mid": [ "1984222522", "2963922654", "2093287389" ], "abstract": [ "Abstract In this paper, we study a molecular communication system operating over a moving propagation medium. Using the convection–diffusion equation, we present the first separate models for the channel response and the corrupting noise. The flow-based molecular channel is shown to be linear but time-varying and the noise corrupting the signal is additive white Gaussian with a signal dependent magnitude. By modelling the ligand–receptor binding process, it is shown that the molecular communication reception process in this channel has a low-pass characteristic that colours the additive noise. A whitening filter is proposed to compensate for this low-pass characteristic. Simulation results demonstrate the benefit of the whitening filter and the effect of medium motion on bit error rate.", "In this paper, a diffusion-based molecular communication channel between two nano-machines is considered. The effect of the amount of memory on performance is characterized, and a simple memory-limited decoder is proposed; its performance is shown to be close to that of the best possible decoder (without any restrictions on the computational complexity or its functional form), using genie-aided upper bounds. This effect is adapted to the case of Molecular Concentration Shift Keying; it is shown that a four-bit memory achieves nearly the same performance as infinite memory for all of the examples considered. A general class of threshold decoders is considered and shown to be suboptimal for a Poisson channel with memory, unless the SNR is higher than a computed threshold. During each symbol duration (symbol period), the probability that a released molecule hits the receiver changes over the duration of the period; thus, we also consider a receiver that samples at a rate higher than the transmission rate (a multi-read system). 
A multi-read system improves performance. The associated decision rule for this system is shown to be a weighted sum of the samples during each symbol interval. The performance of the system is analyzed using the saddle point approximation. The best performance gains are achieved for an oversampling factor of three for the examples considered.", "Abstract In this study, nanoscale communication networks have been investigated in the context of binary concentration-encoded unicast molecular communication suitable for numerous emerging applications, for example in healthcare and nanobiomedicine. The main focus of the paper has been given to the spatiotemporal distribution of signal strength and modulation schemes suitable for short-range, medium-range, and long-range molecular communication between two communicating nanomachines in a nanonetwork. This paper has principally focused on bio-inspired transmission techniques for concentration-encoded molecular communication systems. Spatiotemporal distributions of a carrier signal in the form of the concentration of diffused molecules over the molecular propagation channel and diffusion-dependent communication ranges have been explained for various scenarios. Finally, the performance analysis of modulation schemes has been evaluated in the form of the steady-state loss of amplitude of the received concentration signals and its dependence on the transmitter–receiver distance." ] }
1503.01205
2115084976
In a diffusion-based molecular communication network, transmitters and receivers communicate by using signalling molecules (or ligands) in a fluid medium. This paper assumes that the transmitter uses different chemical reactions to generate different emission patterns of signalling molecules to represent different transmission symbols, and the receiver consists of receptors. When the signalling molecules arrive at the receiver, they may react with the receptors to form ligand-receptor complexes. Our goal is to study the demodulation in this setup assuming that the transmitter and receiver are synchronised. We derive an optimal demodulator using the continuous history of the number of complexes at the receiver as the input to the demodulator. We do that by first deriving a communication model which includes the chemical reactions in the transmitter, diffusion in the transmission medium and the ligand-receptor process in the receiver. This model, which takes the form of a continuous-time Markov process, captures the noise in the receiver signal due to the stochastic nature of chemical reactions and diffusion. We then adopt a maximum a posteriori framework and use Bayesian filtering to derive the optimal demodulator. We use numerical examples to illustrate the properties of this optimal demodulator.
There is a rich literature on the modelling of biological systems discussing the difference between: (1) the particle approach, which has a continuous state space because the state of a particle is its position; and (2) the mesoscopic approach (the approach in this paper), which discretises the medium into voxels and considers the number of molecules in the voxels as the state. The first approach is more accurate but its computational burden can be high @cite_16 , while the second approach is accurate for an appropriate discretisation @cite_59 @cite_38 . There are also hybrid approaches. An overview of various modelling and simulation approaches can be found in @cite_16 .
{ "cite_N": [ "@cite_38", "@cite_16", "@cite_59" ], "mid": [ "2124924107", "1528643580", "1482678044" ], "abstract": [ "We outline our perspective on stochastic chemical kinetics, paying particular attention to numerical simulation algorithms. We first focus on dilute, well-mixed systems, whose description using ordinary differential equations has served as the basis for traditional chemical kinetics for the past 150 years. For such systems, we review the physical and mathematical rationale for a discrete-stochastic approach, and for the approximations that need to be made in order to regain the traditional continuous-deterministic description. We next take note of some of the more promising strategies for dealing stochastically with stiff systems, rare events, and sensitivity analysis. Finally, we review some recent efforts to adapt and extend the discrete-stochastic approach to systems that are not well-mixed. In that currently developing area, we focus mainly on the strategy of subdividing the system into well-mixed subvolumes, and then simulating diffusional transfers of reactant molecules between adjacent subvolumes together with chemical reactions inside the subvolumes.", "One of the fundamental motivations underlying computational cell biology is to gain insight into the complicated dynamical processes taking place, for example, on the plasma membrane or in the cytosol of a cell. These processes are often so complicated that purely temporal mathematical models cannot adequately capture the complex chemical kinetics and transport processes of, for example, proteins or vesicles. On the other hand, spatial models such as Monte Carlo approaches can have very large computational overheads. 
This chapter gives an overview of the state of the art in the development of stochastic simulation techniques for the spatial modelling of dynamic processes in a living cell.", "Numerical simulation methods have become an important tool in the study of chemical reaction networks in living cells. Many systems can, with high accuracy, be modeled by deterministic ordinary differential equations, but other systems require a more detailed level of modeling. Stochastic models at either the mesoscopic level or the microscopic level can be used for cases when molecules are present in low copy numbers.In this thesis we develop efficient and flexible algorithms for simulating systems at the microscopic level. We propose an improvement to the Green's function reaction dynamics algorithm, an efficient microscale method. Furthermore, we describe how to simulate interactions with complex internal structures such as membranes and dynamic fibers.The mesoscopic level is related to the microscopic level through the reaction rates at the respective scale. We derive that relation in both two dimensions and three dimensions and show that the mesoscopic model breaks down if the discretization of space becomes too fine. For a simple model problem we can show exactly when this breakdown occurs.We show how to couple the microscopic scale with the mesoscopic scale in a hybrid method. Using the fact that some systems only display microscale behaviour in parts of the system, we can gain computational time by restricting the fine-grained microscopic simulations to only a part of the system.Finally, we have developed a mesoscopic method that couples simulations in three dimensions with simulations on general embedded lines. The accuracy of the method has been verified by comparing the results with purely microscopic simulations as well as with theoretical predictions." ] }
1503.01205
2115084976
In a diffusion-based molecular communication network, transmitters and receivers communicate by using signalling molecules (or ligands) in a fluid medium. This paper assumes that the transmitter uses different chemical reactions to generate different emission patterns of signalling molecules to represent different transmission symbols, and the receiver consists of receptors. When the signalling molecules arrive at the receiver, they may react with the receptors to form ligand-receptor complexes. Our goal is to study the demodulation in this setup assuming that the transmitter and receiver are synchronised. We derive an optimal demodulator using the continuous history of the number of complexes at the receiver as the input to the demodulator. We do that by first deriving a communication model which includes the chemical reactions in the transmitter, diffusion in the transmission medium and the ligand-receptor process in the receiver. This model, which takes the form of a continuous-time Markov process, captures the noise in the receiver signal due to the stochastic nature of chemical reactions and diffusion. We then adopt a maximum a posteriori framework and use Bayesian filtering to derive the optimal demodulator. We use numerical examples to illustrate the properties of this optimal demodulator.
Others: The results of this paper may also be of interest to biologists who want to understand how living cells can distinguish between different concentration levels @cite_40 @cite_49 . The result of this paper can be viewed as a generalisation of @cite_40 , which studies how cells can distinguish between two constant levels of ligand concentration.
{ "cite_N": [ "@cite_40", "@cite_49" ], "mid": [ "2115988195", "1975891443" ], "abstract": [ "Cells send and receive signals through pathways that have been defined in great detail biochemically, and it is often presumed that the signals convey only level information. Cell signaling in the presence of noise is extensively studied but only rarely is the speed required to make a decision considered. However, in the immune system, rapidly developing embryos, and cellular response to stress, fast and accurate actions are required. Statistical theory under the rubric of “exploit–explore” quantifies trade-offs between decision speed and accuracy and supplies rigorous performance bounds and algorithms that realize them. We show that common protein phosphorylation networks can implement optimal decision theory algorithms and speculate that the ubiquitous chemical modifications to receptors during signaling actually perform analog computations. We quantify performance trade-offs when the cellular system has incomplete knowledge of the data model. For the problem of sensing the time when the composition of a ligand mixture changes, we find a nonanalytic dependence on relative concentrations and specify the number of parameters needed for near-optimal performance and how to adjust them. The algorithms specify the minimal computation that has to take place on a single receptor before the information is pooled across the cell.", "A variety of cellular functions are robust even to substantial intrinsic and extrinsic noise in intracellular reactions and the environment that could be strong enough to impair or limit them. In particular, of substantial importance is cellular decision-making in which a cell chooses a fate or behavior on the basis of information conveyed in noisy external signals. For robust decoding, the crucial step is filtering out the noise inevitably added during information transmission. 
As a minimal and optimal implementation of such an information decoding process, the autocatalytic phosphorylation and autocatalytic dephosphorylation (aPadP) cycle was recently proposed. Here, we analyze the dynamical properties of the aPadP cycle in detail. We describe the dynamical roles of the stationary and short-term responses in determining the efficiency of information decoding and clarify the optimality of the threshold value of the stationary response and its information-theoretical meaning. Furthermore, we investigate the robustness of the aPadP cycle against the receptor inactivation time and intrinsic noise. Finally, we discuss the relationship among information decoding with information-dependent actions, bet-hedging and network modularity." ] }
1503.01161
2130485404
We present the Bayesian Case Model (BCM), a general framework for Bayesian case-based reasoning (CBR) and prototype classification and clustering. BCM brings the intuitive power of CBR to a Bayesian generative framework. The BCM learns prototypes, the "quintessential" observations that best represent clusters in a dataset, by performing joint inference on cluster labels, prototypes and important features. Simultaneously, BCM pursues sparsity by learning subspaces, the sets of features that play important roles in the characterization of the prototypes. The prototype and subspace representation provides quantitative benefits in interpretability while preserving classification accuracy. Human subject experiments verify statistically significant improvements to participants' understanding when using explanations produced by BCM, compared to those given by prior art.
In our view, there are at least three widely known types of interpretable models: sparse linear classifiers ( @cite_0 @cite_28 @cite_1 ); discretization methods, such as decision trees and decision lists (e.g., @cite_14 @cite_9 @cite_10 @cite_23 @cite_16 ); and prototype- or case-based classifiers (e.g., nearest neighbors @cite_2 or a supervised optimization-based method @cite_24 ). (See @cite_6 for a review of interpretable classification.) BCM belongs to the third model type, but uses unsupervised generative mechanisms to explain clusters, rather than supervised approaches @cite_18 or a myopic focus on neighboring points @cite_31 .
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_28", "@cite_9", "@cite_1", "@cite_6", "@cite_0", "@cite_24", "@cite_23", "@cite_2", "@cite_31", "@cite_16", "@cite_10" ], "mid": [ "", "2099534828", "2159426623", "", "2165279024", "", "2135046866", "", "", "2004055201", "", "2951938546", "" ], "abstract": [ "", "Classification and regression trees are ideally suited for the analysis of com- plex ecological data. For such data, we require flexible and robust analytical methods, which can deal with nonlinear relationships, high-order interactions, and missing values. Despite such difficulties, the methods should be simple to understand and give easily interpretable results. Trees explain variation of a single response variable by repeatedly splitting the data into more homogeneous groups, using combinations of explanatory var- iables that may be categorical and or numeric. Each group is characterized by a typical value of the response variable, the number of observations in the group, and the values of the explanatory variables that define it. The tree is represented graphically, and this aids exploration and understanding. Trees can be used for interactive exploration and for description and prediction of patterns and processes. Advantages of trees include: (1) the flexibility to handle a broad range of response types, including numeric, categorical, ratings, and survival data; (2) invariance to monotonic transformations of the explanatory variables; (3) ease and ro- bustness of construction; (4) ease of interpretation; and (5) the ability to handle missing values in both response and explanatory variables. Thus, trees complement or represent an alternative to many traditional statistical techniques, including multiple regression, analysis of variance, logistic regression, log-linear models, linear discriminant analysis, and survival models. 
We use classification and regression trees to analyze survey data from the Australian central Great Barrier Reef, comprising abundances of soft coral taxa (Cnidaria: Octocorallia) and physical and spatial environmental information. Regression tree analyses showed that dense aggregations, typically formed by three taxa, were restricted to distinct habitat types, each of which was defined by combinations of 3-4 environmental variables. The habitat definitions were consistent with known experimental findings on the nutrition of these taxa. When used separately, physical and spatial variables were similarly strong predictors of abundances and lost little in comparison with their joint use. The spatial variables are thus effective surrogates for the physical variables in this extensive reef complex, where infor- mation on the physical environment is often not available. Finally, we compare the use of regression trees and linear models for the analysis of these data and show how linear models fail to find patterns uncovered by the trees.", "Probabilistic topic models are a popular tool for the unsupervised analysis of text, providing both a predictive model of future text and a latent topic representation of the corpus. Practitioners typically assume that the latent space is semantically meaningful. It is used to check models, summarize the corpus, and guide exploration of its contents. However, whether the latent space is interpretable is in need of quantitative evaluation. In this paper, we present new quantitative methods for measuring semantic meaning in inferred topics. We back these measures with large-scale user studies, showing that they capture aspects of the model that are undetected by previous measures of model quality based on held-out likelihood. 
Surprisingly, topic models which perform better on held-out likelihood may infer less semantically meaningful topics.", "", "Generative models of text typically associate a multinomial with every class label or topic. Even in simple models this requires the estimation of thousands of parameters; in multi-faceted latent variable models, standard approaches require additional latent \"switching\" variables for every token, complicating inference. In this paper, we propose an alternative generative model for text. The central idea is that each class label or latent topic is endowed with a model of the deviation in log-frequency from a constant background distribution. This approach has two key advantages: we can enforce sparsity to prevent overfitting, and we can combine generative facets through simple addition in log space, avoiding the need for latent switching variables. We demonstrate the applicability of this idea to a range of scenarios: classification, topic modeling, and more complex multifaceted generative models.", "", "SUMMARY We propose a new method for estimation in linear models. The 'lasso' minimizes the residual sum of squares subject to the sum of the absolute value of the coefficients being less than a constant. Because of the nature of this constraint it tends to produce some coefficients that are exactly 0 and hence gives interpretable models. Our simulation studies suggest that the lasso enjoys some of the favourable properties of both subset selection and ridge regression. It produces interpretable models like subset selection and exhibits the stability of ridge regression. There is also an interesting relationship with recent work in adaptive function estimation by Donoho and Johnstone. 
The lasso idea is quite general and can be applied in a variety of statistical models: extensions to generalized regression models and tree-based models are briefly described.", "", "", "", "", "The vast majority of real world classification problems are imbalanced, meaning there are far fewer data from the class of interest (the positive class) than from other classes. We propose two machine learning algorithms to handle highly imbalanced classification problems. The classifiers constructed by both methods are created as unions of parallel axis rectangles around the positive examples, and thus have the benefit of being interpretable. The first algorithm uses mixed integer programming to optimize a weighted balance between positive and negative class accuracies. Regularization is introduced to improve generalization performance. The second method uses an approximation in order to assist with scalability. Specifically, it follows a approach, where the positive class is characterized first by boxes, and then each box boundary becomes a separate discriminative classifier. This method has the computational advantages that it can be easily parallelized, and considers only the relevant regions of feature space.", "" ] }
1503.00769
1530058226
When the boundary of a familiar object is shown by a series of isolated dots, humans can often recognize the object with ease. This ability can be sustained with addition of distracting dots around the object. However, such capability has not been reproduced algorithmically on computers. We introduce a new algorithm that groups a set of dots into multiple non-disjoint subsets. It connects the dots into a spanning tree using the proximity cue. It then applies the straight polygon transformation to an initial polygon derived from the spanning tree. The straight polygon divides the space into polygons recursively and each polygon can be viewed as grouping of a subset of the dots. The number of polygons generated is O( @math ). We also introduce simple shape selection and recognition algorithms that can be applied to the grouping result. We used both natural and synthetic images to show effectiveness of these algorithms.
Kubovy investigated the grouping of dots arranged on a periodic rectilinear lattice and studied the stability of the grouping based on proximity @cite_1 . The multi-stability of dot grouping was later modeled with an exponential decay function of the distance between nearby pairs @cite_11 . The good fit of the model advocates proximity-based grouping. However, in @cite_17 , dots arranged on a curvilinear lattice showed grouping over smooth curves instead of over more proximal straight curves. That study appears to contradict the purely proximity-based model and suggests a more complex interplay between proximity and curvature cues in our perception. Nevertheless, the patterns used in @cite_17 are rather unnatural: the dots are aligned perfectly along parallel curves, providing a distinct texture, and they are viewed through a small aperture without any notion of boundaries. Hence, we think that the proximity cues are more dominant than the curvature ones, and we stay focused on the proximity cues alone in our current study.
{ "cite_N": [ "@cite_1", "@cite_17", "@cite_11" ], "mid": [ "2043788923", "", "2029406404" ], "abstract": [ "Bravais (1850 1949) demonstrated that there are five types of periodic dot patterns (or lattices): oblique, rectangular, centered rectangular, square, and hexagonal. Gestalt psychologists studied grouping by proximity in rectangular and square dot patterns. In the first part of the present paper, I (1) describe the geometry of the five types of lattices, and (2) explain why, for the study of perception, centered rectangular lattices must be divided into two classes (centered rectangular andrhombic). I also show how all lattices can be located in a two-dimensional space. In the second part of the paper, I show how the geometry of these lattices determines their grouping and their multistability. I introduce the notion ofdegree of instability and explain how to order lattices from most stable to least stable (hexagonal). In the third part of the paper, I explore the effect of replacing the dots in a lattice with less symmetric motifs, thus creating wallpaper patterns. When a dot pattern is turned into a wallpaper pattern, its perceptual organization can be altered radically, overcoming grouping by proximity. 
I conclude the paper with an introduction to the implications of motif selection and placement for the perception of the ensuing patterns.", "", "Gestalt phenomena have long resisted quantification In the spirit of Gestalt field theory, we propose a theory that predicts the probability of grouping by proximity in the six kinds of dot lattices (hexagonal, rhombic, square, rectangular, centered rectangular, and oblique) We claim that the unstable perceptual organization of dot lattices is caused by competing forces that attract each dot to other dots in its neighborhood We model the decline of these forces as a function of distance with an exponential decay function This attraction function has one parameter, the attraction constant Simple assumptions allow us to predict the entropy of the perceptual organization of different dot lattices We showed dot lattices tachistoscopically to 7 subjects, and from the probabilities of the perceived organizations, we calculated the entropy of each lattice for each subject The model fit the data exceedingly well The attraction constant did not vary much over subjects" ] }
1503.00769
1530058226
When the boundary of a familiar object is shown by a series of isolated dots, humans can often recognize the object with ease. This ability can be sustained with addition of distracting dots around the object. However, such capability has not been reproduced algorithmically on computers. We introduce a new algorithm that groups a set of dots into multiple non-disjoint subsets. It connects the dots into a spanning tree using the proximity cue. It then applies the straight polygon transformation to an initial polygon derived from the spanning tree. The straight polygon divides the space into polygons recursively and each polygon can be viewed as grouping of a subset of the dots. The number of polygons generated is O( @math ). We also introduce simple shape selection and recognition algorithms that can be applied to the grouping result. We used both natural and synthetic images to show effectiveness of these algorithms.
Greene has conducted various studies on shape recognition using a device that allows control of a 64x64 LED array display at sub-millisecond accuracy. In @cite_14 , a sparse set of dots sampled uniformly around a shape induced recognition of the shape more quickly than a set distributed non-uniformly around it. The result suggests that the maximum separation between dots affects the speed of shape recognition. In @cite_33 , using the same LED display, dots delineating common shapes were split into groups of four and flashed with millisecond accuracy. One treatment selected the four dots consecutively from the outline, thus providing contour cues; the other selected the four dots randomly, depriving subjects of any contour cues. Subjects were assigned to one of the two treatments and recognition accuracy was recorded for each shape. The results showed no significant difference between the two treatment groups, suggesting that contour attributes such as orientation, curvature, and length, commonly used in perceptual grouping models, are less important than the proximity cue for the construction of shape outlines.
{ "cite_N": [ "@cite_14", "@cite_33" ], "mid": [ "1985181958", "2138255641" ], "abstract": [ "Summary-Most extant theories of shape perception assume or assert that various contour attributes, and in particular, the orientation, curvature and linear extent of the contours provide essential object recognition cues. The present study examined this proposal using discrete dots that marked locations on the outer boundary of namable objects, providing shape-patterns similar to silhouettes. For each shape, the display initially provided only a sampling of the total number of dots in the boundary, and the number of dots was periodically increased until the participant named the object. There were three treatment conditions in which the initial display as well as the periodic increments consisted of continuous arrays (strings) of dots, randomly positioned dots, or evenly spaced dots. Analysis showed objects were recognized with the fewest percentage of dots with the evenly spaced condition, and participants needed the greatest percentage with the contiguous array condition. In many cases objects could be identified when very few evenly spaced dots were shown, thereby providing large spacing between the dots. It seems unlikely that known neural mechanisms could extract contour attributes, e.g., orientation, curvature, and linear extent, from such sparse stimulus patterns, which provides a challenge to the proposition that these are essential shape cues.", "It is believed that certain contour attributes, specifically orientation, curvature and linear extent, provide essential cues for object (shape) recognition. The present experiment examined this hypothesis by comparing stimulus conditions that differentially provided such cues. A spaced array of dots was used to mark the outside boundary of namable objects, and subsets were chosen that contained either contiguous strings of dots or randomly positioned dots. These subsets were briefly and successively displayed using an MTDC information persistence paradigm. Across the major range of temporal separation of the subsets, it was found that contiguity of boundary dots did not provide more effective shape recognition cues. This is at odds with the concept that encoding and recognition of shapes is predicated on the encoding of contour attributes such as orientation, curvature and linear extent." ] }
1503.00769
1530058226
When the boundary of a familiar object is shown by a series of isolated dots, humans can often recognize the object with ease. This ability can be sustained with addition of distracting dots around the object. However, such capability has not been reproduced algorithmically on computers. We introduce a new algorithm that groups a set of dots into multiple non-disjoint subsets. It connects the dots into a spanning tree using the proximity cue. It then applies the straight polygon transformation to an initial polygon derived from the spanning tree. The straight polygon divides the space into polygons recursively and each polygon can be viewed as grouping of a subset of the dots. The number of polygons generated is O( @math ). We also introduce simple shape selection and recognition algorithms that can be applied to the grouping result. We used both natural and synthetic images to show effectiveness of these algorithms.
When a dot pattern represents a single cluster, the problem is to derive a polygonal representation of the cluster. The problem is often called . A trivial but important representation is a convex hull. proposed a generalization of convex hull called @math -. Given a real number @math , the @math - is the intersection of all closed generalized discs with radius @math that contain all the points in the pattern @cite_20 . If @math , a generalized disk is the complement of a disc of radius @math and if @math , it is a half-plane. The convex hull is a case with @math . Furthermore, @math - is a polygonal representation of the dot pattern derived from the corresponding @math - and can be computed in @math . proposed a representation called @math -, which is simpler and computationally more efficient ( @math ) than the @math - @cite_4 .
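The convex hull mentioned above, the simplest of these polygonal representations, can be computed in O(n log n) time; below is a minimal sketch using Andrew's monotone chain, an illustrative choice rather than the algorithm of the cited works:

```python
def convex_hull(points):
    """Andrew's monotone chain: returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                       # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):             # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # drop the duplicated endpoints where the two chains meet
    return lower[:-1] + upper[:-1]

print(convex_hull([(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5)]))
# → [(0, 0), (1, 0), (1, 1), (0, 1)]
```

The α- and r-shape constructions refine this baseline by letting the boundary follow concavities that the convex hull necessarily bridges.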
{ "cite_N": [ "@cite_4", "@cite_20" ], "mid": [ "2081183584", "2151631165" ], "abstract": [ "Abstract A novel approach to defining the external shape of a dot pattern is proposed from which the intuitive border of the set is extracted. The approach is based on a new definition called the s -shape, which can be generated by a data-driven procedure. The s -shape generates a staircase-like border. To obtain a polygonal border, an r -shape is defined for which the parameter r is found from s , the parameter of the s -shape. The main advantage of this approach is that it can be computed in O ( n ) time for a dot pattern containing n points. The approach has three basic steps: (i) choice of an appropriate s (and corresponding r ) from the given point set, (ii) generation of the r -shape, and (iii) cleaning of inconsistent parts from the r -shape. The diagram composed of the consistent edges of the r -shape is considered the perceived border of the dot pattern. A new structural basis called the dispersion matrix is evolved. Extension of the work to the digital case is discussed. The algorithm for extracting the perceptual border is fast since it is mainly composed of basic operations such as nonnegative integer addition and logical operations. Moreover, it can be implemented on parallel machines since the operations are local in the point space.", "A generalization of the convex hull of a finite set of points in the plane is introduced and analyzed. This generalization leads to a family of straight-line graphs, \" -shapes,\" which seem to capture the intuitive notions of \"fine shape\" and \"crude shape\" of point sets. It is shown that a-shapes are subgraphs of the closest point or furthest point Delaunay triangulation. Relying on this result an optimal O(n n) algorithm that constructs -shapes is developed." ] }
1503.00769
1530058226
When the boundary of a familiar object is shown by a series of isolated dots, humans can often recognize the object with ease. This ability can be sustained with addition of distracting dots around the object. However, such capability has not been reproduced algorithmically on computers. We introduce a new algorithm that groups a set of dots into multiple non-disjoint subsets. It connects the dots into a spanning tree using the proximity cue. It then applies the straight polygon transformation to an initial polygon derived from the spanning tree. The straight polygon divides the space into polygons recursively and each polygon can be viewed as grouping of a subset of the dots. The number of polygons generated is O( @math ). We also introduce simple shape selection and recognition algorithms that can be applied to the grouping result. We used both natural and synthetic images to show effectiveness of these algorithms.
For the clustering side, standard clustering algorithms such as k-means, mixtures of Gaussians, and ISODATA can be applied. However, these algorithms assume compact, well-separated clusters and fail on the dolphin example shown in Figure . Parametric or template-based models @cite_16 can be used to isolate specific shapes from the background, but such approaches do not generalize to arbitrary shapes.
{ "cite_N": [ "@cite_16" ], "mid": [ "2141689883" ], "abstract": [ "This paper analyses the improvements that can be gained in the generalized Hough transform method for recognizing objects through the use of imperfect perceptual grouping techniques. In particular, we consider simple grouping techniques that determine pairs of points that are likely to belong to the same object using a criterion based on connectedness in the image edge map. It is shown that such imperfect grouping techniques can considerably improve both the efficiency and accuracy of object recognition. Experiments are described that demonstrate the improvements in performance." ] }
1503.00769
1530058226
When the boundary of a familiar object is shown by a series of isolated dots, humans can often recognize the object with ease. This ability can be sustained with addition of distracting dots around the object. However, such capability has not been reproduced algorithmically on computers. We introduce a new algorithm that groups a set of dots into multiple non-disjoint subsets. It connects the dots into a spanning tree using the proximity cue. It then applies the straight polygon transformation to an initial polygon derived from the spanning tree. The straight polygon divides the space into polygons recursively and each polygon can be viewed as grouping of a subset of the dots. The number of polygons generated is O( @math ). We also introduce simple shape selection and recognition algorithms that can be applied to the grouping result. We used both natural and synthetic images to show effectiveness of these algorithms.
Zahn built a minimum spanning tree from the dot pattern, as we do in our algorithm, and broke the tree into a forest by removing edges that are significantly longer than the others @cite_26 . The method of Bajcsy and Ahuja @cite_3 is similar to Zahn's, but exploits maximum intra-cluster similarity and inter-cluster dissimilarity. Ahuja and Tuceryan @cite_27 used the Voronoi diagram to derive neighbor relations and various local geometric structures, from which dots are classified as interior, border, curve, or isolated. Globally consistent classification is encouraged by relaxation labeling @cite_6 @cite_31 . Their method uses 7 different geometric structures; the classification is complex and requires many free parameters.
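Zahn's scheme (build an MST, delete inconsistent edges, take the surviving components as clusters) can be sketched in a few lines. The global mean + k·std cut-off below is an illustrative simplification; Zahn judged edges against local neighborhood statistics:

```python
import math

def mst_edges(points):
    """Prim's algorithm on the complete Euclidean graph; O(n^2)."""
    dist = lambda a, b: math.dist(points[a], points[b])
    best = {i: (dist(0, i), 0) for i in range(1, len(points))}
    edges = []
    while best:
        j = min(best, key=lambda i: best[i][0])   # closest point to the tree
        d, parent = best.pop(j)
        edges.append((d, parent, j))
        for i in best:                            # relax distances via j
            if dist(j, i) < best[i][0]:
                best[i] = (dist(j, i), j)
    return edges

def zahn_clusters(points, k=2.0):
    """Cut MST edges longer than mean + k*std; components are clusters."""
    if len(points) < 2:
        return [list(range(len(points)))]
    edges = mst_edges(points)
    lengths = [d for d, _, _ in edges]
    mean = sum(lengths) / len(lengths)
    std = (sum((l - mean) ** 2 for l in lengths) / len(lengths)) ** 0.5
    adj = {i: [] for i in range(len(points))}
    for d, u, v in edges:
        if d <= mean + k * std:                   # keep only "consistent" edges
            adj[u].append(v)
            adj[v].append(u)
    seen, clusters = set(), []
    for s in range(len(points)):                  # connected components (DFS)
        if s in seen:
            continue
        stack, comp = [s], []
        while stack:
            u = stack.pop()
            if u not in seen:
                seen.add(u)
                comp.append(u)
                stack.extend(adj[u])
        clusters.append(sorted(comp))
    return clusters

# two well-separated dot groups
pts = [(0, 0), (1, 0), (0, 1), (1, 1), (10, 10), (11, 10), (10, 11)]
print(zahn_clusters(pts))  # → [[0, 1, 2, 3], [4, 5, 6]]
```

The single long bridge edge between the two groups exceeds the threshold and is cut, leaving one component per group.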
{ "cite_N": [ "@cite_26", "@cite_6", "@cite_3", "@cite_27", "@cite_31" ], "mid": [ "1972969203", "1979622972", "2050932686", "2061645666", "2107792892" ], "abstract": [ "A family of graph-theoretical algorithms based on the minimal spanning tree are capable of detecting several kinds of cluster structure in arbitrary point sets; description of the detected clusters is possible in some cases by extensions of the method. Development of these clustering algorithms was based on examples from two-dimensional space because we wanted to copy the human perception of gestalts or point groupings. On the other hand, all the methods considered apply to higher dimensional spaces and even to general metric spaces. Advantages of these methods include determinacy, easy interpretation of the resulting clusters, conformity to gestalt principles of perceptual organization, and invariance of results under monotone transformations of interpoint distance. Brief discussion is made of the application of cluster detection to taxonomy and the selection of good feature spaces for pattern recognition. Detailed analyses of several planar cluster detection problems are illustrated by text and figures. The well-known Fisher iris data, in four-dimensional space, have been analyzed by these methods also. PL 1 programs to implement the minimal spanning tree methods have been fully debugged.", "Given a set of objects in a scene whose identifications are ambiguous, it is often possible to use relationships among the objects to reduce or eliminate the ambiguity. A striking example of this approach was given by Waltz [13]. This paper formulates the ambiguity-reduction process in terms of iterated parallel operations (i.e., relaxation operations) performed on an array of (object, identification) data. Several different models of the process are developed, convergence properties of these models are established, and simple examples are given.", "This paper presents a new approach to hierarchical clustering of point patterns. Two algorithms for hierarchical location- and density-based clustering are developed. Each method groups points such that maximum intracluster similarity and intercluster dissimilarity are achieved for point locations or point separations. Performance of the clustering methods is compared with four other methods. The approach is applied to a two-step texture analysis, where points represent centroid and average color of the regions in image segmentation.", "Abstract This paper presents a computational approach to extracting basic perceptual structure, or the lowest level grouping in dot patterns. The goal is to extract the perceptual segments of dots that group together because of their relative locations. The dots are interpreted as belonging to the interior or the border of a perceptual segment, or being along a perceived curve, or being isolated. To perform the lowest level grouping, first the geometric structure of the dot pattern is represented in terms of certain geometric properties of the Voronoi neighborhoods of the dots. The grouping is accomplished through independent modules that possess narrow expertise for recognition of typical interior dots, border dots, curve dots, and isolated dots, from the properties of the Voronoi neighborhoods. The results of the modules are allowed to influence and change each other so as to result in perceptual components that satisfy global, Gestalt criteria such as border and curve smoothness and component compactness. Such lateral communication among the modules makes feasible a perceptual interpretation of the local structure in a manner that best meets the global expectations. Thus, an integration is performed of multiple constraints, active at different perceptual levels and having different scopes in the dot pattern, to infer the lowest level perceptual structure. The local interpretations as well as lateral corrections are performed through constraint propagation using a probabilistic relaxation process. The result is a partitioning of the dot pattern into different perceptual segments or tokens. Unlike dots, these segments possess size and shape properties in addition to locations.", "A large class of problems can be formulated in terms of the assignment of labels to objects. Frequently, processes are needed which reduce ambiguity and noise, and select the best label among several possible choices. Relaxation labeling processes are just such a class of algorithms. They are based on the parallel use of local constraints between labels. This paper develops a theory to characterize the goal of relaxation labeling. The theory is founded on a definition of consistency in labelings, extending the notion of constraint satisfaction. In certain restricted circumstances, an explicit functional exists that can be maximized to guide the search for consistent labelings. This functional is used to derive a new relaxation labeling operator. When the restrictions are not satisfied, the theory relies on variational calculus. It is shown that the problem of finding consistent labelings is equivalent to solving a variational inequality. A procedure nearly identical to the relaxation operator derived under restricted circumstances serves in the more general setting. Further, a local convergence result is established for this operator. The standard relaxation labeling formulas are shown to approximate our new operator, which leads us to conjecture that successful applications of the standard methods are explainable by the theory developed here. Observations about convergence and generalizations to higher order compatibility relations are described." ] }
1503.00593
2951529889
In this paper, we address the problem of estimating and removing non-uniform motion blur from a single blurry image. We propose a deep learning approach to predicting the probabilistic distribution of motion blur at the patch level using a convolutional neural network (CNN). We further extend the candidate set of motion kernels predicted by the CNN using carefully designed image rotations. A Markov random field model is then used to infer a dense non-uniform motion blur field enforcing motion smoothness. Finally, motion blur is removed by a non-uniform deblurring model using patch-level image prior. Experimental evaluations show that our approach can effectively estimate and remove complex non-uniform motion blur that is not handled well by previous approaches.
Estimating accurate motion blur kernels is essential to non-uniform image deblurring. In @cite_4 @cite_13 @cite_26 @cite_7 @cite_33 , non-uniform motion blur is modeled as a global camera motion, which essentially estimates a uniform kernel in the camera motion space. Methods in @cite_10 @cite_21 @cite_9 jointly estimate the motion kernels and the sharp image, relying on a sparsity prior to infer the latent sharp image for better motion kernel estimation. In contrast, we estimate motion blur kernels directly from local patches, which requires neither an estimate of the camera motion nor a latent sharp image.
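As a toy illustration of the non-uniform model discussed here, the sketch below blurs each patch of an image with its own linear motion kernel; the kernel parametrization, patch size, and random motion field are all illustrative assumptions, not the estimation procedure of the cited works:

```python
import numpy as np

def motion_kernel(length, angle_deg, size=7):
    """A normalized linear motion-blur kernel: a short line segment."""
    k = np.zeros((size, size))
    c = size // 2
    theta = np.deg2rad(angle_deg)
    for t in np.linspace(-length / 2, length / 2, 4 * size):
        y = int(round(c + t * np.sin(theta)))
        x = int(round(c + t * np.cos(theta)))
        if 0 <= y < size and 0 <= x < size:
            k[y, x] = 1.0
    return k / k.sum()

def conv2_same(img, k):
    """Naive 'same'-size 2-D convolution with zero padding."""
    kh, kw = k.shape
    pad = np.pad(img, ((kh // 2,) * 2, (kw // 2,) * 2))
    kf = k[::-1, ::-1]                  # flip: convolution, not correlation
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (pad[i:i + kh, j:j + kw] * kf).sum()
    return out

def patchwise_blur(img, patch=16, seed=0):
    """Blur each patch with its own random linear motion kernel."""
    rng = np.random.default_rng(seed)
    out = img.astype(float).copy()
    for i in range(0, img.shape[0], patch):
        for j in range(0, img.shape[1], patch):
            k = motion_kernel(length=int(rng.integers(2, 6)),
                              angle_deg=float(rng.uniform(0.0, 180.0)))
            out[i:i + patch, j:j + patch] = conv2_same(
                img[i:i + patch, j:j + patch].astype(float), k)
    return out
```

Inverting such a patchwise blur is exactly the non-uniform deblurring problem: no single global kernel can model it.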
{ "cite_N": [ "@cite_26", "@cite_4", "@cite_33", "@cite_7", "@cite_10", "@cite_9", "@cite_21", "@cite_13" ], "mid": [ "2106923440", "1598936309", "", "2043529138", "2002630316", "2167307343", "2044005793", "2132244934" ], "abstract": [ "This paper addresses how to model and correct image blur that arises when a camera undergoes ego motion while observing a distant scene. In particular, we discuss how the blurred image can be modeled as an integration of the clear scene under a sequence of planar projective transformations (i.e., homographies) that describe the camera's path. This projective motion path blur model is more effective at modeling the spatially varying motion blur exhibited by ego motion than conventional methods based on space-invariant blur kernels. To correct the blurred image, we describe how to modify the Richardson-Lucy (RL) algorithm to incorporate this new blur model. In addition, we show that our projective motion RL algorithm can incorporate state-of-the-art regularization priors to improve the deblurred results. The projective motion path blur model, along with the modified RL algorithm, is detailed, together with experimental results demonstrating its overall effectiveness. Statistical analysis on the algorithm's convergence properties and robustness to noise is also provided.", "We present a novel single image deblurring method to estimate spatially non-uniform blur that results from camera shake. We use existing spatially invariant deconvolution methods in a local and robust way to compute initial estimates of the latent image. The camera motion is represented as a Motion Density Function (MDF) which records the fraction of time spent in each discretized portion of the space of all possible camera poses. Spatially varying blur kernels are derived directly from the MDF. We show that 6D camera motion is well approximated by 3 degrees of motion (in-plane translation and rotation) and analyze the scope of this approximation. We present results on both synthetic and captured data. Our system out-performs current approaches which make the assumption of spatially invariant blur.", "", "Photographs taken in low-light conditions are often blurry as a result of camera shake, i.e. a motion of the camera while its shutter is open. Most existing deblurring methods model the observed blurry image as the convolution of a sharp image with a uniform blur kernel. However, we show that blur from camera shake is in general mostly due to the 3D rotation of the camera, resulting in a blur that can be significantly non-uniform across the image. We propose a new parametrized geometric model of the blurring process in terms of the rotational motion of the camera during exposure. This model is able to capture non-uniform blur in an image due to camera shake using a single global descriptor, and can be substituted into existing deblurring algorithms with only small modifications. To demonstrate its effectiveness, we apply this model to two deblurring problems; first, the case where a single blurry image is available, for which we examine both an approximate marginalization approach and a maximum a posteriori approach, and second, the case where a sharp but noisy image of the scene is available in addition to the blurry image. We show that our approach makes it possible to model and remove a wider class of blurs than previous approaches, including uniform blur as a special case, and demonstrate its effectiveness with experiments on synthetic and real images.", "Many blind motion deblur methods model the motion blur as a spatially invariant convolution process. However, motion blur caused by the camera movement in 3D space during shutter time often leads to spatially varying blurring effect over the image. In this paper, we proposed an efficient two-stage approach to remove spatially-varying motion blurring from a single photo. There are three main components in our approach: (i) a minimization method of estimating region-wise blur kernels by using both image information and correlations among neighboring kernels, (ii) an interpolation scheme of constructing pixel-wise blur matrix from region-wise blur kernels, and (iii) a non-blind deblurring method robust to kernel errors. The experiments showed that the proposed method outperformed the existing software based approaches on tested real images.", "We show in this paper that the success of previous maximum a posterior (MAP) based blur removal methods partly stems from their respective intermediate steps, which implicitly or explicitly create an unnatural representation containing salient image structures. We propose a generalized and mathematically sound L0 sparse expression, together with a new effective method, for motion deblurring. Our system does not require extra filtering during optimization and demonstrates fast energy decreasing, making a small number of iterations enough for convergence. It also provides a unified framework for both uniform and non-uniform motion deblurring. We extensively validate our method and show comparison with other approaches with respect to convergence speed, running time, and result quality.", "Most state-of-the-art dynamic scene deblurring methods based on accurate motion segmentation assume that motion blur is small or that the specific type of motion causing the blur is known. In this paper, we study a motion segmentation-free dynamic scene deblurring method, which is unlike other conventional methods. When the motion can be approximated to linear motion that is locally (pixel-wise) varying, we can handle various types of blur caused by camera shake, including out-of-plane motion, depth variation, radial distortion, and so on. Thus, we propose a new energy model simultaneously estimating motion flow and the latent image based on robust total variation (TV)-L1 model. This approach is necessary to handle abrupt changes in motion without segmentation. Furthermore, we address the problem of the traditional coarse-to-fine deblurring framework, which gives rise to artifacts when restoring small structures with distinct motion. We thus propose a novel kernel re-initialization method which reduces the error of motion flow propagated from a coarser level. Moreover, a highly effective convex optimization-based solution mitigating the computational difficulties of the TV-L1 model is established. Comparative experimental results on challenging real blurry images demonstrate the efficiency of the proposed method.", "Camera shake leads to non-uniform image blurs. State-of-the-art methods for removing camera shake model the blur as a linear combination of homographically transformed versions of the true image. While this is conceptually interesting, the resulting algorithms are computationally demanding. In this paper we develop a forward model based on the efficient filter flow framework, incorporating the particularities of camera shake, and show how an efficient algorithm for blur removal can be obtained. Comprehensive comparisons on a number of real-world blurry images show that our approach is not only substantially faster, but it also leads to better deblurring results." ] }
1503.00593
2951529889
In this paper, we address the problem of estimating and removing non-uniform motion blur from a single blurry image. We propose a deep learning approach to predicting the probabilistic distribution of motion blur at the patch level using a convolutional neural network (CNN). We further extend the candidate set of motion kernels predicted by the CNN using carefully designed image rotations. A Markov random field model is then used to infer a dense non-uniform motion blur field enforcing motion smoothness. Finally, motion blur is removed by a non-uniform deblurring model using patch-level image prior. Experimental evaluations show that our approach can effectively estimate and remove complex non-uniform motion blur that is not handled well by previous approaches.
Recently, there has been related work on learning-based deblurring approaches. @cite_22 proposes a discriminative deblurring approach using a cascade of Gaussian CRF models for uniform blur removal. @cite_14 trains a neural network denoiser to suppress noise during deconvolution. @cite_31 designs an image deconvolution neural network for non-blind deconvolution. These approaches focus on designing better learning-based models for uniform blur removal, whereas our CNN-based approach addresses the more challenging task of non-uniform motion blur estimation and removal.
{ "cite_N": [ "@cite_31", "@cite_14", "@cite_22" ], "mid": [ "2124964692", "1973567017", "2099628070" ], "abstract": [ "Many fundamental image-related problems involve deconvolution operators. Real blur degradation seldom complies with an ideal linear convolution model due to camera noise, saturation, image compression, to name a few. Instead of perfectly modeling outliers, which is rather challenging from a generative model perspective, we develop a deep convolutional neural network to capture the characteristics of degradation. We note directly applying existing deep neural networks does not produce reasonable results. Our solution is to establish the connection between traditional optimization-based schemes and a neural network architecture where a novel, separable structure is introduced as a reliable support for robust deconvolution against artifacts. Our network contains two submodules, both trained in a supervised manner with proper initialization. They yield decent performance on non-blind image deconvolution compared to previous generative-model based methods.", "Image deconvolution is the ill-posed problem of recovering a sharp image, given a blurry one generated by a convolution. In this work, we deal with space-invariant non-blind deconvolution. Currently, the most successful methods involve a regularized inversion of the blur in Fourier domain as a first step. This step amplifies and colors the noise, and corrupts the image information. In a second (and arguably more difficult) step, one then needs to remove the colored noise, typically using a cleverly engineered algorithm. However, the methods based on this two-step approach do not properly address the fact that the image information has been corrupted. In this work, we also rely on a two-step procedure, but learn the second step on a large dataset of natural images, using a neural network. We will show that this approach outperforms the current state-of-the-art on a large dataset of artificially blurred images. We demonstrate the practical applicability of our method in a real-world example with photographic out-of-focus blur.", "Non-blind deblurring is an integral component of blind approaches for removing image blur due to camera shake. Even though learning-based deblurring methods exist, they have been limited to the generative case and are computationally expensive. To this date, manually-defined models are thus most widely used, though limiting the attained restoration quality. We address this gap by proposing a discriminative approach for non-blind deblurring. One key challenge is that the blur kernel in use at test time is not known in advance. To address this, we analyze existing approaches that use half-quadratic regularization. From this analysis, we derive a discriminative model cascade for image deblurring. Our cascade model consists of a Gaussian CRF at each stage, based on the recently introduced regression tree fields. We train our model by loss minimization and use synthetically generated blur kernels to generate training data. Our experiments show that the proposed approach is efficient and yields state-of-the-art restoration quality on images corrupted with synthetic and real blur." ] }
1503.00658
1674425027
With double hashing, for a key @math , one generates two hash values @math and @math , and then uses combinations @math for @math to generate multiple hash values in the range @math from the initial two. For balanced allocations, keys are hashed into a hash table where each bucket can hold multiple keys, and each key is placed in the least loaded of @math choices. It has been shown previously that asymptotically the performance of double hashing and fully random hashing is the same in the balanced allocation paradigm using fluid limit methods. Here we extend a coupling argument used by Lueker and Molodowitch to show that double hashing and ideal uniform hashing are asymptotically equivalent in the setting of open address hash tables to the balanced allocation setting, providing further insight into this phenomenon. We also discuss the potential for and bottlenecks limiting the use of this approach for other multiple choice hashing schemes.
Of course, our work is also highly motivated by the chain of work @cite_8 @cite_14 @cite_0 @cite_16 on the classical question of the behavior of double hashing for open address hash tables, where empirical work had shown that the difference in performance, in terms of the average length of an unsuccessful search sequence, appeared negligible. Theoretically, the main result showed that for a table with @math cells and @math keys for a constant @math , the number of probed locations in an unsuccessful search was (up to lower order terms) @math for both double hashing and uniform hashing @cite_0 . We have not seen this methodology applied to other hashing schemes such as balanced allocations, although the issue of limited randomness is pervasive; recent examples include the study of @math -wise independent hash functions for linear probing for small constant @math @cite_11 @cite_1 .
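The double-hashing probe sequence described above, h1(x) + i·h2(x) mod n, can be sketched directly for open addressing; the two hash functions below are illustrative stand-ins (Python's built-in hash mixed with an arbitrary constant, with h2 forced odd so it is coprime to a power-of-two table size):

```python
class DoubleHashTable:
    """Open addressing where probe i for key x is (h1(x) + i*h2(x)) mod n."""

    def __init__(self, n=16):
        self.n = n                      # table size, assumed a power of two
        self.slots = [None] * n

    def _h1(self, key):
        return hash(key) % self.n

    def _h2(self, key):
        # Mix the key with a constant for a second hash value, then force
        # the step to be odd: an odd step is coprime to a power-of-two n,
        # so the probe sequence visits every slot exactly once.
        return (hash((key, 0x9E3779B9)) % self.n) | 1

    def probes(self, key):
        h1, h2 = self._h1(key), self._h2(key)
        return ((h1 + i * h2) % self.n for i in range(self.n))

    def insert(self, key):
        for idx in self.probes(key):
            if self.slots[idx] is None or self.slots[idx] == key:
                self.slots[idx] = key
                return idx
        raise RuntimeError("table full")

    def search(self, key):
        for idx in self.probes(key):
            if self.slots[idx] is None:
                return None             # unsuccessful search stops at a hole
            if self.slots[idx] == key:
                return idx
        return None
```

In the balanced-allocation setting of the abstract, the same i-th hash values would instead name d candidate buckets, with the key placed in the least loaded.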
{ "cite_N": [ "@cite_14", "@cite_8", "@cite_1", "@cite_0", "@cite_16", "@cite_11" ], "mid": [ "2055754268", "", "2953305855", "2030616523", "2124820199", "1968161494" ], "abstract": [ "Abstract In this paper we analyze the performance of double hashing , a well-known hashing algorithm in which we probe the hash table along arithmetic progressions where the initial element and the increment of the progression are chosen randomly and independently depending only on the key K of the search. We prove that double hashing is asymptotically equivalent to uniform probing for load factors α not exceeding a certain constant α₀ = 0.31…. Uniform hashing refers to a technique which exhibits no clustering and is known to be optimal in a certain sense. Our proof method has a different flavor from those previously used in algorithmic analysis. We begin by showing that the tail of the hypergeometric distribution a fixed percentage away from the mean is exponentially small. We use this result to prove that random subsets of the finite ring of integers modulo m of cardinality αm have always nearly the expected number of arithmetic progressions of length k , except with exponentially small probability. We then use this theorem to start up a process (called the extension process) of looking at snapshots of the table as it fills up with double hashing. Between steps of the extension process we can show that the effect of clustering is negligible, and that we therefore never depart too far from the truly random situation.", "", "We show that linear probing requires 5-independent hash functions for expected constant-time performance, matching an upper bound of [ STOC'07]. More precisely, we construct 4-independent hash functions yielding expected logarithmic search time. For (1+ε)-approximate minwise independence, we show that Ω(log 1/ε)-independent hash functions are required, matching an upper bound of [Indyk, SODA'99]. 
We also show that the very fast 2-independent multiply-shift scheme of Dietzfelbinger [STACS'96] fails badly in both applications.", "In [GS78] a deep and elegant analysis showed that double hashing was equivalent to the ideal uniform hashing up to a load factor of about 0.319. In this paper we give an analysis which extends this to load factors arbitrarily close to 1. We understand from [Ko86, Gu87] that Ajtai, Guibas, Komlos, and Szemeredi obtained this result in the first part of 1986; the analysis in this paper is of interest nonetheless because we demonstrate how a resampling technique can be used to obtain a remarkably simple proof.", "A multiple module heat exchanger having a serpentine path for one fluid to flow transversely through adjacent modules while a second fluid flows longitudinally therethrough. A guide pin carried by one module and a mating receptacle carried by a module adjacent thereto cooperate to permit relative longitudinal movement between modules while precluding relative transverse movement therebetween. The mating guide pin and receptacle are located at the central axis of adjacent modules whereby there may be a limited amount of pivotal movement between adjacent modules.", "Hashing with linear probing dates back to the 1950s and is among the most studied algorithms for storing (key, value) pairs. In recent years it has become one of the most important hash table organizations since it uses the cache of modern computers very well. Unfortunately, previous analyses rely either on complicated and space consuming hash functions, or on the unrealistic assumption of free access to a hash function with random and independent function values. Carter and Wegman, in their seminal paper on universal hashing, raised the question of extending their analysis to linear probing. However, we show in this paper that linear probing using a 2-wise independent hash function may have expected logarithmic cost per operation. 
Recently, Patrascu and Thorup have shown that 3- and 4-wise independent hash functions may also give rise to logarithmic expected query time. On the positive side, we show that 5-wise independence is enough to ensure constant expected time per operation. This resolves the question of finding a space and time efficient hash function that provably ensures good performance for hashing with linear probing." ] }
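The double-hashing flavour of balanced allocations described in this record is easy to simulate. The sketch below is our own toy illustration, not code from the cited papers: per-key hash values come from a seeded PRNG standing in for a real hash family, each key gets the d candidate buckets h1 + i*h2 mod n, and is placed in the least loaded one; a fully random baseline runs alongside for comparison.

```python
import random

def double_hash_choices(key, d, n):
    # Derive two per-key hash values h1, h2 from a seeded PRNG (a stand-in
    # for a real hash family), then form d candidates h1 + i*h2 mod n.
    rng = random.Random(key)
    h1 = rng.randrange(n)
    h2 = rng.randrange(1, n)  # nonzero increment
    return [(h1 + i * h2) % n for i in range(d)]

def balanced_allocation(num_keys, n, choose):
    # Place each key in the least loaded of its candidate buckets
    # and report the maximum bucket load.
    loads = [0] * n
    for key in range(num_keys):
        best = min(choose(key), key=lambda b: loads[b])
        loads[best] += 1
    return max(loads)

n, d, m = 1000, 2, 1000
max_double = balanced_allocation(m, n, lambda k: double_hash_choices(k, d, n))

rng = random.Random(1)
max_random = balanced_allocation(m, n, lambda k: [rng.randrange(n) for _ in range(d)])
```

With a thousand keys, a thousand buckets, and two choices, both variants typically finish with a maximum load of around four, consistent with the asymptotic equivalence discussed above.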
1503.00604
2949326040
We study the problem of group linkage: linking records that refer to entities in the same group. Applications for group linkage include finding businesses in the same chain, finding conference attendees from the same affiliation, finding players from the same team, etc. Group linkage faces challenges not present for traditional record linkage. First, although different members in the same group can share some similar global values of an attribute, they represent different entities so can also have distinct local values for the same or different attributes, requiring a high tolerance for value diversity. Second, groups can be huge (with tens of thousands of records), requiring high scalability even after using good blocking strategies. We present a two-stage algorithm: the first stage identifies cores containing records that are very likely to belong to the same group, while being robust to possible erroneous values; the second stage collects strong evidence from the cores and leverages it for merging more records into the same group, while being tolerant to differences in local values of an attribute. Experimental results show the high effectiveness and efficiency of our algorithm on various real-world data sets.
For record clustering in linkage, existing work may apply the transitive rule @cite_31 , or do match-and-merge @cite_3 , or reduce it to an optimization problem @cite_15 . Our work is different in that our core-identification algorithm aims at being robust to a few erroneous records; and our clustering algorithm emphasizes leveraging the strong evidence collected from the cores.
{ "cite_N": [ "@cite_31", "@cite_3", "@cite_15" ], "mid": [ "1612155886", "2117974736", "2148524305" ], "abstract": [ "The problem of merging multiple databases of information about common entities is frequently encountered in KDD and decision support applications in large commercial and government organizations. The problem we study is often called the Merge Purge problem and is difficult to solve both in scale and accuracy. Large repositories of data typically have numerous duplicate information entries about the same entities that are difficult to cull together without an intelligent “equational theory” that identifies equivalent items by a complex, domain-dependent matching process. We have developed a system for accomplishing this Data Cleansing task and demonstrate its use for cleansing lists of names of potential customers in a direct marketing-type application. Our results for statistically generated data are shown to be accurate and effective when processing the data multiple times using different keys for sorting on each successive pass. Combining results of individual passes using transitive closure over the independent results produces far more accurate results at lower cost. The system provides a rule programming module that is easy to program and quite good at finding duplicates especially in an environment with massive amounts of data. This paper details improvements in our system, and reports on the successful implementation for a real-world database that conclusively validates our results previously achieved for statistically generated data.", "Entity Resolution (ER) is the problem of identifying which records in a database refer to the same real-world entity. An exhaustive ER process involves computing the similarities between pairs of records, which can be very expensive for large datasets. 
Various blocking techniques can be used to enhance the performance of ER by dividing the records into blocks in multiple ways and only comparing records within the same block. However, most blocking techniques process blocks separately and do not exploit the results of other blocks. In this paper, we propose an iterative blocking framework where the ER results of blocks are reflected to subsequently processed blocks. Blocks are now iteratively processed until no block contains any more matching records. Compared to simple blocking, iterative blocking may achieve higher accuracy because reflecting the ER results of blocks to other blocks may generate additional record matches. Iterative blocking may also be more efficient because processing a block now saves the processing time for other blocks. We implement a scalable iterative blocking system and demonstrate that iterative blocking can be more accurate and efficient than blocking for large datasets.", "The presence of duplicate records is a major data quality concern in large databases. To detect duplicates, entity resolution also known as duplication detection or record linkage is used as a part of the data cleaning process to identify records that potentially refer to the same real-world entity. We present the Stringer system that provides an evaluation framework for understanding what barriers remain towards the goal of truly scalable and general purpose duplication detection algorithms. In this paper, we use Stringer to evaluate the quality of the clusters (groups of potential duplicates) obtained from several unconstrained clustering algorithms used in concert with approximate join techniques. Our work is motivated by the recent significant advancements that have made approximate join algorithms highly scalable. Our extensive evaluation reveals that some clustering algorithms that have never been considered for duplicate detection, perform extremely well in terms of both accuracy and scalability." ] }
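The transitive rule mentioned in this record — records that match pairwise are chained into one group — is commonly implemented with a union-find structure. A minimal sketch (hypothetical record ids; this illustrates only the clustering rule, not the paper's core-identification algorithm):

```python
def cluster_by_transitivity(num_records, matching_pairs):
    # Union-find: records linked by a chain of pairwise matches end up
    # in the same group, i.e. the transitive closure of the match relation.
    parent = list(range(num_records))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in matching_pairs:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    groups = {}
    for r in range(num_records):
        groups.setdefault(find(r), []).append(r)
    return list(groups.values())

# Records 0-1-2 chain together; 3 and 4 match; 5 is a singleton.
groups = cluster_by_transitivity(6, [(0, 1), (1, 2), (3, 4)])
```

Note that this rule alone is brittle: one erroneous match merges two whole groups, which is exactly the failure mode the core-based approach above is designed to resist.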
1503.00848
2168804568
We propose a unified approach for bottom-up hierarchical image segmentation and object proposal generation for recognition, called Multiscale Combinatorial Grouping (MCG). For this purpose, we first develop a fast normalized cuts algorithm. We then propose a high-performance hierarchical segmenter that makes effective use of multiscale information. Finally, we propose a grouping strategy that combines our multiscale regions into highly-accurate object proposals by exploring efficiently their combinatorial space. We also present Single-scale Combinatorial Grouping (SCG), a faster version of MCG that produces competitive proposals in under five seconds per image. We conduct an extensive and comprehensive empirical validation on the BSDS500, SegVOC12, SBD, and COCO datasets, showing that MCG produces state-of-the-art contours, hierarchical regions, and object proposals.
* The efficient computation of normalized-cuts eigenvectors has been the subject of recent work, as it is often the computational bottleneck in grouping algorithms. Taylor @cite_47 presented a technique for using a simple watershed oversegmentation to reduce the size of the eigenvector problem, sacrificing accuracy for speed. We take a similar approach of solving the eigenvector problem in a reduced space, though we use simple image-pyramid operations on the affinity matrix (instead of a separate segmentation algorithm) and we see no loss in performance despite a 20 @math speed improvement. Maire and Yu @cite_45 presented a novel multigrid solver for producing eigenvectors at multiple scales, which speeds up fine-scale eigenvector computation by leveraging coarse-scale solutions. Our technique also uses the scale-space structure of an image, but instead of solving the problem at multiple scales, we simply reduce the scale of the problem, solve it at a reduced scale, and then upsample the solution while preserving the structure of the image. As such, our technique is faster and much simpler, requiring only a few lines of code wrapped around a standard sparse eigensolver.
{ "cite_N": [ "@cite_47", "@cite_45" ], "mid": [ "2027645416", "2108944208" ], "abstract": [ "In this paper we explore approaches to accelerating segmentation and edge detection algorithms based on the framework. The paper characterizes the performance of a simple but effective edge detection scheme which can be computed rapidly and offers performance that is competitive with the pB detector. The paper also describes an approach for computing a reduced order normalized cut that captures the essential features of the original problem but can be computed in less than half a second on a standard computing platform.", "We reexamine the role of multiscale cues in image segmentation using an architecture that constructs a globally coherent scale-space output representation. This characteristic is in contrast to many existing works on bottom-up segmentation, which prematurely compress information into a single scale. The architecture is a standard extension of Normalized Cuts from an image plane to an image pyramid, with cross-scale constraints enforcing consistency in the solution while allowing emergence of coarse-to-fine detail. We observe that multiscale processing, in addition to improving segmentation quality, offers a route by which to speed computation. We make a significant algorithmic advance in the form of a custom multigrid eigensolver for constrained Angular Embedding problems possessing coarse-to-fine structure. Multiscale Normalized Cuts is a special case. Our solver builds atop recent results on randomized matrix approximation, using a novel interpolation operation to mold its computational strategy according to cross-scale constraints in the problem definition. Applying our solver to multiscale segmentation problems demonstrates speedup by more than an order of magnitude. This speedup is at the algorithmic level and carries over to any implementation target." ] }
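The reduced-space eigenvector idea discussed in this record can be sketched in a few lines. In this toy version (our own illustration; a plain block-averaging pooling operator stands in for the paper's image-pyramid operations, which is an assumption of the sketch), the affinity matrix is pooled to a coarse scale, the normalized-Laplacian eigenproblem is solved there, and the eigenvectors are upsampled back:

```python
import numpy as np

def reduced_scale_eigenvectors(A, factor=2, k=2):
    # Pool the n x n affinity matrix A down by `factor` with a
    # block-averaging operator P, solve the normalized-Laplacian
    # eigenproblem at the coarse scale, then upsample the eigenvectors.
    n = A.shape[0]
    m = n // factor
    P = np.zeros((m, n))
    for i in range(m):
        P[i, i * factor:(i + 1) * factor] = 1.0 / factor
    A_small = P @ A @ P.T

    d = A_small.sum(axis=1)
    L = np.diag(d) - A_small                 # unnormalized Laplacian
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    w, v = np.linalg.eigh(D_inv_sqrt @ L @ D_inv_sqrt)

    # eigh returns eigenvalues in ascending order, so the first k columns
    # are the k smallest eigenvectors; upsample by repetition
    # (nearest-neighbour interpolation).
    return np.repeat(v[:, :k], factor, axis=0)

# Demo on a tiny uniform affinity: 8 "pixels" pooled down to 4.
V = reduced_scale_eigenvectors(np.ones((8, 8)), factor=2, k=2)
```

The design point is that the expensive eigensolve happens on an m x m matrix rather than n x n, at the cost of resolution in the recovered eigenvectors.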
1503.00848
2168804568
We propose a unified approach for bottom-up hierarchical image segmentation and object proposal generation for recognition, called Multiscale Combinatorial Grouping (MCG). For this purpose, we first develop a fast normalized cuts algorithm. We then propose a high-performance hierarchical segmenter that makes effective use of multiscale information. Finally, we propose a grouping strategy that combines our multiscale regions into highly-accurate object proposals by exploring efficiently their combinatorial space. We also present Single-scale Combinatorial Grouping (SCG), a faster version of MCG that produces competitive proposals in under five seconds per image. We conduct an extensive and comprehensive empirical validation on the BSDS500, SegVOC12, SBD, and COCO datasets, showing that MCG produces state-of-the-art contours, hierarchical regions, and object proposals.
Among the former, Alexe et al. @cite_18 propose an objectness measure to score randomly-sampled image windows based on low-level features computed on the superpixels of @cite_17 . Manen et al. @cite_1 propose to use the Randomized Prim's algorithm, Zitnick et al. @cite_36 group contours directly to produce object windows, and Cheng et al. @cite_44 generate box proposals at 300 images per second. In contrast to these approaches, we focus on the finer-grained task of pixel-accurate object extraction, rather than on window selection. However, by just taking the bounding box around our segmented proposals, our results are also state of the art as window proposals.
{ "cite_N": [ "@cite_18", "@cite_36", "@cite_1", "@cite_44", "@cite_17" ], "mid": [ "2066624635", "7746136", "2121660792", "2010181071", "1999478155" ], "abstract": [ "We present a generic objectness measure, quantifying how likely it is for an image window to contain an object of any class. We explicitly train it to distinguish objects with a well-defined boundary in space, such as cows and telephones, from amorphous background elements, such as grass and road. The measure combines in a Bayesian framework several image cues measuring characteristics of objects, such as appearing different from their surroundings and having a closed boundary. These include an innovative cue to measure the closed boundary characteristic. In experiments on the challenging PASCAL VOC 07 dataset, we show this new cue to outperform a state-of-the-art saliency measure, and the combined objectness measure to perform better than any cue alone. We also compare to interest point operators, a HOG detector, and three recent works aiming at automatic object segmentation. Finally, we present two applications of objectness. In the first, we sample a small number of windows according to their objectness probability and give an algorithm to employ them as location priors for modern class-specific object detectors. As we show experimentally, this greatly reduces the number of windows evaluated by the expensive class-specific model. In the second application, we use objectness as a complementary score in addition to the class-specific model, which leads to fewer false positives. As shown in several recent papers, objectness can act as a valuable focus of attention mechanism in many other applications operating on image windows, including weakly supervised learning of object categories, unsupervised pixelwise segmentation, and object tracking in video. Computing objectness is very efficient and takes only about 4 sec. 
per image.", "The use of object proposals is an effective recent approach for increasing the computational efficiency of object detection. We propose a novel method for generating object bounding box proposals using edges. Edges provide a sparse yet informative representation of an image. Our main observation is that the number of contours that are wholly contained in a bounding box is indicative of the likelihood of the box containing an object. We propose a simple box objectness score that measures the number of edges that exist in the box minus those that are members of contours that overlap the box’s boundary. Using efficient data structures, millions of candidate boxes can be evaluated in a fraction of a second, returning a ranked set of a few thousand top-scoring proposals. Using standard metrics, we show results that are significantly more accurate than the current state-of-the-art while being faster to compute. In particular, given just 1000 proposals we achieve over 96% object recall at overlap threshold of 0.5 and over 75% recall at the more challenging overlap of 0.7. Our approach runs in 0.25 seconds and we additionally demonstrate a near real-time variant with only minor loss in accuracy.", "Generic object detection is the challenging task of proposing windows that localize all the objects in an image, regardless of their classes. Such detectors have recently been shown to benefit many applications such as speeding-up class-specific object detection, weakly supervised learning of object detectors and object discovery. In this paper, we introduce a novel and very efficient method for generic object detection based on a randomized version of Prim's algorithm. Using the connectivity graph of an image's super pixels, with weights modelling the probability that neighbouring super pixels belong to the same object, the algorithm generates random partial spanning trees with large expected sum of edge weights. 
Object localizations are proposed as bounding-boxes of those partial trees. Our method has several benefits compared to the state-of-the-art. Thanks to the efficiency of Prim's algorithm, it samples proposals very quickly: 1000 proposals are obtained in about 0.7s. With proposals bound to super pixel boundaries yet diversified by randomization, it yields very high detection rates and windows that tightly fit objects. In extensive experiments on the challenging PASCAL VOC 2007 and 2012 and SUN2012 benchmark datasets, we show that our method improves over state-of-the-art competitors for a wide range of evaluation scenarios.", "Training a generic objectness measure to produce a small set of candidate object windows has been shown to speed up the classical sliding window object detection paradigm. We observe that generic objects with well-defined closed boundary can be discriminated by looking at the norm of gradients, with a suitable resizing of their corresponding image windows into a small fixed size. Based on this observation and computational reasons, we propose to resize the window to 8 × 8 and use the norm of the gradients as a simple 64D feature to describe it, for explicitly training a generic objectness measure. We further show how the binarized version of this feature, namely binarized normed gradients (BING), can be used for efficient objectness estimation, which requires only a few atomic operations (e.g. ADD, BITWISE SHIFT, etc.). Experiments on the challenging PASCAL VOC 2007 dataset show that our method efficiently (300fps on a single laptop CPU) generates a small set of category-independent, high quality object windows, yielding 96.2% object detection rate (DR) with 1,000 proposals. Increasing the numbers of proposals and color spaces for computing BING features, our performance can be further improved to 99.5% DR.", "This paper addresses the problem of segmenting an image into regions. 
We define a predicate for measuring the evidence for a boundary between two regions using a graph-based representation of the image. We then develop an efficient segmentation algorithm based on this predicate, and show that although this algorithm makes greedy decisions it produces segmentations that satisfy global properties. We apply the algorithm to image segmentation using two different kinds of local neighborhoods in constructing the graph, and illustrate the results with both real and synthetic images. The algorithm runs in time nearly linear in the number of graph edges and is also fast in practice. An important characteristic of the method is its ability to preserve detail in low-variability image regions while ignoring detail in high-variability regions." ] }
1503.00848
2168804568
We propose a unified approach for bottom-up hierarchical image segmentation and object proposal generation for recognition, called Multiscale Combinatorial Grouping (MCG). For this purpose, we first develop a fast normalized cuts algorithm. We then propose a high-performance hierarchical segmenter that makes effective use of multiscale information. Finally, we propose a grouping strategy that combines our multiscale regions into highly-accurate object proposals by exploring efficiently their combinatorial space. We also present Single-scale Combinatorial Grouping (SCG), a faster version of MCG that produces competitive proposals in under five seconds per image. We conduct an extensive and comprehensive empirical validation on the BSDS500, SegVOC12, SBD, and COCO datasets, showing that MCG produces state-of-the-art contours, hierarchical regions, and object proposals.
Among the methods that produce segmented proposals, Carreira and Sminchisescu @cite_4 hypothesize a set of placements of fore- and background seeds and, for each configuration, solve a constrained parametric min-cut (CPMC) problem to generate a pool of object hypotheses. Endres and Hoiem @cite_21 base their category-independent object proposals on an iterative generation of a hierarchy of regions, based on the contour detector of @cite_37 and occlusion boundaries of @cite_6 . Kim and Grauman @cite_3 propose to match parts of the shape of exemplar objects, regardless of their class, to detected contours by @cite_37 . They infer the presence and shape of a proposal object by adapting the matched object to the computed superpixels.
{ "cite_N": [ "@cite_37", "@cite_4", "@cite_21", "@cite_3", "@cite_6" ], "mid": [ "2110158442", "2046382188", "2035784046", "1501467284", "2080920426" ], "abstract": [ "This paper investigates two fundamental problems in computer vision: contour detection and image segmentation. We present state-of-the-art algorithms for both of these tasks. Our contour detector combines multiple local cues into a globalization framework based on spectral clustering. Our segmentation algorithm consists of generic machinery for transforming the output of any contour detector into a hierarchical region tree. In this manner, we reduce the problem of image segmentation to that of contour detection. Extensive experimental evaluation demonstrates that both our contour detection and segmentation methods significantly outperform competing algorithms. The automatically generated hierarchical segmentations can be interactively refined by user-specified annotations. Computation at multiple image resolutions provides a means of coupling our system to recognition applications.", "We present a novel framework to generate and rank plausible hypotheses for the spatial extent of objects in images using bottom-up computational processes and mid-level selection cues. The object hypotheses are represented as figure-ground segmentations, and are extracted automatically, without prior knowledge of the properties of individual object classes, by solving a sequence of Constrained Parametric Min-Cut problems (CPMC) on a regular image grid. In a subsequent step, we learn to rank the corresponding segments by training a continuous model to predict how likely they are to exhibit real-world regularities (expressed as putative overlap with ground truth) based on their mid-level region properties, then diversify the estimated overlap score using maximum marginal relevance measures. 
We show that this algorithm significantly outperforms the state of the art for low-level segmentation in the VOC 2009 and 2010 data sets. In our companion papers [1], [2], we show that the algorithm can be used, successfully, in a segmentation-based visual object category recognition pipeline. This architecture ranked first in the VOC2009 and VOC2010 image segmentation and labeling challenges.", "We propose a category-independent method to produce a bag of regions and rank them, such that top-ranked regions are likely to be good segmentations of different objects. Our key objectives are completeness and diversity: Every object should have at least one good proposed region, and a diverse set should be top-ranked. Our approach is to generate a set of segmentations by performing graph cuts based on a seed region and a learned affinity function. Then, the regions are ranked using structured learning based on various cues. Our experiments on the Berkeley Segmentation Data Set and Pascal VOC 2011 demonstrate our ability to find most objects within a small bag of proposed regions.", "We introduce a category-independent shape prior for object segmentation. Existing shape priors assume class-specific knowledge, and thus are restricted to cases where the object class is known in advance. The main insight of our approach is that shapes are often shared between objects of different categories. To exploit this \"shape sharing\" phenomenon, we develop a non-parametric prior that transfers object shapes from an exemplar database to a test image based on local shape matching. The transferred shape priors are then enforced in a graph-cut formulation to produce a pool of object segment hypotheses. Unlike previous multiple segmentation methods, our approach benefits from global shape cues; unlike previous top-down methods, it assumes no class-specific training and thus enhances segmentation even for unfamiliar categories. 
On the challenging PASCAL 2010 and Berkeley Segmentation datasets, we show it outperforms the state-of-the-art in bottom-up or category-independent segmentation.", "Occlusion reasoning is a fundamental problem in computer vision. In this paper, we propose an algorithm to recover the occlusion boundaries and depth ordering of free-standing structures in the scene. Rather than viewing the problem as one of pure image processing, our approach employs cues from an estimated surface layout and applies Gestalt grouping principles using a conditional random field (CRF) model. We propose a hierarchical segmentation process, based on agglomerative merging, that re-estimates boundary strength as the segmentation progresses. Our experiments on the Geometric Context dataset validate our choices for features, our iterative refinement of classifiers, and our CRF model. In experiments on the Berkeley Segmentation Dataset, PASCAL VOC 2008, and LabelMe, we also show that the trained algorithm generalizes to other datasets and can be used as an object boundary predictor with figure ground labels." ] }
1503.00848
2168804568
We propose a unified approach for bottom-up hierarchical image segmentation and object proposal generation for recognition, called Multiscale Combinatorial Grouping (MCG). For this purpose, we first develop a fast normalized cuts algorithm. We then propose a high-performance hierarchical segmenter that makes effective use of multiscale information. Finally, we propose a grouping strategy that combines our multiscale regions into highly-accurate object proposals by exploring efficiently their combinatorial space. We also present Single-scale Combinatorial Grouping (SCG), a faster version of MCG that produces competitive proposals in under five seconds per image. We conduct an extensive and comprehensive empirical validation on the BSDS500, SegVOC12, SBD, and COCO datasets, showing that MCG produces state-of-the-art contours, hierarchical regions, and object proposals.
Uijlings et al. @cite_11 present a selective search algorithm based on segmentation. Starting with the superpixels of @cite_17 for a variety of color spaces, they produce a set of segmentation hierarchies by region merging, which are used to produce a set of object proposals. While we also take advantage of different hierarchies to gain diversity, we leverage multiscale information rather than different color spaces.
{ "cite_N": [ "@cite_17", "@cite_11" ], "mid": [ "1999478155", "2088049833" ], "abstract": [ "This paper addresses the problem of segmenting an image into regions. We define a predicate for measuring the evidence for a boundary between two regions using a graph-based representation of the image. We then develop an efficient segmentation algorithm based on this predicate, and show that although this algorithm makes greedy decisions it produces segmentations that satisfy global properties. We apply the algorithm to image segmentation using two different kinds of local neighborhoods in constructing the graph, and illustrate the results with both real and synthetic images. The algorithm runs in time nearly linear in the number of graph edges and is also fast in practice. An important characteristic of the method is its ability to preserve detail in low-variability image regions while ignoring detail in high-variability regions.", "This paper addresses the problem of generating possible object locations for use in object recognition. We introduce selective search which combines the strength of both an exhaustive search and segmentation. Like segmentation, we use the image structure to guide our sampling process. Like exhaustive search, we aim to capture all possible object locations. Instead of a single technique to generate possible object locations, we diversify our search and use a variety of complementary image partitionings to deal with as many image conditions as possible. Our selective search results in a small set of data-driven, class-independent, high quality locations, yielding 99% recall and a Mean Average Best Overlap of 0.879 at 10,097 locations. The reduced number of locations compared to an exhaustive search enables the use of stronger machine learning techniques and stronger appearance models for object recognition. In this paper we show that our selective search enables the use of the powerful Bag-of-Words model for recognition. 
The selective search software is made publicly available (Software: http://disi.unitn.it/~uijlings/SelectiveSearch.html )." ] }
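The selective-search procedure described in this record — greedy, similarity-driven region merging that records every intermediate region as a candidate proposal — can be illustrated with a toy implementation. This is our own sketch: region features are plain pixel-id sets and the similarity measure is a size-only placeholder, not the paper's colour/texture/size/fill measures.

```python
def greedy_merge_hierarchy(regions, similarity):
    # regions: dict region_id -> feature (here, a set of pixel ids).
    # Repeatedly merge the most similar pair, recording every merged
    # region as a candidate object proposal, until one region remains.
    regions = dict(regions)
    proposals = []
    next_id = max(regions) + 1
    while len(regions) > 1:
        a, b = max(
            ((a, b) for a in regions for b in regions if a < b),
            key=lambda p: similarity(regions[p[0]], regions[p[1]]),
        )
        merged = regions.pop(a) | regions.pop(b)
        regions[next_id] = merged
        proposals.append(merged)
        next_id += 1
    return proposals

# Toy similarity: prefer merging small regions first, mimicking only the
# size component of the selective-search measure.
def sim(r1, r2):
    return -(len(r1) + len(r2))

props = greedy_merge_hierarchy({0: {0}, 1: {1}, 2: {2, 3, 4}}, sim)
```

Running several such merges under complementary similarity measures and colour spaces, as the paper does, is what yields the diverse pool of proposals.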
1503.00848
2168804568
We propose a unified approach for bottom-up hierarchical image segmentation and object proposal generation for recognition, called Multiscale Combinatorial Grouping (MCG). For this purpose, we first develop a fast normalized cuts algorithm. We then propose a high-performance hierarchical segmenter that makes effective use of multiscale information. Finally, we propose a grouping strategy that combines our multiscale regions into highly-accurate object proposals by exploring efficiently their combinatorial space. We also present Single-scale Combinatorial Grouping (SCG), a faster version of MCG that produces competitive proposals in under five seconds per image. We conduct an extensive and comprehensive empirical validation on the BSDS500, SegVOC12, SBD, and COCO datasets, showing that MCG produces state-of-the-art contours, hierarchical regions, and object proposals.
Recently, two works have proposed training a cascade of classifiers to learn which sets of regions should be merged to form objects. Ren and Shakhnarovich @cite_43 produce full region hierarchies by iteratively merging pairs of regions and adapting the classifiers to different scales. Weiss and Taskar @cite_46 additionally specialize the classifiers to the size and class of the annotated instances to produce object proposals.
{ "cite_N": [ "@cite_43", "@cite_46" ], "mid": [ "2040072996", "2088432363" ], "abstract": [ "We propose a hierarchical segmentation algorithm that starts with a very fine over-segmentation and gradually merges regions using a cascade of boundary classifiers. This approach allows the weights of region and boundary features to adapt to the segmentation scale at which they are applied. The stages of the cascade are trained sequentially, with asymmetric loss to maximize boundary recall. On six segmentation data sets, our algorithm achieves best performance under most region-quality measures, and does it with fewer segments than the prior work. Our algorithm is also highly competitive in a dense over-segmentation (superpixel) regime under boundary-based measures.", "We propose SCALPEL, a flexible method for object segmentation that integrates rich region-merging cues with mid- and high-level information about object layout, class, and scale into the segmentation process. Unlike competing approaches, SCALPEL uses a cascade of bottom-up segmentation models that is capable of learning to ignore boundaries early on, yet use them as a stopping criterion once the object has been mostly segmented. Furthermore, we show how such cascades can be learned efficiently. When paired with a novel method that generates better localized shape priors than our competitors, our method leads to a concise, accurate set of segmentation proposals; these proposals are more accurate on the PASCAL VOC2010 dataset than state-of-the-art methods that use re-ranking to filter much larger bags of proposals. The code for our algorithm is available online." ] }
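The iterative merging described in this record can be sketched as a greedy loop that repeatedly fuses the most compatible pair of adjacent regions. In the cited work the pairwise affinity is a learned boundary classifier; here a hand-crafted intensity-similarity stand-in is used, and the region features, names, and toy inputs are illustrative.

```python
# Greedy hierarchical region merging, a minimal sketch of the cascade-style
# approaches above. affinity() stands in for a learned boundary classifier.

def affinity(r1, r2):
    # Illustrative stand-in: prefer merging regions with similar mean value.
    return -abs(r1["mean"] - r2["mean"])

def greedy_merge(regions, adjacency, n_target):
    regions = {k: dict(v) for k, v in regions.items()}
    adjacency = {k: set(v) for k, v in adjacency.items()}
    while len(regions) > n_target:
        pairs = [(a, b) for a in adjacency for b in adjacency[a] if a < b]
        a, b = max(pairs, key=lambda p: affinity(regions[p[0]], regions[p[1]]))
        # Merge b into a: area-weighted mean, union of neighbours.
        ra, rb = regions[a], regions[b]
        area = ra["area"] + rb["area"]
        ra["mean"] = (ra["mean"] * ra["area"] + rb["mean"] * rb["area"]) / area
        ra["area"] = area
        for n in adjacency.pop(b):
            adjacency[n].discard(b)
            if n != a:
                adjacency[n].add(a)
                adjacency[a].add(n)
        adjacency[a].discard(a)
        del regions[b]
    return regions

regions = {1: {"mean": 10.0, "area": 4}, 2: {"mean": 11.0, "area": 4},
           3: {"mean": 200.0, "area": 4}}
adjacency = {1: {2}, 2: {1, 3}, 3: {2}}
merged = greedy_merge(regions, adjacency, n_target=2)
assert set(merged) == {1, 3}  # the two similar regions (1 and 2) merged first
```

Stopping the loop at successively coarser `n_target` values yields the region hierarchy that these methods score.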
1503.00848
2168804568
We propose a unified approach for bottom-up hierarchical image segmentation and object proposal generation for recognition, called Multiscale Combinatorial Grouping (MCG). For this purpose, we first develop a fast normalized cuts algorithm. We then propose a high-performance hierarchical segmenter that makes effective use of multiscale information. Finally, we propose a grouping strategy that combines our multiscale regions into highly-accurate object proposals by exploring efficiently their combinatorial space. We also present Single-scale Combinatorial Grouping (SCG), a faster version of MCG that produces competitive proposals in under five seconds per image. We conduct an extensive and comprehensive empirical validation on the BSDS500, SegVOC12, SBD, and COCO datasets, showing that MCG produces state-of-the-art contours, hierarchical regions, and object proposals.
Malisiewicz and Efros @cite_31 took one of the first steps towards combinatorial grouping by running multiple segmenters with different parameters and merging up to three adjacent regions. In @cite_33 , another step was taken by considering hierarchical segmentations at three different scales and combining pairs and triplets of adjacent regions from the two coarser scales to produce object proposals.
{ "cite_N": [ "@cite_31", "@cite_33" ], "mid": [ "2009685382", "2115150266" ], "abstract": [ "Sliding window scanning is the dominant paradigm in object recognition research today. But while much success has been reported in detecting several rectangular-shaped object classes (i.e. faces, cars, pedestrians), results have been much less impressive for more general types of objects. Several researchers have advocated the use of image segmentation as a way to get a better spatial support for objects. In this paper, our aim is to address this issue by studying the following two questions: 1) how important is good spatial support for recognition? 2) can segmentation provide better spatial support for objects? To answer the first, we compare recognition performance using ground-truth segmentation vs. bounding boxes. To answer the second, we use the multiple segmentation approach to evaluate how close can real segments approach the ground-truth for real objects, and at what cost. Our results demonstrate the importance of finding the right spatial support for objects, and the feasibility of doing so without excessive computational burden.", "We address the problem of segmenting and recognizing objects in real world images, focusing on challenging articulated categories such as humans and other animals. For this purpose, we propose a novel design for region-based object detectors that integrates efficiently top-down information from scanning-windows part models and global appearance cues. Our detectors produce class-specific scores for bottom-up regions, and then aggregate the votes of multiple overlapping candidates through pixel classification. We evaluate our approach on the PASCAL segmentation challenge, and report competitive performance with respect to current leading techniques. On VOC2010, our method obtains the best results in 6 20 categories and the highest performance on articulated objects." ] }
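The combinatorial step described above, forming proposals from singletons, pairs, and triplets of adjacent regions, can be sketched directly. The adjacency graph and helper names below are toy illustrations, not the cited systems.

```python
# Enumerate candidate object proposals as connected subsets of up to three
# regions in a region-adjacency graph. Illustrative sketch only.
from itertools import combinations

def connected(subset, adjacency):
    """Check that a set of regions is connected in the adjacency graph."""
    subset = set(subset)
    seen, stack = {min(subset)}, [min(subset)]
    while stack:
        r = stack.pop()
        for n in adjacency.get(r, ()):
            if n in subset and n not in seen:
                seen.add(n)
                stack.append(n)
    return seen == subset

def proposals(regions, adjacency, max_size=3):
    out = []
    for k in range(1, max_size + 1):
        for combo in combinations(sorted(regions), k):
            if connected(combo, adjacency):
                out.append(frozenset(combo))
    return out

adjacency = {1: {2}, 2: {1, 3}, 3: {2}, 4: set()}  # region 4 is isolated
props = proposals([1, 2, 3, 4], adjacency)
# Singletons {1},{2},{3},{4}; connected pairs {1,2},{2,3}; triplet {1,2,3}.
assert len(props) == 7
```

Restricting combinations to adjacent regions is what keeps this space tractable compared with enumerating all subsets.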
1503.00848
2168804568
We propose a unified approach for bottom-up hierarchical image segmentation and object proposal generation for recognition, called Multiscale Combinatorial Grouping (MCG). For this purpose, we first develop a fast normalized cuts algorithm. We then propose a high-performance hierarchical segmenter that makes effective use of multiscale information. Finally, we propose a grouping strategy that combines our multiscale regions into highly-accurate object proposals by exploring efficiently their combinatorial space. We also present Single-scale Combinatorial Grouping (SCG), a faster version of MCG that produces competitive proposals in under five seconds per image. We conduct an extensive and comprehensive empirical validation on the BSDS500, SegVOC12, SBD, and COCO datasets, showing that MCG produces state-of-the-art contours, hierarchical regions, and object proposals.
The most recent wave of object proposal algorithms is represented by @cite_5 , @cite_12 , and @cite_2 , which all preserve the quality of the seminal proposal works while improving speed considerably. Krähenbühl and Koltun @cite_5 find object proposals by identifying critical level sets in geodesic distance transforms, based on seeds placed at learnt locations in the image. Rantalankila et al. @cite_12 perform a global and local search in the space of sets of superpixels. Humayun et al. @cite_2 reuse a graph to perform many parametric min-cuts over different seeds in order to speed up the process.
{ "cite_N": [ "@cite_5", "@cite_12", "@cite_2" ], "mid": [ "261873710", "2008541429", "" ], "abstract": [ "We present an approach for identifying a set of candidate objects in a given image. This set of candidates can be used for object recognition, segmentation, and other object-based image parsing tasks. To generate the proposals, we identify critical level sets in geodesic distance transforms computed for seeds placed in the image. The seeds are placed by specially trained classifiers that are optimized to discover objects. Experiments demonstrate that the presented approach achieves significantly higher accuracy than alternative approaches, at a fraction of the computational cost.", "We present a method for generating object segmentation proposals from groups of superpixels. The goal is to propose accurate segmentations for all objects of an image. The proposed object hypotheses can be used as input to object detection systems and thereby improve efficiency by replacing exhaustive search. The segmentations are generated in a class-independent manner and therefore the computational cost of the approach is independent of the number of object classes. Our approach combines both global and local search in the space of sets of superpixels. The local search is implemented by greedily merging adjacent pairs of superpixels to build a bottom-up segmentation hierarchy. The regions from such a hierarchy directly provide a part of our region proposals. The global search provides the other part by performing a set of graph cut segmentations on a superpixel graph obtained from an intermediate level of the hierarchy. The parameters of the graph cut problems are learnt in such a manner that they provide complementary sets of regions. Experiments with Pascal VOC images show that we reach state-of-the-art with greatly reduced computational cost.", "" ] }
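The geodesic distance transform at the core of the first method above can be sketched with Dijkstra's algorithm on a pixel grid: distance accumulates intensity differences along paths from a seed, so level sets of the distance tend to stop at strong edges. The toy image and function name are illustrative.

```python
# Geodesic distance transform on a 4-connected grid via Dijkstra's
# algorithm. A minimal sketch of the primitive used by geodesic proposals.
import heapq

def geodesic_distance(img, seed):
    h, w = len(img), len(img[0])
    dist = {seed: 0.0}
    heap = [(0.0, seed)]
    while heap:
        d, (y, x) = heapq.heappop(heap)
        if d > dist.get((y, x), float("inf")):
            continue  # stale heap entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                nd = d + abs(img[ny][nx] - img[y][x])
                if nd < dist.get((ny, nx), float("inf")):
                    dist[(ny, nx)] = nd
                    heapq.heappush(heap, (nd, (ny, nx)))
    return dist

# A bright square on a dark background: from a seed inside the square, the
# interior stays at distance 0 while crossing the edge costs the full jump.
img = [[0, 0, 0, 0],
       [0, 9, 9, 0],
       [0, 9, 9, 0],
       [0, 0, 0, 0]]
dist = geodesic_distance(img, (1, 1))
inside = {(y, x) for (y, x), d in dist.items() if d < 9}
assert inside == {(1, 1), (1, 2), (2, 1), (2, 2)}
```

Thresholding the distance at a critical level set, as the cited work does with learned seeds, carves out exactly the object-like region around the seed.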
1503.00848
2168804568
We propose a unified approach for bottom-up hierarchical image segmentation and object proposal generation for recognition, called Multiscale Combinatorial Grouping (MCG). For this purpose, we first develop a fast normalized cuts algorithm. We then propose a high-performance hierarchical segmenter that makes effective use of multiscale information. Finally, we propose a grouping strategy that combines our multiscale regions into highly-accurate object proposals by exploring efficiently their combinatorial space. We also present Single-scale Combinatorial Grouping (SCG), a faster version of MCG that produces competitive proposals in under five seconds per image. We conduct an extensive and comprehensive empirical validation on the BSDS500, SegVOC12, SBD, and COCO datasets, showing that MCG produces state-of-the-art contours, hierarchical regions, and object proposals.
A substantial difference between our approach and previous work is that, instead of relying on pre-computed hierarchies or superpixels, we propose a unified approach that produces and groups high-quality multiscale regions. With respect to the combinatorial approaches of @cite_31 @cite_33 , our main contribution is to develop efficient algorithms that explore a much larger combinatorial space by taking into account a set of object examples, thus increasing the likelihood of having complete objects in the pool of proposals. Our approach therefore has the flexibility to adapt to specific applications and types of objects, and can produce proposals at any trade-off between their number and their accuracy.
{ "cite_N": [ "@cite_31", "@cite_33" ], "mid": [ "2009685382", "2115150266" ], "abstract": [ "Sliding window scanning is the dominant paradigm in object recognition research today. But while much success has been reported in detecting several rectangular-shaped object classes (i.e. faces, cars, pedestrians), results have been much less impressive for more general types of objects. Several researchers have advocated the use of image segmentation as a way to get a better spatial support for objects. In this paper, our aim is to address this issue by studying the following two questions: 1) how important is good spatial support for recognition? 2) can segmentation provide better spatial support for objects? To answer the first, we compare recognition performance using ground-truth segmentation vs. bounding boxes. To answer the second, we use the multiple segmentation approach to evaluate how close can real segments approach the ground-truth for real objects, and at what cost. Our results demonstrate the importance of finding the right spatial support for objects, and the feasibility of doing so without excessive computational burden.", "We address the problem of segmenting and recognizing objects in real world images, focusing on challenging articulated categories such as humans and other animals. For this purpose, we propose a novel design for region-based object detectors that integrates efficiently top-down information from scanning-windows part models and global appearance cues. Our detectors produce class-specific scores for bottom-up regions, and then aggregate the votes of multiple overlapping candidates through pixel classification. We evaluate our approach on the PASCAL segmentation challenge, and report competitive performance with respect to current leading techniques. On VOC2010, our method obtains the best results in 6 20 categories and the highest performance on articulated objects." ] }
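The number-versus-accuracy trade-off mentioned above is typically measured with the best-overlap (Jaccard) score that underlies metrics such as the Mean Average Best Overlap cited earlier: each ground-truth object is credited with the overlap of its best-matching proposal. The toy pixel-set masks below are illustrative.

```python
# Best-overlap scoring of a proposal pool against ground-truth masks,
# a minimal sketch of the evaluation behind MABO-style benchmarks.

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def average_best_overlap(ground_truths, proposals):
    best = [max(jaccard(gt, p) for p in proposals) for gt in ground_truths]
    return sum(best) / len(best)

gt1 = {(0, 0), (0, 1), (1, 0), (1, 1)}
pool = [{(0, 0), (0, 1)},                   # covers half of gt1
        {(0, 0), (0, 1), (1, 0), (1, 1)},   # exact match
        {(5, 5)}]                           # unrelated proposal
assert average_best_overlap([gt1], pool) == 1.0
```

Shrinking the pool trades this score against proposal count, which is the axis along which the methods above are compared.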
1503.00107
2147666095
Modern statistical machine translation (SMT) systems usually use a linear combination of features to model the quality of each translation hypothesis. The linear combination assumes that all the features are in a linear relationship and constrains that each feature interacts with the rest features in an linear manner, which might limit the expressive power of the model and lead to a under-fit model on the current data. In this paper, we propose a non-linear modeling for the quality of translation hypotheses based on neural networks, which allows more complex interaction between features. A learning framework is presented for training the non-linear models. We also discuss possible heuristics in designing the network structure which may improve the non-linear learning performance. Experimental results show that with the basic features of a hierarchical phrase-based machine translation system, our method produce translations that are better than a linear model.
The third line of research attempts to add non-linear feature components to the log-linear learning framework. Neural network based models have been trained as language models @cite_14 @cite_2 , translation models @cite_12 , or joint language and translation models @cite_8 @cite_6 . Word embeddings for the source and target sides of translation rules have also been introduced as local features. In this paper we focus on enhancing the expressive power of the modeling, which is independent of research on enhancing translation systems with newly designed features. We believe additional improvement could be achieved by incorporating more features into our framework.
{ "cite_N": [ "@cite_14", "@cite_8", "@cite_6", "@cite_2", "@cite_12" ], "mid": [ "932413789", "2250489405", "2251682575", "2251071050", "2250445771" ], "abstract": [ "We explore the application of neural language models to machine translation. We develop a new model that combines the neural probabilistic language model of , rectified linear units, and noise-contrastive estimation, and we incorporate it into a machine translation system both by reranking k-best lists and by direct integration into the decoder. Our large-scale, large-vocabulary experiments across four language pairs show that our neural language model improves translation quality by up to 1.1 Bleu.", "We present a joint language and translation model based on a recurrent neural network which predicts target words based on an unbounded history of both source and target words. The weaker independence assumptions of this model result in a vastly larger search space compared to related feedforward-based language or translation models. We tackle this issue with a new lattice rescoring algorithm and demonstrate its effectiveness empirically. Our joint model builds on a well known recurrent neural network language model (Mikolov, 2012) augmented by a layer of additional inputs from the source language. We show competitive accuracy compared to the traditional channel model features. Our best results improve the output of a system trained on WMT 2012 French-English data by up to 1.5 BLEU, and by 1.1 BLEU on average across several test sets.", "Recent work has shown success in using neural network language models (NNLMs) as features in MT systems. Here, we present a novel formulation for a neural network joint model (NNJM), which augments the NNLM with a source context window. Our model is purely lexicalized and can be integrated into any MT decoder. We also present several variations of the NNJM which provide significant additive improvements.", "Neural network language models are often trained by optimizing likelihood, but we would prefer to optimize for a task specific metric, such as BLEU in machine translation. We show how a recurrent neural network language model can be optimized towards an expected BLEU loss instead of the usual cross-entropy criterion. Furthermore, we tackle the issue of directly integrating a recurrent network into first-pass decoding under an efficient approximation. Our best results improve a phrase-based statistical machine translation system trained on WMT 2012 French-English data by up to 2.0 BLEU, and the expected BLEU objective improves over a cross-entropy trained model by up to 0.6 BLEU in a single reference setup.", "This paper tackles the sparsity problem in estimating phrase translation probabilities by learning continuous phrase representations, whose distributed nature enables the sharing of related phrases in their representations. A pair of source and target phrases are projected into continuous-valued vector representations in a low-dimensional latent space, where their translation score is computed by the distance between the pair in this new space. The projection is performed by a neural network whose weights are learned on parallel training data. Experimental evaluation has been performed on two WMT translation tasks. Our best result improves the performance of a state-of-the-art phrase-based statistical machine translation system trained on WMT 2012 French-English data by up to 1.3 BLEU points." ] }
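The modeling difference this record's abstract argues for, a linear combination of features versus a non-linear network over the same features, can be made concrete with a small sketch. The weights, features, and layer sizes below are illustrative, not trained values from any cited system.

```python
# Linear vs. non-linear scoring of a translation hypothesis's feature
# vector. A one-hidden-layer network lets features interact, which the
# linear combination cannot express. Illustrative sketch only.
import math

def linear_score(features, weights):
    return sum(w * f for w, f in zip(weights, features))

def mlp_score(features, w_hidden, w_out):
    # tanh hidden layer: each unit mixes all features non-linearly.
    hidden = [math.tanh(sum(w * f for w, f in zip(row, features)))
              for row in w_hidden]
    return sum(w * h for w, h in zip(w_out, hidden))

features = [0.5, -1.2, 0.3]   # e.g. LM score, TM score, length penalty
weights = [1.0, 0.8, -0.5]
w_hidden = [[0.4, -0.3, 0.1], [0.2, 0.5, -0.7]]
w_out = [1.5, -1.0]

lin = linear_score(features, weights)
non_lin = mlp_score(features, w_hidden, w_out)
assert isinstance(lin, float) and isinstance(non_lin, float)
```

In a decoder both scorers rank the same hypotheses; the non-linear one simply draws a richer decision surface over the feature space.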
1503.00488
2951389984
Heterogeneous face recognition (HFR) refers to matching face images acquired from different sources (i.e., different sensors or different wavelengths) for identification. HFR plays an important role in both biometrics research and industry. In spite of promising progresses achieved in recent years, HFR is still a challenging problem due to the difficulty to represent two heterogeneous images in a homogeneous manner. Existing HFR methods either represent an image ignoring the spatial information, or rely on a transformation procedure which complicates the recognition task. Considering these problems, we propose a novel graphical representation based HFR method (G-HFR) in this paper. Markov networks are employed to represent heterogeneous image patches separately, which takes the spatial compatibility between neighboring image patches into consideration. A coupled representation similarity metric (CRSM) is designed to measure the similarity between obtained graphical representations. Extensive experiments conducted on multiple HFR scenarios (viewed sketch, forensic sketch, near infrared image, and thermal infrared image) show that the proposed method outperforms state-of-the-art methods.
Synthesis based HFR methods began with the eigen-transformation algorithm @cite_10 proposed by Tang and Wang. Later, Liu @cite_3 proposed a local linear embedding approach to patch-based face sketch synthesis, in which sketch patches were synthesized independently and the spatial compatibility between neighboring patches was neglected. Chen @cite_15 proposed learning local linear mappings between NIR and VIS patches in a similar manner to @cite_3 . Gao @cite_21 employed an embedded hidden Markov model to represent the non-linear relationship between sketches and photos, and explored a selective ensemble strategy @cite_26 to synthesize a sketch. Wang and Tang @cite_29 proposed a multi-scale Markov random field model for face sketch-photo synthesis that takes the spatial constraints between neighboring patches into consideration. Zhou @cite_37 proposed a Markov weight field model capable of synthesizing new patches that do not appear in the training set. Wang @cite_19 presented a transductive face sketch-photo synthesis method that incorporates the test image into the learning process.
{ "cite_N": [ "@cite_37", "@cite_26", "@cite_29", "@cite_21", "@cite_3", "@cite_19", "@cite_15", "@cite_10" ], "mid": [ "", "2100128988", "2153288431", "2120749805", "2141345255", "2054210502", "2109945314", "" ], "abstract": [ "", "Neural network ensemble is a learning paradigm where many neural networks are jointly used to solve a problem. In this paper, the relationship between the ensemble and its component neural networks is analyzed from the context of both regression and classification, which reveals that it may be better to ensemble many instead of all of the neural networks at hand. This result is interesting because at present, most approaches ensemble all the available neural networks for prediction. Then, in order to show that the appropriate neural networks for composing an ensemble can be effectively selected from a set of available neural networks, an approach named GASEN is presented. GASEN trains a number of neural networks at first. Then it assigns random weights to those networks and employs genetic algorithm to evolve the weights so that they can characterize to some extent the fitness of the neural networks in constituting an ensemble. Finally it selects some neural networks based on the evolved weights to make up the ensemble. A large empirical study shows that, compared with some popular ensemble approaches such as Bagging and Boosting, GASEN can generate neural network ensembles with far smaller sizes but stronger generalization ability. Furthermore, in order to understand the working mechanism of GASEN, the bias-variance decomposition of the error is provided in this paper, which shows that the success of GASEN may lie in that it can significantly reduce the bias as well as the variance.", "In this paper, we propose a novel face photo-sketch synthesis and recognition method using a multiscale Markov Random Fields (MRF) model. Our system has three components: 1) given a face photo, synthesizing a sketch drawing; 2) given a face sketch drawing, synthesizing a photo; and 3) searching for face photos in the database based on a query sketch drawn by an artist. It has useful applications for both digital entertainment and law enforcement. We assume that faces to be studied are in a frontal pose, with normal lighting and neutral expression, and have no occlusions. To synthesize sketch photo images, the face region is divided into overlapping patches for learning. The size of the patches decides the scale of local face structures to be learned. From a training set which contains photo-sketch pairs, the joint photo-sketch model is learned at multiple scales using a multiscale MRF model. By transforming a face photo to a sketch (or transforming a sketch to a photo), the difference between photos and sketches is significantly reduced, thus allowing effective matching between the two in face sketch recognition. After the photo-sketch transformation, in principle, most of the proposed face photo recognition approaches can be applied to face sketch recognition in a straightforward way. Extensive experiments are conducted on a face sketch database including 606 faces, which can be downloaded from our Web site (http: mmlab.ie.cuhk.edu.hk facesketch.html).", "Sketch synthesis plays an important role in face sketch-photo recognition system. In this manuscript, an automatic sketch synthesis algorithm is proposed based on embedded hidden Markov model (E-HMM) and selective ensemble strategy. First, the E-HMM is adopted to model the nonlinear relationship between a sketch and its corresponding photo. Then based on several learned models, a series of pseudo-sketches are generated for a given photo. Finally, these pseudo-sketches are fused together with selective ensemble strategy to synthesize a finer face pseudo-sketch. Experimental results illustrate that the proposed algorithm achieves satisfactory effect of sketch synthesis with a small set of face training samples.", "Most face recognition systems focus on photo-based face recognition. In this paper, we present a face recognition system based on face sketches. The proposed system contains two elements: pseudo-sketch synthesis and sketch recognition. The pseudo-sketch generation method is based on local linear preserving of geometry between photo and sketch images, which is inspired by the idea of locally linear embedding. The nonlinear discriminant analysis is used to recognize the probe sketch from the synthesized pseudo-sketches. Experimental results on over 600 photo-sketch pairs show that the performance of the proposed method is encouraging.", "Face sketch-photo synthesis plays a critical role in many applications, such as law enforcement and digital entertainment. Recently, many face sketch-photo synthesis methods have been proposed under the framework of inductive learning, and these have obtained promising performance. However, these inductive learning-based face sketch-photo synthesis methods may result in high losses for test samples, because inductive learning minimizes the empirical loss for training samples. This paper presents a novel transductive face sketch-photo synthesis method that incorporates the given test samples into the learning process and optimizes the performance on these test samples. In particular, it defines a probabilistic model to optimize both the reconstruction fidelity of the input photo (sketch) and the synthesis fidelity of the target output sketch (photo), and efficiently optimizes this probabilistic model by alternating optimization. The proposed transductive method significantly reduces the expected high loss and improves the synthesis performance for test samples. Experimental results on the Chinese University of Hong Kong face sketch data set demonstrate the effectiveness of the proposed method by comparing it with representative inductive learning-based face sketch-photo synthesis methods.", "This paper deals with a new problem in face recognition research, in which the enrollment and query face samples are captured under different lighting conditions. In our case, the enrollment samples are visual light (VIS) images, whereas the query samples are taken under near infrared (NIR) condition. It is very difficult to directly match the face samples captured under these two lighting conditions due to their different visual appearances. In this paper, we propose a novel method for synthesizing VIS images from NIR images based on learning the mappings between images of different spectra (i.e., NIR and VIS). In our approach, we reduce the inter-spectral differences significantly, thus allowing effective matching between faces taken under different imaging conditions. Face recognition experiments clearly show the efficacy of the proposed approach.", "" ] }
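The patch-based synthesis idea running through these methods can be sketched in its simplest, independent-patch form: each test photo patch is replaced by the sketch patch whose paired training photo patch is nearest. The MRF and Markov weight field models above improve on exactly this baseline by adding compatibility terms between neighbouring patches. Patches below are toy 1-D vectors and all names are illustrative.

```python
# Nearest-neighbour patch lookup, a minimal sketch of independent
# patch-based sketch synthesis (no spatial compatibility terms).

def nearest_sketch_patch(photo_patch, training_pairs):
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    _, sketch = min(training_pairs,
                    key=lambda pair: dist(pair[0], photo_patch))
    return sketch

# (photo patch, sketch patch) training pairs.
training_pairs = [([0.0, 0.0], [0.1, 0.1]),
                  ([1.0, 1.0], [0.9, 0.8])]
assert nearest_sketch_patch([0.9, 1.1], training_pairs) == [0.9, 0.8]
```

Because each patch is chosen in isolation, adjacent output patches can disagree at their seams, which is the spatial-compatibility problem the Markov-network formulations address.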
1503.00488
2951389984
Heterogeneous face recognition (HFR) refers to matching face images acquired from different sources (i.e., different sensors or different wavelengths) for identification. HFR plays an important role in both biometrics research and industry. In spite of promising progresses achieved in recent years, HFR is still a challenging problem due to the difficulty to represent two heterogeneous images in a homogeneous manner. Existing HFR methods either represent an image ignoring the spatial information, or rely on a transformation procedure which complicates the recognition task. Considering these problems, we propose a novel graphical representation based HFR method (G-HFR) in this paper. Markov networks are employed to represent heterogeneous image patches separately, which takes the spatial compatibility between neighboring image patches into consideration. A coupled representation similarity metric (CRSM) is designed to measure the similarity between obtained graphical representations. Extensive experiments conducted on multiple HFR scenarios (viewed sketch, forensic sketch, near infrared image, and thermal infrared image) show that the proposed method outperforms state-of-the-art methods.
A number of feature descriptor based HFR approaches have shown promising performance. Klare @cite_17 proposed a local feature-based discriminant analysis (LFDA) framework built on the scale invariant feature transform (SIFT) @cite_22 and multiscale local binary pattern (MLBP) @cite_6 features. Zhang @cite_9 designed a face descriptor based on coupled information-theoretic encoding for matching face sketches with photos; the coupled information-theoretic projection tree was introduced and further extended to a randomized forest with different sampling patterns. Another face descriptor, the local radon binary pattern (LRBP), was proposed in @cite_27 : face images are projected onto the radon space and encoded by local binary patterns (LBP). A histogram of averaged oriented gradients (HAOG) face descriptor was proposed to reduce the modality difference @cite_7 . Lei @cite_23 proposed a discriminant image filter learning method that benefits from an LBP-like face representation for matching NIR to VIS face images. Alex @cite_24 proposed a local difference of Gaussian binary pattern (LDoGBP) for face recognition across modalities.
{ "cite_N": [ "@cite_22", "@cite_7", "@cite_9", "@cite_6", "@cite_24", "@cite_27", "@cite_23", "@cite_17" ], "mid": [ "2151103935", "2076631638", "2034136097", "", "2038809248", "2049352011", "2100302316", "2158096215" ], "abstract": [ "This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.", "Automatic face sketch recognition plays an important role in law enforcement. Recently, various methods have been proposed to address the problem of face sketch recognition by matching face photos and sketches, which are of different modalities. However, their performance is strongly affected by the modality difference between sketches and photos. In this paper, we propose a new face descriptor based on gradient orientations to reduce the modality difference in feature extraction stage, called Histogram of Averaged Oriented Gradients (HAOG). 
Experiments on CUFS database show that the new descriptor outperforms the state-of-the-art approaches.", "Automatic face photo-sketch recognition has important applications for law enforcement. Recent research has focused on transforming photos and sketches into the same modality for matching or developing advanced classification algorithms to reduce the modality gap between features extracted from photos and sketches. In this paper, we propose a new inter-modality face recognition approach by reducing the modality gap at the feature extraction stage. A new face descriptor based on coupled information-theoretic encoding is used to capture discriminative local face structures and to effectively match photos and sketches. Guided by maximizing the mutual information between photos and sketches in the quantized feature spaces, the coupled encoding is achieved by the proposed coupled information-theoretic projection tree, which is extended to the randomized forest to further boost the performance. We create the largest face sketch database including sketches of 1, 194 people from the FERET database. Experiments on this large scale dataset show that our approach significantly outperforms the state-of-the-art methods.", "", "Automatic recognition of face sketches is a challenging problem with application in criminal investigations. We propose a method that allows face sketch recognition across modalities called Local Difference of Gaussian Binary Pattern (LDoGBP). LDoGBP is based on the fact that the sketches are similar to their corresponding photos even though they are prone to shape distoration. This similarity between sketch and photo is captured and used for recognition across modalities. In this method, the face image characteristics are captured in the Difference of Gaussian (DoG) representation of the image patches. The Local Binary Pattern(LBP) corresponding to the DoG representation is then generated. 
These histograms are concatenated to generate the feature vector corresponding to the input image. These feature vectors are compared using Earth Mover's Distance for recognition. Experiments on the CUFS (Chinese University of Hong Kong (CUHK) Face Sketch Database) and CUFSF (CUHK Face Sketch FERET Database) datasets prove the effectiveness of this feature in Face Sketch Recognition.", "In this paper, we propose a new face descriptor to directly match face photos and sketches of different modalities, called Local Radon Binary Pattern (LRBP). LRBP is inspired by the fact that the shape of a face photo and its corresponding sketch is similar, even when the sketch is exaggerated by an artist. Therefore, the shape of face can be exploited to compute features which are robust against modality differences between face photo and sketch. In LRBP framework, the characteristics of face shape are captured by transforming face image into Radon space. Then, micro-information of face shape in new space is encoded by Local Binary Pattern (LBP). Finally, LRBP is computed by concatenating histograms of local LBPs. In order to capture both local and global characteristics of face shape, LRBP is extracted in a spatial pyramid fashion. Experiments on CUFS and CUFSF datasets indicate the efficiency of LRBP for face sketch recognition.", "Local binary pattern (LBP) and its variants are effective descriptors for face recognition. The traditional LBP like features are extracted based on the original pixel or patch values of images. In this paper, we propose to learn the discriminative image filter to improve the discriminant power of the LBP like feature. The basic idea is after the image filtering with the learned filter, the difference of pixel difference vectors (PDVs) between the images from the same person is consistent and the difference between the images from different persons is enlarged. 
In this way, the LBP like features extracted from the filtered images are considered to be more discriminant than those extracted from the original images. Moreover, a coupled discriminant image filters learning method is proposed to deal with the heterogeneous face image matching problem by reducing the feature gap between the heterogeneous images. Experiments on FERET, FRGC and a VIS-NIR heterogeneous face database validate the effectiveness of our proposed image filter learning method combined with LBP like features.", "The problem of matching a forensic sketch to a gallery of mug shot images is addressed in this paper. Previous research in sketch matching only offered solutions to matching highly accurate sketches that were drawn while looking at the subject (viewed sketches). Forensic sketches differ from viewed sketches in that they are drawn by a police sketch artist using the description of the subject provided by an eyewitness. To identify forensic sketches, we present a framework called local feature-based discriminant analysis (LFDA). In LFDA, we individually represent both sketches and photos using SIFT feature descriptors and multiscale local binary patterns (MLBP). Multiple discriminant projections are then used on partitioned vectors of the feature-based representation for minimum distance matching. We apply this method to match a data set of 159 forensic sketches against a mug shot gallery containing 10,159 images. Compared to a leading commercial face recognition system, LFDA offers substantial improvements in matching forensic sketches to the corresponding face images. We were able to further improve the matching performance using race and gender information to reduce the target gallery size. Additional experiments demonstrate that the proposed framework leads to state-of-the-art accuracies when matching viewed sketches." ] }
1503.00488
2951389984
Heterogeneous face recognition (HFR) refers to matching face images acquired from different sources (i.e., different sensors or different wavelengths) for identification. HFR plays an important role in both biometrics research and industry. In spite of promising progress achieved in recent years, HFR is still a challenging problem due to the difficulty of representing two heterogeneous images in a homogeneous manner. Existing HFR methods either represent an image ignoring the spatial information, or rely on a transformation procedure which complicates the recognition task. Considering these problems, we propose a novel graphical representation based HFR method (G-HFR) in this paper. Markov networks are employed to represent heterogeneous image patches separately, taking the spatial compatibility between neighboring image patches into consideration. A coupled representation similarity metric (CRSM) is designed to measure the similarity between obtained graphical representations. Extensive experiments conducted on multiple HFR scenarios (viewed sketch, forensic sketch, near infrared image, and thermal infrared image) show that the proposed method outperforms state-of-the-art methods.
With great progress achieved on viewed sketches, research has recently begun to focus on matching forensic sketches to mug shots. Klare @cite_17 matched forensic sketches to mug shot photos with a populated gallery. Bhatt @cite_25 proposed a discriminative approach for matching forensic sketches to mug shots deploying multi-scale circular Weber's local descriptor (MCWLD) and an evolutionary memetic optimization algorithm. Klare and Jain @cite_11 represented heterogeneous face images through their nonlinear kernel similarities to a collection of prototype face images. Considering the fact that many law enforcement agencies employ facial composite software to create composite sketches, Han @cite_1 proposed a component-based approach for matching composite sketches to mug shot photos.
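Most of the descriptors recurring in these works (LBP, MLBP, MCWLD, HAOG) build on the basic local binary pattern encoding. As an illustration only, not the pipeline of any cited paper, here is a minimal NumPy sketch of a plain 8-neighbour LBP histogram:

```python
import numpy as np

def lbp_histogram(img, bins=256):
    """Basic 8-neighbour local binary pattern histogram (toy sketch).

    Each interior pixel is encoded as an 8-bit code by comparing it
    with its 8 neighbours; the codes are pooled into a normalised
    histogram, which serves as the local texture descriptor.
    """
    img = np.asarray(img, dtype=float)
    center = img[1:-1, 1:-1]
    code = np.zeros(center.shape, dtype=np.uint8)
    # Clockwise neighbour offsets starting at the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:img.shape[0] - 1 + dy,
                    1 + dx:img.shape[1] - 1 + dx]
        code |= ((neigh >= center).astype(np.uint8) << bit)
    hist, _ = np.histogram(code, bins=bins, range=(0, bins))
    return hist / hist.sum()
```

Real systems such as MLBP compute this at multiple scales and per facial component (or per image patch) before fusing the resulting histograms.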
{ "cite_N": [ "@cite_11", "@cite_1", "@cite_25", "@cite_17" ], "mid": [ "2152788298", "2063643424", "2076829102", "2158096215" ], "abstract": [ "Heterogeneous face recognition (HFR) involves matching two face images from alternate imaging modalities, such as an infrared image to a photograph or a sketch to a photograph. Accurate HFR systems are of great value in various applications (e.g., forensics and surveillance), where the gallery databases are populated with photographs (e.g., mug shot or passport photographs) but the probe images are often limited to some alternate modality. A generic HFR framework is proposed in which both probe and gallery images are represented in terms of nonlinear similarities to a collection of prototype face images. The prototype subjects (i.e., the training set) have an image in each modality (probe and gallery), and the similarity of an image is measured against the prototype images from the corresponding modality. The accuracy of this nonlinear prototype representation is improved by projecting the features into a linear discriminant subspace. Random sampling is introduced into the HFR framework to better handle challenges arising from the small sample size problem. The merits of the proposed approach, called prototype random subspace (P-RS), are demonstrated on four different heterogeneous scenarios: 1) near infrared (NIR) to photograph, 2) thermal to photograph, 3) viewed sketch to photograph, and 4) forensic sketch to photograph.", "The problem of automatically matching composite sketches to facial photographs is addressed in this paper. Previous research on sketch recognition focused on matching sketches drawn by professional artists who either looked directly at the subjects (viewed sketches) or used a verbal description of the subject's appearance as provided by an eyewitness (forensic sketches). 
Unlike sketches hand drawn by artists, composite sketches are synthesized using one of the several facial composite software systems available to law enforcement agencies. We propose a component-based representation (CBR) approach to measure the similarity between a composite sketch and a mugshot photograph. Specifically, we first automatically detect facial landmarks in composite sketches and face photos using an active shape model (ASM). Features are then extracted for each facial component using multiscale local binary patterns (MLBPs), and per component similarity is calculated. Finally, the similarity scores obtained from individual facial components are fused together, yielding a similarity score between a composite sketch and a face photo. Matching performance is further improved by filtering the large gallery of mugshot images using gender information. Experimental results on matching 123 composite sketches against two galleries with 10,123 and 1,316 mugshots show that the proposed method achieves promising performance (rank-100 accuracies of 77.2% and 89.4%, respectively) compared to a leading commercial face recognition system (rank-100 accuracies of 22.8% and 52.0%) and densely sampled MLBP on holistic faces (rank-100 accuracies of 27.6% and 10.6%). We believe our prototype system will be of great value to law enforcement agencies in apprehending suspects in a timely fashion.", "One of the important cues in solving crimes and apprehending criminals is matching sketches with digital face images. This paper presents an automated algorithm to extract discriminating information from local regions of both sketches and digital face images. Structural information along with minute details present in local facial regions are encoded using multiscale circular Weber's local descriptor. Further, an evolutionary memetic optimization algorithm is proposed to assign optimal weight to every local facial region to boost the identification performance. 
Since forensic sketches or digital face images can be of poor quality, a preprocessing technique is used to enhance the quality of images and improve the identification performance. Comprehensive experimental evaluation on different sketch databases shows that the proposed algorithm yields better identification performance compared to existing face recognition algorithms and two commercial face recognition systems.", "The problem of matching a forensic sketch to a gallery of mug shot images is addressed in this paper. Previous research in sketch matching only offered solutions to matching highly accurate sketches that were drawn while looking at the subject (viewed sketches). Forensic sketches differ from viewed sketches in that they are drawn by a police sketch artist using the description of the subject provided by an eyewitness. To identify forensic sketches, we present a framework called local feature-based discriminant analysis (LFDA). In LFDA, we individually represent both sketches and photos using SIFT feature descriptors and multiscale local binary patterns (MLBP). Multiple discriminant projections are then used on partitioned vectors of the feature-based representation for minimum distance matching. We apply this method to match a data set of 159 forensic sketches against a mug shot gallery containing 10,159 images. Compared to a leading commercial face recognition system, LFDA offers substantial improvements in matching forensic sketches to the corresponding face images. We were able to further improve the matching performance using race and gender information to reduce the target gallery size. Additional experiments demonstrate that the proposed framework leads to state-of-the-art accuracies when matching viewed sketches." ] }
1503.00448
1507268481
In Kleinberg's small-world network model, strong ties are modeled as deterministic edges in the underlying base grid and weak ties are modeled as random edges connecting remote nodes. The probability of connecting a node @math with node @math through a weak tie is proportional to @math , where @math is the grid distance between @math and @math and @math is the parameter of the model. Complex contagion refers to the propagation mechanism in a network where each node is activated only after @math neighbors of the node are activated. In this paper, we propose the concept of routing of complex contagion (or complex routing), where we can activate one node at one time step with the goal of activating the targeted node in the end. We consider a decentralized routing scheme where only the weak ties from the activated nodes are revealed. We study the routing time of complex contagion and compare the result with simple routing and complex diffusion (the diffusion of complex contagion, where all nodes that could be activated are activated immediately in the same step with the goal of activating all nodes in the end). We show that for decentralized complex routing, the routing time is lower bounded by a polynomial in @math (the number of nodes in the network) for the whole range of @math both in expectation and with high probability (in particular, @math for @math and @math for @math in expectation), while the routing time of simple contagion has a polylogarithmic upper bound when @math . Our results indicate that complex routing is harder than complex diffusion and the routing time of complex contagion differs exponentially from that of simple contagion at the sweet spot.
Social and information networks and network diffusions have been extensively studied, and comprehensive coverage is provided by recent textbooks such as @cite_11 @cite_0 . In this section, we review the most closely related work in addition to that already discussed in the introduction.
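The threshold-based activation rule behind complex contagion (a node activates once at least k of its neighbors are active, as described in the abstract above) is straightforward to simulate. A minimal, hypothetical Python sketch of complex diffusion on an adjacency-list graph:

```python
def complex_diffusion(adj, seeds, k):
    """Simulate complex diffusion with threshold k: starting from the
    seed set, repeatedly activate every node that has at least k
    already-active neighbors, until no further activation is possible.

    adj: dict mapping node -> list of neighbors (undirected graph).
    Activations are applied as they are found; the final fixed point
    is the same as with strictly synchronous rounds.
    """
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for v in adj:
            if v not in active and sum(u in active for u in adj[v]) >= k:
                active.add(v)
                changed = True
    return active
```

On the toy graph {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]} with seeds {0, 1} and k = 2, node 2 activates but node 3, which has only a single tie into the active set, never does, illustrating why complex contagion needs clustered ("wide") bridges to spread.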
{ "cite_N": [ "@cite_0", "@cite_11" ], "mid": [ "1875112053", "19838944" ], "abstract": [ "The scientific study of networks, including computer networks, social networks, and biological networks, has received an enormous amount of interest in the last few years. The rise of the Internet and the wide availability of inexpensive computers have made it possible to gather and analyze network data on a large scale, and the development of a variety of new theoretical tools has allowed us to extract new knowledge from many different kinds of networks.The study of networks is broadly interdisciplinary and important developments have occurred in many fields, including mathematics, physics, computer and information sciences, biology, and the social sciences. This book brings together for the first time the most important breakthroughs in each of these fields and presents them in a coherent fashion, highlighting the strong interconnections between work in different areas. Subjects covered include the measurement and structure of networks in many branches of science, methods for analyzing network data, including methods developed in physics, statistics, and sociology, the fundamentals of graph theory, computer algorithms, and spectral methods, mathematical models of networks, including random graph models and generative models, and theories of dynamical processes taking place on networks.", "Over the past decade there has been a growing public fascination with the complex connectedness of modern society. This connectedness is found in many incarnations: in the rapid growth of the Internet, in the ease with which global communication takes place, and in the ability of news and information as well as epidemics and financial crises to spread with surprising speed and intensity. 
These are phenomena that involve networks, incentives, and the aggregate behavior of groups of people; they are based on the links that connect us and the ways in which our decisions can have subtle consequences for others. This introductory undergraduate textbook takes an interdisciplinary look at economics, sociology, computing and information science, and applied mathematics to understand networks and behavior. It describes the emerging field of study that is growing at the interface of these areas, addressing fundamental questions about how the social, economic, and technological worlds are connected." ] }
1503.00448
1507268481
In Kleinberg's small-world network model, strong ties are modeled as deterministic edges in the underlying base grid and weak ties are modeled as random edges connecting remote nodes. The probability of connecting a node @math with node @math through a weak tie is proportional to @math , where @math is the grid distance between @math and @math and @math is the parameter of the model. Complex contagion refers to the propagation mechanism in a network where each node is activated only after @math neighbors of the node are activated. In this paper, we propose the concept of routing of complex contagion (or complex routing), where we can activate one node at one time step with the goal of activating the targeted node in the end. We consider a decentralized routing scheme where only the weak ties from the activated nodes are revealed. We study the routing time of complex contagion and compare the result with simple routing and complex diffusion (the diffusion of complex contagion, where all nodes that could be activated are activated immediately in the same step with the goal of activating all nodes in the end). We show that for decentralized complex routing, the routing time is lower bounded by a polynomial in @math (the number of nodes in the network) for the whole range of @math both in expectation and with high probability (in particular, @math for @math and @math for @math in expectation), while the routing time of simple contagion has a polylogarithmic upper bound when @math . Our results indicate that complex routing is harder than complex diffusion and the routing time of complex contagion differs exponentially from that of simple contagion at the sweet spot.
Since the proposal of the small-world network models by @cite_12 @cite_21 , many extensions and variants have been studied. For example, Kleinberg proposed a small-world model based on a tree structure @cite_4 , and Fraigniaud and Giakkoupis extended the model to allow a power-law degree distribution @cite_13 or an arbitrary base graph structure @cite_14 .
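The weak-tie distribution of Kleinberg's model, a long-range contact chosen with probability proportional to d(u, v)^(-alpha), can be sampled directly. A toy sketch on a 1-D ring base graph (an illustrative simplification; the model above uses a 2-D grid):

```python
import random

def sample_weak_tie(u, n, alpha, rng=random):
    """Sample one long-range contact of node u on an n-node ring,
    choosing v != u with probability proportional to d(u, v)**(-alpha),
    where d is the ring (grid) distance. Illustration only."""
    others = [v for v in range(n) if v != u]
    weights = [min(abs(u - v), n - abs(u - v)) ** (-alpha) for v in others]
    return rng.choices(others, weights=weights, k=1)[0]
```

For large alpha the sampled ties concentrate on the nearest nodes; for alpha = 0 they are uniform over the ring, matching the two extremes between which the model's "sweet spot" for decentralized search lies.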
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_21", "@cite_13", "@cite_12" ], "mid": [ "", "2164195254", "2128678576", "2084442192", "2112090702" ], "abstract": [ "", "The problem of searching for information in networks like the World Wide Web can be approached in a variety of ways, ranging from centralized indexing schemes to decentralized mechanisms that navigate the underlying network without knowledge of its global structure. The decentralized approach appears in a variety of settings: in the behavior of users browsing the Web by following hyperlinks; in the design of focused crawlers [4, 5, 8] and other agents that explore the Web’s links to gather information; and in the search protocols underlying decentralized peer-to-peer systems such as Gnutella [10], Freenet [7], and recent research prototypes [21, 22, 23], through which users can share resources without a central server. In recent work, we have been investigating the problem of decentralized search in large information networks [14, 15]. Our initial motivation was an experiment that dealt directly with the search problem in a decidedly pre-Internet context: Stanley Milgram’s famous study of the small-world phenomenon [16, 17]. Milgram was seeking to determine whether most pairs of people in society were linked by short chains of acquaintances, and for this purpose he recruited individuals to try forwarding a letter to a designated “target” through people they knew on a first-name basis. The starting individuals were given basic information about the target — his name, address, occupation, and a few other personal details — and had to choose a single acquaintance to send the letter to, with goal of reaching the target as quickly as possible; subsequent recipients followed the same procedure, and the chain closed in on its destination. Of the chains that completed, the median number of steps required was six — a result that has since entered popular culture as the “six degrees of separation” principle [11]. 
Milgram’s experiment contains two striking discoveries — that short chains are pervasive, and that people are able to find them. This latter point is concerned precisely with a type of decentralized navigation in a social network, consisting of people as nodes and links joining", "Long a matter of folklore, the \"small-world phenomenon\" -- the principle that we are all linked by short chains of acquaintances -- was inaugurated as an area of experimental study in the social sciences through the pioneering work of Stanley Milgram in the 1960's. This work was among the first to make the phenomenon quantitative, allowing people to speak of the \"six degrees of separation\" between any two people in the United States. Since then, a number of network models have been proposed as frameworks in which to study the problem analytically. One of the most refined of these models was formulated in recent work of Watts and Strogatz; their framework provided compelling evidence that the small-world phenomenon is pervasive in a range of networks arising in nature and technology, and a fundamental ingredient in the evolution of the World Wide Web. But existing models are insufficient to explain the striking algorithmic component of Milgram's original findings: that individuals using local information are collectively very effective at actually constructing short paths between two points in a social network. Although recently proposed network models are rich in short paths, we prove that no decentralized algorithm, operating with local information only, can construct short paths in these networks with non-negligible probability. We then define an infinite family of network models that naturally generalizes the Watts-Strogatz model, and show that for one of these models, there is a decentralized algorithm capable of finding short paths with high probability. 
More generally, we provide a strong characterization of this family of network models, showing that there is in fact a unique model within the family for which decentralized algorithms are effective.", "We analyze decentralized routing in small-world networks that combine a wide variation in node degrees with a notion of spatial embedding. Specifically, we consider a variation of Kleinberg's augmented-lattice model (STOC 2000), where the number of long-range contacts for each node is drawn from a power-law distribution. This model is motivated by the experimental observation that many \"real-world\" networks have power-law degrees. In such networks, the exponent α of the power law is typically between 2 and 3. We prove that, in our model, for this range of values, 2", "Networks of coupled dynamical systems have been used to model biological oscillators, Josephson junction arrays, excitable media, neural networks, spatial games, genetic control networks and many other self-organizing systems. Ordinarily, the connection topology is assumed to be either completely regular or completely random. But many biological, technological and social networks lie somewhere between these two extremes. Here we explore simple models of networks that can be tuned through this middle ground: regular networks ‘rewired’ to introduce increasing amounts of disorder. We find that these systems can be highly clustered, like regular lattices, yet have small characteristic path lengths, like random graphs. We call them ‘small-world’ networks, by analogy with the small-world phenomenon (popularly known as six degrees of separation). The neural network of the worm Caenorhabditis elegans, the power grid of the western United States, and the collaboration graph of film actors are shown to be small-world networks. Models of dynamical systems with small-world coupling display enhanced signal-propagation speed, computational power, and synchronizability. 
In particular, infectious diseases spread more easily in small-world networks than in regular lattices." ] }
1503.00448
1507268481
In Kleinberg's small-world network model, strong ties are modeled as deterministic edges in the underlying base grid and weak ties are modeled as random edges connecting remote nodes. The probability of connecting a node @math with node @math through a weak tie is proportional to @math , where @math is the grid distance between @math and @math and @math is the parameter of the model. Complex contagion refers to the propagation mechanism in a network where each node is activated only after @math neighbors of the node are activated. In this paper, we propose the concept of routing of complex contagion (or complex routing), where we can activate one node at one time step with the goal of activating the targeted node in the end. We consider a decentralized routing scheme where only the weak ties from the activated nodes are revealed. We study the routing time of complex contagion and compare the result with simple routing and complex diffusion (the diffusion of complex contagion, where all nodes that could be activated are activated immediately in the same step with the goal of activating all nodes in the end). We show that for decentralized complex routing, the routing time is lower bounded by a polynomial in @math (the number of nodes in the network) for the whole range of @math both in expectation and with high probability (in particular, @math for @math and @math for @math in expectation), while the routing time of simple contagion has a polylogarithmic upper bound when @math . Our results indicate that complex routing is harder than complex diffusion and the routing time of complex contagion differs exponentially from that of simple contagion at the sweet spot.
In terms of network diffusion, a long line of research has studied the maximization problem of finding a small set of seeds that maximizes the influence spread, usually under a stochastic diffusion model. For example, efficient influence maximization algorithms have been developed for large-scale networks, while minimizing the size of the seed set for a given coverage in the fixed threshold model has been proved hard to approximate within any polylogarithmic factor. Threshold behavior is also studied in bootstrap percolation @cite_9 , where all nodes have the same threshold and the initial seeds are selected at random. Bootstrap percolation focuses on the critical fraction @math of seed nodes required so that the entire network is infected in the end. The network structures investigated for bootstrap percolation include grids @cite_5 , trees @cite_7 , random regular graphs @cite_19 , and complex networks @cite_10 .
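The bootstrap percolation process itself is easy to simulate. A hedged sketch of the grid case (the simplest of the structures listed above), with 4-neighborhoods and threshold m; the function name and parameters are illustrative, not taken from any cited paper:

```python
import random

def bootstrap_percolation(n, p, m=2, seed=0):
    """Bootstrap percolation on an n x n grid: occupy each site
    independently with probability p, then repeatedly occupy every
    vacant site with at least m occupied 4-neighbors, until stable.
    Returns the final occupied fraction."""
    rng = random.Random(seed)
    occ = {(i, j) for i in range(n) for j in range(n) if rng.random() < p}

    def nbrs(i, j):
        return [(i + di, j + dj)
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= i + di < n and 0 <= j + dj < n]

    while True:
        newly = {(i, j) for i in range(n) for j in range(n)
                 if (i, j) not in occ
                 and sum(nb in occ for nb in nbrs(i, j)) >= m}
        if not newly:
            break
        occ |= newly
    return len(occ) / (n * n)
```

Sweeping p for a fixed n and watching where the final fraction jumps toward 1 gives a crude numerical estimate of the critical fraction that the cited papers characterize analytically.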
{ "cite_N": [ "@cite_7", "@cite_9", "@cite_19", "@cite_5", "@cite_10" ], "mid": [ "2125271247", "", "2093333397", "2027508917", "2166942778" ], "abstract": [ "Bootstrap percolation on an arbitrary graph has a random initial configuration, where each vertex is occupied with probability @math , independently of each other, and a deterministic spreading rule with a fixed parameter @math : if a vacant site has at least @math occupied neighbours at a certain time step, then it becomes occupied in the next step. This process is well studied on @math ; here we investigate it on regular and general infinite trees and on non-amenable Cayley graphs. The critical probability is the infimum of those values of @math for which the process achieves complete occupation with positive probability. On trees we find the following discontinuity: if the branching number of a tree is strictly smaller than @math , then the critical probability is 1, while it is @math on the @math -ary tree. A related result is that in any rooted tree @math there is a way of erasing @math children of the root, together with all their descendants, and repeating this for all remaining children, and so on, such that the remaining tree @math has branching number @math . We also prove that on any @math -regular non-amenable graph, the critical probability for the @math -rule is strictly positive.", "", "The k-parameter bootstrap percolation on a graph is a model of an interacting particle system, which can also be viewed as a variant of a cellular automaton growth process with threshold k ≥ 2. At the start, each of the graph vertices is active with probability p and inactive with probability 1 − p, independently of other vertices. Presence of active vertices triggers a bootstrap percolation process controlled by a recursive rule: an active vertex remains active forever, and a currently inactive vertex becomes active when at least k of its neighbors are active. 
The basic problem is to identify, for a given graph, p− and p+ such that for p < p− (p > p+, resp.) the probability that all vertices are eventually active is very close to 0 (1, resp.). The bootstrap percolation process is a deterministic process on the space of subsets of the vertex set, which is easy to describe but hard to analyze rigorously in general. We study the percolation on the random d-regular graph, d ≥ 3, via analysis of the process on the multigraph counterpart of the graph. Here, thanks to a “principle of deferred decisions,” the percolation dynamics is described by a surprisingly simple Markov chain. Its generic state is formed by the counts of currently active and nonactive vertices having various degrees of activation capabilities. We replace the chain by a deterministic dynamical system, and use its integrals to show—via exponential supermartingales—that the percolation process undergoes relatively small fluctuations around the deterministic trajectory. This allows us to show existence of the phase transition within an interval [p−(n), p+(n)], such that (1) p±(n) → p* = 1 − min_{y∈(0,1)} y / ℙ(Bin(d − 1, 1 − y) < k); (2) p+(n) − p−(n) is of order n^{−1/2} for k < d − 1, and n^{−ε_n} (ε_n → 0, ε_n log n → ∞) for k = d − 1. Note that p* is the same as the critical probability of the process on the corresponding infinite regular tree. © 2006 Wiley Periodicals, Inc. Random Struct. Alg., 30, 257–286, 2007", "A new percolation problem is posed which can exhibit a first-order transition. In bootstrap percolation, sites on an empty lattice are first randomly occupied, and then all occupied sites with less than a given number m of occupied neighbours are successively removed until a stable configuration is reached. On any lattice for sufficiently large m, the ensuing clusters can only be infinite. On a Bethe lattice for m ≥ 3, the fraction of the lattice occupied by infinite clusters discontinuously jumps from zero at the percolation threshold. 
From an analysis of stable and metastable ground states of the dilute Blume-Capel model (1966), it is concluded that effects like bootstrap percolation may occur in some real magnets.", "We consider bootstrap percolation on uncorrelated complex networks. We obtain the phase diagram for this process with respect to two parameters: @math , the fraction of vertices initially activated, and @math , the fraction of undamaged vertices in the graph. We observe two transitions: the giant active component appears continuously at a first threshold. There may also be a second, discontinuous, hybrid transition at a higher threshold. Avalanches of activations increase in size as this second critical point is approached, finally diverging at this threshold. We describe the existence of a special critical point at which this second transition first appears. In networks with degree distributions whose second moment diverges (but whose first moment does not), we find a qualitatively different behavior. In this case the giant active component appears for any @math and @math , and the discontinuous transition is absent. This means that the giant active component is robust to damage, and also is very easily activated. We also formulate a generalized bootstrap process in which each vertex can have an arbitrary threshold." ] }
1503.00193
1525126490
We present a new method for the constraint-based synthesis of termination arguments for linear loop programs based on linear ranking templates. Linear ranking templates are parametrized, well-founded relations such that an assignment to the parameters gives rise to a ranking function. This approach generalizes existing methods and enables us to use templates for many different ranking functions with affine-linear components. We discuss templates for multiphase, piecewise, and lexicographic ranking functions. Because these ranking templates require both strict and non-strict inequalities, we use Motzkin’s Transposition Theorem instead of Farkas Lemma to transform the generated ∃ ∀-constraint into an ∃-constraint.
Bradley, Manna, and Sipma propose a constraint-based approach for linear lasso programs @cite_5 . Their termination argument is a lexicographic ranking function with each lexicographic component corresponding to one loop disjunct. This requires nonlinear constraint solving and an ordering on the loop disjuncts. The authors extend this approach in @cite_23 by the use of polyranking functions, which are organized as trees. These trees allow each lexicographic component to have a ranking function that decreases not necessarily in every step, but eventually.
{ "cite_N": [ "@cite_5", "@cite_23" ], "mid": [ "1608799719", "1585194019" ], "abstract": [ "We present a complete method for synthesizing lexicographic linear ranking functions supported by inductive linear invariants for loops with linear guards and transitions. Proving termination via linear ranking functions often requires invariants; yet invariant generation is expensive. Thus, we describe a technique that discovers just the invariants necessary for proving termination. Finally, we describe an implementation of the method and provide extensive experimental evidence of its effectiveness for proving termination of C loops.", "Although every terminating loop has a ranking function, not every loop has a ranking function of a restricted form, such as a lexicographic tuple of polynomials over program variables. The polyranking principle is proposed as a generalization of polynomial ranking for analyzing termination of loops. We define lexicographic polyranking functions in the context of loops with parallel transitions consisting of polynomial assertions, including inequalities, over primed and unprimed variables. Next, we address synthesis of these functions with a complete and automatic method for synthesizing lexicographic linear polyranking functions with supporting linear invariants over linear loops." ] }
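The Farkas-based constraint transformation used in these approaches can be made concrete on a toy loop. The sketch below hand-checks a Farkas-style certificate for the hypothetical loop `while x >= 1: x := x - 1` in plain Python; the cited methods obtain such certificates with constraint solvers rather than by guessing, and all concrete numbers here are illustrative.

```python
# Loop in matrix form: guard G @ x <= h, update x' = A @ x + b.
G, h = [[-1.0]], [-1.0]          # -x <= -1  (i.e., x >= 1)
A, b = [[1.0]], [-1.0]           # x' = x - 1

# Candidate ranking function r(x) = c . x + d with decrease delta = 1.
c, d, delta = [1.0], -1.0, 1.0

# Farkas certificates (nonnegative multipliers):
lam = [1.0]   # witnesses: guard implies r(x) >= 0
mu = [0.0]    # witnesses: guard implies r(x) - r(x') >= delta

def dot(u, v):
    return sum(p * q for p, q in zip(u, v))

# Condition 1 (boundedness): lam >= 0, G^T lam = -c, lam . h <= d
assert all(l >= 0 for l in lam)
assert all(abs(sum(G[i][j] * lam[i] for i in range(len(G))) + c[j]) < 1e-9
           for j in range(len(c)))
assert dot(lam, h) <= d + 1e-9

# Condition 2 (decrease): mu >= 0, G^T mu = -(I - A)^T c,
#                         mu . h <= -delta - c . b
n = len(c)
IA = [[(1.0 if i == j else 0.0) - A[i][j] for j in range(n)] for i in range(n)]
assert all(m >= 0 for m in mu)
assert all(abs(sum(G[i][j] * mu[i] for i in range(len(G)))
               + sum(IA[i][j] * c[i] for i in range(n))) < 1e-9
           for j in range(n))
assert dot(mu, h) <= -delta - dot(c, b) + 1e-9

# Sanity check: r decreases along an execution and stays nonnegative.
x = 7.0
while -x <= -1:                  # guard G x <= h
    r = c[0] * x + d
    assert r >= 0
    x = x - 1                    # update
    assert r - (c[0] * x + d) >= delta
print("certificate valid; r(x) = x - 1 is a ranking function")
```

The point of the Farkas (or Motzkin) step is exactly this shape: once the multipliers are introduced, every condition above is linear in the unknowns, so the ∃∀-constraint becomes a plain ∃-constraint a solver can handle.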
1503.00193
1525126490
We present a new method for the constraint-based synthesis of termination arguments for linear loop programs based on linear ranking templates. Linear ranking templates are parametrized, well-founded relations such that an assignment to the parameters gives rise to a ranking function. This approach generalizes existing methods and enables us to use templates for many different ranking functions with affine-linear components. We discuss templates for multiphase, piecewise, and lexicographic ranking functions. Because these ranking templates require both strict and non-strict inequalities, we use Motzkin’s Transposition Theorem instead of Farkas Lemma to transform the generated ∃ ∀-constraint into an ∃-constraint.
Ben-Amram and Genaim discuss the synthesis of affine-linear and lexicographic ranking functions for linear loop programs over the integers @cite_28 . They prove that this problem is generally co-NP-complete and show that several special cases admit a polynomial time complexity.
{ "cite_N": [ "@cite_28" ], "mid": [ "1983764301" ], "abstract": [ "In this article, we study the complexity of the problems: given a loop, described by linear constraints over a finite set of variables, is there a linear or lexicographical-linear ranking function for this loop? While existence of such functions implies termination, these problems are not equivalent to termination. When the variables range over the rationals (or reals), it is known that both problems are PTIME decidable. However, when they range over the integers, whether for single-path or multipath loops, the complexity has not yet been determined. We show that both problems are coNP-complete. However, we point out some special cases of importance of PTIME complexity. We also present complete algorithms for synthesizing linear and lexicographical-linear ranking functions, both for the general case and the special PTIME cases. Moreover, in the rational setting, our algorithm for synthesizing lexicographical-linear ranking functions extends existing ones, because our definition for such functions is more general, yet it has PTIME complexity." ] }
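A naive way to appreciate the integer setting is to enumerate. The sketch below brute-forces integer coefficients of an affine ranking function for a hypothetical loop `while x >= 1: x := x - 2`, checking the ranking conditions only on a bounded window of integer states. This is not Ben-Amram and Genaim's algorithm, only an illustration of the search space a symbolic procedure has to navigate.

```python
from itertools import product

def is_ranking(a, b, states):
    """Check r(x) = a*x + b on the sampled states of 'while x >= 1: x -= 2'."""
    for x in states:
        if x >= 1:                     # guard
            xp = x - 2                 # update
            r, rp = a * x + b, a * xp + b
            if r < 0 or r - rp < 1:    # bounded below, decreases by >= 1
                return False
    return True

states = range(-10, 11)
found = [(a, b) for a, b in product(range(-3, 4), repeat=2)
         if is_ranking(a, b, states)]
print(found)
```

Any pair with a >= 1 and b >= -a passes here; a complete procedure must certify the conditions for all integers, not just a window, which is where the co-NP-hardness enters.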
1503.00193
1525126490
We present a new method for the constraint-based synthesis of termination arguments for linear loop programs based on linear ranking templates. Linear ranking templates are parametrized, well-founded relations such that an assignment to the parameters gives rise to a ranking function. This approach generalizes existing methods and enables us to use templates for many different ranking functions with affine-linear components. We discuss templates for multiphase, piecewise, and lexicographic ranking functions. Because these ranking templates require both strict and non-strict inequalities, we use Motzkin’s Transposition Theorem instead of Farkas Lemma to transform the generated ∃ ∀-constraint into an ∃-constraint.
Approaches for computing lexicographic linear ranking functions for a more general class of programs, namely programs that can consist of several (potentially nested) loops are presented in @cite_1 and @cite_13 . On linear loop programs, both algorithms involve choosing an ordering on the loop disjuncts. Hence, both approaches are either incomplete or have to use backtracking to iteratively consider all possible orderings of loop disjuncts.
{ "cite_N": [ "@cite_13", "@cite_1" ], "mid": [ "1589106570", "1523037784" ], "abstract": [ "Termination proving has traditionally been based on the search for (possibly lexicographic) ranking functions. In recent years, however, the discovery of termination proof techniques based on Ramsey's theorem have led to new automation strategies, e.g. size-change, or iterative reductions from termination to safety. In this paper we revisit the decision to use Ramsey-based termination arguments in the iterative approach. We describe a new iterative termination proving procedure that instead searches for lexicographic termination arguments. Using experimental evidence we show that this new method leads to dramatic speedups.", "Proving the termination of a flowchart program can be done by exhibiting a ranking function, i.e., a function from the program states to a well-founded set, which strictly decreases at each program step. A standard method to automatically generate such a function is to compute invariants for each program point and to search for a ranking in a restricted class of functions that can be handled with linear programming techniques. Previous algorithms based on affine rankings either are applicable only to simple loops (i.e., single-node flowcharts) and rely on enumeration, or are not complete in the sense that they are not guaranteed to find a ranking in the class of functions they consider, if one exists. Our first contribution is to propose an efficient algorithm to compute ranking functions: It can handle flowcharts of arbitrary structure, the class of candidate rankings it explores is larger, and our method, although greedy, is provably complete. Our second contribution is to show how to use the ranking functions we generate to get upper bounds for the computational complexity (number of transitions) of the source program. This estimate is a polynomial, which means that we can handle programs with more than linear complexity. 
We applied the method on a collection of test cases from the literature. We also show the links and differences with previous techniques based on the insertion of counters." ] }
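The ordering problem mentioned above can be illustrated directly: the sketch below tries every ordering of per-disjunct affine functions for a hypothetical two-disjunct loop and tests the lexicographic decrease on sampled transitions; only one ordering succeeds. The loop, functions, and sampling are invented for illustration.

```python
from itertools import permutations

# Two disjuncts over state (x, y):
#   d1: x >= 1          -> (x - 1, y + x)   (y may increase)
#   d2: x <= 0, y >= 1  -> (x, y - 1)
def transitions(states):
    for (x, y) in states:
        if x >= 1:
            yield (x, y), (x - 1, y + x), "d1"
        if x <= 0 and y >= 1:
            yield (x, y), (x, y - 1), "d2"

f = {"d1": lambda x, y: x, "d2": lambda x, y: y}   # one function per disjunct

def lex_ok(order, trans):
    for s, t, d in trans:
        vals_s = [f[n](*s) for n in order]
        vals_t = [f[n](*t) for n in order]
        i = order.index(d)
        # component i is bounded and decreases; earlier components don't increase
        if not (vals_s[i] >= 0 and vals_t[i] < vals_s[i]):
            return False
        if any(vals_t[j] > vals_s[j] for j in range(i)):
            return False
    return True

samples = [(x, y) for x in range(-2, 4) for y in range(0, 4)]
trans = list(transitions(samples))
good = [order for order in permutations(f) if lex_ok(order, trans)]
print(good)
```

With d2 first, the tuple fails because d1 increases y; with d1 first it works. This is the backtracking over orderings that the template-based approach avoids.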
1503.00193
1525126490
We present a new method for the constraint-based synthesis of termination arguments for linear loop programs based on linear ranking templates. Linear ranking templates are parametrized, well-founded relations such that an assignment to the parameters gives rise to a ranking function. This approach generalizes existing methods and enables us to use templates for many different ranking functions with affine-linear components. We discuss templates for multiphase, piecewise, and lexicographic ranking functions. Because these ranking templates require both strict and non-strict inequalities, we use Motzkin’s Transposition Theorem instead of Farkas Lemma to transform the generated ∃ ∀-constraint into an ∃-constraint.
Our method is not able to prove termination for all terminating linear loop programs. Termination is decidable for the subclass of deterministic conjunctive linear loop programs of the form where the matrices @math , @math , @math and vectors @math , @math , @math are rational, and variables can take on rational or real values @cite_38 . This class also admits decidable termination analysis over the integers for the homogeneous case where @math @cite_31 . However, their method is not targeted at the synthesis of ranking functions.
{ "cite_N": [ "@cite_38", "@cite_31" ], "mid": [ "1575647584", "1561261246" ], "abstract": [ "We show that termination of a class of linear loop programs is decidable. Linear loop programs are discrete-time linear systems with a loop condition governing termination, that is, a while loop with linear assignments. We relate the termination of such a simple loop, on all initial values, to the eigenvectors corresponding to only the positive real eigenvalues of the matrix defining the loop assignments. This characterization of termination is reminiscent of the famous stability theorems in control theory that characterize stability in terms of eigenvalues.", "We show that termination of a simple class of linear loops over the integers is decidable. Namely we show that termination of deterministic linear loops is decidable over the integers in the homogeneous case, and over the rationals in the general case. This is done by analyzing the powers of a matrix symbolically using its eigenvalues. Our results generalize the work of Tiwari [Tiw04], where similar results were derived for termination over the reals. We also gain some insights into termination of non-homogeneous integer programs, that are very common in practice." ] }
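The eigenvalue analysis underlying these decidability results can be sketched for a hypothetical 2x2 update matrix: compute the eigenvalues via the characteristic polynomial and report the positive real ones, which are the relevant ones in the characterization. This is only the inspection step, not a full decision procedure.

```python
import math

def eig2(A):
    """Eigenvalues of a 2x2 matrix via the characteristic polynomial."""
    (a, b), (c, d) = A
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4 * det
    if disc < 0:
        return []                       # complex pair; no real eigenvalues
    s = math.sqrt(disc)
    return [(tr - s) / 2, (tr + s) / 2]

A = [[0.5, 0.0], [0.0, 2.0]]            # contracts one axis, expands the other
positive_real = [l for l in eig2(A) if l > 0]
print(positive_real)
```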
1503.00193
1525126490
We present a new method for the constraint-based synthesis of termination arguments for linear loop programs based on linear ranking templates. Linear ranking templates are parametrized, well-founded relations such that an assignment to the parameters gives rise to a ranking function. This approach generalizes existing methods and enables us to use templates for many different ranking functions with affine-linear components. We discuss templates for multiphase, piecewise, and lexicographic ranking functions. Because these ranking templates require both strict and non-strict inequalities, we use Motzkin’s Transposition Theorem instead of Farkas Lemma to transform the generated ∃ ∀-constraint into an ∃-constraint.
Ranking functions can also be computed via abstract interpretation @cite_18 . Urban and Miné @cite_35 @cite_16 @cite_7 introduced the domain of piecewise defined ordinal-valued functions for this approach. In contrast to our work, their approach is applicable to programs with arbitrary structure and not restricted to linear lasso programs. However, the authors do not provide completeness results that state that a ranking function of a certain form can always be found.
{ "cite_N": [ "@cite_35", "@cite_18", "@cite_16", "@cite_7" ], "mid": [ "2114695797", "", "45151810", "77406878" ], "abstract": [ "We present a parameterized abstract domain for proving program termination by abstract interpretation. The domain automatically synthesizes piecewise-defined ranking functions and infers sufficient conditions for program termination. The analysis uses over-approximations but we prove its soundness, meaning that all program executions respecting these sufficient conditions are indeed terminating.", "", "The traditional method for proving program termination consists in inferring a ranking function. In many cases i.e. programs with unbounded non-determinism, a single ranking function over natural numbers is not sufficient. Hence, we propose a new abstract domain to automatically infer ranking functions over ordinals. We extend an existing domain for piecewise-defined natural-valued ranking functions to polynomials in ω, where the polynomial coefficients are natural-valued functions of the program variables. The abstract domain is parametric in the choice of the maximum degree of the polynomial, and the types of functions used as polynomial coefficients. We have implemented a prototype static analyzer for a while-language by instantiating our domain using affine functions as polynomial coefficients. We successfully analyzed small but intricate examples that are out of the reach of existing methods. To our knowledge this is the first abstract domain able to reason about ordinals. Handling ordinals leads to a powerful approach for proving termination of imperative programs, which in particular subsumes existing techniques based on lexicographic ranking functions.", "We present a new parameterized abstract domain able to refine existing numerical abstract domains with finite disjunctions. 
The elements of the abstract domain are decision trees where the decision nodes are labeled with linear constraints, and the leaf nodes belong to a numerical abstract domain." ] }
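A piecewise-defined ranking function in the spirit of this decision-tree domain can be represented directly: inner nodes carry linear constraints, leaves carry affine functions of the program variables. The tree below is a hand-written toy; the cited abstract domain infers such objects automatically.

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Leaf:
    coeffs: tuple      # affine function: coeffs[0] * x + coeffs[1]
    def __call__(self, x):
        return self.coeffs[0] * x + self.coeffs[1]

@dataclass
class Node:
    a: float           # constraint a*x <= b selects the left subtree
    b: float
    left: Union["Node", "Leaf"]
    right: Union["Node", "Leaf"]
    def __call__(self, x):
        return self.left(x) if self.a * x <= self.b else self.right(x)

# r(x) = -x if x <= 0 else x, a piecewise ranking for
# "while x != 0: x -= sign(x)"
r = Node(a=1.0, b=0.0, left=Leaf((-1.0, 0.0)), right=Leaf((1.0, 0.0)))

assert r(-3) == 3 and r(4) == 4
# r decreases along every step of the loop:
for x in [-3, -2, -1, 1, 2, 3]:
    step = x - (1 if x > 0 else -1)
    assert r(step) < r(x)
print("piecewise ranking function decreases on all sampled steps")
```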
1503.00095
1914293925
We present a novel learning method for word embeddings designed for relation classification. Our word embeddings are trained by predicting words between noun pairs using lexical relation-specific features on a large unlabeled corpus. This allows us to explicitly incorporate relation-specific information into the word embeddings. The learned word embeddings are then used to construct feature vectors for a relation classification model. On a well-established semantic relation classification task, our method significantly outperforms a baseline based on a previously introduced word embedding method, and compares favorably to previous state-of-the-art models that use syntactic information or manually constructed external resources.
A traditional approach to relation classification is to train classifiers in a supervised fashion using a variety of features. These features include lexical bag-of-words features and features based on syntactic parse trees. For syntactic parse trees, the paths between the target entities on constituency and dependency trees have been demonstrated to be useful @cite_26 @cite_23 . On the shared task introduced by Hendrickx et al. (2010), Rink and Harabagiu (2010) achieved the best score using a variety of hand-crafted features which were then used to train a Support Vector Machine (SVM).
{ "cite_N": [ "@cite_26", "@cite_23" ], "mid": [ "2138627627", "2152269015" ], "abstract": [ "We present a novel approach to relation extraction, based on the observation that the information required to assert a relationship between two named entities in the same sentence is typically captured by the shortest path between the two entities in the dependency graph. Experiments on extracting top-level relations from the ACE (Automated Content Extraction) newspaper corpus show that the new shortest path dependency kernel outperforms a recent approach based on dependency tree kernels.", "This paper proposes a novel composite kernel for relation extraction. The composite kernel consists of two individual kernels: an entity kernel that allows for entity-related features and a convolution parse tree kernel that models syntactic information of relation examples. The motivation of our method is to fully utilize the nice properties of kernel methods to explore diverse knowledge for relation extraction. Our study illustrates that the composite kernel can effectively capture both flat and structured features without the need for extensive feature engineering, and can also easily scale to include more features. Evaluation on the ACE corpus shows that our method outperforms the previous best-reported methods and significantly out-performs previous two dependency tree kernels for relation extraction." ] }
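The dependency-path feature can be sketched with a toy parse: breadth-first search between the two entity heads yields the path string used as a lexical feature. The sentence, edges, and entity choices below are invented for illustration.

```python
from collections import deque

# "A burst of wind caused the damage" (hypothetical parse, undirected edges)
edges = {
    "burst": ["A", "of", "caused"],
    "of": ["burst", "wind"],
    "A": ["burst"], "wind": ["of"],
    "caused": ["burst", "damage"],
    "damage": ["caused", "the"], "the": ["damage"],
}

def shortest_path(graph, src, dst):
    """BFS shortest path between two tokens in the dependency graph."""
    prev, q = {src: None}, deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            break
        for v in graph[u]:
            if v not in prev:
                prev[v] = u
                q.append(v)
    path, u = [], dst
    while u is not None:
        path.append(u)
        u = prev[u]
    return path[::-1]

# The path between the entity heads becomes a lexical feature string.
feature = "->".join(shortest_path(edges, "burst", "damage"))
print(feature)
```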
1502.08030
1607091478
Author name ambiguity is one of the problems that decrease the quality and reliability of information retrieved from digital libraries. Existing methods have tried to solve this problem by predefining a feature set based on expert's knowledge for a specific dataset. In this paper, we propose a new approach which uses deep neural network to learn features automatically for solving author name ambiguity. Additionally, we propose the general system architecture for author name disambiguation on any dataset. We evaluate the proposed method on a dataset containing Vietnamese author names. The results show that this method significantly outperforms other methods that use predefined feature set. The proposed method achieves 99.31% in terms of accuracy. Prediction error rate decreases from 1.83% to 0.69%, i.e., it decreases by 1.14%, or 62.3% relatively, compared with other methods that use a predefined feature set (Table 3).
@cite_12 presented a brief survey of author name disambiguation methods. According to their survey, existing methods have tried to create, select, and combine features based on the similarity of attributes, using string-matching measures or specific heuristics such as the number of coauthor names in common.
{ "cite_N": [ "@cite_12" ], "mid": [ "2129558264" ], "abstract": [ "Name ambiguity in the context of bibliographic citation records is a hard problem that affects the quality of services and content in digital libraries and similar systems. The challenges of dealing with author name ambiguity have led to a myriad of disambiguation methods. Generally speaking, the proposed methods usually attempt to group citation records of a same author by finding some similarity among them or try to directly assign them to their respective authors. Both approaches may either exploit supervised or unsupervised techniques. In this article, we propose a taxonomy for characterizing the current author name disambiguation methods described in the literature, present a brief survey of the most representative ones and discuss several open challenges." ] }
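Typical predefined attribute features of the kind surveyed can be computed in a few lines: Jaccard overlap of title words, shared coauthor count, and a venue match. The records below are invented examples, not data from any of the cited collections.

```python
def jaccard(a, b):
    """Jaccard similarity of two token collections."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def features(r1, r2):
    return {
        "title_jaccard": jaccard(r1["title"].lower().split(),
                                 r2["title"].lower().split()),
        "shared_coauthors": len(set(r1["coauthors"]) & set(r2["coauthors"])),
        "same_venue": int(r1["venue"] == r2["venue"]),
    }

r1 = {"title": "Entity resolution in digital libraries",
      "coauthors": ["B. Tran", "C. Le"], "venue": "JCDL"}
r2 = {"title": "Collective entity resolution for digital libraries",
      "coauthors": ["C. Le", "D. Pham"], "venue": "JCDL"}

print(features(r1, r2))
```

Such feature vectors are what the classifiers in the surveyed methods are trained on; the deep-learning approach of the present paper replaces this hand-crafted step.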
1502.08030
1607091478
Author name ambiguity is one of the problems that decrease the quality and reliability of information retrieved from digital libraries. Existing methods have tried to solve this problem by predefining a feature set based on expert's knowledge for a specific dataset. In this paper, we propose a new approach which uses deep neural network to learn features automatically for solving author name ambiguity. Additionally, we propose the general system architecture for author name disambiguation on any dataset. We evaluate the proposed method on a dataset containing Vietnamese author names. The results show that this method significantly outperforms other methods that use predefined feature set. The proposed method achieves 99.31% in terms of accuracy. Prediction error rate decreases from 1.83% to 0.69%, i.e., it decreases by 1.14%, or 62.3% relatively, compared with other methods that use a predefined feature set (Table 3).
Bhattacharya and Getoor @cite_11 proposed a combined similarity function defined on attributes and relational information. The method obtained a high F1 score around 0.99 in the CiteSeer collection, lower in the arXiv collection and only around 0.81 in the BioBase collection.
{ "cite_N": [ "@cite_11" ], "mid": [ "2148019918" ], "abstract": [ "Many databases contain uncertain and imprecise references to real-world entities. The absence of identifiers for the underlying entities often results in a database which contains multiple references to the same entity. This can lead not only to data redundancy, but also inaccuracies in query processing and knowledge extraction. These problems can be alleviated through the use of entity resolution. Entity resolution involves discovering the underlying entities and mapping each database reference to these entities. Traditionally, entities are resolved using pairwise similarity over the attributes of references. However, there is often additional relational information in the data. Specifically, references to different entities may cooccur. In these cases, collective entity resolution, in which entities for cooccurring references are determined jointly rather than independently, can improve entity resolution accuracy. We propose a novel relational clustering algorithm that uses both attribute and relational information for determining the underlying domain entities, and we give an efficient implementation. We investigate the impact that different relational similarity measures have on entity resolution quality. We evaluate our collective entity resolution algorithm on multiple real-world databases. We show that it improves entity resolution performance over both attribute-based baselines and over algorithms that consider relational information but do not resolve entities collectively. In addition, we perform detailed experiments on synthetically generated data to identify data characteristics that favor collective relational resolution over purely attribute-based algorithms." ] }
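A greatly simplified greedy analogue of collective relational resolution: merge clusters whose combined attribute-plus-relational similarity reaches a threshold, recomputing the relational (shared-coauthor) evidence after each merge. The records, weights, and threshold are illustrative, not Bhattacharya and Getoor's actual algorithm.

```python
refs = [
    {"name": "A. Nguyen", "coauthors": {"B. Tran", "C. Le"}},
    {"name": "A. Nguyen", "coauthors": {"C. Le", "D. Pham"}},
    {"name": "A. Nguyen", "coauthors": {"E. Vo"}},
]

def sim(c1, c2):
    """Combined attribute + relational similarity of two clusters."""
    name = 1.0 if any(r["name"] == s["name"] for r in c1 for s in c2) else 0.0
    rel = len(set.union(*(r["coauthors"] for r in c1)) &
              set.union(*(r["coauthors"] for r in c2)))
    return 0.5 * name + 0.5 * min(rel, 1)

clusters = [[r] for r in refs]
merged = True
while merged:                      # greedily merge until no pair qualifies
    merged = False
    for i in range(len(clusters)):
        for j in range(i + 1, len(clusters)):
            if sim(clusters[i], clusters[j]) >= 1.0:
                clusters[i] += clusters.pop(j)
                merged = True
                break
        if merged:
            break

print(len(clusters))
```

The first two references share a coauthor and merge; the third, with the same name but no relational evidence, stays separate, which is the "collective" effect in miniature.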
1502.08030
1607091478
Author name ambiguity is one of the problems that decrease the quality and reliability of information retrieved from digital libraries. Existing methods have tried to solve this problem by predefining a feature set based on expert's knowledge for a specific dataset. In this paper, we propose a new approach which uses deep neural network to learn features automatically for solving author name ambiguity. Additionally, we propose the general system architecture for author name disambiguation on any dataset. We evaluate the proposed method on a dataset containing Vietnamese author names. The results show that this method significantly outperforms other methods that use predefined feature set. The proposed method achieves 99.31% in terms of accuracy. Prediction error rate decreases from 1.83% to 0.69%, i.e., it decreases by 1.14%, or 62.3% relatively, compared with other methods that use a predefined feature set (Table 3).
In another study, @cite_3 used a feature set resulting from the comparison of common citation attributes along with the medical subject headings, language, and affiliation of two references in the MEDLINE dataset. In a subsequent work @cite_13 , Torvik and Smalheiser incorporated additional features into their method to achieve better results.
{ "cite_N": [ "@cite_13", "@cite_3" ], "mid": [ "1964879903", "1766412966" ], "abstract": [ "Background: We recently described “Author-ity,” a model for estimating the probability that two articles in MEDLINE, sharing the same author name, were written by the same individual. Features include shared title words, journal name, coauthors, medical subject headings, language, affiliations, and author name features (middle initial, suffix, and prevalence in MEDLINE). Here we test the hypothesis that the Author-ity model will suffice to disambiguate author names for the vast majority of articles in MEDLINE. Methods: Enhancements include: (a) incorporating first names and their variants, email addresses, and correlations between specific last names and affiliation words; (b) new methods of generating large unbiased training sets; (c) new methods for estimating the prior probability; (d) a weighted least squares algorithm for correcting transitivity violations; and (e) a maximum likelihood based agglomerative algorithm for computing clusters of articles that represent inferred author-individuals. Results: Pairwise comparisons were computed for all author names on all 15.3 million articles in MEDLINE (2006 baseline), that share last name and first initial, to create Author-ity 2006, a database that has each name on each article assigned to one of 6.7 million inferred author-individual clusters. Recall is estimated at ∼98.8%. Lumping (putting two different individuals into the same cluster) affects ∼0.5% of clusters, whereas splitting (assigning articles written by the same individual to >1 cluster) affects ∼2% of articles. Impact: The Author-ity model can be applied generally to other bibliographic databases. 
Author name disambiguation allows information retrieval and data integration to become person-centered, not just document-centered, setting the stage for new data mining and social network tools that will facilitate the analysis of scholarly publishing and collaboration behavior. Availability: The Author-ity 2006 database is available for nonprofit academic research, and can be freely queried via http://arrowsmith.psych.uic.edu.", "We present a model for estimating the probability that a pair of author names (sharing last name and first initial), appearing on two different Medline articles, refer to the same individual. The model uses a simple yet powerful similarity profile between a pair of articles, based on title, journal name, coauthor names, medical subject headings (MeSH), language, affiliation, and name attributes (prevalence in the literature, middle initial, and suffix). The similarity profile distribution is computed from reference sets consisting of pairs of articles containing almost exclusively author matches versus nonmatches, generated in an unbiased manner. Although the match set is generated automatically and might contain a small proportion of nonmatches, the model is quite robust against contamination with nonmatches. We have created a free, public service (“Author-ity”: ) that takes as input an author's name given on a specific article, and gives as output a list of all articles with that (last name, first initial) ranked by decreasing similarity, with match probability indicated. © 2005 Wiley Periodicals, Inc." ] }
1502.08030
1607091478
Author name ambiguity is one of the problems that decrease the quality and reliability of information retrieved from digital libraries. Existing methods have tried to solve this problem by predefining a feature set based on expert's knowledge for a specific dataset. In this paper, we propose a new approach which uses deep neural network to learn features automatically for solving author name ambiguity. Additionally, we propose the general system architecture for author name disambiguation on any dataset. We evaluate the proposed method on a dataset containing Vietnamese author names. The results show that this method significantly outperforms other methods that use predefined feature set. The proposed method achieves 99.31% in terms of accuracy. Prediction error rate decreases from 1.83% to 0.69%, i.e., it decreases by 1.14%, or 62.3% relatively, compared with other methods that use a predefined feature set (Table 3).
In our previous research @cite_9 , we predefined a feature set to learn a similarity function specifically for a Vietnamese author dataset, one of the most difficult cases, and obtained an accuracy of around 0.98.
{ "cite_N": [ "@cite_9" ], "mid": [ "176842402" ], "abstract": [ "Automatic integration of bibliographical data from various sources is a really critical task in the field of digital libraries. One of the most important challenges for this process is the author name disambiguation. In this paper, we applied supervised learning approach and proposed a set of features that can be used to assist training classifiers in disambiguating Vietnamese author names. In order to evaluate efficiency of the proposed features set, we did experiments on five supervised learning methods: Random Forest, Support Vector Machine (SVM), k-Nearest Neighbors (kNN), C4.5 (Decision Tree), Bayes. The experiment dataset collected from three online digital libraries such as Microsoft Academic Search, ACM Digital Library, IEEE Digital Library. Our experiments shown that kNN, Random Forest, C4.5 classifier outperform than the others. The average accuracy archived with kNN approximates 94.55 , random forest is 94.23 , C4.5 is 93.98 , SVM is 91.91 and Bayes is lowest with 81.56 . Summary, we archived the highest accuracy 98.39 for author name disambiguation problem with the proposed feature set in our experiments on the Vietnamese authors dataset." ] }
1502.08030
1607091478
Author name ambiguity is one of the problems that decrease the quality and reliability of information retrieved from digital libraries. Existing methods have tried to solve this problem by predefining a feature set based on expert's knowledge for a specific dataset. In this paper, we propose a new approach which uses deep neural network to learn features automatically for solving author name ambiguity. Additionally, we propose the general system architecture for author name disambiguation on any dataset. We evaluate the proposed method on a dataset containing Vietnamese author names. The results show that this method significantly outperforms other methods that use predefined feature set. The proposed method achieves 99.31% in terms of accuracy. Prediction error rate decreases from 1.83% to 0.69%, i.e., it decreases by 1.14%, or 62.3% relatively, compared with other methods that use a predefined feature set (Table 3).
@cite_6 was very successful in using a big DNN to learn features in image recognition. They built a deep convolutional neural network and trained it by simple online back-propagation. Their models greatly outperformed previous methods on many well-known datasets such as MNIST (http://yann.lecun.com/exdb/mnist/), NORB (http://www.cs.nyu.edu/~ylclab/data/norb-v1.0/), etc., without using complicated image pre-processing techniques.
{ "cite_N": [ "@cite_6" ], "mid": [ "2951128674" ], "abstract": [ "Traditional methods of computer vision and machine learning cannot match human performance on tasks such as the recognition of handwritten digits or traffic signs. Our biologically plausible deep artificial neural network architectures can. Small (often minimal) receptive fields of convolutional winner-take-all neurons yield large network depth, resulting in roughly as many sparsely connected neural layers as found in mammals between retina and visual cortex. Only winner neurons are trained. Several deep neural columns become experts on inputs preprocessed in different ways; their predictions are averaged. Graphics cards allow for fast training. On the very competitive MNIST handwriting benchmark, our method is the first to achieve near-human performance. On a traffic sign recognition benchmark it outperforms humans by a factor of two. We also improve the state-of-the-art on a plethora of common image classification benchmarks." ] }
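The "simple online back-propagation" emphasized here can be sketched with a minimal fully connected network on XOR, a toy stand-in for the convolutional architectures in the cited work; the network size, learning rate, and epoch count below are illustrative choices, not values from the paper.

```python
import math, random

random.seed(0)
sig = lambda z: 1.0 / (1.0 + math.exp(-z))

# Weights: hidden layer W1 (2 units, each with 2 inputs + bias),
# output W2 (2 inputs + bias).
W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
W2 = [random.uniform(-1, 1) for _ in range(3)]
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(x):
    h = [sig(w[0] * x[0] + w[1] * x[1] + w[2]) for w in W1]
    y = sig(W2[0] * h[0] + W2[1] * h[1] + W2[2])
    return h, y

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

before = loss()
lr = 0.5
for _ in range(8000):                 # plain online (per-example) updates
    for x, t in data:
        h, y = forward(x)
        dy = (y - t) * y * (1 - y)                     # output delta
        dh = [dy * W2[i] * h[i] * (1 - h[i]) for i in range(2)]
        for i in range(2):
            W2[i] -= lr * dy * h[i]
        W2[2] -= lr * dy
        for i in range(2):
            W1[i][0] -= lr * dh[i] * x[0]
            W1[i][1] -= lr * dh[i] * x[1]
            W1[i][2] -= lr * dh[i]

print(f"loss {before:.3f} -> {loss():.3f}")
```

The hidden activations h are the "learned features": nothing about XOR is hand-coded, yet training shapes them into a representation the output unit can separate.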
1502.08030
1607091478
Author name ambiguity is one of the problems that decrease the quality and reliability of information retrieved from digital libraries. Existing methods have tried to solve this problem by predefining a feature set based on expert's knowledge for a specific dataset. In this paper, we propose a new approach which uses deep neural network to learn features automatically for solving author name ambiguity. Additionally, we propose the general system architecture for author name disambiguation on any dataset. We evaluate the proposed method on a dataset containing Vietnamese author names. The results show that this method significantly outperforms other methods that use predefined feature set. The proposed method achieves 99.31% in terms of accuracy. Prediction error rate decreases from 1.83% to 0.69%, i.e., it decreases by 1.14%, or 62.3% relatively, compared with other methods that use a predefined feature set (Table 3).
@cite_2 used a simple deep feedforward neural network to learn features in speech recognition. They demonstrated the model's ability to extract discriminative internal features that are robust to variation in the data. Their model outperformed state-of-the-art systems based on GMMs or shallow networks without the need for explicit model adaptation or feature normalization.
{ "cite_N": [ "@cite_2" ], "mid": [ "2964138484" ], "abstract": [ "Abstract: Recent studies have shown that deep neural networks (DNNs) perform significantly better than shallow networks and Gaussian mixture models (GMMs) on large vocabulary speech recognition tasks. In this paper, we argue that the improved accuracy achieved by the DNNs is the result of their ability to extract discriminative internal representations that are robust to the many sources of variability in speech signals. We show that these representations become increasingly insensitive to small perturbations in the input with increasing network depth, which leads to better speech recognition performance with deeper networks. We also show that DNNs cannot extrapolate to test samples that are substantially different from the training examples. If the training data are sufficiently representative, however, internal features learned by the DNN are relatively stable with respect to speaker differences, bandwidth differences, and environment distortion. This enables DNN-based recognizers to perform as well or better than state-of-the-art systems based on GMMs or shallow networks without the need for explicit model adaptation or feature normalization." ] }
1502.07770
2952854553
We consider the problem of minimizing the continuous valued total variation subject to different unary terms on trees and propose fast direct algorithms based on dynamic programming to solve these problems. We treat both the convex and the non-convex case and derive worst case complexities that are equal or better than existing methods. We show applications to total variation based 2D image processing and computer vision problems based on a Lagrangian decomposition approach. The resulting algorithms are very efficient, offer a high degree of parallelism and come along with memory requirements which are only in the order of the number of image pixels.
In this case we show how to efficiently compute distance transforms (or min-convolutions) for continuous piecewise-linear functions. To our knowledge, previous algorithmic work considered distance transforms only for discretized functions @cite_31 .
{ "cite_N": [ "@cite_31" ], "mid": [ "1518641734" ], "abstract": [ "We describe linear-time algorithms for solving a class of problems that involve transforming a cost function on a grid using spatial information. These problems can be viewed as a generalization of classical distance transforms of binary images, where the binary image is replaced by an arbitrary function on a grid. Alternatively they can be viewed in terms of the minimum convolution of two functions, which is an important operation in grayscale morphology. A consequence of our techniques is a simple and fast method for computing the Euclidean distance transform of a binary image. Our algorithms are also applicable to Viterbi decoding, belief propagation, and optimal control." ] }
1502.07770
2952854553
We consider the problem of minimizing the continuous valued total variation subject to different unary terms on trees and propose fast direct algorithms based on dynamic programming to solve these problems. We treat both the convex and the non-convex case and derive worst case complexities that are equal or better than existing methods. We show applications to total variation based 2D image processing and computer vision problems based on a Lagrangian decomposition approach. The resulting algorithms are very efficient, offer a high degree of parallelism and come along with memory requirements which are only in the order of the number of image pixels.
Specializing Hochbaum's method to trees yields the following complexities: (i) @math for problems with quadratic unaries, assuming that the values of @math are chosen as in @cite_2 ; (ii) @math for piecewise-linear unaries with @math breakpoints, assuming that the values of @math are computed by a linear-time median algorithm (as discussed in Sec. for chains). Instead of using a linear-time median algorithm, it is also possible to sort all breakpoints in @math time in a preprocessing step.
{ "cite_N": [ "@cite_2" ], "mid": [ "1490240535" ], "abstract": [ "In this paper tube methods for reconstructing discontinuous data from noisy and blurred observation data are considered. It is shown that discrete bounded variation (BV)-regularization (commonly used in inverse problems and image processing) and the taut-string algorithm (commonly used in statistics) select reconstructions in a tube. A version of the taut-string algorithm applicable for higher dimensional data is proposed. This formulation results in a bilateral contact problem which can be solved very efficiently using an active set strategy. As a by-product it is shown that the Lagrange multiplier of the active set strategy is an efficient parameter for edge detection." ] }
1502.07770
2952854553
We consider the problem of minimizing the continuous valued total variation subject to different unary terms on trees and propose fast direct algorithms based on dynamic programming to solve these problems. We treat both the convex and the non-convex case and derive worst case complexities that are equal or better than existing methods. We show applications to total variation based 2D image processing and computer vision problems based on a Lagrangian decomposition approach. The resulting algorithms are very efficient, offer a high degree of parallelism and come along with memory requirements which are only in the order of the number of image pixels.
The convex case on a chain (or its continuous-domain version) has been addressed in @cite_23 @cite_20 @cite_9 @cite_25 @cite_16 @cite_33 @cite_13 @cite_29 @cite_24 . In particular, it has been shown that the problem with quadratic unaries @math can be solved in @math time by the taut string algorithm @cite_0 @cite_12 and by the method of Johnson @cite_8 . Condat @cite_32 presented an @math algorithm, which, however, empirically outperformed the method in @cite_0 @cite_12 according to the tests in @cite_32 . In @cite_1 , the authors proposed an elegant derivation of the method of Condat @cite_32 starting from the taut string algorithm @cite_0 , which in turn also allows the use of weighted total variation. Our @math method for this case can be viewed as a generalization to weighted total variation and an alternative implementation of Johnson's algorithm that requires less memory.
{ "cite_N": [ "@cite_33", "@cite_8", "@cite_9", "@cite_29", "@cite_1", "@cite_32", "@cite_24", "@cite_0", "@cite_23", "@cite_12", "@cite_16", "@cite_13", "@cite_25", "@cite_20" ], "mid": [ "", "2025289780", "", "", "311729721", "2962973949", "", "2162894628", "2021345239", "2080238139", "", "", "", "" ], "abstract": [ "", "We propose a dynamic programming algorithm for the one-dimensional Fused Lasso Signal Approximator (FLSA). The proposed algorithm has a linear running time in the worst case. A similar approach is developed for the task of least squares segmentation, and simulations indicate substantial performance improvement over existing algorithms. Examples of R and C implementations are provided in the online Supplementary materials, posted on the journal web site.", "", "", "We study , a widely used technique for eliciting structured sparsity. In particular, we propose efficient algorithms for computing prox-operators for @math -norm TV. The most important among these is @math -norm TV, for whose prox-operator we present a new geometric analysis which unveils a hitherto unknown connection to taut-string methods. This connection turns out to be remarkably useful as it shows how our geometry guided implementation results in efficient weighted and unweighted 1D-TV solvers, surpassing state-of-the-art methods. Our 1D-TV solvers provide the backbone for building more complex (two or higher-dimensional) TV solvers within a modular proximal optimization approach. We review the literature for an array of methods exploiting this strategy, and illustrate the benefits of our modular design through extensive suite of experiments on (i) image denoising, (ii) image deconvolution, (iii) four variants of fused-lasso, and (iv) video denoising. 
To underscore our claims and permit easy reproducibility, we provide all the reviewed and our new TV solvers in an easy to use multi-threaded C++, Matlab and Python library.", "A very fast noniterative algorithm is proposed for denoising or smoothing one-dimensional discrete signals, by solving the total variation regularized least-squares problem or the related fused lasso problem. A C code implementation is available on the web page of the author.", "", "The paper considers the problem of nonparametric regression with emphasis on controlling the number of local extremes. Two methods, the run method and the taut-string multiresolution method, are introduced and analyzed on standard test beds. It is shown that the number and locations of local extreme values are consistently estimated. Rates of convergence are proved for both methods. The run method converges slowly but can withstand blocks as well as a high proportion of isolated outliers. The rate of convergence of the taut-string multiresolution method is almost optimal. The method is extremely sensitive and can detect very low power peaks. Section 1 contains an introduction with special reference to the number of local extreme values. The run method is described in Section 2 and the taut-string-multiresolution method in Section 3. Low power peaks are considered in Section 4. Section 5 contains a comparison with other methods and Section 6 a short conclusion. The proofs are given in Section 7 and the taut-string algorithm is described in the Appendix.", "It is known that discrete BV-regularization and the taut string algorithm are equivalent. In this paper we extend this result to the continuous case. First we derive necessary equations for the solution of both BV-regularization and the taut string algorithm by computing suitable Gateaux derivatives. The equivalence then follows from a uniqueness result.", "Suppose that we observe independent, identically distributed random pairs (X1; Y1), (X2; Y2), . . . 
, (Xn; Yn). Our goal is to estimate regression functions such as the conditional mean or nquantile of Y given X, where 0 0 is some tuning parameter. This framework is extended further in order to include binary or Poisson regression, and to include local variation penalties. The latter are needed in order to construct estimators adapting to inhomogenous smoothness of f . For the general framework we develop noniterative algorithms for the solution of the minimization problems which are closely related to the taut string algorithm (cf. Davies and Kovac 2001).", "", "", "", "" ] }
1502.07770
2952854553
We consider the problem of minimizing the continuous valued total variation subject to different unary terms on trees and propose fast direct algorithms based on dynamic programming to solve these problems. We treat both the convex and the non-convex case and derive worst case complexities that are equal or better than existing methods. We show applications to total variation based 2D image processing and computer vision problems based on a Lagrangian decomposition approach. The resulting algorithms are very efficient, offer a high degree of parallelism and come along with memory requirements which are only in the order of the number of image pixels.
For the problem with piecewise-linear unaries @math the best known complexity was @math , which is achieved either by Hochbaum's method (as discussed earlier), or by the method in @cite_12 . We improve this to @math .
{ "cite_N": [ "@cite_12" ], "mid": [ "2080238139" ], "abstract": [ "Suppose that we observe independent, identically distributed random pairs (X1; Y1), (X2; Y2), . . . , (Xn; Yn). Our goal is to estimate regression functions such as the conditional mean or nquantile of Y given X, where 0 0 is some tuning parameter. This framework is extended further in order to include binary or Poisson regression, and to include local variation penalties. The latter are needed in order to construct estimators adapting to inhomogenous smoothness of f . For the general framework we develop noniterative algorithms for the solution of the minimization problems which are closely related to the taut string algorithm (cf. Davies and Kovac 2001)." ] }
1502.07770
2952854553
We consider the problem of minimizing the continuous valued total variation subject to different unary terms on trees and propose fast direct algorithms based on dynamic programming to solve these problems. We treat both the convex and the non-convex case and derive worst case complexities that are equal or better than existing methods. We show applications to total variation based 2D image processing and computer vision problems based on a Lagrangian decomposition approach. The resulting algorithms are very efficient, offer a high degree of parallelism and come along with memory requirements which are only in the order of the number of image pixels.
We generally follow the derivation in @cite_8 , which is quite different from the one in @cite_0 @cite_12 @cite_32 . We extend this derivation to non-smooth functions and to general trees.
{ "cite_N": [ "@cite_0", "@cite_32", "@cite_12", "@cite_8" ], "mid": [ "2162894628", "2962973949", "2080238139", "2025289780" ], "abstract": [ "The paper considers the problem of nonparametric regression with emphasis on controlling the number of local extremes. Two methods, the run method and the taut-string multiresolution method, are introduced and analyzed on standard test beds. It is shown that the number and locations of local extreme values are consistently estimated. Rates of convergence are proved for both methods. The run method converges slowly but can withstand blocks as well as a high proportion of isolated outliers. The rate of convergence of the taut-string multiresolution method is almost optimal. The method is extremely sensitive and can detect very low power peaks. Section 1 contains an introduction with special reference to the number of local extreme values. The run method is described in Section 2 and the taut-string-multiresolution method in Section 3. Low power peaks are considered in Section 4. Section 5 contains a comparison with other methods and Section 6 a short conclusion. The proofs are given in Section 7 and the taut-string algorithm is described in the Appendix.", "A very fast noniterative algorithm is proposed for denoising or smoothing one-dimensional discrete signals, by solving the total variation regularized least-squares problem or the related fused lasso problem. A C code implementation is available on the web page of the author.", "Suppose that we observe independent, identically distributed random pairs (X1; Y1), (X2; Y2), . . . , (Xn; Yn). Our goal is to estimate regression functions such as the conditional mean or nquantile of Y given X, where 0 0 is some tuning parameter. This framework is extended further in order to include binary or Poisson regression, and to include local variation penalties. The latter are needed in order to construct estimators adapting to inhomogenous smoothness of f . 
For the general framework we develop noniterative algorithms for the solution of the minimization problems which are closely related to the taut string algorithm (cf. Davies and Kovac 2001).", "We propose a dynamic programming algorithm for the one-dimensional Fused Lasso Signal Approximator (FLSA). The proposed algorithm has a linear running time in the worst case. A similar approach is developed for the task of least squares segmentation, and simulations indicate substantial performance improvement over existing algorithms. Examples of R and C implementations are provided in the online Supplementary materials, posted on the journal web site." ] }
1502.07790
2112665608
This thesis studies the range-based WSN localization problem in 3D environments that induce coplanarity. In most real-world applications, even though the environment is 3D, the grounded sensor nodes are usually deployed on 2D planar surfaces. Examples of these surfaces include structures seen in both indoor (e.g. floors, doors, walls, tables etc.) and outdoor (e.g. mountains, valleys, hills etc.) environments. In such environments, sensor nodes typically appear as coplanar node clusters. We refer to this type of deployment as a planar deployment. When there is a planar deployment, the coplanarity causes difficulties for the traditional range-based multilateration algorithms because a node cannot be unambiguously localized if the distance measurements to that node are from coplanar nodes. Thus, many already localized groups of nodes are rendered ineffective in the process just because they are coplanar. We therefore propose an algorithm called Coplanarity Based Localization (CBL) that can be used as an extension of any localization algorithm to avoid most flips caused by coplanarity. CBL first performs a 2D localization among the nodes that are clustered on the same surface, and then finds the positions of these clusters in 3D. We have carried out experiments using trilateration for 2D localization and quadrilateration for 3D localization, and experimentally verified that exploiting the clustering information leads to more precise localization than mere quadrilateration. We also propose a heuristic to extract the clustering information in case it is not available, which is yet to be improved in the future.
Aspnes et al. @cite_46 investigated the localization and localizability of a network in 2010. They defined the term global rigidity and showed that it is a necessary and sufficient condition for a WSN graph to be localized in 2D. Even though global rigidity is defined for all dimensions, the necessary and sufficient conditions for a WSN to be localized in 3D have not been found yet.
{ "cite_N": [ "@cite_46" ], "mid": [ "2110862165" ], "abstract": [ "In this paper, we provide a theoretical foundation for the problem of network localization in which some nodes know their locations and other nodes determine their locations by measuring the distances to their neighbors. We construct grounded graphs to model network localization and apply graph rigidity theory to test the conditions for unique localizability and to construct uniquely localizable networks. We further study the computational complexity of network localization and investigate a subclass of grounded graphs where localization can be computed efficiently. We conclude with a discussion of localization in sensor networks where the sensors are placed randomly" ] }
1502.07639
1499110125
Linearizability of concurrent data structures is usually proved by monolithic simulation arguments relying on the identification of the so-called linearization points. Regrettably, such proofs, whether manual or automatic, are often complicated and scale poorly to advanced non-blocking concurrency patterns, such as helping and optimistic updates. In response, we propose a more modular way of checking linearizability of concurrent queue algorithms that does not involve identifying linearization points. We reduce the task of proving linearizability with respect to the queue specification to establishing four basic properties, each of which can be proved independently by simpler arguments. As a demonstration of our approach, we verify the Herlihy and Wing queue, an algorithm that is challenging to verify by a simulation proof.
Linearizability was first introduced by Herlihy and Wing @cite_8 , who also presented the HW queue as an example whose linearizability cannot be proved by a simple forward simulation where each method performs its effects instantaneously at some point during its execution. The problem is, as we have seen, that neither @math nor @math can be given as the (unique) linearization point of @math events, because the way in which two concurrent enqueues are ordered may depend on not-yet-completed concurrent @math events. In other words, one cannot simply define a mapping from the concrete HW queue states to the queue specification states. Nevertheless, Herlihy and Wing do not dismiss the linearization point technique completely, as we do, but instead construct a proof where they map concrete states to non-empty sets of specification states.
{ "cite_N": [ "@cite_8" ], "mid": [ "2101939036" ], "abstract": [ "A concurrent object is a data object shared by concurrent processes. Linearizability is a correctness condition for concurrent objects that exploits the semantics of abstract data types. It permits a high degree of concurrency, yet it permits programmers to specify and reason about concurrent objects using known techniques from the sequential domain. Linearizability provides the illusion that each operation applied by concurrent processes takes effect instantaneously at some point between its invocation and its response, implying that the meaning of a concurrent object's operations can be given by pre- and post-conditions. This paper defines linearizability, compares it to other correctness conditions, presents and demonstrates a method for proving the correctness of implementations, and shows how to reason about concurrent objects, given they are linearizable." ] }
1502.07639
1499110125
Linearizability of concurrent data structures is usually proved by monolithic simulation arguments relying on the identification of the so-called linearization points. Regrettably, such proofs, whether manual or automatic, are often complicated and scale poorly to advanced non-blocking concurrency patterns, such as helping and optimistic updates. In response, we propose a more modular way of checking linearizability of concurrent queue algorithms that does not involve identifying linearization points. We reduce the task of proving linearizability with respect to the queue specification to establishing four basic properties, each of which can be proved independently by simpler arguments. As a demonstration of our approach, we verify the Herlihy and Wing queue, an algorithm that is challenging to verify by a simulation proof.
This mapping of concrete states to non-empty sets of abstract states is closely related to the method of backward simulation, employed by a number of manual proof efforts @cite_9 @cite_21 @cite_7 , and which @cite_7 recently showed to be a complete proof method for verifying linearizability. Like forward simulation proofs, backward simulation proofs are monolithic in the sense that they prove linearizability directly by one big proof. Sadly, they are also not very intuitive and, as a result, often difficult to come up with. For instance, although the definition of their backward simulation relation for the HW queue is four lines long, @cite_7 devote two full pages to explaining it.
{ "cite_N": [ "@cite_9", "@cite_21", "@cite_7" ], "mid": [ "2063798543", "1592188038", "27871822" ], "abstract": [ "We describe an approach to verifying concurrent data structures based on simulation between two Input Output Automata (IOAs), modelling the specification and the implementation. We explain how we used this approach in mechanically verifying a simple lock-free stack implementation using forward simulation, and briefly discuss our experience in verifying three other lock-free algorithms which all required the use of backward simulation.", "Optimistic and nonblocking concurrent algorithms are increasingly finding their way into practical use; an important example is software transactional memory implementations. Such algorithms are notoriously difficult to design and verify as correct, and we believe complete, formal, and machine-checked correctness proofs for such algorithms are critical. We have been studying the use of automated tools such as the PVS theorem proving system to model algorithms and their specifications using formalisms such as I O automata, and using simulation proof techniques to show the algorithms implement their specifications. While it has been relatively rare in the past, optimistic and nonblocking algorithms often require a special flavour of simulation proof, known as backward simulation. In this paper, we present what we believe is by far the most challenging backward simulation proof achieved to date; this proof was developed and completely checked using PVS.", "Linearisability is the standard correctness criterion for concurrent data structures. In this paper, we present a sound and complete proof technique for linearisability based on backward simulations. We exemplify this technique by a linearisability proof of the queue algorithm presented in Herlihy and Wing's landmark paper. 
Except for the manual proof by them, none of the many other current approaches to checking linearisability has successfully treated this intricate example. Our approach is grounded on complete mechanisation: the proof obligations for the queue are verified using the interactive prover KIV, and so is the general soundness and completeness result for our proof technique." ] }
1502.07639
1499110125
Linearizability of concurrent data structures is usually proved by monolithic simulation arguments relying on the identification of the so-called linearization points. Regrettably, such proofs, whether manual or automatic, are often complicated and scale poorly to advanced non-blocking concurrency patterns, such as helping and optimistic updates. In response, we propose a more modular way of checking linearizability of concurrent queue algorithms that does not involve identifying linearization points. We reduce the task of proving linearizability with respect to the queue specification to establishing four basic properties, each of which can be proved independently by simpler arguments. As a demonstration of our approach, we verify the Herlihy and Wing queue, an algorithm that is challenging to verify by a simulation proof.
As a result, most work on automatically verifying linearizability (e.g. @cite_0 @cite_12 @cite_1 @cite_19 @cite_20 ) and some manual verification efforts (e.g., @cite_15 @cite_9 ) have relied on the simpler technique of forward simulations, even though it is known to be incomplete. The programmer is typically required to annotate each method with its linearization points, and the verifier then uses some kind of shape analysis to automatically construct the simulation relation. This approach seems to work well for simple concurrent algorithms such as the Treiber stack and the Michael and Scott queues, where finding the linearization points may be automated by brute-force search @cite_1 . Most recently, @cite_20 , with their technique based on (automatically) rewriting implementations, have succeeded in extending this approach to some implementations with helping. Like its precursors, however, their approach also assumes the existence of static linearization points, i.e. instructions in the program code that, when executed, invariably correspond to the linearization of one or more methods. Thus, there are many implementations, as mentioned in the Introduction, that cannot be handled by this approach.
{ "cite_N": [ "@cite_9", "@cite_1", "@cite_0", "@cite_19", "@cite_15", "@cite_12", "@cite_20" ], "mid": [ "2063798543", "1867941603", "2109717427", "", "127393827", "71333976", "1503130891" ], "abstract": [ "We describe an approach to verifying concurrent data structures based on simulation between two Input Output Automata (IOAs), modelling the specification and the implementation. We explain how we used this approach in mechanically verifying a simple lock-free stack implementation using forward simulation, and briefly discuss our experience in verifying three other lock-free algorithms which all required the use of backward simulation.", "This paper presents a practical automatic verification procedure for proving linearizability (i.e., atomicity and functional correctness) of concurrent data structure implementations The procedure employs a novel instrumentation to verify logically pure executions, and is evaluated on a number of standard concurrent stack, queue and set algorithms.", "Linearizability is one of the main correctness criteria for implementations of concurrent data structures. A data structure is linearizable if its operations appear to execute atomically. Verifying linearizability of concurrent unbounded linked data structures is a challenging problem because it requires correlating executions that manipulate (unbounded-size) memory states. We present a static analysis for verifying linearizability of concurrent unbounded linked data structures. The novel aspect of our approach is the ability to prove that two (unboundedsize) memory layouts of two programs are isomorphic in the presence of abstraction. A prototype implementation of the analysis verified the linearizability of several published concurrent data structures implemented by singly-linked lists.", "", "Linearisability is the key correctness criterion for concurrent implementations of data structures shared by multiple processes. 
In this paper we present a proof of linearisability of the lazy implementation of a set due to The lazy set presents one of the most challenging issues in verifying linearisability: a linearisation point of an operation set by a process other than the one executing it. For this we develop a proof strategy based on refinement which uses thread local simulation conditions and the technique of potential linearisation points. The former allows us to prove linearisability for arbitrary numbers of processes by looking at only two processes at a time, the latter permits disposing with reasoning about the past. All proofs have been mechanically carried out using the interactive prover KIV.", "This paper presents a novel abstraction for heap-allocated data structures that keeps track of both their shape and their contents. By combining this abstraction with thread-local analysis and rely-guarantee reasoning, we can verify a collection of fine-grained blocking and non-blocking concurrent algorithms for an arbitrary (unbounded) number of threads. We prove that these algorithms are linearizable, namely equivalent (modulo termination) to their sequential counterparts.", "An execution containing operations performing queries or updating a concurrent object is linearizable w.r.t an abstract implementation (called specification) iff for each operation, one can associate a point in time, called linearization point, such that the execution of the operations in the order of their linearization points can be reproduced by the specification. Finding linearization points is particularly difficult when they do not belong to the operations's actions. This paper addresses this challenge by introducing a new technique for rewriting the implementation of the concurrent object and its specification such that the new implementation preserves all executions of the original one, and its linearizability (w.r.t. 
the new specification) implies the linearizability of the original implementation (w.r.t. the original specification). The rewriting introduces additional combined methods to obtain a library with a simpler linearizability proof, i.e., a library whose operations contain their linearization points. We have implemented this technique in a prototype, which has been successfully applied to examples beyond the reach of current techniques, e.g., Stack Elimination and Fetch&Add." ] }
1502.07639
1499110125
Linearizability of concurrent data structures is usually proved by monolithic simulation arguments relying on the identification of the so-called linearization points. Regrettably, such proofs, whether manual or automatic, are often complicated and scale poorly to advanced non-blocking concurrency patterns, such as helping and optimistic updates. In response, we propose a more modular way of checking linearizability of concurrent queue algorithms that does not involve identifying linearization points. We reduce the task of proving linearizability with respect to the queue specification to establishing four basic properties, each of which can be proved independently by simpler arguments. As a demonstration of our approach, we verify the Herlihy and Wing queue, an algorithm that is challenging to verify by a simulation proof.
To the best of our knowledge, there exist only two earlier published proofs of the HW queue: (1) the original pencil-and-paper proof by Herlihy and Wing @cite_8 , and (2) a mechanized backward simulation proof by @cite_7 .
{ "cite_N": [ "@cite_7", "@cite_8" ], "mid": [ "27871822", "2101939036" ], "abstract": [ "Linearisability is the standard correctness criterion for concurrent data structures. In this paper, we present a sound and complete proof technique for linearisability based on backward simulations. We exemplify this technique by a linearisability proof of the queue algorithm presented in Herlihy and Wing's landmark paper. Except for the manual proof by them, none of the many other current approaches to checking linearisability has successfully treated this intricate example. Our approach is grounded on complete mechanisation: the proof obligations for the queue are verified using the interactive prover KIV, and so is the general soundness and completeness result for our proof technique.", "A concurrent object is a data object shared by concurrent processes. Linearizability is a correctness condition for concurrent objects that exploits the semantics of abstract data types. It permits a high degree of concurrency, yet it permits programmers to specify and reason about concurrent objects using known techniques from the sequential domain. Linearizability provides the illusion that each operation applied by concurrent processes takes effect instantaneously at some point between its invocation and its response, implying that the meaning of a concurrent object's operations can be given by pre- and post-conditions. This paper defines linearizability, compares it to other correctness conditions, presents and demonstrates a method for proving the correctness of implementations, and shows how to reason about concurrent objects, given they are linearizable." ] }
1502.07209
1777628566
In this paper, we study the challenging problem of categorizing videos according to high-level semantics such as the existence of a particular human action or a complex event. Although extensive efforts have been devoted in recent years, most existing works combined multiple video features using simple fusion strategies and neglected the utilization of inter-class semantic relationships. This paper proposes a novel unified framework that jointly exploits the feature relationships and the class relationships for improved categorization performance. Specifically, these two types of relationships are estimated and utilized by imposing regularizations in the learning process of a deep neural network (DNN). Through arming the DNN with better capability of harnessing both the feature and the class relationships, the proposed regularized DNN (rDNN) is more suitable for modeling video semantics. We show that rDNN produces better performance over several state-of-the-art approaches. Competitive results are reported on the well-known Hollywood2 and Columbia Consumer Video benchmarks. In addition, to stimulate future research on large scale video categorization, we collect and release a new benchmark dataset, called FCVID, which contains 91,223 Internet videos and 239 manually annotated categories.
Video categorization has received significant research attention. Most approaches followed a very standard pipeline, where various features are first extracted and then used as inputs of classifiers. Many works have focused on the design of novel features, such as the Spatial-Temporal Interest Points (STIP) @cite_48 , trajectory-based descriptors @cite_66 , audio clues @cite_32 , and the Convolutional Neural Networks (CNN) based features @cite_2 @cite_65 @cite_61 @cite_15 .
{ "cite_N": [ "@cite_61", "@cite_48", "@cite_65", "@cite_32", "@cite_2", "@cite_15", "@cite_66" ], "mid": [ "1983364832", "2142194269", "2016053056", "2164311876", "2163605009", "2156303437", "2105101328" ], "abstract": [ "We consider the automated recognition of human actions in surveillance videos. Most current methods build classifiers based on complex handcrafted features computed from the raw inputs. Convolutional neural networks (CNNs) are a type of deep model that can act directly on the raw inputs. However, such models are currently limited to handling 2D inputs. In this paper, we develop a novel 3D CNN model for action recognition. This model extracts features from both the spatial and the temporal dimensions by performing 3D convolutions, thereby capturing the motion information encoded in multiple adjacent frames. The developed model generates multiple channels of information from the input frames, and the final feature representation combines information from all channels. To further boost the performance, we propose regularizing the outputs with high-level features and combining the predictions of a variety of different models. We apply the developed models to recognize human actions in the real-world environment of airport surveillance videos, and they achieve superior performance in comparison to baseline methods.", "The aim of this paper is to address recognition of natural human actions in diverse and realistic video settings. This challenging but important subject has mostly been ignored in the past due to several problems one of which is the lack of realistic and annotated video datasets. Our first contribution is to address this limitation and to investigate the use of movie scripts for automatic annotation of human actions in videos. We evaluate alternative methods for action retrieval from scripts and show benefits of a text-based classifier. 
Using the retrieved action samples for visual learning, we next turn to the problem of action classification in video. We present a new method for video classification that builds upon and extends several recent ideas including local space-time features, space-time pyramids and multi-channel non-linear SVMs. The method is shown to improve state-of-the-art results on the standard KTH action dataset by achieving 91.8% accuracy. Given the inherent problem of noisy labels in automatic annotation, we particularly investigate and show high tolerance of our method to annotation errors in the training set. We finally apply the method to learning and classifying challenging action classes in movies and show promising results.", "Convolutional Neural Networks (CNNs) have been established as a powerful class of models for image recognition problems. Encouraged by these results, we provide an extensive empirical evaluation of CNNs on large-scale video classification using a new dataset of 1 million YouTube videos belonging to 487 classes. We study multiple approaches for extending the connectivity of a CNN in time domain to take advantage of local spatio-temporal information and suggest a multiresolution, foveated architecture as a promising way of speeding up the training. Our best spatio-temporal networks display significant performance improvements compared to strong feature-based baselines (55.3% to 63.9%), but only a surprisingly modest improvement compared to single-frame models (59.3% to 60.9%). We further study the generalization performance of our best model by retraining the top layers on the UCF-101 Action Recognition dataset and observe significant performance improvements compared to the UCF-101 baseline model (63.3%, up from 43.9%).", "Inspired by the system presented in [1], we have developed novel auditory-model-based features that preserve the fine time structure lost in conventional frame-based features.
While the original auditory model is computationally intense, we present a simpler system that runs about ten times faster but achieves equivalent performance. We use these features for video soundtrack classification with the Columbia Consumer Video dataset, showing that the new features alone are roughly comparable to traditional MFCCs, but combining classifiers based on both features achieves a substantial mean Average Precision improvement of 15% over the MFCC baseline.", "We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called \"dropout\" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.", "We investigate architectures of discriminatively trained deep Convolutional Networks (ConvNets) for action recognition in video. The challenge is to capture the complementary information on appearance from still frames and motion between frames. We also aim to generalise the best performing hand-crafted features within a data-driven learning framework. Our contribution is three-fold. First, we propose a two-stream ConvNet architecture which incorporates spatial and temporal networks.
Second, we demonstrate that a ConvNet trained on multi-frame dense optical flow is able to achieve very good performance in spite of limited training data. Finally, we show that multitask learning, applied to two different action classification datasets, can be used to increase the amount of training data and improve the performance on both. Our architecture is trained and evaluated on the standard video actions benchmarks of UCF-101 and HMDB-51, where it is competitive with the state of the art. It also exceeds by a large margin previous attempts to use deep nets for video classification.", "Recently dense trajectories were shown to be an efficient video representation for action recognition and achieved state-of-the-art results on a variety of datasets. This paper improves their performance by taking into account camera motion to correct them. To estimate camera motion, we match feature points between frames using SURF descriptors and dense optical flow, which are shown to be complementary. These matches are, then, used to robustly estimate a homography with RANSAC. Human motion is in general different from camera motion and generates inconsistent matches. To improve the estimation, a human detector is employed to remove these matches. Given the estimated camera motion, we remove trajectories consistent with it. We also use this estimation to cancel out camera motion from the optical flow. This significantly improves motion-based descriptors, such as HOF and MBH. Experimental results on four challenging action datasets (i.e., Hollywood2, HMDB51, Olympic Sports and UCF50) significantly outperform the current state of the art." ] }
1502.07209
1777628566
In this paper, we study the challenging problem of categorizing videos according to high-level semantics such as the existence of a particular human action or a complex event. Although extensive efforts have been devoted in recent years, most existing works combined multiple video features using simple fusion strategies and neglected the utilization of inter-class semantic relationships. This paper proposes a novel unified framework that jointly exploits the feature relationships and the class relationships for improved categorization performance. Specifically, these two types of relationships are estimated and utilized by imposing regularizations in the learning process of a deep neural network (DNN). Through arming the DNN with better capability of harnessing both the feature and the class relationships, the proposed regularized DNN (rDNN) is more suitable for modeling video semantics. We show that rDNN produces better performance over several state-of-the-art approaches. Competitive results are reported on the well-known Hollywood2 and Columbia Consumer Video benchmarks. In addition, to stimulate future research on large scale video categorization, we collect and release a new benchmark dataset, called FCVID, which contains 91,223 Internet videos and 239 manually annotated categories.
In contrast to the variety of video features, Support Vector Machines (SVM) have been the dominant classifier option for over a decade. Recently, with the increasing popularity of deep learning based approaches, neural networks have also been adopted for video classification @cite_65 @cite_61 @cite_15 . Among them, the best deep learning based video categorization result was probably from Simonyan and Zisserman @cite_15 , who used a two-stream CNN approach to extract features from static frames and motion optical flow, respectively. The features were classified separately and the predictions were then linearly fused. Using this pipeline, they reported performance similar to that of the improved dense trajectories @cite_66 , one of the best hand-crafted feature-based approaches. Besides accuracy, efficiency is another important factor that should be considered in the design of a modern video classification system. Several recent studies investigated this issue by proposing efficient classification methods @cite_30 @cite_49 or parallel computing strategies @cite_12 @cite_73 .
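The two-stream late-fusion step described above, where per-stream predictions are combined linearly, can be sketched as follows. This is a minimal illustration, not code from the cited work; the score values and the equal stream weights are assumptions.

```python
def late_fuse(spatial_scores, temporal_scores, w_spatial=0.5, w_temporal=0.5):
    """Linearly fuse per-class scores from an appearance (spatial) stream
    and a motion (temporal) stream, as in two-stream late fusion."""
    return [w_spatial * s + w_temporal * t
            for s, t in zip(spatial_scores, temporal_scores)]

# Toy example with 3 classes: each stream outputs one score per class.
fused = late_fuse([0.2, 0.7, 0.1], [0.4, 0.5, 0.1])
best = max(range(len(fused)), key=fused.__getitem__)
print(best)  # class 1 has the highest fused score
```

In practice the per-stream weights would be tuned on validation data; equal weights simply average the two streams' outputs.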
{ "cite_N": [ "@cite_61", "@cite_30", "@cite_65", "@cite_49", "@cite_15", "@cite_73", "@cite_66", "@cite_12" ], "mid": [ "1983364832", "2134380836", "2016053056", "2005173850", "2156303437", "1547840952", "2105101328", "2265775419" ], "abstract": [ "We consider the automated recognition of human actions in surveillance videos. Most current methods build classifiers based on complex handcrafted features computed from the raw inputs. Convolutional neural networks (CNNs) are a type of deep model that can act directly on the raw inputs. However, such models are currently limited to handling 2D inputs. In this paper, we develop a novel 3D CNN model for action recognition. This model extracts features from both the spatial and the temporal dimensions by performing 3D convolutions, thereby capturing the motion information encoded in multiple adjacent frames. The developed model generates multiple channels of information from the input frames, and the final feature representation combines information from all channels. To further boost the performance, we propose regularizing the outputs with high-level features and combining the predictions of a variety of different models. We apply the developed models to recognize human actions in the real-world environment of airport surveillance videos, and they achieve superior performance in comparison to baseline methods.", "Straightforward classification using kernelized SVMs requires evaluating the kernel for a test vector and each of the support vectors. For a class of kernels we show that one can do this much more efficiently. In particular we show that one can build histogram intersection kernel SVMs (IKSVMs) with runtime complexity of the classifier logarithmic in the number of support vectors as opposed to linear for the standard approach. 
We further show that by precomputing auxiliary tables we can construct an approximate classifier with constant runtime and space requirements, independent of the number of support vectors, with negligible loss in classification accuracy on various tasks. This approximation also applies to 1 - χ2 and other kernels of similar form. We also introduce novel features based on multi-level histograms of oriented edge energy and present experiments on various detection datasets. On the INRIA pedestrian dataset an approximate IKSVM classifier based on these features has the current best performance, with a miss rate 13% lower at 10^-6 False Positive Per Window than the linear SVM detector of Dalal & Triggs. On the Daimler Chrysler pedestrian dataset IKSVM gives comparable accuracy to the best results (based on quadratic SVM), while being 15× faster. In these experiments our approximate IKSVM is up to 2000× faster than a standard implementation and requires 200× less memory. Finally we show that a 50× speedup is possible using approximate IKSVM based on spatial pyramid features on the Caltech 101 dataset with negligible loss of accuracy.", "Convolutional Neural Networks (CNNs) have been established as a powerful class of models for image recognition problems. Encouraged by these results, we provide an extensive empirical evaluation of CNNs on large-scale video classification using a new dataset of 1 million YouTube videos belonging to 487 classes. We study multiple approaches for extending the connectivity of a CNN in time domain to take advantage of local spatio-temporal information and suggest a multiresolution, foveated architecture as a promising way of speeding up the training. Our best spatio-temporal networks display significant performance improvements compared to strong feature-based baselines (55.3% to 63.9%), but only a surprisingly modest improvement compared to single-frame models (59.3% to 60.9%).
We further study the generalization performance of our best model by retraining the top layers on the UCF-101 Action Recognition dataset and observe significant performance improvements compared to the UCF-101 baseline model (63.3%, up from 43.9%).", "Event recognition in unconstrained Internet videos has great potential in many applications. State-of-the-art systems usually include modules that need extensive computation, such as the extraction of spatial-temporal interest points, which poses a big challenge for large-scale video processing. This paper presents SUPER, a Speeded UP Event Recognition framework for efficient Internet video analysis. We take a multimodal baseline that has produced strong performance on popular benchmarks, and systematically evaluate each component in terms of both computational cost and contribution to recognition accuracy. We show that, by choosing suitable features, classifiers, and fusion strategies, recognition speed can be greatly improved with minor performance degradation. In addition, we also evaluate how many visual and audio frames are needed for event recognition in Internet videos, a question left unanswered in the literature. Results on a rigorously designed dataset indicate that similar recognition accuracy can be attained using only 14 frames per video on average. We also observe that, different from the visual channel, the soundtracks contain little redundant information for video event recognition. Integrating all the findings, our suggested SUPER framework is 220-fold faster than the baseline approach with merely 3.8% drop in recognition accuracy. It classifies an 80-second video sequence using models of 20 classes in just 4.56 seconds.", "We investigate architectures of discriminatively trained deep Convolutional Networks (ConvNets) for action recognition in video. The challenge is to capture the complementary information on appearance from still frames and motion between frames. 
We also aim to generalise the best performing hand-crafted features within a data-driven learning framework. Our contribution is three-fold. First, we propose a two-stream ConvNet architecture which incorporates spatial and temporal networks. Second, we demonstrate that a ConvNet trained on multi-frame dense optical flow is able to achieve very good performance in spite of limited training data. Finally, we show that multitask learning, applied to two different action classification datasets, can be used to increase the amount of training data and improve the performance on both. Our architecture is trained and evaluated on the standard video actions benchmarks of UCF-101 and HMDB-51, where it is competitive with the state of the art. It also exceeds by a large margin previous attempts to use deep nets for video classification.", "In this work, we consider a standard architecture [1] trained on the Imagenet dataset [2] for classification and investigate methods to speed convergence by parallelizing training across multiple GPUs. In this work, we used up to 4 NVIDIA TITAN GPUs with 6GB of RAM. While our experiments are performed on a single server, our GPUs have disjoint memory spaces, and just as in the distributed setting, communication overheads are an important consideration. Unlike previous work [9, 10, 11], we do not aim to improve the underlying optimization algorithm. Instead, we isolate the impact of parallelism, while using standard supervised back-propagation and synchronous mini-batch stochastic gradient descent.", "Recently dense trajectories were shown to be an efficient video representation for action recognition and achieved state-of-the-art results on a variety of datasets. This paper improves their performance by taking into account camera motion to correct them. To estimate camera motion, we match feature points between frames using SURF descriptors and dense optical flow, which are shown to be complementary. 
These matches are, then, used to robustly estimate a homography with RANSAC. Human motion is in general different from camera motion and generates inconsistent matches. To improve the estimation, a human detector is employed to remove these matches. Given the estimated camera motion, we remove trajectories consistent with it. We also use this estimation to cancel out camera motion from the optical flow. This significantly improves motion-based descriptors, such as HOF and MBH. Experimental results on four challenging action datasets (i.e., Hollywood2, HMDB51, Olympic Sports and UCF50) significantly outperform the current state of the art.", "Deep learning gains lots of attentions in recent years and is more and more important for mining values in big data. However, to make deep learning practical for a wide range of applications in Tencent Inc., three requirements must be considered: 1) Lots of computational power are required to train a practical model with tens of millions of parameters and billions of samples for products such as automatic speech recognition (ASR), and the number of parameters and training data is still growing. 2) The capability of training larger model is necessary for better model quality. 3) Easy to use frameworks are valuable to do many experiments to perform model selection, such as finding an appropriate optimization algorithm and tuning optimal hyper-parameters. To accelerate training, support large models, and make experiments easier, we built Mariana, the Tencent deep learning platform, which utilizes GPU and CPU cluster to train models parallelly with three frameworks: 1) a multi-GPU data parallelism framework for deep neural networks (DNNs). 2) a multi-GPU model parallelism and data parallelism framework for deep convolutional neural networks (CNNs). 3) a CPU cluster framework for large scale DNNs. Mariana also provides built-in algorithms and features to facilitate experiments. 
Mariana is in production usage for more than one year, achieves state-of-the-art acceleration performance, and plays a key role in training models and improving quality for automatic speech recognition and image recognition in Tencent WeChat, a mobile social platform, and for Ad click-through rate prediction (pCTR) in Tencent QQ, an instant messaging platform, and Tencent Qzone, a social networking service." ] }
1502.07209
1777628566
In this paper, we study the challenging problem of categorizing videos according to high-level semantics such as the existence of a particular human action or a complex event. Although extensive efforts have been devoted in recent years, most existing works combined multiple video features using simple fusion strategies and neglected the utilization of inter-class semantic relationships. This paper proposes a novel unified framework that jointly exploits the feature relationships and the class relationships for improved categorization performance. Specifically, these two types of relationships are estimated and utilized by imposing regularizations in the learning process of a deep neural network (DNN). Through arming the DNN with better capability of harnessing both the feature and the class relationships, the proposed regularized DNN (rDNN) is more suitable for modeling video semantics. We show that rDNN produces better performance over several state-of-the-art approaches. Competitive results are reported on the well-known Hollywood2 and Columbia Consumer Video benchmarks. In addition, to stimulate future research on large scale video categorization, we collect and release a new benchmark dataset, called FCVID, which contains 91,223 Internet videos and 239 manually annotated categories.
In most state-of-the-art video categorization systems, two naive feature fusion strategies are adopted, i.e., early fusion and late fusion. Although neither method can exploit hidden feature relationships such as the correlations among different feature dimensions, both are widely used for their simplicity and good generalizability. Both methods require fusion weights that weigh the importance of each individual feature, which can be set to equal values (a.k.a. average fusion) or learned through cross validation. In several recent works, multiple kernel learning (MKL) @cite_41 was adopted to estimate the fusion weights @cite_0 @cite_56 . MKL was reported to produce better performance in some cases, but the gain was also often observed to be insignificant @cite_42 .
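The two naive strategies contrasted above can be made concrete with a small sketch; the feature vectors, scores, and weights below are illustrative placeholders, not from any cited system. Early fusion concatenates features before a single classifier, while late fusion combines the per-class scores of one classifier per feature type.

```python
def early_fusion(feature_vectors):
    # Concatenate per-feature vectors into a single input for one classifier.
    return [x for vec in feature_vectors for x in vec]

def late_fusion(per_feature_scores, weights=None):
    # Combine per-class scores, one score list per feature type.
    # Equal weights give "average fusion"; otherwise weights are typically
    # tuned by cross validation.
    if weights is None:
        weights = [1.0 / len(per_feature_scores)] * len(per_feature_scores)
    num_classes = len(per_feature_scores[0])
    return [sum(w * s[c] for w, s in zip(weights, per_feature_scores))
            for c in range(num_classes)]

print(early_fusion([[1, 2], [3, 4]]))  # [1, 2, 3, 4]
fused = late_fusion([[0.9, 0.1], [0.5, 0.5]])
print([round(v, 6) for v in fused])    # [0.7, 0.3]
```

Neither sketch models correlations between feature dimensions, which is exactly the limitation the surrounding text points out.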
{ "cite_N": [ "@cite_41", "@cite_42", "@cite_0", "@cite_56" ], "mid": [ "2031823405", "2538008885", "2166781916", "2141939040" ], "abstract": [ "While classical kernel-based classifiers are based on a single kernel, in practice it is often desirable to base classifiers on combinations of multiple kernels. (2004) considered conic combinations of kernel matrices for the support vector machine (SVM), and showed that the optimization of the coefficients of such a combination reduces to a convex optimization problem known as a quadratically-constrained quadratic program (QCQP). Unfortunately, current convex optimization toolboxes can solve this problem only for a small number of kernels and a small number of data points; moreover, the sequential minimal optimization (SMO) techniques that are essential in large-scale implementations of the SVM cannot be applied because the cost function is non-differentiable. We propose a novel dual formulation of the QCQP as a second-order cone programming problem, and show how to exploit the technique of Moreau-Yosida regularization to yield a formulation to which SMO techniques can be applied. We present experimental results that show that our SMO-based algorithm is significantly more efficient than the general-purpose interior point methods available in current optimization toolboxes.", "Our objective is to obtain a state-of-the-art object category detector by employing a state-of-the-art image classifier to search for the object in all possible image sub-windows. We use multiple kernel learning of Varma and Ray (ICCV 2007) to learn an optimal combination of exponential χ2 kernels, each of which captures a different feature channel. 
Our features include the distribution of edges, dense and sparse visual words, and feature descriptors at different levels of spatial organization.", "With the recent efforts made by computer vision researchers, more and more types of features have been designed to describe various aspects of visual characteristics. Modeling such heterogeneous features has become an increasingly critical issue. In this paper, we propose a machinery called the Heterogeneous Feature Machine (HFM) to effectively solve visual recognition tasks in need of multiple types of features. Our HFM builds a kernel logistic regression model based on similarities that combine different features and distance metrics. Different from existing approaches that use a linear weighting scheme to combine different features, HFM does not require the weights to remain the same across different samples, and therefore can effectively handle features of different types with different metrics. To prevent the model from overfitting, we employ the so-called group LASSO constraints to reduce model complexity. In addition, we propose a fast algorithm based on co-ordinate gradient descent to efficiently train a HFM. The power of the proposed scheme is demonstrated across a wide variety of visual recognition tasks including scene, event and action recognition.", "Combining multiple low-level visual features is a proven and effective strategy for a range of computer vision tasks. However, limited attention has been paid to combining such features with information from other modalities, such as audio and videotext, for large scale analysis of web videos. In our work, we rigorously analyze and combine a large set of low-level features that capture appearance, color, motion, audio and audio-visual co-occurrence patterns in videos. We also evaluate the utility of high-level (i.e., semantic) visual information obtained from detecting scene, object, and action concepts. 
Further, we exploit multimodal information by analyzing available spoken and videotext content using state-of-the-art automatic speech recognition (ASR) and videotext recognition systems. We combine these diverse features using a two-step strategy employing multiple kernel learning (MKL) and late score level fusion methods. Based on the TRECVID MED 2011 evaluations for detecting 10 events in a large benchmark set of ∼45000 videos, our system showed the best performance among the 19 international teams." ] }
1502.07209
1777628566
In this paper, we study the challenging problem of categorizing videos according to high-level semantics such as the existence of a particular human action or a complex event. Although extensive efforts have been devoted in recent years, most existing works combined multiple video features using simple fusion strategies and neglected the utilization of inter-class semantic relationships. This paper proposes a novel unified framework that jointly exploits the feature relationships and the class relationships for improved categorization performance. Specifically, these two types of relationships are estimated and utilized by imposing regularizations in the learning process of a deep neural network (DNN). Through arming the DNN with better capability of harnessing both the feature and the class relationships, the proposed regularized DNN (rDNN) is more suitable for modeling video semantics. We show that rDNN produces better performance over several state-of-the-art approaches. Competitive results are reported on the well-known Hollywood2 and Columbia Consumer Video benchmarks. In addition, to stimulate future research on large scale video categorization, we collect and release a new benchmark dataset, called FCVID, which contains 91,223 Internet videos and 239 manually annotated categories.
With the growing popularity of the DNN, a few recent studies have focused on combining multiple features in neural networks, which are closely related to this work. A deep de-noised auto-encoder was employed in @cite_5 to learn a shared representation from multimodal inputs. Similarly, a deep Boltzmann machine was utilized in @cite_39 to fuse visual and textual features. Very recently, @cite_8 proposed to learn a good shared representation by minimizing variation of information, so that a missing input modality can be better predicted from the available information. They showed that this method outperforms @cite_39 on several image classification benchmarks. Different from @cite_5 @cite_27 , which fused the features in a "free" way without imposing any learning or optimization process, in this paper we propose regularized fusion of multiple features, which is intuitively reasonable and empirically effective. Compared with @cite_8 , our objective is to identify dimension-wise feature correlations. Minimizing the variation of information in @cite_8 might be more suitable for images, but for videos, different modalities (e.g., audio and visual) may represent very distinctive information, and simply minimizing their variation may not be a good strategy for exploiting the complementary information.
{ "cite_N": [ "@cite_5", "@cite_27", "@cite_8", "@cite_39" ], "mid": [ "2184188583", "", "2148463593", "154472438" ], "abstract": [ "Deep networks have been successfully applied to unsupervised feature learning for single modalities (e.g., text, images or audio). In this work, we propose a novel application of deep networks to learn features over multiple modalities. We present a series of tasks for multimodal learning and show how to train deep networks that learn features to address these tasks. In particular, we demonstrate cross modality feature learning, where better features for one modality (e.g., video) can be learned if multiple modalities (e.g., audio and video) are present at feature learning time. Furthermore, we show how to learn a shared representation between modalities and evaluate it on a unique task, where the classifier is trained with audio-only data but tested with video-only data and vice-versa. Our models are validated on the CUAVE and AVLetters datasets on audio-visual speech classification, demonstrating best published visual speech classification on AVLetters and effective shared representation learning.", "", "Deep learning has been successfully applied to multimodal representation learning problems, with a common strategy to learning joint representations that are shared across multiple modalities on top of layers of modality-specific networks. Nonetheless, there still remains a question how to learn a good association between data modalities; in particular, a good generative model of multimodal data should be able to reason about missing data modality given the rest of data modalities. In this paper, we propose a novel multimodal representation learning framework that explicitly aims this goal. Rather than learning with maximum likelihood, we train the model to minimize the variation of information. 
We provide a theoretical insight why the proposed learning objective is sufficient to estimate the data-generating joint distribution of multimodal data. We apply our method to restricted Boltzmann machines and introduce learning methods based on contrastive divergence and multi-prediction training. In addition, we extend to deep networks with recurrent encoding structure to finetune the whole network. In experiments, we demonstrate the state-of-the-art visual recognition performance on MIR-Flickr database and PASCAL VOC 2007 database with and without text features.", "Data often consists of multiple diverse modalities. For example, images are tagged with textual information and videos are accompanied by audio. Each modality is characterized by having distinct statistical properties. We propose a Deep Boltzmann Machine for learning a generative model of such multimodal data. We show that the model can be used to create fused representations by combining features across modalities. These learned representations are useful for classification and information retrieval. By sampling from the conditional distributions over each data modality, it is possible to create these representations even when some data modalities are missing. We conduct experiments on bimodal image-text and audio-video data. The fused representation achieves good classification results on the MIR-Flickr data set matching or outperforming other deep models as well as SVM based models that use Multiple Kernel Learning. We further demonstrate that this multimodal model helps classification and retrieval even when only unimodal data is available at test time." ] }
1502.07209
1777628566
In this paper, we study the challenging problem of categorizing videos according to high-level semantics such as the existence of a particular human action or a complex event. Although extensive efforts have been devoted in recent years, most existing works combined multiple video features using simple fusion strategies and neglected the utilization of inter-class semantic relationships. This paper proposes a novel unified framework that jointly exploits the feature relationships and the class relationships for improved categorization performance. Specifically, these two types of relationships are estimated and utilized by imposing regularizations in the learning process of a deep neural network (DNN). Through arming the DNN with better capability of harnessing both the feature and the class relationships, the proposed regularized DNN (rDNN) is more suitable for modeling video semantics. We show that rDNN produces better performance over several state-of-the-art approaches. Competitive results are reported on the well-known Hollywood2 and Columbia Consumer Video benchmarks. In addition, to stimulate future research on large scale video categorization, we collect and release a new benchmark dataset, called FCVID, which contains 91,223 Internet videos and 239 manually annotated categories.
Many researchers have investigated class relationships, commonly termed context, to improve classification performance. The importance of context in the task of object detection in images was discussed in @cite_72 . In @cite_16 @cite_46 , the class co-occurrence context was utilized to improve object recognition accuracy. For video classification, @cite_6 proposed a semantic diffusion algorithm to harness the class relationships. The algorithm has the capability of domain adaptation; in other words, it can adjust pre-defined class relationships based on the data distribution of a domain different from the training set. @cite_40 proposed a similar domain-adaptive method that not only used the class relationships but also explored the temporal context of broadcast news videos. Recently, @cite_68 proposed Hierarchy and Exclusion (HEX) graphs, which capture not only co-occurrence relationships among classes but also mutual exclusion and subsumption. Two other recent works @cite_25 @cite_10 utilized co-occurrence statistics to help video classification, where the co-occurrence of classes was used more as a semantic feature representation.
{ "cite_N": [ "@cite_6", "@cite_68", "@cite_40", "@cite_72", "@cite_46", "@cite_16", "@cite_10", "@cite_25" ], "mid": [ "2123049467", "64813323", "2155703706", "2166761907", "2081293863", "2160676519", "", "2083598512" ], "abstract": [ "Exploring context information for visual recognition has recently received significant research attention. This paper proposes a novel and highly efficient approach, which is named semantic diffusion, to utilize semantic context for large-scale image and video annotation. Starting from the initial annotation of a large number of semantic concepts (categories), obtained by either machine learning or manual tagging, the proposed approach refines the results using a graph diffusion technique, which recovers the consistency and smoothness of the annotations over a semantic graph. Different from the existing graph-based learning methods that model relations among data samples, the semantic graph captures context by treating the concepts as nodes and the concept affinities as the weights of edges. In particular, our approach is capable of simultaneously improving annotation accuracy and adapting the concept affinities to new test data. The adaptation provides a means to handle domain change between training and test data, which often occurs in practice. Extensive experiments are conducted to improve concept annotation results using Flickr images and TV program videos. Results show consistent and significant performance gain (10% on both image and video data sets). Source codes of the proposed algorithms are available online.", "In this paper we study how to perform object classification in a principled way that exploits the rich structure of real world labels. We develop a new model that allows encoding of flexible relations between labels. We introduce Hierarchy and Exclusion (HEX) graphs, a new formalism that captures semantic relations between any two labels applied to the same object: mutual exclusion, overlap and subsumption. 
We then provide rigorous theoretical analysis that illustrates properties of HEX graphs such as consistency, equivalence, and computational implications of the graph structure. Next, we propose a probabilistic classification model based on HEX graphs and show that it enjoys a number of desirable properties. Finally, we evaluate our method using a large-scale benchmark. Empirical results demonstrate that our model can significantly improve object classification by exploiting the label relations.", "The success of query-by-concept, proposed recently to cater to video retrieval needs, depends greatly on the accuracy of concept-based video indexing. Unfortunately, it remains a challenge to recognize the presence of concepts in a video segment or to extract an objective linguistic description from it because of the semantic gap, that is, the lack of correspondence between machine-extracted low-level features and human high-level conceptual interpretation. This paper studies three issues with the aim to reduce such a gap: 1) how to explore cues beyond low-level features, 2) how to combine diverse cues to improve performance, and 3) how to utilize the learned knowledge when applying it to a new domain. To solve these problems, we propose a framework that jointly exploits multiple cues across multiple video domains. First, recursive algorithms are proposed to learn both interconcept and intershot relationships from annotations. Second, all concept labels for all shots are simultaneously refined in a single fusion model. Additionally, unseen shots are assigned pseudolabels according to their initial prediction scores so that contextual and temporal relationships can be learned, thus requiring no additional human effort. Integration of cues embedded within training and testing video sets accommodates domain change. 
Experiments on popular benchmarks show that our framework is effective, achieving significant improvements over popular baselines.", "There is general consensus that context can be a rich source of information about an object's identity, location and scale. In fact, the structure of many real-world scenes is governed by strong configurational rules akin to those that apply to a single object. Here we introduce a simple framework for modeling the relationship between context and object properties based on the correlation between the statistics of low-level features across the entire scene and the objects that it contains. The resulting scheme serves as an effective procedure for object priming, context driven focus of attention and automatic scale-selection on real-world scenes.", "In the task of visual object categorization, semantic context can play the very important role of reducing ambiguity in objects' visual appearance. In this work we propose to incorporate semantic object context as a post-processing step into any off-the-shelf object categorization model. Using a conditional random field (CRF) framework, our approach maximizes object label agreement according to contextual relevance. We compare two sources of context: one learned from training data and another queried from Google Sets. The overall performance of the proposed framework is evaluated on the PASCAL and MSRC datasets. Our findings conclude that incorporating context into object categorization greatly improves categorization accuracy.", "Object recognition and localization are important tasks in computer vision. The focus of this work is the incorporation of contextual information in order to improve object recognition and localization. For instance, it is natural to expect not to see an elephant to appear in the middle of an ocean. We consider a simple approach to encapsulate such common sense knowledge using co-occurrence statistics from web documents. 
By merely counting the number of times nouns (such as elephants, sharks, oceans, etc.) co-occur in web documents, we obtain a good estimate of expected co-occurrences in visual data. We then cast the problem of combining textual co-occurrence statistics with the predictions of image-based classifiers as an optimization problem. The resulting optimization problem serves as a surrogate for our inference procedure. Albeit the simplicity of the resulting optimization problem, it is effective in improving both recognition and localization accuracy. Concretely, we observe significant improvements in recognition and localization rates for both ImageNet Detection 2012 and Sun 2012 datasets.", "", "We address the problem of classifying complex videos based on their content. A typical approach to this problem is performing the classification using semantic attributes, commonly termed concepts, which occur in the video. In this paper, we propose a contextual approach to video classification based on Generalized Maximum Clique Problem (GMCP) which uses the co-occurrence of concepts as the context model. To be more specific, we propose to represent a class based on the co-occurrence of its concepts and classify a video based on matching its semantic co-occurrence pattern to each class representation. We perform the matching using GMCP which finds the strongest clique of co-occurring concepts in a video. We argue that, in principal, the co-occurrence of concepts yields a richer representation of a video compared to most of the current approaches. Additionally, we propose a novel optimal solution to GMCP based on Mixed Binary Integer Programming (MBIP). The evaluations show our approach, which opens new opportunities for further research in this direction, outperforms several well established video classification methods." ] }
1502.07209
1777628566
In this paper, we study the challenging problem of categorizing videos according to high-level semantics such as the existence of a particular human action or a complex event. Although extensive efforts have been devoted in recent years, most existing works combined multiple video features using simple fusion strategies and neglected the utilization of inter-class semantic relationships. This paper proposes a novel unified framework that jointly exploits the feature relationships and the class relationships for improved categorization performance. Specifically, these two types of relationships are estimated and utilized by imposing regularizations in the learning process of a deep neural network (DNN). Through arming the DNN with better capability of harnessing both the feature and the class relationships, the proposed regularized DNN (rDNN) is more suitable for modeling video semantics. We show that rDNN produces better performance over several state-of-the-art approaches. Competitive results are reported on the well-known Hollywood2 and Columbia Consumer Video benchmarks. In addition, to stimulate future research on large scale video categorization, we collect and release a new benchmark dataset, called FCVID, which contains 91,223 Internet videos and 239 manually annotated categories.
Our formulation is partly inspired by recent research on Multi-Task Learning (MTL) @cite_57 @cite_43 . MTL trains multiple class models simultaneously and boosts the performance of one task (a classifier model) by seeking help from other related tasks. MTL has demonstrated good results in many applications, such as disease prediction @cite_1 @cite_51 and financial stock selection @cite_26 . Sharing certain commonalities among multiple tasks is the key idea of MTL, and several algorithms have been proposed with regularizations on the patterns shared across tasks @cite_11 @cite_37 @cite_58 . These works exploited the class relationships in classification or regression problems using conventional learning approaches, but never injected such regularizations into a DNN.
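The shared-pattern regularizations mentioned above can be illustrated with a minimal multi-task least-squares objective carrying an L2,1 penalty, a common MTL regularizer that encourages all tasks to select a shared subset of features. This sketch is our own illustration, not the formulation of any specific cited work:

```python
import numpy as np

def mtl_objective(Xs, ys, W, lam=0.1):
    """Toy multi-task least-squares objective with an L2,1 penalty.

    Xs and ys are per-task data matrices and targets; W holds one weight
    column per task. The L2,1 norm (sum of per-feature row norms of W)
    couples the tasks: a feature is cheap to use only if several tasks
    share it, which is the shared-commonality idea behind MTL.
    """
    # data term: squared error summed over all tasks
    loss = sum(np.mean((X @ W[:, t] - y) ** 2)
               for t, (X, y) in enumerate(zip(Xs, ys)))
    # L2,1 penalty: norm of each feature's weights across tasks
    l21 = np.sum(np.linalg.norm(W, axis=1))
    return loss + lam * l21
```

Minimizing this objective drives entire rows of W to zero, so all tasks end up relying on the same small feature subset; replacing the L2,1 term with a trace-norm or cluster penalty recovers other regularizers cited above.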
{ "cite_N": [ "@cite_37", "@cite_26", "@cite_1", "@cite_57", "@cite_43", "@cite_58", "@cite_51", "@cite_11" ], "mid": [ "2186054958", "2144807460", "2000292092", "2949664970", "", "77289000", "2031250362", "2018096278" ], "abstract": [ "In multi-task learning (MTL), multiple tasks are learnt jointly. A major assumption for this paradigm is that all those tasks are indeed related so that the joint training is appropriate and beneficial. In this paper, we study the problem of multi-task learning of shared feature representations among tasks, while simultaneously determining \"with whom\" each task should share. We formulate the problem as a mixed integer programming and provide an alternating minimization technique to solve the optimization problem of jointly identifying grouping structures and parameters. The algorithm mono-tonically decreases the objective function and converges to a local optimum. Compared to the standard MTL paradigm where all tasks are in a single group, our algorithm improves its performance with statistical significance for three out of the four datasets we have studied. We also demonstrate its advantage over other task grouping techniques investigated in literature.", "Artificial Neural Networks can be used to predict future returns of stocks in order to take financial decisions. Should one build a separate network for each stock or share the same network for all the stocks? In this paper we also explore other alternatives, in which some layers are shared and others are not shared. When the prediction of future returns for different stocks are viewed as different tasks, sharing some parameters across stocks is a form of multi-task learning. 
In a series of experiments with Canadian stocks, we obtain yearly returns that are more than 14% above various benchmarks.", "Many machine learning and pattern classification methods have been applied to the diagnosis of Alzheimer’s disease (AD) and its prodromal stage, i.e., mild cognitive impairment (MCI). Recently, rather than predicting categorical variables as in classification, several pattern regression methods have also been used to estimate continuous clinical variables from brain images. However, most existing regression methods focus on estimating multiple clinical variables separately and thus cannot utilize the intrinsic useful correlation information among different clinical variables. On the other hand, in those regression methods, only a single modality of data (usually only the structural MRI) is often used, without considering the complementary information that can be provided by different modalities. In this paper, we propose a general methodology, namely Multi-Modal Multi-Task (M3T) learning, to jointly predict multiple variables from multi-modal data. Here, the variables include not only the clinical variables used for regression but also the categorical variable used for classification, with different tasks corresponding to prediction of different variables. Specifically, our method contains two key components, i.e., (1) a multi-task feature selection which selects the common subset of relevant features for multiple variables from each modality, and (2) a multi-modal support vector machine which fuses the above-selected features from all modalities to predict multiple (regression and classification) variables. To validate our method, we perform two sets of experiments on ADNI baseline MRI, FDG-PET, and cerebrospinal fluid (CSF) data from 45 AD patients, 91 MCI patients, and 50 healthy controls (HC). 
In the first set of experiments, we estimate two clinical variables such as Mini Mental State Examination (MMSE) and Alzheimer’s Disease Assessment Scale - Cognitive Subscale (ADAS-Cog), as well as one categorical variable (with value of ‘AD’, ‘MCI’ or ‘HC’), from the baseline MRI, FDG-PET, and CSF data. In the second set of experiments, we predict the 2-year changes of MMSE and ADAS-Cog scores and also the conversion of MCI to AD from the baseline MRI, FDG-PET, and CSF data. The results on both sets of experiments demonstrate that our proposed M3T learning scheme can achieve better performance on both regression and classification tasks than the conventional learning methods.", "In multi-task learning several related tasks are considered simultaneously, with the hope that by an appropriate sharing of information across tasks, each task may benefit from the others. In the context of learning linear functions for supervised classification or regression, this can be achieved by including a priori information about the weight vectors associated with the tasks, and how they are expected to be related to each other. In this paper, we assume that tasks are clustered into groups, which are unknown beforehand, and that tasks within a group have similar weight vectors. We design a new spectral norm that encodes this a priori assumption, without the prior knowledge of the partition of tasks into groups, resulting in a new convex optimization formulation for multi-task learning. We show in simulations on synthetic examples and on the IEDB MHC-I binding dataset, that our approach outperforms well-known convex methods for multi-task learning, as well as related non convex methods dedicated to the same problem.", "", "Multiple task learning (MTL) is becoming popular due to its theoretical advances and empirical successes. The key idea of MTL is to explore the hidden relationships among multiple tasks to enhance learning performance. 
Recently, many MTL algorithms have been developed and applied to various problems such as feature selection and kernel learning. However, most existing methods highly relied on certain assumptions of the task relationships. For instance, several works assumed that there is a major task group and several outlier tasks, and used a decomposition approach to identify the group structure and outlier tasks simultaneously. In this paper, we adopt a more general formulation for MTL without making specific structure assumptions. Instead of performing model decomposition, we directly impose an elastic-net regularization with a mixture of the structure and outlier penalties and formulate the objective as an unconstrained convex problem. To derive the optimal solution efficiently, we propose to use an Iteratively Reweighted Least Square (IRLS) method with a preconditioned conjugate gradient, which is computationally affordable for high dimensional data. Extensive experiments are conducted over both synthetic and real data, and comparisons with several state-of-the-art algorithms clearly show the superior performance of the proposed method.", "Alzheimer's Disease (AD), the most common type of dementia, is a severe neurodegenerative disorder. Identifying markers that can track the progress of the disease has recently received increasing attentions in AD research. A definitive diagnosis of AD requires autopsy confirmation, thus many clinical cognitive measures including Mini Mental State Examination (MMSE) and Alzheimer's Disease Assessment Scale cognitive subscale (ADAS-Cog) have been designed to evaluate the cognitive status of the patients and used as important criteria for clinical diagnosis of probable AD. In this paper, we propose a multi-task learning formulation for predicting the disease progression measured by the cognitive scores and selecting markers predictive of the progression. 
Specifically, we formulate the prediction problem as a multi-task regression problem by considering the prediction at each time point as a task. We capture the intrinsic relatedness among different tasks by a temporal group Lasso regularizer. The regularizer consists of two components including an L2,1-norm penalty on the regression weight vectors, which ensures that a small subset of features will be selected for the regression models at all time points, and a temporal smoothness term which ensures a small deviation between two regression models at successive time points. We have performed extensive evaluations using various types of data at the baseline from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database for predicting the future MMSE and ADAS-Cog scores. Our experimental studies demonstrate the effectiveness of the proposed algorithm for capturing the progression trend and the cross-sectional group differences of AD severity. Results also show that most markers selected by the proposed algorithm are consistent with findings from existing cross-sectional studies.", "Multi-task learning (MTL) aims at improving the generalization performance by utilizing the intrinsic relationships among multiple related tasks. A key assumption in most MTL algorithms is that all tasks are related, which, however, may not be the case in many real-world applications. In this paper, we propose a robust multi-task learning (RMTL) algorithm which learns multiple tasks simultaneously as well as identifies the irrelevant (outlier) tasks. Specifically, the proposed RMTL algorithm captures the task relationships using a low-rank structure, and simultaneously identifies the outlier tasks using a group-sparse structure. The proposed RMTL algorithm is formulated as a non-smooth convex (unconstrained) optimization problem. We propose to adopt the accelerated proximal method (APM) for solving such an optimization problem. 
The key component in APM is the computation of the proximal operator, which can be shown to admit an analytic solution. We also theoretically analyze the effectiveness of the RMTL algorithm. In particular, we derive a key property of the optimal solution to RMTL; moreover, based on this key property, we establish a theoretical bound for characterizing the learning performance of RMTL. Our experimental results on benchmark data sets demonstrate the effectiveness and efficiency of the proposed algorithm." ] }
1502.07540
2949805997
Deep LSTM is an ideal candidate for text recognition. However text recognition involves some initial image processing steps like segmentation of lines and words which can induce error to the recognition system. Without segmentation, learning very long range context is difficult and becomes computationally intractable. Therefore, alternative soft decisions are needed at the pre-processing level. This paper proposes a hybrid text recognizer using a deep recurrent neural network with multiple layers of abstraction and long range context along with a language model to verify the performance of the deep neural network. In this paper we construct a multi-hypotheses tree architecture with candidate segments of line sequences from different segmentation algorithms at its different branches. The deep neural network is trained on perfectly segmented data and tests each of the candidate segments, generating unicode sequences. In the verification step, these unicode sequences are validated using a sub-string match with the language model and best first search is used to find the best possible combination of alternative hypothesis from the tree structure. Thus the verification framework using language models eliminates wrong segmentation outputs and filters recognition errors.
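The verification step described in the abstract, searching over alternative segmentation hypotheses and validating candidate transcriptions with a language model, is essentially a best-first search over the hypothesis tree. A toy version follows (the function name, the additive cost model, and the lexicon-based language-model score are our own assumptions, not the authors' implementation):

```python
import heapq

def best_first_search(hypotheses, lm_score):
    """Pick the lowest-cost combination of per-segment hypotheses.

    `hypotheses` is a list (one entry per line segment) of candidate
    (text, recognizer_cost) pairs; `lm_score` returns a penalty for
    candidates the language model rejects. Best-first search expands
    the cheapest partial path first, so the first complete path popped
    from the heap is optimal under this additive cost model.
    """
    # state: (total_cost, next_segment_index, text_so_far)
    heap = [(0.0, 0, "")]
    while heap:
        cost, i, text = heapq.heappop(heap)
        if i == len(hypotheses):          # all segments consumed
            return text, cost
        for cand, c in hypotheses[i]:
            heapq.heappush(heap, (cost + c + lm_score(cand), i + 1, text + cand))
    return None, float("inf")
```

For example, with a toy lexicon {"deep", "net"} penalizing out-of-vocabulary candidates, the search discards segmentations whose recognized text fails the language-model check even when their raw recognizer cost is lower.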
Text recognition algorithms have traditionally been segmentation-based: lines are segmented into words and finally characters, which are then recognized by classifiers. Such approaches suffer from high segmentation error and do not use context information. The main sources of error are the age and quality of documents, where inter-word and inter-line spacing, ink spread, and background text interference cause segmentation errors, in turn hurting overall recognition accuracy. Among segmentation-free approaches, sequential classifiers like Hidden Markov Models (HMMs) and graphical models like Conditional Random Fields (CRFs) have been used. These algorithms introduced the use of context information in terms of transition probabilities and n-gram models, thus improving recognition accuracy @cite_11 . However, most of these approaches do not work with unsegmented words; those that do are restricted, since they use a dictionary of limited size.
{ "cite_N": [ "@cite_11" ], "mid": [ "2142069714" ], "abstract": [ "Handwriting has continued to persist as a means of communication and recording information in day-to-day life even with the introduction of new technologies. Given its ubiquity in human transactions, machine recognition of handwriting has practical significance, as in reading handwritten notes in a PDA, in postal addresses on envelopes, in amounts in bank checks, in handwritten fields in forms, etc. This overview describes the nature of handwritten language, how it is transduced into electronic data, and the basic concepts behind written language recognition algorithms. Both the online case (which pertains to the availability of trajectory data during writing) and the off-line case (which pertains to scanned images) are considered. Algorithms for preprocessing, character and word recognition, and performance with practical systems are indicated. Other fields of application, like signature verification, writer authentification, handwriting learning tools are also considered." ] }
1502.07540
2949805997
Deep LSTM is an ideal candidate for text recognition. However text recognition involves some initial image processing steps like segmentation of lines and words which can induce error to the recognition system. Without segmentation, learning very long range context is difficult and becomes computationally intractable. Therefore, alternative soft decisions are needed at the pre-processing level. This paper proposes a hybrid text recognizer using a deep recurrent neural network with multiple layers of abstraction and long range context along with a language model to verify the performance of the deep neural network. In this paper we construct a multi-hypotheses tree architecture with candidate segments of line sequences from different segmentation algorithms at its different branches. The deep neural network is trained on perfectly segmented data and tests each of the candidate segments, generating unicode sequences. In the verification step, these unicode sequences are validated using a sub-string match with the language model and best first search is used to find the best possible combination of alternative hypothesis from the tree structure. Thus the verification framework using language models eliminates wrong segmentation outputs and filters recognition errors.
The Long Short-Term Memory (LSTM) based recurrent neural network architecture has been widely used for speech recognition @cite_15 @cite_1 , text recognition @cite_22 , social signal prediction @cite_13 , emotion recognition @cite_2 , and time series prediction problems, owing to its ability to learn from sequences. LSTM has emerged as the most competent classifier for handwriting and speech recognition. It performs well on handwritten text without explicit knowledge of the language and has won several competitions @cite_17 @cite_25 . LSTM has been used for the recognition of printed Urdu Nastaleeq script @cite_26 and printed English and Fraktur scripts @cite_14 . RNN-based approaches have been popular for Arabic scripts, wherein segmentation is immensely difficult @cite_20 . LSTM-based approaches have outperformed HMM-based ones for handwriting recognition, showing that learnt features are better than handcrafted ones @cite_9 . With the advent of deep learning algorithms, deep belief networks and deep neural networks are gaining popularity due to their efficiency over shallow models @cite_24 .
{ "cite_N": [ "@cite_26", "@cite_14", "@cite_22", "@cite_9", "@cite_1", "@cite_24", "@cite_2", "@cite_15", "@cite_13", "@cite_25", "@cite_20", "@cite_17" ], "mid": [ "2009444210", "2060580591", "2587486463", "", "", "2136922672", "2149940198", "", "2950689855", "", "2167898728", "" ], "abstract": [ "Recurrent neural networks (RNN) have been successfully applied for recognition of cursive handwritten documents, both in English and Arabic scripts. Ability of RNNs to model context in sequence data like speech and text makes them a suitable candidate to develop OCR systems for printed Nabataean scripts (including Nastaleeq for which no OCR system is available to date). In this work, we have presented the results of applying RNN to printed Urdu text in Nastaleeq script. Bidirectional Long Short Term Memory (BLSTM) architecture with Connectionist Temporal Classification (CTC) output layer was employed to recognize printed Urdu text. We evaluated BLSTM networks for two cases: one ignoring the character's shape variations and the second is considering them. The recognition error rate at character level for first case is 5.15% and for the second is 13.6%. These results were obtained on synthetically generated UPTI dataset containing artificially degraded images to reflect some real-world scanning artifacts along with clean images. Comparison with shape-matching based method is also presented.", "Long Short-Term Memory (LSTM) networks have yielded excellent results on handwriting recognition. This paper describes an application of bidirectional LSTM networks to the problem of machine-printed Latin and Fraktur recognition. Latin and Fraktur recognition differs significantly from handwriting recognition in both the statistical properties of the data, as well as in the required, much higher levels of accuracy. 
Applications of LSTM networks to handwriting recognition use two-dimensional recurrent networks, since the exact position and baseline of handwritten characters is variable. In contrast, for printed OCR, we used a one-dimensional recurrent network combined with a novel algorithm for baseline and x-height normalization. A number of databases were used for training and testing, including the UW3 database, artificially generated and degraded Fraktur text and scanned pages from a book digitization project. The LSTM architecture achieved 0.6% character-level test-set error on English text. When the artificially degraded Fraktur data set is divided into training and test sets, the system achieves an error rate of 1.64%. On specific books printed in Fraktur (not part of the training set), the system achieves error rates of 0.15% (Fontane) and 1.47% (Ersch-Gruber). These recognition accuracies were found without using any language modelling or any other post-processing techniques.", "In this paper we introduce a new connectionist approach to on-line handwriting recognition and address in particular the problem of recognizing handwritten whiteboard notes. The approach uses a bidirectional recurrent neural network with the long short-term memory architecture. We use a recently introduced objective function, known as Connectionist Temporal Classification (CTC), that directly trains the network to label unsegmented sequence data. Our new system achieves a word recognition rate of 74.0%, compared with 65.4% using a previously developed HMM-based recognition system.", "", "", "We show how to use \"complementary priors\" to eliminate the explaining-away effects that make inference difficult in densely connected belief nets that have many hidden layers. Using complementary priors, we derive a fast, greedy algorithm that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory. 
The fast, greedy algorithm is used to initialize a slower learning procedure that fine-tunes the weights using a contrastive version of the wake-sleep algorithm. After fine-tuning, a network with three hidden layers forms a very good generative model of the joint distribution of handwritten digit images and their labels. This generative model gives better digit classification than the best discriminative learning algorithms. The low-dimensional manifolds on which the digits lie are modeled by long ravines in the free-energy landscape of the top-level associative memory, and it is easy to explore these ravines by using the directed connections to display what the associative memory has in mind.", "In this paper, we apply a context-sensitive technique for multimodal emotion recognition based on feature-level fusion of acoustic and visual cues. We use bidirectional Long Short-Term Memory (BLSTM) networks which, unlike most other emotion recognition approaches, exploit long-range contextual information for modeling the evolution of emotion within a conversation. We focus on recognizing dimensional emotional labels, which enables us to classify both prototypical and nonprototypical emotional expressions contained in a large audiovisual database. Subject-independent experiments on various classification tasks reveal that the BLSTM network approach generally prevails over standard classification techniques such as Hidden Markov Models or Support Vector Machines, and achieves F1-measures of the order of 72%, 65%, and 55% for the discrimination of three clusters in emotional space and the distinction between three levels of valence and activation, respectively.", "", "Recurrent neural networks (RNNs) are a powerful model for sequential data. End-to-end training methods such as Connectionist Temporal Classification make it possible to train RNNs for sequence labelling problems where the input-output alignment is unknown. 
The combination of these methods with the Long Short-term Memory RNN architecture has proved particularly fruitful, delivering state-of-the-art results in cursive handwriting recognition. However RNN performance in speech recognition has so far been disappointing, with better results returned by deep feedforward networks. This paper investigates deep recurrent neural networks, which combine the multiple levels of representation that have proved so effective in deep networks with the flexible use of long range context that empowers RNNs. When trained end-to-end with suitable regularisation, we find that deep Long Short-term Memory RNNs achieve a test set error of 17.7% on the TIMIT phoneme recognition benchmark, which to our knowledge is the best recorded score.", "", "In online handwriting recognition the trajectory of the pen is recorded during writing. Although the trajectory provides a compact and complete representation of the written output, it is hard to transcribe directly, because each letter is spread over many pen locations. Most recognition systems therefore employ sophisticated preprocessing techniques to put the inputs into a more localised form. However these techniques require considerable human effort, and are specific to particular languages and alphabets. This paper describes a system capable of directly transcribing raw online handwriting data. The system consists of an advanced recurrent neural network with an output layer designed for sequence labelling, combined with a probabilistic language model. In experiments on an unconstrained online database, we record excellent results using either raw or preprocessed data, well outperforming a state-of-the-art HMM based system in both cases.", "" ] }
1502.07411
1803059841
In this article, we tackle the problem of depth estimation from single monocular images. Compared with depth estimation using multiple images such as stereo depth perception, depth from monocular images is much more challenging. Prior work typically focuses on exploiting geometric priors or additional sources of information, mostly using hand-crafted features. Recently, there is mounting evidence that features from deep convolutional neural networks (CNN) set new records for various vision applications. On the other hand, considering the continuous characteristic of the depth values, depth estimation can be naturally formulated as a continuous conditional random field (CRF) learning problem. Therefore, here we present a deep convolutional neural field model for estimating depths from single monocular images, aiming to jointly explore the capacity of deep CNN and continuous CRF. In particular, we propose a deep structured learning scheme which learns the unary and pairwise potentials of continuous CRF in a unified deep CNN framework. We then further propose an equally effective model based on fully convolutional networks and a novel superpixel pooling method, which is about 10 times faster, to speed up the patch-wise convolutions in the deep model. With this more efficient model, we are able to design deeper networks to pursue better performance. Our proposed method can be used for depth estimation of general scenes with no geometric priors or extra information injected. In our case, the integral of the partition function can be calculated in a closed form such that we can exactly solve the log-likelihood maximization. Moreover, solving the inference problem for predicting depths of a test image is highly efficient as closed-form solutions exist. Experiments on both indoor and outdoor scene datasets demonstrate that the proposed method outperforms state-of-the-art depth estimation approaches.
Our method exploits recent advances of deep networks in image classification @cite_10 @cite_16, object detection @cite_39, and semantic segmentation @cite_5 @cite_11 for single-view image depth estimation. In the following, we give a brief introduction to the most closely related work.
{ "cite_N": [ "@cite_39", "@cite_5", "@cite_16", "@cite_10", "@cite_11" ], "mid": [ "2102605133", "1903029394", "2962835968", "2618530766", "1945099168" ], "abstract": [ "Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.", "Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. 
We adapt contemporary classification networks (AlexNet [20], the VGG net [31], and GoogLeNet [32]) into fully convolutional networks and transfer their learned representations by fine-tuning [3] to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (a 20% relative improvement to 62.2% mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes less than one fifth of a second for a typical image.", "In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.", "We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, respectively, which is considerably better than the previous state-of-the-art. 
The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully connected layers we employed a recently developed regularization method called \"dropout\" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.", "We present a two-module approach to semantic segmentation that incorporates Convolutional Networks (CNNs) and Graphical Models. Graphical models are used to generate a small (5-30) set of diverse segmentation proposals, such that this set has high recall. Since the number of required proposals is so low, we can extract fairly complex features to rank them. Our complex feature of choice is a novel CNN called SegNet, which directly outputs a (coarse) semantic segmentation. Importantly, SegNet is specifically trained to optimize the corpus-level PASCAL IOU loss function. To the best of our knowledge, this is the first CNN specifically designed for semantic segmentation. This two-module approach achieves @math on the PASCAL 2012 segmentation challenge." ] }
1502.07411
1803059841
In this article, we tackle the problem of depth estimation from single monocular images. Compared with depth estimation using multiple images such as stereo depth perception, depth from monocular images is much more challenging. Prior work typically focuses on exploiting geometric priors or additional sources of information, mostly using hand-crafted features. Recently, there is mounting evidence that features from deep convolutional neural networks (CNN) set new records for various vision applications. On the other hand, considering the continuous characteristic of the depth values, depth estimation can be naturally formulated as a continuous conditional random field (CRF) learning problem. Therefore, here we present a deep convolutional neural field model for estimating depths from single monocular images, aiming to jointly explore the capacity of deep CNN and continuous CRF. In particular, we propose a deep structured learning scheme which learns the unary and pairwise potentials of continuous CRF in a unified deep CNN framework. We then further propose an equally effective model based on fully convolutional networks and a novel superpixel pooling method, which is about 10 times faster, to speed up the patch-wise convolutions in the deep model. With this more efficient model, we are able to design deeper networks to pursue better performance. Our proposed method can be used for depth estimation of general scenes with no geometric priors or extra information injected. In our case, the integral of the partition function can be calculated in a closed form such that we can exactly solve the log-likelihood maximization. Moreover, solving the inference problem for predicting depths of a test image is highly efficient as closed-form solutions exist. Experiments on both indoor and outdoor scene datasets demonstrate that the proposed method outperforms state-of-the-art depth estimation approaches.
Recently, Eigen et al. @cite_30 proposed a multi-scale approach for depth estimation, which bears similarity to our work here. However, our method differs critically from theirs: they use the CNN as a black box by directly regressing the depth map from an input image through convolutions; in contrast, we use a CRF to explicitly model the relations of neighboring superpixels, and learn the potentials (both unary and pairwise) in a unified framework.
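The contrast drawn above, direct black-box regression versus explicitly modelled superpixel relations, can be made concrete. Below is a sketch of the kind of continuous CRF involved, under the common assumption of quadratic unary and pairwise terms; the symbols are illustrative rather than the paper's exact notation:

```latex
% y: vector of superpixel depths; z_p: depth regressed by the deep net
% for superpixel p; R_{pq}: learned affinity between neighbours p and q.
\Pr(\mathbf{y}\mid\mathbf{x})
  = \frac{1}{Z(\mathbf{x})}\exp\bigl\{-E(\mathbf{y},\mathbf{x})\bigr\}, \qquad
E(\mathbf{y},\mathbf{x})
  = \sum_{p}\bigl(y_p - z_p\bigr)^{2}
  + \sum_{(p,q)}\tfrac{1}{2}\,R_{pq}\bigl(y_p - y_q\bigr)^{2}.
```

Because $E$ is quadratic in $\mathbf{y}$, the partition function $Z(\mathbf{x})$ is a Gaussian integral with a closed form, which is what permits exact log-likelihood maximization instead of treating the network purely as a regressor.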
{ "cite_N": [ "@cite_30" ], "mid": [ "2171740948" ], "abstract": [ "Predicting depth is an essential component in understanding the 3D geometry of a scene. While for stereo images local correspondence suffices for estimation, finding depth relations from a single image is less straightforward, requiring integration of both global and local information from various cues. Moreover, the task is inherently ambiguous, with a large source of uncertainty coming from the overall scale. In this paper, we present a new method that addresses this task by employing two deep network stacks: one that makes a coarse global prediction based on the entire image, and another that refines this prediction locally. We also apply a scale-invariant error to help measure depth relations rather than scale. By leveraging the raw datasets as large sources of training data, our method achieves state-of-the-art results on both NYU Depth and KITTI, and matches detailed depth boundaries without the need for superpixelation." ] }
1502.07411
1803059841
In this article, we tackle the problem of depth estimation from single monocular images. Compared with depth estimation using multiple images such as stereo depth perception, depth from monocular images is much more challenging. Prior work typically focuses on exploiting geometric priors or additional sources of information, mostly using hand-crafted features. Recently, there is mounting evidence that features from deep convolutional neural networks (CNN) set new records for various vision applications. On the other hand, considering the continuous characteristic of the depth values, depth estimation can be naturally formulated as a continuous conditional random field (CRF) learning problem. Therefore, here we present a deep convolutional neural field model for estimating depths from single monocular images, aiming to jointly explore the capacity of deep CNN and continuous CRF. In particular, we propose a deep structured learning scheme which learns the unary and pairwise potentials of continuous CRF in a unified deep CNN framework. We then further propose an equally effective model based on fully convolutional networks and a novel superpixel pooling method, which is about 10 times faster, to speed up the patch-wise convolutions in the deep model. With this more efficient model, we are able to design deeper networks to pursue better performance. Our proposed method can be used for depth estimation of general scenes with no geometric priors or extra information injected. In our case, the integral of the partition function can be calculated in a closed form such that we can exactly solve the log-likelihood maximization. Moreover, solving the inference problem for predicting depths of a test image is highly efficient as closed-form solutions exist. Experiments on both indoor and outdoor scene datasets demonstrate that the proposed method outperforms state-of-the-art depth estimation approaches.
Recent work of @cite_22 and @cite_9 is relevant to ours in that they also perform depth estimation from a single image. The method of Su et al. @cite_22 involves a continuous depth optimization step like ours, which also contains a unary regression term and a pairwise local smoothness term. However, these two works focus on 3D reconstruction of known segmented objects, while our method targets depth estimation of general scene images. Furthermore, the method of @cite_22 relies on a pre-constructed 3D shape database of input object categories, and the work of @cite_9 relies on class-specific object keypoints and object segmentations. In contrast, we do not inject these priors.
{ "cite_N": [ "@cite_9", "@cite_22" ], "mid": [ "1893912098", "2015112703" ], "abstract": [ "Object reconstruction from a single image - in the wild - is a problem where we can make progress and get meaningful results today. This is the main message of this paper, which introduces an automated pipeline with pixels as inputs and 3D surfaces of various rigid categories as outputs in images of realistic scenes. At the core of our approach are deformable 3D models that can be learned from 2D annotations available in existing object detection datasets, that can be driven by noisy automatic object segmentations and which we complement with a bottom-up module for recovering high-frequency shape details. We perform a comprehensive quantitative analysis and ablation study of our approach using the recently introduced PASCAL 3D+ dataset and show very encouraging automatic reconstructions on PASCAL VOC.", "Images, while easy to acquire, view, publish, and share, they lack critical depth information. This poses a serious bottleneck for many image manipulation, editing, and retrieval tasks. In this paper we consider the problem of adding depth to an image of an object, effectively 'lifting' it back to 3D, by exploiting a collection of aligned 3D models of related objects. Our key insight is that, even when the imaged object is not contained in the shape collection, the network of shapes implicitly characterizes a shape-specific deformation subspace that regularizes the problem and enables robust diffusion of depth information from the shape collection to the input image. We evaluate our fully automatic approach on diverse and challenging input images, validate the results against Kinect depth readings, and demonstrate several imaging applications including depth-enhanced image editing and image relighting." ] }
1502.07411
1803059841
In this article, we tackle the problem of depth estimation from single monocular images. Compared with depth estimation using multiple images such as stereo depth perception, depth from monocular images is much more challenging. Prior work typically focuses on exploiting geometric priors or additional sources of information, mostly using hand-crafted features. Recently, there is mounting evidence that features from deep convolutional neural networks (CNN) set new records for various vision applications. On the other hand, considering the continuous characteristic of the depth values, depth estimation can be naturally formulated as a continuous conditional random field (CRF) learning problem. Therefore, here we present a deep convolutional neural field model for estimating depths from single monocular images, aiming to jointly explore the capacity of deep CNN and continuous CRF. In particular, we propose a deep structured learning scheme which learns the unary and pairwise potentials of continuous CRF in a unified deep CNN framework. We then further propose an equally effective model based on fully convolutional networks and a novel superpixel pooling method, which is about 10 times faster, to speed up the patch-wise convolutions in the deep model. With this more efficient model, we are able to design deeper networks to pursue better performance. Our proposed method can be used for depth estimation of general scenes with no geometric priors or extra information injected. In our case, the integral of the partition function can be calculated in a closed form such that we can exactly solve the log-likelihood maximization. Moreover, solving the inference problem for predicting depths of a test image is highly efficient as closed-form solutions exist. Experiments on both indoor and outdoor scene datasets demonstrate that the proposed method outperforms state-of-the-art depth estimation approaches.
Combining CNN and CRF. In @cite_1, Farabet et al. propose a multi-scale framework for scene labelling, which uses a CRF as a post-processing step for local refinement. In the most recent work of @cite_34, Tompson et al. present a hybrid architecture for jointly training a deep CNN and an MRF for human pose estimation. They first train a unary term and a spatial model separately, then jointly learn them as a fine-tuning step. During fine-tuning of the whole model, they simply remove the partition function in the likelihood to obtain a loose approximation. In contrast, our model performs continuous variable prediction. We can directly solve the log-likelihood optimization without approximations, as the partition function is integrable and can be analytically calculated. Moreover, during prediction, we have closed-form solutions to the MAP inference problem. Although no convolutional layers are involved, the work of @cite_27 shares similarity with ours in that both continuous CRFs use neural networks to model the potentials. Note that the model in @cite_27 is not deep and only one hidden layer is used. It is unclear how the method of @cite_27 performs on the challenging depth estimation problem that we consider here.
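The closed-form MAP inference mentioned above follows directly when the CRF energy is quadratic: minimization reduces to a single linear solve. A minimal numpy sketch on a toy 3-superpixel chain; the numbers and names are illustrative, not the paper's implementation:

```python
import numpy as np

# Toy graph: 3 superpixels in a chain, with net-regressed depths z and
# symmetric pairwise affinities R (illustrative numbers only).
z = np.array([2.0, 5.0, 2.5])            # unary (regressed) depths
R = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])          # neighbour affinities

# For E(y) = sum_p (y_p - z_p)^2 + sum over ordered pairs (p,q) of
# (1/2) R_pq (y_p - y_q)^2, the energy is quadratic in y, so the MAP
# depths solve (I + D - R) y = z, with D = diag(row sums of R).
A = np.eye(3) + np.diag(R.sum(axis=1)) - R
y_map = np.linalg.solve(A, z)

# The pairwise term pulls the outlier at p = 1 towards its neighbours.
print(y_map)
```

The design point is that no iterative inference is needed at test time: one solve with a sparse, positive-definite matrix yields the exact MAP depths.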
{ "cite_N": [ "@cite_27", "@cite_34", "@cite_1" ], "mid": [ "2145316937", "2136391815", "2022508996" ], "abstract": [ "An increasing number of computer vision and pattern recognition problems require structured regression techniques. Problems like human pose estimation, unsegmented action recognition, emotion prediction and facial landmark detection have temporal or spatial output dependencies that regular regression techniques do not capture. In this paper we present continuous conditional neural fields (CCNF) – a novel structured regression model that can learn non-linear input-output dependencies, and model temporal and spatial output relationships of varying length sequences. We propose two instances of our CCNF framework: Chain-CCNF for time series modelling, and Grid-CCNF for spatial relationship modelling. We evaluate our model on five public datasets spanning three different regression problems: facial landmark detection in the wild, emotion prediction in music and facial action unit recognition. Our CCNF model demonstrates state-of-the-art performance on all of the datasets used.", "This paper proposes a new hybrid architecture that consists of a deep Convolu-tional Network and a Markov Random Field. We show how this architecture is successfully applied to the challenging problem of articulated human pose estimation in monocular images. The architecture can exploit structural domain constraints such as geometric relationships between body joint locations. We show that joint training of these two model paradigms improves performance and allows us to significantly outperform existing state-of-the-art techniques.", "Scene labeling consists of labeling each pixel in an image with the category of the object it belongs to. We propose a method that uses a multiscale convolutional network trained from raw pixels to extract dense feature vectors that encode regions of multiple sizes centered on each pixel. 
The method alleviates the need for engineered features, and produces a powerful representation that captures texture, shape, and contextual information. We report results using multiple postprocessing methods to produce the final labeling. Among those, we propose a technique to automatically retrieve, from a pool of segmentation components, an optimal set of components that best explain the scene; these components are arbitrary, for example, they can be taken from a segmentation tree or from any family of oversegmentations. The system yields record accuracies on the SIFT Flow dataset (33 classes) and the Barcelona dataset (170 classes) and near-record accuracy on Stanford background dataset (eight classes), while being an order of magnitude faster than competing approaches, producing a 320×240 image labeling in less than a second, including feature extraction." ] }
1502.07411
1803059841
In this article, we tackle the problem of depth estimation from single monocular images. Compared with depth estimation using multiple images such as stereo depth perception, depth from monocular images is much more challenging. Prior work typically focuses on exploiting geometric priors or additional sources of information, mostly using hand-crafted features. Recently, there is mounting evidence that features from deep convolutional neural networks (CNN) set new records for various vision applications. On the other hand, considering the continuous characteristic of the depth values, depth estimation can be naturally formulated as a continuous conditional random field (CRF) learning problem. Therefore, here we present a deep convolutional neural field model for estimating depths from single monocular images, aiming to jointly explore the capacity of deep CNN and continuous CRF. In particular, we propose a deep structured learning scheme which learns the unary and pairwise potentials of continuous CRF in a unified deep CNN framework. We then further propose an equally effective model based on fully convolutional networks and a novel superpixel pooling method, which is about 10 times faster, to speed up the patch-wise convolutions in the deep model. With this more efficient model, we are able to design deeper networks to pursue better performance. Our proposed method can be used for depth estimation of general scenes with no geometric priors or extra information injected. In our case, the integral of the partition function can be calculated in a closed form such that we can exactly solve the log-likelihood maximization. Moreover, solving the inference problem for predicting depths of a test image is highly efficient as closed-form solutions exist. Experiments on both indoor and outdoor scene datasets demonstrate that the proposed method outperforms state-of-the-art depth estimation approaches.
Fully convolutional networks. Fully convolutional networks have recently been actively studied for dense prediction problems, e.g., semantic segmentation @cite_5 @cite_11, image restoration @cite_21, image super-resolution @cite_35, and depth estimation @cite_30. To deal with the downsampled-output issue, interpolations are generally applied @cite_30 @cite_11. In @cite_18, Sermanet et al. propose an input shifting and output interlacing trick to produce dense predictions from coarse outputs without interpolation. Later on, Long et al. @cite_5 present a deconvolution approach that puts the upsampling into the training regime instead of applying it as a post-processing step. The model presented by Eigen et al. @cite_30 for depth estimation also suffers from this upsampling problem: the predicted depth maps of @cite_30 are 1/4 the resolution of the original input image, with some border areas lost. They simply use bilinear interpolation to upsample the predictions to the input image size. Unlike these existing methods, we propose a novel superpixel pooling method to address this issue. It jointly exploits the strengths of highly efficient fully convolutional networks and the benefits of superpixels at preserving object boundaries.
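The superpixel pooling idea above, averaging convolutional features over superpixel regions instead of interpolating a coarse map back to full resolution, can be sketched as follows. This is a simplified stand-in (dense numpy, toy sizes), not the paper's implementation:

```python
import numpy as np

def superpixel_pool(feat_map, seg):
    """Average-pool a (H, W, C) feature map over superpixel regions.

    feat_map: convolutional features aligned to seg's pixel grid;
    seg:      (H, W) integer superpixel labels 0..K-1.
    Returns a (K, C) matrix of per-superpixel features, so predictions
    follow superpixel boundaries rather than a blurred upsampled grid.
    """
    h, w, c = feat_map.shape
    k = int(seg.max()) + 1
    flat = feat_map.reshape(-1, c)
    labels = seg.reshape(-1)
    counts = np.bincount(labels, minlength=k)
    out = np.empty((k, c))
    for ch in range(c):
        out[:, ch] = np.bincount(labels, weights=flat[:, ch], minlength=k)
    return out / counts[:, None]

# Toy example: a 4x4, 1-channel map split into two superpixels.
feat = np.zeros((4, 4, 1))
feat[:, 2:, 0] = 1.0                  # right half activates
seg = np.zeros((4, 4), dtype=int)
seg[:, 2:] = 1
pooled = superpixel_pool(feat, seg)   # one feature row per superpixel
```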
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_18", "@cite_21", "@cite_5", "@cite_11" ], "mid": [ "2171740948", "54257720", "2963542991", "2154815154", "1903029394", "1945099168" ], "abstract": [ "Predicting depth is an essential component in understanding the 3D geometry of a scene. While for stereo images local correspondence suffices for estimation, finding depth relations from a single image is less straightforward, requiring integration of both global and local information from various cues. Moreover, the task is inherently ambiguous, with a large source of uncertainty coming from the overall scale. In this paper, we present a new method that addresses this task by employing two deep network stacks: one that makes a coarse global prediction based on the entire image, and another that refines this prediction locally. We also apply a scale-invariant error to help measure depth relations rather than scale. By leveraging the raw datasets as large sources of training data, our method achieves state-of-the-art results on both NYU Depth and KITTI, and matches detailed depth boundaries without the need for superpixelation.", "We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) [15] that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage.", "We present an integrated framework for using Convolutional Networks for classification, localization and detection. 
We show how a multiscale and sliding window approach can be efficiently implemented within a ConvNet. We also introduce a novel deep learning approach to localization by learning to predict object boundaries. Bounding boxes are then accumulated rather than suppressed in order to increase detection confidence. We show that different tasks can be learned simultaneously using a single shared network. This integrated framework is the winner of the localization task of the ImageNet Large Scale Visual Recognition Challenge 2013 (ILSVRC2013) and obtained very competitive results for the detection and classifications tasks. In post-competition work, we establish a new state of the art for the detection task. Finally, we release a feature extractor from our best model called OverFeat.", "Photographs taken through a window are often compromised by dirt or rain present on the window surface. Common cases of this include pictures taken from inside a vehicle, or outdoor security cameras mounted inside a protective enclosure. At capture time, defocus can be used to remove the artifacts, but this relies on achieving a shallow depth-of-field and placement of the camera close to the window. Instead, we present a post-capture image processing solution that can remove localized rain and dirt artifacts from a single image. We collect a dataset of clean corrupted image pairs which are then used to train a specialized form of convolutional neural network. This learns how to map corrupted image patches to clean ones, implicitly capturing the characteristic appearance of dirt and water droplets in natural images. Our models demonstrate effective removal of dirt and rain in outdoor test conditions.", "Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. 
Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet [20], the VGG net [31], and GoogLeNet [32]) into fully convolutional networks and transfer their learned representations by fine-tuning [3] to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20% relative improvement to 62.2% mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes less than one fifth of a second for a typical image.", "We present a two-module approach to semantic segmentation that incorporates Convolutional Networks (CNNs) and Graphical Models. Graphical models are used to generate a small (5-30) set of diverse segmentation proposals, such that this set has high recall. Since the number of required proposals is so low, we can extract fairly complex features to rank them. Our complex feature of choice is a novel CNN called SegNet, which directly outputs a (coarse) semantic segmentation. Importantly, SegNet is specifically trained to optimize the corpus-level PASCAL IOU loss function. To the best of our knowledge, this is the first CNN specifically designed for semantic segmentation. This two-module approach achieves @math on the PASCAL 2012 segmentation challenge." ] }
1502.07242
1516898087
Technology of autonomous vehicles (AVs) is becoming mature, and many AVs will appear on roads in the near future. AVs become connected with the support of various vehicular communication technologies, and they possess a high degree of control to respond to instantaneous situations cooperatively with high efficiency and flexibility. In this paper, we propose a new public transportation system based on AVs. It manages a fleet of AVs to accommodate transportation requests, offering point-to-point services with ride sharing. We focus on the two major problems of the system: scheduling and admission control. The former is to configure the most economical schedules and routes for the AVs to satisfy the admissible requests, whereas the latter is to determine the set of admissible requests among all requests to produce maximum profit. The scheduling problem is formulated as a mixed-integer linear program, and the admission control problem is cast as a bilevel optimization, which embeds the scheduling problem as the major constraint. By utilizing the analytical properties of the problem, we develop an effective genetic-algorithm-based method to tackle the admission control problem. We validate the performance of the algorithm with real-world transportation service data.
Most research work on AVs has focused on the control and communication aspects. Mladenovic and Abbas @cite_18 proposed a self-organizing and cooperative control framework for distributed vehicle intelligence. Hu @cite_39 studied lane assignment strategies for connected AVs and proposed a lane-changing maneuver to balance the tradeoff between efficiency and safety. Petrov and Nashashibi @cite_2 developed a feedback controller for autonomous overtaking that does not rely on roadway markings or inter-vehicle communication. Li presented a multi-level fusion-based road detection system for driverless vehicle navigation to ensure safety in various road conditions. All these efforts show that the AV is a promising technology with support from governments, high-tech companies, and car manufacturers.
{ "cite_N": [ "@cite_18", "@cite_2", "@cite_39" ], "mid": [ "2026876041", "2012561930", "2060058398" ], "abstract": [ "Development of in-vehicle computer and sensing technology, along with short-range vehicle-to-vehicle communication has provided technological potential for large-scale deployment of autonomous vehicles. The issue of intersection control for these future driverless vehicles is one of the emerging research issues. Contrary to some of the previous research approaches, this paper is proposing a paradigm shift based upon self-organizing and cooperative control framework. Distributed vehicle intelligence has been used to calculate each vehicle's approaching velocity. The control mechanism has been developed in an agent-based environment. Self-organizing agent's trajectory adjustment bases upon a proposed priority principle. Testing of the system has proved its safety, user comfort, and efficiency functional requirements. Several recommendations for further research are presented.", "In this paper, we present a mathematical model and adaptive controller for an autonomous vehicle overtaking maneuver. We consider the problem of an autonomous three-phase overtaking without the use of any roadway marking scheme or intervehicle communication. The developed feedback controller requires information for the current relative intervehicle position and orientation, which are assumed to be available from onboard sensors. We apply standard robotic nomenclature for translational and rotational displacements and velocities and propose a general kinematic model of the vehicles and the relative intervehicle kinematics during the overtaking maneuver. The overtaking maneuver is investigated as a tracking problem with respect to desired polynomial virtual trajectories for every phase, which are generated in real time. 
An update control law for the automated overtaking vehicle is designed that allows tracking the desired trajectories in the presence of unknown velocity of the overtaken vehicle. Simulation results illustrate the performance of the proposed controller.", "With recent progress in vehicle autonomous driving and vehicular communication technologies, vehicle systems are developing towards fully connected and fully autonomous systems. This paper studies lane assignment strategies for connected autonomous vehicles in a highway scenario and their impact on the overall traffic efficiency and safety. We formulate a model of connected autonomous vehicles, which includes three features: traffic data available online, ultra-short reaction time, and cooperative driving. Based on this model, we propose a novel lane change maneuver Politely Change Lane (PCL), which achieves the tradeoff between traffic safety and efficiency. Its effectiveness is validated and evaluated by extensive simulations. The performance shows that PCL improves both safety and efficiency of the overall traffic, especially with heavy traffic." ] }
1502.07242
1516898087
Technology of autonomous vehicles (AVs) is becoming mature, and many AVs will appear on roads in the near future. AVs become connected with the support of various vehicular communication technologies, and they possess a high degree of control to respond to instantaneous situations cooperatively with high efficiency and flexibility. In this paper, we propose a new public transportation system based on AVs. It manages a fleet of AVs to accommodate transportation requests, offering point-to-point services with ride sharing. We focus on the two major problems of the system: scheduling and admission control. The former is to configure the most economical schedules and routes for the AVs to satisfy the admissible requests, whereas the latter is to determine the set of admissible requests among all requests to produce maximum profit. The scheduling problem is formulated as a mixed-integer linear program, and the admission control problem is cast as a bilevel optimization, which embeds the scheduling problem as the major constraint. By utilizing the analytical properties of the problem, we develop an effective genetic-algorithm-based method to tackle the admission control problem. We validate the performance of the algorithm with real-world transportation service data.
Shareability of taxi services has been studied recently. Santi @cite_24 investigated the tradeoff between passenger inconvenience and the collective benefits of sharing, and concluded that a small increase in discomfort could bring significant benefits: less congestion, lower running costs, split fares, and a less polluted, cleaner environment. Ma proposed a taxi ridesharing system called T-Share in @cite_16 , where the dynamic taxi ridesharing problem was studied. For a dataset of taxi services in Beijing, it showed that 25% more taxi users could be served while saving 13% of the travel distance. These studies confirmed that ridesharing is beneficial, but they mostly focused on taxi services. In this paper, we focus on AVs, which have a key intrinsic property hardly found in standard taxis: the direct control of vehicles does not involve any human factors. In other words, AVs completely follow the instructions from the control center, in the sense that they neither undertake any unassigned requests nor reject any assigned requests. Hence, AVs can fully cooperate to achieve the system objective, which may not be the case for human-driven taxis.
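The pairwise-shareability idea behind the shareability network of @cite_24 can be sketched with a toy check: two trips are shareable if some pickup/drop-off order lets a single vehicle serve both in less distance than two separate vehicles would travel. The coordinates, trips, and straight-line distance metric below are illustrative assumptions, not data from the cited work.

```python
from itertools import combinations
from math import dist

def route_len(points):
    """Total straight-line length of a route through the given points."""
    return sum(dist(a, b) for a, b in zip(points, points[1:]))

def shareable(t1, t2):
    """True if one vehicle can serve both trips in less total distance."""
    (o1, d1), (o2, d2) = t1, t2
    solo = route_len([o1, d1]) + route_len([o2, d2])
    # All pickup/drop-off orders where each pickup precedes its drop-off.
    orders = [
        [o1, o2, d1, d2], [o1, o2, d2, d1],
        [o2, o1, d2, d1], [o2, o1, d1, d2],
        [o1, d1, o2, d2], [o2, d2, o1, d1],
    ]
    return min(route_len(r) for r in orders) < solo

# Hypothetical trips: origin -> destination coordinates.
trips = {
    "a": ((0, 0), (10, 0)),
    "b": ((1, 1), (9, 1)),    # nearly parallel to "a": worth merging
    "c": ((0, 10), (0, 20)),  # far away: merging only adds distance
}
# Edges of the (tiny) shareability network.
edges = [(u, v) for u, v in combinations(trips, 2)
         if shareable(trips[u], trips[v])]
```

With these toy trips, only the two nearly parallel trips form a shareability edge; the distant trip stays isolated.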
{ "cite_N": [ "@cite_24", "@cite_16" ], "mid": [ "2156767060", "1976993400" ], "abstract": [ "Taxi services are a vital part of urban transportation, and a considerable contributor to traffic congestion and air pollution causing substantial adverse effects on human health. Sharing taxi trips is a possible way of reducing the negative impact of taxi services on cities, but this comes at the expense of passenger discomfort quantifiable in terms of a longer travel time. Due to computational challenges, taxi sharing has traditionally been approached on small scales, such as within airport perimeters, or with dynamical ad hoc heuristics. However, a mathematical framework for the systematic understanding of the tradeoff between collective benefits of sharing and individual passenger discomfort is lacking. Here we introduce the notion of shareability network, which allows us to model the collective benefits of sharing as a function of passenger inconvenience, and to efficiently compute optimal sharing strategies on massive datasets. We apply this framework to a dataset of millions of taxi trips taken in New York City, showing that with increasing but still relatively low passenger discomfort, cumulative trip length can be cut by 40% or more. This benefit comes with reductions in service cost, emissions, and with split fares, hinting toward a wide passenger acceptance of such a shared service. Simulation of a realistic online system demonstrates the feasibility of a shareable taxi service in New York City. Shareability as a function of trip density saturates fast, suggesting effectiveness of the taxi sharing system also in cities with much sparser taxi fleets or when willingness to share is low.", "Taxi ridesharing can be of significant social and environmental benefit, e.g. by saving energy consumption and satisfying people's commute needs. Despite the great potential, taxi ridesharing, especially with dynamic queries, is not well studied.
In this paper, we formally define the dynamic ridesharing problem and propose a large-scale taxi ridesharing service. It efficiently serves real-time requests sent by taxi users and generates ridesharing schedules that reduce the total travel distance significantly. In our method, we first propose a taxi searching algorithm using a spatio-temporal index to quickly retrieve candidate taxis that are likely to satisfy a user query. A scheduling algorithm is then proposed. It checks each candidate taxi and inserts the query's trip into the schedule of the taxi which satisfies the query with minimum additional incurred travel distance. To tackle the heavy computational load, a lazy shortest path calculation strategy is devised to speed up the scheduling algorithm. We evaluated our service using a GPS trajectory dataset generated by over 33,000 taxis during a period of 3 months. By learning the spatio-temporal distributions of real user queries from this dataset, we built an experimental platform that simulates user real behaviours in taking a taxi. Tested on this platform with extensive experiments, our approach demonstrated its efficiency, effectiveness, and scalability. For example, our proposed service serves 25% additional taxi users while saving 13% travel distance compared with no-ridesharing (when the ratio of the number of queries to that of taxis is 6)." ] }
1502.07242
1516898087
Technology of autonomous vehicles (AVs) is becoming mature, and many AVs will appear on roads in the near future. AVs become connected with the support of various vehicular communication technologies, and they possess a high degree of control to respond to instantaneous situations cooperatively with high efficiency and flexibility. In this paper, we propose a new public transportation system based on AVs. It manages a fleet of AVs to accommodate transportation requests, offering point-to-point services with ride sharing. We focus on the two major problems of the system: scheduling and admission control. The former is to configure the most economical schedules and routes for the AVs to satisfy the admissible requests, whereas the latter is to determine the set of admissible requests among all requests to produce maximum profit. The scheduling problem is formulated as a mixed-integer linear program, and the admission control problem is cast as a bilevel optimization, which embeds the scheduling problem as the major constraint. By utilizing the analytical properties of the problem, we develop an effective genetic-algorithm-based method to tackle the admission control problem. We validate the performance of the algorithm with real-world transportation service data.
Admission control generally refers to a validation process in communication systems for quality-of-service assurance. It determines which new connection or service request can be granted resources for subsequent operations. For example, @cite_23 designed an admission control mechanism to add or drop session requests in 4G wireless networks, and @cite_26 discussed various admission control algorithms for multi-service IP networks. We adopt this idea in the transportation system and design an admission control mechanism that differentiates the transportation service requests so as to maximize the total profit. There are many methods to facilitate admission control. The Genetic Algorithm (GA) is one of them, and it has been successfully utilized to design admission control mechanisms, e.g., in @cite_9 and @cite_30 . Based on the special formulation of the admission control problem (to be discussed in Section ), we also adopt GA to solve the problem.
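A minimal sketch of how a GA can drive admission control: a bitstring marks which requests are admitted, and its fitness plays the role of achievable profit. In the actual system the fitness would come from solving the embedded scheduling problem; here a toy knapsack-style profit with a hard capacity constraint stands in for it, and all request data and GA settings below are hypothetical.

```python
import random

random.seed(0)

revenue = [8, 5, 7, 3, 9, 4]   # revenue per request (hypothetical)
load    = [3, 2, 3, 1, 4, 2]   # resource demand per request (hypothetical)
CAPACITY = 8                   # fleet capacity in the same units

def fitness(bits):
    """Toy stand-in for profit: infeasible admission sets earn nothing."""
    used = sum(l for b, l in zip(bits, load) if b)
    if used > CAPACITY:
        return 0
    return sum(r for b, r in zip(bits, revenue) if b)

def evolve(pop_size=30, generations=60, p_mut=0.1):
    n = len(revenue)
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]        # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n)        # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < p_mut) for bit in child]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

Because elitism never discards the incumbent best bitstring, the profit of `best` is non-decreasing over generations; on this six-request toy instance the GA reliably finds a near-optimal feasible admission set.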
{ "cite_N": [ "@cite_30", "@cite_9", "@cite_26", "@cite_23" ], "mid": [ "2114641390", "2160997001", "2085743907", "2114332313" ], "abstract": [ "The wireless mesh network (WMN) has recently emerged as a promising technology for next-generation wireless networking. In the WMN, it is crucial to support mobile users roaming around the network without service interruption. This consideration motivates us to develop an efficient fast handoff approach using distributed computing technology. Particularly, we propose a mobile agent (MA)-based handoff architecture for the WMN, where each mesh client has an MA residing on its registered mesh router to handle the handoff signaling process. To guarantee quality of service (QoS) and achieve differentiated priorities during the handoff, we develop a proportional threshold structured optimal effective bandwidth (PTOEB) policy for call admission control (CAC) on the mesh router, as well as a genetic algorithm (GA)-based approximation approach for the heuristic solution. The simulation study shows that the proposed CAC scheme can obtain a satisfying tradeoff between differentiated priorities and the statistical effective bandwidth in a WMN handoff environment.", "In wireless ATM-based networks, admission control is required to reserve resources in advance for calls requiring guaranteed services. In the case of a multimedia call, each of its substreams (i.e., video, audio, and data) has its own distinct quality of service (QoS) requirements (e.g., cell loss rate, delay, jitter, etc.). The network attempts to deliver the required QoS by allocating an appropriate amount of resources (e.g., bandwidth, buffers). The negotiated QoS requirements constitute a certain QoS level that remains fixed during the call (static allocation approach). Accordingly, the corresponding allocated resources also remain unchanged. We present and analyze an adaptive allocation of resources algorithm based on genetic algorithms. 
In contrast to the static approach, each substream declares a preset range of acceptable QoS levels (e.g., high, medium, low) instead of just a single one. As the availability of resources in the wireless network varies, the algorithm selects the best possible QoS level that each substream can obtain. In case of congestion, the algorithm attempts to free up some resources by degrading the QoS levels of the existing calls to lesser ones. This is done, however, under the constraint of achieving maximum utilization of the resources while simultaneously distributing them fairly among the calls. The degradation is limited to a minimum value predefined in a user-defined profile (UDP). Genetic algorithms have been used to solve the optimization problem. From the user perspective, the perception of the QoS degradation is very graceful and happens only during overload periods. The network services, on the other hand, are greatly enhanced due to the fact that the call blocking probability is significantly decreased. Simulation results demonstrate that the proposed algorithm performs well in terms of increasing the number of admitted calls while utilizing the available bandwidth fairly and effectively.", "Admission Control (AC) has long been considered as a key mechanism to support Quality of Service objectives in networks. There is a significant base of literature in the area of admission control algorithms, but not all of these algorithms are directly comparable in terms of inputs, outputs or objectives. Current theory does not well describe the when, where, and how to best apply AC in designing network and service infrastructure for multi-service IP networks. This tutorial takes an ontological perspective within which to categorize admission control schemes. Industrystandard architectures (both core and access networks) are used to illustrate some of the key concepts. 
Two tables are provided to summarize the different approaches for categorization of admission control schemes.", "Admission control plays a very important role in wireless systems, as it is one of the basic mechanisms for ensuring the quality of service offered to users. Based on the available network resources, it estimates the impact of adding or dropping a new session request. In both 2G and 3G systems, admission control refers to a single network. As we are moving towards heterogeneous wireless networks referred to as systems beyond 3G or 4G, admission control will need to deal with many heterogeneous networks and admit new sessions to a network that is most appropriate to supply the requested QoS. In this article we present the fundamentals of access-network-based admission control, an overview of the existing admission control algorithms for 2G and 3G networks, and finally give the design of a new admission control algorithm suitable for future 4G networks and specifically influenced by the objectives of the European WINNER project." ] }
1502.06818
1543694536
We propose a generalization of SimRank similarity measure for heterogeneous information networks. Given the information network, the intraclass similarity score s(a, b) is high if the set of objects that are related with a and the set of objects that are related with b are pair-wise similar according to all imposed relations.
The basic graph-structure similarity measure is the classical SimRank @cite_7 over a homogeneous graph @math , which is defined as follows: @math @math The main drawback of this approach is that we cannot incorporate multiple relations or object types; the only option is to merge them into the blobs "relation exists" and "all objects", which is not applicable when we have multiple relations with different semantics. For example, the OpenCyc ontology node for the concept "Game" (see Figure ) cannot be easily expressed via a single type of relations and objects.
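For concreteness, the classical SimRank fixed point can be sketched as a naive iterative computation over a small homogeneous graph: two nodes are similar if their in-neighbors are pairwise similar. The toy graph, decay factor, and iteration count below are illustrative choices, not part of the cited work.

```python
def simrank(in_neighbors, C=0.8, iters=10):
    """Naive SimRank iteration. in_neighbors: node -> list of in-neighbors."""
    nodes = list(in_neighbors)
    # Base case s(a, a) = 1; all other scores start at 0.
    s = {(a, b): 1.0 if a == b else 0.0 for a in nodes for b in nodes}
    for _ in range(iters):
        s_new = {}
        for a in nodes:
            for b in nodes:
                if a == b:
                    s_new[(a, b)] = 1.0
                    continue
                Ia, Ib = in_neighbors[a], in_neighbors[b]
                if not Ia or not Ib:
                    s_new[(a, b)] = 0.0  # no in-links, no shared context
                    continue
                total = sum(s[(x, y)] for x in Ia for y in Ib)
                s_new[(a, b)] = C * total / (len(Ia) * len(Ib))
        s = s_new
    return s

# Toy graph: two "users" pointing at overlapping "items".
g = {"u1": [], "u2": [], "i1": ["u1", "u2"], "i2": ["u1"], "i3": ["u2"]}
scores = simrank(g)
```

Here `i1` and `i2` share the in-neighbor `u1`, so they receive a positive score, while `i2` and `i3` have disjoint, mutually dissimilar in-neighbor sets and score zero, which is exactly the "related to similar objects" intuition.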
{ "cite_N": [ "@cite_7" ], "mid": [ "2117831564" ], "abstract": [ "The problem of measuring \"similarity\" of objects arises in many applications, and many domain-specific measures have been developed, e.g., matching text across documents or computing overlap among item-sets. We propose a complementary approach, applicable in any domain with object-to-object relationships, that measures similarity of the structural context in which objects occur, based on their relationships with other objects. Effectively, we compute a measure that says \"two objects are similar if they are related to similar objects:\" This general similarity measure, called SimRank, is based on a simple and intuitive graph-theoretic model. For a given domain, SimRank can be combined with other domain-specific similarity measures. We suggest techniques for efficient computation of SimRank scores, and provide experimental results on two application domains showing the computational feasibility and effectiveness of our approach." ] }
1502.06818
1543694536
We propose a generalization of SimRank similarity measure for heterogeneous information networks. Given the information network, the intraclass similarity score s(a, b) is high if the set of objects that are related with a and the set of objects that are related with b are pair-wise similar according to all imposed relations.
There are several works on measuring similarity between objects from different classes, see, for example, @cite_12 .
{ "cite_N": [ "@cite_12" ], "mid": [ "1967863517" ], "abstract": [ "Similarity search is an important function in many applications, which usually focuses on measuring the similarity between objects with the same type. However, in many scenarios, we need to measure the relatedness between objects with different types. With the surge of study on heterogeneous networks, the relevance measure on objects with different types becomes increasingly important. In this paper, we study the relevance search problem in heterogeneous networks, where the task is to measure the relatedness of heterogeneous objects (including objects with the same type or different types). A novel measure HeteSim is proposed, which has the following attributes: (1) a uniform measure: it can measure the relatedness of objects with the same or different types in a uniform framework; (2) a path-constrained measure: the relatedness of object pairs are defined based on the search path that connects two objects through following a sequence of node types; (3) a semi-metric measure: HeteSim has some good properties (e.g., self-maximum and symmetric), which are crucial to many data mining tasks. Moreover, we analyze the computation characteristics of HeteSim and propose the corresponding quick computation strategies. Empirical studies show that HeteSim can effectively and efficiently evaluate the relatedness of heterogeneous objects." ] }
1502.06878
2951856634
We are motivated by the need, in some applications, for impromptu or as-you-go deployment of wireless sensor networks. A person walks along a line, starting from a sink node (e.g., a base-station), and proceeds towards a source node (e.g., a sensor) which is at an a priori unknown location. At equally spaced locations, he makes link quality measurements to the previous relay, and deploys relays at some of these locations, with the aim to connect the source to the sink by a multihop wireless path. In this paper, we consider two approaches for impromptu deployment: (i) the deployment agent can only move forward (which we call a pure as-you-go approach), and (ii) the deployment agent can make measurements over several consecutive steps before selecting a placement location among them (which we call an explore-forward approach). We consider a light traffic regime, and formulate the problem as a Markov decision process, where the trade-off is among the power used by the nodes, the outage probabilities in the links, and the number of relays placed per unit distance. We obtain the structures of the optimal policies for the pure as-you-go approach as well as for the explore-forward approach. We also consider natural heuristic algorithms, for comparison. Numerical examples show that the explore-forward approach significantly outperforms the pure as-you-go approach. Next, we propose two learning algorithms for the explore-forward approach, based on Stochastic Approximation, which asymptotically converge to the set of optimal policies, without using any knowledge of the radio propagation model. We demonstrate numerically that the learning algorithms can converge (as deployment progresses) to the set of optimal policies reasonably fast and, hence, can be practical, model-free algorithms for deployment over large regions.
In our work, we formulate impromptu deployment as a sequential decision problem and derive optimal deployment policies. Recently, the authors of @cite_19 provided an algorithm, based on an MDP formulation, for establishing a multi-hop network between a sink and an unknown source location by placing relay nodes along a random lattice path. Their model uses a deterministic mapping between power and wireless link length and hence does not capture shadowing, which causes statistical variability in the transmit power required to maintain link quality over links of the same length. This statistical variation of link quality over space calls for measurement-based deployment, in which the agent makes each placement decision based on a measurement of the power required to establish a link (of a given quality) to the previously placed node.
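A simple way to see why shadowing forces measurement-based placement is a toy simulation of a power-threshold policy: the agent walks forward, measures the power needed to reach the last placed relay under log-normal shadowing, and places a relay once that power crosses a threshold. This is a sketch of the kind of simple heuristic the optimal MDP policies are compared against; the path-loss exponent, shadowing variance, and threshold are illustrative assumptions.

```python
import math
import random

random.seed(1)

ETA = 3.0        # path-loss exponent (illustrative)
SIGMA_DB = 4.0   # log-normal shadowing std. dev. in dB (illustrative)
P_MAX = 30.0     # power threshold (dB) that triggers a placement
STEP = 1.0       # measurement spacing along the line (m)

def required_power_db(distance):
    """Power (dB) needed to close a link of this length, with shadowing."""
    return 10 * ETA * math.log10(max(distance, 1e-6)) + random.gauss(0, SIGMA_DB)

def deploy(line_length):
    """Walk forward, placing a relay whenever the measured power is too high."""
    relays, last = [], 0.0
    pos = STEP
    while pos <= line_length:
        if required_power_db(pos - last) > P_MAX:
            relays.append(pos)   # place a relay here; the link resets
            last = pos
        pos += STEP
    return relays

relays = deploy(100.0)
```

Without shadowing, this policy would place relays at a fixed spacing (10 m for these constants); with shadowing, the realized spacings vary from link to link, which is precisely the variability a measurement-based policy must react to.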
{ "cite_N": [ "@cite_19" ], "mid": [ "2122027153" ], "abstract": [ "Our work is motivated by impromptu (or ''as-you-go'') deployment of wireless relay nodes along a path, a need that arises in many situations. In this paper, the path is modeled as starting at the origin (where there is the data sink, e.g., the control center), and evolving randomly over a lattice in the positive quadrant. A person walks along the path deploying relay nodes as he goes. At each step, the path can, randomly, either continue in the same direction or take a turn, or come to an end, at which point a data source (e.g., a sensor) has to be placed, that will send packets to the data sink. A decision has to be made at each step whether or not to place a wireless relay node. Assuming that the packet generation rate by the source is very low, and simple link-by-link scheduling, we consider the problem of sequential relay placement so as to minimize the expectation of an end-to-end cost metric (a linear combination of the sum of convex hop costs and the number of relays placed). This impromptu relay placement problem is formulated as a total cost Markov decision process. First, we derive the optimal policy in terms of an optimal placement set and show that this set is characterized by a boundary (with respect to the position of the last placed relay) beyond which it is optimal to place the next relay. Next, based on a simpler one-step-look-ahead characterization of the optimal policy, we propose an algorithm which is proved to converge to the optimal placement set in a finite number of steps and which is faster than value iteration. We show by simulations that the distance threshold based heuristic, usually assumed in the literature, is close to the optimal, provided that the threshold distance is carefully chosen." ] }
1502.06644
1694496276
Finite mixture models are statistical models which appear in many problems in statistics and machine learning. In such models it is assumed that data are drawn from random probability measures, called mixture components, which are themselves drawn from a probability measure P over probability measures. When estimating mixture models, it is common to make assumptions on the mixture components, such as parametric assumptions. In this paper, we make no assumption on the mixture components, and instead assume that observations from the mixture model are grouped, such that observations in the same group are known to be drawn from the same component. We show that any mixture of m probability measures can be uniquely identified provided there are 2m-1 observations per group. Moreover we show that, for any m, there exists a mixture of m probability measures that cannot be uniquely identified when groups have 2m-2 observations. Our results hold for any sample space with more than one element.
The question of how many samples are necessary in each random group to uniquely identify a finite mixture of measures has come up sporadically over the past couple of decades. Kruskal's theorem has been applied to derive various identifiability results for random groups containing three samples. In @cite_3 it was shown that any mixture of linearly independent measures over a discrete space, or of linearly independent probability distributions on @math , is identifiable from random groups containing three samples. In @cite_5 it was shown that a mixture of @math probability measures on @math is identifiable from random groups of size @math provided there exists some point in @math at which the cdfs of all mixture components are distinct. The result most closely resembling our own is @cite_7 . In that paper it is shown that a mixture of @math probability measures over a discrete domain is identifiable with @math samples in each random group; the authors also show that this bound is tight and provide a consistent algorithm for estimating arbitrary mixtures of measures over a discrete domain.
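The tightness of the 2m-1 bound can be illustrated numerically for m = 2 on a two-element sample space (mixtures of Bernoullis). The two mixtures below are constructed, as an illustrative example, to share their first two moments of the success probability, so the exact distribution of a group of 2 samples is identical under both, while groups of 3 samples (2m-1 = 3) already tell them apart.

```python
from math import comb, sqrt

def group_dist(weights, ps, k):
    """Exact distribution of the sum of a group of k samples from the mixture."""
    return [
        comb(k, j) * sum(w * p**j * (1 - p) ** (k - j)
                         for w, p in zip(weights, ps))
        for j in range(k + 1)
    ]

mix_a = ([0.5, 0.5], [0.2, 0.8])
# Second mixture solved so that E[p] = 0.5 and E[p^2] = 0.34 match mix_a
# exactly, while E[p^3] differs.
q2 = (1 + sqrt(0.24)) / 2
q1 = 1.25 - 1.5 * q2
mix_b = ([0.4, 0.6], [q1, q2])

two_a, two_b = group_dist(*mix_a, 2), group_dist(*mix_b, 2)
three_a, three_b = group_dist(*mix_a, 3), group_dist(*mix_b, 3)
```

The 2-sample group distributions coincide because they depend only on the first two moments of p; the 3-sample distributions differ in the all-ones cell, whose probability is the third moment E[p^3].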
{ "cite_N": [ "@cite_5", "@cite_7", "@cite_3" ], "mid": [ "2124112148", "2014277097", "1880262756" ], "abstract": [ "We consider ways to estimate the mixing proportions in a finite mixture distribution or to estimate the number of components of the mixture distribution without making parametric assumptions about the component distributions. We require a vector of observations on each subject. This vector is mapped into a vector of 0s and 1s and summed. The resulting distribution of sums can be modelled as a mixture of binomials. We then work with the binomial mixture. The efficiency and robustness of this method are compared with the strategy of assuming multivariate normal mixtures when, typically, the true underlying mixture distribution is different. It is shown that in many cases the approach based on simple binomial mixtures is superior.", "We give an algorithm for learning a mixture of unstructured distributions. This problem arises in various unsupervised learning scenarios, for example in learning topic models from a corpus of documents spanning several topics. We show how to learn the constituents of a mixture of k arbitrary distributions over a large discrete domain [n]= 1, 2, ...,n and the mixture weights, using O(n polylog n) samples. (In the topic-model learning setting, the mixture constituents correspond to the topic distributions.) This task is information-theoretically impossible for k > 1 under the usual sampling process from a mixture distribution. However, there are situations (such as the above-mentioned topic model case) in which each sample point consists of several observations from the same mixture constituent. This number of observations, which we call the \"sampling aperture\", is a crucial parameter of the problem. We obtain the first bounds for this mixture-learning problem without imposing any assumptions on the mixture constituents. 
We show that efficient learning is possible exactly at the information-theoretically least-possible aperture of 2k-1. Thus, we achieve near-optimal dependence on n and optimal aperture. While the sample-size required by our algorithm depends exponentially on k, we prove that such a dependence is unavoidable when one considers general mixtures. A sequence of tools contribute to the algorithm, such as concentration results for random matrices, dimension reduction, moment estimations, and sensitivity analysis.", "We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model." ] }
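The role of the group size (the "sampling aperture" discussed above) can be seen from the factorized group distribution. Under a mixture with weights $w_j$ and component measures $p_j$, an exchangeable group of $m$ samples has law

```latex
\Pr(x_1,\dots,x_m) \;=\; \sum_{j=1}^{k} w_j \prod_{i=1}^{m} p_j(x_i),
```

which is a degree-$m$ polynomial in the components; larger groups therefore expose higher-order moments of the mixing measure, which is why identifiability thresholds such as $2k-1$ are stated in terms of the group size.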
1502.06691
2088849024
Abstract The notion of aggregate signature is motivated by applications: it enables any user to compress signatures produced by different signers on different messages into a single short signature. A sequential aggregate signature, in turn, is a special kind of aggregate signature that only allows a signer to add his signature to an aggregate in sequential order. This latter primitive has applications in diverse settings, such as reducing the bandwidth of certificate chains and securing routing protocols. Lu, Ostrovsky, Sahai, Shacham, and Waters (EUROCRYPT 2006) presented the first sequential aggregate signature scheme in the standard model. The size of their public key, however, is quite large (i.e., the number of group elements is proportional to the security parameter), and they therefore posed as an open problem the construction of such a scheme with short keys. In this paper, we propose the first sequential aggregate signature schemes with short public keys (i.e., a constant number of group elements) in prime-order (asymmetric) bilinear groups that are secure under static assumptions in the standard model. Furthermore, our schemes employ a constant number of pairing operations per message-signing and message-verification operation. Technically, we start with a public-key signature scheme based on the recent dual system encryption technique of Lewko and Waters (TCC 2010). This technique cannot directly provide an aggregate signature scheme since, as we observed, additional elements must be published in the public key to support aggregation. Our constructions are therefore careful augmentations of the dual system technique that allow it to support sequential aggregate signature schemes. We also propose a multi-signature scheme with short public parameters in the standard model.
There are some works on aggregate signature schemes that allow signers to communicate with each other, or that compress only some elements of a signature in the aggregation algorithm @cite_3 @cite_9 @cite_8 @cite_6 . Generally, communication resources in computer systems are much more expensive than computation resources, so it is preferable to perform several expensive computational operations rather than a single additional communication exchange. Moreover, a signature scheme with added communication does not correspond to a pure PKS scheme, but rather to a multi-party protocol. In addition, PKS schemes that compress only some elements of signatures cannot be considered aggregate signature schemes, since the total signature size remains proportional to the number of signers. Another research area related to aggregate signatures is multi-signatures @cite_31 @cite_32 @cite_25 . A multi-signature is a special type of aggregate signature in which all signers generate signatures on the same message, after which any user can combine them into a single signature. Aggregate message authentication code (AMAC) is the symmetric-key analogue of aggregate signature: Katz and Lindell @cite_24 introduced the concept of AMAC and showed that an AMAC scheme can be constructed from any message authentication code scheme.
{ "cite_N": [ "@cite_8", "@cite_9", "@cite_32", "@cite_6", "@cite_3", "@cite_24", "@cite_31", "@cite_25" ], "mid": [ "", "1559037135", "2167882086", "2090942449", "1545950900", "1479727008", "200023587", "1505884345" ], "abstract": [ "", "We propose new identity-based multi-signature (IBMS) and aggregate signature (IBAS) schemes, secure under RSA assumption. Our schemes reduce round complexity of previous RSA-based IBMS scheme of Bellare and Neven [BN07] from three to two rounds. Surprisingly, this improvement comes at virtually no cost, as the computational efficiency and exact security of the new scheme are almost identical to those of [BN07]. The new scheme is enabled by a technical tool of independent interest, a class of zero-knowledge proofs of knowledge of preimages of one-way functions which is straight-line simulatable, enabling concurrency and good exact security, and aggregatable, enabling aggregation of parallel instances of such proofs into short multi aggregate signatures.", "We propose a robust proactive threshold signature scheme, a multisignature scheme and a blind signature scheme which work in any Gap Diffie-Hellman (GDH) group (where the Computational Diffie-Hellman problem is hard but the Decisional Diffie-Hellman problem is easy). Our constructions are based on the recently proposed GDH signature scheme of [8]. Due to the instrumental structure of GDH groups and of the base scheme, it turns out that most of our constructions are simpler, more efficient and have more useful properties than similar existing constructions. We support all the proposed schemes with proofs under the appropriate computational assumptions, using the corresponding notions of security.", "Sequential aggregate signature schemes allow n signers, in order, to sign a message each, at a lower total cost than the cost of n individual signatures. We present a sequential aggregate signature scheme based on trapdoor permutations (e.g., RSA). 
Unlike prior such proposals, our scheme does not require a signer to retrieve the keys of other signers and verify the aggregate-so-far before adding its own signature. Indeed, we do not even require a signer to know the public keys of other signers!Moreover, for applications that require signers to verify the aggregate anyway, our schemes support lazy verification: a signer can add its own signature to an unverified aggregate and forward it along immediately, postponing verification until load permits or the necessary public keys are obtained. This is especially important for applications where signers must access a large, secure, and current cache of public keys in order to verify messages. The price we pay is that our signature grows slightly with the number of signers.We report a technical analysis of our scheme (which is provably secure in the random oracle model), a detailed implementation-level specification, and implementation results based on RSA and OpenSSL. To evaluate the performance of our scheme, we focus on the target application of BGPsec (formerly known as Secure BGP), a protocol designed for securing the global Internet routing system. There is a particular need for lazy verification with BGPsec, since it is run on routers that must process signatures extremely quickly, while being able to access tens of thousands of public keys. We compare our scheme to the algorithms currently proposed for use in BGPsec, and find that our signatures are considerably shorter than nonaggregate RSA (with the same sign and verify times) and have an order of magnitude faster verification than nonaggregate ECDSA, although ECDSA has shorter signatures when the number of signers is small.", "Multi-signatures allow multiple signers to jointly authenticate a message using a single compact signature. Many applications however require the public keys of the signers to be sent along with the signature, partly defeating the effect of the compact signature. 
Since identity strings are likely to be much shorter than randomly generated public keys, the identity-based paradigm is particularly appealing for the case of multi-signatures. In this paper, we present and prove secure an identity-based multi-signature (IBMS) scheme based on RSA, which in particular does not rely on (the rather new and untested) assumptions related to bilinear maps. We define an appropriate security notion for interactive IBMS schemes and prove the security of our scheme under the one-wayness of RSA in the random oracle model.", "We propose and investigate the notion of aggregate message authentication codes (MACs) which have the property that multiple MAC tags, computed by (possibly) different senders on multiple (possibly different) messages, can be aggregated into a shorter tag that can still be verified by a recipient who shares a distinct key with each sender. We suggest aggregate MACs as an appropriate tool for authenticated communication in mobile ad-hoc networks or other settings where resource-constrained devices share distinct keys with a single entity (such as a base station), and communication is an expensive resource.", "", "We present the first aggregate signature, the first multisignature, and the first verifiably encrypted signature provably secure without random oracles. Our constructions derive from a novel application of a recent signature scheme due to Waters. Signatures in our aggregate signature scheme are sequentially constructed, but knowledge of the order in which messages were signed is not necessary for verification. The aggregate signatures obtained are shorter than sequential aggregates and can be verified more efficiently than aggregates. We also consider applications to secure routing and proxy signatures." ] }
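The Katz-Lindell AMAC mentioned above aggregates tags by XOR-ing them. A minimal sketch of that idea on top of the standard-library HMAC (key and message values here are illustrative placeholders):

```python
import hmac
import hashlib


def mac(key: bytes, msg: bytes) -> bytes:
    """Ordinary HMAC-SHA256 tag for one (key, message) pair."""
    return hmac.new(key, msg, hashlib.sha256).digest()


def aggregate(tags) -> bytes:
    """Katz-Lindell-style aggregation: XOR the individual tags together."""
    agg = bytes(32)  # 32 zero bytes, matching the SHA-256 digest length
    for t in tags:
        agg = bytes(a ^ b for a, b in zip(agg, t))
    return agg


def verify(agg: bytes, key_msg_pairs) -> bool:
    """The verifier, who shares a key with each sender, recomputes every
    MAC and checks that their XOR matches the aggregate tag."""
    return hmac.compare_digest(agg, aggregate(mac(k, m) for k, m in key_msg_pairs))
```

The point of the construction is that the aggregate tag stays one digest long no matter how many senders contribute, while verification still requires all the shared keys.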
1502.06197
2110072487
Multiple hypotheses testing is a core problem in statistical inference and arises in almost every scientific field. Given a sequence of null hypotheses @math , Benjamini and Hochberg benjamini1995controlling introduced the false discovery rate (FDR) criterion, which is the expected proportion of false positives among rejected null hypotheses, and proposed a testing procedure that controls FDR below a pre-assigned significance level. They also proposed a different criterion, called mFDR, which does not control a property of the realized set of tests; rather, it controls the ratio of the expected number of false discoveries to the expected number of discoveries. In this paper, we propose two procedures for multiple hypotheses testing that we will call "LOND" and "LORD". These procedures control FDR and mFDR in an online manner. Concretely, we consider an ordered --possibly infinite-- sequence of null hypotheses @math where, at each step @math , the statistician must decide whether to reject hypothesis @math having access only to the previous decisions. To the best of our knowledge, our work is the first that controls FDR in this setting. This model was introduced by Foster and Stine alpha-investing , whose alpha-investing rule only controls mFDR in an online manner. In order to compare different procedures, we develop lower bounds on the total discovery rate under the mixture model and prove that both LOND and LORD make a nearly linear number of discoveries. We further propose an adjustment to LOND to address arbitrary correlations among the @math -values. Finally, we evaluate the performance of our procedures on both synthetic and real data, comparing them with the alpha-investing rule, the Benjamini-Hochberg method and a Bonferroni procedure.
Building upon alpha-investing procedures, @cite_7 develops VIF, a method for feature selection in large regression problems. VIF is accurate and computationally very efficient; it uses a one-pass search over the pool of features and applies alpha-investing to test each feature for addition to the model. VIF regression avoids overfitting by leveraging the property that alpha-investing controls @math . Similarly, one can incorporate the @math and @math procedures into VIF regression to perform fast online feature selection and provably avoid overfitting.
{ "cite_N": [ "@cite_7" ], "mid": [ "2150859718" ], "abstract": [ "We propose a fast and accurate algorithm, VIF regression, for doing feature selection in large regression problems. VIF regression is extremely fast; it uses a one-pass search over the predictors and a computationally efficient method of testing each potential predictor for addition to the model. VIF regression provably avoids model overfitting, controlling the marginal false discovery rate. Numerical results show that it is much faster than any other published algorithm for regression with feature selection and is as accurate as the best of the slower algorithms." ] }
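The alpha-investing mechanism underlying VIF can be sketched in a few lines: the tester holds "alpha-wealth", pays to run each test, and earns a payout on each rejection. The particular spending rule below (risking half the current wealth per test) is one simple choice, not the one mandated by Foster and Stine:

```python
def alpha_investing(pvals, alpha=0.05, eta=1.0):
    """Online testing via alpha-investing (sketch).

    Spends level/(1 - level) wealth on each test; on a rejection the
    wealth is replenished by a payout of alpha. Returns the list of
    reject decisions, one per p-value, made in order.
    """
    wealth = alpha * eta  # initial alpha-wealth
    decisions = []
    for p in pvals:
        bid = wealth / 2.0          # wealth risked on this test (our choice)
        level = bid / (1.0 + bid)   # so that level / (1 - level) == bid
        reject = p <= level
        if reject:
            wealth += alpha         # payout on each discovery
        else:
            wealth -= bid           # pay level / (1 - level) == bid
        decisions.append(reject)
    return decisions
```

Because each decision uses only past outcomes, the procedure is online, and the wealth dynamics are what give the mFDR guarantee in the original analysis.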
1502.06197
2110072487
Multiple hypotheses testing is a core problem in statistical inference and arises in almost every scientific field. Given a sequence of null hypotheses @math , Benjamini and Hochberg benjamini1995controlling introduced the false discovery rate (FDR) criterion, which is the expected proportion of false positives among rejected null hypotheses, and proposed a testing procedure that controls FDR below a pre-assigned significance level. They also proposed a different criterion, called mFDR, which does not control a property of the realized set of tests; rather, it controls the ratio of the expected number of false discoveries to the expected number of discoveries. In this paper, we propose two procedures for multiple hypotheses testing that we will call "LOND" and "LORD". These procedures control FDR and mFDR in an online manner. Concretely, we consider an ordered --possibly infinite-- sequence of null hypotheses @math where, at each step @math , the statistician must decide whether to reject hypothesis @math having access only to the previous decisions. To the best of our knowledge, our work is the first that controls FDR in this setting. This model was introduced by Foster and Stine alpha-investing , whose alpha-investing rule only controls mFDR in an online manner. In order to compare different procedures, we develop lower bounds on the total discovery rate under the mixture model and prove that both LOND and LORD make a nearly linear number of discoveries. We further propose an adjustment to LOND to address arbitrary correlations among the @math -values. Finally, we evaluate the performance of our procedures on both synthetic and real data, comparing them with the alpha-investing rule, the Benjamini-Hochberg method and a Bonferroni procedure.
There has been significant interest over the last two years in developing hypothesis testing procedures for high-dimensional regression, especially in conjunction with sparsity-seeking methods. Procedures for computing @math -values for low-dimensional coordinates were developed in @cite_0 @cite_12 @cite_2 @cite_17 @cite_10 . Sequential and selective inference methods were proposed in @cite_14 @cite_1 @cite_18 , and methods to control FDR were put forward in @cite_6 @cite_22 .
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_22", "@cite_1", "@cite_6", "@cite_0", "@cite_2", "@cite_10", "@cite_12", "@cite_17" ], "mid": [ "172338866", "2798582617", "1916786071", "2099932489", "2109177042", "", "", "2962726112", "", "2963278901" ], "abstract": [ "In this paper we propose new inference tools for forward stepwise and least angle regression. We first present a general scheme to perform valid inference after any selection event that can be characterized as the observation vector y falling into some polyhedral set. This framework then allows us to derive conditional (post-selection) hypothesis tests at any step of the forward stepwise and least angle regression procedures. We derive an exact null distribution for our proposed test statistics in finite samples, yielding p-values with exact type I error control. The tests can also be inverted to produce confidence intervals for appropriate underlying regression parameters. Application of this framework to general likelihood-based regression models (e.g., generalized linear models and the Cox model) is also discussed.", "", "We introduce a new estimator for the vector of coecients in the linear model y = X +z, where X has dimensions n p with p possibly larger than n. SLOPE, short for Sorted L-One Penalized Estimation, is the solution to", "To perform inference after model selection, we propose controlling the selective type I error; i.e., the error rate of a test given that it was performed. By doing so, we recover long-run frequency properties among selected hypotheses analogous to those that apply in the classical (non-adaptive) context. Our proposal is closely related to data splitting and has a similar intuitive justification, but is more powerful. 
Exploiting the classical theory of Lehmann", "In many fields of science, we observe a response variable together with a large number of potential explanatory variables, and would like to be able to discover which variables are truly associated with the response. At the same time, we need to know that the false discovery rate (FDR) - the expected fraction of false discoveries among all discoveries - is not too high, in order to assure the scientist that most of the discoveries are indeed true and replicable. This paper introduces the knockoff filter, a new variable selection procedure controlling the FDR in the statistical linear model whenever there are at least as many observations as variables. This method achieves exact FDR control in finite sample settings no matter the design or covariates, the number of variables in the model, or the amplitudes of the unknown regression coefficients, and does not require any knowledge of the noise level. As the name suggests, the method operates by manufacturing knockoff variables that are cheap - their construction does not require any new data - and are designed to mimic the correlation structure found within the existing variables, in a way that allows for accurate FDR control, beyond what is possible with permutation-based methods. The method of knockoffs is very general and flexible, and can work with a broad class of test statistics. We test the method in combination with statistics from the Lasso for sparse regression, and obtain empirical results showing that the resulting method has far more power than existing selection rules when the proportion of null variables is high.", "", "", "We consider the problem of fitting the parameters of a high-dimensional linear regression model. In the regime where the number of parameters p is comparable to or exceeds the sample size n, a successful approach uses an l1-penalized least squares estimator, known as Lasso. Unfortunately, unlike for linear estimators (e.g. 
ordinary least squares), no well-established method exists to compute confidence intervals or p-values on the basis of the Lasso estimator. Very recently, a line of work [8], [7], [13] has addressed this problem by constructing a debiased version of the Lasso estimator. We propose a special debiasing method that is well suited for random designs with sparse inverse covariance. Our approach improves over the state of the art in that it yields nearly optimal average testing power if sample size n asymptotically dominates s0(logp)2, with s0 being the sparsity level (number of non-zero coefficients). Earlier work achieved similar performances only for much larger sample size, namely it requires n to asymptotically dominates (s0 log p)2. We evaluate our method on synthetic data, and compare it with earlier proposals.", "", "We consider linear regression in the high-dimensional regime in which the number of observations n is smaller than the number of parameters p. A very successful approach in this setting uses 1-penalized least squares (a.k.a. the Lasso) to search for a subset of s0 < n parameters that best explain the data, while setting the other parameters to zero. A considerable amount of work has been devoted to characterizing the estimation and model selection problems within this approach. In this paper we consider instead the fundamental, but far less understood, question of statistical significance. We study this problem under the random design model in which the rows of the design matrix are i.i.d. and drawn from a high-dimensional Gaussian distribution. This situation arises, for instance, in learning high-dimensional Gaussian graphical models. Leveraging on an asymptotic distributional characterization of regularized least squares estimators, we develop a procedure for computing p-values and hence assessing statistical significance for hypothesis testing. 
We characterize the statistical power of this procedure, and evaluate it on synthetic and real data, comparing it with earlier proposals. Finally, we provide an upper bound on the minimax power of tests with a given significance level and show that our proposed procedure achieves this bound in case of design matrices with i.i.d. Gaussian entries." ] }
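For reference, writing $V$ for the number of false rejections and $R$ for the total number of rejections, the two criteria contrasted in the abstract above are commonly written as

```latex
\mathrm{FDR} \;=\; \mathbb{E}\!\left[\frac{V}{\max(R,\,1)}\right],
\qquad
\mathrm{mFDR}_{\eta} \;=\; \frac{\mathbb{E}[V]}{\mathbb{E}[R] + \eta},
```

which makes the distinction concrete: FDR is the expectation of a property of the realized set of tests, while mFDR is a ratio of expectations.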
1502.06197
2110072487
Multiple hypotheses testing is a core problem in statistical inference and arises in almost every scientific field. Given a sequence of null hypotheses @math , Benjamini and Hochberg benjamini1995controlling introduced the false discovery rate (FDR) criterion, which is the expected proportion of false positives among rejected null hypotheses, and proposed a testing procedure that controls FDR below a pre-assigned significance level. They also proposed a different criterion, called mFDR, which does not control a property of the realized set of tests; rather it controls the ratio of expected number of false discoveries to the expected number of discoveries. In this paper, we propose two procedures for multiple hypotheses testing that we will call "LOND" and "LORD". These procedures control FDR and mFDR in an . Concretely, we consider an ordered --possibly infinite-- sequence of null hypotheses @math where, at each step @math , the statistician must decide whether to reject hypothesis @math having access only to the previous decisions. To the best of our knowledge, our work is the first that controls FDR in this setting. This model was introduced by Foster and Stine alpha-investing whose alpha-investing rule only controls mFDR in online manner. In order to compare different procedures, we develop lower bounds on the total discovery rate under the mixture model and prove that both LOND and LORD have nearly linear number of discoveries. We further propose adjustment to LOND to address arbitrary correlation among the @math -values. Finally, we evaluate the performance of our procedures on both synthetic and real data comparing them with alpha-investing rule, Benjamin-Hochberg method and a Bonferroni procedure.
To the best of our knowledge, the only procedure comparable to the ones we develop is the ForwardStop rule of @cite_3 . Note, however, that this approach falls short of addressing the issues we consider, for several reasons. @math It is not online, at least in the form presented in @cite_3 , since it rejects the first @math null hypotheses, where @math depends on all the @math -values. @math It requires knowledge of all past @math -values (not only the discovery events) to compute the current score. @math Since it is constrained to reject all hypotheses before @math and accept all of them after, it cannot achieve any discovery rate increasing with @math , let alone nearly linear in @math . For instance, in the mixture model of Section , if the fraction of true non-nulls is @math , then ForwardStop achieves @math discoveries out of @math true non-nulls. In other words, its power is of order @math in this simple case, no matter the strength of the signal for the non-null hypotheses.
{ "cite_N": [ "@cite_3" ], "mid": [ "1871418963" ], "abstract": [ "Summary We consider a multiple-hypothesis testing setting where the hypotheses are ordered and one is only permitted to reject an initial contiguous block of hypotheses. A rejection rule in this setting amounts to a procedure for choosing the stopping point k. This setting is inspired by the sequential nature of many model selection problems, where choosing a stopping point or a model is equivalent to rejecting all hypotheses up to that point and none thereafter. We propose two new testing procedures and prove that they control the false discovery rate in the ordered testing setting. We also show how the methods can be applied to model selection by using recent results on p-values in sequential model selection settings." ] }
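Unlike ForwardStop, the LOND rule proposed in the abstract above needs only the running count of discoveries, not the past p-values. A sketch of the rule as described (test level $\beta_i$ times one plus the number of discoveries so far), where the $1/i^2$ spending sequence is one valid choice summing to $\alpha$:

```python
import math


def lond(pvals, alpha=0.05):
    """Sketch of the LOND rule for online FDR control.

    At step i, hypothesis i is rejected if its p-value is at most
    beta_i * (D_{i-1} + 1), where D_{i-1} counts discoveries so far
    and beta_i = alpha * 6 / (pi^2 * i^2) sums to alpha over i.
    """
    discoveries = 0
    decisions = []
    for i, p in enumerate(pvals, start=1):
        beta_i = alpha * 6.0 / (math.pi ** 2 * i ** 2)
        level = beta_i * (discoveries + 1)
        reject = p <= level
        discoveries += int(reject)
        decisions.append(reject)
    return decisions
```

Each decision depends only on the current p-value and the count of earlier rejections, which is what makes the procedure online and lets test levels grow as discoveries accumulate.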
1502.06583
2951317740
The rise of social media provides a great opportunity for people to reach out to their social connections to satisfy their information needs. However, generic social media platforms are not explicitly designed to assist information seeking of users. In this paper, we propose a novel framework to identify the social connections of a user able to satisfy his information needs. The information need of a social media user is subjective and personal, and we investigate the utility of his social context to identify people able to satisfy it. We present questions users post on Twitter as instances of information seeking activities in social media. We infer soft community memberships of the asker and his social connections by integrating network and content information. Drawing concepts from the social foci theory, we identify answerers who share communities with the asker w.r.t. the question. Our experiments demonstrate that the framework is effective in identifying answerers to social media questions.
Social media questions have received considerable attention in research communities @cite_8 @cite_35 @cite_37 . An analytical study of questions asked and answers received on Twitter is presented in @cite_25 @cite_20 ; it indicated that subjective questions were the most prevalent and that the trust users have in their friends was the primary factor for asking questions. A study of questions and the responses they received on Facebook was conducted in @cite_6 @cite_23 , and bridging social capital was proposed as a strong motivation for Q&A activity in social media. These works give interesting insights into the question-answering process in social media, but do not focus on identifying answerers to these questions.
{ "cite_N": [ "@cite_35", "@cite_37", "@cite_8", "@cite_6", "@cite_23", "@cite_25", "@cite_20" ], "mid": [ "2110515791", "1992364867", "1603802822", "2067022383", "", "2157025439", "2215511344" ], "abstract": [ "Microblogging services such as twitter.com have become popular venues for informal information interactions. An important aspect of these interaction is question asking. In this paper we report results from an analysis of a large sample of data from Twitter. Our analysis focused on the characteristics and strategies that people bring to asking questions in microblogs. In particular, based on our analysis, we propose a taxonomy of questions asked in microblogs. We find that microblog authors express questions to accomplish a wide variety of social and informational tasks. Some microblog questions seek immediate answers, while others accrue information over time. Our overarching finding is that question asking in microblogs is strongly tied to peoples' naturalistic interactions, and that the act of asking questions in Twitter is not analogous to information seeking in more traditional information retrieval environments.", "Recently questioning and answering (QA the key factors of mobile Q&A usage are accessibility convenience of mobile Q&A, promptness of receiving answers, and users' satisficing behavior of information seeking (i.e., minimizing efforts and settling with good enough information). We also observe that users tend to seek more factual information attributed to everyday life activities than they do on traditional Q&A sites and that they exhibit unique interaction patterns such as repeating and refining questions as coping strategies in seeking information needs. 
Our main findings reported in the paper have significant implications on the design of mobile Q&A systems.", "Online social networking tools are used around the world by people to ask questions of their friends, because friends provide direct, reliable, contextualized, and interactive responses. However, although the tools used in different cultures for question asking are often very similar, the way they are used can be very different, reflecting unique inherent cultural characteristics. We present the results of a survey designed to elicit cultural differences in people’s social question asking behaviors across the United States, the United Kingdom, China, and India. The survey received responses from 933 people distributed across the four countries who held similar job roles and were employed by a single organization. Responses included information about the questions they ask via social networking tools, and their motivations for asking and answering questions online. The results reveal culture as a consistently significant factor in predicting people’s social question and answer behavior. The prominent cultural differences we observe might be traced to people’s inherent cultural characteristics (e.g., their cognitive patterns and social orientation), and should be comprehensively considered in designing social search systems.", "Research has identified a link between Facebook use and bridging social capital, which speaks to the informational resources provided by a diverse network of connections. In order to explicate the mechanism through which Facebook may help individuals mobilize these embedded informational and support resources, this study explores the role of bridging social capital, question type, and relational closeness on the perceived utility and satisfaction of information obtained through questions posed to one's network of Facebook Friends through the status update feature. 
Employing a mixed-method approach, we utilize survey data collected from a sample of non-academic university staff (N=666), as well as actual Facebook question examples and responses collected during a follow-up lab session from a subset of this sample (N=71). Results indicate that question-askers' bridging social capital positively predicts the utility of responses received on SNS, while useful responses are more likely to be received from weaker ties.", "", "People often turn to their friends, families, and colleagues when they have questions. The recent, rapid rise of online social networking tools has made doing this on a large scale easy and efficient. In this paper we explore the phenomenon of using social network status messages to ask questions. We conducted a survey of 624 people, asking them to share the questions they have asked and answered of their online social networks. We present detailed data on the frequency of this type of question asking, the types of questions asked, and respondents' motivations for asking their social networks rather than using more traditional search tools like Web search engines. We report on the perceived speed and quality of the answers received, as well as what motivates people to respond to questions seen in their friends' status messages. We then discuss the implications of our findings for the design of next-generation search tools.", "People often turn to their social networks to fulfill their information needs. We conducted a study of question asking and answering (QA however, posting more tweets or posting more frequently did not increase chances of receiving a response. Most often the ‘follow’ relationship between asker and answerer was one-way. We provide a rich characterization of Q&A in social information streams and discuss implications for design." ] }
1502.06583
2951317740
The rise of social media provides a great opportunity for people to reach out to their social connections to satisfy their information needs. However, generic social media platforms are not explicitly designed to assist information seeking of users. In this paper, we propose a novel framework to identify the social connections of a user able to satisfy his information needs. The information need of a social media user is subjective and personal, and we investigate the utility of his social context to identify people able to satisfy it. We present questions users post on Twitter as instances of information seeking activities in social media. We infer soft community memberships of the asker and his social connections by integrating network and content information. Drawing concepts from the social foci theory, we identify answerers who share communities with the asker w.r.t. the question. Our experiments demonstrate that the framework is effective in identifying answerers to social media questions.
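The community-overlap idea in the abstract above lends itself to a small sketch. This is our own illustrative code, not the paper's framework: the function name, users, communities, and weights are all hypothetical. It ranks a user's connections by how much their soft community memberships overlap with the asker's, weighted by how relevant each community is to the question.

```python
# Sketch: rank a user's connections as candidate answerers by the overlap of
# their soft community memberships with the asker's, weighted by how much the
# question concerns each community. All names and weights are illustrative.

def rank_answerers(asker_mix, candidate_mixes, question_mix):
    """Each *_mix maps community id -> soft membership weight."""
    scores = {}
    for user, mix in candidate_mixes.items():
        # Shared-focus score: a community counts when both asker and candidate
        # belong to it, and counts more when the question is about it.
        scores[user] = sum(
            min(asker_mix.get(c, 0.0), w) * question_mix.get(c, 0.0)
            for c, w in mix.items()
        )
    return sorted(scores, key=scores.get, reverse=True)

asker = {"photography": 0.7, "hiking": 0.3}
candidates = {
    "alice": {"photography": 0.8, "cooking": 0.2},
    "bob": {"hiking": 0.9, "cooking": 0.1},
}
question = {"photography": 1.0}  # the question concerns photography
print(rank_answerers(asker, candidates, question))  # alice ranks first
```

The `min` acts as a fuzzy intersection of the two membership vectors, which is one simple way to express the social-foci intuition that shared communities matter.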
A related line of research is the study of community Q&A systems like Yahoo! Answers @cite_16 and Quora @cite_7 . Content from existing Q&A sessions is used to rank answerers by NLP techniques. @cite_17 uses link structure to find authoritative answerers for a question category. @cite_18 and @cite_34 combine network and content information to identify authoritative users as answerers. The environment for social media questions is different, as the candidate answerers are themselves connected via social relations. Systems utilizing question categories @cite_38 cannot be applied, as such categories are not explicitly known in generic social media. Social expertise systems @cite_13 @cite_1 identify subject matter experts in social media. Social media questions are subjective and personal, and might require answerers who share social context with the asker rather than subject matter experts.
{ "cite_N": [ "@cite_38", "@cite_18", "@cite_7", "@cite_1", "@cite_16", "@cite_34", "@cite_13", "@cite_17" ], "mid": [ "", "", "1730818938", "", "2129251351", "2022149269", "1975583660", "2025895610" ], "abstract": [ "", "", "Efforts such as Wikipedia have shown the ability of user communities to collect, organize and curate information on the Internet. Recently, a number of question and answer (Q&A) sites have successfully built large growing knowledge repositories, each driven by a wide range of questions and answers from its users community. While sites like Yahoo Answers have stalled and begun to shrink, one site still going strong is Quora, a rapidly growing service that augments a regular Q&A system with social links between users. Despite its success, however, little is known about what drives Quora's growth, and how it continues to connect visitors and experts to the right questions as it grows. In this paper, we present results of a detailed analysis of Quora using measurements. We shed light on the impact of three different connection networks (or graphs) inside Quora, a graph connecting topics to users, a social graph connecting users, and a graph connecting related questions. Our results show that heterogeneity in the user and question graphs are significant contributors to the quality of Quora's knowledge base. One drives the attention and activity of users, and the other directs them to a small set of popular and interesting questions.", "", "Yahoo Answers (YA) is a large and diverse question-answer forum, acting not only as a medium for sharing technical knowledge, but as a place where one can seek advice, gather opinions, and satisfy one's curiosity about a countless number of things. In this paper, we seek to understand YA's knowledge sharing and activity. We analyze the forum categories and cluster them according to content characteristics and patterns of interaction among the users. 
While interactions in some categories resemble expertise sharing forums, others incorporate discussion, everyday advice, and support. With such a diversity of categories in which one can participate, we find that some users focus narrowly on specific topics, while others participate across categories. This not only allows us to map related categories, but to characterize the entropy of the users' interests. We find that lower entropy correlates with receiving higher answer ratings, but only for categories where factual expertise is primarily sought after. We combine both user attributes and answer characteristics to predict, within a given category, whether a particular answer will be chosen as the best answer by the asker.", "Community Question Answering (CQA) websites, where people share expertise on open platforms, have become large repositories of valuable knowledge. To bring the best value out of these knowledge repositories, it is critically important for CQA services to know how to find the right experts, retrieve archived similar questions and recommend best answers to new questions. To tackle this cluster of closely related problems in a principled approach, we proposed Topic Expertise Model (TEM), a novel probabilistic generative model with GMM hybrid, to jointly model topics and expertise by integrating textual content model and link structure analysis. Based on TEM results, we proposed CQARank to measure user interests and expertise score under different topics. Leveraging the question answering history based on long-term community reviews and voting, our method could find experts with both similar topical preference and high topical expertise. Experiments carried out on Stack Overflow data, the largest CQA focused on computer programming, show that our method achieves significant improvement over existing methods on multiple metrics.", "Content in microblogging systems such as Twitter is produced by tens to hundreds of millions of users. 
This diversity is a notable strength, but also presents the challenge of finding the most interesting and authoritative authors for any given topic. To address this, we first propose a set of features for characterizing social media authors, including both nodal and topical metrics. We then show how probabilistic clustering over this feature space, followed by a within-cluster ranking procedure, can yield a final list of top authors for a given topic. We present results across several topics, along with results from a user study confirming that our method finds authors who are significantly more interesting and authoritative than those resulting from several baseline conditions. Additionally our algorithm is computationally feasible in near real-time scenarios making it an attractive alternative for capturing the rapidly changing dynamics of microblogs.", "Question-Answer portals such as Naver and Yahoo! Answers are quickly becoming rich sources of knowledge on many topics which are not well served by general web search engines. Unfortunately, the quality of the submitted answers is uneven, ranging from excellent detailed answers to snappy and insulting remarks or even advertisements for commercial content. Furthermore, user feedback for many topics is sparse, and can be insufficient to reliably identify good answers from the bad ones. Hence, estimating the authority of users is a crucial task for this emerging domain, with potential applications to answer ranking, spam detection, and incentive mechanism design. We present an analysis of the link structure of a general-purpose question answering community to discover authoritative users, and promising experimental results over a dataset of more than 3 million answers from a popular community QA site. We also describe structural differences between question topics that correlate with the success of link analysis for authority discovery." ] }
1502.06314
1579644103
Empowered by today's rich tools for media generation and distribution, and convenient Internet access, crowdsourced streaming generalizes the single-source streaming paradigm by including massive contributors for a video channel. It calls for a joint optimization along the path from crowdsourcers, through streaming servers, to the end-users to minimize the overall latency. The dynamics of the video sources, together with the globalized request demands and the high computation demand from each sourcer, make crowdsourced live streaming challenging even with powerful support from modern cloud computing. In this paper, we present a generic framework that facilitates a cost-effective cloud service for crowdsourced live streaming. Through adaptive leasing, the cloud servers can be provisioned at a fine granularity to accommodate geo-distributed video crowdsourcers. We present an optimal solution to deal with service migration among cloud instances of diverse lease prices. It also addresses the impact of location on streaming quality. To understand the performance of the proposed strategies in the real world, we have built a prototype system running over PlanetLab and the Amazon and Microsoft clouds. Our extensive experiments demonstrate the effectiveness of our solution in terms of deployment cost and streaming quality.
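As a toy illustration of choosing among cloud lease options of diverse prices (our own sketch under simplifying assumptions, not the paper's optimal migration strategy; the option names and prices are hypothetical):

```python
# Sketch: pick the cheapest lease option for a crowdsourcer's expected
# streaming duration. Real systems would also weigh migration cost and
# geographic location; this only illustrates the price trade-off.

def cheapest_lease(duration_hours, options):
    """options: list of (name, upfront_cost, hourly_rate) tuples."""
    def total(opt):
        _, upfront, hourly = opt
        return upfront + hourly * duration_hours
    return min(options, key=total)[0]

options = [
    ("on_demand", 0.0, 0.10),  # no commitment, higher hourly rate
    ("reserved",  5.0, 0.02),  # upfront fee, lower hourly rate
]
print(cheapest_lease(10, options))   # short session -> "on_demand"
print(cheapest_lease(100, options))  # long-lived channel -> "reserved"
```

The break-even duration between two such options is where upfront-fee savings offset the hourly-rate difference, which is the intuition behind leasing in a fine granularity.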
Empowered by today's rich tools for media generation and collaborative production, and convenient Internet access, crowdsourcing further extends the single-source paradigm. It combines the efforts of multiple self-identified contributors, known as crowdsourcers, for a greater result, and has seen success in many areas @cite_3 . For example, LiFS (Locating in Fingerprint Space) was developed for wireless indoor localization with smartphone-based crowdsourcing @cite_12 . @cite_13 used a crowdsourcing approach to optimize mobile devices' energy efficiency by utilizing signal strength traces shared by other devices in cellular networks. For video applications, a scalable system that allows users to perform content-based searches on a continuous collection of crowdsourced video was proposed in @cite_4 . @cite_0 investigated the crowdsourcing of personal and social traits in online social video, or social media content in general. Recently, YouTube has integrated with Google Moderator, a crowdsourcing and feedback product, to increase the engagement between viewers and content creators. Other video sharing sites, such as Poptent and VeedMe, have also opened interfaces for crowdsourcers with user-generated content. Crowdsourced live streaming services have emerged in the market as well, especially for online sports broadcasts. Examples include Stream2Watch.me and sportLEMON.tv.
{ "cite_N": [ "@cite_4", "@cite_3", "@cite_0", "@cite_13", "@cite_12" ], "mid": [ "2100538831", "", "1998850479", "2026472243", "2003738415" ], "abstract": [ "We propose a scalable Internet system for continuous collection of crowd-sourced video from devices such as Google Glass. Our hybrid cloud architecture, GigaSight, is effectively a Content Delivery Network (CDN) in reverse. It achieves scalability by decentralizing the collection infrastructure using cloudlets based on virtual machines (VMs). Based on time, location, and content, privacy-sensitive information is automatically removed from the video. This process, which we refer to as denaturing, is executed in a user-specific VM on the cloudlet. Users can perform content-based searches on the total catalog of denatured videos. Our experiments reveal the bottlenecks for video upload, denaturing, indexing, and content-based search. They also provide insight on how parameters such as frame rate and resolution impact scalability.", "", "While multimedia and social computing research has used crowdsourcing techniques to annotate objects, actions, and scenes in social video sites like YouTube, little work has addressed the crowdsourcing of personal and social traits in online social video or social media content in general. In this paper, we address the problems of (1) crowdsourcing the annotation of first impressions of video bloggers' (vloggers') personal and social traits in conversational YouTube videos, and (2) mining the impressions with the goal of modeling the interplay of different vlogger facets. First, we design a human annotation task to crowdsource impressions of vloggers that extends a tradition of studies of personality impressions with the addition of attractiveness and mood impressions. Second, we propose a probabilistic framework using Topic Models to discover prototypical impressions that are data-driven, and that combine multiple facets of vloggers.
Finally, we address the task of automatically predicting topic impressions using nonverbal and verbal content extracted from videos and comments. Our study of 442 YouTube vlogs and 2,210 annotations collected on Mechanical Turk supports recent literature showing the feasibility of crowdsourcing interpersonal human impressions with comparable quality to what is reported in social psychology research, and provides insights on the interplay among human first impressions. We also show that topic models are useful to discover meaningful prototypical impressions that can be validated by humans, and that different topics can be predicted using different sources of information from vloggers’ nonverbal and verbal content, as well as comments from the audience.", "With the tremendous growth in wireless network deployment and increasing use of mobile devices, e.g., smartphones and tablets, improving energy efficiency in such devices, especially with communication-driven workloads, is critical to providing a satisfactory user experience. Studies show that signal strength plays an important role in the energy consumption of cellular data communications. While energy consumption can be minimized by accurately predicting signal strengths and reacting to them in real time, the dynamic nature of wireless environments makes signal strengths highly unpredictable. In this paper, after analyzing in detail the signal strength variation and its impact on energy consumption, we propose to use a crowdsourcing approach to optimize mobile devices’ energy efficiency by utilizing signal strength traces shared by other users' devices in cellular networks. Via a comprehensive measurement study, we observe that signal strength traces collected from different devices are pseudo-identical, and they even exhibit similar threshold-based behaviors in the relationship between signal strength and device power consumption.
Based on our observations, we propose a predictive scheduling algorithm that: (i) selects the right set of signal strength traces based on the device's location, (ii) applies a filter to smooth out signal strengths and hide abrupt changes, (iii) digitizes the signal strength into “good” and “bad” areas, and (iv) schedules transmissions based on power-throughput characteristics to optimize the transmission energy efficiency. To demonstrate the efficacy of the proposed algorithms, we prototype the crowdsourcing-based predictive scheduling algorithm on Android-based smartphones. Our experiment results from real-life driving tests demonstrate that, by leveraging others’ signal traces, mobile devices can save up to 35 percent of energy compared to conventional opportunistic scheduling, i.e., scheduling transmissions based only on instantaneous channel conditions.", "Indoor localization is of great importance for a range of pervasive applications, attracting many research efforts in the past decades. Most radio-based solutions require a process of site survey, in which radio signatures of an area of interest are annotated with their real recorded locations. Site survey involves intensive costs in manpower and time, limiting the applicable buildings of wireless localization worldwide. In this study, we investigate novel sensors integrated in modern mobile phones and leverage user motions to construct the radio map of a floor plan, which is previously obtained only by site survey. Considering user movements in a building, originally separated RSS fingerprints are geographically connected by user moving paths of locations where they are recorded, and they consequently form a high-dimensional fingerprint space, in which the distances among fingerprints are preserved. The fingerprint space is then automatically mapped to the floor plan in a stress-free form, which results in fingerprints labeled with physical locations.
On this basis, we design LiFS, an indoor localization system based on off-the-shelf WiFi infrastructure and mobile phones. LiFS is deployed in an office building covering over 1,600 m @math , and its deployment is easy and rapid since little human intervention is needed. In LiFS, the calibration of fingerprints is crowdsourced and automatic. Experiment results show that LiFS achieves comparable location accuracy to previous approaches even without site survey." ] }
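The smooth-then-digitize-then-schedule pipeline sketched in the energy-crowdsourcing abstract above can be illustrated as follows. This is our own simplified code, not the authors' algorithm; the dBm samples, window size, and threshold are made up.

```python
# Sketch of the threshold-based idea: smooth a shared signal-strength trace
# with a trailing moving average, digitize it into "good"/"bad" slots against
# a threshold, and transmit only in good slots. All values are illustrative.

def smooth(trace, window=3):
    """Trailing moving average to hide abrupt signal-strength changes."""
    out = []
    for i in range(len(trace)):
        lo = max(0, i - window + 1)
        out.append(sum(trace[lo:i + 1]) / (i + 1 - lo))
    return out

def schedule(trace, threshold=-90):
    """Indices of 'good' slots (smoothed dBm above threshold) to transmit in."""
    return [i for i, s in enumerate(smooth(trace)) if s >= threshold]

trace = [-80, -85, -100, -95, -70, -72]  # dBm samples from a shared trace
print(schedule(trace))  # skips the deep fade around index 3
```

Transmitting in high-signal slots matters because the radio spends less power per bit when the channel is good, which is the power-throughput characteristic the abstract refers to.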
1502.05599
2006144017
Given a network represented by a weighted directed graph G, we consider the problem of finding a bounded cost set of nodes S such that the influence spreading from S in G, within a given time bound, is as large as possible. The dynamic that governs the spread of influence is the following: initially only elements in S are influenced; subsequently at each round, the set of influenced elements is augmented by all nodes in the network that have a sufficiently large number of already influenced neighbors. We prove that the problem is NP-hard, even in simple networks like complete graphs and trees. We also derive a series of positive results. We present exact pseudo-polynomial time algorithms for general trees, that become polynomial time in case the trees are unweighted. This last result improves on previously published results. We also design polynomial time algorithms for general weighted paths and cycles, and for unweighted complete graphs.
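The activation dynamic described in this abstract is straightforward to simulate. The following sketch (our own illustrative code, not the paper's algorithms) also shows how the time bound limits how far influence spreads:

```python
# Sketch: seeds are influenced at round 0; afterwards, a node becomes
# influenced once its number of already-influenced neighbors reaches its
# threshold, for at most `rounds` rounds. Graph and thresholds are toy data.

def spread(neighbors, threshold, seeds, rounds):
    """neighbors[v] = set of v's neighbors; returns the influenced set."""
    active = set(seeds)
    for _ in range(rounds):
        newly = {
            v for v in neighbors
            if v not in active and len(neighbors[v] & active) >= threshold[v]
        }
        if not newly:   # fixed point: no further spread possible
            break
        active |= newly
    return active

# Path a-b-c-d with threshold 1 everywhere: seeding 'a' influences everyone
# within 3 rounds, but only a, b, c if the time bound is 2 rounds.
g = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
t = {v: 1 for v in g}
print(spread(g, t, {"a"}, rounds=2))  # influences a, b, c only
print(spread(g, t, {"a"}, rounds=3))  # influences all four nodes
```

The time bound is what distinguishes this variant from the usual unbounded activation process: nodes reachable only through long influence chains are lost when the bound is tight.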
It is clear that the @math -MIS problem represents an abstraction of the viral marketing scenario if one makes the reasonable assumption that an individual decides to adopt the products if a suitable number of his/her friends have adopted them. Analogously, the @math -MIS problem can describe other diffusion problems arising in sociological, economic, and biological networks (again see @cite_13 ). Therefore, it comes as no surprise that special cases of our problem (or variants thereof) have recently attracted the attention of the algorithmic community. We shall limit ourselves here to discussing the work that is most directly related to ours, and refer the reader to the monographs @cite_35 @cite_13 for an excellent overview of the area. We just mention that our results also seem to be relevant to other areas, like dynamic monopolies @cite_21 @cite_15 , for instance.
{ "cite_N": [ "@cite_35", "@cite_15", "@cite_21", "@cite_13" ], "mid": [ "", "2066862787", "1999757901", "19838944" ], "abstract": [ "", "This paper provides an overview of recent developments concerning the process of local majority voting in graphs, and its basic properties, from graph theoretic and algorithmic standpoints.", "We consider a well-known distributed colouring game played on a simple connected graph: initially, each vertex is coloured black or white; at each round, each vertex simultaneously recolours itself by the colour of the simple (strong) majority of its neighbours. A set of vertices M is said to be a dynamo, if starting the game with only the vertices of M coloured black, the computation eventually reaches an all-black configuration. The importance of this game follows from the fact that it models the spread of faults in point-to-point systems with majority-based voting; in particular, dynamos correspond to those sets of initial failures which will lead the entire system to fail. Investigations on dynamos have been extensive but restricted to establishing tight bounds on the size (i.e., how small a dynamic monopoly might be). In this paper we start to study dynamos systematically with respect to both the size and the time (i.e., how many rounds are needed to reach an all-black configuration) in various models and topologies. We derive tight tradeoffs between the size and the time for a number of regular graphs, including rings, complete d-ary trees, tori, wrapped butterflies, cube connected cycles and hypercubes. In addition, we determine optimal size bounds of irreversible dynamos for butterflies and shuffle-exchange using simple majority and for DeBruijn using strong majority rules. Finally, we make some observations concerning irreversible versus reversible monotone models and slow complete computations from minimal dynamos.", "Over the past decade there has been a growing public fascination with the complex connectedness of modern society.
This connectedness is found in many incarnations: in the rapid growth of the Internet, in the ease with which global communication takes place, and in the ability of news and information as well as epidemics and financial crises to spread with surprising speed and intensity. These are phenomena that involve networks, incentives, and the aggregate behavior of groups of people; they are based on the links that connect us and the ways in which our decisions can have subtle consequences for others. This introductory undergraduate textbook takes an interdisciplinary look at economics, sociology, computing and information science, and applied mathematics to understand networks and behavior. It describes the emerging field of study that is growing at the interface of these areas, addressing fundamental questions about how the social, economic, and technological worlds are connected." ] }
1502.05599
2006144017
Given a network represented by a weighted directed graph G, we consider the problem of finding a bounded cost set of nodes S such that the influence spreading from S in G, within a given time bound, is as large as possible. The dynamic that governs the spread of influence is the following: initially only elements in S are influenced; subsequently at each round, the set of influenced elements is augmented by all nodes in the network that have a sufficiently large number of already influenced neighbors. We prove that the problem is NP-hard, even in simple networks like complete graphs and trees. We also derive a series of positive results. We present exact pseudo-polynomial time algorithms for general trees, that become polynomial time in case the trees are unweighted. This last result improves on previously published results. We also design polynomial time algorithms for general weighted paths and cycles, and for unweighted complete graphs.
The first authors to study problems of the spread of influence in networks from an algorithmic point of view were Kempe et al. @cite_36 @cite_25 . However, they were mostly interested in networks with randomly chosen thresholds. Chen @cite_28 studied the following minimization problem: given an unweighted graph @math and fixed thresholds @math , for each vertex @math in @math , find a set of minimum size that eventually influences all (or a fixed fraction of) the nodes of @math . He proved a strong inapproximability result that makes the existence of an algorithm with an approximation factor better than @math unlikely. Chen's result stimulated a series of papers @cite_16 @cite_4 @cite_31 @cite_32 @cite_33 @cite_27 @cite_34 @cite_44 @cite_7 @cite_11 @cite_39 @cite_2 that isolated interesting cases in which the problem (and variants thereof) becomes tractable.
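Chen's minimization problem described above can be made concrete with a brute-force sketch (our own illustrative code, exponential in the number of vertices and thus only for tiny graphs, consistent with the strong inapproximability just mentioned):

```python
# Sketch of the minimization problem: the smallest seed set whose activation
# eventually influences every node, found by exhaustive search over subsets.
# Graph and thresholds are toy data; real instances are intractable this way.
from itertools import combinations

def activates_all(neighbors, threshold, seeds):
    """Run the activation process to its fixed point; True if all activate."""
    active = set(seeds)
    while True:
        newly = {v for v in neighbors
                 if v not in active and len(neighbors[v] & active) >= threshold[v]}
        if not newly:
            return len(active) == len(neighbors)
        active |= newly

def min_target_set(neighbors, threshold):
    """Smallest seed set that activates the whole graph, by brute force."""
    nodes = list(neighbors)
    for k in range(1, len(nodes) + 1):
        for seeds in combinations(nodes, k):
            if activates_all(neighbors, threshold, set(seeds)):
                return set(seeds)

# Triangle where every node needs 2 active neighbors: a single seed cannot
# spread at all, so the optimum is a 2-node seed set.
g = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b"}}
t = {"a": 2, "b": 2, "c": 2}
print(min_target_set(g, t))  # a seed set of size 2
```

The triangle example also shows why high thresholds make the problem hard: no vertex can be activated for free, so the search cannot be pruned by simple greedy arguments.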
{ "cite_N": [ "@cite_4", "@cite_33", "@cite_7", "@cite_36", "@cite_28", "@cite_32", "@cite_34", "@cite_39", "@cite_44", "@cite_27", "@cite_2", "@cite_31", "@cite_16", "@cite_25", "@cite_11" ], "mid": [ "2119218916", "2170002160", "1582254841", "", "", "2030703286", "2062667824", "2093340343", "2964099023", "2058157772", "1677630274", "2148732839", "2097938283", "", "" ], "abstract": [ "In this paper, we consider the problem of maximizing the spread of influence through a social network. Given a graph with a threshold value thr(v) attached to each vertex v, the spread of influence is modeled as follows: A vertex v becomes “active” (influenced) if at least thr(v) of its neighbors are active. In the corresponding optimization problem the objective is then to find a fixed number k of vertices to activate such that the number of activated vertices at the end of the propagation process is maximum. We show that this problem is strongly inapproximable in time f(k)·n^{O(1)}, for some function f, even for very restrictive thresholds. In the case that the threshold of each vertex equals its degree, we prove that the problem is inapproximable in polynomial time and it becomes r(n)-approximable in time f(k)·n^{O(1)}, for some function f, for any strictly increasing function r. Moreover, we show that the decision version parameterized by k is W[1]-hard but becomes fixed-parameter tractable on bounded-degree graphs.", "In this paper we consider a fundamental problem in the area of viral marketing, called the Target Set Selection problem. In a viral marketing setting, social networks are modeled by graphs with potential customers of a new product as vertices and friend relationships as edges, where each vertex @math is assigned a threshold value @math . The thresholds represent the different latent tendencies of customers (vertices) to buy the new product when their friends (neighbors) do.
Consider a repetitive process on social network @math where each vertex @math is associated with two states, active and inactive, which indicate whether @math is persuaded into buying the new product. Suppose we are given a target set @math . Initially, all vertices in @math are inactive. At time step 0, we choose all vertices in @math to become active. Then, at every time step @math , all vertices that were active in time step @math remain active, and we activate any vertex @math if at least @math of its neighbors were active at time step @math . The activation process terminates when no more vertices can get activated. We are interested in the following optimization problem, called Target Set Selection: finding a target set @math of smallest possible size that activates all vertices of @math . There is an important and well-studied threshold called the strict majority threshold, where for every vertex @math in @math we have @math and @math is the degree of @math in @math . In this paper, we consider the Target Set Selection problem under strict majority thresholds and focus on three popular regular network structures: cycle permutation graphs, generalized Petersen graphs and torus cordalis.", "We consider the following activation process in undirected graphs: a vertex is active either if it belongs to a set of initially activated vertices or if at some point it has at least r active neighbors, where r > 1 is the activation threshold. A contagious set is a set whose activation results in the entire graph being active. Given a graph G, let m(G, r) be the minimal size of a contagious set. It is known that for every d-regular or nearly d-regular graph on n vertices, m(G, r) ≤ O(nr/d). We consider such graphs that additionally have expansion properties, parameterized by the spectral gap and/or the girth of the graphs.
The general flavor of our results is that sufficiently strong expansion properties imply that m(G, 2) ≤ O(n/d²) (and more generally, m(G, r) ≤ O(n/d^{r/(r-1)})). In addition, we demonstrate that rather weak assumptions on the girth and/or the spectral gap suffice in order to imply that m(G, 2) ≤ O(n log d / d²). For example, we show this for graphs of girth at least 7, and for graphs with λ(G) < (1 − ε)d, provided the graph has no 4-cycles. Our results are algorithmic, entailing simple and efficient algorithms for selecting contagious sets.", "", "", "Given a graph G, a function f:V(G)->Z, and an initial 0/1-vertex-labelling c_1:V(G)->{0,1}, we study an iterative 0/1-vertex-labelling process on G where in each round every vertex v never changes its label from 1 to 0, and changes its label from 0 to 1 if at least f(v) neighbours have label 1. Such processes model opinion/disease spreading or fault propagation and have been studied under names such as irreversible threshold/majority processes in a large variety of contexts. Our contributions concern computational aspects related to the minimum cardinality irr_f(G) of sets of vertices with initial label 1 such that during the process on G all vertices eventually change their label to 1. Such sets are known as irreversible conversion sets, dynamic irreversible monopolies, or catastrophic fault patterns. Answering a question posed by Dreyer and Roberts [P.A. Dreyer Jr., F.S. Roberts, Irreversible k-threshold processes: graph-theoretical threshold models of the spread of disease and of opinion, Discrete Appl. Math. 157 (2009) 1615-1627], we prove a hardness result for irr_f(G) where f(v)=2 for every v ∈ V(G). Furthermore, we describe a general reduction principle for irr_f(G), which leads to efficient algorithms for graphs with simply structured blocks such as trees and chordal graphs.", "In this paper we consider a fundamental problem in the area of viral marketing, called the Target Set Selection problem.
We study the problem when the underlying graph is a block-cactus graph, a chordal graph or a Hamming graph. We show that if G is a block-cactus graph, then the Target Set Selection problem can be solved in linear time, which generalizes Chen’s result (Discrete Math. 23:1400–1415, 2009) for trees, and the time complexity is much better than that of the algorithm in Ben- (Discrete Optim., 2010) (for bounded treewidth graphs) when restricted to block-cactus graphs. We show that if the underlying graph G is a chordal graph with thresholds θ(v)≤2 for each vertex v in G, then the problem can be solved in linear time. For a Hamming graph G having thresholds θ(v)=2 for each vertex v of G, we precisely determine an optimal target set S for (G,θ). These results partially answer an open problem raised by Dreyer and Roberts (Discrete Appl. Math. 157:1615–1627, 2009).", "", "Let @math be a graph with a threshold function @math such that @math for every vertex @math of @math , where @math is the degree of @math in @math . Suppose we are given a target set @math . This paper considers the following repetitive process on @math . At time step @math the vertices of @math are colored black and the other vertices are colored white. After that, at each time step @math , the colors of white vertices (if any) are updated according to the following rule. All white vertices @math that have at least @math black neighbors at time step @math are colored black, and the colors of the other vertices do not change. The process runs until no more white vertices can update colors from white to black. The following optimization problem is called Target Set Selection: Find a target set @math of smallest possible size such that all vertices in @math are black at the end of the process.
Such an @math is called an optimal target set for @math under the th...", "Target Set Selection, which is a prominent NP-hard problem occurring in social network analysis and distributed computing, is notoriously hard both in terms of achieving useful polynomial-time approximation as well as fixed-parameter algorithms. Given an undirected graph, the task is to select a minimum number of vertices into a "target set" such that all other vertices will become active in the course of a dynamic process (which may go through several activation rounds). A vertex, equipped with a threshold value t, becomes active once at least t of its neighbors are active; initially, only the target set vertices are active. We contribute further insights into the existence of islands of tractability for Target Set Selection by spotting new parameterizations characterizing some sparse graphs as well as some "cliquish" graphs and developing corresponding fixed-parameter tractability and (parameterized) hardness results. In particular, we demonstrate that upper-bounding the thresholds by a constant may significantly alleviate the search for efficiently solvable, but still meaningful special cases of Target Set Selection.", "Let G be a graph and τ:V(G)->N be an assignment of thresholds to the vertices of G. A subset of vertices D is said to be a dynamic monopoly (or simply dynamo) if the vertices of G can be partitioned into subsets D_0, D_1, ..., D_k such that D_0 = D and for any i=1,...,k-1 each vertex v in D_{i+1} has at least τ(v) neighbors in D_0 ∪ ... ∪ D_i. Dynamic monopolies are in fact modeling the irreversible spread of influence such as disease or belief in social networks. We denote the smallest size of any dynamic monopoly of G, with a given threshold assignment, by dyn(G). In this paper, we first define the concept of a resistant subgraph and show its relationship with dynamic monopolies.
Then we obtain some lower and upper bounds for the smallest size of dynamic monopolies in graphs with different types of thresholds. Next we introduce dynamo-unbounded families of graphs and prove some related results. We also define the concept of a homogeneous society, that is, a graph with probabilistic thresholds satisfying some conditions, and obtain a bound for the smallest size of its dynamos. Finally, we consider dynamic monopolies of line graphs and obtain some bounds for their sizes and determine the exact values in some special cases.", "In this paper we study the Target Set Selection problem proposed by Kempe, Kleinberg, and Tardos; a problem which gives a nice clean combinatorial formulation for many applications arising in economy, sociology, and medicine. Its input is a graph with vertex thresholds, the social network, and the goal is to find a subset of vertices, the target set, that ''activates'' a pre-specified number of vertices in the graph. Activation of a vertex is defined via a so-called activation process as follows: Initially, all vertices in the target set become active. Then at each step i of the process, each vertex gets activated if the number of its active neighbors at iteration i-1 exceeds its threshold. The activation process is ''monotone'' in the sense that once a vertex is activated, it remains active for the entire process. Our contribution is as follows: First, we present an algorithm for Target Set Selection running in n^{O(w)} time, for graphs with n vertices and treewidth bounded by w. This algorithm can be adopted to much more general settings, including the case of directed graphs, weighted edges, and weighted vertices. On the other hand, we also show that it is highly unlikely to find an n^{o(w)} time algorithm for Target Set Selection, as this would imply a sub-exponential algorithm for all problems in SNP. 
Together with our upper bound result, this shows that the treewidth parameter determines the complexity of Target Set Selection to a large extent, and should be taken into consideration when tackling this problem in any scenario. In the last part of the paper we also deal with the ''non-monotone'' variant of Target Set Selection, and show that this problem becomes #P-hard on graphs with edge weights.", "The adoption of everyday decisions in public affairs, fashion, movie-going, and consumer behavior is now thoroughly believed to migrate in a population through an influential network. The same diffusion process when being imitated by intention is called viral marketing. This process can be modeled by a (directed) graph G=(V,E) with a threshold t(v) for every vertex v ∈ V, where v becomes active once at least t(v) of its (in-)neighbors are already active. A Perfect Target Set is a set of vertices whose activation will eventually activate the entire graph, and the Perfect Target Set Selection Problem (PTSS) asks for the minimum such initial set. It is known (Chen (2008)) that PTSS is hard to approximate, even for some special cases such as bounded-degree graphs, or majority thresholds. We propose a combinatorial model for this dynamic activation process, and use it to represent PTSS and its variants by linear integer programs. This allows one to use standard integer programming solvers for solving small-size PTSS instances. We also show combinatorial lower and upper bounds on the size of the minimum Perfect Target Set. Our upper bound implies that there are always Perfect Target Sets of size at most |V|/2 and 2|V|/3 under majority and strict majority thresholds, respectively, both in directed and undirected graphs. This improves the bounds of 0.727|V| and 0.7732|V| found recently by Chang and Lyuu (2010) for majority and strict majority thresholds in directed graphs, and matches their bound under majority thresholds in undirected graphs. 
Furthermore, our proof is much simpler, and we observe that some of these bounds are tight. One interesting and perhaps surprising implication of our lower bound for undirected graphs is that it is easy to get a constant factor approximation for PTSS for “relatively balanced” graphs (e.g., bounded-degree graphs, nearly regular graphs) with a “more than majority” threshold (that is, t(v) ≥ α·deg(v) for every v ∈ V and some constant α > 1/2), whereas no polylogarithmic approximation exists for “more than majority” graphs.", "", "" ] }
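The threshold activation process that recurs in the abstracts above can be made concrete with a short simulation. The sketch below runs the monotone process to its fixed point; the adjacency-dict representation and the vertex names are illustrative assumptions, not taken from any of the cited papers:

```python
def activate(graph, thresholds, target_set):
    """Run the monotone threshold activation process to a fixed point.

    graph      -- dict mapping each vertex to the set of its neighbors
    thresholds -- dict mapping each vertex v to its threshold t(v)
    target_set -- the initially active ("black") vertices
    Returns the set of vertices active once nothing more can change.
    """
    active = set(target_set)
    changed = True
    while changed:
        changed = False
        for v in graph:
            # A white vertex turns black once enough neighbors are black.
            if v not in active and len(graph[v] & active) >= thresholds[v]:
                active.add(v)
                changed = True
    return active

# A path a-b-c with unit thresholds is fully activated from {a} alone.
graph = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
thresholds = {"a": 1, "b": 1, "c": 1}
print(activate(graph, thresholds, {"a"}))  # all three vertices
```

Because activation is monotone, the round-by-round and vertex-by-vertex update orders reach the same fixed point, which keeps the sketch simple.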
1502.05983
2096834990
In this paper we extend the knowledge on the problem of empirically searching for sorting networks of minimal depth. We present new search space pruning techniques for the last four levels of a candidate sorting network by considering only the output set representation of a network. We present an algorithm for checking whether an @math -input sorting network of depth @math exists by considering, at each level, the itemsets that are minimal up to permutation and reflection, and using the pruning at the last four levels. We experimentally evaluated this algorithm to find the optimal depth sorting networks for all @math .
Parberry @cite_1 presented a computer-assisted proof of the minimal depth of a nine-input sorting network. He significantly reduced the number of level candidates for the first two levels, in comparison to the naive approach, by exploiting symmetries of the networks (referred to as first and second normal form @cite_1 ). For the remaining network levels he proved that we need only consider those with a maximal number of comparators. Parberry also found a method, referred to as ``The Heuristic'', to significantly reduce the search space for the second-to-last level of the network. He used a CRAY supercomputer to test all nine-input comparator networks of depth six that are in second normal form and pass ``The Heuristic'' check, and verified experimentally that none of them are sorting networks. It is an immediate consequence of his result that there does not exist a ten-input sorting network of depth six. The pruning techniques presented in this paper are at least as good as Parberry's for the last two levels, and for the third and fourth last levels our work is novel.
{ "cite_N": [ "@cite_1" ], "mid": [ "2067081749" ], "abstract": [ "It is demonstrated that there is no nine-input sorting network of depth six. The proof was obtained by executing on a supercomputer a branch-and-bound algorithm which constructs and tests a critical subset of all possible candidates. Such proofs can be classified as experimental science, rather than mathematics. In keeping with the paradigms of experimental science, a high-level description of the experiment and analysis of the result are given." ] }
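The "output set" representation that the sorting-network records above rely on can be sketched directly. By the standard zero-one principle, a comparator network sorts all inputs iff it sorts all binary inputs, so checking its binary output set suffices; the comparator-list encoding below is an illustrative convention, not Parberry's actual data structure:

```python
from itertools import product

def apply_network(network, word):
    """Apply a comparator network, given as a list of channel pairs
    (i, j) with i < j, to a tuple of values."""
    w = list(word)
    for i, j in network:
        if w[i] > w[j]:
            w[i], w[j] = w[j], w[i]
    return tuple(w)

def output_set(network, n):
    """Binary output set of an n-channel network: its image on all
    0/1 inputs. This is the representation the pruning operates on."""
    return {apply_network(network, bits) for bits in product((0, 1), repeat=n)}

def is_sorting_network(network, n):
    """Zero-one principle: the network sorts every input iff every
    binary output is sorted, i.e. the output set consists of exactly
    the n+1 sorted 0/1 words."""
    return all(out == tuple(sorted(out)) for out in output_set(network, n))

# A classic 3-input sorter of depth 3: comparators (0,1), (1,2), (0,1).
print(is_sorting_network([(0, 1), (1, 2), (0, 1)], 3))  # True
print(len(output_set([(0, 1), (1, 2), (0, 1)], 3)))     # 4
```

The exponential sweep over all 2^n binary inputs is only meant to make the representation tangible; the cited searches prune candidates long before enumerating outputs this naively.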
1502.05983
2096834990
In this paper we extend the knowledge on the problem of empirically searching for sorting networks of minimal depth. We present new search space pruning techniques for the last four levels of a candidate sorting network by considering only the output set representation of a network. We present an algorithm for checking whether an @math -input sorting network of depth @math exists by considering, at each level, the itemsets that are minimal up to permutation and reflection, and using the pruning at the last four levels. We experimentally evaluated this algorithm to find the optimal depth sorting networks for all @math .
Codish @cite_0 presents safe pruning techniques for the last layer of a sorting network that are aimed at improving algorithms that use a SAT encoding of sorting networks and transform the problem into a SAT problem. Parberry @cite_1 had already presented an 'on-the-fly' method of constructing the last layer. Codish's work addresses the case when the current comparator network is not known by the algorithm; hence they devised conditions that suit this specific way of encoding the optimal-depth sorting network problem as a SAT problem. If the current comparator network (or its full output set) is known by the algorithm, then Parberry's result is much stronger than Codish's. In this paper, we focus on the case when the network is known and hence present improvements over Parberry's technique.
{ "cite_N": [ "@cite_0", "@cite_1" ], "mid": [ "1662328649", "2067081749" ], "abstract": [ "This paper studies properties of the back end of a sorting network and illustrates the utility of these in the search for networks of optimal size or depth. All previous works focus on properties of the front end of networks and on how to apply these to break symmetries in the search. The new properties help shed understanding on how sorting networks sort and speed-up solvers for both optimal size and depth by an order of magnitude.", "It is demonstrated that there is no nine-input sorting network of depth six. The proof was obtained by executing on a supercomputer a branch-and-bound algorithm which constructs and tests a critical subset of all possible candidates. Such proofs can be classified as experimental science, rather than mathematics. In keeping with the paradigms of experimental science, a high-level description of the experiment and analysis of the result are given." ] }
1502.05983
2096834990
In this paper we extend the knowledge on the problem of empirically searching for sorting networks of minimal depth. We present new search space pruning techniques for the last four levels of a candidate sorting network by considering only the output set representation of a network. We present an algorithm for checking whether an @math -input sorting network of depth @math exists by considering, at each level, the itemsets that are minimal up to permutation and reflection, and using the pruning at the last four levels. We experimentally evaluated this algorithm to find the optimal depth sorting networks for all @math .
Bundala @cite_7 presented a computer-assisted proof of the optimal depths of networks with eleven to sixteen (inclusive) inputs. He also managed to significantly reduce the number of candidates for the second layer in comparison to Parberry's approach, by considering only networks whose outputs are minimal representatives up to permutation and reflection. Similar work for the second level is also presented by Michael Codish in @cite_3 . Bundala's algorithm for finding sorting networks of optimal depth is based on a SAT encoding of the optimal-depth sorting network problem, which uses the set of candidate two-layer networks as a fixed entry point. Some extra pruning techniques are presented, and a state-of-the-art SAT solver is used to find the optimal depth sorting networks for all @math .
{ "cite_N": [ "@cite_3", "@cite_7" ], "mid": [ "2155720305", "1726079515" ], "abstract": [ "Previous work identifying depth-optimal n-channel sorting networks for 9 ≤ n ≤ 16 is based on exploiting symmetries of the first two layers. However, the naive generate-and-test approach typically applied does not scale. This paper revisits the problem of generating two-layer prefixes modulo symmetries. An improved notion of symmetry is provided and a novel technique based on regular languages and graph isomorphism is shown to generate the set of non-symmetric representations. An empirical evaluation demonstrates that the new method outperforms the generate-and-test approach by orders of magnitude and easily scales until n = 40.", "We prove depth optimality of sorting networks from \"The Art of Computer Programming\". Sorting networks possess symmetry that can be used to generate a few representatives. These representatives can be efficiently encoded using regular expressions. We construct SAT formulas whose unsatisfiability is sufficient to show optimality. The resulting algorithm is orders of magnitude faster than prior work on small instances. We solve a 40-year-old open problem on depth optimality of sorting networks. In 1973, Donald E. Knuth detailed sorting networks of the smallest depth known for n ≤ 16 inputs, quoting optimality for n ≤ 8 (Volume 3 of \"The Art of Computer Programming\"). In 1989, Parberry proved optimality of networks with 9 ≤ n ≤ 10 inputs. We present a general technique for obtaining such results, proving optimality of the remaining open cases of 11 ≤ n ≤ 16 inputs. Exploiting symmetry, we construct a small set R_n of two-layer networks such that: if there is a depth-k sorting network on n inputs, then there is one whose first layers are in R_n. For each network in R_n, we construct a propositional formula whose satisfiability is necessary for the existence of a depth-k sorting network. 
Using an off-the-shelf SAT solver we prove optimality of the sorting networks listed by Knuth. For n ≤ 10 inputs, our algorithm is orders of magnitude faster than prior ones." ] }
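The symmetry reduction exploited in the records above, considering candidate prefixes only up to a relabeling of channels, can be illustrated with a brute-force canonicalization of binary output sets. This is a sketch of the idea only; the cited papers also factor out reflection and use far more scalable techniques than enumerating all n! permutations:

```python
from itertools import permutations

def canonical_form(outputs, n):
    """Lexicographically smallest channel-permuted copy of a binary
    output set. Two comparator networks are equivalent up to a
    permutation of channels iff their output sets share a canonical
    form. Feasible only for tiny n; illustrative, not the papers'
    actual algorithms."""
    best = None
    for perm in permutations(range(n)):
        candidate = tuple(sorted(tuple(w[p] for p in perm) for w in outputs))
        if best is None or candidate < best:
            best = candidate
    return best

# The single comparators (0,1) and (1,2) on three channels are the same
# network up to relabeling, so their output sets canonicalize identically.
out_01 = {(0,0,0), (0,0,1), (0,1,0), (0,1,1), (1,1,0), (1,1,1)}
out_12 = {(0,0,0), (0,0,1), (0,1,1), (1,0,0), (1,0,1), (1,1,1)}
print(canonical_form(out_01, 3) == canonical_form(out_12, 3))  # True
```

Deduplicating candidate layers by canonical form is exactly what shrinks the fixed set of two-layer entry points fed to the SAT encoding.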
1502.05983
2096834990
In this paper we extend the knowledge on the problem of empirically searching for sorting networks of minimal depth. We present new search space pruning techniques for the last four levels of a candidate sorting network by considering only the output set representation of a network. We present an algorithm for checking whether an @math -input sorting network of depth @math exists by considering, at each level, the itemsets that are minimal up to permutation and reflection, and using the pruning at the last four levels. We experimentally evaluated this algorithm to find the optimal depth sorting networks for all @math .
Our algorithm and all related techniques were developed independently of those of Bundala @cite_7 and Codish @cite_3 @cite_0 . Using our program, we manage to prove the optimality of sorting networks for all @math . The approach presented in this paper differs significantly from Bundala's: instead of using a generic SAT solver, our method studies details of the structure of sorting networks and gives a much richer answer to the yes/no question being asked ('does there exist an @math -input sorting network of depth @math ?'). We give more insight into the candidates that need to be considered at each level --- also referred to as a complete set of filters @math for comparator networks of @math @cite_4 .
{ "cite_N": [ "@cite_0", "@cite_3", "@cite_4", "@cite_7" ], "mid": [ "1662328649", "2155720305", "1769744536", "1726079515" ], "abstract": [ "This paper studies properties of the back end of a sorting network and illustrates the utility of these in the search for networks of optimal size or depth. All previous works focus on properties of the front end of networks and on how to apply these to break symmetries in the search. The new properties help shed understanding on how sorting networks sort and speed-up solvers for both optimal size and depth by an order of magnitude.", "Previous work identifying depth-optimal n-channel sorting networks for 9 ≤ n ≤ 16 is based on exploiting symmetries of the first two layers. However, the naive generate-and-test approach typically applied does not scale. This paper revisits the problem of generating two-layer prefixes modulo symmetries. An improved notion of symmetry is provided and a novel technique based on regular languages and graph isomorphism is shown to generate the set of non-symmetric representations. An empirical evaluation demonstrates that the new method outperforms the generate-and-test approach by orders of magnitude and easily scales until n = 40.", "A complete set of filters @math for the optimal-depth @math -input sorting network problem is such that if there exists an @math -input sorting network of depth @math then there exists one of the form @math for some @math . Previous work on the topic presents a method for finding complete set of filters @math and @math that consists only of networks of depths one and two respectively, whose outputs are minimal and representative up to permutation and reflection. We present a novel practical approach for finding a complete set of filters @math containing only networks of depth three whose outputs are minimal and representative up to permutation and reflection. 
In previous work, we have developed a highly efficient algorithm for finding extremal sets (i.e., outputs of comparator networks; itemsets) up to permutation. In this paper we present a modification to this algorithm that identifies the representative itemsets up to permutation and reflection. Hence, the practical approach presented here is the successful combination of known theory and practice that we apply to the domain of sorting networks. For all @math , we empirically compute the complete set of filters @math of the representative minimal up to permutation and reflection @math -input networks of depth three.", "We prove depth optimality of sorting networks from \"The Art of Computer Programming\". Sorting networks possess symmetry that can be used to generate a few representatives. These representatives can be efficiently encoded using regular expressions. We construct SAT formulas whose unsatisfiability is sufficient to show optimality. The resulting algorithm is orders of magnitude faster than prior work on small instances. We solve a 40-year-old open problem on depth optimality of sorting networks. In 1973, Donald E. Knuth detailed sorting networks of the smallest depth known for n ≤ 16 inputs, quoting optimality for n ≤ 8 (Volume 3 of \"The Art of Computer Programming\"). In 1989, Parberry proved optimality of networks with 9 ≤ n ≤ 10 inputs. We present a general technique for obtaining such results, proving optimality of the remaining open cases of 11 ≤ n ≤ 16 inputs. Exploiting symmetry, we construct a small set R_n of two-layer networks such that: if there is a depth-k sorting network on n inputs, then there is one whose first layers are in R_n. For each network in R_n, we construct a propositional formula whose satisfiability is necessary for the existence of a depth-k sorting network. Using an off-the-shelf SAT solver we prove optimality of the sorting networks listed by Knuth. 
For n ≤ 10 inputs, our algorithm is orders of magnitude faster than prior ones." ] }
1502.05983
2096834990
In this paper we extend the knowledge on the problem of empirically searching for sorting networks of minimal depth. We present new search space pruning techniques for the last four levels of a candidate sorting network by considering only the output set representation of a network. We present an algorithm for checking whether an @math -input sorting network of depth @math exists by considering, at each level, the itemsets that are minimal up to permutation and reflection, and using the pruning at the last four levels. We experimentally evaluated this algorithm to find the optimal depth sorting networks for all @math .
Marinov @cite_5 presented a highly efficient practical algorithm for finding the minimal representative itemsets over a domain @math up to a permutation of @math . This algorithm can be applied to reduce the number of candidates for the second layer, as described by Bundala and Codish, although @cite_7 and @cite_3 present an extra pruning method that uses reflection.
{ "cite_N": [ "@cite_5", "@cite_3", "@cite_7" ], "mid": [ "2120457683", "2155720305", "1726079515" ], "abstract": [ "The minimal sets within a collection of sets are defined as the ones that do not have a proper subset within the collection, and the maximal sets are the ones that do not have a proper superset within the collection. Identifying extremal sets is a fundamental problem with a wide range of applications in SAT solvers, data mining, and social network analysis. In this article, we present two novel improvements of the high-quality extremal set identification algorithm, AMS-Lex, described by Bayardo and Panda. The first technique uses memoization to improve the execution time of the single-threaded variant of the AMS-Lex, while our second improvement uses parallel programming methods. In a subset of the presented experiments, our memoized algorithm executes more than 400 times faster than the highly efficient publicly available implementation of AMS-Lex. Moreover, we show that our modified algorithm's speedup is not bounded above by a constant and that it increases as the length of the common prefixes in successive input itemsets increases. We provide experimental results using both real-world and synthetic datasets, and show our multithreaded variant algorithm outperforming AMS-Lex by 3 to 6 times. We find that on synthetic input datasets, when executed using 16 CPU cores of a 32-core machine, our multithreaded program executes about as fast as the state-of-the-art parallel GPU-based program using an NVIDIA GTX 580 graphics processing unit.", "Previous work identifying depth-optimal n-channel sorting networks for 9 ≤ n ≤ 16 is based on exploiting symmetries of the first two layers. However, the naive generate-and-test approach typically applied does not scale. This paper revisits the problem of generating two-layer prefixes modulo symmetries. 
An improved notion of symmetry is provided and a novel technique based on regular languages and graph isomorphism is shown to generate the set of non-symmetric representations. An empirical evaluation demonstrates that the new method outperforms the generate-and-test approach by orders of magnitude and easily scales until n = 40.", "We prove depth optimality of sorting networks from \"The Art of Computer Programming\". Sorting networks possess symmetry that can be used to generate a few representatives. These representatives can be efficiently encoded using regular expressions. We construct SAT formulas whose unsatisfiability is sufficient to show optimality. The resulting algorithm is orders of magnitude faster than prior work on small instances. We solve a 40-year-old open problem on depth optimality of sorting networks. In 1973, Donald E. Knuth detailed sorting networks of the smallest depth known for n ≤ 16 inputs, quoting optimality for n ≤ 8 (Volume 3 of \"The Art of Computer Programming\"). In 1989, Parberry proved optimality of networks with 9 ≤ n ≤ 10 inputs. We present a general technique for obtaining such results, proving optimality of the remaining open cases of 11 ≤ n ≤ 16 inputs. Exploiting symmetry, we construct a small set R_n of two-layer networks such that: if there is a depth-k sorting network on n inputs, then there is one whose first layers are in R_n. For each network in R_n, we construct a propositional formula whose satisfiability is necessary for the existence of a depth-k sorting network. Using an off-the-shelf SAT solver we prove optimality of the sorting networks listed by Knuth. For n ≤ 10 inputs, our algorithm is orders of magnitude faster than prior ones." ] }
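The extremal-set computation at the core of the AMS-Lex line of work above can be illustrated with a naive quadratic filter for minimal sets; the cited algorithms solve exactly this problem, but far more efficiently on large inputs:

```python
def minimal_sets(collection):
    """Return the minimal sets of a collection: those that contain no
    proper subset from the collection. Quadratic brute force for
    illustration only; AMS-Lex-style algorithms are the practical tool.
    """
    sets = [frozenset(s) for s in collection]
    # Keep s unless some other member t is a proper subset of s.
    return [s for s in sets if not any(t < s for t in sets)]

# {1} eliminates {1,2} and {1,2,3}; {2,3} also eliminates {1,2,3}.
print(minimal_sets([{1, 2}, {1}, {2, 3}, {1, 2, 3}]))
```

Maximal sets are obtained symmetrically by replacing the proper-subset test `t < s` with a proper-superset test `t > s`.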
1502.05983
2096834990
In this paper we extend the knowledge on the problem of empirically searching for sorting networks of minimal depth. We present new search space pruning techniques for the last four levels of a candidate sorting network by considering only the output set representation of a network. We present an algorithm for checking whether an @math -input sorting network of depth @math exists by considering, at each level, the itemsets that are minimal up to permutation and reflection, and using the pruning at the last four levels. We experimentally evaluated this algorithm to find the optimal depth sorting networks for all @math .
Marinov @cite_4 presented a modified version of @cite_5 that finds the comparator networks whose outputs are minimal representatives up to permutation and reflection @math for the first three layers. This significantly reduces the search-space size for any @math . This technique can be easily adapted to find the sets @math for any depth @math , which means that we can further reduce the size of the search space using Marinov's existing technique. Marinov's technique is also applicable to the SAT encoding of a network and would speed up Bundala's algorithm by fixing the first three layers of a network rather than just the first two, as described in @cite_7 .
{ "cite_N": [ "@cite_5", "@cite_4", "@cite_7" ], "mid": [ "2120457683", "1769744536", "1726079515" ], "abstract": [ "The minimal sets within a collection of sets are defined as the ones that do not have a proper subset within the collection, and the maximal sets are the ones that do not have a proper superset within the collection. Identifying extremal sets is a fundamental problem with a wide range of applications in SAT solvers, data mining, and social network analysis. In this article, we present two novel improvements of the high-quality extremal set identification algorithm, AMS-Lex, described by Bayardo and Panda. The first technique uses memoization to improve the execution time of the single-threaded variant of the AMS-Lex, while our second improvement uses parallel programming methods. In a subset of the presented experiments, our memoized algorithm executes more than 400 times faster than the highly efficient publicly available implementation of AMS-Lex. Moreover, we show that our modified algorithm's speedup is not bounded above by a constant and that it increases as the length of the common prefixes in successive input itemsets increases. We provide experimental results using both real-world and synthetic datasets, and show our multithreaded variant algorithm outperforming AMS-Lex by 3 to 6 times. We find that on synthetic input datasets, when executed using 16 CPU cores of a 32-core machine, our multithreaded program executes about as fast as the state-of-the-art parallel GPU-based program using an NVIDIA GTX 580 graphics processing unit.", "A complete set of filters @math for the optimal-depth @math -input sorting network problem is such that if there exists an @math -input sorting network of depth @math then there exists one of the form @math for some @math . 
Previous work on the topic presents a method for finding complete set of filters @math and @math that consists only of networks of depths one and two respectively, whose outputs are minimal and representative up to permutation and reflection. We present a novel practical approach for finding a complete set of filters @math containing only networks of depth three whose outputs are minimal and representative up to permutation and reflection. In previous work, we have developed a highly efficient algorithm for finding extremal sets (i.e., outputs of comparator networks; itemsets) up to permutation. In this paper we present a modification to this algorithm that identifies the representative itemsets up to permutation and reflection. Hence, the practical approach presented here is the successful combination of known theory and practice that we apply to the domain of sorting networks. For all @math , we empirically compute the complete set of filters @math of the representative minimal up to permutation and reflection @math -input networks of depth three.", "We prove depth optimality of sorting networks from \"The Art of Computer Programming\". Sorting networks possess symmetry that can be used to generate a few representatives. These representatives can be efficiently encoded using regular expressions. We construct SAT formulas whose unsatisfiability is sufficient to show optimality. The resulting algorithm is orders of magnitude faster than prior work on small instances. We solve a 40-year-old open problem on depth optimality of sorting networks. In 1973, Donald E. Knuth detailed sorting networks of the smallest depth known for n ≤ 16 inputs, quoting optimality for n ≤ 8 (Volume 3 of \"The Art of Computer Programming\"). In 1989, Parberry proved optimality of networks with 9 ≤ n ≤ 10 inputs. We present a general technique for obtaining such results, proving optimality of the remaining open cases of 11 ≤ n ≤ 16 inputs. 
Exploiting symmetry, we construct a small set R_n of two-layer networks such that: if there is a depth-k sorting network on n inputs, then there is one whose first layers are in R_n. For each network in R_n, we construct a propositional formula whose satisfiability is necessary for the existence of a depth-k sorting network. Using an off-the-shelf SAT solver we prove optimality of the sorting networks listed by Knuth. For n ≤ 10 inputs, our algorithm is orders of magnitude faster than prior ones." ] }
1502.05886
2138017439
Information extracted from social media streams has been leveraged to forecast the outcome of a large number of real-world events, from political elections to stock market fluctuations. An increasing number of studies demonstrates how the analysis of social media conversations provides cheap access to the wisdom of the crowd. However, the extents and contexts in which such forecasting power can be effectively leveraged are still unverified, at least in a systematic way. It is also unclear how social-media-based predictions compare to those based on alternative information sources. To address these issues, here we develop a machine learning framework that leverages social media streams to automatically identify and predict the outcomes of soccer matches. We focus in particular on matches in which at least one of the possible outcomes is deemed as highly unlikely by professional bookmakers. We argue that sport events offer a systematic approach for testing the predictive power of social media conversations, and allow us to compare such power against the rigorous baselines set by external sources. Despite such strict baselines, our framework yields above 8% marginal profit when used to inform simple betting strategies. The system is based on real-time sentiment analysis and exploits data collected immediately before the game start, allowing for bets informed by its predictions. We first discuss the rationale behind our approach, then describe the learning framework, its prediction performance, and the return it provides as compared to a set of betting strategies. To test our framework we use both historical Twitter data from the 2014 FIFA World Cup games (10% sample), and real-time Twitter data (full stream) collected by monitoring the conversations about all soccer matches of the four major European tournaments (FA Premier League, Serie A, La Liga, and Bundesliga), and the 2014 UEFA Champions League, during the period between October 25th, 2014 and November 26th, 2014.
This work, to the best of our knowledge, is the first to exploit social media streams to predict soccer matches. However, various recent studies have approached related problems @cite_29 , such as predicting the outcome of political elections @cite_18 @cite_11 , talent shows @cite_7 , movies success @cite_3 @cite_31 , stock-market fluctuations @cite_24 @cite_6 , political protests @cite_19 @cite_23 @cite_15 @cite_21 , and diffusion of information @cite_26 @cite_16 .
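For context on how a marginal-profit figure for a simple betting strategy is typically computed, here is a purely hypothetical flat-stake ROI helper; the function and its inputs are illustrative and are not the paper's evaluation code:

```python
def flat_stake_roi(bets):
    """Return on investment of a flat one-unit-stake strategy.

    bets -- list of (decimal_odds, won) pairs, one per bet placed.
    A won bet returns odds * stake; a lost bet returns nothing.
    """
    staked = len(bets)
    returned = sum(odds for odds, won in bets if won)
    return (returned - staked) / staked

# Two wins at odds 2.5 plus one loss: (2.5 + 2.5 - 3) / 3, about 0.67.
print(flat_stake_roi([(2.5, True), (2.5, True), (3.0, False)]))
```

Under this convention, a positive ROI means the predictions beat the odds implied by the bookmakers on the bets actually placed.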
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_7", "@cite_15", "@cite_29", "@cite_21", "@cite_3", "@cite_6", "@cite_24", "@cite_19", "@cite_23", "@cite_31", "@cite_16", "@cite_11" ], "mid": [ "1976267568", "2018165284", "2113321055", "2165066692", "1590495275", "1971204783", "2015186536", "2171468534", "2037625889", "2114204486", "2028993035", "1515963492", "1964521983", "2137036358" ], "abstract": [ "Is social media a valid indicator of political behavior? There is considerable debate about the validity of data extracted from social media for studying offline behavior. To address this issue, we show that there is a statistically significant association between tweets that mention a candidate for the U.S. House of Representatives and his or her subsequent electoral performance. We demonstrate this result with an analysis of 542,969 tweets mentioning candidates selected from a random sample of 3,570,054,618, as well as Federal Election Commission data from 795 competitive races in the 2010 and 2012 U.S. congressional elections. This finding persists even when controlling for incumbency, district partisanship, media coverage of the race, time, and demographic variables such as the district's racial and gender composition. Our findings show that reliable data about political behavior can be extracted from social media.", "We present TwitterMonitor, a system that performs trend detection over the Twitter stream. The system identifies emerging topics (i.e. 'trends') on Twitter in real time and provides meaningful analytics that synthesize an accurate description of each topic. Users interact with the system by ordering the identified trends using different criteria and submitting their own description for each trend. We discuss the motivation for trend detection over social media streams and the challenges that lie therein. We then describe our approach to trend detection, as well as the architecture of TwitterMonitor. 
Finally, we lay out our demonstration scenario.", "We present a contribution to the debate on the predictability of social events using big data analytics. We focus on the elimination of contestants in the American Idol TV shows as an example of a well defined electoral phenomenon that each week draws millions of votes in the USA. This event can be considered as basic test in a simplified environment to assess the predictive power of Twitter signals. We provide evidence that Twitter activity during the time span defined by the TV show airing and the voting period following it correlates with the contestants ranking and allows the anticipation of the voting outcome. Twitter data from the show and the voting period of the season finale have been analyzed to attempt the winner prediction ahead of the airing of the official result. We also show that the fraction of tweets that contain geolocation information allows us to map the fanbase of each contestant, both within the US and abroad, showing that strong regional polarizations occur. The geolocalized data are crucial for the correct prediction of the final outcome of the show, pointing out the importance of considering information beyond the aggregated Twitter signal. Although American Idol voting is just a minimal and simplified version of complex societ al phenomena such as political elections, this work shows that the volume of information available in online systems permits the real time gathering of quantitative indicators that may be able to anticipate the future unfolding of opinion formation events.", "We examine the temporal evolution of digital communication activity relating to the American anti-capitalist movement Occupy Wall Street. Using a high-volume sample from the microblogging site Twitter, we investigate changes in Occupy participant engagement, interests, and social connectivity over a fifteen month period starting three months prior to the movement's first protest action. 
The results of this analysis indicate that, on Twitter, the Occupy movement tended to elicit participation from a set of highly interconnected users with pre-existing interests in domestic politics and foreign social movements. These users, while highly vocal in the months immediately following the birth of the movement, appear to have lost interest in Occupy related communication over the remainder of the study period.", "Twitter is a microblogging website where users read and write millions of short messages on a variety of topics every day. This study uses the context of the German federal election to investigate whether Twitter is used as a forum for political deliberation and whether online messages on Twitter validly mirror offline political sentiment. Using LIWC text analysis software, we conducted a content-analysis of over 100,000 messages containing a reference to either a political party or a politician. Our results show that Twitter is indeed used extensively for political deliberation. We find that the mere number of messages mentioning a party reflects the election result. Moreover, joint mentions of two parties are in line with real world political ties and coalitions. An analysis of the tweets’ political sentiment demonstrates close correspondence to the parties' and politicians’ political positions indicating that the content of Twitter messages plausibly reflects the offline political landscape. We discuss the use of microblogging message content as a valid indicator of political sentiment and derive suggestions for further research.", "Social media represent powerful tools of mass communication and information diffusion. They played a pivotal role during recent social uprisings and political mobilizations across the world. Here we present a study of the Gezi Park movement in Turkey through the lens of Twitter. We analyze over 2.3 million tweets produced during the 25 days of protest occurred between May and June 2013. 
We first characterize the spatio-temporal nature of the conversation about the Gezi Park demonstrations, showing that similarity in trends of discussion mirrors geographic cues. We then describe the characteristics of the users involved in this conversation and what roles they played. We study how roles and individual influence evolved during the period of the upheaval. This analysis reveals that the conversation becomes more democratic as events unfold, with a redistribution of influence over time in the user population. We conclude by observing how the online and offline worlds are tightly intertwined, showing that exogenous events, such as political speeches or police actions, affect social media conversations and trigger changes in individual behavior.", "In recent years, social media has become ubiquitous and important for social networking and content sharing. And yet, the content that is generated from these websites remains largely untapped. In this paper, we demonstrate how social media content can be used to predict real-world outcomes. In particular, we use the chatter from Twitter.com to forecast box-office revenues for movies. We show that a simple model built from the rate at which tweets are created about particular topics can outperform market-based predictors. We further demonstrate how sentiments extracted from Twitter can be utilized to improve the forecasting power of social media.", "Behavioral economics tells us that emotions can profoundly affect individual behavior and decision-making. Does this also apply to societies at large, i.e. can societies experience mood states that affect their collective decision making? By extension is the public mood correlated or even predictive of economic indicators? Here we investigate whether measurements of collective mood states derived from large-scale Twitter feeds are correlated to the value of the Dow Jones Industrial Average (DJIA) over time. 
We analyze the text content of daily Twitter feeds by two mood tracking tools, namely OpinionFinder that measures positive vs. negative mood and Google-Profile of Mood States (GPOMS) that measures mood in terms of 6 dimensions (Calm, Alert, Sure, Vital, Kind, and Happy). We cross-validate the resulting mood time series by comparing their ability to detect the public's response to the presidential election and Thanksgiving day in 2008. A Granger causality analysis and a Self-Organizing Fuzzy Neural Network are then used to investigate the hypothesis that public mood states, as measured by the OpinionFinder and GPOMS mood time series, are predictive of changes in DJIA closing values. Our results indicate that the accuracy of DJIA predictions can be significantly improved by the inclusion of specific public mood dimensions but not others. We find an accuracy of 87.6 in predicting the daily up and down changes in the closing values of the DJIA and a reduction of the Mean Average Percentage Error by more than 6 . Index Terms—stock market prediction — twitter — mood analysis.", "This paper describes early work trying to predict stock market indicators such as Dow Jones, NASDAQ and S&P 500 by analyzing Twitter posts. We collected the twitter feeds for six months and got a randomized subsample of about one hundredth of the full volume of all tweets. We measured collective hope and fear on each day and analyzed the correlation between these indices and the stock market indicators. We found that emotional tweet percentage significantly negatively correlated with Dow Jones, NASDAQ and S&P 500, but displayed significant positive correlation to VIX. 
It therefore seems that just checking on twitter for emotional outbursts of any kind gives a predictor of how the stock market will be doing the next day.", "Twitter sentiment was revealed, along with popularity of Egypt-related subjects and tweeter influence on the 2011 revolution.", "Social movements rely in large measure on networked communication technologies to organize and disseminate information relating to the movements’ objectives. In this work we seek to understand how the goals and needs of a protest movement are reflected in the geographic patterns of its communication network, and how these patterns differ from those of stable political communication. To this end, we examine an online communication network reconstructed from over 600,000 tweets from a thirty-six week period covering the birth and maturation of the American anticapitalist movement, Occupy Wall Street. We find that, compared to a network of stable domestic political communication, the Occupy Wall Street network exhibits higher levels of locality and a hub and spoke structure, in which the majority of non-local attention is allocated to high-profile locations such as New York, California, and Washington D.C. Moreover, we observe that information flows across state boundaries are more likely to contain framing language and references to the media, while communication among individuals in the same state is more likely to reference protest action and specific places and times. Tying these results to social movement theory, we propose that these features reflect the movement’s efforts to mobilize resources at the local level and to develop narrative frames that reinforce collective purpose at the national level.", "We predict IMDb movie ratings and consider two sets of features: surface and textual features. 
For the latter, we assume that no social media signal is isolated and use data from multiple channels that are linked to a particular movie, such as tweets from Twitter and comments from YouTube. We extract textual features from each channel to use in our prediction model and we explore whether data from either of these channels can help to extract a better set of textual feature for prediction. Our best performing model is able to rate movies very close to the observed values.", "Trending topics are the online conversations that grab collective attention on social media. They are continually changing and often reflect exogenous events that happen in the real world. Trends are localized in space and time as they are driven by activity in specific geographic areas that act as sources of traffic and information flow. Taken independently, trends and geography have been discussed in recent literature on online social media; although, so far, little has been done to characterize the relation between trends and geography. Here we investigate more than eleven thousand topics that trended on Twitter in 63 main US locations during a period of 50 days in 2013. This data allows us to study the origins and pathways of trends, how they compete for popularity at the local level to emerge as winners at the country level, and what dynamics underlie their production and consumption in different geographic areas. We identify two main classes of trending topics: those that surface locally, coinciding with three different geographic clusters (East coast, Midwest and Southwest); and those that emerge globally from several metropolitan areas, coinciding with the major air traffic hubs of the country. These hubs act as trendsetters, generating topics that eventually trend at the country level, and driving the conversation across the country. 
This poses an intriguing conjecture, drawing a parallel between the spread of information and diseases: Do trends travel faster by airplane than over the Internet?", "Purpose – Social media provide an impressive amount of data about users and their interactions, thereby offering computer and social scientists, economists, and statisticians – among others – new opportunities for research. Arguably, one of the most interesting lines of work is that of predicting future events and developments from social media data. However, current work is fragmented and lacks of widely accepted evaluation approaches. Moreover, since the first techniques emerged rather recently, little is known about their overall potential, limitations and general applicability to different domains. Therefore, better understanding the predictive power and limitations of social media is of utmost importance. Design methodology approach – Different types of forecasting models and their adaptation to the special circumstances of social media are analyzed and the most representative research conducted up to date is surveyed. Presentations of current research on techniques, methods, and empirical studies ai..." ] }
1502.05886
2138017439
Information extracted from social media streams has been leveraged to forecast the outcome of a large number of real-world events, from political elections to stock market fluctuations. An increasing amount of studies demonstrates how the analysis of social media conversations provides cheap access to the wisdom of the crowd. However, extents and contexts in which such forecasting power can be effectively leveraged are still unverified, at least in a systematic way. It is also unclear how social-media-based predictions compare to those based on alternative information sources. To address these issues, here we develop a machine learning framework that leverages social media streams to automatically identify and predict the outcomes of soccer matches. We focus in particular on matches in which at least one of the possible outcomes is deemed as highly unlikely by professional bookmakers. We argue that sport events offer a systematic approach for testing the predictive power of social media conversations, and allow to compare such power against the rigorous baselines set by external sources. Despite such strict baselines, our framework yields above 8% marginal profit when used to inform simple betting strategies. The system is based on real-time sentiment analysis and exploits data collected immediately before the game start, allowing for bets informed by its predictions. We first discuss the rationale behind our approach, then describe the learning framework, its prediction performance and the return it provides as compared to a set of betting strategies. To test our framework we use both historical Twitter data from the 2014 FIFA World Cup games (10% sample), and real-time Twitter data (full stream) collected by monitoring the conversations about all soccer matches of the four major European tournaments (FA Premier League, Serie A, La Liga, and Bundesliga), and the 2014 UEFA Champions League, during the period between October 25th, 2014 and November 26th, 2014.
To support the idea that social media data convey predictive power, Asur and Huberman @cite_3 designed a system that uses Twitter to forecast the box-office revenues of upcoming movies: simple signals, such as the buzz around a given movie, appear indicative of its future popularity. DiGrazia et al. @cite_18 used a similar framework to show that there exists a statistically significant association between tweets that mention a political candidate for the U.S. House of Representatives and his or her subsequent electoral performance. Bermingham and Smeaton @cite_2 presented a similar case study on the recent Irish General Election, modeling political sentiment by mining social media conversations. They combined sentiment analysis based on supervised learning with volume-based measures, and found that these signals are highly predictive of election results. Bollen et al. @cite_6 analyzed the textual content of the daily Twitter stream to show that Twitter mood is predictive of the daily fluctuations in the closing values of the Dow Jones Industrial Average (DJIA). Zhang et al. @cite_24 collected Twitter data for six months and found that the percentage of emotional tweets correlates significantly negatively with Dow Jones, NASDAQ, and S&P 500 fluctuations, but significantly positively with the VIX.
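Several of the studies above reduce to correlating a simple aggregate Twitter signal with an external outcome. The following minimal sketch illustrates that kind of check, in the spirit of the emotional-tweet correlation reported by Zhang et al.; all numbers are fabricated for illustration and do not come from any of the cited datasets:

```python
# Hypothetical sketch: correlate the daily fraction of "emotional" tweets
# with the next-day change of a market index. Data is invented.

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Daily fraction of emotional tweets vs. next-day index change (fabricated).
emotional_fraction = [0.12, 0.30, 0.08, 0.25, 0.18, 0.05]
index_change = [0.4, -1.1, 0.9, -0.7, 0.1, 1.2]

r = pearson(emotional_fraction, index_change)
print(f"correlation: {r:.2f}")  # strongly negative on this toy data
```

A strongly negative coefficient on such toy data mirrors the qualitative finding of the cited study: days with more emotional chatter precede index drops.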
{ "cite_N": [ "@cite_18", "@cite_6", "@cite_3", "@cite_24", "@cite_2" ], "mid": [ "1976267568", "2171468534", "2015186536", "2037625889", "137217113" ], "abstract": [ "Is social media a valid indicator of political behavior? There is considerable debate about the validity of data extracted from social media for studying offline behavior. To address this issue, we show that there is a statistically significant association between tweets that mention a candidate for the U.S. House of Representatives and his or her subsequent electoral performance. We demonstrate this result with an analysis of 542,969 tweets mentioning candidates selected from a random sample of 3,570,054,618, as well as Federal Election Commission data from 795 competitive races in the 2010 and 2012 U.S. congressional elections. This finding persists even when controlling for incumbency, district partisanship, media coverage of the race, time, and demographic variables such as the district's racial and gender composition. Our findings show that reliable data about political behavior can be extracted from social media.", "Behavioral economics tells us that emotions can profoundly affect individual behavior and decision-making. Does this also apply to societies at large, i.e. can societies experience mood states that affect their collective decision making? By extension is the public mood correlated or even predictive of economic indicators? Here we investigate whether measurements of collective mood states derived from large-scale Twitter feeds are correlated to the value of the Dow Jones Industrial Average (DJIA) over time. We analyze the text content of daily Twitter feeds by two mood tracking tools, namely OpinionFinder that measures positive vs. negative mood and Google-Profile of Mood States (GPOMS) that measures mood in terms of 6 dimensions (Calm, Alert, Sure, Vital, Kind, and Happy). 
We cross-validate the resulting mood time series by comparing their ability to detect the public's response to the presidential election and Thanksgiving day in 2008. A Granger causality analysis and a Self-Organizing Fuzzy Neural Network are then used to investigate the hypothesis that public mood states, as measured by the OpinionFinder and GPOMS mood time series, are predictive of changes in DJIA closing values. Our results indicate that the accuracy of DJIA predictions can be significantly improved by the inclusion of specific public mood dimensions but not others. We find an accuracy of 87.6 in predicting the daily up and down changes in the closing values of the DJIA and a reduction of the Mean Average Percentage Error by more than 6 . Index Terms—stock market prediction — twitter — mood analysis.", "In recent years, social media has become ubiquitous and important for social networking and content sharing. And yet, the content that is generated from these websites remains largely untapped. In this paper, we demonstrate how social media content can be used to predict real-world outcomes. In particular, we use the chatter from Twitter.com to forecast box-office revenues for movies. We show that a simple model built from the rate at which tweets are created about particular topics can outperform market-based predictors. We further demonstrate how sentiments extracted from Twitter can be utilized to improve the forecasting power of social media.", "This paper describes early work trying to predict stock market indicators such as Dow Jones, NASDAQ and S&P 500 by analyzing Twitter posts. We collected the twitter feeds for six months and got a randomized subsample of about one hundredth of the full volume of all tweets. We measured collective hope and fear on each day and analyzed the correlation between these indices and the stock market indicators. 
We found that emotional tweet percentage significantly negatively correlated with Dow Jones, NASDAQ and S&P 500, but displayed significant positive correlation to VIX. It therefore seems that just checking on twitter for emotional outbursts of any kind gives a predictor of how the stock market will be doing the next day.", "The body of content available on Twitter undoubtedly contains a diverse range of political insight and commentary. But, to what extent is this representative of an electorate? Can we model political sentiment effectively enough to capture the voting intentions of a nation during an election capaign? We use the recent Irish General Election as a case study for investigating the potential to model political sentiment through mining of social media. Our approach combines sentiment analysis using supervised learning and volume-based measures. We evaluate against the conventional election polls and the final election result. We find that social analytics using both volume-based measures and sentiment analysis are predictive and wemake a number of observations related to the task of monitoring public sentiment during an election campaign, including examining a variety of sample sizes, time periods as well as methods for qualitatively exploring the underlying content." ] }
Various works have called for caution when using social media to predict exogenous events @cite_9 @cite_13: in such cases, it is important to keep in mind that machine learning algorithms or statistical models that function as black boxes can yield results that are misleading and hard to interpret @cite_22. For these reasons, we designed our machine learning framework around simple assumptions: the prediction dynamics are entirely explainable and observable in real time. In fact, our model relies on a single feature (the average conversation sentiment measured over time), which allows the predictions to be interpreted in a concise and clear way. Our hypotheses are also rooted in recent advances in social psychology that support the idea that collective attention enhances group emotions @cite_28 @cite_25 @cite_14.
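The single-feature idea described above can be sketched in a few lines. The team names, sentiment scores, and argmax decision rule here are illustrative assumptions, not the actual pipeline of the framework:

```python
# Hypothetical sketch of a single-feature predictor: average the sentiment
# of pre-match tweets about each side and pick the side with the higher
# mean. All scores below are invented.

def predict_winner(sentiment_by_team):
    """sentiment_by_team maps a team label to a list of per-tweet
    sentiment scores in [-1, 1]; returns the label with the highest
    average pre-match sentiment."""
    averages = {
        team: sum(scores) / len(scores)
        for team, scores in sentiment_by_team.items()
    }
    return max(averages, key=averages.get)

pre_match = {
    "home": [0.6, 0.2, 0.4, -0.1],  # mean 0.275
    "away": [0.1, -0.3, 0.2, 0.0],  # mean 0.0
}
print(predict_winner(pre_match))  # -> home
```

Because the decision reduces to comparing two averages, every prediction can be traced back to the observed conversation sentiment, which is exactly the interpretability property argued for in the text.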
{ "cite_N": [ "@cite_14", "@cite_22", "@cite_28", "@cite_9", "@cite_13", "@cite_25" ], "mid": [ "1983069677", "2068181924", "2147167010", "1974906280", "153654188", "2088513386" ], "abstract": [ "The idea that group contexts can intensify emotions is centuries old. Yet, evidence that speaks to how, or if, emotions become more intense in groups remains elusive. Here we examine the novel possibility that group attention—the experience of simultaneous coattention with one’s group members—increases emotional intensity relative to attending alone, coattending with strangers, or attending nonsimultaneously with one’s group members. In Study 1, scary advertisements felt scarier under group attention. In Study 2, group attention intensified feelings of sadness to negative images, and feelings of happiness to positive images. In Study 3, group attention during a video depicting homelessness led to greater sadness that prompted larger donations to charities benefiting the homeless. In Studies 4 and 5, group attention increased the amount of cognitive resources allocated toward sad and amusing videos (as indexed by the percentage of thoughts referencing video content), leading to more sadness and happiness, respectively. In all, these effects could not be explained by differences in physiological arousal, emotional contagion, or vicarious emotional experience. Greater fear, gloom, and glee can thus result from group attention to scary, sad, and happy events, respectively.", "In February 2013, Google Flu Trends (GFT) made headlines but not for a reason that Google executives or the creators of the flu tracking system would have hoped. Nature reported that GFT was predicting more than double the proportion of doctor visits for influenza-like illness (ILI) than the Centers for Disease Control and Prevention (CDC), which bases its estimates on surveillance reports from laboratories across the United States ( 1 , 2 ). 
This happened despite the fact that GFT was built to predict CDC reports. Given that GFT is often held up as an exemplary use of big data ( 3 , 4 ), what lessons can we draw from this error?", "Social psychologists have studied the psychological processes involved in persuasion, conformity, and other forms of social influence, but they have rarely modeled the ways influence processes play out when multiple sources and multiple targets of influence interact over time. However, workers in other fields from sociology and economics to cognitive science and physics have recognized the importance of social influence and have developed models of influence flow in populations and groups—generally without relying on detailed social psychological findings. This article reviews models of social influence from a number of fields, categorizing them using four conceptual dimensions to delineate the universe of possible models. The goal is to encourage interdisciplinary collaborations to build models that incorporate the detailed, microlevel understanding of influence processes derived from focused laboratory studies but contextualized in ways that recognize how multidirectional, dynamic influences are situate...", "Predicting X from Twitter is a popular fad within the Twitter research subculture. It seems both appealing and relatively easy. Among such studies, electoral prediction is maybe the most attractive, and a growing body of literature exists on this topic. This research problem isn't only interesting, but is also extremely difficult. However, most authors seem to be more interested in claiming positive results than in providing sound and reproducible methods. It's also especially worrisome that recent papers seem to only acknowledge those studies supporting the idea that Twitter can predict elections. 
This is all problematic because while simple approaches are purported to be good enough, the predictive power of Twitter regarding elections has been greatly exaggerated, and difficult research problems still lie ahead.", "Using social media for political discourse is becoming common practice, especially around election time. One interesting aspect of this trend is the possibility of pulsing the public’s opinion about the elections, and that has attracted the interest of many researchers and the press. Allegedly, predicting electoral outcomes from social media data can be feasible and even simple. Positive results have been reported, but without an analysis on what principle enables them. Our work puts to test the purported predictive power of socialmedia metrics against the 2010 US congressional elections. Here, we applied techniques that had reportedly led to positive election predictions in the past, on the Twitter data collected from the 2010 US congressional elections. Unfortunately, we find no correlation between the analysis results and the electoral outcomes, contradicting previous reports. Observing that 80 years of polling research would support our findings, we argue that one should not be accepting predictions about events using social media data as a black box. Instead, scholarly research should be accompanied by a model explaining the predictive power of social media, when there is one.", "Across disciplines, social learning research has been unified by the principle that people learn new behaviors to the extent that they identify with the actor modeling them. We propose that this conceptualization may overlook the power of the interpersonal situation in which the modeled behavior is observed. Specifically, we predict that contexts characterized by shared in-group attention are particularly conducive to social learning. 
In two studies, participants were shown the same written exchange in either paragraph or chat form across multiple interpersonal contexts. We gauged social learning based on participants’ tendency to imitate the form of the written exchange to which they were exposed. Across both studies, results reveal that imitation is especially likely among individuals placed in the specific context of simultaneous observation with a similar other. These findings suggest that shared in-group attention is uniquely adaptive for social learning." ] }