| aid | mid | abstract | related_work | ref_abstract |
|---|---|---|---|---|
1405.6362
|
151718868
|
Exascale systems are predicted to have approximately one billion cores, assuming Gigahertz cores. Limitations on affordable network topologies for distributed memory systems of such massive scale bring new challenges to the current parallel programming model. Currently, there are many efforts to evaluate the hardware and software bottlenecks of exascale designs. There is therefore an urgent need to model application performance and to understand what changes need to be made to ensure extrapolated scalability. The fast multipole method (FMM) was originally developed for accelerating N-body problems in astrophysics and molecular dynamics, but has recently been extended to a wider range of problems, including preconditioners for sparse linear solvers. Its high arithmetic intensity combined with its linear complexity and asynchronous communication patterns makes it a promising algorithm for exascale systems. In this paper, we discuss the challenges for FMM on current parallel computers and future exascale architectures, with a focus on inter-node communication. We develop a performance model that considers the communication patterns of the FMM, and observe a good match between our model and the actual communication time, when latency, bandwidth, network topology, and multi-core penalties are all taken into account. To our knowledge, this is the first formal characterization of inter-node communication in FMM, which validates the model against actual measurements of communication time.
|
Scaling FMM to higher and higher processor counts has been a popular topic @cite_25 @cite_7 , while extensive study of single-node performance optimization, tuning, and analysis of FMM has also been of interest @cite_15 . However, there has been little effort to model the inter-node communication of FMMs. Lashuk et al. derive the overall complexity of FMM on distributed memory heterogeneous architectures @cite_17 , but do not validate the model against the actual performance. The present work is based on the communication model for AMG @cite_0 , and extends their theory to FMM. To our knowledge, this is the first formal characterization of inter-node communication in FMM, which validates the model against actual measurements of communication time.
|
{
"cite_N": [
"@cite_7",
"@cite_0",
"@cite_15",
"@cite_25",
"@cite_17"
],
"mid": [
"",
"2079472602",
"2139205226",
"1982142580",
"2099813373"
],
"abstract": [
"",
"Now that the performance of individual cores has plateaued, future supercomputers will depend upon increasing parallelism for performance. Processor counts are now in the hundreds of thousands for the largest machines and will soon be in the millions. There is an urgent need to model application performance at these scales and to understand what changes need to be made to ensure continued scalability. This paper considers algebraic multigrid (AMG), a popular and highly efficient iterative solver for large sparse linear systems that is used in many applications. We discuss the challenges for AMG on current parallel computers and future exascale architectures, and we present a performance model for an AMG solve cycle as well as performance measurements on several massively-parallel platforms.",
"This work presents the first extensive study of single-node performance optimization, tuning, and analysis of the fast multipole method (FMM) on modern multi-core systems. We consider single- and double-precision with numerous performance enhancements, including low-level tuning, numerical approximation, data structure transformations, OpenMP parallelization, and algorithmic tuning. Among our numerous findings, we show that optimization and parallelization can improve double-precision performance by 25× on Intel's quad-core Nehalem, 9.4× on AMD's quad-core Barcelona, and 37.6× on Sun's Victoria Falls (dual-sockets on all systems). We also compare our single-precision version against our prior state-of-the-art GPU-based code and show, surprisingly, that the most advanced multicore architecture (Nehalem) reaches parity in both performance and power efficiency with NVIDIA's most advanced GPU architecture.",
"Abstract There is some controversy regarding the scaling of the fast multipole method (FMM). It has recently been proven by Aluru that the FMM is not a linear scaling method, but an O ( N log 4 N ) method. Aluru's proof cannot be applied to typical computational chemistry calculations where the required precision is smaller than the machine accuracy. In this Letter, we deal with this kind of situation and give a rigorous bound to the scaling and a statistical estimate. We also perform numerical tests. Our results agree with Aluru's proof. The scaling of other methods that use multipoles is also discussed.",
"We present new scalable algorithms and a new implementation of our kernel-independent fast multipole method ( ACM IEEE SC '03), in which we employ both distributed memory parallelism (via MPI) and shared memory streaming parallelism (via GPU acceleration) to rapidly evaluate two-body non-oscillatory potentials. On traditional CPU-only systems, our implementation scales well up to 30 billion unknowns on 65K cores (AMD CRAY-based Kraken system at NSF NICS) for highly non-uniform point distributions. On GPU-enabled systems, we achieve 30x speedup for problems of up to 256 million points on 256 GPUs (Lincoln at NSF NCSA) over a comparable CPU-only based implementations. We achieve scalability to such extreme core counts by adopting a new approach to scalable MPI-based tree construction and partitioning, and a new reduction algorithm for the evaluation phase. For the sub-components of the evaluation phase (the direct- and approximate-interactions, the target evaluation, and the source-to-multipole translations), we use NVIDIA's CUDA framework for GPU acceleration to achieve excellent performance. To do so requires carefully constructed data structure transformations, which we describe in the paper and whose cost we show is minor. Taken together, these components show promise for ultrascalable FMM in the petascale era and beyond."
]
}
|
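The FMM row above models inter-node communication time in terms of latency, bandwidth, network topology, and multi-core penalties. As a rough illustration of that style of modeling (not the paper's actual model), here is a minimal alpha-beta cost sketch in Python; the parameter values, the hop-count topology term, and the per-node contention factor are all assumptions.

```python
# Illustrative alpha-beta (postal) communication cost model for an FMM-like
# exchange. This is a generic sketch, not the model from the paper above:
# the topology (hops) and multi-core contention terms are assumed forms.

def comm_time(n_messages, n_bytes, alpha=1e-6, bandwidth=5e9,
              hops=1, cores_per_nic=1):
    """Estimate communication time in seconds.

    alpha         -- per-message latency in seconds (assumed value)
    bandwidth     -- link bandwidth in bytes/s (assumed value)
    hops          -- average hop count, a crude network-topology penalty
    cores_per_nic -- cores sharing one network interface (multi-core penalty)
    """
    latency_term = n_messages * alpha * hops
    bandwidth_term = n_bytes * cores_per_nic / bandwidth
    return latency_term + bandwidth_term

if __name__ == "__main__":
    # Example: 512 messages of 8 KB each (an M2L-like exchange), 3 hops,
    # 16 cores sharing the NIC.
    t = comm_time(n_messages=512, n_bytes=512 * 8192, hops=3, cores_per_nic=16)
    print(f"estimated communication time: {t * 1e3:.3f} ms")
```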
1405.6448
|
1533049312
|
In this paper, we introduce a novel approach for optimal resource allocation from multiple carriers for users with elastic and inelastic traffic in the fourth generation long term evolution (4G-LTE) system. In our model, we use logarithmic and sigmoidal-like utility functions to represent the user applications running on different user equipment (UEs). We use a utility proportional fairness policy, where the fairness among users is in utilization (i.e., utility percentage) of the application running on the mobile station. Our objective is to allocate the resources to the users optimally from multiple carriers. In addition, every user subscribing for the mobile service is guaranteed to have a minimum quality-of-service (QoS) with a priority criterion. Our rate allocation algorithm selects the carrier or multiple carriers that provide the minimum price for the needed resources. We prove that the novel resource allocation optimization problem with joint carrier aggregation is convex and therefore the optimal solution is tractable. We present a distributed algorithm to allocate the resources optimally from multiple evolved NodeBs (eNodeBs). Finally, we present simulation results for the performance of our rate allocation algorithm.
|
In @cite_4 , the authors present a multiple-stage carrier aggregation scheme with an optimal resource allocation algorithm under utility proportional fairness. Users allocate resources from the first (primary) carrier eNodeB until all of its resources are allocated, then switch to the second (secondary) carrier eNodeB to allocate more resources, and so forth. In @cite_8 , spectrum sharing between public safety and commercial LTE bands is assumed, and the authors present a resource allocation algorithm that gives priority to public safety users. The resource allocation algorithms in @cite_4 @cite_8 do not ensure optimal pricing, since the allocation is performed in multiple stages. In this paper, we present an algorithm that allocates the resources jointly from different carriers and therefore ensures optimal rate allocation and optimal pricing.
|
{
"cite_N": [
"@cite_4",
"@cite_8"
],
"mid": [
"1968719883",
"2073839497"
],
"abstract": [
"In this paper, we consider a resource allocation optimization problem with carrier aggregation in fourth generation long term evolution (4G-LTE). In our proposed model, each user equipment (UE) is assigned a utility function that represents the application type running on the UE. Our objective is to allocate the resources from two carriers to each user based on its application that is represented by the utility function assigned to that user. We consider two groups of users, one with elastic traffic and the other with inelastic traffic. Each user is guaranteed a minimum resource allocation. In addition, a priority resource allocation is given to the UEs running adaptive real time applications. We prove that the optimal rate allocated to each UE by the single carrier resource allocation optimization problem is equivalent to the aggregated optimal rates allocated to the same user by the primary and secondary carriers when their total resources is equivalent to the single carrier resources. Our goal is to guarantee a minimum quality of service (QoS) that varies based on the user application type. We present a carrier aggregation rate allocation algorithm to allocate two carriers resources optimally among users. Finally we present simulation results with the carrier aggregation rate allocation algorithm.",
"In this paper, we consider resource allocation optimization problem in fourth generation long term evolution (4G-LTE) for public safety and commercial users running elastic or inelastic traffic. Each mobile user can run delay-tolerant or real-time applications. In our proposed model, each user equipment (UE) is assigned a utility function that represents the application type running on the UE. Our objective is to allocate the resources from a single evolved node B (eNodeB) to each user based on the user application that is represented by the utility function assigned to that user. We consider two groups of users, one represents public safety users with elastic or inelastic traffic and the other represents commercial users with elastic or inelastic traffic. The public safety group is given priority over the commercial group and within each group the inelastic traffic is prioritized over the elastic traffic. Our goal is to guarantee a minimum quality of service (QoS) that varies based on the user type, the user application type and the application target rate. A rate allocation algorithm is presented to allocate the eNodeB resources optimally among public safety and commercial users. Finally, the simulation results are presented on the performance of the proposed rate allocation algorithm."
]
}
|
1405.5449
|
2261940748
|
We consider a branching random walk on the lattice, where the branching rates are given by an i.i.d. Pareto random potential. We describe the process, including a detailed shape theorem, in terms of a system of growing lilypads. As an application we show that the branching random walk is intermittent, in the sense that most particles are concentrated on one very small island with large potential. Moreover, we compare the branching random walk to the parabolic Anderson model and observe that although the two systems show similarities, the mechanisms that control the growth are fundamentally different.
|
Closely related to our model is a branching random walk on @math in discrete time with a spatial i.i.d. offspring distribution. Here, much more is known about the number of particles. Early work includes @cite_15 @cite_7 for @math , who start with an infinite population and describe the local and global growth rates in terms of a variational problem (depending on a drift in the underlying random walk). Many other authors address the question of survival (see e.g. @cite_12 @cite_14 ) and recurrence vs. transience (see e.g. @cite_2 @cite_1 @cite_24 @cite_13 ).
|
{
"cite_N": [
"@cite_14",
"@cite_7",
"@cite_1",
"@cite_24",
"@cite_2",
"@cite_15",
"@cite_13",
"@cite_12"
],
"mid": [
"2108441746",
"",
"2000306115",
"",
"2049125378",
"1991205207",
"2101723786",
"2132327160"
],
"abstract": [
"We study survival of nearest-neighbor branching random walks in random environment (BRWRE) on ℤ. A priori there are three different regimes of survival: global survival, local survival, and strong local survival. We show that local and strong local survival regimes coincide for BRWRE and that they can be characterized with the spectral radius of the first moment matrix of the process. These results are generalizations of the classification of BRWRE in recurrent and transient regimes. Our main result is a characterization of global survival that is given in terms of Lyapunov exponents of an infinite product of i.i.d. 2×2 random matrices.",
"",
"We study branching random walks in random i.i.d. environment in Z d , d ≥ 1. For this model, the population size cannot decrease, and a natural definition of recurrence is introduced. We prove a dichotomy for recurrence transience, depending only on the support of the environmental law. We give sufficient conditions for recurrence and for transience. In the recurrent case, we study the asymptotics of the tail of the distribution of the hitting times and prove a shape theorem for the set of lattice sites which are visited up to a large time.",
"",
"This paper considers an infinite system of particles on the integers @math that: (1) step to the right with a random delay, and (2) split or die along the way according to a random law depending on their position. The exponential growth rate of the particle density is computed in the long time limit in the form of a variational formula that can be solved explicitly. The result reveals two phase transitions associated with localization vs. delocalization and survival vs. extinction. In addition, the system exhibits an intermittency effect. Greven and den Hollander considered the more difficult situation where the particles may step both to the left and right, but the analysis of the phase diagram was less complete.",
"Let (η n ) be the infinite particle system on ℤ whose evolution is as follows. At each unit of time each particle independently is replaced by a new generation. The size of a new generation descending from a particle at sitex has distributionF x and each of its members independently jumps to sitex±1 with probability (1±h) 2,h∈[0, 1]. The sequence F x is i.i.d. with uniformly bounded second moment and is kept fixed during the evolution. The initial configurationη0 is shift invariant and ergodic.",
"We develop a criterion for transience for a general model of branching Markov chains. In the case of multi-dimensional branching random walk in random environment (BRWRE) this criterion becomes explicit. In particular, we show that Condition L of Comets and Popov [3] is necessary and sufficient for transience as conjectured. Furthermore, the criterion applies to two important classes of branching random walks and implies that the critical branching random walk is transient resp. dies out locally.",
"We consider a particular Branching Random Walk in Random Environment (BRWRE) on @math started with one particle at the origin. Particles reproduce according to an offspring distribution (which depends on the location) and move either one step to the right (with a probability in @math which also depends on the location) or stay in the same place. We give criteria for local and global survival and show that global survival is equivalent to exponential growth of the moments. Further, on the event of survival the number of particles grows almost surely exponentially fast with the same growth rate as the moments."
]
}
|
1405.5449
|
2261940748
|
We consider a branching random walk on the lattice, where the branching rates are given by an i.i.d. Pareto random potential. We describe the process, including a detailed shape theorem, in terms of a system of growing lilypads. As an application we show that the branching random walk is intermittent, in the sense that most particles are concentrated on one very small island with large potential. Moreover, we compare the branching random walk to the parabolic Anderson model and observe that although the two systems show similarities, the mechanisms that control the growth are fundamentally different.
|
Since our interest is in the effect of heavy-tailed environments, we assume that the branching rates are bounded away from zero and thus avoid the issue of recurrence and transience. Indeed we see that as soon as a site is occupied, there are almost immediately exponentially many particles, and we focus on analysing the growth of the branching process by describing when sites are hit and how the number of particles evolves thereafter. We find that for our choice of potential the sites that are hit, as well as the local growth rates, are --- even after rescaling appropriately --- random. This is in sharp contrast to existing shape theorems for branching random walks with spatial i.i.d. offspring distribution, see @cite_1 @cite_11 , where the set of visited sites converges after rescaling to a deterministic convex set and the local growth rates are given by a deterministic function. Furthermore, we will show that in our case the growth rates for the actual number of particles deviate dramatically from those for the expected number, which again contrasts with @cite_1 @cite_11 .
|
{
"cite_N": [
"@cite_1",
"@cite_11"
],
"mid": [
"2000306115",
"2963863740"
],
"abstract": [
"We study branching random walks in random i.i.d. environment in Z d , d ≥ 1. For this model, the population size cannot decrease, and a natural definition of recurrence is introduced. We prove a dichotomy for recurrence transience, depending only on the support of the environmental law. We give sufficient conditions for recurrence and for transience. In the recurrent case, we study the asymptotics of the tail of the distribution of the hitting times and prove a shape theorem for the set of lattice sites which are visited up to a large time.",
"We study branching random walks in random environment on the d- dimensional square lattice, d 1. In this model, the environment has nite range dependence, and the population size cannot decrease. We prove limit theorems (laws of large numbers) for the set of lattice sites which are visited up to a large time as well as for the local size of the population. The limiting shape of this set is compact and convex, and the local size is given by a concave growth exponent. Also, we obtain the law of large numbers for the logarithm of the total number of particles in the process. 1. Introduction and results We start with an informal description of the model we study in this paper. Particles live in Z d and evolve in discrete time. At each time, every particle is substituted by (possibly more than one) ospring which are placed in neighboring sites, independently of the other particles. The rules of ospring generation depend only on the location of the particle. The collection of those rules (so-called the environment) is itself random, it is chosen randomly before starting the process, and then it is kept xed during all the subsequent evolution of the particle system."
]
}
|
1405.5443
|
1763059245
|
Traditional AI reasoning techniques have been used successfully in many domains, including logistics, scheduling and game playing. This paper is part of a project aimed at investigating how such techniques can be extended to coordinate teams of unmanned aerial vehicles (UAVs) in dynamic environments. Specifically challenging are real-world environments where UAVs and other network-enabled devices must communicate to coordinate -- and communication actions are neither reliable nor free. Such network-centric environments are common in military, public safety and commercial applications, yet most research (even multi-agent planning) usually takes communications among distributed agents as a given. We address this challenge by developing an agent architecture and reasoning algorithms based on Answer Set Programming (ASP). Although ASP has been used successfully in a number of applications, to the best of our knowledge this is the first practical application of a complete ASP-based agent architecture. It is also the first practical application of ASP involving a combination of centralized reasoning, decentralized reasoning, execution monitoring, and reasoning about network communications.
|
Incorporating network properties into planning and decision-making has been investigated in @cite_0 . The authors' results indicate that plan execution effectiveness and performance improve as network-awareness during the planning phase increases. The UAV coordination approach in the current work combines network-awareness during the reasoning processes with a plan-aware network layer.
|
{
"cite_N": [
"@cite_0"
],
"mid": [
"2039531526"
],
"abstract": [
"As methods for detecting improvised explosive devices (IEDs) continue to diversify, it becomes increasingly important to establish a framework for coordinating distributed IED monitoring resources to best protect a designated area. The purpose of this paper is to establish the beginnings of such a framework in a distributed plan execution context. The first contribution of this paper is defining an automated planning domain for distributed IED detection. In doing so, we investigate approaches for coordinating distributed plan execution resources. Whereas many existing multi-agent system (MAS) frameworks abstract network information from agent decision-making processes, we instead propose that MAS frameworks consider network properties to improve effectiveness. The second contribution of the paper is the description of several types of network-aware planning, execution, and monitoring agents and a comparison of their performance and effectiveness in an IED monitoring scenario. The results of this research ..."
]
}
|
1405.5443
|
1763059245
|
Traditional AI reasoning techniques have been used successfully in many domains, including logistics, scheduling and game playing. This paper is part of a project aimed at investigating how such techniques can be extended to coordinate teams of unmanned aerial vehicles (UAVs) in dynamic environments. Specifically challenging are real-world environments where UAVs and other network-enabled devices must communicate to coordinate -- and communication actions are neither reliable nor free. Such network-centric environments are common in military, public safety and commercial applications, yet most research (even multi-agent planning) usually takes communications among distributed agents as a given. We address this challenge by developing an agent architecture and reasoning algorithms based on Answer Set Programming (ASP). Although ASP has been used successfully in a number of applications, to the best of our knowledge this is the first practical application of a complete ASP-based agent architecture. It is also the first practical application of ASP involving a combination of centralized reasoning, decentralized reasoning, execution monitoring, and reasoning about network communications.
|
The problem of mission planning for UAVs under communication constraints has been addressed in @cite_13 , where an ad-hoc task allocation process is employed to engage under-utilized UAVs as communication relays. In our work, we do not separate planning from the engagement of under-utilized UAVs, and do not rely on ad-hoc, hard-wired behaviors. Our approach gives the planner more flexibility and finer-grained control of the actions that occur in the plans, and allows for the emergence of sophisticated behaviors without the need to pre-specify them.
|
{
"cite_N": [
"@cite_13"
],
"mid": [
"2034297074"
],
"abstract": [
"A multi-UAV system relies on communications to operate. Failure to communicate remotely sensed mission data to the base may render the system ineffective, and the inability to exchange command and control messages can lead to system failures. This paper describes a unique method to control network communications through distributed task allocation to engage under-utilized UAVs to serve as communication relays and to ensure that the network supports mission tasks. This work builds upon a distributed algorithm previously developed by the authors, CBBA with Relays, which uses task assignment information, including task location and proposed execution time, to predict the network topology and plan support using relays. By explicitly coupling task assignment and relay creation processes, the team is able to optimize the use of agents to address the needs of dynamic complex missions. In this work, the algorithm is extended to explicitly consider realistic network communication dynamics, including path loss, stochastic fading, and information routing. Simulation and flight test results validate the proposed approach, demonstrating that the algorithm ensures both data-rate and interconnectivity bit-error-rate requirements during task execution."
]
}
|
1405.5483
|
114092773
|
We consider the classical exact multiple string matching problem. Our solution is based on @math -grams combined with pattern superimposition, bit-parallelism and alphabet size reduction. We discuss the pros and cons of the various alternatives of how to achieve the best combination. Our method is closely related to previous work by (, 2006). The experimental results show that our method performs well on different alphabet sizes and that it scales to large pattern sets.
|
The classical algorithms for the present problem can be roughly divided into three different categories, ( @math ) prefix searching, ( @math ) suffix searching and ( @math ) factor searching. Another way to classify the solutions is to say that they are based on character comparisons, hashing, or bit-parallelism. Yet another view is to say that they are based on filtering, aiming for good average case complexity, or on some kind of "direct search" with good worst case complexity guarantees. These different categorizations are of course not mutually exclusive, and many solutions are hybrids that borrow ideas from several techniques. For a good overview of the classical solutions we refer the reader e.g. to @cite_0 @cite_1 @cite_6 . We briefly review some of them in the following.
|
{
"cite_N": [
"@cite_0",
"@cite_1",
"@cite_6"
],
"mid": [
"1515839227",
"2610179052",
""
],
"abstract": [
"This book presents a practical approach to string matching problems, focusing on the algorithms and implementations that perform best in practice. It covers searching for simple, multiple, and extended strings, as well as regular expressions, exactly and approximately. It includes all of the most significant new developments in complex pattern searching. The clear explanations, step-by-step examples, algorithms pseudo-code, and implementation efficiency maps will enable researchers, professionals, and students in bioinformatics, computer science, and software engineering to choose the most appropriate algorithms for their applications.",
"Linear-Time Construction of Suffix Trees We will present two methods for constructing suffix trees in detail, Ukkonen’s method and Weiner’s method. Weiner was the first to show that suffix trees can be built in linear time, and his method is presented both for its historical importance and for some different technical ideas that it contains. However, lJkkonen’s method is equally fast and uses far less space (i.e., memory) in practice than Weiner’s method Hence Ukkonen is the method of choice for most problems requiring the construction of a suffix tree. We also believe that Ukkonen’s method is easier to understand. Therefore, it will be presented first A reader who wishes to study only one method is advised to concentrate on it. However, our development of Weiner’s method does not depend on understanding Ukkonen’s algorithm, and the two algorithms can be read independently (with one small shared section noted in the description of Weiner’s method).",
""
]
}
|
1405.5483
|
114092773
|
We consider the classical exact multiple string matching problem. Our solution is based on @math -grams combined with pattern superimposition, bit-parallelism and alphabet size reduction. We discuss the pros and cons of the various alternatives of how to achieve the best combination. Our method is closely related to previous work by (, 2006). The experimental results show that our method performs well on different alphabet sizes and that it scales to large pattern sets.
|
Perhaps the most famous solution to the multiple pattern matching problem is the Aho--Corasick (AC) @cite_19 algorithm, which works in linear time (prefix-based approach). It builds a pattern trie with extra (failure) links and actually generalizes the Knuth--Morris--Pratt algorithm @cite_23 for a single pattern. More precisely, AC total time is @math , where @math , the sum of pattern lengths, is the preprocessing cost, and @math is the total number of pattern occurrences in @math . Recently Fredriksson and Grabowski @cite_21 showed an average-optimal filtering variant of the classic AC algorithm. They built the AC automaton over superimposed subpatterns, which allows the text characters to be sampled at regular distances without missing any match (i.e., any verification). This algorithm is based on the same ideas as the current work.
|
{
"cite_N": [
"@cite_19",
"@cite_21",
"@cite_23"
],
"mid": [
"",
"2098102002",
"1985108724"
],
"abstract": [
"",
"The exact string matching problem is to find the occurrences of a pattern of length m from a text of length n symbols. We develop a novel and unorthodox filtering technique for this problem. Our method is based on transforming the problem into multiple matching of carefully chosen pattern subsequences. While this is seemingly more difficult than the original problem, we show that the idea leads to very simple algorithms that are optimal on average. We then show how our basic method can be used to solve multiple string matching as well as several approximate matching problems in average optimal time. The general method can be applied to many existing string matching algorithms. Our experimental results show that the algorithms perform very well in practice.",
"An algorithm is presented which finds all occurrences of one given string within another, in running time proportional to the sum of the lengths of the strings. The constant of proportionality is low enough to make this algorithm of practical use, and the procedure can also be extended to deal with some more general pattern-matching problems. A theoretical application of the algorithm shows that the set of concatenations of even palindromes, i.e., the language @math , can be recognized in linear time. Other algorithms which run even faster on the average are also considered."
]
}
|
1405.4951
|
2405163796
|
The problem of secure friend discovery on a social network has long been proposed and studied. The requirement is that a pair of nodes can make befriending decisions with minimum information exposed to the other party. In this paper, we propose to use community detection to tackle the problem of secure friend discovery. We formulate the first privacy-preserving and decentralized community detection problem as a multiobjective optimization. We design the first protocol to solve this problem, which transforms community detection to a series of Private Set Intersection (PSI) instances using Truncated Random Walk (TRW). Preliminary theoretical results show that our protocol can uncover communities with overwhelming probability and preserve privacy. We also discuss future works, potential extensions and variations.
|
The first type of related work is Private Set Intersection (PSI), as PSI protocols are already widely used for secure friend discovery. The second type of related work is topology-based graph mining. Although our problem is termed "community detection", the most closely related works are actually topology-based Sybil defenses. This is because previous community detection problems have mainly been considered in the centralized scenario. On the contrary, Sybil defense schemes see wide application in P2P systems, so one of the root concerns is decentralized execution. Note that some distributed community detection works exist, but they cannot be used directly because nodes exchange too much information. For example, @cite_0 allows nodes to exchange adjacency lists and intermediate community detection results, which directly breaks the privacy constraint that we formulate in the following sections. Due to space limits, a detailed survey of related work is omitted. Interested readers can see the community detection surveys @cite_20 @cite_13 and Sybil detection surveys @cite_17 @cite_8 .
|
{
"cite_N": [
"@cite_8",
"@cite_0",
"@cite_13",
"@cite_20",
"@cite_17"
],
"mid": [
"1989643196",
"2145636261",
"1977713568",
"2127048411",
"2152760593"
],
"abstract": [
"Sybil attacks in which an adversary forges a potentially unbounded number of identities are a danger to distributed systems and online social networks. The goal of sybil defense is to accurately identify sybil identities. This paper surveys the evolution of sybil defense protocols that leverage the structural properties of the social graph underlying a distributed system to identify sybil identities. We make two main contributions. First, we clarify the deep connection between sybil defense and the theory of random walks. This leads us to identify a community detection algorithm that, for the first time, offers provable guarantees in the context of sybil defense. Second, we advocate a new goal for sybil defense that addresses the more limited, but practically useful, goal of securely white-listing a local region of the graph.",
"Community is an important attribute of Pocket Switched Networks (PSN), because mobile devices are carried by people who tend to belong to communities. We analysed community structure from mobility traces and used for forwarding algorithms [12], which shows significant impact of community. Here, we propose and evaluate three novel distributed community detection approaches with great potential to detect both static and temporal communities. We find that with suitable configuration of the threshold values, the distributed community detection can approximate their corresponding centralised methods up to 90 accuracy.",
"This article reviews the state-of-the-art in overlapping community detection algorithms, quality measures, and benchmarks. A thorough comparison of different algorithms (a total of fourteen) is provided. In addition to community-level evaluation, we propose a framework for evaluating algorithms' ability to detect overlapping nodes, which helps to assess overdetection and underdetection. After considering community-level detection performance measured by normalized mutual information, the Omega index, and node-level detection performance measured by F-score, we reached the following conclusions. For low overlapping density networks, SLPA, OSLOM, Game, and COPRA offer better performance than the other tested algorithms. For networks with high overlapping density and high overlapping diversity, both SLPA and Game provide relatively stable performance. However, test results also suggest that the detection in such networks is still not yet fully resolved. A common feature observed by various algorithms in real-world networks is the relatively small fraction of overlapping nodes (typically less than 30p), each of which belongs to only 2 or 3 communities.",
"The modern science of networks has brought significant advances to our understanding of complex systems. One of the most relevant features of graphs representing real systems is community structure, or clustering, i.e. the organization of vertices in clusters, with many edges joining vertices of the same cluster and comparatively few edges joining vertices of different clusters. Such clusters, or communities, can be considered as fairly independent compartments of a graph, playing a similar role like, e.g., the tissues or the organs in the human body. Detecting communities is of great importance in sociology, biology and computer science, disciplines where systems are often represented as graphs. This problem is very hard and not yet satisfactorily solved, despite the huge effort of a large interdisciplinary community of scientists working on it over the past few years. We will attempt a thorough exposition of the topic, from the definition of the main elements of the problem, to the presentation of most methods developed, with a special focus on techniques designed by statistical physicists, from the discussion of crucial issues like the significance of clustering and how methods should be tested and compared against each other, to the description of applications to real networks.",
"The sybil attack in distributed systems refers to individual malicious users joining the system multiple times under multiple fake identities. Sybil attacks can easily invalidate the overarching prerequisite of many fault-tolerant designs which assume that the fraction of malicious nodes is not too large. This article presents a tutorial and survey on effective sybil defenses leveraging social networks. Since this approach of sybil defenses via social networks was introduced 5 years ago, it has attracted much more attention from the research community than many other alternatives. We will first explain the intuitions and insights behind this approach, and then survey a number of specific sybil defense mechanisms based on this approach, including SybilGuard, SybilLimit, SybilInfer, Gatekeeper, SumUp, Whanau, and Ostra. We will also discuss some practical implications and deployment considerations of this approach."
]
}
|
1405.5302
|
1505577496
|
Following the fast growth of cellular networks, more users have drawn attention to the contradiction between dynamic user data traffic and static data plans. To address this important but largely unexplored issue, in this paper, we design a new data plan sharing system named Prometheus, which is based on the scenario that some smartphone users have surplus data traffic and are willing to help others download data. To realize this system, we first propose a mechanism that incorporates LT codes into UDP. It is robust to transmission errors and encourages more concurrent transmissions and forwardings. It also can be implemented easily with low implementation complexity. Then we design an incentive mechanism using a Stackelberg game to choose assistant users ( @math ); all participants gain credits in return, which can be used to ask for future help when they need to download something. Finally, real environment experiments are conducted and the results show that users in our Prometheus can not only manage their surplus data plans more efficiently but also achieve a higher download rate.
|
Cooperative transmission has been studied for many years. Most VANET protocols are designed based on the assumption of node cooperativeness @cite_0 . Wireless sensor networks (WSNs) are another area where cooperative transmission often occurs @cite_6 @cite_15 @cite_10 .
|
{
"cite_N": [
"@cite_0",
"@cite_15",
"@cite_10",
"@cite_6"
],
"mid": [
"",
"2146786649",
"2116820055",
"2120359406"
],
"abstract": [
"",
"In this paper, we investigate the use of cooperative communications for high performance data dissemination in dense wireless sensor networks. We first identify the limitations of existing cooperative schemes. While we previously proposed a multi-hop cooperative data dissemination scheme, REER, to address these limitations, the construction of such structure relies on a pre-established reference path. The partially centralized approach makes REER unscalable when encountering network dynamics. To address this issue, this paper proposes a novel distributed multi-hop cooperative communication scheme (DMC), which is fully distributed and consists of two operation phases: (1) cooperative mesh structure (CMS) construction, and (2) CMS-based data dissemination, which includes random value-based scheme and distance-based scheme for forwarding node selection. Simulation results show that DMC performs well in terms of a number of QoS metrics, and fits well in large-scale networks and highly dynamic environments.",
"In this paper, a novel idea of user cooperation in wireless networks has been exploited to improve the performance of the IEEE 802.11 medium access control (MAC) protocol. The new MAC protocol leverages the multi-rate capability of IEEE 802.11b and allows the mobile stations (STA) far away from the access point (AP) to transmit at a higher rate by using an intermediate station as a relay. Two specific variations of the new MAC protocol, namely CoopMAC I and CoopMAC II, are introduced in the paper. Both are able to increase the throughput of the whole network and reduce the average packet delay. Moreover, CoopMAC II also maintains backward compatibility with the legacy 802.11 protocol. The performance improvement is further evaluated by analysis and extensive simulations.",
"The gains of cooperative communications in wireless networks have been explored recently under the ideal assumption of negligible receiving and processing power. In sensor networks, the power spent for listening and computing can constitute a significant portion of the total consumed power, and such an overhead can reduce the gains promised by cooperation. In this paper, cooperation gains are investigated by taking into consideration such overheads in the analytical framework. The performance metric considered is the energy efficiency of the system measured by the total power required to achieve a certain quality of service requirement. The analytical and numerical results reveal very interesting threshold behavior below which direct transmission is more energy efficient, and above which cooperation provides more gains. Such a tradeoff is shown to depend on many parameters such as the relative locations of the source and destination, the values of the receive and processing powers, the application, and many other factors. Moreover, there are experimental results conducted to verify the channel model assumed in the paper"
]
}
|
1405.5302
|
1505577496
|
Following the fast growth of cellular networks, more users have drawn attention to the contradiction between dynamic user data traffic and static data plans. To address this important but largely unexplored issue, in this paper, we design a new data plan sharing system named Prometheus, which is based on the scenario that some smartphone users have surplus data traffic and are willing to help others download data. To realize this system, we first propose a mechanism that incorporates LT codes into UDP. It is robust to transmission errors and encourages more concurrent transmissions and forwardings. It also can be implemented easily with low implementation complexity. Then we design an incentive mechanism using a Stackelberg game to choose assistant users ( @math ); all participants gain credits in return, which can be used to ask for future help when they need to download something. Finally, real environment experiments are conducted and the results show that users in our Prometheus can not only manage their surplus data plans more efficiently but also achieve a higher download rate.
|
Research in recent years has focused on cellular networks. SHAKE was proposed in @cite_2 : it lets several mobile hosts temporarily connect to one another and selects several kinds of paths to communicate with the outside, so that the mobile hosts obtain a larger transfer capacity. However, not only the mobile hosts but also the correspondent hosts on the internet require special software for SHAKE, so it is difficult to popularize. The issues raised were studied extensively in @cite_14 @cite_17 . In SHAKE, mobile hosts that are connected by a fast local link can simultaneously use the multiple wireless links owned by each host to communicate with hosts on the internet, which improves the data transfer speed.
|
{
"cite_N": [
"@cite_14",
"@cite_17",
"@cite_2"
],
"mid": [
"1574443249",
"1884056947",
"2147144210"
],
"abstract": [
"Wireless links used for mobile communications have some problems such as narrow bandwidth and low reliability. To offer high speed communication on wireless links, we have proposed SHAKE(SHAring multiple paths procedure for cluster networK Environment). In SHAKE, mobile hosts which are connected with fast local link each other use multiple wireless links owned by each host simultaneously to communicate with hosts on the internet. In this paper, we propose a fast WWW access method with SHAKE (Web SHAKE). The feature of Web SHAKE is that it does not require any special software on web servers on the internet to use SHAKE. The feature is realized by an use of HTTP Proxy Server which works on each mobile host. We present the result of performance evaluation of the Web SHAKE. We have shown that data transfer speed was improved by Web SHAKE in comparison with normal WWW access from three experiments.",
"Mobile computing has become very popular. Many people can access the Internet through mobile hosts. However, wireless links used by mobile hosts have problems such as narrow bandwidth and low reliability. To offer high-speed communication on wireless links, we have proposed SHAKE (SHAring multiple paths procedure for cluster networK Environment). In SHAKE, mobile hosts that are connected by a fast local link use multiple wireless links owned by each host simultaneously to communicate with hosts on the Internet. An experimental system was implemented and evaluated. In this system, however, not only mobile hosts but also correspondent hosts on the internet require special software for SHAKE. We propose a fast WWW access method with SHAKE (Web SHAKE). The feature of Web SHAKE is that it does not require any special software on Web servers on the Internet to use SHAKE. The feature is realized by an use of HTTP proxy server that works on each mobile host. We present the result of a performance evaluation of the Web SHAKE.",
"We can access Internet by carrying a portable computer and using the wireless communication. The wireless network with PHS (Personal Handy phone System) and portable cellular telephone has only rates of tens of Kbps to a few Mbps. Compared with the cable network, the transfer rate cannot generally satisfy a highly developed communication services such as large file transfer and real-time communications. This paper proposes a protocol, SHAKE, for sharing multiple paths in cluster type network that is a kind of LAN in which some mobile hosts temporarily connect mutually. SHAKE provides the functions for composing cluster type network, and dispersing traffic efficiently by measuring transfer rate and round-trip time. As a mobile host has only low transfer capacity in individual to communicate with outside, if whole capacities of other hosts which compose cluster type network are shared, we can get larger transfer capacity and satisfy the required communication services."
]
}
|
1405.5302
|
1505577496
|
Following the fast growth of cellular networks, more users have drawn attention to the contradiction between dynamic user data traffic and static data plans. To address this important but largely unexplored issue, in this paper, we design a new data plan sharing system named Prometheus, which is based on the scenario that some smartphone users have surplus data traffic and are willing to help others download data. To realize this system, we first propose a mechanism that incorporates LT codes into UDP. It is robust to transmission errors and encourages more concurrent transmissions and forwardings. It also can be implemented easily with low implementation complexity. Then we design an incentive mechanism using a Stackelberg game to choose assistant users ( @math ); all participants gain credits in return, which can be used to ask for future help when they need to download something. Finally, real environment experiments are conducted and the results show that users in our Prometheus can not only manage their surplus data plans more efficiently but also achieve a higher download rate.
|
Many researchers in this field have turned their attention to streaming applications. Instead of sending the stream independently to each mobile device, the server distributes the video packets among the devices over a long-range wireless link using WLAN technology; the devices then exchange the received packets with each other over short-range wireless links @cite_8 @cite_9 .
|
{
"cite_N": [
"@cite_9",
"@cite_8"
],
"mid": [
"2054265414",
"2169986911"
],
"abstract": [
"Video applications are increasingly popular over smartphones. However, in current cellular systems, the downlink data rate fluctuates and the loss rate can be quite high. We are interested in the scenario where a group of smartphone users, within proximity of each other, are interested in viewing the same video at the same time and are also willing to cooperate with each other. We propose a system that maximizes the video quality by appropriately using all available resources, namely the cellular connections to the phones as well as the device-to-device links that can be established via Bluetooth or WiFi. Key ingredients of our design are: (i) the cooperation among users, (ii) network coding, and (iii) exploiting broadcast in the mobile-to-mobile links. Our approach is grounded on a network utility maximization formulation of the problem. We present numerical results that demonstrate the benefit of our approach, and we implement a prototype on android phones.",
"A promising approach for power reduction in mobile devices is peer-to-peer cooperation. In this paper, we present a testbed system implementation to evaluate the performance of an infrastructure controlled cooperative video streaming architecture. A number of mobile devices in close proximity are connected to the Internet through a wireless access point and are all requesting the same video stream from a dedicated server. Instead of sending the stream independently to each device, the server distributes the video packets among the devices, over a long-range wireless link using WLAN technology; the devices then exchange the received packets among each other over short-range wireless links using Bluetooth technology. The implemented testbed is used to experimentally demonstrate the power reduction gains in mobile devices and to derive an analytical model that relates power consumption to various design and system parameters."
]
}
|
1405.4375
|
174845111
|
In a wireless storage system, having to communicate over a fading channel makes repair transmissions prone to physical layer errors. The first approach to combat fading is to utilize the existing optimal space-time codes. However, it was recently pointed out that such codes are in general too complex to decode when the number of helper nodes is bigger than the number of antennas at the newcomer or data collector. In this paper, a novel protocol for wireless storage transmissions based on algebraic space-time codes is presented in order to improve the system reliability while enabling feasible decoding. The diversity-multiplexing gain tradeoff (DMT) of the system together with sphere-decodability even with a low number of antennas are used as the main design criteria, thus naturally establishing a DMT-complexity tradeoff. It is shown that the proposed protocol outperforms the simple time-division multiple access (TDMA) protocol, while still falling behind the optimal DMT.
|
In most of the storage-related research the focus is on the (logical) network layer, while the physical layer is usually ignored. Nonetheless, some interesting works considering the physical layer do exist. In @cite_10 , a so-called partial downloading scheme is proposed that allows for data reconstruction with limited bandwidth by downloading only parts of the contents of the helper nodes. In @cite_3 , the use of a forward error correction code (e.g., an LDPC code) is proposed in order to correct bit errors caused by fading. In @cite_21 , optimal storage codes are constructed for the error and erasure scenario.
|
{
"cite_N": [
"@cite_21",
"@cite_10",
"@cite_3"
],
"mid": [
"1979205928",
"1998021782",
""
],
"abstract": [
"Regenerating codes are a class of codes proposed for providing reliability of data and efficient repair of failed nodes in distributed storage systems. In this paper, we address the fundamental problem of handling errors and erasures at the nodes or links, during the data-reconstruction and node-repair operations. We provide explicit regenerating codes that are resilient to errors and erasures, and show that these codes are optimal with respect to storage and bandwidth requirements. As a special case, we also establish the capacity of a class of distributed storage systems in the presence of malicious adversaries. While our code constructions are based on previously constructed Product-Matrix codes, we also provide necessary and sufficient conditions for introducing resilience in any regenerating code.",
"We consider a distributed storage system employing some existing regenerate codes where the storage nodes are scattered in a wireless network. The data collector connects to the storage nodes via orthogonal channels and downloads data symbols from these nodes. In the existing data reconstruction schemes for distributed storage systems, the data collector downloads all symbols from a subset of the storage nodes. Such a full-downloading approach becomes inefficient in wireless networks since due to fading, the wireless channels may not offer sufficient bandwidths for full downloading. Moreover, full-downloading is also less power efficient than partial downloading. In this paper, given a coding scheme employed by the wireless distributed storage system, we propose a partial downloading scheme that allows downloading a portion of the symbols from any storage node. We formulate a cross-layer wireless resource allocation problem for data reconstruction in distributed storage systems employing such partial downloading. To derive the fundamental properties of partial downloading as well as to reduce the complexity of wireless resource allocation, we derive necessary and sufficient conditions for data reconstructability for partial downloading, in terms of the numbers of downloaded symbols from the storage nodes. We also propose channel and power allocation schemes for partial downloading in wireless distributed storage systems. Simulation results are provided to demonstrate the significant power savings by the proposed partial downloading scheme compared to the full-downloading methods for distributed storage.",
""
]
}
|
1405.4375
|
174845111
|
In a wireless storage system, having to communicate over a fading channel makes repair transmissions prone to physical layer errors. The first approach to combat fading is to utilize the existing optimal space-time codes. However, it was recently pointed out that such codes are in general too complex to decode when the number of helper nodes is bigger than the number of antennas at the newcomer or data collector. In this paper, a novel protocol for wireless storage transmissions based on algebraic space-time codes is presented in order to improve the system reliability while enabling feasible decoding. The diversity-multiplexing gain tradeoff (DMT) of the system together with sphere-decodability even with a low number of antennas are used as the main design criteria, thus naturally establishing a DMT-complexity tradeoff. It is shown that the proposed protocol outperforms the simple time-division multiple access (TDMA) protocol, while still falling behind the optimal DMT.
|
Recently, space-time storage codes were introduced in @cite_14 as a class of codes that should be able to resist fading of the signals during repair transmissions, while also maintaining the repair property of the underlying storage code. It was pointed out that the obvious way of constructing such codes, namely combining an optimal storage code and an optimal space-time code, results in infeasible decoding complexity when the number of helper nodes is bigger than the number of antennas at the newcomer or data collector (DC). Motivated by this work, we tackle the complexity issue and design a novel yet simple protocol that has feasible decoding complexity for any number of helpers, while the helpers and the newcomer or DC are only required to have at most two antennas each.
|
{
"cite_N": [
"@cite_14"
],
"mid": [
"2949453955"
],
"abstract": [
"Distributed storage systems (DSSs) have gained a lot of interest recently, thanks to their robustness and scalability compared to single-device storage. Majority of the related research has exclusively concerned the network layer. At the same time, the number of users of, e.g., peer-to-peer (p2p) and device-to-device (d2d) networks as well as proximity based services is growing rapidly, and the mobility of users is considered more and more important. This motivates, in contrast to the existing literature, the study of the physical layer functionality of wireless distributed storage systems. In this paper, we take the first step towards protecting the storage repair transmissions from physical layer errors when the transmission takes place over a fading channel. To this end, we introduce the notion of a space-time storage code, drawing together the aspects of network layer and physical layer functionality and resulting in cross-layer robustness. It is also pointed out that existing space-time codes are too complex to be utilized in storage networks when the number of helpers involved is larger than the number of receive antennas at the newcomer or data collector, hence creating a call for less complex transmission protocols."
]
}
|
1405.4375
|
174845111
|
In a wireless storage system, having to communicate over a fading channel makes repair transmissions prone to physical layer errors. The first approach to combat fading is to utilize the existing optimal space-time codes. However, it was recently pointed out that such codes are in general too complex to decode when the number of helper nodes is bigger than the number of antennas at the newcomer or data collector. In this paper, a novel protocol for wireless storage transmissions based on algebraic space-time codes is presented in order to improve the system reliability while enabling feasible decoding. The diversity-multiplexing gain tradeoff (DMT) of the system together with sphere-decodability even with a low number of antennas are used as the main design criteria, thus naturally establishing a DMT-complexity tradeoff. It is shown that the proposed protocol outperforms the simple time-division multiple access (TDMA) protocol, while still falling behind the optimal DMT.
|
Thus, the present paper deviates from earlier work on DSSs in that it addresses the actual encoding of the transmitted data in order to fight the effects caused by fading, continuing along the lines of @cite_14 . In addition to encoding, a sphere-decodable transmission protocol for the encoded data will be studied. To make our system as realistic as possible, the protocol requires only up to two antennas at each end while still being sphere-decodable. This is in contrast to the previous optimal space-time codes @cite_13 that would in principle be suitable for storage transmissions, but would have exponential complexity when the number of helpers is bigger than the number of receive antennas. Furthermore, it is shown that the designed system achieves a significantly higher DMT than the TDMA protocol. The observed gap to the optimal DMT is due to the above complexity requirement, which establishes a natural DMT-complexity tradeoff.
|
{
"cite_N": [
"@cite_14",
"@cite_13"
],
"mid": [
"2949453955",
"2160800591"
],
"abstract": [
"Distributed storage systems (DSSs) have gained a lot of interest recently, thanks to their robustness and scalability compared to single-device storage. Majority of the related research has exclusively concerned the network layer. At the same time, the number of users of, e.g., peer-to-peer (p2p) and device-to-device (d2d) networks as well as proximity based services is growing rapidly, and the mobility of users is considered more and more important. This motivates, in contrast to the existing literature, the study of the physical layer functionality of wireless distributed storage systems. In this paper, we take the first step towards protecting the storage repair transmissions from physical layer errors when the transmission takes place over a fading channel. To this end, we introduce the notion of a space-time storage code, drawing together the aspects of network layer and physical layer functionality and resulting in cross-layer robustness. It is also pointed out that existing space-time codes are too complex to be utilized in storage networks when the number of helpers involved is larger than the number of receive antennas at the newcomer or data collector, hence creating a call for less complex transmission protocols.",
"Explicit code constructions for multiple-input multiple-output (MIMO) multiple-access channels (MAC) with K users are presented in this paper. The first construction is dedicated to the case of symmetric MIMO-MAC where all the users have the same number of transmit antennas nt and transmit at the same level of per-user multiplexing gain r. Furthermore, we assume that the users transmit in an independent fashion and do not cooperate. The construction is systematic for any values of K, nt and r. It is proved that this newly proposed construction achieves the optimal MIMO-MAC diversity-multiplexing gain tradeoff (DMT) provided by Tse at high-SNR regime. In the second part of the paper we take a further step to investigate the MAC-DMT of a general MIMO-MAC where the users are allowed to have different numbers of transmit antennas and can transmit at different levels of multiplexing gain. The exact optimal MAC-DMT of such channel is explicitly characterized in this paper. Interestingly, in the general MAC-DMT, some users might not be able to achieve their single-user DMT performance as in the symmetric case, even when the multiplexing gains of the other users are close to 0. Detailed explanations of such unexpected result are provided in this paper. Finally, by generalizing the code construction for the symmetric MIMO-MAC, explicit code constructions are provided for the general MIMO-MAC and are proved to be optimal in terms of the general MAC-DMT."
]
}
|
1405.4658
|
2328538789
|
A basic question for zero-sum repeated games consists in determining whether the mean payoff per time unit is independent of the initial state. In the special case of "zero-player" games, i.e., of Markov chains equipped with additive functionals, the answer is provided by the mean ergodic theorem. We generalize this result to repeated games. We show that the mean payoff is independent of the initial state for all state-dependent perturbations of the rewards if and only if an ergodicity condition is verified. The latter is characterized by the uniqueness modulo constants of nonlinear harmonic functions (fixed points of the recession function associated to the Shapley operator), or, in the special case of stochastic games with finite action spaces and perfect information, by a reachability condition involving conjugate subsets of states in directed hypergraphs. We show that the ergodicity condition for games only depends on the support of the transition probability, and that it can be checked in polynomial time when the number of states is fixed.
|
A useful tool to address the issue of the solvability of the ergodic equation, or of the corresponding nonlinear eigenproblem, is the recession function associated with the Shapley operator, which has already been used in several ways @cite_19 @cite_0 @cite_22 . In particular, Rosenberg and Sorin @cite_19 gave conditions for the existence of the mean payoff vector of a two-person zero-sum stochastic game. In their framework, the recession function appears as the Shapley operator of the "projective" game, which corresponds to the game with no running payments.
|
{
"cite_N": [
"@cite_0",
"@cite_19",
"@cite_22"
],
"mid": [
"1980657465",
"2096935426",
"1608548244"
],
"abstract": [
"Let T be a nonexpansive monotonic mapping from C to itself where C is a closed subset of a space of bounded real functions, with the supremum norm. We study asymptotic properties of several average iterates of T, related to the cycle time.",
"We consider two person zero-sum stochastic games. The recursive formula for the valuesvλ (resp.v n) of the discounted (resp. finitely repeated) version can be written in terms of a single basic operator Φ(α,f) where α is the weight on the present payoff andf the future payoff. We give sufficient conditions in terms of Φ(α,f) and its derivative at 0 for limv n and limvλ to exist and to be equal.",
"If A is a nonnegative matrix whose associated directed graph is strongly connected, the Perron-Frobenius theorem asserts that A has an eigenvector in the positive cone, (R + ) n . We associate a directed graph to any homogeneous, monotone function, f: (R + ) n → (R + ) n , and show that if the graph is strongly connected, then f has a (nonlinear) eigenvector in (R + ) n . Several results in the literature emerge as corollaries. Our methods show that the Perron-Frobenius theorem is really about the boundedness of invariant subsets in the Hilbert projective metric. They lead to further existence results and open problems."
]
}
|
1405.4658
|
2328538789
|
A basic question for zero-sum repeated games consists in determining whether the mean payoff per time unit is independent of the initial state. In the special case of "zero-player" games, i.e., of Markov chains equipped with additive functionals, the answer is provided by the mean ergodic theorem. We generalize this result to repeated games. We show that the mean payoff is independent of the initial state for all state-dependent perturbations of the rewards if and only if an ergodicity condition is verified. The latter is characterized by the uniqueness modulo constants of nonlinear harmonic functions (fixed points of the recession function associated to the Shapley operator), or, in the special case of stochastic games with finite action spaces and perfect information, by a reachability condition involving conjugate subsets of states in directed hypergraphs. We show that the ergodicity condition for games only depends on the support of the transition probability, and that it can be checked in polynomial time when the number of states is fixed.
|
Observe that every constant vector is a fixed point of a payment-free Shapley operator. We shall refer to such a fixed point as trivial. In @cite_22 , Gaubert and Gunawardena show that the ergodic equation is solvable if @math has only trivial fixed points. An explicit sufficient condition for this to hold, involving a sequence of aggregated directed graphs that generalize the classical directed graph of Perron-Frobenius theory, was given there.
|
{
"cite_N": [
"@cite_22"
],
"mid": [
"1608548244"
],
"abstract": [
"If A is a nonnegative matrix whose associated directed graph is strongly connected, the Perron-Frobenius theorem asserts that A has an eigenvector in the positive cone, (R + ) n . We associate a directed graph to any homogeneous, monotone function, f: (R + ) n → (R + ) n , and show that if the graph is strongly connected, then f has a (nonlinear) eigenvector in (R + ) n . Several results in the literature emerge as corollaries. Our methods show that the Perron-Frobenius theorem is really about the boundedness of invariant subsets in the Hilbert projective metric. They lead to further existence results and open problems."
]
}
|
1405.4658
|
2328538789
|
A basic question for zero-sum repeated games consists in determining whether the mean payoff per time unit is independent of the initial state. In the special case of "zero-player" games, i.e., of Markov chains equipped with additive functionals, the answer is provided by the mean ergodic theorem. We generalize this result to repeated games. We show that the mean payoff is independent of the initial state for all state-dependent perturbations of the rewards if and only if an ergodicity condition is verified. The latter is characterized by the uniqueness modulo constants of nonlinear harmonic functions (fixed points of the recession function associated to the Shapley operator), or, in the special case of stochastic games with finite action spaces and perfect information, by a reachability condition involving conjugate subsets of states in directed hypergraphs. We show that the ergodicity condition for games only depends on the support of the transition probability, and that it can be checked in polynomial time when the number of states is fixed.
|
Then, in @cite_26 , Cavazos-Cadena and Hernández-Hernández introduced a weak convexity property and showed that when the conjugate map @math is weakly convex, the recession function @math has only trivial fixed points if and only if the first of the directed graphs of @cite_22 consists of a single final class and of trivial classes (reduced to one node, and loop-free). They deduced that when @math is weakly convex, the ergodic equation for all maps @math with @math is solvable if and only if @math has only trivial fixed points. We shall consider the same additive perturbations @math of the Shapley operator, but without any assumption on @math except that the payment @math be bounded. Indeed, this weak convexity property is rarely satisfied for games, although it captures an interesting class of risk-sensitive problems.
|
{
"cite_N": [
"@cite_26",
"@cite_22"
],
"mid": [
"2084982707",
"1608548244"
],
"abstract": [
"Abstract Given a monotone and homogeneous self-mapping f of the n -dimensional positive cone, a communication matrix M f is introduced and a family F of functions is obtained by multiplying each component of f by arbitrary positive numbers. Assuming that f satisfies a weak form of convexity, a necessary and sufficient criterion on the structure of M f is given so that each function in F has a (nonlinear) eigenvalue. An alternative necessary and sufficient condition in terms of the recession function of f is also provided.",
"If A is a nonnegative matrix whose associated directed graph is strongly connected, the Perron-Frobenius theorem asserts that A has an eigenvector in the positive cone, (R + ) n . We associate a directed graph to any homogeneous, monotone function, f: (R + ) n → (R + ) n , and show that if the graph is strongly connected, then f has a (nonlinear) eigenvector in (R + ) n . Several results in the literature emerge as corollaries. Our methods show that the Perron-Frobenius theorem is really about the boundedness of invariant subsets in the Hilbert projective metric. They lead to further existence results and open problems."
]
}
|
1405.4356
|
2951272922
|
The main results of this paper are (I) a simulation algorithm which, under quite general constraints, transforms algorithms running on the Congested Clique into algorithms running in the MapReduce model, and (II) a distributed @math -coloring algorithm running on the Congested Clique which has an expected running time of (i) @math rounds, if @math ; and (ii) @math rounds otherwise. Applying the simulation theorem to the Congested-Clique @math -coloring algorithm yields an @math -round @math -coloring algorithm in the MapReduce model. Our simulation algorithm illustrates a natural correspondence between per-node bandwidth in the Congested Clique model and memory per machine in the MapReduce model. In the Congested Clique (and more generally, any network in the @math model), the major impediment to constructing fast algorithms is the @math restriction on message sizes. Similarly, in the MapReduce model, the combined restrictions on memory per machine and total system memory have a dominant effect on algorithm design. In showing a fairly general simulation algorithm, we highlight the similarities and differences between these models.
|
The earliest interesting algorithm in the Congested Clique model is an @math -round deterministic algorithm to compute a minimum spanning tree (MST), due to @cite_12 . @cite_1 presented a randomized @math -round algorithm in the Congested Clique model that produces a constant-factor approximation for the metric facility location problem. @cite_6 @cite_9 considered the more general non-uniform metric facility location problem in the Congested Clique model and presented a constant-factor approximation running in expected @math rounds. They reduce the metric facility location problem to the problem of computing a @math -ruling set of a spanning subgraph of the underlying communication network and show how to solve this in @math rounds in expectation. In 2013, Lenzen presented a routing protocol to solve a problem called an Information Distribution Task @cite_7 . The setup for this problem is that each node @math is given a set of @math messages, each of size @math , @math , with destinations @math , @math . Messages are globally lexicographically ordered by their source @math , destination @math , and @math . Each node is also the destination of at most @math messages. Lenzen's routing protocol solves the Information Distribution Task in @math rounds.
|
{
"cite_N": [
"@cite_7",
"@cite_9",
"@cite_1",
"@cite_6",
"@cite_12"
],
"mid": [
"2949818233",
"2950999367",
"2035994564",
"",
"2032607682"
],
"abstract": [
"Consider a clique of n nodes, where in each synchronous round each pair of nodes can exchange O(log n) bits. We provide deterministic constant-time solutions for two problems in this model. The first is a routing problem where each node is source and destination of n messages of size O(log n). The second is a sorting problem where each node i is given n keys of size O(log n) and needs to receive the ith batch of n keys according to the global order of the keys. The latter result also implies deterministic constant-round solutions for related problems such as selection or determining modes.",
"This paper presents a distributed O(1)-approximation algorithm, with expected- @math running time, in the @math model for the metric facility location problem on a size- @math clique network. Though metric facility location has been considered by a number of researchers in low-diameter settings, this is the first sub-logarithmic-round algorithm for the problem that yields an O(1)-approximation in the setting of non-uniform facility opening costs. In order to obtain this result, our paper makes three main technical contributions. First, we show a new lower bound for metric facility location, extending the lower bound of B a (ICALP 2005) that applies only to the special case of uniform facility opening costs. Next, we demonstrate a reduction of the distributed metric facility location problem to the problem of computing an O(1)-ruling set of an appropriate spanning subgraph. Finally, we present a sub-logarithmic-round (in expectation) algorithm for computing a 2-ruling set in a spanning subgraph of a clique. Our algorithm accomplishes this by using a combination of randomized and deterministic sparsification.",
"In this paper, we present a randomized constant factor approximation algorithm for the metric minimum facility location problem with uniform costs and demands in a distributed setting, in which every point can open a facility. In particular, our distributed algorithm uses three communication rounds with message sizes bounded to O(log n) bits where n is the number of points. We also extend our algorithm to constant powers of metric spaces, where we also obtain a randomized constant factor approximation algorithm.",
"",
"This paper considers the problem of distributively constructing a minimum-weight spanning tree (MST) for graphs of constant diameter in the bounded-messages model, where each message can contain at most B bits for some parameter B. It is shown that the number of communication rounds necessary to compute an MST for graphs of diameter 4 or 3 can be as high as ( ( [3]n B ) ) and ( ( [4]n B ) ), respectively. The asymptotic lower bounds hold for randomized algorithms as well. On the other hand, we observe that O(log n) communication rounds always suffice to compute an MST deterministically for graphs with diameter 2, when B = O(log n). These results complement a previously known lower bound of ( ( [2]n B) ) for graphs of diameter Ω(log n)."
]
}
|
1405.4356
|
2951272922
|
The main results of this paper are (I) a simulation algorithm which, under quite general constraints, transforms algorithms running on the Congested Clique into algorithms running in the MapReduce model, and (II) a distributed @math -coloring algorithm running on the Congested Clique which has an expected running time of (i) @math rounds, if @math ; and (ii) @math rounds otherwise. Applying the simulation theorem to the Congested-Clique @math -coloring algorithm yields an @math -round @math -coloring algorithm in the MapReduce model. Our simulation algorithm illustrates a natural correspondence between per-node bandwidth in the Congested Clique model and memory per machine in the MapReduce model. In the Congested Clique (and more generally, any network in the @math model), the major impediment to constructing fast algorithms is the @math restriction on message sizes. Similarly, in the MapReduce model, the combined restrictions on memory per machine and total system memory have a dominant effect on algorithm design. In showing a fairly general simulation algorithm, we highlight the similarities and differences between these models.
|
Our main sources of reference on the MapReduce model and for graph algorithms in this model are the work of @cite_14 and @cite_8 , respectively. Besides these, the work of @cite_2 on clustering algorithms in the MapReduce model and the work of @cite_5 on greedy algorithms in the MapReduce model are relevant.
|
{
"cite_N": [
"@cite_5",
"@cite_14",
"@cite_2",
"@cite_8"
],
"mid": [
"2101246692",
"2051586153",
"2950858762",
"2153977620"
],
"abstract": [
"Greedy algorithms are practitioners' best friends - they are intuitive, simple to implement, and often lead to very good solutions. However, implementing greedy algorithms in a distributed setting is challenging since the greedy choice is inherently sequential, and it is not clear how to take advantage of the extra processing power. Our main result is a powerful sampling technique that aids in parallelization of sequential algorithms. We then show how to use this primitive to adapt a broad class of greedy algorithms to the MapReduce paradigm; this class includes maximum cover and submodular maximization subject to p-system constraints. Our method yields efficient algorithms that run in a logarithmic number of rounds, while obtaining solutions that are arbitrarily close to those produced by the standard sequential greedy algorithm. We begin with algorithms for modular maximization subject to a matroid constraint, and then extend this approach to obtain approximation algorithms for submodular maximization subject to knapsack or p-system constraints. Finally, we empirically validate our algorithms, and show that they achieve the same quality of the solution as standard greedy algorithms but run in a substantially fewer number of rounds.",
"In recent years the MapReduce framework has emerged as one of the most widely used parallel computing platforms for processing data on terabyte and petabyte scales. Used daily at companies such as Yahoo!, Google, Amazon, and Facebook, and adopted more recently by several universities, it allows for easy parallelization of data intensive computations over many machines. One key feature of MapReduce that differentiates it from previous models of parallel computation is that it interleaves sequential and parallel computation. We propose a model of efficient computation using the MapReduce paradigm. Since MapReduce is designed for computations over massive data sets, our model limits the number of machines and the memory per machine to be substantially sublinear in the size of the input. On the other hand, we place very loose restrictions on the computational power of of any individual machine---our model allows each machine to perform sequential computations in time polynomial in the size of the original input. We compare MapReduce to the PRAM model of computation. We prove a simulation lemma showing that a large class of PRAM algorithms can be efficiently simulated via MapReduce. The strength of MapReduce, however, lies in the fact that it uses both sequential and parallel computation. We demonstrate how algorithms can take advantage of this fact to compute an MST of a dense graph in only two rounds, as opposed to Ω(log(n)) rounds needed in the standard PRAM model. We show how to evaluate a wide class of functions using the MapReduce framework. We conclude by applying this result to show how to compute some basic algorithmic problems such as undirected s-t connectivity in the MapReduce framework.",
"Clustering problems have numerous applications and are becoming more challenging as the size of the data increases. In this paper, we consider designing clustering algorithms that can be used in MapReduce, the most popular programming environment for processing large datasets. We focus on the practical and popular clustering problems, @math -center and @math -median. We develop fast clustering algorithms with constant factor approximation guarantees. From a theoretical perspective, we give the first analysis that shows several clustering algorithms are in @math , a theoretical MapReduce class introduced by KarloffSV10 . Our algorithms use sampling to decrease the data size and they run a time consuming clustering algorithm such as local search or Lloyd's algorithm on the resulting data set. Our algorithms have sufficient flexibility to be used in practice since they run in a constant number of MapReduce rounds. We complement these results by performing experiments using our algorithms. We compare the empirical performance of our algorithms to several sequential and parallel algorithms for the @math -median problem. The experiments show that our algorithms' solutions are similar to or better than the other algorithms' solutions. Furthermore, on data sets that are sufficiently large, our algorithms are faster than the other parallel algorithms that we tested.",
"The MapReduce framework is currently the de facto standard used throughout both industry and academia for petabyte scale data analysis. As the input to a typical MapReduce computation is large, one of the key requirements of the framework is that the input cannot be stored on a single machine and must be processed in parallel. In this paper we describe a general algorithmic design technique in the MapReduce framework called filtering. The main idea behind filtering is to reduce the size of the input in a distributed fashion so that the resulting, much smaller, problem instance can be solved on a single machine. Using this approach we give new algorithms in the MapReduce framework for a variety of fundamental graph problems for sufficiently dense graphs. Specifically, we present algorithms for minimum spanning trees, maximal matchings, approximate weighted matchings, approximate vertex and edge covers and minimum cuts. In all of these cases, we parameterize our algorithms by the amount of memory available on the machines allowing us to show tradeoffs between the memory available and the number of MapReduce rounds. For each setting we will show that even if the machines are only given substantially sublinear memory, our algorithms run in a constant number of MapReduce rounds. To demonstrate the practical viability of our algorithms we implement the maximal matching algorithm that lies at the core of our analysis and show that it achieves a significant speedup over the sequential version."
]
}
|
1405.4392
|
2950310334
|
In this paper, we propose an unsupervised method to identify noun sense changes based on rigorous analysis of time-varying text data available in the form of millions of digitized books. We construct distributional thesauri based networks from data at different time points and cluster each of them separately to obtain word-centric sense clusters corresponding to the different time points. Subsequently, we compare these sense clusters of two different time points to find if (i) there is birth of a new sense or (ii) if an older sense has got split into more than one sense or (iii) if a newer sense has been formed from the joining of older senses or (iv) if a particular sense has died. We conduct a thorough evaluation of the proposed methodology both manually as well as through comparison with WordNet. Manual evaluation indicates that the algorithm could correctly identify 60.4% of birth cases from a set of 48 randomly picked samples and 57% of split/join cases from a set of 21 randomly picked samples. Remarkably, in 44% of the cases the birth of a novel sense is attested by WordNet, while in 46% and 43% of the cases split and join are respectively confirmed by WordNet. Our approach can be applied for lexicography, as well as for applications like word sense disambiguation or semantic search.
|
Word sense disambiguation as well as word sense discovery have both remained key areas of research right from the very early initiatives in natural language processing research. Ide and Veronis present a very concise survey of the history of ideas used in word sense disambiguation; for a recent survey of the state of the art one can refer to @cite_9 . Some of the first attempts at automatic word sense discovery were made by Karen Spärck Jones; later, in lexicography, it has been extensively used as a pre-processing step for preparing mono- and multi-lingual dictionaries @cite_6 @cite_1 . However, as we have already pointed out, none of these works consider the temporal aspect of the problem.
|
{
"cite_N": [
"@cite_9",
"@cite_1",
"@cite_6"
],
"mid": [
"2436001372",
"1546097390",
"2140982618"
],
"abstract": [
"Word sense disambiguation (WSD) is the ability to identify the meaning of words in context in a computational manner. WSD is considered an AI-complete problem, that is, a task whose solution is at least as hard as the most difficult problems in artificial intelligence. We introduce the reader to the motivations for solving the ambiguity of words and provide a description of the task. We overview supervised, unsupervised, and knowledge-based approaches. The assessment of WSD systems is discussed in the context of the Senseval Semeval campaigns, aiming at the objective evaluation of systems participating in several different disambiguation tasks. Finally, applications, open problems, and future directions are discussed.",
"The chapter deals with word sketches - one-page automatic, corpus-based summaries of a word's grammatical and collocational behaviour. They were first used in the production of the Macmillan English Dictionary and were presented at Euralex 2002. At that point, they only existed for English. Now, we have developed the Sketch Engine, a corpus tool which takes as input a corpus of any language and a corresponding grammar patterns and which generates word sketches for the words of that language. It also generates a thesaurus and 'sketch differences', which specify similarities and differences between near-synonyms. We briefly present a case study investigating applicability of the Sketch Engine to free word-order languages. The results show that word sketches could facilitate lexicographic work in Czech as they have for English.",
"This paper introduces the Word Sketch: a collocation-based resource of proven value for English lexicography. Issues involving the automatic extraction and presentation of salient collocations are discussed. It is further shown how the combination of signicant patterns may lead to even greater precision in the identication of collocations."
]
}
|
1405.4392
|
2950310334
|
In this paper, we propose an unsupervised method to identify noun sense changes based on rigorous analysis of time-varying text data available in the form of millions of digitized books. We construct distributional thesauri based networks from data at different time points and cluster each of them separately to obtain word-centric sense clusters corresponding to the different time points. Subsequently, we compare these sense clusters of two different time points to find if (i) there is birth of a new sense or (ii) if an older sense has got split into more than one sense or (iii) if a newer sense has been formed from the joining of older senses or (iv) if a particular sense has died. We conduct a thorough evaluation of the proposed methodology both manually as well as through comparison with WordNet. Manual evaluation indicates that the algorithm could correctly identify 60.4% of birth cases from a set of 48 randomly picked samples and 57% of split/join cases from a set of 21 randomly picked samples. Remarkably, in 44% of the cases the birth of a novel sense is attested by WordNet, while in 46% and 43% of the cases split and join are respectively confirmed by WordNet. Our approach can be applied for lexicography, as well as for applications like word sense disambiguation or semantic search.
|
A few approaches suggested by @cite_5 @cite_11 attempt to augment WordNet synsets primarily using methods of annotation. Another recent work attempts to induce word senses and then identify novel senses by comparing two different corpora: the "focus corpora" (i.e., a recent version of the corpora) and the "reference corpora" (an older version of the corpora). However, this method is limited as it only considers two time points to identify sense changes, as opposed to our approach, which operates over a much larger timescale, thereby effectively allowing us to track the points of change and the underlying causes. One of the works closest to what we present here has been put forward by @cite_13 , where the authors analyze a newspaper corpus containing articles between 1785 and 1985. The authors mainly report the frequency patterns of certain words that they found to be candidates for change; however, a detailed cause analysis as to why and how a particular word underwent a sense change has not been demonstrated. Further, a systematic evaluation of the results obtained by the authors has not been provided.
|
{
"cite_N": [
"@cite_5",
"@cite_13",
"@cite_11"
],
"mid": [
"2003321079",
"",
"171586348"
],
"abstract": [
"The Japanese WordNet currently has 51,000 synsets with Japanese entries. In this paper, we discuss three methods of extending it: increasing the cover, linking it to examples in corpora and linking it to other resources (SUMO and GoiTaikei). In addition, we outline our plans to make it more useful by adding Japanese definition sentences to each synset. Finally, we discuss how releasing the corpus under an open license has led to the construction of interfaces in a variety of programming languages.",
"",
"FinnWordNet is a Finnish wordnet which complies with the structure of the Princeton WordNet (Fellbaum, 1998). It was created by translating all the words in Princeton WordNet. It is open source and contains over 117 000 synsets. We are now testing different methods in order to improve and expand the content of FinnWordNet. Since wordnets are structured ontologies, a location for a word in FinnWordNet can be pinpointed by its relations to other words. To us, finding a location for a word therefore means finding a hyperonym, a hyponym or a synonym for the word. This article describes some methods for finding a location for a new word in FinnWordNet. Our methods include searching for multiword terms, compounds and lexicosyntactic patterns. Testing shows that with a few simple methods, we were able to find an indicator of the location for 83.2 of new words. Out of the new synonym pairs we tested, we were able to find an indication for 86.7 ."
]
}
|
1405.4699
|
7549854
|
Cloud computing has become the leading paradigm for deploying large-scale infrastructures and running big data applications, due to its capacity of achieving economies of scale. In this work, we focus on one of the most prominent advantages of cloud computing, namely the on-demand resource provisioning, which is commonly referred to as elasticity. Although a lot of effort has been invested in developing systems and mechanisms that enable elasticity, the elasticity decision policies tend to be designed without guaranteeing or quantifying the quality of their operation. This work aims to make the development of elasticity policies more formalized and dependable. We make two distinct contributions. First, we propose an extensible approach to enforcing elasticity through the dynamic instantiation and online quantitative verification of Markov Decision Processes (MDP) using probabilistic model checking. Second, we propose concrete elasticity models and related elasticity policies. We evaluate our decision policies using both real and synthetic datasets in clusters of NoSQL databases. According to the experimental results, our approach improves upon the state-of-the-art in significantly increasing user-defined utility values and decreasing user-defined threshold violations.
|
The work of @cite_0 targets dynamic resource allocation for distributed storage systems, which need to maintain strict performance SLAs. In our case, we are targeting a broader range of applications, resources, and available cloud providers. The automatic scaling of a distributed storage system is also addressed in @cite_21 , which is limited to key-value datastores. Similarly, the work of @cite_10 is limited to Hadoop clusters. The work of @cite_13 presents policies for elastically scaling a Hadoop storage tier of multi-tier Web services based on automated control. This approach is reactive and has a limited focus on Hadoop clusters. Other examples of rule-based techniques that trigger elasticity actions are described in @cite_2 @cite_16 @cite_4 . Orthogonally to the elasticity policies, the focus in @cite_24 is on the cost of resource provisioning. There are also several proposals on performance analysis and resource allocation (e.g., @cite_9 @cite_7 ).
|
{
"cite_N": [
"@cite_13",
"@cite_4",
"@cite_7",
"@cite_9",
"@cite_21",
"@cite_0",
"@cite_24",
"@cite_2",
"@cite_16",
"@cite_10"
],
"mid": [
"2145457647",
"2167090833",
"",
"2100618668",
"2075233755",
"2126707271",
"",
"2081360300",
"1987191256",
"1984116073"
],
"abstract": [
"Elasticity - where systems acquire and release resources in response to dynamic workloads, while paying only for what they need - is a driving property of cloud computing. At the core of any elastic system is an automated controller. This paper addresses elastic control for multi-tier application services that allocate and release resources in discrete units, such as virtual server instances of predetermined sizes. It focuses on elastic control of the storage tier, in which adding or removing a storage node or \"brick\" requires rebalancing stored data across the nodes. The storage tier presents new challenges for elastic control: actuator delays (lag) due to rebalancing, interference with applications and sensor measurements, and the need to synchronize the multiple control elements, including rebalancing. We have designed and implemented a new controller for elastic storage systems to address these challenges. Using a popular distributed storage system - the Hadoop Distributed File System (HDFS) - under dynamic Web 2.0 workloads, we show how the controller adapts to workload changes to maintain performance objectives efficiently in a pay-as-you-go cloud computing environment.",
"Elastic resource provisioning is a key feature of cloud computing, allowing users to scale up or down resource allocation for their applications at run-time. To date, most practical approaches to managing elasticity are based on allocation de-allocation of the virtual machine (VM) instances to the application. This VM-level elasticity typically incurs both considerable overhead and extra costs, especially for applications with rapidly fluctuating demands. In this paper, we propose a lightweight approach to enable cost-effective elasticity for cloud applications. Our approach operates fine-grained scaling at the resource level itself (CPUs, memory, I O, etc) in addition to VM-level scaling. We also present the design and implementation of an intelligent platform for light-weight resource management of cloud applications. We describe our algorithms for light-weight scaling and VM-level scaling and show their interaction. We then use an industry standard benchmark to evaluate the effectiveness of our approach and compare its performance against traditional approaches.",
"",
"Cloud computing is an emerging commercial infrastructure paradigm that promises to eliminate the need for maintaining expensive computing facilities by companies and institutes alike. Through the use of virtualization and resource time sharing, clouds serve with a single set of physical resources a large user base with different needs. Thus, clouds have the potential to provide to their owners the benefits of an economy of scale and, at the same time, become an alternative for scientists to clusters, grids, and parallel production environments. However, the current commercial clouds have been built to support web and small database workloads, which are very different from typical scientific computing workloads. Moreover, the use of virtualization and resource time sharing may introduce significant performance penalties for the demanding scientific computing workloads. In this work, we analyze the performance of cloud computing services for scientific computing workloads. We quantify the presence in real scientific computing workloads of Many-Task Computing (MTC) users, that is, of users who employ loosely coupled applications comprising many tasks to achieve their scientific goals. Then, we perform an empirical evaluation of the performance of four commercial cloud computing services including Amazon EC2, which is currently the largest commercial cloud. Last, we compare through trace-based simulation the performance characteristics and cost models of clouds and other scientific computing platforms, for general and MTC-based scientific computing workloads. Our results indicate that the current clouds need an order of magnitude in performance improvement to be useful to the scientific community, and show which improvements should be considered first to address this discrepancy between offer and demand.",
"Energy costs for data centers continue to rise, already exceeding $15 billion yearly. Sadly much of this power is wasted. Servers are only busy 10--30p of the time on average, but they are often left on, while idle, utilizing 60p or more of peak power when in the idle state. We introduce a dynamic capacity management policy, AutoScale, that greatly reduces the number of servers needed in data centers driven by unpredictable, time-varying load, while meeting response time SLAs. AutoScale scales the data center capacity, adding or removing servers as needed. AutoScale has two key features: (i) it autonomically maintains just the right amount of spare capacity to handle bursts in the request rate; and (ii) it is robust not just to changes in the request rate of real-world traces, but also request size and server efficiency. We evaluate our dynamic capacity management approach via implementation on a 38-server multi-tier data center, serving a web site of the type seen in Facebook or Amazon, with a key-value store workload. We demonstrate that AutoScale vastly improves upon existing dynamic capacity management policies with respect to meeting SLAs and robustness.",
"Elasticity of cloud computing environments provides an economic incentive for automatic resource allocation of stateful systems running in the cloud. However, these systems have to meet strict performance Service-Level Objectives (SLOs) expressed using upper percentiles of request latency, such as the 99th. Such latency measurements are very noisy, which complicates the design of the dynamic resource allocation. We design and evaluate the SCADS Director, a control framework that reconfigures the storage system on-the-fly in response to workload changes using a performance model of the system. We demonstrate that such a framework can respond to both unexpected data hotspots and diurnal workload patterns without violating strict performance SLOs.",
"",
"Database systems serving cloud platforms must serve large numbers of applications (or tenants). In addition to managing tenants with small data footprints, different schemas, and variable load patterns, such multitenant data platforms must minimize their operating costs by efficient resource sharing. When deployed over a pay-per-use infrastructure, elastic scaling and load balancing, enabled by low cost live migration of tenant databases, is critical to tolerate load variations while minimizing operating cost. However, existing databases---relational databases and Key-Value stores alike---lack low cost live migration techniques, thus resulting in heavy performance impact during elastic scaling. We present Albatross, a technique for live migration in a multitenant database serving OLTP style workloads where the persistent database image is stored in a network attached storage. Albatross migrates the database cache and the state of active transactions to ensure minimal impact on transaction execution while allowing transactions active during migration to continue execution. It also guarantees serializability while ensuring correctness during failures. Our evaluation using two OLTP benchmarks shows that Albatross can migrate a live tenant database with no aborted transactions, negligible impact on transaction latency and throughput both during and after migration, and an unavailability window as low as 300 ms.",
"One of the key goals in the data center today is providing storage services with service-level objectives (SLOs) for performance metrics such as latency and throughput. Meeting such SLOs is challenging due to the dynamism observed in these environments. In this position paper, we propose dynamic instantiation of virtual appliances, that is, virtual machines with storage functionality, as a mechanism to meet storage SLOs efficiently. In order for dynamic instantiation to be realistic for rapidlychanging environments, it should be automated. Therefore, an important goal of this paper is to show that such automation is feasible. We do so through a caching case study. Specifically, we build the automation framework for dynamically instantiating virtual caching appliances. This framework identifies sets of interfering workloads that can benefit from caching, determines the cache-size requirements of workloads, non-disruptively migrates the application to use the cache, and warms the cache to quickly return to acceptable service levels. We show through an experiment that this approach addresses SLO violations while using resources efficiently.",
"Infrastructure-as-a-Service (IaaS) cloud platforms have brought two unprecedented changes to cluster provisioning practices. First, any (nonexpert) user can provision a cluster of any size on the cloud within minutes to run her data-processing jobs. The user can terminate the cluster once her jobs complete, and she needs to pay only for the resources used and duration of use. Second, cloud platforms enable users to bypass the traditional middleman---the system administrator---in the cluster-provisioning process. These changes give tremendous power to the user, but place a major burden on her shoulders. The user is now faced regularly with complex cluster sizing problems that involve finding the cluster size, the type of resources to use in the cluster from the large number of choices offered by current IaaS cloud platforms, and the job configurations that best meet the performance needs of her workload. In this paper, we introduce the Elastisizer, a system to which users can express cluster sizing problems as queries in a declarative fashion. The Elastisizer provides reliable answers to these queries using an automated technique that uses a mix of job profiling, estimation using black-box and white-box models, and simulation. We have prototyped the Elastisizer for the Hadoop MapReduce framework, and present a comprehensive evaluation that shows the benefits of the Elastisizer in common scenarios where cluster sizing problems arise."
]
}
|
1405.4699
|
7549854
|
Cloud computing has become the leading paradigm for deploying large-scale infrastructures and running big data applications, due to its capacity of achieving economies of scale. In this work, we focus on one of the most prominent advantages of cloud computing, namely the on-demand resource provisioning, which is commonly referred to as elasticity. Although a lot of effort has been invested in developing systems and mechanisms that enable elasticity, the elasticity decision policies tend to be designed without guaranteeing or quantifying the quality of their operation. This work aims to make the development of elasticity policies more formalized and dependable. We make two distinct contributions. First, we propose an extensible approach to enforcing elasticity through the dynamic instantiation and online quantitative verification of Markov Decision Processes (MDP) using probabilistic model checking. Second, we propose concrete elasticity models and related elasticity policies. We evaluate our decision policies using both real and synthetic datasets in clusters of NoSQL databases. According to the experimental results, our approach improves upon the state-of-the-art in significantly increasing user-defined utility values and decreasing user-defined threshold violations.
|
Our work also relates to proposals that employ model checking for cloud solutions and runtime quantitative verification. In the work of @cite_26 , a technique to predict the minimum VM cost of cloud deployments based on existing application logs is introduced. Queuing network theory is used to derive VM resource usage profiles. With the latter, MDP models are instantiated and used to verify system properties. The work of @cite_3 presents an incremental model verification technique, which is applied to component-based software systems deployed on cloud infrastructures. Their technique achieves lower verification time as only the modified components are verified. Both of these techniques are used to analyze cloud-based systems, not to drive elasticity or other adaptivity decisions. QoSMOS @cite_14 is a framework that also utilizes PRISM to analyze and dynamically choose the appropriate configuration of service-based systems modeled as Markov chains. Finally, the work in @cite_18 introduces a model-driven framework called ADAM, where the users provide activity diagrams that are converted to MDP models, for which cumulative rewards for various quality metrics are computed.
|
{
"cite_N": [
"@cite_14",
"@cite_18",
"@cite_26",
"@cite_3"
],
"mid": [
"2146044140",
"2144780749",
"2125217134",
""
],
"abstract": [
"Service-based systems that are dynamically composed at runtime to provide complex, adaptive functionality are currently one of the main development paradigms in software engineering. However, the Quality of Service (QoS) delivered by these systems remains an important concern, and needs to be managed in an equally adaptive and predictable way. To address this need, we introduce a novel, tool-supported framework for the development of adaptive service-based systems called QoSMOS (QoS Management and Optimization of Service-based systems). QoSMOS can be used to develop service-based systems that achieve their QoS requirements through dynamically adapting to changes in the system state, environment, and workload. QoSMOS service-based systems translate high-level QoS requirements specified by their administrators into probabilistic temporal logic formulae, which are then formally and automatically analyzed to identify and enforce optimal system configurations. The QoSMOS self-adaptation mechanism can handle reliability and performance-related QoS requirements, and can be integrated into newly developed solutions or legacy systems. The effectiveness and scalability of the approach are validated using simulations and a set of experiments based on an implementation of an adaptive service-based system for remote medical assistance.",
"Modern software systems are often characterized by uncertainty and changes in the environment in which they are embedded. Hence, they must be designed as adaptive systems. We propose a framework that supports adaptation to non-functional manifestations of uncertainty. Our framework allows engineers to derive, from an initial model of the system, a finite state automaton augmented with probabilities. The system is then executed by an interpreter that navigates the automaton and invokes the component implementations associated to the states it traverses. The interpreter adapts the execution by choosing among alternative possible paths of the automaton in order to maximize the system's ability to meet its non-functional requirements. To demonstrate the adaptation capabilities of the proposed approach we implemented an adaptive application inspired by an existing worldwide distributed mobile application and we discussed several adaptation scenarios.",
"Numerous organisations are considering moving at least some of their existing applications to the cloud. A key motivating factor for this fast-paced adoption of cloud is the expectation of cost savings. Estimating what these cost savings might be requires comparing the known cost of running an application in-house with a predicted cost of its cloud deployment. A major problem with this approach is the lack of suitable techniques for predicting the cost of the virtual machines (VMs) that a cloud-deployed application requires in order to achieve a given service-level agreement. We introduce a technique that addresses this problem by using established results from queueing network theory to predict the minimum VM cost of cloud deployments starting from existing application logs. We describe how this formal technique can be used to predict the cost-performance trade-offs available for the cloud deployment of an application, and presents a case study based on a real-world webmail service.",
""
]
}
|
1405.4429
|
1965875863
|
We consider compressive imaging problems, where images are reconstructed from a reduced number of linear measurements. Our objective is to improve over existing compressive imaging algorithms in terms of both reconstruction error and runtime. To pursue our objective, we propose compressive imaging algorithms that employ the approximate message passing (AMP) framework. AMP is an iterative signal reconstruction algorithm that performs scalar denoising at each iteration; in order for AMP to reconstruct the original input signal well, a good denoiser must be used. We apply two wavelet-based image denoisers within AMP. The first denoiser is the “amplitude-scale-invariant Bayes estimator” (ABE), and the second is an adaptive Wiener filter; we call our AMP-based algorithms for compressive imaging AMP-ABE and AMP-Wiener. Numerical results show that both AMP-ABE and AMP-Wiener significantly improve over the state of the art in terms of runtime. In terms of reconstruction quality, AMP-Wiener offers lower mean-square error (MSE) than existing compressive imaging algorithms. In contrast, AMP-ABE has higher MSE, because ABE does not denoise as well as the adaptive Wiener filter.
|
Many compressive imaging algorithms have been proposed in the literature. For example, Som and Schniter @cite_37 modeled the structure of the wavelet coefficients by a hidden Markov tree (HMT), and applied a turbo scheme that alternates between inference on the HMT structure with standard belief propagation and inference on the compressed sensing measurement structure with the generalized approximate message passing algorithm. He and Carin @cite_20 proposed a hierarchical Bayesian approach with Markov chain Monte Carlo (MCMC) for natural image reconstruction. Soni and Haupt @cite_24 exploited a hierarchical dictionary learning method @cite_34 and assumed that projecting images onto the learned dictionary would yield tree-sparsity, so that the nonzero supports of the dictionary can be identified and estimated accurately by setting an appropriate threshold.
|
{
"cite_N": [
"@cite_24",
"@cite_37",
"@cite_34",
"@cite_20"
],
"mid": [
"2113421750",
"1500149156",
"33214042",
"2049502219"
],
"abstract": [
"Breakthrough results in compressive sensing (CS) have shown that high dimensional signals (vectors) can often be accurately recovered from a relatively small number of non-adaptive linear projection observations, provided that they possess a sparse representation in some basis. Subsequent efforts have established that the reconstruction performance of CS can be improved by employing additional prior signal knowledge, such as dependency in the location of the non-zero signal coefficients (structured sparsity) or by collecting measurements sequentially and adaptively, in order to focus measurements into the proper subspace where the unknown signal resides. In this paper, we examine a powerful hybrid of adaptivity and structure. We identify a particular form of structured sparsity that is amenable to adaptive sensing, and using concepts from sparse hierarchical dictionary learning we demonstrate that sparsifying dictionaries exhibiting the appropriate form of structured sparsity can be learned from a collection of training data. The combination of these techniques (structured dictionary learning and adaptive sensing) results in an effective and efficient adaptive compressive acquisition approach which we refer to as LASeR (Learning Adaptive Sensing Representations.)",
"We propose a novel algorithm for compressive imaging that exploits both the sparsity and persistence across scales found in the 2D wavelet transform coefficients of natural images. Like other recent works, we model wavelet structure using a hidden Markov tree (HMT) but, unlike other works, ours is based on loopy belief propagation (LBP). For LBP, we adopt a recently proposed “turbo” message passing schedule that alternates between exploitation of HMT structure and exploitation of compressive-measurement structure. For the latter, we leverage Donoho, Maleki, and Montanari's recently proposed approximate message passing (AMP) algorithm. Experiments with a large image database suggest that, relative to existing schemes, our turbo LBP approach yields state-of-the-art reconstruction performance with substantial reduction in complexity.",
"We propose to combine two approaches for modeling data admitting sparse representations: on the one hand, dictionary learning has proven effective for various signal processing tasks. On the other hand, recent work on structured sparsity provides a natural framework for modeling dependencies between dictionary elements. We thus consider a tree-structured sparse regularization to learn dictionaries embedded in a hierarchy. The involved proximal operator is computable exactly via a primal-dual method, allowing the use of accelerated gradient techniques. Experiments show that for natural image patches, learned dictionary elements organize themselves in such a hierarchical structure, leading to an improved performance for restoration tasks. When applied to text documents, our method learns hierarchies of topics, thus providing a competitive alternative to probabilistic topic models.",
"Bayesian compressive sensing (CS) is considered for signals and images that are sparse in a wavelet basis. The statistical structure of the wavelet coefficients is exploited explicitly in the proposed model, and, therefore, this framework goes beyond simply assuming that the data are compressible in a wavelet basis. The structure exploited within the wavelet coefficients is consistent with that used in wavelet-based compression algorithms. A hierarchical Bayesian model is constituted, with efficient inference via Markov chain Monte Carlo (MCMC) sampling. The algorithm is fully developed and demonstrated using several natural images, with performance comparisons to many state-of-the-art compressive-sensing inversion algorithms."
]
}
|
1405.4429
|
1965875863
|
We consider compressive imaging problems, where images are reconstructed from a reduced number of linear measurements. Our objective is to improve over existing compressive imaging algorithms in terms of both reconstruction error and runtime. To pursue our objective, we propose compressive imaging algorithms that employ the approximate message passing (AMP) framework. AMP is an iterative signal reconstruction algorithm that performs scalar denoising at each iteration; in order for AMP to reconstruct the original input signal well, a good denoiser must be used. We apply two wavelet-based image denoisers within AMP. The first denoiser is the “amplitude-scale-invariant Bayes estimator” (ABE), and the second is an adaptive Wiener filter; we call our AMP-based algorithms for compressive imaging AMP-ABE and AMP-Wiener. Numerical results show that both AMP-ABE and AMP-Wiener significantly improve over the state of the art in terms of runtime. In terms of reconstruction quality, AMP-Wiener offers lower mean-square error (MSE) than existing compressive imaging algorithms. In contrast, AMP-ABE has higher MSE, because ABE does not denoise as well as the adaptive Wiener filter.
|
However, existing compressive imaging algorithms may either not achieve good reconstruction quality or not be fast enough. Therefore, in this paper, we focus on a variation of a fast and effective algorithm called approximate message passing (AMP) @cite_18 to improve over the prior art. AMP is an iterative signal reconstruction algorithm that performs scalar denoising within each iteration, and proper selection of the denoising function used within AMP is needed to obtain better reconstruction quality. One challenge in applying image denoisers within AMP is that it may be hard to compute the so-called "Onsager reaction term" @cite_21 @cite_18 in the AMP iteration steps. The Onsager reaction term includes the derivative of the image denoising function, and thus if an image denoising function does not have a convenient closed form, then the Onsager reaction term may be difficult to compute.
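For context, a standard form of the AMP iteration (a sketch in the common notation of the AMP literature; the symbols below are assumed for illustration and are not quoted from the cited papers) is
\[
x^{t+1} = \eta_t\left(A^{*}z^{t} + x^{t}\right), \qquad
z^{t} = y - A x^{t} + \frac{1}{\delta}\, z^{t-1} \left\langle \eta_{t-1}'\left(A^{*}z^{t-1} + x^{t-1}\right) \right\rangle ,
\]
where A is the measurement matrix, y the vector of measurements, \delta = m/n the measurement rate, \eta_t the scalar denoiser applied at iteration t, and \langle\cdot\rangle the empirical average. The last term in the residual update is the Onsager reaction term, which is why the derivative \eta' of the denoiser must be available or approximated.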
|
{
"cite_N": [
"@cite_18",
"@cite_21"
],
"mid": [
"2082029531",
"1990755770"
],
"abstract": [
"Compressed sensing aims to undersample certain high-dimensional signals yet accurately reconstruct them by exploiting signal characteristics. Accurate reconstruction is possible when the object to be recovered is sufficiently sparse in a known basis. Currently, the best known sparsity–undersampling tradeoff is achieved when reconstructing by convex optimization, which is expensive in important large-scale applications. Fast iterative thresholding algorithms have been intensively studied as alternatives to convex optimization for large-scale problems. Unfortunately known fast algorithms offer substantially worse sparsity–undersampling tradeoffs than convex optimization. We introduce a simple costless modification to iterative thresholding making the sparsity–undersampling tradeoff of the new algorithms equivalent to that of the corresponding convex optimization procedures. The new iterative-thresholding algorithms are inspired by belief propagation in graphical models. Our empirical measurements of the sparsity–undersampling tradeoff for the new algorithms agree with theoretical calculations. We show that a state evolution formalism correctly derives the true sparsity–undersampling tradeoff. There is a surprising agreement between earlier calculations based on random convex polytopes and this apparently very different theoretical formalism.",
"Abstract The Sherrmgton-Kirkpatrick model of a spin glass is solved by a mean field technique which is probably exact in the limit of infinite range interactions. At and above T c the solution is identical to that obtained by Sherrington and Kirkpatrick (1975) using the n → O replica method, but below T c the new result exhibits several differences and remains physical down to T = 0."
]
}
|
1405.4429
|
1965875863
|
We consider compressive imaging problems, where images are reconstructed from a reduced number of linear measurements. Our objective is to improve over existing compressive imaging algorithms in terms of both reconstruction error and runtime. To pursue our objective, we propose compressive imaging algorithms that employ the approximate message passing (AMP) framework. AMP is an iterative signal reconstruction algorithm that performs scalar denoising at each iteration; in order for AMP to reconstruct the original input signal well, a good denoiser must be used. We apply two wavelet-based image denoisers within AMP. The first denoiser is the “amplitude-scale-invariant Bayes estimator” (ABE), and the second is an adaptive Wiener filter; we call our AMP-based algorithms for compressive imaging AMP-ABE and AMP-Wiener. Numerical results show that both AMP-ABE and AMP-Wiener significantly improve over the state of the art in terms of runtime. In terms of reconstruction quality, AMP-Wiener offers lower mean-square error (MSE) than existing compressive imaging algorithms. In contrast, AMP-ABE has higher MSE, because ABE does not denoise as well as the adaptive Wiener filter.
|
Dictionary learning is an effective technique that has attracted a great deal of attention in image denoising. Dictionary learning based methods @cite_32 @cite_7 generally achieve lower reconstruction error than wavelet-based methods. However, the learning procedure requires a large number of training images, and may involve manual tuning. Owing to these limitations, our main focus in this paper is to integrate relatively simple and fast image denoisers into compressive imaging reconstruction algorithms.
|
{
"cite_N": [
"@cite_32",
"@cite_7"
],
"mid": [
"2086962710",
"2027606067"
],
"abstract": [
"Nonparametric Bayesian methods are considered for recovery of imagery based upon compressive, incomplete, and or noisy measurements. A truncated beta-Bernoulli process is employed to infer an appropriate dictionary for the data under test and also for image recovery. In the context of compressive sensing, significant improvements in image recovery are manifested using learned dictionaries, relative to using standard orthonormal image expansions. The compressive-measurement projections are also optimized for the learned dictionary. Additionally, we consider simpler (incomplete) measurements, defined by measuring a subset of image pixels, uniformly selected at random. Spatial interrelationships within imagery are exploited through use of the Dirichlet and probit stick-breaking processes. Several example results are presented, with comparisons to other methods in the literature.",
"In this paper, we present a novel technique, based on compressive sensing principles, for reconstruction and enhancement of multi-dimensional image data. Our method is a major improvement and generalization of the multi-scale sparsity based tomographic denoising (MSBTD) algorithm we recently introduced for reducing speckle noise. Our new technique exhibits several advantages over MSBTD, including its capability to simultaneously reduce noise and interpolate missing data. Unlike MSBTD, our new method does not require an a priori high-quality image from the target imaging subject and thus offers the potential to shorten clinical imaging sessions. This novel image restoration method, which we termed sparsity based simultaneous denoising and interpolation (SBSDI), utilizes sparse representation dictionaries constructed from previously collected datasets. We tested the SBSDI algorithm on retinal spectral domain optical coherence tomography images captured in the clinic. Experiments showed that the SBSDI algorithm qualitatively and quantitatively outperforms other state-of-the-art methods."
]
}
|
1405.4429
|
1965875863
|
We consider compressive imaging problems, where images are reconstructed from a reduced number of linear measurements. Our objective is to improve over existing compressive imaging algorithms in terms of both reconstruction error and runtime. To pursue our objective, we propose compressive imaging algorithms that employ the approximate message passing (AMP) framework. AMP is an iterative signal reconstruction algorithm that performs scalar denoising at each iteration; in order for AMP to reconstruct the original input signal well, a good denoiser must be used. We apply two wavelet-based image denoisers within AMP. The first denoiser is the “amplitude-scale-invariant Bayes estimator” (ABE), and the second is an adaptive Wiener filter; we call our AMP-based algorithms for compressive imaging AMP-ABE and AMP-Wiener. Numerical results show that both AMP-ABE and AMP-Wiener significantly improve over the state of the art in terms of runtime. In terms of reconstruction quality, AMP-Wiener offers lower mean-square error (MSE) than existing compressive imaging algorithms. In contrast, AMP-ABE has higher MSE, because ABE does not denoise as well as the adaptive Wiener filter.
|
@cite_11 developed an image denoising strategy that employs collaborative filtering in a sparse 3-D transform domain, and they offered an efficient implementation that achieves favorable denoising quality. Other efficient denoising schemes include wavelet-based methods. A typical wavelet-based image denoiser proceeds as follows: ( i ) apply a wavelet transform to the image and obtain wavelet coefficients; ( ii ) denoise the wavelet coefficients; and ( iii ) apply an inverse wavelet transform to the denoised wavelet coefficients, yielding a denoised image. Two popular examples of denoisers that can be applied to the wavelet coefficients are hard thresholding and soft thresholding @cite_31 . Variations on the thresholding scheme can be found in @cite_25 @cite_12 ; other wavelet-based methods were proposed by Simoncelli and Adelson @cite_35 , Mıhçak et al. @cite_0 , and Moulin and Liu @cite_22 .
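As an illustration of step ( ii ), the textbook hard- and soft-thresholding rules for a wavelet coefficient w and threshold \tau (standard forms, not specific to any of the cited works) are
\[
\eta_{\mathrm{hard}}(w;\tau) = w \cdot \mathbf{1}\{|w| > \tau\}, \qquad
\eta_{\mathrm{soft}}(w;\tau) = \mathrm{sign}(w)\,\max(|w| - \tau,\, 0),
\]
i.e., hard thresholding keeps coefficients above the threshold unchanged and zeroes the rest, while soft thresholding additionally shrinks the surviving coefficients toward zero by \tau.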
|
{
"cite_N": [
"@cite_35",
"@cite_22",
"@cite_0",
"@cite_31",
"@cite_25",
"@cite_12",
"@cite_11"
],
"mid": [
"2149925139",
"2124335859",
"2065391104",
"2158940042",
"2156706175",
"2125527601",
"2056370875"
],
"abstract": [
"The classical solution to the noise removal problem is the Wiener filter, which utilizes the second-order statistics of the Fourier decomposition. Subband decompositions of natural images have significantly non-Gaussian higher-order point statistics; these statistics capture image properties that elude Fourier-based techniques. We develop a Bayesian estimator that is a natural extension of the Wiener solution, and that exploits these higher-order statistics. The resulting nonlinear estimator performs a \"coring\" operation. We provide a simple model for the subband statistics, and use it to develop a semi-blind noise removal algorithm based on a steerable wavelet pyramid.",
"Research on universal and minimax wavelet shrinkage and thresholding methods has demonstrated near-ideal estimation performance in various asymptotic frameworks. However, image processing practice has shown that universal thresholding methods are outperformed by simple Bayesian estimators assuming independent wavelet coefficients and heavy-tailed priors such as generalized Gaussian distributions (GGDs). In this paper, we investigate various connections between shrinkage methods and maximum a posteriori (MAP) estimation using such priors. In particular, we state a simple condition under which MAP estimates are sparse. We also introduce a new family of complexity priors based upon Rissanen's universal prior on integers. One particular estimator in this class outperforms conventional estimators based on earlier applications of the minimum description length (MDL) principle. We develop analytical expressions for the shrinkage rules implied by GGD and complexity priors. This allows us to show the equivalence between universal hard thresholding, MAP estimation using a very heavy-tailed GGD, and MDL estimation using one of the new complexity priors. Theoretical analysis supported by numerous practical experiments shows the robustness of some of these estimates against mis-specifications of the prior-a basic concern in image processing applications.",
"We introduce a simple spatially adaptive statistical model for wavelet image coefficients and apply it to image denoising. Our model is inspired by a recent wavelet image compression algorithm, the estimation-quantization (EQ) coder. We model wavelet image coefficients as zero-mean Gaussian random variables with high local correlation. We assume a marginal prior distribution on wavelet coefficients variances and estimate them using an approximate maximum a posteriori probability rule. Then we apply an approximate minimum mean squared error estimation procedure to restore the noisy wavelet image coefficients. Despite the simplicity of our method, both in its concept and implementation, our denoising results are among the best reported in the literature.",
"SUMMARY With ideal spatial adaptation, an oracle furnishes information about how best to adapt a spatially variable estimator, whether piecewise constant, piecewise polynomial, variable knot spline, or variable bandwidth kernel, to the unknown function. Estimation with the aid of an oracle offers dramatic advantages over traditional linear estimation by nonadaptive kernels; however, it is a priori unclear whether such performance can be obtained by a procedure relying on the data alone. We describe a new principle for spatially-adaptive estimation: selective wavelet reconstruction. We show that variable-knot spline fits and piecewise-polynomial fits, when equipped with an oracle to select the knots, are not dramatically more powerful than selective wavelet reconstruction with an oracle. We develop a practical spatially adaptive method, RiskShrink, which works by shrinkage of empirical wavelet coefficients. RiskShrink mimics the performance of an oracle for selective wavelet reconstruction as well as it is possible to do so. A new inequality in multivariate normal decision theory which we call the oracle inequality shows that attained performance differs from ideal performance by at most a factor of approximately 2 log n, where n is the sample size. Moreover no estimator can give a better guarantee than this. Within the class of spatially adaptive procedures, RiskShrink is essentially optimal. Relying only on the data, it comes within a factor log 2 n of the performance of piecewise polynomial and variableknot spline methods equipped with an oracle. In contrast, it is unknown how or if piecewise polynomial methods could be made to function this well when denied access to an oracle and forced to rely on data alone.",
"The sparseness and decorrelation properties of the discrete wavelet transform have been exploited to develop powerful denoising methods. However, most of these methods have free parameters which have to be adjusted or estimated. In this paper, we propose a wavelet-based denoising technique without any free parameters; it is, in this sense, a \"universal\" method. Our approach uses empirical Bayes estimation based on a Jeffreys' noninformative prior; it is a step toward objective Bayesian wavelet-based denoising. The result is a remarkably simple fixed nonlinear shrinkage thresholding rule which performs better than other more computationally demanding methods.",
"The first part of this paper proposes an adaptive, data-driven threshold for image denoising via wavelet soft-thresholding. The threshold is derived in a Bayesian framework, and the prior used on the wavelet coefficients is the generalized Gaussian distribution (GGD) widely used in image processing applications. The proposed threshold is simple and closed-form, and it is adaptive to each subband because it depends on data-driven estimates of the parameters. Experimental results show that the proposed method, called BayesShrink, is typically within 5 of the MSE of the best soft-thresholding benchmark with the image assumed known. It also outperforms SureShrink (Donoho and Johnstone 1994, 1995; Donoho 1995) most of the time. The second part of the paper attempts to further validate claims that lossy compression can be used for denoising. The BayesShrink threshold can aid in the parameter selection of a coder designed with the intention of denoising, and thus achieving simultaneous denoising and compression. Specifically, the zero-zone in the quantization step of compression is analogous to the threshold value in the thresholding function. The remaining coder design parameters are chosen based on a criterion derived from Rissanen's minimum description length (MDL) principle. Experiments show that this compression method does indeed remove noise significantly, especially for large noise power. However, it introduces quantization noise and should be used only if bitrate were an additional concern to denoising.",
"We propose a novel image denoising strategy based on an enhanced sparse representation in transform domain. The enhancement of the sparsity is achieved by grouping similar 2D image fragments (e.g., blocks) into 3D data arrays which we call \"groups.\" Collaborative Altering is a special procedure developed to deal with these 3D groups. We realize it using the three successive steps: 3D transformation of a group, shrinkage of the transform spectrum, and inverse 3D transformation. The result is a 3D estimate that consists of the jointly filtered grouped image blocks. By attenuating the noise, the collaborative filtering reveals even the finest details shared by grouped blocks and, at the same time, it preserves the essential unique features of each individual block. The filtered blocks are then returned to their original positions. Because these blocks are overlapping, for each pixel, we obtain many different estimates which need to be combined. Aggregation is a particular averaging procedure which is exploited to take advantage of this redundancy. A significant improvement is obtained by a specially developed collaborative Wiener filtering. An algorithm based on this novel denoising strategy and its efficient implementation are presented in full detail; an extension to color-image denoising is also developed. The experimental results demonstrate that this computationally scalable algorithm achieves state-of-the-art denoising performance in terms of both peak signal-to-noise ratio and subjective visual quality."
]
}
|
1405.4380
|
1983802961
|
In this paper, we study the transport capacity of large multi-hop wireless CSMA networks. Different from previous studies which rely on the use of centralized scheduling algorithm and or centralized routing algorithm to achieve the optimal capacity scaling law, we show that the optimal capacity scaling law can be achieved using entirely distributed routing and scheduling algorithms. Specifically, we consider a network with nodes Poissonly distributed with unit intensity on a @math square @math . Furthermore, each node chooses its destination randomly and independently and transmits following a CSMA protocol. By resorting to the percolation theory and by carefully tuning the three controllable parameters in CSMA protocols, i.e. transmission power, carrier-sensing threshold and count-down timer, we show that a throughput of @math is achievable in distributed CSMA networks. Furthermore, we derive the pre-constant preceding the order of the transport capacity by giving an upper and a lower bound of the transport capacity. The tightness of the bounds is validated using simulations.
|
Improving the spatial frequency reuse of CSMA networks is an important problem that has also been extensively investigated; see @cite_5 @cite_15 @cite_11 for the relevant work. However, a high level of spatial frequency reuse does not directly lead to increased end-to-end throughput, because the latter performance metric also critically depends on the communication strategies, i.e., the routing algorithm and scheduling scheme, used in the network. In this paper we focus on the study of achievable end-to-end throughput.
|
{
"cite_N": [
"@cite_5",
"@cite_15",
"@cite_11"
],
"mid": [
"2137661534",
"2149151217",
"2127242306"
],
"abstract": [
"In multihop wireless ad-hoc networks, the medium access control (MAC) protocol plays a key role in coordinating the access to the shared medium among wireless nodes. Currently, the distributed coordination function (DCF) of the IEEE 802.11 is the dominant MAC protocol for both wireless LANs and wireless multihop ad hoc environment due to its simple implementation and distributed nature. The current access method of the IEEE 802.11 does not make efficient use of the shared channel due to its conservative approach in assessing the level of interference; this in turn affects the spatial reuse of the limited radio resources and highly affect the achieved throughput of a multihop wireless network. This paper surveys various methods that have been proposed in order to enhance the channel utilization by improving the spatial reuse.",
"The importance of spatial reuse in wireless ad hoc networks has been long recognized as a key to improving the network capacity. In this paper, we show that 1) in the case that the achievable channel rate follows the Shannon capacity, spatial reuse depends only on the ratio of the transmit power to the carrier sense threshold and 2) in the case that only a set of discrete data rates are available, as a control knob for sustaining achievable data rates, tuning the transmit power provides more sophisticated rate control over tuning the carrier sense threshold, provided that there is a sufficient number of power levels available. Based on the findings, we then propose a decentralized power and rate control algorithm to enable each node to adjust, based on its signal interference level, its transmit power and data rate. The transmit power is so determined that the transmitter can sustain a high data rate while keeping the adverse interference effect on the other neighboring concurrent transmissions minimal. Simulation results have shown that compared to existing carrier sense threshold tuning algorithms, the proposed power and rate control algorithm yields higher network capacity.",
"In CSMA CA-based, multi-hop, multi-rate wireless networks, spatial reuse can be increased by tuning the carrier-sensing threshold (Tcs) to reduce the carrier sense range (dcs). While reducing dcs enables more concurrent transmissions, the transmission quality suffers from the increased accumulative interference contributed by concurrent transmissions outside dcs. As a result, the data rate at which the transmission can sustain may decrease. How to balance the interplay of spatial reuse and transmission quality (and hence the sustainable data rate) so as to achieve high network capacity is thus an important issue. In this paper, we investigate this issue by extending Cali's model and devising an analytical model that characterizes the transmission activities as governed by IEEE 802.11 DCF in a single-channel, multi-rate, multi-hop wireless network. The systems throughput is derived as a function of Tcs, SINR, beta, and other PHY MAC systems parameters. We incorporate the effect of varying the degree of spatial reuse by tuning the Tcs. Based on the physical radio propagation model, we theoretically estimate the potential accumulated interference contributed by concurrent transmissions and the corresponding SINR. For a given SINR value, we then determine an appropriate data rate at which a transmission can sustain. To the best of our knowledge, this is perhaps the first effort that considers tuning of PHY characteristics (transmit power and data rates) and MAC parameters (contention backoff timer) jointly in an unified framework in order to optimize the overall network throughput. Analytical results indicate that the systems throughput is not a monotonically increasing decreasing function of Tcs, but instead exhibits transitional points where several possible choices of Tcs can be made. In addition, the network capacity can be further improved by choosing the backoff timer values appropriately."
]
}
|
1405.4053
|
2949547296
|
Many machine learning algorithms require the input to be represented as a fixed-length feature vector. When it comes to texts, one of the most common fixed-length features is bag-of-words. Despite their popularity, bag-of-words features have two major weaknesses: they lose the ordering of the words and they also ignore semantics of the words. For example, "powerful," "strong" and "Paris" are equally distant. In this paper, we propose Paragraph Vector, an unsupervised algorithm that learns fixed-length feature representations from variable-length pieces of texts, such as sentences, paragraphs, and documents. Our algorithm represents each document by a dense vector which is trained to predict words in the document. Its construction gives our algorithm the potential to overcome the weaknesses of bag-of-words models. Empirical results show that Paragraph Vectors outperform bag-of-words models as well as other techniques for text representations. Finally, we achieve new state-of-the-art results on several text classification and sentiment analysis tasks.
|
Distributed representations for words were first proposed in @cite_8 and have become a successful paradigm, especially for statistical language modeling @cite_3 @cite_20 @cite_26 . Word vectors have been used in NLP applications such as word representation, named entity recognition, word sense disambiguation, parsing, tagging and machine translation @cite_18 @cite_14 @cite_23 @cite_33 @cite_15 @cite_0 @cite_28 .
|
{
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_14",
"@cite_33",
"@cite_8",
"@cite_28",
"@cite_3",
"@cite_0",
"@cite_23",
"@cite_15",
"@cite_20"
],
"mid": [
"2117130368",
"",
"1662133657",
"2158899491",
"1498436455",
"2118090838",
"2110485445",
"2164019165",
"2158139315",
"1423339008",
"100623710"
],
"abstract": [
"We describe a single convolutional neural network architecture that, given a sentence, outputs a host of language processing predictions: part-of-speech tags, chunks, named entity tags, semantic roles, semantically similar words and the likelihood that the sentence makes sense (grammatically and semantically) using a language model. The entire network is trained jointly on all these tasks using weight-sharing, an instance of multitask learning. All the tasks use labeled data except the language model which is learnt from unlabeled text and represents a novel form of semi-supervised learning for the shared tasks. We show how both multitask learning and semi-supervised learning improve the generalization of the shared tasks, resulting in state-of-the-art-performance.",
"",
"Computers understand very little of the meaning of human language. This profoundly limits our ability to give instructions to computers, the ability of computers to explain their actions to us, and the ability of computers to analyse and process text. Vector space models (VSMs) of semantics are beginning to address these limits. This paper surveys the use of VSMs for semantic processing of text. We organize the literature on VSMs according to the structure of the matrix in a VSM. There are currently three broad classes of VSMs, based on term-document, word-context, and pair-pattern matrices, yielding three classes of applications. We survey a broad range of applications in these three categories and we take a detailed look at a specific open source project in each category. Our goal in this survey is to show the breadth of applications of VSMs for semantics, to provide a new perspective on VSMs for those who are already familiar with the area, and to provide pointers into the literature for those who are less familiar with the field.",
"We propose a unified neural network architecture and learning algorithm that can be applied to various natural language processing tasks including part-of-speech tagging, chunking, named entity recognition, and semantic role labeling. This versatility is achieved by trying to avoid task-specific engineering and therefore disregarding a lot of prior knowledge. Instead of exploiting man-made input features carefully optimized for each task, our system learns internal representations on the basis of vast amounts of mostly unlabeled training data. This work is then used as a basis for building a freely available tagging system with good performance and minimal computational requirements.",
"We describe a new learning procedure, back-propagation, for networks of neurone-like units. The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. As a result of the weight adjustments, internal ‘hidden’ units which are not part of the input or output come to represent important features of the task domain, and the regularities in the task are captured by the interactions of these units. The ability to create useful new features distinguishes back-propagation from earlier, simpler methods such as the perceptron-convergence procedure1.",
"We introduce bilingual word embeddings: semantic embeddings associated across two languages in the context of neural language models. We propose a method to learn bilingual embeddings from a large unlabeled corpus, while utilizing MT word alignments to constrain translational equivalence. The new embeddings significantly out-perform baselines in word semantic similarity. A single semantic similarity feature induced with bilingual embeddings adds near half a BLEU point to the results of NIST08 Chinese-English machine translation task.",
"Time underlies many interesting human behaviors. Thus, the question of how to represent time in connectionist models is very important. One approach is to represent time implicitly by its effects on processing rather than explicitly (as in a spatial representation). The current report develops a proposal along these lines first described by Jordan (1986) which involves the use of recurrent links in order to provide networks with a dynamic memory. In this approach, hidden unit patterns are fed back to themselves; the internal representations which develop thus reflect task demands in the context of prior internal states. A set of simulations is reported which range from relatively simple problems (temporal version of XOR) to discovering syntactic semantic features for words. The networks are able to learn interesting internal representations which incorporate task demands with memory demands; indeed, in this approach the notion of memory is inextricably bound up with task processing. These representations reveal a rich structure, which allows them to be highly context-dependent while also expressing generalizations across classes of items. These representations suggest a method for representing lexical categories and the type token distinction.",
"Unsupervised word representations are very useful in NLP tasks both as inputs to learning algorithms and as extra word features in NLP systems. However, most of these models are built with only local context and one representation per word. This is problematic because words are often polysemous and global context can also provide useful information for learning word meanings. We present a new neural network architecture which 1) learns word embeddings that better capture the semantics of words by incorporating both local and global document context, and 2) accounts for homonymy and polysemy by learning multiple embeddings per word. We introduce a new dataset with human judgments on pairs of words in sentential context, and evaluate our model on it, showing that our model outperforms competitive baselines and other neural language models.",
"If we take an existing supervised NLP system, a simple and general way to improve accuracy is to use unsupervised word representations as extra word features. We evaluate Brown clusters, Collobert and Weston (2008) embeddings, and HLBL (Mnih & Hinton, 2009) embeddings of words on both NER and chunking. We use near state-of-the-art supervised baselines, and find that each of the three word representations improves the accuracy of these baselines. We find further improvements by combining different word representations. You can download our word features, for off-the-shelf use in existing NLP systems, as well as our code, here: http: metaoptimize.com projects wordreprs",
"Recursive structure is commonly found in the inputs of different modalities such as natural scene images or natural language sentences. Discovering this recursive structure helps us to not only identify the units that an image or sentence contains but also how they interact to form a whole. We introduce a max-margin structure prediction architecture based on recursive neural networks that can successfully recover such structure both in complex scene images as well as sentences. The same algorithm can be used both to provide a competitive syntactic parser for natural language sentences from the Penn Treebank and to outperform alternative approaches for semantic scene segmentation, annotation and classification. For segmentation and annotation our algorithm obtains a new level of state-of-the-art performance on the Stanford background dataset (78.1 ). The features from the image parse tree outperform Gist descriptors for scene classification by 4 .",
"A central goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on several methods to speed-up both training and probability computation, as well as comparative experiments to evaluate the improvements brought by these techniques. We finally describe the incorporation of this new language model into a state-of-the-art speech recognizer of conversational speech."
]
}
|
1405.4053
|
2949547296
|
Many machine learning algorithms require the input to be represented as a fixed-length feature vector. When it comes to texts, one of the most common fixed-length features is bag-of-words. Despite their popularity, bag-of-words features have two major weaknesses: they lose the ordering of the words and they also ignore semantics of the words. For example, "powerful," "strong" and "Paris" are equally distant. In this paper, we propose Paragraph Vector, an unsupervised algorithm that learns fixed-length feature representations from variable-length pieces of texts, such as sentences, paragraphs, and documents. Our algorithm represents each document by a dense vector which is trained to predict words in the document. Its construction gives our algorithm the potential to overcome the weaknesses of bag-of-words models. Empirical results show that Paragraph Vectors outperform bag-of-words models as well as other techniques for text representations. Finally, we achieve new state-of-the-art results on several text classification and sentiment analysis tasks.
|
Representing phrases is a recent trend that has received much attention @cite_7 @cite_30 @cite_4 @cite_19 @cite_5 . In this direction, autoencoder-style models have also been used to model paragraphs @cite_17 @cite_25 @cite_6 .
|
{
"cite_N": [
"@cite_30",
"@cite_4",
"@cite_7",
"@cite_6",
"@cite_19",
"@cite_5",
"@cite_25",
"@cite_17"
],
"mid": [
"2100693535",
"42251416",
"1984052055",
"1853745982",
"",
"",
"2157006255",
"2113459411"
],
"abstract": [
"In distributional semantics studies, there is a growing attention in compositionally determining the distributional meaning of word sequences. Yet, compositional distributional models depend on a large set of parameters that have not been explored. In this paper we propose a novel approach to estimate parameters for a class of compositional distributional models: the additive models. Our approach leverages on two main ideas. Firstly, a novel idea for extracting compositional distributional semantics examples. Secondly, an estimation method based on regression models for multiple dependent variables. Experiments demonstrate that our approach outperforms existing methods for determining a good model for compositional distributional semantics.",
"We present a general learning-based approach for phrase-level sentiment analysis that adopts an ordinal sentiment scale and is explicitly compositional in nature. Thus, we can model the compositional effects required for accurate assignment of phrase-level sentiment. For example, combining an adverb (e.g., \"very\") with a positive polar adjective (e.g., \"good\") produces a phrase (\"very good\") with increased polarity over the adjective alone. Inspired by recent work on distributional approaches to compositionality, we model each word as a matrix and combine words using iterated matrix multiplication, which allows for the modeling of both additive and multiplicative semantic effects. Although the multiplication-based matrix-space framework has been shown to be a theoretically elegant way to model composition (Rudolph and Giesbrecht, 2010), training such models has to be done carefully: the optimization is non-convex and requires a good initial starting point. This paper presents the first such algorithm for learning a matrix-space model for semantic composition. In the context of the phrase-level sentiment analysis task, our experimental results show statistically significant improvements in performance over a bag-of-words model.",
"Vector-based models of word meaning have become increasingly popular in cognitive science. The appeal of these models lies in their ability to represent meaning simply by using distributional information under the assumption that words occurring within similar contexts are semantically similar. Despite their widespread use, vector-based models are typically directed at representing words in isolation, and methods for constructing representations for phrases or sentences have received little attention in the literature. This is in marked contrast to experimental evidence (e.g., in sentential priming) suggesting that semantic similarity is more complex than simply a relation between isolated words. This article proposes a framework for representing the meaning of word combinations in vector space. Central to our approach is vector composition, which we operationalize in terms of additive and multiplicative functions. Under this framework, we introduce a wide range of composition models that we evaluate empirically on a phrase similarity task.",
"We introduce a type of Deep Boltzmann Machine (DBM) that is suitable for extracting distributed semantic representations from a large unstructured collection of documents. We overcome the apparent difficulty of training a DBM with judicious parameter tying. This enables an efficient pretraining algorithm and a state initialization scheme for fast inference. The model can be trained just as efficiently as a standard Restricted Boltzmann Machine. Our experiments show that the model assigns better log probability to unseen data than the Replicated Softmax model. Features extracted from our model outperform LDA, Replicated Softmax, and DocNADE models on document retrieval and document classification tasks.",
"",
"",
"We describe a new model for learning meaningful representations of text documents from an unlabeled collection of documents. This model is inspired by the recently proposed Replicated Softmax, an undirected graphical model of word counts that was shown to learn a better generative model and more meaningful document representations. Specifically, we take inspiration from the conditional mean-field recursive equations of the Replicated Softmax in order to define a neural network architecture that estimates the probability of observing a new word in a given document given the previously observed words. This paradigm also allows us to replace the expensive softmax distribution over words with a hierarchical distribution over paths in a binary tree of words. The end result is a model whose training complexity scales logarithmically with the vocabulary size instead of linearly as in the Replicated Softmax. Our experiments show that our model is competitive both as a generative model of documents and as a document representation learning algorithm.",
"Unsupervised vector-based approaches to semantics can model rich lexical meanings, but they largely fail to capture sentiment information that is central to many word meanings and important for a wide range of NLP tasks. We present a model that uses a mix of unsupervised and supervised techniques to learn word vectors capturing semantic term--document information as well as rich sentiment content. The proposed model can leverage both continuous and multi-dimensional sentiment information as well as non-sentiment annotations. We instantiate the model to utilize the document-level sentiment polarity annotations present in many online documents (e.g. star ratings). We evaluate the model using small, widely used sentiment and subjectivity corpora and find it out-performs several previously introduced methods for sentiment classification. We also introduce a large dataset of movie reviews to serve as a more robust benchmark for work in this area."
]
}
|
1405.4053
|
2949547296
|
Many machine learning algorithms require the input to be represented as a fixed-length feature vector. When it comes to texts, one of the most common fixed-length features is bag-of-words. Despite their popularity, bag-of-words features have two major weaknesses: they lose the ordering of the words and they also ignore semantics of the words. For example, "powerful," "strong" and "Paris" are equally distant. In this paper, we propose Paragraph Vector, an unsupervised algorithm that learns fixed-length feature representations from variable-length pieces of texts, such as sentences, paragraphs, and documents. Our algorithm represents each document by a dense vector which is trained to predict words in the document. Its construction gives our algorithm the potential to overcome the weaknesses of bag-of-words models. Empirical results show that Paragraph Vectors outperform bag-of-words models as well as other techniques for text representations. Finally, we achieve new state-of-the-art results on several text classification and sentiment analysis tasks.
|
Distributed representations of phrases and sentences are also the focus of @cite_36 @cite_29 @cite_2 . Their methods typically require parsing and are shown to work for sentence-level representations, and it is not obvious how to extend them beyond single sentences. Their methods are also supervised and thus require more labeled data to work well. Paragraph Vector, in contrast, is mostly unsupervised and thus can work well with less labeled data.
|
{
"cite_N": [
"@cite_36",
"@cite_29",
"@cite_2"
],
"mid": [
"2103305545",
"71795751",
"2251939518"
],
"abstract": [
"Paraphrase detection is the task of examining two sentences and determining whether they have the same meaning. In order to obtain high accuracy on this task, thorough syntactic and semantic analysis of the two statements is needed. We introduce a method for paraphrase detection based on recursive autoencoders (RAE). Our unsupervised RAEs are based on a novel unfolding objective and learn feature vectors for phrases in syntactic trees. These features are used to measure the word- and phrase-wise similarity between two sentences. Since sentences may be of arbitrary length, the resulting matrix of similarity measures is of variable size. We introduce a novel dynamic pooling layer which computes a fixed-sized representation from the variable-sized matrices. The pooled representation is then used as input to a classifier. Our method outperforms other state-of-the-art approaches on the challenging MSRP paraphrase corpus.",
"We introduce a novel machine learning framework based on recursive autoencoders for sentence-level prediction of sentiment label distributions. Our method learns vector space representations for multi-word phrases. In sentiment prediction tasks these representations outperform other state-of-the-art approaches on commonly used datasets, such as movie reviews, without using any pre-defined sentiment lexica or polarity shifting rules. We also evaluate the model's ability to predict sentiment distributions on a new dataset based on confessions from the experience project. The dataset consists of personal user stories annotated with multiple labels which, when aggregated, form a multinomial distribution that captures emotional reactions. Our algorithm can more accurately predict distributions over such labels compared to several competitive baselines.",
"Semantic word spaces have been very useful but cannot express the meaning of longer phrases in a principled way. Further progress towards understanding compositionality in tasks such as sentiment detection requires richer supervised training and evaluation resources and more powerful models of composition. To remedy this, we introduce a Sentiment Treebank. It includes fine grained sentiment labels for 215,154 phrases in the parse trees of 11,855 sentences and presents new challenges for sentiment compositionality. To address them, we introduce the Recursive Neural Tensor Network. When trained on the new treebank, this model outperforms all previous methods on several metrics. It pushes the state of the art in single sentence positive negative classification from 80 up to 85.4 . The accuracy of predicting fine-grained sentiment labels for all phrases reaches 80.7 , an improvement of 9.7 over bag of features baselines. Lastly, it is the only model that can accurately capture the effects of negation and its scope at various tree levels for both positive and negative phrases."
]
}
|
1405.4053
|
2949547296
|
Many machine learning algorithms require the input to be represented as a fixed-length feature vector. When it comes to texts, one of the most common fixed-length features is bag-of-words. Despite their popularity, bag-of-words features have two major weaknesses: they lose the ordering of the words and they also ignore semantics of the words. For example, "powerful," "strong" and "Paris" are equally distant. In this paper, we propose Paragraph Vector, an unsupervised algorithm that learns fixed-length feature representations from variable-length pieces of texts, such as sentences, paragraphs, and documents. Our algorithm represents each document by a dense vector which is trained to predict words in the document. Its construction gives our algorithm the potential to overcome the weaknesses of bag-of-words models. Empirical results show that Paragraph Vectors outperform bag-of-words models as well as other techniques for text representations. Finally, we achieve new state-of-the-art results on several text classification and sentiment analysis tasks.
|
Our approach of computing the paragraph vectors via gradient descent bears resemblance to a successful paradigm in computer vision @cite_9 @cite_35 known as Fisher kernels @cite_16 . The basic construction of Fisher kernels is the gradient vector over an unsupervised generative model.
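For reference, the basic Fisher kernel construction (standard form with notation assumed here for illustration, not quoted from the cited papers) represents a sample x under a generative model p(x | \lambda) by the gradient of its log-likelihood and compares samples through the Fisher information matrix:
\[
G_{\lambda}^{x} = \nabla_{\lambda} \log p(x \mid \lambda), \qquad
K(x, y) = \left(G_{\lambda}^{x}\right)^{\top} F_{\lambda}^{-1}\, G_{\lambda}^{y}, \qquad
F_{\lambda} = \mathbb{E}_{x}\left[ G_{\lambda}^{x} \left(G_{\lambda}^{x}\right)^{\top} \right].
\]
The analogy is that Paragraph Vector likewise derives a fixed-length representation from gradients of an unsupervised model, obtained by gradient descent on the word-prediction objective.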
|
{
"cite_N": [
"@cite_35",
"@cite_9",
"@cite_16"
],
"mid": [
"2071027807",
"2147238549",
"2166473218"
],
"abstract": [
"The problem of large-scale image search has been traditionally addressed with the bag-of-visual-words (BOV). In this article, we propose to use as an alternative the Fisher kernel framework. We first show why the Fisher representation is well-suited to the retrieval problem: it describes an image by what makes it different from other images. One drawback of the Fisher vector is that it is high-dimensional and, as opposed to the BOV, it is dense. The resulting memory and computational costs do not make Fisher vectors directly amenable to large-scale retrieval. Therefore, we compress Fisher vectors to reduce their memory footprint and speed-up the retrieval. We compare three binarization approaches: a simple approach devised for this representation and two standard compression techniques. We show on two publicly available datasets that compressed Fisher vectors perform very well using as little as a few hundreds of bits per image, and significantly better than a very recent compressed BOV approach.",
"Within the field of pattern classification, the Fisher kernel is a powerful framework which combines the strengths of generative and discriminative approaches. The idea is to characterize a signal with a gradient vector derived from a generative probability model and to subsequently feed this representation to a discriminative classifier. We propose to apply this framework to image categorization where the input signals are images and where the underlying generative model is a visual vocabulary: a Gaussian mixture model which approximates the distribution of low-level features in images. We show that Fisher kernels can actually be understood as an extension of the popular bag-of-visterms. Our approach demonstrates excellent performance on two challenging databases: an in-house database of 19 object scene categories and the recently released VOC 2006 database. It is also very practical: it has low computational needs both at training and test time and vocabularies trained on one set of categories can be applied to another set without any significant loss in performance.",
"Generative probability models such as hidden Markov models provide a principled way of treating missing information and dealing with variable length sequences. On the other hand, discriminative methods such as support vector machines enable us to construct flexible decision boundaries and often result in classification performance superior to that of the model based approaches. An ideal classifier should combine these two complementary approaches. In this paper, we develop a natural way of achieving this combination by deriving kernel functions for use in discriminative methods such as support vector machines from generative probability models. We provide a theoretical justification for this combination as well as demonstrate a substantial improvement in the classification performance in the context of DNA and protein sequence analysis."
]
}
|
1405.3391
|
2951880787
|
We propose a simple, yet expressive proof representation from which proofs for different proof assistants can easily be generated. The representation uses only a few inference rules and is based on a frag- ment of first-order logic called coherent logic. Coherent logic has been recognized by a number of researchers as a suitable logic for many ev- eryday mathematical developments. The proposed proof representation is accompanied by a corresponding XML format and by a suite of XSL transformations for generating formal proofs for Isabelle Isar and Coq, as well as proofs expressed in a natural language form (formatted in LATEX or in HTML). Also, our automated theorem prover for coherent logic exports proofs in the proposed XML format. All tools are publicly available, along with a set of sample theorems.
|
The literature contains many results about exchanging proofs between proof assistants using deep or shallow embeddings @cite_7 @cite_6 . Boespflug, Carbonneaux, and Hermant propose to use the @math -calculus as a universal proof language which can express proofs without losing their computational properties @cite_5 . To our knowledge, these works do not focus on the readability of proofs.
|
{
"cite_N": [
"@cite_5",
"@cite_6",
"@cite_7"
],
"mid": [
"2186150166",
"1516654960",
"2786306691"
],
"abstract": [
"The -calculus forms one of the vertices in Barendregt’s -cube and has been used as the core language for a number of logical frameworks. Following earlier extensions of natural deduction [14], Cousineau and Dowek [11] generalize the denitional equality of this well studied calculus to an arbitrary congruence generated by rewrite rules, which allows for more faithful encodings of foreign logics. This paper motivates the resulting language, the -calculus modulo, as a universal proof language, capable of expressing proofs from many other systems without losing their computational properties. We further show how to very simply and eciently check proofs from this language. We have implemented this scheme in a proof checker called Dedukti.",
"We present a new scheme to translate mathematical developments from HOL Light to Coq, where they can be re-used and re-checked. By relying on a carefully chosen embedding of Higher-Order Logic into Type Theory, we try to avoid some pitfalls of inter-operation between proof systems. In particular, our translation keeps the mathematical statements intelligible. This translation has been implemented and allows the importation of the HOL Light basic library into Coq.",
""
]
}
|
1405.3365
|
2400328261
|
An FOL-program consists of a background theory in a decidable fragment of first-order logic and a collection of rules possibly containing first-order formulas. The formalism stems from recent approaches to tight integrations of ASP with description logics. In this paper, we define a well-founded semantics for FOL-programs based on a new notion of unfounded sets on consistent as well as inconsistent sets of literals, and study some of its properties. The semantics is defined for all FOL-programs, including those where it is necessary to represent inconsistencies explicitly. The semantics supports a form of combined reasoning by rules under closed world as well as open world assumptions, and it is a generalization of the standard well-founded semantics for normal logic programs. We also show that the well-founded semantics defined here approximates the well-supported answer set semantics for normal DL programs.
|
The most relevant works in defining well-founded semantics for combining rules with DLs are @cite_4 @cite_10 . The former embeds dl-atoms in rule bodies to serve as queries to the underlying ontology, and it does not allow a predicate in a rule head to be shared with the ontology. In both approaches, syntactic restrictions are imposed so that the least fixpoint is always constructed over sets of consistent literals. It is also a unique feature of our approach that combined reasoning under the closed world and open world assumptions is supported. A program in FO(ID) has a clear knowledge representation task -- the rule component is used to define concepts, whereas the FO component may assert additional properties of the defined concepts. All formulas in FO(ID) are interpreted under the closed world assumption. Thus, FOL-programs and FO(ID) differ fundamentally in their basic ideas. On semantics, FOL formulas can be interpreted flexibly under either the open world or the closed world assumption. On modeling, the rule set in FO(ID) is built on top of ontologies, so information can only flow from the first-order theory to the rules. In FOL-programs, by contrast, the first-order theory and the rules are tightly integrated, so information can flow in both directions.
|
{
"cite_N": [
"@cite_10",
"@cite_4"
],
"mid": [
"2048082592",
"2115303419"
],
"abstract": [
"We present a novel combination of disjunctive programs under the answer set semantics with description logics for the Semantic Web. The combination is based on a well-balanced interface between disjunctive programs and description logics, which guarantees the decidability of the resulting formalism without assuming syntactic restrictions. We show that the new formalism has very nice semantic properties. In particular, it faithfully extends both disjunctive programs and description logics. Furthermore, we describe algorithms for reasoning in the new formalism, and we give a precise picture of its computational complexity. We also define the well-founded semantics for the normal case, where normal programs are combined with tractable description logics, and we explore its semantic and computational properties. In particular, we show that the well-founded semantics approximates the answer set semantics. We also describe algorithms for the problems of consistency checking and literal entailment under the well-founded semantics, and we give a precise picture of their computational complexity. As a crucial property, in the normal case, consistency checking and literal entailment under the well-founded semantics are both tractable in the data complexity, and even first-order rewritable (and thus can be done in LogSpace in the data complexity) in a special case that is especially useful for representing mappings between ontologies.",
"The realization of the Semantic Web vision, in which computational logic has a prominent role, has stimulated a lot of research on combining rules and ontologies, which are formulated in different formalisms. In particular, combining logic programming with the Web Ontology Language (OWL), which is a standard based on description logics, emerged as an important issue for linking the Rules and Ontology Layers of the Semantic Web. Nonmonotonic description logic programs (dl-programs) were introduced for such a combination, in which a pair (L,P) of a description logic knowledge base L and a set of rules P with negation as failure is given a model-based semantics that generalizes the answer set semantics of logic programs. In this article, we reconsider dl-programs and present a well-founded semantics for them as an analog for the other main semantics of logic programs. It generalizes the canonical definition of the well-founded semantics based on unfounded sets, and, as we show, lifts many of the well-known properties from ordinary logic programs to dl-programs. Among these properties, our semantics amounts to a partial model approximating the answer set semantics, which yields for positive and stratified dl-programs, a total model coinciding with the answer set semantics; it has polynomial data complexity provided the access to the description logic knowledge base is polynomial; under suitable restrictions, it has lower complexity and even first-order rewritability is achievable. The results add to previous evidence that dl-programs are a versatile and robust combination approach, which moreover is implementable using legacy engines."
]
}
|
1405.3267
|
1562488694
|
The stochastic block model (SBM) with two communities, or equivalently the planted bisection model, is a popular model of random graph exhibiting a cluster behaviour. In the symmetric case, the graph has two equally sized clusters and vertices connect with probability @math within clusters and @math across clusters. In the past two decades, a large body of literature in statistics and computer science has focused on providing lower-bounds on the scaling of @math to ensure exact recovery. In this paper, we identify a sharp threshold phenomenon for exact recovery: if @math and @math are constant (with @math ), recovering the communities with high probability is possible if @math and impossible if @math . In particular, this improves the existing bounds. This also sets a new line of sight for efficient clustering algorithms. While maximum likelihood (ML) achieves the optimal threshold (by definition), it is in the worst-case NP-hard. This paper proposes an efficient algorithm based on a semidefinite programming relaxation of ML, which is proved to succeed in recovering the communities close to the threshold, while numerical experiments suggest it may achieve the threshold. An efficient algorithm which succeeds all the way down to the threshold is also obtained using a partial recovery algorithm combined with a local improvement procedure.
|
There has been a significant body of literature on the recovery property for the stochastic block model with two communities @math , ranging from computer science and statistics literature to machine learning literature. The approach of McSherry was recently simplified and extended in @cite_9 . We provide next a partial list of works that obtain bounds on the connectivity parameters to ensure recovery with various algorithms. While these algorithmic developments are impressive, we argue next that they do not reveal the sharp behavioral transition that takes place in this model. In particular, we will obtain an improved bound that is shown to be tight.
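
For intuition on the algorithmic side of this literature, the sketch below illustrates the SVD/spectral style of recovery referred to in @cite_9 on a small synthetic planted bisection: the sign pattern of the second eigenvector of the adjacency matrix approximately recovers the two communities. This is only an illustration of the spectral idea, not the SDP relaxation or the partial-recovery-plus-improvement algorithm proposed in the paper; the values of n, p and q are arbitrary example choices.

```python
# Illustrative sketch (not the paper's SDP algorithm): spectral recovery of a
# planted bisection, in the spirit of the SVD-based approach cited above.
import numpy as np

rng = np.random.default_rng(0)
n, p, q = 200, 0.5, 0.1                      # two communities of size n/2 each
labels = np.repeat([0, 1], n // 2)

# Sample a symmetric adjacency matrix of the planted bisection model.
prob = np.where(labels[:, None] == labels[None, :], p, q)
upper = np.triu(rng.random((n, n)) < prob, k=1)
A = (upper | upper.T).astype(float)

# The second eigenvector of A separates the two clusters by sign.
vals, vecs = np.linalg.eigh(A)
second = vecs[:, -2]                         # eigenvector of the 2nd largest eigenvalue
guess = (second > 0).astype(int)

# Agreement up to relabeling of the two communities.
agree = max(np.mean(guess == labels), np.mean(guess != labels))
print(f"fraction of correctly recovered labels: {agree:.3f}")
```
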
|
{
"cite_N": [
"@cite_9"
],
"mid": [
"1510081914"
],
"abstract": [
"Finding a hidden partition in a random environment is a general and important problem, which contains as subproblems many famous questions, such as finding a hidden clique, finding a hidden coloring, finding a hidden bipartition etc. In this paper, we provide a simple SVD algorithm for this purpose, answering a question of McSherry. This algorithm is very easy to implement and works for sparse graphs with optimal density."
]
}
|
1405.3296
|
1467760704
|
This paper introduces algorithm instance games (AIGs) as a conceptual classification applying to games in which outcomes are resolved from joint strategies algorithmically. For such games, a fundamental question asks: How do the details of the algorithm's description influence agents' strategic behavior? We analyze two versions of an AIG based on the set-cover optimization problem. In these games, joint strategies correspond to instances of the set-cover problem, with each subset (of a given universe of elements) representing the strategy of a single agent. Outcomes are covers computed from the joint strategies by a set-cover algorithm. In one variant of this game, outcomes are computed by a deterministic greedy algorithm, and the other variant utilizes a non-deterministic form of the greedy algorithm. We characterize Nash equilibrium strategies for both versions of the game, finding that agents' strategies can vary considerably between the two settings. In particular, we find that the version of the game based on the deterministic algorithm only admits Nash equilibrium in which agents choose strategies (i.e., subsets) containing at most one element, with no two agents picking the same element. On the other hand, in the version of the game based on the non-deterministic algorithm, Nash equilibrium strategies can include agents with zero, one, or every element, and the same element can appear in the strategies of multiple agents.
|
Finally, a number of covering games have appeared in the algorithmic game theory literature in recent years @cite_9 @cite_4 @cite_15 @cite_12 @cite_18 . Although they are based on covering problems like set-cover, the work presented in these papers has little in common with AIGs or the SCIG presented in the current paper. Typically, agents in these covering games correspond to elements in the universal set, and the subsets are given as part of the specification of the game. An agent's strategy then corresponds to a selection of subsets containing their element, and the joint strategy induces a covering of the agents' elements. The distinction is that, in these covering games, joint strategies correspond to solutions to a covering problem, whereas in our work, the joint strategies correspond to instances of a covering problem.
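
As a point of reference for how outcomes are resolved in these games, the following is a minimal sketch of a deterministic greedy set-cover routine of the kind the deterministic variant is based on; breaking ties by lowest index is an assumption made here for concreteness (the non-deterministic variant differs precisely in how such ties are resolved).

```python
# Minimal sketch of a deterministic greedy set-cover routine from which
# outcomes could be computed; tie-breaking by first index is an assumption.
def greedy_set_cover(universe, subsets):
    """universe: set of elements; subsets: list of sets (one per agent)."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        # Pick the subset covering the most uncovered elements (first index on ties).
        best = max(range(len(subsets)), key=lambda i: len(subsets[i] & uncovered))
        if not subsets[best] & uncovered:
            break                      # remaining elements cannot be covered
        cover.append(best)
        uncovered -= subsets[best]
    return cover

# Example joint strategy: three agents, each choosing a subset of {1,...,5}.
print(greedy_set_cover({1, 2, 3, 4, 5}, [{1, 2, 3}, {3, 4}, {4, 5}]))
```
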
|
{
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_9",
"@cite_15",
"@cite_12"
],
"mid": [
"1987309458",
"2073627465",
"2011395778",
"1651518121",
""
],
"abstract": [
"In this work, we study the degree to which small fluctuations in costs in well-studied potential games can impact the result of natural best-response and improved-response dynamics. We call this the Price of Uncertainty and study it in a wide variety of potential games including fair cost-sharing games, set-cover games, routing games, and job-scheduling games. We show that in certain cases, even extremely small fluctuations can have the ability to cause these dynamics to spin out of control and move to states of much higher social cost, whereas in other cases these dynamics are much more stable even to large degrees of fluctuation. We also consider the resilience of these dynamics to a small number of Byzantine players about which no assumptions are made. We show again a contrast between different games. In certain cases (e.g., fair cost-sharing, set-cover, job-scheduling) even a single Byzantine player can cause best-response dynamics to transition from low-cost states to states of substantially higher cost, whereas in others (e.g., the class of β-nice games, which includes routing, market-sharing and many others) these dynamics are much more resilient. Overall, our work can be viewed as analyzing the inherent resilience or safety of games to different kinds of imperfections in player behavior, player information, or in modeling assumptions made.",
"We consider a general class of non-cooperative games related to combinatorial covering and facility location problems. A game is based on an integer programming formulation of the corresponding optimization problem, and each of the k players wants to satisfy a subset of the constraints. For that purpose, resources available in integer units must be bought, and their cost can be shared arbitrarily between players. We consider the existence and cost of exact and approximate pure-strategy Nash equilibria. In general, the prices of anarchy and stability are in @Q(k) and deciding the existence of a pure Nash equilibrium is NP-hard. Under certain conditions, however, cheap Nash equilibria exist, in particular if the integrality gap of the underlying integer program is 1, or in the case of single constraint players. We also present algorithms that compute simultaneously near-stable and near-optimal approximate Nash equilibria in polynomial time.",
"We consider a cost sharing system where users are selfish and act according to their own interest. There is a set of facilities and each facility provides services to a subset of the users. Each user is interested in purchasing a service, and will buy it from the facility offering it at the lowest cost. The overall system performance is defined to be the total cost of the facilities chosen by the users. A central authority can encourage the purchase of services by offering subsidies that reduce their price, in order to improve the system performance. The subsidies are financed by taxes collected from the users. Specifically, we investigate a non-cooperative game, where users join the system, and act according to their best response. We model the system as an instance of a set cover game, where each element is interested in selecting a cover minimizing its payment. The subsidies are updated dynamically, following the selfish moves of the elements and the taxes collected due to their payments. Our objective is to design a dynamic subsidy mechanism that improves on the overall system performance while collecting as taxes only a small fraction of the sum of the payments of the users. The performance of such a subsidy mechanism is thus defined by two different quality parameters: (i) the price of anarchy, defined as the ratio between the cost of the Nash equilibrium obtained and the cost of an optimal solution; and (ii) the taxation ratio, defined as the fraction of payments collected as taxes from the users. We investigate two different models: (i) an integral model in which each element is covered by a single set; and (ii) a fractional model in which an element can be fractionally covered by several sets. Let f denote the maximum number of sets that an element can belong to. For the fractional model, we provide a subsidy mechanism such that, for any e≤1, the price of anarchy is @math and the taxation ratio is e. For the integral model, we provide a subsidy mechanism such that, for any e≤1, the price of anarchy is @math and the taxation ratio is e, where n is the number of elements.",
"Given a collection @math of weighted subsets of a ground set @math , the set cover problem is to find a minimum weight subset of @math which covers all elements of @math . We study a strategic game defined upon this classical optimization problem. Every element of @math is a player which chooses one set of @math where it appears. Following a public tax function, every player is charged a fraction of the weight of the set that it has selected. Our motivation is to design a tax function having the following features: it can be implemented in a distributed manner, existence of an equilibrium is guaranteed and the social cost for these equilibria is minimized.",
""
]
}
|
1405.2833
|
218494213
|
The increase in data storage and power consumption at data-centers has made it imperative to design energy efficient Distributed Storage Systems (DSS). The energy efficiency of DSS is strongly influenced not only by the volume of data, frequency of data access and redundancy in data storage, but also by the heterogeneity exhibited by the DSS in these dimensions. To this end, we propose and analyze the energy efficiency of a heterogeneous distributed storage system in which @math storage servers (disks) store the data of @math distinct classes. Data of class @math is encoded using a @math erasure code and the (random) data retrieval requests can also vary across classes. We show that the energy efficiency of such systems is closely related to the average latency and hence motivates us to study the energy efficiency via the lens of average latency. Through this connection, we show that erasure coding serves the dual purpose of reducing latency and increasing energy efficiency. We present a queuing theoretic analysis of the proposed model and establish upper and lower bounds on the average latency for each data class under various scheduling policies. Through extensive simulations, we present qualitative insights which reveal the impact of coding rate, number of servers, service distribution and number of redundant requests on the average latency and energy efficiency of the DSS.
|
A number of good MDS codes such as LINUX RAID-6 and array codes (EVENODD codes, X-code, RDP codes) have been developed to encode the data stored on the cloud (see @cite_37 and references therein). These codes have very low encoding/decoding complexity as they avoid Galois Field arithmetic (unlike the classical Reed-Solomon MDS codes) and involve only XOR operations. However, they are usually applicable to only two or three disk failures. Also, in the event of disk failure(s), array codes and the recently introduced Regenerating codes reduce disk and network I/O, respectively. Recently, non-MDS codes such as Tornado, Raptor and LRC codes @cite_7 @cite_12 have been developed for erasure coded storage. Although their fault-tolerance is not as good as that of MDS codes, they achieve higher performance due to lower repair bandwidth and I/O costs.
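
To make the XOR-only flavor of these codes concrete, the toy sketch below shows a single-parity example: one parity block over k data blocks, repairable after any single erasure using XOR alone. Real array codes such as EVENODD, X-code and RDP combine several such parities to tolerate two or three failures; the block contents here are arbitrary.

```python
# Toy illustration of XOR-only erasure coding: a single parity block over k
# data blocks tolerates any one erasure.  Real array codes use several such
# parities to survive two or three disk failures.
def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

data = [b"disk0AAA", b"disk1BBB", b"disk2CCC"]   # k = 3 equal-sized blocks
parity = xor_blocks(data)                        # stored on a 4th disk

# Simulate losing disk 1 and repairing it from the surviving blocks + parity.
recovered = xor_blocks([data[0], data[2], parity])
assert recovered == data[1]
print("recovered:", recovered)
```
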
|
{
"cite_N": [
"@cite_37",
"@cite_12",
"@cite_7"
],
"mid": [
"2951975749",
"",
"2112034660"
],
"abstract": [
"We consider the setting of data storage across n nodes in a distributed manner. A data collector (DC) should be able to reconstruct the entire data by connecting to any k out of the n nodes and downloading all the data stored in them. When a node fails, it has to be regenerated back using the existing nodes. In a recent paper, have obtained an information theoretic lower bound for the repair bandwidth. Also, there has been additional interest in storing data in systematic form as no post processing is required when DC connects to k systematic nodes. Because of their preferred status there is a need to regenerate back any systematic node quickly and exactly. Replacement of a failed node by an exact replica is termed Exact Regeneration.In this paper, we consider the problem of minimizing the repair bandwidth for exact regeneration of the systematic nodes. The file to be stored is of size B and each node can store alpha = B k units of data. A failed systematic node is regenerated by downloading beta units of data each from d existing nodes. We give a lower bound for the repair bandwidth for exact regeneration of the systematic nodes which matches with the bound given by For d >= 2k-1 we give an explicit code construction which minimizes the repair bandwidth when the existing k-1 systematic nodes participate in the regeneration. We show the existence and construction of codes that achieve the bound for d >= 2k-3. Here we also establish the necessity of interference alignment. We prove that the bound is not achievable for d = 2k-1.",
"",
"With the increasing adoption of cloud computing for data storage, assuring data service reliability, in terms of data correctness and availability, has been outstanding. While redundancy can be added into the data for reliability, the problem becomes challenging in the “pay-as-you-use” cloud paradigm where we always want to efficiently resolve it for both corruption detection and data repair. Prior distributed storage systems based on erasure codes or network coding techniques have either high decoding computational cost for data users, or too much burden of data repair and being online for data owners. In this paper, we design a secure cloud storage service which addresses the reliability issue with near-optimal overall performance. By allowing a third party to perform the public integrity verification, data owners are significantly released from the onerous work of periodically checking data integrity. To completely free the data owner from the burden of being online after data outsourcing, this paper proposes an exact repair solution so that no metadata needs to be generated on the fly for repaired data. The performance analysis and experimental results show that our designed service has comparable storage and communication cost, but much less computational cost during data retrieval than erasure codes-based storage solutions. It introduces less storage cost, much faster data retrieval, and comparable communication cost comparing to network coding-based distributed storage systems."
]
}
|
1405.2833
|
218494213
|
The increase in data storage and power consumption at data-centers has made it imperative to design energy efficient Distributed Storage Systems (DSS). The energy efficiency of DSS is strongly influenced not only by the volume of data, frequency of data access and redundancy in data storage, but also by the heterogeneity exhibited by the DSS in these dimensions. To this end, we propose and analyze the energy efficiency of a heterogeneous distributed storage system in which @math storage servers (disks) store the data of @math distinct classes. Data of class @math is encoded using a @math erasure code and the (random) data retrieval requests can also vary across classes. We show that the energy efficiency of such systems is closely related to the average latency and hence motivates us to study the energy efficiency via the lens of average latency. Through this connection, we show that erasure coding serves the dual purpose of reducing latency and increasing energy efficiency. We present a queuing theoretic analysis of the proposed model and establish upper and lower bounds on the average latency for each data class under various scheduling policies. Through extensive simulations, we present qualitative insights which reveal the impact of coding rate, number of servers, service distribution and number of redundant requests on the average latency and energy efficiency of the DSS.
|
The latency analysis of (MDS) erasure coded @math homogeneous DSS has been well investigated in @cite_19 @cite_39 @cite_33 , which provide queuing theoretic bounds on average latency. A related line of work @cite_24 @cite_31 independently showed that sending requests to multiple servers always reduces the (read) latency. Then Liang et al. @cite_28 extended the latency analysis to a @math DSS, in which @math out of a total of @math independent servers are used to store the @math MDS code. It assumed a "constant+exponential" model for the service time of jobs. The authors in @cite_4 @cite_26 developed load-adaptive algorithms that dynamically vary job size, coding rate and number of parallel connections to improve the delay-throughput tradeoff of key-value storage systems. These solutions were extended to heterogeneous services with a mixture of job sizes and coding rates. Recently, Xiang et al. @cite_8 provided a tight upper bound on average latency, assuming an arbitrary erasure code, multiple file types and a general service time distribution. This was then used to solve a joint latency and storage cost optimization problem by optimizing over the choice of erasure code, placement of encoded chunks and the choice of scheduling policy.
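
As a rough illustration of the queueing-theoretic bounds referenced above, the sketch below checks the standard fork-join intuition under the common i.i.d. exponential service-time assumption: ignoring queueing delay, an (n, k) read completes when the k fastest of n parallel chunk requests finish, so its mean is (H_n - H_{n-k})/mu. The parameters n, k and mu are illustrative only; this is not the bound derived in any of the cited works.

```python
# Back-of-the-envelope check of the (n, k) fork-join intuition: with i.i.d.
# Exp(mu) service times and no queueing, a read completes when the k fastest
# of n parallel chunk requests finish, so E[T] = (H_n - H_{n-k}) / mu.
import numpy as np

def harmonic(m):
    return sum(1.0 / i for i in range(1, m + 1))

n, k, mu, trials = 10, 6, 2.0, 200_000
analytic = (harmonic(n) - harmonic(n - k)) / mu

rng = np.random.default_rng(1)
samples = rng.exponential(1.0 / mu, size=(trials, n))
simulated = np.sort(samples, axis=1)[:, k - 1].mean()   # k-th fastest completion

print(f"analytic E[T] = {analytic:.4f}, simulated E[T] = {simulated:.4f}")
```
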
|
{
"cite_N": [
"@cite_26",
"@cite_4",
"@cite_33",
"@cite_8",
"@cite_28",
"@cite_39",
"@cite_24",
"@cite_19",
"@cite_31"
],
"mid": [
"",
"2022138967",
"",
"2072019125",
"2057934903",
"",
"1976448792",
"2064224609",
""
],
"abstract": [
"",
"Our paper presents solutions that can significantly improve the delay performance of putting and retrieving data in and out of cloud storage. We first focus on measuring the delay performance of a very popular cloud storage service Amazon S3. We establish that there is significant randomness in service times for reading and writing small and medium size objects when assigned distinct keys. We further demonstrate that using erasure coding, parallel connections to storage cloud and limited chunking (i.e., dividing the object into a few smaller objects) together pushes the envelope on service time distributions significantly (e.g., 76 , 80 , and 85 reductions in mean, 90th, and 99th percentiles for 2-MB files) at the expense of additional storage (e.g., 1.75x). However, chunking and erasure coding increase the load and hence the queuing delays while reducing the supportable rate region in number of requests per second per node. Thus, in the second part of our paper, we focus on analyzing the delay performance when chunking, forward error correction (FEC), and parallel connections are used together. Based on this analysis, we develop load-adaptive algorithms that can pick the best code rate on a per-request basis by using offline computed queue backlog thresholds. The solutions work with homogeneous services with fixed object sizes, chunk sizes, operation type (e.g., read or write) as well as heterogeneous services with mixture of object sizes, chunk sizes, and operation types. We also present a simple greedy solution that opportunistically uses idle connections and picks the erasure coding rate accordingly on the fly. Both backlog-based and greedy solutions support the full rate region and provide best mean delay performance when compared to the best fixed coding rate policy. Our evaluations show that backlog-based solutions achieve better delay performance at higher percentile values than the greedy solution.",
"",
"Modern distributed storage systems offer large capacity to satisfy the exponentially increasing need of storage space. They often use erasure codes to protect against disk and node failures to increase reliability, while trying to meet the latency requirements of the applications and clients. This paper provides an insightful upper bound on the average service delay of such erasure-coded storage with arbitrary service time distribution and consisting of multiple heterogeneous files. Not only does the result supersede known delay bounds that only work for homogeneous files, it also enables a novel problem of joint latency and storage cost minimization over three dimensions: selecting the erasure code, placement of encoded chunks, and optimizing scheduling policy. The problem is efficiently solved via the computation of a sequence of convex approximations with provable convergence. We further prototype our solution in an open-source, cloud storage deployment over three geographically distributed data centers. Experimental results validate our theoretical delay analysis and show significant latency reduction, providing valuable insights into the proposed latency-cost tradeoff in erasure-coded storage.",
"The declining costs of commodity disk drives is rapidly changing the economics of deploying large amounts of online or near-line storage. Conventional mass storage systems use either high performance RAID clusters, automated tape libraries or a combination of tape and disk. In this paper, we analyze an alternative design using massive arrays of idle disks, or MAID. We argue that this storage organization provides storage densities matching or exceeding those of tape libraries with performance similar to disk arrays. Moreover, we show that with effective power management of individual drives, this performance can be achieved using a very small power budget. In particular, we show that our power management strategy can result in the performance comparable to an always-on RAID system while using 1 15th the power of such a RAID system.",
"",
"In this paper, we study the problem of reducing the delay of downloading data from cloud storage systems by leveraging multiple parallel threads, assuming that the data has been encoded and stored in the clouds using fixed rate forward error correction (FEC) codes with parameters (n, k). That is, each file is divided into k equal-sized chunks, which are then expanded into n chunks such that any k chunks out of the n are sufficient to successfully restore the original file. The model can be depicted as a multiple-server queue with arrivals of data retrieving requests and a server corresponding to a thread. However, this is not a typical queueing model because a server can terminate its operation, depending on when other servers complete their service (due to the redundancy that is spread across the threads). Hence, to the best of our knowledge, the analysis of this queueing model remains quite uncharted. Recent traces from Amazon S3 show that the time to retrieve a fixed size chunk is random and can be approximated as a constant delay plus an i.i.d. exponentially distributed random variable. For the tractability of the theoretical analysis, we assume that the chunk downloading time is i.i.d. exponentially distributed. Under this assumption, we show that any work-conserving scheme is delay-optimal among all on-line scheduling schemes when k = 1. When k > 1, we find that a simple greedy scheme, which allocates all available threads to the head of line request, is delay optimal among all on-line scheduling schemes. We also provide some numerical results that point to the limitations of the exponential assumption, and suggest further research directions.",
"We study how coding in distributed storage reduces expected download time, in addition to providing reliability against disk failures. The expected download time is reduced because when a content file is encoded with redundancy and distributed across multiple disks, reading only a subset of the disks is sufficient for content reconstruction. For the same total storage used, coding exploits the diversity in storage better than simple replication, and hence gives faster download. We use a novel fork-join queueing framework to model multiple users requesting the content simultaneously, and derive bounds on the expected download time. Our system model and results are a novel generalization of the fork-join system that is studied in queueing theory literature. Our results demonstrate the fundamental trade-off between the expected download time and the amount of storage space. This trade-off can be used for design of the amount of redundancy required to meet the delay constraints on content delivery.",
""
]
}
|
1405.2363
|
253841901
|
Sampled-data (SD) systems, which are composed of both discrete- and continuous-time components, are arguably one of the most common classes of cyberphysical systems in practice; most modern controllers are implemented on digital platforms while the plant dynamics that are being controlled evolve continuously in time. As with all cyberphysical systems, ensuring hard constraint satisfaction is key in the safe operation of SD systems. A powerful analytical tool for guaranteeing such constraint satisfaction is the viability kernel: the set of all initial conditions for which a safety-preserving control law (that is, a control law that satisfies all input and state constraints) exists. In this paper we present a novel sampling-based algorithm that tightly approximates the viability kernel for high-dimensional sampled-data linear time-invariant (LTI) systems. Unlike prior work in this area, our algorithm formally handles both the discrete and continuous characteristics of SD systems. We prove the correctness and convergence of our approximation technique, provide discussions on heuristic methods to optimally bias the sampling process, and demonstrate the results on a twelve-dimensional flight envelope protection problem.
|
Algorithms from within the MPC community have also emerged that enable the computation of the viability kernel for discrete-time LTI systems with polytopic constraints @cite_5 . Because these algorithms recursively compute the Minkowski sum, linear transformation, and intersection of polytopes, they can only be applied to low-dimensional systems; the number of vertices of the resulting polytope grows rapidly with each subsequent Minkowski sum operation, while the intersection operation at each iteration requires a vertex-to-facet enumeration of polytopes---an operation that is known to be intractable in high dimensions @cite_27 . In more general contexts (e.g., for continuous-time systems), an ellipsoidal approximation of the region of attraction of the MPC is computed as a (crude) representation of the viability kernel @cite_35 . These, as well as other approximation techniques such as @math -contractive polytopes @cite_11 , generally require the existence of a stabilizing controller within the constraints---a requirement that may not always be readily satisfied.
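
The 2-D sketch below illustrates, under stated simplifications, why these recursive polytope operations become expensive: each Minkowski sum is formed from all pairwise vertex sums followed by a convex-hull step, and the vertex count of the result keeps growing with every iteration (in two dimensions the growth is only additive, but in higher dimensions it is far more severe, which is the bottleneck noted above). The example sets are random and purely illustrative.

```python
# Small 2-D illustration of why recursive Minkowski sums of polytopes get
# expensive: each sum is formed from all pairwise vertex sums followed by a
# convex-hull step, and the vertex count keeps growing with every iteration.
import numpy as np
from scipy.spatial import ConvexHull

def minkowski_sum(P, Q):
    """P, Q: arrays of convex-polygon vertices, shapes (m, 2) and (p, 2)."""
    sums = (P[:, None, :] + Q[None, :, :]).reshape(-1, 2)
    hull = ConvexHull(sums)
    return sums[hull.vertices]

rng = np.random.default_rng(2)
poly = rng.random((5, 2))                    # an initial set of vertices
disturbance = rng.random((6, 2)) * 0.1       # e.g. a bounded disturbance set

for step in range(4):
    poly = minkowski_sum(poly, disturbance)
    print(f"iteration {step + 1}: {len(poly)} vertices")
```
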
|
{
"cite_N": [
"@cite_5",
"@cite_27",
"@cite_35",
"@cite_11"
],
"mid": [
"",
"1016068550",
"2057894283",
"2072064204"
],
"abstract": [
"",
"Every convex polytope is both the intersection of a finite set of halfspaces and the convex hull of a finite vertex set. Transforming from the halfspaces (vertices, respectively) to the vertices (halfspaces, respectively) is called vertex enumeration (facet enumeration, respectively). It is an open problem whether there is an algorithm for these two problems polynomial in the input and the output size. For each of the known methods, this thesis develops a characterization of what constitutes an easy or difficult input. Example families of polytopes are presented that show that none of the known methods will yield a polynomial algorithm. On the other hand, a family of polytopes difficult for one class of algorithms can (sometimes) be easily solvable for another class of algorithms; the characterizations given here can be used to guide a choice of algorithms. Similarly, although the general problems of vertex and facet enumeration are equivalent by the duality of convex polytopes, for fixed polytope family and algorithm, one of these directions can be much easier than the other. This thesis presents a new class of algorithms that use the easy direction as an oracle to solve the seemingly difficult direction.",
"This paper addresses the attraction domain of model-based predictive control (MPC) for nonlinear systems with control input and state constraints. Based on a stability condition of nonlinear MPC, a method to determine the terminal weighting term in the performance index and the terminal stabilising control law to maximise the domain of attraction of the nonlinear MPC is proposed. The problem of maximisation of the attraction region is recast as a well-defined optimisation problem. By an LMI based optimisation approach, the terminal weighting item and fictitious terminal stabilising control law axe optimised to enlarge the attraction domain and hence the feasibility domain of the nonlinear MPC method. The proposed method is illustrated by a numerical example and favourably compared with existing results.",
"This paper presents a new (geometrical) approach to the computation of polyhedral (robustly) positively invariant (PI) sets for general (possibly discontinuous) nonlinear discrete-time systems possibly affected by disturbances. Given a @b-contractive ellipsoidal set E, the key idea is to construct a polyhedral set that lies between the ellipsoidal sets @bE and E. A proof that the resulting polyhedral set is contractive and thus, PI, is given, and a new algorithm is developed to construct the desired polyhedral set. The problem of computing polyhedral invariant sets is formulated as a number of quadratic programming (QP) problems. The number of QP problems is guaranteed to be finite and therefore, the algorithm has finite termination. An important application of the proposed algorithm is the computation of polyhedral terminal constraint sets for model predictive control based on quadratic costs."
]
}
|
1405.2494
|
1989588773
|
We study the framework of abductive logic programming extended with integrity constraints. For this framework, we introduce a new measure of the simplicity of an explanation based on its degree of arbitrariness : the more arbitrary the explanation, the less appealing it is, with explanations having no arbitrariness — they are called constrained — being the preferred ones. In the paper, we study basic properties of constrained explanations. For the case when programs in abductive theories are stratified we establish results providing a detailed picture of the complexity of the problem to decide whether constrained explanations exist.
|
Abduction was introduced to artificial intelligence in the early 1970s by Harry Pople Jr., and it is now commonly understood as the @cite_3 . Over the years several criteria have been proposed to identify the preferred (best) explanations, all rooted in the parsimony principle of Occam's razor. The most commonly considered one is subset-minimality @cite_2 @cite_20 @cite_6 . A more restrictive condition of minimum cardinality has also been broadly studied @cite_8 . The abductive reasoning formalism we study in this paper uses logic programs to represent background knowledge in abductive theories; it is referred to as abductive logic programming @cite_10 @cite_15 @cite_17 . Abductive explanations which allow the removal of hypotheses were first introduced by Inoue and Sakama. The importance of abductive logic programming to knowledge representation was argued by Denecker and De Schreye. It was applied in diagnosis @cite_21 , planning @cite_7 @cite_13 , natural language understanding @cite_11 , and case-based reasoning @cite_9 . Denecker and Kakas provide a comprehensive survey of the area.
|
{
"cite_N": [
"@cite_13",
"@cite_11",
"@cite_7",
"@cite_8",
"@cite_9",
"@cite_21",
"@cite_6",
"@cite_3",
"@cite_2",
"@cite_15",
"@cite_10",
"@cite_20",
"@cite_17"
],
"mid": [
"2487701457",
"2063499502",
"2041428960",
"1540899940",
"",
"2133291246",
"",
"1546644035",
"",
"2174466192",
"1509180139",
"1967802770",
"105821165"
],
"abstract": [
"",
"Abstract We investigate the relationship between various alternative semantics for logic programming, viz. the stable model semantics of Gelfond and Lifschitz (1988), the supported model semantics as developed by Apt, Blair and Walker (1988), autoepistemic translations (cf. Moore (1985)) of general logic programs and default translations of general logic programs, Reiter (1980).",
"",
"Contents: Abduction and Diagnostic Inference.- Computational Models for Diagnostic Problem Solving.- Basics of Parsimonious Covering Theory.- Probabilistic Causal Model.- Diagnostic Strategies in the Probabilistic Causal Model.- Causal Chaining.- Parallel Processing for Diagnostic Problem-Solving.- Conclusion.- Bibliography.- Index.",
"",
"Several artificial intelligence architectures and systems based on \"deep\" models of a domain have been proposed, in particular for the diagnostic task. These systems have several advantages over traditional knowledge based systems, but they have a main limitation in their computational complexity. One of the ways to face this problem is to rely on a knowledge compilation phase, which produces knowledge that can be used more effectively with respect to the original one. We show how a specific knowledge compilation approach can focus reasoning in abductive diagnosis, and, in particular, can improve the performances of AID, an abductive diagnosis system. The approach aims at focusing the overall diagnostic cycle in two interdependent ways: avoiding the generation of candidate solutions to be discarded a posteriori and integrating the generation of candidate solutions with discrimination among different candidates. Knowledge compilation is used off-line to produce operational (i.e., easily evaluated) conditions that embed the abductive reasoning strategy and are used in addition to the original model, with the goal of ruling out parts of the search space or focusing on parts of it. The conditions are useful to solve most cases using less time for computing the same solutions, yet preserving all the power of the model-based system for dealing with multiple faults and explaining the solutions. Experimental results showing the advantages of the approach are presented.",
"",
"Introduction 1. Conceptual analysis of abduction: what is abduction? 2. Knowledge-based systems and the science of AI: 3. Two RED systems 4. Generalizing the control strategy 5. More kinds of knowledge: TIPS and PATHEX LIVER TIPS 6. Better task analysis, better strategy 7. Computational complexity of abduction 8. Diagnostic systems MDX2 and QUADS 9. Practical abduction 10. Perception and language understanding Appendices.",
"",
"",
"Abduction in Logic Programming started in the late 80s, early 90s, in an attempt to extend logic programming into a framework suitable for a variety of problems in Artificial Intelligence and other areas of Computer Science. This paper aims to chart out the main developments of the field over the last ten years and to take a critical view of these developments from several perspectives: logical, epistemological, computational and suitability to application. The paper attempts to expose some of the challenges and prospects for the further development of the field.",
"Abstract There are two distinct formalizations for reasoning from observations to explanations, as in diagnostic tasks. The consistency based approach treats the task as a deductive one, in which the explanation is deduced from a background theory and a minimal set of abnormalities. The abductive method, on the other hand, treats explanations as sentences that, when added to the background theory, derive the observations. We show that there is a close connection between these two formalizations in the context of simple causal theories: domain theories in which a set of sentences are singled out as the explanatorily relevant causes of observations. There are two main results, which show that (with certain caveats) the consistency based approach can emulate abductive reasoning by adding closure axioms to a causal theory; and that abductive techniques can be used in place of the consistency based method in the domain of logic based diagnosis. It is especially interesting that in the latter case, the abductive techniques generate only relevant explanations, while diagnoses may have irrelevant elements.",
""
]
}
|
1405.2262
|
2951764010
|
We present a method for training a deep neural network containing sinusoidal activation functions to fit to time-series data. Weights are initialized using a fast Fourier transform, then trained with regularization to improve generalization. A simple dynamic parameter tuning method is employed to adjust both the learning rate and regularization term, such that stability and efficient training are both achieved. We show how deeper layers can be utilized to model the observed sequence using a sparser set of sinusoid units, and how non-uniform regularization can improve generalization by promoting the shifting of weight toward simpler units. The method is demonstrated with time-series problems to show that it leads to effective extrapolation of nonlinear trends.
|
A more sophisticated group of methods involves neural networks with recurrent connections @cite_34 . These produce their own internal representation of state. (See Figure .B.) This enables them to learn how much to adjust their representations of state based on observed values, and hence operate in a manner more robust against noisy observations.
|
{
"cite_N": [
"@cite_34"
],
"mid": [
"2139122335"
],
"abstract": [
"The paper first summarizes a general approach to the training of recurrent neural networks by gradient-based algorithms, which leads to the introduction of four families of training algorithms. Because of the variety of possibilities thus available to the \"neural network designer,\" the choice of the appropriate algorithm to solve a given problem becomes critical. We show that, in the case of process modeling, this choice depends on how noise interferes with the process to be modeled; this is evidenced by three examples of modeling of dynamical processes, where the detrimental effect of inappropriate training algorithms on the prediction error made by the network is clearly demonstrated. >"
]
}
|
1405.2262
|
2951764010
|
We present a method for training a deep neural network containing sinusoidal activation functions to fit to time-series data. Weights are initialized using a fast Fourier transform, then trained with regularization to improve generalization. A simple dynamic parameter tuning method is employed to adjust both the learning rate and regularization term, such that stability and efficient training are both achieved. We show how deeper layers can be utilized to model the observed sequence using a sparser set of sinusoid units, and how non-uniform regularization can improve generalization by promoting the shifting of weight toward simpler units. The method is demonstrated with time-series problems to show that it leads to effective extrapolation of nonlinear trends.
|
In early instantiations, recurrent neural networks struggled to retain internal state over long periods of time because the logistic activation functions typically used with neural networks tend to diminish the stored values with each time step. Long Short-Term Memory (LSTM) architectures address this problem by using linear units within the recurrent loops @cite_10 . This advance has made recurrent neural networks much more capable of modeling time-series data. Recent advances in deep neural network learning have also helped to improve the training of recurrent neural networks @cite_14 @cite_33 @cite_42 @cite_31 .
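
A tiny numerical illustration of this point, using assumed toy weights: iterating a squashing (logistic) recurrence drives any stored value toward a fixed point, so the initial state is forgotten, whereas a unit-weight linear recurrence, the idea behind the LSTM memory cell, preserves it exactly.

```python
# Toy contrast between a squashing recurrence, which forgets its initial
# value, and a unit-weight linear recurrence, which preserves it exactly.
# Weights and horizon are arbitrary example values.
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

steps = 50
squashed, linear = 0.9, 0.9       # the "state" we would like to remember
for _ in range(steps):
    squashed = logistic(0.8 * squashed)   # recurrent loop through a sigmoid
    linear = 1.0 * linear                 # recurrent loop through a linear unit

print(f"after {steps} steps: squashed = {squashed:.4f}, linear = {linear:.4f}")
```
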
|
{
"cite_N": [
"@cite_14",
"@cite_33",
"@cite_42",
"@cite_31",
"@cite_10"
],
"mid": [
"1969067576",
"1650018112",
"2106725254",
"2143612262",
""
],
"abstract": [
"Abstract Graphics processing unit (GPU) is used for a faster artificial neural network. It is used to implement the matrix multiplication of a neural network to enhance the time performance of a text detection system. Preliminary results produced a 20-fold performance enhancement using an ATI RADEON 9700 PRO board. The parallelism of a GPU is fully utilized by accumulating a lot of input feature vectors and weight vectors, then converting the many inner-product operations into one matrix operation. Further research areas include benchmarking the performance with various hardware and GPU-aware learning algorithms.",
"With the help of neural networks, data sets with many dimensions can be analyzed to find lower dimensional structures within them.",
"Existing Nonlinear dimensionality reduction (NLDR) algorithms make the assumption that distances between observations are uniformly scaled. Unfortunately, with many interesting systems, this assumption does not hold. We present a new technique called Temporal NLDR (TNLDR), which is specifically designed for analyzing the high-dimensional observations obtained from random-walks with dynamical systems that have external controls. It uses the additional information implicit in ordered sequences of observations to compensate for non-uniform scaling in observation space. We demonstrate that TNLDR computes more accurate estimates of intrinsic state than regular NLDR, and we show that accurate estimates of state can be used to train accurate models of dynamical systems.",
"Recurrent neural networks (RNNs) are a powerful model for sequential data. End-to-end training methods such as Connectionist Temporal Classification make it possible to train RNNs for sequence labelling problems where the input-output alignment is unknown. The combination of these methods with the Long Short-term Memory RNN architecture has proved particularly fruitful, delivering state-of-the-art results in cursive handwriting recognition. However RNN performance in speech recognition has so far been disappointing, with better results returned by deep feedforward networks. This paper investigates deep recurrent neural networks, which combine the multiple levels of representation that have proved so effective in deep networks with the flexible use of long range context that empowers RNNs. When trained end-to-end with suitable regularisation, we find that deep Long Short-term Memory RNNs achieve a test set error of 17.7 on the TIMIT phoneme recognition benchmark, which to our knowledge is the best recorded score.",
""
]
}
|
1405.2262
|
2951764010
|
We present a method for training a deep neural network containing sinusoidal activation functions to fit to time-series data. Weights are initialized using a fast Fourier transform, then trained with regularization to improve generalization. A simple dynamic parameter tuning method is employed to adjust both the learning rate and regularization term, such that stability and efficient training are both achieved. We show how deeper layers can be utilized to model the observed sequence using a sparser set of sinusoid units, and how non-uniform regularization can improve generalization by promoting the shifting of weight toward simpler units. The method is demonstrated with time-series problems to show that it leads to effective extrapolation of nonlinear trends.
|
Many other approaches, besides Fourier neural networks, have been proposed for fitting to time-series data. Some popular approaches include wavelet networks @cite_13 @cite_4 @cite_39 @cite_6 @cite_27 @cite_25 , and support vector machines @cite_15 @cite_18 @cite_12 .
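
A minimal sketch of the support-vector recipe mentioned above, assuming scikit-learn is available: embed the series into lagged feature vectors and fit a support vector regressor. The toy series, lag length and hyperparameters are arbitrary and untuned.

```python
# Minimal sketch of support-vector time-series prediction: lagged feature
# vectors fed to a support vector regressor.  Requires scikit-learn.
import numpy as np
from sklearn.svm import SVR

t = np.arange(300)
series = np.sin(2 * np.pi * t / 25) + 0.05 * t        # toy nonlinear trend

lags = 10
X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
y = series[lags:]

split = 250
model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X[:split], y[:split])
pred = model.predict(X[split:])
print("test RMSE:", np.sqrt(np.mean((pred - y[split:]) ** 2)))
```
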
|
{
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_6",
"@cite_39",
"@cite_27",
"@cite_15",
"@cite_13",
"@cite_25",
"@cite_12"
],
"mid": [
"",
"2030032088",
"2137076380",
"",
"1970876195",
"2032170121",
"",
"1980418485",
"2172064003"
],
"abstract": [
"",
"A new technique, wavelet network, is introduced to predict chaotic time series. By using this technique, firstly, we make accurate short-term predictions of the time series from chaotic attractors. Secondly, we make accurate predictions of the values and bifurcation structures of the time series from dynamical systems whose parameter values are changing with time. Finally we predict chaotic attractors by making long-term predictions based on remarkably few data points, where the correlation dimensions of predicted attractors are calculated and are found to be almost identical to those of actual attractors.",
"The effectiveness of a multiscale neural net architecture for time series prediction of nonlinear dynamic systems is investigated. The prediction task is simplified by decomposing different scales of past windows into different scales of wavelets, and predicting the coefficients of each scale of wavelets by means of a separate multilayer perceptron. The short-term history is decomposed into the lower scales of wavelet coefficients, which are utilized for detailed analysis and prediction, while the long-term history is decomposed into higher scales of wavelet coefficients that are used for the analysis and prediction of slow trends in the time series. These coordinated scales of time and frequency provide an interpretation of the series structures, and more information about the history of the series, using fewer coefficients than other methods. Results concerning scales of time and frequencies are combined by another expert perceptron, which learns the weight of each scale in the goal-prediction of the original time series. Each network is trained by backpropagation. The weights and biases are initialized by a clustering algorithm of the temporal patterns of the time series, which improves the prediction results as compared to random initialization. The suggested multiscale architecture outperforms the corresponding single-scale architectures. The employment of improved learning methods for each of the ScaleNet networks can further improve the prediction results.",
"",
"Since EEG is one of the most important sources of information in therapy of epilepsy, several researchers tried to address the issue of decision support for such a data. In this paper, we introduce two fundamentally different approaches for designing classification models (classifiers); the traditional statistical method based on logistic regression and the emerging computationally powerful techniques based on artificial neural networks (ANNs). Logistic regression as well as feedforward error backpropagation artificial neural networks (FEBANN) and wavelet neural networks (WNN) based classifiers were developed and compared in relation to their accuracy in classification of EEG signals. In these methods we used FFT and autoregressive (AR) model by using maximum likelihood estimation (MLE) of EEG signals as an input to classification system with two discrete outputs: epileptic seizure or nonepileptic seizure. By identifying features in the signal we want to provide an automatic system that will support a physician in the diagnosing process. By applying AR with MLE in connection with WNN, we obtained novel and reliable classifier architecture. The network is constructed by the error backpropagation neural network using Morlet mother wavelet basic function as node activation function. The comparisons between the developed classifiers were primarily based on analysis of the receiver operating characteristic (ROC) curves as well as a number of scalar performance measures pertaining to the classification. The WNN-based classifier outperformed the FEBANN and logistic regression based counterpart. Within the same group, the WNN-based classifier was more accurate than the FEBANN-based classifier, and the logistic regression-based classifier.",
"Support vector machine (SVM) is a very specific type of learning algorithms characterized by the capacity control of the decision function, the use of the kernel functions and the sparsity of the solution. In this paper, we investigate the predictability of financial movement direction with SVM by forecasting the weekly movement direction of NIKKEI 225 index. To evaluate the forecasting ability of SVM, we compare its performance with those of Linear Discriminant Analysis, Quadratic Discriminant Analysis and Elman Backpropagation Neural Networks. The experiment results show that SVM outperforms the other classification methods. Further, we propose a combining model by integrating SVM with the other classification methods. The combining model performs best among all the forecasting methods.",
"",
"A local linear wavelet neural network (LLWNN) is presented in this paper. The difference of the network with conventional wavelet neural network (WNN) is that the connection weights between the hidden layer and output layer of conventional WNN are replaced by a local linear model. A hybrid training algorithm of particle swarm optimization (PSO) with diversity learning and gradient descent method is introduced for training the LLWNN. Simulation results for the prediction of time-series show the feasibility and effectiveness of the proposed method.",
"Time series prediction techniques have been used in many real-world applications such as financial market prediction, electric utility load forecasting , weather and environmental state prediction, and reliability forecasting. The underlying system models and time series data generating processes are generally complex for these applications and the models for these systems are usually not known a priori. Accurate and unbiased estimation of the time series data produced by these systems cannot always be achieved using well known linear techniques, and thus the estimation process requires more advanced time series prediction algorithms. This paper provides a survey of time series prediction applications using a novel machine learning approach: support vector machines (SVM). The underlying motivation for using SVMs is the ability of this methodology to accurately forecast time series data when the underlying system processes are typically nonlinear, non-stationary and not defined a-priori. SVMs have also been proven to outperform other non-linear techniques including neural-network based non-linear prediction techniques such as multi-layer perceptrons.The ultimate goal is to provide the reader with insight into the applications using SVM for time series prediction, to give a brief tutorial on SVMs for time series prediction, to outline some of the advantages and challenges in using SVMs for time series prediction, and to provide a source for the reader to locate books, technical journals, and other online SVM research resources."
]
}
|
1405.1837
|
2952464428
|
Recent research has unveiled the importance of online social networks for improving the quality of recommender systems and encouraged the research community to investigate better ways of exploiting the social information for recommendations. To contribute to this sparse field of research, in this paper we exploit users' interactions along three data sources (marketplace, social network and location-based) to assess their performance in a barely studied domain: recommending products and domains of interests (i.e., product categories) to people in an online marketplace environment. To that end we defined sets of content- and network-based user similarity features for each data source and studied them isolated using an user-based Collaborative Filtering (CF) approach and in combination via a hybrid recommender algorithm, to assess which one provides the best recommendation performance. Interestingly, in our experiments conducted on a rich dataset collected from SecondLife, a popular online virtual world, we found that recommenders relying on user similarity features obtained from the social network data clearly yielded the best results in terms of accuracy in case of predicting products, whereas the features obtained from the marketplace and location-based data sources also obtained very good results in case of predicting categories. This finding indicates that all three types of data sources are important and should be taken into account depending on the level of specialization of the recommendation task.
|
Most of the literature that leverages social data for recommendations is focused on recommending users (e.g., @cite_5 @cite_15 ), tags (e.g., @cite_7 ) or points-of-interest (e.g., @cite_14 ), although some works have exploited social information for item or product recommendation, the most important of which are model-based. @cite_17 introduced SocialMF, a matrix factorization model that incorporates social relations into a rating prediction task, decreasing RMSE with respect to previous work. Similarly, @cite_14 incorporated social information in two matrix factorization models with social regularization, obtaining improvements in both MAE and RMSE for rating prediction. In their evaluation, they concluded that choosing the right similarity feature between users plays an important role in making a more accurate prediction.
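To make the kind of model being described concrete, the following is a schematic matrix factorization objective with a social regularization term. It is an illustrative form only, not the exact objective of @cite_17 or @cite_14 : observed ratings are reconstructed from user and item factors, while each user's factor is pulled toward the factors of their friends in proportion to a chosen user similarity.

```latex
\min_{U,V}\;
  \tfrac{1}{2}\sum_{(i,j)\in\Omega}\bigl(R_{ij}-U_i^{\top}V_j\bigr)^2
  \;+\;\tfrac{\beta}{2}\sum_{i}\sum_{f\in\mathcal{F}(i)}\mathrm{sim}(i,f)\,\lVert U_i-U_f\rVert^2
  \;+\;\tfrac{\lambda}{2}\bigl(\lVert U\rVert_F^2+\lVert V\rVert_F^2\bigr)
```

Here R_ij ranges over the observed ratings in Omega, U_i and V_j are the user and item latent factors, F(i) is the set of user i's friends, and sim(i,f) is the user similarity whose choice the cited evaluation found to be so important for prediction accuracy.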
|
{
"cite_N": [
"@cite_14",
"@cite_7",
"@cite_5",
"@cite_15",
"@cite_17"
],
"mid": [
"2144487656",
"2083381833",
"2109816759",
"2143706358",
"2135598826"
],
"abstract": [
"Although Recommender Systems have been comprehensively analyzed in the past decade, the study of social-based recommender systems just started. In this paper, aiming at providing a general method for improving recommender systems by incorporating social network information, we propose a matrix factorization framework with social regularization. The contributions of this paper are four-fold: (1) We elaborate how social network information can benefit recommender systems; (2) We interpret the differences between social-based recommender systems and trust-aware recommender systems; (3) We coin the term Social Regularization to represent the social constraints on recommender systems, and we systematically illustrate how to design a matrix factorization objective function with social regularization; and (4) The proposed method is quite general, which can be easily extended to incorporate other contextual information, like social tags, etc. The empirical analysis on two large datasets demonstrates that our approaches outperform other state-of-the-art methods.",
"A social tagging system provides users an effective way to collaboratively annotate and organize items with their own tags. A social tagging system contains heterogeneous information like users' tagging behaviors, social networks, tag semantics and item profiles. All the heterogeneous information helps alleviate the cold start problem due to data sparsity. In this paper, we model a social tagging system as a multi-type graph. To learn the weights of different types of nodes and edges, we propose an optimization framework, called OptRank. OptRank can be characterized as follows:(1) Edges and nodes are represented by features. Different types of edges and nodes have different set of features. (2) OptRank learns the best feature weights by maximizing the average AUC (Area Under the ROC Curve) of the tag recommender. We conducted experiments on two publicly available datasets, i.e., Delicious and Last.fm. Experimental results show that: (1) OptRank outperforms the existing graph based methods when only (user, tag, item) relation is available. (2) OptRank successfully improves the results by incorporating social network, tag semantics and item profiles.",
"While social interactions are critical to understanding consumer behavior, the relationship between social and commerce networks has not been explored on a large scale. We analyze Taobao, a Chinese consumer marketplace that is the world's largest e-commerce website. What sets Taobao apart from its competitors is its integrated instant messaging tool, which buyers can use to ask sellers about products or ask other buyers for advice. In our study, we focus on how an individual's commercial transactions are embedded in their social graphs. By studying triads and the directed closure process, we quantify the presence of information passing and gain insights into when different types of links form in the network. Using seller ratings and review information, we then quantify a price of trust. How much will a consumer pay for transaction with a trusted seller? We conclude by modeling this consumer choice problem: if a buyer wishes to purchase a particular product, how does (s)he decide which store to purchase it from? By analyzing the performance of various feature sets in an information retrieval setting, we demonstrate how the social graph factors into understanding consumer behavior.",
"Music plays an important role in our everyday lives. Not surprisingly, shared musical taste is said to lead to social attraction. In this paper, we study in detail friendship links on the social music platform Last.fm, asking for similarities in taste as well as on demographic attributes and local network structure. On Last.fm, users connect to 'online' friends as usual, but also indicate strong 'real-life' friends by co-attending the same events. Thus, we can contrast these online ties with offline links of different strength. Complementing the analysis, we learn to predict both kinds of ties automatically, including public interaction data as additional relevant features. Our results emphasize the predictive power of the simple measure of mutual friends, while the indicative value of similarity on taste (though increasing with tie strength) is negligible.",
"Recommender systems are becoming tools of choice to select the online information relevant to a given user. Collaborative filtering is the most popular approach to building recommender systems and has been successfully employed in many applications. With the advent of online social networks, the social network based approach to recommendation has emerged. This approach assumes a social network among users and makes recommendations for a user based on the ratings of the users that have direct or indirect social relations with the given user. As one of their major benefits, social network based approaches have been shown to reduce the problems with cold start users. In this paper, we explore a model-based approach for recommendation in social networks, employing matrix factorization techniques. Advancing previous work, we incorporate the mechanism of trust propagation into the model. Trust propagation has been shown to be a crucial phenomenon in the social sciences, in social network analysis and in trust-based recommendation. We have conducted experiments on two real life data sets, the public domain Epinions.com dataset and a much larger dataset that we have recently crawled from Flixster.com. Our experiments demonstrate that modeling trust propagation leads to a substantial increase in recommendation accuracy, in particular for cold start users."
]
}
|
1405.1837
|
2952464428
|
Recent research has unveiled the importance of online social networks for improving the quality of recommender systems and encouraged the research community to investigate better ways of exploiting the social information for recommendations. To contribute to this sparse field of research, in this paper we exploit users' interactions along three data sources (marketplace, social network and location-based) to assess their performance in a barely studied domain: recommending products and domains of interest (i.e., product categories) to people in an online marketplace environment. To that end we defined sets of content- and network-based user similarity features for each data source and studied them in isolation using a user-based Collaborative Filtering (CF) approach and in combination via a hybrid recommender algorithm, to assess which one provides the best recommendation performance. Interestingly, in our experiments conducted on a rich dataset collected from SecondLife, a popular online virtual world, we found that recommenders relying on user similarity features obtained from the social network data clearly yielded the best results in terms of accuracy in the case of predicting products, whereas the features obtained from the marketplace and location-based data sources also obtained very good results in the case of predicting categories. This finding indicates that all three types of data sources are important and should be taken into account depending on the level of specialization of the recommendation task.
|
Taking a more general approach, @cite_0 use implicit feedback and social graph data to recommend places and items, evaluating with a ranking task and reporting significant improvements over past related methods. Compared to these state-of-the-art approaches, our focus in this paper is on providing a richer analysis of feature selection (similarity features) with a more comprehensive evaluation than previous works, and in a rarely investigated domain: product recommendation in a social online marketplace. For instance, in @cite_5 or @cite_16 the authors leveraged social interactions between sellers and buyers in order to predict sellers for customers. Other relevant work in this context is the study of Zhang & Pennacchiotti @cite_12 , who showed how top-level categories can be better predicted in a cold-start setting on eBay by exploiting the user's "likes" from Facebook.
|
{
"cite_N": [
"@cite_0",
"@cite_5",
"@cite_16",
"@cite_12"
],
"mid": [
"57543762",
"2109816759",
"2251544961",
"115528473"
],
"abstract": [
"In the age of information overload, collaborative filtering and recommender systems have become essential tools for content discovery. The advent of online social networks has added another approach to recommendation whereby the social network itself is used as a source for recommendations i.e. users are recommended items that are preferred by their friends. In this paper we develop a new model-based recommendation method that merges collaborative and social approaches and utilizes implicit feedback and the social graph data. Employing factor models, we represent each user profile as a mixture of his own and his friends' profiles. This assumes and exploits \"homophily\" in the social network, a phenomenon that has been studied in the social sciences. We test our model on the Epinions data and on the Tuenti Places Recommendation data, a large-scale industry dataset, where it outperforms several state-of-the-art methods.",
"While social interactions are critical to understanding consumer behavior, the relationship between social and commerce networks has not been explored on a large scale. We analyze Taobao, a Chinese consumer marketplace that is the world's largest e-commerce website. What sets Taobao apart from its competitors is its integrated instant messaging tool, which buyers can use to ask sellers about products or ask other buyers for advice. In our study, we focus on how an individual's commercial transactions are embedded in their social graphs. By studying triads and the directed closure process, we quantify the presence of information passing and gain insights into when different types of links form in the network. Using seller ratings and review information, we then quantify a price of trust. How much will a consumer pay for transaction with a trusted seller? We conclude by modeling this consumer choice problem: if a buyer wishes to purchase a particular product, how does (s)he decide which store to purchase it from? By analyzing the performance of various feature sets in an information retrieval setting, we demonstrate how the social graph factors into understanding consumer behavior.",
"In this paper we present the latest results of a recently started project that aims at studying the extent to which links between buyers and sellers, i.e. trading interactions in online trading platforms, can be predicted from external knowledge sources such as online social networks. To that end, we conducted a large-scale experiment on data obtained from the virtual world Second Life. As our results reveal, online social network data bears a significant potential (28 over the baseline) to predict links between buyers and sellers in online trading platforms.",
"In the era of social commerce, users often connect from e-commerce websites to social networking venues such as Facebook and Twitter. However, there have been few efforts on understanding the correlations between users' social media profiles and their e-commerce behaviors. This paper presents a system for predicting a user's purchase behaviors on e-commerce websites from the user's social media profile. We specifically aim at understanding if the user's profile information in a social network (for example Facebook) can be leveraged to predict what categories of products the user will buy from (for example eBay Electronics). The paper provides an extensive analysis on how users' Facebook profile information correlates to purchases on eBay, and analyzes the performance of different feature sets and learning algorithms on the task of purchase behavior prediction."
]
}
|
1405.2102
|
2065943805
|
We propose a method to improve image clustering using sparse text and the wisdom of the crowds. In particular, we present a method to fuse two different kinds of document features, image and text features, and use a common dictionary or "wisdom of the crowds" as the connection between the two different kinds of documents. With the proposed fusion matrix, we use topic modeling via non-negative matrix factorization to cluster documents.
|
The term frequency-inverse document frequency (TF-IDF) is a technique to create a feature matrix from a collection, or corpus, of documents. TF-IDF is a weighting scheme that weights features in documents based on how often a word occurs in an individual document compared with how often it occurs in other documents @cite_4 . TF-IDF has been used for text mining, near duplicate detection, and information retrieval. When dealing with text documents, the natural features to use are words (i.e. delimiting strings by white space to obtain features). We can represent each word by a unique integer.
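As an illustration of this weighting scheme (a minimal sketch, not code from @cite_4 ; the TF and IDF variants used in practice differ across implementations), the following computes a TF-IDF matrix over a toy corpus using whitespace-delimited words as features:

```python
import math
from collections import Counter

def tfidf(corpus):
    """Return (vocabulary, weight matrix) for a list of plain-text documents.

    Term frequency is the raw count; idf = log(N / df), so a word that appears
    in every document receives weight zero.
    """
    docs = [doc.split() for doc in corpus]                # whitespace-delimited features
    vocab = sorted({w for doc in docs for w in doc})      # each word mapped to a unique integer
    word_id = {w: i for i, w in enumerate(vocab)}
    n_docs = len(docs)
    df = Counter(w for doc in docs for w in set(doc))     # number of documents containing each word

    weights = []
    for doc in docs:
        row = [0.0] * len(vocab)
        for w, count in Counter(doc).items():
            row[word_id[w]] = count * math.log(n_docs / df[w])
        weights.append(row)
    return vocab, weights

vocab, W = tfidf(["the cat sat", "the dog sat on the mat", "the cat ran"])
```

In this toy corpus the word "the" occurs in every document and therefore gets zero weight, while the rarer words dominate each document's feature vector.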
|
{
"cite_N": [
"@cite_4"
],
"mid": [
"1978394996"
],
"abstract": [
"The experimental evidence accumulated over the past 20 years indicates that textindexing systems based on the assignment of appropriately weighted single terms produce retrieval results that are superior to those obtainable with other more elaborate text representations. These results depend crucially on the choice of effective term weighting systems. This paper summarizes the insights gained in automatic term weighting, and provides baseline single term indexing models with which other more elaborate content analysis procedures can be compared."
]
}
|
1405.2102
|
2065943805
|
We propose a method to improve image clustering using sparse text and the wisdom of the crowds. In particular, we present a method to fuse two different kinds of document features, image and text features, and use a common dictionary or "wisdom of the crowds" as the connection between the two different kinds of documents. With the proposed fusion matrix, we use topic modeling via non-negative matrix factorization to cluster documents.
|
In order to use text processing techniques for image databases, we generate a collection of image words in two steps. First, we obtain a collection of image features, and then define a mapping from the image features to integers. To obtain image features, we use the scale invariant feature transform (SIFT) @cite_1 . We then use k-means to cluster the image features into @math different clusters. The mapping from each image feature to its cluster is used to identify image words, and results in the image Bag-Of-Words model @cite_5 .
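The following sketch illustrates this pipeline under the assumption that local descriptors (e.g., 128-dimensional SIFT vectors) have already been extracted for each image; random arrays stand in for real descriptors, and scikit-learn's KMeans plays the role of the clustering step:

```python
import numpy as np
from sklearn.cluster import KMeans

# Stand-in descriptors: one (n_i x 128) array per image; a real pipeline would
# obtain these from a SIFT implementation instead of a random generator.
rng = np.random.default_rng(0)
descriptors_per_image = [rng.normal(size=(int(n), 128))
                         for n in rng.integers(50, 200, size=10)]

k = 64  # number of visual words (the clusters)
kmeans = KMeans(n_clusters=k, n_init=10, random_state=0)
kmeans.fit(np.vstack(descriptors_per_image))      # learn the visual vocabulary

# Each image becomes a histogram of visual-word occurrences: its bag-of-words vector.
bow_vectors = np.array([np.bincount(kmeans.predict(d), minlength=k)
                        for d in descriptors_per_image])
```

The resulting bag-of-words vectors can then be fed to the same TF-IDF weighting and topic-modeling machinery used for the text documents.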
|
{
"cite_N": [
"@cite_5",
"@cite_1"
],
"mid": [
"2107034620",
"2151103935"
],
"abstract": [
"We propose a novel approach to learn and recognize natural scene categories. Unlike previous work, it does not require experts to annotate the training set. We represent the image of a scene by a collection of local regions, denoted as codewords obtained by unsupervised learning. Each region is represented as part of a \"theme\". In previous work, such themes were learnt from hand-annotations of experts, while our method learns the theme distributions as well as the codewords distribution over the themes without supervision. We report satisfactory categorization performances on a large set of 13 categories of complex scenes.",
"This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance."
]
}
|
1405.1167
|
2951451886
|
In the problem of reliable multiparty computation (RC), there are @math parties, each with an individual input, and the parties want to jointly compute a function @math over @math inputs. The problem is complicated by the fact that an omniscient adversary controls a hidden fraction of the parties. We describe a self-healing algorithm for this problem. In particular, for a fixed function @math , with @math parties and @math gates, we describe how to perform RC repeatedly as the inputs to @math change. Our algorithm maintains the following properties, even when an adversary controls up to @math parties, for any constant @math . First, our algorithm performs each reliable computation with the following amortized resource costs: @math messages, @math computational operations, and @math latency, where @math is the depth of the circuit that computes @math . Second, the expected total number of corruptions is @math , after which the adversarially controlled parties are effectively quarantined so that they cause no more corruptions.
|
Our results are inspired by recent work on self-healing algorithms. Early work of @cite_7 @cite_13 @cite_21 @cite_25 @cite_2 discusses different restoration mechanisms to preserve network performance by adding capacity and rerouting traffic streams in the presence of node or link failures. This work presents mathematical models to determine global optimal restoration paths, and provides methods for capacity optimization of path-restorable networks.
|
{
"cite_N": [
"@cite_7",
"@cite_21",
"@cite_2",
"@cite_13",
"@cite_25"
],
"mid": [
"2114450250",
"2138244558",
"2140252921",
"2131856621",
"2171699260"
],
"abstract": [
"With the employment of very high capacity transmission systems in high-speed broadband networks (B-ISDN, broadband integrated services digital network), based upon the asynchronous transfer mode (ATM), the consequences of a link failure have increased, since even during a short disconnection a large volume of data is lost. These networks can be made safer against failure, if in the case of a transmission link outage the affected traffic is routed over still intact network parts. This paper describes various protection switching methods for ATM networks and presents mathematical models which can be used to determine globally optimal restoration paths and to dimension spare capacities in the network. Finally, the results are discussed and the various methods are compared.",
"In self-healing networks, end-to-end restoration schemes have been considered more advantageous than line restoration schemes because of a possible cost reduction of the total capacity to construct a fully restorable network. This paper clarifies the benefit of end-to-end restoration schemes quantitatively through a comparative analysis of the minimum link capacity installation cost. A jointly optimal capacity and flow assignment algorithm is developed for the self-healing ATM networks based on end-to-end and line restoration. Several networks with diverse topological characteristics as well as multiple projected traffic demand patterns are employed in the experiments to see the effect of various network parameters. The results indicate that the network topology has a significant impact on the required resource installation cost for each restoration scheme. Contrary to a wide belief in the economic advantage of the end-to-end restoration scheme, this study reveals that the attainable gain could be marginal for a well-connected and or unbalanced network.",
"This paper studies the capacity and flow assignment problem arising in the design of self-healing asynchronous transfer mode (ATM) networks using the virtual path concept. The problem is formulated here as a linear programming problem which is solved using standard methods. The objective is to minimize the spare capacity cost for the given restoration requirement. The spare cost depends on the restoration strategies used in the network. We compare several restoration strategies quantitatively in terms of spare cost, notably: global versus failure-oriented reconfiguration, path versus link restoration, and state-dependent versus state-independent restoration. The advantages and disadvantages of various restoration strategies are also highlighted. Such comparisons provide useful guidance for real network design. Further, a new heuristic algorithm based on the minimum cost route concept is developed for the design of large self-healing ATM networks using path restoration. Numerical results illustrate that the heuristic algorithm is efficient and gives near-optimal solutions for the spare capacity allocation and flow assignment for tested examples.",
"The total transmission capacity required by a transport network to satisfy demand and protect it from failures contributes significantly to its cost, especially in long-haul networks. Previously, the spare capacity of a network with a given set of working span sizes has been optimized to facilitate span restoration. Path restorable networks can, however, be even more efficient by defining the restoration problem from an end to end rerouting viewpoint. We provide a method for capacity optimization of path restorable networks which is applicable to both synchronous transfer mode (STM) and asynchronous transfer mode (ATM) virtual path (VP)-based restoration. Lower bounds on spare capacity requirements in span and path restorable networks are first compared, followed by an integer program formulation based on flow constraints which solves the spare and or working capacity placement problem in either span or path restorable networks. The benefits of path and span restoration, and of jointly optimizing working path routing and spare capacity placement, are then analyzed.",
"For restoration in the case of single link failures in meshed networks several strategies can be considered: link restoration, path restoration and path restoration with link-disjunct route. In terms of spare capacity requirement path restoration is found to be most attractive. These results are obtained by using two general optimisation techniques (simulated annealing and integer linear programming). Both techniques have each their own features; their applicability depends on the size and type of the problem. Application to WDM fibre networks is discussed."
]
}
|
1405.1167
|
2951451886
|
In the problem of reliable multiparty computation (RC), there are @math parties, each with an individual input, and the parties want to jointly compute a function @math over @math inputs. The problem is complicated by the fact that an omniscient adversary controls a hidden fraction of the parties. We describe a self-healing algorithm for this problem. In particular, for a fixed function @math , with @math parties and @math gates, we describe how to perform RC repeatedly as the inputs to @math change. Our algorithm maintains the following properties, even when an adversary controls up to @math parties, for any constant @math . First, our algorithm performs each reliable computation with the following amortized resource costs: @math messages, @math computational operations, and @math latency, where @math is the depth of the circuit that computes @math . Second, the expected total number of corruptions is @math , after which the adversarially controlled parties are effectively quarantined so that they cause no more corruptions.
|
This paper particularly builds on @cite_8 . That paper describes self-healing algorithms that provide reliable communication, with a minimum of corruptions, even when a Byzantine adversary can take over a constant fraction of the nodes in a network. While our attack model is similar to that of @cite_8 , reliable computation is more challenging than reliable communication, and hence this paper requires a significantly different technical approach. Additionally, we improve the fraction of bad parties that can be tolerated from @math to @math .
|
{
"cite_N": [
"@cite_8"
],
"mid": [
"2113289183"
],
"abstract": [
"Recent years have seen significant interest in designing networks that are self-healing in the sense that they can automatically recover from adversarial attacks. Previous work shows that it is possible for a network to automatically recover, even when an adversary repeatedly deletes nodes in the network. However, there have not yet been any algorithms that self-heal in the case where an adversary takes over nodes in the network. In this paper, we address this gap. In particular, we describe a communication network over n nodes that ensures the following properties, even when an adversary controls up to t ≤ (1 8 '—' e)n nodes, for any non-negative e. First, the network provides a point-to-point communication with bandwidth and latency costs that are asymptotically optimal. Second, the expected total number of message corruptions is O(t(log* n)2) before the adversarially controlled nodes are effectively quarantined so that they cause no more corruptions. Empirical results show that our algorithm can reduce bandwidth cost by up to a factor of 70."
]
}
|
1405.1675
|
2950153056
|
Modelling problems containing a mixture of Boolean and numerical variables is a long-standing interest of Artificial Intelligence. However, performing inference and learning in hybrid domains is a particularly daunting task. The ability to model this kind of domain is crucial in "learning to design" tasks, that is, learning applications where the goal is to learn from examples how to perform automatic de novo design of novel objects. In this paper we present Structured Learning Modulo Theories, a max-margin approach for learning in hybrid domains based on Satisfiability Modulo Theories, which allows combining Boolean reasoning and optimization over continuous linear arithmetical constraints. The main idea is to leverage a state-of-the-art generalized Satisfiability Modulo Theory solver for implementing the inference and separation oracles of Structured Output SVMs. We validate our method on artificial and real-world scenarios.
|
While a number of efficient lifted-inference algorithms have been developed for Relational Continuous Models @cite_57 @cite_61 @cite_5 , performing inference over joint continuous-discrete relational domains is still a challenge. The few existing attempts aim at extending statistical relational learning methods to the hybrid domain.
|
{
"cite_N": [
"@cite_57",
"@cite_5",
"@cite_61"
],
"mid": [
"1922870332",
"2271929959",
""
],
"abstract": [
"Relational Continuous Models (RCMs) represent joint probability densities over attributes of objects, when the attributes have continuous domains. With relational representations, they can model joint probability distributions over large numbers of variables compactly in a natural way. This paper presents a new exact lifted inference algorithm for RCMs, thus it scales up to large models of real world applications. The algorithm applies to Relational Pairwise Models which are (relational) products of potentials of arity 2. Our algorithm is unique in two ways. First, it substantially improves the efficiency of lifted inference with variables of continuous domains. When a relational model has Gaussian potentials, it takes only linear-time compared to cubic time of previous methods. Second, it is the first exact inference algorithm which handles RCMs in a lifted way. The algorithm is illustrated over an example from econometrics. Experimental results show that our algorithm outperforms both a ground-level inference algorithm and an algorithm built with previously-known lifted methods.",
"Lifted message passing algorithms exploit repeated structure within a given graphical model to answer queries efficiently. Given evidence, they construct a lifted network of supernodes and superpotentials corresponding to sets of nodes and potentials that are indistinguishable given the evidence. Recently, efficient algorithms were presented for updating the structure of an existing lifted network with incremental changes to the evidence. In the inference stage, however, current algorithms need to construct a separate lifted network for each evidence case and run a modified message passing algorithm on each lifted network separately. Consequently, symmetries across the inference tasks are not exploited. In this paper, we present a novel lifted message passing technique that exploits symmetries across multiple evidence cases. The benefits of this multi-evidence lifted inference are shown for several important AI tasks such as computing personalized PageRanks and Kalman filters via multievidence lifted Gaussian belief propagation.",
""
]
}
|
1405.1675
|
2950153056
|
Modelling problems containing a mixture of Boolean and numerical variables is a long-standing interest of Artificial Intelligence. However, performing inference and learning in hybrid domains is a particularly daunting task. The ability to model this kind of domain is crucial in "learning to design" tasks, that is, learning applications where the goal is to learn from examples how to perform automatic de novo design of novel objects. In this paper we present Structured Learning Modulo Theories, a max-margin approach for learning in hybrid domains based on Satisfiability Modulo Theories, which allows combining Boolean reasoning and optimization over continuous linear arithmetical constraints. The main idea is to leverage a state-of-the-art generalized Satisfiability Modulo Theory solver for implementing the inference and separation oracles of Structured Output SVMs. We validate our method on artificial and real-world scenarios.
|
Hybrid Probabilistic Relational Models @cite_43 extend Probabilistic Relational Models (PRM) to deal with hybrid domains by specifying templates for hybrid distributions, just as standard PRMs specify templates for discrete distributions. A template instantiation over a database defines a Hybrid Bayesian Network @cite_45 @cite_27 . Inference in Hybrid BNs is known to be hard, and restrictions are typically imposed on the allowed relational structure (e.g. in conditional Gaussian models, discrete nodes cannot have continuous parents). On the other hand, LMT can accommodate arbitrary combinations of predicates from the theories for which a solver is available. These currently include linear arithmetic over both rationals and integers, as well as a number of other theories like strings, arrays and bit-vectors.
|
{
"cite_N": [
"@cite_43",
"@cite_27",
"@cite_45"
],
"mid": [
"1986263554",
"2046584898",
"1570594552"
],
"abstract": [
"The formalism Probabilistic Relational Models (PRM) couples discrete Bayesian Networks with a modeling formalism similar to UML class diagrams and has been used for architecture analysis. PRMs are well-suited to perform architecture analysis with respect to system qualities since they support both modeling and analysis within the same formalism. A particular strength of PRMs is the ability to perform meaningful analysis of domains where there is a high level of uncertainty, as is often the case when performing system quality analysis. However, the use of discrete Bayesian networks in PRMs complicates the analysis of continuous phenomena. The main contribution of this paper is the Hybrid Probabilistic Relational Models (HPRM) formalism which extends PRMs to enable continuous analysis thus extending the applicability for architecture analysis and especially for trade-off analysis of system qualities. HPRMs use hybrid Bayesian networks which allow combinations of discrete and continuous variables. In addition to presenting the HPRM formalism, the paper contains an example which details the use of HPRMs for architecture trade-off analysis.",
"Abstract A scheme is presented for modeling and local computation of exact probabilities, means, and variances for mixed qualitative and quantitative variables. The models assume that the conditional distribution of the quantitative variables, given the qualitative, is multivariate Gaussian. The computational architecture is set up by forming a tree of belief universes, and the calculations are then performed by local message passing between universes. The asymmetry between the quantitative and qualitative variables sets some additional limitations for the specification and propagation structure. Approximate methods when these are not appropriately fulfilled are sketched. It has earlier been shown how to exploit the local structure in the specification of a discrete probability model for fast and efficient computation, thereby paving the way for exploiting probability-based models as parts of realistic systems for planning and decision support. The purpose of this article is to extend this computational s...",
"We survey the literature on methods for inference and learning in Bayesian Networks composed of discrete and continuous nodes, in which the continuous nodes have a multivariate Gaussian distribution, whose mean and variance depends on the values of the discrete nodes. We also briefly consider hybrid Dynamic Bayesian Networks, an extension of switching Kalman filters. This report is meant to summarize what is known at a sufficient level of detail to enable someone to implement the algorithms, but without dwelling on formalities."
]
}
|
1405.1675
|
2950153056
|
Modelling problems containing a mixture of Boolean and numerical variables is a long-standing interest of Artificial Intelligence. However, performing inference and learning in hybrid domains is a particularly daunting task. The ability to model this kind of domain is crucial in "learning to design" tasks, that is, learning applications where the goal is to learn from examples how to perform automatic de novo design of novel objects. In this paper we present Structured Learning Modulo Theories, a max-margin approach for learning in hybrid domains based on Satisfiability Modulo Theories, which allows combining Boolean reasoning and optimization over continuous linear arithmetical constraints. The main idea is to leverage a state-of-the-art generalized Satisfiability Modulo Theory solver for implementing the inference and separation oracles of Structured Output SVMs. We validate our method on artificial and real-world scenarios.
|
Relational Hybrid Models @cite_23 (RHM) extend Relational Continuous Models to represent combinations of discrete and continuous distributions. The authors present a family of lifted variational algorithms for performing efficient inference, showing substantial improvements over their ground counterparts. As for most hybrid SRL approaches which will be discussed further on, the authors focus on efficiently computing probabilities rather than efficiently finding optimal configurations. Exact inference, hard constraints and theories like algebra over integers, which are naturally handled by our LMT framework, are all out of the scope of these approaches. Nonetheless, lifted inference is a powerful strategy to scale up inference and equipping OMT and SMT tools with lifting capabilities is a promising direction for future improvements.
|
{
"cite_N": [
"@cite_23"
],
"mid": [
"2952413404"
],
"abstract": [
"Hybrid continuous-discrete models naturally represent many real-world applications in robotics, finance, and environmental engineering. Inference with large-scale models is challenging because relational structures deteriorate rapidly during inference with observations. The main contribution of this paper is an efficient relational variational inference algorithm that factors largescale probability models into simpler variational models, composed of mixtures of iid (Bernoulli) random variables. The algorithm takes probability relational models of largescale hybrid systems and converts them to a close-to-optimal variational models. Then, it efficiently calculates marginal probabilities on the variational models by using a latent (or lifted) variable elimination or a lifted stochastic sampling. This inference is unique because it maintains the relational structure upon individual observations and during inference steps."
]
}
|
1405.1124
|
182869335
|
Traditional AI reasoning techniques have been used successfully in many domains, including logistics, scheduling and game playing. This paper is part of a project aimed at investigating how such techniques can be extended to coordinate teams of unmanned aerial vehicles (UAVs) in dynamic environments. Specifically challenging are real-world environments where UAVs and other network-enabled devices must communicate to coordinate---and communication actions are neither reliable nor free. Such network-centric environments are common in military, public safety and commercial applications, yet most research (even multi-agent planning) usually takes communications among distributed agents as a given. We address this challenge by developing an agent architecture and reasoning algorithms based on Answer Set Programming (ASP). ASP has been chosen for this task because it enables high flexibility of representation, both of knowledge and of reasoning tasks. Although ASP has been used successfully in a number of applications, and ASP-based architectures have been studied for about a decade, to the best of our knowledge this is the first practical application of a complete ASP-based agent architecture. It is also the first practical application of ASP involving a combination of centralized reasoning, decentralized reasoning, execution monitoring, and reasoning about network communications. This work has been empirically validated using a distributed network-centric software evaluation testbed and the results provide guidance to designers in how to understand and control intelligent systems that operate in these environments.
|
Incorporating network properties into planning and decision-making has been investigated in @cite_6 . The authors' results indicate that plan execution effectiveness and performance increase with greater network-awareness during the planning phase. The UAV coordination approach in the present work combines network-awareness during the reasoning processes with a plan-aware network layer.
|
{
"cite_N": [
"@cite_6"
],
"mid": [
"2039531526"
],
"abstract": [
"As methods for detecting improvised explosive devices (IEDs) continue to diversify, it becomes increasingly important to establish a framework for coordinating distributed IED monitoring resources to best protect a designated area. The purpose of this paper is to establish the beginnings of such a framework in a distributed plan execution context. The first contribution of this paper is defining an automated planning domain for distributed IED detection. In doing so, we investigate approaches for coordinating distributed plan execution resources. Whereas many existing multi-agent system (MAS) frameworks abstract network information from agent decision-making processes, we instead propose that MAS frameworks consider network properties to improve effectiveness. The second contribution of the paper is the description of several types of network-aware planning, execution, and monitoring agents and a comparison of their performance and effectiveness in an IED monitoring scenario. The results of this research ..."
]
}
|
1405.1124
|
182869335
|
Traditional AI reasoning techniques have been used successfully in many domains, including logistics, scheduling and game playing. This paper is part of a project aimed at investigating how such techniques can be extended to coordinate teams of unmanned aerial vehicles (UAVs) in dynamic environments. Specifically challenging are real-world environments where UAVs and other network-enabled devices must communicate to coordinate---and communication actions are neither reliable nor free. Such network-centric environments are common in military, public safety and commercial applications, yet most research (even multi-agent planning) usually takes communications among distributed agents as a given. We address this challenge by developing an agent architecture and reasoning algorithms based on Answer Set Programming (ASP). ASP has been chosen for this task because it enables high flexibility of representation, both of knowledge and of reasoning tasks. Although ASP has been used successfully in a number of applications, and ASP-based architectures have been studied for about a decade, to the best of our knowledge this is the first practical application of a complete ASP-based agent architecture. It is also the first practical application of ASP involving a combination of centralized reasoning, decentralized reasoning, execution monitoring, and reasoning about network communications. This work has been empirically validated using a distributed network-centric software evaluation testbed and the results provide guidance to designers in how to understand and control intelligent systems that operate in these environments.
|
The problem of mission planning for UAVs under communication constraints has been addressed in @cite_15 , where an ad-hoc task allocation process is employed to engage under-utilized UAVs as communication relays. In our work, we do not separate planning from the engagement of under-utilized UAVs, and do not rely on ad-hoc, hard-wired behaviors. Our approach gives the planner more flexibility and finer-grained control of the actions that occur in the plans, and allows for the emergence of sophisticated behaviors without the need to pre-specify them.
|
{
"cite_N": [
"@cite_15"
],
"mid": [
"2034297074"
],
"abstract": [
"A multi-UAV system relies on communications to operate. Failure to communicate remotely sensed mission data to the base may render the system ineffective, and the inability to exchange command and control messages can lead to system failures. This paper describes a unique method to control network communications through distributed task allocation to engage under-utilized UAVs to serve as communication relays and to ensure that the network supports mission tasks. This work builds upon a distributed algorithm previously developed by the authors, CBBA with Relays, which uses task assignment information, including task location and proposed execution time, to predict the network topology and plan support using relays. By explicitly coupling task assignment and relay creation processes, the team is able to optimize the use of agents to address the needs of dynamic complex missions. In this work, the algorithm is extended to explicitly consider realistic network communication dynamics, including path loss, stochastic fading, and information routing. Simulation and flight test results validate the proposed approach, demonstrating that the algorithm ensures both data-rate and interconnectivity bit-error-rate requirements during task execution."
]
}
|
1405.1361
|
2949759817
|
There exist many well-established techniques to recover sparse signals from compressed measurements with known performance guarantees in the static case. However, only a few methods have been proposed to tackle the recovery of time-varying signals, and even fewer benefit from a theoretical analysis. In this paper, we study the capacity of the Iterative Soft-Thresholding Algorithm (ISTA) and its continuous-time analogue the Locally Competitive Algorithm (LCA) to perform this tracking in real time. ISTA is a well-known digital solver for static sparse recovery, whose iteration is a first-order discretization of the LCA differential equation. Our analysis shows that the outputs of both algorithms can track a time-varying signal while compressed measurements are streaming, even when no convergence criterion is imposed at each time step. The L2-distance between the target signal and the outputs of both discrete- and continuous-time solvers is shown to decay to a bound that is essentially optimal. Our analysis is supported by simulations on both synthetic and real data.
|
ISTA is one of the earliest digital algorithms developed for sparse recovery @cite_17 , and although it tends to converge slowly, many state-of-the-art solvers are only slight variations of its simple update rule @cite_13 @cite_10 @cite_5 @cite_32 . Its update rule can be seen as a discretized generalized-gradient step. At the @math iterate, the output is denoted by @math . We indicate the iterate number @math in brackets, to match the notation for the continuous time index, and the @math entry of the vector in a subscript: @math . The activation function @math is the soft-thresholding function with threshold @math . The constant @math represents the size of the gradient step, which is usually required to lie in the interval @math to ensure convergence. Several papers have shown that ISTA converges to the solution @math as @math goes to infinity from any initial point @math with linear rate @cite_27 @cite_2 , i.e., there exist @math such that @math @math
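As an illustration of the iteration being described, here is a minimal sketch of ISTA in its common textbook form for the l1-regularized least-squares problem; the exact formulation in the elided equations of this paragraph may differ in notation and normalization:

```python
import numpy as np

def soft_threshold(u, threshold):
    """Entrywise soft-thresholding: shrink each entry toward zero by `threshold`."""
    return np.sign(u) * np.maximum(np.abs(u) - threshold, 0.0)

def ista(Phi, y, lam, n_iters=500, step=None):
    """Standard ISTA for  min_x 0.5*||y - Phi x||_2^2 + lam*||x||_1.

    `step` is the gradient step size; 1 / ||Phi||_2^2 keeps it inside the
    usual range required for convergence.
    """
    if step is None:
        step = 1.0 / np.linalg.norm(Phi, 2) ** 2
    x = np.zeros(Phi.shape[1])                 # arbitrary initial point
    for _ in range(n_iters):
        grad = Phi.T @ (Phi @ x - y)           # gradient of the quadratic term
        x = soft_threshold(x - step * grad, step * lam)
    return x

# Toy sparse-recovery instance.
rng = np.random.default_rng(0)
Phi = rng.normal(size=(40, 100)) / np.sqrt(40)
x_true = np.zeros(100); x_true[[3, 17, 42]] = [1.0, -2.0, 0.5]
x_hat = ista(Phi, Phi @ x_true, lam=0.05)
```

Each iteration is exactly a gradient step on the quadratic data-fidelity term followed by the soft-thresholding activation, which is the structure the continuous-time LCA discretizes.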
|
{
"cite_N": [
"@cite_13",
"@cite_32",
"@cite_27",
"@cite_2",
"@cite_5",
"@cite_10",
"@cite_17"
],
"mid": [
"2136398689",
"2126607811",
"1986737463",
"",
"2109449402",
"2028349405",
"2115706991"
],
"abstract": [
"We consider the class of Iterative Shrinkage-Thresholding Algorithms (ISTA) for solving linear inverse problems arising in signal image processing. This class of methods is attractive due to its simplicity, however, they are also known to converge quite slowly. In this paper we present a Fast Iterative Shrinkage-Thresholding Algorithm (FISTA) which preserves the computational simplicity of ISTA, but with a global rate of convergence which is proven to be significantly better, both theoretically and practically. Initial promising numerical results for wavelet-based image deblurring demonstrate the capabilities of FISTA.",
"Finding sparse approximate solutions to large underdetermined linear systems of equations is a common problem in signal image processing and statistics. Basis pursuit, the least absolute shrinkage and selection operator (LASSO), wavelet-based deconvolution and reconstruction, and compressed sensing (CS) are a few well-known areas in which problems of this type appear. One standard approach is to minimize an objective function that includes a quadratic (lscr 2) error term added to a sparsity-inducing (usually lscr1) regularizater. We present an algorithmic framework for the more general problem of minimizing the sum of a smooth convex function and a nonsmooth, possibly nonconvex regularizer. We propose iterative methods in which each step is obtained by solving an optimization subproblem involving a quadratic term with diagonal Hessian (i.e., separable in the unknowns) plus the original sparsity-inducing regularizer; our approach is suitable for cases in which this subproblem can be solved much more rapidly than the original problem. Under mild conditions (namely convexity of the regularizer), we prove convergence of the proposed iterative algorithm to a minimum of the objective function. In addition to solving the standard lscr2-lscr1 case, our framework yields efficient solution techniques for other regularizers, such as an lscrinfin norm and group-separable regularizers. It also generalizes immediately to the case in which the data is complex rather than real. Experiments with CS problems show that our approach is competitive with the fastest known methods for the standard lscr2-lscr1 problem, as well as being efficient on problems with other separable regularization terms.",
"In this article a unified approach to iterative soft-thresholding algorithms for the solution of linear operator equations in infinite dimensional Hilbert spaces is presented. We formulate the algorithm in the framework of generalized gradient methods and present a new convergence analysis. As main result we show that the algorithm converges with linear rate as soon as the underlying operator satisfies the so-called finite basis injectivity property or the minimizer possesses a so-called strict sparsity pattern. Moreover it is shown that the constants can be calculated explicitly in special cases (i.e. for compact operators). Furthermore, the techniques also can be used to establish linear convergence for related methods such as the iterative thresholding algorithm for joint sparsity and the accelerated gradient projection method.",
"",
"Many problems in signal processing and statistical inference involve finding sparse solutions to under-determined, or ill-conditioned, linear systems of equations. A standard approach consists in minimizing an objective function which includes a quadratic (squared ) error term combined with a sparseness-inducing regularization term. Basis pursuit, the least absolute shrinkage and selection operator (LASSO), wavelet-based deconvolution, and compressed sensing are a few well-known examples of this approach. This paper proposes gradient projection (GP) algorithms for the bound-constrained quadratic programming (BCQP) formulation of these problems. We test variants of this approach that select the line search parameters in different ways, including techniques based on the Barzilai-Borwein method. Computational experiments show that these GP approaches perform well in a wide range of applications, often being significantly faster (in terms of computation time) than competing methods. Although the performance of GP methods tends to degrade as the regularization term is de-emphasized, we show how they can be embedded in a continuation scheme to recover their efficient practical performance.",
"Iterative shrinkage thresholding (1ST) algorithms have been recently proposed to handle a class of convex unconstrained optimization problems arising in image restoration and other linear inverse problems. This class of problems results from combining a linear observation model with a nonquadratic regularizer (e.g., total variation or wavelet-based regularization). It happens that the convergence rate of these 1ST algorithms depends heavily on the linear observation operator, becoming very slow when this operator is ill-conditioned or ill-posed. In this paper, we introduce two-step 1ST (TwIST) algorithms, exhibiting much faster convergence rate than 1ST for ill-conditioned problems. For a vast class of nonquadratic convex regularizers (lscrP norms, some Besov norms, and total variation), we show that TwIST converges to a minimizer of the objective function, for a given range of values of its parameters. For noninvertible observation operators, we introduce a monotonic version of TwIST (MTwIST); although the convergence proof does not apply to this scenario, we give experimental evidence that MTwIST exhibits similar speed gains over IST. The effectiveness of the new methods are experimentally confirmed on problems of image deconvolution and of restoration with missing samples.",
"We consider linear inverse problems where the solution is assumed to have a sparse expansion on an arbitrary preassigned orthonormal basis. We prove that replacing the usual quadratic regularizing penalties by weighted p-penalties on the coefficients of such expansions, with 1 ≤ p ≤ 2, still regularizes the problem. Use of such p-penalized problems with p < 2 is often advocated when one expects the underlying ideal noiseless solution to have a sparse expansion with respect to the basis under consideration. To compute the corresponding regularized solutions, we analyze an iterative algorithm that amounts to a Landweber iteration with thresholding (or nonlinear shrinkage) applied at each iteration step. We prove that this algorithm converges in norm. © 2004 Wiley Periodicals, Inc."
]
}
|
1405.1361
|
2949759817
|
There exist many well-established techniques to recover sparse signals from compressed measurements with known performance guarantees in the static case. However, only a few methods have been proposed to tackle the recovery of time-varying signals, and even fewer benefit from a theoretical analysis. In this paper, we study the capacity of the Iterative Soft-Thresholding Algorithm (ISTA) and its continuous-time analogue the Locally Competitive Algorithm (LCA) to perform this tracking in real time. ISTA is a well-known digital solver for static sparse recovery, whose iteration is a first-order discretization of the LCA differential equation. Our analysis shows that the outputs of both algorithms can track a time-varying signal while compressed measurements are streaming, even when no convergence criterion is imposed at each time step. The L2-distance between the target signal and the outputs of both discrete- and continuous-time solvers is shown to decay to a bound that is essentially optimal. Our analysis is supported by simulations on both synthetic and real data.
|
Finally, the last class of methods is based on optimization. For instance, in @cite_25 @cite_11 , an optimization program is set up to account for the temporal correlation in the target, and the recovery is performed in batches. In @cite_21 , the best dynamical model is chosen among a family of possible dynamics or parameters. In @cite_34 , a continuation approach is used to update the estimate of the target using the solution at the previous time-step. In @cite_22 @cite_20 @cite_24 @cite_19 , the optimization is solved using low-complexity iterative schemes. Unfortunately, all of the above methods lack strong theoretical convergence and accuracy guarantees in the dynamic case. Lastly, in @cite_8 , a very general projection-based approach is studied. A convergence result is given, but it is not clear how the necessary assumptions apply in the time-varying setting, and it does not come with an accuracy result.
|
{
"cite_N": [
"@cite_22",
"@cite_8",
"@cite_21",
"@cite_24",
"@cite_19",
"@cite_34",
"@cite_25",
"@cite_20",
"@cite_11"
],
"mid": [
"2113661696",
"2149967459",
"2964148637",
"",
"",
"2022240529",
"2002990237",
"",
"2115671189"
],
"abstract": [
"We develop a recursive L1-regularized least squares (SPARLS) algorithm for the estimation of a sparse tap-weight vector in the adaptive filtering setting. The SPARLS algorithm exploits noisy observations of the tap-weight vector output stream and produces its estimate using an expectation-maximization type algorithm. We prove the convergence of the SPARLS algorithm to a near-optimal estimate in a stationary environment and present analytical results for the steady state error. Simulation studies in the context of channel estimation, employing multipath wireless channels, show that the SPARLS algorithm has significant improvement over the conventional widely used recursive least squares (RLS) algorithm in terms of mean squared error (MSE). Moreover, these simulation studies suggest that the SPARLS algorithm (with slight modifications) can operate with lower computational requirements than the RLS algorithm, when applied to tap-weight vectors with fixed support.",
"This paper considers a sparse signal recovery task in time-varying (time-adaptive) environments. The contribution of the paper to sparsity-aware online learning is threefold; first, a generalized thresholding (GT) operator, which relates to both convex and non-convex penalty functions, is introduced. This operator embodies, in a unified way, the majority of well-known thresholding rules which promote sparsity. Second, a non-convexly constrained, sparsity-promoting, online learning scheme, namely the adaptive projection-based generalized thresholding (APGT), is developed that incorporates the GT operator with a computational complexity that scales linearly to the number of unknowns. Third, the novel family of partially quasi-nonexpansive mappings is introduced as a functional analytic tool for treating the GT operator. By building upon the rich fixed point theory, the previous class of mappings establishes also a link between the GT operator and a union of linear subspaces; a non-convex object which lies at the heart of any sparsity promoting technique, batch or online. Based on this functional analytic framework, a convergence analysis of the APGT is provided. Extensive experiments suggest that the APGT exhibits competitive performance when compared to computationally more demanding alternatives, such as the sparsity-promoting affine projection algorithm (APA)- and recursive least-squares (RLS)-based techniques.",
"This paper describes a new online convex optimization method which incorporates a family of candidate dynamical models and establishes novel tracking regret bounds that scale with the comparator's deviation from the best dynamical model in this family. Previous online optimization methods are designed to have a total accumulated loss comparable to that of the best comparator sequence, and existing tracking or shifting regret bounds scale with the overall variation of the comparator sequence. In many practical scenarios, however, the environment is non-stationary and comparator sequences with small variation are quite weak, resulting in large losses. The proposed Dynamic Mirror Descent method, in contrast, can yield low regret relative to highly variable comparator sequences by both tracking the best dynamical model and forming predictions based on that model. This concept is demonstrated empirically in the context of sequential compressive observations of a dynamic scene and tracking a dynamic social network.",
"",
"",
"The theory of compressive sensing (CS) has shown us that under certain conditions, a sparse signal can be recovered from a small number of linear incoherent measurements. An effective class of reconstruction algorithms involve solving a convex optimization program that balances the l1 norm of the solution against a data fidelity term. Tremendous progress has been made in recent years on algorithms for solving these l1 minimization programs. These algorithms, however, are for the most part static: they focus on finding the solution for a fixed set of measurements. In this paper, we present a suite of dynamic algorithms for solving l1 minimization programs for streaming sets of measurements. We consider cases where the underlying signal changes slightly between measurements, and where new measurements of a fixed signal are sequentially added to the system. We develop algorithms to quickly update the solution of several different types of l1 optimization problems whenever these changes occur, thus avoiding having to solve a new optimization problem from scratch. Our proposed schemes are based on homotopy continuation, which breaks down the solution update in a systematic and efficient way into a small number of linear steps. Each step consists of a low-rank update and a small number of matrix-vector multiplications - very much like recursive least squares. Our investigation also includes dynamic updating schemes for l1 decoding problems, where an arbitrary signal is to be recovered from redundant coded measurements which have been corrupted by sparse errors.",
"Compressed sensing (CS) lowers the number of measurements required for reconstruction and estimation of signals that are sparse when expanded over a proper basis. Traditional CS approaches deal with time-invariant sparse signals, meaning that, during the measurement process, the signal of interest does not exhibit variations. However, many signals encountered in practice are varying with time as the observation window increases (e.g., video imaging, where the signal is sparse and varies between different frames). The present paper develops CS algorithms for time-varying signals, based on the least-absolute shrinkage and selection operator (Lasso) that has been popular for sparse regression problems. The Lasso here is tailored for smoothing time-varying signals, which are modeled as vector valued discrete time series. Two algorithms are proposed: the Group-Fused Lasso, when the unknown signal support is time-invariant but signal samples are allowed to vary with time; and the Dynamic Lasso, for the general class of signals with time-varying amplitudes and support. Performance of these algorithms is compared with a sparsity-unaware Kalman smoother, a support-aware Kalman smoother, and the standard Lasso which does not account for time variations. The numerical results amply demonstrate the practical merits of the novel CS algorithms.",
"",
"Using the l1-norm to regularize the least-squares criterion, the batch least-absolute shrinkage and selection operator (Lasso) has well-documented merits for estimating sparse signals of interest emerging in various applications where observations adhere to parsimonious linear regression models. To cope with high complexity, increasing memory requirements, and lack of tracking capability that batch Lasso estimators face when processing observations sequentially, the present paper develops a novel time-weighted Lasso (TWL) approach. Performance analysis reveals that TWL cannot estimate consistently the desired signal support without compromising rate of convergence. This motivates the development of a time- and norm-weighted Lasso (TNWL) scheme with l1-norm weights obtained from the recursive least-squares (RLS) algorithm. The resultant algorithm consistently estimates the support of sparse signals without reducing the convergence rate. To cope with sparsity-aware recursive real-time processing, novel adaptive algorithms are also developed to enable online coordinate descent solvers of TWL and TNWL that provably converge to the true sparse signal in the time-invariant case. Simulated tests compare competing alternatives and corroborate the performance of the novel algorithms in estimating time-invariant signals, and tracking time-varying signals under sparsity constraints."
]
}
|
1405.1438
|
2170857652
|
Consider a person trying to spread an important message on a social network. He or she can spend hours trying to craft the message. Does it actually matter? While there has been extensive prior work looking into predicting popularity of social-media content, the effect of wording per se has rarely been studied since it is often confounded with the popularity of the author and the topic. To control for these confounding factors, we take advantage of the surprising fact that there are many pairs of tweets containing the same url and written by the same user but employing different wording. Given such pairs, we ask: which version attracts more retweets? This turns out to be a more difficult task than predicting popular topics. Still, humans can answer this question better than chance (but far from perfectly), and the computational methods we develop can do better than both an average human and a strong competing method trained on non-controlled data.
|
The idea of using carefully controlled experiments to study effective communication strategies dates back at least to . Recent studies range from examining what characteristics of articles correlate with high re-sharing rates @cite_16 to looking at how differences in description affect the spread of content-controlled videos or images @cite_11 @cite_3 . Earlier work examined the variation of quotes from different sources to study how textual memes mutate as people pass them along, but did not control for author. Predicting the "success" of various texts such as novels and movie quotes has also been the aim of prior work not mentioned above @cite_13 @cite_10 @cite_6 @cite_23 @cite_12 . To our knowledge, there have been no large-scale studies exploring wording effects in a setting that controls for both topic and author. Employing such controls, we find that predicting the more effective alternative wording is much harder than the previously well-studied problem of predicting popular content when author or topic can vary freely.
|
{
"cite_N": [
"@cite_13",
"@cite_3",
"@cite_6",
"@cite_23",
"@cite_16",
"@cite_10",
"@cite_12",
"@cite_11"
],
"mid": [
"2250251786",
"105848778",
"2953303434",
"2019416425",
"2964586292",
"2184410296",
"2162079025",
"2000213460"
],
"abstract": [
"Predicting the success of literary works is a curious question among publishers and aspiring writers alike. We examine the quantitative connection, if any, between writing style and successful literature. Based on novels over several different genres, we probe the predictive power of statistical stylometry in discriminating successful literary works, and identify characteristic stylistic elements that are more prominent in successful writings. Our study reports for the first time that statistical stylometry can be surprisingly effective in discriminating highly successful literature from less successful counterpart, achieving accuracy up to 84 . Closer analyses lead to several new insights into characteristics of the writing style in successful literature, including findings that are contrary to the conventional wisdom with respect to good writing style and readability.",
"Creating, placing, and presenting social media content is a difficult problem. In addition to the quality of the content itself, several factors such as the way the content is presented (the title), the community it is posted to, whether it has been seen before, and the time it is posted determine its success. There are also interesting between these factors. For example, the language of the title should be targeted to the community where the content is submitted, yet it should also highlight the distinctive nature of the content. In this paper, we examine how these factors interact to determine the popularity of social media content. We do so by studying resubmissions, i.e., content that has been submitted multiple times, with multiple titles, to multiple different communities. Such data allows us to tease apart' the extent to which each factor influences the success of that content. The models we develop help us understand how to better target social media content: by using the right title, for the right community, at the right time.",
"Understanding the ways in which information achieves widespread public awareness is a research question of significant interest. We consider whether, and how, the way in which the information is phrased --- the choice of words and sentence structure --- can affect this process. To this end, we develop an analysis framework and build a corpus of movie quotes, annotated with memorability information, in which we are able to control for both the speaker and the setting of the quotes. We find that there are significant differences between memorable and non-memorable quotes in several key dimensions, even after controlling for situational and contextual factors. One is lexical distinctiveness: in aggregate, memorable quotes use less common word choices, but at the same time are built upon a scaffolding of common syntactic patterns. Another is that memorable quotes tend to be more general in ways that make them easy to apply in new contexts --- that is, more portable. We also show how the concept of \"memorable language\" can be extended across domains.",
"We combine lexical, syntactic, and discourse features to produce a highly predictive model of human readers' judgments of text readability. This is the first study to take into account such a variety of linguistic factors and the first to empirically demonstrate that discourse relations are strongly associated with the perceived quality of text. We show that various surface metrics generally expected to be related to readability are not very good predictors of readability judgments in our Wall Street Journal corpus. We also establish that readability predictors behave differently depending on the task: predicting text readability or ranking the readability. Our experiments indicate that discourse relations are the one class of features that exhibits robustness across these two tasks.",
"Why are certain pieces of online content (e.g., advertisements, videos, news articles) more viral than others? This article takes a psychological approach to understanding diffusion. Using a unique data set of all the New York Times articles published over a three-month period, the authors examine how emotion shapes virality. The results indicate that positive content is more viral than negative content, but the relationship between emotion and social transmission is more complex than valence alone. Virality is partially driven by physiological arousal. Content that evokes high-arousal positive (awe) or negative (anger or anxiety) emotions is more viral. Content that evokes low-arousal, or deactivating, emotions (e.g., sadness) is less viral. These results hold even when the authors control for how surprising, interesting, or practically useful content is (all of which are positively linked to virality), as well as external drivers of attention (e.g., how prominently content was featured). Experimental re...",
"Great writing is rare and highly admired. Readers seek out articles that are beautifully written, informative and entertaining. Yet information-access technologies lack capabilities for predicting article quality at this level. In this paper we present first experiments on article quality prediction in the science journalism domain. We introduce a corpus of great pieces of science journalism, along with typical articles from the genre. We implement features to capture aspects of great writing, including surprising, visual and emotional content, as well as general features related to discourse organization and sentence structure. We show that the distinction between great and typical articles can be detected fairly accurately, and that the entire spectrum of our features contribute to the distinction.",
"Computational story telling has sparked great interest in artificial intelligence, partly because of its relevance to educational and gaming applications. Traditionally, story generators rely on a large repository of background knowledge containing information about the story plot and its characters. This information is detailed and usually hand crafted. In this paper we propose a data-driven approach for generating short children's stories that does not require extensive manual involvement. We create an end-to-end system that realizes the various components of the generation pipeline stochastically. Our system follows a generate-and-and-rank approach where the space of multiple candidate stories is pruned by considering whether they are plausible, interesting, and coherent.",
"Video dissemination through sites such as YouTube can have widespread impacts on opinions, thoughts, and cultures. Not all videos will reach the same popularity and have the same impact. Popularity differences arise not only because of differences in video content, but also because of other \"content-agnostic\" factors. The latter factors are of considerable interest but it has been difficult to accurately study them. For example, videos uploaded by users with large social networks may tend to be more popular because they tend to have more interesting content, not because social network size has a substantial direct impact on popularity. In this paper, we develop and apply a methodology that is able to accurately assess, both qualitatively and quantitatively, the impacts of various content-agnostic factors on video popularity. When controlling for video content, we observe a strong linear \"rich-get-richer\" behavior, with the total number of previous views as the most important factor except for very young videos. The second most important factor is found to be video age. We analyze a number of phenomena that may contribute to rich-get-richer, including the first-mover advantage, and search bias towards popular videos. For young videos we find that factors other than the total number of previous views, such as uploader characteristics and number of keywords, become relatively more important. Our findings also confirm that inaccurate conclusions can be reached when not controlling for content."
]
}
|
1405.1605
|
2136961259
|
While many lexica annotated with word polarity are available for sentiment analysis, very few tackle the harder task of emotion analysis and are usually quite limited in coverage. In this paper, we present a novel approach for extracting - in a totally automated way - a high-coverage and high-precision lexicon of roughly 37 thousand terms annotated with emotion scores, called DepecheMood. Our approach exploits in an original way 'crowd-sourced' affective annotation implicitly provided by readers of news articles from rappler.com. By providing new state-of-the-art performances in unsupervised settings for regression and classification tasks, even using a naïve approach, our experiments show the beneficial impact of harvesting social media data for affective lexicon building.
|
One of the most well-known resources is SentiWordNet (SWN) @cite_8 @cite_2 , in which each entry is associated with the numerical scores Pos(s) and Neg(s), ranging from 0 to 1. These scores, automatically assigned starting from a set of seed terms, represent the positive and negative valence (or posterior polarity) of each entry, which takes the form lemma#pos#sense-number. Starting from SWN, several prior polarities for words, in the form lemma#PoS, can be computed (e.g. considering only the first sense, averaging over all the senses, etc.). These approaches, detailed in @cite_14 , produce a list of 155k words, where the lower precision given by the automatic scoring of SWN is compensated for by the high coverage.
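As an illustration of how such prior polarities can be computed, the sketch below combines SentiWordNet sense scores per lemma/PoS pair using either the first-sense or the averaging strategy; it assumes NLTK with the WordNet and SentiWordNet corpora installed and is only one of the scoring variants compared in @cite_14 .

```python
# Minimal sketch (one possible variant, not the exact procedure of @cite_14):
# derive a prior polarity for a lemma#PoS pair from SentiWordNet sense scores.
# Assumes nltk is installed with nltk.download('wordnet') and nltk.download('sentiwordnet').
from nltk.corpus import sentiwordnet as swn

def prior_polarity(lemma, pos, strategy="mean"):
    """Return a score in [-1, 1]; pos is a WordNet tag such as 'a', 'n', 'v', 'r'."""
    senses = list(swn.senti_synsets(lemma, pos))
    if not senses:
        return 0.0
    if strategy == "first":            # first-sense heuristic
        senses = senses[:1]
    # average the Pos(s) - Neg(s) difference over the selected senses
    return sum(s.pos_score() - s.neg_score() for s in senses) / len(senses)

print(prior_polarity("good", "a"))                   # positive prior polarity
print(prior_polarity("bad", "a", strategy="first"))  # negative prior polarity
```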
|
{
"cite_N": [
"@cite_14",
"@cite_2",
"@cite_8"
],
"mid": [
"2951563156",
"193524605",
"38739846"
],
"abstract": [
"Assigning a positive or negative score to a word out of context (i.e. a word's prior polarity) is a challenging task for sentiment analysis. In the literature, various approaches based on SentiWordNet have been proposed. In this paper, we compare the most often used techniques together with newly proposed ones and incorporate all of them in a learning framework to see whether blending them can further improve the estimation of prior polarity scores. Using two different versions of SentiWordNet and testing regression and classification models across tasks and datasets, our learning approach consistently outperforms the single metrics, providing a new state-of-the-art approach in computing words' prior polarity for sentiment analysis. We conclude our investigation showing interesting biases in calculated prior polarity scores when word Part of Speech and annotator gender are considered.",
"In this work we present SENTIWORDNET 3.0, a lexical resource explicitly devised for supporting sentiment classification and opinion mining applications. SENTIWORDNET 3.0 is an improved version of SENTIWORDNET 1.0, a lexical resource publicly available for research purposes, now currently licensed to more than 300 research groups and used in a variety of research projects worldwide. Both SENTIWORDNET 1.0 and 3.0 are the result of automatically annotating all WORDNET synsets according to their degrees of positivity, negativity, and neutrality. SENTIWORDNET 1.0 and 3.0 differ (a) in the versions of WORDNET which they annotate (WORDNET 2.0 and 3.0, respectively), (b) in the algorithm used for automatically annotating WORDNET, which now includes (additionally to the previous semi-supervised learning step) a random-walk step for refining the scores. We here discuss SENTIWORDNET 3.0, especially focussing on the improvements concerning aspect (b) that it embodies with respect to version 1.0. We also report the results of evaluating SENTIWORDNET 3.0 against a fragment of WORDNET 3.0 manually annotated for positivity, negativity, and neutrality; these results indicate accuracy improvements of about 20 with respect to SENTIWORDNET 1.0.",
"Opinion mining (OM) is a recent subdiscipline at the crossroads of information retrieval and computational linguistics which is concerned not with the topic a document is about, but with the opinion it expresses. OM has a rich set of applications, ranging from tracking users’ opinions about products or about political candidates as expressed in online forums, to customer relationship management. In order to aid the extraction of opinions from text, recent research has tried to automatically determine the “PN-polarity” of subjective terms, i.e. identify whether a term that is a marker of opinionated content has a positive or a negative connotation. Research on determining whether a term is indeed a marker of opinionated content (a subjective term) or not (an objective term) has been, instead, much more scarce. In this work we describe SENTIWORDNET, a lexical resource in which each WORDNET synset s is associated to three numerical scoresObj(s), Pos(s) and Neg(s), describing how objective, positive, and negative the terms contained in the synset are. The method used to develop SENTIWORDNET is based on the quantitative analysis of the glosses associated to synsets, and on the use of the resulting vectorial term representations for semi-supervised synset classification. The three scores are derived by combining the results produced by a committee of eight ternary classifiers, all characterized by similar accuracy levels but different classification behaviour. SENTIWORDNET is freely available for research purposes, and is endowed with a Web-based graphical user interface."
]
}
|
1405.1523
|
2002169077
|
Dynamic systems play a central role in fields such as planning, verification, and databases. Fragmented throughout these fields, we find a multitude of languages to formally specify dynamic systems and a multitude of systems to reason on such specifications. Often, such systems are bound to one specific language and one specific inference task. It is troublesome that performing several inference tasks on the same knowledge requires translations of your specification to other languages. In this paper we study whether it is possible to perform a broad set of well-studied inference tasks on one specification. More concretely, we extend IDP 3 with several inferences from fields concerned with dynamic specifications.
|
Many database systems implement some form of progression @cite_36 . Often, these systems use (a variant of) transaction logic @cite_13 to express progression steps. Other dynamic inferences, such as backwards reasoning, planning, and verification, are, to the best of our knowledge, not possible in these systems. A particularly interesting database system is LogicBlox @cite_12 ; it supports refined interactive simulation by means of a large set of built-in predicates (windows, buttons, etc.). Users can specify workflows declaratively; during simulations, the UI is derived from the interpretations of these built-ins.
|
{
"cite_N": [
"@cite_36",
"@cite_13",
"@cite_12"
],
"mid": [
"1973391999",
"1508300342",
"102682372"
],
"abstract": [
"Abstract One way to think about a STRIPS operator is as a mapping from databases to databases, in the following sense: suppose we want to know what the world would be like if an action, represented by the STRIPS operator α, were done in some world, represented by the STRIPS database D 0. To find out, simply perform the operator α on D 0 (by applying α's elementary add and delete revision operators to D 0). We describe this process as progressing the database D 0 in response to the action α. In this paper, we consider the general problem of progressing an initial database in response to a given sequence of actions. We appeal to the situation calculus and an axiomatization of actions which addresses the frame problem (Reiter (1991)). This setting is considerably more general than STRIPS. Our results concerning progression are mixed. The (surprising) bad news is that, in general, to characterize a progressed database we must appeal to second-order logic. The good news is that there are many useful special cases for which we can compute the progressed database in first-order logic; not only that, we can do so efficiently. Finally, we relate these results about progression to STRIPS-like systems by providing a semantics for such systems in terms of a purely declarative situation calculus axiomatization for actions and their effects. On our view, STRIPS operators provide a mechanism for computing the progression of an initial situation calculus database under the effects of an action. We illustrate this idea by describing two different STRIPS mechanisms, and proving their correctness with respect to their situation calculus specifications.",
"This paper presents database applications of the recently proposed Transaction Logic—an extension of classical predicate logic that accounts in a clean and declarative fashion for the phenomenon of state changes in logic programs and databases. It has a natural model theory and a sound and complete proof theory, but, unlike many other logics, it allows users to program transactions. In addition, the semantics leads naturally to features whose amalgamation in a single logic has proved elusive in the past. Finally, Transaction Logic holds promise as a logical model of hitherto non-logical phenomena, including so-called procedural knowledge in AI, and the behavior of object-oriented databases, especially methods with side effects. This paper focuses on the applications of T r to database systems, including transaction definition and execution, nested transactions, view updates, consistency maintenance, bulk updates, non-determinism, sampling, active databases, dynamic integrity-constraints, hypothetical reasoning, and imperative-style programming.",
"The modern enterprise software stack--a collection of applications supporting bookkeeping, analytics, planning, and forecasting for enterprise data--is in danger of collapsing under its own weight. The task of building and maintaining enterprise software is tedious and laborious; applications are cumbersome for end-users; and adapting to new computing hardware and infrastructures is difficult. We believe that much of the complexity in today's architecture is accidental, rather than inherent. This tutorial provides an overview of the LogicBlox platform, a ambitious redesign of the enterprise software stack centered around a unified declarative programming model, based on an extended version of Datalog."
]
}
|
1405.1523
|
2002169077
|
Dynamic systems play a central role in fields such as planning, verification, and databases. Fragmented throughout these fields, we find a multitude of languages to formally specify dynamic systems and a multitude of systems to reason on such specifications. Often, such systems are bound to one specific language and one specific inference task. It is troublesome that performing several inference tasks on the same knowledge requires translations of your specification to other languages. In this paper we study whether it is possible to perform a broad set of well-studied inference tasks on one specification. More concretely, we extend IDP 3 with several inferences from fields concerned with dynamic specifications.
|
The above discussion focuses on general fields that tackle (only) one of the problems we are typically interested in (in a dynamic context). To be fair, it is worth mentioning that systems in these domains often tackle a more general problem (for example, ASP systems can do much more than only planning). The IDP system tackles these more general problems efficiently as well. Over the years, IDP and its underlying solver have proven to be among the best ASP and CP systems @cite_7 @cite_26 @cite_10 .
|
{
"cite_N": [
"@cite_26",
"@cite_10",
"@cite_7"
],
"mid": [
"203432916",
"1750419207",
"1597298151"
],
"abstract": [
"Answer Set Programming is a well-established paradigm of declarative programming in close relationship with other declarative formalisms such as SAT Modulo Theories, Constraint Handling Rules, PDDL and many others. Since its first informal editions, ASP systems are compared in the nowadays customary ASP Competition. The fourth ASP Competition, held in 2012 2013, is the sequel to previous editions and it was jointly organized by University of Calabria Italy and the Vienna University of Technology Austria. Participants competed on a selected collection of benchmark problems, taken from a variety of research areas and real world applications. The Competition featured two tracks: the Model& Solve Track, held on an open problem encoding, on an open language basis, and open to any kind of system based on a declarative specification paradigm; and the System Track, held on the basis of fixed, public problem encodings, written in a standard ASP language.",
"",
"Answer Set Programming is a well-established paradigm of declarative programming in close relationship with other declarative formalisms such as SAT Modulo Theories, Constraint Handling Rules, FO(.), PDDL and many others. Since its first informal editions, ASP systems are compared in the nowadays customary ASP Competition. The Third ASP Competition, as the sequel to the ASP Competitions Series held at the University of Potsdam in Germany (2006-2007) and at the University of Leuven in Belgium in 2009, took place at the University of Calabria (Italy) in the first half of 2011. Participants competed on a selected collection of declarative specifications of benchmark problems, taken from a variety of domains as well as real world applications, and instances thereof. The Competition ran on two tracks: the Model & Solve Competition, held on an open problem encoding, on an open language basis, and open to any kind of system based on a declarative specification paradigm; and the System Competition, held on the basis of fixed, public problem encodings, written in a standard ASP language. This paper briefly discuss the format and rationale of the System competition track, and preliminarily reports its results."
]
}
|
1405.1523
|
2002169077
|
Dynamic systems play a central role in fields such as planning, verification, and databases. Fragmented throughout these fields, we find a multitude of languages to formally specify dynamic systems and a multitude of systems to reason on such specifications. Often, such systems are bound to one specific language and one specific inference task. It is troublesome that performing several inference tasks on the same knowledge requires translations of your specification to other languages. In this paper we study whether it is possible to perform a broad set of well-studied inference tasks on one specification. More concretely, we extend IDP 3 with several inferences from fields concerned with dynamic specifications.
|
The ProB system @cite_1 is an automated animator and model checker for the B-Method. It can provide interactive animations (interactive simulation) and can also be used to do (optimal) planning and automatically verify dynamic specifications. ProB is a very general and powerful system. The only inference studied in this paper that it does not support is domain-independent proving of invariants.
|
{
"cite_N": [
"@cite_1"
],
"mid": [
"2031111226"
],
"abstract": [
"We present ProB, a validation toolset for the B method. ProB’s automated animation facilities allow users to gain confidence in their specifications. ProB also contains a model checker and a refinement checker, both of which can be used to detect various errors in B specifications. We describe the underlying methodology of ProB, and present the important aspects of the implementation. We also present empirical evaluations as well as several case studies, highlighting that ProB enables users to uncover errors that are not easily discovered by existing tools."
]
}
|
1405.0538
|
2171643217
|
The problem of finding an optimal set of users for influencing others in a social network has been widely studied. Because it is NP-hard, heuristics have been proposed to find sub-optimal solutions. Still, one of the commonly used assumptions is that seeds are chosen on a static network, not a dynamic one. This static approach is in fact far from real-world networks, where new nodes may appear and old ones disappear over the course of time.
|
With these conclusions in mind, it was decided to evaluate how the most typical heuristics based on network structure perform on temporal and static networks. Special attention has been paid to observing the dynamics of the influence that spreads over a temporal (changing) network after the initial seed sets are chosen. Indeed, this direction is only now emerging, as the first works studying spreading phenomena in temporal networks have been published recently @cite_20 @cite_34 . However, those authors did not focus on seeding strategies; instead, they analysed the process under different assumptions about the aggregation level.
|
{
"cite_N": [
"@cite_34",
"@cite_20"
],
"mid": [
"2100238891",
"2128751891"
],
"abstract": [
"Threshold models try to explain the consequences of social influence like the spread of fads and opinions. Along with models of epidemics, they constitute a major theoretical framework of social spreading processes. In threshold models on static networks, an individual changes her state if a certain fraction of her neighbors has done the same. When there are strong correlations in the temporal aspects of contact patterns, it is useful to represent the system as a temporal network. In such a system, not only contacts but also the time of the contacts are represented explicitly. In many cases, bursty temporal patterns slow down disease spreading. However, as we will see, this is not a universal truth for threshold models. In this work we propose an extension of Watts’s classic threshold model to temporal networks. We do this by assuming that an agent is influenced by contacts which lie a certain time into the past. I.e., the individuals are affected by contacts within a time window. In addition to thresholds in the fraction of contacts, we also investigate the number of contacts within the time window as a basis for influence. To elucidate the model’s behavior, we run the model on real and randomized empirical contact datasets.",
"Diffusion of information in social networks takes more and more attention from marketers. New methods and algorithms are constantly developed towards maximizing reach of the campaigns and increasing their effectiveness. One of the important research directions in this area is related to selecting initial nodes of the campaign to result with maximizing its effects represented as total number of infections. To achieve this goal, several strategies were developed and they are based on different network measures and other characteristics of users. The problem is that most of these strategies base on static network properties while typical online networks change over time and are sensitive to varying activity of users. In this work a novel strategy is proposed which is based on multiple measures with additional parameters related to nodes availability in time periods prior to the campaign. Presented results show that it is possible to compensate users with high network measures by others having high frequency of system usage, which, instead, may be easier or cheaper to acquire."
]
}
|
1405.0999
|
2950114548
|
This paper describes an architecture that combines the complementary strengths of declarative programming and probabilistic graphical models to enable robots to represent, reason with, and learn from, qualitative and quantitative descriptions of uncertainty and knowledge. An action language is used for the low-level (LL) and high-level (HL) system descriptions in the architecture, and the definition of recorded histories in the HL is expanded to allow prioritized defaults. For any given goal, tentative plans created in the HL using default knowledge and commonsense reasoning are implemented in the LL using probabilistic algorithms, with the corresponding observations used to update the HL history. Tight coupling between the two levels enables automatic selection of relevant variables and generation of suitable action policies in the LL for each HL action, and supports reasoning with violation of defaults, noisy observations and unreliable actions in large and complex domains. The architecture is evaluated in simulation and on physical robots transporting objects in indoor domains; the benefit on robots is a reduction in task execution time of 39% compared with a purely probabilistic, but still hierarchical, approach.
|
Probabilistic graphical models such as POMDPs have been used to represent knowledge and plan sensing, navigation and interaction for robots @cite_21 @cite_13 . However, these formulations (by themselves) make it difficult to perform commonsense reasoning, e.g., default reasoning and non-monotonic logical reasoning, especially with information not directly relevant to the tasks at hand. In parallel, research in classical planning has provided many algorithms for knowledge representation and logical reasoning @cite_2 , but these algorithms require substantial prior knowledge about the domain, the task, and the set of actions. Many of these algorithms also do not support merging of new, unreliable information from sensors and humans with the current beliefs in a knowledge base. Answer Set Programming (ASP), a non-monotonic logic programming paradigm, is well-suited for representing and reasoning with commonsense knowledge @cite_23 @cite_5 . An international research community has been built around ASP, with applications such as reasoning for simulated robot housekeepers and representing knowledge extracted from natural-language human-robot interaction @cite_10 @cite_8 . However, ASP does not support probabilistic analysis, whereas much of the information available to robots is represented probabilistically to quantitatively model the uncertainty in sensor input processing and actuation in the real world.
|
{
"cite_N": [
"@cite_8",
"@cite_10",
"@cite_21",
"@cite_23",
"@cite_2",
"@cite_5",
"@cite_13"
],
"mid": [
"2032560235",
"2156415207",
"",
"",
"1489183362",
"1606891084",
"2400776907"
],
"abstract": [
"Answer set programming (ASP) is a knowledge representation and reasoning paradigm with high-level expressive logic-based formalism, and efficient solvers; it is applied to solve hard problems in various domains, such as systems biology, wire routing, and space shuttle control. In this paper, we present an application of ASP to housekeeping robotics. We show how the following problems are addressed using computational methods tools of ASP: (1) embedding commonsense knowledge automatically extracted from the commonsense knowledge base ConceptNet, into high-level representation, and (2) embedding (continuous) geometric reasoning and temporal reasoning about durations of actions, into (discrete) high-level reasoning. We introduce a planning and monitoring algorithm for safe execution of plans, so that robots can recover from plan failures due to collision with movable objects whose presence and location are not known in advance or due to heavy objects that cannot be lifted alone. Some of the recoveries require collaboration of robots. We illustrate the applicability of ASP on several housekeeping robotics problems, and report on the computational efficiency in terms of CPU time and memory.",
"This paper presents an effort to enable robots to utilize open-source knowledge resources autonomously for human-robot interaction. The main challenges include how to extract knowledge in semi-structured and unstructured natural languages, how to make use of multiple types of knowledge in decision making, and how to identify the knowledge that is missing. A set of techniques for multi-mode natural language processing, integrated decision making, and open knowledge searching is proposed. The OK-KeJia robot prototype is implemented and evaluated, with special attention to two tests on 11,615 user tasks and 467 user desires. The experiments show that the overall performance improves remarkably due to the use of appropriate open knowledge.",
"",
"",
"1 Introduction and Overview I Classical Planning 2 Representations for Classical Planning*3 Complexity of Classical Planning*4 State-Space Planning*5 Plan-Space Planning II Neoclassical Planning 6 Planning-Graph Techniques*7 Propositional Satisfiability Techniques*8 Constraint Satisfaction Techniques III Heuristics and Control Strategies 9 Heuristics in Planning*10 Control Rules in Planning*11 Hierarchical Task Network Planning*12 Control Strategies in Deductive Planning IV Planning with Time and Resources 13 Time for Planning*14 Temporal Planning*15 Planning and Resource Scheduling V Planning under Uncertainty 16 Planning based on Markov Decision Processes*17 Planning based on Model Checking*18 Uncertainty with Neo-Classical Techniques VI Case Studies and Applications 19 Space Applications*20 Planning in Robotics*21 Planning for Manufacturability Analysis*22 Emergency Evacuation Planning *23 Planning in the Game of Bridge VII Conclusion 24 Conclusion and Other Topics VIII Appendices A Search Procedures and Computational Complexity*B First Order Logic*C Model Checking",
"Knowledge management and knowledge-based intelligence are areas of importance in today's economy and society, and their exploitation requires representation via the development of a declarative interface whose input language is based on logic. Chitta Baral demonstrates how to write programs that behave intelligently by giving them the ability to express knowledge and reason about it. He presents a language, AnsProlog, for both knowledge representation and reasoning, and declarative problem solving. Many of the results have never appeared before in book form but are organized here for those wishing to learn more about the subject, either in courses or through self-study.",
"Indoor autonomous mobile service robots can overcome their hardware and potential algorithmic limitations by asking humans for help. In this work, we focus on mobile robots that need human assistance at specific spatially-situated locations (e.g., to push buttons in an elevator or to make coffee in the kitchen). We address the problem of what the robot should do when there are no humans present at such help locations. As the robots are mobile, we argue that they should plan to proactively seek help and travel to offices or occupied locations to bring people to the help locations. Such planning involves many trade-offs, including the wait time at the help location before seeking help, and the time and potential interruption to find and displace someone in an office. In order to choose appropriate parameters to represent such decisions, we first conduct a survey to understand potential helpers' travel preferences in terms of distance, interruptibility, and frequency of providing help. We then use these results to contribute a decision-theoretic algorithm to evaluate the possible choices in offices and plan where to proactively seek help. We demonstrate that our algorithm aims to minimize the number of office interruptions as well as task completion time."
]
}
|
1405.0093
|
1501674528
|
As graphs continue to grow in size, we seek ways to effectively process such data at scale. The model of streaming graph processing, in which a compact summary is maintained as each edge insertion deletion is observed, is an attractive one. However, few results are known for optimization problems over such dynamic graph streams. In this paper, we introduce a new approach to handling graph streams, by instead seeking solutions for the parameterized versions of these problems where we are given a parameter @math and the objective is to decide whether there is a solution bounded by @math . By combining kernelization techniques with randomized sketch structures, we obtain the first streaming algorithms for the parameterized versions of the Vertex Cover problem. We consider the following three models for a graph stream on @math nodes: 1. The insertion-only model where the edges can only be added. 2. The dynamic model where edges can be both inserted and deleted. 3. The dynamic model where we are guaranteed that at each timestamp there is a solution of size at most @math . In each of these three models we are able to design parameterized streaming algorithms for the Vertex Cover problem. We are also able to show matching lower bound for the space complexity of our algorithms. (Due to the arXiv limit of 1920 characters for abstract field, please see the abstract in the paper for detailed description of our results)
|
The question of finding maximal and maximum cardinality matchings has been heavily studied in the model of (insert-only) graph streams. A greedy algorithm trivially obtains a maximal matching (simply store every edge that links two currently unmatched nodes); this is also a 0.5-approximation to the maximum cardinality matching @cite_3 , as sketched below. By taking multiple passes over the input stream, this can be improved to a @math approximation by finding augmenting paths with successive passes @cite_16 @cite_13 .
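For concreteness, a minimal sketch of this one-pass greedy algorithm follows (plain Python, insert-only stream assumed); it keeps an edge only when both endpoints are still unmatched, so the stored edge set is a maximal matching and hence a 0.5-approximation.

```python
# Greedy one-pass streaming algorithm for maximal matching: keep an edge iff
# both endpoints are currently unmatched. The result is maximal, and any
# maximal matching is a 0.5-approximation to the maximum cardinality matching.
def greedy_streaming_matching(edge_stream):
    matched = set()   # vertices already covered by a stored edge
    matching = []     # the summary kept by the algorithm
    for u, v in edge_stream:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.add(u)
            matched.add(v)
    return matching

# Example: the path 1-2-3-4 streamed in order yields the matching [(1, 2), (3, 4)].
print(greedy_streaming_matching([(1, 2), (2, 3), (3, 4)]))
```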
|
{
"cite_N": [
"@cite_16",
"@cite_13",
"@cite_3"
],
"mid": [
"1514707655",
"1592953077",
"2165753192"
],
"abstract": [
"We present algorithms for finding large graph matchings in the streaming model. In this model, applicable when dealing with massive graphs, edges are streamed-in in some arbitrary order rather than residing in randomly accessible memory. For e>0, we achieve a @math approximation for maximum cardinality matching and a @math approximation to maximum weighted matching. Both algorithms use a constant number of passes and @math space.",
"Multi-Pass Models: It is common in graph mining to consider algorithms that may take more than one pass over the stream. There has also been work in the W-Stream model in which the algorithm is allowed to write to the stream during each pass [9]. These annotations can then be utilized by the algorithm during successive passes and it can be shown that this gives sufficient power to the model for PRAM algorithms to be simulated [8]. The Stream-Sort model goes one step further and allows sorting passes in which the data stream is sorted according to a key encoded by the annotations [1].",
"We formalize a potentially rich new streaming model, the semi-streaming model, that we believe is necessary for the fruitful study of efficient algorithms for solving problems on massive graphs whose edge sets cannot be stored in memory. In this model, the input graph, G = (V, E), is presented as a stream of edges (in adversarial order), and the storage space of an algorithm is bounded by O(n ċ polylog n), where n = |V|. We are particularly interested in algorithms that use only one pass over the input, but, for problems where this is provably insufficient, we also look at algorithms using constant or, in some cases, logarithmically many passes. In the course of this general study, we give semi-streaming constant approximation algorithms for the unweighted and weighted matching problems, along with a further algorithmic improvement for the bipartite case. We also exhibit log n log log n semi-streaming approximations to the diameter and the problem of computing the distance between specified vertices in a weighted graph. These are complemented by Ω(log(1-e) n) lower bounds."
]
}
|
1405.0093
|
1501674528
|
As graphs continue to grow in size, we seek ways to effectively process such data at scale. The model of streaming graph processing, in which a compact summary is maintained as each edge insertion deletion is observed, is an attractive one. However, few results are known for optimization problems over such dynamic graph streams. In this paper, we introduce a new approach to handling graph streams, by instead seeking solutions for the parameterized versions of these problems where we are given a parameter @math and the objective is to decide whether there is a solution bounded by @math . By combining kernelization techniques with randomized sketch structures, we obtain the first streaming algorithms for the parameterized versions of the Vertex Cover problem. We consider the following three models for a graph stream on @math nodes: 1. The insertion-only model where the edges can only be added. 2. The dynamic model where edges can be both inserted and deleted. 3. The dynamic model where we are guaranteed that at each timestamp there is a solution of size at most @math . In each of these three models we are able to design parameterized streaming algorithms for the Vertex Cover problem. We are also able to show matching lower bound for the space complexity of our algorithms. (Due to the arXiv limit of 1920 characters for abstract field, please see the abstract in the paper for detailed description of our results)
|
Subsequent work has extended this to the case of weighted edges (when a maximum weight matching is sought) and to reducing the number of passes needed to provide a guaranteed approximation @cite_32 @cite_24 . While approximating the size of the vertex cover has been studied in other sublinear models, such as sampling @cite_15 @cite_1 , we are not aware of prior work that has addressed the question of finding a vertex cover over a graph stream. Likewise, parameterized complexity has not been explicitly studied in the streaming model, so we initiate that study here.
|
{
"cite_N": [
"@cite_24",
"@cite_15",
"@cite_1",
"@cite_32"
],
"mid": [
"2157523155",
"1980155175",
"2116953739",
""
],
"abstract": [
"We present the first deterministic 1+e approximation algorithm for finding a large matching in a bipartite graph in the semi-streaming model which requires only O((1 e)5) passes over the input stream. In this model, the input graph G=(V,E) is given as a stream of its edges in some arbitrary order, and storage of the algorithm is bounded by O(npolylog n) bits, where @math . The only previously known arbitrarily good approximation for general graphs is achieved by the randomized algorithm of McGregor (Proceedings of the International Workshop on Approximation Algorithms for Combinatorial Optimization Problems and Randomization and Computation, Berkeley, CA, USA, pp. 170–181, 2005), which uses Ω((1 e)1 e ) passes. We show that even for bipartite graphs, McGregor’s algorithm needs Ω(1 e) Ω(1 e) passes, thus it is necessarily exponential in the approximation parameter. The design as well as the analysis of our algorithm require the introduction of some new techniques. A novelty of our algorithm is a new deterministic assignment of matching edges to augmenting paths which is responsible for the complexity reduction, and gets rid of randomization. We repeatedly grow an initial matching using augmenting paths up to a length of 2k+1 for k=⌈2 e⌉. We terminate when the number of augmenting paths found in one iteration falls below a certain threshold also depending on k, that guarantees a 1+e approximation. The main challenge is to find those augmenting paths without requiring an excessive number of passes. In each iteration, using multiple passes, we grow a set of alternating paths in parallel, considering each edge as a possible extension as it comes along in the stream. Backtracking is used on paths that fail to grow any further. Crucial are the so-called position limits: when a matching edge is the ith matching edge in a path and it is then removed by backtracking, it will only be inserted into a path again at a position strictly lesser than i. This rule strikes a balance between terminating quickly on the one hand and giving the procedure enough freedom on the other hand.",
"For a given graph G over n vertices, let OPT\"G denote the size of an optimal solution in G of a particular minimization problem (e.g., the size of a minimum vertex cover). A randomized algorithm will be called an @a-approximation algorithm with an additive error for this minimization problem if for any given additive error parameter @e>0 it computes a value [email protected]? such that, with probability at least 2 3, it holds that OPT\"[email protected][email protected][email protected][email protected]@?OPT\"[email protected] Assume that the maximum degree or average degree of G is bounded. In this case, we show a reduction from local distributed approximation algorithms for the vertex cover problem to sublinear approximation algorithms for this problem. This reduction can be modified easily and applied to other optimization problems that have local distributed approximation algorithms, such as the dominating set problem. We also show that for the minimum vertex cover problem, the query complexity of such approximation algorithms must grow at least linearly with the average degree [email protected]? of the graph. This lower bound holds for every multiplicative factor @a and small constant @e as long as [email protected]?=O(n @a). In particular this means that for dense graphs it is not possible to design an algorithm whose complexity is o(n).",
"We give a nearly optimal sublinear-time algorithm for approximating the size of a minimum vertex cover in a graph G. The algorithm may query the degree deg(v) of any vertex v of its choice, and for each 1 ≤ i ≤ deg(v), it may ask for the ith neighbor of v. Letting VCopt(G) denote the minimum size of vertex cover in G, the algorithm outputs, with high constant success probability, an estimate [EQUATION] such that [EQUATION], where e is a given additive approximation parameter. We refer to such an estimate as a (2, e)-estimate. The query complexity and running time of the algorithm are O([EQUATION] · poly(1 e)), where d denotes the average vertex degree in the graph. The best previously known sublinear algorithm, of (STOC 2009), has query complexity and running time O(d4 e2), where d is the maximum degree in the graph. Given the lower bound of Ω(d) (for constant e) for obtaining such an estimate (with any constant multiplicative factor) due to Parnas and Ron (TCS 2007), our result is nearly optimal. In the case that the graph is dense, that is, the number of edges is Θ(n2), we consider another model, in which the algorithm may ask, for any pair of vertices u and v, whether there is an edge between u and v. We show how to adapt the algorithm that uses neighbor queries to this model and obtain an algorithm that outputs a (2, e)-estimate of the size of a minimum vertex cover whose query complexity and running time are O(n) · poly(1 e).",
""
]
}
|
1405.0205
|
173322064
|
Privacy is of the utmost importance in genomic matching. Therefore a number of privacy-preserving protocols have been presented using secure computation. Nevertheless, none of these protocols prevents inferences from the result. Goodrich has shown that this resulting information is sufficient for an effective attack on genome databases. In this paper we present an approach that can detect and mitigate such an attack on encrypted messages while still preserving the privacy of both parties. Note that randomization, e.g. using differential privacy, will almost certainly destroy the utility of the matching result. We combine two known cryptographic primitives -- secure computation of the edit distance and fuzzy commitments -- in order to prevent submission of similar genome sequences. Particularly, we contribute an efficient zero-knowledge proof that the same input has been used in both primitives. We show that using our approach it is feasible to preserve privacy in genome matching and also detect and mitigate Goodrich's attack.
|
Privacy-preserving matching of genomes was introduced in @cite_29 . It presents a secure computation based on homomorphic encryption, such that both the querier's genome and the database's genome are protected. The algorithm implemented inside the secure computation is the edit distance. The performance of this setup was improved in @cite_11 by using Yao's protocol @cite_20 for secure computation. Although further improvements to Yao's protocol yield even better performance on this computation @cite_10 , it is still too slow for large-scale deployment.
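For reference, the function these protocols evaluate obliviously is the standard edit (Levenshtein) distance dynamic program; the sketch below is a plain, non-private version of that recurrence, not of the cryptographic constructions in the cited works.

```python
# Plain (non-private) edit-distance dynamic program. The cited protocols
# evaluate this same recurrence under encryption, e.g. inside a garbled circuit.
def edit_distance(a, b):
    prev = list(range(len(b) + 1))        # DP row for the empty prefix of a
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution / match
        prev = curr
    return prev[-1]

print(edit_distance("GATTACA", "GACTATA"))  # 2 (two substitutions)
```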
|
{
"cite_N": [
"@cite_29",
"@cite_10",
"@cite_20",
"@cite_11"
],
"mid": [
"2165785801",
"2119948977",
"2088492763",
"2166971704"
],
"abstract": [
"We give an efficient protocol for sequence comparisons of the edit-distance kind, such that neither party reveals anything about their private sequence to the other party (other than what can be inferred from the edit distance between their two sequences - which is unavoidable because computing that distance is the purpose of the protocol). The amount of communication done by our protocol is proportional to the time complexity of the best-known algorithm for performing the sequence comparison.The problem of determining the similarity between two sequences arises in a large number of applications, in particular in bioinformatics. In these application areas, the edit distance is one of the most widely used notions of sequence similarity: It is the least-cost set of insertions, deletions, and substitutions required to transform one string into the other. The generalizations of edit distance that are solved by the same kind of dynamic programming recurrence relation as the one for edit distance, cover an even wider domain of applications.",
"Secure two-party computation enables two parties to evaluate a function cooperatively without revealing to either party anything beyond the function’s output. The garbled-circuit technique, a generic approach to secure two-party computation for semi-honest participants, was developed by Yao in the 1980s, but has been viewed as being of limited practical significance due to its inefficiency. We demonstrate several techniques for improving the running time and memory requirements of the garbled-circuit technique, resulting in an implementation of generic secure two-party computation that is significantly faster than any previously reported while also scaling to arbitrarily large circuits. We validate our approach by demonstrating secure computation of circuits with over 10 9 gates at a rate of roughly 10 ms per garbled gate, and showing order-of-magnitude improvements over the best previous privacy-preserving protocols for computing Hamming distance, Levenshtein distance, Smith-Waterman genome alignment, and AES.",
"In this paper we introduce a new tool for controlling the knowledge transfer process in cryptographic protocol design. It is applied to solve a general class of problems which include most of the two-party cryptographic problems in the literature. Specifically, we show how two parties A and B can interactively generate a random integer N = p?q such that its secret, i.e., the prime factors (p, q), is hidden from either party individually but is recoverable jointly if desired. This can be utilized to give a protocol for two parties with private values i and j to compute any polynomially computable functions f(i,j) and g(i,j) with minimal knowledge transfer and a strong fairness property. As a special case, A and B can exchange a pair of secrets sA, sB, e.g. the factorization of an integer and a Hamiltonian circuit in a graph, in such a way that sA becomes computable by B when and only when sB becomes computable by A. All these results are proved assuming only that the problem of factoring large intergers is computationally intractable.",
"Many basic tasks in computational biology involve operations on individual DNA and protein sequences. These sequences, even when anonymized, are vulnerable to re-identification attacks and may reveal highly sensitive information about individuals. We present a relatively efficient, privacy-preserving implementation of fundamental genomic computations such as calculating the edit distance and Smith- Waterman similarity scores between two sequences. Our techniques are crypto graphically secure and significantly more practical than previous solutions. We evaluate our prototype implementation on sequences from the Pfam database of protein families, and demonstrate that its performance is adequate for solving real-world sequence-alignment and related problems in a privacy- preserving manner. Furthermore, our techniques have applications beyond computational biology. They can be used to obtain efficient, privacy-preserving implementations for many dynamic programming algorithms over distributed datasets."
]
}
|
1405.0205
|
173322064
|
Privacy is of the utmost importance in genomic matching. Therefore a number of privacy-preserving protocols have been presented using secure computation. Nevertheless, none of these protocols prevents inferences from the result. Goodrich has shown that this resulting information is sufficient for an effective attack on genome databases. In this paper we present an approach that can detect and mitigate such an attack on encrypted messages while still preserving the privacy of both parties. Note that randomization, e.g. using differential privacy, will almost certainly destroy the utility of the matching result. We combine two known cryptographic primitives -- secure computation of the edit distance and fuzzy commitments -- in order to prevent submission of similar genome sequences. Particularly, we contribute an efficient zero-knowledge proof that the same input has been used in both primitives. We show that using our approach it is feasible to preserve privacy in genome matching and also detect and mitigate Goodrich's attack.
|
Therefore, different approaches to computing the edit distance were sought. Automata and regular expressions can emulate edit distance computations efficiently for small edit distances. An oblivious evaluation of automata is presented in @cite_24 , but, due to the regular expressions, it does not scale to real-world-sized genomes. Bloom filters can also be used to estimate the edit distance. An evaluation of this approach using homomorphic encryption is presented in @cite_30 . This approach yields reasonable run-times (approx. 5 minutes) for real-world-sized genomes (approx. 16,500 characters), and we therefore build upon it.
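To illustrate the Bloom-filter idea separately from the homomorphic encryption used in @cite_30 , the following sketch encodes the overlapping q-grams of each sequence into a bit vector and derives an approximate dissimilarity from the overlap of the two filters. The parameters (q-gram length, filter size, number of hash functions) and the Dice-style score are illustrative choices, not the exact construction of the cited protocol.

```python
import hashlib

def bloom_encode(sequence: str, q: int = 3, m: int = 1024, k: int = 4) -> list[int]:
    """Encode the overlapping q-grams of a sequence into an m-bit Bloom filter."""
    bits = [0] * m
    for i in range(len(sequence) - q + 1):
        gram = sequence[i:i + q]
        for seed in range(k):
            h = hashlib.sha256(f"{seed}:{gram}".encode()).hexdigest()
            bits[int(h, 16) % m] = 1
    return bits

def dissimilarity(bits_a: list[int], bits_b: list[int]) -> float:
    """Approximate distance: 1 minus the Dice coefficient of the two filters."""
    inter = sum(a & b for a, b in zip(bits_a, bits_b))
    total = sum(bits_a) + sum(bits_b)
    return 1.0 - (2.0 * inter / total if total else 1.0)

a = bloom_encode("GATTACAGATTACA")
b = bloom_encode("GATTACAGATTGCA")
print(round(dissimilarity(a, b), 3))   # small value -> similar sequences
```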
|
{
"cite_N": [
"@cite_24",
"@cite_30"
],
"mid": [
"2085801087",
"2091580995"
],
"abstract": [
"Human Desoxyribo-Nucleic Acid (DNA) sequences offer a wealth of information that reveal, among others, predisposition to various diseases and paternity relations. The breadth and personalized nature of this information highlights the need for privacy-preserving protocols. In this paper, we present a new error-resilient privacy-preserving string searching protocol that is suitable for running private DNA queries. This protocol checks if a short template (e.g., a string that describes a mutation leading to a disease), known to one party, is present inside a DNA sequence owned by another party, accounting for possible errors and without disclosing to each party the other party's input. Each query is formulated as a regular expression over a finite alphabet and implemented as an automaton. As the main technical contribution, we provide a protocol that allows to execute any finite state machine in an oblivious manner, requiring a communication complexity which is linear both in the number of states and the length of the input string.",
"Consider two parties who want to compare their strings, e.g., genomes, but do not want to reveal them to each other. We present a system for privacy-preserving matching of strings, which differs from existing systems by providing a deterministic approximation instead of an exact distance. It is efficient (linear complexity), non-interactive and does not involve a third party which makes it particularly suitable for cloud computing. We extend our protocol, such that it only reveals whether there is a match and not the exact distance. Further an implementation of the system is evaluated and compared against current privacy-preserving string matching algorithms."
]
}
|
1405.0205
|
173322064
|
Privacy is of the utmost importance in genomic matching. Therefore a number of privacy-preserving protocols have been presented using secure computation. Nevertheless, none of these protocols prevents inferences from the result. Goodrich has shown that this resulting information is sufficient for an effective attack on genome databases. In this paper we present an approach that can detect and mitigate such an attack on encrypted messages while still preserving the privacy of both parties. Note that randomization, e.g. using differential privacy, will almost certainly destroy the utility of the matching result. We combine two known cryptographic primitives -- secure computation of the edit distance and fuzzy commitments -- in order to prevent submission of similar genome sequences. Particularly, we contribute an efficient zero-knowledge proof that the same input has been used in both primitives. We show that using our approach it is feasible to preserve privacy in genome matching and also detect and mitigate Goodrich's attack.
|
Although secure computation offers a formal security model, semi-honest security @cite_2 , it does not prevent inferences from the result. Goodrich therefore presented an attack based on the information in the edit distance alone @cite_8 . With very few repeated queries it can infer real-world-sized genomes. The contribution of this paper is to combine the approach of @cite_30 with mitigation of this and similar attacks.
|
{
"cite_N": [
"@cite_30",
"@cite_8",
"@cite_2"
],
"mid": [
"2091580995",
"2154357802",
"2013623332"
],
"abstract": [
"Consider two parties who want to compare their strings, e.g., genomes, but do not want to reveal them to each other. We present a system for privacy-preserving matching of strings, which differs from existing systems by providing a deterministic approximation instead of an exact distance. It is efficient (linear complexity), non-interactive and does not involve a third party which makes it particularly suitable for cloud computing. We extend our protocol, such that it only reveals whether there is a match and not the exact distance. Further an implementation of the system is evaluated and compared against current privacy-preserving string matching algorithms.",
"In this paper, we study the degree to which a genomic string, @math ,leaks details about itself any time it engages in comparison protocolswith a genomic querier, Bob, even if those protocols arecryptographically guaranteed to produce no additional information otherthan the scores that assess the degree to which @math matches stringsoffered by Bob. We show that such scenarios allow Bob to play variantsof the game of Mastermind with @math so as to learn the complete identityof @math . We show that there are a number of efficient implementationsfor Bob to employ in these Mastermind attacks, depending on knowledgehe has about the structure of @math , which show how quickly he candetermine @math . Indeed, we show that Bob can discover @math using anumber of rounds of test comparisons that is much smaller than thelength of @math , under various assumptions regarding the types of scoresthat are returned by the cryptographic protocols and whether he can useknowledge about the distribution that @math comes from, e.g., usingpublic knowledge about the properties of human DNA. We also providethe results of an experimental study we performed on a database ofmitochondrial DNA, showing the vulnerability of existing real-world DNAdata to the Mastermind attack.",
"Cryptography is concerned with the conceptualization, definition, and construction of computing systems that address security concerns. The design of cryptographic systems must be based on firm foundations. Building on the basic tools presented in the first volume, this second volume of Foundations of Cryptography contains a rigorous and systematic treatment of three basic applications: Encryption, Signatures, and General Cryptographic Protocols. It is suitable for use in a graduate course on cryptography and as a reference book for experts. The author assumes basic familiarity with the design and analysis of algorithms; some knowledge of complexity theory and probability is also useful. Also available: Volume I: Basic Tools 0-521-79172-3 Hardback $75.00 C"
]
}
|
1405.0205
|
173322064
|
Privacy is of the utmost importance in genomic matching. Therefore a number of privacy-preserving protocols have been presented using secure computation. Nevertheless, none of these protocols prevents inferences from the result. Goodrich has shown that this resulting information is sufficient for an effective attack on genome databases. In this paper we present an approach that can detect and mitigate such an attack on encrypted messages while still preserving the privacy of both parties. Note that randomization, e.g. using differential privacy, will almost certainly destroy the utility of the matching result. We combine two known cryptographic primitives -- secure computation of the edit distance and fuzzy commitments -- in order to prevent submission of similar genome sequences. Particularly, we contribute an efficient zero-knowledge proof that the same input has been used in both primitives. We show that using our approach it is feasible to preserve privacy in genome matching and also detect and mitigate Goodrich's attack.
|
A guaranteed randomization approach to preventing inferences about the input is differential privacy @cite_21 . A randomized function @math gives @math -differential privacy if, for all data sets @math and @math differing on at most one element and all @math , @math This means that the likelihood of any function result changes only marginally with the presence or absence of a single additional element.
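Since the formula itself is elided by the @math placeholders above, the standard definition from @cite_21 is reproduced here for reference; the symbols K, D_1, D_2, S and epsilon are our own naming.

```latex
% epsilon-differential privacy: a randomized function K satisfies it if,
% for all datasets D_1, D_2 differing in at most one element and for all
% S in Range(K),
\[
  \Pr[\mathcal{K}(D_1) \in S] \;\le\; e^{\varepsilon} \cdot \Pr[\mathcal{K}(D_2) \in S] .
\]
```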
|
{
"cite_N": [
"@cite_21"
],
"mid": [
"2109426455"
],
"abstract": [
"Over the past five years a new approach to privacy-preserving data analysis has born fruit [13, 18, 7, 19, 5, 37, 35, 8, 32]. This approach differs from much (but not all!) of the related literature in the statistics, databases, theory, and cryptography communities, in that a formal and ad omnia privacy guarantee is defined, and the data analysis techniques presented are rigorously proved to satisfy the guarantee. The key privacy guarantee that has emerged is differential privacy. Roughly speaking, this ensures that (almost, and quantifiably) no risk is incurred by joining a statistical database. In this survey, we recall the definition of differential privacy and two basic techniques for achieving it. We then show some interesting applications of these techniques, presenting algorithms for three specific tasks and three general results on differentially private learning."
]
}
|
1405.0205
|
173322064
|
Privacy is of the utmost importance in genomic matching. Therefore a number of privacy-preserving protocols have been presented using secure computation. Nevertheless, none of these protocols prevents inferences from the result. Goodrich has shown that this resulting information is sufficient for an effective attack on genome databases. In this paper we present an approach that can detect and mitigate such an attack on encrypted messages while still preserving the privacy of both parties. Note that randomization, e.g. using differential privacy, will almost certainly destroy the utility of the matching result. We combine two known cryptographic primitives -- secure computation of the edit distance and fuzzy commitments -- in order to prevent submission of similar genome sequences. Particularly, we contribute an efficient zero-knowledge proof that the same input has been used in both primitives. We show that using our approach it is feasible to preserve privacy in genome matching and also detect and mitigate Goodrich's attack.
|
Further attacks on privacy mechanisms in genomic computing have to be considered. Bloom filter matching using the approach of @cite_19 (keyed cryptographic hash functions instead of regular hash functions) is insecure @cite_22 . A sophisticated attack can infer information from a Bloom filter with unknown hash functions using environmental information, such as the space of all genomes. We therefore use homomorphic encryption to protect the Bloom filter and are not susceptible to this attack. Anonymization techniques have also been found to be insecure @cite_3 .
|
{
"cite_N": [
"@cite_19",
"@cite_22",
"@cite_3"
],
"mid": [
"1549681244",
"1916258859",
"2141481372"
],
"abstract": [
"It is often necessary for two or more or more parties that do not fully trust each other to share data selectively. For example, one intelligence agency might be willing to turn over certain documents to another such agency, but only if the second agency requests the specific documents. The problem, of course, is finding out that such documents exist when access to the database is restricted. We propose a search scheme based on Bloom filters and group ciphers such as Pohlig-Hellman encryption. A semi-trusted third party can transform one party’s search queries to a form suitable for querying the other party’s database, in such a way that neither the third party nor the database owner can see the original query. Furthermore, the encryption keys used to construct the Bloom filters are not shared with this third party. Multiple providers and queriers are supported; provision can be made for third-party “warrant servers”, as well as “censorship sets” that limit the data to be shared.",
"For over fifty years, \"record linkage\" procedures have been refined to integrate data in the face of typographical and semantic errors. These procedures are traditionally performed over personal identifiers (e.g., names), but in modern decentralized environments, privacy concerns have led to regulations that require the obfuscation of such attributes. Various techniques have been proposed to resolve the tension, including secure multi-party computation protocols, however, such protocols are computationally intensive and do not scale for real world linkage scenarios. More recently, procedures based on Bloom filter encoding (BFE) have gained traction in various applications, such as healthcare, where they yield highly accurate record linkage results in a reasonable amount of time. Though promising, no formal security analysis has been designed or applied to this emerging model, which is of concern considering the sensitivity of the corresponding data. In this paper, we introduce a novel attack, based on constraint satisfaction, to provide a rigorous analysis for BFE and guidelines regarding how to mitigate risk against the attack. In addition, we conduct an empirical analysis with data derived from public voter records to illustrate the feasibility of the attack. Our investigations show that the parameters of the BFE protocol can be configured to make it relatively resilient to the proposed attack without significant reduction in record linkage performance.",
"Genome-wide association studies (GWAS) aim at discovering the association between genetic variations, particularly single-nucleotide polymorphism (SNP), and common diseases, which is well recognized to be one of the most important and active areas in biomedical research. Also renowned is the privacy implication of such studies, which has been brought into the limelight by the recent attack proposed by Homer's attack demonstrates that it is possible to identify a GWAS participant from the allele frequencies of a large number of SNPs. Such a threat, unfortunately, was found in our research to be significantly understated. In this paper, we show that individuals can actually be identified from even a relatively small set of statistics, as those routinely published in GWAS papers. We present two attacks. The first one extends Homer's attack with a much more powerful test statistic, based on the correlations among different SNPs described by coefficient of determination (r2). This attack can determine the presence of an individual from the statistics related to a couple of hundred SNPs. The second attack can lead to complete disclosure of hundreds of participants' SNPs, through analyzing the information derived from published statistics. We also found that those attacks can succeed even when the precisions of the statistics are low and part of data is missing. We evaluated our attacks on the real human genomes and concluded that such threats are completely realistic."
]
}
|
1405.0205
|
173322064
|
Privacy is of the utmost importance in genomic matching. Therefore a number of privacy-preserving protocols have been presented using secure computation. Nevertheless, none of these protocols prevents inferences from the result. Goodrich has shown that this resulting information is sufficient for an effective attack on genome databases. In this paper we present an approach that can detect and mitigate such an attack on encrypted messages while still preserving the privacy of both parties. Note that randomization, e.g. using differential privacy, will almost certainly destroy the utility of the matching result. We combine two known cryptographic primitives -- secure computation of the edit distance and fuzzy commitments -- in order to prevent submission of similar genome sequences. Particularly, we contribute an efficient zero-knowledge proof that the same input has been used in both primitives. We show that using our approach it is feasible to preserve privacy in genome matching and also detect and mitigate Goodrich's attack.
|
A problem related to privacy-preserving genome matching between two parties is outsourcing this computation. It was first considered in @cite_6 , which presents a protocol for two servers executing a secure computation. The protocol of @cite_24 has been used for outsourcing in @cite_12 . The protocol of @cite_11 has been used in @cite_14 . A clever technique of partitioning the problem into a coarse-granular and a fine-granular part is presented in @cite_0 . An approach for simple queries on an encrypted, outsourced genome database is presented in @cite_4 .
|
{
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_6",
"@cite_24",
"@cite_0",
"@cite_12",
"@cite_11"
],
"mid": [
"2131772137",
"2124398102",
"1547671177",
"2085801087",
"2398522641",
"1729744401",
"2166971704"
],
"abstract": [
"We treat the problem of secure outsourcing of sequence comparisons by a client to remote servers, which given two strings λ and μ of respective lengths n and m, consists of finding a minimum-cost sequence of insertions, deletions, and substitutions (also called an edit script) that transform λ into μ. In our setting a client owns λ and μ and outsources the computation to two servers without revealing to them information about either the input strings or the output sequence. Our solution is non-interactive for the client (who only sends information about the inputs and receives the output) and the client’s work is linear in its input output. The servers’ performance is O(σmn) computation (which is optimal) and communication, where σ is the alphabet size, and the solution is designed to work when the servers have only O(σ(m + n)) memory. By utilizing garbled circuit evaluation in a novel way, we completely avoid public-key cryptography, which makes our solution particularly efficient.",
"To support large-scale biomedical research projects, organizations need to share person-specific genomic sequences without violating the privacy of their data subjects. In the past, organizations protected subjects' identities by removing identifiers, such as name and social security number; however, recent investigations illustrate that deidentified genomic data can be ldquoreidentifiedrdquo to named individuals using simple automated methods. In this paper, we present a novel cryptographic framework that enables organizations to support genomic data mining without disclosing the raw genomic sequences. Organizations contribute encrypted genomic sequence records into a centralized repository, where the administrator can perform queries, such as frequency counts, without decrypting the data. We evaluate the efficiency of our framework with existing databases of single nucleotide polymorphism (SNP) sequences and demonstrate that the time needed to complete count queries is feasible for real world applications. For example, our experiments indicate that a count query over 40 SNPs in a database of 5000 records can be completed in approximately 30 min with off-the-shelf technology. We further show that approximation strategies can be applied to significantly speed up query execution times with minimal loss in accuracy. The framework can be implemented on top of existing information and network technologies in biomedical environments.",
"Large-scale problems in the physical and life sciences are being revolutionized by Internet computing technologies, like grid computing, that make possible the massive cooperative sharing of computational power, bandwidth, storage, and data. A weak computational device, once connected to such a grid, is no longer limited by its slow speed, small amounts of local storage, and limited bandwidth: It can avail itself of the abundance of these resources that is available elsewhere on the network. An impediment to the use of “computational outsourcing” is that the data in question is often sensitive, e.g., of national security importance, or proprietary and containing commercial secrets, or to be kept private for legal requirements such as the HIPAA legislation, Gramm-Leach-Bliley, or similar laws. This motivates the design of techniques for computational outsourcing in a privacy-preserving manner, i.e., without revealing to the remote agents whose computational power is being used, either one's data or the outcome of the computation on the data. This paper investigates such secure outsourcing for widely applicable sequence comparison problems, and gives an efficient protocol for a customer to securely outsource sequence comparisons to two remote agents, such that the agents learn nothing about the customer's two private sequences or the result of the comparison. The local computations done by the customer are linear in the size of the sequences, and the computational cost and amount of communication done by the external agents are close to the time complexity of the best known algorithm for solving the problem on a single machine (i.e., quadratic, which is a huge computational burden for the kinds of massive data on which such comparisons are made). The sequence comparison problem considered arises in a large number of applications, including speech recognition, machine vision, and molecular sequence comparisons. In addition, essentially the same protocol can solve a larger class of problems whose standard dynamic programming solutions are similar in structure to the recurrence that subtends the sequence comparison algorithm.",
"Human Desoxyribo-Nucleic Acid (DNA) sequences offer a wealth of information that reveal, among others, predisposition to various diseases and paternity relations. The breadth and personalized nature of this information highlights the need for privacy-preserving protocols. In this paper, we present a new error-resilient privacy-preserving string searching protocol that is suitable for running private DNA queries. This protocol checks if a short template (e.g., a string that describes a mutation leading to a disease), known to one party, is present inside a DNA sequence owned by another party, accounting for possible errors and without disclosing to each party the other party's input. Each query is formulated as a regular expression over a finite alphabet and implemented as an automaton. As the main technical contribution, we provide a protocol that allows to execute any finite state machine in an oblivious manner, requiring a communication complexity which is linear both in the number of states and the length of the input string.",
"An operation preceding most human DNA analyses is read mapping, which aligns millions of short sequences (called reads) to a reference genome. This step involves an enormous amount of computation (evaluating edit distances for millions upon billions of sequence pairs) and thus needs to be outsourced to low-cost commercial clouds. This asks for scalable techniques to protect sensitive DNA information, a demand that cannot be met by any existing techniques (e.g., homomorphic encryption, secure multiparty computation). In this paper, we report a new step towards secure and scalable read mapping on the hybrid cloud, which includes both the public commercial cloud and the private cloud within an organization. Inspired by the famous “seed-and-extend” method, our approach strategically splits a mapping task: the public cloud seeks exact matches between the keyed hash values of short read substrings (called seeds) and those of reference sequences to roughly position reads on the genome; the private cloud extends the seeds from these positions to find right alignments. Our novel seed-combination technique further moves most workload of this task to the public cloud. The new approach is found to work effectively against known inference attacks, and also easily scale to millions of reads.",
"This work treats the problem of error-resilient DNA searching via oblivious evaluation of finite automata, where a client has a DNA sequence, and a service provider has a pattern that corresponds to a genetic test. Error-resilient searching is achieved by representing the pattern as a finite automaton and evaluating it on the DNA sequence, where privacy of both the pattern and the DNA sequence must be preserved. Interactive solutions to this problem already exist, but can be a burden on the participants. Thus, we propose techniques for secure outsourcing of finite automata evaluation to computational servers, which do not learn any information. Our techniques are applicable to any type of finite automata, but the optimizations are tailored to DNA searching.",
"Many basic tasks in computational biology involve operations on individual DNA and protein sequences. These sequences, even when anonymized, are vulnerable to re-identification attacks and may reveal highly sensitive information about individuals. We present a relatively efficient, privacy-preserving implementation of fundamental genomic computations such as calculating the edit distance and Smith- Waterman similarity scores between two sequences. Our techniques are crypto graphically secure and significantly more practical than previous solutions. We evaluate our prototype implementation on sequences from the Pfam database of protein families, and demonstrate that its performance is adequate for solving real-world sequence-alignment and related problems in a privacy- preserving manner. Furthermore, our techniques have applications beyond computational biology. They can be used to obtain efficient, privacy-preserving implementations for many dynamic programming algorithms over distributed datasets."
]
}
|
1405.0174
|
1791593905
|
In this paper, we present VSCAN, a novel approach for generating static video summaries. This approach is based on a modified DBSCAN clustering algorithm to summarize the video content utilizing both color and texture features of the video frames. The paper also introduces an enhanced evaluation method that depends on color and texture features. Video Summaries generated by VSCAN are compared with summaries generated by other approaches found in the literature and those created by users. Experimental results indicate that the video summaries generated by VSCAN have a higher quality than those generated by other approaches.
|
A comprehensive review of video summarization approaches can be found in @cite_6 . Some of the main approaches and techniques related to static video summarization found in the literature are briefly discussed next.
|
{
"cite_N": [
"@cite_6"
],
"mid": [
"2094998392"
],
"abstract": [
"The demand for various multimedia applications is rapidly increasing due to the recent advance in the computing and network infrastructure, together with the widespread use of digital video technology. Among the key elements for the success of these applications is how to effectively and efficiently manage and store a huge amount of audio visual information, while at the same time providing user-friendly access to the stored data. This has fueled a quickly evolving research area known as video abstraction. As the name implies, video abstraction is a mechanism for generating a short summary of a video, which can either be a sequence of stationary images (keyframes) or moving images (video skims). In terms of browsing and navigation, a good video abstract will enable the user to gain maximum information about the target video sequence in a specified time constraint or sufficient information in the minimum time. Over past years, various ideas and techniques have been proposed towards the effective abstraction of video contents. The purpose of this article is to provide a systematic classification of these works. We identify and detail, for each approach, the underlying components and how they are addressed in specific works."
]
}
|
1405.0174
|
1791593905
|
In this paper, we present VSCAN, a novel approach for generating static video summaries. This approach is based on a modified DBSCAN clustering algorithm to summarize the video content utilizing both color and texture features of the video frames. The paper also introduces an enhanced evaluation method that depends on color and texture features. Video Summaries generated by VSCAN are compared with summaries generated by other approaches found in the literature and those created by users. Experimental results indicate that the video summaries generated by VSCAN have a higher quality than those generated by other approaches.
|
In @cite_0 , an approach based on clustering the video frames using Delaunay Triangulation (DT) is developed. The first step in this approach is pre-sampling the frames of the input video. Then, the video frames are represented by a color histogram in the HSV color space, and Principal Component Analysis (PCA) is applied to the color feature matrix to reduce its dimensionality. After that, the Delaunay diagram is built and clusters are formed by separating edges in the diagram. Finally, for each cluster, the frame that is closest to its center is selected as the key frame.
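A rough sketch of this pipeline could look as follows. It assumes PCA-reduced histogram features (a few dimensions per frame) as input; the threshold used to decide which long "separating" edges are removed is a heuristic stand-in, not the exact criterion of the cited paper.

```python
import numpy as np
from scipy.spatial import Delaunay

def delaunay_clusters(features: np.ndarray, cut: float = 1.0) -> list[set[int]]:
    """Cluster frame feature vectors by building a Delaunay diagram and removing
    unusually long edges; each remaining connected component is one cluster."""
    tri = Delaunay(features)
    edges = set()
    for simplex in tri.simplices:                      # collect unique edges
        for i in range(len(simplex)):
            for j in range(i + 1, len(simplex)):
                edges.add(tuple(sorted((simplex[i], simplex[j]))))
    lengths = {e: np.linalg.norm(features[e[0]] - features[e[1]]) for e in edges}
    limit = np.mean(list(lengths.values())) + cut * np.std(list(lengths.values()))
    parent = list(range(len(features)))                # union-find over short edges
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for (u, v), d in lengths.items():
        if d <= limit:
            parent[find(u)] = find(v)
    clusters = {}
    for idx in range(len(features)):
        clusters.setdefault(find(idx), set()).add(idx)
    return list(clusters.values())

def keyframes(features: np.ndarray) -> list[int]:
    """Pick, for each cluster, the frame closest to the cluster centroid."""
    picks = []
    for cluster in delaunay_clusters(features):
        members = sorted(cluster)
        centroid = features[members].mean(axis=0)
        dists = np.linalg.norm(features[members] - centroid, axis=1)
        picks.append(members[int(np.argmin(dists))])
    return sorted(picks)
```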
|
{
"cite_N": [
"@cite_0"
],
"mid": [
"2134577448"
],
"abstract": [
"Recent advances in technology have made tremendous amounts of multimedia information available to the general population. An efficient way of dealing with this new development is to develop browsing tools that distill multimedia data as information oriented summaries. Such an approach will not only suit resource poor environments such as wireless and mobile, but also enhance browsing on the wired side for applications like digital libraries and repositories. Automatic summarization and indexing techniques will give users an opportunity to browse and select multimedia document of their choice for complete viewing later. In this paper, we present a technique by which we can automatically gather the frames of interest in a video for purposes of summarization. Our proposed technique is based on using Delaunay Triangulation for clustering the frames in videos. We represent the frame contents as multi-dimensional point data and use Delaunay Triangulation for clustering them. We propose a novel video summarization technique by using Delaunay clusters that generates good quality summaries with fewer frames and less redundancy when compared to other schemes. In contrast to many of the other clustering techniques, the Delaunay clustering algorithm is fully automatic with no user specified parameters and is well suited for batch processing. We demonstrate these and other desirable properties of the proposed algorithm by testing it on a collection of videos from Open Video Project. We provide a meaningful comparison between results of the proposed summarization technique with Open Video storyboard and K-means clustering. We evaluate the results in terms of metrics that measure the content representational value of the proposed technique."
]
}
|
1405.0174
|
1791593905
|
In this paper, we present VSCAN, a novel approach for generating static video summaries. This approach is based on a modified DBSCAN clustering algorithm to summarize the video content utilizing both color and texture features of the video frames. The paper also introduces an enhanced evaluation method that depends on color and texture features. Video Summaries generated by VSCAN are compared with summaries generated by other approaches found in the literature and those created by users. Experimental results indicate that the video summaries generated by VSCAN have a higher quality than those generated by other approaches.
|
In @cite_11 , an approach called STIMO (STIll and MOving Video Storyboard) is introduced. This approach is designed to produce on-the-fly video storyboards and is composed of three phases. In the first phase, the video frames are pre-sampled, and feature vectors are then extracted from the selected frames by computing a color histogram in the HSV color space. In the second phase, a clustering method based on the Furthest-Point-First (FPF) algorithm is applied. To estimate the number of clusters, the pairwise distances of consecutive frames are computed using the Generalized Jaccard Distance (GJD). Finally, a post-processing step is performed to remove noisy video frames.
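The clustering core of the second phase is Gonzalez's Furthest-Point-First heuristic. A compact sketch is given below; it uses Euclidean distance instead of the Generalized Jaccard Distance for brevity and assumes the number of clusters k has already been estimated.

```python
import numpy as np

def furthest_point_first(features: np.ndarray, k: int) -> tuple[list[int], np.ndarray]:
    """FPF heuristic: greedily pick k centers so that each new center is the frame
    furthest from all centers chosen so far, then assign frames to nearest center."""
    centers = [0]                                  # start from an arbitrary frame
    dist = np.linalg.norm(features - features[0], axis=1)
    for _ in range(1, k):
        nxt = int(np.argmax(dist))                 # furthest frame from current centers
        centers.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(features - features[nxt], axis=1))
    labels = np.argmin(
        np.stack([np.linalg.norm(features - features[c], axis=1) for c in centers]),
        axis=0)
    return centers, labels
```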
|
{
"cite_N": [
"@cite_11"
],
"mid": [
"1989863986"
],
"abstract": [
"In the current Web scenario a video browsing tool that produces on-the-fly storyboards is more and more a need. Video summary techniques can be helpful but, due to their long processing time, they are usually unsuitable for on-the-fly usage. Therefore, it is common to produce storyboards in advance, penalizing users customization. The lack of customization is more and more critical, as users have different demands and might access the Web with several different networking and device technologies. In this paper we propose STIMO, a summarization technique designed to produce on-the-fly video storyboards. STIMO produces still and moving storyboards and allows advanced users customization (e.g., users can select the storyboard length and the maximum time they are willing to wait to get the storyboard). STIMO is based on a fast clustering algorithm that selects the most representative video contents using HSV frame color distribution. Experimental results show that STIMO produces storyboards with good quality and in a time that makes on-the-fly usage possible."
]
}
|
1405.0174
|
1791593905
|
In this paper, we present VSCAN, a novel approach for generating static video summaries. This approach is based on a modified DBSCAN clustering algorithm to summarize the video content utilizing both color and texture features of the video frames. The paper also introduces an enhanced evaluation method that depends on color and texture features. Video Summaries generated by VSCAN are compared with summaries generated by other approaches found in the literature and those created by users. Experimental results indicate that the video summaries generated by VSCAN have a higher quality than those generated by other approaches.
|
In @cite_9 , an approach called VSUMM (Video SUMMarization) is presented. In the first step, the video frames are pre-sampled by selecting one frame per second. In the second step, the color features of the video frames are extracted from the Hue component only in the HSV color space. In the third step, meaningless frames are eliminated. In the fourth step, the frames are clustered using the k-means algorithm, where the number of clusters is estimated from the pairwise Euclidean distances between video frames, and a key frame is extracted from each cluster. Finally, an extra step compares the key frames among themselves using color histograms to eliminate similar key frames from the produced summaries.
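A minimal VSUMM-style sketch of the clustering and key-frame selection steps is shown below. It assumes per-frame hue histograms have already been extracted and the number of clusters k has been estimated; pre-sampling, noise-frame removal and the final duplicate-elimination step are omitted, and scikit-learn is used for k-means.

```python
import numpy as np
from sklearn.cluster import KMeans

def vsumm_like_keyframes(hue_histograms: np.ndarray, k: int) -> list[int]:
    """Cluster per-frame hue histograms with k-means and return, per cluster,
    the index of the frame closest to the cluster center."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(hue_histograms)
    picks = []
    for c in range(k):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(hue_histograms[members] - km.cluster_centers_[c], axis=1)
        picks.append(int(members[np.argmin(dists)]))
    return sorted(picks)
```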
|
{
"cite_N": [
"@cite_9"
],
"mid": [
"1987366351"
],
"abstract": [
"The fast evolution of digital video has brought many new multimedia applications and, as a consequence, has increased the amount of research into new technologies that aim at improving the effectiveness and efficiency of video acquisition, archiving, cataloging and indexing, as well as increasing the usability of stored videos. Among possible research areas, video summarization is an important topic that potentially enables faster browsing of large video collections and also more efficient content indexing and access. Essentially, this research area consists of automatically generating a short summary of a video, which can either be a static summary or a dynamic summary. In this paper, we present VSUMM, a methodology for the production of static video summaries. The method is based on color feature extraction from video frames and k-means clustering algorithm. As an additional contribution, we also develop a novel approach for the evaluation of video static summaries. In this evaluation methodology, video summaries are manually created by users. Then, several user-created summaries are compared both to our approach and also to a number of different techniques in the literature. Experimental results show - with a confidence level of 98 - that the proposed solution provided static video summaries with superior quality relative to the approaches to which it was compared."
]
}
|
1405.0312
|
2952122856
|
We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 objects types that would be easily recognizable by a 4 year old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model.
|
Throughout the history of computer vision research, datasets have played a critical role. They not only provide a means to train and evaluate algorithms, they also drive research in new and more challenging directions. The creation of ground truth stereo and optical flow datasets @cite_42 @cite_48 helped stimulate a flood of interest in these areas. The early evolution of object recognition datasets @cite_5 @cite_46 @cite_30 facilitated the direct comparison of hundreds of image recognition algorithms while simultaneously pushing the field towards more complex problems. Recently, the ImageNet dataset @cite_26 , containing millions of images, has enabled breakthroughs in both object classification and detection research using a new class of deep learning algorithms @cite_45 @cite_8 @cite_13 .
|
{
"cite_N": [
"@cite_30",
"@cite_26",
"@cite_8",
"@cite_48",
"@cite_42",
"@cite_45",
"@cite_5",
"@cite_46",
"@cite_13"
],
"mid": [
"2161969291",
"2108598243",
"2102605133",
"2147253850",
"2104974755",
"",
"2155904486",
"1576445103",
"1487583988"
],
"abstract": [
"We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.",
"The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond.",
"Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn.",
"The quantitative evaluation of optical flow algorithms by (1994) led to significant advances in performance. The challenges for optical flow algorithms today go beyond the datasets and evaluation methods proposed in that paper. Instead, they center on problems associated with complex natural scenes, including nonrigid motion, real sensor noise, and motion discontinuities. We propose a new set of benchmarks and evaluation methods for the next generation of optical flow algorithms. To that end, we contribute four types of data to test different aspects of optical flow algorithms: (1) sequences with nonrigid motion where the ground-truth flow is determined by tracking hidden fluorescent texture, (2) realistic synthetic sequences, (3) high frame-rate video used to study interpolation error, and (4) modified stereo sequences of static scenes. In addition to the average angular error used by , we compute the absolute flow endpoint error, measures for frame interpolation error, improved statistics, and results at motion discontinuities and in textureless regions. In October 2007, we published the performance of several well-known methods on a preliminary version of our data to establish the current state of the art. We also made the data freely available on the web at http: vision.middlebury.edu flow . Subsequently a number of researchers have uploaded their results to our website and published papers using the data. A significant improvement in performance has already been achieved. In this paper we analyze the results obtained to date and draw a large number of conclusions from them.",
"Stereo matching is one of the most active research areas in computer vision. While a large number of algorithms for stereo correspondence have been developed, relatively little work has been done on characterizing their performance. In this paper, we present a taxonomy of dense, two-frame stereo methods designed to assess the different components and design decisions made in individual stereo algorithms. Using this taxonomy, we compare existing stereo methods and present experiments evaluating the performance of many different variants. In order to establish a common software platform and a collection of data sets for easy evaluation, we have designed a stand-alone, flexible C++ implementation that enables the evaluation of individual components and that can be easily extended to include new algorithms. We have also produced several new multiframe stereo data sets with ground truth, and are making both the code and data sets available on the Web.",
"",
"Current computational approaches to learning visual object categories require thousands of training images, are slow, cannot learn in an incremental manner and cannot incorporate prior information into the learning process. In addition, no algorithm presented in the literature has been tested on more than a handful of object categories. We present an method for learning object categories from just a few training images. It is quick and it uses prior information in a principled way. We test it on a dataset composed of images of objects belonging to 101 widely varied categories. Our proposed method is based on making use of prior information, assembled from (unrelated) object categories which were previously learnt. A generative probabilistic model is used, which represents the shape and appearance of a constellation of features belonging to the object. The parameters of the model are learnt incrementally in a Bayesian manner. Our incremental algorithm is compared experimentally to an earlier batch Bayesian algorithm, as well as to one based on maximum-likelihood. The incremental and batch versions have comparable classification performance on small training sets, but incremental learning is significantly faster, making real-time learning feasible. Both Bayesian methods outperform maximum likelihood on small training sets.",
"We introduce a challenging set of 256 object categories containing a total of 30607 images. The original Caltech-101 [1] was collected by choosing a set of object categories, downloading examples from Google Images and then manually screening out all images that did not fit the category. Caltech-256 is collected in a similar manner with several improvements: a) the number of categories is more than doubled, b) the minimum number of images in any category is increased from 31 to 80, c) artifacts due to image rotation are avoided and d) a new and larger clutter category is introduced for testing background rejection. We suggest several testing paradigms to measure classification performance, then benchmark the dataset using two simple metrics as well as a state-of-the-art spatial pyramid matching [2] algorithm. Finally we use the clutter category to train an interest detector which rejects uninformative background regions.",
"We present an integrated framework for using Convolutional Networks for classification, localization and detection. We show how a multiscale and sliding window approach can be efficiently implemented within a ConvNet. We also introduce a novel deep learning approach to localization by learning to predict object boundaries. Bounding boxes are then accumulated rather than suppressed in order to increase detection confidence. We show that different tasks can be learned simultaneously using a single shared network. This integrated framework is the winner of the localization task of the ImageNet Large Scale Visual Recognition Challenge 2013 (ILSVRC2013) and obtained very competitive results for the detection and classifications tasks. In post-competition work, we establish a new state of the art for the detection task. Finally, we release a feature extractor from our best model called OverFeat."
]
}
|
1405.0312
|
2952122856
|
We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 objects types that would be easily recognizable by a 4 year old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model.
|
Image Classification: The task of object classification requires binary labels indicating whether objects are present in an image; see Fig. (a). Early datasets of this type comprised images containing a single object with blank backgrounds, such as the MNIST handwritten digits @cite_27 or COIL household objects @cite_40 . Caltech 101 @cite_5 and Caltech 256 @cite_46 marked the transition to more realistic object images retrieved from the internet while also increasing the number of object categories to 101 and 256, respectively. CIFAR-10 and CIFAR-100 @cite_22 , popular in the machine learning community due to their larger numbers of training examples, offered 10 and 100 categories from a dataset of tiny @math images @cite_20 . While these datasets contained up to 60,000 images and hundreds of categories, they still only captured a small fraction of our visual world.
|
{
"cite_N": [
"@cite_22",
"@cite_40",
"@cite_27",
"@cite_5",
"@cite_46",
"@cite_20"
],
"mid": [
"",
"",
"200806003",
"2155904486",
"1576445103",
"2145607950"
],
"abstract": [
"",
"",
"Disclosed is an improved articulated bar flail having shearing edges for efficiently shredding materials. An improved shredder cylinder is disclosed with a plurality of these flails circumferentially spaced and pivotally attached to the periphery of a rotatable shaft. Also disclosed is an improved shredder apparatus which has a pair of these shredder cylinders mounted to rotate about spaced parallel axes which cooperates with a conveyer apparatus which has a pair of inclined converging conveyer belts with one of the belts mounted to move with respect to the other belt to allow the transport of articles of various sizes therethrough.",
"Current computational approaches to learning visual object categories require thousands of training images, are slow, cannot learn in an incremental manner and cannot incorporate prior information into the learning process. In addition, no algorithm presented in the literature has been tested on more than a handful of object categories. We present an method for learning object categories from just a few training images. It is quick and it uses prior information in a principled way. We test it on a dataset composed of images of objects belonging to 101 widely varied categories. Our proposed method is based on making use of prior information, assembled from (unrelated) object categories which were previously learnt. A generative probabilistic model is used, which represents the shape and appearance of a constellation of features belonging to the object. The parameters of the model are learnt incrementally in a Bayesian manner. Our incremental algorithm is compared experimentally to an earlier batch Bayesian algorithm, as well as to one based on maximum-likelihood. The incremental and batch versions have comparable classification performance on small training sets, but incremental learning is significantly faster, making real-time learning feasible. Both Bayesian methods outperform maximum likelihood on small training sets.",
"We introduce a challenging set of 256 object categories containing a total of 30607 images. The original Caltech-101 [1] was collected by choosing a set of object categories, downloading examples from Google Images and then manually screening out all images that did not fit the category. Caltech-256 is collected in a similar manner with several improvements: a) the number of categories is more than doubled, b) the minimum number of images in any category is increased from 31 to 80, c) artifacts due to image rotation are avoided and d) a new and larger clutter category is introduced for testing background rejection. We suggest several testing paradigms to measure classification performance, then benchmark the dataset using two simple metrics as well as a state-of-the-art spatial pyramid matching [2] algorithm. Finally we use the clutter category to train an interest detector which rejects uninformative background regions.",
"With the advent of the Internet, billions of images are now freely available online and constitute a dense sampling of the visual world. Using a variety of non-parametric methods, we explore this world with the aid of a large dataset of 79,302,017 images collected from the Internet. Motivated by psychophysical results showing the remarkable tolerance of the human visual system to degradations in image resolution, the images in the dataset are stored as 32 x 32 color images. Each image is loosely labeled with one of the 75,062 non-abstract nouns in English, as listed in the Wordnet lexical database. Hence the image database gives a comprehensive coverage of all object categories and scenes. The semantic information from Wordnet can be used in conjunction with nearest-neighbor methods to perform object classification over a range of semantic levels minimizing the effects of labeling noise. For certain classes that are particularly prevalent in the dataset, such as people, we are able to demonstrate a recognition performance comparable to class-specific Viola-Jones style detectors."
]
}
|
1405.0312
|
2952122856
|
We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 objects types that would be easily recognizable by a 4 year old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model.
|
Recently, ImageNet @cite_26 made a striking departure from the incremental increase in dataset sizes. They proposed the creation of a dataset containing 22k categories with 500-1000 images each. Unlike previous datasets containing entry-level categories @cite_2 , such as "dog" or "chair," like @cite_20 , ImageNet used the WordNet Hierarchy @cite_15 to obtain both entry-level and fine-grained @cite_39 categories. Currently, the ImageNet dataset contains over 14 million labeled images and has enabled significant advances in image classification @cite_45 @cite_8 @cite_13 .
|
{
"cite_N": [
"@cite_26",
"@cite_8",
"@cite_39",
"@cite_45",
"@cite_2",
"@cite_15",
"@cite_13",
"@cite_20"
],
"mid": [
"2108598243",
"2102605133",
"",
"",
"2135166986",
"2153564686",
"1487583988",
"2145607950"
],
"abstract": [
"The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond.",
"Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn.",
"",
"",
"Entry level categories - the labels people will use to name an object - were originally defined and studied by psychologists in the 1980s. In this paper we study entry-level categories at a large scale and learn the first models for predicting entry-level categories for images. Our models combine visual recognition predictions with proxies for word \"naturalness\" mined from the enormous amounts of text on the web. We demonstrate the usefulness of our models for predicting nouns (entry-level words) associated with images by people. We also learn mappings between concepts predicted by existing visual recognition systems and entry-level concepts that could be useful for improving human-focused applications such as natural language image description or retrieval.",
"",
"We present an integrated framework for using Convolutional Networks for classification, localization and detection. We show how a multiscale and sliding window approach can be efficiently implemented within a ConvNet. We also introduce a novel deep learning approach to localization by learning to predict object boundaries. Bounding boxes are then accumulated rather than suppressed in order to increase detection confidence. We show that different tasks can be learned simultaneously using a single shared network. This integrated framework is the winner of the localization task of the ImageNet Large Scale Visual Recognition Challenge 2013 (ILSVRC2013) and obtained very competitive results for the detection and classifications tasks. In post-competition work, we establish a new state of the art for the detection task. Finally, we release a feature extractor from our best model called OverFeat.",
"With the advent of the Internet, billions of images are now freely available online and constitute a dense sampling of the visual world. Using a variety of non-parametric methods, we explore this world with the aid of a large dataset of 79,302,017 images collected from the Internet. Motivated by psychophysical results showing the remarkable tolerance of the human visual system to degradations in image resolution, the images in the dataset are stored as 32 x 32 color images. Each image is loosely labeled with one of the 75,062 non-abstract nouns in English, as listed in the Wordnet lexical database. Hence the image database gives a comprehensive coverage of all object categories and scenes. The semantic information from Wordnet can be used in conjunction with nearest-neighbor methods to perform object classification over a range of semantic levels minimizing the effects of labeling noise. For certain classes that are particularly prevalent in the dataset, such as people, we are able to demonstrate a recognition performance comparable to class-specific Viola-Jones style detectors."
]
}
|
1405.0312
|
2952122856
|
We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 objects types that would be easily recognizable by a 4 year old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model.
|
Object detection. Detecting an object entails both stating that an object belonging to a specified class is present, and localizing it in the image. The location of an object is typically represented by a bounding box, Fig. (b). Early algorithms focused on face detection @cite_16 using various ad hoc datasets. Later, more realistic and challenging face detection datasets were created @cite_0 . Another popular challenge is the detection of pedestrians, for which several datasets have been created @cite_30 @cite_31 . The Caltech Pedestrian Dataset @cite_31 contains 350,000 labeled instances with bounding boxes.
|
{
"cite_N": [
"@cite_0",
"@cite_31",
"@cite_16",
"@cite_30"
],
"mid": [
"1782590233",
"2031454541",
"",
"2161969291"
],
"abstract": [
"Most face databases have been created under controlled conditions to facilitate the study of specific parameters on the face recognition problem. These parameters include such variables as position, pose, lighting, background, camera quality, and gender. While there are many applications for face recognition technology in which one can control the parameters of image acquisition, there are also many applications in which the practitioner has little or no control over such parameters. This database, Labeled Faces in the Wild, is provided as an aid in studying the latter, unconstrained, recognition problem. The database contains labeled face photographs spanning the range of conditions typically encountered in everyday life. The database exhibits “natural” variability in factors such as pose, lighting, race, accessories, occlusions, and background. In addition to describing the details of the database, we provide specific experimental paradigms for which the database is suitable. This is done in an effort to make research performed with the database as consistent and comparable as possible. We provide baseline results, including results of a state of the art face recognition system combined with a face alignment system. To facilitate experimentation on the database, we provide several parallel databases, including an aligned version.",
"Pedestrian detection is a key problem in computer vision, with several applications that have the potential to positively impact quality of life. In recent years, the number of approaches to detecting pedestrians in monocular images has grown steadily. However, multiple data sets and widely varying evaluation protocols are used, making direct comparisons difficult. To address these shortcomings, we perform an extensive evaluation of the state of the art in a unified framework. We make three primary contributions: 1) We put together a large, well-annotated, and realistic monocular pedestrian detection data set and study the statistics of the size, position, and occlusion patterns of pedestrians in urban scenes, 2) we propose a refined per-frame evaluation methodology that allows us to carry out probing and informative comparisons, including measuring performance in relation to scale and occlusion, and 3) we evaluate the performance of sixteen pretrained state-of-the-art detectors across six data sets. Our study allows us to assess the state of the art and provides a framework for gauging future efforts. Our experiments show that despite significant progress, performance still has much room for improvement. In particular, detection is disappointing at low resolutions and for partially occluded pedestrians.",
"",
"We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds."
]
}
|
1405.0312
|
2952122856
|
We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 objects types that would be easily recognizable by a 4 year old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model.
|
For the detection of basic object categories, a multi-year effort from 2005 to 2012 was devoted to the creation and maintenance of a series of benchmark datasets that were widely adopted. The PASCAL VOC @cite_43 datasets contained 20 object categories spread over 11,000 images. Over 27,000 object instance bounding boxes were labeled, of which almost 7,000 had detailed segmentations. Recently, a detection challenge has been created from 200 object categories using a subset of 400,000 images from ImageNet @cite_4 . An impressive 350,000 objects have been labeled using bounding boxes.
|
{
"cite_N": [
"@cite_43",
"@cite_4"
],
"mid": [
"2031489346",
"1972515067"
],
"abstract": [
"The Pascal Visual Object Classes (VOC) challenge is a benchmark in visual object category recognition and detection, providing the vision and machine learning communities with a standard dataset of images and annotation, and standard evaluation procedures. Organised annually from 2005 to present, the challenge and its associated dataset has become accepted as the benchmark for object detection. This paper describes the dataset and evaluation procedure. We review the state-of-the-art in evaluated methods for both classification and detection, analyse whether the methods are statistically different, what they are learning from the images (e.g. the object or its context), and what the methods find easy or confuse. The paper concludes with lessons learnt in the three year history of the challenge, and proposes directions for future improvement and extension.",
"The growth of detection datasets and the multiple directions of object detection research provide both an unprecedented need and a great opportunity for a thorough evaluation of the current state of the field of categorical object detection. In this paper we strive to answer two key questions. First, where are we currently as a field: what have we done right, what still needs to be improved? Second, where should we be going in designing the next generation of object detectors? Inspired by the recent work of on the standard PASCAL VOC detection dataset, we perform a large-scale study on the Image Net Large Scale Visual Recognition Challenge (ILSVRC) data. First, we quantitatively demonstrate that this dataset provides many of the same detection challenges as the PASCAL VOC. Due to its scale of 1000 object categories, ILSVRC also provides an excellent test bed for understanding the performance of detectors as a function of several key properties of the object classes. We conduct a series of analyses looking at how different detection methods perform on a number of image-level and object-class-level properties such as texture, color, deformation, and clutter. We learn important lessons of the current object detection methods and propose a number of insights for designing the next generation object detectors."
]
}
|
1405.0312
|
2952122856
|
We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 objects types that would be easily recognizable by a 4 year old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model.
|
Semantic scene labeling. The task of labeling semantic objects in a scene requires that each pixel of an image be labeled as belonging to a category, such as sky, chair, floor, street, etc. In contrast to the detection task, individual instances of objects do not need to be segmented, Fig. (c). This enables the labeling of objects for which individual instances are hard to define, such as grass, streets, or walls. Datasets exist for both indoor @cite_21 and outdoor @cite_23 @cite_33 scenes. Some datasets also include depth information @cite_21 . Similar to semantic scene labeling, our goal is to measure the pixel-wise accuracy of object labels. However, we also aim to distinguish between individual instances of an object, which requires a solid understanding of each object's extent.
|
{
"cite_N": [
"@cite_21",
"@cite_33",
"@cite_23"
],
"mid": [
"125693051",
"",
"2054279472"
],
"abstract": [
"We present an approach to interpret the major surfaces, objects, and support relations of an indoor scene from an RGBD image. Most existing work ignores physical interactions or is applied only to tidy rooms and hallways. Our goal is to parse typical, often messy, indoor scenes into floor, walls, supporting surfaces, and object regions, and to recover support relationships. One of our main interests is to better understand how 3D cues can best inform a structured 3D interpretation. We also contribute a novel integer programming formulation to infer physical support relations. We offer a new dataset of 1449 RGBD images, capturing 464 diverse indoor scenes, with detailed annotations. Our experiments demonstrate our ability to infer support relations in complex scenes and verify that our 3D scene cues and inferred support lead to better object segmentation.",
"",
"This paper details a new approach for learning a discriminative model of object classes, incorporating texture, layout, and context information efficiently. The learned model is used for automatic visual understanding and semantic segmentation of photographs. Our discriminative model exploits texture-layout filters, novel features based on textons, which jointly model patterns of texture and their spatial layout. Unary classification and feature selection is achieved using shared boosting to give an efficient classifier which can be applied to a large number of classes. Accurate image segmentation is achieved by incorporating the unary classifier in a conditional random field, which (i) captures the spatial interactions between class labels of neighboring pixels, and (ii) improves the segmentation of specific object instances. Efficient training of the model on large datasets is achieved by exploiting both random feature selection and piecewise training methods. High classification and segmentation accuracy is demonstrated on four varied databases: (i) the MSRC 21-class database containing photographs of real objects viewed under general lighting conditions, poses and viewpoints, (ii) the 7-class Corel subset and (iii) the 7-class Sowerby database used in (Proceeding of IEEE Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 695---702, June 2004), and (iv) a set of video sequences of television shows. The proposed algorithm gives competitive and visually pleasing results for objects that are highly textured (grass, trees, etc.), highly structured (cars, faces, bicycles, airplanes, etc.), and even articulated (body, cow, etc.)."
]
}
|
1405.0312
|
2952122856
|
We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 objects types that would be easily recognizable by a 4 year old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model.
|
Other vision datasets. Datasets have spurred the advancement of numerous fields in computer vision. Some notable datasets include the Middlebury datasets for stereo vision @cite_42 , multi-view stereo @cite_17 and optical flow @cite_48 . The Berkeley Segmentation Data Set (BSDS500) @cite_18 has been used extensively to evaluate both segmentation and edge detection algorithms. Datasets have also been created to recognize both scene @cite_49 and object attributes @cite_38 @cite_3 . Indeed, numerous areas of vision have benefited from challenging datasets that helped catalyze progress.
|
{
"cite_N": [
"@cite_38",
"@cite_18",
"@cite_48",
"@cite_42",
"@cite_3",
"@cite_49",
"@cite_17"
],
"mid": [
"2098411764",
"2110158442",
"2147253850",
"2104974755",
"2134270519",
"2070148066",
""
],
"abstract": [
"We propose to shift the goal of recognition from naming to describing. Doing so allows us not only to name familiar objects, but also: to report unusual aspects of a familiar object (“spotty dog”, not just “dog”); to say something about unfamiliar objects (“hairy and four-legged”, not just “unknown”); and to learn how to recognize new objects with few or no visual examples. Rather than focusing on identity assignment, we make inferring attributes the core problem of recognition. These attributes can be semantic (“spotty”) or discriminative (“dogs have it but sheep do not”). Learning attributes presents a major new challenge: generalization across object categories, not just across instances within a category. In this paper, we also introduce a novel feature selection method for learning attributes that generalize well across categories. We support our claims by thorough evaluation that provides insights into the limitations of the standard recognition paradigm of naming and demonstrates the new abilities provided by our attribute-based framework.",
"This paper investigates two fundamental problems in computer vision: contour detection and image segmentation. We present state-of-the-art algorithms for both of these tasks. Our contour detector combines multiple local cues into a globalization framework based on spectral clustering. Our segmentation algorithm consists of generic machinery for transforming the output of any contour detector into a hierarchical region tree. In this manner, we reduce the problem of image segmentation to that of contour detection. Extensive experimental evaluation demonstrates that both our contour detection and segmentation methods significantly outperform competing algorithms. The automatically generated hierarchical segmentations can be interactively refined by user-specified annotations. Computation at multiple image resolutions provides a means of coupling our system to recognition applications.",
"The quantitative evaluation of optical flow algorithms by (1994) led to significant advances in performance. The challenges for optical flow algorithms today go beyond the datasets and evaluation methods proposed in that paper. Instead, they center on problems associated with complex natural scenes, including nonrigid motion, real sensor noise, and motion discontinuities. We propose a new set of benchmarks and evaluation methods for the next generation of optical flow algorithms. To that end, we contribute four types of data to test different aspects of optical flow algorithms: (1) sequences with nonrigid motion where the ground-truth flow is determined by tracking hidden fluorescent texture, (2) realistic synthetic sequences, (3) high frame-rate video used to study interpolation error, and (4) modified stereo sequences of static scenes. In addition to the average angular error used by , we compute the absolute flow endpoint error, measures for frame interpolation error, improved statistics, and results at motion discontinuities and in textureless regions. In October 2007, we published the performance of several well-known methods on a preliminary version of our data to establish the current state of the art. We also made the data freely available on the web at http: vision.middlebury.edu flow . Subsequently a number of researchers have uploaded their results to our website and published papers using the data. A significant improvement in performance has already been achieved. In this paper we analyze the results obtained to date and draw a large number of conclusions from them.",
"Stereo matching is one of the most active research areas in computer vision. While a large number of algorithms for stereo correspondence have been developed, relatively little work has been done on characterizing their performance. In this paper, we present a taxonomy of dense, two-frame stereo methods designed to assess the different components and design decisions made in individual stereo algorithms. Using this taxonomy, we compare existing stereo methods and present experiments evaluating the performance of many different variants. In order to establish a common software platform and a collection of data sets for easy evaluation, we have designed a stand-alone, flexible C++ implementation that enables the evaluation of individual components and that can be easily extended to include new algorithms. We have also produced several new multiframe stereo data sets with ground truth, and are making both the code and data sets available on the Web.",
"We study the problem of object classification when training and test classes are disjoint, i.e. no training examples of the target classes are available. This setup has hardly been studied in computer vision research, but it is the rule rather than the exception, because the world contains tens of thousands of different object classes and for only a very few of them image, collections have been formed and annotated with suitable class labels. In this paper, we tackle the problem by introducing attribute-based classification. It performs object detection based on a human-specified high-level description of the target objects instead of training images. The description consists of arbitrary semantic attributes, like shape, color or even geographic information. Because such properties transcend the specific learning task at hand, they can be pre-learned, e.g. from image datasets unrelated to the current task. Afterwards, new classes can be detected based on their attribute representation, without the need for a new training phase. In order to evaluate our method and to facilitate research in this area, we have assembled a new large-scale dataset, “Animals with Attributes”, of over 30,000 animal images that match the 50 classes in Osherson's classic table of how strongly humans associate 85 semantic attributes with animal classes. Our experiments show that by using an attribute layer it is indeed possible to build a learning object detection system that does not require any training images of the target classes.",
"In this paper we present the first large-scale scene attribute database. First, we perform crowd-sourced human studies to find a taxonomy of 102 discriminative attributes. Next, we build the “SUN attribute database” on top of the diverse SUN categorical database. Our attribute database spans more than 700 categories and 14,000 images and has potential for use in high-level scene understanding and fine-grained scene recognition. We use our dataset to train attribute classifiers and evaluate how well these relatively simple classifiers can recognize a variety of attributes related to materials, surface properties, lighting, functions and affordances, and spatial envelope properties.",
""
]
}
|
1404.7571
|
2952036987
|
Tracking and approximating data matrices in streaming fashion is a fundamental challenge. The problem requires more care and attention when data comes from multiple distributed sites, each receiving a stream of data. This paper considers the problem of "tracking approximations to a matrix" in the distributed streaming model. In this model, there are m distributed sites each observing a distinct stream of data (where each element is a row of a distributed matrix) and has a communication channel with a coordinator, and the goal is to track an eps-approximation to the norm of the matrix along any direction. To that end, we present novel algorithms to address the matrix approximation problem. Our algorithms maintain a smaller matrix B, as an approximation to a distributed streaming matrix A, such that for any unit vector x: | ||A x||^2 - ||B x||^2 | <= eps ||A||_F^2. Our algorithms work in streaming fashion and incur small communication, which is critical for distributed computation. Our best method is deterministic and uses only O((m eps) log(beta N)) communication, where N is the size of stream (at the time of the query) and beta is an upper-bound on the squared norm of any row of the matrix. In addition to proving all algorithmic properties theoretically, extensive experiments with real large datasets demonstrate the efficiency of these protocols.
|
Matrix approximation in a centralized stream. Every incoming item in a centralized stream represents a new row of data in a streaming matrix. The goal is to continuously maintain a low-rank matrix approximation. It is a special instance of our problem for @math , i.e., there is only a single site. Several results exist in the literature, including streaming PCA (principal component analysis) @cite_8 , streaming SVD (singular value decomposition) @cite_11 @cite_37 , and matrix sketching @cite_23 @cite_3 @cite_31 . The matrix sketching technique @cite_3 only recently appeared and is the state-of-the-art for low-rank matrix approximation in a single stream. Liberty @cite_3 adapts a well-known streaming algorithm for approximating item frequencies, the MG algorithm @cite_6 , to sketching a streaming matrix. The method, Frequent Directions (FD), receives @math rows of a matrix @math one after another, in a centralized streaming fashion. It maintains a sketch @math with only @math rows, but guarantees that @math . More precisely, it guarantees that @math , @math , @math . FD uses @math space, and each item updates the sketch in amortized @math time; two such sketches can also be merged in @math time.
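To make the shrink-and-insert loop of Frequent Directions concrete, the following is a minimal, illustrative Python/NumPy sketch of the idea described above. It assumes the sketch has ell <= d rows, iterates over the rows of a dense matrix, and shrinks by the squared (ell/2)-th singular value; the function name and the NumPy realization are assumptions for illustration, not the authors' reference implementation.

import numpy as np

def frequent_directions(rows, d, ell):
    # Maintain an (ell x d) sketch B of a stream of d-dimensional rows.
    B = np.zeros((ell, d))
    next_zero = 0                      # index of the next all-zero row of B
    for a in rows:
        if next_zero == ell:           # sketch is full: shrink it via an SVD
            U, s, Vt = np.linalg.svd(B, full_matrices=False)
            delta = s[ell // 2] ** 2   # squared (ell/2)-th singular value
            s = np.sqrt(np.maximum(s ** 2 - delta, 0.0))
            B = np.zeros((ell, d))
            B[:Vt.shape[0]] = s[:, None] * Vt   # at least half the rows are now zero
            next_zero = ell // 2
        B[next_zero] = a
        next_zero += 1
    return B

# Toy usage: for any unit vector x, ||Ax||^2 - ||Bx||^2 stays within roughly 2*||A||_F^2 / ell.
A = np.random.randn(1000, 20)
B = frequent_directions(A, d=20, ell=10)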
|
{
"cite_N": [
"@cite_37",
"@cite_8",
"@cite_3",
"@cite_6",
"@cite_23",
"@cite_31",
"@cite_11"
],
"mid": [
"",
"2953310052",
"2951542269",
"2006355640",
"2059867647",
"2962984690",
"1584189884"
],
"abstract": [
"",
"We consider streaming, one-pass principal component analysis (PCA), in the high-dimensional regime, with limited memory. Here, @math -dimensional samples are presented sequentially, and the goal is to produce the @math -dimensional subspace that best approximates these points. Standard algorithms require @math memory; meanwhile no algorithm can do better than @math memory, since this is what the output itself requires. Memory (or storage) complexity is most meaningful when understood in the context of computational and sample complexity. Sample complexity for high-dimensional PCA is typically studied in the setting of the spiked covariance model , where @math -dimensional points are generated from a population covariance equal to the identity (white noise) plus a low-dimensional perturbation (the spike) which is the signal to be recovered. It is now well-understood that the spike can be recovered when the number of samples, @math , scales proportionally with the dimension, @math . Yet, all algorithms that provably achieve this, have memory complexity @math . Meanwhile, algorithms with memory-complexity @math do not have provable bounds on sample complexity comparable to @math . We present an algorithm that achieves both: it uses @math memory (meaning storage of any kind) and is able to compute the @math -dimensional spike with @math sample-complexity -- the first algorithm of its kind. While our theoretical analysis focuses on the spiked covariance model, our simulations show that our algorithm is successful on much more general models for the data.",
"We adapt a well known streaming algorithm for approximating item frequencies to the matrix sketching setting. The algorithm receives the rows of a large matrix @math one after the other in a streaming fashion. It maintains a sketch matrix @math such that for any unit vector @math [ |Ax |^2 |Bx |^2 |Ax |^2 - |A |_ f ^2 .] Sketch updates per row in @math require @math operations in the worst case. A slight modification of the algorithm allows for an amortized update time of @math operations per row. The presented algorithm stands out in that it is: deterministic, simple to implement, and elementary to prove. It also experimentally produces more accurate sketches than widely used approaches while still being computationally competitive.",
"Two algorithms are presented for finding the values that occur more than @math times in array b[O:n-1]. The second algorithm requires time @math and extra space @math . We prove that @math is a lower bound on the time required for any algorithm based on comparing array elements, so that the second algorithm is optimal. As special cases, determining whether a value occurs more than @math times requires linear time, but determining whether there are duplicates the case @math requires time @math . The algorithms may be interesting from a standpoint of programming methodology; each was developed as an extension of an algorithm for the simple case @math .",
"We give near-optimal space bounds in the streaming model for linear algebra problems that include estimation of matrix products, linear regression, low-rank approximation, and approximation of matrix rank. In the streaming model, sketches of input matrices are maintained under updates of matrix entries; we prove results for turnstile updates, given in an arbitrary order. We give the first lower bounds known for the space needed by the sketches, for a given estimation error e. We sharpen prior upper bounds, with respect to combinations of space, failure probability, and number of passes. The sketch we use for matrix A is simply STA, where S is a sign matrix. Our results include the following upper and lower bounds on the bits of space needed for 1-pass algorithms. Here A is an n x d matrix, B is an n x d' matrix, and c := d+d'. These results are given for fixed failure probability; for failure probability δ>0, the upper bounds require a factor of log(1 δ) more space. We assume the inputs have integer entries specified by O(log(nc)) bits, or O(log(nd)) bits. (Matrix Product) Output matrix C with F(ATB-C) ≤ e F(A) F(B). We show that Θ(ce-2log(nc)) space is needed. (Linear Regression) For d'=1, so that B is a vector b, find x so that Ax-b ≤ (1+e) minx' ∈ Reald Ax'-b. We show that Θ(d2e-1 log(nd)) space is needed. (Rank-k Approximation) Find matrix tAk of rank no more than k, so that F(A-tAk) ≤ (1+e) F A-Ak , where Ak is the best rank-k approximation to A. Our lower bound is Ω(ke-1(n+d)log(nd)) space, and we give a one-pass algorithm matching this when A is given row-wise or column-wise. For general updates, we give a one-pass algorithm needing [O(ke-2(n + d e2)log(nd))] space. We also give upper and lower bounds for algorithms using multiple passes, and a sketching analog of the CUR decomposition.",
"We consider processing an n x d matrix A in a stream with row-wise updates according to a recent algorithm called Frequent Directions (Liberty, KDD 2013). This algorithm maintains an e x d matrix Q deterministically, processing each row in O(de2) time; the processing time can be decreased to O(de) with a slight modification in the algorithm and a constant increase in space. Then for any unit vector x, the matrix Q satisfies [EQUATION] We show that if one sets e = ⌈k + k e⌉ and returns Qk, a k x d matrix that is simply the top k rows of Q, then we achieve the following properties: [EQUATION] and where πQk (A) is the projection of A onto the rowspace of Qk then [EQUATION] We also show that Frequent Directions cannot be adapted to a sparse version in an obvious way that retains e original rows of the matrix, as opposed to a linear combination or sketch of the rows.",
""
]
}
|
1404.7571
|
2952036987
|
Tracking and approximating data matrices in streaming fashion is a fundamental challenge. The problem requires more care and attention when data comes from multiple distributed sites, each receiving a stream of data. This paper considers the problem of "tracking approximations to a matrix" in the distributed streaming model. In this model, there are m distributed sites each observing a distinct stream of data (where each element is a row of a distributed matrix) and has a communication channel with a coordinator, and the goal is to track an eps-approximation to the norm of the matrix along any direction. To that end, we present novel algorithms to address the matrix approximation problem. Our algorithms maintain a smaller matrix B, as an approximation to a distributed streaming matrix A, such that for any unit vector x: | ||A x||^2 - ||B x||^2 | <= eps ||A||_F^2. Our algorithms work in streaming fashion and incur small communication, which is critical for distributed computation. Our best method is deterministic and uses only O((m eps) log(beta N)) communication, where N is the size of stream (at the time of the query) and beta is an upper-bound on the squared norm of any row of the matrix. In addition to proving all algorithmic properties theoretically, extensive experiments with real large datasets demonstrate the efficiency of these protocols.
|
An extension of FD to derive streaming sketch results with bounds on relative errors, i.e., to ensure that @math , appeared in @cite_31 . It also gives that @math , where @math is the top @math rows of @math and @math is the projection of @math onto the row-space of @math . This latter bound is interesting because, as we will see, it indicates that when most of the variation is captured by the first @math principal components, we can almost recover the entire matrix exactly.
|
{
"cite_N": [
"@cite_31"
],
"mid": [
"2962984690"
],
"abstract": [
"We consider processing an n x d matrix A in a stream with row-wise updates according to a recent algorithm called Frequent Directions (Liberty, KDD 2013). This algorithm maintains an e x d matrix Q deterministically, processing each row in O(de2) time; the processing time can be decreased to O(de) with a slight modification in the algorithm and a constant increase in space. Then for any unit vector x, the matrix Q satisfies [EQUATION] We show that if one sets e = ⌈k + k e⌉ and returns Qk, a k x d matrix that is simply the top k rows of Q, then we achieve the following properties: [EQUATION] and where πQk (A) is the projection of A onto the rowspace of Qk then [EQUATION] We also show that Frequent Directions cannot be adapted to a sparse version in an obvious way that retains e original rows of the matrix, as opposed to a linear combination or sketch of the rows."
]
}
|
1404.7571
|
2952036987
|
Tracking and approximating data matrices in streaming fashion is a fundamental challenge. The problem requires more care and attention when data comes from multiple distributed sites, each receiving a stream of data. This paper considers the problem of "tracking approximations to a matrix" in the distributed streaming model. In this model, there are m distributed sites each observing a distinct stream of data (where each element is a row of a distributed matrix) and has a communication channel with a coordinator, and the goal is to track an eps-approximation to the norm of the matrix along any direction. To that end, we present novel algorithms to address the matrix approximation problem. Our algorithms maintain a smaller matrix B, as an approximation to a distributed streaming matrix A, such that for any unit vector x: | ||A x||^2 - ||B x||^2 | <= eps ||A||_F^2. Our algorithms work in streaming fashion and incur small communication, which is critical for distributed computation. Our best method is deterministic and uses only O((m eps) log(beta N)) communication, where N is the size of stream (at the time of the query) and beta is an upper-bound on the squared norm of any row of the matrix. In addition to proving all algorithmic properties theoretically, extensive experiments with real large datasets demonstrate the efficiency of these protocols.
|
Babcock and Olston @cite_24 designed deterministic heuristics, known as Top-K Monitoring, to compute the top- @math frequent items. Fuller and Kantardzic @cite_33 modified their technique and proposed a heuristic method to track the heavy hitters while reducing communication cost and improving the overall quality of results. Manjhi et al. @cite_44 studied @math -heavy hitter tracking in a hierarchical communication model.
|
{
"cite_N": [
"@cite_24",
"@cite_44",
"@cite_33"
],
"mid": [
"",
"2166767032",
"2116745197"
],
"abstract": [
"",
"We consider the problem of maintaining frequency counts for items occurring frequently in the union of multiple distributed data streams. Naive methods of combining approximate frequency counts from multiple nodes tend to result in excessively large data structures that are costly to transfer among nodes. To minimize communication requirements, the degree of precision maintained by each node while counting item frequencies must be managed carefully. We introduce the concept of a precision gradient for managing precision when nodes are arranged in a hierarchical communication structure. We then study the optimization problem of how to set the precision gradient so as to minimize communication, and provide optimal solutions that minimize worst-case communication load over all possible inputs. We then introduce a variant designed to perform well in practice, with input data that does not conform to worst-case characteristics. We verify the effectiveness of our approach empirically using real-world data, and show that our methods incur substantially less communication than naive approaches while providing the same error guarantees on answers.",
"Many applications require the discovery of items which have occur frequently within multiple distributed data streams. Past solutions for this problem either require a high degree of error tolerance or can only provide results periodically. In this paper we introduce a new algorithm designed for continuously tracking frequent items over distributed data streams providing either exact or approximate answers. We tested the efficiency of our method using two real-world data sets. The results indicated significant reduction in communication cost when compared to naive approaches and an existing efficient algorithm called Top-K Monitoring. Since our method does not rely upon approximations to reduce communication overhead and is explicitly designed for tracking frequent items, our method also shows increased quality in its tracking results."
]
}
|
1404.7571
|
2952036987
|
Tracking and approximating data matrices in streaming fashion is a fundamental challenge. The problem requires more care and attention when data comes from multiple distributed sites, each receiving a stream of data. This paper considers the problem of "tracking approximations to a matrix" in the distributed streaming model. In this model, there are m distributed sites each observing a distinct stream of data (where each element is a row of a distributed matrix) and has a communication channel with a coordinator, and the goal is to track an eps-approximation to the norm of the matrix along any direction. To that end, we present novel algorithms to address the matrix approximation problem. Our algorithms maintain a smaller matrix B, as an approximation to a distributed streaming matrix A, such that for any unit vector x: | ||A x||^2 - ||B x||^2 | <= eps ||A||_F^2. Our algorithms work in streaming fashion and incur small communication, which is critical for distributed computation. Our best method is deterministic and uses only O((m eps) log(beta N)) communication, where N is the size of stream (at the time of the query) and beta is an upper-bound on the squared norm of any row of the matrix. In addition to proving all algorithmic properties theoretically, extensive experiments with real large datasets demonstrate the efficiency of these protocols.
|
Cormode and Garofalakis @cite_19 proposed another method by maintaining a summary of the input stream and a prediction sketch at each site. If the summary varies from the prediction sketch by more than a user-defined tolerance, the summary and (possibly) a new prediction sketch are sent to the coordinator. The coordinator can use the information gathered from each site to continuously report frequent items. Sketches maintained by each site in this method require @math space and @math time per update, where @math is a probabilistic confidence.
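A bare-bones sketch of the local check described above could look as follows. The summary here is a plain item-count dictionary and the drift measure is an L1 distance; both are simplifying assumptions made purely for illustration, since the cited work uses randomized sketch summaries and its own deviation criteria, and all class and variable names are hypothetical.

from collections import Counter

class PredictionSite:
    def __init__(self, tolerance):
        self.tolerance = tolerance
        self.summary = Counter()      # local summary of the stream seen so far
        self.prediction = Counter()   # what the coordinator currently assumes for this site

    def observe(self, item):
        self.summary[item] += 1
        keys = self.summary.keys() | self.prediction.keys()
        drift = sum(abs(self.summary[k] - self.prediction[k]) for k in keys)
        if drift > self.tolerance:                    # summary drifted too far from the prediction:
            self.prediction = Counter(self.summary)   # refresh the prediction and
            return Counter(self.summary)              # ship the new summary to the coordinator
        return None                                   # otherwise stay silent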
|
{
"cite_N": [
"@cite_19"
],
"mid": [
"2107443258"
],
"abstract": [
"Emerging large-scale monitoring applications require continuous tracking of complex data-analysis queries over collections of physically-distributed streams. Effective solutions have to be simultaneously space time efficient (at each remote monitor site), communication efficient (across the underlying communication network), and provide continuous, guaranteed-quality approximate query answers. In this paper, we propose novel algorithmic solutions for the problem of continuously tracking a broad class of complex aggregate queries in such a distributed-streams setting. Our tracking schemes maintain approximate query answers with provable error guarantees, while simultaneously optimizing the storage space and processing time at each remote site, and the communication cost across the network. They rely on tracking general-purpose randomized sketch summaries of local streams at remote sites along with concise prediction models of local site behavior in order to produce highly communication- and space time-efficient solutions. The result is a powerful approximate query tracking framework that readily incorporates several complex analysis queries (including distributed join and multi-join aggregates, and approximate wavelet representations), thus giving the first known low-overhead tracking solution for such queries in the distributed-streams model."
]
}
|
1404.7571
|
2952036987
|
Tracking and approximating data matrices in streaming fashion is a fundamental challenge. The problem requires more care and attention when data comes from multiple distributed sites, each receiving a stream of data. This paper considers the problem of "tracking approximations to a matrix" in the distributed streaming model. In this model, there are m distributed sites each observing a distinct stream of data (where each element is a row of a distributed matrix) and has a communication channel with a coordinator, and the goal is to track an eps-approximation to the norm of the matrix along any direction. To that end, we present novel algorithms to address the matrix approximation problem. Our algorithms maintain a smaller matrix B, as an approximation to a distributed streaming matrix A, such that for any unit vector x: | ||A x||^2 - ||B x||^2 | <= eps ||A||_F^2. Our algorithms work in streaming fashion and incur small communication, which is critical for distributed computation. Our best method is deterministic and uses only O((m eps) log(beta N)) communication, where N is the size of stream (at the time of the query) and beta is an upper-bound on the squared norm of any row of the matrix. In addition to proving all algorithmic properties theoretically, extensive experiments with real large datasets demonstrate the efficiency of these protocols.
|
Yi and Zhang @cite_47 provided a deterministic algorithm with communication cost @math and @math space at each site to continuously track @math -heavy hitters and the @math -quantiles. In their method, every site and the coordinator maintain one counter per item type, plus one more counter for the total number of items. Every site keeps track of the number of items it receives in each round; once this number reaches roughly @math times the total counter at the coordinator, the site sends its counters to the coordinator. After the coordinator receives @math such messages, it updates its counters and broadcasts them to all sites. Sites reset their counter values and continue to the next round. To lower space usage at sites, they suggested using the space-saving sketch @cite_10 . The authors also gave matching lower bounds on the communication costs for both problems, showing their algorithms are optimal in the deterministic setting.
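The round structure described in the previous paragraph can be mimicked in a toy, single-process simulation like the one below. The class names, the exact threshold expression, and the deferred application of updates at the coordinator are assumptions chosen to mirror the prose, not the paper's actual implementation.

from collections import Counter

class TrackingSite:
    def __init__(self):
        self.counts = Counter()   # per-item counters for the current round
        self.round_items = 0      # items received in the current round

    def receive(self, item, threshold):
        self.counts[item] += 1
        self.round_items += 1
        if self.round_items >= threshold:   # roughly (eps/m) times the coordinator's total
            msg = Counter(self.counts)
            self.counts.clear()             # reset here for simplicity; in the paper sites
            self.round_items = 0            # reset on the coordinator's end-of-round broadcast
            return msg                      # counters shipped to the coordinator
        return None

class TrackingCoordinator:
    def __init__(self, m, eps):
        self.m, self.eps = m, eps
        self.counts = Counter()
        self.total = 0
        self.pending = []                   # messages received in the current round

    def threshold(self):
        return max(1, (self.eps / self.m) * self.total)

    def on_message(self, msg):
        self.pending.append(msg)
        if len(self.pending) >= self.m:     # end of round: fold in updates, broadcast
            for u in self.pending:
                self.counts.update(u)
                self.total += sum(u.values())
            self.pending = []
            return self.threshold()         # new threshold broadcast to all sites
        return None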
|
{
"cite_N": [
"@cite_47",
"@cite_10"
],
"mid": [
"1997642935",
"1970779762"
],
"abstract": [
"We consider the problem of tracking heavy hitters and quantiles in the distributed streaming model. The heavy hitters and quantiles are two important statistics for characterizing a data distribution. Let A be a multiset of elements, drawn from the universe U= 1,ź,u . For a given 0≤ź≤1, the ź-heavy hitters are those elements of A whose frequency in A is at least ź|A|; the ź-quantile of A is an element x of U such that at most ź|A| elements of A are smaller than A and at most (1źź)|A| elements of A are greater than x. Suppose the elements of A are received at k remote sites over time, and each of the sites has a two-way communication channel to a designated coordinator, whose goal is to track the set of ź-heavy hitters and the ź-quantile of A approximately at all times with minimum communication. We give tracking algorithms with worst-case communication cost O(k ∈źlogn) for both problems, where n is the total number of items in A, and ∈ is the approximation error. This substantially improves upon the previous known algorithms. We also give matching lower bounds on the communication costs for both problems, showing that our algorithms are optimal. We also consider a more general version of the problem where we simultaneously track the ź-quantiles for all 0≤ź≤1.",
"We propose an approximate integrated approach for solving both problems of finding the most popular k elements, and finding frequent elements in a data stream coming from a large domain. Our solution is space efficient and reports both frequent and top-k elements with tight guarantees on errors. For general data distributions, our top-k algorithm returns k elements that have roughly the highest frequencies; and it uses limited space for calculating frequent elements. For realistic Zipfian data, the space requirement of the proposed algorithm for solving the exact frequent elements problem decreases dramatically with the parameter of the distribution; and for top-k queries, the analysis ensures that only the top-k elements, in the correct order, are reported. The experiments, using real and synthetic data sets, show space reductions with hardly any loss in accuracy. Having proved the effectiveness of the proposed approach through both analysis and experiments, we extend it to be able to answer continuous queries about frequent and top-k elements. Although the problems of incremental reporting of frequent and top-k elements are useful in many applications, to the best of our knowledge, no solution has been proposed."
]
}
|
1404.7571
|
2952036987
|
Tracking and approximating data matrices in streaming fashion is a fundamental challenge. The problem requires more care and attention when data comes from multiple distributed sites, each receiving a stream of data. This paper considers the problem of "tracking approximations to a matrix" in the distributed streaming model. In this model, there are m distributed sites each observing a distinct stream of data (where each element is a row of a distributed matrix) and has a communication channel with a coordinator, and the goal is to track an eps-approximation to the norm of the matrix along any direction. To that end, we present novel algorithms to address the matrix approximation problem. Our algorithms maintain a smaller matrix B, as an approximation to a distributed streaming matrix A, such that for any unit vector x: | ||A x||^2 - ||B x||^2 | <= eps ||A||_F^2. Our algorithms work in streaming fashion and incur small communication, which is critical for distributed computation. Our best method is deterministic and uses only O((m eps) log(beta N)) communication, where N is the size of stream (at the time of the query) and beta is an upper-bound on the squared norm of any row of the matrix. In addition to proving all algorithmic properties theoretically, extensive experiments with real large datasets demonstrate the efficiency of these protocols.
|
Later, Huang et al. @cite_25 proposed a randomized algorithm that uses @math space at each site and @math total communication to track heavy hitters in a distributed stream. For each item @math in the stream, a site chooses to send a message with probability @math , where @math is a @math -approximation of the total count. It then sends @math , the total count of messages at site @math where @math , to the coordinator. Again, an approximate heavy-hitter count @math can be used at each site to reduce space.
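The per-item randomized reporting rule sketched above could be mocked up as below. The send probability is left as an explicit parameter because its precise setting (an expression involving an approximation of the total count, elided to @math in the text) is not reproduced here; the class and variable names are illustrative assumptions only.

import random

class RandomizedSite:
    def __init__(self, site_id, send_probability):
        self.site_id = site_id
        self.p = send_probability   # set from an estimate of the total count (see the cited paper)
        self.count = 0              # number of items seen at this site so far

    def receive(self, item):
        self.count += 1
        if random.random() < self.p:
            return (self.site_id, self.count)   # report the current local count to the coordinator
        return None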
|
{
"cite_N": [
"@cite_25"
],
"mid": [
"2949236774"
],
"abstract": [
"We show that randomization can lead to significant improvements for a few fundamental problems in distributed tracking. Our basis is the count-tracking problem, where there are @math players, each holding a counter @math that gets incremented over time, and the goal is to track an @math -approximation of their sum @math continuously at all times, using minimum communication. While the deterministic communication complexity of the problem is @math , where @math is the final value of @math when the tracking finishes, we show that with randomization, the communication cost can be reduced to @math . Our algorithm is simple and uses only O(1) space at each player, while the lower bound holds even assuming each player has infinite computing power. Then, we extend our techniques to two related distributed tracking problems: frequency-tracking and rank-tracking , and obtain similar improvements over previous deterministic algorithms. Both problems are of central importance in large data monitoring and analysis, and have been extensively studied in the literature."
]
}
|
1404.7571
|
2952036987
|
Tracking and approximating data matrices in streaming fashion is a fundamental challenge. The problem requires more care and attention when data comes from multiple distributed sites, each receiving a stream of data. This paper considers the problem of "tracking approximations to a matrix" in the distributed streaming model. In this model, there are m distributed sites each observing a distinct stream of data (where each element is a row of a distributed matrix) and has a communication channel with a coordinator, and the goal is to track an eps-approximation to the norm of the matrix along any direction. To that end, we present novel algorithms to address the matrix approximation problem. Our algorithms maintain a smaller matrix B, as an approximation to a distributed streaming matrix A, such that for any unit vector x: | ||A x||^2 - ||B x||^2 | <= eps ||A||_F^2. Our algorithms work in streaming fashion and incur small communication, which is critical for distributed computation. Our best method is deterministic and uses only O((m eps) log(beta N)) communication, where N is the size of stream (at the time of the query) and beta is an upper-bound on the squared norm of any row of the matrix. In addition to proving all algorithmic properties theoretically, extensive experiments with real large datasets demonstrate the efficiency of these protocols.
|
Other related work. Lastly, our work falls into the general problem of tracking a function in the distributed streaming model. Many existing works have studied this general problem for various specific functions, and we have reviewed the most closely related ones, those on heavy hitters. A detailed survey of results on other functions (that are much less relevant to our study) is beyond the scope of this work, and we refer interested readers to @cite_14 @cite_2 and references therein.
|
{
"cite_N": [
"@cite_14",
"@cite_2"
],
"mid": [
"2123430048",
"2012299525"
],
"abstract": [
"We study what we call functional monitoring problems. We have k players each tracking their inputs, say player i tracking a multiset Ai(t) up until time t, and communicating with a central coordinator. The coordinator's task is to monitor a given function f computed over the union of the inputs ∪iAi(t), continuously at all times t. The goal is to minimize the number of bits communicated between the players and the coordinator. A simple example is when f is the sum, and the coordinator is required to alert when the sum of a distributed set of values exceeds a given threshold τ. Of interest is the approximate version where the coordinator outputs 1 if f ≥ τ and 0 if f ≤ (1 - e)τ. This defines the (k, f, τ, e) distributed, functional monitoring problem. Functional monitoring problems are fundamental in distributed systems, in particular sensor networks, where we must minimize communication; they also connect to problems in communication complexity, communication theory, and signal processing. Yet few formal bounds are known for functional monitoring. We give upper and lower bounds for the (k, f, τ, e) problem for some of the basic f's. In particular, we study frequency moments (F0, F1, F2). For F0 and F1, we obtain continuously monitoring algorithms with costs almost the same as their one-shot computation algorithms. However, for F2 the monitoring problem seems much harder. We give a carefully constructed multi-round algorithm that uses \"sketch summaries\" at multiple levels of detail and solves the (k, F2, τ, e) problem with communication O(k2 e+ (√k e)3). Since frequency moment estimation is central to other problems, our results have immediate applications to histograms, wavelet computations, and others. Our algorithmic techniques are likely to be useful for other functional monitoring problems as well.",
"In the model of continuous distributed monitoring, a number of observers each see a stream of observations. Their goal is to work together to compute a function of the union of their observations. This can be as simple as counting the total number of observations, or more complex non-linear functions such as tracking the entropy of the induced distribution. Assuming that it is too costly to simply centralize all the observations, it becomes quite challenging to design solutions which provide a good approximation to the current answer, while bounding the communication cost of the observers, and their other resources such as their space usage. This survey introduces this model, and describe a selection results in this setting, from the simple counting problem to a variety of other functions that have been studied."
]
}
|
1404.7152
|
2021278340
|
Geographically annotated social media is extremely valuable for modern information retrieval. However, when researchers can only access publicly-visible data, one quickly finds that social media users rarely publish location information. In this work, we provide a method which can geolocate the overwhelming majority of active Twitter users, independent of their location sharing preferences, using only publicly-visible Twitter data. Our method infers an unknown user's location by examining their friend's locations. We frame the geotagging problem as an optimization over a social network with a total variation-based objective and provide a scalable and distributed algorithm for its solution. Furthermore, we show how a robust estimate of the geographic dispersion of each user's ego network can be used as a per-user accuracy measure which is effective at removing outlying errors. Leave-many-out evaluation shows that our method is able to infer location for 101, 846, 236 Twitter users at a median error of 6.38 km, allowing us to geotag over 80 of public tweets.
|
Work by @cite_14 and @cite_29 provided methods based on identifying and searching for location-specific terms in Twitter text. More recent work by @cite_1 (extended in @cite_3 ) builds on these approaches with ensemble classifiers that account for additional features such as time zone and volume of tweets per hour. To the best of our knowledge, @cite_3 showcases the current state of the art for content-based geotagging: @math test users at 68 . Geotagging via natural language processing requires that users from different geographic regions tweet in different dialects, and that these differences are great enough to make accurate location inference possible. Evidence against this possibility appears in @cite_21 , where the author examined a large Twitter dataset and found minimal agreement between language models and proximity. Additionally, language-based geotagging methods often rely on sophisticated language-specific natural language processing and are thus difficult to extend worldwide.
|
{
"cite_N": [
"@cite_14",
"@cite_29",
"@cite_21",
"@cite_1",
"@cite_3"
],
"mid": [
"2018277822",
"2142889507",
"50479354",
"2277420157",
"2006239241"
],
"abstract": [
"We propose and evaluate a probabilistic framework for estimating a Twitter user's city-level location based purely on the content of the user's tweets, even in the absence of any other geospatial cues. By augmenting the massive human-powered sensing capabilities of Twitter and related microblogging services with content-derived location information, this framework can overcome the sparsity of geo-enabled features in these services and enable new location-based personalized information services, the targeting of regional advertisements, and so on. Three of the key features of the proposed approach are: (i) its reliance purely on tweet content, meaning no need for user IP information, private login information, or external knowledge bases; (ii) a classification component for automatically identifying words in tweets with a strong local geo-scope; and (iii) a lattice-based neighborhood smoothing model for refining a user's location estimate. The system estimates k possible locations for each user in descending order of confidence. On average we find that the location estimates converge quickly (needing just 100s of tweets), placing 51 of Twitter users within 100 miles of their actual location.",
"The rapid growth of geotagged social media raises new computational possibilities for investigating geographic linguistic variation. In this paper, we present a multi-level generative model that reasons jointly about latent topics and geographical regions. High-level topics such as \"sports\" or \"entertainment\" are rendered differently in each geographic region, revealing topic-specific regional distinctions. Applied to a new dataset of geotagged microblogs, our model recovers coherent topics and their regional variants, while identifying geographic areas of linguistic consistency. The model also enables prediction of an author's geographic location from raw text, outperforming both text regression and supervised topic models.",
"Social networks are often grounded in spatial locality where individuals form relationships with those they meet nearby. However, the location of individuals in online social networking platforms is often unknown. Prior approaches have tried to infer individuals' locations from the content they produce online or their online relations, but often are limited by the available location-related data. We propose a new method for social networks that accurately infers locations for nearly all of individuals by spatially propagating location assignments through the social network, using only a small number of initial locations. In five experiments, we demonstrate the effectiveness in multiple social networking platforms, using both precise and noisy data to start the inference, and present heuristics for improving performance. In one experiment, we demonstrate the ability to infer the locations of a group of users who generate over 74 of the daily Twitter message volume with an estimated median location error of 10km. Our results open the possibility of gathering large quantities of location-annotated data from social media platforms.",
"We present a new algorithm for inferring the home locations of Twitter users at different granularities, such as city, state, or time zone, using the content of their tweets and their tweeting behavior. Unlike existing approaches, our algorithm uses an ensemble of statistical and heuristic classifiers to predict locations. We find that a hierarchical classification approach can improve prediction accuracy. Experimental evidence suggests that our algorithm works well in practice and outperforms the best existing algorithms for predicting the location of Twitter users.",
"We present a new algorithm for inferring the home location of Twitter users at different granularities, including city, state, time zone, or geographic region, using the content of users’ tweets and their tweeting behavior. Unlike existing approaches, our algorithm uses an ensemble of statistical and heuristic classifiers to predict locations and makes use of a geographic gazetteer dictionary to identify place-name entities. We find that a hierarchical classification approach, where time zone, state, or geographic region is predicted first and city is predicted next, can improve prediction accuracy. We have also analyzed movement variations of Twitter users, built a classifier to predict whether a user was travelling in a certain period of time, and use that to further improve the location detection accuracy. Experimental evidence suggests that our algorithm works well in practice and outperforms the best existing algorithms for predicting the home location of Twitter users."
]
}
|
1404.7152
|
2021278340
|
Geographically annotated social media is extremely valuable for modern information retrieval. However, when researchers can only access publicly-visible data, one quickly finds that social media users rarely publish location information. In this work, we provide a method which can geolocate the overwhelming majority of active Twitter users, independent of their location sharing preferences, using only publicly-visible Twitter data. Our method infers an unknown user's location by examining their friend's locations. We frame the geotagging problem as an optimization over a social network with a total variation-based objective and provide a scalable and distributed algorithm for its solution. Furthermore, we show how a robust estimate of the geographic dispersion of each user's ego network can be used as a per-user accuracy measure which is effective at removing outlying errors. Leave-many-out evaluation shows that our method is able to infer location for 101, 846, 236 Twitter users at a median error of 6.38 km, allowing us to geotag over 80 of public tweets.
|
Large-scale language-agnostic geotagging is possible by inferring a user's location from the known locations of their friends. Influential work in this field was conducted at Facebook Inc. in @cite_13 . Here, the authors infer a user's home address with a maximum likelihood estimate that best fits an empirically observed power law. Surprisingly, the results of @cite_13 indicate that social network based methods are more accurate than IP-based geolocation. Since the IP addresses of Twitter users are never public, and our research involves public data alone, we cannot report any results about how the present work compares against IP-based Twitter geolocation.
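To make the preceding description concrete, the sketch below shows one way such a maximum-likelihood inference can look: score each candidate location by the log-probability of the observed friendships under a distance-based power law, then pick the best candidate. The power-law constants, the candidate set, and the independence assumption across friends are all illustrative; the cited work fits its curve empirically and also accounts for non-friends.

```python
# Sketch: pick the candidate location that maximizes the likelihood of the observed friendships
# under an assumed distance-based power law for friendship probability.
import math

def friendship_prob(dist_km, a=0.02, b=0.5, c=1.2):
    # assumed power-law probability that two users at this distance are friends
    return a * (b + dist_km) ** (-c)

def haversine_km(p, q):
    (lat1, lon1), (lat2, lon2) = p, q
    lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def log_likelihood(candidate, friend_locations):
    # log-probability of the observed edges if the user lived at `candidate`,
    # assuming independence across friends
    return sum(math.log(friendship_prob(haversine_km(candidate, f)))
               for f in friend_locations)

def infer_location(friend_locations, candidates):
    return max(candidates, key=lambda c: log_likelihood(c, friend_locations))

# usage: the candidate set can simply be the friends' own locations
friends = [(40.71, -74.00), (40.73, -73.99), (34.05, -118.24)]
print(infer_location(friends, candidates=friends))
```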
|
{
"cite_N": [
"@cite_13"
],
"mid": [
"2168346693"
],
"abstract": [
"Geography and social relationships are inextricably intertwined; the people we interact with on a daily basis almost always live near us. As people spend more time online, data regarding these two dimensions -- geography and social relationships -- are becoming increasingly precise, allowing us to build reliable models to describe their interaction. These models have important implications in the design of location-based services, security intrusion detection, and social media supporting local communities. Using user-supplied address data and the network of associations between members of the Facebook social network, we can directly observe and measure the relationship between geography and friendship. Using these measurements, we introduce an algorithm that predicts the location of an individual from a sparse set of located users with performance that exceeds IP-based geolocation. This algorithm is efficient and scalable, and could be run on a network containing hundreds of millions of users."
]
}
|
1404.7152
|
2021278340
|
Geographically annotated social media is extremely valuable for modern information retrieval. However, when researchers can only access publicly-visible data, one quickly finds that social media users rarely publish location information. In this work, we provide a method which can geolocate the overwhelming majority of active Twitter users, independent of their location sharing preferences, using only publicly-visible Twitter data. Our method infers an unknown user's location by examining their friend's locations. We frame the geotagging problem as an optimization over a social network with a total variation-based objective and provide a scalable and distributed algorithm for its solution. Furthermore, we show how a robust estimate of the geographic dispersion of each user's ego network can be used as a per-user accuracy measure which is effective at removing outlying errors. Leave-many-out evaluation shows that our method is able to infer location for 101, 846, 236 Twitter users at a median error of 6.38 km, allowing us to geotag over 80 of public tweets.
|
Geotagging work on Flickr and Twitter by Sadilek et al. @cite_26 uses social ties to infer location and then studies the converse problem: using location to infer social ties. They conclude that location alone is insufficient for this task. Very recent work by @cite_16 uses network structure as well as language processing of user profiles to identify "landmark" users in the United States for whom location inference is optimal, while @cite_31 showcases a similar result on Korean Twitter users.
|
{
"cite_N": [
"@cite_31",
"@cite_26",
"@cite_16"
],
"mid": [
"2296740734",
"2069090820",
"2038022961"
],
"abstract": [
"Geographic locations of users form an important axis in public polls and localized advertising, but are not available by default. The number of users who make their locations public or use GPS tagging is relatively small, compared to the huge number of users in online social networking services and social media platforms. In this work we propose a new framework to infer a user's main location of activities in Twitter using their textual contents. Our approach is based on a probabilistic generative model that filters local words, employs data binning for scalability, and applies a map projection technique for performance. For Korean Twitter users, we report that 60 of users are identified within 10 km of their locations, a significant improvement over existing approaches.",
"Location plays an essential role in our lives, bridging our online and offline worlds. This paper explores the interplay between people's location, interactions, and their social ties within a large real-world dataset. We present and evaluate Flap, a system that solves two intimately related tasks: link and location prediction in online social networks. For link prediction, Flap infers social ties by considering patterns in friendship formation, the content of people's messages, and user location. We show that while each component is a weak predictor of friendship alone, combining them results in a strong model, accurately identifying the majority of friendships. For location prediction, Flap implements a scalable probabilistic model of human mobility, where we treat users with known GPS positions as noisy sensors of the location of their friends. We explore supervised and unsupervised learning scenarios, and focus on the efficiency of both learning and inference. We evaluate Flap on a large sample of highly active users from two distinct geographical areas and show that it (1) reconstructs the entire friendship graph with high accuracy even when no edges are given; and (2) infers people's fine-grained location, even when they keep their data private and we can only access the location of their friends. Our models significantly outperform current comparable approaches to either task.",
"Location profiles of user accounts in social media can be utilized for various applications, such as disaster warnings and location-aware recommendations. In this paper, we propose a scheme to infer users' home locations in social media. A large portion of existing studies assume that connected users (i.e., friends) in social graphs are located in close proximity. Although this assumption holds for some fraction of connected pairs, sometimes connected pairs live far from each other. To address this issue, we introduce a novel concept of landmarks, which are defined as users with a lot of friends who live in a small region. Landmarks have desirable features to infer users' home locations such as providing strong clues and allowing the locations of numerous users to be inferred using a small number of landmarks. Based on this concept, we propose a landmark mixture model (LMM) to infer users' location. The experimental results using a large-scale Twitter dataset show that our method improves the accuracy of the state-of-the-art method by about 27 ."
]
}
|
1404.7152
|
2021278340
|
Geographically annotated social media is extremely valuable for modern information retrieval. However, when researchers can only access publicly-visible data, one quickly finds that social media users rarely publish location information. In this work, we provide a method which can geolocate the overwhelming majority of active Twitter users, independent of their location sharing preferences, using only publicly-visible Twitter data. Our method infers an unknown user's location by examining their friend's locations. We frame the geotagging problem as an optimization over a social network with a total variation-based objective and provide a scalable and distributed algorithm for its solution. Furthermore, we show how a robust estimate of the geographic dispersion of each user's ego network can be used as a per-user accuracy measure which is effective at removing outlying errors. Leave-many-out evaluation shows that our method is able to infer location for 101, 846, 236 Twitter users at a median error of 6.38 km, allowing us to geotag over 80 of public tweets.
|
Most closely related to our work is that of @cite_21 , where the author developed a node-wise algorithm, "Spatial Label Propagation", which iteratively estimates Twitter user locations by propagating the locations of GPS-known users across a Twitter social network. While not discussed in @cite_21 , it turns out that Spatial Label Propagation is in fact a parallel coordinate descent method applied to total variation minimization. Later in this work we show how our technique reduces to Spatial Label Propagation when certain constraints are removed. While Spatial Label Propagation was demonstrated at scale, our study reaches higher coverage and accuracy: @math M users at a median error of @math km vs. @math M users at a median error of @math km.
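The following small sketch illustrates the propagation idea in its simplest form: repeatedly assign each unlabeled user a robust aggregate (here a component-wise median) of their friends' current estimates, keeping GPS-known users fixed. The aggregation rule, the fixed number of sweeps, and the toy graph are simplifying assumptions; neither the cited algorithm nor the total-variation solver developed in this work is reproduced exactly.

```python
# Sketch: spatial label propagation over a friendship graph with a median aggregate.
import statistics

def propagate(edges, known, iterations=5):
    # edges: dict user -> list of friends; known: dict user -> (lat, lon)
    estimates = dict(known)
    for _ in range(iterations):
        for user, friends in edges.items():
            if user in known:
                continue                      # GPS-known users stay fixed
            located = [estimates[f] for f in friends if f in estimates]
            if located:
                lat = statistics.median(p[0] for p in located)
                lon = statistics.median(p[1] for p in located)
                estimates[user] = (lat, lon)   # robust aggregate of located friends
    return estimates

graph = {"a": ["b", "c"], "b": ["a"], "c": ["a"], "d": ["a", "b"]}
gps = {"b": (40.7, -74.0), "c": (40.8, -73.9)}
print(propagate(graph, gps))
```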
|
{
"cite_N": [
"@cite_21"
],
"mid": [
"50479354"
],
"abstract": [
"Social networks are often grounded in spatial locality where individuals form relationships with those they meet nearby. However, the location of individuals in online social networking platforms is often unknown. Prior approaches have tried to infer individuals' locations from the content they produce online or their online relations, but often are limited by the available location-related data. We propose a new method for social networks that accurately infers locations for nearly all of individuals by spatially propagating location assignments through the social network, using only a small number of initial locations. In five experiments, we demonstrate the effectiveness in multiple social networking platforms, using both precise and noisy data to start the inference, and present heuristics for improving performance. In one experiment, we demonstrate the ability to infer the locations of a group of users who generate over 74 of the daily Twitter message volume with an estimated median location error of 10km. Our results open the possibility of gathering large quantities of location-annotated data from social media platforms."
]
}
|
1404.7152
|
2021278340
|
Geographically annotated social media is extremely valuable for modern information retrieval. However, when researchers can only access publicly-visible data, one quickly finds that social media users rarely publish location information. In this work, we provide a method which can geolocate the overwhelming majority of active Twitter users, independent of their location sharing preferences, using only publicly-visible Twitter data. Our method infers an unknown user's location by examining their friend's locations. We frame the geotagging problem as an optimization over a social network with a total variation-based objective and provide a scalable and distributed algorithm for its solution. Furthermore, we show how a robust estimate of the geographic dispersion of each user's ego network can be used as a per-user accuracy measure which is effective at removing outlying errors. Leave-many-out evaluation shows that our method is able to infer location for 101, 846, 236 Twitter users at a median error of 6.38 km, allowing us to geotag over 80 of public tweets.
|
Research indicating that Twitter contact is independent of proximity can be found in @cite_24 , where the author examines GPS-known retweet pairings and finds an average distance of @math miles between users. Averages, however, are sensitive to outliers, which are often present in social data. In this work, we will make use of robust statistics to estimate center and spread for sets of locations.
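As a small illustration of such robust summaries, the sketch below uses a component-wise median as the center of a point set and the median distance to that center as the spread; both resist a single far-away outlier. The Euclidean surrogate for geographic distance and the specific statistics chosen are assumptions of the sketch, not the exact estimators used in this work.

```python
# Sketch: robust center and spread for a set of (lat, lon) points.
import statistics

def robust_center_and_spread(points):
    # component-wise median as a robust center
    center = (statistics.median(p[0] for p in points),
              statistics.median(p[1] for p in points))
    # median distance to that center as a robust measure of dispersion
    dists = [((p[0] - center[0]) ** 2 + (p[1] - center[1]) ** 2) ** 0.5 for p in points]
    return center, statistics.median(dists)

pts = [(40.70, -74.00), (40.71, -74.01), (40.72, -73.99), (34.05, -118.24)]  # last point is an outlier
print(robust_center_and_spread(pts))
```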
|
{
"cite_N": [
"@cite_24"
],
"mid": [
"2093588201"
],
"abstract": [
"In just under seven years, Twitter has grown to count nearly 3 of the entire global population among its active users who have sent more than 170 billion 140-character messages. Today the service plays such a significant role in American culture that the Library of Congress has assembled a permanent archive of the site back to its first tweet, updated daily. With its open API, Twitter has become one of the most popular data sources for social research, yet the majority of the literature has focused on it as a text or network graph source, with only limited efforts to date focusing exclusively on the geography of Twitter, assessing the various sources of geographic information on the service and their accuracy. More than 3 of all tweets are found to have native location information available, while a naive geocoder based on a simple major cities gazetteer and relying on the user-provided Location and Profile fields is able to geolocate more than a third of all tweets with high accuracy when measured against the GPS-based baseline. Geographic proximity is found to play a minimal role both in who users communicate with and what they communicate about, providing evidence that social media is shifting the communicative landscape."
]
}
|
1404.7067
|
149130401
|
We define an extension of time Petri nets such that the time at which a transition can fire, also called its firing date, may be dynamically updated. Our extension provides two mechanisms for updating the timing constraints of a net. First, we propose to change the static time interval of a transition each time it is newly enabled; in this case the new time interval is given as a function of the current marking. Next, we allow to update the firing date of a transition when it is persistent, that is when a concurrent transition fires. We show how to carry the widely used state class abstraction to this new kind of time Petri nets and define a class of nets for which the abstraction is exact. We show the usefulness of our approach with two applications: first for scheduling preemptive task, as a poor man’s substitute for stopwatch, then to model hybrid systems with non trivial continuous behavior.
|
We have shown how to extend the state class graph construction to handle fickle transitions. The state class graph is certainly the most widely used state space abstraction for Time Petri nets: it is a convenient abstraction for LTL model checking; it is finite when the set of markings is bounded; and it preserves both the markings and traces of the net. The results are slightly different in the presence of dynamic firing dates, even for the restricted class of translation nets. In particular, we may have an infinite state class graph even when the net is bounded. This may be the case, for instance, if we have a transition that can stay persistent infinitely and that is associated with the fickle function @math . This entails that our construction may not terminate, even if the set of markings is bounded. This situation is quite comparable to what occurs with updatable timed automata @cite_12 and, as in this model, it is possible to prove that the model-checking problem is undecidable in the general case. This does not mean that our construction is useless in practice, as we show in our examples of Sect. .
|
{
"cite_N": [
"@cite_12"
],
"mid": [
"2142830749"
],
"abstract": [
"We investigate extensions of Alur and Dill's timed automata, based on the possibility to update the clocks in a more elaborate way than simply reset them to zero. We call these automata updatable timed automata. They form an undecidable class of models, in the sense that emptiness checking is not decidable. However, using an extension of the region graph construction, we exhibit interesting decidable subclasses. In a surprising way, decidability depends on the nature of the clock constraints which are used, diagonal-free or not, whereas these constraints play identical roles in timed automata. We thus describe in a quite precise way the thin frontier between decidable and undecidable classes of updatable timed automata.We also study the expressive power of updatable timed automata. It turns out that any up-datable automaton belonging to some decidable subclass can be effectively transformed into an equivalent timed automaton without updates but with silent transitions. The transformation suffers from an enormous combinatorics blow-up which seems unavoidable. Therefore, updatable timed automata appear to be a concise model for representing and analyzing large classes of timed systems."
]
}
|
1404.7067
|
149130401
|
We define an extension of time Petri nets such that the time at which a transition can fire, also called its firing date, may be dynamically updated. Our extension provides two mechanisms for updating the timing constraints of a net. First, we propose to change the static time interval of a transition each time it is newly enabled; in this case the new time interval is given as a function of the current marking. Next, we allow to update the firing date of a transition when it is persistent, that is when a concurrent transition fires. We show how to carry the widely used state class abstraction to this new kind of time Petri nets and define a class of nets for which the abstraction is exact. We show the usefulness of our approach with two applications: first for scheduling preemptive task, as a poor man’s substitute for stopwatch, then to model hybrid systems with non trivial continuous behavior.
|
For future work, we plan to study an extension of our approach to other models of real-time systems and to other state-space abstractions, for instance the strong state class construction of @cite_11 , which is finer than the state class graph construction but is needed when considering the addition of priorities. The strong construction relies on clock domains, rather than firing domains, and bears a strong resemblance to the zone constructions commonly used for the analysis of timed automata. Another, quite different, type of abstraction relies on a discrete-time semantics for time Petri nets. We can obtain a discrete semantics by, for instance, restricting continuous transitions @math to the case where @math is an integer. This approach could be useful when modeling hybrid systems, since it is a simple way to add a quantization over time as well as over values.
|
{
"cite_N": [
"@cite_11"
],
"mid": [
"1841167015"
],
"abstract": [
"This paper is concerned with construction of some state space abstractions for Time Petri nets. State class spaces were introduced long ago by Berthomieu and Menasche as finite representations for the typically infinite state spaces of Time Petri nets, preserving their linear time temporal properties. This paper proposes a similar construction that preserves their branching time temporal properties. The construction improves a previous proposal by Yoneda and Ryuba. The method has been implemented, computing experiments are reported."
]
}
|
1404.6878
|
2949205410
|
Hive is the most mature and prevalent data warehouse tool providing SQL-like interface in the Hadoop ecosystem. It is successfully used in many Internet companies and shows its value for big data processing in traditional industries. However, enterprise big data processing systems as in Smart Grid applications usually require complicated business logics and involve many data manipulation operations like updates and deletes. Hive cannot offer sufficient support for these while preserving high query performance. Hive using the Hadoop Distributed File System (HDFS) for storage cannot implement data manipulation efficiently and Hive on HBase suffers from poor query performance even though it can support faster data manipulation.There is a project based on Hive issue Hive-5317 to support update operations, but it has not been finished in Hive's latest version. Since this ACID compliant extension adopts same data storage format on HDFS, the update performance problem is not solved. In this paper, we propose a hybrid storage model called DualTable, which combines the efficient streaming reads of HDFS and the random write capability of HBase. Hive on DualTable provides better data manipulation support and preserves query performance at the same time. Experiments on a TPC-H data set and on a real smart grid data set show that Hive on DualTable is up to 10 times faster than Hive when executing update and delete operations.
|
To improve the performance or feature set of Hive, many HiveQL-compatible systems have been developed, such as Shark @cite_5 (built on Spark @cite_1 ), Cloudera Impala @cite_23 , and others. Technologies for in-memory processing, more efficient data reading and writing, and partial DAG execution are used to enhance the whole system or particular kinds of applications, such as recursive data mining and ad-hoc queries.
|
{
"cite_N": [
"@cite_5",
"@cite_1",
"@cite_23"
],
"mid": [
"2144461192",
"2131975293",
""
],
"abstract": [
"Shark is a research data analysis system built on a novel coarse-grained distributed shared-memory abstraction. Shark marries query processing with deep data analysis, providing a unified system for easy data manipulation using SQL and pushing sophisticated analysis closer to data. It scales to thousands of nodes in a fault-tolerant manner. Shark can answer queries 40X faster than Apache Hive and run machine learning programs 25X faster than MapReduce programs in Apache Hadoop on large datasets.",
"We present Resilient Distributed Datasets (RDDs), a distributed memory abstraction that lets programmers perform in-memory computations on large clusters in a fault-tolerant manner. RDDs are motivated by two types of applications that current computing frameworks handle inefficiently: iterative algorithms and interactive data mining tools. In both cases, keeping data in memory can improve performance by an order of magnitude. To achieve fault tolerance efficiently, RDDs provide a restricted form of shared memory, based on coarse-grained transformations rather than fine-grained updates to shared state. However, we show that RDDs are expressive enough to capture a wide class of computations, including recent specialized programming models for iterative jobs, such as Pregel, and new applications that these models do not capture. We have implemented RDDs in a system called Spark, which we evaluate through a variety of user applications and benchmarks.",
""
]
}
|
1404.6230
|
2016552385
|
f-divergence estimation is an important problem in the fields of information theory, machine learning, and statistics. While several divergence estimators exist, relatively few of their convergence rates are known. We derive the MSE convergence rate for a density plug-in estimator of f-divergence. Then by applying the theory of optimally weighted ensemble estimation, we derive a divergence estimator with a convergence rate of O(1 T) that is simple to implement and performs well in high dimensions. We validate our theoretical results with experiments.
|
Several estimators for some @math -divergences already exist. For example, Póczos & Schneider @cite_0 established weak consistency of a bias-corrected @math -nn estimator for Rényi- @math and other divergences of similar form. @cite_9 gave an estimator for the KL divergence. Other mutual information and divergence estimators based on plug-in histogram schemes have been proven to be consistent @cite_5 @cite_21 @cite_10 @cite_4 . However, none of these works studied the convergence rates of their estimators, while our ensemble approach requires an explicit expression of the asymptotic bias and variance. @cite_19 provided an estimator for Rényi- @math divergence but assumed that one of the densities was known.
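To convey the flavor of these k-nn estimators, the sketch below implements a standard bias-corrected k-nearest-neighbor plug-in estimate of the KL divergence from two samples, in the spirit of the estimator of @cite_9 . The choice k=5, the use of SciPy's KD-tree, and the Gaussian test data are assumptions of the sketch.

```python
# Sketch: k-nn plug-in estimate of D(p || q) from samples X ~ p and Y ~ q.
import numpy as np
from scipy.spatial import cKDTree

def knn_kl_divergence(X, Y, k=5):
    n, d = X.shape
    m = Y.shape[0]
    # distance from each x_i to its k-th neighbor among the other X samples
    rho = cKDTree(X).query(X, k=k + 1)[0][:, -1]   # k+1 because x_i is its own nearest neighbor
    # distance from each x_i to its k-th neighbor among the Y samples
    nu = cKDTree(Y).query(X, k=k)[0][:, -1]
    # bias-corrected plug-in formula: (d/n) * sum log(nu/rho) + log(m/(n-1))
    return d * np.mean(np.log(nu / rho)) + np.log(m / (n - 1))

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(2000, 3))
Y = rng.normal(0.5, 1.0, size=(2000, 3))
print(knn_kl_divergence(X, Y))   # true KL here is 3 * 0.5**2 / 2 = 0.375
```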
|
{
"cite_N": [
"@cite_4",
"@cite_9",
"@cite_21",
"@cite_0",
"@cite_19",
"@cite_5",
"@cite_10"
],
"mid": [
"1974829031",
"2150879893",
"2149268774",
"121168560",
"2060677448",
"2127234432",
"2171585891"
],
"abstract": [
"Abstract The Darbellay–Vajda partition scheme is a well known method to estimate the information dependency. This estimator belongs to a class of data-dependent partition estimators. We would like to prove that with some simple conditions, the Darbellay–Vajda partition estimator is a strong consistency for the information dependency estimation of a bivariate random vector. This result is an extension of Silva and Narayanan, 2010a , Silva and Narayanan, 2010b work which gives some simple conditions to confirm that the Gessaman's partition estimator and the tree-quantization partition estimator, other estimators in the class of data-dependent partition estimators, are strongly consistent.",
"A new universal estimator of divergence is presented for multidimensional continuous densities based on k-nearest-neighbor (k-NN) distances. Assuming independent and identically distributed (i.i.d.) samples, the new estimator is proved to be asymptotically unbiased and mean-square consistent. In experiments with high-dimensional data, the k-NN approach generally exhibits faster convergence than previous algorithms. It is also shown that the speed of convergence of the k-NN method can be further improved by an adaptive choice of k.",
"We present a universal estimator of the divergence D(P spl par Q) for two arbitrary continuous distributions P and Q satisfying certain regularity conditions. This algorithm, which observes independent and identically distributed (i.i.d.) samples from both P and Q, is based on the estimation of the Radon-Nikodym derivative dP dQ via a data-dependent partition of the observation space. Strong convergence of this estimator is proved with an empirically equivalent segmentation of the space. This basic estimator is further improved by adaptive partitioning schemes and by bias correction. The application of the algorithms to data with memory is also investigated. In the simulations, we compare our estimators with the direct plug-in estimator and estimators based on other partitioning approaches. Experimental results show that our methods achieve the best convergence performance in most of the tested cases.",
"We propose new nonparametric, consistent Renyi-α and Tsallis-α divergence estimators for continuous distributions. Given two independent and identically distributed samples, a “naive” approach would be to simply estimate the underlying densities and plug the estimated densities into the corresponding formulas. Our proposed estimators, in contrast, avoid density estimation completely, estimating the divergences directly using only simple k-nearest-neighbor statistics. We are nonetheless able to prove that the estimators are consistent under certain conditions. We also describe how to apply these estimators to mutual information and demonstrate their efficiency via numerical experiments.",
"This article presents applications of entropic spanning graphs to imaging and feature clustering applications. Entropic spanning graphs span a set of feature vectors in such a way that the normalized spanning length of the graph converges to the entropy of the feature distribution as the number of random feature vectors increases. This property makes these graphs naturally suited to applications where entropy and information divergence are used as discriminants: texture classification, feature clustering, image indexing, and image registration. Among other areas, these problems arise in geographical information systems, digital libraries, medical information processing, video indexing, multisensor fusion, and content-based retrieval.",
"We demonstrate that it is possible to approximate the mutual information arbitrarily closely in probability by calculating the relative frequencies on appropriate partitions and achieving conditional independence on the rectangles of which the partitions are made. Empirical results, including a comparison with maximum-likelihood estimators, are presented.",
"This work studies the problem of information divergence estimation based on data-dependent partitions. A histogrambased data-dependent estimate is proposed adopting a version of Barron-type histogram-based estimate. The main result is the stipulation of su cient conditions on the partition scheme to make the estimate strongly consistent. Furthermore, when the distributions are equipped with density functions in ( d ;B( d )), we obtain su cient conditions that guarantee a density-free strongly consistent information divergence estimate. In this context, the result is presented for two emblematic partition schemes: the statistically equivalent blocks (Gessaman’s data-driven partition) and data-dependent tree-structured vector quantization (TSVQ)."
]
}
|
1404.6230
|
2016552385
|
f-divergence estimation is an important problem in the fields of information theory, machine learning, and statistics. While several divergence estimators exist, relatively few of their convergence rates are known. We derive the MSE convergence rate for a density plug-in estimator of f-divergence. Then by applying the theory of optimally weighted ensemble estimation, we derive a divergence estimator with a convergence rate of O(1 T) that is simple to implement and performs well in high dimensions. We validate our theoretical results with experiments.
|
@cite_1 proposed a method for estimating @math -divergences by estimating the likelihood ratio of the two densities, obtained by solving a convex optimization problem, and then plugging it into the divergence formulas. For this method they prove that the minimax convergence rate is parametric ( @math ) when the likelihood ratio is in the bounded Hölder class @math with @math . This assumption is weaker than ours, which requires the densities to be at least @math times differentiable. However, solving the convex problem of @cite_1 is similar in complexity to training an SVM (between @math and @math ), which can be demanding when @math is very large. In contrast, our method of optimally weighted ensemble estimation depends only on simple density plug-in estimates and an offline convex optimization problem. Thus the most computationally demanding step in our approach is the calculation of the @math -nn distances, which has complexity no greater than @math .
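The offline convex step mentioned above can be sketched as follows: given a family of plug-in estimates indexed by a parameter grid, choose weights that sum to one, annihilate an assumed set of bias basis functions of the form l^(i/d), and keep the L2 norm of the weights small (which controls the variance inflation). The basis functions, the parameter grid, and the solver are illustrative assumptions; the actual estimator uses the bias expansion derived in this paper.

```python
# Sketch: offline computation of ensemble weights for weighted ensemble estimation.
import numpy as np
from scipy.optimize import minimize

def ensemble_weights(L, d, num_terms=None):
    L = np.asarray(L, dtype=float)
    num_terms = num_terms or (len(L) - 2)           # leave slack so the constraints stay feasible
    # constraints: weights sum to 1, and each assumed bias basis function l**(i/d) is cancelled
    cons = [{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}]
    for i in range(1, num_terms + 1):
        psi = L ** (i / d)
        cons.append({"type": "eq", "fun": lambda w, psi=psi: psi @ w})
    # minimize the L2 norm of the weights subject to the constraints
    res = minimize(lambda w: w @ w, x0=np.full(len(L), 1.0 / len(L)), constraints=cons)
    return res.x

def weighted_ensemble(estimates, weights):
    # final estimate: weighted combination of the individual plug-in estimates
    return float(np.dot(weights, estimates))

w = ensemble_weights(L=[1, 2, 3, 4, 5, 6], d=3)
print(w, w.sum())
```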
|
{
"cite_N": [
"@cite_1"
],
"mid": [
"2166944917"
],
"abstract": [
"We develop and analyze M-estimation methods for divergence functionals and the likelihood ratios of two probability distributions. Our method is based on a nonasymptotic variational characterization of f -divergences, which allows the problem of estimating divergences to be tackled via convex empirical risk optimization. The resulting estimators are simple to implement, requiring only the solution of standard convex programs. We present an analysis of consistency and convergence for these estimators. Given conditions only on the ratios of densities, we show that our estimators can achieve optimal minimax rates for the likelihood ratio and the divergence functionals in certain regimes. We derive an efficient optimization algorithm for computing our estimates, and illustrate their convergence behavior and practical viability by simulations."
]
}
|