aid stringlengths 9 15 | mid stringlengths 7 10 | abstract stringlengths 78 2.56k | related_work stringlengths 92 1.77k | ref_abstract dict |
|---|---|---|---|---|
1410.0571 | 2952332158 | In this paper, we address the problem of quick detection of high-degree entities in large online social networks. Practical importance of this problem is attested by a large number of companies that continuously collect and update statistics about popular entities, usually using the degree of an entity as an approximation of its popularity. We suggest a simple, efficient, and easy to implement two-stage randomized algorithm that provides highly accurate solutions for this problem. For instance, our algorithm needs only one thousand API requests in order to find the top-100 most followed users in Twitter, a network with approximately a billion registered users, with more than 90% precision. Our algorithm significantly outperforms existing methods and serves many different purposes, such as finding the most popular users or the most popular interest groups in social networks. An important contribution of this work is the analysis of the proposed algorithm using Extreme Value Theory -- a branch of probability that studies extreme events and properties of largest order statistics in random samples. Using this theory, we derive an accurate prediction for the algorithm's performance and show that the number of API requests for finding the top-k most popular entities is sublinear in the number of entities. Moreover, we formally show that the high variability among the entities, expressed through heavy-tailed distributions, is the reason for the algorithm's efficiency. We quantify this phenomenon in a rigorous mathematical way. | An essential assumption of this work is that the network structure is not available and has to be discovered using API requests. This setting is similar to on-line computations, where information is obtained and immediately processed while crawling the network graph (for instance the World Wide Web). There is a large body of literature where such on-line algorithms are developed and analyzed. 
Many of these algorithms are developed for computing and updating the PageRank vector @cite_11 @cite_16 @cite_21 @cite_10 . In particular, the algorithm recently proposed in @cite_21 computes the PageRank vector in sublinear time. Furthermore, probabilistic Monte Carlo methods @cite_16 @cite_12 @cite_10 allow one to continuously update the PageRank as the structure of the Web changes. | {
"cite_N": [
"@cite_21",
"@cite_16",
"@cite_10",
"@cite_12",
"@cite_11"
],
"mid": [
"1752265245",
"2107577105",
"2077943220",
"2950696249",
"2158601853"
],
"abstract": [
"In a network, identifying all vertices whose PageRank is more than a given threshold value Δ is a basic problem that has arisen in Web and social network analyses. In this paper, we develop a nearly optimal, sublinear time, randomized algorithm for a close variant of this problem. When given a directed network G=(V,E), a threshold value Δ, and a positive constant c>3, with probability 1−o(1), our algorithm will return a subset S⊆V with the property that S contains all vertices of PageRank at least Δ and no vertex with PageRank less than Δ/c. The running time of our algorithm is always @math . In addition, our algorithm can be efficiently implemented in various network access models including the Jump and Crawl query model recently studied by [6], making it suitable for dealing with large social and information networks. As part of our analysis, we show that any algorithm for solving this problem must have expected time complexity of @math . Thus, our algorithm is optimal up to logarithmic factors. Our algorithm (for identifying vertices with significant PageRank) applies a multi-scale sampling scheme that uses a fast personalized PageRank estimator as its main subroutine. For that, we develop a new local randomized algorithm for approximating personalized PageRank which is more robust than the earlier ones developed by Jeh and Widom [9] and by Andersen, Chung, and Lang [2].",
"PageRank is one of the principle criteria according to which Google ranks Web pages. PageRank can be interpreted as a frequency of visiting a Web page by a random surfer, and thus it reflects the popularity of a Web page. Google computes the PageRank using the power iteration method, which requires about one week of intensive computations. In the present work we propose and analyze Monte Carlo-type methods for the PageRank computation. There are several advantages of the probabilistic Monte Carlo methods over the deterministic power iteration method: Monte Carlo methods already provide good estimation of the PageRank for relatively important pages after one iteration; Monte Carlo methods have natural parallel implementation; and finally, Monte Carlo methods allow one to perform continuous update of the PageRank as the structure of the Web changes.",
"Personalized PageRank expresses link-based page quality around user-selected pages in a similar way as PageRank expresses quality over the entire web. Existing personalized PageRank algorithms can, however, serve online queries only for a restricted choice of pages. In this paper we achieve full personalization by a novel algorithm that precomputes a compact database; using this database, it can serve online responses to arbitrary user-selected personalization. The algorithm uses simulated random walks; we prove that for a fixed error probability the size of our database is linear in the number of web pages. We justify our estimation approach by asymptotic worst-case lower bounds: we show that on some sets of graphs, exact personalized PageRank values can only be obtained from a database of size quadratic in the number of vertices. Furthermore, we evaluate the precision of approximation experimentally on the Stanford WebBase graph.",
"In this paper, we analyze the efficiency of Monte Carlo methods for incremental computation of PageRank, personalized PageRank, and similar random walk based methods (with focus on SALSA), on large-scale dynamically evolving social networks. We assume that the graph of friendships is stored in distributed shared memory, as is the case for large social networks such as Twitter. For global PageRank, we assume that the social network has @math nodes, and @math adversarially chosen edges arrive in a random order. We show that with a reset probability of @math , the total work needed to maintain an accurate estimate (using the Monte Carlo method) of the PageRank of every node at all times is @math . This is significantly better than all known bounds for incremental PageRank. For instance, if we naively recompute the PageRanks as each edge arrives, the simple power iteration method needs @math total time and the Monte Carlo method needs @math total time; both are prohibitively expensive. Furthermore, we also show that we can handle deletions equally efficiently. We then study the computation of the top @math personalized PageRanks starting from a seed node, assuming that personalized PageRanks follow a power-law with exponent @math . We show that if we store @math random walks starting from every node for large enough constant @math (using the approach outlined for global PageRank), then the expected number of calls made to the distributed social network database is @math . We also present experimental results from the social networking site, Twitter, verifying our assumptions and analyses. The overall result is that this algorithm is fast enough for real-time queries over a dynamic social network.",
"The computation of page importance in a huge dynamic graph has recently attracted a lot of attention because of the web. Page importance, or page rank is defined as the fixpoint of a matrix equation. Previous algorithms compute it off-line and require the use of a lot of extra CPU as well as disk resources (e.g. to store, maintain and read the link matrix). We introduce a new algorithm OPIC that works on-line, and uses much less resources. In particular, it does not require storing the link matrix. It is on-line in that it continuously refines its estimate of page importance while the web graph is visited. Thus it can be used to focus crawling to the most interesting pages. We prove the correctness of OPIC. We present Adaptive OPIC that also works on-line but adapts dynamically to changes of the web. A variant of this algorithm is now used by Xyleme.We report on experiments with synthetic data. In particular, we study the convergence and adaptiveness of the algorithms for various scheduling strategies for the pages to visit. We also report on experiments based on crawls of significant portions of the web."
]
} |
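The Monte Carlo PageRank estimation discussed in the row above (the approach of @cite_16) can be sketched roughly as follows. This is an illustrative end-point estimator only, not the cited authors' exact procedure; the toy graph `g`, the walk count, and the `monte_carlo_pagerank` helper are assumptions for the example:

```python
import random

def monte_carlo_pagerank(graph, num_walks_per_node=200, c=0.85, seed=0):
    """Estimate PageRank by running random walks that continue with
    probability c at each step and recording each walk's end point."""
    rng = random.Random(seed)
    visits = {v: 0 for v in graph}
    total = 0
    for start in graph:
        for _ in range(num_walks_per_node):
            node = start
            while rng.random() < c:
                out = graph[node]
                if not out:  # dangling node: jump uniformly at random
                    node = rng.choice(list(graph))
                else:
                    node = rng.choice(out)
            visits[node] += 1  # record the walk's end point
            total += 1
    return {v: visits[v] / total for v in graph}

# Toy graph: "hub" is linked to by every other node.
g = {"a": ["hub"], "b": ["hub"], "c": ["hub"], "hub": ["a"]}
pr = monte_carlo_pagerank(g)
```

Because each edge arrival only perturbs a few stored walks, estimators of this family support the continuous updating mentioned in the related-work text, unlike a full power-iteration recomputation.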
1410.0571 | 2952332158 | In this paper, we address the problem of quick detection of high-degree entities in large online social networks. Practical importance of this problem is attested by a large number of companies that continuously collect and update statistics about popular entities, usually using the degree of an entity as an approximation of its popularity. We suggest a simple, efficient, and easy to implement two-stage randomized algorithm that provides highly accurate solutions for this problem. For instance, our algorithm needs only one thousand API requests in order to find the top-100 most followed users in Twitter, a network with approximately a billion registered users, with more than 90% precision. Our algorithm significantly outperforms existing methods and serves many different purposes, such as finding the most popular users or the most popular interest groups in social networks. An important contribution of this work is the analysis of the proposed algorithm using Extreme Value Theory -- a branch of probability that studies extreme events and properties of largest order statistics in random samples. Using this theory, we derive an accurate prediction for the algorithm's performance and show that the number of API requests for finding the top-k most popular entities is sublinear in the number of entities. Moreover, we formally show that the high variability among the entities, expressed through heavy-tailed distributions, is the reason for the algorithm's efficiency. We quantify this phenomenon in a rigorous mathematical way. | Randomized algorithms are also used for discovering the structure of social networks. In @cite_5 random walk methods are proposed to obtain a graph sample with similar properties as the whole graph. In @cite_23 an unbiased random walk, where each node is visited with equal probability, is constructed in order to find the degree distribution on Facebook. 
Random walk based methods are also used to analyse Peer-to-Peer networks @cite_9 . In @cite_14 traceroute algorithms are proposed to find the root node and to approximate several other characteristics in a preferential attachment graph. | {
"cite_N": [
"@cite_5",
"@cite_9",
"@cite_14",
"@cite_23"
],
"mid": [
"2146008005",
"2159741581",
"2950657833",
"2137135938"
],
"abstract": [
"Given a huge real graph, how can we derive a representative sample? There are many known algorithms to compute interesting measures (shortest paths, centrality, betweenness, etc.), but several of them become impractical for large graphs. Thus graph sampling is essential. The natural questions to ask are (a) which sampling method to use, (b) how small can the sample size be, and (c) how to scale up the measurements of the sample (e.g., the diameter), to get estimates for the large graph. The deeper, underlying question is subtle: how do we measure success? We answer the above questions, and test our answers by thorough experiments on several, diverse datasets, spanning thousands of nodes and edges. We consider several sampling methods, propose novel methods to check the goodness of sampling, and develop a set of scaling laws that describe relations between the properties of the original and the sample. In addition to the theoretical contributions, the practical conclusions from our work are: Sampling strategies based on edge selection do not perform well; simple uniform random node selection performs surprisingly well. Overall, best performing methods are the ones based on random-walks and \"forest fire\"; they match very accurately both static as well as evolutionary graph patterns, with sample sizes down to about 15% of the original graph.",
"In this article we address the problem of counting the number of peers in a peer-to-peer system, and more generally of aggregating statistics of individual peers over the whole system. This functionality is useful in many applications, but hard to achieve when each node has only a limited, local knowledge of the whole system. We propose two generic techniques to solve this problem. The Random Tour method is based on the return time of a continuous time random walk to the node originating the query. The Sample and Collide method is based on counting the number of random samples gathered until a target number of redundant samples are obtained. It is inspired by the \"birthday paradox\" technique of [6], upon which it improves by achieving a target variance with fewer samples. The latter method relies on a sampling sub-routine which returns randomly chosen peers. Such a sampling algorithm is of independent interest. It can be used, for instance, for neighbour selection by new nodes joining the system. We use a continuous time random walk to obtain such samples. We analyse the complexity and accuracy of the two methods. We illustrate in particular how expansion properties of the overlay affect their performance.",
"We study the power of local information algorithms for optimization problems on social networks. We focus on sequential algorithms for which the network topology is initially unknown and is revealed only within a local neighborhood of vertices that have been irrevocably added to the output set. The distinguishing feature of this setting is that locality is necessitated by constraints on the network information visible to the algorithm, rather than being desirable for reasons of efficiency or parallelizability. In this sense, changes to the level of network visibility can have a significant impact on algorithm design. We study a range of problems under this model of algorithms with local information. We first consider the case in which the underlying graph is a preferential attachment network. We show that one can find the node of maximum degree in the network in a polylogarithmic number of steps, using an opportunistic algorithm that repeatedly queries the visible node of maximum degree. This addresses an open question of Bollobás and Riordan. In contrast, local information algorithms require a linear number of queries to solve the problem on arbitrary networks. Motivated by problems faced by recruiters in online networks, we also consider network coverage problems such as finding a minimum dominating set. For this optimization problem we show that, if each node added to the output set reveals sufficient information about the set's neighborhood, then it is possible to design randomized algorithms for general networks that nearly match the best approximations possible even with full access to the graph structure. We show that this level of visibility is necessary. We conclude that a network provider's decision of how much structure to make visible to its users can have a significant effect on a user's ability to interact strategically with the network.",
"With more than 250 million active users, Facebook (FB) is currently one of the most important online social networks. Our goal in this paper is to obtain a representative (unbiased) sample of Facebook users by crawling its social graph. In this quest, we consider and implement several candidate techniques. Two approaches that are found to perform well are the Metropolis-Hasting random walk (MHRW) and a re-weighted random walk (RWRW). Both have pros and cons, which we demonstrate through a comparison to each other as well as to the \"ground-truth\" (UNI - obtained through true uniform sampling of FB userIDs). In contrast, the traditional Breadth-First-Search (BFS) and Random Walk (RW) perform quite poorly, producing substantially biased results. In addition to offline performance assessment, we introduce online formal convergence diagnostics to assess sample quality during the data collection process. We show how these can be used to effectively determine when a random walk sample is of adequate size and quality for subsequent use (i.e., when it is safe to cease sampling). Using these methods, we collect the first, to the best of our knowledge, unbiased sample of Facebook. Finally, we use one of our representative datasets, collected through MHRW, to characterize several key properties of Facebook."
]
} |
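The unbiased-sampling idea in @cite_23 — a Metropolis-Hastings random walk (MHRW) whose stationary distribution is uniform over nodes — can be sketched as follows. The star graph, step count, and the `mhrw_sample` helper are illustrative assumptions, not the cited paper's Facebook crawler:

```python
import random

def mhrw_sample(graph, start, steps, seed=0):
    """Metropolis-Hastings random walk: propose a uniform neighbor and
    accept with probability min(1, deg(current)/deg(proposed)).  This
    makes the stationary distribution uniform over nodes; a rejected
    proposal means the walk stays put, and that step is still counted."""
    rng = random.Random(seed)
    node = start
    counts = {v: 0 for v in graph}
    for _ in range(steps):
        proposal = rng.choice(graph[node])
        if rng.random() < min(1.0, len(graph[node]) / len(graph[proposal])):
            node = proposal
        counts[node] += 1
    return counts

# Star graph: a plain random walk spends half its time at the hub,
# but the MHRW correction should visit all four nodes about equally.
star = {"hub": ["x", "y", "z"], "x": ["hub"], "y": ["hub"], "z": ["hub"]}
freq = mhrw_sample(star, "hub", steps=200_000)
```

The acceptance ratio is exactly the degree correction that removes the bias toward high-degree nodes that plain random walks and BFS exhibit in the cited experiments.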
1410.0782 | 2949371380 | Designing efficient and fair algorithms for sharing multiple resources between heterogeneous demands is becoming increasingly important. Applications include compute clusters shared by multi-task jobs and routers equipped with middleboxes shared by flows of different types. We show that the currently preferred objective of Dominant Resource Fairness has a significantly less favorable efficiency-fairness tradeoff than alternatives like Proportional Fairness and our proposal, Bottleneck Max Fairness. In addition to other desirable properties, these objectives are equally strategyproof in any realistic scenario with dynamic demand. | We limit the present discussion to the most relevant related work. DRF @cite_5 and "no justified complaints" @cite_3 were placed in a more general economics framework in the work of Gutman and Nisan @cite_15 . Joe-Wong et al. also generalized DRF by introducing two families of allocations that allow a controlled tradeoff between efficiency and fairness @cite_6 . All these objectives are evaluated assuming a fixed set of transactions. The "serve the most deprived job" approach introduced in @cite_5 proves very versatile. It is used by Zeldes and Feitelson @cite_12 to implement bottleneck-based fairness and by Ghodsi and co-authors @cite_19 to account for compatibility constraints in task placement. Our proposed implementations of PF and BMF for sharing cluster resources are further illustrations of this versatility. | {
"cite_N": [
"@cite_6",
"@cite_3",
"@cite_19",
"@cite_5",
"@cite_15",
"@cite_12"
],
"mid": [
"",
"",
"2055748525",
"1890643295",
"103955348",
"2036318658"
],
"abstract": [
"",
"",
"Max-Min Fairness is a flexible resource allocation mechanism used in most datacenter schedulers. However, an increasing number of jobs have hard placement constraints, restricting the machines they can run on due to special hardware or software requirements. It is unclear how to define, and achieve, max-min fairness in the presence of such constraints. We propose Constrained Max-Min Fairness (CMMF), an extension to max-min fairness that supports placement constraints, and show that it is the only policy satisfying an important property that incentivizes users to pool resources. Optimally computing CMMF is challenging, but we show that a remarkably simple online scheduler, called Choosy, approximates the optimal scheduler well. Through experiments, analysis, and simulations, we show that Choosy on average differs 2% from the optimal CMMF allocation, and lets jobs achieve their fair share quickly.",
"We consider the problem of fair resource allocation in a system containing different resource types, where each user may have different demands for each resource. To address this problem, we propose Dominant Resource Fairness (DRF), a generalization of max-min fairness to multiple resource types. We show that DRF, unlike other possible policies, satisfies several highly desirable properties. First, DRF incentivizes users to share resources, by ensuring that no user is better off if resources are equally partitioned among them. Second, DRF is strategy-proof, as a user cannot increase her allocation by lying about her requirements. Third, DRF is envy-free, as no user would want to trade her allocation with that of another user. Finally, DRF allocations are Pareto efficient, as it is not possible to improve the allocation of a user without decreasing the allocation of another user. We have implemented DRF in the Mesos cluster resource manager, and show that it leads to better throughput and fairness than the slot-based fair sharing schemes in current cluster schedulers.",
"We consider the age-old problem of allocating items among different agents in a way that is efficient and fair. Two papers, by and , have recently studied this problem in the context of computer systems. Both papers had similar models for agent preferences, but advocated different notions of fairness. We formalize both fairness notions in economic terms, extending them to apply to a larger family of utilities. Noting that in settings with such utilities efficiency is easily achieved in multiple ways, we study notions of fairness as criteria for choosing between different efficient allocations. Our technical results are algorithms for finding fair allocations corresponding to two fairness notions: Regarding the notion suggested by , we present a polynomial-time algorithm that computes an allocation for a general class of fairness notions, in which their notion is included. For the other, suggested by , we show that a competitive market equilibrium achieves the desired notion of fairness, thereby obtaining a polynomial-time algorithm that computes such a fair allocation and solving the main open problem raised by",
"System bottlenecks, namely those resources which are subjected to high contention, constrain system performance. Hence effective resource management should be done by focusing on the bottleneck resources and allocating them to the most deserving clients. It has been shown that for any combination of entitlements and requests a fair allocation of bottleneck resources can be found, using an off-line algorithm that is given full information in advance regarding the needs of each client. We extend this result to the on-line case with no prior information. To this end we introduce a simple greedy algorithm. In essence, when a scheduling decision needs to be made, this algorithm selects the client that has the largest minimal gap between its entitlement and its current allocation among all the bottleneck resources. Importantly, this algorithm takes a global view of the system, and assigns each client a single priority based on his usage of all the resources; this single priority is then used to make coordinated scheduling decisions on all the resources. Extensive simulations show that this algorithm achieves fair allocations according to the desired entitlements for a wide range of conditions, without using any prior information regarding resource requirements. It also follows shifting usage patterns, including situations where the bottlenecks change with time."
]
} |
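The DRF allocation of @cite_5 can be sketched as progressive filling: repeatedly grant one task to the user with the smallest dominant share until no further task fits. The example numbers (9 CPUs, 18 GB; tasks of <1 CPU, 4 GB> and <3 CPU, 1 GB>) are the classic ones from the DRF literature; the `drf_allocate` helper and its first-come tie-break are assumptions of this sketch:

```python
def drf_allocate(capacities, demands):
    """Dominant Resource Fairness by progressive filling.
    `demands[u]` is user u's per-task resource vector; returns the
    number of tasks granted to each user.  Ties in dominant share
    are broken by dictionary order (an assumption of this sketch)."""
    used = [0.0] * len(capacities)
    tasks = {u: 0 for u in demands}

    def dominant_share(u):
        # Largest fraction of any single resource held by user u.
        return max(tasks[u] * d / c for d, c in zip(demands[u], capacities))

    def fits(u):
        return all(used[i] + demands[u][i] <= capacities[i]
                   for i in range(len(capacities)))

    while True:
        eligible = [u for u in demands if fits(u)]
        if not eligible:
            return tasks
        u = min(eligible, key=dominant_share)
        tasks[u] += 1
        for i, d in enumerate(demands[u]):
            used[i] += d

# 9 CPUs and 18 GB of memory; user A needs <1 CPU, 4 GB> per task,
# user B needs <3 CPU, 1 GB> per task.
alloc = drf_allocate([9, 18], {"A": [1, 4], "B": [3, 1]})
```

At the fixed point both users hold a dominant share of 2/3 (A's memory, B's CPU), which is the equalization property DRF aims for.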
1410.0782 | 2949371380 | Designing efficient and fair algorithms for sharing multiple resources between heterogeneous demands is becoming increasingly important. Applications include compute clusters shared by multi-task jobs and routers equipped with middleboxes shared by flows of different types. We show that the currently preferred objective of Dominant Resource Fairness has a significantly less favorable efficiency-fairness tradeoff than alternatives like Proportional Fairness and our proposal, Bottleneck Max Fairness. In addition to other desirable properties, these objectives are equally strategyproof in any realistic scenario with dynamic demand. | The packet-based algorithms designed by Ghodsi et al. @cite_0 to realize DRFQ for shared router resources are based on start-time fair queuing. Wang and co-authors have proposed alternative realizations that adapt DRR @cite_14 to the multi-resource context @cite_2 . Our implementations of PF and BMF rely on SFQ, though it appears straightforward to substitute alternative fair queuing algorithms if required. | {
"cite_N": [
"@cite_0",
"@cite_14",
"@cite_2"
],
"mid": [
"2113551315",
"",
"2067209328"
],
"abstract": [
"Middleboxes are ubiquitous in today's networks and perform a variety of important functions, including IDS, VPN, firewalling, and WAN optimization. These functions differ vastly in their requirements for hardware resources (e.g., CPU cycles and memory bandwidth). Thus, depending on the functions they go through, different flows can consume different amounts of a middlebox's resources. While there is much literature on weighted fair sharing of link bandwidth to isolate flows, it is unclear how to schedule multiple resources in a middlebox to achieve similar guarantees. In this paper, we analyze several natural packet scheduling algorithms for multiple resources and show that they have undesirable properties. We propose a new algorithm, Dominant Resource Fair Queuing (DRFQ), that retains the attractive properties that fair sharing provides for one resource. In doing so, we generalize the concept of virtual time in classical fair queuing to multi-resource settings. The resulting algorithm is also applicable in other contexts where several resources need to be multiplexed in the time domain.",
"",
"Middleboxes are widely deployed in today's enterprise networks. They perform a wide range of important network functions, including WAN optimizations, intrusion detection systems, network and application level firewalls, etc. Depending on the processing requirement of traffic, packet processing for different traffic flows may consume vastly different amounts of hardware resources (e.g., CPU and link bandwidth). Multi-resource fair queueing allows each traffic flow to receive a fair share of multiple middlebox resources. Previous schemes for multi-resource fair queueing, however, are expensive to implement at high speeds. Specifically, the time complexity to schedule a packet is O(log n), where n is the number of backlogged flows. In this paper, we design a new multi-resource fair queueing scheme that schedules packets in a way similar to Elastic Round Robin. Our scheme requires only O(1) work to schedule a packet and is simple enough to implement in practice. We show, both analytically and experimentally, that our queueing scheme achieves nearly perfect Dominant Resource Fairness."
]
} |
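Start-time fair queuing (SFQ), on which the row above says the PF and BMF implementations rely, can be sketched for a single resource as follows. The `sfq_schedule` interface, the all-backlogged assumption, and the FIFO tie-break are assumptions of this sketch, not the paper's implementation:

```python
import heapq

def sfq_schedule(packets, weights):
    """Start-time Fair Queueing over one resource: a flow's packet gets
    start tag S = max(v, finish tag of the flow's previous packet) and
    finish tag F = S + cost/weight; packets are served in increasing
    start-tag order, and virtual time v is the start tag of the packet
    in service.  `packets` maps flow -> list of per-packet costs, all
    assumed backlogged from time 0."""
    finish = {f: 0.0 for f in packets}
    queues = {f: list(reversed(costs)) for f, costs in packets.items()}
    heap, seq, v = [], 0, 0.0  # seq breaks start-tag ties FIFO

    def tag_head(f):
        nonlocal seq
        cost = queues[f].pop()
        s = max(v, finish[f])
        finish[f] = s + cost / weights[f]
        heapq.heappush(heap, (s, seq, f))
        seq += 1

    for f in packets:
        tag_head(f)
    order = []
    while heap:
        s, _, f = heapq.heappop(heap)
        v = s               # advance virtual time to the served packet
        order.append(f)
        if queues[f]:
            tag_head(f)     # tag the flow's next head-of-line packet
    return order

# Two equally weighted flows with equal-cost packets interleave.
order = sfq_schedule({"a": [1, 1, 1], "b": [1, 1, 1]}, {"a": 1, "b": 1})
```

Swapping in another fair queuing discipline, as the text suggests, only changes how the tags are computed; the serve-by-tag loop stays the same.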
1410.0782 | 2949371380 | Designing efficient and fair algorithms for sharing multiple resources between heterogeneous demands is becoming increasingly important. Applications include compute clusters shared by multi-task jobs and routers equipped with middleboxes shared by flows of different types. We show that the currently preferred objective of Dominant Resource Fairness has a significantly less favorable efficiency-fairness tradeoff than alternatives like Proportional Fairness and our proposal, Bottleneck Max Fairness. In addition to other desirable properties, these objectives are equally strategyproof in any realistic scenario with dynamic demand. | The need to evaluate the performance of resource sharing objectives under dynamic demand is still not widely recognized. The paper by Massoulié and Roberts @cite_20 was perhaps the first to note the importance of this while some of the most significant subsequent findings are summarized in Bonald et al. @cite_9 . Our earlier paper @cite_16 and the present work extend this analysis to the domain of multi-resource sharing. | {
"cite_N": [
"@cite_9",
"@cite_16",
"@cite_20"
],
"mid": [
"2084765981",
"2962701284",
"1529132671"
],
"abstract": [
"We compare the performance of three usual allocations, namely max-min fairness, proportional fairness and balanced fairness, in a communication network whose resources are shared by a random number of data flows. The model consists of a network of processor-sharing queues. The vector of service rates, which is constrained by some compact, convex capacity set representing the network resources, is a function of the number of customers in each queue. This function determines the way network resources are allocated. We show that this model is representative of a rich class of wired and wireless networks. We give in this general framework the stability condition of max-min fairness, proportional fairness and balanced fairness and compare their performance on a number of toy networks.",
"Abstract The performance of cluster computing depends on how concurrent jobs share multiple data center resource types such as CPU, RAM and disk storage. Recent research has discussed efficiency and fairness requirements and identified a number of desirable scheduling objectives including so-called dominant resource fairness (DRF). We argue here that proportional fairness (PF), long recognized as a desirable objective in sharing network bandwidth between ongoing data transfers, is preferable to DRF. The superiority of PF is manifest under the realistic modeling assumption that the population of jobs in progress is a stochastic process. In random traffic the strategy-proof property of DRF proves unimportant while PF is shown by analysis and simulation to offer a significantly better efficiency–fairness tradeoff.",
"We consider the performance of a network like the Internet handling so-called elastic traffic where the rate of flows adjusts to fill available bandwidth. Realized throughput depends both on the way bandwidth is shared and on the random nature of traffic. We assume traffic consists of point to point transfers of individual documents of finite size arriving according to a Poisson process. Notable results are that weighted sharing has limited impact on perceived quality of service and that discrimination in favour of short documents leads to considerably better performance than fair sharing. In a linear network, max-min fairness is preferable to proportional fairness under random traffic while the converse is true under the assumption of a static configuration of persistent flows. Admission control is advocated as a necessary means to maintain goodput in case of traffic overload."
]
} |
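The linear-network comparison of proportional fairness and max-min fairness discussed in @cite_9 and @cite_20 can be illustrated numerically. The 2-link unit-capacity instance and the 1-D grid search are assumptions chosen for the example, not taken from the cited papers:

```python
import math

def pf_linear_network(n_grid=10_000):
    """Proportional fair rates on a 2-link linear network with unit
    capacities: flow 0 crosses both links, flows 1 and 2 use one link
    each.  PF maximizes log(x0) + log(x1) + log(x2); at the optimum
    both links are saturated, so x1 = x2 = 1 - x0 and a 1-D grid
    search over x0 suffices for this toy instance."""
    best_x0, best_val = None, float("-inf")
    for i in range(1, n_grid):
        x0 = i / n_grid
        val = math.log(x0) + 2 * math.log(1 - x0)
        if val > best_val:
            best_x0, best_val = x0, val
    return best_x0, 1 - best_x0

x0, x_short = pf_linear_network()
```

PF gives the two-link flow rate 1/3 and each single-link flow 2/3 (total throughput 5/3), whereas max-min gives every flow 1/2 (total 3/2): the efficiency-fairness tradeoff the related-work text is about, in miniature.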
1410.0868 | 1487199296 | In this paper we propose and study an optimization problem over a matrix group orbit that we call group orbit optimization (GOO). We prove that GOO can be used to induce matrix decomposition techniques such as singular value decomposition (SVD), LU decomposition, QR decomposition, Schur decomposition and Cholesky decomposition, etc. This gives rise to a unified framework for matrix decomposition and allows us to bridge these matrix decomposition methods. Moreover, we generalize GOO for tensor decomposition. As a concrete application of GOO, we devise a new data decomposition method over a special linear group to normalize point cloud data. Experiment results show that our normalization method is able to recover well from distortions like shearing, rotation and squeezing. | In @cite_11 , a non-linear GOO is used to find texture invariant to rotation for 2D point cloud @math : @math . As @math is a unit group, the optimization is well defined and the induced matrix decomposition is found to be useful as a rotation-invariant representation for texture. The same paper also considers finding a homography-invariant representation for texture for 2D point cloud @math : @math . Note that here a coefficient @math is intentionally added to ensure the @math measure of the point cloud is preserved under the action of @math . | {
"cite_N": [
"@cite_11"
],
"mid": [
"2906621894"
],
"abstract": [
"In this paper, we propose a new tool to efficiently extract a class of \"low-rank textures\" in a 3D scene from user-specified windows in 2D images despite significant corruptions and warping. The low-rank textures capture geometrically meaningful structures in an image, which encompass conventional local features such as edges and corners as well as many kinds of regular, symmetric patterns ubiquitous in urban environments and man-made objects. Our approach to finding these low-rank textures leverages the recent breakthroughs in convex optimization that enable robust recovery of a high-dimensional low-rank matrix despite gross sparse errors. In the case of planar regions with significant affine or projective deformation, our method can accurately recover both the intrinsic low-rank texture and the unknown transformation, and hence both the geometry and appearance of the associated planar region in 3D. Extensive experimental results demonstrate that this new technique works effectively for many regular and near-regular patterns or objects that are approximately low-rank, such as symmetrical patterns, building facades, printed text, and human faces."
]
} |
1410.0868 | 1487199296 | In this paper we propose and study an optimization problem over a matrix group orbit that we call (GOO). We prove that GOO can be used to induce matrix decomposition techniques such as singular value decomposition (SVD), LU decomposition, QR decomposition, Schur decomposition and Cholesky decomposition, etc. This gives rise to a unified framework for matrix decomposition and allows us to bridge these matrix decomposition methods. Moreover, we generalize GOO for tensor decomposition. As a concrete application of GOO, we devise a new data decomposition method over a special linear group to normalize point cloud data. Experiment results show that our normalization method is able to obtain recovery well from distortions like shearing, rotation and squeezing. | In @cite_23 the following formulation is used to get the Ky-Fan @math -norm @cite_25 of a matrix @math when @math and @math : [ ^ m k , ^ = , ^ n k , ^ = ( ^ ) . ] | {
"cite_N": [
"@cite_25",
"@cite_23"
],
"mid": [
"191339423",
"2952509110"
],
"abstract": [
"A method of manufacturing a semiconductor device, in particular a monolithic integrated circuit, in which highly doped zones are provided according to a given pattern on one side of a monocrystalline silicon substrate body by local diffusion of at least one impurity in a substantially flat surface of the substrate body and the substrate surface on said side is given a profile in a pattern which corresponds to the pattern of the highly doped zones, after which an epitaxial silicon layer is provided on said side and one or more semiconductor circuit elements are then formed while using at least one photoresist step, characterized in that the substantially flat substrate surface is given a crystal orientation lying between a 001 face and an adjacent 111 face, which orientation deviates at least 10 DEG from the said 001 face and at least 15 DEG from said 111 face and is present in a strip within 10 DEG from the crystallographic zone formed by the said two faces.",
"We study low rank matrix and tensor completion and propose novel algorithms that employ adaptive sampling schemes to obtain strong performance guarantees. Our algorithms exploit adaptivity to identify entries that are highly informative for learning the column space of the matrix (tensor) and consequently, our results hold even when the row space is highly coherent, in contrast with previous analyses. In the absence of noise, we show that one can exactly recover a @math matrix of rank @math from merely @math matrix entries. We also show that one can recover an order @math tensor using @math entries. For noisy recovery, our algorithm consistently estimates a low rank matrix corrupted with noise using @math entries. We complement our study with simulations that verify our theory and demonstrate the scalability of our algorithms."
]
} |
1410.0389 | 2143660872 | We introduce a learning framework called learning using privileged information (LUPI) to the computer vision field. We focus on the prototypical computer vision problem of teaching computers to recognize objects in images. We want the computers to be able to learn faster at the expense of providing extra information during training time. As additional information about the image data, we look at several scenarios that have been studied in computer vision before: attributes, bounding boxes and image tags. The information is privileged as it is available at training time but not at test time. We explore two maximum-margin techniques that are able to make use of this additional source of information, for binary and multiclass object classification. We interpret these methods as learning easiness and hardness of the objects in the privileged space and then transferring this knowledge to train a better classifier in the original space. We provide a thorough analysis and comparison of information transfer from privileged to the original data spaces for both LUPI methods. Our experiments show that incorporating privileged information can improve the classification accuracy. Finally, we conduct user studies to understand which samples are easy and which are hard for human learning, and explore how this information is related to easy and hard samples when learning a classifier. | In computer vision problems it is common to have access to multiple sources of information. Sometimes all of them are visual, such as when images are represented by color features as well as by texture features. Sometimes, the modalities are mixed, such as for images with text captions. If all modalities are present both at training and at test time, it is rather straightforward to combine them for better prediction performance. This is studied, e.g., in the fields of multi-modal or multi-view learning. 
Methods suggested here range from stacking, where one simply concatenates the feature vectors of all data modalities, to complex adaptive methods for early or late data fusion @cite_2 , including @cite_22 and @cite_42 . | {
"cite_N": [
"@cite_42",
"@cite_22",
"@cite_2"
],
"mid": [
"2124372976",
"2538008885",
"1989085630"
],
"abstract": [
"A key ingredient in the design of visual object classification systems is the identification of relevant class specific aspects while being robust to intra-class variations. While this is a necessity in order to generalize beyond a given set of training images, it is also a very difficult problem due to the high variability of visual appearance within each class. In the last years substantial performance gains on challenging benchmark datasets have been reported in the literature. This progress can be attributed to two developments: the design of highly discriminative and robust image features and the combination of multiple complementary features based on different aspects such as shape, color or texture. In this paper we study several models that aim at learning the correct weighting of different features from training data. These include multiple kernel learning as well as simple baseline methods. Furthermore we derive ensemble methods inspired by Boosting which are easily extendable to several multiclass setting. All methods are thoroughly evaluated on object classification datasets using a multitude of feature descriptors. The key results are that even very simple baseline methods, that are orders of magnitude faster than learning techniques are highly competitive with multiple kernel learning. Furthermore the Boosting type methods are found to produce consistently better results in all experiments. We provide insight of when combination methods can be expected to work and how the benefit of complementary features can be exploited most efficiently.",
"Our objective is to obtain a state-of-the art object category detector by employing a state-of-the-art image classifier to search for the object in all possible image sub-windows. We use multiple kernel learning of Varma and Ray (ICCV 2007) to learn an optimal combination of exponential χ2 kernels, each of which captures a different feature channel. Our features include the distribution of edges, dense and sparse visual words, and feature descriptors at different levels of spatial organization.",
"Semantic analysis of multimodal video aims to index segments of interest at a conceptual level. In reaching this goal, it requires an analysis of several information streams. At some point in the analysis these streams need to be fused. In this paper, we consider two classes of fusion schemes, namely early fusion and late fusion. The former fuses modalities in feature space, the latter fuses modalities in semantic space. We show by experiment on 184 hours of broadcast video data and for 20 semantic concepts, that late fusion tends to give slightly better performance for most concepts. However, for those concepts where early fusion performs better the difference is more significant."
]
} |
1410.0389 | 2143660872 | We introduce a learning framework called learning using privileged information (LUPI) to the computer vision field. We focus on the prototypical computer vision problem of teaching computers to recognize objects in images. We want the computers to be able to learn faster at the expense of providing extra information during training time. As additional information about the image data, we look at several scenarios that have been studied in computer vision before: attributes, bounding boxes and image tags. The information is privileged as it is available at training time but not at test time. We explore two maximum-margin techniques that are able to make use of this additional source of information, for binary and multiclass object classification. We interpret these methods as learning easiness and hardness of the objects in the privileged space and then transferring this knowledge to train a better classifier in the original space. We provide a thorough analysis and comparison of information transfer from privileged to the original data spaces for both LUPI methods. Our experiments show that incorporating privileged information can improve the classification accuracy. Finally, we conduct user studies to understand which samples are easy and which are hard for human learning, and explore how this information is related to easy and hard samples when learning a classifier. | The situation we are interested in occurs when at training time we have an additional data representation compared to test time. Different settings of this kind have appeared in the computer vision literature, but each was studied in a separate way. For example, for clustering with multiple image modalities, it has been proposed to use CCA to learn a shared representation that can be computed from either of the representations @cite_1 . Similarly, the shared representation is also used for cross-modal retrieval @cite_43 . 
Alternatively, one can use the training data to learn a mapping from the image to the privileged modality and use this predictor to fill in the values missing at test time @cite_0 . Feature vectors made out of semantic attributes have been used to improve object categorization when very few or no training examples are available @cite_29 @cite_4 @cite_35 @cite_25 . In @cite_40 it was shown that annotator rationales can act as additional sources of information during training, as long as the rationales can be expressed in the same data representation as the original data (e.g. characteristic regions within the training images). | {
"cite_N": [
"@cite_35",
"@cite_4",
"@cite_29",
"@cite_1",
"@cite_0",
"@cite_43",
"@cite_40",
"@cite_25"
],
"mid": [
"1500937733",
"1597421340",
"2128532956",
"2123576058",
"2950359053",
"2106890447",
"2047956997",
"1777241655"
],
"abstract": [
"We propose a new learning method to infer a mid-level feature representation that combines the advantage of semantic attribute representations with the higher expressive power of non-semantic features. The idea lies in augmenting an existing attribute-based representation with additional dimensions for which an autoencoder model is coupled with a large-margin principle. This construction allows a smooth transition between the zero-shot regime with no training example, the unsupervised regime with training examples but without class labels, and the supervised regime with training examples and with class labels. The resulting optimization problem can be solved efficiently, because several of the necessity steps have closed-form solutions. Through extensive experiments we show that the augmented representation achieves better results in terms of object categorization accuracy than the semantic representation alone.",
"We present a discriminatively trained model for joint modelling of object class labels (e.g. \"person\", \"dog\", \"chair\", etc.) and their visual attributes (e.g. \"has head\", \"furry\", \"metal\", etc.). We treat attributes of an object as latent variables in our model and capture the correlations among attributes using an undirected graphical model built from training data. The advantage of our model is that it allows us to infer object class labels using the information of both the test image itself and its (latent) attributes. Our model unifies object class prediction and attribute prediction in a principled framework. It is also flexible enough to deal with different performance measurements. Our experimental results provide quantitative evidence that attributes can improve object naming.",
"We study the problem of object recognition for categories for which we have no training examples, a task also called zero--data or zero-shot learning. This situation has hardly been studied in computer vision research, even though it occurs frequently; the world contains tens of thousands of different object classes, and image collections have been formed and suitably annotated for only a few of them. To tackle the problem, we introduce attribute-based classification: Objects are identified based on a high-level description that is phrased in terms of semantic attributes, such as the object's color or shape. Because the identification of each such property transcends the specific learning task at hand, the attribute classifiers can be prelearned independently, for example, from existing image data sets unrelated to the current task. Afterward, new classes can be detected based on their attribute representation, without the need for a new training phase. In this paper, we also introduce a new data set, Animals with Attributes, of over 30,000 images of 50 animal classes, annotated with 85 semantic attributes. Extensive experiments on this and two more data sets show that attribute-based classification indeed is able to categorize images without access to any training images of the target classes.",
"We present a new method for spectral clustering with paired data based on kernel canonical correlation analysis, called correlational spectral clustering. Paired data are common in real world data sources, such as images with text captions. Traditional spectral clustering algorithms either assume that data can be represented by a single similarity measure, or by co-occurrence matrices that are then used in biclustering. In contrast, the proposed method uses separate similarity measures for each data representation, and allows for projection of previously unseen data that are only observed in one representation (e.g. images but not text). We show that this algorithm generalizes traditional spectral clustering algorithms and show consistent empirical improvement over spectral clustering on a variety of datasets of images with associated text.",
"Traditional multi-view learning approaches suffer in the presence of view disagreement,i.e., when samples in each view do not belong to the same class due to view corruption, occlusion or other noise processes. In this paper we present a multi-view learning approach that uses a conditional entropy criterion to detect view disagreement. Once detected, samples with view disagreement are filtered and standard multi-view learning methods can be successfully applied to the remaining samples. Experimental evaluation on synthetic and audio-visual databases demonstrates that the detection and filtering of view disagreement considerably increases the performance of traditional multi-view learning approaches.",
"We address the problem of metric learning for multi-view data, namely the construction of embedding projections from data in different representations into a shared feature space, such that the Euclidean distance in this space provides a meaningful within-view as well as between-view similarity. Our motivation stems from the problem of cross-media retrieval tasks, where the availability of a joint Euclidean distance function is a prerequisite to allow fast, in particular hashing-based, nearest neighbor queries. We formulate an objective function that expresses the intuitive concept that matching samples are mapped closely together in the output space, whereas non-matching samples are pushed apart, no matter in which view they are available. The resulting optimization problem is not convex, but it can be decomposed explicitly into a convex and a concave part, thereby allowing efficient optimization using the convex-concave procedure. Experiments on an image retrieval task show that nearest-neighbor based cross-view retrieval is indeed possible, and the proposed technique improves the retrieval accuracy over baseline techniques.",
"Traditional supervised visual learning simply asks annotators “what” label an image should have. We propose an approach for image classification problems requiring subjective judgment that also asks “why”, and uses that information to enrich the learned model. We develop two forms of visual annotator rationales: in the first, the annotator highlights the spatial region of interest he found most influential to the label selected, and in the second, he comments on the visual attributes that were most important. For either case, we show how to map the response to synthetic contrast examples, and then exploit an existing large-margin learning technique to refine the decision boundary accordingly. Results on multiple scene categorization and human attractiveness tasks show the promise of our approach, which can more accurately learn complex categories with the explanations behind the label choices.",
"We propose a probabilistic model to infer supervised latent variables in the Hamming space from observed data. Our model allows simultaneous inference of the number of binary latent variables, and their values. The latent variables preserve neighbourhood structure of the data in a sense that objects in the same semantic concept have similar latent values, and objects in different concepts have dissimilar latent values. We formulate the supervised infinite latent variable problem based on an intuitive principle of pulling objects together if they are of the same type, and pushing them apart if they are not. We then combine this principle with a flexible Indian Buffet Process prior on the latent variables. We show that the inferred supervised latent variables can be directly used to perform a nearest neighbour search for the purpose of retrieval. We introduce a new application of dynamically extending hash codes, and show how to effectively couple the structure of the hash codes with continuously growing structure of the neighbourhood preserving infinite latent feature space."
]
} |
1410.0389 | 2143660872 | We introduce a learning framework called learning using privileged information (LUPI) to the computer vision field. We focus on the prototypical computer vision problem of teaching computers to recognize objects in images. We want the computers to be able to learn faster at the expense of providing extra information during training time. As additional information about the image data, we look at several scenarios that have been studied in computer vision before: attributes, bounding boxes and image tags. The information is privileged as it is available at training time but not at test time. We explore two maximum-margin techniques that are able to make use of this additional source of information, for binary and multiclass object classification. We interpret these methods as learning easiness and hardness of the objects in the privileged space and then transferring this knowledge to train a better classifier in the original space. We provide a thorough analysis and comparison of information transfer from privileged to the original data spaces for both LUPI methods. Our experiments show that incorporating privileged information can improve the classification accuracy. Finally, we conduct user studies to understand which samples are easy and which are hard for human learning, and explore how this information is related to easy and hard samples when learning a classifier. | In @cite_26 , the authors proposed to explore privileged information as a measure of uncertainty about samples, estimating the noise term in the Gaussian Processes classification from the privileged data, i.e. privileged noise. | {
"cite_N": [
"@cite_26"
],
"mid": [
"2171072927"
],
"abstract": [
"The learning with privileged information setting has recently attracted a lot of attention within the machine learning community, as it allows the integration of additional knowledge into the training process of a classifier, even when this comes in the form of a data modality that is not available at test time. Here, we show that privileged information can naturally be treated as noise in the latent function of a Gaussian process classifier (GPC). That is, in contrast to the standard GPC setting, the latent function is not just a nuisance but a feature: it becomes a natural measure of confidence about the training data by modulating the slope of the GPC probit likelihood function. Extensive experiments on public datasets show that the proposed GPC method using privileged noise, called GPC+, improves over a standard GPC without privileged knowledge, and also over the current state-of-the-art SVM-based method, SVM+. Moreover, we show that advanced neural networks and deep learning methods can be compressed as privileged information."
]
} |
1410.0412 | 2101695728 | Computational fluid dynamics (CFD) requires a vast amount of compute cycles on contemporary large-scale parallel computers. Hence, performance optimization is a pivotal activity in this field of computational science. Not only does it reduce the time to solution, but it also allows to minimize the energy consumption. In this work we study performance optimizations for an MPI-parallel lattice Boltzmann-based flow solver that uses a sparse lattice representation with indirect addressing. First we describe how this indirect addressing can be minimized in order to increase the single-core and chip-level performance. Second, the communication overhead is reduced via appropriate partitioning, but maintaining the single core performance improvements. Both optimizations allow to run the solver at an operating point with minimal energy consumption. | In the case of lattice Boltzmann methods (LBM), the decisive quantity is called ( @math ) and is measured in bytes per fluid lattice site update ( @math ) @cite_17 . There are several propagation step variants for LBM that achieve the lowest data traffic per FLUP: the two-grid one-step algorithm with non-temporal stores @cite_13 , Bailey et al.'s AA-pattern @cite_9 , and Geier's Eso-Twist @cite_18 @cite_12 . The latter two both work with a single grid only and arrange the processing order in a clever way to work around data dependencies. The first two variants were successfully implemented in the fluid flow solver framework @cite_14 , which relies on a sparse lattice structure of the simulation domain @cite_0 . In contrast to a full array approach, this introduces indirect data accesses, but delivers, if done correctly, excellent performance not only for flow in simple geometries but also in porous media like fixed-bed reactors or foams. | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_9",
"@cite_0",
"@cite_13",
"@cite_12",
"@cite_17"
],
"mid": [
"2791769657",
"2950444855",
"2113754927",
"1980037648",
"1971876223",
"",
"2073875590"
],
"abstract": [
"Traditionally, numerical flow simulations are carried out in a cyclic sequence of autonomous sub-steps. However, scientists have long wished for more interaction with running simulations. Since the seminal report of the National Science Foundation in 1987, new forms of scientific visualization have therefore been developed that differ fundamentally from the traditional procedures. In particular, the so-called computational steering approach has attracted keen interest. Then as now, however, the application of this technique is the exception rather than the rule. This is largely due to the complexity and restrictions of traditional high-performance systems. In this work, as an alternative to the traditional procedure, the immense computing power of modern graphics card generations is therefore employed for the computations. So-called GPGPU computing is particularly well suited to the application of the lattice Boltzmann method in the field of numerical flow simulations. Based on the LBM approach, this work prototypically develops an interactive simulation environment built on the computational steering paradigm that integrates all processes for solving flow problems within a single application. Through the convergence of the high massively parallel computing power of GPUs and the interaction capabilities in a single application, a considerable increase in application quality can be achieved. By using multiple GPUs, it is possible to compute three-dimensional flow problems of practically relevant size while simultaneously enabling interactive manipulation and exploration of the flow field at runtime. The required financial outlay is comparatively low compared with traditional massively parallel approaches.",
"Memory-bound algorithms show complex performance and energy consumption behavior on multicore processors. We choose the lattice-Boltzmann method (LBM) on an Intel Sandy Bridge cluster as a prototype scenario to investigate if and how single-chip performance and power characteristics can be generalized to the highly parallel case. First we perform an analysis of a sparse-lattice LBM implementation for complex geometries. Using a single-core performance model, we predict the intra-chip saturation characteristics and the optimal operating point in terms of energy to solution as a function of implementation details, clock frequency, vectorization, and number of active cores per chip. We show that high single-core performance and a correct choice of the number of active cores per chip are the essential optimizations for lowest energy to solution at minimal performance degradation. Then we extrapolate to the MPI-parallel level and quantify the energy-saving potential of various optimizations and execution modes, where we find these guidelines to be even more important, especially when communication overhead is non-negligible. In our setup we could achieve energy savings of 35 in this case, compared to a naive approach. We also demonstrate that a simple non-reflective reduction of the clock speed leaves most of the energy saving potential unused.",
"Lattice Boltzmann Methods (LBM) are used for the computational simulation of Newtonian fluid dynamics. LBM-based simulations are readily parallelizable; they have been implemented on general-purpose processors, field-programmable gate arrays (FPGAs), and graphics processing units (GPUs). Of the three methods, the GPU implementations achieved the highest simulation performance per chip. With memory bandwidth of up to 141 GB s and a theoretical maximum floating point performance of over 600 GFLOPS, CUDA-ready GPUs from NVIDIA provide an attractive platform for a wide range of scientific simulations, including LBM. This paper improves upon prior single-precision GPU LBM results for the D3Q19 model by increasing GPU multiprocessor occupancy, resulting in an increase in maximum performance by 20 , and by introducing a space-efficient storage method which reduces GPU RAM requirements by 50 at a slight detriment to performance. Both GPU implementations are over 28 times faster than a single-precision quad-core CPU version utilizing OpenMP.",
"Classic vector systems have all but vanished from recent TOP500 lists. Looking at the recently introduced NEC SX-9 series, we benchmark its memory subsystem using the low level vector triad and employ the kernel of an advanced lattice Boltzmann flow solver to demonstrate that classic vectors still combine excellent performance with a well-established optimization approach. To investigate the multi-node performance, the flow field in a real porous medium is simulated using the hybrid MPI OpenMP parallel ILBDC lattice Boltzmann application code. Results for a commodity Intel Nehalem-based cluster are provided for comparison. Clusters can keep up with the vector systems, however, require massive parallelism and thus much more effort to provide a good domain decomposition.",
"Abstract This report presents a comprehensive survey of the effect of different data layouts on the single processor performance characteristics for the lattice Boltzmann method both for commodity “off-the-shelf” (COTS) architectures and tailored HPC systems, such as vector computers. We cover modern 64-bit processors ranging from IA32 compatible (Intel Xeon Nocona, AMD Opteron), superscalar RISC (IBM Power4), IA64 (Intel Itanium 2) to classical vector (NEC SX6+) and novel vector (Cray X1) architectures. Combining different data layouts with architecture dependent optimization strategies we demonstrate that the optimal implementation strongly depends on the architecture used. In particular, the correct choice of the data layout could supersede complex cache-blocking techniques in our kernels. Furthermore our results demonstrate that vector systems can outperform COTS architectures by one order of magnitude.",
"",
"Several possibilities exist to implement the propagation step of lattice Boltzmann methods. This paper describes common implementations and compares the number of memory transfer operations they require per lattice node update. A performance model based on the memory bandwidth is then used to obtain an estimation of the maximum achievable performance on different machines. A subset of the discussed implementations of the propagation step are benchmarked on different Intel- and AMD-based compute nodes using the framework of an existing flow solver that is specially adapted to simulate flow in porous media, and the model is validated against the measurements. Advanced approaches for the propagation step like ''A-A pattern'' or ''Esoteric Twist'' require more programming effort but often sustain significantly better performance than non-naive but straightforward implementations."
]
} |
1410.0412 | 2101695728 | Computational fluid dynamics (CFD) requires a vast amount of compute cycles on contemporary large-scale parallel computers. Hence, performance optimization is a pivotal activity in this field of computational science. Not only does it reduce the time to solution, but it also allows to minimize the energy consumption. In this work we study performance optimizations for an MPI-parallel lattice Boltzmann-based flow solver that uses a sparse lattice representation with indirect addressing. First we describe how this indirect addressing can be minimized in order to increase the single-core and chip-level performance. Second, the communication overhead is reduced via appropriate partitioning, but maintaining the single core performance improvements. Both optimizations allow to run the solver at an operating point with minimal energy consumption. | In this work we present a modified version of the two-grid one-step algorithm with non-temporal stores (OS-NT) and the AA-pattern algorithm. Both are augmented by a technique called RIA (reduced indirect addressing), which can avoid the indirect access under certain conditions. This optimization is based on the idea of run length encoding and enables a reduction of the loop balance @math . It also allows for partial vectorization, which is usually incompatible with indirect addressing unless the hardware supports efficient gather operations. RIA and partial vectorization were already implemented in the code we employed for our earlier analysis @cite_14 ; here we describe them in due detail. | {
"cite_N": [
"@cite_14"
],
"mid": [
"2950444855"
],
"abstract": [
"Memory-bound algorithms show complex performance and energy consumption behavior on multicore processors. We choose the lattice-Boltzmann method (LBM) on an Intel Sandy Bridge cluster as a prototype scenario to investigate if and how single-chip performance and power characteristics can be generalized to the highly parallel case. First we perform an analysis of a sparse-lattice LBM implementation for complex geometries. Using a single-core performance model, we predict the intra-chip saturation characteristics and the optimal operating point in terms of energy to solution as a function of implementation details, clock frequency, vectorization, and number of active cores per chip. We show that high single-core performance and a correct choice of the number of active cores per chip are the essential optimizations for lowest energy to solution at minimal performance degradation. Then we extrapolate to the MPI-parallel level and quantify the energy-saving potential of various optimizations and execution modes, where we find these guidelines to be even more important, especially when communication overhead is non-negligible. In our setup we could achieve energy savings of 35 in this case, compared to a naive approach. We also demonstrate that a simple non-reflective reduction of the clock speed leaves most of the energy saving potential unused."
]
} |
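The RIA (reduced indirect addressing) technique described in the row above is based on run-length encoding of the index list: wherever the neighbor indices of consecutive lattice cells are themselves consecutive, the indirect access can be replaced by direct, unit-stride (and hence vectorizable) access. The following is an illustrative Python sketch of that idea only, not the authors' MPI-parallel lattice Boltzmann code; both function names are invented:

```python
# Illustrative sketch (not the authors' implementation): run-length
# encoding of an indirect-access index list, the core idea behind RIA.
# Where neighbor indices are consecutive, a whole run can be processed
# with direct (unit-stride) addressing instead of a gather.

def runs_of_consecutive(indices):
    """Compress an index list into (start_value, length) runs."""
    runs = []
    i = 0
    n = len(indices)
    while i < n:
        start = indices[i]
        length = 1
        while i + length < n and indices[i + length] == start + length:
            length += 1
        runs.append((start, length))
        i += length
    return runs

def gather_with_ria(src, indices):
    """Gather src[indices], using a contiguous slice for each run."""
    out = []
    for start, length in runs_of_consecutive(indices):
        out.extend(src[start:start + length])  # direct, vectorizable access
    return out
```

In the real solver the runs would be precomputed once for the sparse lattice, so the streaming/collision loops pay the indirect-addressing cost only at run boundaries.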
1409.8633 | 1929664596 | A key feature of the packet scheduler in LTE system is that it can allocate resources both in the time and frequency domain. Furthermore, the scheduler is acquainted with channel state information periodically reported by user equipments either in an aggregate form for the whole downlink channel, or distinguished for each available subchannel. This mechanism allows for wide discretion in resource allocation, thus promoting the flourishing of several scheduling algorithms, with different purposes. It is therefore of great interest to compare the performance of such algorithms in different scenarios. A very common simulation tool that can be used for this purpose is ns-3, which already supports a set of well known scheduling algorithms for LTE downlink, though it still lacks schedulers that provide throughput guarantees. In this work we contribute to fill this gap by implementing a scheduling algorithm that provides long-term throughput guarantees to the different users, while opportunistically exploiting the instantaneous channel fluctuations to increase the cell capacity. We then perform a thorough performance analysis of the different scheduling algorithms by means of extensive ns-3 simulations, both for saturated UDP and TCP traffic sources. The analysis makes it possible to appreciate the difference among the scheduling algorithms, and to assess the performance gain, both in terms of cell capacity and packet service time, obtained by allowing the schedulers to work on the frequency domain. | Performance analysis of the downlink scheduler can be carried out either at the MAC layer or at the transport layer, where the effect on the transmission control protocol (TCP) is of high importance. While the MAC throughput analysis has its own benefits, a huge fraction of today's data is carried via HTTP @cite_11 , which uses TCP because it is reliable, well understood, and can be conveniently managed by firewalls and security systems.
However, it is not always straightforward to infer TCP throughput from MAC performance for a given scheduling algorithm. It is therefore essential to investigate the performance of the scheduling algorithms when they handle TCP traffic as well. | {
"cite_N": [
"@cite_11"
],
"mid": [
"2132454267"
],
"abstract": [
"TCP is widely used in commercial multimedia streaming systems, with recent measurement studies indicating that a significant fraction of Internet streaming media is currently delivered over HTTP TCP. These observations motivate us to develop analytic performance models to systematically investigate the performance of TCP for both live and stored-media streaming. We validate our models via ns simulations and experiments conducted over the Internet. Our models provide guidelines indicating the circumstances under which TCP streaming leads to satisfactory performance, showing, for example, that TCP generally provides good streaming performance when the achievable TCP throughput is roughly twice the media bitrate, with only a few seconds of startup delay."
]
} |
1409.8633 | 1929664596 | A key feature of the packet scheduler in LTE system is that it can allocate resources both in the time and frequency domain. Furthermore, the scheduler is acquainted with channel state information periodically reported by user equipments either in an aggregate form for the whole downlink channel, or distinguished for each available subchannel. This mechanism allows for wide discretion in resource allocation, thus promoting the flourishing of several scheduling algorithms, with different purposes. It is therefore of great interest to compare the performance of such algorithms in different scenarios. A very common simulation tool that can be used for this purpose is ns-3, which already supports a set of well known scheduling algorithms for LTE downlink, though it still lacks schedulers that provide throughput guarantees. In this work we contribute to fill this gap by implementing a scheduling algorithm that provides long-term throughput guarantees to the different users, while opportunistically exploiting the instantaneous channel fluctuations to increase the cell capacity. We then perform a thorough performance analysis of the different scheduling algorithms by means of extensive ns-3 simulations, both for saturated UDP and TCP traffic sources. The analysis makes it possible to appreciate the difference among the scheduling algorithms, and to assess the performance gain, both in terms of cell capacity and packet service time, obtained by allowing the schedulers to work on the frequency domain. | In a nutshell, there has been a growing interest in the design and performance comparison of the scheduling algorithms for LTE taking into account both the UDP and TCP traffic. However, the comparison with opportunistic but users' fair schedulers, capable of providing throughput guarantees to users while exploiting the specific resource allocation structure of LTE, has not been carried out. 
Secondly, although several well-known scheduling algorithms are already implemented in @math @cite_14 @cite_22 @cite_7 , to the best of our knowledge, QoS-aware schedulers that provide fair throughput guarantees to the users and take into account the LTE resource allocation framework have not yet been implemented in @math . This paper aims at filling these gaps. | {
"cite_N": [
"@cite_14",
"@cite_22",
"@cite_7"
],
"mid": [
"2029426833",
"2059746594",
"2098835300"
],
"abstract": [
"In LTE systems, the downlink scheduler is an essential component for efficient radio resource utilization; hence, in the context of LTE simulation, the availability of good downlink scheduler models is very important. At the time this work started, the LTE module of the ns-3 simulator only supported two types of scheduler, namely Round Robin and Proportional Fair. To overcome this limitation, we implemented in ns-3 several well-known downlink LTE scheduler algorithms, namely maximum throughput, throughput to average, blind equal throughput, token bank fair queue and priority set. In this paper, we first describe in detail their design and implementation, and then discuss their validation done by comparison with the theoretical performance in some reference scenarios.",
"Packet scheduler at the medium access control (MAC) layer is essential to improve radio resource utilization in the Long Term Evolution (LTE) network. The MAC scheduler allocates resource blocks to user terminals (UEs) according to the priority metric, which varies in different scheduling algorithms. Although there have been many studies on the performance of LTE schedulers at the MAC layer, it is interesting to evaluate the impact of different LTE MAC schedulers on the transport layer, particularly on the transmission control protocol (TCP). In this study, we implement three mainstream LTE MAC schedulers in Network Simulator-3 (NS-3), namely, maximum throughput (MT), blind equal throughput (BET) and proportional fair (PF). Extensive simulations are conducted to examine the different TCP throughput achieved with the frequency domain version and the time domain version of these schedulers in a vehicular environment. The performance difference is attributed to important factors such as the resource allocation granularity, channel-awareness in scheduling, and the number of UEs.",
"Designing scheduling algorithms that work in synergy with TCP is a challenging problem in wireless networks. Extensive research on scheduling algorithms has focused on inelastic traffic, where there is no correlation between traffic dynamics and scheduling decisions. In this work, we study the performance of several scheduling algorithms in LTE networks, where the scheduling decisions are intertwined with wireless channel fluctuations to improve the system throughput. We use ns-3 simulations to study the performance of several scheduling algorithms with a specific focus on Max Weight (MW) schedulers with both UDP and TCP traffic, while considering the detailed behavior of OFDMA-based resource allocation in LTE networks. We show that, contrary to its performance with inelastic traffic, MW schedulers may not perform well in LTE networks in the presence of TCP traffic, as they are agnostic to the TCP congestion control mechanism. We then design a new scheduler called “Queue MW” (Q-MW) which is tailored specifically to TCP dynamics by giving higher priority to TCP flows whose queue at the base station is very small in order to encourage them to send more data at a faster rate. We have implemented Q-MW in ns-3 and studied its performance in a wide range of network scenarios in terms of queue size at the base station and round-trip delay. Our simulation results show that Q-MW achieves peak and average throughput gains of 37 and 10 compared to MW schedulers if tuned properly."
]
} |
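Several of the schedulers compared in the rows above (MT, BET, PF, Q-MW) differ only in the priority metric computed per user at each scheduling interval. As a concrete illustration, the classic proportional fair (PF) metric trades channel opportunism against long-term fairness. This is a minimal sketch of the PF rule only, not the ns-3 implementation; the EWMA constant and rate values below are illustrative:

```python
# Minimal sketch (not the ns-3 code) of the proportional fair (PF)
# scheduling metric: each interval, the resource goes to the user
# maximizing instantaneous_rate / average_throughput, and the averages
# are updated with an exponential moving average (EWMA).

def pf_schedule(inst_rates, avg_thr, alpha=0.1):
    """Return the selected user index and the updated average throughputs."""
    # PF metric: opportunistic (favors users with good channels) but
    # fair (penalizes users that have already received a lot).
    metrics = [r / max(a, 1e-9) for r, a in zip(inst_rates, avg_thr)]
    chosen = max(range(len(metrics)), key=lambda i: metrics[i])
    # EWMA update: only the scheduled user adds its instantaneous rate;
    # everyone's average decays, so starved users gain priority over time.
    new_avg = [
        (1 - alpha) * a + (alpha * inst_rates[i] if i == chosen else 0.0)
        for i, a in enumerate(avg_thr)
    ]
    return chosen, new_avg
```

Note how the user with the worse instantaneous rate can still win the resource if its past average throughput is low enough, which is exactly the fairness mechanism the equal-throughput schedulers push further.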
1409.8650 | 1982656393 | In this paper, we deal with the problem of jointly determining the optimal coding strategy and the scheduling decisions when receivers obtain layered data from multiple servers. The layered data is encoded by means of prioritized random linear coding (PRLC) in order to be resilient to channel loss while respecting the unequal levels of importance in the data, and data blocks are transmitted simultaneously in order to reduce decoding delays and improve the delivery performance. We formulate the optimal coding and scheduling decisions problem in our novel framework with the help of Markov decision processes (MDP), which are effective tools for modeling adapting streaming systems. Reinforcement learning approaches are then proposed to derive reduced computational complexity solutions to the adaptive coding and scheduling problems. The novel reinforcement learning approaches and the MDP solution are examined in an illustrative example for scalable video transmission . Our methods offer large performance gains over competing methods that deliver the data blocks sequentially. The experimental evaluation also shows that our novel algorithms offer continuous playback and guarantee small quality variations which is not the case for baseline solutions. Finally, our work highlights the advantages of reinforcement learning algorithms to forecast the temporal evolution of data demands and to decide the optimal coding and scheduling decisions . | A variety of Application Layer Forward Error Correction (AL-FEC) schemes appropriate for transmission of scalable video over broadcast erasure channels have been recently presented in @cite_6 @cite_13 @cite_21 @cite_5 @cite_3 @cite_12 . In @cite_6 , the sliding window concept has been introduced and it has been later combined in @cite_13 with Unequal Error Protection (UEP) Raptor codes for providing enhanced error robustness. 
This scheme has been further improved in @cite_21 , where data replication is used prior to the application of Fountain codes. This results in stronger protection for the most important layers. The expanding window approach has been proposed in @cite_5 for video multicast as an alternative to the sliding window method. | {
"cite_N": [
"@cite_21",
"@cite_3",
"@cite_6",
"@cite_5",
"@cite_13",
"@cite_12"
],
"mid": [
"2121496396",
"2132159785",
"2129446545",
"2117668435",
"2148011327",
"2134203932"
],
"abstract": [
"Application-layer forward error correction (FEC) is used in many multimedia communication systems to address the problem of packet loss in lossy packet networks. One powerful form of application-layer FEC is unequal error protection which protects the information symbols according to their importance. We propose a method for unequal error protection with a Fountain code. When the information symbols were partitioned into two protection classes (most important and least important), our method required a smaller transmission bit budget to achieve low bit error rates compared to the two state-of-the-art techniques. We also compared our method to the two state-of-the-art techniques for video unicast and multicast over a lossy network. Simulations for the scalable video coding (SVC) extension of the H.264 AVC standard showed that our method required a smaller transmission bit budget to achieve high-quality video.",
"This paper presents a framework for efficiently streaming scalable video from multiple servers over heterogeneous network paths. We propose to use rateless codes, or Fountain codes, such that each server acts as an independent source, without the need to coordinate its sending strategy with other servers. In this case, the problem of maximizing the received video quality and minimizing the bandwidth usage, is simply reduced to a rate allocation problem. We provide an optimal solution for an ideal scenario where the loss probvability on each server-client path is exactly known. We then present a heuristic-based algorithm, which implements an unequal error protection scheme for the more realistic case of imperfect knowledge of the loss probabilities. Simulation results finally demonstrate the efficiency of the proposed algorithm, in distributed streaming scenarios over lossy channels.",
"Digital fountain codes are becoming increasingly important for multimedia communications over networks subject to packet erasures. These codes have significantly lower complexity than Reed-Solomon ones, exhibit high erasure correction performance, and are very well suited to generating multiple equally important descriptions of a source. In this paper we propose an innovative scheme for streaming multimedia contents by using digital fountain codes applied over sliding windows, along with a suitably modified belief-propagation decoder. The use of overlapped windows allows one to have a virtually extended block, which yields superior performance in terms of packet recovery. Simulation results using LT codes show that the proposed algorithm has better performance in terms of efficiency, reliability and memory with respect to fixed-window encoding.",
"Fountain codes were introduced as an efficient and universal forward error correction (FEC) solution for data multicast over lossy packet networks. They have recently been proposed for large scale multimedia content delivery in practical multimedia distribution systems. However, standard fountain codes, such as LT or Raptor codes, are not designed to meet unequal error protection (UEP) requirements typical in real-time scalable video multicast applications. In this paper, we propose recently introduced UEP expanding window fountain (EWF) codes as a flexible and efficient solution for real-time scalable video multicast. We demonstrate that the design flexibility and UEP performance make EWF codes ideally suited for this scenario, i.e., EWF codes offer a number of design parameters to be ldquotunedrdquo at the server side to meet the different reception criteria of heterogeneous receivers. The performance analysis using both analytical results and simulation experiments of H.264 scalable video coding (SVC) multicast to heterogeneous receiver classes confirms the flexibility and efficiency of the proposed EWF-based FEC solution.",
"Digital fountain codes have emerged as a low-complexity alternative to Reed-Solomon codes for erasure correction. The applications of these codes are relevant especially in the field of wireless video, where low encoding and decoding complexity is crucial. In this paper, we introduce a new class of digital fountain codes based on a sliding-window approach applied to Raptor codes. These codes have several properties useful for video applications, and provide better performance than classical digital fountains. Then, we propose an application of sliding-window Raptor codes to wireless video broadcasting using scalable video coding. The rates of the base and enhancement layers, as well as the number of coded packets generated for each layer, are optimized so as to yield the best possible expected quality at the receiver side, and providing unequal loss protection to the different layers according to their importance. The proposed system has been validated in a UMTS broadcast scenario, showing that it improves the end-to-end quality, and is robust towards fluctuations in the packet loss rate.",
"Recent advances in forward error correction and scalable video coding enable new approaches for robust, distributed streaming in Mobile Ad Hoc Networks (MANETs). This paper presents an approach for distribution of real time video by uncoordinated peer-to-peer relay or source nodes in an overlay network on top of a MANET. The approach proposed here allows for distributed, rate-distortion optimized transmission-rate allocation for competing scalable video streams at relay nodes in the overlay network. The approach has the desirable feature of path source diversity that can be used for enhancing reliability in connectivity to serving nodes and or attaining a higher throughput. The distributed approach reduces signaling overhead as well as avoiding scalability issues that come with centralized processing in MANETs. Results show a significant performance gain over both single-server systems and previously proposed multi-source systems."
]
} |
1409.8650 | 1982656393 | In this paper, we deal with the problem of jointly determining the optimal coding strategy and the scheduling decisions when receivers obtain layered data from multiple servers. The layered data is encoded by means of prioritized random linear coding (PRLC) in order to be resilient to channel loss while respecting the unequal levels of importance in the data, and data blocks are transmitted simultaneously in order to reduce decoding delays and improve the delivery performance. We formulate the optimal coding and scheduling decisions problem in our novel framework with the help of Markov decision processes (MDP), which are effective tools for modeling adapting streaming systems. Reinforcement learning approaches are then proposed to derive reduced computational complexity solutions to the adaptive coding and scheduling problems. The novel reinforcement learning approaches and the MDP solution are examined in an illustrative example for scalable video transmission . Our methods offer large performance gains over competing methods that deliver the data blocks sequentially. The experimental evaluation also shows that our novel algorithms offer continuous playback and guarantee small quality variations which is not the case for baseline solutions. Finally, our work highlights the advantages of reinforcement learning algorithms to forecast the temporal evolution of data demands and to decide the optimal coding and scheduling decisions . | Unfortunately, digital Fountain codes such as Raptor codes and LT codes may perform poorly in terms of delay @cite_16 @cite_24 . To this aim, systematic Random Linear Codes (RLC) with feedback have been studied in @cite_16 for minimizing the average delay in database replication systems. The feedback messages contain information about the packets that have been delivered to the receivers. These messages are used to optimize the coding decisions such that the decoding delay is small. 
The decoding delay distribution has been investigated in @cite_24 for RLC systems. Markov chains are utilized in order to find the optimal coding solution for the case of two receivers. However, this problem is computationally intractable for three or more receivers and hence only a heuristic solution is presented. The delay benefits of an RLC-based scheme have been examined in @cite_14 , where repair packets are generated on-the-fly through RLC according to feedback messages that are periodically received by the servers. This scheme copes efficiently with packet erasures and is appropriate for real-time sources such as video. From the above studies, the advantages of RLC over Fountain codes in terms of delay become clear. | {
"cite_N": [
"@cite_24",
"@cite_14",
"@cite_16"
],
"mid": [
"2099479177",
"2115512986",
"1996340076"
],
"abstract": [
"A fundamental understanding of the delay behavior of network coding is key towards its successful application in real-time applications with strict message deadlines. Previous contributions focused mostly on the average decoding delay, which although useful in various scenarios of interest is not sufficient for providing worst-case delay guarantees. To overcome this challenge, we investigate the entire delay distribution of random linear network coding for any field size and arbitrary number of encoded symbols (or generation size). By introducing a Markov chain model we are able to obtain a complete solution for the erasure broadcast channel with two receivers. A comparison with Automatic Repeat reQuest (ARQ) with perfect feedback, round robin scheduling and a class of fountain codes reveals that network coding on GF(24) offers the best delay performance for two receivers. We also conclude that GF(2) induces a heavy tail in the delay distribution, which implies that network coding based on XOR operations although simple to implement bears a relevant cost in terms of worst-case delay. For the case of three receivers, which is mathematically challenging, we propose a brute-force methodology that gives the delay distribution of network coding for small generations and field size up to GF(24).",
"This paper introduces a robust point-to-point transmission scheme: Tetrys, that relies on a novel on-the-fly erasure coding concept which reduces the delay for recovering lost data at the receiver side. In current erasure coding schemes, the packets that are not rebuilt at the receiver side are either lost or delayed by at least one RTT before transmission to the application. The present contribution aims at demonstrating that Tetrys coding scheme can fill the gap between real-time applications requirements and full reliability. Indeed, we show that in several cases, Tetrys can recover lost packets below one RTT over lossy and best-effort networks. We also show that Tetrys allows to enable full reliability without delay compromise and as a result: significantly improves the performance of time constrained applications. For instance, our evaluations present that video-conferencing applications obtain a PSNR gain up to 7 dB compared to classic block-based erasure codes.",
"We consider the problem of minimizing delay when broadcasting over erasure channels with feedback. A sender wishes to communicate the same set of m messages to several receivers. The sender can broadcast a single message or a combination (encoding) of messages to all receivers at each timestep, through separate erasure channels. Receivers provide feedback as to whether the transmission was received. If, at some time step, a receiver cannot identify a new message, delay is incurred. Our notion of delay is motivated by real-time applications that request progressively refined input, such as the refinements or different parts of an image. Our setup is novel because it combines coding techniques with feedback information to the end of minimizing delay. Uncoded scheduling or use of multiple description (MDS) codes has been well-studied in the literature. We show that our setup allows O( m ) benefits as compared to both previous approaches for offline algorithms, while feedback allows online algorithms to achieve smaller delay compared to online algorithms without feedback. Our main complexity results are that the offline minimization problem is NP-hard when the sender only schedules single messages and that the general problem remains NP-hard even when coding is allowed. However we show that coding does offer complexity gains by exhibiting specific classes of erasure instances that become trivial under coding schemes. We also discuss online heuristics and evaluate their performance through simulations."
]
} |
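The delay analyses cited above all hinge on the same decodability condition for random linear coding: a generation of k source packets is decodable once the receiver's collected coefficient vectors have rank k. A toy sketch of that rank test over GF(2) is given below; it is not taken from the cited works, and the bitmask representation of coefficient vectors is purely an implementation convenience:

```python
# Toy sketch of the RLC decodability condition over GF(2): coefficient
# vectors are integers whose bits mark which source packets were XORed
# into a coded packet. Gaussian elimination keeps one basis row per
# pivot (highest set bit).

def rank_gf2(rows):
    """Rank over GF(2) of coefficient vectors given as bitmasks."""
    basis = {}  # pivot bit position -> reduced row with that pivot
    rank = 0
    for row in rows:
        while row:
            top = row.bit_length() - 1
            if top in basis:
                row ^= basis[top]  # cancel the pivot, keep reducing
            else:
                basis[top] = row   # new pivot found: rank grows
                rank += 1
                break
    return rank

def decodable(coeff_vectors, k):
    """A generation of k source packets is decodable iff rank == k."""
    return rank_gf2(coeff_vectors) == k
```

For example, the vectors 110, 011, 101 are linearly dependent (the third is the XOR of the first two), so three received packets are not enough to decode a generation of three, which is precisely the kind of event that drives the delay tails studied in @cite_24 .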
1409.8572 | 2952218349 | To follow the dynamicity of the user's content, researchers have recently started to model interactions between users and the Context-Aware Recommender Systems (CARS) as a bandit problem where the system needs to deal with exploration and exploitation dilemma. In this sense, we propose to study the freshness of the user's content in CARS through the bandit problem. We introduce in this paper an algorithm named Freshness-Aware Thompson Sampling (FA-TS) that manages the recommendation of fresh document according to the user's risk of the situation. The intensive evaluation and the detailed analysis of the experimental results reveals several important discoveries in the exploration exploitation (exr exp) behaviour. | We refer, in the following, to techniques that study the different dimensions of our problem. Recently, research works have been dedicated to studying the multi-armed bandit problem in RS, considering the user's behaviour as the context. In @cite_2 , the authors model CARS as a contextual bandit problem. The authors propose an algorithm called Contextual- @math -greedy which sequentially recommends documents based on contextual information about the users' documents. In @cite_9 , the authors analyse TS in the contextual bandit problem. The study demonstrates that it has better empirical performance compared to the state-of-the-art methods. The authors in @cite_2 @cite_3 describe a smart way to balance exr exp, but do not consider the user's context and document freshness during the recommendation. | {
"cite_N": [
"@cite_9",
"@cite_3",
"@cite_2"
],
"mid": [
"2166253248",
"",
"116854235"
],
"abstract": [
"Thompson Sampling is one of the oldest heuristics for multi-armed bandit problems. It is a randomized algorithm based on Bayesian ideas, and has recently generated significant interest after several studies demonstrated it to have better empirical performance compared to the state-of-the-art methods. However, many questions regarding its theoretical performance remained open. In this paper, we design and analyze a generalization of Thompson Sampling algorithm for the stochastic contextual multi-armed bandit problem with linear payoff functions, when the contexts are provided by an adaptive adversary. This is among the most important and widely studied version of the contextual bandits problem. We prove a high probability regret bound of O(d2 e√T1+e) in time T for any 0 < e < 1, where d is the dimension of each context vector and e is a parameter used by the algorithm. Our results provide the first theoretical guarantees for the contextual version of Thompson Sampling, and are close to the lower bound of Ω(d√T) for this problem. This essentially solves a COLT open problem of Chapelle and Li [COLT 2012].",
"",
"Most existing approaches in Mobile Context-Aware Recommender Systems focus on recommending relevant items to users taking into account contextual information, such as time, location, or social aspects. However, none of them has considered the problem of user's content evolution. We introduce in this paper an algorithm that tackles this dynamicity. It is based on dynamic exploration exploitation and can adaptively balance the two aspects by deciding which user's situation is most relevant for exploration or exploitation. Within a deliberately designed offline simulation framework we conduct evaluations with real online event log data. The experimental results demonstrate that our algorithm outperforms surveyed algorithms."
]
} |
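Thompson Sampling, the policy analyzed in @cite_9 and underlying the paper's FA-TS, is simple to state for a Bernoulli bandit: keep a Beta posterior per arm, sample each posterior once, and pull the arm with the largest sample. The sketch below is the generic TS heuristic, not FA-TS itself; the reward probabilities and seed are illustrative:

```python
import random

# Generic Thompson Sampling for a Bernoulli bandit (not the FA-TS
# algorithm of the paper). Each arm keeps a Beta(successes + 1,
# failures + 1) posterior over its reward probability; the randomness
# of posterior sampling is what balances exploration and exploitation.

def thompson_step(successes, failures):
    """Pick an arm by sampling each arm's Beta posterior."""
    samples = [random.betavariate(s + 1, f + 1)
               for s, f in zip(successes, failures)]
    return max(range(len(samples)), key=lambda i: samples[i])

def run_bandit(true_probs, steps, seed=0):
    """Simulate TS against Bernoulli arms with the given probabilities."""
    random.seed(seed)
    k = len(true_probs)
    successes, failures = [0] * k, [0] * k
    for _ in range(steps):
        arm = thompson_step(successes, failures)
        if random.random() < true_probs[arm]:
            successes[arm] += 1
        else:
            failures[arm] += 1
    return successes, failures
```

As the posteriors concentrate, the sampled values for the better arm dominate, so plays shift from exploration toward exploitation automatically, without an explicit epsilon schedule as in Contextual- @math -greedy.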
1409.8484 | 166361333 | Due to the huge availability of documents in digital form, and the rising possibility of deception bound to the nature of digital documents and the way they are spread, the authorship attribution problem has constantly increased its relevance. Nowadays, authorship attribution, for both information retrieval and analysis, has gained great importance in the context of security, trust and copyright preservation. This work proposes an innovative multi-agent driven machine learning technique that has been developed for authorship attribution. By means of a preprocessing for word-grouping and time-period related analysis of the common lexicon, we determine a bias reference level for the recurrence frequency of the words within analysed texts, and then train a Radial Basis Probabilistic Neural Network (RBPNN)-based classifier to identify the correct author. The main advantage of the proposed approach lies in the generality of the semantic analysis, which can be applied to different contexts and lexical domains, without requiring any modification. Moreover, the proposed system is able to incorporate an external input, meant to tune the classifier, and then self-adjust by means of continuous learning reinforcement. | In @cite_12 , data from the charge-discharge simulation of lithium-ion battery energy storage are used for classification purposes with recurrent NNs and PNNs by means of a theoretical framework based on signal theory. | {
"cite_N": [
"@cite_12"
],
"mid": [
"1994640288"
],
"abstract": [
"In this paper is reported a critical review, experiences and results about state of charge (SOC) and voltage prediction of Lithium-ions batteries obtained by recurrent neural network (RNN) and pipelined recurrent neural network (PRNN) based simulation. These soft computing technologies will be here presented, utilized and implemented to obtain the typical charge characteristics and the charge discharge simulation procedure of a commercial solid-polymer technology based cell. Simulations are compared with experimental data manufacturers."
]
} |
1409.8578 | 2950252383 | Religiosity is a powerful force shaping human societies, affecting domains as diverse as economic growth or the ability to cope with illness. As more religious leaders and organizations as well as believers start using social networking sites (e.g., Twitter, Facebook), online activities become important extensions to traditional religious rituals and practices. However, there has been a lack of research on religiosity in online social networks. This paper takes a step toward the understanding of several important aspects of religiosity on Twitter, based on the analysis of more than 250k U.S. users who self-declared their religious beliefs, including Atheism, Buddhism, Christianity, Hinduism, Islam, and Judaism. Specifically, (i) we examine the correlation of geographic distribution of religious people between Twitter and offline surveys. (ii) We analyze users' tweets and networks to identify discriminative features of each religious group, and explore supervised methods to identify believers of different religions. (iii) We study the linkage preference of different religious groups, and observe a strong preference of Twitter users connecting to others sharing the same religion. | Some other studies have focused on online religious communities. For example, McKenna and West @cite_11 conduct a survey study of the online religious forums where believers interact with others who share a common faith. Lieberman and Winzelberg @cite_12 examine religious expressions within online support groups for women with breast cancer. It is reported that the same self and social benefits (e.g., social support, emotional well-being) found to be associated with the involvement in traditional religious organizations can also be gained by participation in online religious communities. | {
"cite_N": [
"@cite_12",
"@cite_11"
],
"mid": [
"2088508389",
"1963769620"
],
"abstract": [
"Shaw and his colleagues [Shaw, B., Han, J., Kim, E., Gustafson, D., Hawkins, R., Cleary, C., et al (2007). Effects of prayer and religious expression within computer support groups on women with breast cancer. Psycho-oncology, 16(7), 676-687] examined religious expression in breast cancer (BC) online support groups (OSG). Using Pennebaker's LIWC text analysis to assess religious expression, they found that the more frequent the expression of words related to religion the lower the levels of negative emotions and the higher the levels of health self-efficacy and functional well-being. Our study goal was to replicate their findings. Specifically, we tested their central hypothesis that the percentage of religious words written by members of BC OSG's are associated with improvement in psychological outcomes. Five BC OSG's from our previous work [Lieberman, M. A., & Goldstein, B. (2005a). Not all negative emotions are equal: The role of emotional expression in online support groups for women with breast cancer. Psycho-oncology. 15, 160-168; Lieberman, M. A., & Goldstein, B. (2005b). Self-help online: An outcome evaluation of breast cancer bulletin boards. Journal of Health Psychology, 10(6), 855-862] studied 91 participants at baseline and 6 months post. Significant changes in depression and quality of life was found over time. In the current study linear regressions examined the relationship between religious statements and outcomes. The results did not support the hypotheses of a positive relationship between religious expression and positive outcome in both OSG samples. Reviews of studies examining the role of religion in health outcomes report equivocal results on the benefits of religious expression.",
"Online religious forums allow individuals to meet and interact with others who share their faith, beliefs, and values from the privacy of their homes. Active membership in traditional religious organizations has been shown to fulfill important social needs and to be associated with a number of benefits for the individuals involved. The survey study we report here found that many of the self and social benefits derived from participation in local religious institutions also accrue for those who take part in virtual religious forums. These interactive online forums were found to attract both those who are actively engaged in their local religious organizations and those who are unaffiliated."
]
} |
1409.8578 | 2950252383 | Religiosity is a powerful force shaping human societies, affecting domains as diverse as economic growth or the ability to cope with illness. As more religious leaders and organizations as well as believers start using social networking sites (e.g., Twitter, Facebook), online activities become important extensions to traditional religious rituals and practices. However, there has been a lack of research on religiosity in online social networks. This paper takes a step toward the understanding of several important aspects of religiosity on Twitter, based on the analysis of more than 250k U.S. users who self-declared their religious beliefs, including Atheism, Buddhism, Christianity, Hinduism, Islam, and Judaism. Specifically, (i) we examine the correlation of geographic distribution of religious people between Twitter and offline surveys. (ii) We analyze users' tweets and networks to identify discriminative features of each religious group, and explore supervised methods to identify believers of different religions. (iii) We study the linkage preference of different religious groups, and observe a strong preference of Twitter users connecting to others sharing the same religion. | While much research effort has been made to understand religious use of Internet technologies, we know very little about religiosity in online social networks. On the other hand, there has recently been an explosion of studies on Twitter @cite_33 @cite_17 @cite_23 @cite_24 @cite_14 @cite_21 @cite_26 , yet we do not know much specifically about religiosity on Twitter. @cite_29 develop classifiers to detect Twitter users from different categories, including category ; Nguyen and Lim @cite_5 build classifiers to identify Christian and Muslim users using their Twitter data, but neither of the two studies addresses the analysis of the phenomenon of religion on Twitter. 
@cite_22 appears to be the most relevant study, which focuses on exploring the relationship between religion and happiness via examining the different use of words (e.g., sentiment words, words related to thinking styles) in tweets between Christians and Atheists. Our present work differs both in scope and purpose. | {
"cite_N": [
"@cite_14",
"@cite_26",
"@cite_33",
"@cite_22",
"@cite_29",
"@cite_21",
"@cite_24",
"@cite_23",
"@cite_5",
"@cite_17"
],
"mid": [
"2101196063",
"",
"2008803468",
"2108073218",
"2094373579",
"1629520119",
"",
"2091325569",
"2088337754",
"2064178288"
],
"abstract": [
"Twitter, a microblogging service less than three years old, commands more than 41 million users as of July 2009 and is growing fast. Twitter users tweet about any topic within the 140-character limit and follow others to receive their tweets. The goal of this paper is to study the topological characteristics of Twitter and its power as a new medium of information sharing. We have crawled the entire Twitter site and obtained 41.7 million user profiles, 1.47 billion social relations, 4,262 trending topics, and 106 million tweets. In its follower-following topology analysis we have found a non-power-law follower distribution, a short effective diameter, and low reciprocity, which all mark a deviation from known characteristics of human social networks [28]. In order to identify influentials on Twitter, we have ranked users by the number of followers and by PageRank and found two rankings to be similar. Ranking by retweets differs from the previous two rankings, indicating a gap in influence inferred from the number of followers and that from the popularity of one's tweets. We have analyzed the tweets of top trending topics and reported on their temporal behavior and user participation. We have classified the trending topics based on the active period and the tweets and show that the majority (over 85%) of topics are headline news or persistent news in nature. A closer look at retweets reveals that any retweeted tweet is to reach an average of 1,000 users no matter what the number of followers is of the original tweet. Once retweeted, a tweet gets retweeted almost instantly on next hops, signifying fast diffusion of information after the 1st retweet. To the best of our knowledge this work is the first quantitative study on the entire Twittersphere and information diffusion on it.",
"",
"We identified individual-level diurnal and seasonal mood rhythms in cultures across the globe, using data from millions of public Twitter messages. We found that individuals awaken in a good mood that deteriorates as the day progresses—which is consistent with the effects of sleep and circadian rhythm—and that seasonal change in baseline positive affect varies with change in daylength. People are happier on weekends, but the morning peak in positive affect is delayed by 2 hours, which suggests that people awaken later on weekends.",
"We analyze data from nearly 2 million text messages (tweets) across over 16,000 users on Twitter to examine differences between Christians and atheists in natural language. Analyses reveal that Christians use more positive emotion words and less negative emotion words than atheists. Moreover, two independent paths predict differences in expressions of happiness: frequency of words related to an intuitive (vs. analytic) thinking style and frequency of words related to social relationships. These findings provide the first evidence that the relationship between religion and happiness is partially mediated by thinking style. This research also provides support for previous laboratory studies and self-report data, suggesting that social connection partially mediates the relationship between religiosity and happiness. Implications for theory and the future of social science using computational methods to analyze social media are discussed.",
"Finding the \"right people\" is a central aspect of social media systems. Twitter has millions of users who have varied interests, professions and personalities. For those in fields such as advertising and marketing, it is important to identify certain characteristics of users to target. However, Twitter users do not generally provide sufficient information about themselves on their profile which makes this task difficult. In response, this work sets out to automatically infer professions (e.g., musicians, health sector workers, technicians) and personality related attributes (e.g., creative, innovative, funny) for Twitter users based on features extracted from their content, their interaction networks, attributes of their friends and their activity patterns. We develop a comprehensive set of latent features that are then employed to perform efficient classification of users along these two dimensions (profession and personality). Our experiments on a large sample of Twitter users demonstrate both a high overall accuracy in detecting profession and personality related attributes as well as highlighting the benefits and pitfalls of various types of features for particular categories of users.",
"This article investigates the impact of user homophily on the social process of information diffusion in online social media. Over several decades, social scientists have been interested in the idea that similarity breeds connection: precisely known as \"homophily\". Homophily has been extensively studied in the social sciences and refers to the idea that users in a social system tend to bond more with ones who are similar to them than to ones who are dissimilar. The key observation is that homophily structures the ego-networks of individuals and impacts their communication behavior. It is therefore likely to affect the mechanisms in which information propagates among them. To this effect, we investigate the interplay between homophily along diverse user attributes and the information diffusion process on social media. In our approach, we first extract diffusion characteristics---corresponding to the baseline social graph as well as graphs filtered on different user attributes (e.g. location, activity). Second, we propose a Dynamic Bayesian Network based framework to predict diffusion characteristics at a future time. Third, the impact of attribute homophily is quantified by the ability of the predicted characteristics in explaining actual diffusion, and external variables, including trends in search and news. Experimental results on a large Twitter dataset demonstrate that choice of the homophilous attribute can impact the prediction of information diffusion, given a specific metric and a topic. In most cases, attribute homophily is able to explain the actual diffusion and external trends by 15-25% over cases when homophily is not considered.",
"",
"The 5-year-old social media Web site Twitter now claims that more than 100 million users post 230 million \"tweets\" (text messages up to 140 characters long) every day. In that torrent of data, some social scientists see an unprecedented opportunity to study human communication and social networks. On page [1878][1] of this week's issue of Science, researchers report their effort to use Twitter to study the collective moods of millions of people in diverse cultures around the world in real time. Their findings paint a portrait of humanity's mood swings. [1]: http://www.sciencemag.org/cgi/content/full/333/6051/1878",
"Religious belief plays an important role in how people behave, influencing how they form preferences, interpret events around them, and develop relationships with others. Traditionally, the religion labels of user population are obtained by conducting a large scale census study. Such an approach is both high cost and time consuming. In this paper, we study the problem of predicting users' religion labels using their microblogging data. We formulate religion label prediction as a classification task, and identify content, structure and aggregate features considering their self and social variants for representing a user. We introduce the notion of representative user to identify users who are important in the religious user community. We further define features using representative users. We show that SVM classifiers using our proposed features can accurately assign Christian and Muslim labels to a set of Twitter users with known religion labels.",
"The importance of quantifying the nature and intensity of emotional states at the level of populations is evident: we would like to know how, when, and why individuals feel as they do if we wish, for example, to better construct public policy, build more successful organizations, and, from a scientific perspective, more fully understand economic and social phenomena. Here, by incorporating direct human assessment of words, we quantify happiness levels on a continuous scale for a diverse set of large-scale texts: song titles and lyrics, weblogs, and State of the Union addresses. Our method is transparent, improvable, capable of rapidly processing Web-scale texts, and moves beyond approaches based on coarse categorization. Among a number of observations, we find that the happiness of song lyrics trends downward from the 1960s to the mid 1990s while remaining stable within genres, and that the happiness of blogs has steadily increased from 2005 to 2009, exhibiting a striking rise and fall with blogger age and distance from the Earth’s equator."
]
} |
1409.8083 | 67034526 | Probabilistic Latent Tensor Factorization (PLTF) is a recently proposed probabilistic framework for modelling multi-way data. Not only the common tensor factorization models but also any arbitrary tensor factorization structure can be realized by the PLTF framework. This paper presents full Bayesian inference via variational Bayes that facilitates more powerful modelling and allows more sophisticated inference on the PLTF framework. We illustrate our approach on model order selection and link prediction. | @cite_17 propose a global optimal solution to variational Bayesian matrix factorization (VBMF) that can be computed analytically by solving a quartic equation; it is highly advantageous over a popular VBMF algorithm based on iterated conditional modes (ICM), since the latter can only find a local optimal solution after iterations. Yoo and Choi @cite_3 present a hierarchical Bayesian model for matrix co-factorization in which they derive a variational inference algorithm to approximately compute posterior distributions over factor matrices. | {
"cite_N": [
"@cite_3",
"@cite_17"
],
"mid": [
"64947117",
"2127292230"
],
"abstract": [
"Matrix factorization is a popular method for collaborative prediction, where unknown ratings are predicted by user and item factor matrices which are determined to approximate a user-item matrix as their product. Bayesian matrix factorization is preferred over other methods for collaborative filtering, since Bayesian approach alleviates overfitting, integrating out all model parameters using variational inference or sampling methods. However, Bayesian matrix factorization still suffers from the cold-start problem where predictions of ratings for new items or of new users' preferences are required. In this paper we present Bayesian matrix co-factorization as an approach to exploiting side information such as content information and demographic user data, where multiple data matrices are jointly decomposed, i.e., each Bayesian decomposition is coupled by sharing some factor matrices. We derive variational inference algorithm for Bayesian matrix co-factorization. In addition, we compute Bayesian Cramer-Rao bound in the case of Gaussian likelihood, showing that Bayesian matrix co-factorization indeed improves the reconstruction over Bayesian factorization of single data matrix. Numerical experiments demonstrate the useful behavior of Bayesian matrix co-factorization in the case of cold-start problems.",
"Bayesian methods of matrix factorization (MF) have been actively explored recently as promising alternatives to classical singular value decomposition. In this paper, we show that, despite the fact that the optimization problem is non-convex, the global optimal solution of variational Bayesian (VB) MF can be computed analytically by solving a quartic equation. This is highly advantageous over a popular VBMF algorithm based on iterated conditional modes since it can only find a local optimal solution after iterations. We further show that the global optimal solution of empirical VBMF (hyperparameters are also learned from data) can also be analytically computed. We illustrate the usefulness of our results through experiments."
]
} |
1409.8083 | 67034526 | Probabilistic Latent Tensor Factorization (PLTF) is a recently proposed probabilistic framework for modelling multi-way data. Not only the common tensor factorization models but also any arbitrary tensor factorization structure can be realized by the PLTF framework. This paper presents full Bayesian inference via variational Bayes that facilitates more powerful modelling and allows more sophisticated inference on the PLTF framework. We illustrate our approach on model order selection and link prediction. | For Bayesian model selection, Sato @cite_26 derives an online version of the variational Bayes algorithm and proves its convergence by showing that it is a stochastic approximation for finding the maximum of the free energy. By combining sequential model selection procedures, the online variational Bayes algorithm provides a fully online learning method with a model selection mechanism. | {
"cite_N": [
"@cite_26"
],
"mid": [
"2171911691"
],
"abstract": [
"The Bayesian framework provides a principled way of model selection. This framework estimates a probability distribution over an ensemble of models, and the prediction is done by averaging over the ensemble of models. Accordingly, the uncertainty of the models is taken into account, and complex models with more degrees of freedom are penalized. However, integration over model parameters is often intractable, and some approximation scheme is needed. Recently, a powerful approximation scheme, called the variational bayes (VB) method, has been proposed. This approach defines the free energy for a trial probability distribution, which approximates a joint posterior probability distribution over model parameters and hidden variables. The exact maximization of the free energy gives the true posterior distribution. The VB method uses factorized trial distributions. The integration over model parameters can be done analytically, and an iterative expectation-maximization-like algorithm, whose convergence is guaranteed, is derived. In this article, we derive an online version of the VB algorithm and prove its convergence by showing that it is a stochastic approximation for finding the maximum of the free energy. By combining sequential model selection procedures, the online VB method provides a fully online learning method with a model selection mechanism. In preliminary experiments using synthetic data, the online VB method was able to adapt the model structure to dynamic environments."
]
} |
1409.8083 | 67034526 | Probabilistic Latent Tensor Factorization (PLTF) is a recently proposed probabilistic framework for modelling multi-way data. Not only the common tensor factorization models but also any arbitrary tensor factorization structure can be realized by the PLTF framework. This paper presents full Bayesian inference via variational Bayes that facilitates more powerful modelling and allows more sophisticated inference on the PLTF framework. We illustrate our approach on model order selection and link prediction. | We next turn to link prediction studies. Most often, an incomplete set of links is observed and the goal is to predict unobserved links (also referred to as the problem), or there is a temporal aspect: snapshots of the set of links up to time @math are given and the goal is to predict the links at time @math ( problem). Matrix and tensor factorization-based methods have recently been studied for temporal link prediction @cite_12 ; however, in this paper, we have considered the use of tensor factorizations for the missing link prediction problem. Applications of missing link prediction include predicting links in social networks @cite_5 ; predicting the participation of users in events such as email communications and co-authorship @cite_1 and predicting the preferences of users in online retailing @cite_10 . Matrix factorization and tensor factorization-based approaches have proved useful in terms of missing link prediction because missing link prediction is closely related to matrix and tensor completion studies, which have shown that by using a low-rank structure of a data set, it is possible to recover missing entries accurately for matrices @cite_7 and higher-order tensors @cite_4 @cite_21 . | {
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_21",
"@cite_1",
"@cite_5",
"@cite_10",
"@cite_12"
],
"mid": [
"1814521481",
"2952716509",
"2078677240",
"",
"2157082398",
"2054141820",
"1864134408"
],
"abstract": [
"The problem of incomplete data – i.e., data with missing or unknown values – in multi-way arrays is ubiquitous in biomedical signal processing, network traffic analysis, bibliometrics, social network analysis, chemometrics, computer vision, communication networks, etc. We consider the problem of how to factorize data sets with missing values with the goal of capturing the underlying latent structure of the data and possibly reconstructing missing values (i.e., tensor completion). We focus on one of the most well-known tensor factorizations that captures multi-linear structure, CANDECOMP/PARAFAC (CP). In the presence of missing data, CP can be formulated as a weighted least squares problem that models only the known entries. We develop an algorithm called CP-WOPT (CP Weighted OPTimization) that uses a first-order optimization approach to solve the weighted least squares problem. Based on extensive numerical experiments, our algorithm is shown to successfully factorize tensors with noise and up to 99% missing data. A unique aspect of our approach is that it scales to sparse large-scale data, e.g., 1000 × 1000 × 1000 with five million known entries (0.5% dense). We further demonstrate the usefulness of CP-WOPT on two real-world applications: a novel EEG (electroencephalogram) application where missing data is frequently encountered due to disconnections of electrodes and the problem of modeling computer network traffic where data may be absent due to the expense of the data collection process.",
"On the heels of compressed sensing, a remarkable new field has very recently emerged. This field addresses a broad range of problems of significant practical interest, namely, the recovery of a data matrix from what appears to be incomplete, and perhaps even corrupted, information. In its simplest form, the problem is to recover a matrix from a small sample of its entries, and comes up in many areas of science and engineering including collaborative filtering, machine learning, control, remote sensing, and computer vision to name a few. This paper surveys the novel literature on matrix completion, which shows that under some suitable conditions, one can recover an unknown low-rank matrix from a nearly minimal set of entries by solving a simple convex optimization problem, namely, nuclear-norm minimization subject to data constraints. Further, this paper introduces novel results showing that matrix completion is provably accurate even when the few observed entries are corrupted with a small amount of noise. A typical result is that one can recover an unknown n x n matrix of low rank r from just about nr log^2 n noisy samples with an error which is proportional to the noise level. We present numerical results which complement our quantitative analysis and show that, in practice, nuclear norm minimization accurately fills in the many missing entries of large low-rank matrices from just a few noisy samples. Some analogies between matrix completion and compressed sensing are discussed throughout.",
"In this paper we consider sparsity on a tensor level, as given by the n-rank of a tensor. In an important sparse-vector approximation problem (compressed sensing) and the low-rank matrix recovery problem, using a convex relaxation technique proved to be a valuable solution strategy. Here, we will adapt these techniques to the tensor setting. We use the n-rank of a tensor as a sparsity measure and consider the low-n-rank tensor recovery problem, i.e. the problem of finding the tensor of the lowest n-rank that fulfills some linear constraints. We introduce a tractable convex relaxation of the n-rank and propose efficient algorithms to solve the low-n-rank tensor recovery problem numerically. The algorithms are based on the Douglas–Rachford splitting technique and its dual variant, the alternating direction method of multipliers.",
"",
"Networks have recently emerged as a powerful tool to describe and quantify many complex systems, with applications in engineering, communications, ecology, biochemistry and genetics. A general technique to divide network vertices in groups and sub-groups is reported. Revealing such underlying hierarchies in turn allows the predicting of missing links from partial data with higher accuracy than previous methods.",
"As the Netflix Prize competition has demonstrated, matrix factorization models are superior to classic nearest neighbor techniques for producing product recommendations, allowing the incorporation of additional information such as implicit feedback, temporal effects, and confidence levels.",
"The data in many disciplines such as social networks, Web analysis, etc. is link-based, and the link structure can be exploited for many different data mining tasks. In this article, we consider the problem of temporal link prediction: Given link data for times 1 through T, can we predict the links at time T + 1? If our data has underlying periodic structure, can we predict out even further in time, i.e., links at time T + 2, T + 3, etc.? In this article, we consider bipartite graphs that evolve over time and consider matrix- and tensor-based methods for predicting future links. We present a weight-based method for collapsing multiyear data into a single matrix. We show how the well-known Katz method for link prediction can be extended to bipartite graphs and, moreover, approximated in a scalable way using a truncated singular value decomposition. Using a CANDECOMP/PARAFAC tensor decomposition of the data, we illustrate the usefulness of exploiting the natural three-dimensional structure of temporal link data. Through several numerical experiments, we demonstrate that both matrix- and tensor-based techniques are effective for temporal link prediction despite the inherent difficulty of the problem. Additionally, we show that tensor-based techniques are particularly effective for temporal data with varying periodic patterns."
]
} |
1409.8359 | 1908010992 | Multi-cell cooperative processing with limited backhaul traffic is studied for cellular uplinks. Aiming at reduced backhaul overhead, a sparsity-regularized multi-cell receive-filter design problem is formulated. Both unstructured distributed cooperation as well as clustered cooperation, in which base station groups are formed for tight cooperation, are considered. Dynamic clustered cooperation, where the sparse equalizer and the cooperation clusters are jointly determined, is solved via alternating minimization based on spectral clustering and group-sparse regression. Furthermore, decentralized implementations of both unstructured and clustered cooperation schemes are developed for scalability, robustness and computational efficiency. Extensive numerical tests verify the efficacy of the proposed methods. | MCP in the downlink with limited backhaul rates has been investigated in the literature. Cooperative downlink transmission with per-BS power and Quality-of-Service (QoS) constraints was studied in @cite_14 . A clustered cooperation scheme with linear processing in the downlink was proposed in @cite_13 . In the context of multi-cell MIMO heterogeneous networks, joint BS clustering and beamformer design for downlink transmission were considered in @cite_31 @cite_9 . An energy-constrained beamformer design and BS-MS association for uplink and downlink were considered in @cite_19 . Distributed precoder design and BS selection in a game theoretic framework were proposed in @cite_6 . A semidefinite relaxation-based approach for backhaul-limited cooperation in the downlink was proposed in @cite_15 . Particle swarm optimization was used for zero-forcing-type beamformer design in @cite_28 . Our work focuses on the uplink and does not require coordination with MSs, with all MCP burden placed on the BSs. | {
"cite_N": [
"@cite_14",
"@cite_15",
"@cite_28",
"@cite_9",
"@cite_6",
"@cite_19",
"@cite_31",
"@cite_13"
],
"mid": [
"2085425663",
"2082133464",
"2005896700",
"",
"2134058948",
"2050726943",
"2079587612",
""
],
"abstract": [
"When the joint processing technique is applied in the coordinated multipoint (CoMP) downlink transmission, the user data for each mobile station needs to be shared among multiple base stations (BSs) via backhaul. If the number of users is large, this data exchange can lead to a huge backhaul signaling overhead. In this paper, we consider a multi-cell CoMP network with multi-antenna BSs and single antenna users. The problem that involves the joint design of transmit beamformers and user data allocation at BSs to minimize the backhaul user data transfer is addressed, which is subject to given quality-of-service and per-BS power constraints. We show that this problem can be cast into an l0-norm minimization problem, which is NP-hard. Inspired by recent results in compressive sensing, we propose two algorithms to tackle it. The first algorithm is based on reweighted l1-norm minimization, which solves a series of convex l1-norm minimization problems. In the second algorithm, we first solve the l2-norm relaxation of the joint clustering and beamforming problem and then iteratively remove the links that correspond to the smallest transmit power. The second algorithm enjoys a faster solution speed and can also be implemented in a semi-distributed manner under certain assumptions. Simulations show that both algorithms can significantly reduce the user data transfer in the backhaul.",
"Multicell cooperation has recently attracted tremendous attention because of its ability to eliminate intercell interference and increase spectral efficiency. However, the enormous amount of information being exchanged, including channel state information and user data, over backhaul links may deteriorate the network performance in a realistic system. This paper adopts a backhaul cost metric that considers the number of active directional cooperation links, which gives a first order measurement of the backhaul loading required in asymmetric Multiple-Input Multiple-Output (MIMO) cooperation. We focus on a downlink scenario for multi-antenna base stations and single-antenna mobile stations. The design problem is minimizing the number of active directional cooperation links and jointly optimizing the beamforming vectors among the cooperative BSs subject to signal-to-interference-and-noise-ratio (SINR) constraints at the mobile station. This problem is non-convex and solving it requires combinatorial search. A practical algorithm based on smooth approximation and semidefinite relaxation is proposed to solve the combinatorial problem efficiently. We show that semidefinite relaxation is tight with probability 1 in our algorithm and stationary convergence is guaranteed. Simulation results show the saving of backhaul cost and power consumption is notable compared with several baseline schemes and its effectiveness is demonstrated.",
"Joint processing between base stations is a promising technique to improve the quality of service to users at the cell edge, but this technique poses tremendous requirements on the backhaul signaling capabilities, such as the distribution of channel state information and the precoding weights to the base stations involved in joint processing. Partial joint processing is a technique aimed to reduce feedback load, in one approach the users feed back the channel state information of the best links based on a channel gain threshold mechanism. However, it has been shown in the literature that the reduction in the feedback load is not reflected in an equivalent backhaul reduction, unless additional scheduling or precoding techniques are applied. The reason is that reduced feedback from users yields sparse channel state information at the Central Coordination Node. Under these conditions, existing linear precoding techniques fail to remove the interference and reduce backhaul, simultaneously, unless constraints are imposed on scheduling. In this paper, a partial joint processing scheme with efficient backhauling is proposed, based on a stochastic optimization algorithm called particle swarm optimization. The use of particle swarm optimization in the design of the precoder promises efficient backhauling with improved sum rate.",
"",
"In a heterogeneous wireless cellular network, each user may be covered by multiple access points such as macro/pico/relay/femto base stations (BS). An effective approach to maximize the sum utility (e.g., system throughput) in such a network is to jointly optimize users' linear precoders as well as their BS associations. In this paper, we first show that this joint optimization problem is NP-hard and thus is difficult to solve to global optimality. To find a locally optimal solution, we formulate the problem as a noncooperative game in which the users and the BSs both act as players. We introduce a set of new utility functions for the players and show that every Nash equilibrium (NE) of the resulting game is a stationary solution of the original sum utility maximization problem. Moreover, we develop a best-response type algorithm that allows the players to distributedly reach a NE of the game. Simulation results show that the proposed distributed algorithm can effectively relieve local BS congestion and simultaneously achieve high throughput and load balancing in a heterogeneous network.",
"The cloud radio access network (C-RAN) concept, in which densely deployed access points (APs) are empowered by cloud computing to cooperatively support mobile users (MUs), to improve mobile data rates, has been recently proposed. However, the high density of active APs results in severe interference and also inefficient energy consumption. Moreover, the growing popularity of highly interactive applications with stringent uplink (UL) requirements, e.g., network gaming and real-time broadcasting by wireless users, means that the UL transmission is becoming more crucial and requires special attention. Therefore in this paper, we propose a joint downlink (DL) and UL MU-AP association and beamforming design to coordinate interference in the C-RAN for energy minimization, a problem which is shown to be NP hard. Due to the new consideration of UL transmission, it is shown that the two state-of-the-art approaches for finding computationally efficient solutions of joint MU-AP association and beamforming considering only the DL, i.e., group-sparse optimization and relaxed-integer programming, cannot be modified in a straightforward way to solve our problem. Leveraging on the celebrated UL-DL duality result, we show that by establishing a virtual DL transmission for the original UL transmission, the joint DL and UL optimization problem can be converted to an equivalent DL problem in C-RAN with two inter-related subproblems for the original and virtual DL transmissions, respectively. Based on this transformation, two efficient algorithms for joint DL and UL MU-AP association and beamforming design are proposed, whose performances are evaluated and compared with other benchmarking schemes through extensive simulations.",
"We consider the interference management problem in a multicell MIMO heterogeneous network. Within each cell there is a large number of distributed micro/pico base stations (BSs) that can be potentially coordinated for joint transmission. To reduce coordination overhead, we consider user-centric BS clustering so that each user is served by only a small number of (potentially overlapping) BSs. Thus, given the channel state information, our objective is to jointly design the BS clustering and the linear beamformers for all BSs in the network. In this paper, we formulate this problem from a sparse optimization perspective, and propose an efficient algorithm that is based on iteratively solving a sequence of group LASSO problems. A novel feature of the proposed algorithm is that it performs BS clustering and beamformer design jointly rather than separately as is done in the existing approaches for partial coordinated transmission. Moreover, the cluster size can be controlled by adjusting a single penalty parameter in the nonsmooth regularized utility function. The convergence of the proposed algorithm (to a stationary solution) is guaranteed, and its effectiveness is demonstrated via extensive simulation.",
""
]
} |
1409.8359 | 1908010992 | Multi-cell cooperative processing with limited backhaul traffic is studied for cellular uplinks. Aiming at reduced backhaul overhead, a sparsity-regularized multi-cell receive-filter design problem is formulated. Both unstructured distributed cooperation as well as clustered cooperation, in which base station groups are formed for tight cooperation, are considered. Dynamic clustered cooperation, where the sparse equalizer and the cooperation clusters are jointly determined, is solved via alternating minimization based on spectral clustering and group-sparse regression. Furthermore, decentralized implementations of both unstructured and clustered cooperation schemes are developed for scalability, robustness and computational efficiency. Extensive numerical tests verify the efficacy of the proposed methods. | Direct exchanges of the signal samples between the BSs were considered in the context of 3GPP LTE systems in @cite_42 , where distributed cooperation without central control was advocated. Overlapping clusters of BSs were elected based on proximity for uplink MCP in @cite_5 . A greedy algorithm for dynamic clustering was proposed to maximize the uplink sum-rate in @cite_8 . Successive interference cancellation was adopted under limited backhaul traffic in @cite_26 , and cooperative group decoding was considered in a similar setting in @cite_0 . Here, our intention is to concentrate on simple linear processing but address the backhaul traffic volume issue in both distributed and clustered cooperation settings in a consistent framework, and also derive distributed algorithms for scalable implementation. | {
"cite_N": [
"@cite_26",
"@cite_8",
"@cite_42",
"@cite_0",
"@cite_5"
],
"mid": [
"2137815964",
"2154813399",
"2151583319",
"1976601907",
""
],
"abstract": [
"This paper studies an uplink multicell joint processing model in which the base-stations are connected to a centralized processing server via rate-limited digital backhaul links. We propose a simple scheme that performs Wyner-Ziv compress-and-forward relaying on a per-base-station basis followed by successive interference cancellation (SIC) at the central processor. The proposed scheme has a significantly reduced complexity as compared to joint decoding, resulting in an easily computable achievable rate region. Although suboptimal in general, this paper shows that the proposed per-base-station SIC scheme can achieve the sum capacity of a class of Wyner cellular model to within a constant gap. This paper also establishes that in order to achieve to within a constant gap to the maximum SIC rate with infinite backhaul, the limited-backhaul system must have backhaul capacities that scale logarithmically with the signal-to-interference-and-noise ratios (SINRs) at the base-stations. Further, this paper studies the optimal backhaul rate allocation problem for the per-base-station SIC model with a total backhaul capacity constraint, and shows that the sum-rate maximizing allocation should also have individual backhaul rates that scale logarithmically with the SINR at each base-station. Finally, the proposed per-base-station SIC scheme is evaluated in a practical multicell network to quantify the performance gain brought by multicell processing.",
"Multi-cell cooperative processing (MCP) has recently attracted a lot of attention because of its potential for co-channel interference (CCI) mitigation and spectral efficiency increase. MCP inevitably requires increased signaling overhead and inter-base communication. Therefore in practice, only a limited number of base stations (BSs) can cooperate in order for the overhead to be affordable. The intrinsic problem of which BSs shall cooperate in a realistic scenario has been only partially investigated. In this contribution linear beamforming has been considered for the sum-rate maximisation of the uplink. A novel dynamic greedy algorithm for the formation of the clusters of cooperating BSs is presented for a cellular network incorporating MCP. This approach is chosen to be evaluated under a fair MS scheduling scenario (round-robin). The objective of the clustering algorithm is sum-rate maximisation of the already selected MSs. The proposed cooperation scheme is compared with some fixed cooperation clustering schemes. It is shown that a dynamic clustering approach with a cluster consisting of 2 cells outperforms static coordination schemes with much larger cluster sizes.",
"Cellular systems in general suffer from co-channel interference, when simultaneous transmissions in other cells use the same physical resources. In order to mitigate such co-channel interference cooperating Base Stations (BSs) can perform joint multi-antenna signal processing across cell borders. This paper describes a concept of distributed cooperation, where BSs communicate directly via a BS-BS interface without central control. A serving BS can serve its terminals on its own or it can request cooperation from one or more supporting BSs. By collecting IQ samples from the supporting BSs' antenna elements, the serving BS can virtually increase its number of receive antennas. Exchanging additional parameters allows applying advanced receiver algorithms, e.g., interference rejection or cancelation. Performance evaluations by means of simulation show the capability of BS cooperation applied to 3GPP LTE in terms of cell and user throughput but it also shows the trade-off in terms of increased backhaul requirement due to BS-BS communication.",
"We consider transmit rate allocation and the associated cooperative decoding strategies in uplink multi-cell networks where base stations (BSs) are connected by capacity-limited backhaul (BH) links. In particular, we propose two cooperative group decoding (CGD) schemes for the BSs with partial decoding results shared via the BH. In parallel CGD, at each stage, each BS locally selects and jointly decodes a group of users by treating the remaining users as noise; then it forwards some partial decoding results to other BSs via the BH. After subtracting the decoded users and the received decoding results from other BSs, each BS repeats the same procedure until all its designated users are decoded. On the other hand, in sequential CGD, each user can be decoded at any BS. At each stage only one BS is selected to decode a group of users, which then forwards the decoding results to other BSs for interference cancellation. The process is repeated until all users are decoded. Numerical results are provided to demonstrate that the proposed CGD schemes offer significant gain in terms of the achievable rates.",
""
]
} |
1409.8359 | 1908010992 | Multi-cell cooperative processing with limited backhaul traffic is studied for cellular uplinks. Aiming at reduced backhaul overhead, a sparsity-regularized multi-cell receive-filter design problem is formulated. Both unstructured distributed cooperation as well as clustered cooperation, in which base station groups are formed for tight cooperation, are considered. Dynamic clustered cooperation, where the sparse equalizer and the cooperation clusters are jointly determined, is solved via alternating minimization based on spectral clustering and group-sparse regression. Furthermore, decentralized implementations of both unstructured and clustered cooperation schemes are developed for scalability, robustness and computational efficiency. Extensive numerical tests verify the efficacy of the proposed methods. | Compared to our conference precursor @cite_25 , decentralized implementation of the proposed distributed and clustered cooperation schemes is developed in the present work. Decentralized computation of spatial equalizers and clusters makes MCP scalable to large networks, robust to isolated points of failure, and more resource-efficient than centralized implementation, since pieces of the overall problem are solved concurrently at different BSs, coordinated by peer message exchanges. Decentralized implementation of the component algorithms, such as group-sparse regression, eigenvector computation for network Laplacians, and k-means clustering, is also discussed. In addition, extensive simulations are performed to verify the performance, including the use of multiple antennas, user fairness, and dynamic clustered cooperation scenarios. | {
"cite_N": [
"@cite_25"
],
"mid": [
"2052658038"
],
"abstract": [
"Multi-cell cooperative processing with limited backhaul traffic is considered for cellular uplinks. To parsimoniously select a set of cooperating base stations, a sparse multi-cell receive-filter is obtained through convex optimization using compressive sensing techniques. Clustered cooperation is also considered, where sparsity is promoted on inter-cluster feedback. A joint equalizer design and dynamic partitioning problem is formulated and solved using an iterative spectral clustering approach. Numerical tests verify the efficacy of proposed methods."
]
} |
1409.8174 | 2093283549 | Keywords: BitTorrent Sync Peer-to-Peer Synchronisation Privacy Digital forensics abstract With professional and home Internet users becoming increasingly concerned with data protection and privacy, the privacy afforded by popular cloud file synchronisation services, such as Dropbox, OneDrive and Google Drive, is coming under scrutiny in the press. A number of these services have recently been reported as sharing information with governmental security agencies without warrants. BitTorrent Sync is seen as an alternative by many and has gathered over two million users by December 2013 (doubling since the previous month). The service is completely decentralised, offers much of the same synchronisation functionality of cloud powered services and utilises encryption for data transmission (and optionally for remote storage). The importance of understanding BitTorrent Sync and its resulting digital investigative implications for law enforcement and forensic investigators will be paramount to future investigations. This paper outlines the client application, its detected network traffic and identifies artefacts that may be of value as evidence for future digital investigations. © 2014 The Authors. Published by Elsevier Ltd on behalf of DFRWS. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/3.0/). | Forensic investigation of these utilities can be challenging, as presented in a 2012 paper @cite_11 . Unless local synchronisation is completely up to date, the full picture of the data may reside across temporary files, volatile storage (such as the system's RAM) and across multiple data-centres of the service provider's cloud storage facilities. Any digital forensic examination of these systems must pay particular attention to the method of access, e.g., the Internet browser connecting to the service provider's access page. 
This temporary access serves to highlight the importance of live forensic techniques when investigating a suspect machine. Cutting power to the suspect machine may not only lose access to any currently opened documents, but would also lose any currently stored passwords or other authentication tokens that are stored in RAM. Prior work describes three main forms of online storage in use by consumers: | {
"cite_N": [
"@cite_11"
],
"mid": [
"1991458033"
],
"abstract": [
"Abstract The demand for cloud computing is increasing because of the popularity of digital devices and the wide use of the Internet. Among cloud computing services, most consumers use cloud storage services that provide mass storage. This is because these services give them various additional functions as well as storage. It is easy to access cloud storage services using smartphones. With increasing utilization, it is possible for malicious users to abuse cloud storage services. Therefore, a study on digital forensic investigation of cloud storage services is necessary. This paper proposes new procedure for investigating and analyzing the artifacts of all accessible devices, such as Windows system, Mac system, iPhone, and Android smartphone."
]
} |
1409.8174 | 2093283549 | Keywords: BitTorrent Sync Peer-to-Peer Synchronisation Privacy Digital forensics abstract With professional and home Internet users becoming increasingly concerned with data protection and privacy, the privacy afforded by popular cloud file synchronisation services, such as Dropbox, OneDrive and Google Drive, is coming under scrutiny in the press. A number of these services have recently been reported as sharing information with governmental security agencies without warrants. BitTorrent Sync is seen as an alternative by many and has gathered over two million users by December 2013 (doubling since the previous month). The service is completely decentralised, offers much of the same synchronisation functionality of cloud powered services and utilises encryption for data transmission (and optionally for remote storage). The importance of understanding BitTorrent Sync and its resulting digital investigative implications for law enforcement and forensic investigators will be paramount to future investigations. This paper outlines the client application, its detected network traffic and identifies artefacts that may be of value as evidence for future digital investigations. © 2014 The Authors. Published by Elsevier Ltd on behalf of DFRWS. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/3.0/). | Many cloud storage utilities provide a method of synchronisation of files which involves some form of periodic checking to determine if changes have been made to any version being viewed locally, or to compare offline copies with their online counterparts as soon as communication can be re-established (network connectivity re-enabled or the application or service restarted). For Dropbox, @cite_0 identified two sets of servers: the control servers owned and operated by Dropbox themselves, and the storage management and cloud storage servers hosted by Amazon's EC2 and S3 services. 
This identification is also verified by @cite_5 . | {
"cite_N": [
"@cite_0",
"@cite_5"
],
"mid": [
"2119631914",
"2000980701"
],
"abstract": [
"Personal cloud storage services are gaining popularity. With a rush of providers to enter the market and an increasing offer of cheap storage space, it is to be expected that cloud storage will soon generate a high amount of Internet traffic. Very little is known about the architecture and the performance of such systems, and the workload they have to face. This understanding is essential for designing efficient cloud storage systems and predicting their impact on the network. This paper presents a characterization of Dropbox, the leading solution in personal cloud storage in our datasets. By means of passive measurements, we analyze data from four vantage points in Europe, collected during 42 consecutive days. Our contributions are threefold: Firstly, we are the first to study Dropbox, which we show to be the most widely-used cloud storage system, already accounting for a volume equivalent to around one third of the YouTube traffic at campus networks on some days. Secondly, we characterize the workload users in different environments generate to the system, highlighting how this reflects on network traffic. Lastly, our results show possible performance bottlenecks caused by both the current system architecture and the storage protocol. This is exacerbated for users connected far from storage data-centers. All measurements used in our analyses are publicly available in anonymized form at the SimpleWeb trace repository: http://traces.simpleweb.org/dropbox",
"Powered by cloud computing, Dropbox not only provides reliable file storage but also enables effective file synchronization and user collaboration. This new generation of service, beyond conventional client server or peer-to-peer file hosting with storage only, has attracted a vast number of Internet users. It is however known that the synchronization delay of Dropbox-like systems is increasing with their expansion, often beyond the accepted level for practical collaboration. In this paper, we present an initial measurement to understand the design and performance bottleneck of the proprietary Dropbox system. Our measurement identifies the cloud servers instances utilized by Dropbox, revealing its hybrid design with both Amazon's S3 (for storage) and Amazon's EC2 (for computation). The mix of bandwidth-intensive tasks (such as content delivery) and computation-intensive tasks (such as compare hash values for the contents) in Dropbox enables seamless collaboration and file synchronization among multiple users; yet their interference, revealed in our experiments, creates a severe bottleneck that prolongs the synchronization delay with virtual machines in the cloud, which has not seen in conventional physical machines. We thus re-model the resource provisioning problem in the Dropbox-like systems and present an interference-aware solution that smartly allocates the Dropbox tasks to different cloud instances. Evaluation results show that our solution remarkably reduces the synchronization delay for this new generation of file hosting service."
]
} |
1409.8252 | 2953131073 | Cellular networks are among the major energy hoggers of communication networks, and their contributions to the global energy consumption increase rapidly due to the surges of data traffic. With the development of green energy technologies, base stations (BSs) can be powered by green energy in order to reduce the on-grid energy consumption, and subsequently reduce the carbon footprints. However, equipping a BS with a green energy system incurs additional capital expenditure (CAPEX) that is determined by the size of the green energy generator, the battery capacity, and other installation expenses. In this paper, we introduce and investigate the green energy provisioning (GEP) problem which aims to minimize the CAPEX of deploying green energy systems in BSs while satisfying the QoS requirements of cellular networks. The GEP problem is challenging because it involves the optimization over multiple time slots and across multiple BSs. We decompose the GEP problem into the weighted energy minimization problem and the green energy system sizing problem, and propose a green energy provisioning solution consisting of the provision cost aware traffic load balancing algorithm and the binary energy system sizing algorithm to solve the sub-problems and subsequently solve the GEP problem. We validate the performance and the viability of the proposed green energy provisioning solution through extensive simulations, which also conform to our analytical results. | To optimize the utilization of renewable energy, Ozel @cite_4 proposed to optimize the packet transmission policy for energy-harvesting wireless nodes. Zhou @cite_18 proposed the handover parameter tuning algorithm and the power control algorithm to guide mobile users to access green energy powered BSs. 
Han and Ansari @cite_11 proposed an energy-aware cell size adaptation algorithm named ICE, which balances the energy consumption among BSs powered by green energy and enables more users to be served with green energy. Considering a network with multiple energy supplies, Han and Ansari @cite_7 also proposed to optimize the utilization of green energy and reduce the on-grid energy consumption of cellular networks by cell size optimization. Assuming the capacity of the green energy system is given, all these solutions optimize wireless cellular networks according to the availability of green energy. However, for the GEP problem, the capacity of the green energy system is to be determined. Therefore, the existing solutions on optimizing the green energy utilization cannot be directly applied to solve the GEP problem. | {
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_7",
"@cite_11"
],
"mid": [
"2100998721",
"2145834571",
"",
"1984765569"
],
"abstract": [
"The spread of mobile connectivity is generating major social and economic benefits around the world, while along with the rapid growth of new telecommunication technologies like mobile broadband communication and M2M (Machine-to-Machine) networks, larger number of various base stations will be employed into the network, which will greatly increase the power expense and CO2 emission. In order to degrade the system power expense, variety of researches on new energy and novel transmission technology are put in agenda. In this paper, instead of reducing the absolute power expense, the research focuses on guiding more power consumption into green source energy, which implying that the UEs (User Equipment), especially the cell edge UEs, will have preferential access to the BSs (Base Station) with natural energy supply. To realize the tendentious connection, two detailed approaches are proposed, the HO (Hand Over) parameter tuning for target cell selection and power control for coverage optimization. The system evaluation shows that, by proper setting of parameters in HO and power control, both of the two approaches can achieve good balance between energy saving effect and system throughput impact.",
"Wireless systems comprised of rechargeable nodes have a significantly prolonged lifetime and are sustainable. A distinct characteristic of these systems is the fact that the nodes can harvest energy throughout the duration in which communication takes place. As such, transmission policies of the nodes need to adapt to these harvested energy arrivals. In this paper, we consider optimization of point-to-point data transmission with an energy harvesting transmitter which has a limited battery capacity, communicating in a wireless fading channel. We consider two objectives: maximizing the throughput by a deadline, and minimizing the transmission completion time of the communication session. We optimize these objectives by controlling the time sequence of transmit powers subject to energy storage capacity and causality constraints. We, first, study optimal offline policies. We introduce a directional water-filling algorithm which provides a simple and concise interpretation of the necessary optimality conditions. We show the optimality of an adaptive directional water-filling algorithm for the throughput maximization problem. We solve the transmission completion time minimization problem by utilizing its equivalence to its throughput maximization counterpart. Next, we consider online policies. We use stochastic dynamic programming to solve for the optimal online policy that maximizes the average number of bits delivered by a deadline under stochastic fading and energy arrival processes with causal channel state feedback. We also propose near-optimal policies with reduced complexity, and numerically study their performances along with the performances of the offline and online optimal policies under various different configurations.",
"",
"This letter proposes Intelligent Cell brEathing (ICE) to optimize the utilization of green energy in cellular networks by minimizing the maximal energy depleting rate of the low-power base stations powered by green energy. Minimizing the maximal depleting rate is an NP-hard problem. ICE is thus proposed to achieve low computational complexity. ICE, in each iteration, finds the energy dependent set and the vector of beacon power level decrements for low-power base stations in the set, and then shrinks the coverage area of these base stations by reducing their beacon power levels. The algorithm iterates until the optimal solution is found. ICE balances the energy consumptions among LBSs, enables more users to be served with green energy, and therefore reduces the on-grid energy consumption."
]
} |
1409.8133 | 2950262091 | Given an @math -vertex graph @math and two positive integers @math , the ( @math )-differential coloring problem asks for a coloring of the vertices of @math (if one exists) with distinct numbers from 1 to @math (treated as ), such that the minimum difference between the two colors of any adjacent vertices is at least @math . While it was known that the problem of determining whether a general graph is ( @math )-differential colorable is NP-complete, our main contribution is a complete characterization of bipartite, planar and outerplanar graphs that admit ( @math )-differential colorings. For practical reasons, we also consider color ranges larger than @math , i.e., @math . We show that it is NP-complete to determine whether a graph admits a ( @math )-differential coloring. The same negative result holds for the ( @math )-differential coloring problem, even in the case where the input graph is planar. | The maximum differential coloring problem is a well-studied problem, which dates back to 1984, when @cite_5 introduced it under the name "separation number" and showed its NP-completeness. It is worth mentioning though that the maximum differential coloring problem is also known as "dual bandwidth" @cite_4 and "anti-bandwidth" @cite_13 , since it is the complement of the bandwidth minimization problem @cite_14 . Due to the hardness of the problem, heuristics are often used for coloring general graphs, e.g., LP-formulations @cite_9 , memetic algorithms @cite_0 and spectral-based methods @cite_6 . The differential chromatic number is known only for special graph classes, such as Hamming graphs @cite_12 , meshes @cite_3 , hypercubes @cite_3 @cite_10 , complete binary trees @cite_7 , complete @math -ary trees for odd values of @math @cite_13 , other special types of trees @cite_7 , and complements of interval graphs, threshold graphs and arborescent comparability graphs @cite_2 . 
Upper bounds on the differential chromatic number are given by @cite_5 for connected graphs and by Miller and Pritikin @cite_20 for bipartite graphs. For a more detailed bibliographic overview, refer to @cite_1 . | {
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_7",
"@cite_10",
"@cite_9",
"@cite_1",
"@cite_6",
"@cite_3",
"@cite_0",
"@cite_2",
"@cite_5",
"@cite_20",
"@cite_13",
"@cite_12"
],
"mid": [
"85690038",
"",
"",
"1971217502",
"2064388010",
"2018399909",
"1520709705",
"2037252568",
"2020148023",
"2123342283",
"2092518753",
"2154213027",
"2174253040",
"2179389847"
],
"abstract": [
"The problem of minimizing the bandwidth of the nonzero entries of a sparse symmetric matrix by permuting its rows and columns, and some related combinatorial problems, are shown to be NP-complete.",
"",
"",
"The Hales numbered n-dimensional hypercube exhibits interesting recursive structures in n. These structures lead to a very simple proof of the well-known bandwidth formula for hypercubes proposed by Harper, whose proof was thought to be surprisingly difficult. Harper also proposed an optimal numbering for a related problem called the antibandwidth of hypercubes. A recent publication approximated the hypercube antibandwidth up to the third-order term. In this paper, we find the exact value in light of the above recursive structures.",
"This article proposes a linear integer programming formulation and several heuristics based on GRASP and path relinking for the antibandwidth problem. In the antibandwidth problem, one is given an undirected graph with n nodes and must label the nodes in a way that each node receives a unique label from the set 1, 2,…,n , such that, among all adjacent node pairs, the minimum difference between the node labels is maximized. Computational results show that only small instances of this problem can be solved exactly (to optimality) with a commercial integer programming solver and that the heuristics find high-quality solutions in much less time than the commercial solver. © 2010 Wiley Periodicals, Inc. NETWORKS, Vol. 58(3), 171–189 2011 © 2011 Wiley Periodicals, Inc.",
"Abstract: We study the maximum differential coloring problem, where the vertices of an n-vertex graph must be labeled with distinct numbers ranging from 1 to n, so that the minimum absolute difference between two labels of any two adjacent vertices is maximized. As the problem is NP-hard for general graphs [16], we consider planar graphs and subclasses thereof. We prove that the maximum differential coloring problem remains NP-hard, even for planar graphs. We also present tight bounds for regular caterpillars and spider graphs. Using these new bounds, we prove that the Miller–Pritikin labeling scheme [19] for forests is optimal for regular caterpillars and for spider graphs.",
"We study the maximum differential graph coloring problem, in which the goal is to find a vertex labeling for a given undirected graph that maximizes the label difference along the edges. This problem has its origin in map coloring, where not all countries are necessarily contiguous. We define the differential chromatic number and establish the equivalence of the maximum differential coloring problem to that of k-Hamiltonian path. As computing the maximum differential coloring is NP-Complete, we describe an exact backtracking algorithm and a spectral-based heuristic. We also discuss lower bounds and upper bounds for the differential chromatic number for several classes of graphs.",
"The antibandwidth problem consists of placing the vertices of a graph on a line in consecutive integer points in such a way that the minimum difference of adjacent vertices is maximised. The problem was originally introduced in [J.Y.-T. Leung, O. Vornberger, J.D. Witthoff, On some variants of the bandwidth minimisation problem, SIAM Journal of Computing 13 (1984) 650-667] in connection with the multiprocessor scheduling problems and can also be understood as a dual problem to the well-known bandwidth problem, as a special radiocolouring problem or as a variant of obnoxious facility location problems. The antibandwidth problem is NP-hard, there are a few classes of graphs with polynomial time complexities. Exact results for nontrivial graphs are very rare. Miller and Pritikin [Z. Miller, D. Pritikin, On the separation number of a graph, Networks 19 (1989) 651-666] showed tight bounds for the two-dimensional meshes and hypercubes. We solve the antibandwidth problem precisely for two-dimensional meshes, tori and estimate the antibandwidth value for hypercubes up to the third-order term. The cyclic antibandwidth problem is to embed an n-vertex graph into the cycle C_n, such that the minimum distance (measured in the cycle) of adjacent vertices is maximised. This is a natural extension of the antibandwidth problem or a dual problem to the cyclic bandwidth problem. We start investigating this invariant for typical graphs and prove basic facts and exact results for the same product graphs as for the antibandwidth.",
"The antibandwidth maximization problem (AMP) consists of labeling the vertices of a n-vertex graph G with distinct integers from 1 to n such that the minimum difference of labels of adjacent vertices is maximized. This problem can be formulated as a dual problem to the well known bandwidth problem. Exact results have been proved for some standard graphs like paths, cycles, 2 and 3-dimensional meshes, tori, some special trees etc., however, no algorithm has been proposed for the general graphs. In this paper, we propose a memetic algorithm for the antibandwidth maximization problem, wherein we explore various breadth first search generated level structures of a graph--an imperative feature of our algorithm. We design a new heuristic which exploits these level structures to label the vertices of the graph. The algorithm is able to achieve the exact antibandwidth for the standard graphs as mentioned. Moreover, we conjecture the antibandwidth of some 3-dimensional meshes and complement of power graphs, supported by our experimental results.",
"We give a simple proof that the obvious necessary conditions for a graph to contain the kth power of a Hamiltonian path are sufficient for the class of interval graphs. The proof is based on showing that a greedy algorithm tests for the existence of Hamiltonian path powers in interval graphs. We will also discuss covers by powers of paths and analogues of the Hamiltonian completion number. © 1998 John Wiley & Sons, Inc. J Graph Theory 28: 31–38, 1998",
"We consider the following variants of the bandwidth minimization problem: (1) the cycle-bandwidth problem which for a given graph G and positive integer k, asks if there is a circular layout such that every pair of adjacent vertexes have a distance at most k, (2) the separation problem which asks if there is a linear layout such that every pair of adjacent vertexes have a distance greater than k, and (3) the cycle-separation problem which asks if there is a circular layout such that every pair of adjacent vertexes have a distance greater than k.We show that the cycle-bandwidth problem is NP-complete for each fixed @math the separation and cycle-separation problems are both NP-complete for each fixed @math , and the directed separation problem is NP-complete for arbitrary k. We give polynomial time algorithms for several special cases of the directed separation problem. Finally, we show the relationships of the directed separation problem with several scheduling problems by giving reduct...",
"We consider the following graph labeling problem, introduced by (J. Y-T. Leung, O. Vornberger, and J. D. Witthoff, On some variants of the bandwidth minimization problem. SIAM J. Comput. 13 (1984) 650–667). Let G be a graph of order n, and f a bijection from V(G) to the integers 1 through n. Let |f| = min{|f(u) − f(v)| : uv ∈ E(G)}, and define s(G), the separation number of G, to be the maximum of |f| among all such bijections f. We first derive some basic relations between s(G) and other graph parameters. Using a general strategy for analyzing separation number in bipartite graphs, we obtain exact values for certain classes of forests and asymptotically optimal lower bounds for grids and hypercubes.",
"The antibandwidth problem is to label vertices of a n-vertex graph injectively by 1,2,3,...n, so that the minimum difference between labels of adjacent vertices is maximised. The problem is motivated by the obnoxious facility location problem, radiocolouring, work and game scheduling and is dual to the well known bandwidth problem. We prove exact results for the antibandwidth of complete k-ary trees, k even, and estimate the parameter for odd k up to the second order term. This extends previous results for complete binary trees.",
"The antibandwidth problem is to label vertices of graph G(V,E) bijectively by integers 0,1,…,|V|−1 in such a way that the minimal difference of labels of adjacent vertices is maximised. In this paper we study the antibandwidth of Hamming graphs. We provide labeling algorithms and tight upper bounds for general Hamming graphs ∏_{k=1}^{d} K_{n_k}. We have exact values for special choices of n_k's and equality between antibandwidth and cyclic antibandwidth values."
]
} |
1409.7489 | 2952202563 | To bring their innovative ideas to market, those embarking in new ventures have to raise money, and, to do so, they have often resorted to banks and venture capitalists. Nowadays, they have an additional option: that of crowdfunding. The name refers to the idea that funds come from a network of people on the Internet who are passionate about supporting others' projects. One of the most popular crowdfunding sites is Kickstarter. In it, creators post descriptions of their projects and advertise them on social media sites (mainly Twitter), while investors look for projects to support. The most common reason for project failure is the inability of founders to connect with a sufficient number of investors, and that is mainly because hitherto there has not been any automatic way of matching creators and investors. We thus set out to propose different ways of recommending investors found on Twitter for specific Kickstarter projects. We do so by conducting hypothesis-driven analyses of pledging behavior and translate the corresponding findings into different recommendation strategies. The best strategy achieves, on average, 84% of accuracy in predicting a list of potential investors' Twitter accounts for any given project. Our findings also produced key insights about the whys and wherefores of investors deciding to support innovative efforts. | Crowdfunding has recently attracted the attention of researchers in various disciplines, from business and economics to computer science. Economists have investigated pledging behavior and they, for example, found that crowdfunding eliminates distance-related economic frictions, yet initial findings tend often to come from family, friends and acquaintances @cite_17 . | {
"cite_N": [
"@cite_17"
],
"mid": [
"2133263811"
],
"abstract": [
"Perhaps the most striking feature of \"crowdfunding\" is the broad geographic dispersion of investors in small, early-stage projects. This contrasts with existing theories that predict entrepreneurs and investors will be co-located due to distance-sensitive costs. We examine a crowdfunding setting that connects artist-entrepreneurs with investors over the internet for financing musical projects. The average distance between artists and investors is about 3,000 miles, suggesting a reduced role for spatial proximity. Still, distance does play a role. Within a single round of financing, local investors invest relatively early, and they appear less responsive to decisions by other investors. We show this geography effect is driven by investors who likely have a personal connection with the artist-entrepreneur (\"family and friends\"). Although the online platform seems to eliminate most distance-related economic frictions such as monitoring progress, providing input, and gathering information, it does not eliminate social-related frictions."
]
} |
1409.7595 | 2949976131 | We study procurement games where each seller supplies multiple units of his item, with a cost per unit known only to him. The buyer can purchase any number of units from each seller, values different combinations of the items differently, and has a budget for his total payment. For a special class of procurement games, the bounded knapsack problem, we show that no universally truthful budget-feasible mechanism can approximate the optimal value of the buyer within @math , where @math is the total number of units of all items available. We then construct a polynomial-time mechanism that gives a @math -approximation for procurement games with concave additive valuations , which include bounded knapsack as a special case. Our mechanism is thus optimal up to a constant factor. Moreover, for the bounded knapsack problem, given the well-known FPTAS, our results imply there is a provable gap between the optimization domain and the mechanism design domain. Finally, for procurement games with sub-additive valuations , we construct a universally truthful budget-feasible mechanism that gives an @math -approximation in polynomial time with a demand oracle. | In @cite_2 the author considered settings where each seller has multiple items. Although it was discussed why such settings are harder than single-item settings, no explicit upper bound on the approximation ratio was given. Instead, the focus there was a different benchmark. The author provided a constant approximation of his benchmark for sub-modular valuations, but the mechanism does not run in polynomial time. Also, budget-feasible mechanisms where each seller has one unit of an infinitely divisible item have been considered in @cite_35 , under the large-market assumption: that is, the cost of buying each item completely is much smaller than the budget. The authors constructed a deterministic mechanism which is a @math approximation for additive valuations and which they also prove to be optimal. 
In our study we do not impose any assumption about the sellers' costs, and the cost of buying all units of an item may or may not exceed the budget. Moreover, in @cite_21 the authors studied online procurements and provided a randomized posted-price mechanism that is an @math -approximation for sub-modular valuations under the random ordering assumption. | {
"cite_N": [
"@cite_35",
"@cite_21",
"@cite_2"
],
"mid": [
"1987352480",
"",
"1580387990"
],
"abstract": [
"In this paper we consider a mechanism design problem in the context of large-scale crowdsourcing markets such as Amazon's Mechanical Turk, ClickWorker, and CrowdFlower. In these markets, there is a requester who wants to hire workers to accomplish some tasks. Each worker is assumed to give some utility to the requester on getting hired. Moreover each worker has a minimum cost that he wants to get paid for getting hired. This minimum cost is assumed to be private information of the workers. The question then is -- if the requester has a limited budget, how to design a direct revelation mechanism that picks the right set of workers to hire in order to maximize the requester's utility? We note that although the previous work (Singer (2010) (2011)) has studied this problem, a crucial difference in which we deviate from earlier work is the notion of large-scale markets that we introduce in our model. Without the large market assumption, it is known that no mechanism can achieve a competitive ratio better than 0.414 and 0.5 for deterministic and randomized mechanisms respectively (while the best known deterministic and randomized mechanisms achieve an approximation ratio of 0.292 and 0.33 respectively). In this paper, we design a budget-feasible mechanism for large markets that achieves a competitive ratio of 1 - 1/e = 0.63. Our mechanism can be seen as a generalization of an alternate way to look at the proportional share mechanism, which is used in all the previous works so far on this problem. Interestingly, we can also show that our mechanism is optimal by showing that no truthful mechanism can achieve a factor better than 1 - 1/e, thus, fully resolving this setting. Finally we consider the more general case of submodular utility functions and give new and improved mechanisms for the case when the market is large.",
"",
"This paper discusses two advancements in the theory of designing truthful randomized mechanisms. Our first contribution is a new framework for developing truthful randomized mechanisms. The framework enables the construction of mechanisms with polynomially small failure probability. This is in contrast to previous mechanisms that fail with constant probability. Another appealing feature of the new framework is that bidding truthfully is a strongly dominant strategy. The power of the framework is demonstrated by an @math -mechanism for combinatorial auctions that succeeds with probability @math . The other major result of this paper is an O(log m log log m) randomized truthful mechanism for combinatorial auction with subadditive bidders. The best previously-known truthful mechanism for this setting guaranteed an approximation ratio of @math . En route, the new mechanism also provides the best approximation ratio for combinatorial auctions with submodular bidders currently achieved by truthful mechanisms."
]
} |
1409.7552 | 2259002239 | Gathering the most information by picking the least amount of data is a common task in experimental design or when exploring an unknown environment in reinforcement learning and robotics. A widely used measure for quantifying the information contained in some distribution of interest is its entropy. Greedily minimizing the expected entropy is therefore a standard method for choosing samples in order to gain strong beliefs about the underlying random variables. We show that this approach is prone to temporally getting stuck in local optima corresponding to wrongly biased beliefs. We suggest instead maximizing the expected cross entropy between old and new belief, which aims at challenging refutable beliefs and thereby avoids these local optima. We show that both criteria are closely related and that their dierence can be traced back to the asymmetry of the Kullback-Leibler divergence. In illustrative examples as well as simulated and real-world experiments we demonstrate the advantage of cross entropy over simple entropy for practical applications. | Similar to our robot experiment is the work of @cite_6 . They state that the KL divergence is the information gain about a distribution, but turn it around without further explanation and analysis. In this way, they implemented our MaxCE criterion, as we will show. Their results support our finding that MaxCE is an improvement above traditional Bayesian experimental design. | {
"cite_N": [
"@cite_6"
],
"mid": [
"1570540164"
],
"abstract": [
"We introduce a particle filter-based approach to representing and actively reducing uncertainty over articulated motion models. The presented method provides a probabilistic model that integrates visual observations with feedback from manipulation actions to best characterize a distribution of possible articulation models. We evaluate several action selection methods to efficiently reduce the uncertainty about the articulation model. The full system is experimentally evaluated using a PR2 mobile manipulator. Our experiments demonstrate that the proposed system allows for intelligent reasoning about sparse, noisy data in a number of common manipulation scenarios."
]
} |
1409.7758 | 201412263 | In this paper we analyze and extend the neural network based associative memory proposed by Gripon and Berrou. This associative memory resembles the celebrated Will- shaw model with an added partite cluster structure. In the literature, two retrieving schemes have been proposed for the network dynamics, namely SUM-OF-SUM and SUM-OF-MAX. They both offer considerably better performance than Willshaw and Hopfield networks, when comparable retrieval scenarios are considered. Former discussions and experiments concentrate on the erasure scenario, where a partial message is used as a probe to the network, in the hope of retrieving the full message. In this regard, SUM-OF-MAX outperforms SUM-OF-SUM in terms of retrieval rate by a large margin. However, we observe that when noise and errors are present and the network is queried by a corrupt probe, SUM-OF-MAX faces a severe limitation as its stringent activation rule prevents a neuron from reviving back into play once deactivated. In this manuscript, we categorize and analyze different error scenarios so that both the erasure and the corrupt scenarios can be treated consistently. We make an amendment to the network structure to improve the retrieval rate, at the cost of an extra scalar per neuron. Afterwards, five different approaches are proposed to deal with corrupt probes. As a result, we extend the network capability, and also increase the robustness of the retrieving procedure. We then experimentally compare all these proposals and discuss pros and cons of each approach under different types of errors. Simulation results show that if carefully designed, the network is able to preserve both a high retrieval rate and a low running time simultaneously, even when queried by a corrupt probe. | Gripon and Berrou propose the network structure in @cite_4 . 
In @cite_8 , they show that using the same amount of storage, CSAMs outperform Hopfield networks in diversity (the number of patterns a network can store for a targeted performance), capacity (the maximum amount of stored information in bits for a targeted performance) and efficiency (the ratio between capacity and the amount of information in bits consumed by the network when capacity reaches its maximum) simultaneously. They later interpret CSAMs using the formalism of error correcting codes @cite_21 and propose a new decoding scheme called , which significantly decreases retrieval error. Jiang et al. @cite_20 modify CSAMs to store long sequences by incorporating directed links. Aboudib et al. @cite_11 extend the structure so that messages of different lengths can be stored in the same network. They also summarize criteria to build possible retrieving schemes and study the number of iterations required by each scheme. Yao et al. @cite_22 discover a previously overlooked problem that the network may converge to a bogus fixed point and propose heuristics to mitigate the issue. A novel post-processing algorithm is also developed, customized to the partite structure of CSAMs, which brings notably better retrieval rates than the standard scheme. | {
"cite_N": [
"@cite_4",
"@cite_22",
"@cite_8",
"@cite_21",
"@cite_20",
"@cite_11"
],
"mid": [
"2106169150",
"1666036116",
"2121160181",
"2086523057",
"2611060325",
"2964027214"
],
"abstract": [
"Associative memories are devices that are able to learn messages and to recall them in presence of errors or erasures. Their mechanics is similar to that of error correcting decoders. However, the role of correlation is opposed in the two devices, used as the essence of the retrieval process in the first one and avoided in the latter. In this paper, original codes are introduced to allow the effective combination of the two domains. The main idea is to associate a clique in a binary neural network with each message to learn. The obtained performance is dramatically better than that given by the state of the art, for instance Hopfield Neural Networks. Moreover, the model proposed is biologically plausible; it uses sparse binary connections between clusters of neurons provided with only two operations: sum and selection of maximum.",
"The Gripon-Berrou neural network (GBNN) is a recently invented recurrent neural network embracing a LDPC-like sparse encoding setup which makes it extremely resilient to noise and errors. A natural use of GBNN is as an associative memory. There are two activation rules for the neuron dynamics, namely sum-of-sum and sum-of-max. The latter outperforms the former in terms of retrieval rate by a huge margin. In prior discussions and experiments, it is believed that although sum-of-sum may lead the network to oscillate, sum-of-max always converges to an ensemble of neuron cliques corresponding to previously stored patterns. However, this is not entirely correct. In fact, sum-of-max often converges to bogus fixed points where the ensemble only comprises a small subset of the converged state. By taking advantage of this overlooked fact, we can greatly improve the retrieval rate. We discuss this particular issue and propose a number of heuristics to push sum-of-max beyond these bogus fixed points. To tackle the problem directly and completely, a novel post-processing algorithm is also developed and customized to the structure of GBNN. Experimental results show that the new algorithm achieves a huge performance boost in terms of both retrieval rate and run-time, compared to the standard sum-of-max and all the other heuristics.",
"Coded recurrent neural networks with three levels of sparsity are introduced. The first level is related to the size of messages that are much smaller than the number of available neurons. The second one is provided by a particular coding rule, acting as a local constraint in the neural activity. The third one is a characteristic of the low final connection density of the network after the learning phase. Though the proposed network is very simple since it is based on binary neurons and binary connections, it is able to learn a large number of messages and recall them, even in presence of strong erasures. The performance of the network is assessed as a classifier and as an associative memory.",
"A new family of sparse neural networks achieving nearly optimal performance has been recently introduced. In these networks, messages are stored as cliques in clustered graphs. In this paper, we interpret these networks using the formalism of error correcting codes. To achieve this, we introduce two original codes, the thrifty code and the clique code, that are both sub-families of binary constant weight codes. We also provide the networks with an enhanced retrieving rule that enables a property of answer correctness and that improves performance.",
"An original architecture of oriented sparse neural networks that enables the introduction of sequentiality in associative memories is proposed in this paper. This architecture can be regarded as a generalization of a recently proposed non oriented binary network based on cliques. Using a limited neuron resource, the network is able to learn very long sequences and to retrieve them only from the knowledge of some consecutive symbols.",
"Associative memories are data structures addressed using part of the content rather than an index. They offer good fault reliability and biological plausibility. Among different families of associative memories, sparse ones are known to offer the best efficiency (ratio of the amount of bits stored to that of bits used by the network itself). Their retrieval process performance has been shown to benefit from the use of iterations. We introduce several families of algorithms to enhance the performance of the retrieval process inrecently proposed sparse associative memories based on binary neural networks. We show that these algorithms provide better performance than existing techniques and discuss their biological plausibility. We also analyze the required number of iterations and derive corresponding curves."
]
} |
1409.7758 | 201412263 | In this paper we analyze and extend the neural network based associative memory proposed by Gripon and Berrou. This associative memory resembles the celebrated Will- shaw model with an added partite cluster structure. In the literature, two retrieving schemes have been proposed for the network dynamics, namely SUM-OF-SUM and SUM-OF-MAX. They both offer considerably better performance than Willshaw and Hopfield networks, when comparable retrieval scenarios are considered. Former discussions and experiments concentrate on the erasure scenario, where a partial message is used as a probe to the network, in the hope of retrieving the full message. In this regard, SUM-OF-MAX outperforms SUM-OF-SUM in terms of retrieval rate by a large margin. However, we observe that when noise and errors are present and the network is queried by a corrupt probe, SUM-OF-MAX faces a severe limitation as its stringent activation rule prevents a neuron from reviving back into play once deactivated. In this manuscript, we categorize and analyze different error scenarios so that both the erasure and the corrupt scenarios can be treated consistently. We make an amendment to the network structure to improve the retrieval rate, at the cost of an extra scalar per neuron. Afterwards, five different approaches are proposed to deal with corrupt probes. As a result, we extend the network capability, and also increase the robustness of the retrieving procedure. We then experimentally compare all these proposals and discuss pros and cons of each approach under different types of errors. Simulation results show that if carefully designed, the network is able to preserve both a high retrieval rate and a low running time simultaneously, even when queried by a corrupt probe. | Aside from the architectural and algorithmic aspects of the network mentioned above, efficient implementations and applications are being developed as well. 
Jarollahi et al. @cite_14 use the field-programmable gate array (FPGA) to implement on a small-sized network. Later in @cite_3 , they implement which runs @math faster, thanks to bitwise operations replacing the resource-demanding summation and comparison units required by . The same group of authors also develop a content-addressable memory in @cite_0 saving @math 1165 @math 2 @math 900 $ is witnessed without any loss of accuracy. | {
"cite_N": [
"@cite_0",
"@cite_14",
"@cite_3"
],
"mid": [
"2055611966",
"1977602574",
"2037808679"
],
"abstract": [
"A low-power Content-Addressable Memory (CAM) is introduced employing a new mechanism for associativity between the input tags and the corresponding address of the output data. The proposed architecture is based on a recently developed clustered-sparse network using binary-weighted connections that on-average will eliminate most of the parallel comparisons performed during a search. Therefore, the dynamic energy consumption of the proposed design is significantly lower compared to that of a conventional low-power CAM design. Given an input tag, the proposed architecture computes a few possibilities for the location of the matched tag and performs the comparisons on them to locate a single valid match. A 0.13μm CMOS technology was used for simulation purposes. The energy consumption and the search delay of the proposed design are 9.5% and 30.4% of that of the conventional NAND architecture respectively with a 3.4% higher number of transistors.",
"Associative memories are alternatives to indexed memories that when implemented in hardware can benefit many applications such as data mining. The classical neural network based methodology is impractical to implement since in order to increase the size of the memory, the number of information bits stored per memory bit (efficiency) approaches zero. In addition, the length of a message to be stored and retrieved needs to be the same size as the number of nodes in the network causing the total number of messages the network is capable of storing (diversity) to be limited. Recently, a novel algorithm based on sparse clustered neural networks has been proposed that achieves nearly optimal efficiency and large diversity. In this paper, a proof-of-concept hardware implementation of these networks is presented. The limitations and possible future research areas are discussed.",
"Associative memories retrieve stored information given partial or erroneous input patterns. Recently, a new family of associative memories based on Clustered-Neural-Networks (CNNs) was introduced that can store many more messages than classical Hopfield-Neural Networks (HNNs). In this paper, we propose hardware architectures of such memories for partial or erroneous inputs. The proposed architectures eliminate winner-take-all modules and thus reduce the hardware complexity by consuming 65% fewer FPGA lookup tables and increase the operating frequency by approximately 1.9 times compared to that of previous work."
]
} |
1409.7580 | 2407376844 | Finding the physical location of a specific network node is a prototypical task for navigation inside a wireless network. In this paper, we consider in depth the implications of wireless communication as a measurement input of gradient-based taxis algorithms. We discuss how gradients can be measured and determine the errors of this estimation. We then introduce a gradient-based taxis algorithm as an example of a family of gradient-based, convergent algorithms and discuss its convergence in the context of network robotics. We also conduct an exemplary experiment to show how to overcome some of the specific problems related to network robotics. Finally, we show how to adapt this framework to more complex objectives. | The network community has developed some algorithms similar to the gradient-based taxis algorithm discussed here. There exist for example algorithms to calculate relative bearing from gradients @cite_1 . These algorithms employ, in contrast to the finite differences used here, principal component analysis to estimate gradients. Furthermore, gradients can be used to localize network nodes by fitting a local model to the measured signal strength data @cite_17 . This and similar algorithms need precise position information acquired for example via GPS measurements or using laser range finders @cite_18 . The most straightforward linear model was explored successfully in @cite_13 @cite_12 . | {
"cite_N": [
"@cite_18",
"@cite_1",
"@cite_13",
"@cite_12",
"@cite_17"
],
"mid": [
"2023292064",
"2124780186",
"2056861524",
"",
"2119835893"
],
"abstract": [
"In this paper we explore methods for the online mapping of received radio signal strength with mobile robots and localizing the source of the radio signal. By utilizing Gaussian processes, we are able to build an online model of the signal-strength map that can, in turn, be used to provide the current maximum likelihood estimate of the source location. Furthermore, using the estimate of the source location, the Gaussian process model allows for prediction of received signal strength with confidence bounds in regions of the environment that have not been explored. Finally, we develop a control law for collecting samples of the signal strength with mobile robots that allows for online estimation of the radio signal source.",
"Relative bearing between robots is important in applications like pursuit-evasion [11] and SLAM [7]. This is also true in sensor networks, where the bearing of one sensor node relative to another has been used for localization [5], [18], [20] and topology control [14], [21], [6]. Most systems use dedicated sensors like an IR array or a camera to obtain relative bearing. We study the use of radio signal strength (RSS) in commodity radios for obtaining relative bearing. We show that by using the robot's mobility, commodity radios can be used to obtain coarse relative bearing. This measurement can be used for a suite of applications that do not require very precise bearing measurement. We analyze signal strength variations in simulation and experiment and also show an algorithm that uses this coarse bearing computation in a practical setting.",
"Radio signal strength (RSS) is a reasonable proxy for link quality, but its accurate estimation requires frequency and spatial diversity due to fluctuation caused by fading. We consider a Rayleigh Rician fading model, and gather RSS measurements during motion in a complex environment to enable gradient estimation. Using the RSS gradient, we develop control laws to track active sources. These may be used to establish and preserve connectivity among collaborative autonomous agents, to locate and approach radio sources, as well as deploying agents to assist mobile ad hoc networks (MANETs).",
"",
"Many previous studies have examined the placement of access points (APs) to improve the community's understanding of the deployment and behavioral characteristics of wireless networks. A key implicit assumption in these studies is that one can estimate the AP location accurately from wardriving-like measurements. However, existing localization algorithms exhibit high error because they over-simplify the complex nature of signal propagation. In this work, we propose a novel approach that localizes APs using directional information derived from local signal strength variations. Our algorithm only uses signal strength information, and improves localization accuracy over existing techniques. Furthermore, the algorithm is robust to the sampling biases and non-uniform shadowing, which are common in wardriving measurements."
]
} |
1409.7580 | 2407376844 | Finding the physical location of a specific network node is a prototypical task for navigation inside a wireless network. In this paper, we consider in depth the implications of wireless communication as a measurement input of gradient-based taxis algorithms. We discuss how gradients can be measured and determine the errors of this estimation. We then introduce a gradient-based taxis algorithm as an example of a family of gradient-based, convergent algorithms and discuss its convergence in the context of network robotics. We also conduct an exemplary experiment to show how to overcome some of the specific problems related to network robotics. Finally, we show how to adapt this framework to more complex objectives. | A taxis algorithm from the same family as the algorithm introduced here was presented in @cite_2 for general abstract taxis. This algorithm is based on RDSA known from the stochastic approximation literature @cite_4 @cite_3 . RDSA uses a very efficient but rather unintuitive formulation of the gradient estimation. The authors discuss the noise characteristics of the physical model only very briefly and fail to mention motor noise at all. As discussed later, small scale fading is especially problematic and can violate some of the convergence conditions if not dealt with correctly (see sec:SmallScaleFading ). Because of that we discuss the physical characteristics of wireless communication and the robot in detail in the next sections. | {
"cite_N": [
"@cite_4",
"@cite_3",
"@cite_2"
],
"mid": [
"1499021337",
"",
"1968890289"
],
"abstract": [
"Introduction 1 Review of Continuous Time Models 1.1 Martingales and Martingale Inequalities 1.2 Stochastic Integration 1.3 Stochastic Differential Equations: Diffusions 1.4 Reflected Diffusions 1.5 Processes with Jumps 2 Controlled Markov Chains 2.1 Recursive Equations for the Cost 2.2 Optimal Stopping Problems 2.3 Discounted Cost 2.4 Control to a Target Set and Contraction Mappings 2.5 Finite Time Control Problems 3 Dynamic Programming Equations 3.1 Functionals of Uncontrolled Processes 3.2 The Optimal Stopping Problem 3.3 Control Until a Target Set Is Reached 3.4 A Discounted Problem with a Target Set and Reflection 3.5 Average Cost Per Unit Time 4 Markov Chain Approximation Method: Introduction 4.1 Markov Chain Approximation 4.2 Continuous Time Interpolation 4.3 A Markov Chain Interpolation 4.4 A Random Walk Approximation 4.5 A Deterministic Discounted Problem 4.6 Deterministic Relaxed Controls 5 Construction of the Approximating Markov Chains 5.1 One Dimensional Examples 5.2 Numerical Simplifications 5.3 The General Finite Difference Method 5.4 A Direct Construction 5.5 Variable Grids 5.6 Jump Diffusion Processes 5.7 Reflecting Boundaries 5.8 Dynamic Programming Equations 5.9 Controlled and State Dependent Variance 6 Computational Methods for Controlled Markov Chains 6.1 The Problem Formulation 6.2 Classical Iterative Methods 6.3 Error Bounds 6.4 Accelerated Jacobi and Gauss-Seidel Methods 6.5 Domain Decomposition 6.6 Coarse Grid-Fine Grid Solutions 6.7 A Multigrid Method 6.8 Linear Programming 7 The Ergodic Cost Problem: Formulation and Algorithms 7.1 Formulation of the Control Problem 7.2 A Jacobi Type Iteration 7.3 Approximation in Policy Space 7.4 Numerical Methods 7.5 The Control Problem 7.6 The Interpolated Process 7.7 Computations 7.8 Boundary Costs and Controls 8 Heavy Traffic and Singular Control 8.1 Motivating Examples &nb",
"",
"The objective of source seeking problems is to determine the minimum of an unknown signal field, which represents a physical quantity of interest, such as heat, chemical concentration, or sound. This paper proposes a strategy for source seeking in a noisy signal field using a mobile robot and based on a stochastic gradient descent algorithm. Our scheme does not require a prior map of the environment or a model of the signal field and is simple enough to be implemented on platforms with limited computational power. We discuss the asymptotic convergence guarantees of algorithm and give specific guidelines for its application to mobile robots in unknown indoor environments with obstacles. Both simulations and real-world experiments were carried out to evaluate the performance of our approach. The results suggest that the algorithm has good finite time performance in complex environments."
]
} |
1409.7186 | 1577498620 | We consider the university course timetabling problem, which is one of the most studied problems in educational timetabling. In particular, we focus our attention on the formulation known as the curriculum-based course timetabling problem, which has been tackled by many researchers and for which there are many available benchmarks. The contribution of this paper is twofold. First, we propose an effective and robust single-stage simulated annealing method for solving the problem. Secondly, we design and apply an extensive and statistically-principled methodology for the parameter tuning procedure. The outcome of this analysis is a methodology for modeling the relationship between search method parameters and instance features that allows us to set the parameters for unseen instances on the basis of a simple inspection of the instance itself. Using this methodology, our algorithm, despite its apparent simplicity, has been able to achieve high quality results on a set of popular benchmarks. A final contribution of the paper is a novel set of real-world instances, which could be used as a benchmark for future comparison. | In this section we review the literature on . The presentation is organized as follows: we firstly describe the solution approaches based on metaheuristic techniques; secondly, we report the contributions on exact approaches and on methods for obtaining lower bounds; finally, we discuss papers that investigate additional aspects related to the problem, such as instance generation and multi-objective formulations. A recent survey covering all these topics is provided by @cite_5 . | {
"cite_N": [
"@cite_5"
],
"mid": [
"2079600472"
],
"abstract": [
"In 2007, the Second International Timetabling Competition (ITC-2007) has been organized and a formal definition of the Curriculum-Based Course Timetabling (CB-CTT) problem has been given, by taking into account several real-world constraints and objectives while keeping the problem general. CB-CTT consists of finding the best weekly assignment of university course lectures to rooms and time periods. A feasible schedule must satisfy a set of hard constraints and must also take into account a set of soft constraints, whose violation produces penalty terms to be minimized in the objective function. From ITC-2007, many researchers have developed advanced models and methods to solve CB-CTT. This survey is devoted to review the main works on the topic, with focus on mathematical models, lower bounds, and exact and heuristic algorithms. Besides giving an overview of these approaches, we highlight interesting extensions that could make the study of CB-CTT even more challenging and closer to reality."
]
} |
1409.7186 | 1577498620 | We consider the university course timetabling problem, which is one of the most studied problems in educational timetabling. In particular, we focus our attention on the formulation known as the curriculum-based course timetabling problem, which has been tackled by many researchers and for which there are many available benchmarks. The contribution of this paper is twofold. First, we propose an effective and robust single-stage simulated annealing method for solving the problem. Secondly, we design and apply an extensive and statistically-principled methodology for the parameter tuning procedure. The outcome of this analysis is a methodology for modeling the relationship between search method parameters and instance features that allows us to set the parameters for unseen instances on the basis of a simple inspection of the instance itself. Using this methodology, our algorithm, despite its apparent simplicity, has been able to achieve high quality results on a set of popular benchmarks. A final contribution of the paper is a novel set of real-world instances, which could be used as a benchmark for future comparison. | @cite_3 solves the problem by applying a constraint-based solver that incorporates several local search algorithms operating in three stages: (i) a construction phase that uses an algorithm to find a feasible solution, (ii) a first search phase delegated to a algorithm, followed by (iii) a or strategy to escape from local minima. The algorithm was not specifically designed for but it was intended to be employed on all three tracks of ITC-2007 (including, besides and , also Examination Timetabling). The solver was the winner of two out of three competition tracks, and it was among the finalists in the third one. | {
"cite_N": [
"@cite_3"
],
"mid": [
"2028480548"
],
"abstract": [
"This paper provides a brief description of a constraint-based solver that was successfully applied by the author to the problem instances in all three tracks of the International Timetabling Competition 2007 (For more details see the official competition website at http: www.cs.qub.ac.uk itc2007.). The solver presented in this paper was among the finalists in all three tracks and the winner of two."
]
} |
1409.7186 | 1577498620 | We consider the university course timetabling problem, which is one of the most studied problems in educational timetabling. In particular, we focus our attention on the formulation known as the curriculum-based course timetabling problem, which has been tackled by many researchers and for which there are many available benchmarks. The contribution of this paper is twofold. First, we propose an effective and robust single-stage simulated annealing method for solving the problem. Secondly, we design and apply an extensive and statistically-principled methodology for the parameter tuning procedure. The outcome of this analysis is a methodology for modeling the relationship between search method parameters and instance features that allows us to set the parameters for unseen instances on the basis of a simple inspection of the instance itself. Using this methodology, our algorithm, despite its apparent simplicity, has been able to achieve high quality results on a set of popular benchmarks. A final contribution of the paper is a novel set of real-world instances, which could be used as a benchmark for future comparison. | The proposed by @cite_1 follows a three stage scheme: in the initialization phase a feasible timetable is built using a fast heuristic; then the intensification and diversification phases are alternated through an adaptive tabu search in order to reduce the violations of soft constraints. | {
"cite_N": [
"@cite_1"
],
"mid": [
"2073634948"
],
"abstract": [
"This paper presents an Adaptive Tabu Search algorithm (denoted by ATS) for solving a problem of curriculum-based course timetabling. The proposed algorithm follows a general framework composed of three phases: initialization, intensification and diversification. The initialization phase constructs a feasible initial timetable using a fast greedy heuristic. Then an adaptively combined intensification and diversification phase is used to reduce the number of soft constraint violations while maintaining the satisfaction of hard constraints. The proposed ATS algorithm integrates several distinguished features such as an original double Kempe chains neighborhood structure, a penalty-guided perturbation operator and an adaptive search mechanism. Computational results show the high effectiveness of the proposed ATS algorithm, compared with five reference algorithms as well as the current best known results. This paper also shows an analysis to explain which are the essential ingredients of the ATS algorithm."
]
} |
1409.7186 | 1577498620 | We consider the university course timetabling problem, which is one of the most studied problems in educational timetabling. In particular, we focus our attention on the formulation known as the curriculum-based course timetabling problem, which has been tackled by many researchers and for which there are many available benchmarks. The contribution of this paper is twofold. First, we propose an effective and robust single-stage simulated annealing method for solving the problem. Secondly, we design and apply an extensive and statistically-principled methodology for the parameter tuning procedure. The outcome of this analysis is a methodology for modeling the relationship between search method parameters and instance features that allows us to set the parameters for unseen instances on the basis of a simple inspection of the instance itself. Using this methodology, our algorithm, despite its apparent simplicity, has been able to achieve high quality results on a set of popular benchmarks. A final contribution of the paper is a novel set of real-world instances, which could be used as a benchmark for future comparison. | A novel hybrid metaheuristic technique, obtained by combining and the algorithm, was employed by @cite_2 who obtained high-quality results on both and testbeds. | {
"cite_N": [
"@cite_2"
],
"mid": [
"2004797988"
],
"abstract": [
"This paper describes the development of a novel metaheuristic that combines an electromagnetic-like mechanism (EM) and the great deluge algorithm (GD) for the University course timetabling problem. This well-known timetabling problem assigns lectures to specific numbers of timeslots and rooms maximizing the overall quality of the timetable while taking various constraints into account. EM is a population-based stochastic global optimization algorithm that is based on the theory of physics, simulating attraction and repulsion of sample points in moving toward optimality. GD is a local search procedure that allows worse solutions to be accepted based on some given upper boundary or level'. In this paper, the dynamic force calculated from the attraction-repulsion mechanism is used as a decreasing rate to update the level' within the search process. The proposed method has been applied to a range of benchmark university course timetabling test problems from the literature. Moreover, the viability of the method has been tested by comparing its results with other reported results from the literature, demonstrating that the method is able to produce improved solutions to those currently published. We believe this is due to the combination of both approaches and the ability of the resultant algorithm to converge all solutions at every search process."
]
} |
1409.7186 | 1577498620 | We consider the university course timetabling problem, which is one of the most studied problems in educational timetabling. In particular, we focus our attention on the formulation known as the curriculum-based course timetabling problem, which has been tackled by many researchers and for which there are many available benchmarks. The contribution of this paper is twofold. First, we propose an effective and robust single-stage simulated annealing method for solving the problem. Secondly, we design and apply an extensive and statistically-principled methodology for the parameter tuning procedure. The outcome of this analysis is a methodology for modeling the relationship between search method parameters and instance features that allows us to set the parameters for unseen instances on the basis of a simple inspection of the instance itself. Using this methodology, our algorithm, despite its apparent simplicity, has been able to achieve high quality results on a set of popular benchmarks. A final contribution of the paper is a novel set of real-world instances, which could be used as a benchmark for future comparison. | Finally, @cite_4 investigated the search performance of different neighborhood relations typically used by local search algorithms to solve this problem. The neighborhoods are compared using different evaluation criteria, and new combinations of neighborhoods are explored and analyzed. | {
"cite_N": [
"@cite_4"
],
"mid": [
"1982402598"
],
"abstract": [
"In this paper, we present an in-depth analysis of neighborhood relations for local search algorithms. Using a curriculum-based course timetabling problem as a case study, we investigate the search capability of four neighborhoods based on three evaluation criteria: percentage of improving neighbors, improvement strength and search steps. This analysis shows clear correlations of the search performance of a neighborhood with these criteria and provides useful insights on the very nature of the neighborhood. This study helps understand why a neighborhood performs better than another one and why and how some neighborhoods can be favorably combined to increase their search power. This study reduces the existing gap between reporting experimental assessments of local search-based algorithms and understanding their behaviors."
]
} |
1409.7254 | 2949098399 | The hypothesis of selective exposure assumes that people seek out information that supports their views and eschew information that conflicts with their beliefs, and that has negative consequences on our society. Few researchers have recently found counter evidence of selective exposure in social media: users are exposed to politically diverse articles. No work has looked at what happens after exposure, particularly how individuals react to such exposure, though. Users might well be exposed to diverse articles but share only the partisan ones. To test this, we study partisan sharing on Facebook: the tendency for users to predominantly share like-minded news articles and avoid conflicting ones. We verified four main hypotheses. That is, whether partisan sharing: 1) exists at all; 2) changes across individuals (e.g., depending on their interest in politics); 3) changes over time (e.g., around elections); and 4) changes depending on perceived importance of topics. We indeed find strong evidence for partisan sharing. To test whether it has any consequence in the real world, we built a web application for BBC viewers of a popular political program, resulting in a controlled experiment involving more than 70 individuals. Based on what they share and on survey data, we find that partisan sharing has negative consequences: distorted perception of reality. However, we do also find positive aspects of partisan sharing: it is associated with people who are more knowledgeable about politics and engage more with it as they are more likely to vote in the general elections. | By analyzing news consumption on a variety of media (which included TV, radio, magazines, newspapers, online), Stroud @cite_16 concluded that people tend to preferentially choose, read, and enjoy partisan news. A large body of literature shows supportive evidence for her findings @cite_30 @cite_10 @cite_28 @cite_37 @cite_14 . 
More recently, some researchers have reported situations in which selective exposure is lower than expected or totally missing. LaCour did not find any evidence for it in the TV and radio consumption of 920 individuals in Chicago and New York @cite_20 ; Shapiro found an extremely low level of it online @cite_1 ; and others even found that Twitter friends expand one's diversity of political news @cite_8 . | {
"cite_N": [
"@cite_30",
"@cite_37",
"@cite_14",
"@cite_8",
"@cite_28",
"@cite_1",
"@cite_16",
"@cite_10",
"@cite_20"
],
"mid": [
"2112687664",
"2121993559",
"2073675482",
"1897880214",
"2166155258",
"",
"",
"1968917437",
"1904709910"
],
"abstract": [
"In a news media environment characterized by abundant choice, it is becoming increasingly easy for Americans to choose news sources slanted toward their own political views rather than sources providing more diverse perspectives. This development poses a challenge to ideals of deliberative democracy if people who consume politically like-minded news disproportionately populate the electoral process, while those presumably reaping the benefits of exposure to more diverse views in the news (e.g., more informed, tolerant attitudes) withdraw from politics. Using panel data collected during the 2008 presidential campaign, this study investigates the proposition that exposure to news slanted toward one's own partisan views increases political participation, while exposure to news with the opposite partisan slant depresses participation. The results suggest that, while exposure to partisan news does not alter the strongly habitual decision to turn out, the hypothesized energizing and enervating effects of exposure do appear for other behavior during the campaign; the partisan hue of the news sources citizens choose to consume affects both when voters decide and their levels of participation over time.",
"There is active debate among political scientists and political theorists over the relationship between participation and deliberation among citizens with different political viewpoints. Internet based blogs provide an important testing ground for these scholars' theories, especially as political activity on the Internet becomes increasingly important. In this article, we use the first major dataset describing blog readership to examine the relationship between deliberation, polarization and political participation among blog readers. We find that, as existing theories might predict, blog readers tend to read blogs that accord with their political beliefs. Cross-cutting readership of blogs on both the left and right of the spectrum is relatively rare. Furthermore, we find strong evidence of polarization among blogreaders, who tend to be more polarized than both non-blog-readers and consumers of various television news, and roughly as polarized as US Senators. Blog readers are also substantially more likely to participate in politics than non-blog readers. However, in contrast to previous research on offline social networks, we do not find that cross-cutting exposure to blogs of different ideological dispositions lowers participations. Instead, we find that cross-cutting blog readers are about as likely as left wing blog readers to participate in politics, and that both are significantly more likely than right wing blog readers to participate. We suggest that this may reflect social movement building efforts by left wing bloggers.",
"We propose a framework for understanding how the Internet has affected the U.S. political news market. The framework is driven by the lower cost of production for online news and consumers' tendency to seek out media that conform to their own beliefs. The framework predicts that consumers of Internet news sources should hold more extreme political views and be interested in more diverse political issues than those who solely consume mainstream television news. We test these predictions using two large datasets with questions about news exposure and political views. Generally speaking, we find that consumers of generally left-of-center (right-of-center) cable news sources who combine their cable news viewing with online sources are more liberal (conservative) than those who do not. We also find that those who use online news content are more likely than those who consume only television news content to be interested in niche political issues.",
"We present a preliminary but groundbreaking study of the media landscape of Twitter. We use public data on whom follows who to uncover common behaviour in media consumption, the relationship between various classes of media, and the diversity of media content which social links may bring. Our analysis shows that there is a non-negligible amount of indirect media exposure, either through friends who follow particular media sources, or via retweeted messages. We show that the indirect media exposure expands the political diversity of news to which users are exposed to a surprising extent, increasing the range by between 60-98 . These results are valuable because they have not been readily available to traditional media, and they can help predict how we will read news, and how publishers will interact with us in the future.",
"We show that the demand for news varies with the perceived affinity of the news organization to the consumer’s political preferences. In an experimental setting, conservatives and Republicans preferred to read news reports attributed to Fox News and to avoid news from CNN and NPR. Democrats and liberals exhibited exactly the opposite syndrome—dividing their attention equally between CNN and NPR, but avoiding Fox News. This pattern of selective exposure based on partisan affinity held not only for news coverage of controversial issues but also for relatively ‘‘soft’’ subjects such as crime and travel. The tendency to select news based on anticipated agreement was also strengthened among more politically engaged partisans. Overall, these results suggest that the further proliferation of new media and enhanced media choices may contribute to the further polarization of the news audience.",
"",
"",
"We hypothesize that in the real world, as opposed to the lab, the norm is for people to experience friendly media that favor their political predispositions when political favoritism is perceived at all. For this reason, media are generally limited in their ability to create cross-cutting exposure. We test this hypothesis using representative survey data drawn from 11 different countries with varying media systems. We further hypothesize that television will contribute more to cross-cutting exposure than newspapers. Finally, and most importantly, we test the hypothesis that the more the structure of a country's media system parallels that of its political parties, the more that country's population will be dominated by exposure to like-minded views via mass media. We find confirmation for all 3 of these hypotheses and discuss their implications for the role of mass media in providing exposure to cross-cutting political perspectives.",
"This study provides the first direct assessment of the extent to which citizens encounter news and opinion challenging their political views via mass media. The widely accepted conjecture that people refuse to hear the other side is based upon self-reports of media exposure, rather than direct observation of it. In light of this long-acknowledged limitation, I leverage unique data tracking partisanship as well as actual exposure to media collected 24 7 via passive tracking devices. Contrary to previous understandings, the vast majority of citizens consume predominately centrist information, while frequently encountering ideological programming challenging their views. In fact, the best predictor of how much conservative news you watch is how much liberal news you watch, regardless of partisanship. The demonstration of widespread exposure to diverse viewpoints challenges claims asserting that resistance to political influence occurs at the exposure stage of the persuasion process."
]
} |
1409.7254 | 2949098399 | The hypothesis of selective exposure assumes that people seek out information that supports their views and eschew information that conflicts with their beliefs, and that has negative consequences on our society. Few researchers have recently found counter evidence of selective exposure in social media: users are exposed to politically diverse articles. No work has looked at what happens after exposure, particularly how individuals react to such exposure, though. Users might well be exposed to diverse articles but share only the partisan ones. To test this, we study partisan sharing on Facebook: the tendency for users to predominantly share like-minded news articles and avoid conflicting ones. We verified four main hypotheses. That is, whether partisan sharing: 1) exists at all; 2) changes across individuals (e.g., depending on their interest in politics); 3) changes over time (e.g., around elections); and 4) changes depending on perceived importance of topics. We indeed find strong evidence for partisan sharing. To test whether it has any consequence in the real world, we built a web application for BBC viewers of a popular political program, resulting in a controlled experiment involving more than 70 individuals. Based on what they share and on survey data, we find that partisan sharing has negative consequences: distorted perception of reality. However, we do also find positive aspects of partisan sharing: it is associated with people who are more knowledgeable about politics and engage more with it as they are more likely to vote in the general elections. | However, despite its breadth, such a work and, for that matter, similar others suffer from the data under study: self-reported (and, as such, error-prone) data of news consumption. Starting from this criticism, LaCour directly measured how 920 individuals from New York and Chicago have been exposed to news for 85 days @cite_20 . 
These measurements were taken by cell phones that recorded participants' audio. He showed that self-reported data grossly overestimates exposure. It turns out that most people do not care much about politics and are thus on a meager news diet -- consequently, it does not really matter whether that diet is balanced or not. The problem is that audio-recording cell phones report what people are exposed to but not necessarily what they are paying attention to. A similar problem applies to the work in @cite_8 . The authors analyzed Twitter streams and found that Twitter friends greatly expand one's diversity of political news. However, it is not possible to quantify the extent to which a Twitter user is actually paying attention to his or her own stream. | {
"cite_N": [
"@cite_20",
"@cite_8"
],
"mid": [
"1904709910",
"1897880214"
],
"abstract": [
"This study provides the first direct assessment of the extent to which citizens encounter news and opinion challenging their political views via mass media. The widely accepted conjecture that people refuse to hear the other side is based upon self-reports of media exposure, rather than direct observation of it. In light of this long-acknowledged limitation, I leverage unique data tracking partisanship as well as actual exposure to media collected 24 7 via passive tracking devices. Contrary to previous understandings, the vast majority of citizens consume predominately centrist information, while frequently encountering ideological programming challenging their views. In fact, the best predictor of how much conservative news you watch is how much liberal news you watch, regardless of partisanship. The demonstration of widespread exposure to diverse viewpoints challenges claims asserting that resistance to political influence occurs at the exposure stage of the persuasion process.",
"We present a preliminary but groundbreaking study of the media landscape of Twitter. We use public data on who follows whom to uncover common behaviour in media consumption, the relationship between various classes of media, and the diversity of media content which social links may bring. Our analysis shows that there is a non-negligible amount of indirect media exposure, either through friends who follow particular media sources, or via retweeted messages. We show that the indirect media exposure expands the political diversity of news to which users are exposed to a surprising extent, increasing the range by between 60-98%. These results are valuable because they have not been readily available to traditional media, and they can help predict how we will read news, and how publishers will interact with us in the future."
]
} |
1409.7254 | 2949098399 | The hypothesis of selective exposure assumes that people seek out information that supports their views and eschew information that conflicts with their beliefs, and that has negative consequences on our society. Few researchers have recently found counter evidence of selective exposure in social media: users are exposed to politically diverse articles. No work has looked at what happens after exposure, particularly how individuals react to such exposure, though. Users might well be exposed to diverse articles but share only the partisan ones. To test this, we study partisan sharing on Facebook: the tendency for users to predominantly share like-minded news articles and avoid conflicting ones. We verified four main hypotheses. That is, whether partisan sharing: 1) exists at all; 2) changes across individuals (e.g., depending on their interest in politics); 3) changes over time (e.g., around elections); and 4) changes depending on perceived importance of topics. We indeed find strong evidence for partisan sharing. To test whether it has any consequence in the real world, we built a web application for BBC viewers of a popular political program, resulting in a controlled experiment involving more than 70 individuals. Based on what they share and on survey data, we find that partisan sharing has negative consequences: distorted perception of reality. However, we do also find positive aspects of partisan sharing: it is associated with people who are more knowledgeable about politics and engage more with it as they are more likely to vote in the general elections. | Computer scientists have proposed different news aggregating systems that encourage politically diverse news consumption. For example, BLEWS @cite_13 and NewsCube @cite_18 gather and visualize news articles on the same subject matter but with different political leanings, making people aware of the existence of media bias. 
Munson and Resnick studied whether presentation techniques, such as highlighting agreeable items or showing them first, can make sets of politically diverse news articles more appealing. It turns out that making hostile news more appealing is quite challenging @cite_35 . | {
"cite_N": [
"@cite_35",
"@cite_18",
"@cite_13"
],
"mid": [
"2119758323",
"2024633545",
"84889933"
],
"abstract": [
"Is a polarized society inevitable, where people choose to be exposed to only political news and commentary that reinforces their existing viewpoints? We examine the relationship between the numbers of supporting and challenging items in a collection of political opinion items and readers' satisfaction, and then evaluate whether simple presentation techniques such as highlighting agreeable items or showing them first can increase satisfaction when fewer agreeable items are present. We find individual differences: some people are diversity-seeking while others are challenge-averse. For challenge-averse readers, highlighting appears to make satisfaction with sets of mostly agreeable items more extreme, but does not increase satisfaction overall, and sorting agreeable content first appears to decrease satisfaction rather than increasing it. These findings have important implications for builders of websites that aggregate content reflecting different positions.",
"The bias in the news media is an inherent flaw of the news production process. The resulting bias often causes a sharp increase in political polarization and in the cost of conflict on social issues such as Iraq war. It is very difficult, if not impossible, for readers to have penetrating views on realities against such bias. This paper presents NewsCube, a novel Internet news service aiming at mitigating the effect of media bias. NewsCube automatically creates and promptly provides readers with multiple classified viewpoints on a news event of interest. As such, it effectively helps readers understand a fact from a plural of viewpoints and formulate their own, more balanced viewpoints. While media bias problem has been studied extensively in communications and social sciences, our work is the first to develop a news service as a solution and study its effect. We discuss the effect of the service through various user studies.",
"An overwhelming number of news articles are available every day via the internet. Unfortunately, it is impossible for us to peruse more than a handful; furthermore it is difficult to ascertain an article’s social context, i.e., is it popular, what sorts of people are reading it, etc. In this paper, we develop a system to address this problem in the restricted domain of political news by harnessing implicit and explicit contextual information from the blogosphere. Specifically, we track thousands of blogs and the news articles they cite, collapsing news articles that have highly overlapping content. We then tag each article with the number of blogs citing it, the political orientation of those blogs, and the level of emotional charge expressed in the blog posts that link to the news article. We summarize and present the results to the user via a novel visualization which displays this contextual information; the user can then find the most popular articles, the articles most cited by liberals, the articles most emotionally discussed in the political blogosphere, etc."
]
} |
1409.7244 | 2952882091 | With the goal of optimizing the CM capacity of a finite constellation over a Rayleigh fading channel, we construct for all dimensions which are a power of 2 families of rotation matrices which optimize a certain objective function controlling the CM capacity. Our construction does not depend on any assumptions about the constellation, dimension, or signal-to-noise ratio. We confirm the benefits of our construction for uniform and non-uniform constellations at a large range of SNR values through numerous simulations. We show that in two and four dimensions one can obtain a further potential increase in CM capacity by jointly considering non-uniform and rotated constellations. | Our work is partially inspired by @cite_10 , in which the authors constructed good rotation matrices for @math -QAM and @math -QAM constellations in @math and @math with the goal of optimizing capacity. Complex multi-dimensional rotations have been used in @cite_3 to increase the performance of BICM-ID systems for Rayleigh fading channels. Furthermore, two-dimensional rotations have been considered in @cite_6 , @cite_0 to improve BICM capacity, and in @cite_9 , @cite_14 in conjunction with LDPC codes. | {
"cite_N": [
"@cite_14",
"@cite_9",
"@cite_6",
"@cite_3",
"@cite_0",
"@cite_10"
],
"mid": [
"2156622156",
"2087061822",
"",
"2152549909",
"1552601993",
"2074254009"
],
"abstract": [
"For multi-level modulation methods, rotation of the signal constellation together with in-phase and quadrature phase channel interleaving (signal space diversity) are known to provide good performance gains over fading channels. This paper studies the extension of such schemes with a low density parity check (LDPC) code. It is shown that for both coded and uncoded Gray-mapped MPSK modulation formats with signal space diversity on a Rayleigh fading channel, a well-considered choice of the rotation angle may lead to a significant gain over the conventional unrotated constellation. However, the optimum rotation angle for the coded scheme may be different from the corresponding optimization angle of the uncoded scheme.",
"A non-binary low density parity check (LDPC) coded modulation solution coupling with signal space diversity (SSD) and a symbol interleaver is proposed to get performance gains over fading channels for high order modulation. Compared to a traditional binary channel code with bit-interleaved coded modulation with iterative demodulation and decoding (BICM-ID), it can efficiently avoid iterative demodulation without performance degradation. We also analyze the principle of choosing the rotation angle and labeling. It is shown that a well-considered choice of the rotation angle and labeling can lead to about a 2 dB gain over the conventional unrotated constellation in a Rayleigh channel. The related modification of the demodulator is also introduced.",
"",
"This paper presents a performance analysis of bit-interleaved coded-modulation with iterative decoding (BICM-ID) and complex N-dimensional signal space diversity in fading channels to investigate its performance limitation, the choice of the rotation matrix and the design of a low-complexity receiver. The tight error bound is first analytically derived. Based on the design criterion obtained from the error bound, the optimality of the rotation matrix is then established. It is shown that using the class of the optimal rotation matrices, the performance of BICM-ID systems over a Rayleigh fading channel approaches that of the BICM-ID systems over an AWGN channel when the dimension of the signal constellation increases. Furthermore, by exploiting the sigma mapping for any M-ary QAM constellation, a very simple sub-optimal, but yet effective iterative receiver structure suitable for signal constellations with large dimensions is proposed. Simulation results in various cases and conditions indicate that the proposed receiver can achieve the analytical performance bounds with low complexity",
"We present a combined method of bit-interleaved coded modulation (BICM) with coordinate interleaving and a rotated constellation, also known as signal space diversity. The diversity order can be increased to twice the diversity order of BICM, which is the minimum Hamming distance of a code. This greatly improves the performance over Rayleigh fading while preserving minimum squared Euclidean distance of BICM. Our simulation results show that the new combined scheme greatly outperforms BICM and bit-interleaved I-Q TCM as well as coordinate-interleaved TCM over Rayleigh fading channels. We first describe BICM and signal space diversity schemes, then follow with the design of BICM with signal space diversity. The complexity reduction technique is also investigated.",
"This paper studies the mutual information improvement attained by rotated multidimensional (multi-D) constellations via a unitary precoder G in Rayleigh fading. At first, based on the symmetric cut-off rate of the N-D signal space, we develop a design criterion with regard to the precoder G. It is then demonstrated that the use of rotated constellations in only a reasonably low dimensional signal space can significantly increase the mutual information in high-rate regimes. Based on parameterizations of unitary matrices, we then construct good unitary precoder G in 4-D signal space using a simple optimization problem, which involves only four real variables and it is applicable to any modulation scheme. To further illustrate the potential of multi-D constellation and to show the practical use of mutual information improvement, we propose a simple yet powerful bit-interleaved coded modulation (BICM) scheme in which a (multi-D) mapping technique employed in a multi-D rotated constellation is concatenated with a short-memory high-rate convolutional code. By using extrinsic information transfer (EXIT) charts, it is shown that the proposed technique provides an exceptionally good error performance. In particular, both EXIT chart analysis and simulation results indicate that a turbo pinch-off and a bit error rate around 10^-6 happen at a signal-to-noise ratio that is well below the coded modulation and BICM capacities using traditional signal sets. For example, with code rates ranging from 2/3 to 7/8, the proposed system can operate 0.82 dB-2.93 dB lower than the BICM capacity with QPSK and Gray labeling. The mutual information gain offered by rotated constellations can be therefore utilized to design simple yet near Shannon limit systems in the high-rate regions."
]
} |
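The signal space diversity that such rotations provide can be illustrated with a minimal sketch: a 2-D rotation gives every constellation point distinct in-phase and quadrature coordinates, so a deep fade on one component does not collapse two points onto each other. This is a toy example, not the construction from the cited works; the angle arctan(1/2) is a common textbook choice for 4-QAM assumed here for illustration.

```python
import numpy as np

def rotate_constellation(points, theta):
    """Rotate a complex (2-D) constellation by angle theta (illustrative)."""
    return points * np.exp(1j * theta)  # 2-D rotation == complex multiplication

# Unit-energy 4-QAM; arctan(1/2) is an assumed textbook angle, not an
# optimized value from the paper.
qam4 = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
rotated = rotate_constellation(qam4, np.arctan(0.5))

# After rotation every point has a distinct in-phase value, so even if the
# quadrature component fades completely, the points remain distinguishable.
distinct_inphase = len(set(np.round(rotated.real, 9)))
```

Before the rotation, the four points share only two in-phase values; after it, all four are distinct while the per-point energy is unchanged.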
1409.6673 | 2011559788 | The operation of the power grid is becoming more stressed, due to the addition of new large loads represented by electric vehicles (EVs) and a more intermittent supply due to the incorporation of renewable sources. As a consequence, the coordination and control of projected EV demand in a network of fast charging stations becomes a critical and challenging problem. In this paper, we introduce a game theoretic based decentralized control mechanism to alleviate negative impacts from the EV demand. The proposed mechanism takes into consideration the nonuniform spatial distribution of EVs that induces uneven power demand at each charging facility, and aims to: 1) avoid straining grid resources by offering price incentives, so that customers accept being routed to less busy stations; 2) maximize total revenue by serving more customers with the same amount of grid resources; and 3) provide charging service to customers with a certain level of quality-of-service (QoS), the latter defined as the long term customer blocking probability. We examine three scenarios of increased complexity that gradually approximate real world settings. The obtained results show that the proposed framework leads to substantial performance improvements in terms of the aforementioned goals when compared to current state of affairs. | There has been an increasing body of literature on control mechanisms for EV charging, whereas the literature on charging station design is rather sparse. The predominant assumption in EV charging control is that the vehicles are parked either at their drivers' residences or in large parking lots and that, depending on the charger technology, a charging session lasts on the order of hours. The vast majority of the literature treats EVs as "smart loads" whose demand is managed by optimally adjusting the charging current.
Overall, related literature can be classified into two categories: centralized and decentralized (distributed) control @cite_20 . In centralized control, a network operator (dispatcher) possesses global information about all users and, to a large extent, controls and mandates charging times, rates, etc. For instance, the authors of @cite_17 propose a direct load-management scheme for EV charging. Similarly, in our previous work @cite_14 , we introduce a central EV allocation framework for a network of fast charging stations. | {
"cite_N": [
"@cite_14",
"@cite_20",
"@cite_17"
],
"mid": [
"2023366187",
"2529606212",
"2098857399"
],
"abstract": [
"In order to increase the penetration of electric vehicles, a network of fast charging stations that can provide drivers with a certain level of quality of service (QoS) is needed. However, given the strain that such a network can exert on the power grid, and the mobility of loads represented by electric vehicles, operating it efficiently is a challenging and complex problem. In this paper, we examine a network of charging stations equipped with an energy storage device and propose a scheme that allocates power to them from the grid, as well as routes customers. We examine three scenarios, gradually increasing their complexity. In the first one, all stations have identical charging capabilities and energy storage devices, draw constant power from the grid and no routing decisions of customers are considered. It represents the current state of affairs and serves as a baseline for evaluating the performance of the proposed scheme. In the second scenario, power to the stations is allocated in an optimal manner from the grid and in addition a certain percentage of customers can be routed to nearby stations. In the final scenario, optimal allocation of both power from the grid and customers to stations is considered. The three scenarios are evaluated using real traffic traces corresponding to weekday rush hour from a large metropolitan area in the US. The results indicate that the proposed scheme offers substantial improvements of performance compared to the current mode of operation; namely, more customers can be served with the same amount of power, thus enabling the station operators to increase their profitability. Further, the scheme provides guarantees to customers in terms of the probability of being blocked (and hence not served) by the closest charging station to their location. Overall, the paper addresses key issues related to the efficient operation, both from the perspective of the power grid and the drivers satisfaction, of a network of charging stations.",
"Research predicts that in 2030, around 30% of all vehicles in Belgium will be plug-in hybrid electric vehicles (PHEVs). Because most PHEVs are charged after working hours, the existing peak load in the evening will increase significantly. Large peak loads cause more expensive production and can even damage the electricity infrastructure. In a Smart Grid, the charging of PHEVs can be controlled to reduce peak load, denoted as demand side management (DSM). The goal of our research is to compare several solutions for DSM of PHEVs. This paper takes a first step by benchmarking a multi-agent solution against an optimal quadratic programming (QP) scheduler solution. Simulations show that a QP scheduler is able to optimally flatten peak loads while sufficiently charging the PHEV batteries. However, this solution is unfeasible in practice because it scales poorly and requires complete information on when and how much PHEVs need to charge beforehand, which is not available. The MAS solution proves to be scalable and adaptable to incomplete and unpredictable information while peaks are still reduced with an efficiency up to 95% compared to the QP scheduler.",
"Accurate real-time load forecasting is essential for the reliable and efficient operation of a power system. Assuming that the number of electrical vehicles will increase substantially in the near future, the load profiles of the system will become too volatile and unpredictable for the current forecasting techniques. We propose to utilize the accurate reporting of the emerging Advanced Metering Infrastructure (AMI) to track the incoming PHEV load requests and their statistics. We propose a model for the PHEV loads statistics and an optimization of the generation dispatch that uses the full statistical information. This model offers an example of the potential impact of the smart metering infrastructure currently being deployed."
]
} |
1409.6673 | 2011559788 | The operation of the power grid is becoming more stressed, due to the addition of new large loads represented by electric vehicles (EVs) and a more intermittent supply due to the incorporation of renewable sources. As a consequence, the coordination and control of projected EV demand in a network of fast charging stations becomes a critical and challenging problem. In this paper, we introduce a game theoretic based decentralized control mechanism to alleviate negative impacts from the EV demand. The proposed mechanism takes into consideration the nonuniform spatial distribution of EVs that induces uneven power demand at each charging facility, and aims to: 1) avoid straining grid resources by offering price incentives, so that customers accept being routed to less busy stations; 2) maximize total revenue by serving more customers with the same amount of grid resources; and 3) provide charging service to customers with a certain level of quality-of-service (QoS), the latter defined as the long term customer blocking probability. We examine three scenarios of increased complexity that gradually approximate real world settings. The obtained results show that the proposed framework leads to substantial performance improvements in terms of the aforementioned goals when compared to current state of affairs. | In decentralized control, individuals choose their own service patterns based on cost minimization principles. In this case, utilities aim to shape demand profiles by incentivizing customers via pricing-based control mechanisms. The work presented in @cite_8 proposes a game theoretic framework to optimally control the demand of a large scale stationary EV population. Further, the authors of decentralized5 and @cite_16 present decentralized control mechanisms for EVs and a detailed survey can be found in @cite_10 . 
Note that decentralized control eliminates the need for advanced monitoring tools, whereas centralized control leads to better utilization of charging resources. | {
"cite_N": [
"@cite_16",
"@cite_10",
"@cite_8"
],
"mid": [
"1980303097",
"2071720978",
"2088077079"
],
"abstract": [
"In this paper, we present a scalable approach for DSM (demand side management) of PHEVs (plug-in hybrid electric vehicles). Essentially, our approach consists of three steps: aggregation, optimization, and control. In the aggregation step, individual PHEV charging constraints are aggregated upwards in a tree structure. In the optimization step, the aggregated constraints are used for scalable computation of a collective charging plan, which minimizes costs for electricity supply. In the real-time control step, this charging plan is used to create an incentive signal for all PHEVs, determined by a market-based priority scheme. These three steps are executed iteratively to cope with uncertainty and dynamism. In simulation experiments, the proposed three-step approach is benchmarked against classic, fully centralized approaches. Results show that our approach is able to charge PHEVs with comparable quality to optimal, centrally computed charging plans, while significantly improving scalability.",
"This paper gives a structured literature overview of coordinated charging of electric vehicles (EVs). The optimization objective, scale and method of each coordination strategy are the three parameters used to characterize and compare different approaches. The correlation between the three parameters and the research category are investigated, resulting in a correlation mapping of the different approaches.",
"This paper develops a strategy to coordinate the charging of autonomous plug-in electric vehicles (PEVs) using concepts from non-cooperative games. The foundation of the paper is a model that assumes PEVs are cost-minimizing and weakly coupled via a common electricity price. At a Nash equilibrium, each PEV reacts optimally with respect to a commonly observed charging trajectory that is the average of all PEV strategies. This average is given by the solution of a fixed point problem in the limit of infinite population size. The ideal solution minimizes electricity generation costs by scheduling PEV demand to fill the overnight non-PEV demand “valley”. The paper's central theoretical result is a proof of the existence of a unique Nash equilibrium that almost satisfies that ideal. This result is accompanied by a decentralized computational algorithm and a proof that the algorithm converges to the Nash equilibrium in the infinite system limit. Several numerical examples are used to illustrate the performance of the solution strategy for finite populations. The examples demonstrate that convergence to the Nash equilibrium occurs very quickly over a broad range of parameters, and suggest this method could be useful in situations where frequent communication with PEVs is not possible. The method is useful in applications where fully centralized control is not possible, but where optimal or near-optimal charging patterns are essential to system operation."
]
} |
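The decentralized, price-driven "valley-filling" behavior described in @cite_8 can be sketched with a toy best-response loop: each EV repeatedly re-schedules its energy into the hours that are cheapest given everyone else's load, with price assumed proportional to total load. All names, the charging granularity, and the linear price model below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def valley_fill(base_load, n_evs, energy_per_ev, iters=20):
    """Toy best-response sketch of decentralized charging coordination."""
    T = len(base_load)
    schedules = np.zeros((n_evs, T))
    chunk = energy_per_ev / T                  # assumed charging granularity
    for _ in range(iters):
        for i in range(n_evs):
            # load of everyone except EV i (price ~ total load)
            others = base_load + schedules.sum(axis=0) - schedules[i]
            s = np.zeros(T)
            remaining = energy_per_ev
            while remaining > 1e-9:            # greedily fill cheapest hours
                cheapest = int(np.argmin(others + s))
                s[cheapest] += chunk
                remaining -= chunk
            schedules[i] = s
    return base_load + schedules.sum(axis=0)

# An overnight "valley" between two demand peaks gets filled by the EV load.
total = valley_fill(np.array([5.0, 1.0, 1.0, 5.0]), n_evs=2, energy_per_ev=2.0)
```

With the toy numbers above, the two EVs place all their energy in the two cheap middle hours, flattening the profile to [5, 3, 3, 5] while conserving total energy.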
1409.6673 | 2011559788 | The operation of the power grid is becoming more stressed, due to the addition of new large loads represented by electric vehicles (EVs) and a more intermittent supply due to the incorporation of renewable sources. As a consequence, the coordination and control of projected EV demand in a network of fast charging stations becomes a critical and challenging problem. In this paper, we introduce a game theoretic based decentralized control mechanism to alleviate negative impacts from the EV demand. The proposed mechanism takes into consideration the nonuniform spatial distribution of EVs that induces uneven power demand at each charging facility, and aims to: 1) avoid straining grid resources by offering price incentives, so that customers accept being routed to less busy stations; 2) maximize total revenue by serving more customers with the same amount of grid resources; and 3) provide charging service to customers with a certain level of quality-of-service (QoS), the latter defined as the long term customer blocking probability. We examine three scenarios of increased complexity that gradually approximate real world settings. The obtained results show that the proposed framework leads to substantial performance improvements in terms of the aforementioned goals when compared to current state of affairs. | Related work on charging station design can be classified into two categories. The first includes queueing-based models, where the goal is to evaluate charging station performance with respect to long-term statistical metrics, e.g., blocking probability, waiting time, etc. For example, the work in @cite_1 proposes an M/M/s queueing-based mathematical model of the EV demand at fast charging stations located near highway exits. The design perspective of the second category, on the other hand, is related to power engineering.
The authors of @cite_11 propose a charging station architecture with a DC bus system and, similar to our model, employ a local energy storage unit to alleviate stress on the power grid. Related literature is presented in Fig. . | {
"cite_N": [
"@cite_1",
"@cite_11"
],
"mid": [
"2047902637",
"2041741439"
],
"abstract": [
"This paper presents a spatial and temporal model of electric vehicle charging demand for a rapid charging station located near a highway exit. Most previous studies have assumed a fixed charging location and fixed charging time during the off-peak hours for anticipating electric vehicle charging demand. Some other studies have been based on limited charging scenarios at typical locations instead of a mathematical model. Therefore, from a distribution system perspective, electric vehicle charging demand is still an unidentified quantity which may vary by space and time. In this context, this study proposes a mathematical model of electric vehicle charging demand for a rapid charging station. The mathematical model is based on the fluid dynamic traffic model and the M/M/s queueing theory. Firstly, the arrival rate of discharged vehicles at a charging station is predicted by the fluid dynamic model. Then, charging demand is forecasted by the M/M/s queueing theory with the arrival rate of discharged vehicles. This mathematical model of charging demand may allow grid's distribution planners to anticipate a charging demand profile at a charging station. A numerical example shows that the proposed model is able to capture the spatial and temporal dynamics of charging demand in a highway charging station.",
"In this paper the optimum design of a fast-charging station for PHEVs and EVs is proposed to minimize the strain on the power grid while supplying the vehicles with the required power. By studying the power demand of the charging station, a conclusion is reached that the size of the grid tie can be reduced substantially by sizing the grid tie for the average rather than the peak power demand. Therefore the charging station architecture with a single AC DC conversion and a DC distribution to DC DC charging units is proposed. An energy storage system is connected to the DC bus to supply power when the demand exceeds the average that can be provided from the grid. Various topologies for both the AC DC and DC DC conversion are studied to find the optimum design for this application."
]
} |
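The blocking-probability metric used in such queueing models has a closed form for the M/M/s/s loss system: the Erlang-B formula, computable with a standard recursion. The sketch below is generic textbook material, not the spatio-temporal model of @cite_1; the example load values are assumptions for illustration.

```python
def erlang_b(servers: int, offered_load: float) -> float:
    """Blocking probability of an M/M/s/s loss system via the standard
    Erlang-B recursion. `offered_load` is lambda/mu in Erlangs."""
    b = 1.0  # blocking probability with zero servers
    for n in range(1, servers + 1):
        b = offered_load * b / (n + offered_load * b)
    return b

# e.g., a station with 10 charging spots facing an offered load of 8 Erlangs
p_block = erlang_b(10, 8.0)
```

The recursion avoids the large factorials of the direct formula and is numerically stable even for hundreds of servers.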
1409.6673 | 2011559788 | The operation of the power grid is becoming more stressed, due to the addition of new large loads represented by electric vehicles (EVs) and a more intermittent supply due to the incorporation of renewable sources. As a consequence, the coordination and control of projected EV demand in a network of fast charging stations becomes a critical and challenging problem. In this paper, we introduce a game theoretic based decentralized control mechanism to alleviate negative impacts from the EV demand. The proposed mechanism takes into consideration the nonuniform spatial distribution of EVs that induces uneven power demand at each charging facility, and aims to: 1) avoid straining grid resources by offering price incentives, so that customers accept being routed to less busy stations; 2) maximize total revenue by serving more customers with the same amount of grid resources; and 3) provide charging service to customers with a certain level of quality-of-service (QoS), the latter defined as the long term customer blocking probability. We examine three scenarios of increased complexity that gradually approximate real world settings. The obtained results show that the proposed framework leads to substantial performance improvements in terms of the aforementioned goals when compared to current state of affairs. | In this study, we build on our previous work @cite_14 and @cite_22 , which proposed a fast charging station architecture employing local energy storage and introduced a stochastic model of customer arrivals and service (both from the grid and the local energy storage device) to assess its performance (depicted in Fig. ). Next, we summarize the dynamics of the single charging station model introduced in @cite_22 . Each charging station draws constant power from the power grid, expressed as the number of EVs that can be charged simultaneously, up to a capacity of @math vehicles.
Similarly, a local energy storage unit is employed, which can hold enough energy to fully charge up to @math EVs. The energy storage is used to meet spikes in the stochastic power demand that exceed the available grid power level. Since the station always draws constant power from the grid, under a long-term contractual agreement between the grid and the station, this model also isolates the former from demand spikes and hence enhances its reliability. | {
"cite_N": [
"@cite_14",
"@cite_22"
],
"mid": [
"2023366187",
"1994915435"
],
"abstract": [
"In order to increase the penetration of electric vehicles, a network of fast charging stations that can provide drivers with a certain level of quality of service (QoS) is needed. However, given the strain that such a network can exert on the power grid, and the mobility of loads represented by electric vehicles, operating it efficiently is a challenging and complex problem. In this paper, we examine a network of charging stations equipped with an energy storage device and propose a scheme that allocates power to them from the grid, as well as routes customers. We examine three scenarios, gradually increasing their complexity. In the first one, all stations have identical charging capabilities and energy storage devices, draw constant power from the grid and no routing decisions of customers are considered. It represents the current state of affairs and serves as a baseline for evaluating the performance of the proposed scheme. In the second scenario, power to the stations is allocated in an optimal manner from the grid and in addition a certain percentage of customers can be routed to nearby stations. In the final scenario, optimal allocation of both power from the grid and customers to stations is considered. The three scenarios are evaluated using real traffic traces corresponding to weekday rush hour from a large metropolitan area in the US. The results indicate that the proposed scheme offers substantial improvements of performance compared to the current mode of operation; namely, more customers can be served with the same amount of power, thus enabling the station operators to increase their profitability. Further, the scheme provides guarantees to customers in terms of the probability of being blocked (and hence not served) by the closest charging station to their location. Overall, the paper addresses key issues related to the efficient operation, both from the perspective of the power grid and the drivers satisfaction, of a network of charging stations.",
"The universal acceptance of electric vehicles depends on the widespread presence of charging stations. These stations have to be designed intelligently so as not to overwhelm the fragile power grid with the additional load. In this paper we extend our previous work in [1] and examine how the charging station performance, namely the blocking probability, is affected both by the energy storage technology used, and the employed charging strategy. We consider two strategies: charging from the energy storage first, and charging from the power grid first. We compare their performance for different sets of system parameters and identify the optimum operating rule. Finally, we describe an economic model, which allows us to determine the trade-offs involved when choosing between an energy storage with higher capacity or one with a higher power rating."
]
} |
1409.6673 | 2011559788 | The operation of the power grid is becoming more stressed, due to the addition of new large loads represented by electric vehicles (EVs) and a more intermittent supply due to the incorporation of renewable sources. As a consequence, the coordination and control of projected EV demand in a network of fast charging stations becomes a critical and challenging problem. In this paper, we introduce a game theoretic based decentralized control mechanism to alleviate negative impacts from the EV demand. The proposed mechanism takes into consideration the nonuniform spatial distribution of EVs that induces uneven power demand at each charging facility, and aims to: 1) avoid straining grid resources by offering price incentives, so that customers accept being routed to less busy stations; 2) maximize total revenue by serving more customers with the same amount of grid resources; and 3) provide charging service to customers with a certain level of quality-of-service (QoS), the latter defined as the long term customer blocking probability. We examine three scenarios of increased complexity that gradually approximate real world settings. The obtained results show that the proposed framework leads to substantial performance improvements in terms of the aforementioned goals when compared to current state of affairs. | This work also builds on @cite_14 , that evaluated the above described charging model in a network context using real world traces obtained from the Seattle public bus system. The obtained results indicated that the spatial distribution of EVs follows a Beta distribution, which is also used in the present study. Consequently, at given times (e.g., rush hour during weekdays), some regions (e.g., downtown) are busier than others. Accordingly, stations near a high density area are busier than other stations and unless an EV routing mechanism is in place, they would fail to meet preset QoS requirements. 
| {
"cite_N": [
"@cite_14"
],
"mid": [
"2023366187"
],
"abstract": [
"In order to increase the penetration of electric vehicles, a network of fast charging stations that can provide drivers with a certain level of quality of service (QoS) is needed. However, given the strain that such a network can exert on the power grid, and the mobility of loads represented by electric vehicles, operating it efficiently is a challenging and complex problem. In this paper, we examine a network of charging stations equipped with an energy storage device and propose a scheme that allocates power to them from the grid, as well as routes customers. We examine three scenarios, gradually increasing their complexity. In the first one, all stations have identical charging capabilities and energy storage devices, draw constant power from the grid and no routing decisions of customers are considered. It represents the current state of affairs and serves as a baseline for evaluating the performance of the proposed scheme. In the second scenario, power to the stations is allocated in an optimal manner from the grid and in addition a certain percentage of customers can be routed to nearby stations. In the final scenario, optimal allocation of both power from the grid and customers to stations is considered. The three scenarios are evaluated using real traffic traces corresponding to weekday rush hour from a large metropolitan area in the US. The results indicate that the proposed scheme offers substantial improvements of performance compared to the current mode of operation; namely, more customers can be served with the same amount of power, thus enabling the station operators to increase their profitability. Further, the scheme provides guarantees to customers in terms of the probability of being blocked (and hence not served) by the closest charging station to their location. Overall, the paper addresses key issues related to the efficient operation, both from the perspective of the power grid and the drivers satisfaction, of a network of charging stations."
]
} |
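The QoS metric in this row's abstracts is the long-term probability that an arriving customer is blocked at a charging station. As a hedged illustration (not the papers' full model, which also includes energy storage and grid power allocation), a station with m identical chargers, Poisson arrivals, and exponential charging times behaves as an M/M/m/m loss system, whose blocking probability is given by the classic Erlang-B recursion:

```python
def erlang_b(offered_load, servers):
    """Blocking probability of an M/M/m/m loss system, computed with
    the numerically stable Erlang-B recursion:
    B(0) = 1,  B(m) = a*B(m-1) / (m + a*B(m-1))."""
    b = 1.0
    for m in range(1, servers + 1):
        b = offered_load * b / (m + offered_load * b)
    return b

# A station with 2 chargers offered 2 Erlangs of EV demand turns
# away 40% of arriving customers; doubling the chargers at the same
# demand drops the blocking probability below 10%.
print(erlang_b(2.0, 2))  # 0.4
print(erlang_b(2.0, 4))
```

Routing a fraction of customers to a neighbouring station, as in the second and third scenarios above, effectively reshapes each station's offered load before this formula is applied.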
1409.6680 | 2951848893 | Constructing a good conference schedule for a large multi-track conference needs to take into account the preferences and constraints of organizers, authors, and attendees. Creating a schedule which has fewer conflicts for authors and attendees, and thematically coherent sessions is a challenging task. Cobi introduced an alternative approach to conference scheduling by engaging the community to play an active role in the planning process. The current Cobi pipeline consists of committee-sourcing and author-sourcing to plan a conference schedule. We further explore the design space of community-sourcing by introducing attendee-sourcing -- a process that collects input from conference attendees and encodes them as preferences and constraints for creating sessions and schedule. For CHI 2014, a large multi-track conference in human-computer interaction with more than 3,000 attendees and 1,000 authors, we collected attendees' preferences by making available all the accepted papers at the conference on a paper recommendation tool we built called Confer, for a period of 45 days before announcing the conference program (sessions and schedule). We compare the preferences marked on Confer with the preferences collected from Cobi's author-sourcing approach. We show that attendee-sourcing can provide insights beyond what can be discovered by author-sourcing. For CHI 2014, the results show value in the method and attendees' participation. It produces data that provides more alternatives in scheduling and complements data collected from other methods for creating coherent sessions and reducing conflicts. | The most recent work in this area is Cobi @cite_2 , which initially involves committee members to group papers and then invites the authors of the accepted papers to identify papers that would fit well in a session with their own, and which papers they would like to see.
Cobi makes a design choice of giving the authors a list of 20 similar papers to choose from. While this simplifies the task for authors, they cannot mark any paper outside the presented list of 20 as ``similar to their paper'' or ``would like to see''. We argue that such a design risks missing many paper similarities. | {
"cite_N": [
"@cite_2"
],
"mid": [
"2108376501"
],
"abstract": [
"Effectively planning a large multi-track conference requires an understanding of the preferences and constraints of organizers, authors, and attendees. Traditionally, the onus of scheduling the program falls on a few dedicated organizers. Resolving conflicts becomes difficult due to the size and complexity of the schedule and the lack of insight into community members' needs and desires. Cobi presents an alternative approach to conference scheduling that engages the entire community in the planning process. Cobi comprises (a) communitysourcing applications that collect preferences, constraints, and affinity data from community members, and (b) a visual scheduling interface that combines communitysourced data and constraint-solving to enable organizers to make informed improvements to the schedule. This paper describes Cobi's scheduling tool and reports on a live deployment for planning CHI 2013, where organizers considered input from 645 authors and resolved 168 scheduling conflicts. Results show the value of integrating community input with an intelligent user interface to solve complex planning tasks."
]
} |
1409.6680 | 2951848893 | Constructing a good conference schedule for a large multi-track conference needs to take into account the preferences and constraints of organizers, authors, and attendees. Creating a schedule which has fewer conflicts for authors and attendees, and thematically coherent sessions is a challenging task. Cobi introduced an alternative approach to conference scheduling by engaging the community to play an active role in the planning process. The current Cobi pipeline consists of committee-sourcing and author-sourcing to plan a conference schedule. We further explore the design space of community-sourcing by introducing attendee-sourcing -- a process that collects input from conference attendees and encodes them as preferences and constraints for creating sessions and schedule. For CHI 2014, a large multi-track conference in human-computer interaction with more than 3,000 attendees and 1,000 authors, we collected attendees' preferences by making available all the accepted papers at the conference on a paper recommendation tool we built called Confer, for a period of 45 days before announcing the conference program (sessions and schedule). We compare the preferences marked on Confer with the preferences collected from Cobi's author-sourcing approach. We show that attendee-sourcing can provide insights beyond what can be discovered by author-sourcing. For CHI 2014, the results show value in the method and attendees' participation. It produces data that provides more alternatives in scheduling and complements data collected from other methods for creating coherent sessions and reducing conflicts. | A number of recent systems explore the use of crowds for making and executing plans @cite_0 , @cite_16 , @cite_12 , and @cite_7 . Prior work on community-sourcing has shown that community-sourcing can successfully elicit high-quality expert work from specific communities. 
@cite_5 show that students can grade exams with 2% higher accuracy than traditional single-expert grading at the same price. Another thread of research explored automatically generating an optimal schedule. In the context of conference scheduling, @cite_14 introduced formulations for maximizing the number of talks of interest attendees can attend. However, @cite_2 identified that while automated scheduling is appropriate when the parameters and constraints of the optimization problem are well-specified, interviews with past CHI organizers show that they attempt to tackle soft constraints and other tacit considerations. We discuss how attendee-sourcing data could be useful for automatically generating a draft schedule, which the organizers can refine with their domain expertise and subjective considerations. | {
"cite_N": [
"@cite_14",
"@cite_7",
"@cite_0",
"@cite_2",
"@cite_5",
"@cite_16",
"@cite_12"
],
"mid": [
"2141287371",
"2010405649",
"2182782998",
"2108376501",
"2132962756",
"",
"2166145477"
],
"abstract": [
"Conference scheduling involves organizing presentations into sessions which are assigned to time periods and rooms. This includes assuring a feasible utilization of time periods and rooms, and avoiding individual schedule conflicts. The problem increases in complexity by considering the preferences of presenters: for time periods, for rooms, etc. A greater level of complexity comes from considering the preferences of conference attendees, which we define as preference-based conference scheduling (PBCS). This article provides a structure on which PBCS problems are founded, including empirical demonstration of solution techniques. In addition, real-world strategic planning issues of flexibility and preference detail are explored.",
"This paper introduces privacy and accountability techniques for crowd-powered systems. We focus on email task management: tasks are an implicit part of every inbox, but the overwhelming volume of incoming email can bury important requests. We present EmailValet, an email client that recruits remote assistants from an expert crowdsourcing marketplace. By annotating each email with its implicit tasks, EmailValet's assistants create a task list that is automatically populated from emails in the user's inbox. The system is an example of a valet approach to crowdsourcing, which aims for parsimony and transparency in access con-trol for the crowd. To maintain privacy, users specify rules that define a sliding-window subset of their inbox that they are willing to share with assistants. To support accountability, EmailValet displays the actions that the assistant has taken on each email. In a weeklong field study, participants completed twice as many of their email-based tasks when they had access to crowdsourced assistants, and they became increasingly comfortable sharing their inbox with assistants over time.",
"Behind every search query is a high-level mission that the user wants to accomplish. While current search engines can often provide relevant information in response to well-specified queries, they place the heavy burden of making a plan for achieving a mission on the user. We take the alternative approach of tackling users' high-level missions directly by introducing a human computation system that generates simple plans, by decomposing a mission into goals and retrieving search results tailored to each goal. Results show that our system is able to provide users with diverse, actionable search results and useful roadmaps for accomplishing their missions.",
"Effectively planning a large multi-track conference requires an understanding of the preferences and constraints of organizers, authors, and attendees. Traditionally, the onus of scheduling the program falls on a few dedicated organizers. Resolving conflicts becomes difficult due to the size and complexity of the schedule and the lack of insight into community members' needs and desires. Cobi presents an alternative approach to conference scheduling that engages the entire community in the planning process. Cobi comprises (a) communitysourcing applications that collect preferences, constraints, and affinity data from community members, and (b) a visual scheduling interface that combines communitysourced data and constraint-solving to enable organizers to make informed improvements to the schedule. This paper describes Cobi's scheduling tool and reports on a live deployment for planning CHI 2013, where organizers considered input from 645 authors and resolved 168 scheduling conflicts. Results show the value of integrating community input with an intelligent user interface to solve complex planning tasks.",
"Online labor markets, such as Amazon's Mechanical Turk, have been used to crowdsource simple, short tasks like image labeling and transcription. However, expert knowledge is often lacking in such markets, making it impossible to complete certain classes of tasks. In this work we introduce an alternative mechanism for crowdsourcing tasks that require specialized knowledge or skill: communitysourcing --- the use of physical kiosks to elicit work from specific populations. We investigate the potential of communitysourcing by designing, implementing and evaluating Umati: the communitysourcing vending machine. Umati allows users to earn credits by performing tasks using a touchscreen attached to the machine. Physical rewards (in this case, snacks) are dispensed through traditional vending mechanics. We evaluated whether communitysourcing can accomplish expert work by using Umati to grade Computer Science exams. We placed Umati in a university Computer Science building, targeting students with grading tasks for snacks. Over one week, 328 unique users (302 of whom were students) completed 7771 tasks (7240 by students). 80% of users had never participated in a crowdsourcing market before. We found that Umati was able to grade exams with 2% higher accuracy (at the same price) or at 33% lower cost (at equivalent accuracy) than traditional single-expert grading. Mechanical Turk workers had no success grading the same exams. These results indicate that communitysourcing can successfully elicit high-quality expert work from specific communities.",
"",
"An important class of tasks that are underexplored in current human computation systems are complex tasks with global constraints. One example of such a task is itinerary planning, where solutions consist of a sequence of activities that meet requirements specified by the requester. In this paper, we focus on the crowdsourcing of such plans as a case study of constraint-based human computation tasks and introduce a collaborative planning system called Mobi that illustrates a novel crowdware paradigm. Mobi presents a single interface that enables crowd participants to view the current solution context and make appropriate contributions based on current needs. We conduct experiments that explain how Mobi enables a crowd to effectively and collaboratively resolve global constraints, and discuss how the design principles behind Mobi can more generally facilitate a crowd to tackle problems involving global constraints."
]
} |
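The scheduling formulations cited above (e.g., the preference-based conference scheduling of @cite_14) revolve around minimizing attendee conflicts: two papers an attendee wants to see placed in parallel sessions. A minimal sketch of that objective, with hypothetical attendee preference sets and a paper-to-timeslot map (the real formulations add rooms, session coherence, and author constraints):

```python
from itertools import combinations

def attendance_conflicts(preferences, timeslot_of):
    """Total number of attendee conflicts: pairs of papers an
    attendee wants to see that are presented in parallel (same
    timeslot, different tracks), summed over all attendees."""
    conflicts = 0
    for wanted in preferences.values():
        for a, b in combinations(sorted(wanted), 2):
            if timeslot_of[a] == timeslot_of[b]:
                conflicts += 1
    return conflicts

# Hypothetical data: p1 and p2 run in parallel in slot 1.
timeslot_of = {"p1": 1, "p2": 1, "p3": 2}
preferences = {"alice": {"p1", "p2"},  # both wanted papers clash
               "bob": {"p1", "p3"}}    # no clash
print(attendance_conflicts(preferences, timeslot_of))  # 1
```

A draft optimizer would search over `timeslot_of` assignments to drive this count down, leaving soft constraints and tacit considerations to the organizers.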
1409.6680 | 2951848893 | Constructing a good conference schedule for a large multi-track conference needs to take into account the preferences and constraints of organizers, authors, and attendees. Creating a schedule which has fewer conflicts for authors and attendees, and thematically coherent sessions is a challenging task. Cobi introduced an alternative approach to conference scheduling by engaging the community to play an active role in the planning process. The current Cobi pipeline consists of committee-sourcing and author-sourcing to plan a conference schedule. We further explore the design space of community-sourcing by introducing attendee-sourcing -- a process that collects input from conference attendees and encodes them as preferences and constraints for creating sessions and schedule. For CHI 2014, a large multi-track conference in human-computer interaction with more than 3,000 attendees and 1,000 authors, we collected attendees' preferences by making available all the accepted papers at the conference on a paper recommendation tool we built called Confer, for a period of 45 days before announcing the conference program (sessions and schedule). We compare the preferences marked on Confer with the preferences collected from Cobi's author-sourcing approach. We show that attendee-sourcing can provide insights beyond what can be discovered by author-sourcing. For CHI 2014, the results show value in the method and attendees' participation. It produces data that provides more alternatives in scheduling and complements data collected from other methods for creating coherent sessions and reducing conflicts. | Confex @cite_6 is a commercial solution for conference management, which includes tools for scheduling. To the best of our knowledge, no existing conference management solution incorporates community input directly in the scheduling process. 
Conference Navigator @cite_13 and Conferator @cite_8 provide a similar set of features to Confer, including bookmarking papers of interest and building personalized schedules with paper recommendations. These systems, however, have not used the attendees' data outside of the tool itself. Our attendee-sourcing introduces a novel mechanism that uses the by-product of attendees' paper exploration in the scheduling process. Our method can even use data generated by these other tools for improving the schedule. | {
"cite_N": [
"@cite_13",
"@cite_6",
"@cite_8"
],
"mid": [
"2222933349",
"",
"2014328246"
],
"abstract": [
"As the sheer volume of information grows, information overload challenges users in many ways. Large conferences are one of the venues suffering from this overload. Faced with several parallel sessions and large volumes of papers covering diverse areas of interest, conference participants often struggle to identify the most relevant sessions to attend. The Conference Navigator 2.0 system was created to help conference participants go examine the schedule of paper presentation, add most interesting papers to individual schedule, and export this schedule to a calendar application. In addition, as a social system, the Conference Navigator 2.0 collects the wisdom of the user community and make it available through community-based recommendation interface to help individuals in making scheduling decisions.",
"",
"This paper presents an anatomy of Hypertext 2011 -- focusing on the dynamic and static behavior of the participants. We consider data collected by the CONFERATOR system at the conference, and provide statistics concerning participants, presenters, session chairs, different communities, and according roles. Additionally, we perform an in-depth analysis of these actors during the conference concerning their communication and track visiting behavior."
]
} |
1409.6431 | 190804648 | When nodes in a mobile network cluster together or move according to common external factors (e.g., cars that follow the road network), the resulting contact patterns become correlated. In this work we address the question of modelling such correlated mobility movements for the analysis of intermittently connected networks. We propose to use the concept of node colouring time to characterise dynamic node contact patterns. We analyse how this model compares to existing work, and demonstrate how to extract the relevant data from actual trace files. Moreover, we show how this information can be used to derive the latency distribution of DTN routing protocols. Our model achieves a very good fit to simulated results based on real vehicular mobility traces, whereas models which assumes independent contacts do not. | There is a rich body of work discussing detailed analytic models for latency and delivery ratio in delay-tolerant networks. The work ranges from experimentally grounded papers aiming to find models and frameworks that fit observed data to more abstract models dealing with asymptotic bounds on information propagation. Many of these approaches are based on or inspired by epidemiological models @cite_25 . We have previously characterised the worst-case latency of broadcast for such networks using expander graph techniques @cite_15 . In a preliminary version of this paper we developed the basic framework for deriving the latency of ideal epidemic routing @cite_11 . This paper extends the latter work by incorporating an analysis of multi-copy routing as well as presenting a more rigorous mathematical basis with a hierarchy of connectivity models. | {
"cite_N": [
"@cite_15",
"@cite_25",
"@cite_11"
],
"mid": [
"2151343869",
"",
"1489677252"
],
"abstract": [
"Worst-case latency is an important characteristic of information dissemination protocols. However, in sparse mobile ad hoc networks where end-to-end connectivity cannot be achieved and store-carry-forward algorithms are needed, such worst-case analyses have not been possible to perform on real mobility traces due to lack of suitable models. We propose a new metric called delay expansion that reflects connectivity and reachability properties of intermittently connected networks. Using the delay expansion, we show how bounds on worst-case latency can be derived for a general class of broadcast protocols and a wide range of real mobility patterns. The paper includes theoretical results that show how worst-case latency can be related with delay expansion for a given mobility scenario, as well as simulations to validate the theoretical model.",
"",
"Given a mobility pattern that entails intermittent wireless ad hoc connectivity, what is the best message delivery ratio and latency that can be achieved for a delay-tolerant routing protocol? We address this question by introducing a general scheme for deriving the routing latency distribution for a given mobility trace. Prior work on determining latency distributions has focused on models where the node mobility is characterised by independent contacts between nodes. We demonstrate through simulations with synthetic and real data traces that such models fail to predict the routing latency for cases with heterogeneous and correlated mobility. We demonstrate that our approach, which is based on characterising mobility through a colouring process, achieves a very good fit to simulated results also for such complex mobility patterns."
]
} |
1409.6431 | 190804648 | When nodes in a mobile network cluster together or move according to common external factors (e.g., cars that follow the road network), the resulting contact patterns become correlated. In this work we address the question of modelling such correlated mobility movements for the analysis of intermittently connected networks. We propose to use the concept of node colouring time to characterise dynamic node contact patterns. We analyse how this model compares to existing work, and demonstrate how to extract the relevant data from actual trace files. Moreover, we show how this information can be used to derive the latency distribution of DTN routing protocols. Our model achieves a very good fit to simulated results based on real vehicular mobility traces, whereas models which assumes independent contacts do not. | Closest to our work in this paper is that of Resta and Santi @cite_10 , where the authors present an analytical framework for predicting routing performance in delay-tolerant networks. The authors analyse epidemic and two-hops routing using a colouring process under similar assumptions as in our paper. The main difference is that our work considers heterogeneous node mobility (including correlated inter-contact times), whereas the work by Resta and Santi assumes independent exponential inter-contact times. | {
"cite_N": [
"@cite_10"
],
"mid": [
"2101946268"
],
"abstract": [
"In this paper, we present a framework for analyzing routing performance in delay tolerant networks (DTNs). Differently from previous work, our framework is aimed at characterizing the exact distribution of relevant performance metrics, which is a substantial improvement over existing studies characterizing either the expected value of the metric, or an asymptotic approximation of the actual distribution. In particular, the considered performance metrics are packet delivery delay, and communication cost, expressed as number of copies of a packet circulating in the network at the time of delivery. Our proposed framework is based on a characterization of the routing process as a stochastic coloring process and can be applied to model performance of most stateless delay tolerant routing protocols, such as epidemic, two-hops, and spray and wait. After introducing the framework, we present examples of its application to derive the packet delivery delay and communication cost distribution of two such protocols, namely epidemic and two-hops routing. Characterizing packet delivery delay and communication cost distribution is important to investigate fundamental properties of delay tolerant networks. As an example, we show how packet delivery delay distribution can be used to estimate how epidemic routing performance changes in presence of different degrees of node cooperation within the network. More specifically, we consider fully cooperative, noncooperative, and probabilistic cooperative scenarios, and derive nearly exact expressions of the packet delivery rate (PDR) under these scenarios based on our proposed framework. 
The comparison of the obtained packet delivery rate estimation in the various cooperation scenarios suggests that even a modest level of node cooperation (probabilistic cooperation with a low probability of cooperation) is sufficient to achieve 2-fold performance improvement with respect to the most pessimistic scenario in which all potential forwarders drop packets."
]
} |
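Under the independent exponential inter-contact assumption of the framework above (the very simplification the surrounding text contrasts with correlated mobility), the delivery delay of epidemic routing can be sampled directly from the colouring process: with k coloured (message-carrying) nodes the next colouring event occurs at rate lam*k*(n-k), and the newly coloured node is uniform among the uncoloured ones. A sketch with all parameters illustrative:

```python
import random

def epidemic_delay(n, lam, rng):
    """One sample of epidemic-routing delivery delay under i.i.d.
    exponential pairwise inter-contact times with rate lam.  The
    state is just the number k of coloured nodes: the next colouring
    occurs at rate lam*k*(n-k), and the destination is the newly
    coloured node with probability 1/(n-k)."""
    t, k = 0.0, 1
    while True:
        t += rng.expovariate(lam * k * (n - k))
        if rng.random() < 1.0 / (n - k):  # destination just coloured
            return t
        k += 1

# Mean delivery delay for n = 20 nodes at unit contact rate; the
# exact value under this model is H_{n-1} / (lam*(n-1)) ~= 0.187.
rng = random.Random(42)
samples = [epidemic_delay(20, 1.0, rng) for _ in range(2000)]
print(sum(samples) / len(samples))
```

Replacing the single rate `lam` with per-pair or per-class rates is the step at which the homogeneity assumption, and this simple one-dimensional state, breaks down.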
1409.6431 | 190804648 | When nodes in a mobile network cluster together or move according to common external factors (e.g., cars that follow the road network), the resulting contact patterns become correlated. In this work we address the question of modelling such correlated mobility movements for the analysis of intermittently connected networks. We propose to use the concept of node colouring time to characterise dynamic node contact patterns. We analyse how this model compares to existing work, and demonstrate how to extract the relevant data from actual trace files. Moreover, we show how this information can be used to derive the latency distribution of DTN routing protocols. Our model achieves a very good fit to simulated results based on real vehicular mobility traces, whereas models which assumes independent contacts do not. | @cite_18 analyse epidemic routing taking into account more factors such as limited buffer space and signalling. Their model is based on differential equations also assuming independent exponentially distributed inter-contact times. A similar technique is used by @cite_17 , and extended to deal with multiple classes of mobility movements by @cite_22 . | {
"cite_N": [
"@cite_18",
"@cite_22",
"@cite_17"
],
"mid": [
"2109528718",
"2144447514",
"2068755173"
],
"abstract": [
"In this paper, we develop a rigorous, unified framework based on ordinary differential equations (ODEs) to study epidemic routing and its variations. These ODEs can be derived as limits of Markovian models under a natural scaling as the number of nodes increases. While an analytical study of Markovian models is quite complex and numerical solution impractical for large networks, the corresponding ODE models yield closed-form expressions for several performance metrics of interest, and a numerical solution complexity that does not increase with the number of nodes. Using this ODE approach, we investigate how resources such as buffer space and the number of copies made for a packet can be traded for faster delivery, illustrating the differences among various forwarding and recovery schemes considered. We perform model validations through simulation studies. Finally we consider the effect of buffer management by complementing the forwarding models with Markovian and fluid buffer models.",
"Communication networks are traditionally assumed to be connected. However, emerging wireless applications such as vehicular networks, pocket-switched networks, etc., coupled with volatile links, node mobility, and power outages, will require the network to operate despite frequent disconnections. To this end, opportunistic routing techniques have been proposed, where a node may store-and-carry a message for some time, until a new forwarding opportunity arises. Although a number of such algorithms exist, most focus on relatively homogeneous settings of nodes. However, in many envisioned applications, participating nodes might include handhelds, vehicles, sensors, etc. These various \"classes\" have diverse characteristics and mobility patterns, and will contribute quite differently to the routing process. In this paper, we address the problem of routing in intermittently connected wireless networks comprising multiple classes of nodes. We show that proposed solutions, which perform well in homogeneous scenarios, are not as competent in this setting. To this end, we propose a class of routing schemes that can identify the nodes of \"highest utility\" for routing, improving the delay and delivery ratio by four to five times. Additionally, we propose an analytical framework based on fluid models that can be used to analyze the performance of various opportunistic routing strategies, in heterogeneous settings.",
"We study fluid approximations for a class of monotone relay policies in delay tolerant ad-hoc networks. This class includes the epidemic routing and the two-hops routing protocols. We enhance relay policies with probabilistic forwarding, i.e., a message is forwarded to a relay with some probability p. We formulate an optimal control problem where a tradeoff between delay and energy consumption is captured and optimized. We compute both the optimal static value of p as well as the optimal time dependent value of p. We show that the time-dependent problem is optimized by threshold type policies, and we compute explicitly the value of the optimal threshold for some special classes of relay policies."
]
} |
1409.6431 | 190804648 | When nodes in a mobile network cluster together or move according to common external factors (e.g., cars that follow the road network), the resulting contact patterns become correlated. In this work we address the question of modelling such correlated mobility movements for the analysis of intermittently connected networks. We propose to use the concept of node colouring time to characterise dynamic node contact patterns. We analyse how this model compares to existing work, and demonstrate how to extract the relevant data from actual trace files. Moreover, we show how this information can be used to derive the latency distribution of DTN routing protocols. Our model achieves a very good fit to simulated results based on real vehicular mobility traces, whereas models which assumes independent contacts do not. | The assumption of exponential inter-contact times was first challenged by @cite_9 who observed a power law of the distribution for a set of real mobility traces (i.e., meaning that there is a relatively high likelihood of very long inter-contact times). Later work by @cite_8 as well as @cite_21 showed that the power law applied only for a part of the distributions and that from a certain time point, the exponential model better explains the data. Pasarella and Conti @cite_12 present a model suggesting that an aggregate power law distribution can in fact be the result of pairs with different but still independent exponentially distributed contacts. Such heterogeneous but still independent contact patterns have also been analysed in terms of delay performance by Lee and Eun @cite_27 . | {
"cite_N": [
"@cite_8",
"@cite_9",
"@cite_21",
"@cite_27",
"@cite_12"
],
"mid": [
"2092003045",
"2096509679",
"2084182494",
"2026329727",
"1875423025"
],
"abstract": [
"We examine the fundamental properties that determine the basic performance metrics for opportunistic communications. We first consider the distribution of intercontact times between mobile devices. Using a diverse set of measured mobility traces, we find as an invariant property that there is a characteristic time, order of half a day, beyond which the distribution decays exponentially. Up to this value, the distribution in many cases follows a power law, as shown in recent work. This power law finding was previously used to support the hypothesis that intercontact time has a power law tail, and that common mobility models are not adequate. However, we observe that the timescale of interest for opportunistic forwarding may be of the same order as the characteristic time, and thus, the exponential tail is important. We further show that already simple models such as random walk and random waypoint can exhibit the same dichotomy in the distribution of intercontact time as in empirical traces. Finally, we perform an extensive analysis of several properties of human mobility patterns across several dimensions, and we present empirical evidence that the return time of a mobile device to its favorite location site may already explain the observed dichotomy. Our findings suggest that existing results on the performance of forwarding schemes based on power law tails might be overly pessimistic.",
"We study data transfer opportunities between wireless devices carried by humans. We observe that the distribution of the intercontact time (the time gap separating two contacts between the same pair of devices) may be well approximated by a power law over the range [10 minutes; 1 day]. This observation is confirmed using eight distinct experimental data sets. It is at odds with the exponential decay implied by the most commonly used mobility models. In this paper, we study how this newly uncovered characteristic of human mobility impacts one class of forwarding algorithms previously proposed. We use a simplified model based on the renewal theory to study how the parameters of the distribution impact the performance in terms of the delivery delay of these algorithms. We make recommendations for the design of well-founded opportunistic forwarding algorithms in the context of human-carried devices",
"Inter-contact time between moving vehicles is one of the key metrics in vehicular ad hoc networks (VANETs) and central to forwarding algorithms and the end-to-end delay. Due to prohibitive costs, little work has conducted experimental study on inter-contact time in urban vehicular environments. In this paper, we carry out an extensive experiment involving thousands of operational taxies in Shanghai city. Studying the taxi trace data on the frequency and duration of transfer opportunities between taxies, we observe that the tail distribution of the inter-contact time, that is the time gap separating two contacts of the same pair of taxies, exhibits a light tail such as one of an exponential distribution, over a large range of timescale. This observation is in sharp contrast to recent empirical data studies based on human mobility, in which the distribution of the inter-contact time obeys a power law. By performing a least squares fit, we establish an exponential model that can accurately depict the tail behavior of the inter-contact time in VANETs. Our results thus provide fundamental guidelines on design of new vehicular mobility models in urban scenarios, new data forwarding protocols and their performance analysis.",
"Heterogeneity arises in a wide range of scenarios in mobile opportunistic networks and is one of key factors that govern the performance of packet forwarding algorithms. While the heterogeneity has been empirically investigated and exploited in the design of new forwarding algorithms, it has been typically ignored or marginalized when it comes to rigorous performance analysis of such algorithms. In this paper, we develop an analytical framework to quantify the performance gain achievable by exploiting the heterogeneity in mobile nodes' contact dynamics. In particular, we derive a delay upper bound of a heterogeneity-aware forwarding policy per a given number of message copies and obtain its closed-form expression, which enables our quantitative study on the benefit of leveraging underlying heterogeneity structure in the design of forwarding algorithms. We then analytically show that less than 20% of total (unlimited) message copies is only enough under various heterogeneous network settings to achieve the same delay as that obtained using the unlimited message copies when the networks become homogeneous. We also provide independent simulation results including real trace-driven evaluation to support our analytical results.",
"A pioneering body of work in the area of mobile opportunistic networks has shown that characterising inter-contact times between pairs of nodes is crucial. In particular, when inter-contact times follow a power-law distribution, the expected delay of a large family of forwarding protocols may be infinite. The most common approach adopted in the literature to study inter-contact times consists in looking at the distribution of the inter-contact times aggregated over all nodes pairs, assuming it correctly represents the distributions of individual pairs. In this paper we challenge this assumption. We present an analytical model that describes the dependence between the individual pairs and the aggregate distributions. By using the model we show that in heterogeneous networks - when not all pairs contact patterns are the same - most of the time the aggregate distribution is not representative of the individual pairs distributions, and that looking at the aggregate can lead to completely wrong conclusions on the key properties of the network. For example, we show that aggregate power-law inter-contact times (suggesting infinite expected delays) can frequently emerge in networks where individual pairs inter-contact times are exponentially distributed (meaning that the expected delay is finite). From a complementary standpoint, our results show that heterogeneity of individual pairs contact patterns plays a crucial role in determining the aggregate inter-contact times statistics, and that focusing on the latter only can be misleading."
]
} |
1409.6431 | 190804648 | When nodes in a mobile network cluster together or move according to common external factors (e.g., cars that follow the road network), the resulting contact patterns become correlated. In this work we address the question of modelling such correlated mobility movements for the analysis of intermittently connected networks. We propose to use the concept of node colouring time to characterise dynamic node contact patterns. We analyse how this model compares to existing work, and demonstrate how to extract the relevant data from actual trace files. Moreover, we show how this information can be used to derive the latency distribution of DTN routing protocols. Our model achieves a very good fit to simulated results based on real vehicular mobility traces, whereas models which assumes independent contacts do not. | Our work on the other hand, suggests that the exact characteristic of the inter-contact distribution is less relevant when contacts are not independent. Correlated and heterogeneous mobility and the effect on routing have recently been discussed in several papers @cite_6 @cite_16 @cite_4 @cite_26 , but to our knowledge, we are the first to provide a framework that accurately captures the routing latency distribution for real traces with heterogeneous and correlated movements. | {
"cite_N": [
"@cite_16",
"@cite_4",
"@cite_6",
"@cite_26"
],
"mid": [
"",
"2055696952",
"2031089293",
"2137893669"
],
"abstract": [
"",
"We extend the analysis of the scaling laws of wireless ad hoc networks to the case of correlated nodes movements, which are commonly found in real mobility processes. We consider a simple version of the Reference Point Group Mobility model, in which nodes belonging to the same group are constrained to lie in a disc area, whose center moves uniformly across the network according to the i.i.d. model. We assume fast mobility conditions and take as a primary goal the maximization of per-node throughput. We discover that correlated node movements have a huge impact on asymptotic throughput and delay and can sometimes lead to better performance than the one achievable under independent nodes movements.",
"Recent discovery of the mixture (power-law and exponential) behavior of inter-meeting time distribution of mobile nodes presents new challenge to the problem of mobility modeling and its effect on the network performance. Existing studies on this problem via the average inter-meeting time become insufficient when the inter-meeting time distribution starts to deviate from exponential one. This insufficiency necessarily leads to the increasing difficulty in the performance analysis of forwarding algorithms in mobile ad-hoc networks (MANET). In this paper, we analyze the effect of mobility patterns on the inter-meeting time distribution. We first identify the critical timescale in the inter-meeting distribution, at which the transition from power-law to exponential takes place, in terms of the domain size and the statistics of the mobility pattern. We then prove that stronger correlations in mobility patterns lead to heavier (non-exponential) 'head' of the inter-meeting time distribution. We also prove that there exists an invariance property for several contact-based metrics such as inter-meeting, contact, inter-any-contact time under both distance-based (Boolean) and physical interference (SINR) based models, in that the averages of those contact-based metrics do not depend on the degree of correlations in the mobility patterns. Our results collectively suggest a convex ordering relationship among inter-meeting times of various mobility models indexed by their degrees of correlation, which is in good agreement with the ordering of network performance under a set of mobility patterns whose inter-meeting time distributions have power-law 'head' followed by exponential 'tail'.",
"Realistic mobility models are crucial for the simulation of Delay Tolerant and Opportunistic Networks. The long standing benchmark of reproducing realistic pairwise statistics (e.g., contact and inter-contact time distributions) is today mastered by state-of-the-art models. However, mobility models should also reflect the macroscopic community structure of who meets whom. While some existing models reproduce realistic community structure - reflecting groups of nodes who work or live together - they fail in correctly capturing what happens between such communities: they are often connected by few bridging links between nodes who socialize outside of the context and location of their home communities. In a first step, we analyze the bridging behavior in mobility traces and show how it differs to that of mobility models. By analyzing the context and location of contacts, we then show that it is the social nature of bridges which makes them differ from intra-community links. Based on these insights, we propose a Hypergraph to model time-synchronized meetings of nodes from different communities as a social overlay. Applying this as an extension to two existing mobility models we show that it reproduces correct bridging behavior while keeping other features of the original models intact."
]
} |
1409.6813 | 2950477025 | Existing techniques for 3D action recognition are sensitive to viewpoint variations because they extract features from depth images which are viewpoint dependent. In contrast, we directly process pointclouds for cross-view action recognition from unknown and unseen views. We propose the Histogram of Oriented Principal Components (HOPC) descriptor that is robust to noise, viewpoint, scale and action speed variations. At a 3D point, HOPC is computed by projecting the three scaled eigenvectors of the pointcloud within its local spatio-temporal support volume onto the vertices of a regular dodecahedron. HOPC is also used for the detection of Spatio-Temporal Keypoints (STK) in 3D pointcloud sequences so that view-invariant STK descriptors (or Local HOPC descriptors) at these key locations only are used for action recognition. We also propose a global descriptor computed from the normalized spatio-temporal distribution of STKs in 4-D, which we refer to as STK-D. We have evaluated the performance of our proposed descriptors against nine existing techniques on two cross-view and three single-view human action recognition datasets. The experimental results show that our techniques provide significant improvement over state-of-the-art methods. | Based on the data type, action recognition methods can be divided into three categories including color-based, skeleton-based and depth-based methods. In color videos, a significant portion of the existing work has been proposed for single-view action recognition, where the training and test videos are captured from the same view. In order to recognize actions across different views, one approach is to collect data from all possible views and train a separate classifier for each view. However, this approach does not scale well due to the requirement of a large number of labeled samples for each view and it becomes infeasible as the number of action categories increases. 
To overcome this problem, some techniques infer 3D scene structure and use geometric transformations to achieve view invariance @cite_5 @cite_8 @cite_43 @cite_20 @cite_45 . These methods critically rely on accurate detection of the body joints and contours, which are still open problems in real-world settings. Other methods focus on spatio-temporal features which are inherently view-invariant @cite_22 @cite_19 @cite_28 @cite_44 @cite_11 @cite_32 @cite_48 . However, these methods have limitations as some of them require access to mocap data while others compromise discriminative power to achieve view invariance @cite_60 . | {
"cite_N": [
"@cite_22",
"@cite_8",
"@cite_28",
"@cite_48",
"@cite_60",
"@cite_32",
"@cite_44",
"@cite_43",
"@cite_19",
"@cite_45",
"@cite_5",
"@cite_20",
"@cite_11"
],
"mid": [
"1984219317",
"2166070055",
"2125854396",
"2013076218",
"",
"2010399676",
"2097290566",
"2156135524",
"2021733262",
"",
"",
"2104020497",
"1575912008"
],
"abstract": [
"Human activity recognition is central to many practical applications, ranging from visual surveillance to gaming interfacing. Most approaches addressing this problem are based on localized spatio-temporal features that can vary significantly when the viewpoint changes. As a result, their performances rapidly deteriorate as the difference between the viewpoints of the training and testing data increases. In this paper, we introduce a new type of feature, the “Hankelet” that captures dynamic properties of short tracklets. While Hankelets do not carry any spatial information, they bring invariant properties to changes in viewpoint that allow for robust cross-view activity recognition, i.e. when actions are recognized using a classifier trained on data from a different viewpoint. Our experiments on the IXMAS dataset show that using Hankelets improves the state of the art performance by over 20%.",
"In this paper, we address the problem of learning compact, view-independent, realistic 3D models of human actions recorded with multiple cameras, for the purpose of recognizing those same actions from a single or few cameras, without prior knowledge about the relative orientations between the cameras and the subjects. To this aim, we propose a new framework where we model actions using three dimensional occupancy grids, built from multiple viewpoints, in an exemplar-based HMM. The novelty is that a 3D reconstruction is not required during the recognition phase; instead, learned 3D exemplars are used to produce 2D image information that is compared to the observations. Parameters that describe image projections are added as latent variables in the recognition process. In addition, the temporal Markov dependency applied to view parameters allows them to evolve during recognition as with a smoothly moving camera. The effectiveness of the framework is demonstrated with experiments on real datasets and with challenging recognition scenarios.",
"Analysis of human perception of motion shows that information for representing the motion is obtained from the dramatic changes in the speed and direction of the trajectory. In this paper, we present a computational representation of human action to capture these dramatic changes using spatio-temporal curvature of 2-D trajectory. This representation is compact, view-invariant, and is capable of explaining an action in terms of meaningful action units called dynamic instants and intervals. A dynamic instant is an instantaneous entity that occurs for only one frame, and represents an important change in the motion characteristics. An interval represents the time period between two dynamic instants during which the motion characteristics do not change. Starting without a model, we use this representation for recognition and incremental learning of human actions. The proposed method can discover instances of the same action performed by different people from different view points. Experiments on 47 actions performed by 7 individuals in an environment with no constraints shows the robustness of the proposed method.",
"Action recognition is an important and challenging topic in computer vision, with many important applications including video surveillance, automated cinematography and understanding of social interaction. Yet, most current work in gesture or action interpretation remains rooted in view-dependent representations. This paper introduces Motion History Volumes (MHV) as a free-viewpoint representation for human actions in the case of multiple calibrated, and background-subtracted, video cameras. We present algorithms for computing, aligning and comparing MHVs of different actions performed by different people in a variety of viewpoints. Alignment and comparisons are performed efficiently using Fourier transforms in cylindrical coordinates around the vertical axis. Results indicate that this representation can be used to learn and recognize basic human action classes, independently of gender, body size and viewpoint.",
"",
"Human action in video sequences can be seen as silhouettes of a moving torso and protruding limbs undergoing articulated motion. We regard human actions as three-dimensional shapes induced by the silhouettes in the space-time volume. We adopt a recent approach by (2004) for analyzing 2D shapes and generalize it to deal with volumetric space-time action shapes. Our method utilizes properties of the solution to the Poisson equation to extract space-time features such as local space-time saliency, action dynamics, shape structure and orientation. We show that these features are useful for action recognition, detection and clustering. The method is fast, does not require video alignment and is applicable in (but not limited to) many scenarios where the background is known. Moreover, we demonstrate the robustness of our method to partial occlusions, non-rigid deformations, significant changes in scale and viewpoint, high irregularities in the performance of an action and low quality video",
"A first step towards an understanding of the semantic content in a video is the reliable detection and recognition of actions performed by objects. This is a difficult problem due to the enormous variability in an action's appearance when seen from different viewpoints and or at different times. In this paper we address the recognition of actions by taking a novel approach that models actions as special types of 3D objects. Specifically, we observe that any action can be represented as a generalized cylinder, called the action cylinder. Reliable recognition is achieved by recovering the viewpoint transformation between the reference (model) and given action cylinders. A set of 8 corresponding points from time-wise corresponding cross-sections is shown to be sufficient to align the two cylinders under perspective projection. A surprising conclusion from visualizing actions as objects is that rigid, articulated, and nonrigid actions can all be modeled in a uniform framework.",
"3D human pose recovery is considered as a fundamental step in view-invariant human action recognition. However, inferring 3D poses from a single view usually is slow due to the large number of parameters that need to be estimated and recovered poses are often ambiguous due to the perspective projection. We present an approach that does not explicitly infer 3D pose at each frame. Instead, from existing action models we search for a series of actions that best match the input sequence. In our approach, each action is modeled as a series of synthetic 2D human poses rendered from a wide range of viewpoints. The constraints on transition of the synthetic poses is represented by a graph model called Action Net. Given the input, silhouette matching between the input frames and the key poses is performed first using an enhanced Pyramid Match Kernel algorithm. The best matched sequence of actions is then tracked using the Viterbi algorithm. We demonstrate this approach on a challenging video sets consisting of 15 complex action classes.",
"This paper presents an approach for viewpoint invariant human action recognition, an area that has received scant attention so far, relative to the overall body of work in human action recognition. It has been established previously that there exist no invariants for 3D to 2D projection. However, there exist a wealth of techniques in 2D invariance that can be used to advantage in 3D to 2D projection. We exploit these techniques and model actions in terms of view-invariant canonical body poses and trajectories in 2D invariance space, leading to a simple and effective way to represent and recognize human actions from a general viewpoint. We first evaluate the approach theoretically and show why a straightforward application of the 2D invariance idea will not work. We describe strategies designed to overcome inherent problems in the straightforward approach and outline the recognition algorithm. We then present results on 2D projections of publicly available human motion capture data as well as on manually segmented real image sequences. In addition to robustness to viewpoint change, the approach is robust enough to handle different people, minor variabilities in a given action, and the speed of action (and hence, frame-rate) while encoding sufficient distinction among actions.",
"",
"",
"We present a vision system for the 3-D model-based tracking of unconstrained human movement. Using image sequences acquired simultaneously from multiple views, we recover the 3-D body pose at each time instant without the use of markers. The pose-recovery problem is formulated as a search problem and entails finding the pose parameters of a graphical human model whose synthesized appearance is most similar to the actual appearance of the real human in the multi-view images. The models used for this purpose are acquired from the images. We use a decomposition approach and a best-first technique to search through the high dimensional pose parameter space. A robust variant of chamfer matching is used as a fast similarity measure between synthesized and real edge images. We present initial tracking results from a large new Humans-in-Action (HIA) database containing more than 2500 frames in each of four orthogonal views. They contain subjects involved in a variety of activities, of various degrees of complexity, ranging from the more simple one-person hand waving to the challenging two-person close interaction in the Argentine Tango.",
"This paper presents a general framework for image-based analysis of 3D repeating motions that addresses two limitations in the state of the art. First, the assumption that a motion be perfectly even from one cycle to the next is relaxed. Real repeating motions tend not to be perfectly even, i.e., the length of a cycle varies through time because of physically important changes in the scene. A generalization of period is defined for repeating motions that makes this temporal variation explicit. This representation, called the period trace, is compact and purely temporal, describing the evolution of an object or scene without reference to spatial quantities such as position or velocity. Second, the requirement that the observer be stationary is removed. Observer motion complicates image analysis because an object that undergoes a 3D repeating motion will generally not produce a repeating sequence of images. Using principles of affine invariance, we derive necessary and sufficient conditions for an image sequence to be the projection of a 3D repeating motion, accounting for changes in viewpoint and other camera parameters. Unlike previous work in visual invariance, however, our approach is applicable to objects and scenes whose motion is highly non-rigid. Experiments on real image sequences demonstrate how the approach may be used to detect several types of purely temporal motion features, relating to motion trends and irregularities. Applications to athletic and medical motion analysis are discussed."
]
} |
1409.6813 | 2950477025 | Existing techniques for 3D action recognition are sensitive to viewpoint variations because they extract features from depth images which are viewpoint dependent. In contrast, we directly process pointclouds for cross-view action recognition from unknown and unseen views. We propose the Histogram of Oriented Principal Components (HOPC) descriptor that is robust to noise, viewpoint, scale and action speed variations. At a 3D point, HOPC is computed by projecting the three scaled eigenvectors of the pointcloud within its local spatio-temporal support volume onto the vertices of a regular dodecahedron. HOPC is also used for the detection of Spatio-Temporal Keypoints (STK) in 3D pointcloud sequences so that view-invariant STK descriptors (or Local HOPC descriptors) at these key locations only are used for action recognition. We also propose a global descriptor computed from the normalized spatio-temporal distribution of STKs in 4-D, which we refer to as STK-D. We have evaluated the performance of our proposed descriptors against nine existing techniques on two cross-view and three single-view human action recognition datasets. The experimental results show that our techniques provide significant improvement over state-of-the-art methods. | More recently, knowledge transfer based methods @cite_17 @cite_55 @cite_12 @cite_6 @cite_4 @cite_35 @cite_24 @cite_40 have become popular. These methods find a view independent latent space in which features extracted from different views are directly comparable. Such methods are either not applicable or perform poorly when the recognition is performed on videos from unknown and, more importantly, from unseen views. To overcome this problem, @cite_17 proposed cross-view action representation by exploiting the compositional structure in spatio-temporal patterns and geometrical relations among views. 
Although their method can be applied to action recognition from unknown and unseen views, it requires 3D skeleton data for training which is not always available. Our proposed approach also falls in this category except that it uses 3D pointcloud sequences and does not require skeleton data. To the best of our knowledge, we are the first to propose cross-view action recognition using 3D pointcloud videos. | {
"cite_N": [
"@cite_35",
"@cite_4",
"@cite_55",
"@cite_6",
"@cite_24",
"@cite_40",
"@cite_12",
"@cite_17"
],
"mid": [
"2536349189",
"1487322600",
"2057232399",
"2159001013",
"2010243644",
"2090834590",
"2169560406",
"2949462896"
],
"abstract": [
"Recognition using appearance features is confounded by phenomena that cause images of the same object to look different, or images of different objects to look the same. This may occur because the same object looks different from different viewing directions, or because two generally different objects have views from which they look similar. In this paper, we introduce the idea of discriminative aspect, a set of latent variables that encode these phenomena. Changes in view direction are one cause of changes in discriminative aspect, but others include changes in texture or lighting. However, images are not labelled with relevant discriminative aspect parameters. We describe a method to improve discrimination by inferring and then using latent discriminative aspect parameters. We apply our method to two parallel problems: object category recognition and human activity recognition. In each case, appearance features are powerful given appropriate training data, but traditionally fail badly under large changes in view. Our method can recognize an object quite reliably in a view for which it possesses no training example. Our method also reweights features to discount accidental similarities in appearance. We demonstrate that our method produces a significant improvement on the state of the art for both object and activity recognition.",
"Appearance features are good at discriminating activities in a fixed view, but behave poorly when aspect is changed. We describe a method to build features that are highly stable under change of aspect. It is not necessary to have multiple views to extract our features. Our features make it possible to learn a discriminative model of activity in one view, and spot that activity in another view, for which one might poses no labeled examples at all. Our construction uses labeled examples to build activity models, and unlabeled, but corresponding, examples to build an implicit model of how appearance changes with aspect. We demonstrate our method with challenging sequences of real human motion, where discriminative methods built on appearance alone fail badly.",
"We describe a new approach to transfer knowledge across views for action recognition by using examples from a large collection of unlabelled mocap data. We achieve this by directly matching purely motion based features from videos to mocap. Our approach recovers 3D pose sequences without performing any body part tracking. We use these matches to generate multiple motion projections and thus add view invariance to our action recognition model. We also introduce a closed form solution for approximate non-linear Circulant Temporal Encoding (nCTE), which allows us to efficiently perform the matches in the frequency domain. We test our approach on the challenging unsupervised modality of the IXMAS dataset, and use publicly available motion capture data for matching. Without any additional annotation effort, we are able to significantly outperform the current state of the art.",
"In this paper, we propose a novel method for cross-view action recognition via a continuous virtual path which connects the source view and the target view. Each point on this virtual path is a virtual view which is obtained by a linear transformation of the action descriptor. All the virtual views are concatenated into an infinite-dimensional feature to characterize continuous changes from the source to the target view. However, these infinite-dimensional features cannot be used directly. Thus, we propose a virtual view kernel to compute the value of similarity between two infinite-dimensional features, which can be readily used to construct any kernelized classifiers. In addition, there are a lot of unlabeled samples from the target view, which can be utilized to improve the performance of classifiers. Thus, we present a constraint strategy to explore the information contained in the unlabeled samples. The rationality behind the constraint is that any action video belongs to only one class. Our method is verified on the IXMAS dataset, and the experimental results demonstrate that our method achieves better performance than the state-of-the-art methods.",
"In this paper, we present a novel approach to recognizing human actions from different views by view knowledge transfer. An action is originally modelled as a bag of visual-words (BoVW), which is sensitive to view changes. We argue that, as opposed to visual words, there exist some higher level features which can be shared across views and enable the connection of action models for different views. To discover these features, we use a bipartite graph to model two view-dependent vocabularies, then apply bipartite graph partitioning to co-cluster two vocabularies into visual-word clusters called bilingual-words (i.e., high-level features), which can bridge the semantic gap across view-dependent vocabularies. Consequently, we can transfer a BoVW action model into a bag-of-bilingual-words (BoBW) model, which is more discriminative in the presence of view changes. We tested our approach on the IXMAS data set and obtained very promising results. Moreover, to further fuse view knowledge from multiple views, we apply a Locally Weighted Ensemble scheme to dynamically weight transferred models based on the local distribution structure around each test example. This process can further improve the average recognition rate by about 7 .",
"We propose an approach for cross-view action recognition by way of ‘virtual views’ that connect the action descriptors extracted from one (source) view to those extracted from another (target) view. Each virtual view is associated with a linear transformation of the action descriptor, and the sequence of transformations arising from the sequence of virtual views aims at bridging the source and target views while preserving discrimination among action categories. Our approach is capable of operating without access to labeled action samples in the target view and without access to corresponding action instances in the two views, and it also naturally incorporate and exploit corresponding instances or partial labeling in the target view when they are available. The proposed approach achieves improved or competitive performance relative to existing methods when instance correspondences or target labels are available, and it goes beyond the capabilities of these methods by providing some level of discrimination even when neither correspondences nor target labels exist.",
"We present an approach to jointly learn a set of view-specific dictionaries and a common dictionary for cross-view action recognition. The set of view-specific dictionaries is learned for specific views while the common dictionary is shared across different views. Our approach represents videos in each view using both the corresponding view-specific dictionary and the common dictionary. More importantly, it encourages the set of videos taken from different views of the same action to have similar sparse representations. In this way, we can align view-specific features in the sparse feature spaces spanned by the view-specific dictionary set and transfer the view-shared features in the sparse feature space spanned by the common dictionary. Meanwhile, the incoherence between the common dictionary and the view-specific dictionary set enables us to exploit the discrimination information encoded in view-specific features and view-shared features separately. In addition, the learned common dictionary not only has the capability to represent actions from unseen views, but also makes our approach effective in a semi-supervised setting where no correspondence videos exist and only a few labels exist in the target view. Extensive experiments using the multi-view IXMAS dataset demonstrate that our approach outperforms many recent approaches for cross-view action recognition.",
"Existing methods on video-based action recognition are generally view-dependent, i.e., performing recognition from the same views seen in the training data. We present a novel multiview spatio-temporal AND-OR graph (MST-AOG) representation for cross-view action recognition, i.e., the recognition is performed on the video from an unknown and unseen view. As a compositional model, MST-AOG compactly represents the hierarchical combinatorial structures of cross-view actions by explicitly modeling the geometry, appearance and motion variations. This paper proposes effective methods to learn the structure and parameters of MST-AOG. The inference based on MST-AOG enables action recognition from novel views. The training of MST-AOG takes advantage of the 3D human skeleton data obtained from Kinect cameras to avoid annotating enormous multi-view video frames, which is error-prone and time-consuming, but the recognition does not need 3D information and is based on 2D video input. A new Multiview Action3D dataset has been created and will be released. Extensive experiments have demonstrated that this new action representation significantly improves the accuracy and robustness for cross-view action recognition on 2D videos."
]
} |
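Several of the abstracts above describe cross-view recognition via "virtual views", where each point on a virtual path applies a linear transformation to the action descriptor and a kernel accumulates similarity along the path. The following is a minimal numpy sketch of that idea under stated assumptions: the source-to-target transform `A` and the summed inner-product form are illustrative stand-ins, not the papers' actual learned construction.

```python
import numpy as np

def virtual_path_kernel(x_src, x_tgt, A, n_views=10):
    """Approximate a virtual-view kernel by sampling the virtual path.

    Each sampled virtual view applies T(t), a blend of the identity
    (source view) and A (an assumed source-to-target transform), to
    both descriptors; the kernel sums their inner products along the
    path. Illustrative sketch only.
    """
    d = len(x_src)
    k = 0.0
    for t in np.linspace(0.0, 1.0, n_views):
        T = (1.0 - t) * np.eye(d) + t * A  # virtual view at position t
        k += float((T @ x_src) @ (T @ x_tgt))
    return k
```

With `A` equal to the identity the path degenerates to the source view and the kernel reduces to `n_views` times the plain dot product; a real system would learn `A` (or a sequence of transforms) from data rather than assume it.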
1409.6813 | 2950477025 | Existing techniques for 3D action recognition are sensitive to viewpoint variations because they extract features from depth images which are viewpoint dependent. In contrast, we directly process pointclouds for cross-view action recognition from unknown and unseen views. We propose the Histogram of Oriented Principal Components (HOPC) descriptor that is robust to noise, viewpoint, scale and action speed variations. At a 3D point, HOPC is computed by projecting the three scaled eigenvectors of the pointcloud within its local spatio-temporal support volume onto the vertices of a regular dodecahedron. HOPC is also used for the detection of Spatio-Temporal Keypoints (STK) in 3D pointcloud sequences so that view-invariant STK descriptors (or Local HOPC descriptors) at these key locations only are used for action recognition. We also propose a global descriptor computed from the normalized spatio-temporal distribution of STKs in 4-D, which we refer to as STK-D. We have evaluated the performance of our proposed descriptors against nine existing techniques on two cross-view and three single-view human action recognition datasets. The Experimental results show that our techniques provide significant improvement over state-of-the-art methods. | Motion trajectory based action recognition methods @cite_3 @cite_34 @cite_58 @cite_22 are also not reliable in depth sequences @cite_37 . Therefore, recent depth based action recognition methods resorted to alternative ways to extract more reliable interest points. @cite_7 proposed Haar features to be extracted from each random subvolume. Xia and Aggarwal @cite_56 proposed a filtering method to extract spatio-temporal interest points. Their approach fails when the action execution speed is faster than the flip of the signal caused by sensor noise. Moreover, both techniques are not robust to viewpoint variations. | {
"cite_N": [
"@cite_37",
"@cite_22",
"@cite_7",
"@cite_58",
"@cite_3",
"@cite_56",
"@cite_34"
],
"mid": [
"2085735683",
"1984219317",
"2169251375",
"2117082993",
"2105101328",
"",
"2126574503"
],
"abstract": [
"We present a new descriptor for activity recognition from videos acquired by a depth sensor. Previous descriptors mostly compute shape and motion features independently, thus, they often fail to capture the complex joint shape-motion cues at pixel-level. In contrast, we describe the depth sequence using a histogram capturing the distribution of the surface normal orientation in the 4D space of time, depth, and spatial coordinates. To build the histogram, we create 4D projectors, which quantize the 4D space and represent the possible directions for the 4D normal. We initialize the projectors using the vertices of a regular polychoron. Consequently, we refine the projectors using a discriminative density measure, such that additional projectors are induced in the directions where the 4D normals are more dense and discriminative. Through extensive experiments, we demonstrate that our descriptor better captures the joint shape-motion cues in the depth sequence, and thus outperforms the state-of-the-art on all relevant benchmarks.",
"Human activity recognition is central to many practical applications, ranging from visual surveillance to gaming interfacing. Most approaches addressing this problem are based on localized spatio-temporal features that can vary significantly when the viewpoint changes. As a result, their performances rapidly deteriorate as the difference between the viewpoints of the training and testing data increases. In this paper, we introduce a new type of feature, the “Hankelet” that captures dynamic properties of short tracklets. While Hankelets do not carry any spatial information, they bring invariant properties to changes in viewpoint that allow for robust cross-view activity recognition, i.e. when actions are recognized using a classifier trained on data from a different viewpoint. Our experiments on the IXMAS dataset show that using Hanklets improves the state of the art performance by over 20 .",
"We study the problem of action recognition from depth sequences captured by depth cameras, where noise and occlusion are common problems because they are captured with a single commodity camera. In order to deal with these issues, we extract semi-local features called random occupancy pattern ROP features, which employ a novel sampling scheme that effectively explores an extremely large sampling space. We also utilize a sparse coding approach to robustly encode these features. The proposed approach does not require careful parameter tuning. Its training is very fast due to the use of the high-dimensional integral image, and it is robust to the occlusions. Our technique is evaluated on two datasets captured by commodity depth cameras: an action dataset and a hand gesture dataset. Our classification results are superior to those obtained by the state of the art approaches on both datasets.",
"Recognition of human actions in a video acquired by a moving camera typically requires standard preprocessing steps such as motion compensation, moving object detection and object tracking. The errors from the motion compensation step propagate to the object detection stage, resulting in miss-detections, which further complicates the tracking stage, resulting in cluttered and incorrect tracks. Therefore, action recognition from a moving camera is considered very challenging. In this paper, we propose a novel approach which does not follow the standard steps, and accordingly avoids the aforementioned difficulties. Our approach is based on Lagrangian particle trajectories which are a set of dense trajectories obtained by advecting optical flow over time, thus capturing the ensemble motions of a scene. This is done in frames of unaligned video, and no object detection is required. In order to handle the moving camera, we propose a novel approach based on low rank optimization, where we decompose the trajectories into their camera-induced and object-induced components. Having obtained the relevant object motion trajectories, we compute a compact set of chaotic invariant features which captures the characteristics of the trajectories. Consequently, a SVM is employed to learn and recognize the human actions using the computed motion features. We performed intensive experiments on multiple benchmark datasets and two new aerial datasets called ARG and APHill, and obtained promising results.",
"Recently dense trajectories were shown to be an efficient video representation for action recognition and achieved state-of-the-art results on a variety of datasets. This paper improves their performance by taking into account camera motion to correct them. To estimate camera motion, we match feature points between frames using SURF descriptors and dense optical flow, which are shown to be complementary. These matches are, then, used to robustly estimate a homography with RANSAC. Human motion is in general different from camera motion and generates inconsistent matches. To improve the estimation, a human detector is employed to remove these matches. Given the estimated camera motion, we remove trajectories consistent with it. We also use this estimation to cancel out camera motion from the optical flow. This significantly improves motion-based descriptors, such as HOF and MBH. Experimental results on four challenging action datasets (i.e., Hollywood2, HMDB51, Olympic Sports and UCF50) significantly outperform the current state of the art.",
"",
"Feature trajectories have shown to be efficient for representing videos. Typically, they are extracted using the KLT tracker or matching SIFT descriptors between frames. However, the quality as well as quantity of these trajectories is often not sufficient. Inspired by the recent success of dense sampling in image classification, we propose an approach to describe videos by dense trajectories. We sample dense points from each frame and track them based on displacement information from a dense optical flow field. Given a state-of-the-art optical flow algorithm, our trajectories are robust to fast irregular motions as well as shot boundaries. Additionally, dense trajectories cover the motion information in videos well. We, also, investigate how to design descriptors to encode the trajectory information. We introduce a novel descriptor based on motion boundary histograms, which is robust to camera motion. This descriptor consistently outperforms other state-of-the-art descriptors, in particular in uncontrolled realistic videos. We evaluate our video description in the context of action classification with a bag-of-features approach. Experimental results show a significant improvement over the state of the art on four datasets of varying difficulty, i.e. KTH, YouTube, Hollywood2 and UCF sports."
]
} |
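The HOPC descriptor in the row above projects the three scaled eigenvectors of a local pointcloud onto the vertices of a regular dodecahedron. A rough numpy sketch of that computation follows; the choice of bin directions, the half-wave rectification, and the normalization are simplifying assumptions here (the paper's quantization is more involved).

```python
import numpy as np

def hopc_descriptor(points, bin_dirs):
    """Sketch of a Histogram of Oriented Principal Components.

    points: (N, 3) local spatio-temporal neighbourhood of a 3D point.
    bin_dirs: (B, 3) unit bin directions; the paper uses the 20
    vertices of a regular dodecahedron, but any unit set works for
    this illustration.
    """
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    evals, evecs = np.linalg.eigh(cov)       # eigenvalues ascending
    hist = np.zeros(len(bin_dirs))
    for lam, v in zip(evals, evecs.T):
        proj = bin_dirs @ (lam * v)          # project scaled eigenvector
        proj[proj < 0] = 0.0                 # keep only aligned bins
        hist += proj
    return hist / (np.linalg.norm(hist) + 1e-12)
```

For example, using the six ±axis directions as stand-ins for the dodecahedron vertices yields a 6-bin descriptor per neighbourhood.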
1409.6080 | 2395449577 | A tracklet is a short sequence of detections of an entity of interest, such as a person's face, in contiguous frames of a video. In this paper we consider clustering tracklets in long videos, an important problem of current interest in Computer Vision. It involves identifying tracklets in short segments of the video and finding associations between tracklets over the entire video. These tasks require careful modelling of spatio-temporal cues. State of the art [27] proposes a probabilistic model which incorporates the spatio-temporal cues by parametrizing them through clique potentials in HMRF. The attendant learning and inference problems make the model un-wieldy resulting in failure to handle long videos with many cluster labels. In this paper we circumvent the problem of explicitly encoding spatio-temporal cues by exploiting Temporal Coherence (TC). The major contribution of the paper is to develop Temporally Coherent Chinese Restaurant Process (TC-CRP), a novel Bayesian Non-parametric (BNP) prior which models Temporal Coherence. To the best of our knowledge this is the first work which models TC in a BNP framework and thus makes a significant addition to BNP priors suited for video analytics. On an interesting problem of discovering persons and their tracklets from user-uploaded videos of long TV series episodes from Youtube we show that TC-CRP is very effective. It can also filter out tracklets resulting from false detections. We explore alternative approaches based on low-rank matrix recovery and constrained subspace clustering, but find these to be very slow and less accurate than our method. Finally, we show that TC-CRP can be useful in Low-rank Matrix Recovery when the desired matrix has sets of identical columns. | Person discovery is a task which has recently received attention in Computer Vision. Cast Listing @cite_33 aims to choose a representative subset of the face detections or face tracks in a movie/TV series episode.
Another task is to label faces in a video, but this requires movie scripts @cite_10 or labelled training videos having the same characters @cite_23 . Scene segmentation and person discovery are done simultaneously using a generative model in @cite_30 , but once again with the help of scripts. An unsupervised version of this task is considered in @cite_32 , which performs face clustering in the presence of spatio-temporal constraints, as already discussed. For this purpose they use a Markov Random Field and encode the constraints as clique potentials. Another recent approach to face clustering is @cite_31 , which incorporates some spatio-temporal constraints into subspace clustering. | {
"cite_N": [
"@cite_30",
"@cite_33",
"@cite_32",
"@cite_23",
"@cite_31",
"@cite_10"
],
"mid": [
"2075505710",
"",
"1969014310",
"2090011161",
"1997201895",
"2113333558"
],
"abstract": [
"In this paper, we propose an automatic approach to simultaneously name faces and discover scenes in TV shows. We follow the multi-modal idea of utilizing script to assist video content understanding, but without using timestamp (provided by script-subtitles alignment) as the connection. Instead, the temporal relation between faces in the video and names in the script is investigated in our approach, and an global optimal video-script alignment is inferred according to the character correspondence. The contribution of this paper is two-fold: (1) we propose a generative model, named TVParser, to depict the temporal character correspondence between video and script, from which face-name relationship can be automatically learned as a model parameter, and meanwhile, video scene structure can be effectively inferred as a hidden state sequence; (2) we find fast algorithms to accelerate both model parameter learning and state inference, resulting in an efficient and global optimal alignment. We conduct extensive comparative experiments on popular TV series and report comparable and even superior performance over existing methods.",
"",
"In this paper, we focus on face clustering in videos. Given the detected faces from real-world videos, we partition all faces into K disjoint clusters. Different from clustering on a collection of facial images, the faces from videos are organized as face tracks and the frame index of each face is also provided. As a result, many pair wise constraints between faces can be easily obtained from the temporal and spatial knowledge of the face tracks. These constraints can be effectively incorporated into a generative clustering model based on the Hidden Markov Random Fields (HMRFs). Within the HMRF model, the pair wise constraints are augmented by label-level and constraint-level local smoothness to guide the clustering process. The parameters for both the unary and the pair wise potential functions are learned by the simulated field algorithm, and the weights of constraints can be easily adjusted. We further introduce an efficient clustering framework specially for face clustering in videos, considering that faces in adjacent frames of the same face track are very similar. The framework is applicable to other clustering algorithms to significantly reduce the computational cost. Experiments on two face data sets from real-world videos demonstrate the significantly improved performance of our algorithm over state-of-the art algorithms.",
"",
"In this paper, we address the subspace clustering problem. Given a set of data samples (vectors) approximately drawn from a union of multiple subspaces, our goal is to cluster the samples into their respective subspaces and remove possible outliers as well. To this end, we propose a novel objective function named Low-Rank Representation (LRR), which seeks the lowest rank representation among all the candidates that can represent the data samples as linear combinations of the bases in a given dictionary. It is shown that the convex program associated with LRR solves the subspace clustering problem in the following sense: When the data is clean, we prove that LRR exactly recovers the true subspace structures; when the data are contaminated by outliers, we prove that under certain conditions LRR can exactly recover the row space of the original data and detect the outlier as well; for data corrupted by arbitrary sparse errors, LRR can also approximately recover the row space with theoretical guarantees. Since the subspace membership is provably determined by the row space, these further imply that LRR can perform robust subspace clustering and error correction in an efficient and effective way.",
"Identification of characters in films, although very intuitive to humans, still poses a significant challenge to computer methods. In this paper, we investigate the problem of identifying characters in feature-length films using video and film script. Different from the state-of-the-art methods on naming faces in the videos, most of which used the local matching between a visible face and one of the names extracted from the temporally local video transcript, we attempt to do a global matching between names and clustered face tracks under the circumstances that there are not enough local name cues that can be found. The contributions of our work include: 1) A graph matching method is utilized to build face-name association between a face affinity network and a name affinity network which are, respectively, derived from their own domains (video and script). 2) An effective measure of face track distance is presented for face track clustering. 3) As an application, the relationship between characters is mined using social network analysis. The proposed framework is able to create a new experience on character-centered film browsing. Experiments are conducted on ten feature-length films and give encouraging results."
]
} |
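The TC-CRP prior in the row above couples a Chinese Restaurant Process with temporal coherence. Below is a toy generative sketch, assuming a simple rule — keep the previous tracklet's cluster with probability `kappa`, otherwise draw from a plain CRP — which is an illustrative approximation, not the paper's exact prior.

```python
import random

def tc_crp_sample(n, alpha=1.0, kappa=0.7, seed=0):
    """Toy generative sketch of a temporally coherent CRP prior.

    With probability kappa a tracklet keeps the previous tracklet's
    cluster (temporal coherence); otherwise it draws a cluster from a
    standard Chinese Restaurant Process with concentration alpha.
    """
    rng = random.Random(seed)
    labels, counts = [], []
    for _ in range(n):
        if labels and rng.random() < kappa:
            z = labels[-1]                     # stay with previous cluster
        else:
            # CRP step: existing cluster k w.p. counts[k]/(N + alpha),
            # a brand-new cluster w.p. alpha/(N + alpha).
            r = rng.random() * (sum(counts) + alpha)
            z, acc = len(counts), 0.0
            for k, c in enumerate(counts):
                acc += c
                if r < acc:
                    z = k
                    break
        if z == len(counts):
            counts.append(0)                   # open a new cluster
        counts[z] += 1
        labels.append(z)
    return labels
```

Because new clusters are opened lazily, the number of clusters is not fixed in advance — exactly the property that distinguishes person discovery from ordinary tracklet association.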
1409.6080 | 2395449577 | A tracklet is a short sequence of detections of an entity of interest, such as a person's face, in contiguous frames of a video. In this paper we consider clustering tracklets in long videos, an important problem of current interest in Computer Vision. It involves identifying tracklets in short segments of the video and finding associations between tracklets over the entire video. These tasks require careful modelling of spatio-temporal cues. State of the art [27] proposes a probabilistic model which incorporates the spatio-temporal cues by parametrizing them through clique potentials in HMRF. The attendant learning and inference problems make the model un-wieldy resulting in failure to handle long videos with many cluster labels. In this paper we circumvent the problem of explicitly encoding spatio-temporal cues by exploiting Temporal Coherence (TC). The major contribution of the paper is to develop Temporally Coherent Chinese Restaurant Process (TC-CRP), a novel Bayesian Non-parametric (BNP) prior which models Temporal Coherence. To the best of our knowledge this is the first work which models TC in a BNP framework and thus makes a significant addition to BNP priors suited for video analytics. On an interesting problem of discovering persons and their tracklets from user-uploaded videos of long TV series episodes from Youtube we show that TC-CRP is very effective. It can also filter out tracklets resulting from false detections. We explore alternative approaches based on low-rank matrix recovery and constrained subspace clustering, but find these to be very slow and less accurate than our method. Finally, we show that TC-CRP can be useful in Low-rank Matrix Recovery when the desired matrix has sets of identical columns. | Object tracking is a core topic in computer vision, in which a target object is located in each frame based on appearance similarity and spatio-temporal locality.
A more advanced task is multi-target tracking @cite_27 , in which several targets are present per frame. A tracking paradigm that is particularly helpful in multi-target tracking is tracking-by-detection @cite_16 , where object-specific detectors like @cite_9 are run per frame (or on a subset of frames), and the detection responses are linked to form tracks. From this came the concept of tracklet association @cite_11 , which attempts to do the linking hierarchically. This requires pairwise similarity measures between tracklets. Multi-target tracking via tracklets is usually cast as Bipartite Matching, which is solved using the Hungarian Algorithm. Tracklet association and face clustering are done simultaneously in @cite_20 using HMRF. The main difference between face tracklet clustering and person discovery is that the number of clusters to be formed is not known in the latter. | {
"cite_N": [
"@cite_9",
"@cite_27",
"@cite_16",
"@cite_20",
"@cite_11"
],
"mid": [
"2168356304",
"2035153336",
"2138302688",
"",
"2122469558"
],
"abstract": [
"We describe an object detection system based on mixtures of multiscale deformable part models. Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. Our system relies on new methods for discriminative training with partially labeled data. We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI--SVM in terms of latent variables. A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function.",
"We introduce an online learning approach for multitarget tracking. Detection responses are gradually associated into tracklets in multiple levels to produce final tracks. Unlike most previous approaches which only focus on producing discriminative motion and appearance models for all targets, we further consider discriminative features for distinguishing difficult pairs of targets. The tracking problem is formulated using an online learned CRF model, and is transformed into an energy minimization problem. The energy functions include a set of unary functions that are based on motion and appearance models for discriminating all targets, as well as a set of pairwise functions that are based on models for differentiating corresponding pairs of tracklets. The online CRF approach is more powerful at distinguishing spatially close targets with similar appearances, as well as in dealing with camera motions. An efficient algorithm is introduced for finding an association with low energy cost. We evaluate our approach on three public data sets, and show significant improvements compared with several state-of-art methods.",
"Both detection and tracking people are challenging problems, especially in complex real world scenes that commonly involve multiple people, complicated occlusions, and cluttered or even moving backgrounds. People detectors have been shown to be able to locate pedestrians even in complex street scenes, but false positives have remained frequent. The identification of particular individuals has remained challenging as well. Tracking methods are able to find a particular individual in image sequences, but are severely challenged by real-world scenarios such as crowded street scenes. In this paper, we combine the advantages of both detection and tracking in a single framework. The approximate articulation of each person is detected in every frame based on local features that model the appearance of individual body parts. Prior knowledge on possible articulations and temporal coherency within a walking cycle are modeled using a hierarchical Gaussian process latent variable model (hGPLVM). We show how the combination of these results improves hypotheses for position and articulation of each person in several subsequent frames. We present experimental results that demonstrate how this allows to detect and track multiple people in cluttered scenes with reoccurring occlusions.",
"",
"We present a detection-based three-level hierarchical association approach to robustly track multiple objects in crowded environments from a single camera. At the low level, reliable tracklets (i.e. short tracks for further analysis) are generated by linking detection responses based on conservative affinity constraints. At the middle level, these tracklets are further associated to form longer tracklets based on more complex affinity measures. The association is formulated as a MAP problem and solved by the Hungarian algorithm. At the high level, entries, exits and scene occluders are estimated using the already computed tracklets, which are used to refine the final trajectories. This approach is applied to the pedestrian class and evaluated on two challenging datasets. The experimental results show a great improvement in performance compared to previous methods."
]
} |
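The related work above notes that tracklet association is usually cast as bipartite matching solved with the Hungarian algorithm. The sketch below brute-forces the same minimum-cost assignment over permutations, which coincides with the Hungarian algorithm's optimum but is feasible only for tiny inputs; in practice one would use an O(n^3) Hungarian solver such as scipy.optimize.linear_sum_assignment. The cost matrix here is made-up pairwise dissimilarity between tracklets in two adjacent windows.

```python
from itertools import permutations

def min_cost_assignment(cost):
    """Optimal tracklet-to-tracklet assignment on a square cost matrix.

    Exhaustive search over all permutations; returns (assignment,
    total_cost), where assignment[i] is the column matched to row i.
    The Hungarian algorithm computes the same optimum efficiently.
    """
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        c = sum(cost[i][j] for i, j in enumerate(perm))
        if c < best_cost:
            best_cost, best_perm = c, perm
    return list(best_perm), best_cost

# Toy example: dissimilarities between 3 tracklets in two time windows.
cost = [[4.0, 1.0, 3.0],
        [2.0, 0.0, 5.0],
        [3.0, 2.0, 2.0]]
match, total = min_cost_assignment(cost)
```

Note that the globally optimal matching here is not simply each row's cheapest column: row 0 takes column 1 even though row 1 prefers it, because that frees row 1's cheap column 0.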
1409.6080 | 2395449577 | A tracklet is a short sequence of detections of an entity of interest, such as a person's face, in contiguous frames of a video. In this paper we consider clustering tracklets in long videos, an important problem of current interest in Computer Vision. It involves identifying tracklets in short segments of the video and finding associations between tracklets over the entire video. These tasks require careful modelling of spatio-temporal cues. State of the art [27] proposes a probabilistic model which incorporates the spatio-temporal cues by parametrizing them through clique potentials in HMRF. The attendant learning and inference problems make the model un-wieldy resulting in failure to handle long videos with many cluster labels. In this paper we circumvent the problem of explicitly encoding spatio-temporal cues by exploiting Temporal Coherence (TC). The major contribution of the paper is to develop Temporally Coherent Chinese Restaurant Process (TC-CRP), a novel Bayesian Non-parametric (BNP) prior which models Temporal Coherence. To the best of our knowledge this is the first work which models TC in a BNP framework and thus makes a significant addition to BNP priors suited for video analytics. On an interesting problem of discovering persons and their tracklets from user-uploaded videos of long TV series episodes from Youtube we show that TC-CRP is very effective. It can also filter out tracklets resulting from false detections. We explore alternative approaches based on low-rank matrix recovery and constrained subspace clustering, but find these to be very slow and less accurate than our method. Finally, we show that TC-CRP can be useful in Low-rank Matrix Recovery when the desired matrix has sets of identical columns. | Finally, video summarization has been studied for a few years in the Computer Vision community. The aim is to provide a short but comprehensive summary of videos.
This summary is usually in the form of a few , and sometimes as a short segment of the video around these keyframes. A recent example is @cite_2 which models a video as a matrix, each frame as a column, and each keyframe as a , in terms of which the other columns are expressed. A more recent work @cite_34 considers a kernel matrix to encode similarities between pairs of frames, uses it for of the video, assigns an importance label to each of these segments using an SVM (trained from segmented and labelled videos), and creates the summary with the important segments. However, such summaries are in terms of low-level visual features, rather than high-level semantic features which humans use. An attempt to bridge this gap was made in @cite_35 , which defined movie scenes and summaries in terms of characters. This work used face detections along with for semantic segmentation into shots and scenes, which were used for summarization. | {
"cite_N": [
"@cite_35",
"@cite_34",
"@cite_2"
],
"mid": [
"2144125753",
"",
"2105174364"
],
"abstract": [
"A decent movie summary is helpful for movie producer to promote the movie as well as audience to capture the theme of the movie before watching the whole movie. Most existing automatic movie summarization approaches heavily rely on video content only, which may not deliver ideal result due to the semantic gap between computer calculated low-level features and human used high-level understanding. In this paper, we incorporate script into movie analysis and propose a novel character-based movie summarization approach, which is validated by modern film theory that what actually catches audiences' attention is the character. We first segment scenes in the movie by analysis and alignment of script and movie. Then we conduct substory discovery and content attention analysis based on the scene analysis and character interaction features. Given obtained movie structure and content attention value, we calculate movie attraction scores at both shot and scene levels and adopt this as criterion to generate movie summary. The promising experimental results demonstrate that character analysis is effective for movie summarization and movie content understanding.",
"",
"The rapid growth of consumer videos requires an effective and efficient content summarization method to provide a user-friendly way to manage and browse the huge amount of video data. Compared with most previous methods that focus on sports and news videos, the summarization of personal videos is more challenging because of its unconstrained content and the lack of any pre-imposed video structures. We formulate video summarization as a novel dictionary selection problem using sparsity consistency, where a dictionary of key frames is selected such that the original video can be best reconstructed from this representative dictionary. An efficient global optimization algorithm is introduced to solve the dictionary selection model with the convergence rates as O(1 K2) (where K is the iteration counter), in contrast to traditional sub-gradient descent methods of O(1 √K). Our method provides a scalable solution for both key frame extraction and video skim generation, because one can select an arbitrary number of key frames to represent the original videos. Experiments on a human labeled benchmark dataset and comparisons to the state-of-the-art methods demonstrate the advantages of our algorithm."
]
} |
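As background to the TC-CRP prior above: a plain Chinese Restaurant Process seats item n at an existing table k with probability proportional to the table's occupancy, and at a new table with probability proportional to the concentration parameter alpha. TC-CRP additionally biases each tracklet toward the table of its temporal predecessor; this vanilla sketch (alpha and the item count are illustrative) omits that temporal term:

```python
import random

def crp_sample(n_items, alpha, rng):
    """Draw one partition of n_items from a Chinese Restaurant Process prior."""
    assignments = []
    counts = []  # counts[k] = number of items already seated at table k
    for n in range(n_items):
        # P(existing table k) = counts[k] / (n + alpha); P(new table) = alpha / (n + alpha)
        r = rng.random() * (n + alpha)
        acc = 0.0
        table = len(counts)  # default: open a new table
        for k, c in enumerate(counts):
            acc += c
            if r < acc:
                table = k
                break
        if table == len(counts):
            counts.append(0)
        counts[table] += 1
        assignments.append(table)
    return assignments

rng = random.Random(0)
print(crp_sample(10, alpha=1.0, rng=rng))  # a partition with "rich-get-richer" clustering
```

The number of tables grows only logarithmically with the number of items in expectation, which is what makes CRP-style priors attractive when the number of persons in a video is unknown in advance.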
1409.6253 | 81948250 | Time-Basic Petri nets are a powerful formalism for modeling real-time systems where time constraints are expressed through time functions of the marking's time description associated with transitions, representing possible firing times. We introduce a technique for coverability analysis based on the building of a finite graph. This technique further exploits the time anonymous concept [5,6] and, in order to deal with topologically unbounded nets, the concept of a coverage of TA tokens, i.e., a sort of anonymous timestamp. Such a coverability analysis technique is able to construct coverability tree graphs for unbounded Time-Basic Petri net models. The termination of the algorithm is guaranteed as long as, within the input model, tokens growing without limit can be anonymized. This means that we are able to manage models that do not exhibit Zeno behavior and do not express actions depending on infinite past events. This is actually a reasonable limitation because, generally, real-world examples do not exhibit such behavior. | For timed Petri nets (TPNs), although the set of backward reachable states (i.e. all the markings from which a final marking is reachable) is computable @cite_14 , the set of forward reachable states is in general not computable. Therefore any procedure for performing forward reachability analysis on TPNs is incomplete. In @cite_3 , an abstraction of the set of reachable markings of TPNs is proposed. It introduces a symbolic representation for downward closed sets, so-called region generators (i.e. the union of an infinite number of regions @cite_1 ). However, the termination of the forward analysis by means of this abstraction is not guaranteed. | {
"cite_N": [
"@cite_14",
"@cite_1",
"@cite_3"
],
"mid": [
"2151720555",
"59365162",
"1503128752"
],
"abstract": [
"We consider (unbounded) Timed Petri Nets (TPNs) where each token is equipped with a real-valued clock representing the \"age\" of the token. Each arc in the net is provided with a subinterval of the natural numbers, restricting the ages of the tokens travelling the arc. We apply a methodology developed in [AN00], based on the theory of better quasi orderings (BQOs), to derive an efficient constraint system for automatic verification of safety properties for TPNs. We have implemented a prototype based on our method and applied it for verification of a parametrized version of Fischer's protocol.",
"To model the behavior of finite-state asynchronous real-time systems we propose the notion of timed Buchi automata (TBA). TBAs are Buchi automata coupled with a mechanism to express constant bounds on the timing delays between system events. These automata accept languages of timed traces, traces in which each event has an associated real-valued time of occurrence.",
"We consider verification of safety properties for concurrent real-timed systems modelled as timed Petri nets by performing symbolic forward reachability analysis. We introduce a formalism, called region generators, for representing sets of markings of timed Petri nets. Region generators characterize downward closed sets of regions and provide exact abstractions of sets of reachable states with respect to safety properties. We show that the standard operations needed for performing symbolic reachability analysis are computable for region generators. Since forward reachability analysis is necessarily incomplete, we introduce an acceleration technique to make the procedure terminate more often on practical examples. We have implemented a prototype for analyzing timed Petri nets and used it to verify a parameterized version of Fischer's protocol, Lynch and Shavit's mutual exclusion protocol and a producer-consumer protocol. We also used the tool to extract finite-state abstractions of these protocols."
]
} |
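For comparison with the coverability technique above: on plain (untimed) Petri nets, coverability is classically computed with a Karp-Miller tree, which accelerates any marking that strictly covers one of its ancestors by setting the growing places to omega, the same role the anonymized TA tokens play for unbounded timed nets. A minimal sketch on a hypothetical one-transition net whose second place is unbounded:

```python
OMEGA = float("inf")  # plays the role of the Karp-Miller omega (unbounded place)

def fire(marking, pre, post):
    """Fire a transition if enabled, else return None."""
    if any(m < p for m, p in zip(marking, pre)):
        return None
    return tuple(m - a + b for m, a, b in zip(marking, pre, post))

def karp_miller(initial, transitions):
    """Return the set of Karp-Miller nodes (markings, possibly with omega)."""
    seen = {initial}
    work = [(initial, (initial,))]  # (marking, ancestors on the path to it)
    while work:
        marking, path = work.pop()
        for pre, post in transitions:
            succ = fire(marking, pre, post)
            if succ is None:
                continue
            # Acceleration: a successor strictly covering an ancestor means the
            # growing places can be pumped without bound, so set them to omega.
            for anc in path:
                if succ != anc and all(s >= a for s, a in zip(succ, anc)):
                    succ = tuple(OMEGA if s > a else s for s, a in zip(succ, anc))
            if succ not in seen:
                seen.add(succ)
                work.append((succ, path + (succ,)))
    return seen

# Hypothetical net: one transition keeps a token in place 0 and adds one to
# place 1, so place 1 is unbounded; pre/post give tokens consumed/produced.
nodes = karp_miller((1, 0), [((1, 0), (1, 1))])
print(sorted(nodes))  # [(1, 0), (1, inf)]: place 1 detected as unbounded
```

The acceleration step is what guarantees a finite graph despite an infinite reachability set; the related work's point is that for timed nets this kind of abstraction exists (region generators) but no longer guarantees termination.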
1409.5980 | 2951249478 | We use public data from Twitter to study the breakups of the romantic relationships of 661 couples. Couples are identified through profile references such as @user1 writing "@user2 is the best boyfriend ever!!". Using this data set we find evidence for a number of existing hypotheses describing psychological processes including (i) pre-relationship closeness being indicative of post-relationship closeness, (ii) "stonewalling", i.e., ignoring messages by a partner, being indicative of a pending breakup, and (iii) post-breakup depression. We also observe a previously undocumented phenomenon of "batch un-friending and being un-friended" where users who break up experience sudden drops of 15-20 followers and friends. Our work shows that public Twitter data can be used to gain new insights into psychological processes surrounding relationship dissolutions, something that most people go through at least once in their lifetime. | The only work we know of on studying romantic relationships on Twitter is @cite_8 . Using answers to specific questions (from surveys) from a few hundred users, they look at how Twitter mediates conflict between couples. They find evidence that "active Twitter use leads to greater amounts of Twitter-related conflict among romantic partners, which in turn leads to infidelity, breakup, and divorce". | {
"cite_N": [
"@cite_8"
],
"mid": [
"1992418814"
],
"abstract": [
"This study was the first to examine the impact of unmarried relationship break-up on psychological distress and life satisfaction using a within-subjects design. Among unmarried 18 to 35-year olds (N = 1295), 36.5 had one or more break-ups over a 20-month period. Experiencing a break-up was associated with an increase in psychological distress and a decline in life satisfaction (from pre- to post-dissolution). In addition, several characteristics of the relationship or of the break-up were associated with the magnitude of the changes in life satisfaction following a break-up. Specifically, having been cohabiting and having had plans for marriage were associated with larger declines in life satisfaction while having begun to date someone new was associated with smaller declines. Interestingly, having higher relationship quality at the previous wave was associated with smaller declines in life satisfaction following a break-up. No relationship or break-up characteristics were significantly associated with the magnitude of changes in psychological distress after a break-up. Existing theories are used to explain the results. Implications for clinical work and future research on unmarried relationships are also discussed."
]
} |
1409.5980 | 2951249478 | We use public data from Twitter to study the breakups of the romantic relationships of 661 couples. Couples are identified through profile references such as @user1 writing "@user2 is the best boyfriend ever!!". Using this data set we find evidence for a number of existing hypotheses describing psychological processes including (i) pre-relationship closeness being indicative of post-relationship closeness, (ii) "stonewalling", i.e., ignoring messages by a partner, being indicative of a pending breakup, and (iii) post-breakup depression. We also observe a previously undocumented phenomenon of "batch un-friending and being un-friended" where users who break up experience sudden drops of 15-20 followers and friends. Our work shows that public Twitter data can be used to gain new insights into psychological processes surrounding relationship dissolutions, something that most people go through at least once in their lifetime. | Currently, we are using Twitter merely as a data source to study relationship breakups per se. However, one could also study the more intricate relationship between technology use and personal relationships. Weisskirch, et al @cite_33 look at the attachment styles of couples involved in a relationship breakup online. It is the only work that we are aware of that looks at the act of breaking up through technology. Manual inspection of tweets around breakup revealed a few instances of actual breakups through public (!) tweets in our dataset too. | {
"cite_N": [
"@cite_33"
],
"mid": [
"2037122935"
],
"abstract": [
"Abstract Relationship dissolution now occurs through technologies like text messaging, e-mail, and social networking sites (SNS). Individuals who experience relationship dissolution via technology may differ in their attachment pattern and gender role attitudes from those who have not had that experience. One hundred five college students (males=21 and females=84) completed an online questionnaire about technology-mediated breakups, attachment style, and gender role attitudes. More than a quarter of the sample had experienced relationship dissolution via technology. Attachment anxiety predicted those subject to technology-mediated breakups. Attachment avoidance and less traditional gender roles were associated with increased likelihood of technology use in relationship dissolution. Implications are discussed in regards to future research and practice."
]
} |
1409.5980 | 2951249478 | We use public data from Twitter to study the breakups of the romantic relationships of 661 couples. Couples are identified through profile references such as @user1 writing "@user2 is the best boyfriend ever!!". Using this data set we find evidence for a number of existing hypotheses describing psychological processes including (i) pre-relationship closeness being indicative of post-relationship closeness, (ii) "stonewalling", i.e., ignoring messages by a partner, being indicative of a pending breakup, and (iii) post-breakup depression. We also observe a previously undocumented phenomenon of "batch un-friending and being un-friended" where users who break up experience sudden drops of 15-20 followers and friends. Our work shows that public Twitter data can be used to gain new insights into psychological processes surrounding relationship dissolutions, something that most people go through at least once in their lifetime. | Apart from facilitating breakups, increased importance of technology in romantic relationships @cite_27 potentially has other negative impact on romantic relationships such as jealousy, or surveillance @cite_38 @cite_4 @cite_26 . On the positive side, researchers have looked at if technologies such as video chat can positively affect long-distance relationships by making it easier to feel connected @cite_5 @cite_29 . | {
"cite_N": [
"@cite_38",
"@cite_26",
"@cite_4",
"@cite_29",
"@cite_27",
"@cite_5"
],
"mid": [
"2169265842",
"1980366054",
"",
"2123741245",
"",
"2004946470"
],
"abstract": [
"Abstract Many studies document how individuals use Facebook to meet partners or develop and maintain relationships. Less is known about information-seeking behaviors during the stages of relationship termination. Relational dissolution is a socially embedded activity, and affordances of social network sites offer many advantages in reducing uncertainty after a breakup. A survey collected responses from 110 individuals who use Facebook to gather information about their romantic ex-partners. Results indicated that after breakup, partners may take advantage of the system's information visibility and the relative invisibility of movement depending on relational factors (initiator role and breakup uncertainty), social factors (perceived network approval of Facebook surveillance), and individual privacy concerns. This investigation addresses questions such as what type of information-seeking foci do individuals employ and how do individuals use Facebook as a form of surveillance? What factors motivate surveilla...",
"Abstract In this study, we examined two behaviors that could evoke Facebook jealousy and cause relationship problems among romantic partners: (1) Facebook solicitation behaviors (i.e., making or accepting friend requests with romantic interests) while in the current relationship, and (2) having romantic interests on existing Facebook friends lists. In our sample of 148 undergraduates, those who had lower commitment to their partners were more likely to make and accept Facebook friend requests with romantic interests during their relationship. However, commitment was unrelated to the number of romantic alternatives contained on one’s Facebook friends list or the frequency of Facebook solicitation while single. Additionally, attachment anxiety predicted Facebook solicitation behaviors, but this relationship was mediated by Facebook jealousy. Our findings confirm that Facebook is used to solicit connections with romantic interests both while single and during committed relationships; however, it is only those connections that are made during the relationship that are markers of lower commitment. Moreover, our study adds to a growing body of research that connects face-to-face relationship theories to the virtual environment.",
"",
"Many couples live a portion of their lives in a long-distance relationship (LDR). This includes a large number of dating college students as well as couples who are geographically-separated because of situational demands such as work. We conducted interviews with individuals in LDRs to understand how they make use of video chat systems to maintain their relationships. In particular, we have investigated how couples use video to \"hang out\" together and engage in activities over extended periods of time. Our results show that regardless of the relationship situation, video chat affords a unique opportunity for couples to share presence over distance, which in turn provides intimacy. While beneficial, couples still face challenges in using video chat, including contextual (e.g., location of partners, time zones), technical (e.g., mobility, audio video quality, networking), and personal (e.g., a lack of physicality needed by most for intimate sexual acts) challenges.",
"",
"A wealth of evidence suggests that love, closeness, and intimacy---in short relatedness---are important for people’s psychological well-being. Nowadays, however, couples are often forced to live apart. Accordingly, there has been a growing and flourishing interest in designing technologies that mediate (and create) a feeling of relatedness when being separated, beyond the explicit verbal communication and simple emoticons available technologies offer. This article provides a review of 143 published artifacts (i.e., design concepts, technologies). Based on this, we present six strategies used by designers researchers to create a relatedness experience: Awareness, expressivity, physicalness, gift giving, joint action, and memories. We understand those strategies as starting points for the experience-oriented design of technology."
]
} |
1409.5980 | 2951249478 | We use public data from Twitter to study the breakups of the romantic relationships of 661 couples. Couples are identified through profile references such as @user1 writing "@user2 is the best boyfriend ever!!". Using this data set we find evidence for a number of existing hypotheses describing psychological processes including (i) pre-relationship closeness being indicative of post-relationship closeness, (ii) "stonewalling", i.e., ignoring messages by a partner, being indicative of a pending breakup, and (iii) post-breakup depression. We also observe a previously undocumented phenomenon of "batch un-friending and being un-friended" where users who break up experience sudden drops of 15-20 followers and friends. Our work shows that public Twitter data can be used to gain new insights into psychological processes surrounding relationship dissolutions, something that most people go through at least once in their lifetime. | @cite_30 study the importance of social networks in the dissolution of a romantic relationship. They define certain factors such as the overlap of networks of partners or social capital and study how these factors affect breakup. Though we did not collect data for the Twitter social network, or its changes over time, it would be possible to validate their findings on Twitter using our approach of identifying breakups. | {
"cite_N": [
"@cite_30"
],
"mid": [
"2109758624"
],
"abstract": [
"Previous research on the dissolution of long-term romantic relationships has mostly focused on determinants that reflect either the characteristics of the individual partners or the characteristics of the relationship itself. The role of the social context in which couples are embedded has received less attention. This study assesses the association between three characteristics of the social context and the dissolution of long-term romantic relationships simultaneously: the prevalence of divorce in the network of the couple, the extent to which the networks of partners overlap each other, and the amount of social capital in the network of the couple. Using nationally representative panel data from the first and second waves of the Netherlands Kinship Panel Study, partial support was found for the link between the prevalence of divorce and network overlap on the one hand, and the likelihood to dissolve long-term romantic relationships on the other hand, among a sample of 3406 married and 648 unmarried coh..."
]
} |
1409.5980 | 2951249478 | We use public data from Twitter to study the breakups of the romantic relationships of 661 couples. Couples are identified through profile references such as @user1 writing "@user2 is the best boyfriend ever!!". Using this data set we find evidence for a number of existing hypotheses describing psychological processes including (i) pre-relationship closeness being indicative of post-relationship closeness, (ii) "stonewalling", i.e., ignoring messages by a partner, being indicative of a pending breakup, and (iii) post-breakup depression. We also observe a previously undocumented phenomenon of "batch un-friending and being un-friended" where users who break up experience sudden drops of 15-20 followers and friends. Our work shows that public Twitter data can be used to gain new insights into psychological processes surrounding relationship dissolutions, something that most people go through at least once in their lifetime. | @cite_17 used the network structure of an individual's ego network to identify their romantic partner. Note that a social tie on Facebook is not the same as one on Twitter, mainly because (i) the Twitter network is directed, and (ii) the use of Facebook and Twitter may differ. Still, the notion of "dispersion" defined in their paper might be related to the loss of friends/followers in our study (see ). @cite_12 study relationship dissolution on Facebook, mainly focusing on the phases and behavior of users who go through breakups on Facebook. There is evidence of limiting profile access in order to manage the breakup, which is similar to our findings in . | {
"cite_N": [
"@cite_12",
"@cite_17"
],
"mid": [
"2154106761",
"2022266816"
],
"abstract": [
"The present study explores how people use social networking sites to adjust to breakups by studying their postdissolution behaviors. We apply Rollie and Duck’s (2006) relationship dissolution model by examining how collegiate Facebook users (N = 208) enact behaviors in breakups to extend the model to online environments during and after breakups. Furthermore, we employed a retrospective design utilizing qualitative methods to define categories of behavioral responses to a breakup on Facebook. The analysis revealed online behaviors that overlapped with the dissolution model as well as paralleled previous research into online behaviors. Results are discussed using the relationship dissolution model framework to individuals modifying online relationship statuses, “unfriending” previous partners, and limiting profile access in order to manage relationship termination.",
"A crucial task in the analysis of on-line social-networking systems is to identify important people --- those linked by strong social ties --- within an individual's network neighborhood. Here we investigate this question for a particular category of strong ties, those involving spouses or romantic partners. We organize our analysis around a basic question: given all the connections among a person's friends, can you recognize his or her romantic partner from the network structure alone? Using data from a large sample of Facebook users, we find that this task can be accomplished with high accuracy, but doing so requires the development of a new measure of tie strength that we term dispersion' --- the extent to which two people's mutual friends are not themselves well-connected. The results offer methods for identifying types of structurally significant people in on-line applications, and suggest a potential expansion of existing theories of tie strength."
]
} |
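The "dispersion" measure from @cite_17 can be stated compactly: for a tie (u, v), count the pairs of their mutual friends that are neither directly connected nor share mutual friends other than u and v themselves. A minimal stdlib sketch of this absolute (unnormalized) form on a toy graph (graph and names are illustrative; networkx also ships a `dispersion` function with the paper's normalization):

```python
def dispersion(adj, u, v):
    """Absolute dispersion of the tie (u, v): number of pairs of common
    neighbors of u and v that are not adjacent and share no common
    neighbor other than u and v themselves."""
    common = (adj[u] & adj[v]) - {u, v}
    score = 0
    for s in common:
        for t in common:
            if s < t and t not in adj[s] and not ((adj[s] & adj[t]) - {u, v}):
                score += 1
    return score

# Toy graph: partners u and v share friends a, b, c, d; only a and b know each other.
edges = ([("u", x) for x in "abcd"] + [("v", x) for x in "abcd"]
         + [("a", "b"), ("u", "v")])
adj = {}
for x, y in edges:
    adj.setdefault(x, set()).add(y)
    adj.setdefault(y, set()).add(x)

print(dispersion(adj, "u", "v"))  # 5 of the 6 mutual-friend pairs are "dispersed"
```

High dispersion signals that the pair's mutual friends come from many disconnected social contexts (work, family, college), which is exactly why the measure identifies romantic partners better than raw embeddedness.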
1409.5980 | 2951249478 | We use public data from Twitter to study the breakups of the romantic relationships of 661 couples. Couples are identified through profile references such as @user1 writing "@user2 is the best boyfriend ever!!". Using this data set we find evidence for a number of existing hypotheses describing psychological processes including (i) pre-relationship closeness being indicative of post-relationship closeness, (ii) "stonewalling", i.e., ignoring messages by a partner, being indicative of a pending breakup, and (iii) post-breakup depression. We also observe a previously undocumented phenomenon of "batch un-friending and being un-friended" where users who break up experience sudden drops of 15-20 followers and friends. Our work shows that public Twitter data can be used to gain new insights into psychological processes surrounding relationship dissolutions, something that most people go through at least once in their lifetime. | Researchers conducting retrospective @cite_28 and diary @cite_9 studies of emotional adjustment following a breakup have found evidence of negative emotional responses including sadness and anger. In contrast to the current findings, some studies found no difference between rejectors and rejectees in the extent of negative emotion following a breakup, and suggested that this might reflect difficulties in accurately identifying who initiated the breakup. Though imperfect, the current approach of identifying the first person to remove a profile mention as the "initiator" or "rejector" may provide a good proxy for being the person who is more ready to terminate the relationship or who feels more control over the breakup; this latter feature of controllability has been found to predict better adjustment post breakup @cite_28 @cite_2 . It may also be that the larger sample size in the current study provides more statistical power to detect these effects than has been available in smaller survey studies. | {
"cite_N": [
"@cite_28",
"@cite_9",
"@cite_2"
],
"mid": [
"2087564825",
"2123019719",
"2112271830"
],
"abstract": [
"The purpose of this study was to examine correlates of initial distress and current recovery among individuals who have experienced the breakup of a dating relationship, including factors associated with commitment to the relationship (i.e. satisfaction, duration, closeness, perceived alternatives) and factors associated with coping with life stressors (i.e. perceptions of the controllability of the breakup, social support and self-esteem). Participants were 34 males and 51 females who had experienced the breakup of a dating relationship within the past 6 months. Hierarchical regression analyses indicated that these variables accounted for between 21 and 47 percent of the variance in the measures of initial distress and current recovery. The coping-related variables added significantly to the prediction of initial distress and current recovery once the commitment-related variables were taken into account, but were more strongly related to recovery than to initial distress. Implications for research and pr...",
"This paper examined the emotional sequelae of nonmarital relationship dissolution among 58 young adults. Participants were recruited while in a serious dating relationship, and when it ended, were signaled randomly with beepers for 28 days to complete an emotions diary. Compared to participants in intact dating relationships, dissolution participants reported more emotional volatility, especially immediately following the breakup. Multilevel growth modeling showed a linear decline in love and curvilinear patterns for sadness, anger, and relief. Contact with a former partner slowed the decline for love and sadness, and attachment style and the impact of the breakup predicted the emotional start-points and rate(s) of change over time. The results are discussed in terms of the functional role of postrelationship emotions as well as the importance of understanding patterns of intraindividual variability and differential predictors of emotional change.",
"The purpose of this investigation was to identify the factors associated with the distress experienced after the breakup of a romantic relationship, both at the time of the breakup (assessed retrospectively) and at the time the questionnaire was completed. Four categories of variables were examined as possible correlates of post-breakup distress: variables associated with the initiation of the relationship, characteristics of the relationship while it was intact, conditions at the time of the breakup and individual difference variables. The sample consisted of 257 young adults (primarily college students; 83 male and 174 female) who had experienced a recent breakup (M = 21 weeks since breakup). The variables most highly associated with distress at the time of the breakup were non-mutuality in alternatives (i.e. partner having more inter-est in alternatives), commitment, satisfaction, greater effort in relationship initiation, being left' by the other and fearful attachment style. The variables most highl..."
]
} |
1409.6197 | 2951702123 | The problem of online privacy is often reduced to individual decisions to hide or reveal personal information in online social networks (OSNs). However, with the increasing use of OSNs, it becomes more important to understand the role of the social network in disclosing personal information that a user has not revealed voluntarily: How much of our private information do our friends disclose about us, and how much of our privacy is lost simply because of online social interaction? Without strong technical effort, an OSN may be able to exploit the assortativity of human private features, this way constructing shadow profiles with information that users chose not to share. Furthermore, because many users share their phone and email contact lists, this allows an OSN to create full shadow profiles for people who do not even have an account for this OSN. We empirically test the feasibility of constructing shadow profiles of sexual orientation for users and non-users, using data from more than 3 Million accounts of a single OSN. We quantify a lower bound for the predictive power derived from the social network of a user, to demonstrate how the predictability of sexual orientation increases with the size of this network and the tendency to share personal information. This allows us to define a privacy leak factor that links individual privacy loss with the decision of other individuals to disclose information. Our statistical analysis reveals that some individuals are at a higher risk of privacy loss, as prediction accuracy increases for users with a larger and more homogeneous first- and second-order neighborhood of their social network. While we do not provide evidence that shadow profiles exist at all, our results show that disclosing of private information is not restricted to an individual choice, but becomes a collective decision that has implications for policy and privacy regulation. 
| Understanding privacy in OSNs starts with the individual motivation to share personal information and its associated risk of sharing this information with undesired contacts @cite_45 @cite_40 . Most OSNs include highly customizable modules to control privacy settings, which can lead to greater effort and uncertainty about how to use the site @cite_37 , or to distancing from users who have a lower awareness of possible data leakage @cite_4 @cite_11 @cite_15 . Recent technologies promise to alleviate user privacy concerns. For example, distributed recommender systems can put a limit on privacy disclosure @cite_42 , deployment of OSNs in the cloud can avoid the centralization of user data @cite_3 @cite_8 , and techniques for picture encryption @cite_33 and content anonymization @cite_16 can prevent undesired access to private content. | {
"cite_N": [
"@cite_37",
"@cite_4",
"@cite_33",
"@cite_8",
"@cite_42",
"@cite_3",
"@cite_40",
"@cite_45",
"@cite_15",
"@cite_16",
"@cite_11"
],
"mid": [
"",
"2072908603",
"2092782098",
"",
"2017922319",
"2131041789",
"2045802490",
"2097427281",
"",
"2168661288",
""
],
"abstract": [
"",
"The task of protecting users' privacy is made more difficult by their attitudes towards information disclosure without full awareness and the economics of the tracking and advertising industry. Even after numerous press reports and widespread disclosure of leakages on the Web and on popular Online Social Networks, many users appear not to be fully aware of the fact that their information may be collected, aggregated and linked with ambient information for a variety of purposes. Past attempts at alleviating this problem have addressed individual aspects of the user's data collection. In this paper we move towards a comprehensive and efficient client-side tool that maximizes users' awareness of the extent of their information leakage. We show that such a customizable tool can help users to make informed decisions on controlling their privacy footprint.",
"While Online Social Networks (OSNs) enable users to share photos easily, they also expose users to several privacy threats from both the OSNs and external entities. The current privacy controls on OSNs are far from adequate, resulting in inappropriate flows of information when users fail to understand their privacy settings or OSNs fail to implement policies correctly. OSNs may further complicate privacy expectations when they reserve the right to analyze uploaded photos using automated face identification techniques. In this paper, we propose the design, implementation and evaluation of Cryptagram, a system designed to enhance online photo privacy. Cryptagram enables users to convert photos into encrypted images, which the users upload to OSNs. Users directly manage access control to those photos via shared keys that are independent of OSNs or other third parties. OSNs apply standard image transformations (JPEG compression) to all uploaded images so Cryptagram provides an image encoding and encryption mechanism that is tolerant to these transformations. Cryptagram guarantees that the recipient with the right credentials can completely retrieve the original image from the transformed version of the uploaded encrypted image while the OSN cannot infer the original image. Cryptagram's browser extension integrates seamlessly with preexisting OSNs, including Facebook and Google+, and currently has over 400 active users.",
"",
"Recommender systems predict user preferences based on a range of available information. For systems in which users generate streams of content (e.g., blogs, periodically-updated newsfeeds), users may rate the produced content that they read, and be given accurate predictions about future content they are most likely to prefer. We design a distributed mechanism for predicting user ratings that avoids the disclosure of information to a centralized authority or an untrusted third party: users disclose the rating they give to certain content only to the user that produced this content. We demonstrate how rating prediction in this context can be formulated as a matrix factorization problem. Using this intuition, we propose a distributed gradient descent algorithm for its solution that abides with the above restriction on how information is exchanged between users. We formally analyse the convergence properties of this algorithm, showing that it reduces a weighted root mean square error of the accuracy of predictions. Although our algorithm may be used many different ways, we evaluate it on the Neflix data set and prediction problem as a benchmark. In addition to the improved privacy properties that stem from its distributed nature, our algorithm is competitive with current centralized solutions. Finally, we demonstrate the algorithm's fast convergence in practice by conducting an online experiment with a prototype user-generated content exchange system implemented as a Facebook application.",
"While highly successful, today's online social networks (OSNs) have made a conscious decision to sacrifice privacy for availability and centralized control. Unfortunately, tradeoffs in this \"walled garden\" architecture naturally pit the economic interests of OSN providers against the privacy goals of OSN users, a battle that users cannot win. While some alternative OSN designs preserve user control over data, they do so by de-prioritizing issues of economic incentives and sustainability. In contrast, we believe any practical alternative to today's centralized architecture must consider incentives for providers as a key goal. In this paper, we propose a distributed OSN architecture that significantly improves user privacy while preserving economic incentives for OSN providers. We do so by using a standardized API to create a competitive provider marketplace for different components of the OSN, thus allowing users to perform their own tradeoffs between cost, performance, and privacy. We describe Polaris, a system where users leverage smartphones as a highly available identity provider and access control manager, and use application prototypes to show how it allows data monetization while limiting the visibility of any single party to users' private data.",
"We measure users' attitudes toward interpersonal privacy concerns on Facebook and measure users' strategies for reconciling their concerns with their desire to share content online. To do this, we recruited 260 Facebook users to install a Facebook application that surveyed their privacy concerns, their friend network compositions, the sensitivity of posted content, and their privacy-preserving strategies. By asking participants targeted questions about people randomly selected from their friend network and posts shared on their profiles, we were able to quantify the extent to which users trust their \"friends\" and the likelihood that their content was being viewed by unintended audiences. We found that while strangers are the most concerning audience, almost 95% of our participants had taken steps to mitigate those concerns. At the same time, we observed that 16.5% of participants had at least one post that they were uncomfortable sharing with a specific friend---someone who likely already had the ability to view it---and that 37% raised more general concerns with sharing their content with friends. We conclude that the current privacy controls allow users to effectively manage the outsider threat, but that they are unsuitable for mitigating concerns over the insider threat---members of the friend network who dynamically become inappropriate audiences based on the context of a post.",
"Traditional theory suggests consumers should be able to manage their privacy. Yet, empirical and theoretical research suggests that consumers often lack enough information to make privacy-sensitive decisions and, even with sufficient information, are likely to trade off long-term privacy for short-term benefits.",
"",
"Building on the popularity of online social networks (OSNs) such as Facebook, social content-sharing applications allow users to form communities around shared interests. Millions of users worldwide use them to share recommendations on everything from music and books to resources on the web. However, their increasing popularity is beginning to attract the attention of malicious attackers. As social network credentials become valued targets of phishing attacks and social worms, attackers look to leverage compromised accounts for further financial gain. In this paper, we analyze the state of privacy protection in social content-sharing applications, describe effective privacy attacks against today's social networks, and propose anonymization techniques to protect users. We show that simple protection mechanisms such as anonymizing shared data can still leave users open to social intersection attacks, where a small number of compromised users can identify the originators of shared content. Modeling this as a graph anonymization problem, we propose to provide users with k-anonymity privacy guarantees by augmenting the social graph with \"latent edges.\" We identify StarClique, a locally minimal graph structure required for users to attain k-anonymity, where at worst, a user is identified as one of k possible contributors of a data object. We prove the correctness of our approach using analysis. Finally, using experiments driven by traces from the del.icio.us social bookmark site, we demonstrate the practicality and effectiveness of our approach on real-world systems.",
""
]
} |
1409.6197 | 2951702123 | The problem of online privacy is often reduced to individual decisions to hide or reveal personal information in online social networks (OSNs). However, with the increasing use of OSNs, it becomes more important to understand the role of the social network in disclosing personal information that a user has not revealed voluntarily: How much of our private information do our friends disclose about us, and how much of our privacy is lost simply because of online social interaction? Without strong technical effort, an OSN may be able to exploit the assortativity of human private features, this way constructing shadow profiles with information that users chose not to share. Furthermore, because many users share their phone and email contact lists, this allows an OSN to create full shadow profiles for people who do not even have an account for this OSN. We empirically test the feasibility of constructing shadow profiles of sexual orientation for users and non-users, using data from more than 3 Million accounts of a single OSN. We quantify a lower bound for the predictive power derived from the social network of a user, to demonstrate how the predictability of sexual orientation increases with the size of this network and the tendency to share personal information. This allows us to define a privacy leak factor that links individual privacy loss with the decision of other individuals to disclose information. Our statistical analysis reveals that some individuals are at a higher risk of privacy loss, as prediction accuracy increases for users with a larger and more homogeneous first- and second-order neighborhood of their social network. While we do not provide evidence that shadow profiles exist at all, our results show that disclosing of private information is not restricted to an individual choice, but becomes a collective decision that has implications for policy and privacy regulation. 
| Even with full individual control, the possibility of third-parties to infer private attributes still exists @cite_27 . The discovery of unknown hidden parts of a network based on its visible properties is a well studied problem, in particular with respect to @cite_5 @cite_12 @cite_36 . Such hidden links have been shown to be predictable by geographic coincidences @cite_9 , using geotagged photo data from Flickr. The method introduced in @cite_9 utilizes the number and proximity in time and space of co-occurrences among pairs of individuals to infer the likelihood of a social tie between them. The link prediction problem has also been applied to predict links between non-users of Facebook @cite_31 , given only the link information towards non-members from the known network. Additionally, the problem aims to infer both missing links and nodes, where it has been shown that the missing part of the network can be inferred based only on the connectivity patterns of the observed part @cite_12 . | {
"cite_N": [
"@cite_36",
"@cite_9",
"@cite_27",
"@cite_5",
"@cite_31",
"@cite_12"
],
"mid": [
"2420733993",
"1995629273",
"2094006385",
"2062435795",
"2037900945",
"1595449516"
],
"abstract": [
"Given a snapshot of a social network, can we infer which new interactions among its members are likely to occur in the near future? We formalize this question as the link prediction problem, and develop approaches to link prediction based on measures of the \"proximity\" of nodes in a network. Experiments on large co-authorship networks suggest that information about future interactions can be extracted from network topology alone, and that fairly subtle measures for detecting node proximity can outperform more direct measures.",
"We investigate the extent to which social ties between people can be inferred from co-occurrence in time and space: Given that two people have been in approximately the same geographic locale at approximately the same time, on multiple occasions, how likely are they to know each other? Furthermore, how does this likelihood depend on the spatial and temporal proximity of the co-occurrences? Such issues arise in data originating in both online and offline domains as well as settings that capture interfaces between online and offline behavior. Here we develop a framework for quantifying the answers to such questions, and we apply this framework to publicly available data from a social media site, finding that even a very small number of co-occurrences can result in a high empirical likelihood of a social tie. We then present probabilistic models showing how such large probabilities can arise from a natural model of proximity and co-occurrence in the presence of social ties. In addition to providing a method for establishing some of the first quantifiable estimates of these measures, our findings have potential privacy implications, particularly for the ways in which social structures can be inferred from public online records that capture individuals’ physical locations over time.",
"We posit that access control, the dominant model for modeling and managing privacy in today’s online world, is fundamentally inadequate. First, with access control, users must a priori specify precisely who can or cannot access information by enumerating users, groups, or roles—a task that is difficult to get right. Second, access control fails to separate who can access information from who actually does, because it ignores the difficulty of finding information. Third, access control does not capture if and how a person who has access to some information redistributes that information. Fourth, access control fails to account for information that can be inferred from other, public information. We present exposure as an alternate model for information privacy; exposure captures the set of people expected to learn an item of information eventually. We believe the model takes an important step towards enabling users to model and control their privacy effectively.",
"Consider a social network and suppose that we are only given the number of common friends between each pair of users. Can we reconstruct the underlying network? Similarly, consider a set of documents and the words that appear in them. If we only know the number of common words for every pair of documents, as well as the number of common documents for every pair of words, can we infer which words appear in which documents? In this article, we develop a general methodology for answering questions like these. We formalize these questions in what we call the Reconstruct problem: given information about the common neighbors of nodes in a network, our goal is to reconstruct the hidden binary matrix that indicates the presence or absence of relationships between individual nodes. In fact, we propose two different variants of this problem: one where the number of connections of every node (i.e., the degree of every node) is known and a second one where it is unknown. We call these variants the degree-aware and the degree-oblivious versions of the Reconstruct problem, respectively. Our algorithms for both variants exploit the properties of the singular value decomposition of the hidden binary matrix. More specifically, we show that using the available neighborhood information, we can reconstruct the hidden matrix by finding the components of its singular value decomposition and then combining them appropriately. Our extensive experimental study suggests that our methods are able to reconstruct binary matrices of different characteristics with up to 100% accuracy.",
"Members of social network platforms often choose to reveal private information, and thus sacrifice some of their privacy, in exchange for the manifold opportunities and amenities offered by such platforms. In this article, we show that the seemingly innocuous combination of knowledge of confirmed contacts between members on the one hand and their email contacts to non-members on the other hand provides enough information to deduce a substantial proportion of relationships between non-members. Using machine learning we achieve an area under the (receiver operating characteristic) curve () of at least for predicting whether two non-members known by the same member are connected or not, even for conservative estimates of the overall proportion of members, and the proportion of members disclosing their contacts.",
"Network structures, such as social networks, web graphs and networks from systems biology, play important roles in many areas of science and our everyday lives. In order to study the networks one needs to first collect reliable large scale network data. While the social and information networks have become ubiquitous, the challenge of collecting complete network data still persists. Many times the collected network data is incomplete with nodes and edges missing. Commonly, only a part of the network can be observed and we would like to infer the unobserved part of the network. We address this issue by studying the Network Completion Problem: Given a network with missing nodes and edges, can we complete the missing part? We cast the problem in the Expectation Maximization (EM) framework where we use the observed part of the network to fit a model of network structure, and then we estimate the missing part of the network using the model, re-estimate the parameters and so on. We combine the EM with the Kronecker graphs model and design a scalable Metropolized Gibbs sampling approach that allows for the estimation of the model parameters as well as the inference about missing nodes and edges of the network. Experiments on synthetic and several real-world networks show that our approach can effectively recover the network even when about half of the nodes in the network are missing. Our algorithm outperforms not only classical link-prediction approaches but also the state of the art Stochastic block modeling approach. Furthermore, our algorithm easily scales to networks with tens of thousands of nodes."
]
} |
1409.6197 | 2951702123 | The problem of online privacy is often reduced to individual decisions to hide or reveal personal information in online social networks (OSNs). However, with the increasing use of OSNs, it becomes more important to understand the role of the social network in disclosing personal information that a user has not revealed voluntarily: How much of our private information do our friends disclose about us, and how much of our privacy is lost simply because of online social interaction? Without strong technical effort, an OSN may be able to exploit the assortativity of human private features, this way constructing shadow profiles with information that users chose not to share. Furthermore, because many users share their phone and email contact lists, this allows an OSN to create full shadow profiles for people who do not even have an account for this OSN. We empirically test the feasibility of constructing shadow profiles of sexual orientation for users and non-users, using data from more than 3 Million accounts of a single OSN. We quantify a lower bound for the predictive power derived from the social network of a user, to demonstrate how the predictability of sexual orientation increases with the size of this network and the tendency to share personal information. This allows us to define a privacy leak factor that links individual privacy loss with the decision of other individuals to disclose information. Our statistical analysis reveals that some individuals are at a higher risk of privacy loss, as prediction accuracy increases for users with a larger and more homogeneous first- and second-order neighborhood of their social network. While we do not provide evidence that shadow profiles exist at all, our results show that disclosing of private information is not restricted to an individual choice, but becomes a collective decision that has implications for policy and privacy regulation. 
| In this article, we evaluate the accuracy of partial and full shadow profiles for the sexual orientation of users and non-users of the Friendster social network. Our analysis builds on the sequence of users joining Friendster to evaluate predictions over individuals without a user account in a similar manner as done in @cite_31 where the links between non-users are inferred. Knowing in which sequence the users joined Friendster has freed us from having to utilize a network growth model in our analysis. Furthermore, we pay special attention to the ratios of friends belonging to each orientation in the neighborhood of users at a given time in the growth of the network. Our results should be compared with previous work on sexual orientation of users in smaller datasets @cite_28 . To our knowledge, our work is the first to address the possibility of creating full shadow profiles for the sexual orientation of non-users from a large scale OSN. | {
"cite_N": [
"@cite_28",
"@cite_31"
],
"mid": [
"2085427378",
"2037900945"
],
"abstract": [
"Public information about one's coworkers, friends, family, and acquaintances, as well as one's associations with them, implicitly reveals private information. Social-networking websites, e-mail, instant messaging, telephone, and VoIP are all technologies steeped in network data—data relating one person to another. Network data shifts the locus of information control away from individuals, as the individual's traditional and absolute discretion is replaced by that of his social-network. Our research demonstrates a method for accurately predicting the sexual orientation of Facebook users by analyzing friendship associations. After analyzing 4,080 Facebook profiles from the MIT network, we determined that the percentage of a given user's friends who self-identify as gay male is strongly correlated with the sexual orientation of that user, and we developed a logistic regression classifier with strong predictive power. Although we studied Facebook friendship ties, network data is pervasive in the broader context of computer-mediated communication, raising significant privacy issues for communication technologies to which there are no neat solutions.",
"Members of social network platforms often choose to reveal private information, and thus sacrifice some of their privacy, in exchange for the manifold opportunities and amenities offered by such platforms. In this article, we show that the seemingly innocuous combination of knowledge of confirmed contacts between members on the one hand and their email contacts to non-members on the other hand provides enough information to deduce a substantial proportion of relationships between non-members. Using machine learning we achieve an area under the (receiver operating characteristic) curve () of at least for predicting whether two non-members known by the same member are connected or not, even for conservative estimates of the overall proportion of members, and the proportion of members disclosing their contacts."
]
} |
1409.5995 | 2060271434 | Random intersection graphs have received much attention for nearly two decades, and currently have a wide range of applications ranging from key predistribution in wireless sensor networks to modeling social networks. In this paper, we investigate the strengths of connectivity and robustness in a general random intersection graph model. Specifically, we establish sharp asymptotic zero-one laws for k-connectivity and k-robustness, as well as the asymptotically exact probability of k-connectivity, for any positive integer k. The k-connectivity property quantifies how resilient is the connectivity of a graph against node or edge failures. On the other hand, k-robustness measures the effectiveness of local diffusion strategies (that do not use global graph topology information) in spreading information over the graph in the presence of misbehaving nodes. In addition to presenting the results under the general random intersection graph model, we consider two special cases of the general model, a binomial random intersection graph and a uniform random intersection graph, which both have numerous applications as well. For these two specialized graphs, our results on asymptotically exact probabilities of k-connectivity and asymptotic zero-one laws for k-robustness are also novel in the literature. | For connectivity (i.e., @math -connectivity with @math ) in binomial random intersection graph @math , Rybarczyk establishes the exact probability @cite_6 and a zero--one law @cite_24 @cite_6 . She further shows a zero--one law for @math -connectivity @cite_24 @cite_6 . Our Theorem provides not only a zero--one law, but also the exact probability to deliver a precise understanding of @math -connectivity. | {
"cite_N": [
"@cite_24",
"@cite_6"
],
"mid": [
"1902204033",
"1636592403"
],
"abstract": [
"We present a new method which enables us to find threshold functions for many properties in random intersection graphs. This method is used to establish sharp threshold functions in random intersection graphs for @math –connectivity, perfect matching containment and Hamilton cycle containment.",
"We present new results concerning threshold functions for a wide family of random intersection graphs. To this end we apply the coupling method used for establishing threshold functions for homogeneous random intersection graphs introduced by Karoński, Scheinerman, and Singer--Cohen. In the case of inhomogeneous random intersection graphs the method has to be considerably modified and extended. By means of the altered method we are able to establish threshold functions for a general random intersection graph for such properties as @math -connectivity, matching containment or hamiltonicity. Moreover using the new approach we manage to sharpen the best known results concerning homogeneous random intersection graphs."
]
} |
1409.5995 | 2060271434 | Random intersection graphs have received much attention for nearly two decades, and currently have a wide range of applications ranging from key predistribution in wireless sensor networks to modeling social networks. In this paper, we investigate the strengths of connectivity and robustness in a general random intersection graph model. Specifically, we establish sharp asymptotic zero-one laws for k-connectivity and k-robustness, as well as the asymptotically exact probability of k-connectivity, for any positive integer k. The k-connectivity property quantifies how resilient is the connectivity of a graph against node or edge failures. On the other hand, k-robustness measures the effectiveness of local diffusion strategies (that do not use global graph topology information) in spreading information over the graph in the presence of misbehaving nodes. In addition to presenting the results under the general random intersection graph model, we consider two special cases of the general model, a binomial random intersection graph and a uniform random intersection graph, which both have numerous applications as well. For these two specialized graphs, our results on asymptotically exact probabilities of k-connectivity and asymptotic zero-one laws for k-robustness are also novel in the literature. | For connectivity in uniform random intersection graph @math , Rybarczyk @cite_12 derives the exact probability and a zero--one law, while Blackburn and Gerke @cite_10 , Ya g an and Makowski @cite_20 , and Zhao @cite_3 @cite_0 also obtain zero--one laws. Rybarczyk @cite_24 implicitly shows a zero--one law for @math -connectivity in @math . Our Theorem also gives a zero--one law. In addition, it gives the exact probability to provide an accurate understanding of @math -connectivity. | {
"cite_N": [
"@cite_3",
"@cite_0",
"@cite_24",
"@cite_10",
"@cite_12",
"@cite_20"
],
"mid": [
"2056254754",
"",
"1902204033",
"2008111483",
"2063090892",
"2041282222"
],
"abstract": [
"Random key predistribution scheme of Eschenauer and Gligor (EG) is a typical solution for ensuring secure communications in a wireless sensor network (WSN). Connectivity of the WSNs under this scheme has received much interest over the last decade, and most of the existing work is based on the assumption of unconstrained sensor-to-sensor communications. In this paper, we study the k-connectivity of WSNs under the EG scheme with physical link constraints; k-connectivity is defined as the property that the network remains connected despite the failure of any (k - 1) sensors. We use a simple communication model, where unreliable wireless links are modeled as independent on/off channels, and derive zero-one laws for the properties that i) the WSN is k-connected, and ii) each sensor is connected to at least k other sensors. These zero-one laws improve the previous results by Rybarczyk on the k-connectivity under a fully connected communication model. Moreover, under the on/off channel model, we provide a stronger form of the zero-one law for the 1-connectivity as compared to that given by Yagan.",
"",
"We present a new method which enables us to find threshold functions for many properties in random intersection graphs. This method is used to establish sharp threshold functions in random intersection graphs for @math –connectivity, perfect matching containment and Hamilton cycle containment.",
"A uniform random intersection graph G(n,m,k) is a random graph constructed as follows. Label each of n nodes by a randomly chosen set of k distinct colours taken from some finite set of possible colours of size m. Nodes are joined by an edge if and only if some colour appears in both their labels. These graphs arise in the study of the security of wireless sensor networks, in particular when modelling the network graph of the well-known key predistribution technique due to Eschenauer and Gligor. The paper determines the threshold for connectivity of the graph G(n,m,k) when n → ∞ in many situations. For example, when k is a function of n such that k >= 2 and m = ⌊n^α⌋ for some fixed positive real number α, then G(n,m,k) is almost surely connected when lim inf k^2 n/(m log n) > 1, and G(n,m,k) is almost surely disconnected when lim sup k^2 n/(m log n) < 1.",
"We study properties of the uniform random intersection graph model G(n,m,d). We find asymptotic estimates on the diameter of the largest connected component of the graph near the phase transition and connectivity thresholds. Moreover we manage to prove an asymptotically tight bound for the connectivity and phase transition thresholds for all possible ranges of d, which has not been obtained before. The main motivation of our research is the usage of the random intersection graph model in the studies of wireless sensor networks.",
"The random key graph is a random graph naturally associated with the random key predistribution scheme introduced by Eschenauer and Gligor in the context of wireless sensor networks (WSNs). For this class of random graphs, we establish a new version of a conjectured zero-one law for graph connectivity as the number of nodes becomes unboundedly large. The results reported here complement and strengthen recent work on this conjecture by Blackburn and Gerke. In particular, the results are given under conditions which are more realistic for applications to WSNs."
]
} |
1409.5995 | 2060271434 | Random intersection graphs have received much attention for nearly two decades, and currently have a wide range of applications ranging from key predistribution in wireless sensor networks to modeling social networks. In this paper, we investigate the strengths of connectivity and robustness in a general random intersection graph model. Specifically, we establish sharp asymptotic zero-one laws for k-connectivity and k-robustness, as well as the asymptotically exact probability of k-connectivity, for any positive integer k. The k-connectivity property quantifies how resilient is the connectivity of a graph against node or edge failures. On the other hand, k-robustness measures the effectiveness of local diffusion strategies (that do not use global graph topology information) in spreading information over the graph in the presence of misbehaving nodes. In addition to presenting the results under the general random intersection graph model, we consider two special cases of the general model, a binomial random intersection graph and a uniform random intersection graph, which both have numerous applications as well. For these two specialized graphs, our results on asymptotically exact probabilities of k-connectivity and asymptotic zero-one laws for k-robustness are also novel in the literature. | For general random intersection graph @math , Godehardt and Jaworski @cite_7 investigate its degree distribution and Bloznelis @cite_14 explore its component evolution, but provides neither a zero--one law nor the exact probability of its @math -connectivity property reported in our work. | {
"cite_N": [
"@cite_14",
"@cite_7"
],
"mid": [
"1636592403",
"188357332"
],
"abstract": [
"We present new results concerning threshold functions for a wide family of random intersection graphs. To this end we apply the coupling method used for establishing threshold functions for homogeneous random intersection graphs introduced by Karo 'nski, Scheinerman, and Singer--Cohen. In the case of inhomogeneous random intersection graphs the method has to be considerably modified and extended. By means of the altered method we are able to establish threshold functions for a general random intersection graph for such properties as @math -connectivity, matching containment or hamiltonicity. Moreover using the new approach we manage to sharpen the best known results concerning homogeneous random intersection graph.",
"Graph concepts generally are useful for defining and detecting clusters. We consider basic properties of random intersection graphs generated by a random bipartite graph BG n, m on n+m vertices. In particular, we focus on the distribution of the number of isolated vertices, and on the distribution of the vertex degrees. These results are applied to study the asymptotic properties of such random intersection graphs for the special case that the distribution P (m) is degenerated. The application of this model to find clusters and to test their randomness especially for non-metric data is discussed."
]
} |
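The binomial random intersection graph referenced above can be sampled directly from its definition: each vertex independently picks each item from a common pool with a fixed probability, and two vertices are adjacent iff their item sets intersect. The sketch below is purely illustrative (the function names are ours, not from the cited papers) and pairs the sampler with a BFS connectivity check:

```python
import random
from collections import deque

def binomial_rig(n, m, p, seed=None):
    """Sample a binomial random intersection graph: each of n vertices
    independently picks each of m items with probability p; two vertices
    are adjacent iff their item sets intersect.  Returns an adjacency map."""
    rng = random.Random(seed)
    items = [frozenset(j for j in range(m) if rng.random() < p)
             for _ in range(n)]
    adj = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if items[u] & items[v]:  # shared item => edge
                adj[u].add(v)
                adj[v].add(u)
    return adj

def is_connected(adj):
    """BFS from vertex 0; the graph (assumed nonempty) is connected
    iff every vertex is reached."""
    seen, queue = {0}, deque([0])
    while queue:
        for v in adj[queue.popleft()] - seen:
            seen.add(v)
            queue.append(v)
    return len(seen) == len(adj)
```

The uniform variant (each vertex picking a fixed-size item set uniformly at random) differs only in how `items` is drawn.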
1409.5995 | 2060271434 | Random intersection graphs have received much attention for nearly two decades, and currently have a wide range of applications ranging from key predistribution in wireless sensor networks to modeling social networks. In this paper, we investigate the strengths of connectivity and robustness in a general random intersection graph model. Specifically, we establish sharp asymptotic zero-one laws for k-connectivity and k-robustness, as well as the asymptotically exact probability of k-connectivity, for any positive integer k. The k-connectivity property quantifies how resilient is the connectivity of a graph against node or edge failures. On the other hand, k-robustness measures the effectiveness of local diffusion strategies (that do not use global graph topology information) in spreading information over the graph in the presence of misbehaving nodes. In addition to presenting the results under the general random intersection graph model, we consider two special cases of the general model, a binomial random intersection graph and a uniform random intersection graph, which both have numerous applications as well. For these two specialized graphs, our results on asymptotically exact probabilities of k-connectivity and asymptotic zero-one laws for k-robustness are also novel in the literature. | To date, no results on the ( @math -)robustness of random intersection graphs have been reported by others. As noted in Lemma , Zhang and Sundaram @cite_18 present a zero--one law for @math -robustness in an Erdős--Rényi graph. Specifically, their result is that if @math , then the Erdős--Rényi graph @math is almost surely @math -robust (resp., not @math -robust) if @math (resp., @math ). | {
"cite_N": [
"@cite_18"
],
"mid": [
"2962884567"
],
"abstract": [
"We study a graph-theoretic property known as robustness, which plays a key role in the behavior of certain classes of dynamics on networks (such as resilient consensus and contagion). This property is much stronger than other graph properties such as connectivity and minimum degree, in that one can construct graphs with high connectivity and minimum degree but low robustness. In this paper, we investigate the robustness of common random graph models for complex networks (Erdős-Rényi, geometric random, and preferential attachment graphs). We show that the notions of connectivity and robustness coincide on these random graph models: the properties share the same threshold function in the Erdős-Rényi model, cannot be very different in the geometric random graph model, and are equivalent in the preferential attachment model. This indicates that a variety of purely local diffusion dynamics will be effective at spreading information in such networks."
]
} |
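The r-robustness property discussed in this record admits a direct, if exponential, check from its definition: for every pair of nonempty disjoint vertex subsets, at least one subset must contain a node with at least r neighbours outside it. A brute-force sketch for small graphs (names and representation are ours, assuming adjacency maps of vertex sets):

```python
from itertools import combinations

def r_reachable(adj, subset, r):
    """True if some node in `subset` has at least r neighbours outside it."""
    s = set(subset)
    return any(len(adj[v] - s) >= r for v in subset)

def is_r_robust(adj, r):
    """Exhaustive r-robustness test (exponential in the number of vertices,
    so only usable on small graphs): every pair of nonempty disjoint vertex
    subsets must contain at least one r-reachable member set."""
    nodes = list(adj)
    for k1 in range(1, len(nodes) + 1):
        for s1 in combinations(nodes, k1):
            rest = [v for v in nodes if v not in s1]
            for k2 in range(1, len(rest) + 1):
                for s2 in combinations(rest, k2):
                    if not (r_reachable(adj, s1, r) or r_reachable(adj, s2, r)):
                        return False
    return True
```

Since 1-robustness coincides with connectivity, a path graph passes for r = 1 but fails for r = 2, while a complete graph on four vertices is 2-robust.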
1409.5995 | 2060271434 | Random intersection graphs have received much attention for nearly two decades, and currently have a wide range of applications ranging from key predistribution in wireless sensor networks to modeling social networks. In this paper, we investigate the strengths of connectivity and robustness in a general random intersection graph model. Specifically, we establish sharp asymptotic zero-one laws for k-connectivity and k-robustness, as well as the asymptotically exact probability of k-connectivity, for any positive integer k. The k-connectivity property quantifies how resilient is the connectivity of a graph against node or edge failures. On the other hand, k-robustness measures the effectiveness of local diffusion strategies (that do not use global graph topology information) in spreading information over the graph in the presence of misbehaving nodes. In addition to presenting the results under the general random intersection graph model, we consider two special cases of the general model, a binomial random intersection graph and a uniform random intersection graph, which both have numerous applications as well. For these two specialized graphs, our results on asymptotically exact probabilities of k-connectivity and asymptotic zero-one laws for k-robustness are also novel in the literature. | For the random intersection graphs in this paper, two nodes have an edge between them if their object sets share at least one object. A natural variant is to define graphs with edges only between nodes which have at least @math objects in common (instead of just @math ) for some positive integer @math . Zhao @cite_5 @cite_25 @cite_11 consider @math -connectivity in graphs under this definition. In addition, ( @math )-connectivity of other random graphs has also been investigated in the literature @cite_4 @cite_34 . | {
"cite_N": [
"@cite_4",
"@cite_5",
"@cite_34",
"@cite_25",
"@cite_11"
],
"mid": [
"2005079131",
"2079098913",
"2052426738",
"104125573",
"2963768473"
],
"abstract": [
"In this paper, we consider a wireless network with unreliable links and investigate its minimum node degree and k- connectivity. In such network, n nodes are uniformly distributed in a region, and edges are established for any two nodes within a certain distance and with a probabilistically active link in between. On a torus topology, we present a zero-one law for the property of minimum degree being at least k, leading to a zero-law for k-connectivity and an intermediate result towards a one-law. On a square with boundary effect, we establish a one- law for minimum degree. Our results are derived from rigorous analysis and also confirmed by the simulation, and they provide guidelines for the design of wireless networks. Index Terms—Connectivity, Erdos-Renyi graph, geometric graph, graph intersection, minimum node degree, random graph, unreliable link, wireless network, zero-one law.",
"The q-composite key predistribution scheme [1] is used prevalently for secure communications in large-scale wireless sensor networks (WSNs). Prior work [2]-[4] explores topological properties of WSNs employing the q-composite scheme for q = 1 with unreliable communication links modeled as independent on off channels. In this paper, we investigate topological properties related to the node degree in WSNs operating under the q-composite scheme and the on off channel model. Our results apply to general q and are stronger than those reported for the node degree in prior work even for the case of q being 1. Specifically, we show that the number of nodes with certain degree asymptotically converges in distribution to a Poisson random variable, present the asymptotic probability distribution for the minimum degree of the network, and establish the asymptotically exact probability for the property that the minimum degree is at least an arbitrary value. Numerical experiments confirm the validity of our analytical findings.",
"To be considered for an IEEE Jack Keil Wolf ISIT Student Paper Award. We study the secure and reliable connectivity of wireless sensor networks. Security is assumed to be ensured by the random pairwise key predistribution scheme of Chan, Perrig, and Song, and unreliable wireless links are represented by independent on off channels. Modeling the network by an intersection of a random K-out graph and an Erdős-Renyi graph, we present scaling conditions (on the number of nodes, the scheme parameter K, and the probability of a wireless channel being on) such that the resulting graph contains no nodes with degree less than k with high probability, when the number of nodes gets large. Results are given in the form of zero-one laws and are shown to improve the previous results by Yagan and Makowski on the absence of isolated nodes (i.e., absence of nodes with degree zero). Via simulations, the established zero-one laws are shown to hold also for the property of k-connectivity; i.e., the property that graph remains connected despite the deletion of any k − 1 nodes or edges.",
"The seminal q-composite key predistribution scheme [3] (IEEE S&P 2003) is used prevalently for secure communications in large-scale wireless sensor networks (WSNs). Yagan [12] (IEEE IT 2012) and we [15] (IEEE ISIT 2013) explore topological properties of WSNs employing the q-composite scheme in the case of q = 1 with unreliable communication links modeled as independent on off channels. However, it is challenging to derive results for general q under such on off channel model. In this paper, we resolve such challenge and investigate topological properties related to node degree in WSNs operating under the q-composite scheme and the on off channel model. Our results apply to general q, yet there has not been any work in the literature reporting the corresponding results even for q = 1, which are stronger than those about node degree in [12], [15]. Specifically, we show that the number of nodes with an arbitrary degree asymptotically converges to a Poisson distribution, present the asymptotic probability distribution for the minimum node degree of the network, and establish the asymptotically exact probability for the property that the minimum node degree is at least an arbitrary value. Numerical experiments confirm the validity our analytical findings.",
"Random s-intersection graphs have recently received much interest in a wide range of application areas. Broadly speaking, a random s-intersection graph is constructed by first assigning each vertex a set of items in some random manner, and then putting an undirected edge between all pairs of vertices that share at least s items (the graph is called a random intersection graph when s = 1). A special case of particular interest is a uniform random s-intersection graph, where each vertex independently selects the same number of items uniformly at random from a common item pool. Another important case is a binomial random s-intersection graph, where each item from a pool is independently assigned to each vertex with the same probability. Both models have found numerous applications thus far including cryptanalysis, and modeling recommender systems, secure sensor networks, online social networks, trust networks and small-world networks (uniform random s-intersection graphs), as well as clustering analysis, classification, and the design of integrated circuits (binomial random s-intersection graphs). In this paper, for binomial uniform random s-intersection graphs, we present results related to k-connectivity and minimum vertex degree. Specifically, we derive the asymptotically exact probabilities and zero--one laws for the following three properties: (i) k-vertex-connectivity, (ii) k-edge-connectivity and (iii) the property of minimum vertex degree being at least k."
]
} |
1409.5872 | 73637996 | Program analysis is on the brink of mainstream in embedded systems development. Formal verification of behavioural requirements, finding runtime errors and automated test case generation are some of the most common applications of automated verification tools based on Bounded Model Checking. Existing industrial tools for embedded software use an off-the-shelf Bounded Model Checker and apply it iteratively to verify the program with an increasing number of unwindings. This approach unnecessarily wastes time repeating work that has already been done and fails to exploit the power of incremental SAT solving. This paper reports on the extension of the software model checker CBMC to support incremental Bounded Model Checking and its successful integration with the industrial embedded software verification tool BTC EmbeddedTester. We present an extensive evaluation over large industrial embedded programs, which shows that incremental Bounded Model Checking cuts runtimes by one order of magnitude in comparison to the standard non-incremental approach, enabling the application of formal verification to large and complex embedded software. | Most related is recent work on a prototype tool @cite_32 implementing incremental BMC using SMT solvers. They show the advantages of incremental software BMC. However, they do not consider industrial embedded software and have evaluated their tool only on small benchmarks that are very easy for both the incremental and non-incremental approaches (runtimes @math 1s). Unfortunately, a working version of the tool was not available at the time of submission. | {
"cite_N": [
"@cite_32"
],
"mid": [
"2063913712"
],
"abstract": [
"Conventional Bounded Software Model Checking tools generate a symbolic representation of all feasible executions of a program up to a predetermined bound. An insufficiently large bound results in missed bugs, and a subsequent increase of the bound necessitates the complete reconstruction of the instance and a restart of the underlying solver. Conversely, exceedingly large bounds result in prohibitively large decision problems, causing the verifier to run out of resources before it can provide a result. We present an incremental approach to Bounded Software Model Checking, which enables increasing the bound without incurring the overhead of a restart. Further, we provide an LLVM-based open-source implementation which supports a wide range of incremental SMT solvers. We compare our implementation to other traditional non-incremental software model checkers and show the advantages of performing incremental verification by analyzing the overhead incurred on a common suite of benchmarks."
]
} |
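The gain from incremental unwinding described in this record comes from keeping the work done for bound k when moving to bound k+1, instead of rebuilding the whole instance and restarting the solver. CBMC and the cited prototype realize this at the level of an incremental SAT/SMT solver (which retains learnt clauses across bounds); the toy below replaces the solver with explicit-state search purely to illustrate the reuse of the previous bound's frontier, and all names are ours:

```python
def bmc_incremental(init, step, bad, max_depth):
    """Toy 'incremental BMC': rather than re-exploring from the initial
    states at every bound (the non-incremental scheme), keep the frontier
    of states first reached at the current depth and extend it by one
    transition per unwinding.  Returns the least depth at which a bad
    state is reachable, or None if none is found within max_depth."""
    frontier, seen = set(init), set(init)
    for depth in range(max_depth + 1):
        if any(bad(s) for s in frontier):
            return depth  # counterexample of this length exists
        # one extra unwinding: successors not reached at a smaller depth
        frontier = {t for s in frontier for t in step(s)} - seen
        seen |= frontier
    return None
```

A non-incremental loop would instead recompute reachability from `init` for every bound, repeating all earlier transitions each time, which is exactly the wasted work the paper eliminates.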
1409.5530 | 2027115776 | The problem of imbalance detection in a three-phase power system using a phasor measurement unit (PMU) is considered. A general model for the zero, positive, and negative sequences from a PMU measurement at off-nominal frequencies is presented and a hypothesis testing framework is formulated. The new formulation takes into account the fact that minor degree of imbalance in the system is acceptable and does not indicate subsequent interruptions, failures, or degradation of physical components. A generalized likelihood ratio test (GLRT) is developed and shown to be a function of the negative-sequence phasor estimator and the acceptable level of imbalances for nominal system operations. As a by-product to the proposed detection method, a constrained estimation of the positive and negative phasors and the frequency deviation is obtained for both balanced and unbalanced situations. The theoretical and numerical performance analyses show improved performance over benchmark techniques and robustness to the presence of additional harmonics. | Under perfectly balanced three-phase operating conditions, the zero and negative sequences are absent, hence the state-estimation and signal analysis in this case are carried out using only the positive-sequence model @cite_13 @cite_41 . When system imbalance occurs, the zero and negative sequences are nonzero, and the PMU’s output exhibits nonstationary frequency deviations @cite_12 , @cite_29 . In addition, the positive-sequence measurements become non-circular as described in @cite_10 , @cite_17 . In the pioneering works of @cite_10 and @cite_17 , new methods were derived for frequency-estimation based on non-circular models and the Clarke’s transformation. These methods use the positive and negative sequences and analyze the measurements in the time domain. 
The mismatch estimation error caused by using the balanced state estimation under imbalance is studied in @cite_13 and the influence of imperfect synchronization on the state estimation is described in @cite_16 . In @cite_6 a distribution system state estimator suitable for monitoring unbalanced distribution networks is presented. A practical procedure to decrease the state estimation error introduced by load imbalances is developed in @cite_37 . | {
"cite_N": [
"@cite_13",
"@cite_37",
"@cite_41",
"@cite_29",
"@cite_6",
"@cite_16",
"@cite_10",
"@cite_12",
"@cite_17"
],
"mid": [
"2101990196",
"1989296314",
"2167838480",
"2155833003",
"2032366196",
"1985336094",
"2042663762",
"2121499406",
"1980072023"
],
"abstract": [
"This paper investigates the errors introduced in the positive sequence state estimation due to the usual assumptions of having fully balanced bus loads generations and continuously transposed transmission lines. A three-phase state estimator is first developed in order to verify the actual error free solution in the phase coordinates. Then, several tests are conducted using different assumptions regarding the availability of single and multi-phase measurements. It is demonstrated that incomplete metering of three-phase system quantities may lead to significant errors in the positive sequence state estimates for certain cases. Such cases may also lead to incorrect bad data detection and elimination, further deteriorating the quality of the state estimate. IEEE 30 bus test system is used to illustrate these cases.",
"Most energy management system applications are based on the positive sequence network model. Since most systems do not operate under fully-balanced operating conditions, methods to minimize the impact of the balanced operation assumption on the network applications must be developed. This paper studies the impact of load imbalances on state estimation results by comparing state estimates using different measurement assumptions. In particular, the use of PMUs and systematic tuning of measurement weights are studied as practical ways of addressing this issue. Several scenarios are simulated using IEEE test systems with different measurement configurations and performance improvement of the state estimator in response to the proposed changes is illustrated by simulations.",
"The main objective of this paper is to describe a multilevel framework that facilitates seamless integration of existing state estimators (SEs) that are designed to function at different levels of modeling hierarchy in order to accomplish very large-scale monitoring of interconnected power systems. This has been a major challenge for decades as power systems grew pretty much independently in different areas, which had to operate in an interconnected and synchronized fashion. The paper initially provides a brief historical perspective which also explains the existing state estimation paradigm. This is followed by a review of the recent technological and regulatory drivers that are responsible for the new developments in the energy management functions. The paper then shows that a common theoretical framework can be used to implement a hierarchical scheme by which even very large-scale power systems can be efficiently and accurately monitored. This is illustrated for substation level, transmission system level as well as for a level between different transmission system operators in a given power system. Finally, the paper describes the use and benefits of phasor measurements when incorporated at these different levels of the proposed infrastructure. Numerical examples are included to illustrate performance of the proposed multilevel schemes.",
"With the advent of Substation Computer Systems dedicated to protection, control and data logging functions in a Substation, it becomes possible to develop new applications which can utilize the processing power available within the substation. The microcomputer based Symmetrical Component Distance Relay (SCDR) described in the references cited at the end of this paper possesses certain characteristics which facilitate real-time monitoring of positive sequence voltage phasor at the local power system bus. With a regression analysis the frequency and rate-of-change of frequency at the bus can also be determined from the positive sequence voltage phase angle. This paper describes the theoretical basis of these computations and describes results of experiments performed in the AEP power system simulation laboratory. Plans for future field tests on the AEP system are also outlined.",
"Distribution networks are only partially monitored networks, with loads often unbalanced on the phases of the system. Moreover, last generation of distribution networks are facing significant challenges, due to, as an example, the increasing amount of distributed generation (DG), installed in an unplanned manner. Such changing scenario emphasizes issues such as voltage unbalance and imposes new operational requirements, such as distributed voltage control and demand side management. Traditional state estimation techniques are a reliable instrument for monitoring the transmission system, but their application to distribution systems is still under development. In this work, a distribution system state estimator (DSSE) suitable for accurately monitoring unbalanced distribution networks is presented and the impact of the different kind of measurement devices is analyzed and discussed by means of simulations performed on the 13-bus IEEE distribution network.",
"Phasor measurement units (PMUs) are time synchronized sensors primarily used for power system state estimation. Despite their increasing incorporation and the ongoing research on state estimation using measurements from these sensors, estimation with imperfect phase synchronization has not been sufficiently investigated. Inaccurate synchronization is an inevitable problem that large scale deployment of PMUs has to face. In this paper, we introduce a model for power system state estimation using PMUs with phase mismatch. We propose alternating minimization and parallel Kalman filtering for state estimation using static and dynamic models, respectively, under different assumptions. Numerical examples demonstrate the improved accuracy of our algorithms compared with traditional algorithms when imperfect synchronization is present. We conclude that when a sufficient number of PMUs with small delays are employed, the imperfect synchronization can be largely compensated in the estimation stage.",
"Accurate estimation of system frequency in real time is a prerequisite for the future smart grid, where the generation, loading, and topology will all be dynamically updated. In this article, we introduce a unified framework for the estimation of instantaneous frequency in both balanced and unbalanced conditions in a three-phase system, thus consolidating the existing approaches and providing next-generation solutions capable of joint adaptive frequency estimation and system fault identification. This is achieved by employing recent developments in the statistics of complex variables (augmented statistics) and the associated widely linear models, allowing us to benefit from a rigorous account of varying degrees of noncircularity corresponding to different sources of frequency variations. The advantages of such an approach are illustrated for both balanced and unbalanced conditions, including voltage sags, harmonics and supply-demand mismatch, all major obstacles for accurate frequency estimation in the smart grid.",
"This paper endeavors to present a comprehensive summary of the causes and effects of voltage unbalance and to discuss related standards, definitions, and mitigation techniques. Several causes of voltage unbalance on the power system and in industrial facilities are presented as well as the resulting adverse effects on the system and on equipment such as induction motors and power electronic converters and drives. Standards addressing voltage unbalance are discussed and clarified, and several mitigation techniques are suggested to correct voltage unbalance problems. This paper makes apparent the importance of identifying potential unbalance problems for the benefit of both the utility and customer.",
"A novel technique for online estimation of the fundamental frequency of unbalanced three-phase power systems is proposed. Based on Clarke's transformation and widely linear complex domain modeling, the proposed method makes use of the full second-order information within three-phase signals, thus promising enhanced and robust frequency estimation. The structure, mathematical formulation, and theoretical stability and statistical performance analysis of the proposed technique illustrate that, in contrast to conventional linear adaptive estimators, the proposed method is well matched to unbalanced system conditions and also provides unbiased frequency estimation. The proposed method is also less sensitive to the variations of the three-phase voltage amplitudes over time and in the presence of higher order harmonics. Simulations on both synthetic and real-world unbalanced power systems support the analysis."
]
} |
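The zero-, positive-, and negative-sequence quantities discussed above come from the standard Fortescue (symmetrical-component) transform of the three phase phasors; under perfect balance the zero and negative sequences vanish, so the negative-to-positive magnitude ratio serves as an unbalance indicator. A minimal sketch (function names are ours):

```python
import cmath

ALPHA = cmath.exp(2j * cmath.pi / 3)  # 120-degree rotation operator

def symmetrical_components(va, vb, vc):
    """Fortescue transform: decompose three phase phasors into the
    zero-, positive-, and negative-sequence components."""
    v0 = (va + vb + vc) / 3
    v1 = (va + ALPHA * vb + ALPHA**2 * vc) / 3
    v2 = (va + ALPHA**2 * vb + ALPHA * vc) / 3
    return v0, v1, v2

def unbalance_factor(va, vb, vc):
    """Negative- to positive-sequence magnitude ratio (the VUF)."""
    _, v1, v2 = symmetrical_components(va, vb, vc)
    return abs(v2) / abs(v1)
```

For a perfectly balanced set (equal magnitudes, phases 120 degrees apart) the factor is zero; shrinking one phase magnitude makes it strictly positive.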
1409.5530 | 2027115776 | The problem of imbalance detection in a three-phase power system using a phasor measurement unit (PMU) is considered. A general model for the zero, positive, and negative sequences from a PMU measurement at off-nominal frequencies is presented and a hypothesis testing framework is formulated. The new formulation takes into account the fact that minor degree of imbalance in the system is acceptable and does not indicate subsequent interruptions, failures, or degradation of physical components. A generalized likelihood ratio test (GLRT) is developed and shown to be a function of the negative-sequence phasor estimator and the acceptable level of imbalances for nominal system operations. As a by-product to the proposed detection method, a constrained estimation of the positive and negative phasors and the frequency deviation is obtained for both balanced and unbalanced situations. The theoretical and numerical performance analyses show improved performance over benchmark techniques and robustness to the presence of additional harmonics. | In the literature, various definitions are given for imbalance in a power system, where the fundamental performance measures are the voltage unbalance factor (VUF) @cite_12 , @cite_19 , @cite_21 and the percent voltage unbalance (PVU) @cite_4 . The VUF is the ratio of the magnitudes of negative- and positive-sequence voltages and the PVU is equal to the ratio of the maximum voltage magnitude deviation of the zero, positive, and negative sequences from the average of the three-phase voltage magnitudes @cite_26 . The phase angle imbalance, which is not reflected in either the VUF or PVU measures, can be described by the phase voltage unbalance factor (PVUF) @cite_28 and the complex VUF (CVUF) @cite_15 , @cite_1 . The limitations of these commonly-used methods can be found, for example, in @cite_39 .
An online identification method of the level, location, and effects of voltage imbalance in a distribution network is derived in @cite_36 based on distribution system state estimation. However, the existing non-parametric methods for detection of imbalance are insufficient (e.g., @cite_39 @cite_15 @cite_1 @cite_20 ). Derivation of parametric detection methods is expected to improve the detection performance. | {
"cite_N": [
"@cite_26",
"@cite_4",
"@cite_28",
"@cite_36",
"@cite_21",
"@cite_1",
"@cite_39",
"@cite_19",
"@cite_15",
"@cite_20",
"@cite_12"
],
"mid": [
"2144680725",
"2051741728",
"2050298298",
"2157726340",
"",
"2147565285",
"1993182583",
"",
"2122812677",
"2127394658",
"2121499406"
],
"abstract": [
"This paper uses a real load test to investigate the effects of an unbalanced voltage supply on an induction motor's performance. Based upon various experiments, including: (1) cases with the same unbalance voltage factor but different unbalanced voltages; (2) cases with only one unbalanced voltage but different degrees of unbalance; and (3) cases with the same positive-sequence voltage but different negative-sequence voltages, the importance of the positive-sequence voltage in the motor's apparent performance and of the negative-sequence voltage in the hidden damage are pointed out. Finally, it is strongly suggested that the related regulations, and a motor's derating factor and temperature rise curves should be based on not only a voltage unbalance factor, but also the magnitude of the positive-sequence voltage.",
"In this paper an analysis of five different definitions of voltage unbalance developed by various power communities (CIGRE, NEMA, IEEE, and IEC) is presented, in order to understand their performance and difference among them. The purpose of this work is to analyze and quantify the voltage unbalance, so that these results may become useful to minimize the harmful effects of voltage unbalance. To quantify the voltage unbalance, five existing voltage unbalance definitions are calculated and their numerical values are used in analyzing the behavior of these definitions for the unit change in their magnitude and angle of voltage phasors.",
"Most of the loads in industrial power distribution systems are balanced and connected to three power systems. However, voltage unbalance is generated at the user's 3-phase 4-wire distribution systems with single & three phase. Voltage unbalance is mainly affected by load system rather than power system. Unbalanced voltage will draws a highly unbalanced current and results in the temperature rise and the low output characteristics at the machine. It is necessary to analyze correct voltage unbalance factor for reduction of side effects in the industrial sites. Voltage unbalance is usually defined by the maximum percent deviation of voltages from their average value, by the method of symmetrical components or by the expression in a more user-friendly form which requires only the three line voltage readings. If the neutral point is moved at the 3-phase -4-wire system by the unbalanced load, by the conventional analytical method, line and phase voltage unbalance leads to different results due to zero-sequence component. This paper presents a new analytical method for phase and line voltage unbalance factor in 4-wire systems. Two methods indicate exact results",
"Voltage unbalance is a costly and potentially damaging phenomenon. It affects both distribution network operators (DNOs) and customers. As part of a move toward greater network visibility, DNOs are increasingly motivated to monitor their networks in finer detail using an increasingly diverse range of monitoring devices. If the information from monitoring devices can be collated intelligently, the impact of unbalance and other power-quality issues can be accurately estimated throughout the network. In this paper, a new methodology is presented which utilizes distribution system state estimation (DSSE) to estimate the level, location, and impact of voltage unbalance on a real distribution network. The developed methodology is novel, pulling together and advancing existing research on DSSE and unbalance. The methodology is validated using data from a real U.K. distribution network with significant unbalance. The methodology is shown to be capable of statistically estimating the level, location, and effects of unbalance within the network, even when some areas of the network are unobservable.",
"",
"The negative effects of a particular unbalanced voltage on the performance of an induction motor are studied in this paper. The paper suggests that the available definitions of unbalanced voltages are not comprehensive and complete. Therefore, the results of these analyses on motor performance are not very reliable. To prove this claim, a three-phase 25-hp squirrel-cage induction motor is analyzed under different unbalanced conditions. It is shown that it is necessary to define a more precise unbalanced factor for more accurate results. Experimental results verify the theoretical analysis.",
"This paper presents the main aspects observed in analyzing Complex Voltage Unbalance Factor (CVUF) behavior resulting from the variation of voltage magnitudes and angles. The goal is to identify possible incoherencies regarding the use of the CVUF, and also to investigate whether this factor is more sensitive to variations in magnitude or angles under various voltage unbalance conditions. This study also evaluates the efficiency of the use of the CVUF angle and its association with positive component magnitudes. The results indicate that the CVUF should not be used as a single and sufficient parameter for the quantification of voltage unbalances, which highlights the need to develop a new indicator which may establish a more clear and simple association between this disturbance and its effects on electrical equipment.",
"",
"This paper examines the proper application of induction machines when supplied by unbalanced voltages in the presence of over- and undervoltages. Differences in the definition of voltage unbalance are also examined. The approach adopted is to use NEMA derating for unbalanced voltages as a basis to include the effects of undervoltages and overvoltages, through motor loss calculations.",
"Performance analysis of three-phase induction motors under supply voltage unbalance conditions is normally carried out using the well-known symmetrical components analysis. In this analysis, the voltage unbalance level at the terminals of machine is assessed by means of the NEMA or IEC definitions. Both definitions lead to a relatively large error in predicting the performance of a machine. A method has recently been proposed in which, in addition to voltage unbalance factor (VUF), the phase angle has been accounted. This means that the voltage unbalance factor is regarded as a complex value. This paper shows that although the use of the complex VUF reduces the computational error considerably, it is still high. This is proven by evaluating the derating factor of a three-phase induction motor. A method is introduced to determine the derating factor precisely using the complex unbalance factor for an induction motor operating under any unbalanced supply condition. A practical case for derating of a typical three-phase squirrel-cage induction motor supplied by an unbalanced voltage is studied in the paper.",
"This paper endeavors to present a comprehensive summary of the causes and effects of voltage unbalance and to discuss related standards, definitions, and mitigation techniques. Several causes of voltage unbalance on the power system and in industrial facilities are presented as well as the resulting adverse effects on the system and on equipment such as induction motors and power electronic converters and drives. Standards addressing voltage unbalance are discussed and clarified, and several mitigation techniques are suggested to correct voltage unbalance problems. This paper makes apparent the importance of identifying potential unbalance problems for the benefit of both the utility and customer."
]
} |
1409.5530 | 2027115776 | The problem of imbalance detection in a three-phase power system using a phasor measurement unit (PMU) is considered. A general model for the zero, positive, and negative sequences from a PMU measurement at off-nominal frequencies is presented and a hypothesis testing framework is formulated. The new formulation takes into account the fact that a minor degree of imbalance in the system is acceptable and does not indicate subsequent interruptions, failures, or degradation of physical components. A generalized likelihood ratio test (GLRT) is developed and shown to be a function of the negative-sequence phasor estimator and the acceptable level of imbalances for nominal system operations. As a by-product of the proposed detection method, a constrained estimation of the positive and negative phasors and the frequency deviation is obtained for both balanced and unbalanced situations. The theoretical and numerical performance analyses show improved performance over benchmark techniques and robustness to the presence of additional harmonics. | In most situations, frequency deviations and minor imbalances can be mitigated by frequency regulation or load compensation techniques @cite_0 . In the literature, several mitigation techniques have been suggested to correct significant voltage imbalance problems @cite_12 , on both the power system and user facility levels. Voltage imbalance is ultimately fixed by manually or automatically rebalancing loads and removing asymmetric network line configurations @cite_36 ; however, these processes are costly and inappropriate for frequent but small imbalances. For example, compensation of voltage imbalance can be achieved by reducing the negative-sequence voltage using a series active power filter or shunt compensation, as described in @cite_32 , or by advanced control strategies @cite_34 @cite_25 @cite_38 .
In addition, voltage harmonics, which can be generated by a nonlinear unbalanced load, can be compensated for by separating the positive and negative sequences of each harmonic order @cite_32 . | {
"cite_N": [
"@cite_38",
"@cite_36",
"@cite_32",
"@cite_0",
"@cite_34",
"@cite_25",
"@cite_12"
],
"mid": [
"1996218375",
"2157726340",
"2073703226",
"2122321109",
"1980187301",
"2141421654",
"2121499406"
],
"abstract": [
"The increasing presence of single-phase distributed generators and unbalanced loads in the electric power system may lead to unbalance of the three phase voltages, resulting in increased losses and heating. The distribution network operators (DNOs) are increasingly being challenged to maintain the required power quality. To reduce voltage unbalance DNOs are seeking to connect larger DG units to the three phases instead of a single-phase connection. The three-phase connection can be realised by three single-phase inverters or by a three-phase inverter. Each inverter topology can be implemented with different control strategies. The control can be equipped with active power filtering functions which can improve the power quality. In this paper, the effect of connecting DG units by means of a three-phase connection instead of a single-phase connection on voltage unbalance is studied. Besides two commonly used control strategies, two other control strategies that combine DG and active power filtering functions are implemented and their effect on voltage unbalance is studied. The last two control strategies lead to the reduction of voltage unbalance such that the voltage requirements are maintained.",
"Voltage unbalance is a costly and potentially damaging phenomenon. It affects both distribution network operators (DNOs) and customers. As part of a move toward greater network visibility, DNOs are increasingly motivated to monitor their networks in finer detail using an increasingly diverse range of monitoring devices. If the information from monitoring devices can be collated intelligently, the impact of unbalance and other power-quality issues can be accurately estimated throughout the network. In this paper, a new methodology is presented which utilizes distribution system state estimation (DSSE) to estimate the level, location, and impact of voltage unbalance on a real distribution network. The developed methodology is novel, pulling together and advancing existing research on DSSE and unbalance. The methodology is validated using data from a real U.K. distribution network with significant unbalance. The methodology is shown to be capable of statistically estimating the level, location, and effects of unbalance within the network, even when some areas of the network are unobservable.",
"Recently, there has been an increasing interest in using distributed generators (DGs) not only to inject power into the grid but also to enhance the power quality. In this paper, a stationary-frame control method for voltage unbalance compensation in an islanded microgrid is proposed. This method is based on the proper control of DGs interface converters. The DGs are properly controlled to autonomously compensate for voltage unbalance while sharing the compensation effort and also active and reactive powers. The control system of the DGs mainly consists of active and reactive power droop controllers, a virtual impedance loop, voltage and current controllers, and an unbalance compensator. The design approach of the control system is discussed in detail, and simulation and experimental results are presented. The results demonstrate the effectiveness of the proposed method in the compensation of voltage unbalance.",
"This paper presents a new approach for generating reference currents for an active filter and or a static compensator. It is assumed that the compensator is connected to a load that may either be connected in star or in delta. The load can be unbalanced and may also draw harmonic currents. The purpose of the compensating scheme is to balance the load, as well as make the supply side power factor a desired value. The authors use the theory of instantaneous symmetrical components to obtain an algorithm to compute three phase reference currents which, when injected to the power system, produce desired results. They also propose a suitable compensator structure that will track the reference currents in a hysteresis band control scheme. Finally, the feasibility of such a scheme is demonstrated through simulation studies.",
"Advanced control strategies are vital components for realization of microgrids. This paper reviews the status of hierarchical control strategies applied to microgrids and discusses the future trends. This hierarchical control structure consists of primary, secondary, and tertiary levels, and is a versatile tool in managing stationary and dynamic performance of microgrids while incorporating economical aspects. Various control approaches are compared and their respective advantages are highlighted. In addition, the coordination among different control hierarchies is discussed.",
"This paper presents a generalized control algorithm for voltage disturbance extraction and mitigation. The proposed mitigating device is the dynamic voltage restorer (DVR). A DVR is commonly used to mitigate the voltage sags. In this paper the proposed DVR can compensate the voltage unbalance and mitigate voltage harmonics in the time of normal operation as well as performs its basic function during the fault condition. The suggested control algorithm employs an adaptive perceptron to effectively and adaptively track and extract the most common voltage harmonics, voltage unbalance (which include negative and zero sequence voltage drops), and different types of voltage sags, which include balanced and unbalanced voltage sags. Digital simulation results are obtained using PSCAD EMTDC to verify the effectiveness of the proposed control algorithm. Experimental results are demonstrated to prove the practicality of the mitigating device.",
"This paper endeavors to present a comprehensive summary of the causes and effects of voltage unbalance and to discuss related standards, definitions, and mitigation techniques. Several causes of voltage unbalance on the power system and in industrial facilities are presented as well as the resulting adverse effects on the system and on equipment such as induction motors and power electronic converters and drives. Standards addressing voltage unbalance are discussed and clarified, and several mitigation techniques are suggested to correct voltage unbalance problems. This paper makes apparent the importance of identifying potential unbalance problems for the benefit of both the utility and customer."
]
} |
1409.5546 | 2953337984 | Provenance is derivative journal information about the origin and activities of system data and processes. For a highly dynamic system like the cloud, provenance can be accurately detected and securely used in cloud digital forensic investigation activities. This paper proposes a watchword-oriented provenance cognition algorithm for the cloud environment. Additionally, a time-stamp-based buffer verifying algorithm is proposed for securing access to the detected cloud provenance. Performance analysis of the novel algorithms proposed here yields a desirable detection rate of 89.33% and a miss rate of 8.66%. The securing algorithm successfully rejects 64% of malicious requests, yielding a cumulative frequency of 21.43% for MR. | A process-aware approach to worm attacks and contamination has been designed, implemented, and evaluated in @cite_7 . The authors identified the important issue that provenance un-awareness leads to problems in the quick and accurate identification of a worm's attack point @cite_7 . A process color can be assigned to uniquely identify each process; it is inherited by child processes and diffused through process actions. | {
"cite_N": [
"@cite_7"
],
"mid": [
"1980115980"
],
"abstract": [
"Previous provenance models have assumed that there is complete certainty in the provenance relationships. But what if this assumption does not hold? In this work, we propose a probabilistic provenance graph (PPG) model to characterize scenarios where provenance relationships are uncertain. We describe two motivating examples. The first example demonstrates the uncertainty associated with the provenance of an email. The second example demonstrates and characterizes the uncertainty associated with the provenance of statements in documents."
]
} |
1409.5546 | 2953337984 | Provenance is derivative journal information about the origin and activities of system data and processes. For a highly dynamic system like the cloud, provenance can be accurately detected and securely used in cloud digital forensic investigation activities. This paper proposes a watchword-oriented provenance cognition algorithm for the cloud environment. Additionally, a time-stamp-based buffer verifying algorithm is proposed for securing access to the detected cloud provenance. Performance analysis of the novel algorithms proposed here yields a desirable detection rate of 89.33% and a miss rate of 8.66%. The securing algorithm successfully rejects 64% of malicious requests, yielding a cumulative frequency of 21.43% for MR. | The authors of @cite_3 identified security as an important issue that decelerates the widespread growth of cloud computing. Complications around data privacy and data protection have plagued the cloud market. Assurance of data integrity is a necessity for the acceptance of cloud services in all sectors. The authors suggested a new model that extends the existing model in this regard; however, the new model should not compromise the features of the existing model. | {
"cite_N": [
"@cite_3"
],
"mid": [
"1608562624"
],
"abstract": [
"Many applications which require provenance are now moving to cloud infrastructures. However, it is not widely realised that clouds have their own need for provenance due to their dynamic nature and the burden this places on their administrators. We analyse the structure of cloud computing to identify the unique challenges facing provenance collection and the scenarios in which additional provenance data could be useful."
]
} |
1409.5546 | 2953337984 | Provenance is derivative journal information about the origin and activities of system data and processes. For a highly dynamic system like the cloud, provenance can be accurately detected and securely used in cloud digital forensic investigation activities. This paper proposes a watchword-oriented provenance cognition algorithm for the cloud environment. Additionally, a time-stamp-based buffer verifying algorithm is proposed for securing access to the detected cloud provenance. Performance analysis of the novel algorithms proposed here yields a desirable detection rate of 89.33% and a miss rate of 8.66%. The securing algorithm successfully rejects 64% of malicious requests, yielding a cumulative frequency of 21.43% for MR. | Cloud service users need to be vigilant about the security breaches that can occur in the cloud @cite_3 . The authors identified critical security issues arising from the nature of the service delivery model in the cloud, and they contributed to the cloud research field by formulating critical research questions regarding cloud security. However, they did not identify any malware-related vulnerabilities, nor did their survey present any real-life scenarios. | {
"cite_N": [
"@cite_3"
],
"mid": [
"1608562624"
],
"abstract": [
"Many applications which require provenance are now moving to cloud infrastructures. However, it is not widely realised that clouds have their own need for provenance due to their dynamic nature and the burden this places on their administrators. We analyse the structure of cloud computing to identify the unique challenges facing provenance collection and the scenarios in which additional provenance data could be useful."
]
} |
1409.5400 | 2017512655 | The task of a visual landmark recognition system is to identify photographed buildings or objects in query photos and to provide the user with relevant information on them. With their increasing coverage of the world’s landmark buildings and objects, Internet photo collections are now being used as a source for building such systems in a fully automatic fashion. This process typically consists of three steps: clustering large amounts of images by the objects they depict; determining object names from user-provided tags; and building a robust, compact, and efficient recognition index. To this date, however, there is little empirical information on how well current approaches for those steps perform in a large-scale open-set mining and recognition task. Furthermore, there is little empirical information on how recognition performance varies for different types of landmark objects and where there is still potential for improvement. With this paper, we intend to fill these gaps. Using a dataset of 500 k images from Paris, we analyze each component of the landmark recognition pipeline in order to answer the following questions: How many and what kinds of objects can be discovered automatically? How can we best use the resulting image clusters to recognize the object in a query? How can the object be efficiently represented in memory for recognition? How reliably can semantic information be extracted? And finally: What are the limiting factors in the resulting pipeline from query to semantics? We evaluate how different choices of methods and parameters for the individual pipeline steps affect overall system performance and examine their effects for different query categories such as buildings, paintings or sculptures. | We created our own query set and ground truth for this paper because available benchmarks do not support such an evaluation.
Most datasets only cover very few, mostly building-scale, landmarks ( @cite_40 , @cite_13 , @cite_41 , @cite_51 ). Another problem is that their ground truths are designed for other tasks. Image retrieval datasets ( @cite_41 , @cite_51 , @cite_16 ) are not suitable for our evaluation, because we want to evaluate object recognition (identifying the object(s) in a query image) and not image retrieval (retrieving images similar to a query from a database). Image-based localization datasets ( @cite_56 , @cite_4 , and @cite_53 ) evaluate how accurately the camera pose of the query image can be estimated. While this is more related to our problem, our goal differs from pose estimation, because camera pose does not necessarily determine what object the camera is really seeing (see Sec. for more details). The @cite_49 and @cite_37 datasets are closest to our requirements, but both of them focus on large, building-level landmarks, while we are explicitly interested in also evaluating the recognition of smaller, non-building objects. | {
"cite_N": [
"@cite_37",
"@cite_4",
"@cite_41",
"@cite_53",
"@cite_56",
"@cite_40",
"@cite_49",
"@cite_51",
"@cite_16",
"@cite_13"
],
"mid": [
"1616969904",
"2125795712",
"2141362318",
"1565312575",
"2046166954",
"2121348293",
"2155823702",
"2148809531",
"1556531089",
"2058650633"
],
"abstract": [
"We address the problem of determining where a photo was taken by estimating a full 6-DOF-plus-intrinsics camera pose with respect to a large geo-registered 3D point cloud, bringing together research on image localization, landmark recognition, and 3D pose estimation. Our method scales to datasets with hundreds of thousands of images and tens of millions of 3D points through the use of two new techniques: a co-occurrence prior for RANSAC and bidirectional matching of image features with 3D points. We evaluate our method on several large data sets, and show state-of-the-art results on landmark recognition as well as the ability to locate cameras to within meters, requiring only seconds per query.",
"Efficient view registration with respect to a given 3D reconstruction has many applications like inside-out tracking in indoor and outdoor environments, and geo-locating images from large photo collections. We present a fast location recognition technique based on structure from motion point clouds. Vocabulary tree-based indexing of features directly returns relevant fragments of 3D models instead of documents from the images database. Additionally, we propose a compressed 3D scene representation which improves recognition rates while simultaneously reducing the computation time and the memory consumption. The design of our method is based on algorithms that efficiently utilize modern graphics processing units to deliver real-time performance for view registration. We demonstrate the approach by matching hand-held outdoor videos to known 3D urban models, and by registering images from online photo collections to the corresponding landmarks.",
"In this paper, we present a large-scale object retrieval system. The user supplies a query object by selecting a region of a query image, and the system returns a ranked list of images that contain the same object, retrieved from a large corpus. We demonstrate the scalability and performance of our system on a dataset of over 1 million images crawled from the photo-sharing site, Flickr [3], using Oxford landmarks as queries. Building an image-feature vocabulary is a major time and performance bottleneck, due to the size of our dataset. To address this problem we compare different scalable methods for building a vocabulary and introduce a novel quantization method based on randomized trees which we show outperforms the current state-of-the-art on an extensive ground-truth. Our experiments show that the quantization has a major effect on retrieval quality. To further improve query performance, we add an efficient spatial verification stage to re-rank the results returned from our bag-of-words model and show that this consistently improves search quality, though by less of a margin when the visual vocabulary is large. We view this work as a promising step towards much larger, \"web-scale \" image corpora.",
"We present a fast, simple location recognition and image localization method that leverages feature correspondence and geometry estimated from large Internet photo collections. Such recovered structure contains a significant amount of useful information about images and image features that is not available when considering images in isolation. For instance, we can predict which views will be the most common, which feature points in a scene are most reliable, and which features in the scene tend to co-occur in the same image. Based on this information, we devise an adaptive, prioritized algorithm for matching a representative set of SIFT features covering a large scene to a query image for efficient localization. Our approach is based on considering features in the scene database, and matching them to query image features, as opposed to more conventional methods that match image features to visual words or database features. We find this approach results in improved performance, due to the richer knowledge of characteristics of the database features compared to query image features. We present experiments on two large city-scale photo collections, showing that our algorithm compares favorably to image retrieval-style approaches to location recognition.",
"To reliably determine the camera pose of an image relative to a 3D point cloud of a scene, correspondences between 2D features and 3D points are needed. Recent work has demonstrated that directly matching the features against the points outperforms methods that take an intermediate image retrieval step in terms of the number of images that can be localized successfully. Yet, direct matching is inherently less scalable than retrievalbased approaches. In this paper, we therefore analyze the algorithmic factors that cause the performance gap and identify false positive votes as the main source of the gap. Based on a detailed experimental evaluation, we show that retrieval methods using a selective voting scheme are able to outperform state-of-the-art direct matching methods. We explore how both selective voting and correspondence computation can be accelerated by using a Hamming embedding of feature descriptors. Furthermore, we introduce a new dataset with challenging query images for the evaluation of image-based localization.",
"State of the art data mining and image retrieval in community photo collections typically focus on popular subsets, e.g. images containing landmarks or associated to Wikipedia articles. We propose an image clustering scheme that, seen as vector quantization, compresses a large corpus of images by grouping visually consistent ones while providing a guaranteed distortion bound. This allows us, for instance, to represent the visual content of all thousands of images depicting the Parthenon in just a few dozens of scene maps and still be able to retrieve any single, isolated, non-landmark image like a house or graffiti on a wall. Starting from a geo-tagged dataset, we first group images geographically and then visually, where each visual cluster is assumed to depict different views of the same scene. We align all views to one reference image and construct a 2D scene map by preserving details from all images while discarding repeating visual features. Our indexing, retrieval and spatial matching scheme then operates directly on scene maps. We evaluate the precision of the proposed method on a challenging one-million urban image dataset.",
"We address the problem of large scale place-of-interest recognition in cell phone images of urban scenarios. Here, we go beyond what has been shown in earlier approaches by exploiting the nowadays often available 3D building information (e.g. from extruded floor plans) and massive street-view like image data for database creation. Exploiting vanishing points in query images and thus fully removing 3D rotation from the recognition problem allows then to simplify the feature invariance to a pure homothetic problem, which we show leaves more discriminative power in feature descriptors than classical SIFT. We rerank visual word based document queries using a fast stratified homothetic verification that is tailored for repetitive patterns like window grids on facades and in most cases boosts the correct document to top positions if it was in the short list. Since we exploit 3D building information, the approach finally outputs the camera pose in real world coordinates ready for augmenting the cell phone image with virtual 3D information. The whole system is demonstrated to outperform traditional approaches on city scale experiments for different sources of street-view like image data and a challenging set of cell phone images.",
"The state of the art in visual object retrieval from large databases is achieved by systems that are inspired by text retrieval. A key component of these approaches is that local regions of images are characterized using high-dimensional descriptors which are then mapped to “visual words” selected from a discrete vocabulary. This paper explores techniques to map each visual region to a weighted set of words, allowing the inclusion of features which were lost in the quantization stage of previous systems. The set of visual words is obtained by selecting words based on proximity in descriptor space. We describe how this representation may be incorporated into a standard tf-idf architecture, and how spatial verification is modified in the case of this soft-assignment. We evaluate our method on the standard Oxford Buildings dataset, and introduce a new dataset for evaluation. Our results exceed the current state of the art retrieval performance on these datasets, particularly on queries with poor initial recall where techniques like query expansion suffer. Overall we show that soft-assignment is always beneficial for retrieval with large vocabularies, at a cost of increased storage requirements for the index.",
"This paper improves recent methods for large scale image search. State-of-the-art methods build on the bag-of-features image representation. We, first, analyze bag-of-features in the framework of approximate nearest neighbor search. This shows the sub-optimality of such a representation for matching descriptors and leads us to derive a more precise representation based on 1) Hamming embedding (HE) and 2) weak geometric consistency constraints (WGC). HE provides binary signatures that refine the matching based on visual words. WGC filters matching descriptors that are not consistent in terms of angle and scale. HE and WGC are integrated within the inverted file and are efficiently exploited for all images, even in the case of very large datasets. Experiments performed on a dataset of one million of images show a significant improvement due to the binary signature and the weak geometric consistency constraints, as well as their efficiency. Estimation of the full geometric transformation, i.e., a re-ranking step on a short list of images, is complementary to our weak geometric consistency constraints and allows to further improve the accuracy.",
"This article presents an approach for modeling landmarks based on large-scale, heavily contaminated image collections gathered from the Internet. Our system efficiently combines 2D appearance and 3D geometric constraints to extract scene summaries and construct 3D models. In the first stage of processing, images are clustered based on low-dimensional global appearance descriptors, and the clusters are refined using 3D geometric constraints. Each valid cluster is represented by a single iconic view, and the geometric relationships between iconic views are captured by an iconic scene graph. Using structure from motion techniques, the system then registers the iconic images to efficiently produce 3D models of the different aspects of the landmark. To improve coverage of the scene, these 3D models are subsequently extended using additional, non-iconic views. We also demonstrate the use of iconic images for recognition and browsing. Our experimental results demonstrate the ability to process datasets containing up to 46,000 images in less than 20 hours, using a single commodity PC equipped with a graphics card. This is a significant advance towards Internet-scale operation."
]
} |
1409.5400 | 2017512655 | Abstract The task of a visual landmark recognition system is to identify photographed buildings or objects in query photos and to provide the user with relevant information on them. With their increasing coverage of the world’s landmark buildings and objects, Internet photo collections are now being used as a source for building such systems in a fully automatic fashion. This process typically consists of three steps: clustering large amounts of images by the objects they depict; determining object names from user-provided tags; and building a robust, compact, and efficient recognition index. To this date, however, there is little empirical information on how well current approaches for those steps perform in a large-scale open-set mining and recognition task. Furthermore, there is little empirical information on how recognition performance varies for different types of landmark objects and where there is still potential for improvement. With this paper, we intend to fill these gaps. Using a dataset of 500 k images from Paris, we analyze each component of the landmark recognition pipeline in order to answer the following questions: How many and what kinds of objects can be discovered automatically? How can we best use the resulting image clusters to recognize the object in a query? How can the object be efficiently represented in memory for recognition? How reliably can semantic information be extracted? And finally: What are the limiting factors in the resulting pipeline from query to semantics? We evaluate how different choices of methods and parameters for the individual pipeline steps affect overall system performance and examine their effects for different query categories such as buildings, paintings or sculptures. | Landmark Recognition Engines are typically based on a visual search index from objects discovered in Internet photo collections @cite_40 @cite_27 @cite_9 @cite_48 . 
The underlying approaches perform visual @cite_38 @cite_13 @cite_0 @cite_9 @cite_55 @cite_24 or geographical @cite_29 @cite_54 clustering, or a combination of the two @cite_40 @cite_48 @cite_30 . Zheng et al. @cite_48 show that online tourist guides can be a valuable additional data source, and Gammeter et al. @cite_27 use descriptions determined from user-provided tags to search for additional images on the web. In this work, however, we focus on methods based solely on the images from Internet photo collections and their metadata. | {
"cite_N": [
"@cite_30",
"@cite_38",
"@cite_48",
"@cite_9",
"@cite_55",
"@cite_29",
"@cite_54",
"@cite_0",
"@cite_24",
"@cite_27",
"@cite_40",
"@cite_13"
],
"mid": [
"1982321556",
"2150307973",
"2136451880",
"1979572481",
"2164488048",
"2103388840",
"2536627426",
"2537826750",
"2064859577",
"1826823045",
"2121348293",
"2058650633"
],
"abstract": [
"With the popularization of mobile devices, recent years have witnessed an emerging potential for mobile landmark search. In this scenario, the user experience heavily depends on the efficiency of query transmission over a wireless link. As sending a query photo is time consuming, recent works have proposed to extract compact visual descriptors directly on the mobile end towards low bit rate transmission. Typically, these descriptors are extracted based solely on the visual content of a query, and the location cues from the mobile end are rarely exploited. In this paper, we present a Location Discriminative Vocabulary Coding (LDVC) scheme, which achieves extremely low bit rate query transmission, discriminative landmark description, as well as scalable descriptor delivery in a unified framework. Our first contribution is a compact and location discriminative visual landmark descriptor, which is offline learnt in two-step: First, we adopt spectral clustering to segment a city map into distinct geographical regions, where both visual and geographical similarities are fused to optimize the partition of city-scale geo-tagged photos. Second, we propose to learn LDVC in each region with two schemes: (1) a Ranking Sensitive PCA and (2) a Ranking Sensitive Vocabulary Boosting. Both schemes embed location cues to learn a compact descriptor, which minimizes the retrieval ranking loss by replacing the original high-dimensional signatures. Our second contribution is a location aware online vocabulary adaption: We store a single vocabulary in the mobile end, which is efficiently adapted for a region specific LDVC coding once a mobile device enters a given region. The learnt LDVC landmark descriptor is extremely compact (typically 10---50 bits with arithmetical coding) and performs superior over state-of-the-art descriptors. 
We implemented the framework in a real-world mobile landmark search prototype, which is validated in a million-scale landmark database covering typical areas e.g. Beijing, New York City, Lhasa, Singapore, and Florence.",
"We propose a randomized data mining method that finds clusters of spatially overlapping images. The core of the method relies on the min-Hash algorithm for fast detection of pairs of images with spatial overlap, the so-called cluster seeds. The seeds are then used as visual queries to obtain clusters which are formed as transitive closures of sets of partially overlapping images that include the seed. We show that the probability of finding a seed for an image cluster rapidly increases with the size of the cluster. The properties and performance of the algorithm are demonstrated on data sets with 104, 105, and 5 × 106 images. The speed of the method depends on the size of the database and the number of clusters. The first stage of seed generation is close to linear for databases sizes up to approximately 234 ? 1010 images. On a single 2.4 GHz PC, the clustering process took only 24 minutes for a standard database of more than 100,000 images, i.e., only 0.014 seconds per image.",
"Modeling and recognizing landmarks at world-scale is a useful yet challenging task. There exists no readily available list of worldwide landmarks. Obtaining reliable visual models for each landmark can also pose problems, and efficiency is another challenge for such a large scale system. This paper leverages the vast amount of multimedia data on the Web, the availability of an Internet image search engine, and advances in object recognition and clustering techniques, to address these issues. First, a comprehensive list of landmarks is mined from two sources: (1) 20 million GPS-tagged photos and (2) online tour guide Web pages. Candidate images for each landmark are then obtained from photo sharing Websites or by querying an image search engine. Second, landmark visual models are built by pruning candidate images using efficient image matching and unsupervised clustering techniques. Finally, the landmarks and their visual models are validated by checking authorship of their member images. The resulting landmark recognition engine incorporates 5312 landmarks from 1259 cities in 144 countries. The experiments demonstrate that the engine can deliver satisfactory recognition performance with high efficiency.",
"In this paper, we describe an approach for mining images of objects (such as touristic sights) from community photo collections in an unsupervised fashion. Our approach relies on retrieving geotagged photos from those web-sites using a grid of geospatial tiles. The downloaded photos are clustered into potentially interesting entities through a processing pipeline of several modalities, including visual, textual and spatial proximity. The resulting clusters are analyzed and are automatically classified into objects and events. Using mining techniques, we then find text labels for these clusters, which are used to again assign each cluster to a corresponding Wikipedia article in a fully unsupervised manner. A final verification step uses the contents (including images) from the selected Wikipedia article to verify the cluster-article assignment. We demonstrate this approach on several urban areas, densely covering an area of over 700 square kilometers and mining over 200,000 photos, making it probably the largest experiment of its kind to date.",
"In this paper, we propose a novel algorithm for automatic landmark building discovery in large, unstructured image collections. In contrast to other approaches which aim at a hard clustering, we regard the task as a mode estimation problem. Our algorithm searches for local attractors in the image distribution that have a maximal mutual homography overlap with the images in their neighborhood. Those attractors correspond to central, iconic views of single objects or buildings, which we efficiently extract using a medoid shift search with a novel distance measure. We propose efficient algorithms for performing this search. Most importantly, our approach performs only an efficient local exploration of the matching graph that makes it applicable for large-scale analysis of photo collections. We show experimental results validating our approach on a dataset of 500k images of the inner city of Paris.",
"We investigate how to organize a large collection of geotagged photos, working with a dataset of about 35 million images collected from Flickr. Our approach combines content analysis based on text tags and image data with structural analysis based on geospatial data. We use the spatial distribution of where people take photos to define a relational structure between the photos that are taken at popular places. We then study the interplay between this structure and the content, using classification methods for predicting such locations from visual, textual and temporal features of the photos. We find that visual and temporal features improve the ability to estimate the location of a photo, compared to using just textual features. We illustrate using these techniques to organize a large photo collection, while also revealing various interesting properties about popular cities and landmarks at a global scale.",
"With the rise of photo-sharing websites such as Facebook and Flickr has come dramatic growth in the number of photographs online. Recent research in object recognition has used such sites as a source of image data, but the test images have been selected and labeled by hand, yielding relatively small validation sets. In this paper we study image classification on a much larger dataset of 30 million images, including nearly 2 million of which have been labeled into one of 500 categories. The dataset and categories are formed automatically from geotagged photos from Flickr, by looking for peaks in the spatial geotag distribution corresponding to frequently-photographed landmarks. We learn models for these landmarks with a multiclass support vector machine, using vector-quantized interest point descriptors as features. We also explore the non-visual information available on modern photo-sharing sites, showing that using textual tags and temporal constraints leads to significant improvements in classification rate. We find that in some cases image features alone yield comparable classification accuracy to using text tags as well as to the performance of human observers.",
"Automatic organization of large, unordered image collections is an extremely challenging problem with many potential applications. Often, what is required is that images taken in the same place, of the same thing, or of the same person be conceptually grouped together. This work focuses on grouping images containing the same object, despite significant changes in scale, viewpoint and partial occlusions, in very large (1M+) image collections automatically gathered from Flicker. The scale of the data and the extreme variation in imaging conditions makes the problem very challenging. We describe a scalable method that first computes a matching graph over all the images. Image groups can then be mined from this graph using standard clustering techniques. The novelty we bring is that both the matching graph and the clustering methods are able to use the spatial consistency between the images arising from the common object (if there is one). We demonstrate our methods on a publicly available dataset of 5 K images of Oxford, a 37 K image dataset containing images of the Statue of Liberty, and a much larger 1M image dataset of Rome. This is, to our knowledge, the largest dataset to which image-based data mining has been applied.",
"The recognition of a place depicted in an image typically adopts methods from image retrieval in large-scale databases. First, a query image is described as a “bag-of-features” and compared to every image in the database. Second, the most similar images are passed to a geometric verification stage. However, this is an inefficient approach when considering that some database images may be almost identical, and many image features may not repeatedly occur. We address this issue by clustering similar database images to represent distinct scenes, and tracking local features that are consistently detected to form a set of real-world landmarks. Query images are then matched to landmarks rather than features, and a probabilistic model of landmark properties is learned from the cluster to appropriately verify or reject putative feature matches. We present novelties in both a bag-of-features retrieval and geometric verification stage based on this concept. Results on a database of 200K images of popular tourist destinations show improvements in both recognition performance and efficiency compared to traditional image retrieval methods.",
"Most of the recent work on image-based object recognition and 3D reconstruction has focused on improving the underlying algorithms. In this paper we present a method to automatically improve the quality of the reference database, which, as we will show, also affects recognition and reconstruction performances significantly. Starting out from a reference database of clustered images we expand small clusters. This is done by exploiting cross-media information, which allows for crawling of additional images. For large clusters redundant information is removed by scene analysis. We show how these techniques make object recognition and 3D reconstruction both more efficient and more precise - we observed up to 14.8 improvement for the recognition task. Furthermore, the methods are completely data-driven and fully automatic.",
"State of the art data mining and image retrieval in community photo collections typically focus on popular subsets, e.g. images containing landmarks or associated to Wikipedia articles. We propose an image clustering scheme that, seen as vector quantization compresses a large corpus of images by grouping visually consistent ones while providing a guaranteed distortion bound. This allows us, for instance, to represent the visual content of all thousands of images depicting the Parthenon in just a few dozens of scene maps and still be able to retrieve any single, isolated, non-landmark image like a house or graffiti on a wall. Starting from a geo-tagged dataset, we first group images geographically and then visually, where each visual cluster is assumed to depict different views of the the same scene. We align all views to one reference image and construct a 2D scene map by preserving details from all images while discarding repeating visual features. Our indexing, retrieval and spatial matching scheme then operates directly on scene maps. We evaluate the precision of the proposed method on a challenging one-million urban image dataset.",
"This article presents an approach for modeling landmarks based on large-scale, heavily contaminated image collections gathered from the Internet. Our system efficiently combines 2D appearance and 3D geometric constraints to extract scene summaries and construct 3D models. In the first stage of processing, images are clustered based on low-dimensional global appearance descriptors, and the clusters are refined using 3D geometric constraints. Each valid cluster is represented by a single iconic view, and the geometric relationships between iconic views are captured by an iconic scene graph. Using structure from motion techniques, the system then registers the iconic images to efficiently produce 3D models of the different aspects of the landmark. To improve coverage of the scene, these 3D models are subsequently extended using additional, non-iconic views. We also demonstrate the use of iconic images for recognition and browsing. Our experimental results demonstrate the ability to process datasets containing up to 46,000 images in less than 20 hours, using a single commodity PC equipped with a graphics card. This is a significant advance towards Internet-scale operation."
]
} |
1409.5400 | 2017512655 | Abstract The task of a visual landmark recognition system is to identify photographed buildings or objects in query photos and to provide the user with relevant information on them. With their increasing coverage of the world’s landmark buildings and objects, Internet photo collections are now being used as a source for building such systems in a fully automatic fashion. This process typically consists of three steps: clustering large amounts of images by the objects they depict; determining object names from user-provided tags; and building a robust, compact, and efficient recognition index. To this date, however, there is little empirical information on how well current approaches for those steps perform in a large-scale open-set mining and recognition task. Furthermore, there is little empirical information on how recognition performance varies for different types of landmark objects and where there is still potential for improvement. With this paper, we intend to fill these gaps. Using a dataset of 500 k images from Paris, we analyze each component of the landmark recognition pipeline in order to answer the following questions: How many and what kinds of objects can be discovered automatically? How can we best use the resulting image clusters to recognize the object in a query? How can the object be efficiently represented in memory for recognition? How reliably can semantic information be extracted? And finally: What are the limiting factors in the resulting pipeline from query to semantics? We evaluate how different choices of methods and parameters for the individual pipeline steps affect overall system performance and examine their effects for different query categories such as buildings, paintings or sculptures. | Chum et al. @cite_38 use min-Hash to find images and grow landmark clusters by query expansion @cite_36 .
Philbin et al. @cite_0 over-segment the matching graph using spectral clustering and merge clusters of the same object based on image overlap. Gammeter et al. @cite_23 and Quack et al. @cite_9 perform hierarchical agglomerative clustering in a local matching graph. Avrithis et al. @cite_40 use Kernel Vector Quantization to create a clustering with an upper bound on intra-cluster dissimilarity. Iconoid Shift by Weyand et al. @cite_55 finds popular objects at different scales using mode search based on a homography overlap distance. We choose Iconoid Shift as our analysis tool, because it produces an overlapping clustering and discovers landmarks at varying levels of granularity, thus also discovering, e.g., building details. | {
"cite_N": [
"@cite_38",
"@cite_36",
"@cite_9",
"@cite_55",
"@cite_0",
"@cite_40",
"@cite_23"
],
"mid": [
"2150307973",
"2100398441",
"1979572481",
"2164488048",
"2537826750",
"2121348293",
"2540698934"
],
"abstract": [
"We propose a randomized data mining method that finds clusters of spatially overlapping images. The core of the method relies on the min-Hash algorithm for fast detection of pairs of images with spatial overlap, the so-called cluster seeds. The seeds are then used as visual queries to obtain clusters which are formed as transitive closures of sets of partially overlapping images that include the seed. We show that the probability of finding a seed for an image cluster rapidly increases with the size of the cluster. The properties and performance of the algorithm are demonstrated on data sets with 104, 105, and 5 × 106 images. The speed of the method depends on the size of the database and the number of clusters. The first stage of seed generation is close to linear for databases sizes up to approximately 234 ? 1010 images. On a single 2.4 GHz PC, the clustering process took only 24 minutes for a standard database of more than 100,000 images, i.e., only 0.014 seconds per image.",
"Given a query image of an object, our objective is to retrieve all instances of that object in a large (1M+) image database. We adopt the bag-of-visual-words architecture which has proven successful in achieving high precision at low recall. Unfortunately, feature detection and quantization are noisy processes and this can result in variation in the particular visual words that appear in different images of the same object, leading to missed results. In the text retrieval literature a standard method for improving performance is query expansion. A number of the highly ranked documents from the original query are reissued as a new query. In this way, additional relevant terms can be added to the query. This is a form of blind rele- vance feedback and it can fail if 'outlier' (false positive) documents are included in the reissued query. In this paper we bring query expansion into the visual domain via two novel contributions. Firstly, strong spatial constraints between the query image and each result allow us to accurately verify each return, suppressing the false positives which typically ruin text-based query expansion. Secondly, the verified images can be used to learn a latent feature model to enable the controlled construction of expanded queries. We illustrate these ideas on the 5000 annotated image Oxford building database together with more than 1M Flickr images. We show that the precision is substantially boosted, achieving total recall in many cases.",
"In this paper, we describe an approach for mining images of objects (such as touristic sights) from community photo collections in an unsupervised fashion. Our approach relies on retrieving geotagged photos from those web-sites using a grid of geospatial tiles. The downloaded photos are clustered into potentially interesting entities through a processing pipeline of several modalities, including visual, textual and spatial proximity. The resulting clusters are analyzed and are automatically classified into objects and events. Using mining techniques, we then find text labels for these clusters, which are used to again assign each cluster to a corresponding Wikipedia article in a fully unsupervised manner. A final verification step uses the contents (including images) from the selected Wikipedia article to verify the cluster-article assignment. We demonstrate this approach on several urban areas, densely covering an area of over 700 square kilometers and mining over 200,000 photos, making it probably the largest experiment of its kind to date.",
"In this paper, we propose a novel algorithm for automatic landmark building discovery in large, unstructured image collections. In contrast to other approaches which aim at a hard clustering, we regard the task as a mode estimation problem. Our algorithm searches for local attractors in the image distribution that have a maximal mutual homography overlap with the images in their neighborhood. Those attractors correspond to central, iconic views of single objects or buildings, which we efficiently extract using a medoid shift search with a novel distance measure. We propose efficient algorithms for performing this search. Most importantly, our approach performs only an efficient local exploration of the matching graph that makes it applicable for large-scale analysis of photo collections. We show experimental results validating our approach on a dataset of 500k images of the inner city of Paris.",
"Automatic organization of large, unordered image collections is an extremely challenging problem with many potential applications. Often, what is required is that images taken in the same place, of the same thing, or of the same person be conceptually grouped together. This work focuses on grouping images containing the same object, despite significant changes in scale, viewpoint and partial occlusions, in very large (1M+) image collections automatically gathered from Flicker. The scale of the data and the extreme variation in imaging conditions makes the problem very challenging. We describe a scalable method that first computes a matching graph over all the images. Image groups can then be mined from this graph using standard clustering techniques. The novelty we bring is that both the matching graph and the clustering methods are able to use the spatial consistency between the images arising from the common object (if there is one). We demonstrate our methods on a publicly available dataset of 5 K images of Oxford, a 37 K image dataset containing images of the Statue of Liberty, and a much larger 1M image dataset of Rome. This is, to our knowledge, the largest dataset to which image-based data mining has been applied.",
"State of the art data mining and image retrieval in community photo collections typically focus on popular subsets, e.g. images containing landmarks or associated to Wikipedia articles. We propose an image clustering scheme that, seen as vector quantization compresses a large corpus of images by grouping visually consistent ones while providing a guaranteed distortion bound. This allows us, for instance, to represent the visual content of all thousands of images depicting the Parthenon in just a few dozens of scene maps and still be able to retrieve any single, isolated, non-landmark image like a house or graffiti on a wall. Starting from a geo-tagged dataset, we first group images geographically and then visually, where each visual cluster is assumed to depict different views of the the same scene. We align all views to one reference image and construct a 2D scene map by preserving details from all images while discarding repeating visual features. Our indexing, retrieval and spatial matching scheme then operates directly on scene maps. We evaluate the precision of the proposed method on a challenging one-million urban image dataset.",
"The state-of-the art in visual object retrieval from large databases allows to search millions of images on the object level. Recently, complementary works have proposed systems to crawl large object databases from community photo collections on the Internet. We combine these two lines of work to a large-scale system for auto-annotation of holiday snaps. The resulting method allows for automatic labeling objects such as landmark buildings, scenes, pieces of art etc. at the object level in a fully automatic manner. The labeling is multi-modal and consists of textual tags, geographic location, and related content on the Internet. Furthermore, the efficiency of the retrieval process is optimized by creating more compact and precise indices for visual vocabularies using background information obtained in the crawling stage of the system. We demonstrate the scalability and precision of the proposed method by conducting experiments on millions of images downloaded from community photo collections on the Internet."
]
} |
1409.5400 | 2017512655 | Abstract The task of a visual landmark recognition system is to identify photographed buildings or objects in query photos and to provide the user with relevant information on them. With their increasing coverage of the world’s landmark buildings and objects, Internet photo collections are now being used as a source for building such systems in a fully automatic fashion. This process typically consists of three steps: clustering large amounts of images by the objects they depict; determining object names from user-provided tags; and building a robust, compact, and efficient recognition index. To this date, however, there is little empirical information on how well current approaches for those steps perform in a large-scale open-set mining and recognition task. Furthermore, there is little empirical information on how recognition performance varies for different types of landmark objects and where there is still potential for improvement. With this paper, we intend to fill these gaps. Using a dataset of 500 k images from Paris, we analyze each component of the landmark recognition pipeline in order to answer the following questions: How many and what kinds of objects can be discovered automatically? How can we best use the resulting image clusters to recognize the object in a query? How can the object be efficiently represented in memory for recognition? How reliably can semantic information be extracted? And finally: What are the limiting factors in the resulting pipeline from query to semantics? We evaluate how different choices of methods and parameters for the individual pipeline steps affect overall system performance and examine their effects for different query categories such as buildings, paintings or sculptures. | Image Retrieval. Most retrieval-based approaches use efficient methods @cite_6 @cite_41 @cite_45 that allow searching for images matching a query in a database consisting of potentially millions of images.
Several approaches @cite_9 @cite_48 @cite_23 @cite_41 @cite_45 implement a strategy (Sec. ) where the query is matched against the database of representatives and the object cluster corresponding to the best match is returned. While Quack et al. @cite_9 and Zheng et al. @cite_48 perform a precise but computationally expensive direct feature matching, Gammeter et al. @cite_23 retrieve images using inverted indexing and bags-of-visual-words (BoVWs) @cite_41 @cite_45 . Li et al. @cite_13 only want to decide whether the query image contains a specific landmark. Given a dataset of photos of one landmark, they perform image retrieval based on both Gist features and BoVWs and apply a threshold to the retrieval score to decide if the query contains the object. Both Avrithis et al. @cite_40 and Johns et al. @cite_24 compress the images in a cluster into a joint BoVW representation and perform inverted file retrieval to find the best matching scene models for a query image. | {
"cite_N": [
"@cite_41",
"@cite_48",
"@cite_9",
"@cite_6",
"@cite_24",
"@cite_40",
"@cite_45",
"@cite_23",
"@cite_13"
],
"mid": [
"2141362318",
"2136451880",
"1979572481",
"2128017662",
"2064859577",
"2121348293",
"2131846894",
"2540698934",
"2058650633"
],
"abstract": [
"In this paper, we present a large-scale object retrieval system. The user supplies a query object by selecting a region of a query image, and the system returns a ranked list of images that contain the same object, retrieved from a large corpus. We demonstrate the scalability and performance of our system on a dataset of over 1 million images crawled from the photo-sharing site, Flickr [3], using Oxford landmarks as queries. Building an image-feature vocabulary is a major time and performance bottleneck, due to the size of our dataset. To address this problem we compare different scalable methods for building a vocabulary and introduce a novel quantization method based on randomized trees which we show outperforms the current state-of-the-art on an extensive ground-truth. Our experiments show that the quantization has a major effect on retrieval quality. To further improve query performance, we add an efficient spatial verification stage to re-rank the results returned from our bag-of-words model and show that this consistently improves search quality, though by less of a margin when the visual vocabulary is large. We view this work as a promising step towards much larger, \"web-scale \" image corpora.",
"Modeling and recognizing landmarks at world-scale is a useful yet challenging task. There exists no readily available list of worldwide landmarks. Obtaining reliable visual models for each landmark can also pose problems, and efficiency is another challenge for such a large scale system. This paper leverages the vast amount of multimedia data on the Web, the availability of an Internet image search engine, and advances in object recognition and clustering techniques, to address these issues. First, a comprehensive list of landmarks is mined from two sources: (1) 20 million GPS-tagged photos and (2) online tour guide Web pages. Candidate images for each landmark are then obtained from photo sharing Websites or by querying an image search engine. Second, landmark visual models are built by pruning candidate images using efficient image matching and unsupervised clustering techniques. Finally, the landmarks and their visual models are validated by checking authorship of their member images. The resulting landmark recognition engine incorporates 5312 landmarks from 1259 cities in 144 countries. The experiments demonstrate that the engine can deliver satisfactory recognition performance with high efficiency.",
"In this paper, we describe an approach for mining images of objects (such as touristic sights) from community photo collections in an unsupervised fashion. Our approach relies on retrieving geotagged photos from those web-sites using a grid of geospatial tiles. The downloaded photos are clustered into potentially interesting entities through a processing pipeline of several modalities, including visual, textual and spatial proximity. The resulting clusters are analyzed and are automatically classified into objects and events. Using mining techniques, we then find text labels for these clusters, which are used to again assign each cluster to a corresponding Wikipedia article in a fully unsupervised manner. A final verification step uses the contents (including images) from the selected Wikipedia article to verify the cluster-article assignment. We demonstrate this approach on several urban areas, densely covering an area of over 700 square kilometers and mining over 200,000 photos, making it probably the largest experiment of its kind to date.",
"A recognition scheme that scales efficiently to a large number of objects is presented. The efficiency and quality is exhibited in a live demonstration that recognizes CD-covers from a database of 40000 images of popular music CDs. The scheme builds upon popular techniques of indexing descriptors extracted from local regions, and is robust to background clutter and occlusion. The local region descriptors are hierarchically quantized in a vocabulary tree. The vocabulary tree allows a larger and more discriminatory vocabulary to be used efficiently, which we show experimentally leads to a dramatic improvement in retrieval quality. The most significant property of the scheme is that the tree directly defines the quantization. The quantization and the indexing are therefore fully integrated, essentially being one and the same. The recognition quality is evaluated through retrieval on a database with ground truth, showing the power of the vocabulary tree approach, going as high as 1 million images.",
"The recognition of a place depicted in an image typically adopts methods from image retrieval in large-scale databases. First, a query image is described as a “bag-of-features” and compared to every image in the database. Second, the most similar images are passed to a geometric verification stage. However, this is an inefficient approach when considering that some database images may be almost identical, and many image features may not repeatedly occur. We address this issue by clustering similar database images to represent distinct scenes, and tracking local features that are consistently detected to form a set of real-world landmarks. Query images are then matched to landmarks rather than features, and a probabilistic model of landmark properties is learned from the cluster to appropriately verify or reject putative feature matches. We present novelties in both a bag-of-features retrieval and geometric verification stage based on this concept. Results on a database of 200K images of popular tourist destinations show improvements in both recognition performance and efficiency compared to traditional image retrieval methods.",
"State of the art data mining and image retrieval in community photo collections typically focus on popular subsets, e.g. images containing landmarks or associated to Wikipedia articles. We propose an image clustering scheme that, seen as vector quantization, compresses a large corpus of images by grouping visually consistent ones while providing a guaranteed distortion bound. This allows us, for instance, to represent the visual content of all thousands of images depicting the Parthenon in just a few dozens of scene maps and still be able to retrieve any single, isolated, non-landmark image like a house or graffiti on a wall. Starting from a geo-tagged dataset, we first group images geographically and then visually, where each visual cluster is assumed to depict different views of the same scene. We align all views to one reference image and construct a 2D scene map by preserving details from all images while discarding repeating visual features. Our indexing, retrieval and spatial matching scheme then operates directly on scene maps. We evaluate the precision of the proposed method on a challenging one-million urban image dataset.",
"We describe an approach to object and scene retrieval which searches for and localizes all the occurrences of a user outlined object in a video. The object is represented by a set of viewpoint invariant region descriptors so that recognition can proceed successfully despite changes in viewpoint, illumination and partial occlusion. The temporal continuity of the video within a shot is used to track the regions in order to reject unstable regions and reduce the effects of noise in the descriptors. The analogy with text retrieval is in the implementation where matches on descriptors are pre-computed (using vector quantization), and inverted file systems and document rankings are used. The result is that retrieval is immediate, returning a ranked list of key frames/shots in the manner of Google. The method is illustrated for matching in two full length feature films.",
"The state-of-the art in visual object retrieval from large databases allows to search millions of images on the object level. Recently, complementary works have proposed systems to crawl large object databases from community photo collections on the Internet. We combine these two lines of work to a large-scale system for auto-annotation of holiday snaps. The resulting method allows for automatic labeling objects such as landmark buildings, scenes, pieces of art etc. at the object level in a fully automatic manner. The labeling is multi-modal and consists of textual tags, geographic location, and related content on the Internet. Furthermore, the efficiency of the retrieval process is optimized by creating more compact and precise indices for visual vocabularies using background information obtained in the crawling stage of the system. We demonstrate the scalability and precision of the proposed method by conducting experiments on millions of images downloaded from community photo collections on the Internet.",
"This article presents an approach for modeling landmarks based on large-scale, heavily contaminated image collections gathered from the Internet. Our system efficiently combines 2D appearance and 3D geometric constraints to extract scene summaries and construct 3D models. In the first stage of processing, images are clustered based on low-dimensional global appearance descriptors, and the clusters are refined using 3D geometric constraints. Each valid cluster is represented by a single iconic view, and the geometric relationships between iconic views are captured by an iconic scene graph. Using structure from motion techniques, the system then registers the iconic images to efficiently produce 3D models of the different aspects of the landmark. To improve coverage of the scene, these 3D models are subsequently extended using additional, non-iconic views. We also demonstrate the use of iconic images for recognition and browsing. Our experimental results demonstrate the ability to process datasets containing up to 46,000 images in less than 20 hours, using a single commodity PC equipped with a graphics card. This is a significant advance towards Internet-scale operation."
]
} |
1409.5400 | 2017512655 | Abstract The task of a visual landmark recognition system is to identify photographed buildings or objects in query photos and to provide the user with relevant information on them. With their increasing coverage of the world’s landmark buildings and objects, Internet photo collections are now being used as a source for building such systems in a fully automatic fashion. This process typically consists of three steps: clustering large amounts of images by the objects they depict; determining object names from user-provided tags; and building a robust, compact, and efficient recognition index. To this date, however, there is little empirical information on how well current approaches for those steps perform in a large-scale open-set mining and recognition task. Furthermore, there is little empirical information on how recognition performance varies for different types of landmark objects and where there is still potential for improvement. With this paper, we intend to fill these gaps. Using a dataset of 500 k images from Paris, we analyze each component of the landmark recognition pipeline in order to answer the following questions: How many and what kinds of objects can be discovered automatically? How can we best use the resulting image clusters to recognize the object in a query? How can the object be efficiently represented in memory for recognition? How reliably can semantic information be extracted? And finally: What are the limiting factors in the resulting pipeline from query to semantics? We evaluate how different choices of methods and parameters for the individual pipeline steps affect overall system performance and examine their effects for different query categories such as buildings, paintings or sculptures. | Classification. An alternative approach is to view the task as a classification problem where each landmark is a class. 
Gronat et al. @cite_31 learn exemplar SVMs based on the BoVWs of the visual features of the database images. Li et al. @cite_54 learn a multi-class SVM and additionally use bags-of-words (BoWs) of the images' textual tags as features. Bergamo et al. @cite_18 use a similar approach, but perform classification using 1-vs-all SVMs. Instead of using approximate k-means @cite_41 for feature quantization, they reconstruct the landmarks using structure-from-motion and train random forests on the descriptors of each structure-from-motion feature track. These random forests are then used for quantizing descriptors. While discriminative methods often yield higher accuracy than nearest-neighbor matching, they also have disadvantages. For example, they assign each image a landmark label regardless of whether it contains a landmark. Moreover, discriminative models need to be re-trained every time new images and landmarks are added. | {
"cite_N": [
"@cite_41",
"@cite_31",
"@cite_54",
"@cite_18"
],
"mid": [
"2141362318",
"1995288918",
"2536627426",
"2147854204"
],
"abstract": [
"In this paper, we present a large-scale object retrieval system. The user supplies a query object by selecting a region of a query image, and the system returns a ranked list of images that contain the same object, retrieved from a large corpus. We demonstrate the scalability and performance of our system on a dataset of over 1 million images crawled from the photo-sharing site, Flickr [3], using Oxford landmarks as queries. Building an image-feature vocabulary is a major time and performance bottleneck, due to the size of our dataset. To address this problem we compare different scalable methods for building a vocabulary and introduce a novel quantization method based on randomized trees which we show outperforms the current state-of-the-art on an extensive ground-truth. Our experiments show that the quantization has a major effect on retrieval quality. To further improve query performance, we add an efficient spatial verification stage to re-rank the results returned from our bag-of-words model and show that this consistently improves search quality, though by less of a margin when the visual vocabulary is large. We view this work as a promising step towards much larger, \"web-scale \" image corpora.",
"The aim of this work is to localize a query photograph by finding other images depicting the same place in a large geotagged image database. This is a challenging task due to changes in viewpoint, imaging conditions and the large size of the image database. The contribution of this work is two-fold. First, we cast the place recognition problem as a classification task and use the available geotags to train a classifier for each location in the database in a similar manner to per-exemplar SVMs in object recognition. Second, as only few positive training examples are available for each location, we propose a new approach to calibrate all the per-location SVM classifiers using only the negative examples. The calibration we propose relies on a significance measure essentially equivalent to the p-values classically used in statistical hypothesis testing. Experiments are performed on a database of 25,000 geotagged street view images of Pittsburgh and demonstrate improved place recognition accuracy of the proposed approach over the previous work.",
"With the rise of photo-sharing websites such as Facebook and Flickr has come dramatic growth in the number of photographs online. Recent research in object recognition has used such sites as a source of image data, but the test images have been selected and labeled by hand, yielding relatively small validation sets. In this paper we study image classification on a much larger dataset of 30 million images, including nearly 2 million of which have been labeled into one of 500 categories. The dataset and categories are formed automatically from geotagged photos from Flickr, by looking for peaks in the spatial geotag distribution corresponding to frequently-photographed landmarks. We learn models for these landmarks with a multiclass support vector machine, using vector-quantized interest point descriptors as features. We also explore the non-visual information available on modern photo-sharing sites, showing that using textual tags and temporal constraints leads to significant improvements in classification rate. We find that in some cases image features alone yield comparable classification accuracy to using text tags as well as to the performance of human observers.",
"In this paper we propose a new technique for learning a discriminative codebook for local feature descriptors, specifically designed for scalable landmark classification. The key contribution lies in exploiting the knowledge of correspondences within sets of feature descriptors during code-book learning. Feature correspondences are obtained using structure from motion (SfM) computation on Internet photo collections which serve as the training data. Our codebook is defined by a random forest that is trained to map corresponding feature descriptors into identical codes. Unlike prior forest-based codebook learning methods, we utilize fine-grained descriptor labels and address the challenge of training a forest with an extremely large number of labels. Our codebook is used with various existing feature encoding schemes and also a variant we propose for importance-weighted aggregation of local features. We evaluate our approach on a public dataset of 25 landmarks and our new dataset of 620 landmarks (614K images). Our approach significantly outperforms the state of the art in landmark classification. Furthermore, our method is memory efficient and scalable."
]
} |
1409.5400 | 2017512655 | Abstract The task of a visual landmark recognition system is to identify photographed buildings or objects in query photos and to provide the user with relevant information on them. With their increasing coverage of the world’s landmark buildings and objects, Internet photo collections are now being used as a source for building such systems in a fully automatic fashion. This process typically consists of three steps: clustering large amounts of images by the objects they depict; determining object names from user-provided tags; and building a robust, compact, and efficient recognition index. To this date, however, there is little empirical information on how well current approaches for those steps perform in a large-scale open-set mining and recognition task. Furthermore, there is little empirical information on how recognition performance varies for different types of landmark objects and where there is still potential for improvement. With this paper, we intend to fill these gaps. Using a dataset of 500 k images from Paris, we analyze each component of the landmark recognition pipeline in order to answer the following questions: How many and what kinds of objects can be discovered automatically? How can we best use the resulting image clusters to recognize the object in a query? How can the object be efficiently represented in memory for recognition? How reliably can semantic information be extracted? And finally: What are the limiting factors in the resulting pipeline from query to semantics? We evaluate how different choices of methods and parameters for the individual pipeline steps affect overall system performance and examine their effects for different query categories such as buildings, paintings or sculptures. | Pose Estimation. The goal of pose estimation is to determine the camera location and orientation for a given query image. 
There are several approaches that solve this task by matching the query against street-level imagery such as Google Street View panoramas @cite_49 @cite_35 @cite_12 @cite_47 @cite_14 @cite_17 using local-feature-based image retrieval @cite_6 @cite_41 @cite_45 . Other approaches are based on 3D point clouds created by applying structure-from-motion to Internet photo collections or manually collected photos @cite_50 @cite_19 @cite_53 @cite_37 . Since image retrieval methods cannot be applied here, these approaches directly match the query descriptors against the descriptors of the image features from which the 3D points were reconstructed. After a set of 2D-3D correspondences has been established, the camera pose is determined by solving the perspective-n-point (PnP) problem @cite_20 . Since descriptor matching becomes computationally expensive when matching against very large 3D models, hybrid methods @cite_7 @cite_4 @cite_56 have been proposed that first perform efficient image retrieval using inverted files and then solve the PnP problem based on the relatively small set of 3D points associated with the 2D features of the retrieved images. | {
"cite_N": [
"@cite_35",
"@cite_37",
"@cite_14",
"@cite_4",
"@cite_7",
"@cite_41",
"@cite_53",
"@cite_6",
"@cite_56",
"@cite_19",
"@cite_45",
"@cite_49",
"@cite_50",
"@cite_47",
"@cite_20",
"@cite_12",
"@cite_17"
],
"mid": [
"1987488988",
"1616969904",
"2134446283",
"2125795712",
"2144990732",
"2141362318",
"1565312575",
"2128017662",
"2046166954",
"153084048",
"2131846894",
"2155823702",
"2129000642",
"2076482495",
"2033819227",
"1537528663",
"2072242701"
],
"abstract": [
"With recent advances in mobile computing, the demand for visual localization or landmark identification on mobile devices is gaining interest. We advance the state of the art in this area by fusing two popular representations of street-level image data - facade-aligned and viewpoint-aligned - and show that they contain complementary information that can be exploited to significantly improve the recall rates on the city scale. We also improve feature detection in low contrast parts of the street-level data, and discuss how to incorporate priors on a user's position (e.g. given by noisy GPS readings or network cells), which previous approaches often ignore. Finally, and maybe most importantly, we present our results according to a carefully designed, repeatable evaluation scheme and make publicly available a set of 1.7 million images with ground truth labels, geotags, and calibration data, as well as a difficult set of cell phone query images. We provide these resources as a benchmark to facilitate further research in the area.",
"We address the problem of determining where a photo was taken by estimating a full 6-DOF-plus-intrinsics camera pose with respect to a large geo-registered 3D point cloud, bringing together research on image localization, landmark recognition, and 3D pose estimation. Our method scales to datasets with hundreds of thousands of images and tens of millions of 3D points through the use of two new techniques: a co-occurrence prior for RANSAC and bidirectional matching of image features with 3D points. We evaluate our method on several large data sets, and show state-of-the-art results on landmark recognition as well as the ability to locate cameras to within meters, requiring only seconds per query.",
"We look at the problem of location recognition in a large image dataset using a vocabulary tree. This entails finding the location of a query image in a large dataset containing 3×10^4 streetside images of a city. We investigate how the traditional invariant feature matching approach falls down as the size of the database grows. In particular we show that by carefully selecting the vocabulary using the most informative features, retrieval performance is significantly improved, allowing us to increase the number of database images by a factor of 10. We also introduce a generalization of the traditional vocabulary tree search algorithm which improves performance by effectively increasing the branching factor of a fixed vocabulary tree.",
"Efficient view registration with respect to a given 3D reconstruction has many applications like inside-out tracking in indoor and outdoor environments, and geo-locating images from large photo collections. We present a fast location recognition technique based on structure from motion point clouds. Vocabulary tree-based indexing of features directly returns relevant fragments of 3D models instead of documents from the images database. Additionally, we propose a compressed 3D scene representation which improves recognition rates while simultaneously reducing the computation time and the memory consumption. The design of our method is based on algorithms that efficiently utilize modern graphics processing units to deliver real-time performance for view registration. We demonstrate the approach by matching hand-held outdoor videos to known 3D urban models, and by registering images from online photo collections to the corresponding landmarks.",
"Recognizing the location of a query image by matching it to a database is an important problem in computer vision, and one for which the representation of the database is a key issue. We explore new ways for exploiting the structure of a database by representing it as a graph, and show how the rich information embedded in a graph can improve a bag-of-words-based location recognition method. In particular, starting from a graph on a set of images based on visual connectivity, we propose a method for selecting a set of subgraphs and learning a local distance function for each using discriminative techniques. For a query image, each database image is ranked according to these local distance functions in order to place the image in the right part of the graph. In addition, we propose a probabilistic method for increasing the diversity of these ranked database images, again based on the structure of the image graph. We demonstrate that our methods improve performance over standard bag-of-words methods on several existing location recognition datasets.",
"In this paper, we present a large-scale object retrieval system. The user supplies a query object by selecting a region of a query image, and the system returns a ranked list of images that contain the same object, retrieved from a large corpus. We demonstrate the scalability and performance of our system on a dataset of over 1 million images crawled from the photo-sharing site, Flickr [3], using Oxford landmarks as queries. Building an image-feature vocabulary is a major time and performance bottleneck, due to the size of our dataset. To address this problem we compare different scalable methods for building a vocabulary and introduce a novel quantization method based on randomized trees which we show outperforms the current state-of-the-art on an extensive ground-truth. Our experiments show that the quantization has a major effect on retrieval quality. To further improve query performance, we add an efficient spatial verification stage to re-rank the results returned from our bag-of-words model and show that this consistently improves search quality, though by less of a margin when the visual vocabulary is large. We view this work as a promising step towards much larger, \"web-scale \" image corpora.",
"We present a fast, simple location recognition and image localization method that leverages feature correspondence and geometry estimated from large Internet photo collections. Such recovered structure contains a significant amount of useful information about images and image features that is not available when considering images in isolation. For instance, we can predict which views will be the most common, which feature points in a scene are most reliable, and which features in the scene tend to co-occur in the same image. Based on this information, we devise an adaptive, prioritized algorithm for matching a representative set of SIFT features covering a large scene to a query image for efficient localization. Our approach is based on considering features in the scene database, and matching them to query image features, as opposed to more conventional methods that match image features to visual words or database features. We find this approach results in improved performance, due to the richer knowledge of characteristics of the database features compared to query image features. We present experiments on two large city-scale photo collections, showing that our algorithm compares favorably to image retrieval-style approaches to location recognition.",
"A recognition scheme that scales efficiently to a large number of objects is presented. The efficiency and quality is exhibited in a live demonstration that recognizes CD-covers from a database of 40000 images of popular music CDs. The scheme builds upon popular techniques of indexing descriptors extracted from local regions, and is robust to background clutter and occlusion. The local region descriptors are hierarchically quantized in a vocabulary tree. The vocabulary tree allows a larger and more discriminatory vocabulary to be used efficiently, which we show experimentally leads to a dramatic improvement in retrieval quality. The most significant property of the scheme is that the tree directly defines the quantization. The quantization and the indexing are therefore fully integrated, essentially being one and the same. The recognition quality is evaluated through retrieval on a database with ground truth, showing the power of the vocabulary tree approach, going as high as 1 million images.",
"To reliably determine the camera pose of an image relative to a 3D point cloud of a scene, correspondences between 2D features and 3D points are needed. Recent work has demonstrated that directly matching the features against the points outperforms methods that take an intermediate image retrieval step in terms of the number of images that can be localized successfully. Yet, direct matching is inherently less scalable than retrievalbased approaches. In this paper, we therefore analyze the algorithmic factors that cause the performance gap and identify false positive votes as the main source of the gap. Based on a detailed experimental evaluation, we show that retrieval methods using a selective voting scheme are able to outperform state-of-the-art direct matching methods. We explore how both selective voting and correspondence computation can be accelerated by using a Hamming embedding of feature descriptors. Furthermore, we introduce a new dataset with challenging query images for the evaluation of image-based localization.",
"We propose a powerful pipeline for determining the pose of a query image relative to a point cloud reconstruction of a large scene consisting of more than one million 3D points. The key component of our approach is an efficient and effective search method to establish matches between image features and scene points needed for pose estimation. Our main contribution is a framework for actively searching for additional matches, based on both 2D-to-3D and 3D-to-2D search. A unified formulation of search in both directions allows us to exploit the distinct advantages of both strategies, while avoiding their weaknesses. Due to active search, the resulting pipeline is able to close the gap in registration performance observed between efficient search methods and approaches that are allowed to run for multiple seconds, without sacrificing run-time efficiency. Our method achieves the best registration performance published so far on three standard benchmark datasets, with run-times comparable or superior to the fastest state-of-the-art methods.",
"We describe an approach to object and scene retrieval which searches for and localizes all the occurrences of a user outlined object in a video. The object is represented by a set of viewpoint invariant region descriptors so that recognition can proceed successfully despite changes in viewpoint, illumination and partial occlusion. The temporal continuity of the video within a shot is used to track the regions in order to reject unstable regions and reduce the effects of noise in the descriptors. The analogy with text retrieval is in the implementation where matches on descriptors are pre-computed (using vector quantization), and inverted file systems and document rankings are used. The result is that retrieval is immediate, returning a ranked list of key frames/shots in the manner of Google. The method is illustrated for matching in two full length feature films.",
"We address the problem of large scale place-of-interest recognition in cell phone images of urban scenarios. Here, we go beyond what has been shown in earlier approaches by exploiting the nowadays often available 3D building information (e.g. from extruded floor plans) and massive street-view like image data for database creation. Exploiting vanishing points in query images and thus fully removing 3D rotation from the recognition problem allows then to simplify the feature invariance to a pure homothetic problem, which we show leaves more discriminative power in feature descriptors than classical SIFT. We rerank visual word based document queries using a fast stratified homothetic verification that is tailored for repetitive patterns like window grids on facades and in most cases boosts the correct document to top positions if it was in the short list. Since we exploit 3D building information, the approach finally outputs the camera pose in real world coordinates ready for augmenting the cell phone image with virtual 3D information. The whole system is demonstrated to outperform traditional approaches on city scale experiments for different sources of street-view like image data and a challenging set of cell phone images.",
"Recently developed Structure from Motion (SfM) reconstruction approaches enable the creation of large scale 3D models of urban scenes. These compact scene representations can then be used for accurate image-based localization, creating the need for localization approaches that are able to efficiently handle such large amounts of data. An important bottleneck is the computation of 2D-to-3D correspondences required for pose estimation. Current state-of-the-art approaches use indirect matching techniques to accelerate this search. In this paper we demonstrate that direct 2D-to-3D matching methods have a considerable potential for improving registration performance. We derive a direct matching framework based on visual vocabulary quantization and a prioritized correspondence search. Through extensive experiments, we show that our framework efficiently handles large datasets and outperforms current state-of-the-art methods.",
"We seek to predict the GPS location of a query image given a database of images localized on a map with known GPS locations. The contributions of this work are three-fold: (1) we formulate the image-based localization problem as a regression on an image graph with images as nodes and edges connecting close-by images; (2) we design a novel image matching procedure, which computes similarity between the query and pairs of database images using edges of the graph and considering linear combinations of their feature vectors. This improves generalization to unseen viewpoints and illumination conditions, while reducing the database size; (3) we demonstrate that the query location can be predicted by interpolating locations of matched images in the graph without the costly estimation of multi-view geometry. We demonstrate benefits of the proposed image matching scheme on the standard Oxford building benchmark, and show localization results on a database of 8,999 panoramic Google Street View images of Pittsburgh.",
"From the Publisher: A basic problem in computer vision is to understand the structure of a real world scene given several images of it. Recent major developments in the theory and practice of scene reconstruction are described in detail in a unified framework. The book covers the geometric principles and how to represent objects algebraically so they can be computed and applied. The authors provide comprehensive background material and explain how to apply the methods and implement the algorithms directly.",
"We seek to recognize the place depicted in a query image using a database of \"street side\" images annotated with geolocation information. This is a challenging task due to changes in scale, viewpoint and lighting between the query and the images in the database. One of the key problems in place recognition is the presence of objects such as trees or road markings, which frequently occur in the database and hence cause significant confusion between different places. As the main contribution, we show how to avoid features leading to confusion of particular places by using geotags attached to database images as a form of supervision. We develop a method for automatic detection of image-specific and spatially-localized groups of confusing features, and demonstrate that suppressing them significantly improves place recognition performance while reducing the database size. We show the method combines well with the state of the art bag-of-features model including query expansion, and demonstrate place recognition that generalizes over wide range of viewpoints and lighting conditions. Results are shown on a geotagged database of over 17K images of Paris downloaded from Google Street View.",
"This paper proposes a new framework for visual place recognition that incrementally learns models of each place and offers adaptability to dynamic elements in the scene. Traditional Bag-Of-Words (BOW) image-retrieval approaches to place recognition typically treat images in a holistic manner and are not capable of dealing with sub-scene dynamics, such as structural changes to a building facade or seasonal effects on foliage. However, by treating local features as observations of real-world landmarks in a scene that is observed repeatedly over a period of time, such dynamics can be modelled at a local level, and the spatio-temporal properties of each landmark can be independently updated incrementally. The method proposed models each place as a set of such landmarks and their geometric relationships. A new BOW filtering stage and geometric verification scheme are introduced to compute a similarity score between a query image and each scene model. As further training images are acquired for each place, the landmark properties are updated over time and in the long term, the model can adapt to dynamic behaviour in the scene. Results on an outdoor dataset of images captured along a 7 km path, over a period of 5 months, show an improvement in recognition performance when compared to state-of-the-art image retrieval approaches to place recognition."
]
} |
1409.5400 | 2017512655 | Abstract The task of a visual landmark recognition system is to identify photographed buildings or objects in query photos and to provide the user with relevant information on them. With their increasing coverage of the world’s landmark buildings and objects, Internet photo collections are now being used as a source for building such systems in a fully automatic fashion. This process typically consists of three steps: clustering large amounts of images by the objects they depict; determining object names from user-provided tags; and building a robust, compact, and efficient recognition index. To this date, however, there is little empirical information on how well current approaches for those steps perform in a large-scale open-set mining and recognition task. Furthermore, there is little empirical information on how recognition performance varies for different types of landmark objects and where there is still potential for improvement. With this paper, we intend to fill these gaps. Using a dataset of 500 k images from Paris, we analyze each component of the landmark recognition pipeline in order to answer the following questions: How many and what kinds of objects can be discovered automatically? How can we best use the resulting image clusters to recognize the object in a query? How can the object be efficiently represented in memory for recognition? How reliably can semantic information be extracted? And finally: What are the limiting factors in the resulting pipeline from query to semantics? We evaluate how different choices of methods and parameters for the individual pipeline steps affect overall system performance and examine their effects for different query categories such as buildings, paintings or sculptures. | Several methods have been proposed to reduce the size of the visual search index. 
An obvious method is to apply standard compression techniques @cite_5, which reduces memory consumption at the cost of computational efficiency. Instead, we are interested in reducing the index size already before index construction. | {
"cite_N": [
"@cite_5"
],
"mid": [
"2543932557"
],
"abstract": [
"One of the main limitations of image search based on bag-of-features is the memory usage per image. Only a few million images can be handled on a single machine in reasonable response time. In this paper, we first evaluate how the memory usage is reduced by using lossless index compression. We then propose an approximate representation of bag-of-features obtained by projecting the corresponding histogram onto a set of pre-defined sparse projection functions, producing several image descriptors. Coupled with a proper indexing structure, an image is represented by a few hundred bytes. A distance expectation criterion is then used to rank the images. Our method is at least one order of magnitude faster than standard bag-of-features while providing excellent search quality."
]
} |
1409.5400 | 2017512655 | Abstract The task of a visual landmark recognition system is to identify photographed buildings or objects in query photos and to provide the user with relevant information on them. With their increasing coverage of the world’s landmark buildings and objects, Internet photo collections are now being used as a source for building such systems in a fully automatic fashion. This process typically consists of three steps: clustering large amounts of images by the objects they depict; determining object names from user-provided tags; and building a robust, compact, and efficient recognition index. To this date, however, there is little empirical information on how well current approaches for those steps perform in a large-scale open-set mining and recognition task. Furthermore, there is little empirical information on how recognition performance varies for different types of landmark objects and where there is still potential for improvement. With this paper, we intend to fill these gaps. Using a dataset of 500 k images from Paris, we analyze each component of the landmark recognition pipeline in order to answer the following questions: How many and what kinds of objects can be discovered automatically? How can we best use the resulting image clusters to recognize the object in a query? How can the object be efficiently represented in memory for recognition? How reliably can semantic information be extracted? And finally: What are the limiting factors in the resulting pipeline from query to semantics? We evaluate how different choices of methods and parameters for the individual pipeline steps affect overall system performance and examine their effects for different query categories such as buildings, paintings or sculptures. | Several works have addressed this problem at the image level, by removing redundant images from the index. 
Li et al. @cite_13 summarize the input image collection in a set of iconic images by applying k-means clustering based on Gist descriptors, and use only these images to represent a landmark in retrieval. Gammeter et al. @cite_27 identify sets of very similar images using complete-link hierarchical agglomerative clustering and replace them by just one image. This step yields a slight compression of the index without loss in performance. Instead of performing clustering, Yang et al. @cite_11 only determine a set of canonical views by applying PageRank on the matching graph of the image collection. They then discard all other views and match the query only against the canonical views. | {
"cite_N": [
"@cite_27",
"@cite_13",
"@cite_11"
],
"mid": [
"1826823045",
"2058650633",
"2153697991"
],
"abstract": [
"Most of the recent work on image-based object recognition and 3D reconstruction has focused on improving the underlying algorithms. In this paper we present a method to automatically improve the quality of the reference database, which, as we will show, also affects recognition and reconstruction performances significantly. Starting out from a reference database of clustered images we expand small clusters. This is done by exploiting cross-media information, which allows for crawling of additional images. For large clusters redundant information is removed by scene analysis. We show how these techniques make object recognition and 3D reconstruction both more efficient and more precise - we observed up to 14.8% improvement for the recognition task. Furthermore, the methods are completely data-driven and fully automatic.",
"This article presents an approach for modeling landmarks based on large-scale, heavily contaminated image collections gathered from the Internet. Our system efficiently combines 2D appearance and 3D geometric constraints to extract scene summaries and construct 3D models. In the first stage of processing, images are clustered based on low-dimensional global appearance descriptors, and the clusters are refined using 3D geometric constraints. Each valid cluster is represented by a single iconic view, and the geometric relationships between iconic views are captured by an iconic scene graph. Using structure from motion techniques, the system then registers the iconic images to efficiently produce 3D models of the different aspects of the landmark. To improve coverage of the scene, these 3D models are subsequently extended using additional, non-iconic views. We also demonstrate the use of iconic images for recognition and browsing. Our experimental results demonstrate the ability to process datasets containing up to 46,000 images in less than 20 hours, using a single commodity PC equipped with a graphics card. This is a significant advance towards Internet-scale operation.",
"We study the problem of place recognition. Given a photo, we estimate its location by scene matching to a large database of internet photos of known locations. Traditional strategies, which involve a linear scan of the database to find matching scenes, fail to scale. On the other hand, internet photos contain a massive amount of noise and redundancy, which is of little help for place recognition. By exploiting the scene distribution of photos, we summarize the database by a set of canonical views. The set of canonical views eliminates the noise and redundancy in internet photos, and provides a compact representation for the database. By restricting scene matching to the set of canonical views, we observe a good tradeoff between efficiency and recall: the average processing time for a query photo is reduced by 97%, while the recall rate for place recognition remains at 75%."
]
} |
1409.5400 | 2017512655 | Abstract The task of a visual landmark recognition system is to identify photographed buildings or objects in query photos and to provide the user with relevant information on them. With their increasing coverage of the world’s landmark buildings and objects, Internet photo collections are now being used as a source for building such systems in a fully automatic fashion. This process typically consists of three steps: clustering large amounts of images by the objects they depict; determining object names from user-provided tags; and building a robust, compact, and efficient recognition index. To this date, however, there is little empirical information on how well current approaches for those steps perform in a large-scale open-set mining and recognition task. Furthermore, there is little empirical information on how recognition performance varies for different types of landmark objects and where there is still potential for improvement. With this paper, we intend to fill these gaps. Using a dataset of 500 k images from Paris, we analyze each component of the landmark recognition pipeline in order to answer the following questions: How many and what kinds of objects can be discovered automatically? How can we best use the resulting image clusters to recognize the object in a query? How can the object be efficiently represented in memory for recognition? How reliably can semantic information be extracted? And finally: What are the limiting factors in the resulting pipeline from query to semantics? We evaluate how different choices of methods and parameters for the individual pipeline steps affect overall system performance and examine their effects for different query categories such as buildings, paintings or sculptures. | Other works have addressed the problem at the feature level. Turcot et al. @cite_2 perform a full pairwise matching of the images in the dataset and remove all features that are not at least once inliers to a homography. 
They report a significant reduction of the number of features while maintaining similar retrieval performance. Avrithis et al. @cite_40 and Johns et al. @cite_24 combine the images in a cluster into a joint BoVW representation. Avrithis et al. @cite_40 use Kernel Vector Quantization to cluster redundant features and keep only the cluster centers. While this method only yields a slight compression, the aggregation of features into a Scene Map brings significant improvements in recognition performance. Johns et al. @cite_24 perform structure-from-motion and summarize features that are part of the same feature track. Gammeter et al. @cite_23 estimate bounding boxes around the landmark in each image in a cluster and remove every visual word from the index that never occurs inside a bounding box. This is reported to yield an index size reduction of about a third with decreasing precision. | {
"cite_N": [
"@cite_24",
"@cite_40",
"@cite_23",
"@cite_2"
],
"mid": [
"2064859577",
"2121348293",
"2540698934",
"1976591483"
],
"abstract": [
"The recognition of a place depicted in an image typically adopts methods from image retrieval in large-scale databases. First, a query image is described as a “bag-of-features” and compared to every image in the database. Second, the most similar images are passed to a geometric verification stage. However, this is an inefficient approach when considering that some database images may be almost identical, and many image features may not repeatedly occur. We address this issue by clustering similar database images to represent distinct scenes, and tracking local features that are consistently detected to form a set of real-world landmarks. Query images are then matched to landmarks rather than features, and a probabilistic model of landmark properties is learned from the cluster to appropriately verify or reject putative feature matches. We present novelties in both a bag-of-features retrieval and geometric verification stage based on this concept. Results on a database of 200K images of popular tourist destinations show improvements in both recognition performance and efficiency compared to traditional image retrieval methods.",
"State of the art data mining and image retrieval in community photo collections typically focus on popular subsets, e.g. images containing landmarks or associated to Wikipedia articles. We propose an image clustering scheme that, seen as vector quantization, compresses a large corpus of images by grouping visually consistent ones while providing a guaranteed distortion bound. This allows us, for instance, to represent the visual content of all thousands of images depicting the Parthenon in just a few dozens of scene maps and still be able to retrieve any single, isolated, non-landmark image like a house or graffiti on a wall. Starting from a geo-tagged dataset, we first group images geographically and then visually, where each visual cluster is assumed to depict different views of the same scene. We align all views to one reference image and construct a 2D scene map by preserving details from all images while discarding repeating visual features. Our indexing, retrieval and spatial matching scheme then operates directly on scene maps. We evaluate the precision of the proposed method on a challenging one-million urban image dataset.",
"The state-of-the art in visual object retrieval from large databases allows to search millions of images on the object level. Recently, complementary works have proposed systems to crawl large object databases from community photo collections on the Internet. We combine these two lines of work to a large-scale system for auto-annotation of holiday snaps. The resulting method allows for automatic labeling objects such as landmark buildings, scenes, pieces of art etc. at the object level in a fully automatic manner. The labeling is multi-modal and consists of textual tags, geographic location, and related content on the Internet. Furthermore, the efficiency of the retrieval process is optimized by creating more compact and precise indices for visual vocabularies using background information obtained in the crawling stage of the system. We demonstrate the scalability and precision of the proposed method by conducting experiments on millions of images downloaded from community photo collections on the Internet.",
"There has been recent progress on the problem of recognizing specific objects in very large datasets. The most common approach has been based on the bag-of-words (BOW) method, in which local image features are clustered into visual words. This can provide significant savings in memory compared to storing and matching each feature independently. In this paper we take an additional step to reducing memory requirements by selecting only a small subset of the training features to use for recognition. This is based on the observation that many local features are unreliable or represent irrelevant clutter. We are able to select “useful” features, which are both robust and distinctive, by an unsupervised preprocessing step that identifies correctly matching features among the training images. We demonstrate that this selection approach allows an average of 4% of the original features per image to provide matching performance that is as accurate as the full set. In addition, we employ a graph to represent the matching relationships between images. Doing so enables us to effectively augment the feature set for each image through merging of useful features of neighboring images. We demonstrate adjacent and 2-adjacent augmentation, both of which give a substantial boost in performance."
]
} |
1409.5400 | 2017512655 | Abstract The task of a visual landmark recognition system is to identify photographed buildings or objects in query photos and to provide the user with relevant information on them. With their increasing coverage of the world’s landmark buildings and objects, Internet photo collections are now being used as a source for building such systems in a fully automatic fashion. This process typically consists of three steps: clustering large amounts of images by the objects they depict; determining object names from user-provided tags; and building a robust, compact, and efficient recognition index. To this date, however, there is little empirical information on how well current approaches for those steps perform in a large-scale open-set mining and recognition task. Furthermore, there is little empirical information on how recognition performance varies for different types of landmark objects and where there is still potential for improvement. With this paper, we intend to fill these gaps. Using a dataset of 500 k images from Paris, we analyze each component of the landmark recognition pipeline in order to answer the following questions: How many and what kinds of objects can be discovered automatically? How can we best use the resulting image clusters to recognize the object in a query? How can the object be efficiently represented in memory for recognition? How reliably can semantic information be extracted? And finally: What are the limiting factors in the resulting pipeline from query to semantics? We evaluate how different choices of methods and parameters for the individual pipeline steps affect overall system performance and examine their effects for different query categories such as buildings, paintings or sculptures. | There is also work in image-based localization that aims to eliminate redundancy in the dataset. 
In their hybrid 2D-3D pose estimation approach, Irschara et al. @cite_4 generate a set of synthetic views by projecting the SfM points onto a set of virtual cameras placed at regular intervals in the scene. They then decimate the set of synthetic views using a greedy set cover approach that finds a minimal subset of views such that each view in the subset has at least 150 3D points in common with an original view. Cao et al. @cite_33 use a similar criterion, but instead of views, they decimate the set of points in an SfM point cloud used for localization. Instead of set cover, they use a probabilistic variant of the K-Cover algorithm. | {
"cite_N": [
"@cite_4",
"@cite_33"
],
"mid": [
"2125795712",
"1979660104"
],
"abstract": [
"Efficient view registration with respect to a given 3D reconstruction has many applications like inside-out tracking in indoor and outdoor environments, and geo-locating images from large photo collections. We present a fast location recognition technique based on structure from motion point clouds. Vocabulary tree-based indexing of features directly returns relevant fragments of 3D models instead of documents from the images database. Additionally, we propose a compressed 3D scene representation which improves recognition rates while simultaneously reducing the computation time and the memory consumption. The design of our method is based on algorithms that efficiently utilize modern graphics processing units to deliver real-time performance for view registration. We demonstrate the approach by matching hand-held outdoor videos to known 3D urban models, and by registering images from online photo collections to the corresponding landmarks.",
"How much data do we need to describe a location? We explore this question in the context of 3D scene reconstructions created from running structure from motion on large Internet photo collections, where reconstructions can contain many millions of 3D points. We consider several methods for computing much more compact representations of such reconstructions for the task of location recognition, with the goal of maintaining good performance with very small models. In particular, we introduce a new method for computing compact models that takes into account both image-point relationships and feature distinctiveness, and we show that this method produces small models that yield better recognition performance than previous model reduction techniques."
]
} |
1409.5400 | 2017512655 | Abstract The task of a visual landmark recognition system is to identify photographed buildings or objects in query photos and to provide the user with relevant information on them. With their increasing coverage of the world’s landmark buildings and objects, Internet photo collections are now being used as a source for building such systems in a fully automatic fashion. This process typically consists of three steps: clustering large amounts of images by the objects they depict; determining object names from user-provided tags; and building a robust, compact, and efficient recognition index. To this date, however, there is little empirical information on how well current approaches for those steps perform in a large-scale open-set mining and recognition task. Furthermore, there is little empirical information on how recognition performance varies for different types of landmark objects and where there is still potential for improvement. With this paper, we intend to fill these gaps. Using a dataset of 500 k images from Paris, we analyze each component of the landmark recognition pipeline in order to answer the following questions: How many and what kinds of objects can be discovered automatically? How can we best use the resulting image clusters to recognize the object in a query? How can the object be efficiently represented in memory for recognition? How reliably can semantic information be extracted? And finally: What are the limiting factors in the resulting pipeline from query to semantics? We evaluate how different choices of methods and parameters for the individual pipeline steps affect overall system performance and examine their effects for different query categories such as buildings, paintings or sculptures. | The most common approach to perform semantic labeling of the discovered landmark clusters is by statistical analysis of user-provided image tags, titles and descriptions. 
In order to remove uninformative tags like "vacation", Quack et al. @cite_9 first apply a stoplist and then perform frequent itemset analysis to generate candidate names. These names are verified by querying Wikipedia and matching images from retrieved articles against the landmark cluster. Zheng et al. @cite_48 also apply a stoplist and then simply use the most frequent n-gram in the cluster. Crandall et al. @cite_29 deal with uninformative tags in a more general way by dividing the number of occurrences of a tag in a cluster by its total number of occurrences in the dataset. Simon et al. @cite_26 additionally account for tags that are only used by individual users by computing a conditional probability for a cluster given a tag, marginalizing out the users. | {
"cite_N": [
"@cite_48",
"@cite_9",
"@cite_29",
"@cite_26"
],
"mid": [
"2136451880",
"1979572481",
"2103388840",
"2163334428"
],
"abstract": [
"Modeling and recognizing landmarks at world-scale is a useful yet challenging task. There exists no readily available list of worldwide landmarks. Obtaining reliable visual models for each landmark can also pose problems, and efficiency is another challenge for such a large scale system. This paper leverages the vast amount of multimedia data on the Web, the availability of an Internet image search engine, and advances in object recognition and clustering techniques, to address these issues. First, a comprehensive list of landmarks is mined from two sources: (1) 20 million GPS-tagged photos and (2) online tour guide Web pages. Candidate images for each landmark are then obtained from photo sharing Websites or by querying an image search engine. Second, landmark visual models are built by pruning candidate images using efficient image matching and unsupervised clustering techniques. Finally, the landmarks and their visual models are validated by checking authorship of their member images. The resulting landmark recognition engine incorporates 5312 landmarks from 1259 cities in 144 countries. The experiments demonstrate that the engine can deliver satisfactory recognition performance with high efficiency.",
"In this paper, we describe an approach for mining images of objects (such as touristic sights) from community photo collections in an unsupervised fashion. Our approach relies on retrieving geotagged photos from those web-sites using a grid of geospatial tiles. The downloaded photos are clustered into potentially interesting entities through a processing pipeline of several modalities, including visual, textual and spatial proximity. The resulting clusters are analyzed and are automatically classified into objects and events. Using mining techniques, we then find text labels for these clusters, which are used to again assign each cluster to a corresponding Wikipedia article in a fully unsupervised manner. A final verification step uses the contents (including images) from the selected Wikipedia article to verify the cluster-article assignment. We demonstrate this approach on several urban areas, densely covering an area of over 700 square kilometers and mining over 200,000 photos, making it probably the largest experiment of its kind to date.",
"We investigate how to organize a large collection of geotagged photos, working with a dataset of about 35 million images collected from Flickr. Our approach combines content analysis based on text tags and image data with structural analysis based on geospatial data. We use the spatial distribution of where people take photos to define a relational structure between the photos that are taken at popular places. We then study the interplay between this structure and the content, using classification methods for predicting such locations from visual, textual and temporal features of the photos. We find that visual and temporal features improve the ability to estimate the location of a photo, compared to using just textual features. We illustrate using these techniques to organize a large photo collection, while also revealing various interesting properties about popular cities and landmarks at a global scale.",
"We formulate the problem of scene summarization as selecting a set of images that efficiently represents the visual content of a given scene. The ideal summary presents the most interesting and important aspects of the scene with minimal redundancy. We propose a solution to this problem using multi-user image collections from the Internet. Our solution examines the distribution of images in the collection to select a set of canonical views to form the scene summary, using clustering techniques on visual features. The summaries we compute also lend themselves naturally to the browsing of image collections, and can be augmented by analyzing user-specified image tag data. We demonstrate the approach using a collection of images of the city of Rome, showing the ability to automatically decompose the images into separate scenes, and identify canonical views for each scene."
]
} |
1409.5400 | 2017512655 | Abstract The task of a visual landmark recognition system is to identify photographed buildings or objects in query photos and to provide the user with relevant information on them. With their increasing coverage of the world’s landmark buildings and objects, Internet photo collections are now being used as a source for building such systems in a fully automatic fashion. This process typically consists of three steps: clustering large amounts of images by the objects they depict; determining object names from user-provided tags; and building a robust, compact, and efficient recognition index. To this date, however, there is little empirical information on how well current approaches for those steps perform in a large-scale open-set mining and recognition task. Furthermore, there is little empirical information on how recognition performance varies for different types of landmark objects and where there is still potential for improvement. With this paper, we intend to fill these gaps. Using a dataset of 500 k images from Paris, we analyze each component of the landmark recognition pipeline in order to answer the following questions: How many and what kinds of objects can be discovered automatically? How can we best use the resulting image clusters to recognize the object in a query? How can the object be efficiently represented in memory for recognition? How reliably can semantic information be extracted? And finally: What are the limiting factors in the resulting pipeline from query to semantics? We evaluate how different choices of methods and parameters for the individual pipeline steps affect overall system performance and examine their effects for different query categories such as buildings, paintings or sculptures. | Unfortunately, there is a much larger problem for the task of semantic assignment, also observed by Simon et al. @cite_26, that is much harder to fix: for most clusters, accurate tags are simply not available. 
In our analysis (Sec. ), we will show for which clusters these methods will still result in accurate descriptions and point out the sources of this problem. | {
"cite_N": [
"@cite_26"
],
"mid": [
"2163334428"
],
"abstract": [
"We formulate the problem of scene summarization as selecting a set of images that efficiently represents the visual content of a given scene. The ideal summary presents the most interesting and important aspects of the scene with minimal redundancy. We propose a solution to this problem using multi-user image collections from the Internet. Our solution examines the distribution of images in the collection to select a set of canonical views to form the scene summary, using clustering techniques on visual features. The summaries we compute also lend themselves naturally to the browsing of image collections, and can be augmented by analyzing user-specified image tag data. We demonstrate the approach using a collection of images of the city of Rome, showing the ability to automatically decompose the images into separate scenes, and identify canonical views for each scene."
]
} |
1409.4695 | 2071348697 | The massive presence of silent members in online communities, the so-called lurkers, has long attracted the attention of researchers in social science, cognitive psychology, and computer–human interaction. However, the study of lurking phenomena represents an unexplored opportunity of research in data mining, information retrieval and related fields. In this paper, we take a first step towards the formal specification and analysis of lurking in social networks. We address the new problem of lurker ranking and propose the first centrality methods specifically conceived for ranking lurkers in social networks. Our approach utilizes only the network topology without probing into text contents or user relationships related to media. Using Twitter, Flickr, FriendFeed and GooglePlus as cases in point, our methods’ performance was evaluated against data-driven rankings as well as existing centrality methods, including the classic PageRank and alpha-centrality. Empirical evidence has shown the significance of our lurker ranking approach, and its uniqueness in effectively identifying and ranking lurkers in an online social network. | To the best of our knowledge, there has been no study other than ours that provides a formal computational methodology for lurker ranking. The study in @cite_53, which aims to develop classification methods for the various OSN actors, actually treats the lurking problem marginally, and in fact lurking cases are left out of experimental evaluation. Similarly, @cite_54 analyzes various factors that influence the lifetime of OSN users, also distinguishing between active and passive lifetime; however, analyzing passive lifetime is made possible only when the user's last login date is known, which is rarely available information. | {
"cite_N": [
"@cite_53",
"@cite_54"
],
"mid": [
"1974570422",
"1989320763"
],
"abstract": [
"In this paper, we present two methods for classification of different social network actors (individuals or organizations) such as leaders (e.g., news groups), lurkers, spammers and close associates. The first method is a two-stage process with a fuzzy-set theoretic (FST) approach to evaluation of the strengths of network links (or equivalently, actor-actor relationships) followed by a simple linear classifier to separate the actor classes. Since this method uses a lot of contextual information including actor profiles, actor-actor tweet and reply frequencies, it may be termed a context-dependent approach. To handle the situation of limited availability of actor data for learning network link strengths, we also present a second method that performs actor classification by matching their short-term (say, roughly 25 days) tweet patterns with the generic tweet patterns of the prototype actors of different classes. Since little contextual information is used here, this can be called a context-independent approach. Our experimentation with over 500 randomly sampled records from a Twitter database consisting of 441,234 actors, 2,045,804 links, 6,481,900 tweets, and 2,312,927 total reply messages indicates that, in the context-independent analysis, a multilayer perceptron outperforms the Bayes and Random Forest classifiers on both classification accuracy and a new F-measure of classification performance. However, as expected, the context-dependent analysis using link strengths evaluated with the FST approach in conjunction with some actor information reveals strong clustering of actor data based on their types, and hence can be considered a superior approach when data available for training the system is abundant.",
"Online social network (OSN) operators are interested in promoting usage among their users, and try a variety of strategies to encourage use. Some recruit celebrities to their site, some allow third parties to develop applications that run on their sites, and all have features intended to encourage use. As important as usage is, there are few studies into what influences users to be active and to remain online. This article studies the lifetime of OSN users, examining the factors that influence lifetime in two OSNs, Twitter and Buzznet. The major contributions of this work are the study of active lifetime, the features and behaviors that encourage activity, and the comparison of active lifetime to passive lifetime."
]
} |
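The 1409.4695 row above describes centrality methods for lurker ranking that use only the network topology. As a loose, hypothetical illustration of that idea — not the paper's actual LurkerRank formulation — one could score users by how much content they consume relative to what they produce in a follower graph. The function name, the edge semantics ("u follows v"), and the +1 smoothing below are all assumptions made for this sketch:

```python
# Illustrative sketch (not the paper's method): rank users as lurkers by a
# consumption/production ratio computed from graph topology alone.
# An edge (u, v) is assumed to mean "u follows v", i.e. u consumes v's content.

from collections import defaultdict

def lurker_scores(edges):
    """Return a consumption/production ratio per user (higher = more lurker-like)."""
    in_deg = defaultdict(int)   # followers: proxy for content produced for others
    out_deg = defaultdict(int)  # followees: proxy for content consumed
    users = set()
    for u, v in edges:
        out_deg[u] += 1  # u consumes v's content
        in_deg[v] += 1   # v produces content reaching u
        users.update((u, v))
    # +1 smoothing avoids division by zero for users with no followers.
    return {u: out_deg[u] / (in_deg[u] + 1) for u in users}

# Toy graph: "a" follows three users but has no followers -> most lurker-like.
edges = [("a", "b"), ("a", "c"), ("a", "d"), ("b", "c")]
scores = lurker_scores(edges)
ranking = sorted(scores, key=scores.get, reverse=True)
```

A real ranking scheme would propagate scores iteratively (PageRank-style, as the abstract's comparison baselines suggest) rather than use raw degree ratios; this sketch only shows how topology alone can separate consumers from producers.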