Dataset columns: aid — string (9 to 15 chars); mid — string (7 to 10 chars); abstract — string (78 to 2.56k chars); related_work — string (92 to 1.77k chars); ref_abstract — dict.
1305.5998
2098232509
The metric uncapacitated facility location problem (UFL) enjoys a special stature in approximation algorithms as a testbed for various techniques. Two generalizations of UFL are capacitated facility location (CFL) and lower-bounded facility location (LBFL). In the former, every facility has a capacity which is the maximum demand that can be assigned to it, while in the latter, every open facility is required to serve a given minimum amount of demand. Both CFL and LBFL are approximable within a constant factor but their respective natural LP relaxations have an unbounded integrality gap. According to Shmoys and Williamson, the existence of a relaxation-based algorithm for CFL is one of the top 10 open problems in approximation algorithms. In this paper we give the first results on this problem. We provide substantial evidence against the existence of a good LP relaxation for CFL by showing unbounded integrality gaps for two families of strengthened formulations. The first family we consider is the hierarchy of LPs resulting from repeated applications of the lift-and-project Lovász-Schrijver procedure starting from the standard relaxation. We show that the LP relaxation for CFL resulting after @math rounds, where @math is the number of facilities in the instance, has unbounded integrality gap. Note that the Lovász-Schrijver procedure is known to yield an exact formulation for CFL in at most @math rounds. We also introduce the family of proper relaxations which generalizes to its logical extreme the classic star relaxation, an equivalent form of the natural LP. We characterize the integrality gap of proper relaxations for both LBFL and CFL and show a threshold phenomenon under which it decreases from unbounded to 1.
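For reference, the "natural LP relaxation" of CFL mentioned in this abstract has a standard form. The sketch below uses notation that is mine rather than the paper's: facility opening costs f_i, capacities u_i, client demands d_j, distances c_ij, with x_ij the fraction of client j's demand served by facility i and y_i the fractional opening of facility i.

```latex
% Standard LP relaxation for CFL.
\begin{align*}
\min \quad & \sum_i f_i y_i + \sum_{i,j} d_j c_{ij} x_{ij} \\
\text{s.t.} \quad & \textstyle\sum_i x_{ij} = 1 & \forall j \\
& \textstyle\sum_j d_j x_{ij} \le u_i y_i & \forall i \\
& x_{ij} \le y_i & \forall i, j \\
& 0 \le x_{ij},\ y_i \le 1
\end{align*}
```

Intuitively, the unbounded integrality gap arises because a fractional solution may open a facility to a tiny extent y_i, paying only f_i y_i, yet still route up to u_i y_i demand through it.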
@cite_40 gave the first constant-factor approximation algorithm for uniform CFL. Chudak and Williamson @cite_60 obtained a ratio of @math , subsequently improved to @math @cite_39 . Pál, Tardos and Wexler @cite_52 gave the first constant-factor approximation for non-uniform CFL. This was improved by Mahdian and Pál @cite_15 and @cite_30 to a @math -approximation algorithm. As mentioned, the currently best guarantee is 5, due to @cite_8 . All these approaches use local search.
{ "cite_N": [ "@cite_30", "@cite_60", "@cite_8", "@cite_52", "@cite_39", "@cite_40", "@cite_15" ], "mid": [ "1966016308", "2149106399", "", "2144926522", "1854155592", "2033687872", "1559577696" ], "abstract": [ "We present a multi-exchange local search algorithm for approximating the capacitated facility location problem (CFLP), where a new local improvement operation is introduced that possibly exchanges multiple facilities simultaneously. We give a tight analysis for our algorithm and show that the performance guarantee of the algorithm is between (3+2√2−ε) and (3+2√2+ε) for any given constant ε > 0. The previously known best approximation ratio for the CFLP is 7.88, due to Mahdian and Pál (2003), based on the operations proposed by Pál, Tardos and Wexler (2001). Our upper bound (3+2√2+ε) also matches the best known ratio, obtained by Chudak and Williamson (1999), for the CFLP with uniform capacities. In order to obtain the tight bound of our new algorithm, we make interesting use of the notion of exchange graph and of techniques from the area of parallel machine scheduling.", "In a surprising result, Korupolu, Plaxton, and Rajaraman [13] showed that a simple local search heuristic for the capacitated facility location problem (CFLP) in which the service costs obey the triangle inequality produces a solution in polynomial time which is within a factor of 8+ε of the value of an optimal solution. By simplifying their analysis, we are able to show that the same heuristic produces a solution which is within a factor of 6(1+ε) of the value of an optimal solution. Our simplified analysis uses the supermodularity of the cost function of the problem and the integrality of the transshipment polyhedron.", "", "The authors give the first constant factor approximation algorithm for the facility location problem with nonuniform, hard capacities. Facility location problems have received a great deal of attention in recent years. Approximation algorithms have been developed for many variants. Most of these algorithms are based on linear programming, but the LP techniques developed thus far have been unsuccessful in dealing with hard capacities. A local-search based approximation algorithm (M., 1998; F.A. Chudak and D.P. Williamson, 1999) is known for the special case of hard but uniform capacities. We present a local-search heuristic that yields an approximation guarantee of 9+ε for the case of nonuniform hard capacities. To obtain this result, we introduce new operations that are natural in this context. Our proof is based on network flow techniques.", "We present improved combinatorial approximation algorithms for the uncapacitated facility location and k-median problems. Two central ideas in most of our results are cost scaling and greedy improvement. We present a simple greedy local search algorithm which achieves an approximation ratio of 2.414+ε in Õ(n²/ε) time. This also yields a bicriteria approximation tradeoff of (1+γ, 1+2γ) for facility cost versus service cost which is better than previously known tradeoffs and close to the best possible. Combining greedy improvement and cost scaling with a recent primal-dual algorithm for facility location due to K. Jain and V. Vazirani (1999), we get an approximation ratio of 1.853 in Õ(n³) time. This is already very close to the approximation guarantee of the best known algorithm which is LP-based. Further combined with the best known LP-based algorithm for facility location, we get a very slight improvement in the approximation factor for facility location, achieving 1.728. We present improved approximation algorithms for capacitated facility location and a variant. We also present a 4-approximation for the k-median problem, using similar ideas, building on the 6-approximation of Jain and Vazirani. The algorithm runs in Õ(n³) time.", "In this paper, we study approximation algorithms for several NP-hard facility location problems. We prove that a simple local search heuristic yields polynomial-time constant-factor approximation bounds for the metric versions of the uncapacitated k-median problem and the uncapacitated facility location problem. (For the k-median problem, our algorithms require a constant-factor blowup in the parameter k.) This local search heuristic was first proposed several decades ago, and has been shown to exhibit good practical performance in empirical studies. We also extend the above results to obtain constant-factor approximation bounds for the metric versions of capacitated k-median and facility location problems.", "In the Universal Facility Location problem we are given a set of demand points and a set of facilities. The goal is to assign the demands to facilities in such a way that the sum of service and facility costs is minimized. The service cost is proportional to the distance each unit of demand has to travel to its assigned facility, whereas the facility cost of each facility i depends on the amount of demand assigned to that facility and is given by a cost function f_i(·). We present a (7.88+ε)-approximation algorithm for the Universal Facility Location problem based on local search, under the assumption that the cost functions f_i are nondecreasing. The algorithm chooses local improvement steps by solving a knapsack-like subproblem using dynamic programming. This is the first constant-factor approximation algorithm for this problem. Our algorithm also slightly improves the best known approximation ratio for the capacitated facility location problem with non-uniform hard capacities." ] }
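Since every algorithm surveyed above is local-search based, a minimal sketch of the shared template may help. This is a generic add/drop/swap local search for the simpler uncapacitated problem; it is not any cited algorithm in particular and carries no approximation guarantee as written.

```python
def assignment_cost(open_facs, clients, dist):
    # Each client is served by its nearest open facility.
    return sum(min(dist[i][j] for i in open_facs) for j in clients)

def total_cost(open_facs, clients, dist, f):
    # Opening costs plus service (assignment) costs.
    return sum(f[i] for i in open_facs) + assignment_cost(open_facs, clients, dist)

def local_search_ufl(f, dist):
    """Add/drop/swap local search for uncapacitated facility location.
    f[i]: opening cost of facility i; dist[i][j]: distance from facility i
    to client j. Returns a locally optimal set of open facilities and its cost."""
    n_f, n_c = len(f), len(dist[0])
    clients = range(n_c)
    open_facs = {0}  # start from an arbitrary nonempty solution
    improved = True
    while improved:
        improved = False
        best = total_cost(open_facs, clients, dist, f)
        # Candidate moves: open one facility, close one, or swap a pair.
        moves = ([open_facs | {i} for i in range(n_f) if i not in open_facs]
                 + [open_facs - {i} for i in open_facs if len(open_facs) > 1]
                 + [(open_facs - {i}) | {k} for i in open_facs
                    for k in range(n_f) if k not in open_facs])
        for cand in moves:
            c = total_cost(cand, clients, dist, f)
            if c < best - 1e-9:
                open_facs, best, improved = set(cand), c, True
    return open_facs, best
```

The capacitated variants cited above enrich exactly this move set (e.g. multi-exchanges that reassign demand among several facilities at once) and that is where the analysis difficulty lies.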
1305.5998
2098232509
@cite_51 gave a 5-approximation algorithm, based on the standard LP, for the special case of CFL where all facilities have the same opening cost. In the soft-capacitated facility location problem one is allowed to open multiple copies of the same facility. Work on this problem includes @cite_38 @cite_26 @cite_37 @cite_2 . As observed in @cite_59 , a @math -approximation for UFL yields a @math -approximation for the case with soft capacities. Mahdian, Ye and Zhang @cite_7 noticed a sharper tradeoff and obtained a @math -approximation. A tradeoff between the blowup of capacities and the cost approximation for CFL was studied in @cite_20 . Bicriteria approximations for LBFL appeared in @cite_4 @cite_32 .
{ "cite_N": [ "@cite_38", "@cite_37", "@cite_26", "@cite_4", "@cite_7", "@cite_32", "@cite_59", "@cite_2", "@cite_51", "@cite_20" ], "mid": [ "", "", "", "2124295613", "1973529814", "2104701012", "2052494364", "", "2028549267", "" ], "abstract": [ "", "", "", "A networking problem of present-day interest is that of distributing a single data item to multiple clients while minimizing network usage. Steiner tree algorithms are a natural solution method, but only when the set of clients requesting the data is known. We study what can be done without this global knowledge, when a given vertex knows only the probability that any other client wishes to be connected, and must simply specify a fixed path to the data to be used in case it is requested. Our problem is an example of a class of network design problems with concave cost functions (which arise when the design problem exhibits economies of scale). In order to solve our problem, we introduce a new version of the facility location problem: one in which every open facility is required to have some minimum amount of demand assigned to it. We present a simple bicriterion approximation for this problem, one which is loose in both assignment cost and minimum demand, but within a constant factor of the optimum for both. This suffices for our application. We leave open the question of finding an algorithm that produces a truly feasible approximate solution.", "In this paper we present a 1.52-approximation algorithm for the metric uncapacitated facility location problem, and a 2-approximation algorithm for the metric capacitated facility location problem with soft capacities. Both these algorithms improve the best previously known approximation factor for the corresponding problem, and our soft-capacitated facility location algorithm achieves the integrality gap of the standard linear programming relaxation of the problem. 
Furthermore, we will show, using a result of Thorup, that our algorithms can be implemented in quasi-linear time.", "Gives constant approximations for a number of layered network design problems. We begin by modeling hierarchical caching, where the caches are placed in layers and each layer satisfies a fixed percentage of the demand (bounded miss rates). We present a constant approximation to the minimum total cost of placing the caches and routing demand through the layers. We extend this model to cover more general layered caching scenarios, giving a constant combinatorial approximation to the well-studied multi-level facility location problem. We consider a facility location variant, the load-balanced facility location problem, in which every demand is served by a unique facility and each open facility must serve at least a certain amount of demand. By combining load-balanced facility location with our results on hierarchical caching, we give a constant approximation for the access network design problem.", "In this article, we will formalize the method of dual fitting and the idea of factor-revealing LP. This combination is used to design and analyze two greedy algorithms for the metric uncapacitated facility location problem. Their approximation factors are 1.861 and 1.61, with running times of O(m log m) and O(n³), respectively, where n is the total number of vertices and m is the number of edges in the underlying complete bipartite graph between cities and facilities. The algorithms are used to improve recent results for several variants of the problem.", "", "In the capacitated facility location problem with hard capacities, we are given a set of facilities, @math , and a set of clients @math in a common metric space. Each facility i has a facility opening cost f_i and capacity u_i that specifies the maximum number of clients that may be assigned to this facility. We want to open some facilities from the set @math and assign each client to an open facility so that at most u_i clients are assigned to any open facility i. The cost of assigning client j to facility i is given by the distance c_ij, and our goal is to minimize the sum of the facility opening costs and the client assignment costs. The only known approximation algorithms that deliver solutions within a constant factor of optimal for this NP-hard problem are based on local search techniques. It is an open problem to devise an approximation algorithm for this problem based on a linear programming lower bound (or indeed, to prove a constant integrality gap for any LP relaxation). We make progress on this question by giving a 5-approximation algorithm for the special case in which all of the facility costs are equal, by rounding the optimal solution to the standard LP relaxation. One notable aspect of our algorithm is that it relies on partitioning the input into a collection of single-demand capacitated facility location problems, approximately solving them, and then combining these solutions in a natural way.", "" ] }
1305.5998
2098232509
For hard capacities and general demands, even deciding feasibility of the unsplittable case, where the demand of each client has to be assigned to a single facility, is NP-complete, as Partition reduces to it. Bateni and Hajiaghayi @cite_27 considered the unsplittable problem with an @math violation of the capacities and obtained an @math -approximation.
{ "cite_N": [ "@cite_27" ], "mid": [ "2129486506" ], "abstract": [ "In a Content Distribution Network (CDN), there are m servers storing the data; each of them has a specific bandwidth. All the requests from a particular client should be assigned to one server because of the routing protocol used. The goal is to minimize the total cost of these assignments (the cost of each is proportional to the distance between the client and the server as well as the request size) while the load on each server is kept below its bandwidth limit. When each server also has a setup cost, this is an unsplittable hard-capacitated facility location problem. As much attention as facility location problems have received, there has been no nontrivial approximation algorithm when we have hard capacities (i.e., there can only be one copy of each facility whose capacity cannot be violated) and demands are unsplittable (i.e., all the demand from a client has to be assigned to a single facility). We observe it is NP-hard to approximate the cost to within any bounded factor in this case. Thus, for an arbitrary constant ε > 0, we relax the capacities to a (1+ε) factor. For the case where capacities are almost uniform, we give a bicriteria (O(log n), 1+ε)-approximation algorithm for general metrics and a (1+ε, 1+ε)-approximation algorithm for tree metrics. A bicriteria (α, β)-approximation algorithm produces a solution of cost at most α times the optimum, while violating the capacities by no more than a β factor. We can get the same guarantees for nonuniform capacities if we allow quasipolynomial running time. In our algorithm, some clients guess the facility they are assigned to, and facilities decide the size of the clients they serve. A straightforward approach results in exponential running time. When costs do not satisfy metricity, we show that a 1.5 violation of capacities is necessary to obtain any approximation.
It is worth noting that our results generalize bin packing (zero connection costs and facility costs equal to one), knapsack (single facility with all costs being zero), minimum makespan scheduling for related machines (all connection costs being zero), and some facility location problems." ] }
1305.4987
2111947236
Annotation errors can significantly hurt classifier performance, yet datasets are only growing noisier with the increased use of Amazon Mechanical Turk and techniques like distant supervision that automatically generate labels. In this paper, we present a robust extension of logistic regression that incorporates the possibility of mislabelling directly into the objective. Our model can be trained through nearly the same means as logistic regression, and retains its efficiency on high-dimensional datasets. Through named entity recognition experiments, we demonstrate that our approach can provide a significant improvement over the standard model when annotation errors are present.
One obvious issue with these methods is that the noise-detecting classifiers are themselves trained on noisy labels. Such methods may suffer from well-known effects like masking, where several mislabelled examples 'mask' each other and go undetected, and swamping, in which the mislabelled points are so influential that they cast doubt on the correct examples @cite_9 . Figure 1 gives an example of these phenomena in the context of linear regression. Unsupervised filtering tries to avoid this problem by clustering training instances based solely on their features, then using the clusters to detect labelling anomalies @cite_5 . Recently, this approach was applied to distantly-supervised relation extraction, using heuristics such as the number of mentions per tuple to eliminate suspicious examples.
{ "cite_N": [ "@cite_5", "@cite_9" ], "mid": [ "2144985721", "1969515697" ], "abstract": [ "This paper presents PWEM, a technique for detecting class label noise in training data. PWEM detects mislabeled examples by assigning to each training example a probability that its label is correct. PWEM calculates this probability by clustering examples from pairs of classes together and analyzing the distribution of labels within each cluster to derive the probability of each label's correctness. We discuss how one can use the probabilities output by PWEM to filter, mitigate, or correct mislabeled training examples. We then provide an in-depth discussion of how we applied PWEM to a sulfur detector that labels pixels from Hyperion images of the Borup-Fiord pass in Northern Canada. PWEM assigned a large number of the sulfur training examples low probabilities, indicating severe mislabeling within the sulfur class. The filtering of those low confidence examples resulted in a cleaner training set and improved the median false positive rate of the classifier by at least 29%.", "This article studies the outlier detection problem from the standpoint of penalized regression. In the regression model, we add one mean shift parameter for each of the n data points. We then apply a regularization favoring a sparse vector of mean shift parameters. The usual L1 penalty yields a convex criterion, but fails to deliver a robust estimator. The L1 penalty corresponds to soft thresholding. We introduce a thresholding (denoted by Θ) based iterative procedure for outlier detection (Θ–IPOD). A version based on hard thresholding correctly identifies outliers on some hard test problems. We describe the connection between Θ–IPOD and M-estimators. Our proposed method has one tuning parameter with which to both identify outliers and estimate regression coefficients. A data-dependent choice can be made based on the Bayes information criterion.
The tuned Θ–IPOD shows outstanding performance in identifying outliers in various situations compared with other existing approaches. In addition, Θ–IPOD is much ..." ] }
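The robust extension described in this paper's abstract can be illustrated with a simple symmetric label-flip model (a sketch of the general idea, not necessarily the authors' exact objective): assume each observed label was flipped with probability eps, and fold eps directly into the logistic regression likelihood.

```python
import numpy as np
from scipy.optimize import minimize

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def noisy_nll(w, X, y, eps):
    """Negative log-likelihood under a symmetric label-flip model:
    with probability eps the true label was flipped before observation."""
    p = sigmoid(X @ w)                      # P(true label = 1 | x)
    q = (1 - eps) * p + eps * (1 - p)       # P(observed label = 1 | x)
    q = np.clip(q, 1e-12, 1 - 1e-12)        # guard the logs
    return -np.sum(y * np.log(q) + (1 - y) * np.log(1 - q))

def fit_robust_logreg(X, y, eps=0.1):
    # Same training machinery as ordinary logistic regression,
    # just with the noise-aware objective.
    w0 = np.zeros(X.shape[1])
    res = minimize(noisy_nll, w0, args=(X, y, eps), method="BFGS")
    return res.x
```

Because q is bounded away from 0 and 1 whenever eps > 0, a few mislabelled points can no longer drive the loss (and hence the weights) to extremes, which is the robustness the survey paragraph is after.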
1305.4987
2111947236
There is a growing body of literature on learning from several annotators, each of whom may be inaccurate @cite_3 @cite_0 . It is important to note that we are considering a separate, and perhaps more general, problem: we have only one source of noisy labels, and the errors need not come from the human annotators, but could be introduced through contamination or automatic labelling.
{ "cite_N": [ "@cite_0", "@cite_3" ], "mid": [ "2010135967", "2949654880" ], "abstract": [ "We describe a probabilistic approach for supervised learning when we have multiple experts (annotators) providing (possibly noisy) labels but no absolute gold standard. The proposed algorithm evaluates the different experts and also gives an estimate of the actual hidden labels. Experimental results indicate that the proposed method is superior to the commonly used majority voting baseline.", "We propose a new probabilistic graphical model that jointly models the difficulties of questions, the abilities of participants and the correct answers to questions in aptitude testing and crowdsourcing settings. We devise an active learning adaptive testing scheme based on a greedy minimization of expected model entropy, which allows a more efficient resource allocation by dynamically choosing the next question to be asked based on the previous responses. We present experimental results that confirm the ability of our model to infer the required parameters and demonstrate that the adaptive testing scheme requires fewer questions to obtain the same accuracy as a static test scenario." ] }
1305.4905
2048048782
Network Coding encourages information coding across a communication network. While the necessity, benefit and complexity of network coding are sensitive to the underlying graph structure of a network, existing theory on network coding often treats the network topology as a black box, focusing on algebraic or information theoretic aspects of the problem. This work aims at an in-depth examination of the relation between algebraic coding and network topologies. We mathematically establish a series of results along the direction of: if network coding is necessary or beneficial, or if a particular finite field is required for coding, then the network must have a corresponding hidden structure embedded in its underlying topology, and such embedding is computationally efficient to verify. Specifically, we first formulate a meta-conjecture, the NC-Minor Conjecture, that articulates such a connection between graph theory and network coding, in the language of graph minors. We next prove that the NC-Minor Conjecture is almost equivalent to the Hadwiger Conjecture, which connects graph minors with graph coloring. Such equivalence implies the existence of @math , @math , @math , and @math minors, for networks requiring @math , @math , @math and @math , respectively. We finally prove that network coding can make a difference from routing only if the network contains a @math minor, and this minor containment result is tight. Practical implications of the above results are discussed.
Two concurrent works also examine the connection between algebraic coding and network topologies. (1) Ebrahimi and Fragouli @cite_18 investigate such a connection using an algebraic approach. Based on the algebraic framework due to Koetter and Médard @cite_23 , they scrutinize the network polynomial that is used for multicast code assignment. The goal is to understand what structures in the network lead to which type of monomials in the network polynomial, and hence to bound the necessary field size by bounding the highest degree of the monomials. (2) Xiahou @cite_11 investigate such a connection using a graph coloring approach in planar and pseudo-planar networks, including special types of planar networks where all relays or all terminals appear on a common face. Their work is complementary to ours in that they design efficient network code assignment algorithms over small fields, while our work proves the sufficiency of small fields in more general types of networks.
{ "cite_N": [ "@cite_18", "@cite_23", "@cite_11" ], "mid": [ "2102620382", "2138928022", "" ], "abstract": [ "It is well known that transfer polynomials play an important role in the network code design problem. In this paper we provide a graph theoretical description of the terms of such polynomials. We consider acyclic networks with arbitrary number of receivers and min-cut h between each source-receiver pair. We show that the associated polynomial can be described in terms of certain subgraphs of the network.1", "We take a new look at the issue of network capacity. It is shown that network coding is an essential ingredient in achieving the capacity of a network. Building on recent work by (see Proc. 2001 IEEE Int. Symp. Information Theory, p.102), who examined the network capacity of multicast networks, we extend the network coding framework to arbitrary networks and robust networking. For networks which are restricted to using linear network codes, we find necessary and sufficient conditions for the feasibility of any given set of connections over a given network. We also consider the problem of network recovery for nonergodic link failures. For the multicast setup we prove that there exist coding strategies that provide maximally robust networks and that do not require adaptation of the network interior to the failure pattern in question. The results are derived for both delay-free networks and networks with delays.", "" ] }
1305.3312
2081371310
Estimation of a covariance matrix or its inverse plays a central role in many statistical methods. For these methods to work reliably, estimated matrices must not only be invertible but also well-conditioned. The current paper introduces a novel prior to ensure a well-conditioned maximum a posteriori (MAP) covariance estimate. The prior shrinks the sample covariance estimator towards a stable target and leads to a MAP estimator that is consistent and asymptotically efficient. Thus, the MAP estimator gracefully transitions towards the sample covariance matrix as the number of samples grows relative to the number of covariates. The utility of the MAP estimator is demonstrated in two standard applications (discriminant analysis and EM clustering) in challenging sampling regimes.
@cite_0 @cite_9 show that linear shrinkage works well when @math is large or the population eigenvalues are close to one another. On the other hand, if @math is small or the population eigenvalues are dispersed, linear shrinkage yields marginal improvements over the sample covariance. Nonlinear shrinkage estimators may present avenues for further improvement. Our shrinkage estimator is closest in spirit to the estimator of @cite_6 , who put a prior on the condition number of the covariance matrix.
{ "cite_N": [ "@cite_0", "@cite_9", "@cite_6" ], "mid": [ "2062125287", "2013122247", "2080610970" ], "abstract": [ "Many applied problems require a covariance matrix estimator that is not only invertible, but also well-conditioned (that is, inverting it does not amplify estimation error). For large-dimensional covariance matrices, the usual estimator--the sample covariance matrix--is typically not well-conditioned and may not even be invertible. This paper introduces an estimator that is both well-conditioned and more accurate than the sample covariance matrix asymptotically. This estimator is distribution-free and has a simple explicit formula that is easy to compute and interpret. It is the asymptotically optimal convex linear combination of the sample covariance matrix with the identity matrix. Optimality is meant with respect to a quadratic loss function, asymptotically as the number of observations and the number of variables go to infinity together. Extensive Monte Carlo confirm that the asymptotic results tend to hold well in finite sample.", "Many statistical applications require an estimate of a covariance matrix and or its inverse. When the matrix dimension is large compared to the sample size, which happens frequently, the sample covariance matrix is known to perform poorly and may suffer from ill-conditioning. There already exists an extensive literature concerning improved estimators in such situations. In the absence of further knowledge about the structure of the true covariance matrix, the most successful approach so far, arguably, has been shrinkage estimation. Shrinking the sample covariance matrix to a multiple of the identity, by taking a weighted average of the two, turns out to be equivalent to linearly shrinking the sample eigenvalues to their grand mean, while retaining the sample eigenvectors. Our paper extends this approach by considering nonlinear transformations of the sample eigenvalues. 
We show how to construct an estimator that is asymptotically equivalent to an oracle estimator suggested in previous work. As demonstrated in extensive Monte Carlo simulations, the resulting bona fide estimator can result in sizeable improvements over the sample covariance matrix and also over linear shrinkage.", "A method for simultaneous modelling of the Cholesky decomposition of several covariance matrices is presented. We highlight the conceptual and computational advantages of the unconstrained parameterization of the Cholesky decomposition and compare the results with those obtained using the classical spectral (eigenvalue) and variance-correlation decompositions. All these methods amount to decomposing complicated covariance matrices into ''dependence'' and ''variance'' components, and then modelling them virtually separately using regression techniques. The entries of the ''dependence'' component of the Cholesky decomposition have the unique advantage of being unconstrained so that further reduction of the dimension of its parameter space is fairly simple. Normal theory maximum likelihood estimates for complete and incomplete data are presented using iterative methods such as the EM (Expectation-Maximization) algorithm and their improvements. These procedures are illustrated using a dataset from a growth hormone longitudinal clinical trial." ] }
1305.3207
2951314997
We give a highly efficient "semi-agnostic" algorithm for learning univariate probability distributions that are well approximated by piecewise polynomial density functions. Let @math be an arbitrary distribution over an interval @math which is @math -close (in total variation distance) to an unknown probability distribution @math that is defined by an unknown partition of @math into @math intervals and @math unknown degree- @math polynomials specifying @math over each of the intervals. We give an algorithm that draws @math samples from @math , runs in time @math , and with high probability outputs a piecewise polynomial hypothesis distribution @math that is @math -close (in total variation distance) to @math . This sample complexity is essentially optimal; we show that even for @math , any algorithm that learns an unknown @math -piecewise degree- @math probability distribution over @math to accuracy @math must use @math samples from the distribution, regardless of its running time. Our algorithm combines tools from approximation theory, uniform convergence, linear programming, and dynamic programming. We apply this general algorithm to obtain a wide range of results for many natural problems in density estimation over both continuous and discrete domains. These include state-of-the-art results for learning mixtures of log-concave distributions; mixtures of @math -modal distributions; mixtures of Monotone Hazard Rate distributions; mixtures of Poisson Binomial Distributions; mixtures of Gaussians; and mixtures of @math -monotone densities. Our general technique yields computationally efficient algorithms for all these problems, in many cases with provably optimal sample complexities (up to logarithmic factors) in all parameters.
We work in a PAC-type model similar to that of @cite_5 and to well-studied statistical frameworks for density estimation. The learning algorithm has access to i.i.d. draws from an unknown probability distribution @math , which is assumed to belong to a (known) class @math of possible target distributions; this assumption is not essential, since our results are semi-agnostic, and we explain the agnostic view of our algorithms later. The algorithm must output a hypothesis distribution @math such that with high probability the total variation distance @math between @math and @math is at most @math . (Recall that the total variation distance between two distributions @math and @math is @math for continuous distributions, and is @math for discrete distributions.) We shall be centrally concerned with obtaining learning algorithms that both use few samples and are computationally efficient.
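For the discrete case, the total variation distance in the definition above is straightforward to compute; a minimal sketch:

```python
def total_variation(p, q):
    """Total variation distance between two discrete distributions given as
    equal-length probability vectors: (1/2) * sum_i |p_i - q_i|."""
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

# distributions with disjoint supports are at distance 1,
# identical distributions at distance 0
```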
{ "cite_N": [ "@cite_5" ], "mid": [ "2095374884" ], "abstract": [ "We introduce and investigate a new model of learning probability distributions from independent draws. Our model is inspired by the popular Probably Approximately Correct (PAC) model for learning boolean functions from labeled examples [24], in the sense that we emphasize efficient and approximate learning, and we study the learnability of restricted classes of target distributions. The dist ribut ion classes we examine are often defined by some simple computational mechanism for transforming a truly random string of input bits (which is not visible to the learning algorithm) into the stochastic observation (output) seen by the learning algorithm. In this paper, we concentrate on discrete distributions over O, I n. The problem of inferring an approximation to an unknown probability distribution on the basis of independent draws has a long and complex history in the pattern recognition and statistics literature. For instance, the problem of estimating the parameters of a Gaussian density in highdimensional space is one of the most studied statistical problems. Distribution learning problems have often been investigated in the context of unsupervised learning, in which a linear mixture of two or more distributions is generating the observations, and the final goal is not to model the distributions themselves, but to predict from which distribution each observation was drawn. Data clustering methods are a common tool here. There is also a large literature on nonpararnetric density estimation, in which no assumptions are made on the unknown target density. Nearest-neighbor approaches to the unsupervised learning problem often arise in the nonparametric setting. While we obviously cannot do justice to these areas here, the books of Duda and Hart [9] and Vapnik [25] provide excellent overviews and introductions to the pattern recognition work, as well as many pointers for further reading. 
See also Izenman’s recent survey article [16]. Roughly speaking, our work departs from the traditional statistical and pattern recognition approaches in two ways. First, we place explicit emphasis on the comput ationrd complexity of distribution learning. It seems fair to say that while previous research has provided an excellent understanding of the information-theoretic issues involved in dis-" ] }
1305.3207
2951314997
We give a highly efficient "semi-agnostic" algorithm for learning univariate probability distributions that are well approximated by piecewise polynomial density functions. Let @math be an arbitrary distribution over an interval @math which is @math -close (in total variation distance) to an unknown probability distribution @math that is defined by an unknown partition of @math into @math intervals and @math unknown degree- @math polynomials specifying @math over each of the intervals. We give an algorithm that draws @math samples from @math , runs in time @math , and with high probability outputs a piecewise polynomial hypothesis distribution @math that is @math -close (in total variation distance) to @math . This sample complexity is essentially optimal; we show that even for @math , any algorithm that learns an unknown @math -piecewise degree- @math probability distribution over @math to accuracy @math must use @math samples from the distribution, regardless of its running time. Our algorithm combines tools from approximation theory, uniform convergence, linear programming, and dynamic programming. We apply this general algorithm to obtain a wide range of results for many natural problems in density estimation over both continuous and discrete domains. These include state-of-the-art results for learning mixtures of log-concave distributions; mixtures of @math -modal distributions; mixtures of Monotone Hazard Rate distributions; mixtures of Poisson Binomial Distributions; mixtures of Gaussians; and mixtures of @math -monotone densities. Our general technique yields computationally efficient algorithms for all these problems, in many cases with provably optimal sample complexities (up to logarithmic factors) in all parameters.
The main result of @cite_15 is an efficient algorithm for learning any @math -mixture of @math -piecewise constant distributions:
{ "cite_N": [ "@cite_15" ], "mid": [ "1763164889" ], "abstract": [ "Let @math be a class of probability distributions over the discrete domain @math We show that if @math satisfies a rather general condition -- essentially, that each distribution in @math can be well-approximated by a variable-width histogram with few bins -- then there is a highly efficient (both in terms of running time and sample complexity) algorithm that can learn any mixture of @math unknown distributions from @math We analyze several natural types of distributions over @math , including log-concave, monotone hazard rate and unimodal distributions, and show that they have the required structural property of being well-approximated by a histogram with few bins. Applying our general algorithm, we obtain near-optimally efficient algorithms for all these mixture learning problems." ] }
1305.3268
2949885909
In Rothvoß it was shown that there exists a 0/1 polytope (a polytope whose vertices are in {0,1}^n) such that any higher-dimensional polytope projecting to it must have 2^{Ω(n)} facets, i.e., its linear extension complexity is exponential. The question whether there exists a 0/1 polytope with high PSD extension complexity was left open. We answer this question in the affirmative by showing that there is a 0/1 polytope such that any spectrahedron projecting to it must be the intersection of a semidefinite cone of dimension 2^{Ω(n)} and an affine space. Our proof relies on a new technique to rescale semidefinite factorizations.
The basis for the study of linear and semidefinite extended formulations is the work of Yannakakis (see @cite_11 and @cite_15 ). The existence of a 0/1 polytope with exponential extension complexity was shown in @cite_13 , which in turn was inspired by @cite_9 . The first explicit example, answering a long-standing open problem of Yannakakis, was provided in @cite_7 which, together with @cite_2 , also laid the foundation for the study of extended formulations over general closed convex cones. In @cite_7 it was also shown that there exist matrices with large nonnegative rank but small semidefinite rank, indicating that semidefinite extended formulations can be exponentially stronger than linear ones, although falling short of giving an explicit proof. They thereby separated the expressive power of linear programs from that of semidefinite programs and raised the question:
{ "cite_N": [ "@cite_7", "@cite_9", "@cite_2", "@cite_15", "@cite_13", "@cite_11" ], "mid": [ "2119368733", "2068422506", "2138677859", "", "2952253187", "87237596" ], "abstract": [ "We solve a 20-year old problem posed by Yannakakis and prove that there exists no polynomial-size linear program (LP) whose associated polytope projects to the traveling salesman polytope, even if the LP is not required to be symmetric. Moreover, we prove that this holds also for the cut polytope and the stable set polytope. These results were discovered through a new connection that we make between one-way quantum communication protocols and semidefinite programming reformulations of LPs.", "THE theory of switching circuits may be divided into two major divisions, analysis and synthesis. The problem of analysis, determining the manner of operation of a given switching circuit, is comparatively simple. The inverse problem of finding a circuit satisfying certain given operating conditions, and in particular the best circuit is, in general, more difficult and more important from the practical standpoint. A basic part of the general synthesis problem is the design of a two-terminal network with given operating characteristics, and we shall consider some aspects of this problem.", "In this paper, we address the basic geometric question of when a given convex set is the image under a linear map of an affine slice of a given closed convex cone. Such a representation or lift of the convex set is especially useful if the cone admits an efficient algorithm for linear optimization over its affine slices. We show that the existence of a lift of a convex set to a cone is equivalent to the existence of a factorization of an operator associated to the set and its polar via elements in the cone and its dual. This generalizes a theorem of Yannakakis that established a connection between polyhedral lifts of a polytope and nonnegative factorizations of its slack matrix. 
Symmetric lifts of convex sets can also be characterized similarly. When the cones live in a family, our results lead to the definition of the rank of a convex set with respect to this family. We present results about this rank in the context of cones of positive semidefinite matrices. Our methods provide new tools for understanding cone lifts of convex sets.", "", "We prove that there are 0/1 polytopes P that do not admit a compact LP formulation. More precisely we show that for every n there is a set X ⊆ {0,1}^n such that conv(X) must have extension complexity at least 2^{n/2 * (1-o(1))}. In other words, every polyhedron Q that can be linearly projected on conv(X) must have exponentially many facets. In fact, the same result also applies if conv(X) is restricted to be a matroid polytope. Conditioning on NP not contained in P/poly, our result rules out the existence of any compact formulation for the TSP polytope, even if the formulation may contain arbitrary real numbers.", "" ] }
1305.3014
2953055137
Online advertising has been introduced as one of the most efficient methods of advertising throughout the recent years. Yet, advertisers are concerned about the efficiency of their online advertising campaigns and consequently, would like to restrict their ad impressions to certain websites and or certain groups of audience. These restrictions, known as targeting criteria, limit the reachability for better performance. This trade-off between reachability and performance illustrates a need for a forecasting system that can quickly predict estimate (with good accuracy) this trade-off. Designing such a system is challenging due to (a) the huge amount of data to process, and, (b) the need for fast and accurate estimates. In this paper, we propose a distributed fault tolerant system that can generate such estimates fast with good accuracy. The main idea is to keep a small representative sample in memory across multiple machines and formulate the forecasting problem as queries against the sample. The key challenge is to find the best strata across the past data, perform multivariate stratified sampling while ensuring fuzzy fall-back to cover the small minorities. Our results show a significant improvement over the uniform and simple stratified sampling strategies which are currently widely used in the industry.
Some authors have proposed probabilistic methods for selectivity estimation @cite_21 . They model the data set as a joint distribution over the variables on which GROUP BY or JOIN queries are run. A naive approach to modeling the joint probability distribution leads to an exponential number of entries, so they propose to exploit the conditional independence found in many real-life data sets. Based on the input query and the conditional independence among the various variables, a Bayesian network is formulated, and answering the query boils down to inference on the Bayesian network. Our approach has some similarity to their work in the sense that we also rely on the conditional independence property to reduce our computation. However, the authors seem to focus more on the model complexity and do not provide much information on the runtime of the model building and inference. We propose to use a state-of-the-art Markov Random Field method that is learned in a distributed environment.
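To illustrate the role of conditional independence, the following toy sketch (not the PRM algorithm of the cited work; the tables and the chain structure A -> B -> C are invented for illustration) estimates the selectivity of a conjunctive predicate from small conditional tables rather than a full joint distribution:

```python
# Assumed chain A -> B -> C, i.e. C is conditionally independent of A given B,
# so P(A, B, C) = P(A) * P(B|A) * P(C|B) and only small tables are stored.
P_A = {0: 0.6, 1: 0.4}
P_B_given_A = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}
P_C_given_B = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.5, 1: 0.5}}

def selectivity(a, c):
    """Estimated selectivity of the predicate (A = a AND C = c):
    P(A=a, C=c) = P(a) * sum_b P(b|a) * P(c|b), marginalizing out B."""
    return P_A[a] * sum(P_B_given_A[a][b] * P_C_given_B[b][c] for b in (0, 1))
```

The full joint over three binary attributes has 8 entries; with more attributes it grows exponentially, while the conditional tables stay small — which is exactly the saving the factorization buys.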
{ "cite_N": [ "@cite_21" ], "mid": [ "2168865746" ], "abstract": [ "Estimating the result size of complex queries that involve selection on multiple attributes and the join of several relations is a difficult but fundamental task in database query processing. It arises in cost-based query optimization, query profiling, and approximate query answering. In this paper, we show how probabilistic graphical models can be effectively used for this task as an accurate and compact approximation of the joint frequency distribution of multiple attributes across multiple relations. Probabilistic Relational Models (PRMs) are a recent development that extends graphical statistical models such as Bayesian Networks to relational domains. They represent the statistical dependencies between attributes within a table, and between attributes across foreign-key joins. We provide an efficient algorithm for constructing a PRM front a database, and show how a PRM can be used to compute selectivity estimates for a broad class of queries. One of the major contributions of this work is a unified framework for the estimation of queries involving both select and foreign-key join operations. Furthermore, our approach is not limited to answering a small set of predetermined queries; a single model can be used to effectively estimate the sizes of a wide collection of potential queries across multiple tables. We present results for our approach on several real-world databases. For both single-table multi-attribute queries and a general class of select-join queries, our approach produces more accurate estimates than standard approaches to selectivity estimation, using comparable space and time." ] }
1305.3011
2952648423
Today, billions of display ad impressions are purchased on a daily basis through a public auction hosted by real time bidding (RTB) exchanges. A decision has to be made for advertisers to submit a bid for each selected RTB ad request in milliseconds. Restricted by the budget, the goal is to buy a set of ad impressions to reach as many targeted users as possible. A desired action (conversion), advertiser specific, includes purchasing a product, filling out a form, signing up for emails, etc. In addition, advertisers typically prefer to spend their budget smoothly over the time in order to reach a wider range of audience accessible throughout a day and have a sustainable impact. However, since the conversions occur rarely and the occurrence feedback is normally delayed, it is very challenging to achieve both budget and performance goals at the same time. In this paper, we present an online approach to the smooth budget delivery while optimizing for the conversion performance. Our algorithm tries to select high quality impressions and adjust the bid price based on the prior performance distribution in an adaptive manner by distributing the budget optimally across time. Our experimental results from real advertising campaigns demonstrate the effectiveness of our proposed approach.
Eq. is typically called online linear programming, and many practical problems, such as online bidding @cite_22 @cite_0 , online keyword matching @cite_10 , online packing @cite_11 , and online resource allocation @cite_15 , can be formulated in a similar form. However, we do not attempt to provide a comprehensive survey of all the related methods, as this has been done in a number of papers @cite_9 @cite_7 . Instead, we summarize a couple of representative methods in the following.
{ "cite_N": [ "@cite_22", "@cite_7", "@cite_9", "@cite_0", "@cite_15", "@cite_10", "@cite_11" ], "mid": [ "2140160814", "2165436315", "2951300037", "2160085520", "2050199369", "2155551092", "2949804691" ], "abstract": [ "Multi-level hierarchical models provide an attractive framework for incorporating correlations induced in a response variable that is organized hierarchically. Model fitting is challenging, especially for a hierarchy with a large number of nodes. We provide a novel algorithm based on a multi-scale Kalman filter that is both scalable and easy to implement. For Gaussian response, we show our method provides the maximum a-posteriori (MAP) parameter estimates; for non-Gaussian response, parameter estimation is performed through a Laplace approximation. However, the Laplace approximation provides biased parameter estimates that is corrected through a parametric bootstrap procedure. We illustrate through simulation studies and analyses of real world data sets in health care and online advertising.", "We present generalized secretary problems as a framework for online auctions. Elements, such as potential employees or customers, arrive one by one online. After observing the value derived from an element, but without knowing the values of future elements, the algorithm has to make an irrevocable decision whether to retain the element as part of a solution, or reject it. The way in which the secretary framework differs from traditional online algorithms is that the elements arrive in uniformly random order. Many natural online auction scenarios can be cast as generalized secretary problems, by imposing natural restrictions on the feasible sets. For many such settings, we present surprisingly strong constant factor guarantees on the expected value of solutions obtained by online algorithms. The framework is also easily augmented to take into account time-discounted revenue and incentive compatibility. 
We give an overview of recent results and future research directions.", "A natural optimization model that formulates many online resource allocation and revenue management problems is the online linear program (LP) in which the constraint matrix is revealed column by column along with the corresponding objective coefficient. In such a model, a decision variable has to be set each time a column is revealed without observing the future inputs and the goal is to maximize the overall objective function. In this paper, we provide a near-optimal algorithm for this general class of online problems under the assumption of random order of arrival and some mild conditions on the size of the LP right-hand-side input. Specifically, our learning-based algorithm works by dynamically updating a threshold price vector at geometric time intervals, where the dual prices learned from the revealed columns in the previous period are used to determine the sequential decisions in the current period. Due to the feature of dynamic learning, the competitiveness of our algorithm improves over the past study of the same problem. We also present a worst-case example showing that the performance of our algorithm is near-optimal.", "This paper is concerned with the joint allocation of bid price and campaign budget in sponsored search. In this application, an advertiser can create a number of campaigns and set a budget for each of them. In a campaign, he she can further create several ad groups with bid keywords and bid prices. Data analysis shows that many advertisers are dealing with a very large number of campaigns, bid keywords, and bid prices at the same time, which poses a great challenge to the optimality of their campaign management. 
As a result, the budgets of some campaigns might be too low to achieve the desired performance goals while those of some other campaigns might be wasted; the bid prices for some keywords may be too low to win competitive auctions while those of some other keywords may be unnecessarily high. In this paper, we propose a novel algorithm to automatically address this issue. In particular, we model the problem as a constrained optimization problem, which maximizes the expected advertiser revenue subject to the constraints of the total budget of the advertiser and the ranges of bid price change. By solving this optimization problem, we can obtain an optimal budget allocation plan as well as an optimal bid price setting. Our simulation results based on the sponsored search log of a commercial search engine have shown that by employing the proposed method, we can effectively improve the performances of the advertisers while at the same time we also see an increase in the revenue of the search engine. In addition, the results indicate that this method is robust to the second-order effects caused by the bid fluctuations from other advertisers.", "Display ads on the Internet are often sold in bundles of thousands or millions of impressions over a particular time period, typically weeks or months. Ad serving systems that assign ads to pages on behalf of publishers must satisfy these contracts, but at the same time try to maximize overall quality of placement. This is usually modeled in the literature as an online allocation problem, where contracts are represented by overall delivery constraints over a finite time horizon. However this model misses an important aspect of ad delivery: time homogeneity. Advertisers who buy these packages expect their ad to be shown smoothly throughout the purchased time period, in order to reach a wider audience, to have a sustained impact, and to support the ads they are running on other media (e.g., television). 
In this paper we formalize this problem using several nested packing constraints, and develop a tight (1-1 e)-competitive online algorithm for this problem. Our algorithms and analysis require novel techniques as they involve online computation of multiple dual variables per ad. We then show the effectiveness of our algorithms through exhaustive simulation studies on real data sets.", "We consider the budget-constrained bidding optimization problem for sponsored search auctions, and model it as an online (multiple-choice) knapsack problem. We design both deterministic and randomized algorithms for the online (multiple-choice) knapsack problems achieving a provably optimal competitive ratio. This translates back to fully automatic bidding strategies maximizing either profit or revenue for the budget-constrained advertiser. Our bidding strategy for revenue maximization is oblivious (i.e., without knowledge) of other bidders' prices and or click-through-rates for those positions. We evaluate our bidding algorithms using both synthetic data and real bidding data gathered manually, and also discuss a sniping heuristic that strictly improves bidding performance. With sniping and parameter tuning enabled, our bidding algorithms can achieve a performance ratio above 90 against the optimum by the omniscient bidder.", "Inspired by online ad allocation, we study online stochastic packing linear programs from theoretical and practical standpoints. We first present a near-optimal online algorithm for a general class of packing linear programs which model various online resource allocation problems including online variants of routing, ad allocations, generalized assignment, and combinatorial auctions. As our main theoretical result, we prove that a simple primal-dual training-based algorithm achieves a (1 - o(1))-approximation guarantee in the random order stochastic model. 
This is a significant improvement over logarithmic or constant-factor approximations for the adversarial variants of the same problems (e.g. factor 1 - 1 e for online ad allocation, and m for online routing). We then focus on the online display ad allocation problem and study the efficiency and fairness of various training-based and online allocation algorithms on data sets collected from real-life display ad allocation system. Our experimental evaluation confirms the effectiveness of training-based primal-dual algorithms on real data sets, and also indicate an intrinsic trade-off between fairness and efficiency." ] }
1305.3011
2952648423
Today, billions of display ad impressions are purchased on a daily basis through a public auction hosted by real time bidding (RTB) exchanges. A decision has to be made for advertisers to submit a bid for each selected RTB ad request in milliseconds. Restricted by the budget, the goal is to buy a set of ad impressions to reach as many targeted users as possible. A desired action (conversion), advertiser specific, includes purchasing a product, filling out a form, signing up for emails, etc. In addition, advertisers typically prefer to spend their budget smoothly over the time in order to reach a wider range of audience accessible throughout a day and have a sustainable impact. However, since the conversions occur rarely and the occurrence feedback is normally delayed, it is very challenging to achieve both budget and performance goals at the same time. In this paper, we present an online approach to the smooth budget delivery while optimizing for the conversion performance. Our algorithm tries to select high quality impressions and adjust the bid price based on the prior performance distribution in an adaptive manner by distributing the budget optimally across time. Our experimental results from real advertising campaigns demonstrate the effectiveness of our proposed approach.
Zhou @cite_10 modeled the budget-constrained bidding optimization problem as an online knapsack problem. They proposed a simple strategy to select high-quality ad requests based on an exponential function of the elapsed fraction of the budget period. As time goes by, the proposed algorithm selects ad requests of higher and higher quality. However, this approach has an underlying assumption of unlimited supply; i.e., that there is an infinite number of ad requests in the RTB environment. This assumption is impractical, especially for campaigns with strict audience-targeting constraints.
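The exponential-threshold idea can be sketched as follows; the notation (L and U as assumed lower and upper bounds on value-per-cost, and the particular form of the threshold) follows the standard online-knapsack analysis and is an assumption here, not a quote of the cited paper:

```python
import math

def threshold(z, L, U):
    """Exponential acceptance threshold as a function of the fraction z of
    budget already spent: Psi(z) = (U*e/L)**z * (L/e).

    At z = 0 it equals L/e (accept nearly any request); as z -> 1 it rises
    to U, so only the highest-quality requests are accepted late on."""
    return (U * math.e / L) ** z * (L / math.e)

def run(requests, budget, L, U):
    """Greedily buy each (value, cost) request whose value density value/cost
    clears the current threshold, while the budget lasts."""
    spent, value = 0.0, 0.0
    for v, c in requests:
        if spent + c <= budget and v / c >= threshold(spent / budget, L, U):
            spent += c
            value += v
    return value, spent
```

The monotone threshold is what makes the strategy budget-aware: cheap acceptances early, increasingly selective as the budget depletes.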
{ "cite_N": [ "@cite_10" ], "mid": [ "2155551092" ], "abstract": [ "We consider the budget-constrained bidding optimization problem for sponsored search auctions, and model it as an online (multiple-choice) knapsack problem. We design both deterministic and randomized algorithms for the online (multiple-choice) knapsack problems achieving a provably optimal competitive ratio. This translates back to fully automatic bidding strategies maximizing either profit or revenue for the budget-constrained advertiser. Our bidding strategy for revenue maximization is oblivious (i.e., without knowledge) of other bidders' prices and or click-through-rates for those positions. We evaluate our bidding algorithms using both synthetic data and real bidding data gathered manually, and also discuss a sniping heuristic that strictly improves bidding performance. With sniping and parameter tuning enabled, our bidding algorithms can achieve a performance ratio above 90 against the optimum by the omniscient bidder." ] }
1305.3011
2952648423
Today, billions of display ad impressions are purchased on a daily basis through a public auction hosted by real time bidding (RTB) exchanges. A decision has to be made for advertisers to submit a bid for each selected RTB ad request in milliseconds. Restricted by the budget, the goal is to buy a set of ad impressions to reach as many targeted users as possible. A desired action (conversion), advertiser specific, includes purchasing a product, filling out a form, signing up for emails, etc. In addition, advertisers typically prefer to spend their budget smoothly over the time in order to reach a wider range of audience accessible throughout a day and have a sustainable impact. However, since the conversions occur rarely and the occurrence feedback is normally delayed, it is very challenging to achieve both budget and performance goals at the same time. In this paper, we present an online approach to the smooth budget delivery while optimizing for the conversion performance. Our algorithm tries to select high quality impressions and adjust the bid price based on the prior performance distribution in an adaptive manner by distributing the budget optimally across time. Our experimental results from real advertising campaigns demonstrate the effectiveness of our proposed approach.
Babaioff @cite_21 formulated the problem of dynamically setting a bidding price in the multi-armed bandit framework, and applied the upper-confidence-bound strategy to explore the optimal price of online transactions. This approach does not require any information about the prior distribution. However, the multi-armed bandit framework typically needs quick feedback from the environment in order to update the utility function. Unfortunately, the collection of bidding and performance information suffers a longer delay for display advertising in the RTB environment.
{ "cite_N": [ "@cite_21" ], "mid": [ "2111957769" ], "abstract": [ "We consider the problem of designing revenue-maximizing online posted-price mechanisms when the seller has limited supply. A seller has k identical items for sale and is facing n potential buyers (“agents”) that are arriving sequentially. Each agent is interested in buying one item. Each agent’s value for an item is an independent sample from some fixed (but unknown) distribution with support [0,1]. The seller offers a take-it-or-leave-it price to each arriving agent (possibly different for different agents), and aims to maximize his expected revenue. We focus on mechanisms that do not use any information about the distribution; such mechanisms are called detail-free (or prior-independent). They are desirable because knowing the distribution is unrealistic in many practical scenarios. We study how the revenue of such mechanisms compares to the revenue of the optimal offline mechanism that knows the distribution (“offline benchmark”). We present a detail-free online posted-price mechanism whose revenue is at most O((k log n)2 3) less than the offline benchmark, for every distribution that is regular. In fact, this guarantee holds without any assumptions if the benchmark is relaxed to fixed-price mechanisms. Further, we prove a matching lower bound. The performance guarantee for the same mechanism can be improved to O(√k log n), with a distribution-dependent constant, if the ratio k n is sufficiently small. We show that, in the worst case over all demand distributions, this is essentially the best rate that can be obtained with a distribution-specific constant. On a technical level, we exploit the connection to multiarmed bandits (MAB). While dynamic pricing with unlimited supply can easily be seen as an MAB problem, the intuition behind MAB approaches breaks when applied to the setting with limited supply. 
Our high-level conceptual contribution is that even the limited supply setting can be fruitfully treated as a bandit problem." ] }
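The bandit view of posted pricing can be approximated by a small sketch: each candidate price is an arm, UCB1 chooses which price to post, and the reward is the price when the buyer accepts. This ignores the limited-supply constraint of the cited mechanism, and the price grid is illustrative.

```python
import math

def ucb_posted_prices(values, price_grid):
    """Detail-free posted pricing: treat each candidate price as a bandit arm
    and pick prices by UCB1; reward is the posted price iff the buyer's value
    meets it (the buyer accepts), else zero."""
    n_arms = len(price_grid)
    counts = [0] * n_arms
    rewards = [0.0] * n_arms
    revenue = 0.0
    for t, v in enumerate(values, start=1):
        if 0 in counts:                      # play each arm once first
            arm = counts.index(0)
        else:                                # UCB1 index: mean + exploration bonus
            arm = max(range(n_arms),
                      key=lambda a: rewards[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        price = price_grid[arm]
        r = price if v >= price else 0.0
        counts[arm] += 1
        rewards[arm] += r
        revenue += r
    return revenue
```

With buyers who all value the item at 0.6, the mechanism quickly concentrates on the highest acceptable price in the grid (0.5) while occasionally re-exploring the others.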
1305.3011
2952648423
Today, billions of display ad impressions are purchased on a daily basis through a public auction hosted by real time bidding (RTB) exchanges. A decision has to be made for advertisers to submit a bid for each selected RTB ad request in milliseconds. Restricted by the budget, the goal is to buy a set of ad impressions to reach as many targeted users as possible. A desired action (conversion), advertiser specific, includes purchasing a product, filling out a form, signing up for emails, etc. In addition, advertisers typically prefer to spend their budget smoothly over the time in order to reach a wider range of audience accessible throughout a day and have a sustainable impact. However, since the conversions occur rarely and the occurrence feedback is normally delayed, it is very challenging to achieve both budget and performance goals at the same time. In this paper, we present an online approach to the smooth budget delivery while optimizing for the conversion performance. Our algorithm tries to select high quality impressions and adjust the bid price based on the prior performance distribution in an adaptive manner by distributing the budget optimally across time. Our experimental results from real advertising campaigns demonstrate the effectiveness of our proposed approach.
Agrawal @cite_9 proposed a general online linear programming algorithm that solves many practical online problems. First, a standard linear programming solver is applied to compute the optimal dual solution for the data seen so far. Then, the decision for a new instance is made by checking whether the dual solution still satisfies the constraints with the new instance included. The problem is that the true value @math and cost @math of an incoming ad request are unknown when it arrives. If @math and @math are estimated by statistical models or other alternative means, the dual solution needs to be re-computed frequently for each campaign in order to impose budget constraints accurately, which introduces high computational cost in a real-time bidding system.
{ "cite_N": [ "@cite_9" ], "mid": [ "2951300037" ], "abstract": [ "A natural optimization model that formulates many online resource allocation and revenue management problems is the online linear program (LP) in which the constraint matrix is revealed column by column along with the corresponding objective coefficient. In such a model, a decision variable has to be set each time a column is revealed without observing the future inputs and the goal is to maximize the overall objective function. In this paper, we provide a near-optimal algorithm for this general class of online problems under the assumption of random order of arrival and some mild conditions on the size of the LP right-hand-side input. Specifically, our learning-based algorithm works by dynamically updating a threshold price vector at geometric time intervals, where the dual prices learned from the revealed columns in the previous period are used to determine the sequential decisions in the current period. Due to the feature of dynamic learning, the competitiveness of our algorithm improves over the past study of the same problem. We also present a worst-case example showing that the performance of our algorithm is near-optimal." ] }
1305.2254
2952386310
In many probabilistic first-order representation systems, inference is performed by "grounding"---i.e., mapping it to a propositional representation, and then performing propositional inference. With a large database of facts, groundings can be very large, making inference and learning computationally expensive. Here we present a first-order probabilistic language which is well-suited to approximate "local" grounding: every query @math can be approximately grounded with a small graph. The language is an extension of stochastic logic programs where inference is performed by a variant of personalized PageRank. Experimentally, we show that the approach performs well without weight learning on an entity resolution task; that supervised weight-learning improves accuracy; and that grounding time is independent of DB size. We also show that order-of-magnitude speedups are possible by parallelizing learning.
Although we have chosen here to compare experimentally to MLNs @cite_6 @cite_18 , ProPPR represents a rather different philosophy toward language design: rather than beginning with a highly expressive but intractable logical core, we begin with a limited logical inference scheme and add to it a minimal set of extensions that allow probabilistic reasoning, while maintaining stable, efficient inference and learning. While ProPPR is less expressive than MLNs (for instance, it is limited to definite clause theories), it is also much more efficient. This philosophy is similar to that of probabilistic similarity logic (PSL) @cite_3 ; however, unlike ProPPR, PSL does not include a "local" grounding procedure, which is what leads to small inference problems even for large databases.
{ "cite_N": [ "@cite_18", "@cite_3", "@cite_6" ], "mid": [ "2171472464", "", "1977970897" ], "abstract": [ "Entity resolution is the problem of determining which records in a database refer to the same entities, and is a crucial and expensive step in the data mining process. Interest in it has grown rapidly in recent years, and many approaches have been proposed. However, they tend to address only isolated aspects of the problem, and are often ad hoc. This paper proposes a well-founded, integrated solution to the entity resolution problem based on Markov logic. Markov logic combines first-order logic and probabilistic graphical models by attaching weights to first-order formulas, and viewing them as templates for features of Markov networks. We show how a number of previous approaches can be formulated and seamlessly combined in Markov logic, and how the resulting learning and inference problems can be solved efficiently. Experiments on two citation databases show the utility of this approach, and evaluate the contribution of the different components.", "", "We propose a simple approach to combining first-order logic and probabilistic graphical models in a single representation. A Markov logic network (MLN) is a first-order knowledge base with a weight attached to each formula (or clause). Together with a set of constants representing objects in the domain, it specifies a ground Markov network containing one feature for each possible grounding of a first-order formula in the KB, with the corresponding weight. Inference in MLNs is performed by MCMC over the minimal subset of the ground network required for answering the query. Weights are efficiently learned from relational databases by iteratively optimizing a pseudo-likelihood measure. Optionally, additional clauses are learned using inductive logic programming techniques. Experiments with a real-world database and knowledge base in a university domain illustrate the promise of this approach." ] }
1305.2254
2952386310
In many probabilistic first-order representation systems, inference is performed by "grounding"---i.e., mapping it to a propositional representation, and then performing propositional inference. With a large database of facts, groundings can be very large, making inference and learning computationally expensive. Here we present a first-order probabilistic language which is well-suited to approximate "local" grounding: every query @math can be approximately grounded with a small graph. The language is an extension of stochastic logic programs where inference is performed by a variant of personalized PageRank. Experimentally, we show that the approach performs well without weight learning on an entity resolution task; that supervised weight-learning improves accuracy; and that grounding time is independent of DB size. We also show that order-of-magnitude speedups are possible by parallelizing learning.
Technically, ProPPR is most similar to stochastic logic programs (SLPs) @cite_5 . The key innovation is the integration of a restart into the random-walk process, which, as we have seen, leads to very different computational properties.
{ "cite_N": [ "@cite_5" ], "mid": [ "1530235063" ], "abstract": [ "Stochastic logic programs (SLPs) are logic programs with parameterised clauses which define a log-linear distribution over refutations of goals. The log-linear distribution provides, by marginalisation, a distribution over variable bindings, allowing SLPs to compactly represent quite complex distributions. We analyse the fundamental statistical properties of SLPs addressing issues concerning infinite derivations, ‘unnormalised’ SLPs and impure SLPs. After detailing existing approaches to parameter estimation for log-linear models and their application to SLPs, we present a new algorithm called failure-adjusted maximisation (FAM). FAM is an instance of the EM algorithm that applies specifically to normalised SLPs and provides a closed-form for computing parameter updates within an iterative maximisation approach. We empirically show that FAM works on some small examples and discuss methods for applying it to bigger problems." ] }
1305.2254
2952386310
In many probabilistic first-order representation systems, inference is performed by "grounding"---i.e., mapping it to a propositional representation, and then performing propositional inference. With a large database of facts, groundings can be very large, making inference and learning computationally expensive. Here we present a first-order probabilistic language which is well-suited to approximate "local" grounding: every query @math can be approximately grounded with a small graph. The language is an extension of stochastic logic programs where inference is performed by a variant of personalized PageRank. Experimentally, we show that the approach performs well without weight learning on an entity resolution task; that supervised weight-learning improves accuracy; and that grounding time is independent of DB size. We also show that order-of-magnitude speedups are possible by parallelizing learning.
ProPPR is also closely related to the Path Ranking Algorithm (PRA), a learning algorithm for link prediction @cite_17 . Like ProPPR, PRA uses random-walk methods to approximate logical inference. However, the set of "inference rules" learned by PRA corresponds roughly to a logic program of a particular restricted form, whereas ProPPR allows much more general logic programs. Unlike PRA, however, we do not consider the task of searching for new logic program clauses.
{ "cite_N": [ "@cite_17" ], "mid": [ "2029249040" ], "abstract": [ "Scientific literature with rich metadata can be represented as a labeled directed graph. This graph representation enables a number of scientific tasks such as ad hoc retrieval or named entity recognition (NER) to be formulated as typed proximity queries in the graph. One popular proximity measure is called Random Walk with Restart (RWR), and much work has been done on the supervised learning of RWR measures by associating each edge label with a parameter. In this paper, we describe a novel learnable proximity measure which instead uses one weight per edge label sequence: proximity is defined by a weighted combination of simple \"path experts\", each corresponding to following a particular sequence of labeled edges. Experiments on eight tasks in two subdomains of biology show that the new learning method significantly outperforms the RWR model (both trained and untrained). We also extend the method to support two additional types of experts to model intrinsic properties of entities: query-independent experts, which generalize the PageRank measure, and popular entity experts which allow rankings to be adjusted for particular entities that are especially important." ] }
1305.2190
1737318476
We present PIE, a scalable routing scheme that achieves 100% packet delivery and low path stretch. It is easy to implement in a distributed fashion and works well when costs are associated to links. Scalability is achieved by using virtual coordinates in a space of concise dimensionality, which enables greedy routing based only on local knowledge. PIE is a general routing scheme, meaning that it works on any graph. We focus however on the Internet, where routing scalability is an urgent concern. We show analytically and by using simulation that the scheme scales extremely well on Internet-like graphs. In addition, its geometric nature allows it to react efficiently to topological changes or failures by finding new paths in the network at no cost, yielding better delivery ratios than standard algorithms. The proposed routing scheme needs an amount of memory polylogarithmic in the size of the network and requires only local communication between the nodes. Although each node constructs its coordinates and routes packets locally, the path stretch remains extremely low, even lower than for centralized or less scalable state-of-the-art algorithms: PIE always finds short paths and often enough finds the shortest paths.
Several solutions have been proposed to guarantee the success of geographic routing when local minima are present; see for instance @cite_21 . These methods apply greedy routing by default and use a recovery mechanism when the packet is trapped in a local minimum. Such deterministic recovery mechanisms only guarantee routing success when the dimensionality of the underlying space is no more than two @cite_42 . In addition, backtracking out of local minima significantly inflates path lengths and induces high congestion @cite_32 .
{ "cite_N": [ "@cite_21", "@cite_42", "@cite_32" ], "mid": [ "2085463780", "", "2110348561" ], "abstract": [ "All too often a seemingly insurmountable divide between theory and practice can be witnessed. In this paper we try to contribute to narrowing this gap in the field of ad-hoc routing. In particular we consider two aspects: We propose a new geometric routing algorithm which is outstandingly efficient on practical average-case networks, however is also in theory asymptotically worst-case optimal. On the other hand we are able to drop the formerly necessary assumption that the distance between network nodes may not fall below a constant value, an assumption that cannot be maintained for practical networks. Abandoning this assumption we identify from a theoretical point of view two fundamentamentally different classes of cost metrics for routing in ad-hoc networks.", "", "Geographic forwarding has been widely studied as a routing strategy for large wireless networks, mainly due to the low complexity of the routing algorithm, scalability of the routing information with network size and fast convergence times of routes. On a planar network with no holes, Gupta and Kumar (2000) have shown that a uniform traffic demand of ominus(1 radicn log n) is achievable. However, in a network with routing holes (regions on the plane which do not have active nodes), geographic routing schemes such as GPSR or GOAFR could cause the throughput capacity to significantly drop due to concentration of traffic on the face of the holes. Similarly, geographic schemes could fail to support non-uniform traffic patterns due to spatial congestion (traffic concentration) caused by greedy \"straight-line\" routing. In this paper, we first propose a randomized geographic routing scheme that can achieve a throughput capacity of ominus(1 radicn) (within a poly-logarithmic factor) even in networks with routing holes. 
Thus, we show that our scheme is throughput optimal (up to a poly-logarithmic factor) while preserving the inherent advantages of geographic routing. We also show that the routing delay incurred by our scheme is within a poly-logarithmic factor of the optimal throughput-delay trade-off curve. Next, we construct a geographic forwarding based routing scheme that can support wide variations in the traffic requirements (as much as ominus(1) rates for some nodes, while supporting ominus(1 radicn) for others). We finally show that the above two schemes can be combined to support non-uniform traffic demands in networks with holes." ] }
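The path inflation caused by recovery shows up even in a tiny sketch: greedy forwarding plus DFS-style backtracking always delivers on a connected graph, but a single dead end that looks close to the destination already lengthens the route. Node names and positions below are illustrative.

```python
def route_with_recovery(adj, dist, src, dst):
    """Greedy forwarding with DFS-style recovery: when greedy is stuck,
    backtrack and try other neighbours. Guarantees delivery on connected
    graphs, at the cost of longer paths (the backtracked hops stay in `path`)."""
    stack, visited, path = [src], {src}, [src]
    while stack:
        cur = stack[-1]
        if cur == dst:
            return path
        # prefer the unvisited neighbour closest to the destination
        cands = sorted((v for v in adj[cur] if v not in visited),
                       key=lambda v: dist(v, dst))
        if cands:
            nxt = cands[0]
            visited.add(nxt)
            stack.append(nxt)
            path.append(nxt)
        else:
            stack.pop()              # backtrack out of the local minimum
            if stack:
                path.append(stack[-1])
    return None                      # destination unreachable
```

On a line where a dead-end node sits closest to the target, greedy detours into it first, so the delivered path revisits the source before reaching the destination.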
1305.2190
1737318476
We present PIE, a scalable routing scheme that achieves 100% packet delivery and low path stretch. It is easy to implement in a distributed fashion and works well when costs are associated to links. Scalability is achieved by using virtual coordinates in a space of concise dimensionality, which enables greedy routing based only on local knowledge. PIE is a general routing scheme, meaning that it works on any graph. We focus however on the Internet, where routing scalability is an urgent concern. We show analytically and by using simulation that the scheme scales extremely well on Internet-like graphs. In addition, its geometric nature allows it to react efficiently to topological changes or failures by finding new paths in the network at no cost, yielding better delivery ratios than standard algorithms. The proposed routing scheme needs an amount of memory polylogarithmic in the size of the network and requires only local communication between the nodes. Although each node constructs its coordinates and routes packets locally, the path stretch remains extremely low, even lower than for centralized or less scalable state-of-the-art algorithms: PIE always finds short paths and often enough finds the shortest paths.
For some categories of graphs, it is possible to perform the embedding in a two dimensional Euclidean space. Indeed, @cite_7 famously conjectured that such a space could embed any planar triangulation, and @cite_16 confirms the conjecture. However, @math bits are required to differentiate the points in the coordinate space.
{ "cite_N": [ "@cite_16", "@cite_7" ], "mid": [ "2020668600", "2169273227" ], "abstract": [ "Geographic routing is a family of routing algorithms that uses geographic point locations as addresses for the purposes of routing. Such routing algorithms have proven to be both simple to implement and heuristically effective when applied to wireless sensor networks. Greedy routing is a natural abstraction of this model in which nodes are assigned virtual coordinates in a metric space, and these coordinates are used to perform point-to-point routing. Here we resolve a conjecture of Papadimitriou and Ratajczak that every 3-connected planar graph admits a greedy embedding into the Euclidean plane. This immediately implies that all 3-connected graphs that exclude K3.3 as a minor admit a greedy embedding into the Euclidean plane. Additionally, we provide the first non-trivial examples of graphs that admit no such embedding. These structural results provide efficiently verifiable certificates that a graph admits a greedy embedding or that a graph admits no greedy embedding into the Euclidean plane.", "We conjecture that any planar 3-connected graph can be embedded in the plane in such a way that for any nodes s and t, there is a path from s to t such that the Euclidean distance to t decreases monotonically along the path. A consequence of this conjecture would be that in any ad hoc network containing such a graph as a spanning subgraph, two-dimensional virtual coordinates for the nodes can be found for which the method of purely greedy geographic routing is guaranteed to work. We discuss this conjecture and its equivalent forms show that its hypothesis is as weak as possible, and show a result delimiting the applicability of our approach: any 3-connected K3,3-free graph has a planar 3-connected spanning subgraph. We also present two alternative versions of greedy routing on virtual coordinates that provably work. 
Using Steinitz's theorem we show that any 3-connected planar graph can be embedded in three dimensions so that greedy routing works, albeit with a modified notion of distance; we present experimental evidence that this scheme can be implemented effectively in practice. We also present a simple but provably robust version of greedy routing that works for any graph with a 3-connected planar spanning subgraph." ] }
1305.2190
1737318476
We present PIE, a scalable routing scheme that achieves 100% packet delivery and low path stretch. It is easy to implement in a distributed fashion and works well when costs are associated to links. Scalability is achieved by using virtual coordinates in a space of concise dimensionality, which enables greedy routing based only on local knowledge. PIE is a general routing scheme, meaning that it works on any graph. We focus however on the Internet, where routing scalability is an urgent concern. We show analytically and by using simulation that the scheme scales extremely well on Internet-like graphs. In addition, its geometric nature allows it to react efficiently to topological changes or failures by finding new paths in the network at no cost, yielding better delivery ratios than standard algorithms. The proposed routing scheme needs an amount of memory polylogarithmic in the size of the network and requires only local communication between the nodes. Although each node constructs its coordinates and routes packets locally, the path stretch remains extremely low, even lower than for centralized or less scalable state-of-the-art algorithms: PIE always finds short paths and often enough finds the shortest paths.
Kleinberg @cite_2 and @cite_43 consider two-dimensional hyperbolic spaces, and @cite_2 demonstrates how to greedily embed any tree. However, here again the schemes result in coordinates of size @math bits, and do not produce a significant gain in scalability. Very recently, @cite_30 observed that a uniform distribution of nodes on a hyperbolic plane produces scale-free (Internet-like) graphs, and that the corresponding coordinates in the hyperbolic plane have desirable properties for greedy routing in these graphs. The reverse procedure has been used in @cite_20 to find the hyperbolic coordinates of the Internet ASs that fit the actual AS topology as well as possible. Although this work gives precious insights into the relations between scale-free graphs and the hyperbolic space, it yields an embedding that is not greedy and does not provide 100% delivery. PIE, in contrast, does not try to fit the coordinates to a predetermined space, but lets the embedding space be determined by the topology, using only local communications between the nodes.
{ "cite_N": [ "@cite_30", "@cite_43", "@cite_20", "@cite_2" ], "mid": [ "2963380201", "2118347867", "1980940295", "2152948207" ], "abstract": [ "We show that complex (scale-free) network topologies naturally emerge from hyperbolic metric spaces. Hyperbolic geometry facilitates maximally efficient greedy forwarding in these networks. Greedy forwarding is topology-oblivious. Nevertheless, greedy packets find their destinations with 100 probability following almost optimal shortest paths. This remarkable efficiency sustains even in highly dynamic networks. Our findings suggest that forwarding information through complex networks, such as the Internet, is possible without the overhead of existing routing protocols, and may also find practical applications in overlay networks for tasks such as application-level routing, information sharing, and data distribution.", "We propose an embedding and routing scheme for arbitrary network connectivity graphs, based on greedy routing and utilizing virtual node coordinates. In dynamic multihop packet-switching communication networks, routing elements can join or leave during network operation or exhibit intermittent failures. We present an algorithm for online greedy graph embedding in the hyperbolic plane that enables incremental embedding of network nodes as they join the network, without disturbing the global embedding. Even a single link or node removal may invalidate the greedy routing success guarantees in network embeddings based on an embedded spanning tree subgraph. As an alternative to frequent reembedding of temporally dynamic network graphs in order to retain the greedy embedding property, we propose a simple but robust generalization of greedy distance routing called Gravity-Pressure (GP) routing. Our routing method always succeeds in finding a route to the destination provided that a path exists, even if a significant fraction of links or nodes is removed subsequent to the embedding. 
GP routing does not require precomputation or maintenance of special spanning subgraphs and, as demonstrated by our numerical evaluation, is particularly suitable for operation in tandem with our proposed algorithm for online graph embedding.", "Routing packets on the growing and changing underlying structure of the Internet is challenging and currently based only on local connectivity. Here, a global Internet map is devised: with a greedy forwarding algorithm, it is robust with respect to network growth, and allows speeds close to the theoretical best.", "We propose a scalable and reliable point-to-point routing algorithm for ad hoc wireless networks and sensor-nets. Our algorithm assigns to each node of the network a virtual coordinate in the hyperbolic plane, and performs greedy geographic routing with respect to these virtual coordinates. Unlike other proposed greedy routing algorithms based on virtual coordinates, our embedding guarantees that the greedy algorithm is always successful in finding a route to the destination, if such a route exists. We describe a distributed algorithm for computing each node's virtual coordinates in the hyperbolic plane, and for greedily routing packets to a destination point in the hyperbolic plane. (This destination may be the address of another node of the network, or it may be an address associated to a piece of content in a Distributed Hash Table. In the latter case we prove that the greedy routing strategy makes a consistent choice of the node responsible for the address, irrespective of the source address of the request.) We evaluate the resulting algorithm in terms of both path stretch and node congestion." ] }
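Greedy forwarding over hyperbolic coordinates, as in the works above, reduces to computing hyperbolic distances and always forwarding to the neighbour closest to the destination. The native-polar distance formula is standard; the three-node example is illustrative and the routine simply reports failure at a local minimum (no GP-style recovery).

```python
import math

def hyp_dist(a, b):
    """Distance in the native polar model of the hyperbolic plane; points are
    (r, theta). Uses the hyperbolic law of cosines:
    cosh d = cosh r1 cosh r2 - sinh r1 sinh r2 cos(dtheta)."""
    (r1, t1), (r2, t2) = a, b
    dtheta = math.pi - abs(math.pi - abs(t1 - t2))     # angle in [0, pi]
    x = (math.cosh(r1) * math.cosh(r2)
         - math.sinh(r1) * math.sinh(r2) * math.cos(dtheta))
    return math.acosh(max(1.0, x))                     # guard tiny fp error

def greedy_route(coords, adj, src, dst):
    """Forward to the neighbour closest to dst; stop at dst, or report a
    local minimum when no neighbour strictly improves the distance."""
    path, cur = [src], src
    while cur != dst:
        nxt = min(adj[cur], key=lambda v: hyp_dist(coords[v], coords[dst]))
        if hyp_dist(coords[nxt], coords[dst]) >= hyp_dist(coords[cur], coords[dst]):
            return path, False                         # stuck in a local minimum
        path.append(nxt)
        cur = nxt
    return path, True
```

On a star with the hub at the origin, routing between two leaves passes through the hub, as the embedding intends.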
1305.2190
1737318476
We present PIE, a scalable routing scheme that achieves 100% packet delivery and low path stretch. It is easy to implement in a distributed fashion and works well when costs are associated to links. Scalability is achieved by using virtual coordinates in a space of concise dimensionality, which enables greedy routing based only on local knowledge. PIE is a general routing scheme, meaning that it works on any graph. We focus however on the Internet, where routing scalability is an urgent concern. We show analytically and by using simulation that the scheme scales extremely well on Internet-like graphs. In addition, its geometric nature allows it to react efficiently to topological changes or failures by finding new paths in the network at no cost, yielding better delivery ratios than standard algorithms. The proposed routing scheme needs an amount of memory polylogarithmic in the size of the network and requires only local communication between the nodes. Although each node constructs its coordinates and routes packets locally, the path stretch remains extremely low, even lower than for centralized or less scalable state-of-the-art algorithms: PIE always finds short paths and often enough finds the shortest paths.
@cite_10 constructs a fully distributed practical embedding by projecting an @math -dimensional graph topology onto a @math -dimensional Euclidean space using the Johnson-Lindenstrauss lemma. Despite attempting to preserve the relative distances between points, this method is not greedy and introduces some distortion in the embedded topology, which creates local minima. It therefore requires a recovery mechanism that significantly increases the path stretch.
{ "cite_N": [ "@cite_10" ], "mid": [ "2144253712" ], "abstract": [ "We investigate the construction of greedy embeddings in polylogarithmic dimensional Euclidian spaces in order to achieve scalable routing through geographic routing. We propose a practical algorithm which uses random projection to achieve greedy forwarding on a space of dimension O(log(n)) where nodes have coordinates of size O(log(n)), thus achieving greedy forwarding using a route table at each node of polylogarithmic size with respect to the number of nodes. We further improve this algorithm by using a quasi-greedy algorithm which ensures greedy forwarding works along a path-wise construction, allowing us to further reduce the dimension of the embedding. The proposed algorithm, denoted GLoVE-U, is fully distributed and practical to implement. We evaluate the performance using extensive simulations and show that our greedy forwarding algorithm delivers low path stretch and scales properly." ] }
1305.2190
1737318476
We present PIE, a scalable routing scheme that achieves 100% packet delivery and low path stretch. It is easy to implement in a distributed fashion and works well when costs are associated to links. Scalability is achieved by using virtual coordinates in a space of concise dimensionality, which enables greedy routing based only on local knowledge. PIE is a general routing scheme, meaning that it works on any graph. We focus however on the Internet, where routing scalability is an urgent concern. We show analytically and by using simulation that the scheme scales extremely well on Internet-like graphs. In addition, its geometric nature allows it to react efficiently to topological changes or failures by finding new paths in the network at no cost, yielding better delivery ratios than standard algorithms. The proposed routing scheme needs an amount of memory polylogarithmic in the size of the network and requires only local communication between the nodes. Although each node constructs its coordinates and routes packets locally, the path stretch remains extremely low, even lower than for centralized or less scalable state-of-the-art algorithms: PIE always finds short paths and often enough finds the shortest paths.
@cite_5 and @cite_24 achieve a bounded stretch of @math with @math coordinates for planar graphs @cite_5 and combinatorial unit disk graphs @cite_24 . For arbitrary graphs, the scheme of @cite_24 also provides a stretch of @math . However, these algorithms require full, centralized knowledge of the topology as input.
{ "cite_N": [ "@cite_24", "@cite_5" ], "mid": [ "2158969254", "2125082389" ], "abstract": [ "Greedy routing is a novel routing paradigm where messages are always forwarded to the neighbor that is closest to the destination. Our main result is a polynomial-time algorithm that embeds combinatorial unit disk graphs (CUDGs - a CUDG is a UDG without any geometric information) into O(log 2 n)- dimensional space, permitting greedy routing with constant stretch. To the best of our knowledge, this is the first greedy embedding with stretch guarantees for this class of networks. Our main technical contribution involves extracting, in polynomial time, a constant number of isometric and balanced tree separators from a given CUDG. We do this by extending the celebrated Lipton-Tarjan separator theorem for planar graphs to CUDGs. Our techniques extend to other classes of graphs; for example, for general graphs, we obtain an O(log n)-stretch greedy embedding into O(log 2 n)-dimensional space. The greedy embeddings constructed by our algorithm can also be viewed as a constant-stretch compact routing scheme in which each node is assigned an O(log 3 n)-bit label. To the best of our knowledge, this result yields the best known stretch-space trade-off for compact routing on CUDGs. Extensive simulations on random wireless networks indicate that the average routing overhead is about 10%; only few routes have a stretch above 1.5.", "A new packet routing model proposed by the Internet Engineering Task Force is MultiProtocol Label Switching, or MPLS [B. Davie and Y. Rekhter, MPLS: Technology and Applications, Morgan Kaufmann (Elsevier), New York, 2000]. Instead of each router's parsing the packet network layer header and doing its lookups based on that analysis (as in much of conventional packet routing), MPLS ensures that the analysis of the header is performed just once. The packet is then assigned a stack of labels, where the labels are usually much smaller than the packet headers themselves. When a router receives a packet, it examines the label at the top of the label stack and makes the decision of where the packet is forwarded based solely on that label. It can pop the top label off the stack if it so desires, and can also push some new labels onto the stack, before forwarding the packet. This scheme has several advantages over conventional routing protocols, the two primary ones being (a) reduced amount of header analysis at intermediate routers, which allows for faster switching times, and (b) better traffic engineering capabilities and hence easier handling of quality of service issues. However, essentially nothing is known at a theoretical level about the performance one can achieve with this protocol, or about the intrinsic trade-offs in its use of resources. This paper initiates a theoretical study of MPLS protocols, and routing algorithms and lower bounds are given for a variety of situations. We first study the routing problem on the line, a case which is already nontrivial, and give routing protocols whose trade-offs are close to optimality. We then extend our results for paths to trees, and thence onto more general graphs. These routing algorithms on general graphs are obtained by finding a tree cover of a graph, i.e., a small family of subtrees of the graph such that, for each pair of vertices, one of the trees in the family contains an (almost-)shortest path between them. Our results show tree covers of logarithmic size for planar graphs and graphs with bounded separators, which may be of independent interest." ] }
1305.2190
1737318476
We present PIE, a scalable routing scheme that achieves 100% packet delivery and low path stretch. It is easy to implement in a distributed fashion and works well when costs are associated to links. Scalability is achieved by using virtual coordinates in a space of concise dimensionality, which enables greedy routing based only on local knowledge. PIE is a general routing scheme, meaning that it works on any graph. We focus however on the Internet, where routing scalability is an urgent concern. We show analytically and by using simulation that the scheme scales extremely well on Internet-like graphs. In addition, its geometric nature allows it to react efficiently to topological changes or failures by finding new paths in the network at no cost, yielding better delivery ratios than standard algorithms. The proposed routing scheme needs an amount of memory polylogarithmic in the size of the network and requires only local communication between the nodes. Although each node constructs its coordinates and routes packets locally, the path stretch remains extremely low, even lower than for centralized or less scalable state-of-the-art algorithms: PIE always finds short paths and often enough finds the shortest paths.
@cite_4 adapts the compact routing scheme of Thorup and Zwick to power-law graphs and obtains better scalability for the routing state, although the state still grows as a fractional power of @math .
{ "cite_N": [ "@cite_4" ], "mid": [ "1527483699" ], "abstract": [ "We adapt the compact routing scheme by Thorup and Zwick to optimize it for power-law graphs. We analyze our adapted routing scheme based on the theory of unweighted random power-law graphs with fixed expected degree sequence by Aiello, Chung, and Lu. Our result is the first theoretical bound coupled to the parameter of the power-law graph model for a compact routing scheme. In particular, we prove that, for stretch 3, instead of routing tables with O(n1 2) bits as in the general scheme by Thorup and Zwick, expected sizes of O(nγ log n) bits are sufficient, and that all the routing tables can be constructed at once in expected time O(n1 + γ log n), with γ = τ - 2 2τ - 3 + Ɛ, where τ ∈ (2, 3) is the power-law exponent and Ɛ > 0. Both bounds also hold with probability at least 1 - 1 n (independent of Ɛ). The routing scheme is a labeled scheme, requiring a stretch-5 handshaking step and using addresses and message headers with O(log n log log n) bits, with probability at least 1 - o(1). We further demonstrate the effectiveness of our scheme by simulations on real-world graphs as well as synthetic power-law graphs. With the same techniques as for the compact routing scheme, we also adapt the approximate distance oracle by Thorup and Zwick for stretch 3 and obtain a new upper bound of expected O(n1+γ) for space and preprocessing." ] }
1305.2190
1737318476
We present PIE, a scalable routing scheme that achieves 100% packet delivery and low path stretch. It is easy to implement in a distributed fashion and works well when costs are associated to links. Scalability is achieved by using virtual coordinates in a space of concise dimensionality, which enables greedy routing based only on local knowledge. PIE is a general routing scheme, meaning that it works on any graph. We focus however on the Internet, where routing scalability is an urgent concern. We show analytically and by using simulation that the scheme scales extremely well on Internet-like graphs. In addition, its geometric nature allows it to react efficiently to topological changes or failures by finding new paths in the network at no cost, yielding better delivery ratios than standard algorithms. The proposed routing scheme needs an amount of memory polylogarithmic in the size of the network and requires only local communication between the nodes. Although each node constructs its coordinates and routes packets locally, the path stretch remains extremely low, even lower than for centralized or less scalable state-of-the-art algorithms: PIE always finds short paths and often enough finds the shortest paths.
Distributed Hash Tables (DHTs) have been used to improve the scalability of routing as well (for instance, VRR @cite_13 ). However, such DHTs map to source routes that require @math bits to be stored on many topologies, and @math in the worst case. @cite_31 and references therein use Delaunay triangulations to enable greedy forwarding with bounded stretch. However, unlike our work, they assume that the nodes exist in a Euclidean space. We assume nodes in an arbitrary connectivity graph. In particular, it has been shown that Euclidean spaces are not well suited to represent Internet nodes @cite_25 .
{ "cite_N": [ "@cite_31", "@cite_13", "@cite_25" ], "mid": [ "2155837321", "2133828884", "2134684361" ], "abstract": [ "Large scale decentralized communication systems have motivated a new trend towards online routing where routing decisions are performed based on a limited and localized knowledge of the network. Geometrical greedy routing has been among the simplest and most common online routing schemes. While a geometrical online routing scheme is expected to deliver each packet to the point in the network that is closest to the destination, geometrical greedy routing, when applied over generalized substrate graphs, does not guarantee such delivery as its forwarding decision might deliver packets to a localized minimum instead. This letter investigates the necessary and sufficient conditions of greedy supporting graphs that would guarantee such delivery when used as a greedy routing substrate.", "This paper presents Virtual Ring Routing (VRR), a new network routing protocol that occupies a unique point in the design space. VRR is inspired by overlay routing algorithms in Distributed Hash Tables (DHTs) but it does not rely on an underlying network routing protocol. It is implemented directly on top of the link layer. VRR provides both traditional point-to-point network routing and DHT routing to the node responsible for a hash table key. VRR can be used with any link layer technology but this paper describes a design and several implementations of VRR that are tuned for wireless networks. We evaluate the performance of VRR using simulations and measurements from a sensor network and an 802.11a testbed. The experimental results show that VRR provides robust performance across a wide range of environments and workloads. It performs comparably to, or better than, the best wireless routing protocol in each experiment. VRR performs well because of its unique features: it does not require network flooding or translation between fixed identifiers and location-dependent addresses.", "In this paper, we investigate the suitability of embedding Internet hosts into a Euclidean space given their pairwise distances (as measured by round-trip time). Using the classical scaling and matrix perturbation theories, we first establish the (sum of the) magnitude of negative eigenvalues of the (doubly centered, squared) distance matrix as a measure of suitability of Euclidean embedding. We then show that the distance matrix among Internet hosts contains negative eigenvalues of large magnitude, implying that embedding the Internet hosts in a Euclidean space would incur relatively large errors. Motivated by earlier studies, we demonstrate that the inaccuracy of Euclidean embedding is caused by a large degree of triangle inequality violation (TIV) in the Internet distances, which leads to negative eigenvalues of large magnitude. Moreover, we show that the TIVs are likely to occur locally; hence the distances among these close-by hosts cannot be estimated accurately using a global Euclidean embedding. In addition, increasing the dimension of embedding does not reduce the embedding errors. Based on these insights, we propose a new hybrid model for embedding the network nodes using only a two-dimensional Euclidean coordinate system and small error adjustment terms. We show that the accuracy of the proposed embedding technique is as good as, if not better than, that of a seven-dimensional Euclidean embedding." ] }
1305.2319
2117481577
Environmental science is often fragmented: data is collected using mismatched formats and conventions, and models are misaligned and run in isolation. Cloud computing offers a lot of potential in the way of resolving such issues by supporting data from different sources and at various scales, by facilitating the integration of models to create more sophisticated software services, and by providing a sustainable source of suitable computational and storage resources. In this paper, we highlight some of our experiences in building the Environmental Virtual Observatory pilot (EVOp), a tailored cloud-based infrastructure and associated web-based tools designed to enable users from different backgrounds to access data concerning different environmental issues. We review our architecture design, the current deployment and prototypes. We also reflect on lessons learned. We believe that such experiences are of benefit to other scientific communities looking to assemble virtual observatories or similar virtual research environments.
Efforts for designing domain-driven solutions include the following two architectural proposals. @cite_9 presents a use case of similar architectural elements and hybrid infrastructure deployment, but does not use RESTful services. @cite_12 defines a generic high-level framework for assembling virtual research environments.
{ "cite_N": [ "@cite_9", "@cite_12" ], "mid": [ "77689030", "2145480576" ], "abstract": [ "Cloud Computing is one of the latest hypes in the mainstream IT world. In this context, Spatial Data Infrastructures (SDIs) have not been considered yet. This paper reviews this novel technology and identifies the paradigm behind it with regard to SDIs. Concepts of SDIs are analyzed in respect to common gaps which can be solved by Cloud Computing technologies. A real world use case will be presented, which benefits largely from Cloud Computing as a proof-of-concept demonstration. This use case shows that SDI components can be integrated into the cloud as value-added services. Thereby SDI components are shifted from a Software as a Service cloud layer to the Platform as a Service cloud layer, which can be regarded as a future direction for SDIs to enable geospatial cloud interoperability.", "Virtual collaboration is an important aspect for the success of scientific projects, especially if participating researchers are distributed over the whole globe. In the recent past some systems -- so called virtual research environments -- were presented to support collaborative work restricted to certain research domains. Within this article a concept of a generic framework for building personal, cloud-based virtual research environments easily is proposed. Such an environment could be defined by composing arbitrary services, appropriate to the requirements of a particular scientist. Due to low funds in some scientific areas, we also provide a flexible billing strategy using the cloud specific pay-per-use model. Thus, each service has just to be paid as long as it is utilized." ] }
1305.0674
2950721324
We show how to compress string dictionaries using the Lempel-Ziv (LZ78) data compression algorithm. Our approach is validated experimentally on dictionaries of up to 1.5 GB of uncompressed text. We achieve compression ratios often outperforming the existing alternatives, especially on dictionaries containing many repeated substrings. Our query times remain competitive.
Research on compressed string dictionaries is more recent, and we are aware of only a few works tackling this problem. The first is the compressed permuterm index of @cite_5 that builds on the Burrows-Wheeler transformation. It supports a rich set of operations for IR tasks, but if restricted to our simple access lookup functionality its space is not competitive. @cite_2 evaluate the practical performance of techniques like Huffman coding, hashing, front coding, grammar-based compression, and full-text indexing. In brief, they find that (a) front coding with Hu-Tucker character compression and (b) Re-Pair-based indices provide the best time space trade-offs. The most recent work is due to Grossi and Ottaviano @cite_10 and builds on previous ideas of the first author @cite_7 . It augments the basic trie idea with path decompositions, and is shown to often perform better than @cite_2 . All approaches employ carefully engineered implementations to achieve good practical performance.
{ "cite_N": [ "@cite_5", "@cite_10", "@cite_7", "@cite_2" ], "mid": [ "2050635028", "2951419104", "2001450952", "2949432833" ], "abstract": [ "The Permuterm index [Garfield 1976] is a time-efficient and elegant solution to the string dictionary problem in which pattern queries may possibly include one wild-card symbol (called Tolerant Retrieval problem). Unfortunately the Permuterm index is space inefficient because it quadruples the dictionary size. In this article we propose the Compressed Permuterm Index which solves the Tolerant Retrieval problem in time proportional to the length of the searched pattern, and space close to the kth order empirical entropy of the indexed dictionary. We also design a dynamic version of this index that allows to efficiently manage insertion in, and deletion from, the dictionary of individual strings. The result is based on a simple variant of the Burrows-Wheeler Transform, defined on a dictionary of strings of variable length, that allows to efficiently solve the Tolerant Retrieval problem via known (dynamic) compressed indexes [Navarro and Makinen 2007]. We will complement our theoretical study with a significant set of experiments that show that the Compressed Permuterm Index supports fast queries within a space occupancy that is close to the one achievable by compressing the string dictionary via gzip or bzip. This improves known approaches based on Front-Coding [1999] by more than 50% in absolute space occupancy, still guaranteeing comparable query time.", "Tries are popular data structures for storing a set of strings, where common prefixes are represented by common root-to-node paths. Over fifty years of usage have produced many variants and implementations to overcome some of their limitations. We explore new succinct representations of path-decomposed tries and experimentally evaluate the corresponding reduction in space usage and memory latency, comparing with the state of the art. We study two cases of applications: (1) a compressed dictionary for (compressed) strings, and (2) a monotone minimal perfect hash for strings that preserves their lexicographic order. For (1), we obtain data structures that outperform other state-of-the-art compressed dictionaries in space efficiency, while obtaining predictable query times that are competitive with data structures preferred by the practitioners. In (2), our tries perform several times faster than other trie-based monotone perfect hash functions, while occupying nearly the same space.", "Current data structures for searching large string collections either fail to achieve minimum space or cause too many cache misses. In this paper we discuss some edge linearizations of the classic trie data structure that are simultaneously cache-friendly and compressed. We provide new insights on front coding [24], introduce other novel linearizations, and study how close their space occupancy is to the information-theoretic minimum. The moral is that they are not just heuristics. Our second contribution is a novel dictionary encoding scheme that builds upon such linearizations and achieves nearly optimal space, offers competitive I/O-search time, and is also conscious of the query distribution. Finally, we combine those data structures with cache-oblivious tries [2, 5] and obtain a succinct variant whose space is close to the information-theoretic minimum.", "The problem of storing a set of strings --- a string dictionary --- in compact form appears naturally in many cases. While classically it has represented a small part of the whole data to be processed (e.g., for Natural Language processing or for indexing text collections), more recent applications in Web engines, Web mining, RDF graphs, Internet routing, Bioinformatics, and many others, make use of very large string dictionaries, whose size is a significant fraction of the whole data. Thus novel approaches to compress them efficiently are necessary. In this paper we experimentally compare time and space performance of some existing alternatives, as well as new ones we propose. We show that space reductions of up to 20% of the original size of the strings is possible while supporting fast dictionary searches." ] }
1305.1319
1540841088
We consider the unsupervised alignment of the full text of a book with a human-written summary. This presents challenges not seen in other text alignment problems, including a disparity in length and, consequent to this, a violation of the expectation that individual words and phrases should align, since large passages and chapters can be distilled into a single summary phrase. We present two new methods, based on hidden Markov models, specifically targeted to this problem, and demonstrate gains on an extractive book summarization task. While there is still much room for improvement, unsupervised alignment holds intrinsic value in offering insight into what features of a book are deemed worthy of summarization.
This work builds on a long history of unsupervised word and phrase alignment originating in the machine translation literature, both for the task of learning alignments across parallel text @cite_6 @cite_5 @cite_13 @cite_10 and between monolingual @cite_9 and comparable corpora @cite_17 . For the related task of document abstract alignment, we draw on work in document summarization @cite_2 @cite_18 @cite_16 . Past approaches to fictional summarization, including both short stories @cite_0 and books @cite_12 , have tended toward non-discriminative methods; one notable exception is Ceylan , which applies the Viterbi alignment method of Jing and McKeown to a set of 31 literary novels.
{ "cite_N": [ "@cite_18", "@cite_10", "@cite_9", "@cite_6", "@cite_0", "@cite_2", "@cite_5", "@cite_16", "@cite_13", "@cite_12", "@cite_17" ], "mid": [ "1977593346", "2115526192", "174630521", "2097333193", "2159637096", "1996333538", "2038698865", "2170869176", "2156985047", "1517524280", "2061235289" ], "abstract": [ "A maximum entropy classifier can be used to extract sentences from documents. Experiments using technical documents show that such a classifier tends to treat features in a categorical manner. This results in performance that is worse than when extracting sentences using a naive Bayes classifier. Addition of an optimised prior to the maximum entropy classifier improves performance over and above that of naive Bayes (even when naive Bayes is also extended with a similar prior). Further experiments show that, should we have at our disposal extremely informative features, then maximum entropy is able to yield excellent results. Naive Bayes, in contrast, cannot exploit these features and so fundamentally limits sentence extraction performance.", "We describe the first tractable Gibbs sampling procedure for estimating phrase pair frequencies under a probabilistic model of phrase alignment. We propose and evaluate two nonparametric priors that successfully avoid the degenerate behavior noted in previous work, where overly large phrases memorize the training data. Phrase table weights learned under our model yield an increase in BLEU score over the word-alignment based heuristic estimates used regularly in phrase-based translation systems.", "We apply statistical machine translation (SMT) tools to generate novel paraphrases of input sentences in the same language. The system is trained on large volumes of sentence pairs automatically extracted from clustered news articles available on the World Wide Web. Alignment Error Rate (AER) is measured to gauge the quality of the resulting corpus. A monotone phrasal decoder generates contextual replacements. Human evaluation shows that this system outperforms baseline paraphrase generation techniques and, in a departure from previous work, offers better coverage and scalability than the current best-of-breed paraphrasing approaches.", "In this paper, we present a statistical approach to machine translation. We describe the application of our approach to translation from French to English and give preliminary results.", "We present an approach to the automatic creation of extractive summaries of literary short stories. The summaries are produced with a specific objective in mind: to help a reader decide whether she would be interested in reading the complete story. To this end, the summaries give the user relevant information about the setting of the story without revealing its plot. The system relies on assorted surface indicators about clauses in the short story, the most important of which are those related to the aspectual type of a clause and to the main entities in a story. Fifteen judges evaluated the summaries on a number of extrinsic and intrinsic measures. The outcome of this evaluation suggests that the summaries are helpful in achieving the original objective.", "", "In this paper, we describe a new model for word alignment in statistical translation and present experimental results. The idea of the model is to make the alignment probabilities dependent on the differences in the alignment positions rather than on the absolute positions. To achieve this goal, the approach uses a first-order Hidden Markov model (HMM) for the word alignment problem as they are used successfully in speech recognition for the time alignment problem. The difference to the time alignment HMM is that there is no monotony constraint for the possible word orderings. We describe the details of the model and test the model on several bilingual corpora.", "Current research in automatic single-document summarization is dominated by two effective, yet naive approaches: summarization by sentence extraction and headline generation via bag-of-words models. While successful in some tasks, neither of these models is able to adequately capture the large set of linguistic devices utilized by humans when they produce summaries. One possible explanation for the widespread use of these models is that good techniques have been developed to extract appropriate training data for them from existing document abstract and document headline corpora. We believe that future progress in automatic summarization will be driven both by the development of more sophisticated, linguistically informed models, as well as a more effective leveraging of document abstract corpora. In order to open the doors to simultaneously achieving both of these goals, we have developed techniques for automatically producing word-to-word and phrase-to-phrase alignments between documents and their human-written abstracts. These alignments make explicit the correspondences that exist in such document abstract pairs and create a potentially rich data source from which complex summarization algorithms may learn. This paper describes experiments we have carried out to analyze the ability of humans to perform such alignments, and based on these analyses, we describe experiments for creating them automatically. Our model for the alignment task is based on an extension of the standard hidden Markov model and learns to create alignments in a completely unsupervised fashion. We describe our model in detail and present experimental results that show that our model is able to learn to reliably identify word- and phrase-level alignments in a corpus of document, abstract pairs.", "We present and compare various methods for computing word alignments using statistical or heuristic models. We consider the five alignment models presented in Brown, Della Pietra, Della Pietra, and Mercer (1993), the hidden Markov alignment model, smoothing techniques, and refinements. These statistical models are compared with two heuristic models based on the Dice coefficient. We present different methods for combining word alignments to perform a symmetrization of directed statistical alignment models. As evaluation criterion, we use the quality of the resulting Viterbi alignment compared to a manually produced reference alignment. We evaluate the models on the German-English Verbmobil task and the French-English Hansards task. We perform a detailed analysis of various design decisions of our statistical alignment system and evaluate these on training corpora of various sizes. An important result is that refined alignment models with a first-order dependence and a fertility model yield significantly better results than simple heuristic models. In the Appendix, we present an efficient training algorithm for the alignment models presented.", "Most of the text summarization research carried out to date has been concerned with the summarization of short documents (e.g., news stories, technical reports), and very little work if any has been done on the summarization of very long documents. In this paper, we try to address this gap and explore the problem of book summarization. We introduce a new data set specifically designed for the evaluation of systems for book summarization, and describe summarization techniques that explicitly account for the length of the documents.", "We address the problem of sentence alignment for monolingual corpora, a phenomenon distinct from alignment in parallel corpora. Aligning large comparable corpora automatically would provide a valuable resource for learning of text-to-text rewriting rules. We incorporate context into the search for an optimal alignment in two complementary ways: learning rules for matching paragraphs using topic structure and further refining the matching through local alignment to find good sentence pairs. Evaluation shows that our alignment method outperforms state-of-the-art systems developed for the same task." ] }
1305.0203
1915747843
The Nyström method is routinely used for out-of-sample extension of kernel matrices. We describe how this method can be applied to find the singular value decomposition (SVD) of general matrices and the eigenvalue decomposition (EVD) of square matrices. We take as an input a matrix @math , a user defined integer @math and @math , a matrix sampled from the columns and rows of @math . These are used to construct an approximate rank- @math SVD of @math in @math operations. If @math is square, the rank- @math EVD can be similarly constructed in @math operations. Thus, the matrix @math is a compressed version of @math . We discuss the choice of @math and propose an algorithm that selects a good initial sample for a pivoted version of @math . The proposed algorithm performs well for general matrices and kernel matrices whose spectra exhibit fast decay.
In @cite_12 , the k-means clustering algorithm is used for selecting the sub-sample. The k-means cluster centers are shown to minimize an error criterion related to the Nyström approximation error. Finally, Incomplete Cholesky Decomposition (ICD) ( @cite_31 ) employs the pivoted Cholesky algorithm and uses a greedy stopping criterion to determine the required sample size for a given approximation accuracy.
{ "cite_N": [ "@cite_31", "@cite_12" ], "mid": [ "2137557016", "1967934524" ], "abstract": [ "SVM training is a convex optimization problem which scales with the training set size rather than the feature space dimension. While this is usually considered to be a desired quality, in large scale problems it may cause training to be impractical. The common techniques to handle this difficulty basically build a solution by solving a sequence of small scale subproblems. Our current effort is concentrated on the rank of the kernel matrix as a source for further enhancement of the training procedure. We first show that for a low rank kernel matrix it is possible to design a better interior point method (IPM) in terms of storage requirements as well as computational complexity. We then suggest an efficient use of a known factorization technique to approximate a given kernel matrix by a low rank matrix, which in turn will be used to feed the optimizer. Finally, we derive an upper bound on the change in the objective function value based on the approximation error and the number of active constraints (support vectors). This bound is general in the sense that it holds regardless of the approximation method.", "Low-rank matrix approximation is an effective tool in alleviating the memory and computational burdens of kernel methods and sampling, as the mainstream of such algorithms, has drawn considerable attention in both theory and practice. This paper presents detailed studies on the Nystrom sampling scheme and in particular, an error analysis that directly relates the Nystrom approximation quality with the encoding powers of the landmark points in summarizing the data. The resultant error bound suggests a simple and efficient sampling scheme, the k-means clustering algorithm, for Nystrom low-rank approximation. We compare it with state-of-the-art approaches that range from greedy schemes to probabilistic sampling. Our algorithm achieves significant performance gains in a number of supervised/unsupervised learning tasks including kernel PCA and least squares SVM." ] }
1305.0203
1915747843
The Nystr " o m method is routinely used for out-of-sample extension of kernel matrices. We describe how this method can be applied to find the singular value decomposition (SVD) of general matrices and the eigenvalue decomposition (EVD) of square matrices. We take as an input a matrix @math , a user defined integer @math and @math , a matrix sampled from the columns and rows of @math . These are used to construct an approximate rank- @math SVD of @math in @math operations. If @math is square, the rank- @math EVD can be similarly constructed in @math operations. Thus, the matrix @math is a compressed version of @math . We discuss the choice of @math and propose an algorithm that selects a good initial sample for a pivoted version of @math . The proposed algorithm performs well for general matrices and kernel matrices whose spectra exhibit fast decay.
The Cholesky decomposition of a matrix factors it into @math , where @math is an upper triangular matrix. Initially, @math . The ICD algorithm applies the Cholesky decomposition to @math while symmetrically pivoting the columns and rows of @math according to a greedy criterion. The algorithm has an outer loop that scans the columns of @math according to a pivoting order. The results for each column determine the next column to scan. This loop is terminated early after @math columns have been scanned by using a heuristic on the trace of the residual @math . This algorithm ( @cite_31 ) approximates @math . This is equivalent to a Nyström approximation where the initial sample is taken as the intersection of the pivoted columns and rows.
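The greedy pivoted procedure described above can be sketched in NumPy. This is a minimal illustration, not the implementation of @cite_31 : the largest-residual-diagonal pivot rule and the trace threshold `tol` are assumed choices for the greedy criterion and the stopping heuristic. A small rank-deficient example is included so the early stopping is visible.

```python
import numpy as np

def pivoted_ichol(K, tol=1e-10, max_rank=None):
    """Greedy pivoted incomplete Cholesky: returns G with K ~= G.T @ G.
    The outer loop stops once the trace of the residual falls below tol."""
    n = K.shape[0]
    max_rank = n if max_rank is None else max_rank
    d = np.diag(K).astype(float).copy()     # diagonal of the residual
    G = np.zeros((max_rank, n))
    pivots = []
    for k in range(max_rank):
        if d.sum() <= tol:                  # heuristic on the residual trace
            return G[:k], pivots
        j = int(np.argmax(d))               # greedy pivot: largest residual entry
        pivots.append(j)
        G[k] = (K[j] - G[:k, j] @ G[:k]) / np.sqrt(d[j])
        d = np.maximum(d - G[k] ** 2, 0.0)  # update residual diagonal
    return G, pivots

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 3))
K = A @ A.T                                 # rank-3 PSD "kernel" matrix
G, pivots = pivoted_ichol(K)
```

Truncating after m pivots yields the same rank-m factor as a Nyström approximation built on the m pivot points, which is the equivalence noted above.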
{ "cite_N": [ "@cite_31" ], "mid": [ "2137557016" ], "abstract": [ "SVM training is a convex optimization problem which scales with the training set size rather than the feature space dimension. While this is usually considered to be a desired quality, in large scale problems it may cause training to be impractical. The common techniques to handle this difficulty basically build a solution by solving a sequence of small scale subproblems. Our current effort is concentrated on the rank of the kernel matrix as a source for further enhancement of the training procedure. We first show that for a low rank kernel matrix it is possible to design a better interior point method (IPM) in terms of storage requirements as well as computational complexity. We then suggest an efficient use of a known factorization technique to approximate a given kernel matrix by a low rank matrix, which in turn will be used to feed the optimizer. Finally, we derive an upper bound on the change in the objective function value based on the approximation error and the number of active constraints (support vectors). This bound is general in the sense that it holds regardless of the approximation method." ] }
1304.8109
2952706605
We present a solution to the problem of privacy invasion in a multiparty digital rights management scheme. (Roaming) users buy content licenses from a content provider and execute it at any nearby content distributor. Our approach, which does not need any trusted third party--in contrast to most related work on privacy-preserving DRM--is based on a re-encryption scheme that runs on any mobile Android device. Only a minor security-critical part needs to be performed on the device's smartcard which could, for instance, be a SIM card.
Proxy re-encryption allows a proxy to transform an encrypted message under @math 's public key into another encrypted message under @math 's public key---without seeing the message in plain text. For this, a re-encryption key @math is used. The proxy does not need the private key of @math to decrypt the message and encrypt it again under @math 's public key. @cite_14 introduce several proxy re-encryption schemes---one of them is covered in the preliminaries.
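The mechanics can be demonstrated with a toy BBS-style ElGamal instantiation (the construction that @cite_14 builds on). The parameters below are tiny and deliberately insecure; they only show how the proxy turns a ciphertext for one party into one for another via the re-encryption key rk = b/a (mod group order), without ever seeing the plaintext.

```python
# Toy BBS-style proxy re-encryption over Z_p^* (insecure demo parameters).
p, g = 1019, 2                       # small prime group; multiplicative order p - 1 = 1018
a, b = 5, 7                          # Alice's and Bob's private keys
m, r = 123, 11                       # message and encryption randomness

# Encrypt m under Alice's public key g^a: ciphertext (m * g^r, g^{a*r})
c1 = m * pow(g, r, p) % p
c2 = pow(pow(g, a, p), r, p)

# Re-encryption key rk = b * a^{-1} mod (p - 1); the proxy raises c2 to rk,
# turning g^{a*r} into g^{b*r} without learning m.
rk = b * pow(a, -1, p - 1) % (p - 1)
c2_bob = pow(c2, rk, p)

# Bob decrypts: recover g^r = c2_bob^{1/b}, then m = c1 / g^r
gr = pow(c2_bob, pow(b, -1, p - 1), p)
recovered = c1 * pow(gr, -1, p) % p
```

Note that the proxy only ever handles `c2` and `rk`; the blinded message component `c1` passes through untouched.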
{ "cite_N": [ "@cite_14" ], "mid": [ "2114428623" ], "abstract": [ "In 1998, Blaze, Bleumer, and Strauss (BBS) proposed an application called atomic proxy re-encryption, in which a semitrusted proxy converts a ciphertext for Alice into a ciphertext for Bob without seeing the underlying plaintext. We predict that fast and secure re-encryption will become increasingly popular as a method for managing encrypted file systems. Although efficiently computable, the wide-spread adoption of BBS re-encryption has been hindered by considerable security risks. Following recent work of Dodis and Ivan, we present new re-encryption schemes that realize a stronger notion of security and demonstrate the usefulness of proxy re-encryption as a method of adding access control to a secure file system. Performance measurements of our experimental file system demonstrate that proxy re-encryption can work effectively in practice." ] }
1304.8109
2952706605
We present a solution to the problem of privacy invasion in a multiparty digital rights management scheme. (Roaming) users buy content licenses from a content provider and execute it at any nearby content distributor. Our approach, which does not need any trusted third party--in contrast to most related work on privacy-preserving DRM--is based on a re-encryption scheme that runs on any mobile Android device. Only a minor security-critical part needs to be performed on the device's smartcard which could, for instance, be a SIM card.
The anonymous payment scheme used in this paper has been introduced by @cite_9 . Analyses of this method in @cite_17 have proven it to be better suited for our application than the frequently cited scheme by @cite_8 , because the chosen system is more flexible. Another reason for choosing this protocol is that the point-of-sale (POS) devices used provide another layer of anonymity for the user, since these devices serve as a proxy between the user and the payee. In contrast to the basic version @cite_9 , however, we provide extensions such as the payment of change in case the user does not have the right amount of money in his/her wallet. The anonymous payment scheme does not allow any party to learn which content has been purchased as long as the user makes legitimate payments. If the user tries to defraud some party, however, his/her identity can be unveiled so that he/she can be held accountable.
{ "cite_N": [ "@cite_9", "@cite_17", "@cite_8" ], "mid": [ "", "2039877753", "2031618446" ], "abstract": [ "", "We present a privacy-friendly architecture for a future cloud computing scenario where software licensing and software payment plays a major role. We show how digital rights management as a technical solution for software licensing can be achieved in a privacy-friendly manner. In our scenario, users who buy software from software providers and execute it at computing centres stay anonymous. At the same time, our approach guarantees that software licences are bound to users and that their validity is checked before execution. Thus, digital rights management constitutes an incentive for software providers to take part in such a future cloud computing scenario. We employ a software re-encryption scheme so that computing centres are not able to build profiles of their users - not even under a pseudonym. We make sure that malicious users are unable to relay software to others.", "The large-scale automated transaction systems of the near future can be designed to protect the privacy and maintain the security of both individuals and organizations." ] }
1304.8109
2952706605
We present a solution to the problem of privacy invasion in a multiparty digital rights management scheme. (Roaming) users buy content licenses from a content provider and execute it at any nearby content distributor. Our approach, which does not need any trusted third party--in contrast to most related work on privacy-preserving DRM--is based on a re-encryption scheme that runs on any mobile Android device. Only a minor security-critical part needs to be performed on the device's smartcard which could, for instance, be a SIM card.
In @cite_15 , the authors propose a scenario where a content owner provides its (encrypted) content to users via a number of different (local) content distributors---which is similar to our scenario. Employing this scheme, users can buy licenses for content from a license server, which acts as a trusted third party. Once a license is bought, the user comes into possession of the decryption key, which allows him/her to access the content as often as desired. Differentiated license models are not intended in their approach---however, if license enforcement additionally took place on the client side, differentiated license models could be implemented. As content download and license buying are done anonymously, none of the parties can build profiles of users' interest in content.
{ "cite_N": [ "@cite_15" ], "mid": [ "2028381593" ], "abstract": [ "Traditional Digital Rights Management (DRM) systems are one level distributor system which involve single distributor. However, for a flexible and scalable content distribution mechanism, it is necessary to accommodate multiple distributors in DRM model so that different strategies can be implemented in diverse geographical areas. We develop a multiparty multilevel DRM model using facility location and design a prototype DRM system that provides transparent and flexible content distribution mechanism while maintaining the users' privacy along with accountability in the system." ] }
1304.8109
2952706605
We present a solution to the problem of privacy invasion in a multiparty digital rights management scheme. (Roaming) users buy content licenses from a content provider and execute it at any nearby content distributor. Our approach, which does not need any trusted third party--in contrast to most related work on privacy-preserving DRM--is based on a re-encryption scheme that runs on any mobile Android device. Only a minor security-critical part needs to be performed on the device's smartcard which could, for instance, be a SIM card.
@cite_4 present a privacy-preserving DRM scheme for two- and multiparty scenarios without needing a TTP. A user anonymously requests a token set from the content owner that allows anonymous purchase of content licenses from content providers. A drawback is that content providers are able to build usage profiles of content executions under a pseudonym. @cite_5 present a DRM scenario that allows users to anonymously buy software from any software provider and execute it at any computing center within the cloud. The users' permission to execute the software is checked before every single execution. Their solution is resistant against profile building. The authors suggest employing a software re-encryption scheme that is based on secret sharing and homomorphic encryption to achieve unlinkability of software executions towards the computing center. Their software re-encryption scheme is rather complex and implies a huge communication overhead. The approach is extended in @cite_18 by employing an adapted version of proxy re-encryption @cite_14 . The scheme makes explicit use of a service provider as a TTP.
{ "cite_N": [ "@cite_5", "@cite_18", "@cite_14", "@cite_4" ], "mid": [ "2166076876", "", "2114428623", "2147063946" ], "abstract": [ "We come up with a digital rights management (DRM) concept for cloud computing and show how license management for software within the cloud can be achieved in a privacy-friendly manner. In our scenario, users who buy software from software providers stay anonymous. At the same time, our approach guarantees that software licenses are bound to users and their validity is checked before execution. We employ a software re-encryption scheme so that computing centers which execute users' software are not able to build user profiles -- not even under pseudonym -- of their users. We combine secret sharing and homomorphic encryption. We make sure that malicious users are unable to relay software to others. DRM constitutes an incentive for software providers to take partin a future cloud computing scenario. We make this scenario more attractive for users by preserving their privacy.", "", "In 1998, Blaze, Bleumer, and Strauss (BBS) proposed an application called atomic proxy re-encryption, in which a semitrusted proxy converts a ciphertext for Alice into a ciphertext for Bob without seeing the underlying plaintext. We predict that fast and secure re-encryption will become increasingly popular as a method for managing encrypted file systems. Although efficiently computable, the wide-spread adoption of BBS re-encryption has been hindered by considerable security risks. Following recent work of Dodis and Ivan, we present new re-encryption schemes that realize a stronger notion of security and demonstrate the usefulness of proxy re-encryption as a method of adding access control to a secure file system. 
Performance measurements of our experimental file system demonstrate that proxy re-encryption can work effectively in practice.", "A content distribution mechanism for DRM needs to satisfy the security and accountability requirements while respecting the privacy of the parties involved. However, achieving privacy along with accountability in the same framework is not easy as the requirement for achieving these attributes are conflicting each other. Most of the current content distribution mechanisms rely on trusted third parties to achieve privacy along with these attributes. In this paper, we propose a privacy preserving content distribution mechanism without requiring trust over any third party by using the mechanisms of blind decryption and one way hash chain. We prove that our scheme is not prone to the ‘oracle problem’ of the blind decryption mechanism. Our mechanism supports revocation of even malicious users without violating their privacy." ] }
1304.8109
2952706605
We present a solution to the problem of privacy invasion in a multiparty digital rights management scheme. (Roaming) users buy content licenses from a content provider and execute it at any nearby content distributor. Our approach, which does not need any trusted third party--in contrast to most related work on privacy-preserving DRM--is based on a re-encryption scheme that runs on any mobile Android device. Only a minor security-critical part needs to be performed on the device's smartcard which could, for instance, be a SIM card.
The approach towards privacy-preserving DRM by @cite_2 also requires a TTP for license checking before content execution. It makes use of a number of cryptographic primitives such as proxy re-encryption, ring signatures and an anonymous recipient scheme to provide unlinkability of content executions. The scheme's advantage is the reduced computation and communication overhead compared to the approaches above.
{ "cite_N": [ "@cite_2" ], "mid": [ "2034884491" ], "abstract": [ "We propose a privacy-preserving digital rights management scheme for (future) cloud computing. Users buy software from software providers and execute it at computing centers. Our solution allows software providers to provide different license models, like execute at most n-times models. Users' anonymity and unlinkability of actions are preserved and thus, profile building is not even possible under (a) pseudonym. Privacy protection in the honest-but-curious model is achieved by combining ring signatures with an anonymous recipient scheme. We employ secret sharing in a unique manner that allows the software provider to expose the user's identity if the user commits fraud, e.g. by exceeding the execution limit n." ] }
1304.7992
2009282793
Systemic approaches to the study of a biological cell or tissue rely increasingly on the use of context-specific metabolic network models. The reconstruction of such a model from high-throughput data can routinely involve large numbers of tests under different conditions and extensive parameter tuning, which calls for fast algorithms. We present fastcore, a generic algorithm for reconstructing context-specific metabolic network models from global genome-wide metabolic network models such as Recon X. fastcore takes as input a core set of reactions that are known to be active in the context of interest (e.g., cell or tissue), and it searches for a flux consistent subnetwork of the global network that contains all reactions from the core set and a minimal set of additional reactions. Our key observation is that a minimal consistent reconstruction can be defined via a set of sparse modes of the global network, and fastcore iteratively computes such a set via a series of linear programs. Experiments on liver data demonstrate speedups of several orders of magnitude, and significantly more compact reconstructions, over a rival method. Given its simplicity and its excellent performance, fastcore can form the backbone of many future metabolic network reconstruction algorithms.
Several algorithms have been published in recent years for extracting condition-specific models from generic genome-wide models like Recon 1. Among them, mCADRE @cite_22 , INIT @cite_40 , iMAT @cite_11 , MBA @cite_6 and GIMME @cite_24 are the most commonly used (see Table 3 for an overview). Here we provide a short outline of the different algorithms, and refer to @cite_7 for a more extensive overview. For GIMME, iMAT, and MBA, we briefly discuss some notable differences to fastcore.
{ "cite_N": [ "@cite_22", "@cite_7", "@cite_6", "@cite_24", "@cite_40", "@cite_11" ], "mid": [ "2170974939", "2107007801", "2103950919", "2162666840", "2144589197", "2172129062" ], "abstract": [ "Background Human tissues perform diverse metabolic functions. Mapping out these tissue-specific functions in genome-scale models will advance our understanding of the metabolic basis of various physiological and pathological processes. The global knowledgebase of metabolic functions categorized for the human genome (Human Recon 1) coupled with abundant high-throughput data now makes possible the reconstruction of tissue-specific metabolic models. However, the number of available tissue-specific models remains incomplete compared with the large diversity of human tissues.", "With the advent of high-throughput technologies, the field of systems biology has amassed an abundance of “omics” data, quantifying thousands of cellular components across a variety of scales, ranging from mRNA transcript levels to metabolite quantities. Methods are needed to not only integrate this omics data but to also use this data to heighten the predictive capabilities of computational models. Several recent studies have successfully demonstrated how flux balance analysis (FBA), a constraint-based modeling approach, can be used to integrate transcriptomic data into genome-scale metabolic network reconstructions to generate predictive computational models. In this review, we summarize such FBA-based methods for integrating expression data into genome-scale metabolic network reconstructions, highlighting their advantages as well as their limitations.", "The computational study of human metabolism has been advanced with the advent of the first generic (non-tissue specific) stoichiometric model of human metabolism. In this study, we present a new algorithm for rapid reconstruction of tissue-specific genome-scale models of human metabolism. 
The algorithm generates a tissue-specific model from the generic human model by integrating a variety of tissue-specific molecular data sources, including literature-based knowledge, transcriptomic, proteomic, metabolomic and phenotypic data. Applying the algorithm, we constructed the first genome-scale stoichiometric model of hepatic metabolism. The model is verified using standard cross-validation procedures, and through its ability to carry out hepatic metabolic functions. The model's flux predictions correlate with flux measurements across a variety of hormonal and dietary conditions, and improve upon the predictive performance obtained using the original, generic human model (prediction accuracy of 0.67 versus 0.46). Finally, the model better predicts biomarker changes in genetic metabolic disorders than the generic human model (accuracy of 0.67 versus 0.59). The approach presented can be used to construct other human tissue-specific models, and be applied to other organisms.", "Reconstructions of cellular metabolism are publicly available for a variety of different microorganisms and some mammalian genomes. To date, these reconstructions are “genome-scale” and strive to include all reactions implied by the genome annotation, as well as those with direct experimental evidence. Clearly, many of the reactions in a genome-scale reconstruction will not be active under particular conditions or in a particular cell type. Methods to tailor these comprehensive genome-scale reconstructions into context-specific networks will aid predictive in silico modeling for a particular situation. We present a method called Gene Inactivity Moderated by Metabolism and Expression (GIMME) to achieve this goal. The GIMME algorithm uses quantitative gene expression data and one or more presupposed metabolic objectives to produce the context-specific reconstruction that is most consistent with the available data. 
Furthermore, the algorithm provides a quantitative inconsistency score indicating how consistent a set of gene expression data is with a particular metabolic objective. We show that this algorithm produces results consistent with biological experiments and intuition for adaptive evolution of bacteria, rational design of metabolic engineering strains, and human skeletal muscle cells. This work represents progress towards producing constraint-based models of metabolism that are specific to the conditions where the expression profiling data is available.", "Development of high throughput analytical methods has given physicians the potential access to extensive and patient-specific data sets, such as gene sequences, gene expression profiles or metabolite footprints. This opens for a new approach in health care, which is both personalized and based on system-level analysis. Genome-scale metabolic networks provide a mechanistic description of the relationships between different genes, which is valuable for the analysis and interpretation of large experimental data-sets. Here we describe the generation of genome-scale active metabolic networks for 69 different cell types and 16 cancer types using the INIT (Integrative Network Inference for Tissues) algorithm. The INIT algorithm uses cell type specific information about protein abundances contained in the Human Proteome Atlas as the main source of evidence. The generated models constitute the first step towards establishing a Human Metabolic Atlas, which will be a comprehensive description (accessible online) of the metabolism of different human cell types, and will allow for tissue-level and organism-level simulations in order to achieve a better understanding of complex diseases.
A comparative analysis between the active metabolic networks of cancer types and healthy cell types allowed for identification of cancer-specific metabolic features that constitute generic potential drug targets for cancer treatment.", "Summary: iMAT is an Integrative Metabolic Analysis Tool, enabling the integration of transcriptomic and proteomic data with genomescale metabolic network models to predict enzymes’ metabolic flux, based on the method previously described by (Shlomi, 2008). The prediction of metabolic fluxes based on high-throughput molecular data sources could help advance our understanding of cellular metabolism, since current experimental approaches are limited to measuring fluxes through merely a few dozen enzymes." ] }
1304.7992
2009282793
Systemic approaches to the study of a biological cell or tissue rely increasingly on the use of context-specific metabolic network models. The reconstruction of such a model from high-throughput data can routinely involve large numbers of tests under different conditions and extensive parameter tuning, which calls for fast algorithms. We present fastcore, a generic algorithm for reconstructing context-specific metabolic network models from global genome-wide metabolic network models such as Recon X. fastcore takes as input a core set of reactions that are known to be active in the context of interest (e.g., cell or tissue), and it searches for a flux consistent subnetwork of the global network that contains all reactions from the core set and a minimal set of additional reactions. Our key observation is that a minimal consistent reconstruction can be defined via a set of sparse modes of the global network, and fastcore iteratively computes such a set via a series of linear programs. Experiments on liver data demonstrate speedups of several orders of magnitude, and significantly more compact reconstructions, over a rival method. Given its simplicity and its excellent performance, fastcore can form the backbone of many future metabolic network reconstruction algorithms.
INIT @cite_40 uses data retrieved from public databases in order to assess the presence of a certain reaction and its respective metabolites in the cell type of interest. INIT uses mixed integer linear programming to build a model in which all reactions can carry a flux. Contrary to other algorithms, INIT does not rely on the assumption of a steady state, but it allows a small net accumulation of all metabolites of the model.
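As a toy illustration of the kind of linear programs these reconstruction methods solve, the sketch below finds a steady-state flux vector that supports one core reaction in a three-reaction chain while minimising the l1 norm (a standard proxy for adding few reactions). The network, the core choice, and the l1 objective are assumptions for the example, not the exact formulation of fastcore or INIT.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: R1: -> A, R2: A -> B, R3: B -> ; metabolites A, B.
S = np.array([[1.0, -1.0,  0.0],
              [0.0,  1.0, -1.0]])
n = S.shape[1]
core = 1                                  # index of R2, required to be active

# Variables x = [v, t]; minimise sum(t) with |v_i| <= t_i (l1 surrogate).
c = np.concatenate([np.zeros(n), np.ones(n)])
A_eq = np.hstack([S, np.zeros_like(S)])   # steady state: S v = 0
b_eq = np.zeros(S.shape[0])
I = np.eye(n)
A_ub = np.vstack([
    np.hstack([ I, -I]),                  #  v - t <= 0
    np.hstack([-I, -I]),                  # -v - t <= 0
    np.hstack([-I[core:core + 1], np.zeros((1, n))]),  # v_core >= 1
])
b_ub = np.concatenate([np.zeros(2 * n), [-1.0]])
bounds = [(-10, 10)] * n + [(0, 10)] * n

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
v = res.x[:n]
```

In this chain the steady-state constraint forces all three fluxes to be equal, so requiring the core reaction to carry a flux of at least 1 yields v = (1, 1, 1).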
{ "cite_N": [ "@cite_40" ], "mid": [ "2144589197" ], "abstract": [ "Development of high throughput analytical methods has given physicians the potential access to extensive and patient-specific data sets, such as gene sequences, gene expression profiles or metabolite footprints. This opens for a new approach in health care, which is both personalized and based on system-level analysis. Genome-scale metabolic networks provide a mechanistic description of the relationships between different genes, which is valuable for the analysis and interpretation of large experimental data-sets. Here we describe the generation of genome-scale active metabolic networks for 69 different cell types and 16 cancer types using the INIT (Integrative Network Inference for Tissues) algorithm. The INIT algorithm uses cell type specific information about protein abundances contained in the Human Proteome Atlas as the main source of evidence. The generated models constitute the first step towards establishing a Human Metabolic Atlas, which will be a comprehensive description (accessible online) of the metabolism of different human cell types, and will allow for tissue-level and organism-level simulations in order to achieve a better understanding of complex diseases. A comparative analysis between the active metabolic networks of cancer types and healthy cell types allowed for identification of cancer-specific metabolic features that constitute generic potential drug targets for cancer treatment." ] }
1304.7793
2952268951
This paper investigates co-scheduling algorithms for processing a set of parallel applications. Instead of executing each application one by one, using a maximum degree of parallelism for each of them, we aim at scheduling several applications concurrently. We partition the original application set into a series of packs, which are executed one by one. A pack comprises several applications, each of them with an assigned number of processors, with the constraint that the total number of processors assigned within a pack does not exceed the maximum number of available processors. The objective is to determine a partition into packs, and an assignment of processors to applications, that minimize the sum of the execution times of the packs. We thoroughly study the complexity of this optimization problem, and propose several heuristics that exhibit very good performance on a variety of workloads, whose application execution times model profiles of parallel scientific codes. We show that co-scheduling leads to faster workload completion time and to faster response times on average (hence increasing system throughput and saving energy), for significant benefits over traditional scheduling from both the user and system perspectives.
In this paper, we deal with scheduling for parallel tasks, aiming at makespan minimization (recall that the makespan is the total execution time). The corresponding problem with sequential tasks (tasks that execute on a single processor) is easy to solve for the makespan minimization objective: simply make a pack out of the largest @math tasks, and proceed likewise while there remain tasks. Note that the scheduling problem with sequential tasks has been widely studied for other objective functions, see @cite_14 for various job cost functions, and Potts and Kovalyov @cite_19 for a survey. Back to the problem with sequential tasks and the makespan objective, Koole and Righter in @cite_12 deal with the case where the execution time of each task is unknown but defined by a probabilistic distribution. They showed counter-intuitive properties that enabled them to derive an algorithm that computes the optimal policy when there are two processors, improving the result of Deb and Serfozo @cite_13 , who considered the stochastic problem with identical jobs.
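The greedy rule just described can be made concrete. This is a minimal sketch, assuming p is the number of available processors, so that a pack holds at most p sequential tasks and runs for as long as its largest task:

```python
def pack_makespan(times, p):
    """Greedy pack schedule for sequential tasks on p processors:
    repeatedly pack the p largest remaining tasks together.
    Each pack's execution time equals its largest (first) task,
    so the makespan is the sum of those maxima."""
    ts = sorted(times, reverse=True)
    return sum(ts[i] for i in range(0, len(ts), p))
```

For times [5, 3, 8, 1, 2, 7] and p = 2, the packs are {8, 7}, {5, 3}, {2, 1}, giving a makespan of 8 + 5 + 2 = 15.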
{ "cite_N": [ "@cite_19", "@cite_14", "@cite_13", "@cite_12" ], "mid": [ "1966523365", "2136186989", "2322096602", "2162935473" ], "abstract": [ "There is an extensive literature on models that integrate scheduling with batching decisions. Jobs may be batched if they share the same setup on a machine. Another reason for batching occurs when a machine can process several jobs simultaneously. This paper reviews the literature on scheduling with batching, giving details of the basic algorithms, and referencing other significant results. Special attention is given to the design of efficient dynamic programming algorithms for solving these types of problems.", "We address the problem of scheduling n jobs on a batching machine to minimize regular scheduling criteria that are non-decreasing in the job completion times. A batching machine is a machine that can handle up to b jobs simultaneously. The jobs that are processed together form a batch, and all jobs in a batch start and complete at the same time. The processing time of a batch is equal to the largest processing time of any job in the batch. We analyse two variants: the unbounded model, where b⩾n; and the bounded model, where b<n. For the unbounded model, we give a characterization of a class of optimal schedules, which leads to a generic dynamic programming algorithm that solves the problem of minimizing an arbitrary regular cost function in pseudopolynomial time. The characterization leads to more efficient dynamic programming algorithms for specific cost functions: a polynomial algorithm for minimizing the maximum cost, an O(n3) time algorithm for minimizing the number of tardy jobs, an O(n2) time algorithm for minimizing the maximum lateness, and an O(n log n) time algorithm for minimizing the total weighted completion time. Furthermore, we prove that minimizing the weighted number of tardy jobs and the total weighted tardiness are NP-hard problems. 
For the bounded model, we derive an O(nb(b−1)) time dynamic programming algorithm for minimizing total completion time when b>1; for the case with m different processing times, we give a dynamic programming algorithm that requires O(b2m22m) time. Moreover, we prove that due date based scheduling criteria give rise to NP-hard problems. Finally, we show that an arbitrary regular cost function can be minimized in polynomial time for a fixed number of batches. © 1998 John Wiley & Sons, Ltd.", "A batch service queue is considered where each batch size and its time of service is subject to control. Costs are incurred for serving the customers and for holding them in the system. Viewing the system as a Markov decision process (i.e., dynamic program) with unbounded costs, we show that policies which minimize the expected continuously discounted cost and the expected cost per unit time over an infinite time horizon are of the form: at a review point when x customers are waiting, serve min x, Q customers (Q being the, possibly infinite, service capacity) if and only if x exceeds a certain optimal level M. Methods of computing M for both the discounted and average cost contexts are presented.", "We consider a batch scheduling problem in which the processing time of a batch of jobs equals the maximum of the processing times of all jobs in the batch. This is the case, for example, for burn-in operations in semiconductor manufacturing and other testing operations. Processing times are assumed to be random, and we consider minimizing the makespan and the flow time. The problem is much more difficult than the corresponding deterministic problem, and the optimal policy may have many counterintuitive properties. We prove various structural properties of the optimal policy and use these to develop a polynomial-time algorithm to compute the optimal policy." ] }
1304.7793
2952268951
This paper investigates co-scheduling algorithms for processing a set of parallel applications. Instead of executing each application one by one, using a maximum degree of parallelism for each of them, we aim at scheduling several applications concurrently. We partition the original application set into a series of packs, which are executed one by one. A pack comprises several applications, each of them with an assigned number of processors, with the constraint that the total number of processors assigned within a pack does not exceed the maximum number of available processors. The objective is to determine a partition into packs, and an assignment of processors to applications, that minimize the sum of the execution times of the packs. We thoroughly study the complexity of this optimization problem, and propose several heuristics that exhibit very good performance on a variety of workloads, whose application execution times model profiles of parallel scientific codes. We show that co-scheduling leads to faster workload completion time and to faster response times on average (hence increasing system throughput and saving energy), for significant benefits over traditional scheduling from both the user and system perspectives.
To the best of our knowledge, the problem with parallel tasks has not been studied as such. However, it was introduced in @cite_21 as a moldable-by-phase model to approximate the moldable problem. The moldable task model is similar to the -scheduling model, but one does not have the additional constraint ( constraint) that the execution of new tasks cannot start before all tasks in the current are completed. The authors of @cite_21 provide an optimal polynomial-time solution for the problem of scheduling identical independent tasks, using a dynamic programming algorithm. This is the only instance of -scheduling with parallel tasks that we found in the literature.
{ "cite_N": [ "@cite_21" ], "mid": [ "2401502609" ], "abstract": [ "Scheduling is a crucial problem in parallel and distributed processing. It consists of determining where and when the tasks of parallel programs will be executed. The design of parallel algorithms has to be reconsidered by the influence of new execution supports (namely, clusters of workstations, grid computing and global computing) which are characterized by a larger number of heterogeneous processors, often organized by hierarchical sub-systems. Parallel Tasks model (tasks that require more than one processor for their execution) has been introduced about 15 years ago as a promising alternative for scheduling parallel applications, especially in the case of slow communication media. The basic idea is to consider the application at a rough level of granularity (larger tasks in order to decrease the relative weight of communications). As the main difficulty for scheduling in actual systems comes from handling efficiently the communications, this new view of the problem allows us to consider them implicitly, thus leading to more tractable problems. We kindly invite the reader to look at the chapter of Maciej Drozdowski (in this book) for a detailed presentation of various kinds of Parallel Tasks in a general context and the survey paper from Feitelsonsurvey for a discussion in the field of parallel processing. Even if the basic problem of scheduling Parallel Tasks remains NP-hard, some approximation algorithms can be designed. A lot of results have been derived recently for scheduling the different types of Parallel Tasks, namely, Rigid, Moldable or Malleable ones. We will distinguish Parallel Tasks inside the same application or between applications in a multi-user context. Various optimization criteria will be discussed. This chapter aims to present several approximation algorithms for scheduling moldable and malleable tasks with a special emphasis on new execution supports." ] }
1304.7793
2952268951
This paper investigates co-scheduling algorithms for processing a set of parallel applications. Instead of executing each application one by one, using a maximum degree of parallelism for each of them, we aim at scheduling several applications concurrently. We partition the original application set into a series of packs, which are executed one by one. A pack comprises several applications, each of them with an assigned number of processors, with the constraint that the total number of processors assigned within a pack does not exceed the maximum number of available processors. The objective is to determine a partition into packs, and an assignment of processors to applications, that minimize the sum of the execution times of the packs. We thoroughly study the complexity of this optimization problem, and propose several heuristics that exhibit very good performance on a variety of workloads, whose application execution times model profiles of parallel scientific codes. We show that co-scheduling leads to faster workload completion time and to faster response times on average (hence increasing system throughput and saving energy), for significant benefits over traditional scheduling from both the user and system perspectives.
Several recent publications @cite_16 @cite_3 @cite_10 consider co-scheduling at a single multicore node, when contention for resources by co-scheduled tasks leads to complex tradeoffs between energy and performance measures. The authors of @cite_3 predict and utilize inter-thread cache contention at a multicore in order to improve performance. Hankendi and Coskun @cite_10 show that there can be measurable gains in energy per unit of work through the application of their multi-level co-scheduling technique at runtime, which is based on classifying tasks according to specific performance measures. Bhaduria and McKee @cite_16 consider local search heuristics to co-schedule tasks in a resource-aware manner at a multicore node to achieve significant gains in thread throughput per watt.
{ "cite_N": [ "@cite_16", "@cite_10", "@cite_3" ], "mid": [ "2054604581", "", "2155396321" ], "abstract": [ "We develop real-time scheduling techniques for improving performance and energy for multiprogrammed workloads that scale non-uniformly with increasing thread counts. Multithreaded programs generally deliver higher throughput than single-threaded programs on chip multiprocessors, but performance gains from increasing threads decrease when there is contention for shared resources. We use analytic metrics to derive local search heuristics for creating efficient multiprogrammed, multithreaded workload schedules. Programs are allocated fewer cores than requested, and scheduled to space-share the CMP to improve global throughput. Our holistic approach attempts to co-schedule programs that complement each other with respect to shared resource consumption. We find application co-scheduling for performance and energy in a resource-aware manner achieves better results than solely targeting total throughput or concurrently co-scheduling all programs. Our schedulers improve overall energy delay (E*D) by a factor of 1.5 over time-multiplexed gang scheduling.", "", "This paper studies the impact of L2 cache sharing on threads that simultaneously share the cache, on a chip multi-processor (CMP) architecture. Cache sharing impacts threads nonuniformly, where some threads may be slowed down significantly, while others are not. This may cause severe performance problems such as sub-optimal throughput, cache thrashing, and thread starvation for threads that fail to occupy sufficient cache space to make good progress. Unfortunately, there is no existing model that allows extensive investigation of the impact of cache sharing. To allow such a study, we propose three performance models that predict the impact of cache sharing on co-scheduled threads. 
The input to our models is the isolated L2 cache stack distance or circular sequence profile of each thread, which can be easily obtained on-line or off-line. The output of the models is the number of extra L2 cache misses for each thread due to cache sharing. The models differ by their complexity and prediction accuracy. We validate the models against a cycle-accurate simulation that implements a dual-core CMP architecture, on fourteen pairs of mostly SPEC benchmarks. The most accurate model, the inductive probability model, achieves an average error of only 3.9 . Finally, to demonstrate the usefulness and practicality of the model, a case study that details the relationship between an application's temporal reuse behavior and its cache sharing impact is presented." ] }
1304.7158
2115632234
We consider the problem of embedding entities and relations of knowledge bases in low-dimensional vector spaces. Unlike most existing approaches, which are primarily efficient for modeling equivalence relations, our approach is designed to explicitly model irreflexive relations, such as hierarchies, by interpreting them as translations operating on the low-dimensional embeddings of the entities. Preliminary experiments show that, despite its simplicity and a smaller number of parameters than previous approaches, our approach achieves state-of-the-art performance according to standard evaluation protocols on data from WordNet and Freebase.
These methods can provide interpretations and analysis of the data but are slow and do not scale to large databases, due to the high cost of inference. In terms of scalability, models based on tensor factorization (like those from @cite_4 ) have been shown to be efficient. However, they have been outperformed by energy-based models @cite_3 @cite_0 @cite_1 @cite_5 . These methods represent entities as low-dimensional embeddings and relations as linear or bilinear operators on them, and are trained via an online process, which allows them to scale well to large numbers of entities and relation types. We compare our new approach to @cite_3 and @cite_1 .
{ "cite_N": [ "@cite_4", "@cite_1", "@cite_3", "@cite_0", "@cite_5" ], "mid": [ "205829674", "2951131188", "2156954687", "2101802482", "1771625187" ], "abstract": [ "Relational learning is becoming increasingly important in many areas of application. Here, we present a novel approach to relational learning based on the factorization of a three-way tensor. We show that unlike other tensor approaches, our method is able to perform collective learning via the latent components of the model and provide an efficient algorithm to compute the factorization. We substantiate our theoretical considerations regarding the collective learning capabilities of our model by the means of experiments on both a new dataset and a dataset commonly used in entity resolution. Furthermore, we show on common benchmark datasets that our approach achieves better or on-par results, if compared to current state-of-the-art relational learning solutions, while it is significantly faster to compute.", "Large-scale relational learning becomes crucial for handling the huge amounts of structured data generated daily in many application domains ranging from computational biology or information retrieval, to natural language processing. In this paper, we present a new neural network architecture designed to embed multi-relational graphs into a flexible continuous vector space in which the original data is kept and enhanced. The network is trained to encode the semantics of these graphs in order to assign high probabilities to plausible components. We empirically show that it reaches competitive performance in link prediction on standard datasets from the literature.", "", "Many data such as social networks, movie preferences or knowledge bases are multi-relational, in that they describe multiple relations between entities. While there is a large body of work focused on modeling these data, modeling these multiple types of relations jointly remains challenging. 
Further, existing approaches tend to breakdown when the number of these types grows. In this paper, we propose a method for modeling large multi-relational datasets, with possibly thousands of relations. Our model is based on a bilinear structure, which captures various orders of interaction of the data, and also shares sparse latent factors across different relations. We illustrate the performance of our approach on standard tensor-factorization datasets where we attain, or outperform, state-of-the-art results. Finally, a NLP application demonstrates our scalability and the ability of our model to learn efficient and semantically meaningful verb representations.", "Knowledge bases provide applications with the benefit of easily accessible, systematic relational knowledge but often suffer in practice from their incompleteness and lack of knowledge of new entities and relations. Much work has focused on building or extending them by finding patterns in large unannotated text corpora. In contrast, here we mainly aim to complete a knowledge base by predicting additional true relationships between entities, based on generalizations that can be discerned in the given knowledgebase. We introduce a neural tensor network (NTN) model which predicts new relationship entries that can be added to the database. This model can be improved by initializing entity representations with word vectors learned in an unsupervised fashion from text, and when doing this, existing relations can even be queried for entities that were not present in the database. Our model generalizes and outperforms existing models for this problem, and can classify unseen relationships in WordNet with an accuracy of 75.8 ." ] }
1304.7399
2951286760
A new system for object detection in cluttered RGB-D images is presented. Our main contribution is a new method called Bingham Procrustean Alignment (BPA) to align models with the scene. BPA uses point correspondences between oriented features to derive a probability distribution over possible model poses. The orientation component of this distribution, conditioned on the position, is shown to be a Bingham distribution. This result also applies to the classic problem of least-squares alignment of point sets, when point features are orientation-less, and gives a principled, probabilistic way to measure pose uncertainty in the rigid alignment problem. Our detection system leverages BPA to achieve more reliable object detections in clutter.
Since the release of the Kinect in 2010, much progress has been made on 3-D object detection in cluttered RGB-D scenes. The two most successful systems to date are Aldoma et al. @cite_11 and Tang et al. @cite_5 . Aldoma's system is purely geometric, and uses SHOT features @cite_0 for model-scene correspondences. It relies heavily on pose clustering of feature correspondences to suggest model placements. This is essentially a sparse version of the Hough transform @cite_13 , which is limited by the number of visible features on an object, and is why their recall rates tend to be lower than in our system for objects that are heavily occluded. The main contribution of Aldoma's system is that they jointly optimize multiple model placements for consistency, which inspired our own multiple object detection system.
{ "cite_N": [ "@cite_0", "@cite_5", "@cite_13", "@cite_11" ], "mid": [ "2160643963", "2084635560", "22745672", "" ], "abstract": [ "This paper deals with local 3D descriptors for surface matching. First, we categorize existing methods into two classes: Signatures and Histograms. Then, by discussion and experiments alike, we point out the key issues of uniqueness and repeatability of the local reference frame. Based on these observations, we formulate a novel comprehensive proposal for surface representation, which encompasses a new unique and repeatable local reference frame as well as a new 3D descriptor. The latter lays at the intersection between Signatures and Histograms, so as to possibly achieve a better balance between descriptiveness and robustness. Experiments on publicly available datasets as well as on range scans obtained with Spacetime Stereo provide a thorough validation of our proposal.", "We present an object recognition system which leverages the additional sensing and calibration information available in a robotics setting together with large amounts of training data to build high fidelity object models for a dataset of textured household objects. We then demonstrate how these models can be used for highly accurate detection and pose estimation in an end-to-end robotic perception system incorporating simultaneous segmentation, object classification, and pose fitting. The system can handle occlusions, illumination changes, multiple objects, and multiple instances of the same object. The system placed first in the ICRA 2011 Solutions in Perception instance recognition challenge. We believe the presented paradigm of building rich 3D models at training time and including depth information at test time is a promising direction for practical robotic perception systems.", "Abstract The Hough transform is a method for detecting curves by exploiting the duality between points on a curve and parameters of that curve. 
The initial work showed how to detect both analytic curves (1,2) and non-analytic curves, (3) but these methods were restricted to binary edge images. This work was generalized to the detection of some analytic curves in grey level images, specifically lines, (4) circles (5) and parabolas. (6) The line detection case is the best known of these and has been ingeniously exploited in several applications. (7,8,9) We show how the boundaries of an arbitrary non-analytic shape can be used to construct a mapping between image space and Hough transform space. Such a mapping can be exploited to detect instances of that particular shape in an image. Furthermore, variations in the shape such as rotations, scale changes or figure ground reversals correspond to straightforward transformations of this mapping. However, the most remarkable property is that such mappings can be composed to build mappings for complex shapes from the mappings of simpler component shapes. This makes the generalized Hough transform a kind of universal transform which can be used to find arbitrarily complex shapes.", "" ] }
1304.7399
2951286760
A new system for object detection in cluttered RGB-D images is presented. Our main contribution is a new method called Bingham Procrustean Alignment (BPA) to align models with the scene. BPA uses point correspondences between oriented features to derive a probability distribution over possible model poses. The orientation component of this distribution, conditioned on the position, is shown to be a Bingham distribution. This result also applies to the classic problem of least-squares alignment of point sets, when point features are orientation-less, and gives a principled, probabilistic way to measure pose uncertainty in the rigid alignment problem. Our detection system leverages BPA to achieve more reliable object detections in clutter.
The Bingham distribution was first used for 3-D cluttered object detection in Glover et al. @cite_8 . However, that system was incomplete in that it lacked any alignment step, and differs greatly from this work because it did not use feature correspondences.
{ "cite_N": [ "@cite_8" ], "mid": [ "2296018510" ], "abstract": [ "The success of personal service robotics hinges upon reliable manipulation of everyday household objects, such as dishes, bottles, containers, and furniture. In order to accurately manipulate such objects, robots need to know objects’ full 6-DOF pose, which is made difficult by clutter and occlusions. Many household objects have regular structure that can be used to effectively guess object pose given an observation of just a small patch on the object. In this paper, we present a new method to model the spatial distribution of oriented local features on an object, which we use to infer object pose given small sets of observed local features. The orientation distribution for local features is given by a mixture of Binghams on the hypersphere of unit quaternions, while the local feature distribution for position given orientation is given by a locally-weighted (Quaternion kernel) likelihood. Experiments on 3D point cloud data of cluttered and uncluttered scenes generated from a structured light stereo image sensor validate our approach." ] }
1304.6822
2950065238
Opportunistic spectrum access (OSA) is a key technique enabling the secondary users (SUs) in a cognitive radio (CR) network to transmit over the "spectrum holes" unoccupied by the primary users (PUs). In this paper, we focus on the OSA design in the presence of reactive PUs, where PU's access probability in a given channel is related to SU's past access decisions. We model the channel occupancy of the reactive PU as a 4-state discrete-time Markov chain. We formulate the optimal OSA design for SU throughput maximization as a constrained finite-horizon partially observable Markov decision process (POMDP) problem. We solve this problem by first considering the conventional short-term conditional collision probability (SCCP) constraint. We then adopt a long-term PU throughput (LPUT) constraint to effectively protect the reactive PU transmission. We derive the structure of the optimal OSA policy under the LPUT constraint and propose a suboptimal policy with lower complexity. Numerical results are provided to validate the proposed studies, which reveal some interesting new tradeoffs between SU throughput maximization and PU transmission protection in a practical interaction scenario.
Most existing work on OSA with time-slotted PUs, including the aforementioned one, has assumed a non-reactive PU model, where the PU's transmission over a particular channel evolves as a 2-state on/off Markov chain with fixed state transition probabilities. Similar assumptions can also be found in the experiment-based work on OSA with unslotted PUs, such as @cite_15 and @cite_5 . Although greatly simplifying the OSA design, the non-reactive PU model might not be practical since existing wireless systems are mostly intelligent enough to adapt their transmissions upon experiencing collision or interference. For example, a PU may increase transmit power to compensate for the link loss due to the received interference. Alternatively, it may reduce the channel access probability when collision occurs in a carrier sensing multiple access (CSMA) based primary system. In this paper, we refer to such PUs as reactive PUs, to differentiate them from their non-reactive counterparts.
{ "cite_N": [ "@cite_5", "@cite_15" ], "mid": [ "2182692758", "1578918008" ], "abstract": [ "Dynamic spectrum access is a promising approach to alleviate the spectrum scarcity that wireless communications face today. In short, it aims at reusing sparsely occupied frequency bands while causing no (or insignificant) interference to the actual licensees. This article focuses on applying this concept in the time domain by exploiting idle periods between bursty transmissions of multi-access communication channels and addresses WLAN as an example of practical importance. A statistical model based on empirical data is presented, and it is shown how to use this model for deriving access strategies. The coexistence of Bluetooth and WLAN is considered as a concrete example.", "Opportunistic access to spectrum and secondary allocation of spectrum are topics being studied by regulatory bodies and organizations with interest in spectrum utilization. The DARPA ATO neXt Generation (XG) Program is investigating opportunistic use of spectrum wherein users would dynamically access spectrum based on its availability. Such access may embody changes to regulatory policies governing access to the RF spectrum. Additionally, the methods studied by the XG program could be used for secondary access within a fixed portion of spectrum. Opportunistic access would open spectrum that is sparsely used (temporally and spatially) to users who otherwise would be confined to inadequate frequency bands. An XG-enabled radio would sense and characterize spectral activity, identify spectral opportunities for use, and coordinate access, with the goal of not interfering with the primary, non-XG, and users. This paper describes a sensor suite and media access control (MAC) concepts representative of XG. A prototypical experiment has been conducted with the XG MAC in an environment of 802.11b radios. 
Results are presented that illustrate the operation of the MAC concept and demonstrate performance parameterized by the load carried by both 802.11b and XG" ] }
1304.6276
1662004680
Dynamic Epistemic Logic makes it possible to model and reason about information change in multi-agent systems. Information change is mathematically modeled through epistemic action Kripke models introduced in earlier work. Also, van Ditmarsch interprets the information change as a relation between epistemic states and sets of epistemic states, and to describe it formally, he considers a special constructor LB called the learning operator. Inspired by this, it seems natural to us that the basic source of information change in a multi-agent system should be learning an announcement by some agents together, privately, concurrently or even wrongly. Hence, moving along this path, we introduce the notion of a learning program and prove that all finite K45 action models can be described by our learning programs.
In @cite_0 , an epistemic action is interpreted as a relation between @math epistemic states and sets of @math epistemic states. There are two main differences between the interpretation of epistemic actions in concurrent dynamic epistemic logic and epistemic learning programs.
{ "cite_N": [ "@cite_0" ], "mid": [ "1500025499" ], "abstract": [ "When giving an analysis of knowledge in multiagent systems, one needs a framework in which higher-order information and its dynamics can both be represented. A recent tradition starting in original work by Plaza treats all of knowledge, higher-order knowledge, and its dynamics on the same foot. Our work is in that tradition. It also fits in approaches that not only dynamize the epistemics, but also epistemize the dynamics: the actions that (groups of) agents perform are epistemic actions. Different agents may have different information about which action is taking place, including higher-order information. We demonstrate that such information changes require subtle descriptions. Our contribution is to provide a complete axiomatization for an action language of van Ditmarsch, where an action is interpreted as a relation between epistemic states (pointed models) and sets of epistemic states. The applicability of the framework is found in every context where multiagent strategic decision making is at stake, and already demonstrated in game-like scenarios such as Cluedo and card games." ] }
1304.6276
1662004680
Dynamic Epistemic Logic makes it possible to model and reason about information change in multi-agent systems. Information change is mathematically modeled through epistemic action Kripke models introduced in earlier work. Also, van Ditmarsch interprets the information change as a relation between epistemic states and sets of epistemic states, and to describe it formally, he considers a special constructor LB called the learning operator. Inspired by this, it seems natural to us that the basic source of information change in a multi-agent system should be learning an announcement by some agents together, privately, concurrently or even wrongly. Hence, moving along this path, we introduce the notion of a learning program and prove that all finite K45 action models can be described by our learning programs.
By introducing @math models and actions, we may think of a theory of multi-agent belief revision. A related work is @cite_11 , which generalizes AGM @cite_12 to a multi-agent belief revision theory.
{ "cite_N": [ "@cite_12", "@cite_11" ], "mid": [ "2149420462", "1544854221" ], "abstract": [ "This paper extends earlier work by its authors on formal aspects of the processes of contracting a theory to eliminate a proposition and revising a theory to introduce a proposition. In the course of the earlier work, Gardenfors developed general postulates of a more or less equational nature for such processes, whilst Alchourron and Makinson studied the particular case of contraction functions that are maximal, in the sense of yielding a maximal subset of the theory (or alternatively, of one of its axiomatic bases), that fails to imply the proposition being eliminated. In the present paper, the authors study a broader class, including contraction functions that may be less than maximal. Specifically, they investigate “partial meet contraction functions”, which are defined to yield the intersection of some nonempty family of maximal subsets of the theory that fail to imply the proposition being eliminated. Basic properties of these functions are established: it is shown in particular that they satisfy the Gardenfors postulates, and moreover that they are sufficiently general to provide a representation theorem for those postulates. Some special classes of partial meet contraction functions, notably those that are “relational” and “transitively relational”, are studied in detail, and their connections with certain “supplementary postulates” of Gardenfors investigated, with a further representation theorem established.", "We generalize AGM belief revision theory to the multi-agent case. To do so, we first generalize the semantics of the single-agent case, based on the notion of interpretation, to the multi-agent case. Then we show that, thanks to the shape of our new semantics, all the results of the AGM framework transfer. Afterwards we investigate some postulates that are specific to our multi-agent setting." ] }
1304.5872
2951811221
A Bloom filter is a method for reducing the space (memory) required for representing a set by allowing a small error probability. In this paper we consider a Sliding Bloom Filter: a data structure that, given a stream of elements, supports membership queries on the set of the last @math elements (a sliding window), while allowing a small error probability. We formally define the data structure and its relevant parameters and analyze the time and memory requirements needed to achieve them. We give a low space construction that runs in O(1) time per update with high probability (that is, for all sequences, with high probability all operations take constant time) and provide an almost matching lower bound on the space that shows that our construction has the best possible space consumption up to an additive lower order term.
Much attention has been devoted to determining the exact space and time requirements of the approximate set membership problem. The authors of @cite_0 proved an entropy lower bound of @math when the universe @math is large. They also provided a reduction from approximate membership to exact membership, which we use in our construction. The retrieval problem associates additional data with each element of the set. In the static setting, where the elements are fixed and given in advance, Dietzfelbinger and Pagh propose a reduction from the retrieval problem to approximate membership @cite_10 . Their construction gets arbitrarily close to the entropy lower bound.
{ "cite_N": [ "@cite_0", "@cite_10" ], "mid": [ "2018423671", "1917123765" ], "abstract": [ "In this paper we consider the question of how much space is needed to represent a set. Given a finite universe U and some subset V (called the vocabulary), an exact membership tester is a procedure that for each element s in U determines if s is in V. An approximate membership tester is allowed to make mistakes: we require that the membership tester correctly accepts every element of V, but we allow it to also accept a small fraction of the elements of U - V.", "The retrieval problem is the problem of associating data with keys in a set. Formally, the data structure must store a function f: U -> 0,1 ^r that has specified values on the elements of a given set S, a subset of U, |S|=n, but may have any value on elements outside S. Minimal perfect hashing makes it possible to avoid storing the set S, but this induces a space overhead of Theta(n) bits in addition to the nr bits needed for function values. In this paper we show how to eliminate this overhead. Moreover, we show that for any k query time O(k) can be achieved using space that is within a factor 1+e^ -k of optimal, asymptotically for large n. If we allow logarithmic evaluation time, the additive overhead can be reduced to O(log log n) bits whp. The time to construct the data structure is O(n), expected. A main technical ingredient is to utilize existing tight bounds on the probability of almost square random matrices with rows of low weight to have full row rank. In addition to direct constructions, we point out a close connection between retrieval structures and hash tables where keys are stored in an array and some kind of probing scheme is used. Further, we propose a general reduction that transfers the results on retrieval into analogous results on approximate membership, a problem traditionally addressed using Bloom filters. 
Again, we show how to eliminate the space overhead present in previously known methods, and get arbitrarily close to the lower bound. The evaluation procedures of our data structures are extremely simple (similar to a Bloom filter). For the results stated above we assume free access to fully random hash functions. However, we show how to justify this assumption using extra space o(n) to simulate full randomness on a RAM." ] }
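The membership structures discussed above all build on the classic Bloom filter. As a point of reference, here is a minimal sketch (the class name, sizing, and salted-hash scheme are our own illustrative choices, not any cited construction); the standard sizing is a constant factor above the entropy lower bound of n*log2(1/eps) bits:

```python
import hashlib
import math

class BloomFilter:
    """Classic Bloom filter: no false negatives, false-positive rate about eps.

    Standard sizing uses m = n*log2(e)*log2(1/eps) bits and k = log2(1/eps)
    hash functions, a constant factor above the entropy lower bound of
    n*log2(1/eps) bits mentioned above.
    """
    def __init__(self, n, eps):
        self.m = max(1, math.ceil(n * math.log2(math.e) * math.log2(1 / eps)))
        self.k = max(1, round(math.log2(1 / eps)))
        self.bits = bytearray((self.m + 7) // 8)

    def _positions(self, item):
        # Derive k bit positions by salting a cryptographic hash (illustrative).
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))
```

Inserted items are always reported present; only non-members can be misreported, with probability about eps.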
1304.5872
2951811221
A Bloom filter is a method for reducing the space (memory) required for representing a set by allowing a small error probability. In this paper we consider a Sliding Bloom Filter: a data structure that, given a stream of elements, supports membership queries on the set of the last @math elements (a sliding window), while allowing a small error probability. We formally define the data structure and its relevant parameters and analyze the time and memory requirements needed to achieve them. We give a low-space construction that runs in O(1) time per update with high probability (that is, for all sequences, with high probability all operations take constant time) and provide an almost matching lower bound on the space, which shows that our construction has the best possible space consumption up to an additive lower-order term.
In the dynamic case, Lovett and Porat @cite_15 proved that the entropy lower bound cannot be achieved for any constant error rate: they show a lower bound of @math , where @math depends only on @math . Pagh, Segev and Wieder @cite_8 showed that if the size @math is not known in advance, then at least @math bits of space must be used. Since a Sliding Bloom Filter is in particular a Bloom filter in a dynamic setting, the bounds of @cite_15 and @cite_8 apply.
{ "cite_N": [ "@cite_15", "@cite_8" ], "mid": [ "2191403852", "2952367104" ], "abstract": [ "An approximate membership data structure is a randomized data structure for representing a set which supports membership queries. It allows for a small false positive error rate but has no false negative errors. Such data structures were first introduced by Bloom in the 1970's, and have since had numerous applications, mainly in distributed systems, database systems, and networks. The algorithm of Bloom is quite effective: it can store a set @math of size @math by using only @math bits while having false positive error @math . This is within a constant factor of the entropy lower bound of @math for storing such sets. Closing this gap is an important open problem, as Bloom filters are widely used is situations were storage is at a premium. Bloom filters have another property: they are dynamic. That is, they support the iterative insertions of up to @math elements. In fact, if one removes this requirement, there exist static data structures which receive the entire set at once and can almost achieve the entropy lower bound, they require only @math bits. Our main result is a new lower bound for the memory requirements of any dynamic approximate membership data structure. We show that for any constant @math , any such data structure which achieves false positive error rate of @math must use at least @math memory bits, where @math depends only on @math . This shows that the entropy lower bound cannot be achieved by dynamic data structures for any constant error rate. In fact, our lower bound holds even in the setting where the insertion and query algorithms may use shared randomness, and where they are only required to perform well on average.", "The dynamic approximate membership problem asks to represent a set S of size n, whose elements are provided in an on-line fashion, supporting membership queries without false negatives and with a false positive rate at most epsilon. 
That is, the membership algorithm must be correct on each x in S, and may err with probability at most epsilon on each x not in S. We study a well-motivated, yet insufficiently explored, variant of this problem where the size n of the set is not known in advance. Existing optimal approximate membership data structures require that the size is known in advance, but in many practical scenarios this is not a realistic assumption. Moreover, even if the eventual size n of the set is known in advance, it is desirable to have the smallest possible space usage also when the current number of inserted elements is smaller than n. Our contribution consists of the following results: - We show a super-linear gap between the space complexity when the size is known in advance and the space complexity when the size is not known in advance. - We show that our space lower bound is tight, and can even be matched by a highly efficient data structure." ] }
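For intuition about the sliding-window setting, membership over roughly the last n insertions can be approximated by a folklore two-block rotation scheme. This is not the paper's near-optimal construction, only an illustrative sketch; the per-block filters are modeled as exact Python sets for brevity, where a real implementation would use Bloom filters and inherit their false-positive rate:

```python
class SlidingMembership:
    """Membership over roughly the last n insertions (folklore two-block scheme).

    Two blocks cover consecutive runs of n elements and a query checks both, so
    the last n insertions are always found, while at most n older elements may
    linger. Per-block filters are exact sets here for brevity.
    """
    def __init__(self, n):
        self.n = n
        self.cur, self.prev = set(), set()

    def add(self, item):
        if len(self.cur) == self.n:        # current block full: rotate blocks
            self.prev, self.cur = self.cur, set()
        self.cur.add(item)

    def __contains__(self, item):
        return item in self.cur or item in self.prev
```

The scheme never forgets the last n insertions but may remember up to n older ones, which is exactly the slack the paper's construction tightens.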
1304.4964
1988720938
Tensor factorizations with nonnegativity constraints have found application in analysing data from cyber traffic, social networks, and other areas. We consider application data best described as being generated by a Poisson process e.g. count data, which leads to sparse tensors that can be modelled by sparse factor matrices. In this paper, we investigate efficient techniques for computing an appropriate canonical polyadic tensor factorization based on the Kullback–Leibler divergence function. We propose novel subproblem solvers within the standard alternating block variable approach. Our new methods exploit structure and reformulate the optimization problem as small independent subproblems. We employ bound-constrained Newton and quasi-Newton methods. We compare our algorithms against other codes, demonstrating superior speed for high accuracy results and the ability to quickly find sparse solutions.
NMF with the LS objective was first proposed by Paatero and Tapper @cite_36 and was also studied by Bro in his thesis [BroPhD, p. 169]. Lee and Seung later considered the problem for both the LS and K-L formulations and introduced multiplicative updates based on the convex subproblems @cite_12 @cite_0 . Their work was extended to tensors by Welling and Weber @cite_11 . Many other works have been published on the LS versions of NMF @cite_32 @cite_49 @cite_42 @cite_19 @cite_44 and NTF @cite_15 @cite_1 @cite_45 @cite_2 @cite_6 .
{ "cite_N": [ "@cite_36", "@cite_42", "@cite_1", "@cite_32", "@cite_6", "@cite_0", "@cite_19", "@cite_44", "@cite_45", "@cite_49", "@cite_2", "@cite_15", "@cite_12", "@cite_11" ], "mid": [ "2059745395", "", "", "2110096996", "", "", "", "", "", "", "", "2098098075", "1902027874", "2073502026" ], "abstract": [ "A new variant ‘PMF’ of factor analysis is described. It is assumed that X is a matrix of observed data and σ is the known matrix of standard deviations of elements of X. Both X and σ are of dimensions n × m. The method solves the bilinear matrix problem X = GF + E where G is the unknown left hand factor matrix (scores) of dimensions n × p, F is the unknown right hand factor matrix (loadings) of dimensions p × m, and E is the matrix of residuals. The problem is solved in the weighted least squares sense: G and F are determined so that the Frobenius norm of E divided (element-by-element) by σ is minimized. Furthermore, the solution is constrained so that all the elements of G and F are required to be non-negative. It is shown that the solutions by PMF are usually different from any solutions produced by the customary factor analysis (FA, i.e. principal component analysis (PCA) followed by rotations). Usually PMF produces a better fit to the data than FA. Also, the result of PF is guaranteed to be non-negative, while the result of FA often cannot be rotated so that all negative entries would be eliminated. Different possible application areas of the new method are briefly discussed. In environmental data, the error estimates of data can be widely varying and non-negativity is often an essential feature of the underlying models. Thus it is concluded that PMF is better suited than FA or PCA in many environmental applications. Examples of successful applications of PMF are shown in companion papers.", "", "", "Nonnegative matrix factorization (NMF) can be formulated as a minimization problem with bound constraints. 
Although bound-constrained optimization has been studied extensively in both theory and practice, so far no study has formally applied its techniques to NMF. In this letter, we propose two projected gradient methods for NMF, both of which exhibit strong optimization properties. We discuss efficient implementations and demonstrate that one of the proposed methods converges faster than the popular multiplicative update approach. A simple Matlab code is also provided.", "", "", "", "", "", "", "", "SUMMARY In this paper a modification of the standard algorithm for non-negativity-constrained linear least squares regression is proposed. The algorithm is specifically designed for use in multiway decomposition methods such as PARAFAC and N-mode principal component analysis. In those methods the typical situation is that there is a high ratio between the numbers of objects and variables in the regression problems solved. Furthermore, very similar regression problems are solved many times during the iterative procedures used. The algorithm proposed is based on the de facto standard algorithm NNLS by Lawson and Hanson, but modified to take advantage of the special characteristics of iterative algorithms involving repeated use of non-negativity constraints. The principle behind the NNLS algorithm is described in detail and a comparison is made between this standard algorithm and the new algorithm called FNNLS (fast NNLS). © 1997 John Wiley & Sons, Ltd.", "Is perception of the whole based on perception of its parts? There is psychological1 and physiological2,3 evidence for parts-based representations in the brain, and certain computational theories of object recognition rely on such representations4,5. But little is known about how brains or computers might learn the parts of objects. Here we demonstrate an algorithm for non-negative matrix factorization that is able to learn parts of faces and semantic features of text. 
This is in contrast to other methods, such as principal components analysis and vector quantization, that learn holistic, not parts-based, representations. Non-negative matrix factorization is distinguished from the other methods by its use of non-negativity constraints. These constraints lead to a parts-based representation because they allow only additive, not subtractive, combinations. When non-negative matrix factorization is implemented as a neural network, parts-based representations emerge by virtue of two properties: the firing rates of neurons are never negative and synaptic strengths do not change sign.", "Abstract A novel fixed point algorithm for positive tensor factorization (PTF) is introduced. The update rules efficiently minimize the reconstruction error of a positive tensor over positive factors. Tensors of arbitrary order can be factorized, which extends earlier results in the literature. Experiments show that the factors of PTF are easier to interpret than those produced by methods based on the singular value decomposition, which might contain negative values. We also illustrate the tendency of PTF to generate sparsely distributed codes." ] }
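The Lee-Seung multiplicative updates for the LS objective mentioned above can be sketched in a few lines. This is a minimal plain-Python illustration (helper names and the small `eps` guard in the denominators are our own); each entry is multiplied by a nonnegative ratio, so nonnegativity is preserved automatically:

```python
import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def nmf_ls(V, r, iters=50, eps=1e-9):
    """Lee-Seung multiplicative updates minimizing ||V - W H||_F^2 with W, H >= 0."""
    random.seed(0)
    m, n = len(V), len(V[0])
    W = [[random.random() for _ in range(r)] for _ in range(m)]
    H = [[random.random() for _ in range(n)] for _ in range(r)]
    for _ in range(iters):
        # H <- H * (W^T V) / (W^T W H), elementwise
        WtV = matmul(transpose(W), V)
        WtWH = matmul(transpose(W), matmul(W, H))
        H = [[H[i][j] * WtV[i][j] / (WtWH[i][j] + eps) for j in range(n)]
             for i in range(r)]
        # W <- W * (V H^T) / (W H H^T), elementwise
        VHt = matmul(V, transpose(H))
        WHHt = matmul(W, matmul(H, transpose(H)))
        W = [[W[i][j] * VHt[i][j] / (WHHt[i][j] + eps) for j in range(r)]
             for i in range(m)]
    return W, H
```

On exactly rank-1 nonnegative data the rank-1 update is solved in a single alternation, so the reconstruction becomes essentially exact.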
1304.4964
1988720938
Tensor factorizations with nonnegativity constraints have found application in analysing data from cyber traffic, social networks, and other areas. We consider application data best described as being generated by a Poisson process e.g. count data, which leads to sparse tensors that can be modelled by sparse factor matrices. In this paper, we investigate efficient techniques for computing an appropriate canonical polyadic tensor factorization based on the Kullback–Leibler divergence function. We propose novel subproblem solvers within the standard alternating block variable approach. Our new methods exploit structure and reformulate the optimization problem as small independent subproblems. We employ bound-constrained Newton and quasi-Newton methods. We compare our algorithms against other codes, demonstrating superior speed for high accuracy results and the ability to quickly find sparse solutions.
Lee and Seung's multiplicative update method @cite_12 @cite_0 @cite_11 is the basis for most NTF algorithms that minimize the K-L divergence function. Chi and Kolda provide an improved multiplicative update scheme for K-L that addresses performance and convergence issues as elements approach zero @cite_9 ; we compare to their method in . By studying K-L-based NTF as an alternative Csiszar-Tusnady procedure, Zafeiriou and Petrou @cite_48 provide a probabilistic interpretation of NTF along with a new multiplicative update scheme. The multiplicative update is equivalent to a scaled steepest-descent step @cite_7 , making it a first-order optimization method. Since our method uses second-order information, it allows for convergence to higher accuracy and a better determination of sparsity in the factorization.
{ "cite_N": [ "@cite_7", "@cite_48", "@cite_9", "@cite_0", "@cite_12", "@cite_11" ], "mid": [ "2135029798", "2129455341", "2024356620", "", "1902027874", "2073502026" ], "abstract": [ "Non-negative matrix factorization (NMF) has previously been shown to be a useful decomposition for multivariate data. Two different multiplicative algorithms for NMF are analyzed. They differ only slightly in the multiplicative factor used in the update rules. One algorithm can be shown to minimize the conventional least squares error while the other minimizes the generalized Kullback-Leibler divergence. The monotonic convergence of both algorithms can be proven using an auxiliary function analogous to that used for proving convergence of the Expectation-Maximization algorithm. The algorithms can also be interpreted as diagonally rescaled gradient descent, where the rescaling factor is optimally chosen to ensure convergence.", "In this paper we study Nonnegative Tensor Factorization (NTF) based on the Kullback---Leibler (KL) divergence as an alternative Csiszar---Tusnady procedure. We propose new update rules for the aforementioned divergence that are based on multiplicative update rules. The proposed algorithms are built on solid theoretical foundations that guarantee that the limit point of the iterative algorithm corresponds to a stationary solution of the optimization procedure. Moreover, we study the convergence properties of the optimization procedure and we present generalized pythagorean rules. Furthermore, we provide clear probabilistic interpretations of these algorithms. Finally, we discuss the connections between generalized Probabilistic Tensor Latent Variable Models (PTLVM) and NTF, proposing in that way algorithms for PTLVM for arbitrary multivariate probabilistic mass functions.", "Tensors have found application in a variety of fields, ranging from chemometrics to signal processing and beyond. 
In this paper, we consider the problem of multilinear modeling of sparse count data. Our goal is to develop a descriptive tensor factorization model of such data, along with appropriate algorithms and theory. To do so, we propose that the random variation is best described via a Poisson distribution, which better describes the zeros observed in the data as compared to the typical assumption of a Gaussian distribution. Under a Poisson assumption, we fit a model to observed data using the negative log-likelihood score. We present a new algorithm for Poisson tensor factorization called CANDECOMP--PARAFAC alternating Poisson regression (CP-APR) that is based on a majorization-minimization approach. It can be shown that CP-APR is a generalization of the Lee--Seung multiplicative updates. We show how to prevent the algorithm from converging to non-KKT points and prove convergence of CP-APR under mil...", "", "Is perception of the whole based on perception of its parts? There is psychological1 and physiological2,3 evidence for parts-based representations in the brain, and certain computational theories of object recognition rely on such representations4,5. But little is known about how brains or computers might learn the parts of objects. Here we demonstrate an algorithm for non-negative matrix factorization that is able to learn parts of faces and semantic features of text. This is in contrast to other methods, such as principal components analysis and vector quantization, that learn holistic, not parts-based, representations. Non-negative matrix factorization is distinguished from the other methods by its use of non-negativity constraints. These constraints lead to a parts-based representation because they allow only additive, not subtractive, combinations. 
When non-negative matrix factorization is implemented as a neural network, parts-based representations emerge by virtue of two properties: the firing rates of neurons are never negative and synaptic strengths do not change sign.", "Abstract A novel fixed point algorithm for positive tensor factorization (PTF) is introduced. The update rules efficiently minimize the reconstruction error of a positive tensor over positive factors. Tensors of arbitrary order can be factorized, which extends earlier results in the literature. Experiments show that the factors of PTF are easier to interpret than those produced by methods based on the singular value decomposition, which might contain negative values. We also illustrate the tendency of PTF to generate sparsely distributed codes." ] }
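The equivalence of the K-L multiplicative update to a diagonally rescaled gradient-descent step, noted in the surrounding discussion, can be verified numerically on a tiny hand-rolled example (all matrices and names below are illustrative):

```python
def kl_update_vs_scaled_gradient():
    """Check that the K-L multiplicative update for H equals a gradient step with
    per-entry step size H[a][b] / sum_k W[k][a], i.e. diagonally rescaled steepest
    descent on D(V||WH) = sum_ij [ V_ij*log(V_ij/(WH)_ij) - V_ij + (WH)_ij ]."""
    V = [[1.0, 2.0], [3.0, 4.0]]
    W = [[0.5, 1.0], [1.5, 0.5]]
    H = [[1.0, 0.5], [0.5, 2.0]]
    m, r, n = 2, 2, 2
    WH = [[sum(W[i][k] * H[k][j] for k in range(r)) for j in range(n)] for i in range(m)]
    mult, scaled = [], []
    for a in range(r):
        col = sum(W[k][a] for k in range(m))              # sum_k W[k][a]
        for b in range(n):
            num = sum(W[k][a] * V[k][b] / WH[k][b] for k in range(m))
            mult.append(H[a][b] * num / col)              # multiplicative update
            grad = col - num                              # dD/dH[a][b]
            scaled.append(H[a][b] - (H[a][b] / col) * grad)
    return mult, scaled
```

Algebraically, H - (H/col)*(col - num) = H*num/col, so the two update rules coincide entry by entry.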
1304.4964
1988720938
Tensor factorizations with nonnegativity constraints have found application in analysing data from cyber traffic, social networks, and other areas. We consider application data best described as being generated by a Poisson process e.g. count data, which leads to sparse tensors that can be modelled by sparse factor matrices. In this paper, we investigate efficient techniques for computing an appropriate canonical polyadic tensor factorization based on the Kullback–Leibler divergence function. We propose novel subproblem solvers within the standard alternating block variable approach. Our new methods exploit structure and reformulate the optimization problem as small independent subproblems. We employ bound-constrained Newton and quasi-Newton methods. We compare our algorithms against other codes, demonstrating superior speed for high accuracy results and the ability to quickly find sparse solutions.
Recently, Hsieh and Dhillon @cite_13 reported algorithms for NMF with both LS and K-L objectives. Their method updates one variable at a time, solving a nonlinear scalar function using Newton's method with a constant step size. They achieve good performance for the LS objective by taking the variables in a particular order based on gradient information; however, for the more complex K-L objective, they must cycle through all the variables one by one. Our algorithms solve convex row subproblems with @math variables using second-order information; solving these subproblems one variable at a time by coordinate descent would likely converge at a much slower rate [nocedal1999numerical, pp. 230-231].
{ "cite_N": [ "@cite_13" ], "mid": [ "2022712430" ], "abstract": [ "Nonnegative Matrix Factorization (NMF) is an effective dimension reduction method for non-negative dyadic data, and has proven to be useful in many areas, such as text mining, bioinformatics and image processing. NMF is usually formulated as a constrained non-convex optimization problem, and many algorithms have been developed for solving it. Recently, a coordinate descent method, called FastHals, has been proposed to solve least squares NMF and is regarded as one of the state-of-the-art techniques for the problem. In this paper, we first show that FastHals has an inefficiency in that it uses a cyclic coordinate descent scheme and thus, performs unneeded descent steps on unimportant variables. We then present a variable selection scheme that uses the gradient of the objective function to arrive at a new coordinate descent method. Our new method is considerably faster in practice and we show that it has theoretical convergence guarantees. Moreover when the solution is sparse, as is often the case in real applications, our new method benefits by selecting important variables to update more often, thus resulting in higher speed. As an example, on a text dataset RCV1, our method is 7 times faster than FastHals, and more than 15 times faster when the sparsity is increased by adding an L1 penalty. We also develop new coordinate descent methods when error in NMF is measured by KL-divergence by applying the Newton method to solve the one-variable sub-problems. Experiments indicate that our algorithm for minimizing the KL-divergence is faster than the Lee & Seung multiplicative rule by a factor of 10 on the CBCL image dataset." ] }
1304.4964
1988720938
Tensor factorizations with nonnegativity constraints have found application in analysing data from cyber traffic, social networks, and other areas. We consider application data best described as being generated by a Poisson process e.g. count data, which leads to sparse tensors that can be modelled by sparse factor matrices. In this paper, we investigate efficient techniques for computing an appropriate canonical polyadic tensor factorization based on the Kullback–Leibler divergence function. We propose novel subproblem solvers within the standard alternating block variable approach. Our new methods exploit structure and reformulate the optimization problem as small independent subproblems. We employ bound-constrained Newton and quasi-Newton methods. We compare our algorithms against other codes, demonstrating superior speed for high accuracy results and the ability to quickly find sparse solutions.
A row subproblem reformulation similar to ours is noted in earlier papers exploring the LS objective, but it never led to Hessian-based methods that exploit sparsity as ours do. Gonzalez and Zhang use the reformulation with a multiplicative update method for NMF @cite_35 but do not generalize to tensors or the K-L objective. The authors of @cite_23 note that the reformulation is suitable for parallelizing a Hessian-based method for NTF using LS. Kim and Park use the reformulation for NTF with LS @cite_16 , deriving small bound-constrained LS subproblems. Their method solves the LS subproblems by exact matrix factorization, without exploiting sparsity, and features a block principal pivoting method for choosing the active set. Other works solve the LS objective by taking advantage of row-by-row or column-by-column subproblem decomposition @cite_5 @cite_17 @cite_14 .
{ "cite_N": [ "@cite_35", "@cite_14", "@cite_23", "@cite_5", "@cite_16", "@cite_17" ], "mid": [ "88153722", "", "1542843314", "2026034143", "1561391377", "" ], "abstract": [ "Approximate nonnegative matrix factorization is an emerging technique with a wide spectrum of potential applications in data analysis. Currently, the most-used algorithms for this problem are those proposed by Lee and Seung [7]. In this paper we present a variation of one of the Lee-Seung algorithms with a notably improved performance. We also show that algorithms of this type do not necessarily converge to local minima.", "", "Alternative least squares (ALS) algorithm is considered as a \"work-horse\" algorithm for general tensor factorizations. For nonnegative tensor factorizations (NTF), we usually use a nonlinear projection (rectifier) to remove negative entries during the iteration process. However, this kind of ALS algorithm often fails and cannot converge to the desired solution. In this paper, we proposed a novel algorithm for NTF by recursively solving nonnegative quadratic programming problems. The validity and high performance of the proposed algorithm has been confirmed for difficult benchmarks, and also in an application of object classification.", "Nonnegative matrix factorization (NMF) and its extensions such as Nonnegative Tensor Factorization (NTF) have become prominent techniques for blind sources separation (BSS), analysis of image databases, data mining and other information retrieval and clustering applications. In this paper we propose a family of efficient algorithms for NMF NTF, as well as sparse nonnegative coding and representation, that has many potential applications in computational neuroscience, multi-sensory processing, compressed sensing and multidimensional data analysis. We have developed a class of optimized local algorithms which are referred to as Hierarchical Alternating Least Squares (HALS) algorithms. 
For these purposes, we have performed sequential constrained minimization on a set of squared Euclidean distances. We then extend this approach to robust cost functions using the alpha and beta divergences and derive flexible update rules. Our algorithms are locally stable and work well for NMF-based blind source separation (BSS) not only for the over-determined case but also for an under-determined (over-complete) case (i.e., for a system which has less sensors than sources) if data are sufficiently sparse. The NMF learning rules are extended and generalized for N-th order nonnegative tensor factorization (NTF). Moreover, these algorithms can be tuned to different noise statistics by adjusting a single parameter. Extensive experimental results confirm the accuracy and computational performance of the developed algorithms, especially, with usage of multi-layer hierarchical NMF approach [3].", "We introduce an efficient algorithm for computing a low-rank nonnegative CANDECOMP PARAFAC (NNCP) decomposition. In text mining, signal processing, and computer vision among other areas, imposing nonnegativity constraints to the low-rank factors of matrices and tensors has been shown an effective technique providing physically meaningful interpretation. A principled methodology for computing NNCP is alternating nonnegative least squares, in which the nonnegativity-constrained least squares (NNLS) problems are solved in each iteration. In this chapter, we propose to solve the NNLS problems using the block principal pivoting method. The block principal pivoting method overcomes some difficulties of the classical active method for the NNLS problems with a large number of variables. We introduce techniques to accelerate the block principal pivoting method for multiple right-hand sides, which is typical in NNCP computation. Computational experiments show the state-of-the-art performance of the proposed method.", "" ] }
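The row-by-row / column-by-column decomposition of the LS subproblem can be seen concretely in the rank-1 case, where each column of H decouples into a one-variable nonnegative least-squares problem with a closed form (an illustrative sketch under our own notation):

```python
def ls_columns_rank1(V, w):
    """The LS subproblem min_{H >= 0} ||V - w h^T||_F^2 with a fixed column
    vector w splits into one independent nonnegative least-squares problem per
    column of V; in this rank-1 case each has the closed form
        h_j = max(0, w^T v_j / w^T w)."""
    wtw = sum(wi * wi for wi in w)
    return [max(0.0, sum(wi * V[i][j] for i, wi in enumerate(w)) / wtw)
            for j in range(len(V[0]))]
```

The same independence across columns (or rows, for the W update) is what the parallel and block-pivoting methods above exploit at higher rank.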
1304.4964
1988720938
Tensor factorizations with nonnegativity constraints have found application in analysing data from cyber traffic, social networks, and other areas. We consider application data best described as being generated by a Poisson process e.g. count data, which leads to sparse tensors that can be modelled by sparse factor matrices. In this paper, we investigate efficient techniques for computing an appropriate canonical polyadic tensor factorization based on the Kullback–Leibler divergence function. We propose novel subproblem solvers within the standard alternating block variable approach. Our new methods exploit structure and reformulate the optimization problem as small independent subproblems. We employ bound-constrained Newton and quasi-Newton methods. We compare our algorithms against other codes, demonstrating superior speed for high accuracy results and the ability to quickly find sparse solutions.
Our algorithms are similar in spirit to the work of Kim, Sra and Dhillon @cite_24 , which applies a projected quasi-Newton algorithm (called PQN in this paper) to solving NMF with a K-L objective. Like PQN, our algorithms identify active variables, compute a Newton-like direction in the space of free variables, and find a new iterate using a projected backtracking line search. We differ from PQN in reformulating the subproblem and in computing a damped Newton direction; both improvements make a huge difference in performance for large-scale tensor problems. We compare to PQN in .
{ "cite_N": [ "@cite_24" ], "mid": [ "2028912194" ], "abstract": [ "Numerous scientific applications across a variety of fields depend on box-constrained convex optimization. Box-constrained problems therefore continue to attract research interest. We address box-constrained (strictly convex) problems by deriving two new quasi-Newton algorithms. Our algorithms are positioned between the projected-gradient [J. B. Rosen, J. SIAM, 8 (1960), pp. 181-217] and projected-Newton [D. P. Bertsekas, SIAM J. Control Optim., 20 (1982), pp. 221-246] methods. We also prove their convergence under a simple Armijo step-size rule. We provide experimental results for two particular box-constrained problems: nonnegative least squares (NNLS), and nonnegative Kullback-Leibler (NNKL) minimization. For both NNLS and NNKL our algorithms perform competitively as compared to well-established methods on medium-sized problems; for larger problems our approach frequently outperforms the competition." ] }
1304.4964
1988720938
Tensor factorizations with nonnegativity constraints have found application in analysing data from cyber traffic, social networks, and other areas. We consider application data best described as being generated by a Poisson process e.g. count data, which leads to sparse tensors that can be modelled by sparse factor matrices. In this paper, we investigate efficient techniques for computing an appropriate canonical polyadic tensor factorization based on the Kullback–Leibler divergence function. We propose novel subproblem solvers within the standard alternating block variable approach. Our new methods exploit structure and reformulate the optimization problem as small independent subproblems. We employ bound-constrained Newton and quasi-Newton methods. We compare our algorithms against other codes, demonstrating superior speed for high accuracy results and the ability to quickly find sparse solutions.
All-at-once optimization methods, including Hessian-based algorithms, have been applied to NTF with the LS objective function. As an example, Paatero replaces the nonnegativity constraints with a barrier function @cite_31 to yield an unconstrained optimization problem, and Phan, Tichavsky and Cichocki @cite_26 apply a fast damped Gauss-Newton algorithm for minimizing a similar penalized objective. We are not aware of any work on all-at-once methods for the K-L objective in NTF.
{ "cite_N": [ "@cite_31", "@cite_26" ], "mid": [ "2022242697", "2108020291" ], "abstract": [ "Abstract A time-efficient algorithm PMF3 is presented for solving the three-way PARAFAC (CANDECOMP) factor analytic model. In contrast to the usual alternating least squares, the PMF3 algorithm computes changes to all three modes simultaneously. This typically leads to convergence in 40–100 iteration steps. The equations of the weighted multilinear least squares fit are given. The optional non-negativity is achieved by imposing a logarithmic penalty function. The algorithm contains a possibility for dynamical reweighting of the data during the iteration, allowing a robust analysis of outlier-containing data. The problems typical of PARAFAC models are discussed (but not solved): multiple local solutions, degenerate solutions, non-identifiable solutions. The question of how to verify the solution is discussed at length. The program PMF3 is available for 486-Pentium based PC computers.", "Alternating optimization algorithms for canonical polyadic decomposition (with without nonnegative constraints) often accompany update rules with low computational cost, but could face problems of swamps, bottlenecks, and slow convergence. All-at-once algorithms can deal with such problems, but always demand significant temporary extra-storage, and high computational cost. In this paper, we propose an all-at-once algorithm with low complexity for sparse and nonnegative tensor factorization based on the damped Gauss-Newton iteration. Especially, for low-rank approximations, the proposed algorithm avoids building up Hessians and gradients, reduces the computational cost dramatically. Moreover, we proposed selection strategies for regularization parameters. The proposed algorithm has been verified to overwhelmingly outperform “state-of-the-art” NTF algorithms for difficult benchmarks, and for real-world application such as clustering of the ORL face database." ] }
1304.4964
1988720938
Tensor factorizations with nonnegativity constraints have found application in analysing data from cyber traffic, social networks, and other areas. We consider application data best described as being generated by a Poisson process, e.g., count data, which leads to sparse tensors that can be modelled by sparse factor matrices. In this paper, we investigate efficient techniques for computing an appropriate canonical polyadic tensor factorization based on the Kullback–Leibler divergence function. We propose novel subproblem solvers within the standard alternating block variable approach. Our new methods exploit structure and reformulate the optimization problem as small independent subproblems. We employ bound-constrained Newton and quasi-Newton methods. We compare our algorithms against other codes, demonstrating superior speed for high accuracy results and the ability to quickly find sparse solutions.
Finally, we note that all methods, including ours, find only a locally optimal solution to the NTF problem. Finding the global solution is generally much harder; for instance, Vavasis @cite_34 proves it is NP-hard for an NMF model that fits the data exactly.
{ "cite_N": [ "@cite_34" ], "mid": [ "2124172487" ], "abstract": [ "Nonnegative matrix factorization (NMF) has become a prominent technique for the analysis of image databases, text databases, and other information retrieval and clustering applications. The problem is most naturally posed as continuous optimization. In this report, we define an exact version of NMF. Then we establish several results about exact NMF: (i) that it is equivalent to a problem in polyhedral combinatorics; (ii) that it is NP-hard; and (iii) that a polynomial-time local search heuristic exists." ] }
1304.5472
2949063410
The author's presentation of multilevel Monte Carlo path simulation at the MCQMC 2006 conference stimulated a lot of research into multilevel Monte Carlo methods. This paper reviews the progress since then, emphasising the simplicity, flexibility and generality of the multilevel Monte Carlo approach. It also offers a few original ideas and suggests areas for future research.
Prior to the author's first publications @cite_11 @cite_8 on MLMC for Brownian path simulations, Heinrich developed a multilevel Monte Carlo method for parametric integration, the evaluation of functionals arising from the solution of integral equations, and weakly singular integral operators @cite_3 @cite_31 @cite_51 @cite_6 @cite_14 . Parametric integration concerns the estimation of @math where @math is a finite-dimensional random variable and @math is a parameter. In the simplest case in which @math is a real variable in the range @math , having estimated the value of @math and @math , one can use @math as a control variate when estimating the value of @math . This approach can then be applied recursively for other intermediate values of @math , yielding large savings if @math is sufficiently smooth with respect to @math . Although this does not quite fit into the general MLMC form given in the previous section, the recursive control variate approach is very similar, and its complexity analysis closely parallels the analysis to be presented in the next section.
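The control variate idea described above can be sketched in a few lines. This is a minimal illustration, not Heinrich's algorithm: the integrand `f(lam, x)` below is a made-up smooth parametric family, and the sample sizes are arbitrary. The point is that once the endpoints have been estimated accurately, the midpoint only needs a few fresh samples for the low-variance correction term.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(lam, x):
    # hypothetical smooth parametric integrand, standing in for the
    # family f(., lambda) in the parametric-integration setting
    return np.exp(-lam * x**2)

def plain_mc(lam, n):
    # crude Monte Carlo estimate of E[f(lambda, X)], X ~ N(0, 1)
    x = rng.standard_normal(n)
    return f(lam, x).mean()

# Estimate the endpoints lambda = 0 and lambda = 1 with many samples,
# then reuse their average as a control variate at lambda = 1/2:
# only the (low-variance) difference needs fresh samples.
I0 = plain_mc(0.0, 100_000)
I1 = plain_mc(1.0, 100_000)
x = rng.standard_normal(2_000)
correction = (f(0.5, x) - 0.5 * (f(0.0, x) + f(1.0, x))).mean()
I_half = 0.5 * (I0 + I1) + correction
```

Because `f` is smooth in the parameter, the correction term has far smaller variance than `f(0.5, x)` itself, which is exactly the source of the savings; applying the same trick recursively at further intermediate parameter values gives the multilevel structure.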
{ "cite_N": [ "@cite_14", "@cite_8", "@cite_3", "@cite_6", "@cite_31", "@cite_51", "@cite_11" ], "mid": [ "2050871708", "", "2004973016", "2076544308", "155909673", "", "2117358914" ], "abstract": [ "The Monte Carlo complexity of computing integrals depending on a parameter is analyzed for smooth integrands. An optimal algorithm is developed on the basis of a multigrid variance reduction technique. The complexity analysis implies that our algorithm attains a higher convergence rate than any deterministic algorithm. Moreover, because of savings due to computation on multiple grids, this rate is also higher than that of previously developed Monte Carlo algorithms for parametric integration.", "", "The problem of the global solution of Fredholm integral equations is studied. This means that one seeks to approximate the full solution function (as opposed to the local problem, where only the value of the solution in a single point or a functional of the solution is sought). The Monte Carlo complexity, i.e., the complexity of the stochastic solution of this problem, is analyzed. The framework for this analysis is provided by information-based complexity theory. The investigations complement previous ones on the stochastic complexity of the local solution and on deterministic complexity of both local and global solutions. The results show that even in the global case Monte Carlo algorithms can perform better than deterministic ones, although the difference is not as large as in the local case.", "We study the randomized approximation of weakly singular integral operators. For a suitable class of kernels having a standard type of singularity and being otherwise of finite smoothness, we develop a Monte Carlo multilevel method, give convergence estimates and prove lower bounds which show the optimality of this method and establish the complexity. 
As an application we obtain optimal methods for and the complexity of randomized solution of the Poisson equation in simple domains, when the solution is sought on subdomains of arbitrary dimension.", "Approximation properties of the underlying estimator are used to improve the efficiency of the method of dependent tests. A multilevel approximation procedure is developed such that in each level the number of samples is balanced with the level-dependent variance, resulting in a considerable reduction of the overall computational cost. The new technique is applied to the Monte Carlo estimation of integrals depending on a parameter.", "", "In this paper we show that the Milstein scheme can be used to improve the convergence of the multilevel Monte Carlo method for scalar stochastic differential equations. Numerical results for Asian, lookback, barrier and digital options demonstrate that the computational cost to achieve a root-mean-square error of e is reduced to O(e -2). This is achieved through a careful construction of the multilevel estimator which computes the difference in expected payoff when using different numbers of timesteps." ] }
1304.5472
2949063410
The author's presentation of multilevel Monte Carlo path simulation at the MCQMC 2006 conference stimulated a lot of research into multilevel Monte Carlo methods. This paper reviews the progress since then, emphasising the simplicity, flexibility and generality of the multilevel Monte Carlo approach. It also offers a few original ideas and suggests areas for future research.
In 2005, Kebaier @cite_45 developed a two-level approach for path simulation which is very similar to the author's approach presented in the next section. The only differences are the use of just two levels and the use of a general multiplicative factor, as in the standard control variate approach. A similar multilevel approach was under development at the same time by Speight, but was not published until later @cite_15 @cite_27 .
{ "cite_N": [ "@cite_27", "@cite_15", "@cite_45" ], "mid": [ "2035567783", "160467666", "2024591553" ], "abstract": [ "I present a self-contained introduction to multigrid methods with an emphasis on techniques relevant to dynamic programming and related problems. A probabilistic interpretation of the numerical principles is highlighted. Multigrid solvers are shown to be naturally matched to the challenges posed by intractable structural dynamic models routinely encountered in applied economics. I argue that multigrid techniques have potential to substantially extend the scale and complexity of models under consideration. Multigrid also provides a unified computational framework to extend model solvers to perform sensitivity analysis, calibration, estimation, and counterfactual policy experiments.", "We present a new variance reduction technique that naturally applies to price financial derivatives by Monte Carlo simulation. Inspired by multigrid methods for solving PDEs, the technique is based on control variates derived from a sequence of approximations that converge pathwise to a limiting model. It applies to a large class of problems, and is easy to implement. Theory and computational results show this method can substantially reduce computational time relative to crude Monte Carlo estimation and is competitive with other variance reduction techniques under Monte Carlo sampling.", "We study the approximation of Ef(X-T) by a Monte Carlo algorithm, where X is the solution of a stochastic differential equation and f is a given function. We introduce a new variance reduction method, which can be viewed as a statistical analogue of Romberg extrapolation method. Namely, we use two Euler schemes with steps delta and delta(beta), 0 < beta < 1. This leads to an algorithm which, for a given level of the statistical error, has a complexity significantly lower than the complexity of the standard Monte Carlo method. 
We analyze the asymptotic error of this algorithm in the context of general (possibly degenerate) diffusions. In order to find the optimal beta (which turns out to be beta = 1 2), we establish a central limit: type theorem, based on a result of Jacod and Protter for the asymptotic distribution of the error in the Euler scheme. We test our method on various examples. In particular, we adapt it to Asian options. In this setting, we have a CLT and, as a by-product, an explicit expansion of the discretization error." ] }
1304.5575
2119464254
In this paper we address the problem of estimating the ratio @math where @math is a density function and @math is another density, or, more generally an arbitrary function. Knowing or approximating this ratio is needed in various problems of inference and integration, in particular, when one needs to average a function with respect to one probability distribution, given a sample from another. It is often referred to as importance sampling in statistical inference and is also closely related to the problem of covariate shift in transfer learning as well as to various MCMC methods. It may also be useful for separating the underlying geometry of a space, say a manifold, from the density function defined on it. Our approach is based on reformulating the problem of estimating @math as an inverse problem in terms of an integral operator corresponding to a kernel, and thus reducing it to an integral equation, known as the Fredholm problem of the first kind. This formulation, combined with the techniques of regularization and kernel methods, leads to a principled kernel-based framework for constructing algorithms and for analyzing them theoretically. The resulting family of algorithms (FIRE, for Fredholm Inverse Regularized Estimator) is flexible, simple and easy to implement. We provide detailed theoretical analysis including concentration bounds and convergence rates for the Gaussian kernel in the case of densities defined on @math , compact domains in @math and smooth @math -dimensional sub-manifolds of the Euclidean space. We also show experimental results including applications to classification and semi-supervised learning within the covariate shift framework and demonstrate some encouraging experimental comparisons. We also show how the parameters of our algorithms can be chosen in a completely unsupervised manner.
The algorithm most closely related to our approach is Kernel Mean Matching (KMM) @cite_16 . KMM is based on the observation that @math , where @math is the feature map corresponding to an RKHS @math . It is rewritten as an optimization problem. The quantity on the right can be estimated given a sample from @math and a sample from @math , and the minimization becomes a quadratic optimization problem over the values of @math at the points sampled from @math . Writing down the feature map explicitly, i.e., recalling that @math , we see that the equality @math is equivalent to the integral equation Eq. , considered as an identity in the Hilbert space @math . Thus the problem of KMM can be viewed within our setting as Type I (see Remark 2 in the introduction), with an RKHS norm but a different optimization algorithm.
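To make the KMM idea concrete, here is a small sketch of matching kernel mean embeddings. This is not the KMM algorithm of @cite_16 (which solves a constrained quadratic program); it is an unconstrained, ridge-regularized variant where the weights have a closed form, and the kernel bandwidth, regularization strength, and sampling distributions are all illustrative choices:

```python
import numpy as np

def gaussian_kernel(X, Z, sigma=1.0):
    # k(x, z) = exp(-||x - z||^2 / (2 sigma^2))
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def kmm_weights(Xp, Xq, sigma=1.0, lam=1e-3):
    """Unconstrained, ridge-regularized variant of kernel mean matching:
    minimize || (1/n_p) sum_i beta_i phi(x_i) - (1/n_q) sum_j phi(z_j) ||_H^2
    plus a ridge penalty, which in kernel form reduces to a linear solve."""
    n_p, n_q = len(Xp), len(Xq)
    K = gaussian_kernel(Xp, Xp, sigma)
    kappa = (n_p / n_q) * gaussian_kernel(Xp, Xq, sigma).sum(axis=1)
    return np.linalg.solve(K + lam * n_p * np.eye(n_p), kappa)

rng = np.random.default_rng(0)
Xp = rng.normal(0.0, 1.0, size=(400, 1))   # sample from p
Xq = rng.normal(0.5, 1.0, size=(400, 1))   # shifted sample from q
beta = kmm_weights(Xp, Xq)                 # approximates q(x)/p(x) at the p-samples
```

The weights upweight the p-samples lying where q has more mass, which is precisely the quantity a density-ratio estimator targets; the true KMM additionally imposes nonnegativity and normalization constraints on `beta` via a QP solver.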
{ "cite_N": [ "@cite_16" ], "mid": [ "2112483442" ], "abstract": [ "We consider the scenario where training and test data are drawn from different distributions, commonly referred to as sample selection bias. Most algorithms for this setting try to first recover sampling distributions and then make appropriate corrections based on the distribution estimate. We present a nonparametric method which directly produces resampling weights without distribution estimation. Our method works by matching distributions between training and testing sets in feature space. Experimental results demonstrate that our method works well in practice." ] }
1304.5575
2119464254
In this paper we address the problem of estimating the ratio @math where @math is a density function and @math is another density, or, more generally an arbitrary function. Knowing or approximating this ratio is needed in various problems of inference and integration, in particular, when one needs to average a function with respect to one probability distribution, given a sample from another. It is often referred to as importance sampling in statistical inference and is also closely related to the problem of covariate shift in transfer learning as well as to various MCMC methods. It may also be useful for separating the underlying geometry of a space, say a manifold, from the density function defined on it. Our approach is based on reformulating the problem of estimating @math as an inverse problem in terms of an integral operator corresponding to a kernel, and thus reducing it to an integral equation, known as the Fredholm problem of the first kind. This formulation, combined with the techniques of regularization and kernel methods, leads to a principled kernel-based framework for constructing algorithms and for analyzing them theoretically. The resulting family of algorithms (FIRE, for Fredholm Inverse Regularized Estimator) is flexible, simple and easy to implement. We provide detailed theoretical analysis including concentration bounds and convergence rates for the Gaussian kernel in the case of densities defined on @math , compact domains in @math and smooth @math -dimensional sub-manifolds of the Euclidean space. We also show experimental results including applications to classification and semi-supervised learning within the covariate shift framework and demonstrate some encouraging experimental comparisons. We also show how the parameters of our algorithms can be chosen in a completely unsupervised manner.
We also note the connections of the methods in this paper to properties of density-dependent operators in classification and clustering @cite_5 @cite_30 . There are also connections to geometry and density-dependent norms for semi-supervised learning, e.g., @cite_32 .
{ "cite_N": [ "@cite_30", "@cite_5", "@cite_32" ], "mid": [ "", "1487115931", "2104290444" ], "abstract": [ "", "Keywords: Gaussian process ; Nystroem approximation Reference EPFL-CONF-161323 Record created on 2010-12-02, modified on 2016-08-09", "We propose a family of learning algorithms based on a new form of regularization that allows us to exploit the geometry of the marginal distribution. We focus on a semi-supervised framework that incorporates labeled and unlabeled data in a general-purpose learner. Some transductive graph learning algorithms and standard methods including support vector machines and regularized least squares can be obtained as special cases. We use properties of reproducing kernel Hilbert spaces to prove new Representer theorems that provide theoretical basis for the algorithms. As a result (in contrast to purely graph-based approaches) we obtain a natural out-of-sample extension to novel examples and so are able to handle both transductive and truly semi-supervised settings. We present experimental evidence suggesting that our semi-supervised algorithms are able to use unlabeled data effectively. Finally we have a brief discussion of unsupervised and fully supervised learning within our general framework." ] }
1304.5068
2949568172
This paper introduces a redundancy adaptation algorithm for an on-the-fly erasure network coding scheme called Tetrys in the context of real-time video transmission. The algorithm exploits the relationship between the redundancy ratio used by Tetrys and the gain or loss in encoding bit rate from changing a video quality parameter called the Quantization Parameter (QP). Our evaluations show that with equal or lower bandwidth occupation, the video protected by Tetrys with the redundancy adaptation algorithm obtains a PSNR gain of up to 4 dB or more compared to the video without Tetrys protection. We demonstrate that the Tetrys redundancy adaptation algorithm performs well with the variations of both loss pattern and delay induced by the networks. We also show that Tetrys with the redundancy adaptation algorithm outperforms FEC with and without redundancy adaptation.
Our approach differs from the existing work in the following aspects. First, we use an on-the-fly and systematic erasure network coding scheme that shows better performance than FEC codes in terms of packet recovery rate in both single-path and multi-path transmissions @cite_16 @cite_17 . Secondly, the Tetrys redundancy adaptation algorithm focuses on real-time video transmission with a stringent delay constraint required by applications such as video conferencing, while the existing proposals target the context where the receiver has a large playout buffer @cite_18 @cite_11 . Lastly, our algorithm does not add extra bit rate, by exploiting the relationship between the redundancy ratio and the variation of the Quantization Parameter @cite_10 . In @cite_9 , the authors propose a FEC redundancy adaptation algorithm inside the Encoded Multipath Streaming (EMS) scheme. This algorithm increases the redundancy ratio if the residual loss rate after decoding is greater than a certain threshold and vice versa. Our approach is to minimize the residual loss rate to increase the video quality experienced by end users. Furthermore, the redundancy adjustment in @cite_9 is not video-aware while our algorithm adjusts the redundancy ratio based on the changes in the Quantization Parameter.
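The threshold-based adjustment attributed to @cite_9 above can be sketched as a simple controller. This is an illustrative reconstruction, not the EMS implementation: the parameter names, step size, and bounds are all hypothetical.

```python
def adapt_redundancy(redundancy, residual_loss_rate,
                     target=0.01, step=0.05,
                     lo=0.0, hi=0.5):
    """Threshold controller in the spirit of the EMS scheme discussed
    above (hypothetical parameters): raise the redundancy ratio when the
    post-decoding residual loss rate exceeds a target, lower it
    otherwise, clamped to [lo, hi]."""
    if residual_loss_rate > target:
        return min(hi, redundancy + step)
    return max(lo, redundancy - step)

r = 0.10
r = adapt_redundancy(r, residual_loss_rate=0.03)  # losses above target, so increase
```

A video-aware variant, as proposed in the paper, would additionally couple each change in redundancy to a compensating change in the Quantization Parameter so that the total bit rate stays constant.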
{ "cite_N": [ "@cite_18", "@cite_9", "@cite_17", "@cite_16", "@cite_10", "@cite_11" ], "mid": [ "2111176225", "2136073452", "2953124311", "2143752912", "2157072019", "2121703624" ], "abstract": [ "This paper proposes a Random Early Detection Forward Error Correction (RED-FEC) mechanism to improve the quality of video delivered over Wireless Local Area Networks (WLANs). In contrast to previous FEC schemes, in which the rate determination information is fed back from the receiver side, in our proposed method, the redundancy rate is calculated directly at the wireless Access Point (AP) in accordance with the network traffic load, as indicated by the AP queue length. An analytical model is developed to predict the effective packet loss rate of a video stream delivered over a WLAN with RED-FEC protection. The numerical results show that the proposed RED-FEC mechanism consistently achieves higher recovery efficiency than a conventional FEC scheme under high and low network loading conditions.", "Multipath streaming protocols have recently attracted much attention because they provide an effective means to provide high-quality streaming over the Internet. However, many existing schemes require a long start-up delay and thus are not suitable for interactive applications such as video conferencing and tele-presence. In this paper, we focus on real-time live streaming applications with stringent end-to-end latency requirement, say several hundreds of milliseconds. To address these challenges, we take a joint multipath and FEC approach that intelligently splits the FEC-encoded stream among multiple available paths. We develop an analytical model and use asymptotic analysis to derive closed-form, optimal load splitting solutions, which are surprisingly simple yet insightful. To our best knowledge, this is the first work that provides such closed-form optimal solutions. 
Based on the analytical insights, we have designed and implemented a novel Encoded Multipath Streaming (EMS) scheme for real-time live streaming. EMS strives to continuously satisfy the application's QoS requirements by dynamically adjusting the load splitting decisions and the FEC settings. Our simulation results have shown that EMS can not only outperform the existing multipath streaming schemes, but also adapt to the dynamic loss and delay characteristics of the network with minimal overhead.", "Most of multipath multimedia streaming proposals use Forward Error Correction (FEC) approach to protect from packet losses. However, FEC does not sustain well burst of losses even when packets from a given FEC block are spread over multiple paths. In this article, we propose an online multipath convolutional coding for real-time multipath streaming based on an on-the-fly coding scheme called Tetrys. We evaluate the benefits brought out by this coding scheme inside an existing FEC multipath load splitting proposal known as Encoded Multipath Streaming (EMS). We demonstrate that Tetrys consistently outperforms FEC in both uniform and burst losses with EMS scheme. We also propose a modification of the standard EMS algorithm that greatly improves the performance in terms of packet recovery. Finally, we analyze different spreading policies of the Tetrys redundancy traffic between available paths and observe that the longer propagation delay path should be preferably used to carry repair packets.", "A new coding and queue management algorithm is proposed for communication networks that employ linear network coding. The algorithm has the feature that the encoding process is truly online, as opposed to a block-by-block approach. The setup assumes a packet erasure broadcast channel with stochastic arrivals and full feedback, but the proposed scheme is potentially applicable to more general lossy networks with link-by-link feedback. 
The algorithm guarantees that the physical queue size at the sender tracks the backlog in degrees of freedom (also called the virtual queue size). The new notion of a node ldquoseeingrdquo a packet is introduced. In terms of this idea, our algorithm may be viewed as a natural extension of ARQ schemes to coded networks. Our approach, known as the drop-when-seen algorithm, is compared with a baseline queuing approach called drop-when-decoded. It is shown that the expected queue size for our approach is O[(1) (1-rho)] as opposed to Omega[(1) (1-rho)2] for the baseline approach, where rho is the load factor.", "Delay Tolerant Networking (DTN) is currently an open research area following the interest of space companies in the deployment of Internet protocols for the space Internet. Thus, these last years have seen an increase in the number of DTN protocol proposals such as Saratoga or LTP-T. However, the goal of these protocols are more to send much error-free data during a short contact time rather than operating to a strictly speaking reliable data transfer. Beside this, several research work have proposed efficient acknowledgment schemes based on the SNACK mechanism. However, these acknowledgement strategies are not compliant with the DTN protocol principle. In this paper, we propose a novel reliability mechanism with an implicit acknowledgment strategy that could be used either within these new DTN proposals or in the context of multicast transport protocols. This proposal is based on a new erasure coding concept specifically designed to operate efficient reliable transfer over bi-directional links.", "Current adaptive FEC schemes used for video streaming applications alter the redundancy in a block of message packets to adapt to varying channel conditions. However, for many popular streaming applications, both the source-rate and the available bandwidth are constrained. 
In this paper, we present FEC codes that can adapt in real-time to provide higher source-packets recovery without changing the FEC block (N, K) pair constraint. The FEC code profile is changed as function of the number of losses to facilitate an improved data recovery even under severe channel conditions (e.g., number of losses within an N-packet FEC block is larger than N-K). We present a feedback based adaptive FEC scheme, which can adapt in a rate-constrained manner. We also illustrate the utility of this scheme for video streaming applications by analyzing the results of extensive video simulations and comparing our performance to adaptive Reed Solomon FEC schemes. We consider a variety of video sequences and use actual packet traces from WLAN (802.11b) and wired Internet environments. Comparison between the two schemes is conducted on the basis of message packet recovery, PSNR, model based perceptual evaluation and visual subjective evaluation. It is shown that the proposed scheme can significantly improve the video quality and in particular reduce the jerkiness in the received video." ] }
1304.4303
2949730304
To help a user specify and verify quantified queries --- a class of database queries known to be very challenging for all but the most expert users --- one can question the user on whether certain data objects are answers or non-answers to her intended query. In this paper, we analyze the number of questions needed to learn or verify qhorn queries, a special class of Boolean quantified queries whose underlying form is conjunctions of quantified Horn expressions. We provide optimal polynomial-question and polynomial-time learning and verification algorithms for two subclasses of the class qhorn with upper constant limits on a query's causal density.
Our work is influenced by the field of computational learning theory. Using membership questions to learn Boolean formulas was introduced in 1988 @cite_7 . The authors of @cite_14 demonstrated the polynomial learnability of conjunctions of (non-quantified) Horn clauses using membership questions and a more powerful class of questions known as equivalence questions. The learning algorithm runs in time @math where @math is the number of variables and @math is the number of clauses. Interestingly, Angluin proved that there is no PTIME algorithm for learning conjunctions of Horn clauses that uses only membership questions. The algorithm for learning conjunctions of Horn clauses was later extended to learn first-order Horn expressions @cite_10 @cite_20 . First-order Horn expressions contain quantifiers. We differ from this prior work in that in qhorn we quantify over tuples of an object's nested relation; we do not quantify over the values of variables. Our syntactic restrictions on qhorn have counterparts in Boolean formulas. Both qhorn-1 queries and read-once Boolean formulas @cite_13 allow variables to occur at most once. Likewise, neither role-preserving qhorn queries nor the formulas of @cite_0 allow variables to be both head and body variables.
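The membership-query model referenced above can be illustrated with a toy learner. This is not Angluin's Horn-clause algorithm; it is a much simpler sketch for the easy class of monotone conjunctions, where membership questions alone suffice, using one query per variable:

```python
def learn_monotone_conjunction(n, member):
    """Learn a hidden monotone conjunction over n Boolean variables using
    n membership queries. `member(x)` answers whether assignment x
    (a tuple of bools) satisfies the hidden conjunction."""
    relevant = []
    for i in range(n):
        # flip only variable i off in the all-ones assignment; if the
        # formula fails, variable i must appear in the conjunction
        x = tuple(j != i for j in range(n))
        if not member(x):
            relevant.append(i)
    return relevant

hidden = {0, 2}                                  # hidden target: x0 AND x2
member = lambda x: all(x[i] for i in hidden)     # membership-question oracle
learned = learn_monotone_conjunction(4, member)
```

For conjunctions of Horn clauses this single-flip probing no longer works, which is why the algorithm of @cite_14 also needs equivalence questions; Angluin's negative result shows that gap is inherent, not an artifact of the algorithm.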
{ "cite_N": [ "@cite_14", "@cite_7", "@cite_10", "@cite_0", "@cite_13", "@cite_20" ], "mid": [ "1511224498", "2139709458", "2912171497", "2162122584", "2071210909", "" ], "abstract": [ "An algorithm for learning the class of Boolean formulas that are expressible as conjunctions of Horn clauses is presented. (A Horn clause is a disjunction of literals, all but at most one of which is a negated variable). The algorithm uses equivalence queries and membership queries to produce a formula that is logically equivalent to the unknown formula to be learned. The amount of time used by the algorithm is polynomial in the number of variables and the number of clauses in the unknown formula. >", "We consider the problem of using queries to learn an unknown concept. Several types of queries are described and studied: membership, equivalence, subset, superset, disjointness, and exhaustiveness queries. Examples are given of efficient learning methods using various subsets of these queries for formal domains, including the regular languages, restricted classes of context-free languages, the pattern languages, and restricted types of prepositional formulas. Some general lower bound techniques are given. Equivalence queries are compared with Valiant's criterion of probably approximately correct identification under random sampling.", "We study the problem of learning conjunctive concepts from examples on structural domains like the blocks world. This class of concepts is formally defined, and it is shown that even for samples in which each example (positive or negative) is a two-object scene, it is NP-complete to determine if there is any concept in this class that is consistent with the sample. We demonstrate how this result affects the feasibility of Mitchell's version of space approach and how it shows that it is unlikely that this class of concepts is polynomially learnable from random examples alone in the PAC framework of Valiant. 
On the other hand, we show that for any fixed bound on the number of objects per scene, this class is polynomially learnable if, in addition to providing random examples, we allow the learning algorithm to make subset queries. In establishing this result, we calculate the capacity of the hypothesis space of conjunctive concepts in a structural domain and use a general theorem of Vapnik and Chervonenkis. This latter result can also be used to estimate a sample size sufficient for heuristic learning techniques that do not use queries.", "A revision algorithm is a learning algorithm that identifies the target concept, starting from an initial concept. Such an algorithm is considered efficient if its complexity (in terms of the measured resource) is polynomial in the syntactic distance between the initial and the target concept, but only polylogarithmic in the number of variables in the universe. We give efficient revision algorithms in the model of learning with equivalence and membership queries. The algorithms work in a general revision model where both deletion and addition revision operators are allowed. In this model one of the main open problems is the efficient revision of Horn formulas. Two revision algorithms are presented for special cases of this problem: for depth-1 acyclic Horn formulas, and for definite Horn formulas with unique heads.", "A read-once formula is a Boolean formula in which each variable occurs, at most, once. Such formulas are also called μ-formulas or Boolean trees. This paper treats the problem of exactly identifying an unknown read-once formula using specific kinds of queries. The main results are a polynomial-time algorithm for exact identification of monotone read-once formulas using only membership queries, and a polynomial-time algorithm for exact identification of general read-once formulas using equivalence and membership queries (a protocol based on the notion of a minimally adequate teacher [1]). 
The results of the authors improve on Valiant's previous results for read-once formulas [26]. It is also shown, that no polynomial-time algorithm using only membership queries or only equivalence queries can exactly identify all read-once formulas.", "" ] }
1304.4303
2949730304
To help a user specify and verify quantified queries --- a class of database queries known to be very challenging for all but the most expert users --- one can question the user on whether certain data objects are answers or non-answers to her intended query. In this paper, we analyze the number of questions needed to learn or verify qhorn queries, a special class of Boolean quantified queries whose underlying form is conjunctions of quantified Horn expressions. We provide optimal polynomial-question and polynomial-time learning and verification algorithms for two subclasses of the class qhorn with upper constant limits on a query's causal density.
Using membership (and more powerful) questions to learn concepts within the database domain is not novel. For example, ten Cate, Dalmau, and Kolaitis use membership and equivalence questions to learn schema mappings @cite_15 . A schema mapping is a collection of first-order statements that specify the relationship between the attributes of a source and a target schema. Another example is Staworko and Wieczorek's work on using example XML documents given by the user to infer XML queries @cite_4 . In both these works, the concept class learned is quite different from the qhorn query class.
{ "cite_N": [ "@cite_15", "@cite_4" ], "mid": [ "2068680051", "1973082955" ], "abstract": [ "A schema mapping is a high-level specification of the relationship between a source schema and a target schema. Recently, a line of research has emerged that aims at deriving schema mappings automatically or semi-automatically with the help of data examples, i.e., pairs consisting of a source instance and a target instance that depict, in some precise sense, the intended behavior of the schema mapping. Several different uses of data examples for deriving, refining, or illustrating a schema mapping have already been proposed and studied. In this paper, we use the lens of computational learning theory to systematically investigate the problem of obtaining algorithmically a schema mapping from data examples. Our aim is to leverage the rich body of work on learning theory in order to develop a framework for exploring the power and the limitations of the various algorithmic methods for obtaining schema mappings from data examples. We focus on GAV schema mappings, that is, schema mappings specified by GAV (Global-As-View) constraints. GAV constraints are the most basic and the most widely supported language for specifying schema mappings. We present an efficient algorithm for learning GAV schema mappings using Angluin's model of exact learning with membership and equivalence queries. This is optimal, since we show that neither membership queries nor equivalence queries suffice, unless the source schema consists of unary relations only. We also obtain results concerning the learnability of schema mappings in the context of Valiant's well known PAC (Probably-Approximately-Correct) learning model. 
Finally, as a byproduct of our work, we show that there is no efficient algorithm for approximating the shortest GAV schema mapping fitting a given set of examples, unless the source schema consists of unary relations only.", "We investigate the problem of learning XML queries, path queries and twig queries, from examples given by the user. A learning algorithm takes on the input a set of XML documents with nodes annotated by the user and returns a query that selects the nodes in a manner consistent with the annotation. We study two learning settings that differ with the types of annotations. In the first setting the user may only indicate required nodes that the query must select (i.e., positive examples). In the second, more general, setting, the user may also indicate forbidden nodes that the query must not select (i.e., negative examples). The query may or may not select any node with no annotation. We formalize what it means for a class of queries to be learnable. One requirement is the existence of a learning algorithm that is sound i.e., always returning a query consistent with the examples given by the user. Furthermore, the learning algorithm should be complete i.e., able to produce every query with sufficiently rich examples. Other requirements involve tractability of the learning algorithm and its robustness to nonessential examples. We identify practical classes of Boolean and unary, path and twig queries that are learnable from positive examples. We also show that adding negative examples to the picture renders learning unfeasible." ] }
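The query-based learning protocols recurring in this record (membership and equivalence questions) have a classic minimal instance: exactly identifying a monotone conjunction using membership queries alone. The sketch below is illustrative only; the oracle interface, variable count, and 0/1 encoding are assumptions, not part of any cited algorithm.

```python
def learn_monotone_conjunction(n, member):
    """Exactly identify a monotone conjunction over n Boolean variables
    using n membership queries.  `member(x)` answers whether assignment
    x (a tuple of 0/1 values) satisfies the hidden conjunction."""
    relevant = []
    for i in range(n):
        # Flip variable i off while keeping every other variable on;
        # if the example becomes negative, variable i must be in the target.
        probe = tuple(0 if j == i else 1 for j in range(n))
        if not member(probe):
            relevant.append(i)
    return relevant

# Hidden target: x0 AND x2 over 4 variables (a hypothetical teacher).
oracle = lambda x: all(x[i] == 1 for i in (0, 2))
print(learn_monotone_conjunction(4, oracle))  # → [0, 2]
```

Each query isolates one variable, so n questions always suffice for this subclass; richer classes such as qhorn need the more involved question strategies the record analyzes.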
1304.4658
1682003616
Personalized PageRank uses random walks to determine the importance or authority of nodes in a graph from the point of view of a given source node. Much past work has considered how to compute personalized PageRank from a given source node to other nodes. In this work we consider the problem of computing personalized PageRanks to a given target node from all source nodes. This problem can be interpreted as finding who supports the target or who is interested in the target. We present an efficient algorithm for computing personalized PageRank to a given target up to any given accuracy. We give a simple analysis of our algorithm's running time in both the average case and the parameterized worst-case. We show that for any graph with @math nodes and @math edges, if the target node is randomly chosen and the teleport probability @math is given, the algorithm will compute a result with @math error in time @math . This is much faster than the previously proposed method of computing personalized PageRank separately from every source node, and it is comparable to the cost of computing personalized PageRank from a single source. We present results from experiments on the Twitter graph which show that the constant factors in our running time analysis are small and our algorithm is efficient in practice.
Personalized PageRank was first suggested in the original PageRank paper @cite_6 , and much follow-up work has considered how to compute it efficiently. Our approach of propagating estimate updates is similar to the approach taken by Jeh and Widom @cite_5 and Berkhin @cite_3 to compute personalized PageRank from a single source. Our equation appears as equation (10) in @cite_5 . Both of these works suggest the heuristic of propagating from the node with the largest unpropagated estimate. Our work is different because we are interested in estimating the values @math for a single target @math , while earlier work was concerned with the values for a single source @math . Because of this, our analysis is completely different, and we are able to prove running time bounds.
{ "cite_N": [ "@cite_5", "@cite_3", "@cite_6" ], "mid": [ "2069153192", "2039191721", "1854214752" ], "abstract": [ "Recent web search techniques augment traditional text matching with a global notion of \"importance\" based on the linkage structure of the web, such as in Google's PageRank algorithm. For more refined searches, this global notion of importance can be specialized to create personalized views of importance--for example, importance scores can be biased according to a user-specified set of initially-interesting pages. Computing and storing all possible personalized views in advance is impractical, as is computing personalized views at query time, since the computation of each view requires an iterative computation over the web graph. We present new graph-theoretical results, and a new technique based on these results, that encode personalized views as partial vectors. Partial vectors are shared across multiple personalized views, and their computation and storage costs scale well with the number of views. Our approach enables incremental computation, so that the construction of personalized views from partial vectors is practical at query time. We present efficient dynamic programming algorithms for computing partial vectors, an algorithm for constructing personalized views from partial vectors, and experimental results demonstrating the effectiveness and scalability of our techniques.", "We introduce a novel bookmark-coloring algorithm (BCA) that computes authority weights over the web pages utilizing the web hyperlink structure. The computed vector (BCV) is similar to the PageRank vector defined for a page-specific teleportation. Meanwhile, BCA is very fast, and BCV is sparse. BCA also has important algebraic properties. 
If several BCVs corresponding to a set of pages (called hub) are known, they can be leveraged in computing arbitrary BCV via a straightforward algebraic process and hub BCVs can be efficiently computed and encoded.", "The importance of a Web page is an inherently subjective matter, which depends on the reader's interests, knowledge and attitudes. But there is still much that can be said objectively about the relative importance of Web pages. This paper describes PageRank, a method for rating Web pages objectively and mechanically, effectively measuring the human interest and attention devoted to them. We compare PageRank to an idealized random Web surfer. We show how to efficiently compute PageRank for large numbers of pages. And, we show how to apply PageRank to search and to user navigation." ] }
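The related_work above describes propagating estimate updates toward a single target rather than from a single source. The following is a minimal reverse-push sketch of that idea under stated assumptions (adjacency encoding, threshold, and queue discipline are illustrative; it is not the cited paper's actual algorithm, which pushes from the largest unpropagated estimate):

```python
def ppr_to_target(in_nbrs, out_deg, target, alpha=0.15, eps=1e-4):
    """Approximate ppr(s, target) for every source s by pushing residual
    mass backward along in-edges.  in_nbrs[v] lists nodes u with an edge
    u -> v; out_deg[u] is u's out-degree.  Maintains the invariant
    ppr(s, t) = estimate[s] + sum_v residual[v] * ppr(s, v), so the
    additive underestimate is at most eps per node at termination."""
    estimate, residual = {}, {target: 1.0}
    frontier = [target]
    while frontier:
        v = frontier.pop()
        r = residual.get(v, 0.0)
        if r <= eps:
            continue
        residual[v] = 0.0
        estimate[v] = estimate.get(v, 0.0) + alpha * r
        for u in in_nbrs.get(v, []):
            old = residual.get(u, 0.0)
            residual[u] = old + (1 - alpha) * r / out_deg[u]
            if old <= eps < residual[u]:   # enqueue on threshold crossing
                frontier.append(u)
    return estimate
```

On the two-node cycle a → b → a with teleport 0.15, the exact values are ppr(b, b) ≈ 0.54 and ppr(a, b) ≈ 0.46, and the push converges to within the eps tolerance of both.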
1304.4658
1682003616
Personalized PageRank uses random walks to determine the importance or authority of nodes in a graph from the point of view of a given source node. Much past work has considered how to compute personalized PageRank from a given source node to other nodes. In this work we consider the problem of computing personalized PageRanks to a given target node from all source nodes. This problem can be interpreted as finding who supports the target or who is interested in the target. We present an efficient algorithm for computing personalized PageRank to a given target up to any given accuracy. We give a simple analysis of our algorithm's running time in both the average case and the parameterized worst-case. We show that for any graph with @math nodes and @math edges, if the target node is randomly chosen and the teleport probability @math is given, the algorithm will compute a result with @math error in time @math . This is much faster than the previously proposed method of computing personalized PageRank separately from every source node, and it is comparable to the cost of computing personalized PageRank from a single source. We present results from experiments on the Twitter graph which show that the constant factors in our running time analysis are small and our algorithm is efficient in practice.
To the best of our knowledge, the only previous work to consider the problem of computing personalized PageRank to a target node was by @cite_9 , where it is used as one phase of an algorithm to identify link-spam. They observe that a node @math 's global PageRank is the average over all nodes @math of @math . Thus to determine how a node @math achieves its global PageRank score, they propose we first find the nodes @math with a high value of @math . Once that set has been found, it can be analyzed to determine if it looks like an organic set of nodes or an artificial link-farm. To compute the values of @math for each @math , they propose taking random walks from every source node and do not consider other methods.
{ "cite_N": [ "@cite_9" ], "mid": [ "2603834791" ], "abstract": [ "Spammers intend to increase the PageRank of certain spam pages by creating a large number of links pointing to them. We propose a novel method based on the concept of personalized PageRank that detects pages with an undeserved high PageRank value without the need of any kind of white or blacklists or other means of human intervention. We assume that spammed pages have a biased distribution of pages that contribute to the undeserved high PageRank value. We define SpamRank by penalizing pages that originate a suspicious PageRank share and personalizing PageRank on the penalties. Our method is tested on a 31 M page crawl of the .de domain with a manually classified 1000-page stratified random sample with bias towards large PageRank values." ] }
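The baseline this record attributes to the spam-detection work — random walks from every source node — can be sketched as a Monte Carlo estimate, using the fact that ppr(s, t) equals the probability that an alpha-terminated walk from s ends at t. Graph encoding, dangling-node handling, and walk counts below are illustrative assumptions, not details from the cited paper:

```python
import random

def ppr_monte_carlo(out_nbrs, nodes, target, alpha=0.15, walks=4000, seed=0):
    """Estimate ppr(s, target) for every source s by simulating `walks`
    alpha-terminating random walks from each node and counting how
    often a walk ends at `target`."""
    rng = random.Random(seed)
    hits = {}
    for s in nodes:
        count = 0
        for _ in range(walks):
            v = s
            while rng.random() > alpha:      # continue with prob 1 - alpha
                nbrs = out_nbrs.get(v)
                if not nbrs:                 # dangling node: restart at s
                    v = s
                else:
                    v = rng.choice(nbrs)
            count += v == target
        hits[s] = count / walks
    return hits
```

This costs `walks` simulations per source, which is exactly the all-sources overhead the record's own target-side algorithm is designed to avoid.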
1304.4553
2165926628
Edge connectivity and vertex connectivity are two fundamental concepts in graph theory. Although by now there is a good understanding of the structure of graphs based on their edge connectivity, our knowledge in the case of vertex connectivity is much more limited. An essential tool in capturing edge connectivity are edge-disjoint spanning trees. The famous results of Tutte and Nash-Williams show that a graph with edge connectivity @math contains @math edge-disjoint spanning trees. We present connected dominating set (CDS) partition and packing as tools that are analogous to edge-disjoint spanning trees and that help us to better grasp the structure of graphs based on their vertex connectivity. The objective of the CDS partition problem is to partition the nodes of a graph into as many connected dominating sets as possible. The CDS packing problem is the corresponding fractional relaxation, where CDSs are allowed to overlap as long as this is compensated by assigning appropriate weights. CDS partition and CDS packing can be viewed as the counterparts of the well-studied edge-disjoint spanning trees, focusing on vertex disjointedness rather than edge disjointness. We constructively show that every @math -vertex-connected graph with @math nodes has a CDS packing of size @math and a CDS partition of size @math . We prove that the @math CDS packing bound is existentially optimal. Using CDS packing, we show that if vertices of a @math -vertex-connected graph are independently sampled with probability @math , then the graph induced by the sampled vertices has vertex connectivity @math . Moreover, using our @math CDS packing, we get a store-and-forward broadcast algorithm with optimal throughput in the networking model where in each round, each node can send one bounded-size message to all its neighbors.
The domatic number of a graph is the size of the largest partition of a graph into dominating sets. In @cite_28 it is shown that for graphs with minimum degree @math , nodes can be partitioned into @math dominating sets efficiently. This implies a @math -approximation, which is shown to be best possible unless @math . Further, Hedetniemi and Laskar @cite_2 present an extensive collection of results revolving around dominating sets. The CDS partition problem was first introduced in @cite_0 , where the size of a maximum CDS partition of a graph @math is called the connected domatic number of @math . Zelinka @cite_26 shows a number of results about the connected domatic number; in particular, that it is upper bounded by the vertex connectivity. @cite_22 shows that the connected domatic number of planar graphs is at most @math and also describes some relations between the number of edges of a graph and its connected domatic number. Finally, @cite_29 argues that a large CDS partition can be useful for balancing energy usage in wireless sensor networks.
{ "cite_N": [ "@cite_26", "@cite_22", "@cite_28", "@cite_29", "@cite_0", "@cite_2" ], "mid": [ "1552743905", "1495272862", "1515157528", "2137996019", "202326759", "70941519" ], "abstract": [ "All graphs considered in this paper are finite graphs without loops and multiple edges. The domatic number of a graph was defined by E. J. Cockayne and S. T. Hedetniemi [1]. Later some related concepts were introduced. The same authors together with R. M. Dawes [2] have introduced the total domatic number; R. Laskar and S. T. Hedetniemi [3] have introduced the connected domatic number. A dominating set (or a total dominating set) in an undirected graph G is a subset D of the vertex set V(G) of G with the property that to each vertex x ∈ V(G) − D (or to each vertex x ∈ V(G), respectively) there exists a vertex y ∈ D adjacent to x. A connected dominating set of G is a dominating set of G with the property that the subgraph of G induced by it is connected. A domatic (or total domatic, or connected domatic) partition of G is a partition of V(G), all of whose classes are dominating (or total dominating, or connected dominating, respectively) sets of G. The maximum number of classes of a domatic (or total domatic, or connected domatic) partition of G is called the domatic (or total domatic, or connected domatic, respectively) number of G. The domatic number of G is denoted by d(G), its total domatic number by d_t(G), its connected domatic number by d_c(G). The connected domatic number of a graph is well defined only for connected graphs; in a disconnected graph there exists no connected dominating set and thus no connected domatic partition, while in every connected graph there exists at least one connected domatic partition, namely that which consists of one class. The connected domatic number of G is closely related to the vertex connectivity number of G.
If G is a connected graph, then a vertex cut of G is a subset R of V(G) with the property that the subgraph of G induced by V(G) − R is disconnected. If G is not a complete graph, then the vertex connectivity number κ(G) is the minimum cardinality of a vertex cut of G. If G is a complete graph (i.e., without vertex cuts) with n vertices, then we put κ(G) = n − 1. Lemma. Let G be a connected graph which is not complete, let R be its vertex cut, let D be its connected dominating set. Then D ∩ R ≠ ∅.", "A dominating set in a graph G is a connected dominating set of G if it induces a connected subgraph of G. The connected domatic number of G is the maximum number of pairwise disjoint, connected dominating sets in V(G). We establish a sharp lower bound on the number of edges in a connected graph with a given order and given connected domatic number. We also show that a planar graph has connected domatic number at most 4 and give a characterization of planar graphs having connected domatic number 3.", "A set of vertices in a graph is a dominating set if every vertex outside the set has a neighbor in the set. The domatic number problem is that of partitioning the vertices of a graph into the maximum number of disjoint dominating sets. Let n denote the number of vertices, @math the minimum degree, and @math the maximum degree. We show that every graph has a domatic partition with @math dominating sets and, moreover, that such a domatic partition can be found in polynomial-time. This implies a @math -approximation algorithm for domatic number, since the domatic number is always at most @math . We also show this to be essentially best possible. Namely, extending the approximation hardness of set cover by combining multiprover protocols with zero-knowledge techniques, we show that for every @math , a @math -approximation implies that @math .
This makes domatic number the first natural maximization problem (known to the authors) that is provably approximable to within polylogarithmic factors but no better. We also show that every graph has a domatic partition with @math dominating sets, where the \"o(1)\" term goes to zero as @math increases. This can be turned into an efficient algorithm that produces a domatic partition of @math sets.", "Wireless ad hoc and sensor networks (WSNs) often require a connected dominating set (CDS) as the underlying virtual backbone for efficient routing. Nodes in a CDS have extra computation and communication load for their role as dominator, subjecting them to an early exhaustion of their battery. A simple mechanism to address this problem is to switch from one CDS to another fresh CDS, rotating the active CDS through a disjoint set of CDSs. This gives rise to the connected domatic partition (CDP) problem, which essentially involves partitioning the nodes V(G) of a graph G into node-disjoint CDSs. We have developed a distributed algorithm for constructing the CDP using our maximal independent set (MIS)-based proximity heuristics, which depends only on connectivity information and does not rely on geographic or geometric information. We show that the size of a CDP that is identified by our algorithm is at least ⌊(δ+1)/(β(c+1))⌋ − f, where δ is the minimum node degree of G, β ≤ 2, c ≤ 11 is a constant for a unit disk graph (UDG), and the expected value of f is εδ|V|, where ε ≪ 1 is a positive constant, and δ ≥ 48. Results of varied testing of our algorithm are positive even for a network of a large number of sensor nodes.
Our scheme also performs better than other related techniques such as the ID-based scheme.", "In a data processing system comprising a plurality of systems each including a plurality of console type typewriters for establishing communications pertaining to data processing operations between an operator and the system, the communication information which is entered into and derived from a central processing unit in each system by the console type typewriter is stored in a character buffer unit corresponding to the console type typewriter in a monitor transfer control unit, and a buffer scanning unit scans the character buffers to transfer the contents therein into a statistical analyzer or processing unit so that communication information may be automatically analyzed and summarized to obtain the data per day or month required for determining whether or not the data processing system has been effectively operated.", "Introduction (S.T. Hedetniemi, R.C. Laskar). Theoretical. Chessboard Domination Problems (E.J. Cockayne). On the Queen Domination Problem (C.M. Grinstead, B. Hahne, D. Van Stone). Recent Problems and Results About Kernels in Directed Graphs (C. Berge, P. Duchet). Critical Concepts in Domination (D.P. Summer). The Bondage Number of a Graph (J.F. ). Chordal Graphs and Upper Irredundance, Upper Domination and Independence (M.S. Jacobson and K. Peters). Regular Totally Domatically Full Graphs (B. Zelinka). Domatically Critical and Domatically Full Graphs (D. Rall). On Generalised Minimal Domination Parameters for Paths (B. Bollobas, E.J. Cockayne, C.M. Mynhardt). New Models. Dominating Cliques in Graphs (M.B. Cozzens, L.L. Kelleher). Covering all Cliques of Graph (Z. Tuza). Factor Domination in Graphs (R.C. Brigham, R. Dutton). The Least Point Covering and Domination Numbers of a Graph (E. Sampathkumar). Algorithmic. Dominating Sets in Perfect Graphs (D.G. Corneil, L.K. Stewart). Unit Disk Graphs (B.N. Clark, C.J. Colbourn, D.S. Johnson). 
Permutation Graphs: Connected Domination and Steiner Trees (C.J. Colbourn, L.K. Stewart). The Discipline Number of a Graph (V. Chvatal, W.J. Cook). Best Location of Service Centers in a Tree-Like Network under Budget Constraints (J. McHugh, Y. Perl). Dominating Cycles in Halin Graphs (M. Skowronska, M.M. Syslo). Finding Dominating Cliques Efficiently, in Strongly Chordal Graphs and Undirected Path Graphs (D. Kratsch). On Minimum Dominating Sets with Minimum Intersection (D.L. Grinstead, P.J. Slater). Bibliography. Bibliography on Domination in Graphs and Some Basic Definitions of Domination Parameters (S.T. Hedetniemi, R.C. Laskar)." ] }
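Since this record turns on the definitions of dominating sets, connected dominating sets, and CDS partitions, a small checker makes them concrete. This is a hypothetical utility over a plain adjacency-dict encoding, not code from any cited work:

```python
from collections import deque

def is_dominating(adj, nodes, dom_set):
    """Every node is in dom_set or adjacent to a member of it."""
    dom = set(dom_set)
    return all(v in dom or dom & set(adj.get(v, ())) for v in nodes)

def is_connected(adj, dom_set):
    """The subgraph induced by dom_set is connected (checked via BFS)."""
    dom = set(dom_set)
    if not dom:
        return False
    start = next(iter(dom))
    seen, queue = {start}, deque([start])
    while queue:
        v = queue.popleft()
        for u in adj.get(v, ()):
            if u in dom and u not in seen:
                seen.add(u)
                queue.append(u)
    return seen == dom

def is_cds_partition(adj, nodes, partition):
    """Check that `partition` splits `nodes` into classes that are each
    connected dominating sets, i.e. that it is a CDS partition."""
    flat = [v for part in partition for v in part]
    return (sorted(flat) == sorted(nodes)
            and all(is_dominating(adj, nodes, p) and is_connected(adj, p)
                    for p in partition))
```

For example, splitting the complete graph K4 into two pairs yields a valid CDS partition, while on the path 0–1–2 the class {0, 2} dominates but induces a disconnected subgraph, so the check fails.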
1304.3742
2953044842
Nutrition is a key factor in people's overall health. Hence, understanding the nature and dynamics of population-wide dietary preferences over time and space can be valuable in public health. To date, studies have leveraged small samples of participants via food intake logs or treatment data. We propose a complementary source of population data on nutrition obtained via Web logs. Our main contribution is a spatiotemporal analysis of population-wide dietary preferences through the lens of logs gathered by a widely distributed Web-browser add-on, using the access volume of recipes that users seek via search as a proxy for actual food consumption. We discover that variation in dietary preferences as expressed via recipe access has two main periodic components, one yearly and the other weekly, and that there exist characteristic regional differences in terms of diet within the United States. In a second study, we identify users who show evidence of having made an acute decision to lose weight. We characterize the shifts in interests that they express in their search queries and focus on changes in their recipe queries in particular. Last, we correlate nutritional time series obtained from recipe queries with time-aligned data on hospital admissions, aimed at understanding how behavioral data captured in Web logs might be harnessed to identify potential relationships between diet and acute health problems. In this preliminary study, we focus on patterns of sodium identified in recipes over time and patterns of admission for congestive heart failure, a chronic illness that can be exacerbated by increases in sodium intake.
Studies with search logs can provide valuable insights on associations between concepts @cite_7 , and previously unknown evidence of associations between nutritional deficiencies and medical conditions can be mined from the medical literature @cite_9 @cite_4 . Researchers have studied trends over short periods of time to learn about the behavior of the querying population at large @cite_14 , or clustered terms by temporal frequency to understand daily or weekly variations @cite_35 . Temporal trends and periodicities in longer-term query volume have been leveraged in approaches that aggregate data at the user @cite_30 or the population level @cite_24 . @cite_25 proposed methods for discovering semantically similar queries by identifying queries with similar demand patterns over time. More recently, @cite_34 predict time-varying user behavior using smoothing and trends and explore other dynamics of Web behaviors, such as the detection of periodicities and surprises. Particularly relevant here is research on the prediction of disease epidemics using logs; e.g., @cite_10 used query logs as a form of surveillance for early detection of influenza. Known seasonal variations in influenza outbreaks also visible in the search logs play an important part in their predictions.
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_14", "@cite_4", "@cite_7", "@cite_9", "@cite_24", "@cite_34", "@cite_10", "@cite_25" ], "mid": [ "2136814520", "2040546864", "2064522604", "2067096102", "", "2011726136", "", "2012354735", "2117239687", "2057714964" ], "abstract": [ "In this article, we demonstrate the value of long-term query logs. Most work on query logs to date considers only short-term (within-session) query information. In contrast, we show that long-term query logs can be used to learn about the world we live in. There are many applications of this that lead not only to improving the search engine for its users, but also potentially to advances in other disciplines such as medicine, sociology, economics, and more. In this article, we will show how long-term query logs can be used for these purposes, and that their potential is severely reduced if the logs are limited to short time horizons. We show that query effects are long-lasting, provide valuable information, and might be used to automatically make medical discoveries, build concept hierarchies, and generally learn about the sociological behavior of users. We believe these applications are only the beginning of what can be done with the information contained in long-term query logs, and see this work as a step toward unlocking their potential.", "We investigate the idea of finding semantically related search engine queries based on their temporal correlation; in other words, we infer that two queries are related if their popularities behave similarly over time. To this end, we first define a new measure of the temporal correlation of two queries based on the correlation coefficient of their frequency functions. We then conduct extensive experiments using our measure on two massive query streams from the MSN search engine, revealing that this technique can discover a wide range of semantically similar queries. 
Finally, we develop a method of efficiently finding the highest correlated queries for a given input query using far less space and time than the naive approach, making real-time implementation possible.", "We review a query log of hundreds of millions of queries that constitute the total query traffic for an entire week of a general-purpose commercial web search service. Previously, query logs have been studied from a single, cumulative view. In contrast, our analysis shows changes in popularity and uniqueness of topically categorized queries across the hours of the day. We examine query traffic on an hourly basis by matching it against lists of queries that have been topically pre-categorized by human editors. This represents 13 of the query traffic. We show that query traffic from particular topical categories differs both from the query stream as a whole and from other categories. This analysis provides valuable insight for improving retrieval effectiveness and efficiency. It is also relevant to the development of enhanced query disambiguation, routing, and caching algorithms.", "Allan Bloom's cytologic epistemology invites further analysis. A scientific article is like a cell that interacts with its neighbors to form an organ-like cluster—a set of articles or a \"literature\" addressed to a common set of problems and topics. These articles interact by citing one another—by conversing in print. The clusters themselves can be seen as interacting, to varying degrees, with other clusters. This essay will focus on certain failures of intercluster communication.", "", "Divide and conquer—the strategy that science uses to cope with the mountains of printed matter it produces—appears on the surface to serve us well. Science organizes itself into manageable units—scientific specialties—and so its literature is created and assimilated in manageable chunks or units. But a few clouds on the horizon ought not to go unexamined. 
First, most of the units are no doubt logically related to other units. Second, there are far more combinations of units, therefore far more potential relationships among the units, than there are units. Third, the system is not organized to cope with combinations. I suggest that important relationships might be escaping our notice. Individual units of literature are created to some degree independently of one another, and, insofar as that is so, the logical connections among the units, though inevitable, may be unintended by and even unknown to their creators. Until those fragments, like scattered pieces of a puzzle, are brought together, the relationships among them may remain undiscovered—even though the isolated pieces might long have been public knowledge. My purpose in this essay is to show, by means of an example, how this might happen. I shall identify two units of literature that are logically connected but noninteractive; neither seems to acknowledge the other to any substantial degree. Yet the logical connections, once apparent, lead to a potentially useful and possibly new hypothesis.", "", "User behavior on the Web changes over time. For example, the queries that people issue to search engines, and the underlying informational goals behind the queries vary over time. In this paper, we examine how to model and predict this temporal user behavior. We develop a temporal modeling framework adapted from physics and signal processing that can be used to predict time-varying user behavior using smoothing and trends. We also explore other dynamics of Web behaviors, such as the detection of periodicities and surprises. We develop a learning procedure that can be used to construct models of users' activities based on features of current and historical behaviors. 
The results of experiments indicate that by using our framework to predict user behavior, we can achieve significant improvements in prediction compared to baseline models that weight historical evidence the same for all queries. We also develop a novel learning algorithm that explicitly learns when to apply a given prediction model among a set of such models. Our improved temporal modeling of user behavior can be used to enhance query suggestions, crawling policies, and result ranking.", "This report introduces a computational model based on internet search queries for real-time surveillance of influenza-like illness (ILI), which reproduces the patterns observed in ILI data from the Centers for Disease Control and Prevention.", "We present several methods for mining knowledge from the query logs of the MSN search engine. Using the query logs, we build a time series for each query word or phrase (e.g., 'Thanksgiving' or 'Christmas gifts') where the elements of the time series are the number of times that a query is issued on a day. All of the methods we describe use sequences of this form and can be applied to time series data generally. Our primary goal is the discovery of semantically similar queries and we do so by identifying queries with similar demand patterns. Utilizing the best Fourier coefficients and the energy of the omitted components, we improve upon the state-of-the-art in time-series similarity matching. The extracted sequence features are then organized in an efficient metric tree index structure. We also demonstrate how to efficiently and accurately discover the important periods in a time-series. Finally we propose a simple but effective method for identification of bursts (long or short-term). Using the burst information extracted from a sequence, we are able to efficiently perform 'query-by-burst' on the database of time-series. 
We conclude the presentation with the description of a tool that uses the described methods, and serves as an interactive exploratory data discovery tool for the MSN query database." ] }
1304.3742
2953044842
Nutrition is a key factor in people's overall health. Hence, understanding the nature and dynamics of population-wide dietary preferences over time and space can be valuable in public health. To date, studies have leveraged small samples of participants via food intake logs or treatment data. We propose a complementary source of population data on nutrition obtained via Web logs. Our main contribution is a spatiotemporal analysis of population-wide dietary preferences through the lens of logs gathered by a widely distributed Web-browser add-on, using the access volume of recipes that users seek via search as a proxy for actual food consumption. We discover that variation in dietary preferences as expressed via recipe access has two main periodic components, one yearly and the other weekly, and that there exist characteristic regional differences in terms of diet within the United States. In a second study, we identify users who show evidence of having made an acute decision to lose weight. We characterize the shifts in interests that they express in their search queries and focus on changes in their recipe queries in particular. Last, we correlate nutritional time series obtained from recipe queries with time-aligned data on hospital admissions, aimed at understanding how behavioral data captured in Web logs might be harnessed to identify potential relationships between diet and acute health problems. In this preliminary study, we focus on patterns of sodium identified in recipes over time and patterns of admission for congestive heart failure, a chronic illness that can be exacerbated by increases in sodium intake.
Current food consumption patterns are influenced by a range of factors, from an evolved preference for sugar and fat to palatability, nutritional value, culture, ease of production, and climate @cite_17 @cite_16 @cite_29 . Factors such as location and the price of locally produced foods can also affect nutrient intake @cite_28 . Others have mined recipe data from sites such as Allrecipes.com to better understand culinary practice; @cite_8 introduced the 'flavor network,' capturing the flavor compounds shared by culinary ingredients. That work focuses on the creation of dishes (ingredient pairs in recipes) rather than on estimating their consumption, something that we believe is possible via logs. Many studies have explored how people attempt to change their consumption habits as part of weight-loss programs @cite_26 @cite_6 . Psychological models, such as the transtheoretical model of change @cite_27 , can generalize to dieting @cite_22 , and in this realm, too, log-based methods are emerging for analyzing behavior @cite_31 .
{ "cite_N": [ "@cite_26", "@cite_22", "@cite_8", "@cite_28", "@cite_29", "@cite_6", "@cite_27", "@cite_31", "@cite_16", "@cite_17" ], "mid": [ "2138862223", "2142458819", "2082396928", "2409857475", "1486813270", "2146463166", "2142912849", "2103410297", "1983487946", "2185894887" ], "abstract": [ "This study examined the degree to which humans compensate for a reduction in dietary fat by increasing energy intake. Thirteen females were randomly assigned to either a low-fat diet (20-25% of calories as fat) or a control diet (35-40% fat) for 11 wk. After a 7-wk washout period, the conditions were reversed for another 11 wk. Energy intake on the low-fat diet gradually increased by 0.092 kJ/wk, resulting in a total caloric compensation of 35% by the end of the 11-wk treatment period. This failure to compensate calorically on the low-fat diet resulted in a deficit of 1.22 kJ/d and a weight loss of 2.5 kg in 11 wk, twice the amount of weight lost on the control diet. These results demonstrate that body weight can be lost merely by reducing the fat content of the diet without the need to voluntarily restrict food intake. Am J Clin Nutr 1991;53:1124-9.", "This study evaluated how well predictions from the transtheoretical model (TTM) generalized from smoking to diet. Longitudinal data were used from a randomized control trial on reducing dietary fat consumption in adults (n = 1207) recruited from primary care practices. Predictive power was evaluated by making a priori predictions of the magnitude of change expected in the TTM constructs of temptation, pros and cons, and 10 processes of change when an individual transitions between the stages of change. Generalizability was evaluated by testing predictions based on smoking data.
Three sets of predictions were made for each stage: Precontemplation (PC), Contemplation (C) and Preparation (PR) based on stage transition categories of no progress, progress and regression determined by stage at baseline versus stage at the 12-month follow-up. Univariate analysis of variance between stage transition groups was used to calculate the effect size [omega squared (ω2)]. For diet predictions based on diet data, there was a high degree of confirmation: 92%, 95% and 92% for PC, C and PR, respectively. For diet predictions based on smoking data, 77%, 79% and 85% were confirmed, respectively, suggesting a moderate degree of generalizability. This study revised effect size estimates for future theory testing on the TTM applied to dietary fat.", "The cultural diversity of culinary practice, as illustrated by the variety of regional cuisines, raises the question of whether there are any general patterns that determine the ingredient combinations used in food today or principles that transcend individual tastes and recipes. We introduce a flavor network that captures the flavor compounds shared by culinary ingredients. Western cuisines show a tendency to use ingredient pairs that share many flavor compounds, supporting the so-called food pairing hypothesis. By contrast, East Asian cuisines tend to avoid compound sharing ingredients. Given the increasing availability of information on food preparation, our data-driven investigation opens new avenues towards a systematic understanding of culinary practice.", "", "FOOD AND CULTURE is the market-leading text for the cultural foods courses, providing information on the health, culture, food, and nutrition habits of the most common ethnic and racial groups living in the United States. It is designed to help health professionals, chefs, and others in the food service industry learn to work effectively with members of different ethnic and religious groups in a culturally sensitive manner.
Authors Pamela Goyan Kittler and Kathryn P. Sucher include comprehensive coverage of key ethnic, religious, and regional groups, including Native Americans, Europeans, Africans, Mexicans and Central Americans, Caribbean Islanders, South Americans, Chinese, Japanese, Koreans, Southeast Asians, Pacific Islanders, People of the Balkans, Middle Easterners, Asian Indians, and regional Americans.", "Background Trials comparing the effectiveness and safety of weight-loss diets are frequently limited by short follow-up times and high dropout rates. Methods In this 2-year trial, we randomly assigned 322 moderately obese subjects (mean age, 52 years; mean body-mass index [the weight in kilograms divided by the square of the height in meters], 31; male sex, 86%) to one of three diets: low-fat, restricted-calorie; Mediterranean, restricted-calorie; or low-carbohydrate, non–restricted-calorie.", "The transtheoretical model posits that health behavior change involves progress through six stages of change: precontemplation, contemplation, preparation, action, maintenance, and termination. Ten processes of change have been identified for producing progress along with decisional balance, self-efficacy, and temptations. Basic research has generated a rule of thumb for at-risk populations: 40% in precontemplation, 40% in contemplation, and 20% in preparation. Across 12 health behaviors, consistent patterns have been found between the pros and cons of changing and the stages of change. Applied research has demonstrated dramatic improvements in recruitment, retention, and progress using stage-matched interventions and proactive recruitment procedures. The most promising outcomes to date have been found with computer-based individualized and interactive interventions. The most promising enhancement to the computer-based programs is personalized counselors.
One of the most striking results to date for stag...", "Healthcare is shifting from being reactive to preventive, with a focus on maintaining general wellness through positive decisions on diet, exercise, and lifestyle. In this paper, we investigate search behavior as people navigate the Web and find support for dietary and weight loss plans. Inspecting the Web search logs of nearly 2,000 users, we show that people progressively narrow their searches to support their progress through these plans. Interestingly, people that visit online health forums seem to progress through the plans' phases more quickly. Based on these results, we conducted a survey to further explore the roles and importance of online forums in supporting dieting and weight loss.", "Abstract Sixteen normal-weight subjects rated the perceived intensity of sweetness, fatness, and creaminess of 20 different mixtures of milk, cream, and sugar, and assigned an overall pleasantness (hedonic) rating to each sample. Intensity estimates increased as power functions of ingredient concentration and no significant mixture suppression effect was observed. In contrast, hedonic responses strongly depended on the relative proportions of sucrose and fat in the samples tasted. Hedonic preference ratings first rose and then declined with increasing sucrose concentration, but continued to rise with increasing dairy fat content. The addition of sucrose greatly enhanced hedonic ratings for high-fat stimuli. Changes in hedonic responsiveness were monitored using a mathematical modelling technique known as the Response Surface Method, which allows computerized simulation of the hedonic response surface as a function of perceived ingredient levels. Overnight fasting did not affect the perception or hedonics for sweet or “fatty” tastes. 
The observed preference for sweetened high-fat foods may have implications for the development of dietary-induced obesity in man.", "Publisher Summary This chapter discusses the selection of food by rats, humans, and other animals, and focuses on the complex problems, especially in food recognition and choice, in the omnivores or generalists. Food selection implies food ingestion. Food ingestion implies the presence of food. Therefore, background for the study of food selection includes the food search process: search images and search mechanisms for finding appropriate food stimuli in the environment. Honey bees provide fine examples of a highly developed food search system. Food selection also implies the ability to obtain or capture food, and to assimilate it, for which many often exotic mechanisms have been evolved. Given the presence of potential food, ingestion then usually depends on an internal state or detector indicating a “need” for the particular food or class of foods, and recognition of the potential food as food. Omnivores, such as rats and humans, faced with an enormous number of potential foods, must choose wisely. They are always in danger of eating something harmful or eating too much of a good thing. Although there are some helpful internal mechanisms, such as poison detoxification, nutrient biosynthesis, and nutrient storage, the major share of the burden for maintaining nutritional balance must out of necessity come from incorporation of appropriate nutrients in the environment and, hence, behavior. The most striking parallel between human and rat feeding is in the neophobia seen in both. The chapter discusses the multiple determinants of food selection in man that are divided into biological factors and effects of individual experience, on one hand, and cultural influences, on the other." ] }
1304.3875
2171455140
In mobile communication services, users wish to subscribe to high-quality service at a low price, which leads to competition between mobile network operators (MNOs). The MNOs compete with each other on service prices after deciding the extent of investment to improve quality of service (QoS). Unfortunately, the theoretical background of these price dynamics is not well understood, and as a result, effective network planning and regulatory actions are hard to devise in a competitive market. To explain this competition in more detail, we formulate and solve an optimization problem applying the two-stage Cournot and Bertrand competition model. Consequently, we derive price dynamics in which the MNOs increase and decrease their service prices periodically, which explains the subsidy dynamics observed in the real world. Moving forward, to avoid this instability and inefficiency, we suggest a simple regulation rule which leads to a Pareto-optimal equilibrium point. Moreover, we suggest the regulator's optimal actions with respect to user welfare and the regulator's revenue.
In @cite_10 , the author shows that service price and QoS are inter-related in communications networks, and suggests Paris metro pricing (PMP) for the Internet. PMP is a kind of price discrimination over different QoS levels: the higher the QoS, the higher the price. In that paper, the author finds that the service price and QoS will converge to an equilibrium point after a number of interactions. PMP is further extended by Walrand @cite_0 , who formulates an Internet pricing model under price and QoS constraints. In that work, the author investigates how much PMP improves the operator's profit compared to a single optimal service price. The author also analyzes price competition between two homogeneous network operators, the network capacities of which are fixed. In @cite_2 , we show the dynamics of price competition (price war) using the Walrand model @cite_0 , and suggest a regulation for price level convergence.
{ "cite_N": [ "@cite_0", "@cite_10", "@cite_2" ], "mid": [ "1822550920", "2068649504", "2085487367" ], "abstract": [ "Standard performance evaluations of communication networks focus on the technology layer where protocols define precise rules of operations. Those studies assume a model of network utilization and of network characteristics and derive performance measures. However, performance affects how users utilize the network. Also, investments by network providers affect performance and consequently network utilization. We call the actions of users and network providers the “economic layer” of the network because their decisions depend largely on economic incentives. The economic and technology layers interact in a complex way and they should be studied together. This tutorial explores economic models of networks that combine the economic and technology layers.", "A simple approach, called PMP (Paris Metro Pricing), is suggested for providing differentiated services in packet networks such as the Internet. It is to partition a network into several logically separate channels, each of which would treat all packets equally on a best effort basis. There would be no formal guarantees of quality of service. These channels would differ only in the prices paid for using them. Channels with higher prices would attract less traffic, and thereby provide better service. Price would be the primary tool of traffic management. PMP is the simplest differentiated services solution. It is designed to accommodate user preferences at the cost of sacrificing some of the utilization efficiency of the network.", "In recent years, to satisfy people's need for wireless services, the number of access points of the 3G-based systems, WiFi and WiMAX has been increased exponentially by wireless service providers (WSPs). As a result, there are many WSPs coexisting in the same hotspot area, which drives price competition among WSPs. 
Existing research shows that each WSP will lower its price to increase revenue or market share, and this kind of price competition will eventually damage every WSP with a revenue decrease. However, in this paper, we show that there is another type of price competition, in which the WSPs' decreasing or increasing price levels occur periodically and there is no equilibrium point. We illustrate it using an example of duopoly price competition and suggest a simple regulation rule that leads to an equilibrium point. Moreover, we show that the equilibrium point is Pareto-optimal and is well balanced in the aspects of total revenue, fairness and social welfare." ] }
1304.3875
2171455140
In mobile communication services, users wish to subscribe to high-quality service at a low price, which leads to competition between mobile network operators (MNOs). The MNOs compete with each other on service prices after deciding the extent of investment to improve quality of service (QoS). Unfortunately, the theoretical background of these price dynamics is not well understood, and as a result, effective network planning and regulatory actions are hard to devise in a competitive market. To explain this competition in more detail, we formulate and solve an optimization problem applying the two-stage Cournot and Bertrand competition model. Consequently, we derive price dynamics in which the MNOs increase and decrease their service prices periodically, which explains the subsidy dynamics observed in the real world. Moving forward, to avoid this instability and inefficiency, we suggest a simple regulation rule which leads to a Pareto-optimal equilibrium point. Moreover, we suggest the regulator's optimal actions with respect to user welfare and the regulator's revenue.
The price war in communication services has been observed in prior work (Yu and Kong). In particular, in @cite_18 and @cite_15 , if one operator lowers its price to increase revenue or to monopolize the entire market, then the other operators will also lower their prices to match the price leader. This downward price competition occurs repeatedly among all operators, eventually damaging every operator with a revenue decrease.
{ "cite_N": [ "@cite_18", "@cite_15" ], "mid": [ "2142699278", "2129931089" ], "abstract": [ "In this paper, we apply game theory to study some strategic actions for retailers to fight a price war. We start by modeling a noncooperative pure pricing game among multiple competing retailers who sell a certain branded product under price-dependent stochastic demands. A unique Nash equilibrium is proven to exist under some mild conditions. We demonstrate mathematically the incentives for retailers to start a price war. Based on a strategic framework via game theory, we illustrate the use of service level to build price walls which can prevent a huge drop in price, as well as profit. Three kinds of price walls are proposed, and the respective strengths and weaknesses have been studied. Analytical conditions, under which a price wall can effectively prevent big drops in both market share and profit, are developed. Aside from the proposed price walls, two other pricing strategies, which can lead to an all-win situation, are examined.", "The concept of dynamic spectrum access (DSA) enables the licensed spectrum to be traded in an open market where the unlicensed users can freely buy and use the available licensed spectrum bands. However, like in the other traditional commodity markets, spectrum trading is inevitably accompanied by various competitions and challenges. In this paper, we study an important business competition activity - price war - in the DSA market. A non-cooperative pricing game is formulated to model the contention among multiple wireless spectrum providers for higher market share and revenues. We calculate the Pareto optimal pricing strategies for all providers and analyze the motivations behind the price war. The potential responses to the price war are discussed in depth. Numerical results demonstrate the efficiency of the Pareto optimal strategy for the game and the impact of the price war on all participants." ] }
1304.3875
2171455140
In mobile communication services, users wish to subscribe to high-quality service at a low price, which leads to competition between mobile network operators (MNOs). The MNOs compete with each other on service prices after deciding the extent of investment to improve quality of service (QoS). Unfortunately, the theoretical background of these price dynamics is not well understood, and as a result, effective network planning and regulatory actions are hard to devise in a competitive market. To explain this competition in more detail, we formulate and solve an optimization problem applying the two-stage Cournot and Bertrand competition model. Consequently, we derive price dynamics in which the MNOs increase and decrease their service prices periodically, which explains the subsidy dynamics observed in the real world. Moving forward, to avoid this instability and inefficiency, we suggest a simple regulation rule which leads to a Pareto-optimal equilibrium point. Moreover, we suggest the regulator's optimal actions with respect to user welfare and the regulator's revenue.
Competition among network operators occurs not only through price differentiation; the capacity of the network is another important variable. This is because users select a network operator based on decision criteria that include not only service price but also QoS level, and QoS is directly related to network capacity. Therefore, each operator jointly optimizes its service price and network capacity. All of the previous work mentioned above focuses only on price competition, assuming the network capacity is given as an external value. In @cite_1 , the authors consider competition among multiple network operators with single- or two-service classes. In that work, service prices are fixed and price competition does not occur; to attract more users, the operators decide only the network capacity.
{ "cite_N": [ "@cite_1" ], "mid": [ "2117031884" ], "abstract": [ "This paper investigates Internet service provider (ISP) incentives with a single-service class and with two-service classes in the Internet. We consider multiple competing ISPs who offer network access to a fixed user base, consisting of end-users who differ in their quality requirements and willingness to pay for the access. We model user-ISP interactions as a game in which each ISP makes capacity and pricing decisions to maximize its profits and the end-users only decide which service to buy (if any) and from which ISP. Our model provides pricing for networks with single- and two-service classes for any number of competing ISPs. Our results indicate that multiple service classes are socially desirable, but could be blocked due to the unfavorable distributional consequences that it inflicts on the existing Internet users. We propose a simple regulatory tool to alleviate the political economic constraints and thus make multiple service classes in the Internet feasible." ] }
1304.3872
1667101373
Given a computable probability measure P over natural numbers or infinite binary sequences, there is no computable, randomized method that can produce an arbitrarily large sample such that none of its members are outliers of P.
This paper is an excerpt of my thesis under the advisorship of Leonid A. Levin. The study of Kolmogorov complexity originated from the work of @cite_11 . The canonical self-delimiting form of Kolmogorov complexity was introduced in @cite_3 and treated later in @cite_1 . The universal probability @math relies on intuition similar to that of @cite_22 . More information about the history of the concepts used in this paper can be found in the textbook @cite_5 .
{ "cite_N": [ "@cite_22", "@cite_1", "@cite_3", "@cite_5", "@cite_11" ], "mid": [ "2135625884", "2020311636", "2029308360", "1638203394", "2005097301" ], "abstract": [ "1. Summary In Part I, four ostensibly different theoretical models of induction are presented, in which the problem dealt with is the extrapolation of a very long sequence of symbols—presumably containing all of the information to be used in the induction. Almost all, if not all problems in induction can be put in this form. Some strong heuristic arguments have been obtained for the equivalence of the last three models. One of these models is equivalent to a Bayes formulation, in which a priori probabilities are assigned to sequences of symbols on the basis of the lengths of inputs to a universal Turing machine that are required to produce the sequence of interest as output. Though it seems likely, it is not certain whether the first of the four models is equivalent to the other three. Few rigorous results are presented. Informal investigations are made of the properties of these models. There are discussions of their consistency and meaningfulness, of their degree of independence of the exact nature of the Turing machine used, and of the accuracy of their predictions in comparison to those of other induction methods. In Part II these models are applied to the solution of three problems—prediction of the Bernoulli sequence, extrapolation of a certain kind of Markov chain, and the use of phrase structure grammars for induction. Though some approximations are used, the first of these problems is treated most rigorously. The result is Laplace's rule of succession. The solution to the second problem uses less certain approximations, but the properties of the solution that are discussed, are fairly independent of these approximations. The third application, using phrase structure grammars, is least exact of the three. First a formal solution is presented. 
Though it appears to have certain deficiencies, it is hoped that presentation of this admittedly inadequate model will suggest acceptable improvements in it. This formal solution is then applied in an approximate way to the determination of the “optimum” phrase structure grammar for a given set of strings. The results that are obtained are plausible, but subject to the uncertainties of the approximation used.", "A new definition of program-size complexity is made. H(A,B|C,D) is defined to be the size in bits of the shortest self-delimiting program for calculating strings A and B if one is given a minimal-size self-delimiting program for calculating strings C and D. This differs from previous definitions: (1) programs are required to be self-delimiting, i.e. no program is a prefix of another, and (2) instead of being given C and D directly, one is given a program for calculating them that is minimal in size. Unlike previous definitions, this one has precisely the formal properties of the entropy concept of information theory. For example, H(A,B) = H(A) + H(B|A) + O(1). Also, if a program of length k is assigned measure 2^-k, then H(A) = -log2 (the probability that the standard universal computer will calculate A) + O(1).", "In 1964 Kolmogorov introduced the concept of the complexity of a finite object (for instance, the words in a certain alphabet). He defined complexity as the minimum number of binary signs containing all the information about a given object that are sufficient for its recovery (decoding). This definition depends essentially on the method of decoding. However, by means of the general theory of algorithms, Kolmogorov was able to give an invariant (universal) definition of complexity. Related concepts were investigated by Solomonoff (U.S.A.) and Markov.
Using the concept of complexity, Kolmogorov gave definitions of the quantity of information in finite objects and of the concept of a random sequence (which was then defined more precisely by Martin-Löf). Afterwards, this circle of questions developed rapidly. In particular, an interesting development took place of the ideas of Markov on the application of the concept of complexity to the study of quantitative questions in the theory of algorithms. The present article is a survey of the fundamental results connected with the brief remarks above.", "The book is outstanding and admirable in many respects. ... is necessary reading for all kinds of readers from undergraduate students to top authorities in the field. Journal of Symbolic Logic Written by two experts in the field, this is the only comprehensive and unified treatment of the central ideas and their applications of Kolmogorov complexity. The book presents a thorough treatment of the subject with a wide range of illustrative applications. Such applications include the randomness of finite objects or infinite sequences, Martin-Löf tests for randomness, information theory, computational learning theory, the complexity of algorithms, and the thermodynamics of computing. It will be ideal for advanced undergraduate students, graduate students, and researchers in computer science, mathematics, cognitive sciences, philosophy, artificial intelligence, statistics, and physics. The book is self-contained in that it contains the basic requirements from mathematics and computer science. Included are also numerous problem sets, comments, source references, and hints to solutions of problems.
New topics in this edition include Omega numbers, Kolmogorov-Loveland randomness, universal learning, communication complexity, Kolmogorov's random graphs, time-limited universal distribution, Shannon information and others.", "A method and apparatus for cutting resilient foamed synthetic plastics filter material by pressing a die against a block of such material to provide cuts which are interleaved extending from opposite faces of the block the cuts at each such face being joined by a curved cut conforming to the desired exterior surface shape of the cut block when it is stretched to provide a corrugated configuration." ] }
1304.3872
1667101373
Given a computable probability measure P over natural numbers or infinite binary sequences, there is no computable, randomized method that can produce an arbitrarily large sample such that none of its members are outliers of P.
Information conservation laws were introduced and studied in @cite_15 @cite_8 . Information asymmetry and the complexity of complexity were studied in @cite_0 . A history of the origin of the mutual information of a string with the halting sequence can be found in @cite_23 .
{ "cite_N": [ "@cite_0", "@cite_15", "@cite_23", "@cite_8" ], "mid": [ "2170025089", "123681027", "2104359186", "2041517255" ], "abstract": [ "We say that the mutual information of a triple of binary strings a, b, c can be extracted if there exists a string d such that a, b, and c are independent given d, and d is simple conditional to each of the strings a, b, c. This is an analog of the well-known Gacs-Korner (1973) definition of extractability of the mutual information for a pair of binary strings. We prove that (in contrast to the case of two strings) there exists a criterion of extractability of the mutual information for a triple a, b, c in terms of complexities involving a, b, c. Roughly speaking, the mutual information between a, b, c can be extracted if and only if the conditional mutual informations I(a:b|c), I(a:c|b), I(b:c|a) are negligible. Our proof of the main result is based on a non-Shannon-type information inequality, which is a generalization of the recently discovered Zhang-Yeung inequality.", "", "In 1974, Kolmogorov proposed a nonprobabilistic approach to statistics and model selection. Let data be finite binary strings and models be finite sets of binary strings. Consider model classes consisting of models of given maximal (Kolmogorov) complexity. The \"structure function\" of the given data expresses the relation between the complexity level constraint on a model class and the least log-cardinality of a model in the class containing the data. We show that the structure function determines all stochastic properties of the data: for every constrained model class it determines the individual best fitting model in the class irrespective of whether the \"true\" model is in the model class considered or not. In this setting, this happens with certainty, rather than with high probability as is in the classical case. We precisely quantify the goodness-of-fit of an individual model with respect to individual data.
We show that-within the obvious constraints-every graph is realized by the structure function of some data. We determine the (un)computability properties of the various functions contemplated and of the \"algorithmic minimal sufficient statistic.\".", "The article further develops Kolmogorov's algorithmic complexity theory. The definition of randomness is modified to satisfy strong invariance properties (conservation inequalities). This allows definitions of concepts such as mutual information in individual infinite sequences. Applications to several areas, like probability theory, theory of algorithms, intuitionistic logic are considered. These theories are simplified substantially with the postulate that the objects they consider are independent of (have small mutual information with) any sequence specified by a mathematical property." ] }
1304.3872
1667101373
Given a computable probability measure P over natural numbers or infinite binary sequences, there is no computable, randomized method that can produce an arbitrarily large sample such that none of its members are outliers of P.
The combination of complexity with distortion balls can be seen in @cite_10 . The work of Kolmogorov on modelling individual strings with a two-part code was expanded upon in @cite_23 @cite_20 . These works introduced the notion of using the prefix of a "border" sequence to define a universal algorithmic sufficient statistic of strings. The generalization and synthesis of this work and the development of algorithmic rate distortion theory can be seen in the works of @cite_13 @cite_17 . The work in @cite_6 introduced a variant of Theorem 6 in @cite_13 . This led to the results in @cite_18 , which state that all non-exotic sets have simple members. This paper extends the work in @cite_18 to deficiencies of randomness. The first game-theoretic proof of the results of @cite_18 can be found in @cite_7 .
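The two-part modelling of individual strings mentioned above is captured by Kolmogorov's structure function; a sketch in the usual notation (x a binary string, S ranging over finite sets of strings, K Kolmogorov complexity):

```latex
% Structure function: least log-cardinality of a model of
% complexity at most \alpha that contains the data x.
h_x(\alpha) = \min_{S} \{\, \log |S| \;:\; x \in S,\ K(S) \le \alpha \,\}
% The associated two-part code length K(S) + \log |S| is always at
% least K(x) - O(1); a witness S at the level where this bound is
% met (up to a logarithmic term) acts as an algorithmic sufficient
% statistic for x.
```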
{ "cite_N": [ "@cite_18", "@cite_7", "@cite_10", "@cite_6", "@cite_23", "@cite_13", "@cite_20", "@cite_17" ], "mid": [ "2143698922", "1543528596", "1507670659", "1827471128", "2104359186", "", "1999580286", "2096075526" ], "abstract": [ "The combined Universal Probability M(D) of strings x in sets D is close to max M( x ) over x in D: their logs differ by at most D's information j=I(D:H) about the halting sequence H. Thus if all x have complexity K(x) >k, D carries >i bits of information on each its x where i+j k. Note that there are no ways to generate D with significant I(D:H).", "We provide some examples showing how game-theoretic arguments can be used in computability theory and algorithmic information theory: unique numbering theorem (Friedberg), the gap between conditional complexity and total conditional complexity, Epstein--Levin theorem and some (yet unpublished) result of Muchnik and Vyugin", "We introduce the study of Kolmogorov complexity with error. For a metric d, we define Ca(x) to be the length of a shortest program p which prints a string y such that d(x,y) ≤ a. We also study a conditional version of this measure Ca, b(x|y) where the task is, given a string y′ such that d(y,y′) ≤ b, print a string x′ such that d(x,x′) ≤ a. This definition admits both a uniform measure, where the same program should work given any y′ such that d(y,y′) ≤ b, and a nonuniform measure, where we take the length of a program for the worst case y′. We study the relation of these measures in the case where d is Hamming distance, and show an example where the uniform measure is exponentially larger than the nonuniform one. We also show an example where symmetry of information does not hold for complexity with error under either notion of conditional complexity.", "We represent agents as sets of strings. Each string encodes a potential interaction with another agent or environment. 
We represent the total set of dynamics between two agents as the intersection of their respective strings, we prove complexity properties of player interactions using Algorithmic Information Theory. We show how the proposed construction is compatible with Universal Artificial Intelligence, in that the AIXI model can be seen as universal with respect to interaction.", "In 1974, Kolmogorov proposed a nonprobabilistic approach to statistics and model selection. Let data be finite binary strings and models be finite sets of binary strings. Consider model classes consisting of models of given maximal (Kolmogorov) complexity. The \"structure function\" of the given data expresses the relation between the complexity level constraint on a model class and the least log-cardinality of a model in the class containing the data. We show that the structure function determines all stochastic properties of the data: for every constrained model class it determines the individual best fitting model in the class irrespective of whether the \"true\" model is in the model class considered or not. In this setting, this happens with certainty, rather than with high probability as is in the classical case. We precisely quantify the goodness-of-fit of an individual model with respect to individual data. We show that-within the obvious constraints-every graph is realized by the structure function of some data. We determine the (un)computability properties of the various functions contemplated and of the \"algorithmic minimal sufficient statistic.\".", "", "We extend algorithmic information theory to quantum mechanics, taking a universal semicomputable density matrix ( universal probability') as a starting point, and define complexity (an operator) as its negative logarithm. A number of properties of Kolmogorov complexity extend naturally to the new domain. Approximately, a quantum state is simple if it is within a small distance from a low-dimensional subspace of low Kolmogorov complexity. 
The von Neumann entropy of a computable density matrix is within an additive constant from the average complexity. Some of the theory of randomness translates to the new domain. We explore the relations of the new quantity to the quantum Kolmogorov complexity defined by Vitanyi (we show that the latter is sometimes as large as 2n − 2 log n) and the qubit complexity defined by Berthiaume, Dam and Laplante. The cloning' properties of our complexity measure are similar to those of qubit complexity.", "We examine the structure of families of distortion balls from the perspective of Kolmogorov complexity. Special attention is paid to the canonical rate-distortion function of a source word which returns the minimal Kolmogorov complexity of all distortion balls containing that word subject to a bound on their cardinality. This canonical rate-distortion function is related to the more standard algorithmic rate-distortion function for the given distortion measure. Examples are given of list distortion, Hamming distortion, and Euclidean distortion. The algorithmic rate-distortion function can behave differently from Shannon's rate-distortion function. To this end, we show that the canonical rate-distortion function can and does assume a wide class of shapes (unlike Shannon's); we relate low algorithmic mutual information to low Kolmogorov complexity (and consequently suggest that certain aspects of the mutual information formulation of Shannon's rate-distortion function behave differently than would an analogous formulation using algorithmic mutual information); we explore the notion that low Kolmogorov complexity distortion balls containing a given word capture the interesting properties of that word (which is hard to formalize in Shannon's theory) and this suggests an approach to denoising." ] }
1304.3513
1872181691
Geosocial networks are online social networks centered on the locations of subscribers and businesses. Providing input to targeted advertising, profiling social network users becomes an important source of revenue. Its natural reliance on personal information introduces a trade-off between user privacy and incentives of participation for businesses and geosocial network providers. In this paper we introduce location centric profiles (LCPs), aggregates built over the profiles of users present at a given location. We introduce PROFILR, a suite of mechanisms that construct LCPs in a private and correct manner. We introduce iSafe, a novel, context aware public safety application built on PROFILR . Our Android and browser plugin implementations show that PROFILR is efficient: the end-to-end overhead is small even under strong correctness assurances.
@cite_10 proposed techniques allowing pollsters to collect user data while ensuring the privacy of the users. Privacy is proved "at runtime": if the pollster leaks private data, it will be exposed probabilistically. Our work also allows entities to collect private user data; however, the collectors are never allowed direct access to private user data.
{ "cite_N": [ "@cite_10" ], "mid": [ "2124572775" ], "abstract": [ "Consider a pollster who wishes to collect private, sensitive data from a number of distrustful individuals. How might the pollster convince the respondents that it is trustworthy? Alternately, what mechanism could the respondents insist upon to ensure that mismanagement of their data is detectable and publicly demonstrable? We detail this problem, and provide simple data submission protocols with the properties that a) leakage of private data by the pollster results in evidence of the transgression and b) the evidence cannot be fabricated without breaking cryptographic assumptions. With such guarantees, a responsible pollster could post a “privacy-bond,” forfeited to anyone who can provide evidence of leakage. The respondents are assured that appropriate penalties are applied to a leaky pollster, while the protection from spurious indictment ensures that any honest pollster has no disincentive to participate in such a scheme." ] }
1304.3513
1872181691
Geosocial networks are online social networks centered on the locations of subscribers and businesses. Providing input to targeted advertising, profiling social network users becomes an important source of revenue. Its natural reliance on personal information introduces a trade-off between user privacy and incentives of participation for businesses and geosocial network providers. In this paper we introduce location centric profiles (LCPs), aggregates built over the profiles of users present at a given location. We introduce PROFILR, a suite of mechanisms that construct LCPs in a private and correct manner. We introduce iSafe, a novel, context aware public safety application built on PROFILR . Our Android and browser plugin implementations show that PROFILR is efficient: the end-to-end overhead is small even under strong correctness assurances.
Toubiana et al. @cite_28 proposed Adnostic, a privacy preserving ad targeting architecture. Users have a profile that allows the private matching of relevant ads. While PROFILR could be used to privately provide location centric targeted ads, its main goal is different: to compute location (venue) centric profiles that preserve the privacy of contributing users.
{ "cite_N": [ "@cite_28" ], "mid": [ "2189109560" ], "abstract": [ "Online behavioral advertising (OBA) refers to the practice of tracking users across web sites in order to infer user interests and preferences. These interests and preferences are then used for selecting ads to present to the user. There is great concern that behavioral advertising in its present form infringes on user privacy. The resulting public debate — which includes consumer advocacy organizations, professional associations, and government agencies — is premised on the notion that OBA and privacy are inherently in conflict. In this paper we propose a practical architecture that enables targeting without compromising user privacy. Behavioral profiling and targeting in our system takes place in the user’s browser. We discuss the effectiveness of the system as well as potential social engineering and web-based attacks on the architecture. One complication is billing; ad-networks must bill the correct advertiser without knowing which ad was displayed to the user. We propose an efficient cryptographic billing system that directly solves the problem. We implemented the core targeting system as a Firefox extension and report on its effectiveness." ] }
1304.3513
1872181691
Geosocial networks are online social networks centered on the locations of subscribers and businesses. Providing input to targeted advertising, profiling social network users becomes an important source of revenue. Its natural reliance on personal information introduces a trade-off between user privacy and incentives of participation for businesses and geosocial network providers. In this paper we introduce location centric profiles (LCPs), aggregates built over the profiles of users present at a given location. We introduce PROFILR, a suite of mechanisms that construct LCPs in a private and correct manner. We introduce iSafe, a novel, context aware public safety application built on PROFILR . Our Android and browser plugin implementations show that PROFILR is efficient: the end-to-end overhead is small even under strong correctness assurances.
@cite_15 proposed SMILE, a privacy-preserving "missed-connections" service similar to Craigslist, in which the service provider is untrusted and users do not have existing relationships. The solution is distributed, allowing users to anonymously prove to each other the existence of a past encounter. While we have a similar setup, our work addresses a different problem: privately collecting location centric user profile aggregates.
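The encounter-proof idea behind SMILE can be sketched with standard primitives (a minimal illustration under stated assumptions, not the actual SMILE protocol; all function names and parameters here are hypothetical): two devices that met exchange randomness over short-range radio, derive a common key, upload only a hash of it, and later demonstrate the encounter by proving knowledge of the key.

```python
import hashlib
import hmac
import os

def encounter_key(nonce_a: bytes, nonce_b: bytes) -> bytes:
    # Both devices exchange nonces over short-range radio during the
    # encounter and derive the same key; the server never sees the nonces.
    return hashlib.sha256(nonce_a + nonce_b).digest()

def encounter_id(key: bytes) -> str:
    # Only this hash is uploaded to the untrusted server, which can
    # match the two parties without learning the key itself.
    return hashlib.sha256(b"encounter-id" + key).hexdigest()

def prove(key: bytes, challenge: bytes) -> str:
    # Later, a party proves it held the key by answering a challenge.
    return hmac.new(key, challenge, hashlib.sha256).hexdigest()

# Two parties at the same place and time derive identical keys ...
na, nb = os.urandom(16), os.urandom(16)
key_alice = encounter_key(na, nb)
key_bob = encounter_key(na, nb)
assert encounter_id(key_alice) == encounter_id(key_bob)

# ... so their responses to any verifier challenge agree.
challenge = os.urandom(16)
assert prove(key_alice, challenge) == prove(key_bob, challenge)
```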
{ "cite_N": [ "@cite_15" ], "mid": [ "2139223971" ], "abstract": [ "Conventional mobile social services such as Loopt and Google Latitude rely on two classes of trusted relationships: participants trust a centralized server to manage their location information and trust between users is based on existing social relationships. Unfortunately, these assumptions are not secure or general enough for many mobile social scenarios: centralized servers cannot always be relied upon to preserve data confidentiality, and users may want to use mobile social services to establish new relationships. To address these shortcomings, this paper describes SMILE, a privacy-preserving \"missed-connections\" service in which the service provider is untrusted and users are not assumed to have pre-established social relationships with each other. At a high-level, SMILE uses short-range wireless communication and standard cryptographic primitives to mimic the behavior of users in existing missed-connections services such as Craigslist: trust is founded solely on anonymous users' ability to prove to each other that they shared an encounter in the past. We have evaluated SMILE using protocol analysis, an informal study of Craigslist usage, and experiments with a prototype implementation and found it to be both privacy-preserving and feasible." ] }
1304.3513
1872181691
Geosocial networks are online social networks centered on the locations of subscribers and businesses. Providing input to targeted advertising, profiling social network users becomes an important source of revenue. Its natural reliance on personal information introduces a trade-off between user privacy and incentives of participation for businesses and geosocial network providers. In this paper we introduce location centric profiles (LCPs), aggregates built over the profiles of users present at a given location. We introduce PROFILR, a suite of mechanisms that construct LCPs in a private and correct manner. We introduce iSafe, a novel, context aware public safety application built on PROFILR . Our Android and browser plugin implementations show that PROFILR is efficient: the end-to-end overhead is small even under strong correctness assurances.
Location and temporal cloaking techniques, which introduce errors into reported locations in order to provide 1-out-of-k anonymity, were initially proposed in @cite_31 and followed by a significant body of work @cite_3 @cite_14 @cite_17 @cite_38 . We note that PROFILR provides an orthogonal notion of @math -anonymity: instead of reporting intervals containing @math other users, we allow the construction of location centric profiles only when @math users have reported their location. Computed LCPs hide the profiles of individual users: user profiles are anonymous, only aggregates are available for inspection, and interactions with venues and the provider are indistinguishable.
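The threshold behavior described above, releasing a location centric profile only once at least k users have contributed, can be sketched as follows (class and attribute names are illustrative, not taken from PROFILR):

```python
from collections import Counter

class LocationProfile:
    """Aggregate profile for one venue. Counts are released only once
    at least k users have reported, so no individual stands out."""

    def __init__(self, k: int):
        self.k = k
        self.counts = Counter()  # e.g. age bracket -> number of users
        self.reports = 0

    def report(self, attribute: str) -> None:
        self.counts[attribute] += 1
        self.reports += 1

    def aggregate(self):
        # Withhold the profile below the anonymity threshold.
        if self.reports < self.k:
            return None
        return dict(self.counts)

lcp = LocationProfile(k=3)
lcp.report("18-25")
lcp.report("26-35")
assert lcp.aggregate() is None  # only 2 reports: withheld
lcp.report("18-25")
assert lcp.aggregate() == {"18-25": 2, "26-35": 1}
```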
{ "cite_N": [ "@cite_38", "@cite_14", "@cite_3", "@cite_31", "@cite_17" ], "mid": [ "1964027885", "2159777963", "2126236329", "2056773559", "2165854553" ], "abstract": [ "Mobile devices with positioning capabilities allow users to participate in novel and exciting location-based applications. For instance, users may track the whereabouts of their acquaintances in location-aware social networking applications, e.g., GoogleLatitude. Furthermore, users can request information about landmarks in their proximity. Such scenarios require users to report their coordinates to other parties, which may not be fully trusted. Reporting precise locations may result in serious privacy violations, such as disclosure of lifestyle details, sexual orientation, etc. A typical approach to preserve location privacy is to generate a cloaking region (CR) that encloses the user position. However, if locations are continuously reported, an attacker can correlate CRs from multiple timestamps to accurately pinpoint the user position within a CR. In this work, we protect against linkage attacks that infer exact locations based on prior knowledge about maximum user velocity. Assume user u who reports two consecutive cloaked regions A and B. We consider two distinct protection scenarios: in the first case, the attacker does not have information about the sensitive locations on the map, and the objective is to ensure that u can reach some point in B from any point in A. In the second case, the attacker knows the placement of sensitive locations, and the objective is to ensure that u can reach any point in B from any point in A. We propose spatial and temporal cloaking transformations to preserve user privacy, and we show experimentally that privacy can be achieved without significant quality of service deterioration.", "Mobile smartphone users frequently need to search for nearby points of interest from a location based service, but in a way that preserves the privacy of the users' locations. 
We present a technique for private information retrieval that allows a user to retrieve information from a database server without revealing what is actually being retrieved from the server. We perform the retrieval operation in a computationally efficient manner to make it practical for resource-constrained hardware such as smartphones, which have limited processing power, memory, and wireless bandwidth. In particular, our algorithm makes use of a variable-sized cloaking region that increases the location privacy of the user at the cost of additional computation, but maintains the same traffic cost. Our proposal does not require the use of a trusted third-party component, and ensures that we find a good compromise between user privacy and computational efficiency. We evaluated our approach with a proof-of-concept implementation over a commercial-grade database of points of interest. We also measured the performance of our query technique on a smartphone and wireless network.", "Automotive traffic monitoring using probe vehicles with Global Positioning System receivers promises significant improvements in cost, coverage, and accuracy. Current approaches, however, raise privacy concerns because they require participants to reveal their positions to an external traffic monitoring server. To address this challenge, we propose a system based on virtual trip lines and an associated cloaking technique. Virtual trip lines are geographic markers that indicate where vehicles should provide location updates. These markers can be placed to avoid particularly privacy sensitive locations. They also allow aggregating and cloaking several location updates based on trip line identifiers, without knowing the actual geographic locations of these trip lines. Thus they facilitate the design of a distributed architecture, where no single entity has a complete knowledge of probe identities and fine-grained location information. 
We have implemented the system with GPS smartphone clients and conducted a controlled experiment with 20 phone-equipped drivers circling a highway segment. Results show that even with this low number of probe vehicles, travel time estimates can be provided with less than 15 error, and applying the cloaking techniques reduces travel time estimation accuracy by less than 5 compared to a standard periodic sampling approach.", "Advances in sensing and tracking technology enable location-based applications but they also create significant privacy risks. Anonymity can provide a high degree of privacy, save service users from dealing with service providers’ privacy policies, and reduce the service providers’ requirements for safeguarding private information. However, guaranteeing anonymous usage of location-based services requires that the precise location information transmitted by a user cannot be easily used to re-identify the subject. This paper presents a middleware architecture and algorithms that can be used by a centralized location broker service. The adaptive algorithms adjust the resolution of location information along spatial or temporal dimensions to meet specified anonymity constraints based on the entities who may be using location services within a given area. Using a model based on automotive traffic counts and cartographic material, we estimate the realistically expected spatial resolution for different anonymity constraints. The median resolution generated by our algorithms is 125 meters. Thus, anonymous location-based requests for urban areas would have the same accuracy currently needed for E-911 services; this would provide sufficient resolution for wayfinding, automated bus routing services and similar location-dependent services.", "Privacy preservation has recently received considerable attention for location-based mobile services. Various location cloaking approaches have been proposed to protect the location privacy of mobile users. 
However, existing cloaking approaches are ill-suited for continuous queries. In view of the privacy disclosure and poor QoS (Quality of Service) under continuous query anonymization, in this paper, we propose a Δp-privacy model and a Δq-distortion model to balance the tradeoff between user privacy and QoS. Furthermore, two incremental utility-based cloaking algorithms --- bottom-up cloaking and hybrid cloaking, are proposed to anonymize continuous queries. Experimental results validate the efficiency and effectiveness of the proposed algorithms." ] }
1304.3513
1872181691
Geosocial networks are online social networks centered on the locations of subscribers and businesses. Providing input to targeted advertising, profiling social network users becomes an important source of revenue. Its natural reliance on personal information introduces a trade-off between user privacy and incentives of participation for businesses and geosocial network providers. In this paper we introduce location centric profiles (LCPs), aggregates built over the profiles of users present at a given location. We introduce PROFILR, a suite of mechanisms that construct LCPs in a private and correct manner. We introduce iSafe, a novel, context aware public safety application built on PROFILR . Our Android and browser plugin implementations show that PROFILR is efficient: the end-to-end overhead is small even under strong correctness assurances.
Our work relies on the assumption that participants cannot control a large number of fake, Sybil accounts. One way to ensure this property is to use existing Sybil detection techniques. Danezis and Mittal @cite_29 proposed SybilInfer, a centralized solution based on Bayesian inference. Yu et al. proposed distributed solutions, SybilGuard @cite_26 and SybilLimit @cite_23 , that use online social networks to protect peer-to-peer networks against Sybil nodes. They rely on the fast mixing property of social networks and the limited connectivity of Sybil nodes to non-Sybil nodes.
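The fast-mixing intuition behind SybilGuard and SybilLimit can be illustrated with a toy random-walk sketch (an assumption-laden simplification, not the actual protocols, which use random routes and per-edge keys): walks from honest nodes intersect with high probability, while walks rarely cross the small cut into a Sybil region.

```python
import random

def random_walk(graph, start, length, rng):
    # One random walk on the social graph. In a fast-mixing honest
    # region such walks spread quickly; the small cut between honest
    # and Sybil nodes makes crossing walks unlikely.
    node, visited = start, {start}
    for _ in range(length):
        node = rng.choice(graph[node])
        visited.add(node)
    return visited

def likely_honest(graph, verifier, suspect, length=10, rng=None):
    # SybilGuard-style acceptance sketch: accept the suspect if the
    # verifier's walk and the suspect's walk intersect.
    rng = rng or random.Random(0)
    return bool(random_walk(graph, verifier, length, rng)
                & random_walk(graph, suspect, length, rng))

# Toy hub-and-spoke honest region: every walk passes through the hub,
# so verification between honest nodes succeeds.
graph = {"h": ["a", "b", "c"], "a": ["h"], "b": ["h"], "c": ["h"]}
assert likely_honest(graph, "a", "c")
```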
{ "cite_N": [ "@cite_29", "@cite_26", "@cite_23" ], "mid": [ "1551760018", "2101890615", "2110801527" ], "abstract": [ "SybilInfer is an algorithm for labelling nodes in a social network as honest users or Sybils controlled by an adversary. At the heart of SybilInfer lies a probabilistic model of honest social networks, and an inference engine that returns potential regions of dishonest nodes. The Bayesian inference approach to Sybil detection comes with the advantage label has an assigned probability, indicating its degree of certainty. We prove through analytical results as well as experiments on simulated and real-world network topologies that, given standard constraints on the adversary, SybilInfer is secure, in that it successfully distinguishes between honest and dishonest nodes and is not susceptible to manipulation by the adversary. Furthermore, our results show that SybilInfer outperforms state of the art algorithms, both in being more widely applicable, as well as providing vastly more accurate results.", "Peer-to-peer and other decentralized,distributed systems are known to be particularly vulnerable to sybil attacks. In a sybil attack,a malicious user obtains multiple fake identities and pretends to be multiple, distinct nodes in the system. By controlling a large fraction of the nodes in the system,the malicious user is able to \"out vote\" the honest users in collaborative tasks such as Byzantine failure defenses. This paper presents SybilGuard, a novel protocol for limiting the corruptive influences of sybil attacks.Our protocol is based on the \"social network \"among user identities, where an edge between two identities indicates a human-established trust relationship. Malicious users can create many identities but few trust relationships. Thus, there is a disproportionately-small \"cut\" in the graph between the sybil nodes and the honest nodes. 
SybilGuard exploits this property to bound the number of identities a malicious user can create.We show the effectiveness of SybilGuard both analytically and experimentally.", "Open-access distributed systems such as peer-to-peer systems are particularly vulnerable to sybil attacks, where a malicious user creates multiple fake identities (called sybil nodes). Without a trusted central authority that can tie identities to real human beings, defending against sybil attacks is quite challenging. Among the small number of decentralized approaches, our recent SybilGuard protocol leverages a key insight on social networks to bound the number of sybil nodes accepted. Despite its promising direction, SybilGuard can allow a large number of sybil nodes to be accepted. Furthermore, SybilGuard assumes that social networks are fast-mixing, which has never been confirmed in the real world. This paper presents the novel SybilLimit protocol that leverages the same insight as SybilGuard, but offers dramatically improved and near-optimal guarantees. The number of sybil nodes accepted is reduced by a factor of Θ(√n), or around 200 times in our experiments for a million-node system. We further prove that SybilLimit's guarantee is at most a log n factor away from optimal when considering approaches based on fast-mixing social networks. Finally, based on three large-scale real-world social networks, we provide the first evidence that real-world social networks are indeed fast-mixing. This validates the fundamental assumption behind SybilLimit's and SybilGuard's approach." ] }
1304.3513
1872181691
Geosocial networks are online social networks centered on the locations of subscribers and businesses. Providing input to targeted advertising, profiling social network users becomes an important source of revenue. Its natural reliance on personal information introduces a trade-off between user privacy and incentives of participation for businesses and geosocial network providers. In this paper we introduce location centric profiles (LCPs), aggregates built over the profiles of users present at a given location. We introduce PROFILR, a suite of mechanisms that construct LCPs in a private and correct manner. We introduce iSafe, a novel, context aware public safety application built on PROFILR . Our Android and browser plugin implementations show that PROFILR is efficient: the end-to-end overhead is small even under strong correctness assurances.
Significant work has been done recently to preserve the privacy of users from the online social network provider. @cite_2 proposed Safebook, a distributed online social network in which insiders are protected from external observers through the inherent flow of information in the system. @cite_7 proposed Lockr, a system for improving the privacy of social networks; it uses the concept of a social attestation, a credential proving a social relationship. @cite_0 introduced Persona, a distributed social network with distributed account data storage. @cite_37 proposed a similar solution, extended with revocation capabilities through the use of broadcast encryption. While we rely on distributed online social networks, our goal is to protect the privacy of users while also allowing venues to collect certain user statistics.
{ "cite_N": [ "@cite_0", "@cite_37", "@cite_7", "@cite_2" ], "mid": [ "2406604473", "2117042278", "2103230119", "2114100569" ], "abstract": [ "Online Social Networks (OSNs) encourage users to create an online presence that reflects their offline identity. OSNs create the illusion that these online accounts correspond to the correct offline person, but in reality the OSN lacks the resources to detect impersonation. We propose that OSN users identify each other based on interaction and experience. We believe that impersonation can be thwarted by users who possess exclusive shared knowledge, secret information shared only between a pair of OSN friends. We describe existing protocols that use shared secrets to exchange public keys without revealing those secrets to attackers. We present results from a user study on Facebook to show that users do share exclusive knowledge with their Facebook friends and attackers are rarely able to guess that knowledge. Finally, we show that friend identification can be extended using a web of trust built on the OSN friend graph.", "Online social networks (OSNs) are attractive applications which enable a group of users to share data and stay connected. Facebook, Myspace, and Twitter are among the most popular applications of OSNs where personal information is shared among group contacts. Due to the private nature of the shared information, data privacy is an indispensable security requirement in OSN applications. In this paper, we propose a privacy-preserving scheme for data sharing in OSNs, with efficient revocation for deterring a contact's access right to the private data once the contact is removed from the social group. In addition, the proposed scheme offers advanced features such as efficient search over encrypted data files and dynamic changes to group membership. With slight modification, we extend the application of the proposed scheme to anonymous online social networks of different security and functional requirements. 
The proposed scheme is demonstrated to be secure, effective, and efficient.", "Today's online social networking (OSN) sites do little to protect the privacy of their users' social networking information. Given the highly sensitive nature of the information these sites store, it is understandable that many users feel victimized and disempowered by OSN providers' terms of service. This paper presents Lockr, a system that improves the privacy of centralized and decentralized online content sharing systems. Lockr offers three significant privacy benefits to OSN users. First, it separates social networking content from all other functionality that OSNs provide. This decoupling lets users control their own social information: they can decide which OSN provider should store it, which third parties should have access to it, or they can even choose to manage it themselves. Such flexibility better accommodates OSN users' privacy needs and preferences. Second, Lockr ensures that digitally signed social relationships needed to access social data cannot be re-used by the OSN for unintended purposes. This feature drastically reduces the value to others of social content that users entrust to OSN providers. Finally, Lockr enables message encryption using a social relationship key. This key lets two strangers with a common friend verify their relationship without exposing it to others, a common privacy threat when sharing data in a decentralized scenario. This paper relates Lockr's design and implementation and shows how we integrate it with Flickr, a centralized OSN, and BitTorrent, a decentralized one. Our implementation demonstrates Lockr's critical primary benefits for privacy as well as its secondary benefits for simplifying site management and accelerating content delivery. 
These benefits were achieved with negligible performance cost and overhead.", "Social networking services (SNS), which provide the application with the most probably highest growth rates in the Internet today, raise serious security concerns, especially with respect to the privacy of their users. Multiple studies have shown the vulnerability of these services to breaches of privacy and to impersonation attacks mounted by third parties, however the centralized storage at the providers of SNS represents an additional quite significant weakness that so far has not satisfyingly been addressed. In this paper we show the feasibility of “Safebook”, our proposal for the provision of a competitive social networking service, which solves these vulnerabilities by its decentralized design, leveraging on the real life relationships of its users and means of cryptography." ] }
1304.3405
2950226029
Recommender systems associated with social networks often use social explanations (e.g. "X, Y and 2 friends like this") to support the recommendations. We present a study of the effects of these social explanations in a music recommendation context. We start with an experiment with 237 users, in which we show explanations with varying levels of social information and analyze their effect on users' decisions. We distinguish between two key decisions: the likelihood of checking out the recommended artist, and the actual rating of the artist based on listening to several songs. We find that while the explanations do have some influence on the likelihood, there is little correlation between the likelihood and actual (listening) rating for the same artist. Based on these insights, we present a generative probabilistic model that explains the interplay between explanations and background information on music preferences, and how that leads to a final likelihood rating for an artist. Acknowledging the impact of explanations, we discuss a general recommendation framework that models external informational elements in the recommendation interface, in addition to inherent preferences of users.
However, being persuasive has drawbacks. Another study found that although explanations might persuade a user to try an item, they were not good for accurately estimating the quality of an item @cite_24 . The authors further argue that the goal of a recommender should not be to promote a recommendation (which they call promotion), but rather to enable a user to make a more accurate judgment of the true quality of the item for that person (which they call satisfaction).
{ "cite_N": [ "@cite_24" ], "mid": [ "2130369780" ], "abstract": [ "Recommender systems have become a popular technique for helping users select desirable books, movies, music and other items. Most research in the area has focused on developing and evaluating algorithms for efficiently producing accurate recommendations. However, the ability to effectively explain its recommendations to users is another important aspect of a recommender system. The only previous investigation of methods for explaining recommendations showed that certain styles of explanations were effective at convincing users to adopt recommendations (i.e. promotion) but failed to show that explanations actually helped users make more accurate decisions (i.e. satisfaction). We present two new methods for explaining recommendations of contentbased and or collaborative systems and experimentally show that they actually improve user’s estimation of item quality." ] }
1304.3405
2950226029
Recommender systems associated with social networks often use social explanations (e.g. "X, Y and 2 friends like this") to support the recommendations. We present a study of the effects of these social explanations in a music recommendation context. We start with an experiment with 237 users, in which we show explanations with varying levels of social information and analyze their effect on users' decisions. We distinguish between two key decisions: the likelihood of checking out the recommended artist, and the actual rating of the artist based on listening to several songs. We find that while the explanations do have some influence on the likelihood, there is little correlation between the likelihood and actual (listening) rating for the same artist. Based on these insights, we present a generative probabilistic model that explains the interplay between explanations and background information on music preferences, and how that leads to a final likelihood rating for an artist. Acknowledging the impact of explanations, we discuss a general recommendation framework that models external informational elements in the recommendation interface, in addition to inherent preferences of users.
Besides helping users make an informed choice, explanations may also increase the acceptability of a recommender system overall, by communicating why an item has been recommended to a user @cite_19 and thus helping them understand the system. These explanations and other presentational choices can be designed to increase the system's trustworthiness @cite_21 , and a number of real systems incorporate explanations (e.g., Amazon's explanation that ``Customers who bought this also bought these'', and Netflix's explanation by genres). A comprehensive review provides a number of desirable attributes of explanations, including transparency, scrutability, trustworthiness, effectiveness, persuasiveness, efficiency, and satisfaction @cite_0 .
{ "cite_N": [ "@cite_0", "@cite_19", "@cite_21" ], "mid": [ "2126159342", "", "1995445252" ], "abstract": [ "This paper provides a comprehensive review of explanations in recommender systems. We highlight seven possible advantages of an explanation facility, and describe how existing measures can be used to evaluate the quality of explanations. Since explanations are not independent of the recommendation process, we consider how the ways recommendations are presented may affect explanations. Next, we look at different ways of interacting with explanations. The paper is illustrated with examples of explanations throughout, where possible from existing applications.", "", "Based on our recent work on the development of a trust model for recommender agents and a qualitative survey, we explore the potential of building users' trust with explanation interfaces. We present the major results from the survey, which provided a roadmap identifying the most promising areas for investigating design issues for trust-inducing interfaces. We then describe a set of general principles derived from an in-depth examination of various design dimensions for constructing explanation interfaces, which most contribute to trust formation. We present results of a significant-scale user study, which indicate that the organization-based explanation is highly effective in building users' trust in the recommendation interface, with the benefit of increasing users' intention to return to the agent and save cognitive effort." ] }
1304.3405
2950226029
Recommender systems associated with social networks often use social explanations (e.g. "X, Y and 2 friends like this") to support the recommendations. We present a study of the effects of these social explanations in a music recommendation context. We start with an experiment with 237 users, in which we show explanations with varying levels of social information and analyze their effect on users' decisions. We distinguish between two key decisions: the likelihood of checking out the recommended artist, and the actual rating of the artist based on listening to several songs. We find that while the explanations do have some influence on the likelihood, there is little correlation between the likelihood and actual (listening) rating for the same artist. Based on these insights, we present a generative probabilistic model that explains the interplay between explanations and background information on music preferences, and how that leads to a final likelihood rating for an artist. Acknowledging the impact of explanations, we discuss a general recommendation framework that models external informational elements in the recommendation interface, in addition to inherent preferences of users.
Models based on these theories and the availability of social connection information have been proposed to support collaborative filtering algorithms that use social information @cite_17 @cite_8 , focusing on preferences in users' immediate social networks @cite_27 @cite_1 and computing trust between people in networks @cite_4 to improve recommendations.
{ "cite_N": [ "@cite_4", "@cite_8", "@cite_1", "@cite_27", "@cite_17" ], "mid": [ "1985109665", "2144487656", "", "1967507014", "1976320242" ], "abstract": [ "The growth of Web-based social networking and the properties of those networks have created great potential for producing intelligent software that integrates a user's social network and preferences. Our research looks particularly at assigning trust in Web-based social networks and investigates how trust information can be mined and integrated into applications. This article introduces a definition of trust suitable for use in Web-based social networks with a discussion of the properties that will influence its use in computation. We then present two algorithms for inferring trust relationships between individuals that are not directly connected in the network. Both algorithms are shown theoretically and through simulation to produce calculated trust values that are highly accurate.. We then present TrustMail, a prototype email client that uses variations on these algorithms to score email messages in the user's inbox based on the user's participation and ratings in a trust network.", "Although Recommender Systems have been comprehensively analyzed in the past decade, the study of social-based recommender systems just started. In this paper, aiming at providing a general method for improving recommender systems by incorporating social network information, we propose a matrix factorization framework with social regularization. 
The contributions of this paper are four-fold: (1) We elaborate how social network information can benefit recommender systems; (2) We interpret the differences between social-based recommender systems and trust-aware recommender systems; (3) We coin the term Social Regularization to represent the social constraints on recommender systems, and we systematically illustrate how to design a matrix factorization objective function with social regularization; and (4) The proposed method is quite general, which can be easily extended to incorporate other contextual information, like social tags, etc. The empirical analysis on two large datasets demonstrates that our approaches outperform other state-of-the-art methods.", "", "We study personalized recommendation of social software items, including bookmarked web-pages, blog entries, and communities. We focus on recommendations that are derived from the user's social network. Social network information is collected and aggregated across different data sources within our organization. At the core of our research is a comparison between recommendations that are based on the user's familiarity network and his her similarity network. We also examine the effect of adding explanations to each recommended item that show related people and their relationship to the user and to the item. Evaluation, based on an extensive user survey with 290 participants and a field study including 90 users, indicates superiority of the familiarity network as a basis for recommendations. In addition, an important instant effect of explanations is found - interest rate in recommended items increases when explanations are provided.", "Social network systems, like last.fm, play a significant role in Web 2.0, containing large amounts of multimedia-enriched data that are enhanced both by explicit user-provided annotations and implicit aggregated feedback describing the personal preferences of each user. 
It is also a common tendency for these systems to encourage the creation of virtual networks among their users by allowing them to establish bonds of friendship and thus provide a novel and direct medium for the exchange of data. We investigate the role of these additional relationships in developing a track recommendation system. Taking into account both the social annotation and friendships inherent in the social graph established among users, items and tags, we created a collaborative recommendation system that effectively adapts to the personal information needs of each user. We adopt the generic framework of Random Walk with Restarts in order to provide with a more natural and efficient way to represent social networks. In this work we collected a representative enough portion of the music social network last.fm, capturing explicitly expressed bonds of friendship of the user as well as social tags. We performed a series of comparison experiments between the Random Walk with Restarts model and a user-based collaborative filtering method using the Pearson Correlation similarity. The results show that the graph model system benefits from the additional information embedded in social knowledge. In addition, the graph model outperforms the standard collaborative filtering method." ] }
1304.3405
2950226029
Recommender systems associated with social networks often use social explanations (e.g. "X, Y and 2 friends like this") to support the recommendations. We present a study of the effects of these social explanations in a music recommendation context. We start with an experiment with 237 users, in which we show explanations with varying levels of social information and analyze their effect on users' decisions. We distinguish between two key decisions: the likelihood of checking out the recommended artist, and the actual rating of the artist based on listening to several songs. We find that while the explanations do have some influence on the likelihood, there is little correlation between the likelihood and actual (listening) rating for the same artist. Based on these insights, we present a generative probabilistic model that explains the interplay between explanations and background information on music preferences, and how that leads to a final likelihood rating for an artist. Acknowledging the impact of explanations, we discuss a general recommendation framework that models external informational elements in the recommendation interface, in addition to inherent preferences of users.
This information can also be used to support social explanation, as with the neighbor-based ratings in Bilgic and Mooney @cite_24 and aggregate customer behavior in Amazon. User-generated tags, selected by popularity and relevance, are another source of social information that has been studied for explanation @cite_13 . However, despite the use in practice of friendship, egocentric-network, and overall popularity information in social explanations, there has been little study of how they influence likelihood and consumption decisions. Our work directly addresses these questions, and we now turn to the particular social explanations we study.
{ "cite_N": [ "@cite_24", "@cite_13" ], "mid": [ "2130369780", "2114281505" ], "abstract": [ "Recommender systems have become a popular technique for helping users select desirable books, movies, music and other items. Most research in the area has focused on developing and evaluating algorithms for efficiently producing accurate recommendations. However, the ability to effectively explain its recommendations to users is another important aspect of a recommender system. The only previous investigation of methods for explaining recommendations showed that certain styles of explanations were effective at convincing users to adopt recommendations (i.e. promotion) but failed to show that explanations actually helped users make more accurate decisions (i.e. satisfaction). We present two new methods for explaining recommendations of contentbased and or collaborative systems and experimentally show that they actually improve user’s estimation of item quality.", "While recommender systems tell users what items they might like, explanations of recommendations reveal why they might like them. Explanations provide many benefits, from improving user satisfaction to helping users make better decisions. This paper introduces tagsplanations, which are explanations based on community tags. Tagsplanations have two key components: tag relevance, the degree to which a tag describes an item, and tag preference, the user's sentiment toward a tag. We develop novel algorithms for estimating tag relevance and tag preference, and we conduct a user study exploring the roles of tag relevance and tag preference in promoting effective tagsplanations. We also examine which types of tags are most useful for tagsplanations." ] }
1304.2889
2118898123
Glyph-based visualization is an effective tool for depicting multivariate information. Since sorting is one of the most common analytical tasks performed on individual attributes of a multi-dimensional dataset, this motivates the hypothesis that introducing glyph sorting would significantly enhance the usability of glyph-based visualization. In this article, we present a glyph-based conceptual framework as part of a visualization process for interactive sorting of multivariate data. We examine several technical aspects of glyph sorting and provide design principles for developing effective, visually sortable glyphs. Glyphs that are visually sortable provide two key benefits: (1) performing comparative analysis of multiple attributes between glyphs and (2) to support multi-dimensional visual search. We describe a system that incorporates focus and context glyphs to control sorting in a visually intuitive manner and for viewing sorted results in an interactive, multi-dimensional glyph plot that enables users to perform high-dimensional sorting, analyse and examine data trends in detail. To demonstrate the usability of glyph sorting, we present a case study in rugby event analysis for comparing and analysing trends within matches. This work is undertaken in conjunction with a national rugby team. From using glyph sorting, analysts have reported the discovery of new insight beyond traditional match analysis.
Sorting is the computational process of rearranging a sequence of items into ascending or descending order @cite_39 . Many sorting algorithms have been proposed, including bubble sort by Demuth @cite_17 , merge sort by von Neumann @cite_39 , and quicksort by Hoare @cite_4 . Since best- and worst-case runtimes can vary drastically with such algorithms, further research continues to propose new sorting techniques @cite_30 and adaptive approaches that utilise ordered data @cite_33 . Our work is not focused on a faster sorting algorithm per se, but on combining the benefits of sorting with glyph-based visualization.
{ "cite_N": [ "@cite_30", "@cite_4", "@cite_33", "@cite_39", "@cite_17" ], "mid": [ "2028778377", "", "2066823629", "1591066263", "2123596193" ], "abstract": [ "Traditional Insertion Sort runs in O(n2) time because each insertion takes O(n) time. When people run Insertion Sort in the physical world, they leave gaps between items to accelerate insertions. Gaps help in computers as well. This paper shows that Gapped Insertion Sort has insertion times of O(log n) with high probability, yielding a total running time of O(n log n) with high probability.", "", "The design and analysis of adaptive sorting algorithms has made important contributions to both theory and practice. The main contributions from the theoretical point of view are: the description of the complexity of a sorting algorithm not only in terms of the size of a problem instance but also in terms of the disorder of the given problem instance; the establishment of new relationships among measures of disorder; the introduction of new sorting algorithms that take advantage of the existing order in the input sequence; and, the proofs that several of the new sorting algorithms achieve maximal (optimal) adaptivity with respect to several measures of disorder. The main contributions from the practical point of view are: the demonstration that several algorithms currently in use are adaptive; and, the development of new algorithms, similar to currently used algorithms that perform competitively on random sequences and are significantly faster on nearly sorted sequences. In this survey, we present the basic notions and concepts of adaptive sorting and the state of the art of adaptive sorting algorithms.", "Apparatus for supporting different nets for various sporting purposes including interengaging tubular rods which are arranged to interconnect and have ground engaging portions suitable to be useful for the several functions. 
The frame of the net support structure includes a pair of spaced apart, vertically extending posts; each of the posts is divided into a pair of telescoping sections. An upper horizontally extending multi-section member extends and connects the upper end of the vertical posts. A U-shaped clip is provided to engage the frame support with resilient holding pressure for supporting a net on the frame.", "This paper presents results of a study of the fundamentals of sorting. Emphasis is placed on understanding sorting and on minimizing the time required to sort with electronic equipment of reasonable cost. Sorting is viewed as a combination of information gathering and item moving activities. Shannon's communication theory measure of information is applied to assess the difficulty of various sorting problems. Bounds on the number of comparisons required to sort are developed, and optimal or near-optimal sorting schemes are described and investigated. Three abstract sorting models based on cyclic, linear, and randomaccess memories are defined. Optimal or near-optimal sorting methods are developed for the models and their parallel-register extensions. A brief review of the origin of the work and some of its hypotheses is also presented." ] }
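The adaptive behaviour referenced in the sorting-related passage above (insertion sort does near-linear work on nearly sorted input but quadratic work on adversarial input, which motivates gapped and adaptive variants) can be illustrated with a minimal Python sketch; the comparison counter is an illustrative addition for measurement, not part of any cited algorithm:

```python
def insertion_sort(seq):
    """Standard insertion sort, returning the sorted list and the
    number of key comparisons performed. The algorithm is adaptive:
    on already-ordered data each element needs only one comparison."""
    a = list(seq)
    comparisons = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            comparisons += 1
            if a[j] > key:
                a[j + 1] = a[j]  # shift larger element right
                j -= 1
            else:
                break
        a[j + 1] = key
    return a, comparisons

# Nearly sorted input: comparisons stay close to n.
print(insertion_sort([1, 2, 3, 5, 4, 6, 7, 8]))
# Reversed input: the full quadratic number of comparisons.
print(insertion_sort(list(range(8, 0, -1))))
```

On the nearly sorted list the counter stays close to the sequence length, while the reversed list triggers the quadratic behaviour that gapped insertion sort and the adaptive algorithms surveyed above are designed to avoid.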
1304.2889
2118898123
Glyph-based visualization is an effective tool for depicting multivariate information. Since sorting is one of the most common analytical tasks performed on individual attributes of a multi-dimensional dataset, this motivates the hypothesis that introducing glyph sorting would significantly enhance the usability of glyph-based visualization. In this article, we present a glyph-based conceptual framework as part of a visualization process for interactive sorting of multivariate data. We examine several technical aspects of glyph sorting and provide design principles for developing effective, visually sortable glyphs. Glyphs that are visually sortable provide two key benefits: (1) performing comparative analysis of multiple attributes between glyphs and (2) to support multi-dimensional visual search. We describe a system that incorporates focus and context glyphs to control sorting in a visually intuitive manner and for viewing sorted results in an interactive, multi-dimensional glyph plot that enables users to perform high-dimensional sorting, analyse and examine data trends in detail. To demonstrate the usability of glyph sorting, we present a case study in rugby event analysis for comparing and analysing trends within matches. This work is undertaken in conjunction with a national rugby team. From using glyph sorting, analysts have reported the discovery of new insight beyond traditional match analysis.
Interactive visualization studies how human interaction supports the exploration and understanding of datasets through visualization, which Zudilova et al. @cite_35 cover in a state-of-the-art report. De Leeuw and Van Wijk @cite_8 provide an early example of incorporating glyphs into interactive visualization, analysing multiple flow characteristics in selected regions using a probe glyph. Shaw et al. @cite_0 describe an interactive glyph-based framework for visualizing multi-dimensional data, where attributes are mapped in order of data importance to visual cues such as location, size, colour and shape. To our knowledge, this is the first work of its kind to introduce focus and context glyphs for visual sorting of high-dimensional data.
{ "cite_N": [ "@cite_0", "@cite_35", "@cite_8" ], "mid": [ "1971702804", "", "2079380280" ], "abstract": [ "This paper describes a new technique for the multi-dimensional visualization of data through automatic procedural generation of glyph shapes based on mathematical functions. Our glyph- based Stereoscopic Field Analyzer (SFA) system allows the visualization of both regular and irregular grids of volumetric data. SFA uses a glyph's location, 3D size, color and opacity to encode up to 8 attributes of scalar data per glyph. We have extended SFA's capabilities to explore shape variation as a visualization attribute. We opted for a procedural approach, which allows flexibility, data abstraction, and freedom from specification of detailed shapes. Superquadrics are a natural choice to satisfy our goal of automatic and comprehensible mapping of data to shape. For our initial implementation we have chosen superellipses. We parameterize superquadrics to allow continuous control over the 'roundness' or 'pointiness' of the shape in the two major planes which intersect to form the shape, allowing a very simple, intuitive, abstract schema of shape specification.© (1998) COPYRIGHT SPIE--The International Society for Optical Engineering. Downloading of the abstract is permitted for personal use only.", "", "A probe for the interactive visualization of flow fields is presented. The probe can be used to visualize many characteristics of the flow in detail for a small region in the data set. The velocity and the local change of velocity (the velocity gradient tensor) are visualized by a set of geometric primitives. To this end, the velocity gradient tensor is transformed to a local coordinate frame, and decomposed into components parallel with and perpendicular to the flow. These components are visualized as geometric objects with an intuitively meaningful interpretation. An implementation is presented which shows that this probe is a useful tool for flow visualization. >" ] }
1304.3010
1995358441
This paper presents a study of the life cycle of news articles posted online. We describe the interplay between website visitation patterns and social media reactions to news content. We show that we can use this hybrid observation method to characterize distinct classes of articles. We also find that social media reactions can help predict future visitation patterns early and accurately. We validate our methods using qualitative analysis as well as quantitative analysis on data from a large international news network, for a set of articles generating more than 3,000,000 visits and 200,000 social media reactions. We show that it is possible to model accurately the overall traffic articles will ultimately receive by observing the first ten to twenty minutes of social media reactions. Achieving the same prediction accuracy with visits alone would require to wait for three hours of data. We also describe significant improvements on the accuracy of the early prediction of shelf-life for news stories.
Behavioral-driven article classification. Previous works that study online activities around online resources (e.g. visiting, voting, sharing), including @cite_5 @cite_18 , have consistently identified broad classes of temporal patterns. These classes can be generally characterized, first, by the presence or absence of a clear ``peak'' of activity, and second, by the amount of activity before and after the peak.
{ "cite_N": [ "@cite_5", "@cite_18" ], "mid": [ "2042034885", "2020221730" ], "abstract": [ "We study the relaxation response of a social system after endogenous and exogenous bursts of activity using the time series of daily views for nearly 5 million videos on YouTube. We find that most activity can be described accurately as a Poisson process. However, we also find hundreds of thousands of examples in which a burst of activity is followed by an ubiquitous power-law relaxation governing the timing of views. We find that these relaxation exponents cluster into three distinct classes and allow for the classification of collective human dynamics. This is consistent with an epidemic model on a social network containing two ingredients: a power-law distribution of waiting times between cause and action and an epidemic cascade of actions becoming the cause of future actions. This model is a conceptual extension of the fluctuation-dissipation theorem to social systems [Ruelle, D (2004) Phys Today 57:48–53] and [Roehner BM, et al, (2004) Int J Mod Phys C 15:809–834], and provides a unique framework for the investigation of timing in complex systems.", "Micro-blogging systems such as Twitter expose digital traces of social discourse with an unprecedented degree of resolution of individual behaviors. They offer an opportunity to investigate how a large-scale social system responds to exogenous or endogenous stimuli, and to disentangle the temporal, spatial and topical aspects of users' activity. Here we focus on spikes of collective attention in Twitter, and specifically on peaks in the popularity of hashtags. Users employ hashtags as a form of social annotation, to define a shared context for a specific event, topic, or meme. We analyze a large-scale record of Twitter activity and find that the evolution of hashtag popularity over time defines discrete classes of hashtags. 
We link these dynamical classes to the events the hashtags represent and use text mining techniques to provide a semantic characterization of the hashtag classes. Moreover, we track the propagation of hashtags in the Twitter social network and find that epidemic spreading plays a minor role in hashtag popularity, which is mostly driven by exogenous factors." ] }
1304.3010
1995358441
This paper presents a study of the life cycle of news articles posted online. We describe the interplay between website visitation patterns and social media reactions to news content. We show that we can use this hybrid observation method to characterize distinct classes of articles. We also find that social media reactions can help predict future visitation patterns early and accurately. We validate our methods using qualitative analysis as well as quantitative analysis on data from a large international news network, for a set of articles generating more than 3,000,000 visits and 200,000 social media reactions. We show that it is possible to model accurately the overall traffic articles will ultimately receive by observing the first ten to twenty minutes of social media reactions. Achieving the same prediction accuracy with visits alone would require to wait for three hours of data. We also describe significant improvements on the accuracy of the early prediction of shelf-life for news stories.
Prediction of users' activity. The prediction of the volume of user activities with respect to on-line content items has attracted a considerable amount of research. This is attested by a number of papers, some of which are outlined in Table . Another active topic that is closely related, but different, is that of predicting real-world variables such as sales or profits using social media signals (e.g. @cite_12 and many others).
{ "cite_N": [ "@cite_12" ], "mid": [ "1982381099" ], "abstract": [ "An increasing fraction of the global discourse is migrating online in the form of blogs, bulletin boards, web pages, wikis, editorials, and a dizzying array of new collaborative technologies. The migration has now proceeded to the point that topics reflecting certain individual products are sufficiently popular to allow targeted online tracking of the ebb and flow of chatter around these topics. Based on an analysis of around half a million sales rank values for 2,340 books over a period of four months, and correlating postings in blogs, media, and web pages, we are able to draw several interesting conclusions.First, carefully hand-crafted queries produce matching postings whose volume predicts sales ranks. Second, these queries can be automatically generated in many cases. And third, even though sales rank motion might be difficult to predict in general, algorithmic predictors can use online postings to successfully predict spikes in sales rank." ] }
1304.3480
1959141096
Feld's friendship paradox states that "your friends have more friends than you, on average." This paradox arises because extremely popular people, despite being rare, are overrepresented when averaging over friends. Using a sample of the Twitter firehose, we confirm that the friendship paradox holds for >98% of Twitter users. Because of the directed nature of the follower graph on Twitter, we are further able to confirm more detailed forms of the friendship paradox: everyone you follow or who follows you has more friends and followers than you. This is likely caused by a correlation we demonstrate between Twitter activity, number of friends, and number of followers. In addition, we discover two new paradoxes: the virality paradox, which states "your friends receive more viral content than you, on average," and the activity paradox, which states "your friends are more active than you, on average." The latter paradox is important in regulating online communication. It may result in users having difficulty maintaining optimal incoming information rates, because following additional users causes the volume of incoming tweets to increase super-linearly. While users may compensate for increased information flow by increasing their own activity, users become information overloaded when they receive more information than they are able or willing to process. We compare the average size of cascades sent and received by overloaded and underloaded users, and show that overloaded users post and receive larger cascades and that they are poor detectors of small cascades.
The friendship paradox describes the phenomenon that most people have fewer friends than their friends have @cite_19 . The paradox exists because people who have more friends are more likely to be observed among other's friends; therefore, they contribute more frequently to the average. Interestingly, most people think they have more friends than their friends do @cite_25 .
{ "cite_N": [ "@cite_19", "@cite_25" ], "mid": [ "2084862036", "1971930416" ], "abstract": [ "It is reasonable to suppose that individuals use the number of friends that their friends have as one basis for determining whether they, themselves, have an adequate number of friends. This article shows that, if individuals compare themselves with their friends, it is likely that most of them will feel relatively inadequate. Data on friendship drawn from James Coleman's (1961) classic study The Adolescent Society are used to illustrate the phenomenon that most people have fewer friends than their friends have. The logic underlying the phenomenon is mathematically explored, showing that the mean number of friends of friends is always greater than the mean number of friends of individuals. Further analysis shows that the proportion of individuals who have fewer friends than the mean number of friends their own friends have is affected by the exact arrangement of friendships in a social network. This disproportionate experiencing of friends with many friends is related to a set of", "We report on a survey of undergraduates at the University of Chicago in which respondents were asked to assess their popularity relative to others. Popularity estimates were related to actual popularity, but we also found strong evidence of self-enhancement in self-other comparisons of popularity. In particular, self-enhancement was stronger for self versus friend comparisons than for self versus typical other comparisons; this is contrary to the reality demonstrated in Feld's friendship paradox and suggests that people are more threatened by the success of friends than of strangers. At the same time, people with relatively popular friends tended to make more self-serving estimates of their own popularity than did people with less popular friends. 
These results clarify how objective patterns of interpersonal contact work together with cognitive and motivational tendencies to shape perceptions of one's location in the social world." ] }
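The averaging argument behind the friendship paradox can be made concrete with a small sketch. The graph below is hypothetical, chosen only to illustrate why popular nodes dominate the "friends of friends" average:

```python
import statistics

# Minimal sketch of Feld's friendship paradox on a toy undirected graph.
# The graph and names are hypothetical, for illustration only.
graph = {
    "a": ["b", "c", "d"],
    "b": ["a"],
    "c": ["a"],
    "d": ["a", "e"],
    "e": ["d"],
}

# Mean number of friends per person.
mean_friends = statistics.mean(len(friends) for friends in graph.values())

# Mean number of friends that people's friends have: a popular node
# (like "a") is counted once per friendship, so it dominates the average.
fof = [len(graph[f]) for friends in graph.values() for f in friends]
mean_fof = statistics.mean(fof)

assert mean_fof > mean_friends  # 2.0 > 1.6: the paradox holds here
```

The inequality holds on any graph with unequal degrees, since high-degree nodes appear in the second average in proportion to their degree.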
Besides being an interesting phenomenon, the friendship paradox has practical applications. For example, the authors of @cite_9 and @cite_16 use the paradox for early detection of contagious outbreaks, both virtual and pathogenic. Studies have shown that people with more friends are more likely to get infected early on. So, if we take a random sample and monitor the friends of the sampled individuals for the outbreak, we have a higher chance of detecting the outbreak in its early days.
{ "cite_N": [ "@cite_9", "@cite_16" ], "mid": [ "2140095656", "2949086544" ], "abstract": [ "Current methods for the detection of contagious outbreaks give contemporaneous information about the course of an epidemic at best. It is known that individuals near the center of a social network are likely to be infected sooner during the course of an outbreak, on average, than those at the periphery. Unfortunately, mapping a whole network to identify central individuals who might be monitored for infection is typically very difficult. We propose an alternative strategy that does not require ascertainment of global network structure, namely, simply monitoring the friends of randomly selected individuals. Such individuals are known to be more central. To evaluate whether such a friend group could indeed provide early detection, we studied a flu outbreak at Harvard College in late 2009. We followed 744 students who were either members of a group of randomly chosen individuals or a group of their friends. Based on clinical diagnoses, the progression of the epidemic in the friend group occurred 13.9 days (95 C.I. 9.9–16.6) in advance of the randomly chosen group (i.e., the population as a whole). The friend group also showed a significant lead time (p<0.05) on day 16 of the epidemic, a full 46 days before the peak in daily incidence in the population as a whole. This sensor method could provide significant additional time to react to epidemics in small or large populations under surveillance. The amount of lead time will depend on features of the outbreak and the network at hand. 
The method could in principle be generalized to other biological, psychological, informational, or behavioral contagions that spread in networks.", "Recent research has focused on the monitoring of global-scale online data for improved detection of epidemics, mood patterns, movements in the stock market, political revolutions, box-office revenues, consumer behaviour and many other important phenomena. However, privacy considerations and the sheer scale of data available online are quickly making global monitoring infeasible, and existing methods do not take full advantage of local network structure to identify key nodes for monitoring. Here, we develop a model of the contagious spread of information in a global-scale, publicly-articulated social network and show that a simple method can yield not just early detection, but advance warning of contagious outbreaks. In this method, we randomly choose a small fraction of nodes in the network and then we randomly choose a \"friend\" of each node to include in a group for local monitoring. Using six months of data from most of the full Twittersphere, we show that this friend group is more central in the network and it helps us to detect viral outbreaks of the use of novel hashtags about 7 days earlier than we could with an equal-sized randomly chosen group. Moreover, the method actually works better than expected due to network structure alone because highly central actors are both more active and exhibit increased diversity in the information they transmit to others. These results suggest that local monitoring is not just more efficient, it is more effective, and it is possible that other contagious processes in global-scale networks may be similarly monitored." ] }
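The "friend sensor" idea described above can be sketched deterministically on a toy network (this is an illustrative reconstruction, not the cited papers' experimental setup; the star graph and names are hypothetical):

```python
import statistics

# Sketch of the "friend sensor" method on a toy star network: the hub is
# the kind of central node an outbreak tends to reach early.
graph = {"hub": ["u1", "u2", "u3", "u4"],
         "u1": ["hub"], "u2": ["hub"], "u3": ["hub"], "u4": ["hub"]}
degree = {v: len(nbrs) for v, nbrs in graph.items()}

# Mean degree of the whole population, versus the expected degree of a
# uniformly random friend of a uniformly random node (the sensor group).
population_mean = statistics.mean(degree.values())
sensor_mean = statistics.mean(
    statistics.mean(degree[f] for f in graph[v]) for v in graph
)

# The sensor group is more central, so monitoring it catches an
# outbreak earlier on average than monitoring a random sample.
assert sensor_mean > population_mean  # 3.4 > 1.6 on this graph
```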
In this paper, we confirm that the friendship paradox exists on Twitter, i.e., a user's friends have more friends on average than the user does, which has also been observed by Garcia- @cite_16 . Complementing that work, we indirectly explain why early detection is possible on Twitter. Tweets are not pathogens: a tweet must be actively propagated to become a viral meme, unlike the flu or other live pathogens, which propagate without any conscious effort by the host vector. Hence, network structure alone is insufficient to develop a robustly successful application of the friendship paradox to understanding social contagion. We report that the missing connection is the high correlation between activity and connectivity.
{ "cite_N": [ "@cite_16" ], "mid": [ "2949086544" ], "abstract": [ "Recent research has focused on the monitoring of global-scale online data for improved detection of epidemics, mood patterns, movements in the stock market, political revolutions, box-office revenues, consumer behaviour and many other important phenomena. However, privacy considerations and the sheer scale of data available online are quickly making global monitoring infeasible, and existing methods do not take full advantage of local network structure to identify key nodes for monitoring. Here, we develop a model of the contagious spread of information in a global-scale, publicly-articulated social network and show that a simple method can yield not just early detection, but advance warning of contagious outbreaks. In this method, we randomly choose a small fraction of nodes in the network and then we randomly choose a \"friend\" of each node to include in a group for local monitoring. Using six months of data from most of the full Twittersphere, we show that this friend group is more central in the network and it helps us to detect viral outbreaks of the use of novel hashtags about 7 days earlier than we could with an equal-sized randomly chosen group. Moreover, the method actually works better than expected due to network structure alone because highly central actors are both more active and exhibit increased diversity in the information they transmit to others. These results suggest that local monitoring is not just more efficient, it is more effective, and it is possible that other contagious processes in global-scale networks may be similarly monitored." ] }
The present work demonstrates that a clear model of how users discover friends and manage existing friendships is essential for mitigating any undesirable consequences of the high correlation between activity and connectivity. For example, among children, this can result in "popular" kids having undue influence on others regarding the perception of peer alcohol and drug abuse @cite_20 @cite_28 . Furthermore, a better understanding of the activity paradox can help online social networks identify and recommend interesting users to follow while accounting for any undesired information overload.
{ "cite_N": [ "@cite_28", "@cite_20" ], "mid": [ "2161913456", "2150284449" ], "abstract": [ "False consensus, or the tendency to overestimate the extent to which others share one's own attitudes and behaviors, was investigated in a study of 348 university students classified as non-drug users, cannabis-only users, or amphetamine+ cannabis users. Participants estimated the prevalence of cannabis and amphetamine use among students. Cannabis and amphetamine users made significantly higher estimates of cannabis use among students than did nonusers, whereas amphetamine users gave significantly higher estimates of amphetamine use than nonusers and cannabis-only users. Correlations between estimates of use among friends and other students were significantly positive for both drugs. The results suggest that students are motivated to overestimate the commonality of their own position on drug use and that their estimates may also be influenced by selective exposure.", "Associations of popularity with adolescent substance use were examined among 1793 6-8th grade students who completed an in-school survey. Popularity was assessed through both self-ratings and peer nominations. Students who scored higher on either measure of popularity were more likely to be lifetime cigarette smokers, drinkers, and marijuana users, as well as past month drinkers. Self-rated popularity was positively associated with past month marijuana use and heavy drinking, and peer-nominated popularity showed a quadratic association with past month heavy drinking. These results extend previous work and highlight that popularity, whether based on self-perceptions or peer friendship nominations, is a risk factor for substance use during middle school. 
Given the substantial increase in peer influence during early adolescence, prevention program effectiveness may be enhanced by addressing popularity as a risk factor for substance use or working with popular students to be peer leaders to influence social norms and promote healthier choices." ] }
1304.3393
1979742679
Novelty search has been shown to be a promising approach for the evolution of controllers for swarms of robots. In existing studies, however, the experimenter had to craft a task-specific behaviour similarity measure. The reliance on hand-crafted similarity measures places an additional burden on the experimenter and introduces a bias in the evolutionary process. In this paper, we propose and compare two generic behaviour similarity measures: combined state count and sampled average state. The proposed measures are based on the values of sensors and effectors recorded for each individual robot of the swarm. The characterisation of the group-level behaviour is then obtained by combining the sensor-effector values from all the robots. We evaluate the proposed measures in an aggregation task and in a resource sharing task. We show that the generic measures match the performance of task-specific measures in terms of solution quality. Our results indicate that the proposed generic measures operate as effective behaviour similarity measures, and that it is possible to leverage the benefits of novelty search without having to craft task-specific similarity measures.
Novelty search @cite_2 can be implemented on top of any evolutionary algorithm. The distinctive aspect of novelty search is how the individuals of the population are scored. Instead of being scored according to how well they perform a given task (typically measured by a fitness function), individuals are scored based on their behavioural novelty, as given by the novelty metric. This metric quantifies how different an individual's behaviour is from that of other, previously evaluated individuals.
{ "cite_N": [ "@cite_2" ], "mid": [ "2151083897" ], "abstract": [ "In evolutionary computation, the fitness function normally measures progress toward an objective in the search space, effectively acting as an objective function. Through deception, such objective functions may actually prevent the objective from being reached. While methods exist to mitigate deception, they leave the underlying pathology untreated: Objective functions themselves may actively misdirect search toward dead ends. This paper proposes an approach to circumventing deception that also yields a new perspective on open-ended evolution. Instead of either explicitly seeking an objective or modeling natural evolution to capture open-endedness, the idea is to simply search for behavioral novelty. Even in an objective-based problem, such novelty search ignores the objective. Because many points in the search space collapse to a single behavior, the search for novelty is often feasible. Furthermore, because there are only so many simple behaviors, the search for novelty leads to increasing complexity. By decoupling open-ended search from artificial life worlds, the search for novelty is applicable to real world problems. Counterintuitively, in the maze navigation and biped walking tasks in this paper, novelty search significantly outperforms objective-based search, suggesting the strange conclusion that some problems are best solved by methods that ignore the objective. The main lesson is the inherent limitation of the objective-based paradigm and the unexploited opportunity to guide search through other means." ] }
1304.2694
2952242840
The Rao-Blackwell theorem is utilized to analyze and improve the scalability of inference in large probabilistic models that exhibit symmetries. A novel marginal density estimator is introduced and shown both analytically and empirically to outperform standard estimators by several orders of magnitude. The developed theory and algorithms apply to a broad class of probabilistic models including statistical relational models considered not susceptible to lifted probabilistic inference.
There are numerous lifted inference algorithms such as lifted variable elimination @cite_21 , lifted belief propagation @cite_27 @cite_7 , first-order knowledge compilation @cite_15 , and lifted variational inference @cite_6 . Probabilistic theorem proving applied to a clustering of the relational model was used to lift the Gibbs sampler @cite_13 . Recent work exploits automorphism groups of probabilistic models for more efficient probabilistic inference @cite_4 @cite_0 . Orbital Markov chains @cite_0 are a class of Markov chains that implicitly operate on the orbit partition of the assignment space and do not invoke the Rao-Blackwell theorem.
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_21", "@cite_6", "@cite_0", "@cite_27", "@cite_15", "@cite_13" ], "mid": [ "1939219112", "1728061942", "", "2952413404", "1605434379", "", "2159284075", "2133933452" ], "abstract": [ "Using the theory of group action, we first introduce the concept of the automorphism group of an exponential family or a graphical model, thus formalizing the general notion of symmetry of a probabilistic model. This automorphism group provides a precise mathematical framework for lifted inference in the general exponential family. Its group action partitions the set of random variables and feature functions into equivalent classes (called orbits) having identical marginals and expectations. Then the inference problem is effectively reduced to that of computing marginals or expectations for each class, thus avoiding the need to deal with each individual variable or feature. We demonstrate the usefulness of this general framework in lifting two classes of variational approximation for MAP inference: local LP relaxation and local LP relaxation with cycle constraints; the latter yields the first lifted inference that operate on a bound tighter than local constraints. Initial experimental results demonstrate that lifted MAP inference with cycle constraints achieved the state of the art performance, obtaining much better objective function values than local approximation while remaining relatively efficient.", "A major benefit of graphical models is that most knowledge is captured in the model structure. Many models, however, produce inference problems with a lot of symmetries not reflected in the graphical structure and hence not exploitable by efficient inference techniques such as belief propagation (BP). In this paper, we present a new and simple BP algorithm, called counting BP, that exploits such additional symmetries. 
Starting from a given factor graph, counting BP first constructs a compressed factor graph of clusternodes and clusterfactors, corresponding to sets of nodes and factors that are indistinguishable given the evidence. Then it runs a modified BP algorithm on the compressed graph that is equivalent to running BP on the original factor graph. Our experiments show that counting BP is applicable to a variety of important AI tasks such as (dynamic) relational models and boolean model counting, and that significant efficiency gains are obtainable, often by orders of magnitude.", "", "Hybrid continuous-discrete models naturally represent many real-world applications in robotics, finance, and environmental engineering. Inference with large-scale models is challenging because relational structures deteriorate rapidly during inference with observations. The main contribution of this paper is an efficient relational variational inference algorithm that factors largescale probability models into simpler variational models, composed of mixtures of iid (Bernoulli) random variables. The algorithm takes probability relational models of largescale hybrid systems and converts them to a close-to-optimal variational models. Then, it efficiently calculates marginal probabilities on the variational models by using a latent (or lifted) variable elimination or a lifted stochastic sampling. This inference is unique because it maintains the relational structure upon individual observations and during inference steps.", "We present a novel approach to detecting and utilizing symmetries in probabilistic graphical models with two main contributions. First, we present a scalable approach to computing generating sets of permutation groups representing the symmetries of graphical models. Second, we introduce orbital Markov chains, a novel family of Markov chains leveraging model symmetries to reduce mixing times. 
We establish an insightful connection between model symmetries and rapid mixing of orbital Markov chains. Thus, we present the first lifted MCMC algorithm for probabilistic graphical models. Both analytical and empirical results demonstrate the effectiveness and efficiency of the approach.", "", "Probabilistic logics are receiving a lot of attention today because of their expressive power for knowledge representation and learning. However, this expressivity is detrimental to the tractability of inference, when done at the propositional level. To solve this problem, various lifted inference algorithms have been proposed that reason at the first-order level, about groups of objects as a whole. Despite the existence of various lifted inference approaches, there are currently no completeness results about these algorithms. The key contribution of this paper is that we introduce a formal definition of lifted inference that allows us to reason about the completeness of lifted inference algorithms relative to a particular class of probabilistic models. We then show how to obtain a completeness result using a first-order knowledge compilation approach for theories of formulae containing up to two logical variables.", "First-order probabilistic models combine the power of first-order logic, the de facto tool for handling relational structure, with probabilistic graphical models, the de facto tool for handling uncertainty. Lifted probabilistic inference algorithms for them have been the subject of much recent research. The main idea in these algorithms is to improve the accuracy and scalability of existing graphical models' inference algorithms by exploiting symmetry in the first-order representation. In this paper, we consider blocked Gibbs sampling, an advanced MCMC scheme, and lift it to the first-order level. 
We propose to achieve this by partitioning the first-order atoms in the model into a set of disjoint clusters such that exact lifted inference is polynomial in each cluster given an assignment to all other atoms not in the cluster. We propose an approach for constructing the clusters and show how it can be used to trade accuracy with computational complexity in a principled manner. Our experimental evaluation shows that lifted Gibbs sampling is superior to the propositional algorithm in terms of accuracy, scalability and convergence." ] }
Rao-Blackwellized (RB) estimators have been used for inference in Bayesian networks @cite_32 @cite_28 and latent Dirichlet allocation @cite_33 , with applications in robotics @cite_31 and activity recognition @cite_19 . The RB theorem and the RB estimator are important concepts in statistics @cite_3 @cite_9 .
{ "cite_N": [ "@cite_33", "@cite_28", "@cite_9", "@cite_32", "@cite_3", "@cite_19", "@cite_31" ], "mid": [ "2130428211", "2155102572", "2071988465", "2149020252", "2083875149", "1674411155", "" ], "abstract": [ "Latent Dirichlet allocation (LDA) is a Bayesian network that has recently gained much popularity in applications ranging from document modeling to computer vision. Due to the large scale nature of these applications, current inference procedures like variational Bayes and Gibbs sampling have been found lacking. In this paper we propose the collapsed variational Bayesian inference algorithm for LDA, and show that it is computationally efficient, easy to implement and significantly more accurate than standard variational Bayesian inference for LDA.", "The paper presents a new sampling methodology for Bayesian networks that samples only a subset of variables and applies exact inference to the rest. Cutset sampling is a network structure-exploiting application of the Rao-Blackwellisation principle to sampling in Bayesian networks. It improves convergence by exploiting memory-based inference algorithms. It can also be viewed as an anytime approximation of the exact cutset-conditioning algorithm developed by Pearl. Cutset sampling can be implemented efficiently when the sampled variables constitute a loop-cutset of the Bayesian network and, more generally, when the induced width of the network's graph conditioned on the observed sampled variables is bounded by a constant w. We demonstrate empirically the benefit of this scheme on a range of benchmarks.", "SUMMARY This paper proposes a post-simulation improvement for two common Monte Carlo methods, the Accept-Reject and Metropolis algorithms. The improvement is based on a Rao-Blackwellisation method that integrates over the uniform random variables involved in the algorithms, and thus post-processes the standard estimators. 
We show how the Rao-Blackwellised versions of these algorithms can be implemented and, through examples, illustrate the improvement in variance brought by these new procedures. We also compare the improved version of the Metropolis algorithm with ordinary and Rao-Blackwellised importance sampling procedures for independent and general Metropolis set-ups.", "Particle filters (PFs) are powerful sampling-based inference learning algorithms for dynamic Bayesian networks (DBNs). They allow us to treat, in a principled way, any type of probability distribution, nonlinearity and non-stationarity. They have appeared in several fields under such names as \"condensation\", \"sequential Monte Carlo\" and \"survival of the fittest\". In this paper, we show how we can exploit the structure of the DBN to increase the efficiency of particle filtering, using a technique known as Rao-Blackwellisation. Essentially, this samples some of the variables, and marginalizes out the rest exactly, using the Kalman filter, HMM filter, junction tree algorithm, or any other finite dimensional optimal filter. We show that Rao-Blackwellised particle filters (RBPFs) lead to more accurate estimates than standard PFs. We demonstrate RBPFs on two problems, namely non-stationary online regression with radial basis function networks and robot localization and map building. We also discuss other potential application areas and provide references to some finite dimensional optimal filters.", "Abstract Stochastic substitution, the Gibbs sampler, and the sampling-importance-resampling algorithm can be viewed as three alternative sampling- (or Monte Carlo-) based approaches to the calculation of numerical estimates of marginal probability distributions. The three approaches will be reviewed, compared, and contrasted in relation to various joint probability structures frequently encountered in applications. 
In particular, the relevance of the approaches to calculating Bayesian posterior densities for a variety of structured models will be discussed and illustrated.", "In this paper, we present a method for recognising an agent's behaviour in dynamic, noisy, uncertain domains, and across multiple levels of abstraction. We term this problem on-line plan recognition under uncertainty and view it generally as probabilistic inference on the stochastic process representing the execution of the agent's plan. Our contributions in this paper are twofold. In terms of probabilistic inference, we introduce the Abstract Hidden Markov Model (AHMM), a novel type of stochastic processes, provide its dynamic Bayesian network (DBN) structure and analyse the properties of this network. We then describe an application of the Rao-Blackwellised Particle Filter to the AHMM which allows us to construct an efficient, hybrid inference method for this model. In terms of plan recognition, we propose a novel plan recognition framework based on the AHMM as the plan execution model. The Rao-Blackwellised hybrid inference for AHMM can take advantage of the independence properties inherent in a model of plan execution, leading to an algorithm for online probabilistic plan recognition that scales well with the number of levels in the plan hierarchy. This illustrates that while stochastic models for plan execution can be complex, they exhibit special structures which, if exploited, can lead to efficient plan recognition algorithms. We demonstrate the usefulness of the AHMM framework via a behaviour recognition system in a complex spatial environment using distributed video surveillance data.", "" ] }
1304.2504
2951391813
The popularity of online social networks (OSNs) makes the protection of users' private information an important but scientifically challenging problem. In the literature, relationship-based access control schemes have been proposed to address this problem. However, with the dynamic developments of OSNs, we identify new access control requirements which cannot be fully captured by the current schemes. In this paper, we focus on public information in OSNs and treat it as a new dimension which users can use to regulate access to their resources. We define a new OSN model containing users and their relationships as well as public information. Based on this model, we introduce a variant of hybrid logic for formulating access control policies. We exploit a type of category information and relationship hierarchy to further extend our logic for its usage in practice. In the end, we propose a few solutions to address the problem of information reliability in OSNs, and formally model collaborative access control in our access control scheme.
The first relationship-based access control model was proposed in @cite_4 , where a requester's qualification is characterized by three aspects of the relationship between the requester and the owner, i.e., relationship type, depth and trust level. In @cite_29 , the authors used semantic web technologies, including OWL and SWRL, to extend the model of @cite_4 . They also proposed administrative and filtering policies, which can be used for collaborative and supervisory access control, respectively.
{ "cite_N": [ "@cite_29", "@cite_4" ], "mid": [ "2111405580", "2063703813" ], "abstract": [ "The existence of on-line social networks that include person specific information creates interesting opportunities for various applications ranging from marketing to community organization. On the other hand, security and privacy concerns need to be addressed for creating such applications. Improving social network access control systems appears as the first step toward addressing the existing security and privacy concerns related to on-line social networks. To address some of the current limitations, we propose an extensible fine grained access control model based on semantic web tools. In addition, we propose authorization, admin and filtering policies that depend on trust relationships among various users, and are modeled using OWL and SWRL. Besides describing the model, we present the architecture of the framework in its support.", "In this article, we propose an access control mechanism for Web-based social networks, which adopts a rule-based approach for specifying access policies on the resources owned by network participants, and where authorized users are denoted in terms of the type, depth, and trust level of the relationships existing between nodes in the network. Different from traditional access control systems, our mechanism makes use of a semidecentralized architecture, where access control enforcement is carried out client-side. Access to a resource is granted when the requestor is able to demonstrate being authorized to do that by providing a proof. In the article, besides illustrating the main notions on which our access control model relies, we present all the protocols underlying our system and a performance study of the implemented prototype." ] }
1304.2504
2951391813
The popularity of online social networks (OSNs) makes the protection of users' private information an important but scientifically challenging problem. In the literature, relationship-based access control schemes have been proposed to address this problem. However, with the dynamic developments of OSNs, we identify new access control requirements which cannot be fully captured by the current schemes. In this paper, we focus on public information in OSNs and treat it as a new dimension which users can use to regulate access to their resources. We define a new OSN model containing users and their relationships as well as public information. Based on this model, we introduce a variant of hybrid logic for formulating access control policies. We exploit a type of category information and relationship hierarchy to further extend our logic for its usage in practice. In the end, we propose a few solutions to address the problem of information reliability in OSNs, and formally model collaborative access control in our access control scheme.
An access control scheme for Facebook-style social networks was proposed in @cite_30 , which models the access control procedure in two stages: in the first stage, the requester has to find the owner of the target resource; in the second, the owner decides whether to grant the authorization. The access control policies are mainly based on the relationships between the requester and the owner. Moreover, the authors proposed several meaningful access control policies based on the graph structure of OSNs, such as @math -common friends and clique. In @cite_8 , Fong introduced a modal logic to define access control policies for OSNs. Later, Fong and Siahaan @cite_21 improved this logic to further support policies like @math -common friends and clique. In @cite_11 , the authors adopted a hybrid logic to describe policies, which eliminates an exponential penalty in expressing complex relationships such as @math -common friends. A visualization tool for evaluating the effect of access control configurations is designed in @cite_0 , with which a user can check which of his resources can be viewed by other users within a certain distance of him.
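The @math -common-friends idiom mentioned above can be illustrated with a minimal sketch. Everything here (the graph representation, the user names, and the function name) is hypothetical and not taken from any cited system: access is granted only when the owner and the requester share at least k mutual friends.

```python
# Hypothetical sketch of a "k-common friends" policy check; the adjacency-set
# graph, user names and function name are illustrative, not from any cited work.

def k_common_friends(graph, owner, requester, k):
    """Grant access iff owner and requester have at least k mutual friends."""
    mutual = graph.get(owner, set()) & graph.get(requester, set())
    return len(mutual) >= k

social_graph = {
    "owner":     {"u1", "u2", "u3"},
    "requester": {"u1", "u2", "u4"},
}
granted = k_common_friends(social_graph, "owner", "requester", 2)  # u1, u2 are mutual
```

Note that a plain modal-logic encoding of this policy blows up with k, which is exactly the exponential penalty the hybrid-logic approach of @cite_11 avoids.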
{ "cite_N": [ "@cite_30", "@cite_8", "@cite_21", "@cite_0", "@cite_11" ], "mid": [ "1555093693", "2036467405", "1981058821", "1983514061", "2043563246" ], "abstract": [ "Recent years have seen unprecedented growth in the popularity of social network systems, with Facebook being an archetypical example. The access control paradigm behind the privacy preservation mechanism of Facebook is distinctly different from such existing access control paradigms as Discretionary Access Control, Role-Based Access Control, Capability Systems, and TrustManagement Systems. This work takes a first step in deepening the understanding of this access control paradigm, by proposing an access control model that formalizes and generalizes the privacy preservation mechanism of Facebook. The model can be instantiated into a family of Facebook-style social network systems, each with a recognizably different access control mechanism, so that Facebook is but one instantiation of the model. We also demonstrate that the model can be instantiated to express policies that are not currently supported by Facebook but possess rich and natural social significance. This work thus delineates the design space of privacy preservation mechanisms for Facebook-style social network systems, and lays out a formal framework for policy analysis in these systems.", "Social Network Systems pioneer a paradigm of access control that is distinct from traditional approaches to access control. Gates coined the term Relationship-Based Access Control (ReBAC) to refer to this paradigm. ReBAC is characterized by the explicit tracking of interpersonal relationships between users, and the expression of access control policies in terms of these relationships. This work explores what it takes to widen the applicability of ReBAC to application domains other than social computing. 
To this end, we formulate an archetypical ReBAC model to capture the essence of the paradigm, that is, authorization decisions are based on the relationship between the resource owner and the resource accessor in a social network maintained by the protection system. A novelty of the model is that it captures the contextual nature of relationships. We devise a policy language, based on modal logic, for composing access control policies that support delegation of trust. We use a case study in the domain of Electronic Health Records to demonstrate the utility of our model and its policy language. This work provides initial evidence to the feasibility and utility of ReBAC as a general-purpose paradigm of access control.", "The Relationship-Based Access Control (ReBAC) model was recently proposed as a general-purpose access control model. It supports the natural expression of parameterized roles, the composition of policies, and the delegation of trust. Fong proposed a policy language that is based on Modal Logic for expressing and composing ReBAC policies. A natural question is whether such a language is representationally complete, that is, whether the language is capable of expressing all ReBAC policies that one is interested in expressing. In this work, we argue that the extensive use of what we call Relational Policies is what distinguishes ReBAC from traditional access control models. We show that Fong's policy language is representationally incomplete in that certain previously studied Relational Policies are not expressible in the language. We introduce two extensions to the policy language of Fong, and prove that the extended policy language is representationally complete with respect to a well-defined subclass of Relational Policies.", "Understanding the privacy implication of adopting a certain privacy setting is a complex task for the users of social network systems. 
Users need tool support to articulate potential access scenarios and perform policy analysis. Such a need is particularly acute for Facebook-style Social Network Systems (FSNSs), in which semantically rich topology-based policies are used for access control. In this work, we develop a prototypical tool for Reflective Policy Assessment (RPA) --- a process in which a user examines her profile from the viewpoint of another user in her extended neighbourhood in the social graph. We verify the utility and usability of our tool in a within-subject user study.", "Access control policy is typically defined in terms of attributes, but in many applications it is more natural to define permissions in terms of relationships that resources, systems, and contexts may enjoy. The paradigm of relationship-based access control has been proposed to address this issue, and modal logic has been used as a technical foundation. We argue here that hybrid logic -- a natural and well-established extension of modal logic -- addresses limitations in the ability of modal logic to express certain relationships. We identify a fragment of hybrid logic to be used for expressing relationship-based access-control policies, show that this fragment supports important policy idioms, and demonstrate that it removes an exponential penalty in existing attempts of specifying complex relationships such as \"at least three friends\". We also capture the previously studied notion of relational policies in a static type system." ] }
1304.2504
2951391813
The popularity of online social networks (OSNs) makes the protection of users' private information an important but scientifically challenging problem. In the literature, relationship-based access control schemes have been proposed to address this problem. However, with the dynamic developments of OSNs, we identify new access control requirements which cannot be fully captured by the current schemes. In this paper, we focus on public information in OSNs and treat it as a new dimension which users can use to regulate access to their resources. We define a new OSN model containing users and their relationships as well as public information. Based on this model, we introduce a variant of hybrid logic for formulating access control policies. We exploit a type of category information and relationship hierarchy to further extend our logic for its usage in practice. In the end, we propose a few solutions to address the problem of information reliability in OSNs, and formally model collaborative access control in our access control scheme.
A rich OSN model was proposed in @cite_6 . In this work, not only users but also resources are treated as entities, and actions performed by users are considered as relationships in OSNs. As more information is incorporated into the model, many new access control policies can be expressed (more details can be found in Sect. ). The model supports administrative and filtering policies as proposed in @cite_29 . Besides models, several security protocols based on cryptographic techniques have been proposed to enforce relationship-based access control policies, e.g., see @cite_23 @cite_9 @cite_24 @cite_25 @cite_26 @cite_28 @cite_16 .
{ "cite_N": [ "@cite_26", "@cite_28", "@cite_9", "@cite_29", "@cite_6", "@cite_24", "@cite_23", "@cite_16", "@cite_25" ], "mid": [ "1545097366", "2169117590", "1530514231", "2111405580", "1998591172", "2010429444", "1502787543", "2400522057", "1986789125" ], "abstract": [ "As social networks sites continue to proliferate and are being used for an increasing variety of purposes, the privacy risks raised by the full access of social networking sites over user data become uncomfortable. A decentralized social network would help alleviate this problem, but offering the functionalities of social networking sites is a distributed manner is a challenging problem. In this paper, we provide techniques to instantiate one of the core functionalities of social networks: discovery of paths between individuals. Our algorithm preserves the privacy of relationship information, and can operate offline during the path discovery phase. We simulate our algorithm on real social network topologies.", "One of the key service of social networks is path discovery, in that release of a resource or delivering of a service is usually constrained by the existence of a path with given characteristics in the social network graph. One fundamental issue is that path discovery should preserve relationship privacy. In this paper, we address this issue by proposing a Privacy-Preserving Path Discovery protocol, called P^3D. Relevant features of P^3D are that: (1) it computes only aggregate information on the discovered paths, whereas details on single relationships are not revealed to anyone, (2) it is designed for a decentralized social network. Moreover, P^3D is designed such to reduce the drawbacks that offline nodes may create to path discovery. In the paper, besides giving the details of the protocol, we provide an extensive performance study. 
We also present the security analysis of P^3D, showing its robustness against the main security threats.", "The popularity of online social networks (OSNs) makes the protection of users' private information an important but scientifically challenging problem. In the literature, relationship-based access control schemes have been proposed to address this problem. However, with the dynamic developments of OSNs, we identify new access control requirements which cannot be fully captured by the current schemes. In this paper, we focus on public information in OSNs and treat it as a new dimension which users can use to regulate access to their resources. We define a new OSN model containing users and their relationships as well as public information. Based on this model, we introduce a variant of hybrid logic for formulating access control policies. We exploit a type of category information and relationship hierarchy to further extend our logic for its usage in practice. In the end, we propose a few solutions to address the problem of information reliability in OSNs, and formally model collaborative access control in our access control scheme.", "The existence of on-line social networks that include person specific information creates interesting opportunities for various applications ranging from marketing to community organization. On the other hand, security and privacy concerns need to be addressed for creating such applications. Improving social network access control systems appears as the first step toward addressing the existing security and privacy concerns related to on-line social networks. To address some of the current limitations, we propose an extensible fine grained access control model based on semantic web tools. In addition, we propose authorization, admin and filtering policies that depend on trust relationships among various users, and are modeled using OWL and SWRL. 
Besides describing the model, we present the architecture of the framework in its support.", "User-to-user (U2U) relationship-based access control has become the most prevalent approach for modeling access control in online social networks (OSNs), where authorization is typically made by tracking the existence of a U2U relationship of particular type and or depth between the accessing user and the resource owner. However, today's OSN applications allow various user activities that cannot be controlled by using U2U relationships alone. In this paper, we develop a relationship-based access control model for OSNs that incorporates not only U2U relationships but also user-to-resource (U2R) and resource-to-resource (R2R) relationships. Furthermore, while most access control proposals for OSNs only focus on controlling users' normal usage activities, our model also captures controls on users' administrative activities. Authorization policies are defined in terms of patterns of relationship paths on social graph and the hop count limits of these path. The proposed policy specification language features hop count skipping of resource-related relationships, allowing more flexibility and expressive power. We also provide simple specifications of conflict resolution policies to resolve possible conflicts among authorization policies.", "Web-based Social Networks (WBSNs) are today one of the hugest data source available on the Web and therefore data protection has become an urgent need. This has resulted in the proposals of some access control models for social networks (e.g., [1, 4, 5, 15, 16]). Quite all the models proposed so far enforce a relationship-based access control, where the granting of a resource depends on the relationships established in the network. An important issue is therefore to devise access control mechanisms able to enforce relationship-based access control by, at the same time, protecting relationships privacy. 
In this paper, we propose a solution to this problem, which enforces access control through a collaboration of selected nodes in the network. We exploit the ElGamal cryptosystem [11] to preserve relationship privacy when relationship information is used for access control purposes.", "Access control over resources shared by social network users is today receiving growing attention due to the widespread use of social networks not only for recreational but also for business purposes. In a social network, access control is mainly regulated by the relationships established by social network users. An important issue is therefore to devise privacy-awareaccess control mechanisms able to perform a controlled sharing of resources by, at the same time, satisfying privacy requirements of social network users wrt their relationships. In this paper, we propose a solution to this problem, which enforces access control through a collaboration of selected nodes in the network. The use of cryptographic and digital signature techniques ensures that relationship privacy is guaranteed during the collaborative process. In the paper, besides giving the protocols to enforce collaborative access control we discuss their robustness against the main security threats.", "We present a cryptographic framework to achieve access control, privacy of social relations, secrecy of resources, and anonymity of users in social networks. We illustrate our technique on a core API for social networking, which includes methods for establishing social relations and for sharing resources. The cryptographic protocols implementing these methods use pseudonyms to hide user identities, signatures on these pseudonyms to establish social relations, and zero-knowledge proofs of knowledge of such signatures to demonstrate the existence of social relations without sacrificing user anonymity. 
As we do not put any constraints on the underlying social network, our framework is generally applicable and, in particular, constitutes an ideal plug-in for decentralized social networks. We analyzed the security of our protocols by developing formal definitions of the aforementioned security properties and by verifying them using ProVerif, an automated theorem prover for cryptographic protocols. Finally, we built a prototypical implementation and conducted an experimental evaluation to demonstrate the efficiency and the scalability of our framework.", "In this paper we introduce a novel scheme for key management in social networks that is a first step towards the creation of a private social network. A social network graph (i.e., the graph of friendship relationships) is private and social networks are often used to share content, which may be private, amongst its users. In the status quo, the social networking server has access to both this graph and to all of the content, effectively requiring that it is a trusted third party. The goal of this paper is to produce a mechanism through which users can control how their content is shared with other users, without relying on a trusted third party to manage the social network graph and the users' data. The specific access control model considered here is that users will specify access policies based on distance in the social network; for example some content is visible to friends only, while other content is visible to friends of friends, etc. This access control is enforced via key management. That is for each user, there is a key that only friends should be able to derive, there is a key that both friends of the user and friends of friends can derive, etc. 
The proposed scheme enjoys the following properties: i) the scheme is asynchronous in that it does not require users to be online at the same time, ii) the scheme provides key indistinguishability (that is if a user is not allowed to derive a key according to the access policy, then that key is indistinguishable from a random value), iii) the scheme is efficient in terms of server storage and key derivation time, and iv) the scheme is collusion resistant." ] }
1304.2504
2951391813
The popularity of online social networks (OSNs) makes the protection of users' private information an important but scientifically challenging problem. In the literature, relationship-based access control schemes have been proposed to address this problem. However, with the dynamic developments of OSNs, we identify new access control requirements which cannot be fully captured by the current schemes. In this paper, we focus on public information in OSNs and treat it as a new dimension which users can use to regulate access to their resources. We define a new OSN model containing users and their relationships as well as public information. Based on this model, we introduce a variant of hybrid logic for formulating access control policies. We exploit a type of category information and relationship hierarchy to further extend our logic for its usage in practice. In the end, we propose a few solutions to address the problem of information reliability in OSNs, and formally model collaborative access control in our access control scheme.
As OSNs are shared platforms, resources in them may be co-owned by a number of users. Thus, collaborative access control also plays an essential role in protecting privacy. A game-theoretical method based on the Clarke-Tax mechanism for collective privacy management was proposed in @cite_12 . A different approach combines trust relations in OSNs with preferential voting schemes @cite_18 . A multiparty access control model was introduced in @cite_3 , together with a policy specification scheme and a voting-based conflict resolution mechanism. Photo tagging is the most common service relevant to collaborative access control: the authors of @cite_27 @cite_17 have investigated users' privacy concerns about this service and proposed principles for designing better collaborative access control schemes.
{ "cite_N": [ "@cite_18", "@cite_3", "@cite_27", "@cite_12", "@cite_17" ], "mid": [ "1597888167", "2104730303", "", "2108710747", "2113533904" ], "abstract": [ "Social networking sites have sprung up and become a hot issue of current society. In spite of the fact that these sites provide users with a variety of attractive features, much to users' dismay, however, they are prone to expose users' private information. In this paper, we propose an approach which addresses the problem of collaboratively deciding privacy policies for, but not limited to, shared photos. Our approach utilizes trust relations in social networks and combines them with Condorcet's preferential voting scheme. We study properties of our trust-augmented voting scheme and develop two approximations to improve its efficiency. Our algorithms are compared and justified by experimental results, which support the usability of our trust-augmented voting scheme.", "Online social networks (OSNs) have experienced tremendous growth in recent years and become a de facto portal for hundreds of millions of Internet users. These OSNs offer attractive means for digital social interactions and information sharing, but also raise a number of security and privacy issues. While OSNs allow users to restrict access to shared data, they currently do not provide any mechanism to enforce privacy concerns over data associated with multiple users. To this end, we propose an approach to enable the protection of shared data associated with multiple users in OSNs. We formulate an access control model to capture the essence of multiparty authorization requirements, along with a multiparty policy specification scheme and a policy enforcement mechanism. Besides, we present a logical representation of our access control model that allows us to leverage the features of existing logic solvers to perform various analysis tasks on our model. 
We also discuss a proof-of-concept prototype of our approach as part of an application in Facebook and provide usability study and system evaluation of our method.", "", "Social Networking is one of the major technological phenomena of the Web 2.0, with hundreds of millions of people participating. Social networks enable a form of self expression for users, and help them to socialize and share content with other users. In spite of the fact that content sharing represents one of the prominent features of existing Social Network sites, Social Networks yet do not support any mechanism for collaborative management of privacy settings for shared content. In this paper, we model the problem of collaborative enforcement of privacy policies on shared data by using game theory. In particular, we propose a solution that offers automated ways to share images based on an extended notion of content ownership. Building upon the Clarke-Tax mechanism, we describe a simple mechanism that promotes truthfulness, and that rewards users who promote co-ownership. We integrate our design with inference techniques that free the users from the burden of manually selecting privacy preferences for each picture. To the best of our knowledge this is the first time such a protection mechanism for Social Networking has been proposed. In the paper, we also show a proof-of-concept application, which we implemented in the context of Facebook, one of today's most popular social networks. We show that supporting these type of solutions is not also feasible, but can be implemented through a minimal increase in overhead to end-users.", "Photo tagging is a popular feature of many social network sites that allows users to annotate uploaded images with those who are in them, explicitly linking the photo to each person's profile. In this paper, we examine privacy concerns and mechanisms surrounding these tagged images. 
Using a focus group, we explored the needs and concerns of users, resulting in a set of design considerations for tagged photo privacy. We then designed a privacy enhancing mechanism based on our findings, and validated it using a mixed methods approach. Our results identify the social tensions that tagging generates, and the needs of privacy tools to address the social implications of photo privacy management." ] }
1304.2144
2952453532
The problem of mobile sequential recommendation is presented to suggest a route connecting some pick-up points for a taxi driver so that he she is more likely to get passengers with less travel cost. Essentially, a key challenge of this problem is its high computational complexity. In this paper, we propose a dynamical programming based method to solve this problem. Our method consists of two separate stages: an offline pre-processing stage and an online search stage. The offline stage pre-computes optimal potential sequence candidates from a set of pick-up points, and the online stage selects the optimal driving route based on the pre-computed sequences with the current position of an empty taxi. Specifically, for the offline pre-computation, a backward incremental sequence generation algorithm is proposed based on the iterative property of the cost function. Simultaneously, an incremental pruning policy is adopted in the process of sequence generation to reduce the search space of the potential sequences effectively. In addition, a batch pruning algorithm can also be applied to the generated potential sequences to remove the non-optimal ones of a certain length. Since the pruning effect continuously increases with the increase of the sequence length, our method can search the optimal driving route efficiently in the remaining potential sequence candidates. Experimental results on real and synthetic data sets show that the pruning percentage of our method is significantly improved compared to the state-of-the-art methods, which makes our method can be used to handle the problem of mobile sequential recommendation with more pick-up points and to search the optimal driving routes in arbitrary length ranges.
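The offline/online split described in the abstract above can be illustrated with a toy sketch. The uniform pick-up probability, the Euclidean cost model, and all names below are simplifying assumptions, and the brute-force enumeration merely stands in for the paper's pruned backward incremental sequence generation; it only shows the shape of the optimization problem.

```python
# Toy model of mobile sequential recommendation: choose the order in which an
# empty taxi visits candidate pick-up points so as to minimize the expected
# distance driven while still empty. Simplifying assumptions: every point
# yields a passenger independently with the same probability p, and travel
# cost is straight-line distance.
from itertools import permutations
from math import dist  # Euclidean distance, Python 3.8+

def expected_cost(start, seq, p):
    """Expected empty-cruising distance when visiting pick-up points in order."""
    cost, still_empty, pos = 0.0, 1.0, start
    for pt in seq:
        cost += still_empty * dist(pos, pt)  # this leg is driven only if still empty
        still_empty *= (1.0 - p)             # probability of no pick-up at pt either
        pos = pt
    return cost

def best_sequence(start, points, p=0.5):
    """Brute-force baseline over all visiting orders; the paper instead prunes
    dominated partial sequences offline and completes the search online."""
    return min(permutations(points), key=lambda s: expected_cost(start, s, p))

best = best_sequence((0.0, 0.0), [(1.0, 0.0), (2.0, 0.0)], p=0.5)
```

With the two collinear points above, visiting the nearer point first is optimal: its full leg is always driven, while the second leg is driven only with probability 1 - p.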
In recent years, intelligent transportation systems and trajectory data mining have attracted widespread attention @cite_13 @cite_7 @cite_14 @cite_5 . Mobile navigation and route recommendation have become hot topics in this research field @cite_17 @cite_8 @cite_15 @cite_19 @cite_6 @cite_12 @cite_25 @cite_22 @cite_18 @cite_24 .
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_22", "@cite_7", "@cite_8", "@cite_6", "@cite_24", "@cite_19", "@cite_5", "@cite_15", "@cite_13", "@cite_25", "@cite_12", "@cite_17" ], "mid": [ "2172041433", "2113146968", "1971711168", "2017654459", "145447398", "2126648587", "2168884627", "", "2047348678", "1947077359", "2012580531", "2031674781", "1973528035", "2138198492" ], "abstract": [ "This paper presents a Cloud-based system computing customized and practically fast driving routes for an end user using (historical and real-time) traffic conditions and driver behavior. In this system, GPS-equipped taxicabs are employed as mobile sensors constantly probing the traffic rhythm of a city and taxi drivers' intelligence in choosing driving directions in the physical world. Meanwhile, a Cloud aggregates and mines the information from these taxis and other sources from the Internet, like Web maps and weather forecast. The Cloud builds a model incorporating day of the week, time of day, weather conditions, and individual driving strategies (both of the taxi drivers and of the end user for whom the route is being computed). Using this model, our system predicts the traffic conditions of a future time (when the computed route is actually driven) and performs a self-adaptive driving direction service for a particular user. This service gradually learns a user's driving behavior from the user's GPS logs and customizes the fastest route for the user with the help of the Cloud. We evaluate our service using a real-world dataset generated by over 33,000 taxis over a period of 3 months in Beijing. As a result, our service accurately estimates the travel time of a route for a user; hence finding the fastest route customized for the user.", "Classification has been used for modeling many kinds of data sets, including sets of items, text documents, graphs, and networks. However, there is a lack of study on a new kind of data, trajectories on road networks. 
Modeling such data is useful with the emerging GPS and RFID technologies and is important for effective transportation and traffic planning. In this work, we study methods for classifying trajectories on road networks. By analyzing the behavior of trajectories on road networks, we observe that, in addition to the locations where vehicles have visited, the order of these visited locations is crucial for improving classification accuracy. Based on our analysis, we contend that (frequent) sequential patterns are good feature candidates since they preserve this order information. Furthermore, when mining sequential patterns, we propose to confine the length of sequential patterns to ensure high efficiency. Compared with closed sequential patterns, these partial (i.e., length-confined) sequential patterns allow us to significantly improve efficiency almost without losing accuracy. In this paper, we present a framework for frequent pattern-based classification for trajectories on road networks. Our comparative study over a broad range of classification approaches demonstrates that our method significantly improves accuracy over other methods in some synthetic and real trajectory data.", "With the rapid development of wireless telecommunication technologies, a number of studies have been done on the Location-Based Services (LBSs) due to wide applications. Among them, one of the active topics is travel recommendation. Most of previous studies focused on recommendations of attractions or trips based on the user's location. However, such recommendation results may not satisfy the travel time constraints of users. Besides, the efficiency of trip planning is sensitive to the scalability of travel regions. In this paper, we propose a novel data mining-based approach, namely Trip-Mine, to efficiently find the optimal trip which satisfies the user's travel time constraint based on the user's location.
Furthermore, we propose three optimization mechanisms based on Trip-Mine to further enhance the mining efficiency and memory storage requirement for optimal trip finding. To the best of our knowledge, this is the first work that takes efficient trip planning and travel time constraints into account simultaneously. Finally, we performed extensive experimental evaluations and show that our proposals deliver excellent results.", "Controling Greenhouse gas (GHG) emissions for minimizing the impact on the environment is one of the major challenges in front of the human civilization. Although future concentrations, damages and costs are unknown, it is widely recognized that major emissions reduction efforts are needed. In 1997, the Kyoto Protocol promoted by the United Nations Framework Convention on Climate Change, aimed at fighting global warming. The main goal is “stabilization of greenhouse gas concentrations in the atmosphere at a level that would prevent dangerous anthropogenic interference with the climate system” [9]. According to the International Energy Agency [1], energy efficient in buildings, industrial processes and transportation could reduce the world’s energy needs in 2050 by one third, and help controlling global emissions of greenhouse gases. The report [1] describes a series of scenarios showing how key energy technologies can reduce emissions of carbon dioxide, the greenhouse gas which is most responsible for climate change. Of the four primary GHG under scrutiny, carbon dioxide (CO2), and the need to lower carbon emissions in general, is of paramount concern. It is estimated that transportation activities are responsible for approximately 25 to 30 of total U.S. GHG emissions, with the on-highway commercial truck market accounting for over 45 of transportation GHG. However, the transportation sector emissions remain almost entirely unaddressed with respect to GHG and CO2 reduction. 
The Intergovernmental Panel on Climate Change (IPCC) provided guidelines for calculating carbon emission offer estimations only for certain common types of fuels; even the", "Short distance trips are crucial for urban mobility and accessibility. They can contribute to integrated transportation (the “last mile” problem), and more generally to urban ad-hoc ride sharing scenarios. Since no transport provider covers short distance trips where demand arises, private car use is flourishing in recent decades, with all the known disadvantages of traffic congestion, resource wastes, air pollution, and insufficient parking space especially in city centers. Taxis are focusing on providing a door-to-door service, but they do not perform well in short distance trip pickup and delivery services. This paper identifies the obstacles, and suggests the empty cruise time of taxis as (a) a feasible solution for the short distance trip problem, and (b) a contribution to develop a short distance trip market for the taxi industry. This empty cruise contribution hypothesis is investigated by testing different models that define ad-hoc matches of passengers and empty cruising taxis. An agent-based simulation is designed to study the match probability by these models. Based on the experimental results it is shown that taxi empty cruise match models have the potential to solve the short distance problem and to develop the taxi short distance trip markets.", "Taxi service has undergone radical revamp in recent years. In particular, significant investments in communication system and GPS devices have improved quality of taxi services through better dispatches. In this paper, we propose to leverage on such infrastructure and build a service choice model that helps individual drivers in deciding whether to serve a specific taxi stand or not. We demonstrate the value of our model by applying it to a real-world scenario. 
We also highlight interesting new potential approaches that could significantly improve the quality of taxi services.", "Advances in GPS tracking technology have enabled us to install GPS tracking devices in city taxis to collect a large amount of GPS traces under operational time constraints. These GPS traces provide unparallel opportunities for us to uncover taxi driving fraud activities. In this paper, we develop a taxi driving fraud detection system, which is able to systematically investigate taxi driving fraud. In this system, we first provide functions to find two aspects of evidences: travel route evidence and driving distance evidence. Furthermore, a third function is designed to combine the two aspects of evidences based on Dempster-Shafer theory. To implement the system, we first identify interesting sites from a large amount of taxi GPS logs. Then, we propose a parameter-free method to mine the travel route evidences. Also, we introduce route mark to represent a typical driving path from an interesting site to another one. Based on route mark, we exploit a generative statistical model to characterize the distribution of driving distance and identify the driving distance evidences. Finally, we evaluate the taxi driving fraud detection system with large scale real-world taxi GPS logs. In the experiments, we uncover some regularity of driving fraud activities and investigate the motivation of drivers to commit a driving fraud by analyzing the produced taxi fraud data.", "", "Moving Object Databases (MOD), although ubiquitous, still call for methods that will be able to understand, search, analyze, and browse their spatiotemporal content. In this paper, we propose a method for trajectory segmentation and sampling based on the representativeness of the (sub)trajectories in the MOD. In order to find the most representative subtrajectories, the following methodology is proposed. 
First, a novel global voting algorithm is performed, based on local density and trajectory similarity information. This method is applied for each segment of the trajectory, forming a local trajectory descriptor that represents line segment representativeness. The sequence of this descriptor over a trajectory gives the voting signal of the trajectory, where high values correspond to the most representative parts. Then, a novel segmentation algorithm is applied on this signal that automatically estimates the number of partitions and the partition borders, identifying homogenous partitions concerning their representativeness. Finally, a sampling method over the resulting segments yields the most representative subtrajectories in the MOD. Our experimental results in synthetic and real MOD verify the effectiveness of the proposed scheme, also in comparison with other sampling techniques.", "The tremendous development of information and communication technology had large influence for service management in the taxi dispatching work. However, taxi drivers who want to drive in \"cruising taxis\" decide their travel routes by depending on their own heuristics. As a result, traffic jams and local excess supplies have often been occurred. In this paper, we propose an adaptive routing method in the cruising taxis. In our method, pathways where many customers are expected to exist are assigned to drivers. This assignment changes dynamically adapting to changes of taxis' positions. Our simulation experiment shows that our method was able to gain more customers than the existing means of cruising taxis.", "The increasing pervasiveness of location-acquisition technologies (GPS, GSM networks, etc.) is leading to the collection of large spatio-temporal datasets and to the opportunity of discovering usable knowledge about movement behaviour, which fosters novel applications and services. 
In this paper, we move towards this direction and develop an extension of the sequential pattern mining paradigm that analyzes the trajectories of moving objects. We introduce trajectory patterns as concise descriptions of frequent behaviours, in terms of both space (i.e., the regions of space visited during movements) and time (i.e., the duration of movements). In this setting, we provide a general formal statement of the novel mining problem and then study several different instantiations of different complexity. The various approaches are then empirically evaluated over real data and synthetic benchmarks, comparing their strengths and weaknesses.", "GPS-equipped taxis can be regarded as mobile sensors probing traffic flows on road surfaces, and taxi drivers are usually experienced in finding the fastest (quickest) route to a destination based on their knowledge. In this paper, we mine smart driving directions from the historical GPS trajectories of a large number of taxis, and provide a user with the practically fastest route to a given destination at a given departure time. In our approach, we propose a time-dependent landmark graph, where a node (landmark) is a road segment frequently traversed by taxis, to model the intelligence of taxi drivers and the properties of dynamic road networks. Then, a Variance-Entropy-Based Clustering approach is devised to estimate the distribution of travel time between two landmarks in different time slots. Based on this graph, we design a two-stage routing algorithm to compute the practically fastest route. We build our system based on a real-world trajectory dataset generated by over 33,000 taxis in a period of 3 months, and evaluate the system by conducting both synthetic experiments and in-the-field evaluations. As a result, 60-70 of the routes suggested by our method are faster than the competing methods, and 20 of the routes share the same results. 
On average, 50 of our routes are at least 20 faster than the competing approaches.", "This paper investigates the effect of travel time variability on drivers' route choice behavior in the context of Shanghai, China. A stated preference survey is conducted to collect drivers' hypothetical choice between two alternative routes with designated unequal travel time and travel time variability. A binary choice model is developed to quantify trade-offs between travel time and travel time variability across various types of drivers. In the model, travel time and travel time variability are, respectively, measured by expectation and standard deviation of random travel time. The model shows that travel time and travel time variability on a route exert similarly negative effects on drivers' route choice behavior. In particular, it is found that middle-age drivers are more sensitive to travel time variability and less likely to choose a route with travel time uncertainty than younger and elder drivers. In addition, it is shown that taxi drivers are more sensitive to travel time and more inclined to choose a route with less travel time. Drivers with rich driving experience are less likely to choose a route with travel time uncertainty.", "The increasing availability of large-scale location traces creates unprecedent opportunities to change the paradigm for knowledge discovery in transportation systems. A particularly promising area is to extract energy-efficient transportation patterns (green knowledge), which can be used as guidance for reducing inefficiencies in energy consumption of transportation sectors. However, extracting green knowledge from location traces is not a trivial task. Conventional data analysis tools are usually not customized for handling the massive quantity, complex, dynamic, and distributed nature of location traces. To that end, in this paper, we provide a focused study of extracting energy-efficient transportation patterns from location traces. 
Specifically, we have the initial focus on a sequence of mobile recommendations. As a case study, we develop a mobile recommender system which has the ability in recommending a sequence of pick-up points for taxi drivers or a sequence of potential parking positions. The goal of this mobile recommendation system is to maximize the probability of business success. Along this line, we provide a Potential Travel Distance (PTD) function for evaluating each candidate sequence. This PTD function possesses a monotone property which can be used to effectively prune the search space. Based on this PTD function, we develop two algorithms, LCP and SkyRoute, for finding the recommended routes. Finally, experimental results show that the proposed system can provide effective mobile sequential recommendation and the knowledge extracted from location traces can be used for coaching drivers and leading to the efficient use of energy." ] }
1304.2144
2952453532
The problem of mobile sequential recommendation is to suggest a route connecting a set of pick-up points for a taxi driver so that he or she is more likely to pick up passengers at a lower travel cost. A key challenge of this problem is its high computational complexity. In this paper, we propose a dynamic-programming-based method to solve it. Our method consists of two separate stages: an offline pre-processing stage and an online search stage. The offline stage pre-computes optimal potential sequence candidates from a set of pick-up points, and the online stage selects the optimal driving route from the pre-computed sequences given the current position of an empty taxi. Specifically, for the offline pre-computation, we propose a backward incremental sequence generation algorithm based on the iterative property of the cost function. An incremental pruning policy is adopted during sequence generation to effectively reduce the search space of potential sequences. In addition, a batch pruning algorithm can be applied to the generated potential sequences to remove the non-optimal ones of a given length. Since the pruning effect grows with the sequence length, our method can efficiently search for the optimal driving route among the remaining candidates. Experimental results on real and synthetic data sets show that the pruning percentage of our method is significantly higher than that of state-of-the-art methods, which enables our method to handle mobile sequential recommendation with more pick-up points and to search for optimal driving routes over arbitrary length ranges.
In @cite_17 , the authors focus on the MSR problem with a length constraint, owing to the high computational complexity of the unconstrained simple MSR problem. To reduce the search space, they propose a route-dominance-based sequence pruning algorithm, LCP. However, this algorithm has difficulty handling instances with a large number of pick-up points. A novel skyline-based algorithm, SkyRoute, is also introduced for searching the optimal route, which can serve multiple cabs online. However, the skyline query is inefficient, since it is processed online.
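The monotone property that @cite_17 exploits in its PTD function can be sketched as a simple branch-and-bound. Here the point set and the cost (plain travel distance, which can only grow as a sequence is extended) are assumptions standing in for the actual PTD function; only the monotone-bound pruning step reflects the cited idea.

```python
from math import hypot

# Hypothetical pick-up points for the sketch.
POINTS = {"A": (0.0, 0.0), "B": (2.0, 1.0), "C": (1.0, 3.0), "D": (4.0, 2.0)}

def dist(a, b):
    (x1, y1), (x2, y2) = POINTS[a], POINTS[b]
    return hypot(x1 - x2, y1 - y2)

def prefix_cost(seq):
    # Monotone nondecreasing: appending a point never lowers this value.
    return sum(dist(seq[i], seq[i + 1]) for i in range(len(seq) - 1))

def best_route(length):
    """Depth-first search over pick-up sequences; a prefix is discarded as
    soon as its cost reaches the best complete route found so far, which is
    sound because the cost is monotone under extension."""
    best_seq, best_cost, pruned = None, float("inf"), 0
    stack = [(name,) for name in POINTS]
    while stack:
        seq = stack.pop()
        cost = prefix_cost(seq)
        if cost >= best_cost:      # monotone bound: no extension can improve
            pruned += 1
            continue
        if len(seq) == length:
            best_seq, best_cost = seq, cost
            continue
        stack.extend(seq + (name,) for name in POINTS if name not in seq)
    return best_seq, best_cost, pruned

route, cost, pruned = best_route(3)
```

Because the bound is exact (the prefix cost is itself a lower bound on any extension), the search still returns the optimal route while skipping every subtree rooted at an already-too-expensive prefix.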
{ "cite_N": [ "@cite_17" ], "mid": [ "2138198492" ], "abstract": [ "The increasing availability of large-scale location traces creates unprecedent opportunities to change the paradigm for knowledge discovery in transportation systems. A particularly promising area is to extract energy-efficient transportation patterns (green knowledge), which can be used as guidance for reducing inefficiencies in energy consumption of transportation sectors. However, extracting green knowledge from location traces is not a trivial task. Conventional data analysis tools are usually not customized for handling the massive quantity, complex, dynamic, and distributed nature of location traces. To that end, in this paper, we provide a focused study of extracting energy-efficient transportation patterns from location traces. Specifically, we have the initial focus on a sequence of mobile recommendations. As a case study, we develop a mobile recommender system which has the ability in recommending a sequence of pick-up points for taxi drivers or a sequence of potential parking positions. The goal of this mobile recommendation system is to maximize the probability of business success. Along this line, we provide a Potential Travel Distance (PTD) function for evaluating each candidate sequence. This PTD function possesses a monotone property which can be used to effectively prune the search space. Based on this PTD function, we develop two algorithms, LCP and SkyRoute, for finding the recommended routes. Finally, experimental results show that the proposed system can provide effective mobile sequential recommendation and the knowledge extracted from location traces can be used for coaching drivers and leading to the efficient use of energy." ] }