Columns: aid (string, 9-15 chars), mid (string, 7-10 chars), abstract (string, 78-2.56k chars), related_work (string, 92-1.77k chars), ref_abstract (dict).
1206.3666
2953137579
The performance of neural decoders can degrade over time due to nonstationarities in the relationship between neuronal activity and behavior. In this case, brain-machine interfaces (BMI) require adaptation of their decoders to maintain high performance across time. One way to achieve this is through periodic calibration phases, during which the BMI system (or an external human demonstrator) instructs the user to perform certain movements or behaviors. This approach has two disadvantages: (i) calibration phases interrupt the autonomous operation of the BMI and (ii) between two calibration phases the BMI performance might not be stable but continuously decrease. A better alternative would be for the BMI decoder to continuously adapt in an unsupervised manner during autonomous BMI operation, i.e. without knowing the movement intentions of the user. In the present article, we present an efficient method for such unsupervised training of BMI systems for continuous movement control. The proposed method utilizes a cost function derived from neuronal recordings, which guides a learning algorithm to evaluate the decoding parameters. We verify the performance of our adaptive method by simulating a BMI user with an optimal feedback control model and its interaction with our adaptive BMI decoder. The simulation results show that the cost function and the algorithm yield fast and precise trajectories towards targets at random orientations on a 2-dimensional computer screen. For initially unknown and non-stationary tuning parameters, our unsupervised method is still able to generate precise trajectories and to keep its performance stable in the long term. The algorithm can optionally also use neuronal error signals instead of, or in conjunction with, the proposed unsupervised adaptation.
More recently, @cite_48 , @cite_44 , @cite_11 have proposed co-adaptive BMIs, in which both the subjects (rats) and the decoders adapt in order to perform a defined task. This task is either a discrete choice task, such as pushing a lever @cite_44 @cite_20 , or a continuous estimation task, such as reproducing the frequency of a cue tone by neural activity @cite_11 . The authors of @cite_11 employ a supervised adaptive Kalman filter to update the decoder parameters that map the neural activity to the cue tone frequency. The authors of @cite_48 and @cite_44 instead utilize a reward signal to train the decoder; the reward signal indicates successful completion of the discrete choice task, and the decoder adaptation follows a reinforcement learning algorithm rather than a supervised one. In contrast to a fully autonomous BMI task, however, whether the target has been reached is known to the decoder.
{ "cite_N": [ "@cite_44", "@cite_48", "@cite_20", "@cite_11" ], "mid": [ "1993958494", "2116532829", "", "2168512214" ], "abstract": [ "The success of brain-machine interfaces (BMI) is enabled by the remarkable ability of the brain to incorporate the artificial neuroprosthetic 'tool' into its own cognitive space and use it as an extension of the user's body. Unlike other tools, neuroprosthetics create a shared space that seamlessly spans the user's internal goal representation of the world and the external physical environment enabling a much deeper human-tool symbiosis. A key factor in the transformation of 'simple tools' into 'intelligent tools' is the concept of co-adaptation where the tool becomes functionally involved in the extraction and definition of the user's goals. Recent advancements in the neuroscience and engineering of neuroprosthetics are providing a blueprint for how new co-adaptive designs based on reinforcement learning change the nature of a user's ability to accomplish tasks that were not possible using conventional methodologies. By designing adaptive controls and artificial intelligence into the neural interface, tools can become active assistants in goal-directed behavior and further enhance human performance in particular for the disabled population. This paper presents recent advances in computational and neural systems supporting the development of symbiotic neuroprosthetic assistants.", "This paper introduces and demonstrates a novel brain-machine interface (BMI) architecture based on the concepts of reinforcement learning (RL), coadaptation, and shaping. RL allows the BMI control algorithm to learn to complete tasks from interactions with the environment, rather than an explicit training signal. Coadaptation enables continuous, synergistic adaptation between the BMI control algorithm and BMI user working in changing environments. Shaping is designed to reduce the learning curve for BMI users attempting to control a prosthetic. 
Here, we present the theory and in vivo experimental paradigm to illustrate how this BMI learns to complete a reaching task using a prosthetic arm in a 3-D workspace based on the user's neuronal activity. This semisupervised learning framework does not require user movements. We quantify BMI performance in closed-loop brain control over six to ten days for three rats as a function of increasing task difficulty. All three subjects coadapted with their BMI control algorithms to control the prosthetic significantly above chance at each level of difficulty.", "", "The ability to control a prosthetic device directly from the neocortex has been demonstrated in rats, monkeys and humans. Here we investigate whether neural control can be accomplished in situations where (1) subjects have not received prior motor training to control the device (naïve user) and (2) the neural encoding of movement parameters in the cortex is unknown to the prosthetic device (naïve controller). By adopting a decoding strategy that identifies and focuses on units whose firing rate properties are best suited for control, we show that naïve subjects mutually adapt to learn control of a neural prosthetic system. Six untrained Long-Evans rats, implanted with silicon micro-electrodes in the motor cortex, learned cortical control of an auditory device without prior motor characterization of the recorded neural ensemble. Single- and multi-unit activities were decoded using a Kalman filter to represent an audio 'cursor' (90 ms tone pips ranging from 250 Hz to 16 kHz) which subjects controlled to match a given target frequency. After each trial, a novel adaptive algorithm trained the decoding filter based on correlations of the firing patterns with expected cursor movement. Each behavioral session consisted of 100 trials and began with randomized decoding weights. 
Within 7 ± 1.4 (mean ± SD) sessions, all subjects were able to significantly score above chance ( P< 0.05, randomization method) in a fixed target paradigm. Training lasted 24 sessions in which both the behavioral performance and signal to noise ratio of the peri-event histograms increased significantly ( P< 0.01, ANOVA). Two rats continued training on a more complex task using a bilateral, two-target control paradigm. Both subjects were able to significantly discriminate the target tones ( P< 0.05, Z-test), while one subject demonstrated control above chance ( P< 0.05, Z-test) after 12 sessions and continued improvement with many sessions achieving over 90% correct targets. Dynamic analysis of binary trial responses indicated that early learning for this subject occurred during session 6. This study demonstrates that subjects can learn to generate neural control signals that are well suited for use with external devices without prior experience or training." ] }
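The closed-loop adaptation described in the record above can be illustrated with a toy simulation. This is a hypothetical sketch, not the paper's method: the decoder here uses a plain supervised LMS update with the simulated user's intended velocity as the error signal, whereas the paper derives an unsupervised cost function; the tuning model, noise level, and learning rate are all invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, dt, eta = 20, 0.05, 0.0005
tuning = rng.normal(size=(n_neurons, 2))   # hidden neuronal tuning (unknown to decoder)
W = np.zeros((2, n_neurons))               # linear decoder weights, adapted online

pos, target = np.zeros(2), np.array([5.0, 3.0])
for step in range(4000):
    v_intent = target - pos                # simulated user intention (OFC-like pull)
    rates = tuning @ v_intent + rng.normal(scale=0.5, size=n_neurons)
    v_dec = W @ rates                      # decoded cursor velocity
    err = v_intent - v_dec                 # supervised stand-in for the cost gradient
    W += eta * np.outer(err, rates)        # LMS update of the decoder
    pos = pos + dt * v_dec                 # cursor moves under decoded velocity

print(np.linalg.norm(target - pos))        # residual distance to target
```

Despite starting from zero decoder weights, the cursor ends up near the target because the decoder and the simulated user interact in closed loop, which is the structure the paper's simulations exploit.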
1206.3738
1549155553
Many tools and libraries employ hardware performance monitoring (HPM) on modern processors, and using this data for performance assessment and as a starting point for code optimizations is very popular. However, such data is only useful if it is interpreted with care, and if the right metrics are chosen for the right purpose. We demonstrate the sensible use of hardware performance counters in the context of a structured performance engineering approach for applications in computational science. Typical performance patterns and their respective metric signatures are defined, and some of them are illustrated using case studies. Although these generic concepts do not depend on specific tools or environments, we restrict ourselves to modern x86-based multicore processors and use the likwid-perfctr tool under the Linux OS.
Hardware performance monitoring (HPM) is regarded as a state-of-the-art tool to guide code optimizations. While there are countless publications about HPM-based optimization efforts, a structured method for using hardware events is often missing; moreover, the available events and their semantics differ between processor models, even from the same vendor. One exception is the use of cache miss events, which are very popular since memory access is regarded as a major bottleneck on modern architectures. In fact, miss events are often seen as the most useful metrics in HPM. Many optimization efforts solely focus on minimizing cache miss ratios @cite_1 . Another popular application of HPM is automatic performance tuning via a runtime approach @cite_0 @cite_2 . Recent work attempts to apply statistical methods such as regression analysis to achieve automatic application characterization based on HPM @cite_7 @cite_4 .
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_1", "@cite_0", "@cite_2" ], "mid": [ "2167481706", "1967491301", "2028763350", "22490863", "2165006697" ], "abstract": [ "A method is presented for modeling application performance on parallel computers in terms of the performance of microkernels from the HPC Challenge benchmarks. Specifically, the application run time is expressed as a linear combination of inverse speeds and latencies from microkernels or system characteristics. The model parameters are obtained by an automated series of least squares fits using backward elimination to ensure statistical significance. If necessary, outliers are deleted to ensure that the final fit is robust. Typically three or four terms appear in each model: at most one each for floating-point speed, memory bandwidth, interconnect bandwidth, and interconnect latency. Such models allow prediction of application performance on future computers from easier-to-make predictions of microkernel performance. The method was used to build models for four benchmark problems involving the PARATEC and MILC scientific applications. These models not only describe performance well on the ten computers used to build the models, but also do a good job of predicting performance on three additional computers with newer design features. For the four application benchmark problems with six predictions each, the relative root mean squared error in the predicted run times varies between 13% and 16%. The method was also used to build models for the HPL and G-FFTE benchmarks in HPCC, including functional dependences on problem size and core count from complexity analysis. 
The model for HPL predicts performance even better than the application models do, while the model for G-FFTE systematically underpredicts run times.", "Abstract It is crucial to optimize stencil computations since they are the core (and most computationally demanding segment) of many Scientific Computing applications, thereby reducing overall execution time. This is not a simple task; actually it is lengthy and tedious. It is lengthy because of the large number of stencil optimization combinations to test, which might consume days of computing time, and the process is tedious due to the slightly different versions of code to implement. Alternatively, models that predict performance can be built without any actual stencil execution, thus reducing the cumbersome optimization task. Previous works have proposed cache misses and execution time models for specific stencil optimizations. Furthermore, most of them have been designed for 2D datasets or stencil sizes that only suit low order numerical schemes. We propose a flexible and accurate model for a wide range of stencil sizes up to high order schemes, that captures the behavior of 3D stencil computations using platform parameters. The model has been tested in a group of representative hardware architectures, using realistic dataset sizes. Our model successfully predicts stencil execution times and cache misses. However, prediction accuracy depends on the platform; for instance, on x86 architectures prediction errors range between 1% and 20%. Therefore, the model is reliable and can help to speed up the stencil computation optimization process. To that end, other stencil optimization techniques can be added to this model, thus essentially providing a framework which covers most of the state-of-the-art.", "Competitive numerical algorithms for solving partial differential equations have to work with the most efficient numerical methods like multigrid and adaptive grid refinement and thus with hierarchical data structures. 
Unfortunately, in most implementations, hierarchical data—typically stored in trees—cause a nonnegligible overhead in data access. To overcome this quandary—numerical efficiency versus efficient implementation—our algorithm uses space-filling curves to build up data structures which are processed linearly. In fact, the only kind of data structure used in our implementation is stacks. Thus, data access becomes very fast—even faster than the common access to nonhierarchical data stored in matrices—and, in particular, cache misses are reduced considerably. Furthermore, the implementation of multigrid cycles and/or higher order discretizations as well as the parallelization of the whole algorithm become very easy and straightforward on these data structures.", "In this paper we present a framework for automatic detection and application of the best binding between threads of a running parallel application and processor cores in a shared memory system, by making use of hardware performance counters. This is especially important within the scope of multicore architectures with shared cache levels. We demonstrate that many applications from the SPEC OMP benchmark show quite sensitive runtime behavior depending on the thread core binding used. In our tests, the proposed framework is able to find the best binding in nearly all cases. The proposed framework is intended to supplement job scheduling systems for better automatic exploitation of systems with multicore processors, as well as making programmers aware of this issue by providing measurement logs.", "Optimizing programs at run-time provides opportunities to apply aggressive optimizations to programs based on information that was not available at compile time. At run time, programs can be adapted to better exploit architectural features, optimize the use of dynamic libraries, and simplify code based on run-time constants. 
Our profiling system provides a framework for collecting information required for performing run-time optimization. We sample the performance hardware registers available on an Itanium processor, and select a set of code that is likely to lead to important performance-events. We gather distribution information about the performance-events we wish to monitor, and test our traces by estimating the ability for dynamic patching of a program to execute run-time generated traces. Our results show that we are able to capture 58% of execution time across various SPEC2000 integer benchmarks using our profile and patching techniques on a relatively small number of frequently executed execution paths. Our profiling and detection system overhead increases execution time by only 2-4%." ] }
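The metric-signature idea in this record boils down to deriving interpretable metrics from raw counter readings. A minimal sketch, assuming hypothetical counter names and values (chosen to resemble, but not taken from, any specific likwid-perfctr event group):

```python
# Hypothetical raw counter readings over a measured code region.
counters = {
    "INSTR_RETIRED": 4.0e9,
    "CPU_CLK_UNHALTED": 3.2e9,
    "L2_RQSTS_MISS": 5.0e7,
    "L2_RQSTS_REFERENCES": 4.0e8,
    "DRAM_DATA_VOLUME_BYTES": 6.4e9,
}
runtime_s = 1.6

# Derived metrics, in the spirit of metric groups built on raw events.
cpi = counters["CPU_CLK_UNHALTED"] / counters["INSTR_RETIRED"]
l2_miss_ratio = counters["L2_RQSTS_MISS"] / counters["L2_RQSTS_REFERENCES"]
mem_bw_gbs = counters["DRAM_DATA_VOLUME_BYTES"] / runtime_s / 1e9

print(f"CPI: {cpi:.2f}")                       # 0.80
print(f"L2 miss ratio: {l2_miss_ratio:.3f}")   # 0.125
print(f"Memory bandwidth: {mem_bw_gbs:.2f} GB/s")  # 4.00 GB/s
```

The point the survey makes is that such derived metrics (not the raw miss counts alone) are what a performance pattern's signature is matched against.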
1206.2957
1852463194
Auctions in which agents' payoffs are random variables have received increased attention in recent years. In particular, recent work in algorithmic mechanism design has produced mechanisms employing internal randomization, partly in response to limitations on deterministic mechanisms imposed by computational complexity. For many of these mechanisms, which are often referred to as truthful-in-expectation, incentive compatibility is contingent on the assumption that agents are risk-neutral. These mechanisms have been criticized on the grounds that this assumption is too strong, because "real" agents are typically risk averse, and moreover their precise attitude towards risk is typically unknown a priori. In response, researchers in algorithmic mechanism design have sought the design of universally-truthful mechanisms --- mechanisms for which incentive-compatibility makes no assumptions regarding agents' attitudes towards risk. We show that any truthful-in-expectation mechanism can be generically transformed into a mechanism that is incentive compatible even when agents are risk averse, without modifying the mechanism's allocation rule. The transformed mechanism does not require reporting of agents' risk profiles. Equivalently, our result can be stated as follows: Every (randomized) allocation rule that is implementable in dominant strategies when players are risk neutral is also implementable when players are endowed with an arbitrary and unknown concave utility function for money.
Most of the literature in mechanism design assumes risk neutrality, both for the principal (also known as the seller) and the agents (the buyers). However, risk aversion has been studied, and we mention a sample of those works. Maskin and Riley @cite_7 design a revenue-maximizing single-item auction in a Bayesian setting, where buyers are risk averse with given risk profiles. Eső and Futó @cite_3 design a single-item auction that is optimal for a risk-averse seller, in a Bayesian setting where buyers are risk neutral. The mathematics underlying our result is similar to that underlying the mechanism of @cite_3 ; their mechanism is built from the revenue-maximizing auction of Myerson @cite_14 , and removes all risk from the seller's revenue by charging each buyer a random payment that equals the corresponding payment in @cite_14 in expectation, but is appropriately correlated with the payments of other buyers so that the resulting revenue is deterministic.
{ "cite_N": [ "@cite_14", "@cite_3", "@cite_7" ], "mid": [ "2029050771", "2054347841", "2066809750" ], "abstract": [ "This paper considers the problem faced by a seller who has a single object to sell to one of several possible buyers, when the seller has imperfect information about how much the buyers might be willing to pay for the object. The seller's problem is to design an auction game which has a Nash equilibrium giving him the highest possible expected utility. Optimal auctions are derived in this paper for a wide class of auction design problems.", "Abstract We consider auctions with a risk averse seller in independent private values environments with risk neutral buyers. We show that for every incentive compatible selling mechanism there exists a mechanism which provides deterministically the same (expected) revenue.", "" ] }
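The revenue-derandomization trick attributed to @cite_3 above (random per-buyer payments, correlated so that the total is deterministic) can be sketched numerically. This illustrates only that one idea, not the paper's full incentive-compatibility transformation; the payment vector and noise distribution are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
p = np.array([3.0, 1.5, 2.5])          # target expected payments per buyer

def correlated_payments(p, rng):
    # Each buyer's payment is random, but the noise is zero-sum across
    # buyers, so the seller's total revenue carries no risk at all.
    z = rng.normal(scale=1.0, size=p.size)
    z -= z.mean()                      # enforce sum(z) == 0
    return p + z

totals = [correlated_payments(p, rng).sum() for _ in range(1000)]
mean_pay = np.mean([correlated_payments(p, rng) for _ in range(20000)], axis=0)
print(np.allclose(totals, p.sum()))    # total revenue is deterministic
print(np.round(mean_pay, 1))           # per-buyer expectations match p
```

Each buyer faces a random payment with the right expectation, while the seller's realized revenue equals sum(p) on every draw.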
1206.2082
2949166564
We present a suite of algorithms for Dimension Independent Similarity Computation (DISCO) to compute all pairwise similarities between very high dimensional sparse vectors. All of our results are provably independent of dimension, meaning that apart from the initial cost of trivially reading in the data, all subsequent operations are independent of the dimension, so the dimension can be very large. We study the Cosine, Dice, Overlap, and Jaccard similarity measures. For Jaccard similarity we include an improved version of MinHash. Our results are geared toward the MapReduce framework. We empirically validate our theorems at large scale using data from the social networking site Twitter. At the time of writing, our algorithms are live in production at twitter.com.
In @cite_0 , the authors consider the problem of finding highly similar pairs of documents in MapReduce. They discuss shuffle size briefly but do not provide proven guarantees. They also discuss reduce key size, but again do not provide bounds on the size.
{ "cite_N": [ "@cite_0" ], "mid": [ "2132399973" ], "abstract": [ "This paper presents a MapReduce algorithm for computing pairwise document similarity in large document collections. MapReduce is an attractive framework because it allows us to decompose the inner products involved in computing document similarity into separate multiplication and summation stages in a way that is well matched to efficient disk access patterns across several machines. On a collection consisting of approximately 900,000 newswire articles, our algorithm exhibits linear growth in running time and space in terms of the number of documents." ] }
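The term-at-a-time decomposition used by @cite_0 (inner products split into per-term partial products, with the MapReduce shuffle grouping postings by term) can be sketched in memory. The documents and identifiers are invented, and a real implementation would distribute the two phases across machines:

```python
from collections import Counter, defaultdict
from itertools import combinations
import math

docs = {
    "d1": ["apple", "banana", "apple"],
    "d2": ["banana", "cherry"],
    "d3": ["apple", "cherry", "cherry"],
}
tf = {d: Counter(ts) for d, ts in docs.items()}

# "Map" phase: emit (term, (doc, freq)); the shuffle groups postings by term.
postings = defaultdict(list)
for d, counts in tf.items():
    for term, f in counts.items():
        postings[term].append((d, f))

# "Reduce" phase: each term contributes partial inner products to doc pairs.
# The number of partial products emitted here is exactly the shuffle size
# that the related-work paragraph says @cite_0 does not bound.
dot = defaultdict(float)
for term, plist in postings.items():
    for (d1, f1), (d2, f2) in combinations(sorted(plist), 2):
        dot[(d1, d2)] += f1 * f2

norm = {d: math.sqrt(sum(f * f for f in c.values())) for d, c in tf.items()}
cosine = {pair: v / (norm[pair[0]] * norm[pair[1]]) for pair, v in dot.items()}
print(cosine)
```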
Other related work includes clustering of web data. These applications of clustering typically employ relatively straightforward exact algorithms or approximation methods for computing similarity. Our work could be leveraged by these applications for improved performance or higher accuracy, by relying on proven guarantees. We have also used DISCO for finding similar users in a production environment @cite_1 .
{ "cite_N": [ "@cite_1" ], "mid": [ "19684845" ], "abstract": [ "WTF (\"Who to Follow\") is Twitter's user recommendation service, which is responsible for creating millions of connections daily between users based on shared interests, common connections, and other related factors. This paper provides an architectural overview and shares lessons we learned in building and running the service over the past few years. Particularly noteworthy was our design decision to process the entire Twitter graph in memory on a single server, which significantly reduced architectural complexity and allowed us to develop and deploy the service in only a few months. At the core of our architecture is Cassovary, an open-source in-memory graph processing engine we built from scratch for WTF. Besides powering Twitter's user recommendations, Cassovary is also used for search, discovery, promoted products, and other services as well. We describe and evaluate a few graph recommendation algorithms implemented in Cassovary, including a novel approach based on a combination of random walks and SALSA. Looking into the future, we revisit the design of our architecture and comment on its limitations, which are presently being addressed in a second-generation system under development." ] }
1206.1331
2951522744
Social networks play a fundamental role in the diffusion of information. However, there are two different ways in which information reaches a person in a network. Information reaches us through connections in our social networks, as well as through the influence of external out-of-network sources, like the mainstream media. While most present models of information adoption in networks assume information only passes from node to node via the edges of the underlying network, the recent availability of massive online social media data allows us to study this process in more detail. We present a model in which information can reach a node via the links of the social network or through the influence of external sources. We then develop an efficient model parameter fitting technique and apply the model to the emergence of URL mentions in the Twitter network. Using a complete one-month trace of Twitter we study how information reaches the nodes of the network. We quantify the external influences over time and describe how these influences affect information adoption. We discover that information tends to "jump" across the network, which can only be explained as an effect of an unobservable external influence on the network. We find that only about 71% of the information volume in Twitter can be attributed to network diffusion, and the remaining 29% is due to external events and factors outside the network.
External influence in networks has been considered in the case of the popularity of YouTube videos @cite_7 . The authors considered a simple model of information diffusion on an implicit, completely connected network and argued that since some videos became popular more quickly than their model predicted, the additional popularity must have been a result of external influence. Our approach differs significantly: we directly consider the network and the effect of node-to-node interactions, explicitly infer the activity of the external source over time, and use a much more realistic model of information adoption that distinguishes between exposure to and adoption of information. Our model builds on the notion of exposure curves, which was proposed and studied by @cite_13 . Recently, it was also argued @cite_6 that it is the shape of the exposure curves that stops information from spreading. We take a step forward by providing an inference method that infers the shape of such exposure curves. Simulations show that our method infers the exposure curves much more accurately than the previously proposed methods @cite_13 @cite_6 .
{ "cite_N": [ "@cite_13", "@cite_6", "@cite_7" ], "mid": [ "2952347589", "1937101755", "2042034885" ], "abstract": [ "Time plays an essential role in the diffusion of information, influence and disease over networks. In many cases we only observe when a node copies information, makes a decision or becomes infected -- but the connectivity, transmission rates between nodes and transmission sources are unknown. Inferring the underlying dynamics is of outstanding interest since it enables forecasting, influencing and retarding infections, broadly construed. To this end, we model diffusion processes as discrete networks of continuous temporal processes occurring at different rates. Given cascade data -- observed infection times of nodes -- we infer the edges of the global diffusion network and estimate the transmission rates of each edge that best explain the observed data. The optimization problem is convex. The model naturally (without heuristics) imposes sparse solutions and requires no parameter tuning. The problem decouples into a collection of independent smaller problems, thus scaling easily to networks on the order of hundreds of thousands of nodes. Experiments on real and synthetic data show that our algorithm both recovers the edges of diffusion networks and accurately estimates their transmission rates from cascade data.", "Theoretical progress in understanding the dynamics of spreading processes on graphs suggests the existence of an epidemic threshold below which no epidemics form and above which epidemics spread to a significant fraction of the graph. We have observed information cascades on the social media site Digg that spread fast enough for one initial spreader to infect hundreds of people, yet end up affecting only 0.1% of the entire network. We find that two effects, previously studied in isolation, combine cooperatively to drastically limit the final size of cascades on Digg. 
First, because of the highly clustered structure of the Digg network, most people who are aware of a story have been exposed to it via multiple friends. This structure lowers the epidemic threshold while moderately slowing the overall growth of cascades. In addition, we find that the mechanism for social contagion on Digg points to a fundamental difference between information spread and other contagion processes: despite multiple opportunities for infection within a social group, people are less likely to become spreaders of information with repeated exposure. The consequences of this mechanism become more pronounced for more clustered graphs. Ultimately, this effect severely curtails the size of social epidemics on Digg.", "We study the relaxation response of a social system after endogenous and exogenous bursts of activity using the time series of daily views for nearly 5 million videos on YouTube. We find that most activity can be described accurately as a Poisson process. However, we also find hundreds of thousands of examples in which a burst of activity is followed by an ubiquitous power-law relaxation governing the timing of views. We find that these relaxation exponents cluster into three distinct classes and allow for the classification of collective human dynamics. This is consistent with an epidemic model on a social network containing two ingredients: a power-law distribution of waiting times between cause and action and an epidemic cascade of actions becoming the cause of future actions. This model is a conceptual extension of the fluctuation-dissipation theorem to social systems [Ruelle, D (2004) Phys Today 57:48–53] and [Roehner BM, et al, (2004) Int J Mod Phys C 15:809–834], and provides a unique framework for the investigation of timing in complex systems." ] }
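The two adoption channels the record above distinguishes (exposure through network neighbors vs. an external out-of-network source) can be illustrated with a toy simulation. The exposure curve, external rate, and graph are all invented, and this is far simpler than the paper's model and fitting procedure:

```python
import random

random.seed(7)
n = 500
# Directed toy network: each node watches 6 random others.
neighbors = {i: random.sample(range(n), 6) for i in range(n)}

def exposure_prob(k):
    # Hypothetical exposure curve: per-step adoption probability given
    # k already-adopted neighbors (rises with k, then saturates).
    return 0.0 if k == 0 else min(0.05 * k, 0.2)

ext_rate = 0.002                      # constant out-of-network hazard
adopted, via_net, via_ext = set(), 0, 0
for step in range(50):
    for i in range(n):
        if i in adopted:
            continue
        k = sum(j in adopted for j in neighbors[i])
        if random.random() < exposure_prob(k):
            adopted.add(i)            # adoption attributed to the network
            via_net += 1
        elif random.random() < ext_rate:
            adopted.add(i)            # adoption attributed to external influence
            via_ext += 1

print(len(adopted), via_net, via_ext)
```

Because the network starts with no adopters, the first adoptions necessarily "jump" in from the external source, after which network diffusion takes over; splitting the counts this way mirrors the 71%/29% attribution reported in the abstract.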
1206.1402
2135144664
This paper proposes a new algorithm for multiple sparse regression in high dimensions, where the task is to estimate the support and values of several (typically related) sparse vectors from a few noisy linear measurements. Our algorithm is a "forward-backward" greedy procedure that uniquely operates on two distinct classes of objects. In particular, we organize our target sparse vectors as a matrix; our algorithm involves iterative addition and removal of both (a) individual elements, and (b) entire rows (corresponding to shared features), of the matrix. Analytically, we establish that our algorithm manages to recover the supports (exactly) and values (approximately) of the sparse vectors, under assumptions similar to existing approaches based on convex optimization. However, our algorithm has a much smaller computational complexity. Perhaps most interestingly, it is seen empirically to require visibly fewer samples. Ours represents the first attempt to extend greedy algorithms to the class of models that can best be represented by a combination of component structural assumptions (sparse and group-sparse, in our case).
Greedy methods: Several algorithms attempt to find the support (and hence values) of sparse vectors by iteratively adding, and possibly dropping, elements from the support. The earliest examples were simple "forward" algorithms like Orthogonal Matching Pursuit (OMP) @cite_8 @cite_14 , etc.; these add elements to the support until the loss goes below a threshold. More recently, it has been shown @cite_7 @cite_2 that adding a backward step is more statistically efficient, requiring weaker conditions for support recovery. Another line of (forward) greedy algorithms works by looking at the gradient of the loss function, instead of the function itself; see e.g. @cite_10 . A big difference between our work and these is that our forward-backward algorithm works with two different classes of objects simultaneously: singleton elements of the matrix of vectors that need to be recovered, and entire rows of this matrix. This adds a significant extra dimension to algorithm design, as we need a way to compare the gains provided by each class of object in a way that ensures convergence and correctness.
{ "cite_N": [ "@cite_14", "@cite_7", "@cite_8", "@cite_2", "@cite_10" ], "mid": [ "2091893102", "2106294397", "2127271355", "2107151623", "2118838680" ], "abstract": [ "This paper presents a new analysis for the orthogonal matching pursuit (OMP) algorithm. It is shown that if the restricted isometry property (RIP) is satisfied at sparsity level O(k), then OMP can stably recover a k-sparse signal in 2-norm under measurement noise. For compressed sensing applications, this result implies that in order to uniformly recover a k-sparse signal in R^d, only O(k ln d) random projections are needed. This analysis improves some earlier results on OMP depending on stronger conditions that can only be satisfied with Ω(k^2 ln d) or Ω(k^1.6 ln d) random projections.", "Consider linear prediction models where the target function is a sparse linear combination of a set of basis functions. We are interested in the problem of identifying those basis functions with non-zero coefficients and reconstructing the target function from noisy observations. Two heuristics that are widely used in practice are forward and backward greedy algorithms. First, we show that neither idea is adequate. Second, we propose a novel combination that is based on the forward greedy algorithm but takes backward steps adaptively whenever beneficial. We prove strong theoretical results showing that this procedure is effective in learning sparse representations. Experimental results support our theory.", "This paper demonstrates theoretically and empirically that a greedy algorithm called orthogonal matching pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results, which require O(m^2) measurements. The new results for OMP are comparable with recent results for another approach called basis pursuit (BP). 
In some settings, the OMP algorithm is faster and easier to implement, so it is an attractive alternative to BP for signal recovery problems.", "In this paper, we address the problem of learning the structure of a pairwise graphical model from samples in a high-dimensional setting. Our first main result studies the sparsistency, or consistency in sparsity pattern recovery, properties of a forward-backward greedy algorithm as applied to general statistical models. As a special case, we then apply this algorithm to learn the structure of a discrete graphical model via neighborhood estimation. As a corollary of our general result, we derive sufficient conditions on the number of samples n, the maximum node-degree d and the problem size p, as well as other conditions on the model parameters, so that the algorithm recovers all the edges with high probability. Our result guarantees graph selection for samples scaling as n = Ω(d2 log(p)), in contrast to existing convex-optimization based algorithms that require a sample complexity of Ω(d3 log(p)). Further, the greedy algorithm only requires a restricted strong convexity condition which is typically milder than irrepresentability assumptions. We corroborate these results using numerical simulations at the end.", "In this paper, we consider the problem of compressed sensing where the goal is to recover all sparse vectors using a small number of fixed linear measurements. For this problem, we propose a novel partial hard-thresholding operator that leads to a general family of iterative algorithms. While one extreme of the family yields well known hard thresholding algorithms like ITI and HTP[17, 10], the other end of the spectrum leads to a novel algorithm that we call Orthogonal Matching Pursuit with Replacement (OMPR). OMPR, like the classic greedy algorithm OMP, adds exactly one coordinate to the support at each iteration, based on the correlation with the current residual. 
However, unlike OMP, OMPR also removes one coordinate from the support. This simple change allows us to prove that OMPR has the best known guarantees for sparse recovery in terms of the Restricted Isometry Property (a condition on the measurement matrix). In contrast, OMP is known to have very weak performance guarantees under RIP. Given its simple structure, we are able to extend OMPR using locality sensitive hashing to get OMPR-Hash, the first provably sub-linear (in dimensionality) algorithm for sparse recovery. Our proof techniques are novel and flexible enough to also permit the tightest known analysis of popular iterative algorithms such as CoSaMP and Subspace Pursuit. We provide experimental results on large problems providing recovery for vectors of size up to million dimensions. We demonstrate that for large-scale problems our proposed methods are more robust and faster than existing methods." ] }
1206.0983
2949290550
The Coding Theorem of L.A. Levin connects unconditional prefix Kolmogorov complexity with the discrete universal distribution. There are conditional versions referred to in several publications but as yet there exist no written proofs in English. Here we provide those proofs. They use a different definition than the standard one for the conditional version of the discrete universal distribution. Under the classic definition of conditional probability, there is no conditional version of the Coding Theorem.
We can enumerate all lower semicomputable probability mass functions with one argument. For convenience these arguments are elements of @math . The enumeration list is denoted [ P = P_1, P_2, \ldots ] There is another interpretation possible. Let prefix Turing machine @math be the @math th element in the standard enumeration of prefix Turing machines @math . Then @math where @math is a program for @math such that @math . This @math is the probability that prefix Turing machine @math outputs @math when the program on its input tape is supplied by flips of a fair coin. We can thus form the list [ R = R_1, R_2, \ldots ] Both lists @math and @math enumerate the same functions, and there are computable isomorphisms between the two ( @cite_4 , Lemma 4.3.4).
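The unconditional Coding Theorem that these enumerations lead up to can be stated in standard notation (K for prefix complexity, m for the universal distribution; the symbols here are the conventional ones, not quoted from the text):

```latex
% Universal discrete semimeasure built from the enumeration P_1, P_2, \ldots:
%   m(x) = \sum_i 2^{-i} P_i(x)
% Coding Theorem (Levin): prefix complexity matches -\log m up to a constant.
K(x) = -\log_2 m(x) + O(1)
```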
{ "cite_N": [ "@cite_4" ], "mid": [ "1638203394" ], "abstract": [ "The book is outstanding and admirable in many respects. ... is necessary reading for all kinds of readers from undergraduate students to top authorities in the field. Journal of Symbolic Logic Written by two experts in the field, this is the only comprehensive and unified treatment of the central ideas and their applications of Kolmogorov complexity. The book presents a thorough treatment of the subject with a wide range of illustrative applications. Such applications include the randomness of finite objects or infinite sequences, Martin-Loef tests for randomness, information theory, computational learning theory, the complexity of algorithms, and the thermodynamics of computing. It will be ideal for advanced undergraduate students, graduate students, and researchers in computer science, mathematics, cognitive sciences, philosophy, artificial intelligence, statistics, and physics. The book is self-contained in that it contains the basic requirements from mathematics and computer science. Included are also numerous problem sets, comments, source references, and hints to solutions of problems. New topics in this edition include Omega numbers, KolmogorovLoveland randomness, universal learning, communication complexity, Kolmogorov's random graphs, time-limited universal distribution, Shannon information and others." ] }
1206.0883
2032246575
In this study we analyze the dynamics of the contact list evolution of millions of users of the Skype communication network. We find that egocentric networks evolve heterogeneously in time as events of edge additions and deletions of individuals are grouped in long bursty clusters, which are separated by long inactive periods. We classify users by their link creation dynamics and show that bursty peaks of contact additions are likely to appear shortly after user account creation. We also study possible relations between bursty contact addition activity and other user-initiated actions like free and paid service adoption events. We show that bursts of contact additions are associated with increases in activity and adoption—an observation that can inform the design of targeted marketing tactics.
Temporal evolution of networks has been studied thoroughly in recent years, as datasets recording the dynamics of millions of interacting entities became available @cite_3 . One of the most investigated areas has been the evolution of large social networks @cite_24 @cite_20 @cite_11 @cite_1 , where it has been shown that several mechanisms push such networks towards developing heterogeneous topologies and strongly modular structures @cite_26 @cite_13 @cite_12 . In addition, various methodologies have been developed to detect evolving mesoscopic patterns @cite_4 @cite_6 and emerging community structures @cite_8 . Our study falls under the same umbrella as these previous works but focuses on the temporal evolution of egocentric networks.
{ "cite_N": [ "@cite_26", "@cite_4", "@cite_8", "@cite_1", "@cite_3", "@cite_6", "@cite_24", "@cite_13", "@cite_12", "@cite_20", "@cite_11" ], "mid": [ "2124637492", "2153624566", "2127048411", "1928223220", "", "2029792996", "2165598812", "2151078464", "2065096118", "1723561712", "2141113219" ], "abstract": [ "The emergence of order in natural systems is a constant source of inspiration for both physical and biological sciences. While the spatial order characterizing for example the crystals has been the basis of many advances in contemporary physics, most complex systems in nature do not offer such high degree of order. Many of these systems form complex networks whose nodes are the elements of the system and edges represent the interactions between them. Traditionally complex networks have been described by the random graph theory founded in 1959 by Paul Erdohs and Alfred Renyi. One of the defining features of random graphs is that they are statistically homogeneous, and their degree distribution (characterizing the spread in the number of edges starting from a node) is a Poisson distribution. In contrast, recent empirical studies, including the work of our group, indicate that the topology of real networks is much richer than that of random graphs. In particular, the degree distribution of real networks is a power-law, indicating a heterogeneous topology in which the majority of the nodes have a small degree, but there is a significant fraction of highly connected nodes that play an important role in the connectivity of the network. The scale-free topology of real networks has very important consequences on their functioning. For example, we have discovered that scale-free networks are extremely resilient to the random disruption of their nodes. On the other hand, the selective removal of the nodes with highest degree induces a rapid breakdown of the network to isolated subparts that cannot communicate with each other. 
The non-trivial scaling of the degree distribution of real networks is also an indication of their assembly and evolution. Indeed, our modeling studies have shown us that there are general principles governing the evolution of networks. Most networks start from a small seed and grow by the addition of new nodes which attach to the nodes already in the system. This process obeys preferential attachment: the new nodes are more likely to connect to nodes with already high degree. We have proposed a simple model based on these two principles which was able to reproduce the power-law degree distribution of real networks. Perhaps even more importantly, this model paved the way to a new paradigm of network modeling, trying to capture the evolution of networks, not just their static topology.", "Complex networks are studied across many fields of science. To uncover their structural design principles, we defined “network motifs,” patterns of interconnections occurring in complex networks at numbers that are significantly higher than those in randomized networks. We found such motifs in networks from biochemistry, neurobiology, ecology, and engineering. The motifs shared by ecological food webs were distinct from the motifs shared by the genetic networks of Escherichia coli and Saccharomyces cerevisiae or from those found in the World Wide Web. Similar motifs were found in networks that perform information processing, even though they describe elements as different as biomolecules within a cell and synaptic connections between neurons in Caenorhabditis elegans. Motifs may thus define universal classes of networks. This", "The modern science of networks has brought significant advances to our understanding of complex systems. One of the most relevant features of graphs representing real systems is community structure, or clustering, i.e. 
the organization of vertices in clusters, with many edges joining vertices of the same cluster and comparatively few edges joining vertices of different clusters. Such clusters, or communities, can be considered as fairly independent compartments of a graph, playing a similar role like, e.g., the tissues or the organs in the human body. Detecting communities is of great importance in sociology, biology and computer science, disciplines where systems are often represented as graphs. This problem is very hard and not yet satisfactorily solved, despite the huge effort of a large interdisciplinary community of scientists working on it over the past few years. We will attempt a thorough exposition of the topic, from the definition of the main elements of the problem, to the presentation of most methods developed, with a special focus on techniques designed by statistical physicists, from the discussion of crucial issues like the significance of clustering and how methods should be tested and compared against each other, to the description of applications to real networks.", "Microblogging and mobile devices appear to augment human social capabilities, which raises the question whether they remove cognitive or biological constraints on human communication. In this paper we analyze a dataset of Twitter conversations collected across six months involving 1.7 million individuals and test the theoretical cognitive limit on the number of stable social relationships known as Dunbar's number. We find that the data are in agreement with Dunbar's result; users can entertain a maximum of 100–200 stable relationships. Thus, the ‘economy of attention’ is limited in the online world by cognitive and biological constraints as predicted by Dunbar's theory. 
We propose a simple model for users' behavior that includes finite priority queuing and time resources that reproduces the observed social behavior.", "", "Temporal networks are commonly used to represent systems where connections between elements are active only for restricted periods of time, such as telecommunication, neural signal processing, biochemical reaction and human social interaction networks. We introduce the framework of temporal motifs to study the mesoscale topological–temporal structure of temporal networks in which the events of nodes do not overlap in time. Temporal motifs are classes of similar event sequences, where the similarity refers not only to topology but also to the temporal order of the events. We provide a mapping from event sequences to coloured directed graphs that enables an efficient algorithm for identifying temporal motifs. We discuss some aspects of temporal motifs, including causality and null models, and present basic statistics of temporal motifs in a large mobile call network.", "We analyze the anonymous communication patterns of 2.5 million customers of a Belgian mobile phone operator. Grouping customers by billing address, we build a social network of cities that consists of communications between 571 cities in Belgium. We show that inter-city communication intensity is characterized by a gravity model: the communication intensity between two cities is proportional to the product of their sizes divided by the square of their distance.", "We present a detailed study of network evolution by analyzing four large online social networks with full temporal information about node and edge arrivals. For the first time at such a large scale, we study individual node arrival and edge creation processes that collectively lead to macroscopic properties of networks. 
Using a methodology based on the maximum-likelihood principle, we investigate a wide variety of network formation strategies, and show that edge locality plays a critical role in evolution of networks. Our findings supplement earlier network models based on the inherently non-local preferential attachment. Based on our observations, we develop a complete model of network evolution, where nodes arrive at a prespecified rate and select their lifetimes. Each node then independently initiates edges according to a \"gap\" process, selecting a destination for each edge according to a simple triangle-closing model free of any parameters. We show analytically that the combination of the gap distribution with the node lifetime leads to a power law out-degree distribution that accurately reflects the true network in all four cases. Finally, we give model parameter settings that allow automatic evolution and generation of realistic synthetic networks of arbitrary scale.", "Online communities in the form of message boards, listservs, and newsgroups continue to represent a considerable amount of the social activity on the Internet. Every year thousands of groups flourish while others decline into relative obscurity; likewise, millions of members join a new community every year, some of whom will come to manage or moderate the conversation while others simply sit by the sidelines and observe. These processes of group formation, growth, and dissolution are central in social science, and in an online venue they have ramifications for the design and development of community software. In this paper we explore a large corpus of thriving online communities. These groups vary widely in size, moderation and privacy, and cover an equally diverse set of subject matter. We present a broad range of descriptive statistics of these groups. 
Using metadata from groups, members, and individual messages, we identify users who post and are replied-to frequently by multiple group members; we classify these high-engagement users based on the longevity of their engagements. We show that users who will go on to become long-lived, highly-engaged users experience significantly better treatment than other users from the moment they join the group, well before there is an opportunity for them to develop a long-standing relationship with members of the group. We present a simple model explaining long-term heavy engagement as a combination of user-dependent and group-dependent factors. Using this model as an analytical tool, we show that properties of the user alone are sufficient to explain 95% of all memberships, but introducing a small amount of per-group information dramatically improves our ability to model users belonging to multiple groups.", "Complex networks are often constructed by aggregating empirical data over time, such that a link represents the existence of interactions between the endpoint nodes and the link weight represents the intensity of such interactions within the aggregation time window. The resulting networks are then often considered static. More often than not, the aggregation time window is dictated by the availability of data, and the effects of its length on the resulting networks are rarely considered. Here, we address this question by studying the structural features of networks emerging from aggregating empirical data over different time intervals, focussing on networks derived from time-stamped, anonymized mobile telephone call records. Our results show that short aggregation intervals yield networks where strong links associated with dense clusters dominate; the seeds of such clusters or communities become already visible for intervals of around one week. The degree and weight distributions are seen to become stationary around a few days and a few weeks, respectively. 
An aggregation interval of around 30 days results in the stablest similar networks when consecutive windows are compared. For longer intervals, the effects of weak or random links become increasingly stronger, and the average degree of the network keeps growing even for intervals up to 180 days. The placement of the time window is also seen to affect the outcome: for short windows, different behavioural patterns play a role during weekends and weekdays, and for longer windows it is seen that networks aggregated during holiday periods are significantly different.", "Electronic databases, from phone to e-mails logs, currently provide detailed records of human communication patterns, offering novel avenues to map and explore the structure of social and communication networks. Here we examine the communication patterns of millions of mobile phone users, allowing us to simultaneously study the local and the global structure of a society-wide communication network. We observe a coupling between interaction strengths and the network's local structure, with the counterintuitive consequence that social networks are robust to the removal of the strong ties but fall apart after a phase transition if the weak ties are removed. We show that this coupling significantly slows the diffusion process, resulting in dynamic trapping of information in communities and find that, when it comes to information diffusion, weak and strong ties are both simultaneously ineffective." ] }
1206.0883
2032246575
In this study we analyze the dynamics of the contact list evolution of millions of users of the Skype communication network. We find that egocentric networks evolve heterogeneously in time as events of edge additions and deletions of individuals are grouped in long bursty clusters, which are separated by long inactive periods. We classify users by their link creation dynamics and show that bursty peaks of contact additions are likely to appear shortly after user account creation. We also study possible relations between bursty contact addition activity and other user-initiated actions like free and paid service adoption events. We show that bursts of contact additions are associated with increases in activity and adoption—an observation that can inform the design of targeted marketing tactics.
Heterogeneities in the dynamics of social interactions have been observed by following the communication sequences of individuals @cite_18 @cite_9 @cite_14 @cite_27 @cite_5 . Circadian fluctuations and long-range temporal correlations were shown to play an important role here @cite_16 @cite_22 @cite_28 @cite_7 , and they partially explain the observed non-homogeneous behaviour. Recently, heterogeneous evolution of social networks was also reported by Gaito et al. @cite_10 , who analyzed the dynamics of the Renren online social network. In that paper @cite_10 the authors simultaneously arrived at conclusions similar to ours regarding the burstiness of contact additions by users. In our study -- beyond confirming this effect in an independent dataset -- we extend this finding in two ways. First, we demonstrate evolving bursty trains also in the sequences of contact deletions of individuals, and second, we highlight non-trivial correlations that trigger bursty periods in the evolution of egocentric networks.
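The notion of a bursty train used above (consecutive events separated by at most a threshold delta_t belong to the same train) can be sketched as follows; the function name, timestamps, and threshold value are illustrative:

```python
def bursty_trains(timestamps, delta_t):
    """Group a sorted event sequence into bursty trains.

    Two consecutive events belong to the same train when their
    inter-event time is at most delta_t; a gap larger than delta_t
    starts a new train. Returns the list of train sizes, whose
    distribution is the usual burstiness indicator.
    """
    if not timestamps:
        return []
    sizes = [1]
    for prev, curr in zip(timestamps, timestamps[1:]):
        if curr - prev <= delta_t:
            sizes[-1] += 1  # event extends the current train
        else:
            sizes.append(1)  # long gap: a new train begins
    return sizes
```

For example, `bursty_trains([0, 1, 2, 50, 51, 200], delta_t=5)` yields `[3, 2, 1]`: a train of three tightly spaced events, one of two, and an isolated event.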
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_22", "@cite_7", "@cite_28", "@cite_9", "@cite_27", "@cite_5", "@cite_16", "@cite_10" ], "mid": [ "1972309850", "2104972875", "2089803230", "2087346837", "2044555216", "1973253288", "2023019696", "1988115140", "2052702514", "2109805691" ], "abstract": [ "What determines the timing of human actions? A big question, but the science of human dynamics is here to tackle it. And its predictions are of practical value: for example, when ISPs decide what bandwidth an institution needs, they use a model of the likely timing and activity level of the individuals. Current models assume that an individual has a well defined probability of engaging in a specific action at a given moment, but evidence that the timing of human actions does not follow this pattern (of Poisson statistics) is emerging. Instead the delay between two consecutive events is best described by a heavy-tailed (power law) distribution. Albert-Laszlo Barabasi proposes an explanation for the prevalence of this behaviour. The ‘bursty’ nature of human dynamics, he finds, is a fundamental consequence of decision making.", "We study the dynamic network of e-mail traffic and find that it develops self-organized coherent structures similar to those appearing in many nonlinear dynamic systems. Such structures are uncovered by a general information theoretic approach to dynamic networks based on the analysis of synchronization among trios of users. In the e-mail network, coherent structures arise from temporal correlations when users act in a synchronized manner. These temporally linked structures turn out to be functional, goal-oriented aggregates that must react in real time to changing objectives and challenges (e.g., committees at a university). 
In contrast, static structures turn out to be related to organizational units (e.g., departments).", "Inhomogeneous temporal processes, like those appearing in human communications, neuron spike trains, and seismic signals, consist of high-activity bursty intervals alternating with long low-activity periods. In recent studies such bursty behavior has been characterized by a fat-tailed inter-event time distribution, while temporal correlations were measured by the autocorrelation function. However, these characteristic functions are not capable to fully characterize temporally correlated heterogenous behavior. Here we show that the distribution of the number of events in a bursty period serves as a good indicator of the dependencies, leading to the universal observation of power-law distribution for a broad class of phenomena. We find that the correlations in these quite different systems can be commonly interpreted by memory effects and described by a simple phenomenological model, which displays temporal behavior qualitatively similar to that in real systems.", "Even though people in our contemporary technological society are depending on communication, our understanding of the underlying laws of human communicational behavior continues to be poorly understood. Here we investigate the communication patterns in 2 social Internet communities in search of statistical laws in human interaction activity. This research reveals that human communication networks dynamically follow scaling laws that may also explain the observed trends in economic growth. Specifically, we identify a generalized version of Gibrat's law of social activity expressed as a scaling law between the fluctuations in the number of messages sent by members and their level of activity. Gibrat's law has been essential in understanding economic growth patterns, yet without an underlying general principle for its origin. 
We attribute this scaling law to long-term correlation patterns in human activity, which surprisingly span from days to the entire period of the available data of more than 1 year. Further, we provide a mathematical framework that relates the generalized version of Gibrat's law to the long-term correlated dynamics, which suggests that the same underlying mechanism could be the source of Gibrat's law in economics, ranging from large firms, research and development expenditures, gross domestic product of countries, to city population growth. These findings are also of importance for designing communication networks and for the understanding of the dynamics of social systems in which communication plays a role, such as economic markets and political systems.", "We investigate the communication sequences of millions of people through two different channels and analyse the fine grained temporal structure of correlated event trains induced by single individuals. By focusing on correlations between the heterogeneous dynamics and the topology of egocentric networks we find that the bursty trains usually evolve for pairs of individuals rather than for the ego and his her several neighbours, thus burstiness is a property of the links rather than of the nodes. We compare the directional balance of calls and short messages within bursty trains to the average on the actual link and show that for the trains of voice calls the imbalance is significantly enhanced, while for short messages the balance within the trains increases. These effects can be partly traced back to the technological constraints (for short messages) and partly to the human behavioural features (voice calls). 
We define a model that is able to reproduce the empirical results and may help us to understand better the mechanisms driving technology mediated human communication dynamics.", "The dynamics of a wide range of real systems, from email patterns to earthquakes, display a bursty, intermittent nature, characterized by short timeframes of intense activity followed by long times of no or reduced activity. The understanding of the origin of such bursty patterns is hindered by the lack of tools to compare different systems using a common framework. Here we propose to characterize the bursty nature of real signals using orthogonal measures quantifying two distinct mechanisms leading to burstiness: the interevent time distribution and the memory. We find that while the burstiness of natural phenomena is rooted in both the interevent time distribution and memory, for human dynamics memory is weak, and the bursty character is due to the changes in the interevent time distribution. Finally, we show that current models lack in their ability to reproduce the activity pattern observed in real systems, opening up avenues for future work. Copyright c �EPLA, 2008", "Abstract Patterns of deliberate human activity and behavior are of utmost importance in areas as diverse as disease spread, resource allocation, and emergency response. Because of its widespread availability and use, e-mail correspondence provides an attractive proxy for studying human activity. Recently, it was reported that the probability density for the inter-event time τ between consecutively sent e-mails decays asymptotically as τ−α, with α ≈ 1. The slower-than-exponential decay of the inter-event time distribution suggests that deliberate human activity is inherently non-Poissonian. Here, we demonstrate that the approximate power-law scaling of the inter-event time distribution is a consequence of circadian and weekly cycles of human activity. 
We propose a cascading nonhomogeneous Poisson process that explicitly integrates these periodic patterns in activity with an individual's tendency to continue participating in an activity. Using standard statistical techniques, we show that our model is consistent with the empirical data. Our findings may also provide insight into the origins of heavy-tailed distributions in other complex systems. complex systems human activity hypothesis testing point process", "We present a modeling framework for dynamical and bursty contact networks made of agents in social interaction. We consider agents’ behavior at short time scales in which the contact network is formed by disconnected cliques of different sizes. At each time a random agent can make a transition from being isolated to being part of a group or vice versa. Different distributions of contact times and intercontact times between individuals are obtained by considering transition probabilities with memory effects, i.e., the transition probabilities for each agent depend both on its state isolated or interacting and on the time elapsed since the last change in state. The model lends itself to analytical and numerical investigations. The modeling framework can be easily extended and paves the way for systematic investigations of dynamical processes occurring on rapidly", "The temporal communication patterns of human individuals are known to be inhomogeneous or bursty, which is reflected as heavy tail behavior in the inter-event time distribution. As the cause of such a bursty behavior two main mechanisms have been suggested: (i) inhomogeneities due to the circadian and weekly activity patterns and (ii) inhomogeneities rooted in human task execution behavior. In this paper, we investigate the role of these mechanisms by developing and then applying systematic de-seasoning methods to remove the circadian and weekly patterns from the time series of mobile phone communication events of individuals. 
We find that the heavy tails in the inter-event time distributions remain robust with respect to this procedure, which clearly indicates that the human task execution-based mechanism is a possible cause of the remaining burstiness in temporal mobile phone communication patterns.", "The high level of dynamics in today's online social networks (OSNs) creates new challenges for their infrastructures and providers. In particular, dynamics involving edge creation has direct implications on strategies for resource allocation, data partitioning and replication. Understanding network dynamics in the context of physical time is a critical first step towards a predictive approach towards infrastructure management in OSNs. Despite increasing efforts to study social network dynamics, current analyses mainly focus on change over time of static metrics computed on snapshots of social graphs. The limited prior work models network dynamics with respect to a logical clock. In this paper, we present results of analyzing a large timestamped dataset describing the initial growth and evolution of a large social network in China. We analyze and model the burstiness of link creation process, using the second derivative, i.e. the acceleration of the degree. This allows us to detect bursts, and to characterize the social activity of a OSN user as one of four phases: acceleration at the beginning of an activity burst, where link creation rate is increasing; deceleration when burst is ending and link creation process is slowing; cruising, when node activity is in a steady state, and complete inactivity." ] }
1206.1270
2951734015
This paper describes a new approach, based on linear programming, for computing nonnegative matrix factorizations (NMFs). The key idea is a data-driven model for the factorization where the most salient features in the data are used to express the remaining features. More precisely, given a data matrix X, the algorithm identifies a matrix C such that X approximately equals CX and some linear constraints. The constraints are chosen to ensure that the matrix C selects features; these features can then be used to find a low-rank NMF of X. A theoretical analysis demonstrates that this approach has guarantees similar to those of the recent NMF algorithm of (2012). In contrast with this earlier work, the proposed method extends to more general noise models and leads to efficient, scalable algorithms. Experiments with synthetic and real datasets provide evidence that the new approach is also superior in practice. An optimized C++ implementation can factor a multigigabyte matrix in a matter of minutes.
Localizing factorizations via column or row subset selection is a popular alternative to direct factorization methods such as the SVD. Interpolative decompositions such as Rank-Revealing QR @cite_7 and CUR @cite_25 have favorable efficiency properties as compared to factorizations (such as SVD) that are not based on exemplars. Factorization localization has been used in subspace clustering and has been shown to be robust to outliers @cite_22 @cite_19 .
{ "cite_N": [ "@cite_19", "@cite_22", "@cite_25", "@cite_7" ], "mid": [ "2139054653", "2003217181", "2141696759", "2119233169" ], "abstract": [ "This paper considers the problem of clustering a collection of unlabeled data points assumed to lie near a union of lower dimensional planes. As is common in computer vision or unsupervised learning applications, we do not know in advance how many subspaces there are nor do we have any information about their dimensions. We develop a novel geometric analysis of an algorithm named sparse subspace clustering (SSC) [11], which significantly broadens the range of problems where it is provably effective. For instance, we show that SSC can recover multiple subspaces, each of dimension comparable to the ambient dimension. We also prove that SSC can correctly cluster data points even when the subspaces of interest intersect. Further, we develop an extension of SSC that succeeds when the data set is corrupted with possibly overwhelmingly many outliers. Underlying our analysis are clear geometric insights, which may bear on other sparse recovery problems. A numerical study complements our theoretical analysis and demonstrates the effectiveness of these methods.", "We propose a method based on sparse representation (SR) to cluster data drawn from multiple low-dimensional linear or affine subspaces embedded in a high-dimensional space. Our method is based on the fact that each point in a union of subspaces has a SR with respect to a dictionary formed by all other data points. In general, finding such a SR is NP hard. Our key contribution is to show that, under mild assumptions, the SR can be obtained exactly by using l1 optimization. The segmentation of the data is obtained by applying spectral clustering to a similarity matrix built from this SR. Our method can handle noise, outliers as well as missing data. We apply our subspace clustering algorithm to the problem of segmenting multiple motions in video.
Experiments on 167 video sequences show that our approach significantly outperforms state-of-the-art methods.", "Principal components analysis and, more generally, the Singular Value Decomposition are fundamental data analysis tools that express a data matrix in terms of a sequence of orthogonal or uncorrelated vectors of decreasing importance. Unfortunately, being linear combinations of up to all the data points, these vectors are notoriously difficult to interpret in terms of the data and processes generating the data. In this article, we develop CUR matrix decompositions for improved data analysis. CUR decompositions are low-rank matrix decompositions that are explicitly expressed in terms of a small number of actual columns and or actual rows of the data matrix. Because they are constructed from actual data elements, CUR decompositions are interpretable by practitioners of the field from which the data are drawn (to the extent that the original data are). We present an algorithm that preferentially chooses columns and rows that exhibit high “statistical leverage” and, thus, in a very precise statistical sense, exert a disproportionately large “influence” on the best low-rank fit of the data matrix. By selecting columns and rows in this manner, we obtain improved relative-error and constant-factor approximation guarantees in worst-case analysis, as opposed to the much coarser additive-error guarantees of prior work. 
In addition, since the construction involves computing quantities with a natural and widely studied statistical interpretation, we can leverage ideas from diagnostic regression analysis to employ these matrix decompositions for exploratory data analysis.", "Given an m x n matrix M with m > n, it is shown that there exists a permutation Π and an integer k such that the QR factorization MΠ = Q (A_k B_k; 0 C_k) reveals the numerical rank of M: the k x k upper-triangular matrix A_k is well conditioned, ||C_k||_2 is small, and B_k is linearly dependent on A_k with coefficients bounded by a low-degree polynomial in n. Existing rank-revealing QR (RRQR) algorithms are related to such factorizations and two algorithms are presented for computing them. The new algorithms are nearly as efficient as QR with column pivoting for most problems and take O(mn^2) floating-point operations in the worst case." ] }
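The CUR decomposition described in the record above preferentially keeps columns with high statistical leverage. A simplified deterministic numpy sketch of leverage-score column selection (the cited method samples columns randomly with leverage-based probabilities; taking the top scores, as here, is our simplification):

```python
import numpy as np

def leverage_score_columns(X, k, c):
    """Pick the c columns of X with the largest rank-k leverage scores."""
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    lev = np.sum(Vt[:k, :] ** 2, axis=0) / k     # leverage scores, sum to 1
    return np.argsort(lev)[::-1][:c]             # indices of the c largest

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 8))  # rank-2 data
cols = leverage_score_columns(X, k=2, c=3)
C = X[:, cols]                                   # CUR-style column sketch
# How well do the selected (actual) columns reconstruct the whole matrix?
coef = np.linalg.lstsq(C, X, rcond=None)[0]
err = np.linalg.norm(X - C @ coef) / np.linalg.norm(X)
```

Because the selected columns are actual data columns, the sketch stays interpretable, which is the main argument the CUR abstract makes against SVD-style linear combinations.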
1206.1270
2951734015
This paper describes a new approach, based on linear programming, for computing nonnegative matrix factorizations (NMFs). The key idea is a data-driven model for the factorization where the most salient features in the data are used to express the remaining features. More precisely, given a data matrix X, the algorithm identifies a matrix C such that X approximately equals CX and some linear constraints. The constraints are chosen to ensure that the matrix C selects features; these features can then be used to find a low-rank NMF of X. A theoretical analysis demonstrates that this approach has guarantees similar to those of the recent NMF algorithm of (2012). In contrast with this earlier work, the proposed method extends to more general noise models and leads to efficient, scalable algorithms. Experiments with synthetic and real datasets provide evidence that the new approach is also superior in practice. An optimized C++ implementation can factor a multigigabyte matrix in a matter of minutes.
Recent work on dictionary learning has proposed factorization localization solutions to nonnegative matrix factorization using group sparsity techniques @cite_1 @cite_18 . One of these works proves asymptotic exact recovery in a restricted noise model, but this result requires preprocessing to remove duplicate or near-duplicate rows. Elhamifar shows exact representative recovery in the noiseless setting assuming no hott topics are duplicated. Our work improves upon these results in several respects, enabling finite sample error bounds, eliminating any need to preprocess the data, and providing algorithmic implementations that scale to very large data sets.
{ "cite_N": [ "@cite_18", "@cite_1" ], "mid": [ "1966872876", "2046793177" ], "abstract": [ "We consider the problem of finding a few representatives for a dataset, i.e., a subset of data points that efficiently describes the entire dataset. We assume that each data point can be expressed as a linear combination of the representatives and formulate the problem of finding the representatives as a sparse multiple measurement vector problem. In our formulation, both the dictionary and the measurements are given by the data matrix, and the unknown sparse codes select the representatives via convex optimization. In general, we do not assume that the data are low-rank or distributed around cluster centers. When the data do come from a collection of low-rank models, we show that our method automatically selects a few representatives from each low-rank model. We also analyze the geometry of the representatives and discuss their relationship to the vertices of the convex hull of the data. We show that our framework can be extended to detect and reject outliers in datasets, and to efficiently deal with new observations and large datasets. The proposed framework and theoretical foundations are illustrated with examples in video summarization and image classification using representatives.", "A collaborative convex framework for factoring a data matrix X into a nonnegative product AS , with a sparse coefficient matrix S, is proposed. We restrict the columns of the dictionary matrix A to coincide with certain columns of the data matrix X, thereby guaranteeing a physically meaningful dictionary and dimensionality reduction. We use l1, ∞ regularization to select the dictionary from the data and show that this leads to an exact convex relaxation of l0 in the case of distinct noise-free data. 
We also show how to relax the restriction-to-X constraint by initializing an alternating minimization approach with the solution of the convex model, obtaining a dictionary close to but not necessarily in X. We focus on applications of the proposed framework to hyperspectral endmember and abundance identification and also show an application to blind source separation of nuclear magnetic resonance data." ] }
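The self-expression model above, in which a few exemplar columns of X generate the remaining ones, is the separable ("hott topics") assumption. As an illustration only (not the convex programs of the cited works), a numpy sketch of the classical successive projection heuristic for recovering such anchor columns:

```python
import numpy as np

def spa(X, r):
    """Successive projection: greedily pick r extreme columns of X.
    Each step takes the column of largest norm, then projects it out."""
    R = X.astype(float).copy()
    picked = []
    for _ in range(r):
        j = int(np.argmax(np.sum(R ** 2, axis=0)))
        picked.append(j)
        u = R[:, j] / np.linalg.norm(R[:, j])
        R = R - np.outer(u, u @ R)      # remove the chosen direction
    return picked

rng = np.random.default_rng(1)
W = rng.random((6, 3))                  # 3 "anchor" columns
H = np.array([[0.6, 0.2], [0.3, 0.1], [0.1, 0.7]])   # convex mixtures
X = np.hstack([W, W @ H])               # anchors first, then mixtures
anchors = spa(X, 3)
```

On exactly separable data like this toy X, the maximum-norm column is always a vertex of the convex hull, so the greedy loop recovers the three anchor indices.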
1206.0111
1585614163
OpenGM is a C++ template library for defining discrete graphical models and performing inference on these models, using a wide range of state-of-the-art algorithms. No restrictions are imposed on the factor graph to allow for higher-order factors and arbitrary neighborhood structures. Large models with repetitive structure are handled efficiently because (i) functions that occur repeatedly need to be stored only once, and (ii) distinct functions can be implemented differently, using different encodings alongside each other in the same model. Several parametric functions (e.g. metrics), sparse and dense value tables are provided and so is an interface for custom C++ code. Algorithms are separated by design from the representation of graphical models and are easily exchangeable. OpenGM, its algorithms, HDF5 file format and command line tools are modular and extendible.
Graphical models have become a standard tool in machine learning, and inference (marginal and MAP estimation) is the central problem, cf. @cite_0 .
{ "cite_N": [ "@cite_0" ], "mid": [ "2137813581" ], "abstract": [ "Algorithms that must deal with complicated global functions of many variables often exploit the manner in which the given functions factor as a product of \"local\" functions, each of which depends on a subset of the variables. Such a factorization can be visualized with a bipartite graph that we call a factor graph, In this tutorial paper, we present a generic message-passing algorithm, the sum-product algorithm, that operates in a factor graph. Following a single, simple computational rule, the sum-product algorithm computes-either exactly or approximately-various marginal functions derived from the global function. A wide variety of algorithms developed in artificial intelligence, signal processing, and digital communications can be derived as specific instances of the sum-product algorithm, including the forward backward algorithm, the Viterbi algorithm, the iterative \"turbo\" decoding algorithm, Pearl's (1988) belief propagation algorithm for Bayesian networks, the Kalman filter, and certain fast Fourier transform (FFT) algorithms." ] }
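The sum-product algorithm of @cite_0 can be demonstrated on the smallest nontrivial factor graph: a chain of three binary variables with two pairwise factors. The factor values below are arbitrary illustrative numbers:

```python
import numpy as np
from itertools import product

# p(x1, x2, x3) is proportional to f12(x1, x2) * f23(x2, x3); all binary.
f12 = np.array([[1.0, 2.0], [3.0, 1.0]])
f23 = np.array([[2.0, 1.0], [1.0, 4.0]])

# Sum-product: factor-to-variable messages arriving at x2.
m_f12_x2 = f12.sum(axis=0)          # marginalize out x1
m_f23_x2 = f23.sum(axis=1)          # marginalize out x3
marg_x2 = m_f12_x2 * m_f23_x2       # product of incoming messages
marg_x2 /= marg_x2.sum()

# Brute-force check by enumerating all 8 joint states.
brute = np.zeros(2)
for x1, x2, x3 in product(range(2), repeat=3):
    brute[x2] += f12[x1, x2] * f23[x2, x3]
brute /= brute.sum()
```

On a tree-structured graph such as this chain, the product of incoming messages gives the exact marginal, matching the brute-force enumeration.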
1206.0629
2949344645
Community discovery in complex networks is an interesting problem with a number of applications, especially in the knowledge extraction task in social and information networks. However, many large networks often lack a particular community organization at a global level. In these cases, traditional graph partitioning algorithms fail to let the latent knowledge embedded in modular structure emerge, because they impose a top-down global view of a network. We propose here a simple local-first approach to community discovery, able to unveil the modular organization of real complex networks. This is achieved by democratically letting each node vote for the communities it sees surrounding it in its limited view of the global system, i.e. its ego neighborhood, using a label propagation algorithm; finally, the local communities are merged into a global collection. We tested this intuition against the state-of-the-art overlapping and non-overlapping community discovery methods, and found that our new method clearly outperforms the others in the quality of the obtained communities, evaluated by using the extracted communities to predict the metadata about the nodes of several real world networks. We also show how our method is deterministic, fully incremental, and has a limited time complexity, so that it can be used on web-scale real networks.
A variety of CD methods are based on the modularity concept, a quality function of a partition proposed by Newman @cite_9 @cite_20 . Modularity assigns high values to partitions in which the internal cluster density is higher than the external density. Hundreds of papers have been written about modularity, either using it as a quality function to be optimized, or studying its properties and deficiencies. One of the most advanced examples of modularity maximization CD is @cite_16 , where the authors use an extension of the modularity formula to cluster multiplex (evolving and or multirelational) networks. A fast and efficient greedy algorithm, Modularity Unfolding, has been successfully applied to the analysis of huge web graphs of millions of nodes and billions of edges, representing the structure in a subset of the WWW @cite_10 .
{ "cite_N": [ "@cite_9", "@cite_16", "@cite_10", "@cite_20" ], "mid": [ "2151936673", "2074617510", "2131681506", "" ], "abstract": [ "Many networks of interest in the sciences, including social networks, computer networks, and metabolic and regulatory networks, are found to divide naturally into communities or modules. The problem of detecting and characterizing this community structure is one of the outstanding issues in the study of networked systems. One highly effective approach is the optimization of the quality function known as “modularity” over the possible divisions of a network. Here I show that the modularity can be expressed in terms of the eigenvectors of a characteristic matrix for the network, which I call the modularity matrix, and that this expression leads to a spectral algorithm for community detection that returns results of demonstrably higher quality than competing methods in shorter running times. I illustrate the method with applications to several published network data sets.", "Network science is an interdisciplinary endeavor, with methods and applications drawn from across the natural, social, and information sciences. A prominent problem in network science is the algorithmic detection of tightly connected groups of nodes known as communities. We developed a generalized framework of network quality functions that allowed us to study the community structure of arbitrary multislice networks, which are combinations of individual networks coupled through links that connect each node in one network slice to itself in other slices. This framework allows studies of community structure in a general setting encompassing networks that evolve over time, have multiple types of links (multiplexity), and have multiple scales.", "We propose a simple method to extract the community structure of large networks. Our method is a heuristic method that is based on modularity optimization. 
It is shown to outperform all other known community detection methods in terms of computation time. Moreover, the quality of the communities detected is very good, as measured by the so-called modularity. This is shown first by identifying language communities in a Belgian mobile phone network of 2 million customers and by analysing a web graph of 118 million nodes and more than one billion links. The accuracy of our algorithm is also verified on ad hoc modular networks.", "" ] }
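The modularity quality function used by the methods above is Q = (1/2m) Σ_ij [A_ij − k_i k_j / 2m] δ(c_i, c_j). A small self-contained numpy sketch on a toy graph of our own (two triangles joined by a bridge edge):

```python
import numpy as np

def modularity(A, communities):
    """Newman modularity Q for an undirected graph given as a 0/1
    symmetric adjacency matrix and one community label per node."""
    k = A.sum(axis=1)                   # degrees
    two_m = A.sum()                     # 2 * number of edges
    c = np.asarray(communities)
    same = (c[:, None] == c[None, :])   # delta(c_i, c_j)
    return ((A - np.outer(k, k) / two_m) * same).sum() / two_m

# Two triangles {0,1,2} and {3,4,5} joined by the bridge edge 2-3.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
A = np.zeros((6, 6))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
q = modularity(A, [0, 0, 0, 1, 1, 1])   # the natural split scores 5/14
```

The natural two-community split scores Q = 5/14, higher than any partition that cuts a triangle, which is exactly why greedy maximizers such as Modularity Unfolding recover it.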
1206.0629
2949344645
Community discovery in complex networks is an interesting problem with a number of applications, especially in the knowledge extraction task in social and information networks. However, many large networks often lack a particular community organization at a global level. In these cases, traditional graph partitioning algorithms fail to let the latent knowledge embedded in modular structure emerge, because they impose a top-down global view of a network. We propose here a simple local-first approach to community discovery, able to unveil the modular organization of real complex networks. This is achieved by democratically letting each node vote for the communities it sees surrounding it in its limited view of the global system, i.e. its ego neighborhood, using a label propagation algorithm; finally, the local communities are merged into a global collection. We tested this intuition against the state-of-the-art overlapping and non-overlapping community discovery methods, and found that our new method clearly outperforms the others in the quality of the obtained communities, evaluated by using the extracted communities to predict the metadata about the nodes of several real world networks. We also show how our method is deterministic, fully incremental, and has a limited time complexity, so that it can be used on web-scale real networks.
Many algorithms have been proposed that are unrelated to modularity. Among them, a particularly important field is the application of information theory techniques, as for example in Infomap @cite_21 or Cross Associations @cite_4 . In particular, Infomap has been proven to be one among the best performing non-overlapping algorithms @cite_15 . For this reason we chose Infomap, as an alternative to modularity approaches, as a baseline method. Further, modularity approaches are affected by known issues, namely the resolution problem and the degeneracy of good solutions @cite_2 . Similarly to Infomap, Walktrap @cite_3 is based on flow methods and random walks.
{ "cite_N": [ "@cite_4", "@cite_21", "@cite_3", "@cite_2", "@cite_15" ], "mid": [ "", "2164998314", "2033590892", "2128366083", "1995996823" ], "abstract": [ "", "To comprehend the multipartite organization of large-scale biological and social systems, we introduce an information theoretic approach that reveals community structure in weighted and directed networks. We use the probability flow of random walks on a network as a proxy for information flows in the real system and decompose the network into modules by compressing a description of the probability flow. The result is a map that both simplifies and highlights the regularities in the structure and their relationships. We illustrate the method by making a map of scientific communication as captured in the citation patterns of >6,000 journals. We discover a multicentric organization with fields that vary dramatically in size and degree of integration into the network of science. Along the backbone of the network—including physics, chemistry, molecular biology, and medicine—information flows bidirectionally, but the map reveals a directional pattern of citation from the applied fields to the basic sciences.", "In a representative embodiment of the invention described herein, a well logging system for investigating subsurface formations is controlled by a general purpose computer programmed for real-time operation. The system is cooperatively arranged to provide for all aspects of a well logging operation, such as data acquisition and processing, tool control, information or data storage, and data presentation as a well logging tool is moved through a wellbore. The computer controlling the system is programmed to provide for data acquisition and tool control commands in direct response to asynchronous real-time external events. 
Such real-time external events may occur, for example, as a result of movement of the logging tool over a selected depth interval, or in response to requests or commands directed to the system by the well logging engineer by means of keyboard input.", "Detecting community structure is fundamental for uncovering the links between structure and function in complex networks and for practical applications in many disciplines such as biology and sociology. A popular method now widely used relies on the optimization of a quantity called modularity, which is a quality index for a partition of a network into communities. We find that modularity optimization may fail to identify modules smaller than a scale which depends on the total size of the network and on the degree of interconnectedness of the modules, even in cases where modules are unambiguously defined. This finding is confirmed through several examples, both in artificial and in real social, biological, and technological networks, where we show that modularity optimization indeed does not resolve a large number of modules. A check of the modules obtained through modularity optimization is thus necessary, and we provide here key elements for the assessment of the reliability of this community detection method.", "Uncovering the community structure exhibited by real networks is a crucial step toward an understanding of complex systems that goes beyond the local organization of their constituents. Many algorithms have been proposed so far, but none of them has been subjected to strict tests to evaluate their performance. Most of the sporadic tests performed so far involved small networks with known community structure and or artificial graphs with a simplified structure, which is very uncommon in real systems. Here we test several methods against a recently introduced class of benchmark graphs, with heterogeneous distributions of degree and community size. 
The methods are also tested against the benchmark by Girvan and Newman [Proc. Natl. Acad. Sci. U.S.A. 99, 7821 (2002)] and on random graphs. As a result of our analysis, three recent algorithms introduced by Rosvall and Bergstrom [Proc. Natl. Acad. Sci. U.S.A. 104, 7327 (2007); Proc. Natl. Acad. Sci. U.S.A. 105, 1118 (2008)], [J. Stat. Mech.: Theory Exp. (2008), P10008], and Ronhovde and Nussinov [Phys. Rev. E 80, 016109 (2009)] have an excellent performance, with the additional advantage of low computational complexity, which enables one to analyze large systems." ] }
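Flow methods such as Infomap and Walktrap rest on the observation that short random walks stay trapped inside communities. A minimal numpy illustration of that observation on a toy graph (our own example, not either algorithm):

```python
import numpy as np

# Two triangles {0,1,2} and {3,4,5} joined by the bridge edge 2-3.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
A = np.zeros((6, 6))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
P = A / A.sum(axis=1, keepdims=True)    # random-walk transition matrix

# Distribution of a 3-step walk started at node 0.
p = np.linalg.matrix_power(P, 3)[0]
p_inside = p[:3].sum()                  # mass still inside node 0's triangle
```

Most of the probability mass remains in the starting triangle, and Infomap exploits exactly this: walks trapped in a module can be described with a shorter code.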
1206.0629
2949344645
Community discovery in complex networks is an interesting problem with a number of applications, especially in the knowledge extraction task in social and information networks. However, many large networks often lack a particular community organization at a global level. In these cases, traditional graph partitioning algorithms fail to let the latent knowledge embedded in modular structure emerge, because they impose a top-down global view of a network. We propose here a simple local-first approach to community discovery, able to unveil the modular organization of real complex networks. This is achieved by democratically letting each node vote for the communities it sees surrounding it in its limited view of the global system, i.e. its ego neighborhood, using a label propagation algorithm; finally, the local communities are merged into a global collection. We tested this intuition against the state-of-the-art overlapping and non-overlapping community discovery methods, and found that our new method clearly outperforms the others in the quality of the obtained communities, evaluated by using the extracted communities to predict the metadata about the nodes of several real world networks. We also show how our method is deterministic, fully incremental, and has a limited time complexity, so that it can be used on web-scale real networks.
A very important property for community discovery is the ability to return overlapping partitions, i.e., the possibility for a node to belong to more than one community. This property reflects the common sense intuition that each of us is part of many different communities, including family, work, and probably many hobby-related communities. Specific algorithms built around this property are Hierarchical Link Clustering @cite_22 , HCDF @cite_12 and k-clique percolation @cite_18 .
{ "cite_N": [ "@cite_18", "@cite_22", "@cite_12" ], "mid": [ "2169404653", "2110620844", "" ], "abstract": [ "There has been a quickly growing interest in networks, since they can represent the structure of a wide class of complex systems occurring from the level of cells to society. Data obtained on real networks show that the corresponding graphs exhibit unexpected nontrivial properties, e.g., anomalous degree distributions, diameter, spreading phenomena, clustering coefficient, and correlations [1]. Very recently great attention has been paid to the local structural units of networks. Small and well defined subgraphs have been introduced as ‘‘motifs’’ [2]. Their distribution and clustering properties [2,3] can be used to interpret global features as well. Somewhat larger units, made up of vertices that are more densely connected to each other than to the rest of the network, are often referred to as communities [4], and have been considered to be the essential structural units of real networks. They have no obvious definition, and most of the recent methods for their identification rely on dividing the network into smaller pieces. The biggest drawback of these methods is that they do not allow for overlapping communities, although overlaps are generally assumed to be crucial features of communities. In this Letter we lay down the fundamentals of a kind of percolation phenomenon on graphs, which can also be used as an effective and deterministic method for uniquely identifying overlapping communities in large real networks [5]. Meanwhile, the various aspects of the classical Erdős", "Network theory has become pervasive in all sectors of biology, from biochemical signalling to human societies, but identification of relevant functional communities has been impaired by many nodes belonging to several overlapping groups at once, and by hierarchical structures.
These authors offer a radically different viewpoint, focusing on links rather than nodes, which allows them to demonstrate that overlapping communities and network hierarchies are two faces of the same issue.", "" ] }
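k-clique percolation @cite_18 defines communities as unions of adjacent k-cliques (cliques sharing k − 1 nodes), so a node may sit in several communities at once. A compact brute-force sketch for small graphs (the clique enumeration here is exponential and for illustration only):

```python
from itertools import combinations

def k_clique_communities(nodes, edges, k=3):
    """Overlapping communities as connected components of the clique
    graph: k-cliques are adjacent when they share k - 1 nodes."""
    adj = {u: set() for u in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # Brute-force k-clique enumeration (fine for toy graphs only).
    cliques = [set(c) for c in combinations(sorted(nodes), k)
               if all(v in adj[u] for u, v in combinations(c, 2))]
    parent = list(range(len(cliques)))
    def find(x):                        # union-find over cliques
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i, j in combinations(range(len(cliques)), 2):
        if len(cliques[i] & cliques[j]) >= k - 1:
            parent[find(i)] = find(j)
    comms = {}
    for i, c in enumerate(cliques):
        comms.setdefault(find(i), set()).update(c)
    return list(comms.values())

# Two triangles sharing only node 2: two communities overlapping at node 2.
comms = k_clique_communities(range(5),
                             [(0, 1), (0, 2), (1, 2), (2, 3), (2, 4), (3, 4)],
                             k=3)
```

Sharing a single node is not enough for two 3-cliques to percolate into one community, so node 2 ends up in both communities — the overlap the non-overlapping methods above cannot express.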
1206.0629
2949344645
Community discovery in complex networks is an interesting problem with a number of applications, especially in the knowledge extraction task in social and information networks. However, many large networks often lack a particular community organization at a global level. In these cases, traditional graph partitioning algorithms fail to let the latent knowledge embedded in modular structure emerge, because they impose a top-down global view of a network. We propose here a simple local-first approach to community discovery, able to unveil the modular organization of real complex networks. This is achieved by democratically letting each node vote for the communities it sees surrounding it in its limited view of the global system, i.e. its ego neighborhood, using a label propagation algorithm; finally, the local communities are merged into a global collection. We tested this intuition against the state-of-the-art overlapping and non-overlapping community discovery methods, and found that our new method clearly outperforms the others in the quality of the obtained communities, evaluated by using the extracted communities to predict the metadata about the nodes of several real world networks. We also show how our method is deterministic, fully incremental, and has a limited time complexity, so that it can be used on web-scale real networks.
Extracting useful knowledge from the modular structure of networked data is also a prolific line of research. We recall the GuruMine framework, whose aim is to identify leaders in information spread and to detect groups of users that are usually influenced by the same leaders @cite_1 . Many other works investigate the possibility of applying network analysis for studying, for instance, the dynamics of viral marketing @cite_7 .
{ "cite_N": [ "@cite_1", "@cite_7" ], "mid": [ "2170138277", "1994473607" ], "abstract": [ "In this demo we introduce GuruMine, a pattern mining system for the discovery of leaders, i.e., influential users in social networks, and their tribes, i.e., a set of users usually influenced by the same leader over several actions. GuruMine is built upon a novel pattern mining framework for leaders discovery, that we introduced in [1]. In particular, we consider social networks where users perform actions. Actions may be as simple as tagging resources (urls) as in del.icio.us, rating songs as in Yahoo! Music, or movies as in Yahoo! Movies, or users buying gadgets such as cameras, handholds, etc. and blogging a review on the gadgets. The assumption is that actions performed by a user can be seen by their network friends. Users seeing their friends actions are sometimes tempted to perform those actions. On the basis of the propagation of such influence, in [1] we provided various notion of leaders and developed algorithms for their efficient discovery. GuruMine provides users with a friendly graphical interface for selecting the actions of interest, and the kind of leaders to mine. The set of parameters driving the pattern discovery process can be iteratively refined, and the result is updated, if possible without incurring a completely new computation. Once a set of leaders has been extracted, GuruMine can easily validate them on a set of actions unseen during the pattern mining, by analyzing the portion of network reached by the influence of the selected leaders on the unseen actions. GuruMine also offers various visualizations over the social networks: the propagation of an action, the leaders, their tribes, and the interactions between different leaders and tribes. 
In this demo we will show: (i) how the pattern mining process can be driven towards the discovery of a good set of leaders, (ii) the ease of use of GuruMine system, and (iii) its outstanding performances on large real-world social networks and actions databases.", "We present an analysis of a person-to-person recommendation network, consisting of 4 million people who made 16 million recommendations on half a million products. We observe the propagation of recommendations and the cascade sizes, which we explain by a simple stochastic model. We analyze how user behavior varies within user communities defined by a recommendation network. Product purchases follow a ‘long tail’ where a significant share of purchases belongs to rarely sold items. We establish how the recommendation network grows over time and how effective it is from the viewpoint of the sender and receiver of the recommendations. While on average recommendations are not very effective at inducing purchases and do not spread very far, we present a model that successfully identifies communities, product, and pricing categories for which viral marketing seems to be very effective." ] }
1206.0089
2755347
We consider the problem of approximate consensus in mobile networks containing Byzantine nodes. We assume that each correct node can communicate only with its neighbors and has no knowledge of the global topology. As all nodes have moving ability, the topology is dynamic. The number of Byzantine nodes is bounded by f and known by all correct nodes. We first introduce an approximate Byzantine consensus protocol which is based on the linear iteration method. As nodes are allowed to collect information during several consecutive rounds, moving gives them the opportunity to gather more values. We propose a novel sufficient and necessary condition to guarantee the final convergence of the consensus protocol. The requirement expressed by our condition is not "universal": in each phase it affects only a single correct node. More precisely, at least one correct node among those that propose either the minimum or the maximum value which is present in the network, has to receive enough messages (quantity constraint) with either higher or lower values (quality constraint). Of course, nodes' motion should not prevent this requirement to be fulfilled. Our conclusion shows that the proposed condition can be satisfied if the total number of nodes is greater than 3f+1.
The approximate consensus problem in the presence of failures was first addressed in @cite_10 . Under the assumptions that the network is fully connected and the total number of nodes is known, @cite_10 proposes , and operations and then presents two consensus protocols, one for a synchronous and one for an asynchronous environment. The protocol of @cite_10 is later improved in @cite_12 : only @math nodes are needed in an asynchronous environment.
{ "cite_N": [ "@cite_10", "@cite_12" ], "mid": [ "2126906505", "2115307136" ], "abstract": [ "This paper considers a variant of the Byzantine Generals problem, in which processes start with arbitrary real values rather than Boolean values or values from some bounded range, and in which approximate, rather than exact, agreement is the desired goal. Algorithms are presented to reach approximate agreement in asynchronous, as well as synchronous systems. The asynchronous agreement algorithm is an interesting contrast to a result of , who show that exact agreement with guaranteed termination is not attainable in an asynchronous system with as few as one faulty process. The algorithms work by successive approximation, with a provable convergence rate that depends on the ratio between the number of faulty processes and the total number of processes. Lower bounds on the convergence rate for algorithms of this form are proved, and the algorithms presented are shown to be optimal.", "Consider an asynchronous system where each process begins with an arbitrary real value. Given some fixed e>0, an approximate agreement algorithm must have all non-faulty processes decide on values that are at most e from each other and are in the range of the initial values of the non-faulty processes. Previous constructions solved asynchronous approximate agreement only when there were at least 5t+1 processes, t of which may be Byzantine. In this paper we close an open problem raised by in 1983. We present a deterministic optimal resilience approximate agreement algorithm that can tolerate any t Byzantine faults while requiring only 3t+1 processes. The algorithm's rate of convergence and total message complexity are efficiently bounded as a function of the range of the initial values of the non-faulty processes. All previous asynchronous algorithms that are resilient to Byzantine failures may require arbitrarily many messages to be sent." ] }
1206.0089
2755347
We consider the problem of approximate consensus in mobile networks containing Byzantine nodes. We assume that each correct node can communicate only with its neighbors and has no knowledge of the global topology. As all nodes have moving ability, the topology is dynamic. The number of Byzantine nodes is bounded by f and known by all correct nodes. We first introduce an approximate Byzantine consensus protocol which is based on the linear iteration method. As nodes are allowed to collect information during several consecutive rounds, moving gives them the opportunity to gather more values. We propose a novel sufficient and necessary condition to guarantee the final convergence of the consensus protocol. The requirement expressed by our condition is not "universal": in each phase it affects only a single correct node. More precisely, at least one correct node among those that propose either the minimum or the maximum value which is present in the network, has to receive enough messages (quantity constraint) with either higher or lower values (quality constraint). Of course, nodes' motion should not prevent this requirement to be fulfilled. Our conclusion shows that the proposed condition can be satisfied if the total number of nodes is greater than 3f+1.
Approximate consensus was extended to partially connected networks in @cite_1 @cite_5 . However, without using flooding, these protocols do not fully achieve global convergence. The approximate consensus problem is also addressed in multi-agent systems @cite_6 @cite_14 @cite_13 @cite_2 @cite_0 . These protocols, called linear iterative consensus protocols, are mainly based on linear control theory and matrix theory. Without Byzantine failures, @cite_6 shows that in an undirected graph a sufficient and necessary condition for consists in having adequately connected graphs. For a directed graph, @cite_14 points out that a sufficient and necessary condition consists in having a spanning tree contained in adequately connected graphs. When no Byzantine failure occurs, the speed of was analyzed in @cite_13 . Based on knowledge of the global topology, @cite_2 and @cite_0 address the approximate consensus problem in systems where nodes suffer from Byzantine faults.
{ "cite_N": [ "@cite_14", "@cite_1", "@cite_6", "@cite_0", "@cite_2", "@cite_5", "@cite_13" ], "mid": [ "2099175737", "", "", "2107802325", "2144246223", "1978442567", "2074032690" ], "abstract": [ "This note considers the problem of information consensus among multiple agents in the presence of limited and unreliable information exchange with dynamically changing interaction topologies. Both discrete and continuous update schemes are proposed for information consensus. This note shows that information consensus under dynamically changing interaction topologies can be achieved asymptotically if the union of the directed interaction graphs have a spanning tree frequently enough as the system evolves.", "", "", "Given a network of interconnected nodes, each with a given initial value, we develop a distributed strategy that enables some or all of the nodes to calculate any arbitrary function of these initial values, despite the presence of some malicious (or faulty) nodes. Our scheme utilizes a linear iterative strategy where, at each time-step, each node updates its value to be a weighted average of its own previous value and those of its neighbors. We consider a node to be malicious if, instead of following the predefined linear iterative strategy, it updates its value arbitrarily at each time-step (perhaps by conspiring and coordinating with other malicious nodes). When there are up to f malicious nodes, we show that any node in the network is guaranteed to be able to calculate any arbitrary function of all initial node values if the graph of the network is at least (2f +1)-connected. Specifically, we show that under this condition, the nodes can calculate their desired functions after running the linear iteration for a finite number of time- steps (upper bounded by the number of nodes in the network) using almost any set of weights (i.e., for all weights except for a set of measure zero). 
Our approach treats the problem of fault-tolerant distributed consensus, where all nodes have to calculate the same function despite the presence of faulty or malicious nodes, as a special case.", "We consider the problem of distributed function calculation in the presence of faulty or malicious agents. In particular, we consider a setup where each node has an initial value and the goal is for (a subset of) the nodes to calculate a function of these values in a distributed manner. We focus on linear iterative strategies for function calculation, where each node updates its value at each time-step to be a weighted average of its own previous value and those of its neighbors; after a sufficiently large number of time-steps, each node is expected to have enough information to calculate the desired function of the initial node values. We study the susceptibility of such strategies to misbehavior by some nodes in the network; specifically, we consider a node to be malicious if it updates its value arbitrarily at each time-step, instead of following the predefined linear iterative strategy. If the connectivity of the network topology is 2f or less, we show that it is possible for a set of f malicious nodes to conspire in a way that makes it impossible for a subset of the other nodes in the network to correctly calculate an arbitrary function of all node values. Our analysis is constructive, in that it provides a specific scheme for the malicious nodes to follow in order to obfuscate the network in this fashion.", "In a distributed system, it is often necessary for nodes to agree on a particular event or to coordinate their activities. Applications of distributed agreement are many, such as Commit Protocols in distributed database systems, selection of a monitor node in a distributed system, detecting an intruder, or agreeing on the malicious behavior of a node. 
Among many forms of Distributed Agreement, one form is called Approximate Agreement (AA), in which the nodes, by exchanging their local values with other nodes, need to agree on values which are approximately equal to each other. Research on AA for fully connected networks is relatively mature. In contrast, the study of AA in partially connected networks has been very limited. More specifically, no general solution to the AA problem exists for such networks. This research solves the AA problem for a specific, scalable, partially connected network with limited relays. The research considers the worst failure mode of nodes, called Byzantine, and hybrid failure modes. The results show low communication cost in comparison to fully connected networks. The network is designed to take advantage of the results available for fully connected networks. Thus, the analysis for obtaining the expressions for Convergence Rate and Fault Tolerance becomes relatively easy.", "We consider the set G consisting of graphs of fixed order and weighted edges. The vertex set of graphs in G will correspond to point masses and the weight for an edge between two vertices is a functional of the distance between them. We pose the problem of finding the best vertex positional configuration in the presence of an additional proximity constraint, in the sense that, the second smallest eigenvalue of the corresponding graph Laplacian is maximized. In many recent applications of algebraic graph theory in systems and control, the second smallest eigenvalue of Laplacian has emerged as a critical parameter that influences the stability and robustness properties of dynamic systems that operate over an information network. Our motivation in the present work is to \"assign\" this Laplacian eigenvalue when relative positions of various elements dictate the interconnection of the underlying weighted graph. 
In this venue, one would then be able to \"synthesize\" information graphs that have desirable system theoretic properties." ] }
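The linear iterative update underlying the protocols surveyed above can be sketched in a few lines. This is a minimal illustration, not any specific cited protocol: it assumes a synchronous round model and a trimmed-mean rule in which each correct node discards the `f` largest and `f` smallest received values before averaging; all function and variable names are hypothetical.

```python
def trimmed_mean_step(own_value, neighbor_values, f):
    """One linear-iteration round: drop the f largest and f smallest
    neighbor values, then average the remainder with the node's own value."""
    vals = sorted(neighbor_values)
    kept = vals[f:len(vals) - f] if len(vals) > 2 * f else []
    pool = kept + [own_value]
    return sum(pool) / len(pool)

def iterate(values, neighbors, f, rounds):
    """Run the synchronous iteration on the correct nodes.
    values: dict node -> current value; neighbors: dict node -> list of nodes."""
    for _ in range(rounds):
        values = {
            n: trimmed_mean_step(values[n], [values[m] for m in neighbors[n]], f)
            for n in values
        }
    return values
```

Because each new value is a convex combination of trimmed received values, the states stay inside the convex hull of the initial correct values, and on a sufficiently connected graph the spread shrinks round after round.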
1206.0089
2755347
We consider the problem of approximate consensus in mobile networks containing Byzantine nodes. We assume that each correct node can communicate only with its neighbors and has no knowledge of the global topology. As all nodes have moving ability, the topology is dynamic. The number of Byzantine nodes is bounded by f and known by all correct nodes. We first introduce an approximate Byzantine consensus protocol which is based on the linear iteration method. As nodes are allowed to collect information during several consecutive rounds, moving gives them the opportunity to gather more values. We propose a novel sufficient and necessary condition to guarantee the final convergence of the consensus protocol. The requirement expressed by our condition is not "universal": in each phase it affects only a single correct node. More precisely, at least one correct node among those that propose either the minimum or the maximum value which is present in the network, has to receive enough messages (quantity constraint) with either higher or lower values (quality constraint). Of course, nodes' motion should not prevent this requirement to be fulfilled. Our conclusion shows that the proposed condition can be satisfied if the total number of nodes is greater than 3f+1.
Without flooding and global topology information, @cite_11 is, to our knowledge, the first paper to propose a solution to the approximate Byzantine consensus problem based on the linear iteration method. A sufficient condition on the network topology is given; when this condition is satisfied, is ensured.
{ "cite_N": [ "@cite_11" ], "mid": [ "2953249628" ], "abstract": [ "We consider the problem of diffusing information in networks that contain malicious nodes. We assume that each normal node in the network has no knowledge of the network topology other than an upper bound on the number of malicious nodes in its neighborhood. We introduce a topological property known as r-robustness of a graph, and show that this property provides improved bounds on tolerating malicious behavior, in comparison to traditional concepts such as connectivity and minimum degree. We use this topological property to analyze the canonical problems of distributed consensus and broadcasting, and provide sufficient conditions for these operations to succeed. Finally, we provide a construction for r-robust graphs and show that the common preferential-attachment model for scale-free networks produces a robust graph." ] }
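On small graphs, the r-robustness property introduced in @cite_11 can be verified by brute force over all pairs of nonempty disjoint vertex subsets. The sketch below is exponential in the number of nodes and is meant only to make the definition concrete; the names are illustrative.

```python
from itertools import combinations

def r_robust(adj, r):
    """Brute-force r-robustness check: for every pair of nonempty disjoint
    vertex subsets S1, S2, at least one must contain a node with >= r
    neighbors outside its own subset. adj: dict node -> set of neighbors."""
    nodes = list(adj)
    subsets = [frozenset(c) for k in range(1, len(nodes) + 1)
               for c in combinations(nodes, k)]

    def r_reachable(S):
        # Some node of S has at least r neighbors outside S.
        return any(len(adj[v] - S) >= r for v in S)

    return all(r_reachable(S1) or r_reachable(S2)
               for S1 in subsets for S2 in subsets if not (S1 & S2))
```

For instance, the complete graph on four nodes is 2-robust but not 3-robust.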
1206.0089
2755347
We consider the problem of approximate consensus in mobile networks containing Byzantine nodes. We assume that each correct node can communicate only with its neighbors and has no knowledge of the global topology. As all nodes have moving ability, the topology is dynamic. The number of Byzantine nodes is bounded by f and known by all correct nodes. We first introduce an approximate Byzantine consensus protocol which is based on the linear iteration method. As nodes are allowed to collect information during several consecutive rounds, moving gives them the opportunity to gather more values. We propose a novel sufficient and necessary condition to guarantee the final convergence of the consensus protocol. The requirement expressed by our condition is not "universal": in each phase it affects only a single correct node. More precisely, at least one correct node among those that propose either the minimum or the maximum value which is present in the network, has to receive enough messages (quantity constraint) with either higher or lower values (quality constraint). Of course, nodes' motion should not prevent this requirement to be fulfilled. Our conclusion shows that the proposed condition can be satisfied if the total number of nodes is greater than 3f+1.
While @cite_11 only shows a sufficient condition, @cite_9 and @cite_8 define a sufficient and necessary condition almost simultaneously. Their new arguments are also related to topology. Yet their conditions are static and cannot be adapted directly to mobile environments.
{ "cite_N": [ "@cite_8", "@cite_9", "@cite_11" ], "mid": [ "2151901719", "2025375132", "2953249628" ], "abstract": [ "This paper addresses the problem of resilient consensus in the presence of misbehaving nodes. Although it is typical to assume knowledge of at least some nonlocal information when studying secure and fault-tolerant consensus algorithms, this assumption is not suitable for large-scale dynamic networks. To remedy this, we emphasize the use of local strategies to deal with resilience to security breaches. We study a consensus protocol that uses only local information and we consider worst-case security breaches, where the compromised nodes have full knowledge of the network and the intentions of the other nodes. We provide necessary and sufficient conditions for the normal nodes to reach consensus despite the influence of the malicious nodes under different threat assumptions. These conditions are stated in terms of a novel graph-theoretic property referred to as network robustness.", "This paper proves a necessary and sufficient condition for the existence of iterative, algorithms that achieve approximate Byzantine consensus in arbitrary directed graphs, where each directed edge represents a communication channel between a pair of nodes. The class of iterative algorithms considered in this paper ensures that, after each iteration of the algorithm, the state of each fault-free node remains in the convex hull of the states of the fault-free nodes at the end of the previous iteration. The following convergence requirement is imposed: for any e > 0, after a sufficiently large number of iterations, the states of the fault-free nodes are guaranteed to be within e of each other. To the best of our knowledge, tight necessary and sufficient conditions for the existence of such iterative consensus algorithms in synchronous arbitrary point-to-point networks in presence of Byzantine faults, have not been developed previously. 
The methodology and results presented in this paper can also be extended to asynchronous systems.", "We consider the problem of diffusing information in networks that contain malicious nodes. We assume that each normal node in the network has no knowledge of the network topology other than an upper bound on the number of malicious nodes in its neighborhood. We introduce a topological property known as r-robustness of a graph, and show that this property provides improved bounds on tolerating malicious behavior, in comparison to traditional concepts such as connectivity and minimum degree. We use this topological property to analyze the canonical problems of distributed consensus and broadcasting, and provide sufficient conditions for these operations to succeed. Finally, we provide a construction for r-robust graphs and show that the common preferential-attachment model for scale-free networks produces a robust graph." ] }
1206.0089
2755347
We consider the problem of approximate consensus in mobile networks containing Byzantine nodes. We assume that each correct node can communicate only with its neighbors and has no knowledge of the global topology. As all nodes have moving ability, the topology is dynamic. The number of Byzantine nodes is bounded by f and known by all correct nodes. We first introduce an approximate Byzantine consensus protocol which is based on the linear iteration method. As nodes are allowed to collect information during several consecutive rounds, moving gives them the opportunity to gather more values. We propose a novel sufficient and necessary condition to guarantee the final convergence of the consensus protocol. The requirement expressed by our condition is not "universal": in each phase it affects only a single correct node. More precisely, at least one correct node among those that propose either the minimum or the maximum value which is present in the network, has to receive enough messages (quantity constraint) with either higher or lower values (quality constraint). Of course, nodes' motion should not prevent this requirement to be fulfilled. Our conclusion shows that the proposed condition can be satisfied if the total number of nodes is greater than 3f+1.
Convergence and gathering problems in environments with mobile robots are also similar to approximate consensus. Each robot decides its next move according to the readings of its sensors @cite_7 . However, these works do not consider any topology requirements: each robot can sense all the others.
{ "cite_N": [ "@cite_7" ], "mid": [ "1780011259" ], "abstract": [ "This paper investigates the task solvability of mobile robot systems subject to Byzantine faults. We first consider the gathering problem, which requires all robots to meet in finite time at a non-predefined location. It is known that the solvability of Byzantine gathering strongly depends on a number of system attributes, such as synchrony, the number of Byzantine robots, scheduling strategy, obliviousness, orientation of local coordinate systems and so on. However, the complete characterization of the attributes making Byzantine gathering solvable still remains open. In this paper, we show strong impossibility results of Byzantine gathering. Namely, we prove that Byzantine gathering is impossible even if we assume one Byzantine fault, an atomic execution system, the n-bounded centralized scheduler, non-oblivious robots, instantaneous movements and a common orientation of local coordinate systems (where n denote the number of correct robots). Those hypotheses are much weaker than used in previous work, inducing a much stronger impossibility result. At the core of our impossibility result is a reduction from the distributed consensus problem in asynchronous shared-memory systems. In more details, we newly construct a generic reduction scheme based on the distributed BG-simulation. Interestingly, because of its versatility, we can easily extend our impossibility result for general pattern formation problems." ] }
1206.0051
2952710120
Online aggregation provides estimates to the final result of a computation during the actual processing. The user can stop the computation as soon as the estimate is accurate enough, typically early in the execution. This allows for the interactive data exploration of the largest datasets. In this paper we introduce the first framework for parallel online aggregation in which the estimation virtually does not incur any overhead on top of the actual execution. We define a generic interface to express any estimation model that abstracts completely the execution details. We design a novel estimator specifically targeted at parallel online aggregation. When executed by the framework over a massive @math TPC-H instance, the estimator provides accurate confidence bounds early in the execution even when the cardinality of the final result is seven orders of magnitude smaller than the dataset size and without incurring overhead.
There is a plethora of work on online aggregation published in the database literature @cite_6 starting with the seminal paper @cite_26 . We can broadly categorize this body of work into system design @cite_19 @cite_10 @cite_20 @cite_4 , online join algorithms @cite_29 @cite_27 @cite_1 , online algorithms for estimations other than join @cite_11 @cite_22 @cite_31 , and methods to derive confidence bounds @cite_25 . All of this work targets single-node centralized environments. The parallel online aggregation literature is not as rich, though. We identified only a relatively small number of research papers closely related to our work and discuss them in detail in the following.
{ "cite_N": [ "@cite_26", "@cite_4", "@cite_22", "@cite_29", "@cite_1", "@cite_6", "@cite_19", "@cite_27", "@cite_31", "@cite_10", "@cite_25", "@cite_20", "@cite_11" ], "mid": [ "", "2165990006", "", "2020147322", "2120113238", "", "", "", "2156964798", "2035801804", "", "2016443758", "2162569193" ], "abstract": [ "", "DBO is a database system that utilizes randomized algorithms to give statistically meaningful estimates for the final answer to a multi-table, disk-based query from start to finish during query execution. However, DBO's \"time 'til utility\" (or \"TTU\"; that is, the time until DBO can give a useful estimate) can be overly large, particularly in the case that many database tables are joined in a query, or in the case that a join query includes a very selective predicate on one or more of the tables, or when the data are skewed. In this paper, we describe Turbo DBO, which is a prototype database system that can answer multi-table join queries in a scalable fashion, just like DBO. However, Turbo DBO often has a much lower TTU than DBO. The key innovation of Turbo DBO is that it makes use of novel algorithms that look for and remember \"partial match\" tuples in a randomized fashion. These are tuples that satisfy some of the boolean predicates associated with the query, and can possibly be grown into tuples that actually contribute to the final query result at a later time.", "", "We present a new family of join algorithms, called ripple joins, for online processing of multi-table aggregation queries in a relational database management system (DBMS). Such queries arise naturally in interactive exploratory decision-support applications. Traditional offline join algorithms are designed to minimize the time to completion of the query. In contrast, ripple joins are designed to minimize the time until an acceptably precise estimate of the query result is available, as measured by the length of a confidence interval. 
Ripple joins are adaptive, adjusting their behavior during processing in accordance with the statistical properties of the data. Ripple joins also permit the user to dynamically trade off the two key performance factors of on-line aggregation: the time between successive updates of the running aggregate, and the amount by which the confidence-interval length decreases at each update. We show how ripple joins can be implemented in an existing DBMS using iterators, and we give an overview of the methods used to compute confidence intervals and to adaptively optimize the ripple join “aspect-ratio” parameters. In experiments with an initial implementation of our algorithms in the POSTGRES DBMS, the time required to produce reasonably precise online estimates was up to two orders of magnitude smaller than the time required for the best offline join algorithms to produce exact answers.", "Online aggregation is a promising solution to achieving fast early responses for interactive ad-hoc queries that compute aggregates on a large amount of data. Essential to the success of online aggregation is a good non-blocking join algorithm that enables both (i) high early result rates with statistical guarantees and (ii) fast end-to-end query times. We analyze existing non-blocking join algorithms and find that they all provide sub-optimal early result rates, and those with fast end-to-end times achieve them only by further sacrificing their early result rates. We propose a new non-blocking join algorithm, Partitioned expanding Ripple Join (PR-Join), which achieves considerably higher early result rates than previous non-blocking joins, while also delivering fast end-to-end query times. PR-Join performs separate, ripple-like join operations on individual hash partitions, where the width of a ripple expands multiplicatively over time. This contrasts with the non-partitioned, fixed-width ripples of Block Ripple Join. 
Assuming, as in previous non-blocking join studies, that the input relations are in random order, PR-Join ensures representative early results that are amenable to statistical guarantees. We show both analytically and with real-machine experiments that PR-Join achieves over an order of magnitude higher early result rates than previous non-blocking joins. We also discuss the benefits of using a flash-based SSD for temporary storage, showing that PR-Join can then achieve close to optimal end-to-end performance. Finally, we consider the joining of finite data streams that arrive over time, and find that PR-Join achieves similar or higher result rates than RPJ, the state-of-the-art algorithm specialized for that domain.", "", "", "", "For a large number of data management problems, it would be very useful to be able to obtain a few samples from a data set, and to use the samples to guess the largest (or smallest) value in the entire data set. Min max online aggregation, top-k query processing, outlier detection, and distance join are just a few possible applications. This paper details a statistically rigorous, Bayesian approach to attacking this problem. Just as importantly, we demonstrate the utility of our approach by showing how it can be applied to two specific problems that arise in the context of data management.", "This paper describes query processing in the DBO database system. Like other database systems designed for ad-hoc, analytic processing, DBO is able to compute the exact answer to queries over a large relational database in a scalable fashion. Unlike any other system designed for analytic processing, DBO can constantly maintain a guess as to the final answer to an aggregate query throughout execution, along with statistically meaningful bounds for the guess's accuracy. As DBO gathers more and more information, the guess gets more and more accurate, until it is 100 accurate as the query is completed. 
This allows users to stop the execution at any time that they are happy with the query accuracy, and encourages exploratory data analysis.", "", "We demonstrate our prototype of the DBO database system. DBO is designed to facilitate scalable analytic processing over large data archives. DBO's analytic processing performance is competitive with other database systems; however, unlike any other existing research or industrial system, DBO maintains a statistically meaningful guess to the final answer to a query from start to finish during query processing. This guess may be quite accurate after only a few seconds or minutes, while answering a query exactly may take hours. This can result in significant savings in both user and computer time, since a user can abort a query as soon as he or she is happy with the guess' accuracy.", "The largest databases in use today are so large that answering a query exactly can take minutes, hours, or even days. One way to address this problem is to make use of approximation algorithms. Previous work on online aggregation has considered how to give online estimates with ever-increasing accuracy for aggregate functions over relational join and selection queries. However, no existing work is applicable to online estimation over subset-based SQL queries-those queries with a correlated subquery linked to an outer query via a NOT EXISTS, NOT IN, EXISTS, or IN clause (other queries such as EXCEPT and INTERSECT can also be seen as subset-based queries). In this paper we develop algorithms for online estimation over such queries, and consider the difficult problem of providing probabilistic accuracy guarantees at all times during query execution." ] }
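To make the sampling-based estimation concrete, here is a minimal sketch of an online SUM estimator in the spirit of the systems above: the table is scanned in random order and, after every row, the current estimate and a CLT-based confidence-interval half-width are produced. The function name and the fixed z value are illustrative assumptions, not any cited system's API.

```python
import math

def online_sum(stream, total_rows, z=1.96):
    """Scan rows (assumed to arrive in random order) and yield, after each
    row, (running SUM estimate, confidence-interval half-width).
    Estimate = total_rows * sample mean; half-width from the CLT."""
    n = s = sq = 0
    for x in stream:
        n += 1
        s += x
        sq += x * x
        mean = s / n
        var = max(sq / n - mean * mean, 0.0)   # variance of the sample
        half = z * total_rows * math.sqrt(var / n)
        yield total_rows * mean, half
```

The user can stop the scan as soon as the half-width is acceptable; if the scan runs to completion, the estimate coincides with the exact sum.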
1206.0051
2952710120
Online aggregation provides estimates to the final result of a computation during the actual processing. The user can stop the computation as soon as the estimate is accurate enough, typically early in the execution. This allows for the interactive data exploration of the largest datasets. In this paper we introduce the first framework for parallel online aggregation in which the estimation virtually does not incur any overhead on top of the actual execution. We define a generic interface to express any estimation model that abstracts completely the execution details. We design a novel estimator specifically targeted at parallel online aggregation. When executed by the framework over a massive @math TPC-H instance, the estimator provides accurate confidence bounds early in the execution even when the cardinality of the final result is seven orders of magnitude smaller than the dataset size and without incurring overhead.
The work in @cite_23 extends the centralized ripple join algorithms @cite_29 to a parallel setting. The proposed parallel hash ripple join algorithm is a non-blocking version of the parallel hybrid hash join algorithm that allows estimates of the final query result to be computed. A stratified sampling estimator @cite_16 is defined to compute the result estimate, although confidence bounds cannot always be derived. We implement a similar stratified sampling estimator in PF-OLA and compare it with the estimator we propose. Our focus is on analyzing the properties of the two estimators along a larger set of dimensions, including robustness, which is not discussed at all in @cite_23 . Moreover, the prototype system implementing the parallel hash ripple join algorithm is very particular to the proposed estimator; no common framework for general parallel online aggregation is proposed.
{ "cite_N": [ "@cite_29", "@cite_16", "@cite_23" ], "mid": [ "2020147322", "", "2132808937" ], "abstract": [ "We present a new family of join algorithms, called ripple joins, for online processing of multi-table aggregation queries in a relational database management system (DBMS). Such queries arise naturally in interactive exploratory decision-support applications. Traditional offline join algorithms are designed to minimize the time to completion of the query. In contrast, ripple joins are designed to minimize the time until an acceptably precise estimate of the query result is available, as measured by the length of a confidence interval. Ripple joins are adaptive, adjusting their behavior during processing in accordance with the statistical properties of the data. Ripple joins also permit the user to dynamically trade off the two key performance factors of on-line aggregation: the time between successive updates of the running aggregate, and the amount by which the confidence-interval length decreases at each update. We show how ripple joins can be implemented in an existing DBMS using iterators, and we give an overview of the methods used to compute confidence intervals and to adaptively optimize the ripple join “aspect-ratio” parameters. In experiments with an initial implementation of our algorithms in the POSTGRES DBMS, the time required to produce reasonably precise online estimates was up to two orders of magnitude smaller than the time required for the best offline join algorithms to produce exact answers.", "", "Recently, Haas and Hellerstein proposed the hash ripple join algorithm in the context of online aggregation. Although the algorithm rapidly gives a good estimate for many join-aggregate problem instances, the convergence can be slow if the number of tuples that satisfy the join predicate is small or if there are many groups in the output. 
Furthermore, if memory overflows (for example, because the user allows the algorithm to run to completion for an exact answer), the algorithm degenerates to block ripple join and performance suffers. In this paper, we build on the work of Haas and Hellerstein and propose a new algorithm that (a) combines parallelism with sampling to speed convergence, and (b) maintains good performance in the presence of memory overflow. Results from a prototype implementation in a parallel DBMS show that its rate of convergence scales with the number of processors, and that when allowed to run to completion, even in the presence of memory overflow, it is competitive with the traditional parallel hybrid hash join algorithm." ] }
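The stratified sampling estimator discussed above can be sketched in a few lines. This is an illustrative reconstruction, not PF-OLA's or @cite_23's actual estimator: the function name and the 95% normal-approximation interval are our assumptions. Each worker's partition is treated as a stratum; per-stratum sample means are scaled to stratum totals and combined.

```python
import math

def stratified_sum_estimate(strata):
    """Estimate the total SUM over all strata from per-stratum samples.

    `strata` is a list of (N_h, sample_h) pairs, where N_h is the stratum
    (worker partition) size and sample_h the values sampled so far.
    Returns (estimate, half_width) of a ~95% normal confidence interval.
    """
    estimate, variance = 0.0, 0.0
    for n_total, sample in strata:
        n = len(sample)
        mean = sum(sample) / n
        estimate += n_total * mean
        if n > 1:
            # Unbiased sample variance, scaled to the variance of the
            # stratum-total estimate N_h * mean.
            s2 = sum((x - mean) ** 2 for x in sample) / (n - 1)
            variance += n_total ** 2 * s2 / n
    return estimate, 1.96 * math.sqrt(variance)
```

As more tuples are sampled per stratum, the half-width shrinks and the running estimate converges to the exact aggregate, which is the defining property of online aggregation.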
1206.0051
2952710120
Online aggregation provides estimates to the final result of a computation during the actual processing. The user can stop the computation as soon as the estimate is accurate enough, typically early in the execution. This allows for the interactive data exploration of the largest datasets. In this paper we introduce the first framework for parallel online aggregation in which the estimation virtually does not incur any overhead on top of the actual execution. We define a generic interface to express any estimation model that abstracts completely the execution details. We design a novel estimator specifically targeted at parallel online aggregation. When executed by the framework over a massive @math TPC-H instance, the estimator provides accurate confidence bounds early in the execution even when the cardinality of the final result is seven orders of magnitude smaller than the dataset size and without incurring overhead.
The work in @cite_13 elevates HOP to a true online aggregation system by providing an estimation mechanism. The proposed solution is a Bayesian framework that handles the correlation between the time to process a data partition and the result it generates. This is required because chunks are treated as black boxes that only produce an aggregate. There is no information on what operation was performed to generate the aggregate or on the content of the chunk. Based on the aggregates produced by the processed chunks and the time it took to schedule and process each chunk, a prediction is made for the aggregates in the chunks not yet scheduled: the processed chunks are treated as an independent and identically distributed (iid) sample from the entire chunk population. The Bayesian model is continuously updated as more chunks are processed, which yields more accurate estimates as more data are processed. Although this estimation model is not based on sampling, it can still be expressed as a GLA using the extended UDA interface. We plan to do this in future work to further validate the expressiveness of the PF-OLA framework.
{ "cite_N": [ "@cite_13" ], "mid": [ "2164507334" ], "abstract": [ "In online aggregation, a database system processes a user’s aggregation query in an online fashion. At all times during processing, the system gives the user an estimate of the final query result, with the confidence bounds that become tighter over time. In this paper, we consider how online aggregation can be built into a MapReduce system for large-scale data processing. Given the MapReduce paradigm’s close relationship with cloud computing (in that one might expect a large fraction of MapReduce jobs to be run in the cloud), online aggregation is a very attractive technology. Since large-scale cloud computations are typically pay-as-you-go, a user can monitor the accuracy obtained in an online fashion, and then save money by killing the computation early once sufficient accuracy has been obtained." ] }
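The chunk-based prediction idea above can be illustrated with a simplified frequentist stand-in for the Bayesian model (the function name and the finite-population correction are our assumptions for illustration): processed chunk aggregates are treated as an iid sample and extrapolated to the unprocessed remainder.

```python
import math

def predict_final_sum(chunk_aggs, total_chunks):
    """Predict the final SUM from aggregates of already-processed chunks,
    treating them as an iid sample of the whole chunk population.
    Returns (prediction, standard_error)."""
    k = len(chunk_aggs)
    mean = sum(chunk_aggs) / k
    prediction = total_chunks * mean
    if k > 1:
        s2 = sum((a - mean) ** 2 for a in chunk_aggs) / (k - 1)
        # Finite-population correction: once every chunk is processed
        # (k == total_chunks), the prediction is exact and se is 0.
        se = (total_chunks * math.sqrt(s2 / k)
              * math.sqrt((total_chunks - k) / total_chunks))
    else:
        se = float("inf")
    return prediction, se
```

The Bayesian model in @cite_13 additionally conditions on scheduling and processing times; this sketch captures only the extrapolation step.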
1206.0051
2952710120
Online aggregation provides estimates to the final result of a computation during the actual processing. The user can stop the computation as soon as the estimate is accurate enough, typically early in the execution. This allows for the interactive data exploration of the largest datasets. In this paper we introduce the first framework for parallel online aggregation in which the estimation virtually does not incur any overhead on top of the actual execution. We define a generic interface to express any estimation model that abstracts completely the execution details. We design a novel estimator specifically targeted at parallel online aggregation. When executed by the framework over a massive @math TPC-H instance, the estimator provides accurate confidence bounds early in the execution even when the cardinality of the final result is seven orders of magnitude smaller than the dataset size and without incurring overhead.
Different estimation algorithms are proposed in each reference we mention; no common framework for estimation exists. PF-OLA provides a common framework that can model a much larger class of estimation models. In terms of performance, all of these estimation methods incur considerable overhead. The only exception is PR-Join @cite_1 , which combines a non-blocking join algorithm with temporary storage on solid-state drives (SSDs) to produce result tuples much faster, thus increasing the convergence rate. It is, however, a centralized algorithm.
{ "cite_N": [ "@cite_1" ], "mid": [ "2120113238" ], "abstract": [ "Online aggregation is a promising solution to achieving fast early responses for interactive ad-hoc queries that compute aggregates on a large amount of data. Essential to the success of online aggregation is a good non-blocking join algorithm that enables both (i) high early result rates with statistical guarantees and (ii) fast end-to-end query times. We analyze existing non-blocking join algorithms and find that they all provide sub-optimal early result rates, and those with fast end-to-end times achieve them only by further sacrificing their early result rates. We propose a new non-blocking join algorithm, Partitioned expanding Ripple Join (PR-Join), which achieves considerably higher early result rates than previous non-blocking joins, while also delivering fast end-to-end query times. PR-Join performs separate, ripple-like join operations on individual hash partitions, where the width of a ripple expands multiplicatively over time. This contrasts with the non-partitioned, fixed-width ripples of Block Ripple Join. Assuming, as in previous non-blocking join studies, that the input relations are in random order, PR-Join ensures representative early results that are amenable to statistical guarantees. We show both analytically and with real-machine experiments that PR-Join achieves over an order of magnitude higher early result rates than previous non-blocking joins. We also discuss the benefits of using a flash-based SSD for temporary storage, showing that PR-Join can then achieve close to optimal end-to-end performance. Finally, we consider the joining of finite data streams that arrive over time, and find that PR-Join achieves similar or higher result rates than RPJ, the state-of-the-art algorithm specialized for that domain." ] }
1206.0469
2952820798
Group-buying ads seeking a minimum number of customers before the deal expiry are increasingly used by the daily-deal providers. Unlike the traditional web ads, the advertiser's profits for group-buying ads depends on the time to expiry and additional customers needed to satisfy the minimum group size. Since both these quantities are time-dependent, optimal bid amounts to maximize profits change with every impression. Consequently, traditional static bidding strategies are far from optimal. Instead, bid values need to be optimized in real-time to maximize expected bidder profits. This online optimization of deal profits is made possible by the advent of ad exchanges offering real-time (spot) bidding. To this end, we propose a real-time bidding strategy for group-buying deals based on the online optimization of bid values. We derive the expected bidder profit of deals as a function of the bid amounts, and dynamically vary bids to maximize profits. Further, to satisfy time constraints of the online bidding, we present methods of minimizing computation timings. Subsequently, we derive the real time ad selection, admissibility, and real time bidding of the traditional ads as the special cases of the proposed method. We evaluate the proposed bidding, selection and admission strategies on a multi-million click stream of 935 ads. The proposed real-time bidding, selection and admissibility show significant profit increases over the existing strategies. Further the experiments illustrate the robustness of the bidding and acceptable computation timings.
Grabchak @cite_4 addressed the problem of optimally selecting guaranteed (group-buying) ads. Our work is different, since we deal with optimal bidding, whereas Grabchak does not consider bidding and addresses offline selection of deals. Further, even the non-bidding selection sub-problem discussed in this paper is different, since we consider a minimum number of conversions, as in deals, whereas Grabchak considers an exact number of required conversions.
{ "cite_N": [ "@cite_4" ], "mid": [ "2125847008" ], "abstract": [ "Stochastic knapsack problems deal with selecting items with potentially random sizes and rewards so as to maximize the total reward while satisfying certain capacity constraints. A novel variant of this problem, where items are worthless unless collected in bundles, is introduced here. This setup is similar to the Groupon model, where a deal is off unless a minimum number of users sign up for it. Since the optimal algorithm to solve this problem is not practical, several adaptive greedy approaches with reasonable time and memory requirements are studied in detail - theoretically, as well as, experimentally. Worst case performance guarantees are provided for some of these greedy algorithms, while results of experimental evaluation demonstrate that they are much closer to optimal than what the theoretical bounds suggest. Applications include optimizing for online advertising pricing models where advertisers pay only when certain goals, in terms of clicks or conversions, are met. We perform extensive experiments for the situation where there are between two and five ads. For typical ad conversion rates, the greedy policy of selecting items having the highest individual expected reward obtains a value within 5 of optimal over 95 of the time for a wide selection of parameters." ] }
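The greedy policy evaluated in the stochastic-knapsack work above can be sketched as follows. This is our illustrative reading of the setup (independent conversions, an impression budget, ranking by expected reward per impression); all function names are hypothetical.

```python
from math import comb

def bundle_expected_reward(reward, impressions, conv_prob, min_group):
    """Expected reward of a deal that pays `reward` only if at least
    `min_group` of `impressions` convert, each independently with
    probability `conv_prob` (binomial tail)."""
    p_meet = sum(
        comb(impressions, k) * conv_prob**k * (1 - conv_prob)**(impressions - k)
        for k in range(min_group, impressions + 1)
    )
    return reward * p_meet

def greedy_select(deals, budget):
    """Greedily pick deals with the highest expected reward per impression
    until the impression budget runs out. Each deal is a dict with keys
    reward, impressions, conv_prob, min_group."""
    scored = sorted(
        deals,
        key=lambda d: bundle_expected_reward(
            d["reward"], d["impressions"], d["conv_prob"], d["min_group"]
        ) / d["impressions"],
        reverse=True,
    )
    chosen, used = [], 0
    for d in scored:
        if used + d["impressions"] <= budget:
            chosen.append(d)
            used += d["impressions"]
    return chosen
```

The all-or-nothing binomial tail is what distinguishes the bundled (Groupon-style) variant from the classical stochastic knapsack, where partial collections still earn reward.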
1206.0469
2952820798
Group-buying ads seeking a minimum number of customers before the deal expiry are increasingly used by the daily-deal providers. Unlike the traditional web ads, the advertiser's profits for group-buying ads depends on the time to expiry and additional customers needed to satisfy the minimum group size. Since both these quantities are time-dependent, optimal bid amounts to maximize profits change with every impression. Consequently, traditional static bidding strategies are far from optimal. Instead, bid values need to be optimized in real-time to maximize expected bidder profits. This online optimization of deal profits is made possible by the advent of ad exchanges offering real-time (spot) bidding. To this end, we propose a real-time bidding strategy for group-buying deals based on the online optimization of bid values. We derive the expected bidder profit of deals as a function of the bid amounts, and dynamically vary bids to maximize profits. Further, to satisfy time constraints of the online bidding, we present methods of minimizing computation timings. Subsequently, we derive the real time ad selection, admissibility, and real time bidding of the traditional ads as the special cases of the proposed method. We evaluate the proposed bidding, selection and admission strategies on a multi-million click stream of 935 ads. The proposed real-time bidding, selection and admissibility show significant profit increases over the existing strategies. Further the experiments illustrate the robustness of the bidding and acceptable computation timings.
Different models of group-buying auctions and bidding mechanisms have been studied @cite_3 @cite_6 . But our problem of bidding to sell deals online, made popular mostly after the emergence of daily-deal sites, has not been studied for any of the group-buying auction models.
{ "cite_N": [ "@cite_6", "@cite_3" ], "mid": [ "2170889938", "2163070038" ], "abstract": [ "The group-buying auction is a new kind of dynamic pricing mechanism on the Internet. It is a variant of the sellers' price double auction, which makes the bidders as a group through Internet to get the volume discounts, i.e., the more bidders bid, the lower the price of the object being auctioned becomes. In this paper, we analyze the group-buying auction under some assumptions, such as that the independent private values (IPVs) model applies and bidders are risk neutral and symmetric, etc., and build an incomplete information dynamic game model to illustrate the bidders' bidding process. It proves that for the bidders there exists a weakly dominant strategy S, i.e., no matter when a bidder arrives at the auction and what the bidding history is, the highest permitted bid price that is no greater than his value to the object is always his optimal bid price but may not be the unique one.", "Web-based group-buying mechanisms are being widely used for both business-to-business (B2B) and business-to-consumer (B2C) transactions. We survey currently operational online group-buying markets, and then study this phenomenon using analytical models. We build on the literatures in information economics and operations management in our analytical model of a monopolist offering Web-based group-buying under different kinds of demand uncertainty. We derive the monopolist's optimal group-buying schedule under varying conditions of heterogeneity in the demand regimes, and compare its profits with those that obtain under the more conventional posted-price mechanism. We further study the impact ofproduction postponement by endogenizing the timing of the pricing and production decisions in a two-stage game between the monopolist and buyers. 
Our results have implications for firms' choice of price-discovery mechanisms in e-markets, and for the scheduling of production and pricing decisions in the presence (and absence) of scale economies of production." ] }
1206.0469
2952820798
Group-buying ads seeking a minimum number of customers before the deal expiry are increasingly used by the daily-deal providers. Unlike the traditional web ads, the advertiser's profits for group-buying ads depends on the time to expiry and additional customers needed to satisfy the minimum group size. Since both these quantities are time-dependent, optimal bid amounts to maximize profits change with every impression. Consequently, traditional static bidding strategies are far from optimal. Instead, bid values need to be optimized in real-time to maximize expected bidder profits. This online optimization of deal profits is made possible by the advent of ad exchanges offering real-time (spot) bidding. To this end, we propose a real-time bidding strategy for group-buying deals based on the online optimization of bid values. We derive the expected bidder profit of deals as a function of the bid amounts, and dynamically vary bids to maximize profits. Further, to satisfy time constraints of the online bidding, we present methods of minimizing computation timings. Subsequently, we derive the real time ad selection, admissibility, and real time bidding of the traditional ads as the special cases of the proposed method. We evaluate the proposed bidding, selection and admission strategies on a multi-million click stream of 935 ads. The proposed real-time bidding, selection and admissibility show significant profit increases over the existing strategies. Further the experiments illustrate the robustness of the bidding and acceptable computation timings.
Turning to related problems of display-ad allocation and bidding, Ghosh @cite_14 considered allocating guaranteed display impressions matching a quality distribution representative of the market. Vee @cite_8 analyzed the problem of optimal online matching with access to random future samples. Boutilier @cite_2 introduced an auction mechanism for real-time bidding of display ads.
{ "cite_N": [ "@cite_14", "@cite_2", "@cite_8" ], "mid": [ "2188273680", "1823541154", "2112741516" ], "abstract": [ "Display advertising has traditionally been sold via guaranteed contracts --- a guaranteed contract is a deal between a publisher and an advertiser to allocate a certain number of impressions over a certain period, for a pre-specified price per impression. However, as spot markets for display ads, such as the RightMedia Exchange, have grown in prominence, the selection of advertisements to show on a given page is increasingly being chosen based on price, using an auction. As the number of participants in the exchange grows, the price of an impressions becomes a signal of its value. This correlation between price and value means that a seller implementing the contract through bidding should offer the contract buyer a range of prices, and not just the cheapest impressions necessary to fulfill its demand. Implementing a contract using a range of prices, is akin to creating a mutual fund of advertising impressions, and requires randomized bidding. We characterize what allocations can be implemented with randomized bidding, namely those where the desired share obtained at each price is a non-increasing function of price. In addition, we provide a full characterization of when a set of campaigns are compatible and how to implement them with randomized bidding strategies.", "We present the design of a banner advertising auction which is considerably more expressive than current designs. We describe a general model of expressive ad contract bidding and an allocation model that can be executed in real time through the assignment of fractions of relevant ad channels to specific advertiser contracts. The uncertainty in channel supply and demand is addresscd by the formulation of a stochastic combinatorial optimization problem for channel allocation that is rerun periodically. 
We solve this in two different ways: fast deterministic optimization with respect to expectations; and a novel online sample-based stochastic optimization method-- that can be applied to continuous decision spaces--which exploits the deterministic optimization as a black box. Experiments demonstrate the importance of expressive bidding and the value of stochastic optimization.", "Motivated by the allocation problem facing publishers in display advertising we formulate the online assignment with forecast problem, a version of the online allocation problem where the algorithm has access to random samples from the future set of arriving vertices. We provide a solution that allows us to serve Internet users in an online manner that is provably nearly optimal. Our technique applies to the forecast version of a large class of online assignment problems, such as online bipartite matching, allocation, and budgeted bidders, in which we wish to minimize the value of some convex objective function subject to a set of linear supply and demand constraints. Our solution utilizes a particular subspace of the dual space, allowing us to describe the optimal primal solution implicitly in space proportional to the demand side of the input graph. More importantly, it allows us to prove that representing the primal solution using such a compact allocation plan yields a robust online algorithm which makes near-optimal online decisions. Furthermore, unlike the primal solution, we show that the compact allocation plan produced by considering only a sampled version of the original problem generalizes to produce a near optimal solution on the full problem instance." ] }
1206.0469
2952820798
Group-buying ads seeking a minimum number of customers before the deal expiry are increasingly used by the daily-deal providers. Unlike the traditional web ads, the advertiser's profits for group-buying ads depends on the time to expiry and additional customers needed to satisfy the minimum group size. Since both these quantities are time-dependent, optimal bid amounts to maximize profits change with every impression. Consequently, traditional static bidding strategies are far from optimal. Instead, bid values need to be optimized in real-time to maximize expected bidder profits. This online optimization of deal profits is made possible by the advent of ad exchanges offering real-time (spot) bidding. To this end, we propose a real-time bidding strategy for group-buying deals based on the online optimization of bid values. We derive the expected bidder profit of deals as a function of the bid amounts, and dynamically vary bids to maximize profits. Further, to satisfy time constraints of the online bidding, we present methods of minimizing computation timings. Subsequently, we derive the real time ad selection, admissibility, and real time bidding of the traditional ads as the special cases of the proposed method. We evaluate the proposed bidding, selection and admission strategies on a multi-million click stream of 935 ads. The proposed real-time bidding, selection and admissibility show significant profit increases over the existing strategies. Further the experiments illustrate the robustness of the bidding and acceptable computation timings.
With the rise of ad exchanges offering real-time bidding, a few papers address related problems. Chen @cite_1 formulated the supply-side allocation of traditional ads with upper bounds on budgets as an online constrained-optimization matching problem. Chakraborty @cite_13 considered the problem of an ad exchange calling out a subset of ad networks for real-time bidding without exceeding the capacity of individual networks. To the best of our knowledge, the optimal bidding problem for group-buying deals and its extensions have not been addressed.
{ "cite_N": [ "@cite_13", "@cite_1" ], "mid": [ "1823461189", "2033798573" ], "abstract": [ "Ads on the Internet are increasingly sold via ad exchanges such as RightMedia, AdECN and Doubleclick Ad Exchange. These exchanges allow real-time bidding, that is, each time the publisher contacts the exchange, the exchange \"calls out\" to solicit bids from ad networks. This solicitation introduces a novel aspect, in contrast to existing literature. This suggests developing a joint optimization framework which optimizes over the allocation and well as solicitation. We model this selective call out as an online recurrent Bayesian decision framework with bandwidth type constraints. We obtain natural algorithms with bounded performance guarantees for several natural optimization criteria. We show that these results hold under different call out constraint models, and different arrival processes. Interestingly, the paper shows that under MHR assumptions, the expected revenue of generalized second price auction with reserve is constant factor of the expected welfare. Also the analysis herein allow us prove adaptivity gap type results for the adwords problem.", "We describe a real-time bidding algorithm for performance-based display ad allocation. A central issue in performance display advertising is matching campaigns to ad impressions, which can be formulated as a constrained optimization problem that maximizes revenue subject to constraints such as budget limits and inventory availability. The current practice is to solve the optimization problem offline at a tractable level of impression granularity (e.g., the page level), and to serve ads online based on the precomputed static delivery scheme. Although this offline approach takes a global view to achieve optimality, it fails to scale to ad allocation at the individual impression level. 
Therefore, we propose a real-time bidding algorithm that enables fine-grained impression valuation (e.g., targeting users with real-time conversion data), and adjusts value-based bids according to real-time constraint snapshots (e.g., budget consumption levels). Theoretically, we show that under a linear programming (LP) primal-dual formulation, the simple real-time bidding algorithm is indeed an online solver to the original primal problem by taking the optimal solution to the dual problem as input. In other words, the online algorithm guarantees the offline optimality given the same level of knowledge an offline optimization would have. Empirically, we develop and experiment with two real-time bid adjustment approaches to adapting to the non-stationary nature of the marketplace: one adjusts bids against real-time constraint satisfaction levels using control-theoretic methods, and the other adjusts bids also based on the statistically modeled historical bidding landscape. Finally, we show experimental results with real-world ad delivery data that support our theoretical conclusions." ] }
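The per-impression bid optimization at the heart of the real-time bidding work surveyed above can be caricatured as a one-line grid search. The win-probability model, the candidate-bid grid, and the function name are assumptions for illustration, not any cited paper's method.

```python
def optimal_bid(value_per_conv, conv_prob, win_prob, candidate_bids):
    """Pick the bid maximizing expected profit per impression:
    win_prob(b) * (conv_prob * value_per_conv - b).

    For group-buying deals, value_per_conv would itself be time-dependent:
    it shrinks as expiry nears and jumps once the minimum group size is
    within reach, which is why bids must be re-optimized per impression.
    """
    return max(candidate_bids,
               key=lambda b: win_prob(b) * (conv_prob * value_per_conv - b))
```

With a concave, increasing win-probability curve the objective is single-peaked in the bid, so a coarse grid (or a closed-form first-order condition) suffices within real-time latency budgets.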
1205.6594
2953267953
Fault based testing is a technique in which test cases are chosen to reveal certain classes of faults. At present, testing professionals use their personal experience to select testing methods for fault classes considered the most likely to be present. However, there is little empirical evidence available in the open literature to support these intuitions. By examining the source code changes when faults were fixed in seven open source software artifacts, we have classified bug fix patterns into fault classes, and recorded the relative frequencies of the identified fault classes. This paper reports our findings related to "if-conditional" fixes. We have classified the "if-conditional" fixes into fourteen fault classes and calculated their frequencies. We found the most common fault class related to changes within a single "atom". The next most common fault was the omission of an "atom". We analysed these results in the context of Boolean specification testing.
Historical information and the relative frequencies of fault types can help practitioners select suitable testing methods. From the testing researcher's point of view, such historical information and relative frequencies may allow general recommendations of particular unit testing techniques, or serve as inspiration to devise newer and more effective testing methods. A number of researchers ( @cite_23 @cite_3 ) have devised techniques that use historical information to identify the most fault-prone files or modules of a software project. These techniques help reduce testing effort by predicting the most fault-prone files; however, they do not provide much information about how to test them. An attempt was made by Hayes in @cite_6 , where a number of fault classification studies are used to analyse the merits of various testing techniques in object-oriented software. But no relative frequencies were considered, and the classification was largely based on the author's personal testing experience.
{ "cite_N": [ "@cite_3", "@cite_6", "@cite_23" ], "mid": [ "", "170165378", "2151553346" ], "abstract": [ "", "The goal of this paper is to examine the testing of object-oriented systems and to compare and contrast it with the testing of conventional programming language systems, with emphasis on fault-based testing. Conventional system testing, object-oriented system testing, and the application of conventional testing methods to object-oriented software will be examined, followed by a look at the differences between testing of conventional (procedural) software and the testing of object-oriented software. An examination of software faults (defects) will follow, with emphasis on developing a preliminary taxonomy of faults specific to object-oriented systems. Test strategy adequacy will be briefly presented. As a result of these examinations, a set of candidate testing methods for object-oriented programming systems will be identified.", "We analyze the version history of 7 software systems to predict the most fault prone entities and files. The basic assumption is that faults do not occur in isolation, but rather in bursts of several related faults. Therefore, we cache locations that are likely to have faults: starting from the location of a known (fixed) fault, we cache the location itself, any locations changed together with the fault, recently added locations, and recently changed locations. By consulting the cache at the moment a fault is fixed, a developer can detect likely fault-prone locations. This is useful for prioritizing verification and validation resources on the most fault prone files or entities. In our evaluation of seven open source projects with more than 200,000 revisions, the cache selects 10 of the source code files; these files account for 73 -95 of faults - a significant advance beyond the state of the art." ] }
1205.6594
2953267953
Fault based testing is a technique in which test cases are chosen to reveal certain classes of faults. At present, testing professionals use their personal experience to select testing methods for fault classes considered the most likely to be present. However, there is little empirical evidence available in the open literature to support these intuitions. By examining the source code changes when faults were fixed in seven open source software artifacts, we have classified bug fix patterns into fault classes, and recorded the relative frequencies of the identified fault classes. This paper reports our findings related to "if-conditional" fixes. We have classified the "if-conditional" fixes into fourteen fault classes and calculated their frequencies. We found the most common fault class related to changes within a single "atom". The next most common fault was the omission of an "atom". We analysed these results in the context of Boolean specification testing.
Static checking tools, such as FindBugs @cite_16 , automatically detect some bug patterns. These patterns indicate bugs that arise from mistakes with code idioms or misuse of language features, and such bug fix patterns are detectable by static checking tools. As testing researchers, we are primarily interested in bug fix patterns that demand testing rather than static checking to detect them; static checkers should be used to find and remove the bugs they can identify before testing even begins!
{ "cite_N": [ "@cite_16" ], "mid": [ "1986453394" ], "abstract": [ "Many techniques have been developed over the years to automatically find bugs in software. Often, these techniques rely on formal methods and sophisticated program analysis. While these techniques are valuable, they can be difficult to apply, and they aren't always effective in finding real bugs. Bug patterns are code idioms that are often errors. We have implemented automatic detectors for a variety of bug patterns found in Java programs. In this extended abstract1, we describe how we have used bug pattern detectors to find serious bugs in several widely used Java applications and libraries. We have found that the effort required to implement a bug pattern detector tends to be low, and that even extremely simple detectors find bugs in real applications. From our experience applying bug pattern detectors to real programs, we have drawn several interesting conclusions. First, we have found that even well tested code written by experts contains a surprising number of obvious bugs. Second, Java (and similar languages) have many language features and APIs which are prone to misuse. Finally, that simple automatic techniques can be effective at countering the impact of both ordinary mistakes and misunderstood language features." ] }
1205.6919
2099281947
Information about the strength of gas sources in buildings has a number of applications in the area of building automation and control, including temperature and ventilation control, fire detection, and security systems. In this paper, we consider the problem of estimating the strength of a gas source in an enclosure when some of the parameters of the gas transport process are unknown. Traditionally, these problems are either solved by the maximum-likelihood method, which is accurate but computationally intensive, or by recursive least squares (also Kalman) filtering, which is simpler but less accurate. In this paper, we suggest a different statistical estimation procedure based on the concept of method of moments. We outline techniques that make this procedure computationally efficient and amenable for recursive implementation. We provide a comparative analysis of our proposed method based on experimental results, as well as Monte Carlo simulations. When used with the building control systems, these algorithms can estimate the gaseous strength in a room both quickly and accurately and can potentially provide improved indoor air quality in an efficient manner.
The Kalman-filter-based approach detailed in @cite_11 simplifies to a Recursive Least Squares (RLS) algorithm (e.g., @cite_2 ) under the assumption of constant parameters. Based on the differential model for the signal in , the RLS model is predicated on the difference equation where @math represents the additive noise (observation error) and @math represents the measurements of the room concentration. The equation above represents a set of @math equations for the @math measurements for @math . Denoting @math as the equations can be represented as @math , where @math and @math is the matrix containing the signal terms. From the theory of Generalized Least Squares (e.g., @cite_2 ), we know that the LS estimate and the expected variance are given by where @math . We are interested in the variance of @math , which is given by The matrices @math and @math need to be described further. Note that the time correlation of @math results in this @math tridiagonal matrix. The inverse of this tridiagonal matrix can be written in closed form. Bearing in mind that @math is symmetric, here we write only the upper diagonal terms.
{ "cite_N": [ "@cite_2", "@cite_11" ], "mid": [ "1965392255", "2114849633" ], "abstract": [ "Minimum variance unbiased estimation Cramer-Rao lower bound linear models general minimum variance unbiased estimation best linear unbiased estimators maximum likelihood estimation least squares method of moments the Bayesian philosophy general Bayesian estimators linear Bayesian estimators Kalman filters summary of estimators extension for complex data and parameters.", "Information about the strength of gas sources in buildings has a number of applications in the area of building automation and control, including temperature and ventilation control, fire detection, and security systems. In this paper, a method for estimating the strength of a gas source in an enclosure when some of the parameters of the gas transport process are unknown is described. It is based on a perfect-mixing model of the gas species transport dynamics. The estimation problem is formulated as a Kalman filtering problem, where the states estimated by the Kalman filter are the unknown process parameters and the source strength. Sudden changes in the strength of the source are detected and tracked with a hypothesis testing and covariance resetting algorithm that is based on statistics provided by the Kalman filter. Experimental results from two first-order systems demonstrate the efficacy of this method." ] }
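The generalized least-squares estimate and its expected variance quoted in the passage above (theta = (H^T C^-1 H)^-1 H^T C^-1 z, with covariance (H^T C^-1 H)^-1) can be sketched numerically. This is a minimal illustration on synthetic data; the function name `gls_estimate` and the toy dimensions are hypothetical, not taken from @cite_2 or @cite_11 .

```python
import numpy as np

def gls_estimate(H, z, C):
    """Generalized least squares: theta = (H^T C^-1 H)^-1 H^T C^-1 z,
    with expected estimator covariance P = (H^T C^-1 H)^-1."""
    Ci = np.linalg.inv(C)
    P = np.linalg.inv(H.T @ Ci @ H)
    return P @ H.T @ Ci @ z, P

# Toy check: noiseless linear measurements should be recovered exactly.
rng = np.random.default_rng(0)
H = rng.standard_normal((20, 2))
theta_true = np.array([1.5, -0.5])
z = H @ theta_true              # no noise, so take C = identity
theta_hat, P = gls_estimate(H, z, np.eye(20))
```

With a tridiagonal covariance C, as in the AR(1)-correlated noise case discussed above, the same two estimator lines apply unchanged; only the matrix passed in differs.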
1205.6903
2040019831
We seek to characterize the estimation performance of a sensor network where the individual sensors exhibit the phenomenon of drift, i.e., a gradual change of the bias. Though estimation in the presence of random errors has been extensively studied in the literature, the loss of estimation performance due to systematic errors like drift have rarely been looked into. In this paper, we derive closed-form Fisher Information Matrix and subsequently Cramer-Rao bounds (up to reasonable approximation) for the estimation accuracy of drift-corrupted signals. We assume a polynomial time-series as the representative signal and an autoregressive process model for the drift. When the Markov parameter for drift ρ <; 1, we show that the first-order effect of drift is asymptotically equivalent to scaling the measurement noise by an appropriate factor. For ρ = 1, i.e., when the drift is nonstationary, we show that the constant part of a signal can only be estimated inconsistently (non-zero asymptotic variance). Practical usage of the results are demonstrated through the analysis of 1) networks with multiple sensors and 2) bandwidth limited networks communicating only quantized observations.
A related area of work is the study of systematic-bias or model-error estimation schemes using multiple, and sometimes collaborative sensors. In the radar signal processing literature, the process of model-based estimation and subsequent removal of systematic errors prior to target tracking is known as sensor-registration @cite_14 , @cite_9 . In the weather research literature, the serially-correlated forecasting error arising due to modeling deficiency is often considered separately and tracked alongside the model parameters @cite_32 , @cite_23 . In the sensor network literature, drift-aware networks perform learning-based collaborative bias estimation to enhance the effective lifetime of the network @cite_15 , @cite_12 . However, in this paper, we are focused on the quality of estimation in the presence of systematic errors, rather than techniques on mitigating systematic errors.
{ "cite_N": [ "@cite_14", "@cite_9", "@cite_32", "@cite_23", "@cite_15", "@cite_12" ], "mid": [ "", "1980529480", "2022034990", "1973685826", "1556034606", "2131007986" ], "abstract": [ "", "Data fusion is a process dealing with the association, correlation, and combination of data and information from multiple sources to achieve refined position and identity estimates. We consider the registration problem, which is a prerequisite process of a data fusion system to accurately estimate and correct systematic errors. An exact maximum likelihood (EML) algorithm for registration is presented. The algorithm is implemented using a recursive two-step optimization that involves a modified Gauss-Newton procedure to ensure fast convergence. Statistical performance of the algorithm is also investigated, including its consistency and efficiency discussions. In particular, the explicit formulas for both the asymptotic covariance and the Cramer-Rao bound (CRB) are derived. Finally, simulated and real-life multiple radar data are used to evaluate the performance of the proposed algorithm.", "The authors describe the application of the unbiased sequential analysis algorithm developed by Dee and da Silva to the Goddard Earth Observing System moisture analysis. The algorithm estimates the slowly varying, systematic component of model error from rawinsonde observations and adjusts the first-guess moisture field accordingly. Results of two seasonal data assimilation cycles show that moisture analysis bias is almost completely eliminated in all observed regions. The improved analyses cause a sizable reduction in the 6-h forecast bias and a marginal improvement in the error standard deviations.", "Abstract A methodology for model error estimation is proposed and examined in this study. It provides estimates of the dynamical model state, the bias, and the empirical parameters by combining three approaches: 1) ensemble data assimilation, 2) state augmentation, and 3) parameter and model bias estimation. Uncertainties of these estimates are also determined, in terms of the analysis and forecast error covariances, employing the same methodology. The model error estimation approach is evaluated in application to Korteweg–de Vries–Burgers (KdVB) numerical model within the framework of maximum likelihood ensemble filter (MLEF). Experimental results indicate improved filter performance due to model error estimation. The innovation statistics also indicate that the estimated uncertainties are reliable. On the other hand, neglecting model errors—either in the form of an incorrect model parameter, or a model bias—has detrimental effects on data assimilation, in some cases resulting in filter divergence. Altho...", "Wireless sensor networks are deployed for the purpose of sensing and monitoring an area of interest. Sensors in the sensor network can suffer from both random and systematic bias problems. Even when the sensors are properly calibrated at the time of their deployment, they develop drift in their readings leading to erroneous inferences being made by the network. The drift in this context is defined as a slow, unidirectional, long-term change in the sensor measurements. In this paper we present a novel algorithm for detecting and correcting sensors drifts by utilising the spatio-temporal correlation between neigbouring sensors. Based on the assumption that neighbouring sensors have correlated measurements and that the instantiation of drift in a sensor is uncorrelated with other sensors, each sensor runs a support vector regression algorithm on its neighbours' corrected readings to obtain a predicted value for its measurements. It then uses this predicted data to self-assess its measurement and detect and correct its drift using a Kalman filter. The algorithm is run recursively and is totally decentralized. We demonstrate using real data obtained from the Intel Berkeley Laboratory that our algorithm successfully suppresses drifts developed in sensors and thereby prolongs the effective lifetime of the network.", "Wireless sensor networks are deployed for the purpose of monitoring an area of interest. Even when the sensors are properly calibrated at the time of deployment, they develop drift in their readings leading to erroneous network inferences. Based on the assumption that neighbouring sensors have correlated measurements and that the instantiations of drifts in sensors are uncorrelated, the authors present a novel algorithm for detecting and correcting sensor measurement errors. The authors use statistical modelling rather than physical relations to model the spatio-temporal cross-correlations among sensors. This in principle makes the framework presented applicable to most sensing problems. Each sensor in the network trains a support vector regression algorithm on its neighbours' corrected readings to obtain a predicted value for its future measurements. This phase is referred to here as the training phase. In the running phase, the predicted measurements are used by each node, in a recursive decentralised fashion, to self-assess its measurement and to detect and correct its drift and random error using an unscented Kalman filter. No assumptions regarding the linearity of drift or the density (closeness) of sensor deployment are made. The authors also demonstrate using real data obtained from the Intel Berkeley Research Laboratory that the proposed algorithm successfully suppresses drifts developed in sensors and thereby prolongs the effective lifetime of the network." ] }
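The drift-aware schemes surveyed above track a slowly varying bias alongside the quantity of interest. A minimal single-sensor sketch, assuming a constant signal plus AR(1) drift and a plain state-augmented Kalman filter (all parameter values here are hypothetical, not from the cited works):

```python
import numpy as np

# State [s, d]: constant signal s, AR(1) drift d_k = rho * d_{k-1} + w_k.
# Measurement y_k = s + d_k + v_k; a linear Kalman filter on the augmented
# state can separate the two because the drift mean-reverts while s does not.
rho, q, r = 0.9, 0.01, 0.04
F = np.array([[1.0, 0.0], [0.0, rho]])
H = np.array([[1.0, 1.0]])
Q = np.diag([0.0, q])

rng = np.random.default_rng(1)
s_true, d = 2.0, 0.0
x, P = np.zeros(2), np.eye(2)
for _ in range(500):
    d = rho * d + rng.normal(scale=np.sqrt(q))        # true drift evolves
    y = s_true + d + rng.normal(scale=np.sqrt(r))     # noisy reading
    x, P = F @ x, F @ P @ F.T + Q                     # predict
    S = (H @ P @ H.T + r).item()                      # innovation variance
    K = (P @ H.T / S).ravel()                         # Kalman gain, shape (2,)
    x = x + K * (y - H @ x).item()
    P = (np.eye(2) - np.outer(K, H.ravel())) @ P      # update
```

After a few hundred steps x[0] settles near the true signal while x[1] tracks the drift; with rho = 1 the drift becomes nonstationary and, as the abstract above notes, the constant part can only be estimated inconsistently.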
1205.6903
2040019831
We seek to characterize the estimation performance of a sensor network where the individual sensors exhibit the phenomenon of drift, i.e., a gradual change of the bias. Though estimation in the presence of random errors has been extensively studied in the literature, the loss of estimation performance due to systematic errors like drift have rarely been looked into. In this paper, we derive closed-form Fisher Information Matrix and subsequently Cramer-Rao bounds (up to reasonable approximation) for the estimation accuracy of drift-corrupted signals. We assume a polynomial time-series as the representative signal and an autoregressive process model for the drift. When the Markov parameter for drift ρ <; 1, we show that the first-order effect of drift is asymptotically equivalent to scaling the measurement noise by an appropriate factor. For ρ = 1, i.e., when the drift is nonstationary, we show that the constant part of a signal can only be estimated inconsistently (non-zero asymptotic variance). Practical usage of the results are demonstrated through the analysis of 1) networks with multiple sensors and 2) bandwidth limited networks communicating only quantized observations.
Several researchers have studied the Cramér-Rao bounds for polynomial (or polyphase) signal estimation in the presence of independent (or correlated) noise. The CRB is usually obtained from the inverse of the Fisher Information Matrix (FIM) @cite_33 . For Additive White Gaussian Noise (AWGN), the large sample approximation of the FIM is known to be a multiple of the Hilbert matrix (e.g., @cite_27 ). The second-order approximation was derived in @cite_19 in the context of polynomial phase signals. For a mixture of additive and multiplicative white Gaussian noise, the large sample FIM was shown in @cite_2 to be a scalar multiple of the AWGN case. We also discuss the non-stationary case when the autoregressive parameter is equal to @math . To the knowledge of the authors, polynomial signal estimation in such a mixture of noise has not been considered earlier. As mentioned earlier, the study of estimating polynomial signals in AR(1)+white noise would help us characterize the capability of a sensor network to infer the parameters of a time-varying signal using sensors with drift.
{ "cite_N": [ "@cite_19", "@cite_27", "@cite_33", "@cite_2" ], "mid": [ "2170021087", "2151282433", "1965392255", "2090705694" ], "abstract": [ "The authors derive the Cramer-Rao lower bound (CRLB) for complex signals with constant amplitude and polynomial phase, measured in additive Gaussian white noise. The exact bound requires numerical inversion of an ill-conditioned matrix, while its O(N sup -1 ) approximation is free of matrix inversion. The approximation is tested for several typical parameter values and is found to be excellent in most cases. The formulas derived are of practical value in several radar applications, such as electronic intelligence systems (ELINT) for special pulse-compression radars, and motion estimation from Doppler measurements. Consequently, it is of interest to analyze the best possible performance of potential estimators of the phase coefficients, as a function of signal parameters, the signal-to-noise ratio, the sampling rate, and the number of measurements. This analysis is carried out.", "A time-varying channel estimation method for orthogonal frequency division multiplexing (OFDM) systems is presented. The presented channel estimation method employs two training symbols in combination with polynomial fitting, thus obtaining accurate estimation results under a large normalized Doppler frequency.", "Minimum variance unbiased estimation Cramer-Rao lower bound linear models general minimum variance unbiased estimation best linear unbiased estimators maximum likelihood estimation least squares method of moments the Bayesian philosophy general Bayesian estimators linear Bayesian estimators Kalman filters summary of estimators extension for complex data and parameters.", "Abstract We derive expressions for the Cramer-Rao Bound (CRB) for the estimates of the parameters of a deterministic signal observed in additive and multiplicative noise which may be i.i.d. non-Gaussian or colored Gaussian. General expressions for the CRB are derived, and are then applied to the specific case of polynomial phase signals for which closed-form expressions are obtained under certain assumptions. We also develop bounds on the CRB; these bounds are tight at low SNR." ] }
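The Hilbert-matrix approximation of the FIM mentioned above is easy to verify numerically: for a polynomial signal sum_i a_i t^i sampled at t_n = n/N in AWGN with variance sigma^2, the FIM entries (1/sigma^2) sum_n t_n^(i+j) approach (N/sigma^2) * 1/(i+j+1). A quick check (the sample size and polynomial order below are arbitrary choices):

```python
import numpy as np

N, order, sigma2 = 10_000, 3, 1.0
t = np.arange(1, N + 1) / N
V = np.vander(t, order, increasing=True)   # columns t^0, t^1, t^2
fim = V.T @ V / sigma2                     # FIM for the coefficients a_i

i, j = np.indices((order, order))
hilbert = 1.0 / (i + j + 1)                # Hilbert matrix H_ij = 1/(i+j+1)
err = np.abs(fim / N - hilbert).max()      # O(1/N) discrepancy
```

Inverting the Hilbert matrix then gives the large-sample CRB for each coefficient up to the factor sigma^2 / N.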
1205.5734
1991241376
We estimate convergence rates for curves generated by Loewner's differential equation under the basic assumption that a convergence rate for the driving terms is known. An important tool is what we call the tip structure modulus, a geometric measure of regularity for Loewner curves parameterized by capacity. It is analogous to Warschawski's boundary structure modulus and closely related to annuli crossings. The main application we have in mind is that of a random discrete-model curve approaching a Schramm-Loewner evolution (SLE) curve in the lattice size scaling limit. We carry out the approach in the case of loop-erased random walk (LERW) in a simply connected domain. Under mild assumptions of boundary regularity, we obtain an explicit power-law rate for the convergence of the LERW path toward the radial SLE2 path in the supremum norm, the curves being parameterized by capacity. On the deterministic side, we show that the tip structure modulus gives a sufficient geometric condition for a Loewner curve to be Holder continuous in the capacity parameterization, assuming its driving term is Holder continuous. We also briefly discuss the case when the curves are a priori known to be Holder continuous in the capacity parameterization and we obtain a power-law convergence rate depending only on the regularity of the curves.
To implement these ideas in a particular setting we need to show that the assumptions we used are satisfied uniformly in @math , with high probability in terms of @math . If a convergence rate for the driving terms (or martingale observable in rough domains) is known, then we believe it is possible to derive the remaining required information from existing results in the literature on discrete models without too much effort; in this paper we derive the needed SLE derivative estimates from estimates in @cite_12 . Indeed, as already mentioned, the event that the geometric condition fails implies annuli crossing events that are reasonably well understood for the models known to converge to SLE.
{ "cite_N": [ "@cite_12" ], "mid": [ "2161511058" ], "abstract": [ "The purpose of this note is to describe a framework which unifies radial, chordal and dipolar SLE. When the definition of SLE(κ; ρ) is extended to the setting where the force points can be in the interior of the domain, radial SLE(κ) becomes chordal SLE(κ; ρ), with ρ = κ − 6, and vice versa. We also write down the martingales describing the Radon-Nykodim derivative of SLE(κ; ρ1 ,...,ρ n) with respect to SLE(κ)." ] }
1205.5734
1991241376
We estimate convergence rates for curves generated by Loewner's differential equation under the basic assumption that a convergence rate for the driving terms is known. An important tool is what we call the tip structure modulus, a geometric measure of regularity for Loewner curves parameterized by capacity. It is analogous to Warschawski's boundary structure modulus and closely related to annuli crossings. The main application we have in mind is that of a random discrete-model curve approaching a Schramm-Loewner evolution (SLE) curve in the lattice size scaling limit. We carry out the approach in the case of loop-erased random walk (LERW) in a simply connected domain. Under mild assumptions of boundary regularity, we obtain an explicit power-law rate for the convergence of the LERW path toward the radial SLE2 path in the supremum norm, the curves being parameterized by capacity. On the deterministic side, we show that the tip structure modulus gives a sufficient geometric condition for a Loewner curve to be Holder continuous in the capacity parameterization, assuming its driving term is Holder continuous. We also briefly discuss the case when the curves are a priori known to be Holder continuous in the capacity parameterization and we obtain a power-law convergence rate depending only on the regularity of the curves.
In we define the tip structure modulus and prove the estimates implying . Then in Theorem we show that if a Loewner curve @math has the property that there is @math such that @math and the driving term is Hölder continuous, then @math is also Hölder continuous in the capacity parameterization with exponent depending only on @math and the exponent for the driving term. This is an analog of the fact that John domains are Hölder domains @cite_1 for Loewner curves parameterized by capacity.
{ "cite_N": [ "@cite_1" ], "mid": [ "2064685239" ], "abstract": [ "1. Some Basic Facts.- 2. Continuity and Prime Ends.- 3. Smoothness and Corners.- 4. Distortion.- 5. Quasidisks.- 6. Linear Measure.- 7. Smirnov and Lavrentiev Domains.- 8. Integral Means.- 9. Curve Families and Capacity.- 10. Hausdorff Measure.- 11. Local Boundary Behaviour.- References.- Author Index." ] }
1205.5734
1991241376
We estimate convergence rates for curves generated by Loewner's differential equation under the basic assumption that a convergence rate for the driving terms is known. An important tool is what we call the tip structure modulus, a geometric measure of regularity for Loewner curves parameterized by capacity. It is analogous to Warschawski's boundary structure modulus and closely related to annuli crossings. The main application we have in mind is that of a random discrete-model curve approaching a Schramm-Loewner evolution (SLE) curve in the lattice size scaling limit. We carry out the approach in the case of loop-erased random walk (LERW) in a simply connected domain. Under mild assumptions of boundary regularity, we obtain an explicit power-law rate for the convergence of the LERW path toward the radial SLE2 path in the supremum norm, the curves being parameterized by capacity. On the deterministic side, we show that the tip structure modulus gives a sufficient geometric condition for a Loewner curve to be Holder continuous in the capacity parameterization, assuming its driving term is Holder continuous. We also briefly discuss the case when the curves are a priori known to be Holder continuous in the capacity parameterization and we obtain a power-law convergence rate depending only on the regularity of the curves.
In Appendix we derive an estimate on the probability (in terms of @math ) that a bound of the type holds for SLE from a corresponding estimate for chordal SLE from @cite_12 . This is where the stopping time @math is needed; it is related to the "disconnection time" when the radial and chordal SLE @math processes become singular with respect to each other.
{ "cite_N": [ "@cite_12" ], "mid": [ "2161511058" ], "abstract": [ "The purpose of this note is to describe a framework which unifies radial, chordal and dipolar SLE. When the definition of SLE(κ; ρ) is extended to the setting where the force points can be in the interior of the domain, radial SLE(κ) becomes chordal SLE(κ; ρ), with ρ = κ − 6, and vice versa. We also write down the martingales describing the Radon-Nykodim derivative of SLE(κ; ρ1 ,...,ρ n) with respect to SLE(κ)." ] }
1205.5783
2951555440
Dynamically Adaptive Systems modify their behav- ior and structure in response to changes in their surrounding environment and according to an adaptation logic. Critical sys- tems increasingly incorporate dynamic adaptation capabilities; examples include disaster relief and space exploration systems. In this paper, we focus on mutation testing of the adaptation logic. We propose a fault model for adaptation logics that classifies faults into environmental completeness and adaptation correct- ness. Since there are several adaptation logic languages relying on the same underlying concepts, the fault model is expressed independently from specific adaptation languages. Taking benefit from model-driven engineering technology, we express these common concepts in a metamodel and define the operational semantics of mutation operators at this level. Mutation is applied on model elements and model transformations are used to propagate these changes to a given adaptation policy in the chosen formalism. Preliminary results on an adaptive web server highlight the difficulty of killing mutants for adaptive systems, and thus the difficulty of generating efficient tests.
@cite_2 study the testing of pervasive context-aware software. They propose a family of test adequacy criteria that measure the quality of test sets with respect to the context variability.
{ "cite_N": [ "@cite_2" ], "mid": [ "2136054002" ], "abstract": [ "Pervasive computing software adapts its behavior according to the changing contexts. Nevertheless, contexts are often noisy. Context inconsistency resolution provides a cleaner pervasive computing environment to context-aware applications. A faulty context-aware application may, however, mistakenly mix up inconsistent contexts and resolved ones, causing incorrect results. This paper studies how such faulty context-aware applications may be affected by these services. We model how programs should handle contexts that are continually checked and resolved by context inconsistency resolution, develop novel sets of data flow equations to analyze the potential impacts, and thus formulate a new family of test adequacy criteria for testing these applications. Experimentation shows that our approach is promising." ] }
1205.5088
1914059017
We present Kinodynamic RRT*, an incremental sampling-based approach for asymptotically optimal motion planning for robots with linear differential constraints. Our approach extends RRT*, which was introduced for holonomic robots ( 2011), by using a fixed-final-state-free-final-time controller that exactly and optimally connects any pair of states, where the cost function is expressed as a trade-off between the duration of a trajectory and the expended control effort. Our approach generalizes earlier work on extending RRT* to kinodynamic systems, as it guarantees asymptotic optimality for any system with controllable linear dynamics, in state spaces of any dimension. Our approach can be applied to non-linear dynamics as well by using their first-order Taylor approximations. In addition, we show that for the rich subclass of systems with a nilpotent dynamics matrix, closed-form solutions for optimal trajectories can be derived, which keeps the computational overhead of our algorithm compared to traditional RRT* at a minimum. We demonstrate the potential of our approach by computing asymptotically optimal trajectories in three challenging motion planning scenarios: (i) a planar robot with a 4-D state space and double integrator dynamics, (ii) an aerial vehicle with a 10-D state space and linearized quadrotor dynamics, and (iii) a car-like robot with a 5-D state space and non-linear dynamics.
The term kinodynamic planning was first introduced in 1993 in @cite_7 , which presented a resolution-complete algorithm for optimal planning of robots with discretized double integrator dynamics in low-dimensional workspaces. Kinodynamic planning has since been an active area of research. Incremental sampling-based algorithms, in particular the rapidly-exploring random tree (RRT) approach @cite_6 , have proved effective in high-dimensional state spaces and are applicable to general dynamical systems: they build a random tree of trajectories, and complex dynamics can be forward-integrated to expand the tree. Unfortunately, RRT does not produce optimal trajectories; in fact, the probability that it finds an optimal path is zero @cite_18 . Recently, RRT* was introduced to overcome this problem and guarantees asymptotic optimality @cite_5 ; it iteratively builds a tree of trajectories through the state space whose probability of containing an optimal solution approaches 1 as the number of iterations of the algorithm approaches infinity. However, RRT* requires, for the rewiring step that is critical to achieving asymptotic optimality, that any pair of states can be connected. Therefore, RRT* was introduced for holonomic systems, where any pair of states can be optimally connected by a straight-line trajectory through the state space.
{ "cite_N": [ "@cite_5", "@cite_18", "@cite_6", "@cite_7" ], "mid": [ "1777783943", "1971086298", "2000359213", "2048820947" ], "abstract": [ "During the last decade, incremental sampling-based motion planning algorithms, such as the Rapidly-exploring Random Trees (RRTs), have been shown to work well in practice and to possess theoretical guarantees such as probabilistic completeness. However, no theoretical bounds on the quality of the solution obtained by these algorithms, e.g., in terms of a given cost function, have been established so far. The purpose of this paper is to fill this gap, by designing efficient incremental sampling-based algorithms with provable optimality properties. The first contribution of this paper is a negative result: it is proven that, under mild technical conditions, the cost of the best path returned by RRT converges almost surely to a non-optimal value, as the number of samples increases. Second, a new algorithm is considered, called the Rapidly-exploring Random Graph (RRG), and it is shown that the cost of the best path returned by RRG converges to the optimum almost surely. Third, a tree version of RRG is introduced, called RRT∗, which preserves the asymptotic optimality of RRG while maintaining a tree structure like RRT. The analysis of the new algorithms hinges on novel connections between sampling-based motion planning algorithms and the theory of random geometric graphs. In terms of computational complexity, it is shown that the number of simple operations required by both the RRG and RRT∗ algorithms is asymptotically within a constant factor of that required by RRT.", "During the last decade, sampling-based path planning algorithms, such as probabilistic roadmaps (PRM) and rapidly exploring random trees (RRT), have been shown to work well in practice and possess theoretical guarantees such as probabilistic completeness. However, little effort has been devoted to the formal analysis of the quality of the solution returned by such algorithms, e.g. as a function of the number of samples. The purpose of this paper is to fill this gap, by rigorously analyzing the asymptotic behavior of the cost of the solution returned by stochastic sampling-based algorithms as the number of samples increases. A number of negative results are provided, characterizing existing algorithms, e.g. showing that, under mild technical conditions, the cost of the solution returned by broadly used sampling-based algorithms converges almost surely to a non-optimal value. The main contribution of the paper is the introduction of new algorithms, namely, PRM* and RRT*, which are provably asymptotically optimal, i.e. such that the cost of the returned solution converges almost surely to the optimum. Moreover, it is shown that the computational complexity of the new algorithms is within a constant factor of that of their probabilistically complete (but not asymptotically optimal) counterparts. The analysis in this paper hinges on novel connections between stochastic sampling-based path planning algorithms and the theory of random geometric graphs.", "This paper presents the first randomized approach to kinodynamic planning (also known as trajectory planning or trajectory design). The task is to determine control inputs to drive a robot from an initial configuration and velocity to a goal configuration and velocity while obeying physically based dynamical models and avoiding obstacles in the robot’s environment. The authors consider generic systems that express the nonlinear dynamics of a robot in terms of the robot’s high-dimensional configuration space. Kinodynamic planning is treated as a motion-planning problem in a higher dimensional state space that has both first-order differential constraints and obstacle-based global constraints. The state space serves the same role as the configuration space for basic path planning; however, standard randomized path-planning techniques do not directly apply to planning trajectories in the state space. The authors have developed a randomized planning approach that is particularly tailored to trajectory plannin...", "Kinodynamic planning attempts to solve a robot motion problem subject to simultaneous kinematic and dynamics constraints. In the general problem, given a robot system, we must find a minimal-time trajectory that goes from a start position and velocity to a goal position and velocity while avoiding obstacles by a safety margin and respecting constraints on velocity and acceleration. We consider the simplified case of a point mass under Newtonian mechanics, together with velocity and acceleration bounds. The point must be flown from a start to a goal, amidst polyhedral obstacles in 2D or 3D. Although exact solutions to this problem are not known, we provide the first provably good approximation algorithm, and show that it runs in polynomial time" ] }
1205.5088
1914059017
We present Kinodynamic RRT*, an incremental sampling-based approach for asymptotically optimal motion planning for robots with linear differential constraints. Our approach extends RRT*, which was introduced for holonomic robots ( 2011), by using a fixed-final-state-free-final-time controller that exactly and optimally connects any pair of states, where the cost function is expressed as a trade-off between the duration of a trajectory and the expended control effort. Our approach generalizes earlier work on extending RRT* to kinodynamic systems, as it guarantees asymptotic optimality for any system with controllable linear dynamics, in state spaces of any dimension. Our approach can be applied to non-linear dynamics as well by using their first-order Taylor approximations. In addition, we show that for the rich subclass of systems with a nilpotent dynamics matrix, closed-form solutions for optimal trajectories can be derived, which keeps the computational overhead of our algorithm compared to traditional RRT* at a minimum. We demonstrate the potential of our approach by computing asymptotically optimal trajectories in three challenging motion planning scenarios: (i) a planar robot with a 4-D state space and double integrator dynamics, (ii) an aerial vehicle with a 10-D state space and linearized quadrotor dynamics, and (iii) a car-like robot with a 5-D state space and non-linear dynamics.
Our approach improves upon this prior work by connecting any pair of states exactly and optimally for systems with controllable linear dynamics, which guarantees that asymptotic optimality is in fact achieved. We accomplish this by extending the well-studied formulation for a fixed final state and fixed final time optimal control problem @cite_9 to derive an optimal, open-loop, fixed final state free final time control policy. A similar approach has been adopted by @cite_1 for extending RRTs in state space under a dynamic cost-to-go distance metric @cite_2 . In comparison to the latter work, we present a numerical solution that is guaranteed to find a solution for the general case, and an efficient closed-form solution for the special case of systems with a nilpotent dynamics matrix.
{ "cite_N": [ "@cite_9", "@cite_1", "@cite_2" ], "mid": [ "", "2296356821", "2031335381" ], "abstract": [ "", "Recent advances in the direct computation of Lyapunov functions using convex optimization make it possible to efficiently evaluate regions of stability for smooth nonlinear systems. Here we present a feedback motion planning algorithm which uses these results to efficiently combine locally-valid linear quadratic regulator (LQR) controllers into a nonlinear feedback policy which probabilistically covers the reachable area of a (bounded) state space with a region of stability, certifying that all initial conditions that are capable of reaching the goal will stabilize to the goal. We carefully investigate the algorithm on a two-dimensional model system and discuss the potential for the control of more complicated underactuated control problems like bipedal walking.", "Kinodynamic planning algorithms like Rapidly-Exploring Randomized Trees (RRTs) hold the promise of finding feasible trajectories for rich dynamical systems with complex, nonconvex constraints. In practice, these algorithms perform very well on configuration space planning, but struggle to grow efficiently in systems with dynamics or differential constraints. This is due in part to the fact that the conventional distance metric, Euclidean distance, does not take into account system dynamics and constraints when identifying which node in the existing tree is capable of producing children closest to a given point in state space. We show that an affine quadratic regulator (AQR) design can be used to approximate the exact minimum-time distance pseudometric at a reasonable computational cost. We demonstrate improved exploration of the state spaces of the double integrator and simple pendulum when using this pseudometric within the RRT framework, but this improvement drops off as systems' nonlinearity and complexity increase. 
Future work includes exploring methods for approximating the exact minimum-time distance pseudometric that can reason about dynamics with higher-order terms." ] }
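The fixed-final-state, free-final-time connection described in the record above can be made concrete for the simplest linear system. Below is a minimal sketch, assuming a 1-D double integrator with control-effort weight R = 1 and the trade-off cost c(T) = T + ∫u² dt; the closed-form Gramian entries and the function name `connection_cost` are illustrative assumptions, not taken from the paper's implementation.

```python
def connection_cost(x0, x1, T):
    """Cost of the optimal fixed-final-state, fixed-final-time
    connection for a 1-D double integrator (state = [position,
    velocity], dynamics A = [[0,1],[0,0]], B = [0,1]', R = 1),
    under the trade-off cost c(T) = T + integral of u^2 dt.
    The minimum control effort is d' G(T)^{-1} d, where G(T) is
    the weighted controllability Gramian and d is the gap between
    x1 and the free (zero-input) evolution of x0."""
    # Closed-form Gramian entries for this system:
    a, b, c = T**3 / 3.0, T**2 / 2.0, T
    det = a * c - b * b  # = T^4 / 12
    # Gap to the free evolution exp(A T) x0 = [x0[0] + T*x0[1], x0[1]]:
    d0 = x1[0] - (x0[0] + T * x0[1])
    d1 = x1[1] - x0[1]
    # d' G^{-1} d via the explicit 2x2 inverse:
    effort = (c * d0 * d0 - 2.0 * b * d0 * d1 + a * d1 * d1) / det
    return T + effort

# Rest-to-rest unit displacement in T = 1: minimum effort is
# 12 / T^3 = 12, so the total cost is T + 12 = 13.
print(connection_cost([0.0, 0.0], [1.0, 0.0], 1.0))  # ~13.0
```

A Kinodynamic RRT*-style planner would additionally minimize this expression over the free final time T when connecting a pair of tree nodes.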
1205.5012
1840190952
We consider the problem of learning the structure of a pairwise graphical model over continuous and discrete variables. We present a new pairwise model for graphical models with both continuous and discrete variables that is amenable to structure learning. In previous work, authors have considered structure learning of Gaussian graphical models and structure learning of discrete models. Our approach is a natural generalization of these two lines of work to the mixed case. The penalization scheme involves a novel symmetric use of the group-lasso norm and follows naturally from a particular parametrization of the model.
An alternative to the regularization approach that we take in this paper is the limited-order correlation hypothesis testing method @cite_8 . The authors develop a hypothesis test via likelihood ratios for conditional independence. However, they restrict to the case where the discrete variables are marginally independent, so that the maximum likelihood estimates are well-defined for @math .
{ "cite_N": [ "@cite_8" ], "mid": [ "1567947962" ], "abstract": [ "Structure learning of Gaussian graphical models is an extensively studied problem in the classical multivariate setting where the sample size n is larger than the number of random variables p, as well as in the more challenging setting when p>>n. However, analogous approaches for learning the structure of graphical models with mixed discrete and continuous variables when p>>n remain largely unexplored. Here we describe a statistical learning procedure for this problem based on limited-order correlations and assess its performance with synthetic and real data." ] }
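The symmetric group-lasso penalty in the record above selects or discards all parameters of an edge jointly. Its core mechanism is the group soft-thresholding (proximal) operator, sketched minimally below; the flat list-of-groups representation is an assumption for illustration, not the paper's parametrization.

```python
import math

def group_lasso_prox(groups, lam):
    """Proximal operator of lam * sum_g ||v_g||_2 (the group-lasso
    penalty): each group vector v is shrunk jointly to
    max(0, 1 - lam/||v||) * v, so entire weak groups are zeroed.
    This joint zeroing is what removes all parameters of an edge
    at once and hence performs edge selection."""
    out = []
    for v in groups:
        norm = math.sqrt(sum(x * x for x in v))
        scale = max(0.0, 1.0 - lam / norm) if norm > 0.0 else 0.0
        out.append([scale * x for x in v])
    return out

# The strong group survives (shrunk toward 0); the weak group is
# eliminated entirely:
print(group_lasso_prox([[3.0, 4.0], [0.1, 0.1]], lam=1.0))
```

In a proximal-gradient fit of the pairwise model, this operator would be applied to each edge's parameter block after every gradient step.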
1205.5164
2950843169
We consider the problem of constructing a communication infrastructure from scratch, for a collection of identical wireless nodes. Combinatorially, this means a) finding a set of links that form a strongly connected spanning graph on a set of @math points in the plane, and b) scheduling it efficiently in the SINR model of interference. The nodes must converge on a solution in a distributed manner, having no means of communication beyond the sole wireless channel. We give distributed connectivity algorithms that run in time @math , where @math is the ratio between the longest and shortest distances among nodes. Given that algorithms without prior knowledge of the instance are essentially limited to using uniform power, this is close to best possible. Our primary aim, however, is to find efficient structures, measured in the number of slots used in the final schedule of the links. Our main result is algorithms that match the efficiency of centralized solutions. Specifically, the networks can be scheduled in @math slots using (arbitrary) power control, and in @math slots using a simple oblivious power scheme. Additionally, the networks have the desirable properties that the latency of a converge-cast and of any node-to-node communication is optimal @math time.
Connectivity was the first problem studied from a worst-case perspective in the SINR model. In a seminal paper, Moscibroda and Wattenhofer @cite_7 formalized the problem and proposed an algorithm that connects an arbitrary set of @math points in @math slots. This was improved to @math @cite_14 , @math @cite_19 , and recently to @math @cite_4 . All these works deploy centralized algorithms. No non-trivial lower bound is known. Somewhat orthogonally, a large body of work exists on randomly deployed wireless networks, starting with the influential work by Gupta and Kumar @cite_8 . Work in this setting for connectivity includes @cite_21 , which studied the probability of there existing a path between two nodes in a randomly deployed network. In @cite_16 , minimum-energy connectivity structures are studied for randomly deployed networks, but interference is essentially ignored.
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_7", "@cite_8", "@cite_21", "@cite_19", "@cite_16" ], "mid": [ "2148868861", "2953216592", "2098480450", "2137775453", "2085356235", "2106388288", "2170469173" ], "abstract": [ "To date, topology control in wireless ad hoc and sensor networks--the study of how to compute from the given communication network a subgraph with certain beneficial properties--has been considered as a static problem only; the time required to actually schedule the links of a computed topology without message collision was generally ignored. In this paper we analyze topology control in the context of the physical Signal-to-Interference-plus-Noise-Ratio (SINR) model, focusing on the question of how and how fast the links of a resulting topology can actually be realized over time. For this purpose, we define and study a generalized version of the SINR model and obtain theoretical upper bounds on the scheduling complexity of arbitrary topologies in wireless networks. Specifically, we prove that even in worst-case networks, if the signals are transmitted with correctly assigned transmission power levels, the number of time slots required to successfully schedule all links of an arbitrary topology is proportional to the squared logarithm of the number of network nodes times a previously defined static interference measure. Interestingly, although originally considered without explicit accounting for signal collision in the SINR model, this static interference measure plays an important role in the analysis of link scheduling with physical link interference. Our result thus bridges the gap between static graph-based interference models and the physical SINR model. 
Based on these results, we also show that when it comes to scheduling, requiring the communication links to be symmetric may imply significantly higher costs as opposed to topologies allowing unidirectional links.", "Given @math wireless transceivers located in a plane, a fundamental problem in wireless communications is to construct a strongly connected digraph on them such that the constituent links can be scheduled in fewest possible time slots, assuming the SINR model of interference. In this paper, we provide an algorithm that connects an arbitrary point set in @math slots, improving on the previous best bound of @math due to Moscibroda. This is complemented with a super-constant lower bound on our approach to connectivity. An important feature is that the algorithms allow for bi-directional (half-duplex) communication. One implication of this result is an improved bound of @math on the worst-case capacity of wireless networks, matching the best bound known for the extensively studied average-case. We explore the utility of oblivious power assignments, and show that essentially all such assignments result in a worst case bound of @math slots for connectivity. This rules out a recent claim of a @math bound using oblivious power. On the other hand, using our result we show that @math slots suffice, where @math is the ratio between the largest and the smallest links in a minimum spanning tree of the points. Our results extend to the related problem of minimum latency aggregation scheduling, where we show that aggregation scheduling with @math latency is possible, improving upon the previous best known latency of @math . We also initiate the study of network design problems in the SINR model beyond strong connectivity, obtaining similar bounds for biconnected and @math -edge connected structures.", "We define and study the scheduling complexity in wireless networks, which expresses the theoretically achievable efficiency of MAC layer protocols. 
Given a set of communication requests in arbitrary networks, the scheduling complexity describes the amount of time required to successfully schedule all requests. The most basic and important network structure in wireless networks being connectivity, we study the scheduling complexity of connectivity, i.e., the minimal amount of time required until a connected structure can be scheduled. In this paper, we prove that the scheduling complexity of connectivity grows only polylogarithmically in the number of nodes. Specifically, we present a novel scheduling algorithm that successfully schedules a strongly connected set of links in time O(log⁴ n) even in arbitrary worst-case networks. On the other hand, we prove that standard MAC layer or scheduling protocols can perform much worse. Particularly, any protocol that either employs uniform or linear (a node’s transmit power is proportional to the minimum power required to reach its intended receiver) power assignment has an Ω(n) scheduling complexity in the worst case, even for simple communication requests. In contrast, our polylogarithmic scheduling algorithm allows many concurrent transmissions by using an explicitly formulated non-linear power assignment scheme. Our results show that even in large-scale worst-case networks, there is no theoretical scalability problem when it comes to scheduling transmission requests, thus giving an interesting complement to the more pessimistic bounds for the capacity in wireless networks. 
All results are based on the physical model of communication, which takes into account that the signal-tonoise plus interference ratio (SINR) at a receiver must be above a certain threshold if the transmission is to be received correctly.", "When n identical randomly located nodes, each capable of transmitting at W bits per second and using a fixed range, form a wireless network, the throughput spl lambda (n) obtainable by each node for a randomly chosen destination is spl Theta (W spl radic (nlogn)) bits per second under a noninterference protocol. If the nodes are optimally placed in a disk of unit area, traffic patterns are optimally assigned, and each transmission's range is optimally chosen, the bit-distance product that can be transported by the network per second is spl Theta (W spl radic An) bit-meters per second. Thus even under optimal circumstances, the throughput is only spl Theta (W spl radic n) bits per second for each node for a destination nonvanishingly far away. Similar results also hold under an alternate physical model where a required signal-to-interference ratio is specified for successful receptions. Fundamentally, it is the need for every node all over the domain to share whatever portion of the channel it is utilizing with nodes in its local neighborhood that is the reason for the constriction in capacity. Splitting the channel into several subchannels does not change any of the results. Some implications may be worth considering by designers. Since the throughput furnished to each user diminishes to zero as the number of users is increased, perhaps networks connecting smaller numbers of users, or featuring connections mostly with nearby neighbors, may be more likely to be find acceptance.", "This paper presents a framework for the calculation of stochastic connectivity properties of wireless multihop networks. 
Assuming that n nodes, each node with transmission range r0, are distributed according to some spatial probability density function, we study the level of connectivity of the resulting network topology from three viewpoints. First, we analyze the number of neighbors of a given node. Second, we study the probability that there is a communication path between two given nodes. Third, we investigate the probability that the entire network is connected, i.e. each node can communicate with every other node via a multihop path. For the last-mentioned issue, we compute a tight approximation for the critical (r0, n) pairs that are required to keep the network connected with a probability close to one. In fact, the problem is solved for the general case of a k-connected network, accounting for the robustness against node failures. These issues are studied for uniformly distributed nodes (with and without ‘border effects’), Gaussian distributed nodes, and nodes that move according to the commonly used random waypoint mobility model. The results are of practical value for the design and simulation of wireless sensor and mobile ad hoc networks.", "The key application scenario of wireless sensor networks is data gathering: sensor nodes transmit data, possibly in a multi-hop fashion, to an information sink. The performance of sensor networks is thus characterized by the rate at which information can be aggregated to the sink. In this paper, we derive the first scaling laws describing the achievable rate in worst-case, i.e. arbitrarily deployed, sensor networks. We show that in the physical model of wireless communication and for a large number of practically important functions, a sustainable rate of Θ(1/log² n) can be achieved in every network, even when nodes are positioned in a worst-case manner. 
In contrast, we show that the best possible rate in the protocol model is Θ(1/n), which establishes an exponential gap between these two standard models of wireless communication. Furthermore, our worst-case capacity result almost matches the rate of Θ(1/log n) that can be achieved in randomly deployed networks. The high rate is made possible by employing non-linear power assignment at nodes and by exploiting SINR-effects. Finally, our algorithm also improves the best known bounds on the scheduling complexity in wireless networks.", "We describe a distributed position-based network protocol optimized for minimum energy consumption in mobile wireless networks that support peer-to-peer communications. Given any number of randomly deployed nodes over an area, we illustrate that a simple local optimization scheme executed at each node guarantees strong connectivity of the entire network and attains the global minimum energy solution for stationary networks. Due to its localized nature, this protocol proves to be self-reconfiguring and stays close to the minimum energy solution when applied to mobile networks. Simulation results are used to verify the performance of the protocol." ] }
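The SINR feasibility condition that underlies the scheduling results in this record can be sketched in a few lines. The path-loss exponent `alpha`, threshold `beta`, and noise floor below are illustrative values, not taken from any of the cited papers.

```python
import math

def sinr_feasible(links, alpha=3.0, beta=1.0, noise=1e-9):
    """Check whether a set of links can transmit simultaneously in
    the SINR model. Each link is (sender_xy, receiver_xy, power);
    link i succeeds iff
        P_i * d(s_i, r_i)^-alpha
        >= beta * (noise + sum_{j != i} P_j * d(s_j, r_i)^-alpha)."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    for i, (si, ri, pi) in enumerate(links):
        signal = pi * dist(si, ri) ** -alpha
        interference = sum(pj * dist(sj, ri) ** -alpha
                           for j, (sj, rj, pj) in enumerate(links) if j != i)
        if signal < beta * (noise + interference):
            return False
    return True

# Two well-separated unit links coexist; two crossing unit links,
# whose interferers are as close as the intended senders, do not:
far = [((0, 0), (1, 0), 1.0), ((100, 0), (101, 0), 1.0)]
near = [((0, 0), (1, 0), 1.0), ((1, 1), (0, 1), 1.0)]
print(sinr_feasible(far), sinr_feasible(near))  # True False
```

A schedule in the sense of the record is then a partition of the link set into subsets that each pass this test.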
1205.5164
2950843169
We consider the problem of constructing a communication infrastructure from scratch, for a collection of identical wireless nodes. Combinatorially, this means a) finding a set of links that form a strongly connected spanning graph on a set of @math points in the plane, and b) scheduling it efficiently in the SINR model of interference. The nodes must converge on a solution in a distributed manner, having no means of communication beyond the sole wireless channel. We give distributed connectivity algorithms that run in time @math , where @math is the ratio between the longest and shortest distances among nodes. Given that algorithms without prior knowledge of the instance are essentially limited to using uniform power, this is close to best possible. Our primary aim, however, is to find efficient structures, measured in the number of slots used in the final schedule of the links. Our main result is algorithms that match the efficiency of centralized solutions. Specifically, the networks can be scheduled in @math slots using (arbitrary) power control, and in @math slots using a simple oblivious power scheme. Additionally, the networks have the desirable properties that the latency of a converge-cast and of any node-to-node communication is optimal @math time.
Distributed connectivity of wireless networks has also been the subject of research. In @cite_15 , connectivity in mobile networks was studied from a graph-theoretic perspective with no explicit interference model. Indeed, the connectivity maintenance problem has been well studied in control theory and robotics @cite_15 @cite_6 @cite_22 , but with different underlying assumptions, typically without the use of the SINR interference model. Sensor connectivity has also been studied @cite_27 without reference to any particular interference model. In @cite_17 , a heuristic was proposed for connectivity maintenance in multi-hop wireless networks. A more rigorous study was done in @cite_9 but with the assumption of an underlying MAC layer that resolves interference problems.
{ "cite_N": [ "@cite_22", "@cite_9", "@cite_6", "@cite_27", "@cite_15", "@cite_17" ], "mid": [ "2141327408", "2148570604", "2121221698", "2076865372", "2166188160", "2142928312" ], "abstract": [ "A distributed control law that guarantees connectivity maintenance in a network of multiple mobile agents is presented. The control law, which lets the agents perform formation manoeuvres, respects sensor limitations by allowing each agent to only take into account agents within its sensing radius. In contrast to previous approaches to the problem, the proposed control law does not attain infinite values whenever an edge of the communication graph tends to be lost. This is achieved via the use of decentralised navigation functions, which are bounded potential fields. The navigation functions are defined to take into account the connectivity maintenance objective. The authors first treat the case of connectivity maintenance for a static communication graph and then extend the result to the case of dynamic graphs. The results are illustrated on a formation control problem.", "The topology of wireless multihop ad hoc networks can be controlled by varying the transmission power of each node. We propose a simple distributed algorithm where each node makes local decisions about its transmission power and these local decisions collectively guarantee global connectivity. Specifically, based on the directional information, a node grows its transmission power until it finds a neighbor node in every direction. The resulting network topology increases the network lifetime by reducing the transmission power and reduces traffic interference by having low node degrees. Moreover, we show that the routes in the multihop network are efficient in power consumption. We give an approximation scheme in which the power consumption of each route can be made arbitrarily close to the optimal by carefully choosing the parameters. 
Simulation results demonstrate significant performance improvements.", "This paper presents a solution to the limited information rendezvous problem over dynamic interaction graphs. In particular, we show how we, by adding appropriate weights to the edges in the graphs, can guarantee that the graph stays connected. In previous work on graph-based coordination, connectedness have been assumed, and this paper thus shows how to overcome this limitation even when the graphs are subject to dynamic changes.", "Wireless sensor networks have attracted a lot of attention recently. Such environments may consist of many inexpensive nodes, each capable of collecting, storing, and processing environmental information, and communicating with neighboring nodes through wireless links. For a sensor network to operate successfully, sensors must maintain both sensing coverage and network connectivity. This issue has been studied in [2003] and Zhang and Hou [2004a], both of which reach a similar conclusion that coverage can imply connectivity as long as sensors' communication ranges are no less than twice their sensing ranges. In this article, without relying on this strong assumption, we investigate the issue from a different angle and develop several necessary and sufficient conditions for ensuring coverage and connectivity of a sensor network. Hence, the results significantly generalize the results in [2003] and Zhang and Hou [2004a]. This work is also a significant extension of our earlier work [Huang and Tseng 2003; 2004], which addresses how to determine the level of coverage of a given sensor network but does not consider the network connectivity issue. Our work is the first work allowing an arbitrary relationship between sensing ranges and communication distances of sensor nodes. We develop decentralized solutions for determining, or even adjusting, the levels of coverage and connectivity of a given network. 
Adjusting levels of coverage and connectivity is necessary when sensors are overly deployed, and we approach this problem by putting sensors to sleep mode and tuning their transmission powers. This results in prolonged network lifetime.", "Control of mobile networks raises fundamental and novel problems in controlling the structure of the resulting dynamic graphs. In particular, in applications involving mobile sensor networks and multi-agent systems, a great new challenge is the development of distributed motion algorithms that guarantee connectivity of the overall network. In this paper, we address this challenge using a novel control decomposition. First, motion control is performed in the continuous state space, where nearest neighbor potential fields are used to maintain existing links in the network. Second, distributed coordination protocols in the discrete graph space ensure connectivity of the switching network topology. Coordination is based on locally updated estimates of the abstract network topology by every agent as well as distributed auctions that enable tie breaking whenever simultaneous link deletions may violate connectivity. Integration of the overall system results in a distributed, multi- agent, hybrid system for which we show that, under certain secondary objectives on the agents and the assumption that the initial network is connected, the resulting motion always satisfies connectivity of the network. Our approach can also account for communication time delays in the network as well as collision avoidance, while its efficiency and scalability properties are illustrated in nontrivial computer simulations.", "We consider the problem of adjusting the transmit powers of nodes in a multihop wireless network (also called an ad hoc network) to create a desired topology. We formulate it as a constrained optimization problem with two constraints-connectivity and biconnectivity, and one optimization objective-maximum power used. 
We present two centralized algorithms for use in static networks, and prove their optimality. For mobile networks, we present two distributed heuristics that adaptively adjust node transmit powers in response to topological changes and attempt to maintain a connected topology using minimum power. We analyze the throughput, delay, and power consumption of our algorithms using a prototype software implementation, an emulation of a power-controllable radio, and a detailed channel model. Our results show that the performance of multihop wireless networks in practice can be substantially increased with topology control." ] }
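Several of the works cited in this record treat connectivity purely graph-theoretically, asking for the smallest uniform range that keeps the network connected. That critical range equals the longest edge of a Euclidean minimum spanning tree; below is a minimal sketch via Kruskal's algorithm with union-find (interference deliberately ignored, matching those models; the function name is illustrative).

```python
import itertools
import math

def critical_range(points):
    """Smallest uniform transmission range r0 such that the disk
    graph on `points` (edge iff distance <= r0) is connected.
    Equals the longest edge of a Euclidean minimum spanning tree,
    found here by Kruskal's algorithm with union-find."""
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    edges = sorted((math.dist(p, q), i, j) for (i, p), (j, q)
                   in itertools.combinations(enumerate(points), 2))
    r0, joined = 0.0, 0
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            joined += 1
            r0 = d  # edges arrive in nondecreasing order
            if joined == len(points) - 1:
                break
    return r0

print(critical_range([(0, 0), (1, 0), (3, 0)]))  # 2.0
```

The SINR-based results in the surrounding records go beyond this picture by also scheduling the chosen links against interference.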
1205.5164
2950843169
We consider the problem of constructing a communication infrastructure from scratch, for a collection of identical wireless nodes. Combinatorially, this means a) finding a set of links that form a strongly connected spanning graph on a set of @math points in the plane, and b) scheduling it efficiently in the SINR model of interference. The nodes must converge on a solution in a distributed manner, having no means of communication beyond the sole wireless channel. We give distributed connectivity algorithms that run in time @math , where @math is the ratio between the longest and shortest distances among nodes. Given that algorithms without prior knowledge of the instance are essentially limited to using uniform power, this is close to best possible. Our primary aim, however, is to find efficient structures, measured in the number of slots used in the final schedule of the links. Our main result is algorithms that match the efficiency of centralized solutions. Specifically, the networks can be scheduled in @math slots using (arbitrary) power control, and in @math slots using a simple oblivious power scheme. Additionally, the networks have the desirable properties that the latency of a converge-cast and of any node-to-node communication is optimal @math time.
Two fundamental problems that deal with a given set of links relate to this work. Capacity: find the largest feasible subset of links, and scheduling: partition the link set into the fewest number of feasible sets. For the former, constant-factor algorithms were given for uniform power @cite_10 @cite_20 , mean and linear power (and most other oblivious power assignments) @cite_5 , and power control @cite_23 . These imply a logarithmic factor for the corresponding scheduling problems. A distributed algorithm was given for scheduling with oblivious power @cite_11 and shown to achieve an @math -approximation @cite_24 .
{ "cite_N": [ "@cite_24", "@cite_23", "@cite_5", "@cite_10", "@cite_20", "@cite_11" ], "mid": [ "2950595026", "1584433497", "2950068681", "2106242763", "2100316242", "2152723361" ], "abstract": [ "We study the wireless scheduling problem in the SINR model. More specifically, given a set of @math links, each a sender-receiver pair, we wish to partition (or ) the links into the minimum number of slots, each satisfying interference constraints allowing simultaneous transmission. In the basic problem, all senders transmit with the same uniform power. We give a distributed @math -approximation algorithm for the scheduling problem, matching the best ratio known for centralized algorithms. It holds in arbitrary metric space and for every length-monotone and sublinear power assignment. It is based on an algorithm of Kesselheim and Vöcking, whose analysis we improve by a logarithmic factor. We show that every distributed algorithm uses @math slots to schedule certain instances that require only two slots, which implies that the best possible absolute performance guarantee is logarithmic.", "In modern wireless networks devices are able to set the power for each transmission carried out. Experimental but also theoretical results indicate that such power control can improve the network capacity significantly. We study this problem in the physical interference model using SINR constraints. In the SINR capacity maximization problem, we are given n pairs of senders and receivers, located in a metric space (usually a so-called fading metric). The algorithm shall select a subset of these pairs and choose a power level for each of them with the objective of maximizing the number of simultaneous communications. That is, the selected pairs have to satisfy the SINR constraints with respect to the chosen powers. We present the first algorithm achieving a constant-factor approximation in fading metrics. 
The best previous results depend on further network parameters such as the ratio of the maximum and the minimum distance between a sender and its receiver. Expressed only in terms of n, they are (trivial) Ω(n) approximations. Our algorithm still achieves an O(log n) approximation if we only assume to have a general metric space rather than a fading metric. Furthermore, existing approaches work well together with the algorithm allowing it to be used in singlehop and multi-hop scheduling scenarios. Here, we also get polylog n approximations.", "The capacity of a wireless network is the maximum possible amount of simultaneous communication, taking interference into account. Formally, we treat the following problem. Given is a set of links, each a sender-receiver pair located in a metric space, and an assignment of power to the senders. We seek a maximum subset of links that are feasible in the SINR model: namely, the signal received on each link should be larger than the sum of the interferences from the other links. We give a constant-factor approximation that holds for any length-monotone, sub-linear power assignment and any distance metric. We use this to give essentially tight characterizations of capacity maximization under power control using oblivious power assignments. Specifically, we show that the mean power assignment is optimal for capacity maximization of bi-directional links, and give a tight @math -approximation of scheduling bi-directional links with power control using oblivious power. For uni-directional links we give a nearly optimal @math -approximation to the power control problem using mean power, where @math is the ratio of longest and shortest links. Combined, these results clarify significantly the centralized complexity of wireless communication problems.", "In this work we study the problem of determining the throughput capacity of a wireless network. We propose a scheduling algorithm to achieve this capacity within an approximation factor. 
Our analysis is performed in the physical interference model, where nodes are arbitrarily distributed in Euclidean space. We consider the problem separately from the routing problem and the power control problem, i.e., all requests are single-hop, and all nodes transmit at a fixed power level. The existing solutions to this problem have either concentrated on special-case topologies, or presented optimality guarantees which become arbitrarily bad (linear in the number of nodes) depending on the network's topology. We propose the first scheduling algorithm with approximation guarantee independent of the topology of the network. The algorithm has a constant approximation guarantee for the problem of maximizing the number of links scheduled in one time-slot. Furthermore, we obtain an O(log n) approximation for the problem of minimizing the number of time slots needed to schedule a given set of requests. Simulation results indicate that our algorithm does not only have an exponentially better approximation ratio in theory, but also achieves superior performance in various practical network scenarios. Furthermore, we prove that the analysis of the algorithm is extendable to higher-dimensional Euclidean spaces, and to more realistic bounded-distortion spaces, induced by non-isotropic signal distortions. Finally, we show that it is NP-hard to approximate the scheduling problem to within an n^(1-ε) factor, for any constant ε > 0, in the non-geometric SINR model, in which path-loss is independent of the Euclidean coordinates of the nodes.", "In this paper we address a common question in wireless communication: How long does it take to satisfy an arbitrary set of wireless communication requests? This problem is known as the wireless scheduling problem. Our main result proves that wireless scheduling is in APX. 
In addition, we present a robustness result, showing that constant parameter and model changes will modify the result only by a constant.", "We present and analyze simple distributed contention resolution protocols for wireless networks. In our setting, one is given n pairs of senders and receivers located in a metric space. Each sender wants to transmit a signal to its receiver at a prespecified power level, e.g., all senders use the same, uniform power level, as is typically implemented in practice. Our analysis is based on the physical model in which the success of a transmission depends on the Signal-to-Interference-plus-Noise-Ratio (SINR). The objective is to minimize the number of time slots until all signals are successfully transmitted. Our main technical contribution is the introduction of a measure called maximum average affectance, enabling us to analyze random contention-resolution algorithms in which each packet is transmitted in each step with a fixed probability depending on the maximum average affectance. We prove that the schedule generated this way is only an O(log^2 n) factor longer than the optimal one, provided that the prespecified power levels satisfy natural monotonicity properties. By modifying the algorithm, senders need not know the maximum average affectance in advance but only static information about the network. In addition, we extend our approach to multi-hop communication, achieving the same approximation factor." ] }
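The physical (SINR) interference model that the records above repeatedly build on can be illustrated with a minimal feasibility check. This is a hedged sketch: the function name, the planar geometry, and all parameter values (path-loss exponent alpha, SINR threshold beta, noise) are assumptions for illustration, not code from any cited paper.

```python
import math

def sinr_feasible(links, power, noise=1.0, beta=2.0, alpha=3.0):
    """Check whether a set of links can transmit simultaneously in the
    physical (SINR) model: for every link, the received signal must be at
    least beta times (noise + total interference from all other senders).

    links: list of (sender, receiver) points in the plane.
    power: list of transmit powers, one per link.
    """
    def gain(p, q):
        # Simple geometric path loss: distance^(-alpha).
        return math.dist(p, q) ** (-alpha)

    for i, (sender_i, receiver_i) in enumerate(links):
        signal = power[i] * gain(sender_i, receiver_i)
        interference = sum(power[j] * gain(sender_j, receiver_i)
                           for j, (sender_j, _) in enumerate(links) if j != i)
        if signal < beta * (noise + interference):
            return False  # this link's SINR constraint is violated
    return True
```

A scheduling algorithm in this model partitions a request set into as few feasible subsets (time slots) as possible; the hardness result quoted above says that, without geometry, even approximating this well is NP-hard.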
1205.5164
2950843169
We consider the problem of constructing a communication infrastructure from scratch, for a collection of identical wireless nodes. Combinatorially, this means a) finding a set of links that form a strongly connected spanning graph on a set of @math points in the plane, and b) scheduling it efficiently in the SINR model of interference. The nodes must converge on a solution in a distributed manner, having no means of communication beyond the sole wireless channel. We give distributed connectivity algorithms that run in time @math , where @math is the ratio between the longest and shortest distances among nodes. Given that algorithms without prior knowledge of the instance are essentially limited to using uniform power, this is close to best possible. Our primary aim, however, is to find efficient structures, measured in the number of slots used in the final schedule of the links. Our main results are algorithms that match the efficiency of centralized solutions. Specifically, the networks can be scheduled in @math slots using (arbitrary) power control, and in @math slots using a simple oblivious power scheme. Additionally, the networks have the desirable properties that the latency of a converge-cast and of any node-to-node communication is optimal @math time.
The Minimum-Latency Aggregation Scheduling problem is closely related to connectivity, where the latency for transmitting messages to a sink is to be minimized. There is a large literature on this problem, but the first worst-case analysis in the SINR model was given in @cite_3 , with a @math bound on the schedule length by a centralized algorithm and @math by a distributed algorithm. The centralized bound was improved to optimal @math in @cite_4 .
{ "cite_N": [ "@cite_4", "@cite_3" ], "mid": [ "2953216592", "2125202270" ], "abstract": [ "Given @math wireless transceivers located in a plane, a fundamental problem in wireless communications is to construct a strongly connected digraph on them such that the constituent links can be scheduled in fewest possible time slots, assuming the SINR model of interference. In this paper, we provide an algorithm that connects an arbitrary point set in @math slots, improving on the previous best bound of @math due to Moscibroda. This is complemented with a super-constant lower bound on our approach to connectivity. An important feature is that the algorithms allow for bi-directional (half-duplex) communication. One implication of this result is an improved bound of @math on the worst-case capacity of wireless networks, matching the best bound known for the extensively studied average-case. We explore the utility of oblivious power assignments, and show that essentially all such assignments result in a worst case bound of @math slots for connectivity. This rules out a recent claim of a @math bound using oblivious power. On the other hand, using our result we show that @math slots suffice, where @math is the ratio between the largest and the smallest links in a minimum spanning tree of the points. Our results extend to the related problem of minimum latency aggregation scheduling, where we show that aggregation scheduling with @math latency is possible, improving upon the previous best known latency of @math . We also initiate the study of network design problems in the SINR model beyond strong connectivity, obtaining similar bounds for biconnected and @math -edge connected structures.", "Minimum-Latency Aggregation Scheduling (MLAS) is a problem of fundamental importance in wireless sensor networks. 
There has, however, been very little effort spent on designing algorithms to achieve sufficiently fast data aggregation under the physical interference model, which is a more realistic model than the traditional protocol interference model. In particular, a distributed solution to the problem under the physical interference model is challenging because of the need for global-scale information to compute the cumulative interference at any individual node. In this paper, we propose a distributed algorithm that solves the MLAS problem under the physical interference model in networks of arbitrary topology in O(K) time slots, where K is the logarithm of the ratio between the lengths of the longest and shortest links in the network. We also give a centralized algorithm to serve as a benchmark for comparison purposes, which aggregates data from all sources in O(log^3 n) time slots (where n is the total number of nodes). This is the current best algorithm for the problem in the literature. The distributed algorithm partitions the network into cells according to the value K, thus obviating the need for global information. The centralized algorithm strategically combines our aggregation tree construction algorithm with the non-linear power assignment strategy in [9]. We prove the correctness and efficiency of our algorithms, and conduct empirical studies under realistic settings to validate our analytical results." ] }
1205.4377
2953145865
In many classification systems, sensing modalities have different acquisition costs. It is often unnecessary to use every modality to classify a majority of examples. We study a multi-stage system in a prediction time cost reduction setting, where the full data is available for training, but for a test example, measurements in a new modality can be acquired at each stage for an additional cost. We seek decision rules to reduce the average measurement acquisition cost. We formulate an empirical risk minimization problem (ERM) for a multi-stage reject classifier, wherein the stage @math classifier either classifies a sample using only the measurements acquired so far or rejects it to the next stage where more attributes can be acquired for a cost. To solve the ERM problem, we show that the optimal reject classifier at each stage is a combination of two binary classifiers, one biased towards positive examples and the other biased towards negative examples. We use this parameterization to construct stage-by-stage global surrogate risk, develop an iterative algorithm in the boosting framework and present convergence and generalization results. We test our work on synthetic, medical and explosives detection datasets. Our results demonstrate that substantial cost reduction without a significant sacrifice in accuracy is achievable.
Detection Cascades: Our multi-stage sequential reject classifiers bear close resemblance to detection cascades. There is much literature on cascade design (see @cite_19 @cite_4 and references therein), but most cascades roughly follow the set-up introduced by @cite_13 to reduce computation cost during classification. At each stage in a cascade, there is a binary classifier with a very high detection rate and a mediocre false alarm rate. Each stage makes a partial decision; it either detects an instance as negative or passes it on to the next stage. Only the last stage in the cascade makes a full decision, namely, whether the example belongs to a positive or negative class.
{ "cite_N": [ "@cite_13", "@cite_19", "@cite_4" ], "mid": [ "1761390164", "2125277152", "" ], "abstract": [ "This paper describes a visual object detection framework that is capable of processing images extremely rapidly while achieving high detection rates. There are three key contributions. The first is the introduction of a new image representation called the “Integral Image” which allows the features used by our detector to be computed very quickly. The second is a learning algorithm, based on AdaBoost, which selects a small number of critical visual features and yields extremely efficient classifiers [4]. The third contribution is a method for combining classifiers in a “cascade” which allows background regions of the image to be quickly discarded while spending more computation on promising object-like regions. A set of experiments in the domain of face detection is presented. The system yields face detection performance comparable to the best previous systems [16, 11, 14, 10, 1]. Implemented on a conventional desktop, face detection proceeds at 15 frames per second.", "Face detection has been one of the most studied topics in the computer vision literature. In this technical report, we survey the recent advances in face detection for the past decade. The seminal Viola-Jones face detector is first reviewed. We then survey the various techniques according to how they extract features and what learning algorithms are adopted. It is our hope that by reviewing the many existing algorithms, we will see even better algorithms developed to solve this fundamental computer vision problem.", "" ] }
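The stage-wise early-rejection structure described in the related-work paragraph above (each stage either declares a sample negative or passes it on, and only the last stage makes a full decision) can be sketched in a few lines. This is an illustrative toy, not the Viola-Jones implementation; stage score functions and thresholds are assumed for the example.

```python
def cascade_classify(x, stages, final_stage):
    """Evaluate a detection cascade.

    stages: list of (score_fn, threshold) pairs, ordered cheapest first.
    A sample whose score falls below a stage's threshold is rejected as
    negative immediately; only samples surviving every stage reach the
    final classifier, which makes the full positive/negative decision.
    """
    for score_fn, threshold in stages:
        if score_fn(x) < threshold:
            return 0  # early negative decision, cheap features only
    return final_stage(x)
```

Because most windows in an image are easy negatives, the expected per-sample cost is dominated by the first, cheapest stages, which is the source of the speedup the cited papers describe.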
1205.4377
2953145865
In many classification systems, sensing modalities have different acquisition costs. It is often unnecessary to use every modality to classify a majority of examples. We study a multi-stage system in a prediction time cost reduction setting, where the full data is available for training, but for a test example, measurements in a new modality can be acquired at each stage for an additional cost. We seek decision rules to reduce the average measurement acquisition cost. We formulate an empirical risk minimization problem (ERM) for a multi-stage reject classifier, wherein the stage @math classifier either classifies a sample using only the measurements acquired so far or rejects it to the next stage where more attributes can be acquired for a cost. To solve the ERM problem, we show that the optimal reject classifier at each stage is a combination of two binary classifiers, one biased towards positive examples and the other biased towards negative examples. We use this parameterization to construct stage-by-stage global surrogate risk, develop an iterative algorithm in the boosting framework and present convergence and generalization results. We test our work on synthetic, medical and explosives detection datasets. Our results demonstrate that substantial cost reduction without a significant sacrifice in accuracy is achievable.
Other Cost Sensitive Methods: Network intrusion detection systems (IDSs) are an area where sequential decision systems have been explored (see @cite_24 @cite_20 @cite_10 ). In IDS, features have different computation costs. For each cost level, a ruleset is learned. The goal is to use as many low-cost rules as possible. In a related set-up, @cite_2 @cite_21 consider a more general ensemble of base classifiers and explore how to minimize the ensemble size without sacrificing performance. In the test phase, for a sample, another classifier is added to the ensemble if the confidence of the current classification is low. Here, similar to detection cascades, the goal is to reduce computation time. As we described in Sec. 1, item (C), the important distinction is that, in our setting, a decision is based only on the partial information acquired up to a stage. In a computation-driven method, a stage (or base classifier) decides using a feature computed from the full measurement vector.
{ "cite_N": [ "@cite_21", "@cite_24", "@cite_2", "@cite_10", "@cite_20" ], "mid": [ "2009727399", "1794828807", "2082366447", "", "1800991598" ], "abstract": [ "Recently, mining data streams with concept drifts for actionable insights has become an important and challenging task for a wide range of applications including credit card fraud protection, target marketing, network intrusion detection, etc. Conventional knowledge discovery tools are facing two challenges, the overwhelming volume of the streaming data, and the concept drifts. In this paper, we propose a general framework for mining concept-drifting data streams using weighted ensemble classifiers. We train an ensemble of classification models, such as C4.5, RIPPER, naive Bayesian, etc., from sequential chunks of the data stream. The classifiers in the ensemble are judiciously weighted based on their expected classification accuracy on the test data under the time-evolving environment. Thus, the ensemble approach improves both the efficiency in learning the model and the accuracy in performing classification. Our empirical study shows that the proposed methods have a substantial advantage over single-classifier approaches in prediction accuracy, and the ensemble framework is effective for a variety of classification models.", "Intrusion detection systems (IDSs) need to maximize security while minimizing costs. In this paper, we study the problem of building cost-sensitive intrusion detection models to be used for real-time detection. We briefly discuss the major cost factors in IDS, including consequential and operational costs. We propose a multiple model cost-sensitive machine learning technique to produce models that are optimized for user-defined cost metrics. 
Empirical experiments in off-line analysis show a reduction of approximately 97% in operational cost over a single model approach, and a reduction of approximately 30% in consequential cost over a pure accuracy-based approach.", "Previous research has shown that averaging ensemble can scale up learning over very large cost-sensitive datasets with linear speedup independent of the learning algorithms. At the same time, it achieves the same or even better accuracy than a single model computed from the entire dataset. However, one major drawback is its inefficiency in prediction since every base model in the ensemble has to be consulted in order to produce a final prediction. In this paper, we propose several approaches to reduce the number of base classifiers. Among various methods explored, our empirical studies have shown that the benefit-based greedy approach can safely remove more than 90% of the base models while maintaining or even exceeding the prediction accuracy of the original ensemble. Assuming that each base classifier consumes one unit of prediction time, the removal of 90% of base classifiers translates to a prediction speedup of 10 times. On top of pruning, we propose a novel dynamic scheduling approach to further reduce the \"expected\" number of classifiers employed in prediction. It measures the confidence of a prediction by a subset of classifiers in the pruned ensemble. This confidence is used to decide if more classifiers are needed in order to produce a prediction that is the same as the original unpruned ensemble. This approach reduces the \"expected\" number of classifiers by another 25% to 75% without loss of accuracy.", "", "Intrusion detection systems (IDSs) must maximize the realization of security goals while minimizing costs. In this paper, we study the problem of building cost-sensitive intrusion detection models. 
We examine the major cost factors associated with an IDS, which include development cost, operational cost, damage cost due to successful intrusions, and the cost of manual and automated response to intrusions. These cost factors can be qualified according to a defined attack taxonomy and site-specific security policies and priorities. We define cost models to formulate the total expected cost of an IDS, and present cost-sensitive machine learning techniques that can produce detection models that are optimized for user-defined cost metrics. Empirical experiments show that our cost-sensitive modeling and deployment techniques are effective in reducing the overall cost of intrusion detection." ] }
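The dynamic-scheduling idea in the ensemble-pruning abstract above (consult base classifiers one at a time and stop once the outcome is decided) can be sketched as follows. The function name and the stopping rule are assumptions for illustration; the cited work uses a learned confidence measure rather than this simple vote-counting rule, and the classifier list is assumed non-empty.

```python
def early_exit_predict(x, classifiers):
    """Consult base classifiers sequentially and stop as soon as the
    remaining classifiers can no longer flip the majority vote.

    Returns (predicted_label, number_of_classifiers_consulted), so the
    second element measures the prediction-time cost actually paid.
    """
    votes = 0
    for used, clf in enumerate(classifiers, start=1):
        votes += 1 if clf(x) else -1
        remaining = len(classifiers) - used
        if abs(votes) > remaining:  # outcome is already decided
            break
    return (1 if votes > 0 else 0), used
```

With five unanimous classifiers, only three need to be consulted before the majority is locked in, which is exactly the kind of expected-cost reduction the dynamic scheduling approach targets.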
1205.4681
2039199977
Recent years have seen significant interest in designing networks that are self-healing in the sense that they can automatically recover from adversarial attacks. Previous work shows that it is possible for a network to automatically recover, even when an adversary repeatedly deletes nodes in the network. However, there have not yet been any algorithms that self-heal in the case where an adversary takes over nodes in the network. In this paper, we address this gap. In particular, we describe a communication network over n nodes that ensures the following properties, even when an adversary controls up to t <= (1/8 - ε)n nodes, for any non-negative ε. First, the network provides point-to-point communication with bandwidth and latency costs that are asymptotically optimal. Second, the expected total number of message corruptions is O(t(log* n)^2) before the adversarially controlled nodes are effectively quarantined so that they cause no more corruptions. Empirical results show that our algorithm can reduce the bandwidth cost by up to a factor of 70.
Several papers @cite_7 @cite_11 @cite_20 @cite_23 @cite_2 have discussed different restoration mechanisms to preserve network performance by adding capacity and rerouting traffic streams in the presence of node or link failures. They present mathematical models to determine global optimal restoration paths, and provide methods for capacity optimization of path-restorable networks.
{ "cite_N": [ "@cite_7", "@cite_23", "@cite_2", "@cite_20", "@cite_11" ], "mid": [ "2114450250", "2171699260", "2140252921", "2138244558", "2131856621" ], "abstract": [ "With the employment of very high capacity transmission systems in high-speed broadband networks (B-ISDN, broadband integrated services digital network), based upon the asynchronous transfer mode (ATM), the consequences of a link failure have increased, since even during a short disconnection a large volume of data is lost. These networks can be made safer against failure, if in the case of a transmission link outage the affected traffic is routed over still intact network parts. This paper describes various protection switching methods for ATM networks and presents mathematical models which can be used to determine globally optimal restoration paths and to dimension spare capacities in the network. Finally, the results are discussed and the various methods are compared.", "For restoration in the case of single link failures in meshed networks several strategies can be considered: link restoration, path restoration and path restoration with link-disjunct route. In terms of spare capacity requirement path restoration is found to be most attractive. These results are obtained by using two general optimisation techniques (simulated annealing and integer linear programming). Both techniques have each their own features; their applicability depends on the size and type of the problem. Application to WDM fibre networks is discussed.", "This paper studies the capacity and flow assignment problem arising in the design of self-healing asynchronous transfer mode (ATM) networks using the virtual path concept. The problem is formulated here as a linear programming problem which is solved using standard methods. The objective is to minimize the spare capacity cost for the given restoration requirement. The spare cost depends on the restoration strategies used in the network. 
We compare several restoration strategies quantitatively in terms of spare cost, notably: global versus failure-oriented reconfiguration, path versus link restoration, and state-dependent versus state-independent restoration. The advantages and disadvantages of various restoration strategies are also highlighted. Such comparisons provide useful guidance for real network design. Further, a new heuristic algorithm based on the minimum cost route concept is developed for the design of large self-healing ATM networks using path restoration. Numerical results illustrate that the heuristic algorithm is efficient and gives near-optimal solutions for the spare capacity allocation and flow assignment for tested examples.", "In self-healing networks, end-to-end restoration schemes have been considered more advantageous than line restoration schemes because of a possible cost reduction of the total capacity to construct a fully restorable network. This paper clarifies the benefit of end-to-end restoration schemes quantitatively through a comparative analysis of the minimum link capacity installation cost. A jointly optimal capacity and flow assignment algorithm is developed for the self-healing ATM networks based on end-to-end and line restoration. Several networks with diverse topological characteristics as well as multiple projected traffic demand patterns are employed in the experiments to see the effect of various network parameters. The results indicate that the network topology has a significant impact on the required resource installation cost for each restoration scheme. Contrary to a wide belief in the economic advantage of the end-to-end restoration scheme, this study reveals that the attainable gain could be marginal for a well-connected and or unbalanced network.", "The total transmission capacity required by a transport network to satisfy demand and protect it from failures contributes significantly to its cost, especially in long-haul networks. 
Previously, the spare capacity of a network with a given set of working span sizes has been optimized to facilitate span restoration. Path restorable networks can, however, be even more efficient by defining the restoration problem from an end to end rerouting viewpoint. We provide a method for capacity optimization of path restorable networks which is applicable to both synchronous transfer mode (STM) and asynchronous transfer mode (ATM) virtual path (VP)-based restoration. Lower bounds on spare capacity requirements in span and path restorable networks are first compared, followed by an integer program formulation based on flow constraints which solves the spare and or working capacity placement problem in either span or path restorable networks. The benefits of path and span restoration, and of jointly optimizing working path routing and spare capacity placement, are then analyzed." ] }
1205.4681
2039199977
Recent years have seen significant interest in designing networks that are self-healing in the sense that they can automatically recover from adversarial attacks. Previous work shows that it is possible for a network to automatically recover, even when an adversary repeatedly deletes nodes in the network. However, there have not yet been any algorithms that self-heal in the case where an adversary takes over nodes in the network. In this paper, we address this gap. In particular, we describe a communication network over n nodes that ensures the following properties, even when an adversary controls up to t <= (1/8 - ε)n nodes, for any non-negative ε. First, the network provides point-to-point communication with bandwidth and latency costs that are asymptotically optimal. Second, the expected total number of message corruptions is O(t(log* n)^2) before the adversarially controlled nodes are effectively quarantined so that they cause no more corruptions. Empirical results show that our algorithm can reduce the bandwidth cost by up to a factor of 70.
Our results are inspired by recent work on self-healing algorithms @cite_3 @cite_6 @cite_15 @cite_5 @cite_8 @cite_17 . A common model for these results is that the following process repeats indefinitely: an adversary deletes some nodes in the network, and the algorithm adds edges. The algorithm is constrained to never increase the degree of any node by more than a logarithmic factor from its original degree. In this model, researchers have presented algorithms that ensure the following properties: the network stays connected and the diameter does not increase by much @cite_3 @cite_6 @cite_15 ; the shortest path between any pair of nodes does not increase by much @cite_5 ; and expansion properties of the network are approximately preserved @cite_8 .
{ "cite_N": [ "@cite_8", "@cite_6", "@cite_3", "@cite_5", "@cite_15", "@cite_17" ], "mid": [ "2170420565", "", "", "2086958729", "2136928097", "2950430329" ], "abstract": [ "We consider the problem of self-healing in reconfigurable networks (e.g. peer-to-peer and wireless mesh networks) that are under repeated attack by an omniscient adversary and propose a fully distributed algorithm, Xheal, that maintains good expansion and spectral properties of the network, also keeping the network connected. Moreover, Xheal does this while allowing only low stretch and degree increase per node. The algorithm heals global properties like expansion and stretch while only doing local changes and using only local information. We use a model similar to that used in recent work on self-healing. In our model, over a sequence of rounds, an adversary either inserts a node with arbitrary connections or deletes an arbitrary node from the network. The network responds by quick \"repairs,\" which consist of adding or deleting edges in an efficient localized manner. These repairs preserve the edge expansion, spectral gap, and network stretch, after adversarial deletions, without increasing node degrees by too much, in the following sense. At any point in the algorithm, the expansion of the graph will be either 'better' than the expansion of the graph formed by considering only the adversarial insertions (not the adversarial deletions) or the expansion will be, at least, a constant. Also, the stretch i.e. the distance between any pair of nodes in the healed graph is no more than a O(log n) factor. Similarly, at any point, a node v whose degree would have been d in the graph with adversarial insertions only, will have degree at most O(κ d) in the actual graph, for a small parameter κ. We also provide bounds on the second smallest eigenvalue of the Laplacian which captures key properties such as mixing time, conductance, congestion in routing etc. 
Our distributed data structure has low amortized latency and bandwidth requirements. Our work improves over the self-healing algorithms Forgiving tree [PODC 2008] and Forgiving graph [PODC 2009] in that we are able to give guarantees on degree and stretch, while at the same time preserving the expansion and spectral properties of the network.", "", "", "We consider the problem of self-healing in peer-to-peer networks that are under repeated attack by an omniscient adversary. We assume that, over a sequence of rounds, an adversary either inserts a node with arbitrary connections or deletes an arbitrary node from the network. The network responds to each such change by quick \"repairs,\" which consist of adding or deleting a small number of edges. These repairs essentially preserve closeness of nodes after adversarial deletions, without increasing node degrees by too much, in the following sense. At any point in the algorithm, nodes v and w whose distance would have been l in the graph formed by considering only the adversarial insertions (not the adversarial deletions), will be at distance at most l log n in the actual graph, where n is the total number of vertices seen so far. Similarly, at any point, a node v whose degree would have been d in the graph with adversarial insertions only, will have degree at most 3d in the actual graph. Our distributed data structure, which we call the Forgiving Graph, has low latency and bandwidth requirements. 
The Forgiving Graph improves on the Forgiving Tree distributed data structure from [8] in the following ways: 1) it ensures low stretch over all pairs of nodes, while the Forgiving Tree only ensures low diameter increase; 2) it handles both node insertions and deletions, while the Forgiving Tree only handles deletions; 3) it does not require an initialization phase, while the Forgiving Tree initially requires construction of a spanning tree of the network.", "We consider the problem of self-healing in peer-to-peer networks that are under repeated attack by an omniscient adversary. We assume that the following process continues for up to n rounds where n is the total number of nodes initially in the network: the adversary deletesan arbitrary node from the network, then the network responds by quickly adding a small number of new edges. We present a distributed data structure that ensures two key properties. First, the diameter of the network is never more than O(log Delta) times its original diameter, where Delta is the maximum degree of the network initially. We note that for many peer-to-peer systems, Delta is polylogarithmic, so the diameter increase would be a O(loglog n) multiplicative factor. Second, the degree of any node never increases by more than 3 over its original degree. Our data structure is fully distributed, has O(1) latency per round and requires each node to send and receive O(1) messages per round. The data structure requires an initial setup phase that has latency equal to the diameter of the original network, and requires, with high probability, each node v to send O(log n) messages along every edge incident to v. Our approach is orthogonal and complementary to traditional topology-based approaches to defending against attack.", "Healing algorithms play a crucial part in distributed P2P networks where failures occur continuously and frequently. 
Several self-healing algorithms have been suggested recently [IPDPS'08, PODC'08, PODC'09, PODC'11] in a line of work that has yielded gradual improvements in the properties ensured on the graph. This work motivates a strong general phenomenon of edge-preserving healing that aims at obtaining self-healing algorithms with the constraint that all original edges in the graph (not deleted by the adversary), be retained in every intermediate graph. The previous algorithms, in their nascent form, are not explicitly edge preserving. In this paper, we show they can be suitably modified (We introduce Xheal+, an edge-preserving version of Xheal[PODC'11]). Towards this end, we present a general self-healing model that unifies the previous models. The main contribution of this paper is not in the technical complexity, rather in the simplicity with which the edge-preserving property can be ensured and the message that this is a crucial property with several benefits. In particular, we highlight this by showing that, almost as an immediate corollary, subgraph densities are preserved or increased. Maintaining density is a notion motivated by the fact that in certain distributed networks, certain nodes may require and initially have a larger number of inter-connections. It is vital that a healing algorithm, even amidst failures, respect these requirements. Our suggested modifications yield such subgraph density preservation as a by product. In addition, edge preservation helps maintain any subgraph induced property that is monotonic. Also, algorithms that are edge-preserving require minimal alteration of edges which can be an expensive cost in healing - something that has not been modeled in any of the past work." ] }
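The repair pattern common to this line of work, where the adversary deletes a node and the algorithm responds by adding a few edges while keeping degree increases bounded, can be illustrated with a toy repair rule. This sketch is not any cited paper's algorithm: reconnecting a deleted node's neighbors in a cycle adds at most 2 to each surviving node's degree and preserves connectivity, but it does not by itself give the diameter, stretch, or expansion guarantees the cited papers prove.

```python
def heal_delete(adj, v):
    """Toy self-healing repair: when node v is deleted, reconnect its
    former neighbors in a cycle.

    adj: dict mapping node -> set of neighbors (undirected graph).
    Each surviving node's degree grows by at most 2 per repair, and a
    graph that was connected before the deletion stays connected.
    """
    neighbors = sorted(adj.pop(v))          # remove v from the graph
    for u in neighbors:
        adj[u].discard(v)                   # drop edges incident to v
    for i, u in enumerate(neighbors):       # stitch neighbors into a cycle
        w = neighbors[(i + 1) % len(neighbors)]
        if u != w:
            adj[u].add(w)
            adj[w].add(u)
    return adj
```

Deleting the hub of a star, for example, leaves the leaves connected in a triangle, with each leaf's degree rising from 1 to 2.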
1205.4681
2039199977
Recent years have seen significant interest in designing networks that are self-healing in the sense that they can automatically recover from adversarial attacks. Previous work shows that it is possible for a network to automatically recover, even when an adversary repeatedly deletes nodes in the network. However, there have not yet been any algorithms that self-heal in the case where an adversary takes over nodes in the network. In this paper, we address this gap. In particular, we describe a communication network over n nodes that ensures the following properties, even when an adversary controls up to t <= (1/8 - ε)n nodes, for any non-negative ε. First, the network provides point-to-point communication with bandwidth and latency costs that are asymptotically optimal. Second, the expected total number of message corruptions is O(t(log* n)^2) before the adversarially controlled nodes are effectively quarantined so that they cause no more corruptions. Empirical results show that our algorithm can reduce the bandwidth cost by up to a factor of 70.
Our results are also similar in spirit to those of Saia and Young @cite_9 and @cite_18 , which both show how to reduce message complexity when transmitting a message across a quorum path of length @math . The first result, @cite_9 , achieves expected message complexity of @math by use of bipartite expanders. However, this result is impractical due to high hidden constants and high setup costs. The second result, @cite_18 , achieves expected message complexity of @math . However, this second result requires the sender to iteratively contact a member of each quorum in the quorum path.
{ "cite_N": [ "@cite_9", "@cite_18" ], "mid": [ "1971843894", "2116910210" ], "abstract": [ "Several recent research results describe how to design Distributed Hash Tables (DHTs) that are robust to adversarial attack via Byzantine faults. Unfortunately, all of these results require a significant blowup in communication costs over standard DHTs. For example, to perform a lookup operation, all such robust DHTs of which we are aware require sending O(log^3n) messages while standard DHTs require sending only O(logn), where n is the number of nodes in the network. In this paper, we describe protocols to reduce the communication costs of all such robust DHTs. In particular, we give a protocol to reduce the number of messages sent to perform a lookup operation from O(log^3n) to O(log^2n) in expectation. Moreover, we also give a protocol for sending a large (i.e. containing @W(log^4n) bits) message securely through a robust DHT that requires, in expectation, only a constant blowup in the total number of bits sent compared with performing the same operation in a standard DHT. This is an improvement over the O(log^2n) bit blowup that is required to perform such an operation in all current robust DHTs. Both of our protocols are robust against an adaptive adversary.", "There are several analytical results on distributed hash tables (DHTs) that can tolerate Byzantine faults. Unfortunately, in such systems, operations such as data retrieval and message sending incur significant communication costs. For example, a simple scheme used in many Byzantine fault-tolerant DHT constructions of @math nodes requires @math messages, this is likely impractical for real-world applications. The previous best known message complexity is @math in expectation , however, the corresponding protocol suffers from prohibitive costs owing to hidden constants in the asymptotic notation and setup costs. 
In this paper, we focus on reducing the communication costs against a computationally bounded adversary. We employ threshold cryptography and distributed key generation to define two protocols both of which are more efficient than existing solutions. In comparison, our first protocol is deterministic with @math message complexity and our second protocol is randomized with expected @math message complexity. Further, both the hidden constants and setup costs for our protocols are small and no trusted third party is required. Finally, we present results from micro benchmarks conducted over PlanetLab showing that our protocols are practical for deployment under significant levels of churn and adversarial behaviour." ] }
1205.4626
1848261800
We examine and bring out the architecturally significant characteristics of various virtualization and cloud oriented platforms. The impact of such characteristics on the ability of guest applications to achieve various quality attributes (QA) has also been determined by examining the existing body of architecture knowledge. We observe from our findings that efficiency, resource elasticity and security are among the most impacted QAs, and virtualization platforms exhibit the maximum impact on various QAs.
Several researchers have explored different dimensions of virtualization and cloud based platforms. For instance, a dissection of the cloud into five main layers, illustrating their interrelations and inter-dependency on constituent components, has been discussed by Youseff et al. @cite_3 . The architectural requirements of cloud platforms have been discussed by Rimal et al. @cite_13 . These works examine different technical dimensions of cloud computing. However, from the standpoint of an application that wants to exploit the capabilities of such modern computing platforms, some questions still do not have clear answers. For example: What are the important characteristics of various platforms from the viewpoint of a guest application? How do such characteristics impact various functional and non-functional aspects of the guest application?
{ "cite_N": [ "@cite_13", "@cite_3" ], "mid": [ "2042852274", "2030565040" ], "abstract": [ "Cloud Computing is a model of service delivery and access where dynamically scalable and virtualized resources are provided as a service over the Internet. This model creates a new horizon of opportunity for enterprises. It introduces new operating and business models that allow customers to pay for the resources they effectively use, instead of making heavy upfront investments. The biggest challenge in Cloud Computing is the lack of a de facto standard or single architectural method, which can meet the requirements of an enterprise cloud approach. In this paper, we explore the architectural features of Cloud Computing and classify them according to the requirements of end-users, enterprises that use the cloud as a platform, and cloud providers themselves. We show that several architectural features will play a major role in the adoption of the Cloud Computing paradigm as a mainstream commodity in the enterprise world. This paper also provides key guidelines to software architects and Cloud Computing application developers for creating future architectures.", "Progress of research efforts in a novel technology is contingent on having a rigorous organization of its knowledge domain and a comprehensive understanding of all the relevant components of this technology and their relationships. Cloud computing is one contemporary technology in which the research community has recently embarked. Manifesting itself as the descendant of several other computing research areas such as service-oriented architecture, distributed and grid computing, and virtualization, cloud computing inherits their advancements and limitations. 
Towards the end-goal of a thorough comprehension of the field of cloud computing, and a more rapid adoption from the scientific community, we propose in this paper an ontology of this area which demonstrates a dissection of the cloud into five main layers, and illustrates their interrelations as well as their inter-dependency on preceding technologies. The contribution of this paper lies in being one of the first attempts to establish a detailed ontology of the cloud. Better comprehension of the technology would enable the community to design more efficient portals and gateways for the cloud, and facilitate the adoption of this novel computing approach in scientific environments. In turn, this will assist the scientific community to expedite its contributions and insights into this evolving computing field." ] }
1205.3305
2949548937
Many studies have tried to evaluate wireless networks and especially the IEEE 802.15.4 standard. Hence, several papers have aimed to describe the functionalities of the physical (PHY) and medium access control (MAC) layers. They have highlighted some characteristics with experimental results and/or have attempted to reproduce them using theoretical models. In this paper, we take the first approach to better understand the IEEE 802.15.4 standard. Indeed, we provide a comprehensive model, able to mimic more faithfully the functionalities of this standard at the PHY and MAC layers. We propose a combination of two relevant models for the two layers. The PHY layer behavior is reproduced by a mathematical framework, which is based on radio and channel models, in order to quantify link reliability. On the other hand, the MAC layer is mimicked by an enhanced Markov chain. The results show the pertinence of our approach compared to the model based on a Markov chain for the IEEE 802.15.4 MAC layer. This contribution allows us to estimate the network performance fully and more precisely with different network sizes, as well as different metrics such as node reliability and delay. Our contribution enables us to catch possible failures at both layers.
In another context, numerous works focus on physical layer modeling. For instance, Zuniga and Krishnamachari @cite_6 @cite_15 have analyzed the major causes behind unreliability @cite_6 @cite_15 and the negative impact of asymmetry on link efficiency @cite_15 in low power wireless links. Instead of the binary disc-shaped model, these models reproduce the so-called @math @math @cite_4 @cite_8 @cite_12 in order to model the transmission range. The packet reception rate and the upper-layer protocol reliability are highly unstable when a neighbor is located in this region. To understand it, two models have been proposed: a channel model that is based on the log-normal path loss propagation model @cite_10 and a radio reception model closely tied to the determination of the packet reception ratio. Through these models, it is possible to derive the expected distribution and the variance of the packet reception ratio according to the distance.
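The combination of a log-normal shadowing channel model with a radio reception model can be sketched numerically. The Python snippet below is a minimal illustration, not the implementation from any cited work: the parameter values (reference path loss, path loss exponent, shadowing variance, noise floor, frame size) are assumptions, and the per-bit success probability uses a simplified NCFSK-style expression rather than the exact formula of the cited papers.

```python
import math
import random

def path_loss_db(d, pl_d0=55.0, d0=1.0, n=3.0, sigma=4.0):
    """Log-normal shadowing: mean path loss plus Gaussian variation (dB)."""
    return pl_d0 + 10.0 * n * math.log10(d / d0) + random.gauss(0.0, sigma)

def prr(d, pt_dbm=0.0, noise_dbm=-105.0, frame_bytes=50):
    """Packet reception ratio at distance d for one channel realization."""
    snr_db = pt_dbm - path_loss_db(d) - noise_dbm
    snr = 10.0 ** (snr_db / 10.0)
    # Simplified per-bit success probability for a noncoherent FSK-like radio,
    # raised to the number of bits per frame.
    p_bit = 1.0 - 0.5 * math.exp(-snr / 2.0)
    return p_bit ** (8 * frame_bytes)

# Averaging over shadowing realizations gives the expected PRR per distance;
# the spread of the samples reflects the "transitional region" instability.
random.seed(1)
for d in (1, 5, 10, 20, 40):
    samples = [prr(d) for _ in range(1000)]
    print(f"d={d:>2} m  mean PRR={sum(samples) / len(samples):.3f}")
```

Close to the transmitter almost every sample yields PRR near 1, far away almost every sample yields PRR near 0, and at intermediate distances individual realizations scatter widely, which is exactly the behavior the transitional-region analyses quantify.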
{ "cite_N": [ "@cite_4", "@cite_8", "@cite_6", "@cite_15", "@cite_10", "@cite_12" ], "mid": [ "", "", "2110360611", "2160695204", "1575885344", "38723175" ], "abstract": [ "", "", "The wireless sensor networks community now has an increased understanding of the need for realistic link layer models. Recent experimental studies have shown that real deployments have a \"transitional region\" with highly unreliable links, and that therefore the idealized perfect-reception-within-range models used in common network simulation tools can be very misleading. In this paper, we use mathematical techniques from communication theory to model and analyze the low power wireless links. The primary contribution of this work is the identification of the causes of the transitional region, and a quantification of their influence. Specifically, we derive expressions for the packet reception rate as a function of distance, and for the width of the transitional region. These expressions incorporate important channel and radio parameters such as the path loss exponent and shadowing variance of the channel; and the modulation and encoding of the radio. A key finding is that for radios using narrow-band modulation, the transitional region is not an artifact of the radio non-ideality, as it would exist even with perfect-threshold receivers because of multi-path fading. However, we hypothesize that radios with mechanisms to combat multi-path effects, such as spread-spectrum and diversity techniques, can reduce the transitional region.", "Experimental studies have demonstrated that the behavior of real links in low-power wireless networks (such as wireless sensor networks) deviates to a large extent from the ideal binary model used in several simulation studies. In particular, there is a large transitional region in wireless link quality that is characterized by significant levels of unreliability and asymmetry, significantly impacting the performance of higher-layer protocols.
We provide a comprehensive analysis of the root causes of unreliability and asymmetry. In particular, we derive expressions for the distribution, expectation, and variance of the packet reception rate as a function of distance, as well as for the location and extent of the transitional region. These expressions incorporate important environmental and radio parameters such as the path loss exponent and shadowing variance of the channel, and the modulation, encoding, and hardware variance of the radios.", "A surface maintenance machine that includes a powered vehicle having a rotary tool in a tool enclosure to direct debris from the surface through an enclosure outlet, a debris hopper having a debris inlet locatable adjacent the enclosure outlet, and a discharge opening remote from the hopper inlet, a hingedly mounted door for closing the discharge opening, lift arms mounted on the vehicle for mounting the hopper for pivotable movement relative thereto, supporting the hopper in a debris collecting position, and selectively elevating the hopper from a street level position to a high dump position, control arms and piston cylinder combinations to control the pivoting of the hopper as it is moved between its street level and high dump positions, the piston cylinder combinations being operated from a retracted condition to an extended condition for pivoting the hopper to a dumping position in either the street level position or the high dump position, latches mounted on the hopper that are operated through the hopper pivoting to a dumping position to release their latching engagement with the door, and a door holder assembly which upon the door swinging open as the hopper is moved to a dumping position is resiliently operated to maintain a sufficient minimum spacing of the door from the discharge opening and that upon the piston cylinder combinations being substantially fully retracted is operated to release the door and permit the door swinging under gravity to a closed 
position with sufficient force to insure the door is latched in a closed position, the door holder assembly being operated to permit the door swinging closed by a cam surface on the adjacent control arm.", "A new class of networked systems is emerging that involve very large numbers of small, low-power, wireless devices. We present findings from a large scale empirical study involving over 150 such nodes operated at various transmission power settings. The instrumentation in our experiments permits us to separate effects at the various layers of the protocol stack. At the link layer, we present statistics on packet reception, effective communication range and link asymmetry; at the MAC layer, we measure contention, collision and latency; and at the application layer, we analyze the structure of trees constructed using flooding. The study reveals that even a simple protocol, flooding, can exhibit surprising complexity at scale. The data and analysis lay a foundation for a much wider set of algorithmic studies in this space." ] }
1205.3205
1508865733
Unlike static documents, version-controlled documents are edited by one or more authors over a certain period of time. Examples include large scale computer code, papers authored by a team of scientists, and online discussion boards. Such collaborative revision process makes traditional document modeling and visualization techniques inappropriate. In this paper we propose a new visualization technique for version-controlled documents that reveals interesting authoring patterns in papers, computer code and Wikipedia articles. The revealed authoring patterns are useful for the readers, participants in the authoring process, and supervisors.
Several attempts have been made to visualize themes and topics in documents, either by keeping track of the word distribution or by dimensionality reduction techniques, e.g., @cite_3 @cite_23 @cite_7 @cite_26 . Such studies tend to visualize a corpus of unrelated documents, as opposed to the ordered collections of revisions which we explore.
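The dimensionality-reduction route mentioned above can be illustrated in a few lines of linear algebra. The sketch below is a toy example, not the implementation of any cited system: the term-document matrix and its vocabulary are invented for illustration. It projects documents onto the top two latent dimensions via truncated SVD, the core step of LSI-based corpus visualization.

```python
import numpy as np

# Toy term-document count matrix: rows = terms, columns = documents.
# Docs 0-1 share "graph"/"network" vocabulary; docs 2-3 share "text"/"topic".
X = np.array([
    [3, 2, 0, 0],   # "graph"
    [2, 3, 0, 0],   # "network"
    [0, 0, 2, 1],   # "text"
    [0, 0, 1, 2],   # "topic"
], dtype=float)

# Truncated SVD: keep the two strongest latent semantic dimensions.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
coords = (np.diag(s[:2]) @ Vt[:2]).T   # one 2-D point per document

for i, (x, y) in enumerate(coords):
    print(f"doc{i}: ({x:+.2f}, {y:+.2f})")
```

Documents that share vocabulary land close together in the resulting 2-D map, which is what makes such projections useful for spotting the main topics of a corpus.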
{ "cite_N": [ "@cite_26", "@cite_23", "@cite_7", "@cite_3" ], "mid": [ "", "2106738877", "2143967859", "2105057414" ], "abstract": [ "", "The ThemeRiver visualization depicts thematic variations over time within a large collection of documents. The thematic changes are shown in the context of a time-line and corresponding external events. The focus on temporal thematic change within a context framework allows a user to discern patterns that suggest relationships or trends. For example, the sudden change of thematic strength following an external event may indicate a causal relationship. Such patterns are not readily accessible in other visualizations of the data. We use a river metaphor to convey several key notions. The document collection's time-line, selected thematic content and thematic strength are indicated by the river's directed flow, composition and changing width, respectively. The directed flow from left to right is interpreted as movement through time and the horizontal distance between two points on the river defines a time interval. At any point in time, the vertical distance, or width, of the river indicates the collective strength of the selected themes. Colored \"currents\" flowing within the river represent individual themes. A current's vertical width narrows or broadens to indicate decreases or increases in the strength of the individual theme.", "This paper introduces a novel representation, called the InfoCrystal, that can be used as a visualization tool as well as a visual query language to help users search for information. The InfoCrystal visualizes all the possible relationships among N concepts. Users can assign relevance weights to the concepts and use thresholding to select relationships of interest. The InfoCrystal allows users to specify Boolean as well as vector-space queries graphically. Arbitrarily complex queries can be created by using the InfoCrystals as building blocks and organizing them in a hierarchical structure. 
The InfoCrystal enables users to explore and filter information in a flexible, dynamic and interactive way.", "Visualization is commonly used in data analysis to help the user in getting an initial idea about the raw data as well as visual representation of the regularities obtained in the analysis. In a similar way, when we talk about automated text processing and the data consists of text documents, visualization of text document corpus can be very useful. From the automated text processing point of view, natural language is very redundant in the sense that many different words share a common or similar meaning. For a computer this can be hard to understand without some background knowledge. We describe an approach to visualization of text document collection based on methods from linear algebra. We apply Latent Semantic Indexing (LSI) as a technique that helps in extracting some of the background knowledge from a corpus of text documents. This can be also viewed as extraction of hidden semantic concepts from text documents. In this way visualization can be very helpful in data analysis, for instance, for finding main topics that appear in larger sets of documents. Extraction of main concepts from documents using techniques such as LSI can make the results of visualizations more useful. For example, given a set of descriptions of European Research projects (6FP) one can find main areas that these projects cover including semantic web, e-learning, security, etc. In this paper we describe a method for visualization of document corpus based on LSI, the system implementing it and give results of using the system on several datasets." ] }
1205.3205
1508865733
Unlike static documents, version-controlled documents are edited by one or more authors over a certain period of time. Examples include large scale computer code, papers authored by a team of scientists, and online discussion boards. Such collaborative revision process makes traditional document modeling and visualization techniques inappropriate. In this paper we propose a new visualization technique for version-controlled documents that reveals interesting authoring patterns in papers, computer code and Wikipedia articles. The revealed authoring patterns are useful for the readers, participants in the authoring process, and supervisors.
A partial list of references for text visualization is @cite_7 @cite_9 @cite_23 @cite_3 @cite_10 @cite_20 , with additional references available in @cite_27 . A selection of software systems for visualizing text corpora are IN-SPIRE http://in-spire.pnl.gov , Jigsaw http://www.cc.gatech.edu/gvu/ii/jigsaw , the Enron corpus viewer http://jheer.org/enron , Thomson's RefViz http://www.refviz.com , and the Science topic browser http://www.cs.cmu.edu/lemur/science .
{ "cite_N": [ "@cite_7", "@cite_9", "@cite_3", "@cite_27", "@cite_23", "@cite_10", "@cite_20" ], "mid": [ "2143967859", "1838945765", "2105057414", "", "2106738877", "2145436368", "2072644219" ], "abstract": [ "This paper introduces a novel representation, called the InfoCrystal, that can be used as a visualization tool as well as a visual query language to help users search for information. The InfoCrystal visualizes all the possible relationships among N concepts. Users can assign relevance weights to the concepts and use thresholding to select relationships of interest. The InfoCrystal allows users to specify Boolean as well as vector-space queries graphically. Arbitrarily complex queries can be created by using the InfoCrystals as building blocks and organizing them in a hierarchical structure. The InfoCrystal enables users to explore and filter information in a flexible, dynamic and interactive way. >", "This paper introduces a graphical method for visually presenting and exploring the results of multiple queries simultaneously. This method allows a user to visually compare multiple query result sets, explore various combinations among the query result sets, and identify the \"best\" matches for combinations of multiple independent queries. This approach might also help users explore methods for progressively improving queries by visually comparing the improvement in result sets.", "Visualization is commonly used in data analysis to help the user in getting an initial idea about the raw data as well as visual representation of the regularities obtained in the analysis. In similar way, when we talk about automated text processing and the data consists of text documents, visualization of text document corpus can be very useful. From the automated text processing point of view, natural language is very redundant in the sense that many different words share a common or similar meaning. For computer this can be hard to understand without some background knowledge. 
We describe an approach to visualization of text document collection based on methods from linear algebra. We apply Latent Semantic Indexing (LSI) as a technique that helps in extracting some of the background knowledge from corpus of text documents. This can be also viewed as extraction of hidden semantic concepts from text documents. In this way visualization can be very helpful in data analysis, for instance, for finding main topics that appear in larger sets of documents. Extraction of main concepts from documents using techniques such as LSI, can make the results of visualizations more useful. For example, given a set of descriptions of European Research projects (6FP) one can find main areas that these projects cover including semantic web, e-learning, security, etc. In this paper we describe a method for visualization of document corpus based on LSI, the system implementing it and give results of using the system on several datasets.", "", "The ThemeRiver visualization depicts thematic variations over time within a large collection of documents. The thematic changes are shown in the context of a time-line and corresponding external events. The focus on temporal thematic change within a context framework allows a user to discern patterns that suggest relationships or trends. For example, the sudden change of thematic strength following an external event may indicate a causal relationship. Such patterns are not readily accessible in other visualizations of the data. We use a river metaphor to convey several key notions. The document collection's time-line, selected thematic content and thematic strength are indicated by the river's directed flow, composition and changing width, respectively. The directed flow from left to right is interpreted as movement through time and the horizontal distance between two points on the river defines a time interval. 
At any point in time, the vertical distance, or width, of the river indicates the collective strength of the selected themes. Colored \"currents\" flowing within the river represent individual themes. A current's vertical width narrows or broadens to indicate decreases or increases in the strength of the individual theme.", "We present Themail, a visualization that portrays relationships using the interaction histories preserved in email archives. Using the content of exchanged messages, it shows the words that characterize one's correspondence with an individual and how they change over the period of the relationship.This paper describes the interface and content-parsing algorithms in Themail. It also presents the results from a user study where two main interaction modes with the visualization emerged: exploration of \"big picture\" trends and themes in email (haystack mode) and more detail-oriented exploration (needle mode). Finally, the paper discusses the limitations of the content parsing approach in Themail and the implications for further research on email content visualization.", "A family of probabilistic time series models is developed to analyze the time evolution of topics in large document collections. The approach is to use state space models on the natural parameters of the multinomial distributions that represent the topics. Variational approximations based on Kalman filters and nonparametric wavelet regression are developed to carry out approximate posterior inference over the latent topics. In addition to giving quantitative, predictive models of a sequential corpus, dynamic topic models provide a qualitative window into the contents of a large document collection. The models are demonstrated by analyzing the OCR'ed archives of the journal Science from 1880 through 2000." ] }
1205.3205
1508865733
Unlike static documents, version-controlled documents are edited by one or more authors over a certain period of time. Examples include large scale computer code, papers authored by a team of scientists, and online discussion boards. Such collaborative revision process makes traditional document modeling and visualization techniques inappropriate. In this paper we propose a new visualization technique for version-controlled documents that reveals interesting authoring patterns in papers, computer code and Wikipedia articles. The revealed authoring patterns are useful for the readers, participants in the authoring process, and supervisors.
Visualizing numeric data, such as word histograms, serves a foundational role in visualizing complicated textual objects. Monographs describing traditional visualization techniques are @cite_5 @cite_17 , while less traditional approaches for visual data exploration are surveyed in @cite_24 . Some interesting ideas concerning the visualization of low-dimensional numeric time series appear in @cite_1 @cite_11 . Recent trends in the area of time series visualization are mostly concerned with interactive visualization and with multiple or vector-valued time series. An interesting exposition of the state-of-the-art and future vision in the related field of visual analytics is @cite_27 .
{ "cite_N": [ "@cite_1", "@cite_17", "@cite_24", "@cite_27", "@cite_5", "@cite_11" ], "mid": [ "2097236039", "2155843307", "2119111481", "", "", "2111736285" ], "abstract": [ "In this paper, we present a new approach for the visualization of time-series data based on spirals. Different to classical bar charts and line graphs, the spiral is suited to visualize large data sets and supports much better the identification of periodic structures in the data. Moreover, it supports both the visualization of nominal and quantitative data based on a similar visualization metaphor. The extension of the spiral visualization to 3D gives access to concepts for zooming and focusing and linking in the data set. As such, spirals complement other visualization techniques for time series and specifically enhance the identification of periodic patterns.", "So read three of the reviews of Edward Tufte's The Visual Display of Quantitative Information. Since the first edition came out in 1983, it has been regarded as a timeless classic and a bestseller in information graphics. It has since had three 'follow-ups': Envisioning Information (1990), Visual Explanations (1997) and Beautiful Evidence (2006). With his deep expertise in information graphics, Edward Tufte is today regarded as one of the foremost pioneers of the field, and he has been awarded more than 40 prizes for his works. He is a professor at Yale University and to this day teaches statistics, information design and political economy. In The Visual Display of Quantitative Information, Tufte critically discusses several examples of information graphics created through the ages. The message is summed up in clear guidelines on what is right and wrong in the field: how to visualize quantitative information correctly.
The book has therefore been well received by several professional groups, almost regarded as a bible, though it has also faced a good deal of criticism itself.", "We survey work on the different uses of graphical mapping and interaction techniques for visual data mining of large data sets represented as table data. Basic terminology related to data mining, data sets, and visualization is introduced. Previous work on information visualization is reviewed in light of different categorizations of techniques and systems. The role of interaction techniques is discussed, in addition to work addressing the question of selecting and evaluating visualization techniques. We review some representative work on the use of information visualization techniques in the context of mining data. This includes both visual data exploration and visually expressing the outcome of specific mining algorithms. We also review recent innovative approaches that attempt to integrate visualization into the DM/KDD process, using it to enhance user interaction and comprehension.", "", "", "Timeboxes are rectangular widgets that can be used in direct-manipulation graphical user interfaces (GUIs) to specify query constraints on time series data sets. Timeboxes are used to specify simultaneously two sets of constraints: given a set of N time series profiles, a timebox covering time periods x1...x2 (x1<x2) and values y1...y2 (y1≤y2) will retrieve only those n ∈ N that have values y1≤y≤y2 during all times x1≤x≤x2. TimeSearcher is an information visualization tool that combines timebox queries with overview displays, query-by-example facilities, and support for queries over multiple time-varying attributes. Query manipulation tools including pattern inversion and 'leaders & laggards' graphical bookmarks provide additional support for interactive exploration of data sets.
Extensions to the basic timebox model that provide additional expressivity include variable time timeboxes, which can be used to express queries with variability in the time interval, and angular queries, which search for ranges of differentials, rather than absolute values. Analysis of the algorithmic requirements for providing dynamic query performance for timebox queries showed that a sequential search outperformed searches based on geometric indices. Design studies helped identify the strengths and weaknesses of the query tools. Extended case studies involving the analysis of two different types of data from molecular biology experiments provided valuable feedback and validated the utility of both the timebox model and the TimeSearcher tool. Timesearcher is available at http://www.cs.umd.edu/hcil/timesearcher" ] }
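The timebox constraint quoted in the TimeSearcher abstract above (keep only the series whose values stay within [y1, y2] at every time in [x1, x2]) can be sketched in a few lines. This is a hypothetical minimal filter, not the TimeSearcher implementation; the sample profiles are invented for illustration.

```python
def timebox(profiles, t1, t2, y1, y2):
    """Return only those time series whose values lie in [y1, y2]
    at every time step t with t1 <= t <= t2 (the timebox constraint)."""
    return [s for s in profiles
            if all(y1 <= s[t] <= y2 for t in range(t1, t2 + 1))]

profiles = [
    [1, 2, 3, 4],   # rises through the box during times 1..2
    [2, 2, 2, 2],   # constant, inside the box
    [5, 5, 5, 5],   # always above the box
]

# Keep profiles whose values lie in [1, 3] during times 1..2.
print(timebox(profiles, 1, 2, 1, 3))  # -> [[1, 2, 3, 4], [2, 2, 2, 2]]
```

Conjunctive queries over several timeboxes then amount to chaining such filters, which is why a sequential scan of the profiles can remain competitive with geometric indices at interactive scales.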
1205.3205
1508865733
Unlike static documents, version-controlled documents are edited by one or more authors over a certain period of time. Examples include large-scale computer code, papers authored by a team of scientists, and online discussion boards. Such a collaborative revision process makes traditional document modeling and visualization techniques inappropriate. In this paper we propose a new visualization technique for version-controlled documents that reveals interesting authoring patterns in papers, computer code and Wikipedia articles. The revealed authoring patterns are useful for readers, participants in the authoring process, and supervisors.
Good overviews of techniques for visualizing the software evolution process are @cite_2 @cite_18 . Specific examples include SeeSoft @cite_21 , a line-by-line visualization of source code, as well as Augur @cite_8 and Advizor @cite_13 . The latter two are collections of visualizations, such as 2D and 2.5D matrix views, which identify file and source changes in terms of project branch, date, author, etc. Cenqua FishEye (http://fisheye.cenqua.com) is a commercial tool for visually interacting with software repositories; however, its interface consists of text-centric views and graphical displays showing line charts and histograms. The StarGate project @cite_6 and CodeSaw (http://social.cs.uiuc.edu/projects/codesaw.html) serve roles similar to FishEye, i.e. tracking where and to what extent authors are concentrating their efforts, but provide a less static presentation.
{ "cite_N": [ "@cite_18", "@cite_8", "@cite_21", "@cite_6", "@cite_2", "@cite_13" ], "mid": [ "1563325992", "2044762648", "2150231504", "", "2067172306", "2119158444" ], "abstract": [ "Here is an ideal textbook on software visualization, written especially for students and teachers in computer science. It provides a broad and systematic overview of the area including many pointers to tools available today. Topics covered include static program visualization, algorithm animation, visual debugging, as well as the visualization of the evolution of software. The author's presentation emphasizes common principles and provides different examples mostly taken from seminal work. In addition, each chapter is followed by a list of exercises including both pen-and-paper exercises as well as programming tasks.", "In large projects, software developers struggle with two sources of complexity - the complexity of the code itself, and the complexity of the process of producing it. Both of these concerns have been subjected to considerable research investigation, and tools and techniques have been developed to help manage them. However, these solutions have generally been developed independently, making it difficult to deal with problems that inherently span both dimensions. We describe Augur, a visualization tool that supports distributed software development processes. Augur creates visual representations of both software artifacts and software development activities, and, crucially, allows developers to explore the relationship between them. Augur is designed not for managers, but for the developers participating in the software development process. We discuss some of the early results of informal evaluation with open source software developers. 
Our experiences to date suggest that combining views of artifacts and activities is both meaningful and valuable to software developers.", "The Seesoft software visualization system allows one to analyze up to 50000 lines of code simultaneously by mapping each line of code into a thin row. The color of each row indicates a statistic of interest, e.g., red rows are those most recently changed, and blue are those least recently changed. Seesoft displays data derived from a variety of sources, such as version control systems that track the age, programmer, and purpose of the code (e.g., control ISDN lamps, fix bug in call forwarding); static analyses, (e.g., locations where functions are called); and dynamic analyses (e.g., profiling). By means of direct manipulation and high interaction graphics, the user can manipulate this reduced representation of the code in order to find interesting patterns. Further insight is obtained by using additional windows to display the actual code. Potential applications for Seesoft include discovery, project management, code tuning, and analysis of development methodologies. >", "", "This paper proposes a framework for describing, comparing and understanding visualization tools that provide awareness of human activities in software development. The framework has several purposes -- it can act as a formative evaluation mechanism for tool designers; as an assessment tool for potential tool users; and as a comparison tool so that tool researchers can compare and understand the differences between various tools and identify potential new research areas. We use this framework to structure a survey of visualization tools for activity awareness in software development. Based on this survey we suggest directions for future research.", "A key problem in software engineering is changing the code. We present a sequence of visualizations and visual metaphors designed to help engineers understand and manage the software change process. 
The principal metaphors are matrix views, cityscapes, bar and pie charts, data sheets and networks. Linked by selection mechanisms, multiple views are combined to form perspectives that both enable discovery of high-level structure in software change data and allow effective access to details of those data. Use of the views and perspectives is illustrated in two important contexts: understanding software change by exploration of software change data and management of software development. Our approach complements existing visualizations of software structure and software execution." ] }
1205.2519
2043251334
We extend the higher-order termination method of dynamic dependency pairs to Algebraic Functional Systems (AFSs). In this setting, simply typed lambda-terms with algebraic reduction and separate beta-steps are considered. For left-linear AFSs, the method is shown to be complete. For so-called local AFSs we define a variation of usable rules and an extension of argument filterings. All these techniques have been implemented in the higher-order termination tool WANDA.
Dynamic Dependency Pairs for HRSs A first definition of dependency pairs for HRSs is given in @cite_37 . Here termination is not equivalent to the absence of infinite dependency chains, and a term is required to be greater than its subterms (the subterm property), which makes many optimisations impossible. In @cite_33 (extended abstract) we have discussed how the subterm property may be weakened by posing restrictions on the rules, and in @cite_25 , the short version of this paper, we have explored an extension of the dynamic approach to AFSs.
{ "cite_N": [ "@cite_37", "@cite_25", "@cite_33" ], "mid": [ "1505645573", "1548902908", "1495662220" ], "abstract": [ "This paper explores how to extend the dependency pair technique for proving termination of higher-order rewrite systems. We show that the termination property of higher-order rewrite systems can be checked by the non-existence of an infinite R-chain, which is an extension of Arts’ and Giesl’s result for the first-order case. It is clarified that the subterm property of the quasi-ordering, used for proving termination automatically, is indispensable.", "We extend the termination method using dynamic dependency pairs to higher order rewriting systems with beta as a rewrite step, also called Algebraic Functional Systems (AFSs). We introduce a variation of usable rules, and use monotone algebras to solve the constraints generated by dependency pairs. This approach differs in several respects from those dealing with higher order rewriting modulo beta (e.g. HRSs).", "We present a termination method for left-linear Higher-order Rewrite Systems (HRSs) that are algebraic using a higher-order generalization of dependency pairs with argument filterings." ] }
1205.2519
2043251334
We extend the higher-order termination method of dynamic dependency pairs to Algebraic Functional Systems (AFSs). In this setting, simply typed lambda-terms with algebraic reduction and separate beta-steps are considered. For left-linear AFSs, the method is shown to be complete. For so-called local AFSs we define a variation of usable rules and an extension of argument filterings. All these techniques have been implemented in the higher-order termination tool WANDA.
Static Dependency Pairs for HRSs The static approach in @cite_21 is moved to the setting of HRSs in @cite_5 , and extended with argument filterings and usable rules in @cite_27 . The static approach omits dependency pairs @math with @math a variable, which avoids the need for a subterm property, but it allows bound variables to become free in the right-hand side of a dependency pair. The technique is restricted to HRSs. A system with for instance the (terminating) rule @math cannot be handled. Moreover, the approach is not complete: a terminating AFS may have a static dependency chain.
{ "cite_N": [ "@cite_5", "@cite_27", "@cite_21" ], "mid": [ "2119852735", "2091226256", "1978279608" ], "abstract": [ "Higher-order rewrite systems (HRSs) and simply-typed term rewriting systems (STRSs) are computational models of functional programs. We recently proposed an extremely powerful method, the static dependency pair method, which is based on the notion of strong computability, in order to prove termination in STRSs. In this paper, we extend the method to HRSs. Since HRSs include λ-abstraction but STRSs do not, we restructure the static dependency pair method to allow λ-abstraction, and show that the static dependency pair method also works well on HRSs without new restrictions.", "The static dependency pair method is a method for proving the termination of higher-order rewrite systems a la Nipkow. It combines the dependency pair method introduced for first-order rewrite systems with the notion of strong computability introduced for typed lambda-calculi. Argument filterings and usable rules are two important methods of the dependency pair framework used by current state-of-the-art first-order automated termination provers. In this paper, we extend the class of higher-order systems on which the static dependency pair method can be applied. Then, we extend argument filterings and usable rules to higher-order rewriting, hence providing the basis for a powerful automated termination prover for higher-order rewrite systems.", "We enhance the dependency pair method in order to prove termination using recursive structure analysis in simply-typed term rewriting systems, which is one of the computational models of functional programs. The primary advantage of our method is that one can exclude higher-order variables which are difficult to analyze theoretically, from recursive structure analysis. The key idea of our method is to analyze recursive structure from the viewpoint of strong computability. 
This property was introduced for proving termination in typed λ-calculus, and is a stronger condition than the property of termination. The difficulty in incorporating this concept into recursive structure analysis is that because it is defined inductively over type structure, it is not closed under the subterm relation. This breaks the correspondence between strong computability and recursive structure. In order to guarantee the correspondence, we propose plain function-passing as a restriction, which is satisfied by many non-artificial functional programs." ] }
1205.2519
2043251334
We extend the higher-order termination method of dynamic dependency pairs to Algebraic Functional Systems (AFSs). In this setting, simply typed lambda-terms with algebraic reduction and separate beta-steps are considered. For left-linear AFSs, the method is shown to be complete. For so-called local AFSs we define a variation of usable rules and an extension of argument filterings. All these techniques have been implemented in the higher-order termination tool WANDA.
The definitions for HRSs @cite_37 @cite_5 do not immediately carry over to AFSs, since AFSs may have rules of functional type, and @math -reduction is a separate rewrite step. A short paper by Blanqui @cite_12 introduces static dependency pairs on a form of rewriting which includes AFSs, but it poses some restrictions, such as base-type rules. The present work considers dynamic dependency pairs for AFSs and is most related to @cite_37 , but is adapted for the different formalism. Our method conservatively extends the one for first-order rewriting and provides a characterisation of termination for left-linear AFSs. We have chosen a dynamic rather than a static approach because, although the static approach is stronger when applicable, the dynamic definitions can be given without restrictions. The restrictions we do provide, to weaken the subterm property and enable for instance argument filterings, are optional. We will say some words about integrating the static and dynamic approaches in .
{ "cite_N": [ "@cite_5", "@cite_37", "@cite_12" ], "mid": [ "2119852735", "1505645573", "2964098951" ], "abstract": [ "Higher-order rewrite systems (HRSs) and simply-typed term rewriting systems (STRSs) are computational models of functional programs. We recently proposed an extremely powerful method, the static dependency pair method, which is based on the notion of strong computability, in order to prove termination in STRSs. In this paper, we extend the method to HRSs. Since HRSs include λ-abstraction but STRSs do not, we restructure the static dependency pair method to allow λ-abstraction, and show that the static dependency pair method also works well on HRSs without new restrictions.", "This paper explores how to extend the dependency pair technique for proving termination of higher-order rewrite systems. We show that the termination property of higher-order rewrite systems can be checked by the non-existence of an infinite R-chain, which is an extension of Arts’ and Giesl’s result for the first-order case. It is clarified that the subterm property of the quasi-ordering, used for proving termination automatically, is indispensable.", "Arts and Giesl proved that the termination of a first-order rewrite system can be reduced to the study of its dependency pairs''. We extend these results to rewrite systems on simply typed lambda-terms by using Tait's computability technique." ] }
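The dynamic dependency pair method discussed in the passage above conservatively extends the first-order construction of Arts and Giesl. A minimal sketch of that first-order construction follows; the term encoding (nested tuples for function applications, plain strings for variables) and the example rewrite system are our own illustrative choices, not taken from the paper:

```python
# Sketch of first-order dependency pairs (Arts & Giesl): for each rule
# l -> r and each subterm u of r whose root is a defined symbol, emit
# the pair (l#, u#), where '#' marks the root symbol.

def root(t):
    # Root symbol of a term; variables (plain strings) have none.
    return t[0] if isinstance(t, tuple) else None

def subterms(t):
    # Enumerate t and all of its subterms.
    yield t
    if isinstance(t, tuple):
        for arg in t[1:]:
            yield from subterms(arg)

def mark(t):
    # l -> l#: mark the root symbol.
    return (t[0] + '#',) + t[1:]

def dependency_pairs(rules):
    # Defined symbols are the roots of left-hand sides.
    defined = {root(l) for l, _ in rules}
    pairs = []
    for l, r in rules:
        for u in subterms(r):
            if root(u) in defined:
                pairs.append((mark(l), mark(u)))
    return pairs

# Example TRS:  plus(0, y) -> y ;  plus(s(x), y) -> s(plus(x, y))
R = [(('plus', ('0',), 'y'), 'y'),
     (('plus', ('s', 'x'), 'y'), ('s', ('plus', 'x', 'y')))]
```

Running `dependency_pairs(R)` yields the single pair PLUS(s(x), y) -> PLUS(x, y), mirroring the recursive call; termination then reduces to the absence of infinite chains of such pairs, which is the property the dynamic higher-order method generalizes.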
1205.2833
2951024333
For small cell technology to significantly increase the capacity of tower-based cellular networks, mobile users will need to be actively pushed onto the more lightly loaded tiers (corresponding to, e.g., pico and femtocells), even if they offer a lower instantaneous SINR than the macrocell base station (BS). Optimizing a function of the long-term rates for each user requires (in general) a massive utility maximization problem over all the SINRs and BS loads. On the other hand, an actual implementation will likely resort to a simple biasing approach where a BS in tier j is treated as having its SINR multiplied by a factor A_j>=1, which makes it appear more attractive than the heavily-loaded macrocell. This paper bridges the gap between these approaches through several physical relaxations of the network-wide optimal association problem, whose solution is NP hard. We provide a low-complexity distributed algorithm that converges to a near-optimal solution with a theoretical performance guarantee, and we observe that simple per-tier biasing loses surprisingly little, if the bias values A_j are chosen carefully. Numerical results show a large (3.5x) throughput gain for cell-edge users and a 2x rate gain for median users relative to a max received power association.
The existing work on cell association can be broadly classified into two groups: Strategies based on channel borrowing from lightly-loaded cells, such as hybrid channel assignment (HCA) @cite_21 , channel borrowing without locking (CBWL) @cite_16 , load balancing with selective borrowing (LBSB) @cite_15 @cite_24 , etc.; Strategies based on traffic transfer to lightly-loaded cells, such as directed retry @cite_8 , mobile-assisted call admission algorithms (MACA) @cite_14 , hierarchical macrocell overlay systems @cite_17 @cite_25 , cell breathing techniques @cite_2 @cite_28 , and biasing methods in HetNets @cite_11 . The approach in this paper is based on traffic transfer. There have been many efforts in the literature toward traffic transfer strategies in macro-only cellular networks. The so-called "cell breathing" technique @cite_2 @cite_28 dynamically changes (contracts or expands) the coverage area depending on the load situation (over-loaded or under-loaded) of the cells by adjusting the transmit power. Sang @cite_9 proposed an integrated framework consisting of MAC-layer cell breathing and load-aware handoff and cell-site selection. Cell breathing aims to balance the load among neighboring macrocells, while in HetNets we additionally need to balance the load among different tiers.
{ "cite_N": [ "@cite_14", "@cite_11", "@cite_8", "@cite_28", "@cite_9", "@cite_21", "@cite_24", "@cite_2", "@cite_15", "@cite_16", "@cite_25", "@cite_17" ], "mid": [ "2118576677", "2136530738", "2000268008", "2037831381", "1975618234", "2007331106", "", "2130827801", "2688136489", "2107108957", "2153029410", "2160441617" ], "abstract": [ "In a cellular network, a fixed number of channels is normally assigned to each cell. However, under this scheme, the channel usage may not be efficient because of the variability in the offered traffic. Different approaches such as channel borrowing (CB) and dynamic channel allocation (DCA) have been proposed to accommodate variable traffic. In this paper, we expand on the CB scheme and propose a new channel allocation scheme-mobile-assisted connection-admission (MACA) algorithm-to achieve load balancing in a cellular network. In this scheme, some special channels are used to connect mobile units from different cells; thus, a mobile unit, which is unable to connect to its own base station because it is in a heavily-loaded \"hot\" cell, may be able to get connected to its neighboring cold cell's base station through a two-hop link. We find that MACA can greatly improve the performance of a cellular network.", "As the spectral efficiency of a point-to-point link in cellular networks approaches its theoretical limits, with the forecasted explosion of data traffic, there is a need for an increase in the node density to further improve network capacity. However, in already dense deployments in today's networks, cell splitting gains can be severely limited by high inter-cell interference. Moreover, high capital expenditure cost associated with high power macro nodes further limits viability of such an approach. This article discusses the need for an alternative strategy, where low power nodes are overlaid within a macro network, creating what is referred to as a heterogeneous network. 
We survey current state of the art in heterogeneous deployments and focus on 3GPP LTE air interface to describe future trends. A high-level overview of the 3GPP LTE air interface, network nodes, and spectrum allocation options is provided, along with the enabling mechanisms for heterogeneous deployments. Interference management techniques that are critical for LTE heterogeneous deployments are discussed in greater detail. Cell range expansion, enabled through cell biasing and adaptive resource partitioning, is seen as an effective method to balance the load among the nodes in the network and improve overall trunking efficiency. An interference cancellation receiver plays a crucial role in ensuring acquisition of weak cells and reliability of control and data reception in the presence of legacy signals.", "A directed retry facility, which enables subscribers in a mobile telephone system to look for free radio channels in more than one cell, is investigated with respect to blocking probability and channel utilization. An iterative procedure is devised, by which the dependencies among cells can be illustrated. This procedure makes use of theories developed for overflow systems in classical telephony, and proves to be very accurate for the situations under study. Analytical results are compared with simulations and good agreement is observed. Results show that a substantial improvement, compared with systems without a directed retry facility, can be achieved as far as carried traffic is concerned. The improvement is accomplished at the expense of those subscribers who cannot make use of the directed retry facility due to variations in radio coverage.", "Maximizing network throughput while providing fairness is one of the key challenges in wireless LANs (WLANs). This goal is typically achieved when the load of access points (APs) is balanced. Recent studies on operational WLANs, however, have shown that AP load is often substantially uneven. 
To alleviate such imbalance of load, several load balancing schemes have been proposed. These schemes commonly require proprietary software or hardware at the user side for controlling the user-AP association. In this paper we present a new load balancing technique by controlling the size of WLAN cells (i.e., AP's coverage range), which is conceptually similar to cell breathing in cellular networks. The proposed scheme does not require any modification to the users neither the IEEE 802.11 standard. It only requires the ability of dynamically changing the transmission power of the AP beacon messages. We develop a set of polynomial time algorithms that find the optimal beacon power settings which minimize the load of the most congested AP. We also consider the problem of network-wide min-max load balancing. Simulation results show that the performance of the proposed method is comparable with or superior to the best existing association-based methods.", "We investigate a wireless system of multiple cells, each having a downlink shared channel in support of high-speed packet data services. In practice, such a system consists of hierarchically organized entities including a central server, Base Stations (BSs), and Mobile Stations (MSs). Our goal is to improve global resource utilization and reduce regional congestion given asymmetric arrivals and departures of mobile users, a goal requiring load balancing among multiple cells. For this purpose, we propose a scalable cross-layer framework to coordinate packet-level scheduling, call-level cell-site selection and handoff, and system-level cell coverage based on load, throughput, and channel measurements. In this framework, an opportunistic scheduling algorithm--the weighted Alpha-Rule--exploits the gain of multiuser diversity in each cell independently, trading aggregate (mean) down-link throughput for fairness and minimum rate guarantees among MSs. 
Each MS adapts to its channel dynamics and the load fluctuations in neighboring cells, in accordance with MSs' mobility or their arrival and departure, by initiating load-aware handoff and cell-site selection. The central server adjusts schedulers of all cells to coordinate their coverage by prompting cell breathing or distributed MS handoffs. Across the whole system, BSs and MSs constantly monitor their load, throughput, or channel quality in order to facilitate the overall system coordination. Our specific contributions in such a framework are highlighted by the minimum-rate guaranteed weighted Alpha-Rule scheduling, the load-aware MS handoff cell-site selection, and the Media Access Control (MAC)-layer cell breathing. Our evaluations show that the proposed framework can improve global resource utilization and load balancing, resulting in a smaller blocking rate of MS arrivals without extra resources while the aggregate throughput remains roughly the same or improved at the hot-spots. Our simulation tests also show that the coordinated system is robust to dynamic load fluctuations and is scalable to both the system dimension and the size of MS population.", "This paper considers the problem of channel assignment in mobile communication systems, where the service area is divided in hexagonal cells. In particular, a Hybrid Channel Assignment Scheme is studied and certain results are obtained using GPSS simulation of a model 40-cell system.", "", "Third generation code-division multiple access (CDMA) systems propose to provide packet data service through a high speed shared channel with intelligent and fast scheduling at the base-stations. In the current approach base-stations schedule independently of other base-stations. We consider scheduling schemes in which scheduling decisions are made jointly for a cluster of cells thereby enhancing performance through interference avoidance and dynamic load balancing. 
We consider algorithms that assume complete knowledge of the channel quality information from each of the base-stations to the terminals at the centralized scheduler as well as a two-tier scheduling strategy that assumes only the knowledge of the long term channel conditions at the centralized scheduler. We demonstrate that in the case of asymmetric traffic distribution, where load imbalance is most pronounced, significant throughput gains can be obtained while the gains in the symmetric case are modest. Since the load balancing is achieved through centralized scheduling, our scheme can adapt to time-varying traffic patterns dynamically.", "We propose a dynamic load balancing scheme for the channel assignment problem in a cellular mobile environment. As an underlying approach, we start with a fixed assignment scheme where each cell is initially allocated a set of channels, each to be assigned on demand to a user in the cell. A cell is classified as ‘hot’, if the degree of coldness of a cell (defined as the ratio of the number of available channels to the total number of channels for that cell), is less than or equal to some threshold value. Otherwise the cell is cold'. Our load balancing scheme proposes to migrate unused channels from underloaded cells to an overloaded one. This is achieved through borrowing a fixed number of channels from cold cells to a hot one according to a channel borrowing algorithm. A channel assignment strategy is also proposed based on dividing the users in a cell into three broad types—‘new’, ‘departing’, ‘others’—and forming different priority classes of channel demands from these three types of users. Assignment of the local and borrowed channels are performed according to the priority classes. Next, a Markov model for an individual cell is developed, where the state is determined by the number of occupied channels in the cell. 
The probability for a cell being hot and the call blocking probability in a hot cell are derived, and a method to estimate the value of the threshold is also given. Detailed simulation experiments are carried out in order to evaluate our proposed methodology. The performance of our load balancing scheme is compared with the fixed channel assignment, simple borrowing, and two existing strategies with load balancing (e.g., directed retry and CBWL), and a significant improvement of the system behavior is noted in all cases.", "A new scheme that allows cell gateways (base stations) to borrow channels from adjacent gateways in a cellular communication system is presented. Borrowed channels are used with reduced transmitted power to limit interference with cochannel cells. No channel locking is needed. The scheme, which can be used with various multiple access techniques, permits simple channel control management without requiring global information about channel usage throughout the system. It provides enhanced traffic performance in homogeneous environments and also can be used to relieve spatially localized traffic overloads (tele-traffic 'hot spots'). Co-channel interference analysis shows that the scheme can maintain the same SIR as nonborrowing schemes. Analytical models using multidimensional birth-death processes and decomposition methods are devised to characterize performance. The results which are also validated by simulation indicate that significantly increased traffic capacity can be achieved in comparison with nonborrowing schemes. >", "The capacity of wireless networks can be increased via dynamic load balancing sharing by employing overlay networks on top of the existing cellular networks. 
One such recently proposed system is the integrated cellular and ad hoc relay (iCAR) system, where an overlay ad hoc network is employed to use the resources efficiently by dynamically balancing the load of the hot spots in the cellular network, and to provide quality-of-service to subscribers, no matter where they are located and when the request is made. It is assumed that this overlay network operates in the 2.4-GHz Industrial, Scientific, and Medical (ISM) band and, hence, the number of available ISM-band relay channels used for load balancing will be limited due to other users' interference at a given point in time. In this paper, the impact of ISM-band interference on the performance of iCAR systems, which is a representative hybrid wireless network, is studied, and it is shown that dynamic load balancing and sharing capabilities of iCAR systems are strictly dependent on the availability of the ISM-band relay channels. In addition to quantifying the impact of the number of available relay channels on the performance of iCAR systems, a simple channel assignment scheme to reduce the performance degradation due to other users' interference is also provided. Results show that this interference avoidance technique can improve the realistic performance of iCAR-like hybrid wireless networks by 12 -23 when the interferers are uniformly distributed in the ISM-band.", "The popularity of wireless communication systems can be seen almost everywhere in the form of cellular networks, WLANs, and WPANs. In addition, small portable devices have been increasingly equipped with multiple communication interfaces building a heterogeneous environment in terms of access technologies. The desired ubiquitous computing environment of the future has to exploit this multitude of connectivity alternatives resulting from diverse wireless communication systems and different access technologies to provide useful services with guaranteed quality to users. 
Many new applications require a ubiquitous computing environment capable of accessing information from different portable devices at any time and everywhere. This has motivated researchers to integrate various wireless platforms such as cellular networks, WLANs, and MANETs. Integration of different technologies with different capabilities and functionalities is an extremely complex task and involves issues at all layers of the protocol stack. This article envisions an architecture for state-of-the-art heterogeneous multihop networks, and identifies research issues that need to be addressed for successful integration of heterogeneous technologies for the next generation of wireless and mobile networks." ] }
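The per-tier biasing rule described in the abstract above (a tier-j base station is treated as having its SINR multiplied by a factor A_j >= 1, making lightly loaded small cells look more attractive than the macrocell) can be sketched as follows; the station list, tier names, and numeric values are illustrative, not taken from the paper:

```python
# Biased cell association: each user attaches to the base station that
# maximizes A_j * SINR, where A_j >= 1 is the bias of the station's tier.
# With all A_j = 1 this reduces to plain max-SINR association.

def biased_association(stations, bias):
    """stations: list of (tier, sinr) pairs; bias: dict tier -> A_j >= 1.
    Returns the index of the chosen base station."""
    return max(range(len(stations)),
               key=lambda i: bias[stations[i][0]] * stations[i][1])

# The macro BS offers the highest raw SINR (8.0 vs 1.5), but a 10x pico
# bias (biased metric 15.0 > 8.0) pushes the user onto the small cell.
stations = [('macro', 8.0), ('pico', 1.5)]
unbiased = biased_association(stations, {'macro': 1.0, 'pico': 1.0})   # -> 0
biased = biased_association(stations, {'macro': 1.0, 'pico': 10.0})    # -> 1
```

This is the simple biasing baseline the paper compares against its near-optimal distributed association; choosing the A_j carefully is what the paper observes loses surprisingly little relative to the full optimization.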
1205.3109
2953152866
Bayesian model-based reinforcement learning is a formally elegant approach to learning optimal behaviour under model uncertainty, trading off exploration and exploitation in an ideal way. Unfortunately, finding the resulting Bayes-optimal policies is notoriously taxing, since the search space becomes enormous. In this paper we introduce a tractable, sample-based method for approximate Bayes-optimal planning which exploits Monte-Carlo tree search. Our approach outperformed prior Bayesian model-based RL algorithms by a significant margin on several well-known benchmark problems -- because it avoids expensive applications of Bayes rule within the search tree by lazily sampling models from the current beliefs. We illustrate the advantages of our approach by showing it working in an infinite state space domain which is qualitatively out of reach of almost all previous work in Bayesian exploration.
Bayesian DP @cite_25 maintains a posterior distribution over transition models. At each step, a single model is sampled, and the action that is optimal in that model is executed. The Best Of Sampled Set (BOSS) algorithm generalizes this idea @cite_9 . BOSS samples a number of models from the posterior and combines them optimistically. This drives sufficient exploration to obtain finite-sample performance guarantees. BOSS is quite sensitive to the parameter that governs its sampling criterion, which is unfortunately difficult to select. Castro and Precup proposed SBOSS, a variant that provides a more effective adaptive sampling criterion @cite_12 . BOSS algorithms are generally quite robust, but suffer from over-exploration.
{ "cite_N": [ "@cite_9", "@cite_25", "@cite_12" ], "mid": [ "2950270837", "1582436621", "2126848223" ], "abstract": [ "We present a modular approach to reinforcement learning that uses a Bayesian representation of the uncertainty over models. The approach, BOSS (Best of Sampled Set), drives exploration by sampling multiple models from the posterior and selecting actions optimistically. It extends previous work by providing a rule for deciding when to resample and how to combine the models. We show that our algorithm achieves nearoptimal reward with high probability with a sample complexity that is low relative to the speed at which the posterior distribution converges during learning. We demonstrate that BOSS performs quite favorably compared to state-of-the-art reinforcement-learning approaches and illustrate its flexibility by pairing it with a non-parametric model that generalizes across states.", "The reinforcement learning problem can be decomposed into two parallel types of inference: (i) estimating the parameters of a model for the underlying process; (ii) determining behavior which maximizes return under the estimated model. Following Dearden, Friedman and Andre (1999), it is proposed that the learning process estimates online the full posterior distribution over models. To determine behavior, a hypothesis is sampled from this distribution and the greedy policy with respect to the hypothesis is obtained by dynamic programming. By using a different hypothesis for each trial appropriate exploratory and exploitative behavior is obtained. This Bayesian method always converges to the optimal policy for a stationary process with discrete states.", "Bayesian reinforcement learning (RL) is aimed at making more efficient use of data samples, but typically uses significantly more computation. 
For discrete Markov Decision Processes, a typical approach to Bayesian RL is to sample a set of models from an underlying distribution, and compute value functions for each, e.g. using dynamic programming. This makes the computation cost per sampled model very high. Furthermore, the number of model samples to take at each step has mainly been chosen in an ad-hoc fashion. We propose a principled method for determining the number of models to sample, based on the parameters of the posterior distribution over models. Our sampling method is local, in that we may choose a different number of samples for each state-action pair. We establish bounds on the error in the value function between a random model sample and the mean model from the posterior distribution. We compare our algorithm against state-of-the-art methods and demonstrate that our method provides a better trade-off between performance and running time." ] }
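The posterior-sampling idea behind Bayesian DP and BOSS can be sketched in a few lines: keep Dirichlet pseudo-counts over transitions, draw one model from the posterior, solve it by value iteration, and act greedily in the sampled model. This is the single-sample case of @cite_25; BOSS would draw several models and merge them optimistically. The toy 2-state MDP, the known rewards, and all constants below are illustrative assumptions, not details from the cited papers.

```python
import random

# Sketch of posterior (Thompson) sampling over MDP models: Dirichlet
# pseudo-counts over P(s' | s, a), one sampled model, value iteration on it.
N_STATES, N_ACTIONS, GAMMA = 2, 2, 0.95

# Dirichlet pseudo-counts for P(s' | s, a), initialised to a uniform prior.
counts = [[[1.0] * N_STATES for _ in range(N_ACTIONS)] for _ in range(N_STATES)]
rewards = [[0.0, 1.0], [0.5, 0.0]]   # assumed known mean rewards R(s, a)

def sample_model(rng):
    """Draw one transition model from the Dirichlet posterior."""
    model = []
    for s in range(N_STATES):
        row = []
        for a in range(N_ACTIONS):
            gammas = [rng.gammavariate(c, 1.0) for c in counts[s][a]]
            total = sum(gammas)
            row.append([g / total for g in gammas])  # normalised Dirichlet draw
        model.append(row)
    return model

def greedy_policy(model, iters=200):
    """Value iteration on the sampled model; return its greedy policy."""
    q = lambda s, a, v: rewards[s][a] + GAMMA * sum(
        p * v[s2] for s2, p in enumerate(model[s][a]))
    v = [0.0] * N_STATES
    for _ in range(iters):
        v = [max(q(s, a, v) for a in range(N_ACTIONS)) for s in range(N_STATES)]
    return [max(range(N_ACTIONS), key=lambda a: q(s, a, v))
            for s in range(N_STATES)], v

policy, values = greedy_policy(sample_model(random.Random(0)))
print(policy, [round(x, 2) for x in values])
```

Resampling a fresh model per episode yields Bayesian DP's exploratory behaviour; sampling K models and taking the per-state max over their Q-values would give the optimistic merge used by BOSS.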
1205.3109
2953152866
Bayesian model-based reinforcement learning is a formally elegant approach to learning optimal behaviour under model uncertainty, trading off exploration and exploitation in an ideal way. Unfortunately, finding the resulting Bayes-optimal policies is notoriously taxing, since the search space becomes enormous. In this paper we introduce a tractable, sample-based method for approximate Bayes-optimal planning which exploits Monte-Carlo tree search. Our approach outperformed prior Bayesian model-based RL algorithms by a significant margin on several well-known benchmark problems -- because it avoids expensive applications of Bayes rule within the search tree by lazily sampling models from the current beliefs. We illustrate the advantages of our approach by showing it working in an infinite state space domain which is qualitatively out of reach of almost all previous work in Bayesian exploration.
Sparse sampling @cite_3 is a sample-based tree search algorithm. The key idea is to sample successor nodes from each state, and apply a Bellman backup to update the value of the parent node from the values of the child nodes. The authors of @cite_11 applied sparse sampling to search over belief-state MDPs. The tree is expanded non-uniformly according to the sampled trajectories. At each decision node, a promising action is selected using Thompson sampling --- i.e., sampling an MDP from that belief-state, solving the MDP and taking the optimal action. At each chance node, a successor belief-state is sampled from the transition dynamics of the belief-state MDP.
{ "cite_N": [ "@cite_3", "@cite_11" ], "mid": [ "1512919909", "2071814471" ], "abstract": [ "A critical issue for the application of Markov decision processes (MDPs) to realistic problems is how the complexity of planning scales with the size of the MDP. In stochastic environments with very large or infinite state spaces, traditional planning and reinforcement learning algorithms may be inapplicable, since their running time typically grows linearly with the state space size in the worst case. In this paper we present a new algorithm that, given only a generative model (a natural and common type of simulator) for an arbitrary MDP, performs on-line, near-optimal planning with a per-state running time that has no dependence on the number of states. The running time is exponential in the horizon time (which depends only on the discount factor γ and the desired degree of approximation to the optimal policy). Our algorithm thus provides a different complexity trade-off than classical algorithms such as value iteration—rather than scaling linearly in both horizon time and state space size, our running time trades an exponential dependence on the former in exchange for no dependence on the latter. Our algorithm is based on the idea of sparse sampling. We prove that a randomly sampled look-ahead tree that covers only a vanishing fraction of the full look-ahead tree nevertheless suffices to compute near-optimal actions from any state of an MDP. Practical implementations of the algorithm are discussed, and we draw ties to our related recent results on finding a near-best strategy from a given class of strategies in very large partially observable MDPs (Kearns, Mansour, & Ng. Neural information processing systems 13, to appear).", "We present an efficient \"sparse sampling\" technique for approximating Bayes optimal decision making in reinforcement learning, addressing the well known exploration versus exploitation tradeoff. 
Our approach combines sparse sampling with Bayesian exploration to achieve improved decision making while controlling computational cost. The idea is to grow a sparse lookahead tree, intelligently, by exploiting information in a Bayesian posterior---rather than enumerate action branches (standard sparse sampling) or compensate myopically (value of perfect information). The outcome is a flexible, practical technique for improving action selection in simple reinforcement learning scenarios." ] }
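A minimal sketch of the sparse sampling backup described above: draw a few successors per action from a generative model, recurse to a fixed depth, and back values up by averaging. The noisy chain-walk generative model and the width/depth constants are illustrative assumptions; the cited paper's analysis fixes these from the discount factor and accuracy target.

```python
import random

# Sparse sampling: per-state cost depends on WIDTH and DEPTH, not on the
# size of the state space -- only a generative model is needed.
GAMMA, WIDTH, DEPTH = 0.9, 4, 3
rng = random.Random(1)

def generative_model(s, a):
    """Noisy 6-state chain: action 1 moves right, 0 moves left; reward at 5."""
    s2 = max(0, min(5, s + (1 if a == 1 else -1) + rng.choice((-1, 0, 0))))
    return s2, (1.0 if s2 == 5 else 0.0)

def estimate_v(s, depth):
    if depth == 0:
        return 0.0
    return max(estimate_q(s, a, depth) for a in (0, 1))

def estimate_q(s, a, depth):
    total = 0.0
    for _ in range(WIDTH):                 # sample WIDTH successor states
        s2, r = generative_model(s, a)
        total += r + GAMMA * estimate_v(s2, depth - 1)
    return total / WIDTH                   # Bellman backup (sample average)

best_action = max((0, 1), key=lambda a: estimate_q(4, a, DEPTH))
print(best_action)
```

The belief-state variant of @cite_11 replaces `generative_model` with sampling from the belief-state MDP's dynamics and replaces the uniform `max` over actions with a Thompson-sampling choice at decision nodes.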
1205.3109
2953152866
Bayesian model-based reinforcement learning is a formally elegant approach to learning optimal behaviour under model uncertainty, trading off exploration and exploitation in an ideal way. Unfortunately, finding the resulting Bayes-optimal policies is notoriously taxing, since the search space becomes enormous. In this paper we introduce a tractable, sample-based method for approximate Bayes-optimal planning which exploits Monte-Carlo tree search. Our approach outperformed prior Bayesian model-based RL algorithms by a significant margin on several well-known benchmark problems -- because it avoids expensive applications of Bayes rule within the search tree by lazily sampling models from the current beliefs. We illustrate the advantages of our approach by showing it working in an infinite state space domain which is qualitatively out of reach of almost all previous work in Bayesian exploration.
Bayesian Exploration Bonus (BEB) solves the posterior mean MDP, but with an additional reward bonus that depends on visitation counts @cite_14 . Similarly, the authors of @cite_18 propose an algorithm with a different form of exploration bonus. These algorithms provide performance guarantees after a polynomial number of steps in the environment. However, behavior in the early steps of exploration is very sensitive to the precise exploration bonuses, and it turns out to be hard to translate sophisticated prior knowledge into the form of a bonus.
{ "cite_N": [ "@cite_18", "@cite_14" ], "mid": [ "1596778849", "2124352385" ], "abstract": [ "The explore exploit dilemma is one of the central challenges in Reinforcement Learning (RL). Bayesian RL solves the dilemma by providing the agent with information in the form of a prior distribution over environments; however, full Bayesian planning is intractable. Planning with the mean MDP is a common myopic approximation of Bayesian planning. We derive a novel reward bonus that is a function of the posterior distribution over environments, which, when added to the reward in planning with the mean MDP, results in an agent which explores efficiently and effectively. Although our method is similar to existing methods when given an uninformative or unstructured prior, unlike existing methods, our method can exploit structured priors. We prove that our method results in a polynomial sample complexity and empirically demonstrate its advantages in a structured exploration task.", "We consider the exploration exploitation problem in reinforcement learning (RL). The Bayesian approach to model-based RL offers an elegant solution to this problem, by considering a distribution over possible models and acting to maximize expected reward; unfortunately, the Bayesian solution is intractable for all but very restricted cases. In this paper we present a simple algorithm, and prove that with high probability it is able to perform e-close to the true (intractable) optimal Bayesian policy after some small (polynomial in quantities describing the system) number of time steps. The algorithm and analysis are motivated by the so-called PAC-MDP approach, and extend such results into the setting of Bayesian RL. In this setting, we show that we can achieve lower sample complexity bounds than existing algorithms, while using an exploration strategy that is much greedier than the (extremely cautious) exploration of PAC-MDP algorithms." ] }
1205.2555
2950687828
Integrating open data sources can yield high value information but raises major problems in terms of metadata extraction, data source integration and visualization of integrated data. In this paper, we describe WebSmatch, a flexible environment for Web data integration, based on a real, end-to-end data integration scenario over public data from Data Publica. WebSmatch supports the full process of importing, refining and integrating data sources and uses third party tools for high quality visualization. We use a typical scenario of public data integration which involves problems not solved by current tools: poorly structured input data sources (XLS files) and rich visualization of integrated data.
In terms of metadata extraction, the problem of identifying charts in documents using machine learning techniques has been widely studied over the last decade. In @cite_3 , the authors propose a method to automatically detect bar-charts and pie-charts, using computer vision techniques and instance-based learning. The approach developed in @cite_12 relies on a multiclass Support Vector Machine as the machine learning classifier. It is able to identify more kinds of charts, namely bar-charts, curve-plots, pie-charts, scatter-plots and surface-plots. More generally, @cite_1 presents a survey of extraction techniques of diagrams in complex documents, such as scanned documents.
{ "cite_N": [ "@cite_1", "@cite_12", "@cite_3" ], "mid": [ "1522548419", "2097817347", "" ], "abstract": [ "Document image analysis is the study of converting documents from paper form to an electronic form that captures the information content of the document. Necessary processing includes recognition of document layout (to determine reading order, and to distinguish text from diagrams), recognition of text (called Optical Character Recognition, OCR), and processing of diagrams and photographs. The processing of diagrams has been an active research area for several decades. A selection of existing diagram recognition techniques are presented in this paper. Challenging problems in diagram recognition include (1) the great diversity of diagram types, (2) the difficulty of adequately describing the syntax and semantics of diagram notations, and (3) the need to handle imaging noise. Recognition techniques that are discussed include blackboard systems, stochastic grammars, Hidden Markov Models, and graph grammars.", "We present an approach for classifying images of charts based on the shape and spatial relationships of their primitives. Five categories are considered: bar-charts, curve-plots, pie-charts, scatter-plots and surface-plots. We introduce two novel features to represent the structural information based on (a) region segmentation and (b) curve saliency. The local shape is characterized using the Histograms of Oriented Gradients (HOG) and the Scale Invariant Feature Transform (SIFT) descriptors. Each image is represented by sets of feature vectors of each modality. The similarity between two images is measured by the overlap in the distribution of the features -measured using the Pyramid Match algorithm. A test image is classified based on its similarity with training images from the categories. The approach is tested with a database of images collected from the Internet.", "" ] }
1205.2265
1854414374
Online learning constitutes a mathematical and compelling framework to analyze sequential decision making problems in adversarial environments. The learner repeatedly chooses an action, the environment responds with an outcome, and then the learner receives a reward for the played action. The goal of the learner is to maximize his total reward. However, there are situations in which, in addition to maximizing the cumulative reward, there are some additional constraints on the sequence of decisions that must be satisfied on average by the learner. In this paper we study an extension to the online learning where the learner aims to maximize the total reward given that some additional constraints need to be satisfied. By leveraging on the theory of Lagrangian method in constrained optimization, we propose Lagrangian exponentially weighted average (LEWA) algorithm, which is a primal-dual variant of the well known exponentially weighted average algorithm, to efficiently solve constrained online decision making problems. Using novel theoretical analysis, we establish the regret and the violation of the constraint bounds in full information and bandit feedback models.
As is well known, a wide range of literature deals with the online decision making problem without constraints, and there exist a number of regret-minimizing algorithms with optimal regret bounds. The most well-known and successful work is probably the Hedge algorithm @cite_14 , which was a direct generalization of Littlestone and Warmuth's Weighted Majority (WM) algorithm @cite_4 . Other recent studies include improved theoretical bounds and a parameter-free hedging algorithm @cite_19 and adaptive Hedge @cite_20 for decision-theoretic online learning. We refer readers to @cite_11 for an in-depth discussion of this subject.
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_19", "@cite_20", "@cite_11" ], "mid": [ "1988790447", "2093825590", "1519060350", "", "1570963478" ], "abstract": [ "In the first part of the paper we consider the problem of dynamically apportioning resources among a set of options in a worst-case on-line framework. The model we study can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting. We show that the multiplicative weight-update Littlestone?Warmuth rule can be adapted to this model, yielding bounds that are slightly weaker in some cases, but applicable to a considerably more general class of learning problems. We show how the resulting learning algorithm can be applied to a variety of problems, including gambling, multiple-outcome prediction, repeated games, and prediction of points in Rn. In the second part of the paper we apply the multiplicative weight-update technique to derive a new boosting algorithm. This boosting algorithm does not require any prior knowledge about the performance of the weak learning algorithm. We also study generalizations of the new boosting algorithm to the problem of learning functions whose range, rather than being binary, is an arbitrary finite set or a bounded segment of the real line.", "We study the construction of prediction algorithms in a situation in which a learner faces a sequence of trials, with a prediction to be made in each, and the goal of the learner is to make few mistakes. We are interested in the case where the learner has reason to believe that one of some pool of known algorithms will perform well, but the learner does not know which one. A simple and effective method, based on weighted voting, is introduced for constructing a compound algorithm in such a circumstance. We call this method the Weighted Majority Algorithm. We show that this algorithm is robust in the presence of errors in the data. 
We discuss various versions of the Weighted Majority Algorithm and prove mistake bounds for them that are closely related to the mistake bounds of the best algorithms of the pool. For example, given a sequence of trials, if there is an algorithm in the pool A that makes at most m mistakes then the Weighted Majority Algorithm will make at most c(log |A| + m) mistakes on that sequence, where c is fixed constant.", "We study the problem of decision-theoretic online learning (DTOL). Motivated by practical applications, we focus on DTOL when the number of actions is very large. Previous algorithms for learning in this framework have a tunable learning rate parameter, and a barrier to using online-learning in practical applications is that it is not understood how to set this parameter optimally, particularly when the number of actions is large. In this paper, we offer a clean solution by proposing a novel and completely parameter-free algorithm for DTOL. We introduce a new notion of regret, which is more natural for applications with a large number of actions. We show that our algorithm achieves good performance with respect to this new notion of regret; in addition, it also achieves performance close to that of the best bounds achieved by previous algorithms with optimally-tuned parameters, according to previous notions of regret.", "", "1. Introduction 2. Prediction with expert advice 3. Tight bounds for specific losses 4. Randomized prediction 5. Efficient forecasters for large classes of experts 6. Prediction with limited feedback 7. Prediction and playing games 8. Absolute loss 9. Logarithmic loss 10. Sequential investment 11. Linear pattern recognition 12. Linear classification 13. Appendix." ] }
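The exponentially weighted average (Hedge) update that this line of work builds on is short enough to state in full: keep one weight per expert, play the normalised weights as a mixed strategy, then multiply each weight by exp(-eta * loss). The two-expert loss sequence and the learning rate below are illustrative assumptions.

```python
import math

# Hedge / exponentially weighted average over two experts.
ETA = 0.5
losses = [(1.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 0.0)]  # expert 1 is better
weights = [1.0, 1.0]
alg_loss = 0.0

for round_losses in losses:
    z = sum(weights)
    probs = [w / z for w in weights]               # mixed strategy this round
    alg_loss += sum(p * l for p, l in zip(probs, round_losses))
    # Multiplicative update: penalise each expert by its observed loss.
    weights = [w * math.exp(-ETA * l) for w, l in zip(weights, round_losses)]

z = sum(weights)
probs = [w / z for w in weights]
print([round(p, 3) for p in probs], round(alg_loss, 3))
```

After four rounds the algorithm concentrates on the better expert, and its cumulative loss exceeds the best expert's loss (here 1.0) only by a regret term of order ln(K)/eta + eta*T/8, which is the classical Hedge bound.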
1205.2265
1854414374
Online learning constitutes a mathematical and compelling framework to analyze sequential decision making problems in adversarial environments. The learner repeatedly chooses an action, the environment responds with an outcome, and then the learner receives a reward for the played action. The goal of the learner is to maximize his total reward. However, there are situations in which, in addition to maximizing the cumulative reward, there are some additional constraints on the sequence of decisions that must be satisfied on average by the learner. In this paper we study an extension to the online learning where the learner aims to maximize the total reward given that some additional constraints need to be satisfied. By leveraging on the theory of Lagrangian method in constrained optimization, we propose Lagrangian exponentially weighted average (LEWA) algorithm, which is a primal-dual variant of the well known exponentially weighted average algorithm, to efficiently solve constrained online decision making problems. Using novel theoretical analysis, we establish the regret and the violation of the constraint bounds in full information and bandit feedback models.
As the first seminal paper in the adversarial setting, @cite_9 introduced online learning with sample path constraints. They considered infinitely repeated two-player games with stochastic rewards where, for every joint action of the players, there is an additional stochastic constraint vector that is accumulated by the decision maker. The learner is asked to keep the cumulative constraint vector in a predefined set in the space of constraint vectors. They showed that if the convex set is affected by both decisions and rewards, the optimal reward is generally unattainable online. The positive result is that a relaxed goal, which is defined in terms of the convex hull of the constrained reward in hindsight, is attainable. For the relaxed setting, they suggested two inefficient algorithms: one relies on Blackwell's approachability theory and the other is based on calibrated forecasts of the adversary's actions. Given the implementation difficulties associated with these two methods, they suggested two efficient heuristic methods to attain the reward while meeting the constraint in the long run. We note that the analysis in @cite_9 is asymptotic, while the bounds to be established in this work are applicable to finite repeated games.
{ "cite_N": [ "@cite_9" ], "mid": [ "2095762034" ], "abstract": [ "We study online learning where a decision maker interacts with Nature with the objective of maximizing her long-term average reward subject to some sample path average constraints. We define the reward-in-hindsight as the highest reward the decision maker could have achieved, while satisfying the constraints, had she known Nature's choices in advance. We show that in general the reward-in-hindsight is not attainable. The convex hull of the reward-in-hindsight function is, however, attainable. For the important case of a single constraint, the convex hull turns out to be the highest attainable function. Using a calibrated forecasting rule, we provide an explicit strategy that attains this convex hull. We also measure the performance of heuristic methods based on non-calibrated forecasters in experiments involving a CPU power management problem." ] }
1205.2265
1854414374
Online learning constitutes a mathematical and compelling framework to analyze sequential decision making problems in adversarial environments. The learner repeatedly chooses an action, the environment responds with an outcome, and then the learner receives a reward for the played action. The goal of the learner is to maximize his total reward. However, there are situations in which, in addition to maximizing the cumulative reward, there are some additional constraints on the sequence of decisions that must be satisfied on average by the learner. In this paper we study an extension to the online learning where the learner aims to maximize the total reward given that some additional constraints need to be satisfied. By leveraging on the theory of Lagrangian method in constrained optimization, we propose Lagrangian exponentially weighted average (LEWA) algorithm, which is a primal-dual variant of the well known exponentially weighted average algorithm, to efficiently solve constrained online decision making problems. Using novel theoretical analysis, we establish the regret and the violation of the constraint bounds in full information and bandit feedback models.
In @cite_17 the budget-limited MAB was introduced, where pulling an arm is costly and the cost of each arm is fixed in advance. In this setting both the exploration and exploitation phases are limited by a global budget. This setting matches the game with stochastic rewards and deterministic constraints without violation discussed before. It has been shown that existing MAB algorithms are not suitable to efficiently deal with costly arms. They proposed the @math algorithm, which dedicates the first @math fraction of the total budget exclusively to exploration and the remaining @math fraction to exploitation. @cite_18 improves the bound obtained in @cite_17 by proposing a knapsack-based UCB algorithm, which extends the UCB algorithm by solving a knapsack problem at each round to cope with the constraints. We note that the knapsack-based UCB does not make an explicit distinction between exploration and exploitation steps as done in the @math algorithm. In both @cite_18 and @cite_17 the algorithm proceeds as long as sufficient budget exists to play the arms.
{ "cite_N": [ "@cite_18", "@cite_17" ], "mid": [ "2950303912", "1761637522" ], "abstract": [ "In budget-limited multi-armed bandit (MAB) problems, the learner's actions are costly and constrained by a fixed budget. Consequently, an optimal exploitation policy may not be to pull the optimal arm repeatedly, as is the case in other variants of MAB, but rather to pull the sequence of different arms that maximises the agent's total reward within the budget. This difference from existing MABs means that new approaches to maximising the total reward are required. Given this, we develop two pulling policies, namely: (i) KUBE; and (ii) fractional KUBE. Whereas the former provides better performance up to 40 in our experimental settings, the latter is computationally less expensive. We also prove logarithmic upper bounds for the regret of both policies, and show that these bounds are asymptotically optimal (i.e. they only differ from the best possible regret by a constant factor).", "We introduce the budget-limited multi-armed bandit (MAB), which captures situations where a learner's actions are costly and constrained by a fixed budget that is incommensurable with the rewards earned from the bandit machine, and then describe a first algorithm for solving it. Since the learner has a budget, the problem's duration is finite. Consequently an optimal exploitation policy is not to pull the optimal arm repeatedly, but to pull the combination of arms that maximises the agent's total reward within the budget. As such, the rewards for all arms must be estimated, because any of them may appear in the optimal combination. This difference from existing MABs means that new approaches to maximising the total reward are required. To this end, we propose an ∊-first algorithm, in which the first ∊ of the budget is used solely to learn the arms' rewards (exploration), while the remaining 1 - ∊ is used to maximise the received reward based on those estimates (exploitation). 
We derive bounds on the algorithm's loss for generic and uniform exploration methods, and compare its performance with traditional MAB algorithms under various distributions of rewards and costs, showing that it outperforms the others by up to 50%." ] }
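The ∊-first policy described above can be sketched directly: spend the first ∊ fraction of the budget uniformly on exploration, then spend the rest pulling the arm with the best estimated reward-per-cost ratio. The arm costs, reward probabilities, budget, and ∊ below are illustrative assumptions.

```python
import random

# Epsilon-first policy for a budget-limited two-armed bandit.
rng = random.Random(3)
COSTS = [1.0, 2.0]
P_REWARD = [0.3, 0.9]      # arm 1 is better even per unit of cost
BUDGET, EPS = 200.0, 0.2

def pull(arm):
    """Bernoulli reward from the chosen arm."""
    return 1.0 if rng.random() < P_REWARD[arm] else 0.0

budget = BUDGET
totals, pulls = [0.0, 0.0], [0, 0]

# Exploration phase: alternate arms until EPS * BUDGET has been spent.
arm = 0
while BUDGET - budget < EPS * BUDGET:
    totals[arm] += pull(arm)
    pulls[arm] += 1
    budget -= COSTS[arm]
    arm = 1 - arm

# Exploitation phase: greedily pull the best estimated reward-per-cost arm
# for as long as the remaining budget allows.
best = max((0, 1), key=lambda a: totals[a] / pulls[a] / COSTS[a])
reward = sum(totals)
while budget >= COSTS[best]:
    reward += pull(best)
    budget -= COSTS[best]

print(best, round(reward, 1))
```

The knapsack-based UCB of @cite_18 removes the hard phase boundary: instead of committing after the exploration phase, it re-solves a small knapsack over confidence-adjusted reward-per-cost estimates at every round.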
1205.2265
1854414374
Online learning constitutes a mathematical and compelling framework to analyze sequential decision making problems in adversarial environments. The learner repeatedly chooses an action, the environment responds with an outcome, and then the learner receives a reward for the played action. The goal of the learner is to maximize his total reward. However, there are situations in which, in addition to maximizing the cumulative reward, there are some additional constraints on the sequence of decisions that must be satisfied on average by the learner. In this paper we study an extension to the online learning where the learner aims to maximize the total reward given that some additional constraints need to be satisfied. By leveraging on the theory of Lagrangian method in constrained optimization, we propose Lagrangian exponentially weighted average (LEWA) algorithm, which is a primal-dual variant of the well known exponentially weighted average algorithm, to efficiently solve constrained online decision making problems. Using novel theoretical analysis, we establish the regret and the violation of the constraint bounds in full information and bandit feedback models.
Finally, we remark that our setting differs from the one considered in @cite_8 , which puts restrictions on the actions taken by the adversary rather than, as in our case, on those of the learner.
{ "cite_N": [ "@cite_8" ], "mid": [ "2166694847" ], "abstract": [ "We study repeated zero-sum games against an adversary on a budget. Given that an adversary has some constraint on the sequence of actions that he plays, we consider what ought to be the player's best mixed strategy with knowledge of this budget. We show that, for a general class of normal-form games, the min-imax strategy is indeed efficiently computable and relies on a \"random playout\" technique. We give three diverse applications of this new algorithmic template: a cost-sensitive \"Hedge\" setting, a particular problem in Metrical Task Systems, and the design of combinatorial prediction markets." ] }
1205.1975
2110321461
In infrastructure-less highly dynamic networks, computing and performing even basic tasks (such as routing and broadcasting) is a very challenging activity due to the fact that connectivity does not necessarily hold, and the network may actually be disconnected at every time instant. Clearly the task of designing protocols for these networks is less difficult if the environment allows waiting (i.e., it provides the nodes with store-carry-forward-like mechanisms such as local buffering) than if waiting is not feasible. No quantitative corroborations of this fact exist (e.g., no answer to the question: how much easier?). In this paper, we consider these qualitative questions about dynamic networks, modeled as time-varying (or evolving) graphs, where edges exist only at some times. We examine the difficulty of the environment in terms of the expressivity of the corresponding time-varying graph; that is in terms of the language generated by the feasible journeys in the graph. We prove that the set of languages Lnowait when no waiting is allowed contains all computable languages. On the other end, using algebraic properties of quasi-orders, we prove that Lwait is just the family of regular languages. In other words, we prove that, when waiting is no longer forbidden, the power of the accepting automaton (difficulty of the environment) drops drastically from being as powerful as a Turing machine, to becoming that of a Finite-State machine. This (perhaps surprisingly large) gap is a measure of the computational power of waiting. We also study bounded waiting; that is when waiting is allowed at a node only for at most d time units. We prove the negative result that Lwait[d] = Lnowait; that is, the expressivity decreases only if the waiting is finite but unpredictable (i.e., under the control of the protocol designer and not of the environment).
The literature on dynamic networks and dynamic graphs could fill volumes. Here we briefly mention only some of the work most directly connected to the results of this paper. The idea of representing dynamic graphs as a sequence of (static) graphs, called an evolving graph, was introduced in @cite_2 , to study basic network problems in dynamic networks from a centralized point of view @cite_9 @cite_11 . The evolving graph views the dynamics of the system as a sequence of global snapshots (taken either in discrete steps or when events occur). The equivalent model of the time-varying graph (TVG), formalized in @cite_7 and used here, views the dynamics of the system from the local point of view of the entities: for any given entity, the local edges and neighborhood can be considered independently from the entire graph (e.g. how long an edge is available, with what properties, with what latency, etc.).
{ "cite_N": [ "@cite_9", "@cite_11", "@cite_7", "@cite_2" ], "mid": [ "1518080829", "1984196269", "1746637951", "1982459859" ], "abstract": [ "New technologies and the deployment of mobile and nomadic services are driving the emergence of complex communications networks, that have a highly dynamic behavior. This naturally engenders new route-discovery problems under changing conditions over these networks. Unfortunately, the temporal variations in the topology of dynamic networks are hard to be effectively captured in a classical graph model. In this paper, we use evolving graphs, which helps capture the dynamic characteristics of such networks, in order to compute multicast trees with minimum overall transmission time for a class of wireless mobile dynamic networks. We first show that computing different types of strongly connected components in evolving digraphs is NP-Complete, and then propose an algorithm to build all rooted directed minimum spanning trees in strongly connected dynamic networks.", "New technologies and the deployment of mobile and nomadic services are driving the emergence of complex communications networks, that have a highly dynamic behavior. This naturally engenders new route-discovery problems under changing conditions over these networks. Unfortunately, the temporal variations in the network topology are hard to be effectively captured in a classical graph model. In this paper, we use and extend a recently proposed graph theoretic model, which helps capture the evolving characteristic of such networks, in order to propose and formally analyze least cost journey (the analog of paths in usual graphs) in a class of dynamic networks, where the changes in the topology can be predicted in advance. 
Cost measures investigated here are hop count (shortest journeys), arrival date (foremost journeys), and time span (fastest journeys).", "The past few years have seen intensive research efforts carried out in some apparently unrelated areas of dynamic systems – delay-tolerant networks, opportunistic-mobility networks and social networks – obtaining closely related insights. Indeed, the concepts discovered in these investigations can be viewed as parts of the same conceptual universe, and the formal models proposed so far to express some specific concepts are the components of a larger formal description of this universe. The main contribution of this paper is to integrate the vast collection of concepts, formalisms and results found in the literature into a unified framework, which we call time-varying graphs TVGs. Using this framework, it is possible to express directly in the same formalism not only the concepts common to all those different areas, but also those specific to each. Based on this definitional work, employing both existing results and original observations, we present a hierarchical classification of TVGs; each class corresponds to a significant property examined in the distributed computing literature. We then examine how TVGs can be used to study the evolution of network properties, and propose different techniques, depending on whether the indicators for these properties are atemporal as in the majority of existing studies or temporal. Finally, we briefly discuss the introduction of randomness in TVGs.", "Wireless technologies and the deployment of mobile and nomadic services are driving the emergence of complex ad hoc networks that have a highly dynamic behavior. Modeling such dynamics and creating a reference model on which results could be compared and reproduced, was stated as a fundamental issue by a recent NSF workshop on networking. 
In this article we show how the modeling of time-changes unsettles old questions and allows for new insights into central problems in networking, such as routing metrics, connectivity, and spanning trees. Such modeling is made possible through evolving graphs, a simple combinatorial model that helps capture the behavior or dynamic networks over time." ] }
1205.1975
2110321461
In infrastructure-less highly dynamic networks, computing and performing even basic tasks (such as routing and broadcasting) is a very challenging activity due to the fact that connectivity does not necessarily hold, and the network may actually be disconnected at every time instant. Clearly the task of designing protocols for these networks is less difficult if the environment allows waiting (i.e., it provides the nodes with store-carry-forward-like mechanisms such as local buffering) than if waiting is not feasible. No quantitative corroborations of this fact exist (e.g., no answer to the question: how much easier?). In this paper, we consider these quantitative questions about dynamic networks, modeled as time-varying (or evolving) graphs, where edges exist only at some times. We examine the difficulty of the environment in terms of the expressivity of the corresponding time-varying graph; that is, in terms of the language generated by the feasible journeys in the graph. We prove that the set of languages Lnowait when no waiting is allowed contains all computable languages. On the other hand, using algebraic properties of quasi-orders, we prove that Lwait is just the family of regular languages. In other words, we prove that, when waiting is no longer forbidden, the power of the accepting automaton (difficulty of the environment) drops drastically from being as powerful as a Turing machine, to becoming that of a Finite-State machine. This (perhaps surprisingly large) gap is a measure of the computational power of waiting. We also study bounded waiting; that is, when waiting is allowed at a node only for at most d time units. We prove the negative result that Lwait[d] = Lnowait; that is, the expressivity decreases only if the waiting is finite but unpredictable (i.e., under the control of the protocol designer and not of the environment).
Both viewpoints have been extensively employed in the analysis of basic problems such as routing, broadcasting, gossiping and other forms of information spreading (e.g., @cite_14 @cite_29 @cite_15 @cite_17 @cite_24 ); to study problems of exploration in vehicular networks with periodic routes @cite_6 @cite_27 ; to examine failure detectors @cite_23 and consensus @cite_1 @cite_31 ; for the probabilistic analysis of informations spreading (e.g., @cite_8 @cite_19 ); and in the investigations of emerging properties in social networks (e.g., @cite_3 @cite_10 ). A characterization of classes of TVGs with respect to properties typically assumed in the research can be found in @cite_7 . The related investigations on dynamic networks include also the extensive work on population protocols (e.g., @cite_18 @cite_25 ); interestingly, the setting over which population protocols are defined is a particular class of time-varying graphs (recurrent interactions over a connected underlying graph). The impact of bounded waiting in dynamic networks has been investigated for exploration @cite_27 .
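Several of the broadcasting and information-spreading results cited above concern flooding over such snapshot sequences. A minimal sketch of synchronous flooding in an evolving graph (names are illustrative assumptions, not from any cited paper), returning the completion time when flooding finishes within the given snapshots:

```python
# Illustrative sketch of synchronous flooding in a dynamic graph
# (names are assumptions, not from the cited papers).

def flooding_time(snapshots, source):
    """Return the step at which every node appearing in the snapshots is
    informed, or None if flooding does not complete in time."""
    nodes = {v for edges in snapshots for e in edges for v in e}
    informed = {source}
    for t, edges in enumerate(snapshots, start=1):
        newly = set()  # compute one synchronous round at a time
        for a, b in edges:
            if a in informed:
                newly.add(b)
            if b in informed:
                newly.add(a)
        informed |= newly
        if informed == nodes:
            return t
    return None
```

On a snapshot sequence where the needed edges appear in the wrong temporal order, flooding never completes; this is the kind of adversarial dynamics studied in the consensus and token-dissemination papers above.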
{ "cite_N": [ "@cite_31", "@cite_14", "@cite_18", "@cite_7", "@cite_8", "@cite_29", "@cite_1", "@cite_6", "@cite_3", "@cite_24", "@cite_19", "@cite_27", "@cite_23", "@cite_15", "@cite_10", "@cite_25", "@cite_17" ], "mid": [ "2135254481", "", "2123429492", "1746637951", "2045494719", "1609474097", "2120741723", "1532359370", "2163636580", "2120957127", "2013558775", "135016661", "2029761647", "2169517980", "2059763605", "", "2143859410" ], "abstract": [ "We study several variants of coordinated consensus in dynamic networks. We assume a synchronous model, where the communication graph for each round is chosen by a worst-case adversary. The network topology is always connected, but can change completely from one round to the next. The model captures mobile and wireless networks, where communication can be unpredictable. In this setting we study the fundamental problems of eventual, simultaneous, and Δ-coordinated consensus, as well as their relationship to other distributed problems, such as determining the size of the network. We show that in the absence of a good initial upper bound on the size of the network, eventual consensus is as hard as computing deterministic functions of the input, e.g., the minimum or maximum of inputs to the nodes. We also give an algorithm for computing such functions that is optimal in every execution. Next, we show that simultaneous consensus can never be achieved in less than n - 1 rounds in any execution, where n is the size of the network; consequently, simultaneous consensus is as hard as computing an upper bound on the number of nodes in the network. For Δ-coordinated consensus, we show that if the ratio between nodes with input 0 and input 1 is bounded away from 1, it is possible to decide in time n-Θ(√ nΔ), where Δ bounds the time from the first decision until all nodes decide. If the dynamic graph has diameter D, the time to decide is min O(nD Δ),n-Ω(nΔ D) , even if D is not known in advance. 
Finally, we show that (a) there is a dynamic graph such that for every input, no node can decide before time n-O(Δ0.28n0.72); and (b) for any diameter D = O(Δ), there is an execution with diameter D where no node can decide before time Ω(nD Δ). To our knowledge, our work constitutes the first study of Δ-coordinated consensus in general graphs.", "", "We consider the model of population protocols introduced by (Computation in networks of passively mobile finite-state sensors, pp. 290–299. ACM, New York, 2004), in which anonymous finite-state agents stably compute a predicate of the multiset of their inputs via two-way interactions in the family of all-pairs communication networks. We prove that all predicates stably computable in this model (and certain generalizations of it) are semilinear, answering a central open question about the power of the model. Removing the assumption of two-way interaction, we also consider several variants of the model in which agents communicate by anonymous message-passing where the recipient of each message is chosen by an adversary and the sender is not identified to the recipient. These one-way models are distinguished by whether messages are delivered immediately or after a delay, whether a sender can record that it has sent a message, and whether a recipient can queue incoming messages, refusing to accept new messages until it has had a chance to send out messages of its own. We characterize the classes of predicates stably computable in each of these one-way models using natural subclasses of the semilinear predicates.", "The past few years have seen intensive research efforts carried out in some apparently unrelated areas of dynamic systems – delay-tolerant networks, opportunistic-mobility networks and social networks – obtaining closely related insights. 
Indeed, the concepts discovered in these investigations can be viewed as parts of the same conceptual universe, and the formal models proposed so far to express some specific concepts are the components of a larger formal description of this universe. The main contribution of this paper is to integrate the vast collection of concepts, formalisms and results found in the literature into a unified framework, which we call time-varying graphs TVGs. Using this framework, it is possible to express directly in the same formalism not only the concepts common to all those different areas, but also those specific to each. Based on this definitional work, employing both existing results and original observations, we present a hierarchical classification of TVGs; each class corresponds to a significant property examined in the distributed computing literature. We then examine how TVGs can be used to study the evolution of network properties, and propose different techniques, depending on whether the indicators for these properties are atemporal as in the majority of existing studies or temporal. Finally, we briefly discuss the introduction of randomness in TVGs.", "An edge-Markovian process with birth-rate p and death-rate q generates sequences of graphs (G0,G1,G2,…) with the same node set [n] such that Gt is obtained from Gt−1 as follows: if e ∉ E(Gt−1) then e ∈ E(Gt) with probability p, and if e ∈ E(Gt−1) then e ∉ E(Gt) with probability q. (PODC 2008) analyzed thoroughly information dissemination in such dynamic graphs, by establishing bounds on their flooding time--flooding is the basic mechanism in which every node becoming aware of an information at step t forwards this information to all its neighbors at all forthcoming steps t∦ > t. 
In this paper, we establish tight bounds on the complexity of flooding for all possible birth rates and death rates, completing the previous results by Moreover, we note that despite its many advantages in term of simplicity and robustness, flooding suffers from its high bandwidth consumption. Hence we also show that flooding in dynamic graphs can be implemented in a more parsimonious manner, so that to save bandwidth, yet preserving efficiency in term of simplicity and completion time. For a positive integer k, we say that the flooding protocol is k-active if each node forwards an information only during the k time steps immediately following the step at which the node receives that information for the first time. We define the reachability threshold for the flooding protocol as the smallest integer k such that, for any source s ∈ [n], the k-active flooding protocol from s completes (i.e., reaches all nodes), and we establish tight bounds for this parameter. We show that, for a large spectrum of parameters p and q, the reachability threshold is by several orders of magnitude smaller than the flooding time. In particular, we show that it is even constant whenever the ratio p (p + q) exceeds log n n. Moreover, we also show that being active for a number of steps equal to the reachability threshold (up to a multiplicative constant) allows the flooding protocol to complete in optimal time, i.e., in asymptotically the same number of steps as when being perpetually active. These results demonstrate that flooding can be implemented in a practical and efficient manner in dynamic graphs. The main ingredient in the proofs of our results is a reduction lemma enabling to overcome the time dependencies in edge-Markovian dynamic graphs.", "Most highly dynamic infrastructure-less networks have in common that the assumption of connectivity does not necessarily hold at a given instant. Still, communication routes can be available between any pair of nodes over time and space. 
These networks (variously called delay-tolerant, disruptive-tolerant, challenged) are naturally modeled as time-varying graphs (or evolving graphs), where the existence of an edge is a function of time. In this paper we study deterministic computations under unstructured mobility, that is when the edges of the graph appear infinitely often but without any (known) pattern. In particular, we focus on the problem of broadcasting with termination detection. We explore the problem with respect to three possible metrics: the date of message arrival (foremost), the time spent doing the broadcast (fastest), and the number of hops used by the broadcast (shortest). We prove that the solvability and complexity of this problem vary with the metric considered, as well as with the type of knowledge a priori available to the entities. These results draw a complete computability map for this problem when mobility is unstructured.", "In this paper we investigate distributed computation in dynamic networks in which the network topology changes from round to round. We consider a worst-case model in which the communication links for each round are chosen by an adversary, and nodes do not know who their neighbors for the current round are before they broadcast their messages. The model captures mobile networks and wireless networks, in which mobility and interference render communication unpredictable. In contrast to much of the existing work on dynamic networks, we do not assume that the network eventually stops changing; we require correctness and termination even in networks that change continually. We introduce a stability property called T -interval connectivity (for T >= 1), which stipulates that for every T consecutive rounds there exists a stable connected spanning subgraph. For T = 1 this means that the graph is connected in every round, but changes arbitrarily between rounds. 
We show that in 1-interval connected graphs it is possible for nodes to determine the size of the network and compute any com- putable function of their initial inputs in O(n2) rounds using messages of size O(log n + d), where d is the size of the input to a single node. Further, if the graph is T-interval connected for T > 1, the computation can be sped up by a factor of T, and any function can be computed in O(n + n2 T) rounds using messages of size O(log n + d). We also give two lower bounds on the token dissemination problem, which requires the nodes to disseminate k pieces of information to all the nodes in the network. The T-interval connected dynamic graph model is a novel model, which we believe opens new avenues for research in the theory of distributed computing in wireless, mobile and dynamic networks.", "We study the computability and complexity of the exploration problem in a class of highly dynamic graphs: periodically varying (PV) graphs, where the edges exist only at some (unknown) times defined by the periodic movements of carriers. These graphs naturally model highly dynamic infrastructure-less networks such as public transports with fixed timetables, low earth orbiting (LEO) satellite systems, security guards' tours, etc. We establish necessary conditions for the problem to be solved. We also derive lower bounds on the amount of time required in general, as well as for the PV graphs defined by restricted classes of carriers movements: simple routes, and circular routes. We then prove that the limitations on computability and complexity we have established are indeed tight. We do so constructively presenting two worst case optimal solution algorithms, one for anonymous systems, and one for those with distinct nodes ids.", "In recent years, gossip-based algorithms have gained prominence as a methodology for designing robust and scalable communication schemes in large distributed systems. 
The premise underlying distributed gossip is very simple: in each time step, each node v in the system selects some other node w as a communication partner, generally by a simple randomized rule, and exchanges information with w; over a period of time, information spreads through the system in an \"epidemic fashion\". A fundamental issue which is not well understood is the following: how does the underlying low-level gossip mechanism (the means by which communication partners are chosen) affect one's ability to design efficient high-level gossip-based protocols? We establish one of the first concrete results addressing this question, by showing a fundamental limitation on the power of the commonly used uniform gossip mechanism for solving nearest-resource location problems. In contrast, very efficient protocols for this problem can be designed using a non-uniform spatial gossip mechanism, as established in earlier work with Alan Demers. We go on to consider the design of protocols for more complex problems, providing an efficient distributed gossip-based protocol for a set of nodes in Euclidean space to construct an approximate minimum spanning tree. Here too, we establish a contrasting limitation on the power of uniform gossip for solving this problem. Finally, we investigate gossip-based packet routing as a primitive that underpins the communication patterns in many protocols, and as a way to understand the capabilities of different gossip mechanisms at a general level.", "We investigate to what extent flooding and routing is possible if the graph is allowed to change unpredictably at each time step. We study what minimal requirements are necessary so that a node may correctly flood or route a message in a network whose links may change arbitrarily at any given point, subject to the condition that the underlying graph is connected. 
We look at algorithmic constraints such as limited storage, no knowledge of an upper bound on the number of nodes, and no usage of identifiers. We look at flooding as well as routing to some existing specified destination and give algorithms.", "We introduce stochastic time-dependency in evolving graphs: starting from an arbitrary initial edge probability distribution, at every time step, every edge changes its state (existing or not) according to a two-state Markovian process with probabilities p (edge birth-rate) and q (edge death-rate). If an edge exists at time t then, at time t+1, it dies with probability q. If instead the edge does not exist at time t, then it will come into existence at time t+1 with probability p. Such evolving graph model is a wide generalization of time-independent dynamic random graphs [6] and will be called edge-Markovian dynamic graphs. We investigate the speed of information dissemination in such dynamic graphs. We provide nearly tight bounds (which in fact turn out to be tight for a wide range of probabilities p and q) on the completion time of the flooding mechanism aiming to broadcast a piece of information from a source node to all nodes. In particular, we provide: i) A tight characterization of the class of edge-Markovian dynamic graphs where flooding time is constant and, thus, it does not asymptotically depend on the initial probability distribution. ii) A tight characterization of the class of edge-Markovian dynamic graphs where flooding time does not asymptotically depend on the edge death-rate q.", "We study the problem of exploration by a mobile entity (agent) of a class of dynamic networks, namely the periodically-varying graphs (the PV-graphs, modeling public transportation systems, among others). These are defined by a set of carriers following infinitely their prescribed route along the stations of the network. 
Flocchini, Mans, and Santoro [FMS09] (ISAAC 2009) studied this problem in the case when the agent must always travel on the carriers and thus cannot wait on a station. They described the necessary and sufficient conditions for the problem to be solvable and proved that the optimal number of steps (and thus of moves) to explore a n-node PV-graph of k carriers and maximal period p is in Θ(k·p2) in the general case. In this paper, we study the impact of the ability to wait at the stations. We exhibit the necessary and sufficient conditions for the problem to be solvable in this context, and we prove that waiting at the stations allows the agent to reduce the worst-case optimal number of moves by a multiplicative factor of at least Θ(p), while the time complexity is reduced to Θ(n·p). (In any connected PV-graph, we have n≤k·p.) We also show some complementary optimal results in specific cases (same period for all carriers, highly connected PV-graphs). Finally this new ability allows the agent to completely map the PV-graph, in addition to just explore it.", "Failure detectors are classical mechanisms which provide information about process failures and can help systems to cope with the high dynamics of self-organizing, unstructured and mobile wireless networks. Unreliable failure detectors of class ◊S are of special interest because they meet the weakest assumptions able to solve fundamental problems on the design of dependable systems. Unfortunately, a negative result states that no failure detector of that class can be implemented in a network of an unknown membership; but full membership knowledge as well as fully communication connectivity are no longer appropriate assumptions to the new scenario of dynamic networks. 
In this paper, we provide a discussion about the conditions and model able to implement failure detectors in dynamic networks and define a new class, namely ◊SM, which adapts the properties of the ◊S class to a dynamic network with an unknown membership.", "Delay-tolerant networks (DTNs) are characterized by a possible absence of end-to-end communication routes at any instant. In most cases, however, a form of connectivity can be established over time and space. This particularity leads to consider the relevance of a given route not only in terms of hops (topological length), but also in terms of time (temporal length). The problem of measuring temporal distances between individuals in a social network was recently addressed, based on a posteriori analysis of interaction traces. This paper focuses on the distributed version of this problem, asking whether every node in a network can know precisely and in real time how out-of-date it is with respect to every other. Answering affirmatively is simple when contacts between the nodes are punctual, using the temporal adaptation of vector clocks provided in (, 2008). It becomes more difficult when contacts have a duration and can overlap in time with each other. We demonstrate that the problem remains solvable with arbitrarily long contacts and non-instantaneous (though invariant and known) propagation delays on edges. This is done constructively by extending the temporal adaptation of vector clocks to non-punctual causality. The second part of the paper discusses how the knowledge of temporal lags could be used as a building block to solve more concrete problems, such as the construction of foremost broadcast trees or network backbones in periodically-varying DTNs.", "Connections in complex networks are inherently fluctuating over time and exhibit more dimensionality than analysis based on standard static graph measures can capture. Here, we introduce the concepts of temporal paths and distance in time-varying graphs. 
We define as temporal small world a time-varying graph in which the links are highly clustered in time, yet the nodes are at small average temporal distances. We explore the small-world behavior in synthetic time-varying networks of mobile agents and in real social and biological time-varying systems.", "", "We present a method for using real world mobility traces to identify tractable theoretical models for the study of distributed algorithms in mobile networks. We validate the method by deriving a vehicular ad hoc network model from a large corpus of position data generated by Boston-area taxicabs. Unlike previous work, our model does not assume global connectivity or eventual stability; it instead assumes only that some subset of processes are connected through transient paths (e.g., paths that exist over time). We use this model to study the problem of prioritized gossip, in which processes attempt to disseminate messages of different priority. Specifically, we present CabChat, a distributed prioritized gossip algorithm that leverages an interesting connection to the classic Tower of Hanoi problem to schedule the broadcast of packets of different priorities. Whereas previous studies of gossip leverage strong connectivity or stabilization assumptions to prove the time complexity of global termination, in our model, with its weak assumptions, we instead analyze CabChat with respect to its ability to deliver a high proportion of high priority messages over the transient paths that happen to exist in a given execution." ] }
1205.2051
2610932619
This work presents a classification of weak models of distributed computing. We focus on deterministic distributed algorithms, and study models of computing that are weaker versions of the widely-studied port-numbering model. In the port-numbering model, a node of degree @math d receives messages through @math d input ports and sends messages through @math d output ports, both numbered with @math 1, 2, …, d . In this work, @math VV c is the class of all graph problems that can be solved in the standard port-numbering model. We study the following subclasses of @math VV c : Now we have many trivial containment relations, such as @math SB ⊆ MB ⊆ VB ⊆ VV ⊆ VV c , but it is not obvious if, for example, either of @math VB ⊆ SV or @math SV ⊆ VB should hold. Nevertheless, it turns out that we can identify a linear order on these classes. We prove that @math SB ⊊ MB = VB ⊊ SV = MV = VV ⊊ VV c . The same holds for the constant-time versions of these classes. We also show that the constant-time variants of these classes can be characterised by a corresponding modal logic. Hence the linear order identified in this work has direct implications in the study of the expressibility of modal logic. Conversely, one can use tools from modal logic to study these classes.
The study of the port-numbering model was initiated by Angluin @cite_28 in 1980. Initially the main focus was on problems that have a global nature---problems in which the local output of a node necessarily depends on the global properties of the input. Examples of papers from the first two decades after Angluin's pioneering work include @cite_31 , Yamashita and Kameda @cite_45 @cite_36 @cite_54 , and Boldi and Vigna @cite_7 , who studied global functions, leader election problems, spanning trees, and topological properties.
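The port-numbering model is easy to simulate. The following minimal Python sketch (function names and the adjacency encoding are my own assumptions) runs one synchronous round in which every anonymous node executes the same send and update functions, communicating only through locally numbered ports. Running it on a symmetric two-node graph illustrates why identical nodes stay in identical states, the obstacle to leader election that Angluin identified:

```python
# Illustrative sketch of one synchronous round in the port-numbering model
# (function names and the adjacency encoding are assumptions).

def run_round(ports, states, send, update):
    """ports[v]: list of (neighbour u, port index of u pointing back to v).
    All nodes run the same deterministic send/update functions."""
    outgoing = {v: [send(states[v], i, len(ports[v]))
                    for i in range(len(ports[v]))]
                for v in ports}
    new_states = {}
    for v in ports:
        # the message arriving on port i of v is what neighbour u sent
        # through its own port that points back to v
        inbox = [outgoing[u][back] for (u, back) in ports[v]]
        new_states[v] = update(states[v], inbox)
    return new_states
```

On the single-edge graph ports = {0: [(1, 0)], 1: [(0, 0)]} with equal initial states, both nodes receive identical inboxes and remain indistinguishable after any number of rounds, so no deterministic algorithm in this model can elect a leader there.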
{ "cite_N": [ "@cite_7", "@cite_28", "@cite_36", "@cite_54", "@cite_45", "@cite_31" ], "mid": [ "1516914512", "1968191405", "", "", "2124068769", "2089155578" ], "abstract": [ "We provide effective (i.e., recursive) characterizations of the relations that can be computed on networks where all processors use the same algorithm, start from the same state, and know at least a bound on the network size. Three activation models are considered (synchronous, asynchronous, interleaved).", "This paper attempts to get at some of the fundamental properties of distributed computing by means of the following question: “How much does each processor in a network of processors need to know about its own identity, the identities of other processors, and the underlying connection network in order for the network to be able to carry out useful functions?” The approach we take is to require that the processors be designed without any knowledge (or only very broad knowledge) of the networks they are to be used in, and furthermore, that all processors with the same number of communication ports be identical. Given a particular network function, e.g., setting up a spanning tree, we ask whether processors may be designed so that when they are embedded in any connected network and started in some initial configuration, they are guaranteed to accomplish the desired function.", "", "", "In anonymous networks, the processors do not have identity numbers. We investigate the following representative problems on anonymous networks: (a) the leader election problem, (b) the edge election problem, (c) the spanning tree construction problem, and (d) the topology recognition problem. On a given network, the above problems may or may not be solvable, depending on the amount of information about the attributes of the network made available to the processors. 
Some possibilities are: (1) no network attribute information at all is available, (2) an upper bound on the number of processors in the network is available, (3) the exact number of processors in the network is available, and (4) the topology of the network is available. In terms of a new graph property called \"symmetricity\", in each of the four cases (1)-(4) above, we characterize the class of networks on which each of the four problems (a)(d) is solvable. We then relate the symmetricity of a network to its 1- and 2-factors.", "The computational capabilities of a system of n indistinguishable (anonymous) processors arranged on a ring in the synchronous and asynchronous models of distributed computation are analyzed. A precise characterization of the functions that can be computed in this setting is given. It is shown that any of these functions can be computed in O ( n 2 ) messages in the asynchronous model. This is also proved to be a lower bound for such elementary functions as AND, SUM, and Orientation. In the synchronous model any computable function can be computed in O ( n log n ) messages. A ring can be oriented and start synchronized within the same bounds. The main contribution of this paper is a new technique for proving lower bounds in the synchronous model. With this technique tight lower bounds of t( n log n ) (for particular n ) are proved for XOR, SUM, Orientation, and Start Synchronization. The technique is based on a string-producing mechanism from formal language theory, first introduced by Thue to study square-free words. Two methods for generalizing the synchronous lower bounds to arbitrary ring sizes are presented." ] }
1205.2051
2610932619
This work presents a classification of weak models of distributed computing. We focus on deterministic distributed algorithms, and study models of computing that are weaker versions of the widely-studied port-numbering model. In the port-numbering model, a node of degree @math d receives messages through @math d input ports and sends messages through @math d output ports, both numbered with @math 1 , 2 , … , d . In this work, @math VV c is the class of all graph problems that can be solved in the standard port-numbering model. We study the following subclasses of @math VV c : Now we have many trivial containment relations, such as @math SB ⊆ MB ⊆ VB ⊆ VV ⊆ VV c , but it is not obvious if, for example, either of @math VB ⊆ SV or @math SV ⊆ VB should hold. Nevertheless, it turns out that we can identify a linear order on these classes. We prove that @math SB ⊊ MB = VB ⊊ SV = MV = VV ⊊ VV c . The same holds for the constant-time versions of these classes. We also show that the constant-time variants of these classes can be characterised by a corresponding modal logic. Hence the linear order identified in this work has direct implications in the study of the expressibility of modal logic. Conversely, one can use tools from modal logic to study these classes.
At first sight, constant-time algorithms in stronger models and distributed algorithms in the port-numbering model seem to be orthogonal concepts. However, in many cases a local algorithm is also an algorithm in the port-numbering model. Indeed, a formal connection between local algorithms and the port-numbering model has been recently identified @cite_34 .
{ "cite_N": [ "@cite_34" ], "mid": [ "2135290452" ], "abstract": [ "In the study of deterministic distributed algorithms it is commonly assumed that each node has a unique O(log n)-bit identifier. We prove that for a general class of graph problems, local algorithms (constant-time distributed algorithms) do not need such identifiers: a port numbering and orientation is sufficient. Our result holds for so-called simple PO-checkable graph optimisation problems; this includes many classical packing and covering problems such as vertex covers, edge covers, matchings, independent sets, dominating sets, and edge dominating sets. We focus on the case of bounded-degree graphs and show that if a local algorithm finds a constant-factor approximation of a simple PO-checkable graph problem with the help of unique identifiers, then the same approximation ratio can be achieved on anonymous networks. As a corollary of our result and by prior work, we derive a tight lower bound on the local approximability of the minimum edge dominating set problem. Our main technical tool is an algebraic construction of homogeneously ordered graphs: We say that a graph is (α,r)-homogeneous if its nodes are linearly ordered so that an α fraction of nodes have pairwise isomorphic radius-r neighbourhoods. We show that there exists a finite (α,r)-homogeneous 2k-regular graph of girth at least g for any α" ] }
1205.2051
2610932619
This work presents a classification of weak models of distributed computing. We focus on deterministic distributed algorithms, and study models of computing that are weaker versions of the widely-studied port-numbering model. In the port-numbering model, a node of degree @math d receives messages through @math d input ports and sends messages through @math d output ports, both numbered with @math 1 , 2 , … , d . In this work, @math VV c is the class of all graph problems that can be solved in the standard port-numbering model. We study the following subclasses of @math VV c : Now we have many trivial containment relations, such as @math SB ⊆ MB ⊆ VB ⊆ VV ⊆ VV c , but it is not obvious if, for example, either of @math VB ⊆ SV or @math SV ⊆ VB should hold. Nevertheless, it turns out that we can identify a linear order on these classes. We prove that @math SB ⊊ MB = VB ⊊ SV = MV = VV ⊊ VV c . The same holds for the constant-time versions of these classes. We also show that the constant-time variants of these classes can be characterised by a corresponding modal logic. Hence the linear order identified in this work has direct implications in the study of the expressibility of modal logic. Conversely, one can use tools from modal logic to study these classes.
If we had no positive examples of problems in classes below @math , there would be little motivation for pursuing further. However, the recent work related to the vertex cover problem @cite_73 calls for further investigation. It turned out that @math -approximation of vertex cover is a graph problem that is not only in @math , but also in @math ---that is, we have a non-trivial graph problem that does not require any access to either outgoing or incoming port numbers. One ingredient of the vertex cover algorithm is the observation that @math , which raises the question of the existence of other similar collapses in the hierarchy of weak models.
{ "cite_N": [ "@cite_73" ], "mid": [ "2116636970" ], "abstract": [ "We present a distributed algorithm that finds a maximal edge packing in O(Δ + log* W) synchronous communication rounds in a weighted graph, independent of the number of nodes in the network; here Δ is the maximum degree of the graph and W is the maximum weight. As a direct application, we have a distributed 2-approximation algorithm for minimum-weight vertex cover, with the same running time. We also show how to find an @math -approximation of minimum-weight set cover in O(f2k2 + fk log* W) rounds; here k is the maximum size of a subset in the set cover instance, f is the maximum frequency of an element, and W is the maximum weight of a subset. The algorithms are deterministic, and they can be applied in anonymous networks." ] }
1205.2051
2610932619
This work presents a classification of weak models of distributed computing. We focus on deterministic distributed algorithms, and study models of computing that are weaker versions of the widely-studied port-numbering model. In the port-numbering model, a node of degree @math d receives messages through @math d input ports and sends messages through @math d output ports, both numbered with @math 1 , 2 , … , d . In this work, @math VV c is the class of all graph problems that can be solved in the standard port-numbering model. We study the following subclasses of @math VV c : Now we have many trivial containment relations, such as @math SB ⊆ MB ⊆ VB ⊆ VV ⊆ VV c , but it is not obvious if, for example, either of @math VB ⊆ SV or @math SV ⊆ VB should hold. Nevertheless, it turns out that we can identify a linear order on these classes. We prove that @math SB ⊊ MB = VB ⊊ SV = MV = VV ⊊ VV c . The same holds for the constant-time versions of these classes. We also show that the constant-time variants of these classes can be characterised by a corresponding modal logic. Hence the linear order identified in this work has direct implications in the study of the expressibility of modal logic. Conversely, one can use tools from modal logic to study these classes.
We are by no means the first to investigate the weak models. Computation in models that are strictly weaker than the standard port-numbering model has been studied since the 1990s, under various terms---see Table for a summary of terminology, and Table for an overview of the main differences in the research directions. Questions related to specific problems, models, and graph families have been studied previously, and indeed many of the techniques and ideas that we use are now standard---this includes the use of symmetry and isomorphisms, local views, covering graphs (lifts) and universal covering graphs, and factors and factorisations. Mayer, Naor, and Stockmeyer @cite_47 @cite_68 made it explicit that the parity of node degrees makes a huge difference in the port-numbering model, and Yamashita and Kameda @cite_45 discussed factors and factorisations in this context; the underlying graph-theoretic observations can be traced back to as far as Petersen's 1891 work @cite_35 . Some equivalences and separations between the classes are already known, or at least implicit in prior work---see, in particular, @cite_72 and Yamashita and Kameda @cite_10 .
{ "cite_N": [ "@cite_35", "@cite_72", "@cite_45", "@cite_47", "@cite_68", "@cite_10" ], "mid": [ "", "164544547", "2124068769", "2017345786", "", "2110664267" ], "abstract": [ "", "We consider the problem of electing a leader in an anonymous network of processors. More precisely our model is that of a directed graph, with vertices corresponding to processors, and arcs to communication links (we freely interchange symmetric digraphs and undirected graphs). We make no assumption on the structure of the network: self-loops and parallel arcs are allowed. In particular, processors are anonymous: they do not have unique identifiers. We consider both synchronous and asynchronous processor activation models, and models with and without “port awareness” (local names for outgoing and/or for incoming arcs). We consider both unidirectional and bidirectional links. Our models will be defined precisely in the sequel. It is well known that once a leader is found, many other", "In anonymous networks, the processors do not have identity numbers. We investigate the following representative problems on anonymous networks: (a) the leader election problem, (b) the edge election problem, (c) the spanning tree construction problem, and (d) the topology recognition problem. On a given network, the above problems may or may not be solvable, depending on the amount of information about the attributes of the network made available to the processors. Some possibilities are: (1) no network attribute information at all is available, (2) an upper bound on the number of processors in the network is available, (3) the exact number of processors in the network is available, and (4) the topology of the network is available. In terms of a new graph property called \"symmetricity\", in each of the four cases (1)-(4) above, we characterize the class of networks on which each of the four problems (a)–(d) is solvable. We then relate the symmetricity of a network to its 1- and 2-factors.", "The purpose of this paper is a study of computation that can be done locally in a distributed network, where \"locally\" means within time (or distance) independent of the size of the network. Locally checkable labeling (LCL) problems are considered, where the legality of a labeling can be checked locally (e.g., coloring). The results include the following: There are nontrivial LCL problems that have local algorithms. There is a variant of the dining philosophers problem that can be solved locally. Randomization cannot make an LCL problem local; i.e., if a problem has a local randomized algorithm then it has a local deterministic algorithm. It is undecidable, in general, whether a given LCL has a local algorithm. However, it is decidable whether a given LCL has an algorithm that operates in a given time @math . Any LCL problem that has a local algorithm has one that is order-invariant (the algorithm depends only on the order of the processor IDs).", "", "In the networks considered in this paper, processors do not have distinct identity numbers. On such a network, we discuss the leader election problem and the problem of counting the number of processors having the same identity number. As the communication mode, we consider port-to-port, broadcast-to-port, port-to-mailbox, and broadcast-to-mailbox. For each of the above communication modes, we present: an algorithm for counting the number of processors with the same identity number; an algorithm for solving the leader election problem; and a graph theoretical characterization of the solvable class for the leader election problem." ] }
1205.2051
2610932619
This work presents a classification of weak models of distributed computing. We focus on deterministic distributed algorithms, and study models of computing that are weaker versions of the widely-studied port-numbering model. In the port-numbering model, a node of degree @math d receives messages through @math d input ports and sends messages through @math d output ports, both numbered with @math 1 , 2 , … , d . In this work, @math VV c is the class of all graph problems that can be solved in the standard port-numbering model. We study the following subclasses of @math VV c : Now we have many trivial containment relations, such as @math SB ⊆ MB ⊆ VB ⊆ VV ⊆ VV c , but it is not obvious if, for example, either of @math VB ⊆ SV or @math SV ⊆ VB should hold. Nevertheless, it turns out that we can identify a linear order on these classes. We prove that @math SB ⊊ MB = VB ⊊ SV = MV = VV ⊊ VV c . The same holds for the constant-time versions of these classes. We also show that the constant-time variants of these classes can be characterised by a corresponding modal logic. Hence the linear order identified in this work has direct implications in the study of the expressibility of modal logic. Conversely, one can use tools from modal logic to study these classes.
Modal logic (see ) has, of course, been applied previously in the context of distributed systems. For example, in their seminal paper, Halpern and Moses @cite_5 use modal logic to model epistemic phenomena in distributed systems. A distributed system @math gives rise to a Kripke model (see ), whose set @math of domain points corresponds to the set of partial runs of @math , that is, finite sequences of global states of @math . For each processor @math of @math , there is an @math such that @math if and only if @math and @math are indistinguishable from the point of view of processor @math . This framework suits well for epistemic considerations.
{ "cite_N": [ "@cite_5" ], "mid": [ "2022205872" ], "abstract": [ "Reasoning about knowledge seems to play a fundamental role in distributed systems. Indeed, such reasoning is a central part of the informal intuitive arguments used in the design of distributed protocols. Communication in a distributed system can be viewed as the act of transforming the system's state of knowledge. This paper presents a general framework for formalizing and reasoning about knowledge in distributed systems. It is shown that states of knowledge of groups of processors are useful concepts for the design and analysis of distributed protocols. In particular, distributed knowledge corresponds to knowledge that is “distributed” among the members of the group, while common knowledge corresponds to a fact being “publicly known.” The relationship between common knowledge and a variety of desirable actions in a distributed system is illustrated. Furthermore, it is shown that, formally speaking, in practical systems common knowledge cannot be attained. A number of weaker variants of common knowledge that are attainable in many cases of interest are introduced and investigated." ] }
1205.2074
2950899441
We study the phase transition of the coalitional manipulation problem for generalized scoring rules. Previously it has been shown that, under some conditions on the distribution of votes, if the number of manipulators is @math , where @math is the number of voters, then the probability that a random profile is manipulable by the coalition goes to zero as the number of voters goes to infinity, whereas if the number of manipulators is @math , then the probability that a random profile is manipulable goes to one. Here we consider the critical window, where a coalition has size @math , and we show that as @math goes from zero to infinity, the limiting probability that a random profile is manipulable goes from zero to one in a smooth fashion, i.e., there is a smooth phase transition between the two regimes. This result analytically validates recent empirical results, and suggests that deciding the coalitional manipulation problem may be of limited computational hardness in practice.
A recent line of research with an average-case algorithmic flavor also suggests that manipulation is indeed typically easy; see, e.g., the work of Kelly @cite_30 , Conitzer and Sandholm @cite_2 , Procaccia and Rosenschein @cite_25 , and @cite_26 for results on certain restricted classes of SCFs. A different approach, initiated by Friedgut, Kalai, Keller and Nisan @cite_23 @cite_36 , who studied the fraction of ranking profiles that are manipulable, also suggests that manipulation is easy on average; see further Xia and Conitzer @cite_16 , Dobzinski and Procaccia @cite_0 , Isaksson, Kindler and Mossel @cite_14 , and Mossel and Rácz @cite_33 . We refer to the survey by Faliszewski and Procaccia @cite_12 for a detailed history of the surrounding literature. See also related literature in economics, e.g., @cite_34 @cite_6 @cite_4 .
{ "cite_N": [ "@cite_30", "@cite_26", "@cite_14", "@cite_33", "@cite_4", "@cite_36", "@cite_34", "@cite_6", "@cite_0", "@cite_23", "@cite_2", "@cite_16", "@cite_25", "@cite_12" ], "mid": [ "2055053954", "2060522636", "2177286302", "2144211415", "2151939563", "2006063189", "", "1975543368", "1859798081", "2070986136", "1566914083", "", "1493942848", "2171945484" ], "abstract": [ "Explores, for several classes of social choice rules, the distribution of the number of profiles at which a rule can be strategically manipulated. In this paper, we will do comparative social choice, looking for information about how social choice rules compare in their vulnerability to strategic misrepresentation of preferences.", "We investigate the problem of coalitional manipulation in elections, which is known to be hard in a variety of voting rules. We put forward efficient algorithms for the problem in Borda, Maximin and Plurality with Runoff, and analyze their windows of error. Specifically, given an instance on which an algorithm fails, we bound the additional power the manipulators need in order to succeed. We finally discuss the implications of our results with respect to the popular approach of employing computational hardness to preclude manipulation.", "We prove a quantitative version of the Gibbard-Satterthwaite theorem. We show that a uniformly chosen voter profile for a neutral social choice function f of q ≥ 4 alternatives and n voters will be manipulable with probability at least 10^{-4} ε^2 n^{-3} q^{-30}, where ε is the minimal statistical distance between f and the family of dictator functions. Our results extend those of [11], which were obtained for the case of 3 alternatives, and imply that the approach of masking manipulations behind computational hardness (as considered in [4,6,9,15,7]) cannot hide manipulations completely. Our proof is geometric. More specifically it extends the method of canonical paths to show that the measure of the profiles that lie on the interface of 3 or more outcomes is large. To the best of our knowledge our result is the first isoperimetric result to establish interface of more than two bodies.", "Recently, quantitative versions of the Gibbard-Satterthwaite theorem were proven for k=3 alternatives by Friedgut, Kalai, Keller and Nisan and for neutral functions on k ≥ 4 alternatives by Isaksson, Kindler and Mossel. In the present paper we prove a quantitative version of the Gibbard-Satterthwaite theorem for general social choice functions for any number k ≥ 3 of alternatives. In particular we show that for a social choice function f on k ≥ 3 alternatives and n voters, which is ε-far from the family of nonmanipulable functions, a uniformly chosen voter profile is manipulable with probability at least inverse polynomial in n, k, and ε^{-1}. Removing the neutrality assumption of previous theorems is important for multiple reasons. For one, it is known that there is a conflict between anonymity and neutrality, and since most common voting rules are anonymous, they cannot always be neutral. Second, virtual elections are used in many applications in artificial intelligence, where there are often restrictions on the outcome of the election, and so neutrality is not a natural assumption in these situations. Ours is a unified proof which in particular covers all previous cases established before. The proof crucially uses reverse hypercontractivity in addition to several ideas from the two previous proofs. Much of the work is devoted to understanding functions of a single voter, and in particular we also prove a quantitative Gibbard-Satterthwaite theorem for one voter.", "Many analyses of plurality-rule elections predict the complete coordination of strategic voting, and hence support for only two candidates. Here I suggest that stable multi-candidate support will arise in equilibrium. A group of voters must partially coordinate behind one of two challenging candidates in order to dislodge a disliked incumbent. In a departure from existing models, the popular support for each challenger is uncertain. This support must be inferred from the private observation of informative signals, such as the social communication of preferences throughout the electorate, or the imperfect observation of opinion polls. The uniquely stable voting equilibrium entails only limited strategic voting and hence incomplete coordination. This is due to the surprising presence of negative feedback: an increase in the degree of strategic voting by others reduces the incentives for an individual to vote strategically. The incentive to vote strategically is lower in relatively marginal elections, after controlling for the distance from contention of a trailing preferred challenger. A calibration of the model applied to the UK General Election of 1997 is consistent with the impact of strategic voting and the reported accuracy of voters' understanding of the electoral situation. It suggests that nearly 50 seats may have been lost by the Conservative party due to strategic voting.", "The Gibbard-Satterthwaite theorem states that every nondictatorial election rule among at least three alternatives can be strategically manipulated. We prove a quantitative version of the Gibbard-Satterthwaite theorem: a random manipulation by a single random voter will succeed with a nonnegligible probability for any election rule among three alternatives that is far from being a dictatorship and from having only two alternatives in its range.", "", "", "The recent result of Friedgut, Kalai and Nisan [9] gives a quantitative version of the Gibbard-Satterthwaite Theorem regarding manipulation in elections, but holds only for neutral social choice functions and three alternatives. We complement their theorem by proving a similar result regarding Pareto-Optimal social choice functions when the number of voters is two. We discuss the implications of our results with respect to the agenda of precluding manipulation in elections by means of computational hardness.", "The Gibbard-Satterthwaite theorem states that every non-trivial voting method among at least 3 alternatives can be strategically manipulated. We prove a quantitative version of the Gibbard-Satterthwaite theorem: a random manipulation by a single random voter will succeed with non-negligible probability for every neutral voting method among 3 alternatives that is far from being a dictatorship.", "Aggregating the preferences of self-interested agents is a key problem for multiagent systems, and one general method for doing so is to vote over the alternatives (candidates). Unfortunately, the Gibbard-Satterthwaite theorem shows that when there are three or more candidates, all reasonable voting rules are manipulable (in the sense that there exist situations in which a voter would benefit from reporting its preferences insincerely). To circumvent this impossibility result, recent research has investigated whether it is possible to make finding a beneficial manipulation computationally hard. This approach has had some limited success, exhibiting rules under which the problem of finding a beneficial manipulation is NP-hard, #P-hard, or even PSPACE-hard. Thus, under these rules, it is unlikely that a computationally efficient algorithm can be constructed that always finds a beneficial manipulation (when it exists). However, this still does not preclude the existence of an efficient algorithm that often finds a successful manipulation (when it exists). There have been attempts to design a rule under which finding a beneficial manipulation is usually hard, but they have failed. To explain this failure, in this paper, we show that it is in fact impossible to design such a rule, if the rule is also required to satisfy another property: a large fraction of the manipulable instances are both weakly monotone, and allow the manipulators to make either of exactly two candidates win. We argue why one should expect voting rules to have this property, and show experimentally that common voting rules clearly satisfy it. We also discuss approaches for potentially circumventing this impossibility result.", "", "Encouraging voters to truthfully reveal their preferences in an election has long been an important issue. Recently, computational complexity has been suggested as a means of precluding strategic behavior. Previous studies have shown that some voting protocols are hard to manipulate, but used NP-hardness as the complexity measure. Such a worst-case analysis may be an insufficient guarantee of resistance to manipulation. Indeed, we demonstrate that NP-hard manipulations may be tractable in the average-case. For this purpose, we augment the existing theory of average-case complexity with some new concepts. In particular, we consider elections distributed with respect to junta distributions, which concentrate on hard instances. We use our techniques to prove that scoring protocols are susceptible to manipulation by coalitions, when the number of candidates is constant.", "We provide an overview of more than two decades of work, mostly in AI, that studies computational complexity as a barrier against manipulation in elections." ] }
1205.2074
2950899441
We study the phase transition of the coalitional manipulation problem for generalized scoring rules. Previously it has been shown that, under some conditions on the distribution of votes, if the number of manipulators is @math , where @math is the number of voters, then the probability that a random profile is manipulable by the coalition goes to zero as the number of voters goes to infinity, whereas if the number of manipulators is @math , then the probability that a random profile is manipulable goes to one. Here we consider the critical window, where a coalition has size @math , and we show that as @math goes from zero to infinity, the limiting probability that a random profile is manipulable goes from zero to one in a smooth fashion, i.e., there is a smooth phase transition between the two regimes. This result analytically validates recent empirical results, and suggests that deciding the coalitional manipulation problem may be of limited computational hardness in practice.
Recent work by Xia @cite_21 is independent from, and closely related to, our work. As mentioned above, Xia's paper is concerned with computing the margin of victory in elections. He focuses on computational complexity questions and approximation algorithms, but one of his results is similar to Parts and of Theorem . However, our analysis is completely different; our approach facilitates the proof of Part of the theorem, which is our main contribution. An even more recent (and also independent) manuscript by Xia @cite_27 considers similar questions for generalized scoring rules and captures additional types of strategic behavior (such as control), but again, crucially, this work does not attempt to understand the phase transition (nor does it subsume our Theorem ).
{ "cite_N": [ "@cite_27", "@cite_21" ], "mid": [ "1644339822", "2065358633" ], "abstract": [ "In this paper, we propose a framework to study a general class of strategic behavior in voting, which we call vote operations. We prove the following theorem: if we fix the number of alternatives, generate @math votes i.i.d. according to a distribution @math , and let @math go to infinity, then for any @math , with probability at least @math , the minimum number of operations that are needed for the strategic individual to achieve her goal falls into one of the following four categories: (1) 0, (2) @math , (3) @math , and (4) @math . This theorem holds for any set of vote operations, any individual vote distribution @math , and any integer generalized scoring rule, which includes (but is not limited to) almost all commonly studied voting rules, e.g., approval voting, all positional scoring rules (including Borda, plurality, and veto), plurality with runoff, Bucklin, Copeland, maximin, STV, and ranked pairs. We also show that many well-studied types of strategic behavior fall under our framework, including (but not limited to) constructive/destructive manipulation, bribery, and control by adding/deleting votes, margin of victory, and minimum manipulation coalition size. Therefore, our main theorem naturally applies to these problems.", "The margin of victory of an election, defined as the smallest number k such that k voters can change the winner by voting differently, is an important measurement for robustness of the election outcome. It also plays an important role in implementing efficient post-election audits, which has been widely used in the United States to detect errors or fraud caused by malfunctions of electronic voting machines. In this paper, we investigate the computational complexity and (in)approximability of computing the margin of victory for various voting rules, including approval voting, all positional scoring rules (which include Borda, plurality, and veto), plurality with runoff, Bucklin, Copeland, maximin, STV, and ranked pairs. We also prove a dichotomy theorem, which states that for all continuous generalized scoring rules, including all voting rules studied in this paper, either with high probability the margin of victory is Θ(√n), or with high probability the margin of victory is Θ(n), where n is the number of voters. Most of our results are quite positive, suggesting that the margin of victory can be efficiently computed. This sheds some light on designing efficient post-election audits for voting rules beyond the plurality rule." ] }
1205.1225
1585455557
In this work, we present a technique to map any genus zero solid object onto a hexahedral decomposition of a solid cube. This problem appears in many applications ranging from finite element methods to visual tracking. From this, one can then hopefully utilize the proposed technique for shape analysis, registration, as well as other related computer graphics tasks. More importantly, given that we seek to establish a one-to-one correspondence of an input volume to that of a solid cube, our algorithm can naturally generate a quality hexahedral mesh as an output. In addition, we constrain the mapping itself to be volume preserving allowing for the possibility of further mesh simplification. We demonstrate our method both qualitatively and quantitatively on various 3D solid models
In this section, we review related work focusing on hexahedral meshing algorithms, with the understanding that there exists a wide variety of possible approaches to construct volumetric meshes @cite_4 @cite_19 . Generally, hexahedral mesh generation algorithms are categorized into two classes: structured and unstructured methods. Strictly speaking, a structured mesh is characterized by all of the interior mesh nodes having an equal number of adjacent elements. On the other hand, unstructured meshes relax the nodal valence requirement, allowing any number of elements to meet at a single node. We note that the mesh we construct falls into the structured mesh category, which includes mapping techniques @cite_27 and submapping approaches @cite_12 . In addition to these frameworks, structured meshing algorithms have arisen in the form of octree approaches @cite_16 @cite_31 , multiblock methods @cite_35 @cite_7 , and sweeping algorithms @cite_24 @cite_37 @cite_40 . While it is beyond the scope of this note to detail all of these methods (and this list is by no means exhaustive), we refer the interested reader to a survey conducted by Owen @cite_17 . In what follows, we will further constrain our discussion to approaches related to mapping techniques.
{ "cite_N": [ "@cite_35", "@cite_37", "@cite_4", "@cite_7", "@cite_24", "@cite_19", "@cite_27", "@cite_40", "@cite_31", "@cite_16", "@cite_12", "@cite_17" ], "mid": [ "2316804072", "4038604", "1978623566", "1986412839", "", "2076702317", "2153211532", "", "2069533434", "89335183", "1535528173", "" ], "abstract": [ "", "Sweep method is one of the most robust techniques to generate hexahedral meshes in extrusion volumes. One of the main issues to be dealt by any sweep algorithm is the projection of a source surface mesh onto the target surface. This paper presents a new algorithm to map a given mesh over the source surface onto the target surface. This projection is carried out by means of a least-squares approximation of an affine mapping defined between the parametric spaces of the surfaces. Once the new mesh is obtained on the parametric space of the target surface, it is mapped up according to the target surface parameterization. Therefore, the developed algorithm does not require to solve any root finding problem to ensure that the projected nodes are on the target surface. Afterwards, this projection algorithm is extended to three dimensional cases and it is used to generate the inner layers of elements in the physical space.", "This paper describes an automatic and efficient approach to construct unstructured tetrahedral and hexahedral meshes for a composite domain made up of heterogeneous materials. The boundaries of these material regions form non-manifold surfaces. In earlier papers, we developed an octree-based isocontouring method to construct unstructured 3D meshes for a single material (homogeneous) domain with manifold boundary. In this paper, we introduce the notion of a material change edge and use it to identify the interface between two or several different materials. A novel method to calculate the minimizer point for a cell shared by more than two materials is provided, which forms a non-manifold node on the boundary. 
We then mesh all the material regions simultaneously and automatically while conforming to their boundaries directly from volumetric data. Both material change edges and interior edges are analyzed to construct tetrahedral meshes, and interior grid points are analyzed for proper hexahedral mesh construction. Finally, edge-contraction and smoothing methods are used to improve the quality of tetrahedral meshes, and a combination of pillowing, geometric flow and optimization techniques is used for hexahedral mesh quality improvement. The shrink set of pillowing schemes is defined automatically as the boundary of each material region. Several application results of our multi-material mesh generation method are also provided.", "Finite element (FE) analysis is a valuable tool in musculoskeletal research. The demands associated with mesh development, however, often prove daunting. In an effort to facilitate anatomic FE model development we have developed an open-source software toolkit (IA-FEMesh). IA-FEMesh employs a multiblock meshing scheme aimed at hexahedral mesh generation. An emphasis has been placed on making the tools interactive, in an effort to create a user friendly environment. The goal is to provide an efficient and reliable method for model development, visualization, and mesh quality evaluation. While these tools have been developed, initially, in the context of skeletal structures they can be applied to countless applications.", "", "Despite the success of quad-based 2D surface parameterization methods, effective parameterization algorithms for 3D volumes with cubes, i.e. hexahedral elements, are still missing. CUBECOVER is a first approach for generating a hexahedral tessellation of a given volume with boundary aligned cubes which are guided by a frame field. The input of CUBECOVER is a tetrahedral volume mesh. First, a frame field is designed with manual input from the designer. It guides the interior and boundary layout of the parameterization. 
Then, the parameterization and the hexahedral mesh are computed so as to align with the given frame field. CUBECOVER has similarities to the QUADCOVER algorithm and extends it from 2D surfaces to 3D volumes. The paper also provides theoretical results for 3D hexahedral parameterizations and analyses topological properties of the appropriate function space.", "Harmonic volumetric mapping aims to establish a smooth bijective correspondence between two solid shapes with the same topology. In this paper, we develop an automatic meshless method for creating such a mapping between two given objects. With the shell surface mapping as the boundary condition, we first solve a linear system constructed by a boundary method called the method of fundamental solution, and then represent the mapping using a set of points with different weights in the vicinity of the shell of the given model. Our algorithm is a true meshless method (without the need of any specific meshing structure within the solid interior) and the behavior of the interior region is directly determined by the boundary, which can improve the computational efficiency and robustness significantly. Therefore, our algorithm can be applied to massive volume data sets with various geometric primitives and topological types. We demonstrate the utility and efficacy of our algorithm in information transfer, shape registration, deformation sequence analysis, tetrahedral remeshing, and solid texture synthesis.", "", "This paper describes an algorithm to extract adaptive and quality quadrilateral hexahedral meshes directly from volumetric data. First, a bottom-up surface topology preserving octree-based algorithm is applied to select a starting octree level. Then the dual contouring method is used to extract a preliminary uniform quad hex mesh, which is decomposed into finer quads hexes adaptively without introducing any hanging nodes. 
The positions of all boundary vertices are recalculated to approximate the boundary surface more accurately. Mesh adaptivity can be controlled by a feature sensitive error function, the regions that users are interested in, or finite element calculation results. Finally, a relaxation based technique is deployed to improve mesh quality. Several demonstration examples are provided from a wide variety of application domains. Some extracted meshes have been extensively used in finite element simulations.", "", "", "" ] }
1205.1225
1585455557
In this work, we present a technique to map any genus zero solid object onto a hexahedral decomposition of a solid cube. This problem appears in many applications ranging from finite element methods to visual tracking. From this, one can then hopefully utilize the proposed technique for shape analysis, registration, as well as other related computer graphics tasks. More importantly, given that we seek to establish a one-to-one correspondence of an input volume to that of a solid cube, our algorithm can naturally generate a quality hexahedral mesh as an output. In addition, we constrain the mapping itself to be volume preserving, allowing for the possibility of further mesh simplification. We demonstrate our method both qualitatively and quantitatively on various 3D solid models.
In addition to these related works, Martin and Cohen @cite_29 recently presented a methodology for hexahedral meshing that is able to handle higher-genus objects, including bifurcations, through B-splines and T-splines. In contrast to these approaches, our algorithm only requires the object to be of genus zero topology. By genus zero we mean that the solid has no holes (i.e., its boundary surface is a topological sphere). However, if the object is not of genus zero, it may be processed by separating the object into several genus zero parts.
{ "cite_N": [ "@cite_29" ], "mid": [ "2498162759" ], "abstract": [ "In this paper we present a methodology to create higher order parametric trivariate representations such as B-splines or T-splines, from closed triangle meshes with higher genus or bifurcations. The input can consist of multiple interior boundaries which represent inner object material attributes. Fundamental to our approach is the use of a midsurface in combination with harmonic functions to decompose the object into a small number of trivariate tensor-product patches that respect material attributes. The methodology is applicable to thin solid models which we extend using the flexibility of harmonic functions and demonstrate our technique, among other objects, on a genus-1 pelvis data set containing an interior triangle mesh separating the cortical part of the bone from the trabecular part. Finally, a B-spline representation is generated from the parameterization." ] }
1205.1312
2951799043
We propose a general method for converting online algorithms to local computation algorithms by selecting a random permutation of the input, and simulating running the online algorithm. We bound the number of steps of the algorithm using a query tree, which models the dependencies between queries. We improve previous analyses of query trees on graphs of bounded degree, and extend the analysis to the cases where the degrees are distributed binomially, and to a special case of bipartite graphs. Using this method, we give a local computation algorithm for maximal matching in graphs of bounded degree, which runs in time and space O(log^3 n). We also show how to convert a large family of load balancing algorithms (related to balls and bins problems) to local computation algorithms. This gives several local load balancing algorithms which achieve the same approximation ratios as the online algorithms, but run in O(log n) time and space. Finally, we modify existing local computation algorithms for hypergraph 2-coloring and k-CNF and use our improved analysis to obtain better time and space bounds, of O(log^4 n), removing the dependency on the maximal degree of the graph from the exponent.
Nguyen and Onak @cite_14 focus on transforming classical approximation algorithms into constant-time algorithms that approximate the size of the optimal solution of problems such as vertex cover and maximum matching. They generate a random number @math , called the rank, for each node. These ranks are used to bound the query tree size.
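The rank idea above can be sketched for greedy maximal matching, where ranks sit on edges (the nodes of the line graph). The following is our own toy illustration with assumed names such as `in_matching`, not the authors' code: an edge is matched iff no adjacent edge of lower rank is matched, so a query recurses only into lower-ranked neighbors, which is what bounds the query tree.

```python
import random

def in_matching(e, edges, rank, memo):
    """e is in the greedy (rank-order) matching iff every adjacent
    lower-rank edge is itself unmatched."""
    if e in memo:
        return memo[e]
    u, v = e
    for f in edges:
        # recurse only into adjacent edges of strictly lower rank
        if f != e and rank[f] < rank[e] and (u in f or v in f):
            if in_matching(f, edges, rank, memo):
                memo[e] = False
                return False
    memo[e] = True
    return True

edges = [(1, 2), (2, 3), (3, 4)]
rank = {e: random.random() for e in edges}   # one random rank per edge
memo = {}
matched = [e for e in edges if in_matching(e, edges, rank, memo)]
print(matched)
```

Whatever the random ranks are, `matched` is a maximal matching of the path 1-2-3-4.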
{ "cite_N": [ "@cite_14" ], "mid": [ "2109330224" ], "abstract": [ "We present a technique for transforming classical approximation algorithms into constant-time algorithms that approximate the size of the optimal solution. Our technique is applicable to a certain subclass of algorithms that compute a solution in a constant number of phases. The technique is based on greedily considering local improvements in random order.The problems amenable to our technique include vertex cover, maximum matching, maximum weight matching, set cover, and minimum dominating set. For example, for maximum matching, we give the first constant-time algorithm that for the class of graphs of degree bounded by d, computes the maximum matching size to within epsivn, for any epsivn > 0, where n is the number of nodes in the graph. The running time of the algorithm is independent of n, and only depends on d and epsiv." ] }
1205.1143
2135389419
The literature search has always been an important part of academic research. It greatly helps to improve the quality of the research process and output, and increase the efficiency of the researchers in terms of their novel contribution to science. As the number of published papers increases every year, a manual search becomes more exhausting even with the help of today's search engines since they are not specialized for this task. In academia, two relevant papers do not always have to share keywords, cite one another, or even be in the same field. Although a well-known paper is usually easy prey in such a hunt, relevant papers using a different terminology, especially recent ones, are not obvious to the eye. In this work, we propose paper recommendation algorithms by using the citation information among papers. The proposed algorithms are direction aware in the sense that they can be tuned to find either recent or traditional papers. The algorithms require a set of papers as input and recommend a set of related ones. If the user wants to give negative or positive feedback on the suggested paper set, the recommendation is refined. The search process can be easily guided in that sense by relevance feedback. We show that this slight guidance helps the user to reach a desired paper in a more efficient way. We adapt our models and algorithms also for the venue and reviewer recommendation tasks. Accuracy of the models and algorithms is thoroughly evaluated by comparison with multiple baselines and algorithms from the literature in terms of several objectives specific to citation, venue, and reviewer recommendation tasks. All of these algorithms are implemented within a publicly available web-service framework (this http URL) which currently uses the data from DBLP and CiteSeer to construct the proposed citation graph.
There are various citation analysis-based paper recommendation methods that depend on a pairwise similarity measure between two papers. Bibliographic coupling, one of the earliest approaches, considers papers having similar reference lists as related @cite_0 . Another early work, the Cocitation method, considers papers which are cited by the same papers as related @cite_19 . A similar cites cited approach using collaborative filtering is proposed by McNee et al. @cite_18 . Another method, common citation @math inverse document frequency (CCIDF), also considers only common citations, but weights them with respect to their inverse frequencies @cite_16 .
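The two classic measures above reduce to simple set operations on a citation graph. The sketch below is our own illustration (the dictionary-of-sets representation and function names are assumptions, not from the cited papers): bibliographic coupling counts shared references, while co-citation counts papers that cite both.

```python
# cites[p] is the set of papers that paper p references.

def bibliographic_coupling(cites, a, b):
    """Number of references shared by papers a and b."""
    return len(cites[a] & cites[b])

def cocitation(cites, a, b):
    """Number of papers that cite both a and b."""
    return sum(1 for refs in cites.values() if a in refs and b in refs)

cites = {
    "p1": {"x", "y"},
    "p2": {"x", "y", "z"},
    "p3": {"p1", "p2"},
    "p4": {"p1", "p2", "z"},
}
print(bibliographic_coupling(cites, "p1", "p2"))  # shared refs x, y -> 2
print(cocitation(cites, "p1", "p2"))              # cited together by p3, p4 -> 2
```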
{ "cite_N": [ "@cite_0", "@cite_19", "@cite_18", "@cite_16" ], "mid": [ "1970859146", "2005207065", "2116655493", "2154498027" ], "abstract": [ "This report describes the results of automatic processing of a large number of scientific papers according to a rigorously defined criterion of coupling. The population of papers under study was ordered into groups that satisfy the stated criterion of interrelation. An examination of the papers that constitute the groups shows a high degree of logical correlation.", "A new form of document coupling called co-citation is defined as the frequency with which two documents are cited together. The co-citation frequency of two scientific papers can be determined by comparing lists of citing documents in the Science Citation Index and counting identical entries. Networks of co-cited papers can be generated for specific scientific specialties, and an example is drawn from the literature of particle physics. Co-citation patterns are found to differ significantly from bibliographic coupling patterns, but to agree generally with patterns of direct citation. Clusters of co-cited papers provide a new way to study the specialty structure of science. They may provide a new approach to indexing and to the creation of SDI profiles.", "Collaborative filtering has proven to be valuable for recommending items in many different domains. In this paper, we explore the use of collaborative filtering to recommend research papers, using the citation web between papers to create the ratings matrix. Specifically, we tested the ability of collaborative filtering to recommend citations that would be suitable additional references for a target research paper. We investigated six algorithms for selecting citations, evaluating them through offline experiments against a database of over 186,000 research papers contained in ResearchIndex. 
We also performed an online experiment with over 120 users to gauge user opinion of the effectiveness of the algorithms and of the utility of such recommendations for common research tasks. We found large differences in the accuracy of the algorithms in the offline experiment, especially when balanced for coverage. In the online experiment, users felt they received quality recommendations, and were enthusiastic about the idea of receiving recommendations in this domain.", "The revolution the Web has brought to information dissemination is not so much due to the availability of data-huge amounts of information has long been available in libraries-but rather the improved efficiency of accessing (improved accessibility to) that information. The Web promises to make more scientific articles more easily available. By making the context of citations easily and quickly browsable, autonomous citation indexing can help to evaluate the importance of individual contributions more accurately and quickly. Digital libraries incorporating ACI can help organize scientific literature and may significantly improve the efficiency of dissemination and feedback. ACI may also help speed the transition to scholarly electronic publishing." ] }
1205.1143
2135389419
The literature search has always been an important part of academic research. It greatly helps to improve the quality of the research process and output, and increase the efficiency of the researchers in terms of their novel contribution to science. As the number of published papers increases every year, a manual search becomes more exhausting even with the help of today's search engines since they are not specialized for this task. In academia, two relevant papers do not always have to share keywords, cite one another, or even be in the same field. Although a well-known paper is usually easy prey in such a hunt, relevant papers using a different terminology, especially recent ones, are not obvious to the eye. In this work, we propose paper recommendation algorithms by using the citation information among papers. The proposed algorithms are direction aware in the sense that they can be tuned to find either recent or traditional papers. The algorithms require a set of papers as input and recommend a set of related ones. If the user wants to give negative or positive feedback on the suggested paper set, the recommendation is refined. The search process can be easily guided in that sense by relevance feedback. We show that this slight guidance helps the user to reach a desired paper in a more efficient way. We adapt our models and algorithms also for the venue and reviewer recommendation tasks. Accuracy of the models and algorithms is thoroughly evaluated by comparison with multiple baselines and algorithms from the literature in terms of several objectives specific to citation, venue, and reviewer recommendation tasks. All of these algorithms are implemented within a publicly available web-service framework (this http URL) which currently uses the data from DBLP and CiteSeer to construct the proposed citation graph.
More recent works define different measures such as Katz, which was proposed by Liben-Nowell and Kleinberg in a study of the link prediction problem on social networks @cite_20 and was later used for information retrieval purposes, including citation recommendation by Strohman et al. @cite_14 . For two papers in the citation network, the Katz measure counts the number of paths between them, favoring the shorter ones. Lu et al. stated that both bibliographic coupling and Cocitation methods are only suitable for special cases due to their very local nature @cite_4 . They proposed a method which computes the similarity of two papers by using a vector-based representation of their neighborhoods in the citation network and compared the method with CCIDF. Liang et al. argued that most of the methods stated above consider only direct references and citations @cite_5 . Even Katz and the vector-based method of @cite_4 treat the links in the citation network as simple, unweighted links. Instead, Liang et al. added a weight attribute to each link and proposed the Global Relation Strength method, which computes the similarity of two papers with a Katz-like approach.
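The Katz path-counting idea can be written down directly from its standard formulation: sum over path lengths l of beta^l times the number of length-l paths, so a decay factor beta < 1 makes shorter paths dominate. The sketch below is our own minimal version (the function name and adjacency-list format are assumptions), truncated at a maximum path length.

```python
def katz(adj, src, dst, beta=0.1, max_len=4):
    """Sum over l of beta**l * (number of length-l paths src -> dst)."""
    score = 0.0
    frontier = {src: 1}            # node -> number of paths of current length
    for l in range(1, max_len + 1):
        nxt = {}
        for node, cnt in frontier.items():
            for nb in adj.get(node, ()):
                nxt[nb] = nxt.get(nb, 0) + cnt
        score += (beta ** l) * nxt.get(dst, 0)
        frontier = nxt
    return score

adj = {"a": ["b", "c"], "b": ["d"], "c": ["d"]}
# Two length-2 paths a->d and no shorter path, so the score is 2 * beta**2.
print(katz(adj, "a", "d"))
```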
{ "cite_N": [ "@cite_5", "@cite_14", "@cite_4", "@cite_20" ], "mid": [ "1828964362", "1995326326", "2102549231", "2148847267" ], "abstract": [ "With the tremendous amount of research publications, recommending relevant papers to researchers to fulfill their information need becomes a significant problem. The major challenge to be tackled by our work is that given a target paper, how to effectively recommend a set of relevant papers from an existing citation network. In this paper, we propose a novel method to address the problem by incorporating various citation relations for a proper set of papers, which are more relevant but with a very limited size. The proposed method has two unique properties. Firstly, a metric called Local Relation Strength is defined to measure the dependency between cited and citing papers. Secondly, a model called Global Relation Strength is proposed to capture the relevance between two papers in the whole citation graph. We evaluate our proposed model on a real-world publication dataset and conduct an extensive comparison with the state-of-the-art baseline methods. The experimental results demonstrate that our method can have a promising improvement over the state-of-the-art techniques.", "We approach the problem of academic literature search by considering an unpublished manuscript as a query to a search system. We use the text of previous literature as well as the citation graph that connects it to find relevant related material. We evaluate our technique with manual and automatic evaluation methods, and find an order of magnitude improvement in mean average precision as compared to a text similarity baseline.", "Published scientific articles are linked together into a graph, the citation graph, through their citations. This paper explores the notion of similarity based on connectivity alone, and proposes several algorithms to quantify it. Our metrics take advantage of the local neighborhoods of the nodes in the citation graph. 
Two variants of link-based similarity estimation between two nodes are described, one based on the separate local neighborhoods of the nodes, and another based on the joint local neighborhood expanded from both nodes at the same time. The algorithms are implemented and evaluated on a subgraph of the citation graph of computer science in a retrieval context. The results are compared with text-based similarity, and demonstrate the complementarity of link-based and text-based retrieval.", "Given a snapshot of a social network, can we infer which new interactions among its members are likely to occur in the near future? We formalize this question as the link-prediction problem, and we develop approaches to link prediction based on measures for analyzing the “proximity” of nodes in a network. Experiments on large coauthorship networks suggest that information about future interactions can be extracted from network topology alone, and that fairly subtle measures for detecting node proximity can outperform more direct measures. © 2007 Wiley Periodicals, Inc." ] }
1205.1143
2135389419
The literature search has always been an important part of academic research. It greatly helps to improve the quality of the research process and output, and increase the efficiency of the researchers in terms of their novel contribution to science. As the number of published papers increases every year, a manual search becomes more exhausting even with the help of today's search engines since they are not specialized for this task. In academia, two relevant papers do not always have to share keywords, cite one another, or even be in the same field. Although a well-known paper is usually easy prey in such a hunt, relevant papers using a different terminology, especially recent ones, are not obvious to the eye. In this work, we propose paper recommendation algorithms by using the citation information among papers. The proposed algorithms are direction aware in the sense that they can be tuned to find either recent or traditional papers. The algorithms require a set of papers as input and recommend a set of related ones. If the user wants to give negative or positive feedback on the suggested paper set, the recommendation is refined. The search process can be easily guided in that sense by relevance feedback. We show that this slight guidance helps the user to reach a desired paper in a more efficient way. We adapt our models and algorithms also for the venue and reviewer recommendation tasks. Accuracy of the models and algorithms is thoroughly evaluated by comparison with multiple baselines and algorithms from the literature in terms of several objectives specific to citation, venue, and reviewer recommendation tasks. All of these algorithms are implemented within a publicly available web-service framework (this http URL) which currently uses the data from DBLP and CiteSeer to construct the proposed citation graph.
Many works use random walk with restarts (RWR) for citation analysis @cite_15 @cite_3 @cite_23 @cite_2 . RWR is a well-known and efficient technique used for different tasks, including computing the relevance of two vertices in a graph @cite_25 . It is very similar to the well-known PageRank algorithm, which is used by both Li and Willett @cite_23 (ArticleRank) and Ma et al. @cite_3 to evaluate the importance of academic papers. Gori and Pucci @cite_15 proposed PaperRank, an RWR-based paper recommendation algorithm which can also be seen as a Personalized PageRank computation @cite_11 on the citation graph. Lao and Cohen @cite_2 also used RWR for paper recommendation in citation networks and proposed a learnable proximity measure for weighting the edges by using machine learning techniques.
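RWR itself is a short power iteration. The sketch below is the standard textbook formulation, not code from any of the cited systems: with probability 1-alpha the walker follows a random outgoing link, with probability alpha it restarts at the seed paper, and the stationary distribution scores papers by relevance to the seed. Dangling-node handling (returning the mass to the seed) is a common convention we assume here.

```python
def rwr(adj, seed, alpha=0.15, iters=100):
    """Random walk with restart: p <- (1-alpha) * walk(p) + alpha * e_seed."""
    nodes = list(adj)
    p = {n: 0.0 for n in nodes}
    p[seed] = 1.0
    for _ in range(iters):
        nxt = {n: (alpha if n == seed else 0.0) for n in nodes}  # restart mass
        for n in nodes:
            out = adj[n]
            if out:
                share = (1 - alpha) * p[n] / len(out)
                for nb in out:
                    nxt[nb] += share
            else:
                nxt[seed] += (1 - alpha) * p[n]   # dangling node: back to seed
        p = nxt
    return p

adj = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
scores = rwr(adj, "a")
print(scores)   # a probability distribution, highest at the seed "a"
```

Setting the restart vector to a single seed is what makes this a Personalized PageRank; a uniform restart vector recovers ordinary PageRank.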
{ "cite_N": [ "@cite_3", "@cite_23", "@cite_2", "@cite_15", "@cite_25", "@cite_11" ], "mid": [ "2040906208", "2038282574", "2029249040", "2102870745", "2063049279", "2069153192" ], "abstract": [ "The paper attempts to provide an alternative method for measuring the importance of scientific papers based on the Google's PageRank. The method is a meaningful extension of the common integer counting of citations and is then experimented for bringing PageRank to the citation analysis in a large citation network. It offers a more integrated picture of the publications' influence in a specific field. We firstly calculate the PageRanks of scientific papers. The distributional characteristics and comparison with the traditionally used number of citations are then analyzed in detail. Furthermore, the PageRank is implemented in the evaluation of research influence for several countries in the field of Biochemistry and Molecular Biology during the time period of 2000-2005. Finally, some advantages of bringing PageRank to the citation analysis are concluded.", "Purpose – The purpose of this paper is to suggest an alternative to the widely used Times Cited criterion for analysing citation networks. The approach involves taking account of the natures of the papers that cite a given paper, so as to differentiate between papers that attract the same number of citations.Design methodology approach – ArticleRank is an algorithm that has been derived from Google's PageRank algorithm to measure the influence of journal articles. 
ArticleRank is applied to two datasets – a citation network based on an early paper on webometrics, and a self‐citation network based on the 19 most cited papers in the Journal of Documentation – using citation data taken from the Web of Knowledge database.Findings – ArticleRank values provide a different ranking of a set of papers from that provided by the corresponding Times Cited values, and overcomes the inability of the latter to differentiate between papers with the same numbers of citations. The difference in rankings between Times Cited ...", "Scientific literature with rich metadata can be represented as a labeled directed graph. This graph representation enables a number of scientific tasks such as ad hoc retrieval or named entity recognition (NER) to be formulated as typed proximity queries in the graph. One popular proximity measure is called Random Walk with Restart (RWR), and much work has been done on the supervised learning of RWR measures by associating each edge label with a parameter. In this paper, we describe a novel learnable proximity measure which instead uses one weight per edge label sequence: proximity is defined by a weighted combination of simple \"path experts\", each corresponding to following a particular sequence of labeled edges. Experiments on eight tasks in two subdomains of biology show that the new learning method significantly outperforms the RWR model (both trained and untrained). We also extend the method to support two additional types of experts to model intrinsic properties of entities: query-independent experts, which generalize the PageRank measure, and popular entity experts which allow rankings to be adjusted for particular entities that are especially important.", "Every day researchers from all over the world have to filter the huge mass of existing research papers with the crucial aim of finding out useful publications related to their current work. 
In this paper we propose a research paper recommending algorithm based on the Citation Graph and random-walker properties. The PaperRank algorithm is able to assign a preference score to a set of documents contained in a digital library and linked one each other by bibliographic references. A data set of papers extracted by ACM Portal has been used for testing and very promising performances have been measured.", "Given an image (or video clip, or audio song), how do we automatically assign keywords to it? The general problem is to find correlations across the media in a collection of multimedia objects like video clips, with colors, and or motion, and or audio, and or text scripts. We propose a novel, graph-based approach, \"MMG\", to discover such cross-modal correlations.Our \"MMG\" method requires no tuning, no clustering, no user-determined constants; it can be applied to any multimedia collection, as long as we have a similarity function for each medium; and it scales linearly with the database size. We report auto-captioning experiments on the \"standard\" Corel image database of 680 MB, where it outperforms domain specific, fine-tuned methods by up to 10 percentage points in captioning accuracy (50 relative improvement).", "Recent web search techniques augment traditional text matching with a global notion of \"importance\" based on the linkage structure of the web, such as in Google's PageRank algorithm. For more refined searches, this global notion of importance can be specialized to create personalized views of importance--for example, importance scores can be biased according to a user-specified set of initially-interesting pages. Computing and storing all possible personalized views in advance is impractical, as is computing personalized views at query time, since the computation of each view requires an iterative computation over the web graph. 
We present new graph-theoretical results, and a new technique based on these results, that encode personalized views as partial vectors. Partial vectors are shared across multiple personalized views, and their computation and storage costs scale well with the number of views. Our approach enables incremental computation, so that the construction of personalized views from partial vectors is practical at query time. We present efficient dynamic programming algorithms for computing partial vectors, an algorithm for constructing personalized views from partial vectors, and experimental results demonstrating the effectiveness and scalability of our techniques." ] }
1205.1143
2135389419
The literature search has always been an important part of academic research. It greatly helps to improve the quality of the research process and output, and increase the efficiency of the researchers in terms of their novel contribution to science. As the number of published papers increases every year, a manual search becomes more exhausting even with the help of today's search engines since they are not specialized for this task. In academia, two relevant papers do not always have to share keywords, cite one another, or even be in the same field. Although a well-known paper is usually easy prey in such a hunt, relevant papers using a different terminology, especially recent ones, are not obvious to the eye. In this work, we propose paper recommendation algorithms by using the citation information among papers. The proposed algorithms are direction aware in the sense that they can be tuned to find either recent or traditional papers. The algorithms require a set of papers as input and recommend a set of related ones. If the user wants to give negative or positive feedback on the suggested paper set, the recommendation is refined. The search process can be easily guided in that sense by relevance feedback. We show that this slight guidance helps the user to reach a desired paper in a more efficient way. We adapt our models and algorithms also for the venue and reviewer recommendation tasks. Accuracy of the models and algorithms is thoroughly evaluated by comparison with multiple baselines and algorithms from the literature in terms of several objectives specific to citation, venue, and reviewer recommendation tasks. All of these algorithms are implemented within a publicly available web-service framework (this http URL) which currently uses the data from DBLP and CiteSeer to construct the proposed citation graph.
As far as we know, none of these works studies the recent/traditional paper recommendation problem. The closest work is Claper @cite_1 , an automatic system that measures how classical a paper is, allowing a list of papers to be ranked so as to highlight the most classical ones.
{ "cite_N": [ "@cite_1" ], "mid": [ "2055899255" ], "abstract": [ "Classical papers are of great help for beginners to get familiar with a new research area. However, digging them out is a difficult problem. This paper proposes Claper, a novel academic recommendation system based on two proven principles: the Principle of Download Persistence and the Principle of Citation Approaching (we prove them based on real-world datasets). The principle of download persistence indicates that classical papers have few decreasing download frequencies since they were published. The principle of citation approaching indicates that a paper which cites a classical paper is likely to cite citations of that classical paper. Our experimental results based on large-scale real-world datasets illustrate Claper can effectively recommend classical papers of high quality to beginners and thus help them enter their research areas." ] }
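The direction-aware idea described in the abstract above can be illustrated with a small sketch: a random walk with restart on the citation graph, where a tuning parameter (here called `kappa`, an assumed name, not from the paper) trades off following citations (toward recent papers) against following references (toward traditional papers). This is an illustrative reading of the approach, not the paper's exact algorithm.

```python
# Sketch of a direction-aware random walk with restart on a citation
# graph.  kappa -> 1 follows citations backward to papers that cite the
# seeds (recent work); kappa -> 0 follows references forward to papers
# the seeds cite (traditional work).

def direction_aware_walk(cites, seeds, kappa=0.5, restart=0.15, iters=100):
    """cites: dict mapping each paper to the list of papers it cites."""
    cited_by = {}
    for p, refs in cites.items():
        for r in refs:
            cited_by.setdefault(r, []).append(p)
    nodes = set(cites) | set(cited_by)
    rank = {n: (1.0 / len(seeds) if n in seeds else 0.0) for n in nodes}
    for _ in range(iters):
        nxt = dict.fromkeys(nodes, 0.0)
        for n in nodes:
            fwd, bwd = cites.get(n, []), cited_by.get(n, [])
            mass = (1.0 - restart) * rank[n]
            w_fwd = (1.0 - kappa) if fwd else 0.0
            w_bwd = kappa if bwd else 0.0
            tot = w_fwd + w_bwd
            if tot == 0.0:                  # dead end: return mass to seeds
                for s in seeds:
                    nxt[s] += mass / len(seeds)
            else:
                for t in fwd:               # references: toward older papers
                    nxt[t] += mass * (w_fwd / tot) / len(fwd)
                for t in bwd:               # citations: toward newer papers
                    nxt[t] += mass * (w_bwd / tot) / len(bwd)
        for s in seeds:                     # restart at the query papers
            nxt[s] += restart / len(seeds)
        rank = nxt
    return rank
```

With `kappa=1.0` the walk surfaces papers that cite the seed set; with `kappa=0.0` it surfaces the papers the seeds build upon.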
1205.1456
2949231373
We study the problem of analyzing the influence of various factors affecting individual messages posted in social media. The problem is challenging because of the various types of influences propagating through the social media network that act simultaneously on any user. Additionally, the topic composition of the influencing factors and the susceptibility of users to these influences evolve over time. This problem has not been studied before, and off-the-shelf models are unsuitable for this purpose. To capture the complex interplay of these various factors, we propose a new non-parametric model called the Dynamic Multi-Relational Chinese Restaurant Process. This accounts for the user network in data generation and also allows the parameters to evolve over time. Designing inference algorithms for this model suited to large-scale social-media data is another challenge. To this end, we propose a scalable and multi-threaded inference algorithm based on online Gibbs sampling. Extensive evaluations on large-scale Twitter and Facebook data show that the extracted topics, when applied to authorship and commenting prediction, outperform state-of-the-art baselines. More importantly, our model produces valuable insights on topic trends and user personality trends, beyond the capability of existing approaches.
Non-parametric models: The Dirichlet Process (DP) @cite_5 is a prior over a countably infinite set of atoms, and is popularly used as a prior for mixture models (the DP Mixture Model) in applications where the number of clusters is difficult to provide as a parameter. The Chinese Restaurant Process @cite_18 provides a generative description of the Dirichlet Process, and is useful for designing sampling algorithms for DP mixture models. The distributions defined by these models are exchangeable, in that different permutations of the data are equally probable.
{ "cite_N": [ "@cite_5", "@cite_18" ], "mid": [ "1967687583", "1551893515" ], "abstract": [ "process. This paper extends Ferguson's result to cases where the random measure is a mixing distribution for a parameter which determines the distribution from which observations are made. The conditional distribution of the random measure, given the observations, is no longer that of a simple Dirichlet process, but can be described as being a mixture of Dirichlet processes. This paper gives a formal definition for these mixtures and develops several theorems about their properties, the most important of which is a closure property for such mixtures. Formulas for computing the conditional distribution are derived and applications to problems in bio-assay, discrimination, regression, and mixing distributions are given.", "Preliminaries.- Bell polynomials, composite structures and Gibbs partitions.- Exchangeable random partitions.- Sequential constructions of random partitions.- Poisson constructions of random partitions.- Coagulation and fragmentation processes.- Random walks and random forests.- The Brownian forest.- Brownian local times, branching and Bessel processes.- Brownian bridge asymptotics for random mappings.- Random forests and the additive coalescent." ] }
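The generative description the CRP provides for the DP can be made concrete with a short simulation. The following is a minimal sketch (function and variable names are illustrative) of sequential seating: each customer joins an existing table with probability proportional to its occupancy, or opens a new table with probability proportional to the concentration parameter alpha.

```python
import random

def crp_partition(n_customers, alpha, seed=0):
    """Sequential CRP seating: customer i joins table k with probability
    proportional to the table's occupancy, or opens a new table with
    probability proportional to the concentration alpha."""
    rng = random.Random(seed)
    tables = []        # tables[k] = number of customers seated at table k
    assignments = []
    for _ in range(n_customers):
        weights = tables + [alpha]           # existing tables, then "new"
        r = rng.uniform(0, sum(weights))
        acc, choice = 0.0, len(tables)
        for k, w in enumerate(weights):
            acc += w
            if r <= acc:
                choice = k
                break
        if choice == len(tables):
            tables.append(1)                 # open a new table
        else:
            tables[choice] += 1
        assignments.append(choice)
    return assignments, tables
```

The number of occupied tables grows without bound as more customers arrive, which is exactly why the CRP serves as a prior when the number of clusters is unknown.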
1205.1456
2949231373
We study the problem of analyzing the influence of various factors affecting individual messages posted in social media. The problem is challenging because of the various types of influences propagating through the social media network that act simultaneously on any user. Additionally, the topic composition of the influencing factors and the susceptibility of users to these influences evolve over time. This problem has not been studied before, and off-the-shelf models are unsuitable for this purpose. To capture the complex interplay of these various factors, we propose a new non-parametric model called the Dynamic Multi-Relational Chinese Restaurant Process. This accounts for the user network in data generation and also allows the parameters to evolve over time. Designing inference algorithms for this model suited to large-scale social-media data is another challenge. To this end, we propose a scalable and multi-threaded inference algorithm based on online Gibbs sampling. Extensive evaluations on large-scale Twitter and Facebook data show that the extracted topics, when applied to authorship and commenting prediction, outperform state-of-the-art baselines. More importantly, our model produces valuable insights on topic trends and user personality trends, beyond the capability of existing approaches.
Many applications require multiple coupled Dirichlet Processes. The Hierarchical Dirichlet Process (HDP) @cite_23 is one way to introduce coupling using a two-level structure. The HDP can be useful, for example, for extending the popular Latent Dirichlet Allocation (LDA) model @cite_8 to a countably infinite number of topics @cite_0 . The HDP can be equivalently represented by an extension of the CRP called the Chinese Restaurant Franchise (CRF) @cite_23 . Just as the CRF introduces coupling between CRPs, the MultiRelCRP introduces coupling between RelCRPs. However, the nature of the coupling in the MultiRelCRP can be much richer, depending on the relationships, as we explain in subsec:mrelcrp .
{ "cite_N": [ "@cite_0", "@cite_23", "@cite_8" ], "mid": [ "2142534468", "2158266063", "1880262756" ], "abstract": [ "Historical user activity is key for building user profiles to predict the user behavior and affinities in many web applications such as targeting of online advertising, content personalization and social recommendations. User profiles are temporal, and changes in a user's activity patterns are particularly useful for improved prediction and recommendation. For instance, an increased interest in car-related web pages may well suggest that the user might be shopping for a new vehicle.In this paper we present a comprehensive statistical framework for user profiling based on topic models which is able to capture such effects in a fully fashion. Our method models topical interests of a user dynamically where both the user association with the topics and the topics themselves are allowed to vary over time, thus ensuring that the profiles remain current. We describe a streaming, distributed inference algorithm which is able to handle tens of millions of users. Our results show that our model contributes towards improved behavioral targeting of display advertising relative to baseline models that do not incorporate topical and or temporal dependencies. As a side-effect our model yields human-understandable results which can be used in an intuitive fashion by advertisers.", "We consider problems involving groups of data where each observation within a group is a draw from a mixture model and where it is desirable to share mixture components between groups. We assume that the number of mixture components is unknown a priori and is to be inferred from the data. In this setting it is natural to consider sets of Dirichlet processes, one for each group, where the well-known clustering property of the Dirichlet process provides a nonparametric prior for the number of mixture components within each group. 
Given our desire to tie the mixture models in the various groups, we consider a hierarchical model, specifically one in which the base measure for the child Dirichlet processes is itself distributed according to a Dirichlet process. Such a base measure being discrete, the child Dirichlet processes necessarily share atoms. Thus, as desired, the mixture models in the different groups necessarily share mixture components. We discuss representations of hierarchical Dirichlet processes ...", "We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model." ] }
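The Chinese Restaurant Franchise mentioned above can be sketched by stacking two CRP levels: each group runs its own restaurant, and every newly opened table orders a dish from a franchise-wide CRP, so dishes (mixture components) are shared across groups. The following is a minimal illustrative simulation, not the inference algorithm used in the cited work.

```python
import random

def crf_sample(group_sizes, alpha, gamma, seed=0):
    """Chinese Restaurant Franchise: one restaurant (CRP with
    concentration alpha) per group; each new table orders a dish from a
    franchise-wide CRP (concentration gamma), so dishes are shared."""
    rng = random.Random(seed)
    dish_counts = []            # tables serving each dish, franchise-wide
    labels = []                 # per group: dish label of each customer
    for n in group_sizes:
        tables, table_dish, lab = [], [], []
        for _ in range(n):
            w = tables + [alpha]            # group-level seating
            r = rng.uniform(0, sum(w))
            acc, k = 0.0, len(tables)
            for j, wj in enumerate(w):
                acc += wj
                if r <= acc:
                    k = j
                    break
            if k == len(tables):            # new table: order a dish
                dw = dish_counts + [gamma]  # franchise-level choice
                r2 = rng.uniform(0, sum(dw))
                acc2, d = 0.0, len(dish_counts)
                for j, wj in enumerate(dw):
                    acc2 += wj
                    if r2 <= acc2:
                        d = j
                        break
                if d == len(dish_counts):
                    dish_counts.append(1)   # brand-new dish (topic)
                else:
                    dish_counts[d] += 1
                tables.append(1)
                table_dish.append(d)
            else:
                tables[k] += 1
            lab.append(table_dish[k])
        labels.append(lab)
    return labels
```

Because the dish menu is global, two groups can (and typically do) end up sharing mixture components, which is the coupling the HDP provides.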
1205.1456
2949231373
We study the problem of analyzing the influence of various factors affecting individual messages posted in social media. The problem is challenging because of the various types of influences propagating through the social media network that act simultaneously on any user. Additionally, the topic composition of the influencing factors and the susceptibility of users to these influences evolve over time. This problem has not been studied before, and off-the-shelf models are unsuitable for this purpose. To capture the complex interplay of these various factors, we propose a new non-parametric model called the Dynamic Multi-Relational Chinese Restaurant Process. This accounts for the user network in data generation and also allows the parameters to evolve over time. Designing inference algorithms for this model suited to large-scale social-media data is another challenge. To this end, we propose a scalable and multi-threaded inference algorithm based on online Gibbs sampling. Extensive evaluations on large-scale Twitter and Facebook data show that the extracted topics, when applied to authorship and commenting prediction, outperform state-of-the-art baselines. More importantly, our model produces valuable insights on topic trends and user personality trends, beyond the capability of existing approaches.
Temporal evolution has been addressed in the context of non-parametric models @cite_22 @cite_7 @cite_0 , where the parameters of the corresponding static model become functions of time. Some of these approaches are amenable to scalable inference, while others are not. For the Dynamic MRelCRP, we use the dynamic evolution of the parameters proposed in the context of the Recurrent CRF @cite_7 @cite_0 , because of the scalability of the associated inference problem. Note, however, that the similarity between the Recurrent CRF and the Dynamic-MRelCRP lies only in the temporal evolution of the model parameters. The static model there is an HDP CRF, as compared to the RelCRP in our case.
{ "cite_N": [ "@cite_0", "@cite_22", "@cite_7" ], "mid": [ "2142534468", "258053484", "2951657789" ], "abstract": [ "Historical user activity is key for building user profiles to predict the user behavior and affinities in many web applications such as targeting of online advertising, content personalization and social recommendations. User profiles are temporal, and changes in a user's activity patterns are particularly useful for improved prediction and recommendation. For instance, an increased interest in car-related web pages may well suggest that the user might be shopping for a new vehicle.In this paper we present a comprehensive statistical framework for user profiling based on topic models which is able to capture such effects in a fully fashion. Our method models topical interests of a user dynamically where both the user association with the topics and the topics themselves are allowed to vary over time, thus ensuring that the profiles remain current. We describe a streaming, distributed inference algorithm which is able to handle tens of millions of users. Our results show that our model contributes towards improved behavioral targeting of display advertising relative to baseline models that do not incorporate topical and or temporal dependencies. As a side-effect our model yields human-understandable results which can be used in an intuitive fashion by advertisers.", "Clustering is an important data mining task for exploration and visualization of different data types like news stories, scientific publications, weblogs, etc. Due to the evolving nature of these data, evolutionary clustering, also known as dynamic clustering, has recently emerged to cope with the challenges of mining temporally smooth clusters over time. A good evolutionary clustering algorithm should be able to fit the data well at each time epoch, and at the same time results in a smooth cluster evolution that provides the data analyst with a coherent and easily interpretable model. 
In this paper we introduce the temporal Dirichlet process mixture model (TDPM) as a framework for evolutionary clustering. TDPM is a generalization of the DPM framework for clustering that automatically grows the number of clusters with the data. In our framework, the data is divided into epochs; all data points inside the same epoch are assumed to be fully exchangeable, whereas the temporal order is maintained across epochs. Moreover, The number of clusters in each epoch is unbounded: the clusters can retain, die out or emerge over time, and the actual parameterization of each cluster can also evolve over time in a Markovian fashion. We give a detailed and intuitive construction of this framework using the recurrent Chinese restaurant process (RCRP) metaphor, as well as a Gibbs sampling algorithm to carry out posterior inference in order to determine the optimal cluster evolution. We demonstrate our model over simulated data by using it to build an infinite dynamic mixture of Gaussian factors, and over real dataset by using it to build a simple non-parametric dynamic clustering-topic model and apply it to analyze the NIPS12 document collection.", "Topic models have proven to be a useful tool for discovering latent structures in document collections. However, most document collections often come as temporal streams and thus several aspects of the latent structure such as the number of topics, the topics' distribution and popularity are time-evolving. Several models exist that model the evolution of some but not all of the above aspects. In this paper we introduce infinite dynamic topic models, iDTM, that can accommodate the evolution of all the aforementioned aspects. Our model assumes that documents are organized into epochs, where the documents within each epoch are exchangeable but the order between the documents is maintained across epochs. 
iDTM allows for unbounded number of topics: topics can die or be born at any epoch, and the representation of each topic can evolve according to a Markovian dynamics. We use iDTM to analyze the birth and evolution of topics in the NIPS community and evaluated the efficacy of our model on both simulated and real datasets with favorable outcome." ] }
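The temporal evolution used by the recurrent Chinese restaurant constructions above can be sketched by letting cluster counts from earlier epochs enter the seating probabilities with an exponential decay, so clusters can persist, fade out, or newly appear over time. The decay kernel below is one common illustrative choice, not necessarily the one used in the cited works.

```python
import math
import random

def recurrent_crp(epochs, alpha, decay=0.5, seed=0):
    """Recurrent-CRP sketch: customers in epoch t sit according to the
    current epoch's counts plus exponentially decayed counts carried
    over from earlier epochs."""
    rng = random.Random(seed)
    hist = []                   # accumulated per-cluster counts so far
    all_labels = []
    for n in epochs:
        prior = [h * math.exp(-decay) for h in hist]   # decayed history
        counts = [0.0] * len(prior)
        labels = []
        for _ in range(n):
            w = [p + c for p, c in zip(prior, counts)] + [alpha]
            r = rng.uniform(0, sum(w))
            acc, k = 0.0, len(counts)
            for j, wj in enumerate(w):
                acc += wj
                if r <= acc:
                    k = j
                    break
            if k == len(counts):            # a cluster is born
                counts.append(1.0)
                prior.append(0.0)
            else:
                counts[k] += 1.0
            labels.append(k)
        hist = [p + c for p, c in zip(prior, counts)]
        all_labels.append(labels)
    return all_labels
```

Exchangeability holds within an epoch but not across epochs, matching the epoch-based organization described in the abstracts above.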
1205.1456
2949231373
We study the problem of analyzing the influence of various factors affecting individual messages posted in social media. The problem is challenging because of the various types of influences propagating through the social media network that act simultaneously on any user. Additionally, the topic composition of the influencing factors and the susceptibility of users to these influences evolve over time. This problem has not been studied before, and off-the-shelf models are unsuitable for this purpose. To capture the complex interplay of these various factors, we propose a new non-parametric model called the Dynamic Multi-Relational Chinese Restaurant Process. This accounts for the user network in data generation and also allows the parameters to evolve over time. Designing inference algorithms for this model suited to large-scale social-media data is another challenge. To this end, we propose a scalable and multi-threaded inference algorithm based on online Gibbs sampling. Extensive evaluations on large-scale Twitter and Facebook data show that the extracted topics, when applied to authorship and commenting prediction, outperform state-of-the-art baselines. More importantly, our model produces valuable insights on topic trends and user personality trends, beyond the capability of existing approaches.
(a) Most content analysis papers @cite_14 use standard topic models such as LDA @cite_8 or basic metrics like tf-idf. Focusing on the specific content of microblogs, Ramage et al. @cite_15 proposed an LDA variant that accounts for hashtags in content analysis. One problem with this approach is that hashtags are not general features of social media data, and are often unreliable. There is little modeling work that takes into account the rich features of social media, such as the network, geography, etc.
{ "cite_N": [ "@cite_15", "@cite_14", "@cite_8" ], "mid": [ "2137958601", "2063904635", "1880262756" ], "abstract": [ "As microblogging grows in popularity, services like Twitter are coming to support information gathering needs above and beyond their traditional roles as social networks. But most users’ interaction with Twitter is still primarily focused on their social graphs, forcing the often inappropriate conflation of “people I follow” with “stuff I want to read.” We characterize some information needs that the current Twitter interface fails to support, and argue for better representations of content for solving these challenges. We present a scalable implementation of a partially supervised learning model (Labeled LDA) that maps the content of the Twitter feed into dimensions. These dimensions correspond roughly to substance, style, status, and social characteristics of posts. We characterize users and tweets using this model, and present results on two information consumption oriented tasks.", "Social networks such as Facebook, LinkedIn, and Twitter have been a crucial source of information for a wide spectrum of users. In Twitter, popular information that is deemed important by the community propagates through the network. Studying the characteristics of content in the messages becomes important for a number of tasks, such as breaking news detection, personalized message recommendation, friends recommendation, sentiment analysis and others. While many researchers wish to use standard text mining tools to understand messages on Twitter, the restricted length of those messages prevents them from being employed to their full potential. We address the problem of using standard topic models in micro-blogging environments by studying how the models can be trained on the dataset. 
We propose several schemes to train a standard topic model and compare their quality and effectiveness through a set of carefully designed experiments from both qualitative and quantitative perspectives. We show that by training a topic model on aggregated messages we can obtain a higher quality of learned model which results in significantly better performance in two real-world classification problems. We also discuss how the state-of-the-art Author-Topic model fails to model hierarchical relationships between entities in Social Media.", "We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model." ] }
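As a point of reference for the "basic metrics like tf-idf" mentioned above, a plain tf-idf weighting can be computed as follows (one of several common variants):

```python
import math
from collections import Counter

def tfidf(docs):
    """Plain tf-idf for tokenised, non-empty documents: term frequency
    times log inverse document frequency.  In this variant a term that
    occurs in every document gets weight 0."""
    n = len(docs)
    df = Counter()
    for d in docs:
        df.update(set(d))                    # document frequency
    out = []
    for d in docs:
        tf = Counter(d)
        out.append({t: (c / len(d)) * math.log(n / df[t])
                    for t, c in tf.items()})
    return out
```

Smoothed variants (e.g. `log(1 + n/df)`) avoid the zero weight for ubiquitous terms; the cited works layer topic models on top of such basic representations.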
1205.1456
2949231373
We study the problem of analyzing the influence of various factors affecting individual messages posted in social media. The problem is challenging because of the various types of influences propagating through the social media network that act simultaneously on any user. Additionally, the topic composition of the influencing factors and the susceptibility of users to these influences evolve over time. This problem has not been studied before, and off-the-shelf models are unsuitable for this purpose. To capture the complex interplay of these various factors, we propose a new non-parametric model called the Dynamic Multi-Relational Chinese Restaurant Process. This accounts for the user network in data generation and also allows the parameters to evolve over time. Designing inference algorithms for this model suited to large-scale social-media data is another challenge. To this end, we propose a scalable and multi-threaded inference algorithm based on online Gibbs sampling. Extensive evaluations on large-scale Twitter and Facebook data show that the extracted topics, when applied to authorship and commenting prediction, outperform state-of-the-art baselines. More importantly, our model produces valuable insights on topic trends and user personality trends, beyond the capability of existing approaches.
(b) In the context of microblogging sites, content recommendation approaches @cite_2 @cite_4 @cite_13 assess user interests based on their activities. Recently, Wen et al. @cite_21 have proposed an approach that studies the influence of the network on users. Ahmed et al. @cite_0 model the dynamics of user interest and also account for the generic popularity of a particular item, but do not consider the influence of various external factors such as the network of users and geography. Thus, most related work deals either with the influence of a single factor or with user preferences.
{ "cite_N": [ "@cite_4", "@cite_21", "@cite_0", "@cite_2", "@cite_13" ], "mid": [ "2127785456", "2094858374", "2142534468", "2151451758", "" ], "abstract": [ "More and more web users keep up with newest information through information streams such as the popular micro-blogging website Twitter. In this paper we studied content recommendation on Twitter to better direct user attention. In a modular approach, we explored three separate dimensions in designing such a recommender: content sources, topic interest models for users, and social voting. We implemented 12 recommendation engines in the design space we formulated, and deployed them to a recommender service on the web to gather feedback from real Twitter users. The best performing algorithm improved the percentage of interesting content to 72 from a baseline of 33 . We conclude this work by discussing the implications of our recommender design and how our design can generalize to other information streams.", "This paper intends to provide some insights of a scientific problem: how likely one's interests can be inferred from his her social connections -- friends, friends' friends, 3-degree friends, etc? Is \"Birds of a Feather Flocks Together\" a norm? We do not consider the friending activity on online social networking sites. Instead, we conduct this study by implementing a privacy-preserving large distribute social sensor system in a large global IT company to capture the multifaceted activities of 30,000+ people, including communications (e.g., emails, instant messaging, etc) and Web 2.0 activities (e.g., social bookmarking, file sharing, blogging, etc). These activities occupy the majority of employees' time in work, and thus, provide a high quality approximation to the real social connections of employees in the workplace context. 
In addition to such \"informal networks\", we investigated the \"formal networks\", such as their hierarchical structure, as well as the demographic profile data such as geography, job role, self-specified interests, etc. Because user ID matching across multiple sources on the Internet is very difficult, and most user activity logs have to be anonymized before they are processed, no prior studies could collect comparable multifaceted activity data of individuals. That makes this study unique. In this paper, we present a technique to predict the inference quality by utilizing (1) network analysis and network autocorrelation modeling of informal and formal networks, and (2) regression models to predict user interest inference quality from network characteristics. We verify our findings with experiments on both implicit user interests indicated by the content of communications or Web 2.0 activities, and explicit user interests specified in user profiles. We demonstrate that the inference quality prediction increases the inference quality of implicit interests by 42.8 , and inference quality of explicit interests by up to 101 .", "Historical user activity is key for building user profiles to predict the user behavior and affinities in many web applications such as targeting of online advertising, content personalization and social recommendations. User profiles are temporal, and changes in a user's activity patterns are particularly useful for improved prediction and recommendation. For instance, an increased interest in car-related web pages may well suggest that the user might be shopping for a new vehicle.In this paper we present a comprehensive statistical framework for user profiling based on topic models which is able to capture such effects in a fully fashion. 
Our method models topical interests of a user dynamically where both the user association with the topics and the topics themselves are allowed to vary over time, thus ensuring that the profiles remain current. We describe a streaming, distributed inference algorithm which is able to handle tens of millions of users. Our results show that our model contributes towards improved behavioral targeting of display advertising relative to baseline models that do not incorporate topical and or temporal dependencies. As a side-effect our model yields human-understandable results which can be used in an intuitive fashion by advertisers.", "Recommending news stories to users, based on their preferences, has long been a favourite domain for recommender systems research. In this paper, we describe a novel approach to news recommendation that harnesses real-time micro-blogging activity, from a service such as Twitter, as the basis for promoting news stories from a user's favourite RSS feeds. A preliminary evaluation is carried out on an implementation of this technique that shows promising results.", "" ] }
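A bare-bones version of assessing user interests from activities, which the recommendation approaches above do in far richer topic-model form, is to aggregate a user's past posts into a term vector and rank candidate items by cosine similarity. Everything here (the profile as raw term counts, the cosine ranking) is an illustrative simplification, not the method of any cited paper.

```python
import math
from collections import Counter

def interest_profile(user_docs):
    """Aggregate a user's past activity (tokenised posts) into one
    term-count interest vector -- a crude stand-in for the topic-based
    profiles used in the cited work."""
    prof = Counter()
    for d in user_docs:
        prof.update(d)
    return prof

def cosine(u, v):
    dot = sum(u[t] * v.get(t, 0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(user_docs, candidates, k=1):
    """Rank candidate items (tokenised) by similarity to the profile."""
    prof = interest_profile(user_docs)
    return sorted(candidates,
                  key=lambda c: cosine(prof, Counter(c)),
                  reverse=True)[:k]
```

The gap such a static profile leaves open, and which the dynamic models above address, is that it ignores both temporal drift in interests and influences from the user's network.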
1205.0652
1529281365
Performance of data forwarding in Delay Tolerant Networks (DTNs) benefits considerably if one can make use of human mobility in terms of social structures. However, it is difficult and time-consuming to calculate the centrality and similarity of nodes using solutions for traditional social networks, mainly because of the transient node contacts and the intermittently connected environment. In this work, we are interested in the following question: can we explore some other stable social attributes to quantify the centrality and similarity of nodes? Taking GPS traces of human walks from the real world, we find that there exist two known phenomena: one is the public hotspot, the other the personal hotspot. Motivated by this observation, we present Hoten (hotspot and entropy), a novel routing metric to improve routing performance in DTNs. First, we use the relative entropy between the public hotspots and the personal hotspots to compute the centrality of nodes. Then we utilize the inverse symmetrized entropy of the personal hotspots between two nodes to compute the similarity between them. Third, we exploit the entropy of the personal hotspots of a node to estimate its personality. Besides, we propose a method to ascertain the optimal size of a hotspot. Finally, we compare our routing strategy with other state-of-the-art routing schemes through extensive trace-driven simulations; the results show that Hoten largely outperforms other solutions, especially in terms of combined overhead packet delivery ratio and the average number of hops per message.
Note that most of the aforementioned schemes do not take social structures into account. However, with the recent popularization of personal hand-held mobile devices, human walks gradually play a critical role in network performance, since devices may fail to connect with each other when people move around. Recently, a few works have attempted to uncover the underlying stable network structure in real traces using social network analysis techniques @cite_17 . For example, SimBet @cite_31 exploited the betweenness centrality and social similarity of ego networks @cite_13 to differentiate nodes; messages are forwarded to nodes with relatively high SimBet values to increase the probability of finding better relays to the final destination. The authors of @cite_11 proposed BUBBLE, which combines node centrality and community structure to make forwarding decisions. They assume that each node has a global rank across the whole system and a local rank within its local community. While a message is outside the community of the destination, it is forwarded to nodes with a high global rank; once the message enters the destination community, it is delivered to nodes with a high local rank in that community.
{ "cite_N": [ "@cite_31", "@cite_11", "@cite_13", "@cite_17" ], "mid": [ "2082674813", "2135712710", "2012909434", "2046868922" ], "abstract": [ "Message delivery in sparse Mobile Ad hoc Networks (MANETs) is difficult due to the fact that the network graph is rarely (if ever) connected. A key challenge is to find a route that can provide good delivery performance and low end-to-end delay in a disconnected network graph where nodes may move freely. This paper presents a multidisciplinary solution based on the consideration of the so-called small world dynamics which have been proposed for economy and social studies and have recently revealed to be a successful approach to be exploited for characterising information propagation in wireless networks. To this purpose, some bridge nodes are identified based on their centrality characteristics, i.e., on their capability to broker information exchange among otherwise disconnected nodes. Due to the complexity of the centrality metrics in populated networks the concept of ego networks is exploited where nodes are not required to exchange information about the entire network topology, but only locally available information is considered. Then SimBet Routing is proposed which exploits the exchange of pre-estimated \"betweenness' centrality metrics and locally determined social \"similarity' to the destination node. We present simulations using real trace data to demonstrate that SimBet Routing results in delivery performance close to Epidemic Routing but with significantly reduced overhead. Additionally, we show that SimBet Routing outperforms PRoPHET Routing, particularly when the sending and receiving nodes have low connectivity.", "In this paper we seek to improve our understanding of human mobility in terms of social structures, and to use these structures in the design of forwarding algorithms for Pocket Switched Networks (PSNs). 
Taking human mobility traces from the real world, we discover that human interaction is heterogeneous both in terms of hubs (popular individuals) and groups or communities. We propose a social based forwarding algorithm, BUBBLE, which is shown empirically to improve the forwarding efficiency significantly compared to oblivious forwarding schemes and to PROPHET algorithm. We also show how this algorithm can be implemented in a distributed way, which demonstrates that it is applicable in the decentralised environment of PSNs.", "In this paper, we look at the betweenness centrality of ego in an ego network. We discuss the issue of normalization and develop an efficient and simple algorithm for calculating the betweenness score. We then examine the relationship between the ego betweenness and the betweenness of the actor in the whole network. Whereas, we can show that there is no theoretical link between the two we undertake a simulation study, which indicates that the local ego betweenness is highly correlated with the betweenness of the actor in the complete network.", "Abstract Egocentric centrality measures (for data on a node’s first-order zone) parallel to Freeman’s [Social Networks 1 (1979) 215] centrality measures for complete (sociocentric) network data are considered. Degree-based centrality is in principle identical for egocentric and sociocentric network data. A closeness measure is uninformative for egocentric data, since all geodesic distances from ego to other nodes in the first-order zone are 1 by definition. The extent to which egocentric and sociocentric versions of Freeman’s betweenness centrality measure correspond is explored empirically. Across seventeen diverse networks, that correspondence is found to be relatively close—though variations in egocentric network composition do lead to some notable differences in egocentric and sociocentric betweennness. 
The findings suggest that research design has a relatively modest impact on assessing the relative betweenness of nodes, and that a betweenness measure based on egocentric network data could be a reliable substitute for Freeman’s betweenness measure when it is not practical to collect complete network data. However, differences in the research methods used in sociocentric and egocentric studies could lead to additional differences in the respective betweenness centrality measures." ] }
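The BUBBLE forwarding rule described above (bubble up on global rank outside the destination's community, then on local rank inside it) can be sketched as follows. This is a hedged illustration, not the authors' implementation; the `Node` fields and rank values are assumptions for the example.

```python
# Sketch of a BUBBLE-style forwarding decision: outside the destination's
# community, hand the message to community members or higher-global-rank
# nodes; inside it, use only the local (intra-community) rank.
from dataclasses import dataclass

@dataclass
class Node:
    nid: str
    community: str
    global_rank: float   # centrality across the whole network
    local_rank: float    # centrality within the node's own community

def should_forward(carrier: Node, peer: Node, dest: Node) -> bool:
    """Should `carrier` hand a message destined for `dest` to `peer`?"""
    if peer.nid == dest.nid:
        return True  # direct delivery
    if carrier.community != dest.community:
        # Message has not yet reached the destination community:
        # forward into that community, or up the global-rank hierarchy.
        if peer.community == dest.community:
            return True
        return peer.global_rank > carrier.global_rank
    # Already inside the destination community: climb the local ranks.
    return peer.community == dest.community and peer.local_rank > carrier.local_rank
```

A carrier thus never hands a message back out of the destination's community once it has entered it, which is the key difference from purely global-rank greedy forwarding.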
1205.0679
1994748751
The existential k-pebble game characterizes the expressive power of the existential-positive k-variable fragment of first-order logic on finite structures. The winner of the existential k-pebble game on two given finite structures can be determined in time O(n^{2k}) by dynamic programming on the graph of game configurations. We show that there is no O(n^{(k-3)/12})-time algorithm that decides which player can win the existential k-pebble game on two given structures. This lower bound is unconditional and does not rely on any complexity-theoretic assumptions. Establishing strong k-consistency is a well-known heuristic for solving the constraint satisfaction problem (CSP). By the game characterization of Kolaitis and Vardi our result implies that there is no O(n^{(k-3)/12})-time algorithm that decides if strong k-consistency can be established for a given CSP-instance.
Finally, the parameterized complexity of @math -consistency has also been investigated by Gaspers and Szeider @cite_3 . We discuss their work after the introduction to @math -consistency in Section .
{ "cite_N": [ "@cite_3" ], "mid": [ "1528594376" ], "abstract": [ "We investigate the parameterized complexity of deciding whether a constraint network is k-consistent. We show that, parameterized by k, the problem is complete for the complexity class co-W[2]. As secondary parameters we consider the maximum domain size d and the maximum number l of constraints in which a variable occurs. We show that parameterized by k + d, the problem drops down one complexity level and becomes co-W[1]-complete. Parameterized by k +d+l the problem drops down one more level and becomes fixed-parameter tractable. We further show that the same complexity classification applies to strong k-consistency, directional k-consistency, and strong directional k-consistency. Our results establish a super-polynomial separation between input size and time complexity. Thus we strengthen the known lower bounds on time complexity of k-consistency that are based on input size." ] }
1205.0610
2402950858
Multiple instance learning (MIL) has attracted great attention recently in the machine learning community. However, most MIL algorithms are very slow and cannot be applied to large datasets. In this paper, we propose a greedy strategy to speed up the multiple instance learning process. Our contribution is twofold. First, we propose a density ratio model, and show that maximizing a density ratio function is a lower bound of the DD model under certain conditions. Second, we make use of a histogram ratio between positive bags and negative bags to represent the density ratio function and find codebooks separately for positive bags and negative bags by a greedy strategy. For testing, we make use of a nearest neighbor strategy to classify new bags. We test our method on both small benchmark datasets and the large TRECVID MED11 dataset. The experimental results show that our method yields comparable accuracy to the current state of the art, while being up to at least one order of magnitude faster.
One of the earliest algorithms for learning from multiple instances was developed by @cite_24 for drug activity prediction. Their algorithm, the axis-parallel rectangle (APR) method, expands or shrinks a hyper-rectangle in the instance feature space with the goal of finding the smallest box that covers at least one instance from each positive bag and no instances from any negative bag. Following this seminal work, there has been a significant amount of research devoted to MIL problems using different learning models, such as DD @cite_13 , EM-DD @cite_15 , and extended Citation kNN @cite_19 .
{ "cite_N": [ "@cite_24", "@cite_19", "@cite_15", "@cite_13" ], "mid": [ "2110119381", "2078579128", "2163474322", "2154318594" ], "abstract": [ "The multiple instance problem arises in tasks where the training examples are ambiguous: a single example object may have many alternative feature vectors (instances) that describe it, and yet only one of those feature vectors may be responsible for the observed classification of the object. This paper describes and compares three kinds of algorithms that learn axis-parallel rectangles to solve the multiple instance problem. Algorithms that ignore the multiple instance problem perform very poorly. An algorithm that directly confronts the multiple instance problem (by attempting to identify which feature vectors are responsible for the observed classifications) performs best, giving 89% correct predictions on a musk odor prediction task. The paper also illustrates the use of artificial data to debug and compare these algorithms.", "As opposed to traditional supervised learning, multiple-instance learning concerns the problem of classifying a bag of instances, given bags that are labeled by a teacher as being overall positive or negative. Current research mainly concentrates on adapting traditional concept learning to solve this problem. In this paper we investigate the use of lazy learning and Hausdorff distance to approach the multiple-instance problem. We present two variants of the K-nearest neighbor algorithm, called BayesianKNN and Citation-KNN, solving the multiple-instance problem. Experiments on the Drug discovery benchmark data show that both algorithms are competitive with the best ones conceived in the concept learning framework. Further work includes exploring of a combination of lazy and eager multiple-instance problem classifiers.", "We present a new multiple-instance (MI) learning technique (EM-DD) that combines EM with the diverse density (DD) algorithm. 
EM-DD is a general-purpose MI algorithm that can be applied with boolean or real-value labels and makes real-value predictions. On the boolean Musk benchmarks, the EM-DD algorithm without any tuning significantly outperforms all previous algorithms. EM-DD is relatively insensitive to the number of relevant attributes in the data set and scales up well to large bag sizes. Furthermore, EM-DD provides a new framework for MI learning, in which the MI problem is converted to a single-instance setting by using EM to estimate the instance responsible for the label of the bag.", "Multiple-instance learning is a variation on supervised learning, where the task is to learn a concept given positive and negative bags of instances. Each bag may contain many instances, but a bag is labeled positive even if only one of the instances in it falls within the concept. A bag is labeled negative only if all the instances in it are negative. We describe a new general framework, called Diverse Density, for solving multiple-instance learning problems. We apply this framework to learn a simple description of a person from a series of images (bags) containing that person, to a stock selection problem, and to the drug activity prediction problem." ] }
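The lazy-learning approach mentioned above (Citation-kNN and its variants) hinges on a bag-level distance: the minimal Hausdorff distance, i.e. the distance between the closest pair of instances drawn from two bags. A minimal sketch, assuming instances are plain numeric tuples (the function names are illustrative, not from the cited papers):

```python
# Minimal-Hausdorff bag distance plus a 1-nearest-neighbor bag
# classifier, in the spirit of Citation-kNN-style MIL methods.
import math

def inst_dist(x, y):
    """Euclidean distance between two instances (numeric tuples)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def min_hausdorff(bag_a, bag_b):
    """Distance between the closest cross-bag pair of instances."""
    return min(inst_dist(x, y) for x in bag_a for y in bag_b)

def nearest_bag_label(query_bag, labeled_bags):
    """1-NN over bags: labeled_bags is a list of (bag, label) pairs."""
    bag, label = min(labeled_bags,
                     key=lambda bl: min_hausdorff(query_bag, bl[0]))
    return label
```

Using the closest pair (rather than the classical max-min Hausdorff distance) makes the bag distance robust to the many irrelevant instances a positive bag may contain, which is the point of the MIL setting.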
1205.0581
2952288288
We consider the classical secret sharing problem in the case where all agents are selfish but rational. In recent work, Kol and Naor show that, when there are two players, in the non-simultaneous communication model, i.e. when rushing is possible, there is no Nash equilibrium that ensures both players learn the secret. However, they describe a mechanism for this problem, for any number of players, that is an epsilon-Nash equilibrium, in that no player can gain more than epsilon utility by deviating from it. Unfortunately, the Kol and Naor mechanism, and, to the best of our knowledge, all previous mechanisms for this problem require each agent to send O(n) messages in expectation, where n is the number of agents. This may be problematic for some applications of rational secret sharing such as secure multi-party computation and simulation of a mediator. We address this issue by describing mechanisms for rational secret sharing that are designed for large n. Both of our results hold for n > 2, and are Nash equilibria, rather than just epsilon-Nash equilibria. Our first result is a mechanism for n-out-of-n rational secret sharing that is scalable in the sense that it requires each agent to send only an expected O(log n) bits. Moreover, the latency of this mechanism is O(log n) in expectation, compared to O(n) expected latency for the Kol and Naor result. Our second result is a mechanism for a relaxed variant of rational m-out-of-n secret sharing where m = Theta(n). It requires each processor to send O(log n) bits and has O(log n) latency. Both of our mechanisms are non-cryptographic, and are not susceptible to backwards induction.
The work of Kol and Naor @cite_10 is closest to our own work. They show that in the non-simultaneous broadcast model (i.e., when rushing is possible), there is no Nash equilibrium that ensures all agents learn the secret, at least for the case of two players. They thus consider and solve the problem of designing an @math -Nash equilibrium for the problem in this communication model. An @math -Nash equilibrium is close to an equilibrium in the sense that no player can gain more than @math utility by unilaterally deviating from it. Furthermore, the equilibrium they achieve is everlasting, in the sense that after any history that is consistent with all players following the protocol, following the protocol continues to be an @math -Nash equilibrium. As we have already discussed, our protocols make use of several clever ideas from their result.
{ "cite_N": [ "@cite_10" ], "mid": [ "1969882998" ], "abstract": [ "We consider the rational versions of two of the classical problems in foundations of cryptography: secret sharing and multiparty computation, suggested by Halpern and Teague (STOC 2004). Our goal is to design games and fair strategies that encourage rational participants to exchange information about their inputs for their mutual benefit, when the only mean of communication is a broadcast channel. We show that protocols for the above information exchanging tasks, where players' values come from a bounded domain, cannot satisfy some of the most desirable properties. In contrast, we provide a rational secret sharing scheme with simultaneous broadcast channel in which shares are taken from an unbounded domain, but have finite (and polynomial sized) expectation. Previous schemes (mostly cryptographic) have required computational assumptions, making them inexact and susceptible to backward induction, or used stronger communication channels. Our scheme is non-cryptographic, immune to backward induction, and satisfies a stronger rationality concept (strict Nash equilibrium). We show that our solution can also be used to construct an e-Nash equilibrium secret sharing scheme for the case of a non-simultaneous broadcast channel." ] }
1205.0581
2952288288
We consider the classical secret sharing problem in the case where all agents are selfish but rational. In recent work, Kol and Naor show that, when there are two players, in the non-simultaneous communication model, i.e. when rushing is possible, there is no Nash equilibrium that ensures both players learn the secret. However, they describe a mechanism for this problem, for any number of players, that is an epsilon-Nash equilibrium, in that no player can gain more than epsilon utility by deviating from it. Unfortunately, the Kol and Naor mechanism, and, to the best of our knowledge, all previous mechanisms for this problem require each agent to send O(n) messages in expectation, where n is the number of agents. This may be problematic for some applications of rational secret sharing such as secure multi-party computation and simulation of a mediator. We address this issue by describing mechanisms for rational secret sharing that are designed for large n. Both of our results hold for n > 2, and are Nash equilibria, rather than just epsilon-Nash equilibria. Our first result is a mechanism for n-out-of-n rational secret sharing that is scalable in the sense that it requires each agent to send only an expected O(log n) bits. Moreover, the latency of this mechanism is O(log n) in expectation, compared to O(n) expected latency for the Kol and Naor result. Our second result is a mechanism for a relaxed variant of rational m-out-of-n secret sharing where m = Theta(n). It requires each processor to send O(log n) bits and has O(log n) latency. Both of our mechanisms are non-cryptographic, and are not susceptible to backwards induction.
The impossibility of a Nash equilibrium for two players carries over to the setting with secure private channels, since there is no difference between private channels and broadcast channels when there are only two players. However, one might hope that the algorithm of Kol and Naor @cite_10 could be simulated over secure private channels to give an everlasting @math -Nash equilibrium. Unfortunately, simulation of broadcast over private channels is expensive, requiring each player to send @math messages per round.
{ "cite_N": [ "@cite_10" ], "mid": [ "1969882998" ], "abstract": [ "We consider the rational versions of two of the classical problems in foundations of cryptography: secret sharing and multiparty computation, suggested by Halpern and Teague (STOC 2004). Our goal is to design games and fair strategies that encourage rational participants to exchange information about their inputs for their mutual benefit, when the only mean of communication is a broadcast channel. We show that protocols for the above information exchanging tasks, where players' values come from a bounded domain, cannot satisfy some of the most desirable properties. In contrast, we provide a rational secret sharing scheme with simultaneous broadcast channel in which shares are taken from an unbounded domain, but have finite (and polynomial sized) expectation. Previous schemes (mostly cryptographic) have required computational assumptions, making them inexact and susceptible to backward induction, or used stronger communication channels. Our scheme is non-cryptographic, immune to backward induction, and satisfies a stronger rationality concept (strict Nash equilibrium). We show that our solution can also be used to construct an e-Nash equilibrium secret sharing scheme for the case of a non-simultaneous broadcast channel." ] }
1205.0581
2952288288
We consider the classical secret sharing problem in the case where all agents are selfish but rational. In recent work, Kol and Naor show that, when there are two players, in the non-simultaneous communication model, i.e. when rushing is possible, there is no Nash equilibrium that ensures both players learn the secret. However, they describe a mechanism for this problem, for any number of players, that is an epsilon-Nash equilibrium, in that no player can gain more than epsilon utility by deviating from it. Unfortunately, the Kol and Naor mechanism, and, to the best of our knowledge, all previous mechanisms for this problem require each agent to send O(n) messages in expectation, where n is the number of agents. This may be problematic for some applications of rational secret sharing such as secure multi-party computation and simulation of a mediator. We address this issue by describing mechanisms for rational secret sharing that are designed for large n. Both of our results hold for n > 2, and are Nash equilibria, rather than just epsilon-Nash equilibria. Our first result is a mechanism for n-out-of-n rational secret sharing that is scalable in the sense that it requires each agent to send only an expected O(log n) bits. Moreover, the latency of this mechanism is O(log n) in expectation, compared to O(n) expected latency for the Kol and Naor result. Our second result is a mechanism for a relaxed variant of rational m-out-of-n secret sharing where m = Theta(n). It requires each processor to send O(log n) bits and has O(log n) latency. Both of our mechanisms are non-cryptographic, and are not susceptible to backwards induction.
In @cite_19 we overcame this difficulty, providing a scalable algorithm for rational secret sharing, in which each player only sends @math bits per round and the expected number of rounds is constant (although each round takes @math time). Moreover, following the protocol is an @math -Nash equilibrium. Unfortunately, a certain bad event with small but constant probability caused some players, when they recognized it, to deviate from the protocol, so that the equilibrium is not everlasting. This paper is the full version of @cite_19 . However, we improve on the work in @cite_19 in two ways. First, we remove all probability of error for @math -out-of- @math secret sharing, and improve the probability of error for @math -out-of- @math from a constant to an inverse polynomial. Second, we show that our new protocol is a Nash equilibrium, not just an @math -Nash equilibrium, as long as @math .
{ "cite_N": [ "@cite_19" ], "mid": [ "2001578254" ], "abstract": [ "We consider the classical secret sharing problem in the case where all agents are selfish but rational. In recent work, Kol and Naor show that in the non-simultaneous communciation model (i.e. when rushing is possible), there is no Nash equilibrium that ensures all agents learn the secret. However, they describe a mechanism for this problem that is an e-Nash equilibrium, i.e. it is close to an equilibrium in the sense that no player can gain more than e utility by deviating from it. Unfortunately, the Kol and Naor mechanism, and, to the best of our knowledge, all previous mechanisms for this problem require each agent to send O(n) messages in expectation, where n is the number of agents. This may be problematic for some applications of rational secret sharing such as secure multiparty computation and simulation of a mediator. We address this issue by describing a mechanism for rational n-out-of-n secret sharing that is an e-Nash equilibrium, and is scalable in the sense that it requires each agent to send only an expected O(1) bits. Moreover, the latency of our mechanism is O(log n) in expectation, compared to O(n) expected latency for the Kol and Naor result. We also design mechanisms for a relaxed variant of rational m-out-of-n secret sharing where m = Θ(n) that require each processor to send O(log n) bits and have O( n) latency. Our mechanisms are non-cryptographic, and are not susceptible to backwards induction." ] }
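The n-out-of-n setting discussed above rests on a standard non-cryptographic building block: XOR-based secret sharing, where every one of the n shares is needed to reconstruct the secret. The sketch below shows only this building block and deliberately omits all of the game-theoretic machinery (fake rounds, incentives, rushing) that the rational mechanisms add on top of it.

```python
# n-out-of-n secret sharing over fixed-width bitstrings via XOR:
# any n-1 shares are uniformly random and reveal nothing; the XOR of
# all n shares recovers the secret.
import secrets

def share(secret: int, n: int, nbits: int = 32):
    """Split `secret` (< 2**nbits) into n shares; all n are required."""
    shares = [secrets.randbits(nbits) for _ in range(n - 1)]
    last = secret
    for s in shares:
        last ^= s  # last share forces the total XOR to equal the secret
    return shares + [last]

def reconstruct(shares):
    """XOR all shares together to recover the secret."""
    acc = 0
    for s in shares:
        acc ^= s
    return acc
```

Because any proper subset of shares is information-theoretically independent of the secret, withholding even one share (the "rushing" concern above) leaves the remaining players with nothing, which is exactly why the rational mechanisms must manufacture incentives to release shares.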
1205.0435
1878897978
Social coordination allows users to move beyond awareness of their friends to efficiently coordinating physical activities with others. While specific forms of social coordination can be seen in tools such as Evite, Meetup and Groupon, we introduce a more general model using what we call enmeshed queries. An enmeshed query allows users to declaratively specify an intent to coordinate by specifying social attributes such as the desired group size and who/what/when, and the database returns matching queries. Enmeshed queries are continuous, but new queries (and not data) answer older queries; the variable group size also makes enmeshed queries different from entangled queries, publish-subscribe systems, and dating services. We show that even offline group coordination using enmeshed queries is NP-hard. We then introduce efficient heuristics that use selective indices such as location and time to reduce the space of possible matches; we also add refinements such as delayed evaluation and using the relative matchability of users to determine search order. We describe a centralized implementation and evaluate its performance against an optimal algorithm. We show that the combination of not stopping prematurely (after finding a match) and delayed evaluation results in an algorithm that finds 86% of the matches found by an optimal algorithm, and takes an average of 40 usec per query using 1 core of a 2.5 GHz server machine. Further, the algorithm has good latency, is reasonably fair to large group size requests, and can be scaled to global workloads using multiple cores and multiple servers. We conclude by describing potential generalizations that add prices, recommendations, and data mining to basic enmeshed queries.
Pub-sub systems @cite_1 and continuous query systems @cite_0 also provide declarative continuous query evaluation, but each query is logically independent; further there are no constraints on groups. On the other hand, enmeshed queries find matches among queries. Our approach to find candidate query matches is related to work in efficiently evaluating boolean expressions @cite_7 . Finally, enmeshed queries differ from nested transactions @cite_8 , since at query time the system does not know which other enmeshed queries it is waiting on. Table summarizes these comparisons.
{ "cite_N": [ "@cite_0", "@cite_1", "@cite_7", "@cite_8" ], "mid": [ "2002110561", "1794157345", "2146032463", "2114811697" ], "abstract": [ "In a database to which data is continually added, users may wish to issue a permanent query and be notified whenever data matches the query. If such continuous queries examine only single records, this can be implemented by examining each record as it arrives. This is very efficient because only the incoming record needs to be scanned. This simple approach does not work for queries involving joins or time. The Tapestry system allows users to issue such queries over a database of mail and bulletin board messages. The user issues a static query, such as “show me all messages that have been replied to by Jones,” as though the database were fixed and unchanging. Tapestry converts the query into an incremental query that efficiently finds new matches to the original query as new messages are added to the database. This paper describes the techniques used in Tapestry, which do not depend on triggers and thus be implemented on any commercial database that supports SQL. Although Tapestry is designed for filtering mail and news messages, its techniques are applicable to any append-only database.", "", "We consider the problem of efficiently indexing Disjunctive Normal Form (DNF) and Conjunctive Normal Form (CNF) Boolean expressions over a high-dimensional multi-valued attribute space. The goal is to rapidly find the set of Boolean expressions that evaluate to true for a given assignment of values to attributes. A solution to this problem has applications in online advertising (where a Boolean expression represents an advertiser's user targeting requirements, and an assignment of values to attributes represents the characteristics of a user visiting an online page) and in general any publish subscribe system (where a Boolean expression represents a subscription, and an assignment of values to attributes represents an event). 
All existing solutions that we are aware of can only index a specialized sub-set of conjunctive and or disjunctive expressions, and cannot efficiently handle general DNF and CNF expressions (including NOTs) over multi-valued attributes. In this paper, we present a novel solution based on the inverted list data structure that enables us to index arbitrarily complex DNF and CNF Boolean expressions over multi-valued attributes. An interesting aspect of our solution is that, by virtue of leveraging inverted lists traditionally used for ranked information retrieval, we can efficiently return the top-N matching Boolean expressions. This capability enables emerging applications such as ranked publish subscribe systems [16], where only the top subscriptions that match an event are desired. For example, in online advertising there is a limit on the number of advertisements that can be shown on a given page and only the \"best\" advertisements can be displayed. We have evaluated our proposed technique based on data from an online advertising application, and the results show a dramatic performance improvement over prior techniques.", "A new formal model is presented for studying concurrency and resiliency properties for nested transactions. The model is used to state and prove correctness of a well-known locking algorithm." ] }
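The inverted-list approach to matching boolean subscriptions cited above (@cite_7) is easiest to see in its simplest form: conjunctive subscriptions indexed by (attribute, value) postings, where a subscription matches an event exactly when its posting-list hit count equals its predicate count. This is a minimal sketch under that assumption, not the paper's full DNF/CNF index.

```python
# Inverted-list matching for purely conjunctive subscriptions: each
# subscription is a set of required (attribute, value) pairs, and a
# subscription matches an event iff every one of its pairs appears.
from collections import defaultdict

class ConjunctiveIndex:
    def __init__(self):
        self.posting = defaultdict(set)  # (attr, value) -> subscription ids
        self.size = {}                   # subscription id -> predicate count

    def add(self, sid, predicates):
        """Register subscription `sid` with its (attr, value) predicates."""
        self.size[sid] = len(predicates)
        for pair in predicates:
            self.posting[pair].add(sid)

    def match(self, event):
        """Return ids of subscriptions fully satisfied by `event` (a dict)."""
        hits = defaultdict(int)
        for pair in event.items():
            for sid in self.posting.get(pair, ()):
                hits[sid] += 1
        # A subscription matches iff all of its predicates were hit.
        return {sid for sid, count in hits.items() if count == self.size[sid]}
```

Handling disjunctions and NOTs requires the more elaborate counting schemes of the cited work; the hit-count trick above only covers the conjunctive core.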
1205.0435
1878897978
Social coordination allows users to move beyond awareness of their friends to efficiently coordinating physical activities with others. While specific forms of social coordination can be seen in tools such as Evite, Meetup and Groupon, we introduce a more general model using what we call enmeshed queries. An enmeshed query allows users to declaratively specify an intent to coordinate by specifying social attributes such as the desired group size and who/what/when, and the database returns matching queries. Enmeshed queries are continuous, but new queries (and not data) answer older queries; the variable group size also makes enmeshed queries different from entangled queries, publish-subscribe systems, and dating services. We show that even offline group coordination using enmeshed queries is NP-hard. We then introduce efficient heuristics that use selective indices such as location and time to reduce the space of possible matches; we also add refinements such as delayed evaluation and using the relative matchability of users to determine search order. We describe a centralized implementation and evaluate its performance against an optimal algorithm. We show that the combination of not stopping prematurely (after finding a match) and delayed evaluation results in an algorithm that finds 86% of the matches found by an optimal algorithm, and takes an average of 40 usec per query using 1 core of a 2.5 GHz server machine. Further, the algorithm has good latency, is reasonably fair to large group size requests, and can be scaled to global workloads using multiple cores and multiple servers. We conclude by describing potential generalizations that add prices, recommendations, and data mining to basic enmeshed queries.
There is also related work on team formation in social networks @cite_2 that studies the following problem: given a set of people (i.e. nodes in a graph) with certain skills and communication costs across people (i.e. edges in the graph), and a task @math that requires some set of skills, find a subset of people to perform @math with minimal communication costs. They prove that such problems are NP-hard, and describe heuristics to reduce computation. Our problem is not the same as team formation because tasks are not explicit first-class entities in enmeshed queries but are, instead, implicit in the desires of users. Further, our metric is maximizing matches and not minimizing communication.
{ "cite_N": [ "@cite_2" ], "mid": [ "2145604831" ], "abstract": [ "Given a task T, a pool of individuals X with different skills, and a social network G that captures the compatibility among these individuals, we study the problem of finding X, a subset of X, to perform the task. We call this the T EAM F ORMATION problem. We require that members of X' not only meet the skill requirements of the task, but can also work effectively together as a team. We measure effectiveness using the communication cost incurred by the subgraph in G that only involves X'. We study two variants of the problem for two different communication-cost functions, and show that both variants are NP-hard. We explore their connections with existing combinatorial problems and give novel algorithms for their solution. To the best of our knowledge, this is the first work to consider the T EAM F ORMATION problem in the presence of a social network of individuals. Experiments on the DBLP dataset show that our framework works well in practice and gives useful and intuitive results." ] }
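The enmeshed-query model above can be illustrated with a toy matcher. This is a deliberate simplification of the paper's heuristics: queries here carry only an activity, a time slot, and a desired group size, and a match is any set of queries agreeing on all three whose cardinality equals the requested size; the attribute names are assumptions for the example.

```python
# Toy matcher for enmeshed-style queries: bucket queries by
# (activity, slot, desired size), then greedily emit full groups.
from collections import defaultdict

def match_queries(queries):
    """queries: list of (qid, activity, slot, group_size) tuples.
    Returns a list of matched groups, each a tuple of query ids."""
    buckets = defaultdict(list)
    for qid, activity, slot, size in queries:
        buckets[(activity, slot, size)].append(qid)
    groups = []
    for (activity, slot, size), qids in buckets.items():
        # Greedily carve out full groups; leftovers keep waiting,
        # mirroring the continuous nature of enmeshed queries.
        while len(qids) >= size:
            groups.append(tuple(qids[:size]))
            qids = qids[size:]
    return groups
```

The real problem is harder precisely because attributes need only be compatible rather than equal (e.g. overlapping time windows, flexible group sizes), which is what makes even the offline version NP-hard.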
1205.0144
2952587604
The significant progress in constructing graph spanners that are sparse (small number of edges) or light (low total weight) has skipped spanners that are everywhere-sparse (small maximum degree). This disparity is in line with other network design problems, where the maximum-degree objective has been a notorious technical challenge. Our main result is for the Lowest Degree 2-Spanner (LD2S) problem, where the goal is to compute a 2-spanner of an input graph so as to minimize the maximum degree. We design a polynomial-time algorithm achieving approximation factor @math , where @math is the maximum degree of the input graph. The previous @math -approximation was proved nearly two decades ago by Kortsarz and Peleg [SODA 1994, SICOMP 1998]. Our main conceptual contribution is to establish a formal connection between LD2S and a variant of the Densest k-Subgraph (DkS) problem. Specifically, we design for both problems strong relaxations based on the Sherali-Adams linear programming (LP) hierarchy, and show that "faithful" randomized rounding of the DkS-variant can be used to round LD2S solutions. Our notion of faithfulness intuitively means that all vertices and edges are chosen with probability proportional to their LP value, but the precise formulation is more subtle. Unfortunately, the best algorithms known for DkS use the Lovász-Schrijver LP hierarchy in a non-faithful way [Bhaskara, Charikar, Chlamtac, Feige, and Vijayaraghavan, STOC 2010]. Our main technical contribution is to overcome this shortcoming, while still matching the gap that arises in random graphs by planting a subgraph with same log-density.
Graph spanners, first introduced by Peleg and Schäffer @cite_2 and Peleg and Ullman @cite_23 , have been studied extensively, with applications ranging from routing in networks (e.g. @cite_41 @cite_10 ) to solving linear systems (e.g. @cite_13 @cite_15 ). The foundational result on spanners is due to Althöfer, Das, Dobkin, Joseph and Soares @cite_30 , who gave an algorithm that, given a graph and an integer @math , constructs a @math -spanner with @math edges. Unfortunately this result obviously does not give anything nontrivial for @math -spanners, and indeed it is easy to see that there exist graphs for which every @math -spanner has @math edges, thus nontrivial absolute bounds on the size of a @math -spanner are not possible. Kortsarz and Peleg @cite_38 were the first to consider relative bounds for spanners. They gave a greedy @math -approximation algorithm for the problem of finding a @math -spanner with the minimum number of edges. This was then extended to variants of @math -spanners, e.g. @cite_20 and @cite_42 @cite_26 (for which only @math is known). All of these bounds are basically optimal, assuming @math , due to a hardness result of Kortsarz @cite_39 .
{ "cite_N": [ "@cite_13", "@cite_30", "@cite_38", "@cite_26", "@cite_41", "@cite_42", "@cite_39", "@cite_23", "@cite_2", "@cite_15", "@cite_10", "@cite_20" ], "mid": [ "2045107949", "2002041206", "2057046770", "2952422587", "1996879666", "", "1997369416", "2018963243", "", "", "2045446569", "1534368076" ], "abstract": [ "We present algorithms for solving symmetric, diagonally-dominant linear systems to accuracy ε in time linear in their number of non-zeros and log(κ_f(A)/ε), where κ_f(A) is the condition number of the matrix defining the linear system. Our algorithm applies the preconditioned Chebyshev iteration with preconditioners designed using nearly-linear time algorithms for graph sparsification and graph partitioning.", "Given a graph G, a subgraph G' is a t-spanner of G if, for every u, v ∈ V, the distance from u to v in G' is at most t times longer than the distance in G. In this paper we give a simple algorithm for constructing sparse spanners for arbitrary weighted graphs. We then apply this algorithm to obtain specific results for planar graphs and Euclidean graphs. We discuss the optimality of our results and present several nearly matching lower bounds.", "A k-spanner of a connected graph G = (V, E) is a subgraph G' consisting of all the vertices of V and a subset of the edges, with the additional property that the distance between any two vertices in G' is larger than the distance in G by no more than a factor of k. This note concerns the problem of finding the sparsest 2-spanner in a given graph and presents an approximation algorithm for this problem with approximation ratio log(|E|/|V|).", "A natural requirement of many distributed structures is fault-tolerance: after some failures, whatever remains from the structure should still be effective for whatever remains from the network. 
In this paper we examine spanners of general graphs that are tolerant to vertex failures, and significantly improve their dependence on the number of faults @math , for all stretch bounds. For stretch @math we design a simple transformation that converts every @math -spanner construction with at most @math edges into an @math -fault-tolerant @math -spanner construction with at most @math edges. Applying this to standard greedy spanner constructions gives @math -fault tolerant @math -spanners with @math edges. The previous construction by Chechik, Langberg, Peleg, and Roddity [STOC 2009] depends similarly on @math but exponentially on @math (approximately like @math ). For the case @math and unit-length edges, an @math -approximation algorithm is known from recent work of Dinitz and Krauthgamer [arXiv 2010], where several spanner results are obtained using a common approach of rounding a natural flow-based linear programming relaxation. Here we use a different (stronger) LP relaxation and improve the approximation ratio to @math , which is, notably, independent of the number of faults @math . We further strengthen this bound in terms of the maximum degree by using the Local Lemma. Finally, we show that most of our constructions are inherently local by designing equivalent distributed algorithms in the LOCAL model of distributed computation.", "This paper deals with the problem of maintaining a distributed directory server, that enables us to keep track of mobile users in a distributed network. The paper introduces the graph-theoretic concept of regional matching , and demonstrates how finding a regional matching with certain parameters enables efficient tracking. 
The communication overhead of our tracking mechanism is within a polylogarithmic factor of the lower bound.", "", "A k-spanner of a connected graph G=(V,E) is a subgraph G' consisting of all the vertices of V and a subset of the edges, with the additional property that the distance between any two vertices in G' is larger than the distance in G by no more than a factor of k. This paper concerns the hardness of finding spanners with a number of edges close to the optimum. It is proved that for every fixed k, approximating the spanner problem is at least as hard as approximating the set-cover problem.", "The synchronizer is a simulation methodology introduced by Awerbuch [J. Assoc. Comput. Mach., 32 (1985), pp. 804–823] for simulating a synchronous network by an asynchronous one, thus enabling the execution of a synchronous algorithm on an asynchronous network. In this paper a novel technique for constructing network synchronizers is presented. This technique is developed from some basic relationships between synchronizers and the structure of a t-spanning subgraph over the network. As a special result, a synchronizer for the hypercube with optimal time and communication complexities is obtained.", "", "", "Let G = (V,E) be an undirected weighted graph with |V| = n and |E| = m. Let k ≥ 1 be an integer. We show that G = (V,E) can be preprocessed in O(kmn^{1/k}) expected time, constructing a data structure of size O(kn^{1+1/k}), such that any subsequent distance query can be answered, approximately, in O(k) time. The approximate distance returned is of stretch at most 2k−1, that is, the quotient obtained by dividing the estimated distance by the actual distance lies between 1 and 2k−1. A 1963 girth conjecture of Erdős implies that Ω(n^{1+1/k}) space is needed in the worst case for any real stretch strictly smaller than 2k+1. The space requirement of our algorithm is, therefore, essentially optimal. 
The most impressive feature of our data structure is its constant query time, hence the name \"oracle\". Previously, data structures that used only O(n^{1+1/k}) space had a query time of Ω(n^{1/k}). Our algorithms are extremely simple and easy to implement efficiently. They also provide faster constructions of sparse spanners of weighted graphs, and improved tree covers and distance labelings of weighted or unweighted graphs.", "This paper studies the client-server version of the 2-spanner problem. This is a natural generalization of the sparse 2-spanner problem that turns out to be useful for certain network design problems. The paper presents a logarithmic ratio approximation algorithm for this problem. The algorithm and the analysis are extended to the directed client-server 2-spanner problem and to the client-server augmentation 2-spanner problem, establishing a logarithmic approximation ratio for these problems too. The paper also studies the bounded-diameter network design (BDND) problem, and establishes a logarithmic approximation ratio for the problem, using the tools developed for the client-server 2-spanner problem." ] }
1204.6691
1500216327
Cloud computing is revolutionizing the ICT landscape by providing scalable and efficient computing resources on demand. The ICT industry, and data centers in particular, is responsible for considerable amounts of CO2 emissions and will very soon face legislative restrictions, such as the Kyoto protocol, defining caps at different organizational levels (country, industry branch, etc.). Much has been done around energy-efficient data centers, yet there is very little work on defining flexible models that consider CO2. In this paper we present a first attempt at modeling data centers in compliance with the Kyoto protocol. We discuss a novel approach for trading credits for emission reductions across data centers to comply with their constraints. CO2 caps can be integrated with Service Level Agreements and juxtaposed to other computing commodities (e.g. computational power, storage), setting a foundation for implementing next-generation schedulers and pricing models that support Kyoto-compliant CO2 trading schemes.
Energy-efficient scheduling solutions for cloud computing already exist, such as @cite_15 @cite_6 , which try to minimize energy consumption, but they lack a strict quantitative model, similar to PUE, that would be convenient as a legislative control measure for expressing exactly how much they alter emission levels. From the management perspective, these methods work in a best-effort manner, attempting first and foremost to satisfy SLA constraints.
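To make the cap-trading idea from the abstract concrete, here is a deliberately simplified sketch (entirely hypothetical: the names, the fixed credit price, and the greedy matching are ours, not the paper's model): data centers that stay under their CO2 cap sell surplus allowances to those that exceed it.

```python
def settle_co2_trades(centers, price_per_tonne=25.0):
    """Toy cap-and-trade settlement. `centers` maps a data-center name to
    (emissions_t, cap_t) in tonnes of CO2; centers under cap sell surplus
    allowances to centers over cap, greedily, at a fixed price per tonne."""
    surplus = {n: cap - e for n, (e, cap) in centers.items() if e < cap}
    deficit = {n: e - cap for n, (e, cap) in centers.items() if e > cap}
    trades = []  # (seller, buyer, tonnes, cost)
    for buyer in sorted(deficit):
        need = deficit[buyer]
        for seller in sorted(surplus):
            if need <= 0:
                break
            qty = min(need, surplus[seller])
            if qty <= 0:
                continue
            surplus[seller] -= qty
            need -= qty
            trades.append((seller, buyer, qty, qty * price_per_tonne))
    return trades
```

For instance, a center emitting 120 t against a 100 t cap would buy 20 t of allowances from peers that stayed under their caps; an SLA-aware scheduler could treat such allowances as one more commodity alongside CPU and storage.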
{ "cite_N": [ "@cite_15", "@cite_6" ], "mid": [ "48417026", "1493613888" ], "abstract": [ "The emergence of Cloud Computing raises the question of dynamically allocating resources of physical (PM) and virtual machines (VM) in an on-demand and autonomic way. Yet, using Cloud Computing infrastructures efficiently requires fulfilling three partially contradicting goals: first, achieving low violation rates of Service Level Agreements (SLA) that define non-functional goals between the Cloud provider and the customer; second, achieving high resource utilization; and third achieving the first two issues by as few time- and energy-consuming reallocation actions as possible. To achieve these goals we propose a novel approach with escalation levels to divide all possible actions into five levels. These levels range from changing the configuration of VMs over migrating them to other PMs to outsourcing applications to other Cloud providers. In this paper we focus on changing the resource configuration of VMs in terms of storage, memory, CPU power and bandwidth, and propose a knowledge management approach using rules with threat thresholds to tackle this problem. Simulation reveals major improvements as compared to recent related work considering SLA violations, resource utilization and action efficiency, as well as time performance.", "With the emergence of Cloud Computing resources of physical machines have to be allocated to virtual machines (VMs) in an on-demand way. However, the efficient allocation of resources like memory, storage or bandwidth to a VM is not a trivial task. On the one hand, the Service Level Agreement (SLA) that defines QoS goals for arbitrary parameters between the Cloud provider and the customer should not be violated. On the other hand, the Cloud providers aim to maximize their profit, where optimizing resource usage is an important part. 
In this paper we develop a simulation engine that mimics the control cycle of an autonomic manager to evaluate different knowledge management (KM) techniques feasible for efficient resource management and SLA attainment. We especially focus on the use of Case Based Reasoning (CBR) for KM and decision-making. We discuss its suitability for efficiently governing on-demand resource allocation in Cloud infrastructures by evaluating it with the simulation engine." ] }