Dataset schema:
aid: string (9–15 chars)
mid: string (7–10 chars)
abstract: string (78–2.56k chars)
related_work: string (92–1.77k chars)
ref_abstract: dict
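The schema above can be illustrated with a minimal sketch of one record in plain Python. This is a hypothetical reconstruction, not loader code: the field values are copied from the first record below, the `dict` layout of `ref_abstract` (parallel lists keyed by `"cite_N"`, `"mid"`, `"abstract"`) follows the records in this file, and the reading of `mid` as a Microsoft Academic Graph id is an assumption.

```python
# Hypothetical sketch of one dataset record, matching the schema above.
# Values are abbreviated copies of the first record in this file.
row = {
    "aid": "1311.4778",    # arXiv identifier (string, 9-15 chars)
    "mid": "2952908729",   # paper id, likely Microsoft Academic Graph (assumption)
    "abstract": "We study a geometric representation problem ...",
    "related_work": "Hierarchically clustered document collections ...",
    "ref_abstract": {
        # Three parallel lists: entry i of each describes the same cited paper.
        "cite_N": ["@cite_18", "@cite_4"],    # citation markers used in related_work
        "mid": ["2098329200", "2156985538"],  # ids of the cited papers
        "abstract": ["...", "..."],           # abstracts of the cited papers
    },
}

# The parallel-list invariant that the records below all satisfy:
assert (
    len(row["ref_abstract"]["cite_N"])
    == len(row["ref_abstract"]["mid"])
    == len(row["ref_abstract"]["abstract"])
)
```

Note that cited papers can have an empty `mid` and abstract (as in the 1311.4655 record below), so consumers should tolerate empty strings in the parallel lists.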
1311.4778
2952908729
We study a geometric representation problem, where we are given a set @math of axis-aligned rectangles with fixed dimensions and a graph with vertex set @math . The task is to place the rectangles without overlap such that two rectangles touch if and only if the graph contains an edge between them. We call this problem Contact Representation of Word Networks (CROWN). It formalizes the geometric problem behind drawing word clouds in which semantically related words are close to each other. Here, we represent words by rectangles and semantic relationships by edges. We show that CROWN is strongly NP-hard even when restricted to trees and weakly NP-hard when restricted to stars. We consider the optimization problem Max-CROWN, where each adjacency induces a certain profit and the task is to maximize the sum of the profits. For this problem, we present constant-factor approximations for several graph classes, namely stars, trees, planar graphs, and graphs of bounded degree. Finally, we evaluate the algorithms experimentally and show that our best method improves upon the best existing heuristic by 45%.
Hierarchically clustered document collections are visualized with self-organizing maps @cite_4 and Voronoi treemaps @cite_3 . The early word-cloud approaches did not explicitly use semantic information, such as word relatedness, in placing the words in the cloud. More recent approaches attempt to do so, as in ManiWordle @cite_13 and in parallel tag clouds @cite_0 . The most relevant approaches rely on force-directed graph visualization methods @cite_8 and a seam-carving image processing method together with a force-directed heuristic @cite_12 . The semantics-preserving word cloud problem is related to classic graph layout problems, where the goal is to draw graphs so that vertex labels are readable and Euclidean distances between pairs of vertices are proportional to the underlying graph distance between them. Typically, however, vertices are treated as points and label overlap removal is a post-processing step @cite_6 @cite_18 .
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_8", "@cite_3", "@cite_6", "@cite_0", "@cite_13", "@cite_12" ], "mid": [ "2098329200", "2156985538", "1970773000", "1990402145", "1584926591", "2075038621", "2141558464", "2122580360" ], "abstract": [ "When drawing graphs whose nodes contain text or graphics, the nontrivial node sizes must be taken into account, either as part of the initial layout or as a post-processing step. The core problem in avoiding or removing overlaps is to retain the structural information inherent in a layout while minimizing the additional area required. This paper presents a new node overlap removal algorithm that does well at retaining a graph’s shape while using little additional area and time. As part of the analysis, we consider and evaluate two measures of dissimilarity for two layouts of the same graph.", "Powerful methods for interactive exploration and search from collections of free-form textual documents are needed to manage the ever-increasing flood of digital information. In this article we present a method, WEBSOM, for automatic organization of full-text document collections using the self-organizing map (SOM) algorithm. The document collection is ordered onto a map in an unsupervised manner utilizing statistical information of short word contexts. The resulting ordered map where similar documents lie near each other thus presents a general view of the document space. With the aid of a suitable (WWW-based) interface, documents in interesting areas of the map can be browsed. The browsing can also be interactively extended to related topics, which appear in nearby areas on the map. Along with the method we present a case study of its use.", "In this paper, we introduce a visualization method that couples a trend chart with word clouds to illustrate temporal content evolutions in a set of documents. Specifically, we use a trend chart to encode the overall semantic evolution of document content over time. 
In our work, semantic evolution of a document collection is modeled by varied significance of document content, represented by a set of representative keywords, at different time points. At each time point, we also use a word cloud to depict the representative keywords. Since the words in a word cloud may vary one from another over time (e.g., words with increased importance), we use geometry meshes and an adaptive force-directed model to lay out word clouds to highlight the word differences between any two subsequent word clouds. Our method also ensures semantic coherence and spatial stability of word clouds over time. Our work is embodied in an interactive visual analysis system that helps users to perform text analysis and derive insights from a large collection of documents. Our preliminary evaluation demonstrates the usefulness and usability of our work.", "We propose a method to highlight query hits in hierarchically clustered collections of interrelated items such as digital libraries or knowledge bases. The method is based on the idea that organizing search results similarly to their arrangement on a fixed reference map facilitates orientation and assessment by preserving a user's mental map. Here, the reference map is built from an MDS layout of the items in a Voronoi treemap representing their hierarchical clustering, and we use techniques from dynamic graph layout to align query results with the map. The approach is illustrated on an archive of newspaper articles.", "The problem of node overlap removal is to adjust the layout generated by typical graph drawing methods so that nodes of non-zero width and height do not overlap, yet are as close as possible to their original positions. We give an O(n log n) algorithm for achieving this assuming that the number of nodes overlapping any single node is bounded by some constant. 
This method has two parts, a constraint generation algorithm which generates a linear number of “separation” constraints and an algorithm for finding a solution to these constraints “close” to the original node placement values. We also extend our constraint solving algorithm to give an active set based algorithm which is guaranteed to find the optimal solution but which has considerably worse theoretical complexity. We compare our method with convex quadratic optimization and force scan approaches and find that it is faster than either, gives results of better quality than force scan methods and similar quality to the quadratic optimisation approach.", "Do court cases differ from place to place? What kind of picture do we get by looking at a country's collection of law cases? We introduce Parallel Tag Clouds: a new way to visualize differences amongst facets of very large metadata-rich text corpora. We have pointed Parallel Tag Clouds at a collection of over 600,000 US Circuit Court decisions spanning a period of 50 years and have discovered regional as well as linguistic differences between courts. The visualization technique combines graphical elements from parallel coordinates and traditional tag clouds to provide rich overviews of a document collection while acting as an entry point for exploration of individual texts. We augment basic parallel tag clouds with a details-in-context display and an option to visualize changes over a second facet of the data, such as time. We also address text mining challenges such as selecting the best words to visualize, and how to do so in reasonable time periods to maintain interactivity.", "Among the multifarious tag-clouding techniques, Wordle stands out to the community by providing an aesthetic layout, eliciting the emergence of the participatory culture and usage of tag-clouding in the artistic creations. 
In this paper, we introduce ManiWordle, a Wordle-based visualization tool that revamps interactions with the layout by supporting custom manipulations. ManiWordle allows people to manipulate typography, color, and composition not only for the layout as a whole, but also for the individual words, enabling them to have better control over the layout result. We first describe our design rationale along with the interaction techniques for tweaking the layout. We then present the results both from the preliminary usability study and from the comparative study between ManiWordle and Wordle. The results suggest that ManiWordle provides higher user satisfaction and an efficient method of creating the desired \"art work,\" harnessing the power behind the ever-increasing popularity of Wordle.", "Word clouds are proliferating on the Internet and have received much attention in visual analytics. Although word clouds can help users understand the major content of a document collection quickly, their ability to visually compare documents is limited. This paper introduces a new method to create semantic-preserving word clouds by leveraging tailored seam carving, a well-established content-aware image resizing operator. The method can optimize a word cloud layout by removing a left-to-right or top-to-bottom seam iteratively and gracefully from the layout. Each seam is a connected path of low energy regions determined by a Gaussian-based energy function. With seam carving, we can pack the word cloud compactly and effectively, while preserving its overall semantic structure. Furthermore, we design a set of interactive visualization techniques for the created word clouds to facilitate visual text analysis and comparison. Case studies are conducted to demonstrate the effectiveness and usefulness of our techniques." ] }
1311.4778
2952908729
We study a geometric representation problem, where we are given a set @math of axis-aligned rectangles with fixed dimensions and a graph with vertex set @math . The task is to place the rectangles without overlap such that two rectangles touch if and only if the graph contains an edge between them. We call this problem Contact Representation of Word Networks (CROWN). It formalizes the geometric problem behind drawing word clouds in which semantically related words are close to each other. Here, we represent words by rectangles and semantic relationships by edges. We show that CROWN is strongly NP-hard even when restricted to trees and weakly NP-hard when restricted to stars. We consider the optimization problem Max-CROWN, where each adjacency induces a certain profit and the task is to maximize the sum of the profits. For this problem, we present constant-factor approximations for several graph classes, namely stars, trees, planar graphs, and graphs of bounded degree. Finally, we evaluate the algorithms experimentally and show that our best method improves upon the best existing heuristic by 45%.
In rectangle representations of graphs, vertices are axis-aligned rectangles with non-intersecting interiors and edges correspond to pairs of rectangles whose boundaries share a segment of non-zero length. Every graph that can be represented this way is planar and every triangle in such a graph is a facial triangle; these two conditions are also sufficient to guarantee a rectangle representation @cite_2 @cite_16 @cite_15 @cite_14 . In a recent survey, Felsner @cite_20 reviews many rectangulation variants, including squarings. Algorithms for area-preserving rectangular cartograms are also related @cite_7 . Area-universal rectangular representations, where vertex weights are represented by area, have been characterized @cite_21 , and edge-universal representations, where edge weights are represented by the length of contacts, have been studied @cite_22 . Unlike cartograms, in our setting there is no inherent geography, and hence words can be positioned anywhere. Moreover, each word has fixed dimensions enforced by its frequency in the input text, rather than just fixed area.
{ "cite_N": [ "@cite_14", "@cite_22", "@cite_7", "@cite_21", "@cite_2", "@cite_15", "@cite_16", "@cite_20" ], "mid": [ "2099777423", "2115023660", "2322867406", "2082251733", "1971244706", "1973624728", "2082205964", "24725585" ], "abstract": [ "This article focuses on a combinatorial structure specific to triangulated plane graphs with quadrangular outer face and no separating triangle, which are called irreducible triangulations. The structure has been introduced by Xin He under the name of regular edge-labelling and consists of two bipolar orientations that are transversal. For this reason, the terminology used here is that of transversal structures. The main results obtained in the article are a bijection between irreducible triangulations and ternary trees, and a straight-line drawing algorithm for irreducible triangulations. For a random irreducible triangulation with n vertices, the grid size of the drawing is asymptotically with high probability 11n/27 × 11n/27 up to an additive error of O(n). In contrast, the best previously known algorithm for these triangulations only guarantees a grid size (⌈n/2⌉-1) × ⌈n/2⌉.", "We study contact representations of edge-weighted planar graphs, where vertices are rectangles or rectilinear polygons and edges are polygon contacts whose lengths represent the edge weights. We show that for any given edge-weighted planar graph whose outer face is a quadrangle, that is internally triangulated and that has no separating triangles we can construct in linear time an edge-proportional rectangular dual if one exists and report failure otherwise. For a given combinatorial structure of the contact representation and edge weights interpreted as lower bounds on the contact lengths, a corresponding contact representation that minimizes the size of the enclosing rectangle can be found in linear time. 
If the combinatorial structure is not fixed, we prove NP-hardness of deciding whether a contact representation with bounded contact lengths exists. Finally, we give a complete characterization of the rectilinear polygon complexity required for representing biconnected internally triangulated graphs: For outerplanar graphs complexity 8 is sufficient and necessary, and for graphs with two adjacent or multiple non-adjacent internal vertices the complexity is unbounded.", "The idea of the statistical cartogram occurred to the author when he had occasion to prepare maps of the United States showing the distribution of various economic units, such as steel factories, textile mills, power plants, banks, etc. These maps were far too crowded in the northeast to be useful, while elsewhere, for the most part, they were relatively empty. If a way could be found to increase the scale of the northeastern region and reduce that of the west, distribution could be shown more clearly. Simple distortion of the map would be misleading, but, if we go a step farther, discard altogether the outlines of the country, and give each region a rectangular form of size proportional to the value represented, we arrive at the rectangular statistical cartogram. For purposes of comparison it is essential that a definite system of construction should be followed and identical arrangement should be used whatever values are represented. The system here used starts always with the larger divisions and by \"proportionate halving\" arrives at the smaller ones. It should be emphasized that the statistical cartogram is not a map. Although it has roughly the proportions of the country and retains as far as possible the relative locations of the various regions, the cartogram is purely a geometrical design to visualize certain statistical facts and to work out certain problems of distribution. Examples of these cartograms are given in the accompanying figures. 
The division into regions follows the usage of the United States Census Bureau, because only from this source are data available. If natural geographic regions could be used instead, the cartograms would be still more instructive.", "A rectangular layout is a partition of a rectangle into a finite set of interior-disjoint rectangles. These layouts are used as rectangular cartograms in cartography, as floorplans in building architecture and VLSI design, and as graph drawings. Often areas are associated with the rectangles of a rectangular layout and it is desirable for one rectangular layout to represent several area assignments. A layout is area-universal if any assignment of areas to rectangles can be realized by a combinatorially equivalent rectangular layout. We identify a simple necessary and sufficient condition for a rectangular layout to be area-universal: a rectangular layout is area-universal if and only if it is one-sided. We also investigate similar questions for perimeter assignments. The adjacency requirements for the rectangles of a rectangular layout can be specified in various ways, most commonly via the dual graph of the layout. We show how to find an area-universal layout for a given set of adjacency requirements whe...", "We prove that every planar graph is the intersection graph of a collection of three-dimensional boxes, with intersections occurring only in the boundaries of the boxes. Furthermore, we characterize the graphs that have such representations (called strict representations) in the plane. These are precisely the proper subgraphs of 4-connected planar triangulations, which we characterize by forbidden sub-graphs. Finally, we strengthen a result of E. R. Scheinerman (“Intersection Classes and Multiple Intersection Parameters”, Ph. D. 
thesis, Princeton Univ., 1984) to show that every planar graph has a strict representation using at most two rectangles per vertex.", "Contact graphs of isothetic rectangles unify many concepts from applications including VLSI and architectural design, computational geometry, and GIS. Minimizing the area of their corresponding rectangular layouts is a key problem. We study the area-optimization problem and show that it is NP-hard to find a minimum-area rectangular layout of a given contact graph. We present O(n)-time algorithms that construct O(n^2)-area rectangular layouts for general contact graphs and O(n log n)-area rectangular layouts for trees. (For trees, this is an O(log n)-approximation algorithm.) We also present an infinite family of graphs (respectively, trees) that require Ω(n^2) (respectively, Ω(n log n)) area. We derive these results by presenting a new characterization of graphs that admit rectangular layouts, using the related concept of rectangular duals. A corollary to our results relates the class of graphs that admit rectangular layouts to rectangle-of-influence drawings.", "We propose a linear-time algorithm for generating a planar layout of a planar graph. Each vertex is represented by a horizontal line segment and each edge by a vertical line segment. All endpoints of the segments have integer coordinates. The total space occupied by the layout is at most n by at most 2n-4. Our algorithm, a variant of one by Otten and van Wijk, generally produces a more compact layout than theirs and allows the dual of the graph to be laid out in an interlocking way. The algorithm is based on the concept of a bipolar orientation. We discuss relationships among the bipolar orientations of a planar graph.", "In the first part of this survey, we consider planar graphs that can be represented by dissections of a rectangle into rectangles. In rectangular drawings, the corners of the rectangles represent the vertices. 
The graph obtained by taking the rectangles as vertices and contacts as edges is the rectangular dual. In visibility graphs and segment contact graphs, the vertices correspond to horizontal or to horizontal and vertical segments of the dissection. Special orientations of graphs turn out to be helpful when dealing with characterization and representation questions. Therefore, we look at orientations with prescribed degrees, bipolar orientations, separating decompositions, and transversal structures." ] }
1311.4655
2951573681
This paper develops new theory and algorithms for 1D general mode decompositions. First, we introduce the 1D synchrosqueezed wave packet transform and prove that it is able to estimate the instantaneous information of well-separated modes from their superposition accurately. The synchrosqueezed wave packet transform has a better resolution than the synchrosqueezed wavelet transform in the time-frequency domain for separating high-frequency modes. Second, we present a new approach based on diffeomorphisms for the spectral analysis of general shape functions. These two methods lead to a framework for general mode decompositions under a weak well-separation condition and a well-different condition. Numerical examples of synthetic and real data are provided to demonstrate the fruitful applications of these methods.
There are three other research lines addressing mode decomposition problems of the form . The first is the empirical mode decomposition (EMD) method, initiated in @cite_20 and refined in @cite_17 . To improve the noise resistance of the EMD methods, several variants have been proposed in @cite_10 @cite_8 . It has been shown in @cite_19 that, in some cases, the EMD methods can decompose signals into more general components of the form instead of the form (see Figure left). In this sense, the EMD methods are able to reflect the nonlinear evolution of physically meaningful oscillations using general shape functions. However, this advantage is neither stable nor consistent, as illustrated in Figure right. More effort is needed to understand the EMD methods on general mode decompositions.
{ "cite_N": [ "@cite_8", "@cite_19", "@cite_10", "@cite_20", "@cite_17" ], "mid": [ "2120390927", "2152722840", "2160724632", "2007221293", "2028497691" ], "abstract": [ "A new Ensemble Empirical Mode Decomposition (EEMD) is presented. This new approach consists of sifting an ensemble of white noise-added signal (data) and treats the mean as the final true result. Finite, not infinitesimal, amplitude white noise is necessary to force the ensemble to exhaust all possible solutions in the sifting process, thus making the different scale signals to collate in the proper intrinsic mode functions (IMF) dictated by the dyadic filter banks. As EEMD is a time–space analysis method, the added white noise is averaged out with sufficient number of trials; the only persistent part that survives the averaging process is the component of the signal (original data), which is then treated as the true and more physical meaningful answer. The effect of the added white noise is to provide a uniform reference frame in the time–frequency space; therefore, the added noise collates the portion of the signal of comparable scale in one IMF. With this ensemble mean, one can separate scales naturall...", "In this paper, we present some general considerations about data analysis from the perspective of a physical scientist and advocate the physical, instead of mathematical, analysis of data. These considerations have been accompanying our development of novel adaptive, local analysis methods, especially the empirical mode decomposition and its major variation, the ensemble empirical mode decomposition, and its preliminary mathematical explanations. A particular emphasis will be on the advantages and disadvantages of mathematical and physical constraints associated with various analysis methods. 
We argue that, using data analysis in a given temporal domain of observation as an example, the mathematical constraints imposed on data may lead to difficulties in understanding the physics behind the data. With such difficulties in mind, we promote adaptive, local analysis methods, which satisfy the fundamental physical principle that the consequent evolution of a system cannot change the past evolution of the system. We also argue, using the ensemble empirical mode decomposition as an example, that noise can be helpful to extract physically meaningful signals hidden in noisy data.", "In this paper, we propose a variant of the Empirical Mode Decomposition method to decompose multiscale data into their intrinsic mode functions. Under the assumption that the multiscale data satisfy a certain scale separation property, we show that the proposed method can extract the intrinsic mode functions accurately and uniquely.", "A new method for analysing nonlinear and non-stationary data has been developed. The key part of the method is the ‘empirical mode decomposition’ method with which any complicated data set can be decomposed into a finite and often small number of ‘intrinsic mode functions’ that admit well-behaved Hilbert transforms. This decomposition method is adaptive, and, therefore, highly efficient. Since the decomposition is based on the local characteristic time scale of the data, it is applicable to nonlinear and non-stationary processes. With the Hilbert transform, the ‘intrinsic mode functions’ yield instantaneous frequencies as functions of time that give sharp identifications of imbedded structures. The final presentation of the results is an energy-frequency-time distribution, designated as the Hilbert spectrum. 
In this method, the main conceptual innovations are the introduction of ‘intrinsic mode functions’ based on local properties of the signal, which make the instantaneous frequency meaningful; and the introduction of the instantaneous frequencies for complicated data sets, which eliminate the need for spurious harmonics to represent nonlinear and non-stationary signals. Examples from the numerical results of the classical nonlinear equation systems and data representing natural phenomena are given to demonstrate the power of this new method. Classical nonlinear system data are especially interesting, for they serve to illustrate the roles played by the nonlinear and non-stationary effects in the energy-frequency-time distribution.", "Instantaneous frequency (IF) is necessary for understanding the detailed mechanisms for nonlinear and nonstationary processes. Historically, IF was computed from analytic signal (AS) through the Hilbert transform. This paper offers an overview of the difficulties involved in using AS, and two new methods to overcome the difficulties for computing IF. The first approach is to compute the quadrature (defined here as a simple 90° shift of phase angle) directly. The second approach is designated as the normalized Hilbert transform (NHT), which consists of applying the Hilbert transform to the empirically determined FM signals. Additionally, we have also introduced alternative methods to compute local frequency, the generalized zero-crossing (GZC), and the teager energy operator (TEO) methods. Through careful comparisons, we found that the NHT and direct quadrature gave the best overall performance. While the TEO method is the most localized, it is limited to data from linear processes, the GZC method is the m..." ] }
1311.4655
2951573681
This paper develops new theory and algorithms for 1D general mode decompositions. First, we introduce the 1D synchrosqueezed wave packet transform and prove that it is able to estimate the instantaneous information of well-separated modes from their superposition accurately. The synchrosqueezed wave packet transform has a better resolution than the synchrosqueezed wavelet transform in the time-frequency domain for separating high-frequency modes. Second, we present a new approach based on diffeomorphisms for the spectral analysis of general shape functions. These two methods lead to a framework for general mode decompositions under a weak well-separation condition and a well-different condition. Numerical examples of synthetic and real data are provided to demonstrate the fruitful applications of these methods.
Hou and Shi proposed a nonlinear optimization scheme that decomposes signals by extracting the components one by one, starting from the most oscillatory. The first model in @cite_32 is based on nonlinear @math minimization, which is computationally costly. To address this issue, the second paper @cite_15 proposed a nonlinear matching pursuit model based on sparse representations of signals in a data-driven time-frequency dictionary, which admits a fast algorithm for periodic data. Under some sparsity assumptions, a convergence analysis of the latter scheme has recently been given in @cite_29 .
{ "cite_N": [ "@cite_15", "@cite_29", "@cite_32" ], "mid": [ "", "2156455340", "2110983594" ], "abstract": [ "", "In a recent paper, Hou and Shi introduced a new adaptive data analysis method to analyze nonlinear and non-stationary data. The main idea is to look for the sparsest representation of multiscale data within the largest possible dictionary consisting of intrinsic mode functions of the form @math , where @math , @math consists of the functions smoother than @math and @math . This problem was formulated as a nonlinear @math optimization problem and an iterative nonlinear matching pursuit method was proposed to solve this nonlinear optimization problem. In this paper, we prove the convergence of this nonlinear matching pursuit method under some sparsity assumption on the signal. We consider both well-resolved and sparse sampled signals. In the case without noise, we prove that our method gives exact recovery of the original signal.", "We introduce a new adaptive method for analyzing nonlinear and nonstationary data. This method is inspired by the empirical mode decomposition (EMD) method and the recently developed compressed sensing theory. The main idea is to look for the sparsest representation of multiscale data within the largest possible dictionary consisting of intrinsic mode functions of the form a(t)cos(θ(t)), where a ≥ 0 is assumed to be smoother than cos(θ(t)) and θ is a piecewise smooth increasing function. We formulate this problem as a nonlinear L1 optimization problem. Further, we propose an iterative algorithm to solve this nonlinear optimization problem recursively. We also introduce an adaptive filter method to decompose data with noise. Numerical examples are given to demonstrate the robustness of our method and comparison is made with the EMD method. One advantage of performing such a decomposition is to preserve some intrinsic physical property of the signal, such as trend and instantaneous frequency. 
Our method shares many important properties of the original EMD method. Because our method is based on a solid mathematical formulation, its performance does not depend on numerical parameters such as the number of shifting or stop criterion, which seem to have a major effect on the original EMD method. Our method is also less sensitive to noise perturbation and the end effect compared with the original EMD method." ] }
1311.4655
2951573681
This paper develops new theory and algorithms for 1D general mode decompositions. First, we introduce the 1D synchrosqueezed wave packet transform and prove that it is able to estimate the instantaneous information of well-separated modes from their superposition accurately. The synchrosqueezed wave packet transform has a better resolution than the synchrosqueezed wavelet transform in the time-frequency domain for separating high-frequency modes. Second, we present a new approach based on diffeomorphisms for the spectral analysis of general shape functions. These two methods lead to a framework for general mode decompositions under a weak well-separation condition and a well-different condition. Numerical examples of synthetic and real data are provided to demonstrate the fruitful applications of these methods.
The third method is the empirical wavelet transform recently proposed in @cite_26 @cite_33 by Gilles, Tran and Osher, which empirically builds a wavelet filter bank according to the energy distribution of a given signal in the Fourier domain so as to obtain an adaptive time-frequency representation.
{ "cite_N": [ "@cite_26", "@cite_33" ], "mid": [ "2019900743", "2025016692" ], "abstract": [ "Some recent methods, like the empirical mode decomposition (EMD), propose to decompose a signal accordingly to its contained information. Even though its adaptability seems useful for many applications, the main issue with this approach is its lack of theory. This paper presents a new approach to build adaptive wavelets. The main idea is to extract the different modes of a signal by designing an appropriate wavelet filter bank. This construction leads us to a new wavelet transform, called the empirical wavelet transform. Many experiments are presented showing the usefulness of this method compared to the classic EMD.", "A recently developed approach, called “empirical wavelet transform,” aims to build one-dimensional (1D) adaptive wavelet frames accordingly to the analyzed signal. In this paper, we present several extensions of this approach to two-dimensional (2D) signals (images). We revisit some well-known transforms (tensor wavelets, Littlewood--Paley wavelets, ridgelets, and curvelets) and show that it is possible to build their empirical counterparts. We prove that such constructions lead to different adaptive frames which show some promising properties for image analysis and processing." ] }
1311.5014
2949599870
With the increasing availability of flexible wireless 802.11 devices, the potential exists for users to selfishly manipulate their channel access parameters and gain a performance advantage. Such practices can have a severe negative impact on compliant stations. To enable access points to counteract these selfish behaviours and preserve fairness in wireless networks, in this paper we propose a policing mechanism that drives misbehaving users into compliant operation without requiring any cooperation from clients. This approach is demonstrably effective against a broad class of misbehaviours, soundly based (i.e. provably hard to circumvent), and amenable to practical implementation on existing commodity hardware.
The underlying principle behind our approach is to control the attempt rate of misbehaving clients by censoring the generation of MAC layer acknowledgements (ACKs). ACK skipping has been suggested as a means to allocate bandwidth for traffic prioritisation in a network of well-behaved stations @cite_10 @cite_36 @cite_33 , but to the best of our knowledge has not been implemented to date with real devices as this fundamental operation is handled at the firmware level.
{ "cite_N": [ "@cite_36", "@cite_10", "@cite_33" ], "mid": [ "2163079953", "1590305083", "2126820294" ], "abstract": [ "The EDCA access mechanism of the upcoming 802.11e standard supports legacy DCF stations, but with substantially degraded performance. The reason is that DCF stations typically compete for access with overly small contention windows (CWs). In this letter we propose a new technique that, implemented at the access points (AP's), mitigates the impact of legacy stations on EDCA. The key idea of the technique is that, upon receiving a frame from a legacy station, the AP skips the ACK frame reply with a certain probability. When missing the ACK, the legacy station increases its CW and thus our technique allows us to have some control over the CW's of the legacy stations. We show by means of an example that this technique improves the overall performance of the WLAN.", "In recent years the idea of access network has changed radically, and the diffusion of wireless technologies, and especially IEEE 802.11 compliant devices, has made wireless connections increasingly popular. Despite the great success of the IEEE 802.11 standard, several problems concerning security, power consumption and quality of service of wireless LANs remain partially unsolved. The QoS issue is especially relevant for multimedia communications, generally needing performance guarantees from the network infrastructure to fulfill application requirements. In this paper we analyze the behavior of the Frame Dropping (FD) mechanism, a QoS mechanism for multimedia communications, using simulations, and compare it with that of another well-assessed mechanism: Distributed Fair Scheduling (DFS). Simulation results show that, though DFS achieves a better channel utilization, FD behavior is comparable with that of DFS under all the main performance indices, and it exhibits effective traffic differentiation capabilities.", "Although the EDCA access mechanism of the 802.11e standard supports legacy DCF stations, the presence of DCF stations in the WLAN jeopardizes the provisioning of the service guarantees committed to the EDCA stations. The reason is that DCF stations compete with Contention Windows (CWs) that are predefined and cannot be modified, and as a result, the impact of the DCF stations on the service received by the EDCA stations cannot be controlled. In this paper, we address the problem of providing throughput guarantees to EDCA stations in a WLAN in which EDCA and DCF stations coexist. To this aim, we propose a technique that, implemented at the Access Point (AP), mitigates the impact of DCF stations on EDCA by skipping with a certain probability the Ack reply to a frame from a DCF station. When missing the Ack, the DCF station increases its CW, and thus, our technique allows us to have some control over the CWs of the legacy DCF stations. In our approach, the probability of skipping an Ack frame is dynamically adjusted by means of an adaptive algorithm. This algorithm is based on a widely used controller from classical control theory, namely a Proportional Controller. In order to find an adequate configuration of the controller, we conduct a control-theoretic analysis of the system. Simulation results show that the proposed approach is effective in providing throughput guarantees to EDCA stations in presence of DCF stations." ] }
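A toy model of the ACK-skipping lever described above (not the firmware-level implementation the paper discusses): treating each withheld ACK as a collision makes the client double its contention window, which in turn lowers its attempt rate. All names and parameters below are illustrative.

```python
import random

def mean_backoff(p_skip, n_tx=20000, cw_min=16, cw_max=1024, seed=1):
    """Average backoff counter drawn by a station whose ACKs are withheld
    with probability p_skip: a missing ACK doubles the contention window
    (binary exponential backoff), a received ACK resets it."""
    rng = random.Random(seed)
    cw, total = cw_min, 0
    for _ in range(n_tx):
        total += rng.randrange(cw)       # uniform backoff draw in [0, cw)
        if rng.random() < p_skip:
            cw = min(2 * cw, cw_max)     # missing ACK read as a collision
        else:
            cw = cw_min                  # successful handshake: window resets
    return total / n_tx
```

With p_skip = 0 the window never grows and the mean draw stays near (cw_min - 1)/2; already at p_skip = 0.5 the average backoff, and hence the station's inter-attempt time, grows severalfold.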
1311.5014
2949599870
With the increasing availability of flexible wireless 802.11 devices, the potential exists for users to selfishly manipulate their channel access parameters and gain a performance advantage. Such practices can have a severe negative impact on compliant stations. To enable access points to counteract these selfish behaviours and preserve fairness in wireless networks, in this paper we propose a policing mechanism that drives misbehaving users into compliant operation without requiring any cooperation from clients. This approach is demonstrably effective against a broad class of misbehaviours, soundly based (i.e. provably hard to circumvent), and amenable to practical implementation on existing commodity hardware.
The solution we propose leverages our previous design @cite_20 , but differs in several respects: here we aim to control the transmission attempt rate instead of throughput, thus seeking to equalise stations' air time @cite_22 . By driving the channel access probabilities of all clients to the same value, regardless of the contention parameters they employ, we effectively preserve short-term fairness, and by allowing penalties to be carried forward we also achieve long-term fairness. Finally, we guarantee that the mechanism cannot be gamed by greedy users that detect its operation.
{ "cite_N": [ "@cite_22", "@cite_20" ], "mid": [ "2099243528", "2115323404" ], "abstract": [ "We provide the first rigorous analysis of proportional fairness in 802.11 WLANs. This analysis corrects prior approximate studies. We show that there exists a unique proportional fair rate allocation and completely characterise the allocation in terms of a new airtime quantity, the total air-time.", "In this paper we propose a feedback-based scheme to penalise misbehaving nodes in an 802.11 network based on gains achieved due to their (mis)configuration. Only the access point (AP) is modified and the scheme requires no additional communication or cooperation from other nodes. We achieve this by failing to send MAC-level ACKs with a probability determined by the online feedback scheme. The scheme is designed so that it can incentivise nodes to configure themselves correctly and avoids the need to explicitly detect misbehaving nodes." ] }
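The adaptive adjustment of the skipping probability (a proportional controller, per the related-work abstracts quoted earlier) can be sketched against a simplified analytical backoff model. The plant model (attempt rate = 1/(1 + expected backoff)) and the gain are illustrative assumptions, not the authors' tuned design.

```python
def attempt_rate(p_skip, cw_min=4, cw_max=256):
    """Attempt rate of a station whose window doubles on each withheld ACK
    (probability p_skip) and resets on success: the station sits k doublings
    deep with probability p_skip**k * (1 - p_skip), capped at cw_max."""
    e_backoff, cw, k = 0.0, cw_min, 0
    while cw < cw_max:
        e_backoff += p_skip ** k * (1 - p_skip) * (cw - 1) / 2
        cw, k = cw * 2, k + 1
    e_backoff += p_skip ** k * (cw_max - 1) / 2   # probability mass at the cap
    return 1.0 / (1.0 + e_backoff)

def police(target_rate, k_p=1.0, rounds=400):
    """Proportional controller: raise the ACK-skipping probability while the
    station attempts too often, lower it otherwise."""
    p = 0.0
    for _ in range(rounds):
        error = attempt_rate(p) - target_rate
        p = min(0.99, max(0.0, p + k_p * error))
    return p
```

The controller settles at the skipping probability that pins the client's attempt rate to the target, regardless of the contention parameters (here cw_min) the client chose, which is the equalisation property the text argues for.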
1311.4665
1658042064
A standard way to approximate the distance between two vertices p and q in a graph is to compute a shortest path from p to q that goes through one of k sources, which are well-chosen vertices. Precomputing the distance between each of the k sources to all vertices yields an efficient computation of approximate distances between any two vertices. One standard method for choosing k sources is the so-called Farthest Point Sampling (FPS), which starts with a random vertex as the first source, and iteratively selects the farthest vertex from the already selected sources. In this paper, we analyze the stretch factor F_FPS of approximate geodesics computed using FPS, which is the maximum, over all pairs of distinct vertices, of their approximated distance over their geodesic distance in the graph. We show that F_FPS can be bounded in terms of the minimal value F* of the stretch factor obtained using an optimal placement of k sources as F_FPS ≤ 2 r_e^2 F* + 2 r_e^2 + 8 r_e + 1, where r_e is the length ratio of the longest edge over the shortest edge in the graph. We further show that the factor r_e is not an artefact of the analysis by providing a class of graphs for which F_FPS ≥ (1/2) r_e F*.
Computing geodesics on polyhedral surfaces is a well-studied problem for which we refer to the recent survey by @cite_8 . In this paper, we restrict geodesics to be shortest paths along edges of the underlying graph.
{ "cite_N": [ "@cite_8" ], "mid": [ "1561451935" ], "abstract": [ "This survey gives a brief overview of theoretically and practically relevant algorithms to compute geodesic paths and distances on three-dimensional surfaces. The survey focuses on three-dimensional polyhedral surfaces. The goal of this survey is to identify the most relevant open problems, both theoretical and practical." ] }
1311.4665
1658042064
A standard way to approximate the distance between two vertices p and q in a graph is to compute a shortest path from p to q that goes through one of k sources, which are well-chosen vertices. Precomputing the distance between each of the k sources to all vertices yields an efficient computation of approximate distances between any two vertices. One standard method for choosing k sources is the so-called Farthest Point Sampling (FPS), which starts with a random vertex as the first source, and iteratively selects the farthest vertex from the already selected sources. In this paper, we analyze the stretch factor F_FPS of approximate geodesics computed using FPS, which is the maximum, over all pairs of distinct vertices, of their approximated distance over their geodesic distance in the graph. We show that F_FPS can be bounded in terms of the minimal value F* of the stretch factor obtained using an optimal placement of k sources as F_FPS ≤ 2 r_e^2 F* + 2 r_e^2 + 8 r_e + 1, where r_e is the length ratio of the longest edge over the shortest edge in the graph. We further show that the factor r_e is not an artefact of the analysis by providing a class of graphs for which F_FPS ≥ (1/2) r_e F*.
The FPS algorithm has been used for a variety of isometry-invariant surface processing tasks. The algorithm was first introduced for graph clustering @cite_14 , and later independently developed for 2D images @cite_6 and extended to 3D meshes @cite_9 . Ben Azouz et al. @cite_10 and Giard and Macq @cite_7 used this sampling strategy to efficiently compute approximate geodesic distances, while Elad and Kimmel @cite_13 and Mémoli and Sapiro @cite_19 used FPS in the context of shape recognition. @cite_16 and @cite_4 used FPS to efficiently compute point-to-point correspondences between surfaces. While it has been shown experimentally that FPS is a good heuristic for isometry-invariant surface processing tasks @cite_10 @cite_7 @cite_13 @cite_19 @cite_16 @cite_4 , to the best of our knowledge, the worst-case stretch of the geodesics has not been analyzed theoretically.
{ "cite_N": [ "@cite_13", "@cite_14", "@cite_4", "@cite_7", "@cite_9", "@cite_6", "@cite_19", "@cite_16", "@cite_10" ], "mid": [ "2159361280", "1973264045", "2123631536", "1810598584", "92634156", "2563408008", "2137387796", "2098578926", "2162114735" ], "abstract": [ "Isometric surfaces share the same geometric structure, also known as the \"first fundamental form.\" For example, all possible bendings of a given surface that includes all length preserving deformations without tearing or stretching the surface are considered to be isometric. We present a method to construct a bending invariant signature for such surfaces. This invariant representation is an embedding of the geometric structure of the surface in a small dimensional Euclidean space in which geodesic distances are approximated by Euclidean ones. The bending invariant representation is constructed by first measuring the intergeodesic distances between uniformly distributed points on the surface. Next, a multidimensional scaling technique is applied to extract coordinates in a finite dimensional Euclidean space in which geodesic distances are replaced by Euclidean ones. Applying this transform to various surfaces with similar geodesic structures (first fundamental form) maps them into similar signature surfaces. We thereby translate the problem of matching nonrigid objects in various postures into a simpler problem of matching rigid objects. As an example, we show a simple surface classification method that uses our bending invariant signatures.", "The problem of clustering a set of points so as to minimize the maximum intercluster distance is studied. An O(kn) approximation algorithm, where n is the number of points and k is the number of clusters, that guarantees solutions with an objective function value within two times the optimal solution value is presented. This approximation algorithm succeeds as long as the set of points satisfies the triangular inequality. We also show that our approximation algorithm is best possible, with respect to the approximation bound, if P ≠ NP.", "We present an approach to find dense point-to-point correspondences between two deformed surfaces corresponding to different postures of the same non-rigid object in a fully automatic way. The approach requires no prior knowledge about the shapes being registered or the initial alignment of the shapes. We consider surfaces that are represented by possibly incomplete triangular meshes. We model the deformations of an object as isometries. To solve the correspondence problem, our approach maps the intrinsic geometries of the surfaces into a low-dimensional Euclidean space via multi-dimensional scaling. This results in posture-invariant shapes that can be registered using rigid correspondence algorithms.", "", "We introduce the Fast Marching farthest point sampling (FastFPS) approach for the progressive sampling of planar domains and curved manifolds in triangulated, point cloud or implicit form. By using Fast Marching methods [2, 3, 6] for the incremental computation of distance maps across the sampling domain, we obtain a farthest point sampling technique superior to earlier point sampling principles in two important respects. Firstly, our method performs equally well in both the uniform and the adaptive case. Secondly, the algorithm is applicable to both images and higher dimensional surfaces in triangulated, point cloud or implicit form. This paper presents the methods underlying the algorithm and gives examples for the processing of images and triangulated surfaces. A companion report [4] provides details regarding the application of the FastFPS algorithm to point clouds and implicit surfaces.", "A new method of farthest point strategy (FPS) for progressive image acquisition (an acquisition process that enables an approximation of the whole image at each sampling stage) is presented. Its main advantage is in retaining its uniformity with the increased density, providing efficient means for sparse image sampling and display. In contrast to previously presented stochastic approaches, the FPS guarantees the uniformity in a deterministic min-max sense. Within this uniformity criterion, the sampling points are irregularly spaced, exhibiting anti-aliasing properties comparable to those characteristic of the best available method (Poisson disk). A straightforward modification of the FPS yields an image-dependent adaptive sampling scheme. An efficient O(N log N) algorithm for both versions is introduced, and several applications of the FPS are discussed.", "Point clouds are one of the most primitive and fundamental surface representations. A popular source of point clouds are three dimensional shape acquisition devices such as laser range scanners. Another important field where point clouds are found is in the representation of high-dimensional manifolds by samples. With the increasing popularity and very broad applications of this source of data, it is natural and important to work directly with this representation, without having to go to the intermediate and sometimes impossible and distorting steps of surface reconstruction. A geometric framework for comparing manifolds given by point clouds is presented in this paper. The underlying theory is based on Gromov-Hausdorff distances, leading to isometry invariant and completely geometric comparisons. This theory is embedded in a probabilistic setting as derived from random sampling of manifolds, and then combined with results on matrices of pairwise geodesic distances to lead to a computational implementation of the framework. The theoretical and computational results here presented are complemented with experiments for real three dimensional shapes.", "An efficient algorithm for isometry-invariant matching of surfaces is presented. The key idea is computing the minimum-distortion mapping between two surfaces. For this purpose, we introduce the generalized multidimensional scaling, a computationally efficient continuous optimization algorithm for finding the least distortion embedding of one surface into another. The generalized multidimensional scaling algorithm allows for both full and partial surface matching. As an example, it is applied to the problem of expression-invariant three-dimensional face recognition.", "We present a heuristic algorithm to compute approximate geodesic distances on a triangular manifold S containing n vertices with partially missing data. The proposed method computes an approximation of the geodesic distance between two vertices pi and pj on S and provides an upper bound of the geodesic distance that is shown to be optimal in the worst case. This yields a relative error bound of the estimate that is worst-case optimal. The algorithm approximates the geodesic distance without trying to reconstruct the missing data by embedding the surface in a low dimensional space via multi-dimensional scaling (MDS). We derive a new heuristic method to add an object to the embedding computed via least-squares MDS." ] }
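A minimal sketch of FPS and of the stretch factor it induces, using Euclidean distances between planar points as a stand-in for the shortest-path metric of the paper (and starting from the first point rather than a random one, for determinism); function names are illustrative:

```python
import itertools
import math

def fps_sources(points, k):
    """Farthest point sampling: repeatedly add the point whose distance to
    the already selected sources is largest."""
    sources = [points[0]]
    while len(sources) < k:
        sources.append(max(points,
                           key=lambda p: min(math.dist(p, s) for s in sources)))
    return sources

def stretch_factor(points, sources):
    """Max over all pairs of (best detour through a source) / (true distance)."""
    worst = 1.0
    for p, q in itertools.combinations(points, 2):
        via = min(math.dist(p, s) + math.dist(s, q) for s in sources)
        worst = max(worst, via / math.dist(p, q))
    return worst
```

Adding more sources can only decrease the stretch; in the degenerate case where every vertex is a source, the approximation is exact.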
1311.4665
1658042064
A standard way to approximate the distance between two vertices p and q in a graph is to compute a shortest path from p to q that goes through one of k sources, which are well-chosen vertices. Precomputing the distance between each of the k sources to all vertices yields an efficient computation of approximate distances between any two vertices. One standard method for choosing k sources is the so-called Farthest Point Sampling (FPS), which starts with a random vertex as the first source, and iteratively selects the farthest vertex from the already selected sources. In this paper, we analyze the stretch factor F_FPS of approximate geodesics computed using FPS, which is the maximum, over all pairs of distinct vertices, of their approximated distance over their geodesic distance in the graph. We show that F_FPS can be bounded in terms of the minimal value F* of the stretch factor obtained using an optimal placement of k sources as F_FPS ≤ 2 r_e^2 F* + 2 r_e^2 + 8 r_e + 1, where r_e is the length ratio of the longest edge over the shortest edge in the graph. We further show that the factor r_e is not an artefact of the analysis by providing a class of graphs for which F_FPS ≥ (1/2) r_e F*.
The problem we study is closely related to the @math -center problem, which aims at finding @math centers (or sources) @math , such that the maximum distance of any point to its closest center is minimized. With the notation defined above, the @math -center problem aims at finding @math , such that @math is minimized. This problem is @math -hard and FPS gives a @math -approximation, which means that the @math centers @math found using FPS have the property that @math @cite_14 .
{ "cite_N": [ "@cite_14" ], "mid": [ "1973264045" ], "abstract": [ "The problem of clustering a set of points so as to minimize the maximum intercluster distance is studied. An O(kn) approximation algorithm, where n is the number of points and k is the number of clusters, that guarantees solutions with an objective function value within two times the optimal solution value is presented. This approximation algorithm succeeds as long as the set of points satisfies the triangular inequality. We also show that our approximation algorithm is best possible, with respect to the approximation bound, if P ≠ NP." ] }
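The factor-2 guarantee of the greedy farthest-point heuristic for k-center mentioned above can be checked on a tiny instance against a brute-force optimum. Euclidean points stand in for an arbitrary metric, and all names are illustrative:

```python
import itertools
import math

def covering_radius(points, centers):
    """Largest distance from any point to its nearest center."""
    return max(min(math.dist(p, c) for c in centers) for p in points)

def greedy_centers(points, k):
    """Gonzalez' farthest-point heuristic for the k-center problem."""
    centers = [points[0]]
    while len(centers) < k:
        centers.append(max(points,
                           key=lambda p: min(math.dist(p, c) for c in centers)))
    return centers

def optimal_radius(points, k):
    """Exact optimum by brute force over all k-subsets -- tiny instances only."""
    return min(covering_radius(points, cs)
               for cs in itertools.combinations(points, k))
```

The greedy radius always sits between the optimum and twice the optimum, matching the cited 2-approximation bound for any metric that satisfies the triangle inequality.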
1311.4665
1658042064
A standard way to approximate the distance between two vertices p and q in a graph is to compute a shortest path from p to q that goes through one of k sources, which are well-chosen vertices. Precomputing the distance between each of the k sources to all vertices yields an efficient computation of approximate distances between any two vertices. One standard method for choosing k sources is the so-called Farthest Point Sampling (FPS), which starts with a random vertex as the first source, and iteratively selects the farthest vertex from the already selected sources. In this paper, we analyze the stretch factor F_FPS of approximate geodesics computed using FPS, which is the maximum, over all pairs of distinct vertices, of their approximated distance over their geodesic distance in the graph. We show that F_FPS can be bounded in terms of the minimal value F* of the stretch factor obtained using an optimal placement of k sources as F_FPS ≤ 2 r_e^2 F* + 2 r_e^2 + 8 r_e + 1, where r_e is the length ratio of the longest edge over the shortest edge in the graph. We further show that the factor r_e is not an artefact of the analysis by providing a class of graphs for which F_FPS ≥ (1/2) r_e F*.
In the context of isometry-invariant shape processing, we are interested in bounding the stretch induced by the approximation rather than ensuring that every point has a close-by source. A related problem, studied in the context of networks by Könemann et al. @cite_0 , is the edge-dilation @math -center problem, where every point, @math , is assigned a source, @math , and the distance between two points @math and @math is approximated by the length of the path through @math and @math . The aim is then to find a set of sources that minimizes the worst stretch; Könemann et al. show that this problem is @math -hard and propose an approximation algorithm to solve it.
{ "cite_N": [ "@cite_0" ], "mid": [ "2052433163" ], "abstract": [ "We provide an approximation algorithm for selecting centers in a complete graph so as to minimize the maximum ratio of the distance between any two nodes via their respective centers to their true graph distance. Placing centers under such an objective function is important in designing efficient communication networks which rely on hubs for routing." ] }
1311.4021
1790661775
Fixed-parameter tractability analysis and scheduling are two core domains of combinatorial optimization which led to deep understanding of many important algorithmic questions. However, even though fixed-parameter algorithms are appealing for many reasons, no such algorithms are known for many fundamental scheduling problems. In this paper we present the first fixed-parameter algorithms for classical scheduling problems such as makespan minimization, scheduling with job-dependent cost functions (one important example being weighted flow time), and scheduling with rejection. To this end, we identify crucial parameters that determine the problems' complexity. In particular, we manage to cope with the problem complexity stemming from numeric input values, such as job processing times, which is usually a core bottleneck in the design of fixed-parameter algorithms. We complement our algorithms with W[1]-hardness results showing that for smaller sets of parameters the respective problems do not allow FPT-algorithms. In particular, our positive and negative results for scheduling with rejection explore a research direction proposed by Dániel Marx. We hope that our contribution yields a new and fresh perspective on scheduling and fixed-parameter algorithms and will lead to further fruitful interdisciplinary research connecting these two areas.
One very classical scheduling problem studied in this paper is to schedule a set of jobs non-preemptively on a set of @math identical machines, i.e., @math . Research on it dates back to the 1960s, when Graham showed that the greedy list scheduling algorithm yields a @math -approximation and a @math -approximation when the jobs are ordered non-increasingly by length @cite_12 . After a series of improvements @cite_19 @cite_24 @cite_0 @cite_2 , Hochbaum and Shmoys presented a polynomial time approximation scheme (PTAS), even if the number of machines is part of the input @cite_21 . On unrelated machines, the problem is @math -hard to approximate within a factor better than @math @cite_13 @cite_22 , and there is a 2-approximation algorithm @cite_22 that extends to the generalized assignment problem @cite_32 . For the restricted assignment case, i.e., each job has a fixed processing time and a set of machines to which it can be assigned, Svensson @cite_16 gives a polynomial time algorithm that estimates the optimal makespan up to a factor of @math .
{ "cite_N": [ "@cite_22", "@cite_21", "@cite_32", "@cite_24", "@cite_19", "@cite_0", "@cite_2", "@cite_16", "@cite_13", "@cite_12" ], "mid": [ "2046825771", "2093979815", "2014369282", "2020196988", "2019133545", "1605513453", "2067297441", "2135932424", "2081855231", "2104680817" ], "abstract": [ "We consider the following scheduling problem. There are m parallel machines and n independent jobs. Each job is to be assigned to one of the machines. The processing of job j on machine i requires time pij. The objective is to find a schedule that minimizes the makespan. Our main result is a polynomial algorithm which constructs a schedule that is guaranteed to be no longer than twice the optimum. We also present a polynomial approximation scheme for the case that the number of machines is fixed. Both approximation results are corollaries of a theorem about the relationship of a class of integer programming problems and their linear programming relaxations. In particular, we give a polynomial method to round the fractional extreme points of the linear program to integral points that nearly satisfy the constraints. In contrast to our main result, we prove that no polynomial algorithm can achieve a worst-case ratio less than 3 2 unless P = NP. We finally obtain a complexity classification for all special cases with a fixed number of processing times.", "The problem of scheduling a set of n jobs on m identical machines so as to minimize the makespan time is perhaps the most well-studied problem in the theory of approximation algorithms for NP-hard optimization problems. In this paper the strongest possible type of result for this problem, a polynomial approximation scheme, is presented. More precisely, for each e, an algorithm that runs in time O (( n e) 1 e 2 ) and has relative error at most e is given. In addition, more practical algorithms for e = 1 5 + 2 - k and e = 1 6 + 2 - k , which have running times O ( n ( k + log n )) and O ( n ( km 4 + log n )) are presented. 
The techniques of analysis used in proving these results are extremely simple, especially in comparison with the baroque weighting techniques used previously. The scheme is based on a new approach to constructing approximation algorithms, which is called dual approximation algorithms, where the aim is to find superoptimal, but infeasible, solutions, and the performance is measured by the degree of infeasibility allowed. This notion should find wide applicability in its own right and should be considered for any optimization problem where traditional approximation algorithms have been particularly elusive.", "The generalized assignment problem can be viewed as the following problem of scheduling parallel machines with costs. Each job is to be processed by exactly one machine; processing jobj on machinei requires timep ij and incurs a cost ofc ij ; each machinei is available forT i time units, and the objective is to minimize the total cost incurred. Our main result is as follows. There is a polynomial-time algorithm that, given a valueC, either proves that no feasible schedule of costC exists, or else finds a schedule of cost at mostC where each machinei is used for at most 2T i time units.", "This paper considers the problem of nonpreemptively scheduling n independent jobs on m identical, parallel processors with the object of minimizing the “makespan”, or completion time for the entire set of jobs. Coffman, Garey, and Johnson [SIAM J. Comput., 7 (1978), pp. 1–17] described an algorithm MULTIFIT which has a considerably better worst case performance than the largest processing time first algorithm. 
In this paper we tighten the bounds obtained in that paper on the worst case behavior of this algorithm by giving an example showing that it may be as bad as 13 11 and proving that it can be no worse than 6 5.", "We consider one of the basic, well-studied problems of scheduling theory, that of nonpreemptively scheduling n independent tasks on m identical, parallel processors with the objective of minimizing the “makespan,” i.e., the total timespan required to process all the given tasks. Because this problem is @math -complete and apparently intractable in general, much effort has been directed toward devising fast algorithms which find near-optimal schedules. The well-known LPT (Largest Processing Time first) algorithm always finds a schedule having makespan within @math of the minimum possible makespan, and this is the best such bound satisfied by any previously published fast algorithm. We describe a comparably fast algorithm, based on techniques from “bin-packing,” which we prove satisfies a bound of 1.220. On the basis of exact upper bounds determined for each @math , we conjecture that the best possible general bound for our algorithm is actually @math .", "Consideration is given to the problem of nonpreemptively scheduling a set of N independent tasks to a system of M identical processors, with the objective to minimize the overall finish time. Since this problem is known to be NP-Hard, and hence unlikely to permit an efficient solution procedure, heuristic algorithms are studied in an effort to provide near-optimal results. Worst-case analysis is used to gauge the worth of a scheduling procedure. For a particular algorithm, an upper bound is sought for the length of its schedule expressed relative to an optimal assignment of tasks to processors. 
Departing from more traditional schemes of only determining an algorithm's worst-case performance bound, the work contained herein focuses on modifying a scheduling heuristic for \"bad\" regions of the input space, thereby improving its worst-case bound. It is shown that rather simple alterations, having little effect on the run-time of an algorithm, may guarantee a significantly better behavior. The O 1-INTERCHANGE heuristic is improved so that, while its time complexity is still O(NlogM), its worst-case performance bound is reduced from 2 to 4 3 times optimal. The familiar LPT algorithm is altered so that, while its time complexity is still O(NlogN), its worst-case bound is reduced from 4 3 to 5 4 times optimal. The major effort of this research is devoted to proving that the MULTIFIT heuristic can be modified, without increasing its time complexity from O(NlogN), so that its worst-case performance bound is reduced from some value in the range 13 11, 6 5 to 72 61 times optimal, a better bound as of this writing than that yielded by any other known polynomial-time algorithm.", "The following job sequencing problems are studied: (i) single processor job sequencing with deadlines, (ii) job sequencing on m -identical processors to minimize finish time and related problems, (iii) job sequencing on 2-identical processors to minimize weighted mean flow time. Dynamic programming type algorithms are presented to obtain optimal solutions to these problems, and three general techniques are presented to obtain approximate solutions for optimization problems solvable in this way. The techniques are applied to the problems above to obtain polynomial time algorithms that generate “good” approximate solutions.", "One of the classic results in scheduling theory is the @math -approximation algorithm by Lenstra, Shmoys, and Tardos for the problem of scheduling jobs to minimize makespan on unrelated machines; i.e., job @math requires time @math if processed on machine @math . 
More than two decades after its introduction it is still the algorithm of choice even in the restricted model where processing times are of the form @math . This problem, also known as the restricted assignment problem, is NP-hard to approximate within a factor less than @math , which is also the best known lower bound for the general version. Our main result is a polynomial time algorithm that estimates the optimal makespan of the restricted assignment problem within a factor @math , where @math is an arbitrarily small constant. The result is obtained by upper bounding the integrality gap of a certain strong linear program, known as the configuration LP, that was previously successfu...", "We design a 1.75-approximation algorithm for a special case of scheduling parallel machines to minimize the makespan, namely the case where each job can be assigned to at most two machines with the same processing time on either machine. (This is a special case of so-called restricted assignment, where the set of eligible machines can be arbitrary for each job.) We also show that even for this special case it is NP-hard to compute a better than 1.5 approximation. This is the first improvement of the approximation ratio 2 of Lenstra, Shmoys, and Tardos [Approximation algorithms for scheduling unrelated parallel machines, Math. Program. 46:259--271, 1990], for any special case with unbounded number of machines. Our lower bound yields the same ratio as their bound which works for restricted assignment, and which is still the state-of-the-art lower bound even for the most general case.", "An apparatus for generating sparks over a selected area to be used for theatrical effects. Metal wire having a diameter in the range of 0.020-0.125 inches is provided by two, independent supply sources. Each wire supply source is coupled to a wire guide which imposes synchronous, linear movement to each wire source at a selected rate.
Each wire source is coupled to a tip assembly which places the terminus of each wire source adjacent one another. The positive and negative electrodes of a direct current power source are electrically connected to a respective terminus of each of the pair of wire sources, the output of the direct current power source is amplified to a voltage sufficient to atomize the wire when the power source is short circuited. The atomization of the wire results in the production of heated, metallic particles simulating generated sparks. A source of compressed air is disposed adjacent the point of atomization. The atomized particles are disseminated across an area determined by the force imposed thereon by the compressed air." ] }
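The scheduling abstracts above all build on the same greedy baseline: LPT sorts jobs by decreasing processing time and assigns each to the currently least-loaded machine. A minimal sketch of that rule (the function name and the worked instance are illustrative, not taken from any of the cited papers):

```python
import heapq

def lpt_makespan(jobs, m):
    """Largest Processing Time first: sort jobs in decreasing order,
    then greedily place each on the machine with the smallest load."""
    loads = [0] * m               # min-heap of machine loads
    heapq.heapify(loads)
    for p in sorted(jobs, reverse=True):
        least = heapq.heappop(loads)
        heapq.heappush(loads, least + p)
    return max(loads)

# A classic tight-ish instance for m = 2: jobs 3,3,2,2,2.
# LPT pairs the two 3s onto different machines and ends at 7,
# while the optimum {3,3} vs {2,2,2} achieves 6.
print(lpt_makespan([3, 3, 2, 2, 2], 2))  # -> 7
```

The gap 7/6 here stays within the well-known 4/3-type guarantees that the abstracts above successively tighten.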
1311.4021
1790661775
Fixed-parameter tractability analysis and scheduling are two core domains of combinatorial optimization which have led to a deep understanding of many important algorithmic questions. However, even though fixed-parameter algorithms are appealing for many reasons, no such algorithms are known for many fundamental scheduling problems. In this paper we present the first fixed-parameter algorithms for classical scheduling problems such as makespan minimization, scheduling with job-dependent cost functions (one important example being weighted flow time), and scheduling with rejection. To this end, we identify crucial parameters that determine the problems' complexity. In particular, we manage to cope with the problem complexity stemming from numeric input values, such as job processing times, which is usually a core bottleneck in the design of fixed-parameter algorithms. We complement our algorithms with W[1]-hardness results showing that for smaller sets of parameters the respective problems do not allow FPT algorithms. In particular, our positive and negative results for scheduling with rejection explore a research direction proposed by Dániel Marx. We hope that our contribution yields a new and fresh perspective on scheduling and fixed-parameter algorithms and will lead to further fruitful interdisciplinary research connecting these two areas.
Until now, to the best of our knowledge, no fixed-parameter algorithms for the classical scheduling problems studied in this paper have been devised. Though restricted scheduling problems have been considered by the parameterized complexity community, this generally meant that jobs are represented as vertices of a graph with conflicting jobs connected by an edge, and then one finds a maximum independent set in the graph; examples are given by van @cite_33 . In contrast, classical scheduling problems investigated in the framework of parameterized complexity appear to be intractable; for example, @math -processor scheduling with precedence constraints is @math -hard @cite_5 and scheduling unit-length tasks with deadlines and precedence constraints and @math tardy tasks is @math -hard @cite_34 , for parameter @math .
{ "cite_N": [ "@cite_5", "@cite_34", "@cite_33" ], "mid": [ "2029604753", "2035177751", "" ], "abstract": [ "It is shown that the Precedence Constrained K-Processor Scheduling problem is hard for the parameterized complexity class W[2]. This means that there does not exist a constant c, such that for all fixed K, the Precedence Constrained K-Processor Scheduling problem can be solved in O(n^c) time, unless an unlikely collapse occurs in the parameterized complexity hierarchy. That is, if the problem can be solved in polynomial time for each fixed K, then it is likely that the degree of the running time polynomial must increase as the number of processors K increases.", "Given a set T of tasks, each of unit length and having an individual deadline d(t) ∈ Z+, a set of precedence constraints on T, and a positive integer k ≤ |T|, we can ask \"Is there a one-processor schedule for T that obeys the precedence constraints and contains no more than k late tasks?\" This is a well-known NP-complete problem.We might also inquire \"Is there a one-processor schedule for T that obeys the precedence constraints and contains at least k tasks that are on time i.e. no more than |T| - k late tasks?\"Within the framework of classical complexity theory, these two questions are merely different instances of the same problem. Within the recently developed framework of parameterized complexity theory, however, they give rise to two separate problems that may be studied independently of one another.We investigate these problems from the parameterized point of view. We show that, in the general case, both these problems are hard for the parameterized complexity class W[1].In contrast, in the case where the set of precedence constraints can be modelled by a partial order of bounded width, we show that both these problems are fixed parameter tractable.", "" ] }
1311.4021
1790661775
Fixed-parameter tractability analysis and scheduling are two core domains of combinatorial optimization which have led to a deep understanding of many important algorithmic questions. However, even though fixed-parameter algorithms are appealing for many reasons, no such algorithms are known for many fundamental scheduling problems. In this paper we present the first fixed-parameter algorithms for classical scheduling problems such as makespan minimization, scheduling with job-dependent cost functions (one important example being weighted flow time), and scheduling with rejection. To this end, we identify crucial parameters that determine the problems' complexity. In particular, we manage to cope with the problem complexity stemming from numeric input values, such as job processing times, which is usually a core bottleneck in the design of fixed-parameter algorithms. We complement our algorithms with W[1]-hardness results showing that for smaller sets of parameters the respective problems do not allow FPT algorithms. In particular, our positive and negative results for scheduling with rejection explore a research direction proposed by Dániel Marx. We hope that our contribution yields a new and fresh perspective on scheduling and fixed-parameter algorithms and will lead to further fruitful interdisciplinary research connecting these two areas.
The only exceptions appear to be an algorithm by Marx and Schlotter @cite_25 for makespan minimization where @math jobs have processing time @math and all other jobs have processing time 1, for the combined parameter @math , and work by @cite_4 , who consider checking a given schedule (rather than optimization).
{ "cite_N": [ "@cite_4", "@cite_25" ], "mid": [ "92357456", "2037963188" ], "abstract": [ "We investigate the computational complexity of two global constraints, CUMULATIVE and INTERDISTANCE. These are key constraints in modeling and solving scheduling problems. Enforcing domain consistency on both is NP-hard. However, restricted versions of these constraints are often sufficient in practice. Some examples include scheduling problems with a large number of similar tasks, or tasks sparsely distributed over time. Another example is runway sequencing problems in air-traffic control, where landing periods have a regular pattern. Such cases can be characterized in terms of structural restrictions on the constraints. We identify a number of such structural restrictions and investigate how they impact the computational complexity of propagating these global constraints. In particular, we prove that such restrictions often make propagation tractable.", "We study the Hospitals Residents with Couples problem, a variant of the classical Stable Marriage problem. This is the extension of the Hospitals Residents problem where residents are allowed to form pairs and submit joint rankings over hospitals. We use the framework of parameterized complexity, considering the number of couples as a parameter. We also apply a local search approach, and examine the possibilities for giving FPT algorithms applicable in this context. Furthermore, we also investigate the matching problem containing couples that is the simplified version of the Hospitals Residents with Couples problem modeling the case when no preferences are given." ] }
1311.4021
1790661775
Fixed-parameter tractability analysis and scheduling are two core domains of combinatorial optimization which have led to a deep understanding of many important algorithmic questions. However, even though fixed-parameter algorithms are appealing for many reasons, no such algorithms are known for many fundamental scheduling problems. In this paper we present the first fixed-parameter algorithms for classical scheduling problems such as makespan minimization, scheduling with job-dependent cost functions (one important example being weighted flow time), and scheduling with rejection. To this end, we identify crucial parameters that determine the problems' complexity. In particular, we manage to cope with the problem complexity stemming from numeric input values, such as job processing times, which is usually a core bottleneck in the design of fixed-parameter algorithms. We complement our algorithms with W[1]-hardness results showing that for smaller sets of parameters the respective problems do not allow FPT algorithms. In particular, our positive and negative results for scheduling with rejection explore a research direction proposed by Dániel Marx. We hope that our contribution yields a new and fresh perspective on scheduling and fixed-parameter algorithms and will lead to further fruitful interdisciplinary research connecting these two areas.
A potential reason for this lack of positive results (fixed-parameter algorithms) might be that the knowledge of fixed-parameter algorithms for weighted problems is still in a nascent stage, whereas scheduling problems are inherently weighted, having job processing times, job weights, etc. We remark, though, that some scheduling-type problems can be addressed by choosing the "number of numbers" as the parameter, as done by @cite_26 .
{ "cite_N": [ "@cite_26" ], "mid": [ "2125578939" ], "abstract": [ "The usefulness of parameterized algorithmics has often depended on what Niedermeier has called “the art of problem parameterization”. In this paper we introduce and explore a novel but general form of parameterization: the number of numbers. Several classic numerical problems, such as Subset Sum, Partition, 3-Partition, Numerical 3-Dimensional Matching, and Numerical Matching with Target Sums, have multisets of integers as input. We initiate the study of parameterizing these problems by the number of distinct integers in the input. We rely on an FPT result for Integer Linear Programming Feasibility to show that all the above-mentioned problems are fixed-parameter tractable when parameterized in this way. In various applied settings, problem inputs often consist in part of multisets of integers or multisets of weighted objects (such as edges in a graph, or jobs to be scheduled). Such number-of-numbers parameterized problems often reduce to subproblems about transition systems of various kinds, parameterized by the size of the system description. We consider several core problems of this kind relevant to number-of-numbers parameterization. Our main hardness result considers the problem: given a non-deterministic Mealy machine M (a finite state automaton outputting a letter on each transition), an input word x, and a census requirement c for the output word specifying how many times each letter of the output alphabet should be written, decide whether there exists a computation of M reading x that outputs a word y that meets the requirement c. We show that this problem is hard for W[1]. If the question is whether there exists an input word x such that a computation of M on x outputs a word that meets c, the problem becomes fixed-parameter tractable." ] }
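The "number of numbers" parameterization in the abstract above can be made concrete: when jobs take only d distinct processing times, an instance is just a vector of d multiplicities, and a machine's workload is a sub-multiset of that vector. The sketch below (names and the brute-force enumeration are illustrative; the cited work instead relies on Integer Linear Programming feasibility to obtain genuine fixed-parameter running times) computes an exact makespan this way:

```python
from collections import Counter
from functools import lru_cache
from itertools import product

def exact_makespan(jobs, m):
    """Exact minimum makespan, exploiting that the input is described
    by the multiplicities of its d distinct processing times."""
    counts = Counter(jobs)
    vals = tuple(sorted(counts))             # the d distinct values
    start = tuple(counts[v] for v in vals)   # their multiplicities

    @lru_cache(maxsize=None)
    def solve(rem, machines):
        if not any(rem):
            return 0
        if machines == 0:
            return float("inf")
        best = float("inf")
        # give the next machine some sub-multiset of the remaining jobs
        for take in product(*(range(c + 1) for c in rem)):
            load = sum(t * v for t, v in zip(take, vals))
            if load >= best:
                continue                     # cannot improve on best
            rest = tuple(c - t for c, t in zip(rem, take))
            best = min(best, max(load, solve(rest, machines - 1)))
        return best

    return solve(start, m)

print(exact_makespan([3, 3, 2, 2, 2], 2))  # -> 6
```

The state space depends on the multiplicities rather than on n alone, so this sketch only illustrates the encoding; it is not itself an FPT algorithm in d.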
1311.4001
1832679686
We study the minimum number of constraints needed to formulate random instances of the maximum stable set problem via linear programs (LPs), in two distinct models. In the uniform model, the constraints of the LP are not allowed to depend on the input graph, which should be encoded solely in the objective function. There we prove a 2^{Ω(n/log n)} lower bound with probability at least 1 - 2^{-2^n} for every LP that is exact for a randomly selected set of instances; each graph on at most n vertices being selected independently with probability p ≥ 2^{-\binom{n/4}{2}+n}. In the non-uniform model, the constraints of the LP may depend on the input graph, but we allow weights on the vertices. The input graph is sampled according to the G(n, p) model. There we obtain upper and lower bounds holding with high probability for various ranges of p. We obtain a super-polynomial lower bound all the way from p = Ω(log^{6+ε} n / n) to p = o(1/log n). Our upper bound is close to this, as there is only an essentially quadratic gap in the exponent, which currently also exists in the worst-case model. Finally, we state a conjecture that would close this gap, both in the average-case and worst-case models.
Our work is most directly related to @cite_7 and @cite_8 @cite_12 , where the framework for bounding the size of approximate linear programming formulations was laid out. We also borrow a few ideas from @cite_14 to set up our uniform model. We will also employ a robustness theorem from @cite_3 for dropping constraints and feasible solutions.
{ "cite_N": [ "@cite_14", "@cite_7", "@cite_8", "@cite_3", "@cite_12" ], "mid": [ "2038907843", "2119368733", "2150285483", "2081052967", "" ], "abstract": [ "We prove super-polynomial lower bounds on the size of linear programming relaxations for approximation versions of constraint satisfaction problems. We show that for these problems, polynomial-sized linear programs are exactly as powerful as programs arising from a constant number of rounds of the Sherali-Adams hierarchy. In particular, any polynomial-sized linear program for Max Cut has an integrality gap of 1/2 and any such linear program for Max 3-Sat has an integrality gap of 7/8.", "We solve a 20-year old problem posed by Yannakakis and prove that there exists no polynomial-size linear program (LP) whose associated polytope projects to the traveling salesman polytope, even if the LP is not required to be symmetric. Moreover, we prove that this holds also for the cut polytope and the stable set polytope. These results were discovered through a new connection that we make between one-way quantum communication protocols and semidefinite programming reformulations of LPs.", "We develop a framework for proving approximation limits of polynomial size linear programs (LPs) from lower bounds on the nonnegative ranks of suitably defined matrices. This framework yields unconditional impossibility results that are applicable to any LP as opposed to only programs generated by hierarchies. Using our framework, we prove that O(n^{1/2-ϵ})-approximations for CLIQUE require LPs of size 2^{n^{Ω(ϵ)}}. This lower bound applies to LPs using a certain encoding of CLIQUE as a linear optimization problem. Moreover, we establish a similar result for approximations of semidefinite programs by LPs.
Our main technical ingredient is a quantitative improvement of Razborov’s [38] rectangle corruption lemma for the high error regime, which gives strong lower bounds on the nonnegative rank of shifts of the unique disjointness matrix.", "We provide a new framework for establishing strong lower bounds on the nonnegative rank of matrices by means of common information, a notion previously introduced in [1]. Common information is a natural lower bound for the nonnegative rank of a matrix and by combining it with Hellinger distance estimations we can compute the (almost) exact common information of the UDISJ partial matrix. We also establish robustness of this estimation under various perturbations of the UDISJ partial matrix, where rows and columns are randomly or adversarially removed or where entries are randomly or adversarially altered. This robustness translates, via a variant of Yannakakis' Factorization Theorem, to lower bounds on the average case and adversarial approximate extension complexity. We present the first family of polytopes, the hard pair introduced in [2] related to the CLIQUE problem, with high average case and adversarial approximate extension complexity. We also provide an information theoretic variant of the fooling set method that allows us to extend fooling set lower bounds from extension complexity to approximate extension complexity.", "" ] }
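The extension-complexity results summarized above concern how large exact or approximate LP formulations of maximum stable set must be. The underlying object, an integrality gap of a small relaxation, is easy to exhibit with the standard edge relaxation (x_u + x_v ≤ 1 on every edge). The sketch below is a stdlib-only illustration; the graph and helper function are illustrative examples, not taken from the cited papers:

```python
from itertools import combinations

def max_stable_set(n, edges):
    """Brute force: size of the largest vertex set containing no edge."""
    for r in range(n, 0, -1):
        for S in combinations(range(n), r):
            s = set(S)
            if all(not (u in s and v in s) for u, v in edges):
                return r
    return 0

# C5, the 5-cycle: the edge relaxation admits the fractional point
# x = 1/2 on every vertex, of value 5/2, while the integral optimum
# (the independence number of C5) is only 2 -- the kind of gap that
# small LP formulations cannot avoid.
edges = [(i, (i + 1) % 5) for i in range(5)]
x = [0.5] * 5
assert all(x[u] + x[v] <= 1 for u, v in edges)  # fractional feasibility
print(sum(x), max_stable_set(5, edges))         # -> 2.5 2
```

Adding the odd-cycle inequality for C5 would cut off this fractional point, which is exactly why stronger (and larger) formulations are needed.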
1311.3425
1993649018
Distributed computing models typically assume reliable communication between processors. While such assumptions often hold for engineered networks, e.g., due to underlying error correction protocols, their relevance to biological systems, wherein messages are often distorted before reaching their destination, is quite limited. In this study we aim at bridging this gap by rigorously analyzing a model of communication in large anonymous populations composed of simple agents which interact through short and highly unreliable messages. We focus on the rumor-spreading problem and the majority-consensus problem, two fundamental tasks in distributed computing, and initiate their study under communication noise. Our model for communication is extremely weak and follows the push gossip communication paradigm: In each synchronous round each agent that wishes to send information delivers a message to a random anonymous agent. This communication is further restricted to contain only one bit (essentially representing an opinion). Lastly, the system is assumed to be so noisy that the bit in each message sent is flipped independently with probability 1/2 - ε, for some small ε > 0. Even in this severely restricted, stochastic and noisy setting we give natural protocols that solve the noisy rumor-spreading and the noisy majority-consensus problems efficiently. Our protocols run in O(log n / ε²) rounds and use O(n log n / ε²) message bits in total, where n is the number of agents. These bounds are asymptotically optimal and, in fact, are as fast and message efficient as if each agent had been simultaneously informed directly by the source. Our efficient, robust, and simple algorithms suggest balancing between silence and transmission, synchronization, and majority-based decisions as important ingredients towards understanding collective communication schemes in anonymous and noisy populations.
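The key quantitative phenomenon behind this abstract, that a bit flipped with probability 1/2 - ε needs on the order of 1/ε² independent receptions before a majority vote decodes it reliably, is easy to simulate. The snippet below is an illustrative sketch of that single ingredient, not the paper's actual protocol:

```python
import random

def majority_of_noisy_copies(true_bit, eps, k, rng):
    """Receive k one-bit messages, each flipped independently with
    probability 1/2 - eps, and decode by majority vote."""
    received = [true_bit ^ (rng.random() < 0.5 - eps) for _ in range(k)]
    return 1 if sum(received) * 2 > k else 0

def success_rate(eps, k, trials=2000, seed=1):
    rng = random.Random(seed)
    ok = sum(majority_of_noisy_copies(1, eps, k, rng) == 1
             for _ in range(trials))
    return ok / trials

# With flip probability 1/2 - eps, a handful of samples is unreliable,
# while Theta(1/eps^2) samples make the majority vote near-certain.
print(success_rate(0.1, 5), success_rate(0.1, 500))
```

The jump in reliability between the two sample sizes mirrors why the protocols in the abstract pay a 1/ε² factor in both rounds and message bits.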
Our paper falls within the scope of natural algorithms, a recent attempt to investigate biological phenomena from an algorithmic perspective @cite_53 @cite_22 @cite_2 @cite_27 @cite_37 @cite_52 . Within this framework, many works in the computer science discipline have studied different computational aspects of abstract systems composed of simple and restricted individuals. This includes, in particular, the study of population protocols @cite_26 @cite_36 @cite_13 @cite_44 @cite_23 , which considers individuals with constant memory size interacting in pairs (using constant size messages) in a communication pattern which is either uniformly at random or adversarial, and the beeping model @cite_53 @cite_11 @cite_5 , which assumes a fixed network with extremely restricted communication. However, despite interesting results obtained in such models, the understanding of their fault-tolerance aspects is still lacking @cite_13 @cite_44 . Here, we study basic distributed tasks in a model that includes highly restricted noisy communication.
{ "cite_N": [ "@cite_37", "@cite_26", "@cite_22", "@cite_36", "@cite_53", "@cite_52", "@cite_44", "@cite_27", "@cite_23", "@cite_2", "@cite_5", "@cite_13", "@cite_11" ], "mid": [ "2053831174", "2706788079", "2002188687", "1501097462", "1984727163", "2568440864", "1479726662", "1844529830", "2105431882", "", "2052474723", "2082781959", "" ], "abstract": [ "We use distributed computing tools to provide a new perspective on the behavior of cooperative biological ensembles. We introduce the Ants Nearby Treasure Search (ANTS) problem, a generalization of the classical cow-path problem [10, 20, 41, 42], which is relevant for collective foraging in animal groups. In the ANTS problem, k identical (probabilistic) agents, initially placed at some central location, collectively search for a treasure in the two-dimensional plane. The treasure is placed at a target location by an adversary and the goal is to find it as fast as possible as a function of both k and D, where D is the distance between the central location and the target. This is biologically motivated by cooperative, central place foraging, such as performed by ants around their nest. In this type of search there is a strong preference to locate nearby food sources before those that are further away. We focus on trying to find what can be achieved if communication is limited or altogether absent. Indeed, to avoid overlaps agents must be highly dispersed making communication difficult. Furthermore, if the agents do not commence the search in synchrony, then even initial communication is problematic. This holds, in particular, with respect to the question of whether the agents can communicate and conclude their total number, k. It turns out that the knowledge of k by the individual agents is crucial for performance. 
Indeed, it is a straightforward observation that the time required for finding the treasure is Ω(D + D²/k), and we show in this paper that this bound can be matched if the agents have knowledge of k up to some constant approximation. We present a tight bound for the competitive penalty that must be paid, in the running time, if the agents have no information about k. Specifically, this bound is slightly more than logarithmic in the number of agents. In addition, we give a lower bound for the setting in which the agents are given some estimation of k. Informally, our results imply that the agents can potentially perform well without any knowledge of their total number k, however, to further improve, they must use some information regarding k. Finally, we propose a uniform algorithm that is both efficient and extremely simple, suggesting its relevance for actual biological scenarios.", "The computational power of networks of small resource-limited mobile agents is explored. Two new models of computation based on pairwise interactions of finite-state agents in populations of finite but unbounded size are defined. With a fairness condition on interactions, the concept of stable computation of a function or predicate is defined. Protocols are given that stably compute any predicate in the class definable by formulas of Presburger arithmetic, which includes Boolean combinations of threshold-k, majority, and equivalence modulo m. All stably computable predicates are shown to be in NL. Assuming uniform random sampling of interacting pairs yields the model of conjugating automata. Any counter machine with O(1) counters of capacity O(n) can be simulated with high probability by a conjugating automaton in a population of size n. All predicates computable with high probability in this model are shown to be in P; they can also be computed by a randomized logspace machine in exponential time.
Several open problems and promising future directions are discussed.", "Swarm formation and swarm flocking may conflict with each other. Without explicit communication, such conflicts may lead to undesired topological changes since there is no global signal that facilitates coordinated and safe switching from one behavior to the other. Moreover, without coordination signals multiple swarm members might simultaneously assume leadership, and their conflicting leading directions are likely to prevent successful flocking. To the best of our knowledge, we present the first set of swarm flocking algorithms that maintain connectivity while electing direction for flocking, under conditions of no communication. The algorithms allow spontaneous direction requests and support direction changes.", "Population protocols are used as a theoretical model for a collection (or population) of tiny mobile agents that interact with one another to carry out a computation. The agents are identically programmed finite state machines. Input values are initially distributed to the agents, and pairs of agents can exchange state information with other agents when they are close together. The movement pattern of the agents is unpredictable, but subject to some fairness constraints, and computations must eventually converge to the correct output value in any schedule that results from that movement. This framework can be used to model mobile ad hoc networks of tiny devices or collections of molecules undergoing chemical reactions. This chapter surveys results that describe what can be computed in various versions of the population protocol model.", "Computational and biological systems are often distributed so that processors (cells) jointly solve a task, without any of them receiving all inputs or observing all outputs. Maximal independent set (MIS) selection is a fundamental distributed computing procedure that seeks to elect a set of local leaders in a network.
A variant of this problem is solved during the development of the fly’s nervous system, when sensory organ precursor (SOP) cells are chosen. By studying SOP selection, we derived a fast algorithm for MIS selection that combines two attractive features. First, processors do not need to know their degree; second, it has an optimal message complexity while only using one-bit messages. Our findings suggest that simple and efficient algorithms can be developed on the basis of biologically derived insights.", "Evolutionary dynamics has been traditionally studied in the context of homogeneous populations, mainly described by the Moran process [P. Moran, Random processes in genetics, Proceedings of the Cambridge Philosophical Society 54 (1) (1958) 60-71]. Recently, this approach has been generalized in [E. Lieberman, C. Hauert, M.A. Nowak, Evolutionary dynamics on graphs, Nature 433 (2005) 312-316] by arranging individuals on the nodes of a network (in general, directed). In this setting, the existence of directed arcs enables the simulation of extreme phenomena, where the fixation probability of a randomly placed mutant (i.e., the probability that the offspring of the mutant eventually spread over the whole population) is arbitrarily small or large. On the other hand, undirected networks (i.e., undirected graphs) seem to have a smoother behavior, and thus it is more challenging to find suppressors/amplifiers of selection, that is, graphs with smaller/greater fixation probability than the complete graph (i.e., the homogeneous population). In this paper we focus on undirected graphs. We present the first class of undirected graphs which act as suppressors of selection, by achieving a fixation probability that is at most one half of that of the complete graph, as the number of vertices increases. Moreover, we provide some generic upper and lower bounds for the fixation probability of general undirected graphs.
As our main contribution, we introduce the natural alternative of the model proposed in [E. Lieberman, C. Hauert, M.A. Nowak, Evolutionary dynamics on graphs, Nature 433 (2005) 312-316]. In our new evolutionary model, all individuals interact simultaneously and the result is a compromise between aggressive and non-aggressive individuals. We prove that our new model of mutual influences admits a potential function, which guarantees the convergence of the system for any graph topology and any initial fitness vector of the individuals. Furthermore, we prove fast convergence to the stable state for the case of the complete graph, as well as we provide almost tight bounds on the limit fitness of the individuals. Apart from being important on its own, this new evolutionary model appears to be useful also in the abstract modeling of control mechanisms over invading populations in networks. We demonstrate this by introducing and analyzing two alternative control approaches, for which we bound the time needed to stabilize to the ''healthy'' state of the system.", "Developing self-stabilizing solutions is considered to be more challenging and complicated than developing classical solutions, where a proper initialization of the variables can be assumed. This remark holds for a large variety of models. Hence, to ease the task of the developers, some automatic techniques have been proposed to design self-stabilizing algorithms. In this paper, we propose an automatic transformer for algorithms in population protocols model . This model introduced recently for networks with a large number of resource-limited mobile agents. For our purposes, we use a variant of this model. Mainly, we assume agents having characteristics (e.g., moving speed, communication radius) affecting their intercommunication \"speed\" and considered through the notion of cover time . 
The automatic transformer takes as an input an algorithm solving a static problem and outputs a self-stabilizing algorithm for the same problem. We prove that our transformer is correct and we analyze its stabilization complexity.", "Initial knowledge regarding group size can be crucial for collective performance. We study this relation in the context of the Ants Nearby Treasure Search (ANTS) problem [18], which models natural cooperative foraging behavior such as that performed by ants around their nest. In this problem, k (probabilistic) agents, initially placed at some central location, collectively search for a treasure on the two-dimensional grid. The treasure is placed at a target location by an adversary and the goal is to find it as fast as possible as a function of both k and D, where D is the (unknown) distance between the central location and the target. It is easy to see that T=Ω(D+D2 k) time units are necessary for finding the treasure. Recently, it has been established that O(T) time is sufficient if the agents know their total number k (or a constant approximation of it), and enough memory bits are available at their disposal [18]. In this paper, we establish lower bounds on the agent memory size required for achieving certain running time performances. To the best our knowledge, these bounds are the first non-trivial lower bounds for the memory size of probabilistic searchers. For example, for every given positive constant e, terminating the search by time O(log1−ek ·T) requires agents to use Ω(loglogk) memory bits. From a high level perspective, we illustrate how methods from distributed computing can be useful in generating lower bounds for cooperative biological ensembles. 
Indeed, if experiments that comply with our setting reveal that the ants' search is time efficient, then our theoretical lower bounds can provide some insight on the memory they use for this task.", "We extend here the Population Protocol (PP) model of (2004, 2006) [2,4] in order to model more powerful networks of resource-limited agents that are possibly mobile. The main feature of our extended model, called the Mediated Population Protocol (MPP) model, is to allow the edges of the interaction graph to have states that belong to a constant-size set. We then allow the protocol rules for pairwise interactions to modify the corresponding edge state. The descriptions of our protocols preserve both the uniformity and anonymity properties of PPs, that is, they do not depend on the size of the population and do not use unique identifiers. We focus on the computational power of the MPP model on complete interaction graphs and initially identical edges. We provide the following exact characterization of the class MPS of stably computable predicates: a predicate is in MPS iff it is symmetric and is in NSPACE(n^2).", "", "A new model that depicts a network of randomized finite state machines operating in an asynchronous environment is introduced. This model, that can be viewed as a hybrid of the message passing model and cellular automata is suitable for applying the distributed computing lens to the study of networks of sub-microprocessor devices, e.g., biological cellular networks and man-made nano-networks. Although the computation and communication capabilities of each individual device in the new model are, by design, much weaker than those of an abstract computer, we show that some of the most important and extensively studied distributed computing problems can still be solved efficiently.", "This article studies self-stabilization in networks of anonymous, asynchronously interacting nodes where the size of the network is unknown. 
Constant-space protocols are given for Dijkstra-style round-robin token circulation, leader election in rings, two-hop coloring in degree-bounded graphs, and establishing consistent global orientation in an undirected ring. A protocol to construct a spanning tree in regular graphs using O(log D) memory is also given, where D is the diameter of the graph. A general method for eliminating nondeterministic transitions from the self-stabilizing implementation of a large family of behaviors is used to simplify the constructions, and general conditions under which protocol composition preserves behavior are used in proving their correctness.", "" ] }
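Several of the abstracts in the record above work in the population-protocol model, where anonymous finite-state agents update their states through random pairwise interactions. As a concrete, self-contained illustration (not code from any of the cited works; the rule set is the standard three-state approximate-majority protocol, and all names are mine), a minimal simulation:

```python
import random

def approximate_majority(n_a, n_b, seed=0, max_steps=500_000):
    """Simulate the classic 3-state approximate-majority population protocol.

    States are 'A', 'B', and blank '_'. A random scheduler repeatedly picks
    an ordered pair (initiator, responder); the transition rules are:
      (A, B) -> responder becomes '_'   (opposite opinions cancel)
      (B, A) -> responder becomes '_'
      (A, _) -> responder becomes 'A'   (blank agents adopt an opinion)
      (B, _) -> responder becomes 'B'
    Returns the consensus opinion, or None if max_steps is exhausted.
    """
    rng = random.Random(seed)
    pop = ['A'] * n_a + ['B'] * n_b
    n = len(pop)
    for _ in range(max_steps):
        opinions = set(pop)
        if opinions == {'A'} or opinions == {'B'}:
            return pop[0]
        i, j = rng.sample(range(n), 2)  # ordered pair of distinct agents
        a, b = pop[i], pop[j]
        if a != '_' and b != '_' and a != b:
            pop[j] = '_'
        elif a != '_' and b == '_':
            pop[j] = a
    return None
```

With a clearly separated initial majority (say 170 'A' against 30 'B') the protocol converges to the majority opinion after O(n log n) interactions with overwhelming probability; when the initial gap is small relative to sqrt(n log n) the outcome may flip.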
1311.3425
1993649018
Distributed computing models typically assume reliable communication between processors. While such assumptions often hold for engineered networks, e.g., due to underlying error correction protocols, their relevance to biological systems, wherein messages are often distorted before reaching their destination, is quite limited. In this study we aim at bridging this gap by rigorously analyzing a model of communication in large anonymous populations composed of simple agents which interact through short and highly unreliable messages. We focus on the rumor-spreading problem and the majority-consensus problem, two fundamental tasks in distributed computing, and initiate their study under communication noise. Our model for communication is extremely weak and follows the push gossip communication paradigm: In each synchronous round each agent that wishes to send information delivers a message to a random anonymous agent. This communication is further restricted to contain only one bit (essentially representing an opinion). Lastly, the system is assumed to be so noisy that the bit in each message sent is flipped independently with probability 1/2 - e, for some small e > 0. Even in this severely restricted, stochastic and noisy setting we give natural protocols that solve the noisy rumor-spreading and the noisy majority-consensus problems efficiently. Our protocols run in O(log n / e^2) rounds and use O(n log n / e^2) messages/bits in total, where n is the number of agents. These bounds are asymptotically optimal and, in fact, are as fast and message efficient as if each agent would have been simultaneously informed directly by the source. Our efficient, robust, and simple algorithms suggest balancing between silence and transmission, synchronization, and majority-based decisions as important ingredients towards understanding collective communication schemes in anonymous and noisy populations.
Disseminating information to all the nodes of a network is one of the most fundamental communication primitives. In particular, the broadcast problem, where a single piece of information initially residing at some source node is to be disseminated, and variants of it have received a lot of attention in the literature, see, e.g., @cite_7 @cite_47 @cite_39 @cite_46 @cite_33 @cite_57 @cite_56 @cite_58 @cite_49 @cite_9 @cite_15 . Much of this research was devoted to bounding measures such as the number of rounds and the total number of messages. Fault-tolerant broadcast algorithms have also been studied extensively, especially in complete networks and in synchronous environments, where the focus has been on weak types of failures such as (probabilistic) message failures and initial node crashes. Essentially, it has been shown that there exist broadcast protocols that can overcome such faults with relatively little penalty @cite_39 @cite_19 @cite_29 @cite_51 @cite_56 @cite_54 @cite_9 @cite_58 .
{ "cite_N": [ "@cite_47", "@cite_33", "@cite_7", "@cite_15", "@cite_9", "@cite_29", "@cite_54", "@cite_39", "@cite_57", "@cite_56", "@cite_19", "@cite_49", "@cite_46", "@cite_58", "@cite_51" ], "mid": [ "2068322213", "", "2038562061", "2068749247", "2157004711", "2953373073", "2952550120", "1865724022", "2952030388", "2056496732", "2775875217", "179639306", "2026433959", "2071202952", "1970542229" ], "abstract": [ "In this paper, we study the question of how efficiently a collection of interconnected nodes can perform a global computation in the GOSSIP model of communication. In this model, nodes do not know the global topology of the network, and they may only initiate contact with a single neighbor in each round. This model contrasts with the much less restrictive LOCAL model, where a node may simultaneously communicate with all of its neighbors in a single round. A basic question in this setting is how many rounds of communication are required for the information dissemination problem, in which each node has some piece of information and is required to collect all others. In the LOCAL model, this is quite simple: each node broadcasts all of its information in each round, and the number of rounds required will be equal to the diameter of the underlying communication graph. In the GOSSIP model, each node must independently choose a single neighbor to contact, and the lack of global information makes it difficult to make any sort of principled choice. As such, researchers have focused on the uniform gossip algorithm, in which each node independently selects a neighbor uniformly at random. When the graph is well-connected, this works quite well. In a string of beautiful papers, researchers proved a sequence of successively stronger bounds on the number of rounds required in terms of the conductance φ and graph size n, culminating in a bound of O(φ-1 log n). 
In this paper, we show that a fairly simple modification of the protocol gives an algorithm that solves the information dissemination problem in at most O(D + polylog (n)) rounds in a network of diameter D, with no dependence on the conductance. This is at most an additive polylogarithmic factor from the trivial lower bound of D, which applies even in the LOCAL model. In fact, we prove that something stronger is true: any algorithm that requires T rounds in the LOCAL model can be simulated in O(T + polylog(n)) rounds in the GOSSIP model. We thus prove that these two models of distributed computation are essentially equivalent.", "", "When a database is replicated at many sites, maintaining mutual consistency among the sites in the face of updates is a significant problem. This paper describes several randomized algorithms for distributing updates and driving the replicas toward consistency. The algorithms are very simple and require few guarantees from the underlying communication system, yet they ensure that the effect of every update is eventually reflected in all replicas. The cost and performance of the algorithms are tuned by choosing appropriate distributions in the randomization step. The algorithms are closely analogous to epidemics, and the epidemiology literature aids in understanding their behavior. One of the algorithms has been implemented in the Clearinghouse servers of the Xerox Corporate Internet, solving long-standing problems of high traffic and database inconsistency.", "We study the communication complexity of rumor spreading in the random phone-call model. Suppose n players communicate in parallel rounds, where in each round every player calls a randomly selected communication partner. A player u is allowed to exchange messages during a round only with the player that u called, and with all the players that @math received calls from, in that round. 
In every round, a (possibly empty) set of rumors to be distributed among all players is generated, and each of the rumors is initially placed in a subset of the players. Karp et. al Karp2000 showed that no rumor-spreading algorithm that spreads a rumor to all players with constant probability can be both time-optimal, taking O(lg n) rounds, and message-optimal, using O(n) messages per rumor. For address-oblivious algorithms, in particular, they showed that Ω(n lg lg n) messages per rumor are required, and they described an algorithm that matches this bound and takes O(lg n) rounds. We investigate the number of communication bits required for rumor spreading. On the lower-bound side, we establish that any address-oblivious algorithm taking O(lg n) rounds requires Ω(n (b+ lg lg n)) communication bits to distribute a rumor of size b bits. On the upper-bound side, we propose an address-oblivious algorithm that takes O(lg n) rounds and uses O(n(b+ lg lg n lg b)) bits. These results show that, unlike the case for the message complexity, optimality in terms of both the running time and the bit communication complexity is attainable, except for very small rumor sizes b", "Investigates the class of epidemic algorithms that are commonly used for the lazy transmission of updates to distributed copies of a database. These algorithms use a simple randomized communication mechanism to ensure robustness. Suppose n players communicate in parallel rounds in each of which every player calls a randomly selected communication partner. In every round, players can generate rumors (updates) that are to be distributed among all players. Whenever communication is established between two players, each one must decide which of the rumors to transmit. The major problem is that players might not know which rumors their partners have already received. 
For example, a standard algorithm forwarding each rumor from the calling to the called players for Θ(ln n) rounds needs to transmit the rumor Θ(n ln n) times in order to ensure that every player finally receives the rumor with high probability. We investigate whether such a large communication overhead is inherent to epidemic algorithms. On the positive side, we show that the communication overhead can be reduced significantly. We give an algorithm using only O(n ln ln n) transmissions and O(ln n) rounds. In addition, we prove the robustness of this algorithm. On the negative side, we show that any address-oblivious algorithm needs to send Ω(n ln ln n) messages for each rumor, regardless of the number of rounds. Furthermore, we give a general lower bound showing that time and communication optimality cannot be achieved simultaneously using random phone calls, i.e. every algorithm that distributes a rumor in O(ln n) rounds needs ω(n) transmissions.", "Randomized rumor spreading is a classical protocol to disseminate information across a network. At SODA 2008, a quasirandom version of this protocol was proposed and competitive bounds for its run-time were proven. This prompts the question: to what extent does the quasirandom protocol inherit the second principal advantage of randomized rumor spreading, namely robustness against transmission failures? In this paper, we present a result precise up to @math factors. We limit ourselves to the network in which every two vertices are connected by a direct link. Run-times accurate to their leading constants are unknown for all other non-trivial networks. We show that if each transmission reaches its destination with a probability of @math , after @math rounds the quasirandom protocol has informed all @math nodes in the network with probability at least @math . 
Note that this is faster than the intuitively natural @math factor increase over the run-time of approximately @math for the non-corrupted case. We also provide a corresponding lower bound for the classical model. This demonstrates that the quasirandom model is at least as robust as the fully random model despite the greatly reduced degree of independent randomness.", "We study gossip algorithms for the rumor spreading problem which asks each node to deliver a rumor to all nodes in an unknown network. Gossip algorithms allow nodes only to call one neighbor per round and have recently attracted attention as message efficient, simple and robust solutions to the rumor spreading problem. Recently, non-uniform random gossip schemes were devised to allow efficient rumor spreading in networks with bottlenecks. In particular, [Censor-, STOC'12] gave an O(log^3 n) algorithm to solve the 1-local broadcast problem in which each node wants to exchange rumors locally with its 1-neighborhood. By repeatedly applying this protocol one can solve the global rumor spreading quickly for all networks with small diameter, independently of the conductance. This and all prior gossip algorithms for the rumor spreading problem have been inherently randomized in their design and analysis. This resulted in a parallel research direction trying to reduce and determine the amount of randomness needed for efficient rumor spreading. This has been done via lower bounds for restricted models and by designing gossip algorithms with a reduced need for randomness. The general intuition and consensus of these results has been that randomization plays a important role in effectively spreading rumors. In this paper we improves over this state of the art in several ways by presenting a deterministic gossip algorithm that solves the the k-local broadcast problem in 2(k+log n)log n rounds. 
Besides being the first efficient deterministic solution to the rumor spreading problem this algorithm is interesting in many aspects: It is simpler, more natural, more robust and faster than its randomized pendant and guarantees success with certainty instead of with high probability. Its analysis is furthermore simple, self-contained and fundamentally different from prior works.", "We propose a new protocol for the fundamental problem of disseminating a piece of information to all members of a group of n players. It builds upon the classical randomized rumor spreading protocol and several extensions. The main achievements are the following: Our protocol spreads a rumor from one node to all other nodes in the asymptotically optimal time of (1 + o(1)) log2 n. The whole process can be implemented in a way such that only O(nf(n)) calls are made, where f(n) = ω(1) can be arbitrary. In spite of these quantities being close to the theoretical optima, the protocol remains relatively robust against failures; for random node failures, our algorithm again comes arbitrarily close to the theoretical optima. The protocol can be extended to also deal with adversarial node failures. The price for that is only a constant factor increase in the run-time, where the constant factor depends on the fraction of failing nodes the protocol is supposed to cope with. It can easily be implemented such that only O(n) calls to properly working nodes are made. In contrast to the push-pull protocol by [FOCS 2000], our algorithm only uses push operations, i.e., only informed nodes take active actions in the network. On the other hand, we discard address-obliviousness. To the best of our knowledge, this is the first randomized push algorithm that achieves an asymptotically optimal running time.", "We give a new technique to analyze the stopping time of gossip protocols that are based on random linear network coding (RLNC). 
Our analysis drastically simplifies, extends and strengthens previous results. We analyze RLNC gossip in a general framework for network and communication models that encompasses and unifies the models used previously in this context. We show, in most settings for the first time, that it converges with high probability in the information-theoretically optimal time. Most stopping times are of the form O(k + T) where k is the number of messages to be distributed and T is the time it takes to disseminate one message. This means RLNC gossip achieves \"perfect pipelining\". Our analysis directly extends to highly dynamic networks in which the topology can change completely at any time. This remains true even if the network dynamics are controlled by a fully adaptive adversary that knows the complete network state. Virtually nothing besides simple O(kT) sequential flooding protocols was previously known for such a setting. While RLNC gossip works in this wide variety of networks its analysis remains the same and extremely simple. This contrasts with more complex proofs that were put forward to give less strong results for various special cases.", "We consider broadcasting from a fault-free source to all nodes of a completely connected n-node network in the presence of k faulty nodes. Every node can communicate with at most one other node in a unit of time and during this period every pair of communicating nodes can exchange information packets. Faulty nodes cannot send information. Broadcasting is adaptive, i.e., a node schedules its next communication on the basis of information currently available to it. Assuming that the fraction of faulty nodes is bounded by a constant smaller than 1, we construct a broadcasting algorithm working in worst-case time O(log2 n).", "", "We study the relation between the rate at which rumors spread throughout a graph and the vertex expansion of the graph. 
We consider the standard rumor spreading protocol where every node chooses a random neighbor in each round and the two nodes exchange the rumors they know. For any n-node graph with vertex expansion α, we show that this protocol spreads a rumor from a single node to all other nodes in [EQUATION] rounds with high probability. Further, we construct graphs for which Ω(α^(-1) log^2 n) rounds are needed. Our results complement a long series of works that relate rumor spreading to edge-based notions of expansion, resolving one of the most natural questions on the connection between rumor spreading and expansion.", "In this paper, we introduce the problem of Continuous Gossip in which rumors are continually and dynamically injected throughout the network. Each rumor has a deadline, and the goal of a continuous gossip protocol is to ensure good \"Quality of Delivery,\" i.e., to deliver every rumor to every process before the deadline expires. Thus, a trivial solution to the problem of Continuous Gossip is simply for every process to broadcast every rumor as soon as it is injected. Unfortunately, this solution has a high per-round message complexity. Complicating matters, we focus our attention on a highly dynamic network in which processes may continually crash and recover. In order to achieve good per-round message complexity in a dynamic network, processes need to continually form and re-form coalitions that cooperate to spread their rumors throughout the network. The key challenge for a Continuous Gossip protocol is the ongoing adaptation to the ever-changing set of active rumors and non-crashed processes. In this work we show how to address this challenge; we develop randomized and deterministic protocols for Continuous Gossip and prove lower bounds on the per-round message-complexity, indicating that our protocols are close to optimal.
By their very nature, gossip algorithms tend to be distributed and fault tolerant. If done right, they can also be fast and message-efficient. A common model for gossip communication is the random phone call model, in which in each synchronous round each node can PUSH or PULL information to or from a random other node. For example, [FOCS 2000] gave algorithms in this model that spread a message to all nodes in Θ(log n) rounds while sending only O(log log n) messages per node on average. They also showed that at least Θ(log n) rounds are necessary in this model and that algorithms achieving this round-complexity need to send ω(1) messages per node on average. Recently, Avin and Elsasser [DISC 2013], studied the random phone call model with the natural and commonly used assumption of direct addressing. Direct addressing allows nodes to directly contact nodes whose ID (e.g., IP address) was learned before. They show that in this setting, one can \"break the log n barrier\" and achieve a gossip algorithm running in O(√log n) rounds, albeit while using O(√log n) messages per node. In this paper we study the same model and give a simple gossip algorithm which spreads a message in only O(log log n) rounds. We furthermore prove a matching Ω(log log n) lower bound which shows that this running time is best possible. In particular we show that any gossip algorithm takes with high probability at least 0.99 log log n rounds to terminate. Lastly, our algorithm can be tweaked to send only O(1) messages per node on average with only O(log n) bits per message. Our algorithm therefore simultaneously achieves the optimal round-, message-, and bit-complexity for this setting. As all prior gossip algorithms, our algorithm is also robust against failures. In particular, if in the beginning an oblivious adversary fails any F nodes our algorithm still, with high probability, informs all but o(F) surviving nodes.", "In this paper, we study the following randomized broadcasting protocol. 
At some time t a piece of information r is placed at one of the nodes of a graph. In the succeeding steps, each informed node chooses one neighbor, independently and uniformly at random, and informs this neighbor by sending a copy of r to it. We begin by developing tight lower and upper bounds on the runtime of the algorithm described above. First, it is shown that on Δ-regular graphs this algorithm requires at least log_{2-1/Δ} n + log_{Δ/(Δ-1)} n - o(log n) ≥ 1.69 log_2 n rounds to inform all n nodes. Together with a result of Pittel [B. Pittel, On spreading a rumor, SIAM Journal on Applied Mathematics 47 (1) (1987) 213-223] this bound implies that the algorithm has the best performance on complete graphs among all regular graphs. For general graphs, we prove a slightly weaker lower bound of log_{2-1/Δ} n + log_4 n - o(log n) ≥ 1.5 log_2 n, where Δ denotes the maximum degree of G. We also prove two general upper bounds, (1+o(1)) n ln n and O(nΔ/δ), respectively, where δ denotes the minimum degree. The second part of this paper is devoted to the analysis of fault-tolerance. We show that if the informed nodes are allowed to fail in some step with probability 1-p, then the broadcasting time increases by at most a factor of 6/p. As a by-product, we determine the performance of agent-based broadcasting in certain graphs and obtain bounds for the runtime of randomized broadcasting on Cartesian products of graphs." ] }
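Several abstracts in the record above analyze push rumor spreading on the complete graph, where Pittel-style bounds of roughly log_2 n + ln n rounds appear. A minimal simulation sketch (a generic illustration, not code from any cited paper; all names are mine) reproduces this behavior:

```python
import math
import random

def push_broadcast_rounds(n, seed=0):
    """Rounds for push rumor spreading to inform all n nodes of K_n.

    In each synchronous round, every informed node calls one node chosen
    uniformly at random (possibly an already-informed one, which wastes
    the call) and pushes the rumor. Returns the number of rounds until
    every node is informed.
    """
    rng = random.Random(seed)
    informed = {0}          # node 0 starts with the rumor
    rounds = 0
    while len(informed) < n:
        # each informed node picks a random target this round
        calls = {rng.randrange(n) for _ in range(len(informed))}
        informed |= calls
        rounds += 1
    return rounds

# The informed set roughly doubles per round in the growth phase, then a
# coupon-collector tail of about ln(n) rounds reaches the last few nodes,
# matching Pittel's log2(n) + ln(n) estimate for the complete graph.
```

Since the informed set can at most double per round, the result is always at least log_2 n; for n = 10,000 a run typically lands near log_2(n) + ln(n) ≈ 22.5.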
1311.3425
1993649018
Distributed computing models typically assume reliable communication between processors. While such assumptions often hold for engineered networks, e.g., due to underlying error correction protocols, their relevance to biological systems, wherein messages are often distorted before reaching their destination, is quite limited. In this study we aim at bridging this gap by rigorously analyzing a model of communication in large anonymous populations composed of simple agents which interact through short and highly unreliable messages. We focus on the rumor-spreading problem and the majority-consensus problem, two fundamental tasks in distributed computing, and initiate their study under communication noise. Our model for communication is extremely weak and follows the push gossip communication paradigm: In each synchronous round each agent that wishes to send information delivers a message to a random anonymous agent. This communication is further restricted to contain only one bit (essentially representing an opinion). Lastly, the system is assumed to be so noisy that the bit in each message sent is flipped independently with probability 1/2 - e, for some small e > 0. Even in this severely restricted, stochastic and noisy setting we give natural protocols that solve the noisy rumor-spreading and the noisy majority-consensus problems efficiently. Our protocols run in O(log n / e^2) rounds and use O(n log n / e^2) messages/bits in total, where n is the number of agents. These bounds are asymptotically optimal and, in fact, are as fast and message efficient as if each agent would have been simultaneously informed directly by the source. Our efficient, robust, and simple algorithms suggest balancing between silence and transmission, synchronization, and majority-based decisions as important ingredients towards understanding collective communication schemes in anonymous and noisy populations.
Broadcast related problems were studied in other contexts as well, often in settings where communication noise is inherent. Engineers have studied the related problem of sensor network consensus formation in the presence of communication noise and have demonstrated, for example, tradeoffs between consensus quality and running time @cite_14 . Physicists have studied the spreading of epidemics @cite_16 and the formation of consensus around a zealot in voter models @cite_48 @cite_6 within probabilistic settings that include communication noise. These physically inspired studies often assume very simple algorithms and analyze their performance; this differs from a computer science approach, which focuses on identifying the most efficient algorithms. Indeed, broadcast within a noisy voter model setting is expected to yield long convergence times, polynomial in the number of agents.
{ "cite_N": [ "@cite_48", "@cite_14", "@cite_6", "@cite_16" ], "mid": [ "1969871566", "2155723880", "1965044289", "2030539428" ], "abstract": [ "A method for studying the exact properties of a class of inhomogeneous stochastic many-body systems is developed and presented in the framework of a voter model perturbed by the presence of a \"zealot,\" an individual allowed to favor an \"opinion.\" We compute exactly the magnetization of this model and find that in one (1D) and two dimensions (2D) it evolves, algebraically (~ t^(-1/2)) in 1D and much slower (~ 1/ln t) in 2D, towards the unanimity state chosen by the zealot. In higher dimensions the stationary magnetization is no longer uniform: the zealot cannot influence all the individuals. The implications to other physical problems are also pointed out.", "The paper studies average consensus with random topologies (intermittent links) and noisy channels. Consensus with noise in the network links leads to the bias-variance dilemma: running consensus for long reduces the bias of the final average estimate but increases its variance. We present two different compromises to this tradeoff: the A-ND algorithm modifies conventional consensus by forcing the weights to satisfy a persistence condition (slowly decaying to zero); and the A-NC algorithm where the weights are constant but consensus is run for a fixed number of iterations ι̂, then it is restarted and rerun for a total of p̂ runs, and at the end averages the final states of the p̂ runs (Monte Carlo averaging). We use controlled Markov processes and stochastic approximation arguments to prove almost sure convergence of A-ND to a finite consensus limit and compute explicitly the mean square error (mse) (variance) of the consensus limit. We show that A-ND represents the best of both worlds (zero bias and low variance) at the cost of a slow convergence rate; rescaling the weights balances the variance versus the rate of bias reduction (convergence rate). 
In contrast, A-NC, because of its constant weights, converges fast but presents a different bias-variance tradeoff. For the same total number of iterations ι̂·p̂, shorter runs (smaller ι̂) lead to high bias but smaller variance (larger number p̂ of runs to average over). For a static nonrandom network with Gaussian noise, we compute the optimal gain for A-NC to reach, in the shortest number of iterations ι̂·p̂ and with high probability 1-δ, (ε, δ)-consensus (ε residual bias). Our results hold under fairly general assumptions on the random link failures and communication noise.", "We study the voter model with a finite density of zealots: voters that never change opinion. For equal numbers of zealots of each species, the distribution of magnetization (opinions) is Gaussian in the mean-field limit, as well as in one and two dimensions, with a width that is proportional to , where Z is the number of zealots, independent of the total number of voters. Thus just a few zealots can prevent consensus or even the formation of a robust majority.", "The study of social networks, and in particular the spread of disease on networks, has attracted considerable recent attention in the physics community. In this paper, we show that a large class of standard epidemiological models, the so-called susceptible-infective-removed (SIR) models, can be solved exactly on a wide variety of networks. In addition to the standard but unrealistic case of fixed infectiveness time and fixed and uncorrelated probability of transmission between all pairs of individuals, we solve cases in which times and probabilities are nonuniform and correlated. We also consider one simple case of an epidemic in a structured population, that of a sexually transmitted disease in a population divided into men and women. We confirm the correctness of our exact solutions with numerical simulations of SIR epidemics on networks." ] }
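The noisy-communication abstract in the record above assumes every transmitted bit is flipped with probability 1/2 - e, and its O(log n / e^2) bounds rest on the fact that majority voting over Θ(log(1/δ)/e^2) noisy one-bit samples recovers the bit with error probability δ. The following self-contained sketch (illustrative only; function names are mine, not the paper's) makes that concrete:

```python
import random

def majority_decode(bit, eps, k, rng):
    """Receive k independent noisy copies of `bit` (each flipped with
    probability 1/2 - eps, i.e., correct with probability 1/2 + eps)
    and decode by majority vote over the received bits."""
    ones = sum(
        (bit if rng.random() < 0.5 + eps else 1 - bit)
        for _ in range(k)
    )
    return 1 if 2 * ones > k else 0  # use odd k to avoid ties

def empirical_error(eps, k, trials=2000, seed=1):
    """Fraction of trials in which majority decoding of bit 1 fails."""
    rng = random.Random(seed)
    wrong = sum(majority_decode(1, eps, k, rng) != 1 for _ in range(trials))
    return wrong / trials

# A Chernoff bound gives failure probability exp(-2 * eps^2 * k), so
# k = O(log(1/delta) / eps^2) samples suffice for error delta.
```

With eps = 0.1, a single sample errs about 40% of the time, while a few hundred samples drive the error to essentially zero, matching the 1/e^2 scaling in the stated bounds.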
1311.3735
1585729493
Dealing with structured data requires expressive representation formalisms that, however, raise the problem of the computational complexity of the machine learning process. Furthermore, real-world domains require tools able to manage their typical uncertainty. Many statistical relational learning approaches try to deal with these problems by combining the construction of relevant relational features with a probabilistic tool. When the combination is static (static propositionalization), the constructed features are treated as boolean features and used offline as input to a statistical learner; when the combination is dynamic (dynamic propositionalization), feature construction and the probabilistic tool are combined into a single process. In this paper we propose a selective propositionalization method that searches for the optimal set of relational features to be used by a probabilistic learner in order to minimize a loss function. The new propositionalization approach has been combined with the random subspace ensemble method. Experiments on real-world datasets show the validity of the proposed method.
nFOIL @cite_1 and kFOIL @cite_17 are two examples of dynamic propositionalization. Unlike static propositionalization, where the features are generated first and the parameters of a statistical learner are estimated afterwards, they tightly integrate the learning of the features with the statistical propositional learner. The criterion according to which the features are generated is that of a statistical learner: a naïve Bayes classifier in the case of nFOIL and a support vector machine (SVM) for kFOIL. Both methods employ an adaptation of the well-known FOIL algorithm @cite_11, which implements a separate-and-conquer rule learning algorithm.
{ "cite_N": [ "@cite_11", "@cite_1", "@cite_17" ], "mid": [ "1999138184", "", "1608154539" ], "abstract": [ "This paper describes FOIL, a system that learns Horn clauses from data expressed as relations. FOIL is based on ideas that have proved effective in attribute-value learning systems, but extends them to a first-order formalism. This new system has been applied successfully to several tasks taken from the machine learning literature.", "", "A novel and simple combination of inductive logic programming with kernel methods is presented. The kFOIL algorithm integrates the well-known inductive logic programming system FOIL with kernel methods. The feature space is constructed by leveraging FOIL search for a set of relevant clauses. The search is driven by the performance obtained by a support vector machine based on the resulting kernel. In this way, kFOIL implements a dynamic propositionalization approach. Both classification and regression tasks can be naturally handled. Experiments in applying kFOIL to well-known benchmarks in chemoinformatics show the promise of the approach." ] }
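The separate-and-conquer scheme that FOIL adapts can be illustrated with a minimal propositional sketch. FOIL itself learns first-order Horn clauses; the covering loop below, with invented attribute/value literals, only shows the general strategy of greedily specializing a rule until it covers no negatives, then removing the positives it covers and repeating:

```python
def learn_rules(examples, labels, candidate_tests):
    """Separate-and-conquer covering loop (propositional sketch).

    examples: list of dicts mapping attribute -> value
    labels:   parallel list of booleans (True = positive)
    candidate_tests: list of (attribute, value) literals a rule may use
    Returns a list of rules; each rule is a list of (attribute, value) tests.
    """
    pos = [e for e, y in zip(examples, labels) if y]
    neg = [e for e, y in zip(examples, labels) if not y]
    rules = []
    while pos:                                  # "conquer": cover all positives
        rule, covered_pos, covered_neg = [], pos, neg
        while covered_neg:                      # specialize until no negatives covered
            # greedily pick the literal keeping most positives, fewest negatives
            best = max(candidate_tests, key=lambda t: (
                sum(e.get(t[0]) == t[1] for e in covered_pos)
                - sum(e.get(t[0]) == t[1] for e in covered_neg)))
            rule.append(best)
            covered_pos = [e for e in covered_pos if e.get(best[0]) == best[1]]
            covered_neg = [e for e in covered_neg if e.get(best[0]) == best[1]]
            if not covered_pos:                 # dead end: give up on this rule
                return rules
        rules.append(rule)
        pos = [e for e in pos if e not in covered_pos]   # "separate" covered positives
    return rules
```

In nFOIL and kFOIL, the greedy scoring step above is replaced by the statistical learner's own criterion, which is the point of the dynamic integration.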
1311.3735
1585729493
Dealing with structured data needs the use of expressive representation formalisms that, however, puts the problem to deal with the computational complexity of the machine learning process. Furthermore, real world domains require tools able to manage their typical uncertainty. Many statistical relational learning approaches try to deal with these problems by combining the construction of relevant relational features with a probabilistic tool. When the combination is static (static propositionalization), the constructed features are considered as boolean features and used offline as input to a statistical learner; while, when the combination is dynamic (dynamic propositionalization), the feature construction and probabilistic tool are combined into a single process. In this paper we propose a selective propositionalization method that search the optimal set of relational features to be used by a probabilistic learner in order to minimize a loss function. The new propositionalization approach has been combined with the random subspace ensemble method. Experiments on real-world datasets shows the validity of the proposed method.
This approach is, however, sensitive to the ordering of the selected candidate features, which determines the choice of the following ones. Furthermore, in the case of naïve Bayes, as reported in @cite_6, the model can suffer from oversensitivity to redundant and/or irrelevant attributes. Even SVMs have been shown @cite_14 to perform badly in the presence of many irrelevant examples and/or features.
{ "cite_N": [ "@cite_14", "@cite_6" ], "mid": [ "2097839764", "2951961140" ], "abstract": [ "We introduce a method of feature selection for Support Vector Machines. The method is based upon finding those features which minimize bounds on the leave-one-out error. This search can be efficiently performed via gradient descent. The resulting algorithms are shown to be superior to some standard feature selection algorithms on both toy data and real-life problems of face recognition, pedestrian detection and analyzing DNA microarray data.", "In this paper, we examine previous work on the naive Bayesian classifier and review its limitations, which include a sensitivity to correlated features. We respond to this problem by embedding the naive Bayesian induction scheme within an algorithm that c arries out a greedy search through the space of features. We hypothesize that this approach will improve asymptotic accuracy in domains that involve correlated features without reducing the rate of learning in ones that do not. We report experimental results on six natural domains, including comparisons with decision-tree induction, that support these hypotheses. In closing, we discuss other approaches to extending naive Bayesian classifiers and outline some directions for future research." ] }
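The oversensitivity of naïve Bayes to redundant attributes reported in @cite_6 is easy to see numerically: under the conditional-independence assumption, a duplicated feature has its evidence counted twice, pushing the posterior toward an extreme. A toy illustration with hand-picked likelihoods (all numbers are assumptions made up for the example):

```python
def nb_posterior(prior_pos, likelihoods_pos, likelihoods_neg):
    """Posterior P(+ | evidence) for conditionally independent binary features.

    likelihoods_pos/neg: per-feature P(feature observed | class).
    """
    p_pos, p_neg = prior_pos, 1.0 - prior_pos
    for lp, ln in zip(likelihoods_pos, likelihoods_neg):
        p_pos *= lp
        p_neg *= ln
    return p_pos / (p_pos + p_neg)

# One informative feature: P(f|+) = 0.8, P(f|-) = 0.4
single = nb_posterior(0.5, [0.8], [0.4])
# The same feature duplicated: its evidence is counted twice,
# so the posterior is inflated even though no new information was added
doubled = nb_posterior(0.5, [0.8, 0.8], [0.4, 0.4])
```

Here `single` is 2/3 while `doubled` rises to 0.8, which is exactly the overconfidence that correlated or redundant attributes induce.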
1311.3735
1585729493
Dealing with structured data needs the use of expressive representation formalisms that, however, puts the problem to deal with the computational complexity of the machine learning process. Furthermore, real world domains require tools able to manage their typical uncertainty. Many statistical relational learning approaches try to deal with these problems by combining the construction of relevant relational features with a probabilistic tool. When the combination is static (static propositionalization), the constructed features are considered as boolean features and used offline as input to a statistical learner; while, when the combination is dynamic (dynamic propositionalization), the feature construction and probabilistic tool are combined into a single process. In this paper we propose a selective propositionalization method that search the optimal set of relational features to be used by a probabilistic learner in order to minimize a loss function. The new propositionalization approach has been combined with the random subspace ensemble method. Experiments on real-world datasets shows the validity of the proposed method.
Since the effectiveness of learning algorithms strongly depends on the features used, a feature selection step is highly desirable. The aim of feature selection is to find an optimal subset of the input features that leads to high classification performance or, more generally, allows the classification task to be carried out optimally. However, the search for such a subset is an NP-hard problem, so reaching the optimal solution cannot be guaranteed without an exhaustive search of the solution space. Stochastic local search procedures @cite_7 allow one to obtain good solutions without exploring the whole solution space.
{ "cite_N": [ "@cite_7" ], "mid": [ "1591939288" ], "abstract": [ "Prologue Part I. Foundations 1. Introduction 2. SLS Methods 3. Generalised Local Search Machines 4. Empirical Analysis of SLS Algorithms 5. Search Space Structure and SLS Performance Part II. Applications 6. SAT and Constraint Satisfaction 7. MAX-SAT and MAX-CSP 8. Travelling Salesman Problems 9. Scheduling Problems 10. Other Combinatorial Problems Epilogue Glossary" ] }
1311.3735
1585729493
Dealing with structured data needs the use of expressive representation formalisms that, however, puts the problem to deal with the computational complexity of the machine learning process. Furthermore, real world domains require tools able to manage their typical uncertainty. Many statistical relational learning approaches try to deal with these problems by combining the construction of relevant relational features with a probabilistic tool. When the combination is static (static propositionalization), the constructed features are considered as boolean features and used offline as input to a statistical learner; while, when the combination is dynamic (dynamic propositionalization), the feature construction and probabilistic tool are combined into a single process. In this paper we propose a selective propositionalization method that search the optimal set of relational features to be used by a probabilistic learner in order to minimize a loss function. The new propositionalization approach has been combined with the random subspace ensemble method. Experiments on real-world datasets shows the validity of the proposed method.
Differently from a dynamic propositionalization, we first construct a set of features and then adopt a wrapper feature selection approach that uses a stochastic local search procedure embedding a naïve Bayes classifier to select an optimal subset of the features. The optimal subset is searched for using a Greedy Randomized Adaptive Search Procedure (GRASP) @cite_5, and the search is guided by the predictive power of the selected subset, computed with a naïve Bayes approach.
{ "cite_N": [ "@cite_5" ], "mid": [ "2090275183" ], "abstract": [ "Today, a variety of heuristic approaches are available to the operations research practitioner. One methodology that has a strong intuitive appeal, a prominent empirical track record, and is trivial to efficiently implement on parallel processors is GRASP (Greedy Randomized Adaptive Search Procedures). GRASP is an iterative randomized sampling technique in which each iteration provides a solution to the problem at hand. The incumbent solution over all GRASP iterations is kept as the final result. There are two phases within each GRASP iteration: the first intelligently constructs an initial solution via an adaptive randomized greedy function; the second applies a local search procedure to the constructed solution in hope of finding an improvement. In this paper, we define the various components comprising a GRASP and demonstrate, step by step, how to develop such heuristics for combinatorial optimization problems. Intuitive justifications for the observed empirical behavior of the methodology are discussed. The paper concludes with a brief literature review of GRASP implementations and mentions two industrial applications." ] }
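The two GRASP phases described in @cite_5 (a greedy randomized construction, then a local search on the constructed solution, keeping the incumbent over all iterations) can be sketched for wrapper feature selection as follows. The `score` function is a stand-in for the predictive power of a subset (e.g. a naïve Bayes validation accuracy); the RCL size and stopping rules are illustrative choices, not the exact procedure of any cited paper:

```python
import random

def grasp_feature_selection(n_features, score, iterations=20, rcl_size=3, seed=0):
    """GRASP sketch: greedy randomized construction + local search, keep the best.

    score(subset) -> float to maximize over subsets of range(n_features).
    """
    rng = random.Random(seed)
    best, best_score = frozenset(), float('-inf')
    for _ in range(iterations):
        # Phase 1: greedy randomized construction
        subset = set()
        while True:
            candidates = sorted((f for f in range(n_features) if f not in subset),
                                key=lambda f: score(subset | {f}), reverse=True)
            rcl = candidates[:rcl_size]        # restricted candidate list
            if not rcl:
                break
            f = rng.choice(rcl)                # randomized pick among the best
            if score(subset | {f}) <= score(subset):
                break                          # chosen candidate does not improve: stop
            subset.add(f)
        # Phase 2: local search over single add/remove flips
        improved = True
        while improved:
            improved = False
            for f in range(n_features):
                neighbour = subset ^ {f}
                if score(neighbour) > score(subset):
                    subset, improved = neighbour, True
        if score(subset) > best_score:
            best, best_score = frozenset(subset), score(subset)
    return best, best_score
```

On a modular score such as `lambda s: len(s & {0, 2}) - len(s - {0, 2})`, the local search phase reaches the optimum {0, 2} from any constructed start, illustrating why the two phases complement each other.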
1311.3414
2168156367
This paper is about understanding the nature of bug fixing by analyzing thousands of bug fix transactions of software repositories. It then places this learned knowledge in the context of automated program repair. We give extensive empirical results on the nature of human bug fixes at a large scale and a fine granularity with abstract syntax tree differencing. We set up mathematical reasoning on the search space of automated repair and the time to navigate through it. By applying our method on 14 repositories of Java software and 89,993 versioning transactions, we show that not all probabilistic repair models are equivalent.
Purushothaman and Perry @cite_38 studied small commits (in terms of number of lines of code) of proprietary software at Lucent Technology. They showed the impact of small commits with respect to introducing new bugs, and whether they are oriented toward corrective, perfective or adaptive maintenance. German @cite_27 asked different research questions on what he calls "modification requests" (small improvements or bug fixes), in particular with respect to authorship and change coupling (files that are often changed together). Alali and colleagues @cite_11 discussed the relations between different size metrics for commits (# of files, LOC and # of hunks), along the same line as Hattori and Lanza @cite_28, who also consider the relationship between commit keywords and engineering activities. Finally, @cite_12 @cite_3 focus on large commits, to determine whether they reflect specific engineering activities such as license modifications. Compared to these studies on commits, which mostly focus on metadata (e.g. authorship, commit text) or size metrics (number of changed files, number of hunks, etc.), we discuss the content of commits and the kind of source code change they contain. @cite_18 and @cite_32 studied the versioning history to find patterns of change, i.e. groups of similar versioning transactions.
{ "cite_N": [ "@cite_38", "@cite_18", "@cite_28", "@cite_32", "@cite_3", "@cite_27", "@cite_12", "@cite_11" ], "mid": [ "2120703352", "2122734351", "2018638699", "2108400985", "2146648240", "2119029602", "2102212651", "2150198410" ], "abstract": [ "Understanding the impact of software changes has been a challenge since software systems were first developed. With the increasing size and complexity of systems, this problem has become more difficult. There are many ways to identify the impact of changes on the system from the plethora of software artifacts produced during development, maintenance, and evolution. We present the analysis of the software development process using change and defect history data. Specifically, we address the problem of small changes by focusing on the properties of the changes rather than the properties of the code itself. Our study reveals that 1) there is less than 4 percent probability that a one-line change introduces a fault in the code, 2) nearly 10 percent of all changes made during the maintenance of the software under consideration were one-line changes, 3) nearly 50 percent of the changes were small changes, 4) nearly 40 percent of changes to fix faults resulted in further faults, 5) the phenomena of change differs for additions, deletions, and modifications as well as for the number of lines affected, and 6) deletions of up to 10 lines did not cause faults.", "The reasons why software is changed are manyfold; new features are added, bugs have to be fixed, or the consistency of coding rules has to be re-established. Since there are many types of of source code changes we want to explore whether they appear frequently together in time and whether they describe specific development activities. We describe a semi-automated approach to discover patterns of such change types using agglomerative hierarchical clustering. We extracted source code changes of one commercial and two open-source software systems and applied the clustering. 
We found that change type patterns do describe development activities and affect the control flow, the exception flow, or change the API.", "Information contained in versioning system commits has been frequently used to support software evolution research. Concomitantly, some researchers have tried to relate commits to certain activities, e.g., large commits are more likely to be originated from code management activities, while small ones are related to development activities. However, these characterizations are vague, because there is no consistent definition of what is a small or a large commit. In this paper, we study the nature of commits in two dimensions. First, we define the size of commits in terms of number of files, and then we classify commits based on the content of their comments. To perform this study, we use the history log of nine large open source projects.", "Modern software has to evolve to meet the needs of stakeholders; but the nature and scope of this evolution is difficult to anticipate and manage. In this paper, we examine techniques which can discover interesting patterns of evolution in large object-oriented systems. To locate patterns, we use clustering to group together classes which change in the same manner at the same time. Then, we use dynamic time warping to find if a group of classes is similar to another when we ignore the exact moment when changes occur. Groups that exhibit distinctive evolution properties are potential candidates for new evolution patterns. Finally, in a study of two industrial open-source libraries, we identified four new types of change patterns whose usefulness is determined by perusal of the release notes and the architecture.", "Large software systems undergo significant evolution during their lifespan, yet often individual changes are not well documented. 
In this work, we seek to automatically classify large changes into various categories of maintenance tasks — corrective, adaptive, perfective, feature addition, and non-functional improvement — using machine learning techniques. In a previous paper, we found that many commits could be classified easily and reliably based solely on the manual analysis of the commit metadata and commit messages (i.e., without reference to the source code). Our extension is the automation of classification by training Machine Learners on features extracted from the commit metadata, such as the word distribution of a commit message, commit author, and modules modified. We validated the results of the learners via 10-fold cross validation, which achieved accuracies consistently above 50 , indicating good to fair results. We found that the identity of the author of a commit provided much information about the maintenance class of a commit, almost as much as the words of the commit message. This implies that for most large commits, the Source Control System (SCS) commit messages plus the commit author identity is enough information to accurately and automatically categorize the nature of the maintenance task.", "Software is typically improved and modified in small increments. These changes are usually stored in a configuration management or version control system and can be retrieved. We retrieved each individual modification made to a mature software project and proceeded to analyze them. We studied the characteristics of these modification requests (MRs), the interrelationships of the files that compose them, and their authors. We propose several metrics to quantify MRs, and use these metrics to create visualization graphs that can be used to understand the interrelationships.", "Research in the mining of software repositories has frequently ignored commits that include a large number of files (we call these large commits). 
The main goal of this paper is to understand the rationale behind large commits, and if there is anything we can learn from them. To address this goal we performed a case study that included the manual classification of large commits of nine open source projects. The contributions include a taxonomy of large commits, which are grouped according to their intention. We contrast large commits against small commits and show that large commits are more perfective while small commits are more corrective. These large commits provide us with a window on the development practices of maintenance teams.", "The research examines the version histories of nine open source software systems to uncover trends and characteristics of how developers commit source code to version control systems (e.g., subversion). The goal is to characterize what a typical or normal commit looks like with respect to the number of files, number of lines, and number of hunks committed together. The results of these three characteristics are presented and the commits are categorized from extra small to extra large. The findings show that approximately 75 of commits are quite small for the systems examined along all three characteristics. Additionally, the commit messages are examined along with the characteristics. The most common words are extracted from the commit messages and correlated with the size categories of the commits. It is observed that sized categories can be indicative of the types of maintenance activities being performed." ] }
1311.3414
2168156367
This paper is about understanding the nature of bug fixing by analyzing thousands of bug fix transactions of software repositories. It then places this learned knowledge in the context of automated program repair. We give extensive empirical results on the nature of human bug fixes at a large scale and a fine granularity with abstract syntax tree differencing. We set up mathematical reasoning on the search space of automated repair and the time to navigate through it. By applying our method on 14 repositories of Java software and 89,993 versioning transactions, we show that not all probabilistic repair models are equivalent.
@cite_29 manually identified 27 bug fix patterns in Java software. Those patterns are precise enough to be automatically extracted from software repositories. They provide and discuss the frequencies of occurrence of those patterns in 7 open source projects. This work is closely related to ours: we both identify automatically extractable repair actions in software. The main difference is that our repair actions are discovered fully automatically based on AST differencing (there is no prior manual analysis to find them). Furthermore, since our repair actions are meant to be used in an automated program repair setup, they are smaller and more atomic.
{ "cite_N": [ "@cite_29" ], "mid": [ "2149321161" ], "abstract": [ "Twenty-seven automatically extractable bug fix patterns are defined using the syntax components and context of the source code involved in bug fix changes. Bug fix patterns are extracted from the configuration management repositories of seven open source projects, all written in Java (Eclipse, Columba, JEdit, Scarab, ArgoUML, Lucene, and MegaMek). Defined bug fix patterns cover 45.7 to 63.3 of the total bug fix hunk pairs in these projects. The frequency of occurrence of each bug fix pattern is computed across all projects. The most common individual patterns are MC-DAP (method call with different actual parameter values) at 14.9---25.5 , IF-CC (change in if conditional) at 5.6---18.6 , and AS-CE (change of assignment expression) at 6.0---14.2 . A correlation analysis on the extracted pattern instances on the seven projects shows that six have very similar bug fix pattern frequencies. Analysis of if conditional bug fix sub-patterns shows a trend towards increasing conditional complexity in if conditional fixes. Analysis of five developers in the Eclipse projects shows overall consistency with project-level bug fix pattern frequencies, as well as distinct variations among developers in their rates of producing various bug patterns. Overall, data in the paper suggest that developers have difficulty with specific code situations at surprisingly consistent rates. There appear to be broad mechanisms causing the injection of bugs that are largely independent of the type of software being produced." ] }
1311.3414
2168156367
This paper is about understanding the nature of bug fixing by analyzing thousands of bug fix transactions of software repositories. It then places this learned knowledge in the context of automated program repair. We give extensive empirical results on the nature of human bug fixes at a large scale and a fine granularity with abstract syntax tree differencing. We set up mathematical reasoning on the search space of automated repair and the time to navigate through it. By applying our method on 14 repositories of Java software and 89,993 versioning transactions, we show that not all probabilistic repair models are equivalent.
Kim et al. @cite_15 use versioning history to mine project-specific bug fix patterns. Williams and Hollingsworth @cite_24 also learn some repair knowledge from versioning history: they mine how to statically recognize where checks on return values should be inserted. Livshits and Zimmermann @cite_30 mine co-changed method calls. The difference with those close pieces of research is that we enlarge the scope of the mined knowledge: from project-specific knowledge @cite_15 to domain-independent repair actions, and from a single repair action @cite_24 @cite_30 to 41 and 173 repair actions.
{ "cite_N": [ "@cite_24", "@cite_15", "@cite_30" ], "mid": [ "2098629748", "2029853454", "2124666592" ], "abstract": [ "We describe a method to use the source code change history of a software project to drive and help to refine the search for bugs. Based on the data retrieved from the source code repository, we implement a static source code checker that searches for a commonly fixed bug and uses information automatically mined from the source code repository to refine its results. By applying our tool, we have identified a total of 178 warnings that are likely bugs in the Apache Web server source code and a total of 546 warnings that are likely bugs in Wine, an open-source implementation of the Windows API. We show that our technique is more effective than the same static analysis that does not use historical data from the source code repository.", "The change history of a software project contains a rich collection of code changes that record previous development experience. Changes that fix bugs are especially interesting, since they record both the old buggy code and the new fixed code. This paper presents a bug finding algorithm using bug fix memories: a project-specific bug and fix knowledge base developed by analyzing the history of bug fixes. A bug finding tool, BugMem, implements the algorithm. The approach is different from bug finding tools based on theorem proving or static model checking such as Bandera, ESC Java, FindBugs, JLint, and PMD. Since these tools use pre-defined common bug patterns to find bugs, they do not aim to identify project-specific bugs. Bug fix memories use a learning process, so the bug patterns are project-specific, and project-specific bugs can be detected. The algorithm and tool are assessed by evaluating if real bugs and fixes in project histories can be found in the bug fix memories. 
Analysis of five open source projects shows that, for these projects, 19.3 -40.3 of bugs appear repeatedly in the memories, and 7.9 -15.5 of bug and fix pairs are found in memories. The results demonstrate that project-specific bug fix patterns occur frequently enough to be useful as a bug detection technique. Furthermore, for the bug and fix pairs, it is possible to both detect the bug and provide a strong suggestion for the fix. However, there is also a high false positive rate, with 20.8 -32.5 of non-bug containing changes also having patterns found in the memories. A comparison of BugMem with a bug finding tool, PMD, shows that the bug sets identified by both tools are mostly exclusive, indicating that BugMem complements other bug finding tools.", "A great deal of attention has lately been given to addressing software bugs such as errors in operating system drivers or security bugs. However, there are many other lesser known errors specific to individual applications or APIs and these violations of application-specific coding rules are responsible for a multitude of errors. In this paper we propose DynaMine, a tool that analyzes source code check-ins to find highly correlated method calls as well as common bug fixes in order to automatically discover application-specific coding patterns. Potential patterns discovered through mining are passed to a dynamic analysis tool for validation; finally, the results of dynamic analysis are presented to the user.The combination of revision history mining and dynamic analysis techniques leveraged in DynaMine proves effective for both discovering new application-specific patterns and for finding errors when applied to very large applications with many man-years of development and debugging effort behind them. We have analyzed Eclipse and jEdit, two widely-used, mature, highly extensible applications consisting of more than 3,600,000 lines of code combined. 
By mining revision histories, we have discovered 56 previously unknown, highly application-specific patterns. Out of these, 21 were dynamically confirmed as very likely valid patterns and a total of 263 pattern violations were found." ] }
1311.3414
2168156367
This paper is about understanding the nature of bug fixing by analyzing thousands of bug fix transactions of software repositories. It then places this learned knowledge in the context of automated program repair. We give extensive empirical results on the nature of human bug fixes at a large scale and a fine granularity with abstract syntax tree differencing. We set up mathematical reasoning on the search space of automated repair and the time to navigate through it. By applying our method on 14 repositories of Java software and 89,993 versioning transactions, we show that not all probabilistic repair models are equivalent.
The evaluation of AST differencing tools often gives hints about common change actions in software. For instance, @cite_6 showed the six most common types of changes for the Apache web server and the GCC compiler, the most frequent being "Altering existing function bodies". This example clearly shows the difference with our work: we provide change and repair actions at a much finer granularity. Similarly, @cite_19 gives interesting numerical findings about software evolution, such as the evolution of added functions and global variables in C code, but it also remains at a coarser granularity than our analysis. @cite_37 gives some frequency numbers for their change types in order to validate the accuracy and the runtime performance of their distilling algorithm; those numbers were not, and were not meant to be, representative of the overall abundance of change types. @cite_2 discuss the relations between 7 categories of change types, not the detailed change actions as we do.
{ "cite_N": [ "@cite_19", "@cite_37", "@cite_6", "@cite_2" ], "mid": [ "2146957318", "2153150125", "2126859103", "2170339773" ], "abstract": [ "Mining software repositories at the source code level can provide a greater understanding of how software evolves. We present a tool for quickly comparing the source code of different versions of a C program. The approach is based on partial abstract syntax tree matching, and can track simple changes to global variables, types and functions. These changes can characterize aspects of software evolution useful for answering higher level questions. In particular, we consider how they could be used to inform the design of a dynamic software updating system. We report results based on measurements of various versions of popular open source programs. including BIND, OpenSSH, Apache, Vsftpd and the Linux kernel.", "A key issue in software evolution analysis is the identification of particular changes that occur across several versions of a program. We present change distilling, a tree differencing algorithm for fine-grained source code change extraction. For that, we have improved the existing algorithm by for extracting changes in hierarchically structured data. Our algorithm extracts changes by finding both a match between the nodes of the compared two abstract syntax trees and a minimum edit script that can transform one tree into the other given the computed matching. As a result, we can identify fine-grained change types between program versions according to our taxonomy of source code changes. We evaluated our change distilling algorithm with a benchmark that we developed, which consists of 1,064 manually classified changes in 219 revisions of eight methods from three different open source projects. 
We achieved significant improvements in extracting types of source code changes: Our algorithm approximates the minimum edit script 45 percent better than the original change extraction approach by We are able to find all occurring changes and almost reach the minimum conforming edit script, that is, we reach a mean absolute percentage error of 34 percent, compared to the 79 percent reached by the original algorithm. The paper describes both our change distilling algorithm and the results of our evolution.", "This paper describes an automated tool called Dex (difference extractor) for analyzing syntactic and semantic changes in large C-language code bases. It is applied to patches obtained from a source code repository, each of which comprises the code changes made to accomplish a particular task. Dex produces summary statistics characterizing these changes for all of the patches that are analyzed. Dex applies a graph differencing algorithm to abstract semantic graphs (ASGs) representing each version. The differences are then analyzed to identify higher-level program changes. We describe the design of Dex, its potential applications, and the results of applying it to analyze bug fixes from the Apache and GCC projects. The results include detailed information about the nature and frequency of missing condition defects in these projects.", "There exist many approaches that help in pointing developers to the change-prone parts of a software system. Although beneficial, they mostly fall short in providing details of these changes. Fine-grained source code changes (SCC) capture such detailed code changes and their semantics on the statement level. These SCC can be condition changes, interface modifications, inserts or deletions of methods and attributes, or other kinds of statement changes. In this paper, we explore prediction models for whether a source file will be affected by a certain type of SCC. 
These predictions are computed on the static source code dependency graph and use social network centrality measures and object-oriented metrics. For that, we use change data of the Eclipse platform and the Azureus 3 project. The results show that Neural Network models can predict categories of SCC types. Furthermore, our models can output a list of the potentially change-prone files ranked according to their change-proneness, overall and per change type category." ] }
1311.3414
2168156367
This paper is about understanding the nature of bug fixing by analyzing thousands of bug fix transactions of software repositories. It then places this learned knowledge in the context of automated program repair. We give extensive empirical results on the nature of human bug fixes at a large scale and a fine granularity with abstract syntax tree differencing. We set up mathematical reasoning on the search space of automated repair and the time to navigate through it. By applying our method on 14 repositories of Java software and 89,993 versioning transactions, we show that not all probabilistic repair models are equivalent.
We have already mentioned many pieces of work on automated software repair (incl. @cite_14 @cite_8 @cite_34 @cite_1 @cite_0 @cite_16 ). We have discussed in detail the relationship of our work with GenProg. Let us now compare our work with the other closely related papers.
{ "cite_N": [ "@cite_14", "@cite_8", "@cite_1", "@cite_16", "@cite_0", "@cite_34" ], "mid": [ "2121898351", "2122947685", "2156553998", "2007777090", "2024753698", "2962708851" ], "abstract": [ "Tools and analyses that find bugs in software are becoming increasingly prevalent. However, even after the potential false alarms raised by such tools are dealt with, many real reported errors may go unfixed. In such cases the programmers have judged the benefit of fixing the bug to be less than the time cost of understanding and fixing it.The true utility of a bug-finding tool lies not in the number of bugs it finds but in the number of bugs it causes to be fixed.Analyses that find safety-policy violations typically give error reports as annotated backtraces or counterexamples. We propose that bug reports additionally contain a specially-constructed patch describing an example way in which the program could be modified to avoid the reported policy violation. Programmers viewing the analysis output can use such patches as guides, starting points, or as an additional way of understanding what went wrong.We present an algorithm for automatically constructing such patches given model-checking and policy information typically already produced by most such analyses. We are not aware of any previous automatic techniques for generating patches in response to safety policy violations. Our patches can suggest additional code not present in the original program, and can thus help to explain bugs related to missing program elements. In addition, our patches do not introduce any new violations of the given safety policy.To evaluate our method we performed a software engineering experiment, applying our algorithm to over 70 bug reports produced by two off-the-shelf bug-finding tools running on large Java programs. 
Bug reports also accompanied by patches were three times as likely to be addressed as standard bug reports.This work represents an early step toward developing new ways to report bugs and to make it easier for programmers to fix them. Even a minor increase in our ability to fix bugs would be a great increase for the quality of software.", "Automatic program repair has been a longstanding goal in software engineering, yet debugging remains a largely manual process. We introduce a fully automated method for locating and repairing bugs in software. The approach works on off-the-shelf legacy applications and does not require formal specifications, program annotations or special coding practices. Once a program fault is discovered, an extended form of genetic programming is used to evolve program variants until one is found that both retains required functionality and also avoids the defect in question. Standard test cases are used to exercise the fault and to encode program requirements. After a successful repair has been discovered, it is minimized using structural differencing algorithms and delta debugging. We describe the proposed method and report experimental results demonstrating that it can successfully repair ten different C programs totaling 63,000 lines in under 200 seconds, on average.", "Advances in recent years have made it possible in some cases to locate a bug (the source of a failure) automatically. But debugging is also about correcting bugs. Can tools do this automatically? The results reported in this paper, from the new PACHIKA tool, suggest that such a goal may be reachable. PACHIKA leverages differences in program behavior to generate program fixes directly. It automatically summarizes executions to object behavior models, determines differences between passing and failing runs, generates possible fixes, and assesses them via the regression test suite. 
Evaluated on the ASPECTJ bug history, PACHIKA generates a valid fix for 3 out of 18 crashing bugs; each fix pinpoints the bug location and passes the ASPECTJ test suite.", "We present a technique that finds and executes workarounds for faulty Web applications automatically and at runtime. Automatic workarounds exploit the inherent redundancy of Web applications, whereby a functionality of the application can be obtained through different sequences of invocations of Web APIs. In general, runtime workarounds are applied in response to a failure, and require that the application remain in a consistent state before and after the execution of a workaround. Therefore, they are ideally suited for interactive Web applications, since those allow the user to act as a failure detector with minimal effort, and also either use read-only state or manage their state through a transactional data store. In this paper we focus on faults found in the access libraries of widely used Web applications such as Google Maps. We start by classifying a number of reported faults of the Google Maps and YouTube APIs that have known workarounds. From those we derive a number of general and API-specific program-rewriting rules, which we then apply to other faults for which no workaround is known. Our experiments show that workarounds can be readily deployed within Web applications, through a simple client-side plug-in, and that program-rewriting rules derived from elementary properties of a common library can be effective in finding valid and previously unknown workarounds.", "Abstract: Testing and fault localization are very expensive software engineering tasks that have been tried to be automated. Although many successful techniques have been designed, the actual change of the code for fixing the discovered faults is still a human-only task. Even in the ideal case in which automated tools could tell us exactly where the location of a fault is, it is not always trivial how to fix the code. 
In this paper we analyse the possibility of automating the complex task of fixing faults. We propose to model this task as a search problem, and hence to use for example evolutionary algorithms to solve it. We then discuss the potential of this approach and how its current limitations can be addressed in the future. This task is extremely challenging and mainly unexplored in the literature. Hence, this paper only covers an initial investigation and gives directions for future work. A research prototype called JAFF and a case study are presented to give first validation of this approach.", "In program debugging, finding a failing run is only the first step; what about correcting the fault? Can we automate the second task as well as the first? The AutoFix-E tool automatically generates and validates fixes for software faults. The key insights behind AutoFix-E are to rely on contracts present in the software to ensure that the proposed fixes are semantically sound, and on state diagrams using an abstract notion of state based on the boolean queries of a class. Out of 42 faults found by an automatic testing tool in two widely used Eiffel libraries, AutoFix-E proposes successful fixes for 16 faults. Submitting some of these faults to experts shows that several of the proposed fixes are identical or close to fixes proposed by humans." ] }
1311.3414
2168156367
This paper is about understanding the nature of bug fixing by analyzing thousands of bug fix transactions of software repositories. It then places this learned knowledge in the context of automated program repair. We give extensive empirical results on the nature of human bug fixes at a large scale and a fine granularity with abstract syntax tree differencing. We set up mathematical reasoning on the search space of automated repair and the time to navigate through it. By applying our method on 14 repositories of Java software and 89,993 versioning transactions, we show that not all probabilistic repair models are equivalent.
@cite_34 presented AutoFix-E, an automated repair tool which works with contracts. From our perspective, AutoFix-E is based on two repair actions: adding sequences of state-changing statements (called "mutators") and adding a precondition (in the form of an "if" conditional). Their fix schemas are combinations of those two elementary repair actions. In contrast, we have 173 basic repair actions and we are able to predict repair shapes that consist of combinations of 4 repair actions. However, our approach is more theoretical than theirs. Our probabilistic view on repair may speed up their repair approach: it is likely that not all "fix schemas" are equivalent. For instance, according to our experience, adding a precondition is a very common kind of fix in real bugs.
{ "cite_N": [ "@cite_34" ], "mid": [ "2962708851" ], "abstract": [ "In program debugging, finding a failing run is only the first step; what about correcting the fault? Can we automate the second task as well as the first? The AutoFix-E tool automatically generates and validates fixes for software faults. The key insights behind AutoFix-E are to rely on contracts present in the software to ensure that the proposed fixes are semantically sound, and on state diagrams using an abstract notion of state based on the boolean queries of a class. Out of 42 faults found by an automatic testing tool in two widely used Eiffel libraries, AutoFix-E proposes successful fixes for 16 faults. Submitting some of these faults to experts shows that several of the proposed fixes are identical or close to fixes proposed by humans." ] }
1311.3414
2168156367
This paper is about understanding the nature of bug fixing by analyzing thousands of bug fix transactions of software repositories. It then places this learned knowledge in the context of automated program repair. We give extensive empirical results on the nature of human bug fixes at a large scale and a fine granularity with abstract syntax tree differencing. We set up mathematical reasoning on the search space of automated repair and the time to navigate through it. By applying our method on 14 repositories of Java software and 89,993 versioning transactions, we show that not all probabilistic repair models are equivalent.
@cite_26 invented an approach to repair bugs using mutations inspired by the field of mutation testing. The approach uses a fault localization technique to obtain the candidate faulty locations. For a given location, it applies mutations, producing mutants of the program. Eventually, a mutant is classified as "fixed" if it passes the test suite of the program. Their repair actions are composed of mutations of arithmetic, relational, logical, and assignment operators. Compared to our work, mutating a program is a special kind of fix synthesis in which no explicit high-level repair shapes are manipulated. Also, in the light of our results, we expect that a mutation-based repair process would be faster using probabilities on top of the mutation operators.
{ "cite_N": [ "@cite_26" ], "mid": [ "2151497118" ], "abstract": [ "This paper proposes a strategy for automatically fixing faults in a program by combining the processes of mutation and fault localization. Statements that are ranked in order of their suspiciousness of containing faults can then be mutated in the same order to produce possible fixes for the faulty program. The proposed strategy is evaluated against the seven benchmark programs of the Siemens suite and the Ant program. Results indicate that the strategy is effective at automatically suggesting fixes for faults without any human intervention." ] }
1311.3062
1513799234
Consider the Ants Nearby Treasure Search (ANTS) problem introduced by Feinerman, Korman, Lotker, and Sereni (PODC 2012), where @math mobile agents, initially placed at the origin of an infinite grid, collaboratively search for an adversarially hidden treasure. In this paper, the model of is adapted such that the agents are controlled by a (randomized) finite state machine: they possess a constant-size memory and are able to communicate with each other through constant-size messages. Despite the restriction to constant-size memory, we show that their collaborative performance remains the same by presenting a distributed algorithm that matches a lower bound established by on the run-time of any ANTS algorithm.
Our work is strongly inspired by @cite_10 @cite_4 , who introduce the aforementioned ANTS problem and study it assuming that the ants are controlled by a Turing machine (with or without space bounds) and that communication is allowed only in the nest. They show that if the @math agents know a constant approximation of @math , then they can find the food source in time @math . Moreover, they observe a matching lower bound and prove that this lower bound cannot be matched without some knowledge of @math . In contrast to the model studied in @cite_10 @cite_4 , the agents in our model can communicate anywhere on the grid as long as they share the same grid cell. However, due to their weak control unit (an FSM), their communication capabilities are very limited even when they do share the same grid cell. Notice that the stronger computational model assumed there enables an individual agent in their setting to perform tasks well beyond the capabilities of a (single) agent in our setting, e.g., list the grid cells it has already visited or perform spiral searches (which play a major role in their upper bound).
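The spiral search mentioned above is easy to make concrete. The sketch below is an illustration, not the construction from the cited papers: it generates a square spiral around the origin, which covers every cell at Chebyshev distance at most d within O(d^2) moves, so an agent following it finds a target at unknown distance D in O(D^2) time.

```python
def spiral(n_cells):
    """Yield the first n_cells cells of a square spiral around the origin.

    After O(d^2) steps the spiral has covered every grid cell at Chebyshev
    distance at most d from the origin, which is what makes spiral searches
    useful for finding a target at unknown distance.
    """
    x = y = 0
    yield (x, y)
    produced, leg = 1, 1
    while produced < n_cells:
        # Odd-numbered legs go right then up; even-numbered legs go left then down.
        directions = ((1, 0), (0, 1)) if leg % 2 else ((-1, 0), (0, -1))
        for dx, dy in directions:
            for _ in range(leg):
                x, y = x + dx, y + dy
                yield (x, y)
                produced += 1
                if produced == n_cells:
                    return
        leg += 1

cells = list(spiral(25))  # exactly the 5x5 square of cells centered at the origin
```

Note that carrying out such a search requires remembering the current leg length, which is exactly the kind of unbounded counter an FSM-controlled agent does not have.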
{ "cite_N": [ "@cite_10", "@cite_4" ], "mid": [ "2053831174", "1844529830" ], "abstract": [ "We use distributed computing tools to provide a new perspective on the behavior of cooperative biological ensembles. We introduce the Ants Nearby Treasure Search (ANTS) problem, a generalization of the classical cow-path problem [10, 20, 41, 42], which is relevant for collective foraging in animal groups. In the ANTS problem, k identical (probabilistic) agents, initially placed at some central location, collectively search for a treasure in the two-dimensional plane. The treasure is placed at a target location by an adversary and the goal is to find it as fast as possible as a function of both k and D, where D is the distance between the central location and the target. This is biologically motivated by cooperative, central place foraging, such as performed by ants around their nest. In this type of search there is a strong preference to locate nearby food sources before those that are further away. We focus on trying to find what can be achieved if communication is limited or altogether absent. Indeed, to avoid overlaps agents must be highly dispersed making communication difficult. Furthermore, if the agents do not commence the search in synchrony, then even initial communication is problematic. This holds, in particular, with respect to the question of whether the agents can communicate and conclude their total number, k. It turns out that the knowledge of k by the individual agents is crucial for performance. Indeed, it is a straightforward observation that the time required for finding the treasure is Ω(D + D2 k), and we show in this paper that this bound can be matched if the agents have knowledge of k up to some constant approximation. We present a tight bound for the competitive penalty that must be paid, in the running time, if the agents have no information about k. Specifically, this bound is slightly more than logarithmic in the number of agents. 
In addition, we give a lower bound for the setting in which the agents are given some estimation of k. Informally, our results imply that the agents can potentially perform well without any knowledge of their total number k, however, to further improve, they must use some information regarding k. Finally, we propose a uniform algorithm that is both efficient and extremely simple, suggesting its relevance for actual biological scenarios.", "Initial knowledge regarding group size can be crucial for collective performance. We study this relation in the context of the Ants Nearby Treasure Search (ANTS) problem [18], which models natural cooperative foraging behavior such as that performed by ants around their nest. In this problem, k (probabilistic) agents, initially placed at some central location, collectively search for a treasure on the two-dimensional grid. The treasure is placed at a target location by an adversary and the goal is to find it as fast as possible as a function of both k and D, where D is the (unknown) distance between the central location and the target. It is easy to see that T=Ω(D+D2 k) time units are necessary for finding the treasure. Recently, it has been established that O(T) time is sufficient if the agents know their total number k (or a constant approximation of it), and enough memory bits are available at their disposal [18]. In this paper, we establish lower bounds on the agent memory size required for achieving certain running time performances. To the best our knowledge, these bounds are the first non-trivial lower bounds for the memory size of probabilistic searchers. For example, for every given positive constant e, terminating the search by time O(log1−ek ·T) requires agents to use Ω(loglogk) memory bits. From a high level perspective, we illustrate how methods from distributed computing can be useful in generating lower bounds for cooperative biological ensembles. 
Indeed, if experiments that comply with our setting reveal that the ants' search is time efficient, then our theoretical lower bounds can provide some insight on the memory they use for this task." ] }
1311.3062
1513799234
Consider the Ants Nearby Treasure Search (ANTS) problem introduced by Feinerman, Korman, Lotker, and Sereni (PODC 2012), where @math mobile agents, initially placed at the origin of an infinite grid, collaboratively search for an adversarially hidden treasure. In this paper, the model of is adapted such that the agents are controlled by a (randomized) finite state machine: they possess a constant-size memory and are able to communicate with each other through constant-size messages. Despite the restriction to constant-size memory, we show that their collaborative performance remains the same by presenting a distributed algorithm that matches a lower bound established by on the run-time of any ANTS algorithm.
Distributed computing by finite state machines has been studied in several different contexts, including @cite_15 @cite_1 and the recent work of Emek and Wattenhofer @cite_3 , from which we borrowed the agents' communication model. In that regard, the line of work closest to our paper is probably the one studying graph exploration by FSM-controlled agents; see, e.g., @cite_8 .
{ "cite_N": [ "@cite_15", "@cite_1", "@cite_3", "@cite_8" ], "mid": [ "2098579316", "1501097462", "2052474723", "2039936247" ], "abstract": [ "We explore the computational power of networks of small resource-limited mobile agents. We define two new models of computation based on pairwise interactions of finite-state agents in populations of finite but unbounded size. With a fairness condition on interactions, we define the concept of stable computation of a function or predicate, and give protocols that stably compute functions in a class including Boolean combinations of threshold-k, parity, majority, and simple arithmetic. We prove that all stably computable predicates are in NL. With uniform random sampling of pairs to interact, we define the model of conjugating automata and show that any counter machine with O(1) counters of capacity O(n) can be simulated with high probability by a protocol in a population of size n. We prove that all predicates computable with high probability in this model are in P ∩ RL. Several open problems and promising future directions are discussed.", "Population protocols are used as a theoretical model for a collection (or population) of tiny mobile agents that interact with one another to carry out a computation. The agents are identically programmed finite state machines. Input values are initially distributed to the agents, and pairs of agents can exchange state information with other agents when they are close together. The movement pattern of the agents is unpredictable, but subject to some fairness constraints, and computations must eventually converge to the correct output value in any schedule that results from that movement. This framework can be used to model mobile ad hoc networks of tiny devices or collections of molecules undergoing chemical reactions. 
This chapter surveys results that describe what can be computed in various versions of the population protocol model.", "A new model that depicts a network of randomized finite state machines operating in an asynchronous environment is introduced. This model, that can be viewed as a hybrid of the message passing model and cellular automata is suitable for applying the distributed computing lens to the study of networks of sub-microprocessor devices, e.g., biological cellular networks and man-made nano-networks. Although the computation and communication capabilities of each individual device in the new model are, by design, much weaker than those of an abstract computer, we show that some of the most important and extensively studied distributed computing problems can still be solved efficiently.", "A finite automaton, simply referred to as a robot, has to explore a graph whose nodes are unlabeled and whose edge ports are locally labeled at each node. The robot has no a priori knowledge of the topology of the graph or of its size. Its task is to traverse all the edges of the graph. We first show that, for any K-state robot and any d ≥ 3, there exists a planar graph of maximum degree d with at most K + 1 nodes that the robot cannot explore. This bound improves all previous bounds in the literature. More interestingly, we show that, in order to explore all graphs of diameter D and maximum degree d, a robot needs Ω(D log d) memory bits, even if we restrict the exploration to planar graphs. This latter bound is tight. Indeed, a simple DFS up to depth D + 1 enables a robot to explore any graph of diameter D and maximum degree d using a memory of size O(D log d) bits. We thus prove that the worst case space complexity of graph exploration is Θ(D log d) bits." ] }
1311.3062
1513799234
Consider the Ants Nearby Treasure Search (ANTS) problem introduced by Feinerman, Korman, Lotker, and Sereni (PODC 2012), where @math mobile agents, initially placed at the origin of an infinite grid, collaboratively search for an adversarially hidden treasure. In this paper, the model of is adapted such that the agents are controlled by a (randomized) finite state machine: they possess a constant-size memory and are able to communicate with each other through constant-size messages. Despite the restriction to constant-size memory, we show that their collaborative performance remains the same by presenting a distributed algorithm that matches a lower bound established by on the run-time of any ANTS algorithm.
Graph exploration in general is a fundamental problem in computer science. In the typical case, the goal is for a single agent to visit all nodes of a given graph. For example, the exploration of trees was studied in @cite_6 , the exploration of finite undirected graphs in @cite_11 @cite_0 , and the exploration of strongly connected digraphs in @cite_2 @cite_9 . When a deterministic agent explores a graph, memory usage becomes an issue. With randomized agents, it is well known that random walks allow a single agent to visit all nodes of a finite undirected graph in expected polynomial time @cite_5 . The speed-up gained from using multiple random walks was studied in @cite_12 . Notice that on an infinite grid, the expected time it takes for a random walk to reach any designated cell is infinite.
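The finite-graph claim is easy to probe empirically. The sketch below (an illustration with an arbitrarily chosen cycle graph and trial count, not an experiment from any cited work) measures how many steps a simple random walk needs to cover a finite undirected graph; for the cycle on n nodes, the expected cover time is n(n-1)/2.

```python
import random

def cover_steps(adj, start=0, rng=None):
    """Steps until a simple random walk on the undirected graph `adj`
    (an adjacency list) has visited every node at least once."""
    rng = rng or random.Random(0)
    visited = {start}
    v, steps = start, 0
    while len(visited) < len(adj):
        v = rng.choice(adj[v])  # move to a uniformly random neighbor
        visited.add(v)
        steps += 1
    return steps

# Cycle on n nodes: the expected cover time is n * (n - 1) / 2.
n = 20
cycle = [[(i - 1) % n, (i + 1) % n] for i in range(n)]
trials = [cover_steps(cycle, rng=random.Random(seed)) for seed in range(50)]
mean = sum(trials) / len(trials)  # concentrates near 20 * 19 / 2 = 190
```

On the infinite grid no such finite average exists, which is why the algorithms discussed here cannot simply fall back on an undirected random walk.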
{ "cite_N": [ "@cite_9", "@cite_6", "@cite_0", "@cite_2", "@cite_5", "@cite_12", "@cite_11" ], "mid": [ "2008110774", "2017617294", "2049516232", "2077944048", "2156184725", "2160504298", "2051006450" ], "abstract": [ "We consider exploration problems where a robot has to construct a complete map of an unknown environment. We assume that the environment is modeled by a directed, strongly connected graph. The robot's task is to visit all nodes and edges of the graph using the minimum number R of edge traversals. Deng and Papadimitriou [ Proceedings of the 31st Symposium on the Foundations of Computer Science, 1990, pp. 356--361] showed an upper bound for R of dO(d) m and Koutsoupias (reported by Deng and Papadimitriou) gave a lower bound of @math , where m is the number of edges in the graph and d is the minimum number of edges that have to be added to make the graph Eulerian. We give the first subexponential algorithm for this exploration problem, which achieves an upper bound of dO(log d) m. We also show a matching lower bound of @math for our algorithm. Additionally, we give lower bounds of @math , respectively, @math for various other natural exploration algorithms.", "A robot with k-bit memory has to explore a tree whose nodes are unlabeled and edge ports are locally labeled at each node. The robot has no a priori knowledge of the topology of the tree or of its size, and its aim is to traverse all the edges. While O(log Δ) bits of memory suffice to explore any tree of maximum degree Δ if stopping is not required, we show that bounded memory is not sufficient to explore with stop all trees of bounded degree (indeed Ω (log log log n) bits of memory are needed for some such trees of size n). 
For the more demanding task requiring to stop at the starting node after completing exploration, we show a sharper lower bound Ω (log n) on required memory size, and present an algorithm to accomplish this task with O(log2 n)-bit memory, for all n-node trees.", "We present a deterministic, log-space algorithm that solves st-connectivity in undirected graphs. The previous bound on the space complexity of undirected st-connectivity was log4 3(ṡ) obtained by Armoni, Ta-Shma, Wigderson and Zhou (JACM 2000). As undirected st-connectivity is complete for the class of problems solvable by symmetric, nondeterministic, log-space computations (the class SL), this algorithm implies that SL e L (where L is the class of problems solvable by deterministic log-space computations). Independent of our work (and using different techniques), Trifonov (STOC 2005) has presented an O(log n log log n)-space, deterministic algorithm for undirected st-connectivity. Our algorithm also implies a way to construct in log-space a fixed sequence of directions that guides a deterministic walk through all of the vertices of any connected graph. Specifically, we give log-space constructible universal-traversal sequences for graphs with restricted labeling and log-space constructible universal-exploration sequences for general graphs.", "We wish to explore all edges of an unknown directed, strongly connected graph. At each point, we have a map of all nodes and edges we have visited, we can recognize these nodes and edges if we see them again, and we know how many unexplored edges emanate from each node we have visited, but we cannot tell where each leads until we traverse it. We wish to minimize the ratio of the total number of edges traversed divided by the optimum number of traversals, had we known the graph. For Eulerian graphs, this ratio cannot be better than two, and two is achievable by a simple algorithm. 
In contrast, the ratio is unbounded when the deficiency of the graph (the number of edges that have to be added to make it Eulerian) is unbounded. Our main result is an algorithm that achieves a bounded ratio when the deficiency is bounded. © 1999 John Wiley & Sons, Inc. J Graph Theory 32: 265–297, 1999", "", "We pose a new and intriguing question motivated by distributed computing regarding random walks on graphs: How long does it take for several independent random walks, starting from the same vertex, to cover an entire graph? We study the cover time - the expected time required to visit every node in a graph at least once - and we show that for a large collection of interesting graphs, running many random walks in parallel yields a speed-up in the cover time that is linear in the number of parallel walks. We demonstrate that an exponential speed-up is sometimes possible, but that some natural graphs allow only a logarithmic speed-up. A problem related to ours (in which the walks start from some probablistic distribution on vertices) was previously studied in the context of space efficient algorithms for undirected s-t-connectivity and our results yield, in certain cases, an improvement upon some of the earlier bounds.", "" ] }
1311.3037
1549544029
Characterizing large online social networks (OSNs) through node querying is a challenging task. OSNs often impose severe constraints on the query rate, hence limiting the sample size to a small fraction of the total network. Various ad-hoc subgraph sampling methods have been proposed, but many of them give biased estimates and no theoretical basis on the accuracy. In this work, we focus on developing sampling methods for OSNs where querying a node also reveals partial structural information about its neighbors. Our methods are optimized for NoSQL graph databases (if the database can be accessed directly), or utilize Web API available on most major OSNs for graph sampling. We show that our sampling method has provable convergence guarantees on being an unbiased estimator, and it is more accurate than current state-of-the-art methods. We characterize metrics such as node label density estimation and edge label density estimation, two of the most fundamental network characteristics from which other network characteristics can be derived. We evaluate our methods on-the-fly over several live networks using their native APIs. Our simulation studies over a variety of offline datasets show that by including neighborhood information, our method drastically (4-fold) reduces the number of samples required to achieve the same estimation accuracy of state-of-the-art methods.
Maiya and Berger-Wolf @cite_33 empirically investigate a number of subgraph sampling methods (e.g., breadth-first search, random walks) and their performance with respect to various topological properties (e.g., degree, clustering coefficient). They do not, however, use neighborhood information to improve the estimators or provide convergence guarantees. The literature also contains a variety of subgraph sampling works without convergence or accuracy guarantees @cite_28 @cite_3 , which have been empirically tested on a variety of networks. These works @cite_33 @cite_28 @cite_3 also consider subgraph sampling techniques that can preserve other metrics, such as the eigenvalues of the original network @cite_28 , but without accuracy guarantees.
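A minimal sketch of the de-biasing idea underlying walk-based estimators (the classic 1/degree reweighting, not the specific estimator of the paper above): a simple random walk visits node v with stationary probability proportional to deg(v), so weighting each sample by 1/deg(v) yields an asymptotically unbiased estimate of node label density. The star graph and the labels are made up for the illustration.

```python
import random

def rw_label_density(adj, labels, walk_len, rng=None):
    """Estimate the fraction of labeled nodes from a single random walk.

    A simple random walk samples node v with probability proportional to
    deg(v), so each sample is reweighted by 1 / deg(v) to undo the bias.
    """
    rng = rng or random.Random(1)
    v = rng.randrange(len(adj))
    num = den = 0.0
    for _ in range(walk_len):
        weight = 1.0 / len(adj[v])
        num += weight * labels[v]
        den += weight
        v = rng.choice(adj[v])
    return num / den

# Star graph: hub 0 with 9 leaves; only the hub is labeled. The true label
# density is 0.1, yet the walk spends half of its steps at the hub, so the
# naive visit frequency would report roughly 0.5.
adj = [list(range(1, 10))] + [[0]] * 9
labels = [1] + [0] * 9
estimate = rw_label_density(adj, labels, walk_len=1000)  # close to 0.1
```

The same reweighting trick is the starting point for the degree-corrected estimators whose convergence guarantees are studied in this paper.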
{ "cite_N": [ "@cite_28", "@cite_33", "@cite_3" ], "mid": [ "2146008005", "", "2157747946" ], "abstract": [ "Given a huge real graph, how can we derive a representative sample? There are many known algorithms to compute interesting measures (shortest paths, centrality, betweenness, etc.), but several of them become impractical for large graphs. Thus graph sampling is essential.The natural questions to ask are (a) which sampling method to use, (b) how small can the sample size be, and (c) how to scale up the measurements of the sample (e.g., the diameter), to get estimates for the large graph. The deeper, underlying question is subtle: how do we measure success?.We answer the above questions, and test our answers by thorough experiments on several, diverse datasets, spanning thousands nodes and edges. We consider several sampling methods, propose novel methods to check the goodness of sampling, and develop a set of scaling laws that describe relations between the properties of the original and the sample.In addition to the theoretical contributions, the practical conclusions from our work are: Sampling strategies based on edge selection do not perform well; simple uniform random node selection performs surprisingly well. Overall, best performing methods are the ones based on random-walks and \"forest fire\"; they match very accurately both static as well as evolutionary graph patterns, with sample sizes down to about 15 of the original graph.", "", "While data mining in chemoinformatics studied graph data with dozens of nodes, systems biology and the Internet are now generating graph data with thousands and millions of nodes. Hence data mining faces the algorithmic challenge of coping with this significant increase in graph size: Classic algorithms for data analysis are often too expensive and too slow on large graphs. 
While one strategy to overcome this problem is to design novel efficient algorithms, the other is to 'reduce' the size of the large graph by sampling. This is the scope of this paper: We will present novel Metropolis algorithms for sampling a 'representative' small subgraph from the original large graph, with 'representative' describing the requirement that the sample shall preserve crucial graph properties of the original graph. In our experiments, we improve over the pioneering work of Leskovec and Faloutsos (KDD 2006), by producing representative subgraph samples that are both smaller and of higher quality than those produced by other methods from the literature." ] }
1311.3037
1549544029
Characterizing large online social networks (OSNs) through node querying is a challenging task. OSNs often impose severe constraints on the query rate, hence limiting the sample size to a small fraction of the total network. Various ad-hoc subgraph sampling methods have been proposed, but many of them give biased estimates and no theoretical basis on the accuracy. In this work, we focus on developing sampling methods for OSNs where querying a node also reveals partial structural information about its neighbors. Our methods are optimized for NoSQL graph databases (if the database can be accessed directly), or utilize Web API available on most major OSNs for graph sampling. We show that our sampling method has provable convergence guarantees on being an unbiased estimator, and it is more accurate than current state-of-the-art methods. We characterize metrics such as node label density estimation and edge label density estimation, two of the most fundamental network characteristics from which other network characteristics can be derived. We evaluate our methods on-the-fly over several live networks using their native APIs. Our simulation studies over a variety of offline datasets show that by including neighborhood information, our method drastically (4-fold) reduces the number of samples required to achieve the same estimation accuracy of state-of-the-art methods.
Breadth-First-Search (BFS) introduces a large bias towards high-degree nodes, and it is difficult to remove these biases in general, although this can be ameliorated if the network in question is almost random @cite_1 . Random walk (RW) sampling is also biased towards high-degree nodes; however, its bias is known and can easily be corrected @cite_19 . Random walks in the form of Respondent-Driven Sampling (RDS) @cite_15 @cite_6 have been used to estimate population densities from snowball samples in sociological studies. RDS was developed for small social networks with hidden links, whereas our method considers large online social networks without hidden links.
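The degree-bias correction alluded to above can be made concrete: a stationary random walk visits node v with probability proportional to deg(v), so weighting each visit by 1/deg(v) yields an asymptotically unbiased estimator of node-label density. A minimal sketch, with an illustrative toy graph and labels:

```python
import random
from collections import defaultdict

def rw_label_density(adj, labels, start, steps, rng=random.Random(0)):
    """Estimate the fraction of nodes carrying each label from one RW.

    A simple random walk samples node v with stationary probability
    deg(v)/2|E|, so each visit is reweighted by 1/deg(v) to undo
    the degree bias (a Horvitz-Thompson-style correction).
    """
    v = start
    num = defaultdict(float)
    den = 0.0
    for _ in range(steps):
        w = 1.0 / len(adj[v])        # importance weight 1/deg(v)
        num[labels[v]] += w
        den += w
        v = rng.choice(adj[v])       # move to a uniform random neighbor
    return {lab: s / den for lab, s in num.items()}

# Toy graph: a star (hub 0) attached to a triangle (4, 5, 6).
adj = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0, 5, 6],
       5: [4, 6], 6: [4, 5]}
labels = {v: ("hub" if v in (0, 4) else "leaf") for v in adj}
est = rw_label_density(adj, labels, start=0, steps=200000)
```

On this 7-node graph the reweighted estimate converges to the true "hub" fraction of 2/7, whereas raw visit counts would overweight the two high-degree nodes.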
{ "cite_N": [ "@cite_19", "@cite_15", "@cite_1", "@cite_6" ], "mid": [ "2103799649", "2142645441", "2106315062", "2117740169" ], "abstract": [ "Estimating characteristics of large graphs via sampling is a vital part of the study of complex networks. Current sampling methods such as (independent) random vertex and random walks are useful but have drawbacks. Random vertex sampling may require too many resources (time, bandwidth, or money). Random walks, which normally require fewer resources per sample, can suffer from large estimation errors in the presence of disconnected or loosely connected graphs. In this work we propose a new m-dimensional random walk that uses m dependent random walkers. We show that the proposed sampling method, which we call Frontier sampling, exhibits all of the nice sampling properties of a regular random walk. At the same time, our simulations over large real world graphs show that, in the presence of disconnected or loosely connected components, Frontier sampling exhibits lower estimation errors than regular random walks. We also show that Frontier sampling is more suitable than random vertex sampling to sample the tail of the degree distribution of the graph.", "Researchers studying hidden populations–including injection drug users, men who have sex with men, and the homeless–find that standard probability sampling methods are either inapplicable or prohibitively costly because their subjects lack a sampling frame, have privacy concerns, and constitute a small part of the general population. Therefore, researchers generally employ non-probability methods, including location sampling methods such as targeted sampling, and chain-referral methods such as snowball and respondent-driven sampling. Though nonprobability methods succeed in accessing the hidden populations, they have been insufficient for statistical inference. 
This paper extends the respondent-driven sampling method to show that when biases associated with chain-referral methods are analyzed in sufficient detail, a statistical theory of the sampling process can be constructed, based on which the sampling process can be redesigned to permit the derivation of indicators that are not biased and have known levels of precision. The results are based on a study of 190 injection drug users in a small Connecticut city.", "Breadth First Search (BFS) is a widely used approach for sampling large graphs. However, it has been empirically observed that BFS sampling is biased toward high-degree nodes, which may strongly affect the measurement results. In this paper, we quantify and correct the degree bias of BFS. First, we consider a random graph RG(pk) with an arbitrary degree distribution pk. For this model, we calculate the node degree distribution expected to be observed by BFS as a function of the fraction f of covered nodes. We also show that, for RG(pk), all commonly used graph traversal techniques (BFS, DFS, Forest Fire, Snowball Sampling, RDS) have exactly the same bias. Next, we propose a practical BFS-bias correction procedure that takes as input a collected BFS sample together with the fraction f. Our correction technique is exact (i.e., leads to unbiased estimation) for RG(pk). Furthermore, it performs well when applied to a broad range of Internet topologies and to two large BFS samples of Facebook and Orkut networks.", "Standard statistical methods often provide no way to make accurate estimates about the characteristics of hidden populations such as injection drug users, the homeless, and artists. In this paper, we further develop a sampling and estimation technique called respondent-driven sampling, which allows researchers to make asymptotically unbiased estimates about these hidden populations. 
The sample is selected with a snowball-type design that can be done more cheaply, quickly, and easily than other methods currently in use. Further, we can show that under certain specified (and quite general) conditions, our estimates for the percentage of the population with a specific trait are asymptotically unbiased. We further show that these estimates are asymptotically unbiased no matter how the seeds are selected. We conclude with a comparison of respondent-driven samples of jazz musicians in New York and San Francisco, with corresponding institutional samples of jazz musicians from these cities. The results show that ..." ] }
1311.3037
1549544029
Characterizing large online social networks (OSNs) through node querying is a challenging task. OSNs often impose severe constraints on the query rate, hence limiting the sample size to a small fraction of the total network. Various ad-hoc subgraph sampling methods have been proposed, but many of them give biased estimates and no theoretical basis on the accuracy. In this work, we focus on developing sampling methods for OSNs where querying a node also reveals partial structural information about its neighbors. Our methods are optimized for NoSQL graph databases (if the database can be accessed directly), or utilize Web API available on most major OSNs for graph sampling. We show that our sampling method has provable convergence guarantees on being an unbiased estimator, and it is more accurate than current state-of-the-art methods. We characterize metrics such as node label density estimation and edge label density estimation, two of the most fundamental network characteristics from which other network characteristics can be derived. We evaluate our methods on-the-fly over several live networks using their native APIs. Our simulation studies over a variety of offline datasets show that by including neighborhood information, our method drastically (4-fold) reduces the number of samples required to achieve the same estimation accuracy of state-of-the-art methods.
The Metropolis-Hastings RW (MHRW) @cite_20 modifies the RW procedure with the aim of sampling nodes with equal probability. However, in Ribeiro and Towsley @cite_27 we prove that MHRW degree distribution estimates perform poorly in comparison to RWs, more markedly for large-degree nodes, whose error grows proportionally to the degree value. Empirically, the accuracy of RW and MHRW has been compared in @cite_21 @cite_32 and, as predicted by our theoretical results, RW is consistently more accurate than MHRW.
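The MHRW modification can be sketched as follows: from node u, propose a uniformly random neighbor v and accept the move with probability min(1, deg(u)/deg(v)), otherwise stay put; this makes the stationary distribution uniform over nodes. The toy graph below is illustrative:

```python
import random
from collections import Counter

def mhrw_sample(adj, start, steps, rng=random.Random(1)):
    """Metropolis-Hastings random walk targeting the uniform distribution.

    From u, propose a uniform neighbor v and accept with probability
    min(1, deg(u)/deg(v)); on rejection the walk stays at u (a self-loop),
    which also makes the chain aperiodic.
    """
    u = start
    visits = Counter()
    for _ in range(steps):
        visits[u] += 1
        v = rng.choice(adj[u])
        if rng.random() < min(1.0, len(adj[u]) / len(adj[v])):
            u = v                    # accept the move
        # else: reject and remain at u
    return visits

# Same toy graph as before: a star (hub 0) attached to a triangle.
adj = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0, 5, 6],
       5: [4, 6], 6: [4, 5]}
visits = mhrw_sample(adj, start=0, steps=300000)
# Each of the 7 nodes is now visited roughly equally often.
```

Note that uniform visit frequencies do not contradict the accuracy result above: rejections waste samples, which is one source of the larger MHRW estimation error.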
{ "cite_N": [ "@cite_27", "@cite_21", "@cite_32", "@cite_20" ], "mid": [ "1994312128", "2168380307", "2137135938", "2170358724" ], "abstract": [ "Estimating characteristics of large graphs via sampling is vital in the study of complex networks. In this work, we study the Mean Squared Error (MSE) associated with different sampling methods for the degree distribution. These sampling methods include independent random vertex (RV) and random edge (RE) sampling, and crawling methods such as random walks (RWs) and the widely used Metropolis-Hastings algorithm for uniformly sampling vertices (MHRWu). We see that the RW MSE is upper bounded by a quantity that is proportional to the RE MSE and inversely proportional to the spectral gap of the RW transition probability matrix. We also determine conditions under which RW is preferable to RV. Finally, we present an approximation of the MHRWu MSE. We evaluate the accuracy of our approximations and bounds through simulations on large real world graphs.", "short-lived or high degree peers due to the dynamics of peer participation or the heterogeneity of peer degrees, respectively. This paper presents Respondent-Driven Sampling (RDS) as a promising technique for sampling unstructured P2P overlays. This allows one to accurately estimate the distribution of a desired peer property without capturing the entire overlay structure. RDS is a variant of snowball sampling that has been proposed and used in the social sciences to characterize hidden population in a society [9], [13]. We apply the RDS technique to unstructured P2P network and evaluate its performance over a wide range of static and dynamic graphs as well as a widely deployed P2P system. Throughout our evaluation, we compare and contrast the performance of the RDS technique with another sampling technique, namely Metropolized Random Walk (MRW), that we developed in our earlier work [16]. 
Our main findings can be summarized as follows: First, RDS outperforms MRW across all scenarios. In particular, RDS exhibits a significantly better performance than MRW when the overlay structure exhibits a combination of highly skewed node degrees and highly skewed (local) clustering coefficients. Second, our simulation and empirical evaluations reveal that both the RDS and MRW techniques can accurately estimate key peer properties over dynamic unstructured overlays. Third, our empirical evaluations suggest that the efficiency of the two sampling techniques in practice is lower than in our simulations involving synthetic graphs. We attribute this to our inability to capture accurate reference snapshots. The rest of the paper is organized as follows: Section II presents an overview of both the RDS and MRW techniques, and sketches our evaluation methodology. We examine both techniques over variety of static and dynamic graphs in Section III and IV, respectively. Section V presents the empirical evaluation of the two sampling techniques over Gnutella network.", "With more than 250 million active users, Facebook (FB) is currently one of the most important online social networks. Our goal in this paper is to obtain a representative (unbiased) sample of Facebook users by crawling its social graph. In this quest, we consider and implement several candidate techniques. Two approaches that are found to perform well are the Metropolis-Hasting random walk (MHRW) and a re-weighted random walk (RWRW). Both have pros and cons, which we demonstrate through a comparison to each other as well as to the \"ground-truth\" (UNI - obtained through true uniform sampling of FB userIDs). In contrast, the traditional Breadth-First-Search (BFS) and Random Walk (RW) perform quite poorly, producing substantially biased results. In addition to offline performance assessment, we introduce online formal convergence diagnostics to assess sample quality during the data collection process. 
We show how these can be used to effectively determine when a random walk sample is of adequate size and quality for subsequent use (i.e., when it is safe to cease sampling). Using these methods, we collect the first, to the best of our knowledge, unbiased sample of Facebook. Finally, we use one of our representative datasets, collected through MHRW, to characterize several key properties of Facebook.", "This paper presents a detailed examination of how the dynamic and heterogeneous nature of real-world peer-to-peer systems can introduce bias into the selection of representative samples of peer properties (e.g., degree, link bandwidth, number of files shared). We propose the Metropolized Random Walk with Backtracking (MRWB) as a viable and promising technique for collecting nearly unbiased samples and conduct an extensive simulation study to demonstrate that our technique works well for a wide variety of commonly-encountered peer-to-peer network conditions. We have implemented the MRWB algorithm for selecting peer addresses uniformly at random into a tool called ion-sampler. Using the Gnutella network, we empirically show that ion-sampler. yields more accurate samples than tools that rely on commonly-used sampling techniques and results in dramatic improvements in efficiency and scalability compared to performing a full crawl." ] }
1311.2783
265291222
We develop a theory of minors for alternating dimaps --- orientably embedded digraphs where, at each vertex, the incident edges (taken in the order given by the embedding) are directed alternately into, and out of, the vertex. We show that they are related by the triality relation of Tutte. They do not commute in general, though do in many circumstances, and we characterise the situations where they do. The relationship with triality is reminiscent of similar relationships for binary functions, due to the author, so we characterise those alternating dimaps which correspond to binary functions. We give a characterisation of alternating dimaps of at most a given genus, using a finite set of excluded minors. We also use the minor operations to define simple Tutte invariants for alternating dimaps and characterise them. We establish a connection with the Tutte polynomial, and pose the problem of characterising universal Tutte-like invariants for alternating dimaps based on these minor operations.
It is interesting to note that this stream of research, first seen in Tutte's 1948 paper @cite_16 , can be traced back to the same source that eventually gave rise to Tutte's work on minor operations and his eponymous polynomial. Historically, the source of both streams was the famous paper on ``squaring the square'' @cite_10 . The 1948 paper extended the theory to ``triangulating the triangle'' (where all triangles are equilateral) and introduced triality, among other things. However, this stream has not previously seen the development of minor operations or Tutte-like invariants for alternating dimaps.
{ "cite_N": [ "@cite_16", "@cite_10" ], "mid": [ "2964090757", "2043790108" ], "abstract": [ "Let T = (T*, T(Delta)) he a spherical latin bitrade. With each a = (a(1), a(2), a(3)) is an element of T* associate a set of linear equations Eq(T, a) of the form b(1) + b(2) = b(3), where b = (b(1), b(2), b(3)) runs through T* a . Assume a(1) = 0 = a(2) and a(3) = 1. Then Eq(T, a) has in rational numbers a unique solution b(i) = (b) over bar (i). Suppose that (b) over bar (i) not equal (c) over bar (i) for all b, c is an element of T* such that (b) over bar (i) not equal c(i) and i is an element of 1, 2, 3 . We prove that then T(Delta) can be interpreted as a dissection of an equilateral triangle. We also consider group modifications of latin bitrades and show that the methods for generating the dissections can be used for a proof that T* can be embedded into the operational table of a finite abelian group, for every spherical latin bitrade T. (C) 2009 Wiley Periodicals, Inc. J Combin Designs 18: 1-24, 2010", "We consider the problem of dividing a rectangle into a finite number of non-overlapping squares, no two of which are equal. A dissection of a rectangle R into a finite number n of non-overlapping squares is called a squaring of R of order n; and the n squares are the elements of the dissection. The term “elements” is also used for the lengths of the sides of the elements. If there is more than one element and the elements are all unequal, the squaring is called perfect, and R is a perfect rectangle." ] }
1311.2878
2949637267
Most models of social contagion take peer exposure to be a corollary of adoption, yet in many settings, the visibility of one's adoption behavior happens through a separate decision process. In online systems, product designers can define how peer exposure mechanisms work: adoption behaviors can be shared in a passive, automatic fashion, or occur through explicit, active sharing. The consequences of these mechanisms are of substantial practical and theoretical interest: passive sharing may increase total peer exposure but active sharing may expose higher quality products to peers who are more likely to adopt. We examine selection effects in online sharing through a large-scale field experiment on Facebook that randomizes whether or not adopters share Offers (coupons) in a passive manner. We derive and estimate a joint discrete choice model of adopters' sharing decisions and their peers' adoption decisions. Our results show that active sharing enables a selection effect that exposes peers who are more likely to adopt than the population exposed under passive sharing. We decompose the selection effect into two distinct mechanisms: active sharers expose peers to higher quality products, and the peers they share with are more likely to adopt independently of product quality. Simulation results show that the user-level mechanism comprises the bulk of the selection effect. The study's findings are among the first to address downstream peer effects induced by online sharing mechanisms, and can inform design in settings where a surplus of sharing could be viewed as costly.
Recent studies are also beginning to analyze mechanisms of information transmission and their causal interpretations. Since individuals form relationships with similar others @cite_15 , network autocorrelation does not necessarily imply that individuals influence their peers' behaviors @cite_11 @cite_24 . This problem is exacerbated when the assumed exposure model omits backdoor paths which could plausibly account for the correlations @cite_21 . Even given perfect observability of the network process and abundant behavioral data, latent homophily or confounding factors could drive the assortativity in peer outcomes.
{ "cite_N": [ "@cite_24", "@cite_15", "@cite_21", "@cite_11" ], "mid": [ "", "2130354913", "2149084727", "1970874697" ], "abstract": [ "", "Similarity breeds connection. This principle—the homophily principle—structures network ties of every type, including marriage, friendship, work, advice, support, information transfer, exchange, comembership, and other types of relationship. The result is that people's personal networks are homogeneous with regard to many sociodemographic, behavioral, and intrapersonal characteristics. Homophily limits people's social worlds in a way that has powerful implications for the information they receive, the attitudes they form, and the interactions they experience. Homophily in race and ethnicity creates the strongest divides in our personal environments, with age, religion, education, occupation, and gender following in roughly that order. Geographic propinquity, families, organizations, and isomorphic positions in social systems all create contexts in which homophilous relations form. Ties between nonsimilar individuals also dissolve at a higher rate, which sets the stage for the formation of niches (localize...", "The authors consider processes on social networks that can potentially involve three factors: homophily, or the formation of social ties due to matching individual traits; social contagion, also known as social influence; and the causal effect of an individual’s covariates on his or her behavior or other measurable responses. The authors show that generically, all of these are confounded with each other. Distinguishing them from one another requires strong assumptions on the parametrization of the social process or on the adequacy of the covariates used (or both). 
In particular the authors demonstrate, with simple examples, that asymmetries in regression coefficients cannot identify causal effects and that very simple models of imitation (a form of social contagion) can produce substantial correlations between an individual’s enduring traits and his or her choices, even when there is no intrinsic affinity between them. The authors also suggest some possible constructive responses to these results.", "Network-based marketing refers to a collection of marketing techniques that take advantage of links between consumers to increase sales. We concentrate on the consumer networks formed using direct interactions (e.g., communications) between consumers. We survey the diverse literature on such marketing with an emphasis on the statistical methods used and the data to which these methods have been applied. We also provide a discussion of challenges and opportunities for this burgeoning research topic. Our survey highlights a gap in the literature. Because of inadequate data, prior studies have not been able to provide direct, statistical support for the hypothesis that network linkage can directly affect product service adoption. Using a new data set that represents the adoption of a new telecommunications service, we show very strong support for the hypothesis. Specifically, we show three main results: (1) Network neighbors''--those consumers linked to a prior customer--adopt the service at a rate 3--5 times greater than baseline groups selected by the best practices of the firm's marketing team. In addition, analyzing the network allows the firm to acquire new customers who otherwise would have fallen through the cracks, because they would not have been identified based on traditional attributes. (2) Statistical models, built with a very large amount of geographic, demographic and prior purchase data, are significantly and substantially improved by including network information. 
(3) More detailed network information allows the ranking of the network neighbors so as to permit the selection of small sets of individuals with very high probabilities of adoption." ] }
1311.2978
1575014931
In this paper, we explore a set of novel features for authorship attribution of documents. These features are derived from a word network representation of natural language text. As has been noted in previous studies, natural language tends to show complex network structure at word level, with low degrees of separation and scale-free (power law) degree distribution. There has also been work on authorship attribution that incorporates ideas from complex networks. The goal of our paper is to explore properties of these complex networks that are suitable as features for machine-learning-based authorship attribution of documents. We performed experiments on three different datasets, and obtained promising results.
While authorship attribution is a well-known problem in NLP (see, e.g., the surveys by Juola @cite_10 , Stamatatos @cite_6 , and @cite_1 ), complex networks have only recently been applied to the authorship attribution problem.
{ "cite_N": [ "@cite_1", "@cite_10", "@cite_6" ], "mid": [ "1987380777", "", "2126631960" ], "abstract": [ "Statistical authorship attribution has a long history, culminating in the use of modern machine learning classification methods. Nevertheless, most of this work suffers from the limitation of assuming a small closed set of candidate authors and essentially unlimited training text for each. Real-life authorship attribution problems, however, typically fall short of this ideal. Thus, following detailed discussion of previous work, three scenarios are considered here for which solutions to the basic attribution problem are inadequate. In the first variant, the profiling problem, there is no candidate set at all; in this case, the challenge is to provide as much demographic or psychological information as possible about the author. In the second variant, the needle-in-a-haystack problem, there are many thousands of candidates for each of whom we might have a very limited writing sample. In the third variant, the verification problem, there is no closed candidate set but there is one suspect; in this case, the challenge is to determine if the suspect is or is not the author. For each variant, it is shown how machine learning methods can be adapted to handle the special challenges of that variant. © 2009 Wiley Periodicals, Inc.", "", "Authorship attribution supported by statistical or computational methods has a long history starting from the 19th century and is marked by the seminal study of Mosteller and Wallace (1964) on the authorship of the disputed “Federalist Papers.” During the last decade, this scientific field has been developed substantially, taking advantage of research advances in areas such as machine learning, information retrieval, and natural language processing. The plethora of available electronic texts (e.g., e-mail messages, online forum messages, blogs, source code, etc.) 
indicates a wide variety of applications of this technology, provided it is able to handle short and noisy text from multiple candidate authors. In this article, a survey of recent advances of the automated approaches to attributing authorship is presented, examining their characteristics for both text representation and text classification. The focus of this survey is on computational requirements and settings rather than on linguistic or literary issues. We also discuss evaluation methodologies and criteria for authorship attribution studies and list open questions that will attract future work in this area. © 2009 Wiley Periodicals, Inc." ] }
1311.2978
1575014931
In this paper, we explore a set of novel features for authorship attribution of documents. These features are derived from a word network representation of natural language text. As has been noted in previous studies, natural language tends to show complex network structure at word level, with low degrees of separation and scale-free (power law) degree distribution. There has also been work on authorship attribution that incorporates ideas from complex networks. The goal of our paper is to explore properties of these complex networks that are suitable as features for machine-learning-based authorship attribution of documents. We performed experiments on three different datasets, and obtained promising results.
@cite_20 defined a probability measure, called attribution probability, for authorship identification in the Persian language. They obtained good accuracy in authorship classification of Persian books using the power-law exponent of the word networks of those books and the so-called nonextensivity measure (q-parameter) of the networks.
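As a minimal sketch of the kind of word co-occurrence network used in such studies, the snippet below links adjacent tokens and computes a few simple global features; in practice one would also fit a power-law exponent to the degree distribution, which is omitted here. The example sentence is illustrative:

```python
from collections import defaultdict

def cooccurrence_network(tokens):
    """Undirected word co-occurrence network: adjacent tokens are linked."""
    adj = defaultdict(set)
    for a, b in zip(tokens, tokens[1:]):
        if a != b:                   # skip self-loops from repeated tokens
            adj[a].add(b)
            adj[b].add(a)
    return adj

def network_features(adj):
    """Node/edge counts and mean degree -- simple global statistics of
    the kind used (alongside power-law fits) as stylometric features."""
    n = len(adj)
    m = sum(len(nb) for nb in adj.values()) // 2
    return {"nodes": n, "edges": m, "mean_degree": 2 * m / n}

tokens = "the cat sat on the mat and the dog sat on the rug".split()
feats = network_features(cooccurrence_network(tokens))
# feats == {"nodes": 8, "edges": 10, "mean_degree": 2.5}
```

Per-author feature vectors of this kind can then be fed to any standard classifier.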
{ "cite_N": [ "@cite_20" ], "mid": [ "1991638834" ], "abstract": [ "Authorship analysis by means of textual features is an important task in linguistic studies. We employ complex networks theory to tackle this disputed problem. In this work, we focus on some measurable quantities of word co-occurrence network of each book for authorship characterization. Based on the network features, attribution probability is defined for authorship identification. Furthermore, two scaling exponents, q-parameter and α-exponent, are combined to classify personal writing style with acceptable high resolution power. The q-parameter, generally known as the nonextensivity measure, is calculated for degree distribution and the α-exponent comes from a power law relationship between number of links and number of nodes in the co-occurrence network constructed for different books written by each author. The applicability of the presented method is evaluated in an experiment with thirty six books of five Persian litterateurs. Our results show high accuracy rate in authorship attribution." ] }
1311.2839
2951850108
We study gossip algorithms for the rumor spreading problem which asks one node to deliver a rumor to all nodes in an unknown network. We present the first protocol for any expander graph @math with @math nodes such that, the protocol informs every node in @math rounds with high probability, and uses @math random bits in total. The runtime of our protocol is tight, and the randomness requirement of @math random bits almost matches the lower bound of @math random bits for dense graphs. We further show that, for many graph families, polylogarithmic number of random bits in total suffice to spread the rumor in @math rounds. These results together give us an almost complete understanding of the randomness requirement of this fundamental gossip process. Our analysis relies on unexpectedly tight connections among gossip processes, Markov chains, and branching programs. First, we establish a connection between rumor spreading processes and Markov chains, which is used to approximate the rumor spreading time by the mixing time of Markov chains. Second, we show a reduction from rumor spreading processes to branching programs, and this reduction provides a general framework to derandomize gossip processes. In addition to designing rumor spreading protocols, these novel techniques may have applications in studying parallel and multiple random walks, and randomness complexity of distributed algorithms.
The problem of determining and reducing the amount of randomness required for rumor spreading has been studied extensively in recent years. Doerr, Friedrich, and Sauerwald proposed a quasi-random version of the rumor spreading push protocol. In contrast to the @math random bits used in the standard push model, the quasi-random rumor spreading model uses @math random bits, and has been shown to be efficient on several graph topologies @cite_4 @cite_29 . Further progress along this line includes @cite_21 @cite_30 . Besides this, researchers have also studied the question of designing randomness-efficient or deterministic protocols for similar problems. For instance, the authors of @cite_11 presented a deterministic gossip algorithm for the @math -local broadcast and the global broadcast problems. However, the algorithms in @cite_11 require that all nodes in the graph have unique identifiers (UIDs), and that every node knows its own and its neighbors' UIDs. Hence the techniques developed there cannot be applied to our setting.
{ "cite_N": [ "@cite_30", "@cite_4", "@cite_29", "@cite_21", "@cite_11" ], "mid": [ "2148754694", "2078958987", "1771950282", "2009267577", "2952550120" ], "abstract": [ "We consider the classical rumor spreading problem, where a piece of information must be disseminated from a single node to all n nodes of a given network. We devise two simple push-based protocols, in which nodes choose the neighbor they send the information to in each round using pairwise independent hash functions, or a pseudo-random generator, respectively. For several well-studied topologies our algorithms use exponentially fewer random bits than previous protocols. For example, in complete graphs, expanders, and random graphs only a polylogarithmic number of random bits are needed in total to spread the rumor in O(log n) rounds with high probability. Previous explicit algorithms, e.g., [10, 17, 6, 15], require Ω(n) random bits to achieve the same round complexity. For complete graphs, the amount of randomness used by our hashing-based algorithm is within an O(log n)-factor of the theoretical minimum determined by Giakkoupis and Woelfel [15].", "In this paper, we provide a detailed comparison between a fully randomized protocol for rumor spreading on a complete graph and a quasirandom protocol introduced by Doerr, Friedrich, and Sauerwald [Quasirandom rumor spreading, in Proceedings of the 19th Annual ACM-SIAM Symposium on Discrete Algorithms, ACM, New York, SIAM, Philadelphia, 2008, pp. 773-781]. In the former, initially there is one vertex which holds a piece of information, and during each round every one of the informed vertices chooses uniformly at random and independently one of its neighbors and informs it. In the quasirandom version of this method (cf. Doerr, Friedrich, and Sauerwald) each vertex has a cyclic list of its neighbors. Once a vertex has been informed, it chooses uniformly at random only one neighbor. In the following round, it informs this neighbor, and at each subsequent round it picks the next neighbor from its list and informs it. We give a precise analysis of the evolution of the quasirandom protocol on the complete graph with @math vertices and show that it evolves essentially in the same way as the randomized protocol. In particular, if @math denotes the number of rounds that are needed until all vertices are informed, we show that for any slowly growing function @math , we have @math , with probability @math .", "Randomized rumor spreading is an efficient protocol to distribute information in networks. Recently, a quasirandom version has been proposed and proven to work equally well on many graphs and better for sparse random graphs. In this work we show three main results for the quasirandom rumor spreading model. We exhibit a natural expansion property for networks which suffices to make quasirandom rumor spreading inform all nodes of the network in logarithmic time with high probability. This expansion property is satisfied, among others, by many expander graphs, random regular graphs, and Erdős-Rényi random graphs. For all network topologies, we show that if one of the push or pull model works well, so does the other. We also show that quasirandom rumor spreading is robust against transmission failures. If each message sent out gets lost with probability f , then the runtime increases only by a factor of @math .", "We investigate the randomness requirements of the classical rumor spreading problem on fully connected graphs with n vertices. In the standard random protocol, where each node that knows the rumor sends it to a randomly chosen neighbor in every round, each node needs O((log n)^2) random bits in order to spread the rumor in O(log n) rounds with high probability (w.h.p.). For the simple quasirandom rumor spreading protocol proposed by Doerr, Friedrich, and Sauerwald (2008), [log n] random bits per node are sufficient. A lower bound by Doerr and Fouz (2009) shows that this is asymptotically tight for a slightly more general class of protocols, the so-called gate-model. In this paper, we consider general rumor spreading protocols. We provide a simple push-protocol that requires only a total of O(n log log n) random bits (i.e., on average O(log log n) bits per node) in order to spread the rumor in O(log n) rounds w.h.p. We also investigate the theoretical minimal randomness requirements of efficient rumor spreading. We prove the existence of a (non-uniform) push-protocol for which a total of 2 log n + log log n + o(log log n) random bits suffice to spread the rumor in log n + ln n + O(1) rounds with probability 1 − o(1). This is contrasted by a simple time-randomness tradeoff for the class of all rumor spreading protocols, according to which any protocol that uses log n − log log n − ω(1) random bits requires ω(log n) rounds to spread the rumor.", "We study gossip algorithms for the rumor spreading problem which asks each node to deliver a rumor to all nodes in an unknown network. Gossip algorithms allow nodes only to call one neighbor per round and have recently attracted attention as message efficient, simple and robust solutions to the rumor spreading problem. Recently, non-uniform random gossip schemes were devised to allow efficient rumor spreading in networks with bottlenecks. In particular, [Censor-, STOC'12] gave an O(log^3 n) algorithm to solve the 1-local broadcast problem in which each node wants to exchange rumors locally with its 1-neighborhood. By repeatedly applying this protocol one can solve the global rumor spreading quickly for all networks with small diameter, independently of the conductance. This and all prior gossip algorithms for the rumor spreading problem have been inherently randomized in their design and analysis. This resulted in a parallel research direction trying to reduce and determine the amount of randomness needed for efficient rumor spreading. This has been done via lower bounds for restricted models and by designing gossip algorithms with a reduced need for randomness. The general intuition and consensus of these results has been that randomization plays an important role in effectively spreading rumors. In this paper we improve on this state of the art in several ways by presenting a deterministic gossip algorithm that solves the k-local broadcast problem in 2(k+log n)log n rounds. Besides being the first efficient deterministic solution to the rumor spreading problem, this algorithm is interesting in many aspects: it is simpler, more natural, more robust, and faster than its randomized counterpart and guarantees success with certainty instead of with high probability. Its analysis is furthermore simple, self-contained and fundamentally different from prior works." ] }
1311.2702
2044357496
Writing documentation about software internals is rarely considered a rewarding activity. It is highly time-consuming and the resulting documentation is fragile when the software is continuously evolving in a multi-developer setting. Unfortunately, traditional programming environments poorly support the writing and maintenance of documentation. Consequences are severe as the lack of documentation on software structure negatively impacts the overall quality of the software product. We show that using a controlled natural language with a reasoner and a query engine is a viable technique for verifying the consistency and accuracy of documentation and source code. Using ACE, a state-of-the-art controlled natural language, we present positive results on the comprehensibility and the general feasibility of creating and verifying documentation. As a case study, we used automatic documentation verification to identify and fix severe flaws in the architecture of a non-trivial piece of software. Moreover, a user experiment shows that our language is faster and easier to learn and understand than other formal languages for software documentation.
The use of logic to describe and check software architecture has been explored in depth @cite_41 @cite_35 . In this context, a number of different tools have been implemented, e.g., Reflexion Models @cite_38 , ArchJava @cite_0 , Lattix Inc.'s dependency manager @cite_2 , and Intentional Views @cite_27 . However, in contrast to the approach presented here, the resulting formal models of the architecture cannot be read and queried in a natural way and, as far as we are aware, they have not been applied in practice to document software. Users are supposed to learn how to read and write statements in some sort of formal logic. This could be a major hindrance to the broad adoption of such systems, especially if the model of the architecture has to be verified by non-computer-scientists.
{ "cite_N": [ "@cite_35", "@cite_38", "@cite_41", "@cite_0", "@cite_27", "@cite_2" ], "mid": [ "114886242", "2021672791", "", "2133254848", "1617811580", "2169291221" ], "abstract": [ "", "Software engineers often use high-level models (for instance, box and arrow sketches) to reason and communicate about an existing software system. One problem with high-level models is that they are almost always inaccurate with respect to the system's source code. We have developed an approach that helps an engineer use a high-level model of the structure of an existing software system as a lens through which to see a model of that system's source code. In particular, an engineer defines a high-level model and specifies how the model maps to the source. A tool then computes a software reflexion model that shows where the engineer's high-level model agrees with and where it differs from a model of the source. The paper provides a formal characterization of reflexion models, discusses practical aspects of the approach, and relates experiences of applying the approach and tools to a number of different systems. The illustrative example used in the paper describes the application of reflexion models to NetBSD, an implementation of Unix comprised of 250,000 lines of C code. In only a few hours, an engineer computed several reflexion models that provided him with a useful, global overview of the structure of the NetBSD virtual memory subsystem. The approach has also been applied to aid in the understanding and experimental reengineering of the Microsoft Excel spreadsheet product.", "", "Software architecture describes the structure of a system, enabling more effective design, program understanding, and formal analysis. However, existing approaches decouple implementation code from architecture, allowing inconsistencies, causing confusion, violating architectural properties, and inhibiting software evolution. ArchJava is an extension to Java that seamlessly unifies software architecture with implementation, ensuring that the implementation conforms to architectural constraints. A case study applying ArchJava to a circuit-design application suggests that ArchJava can express architectural structure effectively within an implementation, and that it can aid in program understanding and software evolution.", "Intensional views and relations have been proposed as a way of actively documenting high-level structural regularities in the source code of a software system. By checking conformance of these intensional views and relations against the source code, they supposedly facilitate a variety of software maintenance and evolution tasks. In this paper, by performing a case study on three different versions of the SmallWiki application, we critically analyze to what extent the model of intensional views and its current generation of tools provide support for co-evolving high-level design and source code of a software system.", "An approach to managing the architecture of large software systems is presented. Dependencies are extracted from the code by a conventional static analysis, and shown in a tabular form known as the 'Dependency Structure Matrix' (DSM). A variety of algorithms are available to help organize the matrix in a form that reflects the architecture and highlights patterns and problematic dependencies. A hierarchical structure obtained in part by such algorithms, and in part by input from the user, then becomes the basis for 'design rules' that capture the architect's intent about which dependencies are acceptable. The design rules are applied repeatedly as the system evolves, to identify violations, and keep the code and its architecture in conformance with one another. The analysis has been implemented in a tool called LDM which has been applied in several commercial projects; in this paper, a case study application to Haystack, an information retrieval system, is described." ] }
1311.2702
2044357496
Writing documentation about software internals is rarely considered a rewarding activity. It is highly time-consuming and the resulting documentation is fragile when the software is continuously evolving in a multi-developer setting. Unfortunately, traditional programming environments poorly support the writing and maintenance of documentation. Consequences are severe as the lack of documentation on software structure negatively impacts the overall quality of the software product. We show that using a controlled natural language with a reasoner and a query engine is a viable technique for verifying the consistency and accuracy of documentation and source code. Using ACE, a state-of-the-art controlled natural language, we present positive results on the comprehensibility and the general feasibility of creating and verifying documentation. As a case study, we used automatic documentation verification to identify and fix severe flaws in the architecture of a non-trivial piece of software. Moreover, a user experiment shows that our language is faster and easier to learn and understand than other formal languages for software documentation.
The authors of @cite_25 present an approach to using controlled natural language in the context of software engineering that is in some respects very similar to ours. Their system allows developers to ask questions about their source code in a controlled language. The difference from our approach is that only questions are supported; there is no possibility to augment the underlying model with annotations or documentation statements.
{ "cite_N": [ "@cite_25" ], "mid": [ "2153034577" ], "abstract": [ "The feature list of modern IDEs is steadily growing and mastering these tools becomes more and more demanding, especially for novice programmers. Despite their remarkable capabilities, IDEs often still cannot directly answer the questions that arise during program comprehension tasks. Instead developers have to map their questions to multiple concrete queries that can be answered only by combining several tools and examining the output of each of them manually to distill an appropriate answer. Existing approaches have in common that they are either limited to a set of predefined, hardcoded questions, or that they require to learn a specific query language only suitable for that limited purpose. We present a framework to query for information about a software system using guided-input natural language resembling plain English. For that, we model data extracted by classical software analysis tools with an OWL ontology and use knowledge processing technologies from the Semantic Web to query it. We use a case study to demonstrate how our framework can be used to answer queries about static source code information for program comprehension purposes." ] }
1311.2702
2044357496
Writing documentation about software internals is rarely considered a rewarding activity. It is highly time-consuming and the resulting documentation is fragile when the software is continuously evolving in a multi-developer setting. Unfortunately, traditional programming environments poorly support the writing and maintenance of documentation. Consequences are severe as the lack of documentation on software structure negatively impacts the overall quality of the software product. We show that using a controlled natural language with a reasoner and a query engine is a viable technique for verifying the consistency and accuracy of documentation and source code. Using ACE, a state-of-the-art controlled natural language, we present positive results on the comprehensibility and the general feasibility of creating and verifying documentation. As a case study, we used automatic documentation verification to identify and fix severe flaws in the architecture of a non-trivial piece of software. Moreover, a user experiment shows that our language is faster and easier to learn and understand than other formal languages for software documentation.
Kimmig et al. @cite_36 propose an approach for querying source code using natural language. Users write queries as simple questions that follow the pattern question word - verb - noun - verb (``Where is balance read?'', with ``balance'' being an instance variable). Our approach differs on two points: (i) with our approach, new concepts and relations are easy to define, whereas their approach is fairly limited in that respect; (ii) they apply a sophisticated procedure of cleaning and tokenizing the natural language queries before formalizing them, whereas with our approach, documentation is written in a precise and unambiguous language in the first place. Buse and Weimer @cite_10 synthesize succinct human-readable documentation from software modifications. They employ an approach based on code summarization and symbolic execution, summarizing the runtime conditions necessary for the control flow to reach a modified statement. It is unclear, however, whether such techniques are useful for more than helping to write version log messages.
{ "cite_N": [ "@cite_36", "@cite_10" ], "mid": [ "2093440971", "2057049321" ], "abstract": [ "One common task of developing or maintaining software is searching the source code for information like specific method calls or write accesses to certain fields. This kind of information is required to correctly implement new features and to solve bugs. This paper presents an approach for querying source code with natural language.", "Source code modifications are often documented with log messages. Such messages are a key component of software maintenance: they can help developers validate changes, locate and triage defects, and understand modifications. However, this documentation can be burdensome to create and can be incomplete or inaccurate. We present an automatic technique for synthesizing succinct human-readable documentation for arbitrary program differences. Our algorithm is based on a combination of symbolic execution and a novel approach to code summarization. The documentation it produces describes the effect of a change on the runtime behavior of a program, including the conditions under which program behavior changes and what the new behavior is. We compare our documentation to 250 human-written log messages from 5 popular open source projects. Employing a human study, we find that our generated documentation is suitable for supplementing or replacing 89 of existing log messages that directly describe a code change." ] }
1311.2032
2952520656
For a set of @math points in the plane, this paper presents simple kinetic data structures (KDS's) for solutions to some fundamental proximity problems, namely, the all nearest neighbors problem, the closest pair problem, and the Euclidean minimum spanning tree (EMST) problem. Also, the paper introduces KDS's for maintenance of two well-studied sparse proximity graphs, the Yao graph and the Semi-Yao graph. We use sparse graph representations, the Pie Delaunay graph and the Equilateral Delaunay graph, to provide new solutions for the proximity problems. Then we design KDS's that efficiently maintain these sparse graphs on a set of @math moving points, where the trajectory of each point is assumed to be an algebraic function of constant maximum degree @math . We use the kinetic Pie Delaunay graph and the kinetic Equilateral Delaunay graph to create KDS's for maintenance of the Yao graph, the Semi-Yao graph, all the nearest neighbors, the closest pair, and the EMST. Our KDS's use @math space and @math preprocessing time. We provide the first KDS's for maintenance of the Semi-Yao graph and the Yao graph. Our KDS processes @math (resp. @math ) events to maintain the Semi-Yao graph (resp. the Yao graph); each event can be processed in time @math in an amortized sense. Here, @math is an extremely slow-growing function. Our KDS for maintenance of all the nearest neighbors and the closest pair processes @math events. For maintenance of the EMST, our KDS processes @math events. For all three of these problems, each event can be handled in time @math in an amortized sense. We improve the previous randomized kinetic algorithm for maintenance of all the nearest neighbors by Agarwal, Kaplan, and Sharir, and the previous EMST KDS by Rahmati and Zarei.
The nearest neighbor graph is a subgraph of both the Delaunay triangulation and the Euclidean minimum spanning tree. Thus, by maintaining either of these supergraphs over time, all the nearest neighbors can also be maintained. In particular, by using the kinetic Delaunay triangulation @cite_22 or the kinetic Euclidean minimum spanning tree @cite_26 , together with a basic tool in the KDS framework called the kinetic tournament tree @cite_4 , we can maintain all the nearest neighbors over time. For both of these approaches, the number of internal events is nearly cubic in @math . Since the number of external events for all the nearest neighbors is nearly quadratic, neither approach gives an efficient KDS as defined above.
{ "cite_N": [ "@cite_26", "@cite_4", "@cite_22" ], "mid": [ "2175772512", "2086474457", "" ], "abstract": [ "This paper presents a kinetic data structure (KDS) for maintenance of the Euclidean minimum spanning tree (EMST) on a set of moving points in 2-dimensional space. For a set of n points moving in the plane we build a KDS of size O(n) in O(n log n) preprocessing time by which the EMST is maintained efficiently during the motion. This is done by applying the required changes to the combinatorial structure of the EMST which is changed in discrete timestamps. We assume that the motion of the points, i.e. x and y coordinates of the points, are defined by algebraic functions of constant maximum degree. In terms of the KDS performance parameters, our KDS is responsive, local, and compact. The presented KDS is based on monitoring changes of the Delaunay triangulation of the points and edge-length changes of the edges of the current Delaunay triangulation.", "A kinetic data structure (KDS) maintains an attribute of interest in a system of geometric objects undergoing continuous motion. In this paper we develop a conceptual framework for kinetic data structures, we propose a number of criteria for the quality of such structures, and we describe a number of fundamental techniques for their design. We illustrate these general concepts by presenting kinetic data structures for maintaining the convex hull and the closest pair of moving points in the plane; these structures behave well according to the proposed quality criteria for KDSs.", "" ] }
1311.2032
2952520656
For a set of @math points in the plane, this paper presents simple kinetic data structures (KDS's) for solutions to some fundamental proximity problems, namely, the all nearest neighbors problem, the closest pair problem, and the Euclidean minimum spanning tree (EMST) problem. Also, the paper introduces KDS's for maintenance of two well-studied sparse proximity graphs, the Yao graph and the Semi-Yao graph. We use sparse graph representations, the Pie Delaunay graph and the Equilateral Delaunay graph, to provide new solutions for the proximity problems. Then we design KDS's that efficiently maintain these sparse graphs on a set of @math moving points, where the trajectory of each point is assumed to be an algebraic function of constant maximum degree @math . We use the kinetic Pie Delaunay graph and the kinetic Equilateral Delaunay graph to create KDS's for maintenance of the Yao graph, the Semi-Yao graph, all the nearest neighbors, the closest pair, and the EMST. Our KDS's use @math space and @math preprocessing time. We provide the first KDS's for maintenance of the Semi-Yao graph and the Yao graph. Our KDS processes @math (resp. @math ) events to maintain the Semi-Yao graph (resp. the Yao graph); each event can be processed in time @math in an amortized sense. Here, @math is an extremely slow-growing function. Our KDS for maintenance of all the nearest neighbors and the closest pair processes @math events. For maintenance of the EMST, our KDS processes @math events. For all three of these problems, each event can be handled in time @math in an amortized sense. We improve the previous randomized kinetic algorithm for maintenance of all the nearest neighbors by Agarwal, Kaplan, and Sharir, and the previous EMST KDS by Rahmati and Zarei.
Basch, Guibas, and Zhang @cite_24 used a multidimensional range tree to maintain the closest pair. Their KDS uses @math space and processes @math events, each in worst-case time @math . Their KDS, which can also be used in higher dimensions, is responsive, efficient, compact, and local. The same KDS, with the same complexities as @cite_24 , was independently presented by Agarwal, Kaplan, and Sharir @cite_11 ; the KDS by Agarwal et al. supports point insertions and deletions. Fu and Lee @cite_17 proposed the first kinetic algorithm for maintenance of an EMST on a set of @math moving points. Their algorithm uses @math preprocessing time and @math space, where @math is the maximum possible number of changes in the EMST from time @math to @math . At any given time, the algorithm constructs the EMST in linear time.
{ "cite_N": [ "@cite_24", "@cite_17", "@cite_11" ], "mid": [ "1987458256", "1979207937", "2115704441" ], "abstract": [ "A kinetic data structure for the maintenance of a multidimensional range search tree is introduced. This structure is used as a building block to obtain kinetic data structures for two classical geometric proximity problems in arbitrary dimensions: the first structure maintains the closest pair of a set of continuously moving points, and is provably efficient. The second structure maintains a spanning tree of the moving points whose cost remains within some prescribed factor of the minimum spanning tree.", "We propose three indexing schemes for storing a set S of N points in the plane, each moving along a linear trajectory, so that any query of the following form can be answered quickly: Given a rectangle R and a real value t, report all K points of S that lie inside R at time t. We first present an indexing structure that, for any given constant e > 0, uses O(N/B) disk blocks and answers a query in O((N/B)^{1/2+e} + K/B) I/Os, where B is the block size. It can also report all the points of S that lie inside R during a given time interval. A point can be inserted or deleted, or the trajectory of a point can be changed, in O(log_B^2 N) I/Os. Next, we present a general approach that improves the query time if the queries arrive in chronological order, by allowing the index to evolve over time. We obtain a tradeoff between the query time and the number of times the index needs to be updated as the points move. We also describe an indexing scheme in which the number of I/Os required to answer a query depends monotonically on the difference between the query time stamp t and the current time. Finally, we develop an efficient indexing scheme to answer approximate nearest-neighbor queries among moving points.", "We present simple, fully dynamic and kinetic data structures, which are variants of a dynamic two-dimensional range tree, for maintaining the closest pair and all nearest neighbors for a set of n moving points in the plane; insertions and deletions of points are also allowed. If no insertions or deletions take place, the structure for the closest pair uses O(n log n) space, and processes O(n^2 β_{s+2}(n) log n) critical events, each in O(log^2 n) time. Here s is the maximum number of times where the distances between any two specific pairs of points can become equal, β_s(q) = λ_s(q)/q, and λ_s(q) is the maximum length of Davenport-Schinzel sequences of order s on q symbols. The dynamic version of the problem incurs a slight degradation in performance: If m ≥ n insertions and deletions are performed, the structure still uses O(n log n) space, and processes O(mn β_{s+2}(n) log^3 n) events, each in O(log^3 n) time. Our kinetic data structure for all nearest neighbors uses O(n log^2 n) space, and processes O(n^2 β^2_{s+2}(n) log^3 n) critical events. The expected time to process all events is O(n^2 β^2_{s+2}(n) log^4 n), though processing a single event may take Θ(n) expected time in the worst case. If m ≥ n insertions and deletions are performed, then the expected number of events is O(mn β^2_{s+2}(n) log^3 n) and processing them all takes O(mn β^2_{s+2}(n) log^4 n). An insertion or deletion takes O(n) expected time." ] }
1311.2032
2952520656
For a set of @math points in the plane, this paper presents simple kinetic data structures (KDS's) for solutions to some fundamental proximity problems, namely, the all nearest neighbors problem, the closest pair problem, and the Euclidean minimum spanning tree (EMST) problem. Also, the paper introduces KDS's for maintenance of two well-studied sparse proximity graphs, the Yao graph and the Semi-Yao graph. We use sparse graph representations, the Pie Delaunay graph and the Equilateral Delaunay graph, to provide new solutions for the proximity problems. Then we design KDS's that efficiently maintain these sparse graphs on a set of @math moving points, where the trajectory of each point is assumed to be an algebraic function of constant maximum degree @math . We use the kinetic Pie Delaunay graph and the kinetic Equilateral Delaunay graph to create KDS's for maintenance of the Yao graph, the Semi-Yao graph, all the nearest neighbors, the closest pair, and the EMST. Our KDS's use @math space and @math preprocessing time. We provide the first KDS's for maintenance of the Semi-Yao graph and the Yao graph. Our KDS processes @math (resp. @math ) events to maintain the Semi-Yao graph (resp. the Yao graph); each event can be processed in time @math in an amortized sense. Here, @math is an extremely slow-growing function. Our KDS for maintenance of all the nearest neighbors and the closest pair processes @math events. For maintenance of the EMST, our KDS processes @math events. For all three of these problems, each event can be handled in time @math in an amortized sense. We improve the previous randomized kinetic algorithm for maintenance of all the nearest neighbors by Agarwal, Kaplan, and Sharir, and the previous EMST KDS by Rahmati and Zarei.
For any @math , Basch, Guibas, and Zhang @cite_24 presented a KDS for a @math -EMST whose total weight is within a factor of @math of the total weight of an exact EMST. For a set of points in the plane, their KDS uses @math space and @math preprocessing time, and processes @math events, each in @math time; their KDS works for higher dimensions. They claim that their structure can be used to maintain the minimum spanning tree in the @math and @math metrics.
{ "cite_N": [ "@cite_24" ], "mid": [ "1987458256" ], "abstract": [ "A kinetic data structure for the maintenance of a multidimensional range search tree is introduced. This structure is used as a building block to obtain kinetic data structures for two classical geometric proximity problems in arbitrary dlmensions: the first structure maintains the closest pair of a set of continuously moving points, and is provably efficient. The second structure maintains a spanning tree of the moving points whose cost remains within some prescribed factor of the minimum spanning tree." ] }
1311.2442
1826958006
The fast evolving nature of modern cyber threats and network monitoring needs calls for new, "software-defined", approaches to simplify and quicken programming and deployment of online (stream-based) traffic analysis functions. StreaMon is a carefully designed data-plane abstraction devised to scalably decouple the "programming logic" of a traffic analysis application (tracked states, features, anomaly conditions, etc.) from elementary primitives (counting and metering, matching, events generation, etc), efficiently pre-implemented in the probes, and used as common instruction set for supporting the desired logic. Multi-stage multi-step real-time tracking and detection algorithms are supported via the ability to deploy custom states, relevant state transitions, and associated monitoring actions and triggering conditions. Such a separation entails platform-independent, portable, online traffic analysis tasks written in a high level language, without requiring developers to access the monitoring device internals and program their custom monitoring logic via low level compiled languages (e.g., C, assembly, VHDL). We validate our design by developing a prototype and a set of simple (but functionally demanding) use-case applications and by testing them over real traffic traces.
In the literature, several monitoring platforms have targeted monitoring applications' programmability. A monitoring API for programmable HW network adapters is proposed in @cite_19 . On top of such a probe, network administrators may implement custom C++ monitoring applications. One of the developed applications is Appmon @cite_26 . It uses deep packet inspection to classify observed flows and attribute them to an application. Flows are stored in a hash table and retrieved when a flow is observed again. This way of handling states bears some resemblance to that proposed in this work, which however makes use of (much) more descriptive eXtended Finite State Machines. CoMo @cite_21 is another well-known network monitoring platform. We share with CoMo the (for us, side) idea of extensible plug-in metric modules, but besides this we are quite orthogonal to that work, as we rather focus on how to combine metrics with features and states using higher-level programming techniques (versus CoMo's low-level queries).
{ "cite_N": [ "@cite_19", "@cite_26", "@cite_21" ], "mid": [ "2115472007", "", "100525827" ], "abstract": [ "Network monitoring and measurement is commonly regarded as an essential function for understanding, managing and improving the performance and security of network infrastructures. Traditional passive network monitoring approaches are not adequate for fine-grained performance measurements nor for security applications. In addition, many applications would benefit from monitoring data gathered at multiple vantage points within a network infrastructure. This paper presents the design and implementation of DiMAPI, an application programming interface for distributed passive network monitoring. DiMAPI extends the notion of the network flow with the scope attribute, which enables flow creation and manipulation over a set of local and remote monitoring sensors. Experiments with a number of applications on top of DiMAPI show that it has reasonable performance, while the response latency is very close to the actual round trip time between the monitoring application and the monitoring sensors. A broad range of monitoring applications can benefit from DiMAPI to efficiently perform advanced monitoring tasks over a potentially large number of passive monitoring sensors.", "", "A device is provided for influencing starting of an internal combustion engine of a motor vehicle that has an electronically controlled gearbox, an electrically acting interlock that prevents the internal combustion engine from being put into operation when a gearbox status causing a driving force connection is selected, and a starter with an electromagnetic disengagement switch that is suppliable with battery current from an ignition switch during actuation of the starter. The device has an electronic engine control unit for at least one of ignition and metering of fuel, with a communicative connection communicatively connecting the engine control unit to the gearbox control. 
A direct connection is provided from the ignition switch to the electromagnetic disengagement switch for supplying battery current to the electromagnetic disengagement switch without an intermediate disabling switch. A recorder records the position of an element influencing the gear-selection status of the gearbox and transmits a corresponding signal to the gearbox control. The gearbox control acts upon the engine control unit via the communicative connection to disable at least one of the ignition and the fuel metering when the element influencing the gear-selection status of the gearbox is in a position which normally effects a driving force connection through the gearbox." ] }
1311.2442
1826958006
The fast evolving nature of modern cyber threats and network monitoring needs calls for new, "software-defined", approaches to simplify and quicken programming and deployment of online (stream-based) traffic analysis functions. StreaMon is a carefully designed data-plane abstraction devised to scalably decouple the "programming logic" of a traffic analysis application (tracked states, features, anomaly conditions, etc.) from elementary primitives (counting and metering, matching, events generation, etc), efficiently pre-implemented in the probes, and used as common instruction set for supporting the desired logic. Multi-stage multi-step real-time tracking and detection algorithms are supported via the ability to deploy custom states, relevant state transitions, and associated monitoring actions and triggering conditions. Such a separation entails platform-independent, portable, online traffic analysis tasks written in a high level language, without requiring developers to access the monitoring device internals and program their custom monitoring logic via low level compiled languages (e.g., C, assembly, VHDL). We validate our design by developing a prototype and a set of simple (but functionally demanding) use-case applications and by testing them over real traffic traces.
The Real-Time Communications Monitoring (RTCMon) framework @cite_12 permits the development of monitoring applications, but again the development language is a low-level one (C++), and (unlike in our approach) any feature extraction and state handling must be dealt with inside the custom application logic developed by the programmer. CoralReef @cite_8 , FLAME @cite_9 and Blockmon @cite_13 are other frameworks which grant full programmability by permitting monitoring application developers to "hook" their custom C/C++/Perl traffic analysis functions into the platform. On a different line, a number of monitoring frameworks are based on suitable extensions of Data Stream Management Systems (DSMS). PaQueT @cite_10 and, more recently, BackStreamDB @cite_20 are programmable monitoring frameworks developed as extensions of the Borealis DSMS @cite_11 . Ease of programming and high flexibility are provided by letting users define new metrics simply by issuing queries to the DSMS. The DSMS is configured through an XML file that is processed to obtain C++ application code. Gigascope @cite_2 is another stream database for network monitoring that provides an architecture programmable via SQL-like queries.
{ "cite_N": [ "@cite_13", "@cite_8", "@cite_9", "@cite_2", "@cite_20", "@cite_10", "@cite_12", "@cite_11" ], "mid": [ "", "21741679", "", "2144261930", "1556556508", "2153019170", "2079568980", "2115503987" ], "abstract": [ "", "", "", "We have developed Gigascope, a stream database for network applications including traffic analysis, intrusion detection, router configuration analysis, network research, network monitoring, and performance monitoring and debugging. Gigascope is undergoing installation at many sites within the AT&T network, including at OC48 routers, for detailed monitoring. In this paper we describe our motivation for and constraints in developing Gigascope, the Gigascope architecture and query language, and performance issues. We conclude with a discussion of stream database research problems we have found in our application.", "Monitoring the traffic of wide area networks consisting of several autonomous systems connected through a high-speed backbone is a challenge due to the huge amount of traffic. Keeping logs for obtaining measurements is unfeasible. This work describes a distributed real-time strategy for backbone traffic monitoring that does not employ logs and allows arbitrary metrics to be collected about the traffic of the backbone as a whole. Traffic is sampled by monitors that are distributed across the backbone and are accessed by a Stream Processing Engine (SPE). Besides the distributed monitoring architecture, we present an implementation (BackStreamDB) that was deployed on a national backbone. Case studies are described that show the system flexibility. Experiments are reported in which we evaluated the amount of traffic that can be handled.", "Network monitoring is a complex task that generally requires the use of different tools for specific purposes. This paper describes a flexible network monitoring tool, called PaQueT, designed to meet a wide range of monitoring needs. 
The user can define metrics as queries in a process similar to writing queries on a database management system. This approach provides an easy mechanism to adapt the tool as system requirements evolve. PaQueT allows one to monitor values ranging from packet level metrics to those usually provided only by tools based on Netflow or SNMP. PaQueT has been developed as an extension of Borealis Data Stream Management System. The first advantage of our approach is the ability to generate measurements in real time, minimizing the volume of data stored; second, the tool can be easily extended to consider several types of network protocols. We have conducted an experimental study to verify the effectiveness of our approach, and to determine its capacity to process large volumes of data.", "The use of the Internet as a medium for real-time communications has grown significantly over the past few years. However, the best-effort model of this network is not particularly well-suited to the demands of users who are familiar with the reliability, quality and security of the Public Switched Telephone Network. If the growth is to continue, monitoring and real time analysis of communication data will be needed in order to ensure good call quality, and should degradation occur, to take corrective action. Writing this type of monitoring application is difficult and time consuming: VoIP traffic not only tends to use dynamic ports, but its real-time nature, along with the fact that its packets tend to be small, impose non-trivial performance requirements. In this paper we present RTC-Mon, the Real-Time Communications Monitoring framework, which provides an extensible platform for the quick development of high-speed, real-time monitoring applications. While the focus is on VoIP traffic, the framework is general and is capable of monitoring any type of real-time communications traffic. 
We present testbed performance results for the various components of RTC-Mon, showing that it can monitor a large number of concurrent flows without losing packets. In addition, we implemented a proof-of-concept application that can not only track statistics about a large number of calls and their users, but that consists of only 800 lines of code, showing that the framework is efficient and that it also significantly reduces development time.", "Borealis is a second-generation distributed stream processing engine that is being developed at Brandeis University, Brown University, and MIT. Borealis inherits core stream processing functionality from Aurora [14] and distribution functionality from Medusa [51]. Borealis modifies and extends both systems in non-trivial and critical ways to provide advanced capabilities that are commonly required by newly-emerging stream processing applications. In this paper, we outline the basic design and functionality of Borealis. Through sample real-world applications, we motivate the need for dynamically revising query results and modifying query specifications. We then describe how Borealis addresses these challenges through an innovative set of features, including revision records, time travel, and control lines. Finally, we present a highly flexible and scalable QoS-based optimization model that operates across server and sensor networks and a new fault-tolerance model with flexible consistency-availability trade-offs." ] }
1311.2442
1826958006
The fast evolving nature of modern cyber threats and network monitoring needs calls for new, "software-defined", approaches to simplify and quicken programming and deployment of online (stream-based) traffic analysis functions. StreaMon is a carefully designed data-plane abstraction devised to scalably decouple the "programming logic" of a traffic analysis application (tracked states, features, anomaly conditions, etc.) from elementary primitives (counting and metering, matching, events generation, etc), efficiently pre-implemented in the probes, and used as common instruction set for supporting the desired logic. Multi-stage multi-step real-time tracking and detection algorithms are supported via the ability to deploy custom states, relevant state transitions, and associated monitoring actions and triggering conditions. Such a separation entails platform-independent, portable, online traffic analysis tasks written in a high level language, without requiring developers to access the monitoring device internals and program their custom monitoring logic via low level compiled languages (e.g., C, assembly, VHDL). We validate our design by developing a prototype and a set of simple (but functionally demanding) use-case applications and by testing them over real traffic traces.
Finally, while our work is, to the best of our knowledge, the first to exploit eXtended Finite State Machines (XFSMs) for programming custom monitoring logic, we acknowledge that the idea of using XFSMs as a programming language for networking purposes was proposed in a completely different field (wireless MAC protocol programmability) by @cite_27 .
{ "cite_N": [ "@cite_27" ], "mid": [ "1968861283" ], "abstract": [ "Programmable wireless platforms aim at responding to the quest for wireless access flexibility and adaptability. This paper introduces the notion of wireless MAC processors. Instead of implementing a specific MAC protocol stack, Wireless MAC processors do support a set of Medium Access Control “commands” which can be run-time composed (programmed) through software-defined state machines, thus providing the desired MAC protocol operation. We clearly distinguish from related work in this area as, unlike other works which rely on dedicated DSPs or programmable hardware platforms, we experimentally prove the feasibility of the wireless MAC processor concept over ultra-cheap commodity WLAN hardware cards. Specifically, we reflash the firmware of the commercial Broadcom AirForce54G off-the-shelf chipset, replacing its 802.11 WLAN MAC protocol implementation with our proposed extended state machine execution engine. We prove the flexibility of the proposed approach through three use-case implementation examples." ] }
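The XFSM-based programming abstraction described in the related-work paragraph above can be sketched in a few lines. This is a hedged toy illustration, not StreaMon's actual engine or API: the class name, event names, register layout, and the SYN-scan logic below are all invented for the example.

```python
# Minimal per-flow eXtended Finite State Machine sketch: transitions are keyed
# by (state, event), guarded by a predicate over per-flow registers, and may
# update those registers before moving to the next state.
class XFSM:
    def __init__(self, start, transitions):
        # transitions: {(state, event): [(guard, action, next_state), ...]}
        self.start = start
        self.transitions = transitions
        self.flows = {}  # flow_key -> (state, registers)

    def feed(self, flow_key, event):
        # Each flow carries its own state and registers (toy default: a counter).
        state, regs = self.flows.get(flow_key, (self.start, {"count": 0}))
        for guard, action, nxt in self.transitions.get((state, event), []):
            if guard(regs):          # first matching guarded transition fires
                action(regs)
                state = nxt
                break
        self.flows[flow_key] = (state, regs)
        return state

# Toy monitoring logic: flag a flow as SCANNING after 3 SYNs with no ACK.
def bump(r): r["count"] += 1
trans = {
    ("IDLE", "syn"):     [(lambda r: r["count"] >= 2, bump, "SCANNING"),
                          (lambda r: True,            bump, "IDLE")],
    ("IDLE", "ack"):     [(lambda r: True, lambda r: r.update(count=0), "IDLE")],
    ("SCANNING", "syn"): [(lambda r: True, bump, "SCANNING")],
}
fsm = XFSM("IDLE", trans)
```

The point of the sketch is the separation the abstract describes: the per-flow state handling and transition machinery are generic, while the monitoring logic lives entirely in the transition table.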
1311.2236
2151244157
We study the problem of distribution to real-value regression, where one aims to regress a mapping @math that takes in a distribution input covariate @math (for a non-parametric family of distributions @math ) and outputs a real-valued response @math . This setting was recently studied, and a "Kernel-Kernel" estimator was introduced and shown to have a polynomial rate of convergence. However, evaluating a new prediction with the Kernel-Kernel estimator scales as @math . This causes the difficult situation where a large amount of data may be necessary for a low estimation risk, but the computation cost of estimation becomes infeasible when the data-set is too large. To this end, we propose the Double-Basis estimator, which looks to alleviate this big data problem in two ways: first, the Double-Basis estimator is shown to have a computation complexity that is independent of the number of of instances @math when evaluating new predictions after training; secondly, the Double-Basis estimator is shown to have a fast rate of convergence for a general class of mappings @math .
DRR is related to functional analysis, where one regresses a mapping whose input domain consists of functions @cite_12 . However, the objects DRR works over--distributions and their pdfs--are inferred through finite-size sets of samples drawn from those objects. In functional analysis, the functions are instead inferred through observations of @math pairs, often taken on an arbitrarily dense grid in the domain of the functions. For a comprehensive survey of functional analysis see @cite_12 @cite_11 . Also, @cite_15 recently studied the problem of distribution-to-distribution regression, where both the input and output covariates are distributions.
{ "cite_N": [ "@cite_15", "@cite_12", "@cite_11" ], "mid": [ "", "1583788335", "1576898078" ], "abstract": [ "", "Introduction to functional nonparametric statistics.- Some functional datasets and associated statistical problematics.- What is a well adapted space for functional data?.- Local weighting of functional variables.- Functional nonparametric prediction methodologies.- Some selected asymptotics.- Computational issues.- Nonparametric supervised classification for functional data.- Nonparametric unsupervised classification for functional data.- Mixing, nonparametric and functional statistics.- Some selected asymptotics.- Application to continuous time processes prediction.- Small ball probabilities, semi-metric spaces and nonparametric statistics.- Conclusion and perspectives.", "Introduction.- Life Course Data in Criminology.- The Nondurable Goods Index.- Bone Shapes from a Paleopathology Study.- Modeling Reaction Time Distributions.- Zooming in on Human Growth.- Time Warping Handwriting and Weather Records.- How do Bone Shapes Indicate Arthritis?- Functional Models for Test Items.- Predicting Lip Acceleration from Electromyography.- Variable Seasonal Trend in the Goods Index.- The Dynamics of Handwriting Printed Characters.- A Differential Equation for Juggling." ] }
1311.2234
2950080319
We present the FuSSO, a functional analogue to the LASSO, that efficiently finds a sparse set of functional input covariates to regress a real-valued response against. The FuSSO does so in a semi-parametric fashion, making no parametric assumptions about the nature of input functional covariates and assuming a linear form to the mapping of functional covariates to the response. We provide a statistical backing for use of the FuSSO via proof of asymptotic sparsistency under various conditions. Furthermore, we observe good results on both synthetic and real-world data.
Lastly, it is worth noting that our estimator will have an additive linear model, @math , where we search for @math in a broad, non-parametric family such that many @math are the zero function. Such a task is similar in nature to the SpAM estimator @cite_12 , in which one also has an additive model @math (in the dimensions of a real vector @math ) and searches for @math in a broad, non-parametric family such that many @math are the zero function. Note, though, that in the SpAM model the @math functions are applied to real covariates via function evaluation, whereas in the FuSSO model the @math are applied to functional covariates via an inner product; that is, FuSSO works over functional, not real-valued, covariates, unlike SpAM.
{ "cite_N": [ "@cite_12" ], "mid": [ "2593996946" ], "abstract": [ "We present a new class of methods for high dimensional non-parametric regression and classification called sparse additive models. Our methods combine ideas from sparse linear modelling and additive non-parametric regression. We derive an algorithm for fitting the models that is practical and effective even when the number of covariates is larger than the sample size. Sparse additive models are essentially a functional version of the grouped lasso of Yuan and Lin. They are also closely related to the COSSO model of Lin and Zhang but decouple smoothing and sparsity, enabling the use of arbitrary non-parametric smoothers. We give an analysis of the theoretical properties of sparse additive models and present empirical results on synthetic and real data, showing that they can be effective in fitting sparse non-parametric models in high dimensional data. Copyright (c) 2009 Royal Statistical Society." ] }
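The sparsity mechanism that FuSSO shares with SpAM--an additive model in which entire component functions are driven to the zero function--can be sketched as a group lasso over per-covariate coefficient blocks, solved by proximal gradient descent. This is an illustrative simplification under our own assumptions, not the paper's estimator: each block of columns stands in for one covariate's basis coefficients, and all function and variable names are ours.

```python
import numpy as np

def block_soft_threshold(b, t):
    """Prox of t * ||b||_2: shrink the whole block toward zero, or kill it."""
    nrm = np.linalg.norm(b)
    return np.zeros_like(b) if nrm <= t else (1.0 - t / nrm) * b

def group_lasso(X_groups, y, lam, iters=2000):
    """min_beta (1/2n)||y - sum_j X_j beta_j||^2 + lam * sum_j ||beta_j||_2,
    by ISTA: full gradient step on the smooth part, then blockwise prox."""
    n = len(y)
    step = n / np.linalg.norm(np.hstack(X_groups), 2) ** 2  # 1/L of smooth part
    betas = [np.zeros(X.shape[1]) for X in X_groups]
    for _ in range(iters):
        resid = y - sum(X @ b for X, b in zip(X_groups, betas))
        # gradient of block j is -(1/n) X_j^T resid; prox is separable per block
        betas = [block_soft_threshold(b + step * (X.T @ resid) / n, step * lam)
                 for X, b in zip(X_groups, betas)]
    return betas
```

The block soft-threshold is what produces exact zeros for whole groups, mirroring how SpAM/FuSSO set entire component functions to zero rather than individual coefficients.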
1311.2139
2271890745
In structured output learning, obtaining labelled data for real-world applications is usually costly, while unlabelled examples are available in abundance. Semi-supervised structured classification has been developed to handle large amounts of unlabelled structured data. In this work, we consider semi-supervised structural SVMs with domain constraints. The optimization problem, which in general is not convex, contains the loss terms associated with the labelled and unlabelled examples along with the domain constraints. We propose a simple optimization approach, which alternates between solving a supervised learning problem and a constraint matching problem. Solving the constraint matching problem is difficult for structured prediction, and we propose an efficient and effective hill-climbing method to solve it. The alternating optimization is carried out within a deterministic annealing framework, which helps in effective constraint matching, and avoiding local minima which are not very useful. The algorithm is simple to implement and achieves comparable generalization performance on benchmark datasets.
A work related to our approach is the transductive SVM (TSVM) for multi-class and hierarchical classification by , where the idea of TSVMs in was extended to multi-class problems. The main challenge for multi-class problems was designing an efficient procedure to handle the combinatorial optimization involving the labels @math for unlabeled examples. Note that for multi-class problems, @math for some @math . @cite_0 showed that the combinatorial optimization for multi-class label switching results in an integer program, and proposed a transportation simplex method to solve it approximately. However, the transportation simplex method turned out to be inefficient, and an efficient label-switching procedure was given in . A deterministic annealing method and domain constraints in the form of class ratios were also used in the training. We note, however, that a straightforward extension of TSVM to structured output learning is hindered by the complexity of solving the associated label-switching problem. Extending the label-switching procedure to structured outputs is much more challenging, due to their complex structure and the large cardinality of the output space.
{ "cite_N": [ "@cite_0" ], "mid": [ "1496025956" ], "abstract": [ "Transductive SVM (TSVM) is a well known semi-supervised large margin learning method for binary text classification. In this paper we extend this method to multi-class and hierarchical classification problems. We point out that the determination of labels of unlabeled examples with fixed classifier weights is a linear programming problem. We devise an efficient technique for solving it. The method is applicable to general loss functions. We demonstrate the value of the new method using large margin loss on a number of multiclass and hierarchical classification datasets. For maxent loss we show empirically that our method is better than expectation regularization constraint and posterior regularization methods, and competitive with the version of entropy regularization method which uses label constraints." ] }
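The label-switching step discussed above--assigning labels to unlabeled examples under class-count (ratio) constraints--is a transportation problem. A hedged sketch (our own toy formulation, not the paper's or @cite_0's implementation) using an off-the-shelf LP solver; because the transportation constraint matrix is totally unimodular, the LP relaxation has an integral optimal vertex.

```python
import numpy as np
from scipy.optimize import linprog

# Choose labels z[i, j] in {0, 1} for n unlabeled examples and k classes,
# minimizing total loss C[i, j], with each example getting exactly one label
# and class j receiving exactly counts[j] examples (class-ratio constraint).
def assign_labels(C, counts):
    n, k = C.shape
    assert counts.sum() == n
    A_eq = np.zeros((n + k, n * k))          # variables z.ravel(), row-major
    for i in range(n):
        A_eq[i, i * k:(i + 1) * k] = 1.0     # sum_j z[i, j] = 1
    for j in range(k):
        A_eq[n + j, j::k] = 1.0              # sum_i z[i, j] = counts[j]
    b_eq = np.concatenate([np.ones(n), counts])
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, 1), method="highs")
    return res.x.reshape(n, k).argmax(axis=1)
```

For realistic problem sizes one would use a dedicated transportation/assignment solver rather than a dense LP, but the formulation is the same.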
1311.1695
2951514311
The graph Laplacian, a typical representation of a network, is an important matrix that can tell us much about the network structure. In particular its eigenpairs (eigenvalues and eigenvectors) incubate precious topological information about the network at hand, including connectivity, partitioning, node distance and centrality. Real networks might be very large in number of nodes (actors); luckily, most real networks are sparse, meaning that the number of edges (binary connections among actors) are few with respect to the maximum number of possible edges. In this paper we experimentally compare three state-of-the-art algorithms for computation of a few among the smallest eigenpairs of large and sparse matrices: the Implicitly Restarted Lanczos Method, which is the current implementation in the most popular scientific computing environments (Matlab ), the Jacobi-Davidson method, and the Deflation Accelerated Conjugate Gradient method. We implemented the algorithms in a uniform programming setting and tested them over diverse real-world networks including biological, technological, information, and social networks. It turns out that the Jacobi-Davidson method displays the best performance in terms of number of matrix-vector products and CPU time.
Other methods are efficiently employed for computing a number of eigenpairs of sparse matrices. Among these, we mention the Rayleigh quotient iteration, whose inexact variant has recently been analyzed by @cite_2 . A method which shares some features with DACG is LOBPCG (Locally Optimal Block Preconditioned Conjugate Gradient), which was proposed by @cite_3 and is currently available under the hypre package developed in the @cite_4 .
{ "cite_N": [ "@cite_4", "@cite_3", "@cite_2" ], "mid": [ "", "2051142108", "2053608672" ], "abstract": [ "", "We describe new algorithms of the locally optimal block preconditioned conjugate gradient (LOBPCG) method for symmetric eigenvalue problems, based on a local optimization of a three-term recurrence, and suggest several other new methods. To be able to compare numerically different methods in the class, with different preconditioners, we propose a common system of model tests, using random preconditioners and initial guesses. As the \"ideal\" control algorithm, we advocate the standard preconditioned conjugate gradient method for finding an eigenvector as an element of the null-space of the corresponding homogeneous system of linear equations under the assumption that the eigenvalue is known. We recommend that every new preconditioned eigensolver be compared with this \"ideal\" algorithm on our model test problems in terms of the speed of convergence, costs of every iteration, and memory requirements. We provide such comparison for our LOBPCG method. Numerical results establish that our algorithm is practically as efficient as the ideal'' algorithm when the same preconditioner is used in both methods. We also show numerically that the LOBPCG method provides approximations to first eigenpairs of about the same quality as those by the much more expensive global optimization method on the same generalized block Krylov subspace. We propose a new version of block Davidson's method as a generalization of the LOBPCG method. Finally, direct numerical comparisons with the Jacobi--Davidson method show that our method is more robust and converges almost two times faster.", "We present a detailed convergence analysis of preconditioned MINRES for approximately solving the linear systems that arise when Rayleigh quotient iteration is used to compute the lowest eigenpair of a symmetric positive definite matrix. 
We provide insight into the initial stagnation of MINRES iteration in both a qualitative and quantitative way and show that the convergence of MINRES mainly depends on how quickly the unique negative eigenvalue of the preconditioned shifted coefficient matrix is approximated by its corresponding harmonic Ritz value. By exploring when the negative Ritz value appears in MINRES iteration, we obtain a better understanding of the limitation of preconditioned MINRES in this context and the virtue of a new type of preconditioner with “tuning.” A comparison of MINRES with SYMMLQ in this context is also given. Finally, we show that tuning based on a rank-2 modification can be applied with little additional cost to guarantee positive definiteness of the tuned preconditioner." ] }
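LOBPCG, as discussed above, is available off the shelf; a minimal sketch (our own toy example, not the paper's experimental setup) computing the four smallest eigenpairs of a sparse graph Laplacian with SciPy's implementation:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lobpcg

# Toy graph: Laplacian L = D - A of a 50-node path graph.
n = 50
A = sp.diags([np.ones(n - 1), np.ones(n - 1)], [-1, 1], format="csr")  # adjacency
L = sp.diags(np.asarray(A.sum(axis=1)).ravel()) - A                    # Laplacian

# LOBPCG: block iteration from 4 random starting vectors, smallest eigenpairs.
rng = np.random.default_rng(0)
X = rng.standard_normal((n, 4))
vals, vecs = lobpcg(L, X, largest=False, tol=1e-8, maxiter=500)

# For a connected graph the smallest Laplacian eigenvalue is 0 (constant
# eigenvector); the path graph's spectrum is 2 - 2*cos(k*pi/n), k = 0..n-1.
```

In practice (as the abstract notes) LOBPCG is typically run with a preconditioner; none is used in this bare sketch.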
1311.1839
1661823234
We describe a whole-body dynamic walking controller implemented as a convex quadratic program. The controller solves an optimal control problem using an approximate value function derived from a simple walking model while respecting the dynamic, input, and contact constraints of the full robot dynamics. By exploiting sparsity and temporal structure in the optimization with a custom active-set algorithm, we surpass the performance of the best available off-the-shelf solvers and achieve 1kHz control rates for a 34-DOF humanoid. We describe applications to balancing and walking tasks using the simulated Atlas robot in the DARPA Virtual Robotics Challenge.
@cite_10 @cite_21 used CLFs for walking control design by solving QPs that minimize the input norm, @math , while satisfying constraints on the negativity of @math . By contrast, we placed no constraint on @math and instead minimized an objective of the form @math , where @math is an instantaneous cost on @math and @math . This approach gave us significant practical robustness while making the QP less prone to infeasibility.
{ "cite_N": [ "@cite_21", "@cite_10" ], "mid": [ "2170777202", "2026616272" ], "abstract": [ "This paper briefly presents the process of formally achieving bipedal robotic walking through controller synthesis inspired by human locomotion. Motivated by the hierarchical control present in humans, we begin by viewing the human as a \"black box\" and describe outputs, or virtual constraints, that appear to characterize human walking. By considering the equivalent outputs for the bipedal robot, a nonlinear controller can be constructed that drives the outputs of the robot to the outputs of the human; moreover, the parameters of this controller can be optimized so that stable robotic walking is provably achieved while simultaneously producing outputs of the robot that are as close as possible to those of a human. Finally, considering a control Lyapunov function based representation of these outputs allows for the class of controllers that provably achieve stable robotic walking can be greatly enlarged. The end result is the generation of bipedal robotic walking that is remarkably human-like and is experimentally realizable, as evidenced by the implementation of the resulting controllers on multiple robotic platforms.", "Hybrid zero dynamics extends the Byrnes-Isidori notion of zero dynamics to a class of hybrid models called systems with impulse effects. Specifically, given a smooth submanifold that is contained in the zero set of an output function and is invariant under both the continuous flow of the system with impulse effects as well as its reset map, the restriction dynamics is called the hybrid zero dynamics. Prior results on the stabilization of periodic orbits of the hybrid zero dynamics have relied on input-output linearization of the transverse variables. The principal result of this paper shows how control Lyapunov functions can be used to exponentially stabilize periodic orbits of the hybrid zero dynamics, thereby significantly extending the class of stabilizing controllers. An illustration of this result on a model of a bipedal walking robot is provided." ] }
1311.1839
1661823234
We describe a whole-body dynamic walking controller implemented as a convex quadratic program. The controller solves an optimal control problem using an approximate value function derived from a simple walking model while respecting the dynamic, input, and contact constraints of the full robot dynamics. By exploiting sparsity and temporal structure in the optimization with a custom active-set algorithm, we surpass the performance of the best available off-the-shelf solvers and achieve 1kHz control rates for a 34-DOF humanoid. We describe applications to balancing and walking tasks using the simulated Atlas robot in the DARPA Virtual Robotics Challenge.
Other uses of active-set methods for MPC have exploited the temporal relationship between the QPs arising in MPC. @cite_22 compared active-set and interior-point strategies for MPC and described an active-set approach based on Schur complements for efficiently re-solving the KKT conditions after changes are made to the active set. This framework is analogous to the solution method we discuss in Section . In the discrete-time setting, Wang and Boyd @cite_13 describe an approach to quickly evaluating control-Lyapunov policies using explicit enumeration of active sets in cases where the number of states is roughly equal to the square of the number of inputs.
{ "cite_N": [ "@cite_13", "@cite_22" ], "mid": [ "1977192803", "2095633277" ], "abstract": [ "The evaluation of a control-Lyapunov policy, with quadratic Lyapunov function, requires the solution of a quadratic program (QP) at each time step. For small problems this QP can be solved explicitly; for larger problems an online optimization method can be used. For this reason the control-Lyapunov control policy is considered a computationally intensive control law, as opposed to an “analytical” control law, such as conventional linear state feedback, linear quadratic Gaussian control, or H∞, too complex or slow to be used in high speed control applications. In this note we show that by precomputing certain quantities, the control-Lyapunov policy can be evaluated extremely efficiently. We will show that when the number of inputs is on the order of the square-root of the state dimension, the cost of evaluating a control-Lyapunov policy is on the same order as the cost of evaluating a simple linear state feedback policy, and less (in order) than the cost of updating a Kalman filter state estimate. To give an idea of the speeds involved, for a problem with 100 states and 10 inputs, the control-Lyapunov policy can be evaluated in around 67 μs, on a 2 GHz AMD processor; the same processor requires 40 μs to carry out a Kalman filter update.", "We consider a comparison of active set vs. interior point strategies for the solution of receding time horizon problems in nonlinear model predictive control (NMPC). For this study we consider a control algorithm where we form quadratic programs (QPs) in each time horizon by linearizing the model. We also ignore second order information on the model and constraints. This approach can be viewed as a direct nonlinear extension of MPC with linear models and is easily tailored to include stabilizing constraints. Using this framework we consider the application of three active set strategies as well as interior point methods applied to both the NMPC and the QP subproblem. The first two active set methods (QPOPT and QKWIK) are general purpose solvers that have been incorporated into SQP algorithms previously, while the third is a Schur complement approach that can easily exploit the sparse structure of the KKT matrix in MPC." ] }
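The Wang-Boyd idea of precomputing and enumerating active sets has a simple degenerate illustration: for a box-constrained scalar QP the possible active sets can be listed by hand (lower bound, upper bound, or none), so the policy reduces to clipping the unconstrained minimizer. The following is our own toy sketch of that idea, not the cited implementation:

```python
# Illustrative sketch: for the scalar QP
#   minimize 0.5*q*u**2 + c*u   subject to  lo <= u <= hi   (q > 0),
# only three active sets are possible, so the policy can be "precomputed"
# as a clip of the unconstrained minimizer -c/q. Explicit MPC generalizes
# this enumeration to piecewise-affine policies over polyhedral regions.

def clipped_qp_policy(q, c, lo, hi):
    """Evaluate argmin_u 0.5*q*u**2 + c*u subject to lo <= u <= hi."""
    assert q > 0 and lo <= hi
    u_free = -c / q        # unconstrained minimizer (empty active set)
    if u_free < lo:
        return lo          # lower bound active
    if u_free > hi:
        return hi          # upper bound active
    return u_free

print(clipped_qp_policy(2.0, -1.0, -0.2, 0.2))  # free minimizer 0.5 -> clipped to 0.2
```

In higher dimensions the number of candidate active sets grows combinatorially, which is why the cited work restricts attention to regimes where the enumeration stays cheap relative to the state dimension.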
1311.1839
1661823234
We describe a whole-body dynamic walking controller implemented as a convex quadratic program. The controller solves an optimal control problem using an approximate value function derived from a simple walking model while respecting the dynamic, input, and contact constraints of the full robot dynamics. By exploiting sparsity and temporal structure in the optimization with a custom active-set algorithm, we surpass the performance of the best available off-the-shelf solvers and achieve 1kHz control rates for a 34-DOF humanoid. We describe applications to balancing and walking tasks using the simulated Atlas robot in the DARPA Virtual Robotics Challenge.
@cite_18 consider MPC problems where the cost function and dynamic constraints are the same at each time step; i.e., the QPs solved at successive iterations differ only by a single constraint that enforces the initial conditions. By smoothly varying the initial conditions from the previous state to the current state, they were able to track a piecewise-linear path traced by the optimal solution, where knot points in the path correspond to changes in the active set. Since the controller we considered had changing cost and constraint structure, this method would have been difficult to apply.
{ "cite_N": [ "@cite_18" ], "mid": [ "1983864916" ], "abstract": [ "Nearly all algorithms for linear model predictive control (MPC) either rely on the solution of convex quadratic programs (QPs) in real time, or on an explicit precalculation of this solution for all possible problem instances. In this paper, we present an online active set strategy for the fast solution of parametric QPs arising in MPC. This strategy exploits solution information of the previous QP under the assumption that the active set does not change much from one QP to the next. Furthermore, we present a modification where the CPU time is limited in order to make it suitable for strict real-time applications. Its performance is demonstrated with a challenging test example comprising 240 variables and 1191 inequalities, which depends on 57 parameters and is prohibitive for explicit MPC approaches. In this example, our strategy allows CPU times of well below 100 ms per QP and was about one order of magnitude faster than a standard active set QP solver. Copyright © 2007 John Wiley & Sons, Ltd." ] }
1311.1610
2949607755
Game theory studies situations in which strategic players can modify the state of a given system, due to the absence of a central authority. Solution concepts, such as Nash equilibrium, are defined to predict the outcome of such situations. In multi-player settings, it has been pointed out that to be realistic, a solution concept should be obtainable via processes that are decentralized and reasonably simple. Accordingly we look at the computation of solution concepts by means of decentralized dynamics. These are algorithms in which players move in turns to improve their own utility and the hope is that the system reaches an "equilibrium" quickly. We study these dynamics for the class of opinion games, recently introduced by [, FOCS2011]. These are games, important in economics and sociology, that model the formation of an opinion in a social network. We study best-response dynamics and show upper and lower bounds on the convergence to Nash equilibria. We also study a noisy version of best-response dynamics, called logit dynamics, and prove a host of results about its convergence rate as the noise in the system varies. To get these results, we use a variety of techniques developed to bound the mixing time of Markov chains, including coupling, spectral characterizations and bottleneck ratio.
A number of papers study the efficient computation of (approximate) pure Nash equilibria for @math -strategy games, such as @cite_20 @cite_2 and @cite_10 . The class of games we study here contrasts with those: for the games considered here, Nash equilibria can be found in polynomial time (Observation ), so our interest is in the extent to which equilibria can be found easily by simple decentralized dynamic processes. Like these works, we focus on a class of @math -strategy games and study the efficient computation of pure Nash equilibria; in addition, we also study the convergence rate to logit equilibria.
{ "cite_N": [ "@cite_10", "@cite_20", "@cite_2" ], "mid": [ "", "2145297839", "1530305922" ], "abstract": [ "", "We investigate from the computational viewpoint multi-player games that are guaranteed to have pure Nash equilibria. We focus on congestion games, and show that a pure Nash equilibrium can be computed in polynomial time in the symmetric network case, while the problem is PLS-complete in general. We discuss implications to non-atomic congestion games, and we explore the scope of the potential function method for proving existence of pure Nash equilibria.", "Many natural games have both high and low cost Nash equilibria: their Price of Anarchy is high and yet their Price of Stability is low. In such cases, one could hope to move behavior from a high cost equilibrium to a low cost one by a \"public service advertising campaign\" encouraging players to follow the low-cost equilibrium, and if every player follows the advice then we are done. However, the assumption that everyone follows instructions is unrealistic. A more natural assumption is that some players will follow them, while other players will not. In this paper we consider the question of to what extent can such an advertising campaign cause behavior to switch from a bad equilibrium to a good one even if only a fraction of people actually follow the given advice, and do so only temporarily. Unlike the \"value of altruism\" model, we assume everyone will ultimately act in their own interest. We analyze this question for several important and widely studied classes of games including network design with fair cost sharing, scheduling with unrelated machines, and party affiliation games (which include consensus and cut games). We show that for some of these games (such as fair cost sharing), a random α fraction of the population following the given advice is sufficient to get a guarantee within an O(1/α) factor of the price of stability for any α > 0. For other games (such as party affiliation games), there is a strict threshold (in this case, α = 1/2 is enough to reach near-optimal behavior). Finally, for some games, such as scheduling, no value α < 1 is sufficient. We also consider a \"viral marketing\" model in which certain players are specifically targeted, and analyze the ability of such targeting to influence behavior using a much smaller number of targeted players." ] }
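For intuition on best-response dynamics in opinion games, here is a minimal sketch under an assumed quadratic cost of the standard opinion-formation form: player i holds an internal belief b[i], plays an opinion z[i], and pays (z[i]-b[i])^2 plus the squared disagreement with each neighbor. The cost form, graph, and belief values are our own illustrative choices, not necessarily the exact model of the cited paper:

```python
# Each player's best response under the assumed quadratic cost
#   (z[i]-b[i])**2 + sum_{j in N(i)} (z[i]-z[j])**2
# is the average of the internal belief and the neighbors' current opinions.
# Players move in turns (round-robin best-response dynamics).

def best_response_dynamics(b, neighbors, rounds=200):
    z = list(b)                          # start from internal beliefs
    for _ in range(rounds):
        for i in range(len(z)):          # player i moves to its best response
            s = sum(z[j] for j in neighbors[i])
            z[i] = (b[i] + s) / (1 + len(neighbors[i]))
    return z

# Path graph 0-1-2 with beliefs 0, 0.5, 1.
b = [0.0, 0.5, 1.0]
nbrs = [[1], [0, 2], [1]]
z = best_response_dynamics(b, nbrs)
print([round(v, 3) for v in z])  # -> [0.25, 0.5, 0.75]
```

Because the cost is strictly convex in each player's own strategy, these dynamics converge to the unique pure Nash equilibrium; the interesting question in the paper is how fast such convergence is, and how it degrades under the noisy (logit) variant.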
1311.1162
2951448623
One potential disadvantage of social tagging systems is that due to the lack of a centralized vocabulary, a crowd of users may never manage to reach a consensus on the description of resources (e.g., books, users or songs) on the Web. Yet, previous research has provided interesting evidence that the tag distributions of resources may become semantically stable over time as more and more users tag them. At the same time, previous work has raised an array of new questions such as: (i) How can we assess the semantic stability of social tagging systems in a robust and methodical way? (ii) Does semantic stabilization of tags vary across different social tagging systems and ultimately, (iii) what are the factors that can explain semantic stabilization in such systems? In this work we tackle these questions by (i) presenting a novel and robust method which overcomes a number of limitations in existing methods, (ii) empirically investigating semantic stabilization processes in a wide range of social tagging systems with distinct domains and properties and (iii) detecting potential causes for semantic stabilization, specifically imitation behavior, shared background knowledge and intrinsic properties of natural language. Our results show that tagging streams which are generated by a combination of imitation dynamics and shared background knowledge exhibit faster and higher semantic stability than tagging streams which are generated via imitation dynamics or natural language streams alone.
In past research, it has been suggested that stable patterns may emerge when a large group of users annotates resources on the Web. That is, users seem to reach a consensus about the description of a resource over time, despite the lack of a centralized vocabulary, which is a central element of traditional forms of organizing information @cite_33 @cite_13 @cite_4 . Several methods have been established to measure this semantic stability: (i) in previous work, one co-author of this paper suggested assessing semantic stability by analyzing the proportions of tags for a given resource as a function of the number of tag assignments @cite_33 . (ii) @cite_13 proposed a direct method for quantifying stabilization by using the Kullback-Leibler (KL) divergence between the rank-ordered tag frequency distributions of a resource at different points in time. (iii) @cite_4 showed that power-law distributions emerge when looking at the rank-ordered tag frequency distributions of a resource, which is an indicator of semantic stabilization.
{ "cite_N": [ "@cite_13", "@cite_33", "@cite_4" ], "mid": [ "2127246734", "2102775690", "2020340745" ], "abstract": [ "The debate within the Web community over the optimal means by which to organize information often pits formalized classifications against distributed collaborative tagging systems. A number of questions remain unanswered, however, regarding the nature of collaborative tagging systems including whether coherent categorization schemes can emerge from unsupervised tagging by users. This paper uses data from the social bookmarking site delicio.us to examine the dynamics of collaborative tagging systems. In particular, we examine whether the distribution of the frequency of use of tags for \"popular\" sites with a long history (many tags and many users) can be described by a power law distribution, often characteristic of what are considered complex systems. We produce a generative model of collaborative tagging in order to understand the basic dynamics behind tagging, including how a power law distribution of tags could arise. We empirically examine the tagging history of sites in order to determine how this distribution arises over time and to determine the patterns prior to a stable distribution. Lastly, by focusing on the high-frequency tags of a site where the distribution of tags is a stabilized power law, we show how tag co-occurrence networks for a sample domain of tags can be used to analyze the meaning of particular tags given their relationship to other tags.", "Collaborative tagging describes the process by which many users add metadata in the form of keywords to shared content. Recently, collaborative tagging has grown in popularity on the web, on sites that allow users to tag bookmarks, photographs and other content. In this paper we analyze the structure of collaborative tagging systems as well as their dynamic aspects. Specifically, we discovered regularities in user activity, tag frequencies, kinds of tags used, bursts of popularity in bookmarking and a remarkable stability in the relative proportions of tags within a given URL. We also present a dynamic model of collaborative tagging that predicts these stable patterns and relates them to imitation and shared knowledge.", "Collaborative tagging has been quickly gaining ground because of its ability to recruit the activity of web users into effectively organizing and sharing vast amounts of information. Here we collect data from a popular system and investigate the statistical properties of tag cooccurrence. We introduce a stochastic model of user behavior embodying two main aspects of collaborative tagging: (i) a frequency-bias mechanism related to the idea that users are exposed to each other's tagging activity; (ii) a notion of memory, or aging of resources, in the form of a heavy-tailed access to the past state of the system. Remarkably, our simple modeling is able to account quantitatively for the observed experimental features with a surprisingly high accuracy. This points in the direction of a universal behavior of users who, despite the complexity of their own cognitive processes and the uncoordinated and selfish nature of their tagging activity, appear to follow simple activity patterns." ] }
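The KL-divergence measurement described above can be sketched in a few lines: compute a resource's rank-ordered tag frequency distributions at two points in time and measure their divergence. The smoothing constant and rank cutoff below are our own illustrative choices, not parameters from the cited work:

```python
# Sketch of stability measurement via KL divergence between rank-ordered
# tag frequency distributions of the same resource at two points in time.
from math import log

def rank_distribution(tag_counts, top_k=10, eps=1e-9):
    """Relative frequencies by rank (most frequent first), padded and smoothed."""
    freqs = sorted(tag_counts.values(), reverse=True)[:top_k]
    freqs += [0] * (top_k - len(freqs))
    total = sum(freqs)
    return [(f + eps) / (total + top_k * eps) for f in freqs]

def kl_divergence(p, q):
    return sum(pi * log(pi / qi) for pi, qi in zip(p, q))

early = {"web": 5, "css": 3, "design": 2}       # few taggers yet
late  = {"web": 50, "css": 29, "design": 21}    # proportions have stabilized
p, q = rank_distribution(early), rank_distribution(late)
print(kl_divergence(p, q) < 0.01)  # True: near-zero KL, distribution stable
```

Working on rank-ordered frequencies (rather than on the tags themselves) is what lets this measure compare snapshots even as new tags enter the tail of the distribution.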
1311.1162
2951448623
One potential disadvantage of social tagging systems is that due to the lack of a centralized vocabulary, a crowd of users may never manage to reach a consensus on the description of resources (e.g., books, users or songs) on the Web. Yet, previous research has provided interesting evidence that the tag distributions of resources may become semantically stable over time as more and more users tag them. At the same time, previous work has raised an array of new questions such as: (i) How can we assess the semantic stability of social tagging systems in a robust and methodical way? (ii) Does semantic stabilization of tags vary across different social tagging systems and ultimately, (iii) what are the factors that can explain semantic stabilization in such systems? In this work we tackle these questions by (i) presenting a novel and robust method which overcomes a number of limitations in existing methods, (ii) empirically investigating semantic stabilization processes in a wide range of social tagging systems with distinct domains and properties and (iii) detecting potential causes for semantic stabilization, specifically imitation behavior, shared background knowledge and intrinsic properties of natural language. Our results show that tagging streams which are generated by a combination of imitation dynamics and shared background knowledge exhibit faster and higher semantic stability than tagging streams which are generated via imitation dynamics or natural language streams alone.
Several attempts and hypotheses aiming to explain the observed stability have emerged. In @cite_33 the authors propose that the simplest model resulting in a power-law distribution of tags would be the classic Polya urn model. The first model to formalize the notion of new tags was proposed in @cite_4 , which utilized the Yule-Simon model @cite_12 . Further models, such as the semantic imitation model @cite_6 or simple imitation mechanisms @cite_26 , have been deployed to explain and reconstruct real-world semantic stabilization.
{ "cite_N": [ "@cite_26", "@cite_4", "@cite_33", "@cite_6", "@cite_12" ], "mid": [ "2017213261", "2020340745", "2102775690", "2165416295", "" ], "abstract": [ "While research on collaborative tagging systems has largely been the purview of computer scientists, the behavior of these systems is driven by the psychology of their users. Here we explore how simple models of boundedly rational human decision making may partly account for the high-level properties of a collaborative tagging environment, in particular with respect to the distribution of tags used across the folksonomy. We discuss several plausible heuristics people might employ to decide on tags to use for a given item, and then describe methods for testing evidence of such strategies in real collaborative tagging data. Using a large dataset of annotations collected from users of the social music website Last.fm with a novel crawling methodology (approximately one millions total users), we extract the parameters for our decision-making models from the data. We then describe a set of simple multi-agent simulations that test our heuristic models, and compare their results to the extracted parameters from the tagging dataset. Results indicate that simple social copying mechanisms can generate surprisingly good fits to the empirical data, with implications for the design and study of tagging systems.", "Collaborative tagging has been quickly gaining ground because of its ability to recruit the activity of web users into effectively organizing and sharing vast amounts of information. Here we collect data from a popular system and investigate the statistical properties of tag cooccurrence. We introduce a stochastic model of user behavior embodying two main aspects of collaborative tagging: (i) a frequency-bias mechanism related to the idea that users are exposed to each other's tagging activity; (ii) a notion of memory, or aging of resources, in the form of a heavy-tailed access to the past state of the system. Remarkably, our simple modeling is able to account quantitatively for the observed experimental features with a surprisingly high accuracy. This points in the direction of a universal behavior of users who, despite the complexity of their own cognitive processes and the uncoordinated and selfish nature of their tagging activity, appear to follow simple activity patterns.", "Collaborative tagging describes the process by which many users add metadata in the form of keywords to shared content. Recently, collaborative tagging has grown in popularity on the web, on sites that allow users to tag bookmarks, photographs and other content. In this paper we analyze the structure of collaborative tagging systems as well as their dynamic aspects. Specifically, we discovered regularities in user activity, tag frequencies, kinds of tags used, bursts of popularity in bookmarking and a remarkable stability in the relative proportions of tags within a given URL. We also present a dynamic model of collaborative tagging that predicts these stable patterns and relates them to imitation and shared knowledge.", "We present a semantic imitation model of social tagging and exploratory search based on theories of cognitive science. The model assumes that social tags evoke a spontaneous tag-based topic inference process that primes the semantic interpretation of resource contents during exploratory search, and the semantic priming of existing tags in turn influences future tag choices. The model predicts that (1) users who can see tags created by others tend to create tags that are semantically similar to these existing tags, demonstrating the social influence of tag choices; and (2) users who have similar information goals tend to create tags that are semantically similar, but this effect is mediated by the semantic representation and interpretation of social tags. Results from the experiment comparing tagging behavior between a social group (where participants can see tags created by others) and a nominal group (where participants cannot see tags created by others) confirmed these predictions. The current results highlight the critical role of human semantic representations and interpretation processes in the analysis of large-scale social information systems. The model implies that analysis at both the individual and social levels are important for understanding the active, dynamic processes between human knowledge structures and external folksonomies. Implications on how social tagging systems can facilitate exploratory search, interactive information retrievals, knowledge exchange, and other higher-level cognitive and learning activities are discussed.", "" ] }
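A Yule-Simon-style generative process of the kind discussed above can be sketched as follows: with some innovation probability a user invents a brand-new tag, and otherwise she copies a tag drawn uniformly from the stream so far, which is equivalent to copying proportionally to current tag frequency (the rich-get-richer mechanism shared with the Polya urn). All parameter values are illustrative:

```python
# Minimal sketch of a Yule-Simon-like tag stream: innovation with
# probability p_new, otherwise imitation proportional to frequency.
import random
from collections import Counter

def simulate_tag_stream(n=20000, p_new=0.05, seed=42):
    rng = random.Random(seed)
    stream = [0]                    # first tag assignment
    next_tag = 1
    for _ in range(n - 1):
        if rng.random() < p_new:
            stream.append(next_tag)             # innovation: brand-new tag
            next_tag += 1
        else:
            stream.append(rng.choice(stream))   # imitation, prob. ∝ frequency
    return Counter(stream)

counts = simulate_tag_stream()
# A few early tags dominate while most tags stay rare: a heavy-tailed,
# power-law-like rank-frequency distribution.
print(len(counts), counts.most_common(3))
```

This pure-imitation baseline is exactly the kind of model the subsequent discussion contrasts with mechanisms based on shared background knowledge.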
1311.1162
2951448623
One potential disadvantage of social tagging systems is that due to the lack of a centralized vocabulary, a crowd of users may never manage to reach a consensus on the description of resources (e.g., books, users or songs) on the Web. Yet, previous research has provided interesting evidence that the tag distributions of resources may become semantically stable over time as more and more users tag them. At the same time, previous work has raised an array of new questions such as: (i) How can we assess the semantic stability of social tagging systems in a robust and methodical way? (ii) Does semantic stabilization of tags vary across different social tagging systems and ultimately, (iii) what are the factors that can explain semantic stabilization in such systems? In this work we tackle these questions by (i) presenting a novel and robust method which overcomes a number of limitations in existing methods, (ii) empirically investigating semantic stabilization processes in a wide range of social tagging systems with distinct domains and properties and (iii) detecting potential causes for semantic stabilization, specifically imitation behavior, shared background knowledge and intrinsic properties of natural language. Our results show that tagging streams which are generated by a combination of imitation dynamics and shared background knowledge exhibit faster and higher semantic stability than tagging streams which are generated via imitation dynamics or natural language streams alone.
While the above models mainly focus on the imitation behavior of users to explain the stabilization process, shared background knowledge might also be a major factor, as one co-author of this work hypothesized in previous work @cite_33 . Research by @cite_32 picked up this hypothesis and explored the utility of background knowledge as an additional explanatory factor that may help to simulate the tagging process. They show that combining background knowledge with imitation mechanisms improves the simulation results. Although their results are very strong, their evaluation has certain limitations, since they focus on reproducing the sharp drop of the rank-ordered tag frequency distribution between ranks 7 and 10, which was previously interpreted as one of the main characteristics of tagging data @cite_31 . However, recent work by @cite_21 questions whether the flattened head of these distributions is a characteristic that can be attributed to the tagging process itself. Instead, it may only be an artifact of the user interface, which suggests up to ten tags. They show that the power law forms regardless of whether or not tag suggestions are provided to the user, making a strong point for the utility of background knowledge in explaining the stabilization.
{ "cite_N": [ "@cite_31", "@cite_21", "@cite_32", "@cite_33" ], "mid": [ "", "2075149349", "2015493221", "2102775690" ], "abstract": [ "", "Most tagging systems support the user in the tag selection process by providing tag suggestions, or recommendations, based on a popularity measurement of tags other users provided when tagging the same resource. The majority of theories and mathematical models of tagging found in the literature assume that the emergence of power laws in tagging systems is mainly driven by the imitation behavior of users when observing tag suggestions provided by the user interface of the tagging system. We present experimental results that show that the power law distribution forms regardless of whether or not tag suggestions are presented to the users.", "In recent literature, several models were proposed for reproducing and understanding the tagging behavior of users. They all assume that the tagging behavior is influenced by the previous tag assignments of other users. But they are only partially successful in reproducing characteristic properties found in tag streams. We argue that this inadequacy of existing models results from their inability to include user's background knowledge into their model of tagging behavior. This paper presents a generative tagging model that integrates both components, the background knowledge and the influence of previous tag assignments. Our model successfully reproduces characteristic properties of tag streams. It even explains effects of the user interface on the tag stream.", "Collaborative tagging describes the process by which many users add metadata in the form of keywords to shared content. Recently, collaborative tagging has grown in popularity on the web, on sites that allow users to tag bookmarks, photographs and other content. In this paper we analyze the structure of collaborative tagging systems as well as their dynamic aspects. Specifically, we discovered regularities in user activity, tag frequencies, kinds of tags used, bursts of popularity in bookmarking and a remarkable stability in the relative proportions of tags within a given URL. We also present a dynamic model of collaborative tagging that predicts these stable patterns and relates them to imitation and shared knowledge." ] }
1311.0833
1575174106
Sentiment polarity classification is perhaps the most widely studied topic. It classifies an opinionated document as expressing a positive or negative opinion. In this paper, using a movie review dataset, we perform a comparative study with different single kinds of linguistic features and the combinations of these features. We find that the classic topic-based classifiers (Naive Bayes and Support Vector Machine) do not perform as well on sentiment polarity classification. And we find that with some combinations of different linguistic features, the classification accuracy can be boosted a lot. We give some reasonable explanations for these boosting outcomes.
Much research on automated sentiment and opinion detection has been performed in recent years. @cite_4 used the subjectivity of similar words as a confidence weight when the classifier is unsure, in order to automatically learn subjective adjectives from corpora, which provided good features for semantic applications. @cite_0 described a straightforward method for automatically identifying collocational clues of subjectivity in texts and explored low-frequency words as features. @cite_2 tried several kinds of features in binary sentiment classification and made a comparative evaluation. @cite_6 extracted two consecutive words containing at least one adjective or adverb and used PMI-IR to estimate the phrase's semantic orientation. @cite_15 used the AutoSlog-TS algorithm to learn subjective patterns based on two high-precision classifiers. @cite_3 used both word features and polarity features for polarity classification. @cite_12 assumed that high-order n-grams are more precise and deterministic expressions than unigrams or bigrams, and employed high-order n-grams to approximate surface patterns capturing the sentiment in text. @cite_17 leveraged document-statistics features to improve opinion polarity classification. @cite_10 used character 3-grams as features, on the assumption that these can overcome spelling errors and problems of ill-formatted or ungrammatical questions, and built features by combining words and part-of-speech tags.
{ "cite_N": [ "@cite_4", "@cite_6", "@cite_3", "@cite_0", "@cite_2", "@cite_15", "@cite_10", "@cite_12", "@cite_17" ], "mid": [ "1565863475", "2168625136", "2022204871", "159417085", "2166706824", "2088622183", "2066191229", "104703790", "2031794520" ], "abstract": [ "Subjectivity tagging is distinguishing sentences used to present opinions and evaluations from sentences used to objectively present factual information. There are numerous applications for which subjectivity tagging is relevant, including information extraction and information retrieval. This paper identifies strong clues of subjectivity using the results of a method for clustering words according to distributional similarity (Lin 1998), seeded by a small amount of detailed manual annotation. These features are then further refined with the addition of lexical semantic features of adjectives, specifically polarity and gradability (Hatzivassiloglou & McKeown 1997), which can be automatically learned from corpora. In 10-fold cross validation experiments, features based on both similarity clusters and the lexical semantic features are shown to have higher precision than features based on each alone.", "The evaluative character of a word is called its semantic orientation. Positive semantic orientation indicates praise (e.g., \"honest\", \"intrepid\") and negative semantic orientation indicates criticism (e.g., \"disturbing\", \"superfluous\"). Semantic orientation varies in both direction (positive or negative) and degree (mild to strong). An automated system for measuring semantic orientation would have application in text classification, text filtering, tracking opinions in online discussions, analysis of survey responses, and automated chat systems (chatbots). This article introduces a method for inferring the semantic orientation of a word from its statistical association with a set of positive and negative paradigm words. 
Two instances of this approach are evaluated, based on two different statistical measures of word association: pointwise mutual information (PMI) and latent semantic analysis (LSA). The method is experimentally tested with 3,596 words (including adjectives, adverbs, nouns, and verbs) that have been manually labeled positive (1,614 words) and negative (1,982 words). The method attains an accuracy of 82.8% on the full test set, but the accuracy rises above 95% when the algorithm is allowed to abstain from classifying mild words.", "This paper presents a new approach to phrase-level sentiment analysis that first determines whether an expression is neutral or polar and then disambiguates the polarity of the polar expressions. With this approach, the system is able to automatically identify the contextual polarity for a large subset of sentiment expressions, achieving results that are significantly better than baseline.", "Subjectivity in natural language refers to aspects of language used to express opinions and evaluations (Banfield, 1982; Wiebe, 1994). There are numerous applications for which knowledge of subjectivity is relevant, including genre detection, information extraction, and information retrieval. This paper shows promising results for a straightforward method of identifying collocational clues of subjectivity, as well as evidence of the usefulness of these clues for recognizing opinionated documents.", "We consider the problem of classifying documents not by topic, but by overall sentiment, e.g., determining whether a review is positive or negative. Using movie reviews as data, we find that standard machine learning techniques definitively outperform human-produced baselines. However, the three machine learning methods we employed (Naive Bayes, maximum entropy classification, and support vector machines) do not perform as well on sentiment classification as on traditional topic-based categorization. 
We conclude by examining factors that make the sentiment classification problem more challenging.", "This paper presents a bootstrapping process that learns linguistically rich extraction patterns for subjective (opinionated) expressions. High-precision classifiers label unannotated data to automatically create a large training set, which is then given to an extraction pattern learning algorithm. The learned patterns are then used to identify more subjective sentences. The bootstrapping process learns many subjective patterns and increases recall while maintaining high precision.", "In this paper we begin to investigate how to automatically determine the subjectivity orientation of questions posted by real users in community question answering (CQA) portals. Subjective questions seek answers containing private states, such as personal opinion and experience. In contrast, objective questions request objective, verifiable information, often with support from reliable sources. Knowing the question orientation would be helpful not only for evaluating answers provided by users, but also for guiding the CQA engine to process questions more intelligently. Our experiments on Yahoo! Answers data show that our method exhibits promising performance.", "Evaluating text fragments for positive and negative subjective expressions and their strength can be important in applications such as single- or multi- document summarization, document ranking, data mining, etc. This paper looks at a simplified version of the problem: classifying online product reviews into positive and negative classes. We discuss a series of experiments with different machine learning algorithms in order to experimentally evaluate various trade-offs, using approximately 100K product reviews from the web.", "Opinion retrieval is a document retrieving and ranking process. A relevant document must be relevant to the query and contain opinions toward the query. 
Opinion polarity classification is an extension of opinion retrieval. It classifies the retrieved document as positive, negative or mixed, according to the overall polarity of the query relevant opinions in the document. This paper (1) proposes several new techniques that help improve the effectiveness of an existing opinion retrieval system; (2) presents a novel two-stage model to solve the opinion polarity classification problem. In this model, every query relevant opinionated sentence in a document retrieved by our opinion retrieval system is classified as positive or negative respectively by a SVM classifier. Then a second classifier determines the overall opinion polarity of the document. Experimental results show that both the opinion retrieval system with the proposed opinion retrieval techniques and the polarity classification model outperformed the best reported systems respectively." ] }
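The PMI-IR estimation of semantic orientation mentioned for @cite_6 above can be sketched compactly. This is a minimal illustration only: the function names and the toy hit counts are invented stand-ins for corpus or search-engine co-occurrence statistics, not values from the cited work.

```python
from math import log2

def pmi(hits_near, hits_phrase, hits_word, total):
    """Pointwise mutual information from raw co-occurrence counts."""
    return log2((hits_near / total) /
                ((hits_phrase / total) * (hits_word / total)))

def semantic_orientation(hits):
    """SO(phrase) = PMI(phrase, 'excellent') - PMI(phrase, 'poor')."""
    return (pmi(hits["near_excellent"], hits["phrase"], hits["excellent"], hits["total"])
            - pmi(hits["near_poor"], hits["phrase"], hits["poor"], hits["total"]))

# Invented hit counts: the phrase co-occurs with "excellent" far more often
# than with "poor", so its orientation comes out positive.
toy = {"total": 10**6, "phrase": 500, "excellent": 20000, "poor": 15000,
       "near_excellent": 120, "near_poor": 10}
print(semantic_orientation(toy) > 0)  # True
```

A phrase whose orientation is positive would be counted as praise; a negative score as criticism, mirroring the recommend/not-recommend decision in the cited method.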
1311.0833
1575174106
Sentiment polarity classification is perhaps the most widely studied topic. It classifies an opinionated document as expressing a positive or negative opinion. In this paper, using a movie review dataset, we perform a comparative study with different single kinds of linguistic features and with combinations of these features. We find that classic topic-based classifiers (Naive Bayes and Support Vector Machines) do not perform as well on sentiment polarity classification. We also find that, with some combinations of different linguistic features, classification accuracy can be boosted considerably. We give reasonable explanations for these improvements.
The work of @cite_13 on polarity classification is perhaps the closest to ours. They explored polarized features, transition features, and combinations of these with unigrams, and applied a maximum entropy classifier to evaluate those features by accuracy. In contrast, we utilize the contextual information of trigrams and the effectiveness of adjective and adverb words, combining polarized features with both unigram and adjective-adverb trigram features. We also give an overall evaluation, based on accuracy, across different document representations and two classic classifiers.
{ "cite_N": [ "@cite_13" ], "mid": [ "18470130" ], "abstract": [ "In this paper we examine different linguistic features for sentimental polarity classification, and perform a comparative study on this task between blog and review data. We found that results on blog are much worse than reviews and investigated two methods to improve the performance on blogs. First we explored information retrieval based topic analysis to extract relevant sentences to the given topics for polarity classification. Second, we adopted an adaptive method where we train classifiers from review data and incorporate their hypothesis as features. Both methods yielded performance gain for polarity classification on blog data." ] }
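The feature-combination setup described in this record, unigrams combined with higher-order n-gram features and fed to a classic classifier such as Naive Bayes, can be sketched as follows. The tiny corpus, the add-one smoothing, and all names are illustrative assumptions rather than the paper's actual pipeline.

```python
from collections import Counter
from math import log

def features(text):
    """Combined feature set: unigrams plus trigrams of the same text."""
    toks = text.lower().split()
    trigrams = [" ".join(toks[i:i + 3]) for i in range(len(toks) - 2)]
    return toks + trigrams

def train(docs):
    """Per-class feature counts for a multinomial Naive Bayes."""
    counts = {"pos": Counter(), "neg": Counter()}
    for label, text in docs:
        counts[label].update(features(text))
    return counts

def classify(counts, text):
    """Pick the class with the higher smoothed log-likelihood."""
    vocab = set(counts["pos"]) | set(counts["neg"])
    def log_score(label):
        total = sum(counts[label].values()) + len(vocab)  # add-one smoothing
        return sum(log((counts[label][f] + 1) / total) for f in features(text))
    return max(("pos", "neg"), key=log_score)

# Toy labeled corpus (invented for illustration).
docs = [("pos", "a truly great movie with great acting"),
        ("pos", "great fun and a great story"),
        ("neg", "a truly awful movie with bad acting"),
        ("neg", "bad plot and awful pacing")]
model = train(docs)
print(classify(model, "great story and great acting"))  # pos
```

The trigram features contribute contextual evidence on top of the unigrams, which is the kind of combination the survey paragraph credits with boosting accuracy.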
1311.0044
1554791743
Abstract: The increasing use of cloud computing and remote execution has made program security especially important. Code obfuscation has been proposed to make the understanding of programs more complicated to attackers. In this paper, we exploit multi-core processing to substantially increase the complexity of programs, making reverse engineering more complicated. We propose a novel method that automatically partitions any serial thread into an arbitrary number of parallel threads, at the basic-block level. The method generates new control-flow graphs, preserving the blocks' serial successor relations and guaranteeing that one basic-block is active at a time using guards. The method generates m^n different combinations for m threads and n basic-blocks, significantly complicating the execution state. We provide a correctness proof for the algorithm and implement the algorithm in the LLVM compilation framework. Keywords: Security, Obfuscation, Multi-threading. 1. Introduction: With the advent of cloud computing, software security becomes especially important [1]. In particular, software security researchers have been concerned to evaluate the methods that protect software systems against reverse engineering threats. These can be exploited by software hackers to discover software vulnerabilities and inject malicious code. One practical security approach is software obfuscation, a security mechanism that transforms the original program into a functionally equivalent counterpart [2, 3]; the obfuscated program has the same semantics as the original, but it is much more complex for reverse engineers to understand [4]. Parallelism and multi-threading have been proposed to increase performance. Parallel programs are notoriously difficult to debug and reason about, and for that reason parallelism is a nice ingredient for obfuscation.
The transformation includes methods such as insertion of dead or irrelevant code, extension of loop conditions, and conversion of a reducible flow graph into a non-reducible one. One of the most important ways to do this is to increase the parallelism of the code, using two methods. First, a redundant, non-profitable task can be created and a portion of the code parallelized with it, so that an attacker cannot deduce which thread carries the actual portion to be run. Second, a sequence of data-dependent statements in a portion of the code can be split and the parts run in parallel, with synchronization primitives controlling correct execution @cite_2 . The latter is the closest to our method; however, it splits simple, in-order sequences of instructions (without control-flow dependence) rather than general, complex control-flow graphs as our method does.
{ "cite_N": [ "@cite_2" ], "mid": [ "2146567535" ], "abstract": [ "We identify three types of attack on the intellectual property contained in software and three corresponding technical defenses. A defense against reverse engineering is obfuscation, a process that renders software unintelligible but still functional. A defense against software piracy is watermarking, a process that makes it possible to determine the origin of software. A defense against tampering is tamper-proofing, so that unauthorized modifications to software (for example, to remove a watermark) will result in nonfunctional code. We briefly survey the available technology for each type of defense." ] }
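The guard idea from the abstract above, scattering a serial sequence of basic blocks over several threads while a shared turn variable keeps exactly one block active at a time, can be sketched in a few lines. The block bodies and the round-robin assignment are illustrative assumptions, not the paper's LLVM-level algorithm.

```python
import threading

trace = []
# Six "basic blocks"; each just records its own index when executed.
blocks = [lambda i=i: trace.append(i) for i in range(6)]

turn = 0
cond = threading.Condition()

def worker(my_blocks):
    global turn
    for idx, body in my_blocks:
        with cond:
            cond.wait_for(lambda: turn == idx)  # guard: wait for my turn
            body()                              # execute the basic block
            turn += 1                           # hand control to the successor
            cond.notify_all()

# Scatter the blocks over 3 threads (round robin); with m threads and
# n blocks there are m**n possible assignments, which is the state-space
# blow-up the abstract refers to.
threads = [threading.Thread(
               target=worker,
               args=([(i, b) for i, b in enumerate(blocks) if i % 3 == t],))
           for t in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(trace)  # serial successor order is preserved: [0, 1, 2, 3, 4, 5]
```

Even though execution hops between threads, the guards enforce the original serial semantics, while a reverse engineer now has to reason about the interleaved control flow.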
1311.0244
1843115761
In multiagent surveillance missions, a group of agents monitors some points of interest to provide situational awareness. For agents with local communication capabilities, one way to transmit the surveillance information to a base is streaming by the instantaneous data via multihop communications over a connected network. However, a connected communication network may become disconnected if some agents leave the surveillance area (for example, for refueling). This paper presents a locally applicable, efficient, and scalable strategy that guarantees a connected communication network between the base and the agents in the face of any agent removal. The proposed decentralized strategy is based on a sequence of local replacements, which are initiated by the agent leaving the network. It is shown that the replacement sequence always ends with the relocation of an agent, for which the absence from its current position does not disconnect the network. Furthermore, the optimality (that is, the minimum number of r...
Recently, a great amount of interest has been devoted to the analysis of multi-agent systems via graph theory. In these studies, the nodes of a graph represent the agents (such as robots, sensors, or individuals), and the edges represent the direct interactions between them. For such a representation, a fundamental graph property related to system robustness is graph connectivity (e.g. @cite_30 @cite_31 @cite_38 and the literature cited within). As such, the robustness of a system is related to the total number of edges or nodes whose removal will cause a network disconnection. For graph-theoretic connectivity control of mobile systems against edge failure, the literature includes, but is not limited to, optimization-based connectivity control (e.g. @cite_10 ), continuous-feedback connectivity control (e.g. @cite_6 ), and control based on the estimation of the algebraic connectivity (e.g. @cite_3 ). In these studies, the authors mainly consider uncertainty in edges and assume a constant number of nodes.
{ "cite_N": [ "@cite_30", "@cite_38", "@cite_6", "@cite_3", "@cite_31", "@cite_10" ], "mid": [ "2070842753", "", "2070113904", "2118941026", "", "2063801303" ], "abstract": [ "This paper addresses the connectedness issue in multiagent coordination, i.e., the problem of ensuring that a group of mobile agents stays connected while achieving some performance objective. In particular, we study the rendezvous and the formation control problems over dynamic interaction graphs, and by adding appropriate weights to the edges in the graphs, we guarantee that the graphs stay connected.", "", "To accomplish cooperative tasks, robotic systems are often required to communicate with each other. Thus, maintaining connectivity of the interagent communication graph is a fundamental issue in the field of multi-robot systems. In this paper we present a completely decentralized control strategy for global connectivity maintenance of the interagent communication graph. We describe a gradient-based control strategy that exploits decentralized estimation of the algebraic connectivity. The proposed control algorithm guarantees the global connectivity of the communication graph without requiring maintenance of the local connectivity between the robotic systems. The control strategy is validated by means of an analytical proof and simulative results.", "In order to accomplish cooperative tasks, multi-robot systems are required to communicate among each other. Thus, maintaining the connectivity of the communication graph is a fundamental issue. Connectivity maintenance has been extensively studied in the last few years, but generally considering only kinematic agents. In this paper we will introduce a control strategy that, exploiting a decentralized procedure for the estimation of the algebraic connectivity of the graph, ensures the connectivity maintenance for groups of Lagrangian systems. 
The control strategy is validated by means of analytical proofs and simulation results.", "", "In this paper, we provide a theoretical framework for controlling graph connectivity in mobile robot networks. We discuss proximity-based communication models composed of disk-based or uniformly-fading-signal-strength communication links. A graph-theoretic definition of connectivity is provided, as well as an equivalent definition based on algebraic graph theory, which employs the adjacency and Laplacian matrices of the graph and their spectral properties. Based on these results, we discuss centralized and distributed algorithms to maintain, increase, and control connectivity in mobile robot networks. The various approaches discussed in this paper range from convex optimization and subgradient-descent algorithms, for the maximization of the algebraic connectivity of the network, to potential fields and hybrid systems that maintain communication links or control the network topology in a least restrictive manner. Common to these approaches is the use of mobility to control the topology of the underlying communication network. We discuss applications of connectivity control to multirobot rendezvous, flocking and formation control, where so far, network connectivity has been considered an assumption." ] }
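The algebraic-connectivity criterion that underlies the estimation-based control cited above can be checked numerically: a graph is connected exactly when the second-smallest eigenvalue of its Laplacian (lambda_2) is positive. The example adjacency matrices are illustrative.

```python
import numpy as np

def algebraic_connectivity(adj):
    """Second-smallest eigenvalue of the graph Laplacian L = D - A."""
    adj = np.asarray(adj, dtype=float)
    lap = np.diag(adj.sum(axis=1)) - adj
    return np.sort(np.linalg.eigvalsh(lap))[1]

# A connected path on 4 nodes vs. two disjoint edges.
path = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
split = [[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]]

print(algebraic_connectivity(path) > 1e-9)   # True: connected
print(algebraic_connectivity(split) > 1e-9)  # False: disconnected
```

Decentralized schemes such as the one in @cite_3 estimate this quantity without assembling the global Laplacian; the centralized computation here only illustrates what is being estimated.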
1311.0244
1843115761
In multiagent surveillance missions, a group of agents monitors some points of interest to provide situational awareness. For agents with local communication capabilities, one way to transmit the surveillance information to a base is streaming by the instantaneous data via multihop communications over a connected network. However, a connected communication network may become disconnected if some agents leave the surveillance area (for example, for refueling). This paper presents a locally applicable, efficient, and scalable strategy that guarantees a connected communication network between the base and the agents in the face of any agent removal. The proposed decentralized strategy is based on a sequence of local replacements, which are initiated by the agent leaving the network. It is shown that the replacement sequence always ends with the relocation of an agent, for which the absence from its current position does not disconnect the network. Furthermore, the optimality (that is, the minimum number of r...
Maintaining connectivity against the removal of multiple agents is a more challenging problem than maintaining connectivity against the removal of multiple edges @cite_9 . In the last few years, there has been significant interest in addressing the agent loss problem in networked systems. In @cite_9 and @cite_2 , the main focus is on the design of robust network topologies that can tolerate a finite number of agent removals. In @cite_2 and @cite_14 , the authors propose self-repair strategies that create new connections among the neighbors of the failing agent. In addition, a connectivity maintenance strategy based on decentralized estimation of the algebraic connectivity is presented in @cite_7 : based on their estimates, agents increase or decrease their broadcast radii to satisfy the connectivity requirements.
{ "cite_N": [ "@cite_9", "@cite_14", "@cite_7", "@cite_2" ], "mid": [ "2004805323", "2081257635", "2023766744", "2093172757" ], "abstract": [ "Abstract In the studies on the localization of wireless sensor networks (WSN), it has been shown that a network is in principle uniquely localizable if its underlying graph is globally rigid and there are at least d + 1 non-collinear anchors (in d -space). The high possibility of the loss of nodes or links in a typical WSN, specially mobile WSNs where the localization often needs to be repeated, enforces to not only have localizable network structures but also structures which remain localizable after the loss of multiple nodes links. The problem of characterizing robustness against the loss of multiple nodes, which is more challenging than the problem of multiple link loss, is being studied here for the first time, though there have been some results on single node loss. We provide some sufficient properties for a network to be robustly localizable. This enables us to answer the problem of how to make a given network robustly localizable. We also derive a lower bound on the number of the links such a network should have. Elaborating it to the case of robustness against the loss of up to 2 nodes, we propose the optimal network structure, in terms of the required number of distance measurements.", "Dissensus is a modeling framework for networks of dynamic agents in competition for scarce resources. Originally inspired by biological cell behaviors, it also fits marketing, finance and many other application areas. Competition is often unstable in the sense that strong agents, those having access to large resources, gain more and more resources at the expenses of weak agents. Thus, strong agents duplicate when reaching a critical amount of resources, whereas weak agents die when losing all their resources. 
To capture all these phenomena we introduce discrete time gossip systems with unstable state dynamics interrupted by discrete events affecting the network topology. Invariancy of states, topologies, and network connectivity are explored.", "In this article, we propose a decentralised algorithm for connectivity maintenance in a distributed sensor network. Our algorithm uses the dynamics of a consensus algorithm to estimate the connectivity of a network topology in a decentralised manner. These estimates are then used to inform a decentralised control algorithm that regulates the network connectivity to some desired level. Under certain realistic assumptions we show that the closed-loop dynamics can be described as a consensus algorithm with an input, and eventually reduces to a scalar system. Bounds are given to ensure the stability of the algorithm and examples are given to illustrate the efficacy of the proposed algorithm.", "In this paper, we address the problem of agent loss in vehicle formations and sensor networks via two separate approaches: (1) perform a ‘self-repair’ operation in the event of agent loss to recover desirable information architecture properties or (2) introduce robustness into the information architecture a priori such that agent loss does not destroy desirable properties. We model the information architecture as a graph G(V, E), where V is a set of vertices representing the agents and E is a set of edges representing information flow amongst the agents. We focus on two properties of the graph called rigidity and global rigidity, which are required for formation shape maintenance and sensor network self-localization, respectively. For the self-repair approach, we show that while previous results permit local repair involving only neighbours of the lost agent, the repair cannot always be implemented using only local information. We present new results that can be applied to make the local repair using only local information. 
We describe implementation and illustrate with algorithms and examples. For the robustness approach, we investigate the structure of graphs with the property that rigidity or global rigidity is preserved after removing any single vertex (we call the property as 2-vertex-rigidity or 2-vertex-global-rigidity, respectively). Information architectures with such properties would allow formation shape maintenance or self-localization to be performed even in the event of agent failure. We review a characterization of a class of 2-vertex-rigidity and develop a separate class, making significant strides towards a complete characterization. We also present a characterization of a class of 2-vertex-global-rigidity. Copyright © 2008 John Wiley & Sons, Ltd." ] }
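The notion of agents whose loss destroys connectivity, which motivates the robust-topology work above, can be sketched by brute force: remove each node in turn and test connectivity with a BFS. The topology below is illustrative.

```python
from collections import deque

def connected(nodes, edges):
    """BFS connectivity test restricted to the given node set."""
    if not nodes:
        return True
    adj = {v: set() for v in nodes}
    for u, v in edges:
        if u in adj and v in adj:
            adj[u].add(v)
            adj[v].add(u)
    start = next(iter(nodes))
    seen, queue = {start}, deque([start])
    while queue:
        u = queue.popleft()
        for w in adj[u] - seen:
            seen.add(w)
            queue.append(w)
    return seen == set(nodes)

def critical_agents(nodes, edges):
    """Agents whose removal disconnects the surviving network."""
    return {v for v in nodes if not connected(set(nodes) - {v}, edges)}

# A path 0-1-2 attached to a triangle 2-3-4: nodes 1 and 2 are cut vertices.
nodes = {0, 1, 2, 3, 4}
edges = [(0, 1), (1, 2), (2, 3), (2, 4), (3, 4)]
print(critical_agents(nodes, edges))  # {1, 2}
```

A topology is robust to single-agent loss precisely when this set is empty; linear-time articulation-point algorithms compute the same set without the brute-force loop.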
1311.0244
1843115761
In multiagent surveillance missions, a group of agents monitors some points of interest to provide situational awareness. For agents with local communication capabilities, one way to transmit the surveillance information to a base is streaming by the instantaneous data via multihop communications over a connected network. However, a connected communication network may become disconnected if some agents leave the surveillance area (for example, for refueling). This paper presents a locally applicable, efficient, and scalable strategy that guarantees a connected communication network between the base and the agents in the face of any agent removal. The proposed decentralized strategy is based on a sequence of local replacements, which are initiated by the agent leaving the network. It is shown that the replacement sequence always ends with the relocation of an agent, for which the absence from its current position does not disconnect the network. Furthermore, the optimality (that is, the minimum number of r...
Different from the previous studies, @cite_18 , @cite_17 , @cite_4 and @cite_12 consider mobile agents and propose agent movements for connectivity restoration of wireless sensor networks in the case of agent failure. In @cite_18 , a distributed control algorithm is introduced for connectivity maintenance. Before any failure, the algorithm runs and identifies all critical agents, whose failure would cause network disconnection; it then assigns the required actions to each agent in advance. The studies in @cite_17 and @cite_4 differ from @cite_18 in that they maintain connectivity through agent relocations initiated by the failing agent. In @cite_4 , the authors present a centralized algorithm as an alternative to the decentralized scheme given in @cite_17 , which is not always feasible in general graphs. Finally, the authors of @cite_12 use the shortest-path routing table in their algorithm, and they propose a distributed recovery mechanism that maintains network connectivity with minimal topology change, i.e. without increasing the length of the shortest path between any two agents after the reconfiguration.
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_12", "@cite_17" ], "mid": [ "2112818021", "", "2057275046", "2165511399" ], "abstract": [ "Wireless sensor and actor networks (WSANs) additionally employ actor nodes within the wireless sensor network (WSN) which can process the sensed data and perform certain actions based on this collected data. In most applications, inter-actor coordination is required to provide the best response. This suggests that the employed actors should form and maintain a connected inter-actor network at all times. However, WSANs often operate unattended in harsh environments where actors can easily fail or get damaged. Such failures can partition the inter-actor network and thus eventually make the network useless. In order to handle such failures, we present a connected dominating set (CDS) based partition detection and recovery algorithm. The idea is to identify whether the failure of a node causes partitioning or not in advance. If a partitioning is to occur, the algorithm designates one of the neighboring nodes to initiate the connectivity restoration process. This process involves repositioning of a set of actors in order to restore the connectivity. The overall goal in this restoration process is to localize the scope of the recovery and minimize the movement overhead imposed on the involved actors. The effectiveness of the approach is validated through simulation experiments.", "", "In wireless sensor-actor networks, sensors probe their surroundings and forward their data to actor nodes. Actors collaboratively respond to achieve predefined application mission. Since actors have to coordinate their operation, it is necessary to maintain a strongly connected network topology at all times. Moreover, the length of the inter-actor communication paths may be constrained to meet latency requirements. However, a failure of an actor may cause the network to partition into disjoint blocks and would, thus, violate such a connectivity goal. 
One of the effective recovery methodologies is to autonomously reposition a subset of the actor nodes to restore connectivity. Contemporary recovery schemes either impose high node relocation overhead or extend some of the inter-actor data paths. This paper overcomes these shortcomings and presents a Least-Disruptive topology Repair (LeDiR) algorithm. LeDiR relies on the local view of a node about the network to devise a recovery plan that relocates the least number of nodes and ensures that no path between any pair of nodes is extended. LeDiR is a localized and distributed algorithm that leverages existing route discovery activities in the network and imposes no additional prefailure communication overhead. The performance of LeDiR is analyzed mathematically and validated via extensive simulation experiments.", "In wireless sensor and actor networks (WSANs), a set of static sensor nodes and a set of (mobile) actor nodes form a network that performs distributed sensing and actuation tasks. In [1], presented DARA, a Distributed Actor Recovery Algorithm, which restores the connectivity of the interactor network by efficiently relocating some mobile actors when failure of an actor happens. To restore 1 and 2-connectivity of the network, two algorithms are developed in [1]. Their basic idea is to find the smallest set of actors that needs to be repositioned to restore the required level of connectivity, with the objective to minimize the movement overhead of relocation. Here, we show that the algorithms proposed in [1] will not work smoothly in all scenarios as claimed and give counterexamples for some algorithms and theorems proposed in [1]. We then present a general actor relocation problem and propose methods that will work correctly for several subsets of the problems. Specifically, our method does result in an optimum movement strategy with minimum movement overhead for the problems studied in [1]." ] }
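The "no path gets longer" criterion attributed to @cite_12 above can be sketched as a check over all-pairs BFS distances before and after a recovery. The tiny before/after topologies and the function names are illustrative assumptions, not the cited LeDiR algorithm itself.

```python
from collections import deque
from itertools import combinations

def hops(adj, src):
    """BFS hop counts from src in an adjacency-dict graph."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

def no_path_stretched(before, after):
    """True if no shortest path between surviving agents got longer."""
    common = set(before) & set(after)
    for u, v in combinations(sorted(common), 2):
        d0, d1 = hops(before, u).get(v), hops(after, u).get(v)
        if d1 is None or (d0 is not None and d1 > d0):
            return False
    return True

# Node 9 fails; node 3 slides in and takes over its link to node 2.
before = {1: {2}, 2: {1, 9}, 9: {2, 3}, 3: {9}}
after = {1: {2}, 2: {1, 3}, 3: {2}}
print(no_path_stretched(before, after))  # True
```

A recovery plan passing this check causes minimal disruption in the sense the survey paragraph describes: every surviving pair communicates over a route at least as short as before.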
1311.0244
1843115761
In multiagent surveillance missions, a group of agents monitors some points of interest to provide situational awareness. For agents with local communication capabilities, one way to transmit the surveillance information to a base is streaming by the instantaneous data via multihop communications over a connected network. However, a connected communication network may become disconnected if some agents leave the surveillance area (for example, for refueling). This paper presents a locally applicable, efficient, and scalable strategy that guarantees a connected communication network between the base and the agents in the face of any agent removal. The proposed decentralized strategy is based on a sequence of local replacements, which are initiated by the agent leaving the network. It is shown that the replacement sequence always ends with the relocation of an agent, for which the absence from its current position does not disconnect the network. Furthermore, the optimality (that is, the minimum number of r...
In this paper, we present a decentralized recovery mechanism to maintain network connectivity under arbitrary agent removal. The replacement control problem was initially introduced in @cite_36 , where replacements by minimum-degree neighbors were presented as a solution. Here, we generalize the connectivity maintenance scheme as the message passing strategy, and we show that this method maintains connectivity even when agents share a minimum amount of information, i.e. only node IDs. Moreover, we show that utilizing @math -criticality information in the message passing strategy significantly improves the optimality of the solution.
{ "cite_N": [ "@cite_36" ], "mid": [ "2313205331" ], "abstract": [ "Connectivity is a fundamental property in many networked systems for communication and coordination among the agents. Removal of any agent from a network may destroy its connectivity. In order to maintain the connectivity of mobile networked systems in agent loss, one can design a robust network topology such that the network is tolerant to a finite number of agent losses, and or develop a control strategy, such that the network reconfigures itself until the connectivity requirements are satisfied. In this paper, we introduce a decentralized control scheme based on a sequence of replacements, each of which occurs between an agent and one of its neighbors. The proposed scheme always maintains the graph connectivity, and it does not rely on any gathering of global information either a priori or at the intermediate steps of a mission. Moreover, the proposed scheme does not cause any increase in the total number of edges and the maximum node degree in a graph, thus it assures no increase in the overall communication cost due to the topological changes. In this study, we validate the control strategy by means of an analytical proof and simulation results." ] }
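The replacement-sequence idea summarized in the abstract above can be sketched as a cascade: the departing agent is replaced by a neighbor, that neighbor by one of its own neighbors, and so on until the cascade reaches an agent whose absence leaves the network connected. Checking criticality on the original graph and picking the lowest-id neighbor are deliberate simplifications of the minimum-degree and k-criticality strategies discussed in the text.

```python
from collections import deque

def connected_without(adj, removed):
    """True if the graph stays connected after dropping `removed`."""
    nodes = set(adj) - {removed}
    if not nodes:
        return True
    start = next(iter(nodes))
    seen, q = {start}, deque([start])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w != removed and w not in seen:
                seen.add(w)
                q.append(w)
    return seen == nodes

def replacement_chain(adj, leaving):
    """Cascade of local replacements ending at a non-critical agent."""
    chain = [leaving]
    while not connected_without(adj, chain[-1]):
        # The current end of the chain is critical: ask a neighbor not
        # already involved to slide into its position.
        successor = min(n for n in adj[chain[-1]] if n not in chain)
        chain.append(successor)
    return chain

# Path 1-2-3-4-5: if agent 3 leaves, neighbor 2 fills its slot and the
# leaf agent 1 in turn fills agent 2's slot, ending the cascade.
adj = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
print(replacement_chain(adj, 3))  # [3, 2, 1]
```

Each step involves only an agent and one of its neighbors, which is what makes the strategy locally applicable; the chain always terminates at a node whose removal does not disconnect the network.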
1311.0423
2952178312
We study unique recovery of cosparse signals from limited-angle tomographic measurements of two- and three-dimensional domains. Admissible signals belong to the union of subspaces defined by all cosupports of maximal cardinality @math with respect to the discrete gradient operator. We relate @math both to the number of measurements and to a nullspace condition with respect to the measurement matrix, so as to achieve unique recovery by linear programming. These results are supported by comprehensive numerical experiments that show a high correlation of performance in practice and theoretical predictions. Despite poor properties of the measurement matrix from the viewpoint of compressed sensing, the class of uniquely recoverable signals basically seems large enough to cover practical applications, like contactless quality inspection of compound solid bodies composed of few materials.
In discrete tomography, images to be reconstructed are sampled along lines. Thus, sampling patterns are quite different from the random and non-adaptive measurements that are favourable from the viewpoint of compressed sensing. In @cite_24 , we showed that structured sampling patterns as used in commercial scanners do not satisfy the CS conditions, like the nullspace property and the restricted isometry property, that guarantee accurate recovery of sparse (or compressible) signals. In fact, these recovery conditions predict quite poor worst-case performance of tomographic measurements, due to the high nullspace sparsity of a tomographic projection matrix @math . Moreover, the gap between the available recovery results of CS @cite_34 and the results from tomographic projections in @cite_24 is dramatic.
{ "cite_N": [ "@cite_24", "@cite_34" ], "mid": [ "2003321232", "2115275122" ], "abstract": [ "We study the discrete tomography problem in Experimental Fluid Dynamics—Tomographic Particle Image Velocimetry (TomoPIV)—from the viewpoint of Compressed Sensing (CS). The problem results in an ill‐posed image reconstruction problem due to undersampling. Ill‐posedness is also intimately connected to the particle density. Higher densities ease subsequent flow estimation but also aggravate ill‐posedness of the reconstruction problem. A theoretical investigation of this trade‐off is studied in the present work.", "1.1. Three surprises of high dimensions. This paper develops asymptotic methods to count faces of random high-dimensional polytopes; a seemingly dry and unpromising pursuit. Yet our conclusions have surprising implications - in statistics, probability, information theory, and signal processing - with potential impacts in practical subjects like medical imaging and digital communications. Before involving the reader in our lengthy analysis of high-dimensional face counting, we describe three implications of our results. 1.1.1. Convex Hulls of Gaussian Point Clouds. Consider a random point cloud of n points xi, i = 1, . . . , n, sampled independently and identically from a Gaussian distribution in R d with nonsingular covariance. This is a standard model of multivariate data; its properties are increasingly important in a wide range of applications. At the same time, it is an attractive and in some sense timeless object for theoretical study. Properties of the convex hull of the random point cloud X = xi have attracted interest for several decades, increasingly so in recent years; there is a nowvoluminous literature on the subject. The results could be significant for understanding outlier detection, or classification problems in machine learning." ] }
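The notion of gradient cosparsity used in the abstract above — the number of zero responses of the discrete gradient (analysis) operator — can be illustrated with a small sketch. This is a minimal illustration assuming forward finite differences on a 2D image; the function name is ours, not from the cited work:

```python
def gradient_cosparsity(img, tol=1e-12):
    # Cosparsity = number of zero responses of the analysis operator,
    # here forward differences along rows and columns of a 2D image.
    rows, cols = len(img), len(img[0])
    diffs = []
    for i in range(rows - 1):          # vertical differences
        for j in range(cols):
            diffs.append(img[i + 1][j] - img[i][j])
    for i in range(rows):              # horizontal differences
        for j in range(cols - 1):
            diffs.append(img[i][j + 1] - img[i][j])
    zeros = sum(1 for d in diffs if abs(d) <= tol)
    return zeros, len(diffs)

# A piecewise-constant "phantom": two homogeneous regions.
img = [[0.0] * 4 + [1.0] * 4 for _ in range(8)]
ell, total = gradient_cosparsity(img)
```

For this 8x8 two-region phantom, almost all gradient responses vanish, which is exactly the structure that makes such images candidates for recovery from few measurements.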
1311.0423
2952178312
We study unique recovery of cosparse signals from limited-angle tomographic measurements of two- and three-dimensional domains. Admissible signals belong to the union of subspaces defined by all cosupports of maximal cardinality @math with respect to the discrete gradient operator. We relate @math both to the number of measurements and to a nullspace condition with respect to the measurement matrix, so as to achieve unique recovery by linear programming. These results are supported by comprehensive numerical experiments that show a high correlation of performance in practice and theoretical predictions. Despite poor properties of the measurement matrix from the viewpoint of compressed sensing, the class of uniquely recoverable signals basically seems large enough to cover practical applications, like contactless quality inspection of compound solid bodies composed of few materials.
However, due to the unrestricted sign patterns of the sparse vector @math and of the corresponding coefficient matrix (compare Section ), we cannot transfer the recovery results established in @cite_21 to the problem and .
{ "cite_N": [ "@cite_21" ], "mid": [ "2119667497" ], "abstract": [ "Conventional approaches to sampling signals or images follow Shannon's theorem: the sampling rate must be at least twice the maximum frequency present in the signal (Nyquist rate). In the field of data conversion, standard analog-to-digital converter (ADC) technology implements the usual quantized Shannon representation - the signal is uniformly sampled at or above the Nyquist rate. This article surveys the theory of compressive sampling, also known as compressed sensing or CS, a novel sensing sampling paradigm that goes against the common wisdom in data acquisition. CS theory asserts that one can recover certain signals and images from far fewer samples or measurements than traditional methods use." ] }
1311.0423
2952178312
We study unique recovery of cosparse signals from limited-angle tomographic measurements of two- and three-dimensional domains. Admissible signals belong to the union of subspaces defined by all cosupports of maximal cardinality @math with respect to the discrete gradient operator. We relate @math both to the number of measurements and to a nullspace condition with respect to the measurement matrix, so as to achieve unique recovery by linear programming. These results are supported by comprehensive numerical experiments that show a high correlation of performance in practice and theoretical predictions. Despite poor properties of the measurement matrix from the viewpoint of compressed sensing, the class of uniquely recoverable signals basically seems large enough to cover practical applications, like contactless quality inspection of compound solid bodies composed of few materials.
We overcome this difficulty by adopting the framework recently introduced in @cite_9 , which provides an alternative viewpoint to the classical one and is more suitable for the problem class considered in this paper. Our present work applies and extends the results from @cite_9 to the 3D recovery problem: reconstructing, from few tomographic projections, three-dimensional images consisting of few homogeneous regions. We give a theoretical relation between the image and sufficient sampling, validate it empirically, and conclude that TV reconstructions of a class of synthetic phantoms exhibit a well-defined recovery curve, similar to the studies in @cite_25 @cite_21 .
{ "cite_N": [ "@cite_9", "@cite_21", "@cite_25" ], "mid": [ "2123372478", "2119667497", "" ], "abstract": [ "Hoeffding's U-statistics model combinatorial-type matrix parameters (appearing in CS theory) in a natural way. This paper proposes using these statistics for analyzing random compressed sensing matrices, in the non-asymptotic regime (relevant to practice). The aim is to address certain pessimisms of worst-case restricted isometry analyses, as observed by both Blanchard and Dossal, We show how U-statistics can obtain average-case analyses, by relating to statistical restricted isometry property (StRIP) type recovery guarantees. However unlike standard StRIP, random signal models are not required; the analysis used here holds in the almost sure (probabilistic) sense. For Gaussian bounded entry matrices, we show that both @math -minimization and LASSO essentially require on the order of @math measurements to respectively recover at least @math fraction, and @math fraction, of the signals. Noisy conditions are considered. Empirical evidence suggests our analysis to compare well to Donoho and Tanner's recent large deviation bounds for @math -equivalence, in the regime of block lengths @math with high undersampling ( @math measurements); similar system sizes are found in recent CS implementation. In this work, it is assumed throughout that matrix columns are independently sampled.", "Conventional approaches to sampling signals or images follow Shannon's theorem: the sampling rate must be at least twice the maximum frequency present in the signal (Nyquist rate). In the field of data conversion, standard analog-to-digital converter (ADC) technology implements the usual quantized Shannon representation - the signal is uniformly sampled at or above the Nyquist rate. This article surveys the theory of compressive sampling, also known as compressed sensing or CS, a novel sensing sampling paradigm that goes against the common wisdom in data acquisition. CS theory asserts that one can recover certain signals and images from far fewer samples or measurements than traditional methods use.", "" ] }
1311.0423
2952178312
We study unique recovery of cosparse signals from limited-angle tomographic measurements of two- and three-dimensional domains. Admissible signals belong to the union of subspaces defined by all cosupports of maximal cardinality @math with respect to the discrete gradient operator. We relate @math both to the number of measurements and to a nullspace condition with respect to the measurement matrix, so as to achieve unique recovery by linear programming. These results are supported by comprehensive numerical experiments that show a high correlation of performance in practice and theoretical predictions. Despite poor properties of the measurement matrix from the viewpoint of compressed sensing, the class of uniquely recoverable signals basically seems large enough to cover practical applications, like contactless quality inspection of compound solid bodies composed of few materials.
Empirical evidence for the recovery of piecewise constant functions from few tomographic measurements was already reported in @cite_13 @cite_20 @cite_23 . The first theoretical guarantees, obtained for recovery of images with exactly sparse gradients from noiseless samples via total variation minimization, date back to the beginnings of CS @cite_12 @cite_10 . However, the measurements considered were incomplete Fourier samples: images were sampled not along lines in the spatial domain, but along few radial lines in the frequency domain. Such measurement ensembles are known to have good CS properties, as opposed to the CT setup, and are almost isometric on sparse signals for a sufficient number of samples. As a result, recovery is stable in such scenarios. Stable recovery of the image gradient from incomplete Fourier samples was shown in @cite_7 , while Needell @cite_6 showed that stable image reconstruction via total variation minimization is possible beyond the Fourier setup as well, provided the measurement ensemble satisfies the RIP condition.
{ "cite_N": [ "@cite_13", "@cite_7", "@cite_6", "@cite_23", "@cite_20", "@cite_10", "@cite_12" ], "mid": [ "1972150100", "2018443449", "", "2061033783", "2164452299", "2145096794", "" ], "abstract": [ "An iterative algorithm, based on recent work in compressive sensing, is developed for volume image reconstruction from a circular cone-beam scan. The algorithm minimizes the total variation (TV) of the image subject to the constraint that the estimated projection data is within a specified tolerance of the available data and that the values of the volume image are non-negative. The constraints are enforced by the use of projection onto convex sets (POCS) and the TV objective is minimized by steepest descent with an adaptive step-size. The algorithm is referred to as adaptive-steepest-descent-POCS (ASD-POCS). It appears to be robust against cone-beam artifacts, and may be particularly useful when the angular range is limited or when the angular sampling rate is low. The ASD-POCS algorithm is tested with the Defrise disk and jaw computerized phantoms. Some comparisons are performed with the POCS and expectation-maximization (EM) algorithms. Although the algorithm is presented in the context of circular cone-beam image reconstruction, it can also be applied to scanning geometries involving other x-ray source trajectories.", "A major problem in imaging applications such as magnetic resonance imaging and synthetic aperture radar is the task of trying to reconstruct an image with the smallest possible set of Fourier samples, every single one of which has a potential time and or power cost. The theory of compressive sensing (CS) points to ways of exploiting inherent sparsity in such images in order to achieve accurate recovery using sub-Nyquist sampling schemes. Traditional CS approaches to this problem consist of solving total-variation (TV) minimization programs with Fourier measurement constraints or other variations thereof. This paper takes a different approach. Since the horizontal and vertical differences of a medical image are each more sparse or compressible than the corresponding TV image, CS methods will be more successful in recovering these differences individually. We develop an algorithm called GradientRec that uses a CS algorithm to recover the horizontal and vertical gradients and then estimates the original image from these gradients. We present two methods of solving the latter inverse problem, i.e., one based on least-square optimization and the other based on a generalized Poisson solver. After a thorough derivation of our complete algorithm, we present the results of various experiments that compare the effectiveness of the proposed method against other leading methods.", "", "Iterative image reconstruction with sparsity-exploiting methods, such as total variation (TV) minimization, investigated in compressive sensing claim potentially large reductions in sampling requirements. Quantifying this claim for computed tomography (CT) is nontrivial, because both full sampling in the discrete-to-discrete imaging model and the reduction in sampling admitted by sparsity-exploiting methods are ill-defined. The present article proposes definitions of full sampling by introducing four sufficient-sampling conditions (SSCs). The SSCs are based on the condition number of the system matrix of a linear imaging model and address invertibility and stability. In the example application of breast CT, the SSCs are used as reference points of full sampling for quantifying the undersampling admitted by reconstruction through TV-minimization. In numerical simulations, factors affecting admissible undersampling are studied. Differences between few-view and few-detector bin reconstruction as well as a relation between object sparsity and admitted undersampling are quantified.", "Suppose we wish to recover a vector x0 ∈ R m (e.g., a digital signal or image) from incomplete and contaminated observations y = Ax0 + e; A is an n × m", "This paper considers the model problem of reconstructing an object from incomplete frequency samples. Consider a discrete-time signal f spl isin C sup N and a randomly chosen set of frequencies spl Omega . Is it possible to reconstruct f from the partial knowledge of its Fourier coefficients on the set spl Omega ? A typical result of this paper is as follows. Suppose that f is a superposition of |T| spikes f(t)= spl sigma sub spl tau spl isin T f( spl tau ) spl delta (t- spl tau ) obeying |T| spl les C sub M spl middot (log N) sup -1 spl middot | spl Omega | for some constant C sub M >0. We do not know the locations of the spikes nor their amplitudes. Then with probability at least 1-O(N sup -M ), f can be reconstructed exactly as the solution to the spl lscr sub 1 minimization problem. In short, exact recovery may be obtained by solving a convex optimization problem. We give numerical values for C sub M which depend on the desired probability of success. Our result may be interpreted as a novel kind of nonlinear sampling theorem. In effect, it says that any signal made out of |T| spikes may be recovered by convex programming from almost every set of frequencies of size O(|T| spl middot logN). Moreover, this is nearly optimal in the sense that any method succeeding with probability 1-O(N sup -M ) would in general require a number of frequency samples at least proportional to |T| spl middot logN. The methodology extends to a variety of other situations and higher dimensions. For example, we show how one can reconstruct a piecewise constant (one- or two-dimensional) object from incomplete frequency samples - provided that the number of jumps (discontinuities) obeys the condition above - by minimizing other convex functionals such as the total variation of f.", "" ] }
1311.0198
1544963046
In this paper, we study online double auctions, where multiple sellers and multiple buyers arrive and depart dynamically to exchange one commodity. We show that there is no deterministic online double auction that is truthful and competitive for maximising social welfare in an adversarial model. However, given the prior information that sellers are patient and the demand is not more than the supply, a deterministic and truthful greedy mechanism is actually 2-competitive, i.e. it guarantees that the social welfare of its allocation is at least half of the optimal one achievable offline. Moreover, if the number of incoming buyers is predictable, we demonstrate that an online double auction can be reduced to an online one-sided auction, and the truthfulness and competitiveness of the reduced online double auction follow that of the online one-sided auction. Notably, by using the reduction, we find a truthful mechanism that is almost 1-competitive, when buyers arrive randomly. Finally, we argue that these mechanisms also have a promising applicability in more general settings without assuming that sellers are patient, by decomposing a market into multiple sub-markets.
To tackle the complexity of online double auction design, existing research has utilised certain accessible prior knowledge of the dynamics to design desirable online auctions @cite_8 @cite_14 . For instance, given the assumption that the valuations of traders lie in a range @math , Blum @cite_8 proposed a @math -competitive truthful online double auction in an adversarial setting for maximising social welfare, where @math is the fixed point of @math . Besides that, they also considered many other criteria. Moreover, under the assumption that each trader's active time period in the auction is at most some constant @math , Bredin @cite_14 designed a framework for constructing truthful online double auctions from truthful static double auctions, and demonstrated through experiments the performance (for maximising social welfare) of the auctions produced by the framework in probabilistic settings.
{ "cite_N": [ "@cite_14", "@cite_8" ], "mid": [ "2140476881", "2022427910" ], "abstract": [ "In this paper we present and evaluate a general framework for the design of truthful auctions for matching agents in a dynamic, two-sided market. A single commodity, such as a resource or a task, is bought and sold by multiple buyers and sellers that arrive and depart over time. Our algorithm, CHAIN, provides the first framework that allows a truthful dynamic double auction (DA) to be constructed from a truthful, single-period (i.e. static) double-auction rule. The pricing and matching method of the CHAIN construction is unique amongst dynamic-auction rules that adopt the same building block. We examine experimentally the allocative efficiency of CHAIN when instantiated on various single-period rules, including the canonical McAfee double-auction rule. For a baseline we also consider non-truthful double auctions populated with \"zero-intelligence plus\"-style learning agents. CHAIN-based auctions perform well in comparison with other schemes, especially as arrival intensity falls and agent valuations become more volatile.", "In this article, we study the problem of online market clearing where there is one commodity in the market being bought and sold by multiple buyers and sellers whose bids arrive and expire at different times. The auctioneer is faced with an online clearing problem of deciding which buy and sell bids to match without knowing what bids will arrive in the future. For maximizing profit, we present a (randomized) online algorithm with a competitive ratio of ln(pmax − pmin) p 1, when bids are in a range [pmin, pmax], which we show is the best possible. A simpler algorithm has a ratio twice this, and can be used even if expiration times are not known. For maximizing the number of trades, we present a simple greedy algorithm that achieves a factor of 2 competitive ratio if no money-losing trades are allowed. We also show that if the online algorithm is allowed to subsidize matches---match money-losing pairs if it has already collected enough money from previous pairs to pay for them---then it can actually be 1-competitive with respect to the optimal offline algorithm that is not allowed subsidy. That is, for maximizing the number of trades, the ability to subsidize is at least as valuable as knowing the future. We also consider objectives of maximizing buy or sell volume and social welfare. We present all of these results as corollaries of theorems on online matching in an incomplete interval graph.We also consider the issue of incentive compatibility, and develop a nearly optimal incentive-compatible algorithm for maximizing social welfare. For maximizing profit, we show that no incentive-compatible algorithm can achieve a sublinear competitive ratio, even if only one buy bid and one sell bid are alive at a time. However, we provide an algorithm that, under certain mild assumptions on the bids, performs nearly as well as the best fixed pair of buy and sell prices, a weaker but still natural performance measure. This latter result uses online learning methods, and we also show how such methods can be used to improve our “optimal” algorithms to a broader notion of optimality. Finally, we show how some of our results can be generalized to settings in which the buyers and sellers themselves have online bidding strategies, rather than just each having individual bids." ] }
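The flavour of greedy matching in a static double auction — as a toy illustration only, not the exact mechanisms of @cite_8 or @cite_14 — can be sketched by sorting buyers by decreasing bid and sellers by increasing ask, then matching while trades have non-negative surplus:

```python
def greedy_match(buy_bids, sell_bids):
    # Sort buyers by decreasing bid and sellers by increasing ask,
    # then match pairs while each trade has non-negative surplus.
    buyers = sorted(buy_bids, reverse=True)
    sellers = sorted(sell_bids)
    trades, welfare = [], 0.0
    for b, s in zip(buyers, sellers):
        if b < s:
            break  # all remaining pairs would lose money
        trades.append((b, s))
        welfare += b - s
    return trades, welfare

# Example: buyers bid 9, 5, 3; sellers ask 2, 4, 8.
trades, welfare = greedy_match([9, 5, 3], [2, 4, 8])
```

Here the greedy rule matches (9, 2) and (5, 4) for a total surplus of 8 and rejects the money-losing pair (3, 8); online variants must make such decisions without seeing future bids.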
1311.0505
1651121916
Many automated systems need the capability of automatic change detection without the given detection threshold. This paper presents an automated change detection algorithm in streaming multivariate data. Two overlapping windows are used to quantify the changes. While a window is used as the reference window from which the clustering is created, the other called the current window captures the newly incoming data points. A newly incoming data point can be considered a change point if it is not a member of any cluster. As our clustering-based change detector does not require detection threshold, it is an automated detector. Based on this change detector, we propose a reactive clustering algorithm for streaming data. Our empirical results show that, our clustering-based change detector works well with multivariate streaming data. The detection accuracy depends on the number of clusters in the reference window, the window width.
The clustering-based change detection method proposed here is related to work on automatic change detection, change detection in multivariate streaming data, clustering-based change detection, and reactive approaches to building and maintaining models. As automated systems require real-time processing and adaptation to changing environments, automated change detection plays an important role in many automated systems. One model of real-time processing is data stream processing. For example, sensor networks need automated change-detection methods whose detection thresholds adapt to changes in the environment. Automated change detection is also important in many mobile robotic applications @cite_1 . For example, @cite_1 proposed an online change detection method for mobile robots based on a segmentation approach. Recently, @cite_11 developed Data3, a Kinect interface for human motion detection. In fact, Data3 is a system capable of detecting changes in spatio-temporal streaming data.
{ "cite_N": [ "@cite_1", "@cite_11" ], "mid": [ "2119516915", "2013893030" ], "abstract": [ "The high cost of damaging an expensive robot or injuring people or equipment in its environment make even rare failures unacceptable in many mobile robot applications. Often the objects that pose the highest risk for a mobile robot are those that were not present throughout previous successful traversals of an environment. Change detection, a closely related problem to novelty detection, is therefore of high importance to many mobile robotic applications that require a robot to operate repeatedly in the same environment. We present a novel algorithm for performing online change detection based on a previously developed robust online novelty detection system that uses a learned lower-dimensional representation of the feature space to perform measures of similarity.We then further improve this change detection system by incorporating online scene segmentation to better utilize contextual information in the environment. We validate these approaches through extensive experiments onboard a large outdoor mobile robot. Our results show that our approaches are robust to noisy sensor data and moderate registration errors and maintain their performance across diverse natural environments and conditions.", "Motion sensing input devices like Microsoft's Kinect offer an alternative to traditional computer input devices like keyboards and mouses. Daily new applications using this interface appear. Most of them implement their own gesture detection. In our demonstration we show a new approach using the data stream engine Andu IN. The gesture detection is done based on Andu IN's complex event processing functionality. This way we build a system that allows to define new and complex gestures on the basis of a declarative programming interface. On this basis our demonstration data^3 provides a basic natural interaction OLAP interface for a sample star schema database using Microsoft's Kinect." ] }
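A minimal sketch of a two-window, clustering-based detector in the spirit of the abstract above — clusters are fitted on the reference window and a point from the current window is flagged as a change when it is not a member of any cluster. This is our own illustration: it assumes a plain k-means fit and uses each cluster's largest member-to-centroid distance as the membership radius; all names are illustrative:

```python
import numpy as np

def fit_clusters(ref, k, iters=20):
    # Plain k-means on the reference window; each cluster also keeps
    # its radius, i.e. the largest member-to-centroid distance.
    centers = ref[:k].astype(float).copy()
    for _ in range(iters):
        d = np.linalg.norm(ref[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            pts = ref[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    d = np.linalg.norm(ref[:, None] - centers[None], axis=2)
    labels = d.argmin(axis=1)
    radii = np.array([d[labels == j, j].max() if (labels == j).any() else 0.0
                      for j in range(k)])
    return centers, radii

def is_change(point, centers, radii):
    # A new point from the current window is flagged as a change
    # when it lies outside every reference cluster.
    d = np.linalg.norm(centers - np.asarray(point, float), axis=1)
    return bool(np.all(d > radii))

# Reference window: two tight blobs around (0, 0) and (10, 10).
ref = np.array([[0, 0], [10, 10], [0.2, 0], [0, 0.2], [10.2, 10], [10, 10.2]])
centers, radii = fit_clusters(ref, k=2)
```

Note that no explicit detection threshold is supplied; membership is decided entirely by the cluster structure learned from the reference window, which is the property the related work above emphasizes.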
1311.0505
1651121916
Many automated systems need the capability of automatic change detection without the given detection threshold. This paper presents an automated change detection algorithm in streaming multivariate data. Two overlapping windows are used to quantify the changes. While a window is used as the reference window from which the clustering is created, the other called the current window captures the newly incoming data points. A newly incoming data point can be considered a change point if it is not a member of any cluster. As our clustering-based change detector does not require detection threshold, it is an automated detector. Based on this change detector, we propose a reactive clustering algorithm for streaming data. Our empirical results show that, our clustering-based change detector works well with multivariate streaming data. The detection accuracy depends on the number of clusters in the reference window, the window width.
Automated systems should be capable of automatically detecting changes without a given detection threshold. Some change detection methods can automatically tune their detection thresholds so that the rate of false alarms does not exceed a given rate. Gustafson and Palmquist address the problem of automated tuning of change detectors with a given false alarm rate @cite_14 . Their approach computes the detection threshold by estimating a parametric distribution. The advantage of this method is that it can predict the detection threshold even when the data contain no or few false alarms. However, it is a parametric method.
{ "cite_N": [ "@cite_14" ], "mid": [ "1589063909" ], "abstract": [ "This contribution addresses the problem of automated tuning of change detectors with given false alarm rate. By estimating a parametric distribution to the test statistics computed from real or simulated data, the threshold of the test can be computed directly. The advantage is that we can predict the threshold although there are no or very few false alarms in the used data. Using real data, the method is robust to assumed noise distributions and modeling errors. We illustrate the method on the CUSUM and GLR tests applied to friction estimation in cars and an airborne navigation system, respectively." ] }
1311.0505
1651121916
Many automated systems need the capability of automatic change detection without the given detection threshold. This paper presents an automated change detection algorithm in streaming multivariate data. Two overlapping windows are used to quantify the changes. While a window is used as the reference window from which the clustering is created, the other called the current window captures the newly incoming data points. A newly incoming data point can be considered a change point if it is not a member of any cluster. As our clustering-based change detector does not require detection threshold, it is an automated detector. Based on this change detector, we propose a reactive clustering algorithm for streaming data. Our empirical results show that, our clustering-based change detector works well with multivariate streaming data. The detection accuracy depends on the number of clusters in the reference window, the window width.
The automatic selection of the detection threshold is of special importance. @cite_2 presented an automated change detection method for streaming data based on Hidden Markov Models. This HMM-based method detects changes by thresholding. Their algorithm consists of the following steps: model the relationships among data streams as a sequence of time-invariant linear dynamic systems; model the evolution of the estimated parameters of these models with a Hidden Markov Model; evaluate the likelihood of new parameters; and detect changes based on a given threshold. If the likelihood is less than the given threshold, a change is detected.
{ "cite_N": [ "@cite_2" ], "mid": [ "2037193886" ], "abstract": [ "In this work we address the problem of automatically detecting changes either induced by faults or concept drifts in data streams coming from multi-sensor units. The proposed methodology is based on the fact that the relationships among different sensor measurements follow a probabilistic pattern sequence when normal data, i.e. data which do not present a change, are observed. Differently, when a change in the process generating the data occurs the probabilistic pattern sequence is modified. The relationship between two generic data streams is modelled through a sequence of linear dynamic time-invariant models whose trained coefficients are used as features feeding a Hidden Markov Model (HMM) which, in turn, extracts the pattern structure. Change detection is achieved by thresholding the log-likelihood value associated with incoming new patterns, hence comparing the affinity between the structure of new acquisitions with that learned through the HMM. Experiments on both artificial and real data demonstrate the appreciable performance of the method both in terms of detection delay, false positive and false negative rates." ] }
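The likelihood-thresholding step can be sketched in a deliberately simplified form — a single Gaussian fitted to reference parameters stands in for the HMM of @cite_2 , and all names are illustrative:

```python
import math

def gaussian_loglik(x, mu, sigma):
    # Log-likelihood of x under N(mu, sigma^2).
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def detect_change(new_param, ref_params, threshold):
    # Fit a Gaussian to the reference parameters and flag a change
    # when the likelihood of the new parameter falls below threshold.
    mu = sum(ref_params) / len(ref_params)
    var = sum((p - mu) ** 2 for p in ref_params) / len(ref_params)
    sigma = math.sqrt(var) or 1e-9  # guard against zero variance
    return gaussian_loglik(new_param, mu, sigma) < threshold
```

A parameter close to the reference distribution (e.g. 1.0 against references near 1.0) passes, while an outlier (e.g. 2.0) has a sharply lower log-likelihood and is flagged — the same mechanism, applied to HMM state likelihoods, underlies the cited detector.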
1311.0713
1875802892
We study two related problems: finding a set of k vertices and minimum number of edges (kmin) and finding a graph with at least m' edges and minimum number of vertices (mvms). Goldschmidt and Hochbaum GH97 show that the mvms problem is NP-hard and they give a 3-approximation algorithm for the problem. We improve GH97 by giving a ratio of 2. A 2(1+ )-approximation for the problem follows from the work of Carnes and Shmoys CS08 . We improve the approximation ratio to 2 and give a algorithm for the problem. We show that the natural LP for has an integrality gap of 2-o(1). We improve the NP-completeness of GH97 by proving the problems are APX-hard unless a well-known instance of the dense k-subgraph admits a constant ratio. The best approximation guarantee known for this instance of dense k-subgraph is O(n^ 2 9 ) BCCFV . We show that for any constant >1, an approximation guarantee of for the problem implies a (1+o(1)) approximation for . Finally, we define and give an exact algorithm for the density version of kmin.
Goldschmidt and Hochbaum @cite_12 introduced the problem. They show that the problem is NP-complete and give @math -approximation and @math -approximation algorithms for the unweighted and weighted versions of the problem, respectively.
{ "cite_N": [ "@cite_12" ], "mid": [ "2012425059" ], "abstract": [ "Abstract We study here a problem on graphs that involves finding a subgraph of maximum node weights spanning up to k edges. We interpret the concept of “spanning” to mean that at least one endpoint of the edge is in the subgraph in which we seek to maximize the total weight of the nodes. We discuss the complexity of this problem and other related problems with different concepts of “spanning” and show that most of these variants are NP-complete. For the problem defined, we demonstrate a factor 3 approximation algorithm with complexity O( kn ) for a graph on n nodes. For the unweighted version of the the problem in a graph on m edges we describe a factor 2 approximation algorithm of greedy type, with complexity O( n + m ). For trees and forests we present a polynomial time algorithm applicable to our problem and also to a problem seeking to maximize (minimize) the weight of a subtree on k nodes." ] }
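The flavour of greedily choosing vertices to cover a prescribed number of edges can be sketched as follows — a plain max-degree heuristic for illustration only, not the approximation algorithm of @cite_12 itself:

```python
def greedy_cover_edges(edges, m_prime):
    # Repeatedly pick the vertex incident to the most uncovered edges
    # until at least m_prime edges are covered (illustrative heuristic).
    remaining = set(map(frozenset, edges))
    chosen, covered = [], 0
    while covered < m_prime and remaining:
        deg = {}
        for e in remaining:
            for v in e:
                deg[v] = deg.get(v, 0) + 1
        v = max(deg, key=deg.get)          # highest residual degree
        hit = {e for e in remaining if v in e}
        covered += len(hit)
        remaining -= hit
        chosen.append(v)
    return chosen, covered
```

On a star with center 0 and leaves 1, 2, 3, a single vertex already covers all three edges; on less structured graphs the heuristic may of course be suboptimal, which is where the approximation analysis of the cited work comes in.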
1311.0713
1875802892
We study two related problems: finding a set of k vertices and minimum number of edges (kmin) and finding a graph with at least m' edges and minimum number of vertices (mvms). Goldschmidt and Hochbaum GH97 show that the mvms problem is NP-hard and they give a 3-approximation algorithm for the problem. We improve GH97 by giving a ratio of 2. A 2(1+ )-approximation for the problem follows from the work of Carnes and Shmoys CS08 . We improve the approximation ratio to 2. algorithm for the problem. We show that the natural LP for has an integrality gap of 2-o(1). We improve the NP-completeness of GH97 by proving the pronlem are APX-hard unless a well-known instance of the dense k-subgraph admits a constant ratio. The best approximation guarantee known for this instance of dense k-subgraph is O(n^ 2 9 ) BCCFV . We show that for any constant >1, an approximation guarantee of for the problem implies a (1+o(1)) approximation for . Finally, we define we give an exact algorithm for the density version of kmin.
Consider an objective function in which we minimize @math . One can associate a cost @math and a size @math with each vertex @math , and then the objective is simply to minimize @math subject to @math . Carnes and Shmoys @cite_9 give a @math -approximation for the problem. Using this result and the observation that the objective function is at most a factor of @math away from the objective function for the problem, a @math -approximation follows for the problem.
{ "cite_N": [ "@cite_9" ], "mid": [ "1964001041" ], "abstract": [ "Primal-dual algorithms have played an integral role in recent developments in approximation algorithms, and yet there has been little work on these algorithms in the context of LP relaxations that have been strengthened by the addition of more sophisticated valid inequalities. We introduce primal-dual schema based on the LP relaxations devised by for the minimum knapsack problem as well as for the single-demand capacitated facility location problem. Our primal-dual algorithms achieve the same performance guarantees as the LP-rounding algorithms of which rely on applying the ellipsoid algorithm to an exponentially-sized LP. Furthermore, we introduce new flow-cover inequalities to strengthen the LP relaxation of the more general capacitated single-item lot-sizing problem; using just these inequalities as the LP relaxation, we obtain a primal-dual algorithm that achieves a performance guarantee of 2. Computational experiments demonstrate the effectiveness of this algorithm on generated problem instances." ] }
1311.0293
1554455557
The Tree Evaluation Problem was introduced by in 2010 as a candidate for separating P from L and NL. The most general space lower bounds known for the Tree Evaluation Problem require a semantic restriction on the branching programs and use a connection to well-known pebble games to generate a bottleneck argument. These bounds are met by corresponding upper bounds generated by natural implementations of optimal pebbling algorithms. In this paper we extend these ideas to a variety of restricted families of both deterministic and non-deterministic branching programs, proving tight lower bounds under these restricted models. We also survey and unify known lower bounds in our "pebbling argument" framework.
Some work has been done in the more general DAG Evaluation Problem, where the underlying graph is an arbitrary DAG rather than a complete binary tree. Wehr @cite_9 proved an analogous lower bound for deterministic thrifty branching programs solving DAG Evaluation. Chan @cite_6 , using different pebble games, studied circuit depth lower bounds for DAG Evaluation under a semantic restriction called output-relevance, closely related to thriftiness. Because of the more general nature of DAG Evaluation, Chan achieved a separation of @math and @math for each @math , as well as separating and , under this semantic restriction.
{ "cite_N": [ "@cite_9", "@cite_6" ], "mid": [ "1626365820", "1660848351" ], "abstract": [ "We answer a problem posed in (G 'al, Kouck 'y, McKenzie 2008) regarding a restricted model of small-space computation, tailored for solving the GEN problem. They define two variants of \"incremental branching programs\", the syntactic variant defined by a restriction on the graph-theoretic paths in the program, and the more-general semantic variant in which the same restriction is enforced only on the consistent paths - those that are followed by at least one input. They show that exponential size is required for the syntactic variant, but leave open the problem of superpolynomial lower bounds for the semantic variant. Here we give an exponential lower bound for the semantic variant by generalizing lower bound arguments, from earlier work, for a similar restricted model tailored for solving a special case of GEN called Tree Evaluation.", "We study the connection between pebble games and complexity. First, we derive complexity results using pebble games. It is shown that three pebble games used for studying computational complexity are equivalent: namely, the two-person pebble game of Dymond-Tompa, the two-person pebble game of Raz-McKenzie, and the one-person reversible pebble game of Bennett have the same pebble costs over any directed acyclic graph. The three pebble games have been used for studying parallel complexity and for proving lower bounds under restricted settings, and we show one more such lower bound on circuit-depth. Second, the pebble costs are applied to proof complexity. Concerning a family of unsatisfiable CNFs called pebbling contradictions, the pebble cost in any of the pebble games controls the scaling of some parameters of resolution refutations. Namely, the pebble cost controls the minimum depth of resolution refutations and the minimum size of tree-like resolution refutations. 
Finally, we study the space complexity of computing the pebble costs and of computing the minimum depth of resolution refutations. It is PSPACE-complete to compute the pebble cost in any of the three pebble games, and to compute the minimum depth of resolution refutations." ] }
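To make the pebble-cost notion from the abstracts above concrete, here is a brute-force computation of the cost of the simplest (one-person, black) pebble game on tiny DAGs, by searching over pebbling configurations. This is only an illustration of what "pebble cost" means; the cited lower bounds concern richer two-person and reversible variants.

```python
def pebble_cost(preds, sink):
    """Minimum number of pebbles ever needed simultaneously to pebble `sink`.

    `preds` maps each node to a tuple of its predecessors (empty for sources).
    Rules of the black pebble game: a pebble may be placed on a node whose
    predecessors are all pebbled; a pebble may be removed at any time.
    """
    for k in range(1, len(preds) + 1):
        if _reachable(preds, sink, k):
            return k

def _reachable(preds, sink, k):
    """Can `sink` be pebbled using at most k pebbles at any time?"""
    start = frozenset()
    seen = {start}
    frontier = [start]
    while frontier:
        conf = frontier.pop()
        if sink in conf:
            return True
        moves = []
        for v in preds:  # place a pebble on any node with all preds pebbled
            if v not in conf and all(p in conf for p in preds[v]):
                moves.append(conf | {v})
        for v in conf:   # or remove any pebble
            moves.append(conf - {v})
        for nxt in moves:
            if len(nxt) <= k and nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

# a path a -> b -> c needs 2 pebbles; a 3-node binary tree needs 3,
# since both leaves must be pebbled when the root is placed
path = {"a": (), "b": ("a",), "c": ("b",)}
tree = {"l": (), "r": (), "root": ("l", "r")}
```

The exhaustive search is only viable for very small DAGs; it is meant to ground the definition, not to scale.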
1311.0259
1789399620
We propose an error-disturbance relation for general observables on finite dimensional Hilbert spaces based on operational notions of error and disturbance. For two-dimensional systems we derive tight inequalities expressing the trade-off between accuracy and disturbance.
The formulation of our error-disturbance relation is closest in spirit to the one by Busch, Lahti, Pearson and Werner @cite_16 @cite_10 @cite_7 for canonical position and momentum operators, but a direct comparison is not possible, since we work with finite-dimensional systems. Here we will briefly compare our proposed inequality with two other proposals.
{ "cite_N": [ "@cite_16", "@cite_10", "@cite_7" ], "mid": [ "1962607844", "2048613853", "2041765604" ], "abstract": [ "We prove an uncertainty relation, which imposes a bound on any joint measurement of position and momentum. It is of the form (ΔP)(ΔQ) ≥ Ch, where the 'uncertainties' quantify the difference between the marginals of the joint measurement and the corresponding ideal observable. Applied to an approximate position measurement followed by a momentum measurement, the uncertainties become the precision ΔQ of the position measurement, and the perturbation ΔP of the conjugate variable introduced by such a measurement. We also determine the best constant C, which is attained for a unique phase space covariant measurement.", "We formulate and prove a new, universally valid uncertainty relation for the necessary error bar widths in any approximate joint measurement of position and momentum.", "While the slogan “no measurement without disturbance” has established itself under the name of the Heisenberg effect in the consciousness of the scientifically interested public, a precise statement of this fundamental feature of the quantum world has remained elusive, and serious attempts at rigorous formulations of it as a consequence of quantum theory have led to seemingly conflicting preliminary results. Here we show that despite recent claims to the contrary [L. , Phys. Rev. Lett. 109 100404 (2012)], Heisenberg-type inequalities can be proven that describe a tradeoff between the precision of a position measurement and the necessary resulting disturbance of momentum (and vice versa). More generally, these inequalities are instances of an uncertainty relation for the imprecisions of any joint measurement of position and momentum. Measures of error and disturbance are here defined as figures of merit characteristic of measuring devices. 
As such they are state independent, each giving worst-case estimates across all states, in contrast to previous work that is concerned with the relationship between error and disturbance in an individual state." ] }
1311.0259
1789399620
We propose an error-disturbance relation for general observables on finite dimensional Hilbert spaces based on operational notions of error and disturbance. For two-dimensional systems we derive tight inequalities expressing the trade-off between accuracy and disturbance.
In order to discuss Ozawa's uncertainty relation @cite_6 @cite_0 @cite_13 @cite_15 , we need to introduce an auxiliary Hilbert space @math and a fixed state @math on @math . The measurement apparatus is then described by two observables @math and @math on @math . (One usually writes @math , and similarly for @math , where @math is an observable on @math and @math is a unitary describing some interaction between the system of interest and the auxiliary system.) Define the errors (the @math is part of the name, not an index) \varepsilon_{O,\psi}^2 = \langle (M - A \otimes \mathbb{1})^2 \rangle_{\psi \otimes \xi}, \qquad \eta_{O,\psi}^2 = \langle (\tilde{M} - \tilde{A} \otimes \mathbb{1})^2 \rangle_{\psi \otimes \xi}, and the standard deviations \sigma_\psi^2 = \langle (A - \langle A \rangle_\psi)^2 \rangle_\psi, \qquad \tilde{\sigma}_\psi^2 = \langle (\tilde{A} - \langle \tilde{A} \rangle_\psi)^2 \rangle_\psi. One can then derive the following error-disturbance relation @cite_0 : \varepsilon_{O,\psi}\, \eta_{O,\psi} + \varepsilon_{O,\psi}\, \tilde{\sigma}_\psi + \sigma_\psi\, \eta_{O,\psi} \ge \tfrac{1}{2} \bigl| \langle [A, \tilde{A}] \rangle_\psi \bigr|. In Ref. @cite_13 Hall derives a very similar inequality, but with @math ( @math ) defined in terms of @math ( @math ). A tight variant of this relation is given by Branciard @cite_15 ; see also @cite_17 and @cite_8 for related inequalities.
{ "cite_N": [ "@cite_8", "@cite_6", "@cite_0", "@cite_15", "@cite_13", "@cite_17" ], "mid": [ "2246251217", "2035340268", "1980160712", "", "", "1980351059" ], "abstract": [ "Heisenberg's uncertainty principle is quantified by error-disturbance tradeoff relations, which have been tested experimentally in various scenarios. Here we shall report improved new versions of various error-disturbance tradeoff relations by decomposing the measurement errors into two different components, namely, operator bias and fuzziness. Our improved uncertainty relations reveal the tradeoffs between these two components of errors, and imply various conditionally valid error-tradeoff relations for the unbiased and projective measurements. We also design a quantum circuit to measure the two components of the error and disturbance.", "The Heisenberg uncertainty principle states that the product of the noise in a position measurement and the momentum disturbance caused by that measurement should be no less than the limit set by Planck’s constant 2 as demonstrated by Heisenberg’s thought experiment using a g-ray microscope. Here it is shown that this common assumption is not universally true: a universally valid trade-off relation between the noise and the disturbance has an additional correlation term, which is redundant when the intervention brought by the measurement is independent of the measured object, but which allows the noise-disturbance product much below Planck’s constant when the intervention is dependent. A model of measuring interaction with dependent intervention shows that Heisenberg’s lower bound for the noise-disturbance product is violated even by a nearly nondisturbing precise position measurement. 
An experimental implementation is also proposed to realize the above model in the context of optical quadrature measurement with currently available linear optical devices.", "Universally valid uncertainty relations are proven in a model independent formulation for inherent and unavoidable extra noises in arbitrary joint measurements on single systems, from which Heisenberg's original uncertainty relation is proven valid for any joint measurements with statistically independent noises.", "", "", "Complementarity restricts the accuracy with which incompatible quantum observables can be jointly measured. Despite popular conception, the Heisenberg uncertainty relation does not quantify this principle. We report the experimental verification of universally valid complementarity relations, including an improved relation derived here. We exploit Einstein-Poldolsky-Rosen correlations between two photonic qubits to jointly measure incompatible observables of one. The product of our measurement inaccuracies is low enough to violate the widely used, but not universally valid, Arthurs-Kelly relation." ] }
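As a small numerical sanity check (not taken from the cited papers): the commutator term ½|⟨[A, Ã]⟩| on the right-hand side of the relation above is, for any state, dominated by the product of the standard deviations (Robertson's inequality), which is the baseline that the error-disturbance relations refine. The qubit observables below are an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 1], [1, 0]], dtype=complex)     # Pauli x
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)  # Pauli y

def sigma(op, psi):
    """Standard deviation of observable `op` in pure state `psi`."""
    m = (psi.conj() @ op @ psi).real
    return np.sqrt((psi.conj() @ op @ op @ psi).real - m ** 2)

# check Robertson's bound sigma(A) sigma(B) >= 0.5 |<[A, B]>| on random states
ok = True
comm = X @ Y - Y @ X
for _ in range(200):
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi = v / np.linalg.norm(v)
    rhs = 0.5 * abs(psi.conj() @ comm @ psi)
    ok &= sigma(X, psi) * sigma(Y, psi) >= rhs - 1e-9
```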
1311.0259
1789399620
We propose an error-disturbance relation for general observables on finite dimensional Hilbert spaces based on operational notions of error and disturbance. For two-dimensional systems we derive tight inequalities expressing the trade-off between accuracy and disturbance.
Another relation is due to Hofmann @cite_9 . Consider a POVM with elements F_{H,m} (the @math is not an index), where @math ranges over some set of measurement outcomes (the number of outcomes does not have to be related to the dimension @math of @math ), and introduce the errors \varepsilon_{H,m}^2 = \langle (A - \langle A \rangle_{\rho_m})^2 \rangle_{\rho_m}, \qquad \eta_{H,m}^2 = \langle (\tilde{A} - \langle \tilde{A} \rangle_{\rho_m})^2 \rangle_{\rho_m}. Here \rho_m is the ``retrodictive'' state corresponding to the measurement outcome @math , explicitly \rho_m = F_{H,m} / \operatorname{Tr}[F_{H,m}] . We then have the relation @cite_9 (see also @cite_3 ) \varepsilon_{H,m}\, \eta_{H,m} \ge \tfrac{1}{2} \bigl| \langle [A, \tilde{A}] \rangle_{\rho_m} \bigr|.
{ "cite_N": [ "@cite_9", "@cite_3" ], "mid": [ "2027707976", "1514676086" ], "abstract": [ "The effects of any quantum measurement can be described by a collection of measurement operators l_brace M sub m r_brace acting on the quantum state of the measured system. However, the Hilbert space formalism tends to obscure the relationship between the measurement results and the physical properties of the measured system. In this paper, a characterization of measurement operators in terms of measurement resolution and disturbance is developed. It is then possible to formulate uncertainty relations for the measurement process that are valid for arbitrary input states. The motivation of these concepts is explained from a quantum communication viewpoint. It is shown that the intuitive interpretation of uncertainty as a relation between measurement resolution and disturbance provides a valid description of measurement back action. Possible applications to quantum cryptography, quantum cloning, and teleportation are discussed.", "We critically revisit the definitions of mean-squared estimation error and disturbance recently used in error-disturbance inequalities derived by Ozawa, Hall, Branciard, and by expressing them in the reduced system space. The interpretation of the definitions as mean-squared deviations relies on a hidden assumption that is incompatible with the Bell-Kochen-Specker-Spekkens contextuality theorems, and which results in averaging the deviations over a non-positive-definite joint quasiprobability distribution. For unbiased measurements, the estimation error admits a concrete interpretation as the dispersion in the estimation of the mean induced by the measurement ambiguity. We demonstrate how to measure not only this dispersion but also every observable moment with the same experimental data, and thus demonstrate that perfect estimations can have nonzero estimation error according to this measure. 
We conclude that the inequalities using these definitions do not capture the spirit of Heisenberg's eponymous inequality, but do indicate a qualitatively different relationship between dispersion and disturbance that applies to ensembles of measurements. To reconnect with the discussion of Heisenberg, we suggest alternate definitions of error and disturbance that are intrinsic to a single apparatus outcome. These definitions naturally involve the retrodictive and interdictive states for that outcome, and produce complementarity and error-disturbance inequalities that have the same form as the traditional Heisenberg relation." ] }
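A tiny worked example of the retrodictive state ρ_m = F_m / Tr[F_m] appearing in Hofmann's definitions above, for an illustrative (assumed, not from the cited papers) unsharp qubit POVM F_± = ½(𝟙 ± μσ_z):

```python
import numpy as np

mu = 0.6                         # sharpness of the unsharp z-measurement
I2 = np.eye(2)
sz = np.diag([1.0, -1.0])        # Pauli z
F = {+1: 0.5 * (I2 + mu * sz),   # POVM elements F_+ and F_-
     -1: 0.5 * (I2 - mu * sz)}

# retrodictive state for each outcome: rho_m = F_m / Tr[F_m]
rho = {m: Fm / np.trace(Fm) for m, Fm in F.items()}
```

For μ = 0.6 the "+1" outcome retrodicts the state diag(0.8, 0.2): the sharper the POVM (μ → 1), the closer the retrodictive state comes to the corresponding projector.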
1310.8187
2951717657
We present , a localization system to estimate the location and the traveling distance by leveraging the low-power inertial sensors embedded in smartphones as a supplement to GPS. To minimize the negative impact of sensor noises, exploits the intermittent strong GPS signals and uses linear regression to build a prediction model based on the trace estimated from inertial sensors and the one computed from the GPS. Furthermore, we utilize landmarks (e.g., bridges, traffic lights) detected automatically and special driving patterns (e.g., turning, uphill, and downhill) from inertial sensory data to improve the localization accuracy when the GPS signal is weak. Our evaluations of in the city demonstrate its technical viability and significant localization accuracy improvement compared with GPS and other approaches: the error is approximately 20m for 90% of the time, while the known mean error of GPS is 42.22m.
Our work involves a number of techniques; in this section, we mainly focus on work related to wireless localization and dead reckoning @cite_17 .
{ "cite_N": [ "@cite_17" ], "mid": [ "1606643854" ], "abstract": [ "A microcomputer-assisted position finding system that integrates GPS data, dead reckoning sensors, and digital maps into a low-cost, self-contained navigation instrument is disclosed. A built-in radio frequency transponder allows individual positions to be monitored by a central coordinating facility. Unique dead reckoning sensors and features are disclosed for ground speed distance measurement and computer-aided position fixes." ] }
1310.8187
2951717657
We present , a localization system to estimate the location and the traveling distance by leveraging the low-power inertial sensors embedded in smartphones as a supplement to GPS. To minimize the negative impact of sensor noises, exploits the intermittent strong GPS signals and uses linear regression to build a prediction model based on the trace estimated from inertial sensors and the one computed from the GPS. Furthermore, we utilize landmarks (e.g., bridges, traffic lights) detected automatically and special driving patterns (e.g., turning, uphill, and downhill) from inertial sensory data to improve the localization accuracy when the GPS signal is weak. Our evaluations of in the city demonstrate its technical viability and significant localization accuracy improvement compared with GPS and other approaches: the error is approximately 20m for 90% of the time, while the known mean error of GPS is 42.22m.
Several promising techniques, such as crowdsourcing, have recently been introduced to localization. One example is Zee @cite_4 , which also uses inertial sensors to track users' movement.
{ "cite_N": [ "@cite_4" ], "mid": [ "2166315077" ], "abstract": [ "Radio Frequency (RF) fingerprinting, based onWiFi or cellular signals, has been a popular approach to indoor localization. However, its adoption in the real world has been stymied by the need for sitespecific calibration, i.e., the creation of a training data set comprising WiFi measurements at known locations in the space of interest. While efforts have been made to reduce this calibration effort using modeling, the need for measurements from known locations still remains a bottleneck. In this paper, we present Zee -- a system that makes the calibration zero-effort, by enabling training data to be crowdsourced without any explicit effort on the part of users. Zee leverages the inertial sensors (e.g., accelerometer, compass, gyroscope) present in the mobile devices such as smartphones carried by users, to track them as they traverse an indoor environment, while simultaneously performing WiFi scans. Zee is designed to run in the background on a device without requiring any explicit user participation. The only site-specific input that Zee depends on is a map showing the pathways (e.g., hallways) and barriers (e.g., walls). A significant challenge that Zee surmounts is to track users without any a priori, user-specific knowledge such as the user's initial location, stride-length, or phone placement. Zee employs a suite of novel techniques to infer location over time: (a) placement-independent step counting and orientation estimation, (b) augmented particle filtering to simultaneously estimate location and user-specific walk characteristics such as the stride length,(c) back propagation to go back and improve the accuracy of ocalization in the past, and (d) WiFi-based particle initialization to enable faster convergence. We present an evaluation of Zee in a large office building." ] }
1310.8187
2951717657
We present , a localization system to estimate the location and the traveling distance by leveraging the low-power inertial sensors embedded in smartphones as a supplement to GPS. To minimize the negative impact of sensor noises, exploits the intermittent strong GPS signals and uses linear regression to build a prediction model based on the trace estimated from inertial sensors and the one computed from the GPS. Furthermore, we utilize landmarks (e.g., bridges, traffic lights) detected automatically and special driving patterns (e.g., turning, uphill, and downhill) from inertial sensory data to improve the localization accuracy when the GPS signal is weak. Our evaluations of in the city demonstrate its technical viability and significant localization accuracy improvement compared with GPS and other approaches: the error is approximately 20m for 90% of the time, while the known mean error of GPS is 42.22m.
Recently, dead-reckoning strategies using inertial sensors to estimate motion activities have attracted much research interest. Strapdown Inertial Navigation Systems (SINS) @cite_5 and pedometer systems @cite_25 use MEMS sensors to estimate the moving speed and trace. The key issue is dealing with the noise of inertial sensors and the accumulated errors, which sometimes grow cubically @cite_31 . The Personal Dead-reckoning (PDR) system @cite_15 uses ``Zero Velocity Update'' to calibrate the drift. The majority of dead-reckoning studies focus on walking estimation, such as UnLoc @cite_28 and CompAcc @cite_0 . Their main idea is to use accelerometer sensors to estimate the number of walking steps and then measure the walking distance. AutoWitness @cite_22 is a system with an embedded wireless tag integrated with vibration, accelerometer, and gyroscope sensors. The tag is attached to a vehicle, and the accelerometer and gyroscope sensors are used to track the moving trace.
{ "cite_N": [ "@cite_22", "@cite_15", "@cite_28", "@cite_0", "@cite_5", "@cite_31", "@cite_25" ], "mid": [ "2096416669", "2137444221", "2054602086", "2099136733", "", "", "2100801339" ], "abstract": [ "We present AutoWitness, a system to deter, detect, and track personal property theft, improve historically dismal stolen property recovery rates, and disrupt stolen property distribution networks. A property owner embeds a small tag inside the asset to be protected, where the tag lies dormant until it detects vehicular movement. Once moved, the tag uses inertial sensor-based dead reckoning to estimate position changes, but to reduce integration errors, the relative position is reset whenever the sensors indicate the vehicle has stopped. The sequence of movements, stops, and turns are logged in compact form and eventually transferred to a server using a cellular modem after both sufficient time has passed (to avoid detection) and RF power is detectable (hinting cellular access may be available). Eventually, the trajectory data are sent to a server which attempts to match a path to the observations. The algorithm uses a Hidden Markov Model of city streets and Viterbi decoding to estimate the most likely path. The proposed design leverages low-power radios and inertial sensors, is immune to intransit cloaking, and supports post hoc path reconstruction. Our prototype demonstrates technical viability of the design; the volume market forces driving machine-to-machine communications will soon make the design economically viable.", "This paper introduces a positioning system for walking persons, called \"Personal Dead-reckoning\" (PDR) system. The PDR system does not require GPS, beacons, or landmarks. The system is therefore useful in GPS-denied environments, such as inside buildings, tunnels, or dense forests. Potential users of the system are military and security personnel as well as emergency responders. 
The PDR system uses a small 6-DOF inertial measurement unit (IMU) attached to the user's boot. The IMU provides rate-of-rotation and acceleration measurements that are used in real-time to estimate the location of the user relative to a known starting point. In order to reduce the most significant errors of this IMU-based system−caused by the bias drift of the accelerometers−we implemented a technique known as \"Zero Velocity Update\" (ZUPT). With the ZUPT technique and related signal processing algorithms, typical errors of our system are about 2 of distance traveled. This typical PDR system error is largely independent of the gait or speed of the user. When walking continuously for several minutes, the error increases gradually beyond 2 . The PDR system works in both 2-dimensional (2-D) and 3-D environments, although errors in Z-direction are usually larger than 2 of distance traveled. Earlier versions of our system used an impractically large IMU. In the most recent version we implemented a much smaller IMU. This paper discussed specific problems of this small IMU, our measures for eliminating these problems, and our first experimental results with the small IMU under different conditions.", "We propose UnLoc, an unsupervised indoor localization scheme that bypasses the need for war-driving. Our key observation is that certain locations in an indoor environment present identifiable signatures on one or more sensing dimensions. An elevator, for instance, imposes a distinct pattern on a smartphone's accelerometer; a corridor-corner may overhear a unique set of WiFi access points; a specific spot may experience an unusual magnetic fluctuation. We hypothesize that these kind of signatures naturally exist in the environment, and can be envisioned as internal landmarks of a building. Mobile devices that \"sense\" these landmarks can recalibrate their locations, while dead-reckoning schemes can track them between landmarks. 
Results from 3 different indoor settings, including a shopping mall, demonstrate median location errors of 1:69m. War-driving is not necessary, neither are floorplans the system simultaneously computes the locations of users and landmarks, in a manner that they converge reasonably quickly. We believe this is an unconventional approach to indoor localization, holding promise for real-world deployment.", "This paper identifies the possibility of using electronic compasses and accelerometers in mobile phones, as a simple and scalable method of localization without war-driving. The idea is not fundamentally different from ship or air navigation systems, known for centuries. Nonetheless, directly applying the idea to human-scale environments is non-trivial. Noisy phone sensors and complicated human movements present practical research challenges. We cope with these challenges by recording a person's walking patterns, and matching it against possible path signatures generated from a local electronic map. Electronic maps enable greater coverage, while eliminating the reliance on WiFi infrastructure and expensive war-driving. Measurements on Nokia phones and evaluation with real users confirm the anticipated benefits. Results show a location accuracy of less than 11m in regions where today's localization services are unsatisfactory or unavailable.", "", "", "This paper presents a method for correcting dead reckoning parameters, which are heading and step size, for a pedestrian navigation system. In this method, the compass bias error and the step size error can be estimated during the period that the Global Positioning System (GPS) signal is available. The errors are used for correcting those parameters to improve the accuracy of position determination using only the dead reckoning system when the GPS signal is not available. The results show that the parameters can be estimated with reasonable accuracy. 
Moreover, the method also helps to increase the positioning accuracy when the GPS signal is available." ] }
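The accelerometer-based step-counting idea described above can be sketched as peak detection on the acceleration magnitude: each upward crossing of a threshold counts as one step. The synthetic signal, sampling rate, and threshold below are illustrative assumptions, not the pipeline of any cited system.

```python
import math

def count_steps(mag, thresh=10.5):
    """Count upward crossings of `thresh` in an acceleration-magnitude signal."""
    steps = 0
    above = False
    for a in mag:
        if a > thresh and not above:
            steps += 1          # rising edge: one step detected
            above = True
        elif a < thresh:
            above = False       # re-arm once the signal drops back down
    return steps

# synthetic walk: gravity (~9.8 m/s^2) plus a 2 Hz bounce, sampled at 50 Hz
# for 5 seconds -> 10 step cycles
fs, dur, step_rate = 50, 5.0, 2.0
mag = [9.8 + 1.5 * math.sin(2 * math.pi * step_rate * i / fs)
       for i in range(int(fs * dur))]
```

Real systems add low-pass filtering and adaptive thresholds, since phone placement and gait change the signal shape considerably.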
1310.8187
2951717657
We present , a localization system to estimate the location and the traveling distance by leveraging the low-power inertial sensors embedded in smartphones as a supplement to GPS. To minimize the negative impact of sensor noises, exploits the intermittent strong GPS signals and uses linear regression to build a prediction model based on the trace estimated from inertial sensors and the one computed from the GPS. Furthermore, we utilize landmarks (e.g., bridges, traffic lights) detected automatically and special driving patterns (e.g., turning, uphill, and downhill) from inertial sensory data to improve the localization accuracy when the GPS signal is weak. Our evaluations of in the city demonstrate its technical viability and significant localization accuracy improvement compared with GPS and other approaches: the error is approximately 20m for 90% of the time, while the known mean error of GPS is 42.22m.
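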
Smartphones are used to analyze traffic patterns to provide better in-vehicle navigation. CTrack @cite_6 and VTrack @cite_24 are two systems that process error-prone position data to estimate trajectories. Both systems match a sequence of observations to transitions between locations; the former adopts fingerprints, while the latter mainly utilizes an HMM. SmartRoad @cite_30 detects and identifies traffic lights and stop signs through crowd-sensing strategies. Some studies propose map-matching algorithms based on the Kalman filter @cite_16 or HMMs @cite_14 ; however, such approaches cannot guarantee accuracy. IVMM @cite_21 was then proposed to increase the accuracy.
{ "cite_N": [ "@cite_30", "@cite_14", "@cite_21", "@cite_6", "@cite_24", "@cite_16" ], "mid": [ "", "2135822894", "2094283130", "1605476884", "", "2081813350" ], "abstract": [ "", "The problem of matching measured latitude longitude points to roads is becoming increasingly important. This paper describes a novel, principled map matching algorithm that uses a Hidden Markov Model (HMM) to find the most likely road route represented by a time-stamped sequence of latitude longitude pairs. The HMM elegantly accounts for measurement noise and the layout of the road network. We test our algorithm on ground truth data collected from a GPS receiver in a vehicle. Our test shows how the algorithm breaks down as the sampling rate of the GPS is reduced. We also test the effect of increasing amounts of additional measurement noise in order to assess how well our algorithm could deal with the inaccuracies of other location measurement systems, such as those based on WiFi and cell tower multilateration. We provide our GPS data and road network representation as a standard test set for other researchers to use in their map matching work.", "Matching a raw GPS trajectory to roads on a digital map is often referred to as the Map Matching problem. However, the occurrence of the low-sampling-rate trajectories (e.g. one point per 2 minutes) has brought lots of challenges to existing map matching algorithms. 
To address this problem, we propose an Interactive Voting-based Map Matching (IVMM) algorithm based on the following three insights: 1) The position context of a GPS point as well as the topological information of road networks, 2) the mutual influence between GPS points (i.e., the matching result of a point references the positions of its neighbors; in turn, when matching its neighbors, the position of this point will also be referenced), and 3) the strength of the mutual influence weighted by the distance between GPS points (i.e., the farther distance is the weaker influence exists). In this approach, we do not only consider the spatial and temporal information of a GPS trajectory but also devise a voting-based strategy to model the weighted mutual influences between GPS points. We evaluate our IVMM algorithm based on a user labeled real trajectory dataset. As a result, the IVMM algorithm outperforms the related method (ST-Matching algorithm).", "CTrack is an energy-efficient system for trajectory mapping using raw position tracks obtained largely from cellular base station fingerprints. Trajectory mapping, which involves taking a sequence of raw position samples and producing the most likely path followed by the user, is an important component in many location-based services including crowd-sourced traffic monitoring, navigation and routing, and personalized trip management. Using only cellular (GSM) fingerprints instead of power-hungry GPS and WiFi radios, the marginal energy consumed for trajectory mapping is zero. This approach is non-trivial because we need to process streams of highly inaccurate GSM localization samples (average error of over 175 meters) and produce an accurate trajectory. 
CTrack meets this challenge using a novel two-pass Hidden Markov Model that sequences cellular GSM fingerprints directly without converting them to geographic coordinates, and fuses data from low-energy sensors available on most commodity smart-phones, including accelerometers (to detect movement) and magnetic compasses (to detect turns). We have implemented CTrack on the Android platform, and evaluated it on 126 hours (1,074 miles) of real driving traces in an urban environment. We find that CTrack can retrieve over 75% of a user's drive accurately in the median. An important by-product of CTrack is that even devices with no GPS or WiFi (constituting a significant fraction of today's phones) can contribute and benefit from accurate position data.", "", "The main tasks of car navigation systems are positioning, routing, and guidance. This paper describes a novel, two-step approach to vehicle positioning founded on the appropriate combination of the in-car sensors, GPS signals, and a digital map. The first step is based on the application of a Kalman filter, which optimally updates the model of car movement based on the in-car odometer and gyroscope measurements, and the GPS signal. The second step further improves the position estimate by dynamically comparing the continuous vehicle trajectory obtained in the first step with the candidate trajectories on a digital map. This is in contrast with standard applications of the digital map where the current position estimate is simply projected on the digital map at every sampling instant." ] }
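The HMM map-matching idea running through the abstracts above reduces to Viterbi decoding: candidate road segments per GPS sample are the hidden states, emission scores penalize the GPS-to-segment distance, and transition scores penalize implausible routes. A minimal, generic sketch; the toy states and scoring values below are placeholders for illustration, not the models of the cited papers:

```python
def viterbi_match(emission_logp, transition_logp):
    """Viterbi decoding for HMM map matching.
    emission_logp: list over time of {state: log-probability};
    transition_logp(s, t): log-probability of moving from segment s to t."""
    prev = dict(emission_logp[0])
    back = []
    for obs in emission_logp[1:]:
        ptr, cur = {}, {}
        for t, e in obs.items():
            # Best predecessor for state t at this step.
            s_best = max(prev, key=lambda s: prev[s] + transition_logp(s, t))
            cur[t] = prev[s_best] + transition_logp(s_best, t) + e
            ptr[t] = s_best
        prev, back = cur, back + [ptr]
    # Backtrack from the best final state.
    state = max(prev, key=prev.get)
    path = [state]
    for ptr in reversed(back):
        state = ptr[state]
        path.append(state)
    return path[::-1]

# Toy example: two candidate segments 'A', 'B'; the trace favours A, then B.
emissions = [{'A': -0.1, 'B': -2.0}, {'A': -2.0, 'B': -0.1}, {'A': -2.0, 'B': -0.1}]
same_road = lambda s, t: -0.1 if s == t else -1.0   # staying on a road is cheap
path = viterbi_match(emissions, same_road)
```

The transition function is where the road-network topology enters in the real systems; here it is collapsed to a same-segment bonus.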
1310.8620
2123738293
This paper analyzes distributed control protocols for first- and second-order networked dynamical systems. We propose a class of nonlinear consensus controllers where the input of each agent can be written as a product of a nonlinear gain, and a sum of nonlinear interaction functions. By using integral Lyapunov functions, we prove the stability of the proposed control protocols, and explicitly characterize the equilibrium set. We also propose a distributed proportional-integral (PI) controller for networked dynamical systems. The PI controllers successfully attenuate constant disturbances in the network. We prove that agents with single-integrator dynamics are stable for any integral gain, and give an explicit tight upper bound on the integral gain for when the system is stable for agents with double-integrator dynamics. Throughout the paper we highlight some possible applications of the proposed controllers by realistic simulations of autonomous satellites, power systems and building temperature control.
Nonlinear interaction functions for consensus problems are well-studied @cite_22 @cite_20 , with applications to, e.g., consensus while preserving connectedness @cite_30 and collision avoidance @cite_30 . Sufficient conditions for the convergence of nonlinear protocols for first-order integrator dynamics are given in @cite_28 . Consensus on a general nonlinear function value, referred to as @math -consensus, was studied in @cite_26 , by the use of nonlinear gain functions. The literature on @math -consensus has focused on agents with single-integrator dynamics. However, as we show later, our results can be generalized to hold also for double-integrator dynamics. Consensus protocols where the input of an agent is scaled by a positive function of the agent's own state were studied in @cite_12 for single-integrator dynamics. In @cite_13 , position consensus for agents with double-integrator dynamics under a class of nonlinear interaction functions and nonlinear velocity damping is studied. In contrast to @cite_13 , we study undamped consensus protocols for single- and double-integrator dynamics using integral Lyapunov functions. In @cite_9 , double-integrator consensus problems with linear non-homogeneous damping coefficients are considered. We generalize these results to hold also for a class of nonlinear damping coefficients.
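The protocol class discussed above (input = nonlinear gain times a sum of nonlinear interaction functions of relative states) can be illustrated with a toy simulation: a bounded tanh interaction on a path graph of four single integrators. The graph, gain, step size, and horizon are illustrative choices, not taken from the cited works:

```python
import numpy as np

def simulate_consensus(x0, neighbors, gain, interaction, dt=0.01, steps=4000):
    """Forward-Euler simulation of first-order nonlinear consensus:
    xdot_i = gain(x_i) * sum_{j in N_i} interaction(x_j - x_i),
    with gain > 0 and interaction odd and increasing."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        dx = np.array([gain(x[i]) * sum(interaction(x[j] - x[i]) for j in nbrs)
                       for i, nbrs in enumerate(neighbors)])
        x = x + dt * dx
    return x

# Path graph 0-1-2-3; unit gain and tanh interaction (bounded inputs).
neighbors = [[1], [0, 2], [1, 3], [2]]
x = simulate_consensus([0.0, 1.0, 2.0, 3.0], neighbors,
                       gain=lambda xi: 1.0, interaction=np.tanh)
```

Because the interaction function is odd and the gain is uniform here, pairwise terms cancel over each undirected edge, so the state average (1.5) is preserved while the agents contract to agreement.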
{ "cite_N": [ "@cite_30", "@cite_26", "@cite_22", "@cite_28", "@cite_9", "@cite_13", "@cite_12", "@cite_20" ], "mid": [ "2144354132", "2034226907", "1864806140", "2158401151", "2149559996", "2104643019", "2135097535", "" ], "abstract": [ "A distributed swarm aggregation algorithm is developed for a team of multiple kinematic agents. Specifically, each agent is assigned a control law, which is the sum of two elements: a repulsive potential field, which is responsible for the collision avoidance objective, and an attractive potential field, which forces the agents to converge to a configuration where they are close to each other. Furthermore, the attractive potential field forces the agents that are initially located within the sensing radius of an agent to remain within this area for all time. In this way, the connectivity properties of the initially formed communication graph are rendered invariant for the trajectories of the closed-loop system. It is shown that under the proposed control law, agents converge to a configuration where each agent is located at a bounded distance from each of its neighbors. The results are also extended to the case of nonholonomic kinematic unicycle-type agents and to the case of dynamic edge addition. In the latter case, we derive a smaller bound in the swarm size than in the static case.", "This paper presents analysis and design results for distributed consensus algorithms in multi-agent networks. We consider arbitrary consensus functions of the initial state of the network agents. Under mild smoothness assumptions, we obtain necessary and sufficient conditions characterizing any algorithm that asymptotically achieves consensus. This characterization is the building block to obtain various design results. We first identify a class of smooth functions for which one can synthesize in a systematic way distributed algorithms that achieve consensus. 
We apply this result to the family of weighted power mean functions, and characterize the exponential convergence properties of the resulting algorithms. We conclude with two distributed algorithms that achieve, respectively, max and min consensus in finite time.", "In this paper, we introduce linear and nonlinear consensus protocols for networks of dynamic agents that allow the agents to agree in a distributed and cooperative fashion. We consider the cases of networks with communication time-delays and channels that have filtering effects. We find a tight upper bound on the maximum fixed time-delay that can be tolerated in the network. It turns out that the connectivity of the network is the key in reaching a consensus. The case of agreement with bounded inputs is considered by analyzing the convergence of a class of nonlinear protocols. A Lyapunov function is introduced that quantifies the total disagreement among the nodes of a network. Simulation results are provided for agreement in networks with communication time-delays and constrained inputs.", "This paper is concerned with the convergence of a class of continuous-time nonlinear consensus algorithms for single integrator agents. In the consensus algorithms studied here, the control input of each agent is assumed to be a state-dependent combination of the relative positions of its neighbors in the information flow graph. Using a novel approach based on the smallest order of the nonzero derivative, it is shown that under some mild conditions the convex hull of the agents has a contracting property. A set-valued LaSalle-like approach is subsequently employed to show the convergence of the agents to a common point. The results are shown to be more general than the ones reported in the literature in some cases. An illustrative example demonstrates how the proposed convergence conditions can be verified.", "In this paper, the consensus problems for networks of dynamic agents are investigated. 
The agent dynamics is adopted as a typical point mass model based on the Newton's law. The average-consensus problem is proposed for such class of networks, which includes two aspects, the agreement of the states of the agents and the convergence to zero of the speeds of the agents. A linear consensus protocol for such networks is established for solving such a consensus problem that includes two parts, a local speed feedback controller and the interactions from the finite neighbours. Two kinds of topology are discussed: one is fixed topology, the other is switching one. The convergence analysis is proved and the protocol performance is discussed as well. The simulation results are presented that are consistent with our theoretical results. Copyright © 2006 John Wiley & Sons, Ltd.", "Robust static output-feedback controllers are designed that achieve consensus in networks of heterogeneous agents modeled as nonlinear systems of relative degree two. Both ideal communication networks and networks with communication constraints are considered, e.g., with limited communication range or heterogeneous communication delays. All design conditions that are presented are scalable to large and heterogeneous networks because the controller parameters depend only on the dynamics of the corresponding agent and its neighbors, but not on other agents in the network.", "We consider stationary consensus protocols for networks of dynamic agents with fixed topologies. At each time instant, each agent knows only its and its neighbors’ state, but must reach consensus on a group decision value that is function of all the agents’ initial state. We show that the agents can reach consensus if the value of such a function is time-invariant when computed over the agents’ state trajectories. We use this basic result to introduce a non-linear protocol design rule allowing consensus on a quite general set of values. 
Such a set includes, e.g., any generalized mean of order p of the agents’ initial states. As a second contribution we show that our protocol design is the solution of individual optimizations performed by the agents. This notion suggests a game theoretic interpretation of consensus problems as mechanism design problems. Under this perspective a supervisor entails the agents to reach a consensus by imposing individual objectives. We prove that such objectives can be chosen so that rational agents have a unique optimal protocol, and asymptotically reach consensus on a desired group decision value. We use a Lyapunov approach to prove that the asymptotical consensus can be reached when the communication links between nearby agents define a time-invariant undirected network. Finally we perform a simulation study concerning the vertical alignment maneuver of a team of unmanned air vehicles.", "" ] }
1310.8620
2123738293
This paper analyzes distributed control protocols for first- and second-order networked dynamical systems. We propose a class of nonlinear consensus controllers where the input of each agent can be written as a product of a nonlinear gain, and a sum of nonlinear interaction functions. By using integral Lyapunov functions, we prove the stability of the proposed control protocols, and explicitly characterize the equilibrium set. We also propose a distributed proportional-integral (PI) controller for networked dynamical systems. The PI controllers successfully attenuate constant disturbances in the network. We prove that agents with single-integrator dynamics are stable for any integral gain, and give an explicit tight upper bound on the integral gain for when the system is stable for agents with double-integrator dynamics. Throughout the paper we highlight some possible applications of the proposed controllers by realistic simulations of autonomous satellites, power systems and building temperature control.
Multi-agent systems, like all control processes, are in general sensitive to disturbances. When only relative measurements are available, disturbances may spread through the network. It has, for example, been shown in @cite_32 that vehicular string formations with only relative measurements cannot maintain coherence under disturbances as the size of the formation increases. In @cite_29 , the robustness of consensus protocols under disturbances is studied, but the analysis is limited to the relative states of the agents. None of the aforementioned references consider disturbance rejection. In @cite_27 , however, the steady-state error for first-order consensus dynamics is minimized by convex optimization over the edge weights. A similar approach is taken in @cite_25 , where the application is vehicle platooning. In @cite_1 , an optimal sensor placement problem for consensus dynamics is formulated, minimizing the @math gain of the system. However, these approaches eliminate output errors due to constant disturbances only in special cases, as no integral control is employed.
{ "cite_N": [ "@cite_29", "@cite_1", "@cite_32", "@cite_27", "@cite_25" ], "mid": [ "2099689250", "2129488612", "2129248505", "2151685478", "2131181636" ], "abstract": [ "In this paper we study robustness of consensus in networks of coupled single integrators driven by white noise. Robustness is quantified as the H 2 norm of the closed-loop system. In particular we investigate how robustness depends on the properties of the underlying (directed) communication graph. To this end several classes of directed and undirected communication topologies are analyzed and compared. The trade-off between speed of convergence and robustness to noise is also investigated.", "This work explores the properties of the edge variant of the graph Laplacian in the context of the edge agreement problem. We show that the edge Laplacian, and its corresponding agreement protocol, provides a useful perspective on the well-known node agreement, or the consensus algorithm. Specifically, the dynamics induced by the edge Laplacian facilitates a better understanding of the role of certain subgraphs, e.g., cycles and spanning trees, in the original agreement problem. Using the edge Laplacian, we proceed to examine graph-theoretic characterizations of the H2 and H∞ performance for the agreement protocol. These results are subsequently applied in the contexts of optimal sensor placement for consensus-based applications. Finally, the edge Laplacian is employed to provide new insights into the nonlinear extension of linear agreement to agents with passive dynamics.", "We consider distributed consensus and vehicular formation control problems. Specifically we address the question of whether local feedback is sufficient to maintain coherence in large-scale networks subject to stochastic disturbances. We define macroscopic performance measures which are global quantities that capture the notion of coherence; a notion of global order that quantifies how closely the formation resembles a solid object. 
We consider how these measures scale asymptotically with network size in the topologies of regular lattices in 1, 2, and higher dimensions, with vehicular platoons corresponding to the 1-D case. A common phenomenon appears where a higher spatial dimension implies a more favorable scaling of coherence measures, with a dimensions of 3 being necessary to achieve coherence in consensus and vehicular formations under certain conditions. In particular, we show that it is impossible to have large coherent 1-D vehicular platoons with only local feedback. We analyze these effects in terms of the underlying energetic modes of motion, showing that they take the form of large temporal and spatial scales resulting in an accordion-like motion of formations. A conclusion can be drawn that in low spatial dimensions, local feedback is unable to regulate large-scale disturbances, but it can in higher spatial dimensions. This phenomenon is distinct from, and unrelated to string instability issues which are commonly encountered in control problems for automated highways.", "We consider a stochastic model for distributed average consensus, which arises in applications such as load balancing for parallel processors, distributed coordination of mobile autonomous agents, and network synchronization. In this model, each node updates its local variable with a weighted average of its neighbors' values, and each new value is corrupted by an additive noise with zero mean. The quality of consensus can be measured by the total mean-square deviation of the individual variables from their average, which converges to a steady-state value. We consider the problem of finding the (symmetric) edge weights that result in the least mean-square deviation in steady state. We show that this problem can be cast as a convex optimization problem, so the global solution can be found efficiently. 
We describe some computational methods for solving this problem, and compare the weights and the mean-square deviations obtained by this method and several other weight design methods.", "We consider the design of optimal localized feedback gains for one-dimensional formations in which vehicles only use information from their immediate neighbors. The control objective is to enhance coherence of the formation by making it behave like a rigid lattice. For the single-integrator model with symmetric gains, we establish convexity, implying that the globally optimal controller can be computed efficiently. We also identify a class of convex problems for double-integrators by restricting the controller to symmetric position and uniform diagonal velocity gains. To obtain the optimal non-symmetric gains for both the single- and the double-integrator models, we solve a parameterized family of optimal control problems ranging from an easily solvable problem to the problem of interest as the underlying parameter increases. When this parameter is kept small, we employ perturbation analysis to decouple the matrix equations that result from the optimality conditions, thereby rendering the unique optimal feedback gain. This solution is used to initialize a homotopy-based Newton's method to find the optimal localized gain. To investigate the performance of localized controllers, we examine how the coherence of large-scale stochastically forced formations scales with the number of vehicles. We establish several explicit scaling relationships and show that the best performance is achieved by a localized controller that is both non-symmetric and spatially-varying." ] }
1310.8620
2123738293
This paper analyzes distributed control protocols for first- and second-order networked dynamical systems. We propose a class of nonlinear consensus controllers where the input of each agent can be written as a product of a nonlinear gain, and a sum of nonlinear interaction functions. By using integral Lyapunov functions, we prove the stability of the proposed control protocols, and explicitly characterize the equilibrium set. We also propose a distributed proportional-integral (PI) controller for networked dynamical systems. The PI controllers successfully attenuate constant disturbances in the network. We prove that agents with single-integrator dynamics are stable for any integral gain, and give an explicit tight upper bound on the integral gain for when the system is stable for agents with double-integrator dynamics. Throughout the paper we highlight some possible applications of the proposed controllers by realistic simulations of autonomous satellites, power systems and building temperature control.
Consensus with integral action is studied in @cite_4 for agents with single-integrator dynamics, where the proposed controller is shown to attenuate constant disturbances. In @cite_3 , the authors take a similar approach to attenuate unknown disturbances. In both references, the analysis is limited to agents with single-integrator dynamics. Our proposed PI controller is related to the consensus controllers in @cite_10 @cite_8 ; however, these references do not consider the influence of disturbances.
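The disturbance-attenuation effect of integral action can be sketched numerically for single-integrator agents. In the toy simulation below (the path graph, gains, and disturbance values are arbitrary illustrative choices, not the controllers of the cited works), the integral state absorbs the disagreement-inducing part of the constant disturbances, while the purely proportional protocol is left with a steady-state disagreement:

```python
import numpy as np

# Laplacian of a path graph on three agents.
L = np.array([[ 1., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  1.]])
d = np.array([1.0, -0.5, 0.2])     # unknown constant disturbances

def run(ki, dt=0.01, steps=4000):
    """Forward-Euler simulation of distributed PI consensus:
    xdot = -L x - z + d,  zdot = ki * L x   (ki = 0 gives P-only)."""
    x = np.array([0.0, 2.0, -1.0])
    z = np.zeros(3)
    for _ in range(steps):
        # Tuple assignment: both updates use the pre-step x and z.
        x, z = x + dt * (-L @ x - z + d), z + dt * ki * (L @ x)
    return x

x_pi = run(ki=1.0)   # disagreement driven to (numerically) zero
x_p = run(ki=0.0)    # residual disagreement caused by d
```

Note that the average of the disturbances still makes the consensus trajectory drift; what the integral action removes is the disagreement between agents, i.e., the spread of `x_pi`.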
{ "cite_N": [ "@cite_10", "@cite_4", "@cite_3", "@cite_8" ], "mid": [ "2143156927", "2100431085", "2013200328", "1977509696" ], "abstract": [ "In the paper, an extension of LaSalle's Invariance Principle to a class of switched linear systems is studied. One of the motivations is the consensus problem in multi-agent systems. Unlike most existing results in which each switching mode in the system needs to be asymptotically stable, this paper allows that the switching modes are only Lyapunov stable. Under certain ergodicity assumptions, an extension of LaSalle's Invariance Principle for global asymptotic stability is obtained. Then it is used to solve the consensus reaching problem of certain multi-agent systems in which each agent is modeled by a double integrator, and the associated interaction graph is switching and is assumed to be only jointly connected.", "We analyze two different estimation algorithms for dynamic average consensus in sensing and communication networks, a proportional algorithm and a proportional-integral algorithm. We investigate the stability properties of these estimators under changing inputs and network topologies as well as their convergence properties under constant or slowly-varying inputs. In doing so, we discover that the more complex proportional-integral algorithm has performance benefits over the simpler proportional algorithm", "This paper focuses on the consensus and formation problems of multiagent systems under unknown, persistent disturbances. Specifically, we propose a method that combines an existing consensus (or formation) algorithm with a new controller. The new controller has an integral action that produces a control input based on an error signal locally projected onto the column space of the graph Laplacian. 
This action allows agents to achieve a consensus or a predetermined formation objective under constant or time-varying disturbances.", "This note addresses a coordination problem of a multiagent system with jointly connected interconnection topologies. Neighbor-based rules are adopted to realize local control strategies for these continuous-time autonomous agents described by double integrators. Although the interagent connection structures vary over time and related graphs may not be connected, a sufficient condition to make all the agents converge to a common value is given for the problem by a proposed Lyapunov-based approach and related space decomposition technique" ] }
1310.7769
2274297850
Abstract This paper reports on stable (or invariant) properties of human interaction networks, with benchmarks derived from public email lists. Activity, recognized through messages sent, along time and topology were observed in snapshots in a timeline, and at different scales. Our analysis shows that activity is practically the same for all networks across timescales ranging from seconds to months. The principal components of the participants in the topological metrics space remain practically unchanged as different sets of messages are considered. The activity of participants follows the expected scale-free trace, thus yielding the hub, intermediary and peripheral classes of vertices by comparison against the Erdős–Rényi model. The relative sizes of these three sectors are essentially the same for all email lists and the same along time. Typically, 15% of the vertices are hubs, 15%–45% are intermediary and >45% are peripheral vertices. Similar results for the distribution of participants in the three sectors and for the relative importance of the topological metrics were obtained for 12 additional networks from Facebook, Twitter and ParticipaBR. These properties are consistent with the literature and may be general for human interaction networks, which has important implications for establishing a typology of participants based on quantitative criteria.
Research on network evolution is often restricted to network growth, in which there is a monotonic increase in the number of events @cite_16 . Exceptions are reported in this section, with emphasis on those more closely related to the present article.
{ "cite_N": [ "@cite_16" ], "mid": [ "2092124750" ], "abstract": [ "The rich set of interactions between individuals in society results in complex community structure, capturing highly connected circles of friends, families or professional cliques in a social network. Thanks to frequent changes in the activity and communication patterns of individuals, the associated social and communication network is subject to constant evolution. Our knowledge of the mechanisms governing the underlying community dynamics is limited, but is essential for a deeper understanding of the development and self-optimization of society as a whole. We have developed an algorithm based on clique percolation that allows us to investigate the time dependence of overlapping communities on a large scale, and thus uncover basic relationships characterizing community evolution. Our focus is on networks capturing the collaboration between scientists and the calls between mobile phone users. We find that large groups persist for longer if they are capable of dynamically altering their membership, suggesting that an ability to change the group composition results in better adaptability. The behaviour of small groups displays the opposite tendency-the condition for stability is that their composition remains unchanged. We also show that knowledge of the time commitment of members to a given community can be used for estimating the community's lifetime. These findings offer insight into the fundamental differences between the dynamics of small groups and large institutions." ] }
1310.7769
2274297850
Abstract This paper reports on stable (or invariant) properties of human interaction networks, with benchmarks derived from public email lists. Activity, recognized through messages sent, along time and topology were observed in snapshots in a timeline, and at different scales. Our analysis shows that activity is practically the same for all networks across timescales ranging from seconds to months. The principal components of the participants in the topological metrics space remain practically unchanged as different sets of messages are considered. The activity of participants follows the expected scale-free trace, thus yielding the hub, intermediary and peripheral classes of vertices by comparison against the Erdős–Rényi model. The relative sizes of these three sectors are essentially the same for all email lists and the same along time. Typically, 15% of the vertices are hubs, 15%–45% are intermediary and >45% are peripheral vertices. Similar results for the distribution of participants in the three sectors and for the relative importance of the topological metrics were obtained for 12 additional networks from Facebook, Twitter and ParticipaBR. These properties are consistent with the literature and may be general for human interaction networks, which has important implications for establishing a typology of participants based on quantitative criteria.
Network types have been discussed with regard to the number of participants, the intermittence of their activity, and network longevity @cite_16 . Two topologically different networks emerge from human interaction, depending on the frequency of interactions: the connectivity distribution follows either a generalized power law or an exponential @cite_17 . In email list networks, scale-free properties were reported with @math @cite_5 (as were web browsing and library loans @cite_9 ), and different linguistic traces were related to weak and strong ties @cite_18 .
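The hub/intermediary/peripheral split mentioned above can be illustrated with a toy preferential-attachment network: grow a heavy-tailed graph, then call the vertices whose degree far exceeds the Erdős–Rényi expectation hubs. The 2× mean-degree threshold below is a crude stand-in for the paper's actual comparison against the Erdős–Rényi model:

```python
import random
from collections import Counter

def preferential_attachment(n, m=2, seed=42):
    """Barabasi-Albert-style growth: each new vertex attaches to m distinct
    existing vertices chosen proportionally to degree (the `endpoints` list,
    which stores every edge endpoint, acts as the degree-biased sampler)."""
    random.seed(seed)
    endpoints = list(range(m))
    degree = Counter({v: 0 for v in range(m)})
    for v in range(m, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(random.choice(endpoints))
        for u in chosen:
            degree[v] += 1
            degree[u] += 1
            endpoints += [v, u]
    return degree

deg = preferential_attachment(500)
mean_deg = sum(deg.values()) / len(deg)
hubs = [v for v, k in deg.items() if k > 2 * mean_deg]   # a small minority
```

In an Erdős–Rényi graph of the same density, degrees concentrate tightly around the mean, so a heavy tail past such a threshold is the signature of the hub sector.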
{ "cite_N": [ "@cite_18", "@cite_9", "@cite_5", "@cite_16", "@cite_17" ], "mid": [ "2068370029", "2073689275", "2076279155", "2092124750", "2080201763" ], "abstract": [ "", "terized by bursts of rapidly occurring events separated by long periods of inactivity. We show that the bursty nature of human behavior is a consequence of a decision based queuing process: when individuals execute tasks based on some perceived priority, the timing of the tasks will be heavy tailed, most tasks being rapidly executed, while a few experiencing very long waiting times. In contrast, priority blind execution is well approximated by uniform interevent statistics. We discuss two queuing models that capture human activity. The first model assumes that there are no limitations on the number of tasks an individual can hadle at any time, predicting that the waiting time of the individual tasks follow a heavy tailed distribution Pw w with =3 2. The second model imposes limitations on the queue length, resulting in a heavy tailed waiting time distribution characterized by = 1. We provide empirical evidence supporting the relevance of these two models to human activity patterns, showing that while emails, web browsing and library visitation display = 1, the surface mail based communication belongs to the =3 2 universality class. Finally, we discuss possible extension of the proposed queuing models and outline some future challenges in exploring the statistical mechanics of human dynamics.", "Communication & Co-ordination activities are central to large software projects, but are difficult to observe and study in traditional (closed-source, commercial) settings because of the prevalence of informal, direct communication modes. OSS projects, on the other hand, use the internet as the communication medium,and typically conduct discussions in an open, public manner. 
As a result, the email archives of OSS projects provide a useful trace of the communication and co-ordination activities of the participants. However, there are various challenges that must be addressed before this data can be effectively mined. Once this is done, we can construct social networks of email correspondents, and begin to address some interesting questions. These include questions relating to participation in the email; the social status of different types of OSS participants; the relationship of email activity and commit activity (in the CVS repositories) and the relationship of social status with commit activity. In this paper, we begin with a discussion of our infrastructure (including a novel use of Scientific Workflow software) and then discuss our approach to mining the email archives; and finally we present some preliminary results from our data analysis.", "The rich set of interactions between individuals in society results in complex community structure, capturing highly connected circles of friends, families or professional cliques in a social network. Thanks to frequent changes in the activity and communication patterns of individuals, the associated social and communication network is subject to constant evolution. Our knowledge of the mechanisms governing the underlying community dynamics is limited, but is essential for a deeper understanding of the development and self-optimization of society as a whole. We have developed an algorithm based on clique percolation that allows us to investigate the time dependence of overlapping communities on a large scale, and thus uncover basic relationships characterizing community evolution. Our focus is on networks capturing the collaboration between scientists and the calls between mobile phone users. We find that large groups persist for longer if they are capable of dynamically altering their membership, suggesting that an ability to change the group composition results in better adaptability. 
The behaviour of small groups displays the opposite tendency-the condition for stability is that their composition remains unchanged. We also show that knowledge of the time commitment of members to a given community can be used for estimating the community's lifetime. These findings offer insight into the fundamental differences between the dynamics of small groups and large institutions.", "Networks grow and evolve by local events, such as the addition of new nodes and links, or rewiring of links from one node to another. We show that depending on the frequency of these processes two topologically different networks can emerge, the connectivity distribution following either a generalized power-law or an exponential. We propose a continuum theory that predicts these two regimes as well as the scaling function and the exponents, in good agreement with the numerical results. Finally, we use the obtained predictions to fit the connectivity distribution of the network describing the professional links between movie actors." ] }
1310.7444
2951180293
Source delay, the time a packet experiences in its source node, serves as a fundamental quantity for delay performance analysis in networks. However, the source delay performance in highly dynamic mobile ad hoc networks (MANETs) remains largely unknown. This paper studies the source delay in MANETs based on a general packet dispatching scheme with dispatch limit @math (PD- @math for short), where the same packet may be dispatched up to @math times by its source node, so that the packet dispatching process can be flexibly controlled through a proper setting of @math . We first apply the Quasi-Birth-and-Death (QBD) theory to develop a theoretical framework to capture the complex packet dispatching process in PD- @math MANETs. With the help of the theoretical framework, we then derive the cumulative distribution function as well as the mean and variance of the source delay in such networks. Finally, extensive simulation and theoretical results are provided to validate our source delay analysis and illustrate how source delay in MANETs is related to network parameters.
Overall delay (also called end-to-end delay), defined as the time it takes a packet to reach its destination after it is generated at its source, has also been extensively studied in the literature. For MANETs with two-hop relay routing, closed-form upper bounds on expected overall delay were derived in @cite_33 @cite_5 . For MANETs with two-hop relay routing and its variants, approximation results on expected overall delay were presented in @cite_11 @cite_20 . For MANETs with multi-hop relay routing, upper bounds on the cumulative distribution function of overall delay were reported in @cite_15 @cite_19 , and approximations on the expected overall delay were derived in @cite_24 . Rather than studying upper bounds and approximations on overall delay, some recent works explored the exact overall delay and showed that it is possible to derive the exact results on overall delay for MANETs under some special two-hop relay routings @cite_33 @cite_13 .
{ "cite_N": [ "@cite_33", "@cite_24", "@cite_19", "@cite_5", "@cite_15", "@cite_13", "@cite_20", "@cite_11" ], "mid": [ "2171471095", "2121368972", "2049732373", "2076363954", "2101863060", "1854119092", "2063378231", "" ], "abstract": [ "We consider the throughput delay tradeoffs for scheduling data transmissions in a mobile ad hoc network. To reduce delays in the network, each user sends redundant packets along multiple paths to the destination. Assuming the network has a cell partitioned structure and users move according to a simplified independent and identically distributed (i.i.d.) mobility model, we compute the exact network capacity and the exact end-to-end queueing delay when no redundancy is used. The capacity-achieving algorithm is a modified version of the Grossglauser-Tse two-hop relay algorithm and provides O(N) delay (where N is the number of users). We then show that redundancy cannot increase capacity, but can significantly improve delay. The following necessary tradeoff is established: delay/rate ≥ O(N). Two protocols that use redundancy and operate near the boundary of this curve are developed, with delays of O(√N) and O(log(N)), respectively. Networks with non-i.i.d. mobility are also considered and shown through simulation to closely match the performance of i.i.d. systems in the O(√N) delay regime.", "A large body of work has theoretically analyzed the performance of mobility-assisted routing schemes for intermittently connected mobile networks. But the vast majority of these prior studies have ignored wireless contention. Recent papers have shown through simulations that ignoring contention leads to inaccurate and misleading results, even for sparse networks. In this paper, we analyze the performance of routing schemes under contention. First, we introduce a mathematical framework to model contention. This framework can be used to analyze any routing scheme with any mobility and channel model. 
Then, we use this framework to compute the expected delays for different representative mobility-assisted routing schemes under random direction, random waypoint and community-based mobility models. Finally, we use these delay expressions to optimize the design of routing schemes while demonstrating that designing and optimizing routing schemes using analytical expressions which ignore contention can lead to suboptimal or even erroneous behavior.", "The class of Gupta-Kumar results, which predict the throughput capacity in wireless networks, is restricted to asymptotic regimes. This tutorial presents a methodology to address a corresponding non-asymptotic analysis based on the framework of the stochastic network calculus, in a rigorous mathematical manner. In particular, we derive explicit closed-form results on the distribution of the end-to-end capacity and delay, for a fixed source-destination pair, in a network with broad assumptions on its topology and degree of spatial correlations. The results are non-asymptotic in that they hold for finite time scales and network sizes, as well as bursty arrivals. The generality of the results enables the research of several interesting problems, concerning for instance the effects of time scales or randomness in topology on the network capacity.", "The two-hop relay algorithm and its variants have been attractive for ad hoc mobile networks, because they are simple yet efficient, and more importantly, they enable the capacity and delay to be studied analytically. This paper considers a general two-hop relay with f-cast (2HR-f), where each packet is delivered to at most f distinct relay nodes and should be received in order at its destination. 
We derive the closed-form theoretical models rather than order sense ones for the 2HR-f algorithm with a careful consideration of the important interference, medium contention, traffic contention and queuing delay issues, which enable an accurate delay and capacity analysis to be performed for an ad hoc mobile network employing the 2HR-f. Based on our models, one can directly get the corresponding order sense results. Extensive simulation studies are also conducted to demonstrate the efficiency of these new models.", "The class of Gupta-Kumar results gives the asymptotic throughput in multi-hop wireless networks but cannot predict the throughput behavior in networks of typical size. This paper addresses the non-asymptotic analysis of the multihop wireless communication problem and provides, for the first time, closed-form results on multi-hop throughput and delay distributions. The results are non-asymptotic in that they hold for any number of nodes and also fully account for transient regimes, i.e., finite time scales, delays, as well as bursty arrivals. Their accuracy is supported by the recovery of classical single-hop results, and also by simulations from empirical data sets with realistic mobility settings. Moreover, for a specific network scenario and a fixed pair of nodes, the results confirm Gupta-Kumar's Ω(1/√(n log n)) asymptotic scaling law.", "Understanding the delay performance in mobile ad hoc networks (MANETs) is of fundamental importance for supporting Quality of Service (QoS) guaranteed applications in such networks. Despite a lot of research effort in the last several decades, the important end-to-end delay modeling in MANETs remains a challenging issue. This is partially due to the highly dynamical behaviors of MANETs but also due to the lack of an efficient theoretical framework to depict the complicated network state transitions under such dynamics. 
This paper demonstrates the potential application of the Quasi-Birth-and-Death process (QBD) theory in MANETs delay analysis by applying it to the end-to-end delay modeling in broadcast-based two-hop relay MANETs. We first demonstrate that the QBD theory actually enables a novel and powerful theoretical framework to be developed to efficiently capture the complicated network state transitions in the concerned MANETs. We then show that with the help of the theoretical framework, we are able to analytically model the exact expected end-to-end delay and also the exact per node throughput capacity in such MANETs. Extensive simulations are further provided to validate the efficiency of our QBD theory-based models.", "Due to their simplicity and efficiency, the two-hop relay algorithm and its variants serve as a class of attractive routing schemes for mobile ad hoc networks (MANETs). With the available two-hop relay schemes, a node, whenever getting an opportunity for transmission, randomly probes only once a neighbor node for the possible transmission. It is notable that such single probing strategy, although simple, may result in a significant waste of the precious transmission opportunities in highly dynamic MANETs. To alleviate such limitation for a more efficient utilization of limited wireless bandwidth, this paper proposes a more general probing-based two-hop relay algorithm with limited packet redundancy. In such an algorithm with probing round limit τ and packet redundancy limit f, each transmitter is allowed to conduct up to τ rounds of probing for identifying a possible receiver and each packet can be delivered to at most f distinct relays. A general theoretical framework is further developed to help us understand that under different setting of τ and f, how we can benefit from multiple probings in terms of the per node throughput capacity and the expected end-to-end packet delay.", "" ] }
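The PD-f dispatching scheme analyzed in this record lends itself to a quick sanity check by simulation. The sketch below is a simplified slotted Monte Carlo model, not the paper's QBD framework; the arrival/dispatch probabilities and the function name are illustrative assumptions:

```python
import random
from collections import deque

def mean_source_delay(f, p_arrival, p_dispatch, slots=200_000, seed=7):
    """Slotted Monte Carlo sketch of source delay under a dispatch-limit-f
    scheme: the head-of-line packet must be dispatched f times before it
    leaves the source queue.  Each slot brings a new packet with probability
    p_arrival and a dispatch opportunity with probability p_dispatch."""
    rng = random.Random(seed)
    queue = deque()      # arrival slot of each buffered packet
    copies_left = 0      # remaining dispatches for the head-of-line packet
    delays = []
    for t in range(slots):
        if rng.random() < p_arrival:
            queue.append(t)
        if queue and rng.random() < p_dispatch:
            if copies_left == 0:          # start serving a fresh packet
                copies_left = f
            copies_left -= 1
            if copies_left == 0:          # all f copies sent: packet leaves
                delays.append(t - queue.popleft())
    return sum(delays) / len(delays)
```

Raising f lengthens the effective service time of every packet, so the simulated mean source delay grows with f, matching the intuition that the dispatch limit trades source delay against delivery redundancy.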
1310.7648
1971038035
We consider wireless-powered amplify-and-forward and decode-and-forward relaying in cooperative communications, where an energy constrained relay node first harvests energy through the received radio-frequency signal from the source and then uses the harvested energy to forward the source information to the destination node. We propose time-switching based energy harvesting (EH) and information transmission (IT) protocols with two modes of EH at the relay. For continuous time EH, the EH time can be any percentage of the total transmission block time. For discrete time EH, the whole transmission block is either used for EH or IT. The proposed protocols are attractive because they do not require channel state information at the transmitter side and enable relay transmission with preset fixed transmission power. We derive analytical expressions of the achievable throughput for the proposed protocols. The derived expressions are verified by comparison with simulations and allow the system performance to be determined as a function of the system parameters. Finally, we show that the proposed protocols outperform the existing fixed time duration EH protocols in the literature, since they intelligently track the level of the harvested energy to switch between EH and IT in an online fashion, allowing efficient use of resources.
The majority of the research in wireless energy harvesting and information processing has considered point-to-point communication systems and studied rate-energy trade-off assuming single-input-single-output (SISO) @cite_11 @cite_49 @cite_5 @cite_40 @cite_42 , single-input-multiple-output (SIMO) @cite_16 , multiple-input-single-output (MISO) @cite_19 , and multiple-input-multiple-output (MIMO) @cite_26 @cite_46 setups. The application of wireless energy harvesting to orthogonal frequency division multiplexing (OFDM) @cite_30 and cognitive radio @cite_39 @cite_51 based systems has also been considered. Energy beamforming through wireless energy harvesting was studied for the multi-antenna wireless broadcasting system in @cite_21 @cite_3 . Secure transmission in the presence of eavesdropper under wireless energy harvesting constraint was studied in MISO beamforming systems @cite_20 . Moreover, multiuser scheduling in the presence of wireless energy harvesting was considered in @cite_27 .
{ "cite_N": [ "@cite_30", "@cite_26", "@cite_42", "@cite_21", "@cite_3", "@cite_39", "@cite_19", "@cite_40", "@cite_27", "@cite_49", "@cite_5", "@cite_46", "@cite_16", "@cite_51", "@cite_20", "@cite_11" ], "mid": [ "2082565560", "2032372805", "2050930635", "2002932684", "2106616990", "2122489958", "1971718957", "1969691134", "2094000142", "2111505049", "2111844221", "2110098571", "2042738017", "2260411392", "1976265896", "2170263567" ], "abstract": [ "In this paper, we study the resource allocation algorithm design for multiuser orthogonal frequency division multiplexing (OFDM) downlink systems with simultaneous wireless information and power transfer. The algorithm design is formulated as a non-convex optimization problem for maximizing the energy efficiency of data transmission (bit Joule delivered to the users). In particular, the problem formulation takes into account the minimum required system data rate, heterogeneous minimum required power transfers to the users, and the circuit power consumption. Subsequently, by exploiting the method of timesharing and the properties of nonlinear fractional programming, the considered non-convex optimization problem is solved using an efficient iterative resource allocation algorithm. For each iteration, the optimal power allocation and user selection solution are derived based on Lagrange dual decomposition. Simulation results illustrate that the proposed iterative resource allocation algorithm achieves the maximum energy efficiency of the system and reveal how energy efficiency, system capacity, and wireless power transfer benefit from the presence of multiple users in the system.", "Wireless power transfer (WPT) is a promising new solution to provide convenient and perpetual energy supplies to wireless networks. In practice, WPT is implementable by various technologies such as inductive coupling, magnetic resonate coupling, and electromagnetic (EM) radiation, for short- mid- long-range applications, respectively. 
In this paper, we consider the EM or radio signal enabled WPT in particular. Since radio signals can carry energy as well as information at the same time, a unified study on simultaneous wireless information and power transfer (SWIPT) is pursued. Specifically, this paper studies a multiple-input multiple-output (MIMO) wireless broadcast system consisting of three nodes, where one receiver harvests energy and another receiver decodes information separately from the signals sent by a common transmitter, and all the transmitter and receivers may be equipped with multiple antennas. Two scenarios are examined, in which the information receiver and energy receiver are separated and see different MIMO channels from the transmitter, or co-located and see the identical MIMO channel from the transmitter. For the case of separated receivers, we derive the optimal transmission strategy to achieve different tradeoffs for maximal information rate versus energy transfer, which are characterized by the boundary of a so-called rate-energy (R-E) region. For the case of co-located receivers, we show an outer bound for the achievable R-E region due to the potential limitation that practical energy harvesting receivers are not yet able to decode information directly. Under this constraint, we investigate two practical designs for the co-located receiver case, namely time switching and power splitting, and characterize their achievable R-E regions in comparison to the outer bound.", "Characterizing the fundamental tradeoffs for maximizing energy efficiency (EE) versus spectrum efficiency (SE) is a key problem in wireless communication. In this paper, we address this problem for a point-to-point additive white Gaussian noise (AWGN) channel with the transmitter powered solely via energy harvesting from the environment. 
In addition, we assume a practical on-off transmitter model with non-ideal circuit power, i.e., when the transmitter is on, its consumed power is the sum of the transmit power and a constant circuit power. Under this setup, we study the optimal transmit power allocation to maximize the average throughput over a finite horizon, subject to the time-varying energy constraint and the non-ideal circuit power consumption. First, we consider the off-line optimization under the assumption that the energy arrival time and amount are a priori known at the transmitter. Although this problem is non-convex due to the non-ideal circuit power, we show an efficient optimal solution that in general corresponds to a two-phase transmission: the first phase with an EE-maximizing on-off power allocation, and the second phase with a SE-maximizing power allocation that is non-decreasing over time, thus revealing an interesting result that both the EE and SE optimizations are unified in an energy harvesting communication system. We then extend the optimal off-line algorithm to the case with multiple parallel AWGN channels, based on the principle of nested optimization. Finally, inspired by the off-line optimal solution, we propose a new online algorithm under the practical setup with only the past and present energy state information (ESI) known at the transmitter.", "In this letter, we study the robust beamforming problem for the multi-antenna wireless broadcasting system with simultaneous information and power transmission, under the assumption of imperfect channel state information (CSI) at the transmitter. Following the worst-case deterministic model, our objective is to maximize the worst-case harvested energy for the energy receiver while guaranteeing that the rate for the information receiver is above a threshold for all possible channel realizations. Such problem is nonconvex with infinite number of constraints. 
Using certain transformation techniques, we convert this problem into a relaxed semidefinite programming problem (SDP) which can be solved efficiently. We further show that the solution of the relaxed SDP problem is always rank-one. This indicates that the relaxation is tight and we can get the optimal solution for the original problem. Simulation results are presented to validate the effectiveness of the proposed algorithm.", "This paper studies a multi-user multiple-input single-output (MISO) downlink system for simultaneous wireless information and power transfer (SWIPT), in which a set of single-antenna mobile stations (MSs) receive information and energy simultaneously via power splitting (PS) from the signal sent by a multi-antenna base station (BS). We aim to minimize the total transmission power at BS by jointly designing transmit beamforming vectors and receive PS ratios for all MSs under their given signal-to-interference-plus-noise ratio (SINR) constraints for information decoding and harvested power constraints for energy harvesting. First, we derive the sufficient and necessary condition for the feasibility of our formulated problem. Next, we solve this non-convex problem by applying the technique of semidefinite relaxation (SDR). We prove that SDR is indeed tight for our problem and thus achieves its global optimum. Finally, we propose two suboptimal solutions of lower complexity than the optimal solution based on the principle of separating the optimization of transmit beamforming and receive PS, where the zero-forcing (ZF) and the SINR-optimal based transmit beamforming schemes are applied, respectively.", "Wireless networks can be self-sustaining by harvesting energy from ambient radio-frequency (RF) signals. Recently, researchers have made progress on designing efficient circuits and devices for RF energy harvesting suitable for low-power wireless applications. 
Motivated by this and building upon the classic cognitive radio (CR) network model, this paper proposes a novel method for wireless networks coexisting where low-power mobiles in a secondary network, called secondary transmitters (STs), harvest ambient RF energy from transmissions by nearby active transmitters in a primary network, called primary transmitters (PTs), while opportunistically accessing the spectrum licensed to the primary network. We consider a stochastic-geometry model in which PTs and STs are distributed as independent homogeneous Poisson point processes (HPPPs) and communicate with their intended receivers at fixed distances. Each PT is associated with a guard zone to protect its intended receiver from ST's interference, and at the same time delivers RF energy to STs located in its harvesting zone. Based on the proposed model, we analyze the transmission probability of STs and the resulting spatial throughput of the secondary network. The optimal transmission power and density of STs are derived for maximizing the secondary network throughput under the given outage-probability constraints in the two coexisting networks, which reveal key insights to the optimal network design. Finally, we show that our analytical result can be generally applied to a non-CR setup, where distributed wireless power chargers are deployed to power coexisting wireless transmitters in a sensor network.", "This paper considers the transmitter design for wireless information and energy transfer (WIET) in a multiple-input single-output (MISO) interference channel (IFC). The design problem is to maximize the system throughput subject to individual energy harvesting constraints and power constraints. 
It is observed that the ideal scheme, where the receivers simultaneously perform information detection (ID) and energy harvesting (EH) from the received signal, may not always achieve the best tradeoff between information transfer and energy harvesting, but simple practical schemes based on time splitting may perform better. We therefore propose two practical time splitting schemes, namely the time-division mode switching (TDMS) and time-division multiple access (TDMA), in addition to the existing power splitting (PS) scheme. In the two-user scenario, we show that beamforming is optimal to all the schemes. Moreover, the design problems associated with the TDMS and TDMA schemes admit semi-analytical solutions. In the general K-user scenario, a successive convex approximation method is proposed to handle the WIET problems associated with the ideal scheme, the PS scheme and the TDMA scheme, which are known NP-hard in general. Simulation results show that none of the schemes under consideration can always dominate another in terms of the sum rate performance. Specifically, it is observed that stronger cross-link channel power improves the achievable sum rate of time splitting schemes but degrades the sum rate performance of the ideal scheme and PS scheme. As a result, time splitting schemes can outperform the ideal scheme and the PS scheme in interference dominated scenarios.", "In some communication networks, such as passive RFID systems, the energy used to transfer information between a sender and a recipient can be reused for successive communication tasks. In fact, from known results in physics, any system that exchanges information via the transfer of given physical resources, such as radio waves, particles and qubits, can conceivably reuse, at least part, of the received resources. 
This paper aims at illustrating some of the new challenges that arise in the design of communication networks in which the signals exchanged by the nodes carry both information and energy. To this end, a baseline two-way communication system is considered in which two nodes communicate in an interactive fashion. In the system, a node can either send an \"on\" symbol (or \"1\"), which costs one unit of energy, or an \"off\" signal (or \"0\"), which does not require any energy expenditure. Upon reception of a \"1\" signal, the recipient node \"harvests\", with some probability, the energy contained in the signal and stores it for future communication tasks. Inner and outer bounds on the achievable rates are derived. Numerical results demonstrate the effectiveness of the proposed strategies and illustrate some key design insights.", "This paper studies the newly emerging wireless powered communication network in which one hybrid access point (H-AP) with constant power supply coordinates the wireless energy/information transmissions to/from a set of distributed users that do not have other energy sources. A \"harvest-then-transmit\" protocol is proposed where all users first harvest the wireless energy broadcast by the H-AP in the downlink (DL) and then send their independent information to the H-AP in the uplink (UL) by time-division-multiple-access (TDMA). First, we study the sum-throughput maximization of all users by jointly optimizing the time allocation for the DL wireless power transfer versus the users' UL information transmissions given a total time constraint based on the users' DL and UL channels as well as their average harvested energy values. By applying convex optimization techniques, we obtain the closed-form expressions for the optimal time allocations to maximize the sum-throughput. 
Our solution reveals an interesting \"doubly near-far\" phenomenon due to both the DL and UL distance-dependent signal attenuation, where a far user from the H-AP, which receives less wireless energy than a nearer user in the DL, has to transmit with more power in the UL for reliable information transmission. As a result, the maximum sum-throughput is shown to be achieved by allocating substantially more time to the near users than the far users, thus resulting in unfair rate allocation among different users. To overcome this problem, we furthermore propose a new performance metric so-called common-throughput with the additional constraint that all users should be allocated with an equal rate regardless of their distances to the H-AP. We present an efficient algorithm to solve the common-throughput maximization problem. Simulation results demonstrate the effectiveness of the common-throughput approach for solving the new doubly near-far problem in wireless powered communication networks.", "Simultaneous information and power transfer over the wireless channels potentially offers great convenience to mobile users. Yet practical receiver designs impose technical constraints on its hardware realization, as practical circuits for harvesting energy from radio signals are not yet able to decode the carried information directly. To make theoretical progress, we propose a general receiver operation, namely, dynamic power splitting (DPS), which splits the received signal with adjustable power ratio for energy harvesting and information decoding, separately. Three special cases of DPS, namely, time switching (TS), static power splitting (SPS) and on-off power splitting (OPS) are investigated. The TS and SPS schemes can be treated as special cases of OPS. Moreover, we propose two types of practical receiver architectures, namely, separated versus integrated information and energy receivers. 
The integrated receiver integrates the front-end components of the separated receiver, thus achieving a smaller form factor. The rate-energy tradeoff for the two architectures are characterized by a so-called rate-energy (R-E) region. The optimal transmission strategy is derived to achieve different rate-energy tradeoffs. With receiver circuit power consumption taken into account, it is shown that the OPS scheme is optimal for both receivers. For the ideal case when the receiver circuit does not consume power, the SPS scheme is optimal for both receivers. In addition, we study the performance for the two types of receivers under a realistic system setup that employs practical modulation. Our results provide useful insights to the optimal practical receiver design for simultaneous wireless information and power transfer (SWIPT).", "Energy harvesting is a promising solution to prolong the operation of energy-constrained wireless networks. In particular, scavenging energy from ambient radio signals, namely wireless energy harvesting (WEH), has recently drawn significant attention. In this paper, we consider a point-to-point wireless link over the narrowband flat-fading channel subject to time-varying co-channel interference. It is assumed that the receiver has no fixed power supplies and thus needs to replenish energy opportunistically via WEH from the unintended interference and or the intended signal sent by the transmitter. We further assume a single-antenna receiver that can only decode information or harvest energy at any time due to the practical circuit limitation. Therefore, it is important to investigate when the receiver should switch between the two modes of information decoding (ID) and energy harvesting (EH), based on the instantaneous channel and interference condition. In this paper, we derive the optimal mode switching rule at the receiver to achieve various trade-offs between wireless information transfer and energy harvesting. 
Specifically, we determine the minimum transmission outage probability for delay-limited information transfer and the maximum ergodic capacity for no-delay-limited information transfer versus the maximum average energy harvested at the receiver, which are characterized by the boundary of so-called \"outage-energy\" region and \"rate-energy\" region, respectively. Moreover, for the case when the channel state information (CSI) is known at the transmitter, we investigate the joint optimization of transmit power control, information and energy transfer scheduling, and the receiver's mode switching. The effects of circuit energy consumption at the receiver on the achievable rate-energy trade-offs are also characterized. Our results provide useful guidelines for the efficient design of emerging wireless communication systems powered by opportunistic WEH.", "This paper investigates joint wireless information and energy transfer in a two-user MIMO interference channel, in which each receiver either decodes the incoming information data (information decoding, ID) or harvests the RF energy (energy harvesting, EH) to operate with a potentially perpetual energy supply. In the two-user interference channel, we have four different scenarios according to the receiver mode - (ID1, ID2), (EH1, EH2), (EH1, ID2), and (ID1, EH2). While the maximum information bit rate is unknown and finding the optimal transmission strategy is still open for (ID1, ID2), we have derived the optimal transmission strategy achieving the maximum harvested energy for (EH1, EH2). For (EH1, ID2), and (ID1, EH2), we find a necessary condition of the optimal transmission strategy and, accordingly, identify the achievable rate-energy (R-E) tradeoff region for two transmission strategies that satisfy the necessary condition - maximum energy beamforming (MEB) and minimum leakage beamforming (MLB). 
Furthermore, a new transmission strategy satisfying the necessary condition - signal-to-leakage-and-energy ratio (SLER) maximization beamforming - is proposed and shown to exhibit a better R-E region than the MEB and the MLB strategies. Finally, we propose a mode scheduling method to switch between (EH1, ID2) and (ID1, EH2) based on the SLER.", "Energy harvesting is a promising solution to prolong the operation time of energy-constrained wireless networks. In particular, scavenging energy from ambient radio signals, namely wireless energy harvesting (WEH), has recently drawn significant attention. In this paper, we consider a point-to-point wireless link over the flat-fading channel, where the receiver has no fixed power supplies and thus needs to replenish energy via WEH from the signals sent by the transmitter. We first consider a SISO (single-input single-output) system where the single-antenna receiver cannot decode information and harvest energy independently from the same signal received. Under this practical constraint, we propose a dynamic power splitting (DPS) scheme, where the received signal is split into two streams with adjustable power levels for information decoding and energy harvesting separately based on the instantaneous channel condition that is assumed to be known at the receiver. We derive the optimal power splitting rule at the receiver to achieve various trade-offs between the maximum ergodic capacity for information transfer and the maximum average harvested energy for power transfer, which are characterized by the boundary of a so-called \"rate-energy (R-E)\" region. Moreover, for the case when the channel state information is also known at the transmitter, we investigate the joint optimization of transmitter power control and receiver power splitting. 
The achievable R-E region by the proposed DPS scheme is also compared against that by the existing time switching scheme as well as a performance upper bound by ignoring the practical receiver constraint. Finally, we extend the result for optimal DPS to the SIMO (single-input multiple-output) system where the receiver is equipped with multiple antennas. In particular, we investigate a low-complexity power splitting scheme, namely antenna switching, which achieves the near-optimal rate-energy trade-offs as compared to the optimal DPS.", "In this paper, the performance of opportunistic relay selection (ORS) in a cognitive radio is analyzed over flat Rayleigh fading channels. Data transmission between source and destination is assumed to be entirely performed via the relays. Relay nodes are assumed to have ability to harvest energy from the source signal and use that harvested energy to forward the information to the destination. Specifically, we derive an exact expression for the outage probability of the secondary system considering the maximum transmit power at the secondary transmitter and relays, energy harvesting efficiency at relays, and interference constraint at the primary receiver. Under the assumption of perfect channel state information at the receivers, we evaluate the outage probability of a cognitive radio system with ORS and energy harvesting.", "In this paper, we study power allocation for secure communication in a multiuser multiple-input single-output (MIS-O) downlink system with simultaneous wireless information and power transfer. The receivers are able to harvest energy from the radio frequency when they are idle. We propose a multi-objective optimization problem for power allocation algorithm design which incorporates two conflicting system objectives: total transmit power minimization and energy harvesting efficiency maximization. 
The proposed problem formulation takes into account a quality of service (QoS) requirement for the system secrecy capacity. Our designs advocate the dual use of artificial noise in providing secure communication and facilitating efficient energy harvesting. The multi-objective optimization problem is non-convex and is solved by a semidefinite programming (SDP) relaxation approach which results in an approximate solution. A sufficient condition for the global optimal solution is revealed and the accuracy of the approximation is examined. To strike a balance between computational complexity and system performance, we propose two suboptimal power allocation schemes. Numerical results not only demonstrate the excellent performance of the proposed suboptimal schemes compared to baseline schemes, but also unveil an interesting trade-off between energy harvesting efficiency and total transmit power.", "The problem considered here is that of wireless information and power transfer across a noisy coupled-inductor circuit, which is a frequency-selective channel with additive white Gaussian noise. The optimal tradeoff between the achievable rate and the power transferred is characterized given the total power available. The practical utility of such systems is also discussed." ] }
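The static power-splitting trade-off recurring in the abstracts above (a split ratio sends a fraction of the received power to the energy harvester and the rest to the information decoder) can be sketched numerically. The parameter values below (`snr`, `eta`, `p_rx`) are illustrative assumptions, not figures from the cited papers.

```python
import math

def rate_energy_point(rho, snr, p_rx, eta=0.6):
    """One point on the rate-energy (R-E) boundary under static power
    splitting: a fraction rho of the received power is harvested and the
    remaining (1 - rho) feeds the information decoder."""
    rate = math.log2(1 + (1 - rho) * snr)   # achievable rate, bits/s/Hz
    energy = eta * rho * p_rx               # average harvested power
    return rate, energy

# Sweep the splitting ratio to trace the trade-off curve.
points = [rate_energy_point(rho / 10, snr=100.0, p_rx=1.0) for rho in range(11)]
```

Sweeping rho from 0 to 1 moves along the boundary from maximum rate (no harvesting) to maximum harvested energy (no decoding), mirroring the R-E regions characterized in the cited works.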
1310.7648
1971038035
We consider wireless-powered amplify-and-forward and decode-and-forward relaying in cooperative communications, where an energy constrained relay node first harvests energy through the received radio-frequency signal from the source and then uses the harvested energy to forward the source information to the destination node. We propose time-switching based energy harvesting (EH) and information transmission (IT) protocols with two modes of EH at the relay. For continuous time EH, the EH time can be any percentage of the total transmission block time. For discrete time EH, the whole transmission block is either used for EH or IT. The proposed protocols are attractive because they do not require channel state information at the transmitter side and enable relay transmission with preset fixed transmission power. We derive analytical expressions of the achievable throughput for the proposed protocols. The derived expressions are verified by comparison with simulations and allow the system performance to be determined as a function of the system parameters. Finally, we show that the proposed protocols outperform the existing fixed time duration EH protocols in the literature, since they intelligently track the level of the harvested energy to switch between EH and IT in an online fashion, allowing efficient use of resources.
Some studies have recently considered energy harvesting through RF signals in wireless relaying networks @cite_15 @cite_6 @cite_0 @cite_32 @cite_18 @cite_12 @cite_28 @cite_48 @cite_43 . The rate-energy trade-offs achieved by optimal source and relay precoding in a MIMO relay system were studied in @cite_15 . The outage performance of a typical cooperative communication system was studied in @cite_6 . However, the authors in @cite_15 @cite_6 assumed that the relay has sufficient energy of its own and does not need external charging. Multi-user and multi-hop systems for simultaneous information and power transfer were investigated in @cite_0 . The optimization strategy in @cite_0 assumed that the relay node is able to decode the information and extract power simultaneously, which, as explained in @cite_49 , may not hold in practice. Considering amplify-and-forward (AF) relaying under energy harvesting constraints, the outage performance of half-duplex relaying networks and the throughput performance of full-duplex relaying networks were studied in @cite_32 and @cite_18 , respectively. However, perfect channel knowledge for the relay-to-destination link at the relay transmitter was assumed in @cite_32 and @cite_18 . Further, full-duplex relaying, as in [30], introduces additional complexity at the relay node due to multiple-antenna deployment and the requirement of self-interference cancellation.
{ "cite_N": [ "@cite_18", "@cite_28", "@cite_48", "@cite_32", "@cite_6", "@cite_0", "@cite_43", "@cite_49", "@cite_15", "@cite_12" ], "mid": [ "1981884085", "2950840090", "2015786472", "1976570785", "2027410842", "2036217195", "2087805235", "2111505049", "2129808809", "2032896960" ], "abstract": [ "This letter studies a wireless-powered amplify-and-forward relaying system, where an energy-constrained relay node assists the information transmission from the source to the destination using the energy harvested from the source. We propose a novel two-phase protocol for efficient energy transfer and information relaying, in which the relay operates in full-duplex mode with simultaneous energy harvesting and information transmission . Compared with the existing protocols, the proposed design possesses two main advantages: 1) it ensures uninterrupted information transmission since no time switching or power splitting is needed at the relay for energy harvesting; and 2) it enables the so-called self-energy recycling, i.e., part of the energy (loop energy) that is used for information transmission by the relay can be harvested and reused in addition to the dedicated energy sent by the source. Under the multiple-input single-output (MISO) channel setup, the optimal power allocation and beamforming design at the relay are derived. Numerical results show a significant throughput gain achieved by our proposed design over the existing time switching based relay protocol.", "The various wireless networks have made the ambient radio frequency signals around the world. Wireless information and power transfer enables the devices to recycle energy from these ambient radio frequency signals and process information simultaneously. In this paper, we develop a wireless information and power transfer protocol in two-way amplify-and-forward relaying channels, where two sources exchange information via an energy harvesting relay node. 
The relay node collects energy from the received signals and uses it to provide the transmission power to forward the received signals. We analytically derive the exact expressions of the outage probability, the ergodic capacity and the finite-SNR diversity-multiplexing trade-off (DMT). Furthermore, the tight closed-form upper and lower bounds of the outage probability and the ergodic capacity are then developed. Moreover, the impact of the power splitting ratio is also evaluated and analyzed. Finally, we show that compared to the non-cooperative relaying scheme, the proposed protocol is a green solution to offer higher transmission rate and more reliable communication without consuming additional resource.", "Energy harvesting (EH) from ambient radio-frequency (RF) electromagnetic waves is an efficient solution for fully autonomous and sustainable communication networks. Most of the related works presented in the literature are based on specific (and small-scale) network structures, which although give useful insights on the potential benefits of the RF-EH technology, cannot characterize the performance of general networks. In this paper, we adopt a large-scale approach of the RF-EH technology and we characterize the performance of a network with random number of transmitter-receiver pairs by using stochastic-geometry tools. Specifically, we analyze the outage probability performance and the average harvested energy, when receivers employ power splitting (PS) technique for \"simultaneous\" information and energy transfer. A non-cooperative scheme, where information energy are conveyed only via direct links, is firstly considered and the outage performance of the system as well as the average harvested energy are derived in closed form in function of the power splitting. For this protocol, an interesting optimization problem which minimizes the transmitted power under outage probability and harvesting constraints, is formulated and solved in closed form. 
In addition, we study a cooperative protocol where sources' transmissions are supported by a random number of potential relays that are randomly distributed into the network. In this case, information energy can be received at each destination via two independent and orthogonal paths (in case of relaying). We characterize both performance metrics, when a selection combining scheme is applied at the receivers and a single relay is randomly selected for cooperative diversity.", "This letter deals with a three-node cooperative network where the relay node harvests energy from radio frequency (RF) radiation. The source node is the only available RF generator and introduces a fundamental switching between energy harvesting and data relaying. A greedy switching (GS) policy where the relay node transmits when its residual energy ensures decoding at the destination is investigated. The GS policy is modeled as a Markov chain for a discretized battery; the stationary distribution and the outage probability of the system are derived in closed form expressions. In addition, an optimal switching policy that incorporates a-priori knowledge of the channel coefficients is proposed and solved by a mixed-integer linear programming formulation.", "In this paper, we propose a new cooperative wireless transmission in a scenario where the source salvages the energy during the relay's transmission considering the fact that the source does not need to retrieve the transmitted message. We also evaluate a direct wireless transmission with wireless energy transfer as a reference. We analyze the performance of these transmission techniques in terms of outage probability. 
Our analytical results reveal the advantage of energy salvage in combination with spatial diversity over the direct transmission even if the energy transfer efficiency is considerably low.", "The problem of joint transfer of information and energy for wireless links has been recently investigated in light of emerging applications such as RFID and body area networks. Specifically, recent work has shown that the additional requirements of providing sufficient energy to the receiver significantly affects the design of the optimal communication strategy. In contrast to most previous works, this letter focuses on baseline multi-user systems, namely multiple access and multi-hop channels, and demonstrates that energy transfer constraints call for additional coordination among distributed nodes of a wireless network. The analysis is carried out using information-theoretic tools, and specific examples are worked out to illustrate the main conclusions.", "In this paper, a wireless cooperative network is considered, in which multiple source-destination pairs communicate with each other via an energy harvesting relay. The focus of this paper is on the relay's strategies to distribute the harvested energy among the multiple users and their impact on the system performance. Specifically, a non-cooperative strategy that uses the energy harvested from the i-th source as the relay transmission power to the i-th destination is considered first, and asymptotic results show that its outage performance decays as log SNR SNR. A faster decay rate, 1 SNR, can be achieved by two centralized strategies proposed next, of which a water filling based one can achieve optimal performance with respect to several criteria, at the price of high complexity. An auction based power allocation scheme is also proposed to achieve a better tradeoff between system performance and complexity. 
Simulation results are provided to confirm the accuracy of the developed analytical results.", "Simultaneous information and power transfer over the wireless channels potentially offers great convenience to mobile users. Yet practical receiver designs impose technical constraints on its hardware realization, as practical circuits for harvesting energy from radio signals are not yet able to decode the carried information directly. To make theoretical progress, we propose a general receiver operation, namely, dynamic power splitting (DPS), which splits the received signal with adjustable power ratio for energy harvesting and information decoding, separately. Three special cases of DPS, namely, time switching (TS), static power splitting (SPS) and on-off power splitting (OPS) are investigated. The TS and SPS schemes can be treated as special cases of OPS. Moreover, we propose two types of practical receiver architectures, namely, separated versus integrated information and energy receivers. The integrated receiver integrates the front-end components of the separated receiver, thus achieving a smaller form factor. The rate-energy tradeoff for the two architectures are characterized by a so-called rate-energy (R-E) region. The optimal transmission strategy is derived to achieve different rate-energy tradeoffs. With receiver circuit power consumption taken into account, it is shown that the OPS scheme is optimal for both receivers. For the ideal case when the receiver circuit does not consume power, the SPS scheme is optimal for both receivers. In addition, we study the performance for the two types of receivers under a realistic system setup that employs practical modulation. 
Our results provide useful insights to the optimal practical receiver design for simultaneous wireless information and power transfer (SWIPT).", "This paper investigates performance limits of a two-hop multi-antenna amplify-and-forward (AF) relay system in the presence of a multi-antenna energy harvesting receiver. The source and relay nodes of the two-hop AF system employ orthogonal space-time block codes for data transmission. We derive joint optimal source and relay precoders to achieve different tradeoffs between the energy transfer capability and the information rate, which are characterized by the boundary of the so-called rate-energy (R-E) region. Numerical results demonstrate the effect of different parameters on the boundary of the R-E region.", "An emerging solution for prolonging the lifetime of energy constrained relay nodes in wireless networks is to avail the ambient radio-frequency (RF) signal and to simultaneously harvest energy and process information. In this paper, an amplify-and-forward (AF) relaying network is considered, where an energy constrained relay node harvests energy from the received RF signal and uses that harvested energy to forward the source information to the destination. Based on the time switching and power splitting receiver architectures, two relaying protocols, namely, i) time switching-based relaying (TSR) protocol and ii) power splitting-based relaying (PSR) protocol are proposed to enable energy harvesting and information processing at the relay. In order to determine the throughput, analytical expressions for the outage probability and the ergodic capacity are derived for delay-limited and delay-tolerant transmission modes, respectively. 
The numerical analysis provides practical insights into the effect of various system parameters, such as energy harvesting time, power splitting ratio, source transmission rate, source to relay distance, noise power, and energy harvesting efficiency, on the performance of wireless energy harvesting and information processing using AF relay nodes. In particular, the TSR protocol outperforms the PSR protocol in terms of throughput at relatively low signal-to-noise-ratios and high transmission rates." ] }
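The time-switching relaying (TSR) protocol described in the abstract above (a fraction alpha of each block harvests energy; the remaining time is split equally between the two hops) lends itself to a small numerical sketch. This is an illustrative simplification, not the paper's analysis: channels are deterministic, path loss is folded into the gains, and the outage threshold uses the plain rate R.

```python
def tsr_throughput(alpha, R=1.0, eta=0.7, P_s=1.0, g_sr=1.0, g_rd=1.0,
                   noise=0.01, T=1.0):
    """Delay-limited throughput of one TSR block: alpha*T harvests energy,
    the remaining (1 - alpha)*T is split equally between the
    source->relay and relay->destination hops."""
    if alpha >= 1.0:
        return 0.0
    E_h = eta * P_s * g_sr * alpha * T      # energy harvested at the relay
    P_r = E_h / ((1 - alpha) * T / 2)       # relay transmit power
    snr_sr = P_s * g_sr / noise
    snr_rd = P_r * g_rd / noise
    ok = min(snr_sr, snr_rd) >= 2 ** R - 1  # fixed-rate outage check
    return (1 - alpha) / 2 * R if ok else 0.0

# Sweep alpha: too little harvesting starves the relay, too much wastes time.
best_alpha = max((a / 100 for a in range(1, 100)), key=tsr_throughput)
```

With these illustrative numbers the optimum sits at a small alpha: the relay needs only just enough harvested energy, after which every extra harvesting instant costs information-transmission time.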
1310.7648
1971038035
We consider wireless-powered amplify-and-forward and decode-and-forward relaying in cooperative communications, where an energy constrained relay node first harvests energy through the received radio-frequency signal from the source and then uses the harvested energy to forward the source information to the destination node. We propose time-switching based energy harvesting (EH) and information transmission (IT) protocols with two modes of EH at the relay. For continuous time EH, the EH time can be any percentage of the total transmission block time. For discrete time EH, the whole transmission block is either used for EH or IT. The proposed protocols are attractive because they do not require channel state information at the transmitter side and enable relay transmission with preset fixed transmission power. We derive analytical expressions of the achievable throughput for the proposed protocols. The derived expressions are verified by comparison with simulations and allow the system performance to be determined as a function of the system parameters. Finally, we show that the proposed protocols outperform the existing fixed time duration EH protocols in the literature, since they intelligently track the level of the harvested energy to switch between EH and IT in an online fashion, allowing efficient use of resources.
Recently, considering AF relaying, the throughput performance of a one-way relaying network @cite_12 and the outage probability and ergodic capacity of a two-way relaying network @cite_28 under energy harvesting constraints were studied. The outage performance and relay selection criteria in a large-scale network with wireless energy harvesting and DF relaying were studied in @cite_48 . Finally, for a decode-and-forward (DF) relaying network, the power allocation strategies and outage performance under energy harvesting constraints were studied in @cite_43 . However, @cite_28 , @cite_32 , @cite_48 , and @cite_43 do not derive analytical expressions for the achievable throughput at the destination node. In addition, @cite_12 considers the energy harvesting time to have a fixed duration and, similar to @cite_28 , @cite_48 , and @cite_43 , does not allow energy accumulation at the relay node.
{ "cite_N": [ "@cite_28", "@cite_48", "@cite_32", "@cite_43", "@cite_12" ], "mid": [ "2950840090", "2015786472", "1976570785", "2087805235", "2032896960" ], "abstract": [ "The various wireless networks have made the ambient radio frequency signals around the world. Wireless information and power transfer enables the devices to recycle energy from these ambient radio frequency signals and process information simultaneously. In this paper, we develop a wireless information and power transfer protocol in two-way amplify-and-forward relaying channels, where two sources exchange information via an energy harvesting relay node. The relay node collects energy from the received signals and uses it to provide the transmission power to forward the received signals. We analytically derive the exact expressions of the outage probability, the ergodic capacity and the finite-SNR diversity-multiplexing trade-off (DMT). Furthermore, the tight closed-form upper and lower bounds of the outage probability and the ergodic capacity are then developed. Moreover, the impact of the power splitting ratio is also evaluated and analyzed. Finally, we show that compared to the non-cooperative relaying scheme, the proposed protocol is a green solution to offer higher transmission rate and more reliable communication without consuming additional resource.", "Energy harvesting (EH) from ambient radio-frequency (RF) electromagnetic waves is an efficient solution for fully autonomous and sustainable communication networks. Most of the related works presented in the literature are based on specific (and small-scale) network structures, which although give useful insights on the potential benefits of the RF-EH technology, cannot characterize the performance of general networks. In this paper, we adopt a large-scale approach of the RF-EH technology and we characterize the performance of a network with random number of transmitter-receiver pairs by using stochastic-geometry tools. 
Specifically, we analyze the outage probability performance and the average harvested energy, when receivers employ power splitting (PS) technique for \"simultaneous\" information and energy transfer. A non-cooperative scheme, where information energy are conveyed only via direct links, is firstly considered and the outage performance of the system as well as the average harvested energy are derived in closed form in function of the power splitting. For this protocol, an interesting optimization problem which minimizes the transmitted power under outage probability and harvesting constraints, is formulated and solved in closed form. In addition, we study a cooperative protocol where sources' transmissions are supported by a random number of potential relays that are randomly distributed into the network. In this case, information energy can be received at each destination via two independent and orthogonal paths (in case of relaying). We characterize both performance metrics, when a selection combining scheme is applied at the receivers and a single relay is randomly selected for cooperative diversity.", "This letter deals with a three-node cooperative network where the relay node harvests energy from radio frequency (RF) radiation. The source node is the only available RF generator and introduces a fundamental switching between energy harvesting and data relaying. A greedy switching (GS) policy where the relay node transmits when its residual energy ensures decoding at the destination is investigated. The GS policy is modeled as a Markov chain for a discretized battery; the stationary distribution and the outage probability of the system are derived in closed form expressions. 
In addition, an optimal switching policy that incorporates a-priori knowledge of the channel coefficients is proposed and solved by a mixed-integer linear programming formulation.", "In this paper, a wireless cooperative network is considered, in which multiple source-destination pairs communicate with each other via an energy harvesting relay. The focus of this paper is on the relay's strategies to distribute the harvested energy among the multiple users and their impact on the system performance. Specifically, a non-cooperative strategy that uses the energy harvested from the i-th source as the relay transmission power to the i-th destination is considered first, and asymptotic results show that its outage performance decays as log SNR SNR. A faster decay rate, 1 SNR, can be achieved by two centralized strategies proposed next, of which a water filling based one can achieve optimal performance with respect to several criteria, at the price of high complexity. An auction based power allocation scheme is also proposed to achieve a better tradeoff between system performance and complexity. Simulation results are provided to confirm the accuracy of the developed analytical results.", "An emerging solution for prolonging the lifetime of energy constrained relay nodes in wireless networks is to avail the ambient radio-frequency (RF) signal and to simultaneously harvest energy and process information. In this paper, an amplify-and-forward (AF) relaying network is considered, where an energy constrained relay node harvests energy from the received RF signal and uses that harvested energy to forward the source information to the destination. Based on the time switching and power splitting receiver architectures, two relaying protocols, namely, i) time switching-based relaying (TSR) protocol and ii) power splitting-based relaying (PSR) protocol are proposed to enable energy harvesting and information processing at the relay. 
In order to determine the throughput, analytical expressions for the outage probability and the ergodic capacity are derived for delay-limited and delay-tolerant transmission modes, respectively. The numerical analysis provides practical insights into the effect of various system parameters, such as energy harvesting time, power splitting ratio, source transmission rate, source to relay distance, noise power, and energy harvesting efficiency, on the performance of wireless energy harvesting and information processing using AF relay nodes. In particular, the TSR protocol outperforms the PSR protocol in terms of throughput at relatively low signal-to-noise-ratios and high transmission rates." ] }
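The discrete-time EH mode discussed in this record (each block is wholly energy harvesting or wholly information transmission, with energy accumulating at the relay) can be illustrated with a short simulation. The Rayleigh-fading gains, efficiency eta = 0.7, and the fixed transmit power are assumptions for illustration only.

```python
import random

def simulate_discrete_eh(num_blocks, p_transmit, eta=0.7, P_s=1.0, seed=0):
    """Discrete-time EH: every block is used entirely for either energy
    harvesting (EH) or information transmission (IT).  The relay
    accumulates energy and transmits at a preset fixed power only when
    the battery holds enough for a full block."""
    rng = random.Random(seed)
    battery, it_blocks = 0.0, 0
    for _ in range(num_blocks):
        if battery >= p_transmit:        # enough stored energy: IT block
            battery -= p_transmit
            it_blocks += 1
        else:                            # otherwise keep harvesting
            gain = rng.expovariate(1.0)  # exponential power gain (Rayleigh fading)
            battery += eta * P_s * gain
    return it_blocks / num_blocks        # fraction of blocks spent on IT

# e.g. simulate_discrete_eh(10_000, p_transmit=2.0)
```

Tracking the battery level online, as here, is what lets the protocol switch between EH and IT adaptively instead of fixing the harvesting duration in advance.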
1310.6976
72736484
The logical depth with significance @math of a finite binary string @math is the shortest running time of a binary program for @math that can be compressed by at most @math bits. There is another definition of logical depth. We give two theorems about the quantitative relation between these versions: the first theorem concerns a variation of a known fact with a new proof, the second theorem and its proof are new. We select the above version of logical depth and show the following. There is an infinite sequence of strings of increasing length such that for each @math there is a @math such that the logical depth of the @math th string as a function of @math is incomputable (it rises faster than any computable function) but with @math replaced by @math the resulting function is computable. Hence the maximal gap between the logical depths resulting from incrementing appropriate @math 's by 1 rises faster than any computable function. All functions mentioned are upper bounded by the Busy Beaver function. Since for every string its logical depth is nonincreasing in @math , the minimal computation time of the shortest programs for the sequence of strings as a function of @math rises faster than any computable function but not so fast as the Busy Beaver function.
The minimum time to compute a string by a @math -incompressible program was first considered in @cite_9 , Definition 1. The minimum time was called the logical depth at significance @math of the string concerned. Definitions, variations, discussion, and early results can be found in the given reference. A more formal treatment, as well as an intuitive approach, was given in the textbook @cite_6 , Section 7.7. In @cite_8 the notion of computational depth is defined as @math . Whether this equals the negative logarithm of the expression @math in Definition depends on the following. In @cite_7 , L.A. Levin proved the so-called Coding Theorem (see also @cite_6 , Theorem 4.3.3). It remains to prove or disprove @math up to a small additive term: a major open problem in Kolmogorov complexity theory, see @cite_6 , Exercises 7.6.3 and 7.6.4. For Kolmogorov complexity notions see , and for @math and @math see .
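The two notions compared in this passage can be written out explicitly. This is a reconstruction from the standard definitions (Bennett's logical depth @cite_9 and the computational depth of @cite_8); the paper's exact formalization may differ in the choice of universal machine and compressibility condition.

```latex
% Bennett's logical depth of x at significance b: the least running time
% of a program for x that is compressible by at most b bits.
\operatorname{depth}_b(x) \;=\;
  \min\bigl\{\, t(p) \;:\; U(p) = x \text{ within } t(p) \text{ steps},\;
               |p| \le K(x) + b \,\bigr\}

% Computational depth: the gap between time-bounded and plain
% Kolmogorov complexity.
\operatorname{cd}^{\,t}(x) \;=\; K^{t}(x) - K(x)
```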
{ "cite_N": [ "@cite_9", "@cite_7", "@cite_6", "@cite_8" ], "mid": [ "2281664623", "33555989", "1638203394", "2072053479" ], "abstract": [ "Some mathematical and natural objects (a random sequence, a sequence of zeros, a perfect crystal, a gas) are intuitively trivial, while others (e.g. the human body, the digits of π) contain internal evidence of a nontrivial causal history. We formalize this distinction by defining an object’s “logical depth” as the time required by a standard universal Turing machine to generate it from an input that is algorithmically random (i.e. Martin-Lof random). This definition of depth is shown to be reasonably machineindependent, as well as obeying a slow-growth law: deep objects cannot be quickly produced from shallow ones by any deterministic process, nor with much probability by a probabilistic process, but can be produced slowly. Next we apply depth to the physical problem of “self-organization,” inquiring in particular under what conditions (e.g. noise, irreversibility, spatial and other symmetries of the initial conditions and equations of motion) statistical-mechanical model systems can imitate computers well enough to undergo unbounded increase of depth in the limit of infinite space and time.", "", "The book is outstanding and admirable in many respects. ... is necessary reading for all kinds of readers from undergraduate students to top authorities in the field. Journal of Symbolic Logic Written by two experts in the field, this is the only comprehensive and unified treatment of the central ideas and their applications of Kolmogorov complexity. The book presents a thorough treatment of the subject with a wide range of illustrative applications. Such applications include the randomness of finite objects or infinite sequences, Martin-Loef tests for randomness, information theory, computational learning theory, the complexity of algorithms, and the thermodynamics of computing. 
It will be ideal for advanced undergraduate students, graduate students, and researchers in computer science, mathematics, cognitive sciences, philosophy, artificial intelligence, statistics, and physics. The book is self-contained in that it contains the basic requirements from mathematics and computer science. Included are also numerous problem sets, comments, source references, and hints to solutions of problems. New topics in this edition include Omega numbers, KolmogorovLoveland randomness, universal learning, communication complexity, Kolmogorov's random graphs, time-limited universal distribution, Shannon information and others.", "We introduce Computational Depth, a measure for the amount of \"nonrandom\" or \"useful\" information in a string by considering the difference of various Kolmogorov complexity measures. We investigate three instantiations of Computational Depth: • Basic Computational Depth, a clean notion capturing the spirit of Bennett's Logical Depth. We show that a Turing machine M runs in time polynomial on average over the time-bounded universal distribution if and only if for all inputs x, M uses time exponential in the basic computational depth of x. • Sublinear-time Computational Depth and the resulting concept of Shallow Sets, a generalization of sparse and random sets based on low depth properties of their characteristic sequences. We show that every computable set that is reducible to a shallow set has polynomial-size circuits. • Distinguishing Computational Depth, measuring when strings are easier to recognize than to produce. We show that if a Boolean formula has a nonnegligible fraction of its satisfying assignments with low depth, then we can find a satisfying assignment efficiently." ] }
1310.6998
1521877087
We study the relationship between social media output and National Football League (NFL) games, using a dataset containing messages from Twitter and NFL game statistics. Specifically, we consider tweets pertaining to specific teams and games in the NFL season and use them alongside statistical game data to build predictive models for future game outcomes (which team will win?) and sports betting outcomes (which team will win with the point spread? will the total points be over under the line?). We experiment with several feature sets and find that simple features using large volumes of tweets can match or exceed the performance of more traditional features that use game statistics.
Recently, Hong and Skiena @cite_13 used sentiment analysis from news and social media to design a successful NFL betting strategy. However, their main evaluation was on in-sample data, rather than forecasting. Also, they only had Twitter data from one season (2009) and therefore did not use it in their primary experiments. We use large quantities of tweets from the 2010--2012 seasons and do so in a genuine forecasting setting for both winner/WTS and over/under prediction.
{ "cite_N": [ "@cite_13" ], "mid": [ "1510617223" ], "abstract": [ "The American Football betting market provides a particularly attractive domain to study the nexus between public sentiment and the wisdom of crowds. In this paper, we present the first substantial study of the relationship between the NFL betting line and public opinion expressed in blogs and microblogs (Twitter). We perform a large-scale study of four distinct text streams: LiveJournal blogs, RSS blog feeds captured by Spinn3r, Twitter, and traditional news media. Our results show interesting disparities between the first and second halves of each season. We present evidence showing the usefulness of sentiment on NFL betting. We demonstrate that a strategy betting roughly 30 games per year identified the winner roughly 60% of the time from 2006 to 2009, well beyond what is needed to overcome the bookie's typical commission (53%)." ] }
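As an aside on the 53% commission threshold cited in the abstract above: that figure is consistent with the breakeven win rate at the standard -110 American betting line (risk 110 to win 100). The following is a minimal illustrative sketch of that arithmetic, not code from either paper; the function name and default odds are assumptions for illustration.

```python
def breakeven_win_rate(odds: int = -110) -> float:
    """Fraction of bets that must win to break even at negative
    American odds: risk |odds| units to win 100 units."""
    risk = abs(odds)
    return risk / (risk + 100)

# At the standard -110 line the breakeven rate is 110/210 ~= 52.4%,
# which rounds to the ~53% threshold mentioned above; a 60% hit
# rate therefore clears the bookie's commission comfortably.
print(f"{breakeven_win_rate(-110):.3f}")
```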
1310.7205
1960370989
One of the major challenges in distributed systems is establishing consistency among replicated data in a timely fashion. While the consistent ordering of events has been extensively researched, the time span to reach a consistent state is mostly considered an effect of the chosen consistency model, rather than being considered a parameter itself. This paper argues that it is possible to give guarantees on the timely consistency of an operation. Subsequent to an update, the cloud and all connected clients will either be consistent with the update within the defined upper bound of time, or the update will be returned. This paper suggests the respective algorithms and protocols capable of producing such comprehensive Timed Consistency, as conceptually proposed by Torres- . The solution offers business customers an increasing level of predictability and adjustability. The temporal certainty concerning the execution makes the cloud a more attractive tool for time-critical or mission-critical applications fearing the poor availability of Strong Consistency in cloud environments.
Consistency models mostly focus on ordering events. Torres- @cite_5 instead emphasize the timeliness of consistency, considering time as an endogenous factor rather than as a result of the selected consistency model. Based on an arbitrary timeliness requirement, they attach a lifetime to each object and develop a theory of its consistency properties on that basis. If a distributed system existed that itself establishes timed consistency from a data-centric point of view, an end-to-end client-centric consistency would be implied by the Torres- model @cite_15 @cite_20 .
{ "cite_N": [ "@cite_5", "@cite_15", "@cite_20" ], "mid": [ "", "1503941956", "2162601662" ], "abstract": [ "", "Techniques such as replication and caching of objects that implement distributed services lead to consistency problems that must be addressed. We explore new consistency protocols based on the notion of object value lifetimes. By keeping track of the lifetimes of the values stored in shared objects (i.e., the time interval that goes from the writing of a value until the latest time when this value is known to be valid), it is possible to check the mutual consistency of a set of related objects cached at a site. Initially, this technique is presented assuming the presence of physical clocks. Later, these clocks are replaced by vector clocks and then by plausible clocks. Lifetimes based on such clocks result in weaker consistency but do provide more efficient implementations.", "Given a distributed system with several shared objects and many processes concurrently updating and reading them, it is convenient that the system achieves convergence on the value of these objects. Such property can be guaranteed depending on the consistency model being employed. Causal Consistency is a weak consistency model that is easy and cheap to implement. However, due to the lack of real-time considerations, this model cannot oer convergence. A solution for overcoming that problem is to include time aspects within the framework of the model. This is the aim of Timed Causal Consistency." ] }
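The lifetime-based mutual-consistency check described in the second abstract above (a value's lifetime runs from its write time until the latest time it is known valid; a set of cached values is mutually consistent if all were simultaneously valid at some instant) reduces to an interval-intersection test. A minimal sketch, with all names hypothetical and physical clocks assumed (the cited work also covers vector and plausible clocks, which this sketch does not model):

```python
from typing import Iterable, Tuple

# (write_time, latest_known_valid_time) of one cached object value
Lifetime = Tuple[float, float]

def mutually_consistent(lifetimes: Iterable[Lifetime]) -> bool:
    """True if the lifetime intervals share at least one common
    instant, i.e. all cached values were valid simultaneously."""
    intervals = list(lifetimes)
    if not intervals:
        return True
    latest_write = max(start for start, _ in intervals)
    earliest_expiry = min(end for _, end in intervals)
    return latest_write <= earliest_expiry

# Values written at t=1,2,3 and all still valid at t=4 overlap at t in [3,4].
print(mutually_consistent([(1, 5), (2, 6), (3, 4)]))  # True
print(mutually_consistent([(1, 2), (3, 4)]))          # False
```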
1310.7297
2950767189
Recent advances in 3D modeling provide us with real 3D datasets to answer queries, such as "What is the best position for a new billboard?" and "Which hotel room has the best view?" in the presence of obstacles. These applications require measuring and differentiating the visibility of an object (target) from different viewpoints in a dataspace, e.g., a billboard may be seen from two viewpoints but is readable only from the viewpoint closer to the target. In this paper, we formulate the above problem of quantifying the visibility of (from) a target object from (of) the surrounding area with a visibility color map (VCM). A VCM is essentially defined as a surface color map of the space, where each viewpoint of the space is assigned a color value that denotes the visibility measure of the target from that viewpoint. Measuring the visibility of a target even from a single viewpoint is an expensive operation, as we need to consider factors such as distance, angle, and obstacles between the viewpoint and the target. Hence, a straightforward approach to construct the VCM that requires visibility computation for every viewpoint of the surrounding space of the target, is prohibitively expensive in terms of both I/Os and computation, especially for a real dataset comprising of thousands of obstacles. We propose an efficient approach to compute the VCM based on a key property of the human vision that eliminates the necessity of computing the visibility for a large number of viewpoints of the space. To further reduce the computational overhead, we propose two approximations; namely, minimum bounding rectangle and tangential approaches with guaranteed error bounds. Our extensive experiments demonstrate the effectiveness and efficiency of our solutions to construct the VCM for real 2D and 3D datasets.
The notion of visibility is actively studied in different contexts: computer graphics and visualization @cite_5 @cite_22 and spatial databases @cite_15 @cite_12 @cite_13 . Most of these techniques consider visibility as a binary notion, i.e., a point is either visible or invisible from another point.
{ "cite_N": [ "@cite_22", "@cite_5", "@cite_15", "@cite_13", "@cite_12" ], "mid": [ "1835934527", "1596411313", "2139621262", "2152755365", "2155533773" ], "abstract": [ "This paper describes a robust, hardware-accelerated algorithm to compute an approximate visibility map, which describes the visible scene from a particular viewpoint. The user can control the degree of approximation, choosing more accuracy at the cost of increased execution time. The algorithm exploits item buffer hardware to coarsely determine visibility, which is later refined. The paper also describes a conceptually simple algorithm to compute a subset of the discontinuity mesh using the visibility map.", "Algorithms and Theory of Computation Handbook, Second Edition provides an up-to-date compendium of fundamental computer science topics and techniques. It also illustrates how the topics and techniques come together to deliver efficient solutions to important practical problems. New to the Second EditionAlong with updating and revising many of the existing chapters, this second edition contains more than 20 new chapters. This edition now covers external memory, parameterized, self-stabilizing, and pricing algorithms as well as the theories of algorithmic coding, privacy and anonymity, databases, computational games, and communication networks. It also discusses computational topology, computational number theory, natural language processing, and grid computing and explores applications in intensity-modulated radiation therapy, voting, DNA research, systems biology, and financial derivatives. This best-selling handbook continues to help computer professionals and engineers find significant information on various algorithmic topics. The expert contributors clearly define the terminology, present basic results and techniques, and offer a number of current references to the in-depth literature. 
They also provide a glimpse of the major research issues concerning the relevant topics.", "In many applications involving spatial objects, we are only interested in objects that are directly visible from query points. In this paper, we formulate the visible k nearest neighbor (VkNN) query and present incremental algorithms as a solution, with two variants differing in how to prune objects during the search process. One variant applies visibility pruning to only objects, whereas the other variant applies visibility pruning to index nodes as well. Our experimental results show that the latter outperforms the former. We further propose the aggregate VkNN query that finds the visible k nearest objects to a set of query points based on an aggregate distance function. We also propose two approaches to processing the aggregate VkNN query. One accesses the database via multiple VkNN queries, whereas the other issues an aggregate k nearest neighbor query to retrieve objects from the database and then re-rank the results based on the aggregate visible distance metric. With extensive experiments, we show that the latter approach consistently outperforms the former one.", "In this paper, we identify and solve a new type of spatial queries, called continuous visible nearest neighbor (CVNN) search. Given a data set P, an obstacle set O, and a query line segment q, a CVNN query returns a set of (p, R) tuples such that p e P is the nearest neighbor (NN) to every point r along the interval R e q as well as p is visible to r. Note that p may be NULL, meaning that all points in P are invisible to all points in R, due to the obstruction of some obstacles in O. In this paper, we formulate the problem and propose efficient algorithms for CVNN query processing, assuming that both P and O are indexed by R-trees. In addition, we extend our techniques to several variations of the CVNN query. 
Extensive experiments verify the efficiency and effectiveness of our proposed algorithms using both real and synthetic datasets.", "In this paper, we study a novel form of continuous nearest neighbor queries in the presence of obstacles, namely continuous obstructed nearest neighbor (CONN) search. It considers the impact of obstacles on the distance between objects, which is ignored by most of spatial queries. Given a data set P, an obstacle set O, and a query line segment q in a two-dimensional space, a CONN query retrieves the nearest neighbor of each point on q according to the obstructed distance, i.e., the shortest path between them without crossing any obstacle. We formulate CONN search, analyze its unique properties, and develop algorithms for exact CONN query processing, assuming that both P and O are indexed by conventional data-partitioning indices (e.g., R-trees). Our methods tackle the CONN retrieval by performing a single query for the entire query segment, and only process the data points and obstacles relevant to the final result, via a novel concept of control points and an efficient quadratic-based split point computation algorithm. In addition, we extend our solution to handle the continuous obstructed k-nearest neighbor (COkNN) search, which finds the k (≥1)nearest neighbors to every point along q based on obstructed distances. A comprehensive experimental evaluation using both real and synthetic datasets has been conducted to demonstrate the efficiency and effectiveness of our proposed algorithms." ] }
1310.7297
2950767189
Recent advances in 3D modeling provide us with real 3D datasets to answer queries, such as "What is the best position for a new billboard?" and "Which hotel room has the best view?" in the presence of obstacles. These applications require measuring and differentiating the visibility of an object (target) from different viewpoints in a dataspace, e.g., a billboard may be seen from two viewpoints but is readable only from the viewpoint closer to the target. In this paper, we formulate the above problem of quantifying the visibility of (from) a target object from (of) the surrounding area with a visibility color map (VCM). A VCM is essentially defined as a surface color map of the space, where each viewpoint of the space is assigned a color value that denotes the visibility measure of the target from that viewpoint. Measuring the visibility of a target even from a single viewpoint is an expensive operation, as we need to consider factors such as distance, angle, and obstacles between the viewpoint and the target. Hence, a straightforward approach to construct the VCM that requires visibility computation for every viewpoint of the surrounding space of the target, is prohibitively expensive in terms of both I/Os and computation, especially for a real dataset comprising of thousands of obstacles. We propose an efficient approach to compute the VCM based on a key property of the human vision that eliminates the necessity of computing the visibility for a large number of viewpoints of the space. To further reduce the computational overhead, we propose two approximations; namely, minimum bounding rectangle and tangential approaches with guaranteed error bounds. Our extensive experiments demonstrate the effectiveness and efficiency of our solutions to construct the VCM for real 2D and 3D datasets.
In computer graphics, the visibility map refers to a planar subdivision that encodes the visibility information, i.e., which points are mutually visible @cite_5 . Two points are mutually visible if the straight line segment connecting these points does not intersect with any obstacle. If a scene is represented using a planar straight-line graph, a horizontal (vertical) visibility map is obtained by drawing a horizontal (vertical) straight line @math through each vertex @math of that graph until @math intersects an edge @math of the graph or extends to infinity. The edge @math is horizontally (vertically) visible from @math . A large body of work @cite_3 @cite_18 @cite_7 @cite_19 @cite_22 constructs such visibility maps efficiently.
{ "cite_N": [ "@cite_18", "@cite_22", "@cite_7", "@cite_3", "@cite_19", "@cite_5" ], "mid": [ "", "1835934527", "2035823949", "2094665988", "1988245788", "1596411313" ], "abstract": [ "", "This paper describes a robust, hardware-accelerated algorithm to compute an approximate visibility map, which describes the visible scene from a particular viewpoint. The user can control the degree of approximation, choosing more accuracy at the cost of increased execution time. The algorithm exploits item buffer hardware to coarsely determine visibility, which is later refined. The paper also describes a conceptually simple algorithm to compute a subset of the discontinuity mesh using the visibility map.", "We introduce a novel representation for visibility in three dimensions and describe an efficient algorithm to construct it. The data structure is a spherical map that consists of a doubly-connected edge list embedded on the surface of a sphere. Each face of the spherical map is labeled with the polygon visible in the corresponding cone. We demonstrate that the algorithm is efficient and robust by presenting the statistics of its time and space requirements for handling several classes of input.", "We present an algorithm that efficiently constructs a visibility map for a given view of a polygonal scene. The view is represented by a BSP tree and the visibility map is obtained by postprocessing of that tree. The scene is organised in a kD-tree that is used to perform an approximate occlusion sweep. The occlusion sweep is interleaved with hierarchical visibility tests what results in expected output sensitive behaviour of the algorithm. We evaluate our implementation of the method on several scenes and demonstrate its application to discontinuity meshing.", "Visibility computation was crucial for computer graphics from its very beginning. The first visibility algorithms in computer graphics aimed to determine visible surfaces in a synthesized image of a three-dimensional scene. 
Nowadays there are many different visibility algorithms for various visibility problems. We propose a new taxonomy of visibility problems that is based on a classification according to the problem domain. We provide a broad overview of visibility problems and algorithms in computer graphics grouped by the proposed taxonomy. We survey visible surface algorithms, visibility culling algorithms, visibility algorithms for shadow computation, global illumination, point-based and image-based rendering, and global visibility computations. Finally, we discuss common concepts of visibility algorithm design and several criteria for the classification of visibility algorithms.", "Algorithms and Theory of Computation Handbook, Second Edition provides an up-to-date compendium of fundamental computer science topics and techniques. It also illustrates how the topics and techniques come together to deliver efficient solutions to important practical problems. New to the Second EditionAlong with updating and revising many of the existing chapters, this second edition contains more than 20 new chapters. This edition now covers external memory, parameterized, self-stabilizing, and pricing algorithms as well as the theories of algorithmic coding, privacy and anonymity, databases, computational games, and communication networks. It also discusses computational topology, computational number theory, natural language processing, and grid computing and explores applications in intensity-modulated radiation therapy, voting, DNA research, systems biology, and financial derivatives. This best-selling handbook continues to help computer professionals and engineers find significant information on various algorithmic topics. The expert contributors clearly define the terminology, present basic results and techniques, and offer a number of current references to the in-depth literature. They also provide a glimpse of the major research issues concerning the relevant topics." ] }