Dataset columns: aid (string, 9–15 chars), mid (string, 7–10 chars), abstract (string, 78–2.56k chars), related_work (string, 92–1.77k chars), ref_abstract (dict).
1312.2664
2103059474
The computational cost of transfer matrix methods for the Potts model is related to the question of how many ways two layers of a lattice can be connected. Answering this question leads to the generation of a combinatorial set of lattice configurations. This set defines the configuration space of the problem, and the smaller it is, the faster the transfer matrix can be computed. The configuration space of generic (q, v) transfer matrix methods for strips is of the order of the Catalan numbers, which grow asymptotically as O(4^m), where m is the width of the strip. Other transfer matrix methods with a smaller configuration space do exist, but they make assumptions on the temperature or the number of spin states, or restrict the structure of the lattice. In this paper we propose a parallel algorithm that uses a sub-Catalan configuration space of O(3^m) to build the generic (q, v) transfer matrix in a compressed form. The improvement is achieved by grouping the original set of Catalan configurations into a forest of family trees, in such a way that the solution to the problem is now computed by solving the root node of each family. As a result, the algorithm becomes exponentially faster than the Catalan approach while remaining highly parallel. The resulting matrix is stored in a compressed form using O(3^m × 4^m) space, making numerical evaluation and decompression faster than evaluating the matrix in its O(4^m × 4^m) uncompressed form. Experimental results for different sizes of strip lattices show that the parallel family trees (PFT) strategy indeed runs exponentially faster than the Catalan Parallel Method (CPM), especially when dealing with dense transfer matrices. In terms of parallel performance, we report strong-scaling speedups of up to 5.7× when running on an 8-core shared-memory machine and 28× for a 32-core cluster. The best balance of speedup and efficiency for the multi-core machine was achieved when using p = 4 processors, while for the cluster scenario it was in the range p ∈ [8, 10]. Because of the parallel capabilities of the algorithm, a large-scale execution of the parallel family trees strategy on a supercomputer could contribute to the study of wider strip lattices.
Research on transfer matrices for strip lattices in the Potts model has not reported experimental results on parallel performance, except for a prior work of the authors @cite_28 that presents a parallel method for computing general @math transfer matrices using the Catalan approach, which we will call the Catalan Parallel Method (CPM) for ease of reference later on. The CPM was successfully used to study new widths of the kagome strip @cite_26 with generic @math . The present work is a substantial improvement over CPM.
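To give a feel for the configuration-space sizes quoted above, the following sketch compares the Catalan numbers C_m, which govern the size of the configuration space in the generic (q, v) construction, against the 3^m and 4^m growth rates. It is only a growth-rate illustration of the stated asymptotics, not part of the CPM or PFT algorithms; the strip widths are arbitrary example values.

```python
from math import comb

def catalan(m):
    """Catalan number C_m = binomial(2m, m) / (m + 1)."""
    return comb(2 * m, m) // (m + 1)

# Compare the Catalan configuration space with the 3^m and 4^m growth rates
# for a few strip widths m (widths chosen only to illustrate the asymptotics).
print(f"{'m':>3} {'C_m':>15} {'3^m':>12} {'4^m':>15}")
for m in (4, 8, 12, 16, 20):
    print(f"{m:>3} {catalan(m):>15} {3**m:>12} {4**m:>15}")
```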
{ "cite_N": [ "@cite_28", "@cite_26" ], "mid": [ "2002451486", "2127796694" ], "abstract": [ "The transfer-matrix technique is a convenient way for studying strip lattices in the Potts model since the computational costs depend just on the periodic part of the lattice and not on the whole. However, even when the cost is reduced, the transfer-matrix technique is still an NP-hard problem since the time T (|V |, |E|) needed to compute the matrix grows exponentially as a function of the graph width. In this work, we present a parallel transfer-matrix implementation that scales performance under multi-core architectures. The construction of the matrix is based on several repetitions of the deletion-contraction technique, allowing parallelism suitable to multi-core machines. Our experimental results show that the multi-core implementation achieves speedups of 3.7X with p = 4 processors and 5.7X with p = 8. The efficiency of the implementation lies between 60 and 95 , achieving the best balance of speedup and efficiency at p = 4 processors for actual multi-core architectures. The algorithm also takes advantage of the lattice symmetry, making the transfer matrix computation to run up to 2X faster than its non-symmetric counterpart and use up to a quarter of the original space.", "We compute the partition function of the Potts model with arbitrary values of q and temperature on some strip lattices. We consider strips of width L y = 2, for three different lattices: square, diced and ‘shortest-path’ (to be defined in the text). We also get the exact solution for strips of the Kagome lattice for widths L y = 2,3,4,5. As further examples we consider two lattices with different type of regular symmetry: a strip with alternating layers of width L y = 3 and L y = m + 2, and a strip with variable width. Finally we make some remarks on the Fisher zeros for the Kagome lattice and their large q-limit." ] }
1312.2664
2103059474
The computational cost of transfer matrix methods for the Potts model is related to the question of how many ways two layers of a lattice can be connected. Answering this question leads to the generation of a combinatorial set of lattice configurations. This set defines the configuration space of the problem, and the smaller it is, the faster the transfer matrix can be computed. The configuration space of generic (q, v) transfer matrix methods for strips is of the order of the Catalan numbers, which grow asymptotically as O(4^m), where m is the width of the strip. Other transfer matrix methods with a smaller configuration space do exist, but they make assumptions on the temperature or the number of spin states, or restrict the structure of the lattice. In this paper we propose a parallel algorithm that uses a sub-Catalan configuration space of O(3^m) to build the generic (q, v) transfer matrix in a compressed form. The improvement is achieved by grouping the original set of Catalan configurations into a forest of family trees, in such a way that the solution to the problem is now computed by solving the root node of each family. As a result, the algorithm becomes exponentially faster than the Catalan approach while remaining highly parallel. The resulting matrix is stored in a compressed form using O(3^m × 4^m) space, making numerical evaluation and decompression faster than evaluating the matrix in its O(4^m × 4^m) uncompressed form. Experimental results for different sizes of strip lattices show that the parallel family trees (PFT) strategy indeed runs exponentially faster than the Catalan Parallel Method (CPM), especially when dealing with dense transfer matrices. In terms of parallel performance, we report strong-scaling speedups of up to 5.7× when running on an 8-core shared-memory machine and 28× for a 32-core cluster. The best balance of speedup and efficiency for the multi-core machine was achieved when using p = 4 processors, while for the cluster scenario it was in the range p ∈ [8, 10]. Because of the parallel capabilities of the algorithm, a large-scale execution of the parallel family trees strategy on a supercomputer could contribute to the study of wider strip lattices.
In this subsection we compare the parallel family trees (PFT) strategy against the Catalan Parallel Method (CPM) @cite_28 using the following metrics: (1) running time, (2) matrix evaluation time, and (3) matrix space. Figure shows the results.
{ "cite_N": [ "@cite_28" ], "mid": [ "2002451486" ], "abstract": [ "The transfer-matrix technique is a convenient way for studying strip lattices in the Potts model since the computational costs depend just on the periodic part of the lattice and not on the whole. However, even when the cost is reduced, the transfer-matrix technique is still an NP-hard problem since the time T (|V |, |E|) needed to compute the matrix grows exponentially as a function of the graph width. In this work, we present a parallel transfer-matrix implementation that scales performance under multi-core architectures. The construction of the matrix is based on several repetitions of the deletion-contraction technique, allowing parallelism suitable to multi-core machines. Our experimental results show that the multi-core implementation achieves speedups of 3.7X with p = 4 processors and 5.7X with p = 8. The efficiency of the implementation lies between 60 and 95 , achieving the best balance of speedup and efficiency at p = 4 processors for actual multi-core architectures. The algorithm also takes advantage of the lattice symmetry, making the transfer matrix computation to run up to 2X faster than its non-symmetric counterpart and use up to a quarter of the original space." ] }
1312.2070
1977872462
We study a simple model of how social behaviors, like trends and opinions, propagate in networks where individuals adopt the trend when they are informed by threshold T neighbors who are adopters. Using a dynamic message-passing algorithm, we develop a tractable and computationally efficient method that provides complete time evolution of each individual's probability of adopting the trend or of the frequency of adopters and nonadopters in any arbitrary networks. We validate the method by comparing it with Monte Carlo-based agent simulation in real and synthetic networks and provide an exact analytic scheme for large random networks, where simulation results match well. Our approach is general enough to incorporate non-Markovian processes and to include heterogeneous thresholds and thus can be applied to explore rich sets of complex heterogeneous agent-based models.
By deleting each edge with probability @math independently, we can ask whether the resulting diluted network, in the thermodynamic limit, contains an extensive @math -core in the ensemble of similarly prepared networks. Interestingly, for @math , the emergence of a @math -core in random networks is a first-order (discontinuous) phase transition, in the sense that when it first appears it covers a finite fraction of the network @cite_13 . An early work on @math -core percolation was on the Bethe lattice in the context of magnetic systems @cite_18 . More recently, @math -core percolation has been used in studies of the Ising model and nucleation @cite_29 @cite_32 , in the analysis of zero-temperature jamming transitions @cite_19 , and in bootstrap percolation models on square lattices and random graphs @cite_20 @cite_2 @cite_9 @cite_14 @cite_16 .
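As a concrete illustration of the dilution experiment described above, the sketch below deletes each edge of a sparse random graph independently with probability q and reports the fraction of nodes that survive in the k-core of the diluted graph. The graph size, mean degree, k, and dilution probabilities are arbitrary choices for the example, not parameters from the cited works.

```python
import random
import networkx as nx

def diluted_k_core_fraction(n=10000, avg_degree=5.0, k=3, q=0.3, seed=0):
    """Delete each edge independently with probability q, then return the
    fraction of nodes remaining in the k-core of the diluted graph."""
    rng = random.Random(seed)
    g = nx.gnp_random_graph(n, avg_degree / n, seed=seed)
    # Edge dilution: each edge is removed with probability q.
    g.remove_edges_from([e for e in list(g.edges()) if rng.random() < q])
    core = nx.k_core(g, k=k)  # iteratively prunes nodes of degree < k
    return core.number_of_nodes() / n

if __name__ == "__main__":
    # Illustrative sweep over the dilution probability.
    for q in (0.0, 0.2, 0.4, 0.6):
        print(f"q={q:.1f}  3-core fraction={diluted_k_core_fraction(q=q):.3f}")
```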
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_29", "@cite_9", "@cite_32", "@cite_19", "@cite_2", "@cite_16", "@cite_13", "@cite_20" ], "mid": [ "2027508917", "", "2029903428", "", "", "2152125528", "", "", "2065663455", "1972354222" ], "abstract": [ "A new percolation problem is posed which can exhibit a first-order transition. In bootstrap percolation, sites on an empty lattice are first randomly occupied, and then all occupied sites with less than a given number m of occupied neighbours are successively removed until a stable configuration is reached. On any lattice for sufficiently large m, the ensuing clusters can only be infinite. On a Bethe lattice for m>or=3, the fraction of the lattice occupied by infinite clusters discontinuously jumps from zero at the percolation threshold. From an analysis of stable and metastable ground states of the dilute Blume-Capel model (1966), it is concluded that effects like bootstrap percolation may occur in some real magnets.", "", "This work extends to dimension d≥3 the main result of Dehghanpour and Schonmann. We consider the stochastic Ising model on Zd evolving with the Metropolis dynamics under a fixed small positive magnetic field h starting from the minus phase. When the inverse temperature β goes to ∞, the relaxation time of the system, defined as the time when the plus phase has invaded the origin, behaves like exp(βκd). The value κd is equal to κd=1d+1(Γ1+⋯+Γd), where Γi is the energy of the i-dimensional critical droplet of the Ising model at zero temperature and magnetic field h.", "", "", "A theory is constructed to describe the zero-temperature jamming transition of repulsive soft spheres as the density is increased. Local mechanical stability imposes a constraint on the minimum number of bonds per particle; we argue that this constraint suggests an analogy to k-core percolation. The latter model can be solved exactly on the Bethe lattice, and the resulting transition has a mixed first-order continuous character reminiscent of the jamming transition. In particular, the exponents characterizing the continuous parts of both transitions appear to be the same. Finally, numerical simulations suggest that in finite dimensions the k-core transition can be discontinuous with a nontrivial diverging correlation length.", "", "", "Thek-core of a graph is the largest subgraph with minimum degree at leastk. For the Erdo?s?R?nyi random graphG(n,?m) onnvertives, withmedges, it is known that a giant 2-core grows simultaneously with a giant component, that is, whenmis close ton 2. We show that fork?3, with high probability, a giantk-core appears suddenly whenmreachesckn 2; hereck=min?>0? ?k(?) and?k(?)=P Poisson(?)?k?1 . In particular,c3?3.35. We also demonstrate that, unlike the 2-core, when ak-core appears for the first time it is very likely to be giant, of size ?pk(?k)n. Here?kis the minimum point of? ?k(?) andpk(?k)=P Poisson(?k)?k . Fork=3, for instance, the newborn 3-core contains about 0.27nvertices. Our proofs are based on the probabilistic analysis of an edge deletion algorithm that always find ak-core if the graph has one.", "Bootstrap percolation models, or equivalently certain types of cellular automata, exhibit interesting finite-volume effects. These are studied at a rigorous level. 
The authors find that for an initial configuration obtained by placing particles independently with probability p < 1 (d ≥ 2), the density of the 'bootstrapped' (final) configurations in the sequence of cubes (-L/2, L/2)^d typically undergoes an abrupt transition, as L is increased, from being close to 0 to the value 1. With L fixed at a large value, the mean final density as a function of p changes from 0 to 1 around a value which varies only slowly with L, the pertinent parameter being lambda = p^{1/(d-1)} ln L. The driving mechanism is the capture of a 'critical droplet'. This behaviour is analogous to the decay of a metastable state near a first-order phase transition, for which the analysis offers some suggestive ideas." ] }
1312.2063
1495061042
Traditionally, data compression deals with the problem of concisely representing a data source, e.g. a sequence of letters, for the purpose of eventual reproduction (either exact or approximate). In this work we are interested in the case where the goal is to answer similarity queries about the compressed sequence, i.e. to identify whether or not the original sequence is similar to a given query sequence. We study the fundamental tradeoff between the compression rate and the reliability of the queries performed on compressed data. For i.i.d. sequences, we characterize the minimal compression rate that allows query answers that are reliable in the sense of having a vanishing false-positive probability, when false negatives are not allowed. The result is partially based on a previous work, and the inherently typical subset lemma plays a key role in the converse proof. We then characterize the compression rate achievable by schemes that use lossy source codes as a building block, and show that such schemes are, in general, suboptimal. Finally, we tackle the problem of evaluating the minimal compression rate by converting the problem to a sequence of convex programs that can be solved efficiently.
In the current paper we focus on discrete alphabets only, following @cite_22 . A parallel result, with a complete characterization of the identification rate (and exponent) for the Gaussian case with quadratic distortion, appears in @cite_19 , @cite_5 . The identification exponent problem was originally studied in @cite_22 for the variable-length case, where the resulting exponent depends on an auxiliary random variable with unbounded cardinality. A bound on the cardinality was obtained recently in @cite_24 , where the exponent for fixed-length schemes is also found (and is different from that of variable-length schemes, unlike the identification rate -- see Prop. below). In the special case of exact-match queries (i.e., identification with @math for the Hamming distance), the exponent was studied in @cite_18 .
{ "cite_N": [ "@cite_18", "@cite_22", "@cite_24", "@cite_19", "@cite_5" ], "mid": [ "2039633672", "2116661435", "", "2089520644", "1984810686" ], "abstract": [ "In this paper, we consider the problem of determining whether sequences X and Y, generated i.i.d. according to PX × PY, are equal given access only to the pair (Y, T(X)), where T(X) is a rate-R compressed version of X. In general, the rate R may not be sufficiently large to reliably determine whether X=Y. We precisely characterize this reliability - i.e., the exponential rate at which an error is made - as a function of R. Interestingly, the exponent turns out to be related to the Bhattacharyya distance between the distributions PX and PY. In addition, the scheme achieving this exponent is universal, i.e. does not depend on PX, PY.", "A new coding problem is introduced for a correlated source (X sup n ,Y sup n ) sub n=1 sup spl infin . The observer of X sup n can transmit data depending on X sup n at a prescribed rate R. Based on these data the observer of Y sup n tries to identify whether for some distortion measure spl rho (like the Hamming distance) n sup -1 spl rho (X sup n ,Y sup n ) spl les d, a prescribed fidelity criterion. We investigate as functions of R and d the exponents of two error probabilities, the probabilities for misacceptance, and the probabilities for misrejection. In the case where X sup n and Y sup n are independent, we completely characterize the achievable region for the rate R and the exponents of two error probabilities; in the case where X sup n and Y sup n are correlated, we get some interesting partial results for the achievable region. During the process, we develop a new method for proving converses, which is called \"the inherently typical subset lemma\". This new method goes considerably beyond the \"entropy characterization\" the \"image size characterization,\" and its extensions. It is conceivable that this new method has a strong impact on multiuser information theory.", "", "The problem of performing similarity queries on compressed data is considered. We study the fundamental tradeoff between compression rate, sequence length, and reliability of queries performed on compressed data. For a Gaussian source and quadratic similarity criterion, we show that queries can be answered reliably if and only if the compression rate exceeds a given threshold - the identification rate - which we explicitly characterize. When compression is performed at a rate greater than the identification rate, responses to queries on the compressed data can be made exponentially reliable. We give a complete characterization of this exponent, which is analogous to the error and excess-distortion exponents in channel and source coding, respectively. For a general source, we prove that the identification rate is at most that of a Gaussian source with the same variance. Therefore, as with classical compression, the Gaussian source requires the largest compression rate. Moreover, a scheme is described that attains this maximal rate for any source distribution.", "The problem of performing similarity queries on compressed data is considered. We focus on the quadratic similarity measure, and study the fundamental tradeoff between compression rate, sequence length, and reliability of queries performed on the compressed data. For a Gaussian source, we show that the queries can be answered reliably if and only if the compression rate exceeds a given threshold—the identification rate —which we explicitly characterize. 
Moreover, when compression is performed at a rate greater than the identification rate, responses to queries on the compressed data can be made exponentially reliable. We give a complete characterization of this exponent, which is analogous to the error and excess-distortion exponents in channel and source coding, respectively. For a general source, we prove that, as with classical compression, the Gaussian source requires the largest compression rate among sources with a given variance. Moreover, a robust scheme is described that attains this maximal rate for any source distribution." ] }
1312.2094
1588356406
Online Social Network (OSN) is one of the hottest services of the past years. It preserves the life of users and provides great potential for journalists, sociologists and business analysts. Crawling data from social networks is a basic step for social network information analysis and processing. As the network becomes huge and information on the network updates faster than web pages, crawling is more difficult because of the limitations of bandwidth, politeness etiquette and computation power. To extract fresh information from social networks efficiently and effectively, this paper presents a novel crawling method and discusses a parallelization architecture for social networks. To discover the features of social networks, we gather data from a real social network, analyze them and build a model to describe the discipline of users' behavior. With the modeled behavior, we propose methods to predict users' behavior. According to the prediction, we schedule our crawler more reasonably and extract more fresh information with parallelization technologies. Experimental results demonstrate that our strategies can obtain information from OSNs efficiently and effectively.
Only a few methods have been proposed to crawl OSN data. @cite_16 describes a Twitter crawler developed in Java. The authors pay more attention to the implementation details of the crawler and to the data analysis. Instead, we focus on the crawling method itself and develop algorithms to gather more information about specific OSN users.
{ "cite_N": [ "@cite_16" ], "mid": [ "2017114019" ], "abstract": [ "Applying data mining techniques to social media can yield interesting perspectives about individual human behavior, detecting hot issues and topics, or discovering a group and community. However, it is difficult to build your own data set to apply data mining techniques without an automated data gathering system. To overcome this challenge, we developed a java-based data gathering tool that continually collects social data from Twitter. This allows us, as well as other researchers, to build our own Twitter database. In this paper, we introduce the design specifications and explain the implementation details of the Twitter Data Collecting Tool we developed. In addition, we provide an in-depth analysis of Twitter messages about various Super Bowl ads by applying data-mining techniques to a case study. The study aims to address the question of how people use Twitter and to assess the power of Twitter in terms of creating consumer interest in brands and commercials." ] }
1312.2094
1588356406
Online Social Network (OSN) is one of the hottest services of the past years. It preserves the life of users and provides great potential for journalists, sociologists and business analysts. Crawling data from social networks is a basic step for social network information analysis and processing. As the network becomes huge and information on the network updates faster than web pages, crawling is more difficult because of the limitations of bandwidth, politeness etiquette and computation power. To extract fresh information from social networks efficiently and effectively, this paper presents a novel crawling method and discusses a parallelization architecture for social networks. To discover the features of social networks, we gather data from a real social network, analyze them and build a model to describe the discipline of users' behavior. With the modeled behavior, we propose methods to predict users' behavior. According to the prediction, we schedule our crawler more reasonably and extract more fresh information with parallelization technologies. Experimental results demonstrate that our strategies can obtain information from OSNs efficiently and effectively.
TwitterEcho is an open-source Twitter crawler developed by @cite_15 . It applies a distributed architecture with centralized control. Cloud computing has also been used for OSN crawling: the authors of @cite_0 collect Twitter data and rank Twitter users through the PageRank algorithm. Another attempt is to crawl in parallel: the authors of @cite_21 implemented a parallel eBay crawler in Java and visited 11,716,588 users in 23 days. These three methods aim to obtain more computing resources, whereas we focus on a more reasonable crawling sequence given the available resources.
{ "cite_N": [ "@cite_0", "@cite_15", "@cite_21" ], "mid": [ "2119915991", "2030559659", "2016589434" ], "abstract": [ "Mining and analyzing data from social networks can be difficult because of the large amounts of data involved. Such activities are usually very expensive, as they require a lot of computational resources. With the recent success of cloud computing, data analysis is going to be more accessible due to easier access to less expensive computational resources. In this work we propose to use cloud computing services as a possible solution for analysis of large amounts of data. As a source for a large data set, we propose to use Twitter, yielding a graph with 50 million nodes and 1.8 billion edges. In this paper, we use computation of PageRank on Twitter’s social graph to investigate whether or not cloud computing, and Amazon cloud services1 in particular, can make these tasks more feasible and, as a side effect, whether or not PageRank provides a good ranking of Twitter users.", "Modern social network analysis relies on vast quantities of data to infer new knowledge about human relations and communication. In this paper we describe TwitterEcho, an open source Twitter crawler for supporting this kind of research, which is characterized by a modular distributed architecture. Our crawler enables researchers to continuously collect data from particular user communities, while respecting Twitter's imposed limits. We present the core modules of the crawling server, some of which were specifically designed to focus the crawl on the Portuguese Twittosphere. Additional modules can be easily implemented, thus changing the focus to a different community. Our evaluation of the system shows high crawling performance and coverage.", "Given a huge online social network, how do we retrieve information from it through crawling? Even better, how do we improve the crawling performance by using parallel crawlers that work independently? In this paper, we present the framework of parallel crawlers for online social networks, utilizing a centralized queue. To show how this works in practice, we describe our implementation of the crawlers for an online auction website. The crawlers work independently, therefore the failing of one crawler does not affect the others at all. The framework ensures that no redundant crawling would occur. Using the crawlers that we built, we visited a total of approximately 11 million auction users, about 66,000 of which were completely crawled." ] }
1312.2094
1588356406
Online Social Network (OSN) is one of the hottest services of the past years. It preserves the life of users and provides great potential for journalists, sociologists and business analysts. Crawling data from social networks is a basic step for social network information analysis and processing. As the network becomes huge and information on the network updates faster than web pages, crawling is more difficult because of the limitations of bandwidth, politeness etiquette and computation power. To extract fresh information from social networks efficiently and effectively, this paper presents a novel crawling method and discusses a parallelization architecture for social networks. To discover the features of social networks, we gather data from a real social network, analyze them and build a model to describe the discipline of users' behavior. With the modeled behavior, we propose methods to predict users' behavior. According to the prediction, we schedule our crawler more reasonably and extract more fresh information with parallelization technologies. Experimental results demonstrate that our strategies can obtain information from OSNs efficiently and effectively.
Whitelisted accounts were once available on Twitter: the authors of @cite_2 successfully crawled the entire Twitter site through the Twitter API, including 41.7 million user profiles and 106 million tweets. However, whitelisted accounts are no longer available; the same holds for the approach of @cite_0 . Since the API is now rate-limited, we propose algorithms to improve crawling efficiency.
{ "cite_N": [ "@cite_0", "@cite_2" ], "mid": [ "2119915991", "2116136846" ], "abstract": [ "Mining and analyzing data from social networks can be difficult because of the large amounts of data involved. Such activities are usually very expensive, as they require a lot of computational resources. With the recent success of cloud computing, data analysis is going to be more accessible due to easier access to less expensive computational resources. In this work we propose to use cloud computing services as a possible solution for analysis of large amounts of data. As a source for a large data set, we propose to use Twitter, yielding a graph with 50 million nodes and 1.8 billion edges. In this paper, we use computation of PageRank on Twitter’s social graph to investigate whether or not cloud computing, and Amazon cloud services1 in particular, can make these tasks more feasible and, as a side effect, whether or not PageRank provides a good ranking of Twitter users.", "This paper describes the functions of a system proposed for the music tube recommendation from social network data base. Such a system enables the automatic collection, evaluation and rating of music critics, the possibility to rate music tube by auditors and the recommendation of tubes depended from auditor's profiles in form of regional internet radio. First, the system searches and retrieves probable music reviews from the Internet. Subsequently, the system carries out an evaluation and rating of those reviews. From this list of music tubes the system directly allows notation from our application. Finally the system automatically create the record list diffused each day depended form the region, the year season, day hours and age of listeners. Our system uses linguistics and statistic methods for classifying music opinions and data mining techniques for recommendation part needed for recorded list creation. The principal task is the creation of popular intelligent radio adaptive on auditor's age and region - IA-Regional-Radio." ] }
1312.2094
1588356406
Online Social Network (OSN) is one of the hottest services of the past years. It preserves the life of users and provides great potential for journalists, sociologists and business analysts. Crawling data from social networks is a basic step for social network information analysis and processing. As the network becomes huge and information on the network updates faster than web pages, crawling is more difficult because of the limitations of bandwidth, politeness etiquette and computation power. To extract fresh information from social networks efficiently and effectively, this paper presents a novel crawling method and discusses a parallelization architecture for social networks. To discover the features of social networks, we gather data from a real social network, analyze them and build a model to describe the discipline of users' behavior. With the modeled behavior, we propose methods to predict users' behavior. According to the prediction, we schedule our crawler more reasonably and extract more fresh information with parallelization technologies. Experimental results demonstrate that our strategies can obtain information from OSNs efficiently and effectively.
Many web crawlers have been proposed. Representative measures for web crawling are sharpness @cite_4 and freshness @cite_6 . These strategies define sharpness or freshness objectives for the crawl and schedule crawling to achieve those targets. In contrast, we choose the total number of new OSN messages as our target and schedule crawling according to the OSN users' behavior.
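To make the contrast concrete, the sketch below shows one simple way such a target could drive scheduling: users are revisited in order of their predicted number of new messages since the last visit, subject to a fixed request budget per round. The per-user activity-rate model, field names, and budget are illustrative assumptions, not the scheduling algorithm of the cited works or of this paper.

```python
import time

def schedule_round(users, budget, now=None):
    """Pick up to `budget` users to crawl next, ranked by the expected number
    of new messages accumulated since each user's last visit.

    `users` is a list of dicts with keys 'id', 'msgs_per_hour' (an assumed
    per-user activity estimate), and 'last_visit' (epoch seconds).
    """
    now = now if now is not None else time.time()

    def expected_new(u):
        return u["msgs_per_hour"] * (now - u["last_visit"]) / 3600.0

    ranked = sorted(users, key=expected_new, reverse=True)
    return [u["id"] for u in ranked[:budget]]

# Example round with three hypothetical users and a budget of 2 requests.
users = [
    {"id": "alice", "msgs_per_hour": 4.0, "last_visit": time.time() - 1800},
    {"id": "bob",   "msgs_per_hour": 0.5, "last_visit": time.time() - 7200},
    {"id": "carol", "msgs_per_hour": 2.5, "last_visit": time.time() - 3600},
]
print(schedule_round(users, budget=2))  # -> ['carol', 'alice']
```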
{ "cite_N": [ "@cite_4", "@cite_6" ], "mid": [ "2135402295", "2127536142" ], "abstract": [ "Web archives preserve the history of born-digital content and offer great potential for sociologists, business analysts, and legal experts on intellectual property and compliance issues. Data quality is crucial for these purposes. Ideally, crawlers should gather sharp captures of entire Web sites, but the politeness etiquette and completeness requirement mandate very slow, long-duration crawling while Web sites undergo changes. This paper presents the SHARC framework for assessing the data quality in Web archives and for tuning capturing strategies towards better quality with given resources. We define quality measures, characterize their properties, and derive a suite of quality-conscious scheduling strategies for archive crawling. It is assumed that change rates of Web pages can be statistically predicted based on page types, directory depths, and URL names. We develop a stochastically optimal crawl algorithm for the offline case where all change rates are known. We generalize the approach into an online algorithm that detect information on a Web site while it is crawled. For dating a site capture and for assessing its quality, we propose several strategies that revisit pages after their initial downloads in a judiciously chosen order. All strategies are fully implemented in a testbed, and shown to be effective by experiments with both synthetically generated sites and a daily crawl series for a medium-sized site.", "It is crucial for a web crawler to distinguish between ephemeral and persistent content. Ephemeral content (e.g., quote of the day) is usually not worth crawling, because by the time it reaches the index it is no longer representative of the web page from which it was acquired. On the other hand, content that persists across multiple page updates (e.g., recent blog postings) may be worth acquiring, because it matches the page's true content for a sustained period of time. In this paper we characterize the longevity of information found on the web, via both empirical measurements and a generative model that coincides with these measurements. We then develop new recrawl scheduling policies that take longevity into account. As we show via experiments over real web data, our policies obtain better freshness at lower cost, compared with previous approaches." ] }
1312.2094
1588356406
Online Social Network (OSN) is one of the hottest services of the past years. It preserves the life of users and provides great potential for journalists, sociologists and business analysts. Crawling data from social networks is a basic step for social network information analysis and processing. As the network becomes huge and information on the network updates faster than web pages, crawling is more difficult because of the limitations of bandwidth, politeness etiquette and computation power. To extract fresh information from social networks efficiently and effectively, this paper presents a novel crawling method and discusses a parallelization architecture for social networks. To discover the features of social networks, we gather data from a real social network, analyze them and build a model to describe the discipline of users' behavior. With the modeled behavior, we propose methods to predict users' behavior. According to the prediction, we schedule our crawler more reasonably and extract more fresh information with parallelization technologies. Experimental results demonstrate that our strategies can obtain information from OSNs efficiently and effectively.
There are other mature web crawling strategies. J. Cho, H. Garcia-Molina and L. Page improve crawling efficiency through URL ordering @cite_22 . They define several importance metrics, ordering schemes and performance evaluation measures in order to obtain more important URLs first. J. Cho and H. Garcia-Molina also propose a strategy for estimating the change frequency of pages to make the web crawler work better @cite_23 : they identify scenarios and then develop frequency estimators for them. C. Castillo, M. Marin, A. Rodriguez and R. Baeza-Yates combine breadth-first ordering with a largest-sites-first policy to crawl pages quickly and simply @cite_9 . J. Cho and U. Schonfeld improve the crawler by providing a high personalized PageRank coverage guarantee @cite_14 .
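The change-frequency idea in @cite_23 can be illustrated with a small estimator. Assuming page changes follow a Poisson process and the crawler revisits a page at a fixed interval, recording only whether the page changed since the previous visit, the change rate can be estimated from the fraction of visits that found no change. This is a simplified sketch of the intuition, not the exact estimators developed in the cited paper.

```python
import math

def estimate_change_rate(change_flags, interval_hours):
    """Estimate a page's change rate (changes per hour) from revisit outcomes.

    change_flags: one boolean per revisit, True if a change was detected since
    the previous visit. interval_hours: the fixed revisit interval.
    Under a Poisson change model, P(no change in one interval) = exp(-r * I),
    so r can be estimated as -ln(fraction of 'no change' visits) / I.
    """
    n = len(change_flags)
    unchanged = sum(1 for changed in change_flags if not changed)
    if unchanged == 0:
        # Every visit saw a change: the naive estimate diverges, so use a
        # smoothed count to keep the estimate finite (an ad hoc choice here).
        unchanged = 0.5
    return -math.log(unchanged / n) / interval_hours

# Example: 20 daily revisits, 14 of which detected a change.
flags = [True] * 14 + [False] * 6
print(f"estimated rate: {estimate_change_rate(flags, 24.0):.4f} changes/hour")
```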
{ "cite_N": [ "@cite_9", "@cite_14", "@cite_22", "@cite_23" ], "mid": [ "2110073539", "2097704339", "2029341294", "1976624301" ], "abstract": [ "This paper presents a comparative study of strategies for Web crawling. We show that a combination of breadth-first ordering with the largest sites first is a practical alternative since it is fast, simple to implement, and able to retrieve the best ranked pages at a rate that is closer to the optimal than other alternatives. Our study was performed on a large sample of the Chilean Web which was crawled by using simulators, so that all strategies were compared under the same conditions, and actual crawls to validate our conclusions. We also explored the effects of large scale parallelism in the page retrieval task and multiple-page requests in a single connection for effective amortization of latency times.", "Crawling algorithms have been the subject of extensive research and optimizations, but some important questions remain open. In particular, given the unbounded number of pages available on the Web, search-engine operators constantly struggle with the following vexing questions: When can I stop downloading the Web? How many pages should I download to cover \"most\" of the Web? How can I know I am not missing an important part when I stop? In this paper we provide an answer to these questions by developing, in the context of a system that is given a set of trusted pages, a family of crawling algorithms that (1) provide a theoretical guarantee on how much of the \"important\" part of the Web it will download after crawling a certain number of pages and (2) give a high priority to important pages during a crawl, so that the search engine can index the most important part of the Web first. We prove the correctness of our algorithms by theoretical analysis and evaluate their performance experimentally based on 141 million URLs obtained from the Web. Our experiments demonstrate that even our simple algorithm is effective in downloading important pages early on and provides high \"coverage\" of the Web with a relatively small number of pages.", "In this paper we study in what order a crawler should visit the URLs it has seen, in order to obtain more \"important\" pages first. Obtaining important pages rapidly can be very useful when a crawler cannot visit the entire Web in a reasonable amount of time. We define several importance metrics, ordering schemes, and performance evaluation measures for this problem. We also experimentally evaluate the ordering schemes on the Stanford University Web. Our results show that a crawler with a good ordering scheme can obtain important pages significantly faster than one without.", "Many online data sources are updated autonomously and independently. In this article, we make the case for estimating the change frequency of data to improve Web crawlers, Web caches and to help data mining. We first identify various scenarios, where different applications have different requirements on the accuracy of the estimated frequency. Then we develop several \"frequency estimators\" for the identified scenarios, showing analytically and experimentally how precise they are. In many cases, our proposed estimators predict change frequencies much more accurately and improve the effectiveness of applications. For example, a Web crawler could achieve 35p improvement in \"freshness\" simply by adopting our proposed estimator." ] }
1312.2629
2950397686
The HVAC systems in subway stations are energy consuming giants, each of which may consume over 10, 000 Kilowatts per day for cooling and ventilation. To save energy for the HVAC systems, it is critically important to firstly know the "load signatures" of the HVAC system, i.e., the quantity of heat imported from the outdoor environments and by the passengers respectively in different periods of a day, which will significantly benefit the design of control policies. In this paper, we present a novel sensing and learning approach to identify the load signature of the HVAC system in the subway stations. In particular, sensors and smart meters were deployed to monitor the indoor, outdoor temperatures, and the energy consumptions of the HVAC system in real-time. The number of passengers was counted by the ticket checking system. At the same time, the cooling supply provided by the HVAC system was inferred via the energy consumption logs of the HVAC system. Since the indoor temperature variations are driven by the difference of the loads and the cooling supply, linear regression model was proposed for the load signature, whose coefficients are derived via a proposed algorithm . We collected real sensing data and energy log data from HaiDianHuangZhuang Subway station, which is in line 4 of Beijing from the duration of July 2012 to Sept. 2012. The data was used to evaluate the coefficients of the regression model. The experiment results show typical variation signatures of the loads from the passengers and from the outdoor environments respectively, which provide important contexts for smart control policies.
Autonomous, optimal control of HVAC systems has attracted great research attention in the study of smart and sustainable buildings @cite_12 . The goal is to determine the optimal solutions (operation modes and setpoints) that minimize overall energy consumption or operating cost while still maintaining satisfactory indoor thermal comfort and a healthy environment @cite_6 .
{ "cite_N": [ "@cite_6", "@cite_12" ], "mid": [ "2018614621", "2119681812" ], "abstract": [ "HVAC systems are the major energy consumers in buildings. Operation and control of HVAC systems have significant impacts on the energy or cost efficiency of buildings besides their designs. Buildings nowadays are mostly equipped with comprehensive building automation systems (BASs) and building energy management control systems (EMCSs) that allow the possibility of enhancing and optimizing the operation and control of HVAC systems. Supervisory and optimal control, which addresses the energy or cost-efficient control of HVAC systems while providing the desired indoor comfort and healthy environment under the dynamic working conditions, is attracting more attention of the building professionals and the society and provides incentives to make more efforts in developing more extensive and robust control methods for HVAC systems. This paper provides a framework for categorizing the main supervisory and optimal control methods and optimization techniques developed and or utilized in the HVAC field. The applicat...", "We study the problem of heating, ventilation, and air conditioning (HVAC) control in a typical commercial building. We propose a model predictive control (MPC) approach which minimizes energy use while satisfying occupant comfort and actuator constraints by using predictive knowledge of weather and occupancy." ] }
1312.2629
2950397686
The HVAC systems in subway stations are energy consuming giants, each of which may consume over 10, 000 Kilowatts per day for cooling and ventilation. To save energy for the HVAC systems, it is critically important to firstly know the "load signatures" of the HVAC system, i.e., the quantity of heat imported from the outdoor environments and by the passengers respectively in different periods of a day, which will significantly benefit the design of control policies. In this paper, we present a novel sensing and learning approach to identify the load signature of the HVAC system in the subway stations. In particular, sensors and smart meters were deployed to monitor the indoor, outdoor temperatures, and the energy consumptions of the HVAC system in real-time. The number of passengers was counted by the ticket checking system. At the same time, the cooling supply provided by the HVAC system was inferred via the energy consumption logs of the HVAC system. Since the indoor temperature variations are driven by the difference of the loads and the cooling supply, linear regression model was proposed for the load signature, whose coefficients are derived via a proposed algorithm . We collected real sensing data and energy log data from HaiDianHuangZhuang Subway station, which is in line 4 of Beijing from the duration of July 2012 to Sept. 2012. The data was used to evaluate the coefficients of the regression model. The experiment results show typical variation signatures of the loads from the passengers and from the outdoor environments respectively, which provide important contexts for smart control policies.
This goal is the same for subway HVAC control systems. Because HVAC systems contain different types of subsystems, such as air-side and water-side subsystems, the optimal control problems for HVAC are extremely difficult. One of the difficulties is the lack of an exact model describing the internal relationships among the different components. A dynamic model of an HVAC system for control analysis was presented in @cite_0 , where the authors proposed using the Ziegler-Nichols rule to tune the parameters of the PID controller. A metaheuristic simulation-EP (evolutionary programming) coupling approach was developed in @cite_5 , which uses evolutionary programming to handle the discrete, non-linear and highly constrained optimization problems involved. Multi-agent-based simulation models were studied in @cite_14 to investigate the performance of HVAC systems when occupants participate. In @cite_4 , swarm intelligence was utilized to determine the control policy of each piece of equipment in the HVAC system.
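For reference, the closed-loop Ziegler-Nichols rule mentioned above maps an experimentally determined ultimate gain and ultimate oscillation period to PID gains. The sketch below encodes the classic tuning table; it is a generic illustration of the rule, not the controller or parameter values used in @cite_0, and the example numbers are arbitrary.

```python
def ziegler_nichols_pid(k_ultimate, t_ultimate):
    """Classic closed-loop Ziegler-Nichols PID tuning.

    k_ultimate: proportional gain at which the loop sustains oscillation.
    t_ultimate: period of that sustained oscillation (seconds).
    Returns (Kp, Ki, Kd) for a parallel-form PID controller.
    """
    kp = 0.6 * k_ultimate
    ti = t_ultimate / 2.0          # integral time
    td = t_ultimate / 8.0          # derivative time
    return kp, kp / ti, kp * td    # Ki = Kp/Ti, Kd = Kp*Td

# Example: ultimate gain 8.0 observed with a 240 s oscillation period.
kp, ki, kd = ziegler_nichols_pid(8.0, 240.0)
print(f"Kp={kp:.2f}, Ki={ki:.4f}, Kd={kd:.1f}")
```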
{ "cite_N": [ "@cite_0", "@cite_5", "@cite_14", "@cite_4" ], "mid": [ "2082756984", "2112902257", "2124380877", "2104957212" ], "abstract": [ "This paper describes a procedure for deriving a dynamic model of an HVAC system that consists of a zone, heating coil, cooling and dehumidifying coil, humidifier, ductwork, fan, and mixing box. In particular, the interest is centered on control strategies to reduce energy consumption and improving the quality of the indoor environment. Indoor temperature and humidity may be maintained at set point values by an air-handling unit using a PID control action. The PID parameters must be carefully tuned to produce less oscillatory responses. The tuning technique, using the Ziegler-Nichols rule, is investigated from a practical viewpoint. Simulation results showing the open loop and the closed loop responses of indoor temperature and humidity ratio are given. The results show that the system is capable of controlling the disturbance efficiently within a small period of time and with less error. The dynamic model can be especially useful for control strategies that require knowledge of the dynamic characteristics of HVAC systems.", "Abstract Energy management of heating, ventilating and air-conditioning (HVAC) systems is a primary concern in building projects, since the energy consumption in electricity has the highest percentage in HVAC among all building services installations and electric appliances. Without sacrifice of thermal comfort, to reset the suitable operating parameters, such as the chilled water temperature and supply air temperature, would have energy saving with immediate effect. For the typical commercial building projects, it is not difficult to acquire the reference settings for efficient operation. However, for some special projects, due to the specific design and control of the HVAC system, conventional settings may not be necessarily energy-efficient in daily operation. In this paper, the simulation-optimization approach was proposed for the effective energy management of HVAC system. Due to the complicated interrelationship of the entire HVAC system, which commonly includes the water side and air side systems, it is necessary to suggest optimum settings for different operations in response to the dynamic cooling loads and changing weather conditions throughout a year. A metaheuristic simulation–EP (evolutionary programming) coupling approach was developed using evolutionary programming, which can effectively handle the discrete, non-linear and highly constrained optimization problems, such as those related to HVAC systems. The effectiveness of this simulation–EP coupling suite was demonstrated through the establishment of a monthly optimum reset scheme for both the chilled water and supply air temperatures of the HVAC installations of a local project. This reset scheme would have a saving potential of about 7 as compared to the existing operational settings, without any extra cost.", "Building information modeling is only beginning to incorporate human factors, although buildings are sites where humans and technologies interact with globally significant consequences. Some buildings fail to perform as their designers intended, in part because users do not or cannot properly operate the building, and some occupants behave differently than designers expect. Innovative buildings, e.g., green buildings, are particularly susceptible to usability problems. 
This paper presents a framework for prospectively measuring the usability of designs before buildings are constructed, while there is still time to improve the design. The framework, which was implemented as an agent-based computer simulation model, tests how well buildings are likely to perform, given realistic occupants. An illustrative model for lighting design shows that this modeling approach has practical efficacy, demonstrating that, to the extent that users exhibit heterogeneous behaviors and preferences, designs that allow greater local control and ease of operation perform better.", "Heating, ventilating and air conditioning (HVAC) systems have played an important role in building energy and comfort management. It is designed to provide a relatively constant and comfortable temperature in buildings and provide fresh and filtered air with a comfortable humidity level. In this paper, an optimal control strategy is proposed to control the HVAC system for maintaining building's indoor environment with high energy efficiency. The control strategy utilized swarm intelligence to determine the amount of energy dispatched to each equipment in the HVAC system. In order to study the impact of HVAC system operations in the indoor environment, both the building model and HVAC equipment models are developed. A case study is carried out to simulate the real time control process in a specified building environment." ] }
1312.2629
2950397686
The HVAC systems in subway stations are energy consuming giants, each of which may consume over 10, 000 Kilowatts per day for cooling and ventilation. To save energy for the HVAC systems, it is critically important to firstly know the "load signatures" of the HVAC system, i.e., the quantity of heat imported from the outdoor environments and by the passengers respectively in different periods of a day, which will significantly benefit the design of control policies. In this paper, we present a novel sensing and learning approach to identify the load signature of the HVAC system in the subway stations. In particular, sensors and smart meters were deployed to monitor the indoor, outdoor temperatures, and the energy consumptions of the HVAC system in real-time. The number of passengers was counted by the ticket checking system. At the same time, the cooling supply provided by the HVAC system was inferred via the energy consumption logs of the HVAC system. Since the indoor temperature variations are driven by the difference of the loads and the cooling supply, linear regression model was proposed for the load signature, whose coefficients are derived via a proposed algorithm . We collected real sensing data and energy log data from HaiDianHuangZhuang Subway station, which is in line 4 of Beijing from the duration of July 2012 to Sept. 2012. The data was used to evaluate the coefficients of the regression model. The experiment results show typical variation signatures of the loads from the passengers and from the outdoor environments respectively, which provide important contexts for smart control policies.
Another related work reported the factors affecting the range of heat transfer in subways @cite_13 ; the authors show by numerical analysis how heat is transferred in tunnels and stations. Reference @cite_11 studied the environmental characteristics of subway metro stations in Cairo, Egypt, and showed that the environment differs between a tunnel station and a surface station. The most closely related work is @cite_7 , which surveyed the energy consumption of Beijing subway lines in 2008. Different from these existing works, we deployed sensors and presented models to study the load signatures and the distinct features of the energy consumption of subway HVAC systems.
{ "cite_N": [ "@cite_13", "@cite_7", "@cite_11" ], "mid": [ "1970190064", "2369062378", "2073375215" ], "abstract": [ "In order to examine the factors which affect the range of heat transfer in earth surrounding subways, FLAC3D was adopted in this study to analyze these factors, under different conditions, in a systematic manner. When we compare these numerical tests, the results show that the main factors, affecting the heat transfer range are the thermal properties of the surrounding earth, the initial ground temperature and the temperature in the tunnel. The heat transfer coefficient between air and linings has little effect on the temperature distribution around the tunnel. The current results can provide a reference for improving the thermal environment in subways and optimizing the design of subway ventilation and air conditioning.", "Because of large numbers of passengers,Beijing subway has a great demand for huge power consumption.It is necessary to search possibilities of reducing the power consumption of Beijing Subway.This paper analyzed the actual power consumption of each subway line,the electric power used for powering the trains and all secondary consumption of electric power for operating the train stations of Beijing subway in detail.Additionally,the water consumption of each subway line of Beijing subway was also analyzed.The results provide a scientific basis for the design and operation of energy-saving subways.", "Airborne viable and non-viable measurements were carried out in two different metro stations, one located in a tunnel and the other on the surface. The concentrations of airborne total viable bacteria (incubated at 37°C and 22°C), staphylococci, suspended dust and oxidants (ozone) were higher in the air of the tunnel station than those recorded at the surface station. In contrast, spore forming bacteria, Candida spp, fungi and actinomycetes were found at slightly higher levels in the surface station than in the tunnel station. A statistically significant difference (p<0.01) was found between the levels of suspended dust at both stations. Cladosporium, Penicillium and Aspergillus species were the dominant fungi isolates. Fusarium, Aspergillus and Penicillium are the most common fungi that produce toxins. Under certain circumstances (host susceptibility, infective dose and aerodynamic diameter) some of the airborne microorganisms e.g. actinomycetes and Aspergillus species and staphylococci may cause health problems in exposed persons based on toxic or allergic reactions." ] }
1312.1955
2952764546
We investigate computational and mechanism design aspects of scarce resource allocation, where the primary rationing mechanism is through waiting times. Specifically we consider allocating medical treatments to a population of patients. Each patient needs exactly one treatment, and can choose from @math hospitals. Hospitals have different costs, which are fully paid by a third party ---the "payer". The payer has a fixed budget @math , and each hospital will have its own waiting time. At equilibrium, each patient will choose his most preferred hospital given his intrinsic preferences and the waiting times. The payer thus computes the waiting times so that at equilibrium the budget constraint is satisfied and the social welfare is maximized. We first show that the optimization problem is NP-hard, yet if the budget can be relaxed to @math for an arbitrarily small @math , then the optimum under budget @math can be approximated efficiently. Next, we study the endogenous emergence of waiting time from the dynamics between hospitals and patients, and show that there is no need for the payer to explicitly enforce the optimal waiting times. Under certain conditions, all he need is to enforce the amount of money he wants to pay to each hospital. The dynamics will always converge to the desired waiting times in finite time. We then go beyond equilibrium solutions and investigate the optimization problem over a much larger class of mechanisms containing the equilibrium ones as special cases. With two hospitals, we show that under a natural assumption on the patients' preference profiles, optimal welfare is in fact attained by the randomized assignment mechanism, which allocates patients to hospitals at random subject to the budget constraint, but avoids waiting times. Finally, we discuss potential policy implications of our results, as well as follow-up directions and open problems.
The role of waiting time can be studied either from the supply side, namely, how waiting times interact with the hospitals' incentives, or from the demand side, namely, how they interact with the patients' incentives. In @cite_1 the authors give a thorough analysis of existing policies on reducing waiting times by affecting the incentives of either side. Our model focuses on the demand side, and below we discuss some other works that also focus on this side.
{ "cite_N": [ "@cite_1" ], "mid": [ "2004972267" ], "abstract": [ "This paper compares policies to tackle excessive waiting times for elective surgery in 12 OECD countries. It is found that waiting times may be reduced by acting on the supply of or on the demand for surgery (or both). On the supply side, evidence suggests that both capacity and financial incentives towards productivity can play an important role. On the demand side, inducing a raising of clinical thresholds may reduce waiting times but may also provoke tension between clinicians and policy makers. Preliminary evidence also suggests that an increase in private health insurance coverage may reduce waiting times." ] }
1312.1955
2952764546
We investigate computational and mechanism design aspects of scarce resource allocation, where the primary rationing mechanism is through waiting times. Specifically we consider allocating medical treatments to a population of patients. Each patient needs exactly one treatment, and can choose from @math hospitals. Hospitals have different costs, which are fully paid by a third party ---the "payer". The payer has a fixed budget @math , and each hospital will have its own waiting time. At equilibrium, each patient will choose his most preferred hospital given his intrinsic preferences and the waiting times. The payer thus computes the waiting times so that at equilibrium the budget constraint is satisfied and the social welfare is maximized. We first show that the optimization problem is NP-hard, yet if the budget can be relaxed to @math for an arbitrarily small @math , then the optimum under budget @math can be approximated efficiently. Next, we study the endogenous emergence of waiting time from the dynamics between hospitals and patients, and show that there is no need for the payer to explicitly enforce the optimal waiting times. Under certain conditions, all he need is to enforce the amount of money he wants to pay to each hospital. The dynamics will always converge to the desired waiting times in finite time. We then go beyond equilibrium solutions and investigate the optimization problem over a much larger class of mechanisms containing the equilibrium ones as special cases. With two hospitals, we show that under a natural assumption on the patients' preference profiles, optimal welfare is in fact attained by the randomized assignment mechanism, which allocates patients to hospitals at random subject to the budget constraint, but avoids waiting times. Finally, we discuss potential policy implications of our results, as well as follow-up directions and open problems.
The authors of @cite_14 study quality and waiting times in the presence of ex post moral hazard. They assume that the patients are ex ante identical, and that the treatment has objective quality levels with which both the valuations and the costs are monotonically increasing. But notice that if the patients are identical, rationing by waiting times is bound to burn a lot of social welfare, since at equilibrium every patient has to be treated in the same way ---as elaborated in our results. In our model the patients' valuations can be arbitrarily associated with different hospitals, reflecting subjective views they may have, and the hospitals' costs can also be arbitrary and do not necessarily reflect their real quality.
{ "cite_N": [ "@cite_14" ], "mid": [ "2058115997" ], "abstract": [ "We examine the role of quality and waiting time in health insurance when there is ex post moral hazard. Quality and waiting time provide additional instruments to control demand and potentially can improve the trade-off between optimal risk bearing and optimal consumption of health care. We show that optimal quality is lower than it would be in the absence of ex post moral hazard. But it is never optimal to have a positive waiting time if the marginal cost of waiting is higher for patients with greater benefits from health care." ] }
1312.1332
2949059221
We analyze the effect of tumor repopulation on optimal dose delivery in radiation therapy. We are primarily motivated by accelerated tumor repopulation towards the end of radiation treatment, which is believed to play a role in treatment failure for some tumor sites. A dynamic programming framework is developed to determine an optimal fractionation scheme based on a model of cell kill due to radiation and tumor growth in between treatment days. We find that faster tumor growth suggests shorter overall treatment duration. In addition, the presence of accelerated repopulation suggests larger dose fractions later in the treatment to compensate for the increased tumor proliferation. We prove that the optimal dose fractions are increasing over time. Numerical simulations indicate potential for improvement in treatment effectiveness.
There is a significant amount of literature, especially from the mathematical biology community, on the use of control theory and dynamic programming (DP) for optimal cancer therapy. Several of these works ( @cite_57 @cite_26 @cite_54 @cite_4 ) have looked into the optimization of chemotherapy. For radiation therapy fractionation, some studies ( @cite_46 @cite_34 @cite_36 ) have used the DP approach based on deterministic biological models, as done in this paper. However, these works have not carried out a detailed mathematical analysis of the implications of optimal dose delivery in the presence of accelerated repopulation. Using imaging information obtained between treatment days, dynamic optimization models have been developed to adaptively compensate for past accumulated errors in dose to the tumor ( @cite_53 @cite_14 @cite_42 @cite_47 ). There has also been work on online approaches that adapt the dose and treatment plan based on images obtained immediately prior to treatment ( @cite_39 @cite_55 @cite_2 @cite_48 @cite_27 @cite_5 @cite_13 ).
{ "cite_N": [ "@cite_47", "@cite_26", "@cite_4", "@cite_14", "@cite_36", "@cite_48", "@cite_54", "@cite_53", "@cite_42", "@cite_55", "@cite_39", "@cite_57", "@cite_27", "@cite_2", "@cite_5", "@cite_46", "@cite_34", "@cite_13" ], "mid": [ "1998821409", "2056572610", "", "2141232465", "2136543444", "2296872485", "1494165308", "2051237659", "17652740", "2092499963", "2047276231", "2042521492", "2187109185", "2154023438", "2080602831", "2005588258", "1973319590", "2126122222" ], "abstract": [ "In intensity-modulated radiotherapy (IMRT), a treatment is designed to deliver high radiation doses to tumors, while avoiding the healthy tissue. Optimization-based treatment planning often produces sharp dose gradients between tumors and healthy tissue. Random shifts during treatment can cause significant differences between the dose in the “optimized” plan and the actual dose delivered to a patient. An IMRT treatment plan is delivered as a series of small daily dosages, or fractions, over a period of time (typically 35 days). It has recently become technically possible to measure variations in patient setup and the delivered doses after each fraction. We develop an optimization framework, which exploits the dynamic nature of radiotherapy and information gathering by adapting the treatment plan in response to temporal variations measured during the treatment course of a individual patient. The resulting (suboptimal) control policies, which re-optimize before each fraction, include two approximate dynamic programming schemes: certainty equivalent control (CEC) and open-loop feedback control (OLFC). Computational experiments show that resulting individualized adaptive radiotherapy plans promise to provide a considerable improvement compared to non-adaptive treatment plans, while remaining computationally feasible to implement. Copyright Springer Science+Business Media, LLC 2012", "In this paper we consider the problems of modeling the tumor growth and optimize the chemotherapy treatment. A biologically based model is used with the goal of solving an optimization problem involving discrete delivery of antineoplastic drugs. Our model is formulated via compartmental analysis in order to take into account the cell cycle. The cost functional measures not only the final size of the tumor but also the total amount of drug delivered. We propose an algorithm based on the discrete maximum principle to solve the optimal drug schedule problem. Our numerical results show nice interpretations from the medical point of view.", "", "While ART has been studied for years, the specific quantitative implementation details have not. In order for this new scheme of radiation therapy (RT) to reach its potential, an effective ART treatment planning strategy capable of taking into account the dose delivery history and the patient's on-treatment geometric model must be in place. This paper performs a theoretical study of dynamic closed-loop control algorithms for ART and compares their utility with data from phantom and clinical cases. We developed two classes of algorithms: those Adapting to Changing Geometry and those Adapting to Geometry and Delivered Dose. The former class takes into account organ deformations found just before treatment. The latter class optimizes the dose distribution accumulated over the entire course of treatment by adapting at each fraction, not only to the information just before treatment about organ deformations but also to the dose delivery history. 
We showcase two algorithms in the class of those Adapting to Geometry and Delivered Dose. A comparison of the approaches indicates that certain closed-loop ART algorithms may significantly improve the current practice. We anticipate that improvements in imaging, dose verification and reporting will further increase the importance of adaptive algorithms.", "Purpose: The linear-quadratic model typically assumes that tumor sensitivity and repopulation are constant over the time course of radiotherapy. However, evidence suggests that the growth fraction increases and the cell-loss factor decreases as the tumor shrinks. We investigate whether this evolution in tumor geometry, as well as the irregular time intervals between fractions in conventional hyperfractionation schemes, can be exploited by fractionation schedules that employ time-varying fraction sizes. Methods: We construct a mathematical model of a spherical tumor with a hypoxic core and a viable rim, which is most appropriate for a prevascular tumor, and is only a caricature of a vascularized tumor. This model is embedded into the traditional linear-quadratic model by assuming instantaneous reoxygenation. Dynamic programming is used to numerically compute the fractionation regimen that maximizes the tumor-control probability (TCP) subject to constraints on the biologically effective dose of the early and late tissues. Results: In several numerical examples that employ five or 10 fractions per week on a 1-cm or 5-cm diameter tumor, optimally varying the fraction sizes increases the TCP significantly. The optimal regimen incorporates large Friday (afternoon, if 10 fractions per week) fractions that are escalated throughout the course of treatment, and larger afternoon fractions than morning fractions. Conclusion: Numerical results suggest that a significant increase in tumor cure can be achieved by allowing the fraction sizes to vary throughout the course of treatment. Several strategies deserve further investigation: using larger fractions before overnight and weekend breaks, and escalating the dose (particularly on Friday afternoons) throughout the course of treatment. © 2000 Elsevier Science Inc. Dynamic optimization, Linear‐ quadratic, Reoxygenation, Repair, Repopulation.", "", "Phase specific models for cancer chemotherapy are described as optimal control problems. We review earlier results on scheduling optimal therapies when the controls represent the effectiveness of chemotherapeutic agents, or, equivalently, when the simplifying assumption is made that drugs act instantaneously. In this paper we discuss how to incorporate more realistic medical aspects which hitherto have been neglected in the models. They include pharmacokinetic equations (PK) which model the drug's plasma concentration and various pharmacodynamic models (PD) which describe the effect the concentrations have on cells. We also briefly discuss the important medical issue of drug resistance. resistance", "Radiotherapy treatment is often delivered in a fractionated manner over a period of time. Emerging delivery devices are able to determine the actual dose that has been delivered at each stage facilitating the use of adaptive treatment plans that compensate for errors in delivery. We formulate a model of the day-to-day planning problem as a stochastic program and exhibit the gains that can be achieved by incorporating uncertainty about errors during treatment into the planning process. 
Due to size and time restrictions, the model becomes intractable for realistic instances. We show how heuristics and neuro-dynamic programming can be used to approximate the stochastic solution, and derive results from our models for realistic time periods. These results allow us to generate practical rules of thumb that can be immediately implemented in current planning technologies.", "We investigate an on-line planning strategy for the fractionated radiotherapy planning problem, which incorporates the effects of day-to-day patient motion. On-line planning demonstrates significant improvement over off-line strategies in terms of reducing registration error, but it requires extra work in the replanning procedures, such as in the CT scans and the re-computation of a deliverable dose profile. We formulate the problem in a dynamic programming framework and solve it based on the approximate policy iteration techniques of neuro-dynamic programming. In initial limited testing, the solutions we obtain outperform existing solutions and offer an improved dose profile for each fraction of the treatment.", "Radiation therapy is fractionized to differentiate the cell killing between the tumor and organ at risk (OAR). Conventionally, fractionation is done by dividing the total dose into equal fraction sizes. However, as the relative positions (configurations) between OAR and the tumor vary from fractions to fractions, intuitively, we want to use a larger fraction size when OAR and the tumor are far apart and a smaller fraction size when OAR and the tumor are close to each other. Adaptive fractionation accounts for variations of configurations between OAR and the tumor. In part I of this series, the adaptation minimizes the OAR (physical) dose and maintains the total tumor (physical) dose. In this work, instead, the adaptation is based on the biological effective dose (BED). Unlike the linear programming approach in part I, we build a fraction size lookup table using mathematical induction. The lookup table essentially describes the fraction size as a function of the remaining tumor BED, the OAR tumor dose ratio and the remaining number of fractions. The lookup table is calculated by maximizing the expected survival of OAR and preserving the tumor cell kill. Immediately before the treatment of each fraction, the OAR-tumor configuration and thus the dose ratio can be obtained from the daily setup image, and then the fraction size can be determined by the lookup table. Extensive simulations demonstrate the effectiveness of our method compared with the conventional fractionation method.", "Radiotherapy is fractionized to increase the therapeutic ratio. Fractionation in conventional treatment is determined as part of the prescription, and a fixed fraction size is used for the whole course of treatment. Due to patients' day-to-day variations on the relative distance between the tumor and the organs at risk (OAR), a better therapeutic ratio may be attained by using an adaptive fraction size. Intuitively, we want to use a larger fraction size when OAR and the tumor are far apart and a smaller fraction size when OAR and the tumor are close to each other. The concept and strategies of adaptive fractionation therapy (AFT) are introduced in this paper. AFT is an on-line adaptive technique that utilizes the variations of internal structures to get optimal OAR sparing. Changes of internal structures are classified as different configurations according to their feasibility to the radiation delivery. 
A priori knowledge is used to describe the probability distribution of these configurations. On-line processes include identifying the configuration via daily image guidance and optimizing the current fraction size. The optimization is modeled as a dynamic linear programming problem so that at the end of the treatment course, the tumor receives the same planned dose while OAR receives less dose than the regular fractionation delivery. Extensive simulations, which include thousands of treatment courses with each course consisting of 40 fractions, are used to test the efficiency and robustness of the presented technique. The gains of OAR sparing depend on the variations on configurations and the bounds of the fraction size. The larger the variations and the looser the bounds are, the larger the gains will be. Compared to the conventional fractionation technique with 2 Gy fraction in 40 fractions, for a 20 variation on tumor–OAR configurations and [1 Gy, 3 Gy] fraction size bounds, the cumulative OAR dose with adaptive fractionation is 3–8 Gy, or 7–20 less than that of the regular fractionation, while maintaining the same cumulative tumor dose as prescribed.", "This paper uses optimal control theory in conjunction with a Gompertzian type model for cellular growth to determine the optimal method of administering cycle non-specific chemotherapy or more generally the optimal durations of treatment and rest periods during chemotherapy. The performance critera employed to determine the relative merits of the therapy include not only the destruction of malignant cells, but also the sparing of a critical normal tissue. Since these criteria are at odds with one another, the solutions are found which satisfy the Pareto optimality conditions.", "The goal in external beam radiotherapy for cancer is to maximize tumor-damage while limiting toxic eects of radiation on nearby healthy anatomies. This is achieved through spatial localization and temporal dispersion of radiation dose. Once a radi- ation intensity prole that achieves the maximum possible spatial localization is designed at the beginning of a multi-week treatment-course, the total planned dose is split into a series of predetermined equal-dosage fractions delivered daily so that healthy cells can recover between sessions. Thus, existing mathematical methods for treatment planning employ static-deterministic optimization techniques, and hence, cannot adapt to a tumor's uncertain biological response over time. In this tutorial, we review a recently proposed stochastic control framework, where the ultimate objective is to design individualized treatment strategies that dynamically adapt to tumor- response, to deliver the right dose to the right location at the right time.", "The current state of the art in cancer treatment by radiation optimizes beam intensity spatially such that tumors receive high dose radiation whereas damage to nearby healthy tissues is minimized. It is common practice to deliver the radiation over several weeks, where the daily dose is a small constant fraction of the total planned. Such a 'fractionation schedule' is based on traditional models of radiobiological response where normal tissue cells possess the ability to repair sublethal damage done by radiation. This capability is significantly less prominent in tumors. Recent advances in quantitative functional imaging and biological markers are providing new opportunities to measure patient response to radiation over the treatment course. 
This opens the door for designing fractionation schedules that take into account the patient's cumulative response to radiation up to a particular treatment day in determining the fraction on that day. We propose a novel approach that, for the first time, mathematically explores the benefits of such fractionation schemes. This is achieved by building a stylistic Markov decision process (MDP) model, which incorporates some key features of the problem through intuitive choices of state and action spaces, as well as transition probability and reward functions. The structure of optimal policies for this MDP model is explored through several simple numerical examples.", "State-of-the-art methods for optimizing cancer treatment over several weeks of external beam radiotherapy take a static–deterministic view of the treatment planning process, mainly focusing on spatial distribution of dose. Recent progress in quantitative functional imaging as well as mathematical models of tumor response to radiotherapy is increasingly enabling treatment planners to monitor predict a patient’s biological response over weeks of treatment. In this paper we introduce dynamic biologically conformal radiation therapy (DBCRT), a mathematical framework intended to exploit these emerging technological and biological modeling advances to design patient-specific radiation treatment strategies that dynamically adapt to the spatiotemporal evolution of a patient’s biological response over several treatment sessions in order to achieve the best possible health outcome. More specifically, we propose a discrete-time stochastic control formalism where we use the patient’s biological condition to model the system state and the beam intensities as controls. Three approximate control schemes are then applied and compared for efficiency. Numerical simulations on test cases show that DBCRT results in a 64–98 improvement in treatment efficacy as compared to the more conventional static–deterministic approach.", "The reactions of a tumor cell population and a normal tissue cell population to irradiation are described by cell population kinetic models which consider factors such as repair, reoxygenation, and...", "Abstract Using a mathematical model based on existing models in the literature, the response of tumor and tumor-bed cell populations to fractionated radiation therapy is investigated. Problems of determining the optimal dose schedule for a given treatment calendar (specified number of doses and time intervals between doses) are formulated. A theoretical and computational method for solving such problems is proposed. Representative results, which support the efficacy of the method, are presented and discussed.", "We conduct a theoretical study of various solution methods for the adaptive fractionation problem. The two messages of this paper are as follows: (i)?dynamic programming (DP) is a useful framework for adaptive radiation therapy, particularly adaptive fractionation, because it allows us to assess how close to optimal different methods are, and (ii) heuristic methods proposed in this paper are near-optimal, and therefore, can be used to evaluate the best possible benefit of using an adaptive fraction size. The essence of adaptive fractionation is to increase the fraction size when the tumor and organ-at-risk (OAR) are far apart (a ?favorable? anatomy) and to decrease the fraction size when they are close together. 
Given that a fixed prescribed dose must be delivered to the tumor over the course of the treatment, such an approach results in a lower cumulative dose to the OAR when compared to that resulting from standard fractionation. We first establish a benchmark by using the DP algorithm to solve the problem exactly. In this case, we characterize the structure of an optimal policy, which provides guidance for our choice of heuristics. We develop two intuitive, numerically near-optimal heuristic policies, which could be used for more complex, high-dimensional problems. Furthermore, one of the heuristics requires only a statistic of the motion probability distribution, making it a reasonable method for use in a realistic setting. Numerically, we find that the amount of decrease in dose to the OAR can vary significantly (5?85 ) depending on the amount of motion in the anatomy, the number of fractions and the range of fraction sizes allowed. In general, the decrease in dose to the OAR is more pronounced when: (i) we have a high probability of large tumor?OAR distances, (ii) we use many fractions (as in a hyper-fractionated setting) and (iii) we allow large daily fraction size deviations." ] }
1312.1332
2949059221
We analyze the effect of tumor repopulation on optimal dose delivery in radiation therapy. We are primarily motivated by accelerated tumor repopulation towards the end of radiation treatment, which is believed to play a role in treatment failure for some tumor sites. A dynamic programming framework is developed to determine an optimal fractionation scheme based on a model of cell kill due to radiation and tumor growth in between treatment days. We find that faster tumor growth suggests shorter overall treatment duration. In addition, the presence of accelerated repopulation suggests larger dose fractions later in the treatment to compensate for the increased tumor proliferation. We prove that the optimal dose fractions are increasing over time. Numerical simulations indicate potential for improvement in treatment effectiveness.
Perhaps the closest related work is @cite_36 , which considers both faster tumor proliferation and re-oxygenation during the course of treatment. While a dose intensification strategy is also suggested in @cite_36 , the primary rationale for increasing dose fractions is different: it is concluded that due to the increase in tumor sensitivity from re-oxygenation, larger fraction sizes are more effective at the end of treatment. Our work, on the other hand, suggests dose intensification (i.e., larger doses over time) as a direct consequence of a model of accelerated tumor repopulation during the course of treatment.
{ "cite_N": [ "@cite_36" ], "mid": [ "2136543444" ], "abstract": [ "Purpose: The linear-quadratic model typically assumes that tumor sensitivity and repopulation are constant over the time course of radiotherapy. However, evidence suggests that the growth fraction increases and the cell-loss factor decreases as the tumor shrinks. We investigate whether this evolution in tumor geometry, as well as the irregular time intervals between fractions in conventional hyperfractionation schemes, can be exploited by fractionation schedules that employ time-varying fraction sizes. Methods: We construct a mathematical model of a spherical tumor with a hypoxic core and a viable rim, which is most appropriate for a prevascular tumor, and is only a caricature of a vascularized tumor. This model is embedded into the traditional linear-quadratic model by assuming instantaneous reoxygenation. Dynamic programming is used to numerically compute the fractionation regimen that maximizes the tumor-control probability (TCP) subject to constraints on the biologically effective dose of the early and late tissues. Results: In several numerical examples that employ five or 10 fractions per week on a 1-cm or 5-cm diameter tumor, optimally varying the fraction sizes increases the TCP significantly. The optimal regimen incorporates large Friday (afternoon, if 10 fractions per week) fractions that are escalated throughout the course of treatment, and larger afternoon fractions than morning fractions. Conclusion: Numerical results suggest that a significant increase in tumor cure can be achieved by allowing the fraction sizes to vary throughout the course of treatment. Several strategies deserve further investigation: using larger fractions before overnight and weekend breaks, and escalating the dose (particularly on Friday afternoons) throughout the course of treatment. © 2000 Elsevier Science Inc. Dynamic optimization, Linear‐ quadratic, Reoxygenation, Repair, Repopulation." ] }
1312.1831
2950990081
We study mechanism design problems in the ordinal setting wherein the preferences of agents are described by orderings over outcomes, as opposed to specific numerical values associated with them. This setting is relevant when agents can compare outcomes, but aren't able to evaluate precise utilities for them. Such a situation arises in diverse contexts including voting and matching markets. Our paper addresses two issues that arise in ordinal mechanism design. To design social welfare maximizing mechanisms, one needs to be able to quantitatively measure the welfare of an outcome which is not clear in the ordinal setting. Second, since the impossibility results of Gibbard and Satterthwaite Gibbard73,Satterthwaite75 force one to move to randomized mechanisms, one needs a more nuanced notion of truthfulness. We propose rank approximation as a metric for measuring the quality of an outcome, which allows us to evaluate mechanisms based on worst-case performance, and lex-truthfulness as a notion of truthfulness for randomized ordinal mechanisms. Lex-truthfulness is stronger than notions studied in the literature, and yet flexible enough to admit a rich class of mechanisms circumventing classical impossibility results . We demonstrate the usefulness of the above notions by devising lex-truthful mechanisms achieving good rank-approximation factors, both in the general ordinal setting, as well as structured settings such as (one-sided) matching markets , and its generalizations, matroid and scheduling markets.
Recent work, mostly in the CS literature, has led to a more nuanced notion of efficiency. Procaccia and Rosenschein @cite_31 studied the strong welfare factor notion (which they call distortion), and noticed that deterministic mechanisms have unbounded distortion. @cite_21 proposed randomized mechanisms and showed that the strong welfare factor is at most @math , if the consistent cardinal-utility profile is normalized. In contrast, our rank approximation results imply @math -approximate outcomes, but under a stronger restriction on the consistent cardinal utilities. The notion of approximations to scoring rules was studied by Procaccia @cite_24 , who described strongly truthful mechanisms which @math -approximate Borda, but @math -approximate the plurality rule. In contrast, our (non-truthful) mechanism @math -rank approximates any scoring rule, and plurality can be arbitrarily well approximated by a lex-truthful mechanism.
{ "cite_N": [ "@cite_24", "@cite_31", "@cite_21" ], "mid": [ "1567986438", "1530616289", "2055121363" ], "abstract": [ "The Gibbard-Satterthwaite Theorem asserts that any reasonable voting rule cannot be strategyproof. A large body of research in AI deals with circumventing this theorem via computational considerations; the goal is to design voting rules that are computationally hard, in the worst-case, to manipulate. However, recent work indicates that the prominent voting rules are usually easy to manipulate. In this paper, we suggest a new CS-oriented approach to circumventing Gibbard-Satterthwaite, using randomization and approximation. Specifically, we wish to design strategyproof randomized voting rules that are close, in a standard approximation sense, to prominent score-based (deterministic) voting rules. We give tight lower and upper bounds on the approximation ratio achievable via strategyproof randomized rules with respect to positional scoring rules, Copeland, and Maximin.", "The theoretical guarantees provided by voting have distinguished it as a prominent method of preference aggregation among autonomous agents. However, unlike humans, agents usually assign each candidate an exact utility, whereas an election is resolved based solely on each voter's linear ordering of candidates. In essence, the agents' cardinal (utility-based) preferences are embedded into the space of ordinal preferences. This often gives rise to a distortion in the preferences, and hence in the social welfare of the outcome. In this paper, we formally define and analyze the concept of distortion. We fully characterize the distortion under different restrictions imposed on agents' cardinal preferences; both possibility and strong impossibility results are established. We also tackle some computational aspects of calculating the distortion. Ultimately, we argue that, whenever voting is applied in a multiagent system, distortion must be a pivotal consideration.", "We adopt a utilitarian perspective on social choice, assuming that agents have (possibly latent) utility functions over some space of alternatives. For many reasons one might consider mechanisms, or social choice functions, that only have access to the ordinal rankings of alternatives by the individual agents rather than their utility functions. In this context, one possible objective for a social choice function is the maximization of (expected) social welfare relative to the information contained in these rankings. We study such optimal social choice functions under three different models, and underscore the important role played by scoring functions. In our worst-case model, no assumptions are made about the underlying distribution and we analyze the worst-case distortion---or degree to which the selected alternative does not maximize social welfare---of optimal social choice functions. In our average-case model, we derive optimal functions under neutral (or impartial culture) distributional models. Finally, a very general learning-theoretic model allows for the computation of optimal social choice functions (i.e., that maximize expected social welfare) under arbitrary, sampleable distributions. In the latter case, we provide both algorithms and sample complexity results for the class of scoring functions, and further validate the approach empirically." ] }
1312.1831
2950990081
We study mechanism design problems in the ordinal setting wherein the preferences of agents are described by orderings over outcomes, as opposed to specific numerical values associated with them. This setting is relevant when agents can compare outcomes, but aren't able to evaluate precise utilities for them. Such a situation arises in diverse contexts including voting and matching markets. Our paper addresses two issues that arise in ordinal mechanism design. To design social welfare maximizing mechanisms, one needs to be able to quantitatively measure the welfare of an outcome which is not clear in the ordinal setting. Second, since the impossibility results of Gibbard and Satterthwaite Gibbard73,Satterthwaite75 force one to move to randomized mechanisms, one needs a more nuanced notion of truthfulness. We propose rank approximation as a metric for measuring the quality of an outcome, which allows us to evaluate mechanisms based on worst-case performance, and lex-truthfulness as a notion of truthfulness for randomized ordinal mechanisms. Lex-truthfulness is stronger than notions studied in the literature, and yet flexible enough to admit a rich class of mechanisms circumventing classical impossibility results . We demonstrate the usefulness of the above notions by devising lex-truthful mechanisms achieving good rank-approximation factors, both in the general ordinal setting, as well as structured settings such as (one-sided) matching markets , and its generalizations, matroid and scheduling markets.
Subsequent to the Gibbard-Satterthwaite result, researchers focused on design of randomized mechanisms. As mentioned above, this led to differing notions of truthfulness. Strong truthfulness was proposed by Gibbard @cite_26 . Postlewaite and Schmeidler @cite_30 proposed weak truthfulness and proved that no weakly truthful mechanism on @math or more outcomes, can be (ex ante) Pareto optimal if agents are allowed to have priors on their (own) preferences. Subsequently, @cite_18 removed the prior condition, but prove impossibility of only certain kinds of mechanism. We remark that our lex-truthful mechanisms, which are also weakly truthful, do not contradict these results, since our mechanisms are not Pareto optimal. However, our mechanisms are @math -implementations of Pareto-optimal SCFs, so they satisfy Pareto optimality with probability at least @math . Thus, we bypass the above impossibility results while sacrificing a modicum of Pareto-optimality.
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_26" ], "mid": [ "2052444997", "28510486", "1977126954" ], "abstract": [ "A person is said to prefer in the stochastic dominance sense one lottery-over-outcomes over another lottery-over-outcomes if the probability of his (at least) first choice being selected in the first lottery is greater than or equal to the analogous probability in the second lottery, the probability of his at least second choice being selected in the first lottery is greater than or equal to the analogous probability in the second lottery, and so on, with at least one strict inequality. This (partial) preference relation is used to define straightforwardness of a social choice function that maps profiles of ordinal preferences into lotteries over outcomes. Given a prior probability distribution on profiles this partial preference ordering (taking into account the additional randomness) is used to induce a partial preference ordering over social choice functions for each individual. These are used in turn to define ex ante Pareto undominated (efficient) social choice functions. The main result is that it is impossible for a social choice function to be both ex ante efficient and straightforward. We also extend the result to cardinal preferences and expected utility evaluations.", "Two fundamental notions in microeconomic theory are efficiency---no agent can be made better off without making another one worse off---and strategyproofness---no agent can obtain a more preferred outcome by misrepresenting his preferences. When social outcomes are probability distributions (or lotteries) over alternatives, there are varying degrees of these notions depending on how preferences over alternatives are extended to preference over lotteries. We show that efficiency and strategyproofness are incompatible to some extent when preferences are defined using stochastic dominance (SD) and therefore introduce a natural weakening of SD based on Savage's sure-thing principle (ST). While random serial dictatorship is SD-strategyproof, it only satisfies ST-efficiency. Our main result is that strict maximal lotteries---an appealing class of social decision schemes due to Kreweras and Fishburn---satisfy SD-efficiency and ST-strategyproofness.", "" ] }
1312.1494
1691081325
The Vietoris-Rips filtration for an @math -point metric space is a sequence of large simplicial complexes adding a topological structure to the otherwise disconnected space. The persistent homology is a key tool in topological data analysis and studies topological features of data that persist over many scales. The fastest algorithm for computing persistent homology of a filtration has time @math , where @math is the number of updates (additions or deletions of simplices), @math is the time for multiplication of @math matrices. For a space of @math points given by their pairwise distances, we approximate the Vietoris-Rips filtration by a zigzag filtration consisting of @math updates, which is sublinear in @math . The constant depends on a given error of approximation and on the doubling dimension of the metric space. Then the persistent homology of this sublinear-size filtration can be computed in time @math , which is subquadratic in @math .
If we run the best algorithm @cite_3 @cite_2 for persistent homology on the Sheehy approximation to the Vietoris-Rips filtration, the overall running time for approximating persistent homology will be @math . This overquadratic time is a bottleneck, but allows us to replace a sophisticated construction of a net-tree by a simpler algorithm for @math -farthest neighbors in a metric space.
{ "cite_N": [ "@cite_3", "@cite_2" ], "mid": [ "2154477220", "2207449046" ], "abstract": [ "We present a new algorithm for computing zigzag persistent homology, an algebraic structure which encodes changes to homology groups of a simplicial complex over a sequence of simplex additions and deletions. Provided that there is an algorithm that multiplies two n×n matrices in M(n) time, our algorithm runs in O(M(n) + n2 log2 n) time for a sequence of n additions and deletions. In particular, the running time is O(n2.376), by result of Coppersmith and Winograd. The fastest previously known algorithm for this problem takes O(n3) time in the worst case.", "In this paper, we present the first output-sensitive algorithm to compute the persistence diagram of a filtered simplicial complex. For any @C>0, it returns only those homology classes with persistence at least @C. Instead of the classical reduction via column operations, our algorithm performs rank computations on submatrices of the boundary matrix. For an arbitrary constant @[email protected]?(0,1), the running time is O(C\"(\"1\"-\"@d\")\"@CR\"d(n)logn), where C\"(\"1\"-\"@d\")\"@C is the number of homology classes with persistence at least ([email protected])@C, n is the total number of simplices in the complex, d its dimension, and R\"d(n) is the complexity of computing the rank of an nxn matrix with O(dn) nonzero entries. Depending on the choice of the rank algorithm, this yields a deterministic O(C\"(\"1\"-\"@d\")\"@Cn^2^.^3^7^6) algorithm, an O(C\"(\"1\"-\"@d\")\"@Cn^2^.^2^8) Las-Vegas algorithm, or an O(C\"(\"1\"-\"@d\")\"@Cn^2^+^@e) Monte-Carlo algorithm for an arbitrary @e>0. The space complexity of the Monte-Carlo version is bounded by O(dn)=O(nlogn)." ] }
1312.1494
1691081325
The Vietoris-Rips filtration for an @math -point metric space is a sequence of large simplicial complexes adding a topological structure to the otherwise disconnected space. The persistent homology is a key tool in topological data analysis and studies topological features of data that persist over many scales. The fastest algorithm for computing persistent homology of a filtration has time @math , where @math is the number of updates (additions or deletions of simplices), @math is the time for multiplication of @math matrices. For a space of @math points given by their pairwise distances, we approximate the Vietoris-Rips filtration by a zigzag filtration consisting of @math updates, which is sublinear in @math . The constant depends on a given error of approximation and on the doubling dimension of the metric space. Then the persistent homology of this sublinear-size filtration can be computed in time @math , which is subquadratic in @math .
We solve Problem in Theorem by building a sublinear-size approximation to the Vietoris-Rips filtration on the @math given points in a metric space and then running the best algorithm for computing the zigzag persistent homology. Due to the stability of persistent homology @cite_0 , the error of the approximation at the homology level can be controlled at the level of the filtration.
{ "cite_N": [ "@cite_0" ], "mid": [ "2056761334" ], "abstract": [ "Topological persistence has proven to be a key concept for the study of real-valued functions defined over topological spaces. Its validity relies on the fundamental property that the persistence diagrams of nearby functions are close. However, existing stability results are restricted to the case of continuous functions defined over triangulable spaces. In this paper, we present new stability results that do not suffer from the above restrictions. Furthermore, by working at an algebraic level directly, we make it possible to compare the persistence diagrams of functions defined over different spaces, thus enabling a variety of new applications of the concept of persistence. Along the way, we extend the definition of persistence diagram to a larger setting, introduce the notions of discretization of a persistence module and associated pixelization map, define a proximity measure between persistence modules, and show how to interpolate between persistence modules, thereby lending a more analytic character to this otherwise algebraic setting. We believe these new theoretical concepts and tools shed new light on the theory of persistence, in addition to simplifying proofs and enabling new applications." ] }
1312.0912
1494038338
Community detection is an important tool for analyzing the social graph of mobile phone users. The problem of nding communities in static graphs has been widely studied. However, since mobile social networks evolve over time, static graph algorithms are not sucient. To be useful in practice (e.g. when used by a telecom analyst), the stability of the partitions becomes critical. We tackle this particular use case in this paper: tracking evolution of com- munities in dynamic scenarios with focus on stability. We propose two modications to a widely used static community detection algorithm: we introduce xed nodes and preferential attachment to pre-existing com- munities. We then describe experiments to study the stability and quality of the resulting partitions on real-world social networks, represented by monthly call graphs for millions of subscribers.
Many related papers have investigated the importance of communities, and served as references for this work. Foundational concepts and examples for community detection may be found in @cite_0 . For instance, the interactions between the major characters of the novel "Les Misérables" by Victor Hugo can be viewed as a graph with 77 nodes, with the characters organized as communities (the example is very intuitive and easy to grasp for fans of the novel). Another classic example is Zachary’s karate club, an organization with 34 nodes that split into two separate clubs in real life. Each new part can be mapped almost perfectly to one of the two main communities of friends detected in the original club. The paper also analyzes detection algorithms, such as those based on shortest paths and random walks. Of course, the graphs considered in @cite_0 are of much smaller scale than the ones considered in this work.
{ "cite_N": [ "@cite_0" ], "mid": [ "2095293504" ], "abstract": [ "We propose and study a set of algorithms for discovering community structure in networks-natural divisions of network nodes into densely connected subgroups. Our algorithms all share two definitive features: first, they involve iterative removal of edges from the network to split it into communities, the edges removed being identified using any one of a number of possible \"betweenness\" measures, and second, these measures are, crucially, recalculated after each removal. We also propose a measure for the strength of the community structure found by our algorithms, which gives us an objective metric for choosing the number of communities into which a network should be divided. We demonstrate that our algorithms are highly effective at discovering community structure in both computer-generated and real-world network data, and show how they can be used to shed light on the sometimes dauntingly complex structure of networked systems." ] }
1312.0912
1494038338
Community detection is an important tool for analyzing the social graph of mobile phone users. The problem of nding communities in static graphs has been widely studied. However, since mobile social networks evolve over time, static graph algorithms are not sucient. To be useful in practice (e.g. when used by a telecom analyst), the stability of the partitions becomes critical. We tackle this particular use case in this paper: tracking evolution of com- munities in dynamic scenarios with focus on stability. We propose two modications to a widely used static community detection algorithm: we introduce xed nodes and preferential attachment to pre-existing com- munities. We then describe experiments to study the stability and quality of the resulting partitions on real-world social networks, represented by monthly call graphs for millions of subscribers.
To perform community detection in graphs with 92 million nodes (see ), efficient algorithms are required. We based our research on the Louvain Method, originally published in @cite_5 . As discussed in , it was modified in @cite_4 to obtain a dynamic algorithm. However, that algorithm still lacks stability. We used the implementation of @cite_4 as a baseline to evaluate our Dynamic Louvain Method.
{ "cite_N": [ "@cite_5", "@cite_4" ], "mid": [ "2131681506", "1528751587" ], "abstract": [ "We propose a simple method to extract the community structure of large networks. Our method is a heuristic method that is based on modularity optimization. It is shown to outperform all other known community detection methods in terms of computation time. Moreover, the quality of the communities detected is very good, as measured by the so-called modularity. This is shown first by identifying language communities in a Belgian mobile phone network of 2 million customers and by analysing a web graph of 118 million nodes and more than one billion links. The accuracy of our algorithm is also verified on ad hoc modular networks.", "Complex networks can often be divided in dense sub-networks called communities. Using a partition edit distance, we study how three community detection algorithms transform their outputs if the input network is slightly modified. The instabilities appear to be important and we propose a modification of one algorithm to stabilize it and to allow the tracking of the communities in an evolving network. This modification has one parameter which is a tradeoff between stability and quality. The resulting algorithm appears to be very effective. We finally use it on an evolving network of blogs." ] }
1312.0912
1494038338
Community detection is an important tool for analyzing the social graph of mobile phone users. The problem of nding communities in static graphs has been widely studied. However, since mobile social networks evolve over time, static graph algorithms are not sucient. To be useful in practice (e.g. when used by a telecom analyst), the stability of the partitions becomes critical. We tackle this particular use case in this paper: tracking evolution of com- munities in dynamic scenarios with focus on stability. We propose two modications to a widely used static community detection algorithm: we introduce xed nodes and preferential attachment to pre-existing com- munities. We then describe experiments to study the stability and quality of the resulting partitions on real-world social networks, represented by monthly call graphs for millions of subscribers.
A thorough study of the history and the state of the art in community detection (up to 2010) can be found in @cite_1 . In particular, the author discusses ideas on the roles of vertices within communities. We have implemented a classification of nodes as leaders, followers and marginals within each community. Our leaders correspond to "central vertices", but we do not compute "boundary vertices", which could be useful.
{ "cite_N": [ "@cite_1" ], "mid": [ "2127048411" ], "abstract": [ "The modern science of networks has brought significant advances to our understanding of complex systems. One of the most relevant features of graphs representing real systems is community structure, or clustering, i.e. the organization of vertices in clusters, with many edges joining vertices of the same cluster and comparatively few edges joining vertices of different clusters. Such clusters, or communities, can be considered as fairly independent compartments of a graph, playing a similar role like, e.g., the tissues or the organs in the human body. Detecting communities is of great importance in sociology, biology and computer science, disciplines where systems are often represented as graphs. This problem is very hard and not yet satisfactorily solved, despite the huge effort of a large interdisciplinary community of scientists working on it over the past few years. We will attempt a thorough exposition of the topic, from the definition of the main elements of the problem, to the presentation of most methods developed, with a special focus on techniques designed by statistical physicists, from the discussion of crucial issues like the significance of clustering and how methods should be tested and compared against each other, to the description of applications to real networks." ] }
1312.0912
1494038338
Community detection is an important tool for analyzing the social graph of mobile phone users. The problem of nding communities in static graphs has been widely studied. However, since mobile social networks evolve over time, static graph algorithms are not sucient. To be useful in practice (e.g. when used by a telecom analyst), the stability of the partitions becomes critical. We tackle this particular use case in this paper: tracking evolution of com- munities in dynamic scenarios with focus on stability. We propose two modications to a widely used static community detection algorithm: we introduce xed nodes and preferential attachment to pre-existing com- munities. We then describe experiments to study the stability and quality of the resulting partitions on real-world social networks, represented by monthly call graphs for millions of subscribers.
Besides the static analysis, the report discusses "community evolution" (dynamic communities), although it points out that the analysis of dynamic communities is "still in its infancy". It suggests that it would be desirable to have "a unified framework, in which clusters are deduced both from the current structure of the graph and from the knowledge of the cluster structure at previous times". We have implemented that idea, since we use the previous history of the community structure throughout the whole algorithm (according to the @math and @math parameters), and not only during node initialization (as in @cite_4 ).
{ "cite_N": [ "@cite_4" ], "mid": [ "1528751587" ], "abstract": [ "Complex networks can often be divided in dense sub-networks called communities. Using a partition edit distance, we study how three community detection algorithms transform their outputs if the input network is slightly modified. The instabilities appear to be important and we propose a modification of one algorithm to stabilize it and to allow the tracking of the communities in an evolving network. This modification has one parameter which is a tradeoff between stability and quality. The resulting algorithm appears to be very effective. We finally use it on an evolving network of blogs." ] }
1312.0912
1494038338
Community detection is an important tool for analyzing the social graph of mobile phone users. The problem of nding communities in static graphs has been widely studied. However, since mobile social networks evolve over time, static graph algorithms are not sucient. To be useful in practice (e.g. when used by a telecom analyst), the stability of the partitions becomes critical. We tackle this particular use case in this paper: tracking evolution of com- munities in dynamic scenarios with focus on stability. We propose two modications to a widely used static community detection algorithm: we introduce xed nodes and preferential attachment to pre-existing com- munities. We then describe experiments to study the stability and quality of the resulting partitions on real-world social networks, represented by monthly call graphs for millions of subscribers.
Applications of community evolution are not only to be found in mobile social networks. In @cite_7 , the authors study the evolution of scientific collaboration networks. In the analysis of exchange markets @cite_2 , the dynamics of currency exchanges (viewed as a dynamic graph of currency pairs) have been studied. For instance, changes in the currency exchange communities effectively reflect the Mexican peso crisis of 1994. The scale there is also much smaller (only 11 currencies are analyzed). A similarity with our work is that financial markets are one of the few fields where a detailed time evolution is readily available. In our case we have data from telecom companies that spans a wide range of time (several months) and has fine-grained resolution (day, hour, minute and second of each call or message).
{ "cite_N": [ "@cite_7", "@cite_2" ], "mid": [ "2092124750", "2105480448" ], "abstract": [ "The rich set of interactions between individuals in society results in complex community structure, capturing highly connected circles of friends, families or professional cliques in a social network. Thanks to frequent changes in the activity and communication patterns of individuals, the associated social and communication network is subject to constant evolution. Our knowledge of the mechanisms governing the underlying community dynamics is limited, but is essential for a deeper understanding of the development and self-optimization of society as a whole. We have developed an algorithm based on clique percolation that allows us to investigate the time dependence of overlapping communities on a large scale, and thus uncover basic relationships characterizing community evolution. Our focus is on networks capturing the collaboration between scientists and the calls between mobile phone users. We find that large groups persist for longer if they are capable of dynamically altering their membership, suggesting that an ability to change the group composition results in better adaptability. The behaviour of small groups displays the opposite tendency-the condition for stability is that their composition remains unchanged. We also show that knowledge of the time commitment of members to a given community can be used for estimating the community's lifetime. These findings offer insight into the fundamental differences between the dynamics of small groups and large institutions.", "We use techniques from network science to study correlations in the foreign exchange (FX) market during the period 1991--2008. We consider an FX market network in which each node represents an exchange rate and each weighted edge represents a time-dependent correlation between the rates. To provide insights into the clustering of the exchange-rate time series, we investigate dynamic communities in the network. We show that there is a relationship between an exchange rate's functional role within the market and its position within its community and use a node-centric community analysis to track the temporal dynamics of such roles. This reveals which exchange rates dominate the market at particular times and also identifies exchange rates that experienced significant changes in market role. We also use the community dynamics to uncover major structural changes that occurred in the FX market. Our techniques are general and will be similarly useful for investigating correlations in other markets." ] }
1312.1254
2038891887
Higher-order low-rank tensors naturally arise in many applications including hyperspectral data recovery, video inpainting, seismic data reconstruction, and so on. We propose a new model to recover a low-rank tensor by simultaneously performing low-rank matrix factorizations to the all-mode matricizations of the underlying tensor. An alternating minimization algorithm is applied to solve the model, along with two adaptive rank-adjusting strategies when the exact rank is not known. Phase transition plots reveal that our algorithm can recover a variety of synthetic low-rank tensors from significantly fewer samples than the compared methods, which include a matrix completion method applied to tensor recovery and two state-of-the-art tensor completion methods. Further tests on real-world data show similar advantages. Although our model is non-convex, our algorithm performs consistently throughout the tests and gives better results than the compared methods, some of which are based on convex models. In addition, subsequence convergence of our algorithm can be established in the sense that any limit point of the iterates satisfies the KKT conditions.
Our model can be regarded as an extension of the following model @cite_21 from matrix completion to tensor completion, where @math contains partially observed entries of the underlying (approximately) low-rank matrix @math . If @math in , i.e., the underlying tensor @math is two-way, then it is easy to see that reduces to by noting @math . The problem is solved in @cite_21 by a successive over-relaxation (SOR) method, named LMaFit. Although is non-convex, extensive experiments on both synthetic and real-world data demonstrate that solved by LMaFit performs significantly better than nuclear-norm-based convex models such as , where @math denotes the nuclear norm of @math , defined as the sum of its singular values. (The matrix nuclear norm is the convex envelope of the matrix rank function @cite_27 , and nuclear norm minimization promotes a low-rank structure in the solution.)
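The displayed equations referenced in the paragraph above were lost in extraction. As a reading aid only, the two matrix-completion formulations being contrasted, the LMaFit-style low-rank factorization model and the nuclear-norm model, can be sketched as follows; the symbol names (X, Y, Z, B, Omega) are our own and may differ from the paper's notation.

```latex
% Sketch (our notation) of the two matrix-completion models contrasted above.
% LMaFit-style low-rank factorization model, with P_Omega keeping the observed entries of B:
\[
  \min_{X,\,Y,\,Z} \ \frac{1}{2}\,\| X Y - Z \|_F^2
  \quad \text{s.t.} \quad \mathcal{P}_\Omega(Z) = \mathcal{P}_\Omega(B),
  \qquad X \in \mathbb{R}^{m \times r},\ Y \in \mathbb{R}^{r \times n}.
\]
% Nuclear-norm convex model, where the nuclear norm is the sum of singular values:
\[
  \min_{Z} \ \| Z \|_* \quad \text{s.t.} \quad \mathcal{P}_\Omega(Z) = \mathcal{P}_\Omega(B),
  \qquad \| Z \|_* = \sum_i \sigma_i(Z).
\]
```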
{ "cite_N": [ "@cite_27", "@cite_21" ], "mid": [ "2118550318", "2060204507" ], "abstract": [ "The affine rank minimization problem consists of finding a matrix of minimum rank that satisfies a given system of linear equality constraints. Such problems have appeared in the literature of a diverse set of fields including system identification and control, Euclidean embedding, and collaborative filtering. Although specific instances can often be solved with specialized algorithms, the general affine rank minimization problem is NP-hard because it contains vector cardinality minimization as a special case. In this paper, we show that if a certain restricted isometry property holds for the linear transformation defining the constraints, the minimum-rank solution can be recovered by solving a convex optimization problem, namely, the minimization of the nuclear norm over the given affine space. We present several random ensembles of equations where the restricted isometry property holds with overwhelming probability, provided the codimension of the subspace is sufficiently large. The techniques used in our analysis have strong parallels in the compressed sensing framework. We discuss how affine rank minimization generalizes this preexisting concept and outline a dictionary relating concepts from cardinality minimization to those of rank minimization. We also discuss several algorithmic approaches to minimizing the nuclear norm and illustrate our results with numerical examples.", "The matrix completion problem is to recover a low-rank matrix from a subset of its entries. The main solution strategy for this problem has been based on nuclear-norm minimization which requires computing singular value decompositions—a task that is increasingly costly as matrix sizes and ranks increase. To improve the capacity of solving large-scale problems, we propose a low-rank factorization model and construct a nonlinear successive over-relaxation (SOR) algorithm that only requires solving a linear least squares problem per iteration. Extensive numerical experiments show that the algorithm can reliably solve a wide range of problems at a speed at least several times faster than many nuclear-norm minimization algorithms. In addition, convergence of this nonlinear SOR algorithm to a stationary point is analyzed." ] }
1312.1254
2038891887
Higher-order low-rank tensors naturally arise in many applications including hyperspectral data recovery, video inpainting, seismic data reconstruction, and so on. We propose a new model to recover a low-rank tensor by simultaneously performing low-rank matrix factorizations to the all-mode matricizations of the underlying tensor. An alternating minimization algorithm is applied to solve the model, along with two adaptive rank-adjusting strategies when the exact rank is not known. Phase transition plots reveal that our algorithm can recover a variety of synthetic low-rank tensors from significantly fewer samples than the compared methods, which include a matrix completion method applied to tensor recovery and two state-of-the-art tensor completion methods. Further tests on real-world data show similar advantages. Although our model is non-convex, our algorithm performs consistently throughout the tests and gives better results than the compared methods, some of which are based on convex models. In addition, subsequence convergence of our algorithm can be established in the sense that any limit point of the iterates satisfies the KKT conditions.
The work @cite_26 generalizes to the tensor case, and to recover the (approximately) low-rank tensor @math , it proposes to solve where @math are preselected weights satisfying @math . Different from our model , the problem is convex, and in @cite_26 , various methods are applied to solve it, such as the block coordinate descent method, the proximal gradient method, and the alternating direction method of multipliers (ADMM). The model utilizes the low-rankness of all mode unfoldings of the tensor, and as demonstrated in @cite_26 , it can significantly improve the solution quality over that obtained by solving , where the matrix @math corresponds to some mode unfolding of the tensor.
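The displayed model of @cite_26 is likewise missing above. A sketch of that weighted sum-of-nuclear-norms formulation in our own notation, with M the partially observed tensor, X_(n) the mode-n unfolding, and alpha_n the preselected weights, is:

```latex
% Sketch (our notation) of the weighted sum-of-nuclear-norms model of @cite_26:
% X_(n) is the mode-n unfolding of the N-way tensor X, M holds the observed entries.
\[
  \min_{\mathcal{X}} \ \sum_{n=1}^{N} \alpha_n \,\| X_{(n)} \|_*
  \quad \text{s.t.} \quad \mathcal{P}_\Omega(\mathcal{X}) = \mathcal{P}_\Omega(\mathcal{M}),
  \qquad \alpha_n \ge 0,\ \sum_{n=1}^{N} \alpha_n = 1.
\]
```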
{ "cite_N": [ "@cite_26" ], "mid": [ "2091449379" ], "abstract": [ "In this paper, we propose an algorithm to estimate missing values in tensors of visual data. The values can be missing due to problems in the acquisition process or because the user manually identified unwanted outliers. Our algorithm works even with a small amount of samples and it can propagate structure to fill larger missing regions. Our methodology is built on recent studies about matrix completion using the matrix trace norm. The contribution of our paper is to extend the matrix case to the tensor case by proposing the first definition of the trace norm for tensors and then by building a working algorithm. First, we propose a definition for the tensor trace norm that generalizes the established definition of the matrix trace norm. Second, similarly to matrix completion, the tensor completion is formulated as a convex optimization problem. Unfortunately, the straightforward problem extension is significantly harder to solve than the matrix case because of the dependency among multiple constraints. To tackle this problem, we developed three algorithms: simple low rank tensor completion (SiLRTC), fast low rank tensor completion (FaLRTC), and high accuracy low rank tensor completion (HaLRTC). The SiLRTC algorithm is simple to implement and employs a relaxation technique to separate the dependant relationships and uses the block coordinate descent (BCD) method to achieve a globally optimal solution; the FaLRTC algorithm utilizes a smoothing scheme to transform the original nonsmooth problem into a smooth one and can be used to solve a general tensor trace norm minimization problem; the HaLRTC algorithm applies the alternating direction method of multipliers (ADMMs) to our problem. Our experiments show potential applications of our algorithms and the quantitative evaluation indicates that our methods are more accurate and robust than heuristic approaches. The efficiency comparison indicates that FaLTRC and HaLRTC are more efficient than SiLRTC and between FaLRTC and HaLRTC the former is more efficient to obtain a low accuracy solution and the latter is preferred if a high-accuracy solution is desired." ] }
1312.1254
2038891887
Higher-order low-rank tensors naturally arise in many applications including hyperspectral data recovery, video inpainting, seismic data reconstruction, and so on. We propose a new model to recover a low-rank tensor by simultaneously performing low-rank matrix factorizations to the all-mode matricizations of the underlying tensor. An alternating minimization algorithm is applied to solve the model, along with two adaptive rank-adjusting strategies when the exact rank is not known. Phase transition plots reveal that our algorithm can recover a variety of synthetic low-rank tensors from significantly fewer samples than the compared methods, which include a matrix completion method applied to tensor recovery and two state-of-the-art tensor completion methods. Further tests on real-world data show similar advantages. Although our model is non-convex, our algorithm performs consistently throughout the tests and gives better results than the compared methods, some of which are based on convex models. In addition, subsequence convergence of our algorithm can be established in the sense that any limit point of the iterates satisfies the KKT conditions.
The recent work @cite_28 proposes a more "square" convex model for recovering @math as follows: where @math is the tensor obtained by relabeling mode @math of @math as mode @math for @math , and @math , @math and the permutation @math are chosen to make @math as close as possible to @math . The idea of reshaping a tensor into a "square" matrix has also appeared in @cite_1 for tensor principal component analysis. When the order of @math is no more than three, is the same as with @math corresponding to some mode unfolding of the tensor, and it may not perform as well as . However, for a low-rank tensor of order more than three, it is shown in @cite_28 that can exactly recover the tensor from far fewer observed entries than those required by .
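The "square" model's displayed equation is also missing from the paragraph above. The following is a sketch, in our own notation, of the square-reshaping idea of @cite_28 : group a permutation of the first j modes into rows and the remaining modes into columns, and minimize the nuclear norm of the resulting matrix subject to the observations.

```latex
% Sketch (our notation) of the "square deal" reshaping model of @cite_28:
% X_[j] groups a permutation pi of the first j modes into rows and the rest into columns.
\[
  \min_{\mathcal{X}} \ \| X_{[j]} \|_*
  \quad \text{s.t.} \quad \mathcal{P}_\Omega(\mathcal{X}) = \mathcal{P}_\Omega(\mathcal{M}),
  \qquad
  X_{[j]} \in \mathbb{R}^{\left(\prod_{i \le j} n_{\pi(i)}\right) \times \left(\prod_{i > j} n_{\pi(i)}\right)}.
\]
```

Here j and the permutation are chosen so that the row and column dimensions of the reshaped matrix are as balanced (square) as possible.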
{ "cite_N": [ "@cite_28", "@cite_1" ], "mid": [ "2951021721", "2069287942" ], "abstract": [ "Recovering a low-rank tensor from incomplete information is a recurring problem in signal processing and machine learning. The most popular convex relaxation of this problem minimizes the sum of the nuclear norms of the unfoldings of the tensor. We show that this approach can be substantially suboptimal: reliably recovering a @math -way tensor of length @math and Tucker rank @math from Gaussian measurements requires @math observations. In contrast, a certain (intractable) nonconvex formulation needs only @math observations. We introduce a very simple, new convex relaxation, which partially bridges this gap. Our new formulation succeeds with @math observations. While these results pertain to Gaussian measurements, simulations strongly suggest that the new norm also outperforms the sum of nuclear norms for tensor completion from a random subset of entries. Our lower bound for the sum-of-nuclear-norms model follows from a new result on recovering signals with multiple sparse structures (e.g. sparse, low rank), which perhaps surprisingly demonstrates the significant suboptimality of the commonly used recovery approach via minimizing the sum of individual sparsity inducing norms (e.g. @math , nuclear norm). Our new formulation for low-rank tensor recovery however opens the possibility in reducing the sample complexity by exploiting several structures jointly.", "This paper is concerned with the computation of the principal components for a general tensor, known as the tensor principal component analysis (PCA) problem. We show that the general tensor PCA problem is reducible to its special case where the tensor in question is super-symmetric with an even degree. In that case, the tensor can be embedded into a symmetric matrix. We prove that if the tensor is rank-one, then the embedded matrix must be rank-one too, and vice versa. The tensor PCA problem can thus be solved by means of matrix optimization under a rank-one constraint, for which we propose two solution methods: (1) imposing a nuclear norm penalty in the objective to enforce a low-rank solution; (2) relaxing the rank-one constraint by semidefinite programming. Interestingly, our experiments show that both methods can yield a rank-one solution for almost all the randomly generated instances, in which case solving the original tensor PCA problem to optimality. To further cope with the size of the resulting convex optimization models, we propose to use the alternating direction method of multipliers, which reduces significantly the computational efforts. Various extensions of the model are considered as well." ] }
1312.1254
2038891887
Higher-order low-rank tensors naturally arise in many applications including hyperspectral data recovery, video inpainting, seismic data reconstruction, and so on. We propose a new model to recover a low-rank tensor by simultaneously performing low-rank matrix factorizations to the all-mode matricizations of the underlying tensor. An alternating minimization algorithm is applied to solve the model, along with two adaptive rank-adjusting strategies when the exact rank is not known. Phase transition plots reveal that our algorithm can recover a variety of synthetic low-rank tensors from significantly fewer samples than the compared methods, which include a matrix completion method applied to tensor recovery and two state-of-the-art tensor completion methods. Further tests on real-world data show similar advantages. Although our model is non-convex, our algorithm performs consistently throughout the tests and gives better results than the compared methods, some of which are based on convex models. In addition, subsequence convergence of our algorithm can be established in the sense that any limit point of the iterates satisfies the KKT conditions.
There are some other models proposed recently for LRTC. For example, the one in @cite_25 uses, as a regularization term, a tight convex relaxation of the average rank function @math and applies the ADMM method to solve the problem. The work @cite_12 directly constrains the solution in some low-rank manifold and employs the Riemannian optimization to solve the problem. Different from the above discussed models that use tensor @math -rank, the model in @cite_14 employs the so-called based on the recently proposed tensor singular value decomposition (t-SVD) @cite_2 . For details about these models, we refer the readers to the papers where they are proposed.
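For readers unfamiliar with the t-SVD mentioned above, the following is a minimal sketch of its construction (slice-wise SVDs in the Fourier domain along the third dimension). This is our own illustrative code, not the completion algorithm of @cite_14 , and for simplicity the factors are kept in the Fourier domain.

```python
# Hedged sketch of the t-SVD construction (Kilmer-Martin style): FFT along the third
# dimension, ordinary SVD of each frontal slice in the Fourier domain. Names are ours;
# this shows the decomposition, not the tensor-completion algorithm built on top of it.
import numpy as np

def tsvd(A):
    """Return (U, S, V) with each frontal slice of fft(A) factored as U_i S_i V_i^H."""
    n1, n2, n3 = A.shape
    Ah = np.fft.fft(A, axis=2)
    Uh = np.zeros((n1, n1, n3), dtype=complex)
    Sh = np.zeros((n1, n2, n3), dtype=complex)
    Vh = np.zeros((n2, n2, n3), dtype=complex)
    for i in range(n3):
        U, s, Vt = np.linalg.svd(Ah[:, :, i], full_matrices=True)
        Uh[:, :, i] = U
        np.fill_diagonal(Sh[:, :, i], s)          # f-diagonal middle factor
        Vh[:, :, i] = Vt.conj().T
    return Uh, Sh, Vh                             # factors kept in the Fourier domain

A = np.random.default_rng(3).standard_normal((4, 5, 6))
Uh, Sh, Vh = tsvd(A)
Ah = np.fft.fft(A, axis=2)
recon = np.einsum('ijk,jlk,mlk->imk', Uh, Sh, Vh.conj())
print("max reconstruction error (Fourier domain):", np.abs(recon - Ah).max())
```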
{ "cite_N": [ "@cite_14", "@cite_25", "@cite_12", "@cite_2" ], "mid": [ "1548467509", "2104050041", "2081962379", "1992426838" ], "abstract": [ "In this paper we propose novel methods for compression and recovery of multilinear data under limited sampling. We exploit the recently proposed tensor- Singular Value Decomposition (t-SVD)[1], which is a group theoretic framework for tensor decomposition. In contrast to popular existing tensor decomposition techniques such as higher-order SVD (HOSVD), t-SVD has optimality properties similar to the truncated SVD for matrices. Based on t-SVD, we first construct novel tensor-rank like measures to characterize informational and structural complexity of multilinear data. Following that we outline a complexity penalized algorithm for tensor completion from missing entries. As an application, 3-D and 4-D (color) video data compression and recovery are considered. We show that videos with linear camera motion can be represented more efficiently using t-SVD compared to traditional approaches based on vectorizing or flattening of the tensors. Application of the proposed tensor completion algorithm for video recovery from missing entries is shown to yield a superior performance over existing methods. In conclusion we point out several research directions and implications to online prediction of multilinear data.", "We study the problem of learning a tensor from a set of linear measurements. A prominent methodology for this problem is based on a generalization of trace norm regularization, which has been used extensively for learning low rank matrices, to the tensor setting. In this paper, we highlight some limitations of this approach and propose an alternative convex relaxation on the Euclidean ball. We then describe a technique to solve the associated regularization problem, which builds upon the alternating direction method of multipliers. Experiments on one synthetic dataset and two real datasets indicate that the proposed method improves significantly over tensor trace norm regularization in terms of estimation error, while remaining computationally tractable.", "In tensor completion, the goal is to fill in missing entries of a partially known tensor under a low-rank constraint. We propose a new algorithm that performs Riemannian optimization techniques on the manifold of tensors of fixed multilinear rank. More specifically, a variant of the nonlinear conjugate gradient method is developed. Paying particular attention to efficient implementation, our algorithm scales linearly in the size of the tensor. Examples with synthetic data demonstrate good recovery even if the vast majority of the entries are unknown. We illustrate the use of the developed algorithm for the recovery of multidimensional images and for the approximation of multivariate functions.", "Recent work by Kilmer and Martin [Linear Algebra Appl., 435 (2011), pp. 641--658] and Braman [Linear Algebra Appl., 433 (2010), pp. 1241--1253] provides a setting in which the familiar tools of linear algebra can be extended to better understand third-order tensors. 
Continuing along this vein, this paper investigates further implications including (1) a bilinear operator on the matrices which is nearly an inner product and which leads to definitions for length of matrices, angle between two matrices, and orthogonality of matrices, and (2) the use of t-linear combinations to characterize the range and kernel of a mapping defined by a third-order tensor and the t-product and the quantification of the dimensions of those sets. These theoretical results lead to the study of orthogonal projections as well as an effective Gram--Schmidt process for producing an orthogonal basis of matrices. The theoretical framework also leads us to consider the notion of tensor polynomials and their relation to tensor eigentupl..." ] }
1312.0925
2397498371
Alternating Minimization is a widely used and empirically successful framework for Matrix Completion and related low-rank optimization problems. We give a new algorithm based on Alternating Minimization that provably recovers an unknown low-rank matrix from a random subsample of its entries under a standard incoherence assumption while achieving a linear convergence rate. Compared to previous work, our results reduce the provable sample complexity requirements of the Alternating Minimization approach by at least a quartic factor in the rank and the condition number of the unknown matrix. These improvements apply when the matrix is exactly low-rank and when it is only close to low-rank in the Frobenius norm. Underlying our work is a new robust convergence analysis of the well-known Subspace Iteration algorithm for computing the dominant singular vectors of a matrix, also known as the Power Method. This viewpoint leads to a conceptually simple understanding of Alternating Minimization that we exploit. Additionally, we contribute a new technique for controlling the coherence of intermediate solutions arising in iterative algorithms. These techniques may be of interest beyond their application here.
There is a vast literature on the topic that we cannot completely survey here. Most closely related is the work of @cite_19 , which suggested the idea of thinking of alternating least squares as a noisy update step in the Power Method. Our approach takes inspiration from this work by analyzing least squares using the noisy power method. However, our analysis differs substantially in how both convergence and low coherence are argued. The approach of Keshavan @cite_1 uses a rather different argument.
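To make the "alternating least squares as a noisy power-method step" viewpoint concrete, here is a small illustrative sketch (not the paper's algorithm): with all entries observed, the least-squares update for one factor given an orthonormal other factor reduces to a multiplication by the data matrix followed by orthonormalization, and with missing entries it is that step plus a perturbation. All names and problem sizes are our own.

```python
# Illustrative sketch of the "alternating least squares as a noisy power method"
# viewpoint (not the paper's algorithm). With full observations the update below
# reduces to orthonormalizing M^T U, i.e., one subspace-iteration step; with a
# partial mask it is that step plus an error term. Names and sizes are ours.
import numpy as np

def als_update(M, mask, U):
    """Least-squares update of the right factor given U, followed by QR."""
    n, k = M.shape[1], U.shape[1]
    V = np.zeros((n, k))
    for j in range(n):
        rows = mask[:, j]                                   # observed entries of column j
        V[j], *_ = np.linalg.lstsq(U[rows], M[rows, j], rcond=None)
    Q, _ = np.linalg.qr(V)                                  # orthonormalize as in power iteration
    return Q

rng = np.random.default_rng(0)
m, n, k = 80, 60, 3
M = rng.standard_normal((m, k)) @ rng.standard_normal((k, n))   # exact rank-k ground truth
mask = rng.random((m, n)) < 0.4                                  # ~40% of entries observed

U, _ = np.linalg.qr(rng.standard_normal((m, k)))
for _ in range(10):                                              # alternate between the two factors
    V = als_update(M, mask, U)
    U = als_update(M.T, mask.T, V)

Utrue = np.linalg.svd(M, full_matrices=False)[0][:, :k]
print("overlap with true column space (max sqrt(k)):", np.linalg.norm(Utrue.T @ U))
```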
{ "cite_N": [ "@cite_19", "@cite_1" ], "mid": [ "1984840642", "2263105607" ], "abstract": [ "We provide a general framework for the understanding of inexact Krylov subspace methods for the solution of symmetric and nonsymmetric linear systems of equations, as well as for certain eigenvalue calculations. This framework allows us to explain the empirical results reported in a series of CERFACS technical reports by Bouras, Fraysse, and Giraud in 2000. Furthermore, assuming exact arithmetic, our analysis can be used to produce computable criteria to bound the inexactness of the matrix-vector multiplication in such a way as to maintain the convergence of the Krylov subspace method. The theory developed is applied to several problems including the solution of Schur complement systems, linear systems which depend on a parameter, and eigenvalue problems. Numerical experiments for some of these scientific applications are reported.", "Collaborative filtering is a novel statistical technique to obtain useful information or to make predictions based on data from multiple agents. A large number of such datasets are naturally represented in matrix form. Typically, there exists a matrix M from which we know a (typically sparse) subset of entries M_ij for (i, j) in some set E. The problem then is to predict approximate the unseen entries. This framework of matrix completion is extremely general and applications include personalized recommendation systems, sensor positioning, link prediction and so on. Low rank models have traditionally been used to learn useful information from such datasets. Low-dimensional representations simplify the description of the dataset and often yield predictive powers. As an added benefit, it is easier to store and retrieve low dimensional representations. Finally, many computationally intensive operations such as matrix multiplication and inversion are simplified with low dimensional representations. Singular Value Decomposition (SVD) has traditionally been used to find the lowdimensional representation of a fully revealed matrix. There are numerous algorithms for computing the SVD of a matrix including several parallel implementations and implementations for sparse matrices. However, when the matrix is only partially observed, we show that SVD techniques are sub-optimal. In this work, we will develop algorithms to learn a low rank model from a partially revealed matrix. These algorithms are computationally efficient and highly parallelizable. We will show that the proposed algorithms achieve a performance close to the fundamental limit in a number of scenarios. Finally, the algorithms achieve significantly better performance than the state-of-the-art algorithms on many real collaborative filtering datasets." ] }
1312.0925
2397498371
Alternating Minimization is a widely used and empirically successful framework for Matrix Completion and related low-rank optimization problems. We give a new algorithm based on Alternating Minimization that provably recovers an unknown low-rank matrix from a random subsample of its entries under a standard incoherence assumption while achieving a linear convergence rate. Compared to previous work, our results reduce the provable sample complexity requirements of the Alternating Minimization approach by at least a quartic factor in the rank and the condition number of the unknown matrix. These improvements apply when the matrix is exactly low-rank and when it is only close to low-rank in the Frobenius norm. Underlying our work is a new robust convergence analysis of the well-known Subspace Iteration algorithm for computing the dominant singular vectors of a matrix, also known as the Power Method. This viewpoint leads to a conceptually simple understanding of Alternating Minimization that we exploit. Additionally, we contribute a new technique for controlling the coherence of intermediate solutions arising in iterative algorithms. These techniques may be of interest beyond their application here.
As an alternative to the nuclear norm approach, Keshavan, Montanari and Oh @cite_3 @cite_27 present two approaches: a spectral approach and an algorithm called OptSpace. The spectral approach roughly corresponds to our initialization procedure and gives similar guarantees. OptSpace requires a stronger incoherence assumption, has larger sample complexity in terms of the condition number, namely @math , and requires optimizing over the Grassmannian manifold. However, the requirement on @math achieved by OptSpace can be weaker than ours in the noisy setting. In the exact case, our algorithm has a much faster convergence rate (logarithmic dependence on @math rather than polynomial).
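A minimal sketch of the kind of spectral initialization referred to above: zero-fill the unobserved entries, rescale by the inverse sampling rate, and take a rank-k truncated SVD. The trimming of over-represented rows and columns used by OptSpace is omitted, and the names are illustrative.

```python
# Hedged sketch of a spectral initialization for matrix completion: zero-fill the
# unobserved entries, rescale by the inverse sampling rate, and take a rank-k SVD.
# The row/column trimming step used by OptSpace is omitted; names are illustrative.
import numpy as np

def spectral_init(M_obs, mask, k):
    p = mask.mean()                          # empirical sampling rate
    M_fill = np.where(mask, M_obs, 0.0) / p  # unbiased estimate of the full matrix
    U, s, Vt = np.linalg.svd(M_fill, full_matrices=False)
    return U[:, :k], s[:k], Vt[:k]           # rank-k spectral estimate

rng = np.random.default_rng(1)
M = rng.standard_normal((100, 3)) @ rng.standard_normal((3, 80))
mask = rng.random(M.shape) < 0.3
U0, s0, V0t = spectral_init(M, mask, 3)
print("relative error of rank-3 spectral estimate:",
      np.linalg.norm(U0 * s0 @ V0t - M) / np.linalg.norm(M))
```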
{ "cite_N": [ "@cite_27", "@cite_3" ], "mid": [ "2616032753", "2144730813" ], "abstract": [ "Given a matrix M of low-rank, we consider the problem of reconstructing it from noisy observations of a small, random subset of its entries. The problem arises in a variety of applications, from collaborative filtering (the 'Netflix problem') to structure-from-motion and positioning. We study a low complexity algorithm introduced by Keshavan, Montanari, and Oh (2010), based on a combination of spectral techniques and manifold optimization, that we call here OPTSPACE. We prove performance guarantees that are order-optimal in a number of circumstances.", "Let M be an n? × n matrix of rank r, and assume that a uniformly random subset E of its entries is observed. We describe an efficient algorithm, which we call OptSpace, that reconstructs M from |E| = O(rn) observed entries with relative root mean square error 1 2 RMSE ? C(?) (nr |E|)1 2 with probability larger than 1 - 1 n3. Further, if r = O(1) and M is sufficiently unstructured, then OptSpace reconstructs it exactly from |E| = O(n log n) entries with probability larger than 1 - 1 n3. This settles (in the case of bounded rank) a question left open by Candes and Recht and improves over the guarantees for their reconstruction algorithm. The complexity of our algorithm is O(|E|r log n), which opens the way to its use for massive data sets. In the process of proving these statements, we obtain a generalization of a celebrated result by Friedman-Kahn-Szemeredi and Feige-Ofek on the spectrum of sparse random matrices." ] }
1312.0925
2397498371
Alternating Minimization is a widely used and empirically successful framework for Matrix Completion and related low-rank optimization problems. We give a new algorithm based on Alternating Minimization that provably recovers an unknown low-rank matrix from a random subsample of its entries under a standard incoherence assumption while achieving a linear convergence rate. Compared to previous work, our results reduce the provable sample complexity requirements of the Alternating Minimization approach by at least a quartic factor in the rank and the condition number of the unknown matrix. These improvements apply when the matrix is exactly low-rank and when it is only close to low-rank in the Frobenius norm. Underlying our work is a new robust convergence analysis of the well-known Subspace Iteration algorithm for computing the dominant singular vectors of a matrix, also known as the Power Method. This viewpoint leads to a conceptually simple understanding of Alternating Minimization that we exploit. Additionally, we contribute a new technique for controlling the coherence of intermediate solutions arising in iterative algorithms. These techniques may be of interest beyond their application here.
Our work is also closely related to a line of work on differentially private singular vector computation @cite_24 @cite_9 @cite_8 . These papers each consider algorithms based on the power method, where noise is injected to achieve the privacy guarantee known as Differential Privacy @cite_29 . Hardt and Roth @cite_24 @cite_9 @cite_8 observed that incoherence could be used to obtain improved guarantees. This requires controlling the coherence of the iterates produced by the noisy power method, which leads to problems similar to the ones faced here. What is simpler in the privacy setting is that the noise term is typically Gaussian, leading to a cleaner analysis. Our work uses a convergence analysis for noisy subspace iteration similar to the one used in concurrent work by the author @cite_9 .
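The noisy power method (noisy subspace iteration) discussed above admits a very short sketch: multiply by the matrix, add noise, and re-orthonormalize. The version below uses Gaussian noise of a fixed, uncalibrated scale purely for illustration; the calibration needed for differential privacy is omitted, and all names and sizes are our own.

```python
# Minimal sketch of noisy subspace iteration (the "noisy power method"): after each
# multiplication by A the iterate is perturbed by Gaussian noise and re-orthonormalized.
# The noise scale here is arbitrary; differential-privacy calibration is omitted.
import numpy as np

def noisy_power_method(A, k, iters, sigma, rng):
    X, _ = np.linalg.qr(rng.standard_normal((A.shape[1], k)))       # random orthonormal start
    for _ in range(iters):
        Y = A @ X + sigma * rng.standard_normal((A.shape[0], k))    # noisy matrix product
        X, _ = np.linalg.qr(Y)                                       # re-orthonormalize
    return X

rng = np.random.default_rng(2)
G = rng.standard_normal((200, 5))
A = G @ G.T                                    # symmetric rank-5 test matrix
X = noisy_power_method(A, k=5, iters=30, sigma=0.1, rng=rng)
top = np.linalg.svd(A)[0][:, :5]
print("alignment with the true top-5 subspace (max sqrt(5)):", np.linalg.norm(top.T @ X))
```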
{ "cite_N": [ "@cite_24", "@cite_9", "@cite_29", "@cite_8" ], "mid": [ "1982147025", "2043969662", "2517104773", "1988351624" ], "abstract": [ "Computing accurate low rank approximations of large matrices is a fundamental data mining task. In many applications however the matrix contains sensitive information about individuals. In such case we would like to release a low rank approximation that satisfies a strong privacy guarantee such as differential privacy. Unfortunately, to date the best known algorithm for this task that satisfies differential privacy is based on naive input perturbation or randomized response: Each entry of the matrix is perturbed independently by a sufficiently large random noise variable, a low rank approximation is then computed on the resulting matrix. We give (the first) significant improvements in accuracy over randomized response under the natural and necessary assumption that the matrix has low coherence. Our algorithm is also very efficient and finds a constant rank approximation of an m x n matrix in time O(mn). Note that even generating the noise matrix required for randomized response already requires time O(mn).", "We consider differentially private approximate singular vector computation. Known worst-case lower bounds show that the error of any differentially private algorithm must scale polynomially with the dimension of the singular vector. We are able to replace this dependence on the dimension by a natural parameter known as the coherence of the matrix that is often observed to be significantly smaller than the dimension both theoretically and empirically. We also prove a matching lower bound showing that our guarantee is nearly optimal for every setting of the coherence parameter. Notably, we achieve our bounds by giving a robust analysis of the well-known power iteration algorithm, which may be of independent interest. Our algorithm also leads to improvements in worst-case settings and to better low-rank approximations in the spectral norm.", "We continue a line of research initiated in [10, 11] on privacy-preserving statistical databases. Consider a trusted server that holds a database of sensitive information. Given a query function f mapping databases to reals, the so-called true answer is the result of applying f to the database. To protect privacy, the true answer is perturbed by the addition of random noise generated according to a carefully chosen distribution, and this response, the true answer plus noise, is returned to the user. Previous work focused on the case of noisy sums, in which f = Σ i g(x i ), where x i denotes the ith row of the database and g maps database rows to [0,1]. We extend the study to general functions f, proving that privacy can be preserved by calibrating the standard deviation of the noise according to the sensitivity of the function f. Roughly speaking, this is the amount that any single argument to f can change its output. The new analysis shows that for several particular applications substantially less noise is needed than was previously understood to be the case. The first step is a very clean characterization of privacy in terms of indistinguishability of transcripts. Additionally, we obtain separation results showing the increased value of interactive sanitization mechanisms over non-interactive.", "We discuss a new robust convergence analysis of the well-known subspace iteration algorithm for computing the dominant singular vectors of a matrix, also known as simultaneous iteration or power method. 
The result characterizes the convergence behavior of the algorithm when a large amount noise is introduced after each matrix-vector multiplication. While interesting in its own right, the main motivation comes from the problem of privacy-preserving spectral analysis where noise is added in order to achieve the privacy guarantee known as differential privacy. This result leads to nearly tight worst-case bounds for the problem of computing a differentially private low-rank approximation in the spectral norm. Our results extend to privacy-preserving principal component analysis. We obtain improvements for several variants of differential privacy that have been considered. The running time of our algorithm is nearly linear in the input sparsity leading to strong improvements in running time over previous work. Complementing our worst-case bounds, we show that the error dependence of our algorithm on the matrix dimension can be replaced by a tight dependence on the coherence of the matrix. This parameter is always bounded by the matrix dimension but often much smaller. Indeed, the assumption of low coherence is essential in several machine learning and signal processing applications." ] }
1312.0686
1514864735
Using formal tools in computer science to describe games is an interesting problem. We give games, specifically two-person games, an axiomatic foundation based on the process algebra ACP (Algebra of Communicating Processes). A fresh operator called the opponent's alternative composition operator (OA) is introduced into ACP to describe game trees and game strategies, yielding GameACP, and its sound and complete axiomatic system is naturally established. To model the outcomes of games (the co-action of the player and the opponent), which correspond in GameACP to the execution of GameACP processes, another operator called the playing operator (PO) is added to GameACP. We also establish a sound and complete axiomatic system for PO. Finally, we give a correctness theorem relating the outcomes of games to the deductions of GameACP processes.
As mentioned above, the combination of computation tools and game semantics includes two aspects: one is introducing games, or the idea of games, into these computation languages or tools to give them a new viewpoint, and the other is using these computation tools to interpret games. The first aspect has seen plenty of work and achieved great success, but the second has only a few works @cite_0 @cite_20 as far as we know. We introduce these two existing works in the following.
{ "cite_N": [ "@cite_0", "@cite_20" ], "mid": [ "2498434591", "2013667259" ], "abstract": [ "A workflow is a collection of cooperating, coordinated activities designed to carry out a well-defined complex process, such as trip planning, graduate student registration procedure, or a business process in a large enterprise. Executing a workflow, thus, involves coordinated execution of multiple long-running steps in an environment of distributed, heterogeneous processing entities. Workflow management systems provide a framework for capturing the interaction among the activities in a workflow and are recognized as a new paradigm for integrating disparate systems, including legacy systems. However, in order to realize its full potential a number of limitations of the current workflow models, including lack of a clear theoretical basis has to be addressed. Workflows must be specified declaratively, verified formally, and scheduled automatically. In this dissertation first we show that Concurrent Transaction Logic (abbr. CTR) is a natural logical formalism for representing workflow control graphs, temporal and causality constraints that workflow executions must obey, for reasoning about the consistency of workflow specifications, and for scheduling workflows in the presence of those constraints. Next, we develop Game-CTR, a natural extension of CTR designed for modeling and reasoning about run-time properties of workflows that are composed of non-cooperating services—such as Web services. We develop a model and proof theory for Game-CTR and show how it can be used to specify executions under a fairly large class of temporal and causality constraints. We then develop a game solver algorithm that converts such specifications (which are formulas in Game-CTR) into other, equivalent Game-CTR formulas, a coordinator, that can be executed more efficiently and without backtracking.", "We develop a game semantics for process algebra with two interacting agents. The purpose of our semantics is to make manifest the role of knowledge and information flow in the interactions between agents and to control the information available to interacting agents. We define games and strategies on process algebras, so that two agents interacting according to their strategies determine the execution of the process, replacing the traditional scheduler. We show that different restrictions on strategies represent different amounts of information being available to a scheduler. We also show that a certain class of strategies corresponds to the syntactic schedulers of Chatzikokolakis and Palamidessi, which were developed to overcome problems with traditional schedulers modelling interaction. The restrictions on these strategies have an explicit epistemic flavour." ] }
1312.0686
1514864735
Using formal tools in computer science to describe games is an interesting problem. We give games, specifically two-person games, an axiomatic foundation based on the process algebra ACP (Algebra of Communicating Processes). A fresh operator called the opponent's alternative composition operator (OA) is introduced into ACP to describe game trees and game strategies, yielding GameACP, and its sound and complete axiomatic system is naturally established. To model the outcomes of games (the co-action of the player and the opponent), which correspond in GameACP to the execution of GameACP processes, another operator called the playing operator (PO) is added to GameACP. We also establish a sound and complete axiomatic system for PO. Finally, we give a correctness theorem relating the outcomes of games to the deductions of GameACP processes.
Game semantics has achieved great success in modeling computations, such as the initial success in modeling the functional programming language PCF (Programming Computable Functions) @cite_12 @cite_17 @cite_4 , multiplicative linear logic @cite_13 , idealized Algol @cite_14 , general references @cite_3 , etc. To model concurrency in computer science with game semantics, a new kind of game semantics called asynchronous games @cite_15 @cite_27 @cite_9 @cite_25 @cite_11 was established, and a bridge between asynchronous games and traditional game semantics was founded. Moreover, asynchronous games perfectly model propositional linear logic and obtain a full completeness result. Another kind of game semantics for describing concurrency is concurrent games @cite_1 @cite_23 , and work bridging asynchronous games and concurrent games is introduced in @cite_18 .
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_4", "@cite_11", "@cite_9", "@cite_1", "@cite_3", "@cite_27", "@cite_23", "@cite_15", "@cite_13", "@cite_25", "@cite_12", "@cite_17" ], "mid": [ "2146242204", "1977216954", "", "", "1995251574", "2113988682", "1877442346", "", "2107067818", "", "2094685149", "", "1923324835", "1847465957" ], "abstract": [ "The notion of innocent strategy was introduced by Hyland and Ong in order to capture the interactive behaviour of ?-terms and PCF programs. An innocent strategy is defined as an alternating strategy with partial memory, in which the strategy plays according to its view. Extending the definition to nonalternating strategies is problematic, because the traditional definition of views is based on the hypothesis that Opponent and Proponent alternate during the interaction. Here, we take advantage of the diagrammatic reformulation of alternating innocence in asynchronous games, in order to provide a tentative definition of innocence in non-alternating games. The task is interesting, and far from easy. It requires the combination of true concurrency and game semantics in a clean and organic way, clarifying the relationship between asynchronous games and concurrent games in the sense of Abramsky and Mellies. It also requires an interactive reformulation of the usual acyclicity criterion of linear logic, as well as a directed variant, as a scheduling criterion.", "The manipulation of objects with state which changes over time is all-pervasive in computing. Perhaps the simplest example of such objects are the program variables of classical imperative languages. An important strand of work within the study of such languages, pioneered by John Reynolds, focusses on Idealized Algol, an elegant synthesis of imperative and functional features.", "", "", "In game semantics, the higher-order value passing mechanisms of the λ-calculus are decomposed as sequences of atomic actions exchanged by a Player and its Opponent. Seen from this angle, game semantics is reminiscent of trace semantics in concurrency theory, where a process is identified to the sequences of requests it generates in the course of time. Asynchronous game semantics is an attempt to bridge the gap between the two subjects, and to see mainstream game semantics as a refined and interactive form of trace semantics. Asynchronous games are positional games played on Mazurkiewicz traces, which reformulate (and generalize) the familiar notion of arena game. The interleaving semantics of λ-terms, expressed as innocent strategies, may be analysed in this framework, in the perspective of true concurrency. The analysis reveals that innocent strategies are positional strategies regulated by forward and backward confluence properties. This captures, we believe, the essence of innocence. We conclude the article by defining a non-uniform variant of the λ-calculus, in which the game semantics of a λ-term is formulated directly as a trace semantics, performing the syntactic exploration or parsing of that λ-term.", "Connections between the sequentialitysconcurrency distinction and the semantics of proofs are investigated, with particular reference to games and Linear Logic.", "A games model of a programming language with higher-order store in the style of ML-references is introduced. The category used for the model is obtained by relaxing certain behavioural conditions on a category of games previously used to provide fully abstract models of pure functional languages. 
The model is shown to be fully abstract by means of factorization arguments which reduce the question of definability for the language with higher-order store to that for its purely functional fragment.", "", "A new concurrent form of game semantics is introduced. This overcomes the problems which had arisen with previous, sequential forms of game semantics in modelling Linear Logic. It also admits an elegant and robust formalization. A Full Completeness Theorem for Multiplicative-Additive Linear Logic is proved for this semantics.", "", "We present a game semantics for Linear Logic, in which formulas denote games and proofs denote winning strategies. We show that our semantics yields a categorical model of Linear Logic and prove full completeness for Multiplicative Linear Logic with the MIX rule: every winning strategy is the denotation of a unique cut-free proof net. A key role is played by the notion of history-free strategy: strong connections are made between history-free strategies and the Geometry of Interaction. Our semantics incorporates a natural notion of polarity, leading to a refined treatment of the additives. We make comparisons with related work by Joyal, Blass, et al", "", "", "In order to define models of simply typed functional programming languages being closer to the operational semantics of these languages, the notions of sequentiality, stability and seriality were introduced. These works originated from the definability problem for PCF, posed in [Sco72], and the full abstraction problem for PCF, raised in [Plo77]." ] }
1312.0686
1514864735
Using formal tools in computer science to describe games is an interesting problem. We give games, specifically two-person games, an axiomatic foundation based on the process algebra ACP (Algebra of Communicating Processes). A fresh operator called the opponent's alternative composition operator (OA) is introduced into ACP to describe game trees and game strategies, yielding GameACP, and its sound and complete axiomatic system is naturally established. To model the outcomes of games (the co-action of the player and the opponent), which correspond in GameACP to the execution of GameACP processes, another operator called the playing operator (PO) is added to GameACP. We also establish a sound and complete axiomatic system for PO. Finally, we give a correctness theorem relating the outcomes of games to the deductions of GameACP processes.
Algorithmic game semantics @cite_10 is the prerequisite for implementing game semantics in automatic reasoning machines based on a specific game semantics model. Game semantics can also be used to establish so-called interaction semantics @cite_21 among autonomous agents, and to model and verify compositional software @cite_16 @cite_19 .
{ "cite_N": [ "@cite_19", "@cite_21", "@cite_10", "@cite_16" ], "mid": [ "2051883613", "1860761177", "2149188005", "1901568548" ], "abstract": [ "We present an approach to software model checking based on game semantics and the CSP process algebra. Open program fragments (i.e. terms-in-context) are compositionally modelled as CSP processes which represent their game semantics. This translation is performed by a prototype compiler. Observational equivalence and regular properties are checked by traces refinement using the FDR tool. We also present theorems for parameterised verification of polymorphic terms and properties. The effectiveness of the approach is evaluated on several examples.", "The “classical” paradigm for denotational semantics models data types as domains, i.e. structured sets of some kind, and programs as (suitable) functions between domains. The semantic universe in which the denotational modelling is carried out is thus a category with domains as objects, functions as morphisms, and composition of morphisms given by function composition. A sharp distinction is then drawn between denotational and operational semantics. Denotational semantics is often referred to as “mathematical semantics” because it exhibits a high degree of mathematical structure; this is in part achieved by the fact that denotational semantics abstracts away from the dynamics of computation—from time. By contrast, operational semantics is formulated in terms of the syntax of the language being modelled; it is highly intensional in character; and it is capable of expressing the dynamical aspects of computation. The classical denotational paradigm has been very successful, but has some definite limitations. Firstly, fine-structural features of computation, such as sequentiality, computational complexity, and optimality of reduction strategies, have either not been captured at all denotationally, or not in a fully satisfactory fashion. Moreover, once languages with features beyond the purely functional are considered, the appropriateness of modelling programs by functions is increasingly open to question. Neither concurrency nor “advanced” imperative features such as local references have been captured denotationally in a fully convincing fashion. This analysis suggests a desideratum of Intensional Semantics, interpolating between denotational and operational semantics as traditionally conceived. This should combine the good mathematical structural properties of denotational semantics with the ability to capture dynamical aspects and to embody computational intuitions of operational semantics. Thus we may think of Intensional semantics as “Denotational semantics + time (dynamics)”, or as “Syntax-free operational semantics”. A number of recent developments (and, with hindsight, some older ones) can be seen as contributing to this goal of Intensional Semantics. We will focus on the recent work on Game semantics, which has led to some striking advances in the Full Abstraction problem for PCF and other programming languages ( 1995) (Abramsky and McCusker 1995) (Hyland and Ong 1995) (McCusker 1996a) (Ong 1996). Our aim is to give a genuinely elementary first introduction; we therefore present a simplified version of game semantics, which nonetheless", "", "We describe a software model checking tool founded on game semantics, highlight the underpinning theoretical results and discuss several case studies. 
The tool is based on an interpretation algorithm defined compositionally on syntax and thus can also handle open programs. Moreover, the models it produces are equationally fully abstract. These features are essential in the modeling and verification of software components such as modules and turn out to lead to very compact models of programs." ] }
1312.0686
1514864735
Using formal tools in computer science to describe games is an interesting problem. We give games, specifically two-person games, an axiomatic foundation based on the process algebra ACP (Algebra of Communicating Processes). A fresh operator called the opponent's alternative composition operator (OA) is introduced into ACP to describe game trees and game strategies, yielding GameACP, and its sound and complete axiomatic system is naturally established. To model the outcomes of games (the co-action of the player and the opponent), which correspond in GameACP to the execution of GameACP processes, another operator called the playing operator (PO) is added to GameACP. We also establish a sound and complete axiomatic system for PO. Finally, we give a correctness theorem relating the outcomes of games to the deductions of GameACP processes.
Game-CTR @cite_0 introduces games into CTR (Concurrent Transaction Logic) to model and reason about runtime properties of workflows that are composed of non-cooperative services -- such as Web Services. Game-CTR includes a model and proof theory which can be used to specify executions under some temporal and causality constraints, and also a game solver algorithm that converts such constraints into other, equivalent Game-CTR formulas that can be executed more efficiently. The authors of @cite_20 develop a game semantics for a certain kind of process calculus with two interacting agents. Games and strategies on this process calculus are defined, and the strategies of the two agents determine the execution of the process. Also, a certain class of strategies corresponds to the so-called syntactic schedulers of Chatzikokolakis and Palamidessi. In these works, the games used are not dialogue games, and there are no interactions such as questions and answers, and also no concept of winning.
{ "cite_N": [ "@cite_0", "@cite_20" ], "mid": [ "2498434591", "2013667259" ], "abstract": [ "A workflow is a collection of cooperating, coordinated activities designed to carry out a well-defined complex process, such as trip planning, graduate student registration procedure, or a business process in a large enterprise. Executing a workflow, thus, involves coordinated execution of multiple long-running steps in an environment of distributed, heterogeneous processing entities. Workflow management systems provide a framework for capturing the interaction among the activities in a workflow and are recognized as a new paradigm for integrating disparate systems, including legacy systems. However, in order to realize its full potential a number of limitations of the current workflow models, including lack of a clear theoretical basis has to be addressed. Workflows must be specified declaratively, verified formally, and scheduled automatically. In this dissertation first we show that Concurrent Transaction Logic (abbr. CTR) is a natural logical formalism for representing workflow control graphs, temporal and causality constraints that workflow executions must obey, for reasoning about the consistency of workflow specifications, and for scheduling workflows in the presence of those constraints. Next, we develop Game-CTR, a natural extension of CTR designed for modeling and reasoning about run-time properties of workflows that are composed of non-cooperating services—such as Web services. We develop a model and proof theory for Game-CTR and show how it can be used to specify executions under a fairly large class of temporal and causality constraints. We then develop a game solver algorithm that converts such specifications (which are formulas in Game-CTR) into other, equivalent Game-CTR formulas, a coordinator, that can be executed more efficiently and without backtracking.", "We develop a game semantics for process algebra with two interacting agents. The purpose of our semantics is to make manifest the role of knowledge and information flow in the interactions between agents and to control the information available to interacting agents. We define games and strategies on process algebras, so that two agents interacting according to their strategies determine the execution of the process, replacing the traditional scheduler. We show that different restrictions on strategies represent different amounts of information being available to a scheduler. We also show that a certain class of strategies corresponds to the syntactic schedulers of Chatzikokolakis and Palamidessi, which were developed to overcome problems with traditional schedulers modelling interaction. The restrictions on these strategies have an explicit epistemic flavour." ] }
1312.0686
1514864735
Using formal tools in computer science to describe games is an interesting problem. We give games, specifically two-person games, an axiomatic foundation based on the process algebra ACP (Algebra of Communicating Processes). A fresh operator called the opponent's alternative composition operator (OA) is introduced into ACP to describe game trees and game strategies, yielding GameACP, and its sound and complete axiomatic system is naturally established. To model the outcomes of games (the co-action of the player and the opponent), which correspond in GameACP to the execution of GameACP processes, another operator called the playing operator (PO) is added to GameACP. We also establish a sound and complete axiomatic system for PO. Finally, we give a correctness theorem relating the outcomes of games to the deductions of GameACP processes.
In the same spirit as Game-CTR @cite_0 and Chatzikokolakis's work @cite_20 , we introduce games into ACP; that is, we use ACP to give games an interpretation. Unlike @cite_0 and @cite_20 , however, our work GameACP attempts an axiomatization of games through an extension of the process algebra ACP. It has the following characteristics:
{ "cite_N": [ "@cite_0", "@cite_20" ], "mid": [ "2498434591", "2013667259" ], "abstract": [ "A workflow is a collection of cooperating, coordinated activities designed to carry out a well-defined complex process, such as trip planning, graduate student registration procedure, or a business process in a large enterprise. Executing a workflow, thus, involves coordinated execution of multiple long-running steps in an environment of distributed, heterogeneous processing entities. Workflow management systems provide a framework for capturing the interaction among the activities in a workflow and are recognized as a new paradigm for integrating disparate systems, including legacy systems. However, in order to realize its full potential a number of limitations of the current workflow models, including lack of a clear theoretical basis has to be addressed. Workflows must be specified declaratively, verified formally, and scheduled automatically. In this dissertation first we show that Concurrent Transaction Logic (abbr. CTR) is a natural logical formalism for representing workflow control graphs, temporal and causality constraints that workflow executions must obey, for reasoning about the consistency of workflow specifications, and for scheduling workflows in the presence of those constraints. Next, we develop Game-CTR, a natural extension of CTR designed for modeling and reasoning about run-time properties of workflows that are composed of non-cooperating services—such as Web services. We develop a model and proof theory for Game-CTR and show how it can be used to specify executions under a fairly large class of temporal and causality constraints. We then develop a game solver algorithm that converts such specifications (which are formulas in Game-CTR) into other, equivalent Game-CTR formulas, a coordinator, that can be executed more efficiently and without backtracking.", "We develop a game semantics for process algebra with two interacting agents. The purpose of our semantics is to make manifest the role of knowledge and information flow in the interactions between agents and to control the information available to interacting agents. We define games and strategies on process algebras, so that two agents interacting according to their strategies determine the execution of the process, replacing the traditional scheduler. We show that different restrictions on strategies represent different amounts of information being available to a scheduler. We also show that a certain class of strategies corresponds to the syntactic schedulers of Chatzikokolakis and Palamidessi, which were developed to overcome problems with traditional schedulers modelling interaction. The restrictions on these strategies have an explicit epistemic flavour." ] }
1312.0686
1514864735
Using formal tools from computer science to describe games is an interesting problem. We give games, specifically two-person games, an axiomatic foundation based on the process algebra ACP (Algebra of Communicating Processes). A new operator called the opponent's alternative composition operator (OA) is introduced into ACP to describe game trees and game strategies; the resulting calculus is called GameACP, and a sound and complete axiomatic system for it is established. To model the outcomes of games (the co-action of the player and the opponent), which corresponds in GameACP to the execution of GameACP processes, another operator called the playing operator (PO) is added to GameACP. We also establish a sound and complete axiomatic system for PO. Finally, we give a correctness theorem relating the outcomes of games to the deductions of GameACP processes.
We introduce external choice into the process algebra ACP with a game-theoretic flavor. As a result of the axiomatization, GameACP has not only an equational logic but also a bisimulation semantics. The conclusions of GameACP hold without any assumption or restriction, such as the epistemic restrictions on strategies in @cite_20 . Although the discussion of GameACP is aimed at two-person games, GameACP can naturally be used for multi-person games. GameACP provides a new viewpoint for modeling interactions between one autonomous agent and other autonomous agents, and can be used to reason about the behaviors of parallel and distributed systems with support from game theory.
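As a rough operational reading of the player's and the opponent's alternatives (a toy sketch only, not GameACP's formal semantics; all class and function names below are invented for illustration), the following Python fragment evaluates a small game tree in which the player's choice maximises the outcome while the opponent's alternative composition minimises it:

```python
# Toy game tree: PlayerChoice lets the player pick a branch, OpponentChoice
# (a stand-in for the OA operator) lets the opponent pick one. The value of a
# game is the best outcome the player can guarantee against any opponent play.
from dataclasses import dataclass
from typing import Union

@dataclass
class Leaf:
    outcome: float                 # payoff at a terminal process

@dataclass
class PlayerChoice:
    left: "Node"
    right: "Node"

@dataclass
class OpponentChoice:              # analogue of opponent's alternative composition
    left: "Node"
    right: "Node"

Node = Union[Leaf, PlayerChoice, OpponentChoice]

def value(node: Node) -> float:
    if isinstance(node, Leaf):
        return node.outcome
    if isinstance(node, PlayerChoice):
        return max(value(node.left), value(node.right))
    return min(value(node.left), value(node.right))   # the opponent minimises

# One player move followed by an opponent's alternative composition.
game = PlayerChoice(OpponentChoice(Leaf(1.0), Leaf(3.0)),
                    OpponentChoice(Leaf(2.0), Leaf(5.0)))
print(value(game))                 # -> 2.0: the player commits to the right branch
```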
{ "cite_N": [ "@cite_20" ], "mid": [ "2013667259" ], "abstract": [ "We develop a game semantics for process algebra with two interacting agents. The purpose of our semantics is to make manifest the role of knowledge and information flow in the interactions between agents and to control the information available to interacting agents. We define games and strategies on process algebras, so that two agents interacting according to their strategies determine the execution of the process, replacing the traditional scheduler. We show that different restrictions on strategies represent different amounts of information being available to a scheduler. We also show that a certain class of strategies corresponds to the syntactic schedulers of Chatzikokolakis and Palamidessi, which were developed to overcome problems with traditional schedulers modelling interaction. The restrictions on these strategies have an explicit epistemic flavour." ] }
1312.0461
2137017080
Many business Web-based applications do not offer APIs to enable other applications to access their data and functions in a programmatic manner. This makes their composition difficult, for instance to synchronize data between two applications. To address this challenge, this paper presents Abmash, an approach to facilitate the integration of such legacy Web applications by automatically imitating human interactions with them. By automatically interacting with the GUI of Web applications, the system supports all forms of integrations including bidirectional interactions and is able to interact with AJAX-based applications. Furthermore, the integration programs are easy to write because they deal with end-user, visual UI elements. The integration code is simple enough to be called a 'mash-up'. Copyright © 2013 John Wiley & Sons, Ltd.
Much of the work on software maintenance focuses on migrating legacy applications to the web @cite_19 , and only a few papers discuss their integration. For instance, Vinoski @cite_14 showed that integrating legacy code with classical middleware requires invasive modification of the code. Sneed presented an approach @cite_24 @cite_39 to integrate legacy software into a service-oriented architecture: it consists of automatically creating XML output from PL/I, COBOL and C/C++ interfaces, which can be wrapped into a SOAP-based web service. Abmash neither modifies existing code nor ports applications to the web: it integrates legacy web applications. There is also some research on automated data migration (e.g. @cite_41 ). However, data migration is only one aspect of application integration; the scope of Abmash is larger.
{ "cite_N": [ "@cite_14", "@cite_41", "@cite_39", "@cite_24", "@cite_19" ], "mid": [ "2168331623", "2144607742", "2137560763", "2106796623", "2166379807" ], "abstract": [ "There's a difference between what we'd like our enterprise computing systems to be and what they really are. We like to envision them as orderly multitier arrangements comprising software buses, hubs, gateways, and adapters - all deployed at just the right places to maximize scale, load, application utility, and ultimately, business value. Unfortunately, we know that there's a wide gulf between this idealistic vision and reality. In practice, our enterprise computing systems typically are tangles of numerous technologies, protocols, and applications, often hastily hard-wired together with inflexible point-to-point connections. The whole point of middleware is to hide the diversity and complexity of the computing machinery underneath it. By adopting the abstractions that middleware provides, we're supposedly isolating our applications from the variety of ever-changing hardware platforms, operating systems, networks, protocols, and transports that make up our enterprise computing systems. We can use Web services to provide \"middleware for middleware\" abstraction layer for modern integration applications.", "A common task in many database applications is the migration of legacy data from multiple sources into a new one. This requires identifying semantically related elements of the source and target systems and the creation of mapping expressions to transform instances of those elements from the source format to the target format. Currently, data migration is typically done manually, a tedious and timeconsuming process, which is difficult to scale to a high number of data sources. In this paper, we describe QuickMig, a new semi-automatic approach to determining semantic correspondences between schema elements for data migration applications. QuickMig advances the state of the art with a set of new techniques exploiting sample instances, domain ontologies, and reuse of existing mappings to detect not only element correspondences but also their mapping expressions. QuickMig further includes new mechanisms to effectively incorporate domain knowledge of users into the matching process. The results from a comprehensive evaluation using real-world schemas and data indicate the high quality and practicability of the overall approach.", "Legacy programs, i. e. programs which have been developed with an outdated technology make-up for the vast majority of programs in many user application environments. It is these programs which actually run the information systems of the business world. Moving to a new technology such as service oriented architecture is impossible without taking these programs along. This contribution presents a tool supported method for achieving that goal. Legacy code is wrapped behind an XML shell which allows individual functions within the programs, to be offered as Web services to any external user. By means of this wrapping technology, a significant part of the company software assets can be preserved within the framework of a service oriented architecture.", "An important prerequisite to connecting existing systems to the Web is the ability to link client programs on the Web site with server programs on the host. The host programs have not been conceived to run in an internet mode. They are either online transactions or batch steps. 
This paper describes a tool supported process to cut out selected sections of legacy code and to provide them with an XML interface. The same interface is used to generate a Java class, which creates XML messages returning from the server. This class is then built in to the package managing the Web site. In this way a consistent communication between the Web site and the server components on the host is ensured.", "Migration of form based legacy systems towards service-oriented computing is a challenging task, requiring the adaptation of the legacy interface to the interaction paradigm of Web services. In this paper, a wrapping methodology is proposed to make interactive functionalities of legacy systems accessible as Web services. The wrapper that is used for interacting with the legacy system acts as an interpreter of a finite state automaton that describes the model of the interaction between user and legacy system. This model is obtained by black box reverse engineering techniques. A migration process and a software architecture that allow a functionality of a legacy system to be exported as a Web service are presented in the paper." ] }
1312.0461
2137017080
Many business Web-based applications do not offer APIs to enable other applications to access their data and functions in a programmatic manner. This makes their composition difficult, for instance to synchronize data between two applications. To address this challenge, this paper presents Abmash, an approach to facilitate the integration of such legacy Web applications by automatically imitating human interactions with them. By automatically interacting with the GUI of Web applications, the system supports all forms of integrations including bidirectional interactions and is able to interact with AJAX-based applications. Furthermore, the integration programs are easy to write because they deal with end-user, visual UI elements. The integration code is simple enough to be called a 'mash-up'. Copyright © 2013 John Wiley & Sons, Ltd.
According to the literature, the term "mashup" refers to programs that are related to the concepts of composition and end-user programming. For instance, Yahoo Pipes @cite_40 composes RSS feeds to create new ones; pipes are programmed through an intuitive graphical user interface. The ease of mashup development is often associated with end-user programming (e.g. @cite_15 ), meaning that persons with little or even no programming education are still able to create programs with an appropriate infrastructure.
{ "cite_N": [ "@cite_40", "@cite_15" ], "mid": [ "2108478751", "2001959118" ], "abstract": [ "Here's yet another way to mash up services for the convenience of your customers. Read about how to use Yahoo! Pipes and see what sorts of ideals bubble up in your mind.", "There is a tremendous amount of web content available today, but it is not always in a form that supports end-users' needs. In many cases, all of the data and services needed to accomplish a goal already exist, but are not in a form amenable to an end-user. To address this problem, we have developed an end-user programming tool called Marmite, which lets end-users create so-called mashups that re-purpose and combine existing web content and services. In this paper, we present the design, implementation, and evaluation of Marmite. An informal user study found that programmers and some spreadsheet users had little difficulty using the system." ] }
1312.0461
2137017080
Many business Web-based applications do not offer APIs to enable other applications to access their data and functions in a programmatic manner. This makes their composition difficult, for instance to synchronize data between two applications. To address this challenge, this paper presents Abmash, an approach to facilitate the integration of such legacy Web applications by automatically imitating human interactions with them. By automatically interacting with the GUI of Web applications, the system supports all forms of integrations including bidirectional interactions and is able to interact with AJAX-based applications. Furthermore, the integration programs are easy to write because they deal with end-user, visual UI elements. The integration code is simple enough to be called a 'mash-up'. Copyright © 2013 John Wiley & Sons, Ltd.
Apart from those three main types of mashups, the literature also refers to "enterprise mashups" (e.g. @cite_28 ) for mashups created in an enterprise environment with some associated business value, and to "mashup agents" @cite_9 for agents capable of semantically determining relevant information sources with respect to a specific concern. Finally, as with other programming activities, mashup development is also related to development environments @cite_7 @cite_31 and debugging @cite_4 .
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_28", "@cite_9", "@cite_31" ], "mid": [ "2061303061", "2592554495", "2109426259", "2056350868", "1520506906" ], "abstract": [ "In recent years, systems have emerged that enable end users to “mash” together existing web services to build new web sites. However, little is known about how well end users succeed at building such mashups, or what they do if they do not succeed at their first attempt. To help fill this gap, we took a fresh look, from a debugging perspective, at the approaches of end users as they attempted to create mashups. Our results reveal the end users’ debugging strategies and strategy barriers, the gender differences between the debugging strategies males and females followed and the features they used, and finally how their debugging successes and difficulties interacted with their design behaviors.", "", "Opportunities are available resources that yield desired results. Their suitability depends on who seizes the opportunity and the context for its use. Opportunistic development relies on the availability of reusable software components to produce hybrid applications that opportunistically join such components to meet immediate functional or content needs. Availability and connectivity are key qualities of an opportunity. Situational assessment determines when the best available, most deployable opportunities exist within time and resource constraints.", "The evolution of the Web over the past few years has fostered the growth of a handful of new technologies (e.g. Blogs, Wiki’s, Web Services). Recently web mashups have emerged as the newest Web technology and have gained lots of momentum and attention from both academic and industry communities. Current mashup literature focuses on a wide array of issues, which can be partially explained by how new the topic is. However, to date, mashup literature lacks an articulation of the different subtopics of web mashup research. This study presents a broad review of mashup literature to help frame the 1subtopics in mashup research.", "This chapter presents a survey of six mashup development environments and looks at how mashups fit into the vision of the smart internet. The fast-paced expansion of mashup development environments has resulted in a wealth of features and approaches. To provide an overview of End User Development support in current mashup development environments, we explore, summarize and compare their features across six different themes (Levels of Abstraction, Learning Support, Community Support, Discoverability, User Interface Design and Software Engineering Techniques). We found that the mashup development environments provide many features to support end users, but there is still much room for further improvement, especially in relation to the smart internet. We believe that by connecting matters of concern to mashups, mashup development environments can become an essential part of the smart internet. Such a connection would enable mining of mashup elements, which could facilitate automatic mashup creation and customization." ] }
1312.0200
1859168105
Arrays are ubiquitous in the context of software verification. However, effective reasoning over arrays is still rare in CP, as local reasoning is dramatically ill-conditioned for constraints over arrays. In this paper, we propose an approach combining both global symbolic reasoning and local consistency filtering in order to solve constraint systems involving arrays (with accesses, updates and size constraints) and finite-domain constraints over their elements and indexes. Our approach, named fdcc, is based on a combination of a congruence closure algorithm for the standard theory of arrays and a CP solver over finite domains. The tricky part of the work lies in the bi-directional communication mechanism between both solvers. We identify the significant information to share, and design ways to master the communication overhead. Experiments on random instances show that fdcc solves more formulas than any portfolio combination of the two solvers taken in isolation, while overhead is kept reasonable.
This paper is an extension of a preliminary version presented at CPAIOR 2012 @cite_33 . It contains detailed descriptions and explanations on the core technology, formulated in complete revisions of Sections to . It also presents new developments and extensions in a completely new . Moreover, as it discusses adaptations of the approach for several extensions of the theory of arrays relevant to software verification, it also contains a deeper and updated description of related work (Section ).
{ "cite_N": [ "@cite_33" ], "mid": [ "1625314295" ], "abstract": [ "Arrays are ubiquitous in the context of software verification. However, effective reasoning over arrays is still rare in CP, as local reasoning is dramatically ill-conditioned for constraints over arrays. In this paper, we propose an approach combining both global symbolic reasoning and local filtering in order to solve constraint systems involving arrays (with accesses, updates and size constraints) and finite-domain constraints over their elements and indexes. Our approach, named fdcc, is based on a combination of a congruence closure algorithm for the standard theory of arrays and a CP solver over finite domains. The tricky part of the work lies in the bi-directional communication mechanism between both solvers. We identify the significant information to share, and design ways to master the communication overhead. Experiments on random instances show that fdcc solves more formulas than any portfolio combination of the two solvers taken in isolation, while overhead is kept reasonable." ] }
1312.0200
1859168105
Arrays are ubiquitous in the context of software verification. However, effective reasoning over arrays is still rare in CP, as local reasoning is dramatically ill-conditioned for constraints over arrays. In this paper, we propose an approach combining both global symbolic reasoning and local consistency filtering in order to solve constraint systems involving arrays (with accesses, updates and size constraints) and finite-domain constraints over their elements and indexes. Our approach, named fdcc, is based on a combination of a congruence closure algorithm for the standard theory of arrays and a CP solver over finite domains. The tricky part of the work lies in the bi-directional communication mechanism between both solvers. We identify the significant information to share, and design ways to master the communication overhead. Experiments on random instances show that fdcc solves more formulas than any portfolio combination of the two solvers taken in isolation, while overhead is kept reasonable.
Alternative approaches to FDCC. We sketch three alternative methods for handling array constraints over finite domains, and we argue why we do not choose them. First, one could think of embedding a CP(FD) solver in an SMT solver as one theory solver among others, the array constraints being handled by a dedicated solver. As already stated in the introduction, standard cooperation frameworks like Nelson-Oppen (NO) @cite_17 require that the supported theories have an infinite model, which is not the case for Finite Domains.
{ "cite_N": [ "@cite_17" ], "mid": [ "2164778826" ], "abstract": [ "A method for combining decision procedures for several theories into a single decision procedure for their combination is described, and a simplifier based on this method is discussed. The simplifier finds a normal form for any expression formed from individual variables, the usual Boolean connectives, the equality predicate =, the conditional function if-then-else, the integers, the arithmetic functions and predicates +, -, and ≤, the Lisp functions and predicates car, cdr, cons, and atom, the functions store and select for storing into and selecting from arrays, and uninterpreted function symbols. If the expression is a theorem it is simplified to the constant true, so the simplifier can be used as a decision procedure for the quantifier-free theory containing these functions and predicates. The simplifier is currently used in the Stanford Pascal Verifier." ] }
1312.0200
1859168105
Arrays are ubiquitous in the context of software verification. However, effective reasoning over arrays is still rare in CP, as local reasoning is dramatically ill-conditioned for constraints over arrays. In this paper, we propose an approach combining both global symbolic reasoning and local consistency filtering in order to solve constraint systems involving arrays (with accesses, updates and size constraints) and finite-domain constraints over their elements and indexes. Our approach, named fdcc, is based on a combination of a congruence closure algorithm for the standard theory of arrays and a CP solver over finite domains. The tricky part of the work lies in the bi-directional communication mechanism between both solvers. We identify the significant information to share, and design ways to master the communication overhead. Experiments on random instances show that fdcc solves more formulas than any portfolio combination of the two solvers taken in isolation, while overhead is kept reasonable.
Third, one could encode all finite-domain constraints into boolean constraints and use an SMT solver equipped with a decision procedure for the standard theory of arrays. Doing so, we give up the possibility of taking advantage of the high-level structure of the initial formula. Recent works on finite but hard-to-reason-about constraints, such as floating-point arithmetic @cite_27 , modular arithmetic @cite_16 or bitvectors @cite_25 , suggest that it can be much more efficient in some cases to keep the high-level view of the formula.
{ "cite_N": [ "@cite_27", "@cite_16", "@cite_25" ], "mid": [ "2048854494", "1548138666", "1598706103" ], "abstract": [ "Verifying critical numerical software involves the generation of test data for floating-point intensive programs. As the symbolic execution of floating-point computations presents significant difficulties, existing approaches usually resort to random or search-based test data generation. However, without symbolic reasoning, it is almost impossible to generate test inputs that execute many paths with floating-point computations. Moreover, constraint solvers over the reals or the rationals do not handle the rounding errors. In this paper, we present a new version of FPSE, a symbolic evaluator for C program paths, that specifically addresses this problem. The tool solves path conditions containing floating-point computations by using correct and precise projection functions. This version of the tool exploits an essential filtering property based on the representation of floating-point numbers that makes it suitable to generate path-oriented test inputs for complex paths characterized by floating-point intensive computations. The paper reviews the key implementation choices in FPSE and the labeling search heuristics we selected to maximize the benefits of enhanced filtering. Our experimental results show that FPSE can generate correct test inputs for selected paths containing several hundreds of iterations and thousands of executable floating-point statements on a standard machine: this is currently outside the scope of any other symbolic-execution test data generator tool.", "Constraint solving over nite-sized integers involves the def- inition of propagators able to capture modular (a.k.a. wrap-around) in- teger computations. In this paper, we propose e cient propagators for a fragment of modular integer constraints including adders, multipliers and comparators. Our approach is based on the original notion of Clock- wise Interval for which we de ne a complete arithmetic. We also present three distinct implementations of modular integer constraint solving in the context of software verification.", "The theory BV of bit-vectors, i.e. fixed-size arrays of bits equipped with standard low-level machine instructions, is becoming very popular in formal verification. Standard solvers for this theory are based on a bit-level encoding into propositional logic and SAT-based resolution techniques. In this paper, we investigate an alternative approach based on a word-level encoding into bounded arithmetic and Constraint Logic Programming (CLP) resolution techniques. We define an original CLP framework (domains and propagators) dedicated to bit-vector constraints. This framework is implemented in a prototype and thorough experimental studies have been conducted. The new approach is shown to perform much better than standard CLP-based approaches, and to considerably reduce the gap with the best SAT-based BV solvers." ] }
1312.0200
1859168105
Arrays are ubiquitous in the context of software verification. However, effective reasoning over arrays is still rare in CP, as local reasoning is dramatically ill-conditioned for constraints over arrays. In this paper, we propose an approach combining both global symbolic reasoning and local consistency filtering in order to solve constraint systems involving arrays (with accesses, updates and size constraints) and finite-domain constraints over their elements and indexes. Our approach, named fdcc, is based on a combination of a congruence closure algorithm for the standard theory of arrays and a CP solver over finite domains. The tricky part of the work lies in the bi-directional communication mechanism between both solvers. We identify the significant information to share, and design ways to master the communication overhead. Experiments on random instances show that fdcc solves more formulas than any portfolio combination of the two solvers taken in isolation, while overhead is kept reasonable.
Deductive methods and SMT frameworks. It is well known in the SMT community that solving formulas over arrays and integer arithmetic in an efficient way through NO is difficult. Indeed, handling non-convex theories in a correct way requires propagating all implied disjunctions of equalities, which may be much more expensive than satisfiability checking @cite_26 . Delayed theory combination @cite_13 @cite_26 requires only the propagation of implied equalities, at the price of adding new boolean variables for all potential equalities between variables. Model-based theory combination @cite_2 aims at mitigating this potential overhead through lazy propagation of equalities.
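A minimal sketch of the delayed-theory-combination idea mentioned above (the helper names are hypothetical and this is not the implementation of any cited solver): one boolean atom is introduced per potential equality between interface variables, and a guessed arrangement is then checked for transitive consistency before the theory solvers see it. In a real solver the arrangement is not enumerated up front but produced by the SAT engine's search.

```python
# Delayed theory combination, caricatured: enumerate equality atoms between
# interface variables and check that a guessed arrangement respects transitivity.
from itertools import combinations

def interface_equalities(shared_vars):
    """One propositional atom per unordered pair of interface variables."""
    return {frozenset(p): f"eq_{min(p)}_{max(p)}" for p in combinations(shared_vars, 2)}

def arrangement_consistent(shared_vars, truth):
    """Reject guesses that are not closed under transitivity of equality."""
    for x, y, z in combinations(shared_vars, 3):
        xy = truth[frozenset({x, y})]
        yz = truth[frozenset({y, z})]
        xz = truth[frozenset({x, z})]
        if (xy and yz and not xz) or (xy and xz and not yz) or (yz and xz and not xy):
            return False
    return True

shared = ["a", "i", "v"]
print(interface_equalities(shared))
guess = {frozenset({"a", "i"}): True, frozenset({"i", "v"}): True, frozenset({"a", "v"}): False}
print(arrangement_consistent(shared, guess))   # -> False: a=i and i=v force a=v
```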
{ "cite_N": [ "@cite_26", "@cite_13", "@cite_2" ], "mid": [ "2276124093", "1515930456", "2110780548" ], "abstract": [ "Most state-of-the-art approaches for Satisfiability Modulo Theories @math rely on the integration between a SAT solver and a decision procedure for sets of literals in the background theory @math . Often @math is the combination @math of two (or more) simpler theories @math , s.t. the specific @math must be combined. Up to a few years ago, the standard approach to @math was to integrate the SAT solver with one combined @math , obtained from two distinct @math by means of evolutions of Nelson and Oppen's (NO) combination procedure, in which the @math deduce and exchange interface equalities. Nowadays many state-of-the-art SMT solvers use evolutions of a more recent @math procedure called Delayed Theory Combination (DTC), in which each @math interacts directly and only with the SAT solver, in such a way that part or all of the (possibly very expensive) reasoning effort on interface equalities is delegated to the SAT solver itself. In this paper we present a comparative analysis of DTC vs. NO for @math . On the one hand, we explain the advantages of DTC in exploiting the power of modern SAT solvers to reduce the search. On the other hand, we show that the extra amount of Boolean search required to the SAT solver can be controlled. In fact, we prove two novel theoretical results, for both convex and non-convex theories and for different deduction capabilities of the @math , which relate the amount of extra Boolean search required to the SAT solver by DTC with the number of deductions and case-splits required to the @math by NO in order to perform the same tasks: (i) under the same hypotheses of deduction capabilities of the @math required by NO, DTC causes no extra Boolean search; (ii) using @math with limited or no deduction capabilities, the extra Boolean search required can be reduced down to a negligible amount by controlling the quality of the @math -conflict sets returned by the @math .", "The problem of deciding the satisfiability of a quantifier-free formula with respect to a background theory, also known as Satisfiability Modulo Theories (SMT), is gaining increasing relevance in verification: representation capabilities beyond propositional logic allow for a natural modeling of real-world problems (e.g., pipeline and RTL circuits verification, proof obligations in software systems). In this paper, we focus on the case where the background theory is the combination T1∪T2 of two simpler theories. Many SMT procedures combine a boolean model enumeration with a decision procedure for T1∪T2, where conjunctions of literals can be decided by an integration schema such as Nelson-Oppen, via a structured exchange of interface formulae (e.g., equalities in the case of convex theories, disjunctions of equalities otherwise). We propose a new approach for SMT(T1∪T2), called Delayed Theory Combination, which does not require a decision procedure for T1∪T2, but only individual decision procedures for T1 and T2, which are directly integrated into the boolean model enumerator. This approach is much simpler and natural, allows each of the solvers to be implemented and optimized without taking into account the others, and it nicely encompasses the case of non-convex theories. 
We show the effectiveness of the approach by a thorough experimental comparison.", "Traditional methods for combining theory solvers rely on capabilities of the solvers to produce all implied equalities or a pre-processing step that introduces additional literals into the search space. This paper introduces a combination method that incrementally reconciles models maintained by each theory. We evaluate the practicality and efficiency of this approach." ] }
1312.0200
1859168105
Arrays are ubiquitous in the context of software verification. However, effective reasoning over arrays is still rare in CP, as local reasoning is dramatically ill-conditioned for constraints over arrays. In this paper, we propose an approach combining both global symbolic reasoning and local consistency filtering in order to solve constraint systems involving arrays (with accesses, updates and size constraints) and finite-domain constraints over their elements and indexes. Our approach, named fdcc, is based on a combination of a congruence closure algorithm for the standard theory of arrays and a CP solver over finite domains. The tricky part of the work lies in the bi-directional communication mechanism between both solvers. We identify the significant information to share, and design ways to master the communication overhead. Experiments on random instances show that fdcc solves more formulas than any portfolio combination of the two solvers taken in isolation, while overhead is kept reasonable.
Besides, @math is hard to solve by itself. Standard symbolic approaches have already been sketched in . The most efficient approaches combine preprocessing for removing as many RoW terms as possible with "delayed" inlining of array axioms for the remaining RoW terms. New lemmas corresponding roughly to critical pairs can be added on-demand to the DPLL top-level @cite_5 , or they can be incrementally discovered through an abstraction-refinement scheme @cite_35 . Additional performance can be obtained through frugal ( @math minimal) instantiation of array axioms @cite_30 .
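The toy fragment below (ad-hoc data types, not the code of any cited solver) shows what on-demand instantiation of the read-over-write axiom amounts to for a single select-over-store term: two lemmas are produced and handed to the DPLL engine only when the term actually occurs in the formula.

```python
# Lazy instantiation of the read-over-write axiom
#   select(store(a, i, v), j) = v            if i = j
#   select(store(a, i, v), j) = select(a, j) if i != j
from dataclasses import dataclass

@dataclass(frozen=True)
class Store:
    array: object
    index: str
    value: str

@dataclass(frozen=True)
class Select:
    array: object
    index: str

def term_str(t):
    if isinstance(t, Select):
        return f"select({term_str(t.array)}, {t.index})"
    if isinstance(t, Store):
        return f"store({term_str(t.array)}, {t.index}, {t.value})"
    return str(t)

def row_lemmas(term):
    """Produce the two lemmas for one select-over-store term, on demand."""
    if not (isinstance(term, Select) and isinstance(term.array, Store)):
        return []
    st, j = term.array, term.index
    return [f"({st.index} = {j}) -> ({term_str(term)} = {st.value})",
            f"({st.index} != {j}) -> ({term_str(term)} = {term_str(Select(st.array, j))})"]

for lemma in row_lemmas(Select(Store("a", "i", "v"), "j")):
    print(lemma)
```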
{ "cite_N": [ "@cite_30", "@cite_5", "@cite_35" ], "mid": [ "1984364182", "2159786179", "2158602337" ], "abstract": [ "How to efficiently reason about arrays in an automated solver based on decision procedures? The most efficient SMT solvers of the day implement \"lazy axiom instantiation\": treat the array operations read and write as uninterpreted, but supply at appropriate times appropriately many---not too many, not too few---instances of array axioms as additional clauses. We give a precise account of this approach, specifying \"how many\" is enough for correctness, and showing how to be frugal and correct.", "Lazy algorithms for Satisfiability Modulo Theories (SMT) combine a generic DPLL-based SAT engine with a theory solver for the given theory T that can decide the T-consistency of conjunctions of ground literals. For many theories of interest, theory solvers need to reason by performing internal case splits. Here we argue that it is more convenient to delegate these case splits to the DPLL engine instead. The delegation can be done on demand for solvers that can encode their internal case splits into one or more clauses, possibly including new constants and literals. This results in drastically simpler theory solvers. We present this idea in an improved version of DPLL(T), a general SMT architecture for the lazy approach, and formalize and prove it correct in an extension of Abstract DPLL Modulo Theories, a framework for modeling and reasoning about lazy algorithms for SMT. A remarkable additional feature of the architecture, also discussed in the paper, is that it naturally includes an efficient Nelson-Oppen-like combination of multiple theories and their solvers.", "Deciding satisfiability in the theory of arrays, particularly in combination with bit-vectors, is essential for software and hardware verification. We precisely describe how the lemmas on demand approach can be applied to this decision problem. In particular, we show how our new propagation based algorithm can be generalized to the extensional theory of arrays. Our implementation achieves competitive performance." ] }
1312.0200
1859168105
Arrays are ubiquitous in the context of software verification. However, effective reasoning over arrays is still rare in CP, as local reasoning is dramatically ill-conditioned for constraints over arrays. In this paper, we propose an approach combining both global symbolic reasoning and local consistency filtering in order to solve constraint systems involving arrays (with accesses, updates and size constraints) and finite-domain constraints over their elements and indexes. Our approach, named fdcc, is based on a combination of a congruence closure algorithm for the standard theory of arrays and a CP solver over finite domains. The tricky part of the work lies in the bi-directional communication mechanism between both solvers. We identify the significant information to share, and design ways to master the communication overhead. Experiments on random instances show that fdcc solves more formulas than any portfolio combination of the two solvers taken in isolation, while overhead is kept reasonable.
Combination of propagators in CP. Several possibilities can be considered to implement constraint propagation when multiple propagators are available @cite_12 . First, an external solver can be embedded as a new global constraint in , as done for example with the Quad global constraint for continuous domains @cite_14 . This approach offers global reasoning over the constraint store. However, it requires fine control over the awakening mechanism of the new global constraint. A second approach consists in calling both solvers in a concurrent way. Each of them is launched on a distinct thread, and both threads prune a common constraint store that serves as a blackboard. This approach has been successfully implemented in Oz @cite_18 . The difficulty is to identify which information must be shared, and to do it efficiently. A third approach consists in building a master-slave combination process where one of the solvers (here ) drives the computation and calls the other ( ). The difficulty here is to understand when the master must call the slave. We mainly follow the second approach; however, a third agent (the supervisor) acts as a lightweight master over and to synchronise both solvers through queries.
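The sketch below gives a rough feel for the second, blackboard-style combination with a lightweight supervisor (the two agents and their toy constraints are invented for the example and are far simpler than real congruence-closure or finite-domain propagators): both agents prune a shared store of domains, and the supervisor re-runs them until a fixpoint or an empty domain is reached.

```python
# Blackboard-style cooperation of two filtering agents under a supervisor.
store = {"x": set(range(0, 10)), "y": set(range(0, 10)), "z": {3, 4}}

def fd_agent(s):
    """Toy arithmetic filtering for the constraint x = y + 1."""
    s["x"] &= {y + 1 for y in s["y"]}
    s["y"] &= {x - 1 for x in s["x"]}

def cc_agent(s):
    """Toy equality propagation for the constraint x = z."""
    common = s["x"] & s["z"]
    s["x"], s["z"] = set(common), set(common)

def supervisor(s, agents):
    while True:
        before = {v: set(d) for v, d in s.items()}   # snapshot of the blackboard
        for agent in agents:
            agent(s)
        if any(len(d) == 0 for d in s.values()):
            return "inconsistent"
        if s == before:                              # nothing changed: fixpoint
            return "fixpoint"

print(supervisor(store, [fd_agent, cc_agent]), store)
# -> fixpoint, with x narrowed to {3, 4}, y to {2, 3} and z unchanged
```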
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_12" ], "mid": [ "2120656140", "2049953405", "2035811095" ], "abstract": [ "Oz is a multiparadigm language that supports logic programming as one of its major paradigms. A multiparadigm language is designed to support different programming paradigms (logic, functional, constraint, object-oriented, sequential, concurrent, etc.) with equal ease. This paper has two goals: to give a tutorial of logic programming in Oz; and to show how logic programming fits naturally into the wider context of multiparadigm programming. Our experience shows that there are two classes of problems, which we call algorithmic and search problems, for which logic programming can help formulate practical solutions. Algorithmic problems have known efficient algorithms. Search problems do not have known efficient algorithms but can be solved with search. The Oz support for logic programming targets these two problem classes specifically, using the concepts needed for each. This is in contrast to the Prolog approach, which targets both classes with one set of concepts, which results in less than optimal support for each class. We give examples that can be run interactively on the Mozart system, which implements Oz. To explain the essential difference between algorithmic and search programs, we define the Oz execution model. This model subsumes both concurrent logic programming (committed-choice-style) and search-based logic programming (Prolog-style). Furthermore, as consequences of its multiparadigm nature, the model supports new abilities such as first-class top levels, deep guards, active objects, and sophisticated control of the search process. Instead of Horn clause syntax, Oz has a simple, fully compositional, higher-order syntax that accommodates the abilities of the language. We give a brief history of Oz that traces the development of its main ideas and we summarize the lessons learned from this work. Finally, we give many entry points into the Oz literature.", "Numerical constraint systems are often handled by branch and prune algorithms that combine splitting techniques, local consistencies, and interval methods. This paper first recalls the principles of Quad , a global constraint that works on a tight and safe linear relaxation of quadratic subsystems of constraints. Then, it introduces a generalization of Quad to polynomial constraint systems. It also introduces a method to get safe linear relaxations and shows how to compute safe bounds of the variables of the linear constraint system. Different linearization techniques are investigated to limit the number of generated constraints. QuadSolver , a new branch and prune algorithm that combines Quad , local consistencies, and interval methods, is introduced. QuadSolver has been evaluated on a variety of benchmarks from kinematics, mechanics, and robotics. On these benchmarks, it outperforms classical interval methods as well as constraint satisfaction problem solvers and it compares well with state-of-the-art optimization solvers.", "This article presents a model and implementation techniques for speeding up constraint propagation. Three fundamental approaches to improving constraint propagation based on propagators as implementations of constraints are explored: keeping track of which propagators are at fixpoint, choosing which propagator to apply next, and how to combine several propagators for the same constraint. We show how idempotence reasoning and events help track fixpoints more accurately. 
We improve these methods by using them dynamically (taking into account current variable domains to improve accuracy). We define priority-based approaches to choosing a next propagator and show that dynamic priorities can improve propagation. We illustrate that the use of multiple propagators for the same constraint can be advantageous with priorities, and introduce staged propagators that combine the effects of multiple propagators with priorities for greater efficiency." ] }
1312.0127
2104448458
Answer Set Programming (ASP) is a popular framework for modelling combinatorial problems. However, ASP cannot be used easily for reasoning about uncertain information. Possibilistic ASP (PASP) is an extension of ASP that combines possibilistic logic and ASP. In PASP a weight is associated with each rule, where this weight is interpreted as the certainty with which the conclusion can be established when the body is known to hold. As such, it allows us to model and reason about uncertain information in an intuitive way. In this paper we present new semantics for PASP in which rules are interpreted as constraints on possibility distributions. Special models of these constraints are then identified as possibilistic answer sets. In addition, since ASP is a special case of PASP in which all the rules are entirely certain, we obtain a new characterization of ASP in terms of constraints on possibility distributions. This allows us to uncover a new form of disjunction, called weak disjunction, that has not been previously considered in the literature. In addition to introducing and motivating the semantics of weak disjunction, we also pinpoint its computational complexity. In particular, while the complexity of most reasoning tasks coincides with standard disjunctive ASP, we find that brave reasoning for programs with weak disjunctions is easier.
Several authors have already proposed alternatives and extensions to the semantics of disjunctive programs. Ordered disjunction @cite_3 falls in the latter category and allows using the head of a rule to formulate alternative solutions in their preferred order. For example, a rule such as @math represents the knowledge that @math is preferred over @math , which is preferred over @math , but that at the very least we want @math to be true. As such, it allows for an easy way to express context-dependent preferences. The semantics of ordered disjunction allow certain non-minimal models to be answer sets; hence, unlike the work in this paper, it does not adhere to the standard semantics of disjunctive rules in ASP.
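To make the induced preference concrete, the small sketch below (candidate answer sets are simply given; the representation is ad hoc and unrelated to any actual ASP solver) compares answer sets by the position of the first head option they satisfy, for a single body-free rule a x b x c:

```python
# Satisfaction degree of one ordered-disjunction rule: the rank of the first
# option contained in the answer set; smaller degrees are preferred.
def satisfaction_degree(answer_set, ordered_head):
    for rank, atom in enumerate(ordered_head, start=1):
        if atom in answer_set:
            return rank
    return len(ordered_head) + 1          # the rule is not satisfied at all

candidates = [{"c"}, {"b"}, {"b", "d"}]   # hypothetical answer sets
head = ["a", "b", "c"]                    # the rule a x b x c

best = min(satisfaction_degree(s, head) for s in candidates)
preferred = [s for s in candidates if satisfaction_degree(s, head) == best]
print(preferred)   # the two sets containing "b" win: degree 2 beats degree 3
```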
{ "cite_N": [ "@cite_3" ], "mid": [ "1984529395" ], "abstract": [ "Logic programs with ordered disjunction (LPODs) combine ideas underlying Qualitative Choice Logic (Brewka, Benferhat, & Le Berre 2002) and answer set programming. Logic programming under answer set semantics is extended with a new connective called ordered disjunction. The new connective allows us to represent alternative, ranked options for problem solutions in the heads of rules: A × B intuitively means: if possible A, but if A is not possible then at least B. The semantics of logic programs with ordered disjunction is based on a preference relation on answer sets. LPODs are useful for applications in design and configuration and can serve as a basis for qualitative decision making." ] }
1312.0127
2104448458
Answer Set Programming (ASP) is a popular framework for modelling combinatorial problems. However, ASP cannot be used easily for reasoning about uncertain information. Possibilistic ASP (PASP) is an extension of ASP that combines possibilistic logic and ASP. In PASP a weight is associated with each rule, where this weight is interpreted as the certainty with which the conclusion can be established when the body is known to hold. As such, it allows us to model and reason about uncertain information in an intuitive way. In this paper we present new semantics for PASP in which rules are interpreted as constraints on possibility distributions. Special models of these constraints are then identified as possibilistic answer sets. In addition, since ASP is a special case of PASP in which all the rules are entirely certain, we obtain a new characterization of ASP in terms of constraints on possibility distributions. This allows us to uncover a new form of disjunction, called weak disjunction, that has not been previously considered in the literature. In addition to introducing and motivating the semantics of weak disjunction, we also pinpoint its computational complexity. In particular, while the complexity of most reasoning tasks coincides with standard disjunctive ASP, we find that brave reasoning for programs with weak disjunctions is easier.
Annotated disjunctions are another example of a framework that changes the semantics of disjunctive programs @cite_8 . They are based on the idea that every disjunct in the head of a rule is annotated with a probability. Interestingly, both ordered and annotated disjunction rely on split programs, as found in the possible model semantics @cite_6 . These semantics provide an alternative to the minimal model semantics. The idea is to split a disjunctive program into a number of normal programs, one for each possible choice of disjuncts in the head, of which the minimal Herbrand models are then the possible models of the disjunctive program. Intuitively this means that a possible model represents a set of atoms for which a possible justification is present in the program. In line with our conclusions for weak disjunction, using the possible model semantics also leads to a lower computational complexity.
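A toy rendition of that construction for positive programs (the representation is ad hoc and this is not the cited formalization) is sketched below: every disjunctive rule is replaced by the definite rules for one non-empty choice of head atoms, and the least model of each resulting split program is collected as a possible model.

```python
# Possible models of a positive disjunctive program via split programs.
from itertools import chain, combinations

def nonempty_subsets(atoms):
    return chain.from_iterable(combinations(atoms, r) for r in range(1, len(atoms) + 1))

def least_model(rules):
    """Forward chaining for definite rules given as (head_atom, body_atoms)."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, body in rules:
            if set(body) <= model and head not in model:
                model.add(head)
                changed = True
    return model

def possible_models(disjunctive_rules):
    programs = [[]]
    for heads, body in disjunctive_rules:
        programs = [p + [(h, body) for h in choice]
                    for p in programs for choice in nonempty_subsets(heads)]
    return {frozenset(least_model(p)) for p in programs}

# The program:  p.   and   a v b :- p.
program = [(("p",), ()), (("a", "b"), ("p",))]
print(possible_models(program))   # three possible models: {p,a}, {p,b}, {p,a,b}
```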
{ "cite_N": [ "@cite_6", "@cite_8" ], "mid": [ "2050486226", "1514973146" ], "abstract": [ "In this paper, we study a new semantics of logic programming and deductive databases. Thepossible model semantics is introduced as a declarative semantics of disjunctive logic programs. The possible model semantics is an alternative theoretical framework to the classical minimal model semantics and provides a flexible inference mechanism for inferring negation in disjunctive logic programs. We also present a proof procedure for the possible model semantics and show that the possible model semantics has an advantage from the computational complexity point of view.", "Current literature offers a number of different approaches to what could generally be called “probabilistic logic programming”. These are usually based on Horn clauses. Here, we introduce a new formalism, Logic Programs with Annotated Disjunctions, based on disjunctive logic programs. In this formalism, each of the disjuncts in the head of a clause is annotated with a probability. Viewing such a set of probabilistic disjunctive clauses as a probabilistic disjunction of normal logic programs allows us to derive a possible world semantics, more precisely, a probability distribution on the set of all Herbrand interpretations. We demonstrate the strength of this formalism by some examples and compare it to related work." ] }
1312.0127
2104448458
Answer Set Programming (ASP) is a popular framework for modelling combinatorial problems. However, ASP cannot be used easily for reasoning about uncertain information. Possibilistic ASP (PASP) is an extension of ASP that combines possibilistic logic and ASP. In PASP a weight is associated with each rule, where this weight is interpreted as the certainty with which the conclusion can be established when the body is known to hold. As such, it allows us to model and reason about uncertain information in an intuitive way. In this paper we present new semantics for PASP in which rules are interpreted as constraints on possibility distributions. Special models of these constraints are then identified as possibilistic answer sets. In addition, since ASP is a special case of PASP in which all the rules are entirely certain, we obtain a new characterization of ASP in terms of constraints on possibility distributions. This allows us to uncover a new form of disjunction, called weak disjunction, that has not been previously considered in the literature. In addition to introducing and motivating the semantics of weak disjunction, we also pinpoint its computational complexity. In particular, while the complexity of most reasoning tasks coincides with standard disjunctive ASP, we find that brave reasoning for programs with weak disjunctions is easier.
Not all existing extensions of disjunction allow non-minimal models. For example, in @cite_32 an extension of disjunctive logic programs is presented which adds the idea of inheritance. Conflicts between rules are then resolved in favor of more specific rules. Such an approach allows for an intuitive way to deal with default reasoning and exceptions. In particular, the semantics allows rules to be marked as defeasible and allows an order or inheritance tree to be specified among (sets of) rules. Interestingly, the complexity of the resulting system is not affected and coincides with the complexity of ordinary disjunctive programs.
{ "cite_N": [ "@cite_32" ], "mid": [ "1963591676" ], "abstract": [ "The paper proposes a new knowledge representation language, called DLP<, which extends disjunctive logic programming (with strong negation) by inheritance. The addition of inheritance enhances the knowledge modeling features of the language providing a natural representation of default reasoning with exceptions. A declarative model-theoretic semantics of DLP< is provided, which is shown to generalize the Answer Set Semantics of disjunctive logic programs. The knowledge modeling features of the language are illustrated by encoding classical nonmonotonic problems in DLP<. The complexity of DLP< is analyzed, proving that inheritance does not cause any computational overhead, as reasoning in DLP< has exactly the same complexity as reasoning in disjunctive logic programming. This is confirmed by the existence of an efficient translation from DLP< to plain disjunctive logic programming. Using this translation, an advanced KR system supporting the DLP< language has been implemented on top of the DLV system and has subsequently been integrated into DLV." ] }
1312.0127
2104448458
Answer Set Programming (ASP) is a popular framework for modelling combinatorial problems. However, ASP cannot be used easily for reasoning about uncertain information. Possibilistic ASP (PASP) is an extension of ASP that combines possibilistic logic and ASP. In PASP a weight is associated with each rule, where this weight is interpreted as the certainty with which the conclusion can be established when the body is known to hold. As such, it allows us to model and reason about uncertain information in an intuitive way. In this paper we present new semantics for PASP in which rules are interpreted as constraints on possibility distributions. Special models of these constraints are then identified as possibilistic answer sets. In addition, since ASP is a special case of PASP in which all the rules are entirely certain, we obtain a new characterization of ASP in terms of constraints on possibility distributions. This allows us to uncover a new form of disjunction, called weak disjunction, that has not been previously considered in the literature. In addition to introducing and motivating the semantics of weak disjunction, we also pinpoint its computational complexity. In particular, while the complexity of most reasoning tasks coincides with standard disjunctive ASP, we find that brave reasoning for programs with weak disjunctions is easier.
Alternatively, existing extensions of ASP can be used to implement some epistemic reasoning tasks, such as reasoning based on brave/cautious conclusions. This idea is proposed in @cite_21 to overcome the need for an intermediary step that computes the desired consequences of the ASP program @math before they are fed into @math . Rather, they propose a translation to manifold answer set programs, which exploit the concept of weak constraints @cite_18 to allow such programs to access all desired consequences of @math within a single answer set. As such, for problems that can be cast into this particular form, only a single ASP program needs to be evaluated and the intermediary step is made obsolete.
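For contrast, the intermediary step that manifold programs render obsolete can be sketched in a few lines (the answer sets of the first program are hard-coded here and the predicate names are purely illustrative): brave consequences are the union and cautious consequences the intersection of the answer sets, re-injected as facts for the second program.

```python
# The external post-processing step: derive brave/cautious consequences from
# the answer sets of a first program and turn them into facts for a second one.
from functools import reduce

answer_sets_of_P1 = [{"a", "b"}, {"a", "c"}]            # hypothetical answer sets

brave = set().union(*answer_sets_of_P1)                 # atoms in some answer set
cautious = reduce(set.intersection, answer_sets_of_P1)  # atoms in every answer set

facts_for_P2 = [f"brave({atom})." for atom in sorted(brave)] + \
               [f"cautious({atom})." for atom in sorted(cautious)]
print(facts_for_P2)   # ['brave(a).', 'brave(b).', 'brave(c).', 'cautious(a).']
```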
{ "cite_N": [ "@cite_18", "@cite_21" ], "mid": [ "2106614716", "1567155055" ], "abstract": [ "This paper presents an extension of Disjunctive Datalog (DATALOG sup V, spl sim ) by integrity constraints. These are of two types: strong, that is, classical integrity constraints and weak, that is, constraints that are satisfied if possible. While strong constraints must be satisfied, weak constraints express desiderata, that is, they may be violated-actually, their semantics tends to minimize the number of violated instances of weak constraints. Weak constraints may be ordered according to their importance to express different priority levels. As a result, the proposed language (call it, DATALOG sup V, spl sim ,c ) is well-suited to represent common sense reasoning and knowledge-based problems arising in different areas of computer science such as planning, graph theory optimizations, and abductive reasoning. The formal definition of the language is first given. The declarative semantics of DATALOG sup V, spl sim ,c is defined in a general way that allows us to put constraints on top of any existing (model-theoretic) semantics for DATALOG sup V, spl sim programs. Knowledge representation issues are then addressed and the complexity of reasoning on DATALOG sup V, spl sim ,c programs is carefully determined. An in-depth discussion on complexity and expressiveness of DATALOG sup V, spl sim ,c is finally reported. The discussion contrasts DATALOG sup V, spl sim ,c to DATALOG sup V, spl sim and highlights the significant increase in knowledge modeling ability carried out by constraints.", "In answer-set programming (ASP), the main focus usually is on computing answer sets which correspond to solutions to the problem represented by a logic program. Simple reasoning over answer sets is sometimes supported by ASP systems (usually in the form of computing brave or cautious consequences), but slightly more involved reasoning problems require external postprocessing. Generally speaking, it is often desirable to use (a subset of) brave or cautious consequences of a program P 1 as input to another program P 2 in order to provide the desired solutions to the problem to be solved. In practice, the evaluation of the program P 1 currently has to be decoupled from the evaluation of P 2 using an intermediate step which collects the desired consequences of P 1 and provides them as input to P 2 . In this work, we present a novel method for representing such a procedure within a single program, and thus within the realm of ASP itself. Our technique relies on rewriting P 1 into a so-called manifold program , which allows for accessing all desired consequences of P 1 within a single answer set. Then, this manifold program can be evaluated jointly with P 2 avoiding any intermediate computation step. For determining the consequences within the manifold program we use weak constraints , which is strongly motivated by complexity considerations. As an application, we present an encoding for computing the ideal extension of an abstract argumentation framework." ] }
1312.0127
2104448458
Answer Set Programming (ASP) is a popular framework for modelling combinatorial problems. However, ASP cannot be used easily for reasoning about uncertain information. Possibilistic ASP (PASP) is an extension of ASP that combines possibilistic logic and ASP. In PASP a weight is associated with each rule, where this weight is interpreted as the certainty with which the conclusion can be established when the body is known to hold. As such, it allows us to model and reason about uncertain information in an intuitive way. In this paper we present new semantics for PASP in which rules are interpreted as constraints on possibility distributions. Special models of these constraints are then identified as possibilistic answer sets. In addition, since ASP is a special case of PASP in which all the rules are entirely certain, we obtain a new characterization of ASP in terms of constraints on possibility distributions. This allows us to uncover a new form of disjunction, called weak disjunction, that has not been previously considered in the literature. In addition to introducing and motivating the semantics of weak disjunction, we also pinpoint its computational complexity. In particular, while the complexity of most reasoning tasks coincides with standard disjunctive ASP, we find that brave reasoning for programs with weak disjunctions is easier.
Possibility theory, which can be used for belief revision, has a strongly epistemic character and shares many commonalities with epistemic entrenchment @cite_10 . Furthermore, in @cite_0 a generalization of possibilistic logic is studied, which corresponds to a weighted version of a fragment of the modal logic KD. In this logic, epistemic states are represented as possibility distributions, and logical formulas are used to express constraints on possible epistemic states. In this paper, we similarly interpret rules in ASP as constraints on possibility distributions, which moreover allows us to uncover the semantics of weak disjunction.
{ "cite_N": [ "@cite_0", "@cite_10" ], "mid": [ "1911881406", "1974212563" ], "abstract": [ "Possibilistic logic is a well-known logic for reasoning under uncertainty, which is based on the idea that the epistemic state of an agent can be modeled by assigning to each possible world a degree of possibility, taken from a totally ordered, but essentially qualitative scale. Recently, a generalization has been proposed that extends possibilistic logic to a meta-epistemic logic, endowing it with the capability of reasoning about epistemic states, rather than merely constraining them. In this paper, we further develop this generalized possibilistic logic (GPL). We introduce an axiomatization showing that GPL is a fragment of a graded version of the modal logic KD, and we prove soundness and completeness w.r.t. a semantics in terms of possibility distributions. Next, we reveal a close link between the well-known stable model semantics for logic programming and the notion of minimally specific models in GPL. More generally, we analyze the relationship between the equilibrium logic of Pearce and GPL, showing that GPL can essentially be seen as a generalization of equilibrium logic, although its notion of minimal specificity is slightly more demanding than the notion of minimality underlying equilibrium logic.", "Abstract This note points out the close relationships existing between recent proposals in the theory of belief revision made by Gardenfors based on the notion of epistemic entrenchment, and possibility theory applied to automated reasoning under uncertainty. It is claimed that the only numerical counterparts of epistemic entrenchment relations are so-called necessity measures that are dual to possibility measures, and are also mathematically equivalent to consonant belief functions in the sense of Shafer. Relationships between Spohn's ordinal conditional functions and possibility theory are also laid bare." ] }
1312.0286
2950653703
Predictive state representations (PSRs) offer an expressive framework for modelling partially observable systems. By compactly representing systems as functions of observable quantities, the PSR learning approach avoids using local-minima prone expectation-maximization and instead employs a globally optimal moment-based algorithm. Moreover, since PSRs do not require a predetermined latent state structure as an input, they offer an attractive framework for model-based reinforcement learning when agents must plan without a priori access to a system model. Unfortunately, the expressiveness of PSRs comes with significant computational cost, and this cost is a major factor inhibiting the use of PSRs in applications. In order to alleviate this shortcoming, we introduce the notion of compressed PSRs (CPSRs). The CPSR learning approach combines recent advancements in dimensionality reduction, incremental matrix decomposition, and compressed sensing. We show how this approach provides a principled avenue for learning accurate approximations of PSRs, drastically reducing the computational costs associated with learning while also providing effective regularization. Going further, we propose a planning framework which exploits these learned models. And we show that this approach facilitates model-learning and planning in large complex partially observable domains, a task that is infeasible without the principled use of compression.
It should be noted, however, that since the general PSR learning framework assumes discrete observations, decomposing a continuous domain via feature extraction is necessary for learning in that setting. Moreover, @cite_8 shows how the well-known "kernel trick" can be employed to learn in feature spaces of infinite dimension. The penalty associated with this kernel-embedded approach is that learning scales cubically with the number of training examples, leading to high computational overhead. @cite_20 show how to partially alleviate this cost by using random features to approximate certain kernels, a technique that also relies on random projections (though not in the compressed sensing setting).
{ "cite_N": [ "@cite_20", "@cite_8" ], "mid": [ "2168359464", "2952772488" ], "abstract": [ "In this paper, we bring techniques from operations research to bear on the problem of choosing optimal actions in partially observable stochastic domains. We begin by introducing the theory of Markov decision processes (mdps) and partially observable MDPs (pomdps). We then outline a novel algorithm for solving pomdps off line and show how, in some cases, a finite-memory controller can be extracted from the solution to a POMDP. We conclude with a discussion of how our approach relates to previous work, the complexity of finding exact solutions to pomdps, and of some possibilities for finding approximate solutions.", "Predictive State Representations (PSRs) are an expressive class of models for controlled stochastic processes. PSRs represent state as a set of predictions of future observable events. Because PSRs are defined entirely in terms of observable data, statistically consistent estimates of PSR parameters can be learned efficiently by manipulating moments of observed training data. Most learning algorithms for PSRs have assumed that actions and observations are finite with low cardinality. In this paper, we generalize PSRs to infinite sets of observations and actions, using the recent concept of Hilbert space embeddings of distributions. The essence is to represent the state as a nonparametric conditional embedding operator in a Reproducing Kernel Hilbert Space (RKHS) and leverage recent work in kernel methods to estimate, predict, and update the representation. We show that these Hilbert space embeddings of PSRs are able to gracefully handle continuous actions and observations, and that our learned models outperform competing system identification algorithms on several prediction benchmarks." ] }
1312.0286
2950653703
Predictive state representations (PSRs) offer an expressive framework for modelling partially observable systems. By compactly representing systems as functions of observable quantities, the PSR learning approach avoids using local-minima prone expectation-maximization and instead employs a globally optimal moment-based algorithm. Moreover, since PSRs do not require a predetermined latent state structure as an input, they offer an attractive framework for model-based reinforcement learning when agents must plan without a priori access to a system model. Unfortunately, the expressiveness of PSRs comes with significant computational cost, and this cost is a major factor inhibiting the use of PSRs in applications. In order to alleviate this shortcoming, we introduce the notion of compressed PSRs (CPSRs). The CPSR learning approach combines recent advancements in dimensionality reduction, incremental matrix decomposition, and compressed sensing. We show how this approach provides a principled avenue for learning accurate approximations of PSRs, drastically reducing the computational costs associated with learning while also providing effective regularization. Going further, we propose a planning framework which exploits these learned models. And we show that this approach facilitates model-learning and planning in large complex partially observable domains, a task that is infeasible without the principled use of compression.
In a similar vein, the CPSR-based planner is closely related to the goal-directed planning and learning approach of @cite_7 . The primary difference between our work and this goal-directed approach is that we present a more general combined learning and planning framework, which accommodates the use of a wide variety of sampling strategies.
{ "cite_N": [ "@cite_7" ], "mid": [ "2109910161" ], "abstract": [ "Learning, planning, and representing knowledge at multiple levels of temporal ab- straction are key, longstanding challenges for AI. In this paper we consider how these challenges can be addressed within the mathematical framework of reinforce- ment learning and Markov decision processes (MDPs). We extend the usual notion of action in this framework to include options—closed-loop policies for taking ac- tion over a period of time. Examples of options include picking up an object, going to lunch, and traveling to a distant city, as well as primitive actions such as mus- cle twitches and joint torques. Overall, we show that options enable temporally abstract knowledge and action to be included in the reinforcement learning frame- work in a natural and general way. In particular, we show that options may be used interchangeably with primitive actions in planning methods such as dynamic pro- gramming and in learning methods such as Q-learning. Formally, a set of options defined over an MDP constitutes a semi-Markov decision process (SMDP), and the theory of SMDPs provides the foundation for the theory of options. However, the most interesting issues concern the interplay between the underlying MDP and the SMDP and are thus beyond SMDP theory. We present results for three such cases: 1) we show that the results of planning with options can be used during execution to interrupt options and thereby perform even better than planned, 2) we introduce new intra-option methods that are able to learn about an option from fragments of its execution, and 3) we propose a notion of subgoal that can be used to improve the options themselves. All of these results have precursors in the existing literature; the contribution of this paper is to establish them in a simpler and more general setting with fewer changes to the existing reinforcement learning framework. In particular, we show that these results can be obtained without committing to (or ruling out) any particular approach to state abstraction, hierarchy, function approximation, or the macro-utility problem." ] }
1312.0286
2950653703
Predictive state representations (PSRs) offer an expressive framework for modelling partially observable systems. By compactly representing systems as functions of observable quantities, the PSR learning approach avoids using local-minima prone expectation-maximization and instead employs a globally optimal moment-based algorithm. Moreover, since PSRs do not require a predetermined latent state structure as an input, they offer an attractive framework for model-based reinforcement learning when agents must plan without a priori access to a system model. Unfortunately, the expressiveness of PSRs comes with significant computational cost, and this cost is a major factor inhibiting the use of PSRs in applications. In order to alleviate this shortcoming, we introduce the notion of compressed PSRs (CPSRs). The CPSR learning approach combines recent advancements in dimensionality reduction, incremental matrix decomposition, and compressed sensing. We show how this approach provides a principled avenue for learning accurate approximations of PSRs, drastically reducing the computational costs associated with learning while also providing effective regularization. Going further, we propose a planning framework which exploits these learned models. And we show that this approach facilitates model-learning and planning in large complex partially observable domains, a task that is infeasible without the principled use of compression.
Beyond these works, our approach bears similarities to the memory PSR (mPSR) approach of @cite_11 , which uses a type of hybrid PSR-MDP model to reduce computational costs and increase predictive accuracy, and the hierarchical PSRs (HPSRs) of @cite_6 , which use the option framework to increase the predictive capacity of PSRs. Importantly, the improvements suggested by both these approaches are not incompatible with our compressed learning algorithm.
{ "cite_N": [ "@cite_6", "@cite_11" ], "mid": [ "2166610875", "69985676" ], "abstract": [ "Predictive state representations (PSRs) are a recently proposed way of modeling controlled dynamical systems. PSR-based models use predictions of observable outcomes of tests that could be done on the system as their state representation, and have model parameters that define how the predictive state representation changes over time as actions are taken and observations noted. Learning PSR-based models requires solving two subproblems: 1) discovery of the tests whose predictions constitute state, and 2) learning the model parameters that define the dynamics. So far, there have been no results available on the discovery subproblem while for the learning subproblem an approximate-gradient algorithm has been proposed (, 2003) with mixed results (it works on some domains and not on others). In this paper, we provide the first discovery algorithm and a new learning algorithm for linear PSRs for the special class of controlled dynamical systems that have a reset operation. We provide experimental verification of our algorithms. Finally, we also distinguish our work from prior work by Jaeger (2000) on observable operator models (OOMs).", "It has recently been proposed that it is advantageous to have models of dynamical systems be based solely on observable quantities. Predictive state representations (PSRs) are a type of model that uses predictions about future observations to capture the state of a dynamical system. However, PSRs do not use memory of past observations. We propose a model called memory-PSRs that uses both memories of the past, and predictions of the future. We show that the use of memories provides a number of potential advantages. It can reduce the size of the model (in comparison to a PSR model). In addition many dynamical systems have memories that can serve as landmarks that completely determine the current state. The detection and recognition of landmarks is advantageous because they can serve to reset a model that has gotten off-track, as often happens when the model is learned from samples. This paper develops both memory-PSRs and the use and detection of landmarks." ] }
1312.0677
2407436055
Web Service Composition creates new composite Web Services from a collection of existing ones, which can be composed further, and embodies the added value and potential uses of Web Services. Web Service Composition includes two aspects: Web Service orchestration, denoting a workflow-like composition pattern, and Web Service choreography, which represents an aggregate composition pattern. Only a few works have related orchestration and choreography to each other. In this paper, we introduce an architecture for a Web Service Composition runtime which establishes a natural relationship between orchestration and choreography through a deep analysis of the two. We then use an actor-based approach to design a language called AB-WSCL to support such an architecture. To give AB-WSCL a firm theoretical foundation, we establish the formal semantics of AB-WSCL based on concurrent rewriting theory for actors. Based on an analysis of this semantics, we draw the conclusion that well-defined relationships exist among the components of AB-WSCL, captured by a notion of compositionality. Our work can serve as the basis of a modeling language, simulation tools, and verification tools for Web Service Composition at design time, as well as of a Web Service Composition runtime with built-in correctness analysis support.
Aalst uses Petri nets to model workflows as so-called WF-Nets @cite_26 and connects two WF-Nets from different organizations into a new global WF-Net to model the integration of two workflows. However, a partner workflow is located within an organization, that is, the details of an inner workflow are hidden from the external world, so a global view of the entire connection with two detailed inner workflows usually cannot be obtained.
{ "cite_N": [ "@cite_26" ], "mid": [ "2129466958" ], "abstract": [ "Workflow management promises a new solution to an age-old problem: controlling, monitoring, optimizing and supporting business processes. What is new about workflow management is the explicit representation of the business process logic which allows for computerized support. This paper discusses the use of Petri nets in the context of workflow management. Petri nets are an established tool for modeling and analyzing processes. On the one hand, Petri nets can be used as a design language for the specification of complex workflows. On the other hand, Petri net theory provides for powerful analysis techniques which can be used to verify the correctness of workflow procedures. This paper introduces workflow management as an application domain for Petri nets, presents state-of-the-art results with respect to the verification of workflows, and highlights some Petri-net-based workflow tools." ] }
1312.0677
2407436055
Web Service Composition creates new composite Web Services from a collection of existing ones, which can be composed further, and embodies the added value and potential uses of Web Services. Web Service Composition includes two aspects: Web Service orchestration, denoting a workflow-like composition pattern, and Web Service choreography, which represents an aggregate composition pattern. Only a few works have related orchestration and choreography to each other. In this paper, we introduce an architecture for a Web Service Composition runtime which establishes a natural relationship between orchestration and choreography through a deep analysis of the two. We then use an actor-based approach to design a language called AB-WSCL to support such an architecture. To give AB-WSCL a firm theoretical foundation, we establish the formal semantics of AB-WSCL based on concurrent rewriting theory for actors. Based on an analysis of this semantics, we draw the conclusion that well-defined relationships exist among the components of AB-WSCL, captured by a notion of compositionality. Our work can serve as the basis of a modeling language, simulation tools, and verification tools for Web Service Composition at design time, as well as of a Web Service Composition runtime with built-in correctness analysis support.
This leads to the emergence of the so-called process view @cite_42 , and the integration of two inner processes can be implemented based on process views @cite_7 . A process view is a version of an inner process that is observable from the outside and serves as the interface of the inner process.
{ "cite_N": [ "@cite_42", "@cite_7" ], "mid": [ "2079246930", "1608070605" ], "abstract": [ "Conducting workflow management allows virtual enterprises to collaboratively manage business processes. Given the diverse requirements of the participants involved in a business process, providing various participants with adequate process information is critical to effective workflow management. This work describes a novel process-view, i.e., an abstracted process which is derived from a base process to provide process abstraction, for modeling a virtual workflow process. The proposed process-view model enhances the conventional activity-based process models by providing different participants with various views of a process. Moreover, this work presents a novel order-preserving approach to derive a process-view from a base process. The approach proposed herein can preserve the original ordering of activities in the base process. Additionally, a formal model is presented to define an order-preserving process-view. Finally, an algorithm is proposed for automatically generating an order-preserving process-view. The proposed approach increases the flexibility and functionality of workflow management systems.", "In multi-enterprise cooperation, an enterprise must monitor the progress of private processes as well as those of the partners to streamline interorganizational workflows. In this work, a process-view model, which extends beyond the conventional activity-based process model, is applied to design workflows across multiple enterprises. A process-view is an abstraction of an implemented process. An enterprise can design various process-views for different partners according to diverse commercial relationships, and establish an integrated process that is comprised of private processes as well as the process-views that these partners provide. Participatory enterprises can obtain appropriate progress information from their own integrated processes, allowing them to collaborate more effectively. Furthermore, interorganizational workflows are coordinated through virtual states of process-views. This work develops a regulated approach to map the states between private processes and process-views. The proposed approach enhances prevalent activity-based process models to be adapted in open and collaborative environments." ] }
1311.6880
1618683236
In a K-pair-user two-way interference channel (TWIC), 2K messages and 2K transmitters/receivers form a K-user IC in the forward direction (K messages) and another K-user IC in the backward direction, which operate in full-duplex mode. All nodes may interact, or adapt inputs to past received signals. We derive a new outer bound to demonstrate that the optimal degrees of freedom (DoF, also known as the multiplexing gain) is K: full-duplex operation doubles the DoF, but interaction does not further increase the DoF. We next characterize the DoF of the K-pair-user TWIC with a MIMO, full-duplex relay. If the relay is non-causal/instantaneous (at time k it forwards a function of its received signals up to time k) and has 2K antennas, we demonstrate a one-shot scheme where the relay mitigates all interference to achieve the interference-free 2K DoF. In contrast, if the relay is causal (at time k it forwards a function of its received signals up to time k-1), we show that a full-duplex MIMO relay cannot increase the DoF of the K-pair-user TWIC beyond K, as if no relay or interaction is present. We comment on reducing the number of antennas at the instantaneous relay.
The @math -user interference channel, as an extension of the 2-user interference channel, information-theoretically models wireless communications in networks involving more than two pairs of users. Using the idea of interference alignment @cite_13 @cite_33 @cite_20 , the DoF of the @math -user (one-way) IC has been shown to be @math for time-varying channels in @cite_7 and for (almost all) constant channels in @cite_36 ; the precise definition of "almost all" may be found in @cite_36 . The generalized DoF of the @math -user IC without and with feedback has been characterized in @cite_25 and @cite_27 (full feedback from receiver @math to transmitter @math ), respectively. The authors of @cite_3 showed that for almost all constant channel coefficients of fully connected two-hop wireless networks with @math sources, @math relays and @math destinations (where source nodes are not destination nodes as they are here, i.e., the network is one-way), the DoF is @math .
{ "cite_N": [ "@cite_33", "@cite_7", "@cite_36", "@cite_3", "@cite_27", "@cite_13", "@cite_25", "@cite_20" ], "mid": [ "", "2950813095", "2130172876", "2549478918", "2065003171", "2010645529", "2108016639", "" ], "abstract": [ "", "While the best known outerbound for the K user interference channel states that there cannot be more than K 2 degrees of freedom, it has been conjectured that in general the constant interference channel with any number of users has only one degree of freedom. In this paper, we explore the spatial degrees of freedom per orthogonal time and frequency dimension for the K user wireless interference channel where the channel coefficients take distinct values across frequency slots but are fixed in time. We answer five closely related questions. First, we show that K 2 degrees of freedom can be achieved by channel design, i.e. if the nodes are allowed to choose the best constant, finite and nonzero channel coefficient values. Second, we show that if channel coefficients can not be controlled by the nodes but are selected by nature, i.e., randomly drawn from a continuous distribution, the total number of spatial degrees of freedom for the K user interference channel is almost surely K 2 per orthogonal time and frequency dimension. Thus, only half the spatial degrees of freedom are lost due to distributed processing of transmitted and received signals on the interference channel. Third, we show that interference alignment and zero forcing suffice to achieve all the degrees of freedom in all cases. Fourth, we show that the degrees of freedom @math directly lead to an @math capacity characterization of the form @math for the multiple access channel, the broadcast channel, the 2 user interference channel, the 2 user MIMO X channel and the 3 user interference channel with M>1 antennas at each node. Fifth, we characterize the degree of freedom benefits from cognitive sharing of messages on the 3 user interference channel.", "In this paper, we develop the machinery of real interference alignment. This machinery is extremely powerful in achieving the sum degrees of freedom (DoF) of single antenna systems. The scheme of real interference alignment is based on designing single-layer and multilayer constellations used for modulating information messages at the transmitters. We show that constellations can be aligned in a similar fashion as that of vectors in multiple antenna systems and space can be broken up into fractional dimensions. The performance analysis of the signaling scheme makes use of a recent result in the field of Diophantine approximation, which states that the convergence part of the Khintchine-Groshev theorem holds for points on nondegenerate manifolds. Using real interference alignment, we obtain the sum DoF of two model channels, namely the Gaussian interference channel (IC) and the X channel. It is proved that the sum DoF of the K-user IC is (K 2) for almost all channel parameters. We also prove that the sum DoF of the X-channel with K transmitters and M receivers is (K M K + M - 1) for almost all channel parameters.", "We show that fully connected two-hop wireless networks with K sources, K relays and K destinations have K degrees of freedom for almost all values of constant channel coefficients. Our main contribution is a new interference-alignment-based achievability scheme which we call aligned network diagonalization. 
This scheme allows the data streams transmitted by the sources to undergo a diagonal linear transformation from the sources to the destinations, thus being received free of interference by their intended destination.", "The symmetric K user interference channel with fully connected topology is considered, in which (a) each receiver suffers interference from all other K − 1 transmitters, and (b) each transmitter has causal and noiseless feedback from its respective receiver. The number of generalized degrees of freedom (GDoF) is characterized in terms of α, where the interference-to-noise ratio (INR) is given by INR = SNR^α. It is shown that the number of per-user GDoF of this network is the same as that of the 2-user interference channel with feedback, except for α = 1, for which existence of feedback does not help in terms of GDoF. The coding scheme proposed for this network, termed cooperative interference alignment, is based on two key ingredients, namely, interference alignment and interference decoding.", "", "We characterize the generalized degrees of freedom of the K user symmetric Gaussian interference channel where all desired links have the same signal-to-noise ratio (SNR) and all undesired links carrying interference have the same interference-to-noise ratio, INR = SNR^α. We find that the number of generalized degrees of freedom per user, d(α), does not depend on the number of users, so that the characterization is identical to the 2 user interference channel with the exception of a singularity at α = 1 where d(1) = 1/K. The achievable schemes use multilevel coding with a nested lattice structure that opens the possibility that the sum of interfering signals can be decoded at a receiver even though the messages carried by the interfering signals are not decodable.", "" ] }
1311.7435
56385812
Evaluation of large-scale network systems and applications is usually done in one of three ways: simulations, real deployment on Internet, or on an emulated network testbed such as a cluster. Simulations can study very large systems but often abstract out many practical details, whereas real world tests are often quite small, on the order of a few hundred nodes at most, but have very realistic conditions. Clusters and other dedicated testbeds offer a middle ground between the two: large systems with real application code. They also typically allow configuring the testbed to enable repeatable experiments. In this paper we explore how to run large BitTorrent experiments in a cluster setup. We have chosen BitTorrent because the source code is available and it has been a popular target for research. Our contribution is twofold. First, we show how to tweak and configure the BitTorrent client to allow for a maximum number of clients to be run on a single machine, without running into any physical limits of the machine. Second, our results show that the behavior of BitTorrent can be very sensitive to the configuration and we revisit some existing BitTorrent research and consider the implications of our findings on previously published results. As we show in this paper, BitTorrent can change its behavior in subtle ways which are sometimes ignored in published works.
@cite_16 discuss the difficulties in validating large-scale peer-to-peer systems. The authors also propose a framework for performing large-scale experiments based on grid services. However, the question of how the experiments are affected by the underlying system details and the experiment settings is not addressed.
{ "cite_N": [ "@cite_16" ], "mid": [ "2342123287" ], "abstract": [ "The interesting properties of P2P systems (high availability despite node volatility, support for heterogeneous architectures, high scalability, etc.) make them attractive for distributed computing. However, conducting large-scale experiments with these systems arise as a major challenge. Simulation allows to model only partially the behavior of P2P prototypes. Experiments on real testbeds encounter serious difficulty with large-scale deployment and control of peers. This paper shows that using an optimized version of the JXTA Distributed Framework (JDF) allows to easily deploy, configure and control P2P experiments. We illustrate these features with sample tests performed with our JXTA-based grid data sharing service, for various large-scale configurations." ] }
1311.7435
56385812
Evaluation of large-scale network systems and applications is usually done in one of three ways: simulations, real deployment on Internet, or on an emulated network testbed such as a cluster. Simulations can study very large systems but often abstract out many practical details, whereas real world tests are often quite small, on the order of a few hundred nodes at most, but have very realistic conditions. Clusters and other dedicated testbeds offer a middle ground between the two: large systems with real application code. They also typically allow configuring the testbed to enable repeatable experiments. In this paper we explore how to run large BitTorrent experiments in a cluster setup. We have chosen BitTorrent because the source code is available and it has been a popular target for research. Our contribution is twofold. First, we show how to tweak and configure the BitTorrent client to allow for a maximum number of clients to be run on a single machine, without running into any physical limits of the machine. Second, our results show that the behavior of BitTorrent can be very sensitive to the configuration and we revisit some existing BitTorrent research and consider the implications of our findings on previously published results. As we show in this paper, BitTorrent can change its behavior in subtle ways which are sometimes ignored in published works.
However, only a few papers, e.g., @cite_17 @cite_1 @cite_5 , concern the accuracy of experiments and the bias of measurements. The work in @cite_1 investigated sampling bias in BitTorrent experiments. Even though the discussion focuses only on the approach of using an instrumented client to obtain data from a real-world swarm, the recommendations proposed in that paper are simple heuristics and guidelines. We have followed these recommendations in the design of our Logger module, which takes a snapshot of each peer every second during its whole life span. This strategy yields very reliable experiment data.
{ "cite_N": [ "@cite_5", "@cite_1", "@cite_17" ], "mid": [ "1986558455", "2118523285", "2128094442" ], "abstract": [ "Network latency and packet loss are considered to be an important requirement for realistic evaluation of Peer-to-Peer protocols. Dedicated clusters, such as Grid'5000, do not provide the variety of network latency and packet loss rates that can be found in the Internet. However, compared to the experiments performed on testbeds such as PlanetLab, the experiments performed on dedicated clusters are reproducible, as the computational resources are not shared. In this paper, we perform experiments to study the impact of network latency and packet loss on the time required to download a file using BitTorrent. In our experiments, we observe a less than 15 increase on the time required to download a file when we increase the round-trip time between any two peers, from 0 ms to 400 ms, and the packet loss rate, from 0 to 5 . Our main conclusion is that the underlying network latency and packet loss have a marginal impact on the time required to download a file using BitTorrent. Hence, dedicated clusters such as Grid'5000 can be safely used to perform realistic and reproducible BitTorrent experiments.", "Real-world measurements play an important role in understanding the characteristics and in improving the operation of BitTorrent, which is currently a popular Internet application. Much like measuring the Internet, the complexity and scale of the BitTorrent network make a single, complete measurement impractical. While a large number of measurements have already employed diverse sampling techniques to study parts of BitTorrent network, until now there exists no investigation of their sampling bias, that is, of their ability to objectively represent the characteristics of BitTorrent. In this work we present the first study of the sampling bias in BitTorrent measurements. We first introduce a novel taxonomy of sources of sampling bias in BitTorrent measurements. We then investigate the sampling among fifteen longterm BitTorrent measurements completed between 2004 and 2009, and find that different data sources and measurement techniques can lead to significantly different measurement results. Last, we formulate three recommendations to improve the design of future BitTorrent measurements, and estimate the cost of using these recommendations in practice.", "The observed performance by individual peers in BitTorrent can be simply measured by their average download rate. While it is often stated that the observed peer-level performance by BitTorrent clients is high, it is difficult to accurately verify this claim due to the large scale, distributed and dynamic nature of this P2P system. To provide a \"representative\" characterization of peer-level performance in BitTorrent, the following two important questions should be addressed: (i) What is the distribution of observed performance among participating peers in a torrent? (ii) What are the primary peer-or group-level properties that determine observed performance by individual peers? In this paper, we conduct a measurement study to tackle these two questions. Toward this end, we derive observed performance for nearly all participating peers along with their main peer-and (peer-view of) group-level properties in three different torrents. Our results show that the probability of experiencing certain level of performance has a roughly uniform distribution across the entire range of observed values. 
Furthermore, while the performance of each peer has the highest correlation with its outgoing bandwidth, there is no dominant peer-and group-level property that primarily determines the observed performance by the majority of peers." ] }
1311.7435
56385812
Evaluation of large-scale network systems and applications is usually done in one of three ways: simulations, real deployment on Internet, or on an emulated network testbed such as a cluster. Simulations can study very large systems but often abstract out many practical details, whereas real world tests are often quite small, on the order of a few hundred nodes at most, but have very realistic conditions. Clusters and other dedicated testbeds offer a middle ground between the two: large systems with real application code. They also typically allow configuring the testbed to enable repeatable experiments. In this paper we explore how to run large BitTorrent experiments in a cluster setup. We have chosen BitTorrent because the source code is available and it has been a popular target for research. Our contribution is twofold. First, we show how to tweak and configure the BitTorrent client to allow for a maximum number of clients to be run on a single machine, without running into any physical limits of the machine. Second, our results show that the behavior of BitTorrent can be very sensitive to the configuration and we revisit some existing BitTorrent research and consider the implications of our findings on previously published results. As we show in this paper, BitTorrent can change its behavior in subtle ways which are sometimes ignored in published works.
On the other hand, Rasti and Rejaie @cite_17 claim that the data obtained with this approach (injecting an instrumented client into a real-world swarm) is not representative and is biased from the outset. The main reason for their claim is that BitTorrent clients tend to cluster with other clients having similar upload bandwidths. This observation is certainly valid for measuring a real-world swarm on the Internet, but as our experiments are performed on a cluster where all peers are instrumented to provide logging information, such a bias does not exist in our experimental setup.
{ "cite_N": [ "@cite_17" ], "mid": [ "2128094442" ], "abstract": [ "The observed performance by individual peers in BitTorrent can be simply measured by their average download rate. While it is often stated that the observed peer-level performance by BitTorrent clients is high, it is difficult to accurately verify this claim due to the large scale, distributed and dynamic nature of this P2P system. To provide a \"representative\" characterization of peer-level performance in BitTorrent, the following two important questions should be addressed: (i) What is the distribution of observed performance among participating peers in a torrent? (ii) What are the primary peer-or group-level properties that determine observed performance by individual peers? In this paper, we conduct a measurement study to tackle these two questions. Toward this end, we derive observed performance for nearly all participating peers along with their main peer-and (peer-view of) group-level properties in three different torrents. Our results show that the probability of experiencing certain level of performance has a roughly uniform distribution across the entire range of observed values. Furthermore, while the performance of each peer has the highest correlation with its outgoing bandwidth, there is no dominant peer-and group-level property that primarily determines the observed performance by the majority of peers." ] }
1311.7435
56385812
Evaluation of large-scale network systems and applications is usually done in one of three ways: simulations, real deployment on Internet, or on an emulated network testbed such as a cluster. Simulations can study very large systems but often abstract out many practical details, whereas real world tests are often quite small, on the order of a few hundred nodes at most, but have very realistic conditions. Clusters and other dedicated testbeds offer a middle ground between the two: large systems with real application code. They also typically allow configuring the testbed to enable repeatable experiments. In this paper we explore how to run large BitTorrent experiments in a cluster setup. We have chosen BitTorrent because the source code is available and it has been a popular target for research. Our contribution is twofold. First, we show how to tweak and configure the BitTorrent client to allow for a maximum number of clients to be run on a single machine, without running into any physical limits of the machine. Second, our results show that the behavior of BitTorrent can be very sensitive to the configuration and we revisit some existing BitTorrent research and consider the implications of our findings on previously published results. As we show in this paper, BitTorrent can change its behavior in subtle ways which are sometimes ignored in published works.
A substantial amount of analytical work has also studied the clustering properties of BitTorrent. Based on an analysis of the choking algorithm, @cite_14 provides empirical evidence of BitTorrent's clustering and shows that peers with similar bandwidths tend to get clustered.
{ "cite_N": [ "@cite_14" ], "mid": [ "2171076347" ], "abstract": [ "Peer-to-peer protocols play an increasingly instrumental role in Internet content distribution. It is therefore important to gain a complete understanding of how these protocols behave in practice and how their operating parameters affect overall system performance. This paper presents the first detailed experimental investigation of the peer selection strategy in the popular BitTorrent protocol. By observing more than 40 nodes in instrumented private torrents, we validate three protocol properties that, though believed to hold, have not been previously demonstrated experimentally: the clustering of similar-bandwidth peers, the effectiveness of BitTorrent's sharing incentives, and the peers' high uplink utilization. In addition, we observe that BitTorrent's modified choking algorithmin seed state provides uniform service to all peers, and that an underprovisioned initial seed leads to absence of peer clustering and less effective sharing incentives. Based on our results, we provide guidelines for seed provisioning by content providers, and discuss a tracker protocol extension that addresses an identified limitation of the protocol." ] }
1311.7435
56385812
Evaluation of large-scale network systems and applications is usually done in one of three ways: simulations, real deployment on Internet, or on an emulated network testbed such as a cluster. Simulations can study very large systems but often abstract out many practical details, whereas real world tests are often quite small, on the order of a few hundred nodes at most, but have very realistic conditions. Clusters and other dedicated testbeds offer a middle ground between the two: large systems with real application code. They also typically allow configuring the testbed to enable repeatable experiments. In this paper we explore how to run large BitTorrent experiments in a cluster setup. We have chosen BitTorrent because the source code is available and it has been a popular target for research. Our contribution is twofold. First, we show how to tweak and configure the BitTorrent client to allow for a maximum number of clients to be run on a single machine, without running into any physical limits of the machine. Second, our results show that the behavior of BitTorrent can be very sensitive to the configuration and we revisit some existing BitTorrent research and consider the implications of our findings on previously published results. As we show in this paper, BitTorrent can change its behavior in subtle ways which are sometimes ignored in published works.
@cite_8 extend an earlier analytical model from @cite_0 and propose a new model for the analytical investigation of BitTorrent's clustering. Their model takes into account only peer selection in BitTorrent and ignores the effects of piece selection. They observe clustering behavior similar to what we have observed. However, their model and measurements exhibit a small discrepancy, which they conjecture is the result of probabilistic effects in experiments that are too small. Our results show that clustering in BitTorrent is actually an interplay of both the peer and piece selection algorithms, and we believe that their observed discrepancies are a result of their model ignoring piece selection. Although the effects of piece selection on clustering are small and hard to observe, our work, in particular on the download-constrained experiments, has shown that they cannot be ignored. Both @cite_8 and our work find the same effect of upload connections going to foreign peers while the majority of data comes from native peers.
{ "cite_N": [ "@cite_0", "@cite_8" ], "mid": [ "2166245380", "2144334146" ], "abstract": [ "In this paper, we develop simple models to study the performance of BitTorrent, a second generation peer-to-peer (P2P) application. We first present a simple fluid model and study the scalability, performance and efficiency of such a file-sharing mechanism. We then consider the built-in incentive mechanism of BitTorrent and study its effect on network performance. We also provide numerical results based on both simulations and real traces obtained from the Internet.", "A number of analytical models exists that capture various properties of the BitTorrent protocol. However, until now virtually all of these models have been based on the assumption that the peers in the system have homogeneous bandwidths. As this is highly unrealistic in real swarms, these models have very limited applicability. Most of all, these models implicitly ignore BitTorrent's most important property: peer selection based on the highest rate of reciprocity. As a result, these models are not suitable for understanding or predicting the properties of real BitTorrent networks. Furthermore, they are hardly of use in the design of realistic BitTorrent simulators and new P2P protocols. In this paper, we extend existing work by presenting a model of a swarm in BitTorrent where peers have arbitrary upload and download bandwidths. In our model we group peers with (roughly) the same bandwidth in classes, and then analyze the allocation of upload slots from peers in one class to peers in another class. We show that our model accurately predicts the bandwidth clustering phenomenon observed experimentally in other work, and we analyze the resulting data distribution in swarms. We validate our model with experiments using real BitTorrent clients. Our model captures the effects of BitTorrent's well-known ‘tit-for-tat’ mechanism in bandwidth-inhomogeneous swarms and provides an accurate mathematical description of the resulting dynamics." ] }
1311.7435
56385812
Evaluation of large-scale network systems and applications is usually done in one of three ways: simulations, real deployment on Internet, or on an emulated network testbed such as a cluster. Simulations can study very large systems but often abstract out many practical details, whereas real world tests are often quite small, on the order of a few hundred nodes at most, but have very realistic conditions. Clusters and other dedicated testbeds offer a middle ground between the two: large systems with real application code. They also typically allow configuring the testbed to enable repeatable experiments. In this paper we explore how to run large BitTorrent experiments in a cluster setup. We have chosen BitTorrent because the source code is available and it has been a popular target for research. Our contribution is twofold. First, we show how to tweak and configure the BitTorrent client to allow for a maximum number of clients to be run on a single machine, without running into any physical limits of the machine. Second, our results show that the behavior of BitTorrent can be very sensitive to the configuration and we revisit some existing BitTorrent research and consider the implications of our findings on previously published results. As we show in this paper, BitTorrent can change its behavior in subtle ways which are sometimes ignored in published works.
The work by @cite_5 is the closest to ours. The authors discuss the rationale for performing BitTorrent experiments on a cluster. However, their discussion focuses on the marginal influence of various RTTs and packet loss rates on the average download rate, and concludes that the effects of changing RTTs and packet loss rates are so small that they can be discounted in the evaluation. Our work focuses on how to design an experiment on a cluster properly, i.e., what the "safe region" for a correct experiment is and how BitTorrent behaves when experiments are performed around the system capacity limit.
{ "cite_N": [ "@cite_5" ], "mid": [ "1986558455" ], "abstract": [ "Network latency and packet loss are considered to be an important requirement for realistic evaluation of Peer-to-Peer protocols. Dedicated clusters, such as Grid'5000, do not provide the variety of network latency and packet loss rates that can be found in the Internet. However, compared to the experiments performed on testbeds such as PlanetLab, the experiments performed on dedicated clusters are reproducible, as the computational resources are not shared. In this paper, we perform experiments to study the impact of network latency and packet loss on the time required to download a file using BitTorrent. In our experiments, we observe a less than 15 increase on the time required to download a file when we increase the round-trip time between any two peers, from 0 ms to 400 ms, and the packet loss rate, from 0 to 5 . Our main conclusion is that the underlying network latency and packet loss have a marginal impact on the time required to download a file using BitTorrent. Hence, dedicated clusters such as Grid'5000 can be safely used to perform realistic and reproducible BitTorrent experiments." ] }
1311.7435
56385812
Evaluation of large-scale network systems and applications is usually done in one of three ways: simulations, real deployment on Internet, or on an emulated network testbed such as a cluster. Simulations can study very large systems but often abstract out many practical details, whereas real world tests are often quite small, on the order of a few hundred nodes at most, but have very realistic conditions. Clusters and other dedicated testbeds offer a middle ground between the two: large systems with real application code. They also typically allow configuring the testbed to enable repeatable experiments. In this paper we explore how to run large BitTorrent experiments in a cluster setup. We have chosen BitTorrent because the source code is available and it has been a popular target for research. Our contribution is twofold. First, we show how to tweak and configure the BitTorrent client to allow for a maximum number of clients to be run on a single machine, without running into any physical limits of the machine. Second, our results show that the behavior of BitTorrent can be very sensitive to the configuration and we revisit some existing BitTorrent research and consider the implications of our findings on previously published results. As we show in this paper, BitTorrent can change its behavior in subtle ways which are sometimes ignored in published works.
The experiment setup in @cite_5 is very similar to the case discussed in our paper. The authors used 3 nodes for deploying leechers (100 leechers on each node) and performed a homogeneous upload-constrained experiment. The maximum upload rate was set to 100 KB/s. They did not consider possible bottlenecks in their experiment setup. Using our capacity planning method from , we can see that their experiments require only on the order of 3 MB/s of bandwidth between nodes and on the loopback. Given that they were using modern computers on the Grid'5000 testbed, they should be well below the system capacity limit. Our work therefore validates their experiment setting as being correct.
{ "cite_N": [ "@cite_5" ], "mid": [ "1986558455" ], "abstract": [ "Network latency and packet loss are considered to be an important requirement for realistic evaluation of Peer-to-Peer protocols. Dedicated clusters, such as Grid'5000, do not provide the variety of network latency and packet loss rates that can be found in the Internet. However, compared to the experiments performed on testbeds such as PlanetLab, the experiments performed on dedicated clusters are reproducible, as the computational resources are not shared. In this paper, we perform experiments to study the impact of network latency and packet loss on the time required to download a file using BitTorrent. In our experiments, we observe a less than 15 increase on the time required to download a file when we increase the round-trip time between any two peers, from 0 ms to 400 ms, and the packet loss rate, from 0 to 5 . Our main conclusion is that the underlying network latency and packet loss have a marginal impact on the time required to download a file using BitTorrent. Hence, dedicated clusters such as Grid'5000 can be safely used to perform realistic and reproducible BitTorrent experiments." ] }
1311.7476
2951585682
We prove the transformation formula of Donaldson-Thomas (DT) invariants counting two dimensional torsion sheaves on Calabi-Yau 3-folds under flops. The error term is described by the Dedekind eta function and the Jacobi theta function, and our result gives evidence of a 3-fold version of Vafa-Witten's S-duality conjecture. As an application, we prove a blow-up formula of DT type invariants on the total spaces of canonical line bundles on smooth projective surfaces. It gives an analogue of the similar blow-up formula in the original S-duality conjecture by Yoshioka, Li-Qin and G "ottsche.
A flop formula for DT type curve counting invariants was obtained in the papers @cite_20 , @cite_21 , @cite_0 , @cite_3 . Among them, the papers @cite_0 , @cite_3 (see also @cite_36 ) use similar Hall algebra methods, but we need to work with the relevant abelian category @math , which did not appear in the above papers. On the other hand, there are few works in the mathematical literature in which DT invariants of the form @math are studied. In @cite_4 , the modularity of these invariants is discussed for nodal K3 fibrations using the degeneration formula. In @cite_40 , @cite_38 , some relationships between the invariants @math and DT type curve counting invariants are studied. In @cite_5 , the invariant @math on local @math is studied for small @math . In the physics literature, a few D4 brane counting invariants, which correspond to invariants of the form @math , have been computed @cite_35 , @cite_32 . Also, the flop formula for D4D2D0 bound states on the resolved conifold is studied in @cite_14 , @cite_16 using Kontsevich-Soibelman's wall-crossing formula @cite_23 . The result of Theorem is interpreted as a mathematical justification and a generalization of the arguments in the physics articles @cite_14 , @cite_16 .
{ "cite_N": [ "@cite_38", "@cite_35", "@cite_14", "@cite_4", "@cite_36", "@cite_21", "@cite_32", "@cite_3", "@cite_0", "@cite_40", "@cite_23", "@cite_5", "@cite_16", "@cite_20" ], "mid": [ "2023137590", "1994861096", "2018505968", "1579033279", "", "2962730575", "2035507523", "", "", "", "2154483904", "1780653951", "2082888708", "2065330478" ], "abstract": [ "Motivated by S-duality modularity conjectures in string theory, we define new invariants counting a restricted class of two-dimensional torsion sheaves, enumerating pairs (Z H ) in a Calabi–Yau threefold (X ). Here (H ) is a member of a sufficiently positive linear system and (Z ) is a one-dimensional subscheme of it. The associated sheaf is the ideal sheaf of (Z H ), pushed forward to (X ) and considered as a certain Joyce–Song pair in the derived category of (X ). We express these invariants in terms of the MNOP invariants of (X ).", "The modified elliptic genus for an M5-brane wrapped on a four-cycle of a Calabi-Yau threefold encodes the degeneracies of an infinite set of BPS states in four dimensions. By holomorphy and modular invariance, it can be determined completely from the knowledge of a finite set of such BPS states. We show the feasibility of such a computation and determine the exact modified elliptic genus for an M5-brane wrapping a hyperplane section of the quintic threefold.", "We study the wall-crossing phenomena of D4-D2-D0 bound states with two units of D4-branechargeontheresolvedconifold. We identify the walls of marginal stability and evaluate the discrete changes of the BPS indices by using the Kontsevich-Soibelman wall-crossing formula. In particular, we find that the field theories on D4-branes in two large radius limits are properly connected by the wall-crossings involving the flop transition of the conifold. We also find that in one of the large radius limits there are stable bound states of two D4-D2-D0 fragments.", "Motivated by the S-duality conjecture, we study the Donaldson-Thomas invariants of the 2 dimensional Gieseker stable sheaves on a threefold. These sheaves are supported on the fibers of a nonsingular threefold X fibered over a nonsingular curve. In the case where X is a K3 fibration, we express these invariants in terms of the Euler characteristic of the Hilbert scheme of points on the K3 fiber and the Noether-Lefschetz numbers of the fibration. We prove that a certain generating function of these invariants is a vector modular form of weight -3 2 as predicted in S-duality.", "", "", "We determine the modified elliptic genus of an M5-brane wrapped on various one modulus Calabi-Yau spaces, using modular invariance together with some known Gopakumar-Vafa invariants of small degrees. As a bonus, we find nontrivial relations among Gopakumar-Vafa invariants of different degrees and genera from modular invariance.", "", "", "", "We define new invariants of 3d Calabi-Yau categories endowed with a stability structure. Intuitively, they count the number of semistable objects with fixed class in the K-theory of the category (\"number of BPS states with given charge\" in physics language). Formally, our motivic DT-invariants are elements of quantum tori over a version of the Grothendieck ring of varieties over the ground field. Via the quasi-classical limit \"as the motive of affine line approaches to 1\" we obtain numerical DT-invariants which are closely related to those introduced by Behrend. 
We study some properties of both motivic and numerical DT-invariants including the wall-crossing formulas and integrality. We discuss the relationship with the mathematical works (in the non-triangulated case) of Joyce, Bridgeland and Toledano-Laredo, as well as with works of physicists on Seiberg-Witten model (string junctions), classification of N=2 supersymmetric theories (Cecotti-Vafa) and structure of the moduli space of vector multiplets. Relating the theory of 3d Calabi-Yau categories with distinguished set of generators (called cluster collection) with the theory of quivers with potential we found the connection with cluster transformations and cluster varieties (both classical and quantum).", "Let X be the total space of the canonical bundle of P^2. We study the generalized Donaldson-Thomas invariants, defined in the work of Joyce-Song, of the moduli spaces of the 2-dimensional Gieseker semistable sheaves on X with first Chern class equal to k times the class of the zero section of X. When k=1, 2 or 3, and semistability implies stability, we express the invariants in terms of known modular forms. We prove a combinatorial formula for the invariants when k=2 in the presence of the strictly semistable sheaves, and verify the BPS integrality conjecture of Joyce-Song in some cases.", "We discuss the wall-crossing of the BPS bound states of a non-compact holomorphic D4-brane with D2 and D0-branes on the conifold. We use the Kontsevich-Soibelman wall-crossing formula and analyze the BPS degeneracy in various chambers. In particular we obtain a relation between BPS degeneracies in two limiting attractor chambers related by a flop transition. Our result is consistent with known results and predicts BPS degeneracies in all chambers.", "We prove a comparison formula for the Donaldson-Thomas curve-counting invariants of two smooth and projective Calabi-Yau threefolds related by a flop. By results of Bridgeland any two such varieties are derived equivalent. Furthermore there exist pairs of categories of perverse coherent sheaves on both sides which are swapped by this equivalence. Using the theory developed by Joyce we construct the motivic Hall algebras of these categories. These algebras provide a bridge relating the invariants on both sides of the flop." ] }
1311.7090
1993731617
Stepwise refinement of algebraic specifications is a well known formal methodology for program development. However, traditional notions of refinement based on signature morphisms are often too rigid to capture a number of relevant transformations in the context of software design, reuse, and adaptation. This paper proposes a new approach to refinement in which signature morphisms are replaced by logical interpretations as a means to witness refinements. The approach is first presented in the context of equational logic, and later generalised to deductive systems of arbitrary dimension. This allows, for example, refining sentential into equational specifications and the latter into modal ones.
The approach to refinement proposed in this paper, in particular when specialised to 2-dimensional deductive systems, should also be related to the extensive work of Maibaum, Sadler and Veloso in the 1970s and 1980s, as documented, for example, in @cite_57 @cite_11 . The authors resort to interpretations between theories and conservative extensions to define a syntactic notion of refinement according to which a specification @math refines a specification @math if there is an interpretation of @math into a conservative extension of @math ; a schematic rendering of this notion is sketched below. It is shown that these refinements can be vertically composed, thus supporting stepwise development. This notion is, however, somewhat restrictive, since it requires all maps to be conservative, whereas in program development it is usually enough to guarantee that requirements are preserved by the underlying translation. Moreover, in their approach, the interpretation edge of a refinement diagram needs to satisfy extra properties.
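To make the above notion concrete, the following display is a schematic rendering of the refinement relation described in this paragraph. The names S_0, S_1, S_1' and the interpretation i are introduced here purely for illustration (the original identifiers are elided as @math in the text), so this is a sketch of the cited notion rather than a verbatim reproduction of it.

```latex
% Schematic form of refinement via interpretations and conservative extensions.
% S_0, S_1, S_1' and i are illustrative names, not taken from the cited papers.
\[
  S_0 \rightsquigarrow S_1
  \quad\text{iff}\quad
  \text{there exist a conservative extension } S_1' \text{ of } S_1
  \text{ and an interpretation } i : S_0 \longrightarrow S_1'.
\]
% Vertical composition (stepwise development):
\[
  S_0 \rightsquigarrow S_1 \ \text{ and }\ S_1 \rightsquigarrow S_2
  \quad\Longrightarrow\quad
  S_0 \rightsquigarrow S_2 .
\]
```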
{ "cite_N": [ "@cite_57", "@cite_11" ], "mid": [ "1546101356", "1485360759" ], "abstract": [ "It has become customary to focus attention on the semantic aspects of specification and implementation, a model theoretic or algebraic viewpoint. We feel, however, that certain concepts are best dealt with at the syntactic level, rather than via a detour through semantics, and that implementation is one of these concepts. We regard logic as the most appropriate medium for talking about specification (whether of abstract data types, programs, databases, specifications — as an interpretation between theories say, rather than something to do with the embedding of models or mapping of algebras. In this paper, we give a syntactic account of implementation and prove the basic results — composability of implementations and how to deal with structured (hierarchical) specifications modularly — for abstract data types.", "This paper outlines a logical approach to abstract data types, which is motivated by, and more adequate for (than algebraic approaches), the practice of programming. Abstract data types are specified as axiomatic theories and notions concerning the former are captured by syntactical concepts concerning the latter. The basic concepts of nambility, conservative extensions and interpretations of theories explain implementation, refinement and parameterisation. Being simple, natural and flexible, this approach is quite appropriate for program development." ] }
1311.7090
1993731617
Stepwise refinement of algebraic specifications is a well known formal methodology for program development. However, traditional notions of refinement based on signature morphisms are often too rigid to capture a number of relevant transformations in the context of software design, reuse, and adaptation. This paper proposes a new approach to refinement in which signature morphisms are replaced by logical interpretations as a means to witness refinements. The approach is first presented in the context of equational logic, and later generalised to deductive systems of arbitrary dimension. This allows, for example, refining sentential into equational specifications and the latter into modal ones.
As related work one should also mention @cite_71 @cite_53 , where interpretations between theories are studied in the abstract framework of @math -institutions. The first reference is a generalisation of the work of Maibaum and his collaborators, whereas the second one generalises the way algebraic semantics of sentential logics is dealt with in abstract algebraic logic to the abstract setting of @math -institutions. Similar developments could arise by considering institutions and their (co-)morphisms @cite_29 @cite_15 @cite_62 . The work of Meseguer @cite_10 , in which a theory of interpretations between logical systems is developed, should also be mentioned.
{ "cite_N": [ "@cite_62", "@cite_53", "@cite_29", "@cite_71", "@cite_15", "@cite_10" ], "mid": [ "1533562965", "1982879916", "2010883554", "1726442598", "", "" ], "abstract": [ "Seven years of COMPASS.- Inductively defined relations: A brief tutorial extended abstract.- On the role of category theory in the area of algebraic specifications.- Unification of theories: A challenge for computing science.- The larch shared language: Some open problems.- The lambda calculus as an abstract data type.- Unifying theories in different institutions.- Interchange format for inter-operability of tools and translation.- Experiments with partial evaluation domains for rewrite specifications.- Class-sort polymorphism in GLIDER.- Deontic concepts in the algebraic specification of dynamic systems: The permission case.- Reification - Changing viewpoint but preserving truth.- A category-based equational logic semantics to constraint programming.- Concurrent state transformations on abstract data types.- A view on implementing processes: Categories of circuits.- Combining algebraic and set-theoretic specifications.- Minimal term rewriting systems.- InterACT: An interactive theorem and completeness prover for algebraic specifications with conditional equations.- Rewriting and reasoning with set-relations II: The non-ground case completeness.- Termination of curryfied rewrite systems.- Formal specifications and test: Correctness and oracle.- Behavioural equivalence, bisimulation, and minimal realisation.- Using limits of parchments to systematically construct institutions of partial algebras.- Behavioural specifications in type theory.- Swinging data types.- Context institutions.- Object-oriented functional programming and type reconstruction.- Moving between logical systems.- Modular algebraic specifications and the orientation of equations into rewrite rules.- A model for I O in equational languages with don't care non-determinism.- Tool design for structuring mechanisms for algebraic specification languages with initial semantics.", "Various aspects of the work of Blok and Rebagliato on the algebraic semantics for deductive systems are studied in the context of logics formalized as π-institutions. Three kinds of semantics are surveyed: institution, matrix (system) and algebraic (system) semantics, corresponding, respectively, to the generalized matrix, matrix and algebraic semantics of the theory of sentential logics. After some connections between matrix and algebraic semantics are revealed, it is shown that every (finitary) N -rule based extension of an N -rule based π-institution possessing an algebraic semantics also possesses an algebraic semantics. This result abstracts one of the main theorems of Blok and Rebagliato. An attempt at a Blok-Rebagliato-style characterization of those π-institutions with a mono-unary category of natural transformations on their sentence functors having an algebraic semantics is also made. Finally, a necessary condition for a π-institution to possess an algebraic semantics is provided. c", "There is a population explosion among the logical systems used in computing science. Examples include first-order logic, equational logic, Horn-clause logic, higher-order logic, infinitary logic, dynamic logic, intuitionistic logic, order-sorted logic, and temporal logic; moreover, there is a tendency for each theorem prover to have its own idiosyncratic logical system. 
The concept of institution is introduced to formalize the informal notion of “logical system.” The major requirement is that there is a satisfaction relation between models and sentences that is consistent under change of notation. Institutions enable abstracting away from syntactic and semantic detail when working on language structure “in-the-large”; for example, we can define language features for building large logical system. This applies to both specification languages and programming languages. Institutions also have applications to such areas as database theory and the semantics of artificial and natural languages. A first main result of this paper says that any institution such that signatures (which define notation) can be glued together, also allows gluing together theories (which are just collections of sentences over a fixed signature). A second main result considers when theory structuring is preserved by institution morphisms. A third main result gives conditions under which it is sound to use a theorem prover for one institution on theories from another. A fourth main result shows how to extend institutions so that their theories may include, in addition to the original sentences, various kinds of constraint that are useful for defining abstract data types, including both “data” and “hierarchy” constraints. Further results show how to define institutions that allow sentences and constraints from two or more institutions. All our general results apply to such “duplex” and “multiplex” institutions.", "The structural property of π-institutions which requires consequence to be preserved under changes of language is weakened. The proposed weakly structural π-institutions encompass logics in which consequence does depend on the choice of non-logical symbols by associating locality conditions with signatures or signature morphisms. They also enable new logics to be defined by reusing existing ones, extending and adapting them in order to build formalisms that better fit the applications whose specification they are intended to support.", "", "" ] }
1311.6876
1649405193
Community Question Answering (CQA) websites have become valuable repositories which host a massive volume of human knowledge. To maximize the utility of such knowledge, it is essential to evaluate the quality of an existing question or answer, especially soon after it is posted on the CQA website. In this paper, we study the problem of inferring the quality of questions and answers through a case study of a software CQA (Stack Overflow). Our key finding is that the quality of an answer is strongly positively correlated with that of its question. Armed with this observation, we propose a family of algorithms to jointly predict the quality of questions and answers, for both quantifying numerical quality scores and differentiating the high-quality questions/answers from those of low quality. We conduct extensive experimental evaluations to demonstrate the effectiveness and efficiency of our methods.
Question answering websites have become valuable knowledge bases which receive millions of visits and queries each day. As a result, several methods have been proposed to identify relevant questions for a given query (e.g., @cite_23 ). To further improve the usefulness of the returned questions, the quality of these questions should also be considered. For example, @cite_22 define question quality as the likelihood that a question is repeatedly asked by people, and evaluate this measure in the setting of question search. In addition to re-ranking the returned questions for a given query, question quality can be used to recommend questions for prominent placement so that users can easily discover them @cite_16 .
{ "cite_N": [ "@cite_16", "@cite_22", "@cite_23" ], "mid": [ "1969085038", "2131013991", "2213788665" ], "abstract": [ "At community question answering services, users are usually encouraged to rate questions by votes. The questions with the most votes are then recommended and ranked on the top when users browse questions by category. As users are not obligated to rate questions, usually only a small proportion of questions eventually gets rating. Thus, in this paper, we are concerned with learning to recommend questions from user ratings of a limited size. To overcome the data sparsity, we propose to utilize questions without users rating as well. Further, as there exist certain noises within user ratings (the preference of some users expressed in their ratings diverges from that of the majority of users), we design a new algorithm called 'majority-based perceptron algorithm' which can avoid the influence of noisy instances by emphasizing its learning over data instances from the majority users. Experimental results from a large collection of real questions confirm the effectiveness of our proposals.", "In this paper, we propose a notion of 'question utility' for studying usefulness of questions and show how question utility can be integrated into question search as static ranking. To measure question utility, we examine three methods: (a) a method of employing the language model to estimate the probability that a question is generated from a question collection and then using the probability as question utility; (b) a method of using the LexRank algorithm to evaluate centrality of questions and then using the centrality as question utility; and (c) the combination of (a) and (b). To use question utility in question search, we employ a log linear model for combining relevance score in question search and utility score regarding question utility. Our experimental results with the questions about 'travel' from Yahoo! Answers show that question utility can be effective in boosting up ranks of generally useful questions.", "Online forums contain interactive and semantically related discussions on various questions. Extracted question-answer archive is invaluable knowledge, which can be used to improve Question Answering services. In this paper, we address the problem of Question Suggestion, which targets at suggesting questions that are semantically related to a queried question. Existing bag-of-words approaches suffer from the shortcoming that they could not bridge the lexical chasm between semantically related questions. Therefore, we present a new framework to suggest questions, and propose the Topic-enhanced Translation-based Language Model (TopicTRLM) which fuses both the lexical and latent semantic knowledge. Extensive experiments have been conducted with a large real world data set. Experimental results indicate our approach is very effective and outperforms other popular methods in several metrics." ] }
1311.6876
1649405193
Community Question Answering (CQA) websites have become valuable repositories which host a massive volume of human knowledge. To maximize the utility of such knowledge, it is essential to evaluate the quality of an existing question or answer, especially soon after it is posted on the CQA website. In this paper, we study the problem of inferring the quality of questions and answers through a case study of a software CQA (Stack Overflow). Our key finding is that the quality of an answer is strongly positively correlated with that of its question. Armed with this observation, we propose a family of algorithms to jointly predict the quality of questions and answers, for both quantifying numerical quality scores and differentiating the high-quality questions/answers from those of low quality. We conduct extensive experimental evaluations to demonstrate the effectiveness and efficiency of our methods.
Similar to question quality prediction, answer quality prediction could also be used to directly identify high-quality answers for a given user query. For example, @cite_17 and @cite_11 use human annotators to label the quality of the answers, and evaluate the usefulness of answer quality by incorporating it to improve retrieval performance.
{ "cite_N": [ "@cite_11", "@cite_17" ], "mid": [ "2062020370", "2102956348" ], "abstract": [ "Community Question Answering (QA) portals contain questions and answers contributed by hundreds of millions of users. These databases of questions and answers are of great value if they can be used directly to answer questions from any user. In this research, we address this collaborative QA task by drawing knowledge from the crowds in community QA portals such as Yahoo! Answers. Despite their popularity, it is well known that answers in community QA portals have unequal quality. We therefore propose a quality-aware framework to design methods that select answers from a community QA portal considering answer quality in addition to answer relevance. Besides using answer features for determining answer quality, we introduce several other quality-aware QA methods using answer quality derived from the expertise of answerers. Such expertise can be question independent or question dependent. We evaluate our proposed methods using a database of 95K questions and 537K answers obtained from Yahoo! Answers. Our experiments have shown that answer quality can improve QA performance significantly. Furthermore, question dependent expertise based methods are shown to outperform methods using answer features only. It is also found that there are also good answers not among the best answers identified by Yahoo! Answers users.", "New types of document collections are being developed by various web services. The service providers keep track of non-textual features such as click counts. In this paper, we present a framework to use non-textual features to predict the quality of documents. We also show our quality measure can be successfully incorporated into the language modeling-based retrieval model. We test our approach on a collection of question and answer pairs gathered from a community based question answering service where people ask and answer questions. Experimental results using our quality measure show a significant improvement over our baseline." ] }
1311.6876
1649405193
Community Question Answering (CQA) websites have become valuable repositories which host a massive volume of human knowledge. To maximize the utility of such knowledge, it is essential to evaluate the quality of an existing question or answer, especially soon after it is posted on the CQA website. In this paper, we study the problem of inferring the quality of questions and answers through a case study of a software CQA (Stack Overflow). Our key finding is that the quality of an answer is strongly positively correlated with that of its question. Armed with this observation, we propose a family of algorithms to jointly predict the quality of questions and answers, for both quantifying numerical quality scores and differentiating the high-quality questions/answers from those of low quality. We conduct extensive experimental evaluations to demonstrate the effectiveness and efficiency of our methods.
@cite_14 and @cite_8 also aim to predict the quality of both questions and answers. Their focus is to tackle the sparsity problem where only a small number of questions/answers are labeled. As shown in our experiments, our method can also deal with the sparsity problem by leveraging the quality correlation between questions and their answers. In terms of methodology, @cite_14 still treat question quality prediction and answer quality prediction as separate problems. For the method proposed by @cite_8 , as pointed out in Section 5.1, there are two important differences between their method ( CQA-MR ) and ours, which lead to a significant performance difference (see Fig. for an example).
{ "cite_N": [ "@cite_14", "@cite_8" ], "mid": [ "2037858832", "2159133636" ], "abstract": [ "The quality of user-generated content varies drastically from excellent to abuse and spam. As the availability of such content increases, the task of identifying high-quality content sites based on user contributions --social media sites -- becomes increasingly important. Social media in general exhibit a rich variety of information sources: in addition to the content itself, there is a wide array of non-content information available, such as links between items and explicit quality ratings from members of the community. In this paper we investigate methods for exploiting such community feedback to automatically identify high quality content. As a test case, we focus on Yahoo! Answers, a large community question answering portal that is particularly rich in the amount and types of content and social interactions available in it. We introduce a general classification framework for combining the evidence from different sources of information, that can be tuned automatically for a given social media type and quality definition. In particular, for the community question answering domain, we show that our system is able to separate high-quality items from the rest with an accuracy close to that of humans", "Community Question Answering (CQA) has emerged as a popular forum for users to pose questions for other users to answer. Over the last few years, CQA portals such as Naver and Yahoo! Answers have exploded in popularity, and now provide a viable alternative to general purpose Web search. At the same time, the answers to past questions submitted in CQA sites comprise a valuable knowledge repository which could be a gold mine for information retrieval and automatic question answering. Unfortunately, the quality of the submitted questions and answers varies widely - increasingly so that a large fraction of the content is not usable for answering queries. Previous approaches for retrieving relevant and high quality content have been proposed, but they require large amounts of manually labeled data -- which limits the applicability of the supervised approaches to new sites and domains. In this paper we address this problem by developing a semi-supervised coupled mutual reinforcement framework for simultaneously calculating content quality and user reputation, that requires relatively few labeled examples to initialize the training process. Results of a large scale evaluation demonstrate that our methods are more effective than previous approaches for finding high-quality answers, questions, and users. More importantly, our quality estimation significantly improves the accuracy of search over CQA archives over the state-of-the-art methods." ] }
1311.6876
1649405193
Community Question Answering (CQA) websites have become valuable repositories which host a massive volume of human knowledge. To maximize the utility of such knowledge, it is essential to evaluate the quality of an existing question or answer, especially soon after it is posted on the CQA website. In this paper, we study the problem of inferring the quality of questions and answers through a case study of a software CQA (Stack Overflow). Our key finding is that the quality of an answer is strongly positively correlated with that of its question. Armed with this observation, we propose a family of algorithms to jointly predict the quality of questions and answers, for both quantifying numerical quality scores and differentiating the high-quality questions/answers from those of low quality. We conduct extensive experimental evaluations to demonstrate the effectiveness and efficiency of our methods.
There are several types of measures to quantify the quality of questions and answers. First, @cite_24 propose to measure questioner satisfaction, i.e., which answer the questioner will probably choose as the accepted answer. This problem is later followed up by several researchers @cite_2 @cite_6 . However, accepted answers are not necessarily the highest-quality answers due to timing and subjectivity issues.
{ "cite_N": [ "@cite_24", "@cite_6", "@cite_2" ], "mid": [ "2161152375", "2129251351", "2057415299" ], "abstract": [ "Question answering communities such as Naver and Yahoo! Answers have emerged as popular, and often effective, means of information seeking on the web. By posting questions for other participants to answer, information seekers can obtain specific answers to their questions. Users of popular portals such as Yahoo! Answers already have submitted millions of questions and received hundreds of millions of answers from other participants. However, it may also take hours --and sometime days-- until a satisfactory answer is posted. In this paper we introduce the problem of predicting information seeker satisfaction in collaborative question answering communities, where we attempt to predict whether a question author will be satisfied with the answers submitted by the community participants. We present a general prediction model, and develop a variety of content, structure, and community-focused features for this task. Our experimental results, obtained from a largescale evaluation over thousands of real questions and user ratings, demonstrate the feasibility of modeling and predicting asker satisfaction. We complement our results with a thorough investigation of the interactions and information seeking patterns in question answering communities that correlate with information seeker satisfaction. Our models and predictions could be useful for a variety of applications such as user intent inference, answer ranking, interface design, and query suggestion and routing.", "Yahoo Answers (YA) is a large and diverse question-answer forum, acting not only as a medium for sharing technical knowledge, but as a place where one can seek advice, gather opinions, and satisfy one's curiosity about a countless number of things. In this paper, we seek to understand YA's knowledge sharing and activity. We analyze the forum categories and cluster them according to content characteristics and patterns of interaction among the users. While interactions in some categories resemble expertise sharing forums, others incorporate discussion, everyday advice, and support. With such a diversity of categories in which one can participate, we find that some users focus narrowly on specific topics, while others participate across categories. This not only allows us to map related categories, but to characterize the entropy of the users' interests. We find that lower entropy correlates with receiving higher answer ratings, but only for categories where factual expertise is primarily sought after. We combine both user attributes and answer characteristics to predict, within a given category, whether a particular answer will be chosen as the best answer by the asker.", "Question answering (QA) helps one go beyond traditional keywords-based querying and retrieve information in more precise form than given by a document or a list of documents. Several community-based QA (CQA) services have emerged allowing information seekers pose their information need as questions and receive answers from their fellow users. A question may receive multiple answers from multiple users and the asker or the community can choose the best answer. While the asker can thus indicate if he was satisfied with the information he received, there is no clear way of evaluating the quality of that information. We present a study to evaluate and predict the quality of an answer in a CQA setting. We chose Yahoo! 
Answers as such CQA service and selected a small set of questions, each with at least five answers. We asked Amazon Mechanical Turk workers to rate the quality of each answer for a given question based on 13 different criteria. Each answer was rated by five different workers. We then matched their assessments with the actual asker's rating of a given answer. We show that the quality criteria we used faithfully match with asker's perception of a quality answer. We furthered our investigation by extracting various features from questions, answers, and the users who posted them, and training a number of classifiers to select the best answer using those features. We demonstrate a high predictability of our trained models along with the relative merits of each of the features for such prediction. These models support our argument that in case of CQA, contextual information such as a user's profile, can be critical in evaluating and predicting content quality." ] }
1311.6876
1649405193
Community Question Answering (CQA) websites have become valuable repositories which host a massive volume of human knowledge. To maximize the utility of such knowledge, it is essential to evaluate the quality of an existing question or answer, especially soon after it is posted on the CQA website. In this paper, we study the problem of inferring the quality of questions and answers through a case study of a software CQA (Stack Overflow). Our key finding is that the quality of an answer is strongly positively correlated with that of its question. Armed with this observation, we propose a family of algorithms to jointly predict the quality of questions and answers, for both quantifying numerical quality scores and differentiating the high-quality questions/answers from those of low quality. We conduct extensive experimental evaluations to demonstrate the effectiveness and efficiency of our methods.
@cite_24 first propose questioner satisfaction prediction in CQA. They find that question features (e.g., length of the subject, posting time, etc.) and questioner history information are very useful for the prediction task. Shah and Pomerantz @cite_2 also predict which answers the questioner would be satisfied with. They extract a set of features from the data and find that these features work better than human labeling on several quality aspects in terms of questioner satisfaction prediction. @cite_6 study the forum categories and user behavior patterns in Yahoo! Answers. They also formulate a classification problem to predict the acceptance of answers based on features from the question and the answerer. All the above work considers the prediction of accepted answers. However, accepted answers are not necessarily the highest-quality (highest-score) answers due to timing and subjectivity issues. Different from the above work, we aim to predict the quality of questions and answers as voted on and determined by the whole community.
{ "cite_N": [ "@cite_24", "@cite_6", "@cite_2" ], "mid": [ "2161152375", "2129251351", "2057415299" ], "abstract": [ "Question answering communities such as Naver and Yahoo! Answers have emerged as popular, and often effective, means of information seeking on the web. By posting questions for other participants to answer, information seekers can obtain specific answers to their questions. Users of popular portals such as Yahoo! Answers already have submitted millions of questions and received hundreds of millions of answers from other participants. However, it may also take hours --and sometime days-- until a satisfactory answer is posted. In this paper we introduce the problem of predicting information seeker satisfaction in collaborative question answering communities, where we attempt to predict whether a question author will be satisfied with the answers submitted by the community participants. We present a general prediction model, and develop a variety of content, structure, and community-focused features for this task. Our experimental results, obtained from a largescale evaluation over thousands of real questions and user ratings, demonstrate the feasibility of modeling and predicting asker satisfaction. We complement our results with a thorough investigation of the interactions and information seeking patterns in question answering communities that correlate with information seeker satisfaction. Our models and predictions could be useful for a variety of applications such as user intent inference, answer ranking, interface design, and query suggestion and routing.", "Yahoo Answers (YA) is a large and diverse question-answer forum, acting not only as a medium for sharing technical knowledge, but as a place where one can seek advice, gather opinions, and satisfy one's curiosity about a countless number of things. In this paper, we seek to understand YA's knowledge sharing and activity. We analyze the forum categories and cluster them according to content characteristics and patterns of interaction among the users. While interactions in some categories resemble expertise sharing forums, others incorporate discussion, everyday advice, and support. With such a diversity of categories in which one can participate, we find that some users focus narrowly on specific topics, while others participate across categories. This not only allows us to map related categories, but to characterize the entropy of the users' interests. We find that lower entropy correlates with receiving higher answer ratings, but only for categories where factual expertise is primarily sought after. We combine both user attributes and answer characteristics to predict, within a given category, whether a particular answer will be chosen as the best answer by the asker.", "Question answering (QA) helps one go beyond traditional keywords-based querying and retrieve information in more precise form than given by a document or a list of documents. Several community-based QA (CQA) services have emerged allowing information seekers pose their information need as questions and receive answers from their fellow users. A question may receive multiple answers from multiple users and the asker or the community can choose the best answer. While the asker can thus indicate if he was satisfied with the information he received, there is no clear way of evaluating the quality of that information. We present a study to evaluate and predict the quality of an answer in a CQA setting. We chose Yahoo! 
Answers as such CQA service and selected a small set of questions, each with at least five answers. We asked Amazon Mechanical Turk workers to rate the quality of each answer for a given question based on 13 different criteria. Each answer was rated by five different workers. We then matched their assessments with the actual asker's rating of a given answer. We show that the quality criteria we used faithfully match with asker's perception of a quality answer. We furthered our investigation by extracting various features from questions, answers, and the users who posted them, and training a number of classifiers to select the best answer using those features. We demonstrate a high predictability of our trained models along with the relative merits of each of the features for such prediction. These models support our argument that in case of CQA, contextual information such as a user's profile, can be critical in evaluating and predicting content quality." ] }
1311.6876
1649405193
Community Question Answering (CQA) websites have become valuable repositories which host a massive volume of human knowledge. To maximize the utility of such knowledge, it is essential to evaluate the quality of an existing question or answer, especially soon after it is posted on the CQA website. In this paper, we study the problem of inferring the quality of questions and answers through a case study of a software CQA (Stack Overflow). Our key finding is that the quality of an answer is strongly positively correlated with that of its question. Armed with this observation, we propose a family of algorithms to jointly predict the quality of questions and answers, for both quantifying numerical quality scores and differentiating the high-quality questions/answers from those of low quality. We conduct extensive experimental evaluations to demonstrate the effectiveness and efficiency of our methods.
To overcome the subjectivity issue of questioner satisfaction, many proposals resort to quality measures derived from long-term community voting or human labeling. For example, @cite_4 conduct a field study on several question answering websites to identify the factors behind high-quality answers. They use human labels as the quality indicator and find that factors such as community effect and payment play important roles in answer quality, while rhetorical strategy and question type have little effect. @cite_3 study the answer quality of code examples in Stack Overflow. They use the community-voted score of an answer as the quality measure. Similar to this work, we also use the voted score, which is the difference between the number of up-votes and down-votes from the community, as the quality measure.
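As a small illustration of this quality measure, the sketch below computes the voted score of each post and pairs each answer's score with the score of its question. The field names used here (post_id, parent_id, up_votes, down_votes, type) are hypothetical placeholders chosen for this sketch, not the actual Stack Overflow data schema used in the paper.

```python
# Minimal sketch of the voted-score quality measure:
#   voted score = (# up-votes) - (# down-votes).
# Field names (post_id, parent_id, up_votes, down_votes, type) are
# illustrative placeholders, not the schema used in the paper.

def voted_score(post):
    """Community-voted quality score of a single post."""
    return post["up_votes"] - post["down_votes"]

def question_answer_scores(posts):
    """Pair each answer's voted score with the score of its question."""
    question_scores = {
        p["post_id"]: voted_score(p) for p in posts if p["type"] == "question"
    }
    pairs = []
    for p in posts:
        if p["type"] == "answer" and p["parent_id"] in question_scores:
            pairs.append((question_scores[p["parent_id"]], voted_score(p)))
    return pairs

# Toy example.
posts = [
    {"post_id": 1, "type": "question", "up_votes": 12, "down_votes": 2},
    {"post_id": 2, "type": "answer", "parent_id": 1, "up_votes": 20, "down_votes": 1},
    {"post_id": 3, "type": "answer", "parent_id": 1, "up_votes": 3, "down_votes": 4},
]
print(question_answer_scores(posts))  # [(10, 19), (10, -1)]
```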
{ "cite_N": [ "@cite_4", "@cite_3" ], "mid": [ "2135555017", "2051204868" ], "abstract": [ "Question and answer (Q&A) sites such as Yahoo! Answers are places where users ask questions and others answer them. In this paper, we investigate predictors of answer quality through a comparative, controlled field study of responses provided across several online Q&A sites. Along with several quantitative results concerning the effects of factors such as question topic and rhetorical strategy, we present two high-level messages. First, you get what you pay for in Q&A sites. Answer quality was typically higher in Google Answers (a fee-based site) than in the free sites we studied, and paying more money for an answer led to better outcomes. Second, we find that a Q&A site's community of users contributes to its success. Yahoo! Answers, a Q&A site where anybody can answer questions, outperformed sites that depend on specific individuals to answer questions, such as library reference services.", "Programmers learning how to use an API or a programming language often rely on code examples to support their learning activities. However, what makes for an effective ode example remains an open question. Finding the haracteristics of the effective examples is essential in improving the appropriateness of these learning aids. To help answer this question we have onducted a qualitative analysis of the questions and answers posted to a programming Q&A web site called StackOverflow. On StackOverflow answers can be voted on, indicating which answers were found helpful by users of the site. By analyzing these well-received answers we identified haracteristics of effective examples. We found that the explanations acompanying examples are as important as the examples themselves. Our findings have implications for the way the API documentation and example set should be developed and evolved as well as the design of the tools assisting the development of these materials." ] }
1311.6876
1649405193
Community Question Answering (CQA) websites have become valuable repositories which host a massive volume of human knowledge. To maximize the utility of such knowledge, it is essential to evaluate the quality of an existing question or answer, especially soon after it is posted on the CQA website. In this paper, we study the problem of inferring the quality of questions and answers through a case study of a software CQA (Stack Overflow). Our key finding is that the quality of an answer is strongly positively correlated with that of its question. Armed with this observation, we propose a family of algorithms to jointly predict the quality of questions and answers, for both quantifying numerical quality scores and differentiating the high-quality questions/answers from those of low quality. We conduct extensive experimental evaluations to demonstrate the effectiveness and efficiency of our methods.
Due to the great value of CQA in helping software development, many empirical studies have been conducted on Stack Overflow. For example, @cite_15 investigate the website to identify which types of questions are frequently asked and answered by programmers. @cite_18 study whether Stack Overflow can be used as a substitute for API documentation. @cite_20 analyze the text content of the posts in Stack Overflow to discover the current hot topics that software developers are discussing. @cite_12 try to find the successful design choices of Stack Overflow so that the lessons can be reused for other applications. One of the main reasons they found is the tight involvement of founders and moderators in the community. Our work could be used to automatically support their moderation by identifying high-quality and low-quality posts at an early stage. In summary, different from the existing empirical studies, our focus is on the quality of questions and answers in Stack Overflow, and such post quality is essential for the reuse of CQA knowledge.
{ "cite_N": [ "@cite_15", "@cite_18", "@cite_12", "@cite_20" ], "mid": [ "2123246351", "", "2099769844", "2056894403" ], "abstract": [ "Question and Answer (Q&A) websites, such as Stack Overflow, use social media to facilitate knowledge exchange between programmers and fill archives with millions of entries that contribute to the body of knowledge in software development. Understanding the role of Q&A websites in the documentation landscape will enable us to make recommendations on how individuals and companies can leverage this knowledge effectively. In this paper, we analyze data from Stack Overflow to categorize the kinds of questions that are asked, and to explore which questions are answered well and which ones remain unanswered. Our preliminary findings indicate that Q&A websites are particularly effective at code reviews and conceptual questions. We pose research questions and suggest future work to explore the motivations of programmers that contribute to Q&A websites, and to understand the implications of turning Q&A exchanges into technical mini-blogs through the editing of questions and answers.", "", "This paper analyzes a Question & Answer site for programmers, Stack Overflow, that dramatically improves on the utility and performance of Q&A systems for technical domains. Over 92 of Stack Overflow questions about expert topics are answered - in a median time of 11 minutes. Using a mixed methods approach that combines statistical data analysis with user interviews, we seek to understand this success. We argue that it is not primarily due to an a priori superior technical design, but also to the high visibility and daily involvement of the design team within the community they serve. This model of continued community leadership presents challenges to both CSCW systems research as well as to attempts to apply the Stack Overflow model to other specialized knowledge domains.", "Programming question and answer (QA questions in some topics lead to discussions in other topics; and the topics gaining the most popularity over time are web development (especially jQuery), mobile applications (especially Android), Git, and MySQL." ] }
1311.6876
1649405193
Community Question Answering (CQA) websites have become valuable repositories which host a massive volume of human knowledge. To maximize the utility of such knowledge, it is essential to evaluate the quality of an existing question or answer, especially soon after it is posted on the CQA website. In this paper, we study the problem of inferring the quality of questions and answers through a case study of a software CQA (Stack Overflow). Our key finding is that the quality of an answer is strongly positively correlated with that of its question. Armed with this observation, we propose a family of algorithms to jointly predict the quality of questions and answers, for both quantifying numerical quality scores and differentiating the high-quality questions/answers from those of low quality. We conduct extensive experimental evaluations to demonstrate the effectiveness and efficiency of our methods.
There are several other recent research directions that are potentially related to our work. For example, Tausczik and Pennebaker @cite_7 empirically study the correlation between user reputation and post quality in MathOverflow, and find that both offline and online reputation points are good predictors of post quality. @cite_10 focus on how to find relevant answers in software forums when there could be many answers for a single question. @cite_13 reveal the mutual effect between question and answer dynamics in Stack Overflow, and prove that a certain equilibrium can be achieved from a theoretical perspective. @cite_0 propose the problem of CQA searcher satisfaction, i.e., whether an answer in a CQA site will satisfy an information searcher who arrives via a search engine. They divide the searcher satisfaction problem into three subproblems (i.e., query clarity, query-question match, and answer quality) and conclude that more intelligent prediction of answer quality is still needed. How to route the right question to the right answerer @cite_9 @cite_5 @cite_21 , and how to predict the long-lasting value (i.e., the page views of a question and its answers) @cite_19 , are also studied by several researchers.
{ "cite_N": [ "@cite_7", "@cite_10", "@cite_9", "@cite_21", "@cite_0", "@cite_19", "@cite_5", "@cite_13" ], "mid": [ "2125545976", "1982235297", "2107391785", "2163881971", "1999969345", "2134406267", "2119505155", "2102062551" ], "abstract": [ "There are two perspectives on the role of reputation in collaborative online projects such as Wikipedia or Yahoo! Answers. One, user reputation should be minimized in order to increase the number of contributions from a wide user base. Two, user reputation should be used as a heuristic to identify and promote high quality contributions. The current study examined how offline and online reputations of contributors affect perceived quality in MathOverflow, an online community with 3470 active users. On MathOverflow, users post high-level mathematics questions and answers. Community members also rate the quality of the questions and answers. This study is unique in being able to measure offline reputation of users. Both offline and online reputations were consistently and independently related to the perceived quality of authors' submissions, and there was only a moderate correlation between established offline and newly developed online reputation.", "Online software forums provide a huge amount of valuable content. Developers and users often ask questions and receive answers from such forums. The availability of a vast amount of thread discussions in forums provides ample opportunities for knowledge acquisition and summarization. For a given search query, current search engines use traditional information retrieval approach to extract webpages containing relevant keywords. However, in software forums, often there are many threads containing similar keywords where each thread could contain a lot of posts as many as 1,000 or more. Manually finding relevant answers from these long threads is a painstaking task to the users. Finding relevant answers is particularly hard in software forums as: complexities of software systems cause a huge variety of issues often expressed in similar technical jargons, and software forum users are often expert internet users who often posts answers in multiple venues creating many duplicate posts, often without satisfying answers, in the world wide web. To address this problem, this paper provides a semantic search engine framework to process software threads and recover relevant answers according to user queries. Different from standard information retrieval engine, our framework infer semantic tags of posts in the software forum threads and utilize these tags to recover relevant answer posts. In our case study, we analyze 6,068 posts from three software forums. In terms of accuracy of our inferred tags, we could achieve on average an overall precision, recall and F-measure of 67 , 71 , and 69 respectively. To empirically study the benefit of our overall framework, we also conduct a user-assisted study which shows that as compared to a standard information retrieval approach, our proposed framework could increase mean average precision from 17 to 71 in retrieving relevant answers to various queries and achieve a Normalized Discounted Cumulative Gain (nDCG) @1 score of 91.2 and nDCG@2 score of 71.6 .", "Online forums contain huge amounts of valuable user-generated content. In current forum systems, users have to passively wait for other users to visit the forum systems and read answer their questions. The user experience for question answering suffers from this arrangement. 
In this paper, we address the problem of \"pushing\" the right questions to the right persons, the objective being to obtain quick, high-quality answers, thus improving user satisfaction. We propose a framework for the efficient and effective routing of a given question to the top-k potential experts (users) in a forum, by utilizing both the content and structures of the forum system. First, we compute the expertise of users according to the content of the forum system—-this is to estimate the probability of a user being an expert for a given question based on the previous question answering of the user. Specifically, we design three models for this task, including a profile-based model, a thread-based model, and a cluster-based model. Second, we re-rank the user expertise measured in probability by utilizing the structural relations among users in a forum system. The results of the two steps can be integrated naturally in a probabilistic model that computes a final ranking score for each user. Experimental results show that the proposals are very promising.", "We present Aardvark, a social search engine. With Aardvark, users ask a question, either by instant message, email, web input, text message, or voice. Aardvark then routes the question to the person in the user's extended social network most likely to be able to answer that question. As compared to a traditional web search engine, where the challenge lies in finding the right document to satisfy a user's information need, the challenge in a social search engine like Aardvark lies in finding the right person to satisfy a user's information need. Further, while trust in a traditional search engine is based on authority, in a social search engine like Aardvark, trust is based on intimacy. We describe how these considerations inform the architecture, algorithms, and user interface of Aardvark, and how they are reflected in the behavior of Aardvark users.", "Community-based Question Answering (CQA) sites, such as Yahoo! Answers, Baidu Knows, Naver, and Quora, have been rapidly growing in popularity. The resulting archives of posted answers to questions, in Yahoo! Answers alone, already exceed in size 1 billion, and are aggressively indexed by web search engines. In fact, a large number of search engine users benefit from these archives, by finding existing answers that address their own queries. This scenario poses new challenges and opportunities for both search engines and CQA sites. To this end, we formulate a new problem of predicting the satisfaction of web searchers with CQA answers. We analyze a large number of web searches that result in a visit to a popular CQA site, and identify unique characteristics of searcher satisfaction in this setting, namely, the effects of query clarity, query-to-question match, and answer quality. We then propose and evaluate several approaches to predicting searcher satisfaction that exploit these characteristics. To the best of our knowledge, this is the first attempt to predict and validate the usefulness of CQA archives for external searchers, rather than for the original askers. Our results suggest promising directions for improving and exploiting community question answering services in pursuit of satisfying even more Web search queries.", "Question answering (Q&A) websites are now large repositories of valuable knowledge. 
While most Q&A sites were initially aimed at providing useful answers to the question asker, there has been a marked shift towards question answering as a community-driven knowledge creation process whose end product can be of enduring value to a broad audience. As part of this shift, specific expertise and deep knowledge of the subject at hand have become increasingly important, and many Q&A sites employ voting and reputation mechanisms as centerpieces of their design to help users identify the trustworthiness and accuracy of the content. To better understand this shift in focus from one-off answers to a group knowledge-creation process, we consider a question together with its entire set of corresponding answers as our fundamental unit of analysis, in contrast with the focus on individual question-answer pairs that characterized previous work. Our investigation considers the dynamics of the community activity that shapes the set of answers, both how answers and voters arrive over time and how this influences the eventual outcome. For example, we observe significant assortativity in the reputations of co-answerers, relationships between reputation and answer speed, and that the probability of an answer being chosen as the best one strongly depends on temporal characteristics of answer arrivals. We then show that our understanding of such properties is naturally applicable to predicting several important quantities, including the long-term value of the question and its answers, as well as whether a question requires a better answer. Finally, we discuss the implications of these results for the design of Q&A sites.", "Programming forums are becoming the primary tools for programmers to find answers for their programming problems. Our empirical study of popular programming forums shows that the forum users experience long waiting period for answers and a small number of experts are often overloaded with questions. To improve the usage experience, we have designed and implemented G-Finder, both an algorithm and a tool that makes intelligent routing decisions as to which participant is the expert for answering a particular programming question. Our main approach is to leverage the source code information of the software systems that forums are dedicated to, and discover latent relationships between forums users. Our algorithms construct the concept networks and the user networks from the program source and the forum data.We use programming questions to dynamically integrate these two networks and present an adaptive ranking of the potential experts. Our evaluation of G-Finder, using the data from three large programming forums, takes a retrospective view to check if G-Finder can correctly predict the experts who provided answers to programming questions. The evaluation results show that G-Finder improves the prediction precision by 25 to 74 , compared to related approaches.", "Two-sided markets arise when two different types of users may realize gains by interacting with one another through one or more platforms or mediators. We initiate a study of the evolution of such markets. We present an empirical analysis of the value accruing to members of each side of the market, based on the presence of the other side. We codify the range of value curves into a general theoretical model, characterize the equilibrium states of two-sided markets in our model, and prove that each platform will converge to one of these equilibria. 
We give some early experimental results of the stability of two-sided markets, and close with a theoretical treatment of the formation of different kinds of coalitions in such markets." ] }
1311.6876
1649405193
Community Question Answering (CQA) websites have become valuable repositories which host a massive volume of human knowledge. To maximize the utility of such knowledge, it is essential to evaluate the quality of an existing question or answer, especially soon after it is posted on the CQA website. In this paper, we study the problem of inferring the quality of questions and answers through a case study of a software CQA (Stack Overflow). Our key finding is that the quality of an answer is strongly positively correlated with that of its question. Armed with this observation, we propose a family of algorithms to jointly predict the quality of questions and answers, for both quantifying numerical quality scores and differentiating the high-quality questions and answers from those of low quality. We conduct extensive experimental evaluations to demonstrate the effectiveness and efficiency of our methods.
@cite_19 put their focus on the community processes in Stack Overflow, and showed how the community processes could be used to identify the threads with long-lasting value and the threads that are in need of additional help. Our quality prediction could be the input of their work, as threads with high-quality question and high-quality answers might be of long-lasting value, and threads with high-quality question and low-quality answers might need more help.
{ "cite_N": [ "@cite_19" ], "mid": [ "2134406267" ], "abstract": [ "Question answering (Q&A) websites are now large repositories of valuable knowledge. While most Q&A sites were initially aimed at providing useful answers to the question asker, there has been a marked shift towards question answering as a community-driven knowledge creation process whose end product can be of enduring value to a broad audience. As part of this shift, specific expertise and deep knowledge of the subject at hand have become increasingly important, and many Q&A sites employ voting and reputation mechanisms as centerpieces of their design to help users identify the trustworthiness and accuracy of the content. To better understand this shift in focus from one-off answers to a group knowledge-creation process, we consider a question together with its entire set of corresponding answers as our fundamental unit of analysis, in contrast with the focus on individual question-answer pairs that characterized previous work. Our investigation considers the dynamics of the community activity that shapes the set of answers, both how answers and voters arrive over time and how this influences the eventual outcome. For example, we observe significant assortativity in the reputations of co-answerers, relationships between reputation and answer speed, and that the probability of an answer being chosen as the best one strongly depends on temporal characteristics of answer arrivals. We then show that our understanding of such properties is naturally applicable to predicting several important quantities, including the long-term value of the question and its answers, as well as whether a question requires a better answer. Finally, we discuss the implications of these results for the design of Q&A sites." ] }
1311.5810
2953089209
We give an exact characterization of the computational complexity of the @math CFA hierarchy. For any @math , we prove that the control flow decision problem is complete for deterministic exponential time. This theorem validates empirical observations that such control flow analysis is intractable. It also provides more general insight into the complexity of abstract interpretation.
The intuition behind the correspondence between evaluation and flow analysis for linear terms can be seen as an instance of abstract counting in the extreme @cite_4 . Abstract counting is a technique for reasoning about the behavior of a program that must occur when a program is run, based solely on abstract information that describes what may occur. When an abstract value is a singleton set, the abstract object is effectively rendered concrete @cite_3 . In other words, when only one thing may happen, it must. Linearity maintains singularity, and analysis is therefore completely concrete.
{ "cite_N": [ "@cite_4", "@cite_3" ], "mid": [ "2059860151", "2083878525" ], "abstract": [ "Useful type inference must be faster than normalization. Otherwise, you could check safety conditions by running the program. We analyze the relationship between bounds on normalization and type inference. We show how the success of type inference is fundamentally related to the amnesia of the type system: the nonlinearity by which all instances of a variable are constrained to have the same type.Recent work on intersection types has advocated their usefulness for static analysis and modular compilation. We analyze System-I (and some instances of its descendant, System E), an intersection type system with a type inference algorithm. Because System-I lacks idempotency, each occurrence of a variable requires a distinct type. Consequently, type inference is equivalent to normalization in every single case, and time bounds on type inference and normalization are identical. Similar relationships hold for other intersection type systems without idempotency.The analysis is founded on an investigation of the relationship between linear logic and intersection types. We show a lockstep correspondence between normalization and type inference. The latter shows the promise of intersection types to facilitate static analyses of varied granularity, but also belies an immense challenge: to add amnesia to such analysis without losing all of its benefits.", "In standard control-flow analyses for higher-order languages, a single abstract binding for a variable represents a set of exact bindings, and a single abstract reference cell represents a set of exact reference cells. While such analyses provide useful may-alias information, they are unable to answer mustalias questions about variables and cells, as these questions ask about equality of specific bindings and references.In this paper, we present a novel program analysis for higher-order languages that answers must-alias questions. At every program point, the analysis associates with each variable and abstract cell a cardinality, which is either single or multiple. If variable x is single at program point p, then all bindings for x in the heap reachable from the environment at p hold the same value. If abstract cell r is single at p, then at most one exact cell corresponding to r is reachable from the environment at p.Must-alias information facilitates various program optimizations such as lightweight closure conversion [19]. In addition, must-alias information permits analyses to perform strong updates [3] on abstract reference cells known to be single. Strong updates improve analysis precision for programs that make significant use of state.A prototype implementation of our analysis yields encouraging results. Over a range of benchmarks, our analysis classifies a large majority of the variables as single." ] }
1311.5810
2953089209
We give an exact characterization of the computational complexity of the @math CFA hierarchy. For any @math , we prove that the control flow decision problem is complete for deterministic exponential time. This theorem validates empirical observations that such control flow analysis is intractable. It also provides more general insight into the complexity of abstract interpretation.
Although @math CFA and ML type inference are two static analyses complete for EXPTIME @cite_6 , the proofs of these respective theorems are fundamentally different. The ML proof relies on type inference simulating exact normalization (analogous to the PTIME-completeness proof for 0CFA), hence subverting the approximation of the analysis. In contrast, the @math CFA proof harnesses the approximation that results from nonlinearity.
{ "cite_N": [ "@cite_6" ], "mid": [ "2021217869" ], "abstract": [ "A well known but incorrect piece of functional programming folklore is that ML expressions can be efficiently typed in polynomial time. In probing the truth of that folklore, various researchers, including Wand, Buneman, Kanellakis, and Mitchell, constructed simple counterexamples consisting of typable ML programs having length n , with principal types having O(2 cn ) distinct type variables and length O(2 2cn ). When the types associated with these ML constructions were represented as directed acyclic graphs, their sizes grew as O(2 cn ). The folklore was even more strongly contradicted by the recent result of Kanellakis and Mitchell that simply deciding whether or not an ML expression is typable is PSPACE-hard. We improve the latter result, showing that deciding ML typability is DEXPTIME-hard. As Kanellakis and Mitchell have shown containment in DEXPTIME, the problem is DEXPTIME-complete. The proof of DEXPTIME-hardness is carried out via a generic reduction: it consists of a very straightforward simulation of any deterministic one-tape Turing machine M with input k running in O ( c |k| ) time by a polynomial-sized ML formula P M,k , such that M accepts k iff P M,k is typable. The simulation of the transition function δ of the Turing Machine is realized uniquely through terms in the lambda calculus without the use of the polymorphic let construct. We use let for two purposes only: to generate an exponential amount of blank tape for the Turing Machine simulation to begin, and to compose an exponential number of applications of the ML formula simulating state transition. It is purely the expressive power of ML polymorphism to succinctly express function composition which results in a proof of DEXPTIME-hardness. We conjecture that lower bounds on deciding typability for extensions to the typed lambda calculus can be regarded precisely in terms of this expressive capacity for succinct function composition. To further understand this lower bound, we relate it to the problem of proving equality of type variables in a system of type equations generated from an ML expression with let-polymorphism. We show that given an oracle for solving this problem, deciding typability would be in PSPACE, as would be the actual computation of the principal type of the expression, were it indeed typable." ] }
1311.6280
2951658687
The 802.11e standard enables user configuration of several MAC parameters, making WLANs vulnerable to users that selfishly configure these parameters to gain throughput. In this paper we propose a novel distributed algorithm to thwart such selfish behavior. The key idea of the algorithm is for honest stations to react, upon detecting a selfish station, by using a more aggressive configuration that penalizes this station. We show that the proposed algorithm guarantees global stability while providing good response times. By conducting a game theoretic analysis of the algorithm based on repeated games, we also show its effectiveness against selfish stations. Simulation results confirm that the proposed algorithm optimizes throughput performance while discouraging selfish behavior. We also present an experimental prototype of the proposed algorithm demonstrating that it can be implemented on commodity hardware.
The approach proposed by @cite_7 does not suffer from the above drawback but addresses only two types of misbehaving stations: the so-called selfish stations, with @math , and the so-called greedy stations, with @math . While the proposed scheme is effective when dealing with these two particular configurations, other @math configurations that may greatly benefit selfish stations are neither detected nor punished by this mechanism, as we show in our simulation results. Additionally, the algorithm of @cite_7 is based on heuristics that do not guarantee quick convergence, and indeed we show in a further simulation result that this approach may suffer from convergence issues.
{ "cite_N": [ "@cite_7" ], "mid": [ "2162938499" ], "abstract": [ "CSMA CA, the contention mechanism of the IEEE 802.11 DCF medium access protocol, has recently been found vulnerable to selfish backoff attacks consisting in nonstandard configuration of the constituent backoff scheme. Such attacks can greatly increase a selfish station's bandwidth share at the expense of honest stations applying a standard configuration. The paper investigates the distribution of bandwidth among anonymous network stations, some of which are selfish. A station's obtained bandwidth share is regarded as a payoff in a noncooperative CSMA CA game. Regardless of the IEEE 802.11 parameter setting, the payoff function is found similar to a multiplayer Prisoners' Dilemma; moreover, the number (though not the identities) of selfish stations can be inferred by observation of successful transmission attempts. Further, a repeated CSMA CA game is defined, where a station can toggle between standard and nonstandard backoff configurations with a view of maximizing a long-term utility. It is argued that a desirable station strategy should yield a fair, Pareto efficient, and subgame perfect Nash equilibrium. One such strategy, called CRISP, is described and evaluated." ] }
1311.6280
2951658687
The 802.11e standard enables user configuration of several MAC parameters, making WLANs vulnerable to users that selfishly configure these parameters to gain throughput. In this paper we propose a novel distributed algorithm to thwart such selfish behavior. The key idea of the algorithm is for honest stations to react, upon detecting a selfish station, by using a more aggressive configuration that penalizes this station. We show that the proposed algorithm guarantees global stability while providing good response times. By conducting a game theoretic analysis of the algorithm based on repeated games, we also show its effectiveness against selfish stations. Simulation results confirm that the proposed algorithm optimizes throughput performance while discouraging selfish behavior. We also present an experimental prototype of the proposed algorithm demonstrating that it can be implemented on commodity hardware.
Substantial work in the literature has also focused on the design of stable adaptive algorithms @cite_9 @cite_0 @cite_23 @cite_24 @cite_3 @cite_11 . A major difference between our algorithm and these approaches is that they build on local stability analysis while we rely on Lyapunov stability theory, which ensures global asymptotic stability and hence provides stronger guarantees. Indeed, with @cite_9 @cite_0 @cite_23 @cite_24 @cite_3 @cite_11 convergence is only guaranteed as long as the initial point is sufficiently close to the stable point of operation, while we guarantee convergence for any initial point of operation.
{ "cite_N": [ "@cite_9", "@cite_3", "@cite_0", "@cite_24", "@cite_23", "@cite_11" ], "mid": [ "2126820294", "2103979726", "", "", "1804414343", "2001247770" ], "abstract": [ "Although the EDCA access mechanism of the 802.11e standard supports legacy DCF stations, the presence of DCF stations in the WLAN jeopardizes the provisioning of the service guarantees committed to the EDCA stations. The reason is that DCF stations compete with Contention Windows (CWs) that are predefined and cannot be modified, and as a result, the impact of the DCF stations on the service received by the EDCA stations cannot be controlled. In this paper, we address the problem of providing throughput guarantees to EDCA stations in a WLAN in which EDCA and DCF stations coexist. To this aim, we propose a technique that, implemented at the Access Point (AP), mitigates the impact of DCF stations on EDCA by skipping with a certain probability the Ack reply to a frame from a DCF station. When missing the Ack, the DCF station increases its CW, and thus, our technique allows us to have some control over the CWs of the legacy DCF stations. In our approach, the probability of skipping an Ack frame is dynamically adjusted by means of an adaptive algorithm. This algorithm is based on a widely used controller from classical control theory, namely a Proportional Controller. In order to find an adequate configuration of the controller, we conduct a control-theoretic analysis of the system. Simulation results show that the proposed approach is effective in providing throughput guarantees to EDCA stations in presence of DCF stations.", "The optimal configuration of the contention parameters of a WLAN depends on the network conditions in terms of number of stations and the traffic they generate. Following this observation, a considerable effort in the literature has been devoted to the design of distributed algorithms that optimally configure the WLAN parameters based on current conditions. In this paper, we propose a novel algorithm that, in contrast to previous proposals which are mostly based on heuristics, is sustained by mathematical foundations from multivariable control theory. A key advantage of the algorithm over existing approaches is that it is compliant with the 802.11 standard and can be implemented with current wireless cards without introducing any changes into the hardware or firmware. We study the performance of our proposal by means of theoretical analysis, simulations, and a real implementation. Results show that the algorithm substantially outperforms previous approaches in terms of throughput and delay.", "", "", "We use a previously developed nonlinear dynamic model of TCP to analyze and design active queue management (AQM) control systems using random early detection (RED). First, we linearize the interconnection of TCP and a bottlenecked queue and discuss its feedback properties in terms of network parameters such as link capacity, load and round-trip time. Using this model, we next design an AQM control system using the RED scheme by relating its free parameters such as the low-pass filter break point and loss probability profile to the network parameters. We present guidelines for designing linearly stable systems subject to network parameters like propagation delay and load level. Robustness to variations in system loads is a prime objective. 
We present no simulations to support our analysis.", "Distributed opportunistic scheduling (DOS) is inherently more difficult than conventional opportunistic scheduling due to the absence of a central entity that knows the channel state of all stations. With DOS, stations use random access to contend for the channel and, upon winning a contention, they measure the channel conditions. After measuring the channel conditions, a station only transmits if the channel quality is good; otherwise, it gives up the transmission opportunity. The distributed nature of DOS makes it vulnerable to selfish users: By deviating from the protocol and using more transmission opportunities, a selfish user can gain a greater share of wireless resources at the expense of \"well-behaved\" users. In this paper, we address the problem of selfishness in DOS from a game-theoretic standpoint. We propose an algorithm that satisfies the following properties: 1) When all stations implement the algorithm, the wireless network is driven to the optimal point of operation; and 2) one or more selfish stations cannot obtain any gain by deviating from the algorithm. The key idea of the algorithm is to react to a selfish station by using a more aggressive configuration that (indirectly) punishes this station. We build on multivariable control theory to design a mechanism for punishment that is sufficiently severe to prevent selfish behavior, yet not so severe as to render the system unstable. We conduct a game-theoretic analysis based on repeated games to show the algorithm's effectiveness against selfish stations. These results are confirmed by extensive simulations." ] }
1311.6280
2951658687
The 802.11e standard enables user configuration of several MAC parameters, making WLANs vulnerable to users that selfishly configure these parameters to gain throughput. In this paper we propose a novel distributed algorithm to thwart such selfish behavior. The key idea of the algorithm is for honest stations to react, upon detecting a selfish station, by using a more aggressive configuration that penalizes this station. We show that the proposed algorithm guarantees global stability while providing good response times. By conducting a game theoretic analysis of the algorithm based on repeated games, we also show its effectiveness against selfish stations. Simulation results confirm that the proposed algorithm optimizes throughput performance while discouraging selfish behavior. We also present an experimental prototype of the proposed algorithm demonstrating that it can be implemented on commodity hardware.
Perhaps the work most closely related to this paper is our previous work @cite_11 , which uses a similar technique to counteract selfish stations, also based on repeated games. However, both the scope of the work and the algorithm design are substantially different. Indeed, while @cite_11 focuses on distributed opportunistic scheduling, here we address the problem of selfishness in 802.11. Furthermore, @cite_11 relies on local linearized analysis, while here we use Lyapunov theory for the global design and analysis of the algorithm. As a consequence, the algorithm proposed in this paper provides much stronger guarantees on stability and convergence than that of @cite_11 .
{ "cite_N": [ "@cite_11" ], "mid": [ "2001247770" ], "abstract": [ "Distributed opportunistic scheduling (DOS) is inherently more difficult than conventional opportunistic scheduling due to the absence of a central entity that knows the channel state of all stations. With DOS, stations use random access to contend for the channel and, upon winning a contention, they measure the channel conditions. After measuring the channel conditions, a station only transmits if the channel quality is good; otherwise, it gives up the transmission opportunity. The distributed nature of DOS makes it vulnerable to selfish users: By deviating from the protocol and using more transmission opportunities, a selfish user can gain a greater share of wireless resources at the expense of \"well-behaved\" users. In this paper, we address the problem of selfishness in DOS from a game-theoretic standpoint. We propose an algorithm that satisfies the following properties: 1) When all stations implement the algorithm, the wireless network is driven to the optimal point of operation; and 2) one or more selfish stations cannot obtain any gain by deviating from the algorithm. The key idea of the algorithm is to react to a selfish station by using a more aggressive configuration that (indirectly) punishes this station. We build on multivariable control theory to design a mechanism for punishment that is sufficiently severe to prevent selfish behavior, yet not so severe as to render the system unstable. We conduct a game-theoretic analysis based on repeated games to show the algorithm's effectiveness against selfish stations. These results are confirmed by extensive simulations." ] }
1311.6033
2407537893
Given a polygon @math , for two points @math and @math contained in the polygon, their geodesic distance is the length of the shortest @math -path within @math . A geodesic disk of radius @math centered at a point @math is the set of points in @math whose geodesic distance to @math is at most @math . We present a polynomial time @math -approximation algorithm for finding a densest geodesic unit disk packing in @math . Allowing arbitrary radii but constraining the number of disks to be @math , we present a @math -approximation algorithm for finding a packing in @math with @math geodesic disks whose minimum radius is maximized. We then turn our focus on coverings of @math and present a @math -approximation algorithm for covering @math with @math geodesic disks whose maximal radius is minimized. Furthermore, we show that all these problems are @math -hard in polygons with holes. Lastly, we present a polynomial time exact algorithm which covers a polygon with two geodesic disks of minimum maximal radius.
Exact coverings of @math points in the plane with two Euclidean disks of minimum maximal radius, commonly referred to as the 2-center problem, have been heavily studied. The best deterministic algorithm runs in @math time @cite_37 , and in @cite_6 an expected @math time algorithm is presented. For polygons, a @math time algorithm for covering a convex polygon with two Euclidean disks of minimum maximal radius is presented in @cite_39 .
{ "cite_N": [ "@cite_37", "@cite_6", "@cite_39" ], "mid": [ "2052051124", "2006614694", "" ], "abstract": [ "We present an (O(n ^ 9 n) ) -time algorithm for computing the 2-center of a set S of n points in the plane (that is, a pair of congruent disks of smallest radius whose union covers S), improving the previous (O(n^2 n) ) -time algorithm of [10].", "Improving on a recent breakthrough of Sharir, we find two minimum-radius circular disks covering a planar point set, in randomized expected time O(n log n).", "" ] }
1311.6249
2950437453
Background: Distributed Pair Programming can be performed via screensharing or via a distributed IDE. The latter offers the freedom of concurrent editing (which may be helpful or damaging) and has even more awareness deficits than screen sharing. Objective: Characterize how competent distributed pair programmers may handle this additional freedom and these additional awareness deficits and characterize the impacts on the pair programming process. Method: A revelatory case study, based on direct observation of a single, highly competent distributed pair of industrial software developers during a 3-day collaboration. We use recordings of these sessions and conceptualize the phenomena seen. Results: 1. Skilled pairs may bridge the awareness deficits without visible obstruction of the overall process. 2. Skilled pairs may use the additional editing freedom in a useful limited fashion, resulting in potentially better fluency of the process than local pair programming. Conclusion: When applied skillfully in an appropriate context, distributed-pair programming can (not will!) work at least as well as local pair programming.
Regarding global software development (GSD, @cite_28 ), most of today's research concerns the team, project, or organizational level and so far focuses on the problems of GSD @cite_25 , rather than on solutions for them. Topics include, for instance, communication @cite_31 @cite_24 , coordination @cite_3 , and trust @cite_30 @cite_2 @cite_29 . Little is said about the real-time collaboration of individuals and the immediate programming process we are concerned with here.
{ "cite_N": [ "@cite_30", "@cite_28", "@cite_29", "@cite_3", "@cite_24", "@cite_2", "@cite_31", "@cite_25" ], "mid": [ "2270052769", "", "1984073675", "2007329840", "2153470826", "", "2008712567", "2087529741" ], "abstract": [ "All software projects face the challenges of diverse distances -- temporal, geographical, cultural, lingual, political, historical, and more. Many forms of distance even affect developers in the same room. The goal of this book is to reconcile two mainstays of modern agility: the close collaboration agility relies on, and project teams distributed across different cities, countries, and continents.In Agile Software Development with Distributed Teams, Jutta Eckstein asserts that, in fact, agile methods and the constant communication they require are uniquely capable of solving the challenges of distributed projects. Agility is responsiveness to change -- in other words, agile practitioners maintain flexibility to accommodate changing circumstances and results. Iterative development serves the learning curve that global project teams must scale.This book is not about how to outsource and forget your problems. Rather, Eckstein details how to carefully select development partners and integrate efforts and processes to form a better product than any single contributor could deliver on his or her own. The author de-emphasizes templates and charts and favors topical discussion and exploration. Practitioners share experiences in their own words in short stories throughout the book. Eckstein trains readers to be change agents, to creatively apply the concepts in this book to form a customized distributed project plan for success.", "", "The importance of communication and trust in the context of global virtual teams has been noted and reiterated in the information systems (IS) literature. Yet precisely how communication and trust influence certain outcomes within virtual teams remains unresolved. In this study, we seek to contribute some clarity to the understanding of the theoretical linkages among trust, communication, and member performance in virtual teams. To this end, we identify and test three proposed models (additive, interaction, and mediation) describing the role of trust in its relationship with communication to explain performance. In testing the relationships, we note that the concepts of communication and trust are inherently relational and not properties of individuals. Thus, we argue that a social network approach is potentially more appropriate than attribute-based approaches that have been utilized in prior research. Our results indicate that the \"mediating\" model best explains how communication and trust work together to influence performance. Overall, the study contributes to the existing body of knowledge on virtual teams by empirically reconciling conflicting views regarding the interrelationships between key constructs in the literature. Further, the study, through its adoption of the social network analysis approach, provides awareness within the IS research community of the strengths of applying network approaches in examining new organizational forms.", "Coordination is important in software development because it leads to benefits such as cost savings, shorter development cycles, and better-integrated products. 
Team cognition research suggests that members coordinate through team knowledge, but this perspective has only been investigated in real-time collocated tasks and we know little about which types of team knowledge best help coordination in the most geographically distributed software work. In this field study, we investigate the coordination needs of software teams, how team knowledge affects coordination, and how this effect is influenced by geographic dispersion. Our findings show that software teams have three distinct types of coordination needs-technical, temporal, and process-and that these needs vary with the members' role; geographic distance has a negative effect on coordination, but is mitigated by shared knowledge of the team and presence awareness; and shared task knowledge is more important for coordination among collocated members. We articulate propositions for future research in this area based on our analysis.", "Nowadays, distributed development is common in software development. Besides many advantages, research in the last decade has consistently found that distribution has a negative impact on collaboration in general, and communication and task completion time in particular. Adapted processes, practices and tools are demanded to overcome these challenges. We report on an empirical study of communication structures and delay, as well as task completion times in IBM's distributed development project Jazz. The Jazz project explicitly focuses on distributed collaboration and has adapted processes and tools to overcome known challenges. We explored the effect of distance on communication and task completion time and use social network analysis to obtain insights about the collaboration in the Jazz project. We discuss our findings in the light of existing literature on distributed collaboration and delays.", "", "Abstract We conducted an industrial case study of a distributed team in the USA and the Czech Republic that used Extreme Programming. Our goal was to understand how this globally-distributed team created a successful project in a new problem domain using a methodology that is dependent on informal, face-to-face communication. We collected quantitative and qualitative data and used grounded theory to identify four key factors for communication in globally-distributed XP teams working within a new problem domain. Our study suggests that, if these critical enabling factors are addressed, methodologies dependent on informal communication can be used on global software development projects.", "Distribution of development processes has become common as a side effect of globalization. Working in a distributed setting brings challenges inherent to distance. The Software Engineering community has been investigating these challenges for over a decade, and issues regarding communication, coordination, and trust are frequently reported in literature. However, a few studies discuss solutions for these challenges. Frequently, best practices are described in a general context. In this paper we report our findings from a systematic literature review that aimed at identifying reported challenges and the proposed solutions to solve such challenges. In a time that distributed development has established its roots, it is important to move towards solutions to well-known problems. Our report aims to establish a baseline of problems that still need solutions. This baseline brings awareness to the global software engineering community. 
We finish discussing the implications for furthering the body of knowledge in the field." ] }
1311.6249
2950437453
Background: Distributed Pair Programming can be performed via screensharing or via a distributed IDE. The latter offers the freedom of concurrent editing (which may be helpful or damaging) and has even more awareness deficits than screen sharing. Objective: Characterize how competent distributed pair programmers may handle this additional freedom and these additional awareness deficits and characterize the impacts on the pair programming process. Method: A revelatory case study, based on direct observation of a single, highly competent distributed pair of industrial software developers during a 3-day collaboration. We use recordings of these sessions and conceptualize the phenomena seen. Results: 1. Skilled pairs may bridge the awareness deficits without visible obstruction of the overall process. 2. Skilled pairs may use the additional editing freedom in a useful limited fashion, resulting in potentially better fluency of the process than local pair programming. Conclusion: When applied skillfully in an appropriate context, distributed-pair programming can (not will!) work at least as well as local pair programming.
Finally, there is the notion of a driver role and an observer (or navigator) role in pair programming; they are relevant for the use of editing freedom. These roles are not mentioned by Beck at all; their most popular source appears to be a definition of pair programming by that includes the following: @cite_14 (a different version of this definition is found in [p.3] WilKes02 ).
{ "cite_N": [ "@cite_14" ], "mid": [ "2148071752" ], "abstract": [ "The software industry has practiced pair programming (two programmers working side by side at one computer on the same problem) with great success for years, but people who haven't tried it often reject the idea as a waste of resources. The authors demonstrate that using pair programming in the software development process yields better products in less time-and happier, more confident programmers." ] }
1311.6249
2950437453
Background: Distributed Pair Programming can be performed via screensharing or via a distributed IDE. The latter offers the freedom of concurrent editing (which may be helpful or damaging) and has even more awareness deficits than screen sharing. Objective: Characterize how competent distributed pair programmers may handle this additional freedom and these additional awareness deficits and characterize the impacts on the pair programming process. Method: A revelatory case study, based on direct observation of a single, highly competent distributed pair of industrial software developers during a 3-day collaboration. We use recordings of these sessions and conceptualize the phenomena seen. Results: 1. Skilled pairs may bridge the awareness deficits without visible obstruction of the overall process. 2. Skilled pairs may use the additional editing freedom in a useful limited fashion, resulting in potentially better fluency of the process than local pair programming. Conclusion: When applied skillfully in an appropriate context, distributed-pair programming can (not will!) work at least as well as local pair programming.
First, , after systematic, quantitative, and very focused verbal protocol analysis of on-site, everyday, industrial pair programming, conclude and in particular find that the detection of minor mistakes is done as much by the driver as by the observer @cite_6 .
{ "cite_N": [ "@cite_6" ], "mid": [ "2094783406" ], "abstract": [ "Computer programming is generally understood to be highly challenging and since its inception a wide range of approaches, tools and methodologies have been developed to assist in managing its complexity. Relatively recently the potential benefits of collaborative software development have been formalised in the practice of pair programming. Here we attempt to 'unpick' the pair programming process through the analysis of verbalisations from a number of commercial studies. We focus particularly on the roles of the two programmers and what their key characteristics and behaviours might be. In particular, we dispute two existing claims: (i) that the programmer who is not currently typing in code (''the navigator'') is constantly reviewing what is typed and highlighting any errors (i.e. acting as a reviewer) and (ii) that the navigator focuses on a different level of abstraction as a way of ensuring coverage at all necessary levels (i.e. acting as a foreman). We provide an alternative model for these roles (''the tag team'') in which the driver and navigator play much more equal roles. We also suggest that a key factor in the success of pair programming may be the associated increase in talk at an intermediate level of abstraction." ] }
1311.6249
2950437453
Background: Distributed Pair Programming can be performed via screensharing or via a distributed IDE. The latter offers the freedom of concurrent editing (which may be helpful or damaging) and has even more awareness deficits than screen sharing. Objective: Characterize how competent distributed pair programmers may handle this additional freedom and these additional awareness deficits and characterize the impacts on the pair programming process. Method: A revelatory case study, based on direct observation of a single, highly competent distributed pair of industrial software developers during a 3-day collaboration. We use recordings of these sessions and conceptualize the phenomena seen. Results: 1. Skilled pairs may bridge the awareness deficits without visible obstruction of the overall process. 2. Skilled pairs may use the additional editing freedom in a useful limited fashion, resulting in potentially better fluency of the process than local pair programming. Conclusion: When applied skillfully in an appropriate context, distributed-pair programming can (not will!) work at least as well as local pair programming.
Second, Chong and Hurlbutt @cite_17 , after informal data analysis of on-site, everyday, industrial pair programming, formulate even more strongly . They also note that . They assume this to be a disadvantage of local pair programming and recommend the use of dual keyboards and dual mice to make driver changes maximally fluent in the local case. For the distributed case, they advise against tools of the RPP and strict DPP kind that enforce and hence emphasize the driver non-driver distinction. Their article does not discuss the problems that may result from having and using editing freedom in a reduced-awareness situation, however.
{ "cite_N": [ "@cite_17" ], "mid": [ "2113208984" ], "abstract": [ "This paper presents data from a four month ethnographic study of professional pair programmers from two software development teams. Contrary to the current conception of pair programmers, the pairs in this study did not hew to the separate roles of \"driver\" and \"navigator\". Instead, the observed programmers moved together through different phases of the task, considering and discussing issues at the same strategic \"range \" or level of abstraction and in largely the same role. This form of interaction was reinforced by frequent switches in keyboard control during pairing and the use of dual keyboards. The distribution of expertise among the members of a pair had a strong influence on the tenor of pair programming interaction. Keyboard control had a consistent secondary effect on decisionmaking within the pair. These findings have implications for software development managers and practitioners as well as for the design of software development tools." ] }
1311.5612
1518047474
Modern botnets rely on domain-generation algorithms (DGAs) to build resilient command-and-control infrastructures. Recent works focus on recognizing automatically generated domains (AGDs) from DNS traffic, which potentially allows to identify previously unknown AGDs to hinder or disrupt botnets' communication capabilities. The state-of-the-art approaches require to deploy low-level DNS sensors to access data whose collection poses practical and privacy issues, making their adoption problematic. We propose a mechanism that overcomes the above limitations by analyzing DNS traffic data through a combination of linguistic and IP-based features of suspicious domains. In this way, we are able to identify AGD names, characterize their DGAs and isolate logical groups of domains that represent the respective botnets. Moreover, our system enriches these groups with new, previously unknown AGD names, and produce novel knowledge about the evolving behavior of each tracked botnet. We used our system in real-world settings, to help researchers that requested intelligence on suspicious domains and were able to label them as belonging to the correct botnet automatically. Additionally, we ran an evaluation on 1,153,516 domains, including AGDs from both modern (e.g., Bamital) and traditional (e.g., Conficker, Torpig) botnets. Our approach correctly isolated families of AGDs that belonged to distinct DGAs, and set automatically generated from non-automatically generated domains apart in 94.8 percent of the cases.
were the first to address the problem of AGDs, later published also in @cite_11 : The authors leverage the randomization of AGD names to distinguish them from HGDs. Linguistic features capturing the distribution of alphanumeric characters and bi-grams are computed over domain groups, which are then classified as sets of AGDs or HGDs. Differently from ours, their system relies on learning, and thus requires labeled datasets of positive and negative samples. The work explores different strategies to group domains into sets before feeding them to the classifier: per-second-level-domain, per-IP and per-component. The first strategy groups the domains according to their second-level domain, the second according to the IPs they resolve to, and the third according to the components of the bipartite domain-IP graph. Our work differs from these approaches because it requires no labeled datasets of AGDs to be bootstrapped, and thus it is able to find sets of AGDs with no prior knowledge. Moreover, our system classifies domains one by one, without the need for error-prone a priori grouping.
{ "cite_N": [ "@cite_11" ], "mid": [ "1981294881" ], "abstract": [ "Recent botnets such as Conficker, Kraken, and Torpig have used DNS-based \"domain fluxing\" for command-and-control, where each Bot queries for existence of a series of domain names and the owner has to register only one such domain name. In this paper, we develop a methodology to detect such \"domain fluxes\" in DNS traffic by looking for patterns inherent to domain names that are generated algorithmically, in contrast to those generated by humans. In particular, we look at distribution of alphanumeric characters as well as bigrams in all domains that are mapped to the same set of IP addresses. We present and compare the performance of several distance metrics, including K-L distance, Edit distance, and Jaccard measure. We train by using a good dataset of domains obtained via a crawl of domains mapped to all IPv4 address space and modeling bad datasets based on behaviors seen so far and expected. We also apply our methodology to packet traces collected at a Tier-1 ISP and show we can automatically detect domain fluxing as used by Conficker botnet with minimal false positives, in addition to discovering a new botnet within the ISP trace. We also analyze a campus DNS trace to detect another unknown botnet exhibiting advanced domain-name generation technique." ] }
1311.5612
1518047474
Modern botnets rely on domain-generation algorithms (DGAs) to build resilient command-and-control infrastructures. Recent works focus on recognizing automatically generated domains (AGDs) from DNS traffic, which potentially allows to identify previously unknown AGDs to hinder or disrupt botnets' communication capabilities. The state-of-the-art approaches require to deploy low-level DNS sensors to access data whose collection poses practical and privacy issues, making their adoption problematic. We propose a mechanism that overcomes the above limitations by analyzing DNS traffic data through a combination of linguistic and IP-based features of suspicious domains. In this way, we are able to identify AGD names, characterize their DGAs and isolate logical groups of domains that represent the respective botnets. Moreover, our system enriches these groups with new, previously unknown AGD names, and produce novel knowledge about the evolving behavior of each tracked botnet. We used our system in real-world settings, to help researchers that requested intelligence on suspicious domains and were able to label them as belonging to the correct botnet automatically. Additionally, we ran an evaluation on 1,153,516 domains, including AGDs from both modern (e.g., Bamital) and traditional (e.g., Conficker, Torpig) botnets. Our approach correctly isolated families of AGDs that belonged to distinct DGAs, and set automatically generated from non-automatically generated domains apart in 94.8 percent of the cases.
The authors of @cite_1 focused on domains that are malicious, in general, from the viewpoint of the victims of attacks perpetrated through botnets (e.g., phishing, spam, drive-by download). Instead, our system focuses on AGDs and, for this reason, it models the features of the DNS layer between bots and C&C servers. Moreover, the detection method of @cite_1 is based on supervised learning, whereas ours uses unsupervised techniques.
{ "cite_N": [ "@cite_1" ], "mid": [ "1981049515" ], "abstract": [ "In this paper, we present FluxBuster, a novel passive DNS traffic analysis system for detecting and tracking malicious flux networks. FluxBuster applies large-scale monitoring of DNS traffic traces generated by recursive DNS (RDNS) servers located in hundreds of different networks scattered across several different geographical locations. Unlike most previous work, our detection approach is not limited to the analysis of suspicious domain names extracted from spam emails or precompiled domain blacklists. Instead, FluxBuster is able to detect malicious flux service networks in-the-wild, i.e., as they are \"accessed” by users who fall victim of malicious content, independently of how this malicious content was advertised. We performed a long-term evaluation of our system spanning a period of about five months. The experimental results show that FluxBuster is able to accurately detect malicious flux networks with a low false positive rate. Furthermore, we show that in many cases FluxBuster is able to detect malicious flux domains several days or even weeks before they appear in public domain blacklists." ] }
1311.5612
1518047474
Modern botnets rely on domain-generation algorithms (DGAs) to build resilient command-and-control infrastructures. Recent works focus on recognizing automatically generated domains (AGDs) from DNS traffic, which potentially allows to identify previously unknown AGDs to hinder or disrupt botnets' communication capabilities. The state-of-the-art approaches require to deploy low-level DNS sensors to access data whose collection poses practical and privacy issues, making their adoption problematic. We propose a mechanism that overcomes the above limitations by analyzing DNS traffic data through a combination of linguistic and IP-based features of suspicious domains. In this way, we are able to identify AGD names, characterize their DGAs and isolate logical groups of domains that represent the respective botnets. Moreover, our system enriches these groups with new, previously unknown AGD names, and produce novel knowledge about the evolving behavior of each tracked botnet. We used our system in real-world settings, to help researchers that requested intelligence on suspicious domains and were able to label them as belonging to the correct botnet automatically. Additionally, we ran an evaluation on 1,153,516 domains, including AGDs from both modern (e.g., Bamital) and traditional (e.g., Conficker, Torpig) botnets. Our approach correctly isolated families of AGDs that belonged to distinct DGAs, and set automatically generated from non-automatically generated domains apart in 94.8 percent of the cases.
The authors of @cite_3 proposed a system that detects C&C failover strategies with techniques based on multi-path exploration. The system explores the behavior of malware samples during simulated network failures. Backup C&C servers and AGDs are thus unveiled, leading to new blacklists. The approach is very promising toward expanding blacklists of malicious domains, although it may produce misleading results when the malware behavior depends on time-dependent information. Differently from @cite_3 , our system discovers new AGDs---and other knowledge---using solely passive, recursive-level DNS traffic and requires no malware samples to work.
{ "cite_N": [ "@cite_3" ], "mid": [ "2054897983" ], "abstract": [ "The ability to remote-control infected PCs is a fundamental component of modern malware campaigns. At the same time, the command and control (C&C) infrastructure that provides this capability is an attractive target for mitigation. In recent years, more or less successful takedown operations have been conducted against botnets employing both client-server and peer-to-peer C&C architectures. To improve their robustness against such disruptions of their illegal business, botnet operators routinely deploy redundant C&C infrastructure and implement failover C&C strategies. In this paper, we propose techniques based on multi-path exploration [1] to discover how malware behaves when faced with the simulated take-down of some of the network endpoints it communicates with. We implement these techniques in a tool called Squeeze, and show that it allows us to detect backup C&C servers, increasing the coverage of an automatically generated C&C blacklist by 19.7 , and can trigger domain generation algorithms that malware implements for disaster-recovery." ] }
1311.5612
1518047474
Modern botnets rely on domain-generation algorithms (DGAs) to build resilient command-and-control infrastructures. Recent works focus on recognizing automatically generated domains (AGDs) from DNS traffic, which potentially allows to identify previously unknown AGDs to hinder or disrupt botnets' communication capabilities. The state-of-the-art approaches require to deploy low-level DNS sensors to access data whose collection poses practical and privacy issues, making their adoption problematic. We propose a mechanism that overcomes the above limitations by analyzing DNS traffic data through a combination of linguistic and IP-based features of suspicious domains. In this way, we are able to identify AGD names, characterize their DGAs and isolate logical groups of domains that represent the respective botnets. Moreover, our system enriches these groups with new, previously unknown AGD names, and produce novel knowledge about the evolving behavior of each tracked botnet. We used our system in real-world settings, to help researchers that requested intelligence on suspicious domains and were able to label them as belonging to the correct botnet automatically. Additionally, we ran an evaluation on 1,153,516 domains, including AGDs from both modern (e.g., Bamital) and traditional (e.g., Conficker, Torpig) botnets. Our approach correctly isolated families of AGDs that belonged to distinct DGAs, and set automatically generated from non-automatically generated domains apart in 94.8 percent of the cases.
Systems like -.5 Exposure and -.5 Notos @cite_0 rely on local recursive DNS. Instead, -.5 Kopis @cite_15 analyzes DNS traffic collected from a global vantage point at the upper DNS hierarchy. -.5 Kopis introduces new features such as the requester diversity, requester profile and resolved-IPs reputation, to leverage the global visibility and detect malicious domains. As the authors themselves notice, -.5 Kopis is ineffective on AGDs, because of their short lifespan, whereas we have showed extensively that can detect and, more importantly, label, previously unknown AGDs.
{ "cite_N": [ "@cite_0", "@cite_15" ], "mid": [ "155384935", "2401054255" ], "abstract": [ "The Domain Name System (DNS) is an essential protocol used by both legitimate Internet applications and cyber attacks. For example, botnets rely on DNS to support agile command and control infrastructures. An effective way to disrupt these attacks is to place malicious domains on a \"blocklist\" (or \"blacklist\") or to add a filtering rule in a firewall or network intrusion detection system. To evade such security countermeasures, attackers have used DNS agility, e.g., by using new domains daily to evade static blacklists and firewalls. In this paper we propose Notos, a dynamic reputation system for DNS. The premise of this system is that malicious, agile use of DNS has unique characteristics and can be distinguished from legitimate, professionally provisioned DNS services. Notos uses passive DNS query data and analyzes the network and zone features of domains. It builds models of known legitimate domains and malicious domains, and uses these models to compute a reputation score for a new domain indicative of whether the domain is malicious or legitimate. We have evaluated Notos in a large ISP's network with DNS traffic from 1.4 million users. Our results show that Notos can identify malicious domains with high accuracy (true positive rate of 96.8 ) and low false positive rate (0.38 ), and can identify these domains weeks or even months before they appear in public blacklists.", "In recent years Internet miscreants have been leveraging the DNS to build malicious network infrastructures for malware command and control. In this paper we propose a novel detection system called Kopis for detecting malware-related domain names. Kopis passively monitors DNS traffic at the upper levels of the DNS hierarchy, and is able to accurately detect malware domains by analyzing global DNS query resolution patterns. Compared to previous DNS reputation systems such as Notos [3] and Exposure [4], which rely on monitoring traffic from local recursive DNS servers, Kopis offers a new vantage point and introduces new traffic features specifically chosen to leverage the global visibility obtained by monitoring network traffic at the upper DNS hierarchy. Unlike previous work Kopis enables DNS operators to independently (i.e., without the need of data from other networks) detect malware domains within their authority, so that action can be taken to stop the abuse. Moreover, unlike previous work, Kopis can detect malware domains even when no IP reputation information is available. We developed a proof-of-concept version of Kopis, and experimented with eight months of real-world data. Our experimental results show that Kopis can achieve high detection rates (e.g., 98.4 ) and low false positive rates (e.g., 0.3 or 0.5 ). In addition Kopis is able to detect new malware domains days or even weeks before they appear in public blacklists and security forums, and allowed us to discover the rise of a previously unknown DDoS botnet based in China." ] }
1311.5612
1518047474
Modern botnets rely on domain-generation algorithms (DGAs) to build resilient command-and-control infrastructures. Recent works focus on recognizing automatically generated domains (AGDs) from DNS traffic, which potentially makes it possible to identify previously unknown AGDs and hinder or disrupt botnets' communication capabilities. The state-of-the-art approaches require deploying low-level DNS sensors to access data whose collection poses practical and privacy issues, making their adoption problematic. We propose a mechanism that overcomes the above limitations by analyzing DNS traffic data through a combination of linguistic and IP-based features of suspicious domains. In this way, we are able to identify AGD names, characterize their DGAs and isolate logical groups of domains that represent the respective botnets. Moreover, our system enriches these groups with new, previously unknown AGD names, and produces novel knowledge about the evolving behavior of each tracked botnet. We used our system in real-world settings to help researchers who requested intelligence on suspicious domains, which we were able to label as belonging to the correct botnet automatically. Additionally, we ran an evaluation on 1,153,516 domains, including AGDs from both modern (e.g., Bamital) and traditional (e.g., Conficker, Torpig) botnets. Our approach correctly isolated families of AGDs that belonged to distinct DGAs, and distinguished automatically generated from non-automatically generated domains in 94.8 percent of the cases.
In this category we include works that exploit the fact that machines (i.e., bots) infected by DGA-based malware cause the host-level DNS servers to generate disproportionately large numbers of NX responses. In particular, later work extends @cite_4 and introduces NXDOMAINs to speed up the detection of AGDs: AGDs are recognized because they are queried by any given client after a series of NXDOMAIN responses. That work differs from ours substantially, mainly because it requires DNS datasets that include the IP addresses of the querying clients. Moreover, the approach seems fragile on sampled datasets, and sampling is a required step when dealing with high-traffic networks.
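To make the NXDOMAIN-burst heuristic above concrete, the following minimal Python sketch flags domains that a client successfully resolves shortly after a run of NXDOMAIN responses; the thresholds, the event format (client, domain, rcode tuples in time order) and the function name are illustrative assumptions, not the detection logic of the cited systems.

from collections import defaultdict, deque

def flag_candidate_agds(events, burst_len=5, window=20):
    # events: iterable of (client_ip, domain, rcode) tuples in time order.
    # A successfully resolved domain is flagged when the same client produced
    # at least burst_len NXDOMAIN responses among its last `window` queries,
    # mimicking the fail-then-hit query pattern of DGA-infected hosts.
    history = defaultdict(lambda: deque(maxlen=window))
    flagged = set()
    for client, domain, rcode in events:
        recent = history[client]
        if rcode == "NOERROR" and sum(r == "NXDOMAIN" for r in recent) >= burst_len:
            flagged.add((client, domain))
        recent.append(rcode)
    return flagged

On a real trace such a filter would be complemented with whitelisting and with the linguistic features discussed earlier to reduce false positives.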
{ "cite_N": [ "@cite_4" ], "mid": [ "2136495567" ], "abstract": [ "Recent Botnets such as Conficker, Kraken and Torpig have used DNS based \"domain fluxing\" for command-and-control, where each Bot queries for existence of a series of domain names and the owner has to register only one such domain name. In this paper, we develop a methodology to detect such \"domain fluxes\" in DNS traffic by looking for patterns inherent to domain names that are generated algorithmically, in contrast to those generated by humans. In particular, we look at distribution of alphanumeric characters as well as bigrams in all domains that are mapped to the same set of IP-addresses. We present and compare the performance of several distance metrics, including KL-distance, Edit distance and Jaccard measure. We train by using a good data set of domains obtained via a crawl of domains mapped to all IPv4 address space and modeling bad data sets based on behaviors seen so far and expected. We also apply our methodology to packet traces collected at a Tier-1 ISP and show we can automatically detect domain fluxing as used by Conficker botnet with minimal false positives." ] }
1311.5587
2078884048
The paper describes the need for and goals of tool-integration within software development processes. In particular we focus on agile software development but are not limited to. The integration of tools and data between the different domains of the process is essential for an efficient, effective and customized software development. We describe what the next steps in the pursuit of integration are and how major goals can be achieved. Beyond theoretical and architectural considerations we describe the prototypical implementation of an open platform approach. The paper introduces platform apps and a functionality store as general concepts to make apps and their functionalities available to the community. We describe the implementation of the approach and how it can be practically utilized. The description is based on one major use case and further steps are motivated by various other examples.
Further, there are investigations describing integration solutions with goals similar to ours @cite_2 @cite_7 . Sinha @cite_2 elaborate on the issues of today's software development. In addition to the motivation for integration emphasized by Brown and Thomas, they mention increasingly distributed development as a further reason for the need for integration. They describe a conceptual framework (with an abstract architecture) that partly matches the ideas and concepts mentioned in . Using several real-life examples, they argue for the need for an integration framework.
{ "cite_N": [ "@cite_7", "@cite_2" ], "mid": [ "2043404089", "2101680643" ], "abstract": [ "Typical companies rely on their software ecosystems to support and optimise their business processes. There are a few proposals to help software engineers devise enterprise application integration solutions. Some companies need to adapt these proposals to particular contexts. Unfortunately, our analysis reveals that they are not so easy to maintain as expected. This motivated us to work on a new proposal that has been carefully designed in order to reduce maintainability efforts.", "In this position paper we argue that to effectively address coordination challenges in distributed software development, we need to go beyond making individual development tools more collaborative and design a framework that enables common understanding of the information from different tools and supports loose coupling between them. Stakeholders can go about doing their work in their choice of tools, and as long as these tools adhere to certain interface requirements imposed by the framework, information may be shared across tools in a seamless fashion. The framework should also be able to pick up on cues within the project execution environment, analyze them, and provide useful information as alerts and advisories to stakeholders. Finally, the framework should be adaptive to new tools introduced in the development process as well as to changing project governance needs. We present a preliminary architecture of the framework and discuss candidate technologies to realize it." ] }
1311.5587
2078884048
The paper describes the need for and goals of tool-integration within software development processes. In particular we focus on agile software development but are not limited to. The integration of tools and data between the different domains of the process is essential for an efficient, effective and customized software development. We describe what the next steps in the pursuit of integration are and how major goals can be achieved. Beyond theoretical and architectural considerations we describe the prototypical implementation of an open platform approach. The paper introduces platform apps and a functionality store as general concepts to make apps and their functionalities available to the community. We describe the implementation of the approach and how it can be practically utilized. The description is based on one major use case and further steps are motivated by various other examples.
Meanwhile, Frantz and Corchuelo @cite_7 address integration in the business context of enterprise application integration (EAI). Nevertheless, they state important facts that also apply to the integration of the software development process. They see a need for integration in order to be able to reuse software components (or tools) and to adapt or optimize IT to the needs of (business) processes, while we emphasize workflows within the software development process. Further, they introduce their own EAI framework and compare it to existing ones like Spring Integration or Camel. With regard to the goal of providing simple tool integration and development mechanisms, we consider the approach insufficient to serve as an integration environment for software development processes. In particular, the use of message patterns to detach data from the integrated applications can be problematic: messages have to be routed through the integration environment and translated into the application-specific data model format. Using message buses to decouple systems is a general EAI pattern, which we explicitly avoid by mapping directly onto a generic meta-model understood by all applications using our platform.
{ "cite_N": [ "@cite_7" ], "mid": [ "2043404089" ], "abstract": [ "Typical companies rely on their software ecosystems to support and optimise their business processes. There are a few proposals to help software engineers devise enterprise application integration solutions. Some companies need to adapt these proposals to particular contexts. Unfortunately, our analysis reveals that they are not so easy to maintain as expected. This motivated us to work on a new proposal that has been carefully designed in order to reduce maintainability efforts." ] }
1311.5587
2078884048
The paper describes the need for and goals of tool-integration within software development processes. In particular we focus on agile software development but are not limited to. The integration of tools and data between the different domains of the process is essential for an efficient, effective and customized software development. We describe what the next steps in the pursuit of integration are and how major goals can be achieved. Beyond theoretical and architectural considerations we describe the prototypical implementation of an open platform approach. The paper introduces platform apps and a functionality store as general concepts to make apps and their functionalities available to the community. We describe the implementation of the approach and how it can be practically utilized. The description is based on one major use case and further steps are motivated by various other examples.
Another approach to be mentioned here is pursued by Biehl @cite_0 . They build a service discovery and orchestration framework for OSLC services, which represent single development tools and systems. The idea of OSLC is to provide lifecycle data in a vendor-independent way, originally for application lifecycle management (ALM). To orchestrate tool functionalities, exposed via RESTful OSLC services, process chains are described by a domain-specific language. Decoupling vendor-specific models is a vital feature of OSLC, but it also has drawbacks due to the use of RESTful services. Bidirectional communication over RESTful services, although possible with workarounds, is by definition not considered in HTTP.
{ "cite_N": [ "@cite_0" ], "mid": [ "160775773" ], "abstract": [ "Globally distributed development of complex systems relies on the use of sophisticated development tools but today the tools provide only limited possibilities for integration into seamless tool chains. If development tools could be integrated, development data could be exchanged and tracing across remotely located tools would be possible and would increase the efficiency of globally distributed development. We use a domain specific modeling language to describe tool chains as models on a high level of abstraction. We use model-driven technology to synthesize the implementation of a service-oriented wrapper for each development tool based on OSLC (Open Services for Lifecyle Collaboration) and the orchestration of the services exposed by development tools. The wrapper exposes both tool data and functionality as web services, enabling platform independent tool integration. The orchestration allows us to discover remote tools via their service wrapper, integrate them and check the correctness of the orchestration." ] }
1311.5587
2078884048
The paper describes the need for and goals of tool-integration within software development processes. In particular we focus on agile software development but are not limited to. The integration of tools and data between the different domains of the process is essential for an efficient, effective and customized software development. We describe what the next steps in the pursuit of integration are and how major goals can be achieved. Beyond theoretical and architectural considerations we describe the prototypical implementation of an open platform approach. The paper introduces platform apps and a functionality store as general concepts to make apps and their functionalities available to the community. We describe the implementation of the approach and how it can be practically utilized. The description is based on one major use case and further steps are motivated by various other examples.
Nevertheless, the integration of applications and lifecycle data is neither a very new problem nor restricted exclusively to application lifecycle data. Analogous to the ALM integration of lifecycle data, the integration of other lifecycle data is treated by other branches, e.g., product lifecycle management (PLM). That is, the integration trend extends beyond the domain of software development. Srinivasan @cite_9 describes, for the PLM domain, an integration framework that uses standardized product data, meta-data models and standardized business processes on the one hand, and a service-oriented architecture concept to integrate different tools and systems on the other.
{ "cite_N": [ "@cite_9" ], "mid": [ "1983841261" ], "abstract": [ "Abstract The need for integrating business and technical information systems, allowing partners to collaborate effectively in creating innovative products, has motivated the design and deployment of a novel integration framework for product lifecycle management. The time is ripe for such an integration framework because of the convergence of three important developments, almost in a perfect storm: (1) maturity of standardized product data and meta-data models, and standardized engineering and business processes; (2) emergence of service-oriented architecture for information sharing; and (3) availability of robust middleware to implement them. These developments allow engineering and business objects and processes to be built or composed as modular pieces of software in the form of services that can communicate with each other and be used across different parts of a business. These modular software pieces can be reused and reconfigured in new ways as business conditions change, thereby saving time and money for companies. This paper describes the business and technical aspects of an integration framework for product lifecycle management using open standards and service-oriented architecture." ] }
1311.5587
2078884048
The paper describes the need for and goals of tool-integration within software development processes. In particular we focus on agile software development but are not limited to. The integration of tools and data between the different domains of the process is essential for an efficient, effective and customized software development. We describe what the next steps in the pursuit of integration are and how major goals can be achieved. Beyond theoretical and architectural considerations we describe the prototypical implementation of an open platform approach. The paper introduces platform apps and a functionality store as general concepts to make apps and their functionalities available to the community. We describe the implementation of the approach and how it can be practically utilized. The description is based on one major use case and further steps are motivated by various other examples.
Arguing for the need to continue advancing integration technologies, Seligman @cite_1 determine that existing tools, although already advanced, are still too costly and labor-intensive to build. Their solution is presented in the form of a platform. The platform consists of a (vendor) model-independent repository containing model schemas and a mapping of integrated tools, data importing and exporting components, and integration tools built on the Eclipse platform and its plug-in mechanism. Considering this architecture, a platform feature for easy and dynamic usage of integrated functionalities and data is still missing.
{ "cite_N": [ "@cite_1" ], "mid": [ "2050789550" ], "abstract": [ "OpenII (openintegration.org) is a collaborative effort to create a suite of open-source tools for information integration (II). The project is leveraging the latest developments in II research to create a platform on which integration tools can be built and further research conducted. In addition to a scalable, extensible platform, OpenII includes industrial-strength components developed by MITRE, Google, UC-Irvine, and UC-Berkeley that interoperate through a common repository in order to solve II problems. Components of the toolkit have been successfully applied to several large-scale US government II challenges." ] }
1311.4563
1674170833
In the incremental knapsack problem (IK), we are given a knapsack whose capacity grows weakly as a function of time. There is a time horizon of T periods and the capacity of the knapsack is B_t in period t for t = 1,...,T. We are also given a set S of N items to be placed in the knapsack. Item i has a value of v_i and a weight of w_i that is independent of the time period. At any time period t, the sum of the weights of the items in the knapsack cannot exceed the knapsack capacity B_t. Moreover, once an item is placed in the knapsack, it cannot be removed from the knapsack at a later time period. We seek to maximize the sum of (discounted) knapsack values over time subject to the capacity constraints. We first give a constant factor approximation algorithm for IK, under mild restrictions on the growth rate of B_t (the constant factor depends on the growth rate). We then give a PTAS for IIK, the special case of IK with no discounting, when T = O(√(log N)).
A special case of the generalized assignment problem where the items' weight and value are identical across knapsacks is known as the multiple knapsack problem (MKP); for this problem, Chekuri and Khanna @cite_2 developed a PTAS. Moreover, they also showed that two mild generalizations of the MKP ( @math and @math , or @math and @math ) are APX-hard, thus ruling out a PTAS for these generalizations, assuming @math . Again, neither the PTAS nor their hardness results are directly applicable to the @math .
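For concreteness, the MKP mentioned above can be written as the following standard integer program (a textbook formulation given here for illustration, not taken from the cited works); items i = 1,...,N have value v_i and weight w_i, bins j = 1,...,M have capacity B_j, and the binary variable x_{ij} indicates that item i is packed in bin j:

\max \sum_{i=1}^{N}\sum_{j=1}^{M} v_i x_{ij}
\quad\text{s.t.}\quad
\sum_{i=1}^{N} w_i x_{ij} \le B_j \;\; \forall j, \qquad
\sum_{j=1}^{M} x_{ij} \le 1 \;\; \forall i, \qquad
x_{ij} \in \{0,1\}.

The generalized assignment problem is obtained by letting values and weights depend on the bin, i.e., replacing v_i and w_i with v_{ij} and w_{ij}; this is the direction of generalization whose hardness boundary the results above help demarcate.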
{ "cite_N": [ "@cite_2" ], "mid": [ "2000444555" ], "abstract": [ "The multiple knapsack problem (MKP) is a natural and well-known generalization of the single knapsack problem and is defined as follows. We are given a set of @math items and @math bins (knapsacks) such that each item @math has a profit @math and a size @math , and each bin @math has a capacity @math . The goal is to find a subset of items of maximum profit such that they have a feasible packing in the bins. MKP is a special case of the generalized assignment problem (GAP) where the profit and the size of an item can vary based on the specific bin that it is assigned to. GAP is APX-hard and a 2-approximation, for it is implicit in the work of Shmoys and Tardos [Math. Program. A, 62 (1993), pp. 461-474], and thus far, this was also the best known approximation for MKP @. The main result of this paper is a polynomial time approximation scheme (PTAS) for MKP @. Apart from its inherent theoretical interest as a common generalization of the well-studied knapsack and bin packing problems, it appears to be the strongest special case of GAP that is not APX-hard. We substantiate this by showing that slight generalizations of MKP are APX-hard. Thus our results help demarcate the boundary at which instances of GAP become APX-hard. An interesting aspect of our approach is a PTAS-preserving reduction from an arbitrary instance of MKP to an instance with @math distinct sizes and profits." ] }
1311.4818
2949785466
This paper explores the idea of smart building evacuation when evacuees can belong to different categories with respect to their ability to move and their health conditions. This leads to new algorithms that use the Cognitive Packet Network concept to tailor different quality of service needs to different evacuees. These ideas are implemented in a simulated environment and evaluated with regard to their effectiveness.
Much research in emergency navigation focuses on ``normal'' individuals with identical physical attributes such as mobility and health level, and conventional models such as flow-based models @cite_0 @cite_5 @cite_4 @cite_14 @cite_31 @cite_27 treat evacuees as continuous homogeneous flows. Potential-maintenance approaches @cite_32 @cite_25 @cite_22 @cite_15 concentrate on navigation algorithms that do not take the physical attributes of evacuees into account. The ``magnetic model'' in @cite_34 sets a walking velocity for each evacuee but ignores personal requirements as well as the effect of social interaction. The advent of multi-agent models makes it more convenient to customise physical attributes for individuals. However, previous work has focused on incorporating sociological factors due to the ease of describing social behaviours such as coordination or stampede. Although in @cite_10 physical, psychological and moving attributes are considered for each evacuee, health factors such as initial health values and resistance to hazard, which are influenced by gender, age, etc., are not considered. Queueing models can also be used to evaluate congestion in such systems @cite_13 @cite_7 .
{ "cite_N": [ "@cite_13", "@cite_14", "@cite_4", "@cite_22", "@cite_7", "@cite_15", "@cite_32", "@cite_0", "@cite_27", "@cite_5", "@cite_31", "@cite_34", "@cite_10", "@cite_25" ], "mid": [ "2013566131", "", "2048345553", "2135035173", "2886744218", "2077010207", "2039252979", "2027465986", "", "166737843", "2170193147", "85172617", "", "2124941493" ], "abstract": [ "This paper is a survey of certain known results concerning networks of queues. The choice of the topics presented has been made with special emphasis on mathematical results which can be applied to the analysis and synthesis of mathematical or simulation models of complex computer systems in which an ensemble of resources is shared among a set of user programs. The subjects covered include the Jackson, and Gordon and Newell theorems; the work-rate theorems of Chang, Lavenberg and Traiger; the Baskett, Chandy, Muntz, Palacios results. We also discuss in a unified manner problems related to Poisson processes in queueing networks. Companion papers (Parts II, III) will present results concerning approximations to queueing networks and some of their applications to computer system performance evaluation.", "", "EVACNET+ is a user-friendly interactive computer program that allows the modeling of emergency building evacuations. An EVACNET+ model is a network consisting of a set of nodes connected by arcs. The nodes represent building components such as rooms, halls, landings, stairs and lobbies. The arcs represent the passageways between the building components. The program identifies optimal evacuation plans for user defined buildings.", "We present a new distributed routing protocol for mobile, multihop, wireless networks. The protocol is one of a family of protocols which we term \"link reversal\" algorithms. The protocol's reaction is structured as a temporally-ordered sequence of diffusing computations; each computation consisting of a sequence of directed link reversals. The protocol is highly adaptive, efficient and scalable; being best-suited for use in large, dense, mobile networks. In these networks, the protocol's reaction to link failures typically involves only a localized \"single pass\" of the distributed algorithm. This capability is unique among protocols which are stable in the face of network partitions, and results in the protocol's high degree of adaptivity. This desirable behavior is achieved through the novel use of a \"physical or logical clock\" to establish the \"temporal order\" of topological change events which is used to structure (or order) the algorithm's reaction to topological changes. We refer to the protocol as the temporally-ordered routing algorithm (TORA).", "", "Recently, Wireless Sensor Networks (WSNs) have been widely discussed in many applications. In this paper, we propose a novel Three-Dimensional (3D) emergency service that aims to guide people to safe places when emergencies happen. At normal time, the network is responsible for monitoring the environment. When emergency events are detected, the network can adaptively modify its topology to ensure transportation reliability, quickly identify hazardous regions that should be avoided and find safe navigation paths that can lead people to exits. In particular, the structures of Three-Dimensional buildings are taken into account in our design. 
Simulation results shows that our protocols can adapt emergencies quickly at low message cost and can find safer paths to exits than existing results", "We develop distributed algorithms for self-organizing sensor networks that respond to directing a target through a region. The sensor network models the danger levels sensed across its area and has the ability to adapt to changes. It represents the dangerous areas as obstacles. A protocol that combines the artificial potential field of the sensors with the goal location for the moving object guides the object incrementally across the network to the goal, while maintaining the safest distance to the danger areas. We give the analysis to the protocol and report on hardware experiments using a physical sensor network consisting of Mote sensors.", "", "", "", "A dynamic network consists of a graph with capacities and transit times on its edges. The quickest transshipment problem is de2ned by a dynamic network with several sources and sinks; each source has a speci2ed supply and each sink has a specified demand. The problem is to send exactly the right amount of 6ow out of each source and into each sink in the minimum overall time.Variations of the quickest transshipment problem have been studied extensively; the special case of the problem with a single sink is commonly used to model building evacuation. Similar dynamic network flow problems have numerous other applications; in some of these, the capacities are small integers and it is important to find integral flows. There are no polynomial-time algorithms known for most of these problems.In this paper we give the first polynomial-time algorithm for the quickest transshipment problem. Our algorithm provides an integral optimum flow. Previously, the quickest transshipment problem could only be solved efficiently in the special case of a single source and single sink.", "The objective of this study is the development of a computer simulation model for pedestrian movement in architectural and urban space. The characteristic of the model is the ability to visualize the movement of each pedestrian in a plan as an animation. So architects and designers can easily find and understand the problems in their design projects. In this model, the movement of each pedestrian is simulated by the motion of a magnetized object in a magnetic field. Positive magnetic pole is given to each pedestrian and obstacles like walls and columns. Negative magnetic pole is located at the goal of pedestrians. Each pedestrian moves to his goal by the attractive force caused by the negative magnetic pole at his goal, avoiding collisions with other pedestrians and obstacles by repulsive forces caused by the positive magnetic poles. The effectiveness of the simulation model is shown by the following two kinds of simulation examples. (1) Evacuation from an office building In this model pedestrians walk along the route from each starting point to the exit in case of evacuation. The example shows the places where stagnations and heavy congestions occur, and designers can see if the evacuation routes are appropriate. (2) Movement of pedestrians in queue spaces Three types of queuing behavior is classified in this model: movement in front of counters, movement passing through ratches, and movement of getting on and off in elevator halls. 
Simulation examples in a railway station and in a main floor of a resort hotel are shown where several kinds of queue spaces are included and complicated movements of hundreds of pedestrians occur.", "", "In an emergency, wireless network sensors combined with a navigation algorithm could help safely guide people to a building exit while helping them avoid hazardous areas. We propose a distributed navigation algorithm for emergency situations. At normal time, sensors monitor the environment. When the sensors detect emergency events, our protocol quickly separates hazardous areas from safe areas, and the sensors establish escape paths. Simulation and implementation results show that our scheme achieves navigation safety and quick convergence of the navigation directions. We based our protocol on the temporally ordered routing algorithm for mobile ad hoc networks. TORA assigns mobile nodes temporally ordered sequence numbers to support multipath routing from a source to a specific destination node" ] }
1311.4818
2949785466
This paper explores the idea of smart building evacuation when evacuees can belong to different categories with respect to their ability to move and their health conditions. This leads to new algorithms that use the Cognitive Packet Network concept to tailor different quality of service needs to different evacuees. These ideas are implemented in a simulated environment and evaluated with regard to their effectiveness.
Navigation algorithms proposed previously search for the shortest or the safest path; in @cite_32 a self-organizing sensor network is proposed to guide a user (a robot, a person, an unmanned aerial vehicle, etc.) along the safest path by using ``artificial potential fields'' @cite_30 . An attractive force pulls users towards the destination while repulsive forces from dangerous zones push them away. In @cite_25 a temporally ordered routing algorithm @cite_22 routes evacuees to exits through safer paths. A navigation map is manually defined to avoid impractical paths and each sensor is assigned an altitude with respect to its hop count to the nearest exit. Combining the definition of effective length with Dijkstra's algorithm, @cite_26 presents a decentralized evacuation system with decision nodes (DNs) and sensor nodes (SNs) to compute the shortest routes in real time, while emergency evacuation systems @cite_29 based on opportunistic communications @cite_28 have the advantage of being more robust to the network attacks @cite_9 that often accompany emergencies caused by malicious acts.
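As an illustration of the shortest-safest-path idea discussed above, the following Python sketch runs Dijkstra's algorithm on a building graph whose edge lengths are inflated by a hazard factor; the weighting length*(1 + hazard), the data layout and the function name are illustrative assumptions and not the effective-length definition used by the cited system.

import heapq

def safest_route(graph, hazard, source, exit_node):
    # graph: {node: [(neighbour, physical_length), ...]} describing corridors.
    # hazard: {(u, v): level >= 0}; edge cost = length * (1 + level), so
    # corridors near the hazard are penalised. Assumes the exit is reachable.
    dist, prev = {source: 0.0}, {}
    heap, visited = [(0.0, source)], set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)
        if u == exit_node:
            break
        for v, length in graph.get(u, []):
            cost = d + length * (1.0 + hazard.get((u, v), 0.0))
            if cost < dist.get(v, float("inf")):
                dist[v], prev[v] = cost, u
                heapq.heappush(heap, (cost, v))
    # walk the predecessor map back from the exit to the source
    path, node = [exit_node], exit_node
    while node != source:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

In a deployed system the hazard values would be refreshed from sensor readings and the routes recomputed as conditions evolve.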
{ "cite_N": [ "@cite_30", "@cite_26", "@cite_22", "@cite_28", "@cite_29", "@cite_9", "@cite_32", "@cite_25" ], "mid": [ "2059040514", "2150843819", "2135035173", "2032094375", "2159494053", "1995405963", "2039252979", "2124941493" ], "abstract": [ "This paper is a survey of research on autonomous search strategies which originate in engineering and biology. Our motivation is to identify methods of search in an essentially two-dimensional Euclidean space, which can be applied to the area of demining. Such search strategies are based on spatio-temporal distributions. These distributions may be known in advance, because of prior intelligence or through the use of remote sensing, or they may be the result of on-line gathering of information as the search progresses, or of both. We first review the literature on search and coordination which emanates from the field of robotics, we then summarize significant research in the field of animal search, and also discuss relevant results in robotics which are inspired by animal behavior.", "The evacuation of a building is a challenging problem, since the evacuees most of the times do not know or do not follow the optimal evacuation route. Especially during an ongoing hazard present in the building, finding the best evacuation route becomes harder as the conditions along the paths change in the course of the evacuation procedure. In this paper we propose a distributed system that will compute the best evacuation routes in real-time, while a hazard is spreading inside the building. The system is composed of a network of decision nodes and sensor nodes, positioned in specific locations inside the building. The recommendations of the decision nodes are computed in a distributed manner, at each of the decision nodes, which then communicate them to evacuees or rescue personnel located in their vicinity. We evaluate our proposed system in various emergency scenarios, using a multi-agent simulation platform for Building Evacuation. Our results indicate that the presence of the system improves the outcome of the evacuation with respect to the evacuation time and the injury level of the evacuees.", "We present a new distributed routing protocol for mobile, multihop, wireless networks. The protocol is one of a family of protocols which we term \"link reversal\" algorithms. The protocol's reaction is structured as a temporally-ordered sequence of diffusing computations; each computation consisting of a sequence of directed link reversals. The protocol is highly adaptive, efficient and scalable; being best-suited for use in large, dense, mobile networks. In these networks, the protocol's reaction to link failures typically involves only a localized \"single pass\" of the distributed algorithm. This capability is unique among protocols which are stable in the face of network partitions, and results in the protocol's high degree of adaptivity. This desirable behavior is achieved through the novel use of a \"physical or logical clock\" to establish the \"temporal order\" of topological change events which is used to structure (or order) the algorithm's reaction to topological changes. We refer to the protocol as the temporally-ordered routing algorithm (TORA).", "Opportunistic networks are one of the most interesting evolutions of MANETs. In opportunistic networks, mobile nodes are enabled to communicate with each other even if a route connecting them never exists. 
Furthermore, nodes are not supposed to possess or acquire any knowledge about the network topology, which (instead) is necessary in traditional MANET routing protocols. Routes are built dynamically, while messages are en route between the sender and the destination(s), and any possible node can opportunistically be used as next hop, provided it is likely to bring the message closer to the final destination. These requirements make opportunistic networks a challenging and promising research field. In this article we survey the most interesting case studies related to opportunistic networking and discuss and organize a taxonomy for the main routing and forwarding approaches in this challenging environment. We finally envision further possible scenarios to make opportunistic networks part of the next-generation Internet", "Opportunistic communications (oppcomms) use low-cost human wearable mobile nodes allowing the exchange of packets at a close range of a few to some tens of meters with limited or no infrastructure. Typically cheap pocket devices which are IEEE 802.15.4-2006 compliant can be used and they can communicate at 2m to 10m range, with local computational capabilities and some local memory. In this paper we consider the application of such devices to emergency situations when other means of communication have broken down. This paper evaluates whether oppcomms can improve the outcome of emergency evacuation in directing civilians safely. We describe an autonomous emergency support system (ESS) based on oppcomms to support evacuation of civilians in a built environment such as a building or supermarket. The proposed system uses a fixed infrastructure of sensor nodes (SNs) to monitor the environment. Hazard information obtained via SNs is disseminated to the individuals, and they spread among the people who are located in this built environment using oppcomm devices carried by these people. The information received by these people can then guide them safely to the exits as the emergency situation evolves over time. We evaluate our scheme using a distributed multi-agent building evacuation simulator (DBES) in the context of evacuation scenarios of a multi-storey office building in the presence of a fire that is spreading. The results show the degree of improvement that the oppcomms can offer. c � 2011 Published by Elsevier Ltd.", "Denial of service (DoS) attacks are a serious security threat for Internet based organisations, and effective methods are needed to detect an attack and defend the nodes being attacked in real time. We propose an autonomic approach to DoS defence based on detecting DoS flows, and adaptively dropping attacking packets upstream from the node being attacked using trace-back of the attacking flows. Our approach is based on the Cognitive Packet Network infrastructure which uses smart packets to select paths based on Quality of Service. This approach allows paths being used by a flow (including an attacking flow) to be identified, and also helps legitimate flows to find robust paths during an attack. We evaluate the proposed approach using a mathematical model, as well as using experiments in a laboratory test-bed. 
We then suggest a more sophisticated defence framework based on authenticity tests as part of the detection mechanism, and on assigning priorities to incoming traffic and rate-limiting it on the basis of the outcome of these tests.", "We develop distributed algorithms for self-organizing sensor networks that respond to directing a target through a region. The sensor network models the danger levels sensed across its area and has the ability to adapt to changes. It represents the dangerous areas as obstacles. A protocol that combines the artificial potential field of the sensors with the goal location for the moving object guides the object incrementally across the network to the goal, while maintaining the safest distance to the danger areas. We give the analysis to the protocol and report on hardware experiments using a physical sensor network consisting of Mote sensors.", "In an emergency, wireless network sensors combined with a navigation algorithm could help safely guide people to a building exit while helping them avoid hazardous areas. We propose a distributed navigation algorithm for emergency situations. At normal time, sensors monitor the environment. When the sensors detect emergency events, our protocol quickly separates hazardous areas from safe areas, and the sensors establish escape paths. Simulation and implementation results show that our scheme achieves navigation safety and quick convergence of the navigation directions. We based our protocol on the temporally ordered routing algorithm for mobile ad hoc networks. TORA assigns mobile nodes temporally ordered sequence numbers to support multipath routing from a source to a specific destination node" ] }