corpus_id | paper_id | title | abstract | source | bibtex | citation_key |
|---|---|---|---|---|---|---|
arxiv-4001 | 0806.1649 | Effect of spatial structure on the evolution of cooperation | <|reference_start|>Effect of spatial structure on the evolution of cooperation: Spatial structure is known to have an impact on the evolution of cooperation, and so it has been intensively studied during recent years. Previous work has shown the relevance of some features, such as the synchronicity of the updating, the clustering of the network or the influence of the update rule. This has been done, however, for concrete settings with particular games, networks and update rules, with the consequence that some contradictions have arisen and a general understanding of these topics is missing in the broader context of the space of 2x2 games. To address this issue, we have performed a systematic and exhaustive simulation in the different degrees of freedom of the problem. In some cases, we generalize previous knowledge to the broader context of our study and explain the apparent contradictions. In other cases, however, our conclusions refute what seems to be established opinions in the field, as for example the robustness of the effect of spatial structure against changes in the update rule, or offer new insights into the subject, e.g. the relation between the intensity of selection and the asymmetry between the effects on games with mixed equilibria.<|reference_end|> | arxiv | @article{roca2008effect,
title={Effect of spatial structure on the evolution of cooperation},
author={Carlos P. Roca and Jos\'e A. Cuesta and Angel S\'anchez},
journal={Phys. Rev. E 80, 046106 (2009)},
year={2008},
doi={10.1103/PhysRevE.80.046106},
archivePrefix={arXiv},
eprint={0806.1649},
primaryClass={q-bio.PE cs.GT physics.soc-ph}
} | roca2008effect |
arxiv-4002 | 0806.1659 | Bounds on the Sum Capacity of Synchronous Binary CDMA Channels | <|reference_start|>Bounds on the Sum Capacity of Synchronous Binary CDMA Channels: In this paper, we obtain a family of lower bounds for the sum capacity of Code Division Multiple Access (CDMA) channels assuming binary inputs and binary signature codes in the presence of additive noise with an arbitrary distribution. The envelope of this family gives a relatively tight lower bound in terms of the number of users, spreading gain and the noise distribution. The derivation methods for the noiseless and the noisy channels are different but when the noise variance goes to zero, the noisy channel bound approaches the noiseless case. The behavior of the lower bound shows that for small noise power, the number of users can be much more than the spreading gain without any significant loss of information (overloaded CDMA). A conjectured upper bound is also derived under the usual assumption that the users send out equally likely binary bits in the presence of additive noise with an arbitrary distribution. As the noise level increases, and/or, the ratio of the number of users and the spreading gain increases, the conjectured upper bound approaches the lower bound. We have also derived asymptotic limits of our bounds that can be compared to a formula that Tanaka obtained using techniques from statistical physics; his bound is close to that of our conjectured upper bound for large scale systems.<|reference_end|> | arxiv | @article{alishahi2008bounds,
title={Bounds on the Sum Capacity of Synchronous Binary CDMA Channels},
author={K. Alishahi and F. Marvasti and V. Aref and P. Pad},
journal={arXiv preprint arXiv:0806.1659},
year={2008},
doi={10.1109/TIT.2009.2023756},
archivePrefix={arXiv},
eprint={0806.1659},
primaryClass={cs.IT math.IT}
} | alishahi2008bounds |
arxiv-4003 | 0806.1711 | The Lotus-Eater Attack | <|reference_start|>The Lotus-Eater Attack: Many protocols for distributed and peer-to-peer systems have the feature that nodes will stop providing service for others once they have received a certain amount of service. Examples include BitTorrent's unchoking policy, BAR Gossip's balanced exchanges, and threshold strategies in scrip systems. An attacker can exploit this by providing service in a targeted way to prevent chosen nodes from providing service. While such attacks cannot be prevented, we discuss techniques that can be used to limit the damage they do. These techniques presume that a certain number of processes will follow the recommended protocol, even if they could do better by ``gaming'' the system.<|reference_end|> | arxiv | @article{kash2008the,
title={The Lotus-Eater Attack},
author={Ian A. Kash and Eric J. Friedman and Joseph Y. Halpern},
journal={arXiv preprint arXiv:0806.1711},
year={2008},
archivePrefix={arXiv},
eprint={0806.1711},
primaryClass={cs.NI cs.DC}
} | kash2008the |
arxiv-4004 | 0806.1722 | Fast Arithmetics Using Chinese Remaindering | <|reference_start|>Fast Arithmetics Using Chinese Remaindering: In this paper, some issues concerning the Chinese remaindering representation are discussed. Some new converting methods, including an efficient probabilistic algorithm based on a recent result of von zur Gathen and Shparlinski \cite{Gathen-Shparlinski}, are described. An efficient refinement of the NC$^1$ division algorithm of Chiu, Davida and Litow \cite{Chiu-Davida-Litow} is given, where the number of moduli is reduced by a factor of $\log n$.<|reference_end|> | arxiv | @article{davida2008fast,
title={Fast Arithmetics Using Chinese Remaindering},
author={George Davida and Bruce Litow and Guangwu Xu},
journal={arXiv preprint arXiv:0806.1722},
year={2008},
archivePrefix={arXiv},
eprint={0806.1722},
primaryClass={cs.DS}
} | davida2008fast |
arxiv-4005 | 0806.1724 | Self-overlapping Curves Revisited | <|reference_start|>Self-overlapping Curves Revisited: A surface embedded in space, in such a way that each point has a neighborhood within which the surface is a terrain, projects to an immersed surface in the plane, the boundary of which is a self-intersecting curve. Under what circumstances can we reverse these mappings algorithmically? Shor and van Wyk considered one such problem, determining whether a curve is the boundary of an immersed disk; they showed that the self-overlapping curves defined in this way can be recognized in polynomial time. We show that several related problems are more difficult: it is NP-complete to determine whether an immersed disk is the projection of a surface embedded in space, or whether a curve is the boundary of an immersed surface in the plane that is not constrained to be a disk. However, when a casing is supplied with a self-intersecting curve, describing which component of the curve lies above and which below at each crossing, we may determine in time linear in the number of crossings whether the cased curve forms the projected boundary of a surface in space. As a related result, we show that an immersed surface with a single boundary curve that crosses itself n times has at most 2^{n/2} combinatorially distinct spatial embeddings, and we discuss the existence of fixed-parameter tractable algorithms for related problems.<|reference_end|> | arxiv | @article{eppstein2008self-overlapping,
title={Self-overlapping Curves Revisited},
author={David Eppstein and Elena Mumford},
journal={arXiv preprint arXiv:0806.1724},
year={2008},
archivePrefix={arXiv},
eprint={0806.1724},
primaryClass={cs.CG}
} | eppstein2008self-overlapping |
arxiv-4006 | 0806.1727 | Bounded Budget Connection (BBC) Games or How to make friends and influence people, on a budget | <|reference_start|>Bounded Budget Connection (BBC) Games or How to make friends and influence people, on a budget: Motivated by applications in social networks, peer-to-peer and overlay networks, we define and study the Bounded Budget Connection (BBC) game - we have a collection of n players or nodes each of whom has a budget for purchasing links; each link has a cost as well as a length and each node has a set of preference weights for each of the remaining nodes; the objective of each node is to use its budget to buy a set of outgoing links so as to minimize its sum of preference-weighted distances to the remaining nodes. We study the structural and complexity-theoretic properties of pure Nash equilibria in BBC games. We show that determining the existence of a pure Nash equilibrium in general BBC games is NP-hard. However, in a natural variant, fractional BBC games - where it is permitted to buy fractions of links - a pure Nash equilibrium always exists. A major focus is the study of (n,k)-uniform BBC games - those in which all link costs, link lengths and preference weights are equal (to 1) and all budgets are equal (to k). We show that a pure Nash equilibrium or stable graph exists for all (n,k)-uniform BBC games and that all stable graphs are essentially fair (i.e. all nodes have similar costs). We provide an explicit construction of a family of stable graphs that spans the spectrum from minimum total social cost to maximum total social cost. We also study a special family of regular graphs in which all nodes imitate the "same" buying pattern, and show that if n is sufficiently large no such regular graph can be a pure Nash equilibrium. We analyze best-response walks on the configuration defined by the uniform game. 
Lastly, we extend our results to the case where each node seeks to minimize its maximum distance to the other nodes.<|reference_end|> | arxiv | @article{laoutaris2008bounded,
title={Bounded Budget Connection (BBC) Games or How to make friends and
influence people, on a budget},
author={Nikolaos Laoutaris and Laura J. Poplawski and Rajmohan Rajaraman and Ravi Sundaram and Shang-Hua Teng},
journal={arXiv preprint arXiv:0806.1727},
year={2008},
archivePrefix={arXiv},
eprint={0806.1727},
primaryClass={cs.GT cs.NI}
} | laoutaris2008bounded |
arxiv-4007 | 0806.1749 | Consistency and Completeness of Rewriting in the Calculus of Constructions | <|reference_start|>Consistency and Completeness of Rewriting in the Calculus of Constructions: Adding rewriting to a proof assistant based on the Curry-Howard isomorphism, such as Coq, may greatly improve usability of the tool. Unfortunately adding an arbitrary set of rewrite rules may render the underlying formal system undecidable and inconsistent. While ways to ensure termination and confluence, and hence decidability of type-checking, have already been studied to some extent, logical consistency has got little attention so far. In this paper we show that consistency is a consequence of canonicity, which in turn follows from the assumption that all functions defined by rewrite rules are complete. We provide a sound and terminating, but necessarily incomplete algorithm to verify this property. The algorithm accepts all definitions that follow dependent pattern matching schemes presented by Coquand and studied by McBride in his PhD thesis. It also accepts many definitions by rewriting, containing rules which depart from standard pattern matching.<|reference_end|> | arxiv | @article{walukiewicz-chrzaszcz2008consistency,
title={Consistency and Completeness of Rewriting in the Calculus of
Constructions},
author={Daria Walukiewicz-Chrzaszcz and Jacek Chrzaszcz},
journal={Logical Methods in Computer Science, Volume 4, Issue 3 (September
15, 2008) lmcs:1141},
year={2008},
doi={10.2168/LMCS-4(3:8)2008},
archivePrefix={arXiv},
eprint={0806.1749},
primaryClass={cs.LO cs.SC}
} | walukiewicz-chrzaszcz2008consistency |
arxiv-4008 | 0806.1768 | Local Read-Write Operations in Sensor Networks | <|reference_start|>Local Read-Write Operations in Sensor Networks: Designing protocols and formulating convenient programming units of abstraction for sensor networks is challenging due to communication errors and platform constraints. This paper investigates properties and implementation reliability for a \emph{local read-write} abstraction. Local read-write is inspired by the class of read-modify-write operations defined for shared-memory multiprocessor architectures. The class of read-modify-write operations is important in solving consensus and related synchronization problems for concurrency control. Local read-write is shown to be an atomic abstraction for synchronizing neighborhood states in sensor networks. The paper compares local read-write to similar lightweight operations in wireless sensor networks, such as read-all, write-all, and a transaction-based abstraction: for some optimistic scenarios, local read-write is a more efficient neighborhood operation. A partial implementation is described, which shows that three outcomes characterize operation response: success, failure, and cancel. A failure response indicates possible inconsistency for the operation result, which is the result of a timeout event at the operation's initiator. The paper presents experimental results on operation performance with different timeout values and situations of no contention, with some tests also on various neighborhood sizes.<|reference_end|> | arxiv | @article{herman2008local,
title={Local Read-Write Operations in Sensor Networks},
author={Ted Herman and Morten Mjelde},
journal={arXiv preprint arXiv:0806.1768},
year={2008},
number={TR08-02},
archivePrefix={arXiv},
eprint={0806.1768},
primaryClass={cs.OS cs.DC}
} | herman2008local |
arxiv-4009 | 0806.1796 | Evaluation for Uncertain Image Classification and Segmentation | <|reference_start|>Evaluation for Uncertain Image Classification and Segmentation: Each year, numerous segmentation and classification algorithms are invented or reused to solve problems where machine vision is needed. Generally, the efficiency of these algorithms is compared against the results given by one or many human experts. However, in many situations, the location of the real boundaries of the objects as well as their classes are not known with certainty by the human experts. Furthermore, only one aspect of the segmentation and classification problem is generally evaluated. In this paper we present a new evaluation method for classification and segmentation of image, where we take into account both the classification and segmentation results as well as the level of certainty given by the experts. As a concrete example of our method, we evaluate an automatic seabed characterization algorithm based on sonar images.<|reference_end|> | arxiv | @article{martin2008evaluation,
title={Evaluation for Uncertain Image Classification and Segmentation},
author={Arnaud Martin (E3I2) and Hicham Laanaya (E3I2) and Andreas Arnold-Bos (E3I2)},
journal={Pattern Recognition 39, 11 (2006) 1987-1995},
year={2008},
archivePrefix={arXiv},
eprint={0806.1796},
primaryClass={cs.CV cs.AI}
} | martin2008evaluation |
arxiv-4010 | 0806.1797 | A new generalization of the proportional conflict redistribution rule stable in terms of decision | <|reference_start|>A new generalization of the proportional conflict redistribution rule stable in terms of decision: In this chapter, we present and discuss a new generalized proportional conflict redistribution rule. The Dezert-Smarandache extension of the Dempster-Shafer theory has relaunched the studies on the combination rules, especially for the management of the conflict. Many combination rules have been proposed in the last few years. We study here different combination rules and compare them in terms of decision on a didactic example and on generated data. Indeed, in real applications, we need a reliable decision and it is the final results that matter. This chapter shows that a fine proportional conflict redistribution rule must be preferred for the combination in the belief function theory.<|reference_end|> | arxiv | @article{martin2008a,
title={A new generalization of the proportional conflict redistribution rule
stable in terms of decision},
author={Arnaud Martin (E3I2) and Christophe Osswald (E3I2)},
journal={Advances and Applications of DSmT for Information Fusion,
Florentin Smarandache & Jean Dezert (Ed.) (2006) 69-88},
year={2008},
archivePrefix={arXiv},
eprint={0806.1797},
primaryClass={cs.AI}
} | martin2008a |
arxiv-4011 | 0806.1798 | Human expert fusion for image classification | <|reference_start|>Human expert fusion for image classification: In image classification, merging the opinions of several human experts is very important for different tasks such as evaluation or training. Indeed, the ground truth is rarely known before the scene is imaged. We propose here different models in order to fuse the information given by two or more experts. The considered unit for the classification, a small tile of the image, can contain one or more kinds of the considered classes given by the experts. A second problem that we have to take into account is the amount of certainty the expert has for each pixel of the tile. In order to solve these problems we define five models in the context of the Dempster-Shafer Theory and in the context of the Dezert-Smarandache Theory and we study the possible decisions with these models.<|reference_end|> | arxiv | @article{martin2008human,
title={Human expert fusion for image classification},
author={Arnaud Martin (E3I2) and Christophe Osswald (E3I2)},
journal={Information & Security. An International Journal 20 (2006) 122-141},
year={2008},
archivePrefix={arXiv},
eprint={0806.1798},
primaryClass={cs.CV cs.AI}
} | martin2008human |
arxiv-4012 | 0806.1802 | Une nouvelle r\`egle de combinaison r\'epartissant le conflit - Applications en imagerie Sonar et classification de cibles Radar | <|reference_start|>Une nouvelle r\`egle de combinaison r\'epartissant le conflit - Applications en imagerie Sonar et classification de cibles Radar: In recent years, there have been many studies on the problem of the conflict arising from information combination, especially in evidence theory. The solutions for managing the conflict can be summarised into three different approaches: first, we can try to suppress or reduce the conflict before the combination step; second, we can keep the conflict from influencing the combination step and take it into account only in the decision step; third, we can take the conflict into account in the combination step. The first approach is certainly the best, but not always feasible. It is difficult to say which of the second and third approaches is better. Ultimately, what matters is the results produced in applications. We propose here a new combination rule that distributes the conflict proportionally over the elements giving rise to it. We compare these different combination rules on real data in Sonar imagery and Radar target classification.<|reference_end|> | arxiv | @article{martin2008une,
title={Une nouvelle r\`egle de combinaison r\'epartissant le conflit -
Applications en imagerie Sonar et classification de cibles Radar},
author={Arnaud Martin (E3I2) and Christophe Osswald (E3I2)},
journal={Revue Traitement du Signal 24, 2 (2007) 71-82},
year={2008},
archivePrefix={arXiv},
eprint={0806.1802},
primaryClass={cs.AI}
} | martin2008une |
arxiv-4013 | 0806.1806 | Perfect Derived Propagators | <|reference_start|>Perfect Derived Propagators: When implementing a propagator for a constraint, one must decide about variants: When implementing min, should one also implement max? Should one implement linear equations both with and without coefficients? Constraint variants are ubiquitous: implementing them requires considerable (if not prohibitive) effort and decreases maintainability, but will deliver better performance. This paper shows how to use variable views, previously introduced for an implementation architecture, to derive perfect propagator variants. A model for views and derived propagators is introduced. Derived propagators are proved to be indeed perfect in that they inherit essential properties such as correctness and domain and bounds consistency. Techniques for systematically deriving propagators such as transformation, generalization, specialization, and channeling are developed for several variable domains. We evaluate the massive impact of derived propagators. Without derived propagators, Gecode would require 140000 rather than 40000 lines of code for propagators.<|reference_end|> | arxiv | @article{schulte2008perfect,
title={Perfect Derived Propagators},
author={Christian Schulte and Guido Tack},
journal={arXiv preprint arXiv:0806.1806},
year={2008},
archivePrefix={arXiv},
eprint={0806.1806},
primaryClass={cs.AI}
} | schulte2008perfect |
arxiv-4014 | 0806.1812 | A probabilistic key agreement scheme for sensor networks without key predistribution | <|reference_start|>A probabilistic key agreement scheme for sensor networks without key predistribution: The dynamic establishment of shared information (e.g. secret key) between two entities is particularly important in networks with no pre-determined structure such as wireless sensor networks (and in general wireless mobile ad-hoc networks). In such networks, nodes establish and terminate communication sessions dynamically with other nodes which may have never been encountered before, in order to somehow exchange information which will enable them to subsequently communicate in a secure manner. In this paper we give and theoretically analyze a series of protocols that enables two entities that have never encountered each other before to establish a shared piece of information for use as a key in setting up a secure communication session with the aid of a shared key encryption algorithm. These protocols do not require previous pre-distribution of candidate keys or some other piece of information of specialized form except a small seed value, from which the two entities can produce arbitrarily long strings with many similarities.<|reference_end|> | arxiv | @article{liagkou2008a,
title={A probabilistic key agreement scheme for sensor networks without key
predistribution},
author={V. Liagkou and E. Makri and P. Spirakis and Y.C. Stamatiou},
journal={arXiv preprint arXiv:0806.1812},
year={2008},
archivePrefix={arXiv},
eprint={0806.1812},
primaryClass={cs.CR}
} | liagkou2008a |
arxiv-4015 | 0806.1816 | Cardinality heterogeneities in Web service composition: Issues and solutions | <|reference_start|>Cardinality heterogeneities in Web service composition: Issues and solutions: Data exchanges between Web services engaged in a composition raise several heterogeneities. In this paper, we address the problem of data cardinality heterogeneity in a composition. Firstly, we build a theoretical framework to describe different aspects of Web services that relate to data cardinality, and secondly, we solve this problem by developing a solution for cardinality mediation based on constraint logic programming.<|reference_end|> | arxiv | @article{mrissa2008cardinality,
title={Cardinality heterogeneities in Web service composition: Issues and
solutions},
author={M. Mrissa and Ph. Thiran and J-M. Jacquet and D. Benslimane and Z. Maamar},
journal={arXiv preprint arXiv:0806.1816},
year={2008},
archivePrefix={arXiv},
eprint={0806.1816},
primaryClass={cs.SE cs.DB}
} | mrissa2008cardinality |
arxiv-4016 | 0806.1819 | A Low-Complexity, Full-Rate, Full-Diversity 2 X 2 STBC with Golden Code's Coding Gain | <|reference_start|>A Low-Complexity, Full-Rate, Full-Diversity 2 X 2 STBC with Golden Code's Coding Gain: This paper presents a low-ML-decoding-complexity, full-rate, full-diversity space-time block code (STBC) for a 2 transmit antenna, 2 receive antenna multiple-input multiple-output (MIMO) system, with coding gain equal to that of the best and well known Golden code for any QAM constellation. Recently, two codes have been proposed (by Paredes, Gershman and Alkhansari and by Sezginer and Sari), which enjoy a lower decoding complexity relative to the Golden code, but have lesser coding gain. The $2\times 2$ STBC presented in this paper has lesser decoding complexity for non-square QAM constellations, compared with that of the Golden code, while having the same decoding complexity for square QAM constellations. Compared with the Paredes-Gershman-Alkhansari and Sezginer-Sari codes, the proposed code has the same decoding complexity for non-rectangular QAM constellations. Simulation results, which compare the codeword error rate (CER) performance, are presented.<|reference_end|> | arxiv | @article{srinath2008a,
title={A Low-Complexity, Full-Rate, Full-Diversity 2 X 2 STBC with Golden
Code's Coding Gain},
author={K. Pavan Srinath and B. Sundar Rajan},
journal={arXiv preprint arXiv:0806.1819},
year={2008},
doi={10.1109/GLOCOM.2008.ECP.235},
archivePrefix={arXiv},
eprint={0806.1819},
primaryClass={cs.IT math.IT}
} | srinath2008a |
arxiv-4017 | 0806.1827 | Full Abstraction for a Recursively Typed Lambda Calculus with Parallel Conditional | <|reference_start|>Full Abstraction for a Recursively Typed Lambda Calculus with Parallel Conditional: We define the syntax and reduction relation of a recursively typed lambda calculus with a parallel case-function (a parallel conditional). The reduction is shown to be confluent. We interpret the recursive types as information systems in a restricted form, which we call prime systems. A denotational semantics is defined with this interpretation. We define the syntactical normal form approximations of a term and prove the Approximation Theorem: The semantics of a term equals the limit of the semantics of its approximations. The proof uses inclusive predicates (logical relations). The semantics is adequate with respect to the observation of Boolean values. It is also fully abstract in the presence of the parallel case-function.<|reference_end|> | arxiv | @article{müller2008full,
title={Full Abstraction for a Recursively Typed Lambda Calculus with Parallel
Conditional},
author={Fritz M\"uller},
journal={arXiv preprint arXiv:0806.1827},
year={2008},
number={revised Report 12/1993 of SFB 124, Informatik, Universitaet des
Saarlandes},
archivePrefix={arXiv},
eprint={0806.1827},
primaryClass={cs.LO}
} | müller2008full |
arxiv-4018 | 0806.1834 | A Low-decoding-complexity, Large coding Gain, Full-rate, Full-diversity STBC for 4 X 2 MIMO System | <|reference_start|>A Low-decoding-complexity, Large coding Gain, Full-rate, Full-diversity STBC for 4 X 2 MIMO System: This paper proposes a low decoding complexity, full-diversity and full-rate space-time block code (STBC) for 4 transmit and 2 receive ($4\times 2$) multiple-input multiple-output (MIMO) systems. For such systems, the best code known is the DjABBA code and recently, Biglieri, Hong and Viterbo have proposed another STBC (BHV code) which has lower decoding complexity than DjABBA but does not have full-diversity like the DjABBA code. The code proposed in this paper has the same decoding complexity as the BHV code for square QAM constellations but has full-diversity as well. Compared to the best code in the DjABBA family of codes, our code has lower decoding complexity, a better coding gain and hence a better error performance as well. Simulation results confirming these are presented.<|reference_end|> | arxiv | @article{srinath2008a,
title={A Low-decoding-complexity, Large coding Gain, Full-rate, Full-diversity
STBC for 4 X 2 MIMO System},
author={K. Pavan Srinath and B. Sundar Rajan},
journal={arXiv preprint arXiv:0806.1834},
year={2008},
archivePrefix={arXiv},
eprint={0806.1834},
primaryClass={cs.IT math.IT}
} | srinath2008a |
arxiv-4019 | 0806.1843 | An adaptive routing strategy for packet delivery in complex networks | <|reference_start|>An adaptive routing strategy for packet delivery in complex networks: We present an efficient routing approach for delivering packets in complex networks. On delivering a message from a node to a destination, a node forwards the message to a neighbor by estimating the waiting time along the shortest path from each of its neighbors to the destination. This projected waiting time is dynamical in nature and the path through which a message is delivered would be adapted to the distribution of messages in the network. Implementing the approach on scale-free networks, we show that the present approach performs better than the shortest-path approach and another approach that takes into account of the waiting time only at the neighboring nodes. Key features in numerical results are explained by a mean field theory. The approach has the merit that messages are distributed among the nodes according to the capabilities of the nodes in handling messages.<|reference_end|> | arxiv | @article{zhang2008an,
title={An adaptive routing strategy for packet delivery in complex networks},
author={Huan Zhang and Zonghua Liu and Ming Tang and P.M. Hui},
journal={arXiv preprint arXiv:0806.1843},
year={2008},
doi={10.1016/j.physleta.2006.12.009},
archivePrefix={arXiv},
eprint={0806.1843},
primaryClass={cs.NI}
} | zhang2008an |
arxiv-4020 | 0806.1845 | An efficient approach of controlling traffic congestion in scale-free networks | <|reference_start|>An efficient approach of controlling traffic congestion in scale-free networks: We propose and study a model of traffic in communication networks. The underlying network has a structure that is tunable between a scale-free growing network with preferential attachments and a random growing network. To model realistic situations where different nodes in a network may have different capabilities, the message or packet creation and delivering rates at a node are assumed to depend on the degree of the node. Noting that congestions are more likely to take place at the nodes with high degrees in networks with scale-free character, an efficient approach of selectively enhancing the message-processing capability of a small fraction (e.g. 3%) of the nodes is shown to perform just as good as enhancing the capability of all nodes. The interplay between the creation rate and the delivering rate in determining non-congested or congested traffic in a network is studied more numerically and analytically.<|reference_end|> | arxiv | @article{liua2008an,
title={An efficient approach of controlling traffic congestion in scale-free
networks},
author={Zonghua Liu and Weichuan Ma and Huan Zhang and Yin Sun and P.M. Hui},
journal={arXiv preprint arXiv:0806.1845},
year={2008},
doi={10.1016/j.physa.2006.02.021},
archivePrefix={arXiv},
eprint={0806.1845},
primaryClass={cs.NI}
} | liua2008an |
arxiv-4021 | 0806.1846 | Detrended fluctuation analysis of traffic data | <|reference_start|>Detrended fluctuation analysis of traffic data: Different routing strategies may result in different behaviors of traffic on internet. We analyze the correlation of traffic data for three typical routing strategies by the detrended fluctuation analysis (DFA) and find that the degree of correlation of the data can be divided into three regions, i.e., weak, medium, and strong correlation. The DFA scalings are constants in both the regions of weak and strong correlation but monotonously increase in the region of medium correlation. We suggest that it is better to consider the traffic on complex network as three phases, i.e., the free, buffer, and congestion phase, than just as two phases believed before, i.e., the free and congestion phase.<|reference_end|> | arxiv | @article{zhu2008detrended,
title={Detrended fluctuation analysis of traffic data},
author={Xiaoyan Zhu and Zonghua Liu and Ming Tang},
journal={CPL 24, 7 (2007) 2142},
year={2008},
archivePrefix={arXiv},
eprint={0806.1846},
primaryClass={cs.NI}
} | zhu2008detrended |
arxiv-4022 | 0806.1893 | D\'efinition d'une structure adaptative de r\'eseau local sans fil \`a consommation optimis\'ee | <|reference_start|>D\'efinition d'une structure adaptative de r\'eseau local sans fil \`a consommation optimis\'ee: The strong growth of low-rate wireless personal area networks (LR-WPAN) leads us to consider autonomy problems, that is, node lifetime in a network, since replacing power supplies is often difficult. The mobility inherent in this type of equipment is an essential element; it imposes routing constraints and thus a complex problem to solve. This article provides lines of work to assess the performance of such a network in terms of energy consumption and mobility. As the objectives are contradictory, a compromise must necessarily be found. In addition, if we want to guarantee a maximum delay for the transmitted messages, a possibility offered by the IEEE 802.15.4 standard, another compromise is needed between a strictly fixed structure and a fully mobile structure. We therefore present a quantification of the energy cost as a function of the desired data rate and of the sleep duration of nodes in the network. We then open lines of reflection to find the best compromise between consumption, mobility and guaranteed deadlines, suggesting an adaptive network structure based on the MANET concept.<|reference_end|> | arxiv | @article{chebira2008d\'efinition,
title={D\'efinition d'une structure adaptative de r\'eseau local sans fil \`a
consommation optimis\'ee},
author={Sabri Chebira, Gilles Mercier (LATTIS), Jackson Francomme (LATTIS)},
journal={3\`eme Conf\'erence internationale Sciences \'Electroniques,
Technologies de l'Information et des T\'el\'ecommunications, Sousse : Tunisie
(2005)},
year={2008},
archivePrefix={arXiv},
eprint={0806.1893},
primaryClass={cs.NI}
} | chebira2008d\'efinition |
arxiv-4023 | 0806.1895 | \'Evaluation d'une application de transmission d'images m\'edicales avec un r\'eseau sans fil | <|reference_start|>\'Evaluation d'une application de transmission d'images m\'edicales avec un r\'eseau sans fil: We offer a platform for database consultation and/or biomedical image exchange, adapted to low-rate wireless transmission and intended for general practitioners or specialists. The goal can be preventive, diagnostic or therapeutic. It concerns specialties such as radiology, ultrasound, anatomical pathology or endoscopy. The main requirement in such a context is to adjust data compression to both the specific needs of telemedicine and the limited capabilities of wireless communication networks. We present our approach, in which we have set out criteria on biomedical image quality, compressed by the wavelet method so as to retain all the information necessary for an accurate diagnosis, and determined the characteristics of a wireless network with the minimal performance required for the transmission of these images within constraints related to the modality and the data flow, in this case WiFi based on the IEEE 802.11 standard. Our results assess the capacity of this standard, in terms of speed, to transmit images at a rate of 10 frames per second. It is necessary to quantify the amount of information to add to the image data to enable transmission in good conditions, and the appropriate modus operandi.<|reference_end|> | arxiv | @article{francomme2008evaluation,
title={\'Evaluation d'une application de transmission d'images m\'edicales avec
un r\'eseau sans fil},
author={Jackson Francomme (LATTIS), Gilles Mercier (LATTIS), Sabri Chebira},
journal={3\`eme Conf\'erence internationale Sciences \'Electroniques,
Technologies de l'Information et des T\'el\'ecommunications, Sousse : Tunisie
(2005)},
year={2008},
archivePrefix={arXiv},
eprint={0806.1895},
primaryClass={cs.NI}
} | francomme2008evaluation |
arxiv-4024 | 0806.1918 | Analysis of Social Voting Patterns on Digg | <|reference_start|>Analysis of Social Voting Patterns on Digg: The social Web is transforming the way information is created and distributed. Blog authoring tools enable users to publish content, while sites such as Digg and Del.icio.us are used to distribute content to a wider audience. With content fast becoming a commodity, interest in using social networks to promote and find content has grown, both on the side of content producers (viral marketing) and consumers (recommendation). Here we study the role of social networks in promoting content on Digg, a social news aggregator that allows users to submit links to and vote on news stories. Digg's goal is to feature the most interesting stories on its front page, and it aggregates opinions of its many users to identify them. Like other social networking sites, Digg allows users to designate other users as ``friends'' and see what stories they found interesting. We studied the spread of interest in news stories submitted to Digg in June 2006. Our results suggest that the pattern of the spread of interest in a story on the network is indicative of how popular the story will become. Stories that spread mainly outside of the submitter's neighborhood go on to be very popular, while stories that spread mainly through the submitter's social neighborhood prove not to be very popular. This effect is visible already in the early stages of voting, and one can make a prediction about the potential audience of a story simply by analyzing where the initial votes come from.<|reference_end|> | arxiv | @article{lerman2008analysis,
title={Analysis of Social Voting Patterns on Digg},
author={Kristina Lerman and Aram Galstyan},
journal={arXiv preprint arXiv:0806.1918},
year={2008},
archivePrefix={arXiv},
eprint={0806.1918},
primaryClass={cs.CY cs.IR}
} | lerman2008analysis |
arxiv-4025 | 0806.1919 | Non-linear index coding outperforming the linear optimum | <|reference_start|>Non-linear index coding outperforming the linear optimum: The following source coding problem was introduced by Birk and Kol: a sender holds a word $x\in\{0,1\}^n$, and wishes to broadcast a codeword to $n$ receivers, $R_1,...,R_n$. The receiver $R_i$ is interested in $x_i$, and has prior \emph{side information} comprising some subset of the $n$ bits. This corresponds to a directed graph $G$ on $n$ vertices, where $i \to j$ is an edge iff $R_i$ knows the bit $x_j$. An \emph{index code} for $G$ is an encoding scheme which enables each $R_i$ to always reconstruct $x_i$, given his side information. The minimal word length of an index code was studied by Bar-Yossef, Birk, Jayram and Kol (FOCS 2006). They introduced a graph parameter, $\minrk_2(G)$, which completely characterizes the length of an optimal \emph{linear} index code for $G$. The authors of BBJK showed that in various cases linear codes attain the optimal word length, and conjectured that linear index coding is in fact \emph{always} optimal. In this work, we disprove the main conjecture of BBJK in the following strong sense: for any $\epsilon > 0$ and sufficiently large $n$, there is an $n$-vertex graph $G$ so that every linear index code for $G$ requires codewords of length at least $n^{1-\epsilon}$, and yet a non-linear index code for $G$ has a word length of $n^\epsilon$. This is achieved by an explicit construction, which extends Alon's variant of the celebrated Ramsey construction of Frankl and Wilson. In addition, we study optimal index codes in various, less restricted, natural models, and prove several related properties of the graph parameter $\minrk(G)$.<|reference_end|> | arxiv | @article{lubetzky2008non-linear,
title={Non-linear index coding outperforming the linear optimum},
author={Eyal Lubetzky, Uri Stav},
journal={arXiv preprint arXiv:0806.1919},
year={2008},
archivePrefix={arXiv},
eprint={0806.1919},
primaryClass={cs.IT math.IT}
} | lubetzky2008non-linear |
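As a toy illustration of the index-coding setting in this entry (our own standard example, not the paper's construction): when the side-information graph is a complete graph -- every receiver knows every bit except its own -- a single XOR of all n bits is a valid linear index code of length 1.

```python
def encode_clique(bits):
    """Broadcast one bit: the XOR of all input bits."""
    code = 0
    for b in bits:
        code ^= b
    return code

def decode_clique(code, i, bits):
    """Receiver i, who knows every bit except bits[i], recovers bits[i]."""
    x = code
    for j, b in enumerate(bits):
        if j != i:
            x ^= b  # XOR out the known side information
    return x
```

The paper's point is that for some graphs no linear code comes close to the non-linear optimum; this clique example only shows the mechanics of side information.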
arxiv-4026 | 0806.1931 | Information-Theoretically Secure Voting Without an Honest Majority | <|reference_start|>Information-Theoretically Secure Voting Without an Honest Majority: We present three voting protocols with unconditional privacy and information-theoretic correctness, without assuming any bound on the number of corrupt voters or voting authorities. All protocols have polynomial complexity and require private channels and a simultaneous broadcast channel. Our first protocol is a basic voting scheme which allows voters to interact in order to compute the tally. Privacy of the ballot is unconditional, but any voter can cause the protocol to fail, in which case information about the tally may nevertheless transpire. Our second protocol introduces voting authorities which allow the implementation of the first protocol, while reducing the interaction and limiting it to be only between voters and authorities and among the authorities themselves. The simultaneous broadcast is also limited to the authorities. As long as a single authority is honest, the privacy is unconditional, however, a single corrupt authority or a single corrupt voter can cause the protocol to fail. Our final protocol provides a safeguard against corrupt voters by enabling a verification technique to allow the authorities to revoke incorrect votes. We also discuss the implementation of a simultaneous broadcast channel with the use of temporary computational assumptions, yielding versions of our protocols achieving everlasting security.<|reference_end|> | arxiv | @article{broadbent2008information-theoretically,
title={Information-Theoretically Secure Voting Without an Honest Majority},
author={Anne Broadbent and Alain Tapp},
journal={arXiv preprint arXiv:0806.1931},
year={2008},
archivePrefix={arXiv},
eprint={0806.1931},
primaryClass={cs.CR}
} | broadbent2008information-theoretically |
arxiv-4027 | 0806.1945 | Complexity in atoms: an approach with a new analytical density | <|reference_start|>Complexity in atoms: an approach with a new analytical density: In this work, the calculation of complexity on atomic systems is considered. In order to unveil the increase of this statistical magnitude with the atomic number due to the relativistic effects, recently reported in [A. Borgoo, F. De Proft, P. Geerlings, K.D. Sen, Chem. Phys. Lett., 444 (2007) 186], a new analytical density to describe neutral atoms is proposed. This density is inspired by the Tietz potential model. The parameters of this density are determined from the normalization condition and from a variational calculation of the energy, which is a functional of the density. The density is non-singular at the origin and its specific form is selected so as to fit the results coming from non-relativistic Hartree-Fock calculations. The main ingredients of the energy functional are the non-relativistic kinetic energy, the nuclear-electron attraction energy and the classical term of the electron repulsion. The relativistic correction to the kinetic energy and the Weizsacker term are also taken into account. The Dirac and the correlation terms are shown to be less important than the other terms and they have been discarded in this study. When the statistical measure of complexity is calculated in position space with the analytical density derived from this model, the increasing trend of this magnitude as the atomic number increases is also found.<|reference_end|> | arxiv | @article{sanudo2008complexity,
title={Complexity in atoms: an approach with a new analytical density},
author={Jaime Sanudo and Ricardo Lopez-Ruiz},
journal={arXiv preprint arXiv:0806.1945},
year={2008},
archivePrefix={arXiv},
eprint={0806.1945},
primaryClass={nlin.CD cs.IT math.IT physics.atom-ph quant-ph}
} | sanudo2008complexity |
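The "statistical measure of complexity" in this entry is the LMC (Lopez-Ruiz-Mancini-Calbet) product C = H * D of entropy and disequilibrium. A minimal discrete sketch (the paper works with continuous electron densities; the discretization here is our simplification):

```python
import math

def lmc_complexity(probs):
    """C = H * D: Shannon entropy times squared distance from uniformity."""
    n = len(probs)
    h = -sum(p * math.log(p) for p in probs if p > 0)   # entropy H
    d = sum((p - 1.0 / n) ** 2 for p in probs)          # disequilibrium D
    return h * d
```

C vanishes for both the uniform (D = 0) and the deterministic (H = 0) distributions and is positive in between, which is what makes its growth with atomic number a non-trivial observation.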
arxiv-4028 | 0806.1948 | Tight Bounds for Hashing Block Sources | <|reference_start|>Tight Bounds for Hashing Block Sources: It is known that if a 2-universal hash function $H$ is applied to elements of a {\em block source} $(X_1,...,X_T)$, where each item $X_i$ has enough min-entropy conditioned on the previous items, then the output distribution $(H,H(X_1),...,H(X_T))$ will be ``close'' to the uniform distribution. We provide improved bounds on how much min-entropy per item is required for this to hold, both when we ask that the output be close to uniform in statistical distance and when we only ask that it be statistically close to a distribution with small collision probability. In both cases, we reduce the dependence of the min-entropy on the number $T$ of items from $2\log T$ in previous work to $\log T$, which we show to be optimal. This leads to corresponding improvements to the recent results of Mitzenmacher and Vadhan (SODA '08) on the analysis of hashing-based algorithms and data structures when the data items come from a block source.<|reference_end|> | arxiv | @article{chung2008tight,
title={Tight Bounds for Hashing Block Sources},
author={Kai-Min Chung, Salil Vadhan},
journal={arXiv preprint arXiv:0806.1948},
year={2008},
archivePrefix={arXiv},
eprint={0806.1948},
primaryClass={cs.DS}
} | chung2008tight |
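The 2-universal hash functions in this entry can be instantiated by the classic Carter-Wegman family h_{a,b}(x) = ((a*x + b) mod p) mod m, for which any two distinct keys collide for at most a 1/m fraction of the (a, b) pairs. An illustrative sketch (our own, not tied to the paper's constructions):

```python
def make_hash(a, b, p, m):
    """h_{a,b}(x) = ((a*x + b) mod p) mod m, with p prime and 1 <= a < p."""
    def h(x):
        return ((a * x + b) % p) % m
    return h

def collision_rate(x, y, p, m):
    """Fraction of (a, b) pairs for which distinct keys x, y collide."""
    collisions, total = 0, 0
    for a in range(1, p):
        for b in range(p):
            h = make_hash(a, b, p, m)
            total += 1
            if h(x) == h(y):
                collisions += 1
    return collisions / total
```

Enumerating the whole family for a small prime confirms the universal-hashing bound of at most 1/m empirically.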
arxiv-4029 | 0806.1978 | Max Cut and the Smallest Eigenvalue | <|reference_start|>Max Cut and the Smallest Eigenvalue: We describe a new approximation algorithm for Max Cut. Our algorithm runs in $\tilde O(n^2)$ time, where $n$ is the number of vertices, and achieves an approximation ratio of $.531$. On instances in which an optimal solution cuts a $1-\epsilon$ fraction of edges, our algorithm finds a solution that cuts a $1-4\sqrt{\epsilon} + 8\epsilon-o(1)$ fraction of edges. Our main result is a variant of spectral partitioning, which can be implemented in nearly linear time. Given a graph in which the Max Cut optimum is a $1-\epsilon$ fraction of edges, our spectral partitioning algorithm finds a set $S$ of vertices and a bipartition $L,R=S-L$ of $S$ such that at least a $1-O(\sqrt \epsilon)$ fraction of the edges incident on $S$ have one endpoint in $L$ and one endpoint in $R$. (This can be seen as an analog of Cheeger's inequality for the smallest eigenvalue of the adjacency matrix of a graph.) Iterating this procedure yields the approximation results stated above. A different, more complicated, variant of spectral partitioning leads to an $\tilde O(n^3)$ time algorithm that cuts $1/2 + e^{-\Omega(1/\epsilon)}$ fraction of edges in graphs in which the optimum is $1/2 + \epsilon$.<|reference_end|> | arxiv | @article{trevisan2008max,
title={Max Cut and the Smallest Eigenvalue},
author={Luca Trevisan},
journal={arXiv preprint arXiv:0806.1978},
year={2008},
archivePrefix={arXiv},
eprint={0806.1978},
primaryClass={cs.DS}
} | trevisan2008max |
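The spectral idea in this entry -- use an eigenvector for the smallest eigenvalue of the adjacency matrix, in analogy with Cheeger's inequality -- can be illustrated by the simplest sign-based cut (a toy sketch, not the paper's algorithm, which recurses on a vertex subset S):

```python
def smallest_eigvec(adj, iters=500):
    """Power iteration on (c*I - A): its top eigenvector is A's bottom one."""
    n = len(adj)
    c = max(sum(row) for row in adj) + 1.0   # shift by more than the max degree
    v = [1.0 if i % 2 == 0 else -1.0 for i in range(n)]  # fixed start vector
    for _ in range(iters):
        w = [c * v[i] - sum(adj[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

def spectral_cut(adj):
    """Bipartition vertices by eigenvector sign; return (sides, cut fraction)."""
    n = len(adj)
    v = smallest_eigvec(adj)
    side = [v[i] >= 0 for i in range(n)]
    cut = total = 0
    for i in range(n):
        for j in range(i + 1, n):
            if adj[i][j]:
                total += 1
                cut += side[i] != side[j]
    return side, cut / total
```

On a bipartite graph the smallest eigenvalue is -d for a d-regular side split, and the sign cut recovers the full bipartition.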
arxiv-4030 | 0806.1984 | Classification of curves in 2D and 3D via affine integral signatures | <|reference_start|>Classification of curves in 2D and 3D via affine integral signatures: We propose a robust classification algorithm for curves in 2D and 3D, under the special and full groups of affine transformations. To each plane or spatial curve we assign a plane signature curve. Curves, equivalent under an affine transformation, have the same signature. The signatures introduced in this paper are based on integral invariants, which behave much better on noisy images than classically known differential invariants. The comparison with other types of invariants is given in the introduction. Though the integral invariants for planar curves were known before, the affine integral invariants for spatial curves are proposed here for the first time. Using the inductive variation of the moving frame method we compute affine invariants in terms of Euclidean invariants. We present two types of signatures, the global signature and the local signature. Both signatures are independent of parameterization (curve sampling). The global signature depends on the choice of the initial point and does not allow us to compare fragments of curves, and is therefore sensitive to occlusions. The local signature, although is slightly more sensitive to noise, is independent of the choice of the initial point and is not sensitive to occlusions in an image. It helps establish local equivalence of curves. The robustness of these invariants and signatures in their application to the problem of classification of noisy spatial curves extracted from a 3D object is analyzed.<|reference_end|> | arxiv | @article{feng2008classification,
title={Classification of curves in 2D and 3D via affine integral signatures},
author={S. Feng, I. A. Kogan, H. Krim},
journal={arXiv preprint arXiv:0806.1984},
year={2008},
archivePrefix={arXiv},
eprint={0806.1984},
primaryClass={cs.CV}
} | feng2008classification |
arxiv-4031 | 0806.2006 | Fusion de classifieurs pour la classification d'images sonar | <|reference_start|>Fusion de classifieurs pour la classification d'images sonar: In this paper, we present some high level information fusion approaches for numeric and symbolic data. We study the interest of such methods, particularly for classifier fusion. A comparative study is made in a context of sea bed characterization from sonar images. The classification of the kind of sediment is a difficult problem because of the data complexity. We compare high level information fusion approaches and report the obtained performance.<|reference_end|> | arxiv | @article{martin2008fusion,
title={Fusion de classifieurs pour la classification d'images sonar},
author={Arnaud Martin (E3I2)},
journal={Revue Nationale des Technologies de l'Information E, 5 (2005)
259-268},
year={2008},
archivePrefix={arXiv},
eprint={0806.2006},
primaryClass={cs.CV cs.AI}
} | martin2008fusion |
arxiv-4032 | 0806.2007 | Experts Fusion and Multilayer Perceptron Based on Belief Learning for Sonar Image Classification | <|reference_start|>Experts Fusion and Multilayer Perceptron Based on Belief Learning for Sonar Image Classification: The sonar images provide a rapid view of the seabed in order to characterize it. However, in such an uncertain environment, the real seabed is unknown, and the only information we can obtain is the interpretation of different human experts, which are sometimes in conflict. In this paper, we propose to manage this conflict in order to provide a robust reality for the learning step of classification algorithms. The classification is conducted by a multilayer perceptron, taking into account the uncertainty of the reality in the learning stage. The results of this seabed characterization are presented on real sonar images.<|reference_end|> | arxiv | @article{martin2008experts,
title={Experts Fusion and Multilayer Perceptron Based on Belief Learning for
Sonar Image Classification},
author={Arnaud Martin (E3I2), Christophe Osswald (E3I2)},
journal={arXiv preprint arXiv:0806.2007},
year={2008},
archivePrefix={arXiv},
eprint={0806.2007},
primaryClass={cs.CV cs.AI}
} | martin2008experts |
arxiv-4033 | 0806.2008 | Generalized proportional conflict redistribution rule applied to Sonar imagery and Radar targets classification | <|reference_start|>Generalized proportional conflict redistribution rule applied to Sonar imagery and Radar targets classification: In this chapter, we present two applications in information fusion in order to evaluate the generalized proportional conflict redistribution rule presented in the chapter \cite{Martin06a}. Most of the time the combination rules are evaluated only on simple examples. We study here different combination rules and compare them in terms of decision on real data. Indeed, in real applications, we need a reliable decision, and it is the final results that matter. Two applications are presented here: a fusion of human experts' opinions on the kind of underwater sediments depicted on sonar images, and a classifier fusion for radar target recognition.<|reference_end|> | arxiv | @article{martin2008generalized,
title={Generalized proportional conflict redistribution rule applied to Sonar
imagery and Radar targets classification},
author={Arnaud Martin (E3I2), Christophe Osswald (E3I2)},
journal={Advances and Applications of DSmT for Information Fusion,
Florentin Smarandache & Jean Dezert (Ed.) (2006) 289-304},
year={2008},
archivePrefix={arXiv},
eprint={0806.2008},
primaryClass={cs.CV cs.AI}
} | martin2008generalized |
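The combination rules compared in this chapter are built on the conjunctive rule over basic belief assignments; the mass that lands on the empty set is the conflict that proportional conflict redistribution (PCR) rules then reallocate. A minimal sketch of the conjunctive step (our own, not the authors' implementation):

```python
def conjunctive_combination(m1, m2):
    """Conjunctive rule on basic belief assignments.

    Focal elements are frozensets; the mass accumulated on frozenset()
    (the empty set) is the conflict that PCR rules redistribute."""
    out = {}
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            out[inter] = out.get(inter, 0.0) + wa * wb
    return out
```

For two sources with incompatible focal elements, the empty-set mass quantifies exactly how much the sources disagree.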
arxiv-4034 | 0806.2035 | Nodal distances for rooted phylogenetic trees | <|reference_start|>Nodal distances for rooted phylogenetic trees: Dissimilarity measures for (possibly weighted) phylogenetic trees based on the comparison of their vectors of path lengths between pairs of taxa, have been present in the systematics literature since the early seventies. But, as far as rooted phylogenetic trees goes, these vectors can only separate non-weighted binary trees, and therefore these dissimilarity measures are metrics only on this class. In this paper we overcome this problem, by splitting in a suitable way each path length between two taxa into two lengths. We prove that the resulting splitted path lengths matrices single out arbitrary rooted phylogenetic trees with nested taxa and arcs weighted in the set of positive real numbers. This allows the definition of metrics on this general class by comparing these matrices by means of metrics in spaces of real-valued $n\times n$ matrices. We conclude this paper by establishing some basic facts about the metrics for non-weighted phylogenetic trees defined in this way using $L^p$ metrics on these spaces of matrices.<|reference_end|> | arxiv | @article{cardona2008nodal,
title={Nodal distances for rooted phylogenetic trees},
author={Gabriel Cardona, Merce Llabres, Francesc Rossello, Gabriel Valiente},
journal={arXiv preprint arXiv:0806.2035},
year={2008},
archivePrefix={arXiv},
eprint={0806.2035},
primaryClass={q-bio.PE cs.CE cs.DM}
} | cardona2008nodal |
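The "splitted path lengths" of this entry replace each leaf-to-leaf distance by the pair of distances from the two taxa to their lowest common ancestor. A hedged sketch for trees given as parent maps with edge weights (the encoding and all names are our own assumptions):

```python
def depth_path(parent, weight, node):
    """Cumulative distances from node up to each of its ancestors."""
    dist = {node: 0.0}
    d, cur = 0.0, node
    while cur in parent:            # weight[v] is the length of edge v -> parent[v]
        d += weight[cur]
        cur = parent[cur]
        dist[cur] = d
    return dist

def splitted_lengths(parent, weight, x, y):
    """(distance x -> LCA, distance y -> LCA) for taxa x and y."""
    dx = depth_path(parent, weight, x)
    dy = depth_path(parent, weight, y)
    common = set(dx) & set(dy)
    lca = min(common, key=lambda v: dx[v] + dy[v])  # deepest common ancestor
    return dx[lca], dy[lca]

def lp_distance(pairs_a, pairs_b, p=1):
    """L^p distance between two maps of splitted-length pairs (same keys)."""
    total = 0.0
    for key in pairs_a:
        (a1, a2), (b1, b2) = pairs_a[key], pairs_b[key]
        total += abs(a1 - b1) ** p + abs(a2 - b2) ** p
    return total ** (1.0 / p)
```

Comparing the splitted-length matrices of two trees with an L^p metric yields the kind of dissimilarity measure the abstract analyzes.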
arxiv-4035 | 0806.2068 | A simple, polynomial-time algorithm for the matrix torsion problem | <|reference_start|>A simple, polynomial-time algorithm for the matrix torsion problem: The Matrix Torsion Problem (MTP) is: given a square matrix M with rational entries, decide whether two distinct powers of M are equal. It has been shown by Cassaigne and the author that the MTP reduces to the Matrix Power Problem (MPP) in polynomial time: given two square matrices A and B with rational entries, the MPP is to decide whether B is a power of A. Since the MPP is decidable in polynomial time, so is the MTP. However, the algorithm for MPP is highly non-trivial. The aim of this note is to present a simple, direct, polynomial-time algorithm for the MTP.<|reference_end|> | arxiv | @article{nicolas2008a,
title={A simple, polynomial-time algorithm for the matrix torsion problem},
author={Francois Nicolas},
journal={arXiv preprint arXiv:0806.2068},
year={2008},
archivePrefix={arXiv},
eprint={0806.2068},
primaryClass={cs.DM cs.DS}
} | nicolas2008a |
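The paper's contribution is a simple polynomial-time algorithm; as a point of contrast, the naive approach compares powers directly with exact rational arithmetic up to some exponent bound (the bound below is arbitrary and ours -- avoiding having to guess such a bound is precisely what a real decision procedure must do):

```python
from fractions import Fraction

def mat_mul(A, B):
    """Exact product of two square matrices over the rationals."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def has_torsion(M, bound=24):
    """True iff M^i == M^j for some 1 <= i < j <= bound (brute force)."""
    M = [[Fraction(x) for x in row] for row in M]
    seen = set()
    P = [row[:] for row in M]       # P holds M^e in the loop below
    for _ in range(bound):
        key = tuple(tuple(row) for row in P)
        if key in seen:
            return True
        seen.add(key)
        P = mat_mul(P, M)
    return False
```

A 90-degree rotation matrix has period 4 (so M^1 = M^5), while any scaling by a factor other than a root of unity never repeats.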
arxiv-4036 | 0806.2084 | On the existence of compactly supported reconstruction functions in a sampling problem | <|reference_start|>On the existence of compactly supported reconstruction functions in a sampling problem: Assume that samples of a filtered version of a function in a shift-invariant space are available. This work deals with the existence of a sampling formula involving these samples and having reconstruction functions with compact support. Thus, low computational complexity is involved and truncation errors are avoided. This is done in the light of the generalized sampling theory by using the oversampling technique: more samples than strictly necessary are used. For a suitable choice of the sampling period, a necessary and sufficient condition is given in terms of the Kronecker canonical form of a matrix pencil. Compared with other characterizations in the mathematical literature, the one given here has an important advantage: it can be reliably computed by using the GUPTRI form of the matrix pencil. Finally, a practical method for computing the compactly supported reconstruction functions is given for the important case where the oversampling rate is minimum.<|reference_end|> | arxiv | @article{garcia2008on,
title={On the existence of compactly supported reconstruction functions in a
sampling problem},
author={A. G. Garcia, M. A. Hernandez-Medina, and G. Perez-Villalon},
journal={arXiv preprint arXiv:0806.2084},
year={2008},
archivePrefix={arXiv},
eprint={0806.2084},
primaryClass={cs.IT math.FA math.IT math.NA}
} | garcia2008on |
arxiv-4037 | 0806.2090 | Finding the theta-Guarded Region | <|reference_start|>Finding the theta-Guarded Region: We are given a finite set of n points (guards) G in the plane R^2 and an angle 0 < theta < 2 pi. A theta-cone is a cone with apex angle theta. We call a theta-cone empty (with respect to G) if it does not contain any point of G. A point p in R^2 is called theta-guarded if every theta-cone with its apex located at p is non-empty. Furthermore, the set of all theta-guarded points is called the theta-guarded region, or the theta-region for short. We present several results on this topic. The main contribution of our work is to describe the theta-region with O(n/theta) circular arcs, and we give an algorithm to compute it. We prove a tight O(n) worst-case bound on the complexity of the theta-region for theta >= pi/2. In case theta is bounded from below by a positive constant, we prove an almost linear bound O(n^(1+epsilon)) for any epsilon > 0 on the complexity. Moreover, we show that there is a sequence of inputs such that the asymptotic bound on the complexity of their theta-region is Omega(n^2). In addition we point out gaps in the proofs of a recent publication that claims an O(n) bound on the complexity for any constant angle theta.<|reference_end|> | arxiv | @article{matijević2008finding,
title={Finding the theta-Guarded Region},
author={Domagoj Matijevi'c and Ralf Osbild},
journal={arXiv preprint arXiv:0806.2090},
year={2008},
archivePrefix={arXiv},
eprint={0806.2090},
primaryClass={cs.CG}
} | matijević2008finding |
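A point test for theta-guardedness follows directly from the definition in this entry: a theta-cone with apex p can be empty iff some angular gap between consecutive guard directions, as seen from p, is at least theta. A sketch of this test (assuming p does not coincide with a guard; the names are ours):

```python
import math

def is_theta_guarded(p, guards, theta):
    """p is theta-guarded iff every angular gap between guard directions,
    as seen from p, is smaller than theta (no empty theta-cone fits)."""
    angles = sorted(math.atan2(gy - p[1], gx - p[0]) for gx, gy in guards)
    gaps = [b - a for a, b in zip(angles, angles[1:])]
    gaps.append(2 * math.pi - (angles[-1] - angles[0]))  # wrap-around gap
    return max(gaps) < theta
```

With four guards on the axes, every gap at the origin is pi/2, so the origin is theta-guarded exactly when theta exceeds pi/2.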
arxiv-4038 | 0806.2096 | Geometry of antimatroidal point sets | <|reference_start|>Geometry of antimatroidal point sets: The notion of "antimatroid with repetition" was conceived by Bjorner, Lovasz and Shor in 1991 as a multiset extension of the notion of antimatroid. When the underlying set consists of only two elements, such two-dimensional antimatroids correspond to point sets in the plane. In this research we concentrate on geometrical properties of antimatroidal point sets in the plane and prove that these sets are exactly parallelogram polyominoes. Our results imply that two-dimensional antimatroids have convex dimension 2. The second part of the research is devoted to geometrical properties of three-dimensional antimatroids closed under intersection.<|reference_end|> | arxiv | @article{kempner2008geometry,
title={Geometry of antimatroidal point sets},
author={Yulia Kempner and Vadim E. Levit},
journal={arXiv preprint arXiv:0806.2096},
year={2008},
archivePrefix={arXiv},
eprint={0806.2096},
primaryClass={math.CO cs.DM}
} | kempner2008geometry |
arxiv-4039 | 0806.2139 | Beyond Nash Equilibrium: Solution Concepts for the 21st Century | <|reference_start|>Beyond Nash Equilibrium: Solution Concepts for the 21st Century: Nash equilibrium is the most commonly-used notion of equilibrium in game theory. However, it suffers from numerous problems. Some are well known in the game theory community; for example, the Nash equilibrium of repeated prisoner's dilemma is neither normatively nor descriptively reasonable. However, new problems arise when considering Nash equilibrium from a computer science perspective: for example, Nash equilibrium is not robust (it does not tolerate ``faulty'' or ``unexpected'' behavior), it does not deal with coalitions, it does not take computation cost into account, and it does not deal with cases where players are not aware of all aspects of the game. Solution concepts that try to address these shortcomings of Nash equilibrium are discussed.<|reference_end|> | arxiv | @article{halpern2008beyond,
title={Beyond Nash Equilibrium: Solution Concepts for the 21st Century},
author={Joseph Y. Halpern},
journal={arXiv preprint arXiv:0806.2139},
year={2008},
archivePrefix={arXiv},
eprint={0806.2139},
primaryClass={cs.GT cs.AI cs.CR cs.DC}
} | halpern2008beyond |
arxiv-4040 | 0806.2140 | Defaults and Normality in Causal Structures | <|reference_start|>Defaults and Normality in Causal Structures: A serious defect with the Halpern-Pearl (HP) definition of causality is repaired by combining a theory of causality with a theory of defaults. In addition, it is shown that (despite a claim to the contrary) a cause according to the HP condition need not be a single conjunct. A definition of causality motivated by Wright's NESS test is shown to always hold for a single conjunct. Moreover, conditions that hold for all the examples considered by HP are given that guarantee that causality according to (this version) of the NESS test is equivalent to the HP definition.<|reference_end|> | arxiv | @article{halpern2008defaults,
title={Defaults and Normality in Causal Structures},
author={Joseph Y. Halpern},
journal={arXiv preprint arXiv:0806.2140},
year={2008},
archivePrefix={arXiv},
eprint={0806.2140},
primaryClass={cs.AI}
} | halpern2008defaults |
arxiv-4041 | 0806.2159 | Communication-optimal parallel and sequential QR and LU factorizations: theory and practice | <|reference_start|>Communication-optimal parallel and sequential QR and LU factorizations: theory and practice: We present parallel and sequential dense QR factorization algorithms that are both optimal (up to polylogarithmic factors) in the amount of communication they perform, and just as stable as Householder QR. Our first algorithm, Tall Skinny QR (TSQR), factors m-by-n matrices in a one-dimensional (1-D) block cyclic row layout, and is optimized for m >> n. Our second algorithm, CAQR (Communication-Avoiding QR), factors general rectangular matrices distributed in a two-dimensional block cyclic layout. It invokes TSQR for each block column factorization.<|reference_end|> | arxiv | @article{demmel2008communication-optimal,
title={Communication-optimal parallel and sequential QR and LU factorizations:
theory and practice},
author={James Demmel, Laura Grigori, Mark Hoemmen, and Julien Langou},
journal={arXiv preprint arXiv:0806.2159},
year={2008},
number={LAPACK Working Note 204},
archivePrefix={arXiv},
eprint={0806.2159},
primaryClass={cs.NA}
} | demmel2008communication-optimal |
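The TSQR idea for tall-skinny matrices in this entry -- factor row blocks independently, then factor the stack of small R factors -- can be sketched with a flat (one-level) reduction tree; the paper's versions use block-cyclic layouts and deeper trees. This sketch assumes each block has at least n rows:

```python
import numpy as np

def tsqr(A, block_rows):
    """Flat-tree TSQR: A = Q R via QR on row blocks, then QR of stacked R's."""
    m, n = A.shape
    qs, rs = [], []
    for start in range(0, m, block_rows):
        Qi, Ri = np.linalg.qr(A[start:start + block_rows])
        qs.append(Qi)
        rs.append(Ri)
    Q2, R = np.linalg.qr(np.vstack(rs))      # combine the n-by-n R factors
    # The global Q applies each local Qi to its n-row slice of Q2.
    Q = np.vstack([qs[i] @ Q2[i * n:(i + 1) * n] for i in range(len(qs))])
    return Q, R
```

Only the small R factors travel between blocks, which is the source of the communication savings the paper quantifies.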
arxiv-4042 | 0806.2198 | Capacity-achieving CPM schemes | <|reference_start|>Capacity-achieving CPM schemes: The pragmatic approach to coded continuous-phase modulation (CPM) is proposed as a capacity-achieving low-complexity alternative to the serially-concatenated CPM (SC-CPM) coding scheme. In this paper, we first perform a selection of the best spectrally-efficient CPM modulations to be embedded into SC-CPM schemes. Then, we consider the pragmatic capacity (a.k.a. BICM capacity) of CPM modulations and optimize it through a careful design of the mapping between input bits and CPM waveforms. The so obtained schemes are cascaded with an outer serially-concatenated convolutional code to form a pragmatic coded-modulation system. The resulting schemes exhibit performance very close to the CPM capacity without requiring iterations between the outer decoder and the CPM demodulator. As a result, the receiver exhibits reduced complexity and increased flexibility due to the separation of the demodulation and decoding functions.<|reference_end|> | arxiv | @article{perotti2008capacity-achieving,
title={Capacity-achieving CPM schemes},
author={Alberto Perotti, Alberto Tarable, Sergio Benedetto and Guido Montorsi},
journal={arXiv preprint arXiv:0806.2198},
year={2008},
doi={10.1109/TIT.2010.2040861},
archivePrefix={arXiv},
eprint={0806.2198},
primaryClass={cs.IT math.IT}
} | perotti2008capacity-achieving |
arxiv-4043 | 0806.2216 | An Intelligent Multi-Agent Recommender System for Human Capacity Building | <|reference_start|>An Intelligent Multi-Agent Recommender System for Human Capacity Building: This paper presents a Multi-Agent approach to the problem of recommending training courses to engineering professionals. The recommendation system is built as a proof of concept and limited to the electrical and mechanical engineering disciplines. Through user modelling and data collection from a survey, collaborative filtering recommendation is implemented using intelligent agents. The agents work together in recommending meaningful training courses and updating the course information. The system uses a user's profile and keywords from courses to rank courses. A ranking accuracy for courses of 90% is achieved, while flexibility is provided by an agent that retrieves information autonomously from websites using data mining techniques. This manner of recommendation is scalable and adaptable. Further improvements can be made using clustering and recording user feedback.<|reference_end|> | arxiv | @article{marivate2008an,
title={An Intelligent Multi-Agent Recommender System for Human Capacity
Building},
author={Vukosi N. Marivate, George Ssali and Tshilidzi Marwala},
journal={arXiv preprint arXiv:0806.2216},
year={2008},
doi={10.1109/MELCON.2008.4618553},
archivePrefix={arXiv},
eprint={0806.2216},
primaryClass={cs.AI cs.HC}
} | marivate2008an |
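As a hedged stand-in for the keyword-based course ranking step described in this entry (the actual system uses collaborative filtering with intelligent agents; the Jaccard score and all names here are our own):

```python
def rank_courses(profile_keywords, courses):
    """Sort (course, keywords) pairs by Jaccard overlap with the profile."""
    profile = set(profile_keywords)
    def score(keywords):
        kw = set(keywords)
        union = profile | kw
        return len(profile & kw) / len(union) if union else 0.0
    return sorted(courses, key=lambda c: score(c[1]), reverse=True)
```

Courses sharing more keywords with the user's profile rank higher; a real system would blend this with ratings from similar users.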
arxiv-4044 | 0806.2264 | Effective lambda-models vs recursively enumerable lambda-theories | <|reference_start|>Effective lambda-models vs recursively enumerable lambda-theories: A longstanding open problem is whether there exists a non-syntactical model of the untyped lambda-calculus whose theory is exactly the least lambda-theory (l-beta). In this paper we investigate the more general question of whether the equational/order theory of a model of the (untyped) lambda-calculus can be recursively enumerable (r.e. for brevity). We introduce a notion of effective model of lambda-calculus, which covers in particular all the models individually introduced in the literature. We prove that the order theory of an effective model is never r.e.; from this it follows that its equational theory cannot be l-beta or l-beta-eta. We then show that no effective model living in the stable or strongly stable semantics has an r.e. equational theory. Concerning Scott's semantics, we investigate the class of graph models and prove that no order theory of a graph model can be r.e., and that there exists an effective graph model whose equational/order theory is minimum among all theories of graph models. Finally, we show that the class of graph models enjoys a kind of downwards Lowenheim-Skolem theorem.<|reference_end|> | arxiv | @article{berline2008effective,
title={Effective lambda-models vs recursively enumerable lambda-theories},
author={Chantal Berline (PPS), Giulio Manzonetto (PPS), Antonio Salibra},
journal={arXiv preprint arXiv:0806.2264},
year={2008},
archivePrefix={arXiv},
eprint={0806.2264},
primaryClass={math.LO cs.LO}
} | berline2008effective |
arxiv-4045 | 0806.2274 | Exposing Multi-Relational Networks to Single-Relational Network Analysis Algorithms | <|reference_start|>Exposing Multi-Relational Networks to Single-Relational Network Analysis Algorithms: Many, if not most network analysis algorithms have been designed specifically for single-relational networks; that is, networks in which all edges are of the same type. For example, edges may either represent "friendship," "kinship," or "collaboration," but not all of them together. In contrast, a multi-relational network is a network with a heterogeneous set of edge labels which can represent relationships of various types in a single data structure. While multi-relational networks are more expressive in terms of the variety of relationships they can capture, there is a need for a general framework for transferring the many single-relational network analysis algorithms to the multi-relational domain. It is not sufficient to execute a single-relational network analysis algorithm on a multi-relational network by simply ignoring edge labels. This article presents an algebra for mapping multi-relational networks to single-relational networks, thereby exposing them to single-relational network analysis algorithms.<|reference_end|> | arxiv | @article{rodriguez2008exposing,
title={Exposing Multi-Relational Networks to Single-Relational Network Analysis
Algorithms},
author={Marko A. Rodriguez, Joshua Shinavier},
journal={Journal of Informetrics, volume 4, number 1, pages 29-41, 2009},
year={2008},
doi={10.1016/j.joi.2009.06.004},
number={LA-UR-08-03931},
archivePrefix={arXiv},
eprint={0806.2274},
primaryClass={cs.DM cs.DS}
} | rodriguez2008exposing |
arxiv-4046 | 0806.2287 | Approximately Counting Embeddings into Random Graphs | <|reference_start|>Approximately Counting Embeddings into Random Graphs: Let H be a graph, and let C_H(G) be the number of (subgraph isomorphic) copies of H contained in a graph G. We investigate the fundamental problem of estimating C_H(G). Previous results cover only a few specific instances of this general problem, for example, the case when H has degree at most one (monomer-dimer problem). In this paper, we present the first general subcase of the subgraph isomorphism counting problem which is almost always efficiently approximable. The results rely on a new graph decomposition technique. Informally, the decomposition is a labeling of the vertices such that every edge is between vertices with different labels and for every vertex all neighbors with a higher label have identical labels. The labeling implicitly generates a sequence of bipartite graphs which permits us to break the problem of counting embeddings of large subgraphs into that of counting embeddings of small subgraphs. Using this method, we present a simple randomized algorithm for the counting problem. For all decomposable graphs H and all graphs G, the algorithm is an unbiased estimator. Furthermore, for all graphs H having a decomposition where each of the bipartite graphs generated is small and almost all graphs G, the algorithm is a fully polynomial randomized approximation scheme. We show that the graph classes of H for which we obtain a fully polynomial randomized approximation scheme for almost all G includes graphs of degree at most two, bounded-degree forests, bounded-length grid graphs, subdivision of bounded-degree graphs, and major subclasses of outerplanar graphs, series-parallel graphs and planar graphs, whereas unbounded-length grid graphs are excluded.<|reference_end|> | arxiv | @article{furer2008approximately,
title={Approximately Counting Embeddings into Random Graphs},
author={Martin Furer and Shiva Prasad Kasiviswanathan},
journal={Combinator. Probab. Comp. 23 (2014) 1028-1056},
year={2008},
doi={10.1017/S0963548314000339},
archivePrefix={arXiv},
eprint={0806.2287},
primaryClass={cs.DS cs.DM}
} | furer2008approximately |
arxiv-4047 | 0806.2312 | Continuing Progress on a Lattice QCD Software Infrastructure | <|reference_start|>Continuing Progress on a Lattice QCD Software Infrastructure: We report on the progress of the software effort in the QCD Application Area of SciDAC. In particular, we discuss how the software developed under SciDAC enabled the aggressive exploitation of leadership computers, and we report on progress in the area of QCD software for multi-core architectures.<|reference_end|> | arxiv | @article{joo2008continuing,
title={Continuing Progress on a Lattice QCD Software Infrastructure},
author={Balint Joo (for the USQCD Collaboration)},
journal={J.Phys.Conf.Ser.125:012066,2008},
year={2008},
doi={10.1088/1742-6596/125/1/012066},
number={JLAB-IT-08-02},
archivePrefix={arXiv},
eprint={0806.2312},
primaryClass={hep-lat cs.CE}
} | joo2008continuing |
arxiv-4048 | 0806.2332 | Triangulation of Simple 3D Shapes with Well-Centered Tetrahedra | <|reference_start|>Triangulation of Simple 3D Shapes with Well-Centered Tetrahedra: A completely well-centered tetrahedral mesh is a triangulation of a three dimensional domain in which every tetrahedron and every triangle contains its circumcenter in its interior. Such meshes have applications in scientific computing and other fields. We show how to triangulate simple domains using completely well-centered tetrahedra. The domains we consider here are space, infinite slab, infinite rectangular prism, cube and regular tetrahedron. We also demonstrate single tetrahedra with various combinations of the properties of dihedral acuteness, 2-well-centeredness and 3-well-centeredness.<|reference_end|> | arxiv | @article{vanderzee2008triangulation,
title={Triangulation of Simple 3D Shapes with Well-Centered Tetrahedra},
author={Evan VanderZee, Anil N. Hirani, Damrong Guoy},
journal={arXiv preprint arXiv:0806.2332},
year={2008},
number={UIUCDCS-R-2008-2970},
archivePrefix={arXiv},
eprint={0806.2332},
primaryClass={cs.CG cs.NA}
} | vanderzee2008triangulation |
arxiv-4049 | 0806.2351 | Scaling of critical connectivity of mobile ad hoc communication networks | <|reference_start|>Scaling of critical connectivity of mobile ad hoc communication networks: In this paper, the critical global connectivity of mobile ad hoc communication networks (MAHCN) is investigated. We model the two-dimensional plane on which nodes move randomly with a triangular lattice. Demanding the best communication of the network, we evaluate the global connectivity $\eta$ as a function of the occupancy $\sigma$ of sites in the lattice by mobile nodes. Critical phenomena of the connectivity for different transmission ranges $r$ are revealed by numerical simulations, and these results fit well with the analysis based on the assumption of homogeneous mixing. Scaling behavior of the connectivity is found as $\eta \sim f(R^{\beta}\sigma)$, where $R=(r-r_{0})/r_{0}$, $r_{0}$ is the length unit of the triangular lattice and $\beta$ is the scaling index in the universal function $f(x)$. The model serves as a sort of site percolation on dynamic complex networks relative to geometric distance. Moreover, near each critical $\sigma_c(r)$ corresponding to a certain transmission range $r$, there exists a cut-off degree $k_c$ below which the clustering coefficient of such self-organized networks remains constant, while the averaged nearest-neighbor degree exhibits a unique linear variation with the degree $k$, which may be useful for the design of real MAHCN.<|reference_end|> | arxiv | @article{wang2008scaling,
title={Scaling of critical connectivity of mobile ad hoc communication networks},
author={Li Wang, Chen-Ping Zhu, Zhi-Ming Gu, Shi-Jie Xiong, Da-Ren He, and
Bing-Hong Wang},
journal={arXiv preprint arXiv:0806.2351},
year={2008},
doi={10.1103/PhysRevE.78.066107},
archivePrefix={arXiv},
eprint={0806.2351},
primaryClass={cs.NI cond-mat.dis-nn physics.soc-ph}
} | wang2008scaling |
arxiv-4050 | 0806.2356 | Development of Hybrid Intelligent Systems and their Applications from Engineering Systems to Complex Systems | <|reference_start|>Development of Hybrid Intelligent Systems and their Applications from Engineering Systems to Complex Systems: In this study, we introduce the general frame of MAny Connected Intelligent Particles Systems (MACIPS). Connections and interconnections between particles give rise to complex behavior in such a merely simple system (a system within a system). Contributions of natural computing, under information granulation theory, are the main topic of this broad framework. Upon this clue, we organize different algorithms involving a few prominent intelligent computing and approximate reasoning methods such as the self-organizing feature map (SOM)[9], the Neuro-Fuzzy Inference System[10], Rough Set Theory (RST)[11], collaborative clustering, the Genetic Algorithm and the Ant Colony System. Upon this, we have employed our algorithms on several engineering systems, especially systems emerging in civil and mineral processing. In another process, we investigated how our algorithms can be taken as a linkage of government-society interaction, where the government adopts various fashions of behavior: solid (absolute) or flexible. Thus, the transition of such a society from order to disorder, driven by changes of connectivity parameters (noise), is inferred. In addition, one may find an indirect mapping between financial systems and eventual market fluctuations with MACIPS. In the following sections, we briefly mention the main topics of the suggested proposal; details of the proposed algorithms can be found in the references.<|reference_end|> | arxiv | @article{owladeghaffari2008development,
title={Development of Hybrid Intelligent Systems and their Applications from
Engineering Systems to Complex Systems},
author={Hamed Owladeghaffari},
journal={arXiv preprint arXiv:0806.2356},
year={2008},
archivePrefix={arXiv},
eprint={0806.2356},
primaryClass={cs.AI cs.MA}
} | owladeghaffari2008development |
arxiv-4051 | 0806.2360 | Existence of a polyhedron which does not have a non-overlapping pseudo-edge unfolding | <|reference_start|>Existence of a polyhedron which does not have a non-overlapping pseudo-edge unfolding: There exists a surface of a convex polyhedron P and a partition L of P into geodesic convex polygons such that there are no connected "edge" unfoldings of P without self-intersections (whose spanning tree is a subset of the edge skeleton of L).<|reference_end|> | arxiv | @article{tarasov2008existence,
title={Existence of a polyhedron which does not have a non-overlapping
pseudo-edge unfolding},
author={Alexey S Tarasov},
journal={arXiv preprint arXiv:0806.2360},
year={2008},
archivePrefix={arXiv},
eprint={0806.2360},
primaryClass={cs.CG}
} | tarasov2008existence |
arxiv-4052 | 0806.2395 | Ad-hoc Limited Scale-Free Models for Unstructured Peer-to-Peer Networks | <|reference_start|>Ad-hoc Limited Scale-Free Models for Unstructured Peer-to-Peer Networks: Several protocol efficiency metrics (e.g., scalability, search success rate, routing reachability and stability) depend on the capability of preserving structure even over the churn caused by the ad-hoc nodes joining or leaving the network. Preserving the structure becomes more difficult due to the distributed and potentially uncooperative nature of such networks, as in peer-to-peer (P2P) networks. Thus, most practical solutions involve unstructured approaches while attempting to maintain the structure at various levels of the protocol stack. The primary focus of this paper is to investigate construction and maintenance of scale-free topologies in a distributed manner without requiring global topology information at the time when nodes join or leave. We consider the uncooperative behavior of peers by limiting the number of neighbors to a pre-defined hard cutoff value (i.e., no peer is a major hub), and the ad-hoc behavior of peers by rewiring the neighbors of nodes leaving the network. We also investigate the effect of these hard cutoffs and rewiring of ad-hoc nodes on the P2P search efficiency.<|reference_end|> | arxiv | @article{guclu2008ad-hoc,
title={Ad-hoc Limited Scale-Free Models for Unstructured Peer-to-Peer Networks},
author={Hasan Guclu, Durgesh Kumari, and Murat Yuksel},
journal={arXiv preprint arXiv:0806.2395},
year={2008},
doi={10.1109/P2P.2008.16},
number={LA-UR 08-3219},
archivePrefix={arXiv},
eprint={0806.2395},
primaryClass={cs.DC}
} | guclu2008ad-hoc |
arxiv-4053 | 0806.2448 | Logical Reasoning for Higher-Order Functions with Local State | <|reference_start|>Logical Reasoning for Higher-Order Functions with Local State: We introduce an extension of Hoare logic for call-by-value higher-order functions with ML-like local reference generation. Local references may be generated dynamically and exported outside their scope, may store higher-order functions and may be used to construct complex mutable data structures. This primitive is captured logically using a predicate asserting reachability of a reference name from a possibly higher-order datum and quantifiers over hidden references. We explore the logic's descriptive and reasoning power with non-trivial programming examples combining higher-order procedures and dynamically generated local state. Axioms for reachability and local invariant play a central role for reasoning about the examples.<|reference_end|> | arxiv | @article{yoshida2008logical,
title={Logical Reasoning for Higher-Order Functions with Local State},
author={Nobuko Yoshida, Kohei Honda and Martin Berger},
journal={Logical Methods in Computer Science, Volume 4, Issue 4 (October
20, 2008) lmcs:830},
year={2008},
doi={10.2168/LMCS-4(4:2)2008},
archivePrefix={arXiv},
eprint={0806.2448},
primaryClass={cs.LO cs.PL}
} | yoshida2008logical |
arxiv-4054 | 0806.2469 | Polynomial stochastic games via sum of squares optimization | <|reference_start|>Polynomial stochastic games via sum of squares optimization: Stochastic games are an important class of problems that generalize Markov decision processes to game theoretic scenarios. We consider finite state two-player zero-sum stochastic games over an infinite time horizon with discounted rewards. The players are assumed to have infinite strategy spaces and the payoffs are assumed to be polynomials. In this paper we restrict our attention to a special class of games for which the single-controller assumption holds. It is shown that minimax equilibria and optimal strategies for such games may be obtained via semidefinite programming.<|reference_end|> | arxiv | @article{shah2008polynomial,
title={Polynomial stochastic games via sum of squares optimization},
author={Parikshit Shah and Pablo A. Parrilo},
journal={arXiv preprint arXiv:0806.2469},
year={2008},
archivePrefix={arXiv},
eprint={0806.2469},
primaryClass={math.OC cs.GT}
} | shah2008polynomial |
arxiv-4055 | 0806.2509 | Proposition of a full deterministic medium access method for wireless network in a robotic application | <|reference_start|>Proposition of a full deterministic medium access method for wireless network in a robotic application: Today, many network applications require shorter reaction times. The robotic field is an excellent example of these needs: a robot's reaction time has a direct effect on the complexity of its tasks. Here, we propose a fully deterministic medium access method for a wireless robotic application. This contribution is based on low-power wireless personal area networks, like the ZigBee standard. Indeed, ZigBee has identified limits in Quality of Service due to non-deterministic medium access and probable collisions during medium reservation requests. In this paper, two major improvements are proposed: an efficient polling of the star nodes and a deterministic temporal distribution of peer-to-peer messages. This new collision-free MAC protocol offers some QoS capabilities.<|reference_end|> | arxiv | @article{bossche2008proposition,
title={Proposition of a full deterministic medium access method for wireless
network in a robotic application},
author={Adrien Van Den Bossche (LATTIS), Thierry Val (LATTIS), Eric Campo
(LATTIS)},
journal={2006 IEEE 63rd Vehicular Technology Conference, Melbourne :
Australie (2006)},
year={2008},
archivePrefix={arXiv},
eprint={0806.2509},
primaryClass={cs.NI}
} | bossche2008proposition |
arxiv-4056 | 0806.2513 | The Perfect Binary One-Error-Correcting Codes of Length 15: Part I--Classification | <|reference_start|>The Perfect Binary One-Error-Correcting Codes of Length 15: Part I--Classification: A complete classification of the perfect binary one-error-correcting codes of length 15 as well as their extensions of length 16 is presented. There are 5983 such inequivalent perfect codes and 2165 extended perfect codes. Efficient generation of these codes relies on the recent classification of Steiner quadruple systems of order 16. Utilizing a result of Blackmore, the optimal binary one-error-correcting codes of length 14 and the (15, 1024, 4) codes are also classified; there are 38408 and 5983 such codes, respectively.<|reference_end|> | arxiv | @article{östergård2008the,
title={The Perfect Binary One-Error-Correcting Codes of Length 15: Part
I--Classification},
author={Patric R. J. {\"O}sterg{\aa}rd and Olli Pottonen},
journal={IEEE Trans. Inform. Theory 55 (2009), 4657-4660},
year={2008},
doi={10.1109/TIT.2009.2027525},
archivePrefix={arXiv},
eprint={0806.2513},
primaryClass={cs.IT math.IT}
} | östergård2008the |
arxiv-4057 | 0806.2517 | The computability path ordering: the end of a quest | <|reference_start|>The computability path ordering: the end of a quest: In this paper, we first briefly survey automated termination proof methods for higher-order calculi. We then concentrate on the higher-order recursive path ordering, for which we provide an improved definition, the Computability Path Ordering. This new definition appears indeed to capture the essence of computability arguments \`a la Tait and Girard, therefore explaining the name of the improved ordering.<|reference_end|> | arxiv | @article{blanqui2008the,
title={The computability path ordering: the end of a quest},
author={Fr\'ed\'eric Blanqui (INRIA Lorraine - LORIA), Jean-Pierre Jouannaud
(LIX, INRIA Saclay Ile de France), Albert Rubio},
journal={arXiv preprint arXiv:0806.2517},
year={2008},
archivePrefix={arXiv},
eprint={0806.2517},
primaryClass={cs.LO}
} | blanqui2008the |
arxiv-4058 | 0806.2533 | Asymptotic Analysis of the Performance of LAS Algorithm for Large-MIMO Detection | <|reference_start|>Asymptotic Analysis of the Performance of LAS Algorithm for Large-MIMO Detection: In our recent work, we reported an exhaustive study on the simulated bit error rate (BER) performance of a low-complexity likelihood ascent search (LAS) algorithm for detection in large multiple-input multiple-output (MIMO) systems with large numbers of antennas that achieve high spectral efficiencies. Though the algorithm was shown through simulations to achieve performance increasingly close to near maximum-likelihood (ML) performance, no BER analysis was reported. Here, we extend our work on LAS and report an asymptotic BER analysis of the LAS algorithm in the large system limit, where $N_t,N_r \to \infty$ with $N_t=N_r$, where $N_t$ and $N_r$ are the number of transmit and receive antennas. We prove that the error performance of the LAS detector in V-BLAST with 4-QAM in i.i.d. Rayleigh fading converges to that of the ML detector as $N_t,N_r \to \infty$.<|reference_end|> | arxiv | @article{mohammed2008asymptotic,
title={Asymptotic Analysis of the Performance of LAS Algorithm for Large-MIMO
Detection},
author={Saif K. Mohammed, A. Chockalingam, and B. Sundar Rajan},
journal={arXiv preprint arXiv:0806.2533},
year={2008},
archivePrefix={arXiv},
eprint={0806.2533},
primaryClass={cs.IT math.IT}
} | mohammed2008asymptotic |
arxiv-4059 | 0806.2548 | A Data-Parallel Algorithm to Reliably Solve Systems of Nonlinear Equations | <|reference_start|>A Data-Parallel Algorithm to Reliably Solve Systems of Nonlinear Equations: Numerical methods based on interval arithmetic are efficient means to reliably solve nonlinear systems of equations. Algorithm bc3revise is an interval method that tightens variables' domains by enforcing a property called box consistency. It has been successfully used on difficult problems whose solving eluded traditional numerical methods. We present a new algorithm to enforce box consistency that is simpler than bc3revise, faster, and easily data parallelizable. A parallel implementation with Intel SSE2 SIMD instructions shows that an increase in performance of up to an order of magnitude and more is achievable.<|reference_end|> | arxiv | @article{goualard2008a,
title={A Data-Parallel Algorithm to Reliably Solve Systems of Nonlinear
Equations},
author={Fr\'ed\'eric Goualard (LINA), Alexandre Goldsztejn (LINA)},
journal={arXiv preprint arXiv:0806.2548},
year={2008},
archivePrefix={arXiv},
eprint={0806.2548},
primaryClass={cs.NA}
} | goualard2008a |
arxiv-4060 | 0806.2549 | Prototyping and Performance Analysis of a QoS MAC Layer for Industrial Wireless Network | <|reference_start|>Prototyping and Performance Analysis of a QoS MAC Layer for Industrial Wireless Network: Today's industrial sensor networks require strong reliability and guarantees on message delivery. These needs are even more important in real-time applications like control/command, such as robotic wireless communications where strong temporal constraints are critical. For these reasons, classical random-based Medium Access Control (MAC) protocols, which present a non-null frame collision probability, are unsuitable. In this paper we present an original fully deterministic MAC layer for industrial wireless networks and its performance evaluation, enabled by the development of a hardware prototype.<|reference_end|> | arxiv | @article{bossche2008prototyping,
title={Prototyping and Performance Analysis of a QoS MAC Layer for Industrial
Wireless Network},
author={Adrien Van Den Bossche (LATTIS), Thierry Val (LATTIS), Eric Campo
(LATTIS)},
journal={arXiv preprint arXiv:0806.2549},
year={2008},
archivePrefix={arXiv},
eprint={0806.2549},
primaryClass={cs.NI}
} | bossche2008prototyping |
arxiv-4061 | 0806.2550 | Proposition and validation of an original MAC layer with simultaneous medium accesses for low latency wireless control/command applications | <|reference_start|>Proposition and validation of an original MAC layer with simultaneous medium accesses for low latency wireless control/command applications: Control/command processes require a transmission system with characteristics like high reliability, low latency and strong guarantees on message delivery. Concerning wired networks, field bus technologies like FIP offer this kind of service (periodic tasks, real-time constraints...). Unfortunately, few wireless technologies can propose a communication system which respects such constraints. Indeed, wireless transmissions must deal with medium characteristics which make the direct translation of mechanisms used in wired networks impossible. The purpose of this paper is to present an original Medium Access Control (MAC) layer for a real-time Low-Power Wireless Personal Area Network (LP-WPAN). The proposed MAC layer has been validated by several complementary methods; in this paper, we focus on the specific Simultaneous Guaranteed Time Slot (SGTS) part.<|reference_end|> | arxiv | @article{bossche2008proposition,
title={Proposition and validation of an original MAC layer with simultaneous
medium accesses for low latency wireless control/command applications},
author={Adrien Van Den Bossche (LATTIS), Thierry Val (LATTIS), Eric Campo
(LATTIS)},
journal={arXiv preprint arXiv:0806.2550},
year={2008},
archivePrefix={arXiv},
eprint={0806.2550},
primaryClass={cs.NI}
} | bossche2008proposition |
arxiv-4062 | 0806.2555 | Frequency of Correctness versus Average-Case Polynomial Time and Generalized Juntas | <|reference_start|>Frequency of Correctness versus Average-Case Polynomial Time and Generalized Juntas: We prove that every distributional problem solvable in polynomial time on the average with respect to the uniform distribution has a frequently self-knowingly correct polynomial-time algorithm. We also study some features of probability weight of correctness with respect to generalizations of Procaccia and Rosenschein's junta distributions [PR07b].<|reference_end|> | arxiv | @article{erdelyi2008frequency,
title={Frequency of Correctness versus Average-Case Polynomial Time and
Generalized Juntas},
author={Gabor Erdelyi, Lane A. Hemaspaandra, Joerg Rothe, Holger Spakowski},
journal={arXiv preprint arXiv:0806.2555},
year={2008},
number={URCS-TR-2008-934},
archivePrefix={arXiv},
eprint={0806.2555},
primaryClass={cs.CC cs.GT cs.MA}
} | erdelyi2008frequency |
arxiv-4063 | 0806.2581 | A chain dictionary method for Word Sense Disambiguation and applications | <|reference_start|>A chain dictionary method for Word Sense Disambiguation and applications: A large class of unsupervised algorithms for Word Sense Disambiguation (WSD) is that of dictionary-based methods. Various algorithms have Lesk's algorithm as their root; it exploits the sense definitions in the dictionary directly. Our approach uses the lexical database WordNet for a new algorithm originating in Lesk's, namely the "chain algorithm for disambiguation of all words", CHAD. We show how translation from one language into another and also text entailment verification could be accomplished by this disambiguation.<|reference_end|> | arxiv | @article{tatar2008a,
title={A chain dictionary method for Word Sense Disambiguation and applications},
author={Doina Tatar, Gabriela Serban, Andreea Mihis, Mihaiela Lupea, Dana
Lupsa and Militon Frentiu},
journal={Studia Universitatis Babes-Bolyai, Special Issue, KEPT 2007,
Knowledge Engineering: Principles and Technologies, Cluj-Napoca, June 6-8,
2007, pp 33-40,},
year={2008},
archivePrefix={arXiv},
eprint={0806.2581},
primaryClass={cs.CL}
} | tatar2008a |
arxiv-4064 | 0806.2643 | On the Capacity Equivalence with Side Information at Transmitter and Receiver | <|reference_start|>On the Capacity Equivalence with Side Information at Transmitter and Receiver: In this paper, a channel that is contaminated by two independent Gaussian noises $S \sim N(0,Q)$ and $Z_0 \sim N(0,N_0)$ is considered. The capacity of this channel is computed when independent noisy versions of $S$ are known to the transmitter and/or receiver. It is shown that the channel capacity is greater than the capacity when $S$ is completely unknown, but is less than the capacity when $S$ is perfectly known at the transmitter or receiver. For example, if there is one noisy version of $S$ known at the transmitter only, the capacity is $0.5\log(1+\frac{P}{Q(N_1/(Q+N_1))+N_0})$, where $P$ is the input power constraint and $N_1$ is the power of the noise corrupting $S$. Further, it is shown that the capacity with knowledge of any independent noisy versions of $S$ at the transmitter is equal to the capacity with knowledge of the statistically equivalent noisy versions of $S$ at the receiver.<|reference_end|> | arxiv | @article{peng2008on,
title={On the Capacity Equivalence with Side Information at Transmitter and
Receiver},
author={Yong Peng and Dinesh Rajan},
journal={arXiv preprint arXiv:0806.2643},
year={2008},
archivePrefix={arXiv},
eprint={0806.2643},
primaryClass={cs.IT math.IT}
} | peng2008on |
arxiv-4065 | 0806.2674 | On Certain Large Random Hermitian Jacobi Matrices with Applications to Wireless Communications | <|reference_start|>On Certain Large Random Hermitian Jacobi Matrices with Applications to Wireless Communications: In this paper we study the spectrum of certain large random Hermitian Jacobi matrices. These matrices are known to describe certain communication setups. In particular we are interested in an uplink cellular channel which models mobile users experiencing a soft-handoff situation under joint multicell decoding. Considering rather general fading statistics we provide a closed form expression for the per-cell sum-rate of this channel in the high-SNR regime, when an intra-cell TDMA protocol is employed. Since the matrices of interest are tridiagonal, their eigenvectors can be considered as sequences with second-order linear recurrences. Therefore, the problem is reduced to the study of the exponential growth of products of two-by-two matrices. For the case where $K$ users are simultaneously active in each cell, we obtain a series of lower and upper bounds on the high-SNR power offset of the per-cell sum-rate, which are considerably tighter than previously known bounds.<|reference_end|> | arxiv | @article{levy2008on,
title={On Certain Large Random Hermitian Jacobi Matrices with Applications to
Wireless Communications},
author={Nathan Levy, Oren Somekh, Shlomo Shamai (Shitz), and Ofer Zeitouni},
journal={arXiv preprint arXiv:0806.2674},
year={2008},
archivePrefix={arXiv},
eprint={0806.2674},
primaryClass={cs.IT math.IT}
} | levy2008on |
arxiv-4066 | 0806.2680 | Data-Oblivious Stream Productivity | <|reference_start|>Data-Oblivious Stream Productivity: We are concerned with demonstrating productivity of specifications of infinite streams of data, based on orthogonal rewrite rules. In general, this property is undecidable, but for restricted formats computable sufficient conditions can be obtained. The usual analysis disregards the identity of data, thus leading to approaches that we call data-oblivious. We present a method that is provably optimal among all such data-oblivious approaches. This means that in order to improve on the algorithm in this paper one has to proceed in a data-aware fashion.<|reference_end|> | arxiv | @article{endrullis2008data-oblivious,
title={Data-Oblivious Stream Productivity},
author={Joerg Endrullis, Clemens Grabmayer, Dimitri Hendriks},
journal={arXiv preprint arXiv:0806.2680},
year={2008},
archivePrefix={arXiv},
eprint={0806.2680},
primaryClass={cs.LO cs.PL}
} | endrullis2008data-oblivious |
arxiv-4067 | 0806.2682 | Weighted Superimposed Codes and Constrained Integer Compressed Sensing | <|reference_start|>Weighted Superimposed Codes and Constrained Integer Compressed Sensing: We introduce a new family of codes, termed weighted superimposed codes (WSCs). This family generalizes the class of Euclidean superimposed codes (ESCs), used in multiuser identification systems. WSCs allow for discriminating all bounded, integer-valued linear combinations of real-valued codewords that satisfy prescribed norm and non-negativity constraints. By design, WSCs are inherently noise tolerant. Therefore, these codes can be seen as special instances of robust compressed sensing schemes. The main results of the paper are lower and upper bounds on the largest achievable code rates of several classes of WSCs. These bounds suggest that with the codeword and weighting vector constraints at hand, one can improve the code rates achievable by standard compressive sensing.<|reference_end|> | arxiv | @article{dai2008weighted,
title={Weighted Superimposed Codes and Constrained Integer Compressed Sensing},
author={Wei Dai and Olgica Milenkovic},
journal={arXiv preprint arXiv:0806.2682},
year={2008},
archivePrefix={arXiv},
eprint={0806.2682},
primaryClass={cs.IT math.IT}
} | dai2008weighted |
arxiv-4068 | 0806.2707 | Biased Range Trees | <|reference_start|>Biased Range Trees: A data structure, called a biased range tree, is presented that preprocesses a set S of n points in R^2 and a query distribution D for 2-sided orthogonal range counting queries. The expected query time for this data structure, when queries are drawn according to D, matches, to within a constant factor, that of the optimal decision tree for S and D. The memory and preprocessing requirements of the data structure are O(n log n).<|reference_end|> | arxiv | @article{dujmovic2008biased,
title={Biased Range Trees},
author={Vida Dujmovic, John Howat, and Pat Morin},
journal={arXiv preprint arXiv:0806.2707},
year={2008},
archivePrefix={arXiv},
eprint={0806.2707},
primaryClass={cs.CG cs.DS}
} | dujmovic2008biased |
arxiv-4069 | 0806.2710 | A distributed algorithm for computing and updating the process number of a forest | <|reference_start|>A distributed algorithm for computing and updating the process number of a forest: In this paper, we present a distributed algorithm to compute various parameters of a tree, such as the process number, the edge search number or the node search number, and thus the pathwidth. This algorithm requires n steps, an overall computation time of O(n log(n)), and n messages of size log_3(n)+3. We then propose a distributed algorithm to update the process number (or the node search number, or the edge search number) of each component of a forest after adding or deleting an edge. This second algorithm requires O(D) steps, an overall computation time of O(D log(n)), and O(D) messages of size log_3(n)+3, where D is the diameter of the modified connected component. Finally, we show how to extend our algorithms to trees and forests of unknown size using messages of less than 2a+4+e bits, where a is the parameter to be determined and e=1 for the update algorithms.<|reference_end|> | arxiv | @article{coudert2008a,
title={A distributed algorithm for computing and updating the process number of
a forest},
author={David Coudert (INRIA Sophia Antipolis / Laboratoire I3S), Florian Huc
(INRIA Sophia Antipolis / Laboratoire I3S), Dorian Mazauric (INRIA Sophia
Antipolis / Laboratoire I3S)},
journal={arXiv preprint arXiv:0806.2710},
year={2008},
number={RR-6560},
archivePrefix={arXiv},
eprint={0806.2710},
primaryClass={cs.DM}
} | coudert2008a |
arxiv-4070 | 0806.2726 | L2 Orthogonal Space Time Code for Continuous Phase Modulation | <|reference_start|>L2 Orthogonal Space Time Code for Continuous Phase Modulation: To combine the high power efficiency of Continuous Phase Modulation (CPM) with either high spectral efficiency or enhanced performance in low Signal to Noise conditions, some authors have proposed to introduce CPM in a MIMO framework, by using Space Time Codes (STC). In this paper, we address the code design problem of Space Time Block Codes combined with CPM and introduce a new design criterion based on L2 orthogonality. This L2 orthogonality condition, with the help of a simplifying assumption, leads, in the 2x2 case, to a new family of codes. These codes generalize the Wang and Xia code, which was based on pointwise orthogonality. Simulations indicate that the new codes achieve full diversity and a slightly better coding gain. Moreover, one of the codes can be interpreted as two antennas fed by two conventional CPMs using the same data but with different alphabet sets. Inspection of these alphabet sets also leads to a simple explanation of the (small) spectrum broadening of Space Time Coded CPM.<|reference_end|> | arxiv | @article{hesse2008l2,
title={L2 Orthogonal Space Time Code for Continuous Phase Modulation},
author={Matthias Hesse (I3S), Jerome Lebrun (I3S), Luc Deneire (I3S)},
journal={Dans 9th IEEE International Workshop on Signal Processing Advances
in Wireless Communications (2008)},
year={2008},
archivePrefix={arXiv},
eprint={0806.2726},
primaryClass={cs.IT math.IT}
} | hesse2008l2 |
arxiv-4071 | 0806.2730 | A Process Algebra Software Engineering Environment | <|reference_start|>A Process Algebra Software Engineering Environment: In previous work we described how the process algebra based language PSF can be used in software engineering, using the ToolBus, a coordination architecture also based on process algebra, as implementation model. In this article we summarize that work and describe the software development process more formally by presenting the tools we use in this process in a CASE setting, leading to the PSF-ToolBus software engineering environment. We generalize the refine step in this environment towards a process algebra based software engineering workbench of which several instances can be combined to form an environment.<|reference_end|> | arxiv | @article{diertens2008a,
title={A Process Algebra Software Engineering Environment},
author={B. Diertens},
journal={arXiv preprint arXiv:0806.2730},
year={2008},
number={prg0808},
archivePrefix={arXiv},
eprint={0806.2730},
primaryClass={cs.SE}
} | diertens2008a |
arxiv-4072 | 0806.2735 | An overview of QML with a concrete implementation in Haskell | <|reference_start|>An overview of QML with a concrete implementation in Haskell: This paper gives an introduction to and overview of the functional quantum programming language QML. The syntax of this language is defined and explained, along with a new QML definition of the quantum teleport algorithm. The categorical operational semantics of QML is also briefly introduced, in the form of annotated quantum circuits. This definition leads to a denotational semantics, given in terms of superoperators. Finally, an implementation in Haskell of the semantics for QML is presented as a compiler. The compiler takes QML programs as input, which are parsed into a Haskell datatype. The output from the compiler is either a quantum circuit (operational), an isometry (pure denotational) or a superoperator (impure denotational). Orthogonality judgements and problems with coproducts in QML are also discussed.<|reference_end|> | arxiv | @article{grattage2008an,
title={An overview of QML with a concrete implementation in Haskell},
author={Jonathan Grattage},
journal={ENTCS: Proceedings of QPL V - DCV IV, 157-165, Reykjavik, Iceland,
2008},
year={2008},
archivePrefix={arXiv},
eprint={0806.2735},
primaryClass={quant-ph cs.PL}
} | grattage2008an |
arxiv-4073 | 0806.2738 | Identification of information tonality based on Bayesian approach and neural networks | <|reference_start|>Identification of information tonality based on Bayesian approach and neural networks: A model for the identification of information tonality, based on a Bayesian approach and neural networks, is described. In the context of this paper, tonality means the positive or negative tone of both the whole information and its parts which are related to particular concepts. The method, whose application is presented in the paper, is based on statistical regularities connected with the presence of definite lexemes in the texts. A distinctive feature of the method is its simplicity and versatility. At present, ideologically similar approaches are widely used to control spam.<|reference_end|> | arxiv | @article{lande2008identification,
title={Identification of information tonality based on Bayesian approach and
neural networks},
author={D.V. Lande},
journal={arXiv preprint arXiv:0806.2738},
year={2008},
archivePrefix={arXiv},
eprint={0806.2738},
primaryClass={cs.IT math.IT}
} | lande2008identification |
arxiv-4074 | 0806.2760 | L2 OSTC-CPM: Theory and design | <|reference_start|>L2 OSTC-CPM: Theory and design: The combination of space-time coding (STC) and continuous phase modulation (CPM) is an attractive field of research because both STC and CPM bring many advantages for wireless communications. Zhang and Fitz [1] were the first to apply this idea by constructing a trellis based scheme. But for these codes the decoding effort grows exponentially with the number of transmitting antennas. This was circumvented by orthogonal codes introduced by Wang and Xia [2]. Unfortunately, based on Alamouti code [3], this design is restricted to two antennas. However, by relaxing the orthogonality condition, we prove here that it is possible to design L2-orthogonal space-time codes which achieve full rate and full diversity with low decoding effort. In part one, we generalize the two-antenna code proposed by Wang and Xia [2] from pointwise to L2-orthogonality and in part two we present the first L2-orthogonal code for CPM with three antennas. In this report, we detail these results and focus on the properties of these codes. Of special interest is the optimization of the bit error rate which depends on the initial phase of the system. Our simulation results illustrate the systemic behavior of these conditions.<|reference_end|> | arxiv | @article{hesse2008l2,
title={L2 OSTC-CPM: Theory and design},
author={Matthias Hesse (I3S), Jerome Lebrun (I3S), Luc Deneire (I3S)},
journal={arXiv preprint arXiv:0806.2760},
year={2008},
archivePrefix={arXiv},
eprint={0806.2760},
primaryClass={cs.IT math.IT}
} | hesse2008l2 |
arxiv-4075 | 0806.2802 | A logic with temporally accessible iteration | <|reference_start|>A logic with temporally accessible iteration: The deficiency in expressive power of first-order logic has led to the development of numerous extensions by fixed point operators, such as least fixed-point (LFP), inflationary fixed-point (IFP), and partial fixed-point (PFP). These logics have been extensively studied in finite model theory, database theory, and descriptive complexity. In this paper we introduce a unifying framework, the logic with an iteration operator, in which iteration steps may be accessed by temporal logic formulae. We show that the proposed logic FO+TAI subsumes all the mentioned fixed point extensions, as well as many other fixed point logics, as natural fragments. On the other hand, we show that over finite structures FO+TAI is no more expressive than FO+PFP. Further, we show that adding the same machinery to the logic of monotone inductions (FO+LFP) does not increase its expressive power either.<|reference_end|> | arxiv | @article{lisitsa2008a,
title={A logic with temporally accessible iteration},
author={Alexei Lisitsa},
journal={arXiv preprint arXiv:0806.2802},
year={2008},
archivePrefix={arXiv},
eprint={0806.2802},
primaryClass={cs.LO}
} | lisitsa2008a |
arxiv-4076 | 0806.2843 | MultiKulti Algorithm: Migrating the Most Different Genotypes in an Island Model | <|reference_start|>MultiKulti Algorithm: Migrating the Most Different Genotypes in an Island Model: Migration policies in distributed evolutionary algorithms have not been an active research area until recently. However, in the same way as operators have an impact on performance, the choice of migrants is bound to have an impact too. In this paper we propose a new policy (named multikulti) for choosing the individuals that are going to be sent to other nodes, based on multiculturality: the individual sent should be as different as possible from the receiving population. We have tested this policy on different discrete optimization problems, and found that, on average or in median, this policy outperforms classical ones such as sending the best or a random individual.<|reference_end|> | arxiv | @article{araujo2008multikulti,
title={MultiKulti Algorithm: Migrating the Most Different Genotypes in an
Island Model},
author={Lourdes Araujo, Juan J. Merelo Guervos, Carlos Cotta and Francisco
Fernandez de Vega},
journal={arXiv preprint arXiv:0806.2843},
year={2008},
archivePrefix={arXiv},
eprint={0806.2843},
primaryClass={cs.NE cs.DC}
} | araujo2008multikulti |
arxiv-4077 | 0806.2850 | Decoding Beta-Decay Systematics: A Global Statistical Model for Beta^- Halflives | <|reference_start|>Decoding Beta-Decay Systematics: A Global Statistical Model for Beta^- Halflives: Statistical modeling of nuclear data provides a novel approach to nuclear systematics complementary to established theoretical and phenomenological approaches based on quantum theory. Continuing previous studies in which global statistical modeling is pursued within the general framework of machine learning theory, we implement advances in training algorithms designed to improve generalization, in application to the problem of reproducing and predicting the halflives of nuclear ground states that decay 100% by the beta^- mode. More specifically, fully-connected, multilayer feedforward artificial neural network models are developed using the Levenberg-Marquardt optimization algorithm together with Bayesian regularization and cross-validation. The predictive performance of models emerging from extensive computer experiments is compared with that of traditional microscopic and phenomenological models as well as with the performance of other learning systems, including earlier neural network models as well as the support vector machines recently applied to the same problem. In discussing the results, emphasis is placed on predictions for nuclei that are far from the stability line, and especially those involved in r-process nucleosynthesis. It is found that the new statistical models can match or even surpass the predictive performance of conventional models for beta-decay systematics and accordingly should provide a valuable additional tool for exploring the expanding nuclear landscape.<|reference_end|> | arxiv | @article{costiris2008decoding,
title={Decoding Beta-Decay Systematics: A Global Statistical Model for Beta^-
Halflives},
author={N. J. Costiris, E. Mavrommatis, K. A. Gernoth, J. W. Clark},
journal={Phys.Rev.C80:044332,2009},
year={2008},
doi={10.1103/PhysRevC.80.044332},
archivePrefix={arXiv},
eprint={0806.2850},
primaryClass={nucl-th astro-ph cond-mat.dis-nn cs.LG stat.ML}
} | costiris2008decoding |
arxiv-4078 | 0806.2890 | Learning Graph Matching | <|reference_start|>Learning Graph Matching: As a fundamental problem in pattern recognition, graph matching has applications in a variety of fields, from computer vision to computational biology. In graph matching, patterns are modeled as graphs and pattern recognition amounts to finding a correspondence between the nodes of different graphs. Many formulations of this problem can be cast in general as a quadratic assignment problem, where a linear term in the objective function encodes node compatibility and a quadratic term encodes edge compatibility. The main research focus in this theme is about designing efficient algorithms for approximately solving the quadratic assignment problem, since it is NP-hard. In this paper we turn our attention to a different question: how to estimate compatibility functions such that the solution of the resulting graph matching problem best matches the expected solution that a human would manually provide. We present a method for learning graph matching: the training examples are pairs of graphs and the `labels' are matches between them. Our experimental results reveal that learning can substantially improve the performance of standard graph matching algorithms. In particular, we find that simple linear assignment with such a learning scheme outperforms Graduated Assignment with bistochastic normalisation, a state-of-the-art quadratic assignment relaxation algorithm.<|reference_end|> | arxiv | @article{caetano2008learning,
title={Learning Graph Matching},
author={Tiberio S. Caetano, Julian J. McAuley, Li Cheng, Quoc V. Le and Alex
J. Smola},
journal={arXiv preprint arXiv:0806.2890},
year={2008},
archivePrefix={arXiv},
eprint={0806.2890},
primaryClass={cs.CV cs.LG}
} | caetano2008learning |
arxiv-4079 | 0806.2923 | Strategy Iteration using Non-Deterministic Strategies for Solving Parity Games | <|reference_start|>Strategy Iteration using Non-Deterministic Strategies for Solving Parity Games: This article extends the idea of solving parity games by strategy iteration to non-deterministic strategies: In a non-deterministic strategy a player restricts himself to some non-empty subset of possible actions at a given node, instead of limiting himself to exactly one action. We show that a strategy-improvement algorithm by Bjoerklund, Sandberg, and Vorobyov can easily be adapted to the more general setting of non-deterministic strategies. Further, we show that applying the heuristic of "all profitable switches" leads to choosing a "locally optimal" successor strategy in the setting of non-deterministic strategies, thereby obtaining an easy proof of an algorithm by Schewe. In contrast to the algorithm by Bjoerklund et al., we present our algorithm directly for parity games, which allows us to compare it to the algorithm by Jurdzinski and Voege: We show that the valuations used in both algorithms coincide on parity game arenas in which one player can "surrender". Thus, our algorithm can also be seen as a generalization of the one by Jurdzinski and Voege to non-deterministic strategies. Finally, using non-deterministic strategies allows us to show that the number of improvement steps is bounded from above by O(1.724^n). For strategy-improvement algorithms, this bound was previously only known to be attainable by using randomization.<|reference_end|> | arxiv | @article{luttenberger2008strategy,
title={Strategy Iteration using Non-Deterministic Strategies for Solving Parity
Games},
author={Michael Luttenberger},
journal={arXiv preprint arXiv:0806.2923},
year={2008},
archivePrefix={arXiv},
eprint={0806.2923},
primaryClass={cs.GT cs.LO}
} | luttenberger2008strategy |
arxiv-4080 | 0806.2924 | On the Optimization of the IEEE 802.11 DCF: A Cross-Layer Perspective | <|reference_start|>On the Optimization of the IEEE 802.11 DCF: A Cross-Layer Perspective: This paper is focused on the problem of optimizing the aggregate throughput of the Distributed Coordination Function (DCF) employing the basic access mechanism at the data link layer of IEEE 802.11 protocols. In order to broaden the applicability of the proposed analysis, we consider general operating conditions accounting for both non-saturated and saturated traffic in the presence of transmission channel errors, as exemplified by the packet error rate $P_e$. The main clue of this work stems from the relation that links the aggregate throughput of the network to the packet rate $\lambda$ of the contending stations. In particular, we show that the aggregate throughput $S(\lambda)$ presents two clearly distinct operating regions that depend on the actual value of the packet rate $\lambda$ with respect to a critical value $\lambda_c$, theoretically derived in this work. The behavior of $S(\lambda)$ paves the way to a cross-layer optimization algorithm, which proved to be effective for maximizing the aggregate throughput in a variety of network operating conditions. A nice consequence of the proposed optimization framework relies on the fact that the aggregate throughput can be predicted quite accurately with a simple, yet effective, closed-form expression, which is also derived in the article. Finally, theoretical and simulation results are presented throughout the work in order to unveil, as well as verify, the key ideas.<|reference_end|> | arxiv | @article{laddomada2008on,
title={On the Optimization of the IEEE 802.11 DCF: A Cross-Layer Perspective},
author={Massimiliano Laddomada and Fabio Mesiti},
journal={arXiv preprint arXiv:0806.2924},
year={2008},
archivePrefix={arXiv},
eprint={0806.2924},
primaryClass={cs.NI}
} | laddomada2008on |
arxiv-4081 | 0806.2925 | Neural networks in 3D medical scan visualization | <|reference_start|>Neural networks in 3D medical scan visualization: For medical volume visualization, one of the most important tasks is to reveal clinically relevant details from the 3D scan (CT, MRI ...), e.g. the coronary arteries, without obscuring them with less significant parts. These volume datasets contain different materials which are difficult to extract and visualize with 1D transfer functions based solely on the attenuation coefficient. Multi-dimensional transfer functions allow a much more precise classification of data, which makes it easier to separate different surfaces from each other. Unfortunately, setting up multi-dimensional transfer functions can become a fairly complex task, generally accomplished by trial and error. This paper explains neural networks, and then presents an efficient way to speed up the visualization process by semi-automatic transfer function generation. We describe how to use neural networks to detect distinctive features shown in the 2D histogram of the volume data and how to use this information for data classification.<|reference_end|> | arxiv | @article{zukić2008neural,
title={Neural networks in 3D medical scan visualization},
author={D\v{z}enan Zuki\'c, Andreas Elsner, Zikrija Avdagi\'c, Gitta Domik},
journal={International Conference on Computer Graphics and Artificial
Intelligence, Proceedings (2008) 183-190},
year={2008},
archivePrefix={arXiv},
eprint={0806.2925},
primaryClass={cs.AI cs.GR}
} | zukić2008neural |
arxiv-4082 | 0806.2937 | Evolving complex networks with conserved clique distributions | <|reference_start|>Evolving complex networks with conserved clique distributions: We propose and study a hierarchical algorithm to generate graphs having a predetermined distribution of cliques, the fully connected subgraphs. The construction mechanism may be either random or incorporate preferential attachment. We evaluate the statistical properties of the graphs generated, such as the degree distribution and network diameters, and compare them to some real-world graphs.<|reference_end|> | arxiv | @article{kaczor2008evolving,
title={Evolving complex networks with conserved clique distributions},
author={Gregor Kaczor and Claudius Gros},
journal={Physical Review E, Vol. 78, 016107 (2008).},
year={2008},
doi={10.1103/PhysRevE.78.016107},
archivePrefix={arXiv},
eprint={0806.2937},
primaryClass={physics.soc-ph cond-mat.dis-nn cs.NI}
} | kaczor2008evolving |
arxiv-4083 | 0806.2943 | Modern Set | <|reference_start|>Modern Set: In this paper, we intend to generalize classical set theory as much as possible. We will do this by freeing sets from the regular properties of classical sets; e.g., the law of excluded middle, the law of non-contradiction, the distributive law, the commutative law, etc.<|reference_end|> | arxiv | @article{tanaka2008modern,
title={Modern Set},
author={Jun Tanaka},
journal={arXiv preprint arXiv:0806.2943},
year={2008},
archivePrefix={arXiv},
eprint={0806.2943},
primaryClass={math.GM cs.IT math.IT}
} | tanaka2008modern |
arxiv-4084 | 0806.2947 | The Kleene-Rosser Paradox, The Liar's Paradox & A Fuzzy Logic Programming Paradox Imply SAT is (NOT) NP-complete | <|reference_start|>The Kleene-Rosser Paradox, The Liar's Paradox & A Fuzzy Logic Programming Paradox Imply SAT is (NOT) NP-complete: After examining the {\bf P} versus {\bf NP} problem against the Kleene-Rosser paradox of the $\lambda$-calculus [94], it was found that it represents a counter-example to NP-completeness. We prove that it contradicts the proof of Cook's theorem. A logical formalization of the liar's paradox leads to the same result. This formalization of the liar's paradox into a computable form is a 2-valued instance of a fuzzy logic programming paradox discovered in the system of [90]. Three proofs that show that {\bf SAT} is (NOT) NP-complete are presented. The counter-example classes to NP-completeness are also counter-examples to Fagin's theorem [36] and the Immermann-Vardi theorem [89,110], the fundamental results of descriptive complexity. All these results show that {\bf ZF$\not$C} is inconsistent.<|reference_end|> | arxiv | @article{kamouna2008the,
title={The Kleene-Rosser Paradox, The Liar's Paradox & A Fuzzy Logic
Programming Paradox Imply SAT is (NOT) NP-complete},
author={Rafee Ebrahim Kamouna},
journal={arXiv preprint arXiv:0806.2947},
year={2008},
archivePrefix={arXiv},
eprint={0806.2947},
primaryClass={cs.LO}
} | kamouna2008the |
arxiv-4085 | 0806.2991 | On Information Rates of the Fading Wyner Cellular Model via the Thouless Formula for the Strip | <|reference_start|>On Information Rates of the Fading Wyner Cellular Model via the Thouless Formula for the Strip: We apply the theory of random Schr\"odinger operators to the analysis of multi-users communication channels similar to the Wyner model, that are characterized by short-range intra-cell broadcasting. With $H$ the channel transfer matrix, $HH^\dagger$ is a narrow-band matrix and in many aspects is similar to a random Schr\"odinger operator. We relate the per-cell sum-rate capacity of the channel to the integrated density of states of a random Schr\"odinger operator; the latter is related to the top Lyapunov exponent of a random sequence of matrices via a version of the Thouless formula. Unlike related results in classical random matrix theory, limiting results do depend on the underlying fading distributions. We also derive several bounds on the limiting per-cell sum-rate capacity, some based on the theory of random Schr\"odinger operators, and some derived from information theoretical considerations. Finally, we get explicit results in the high-SNR regime for some particular cases.<|reference_end|> | arxiv | @article{levy2008on,
title={On Information Rates of the Fading Wyner Cellular Model via the Thouless
Formula for the Strip},
author={Nathan Levy, Ofer Zeitouni and Shlomo Shamai (Shitz)},
journal={arXiv preprint arXiv:0806.2991},
year={2008},
archivePrefix={arXiv},
eprint={0806.2991},
primaryClass={cs.IT math.IT}
} | levy2008on |
arxiv-4086 | 0806.3015 | Randomized Methods for Linear Constraints: Convergence Rates and Conditioning | <|reference_start|>Randomized Methods for Linear Constraints: Convergence Rates and Conditioning: We study randomized variants of two classical algorithms: coordinate descent for systems of linear equations and iterated projections for systems of linear inequalities. Expanding on a recent randomized iterated projection algorithm of Strohmer and Vershynin for systems of linear equations, we show that, under appropriate probability distributions, the linear rates of convergence (in expectation) can be bounded in terms of natural linear-algebraic condition numbers for the problems. We relate these condition measures to distances to ill-posedness, and discuss generalizations to convex systems under metric regularity assumptions.<|reference_end|> | arxiv | @article{leventhal2008randomized,
title={Randomized Methods for Linear Constraints: Convergence Rates and
Conditioning},
author={D. Leventhal and A.S. Lewis},
journal={arXiv preprint arXiv:0806.3015},
year={2008},
archivePrefix={arXiv},
eprint={0806.3015},
primaryClass={math.OC cs.NA}
} | leventhal2008randomized |
arxiv-4087 | 0806.3023 | A Random Search Framework for Convergence Analysis of Distributed Beamforming with Feedback | <|reference_start|>A Random Search Framework for Convergence Analysis of Distributed Beamforming with Feedback: The focus of this work is on the analysis of transmit beamforming schemes with a low-rate feedback link in wireless sensor/relay networks, where nodes in the network need to implement beamforming in a distributed manner. Specifically, the problem of distributed phase alignment is considered, where neither the transmitters nor the receiver has perfect channel state information, but there is a low-rate feedback link from the receiver to the transmitters. In this setting, a framework is proposed for systematically analyzing the performance of distributed beamforming schemes. To illustrate the advantage of this framework, a simple adaptive distributed beamforming scheme that was recently proposed by Mudambai et al. is studied. Two important properties for the received signal magnitude function are derived. Using these properties and the systematic framework, it is shown that the adaptive distributed beamforming scheme converges both in probability and in mean. Furthermore, it is established that the time required for the adaptive scheme to converge in mean scales linearly with respect to the number of sensor/relay nodes.<|reference_end|> | arxiv | @article{lin2008a,
title={A Random Search Framework for Convergence Analysis of Distributed
Beamforming with Feedback},
author={C. Lin, V. V. Veeravalli, and S. Meyn},
journal={arXiv preprint arXiv:0806.3023},
year={2008},
archivePrefix={arXiv},
eprint={0806.3023},
primaryClass={cs.DC cs.IT math.IT}
} | lin2008a |
arxiv-4088 | 0806.3031 | Multi Site Coordination using a Multi-Agent System | <|reference_start|>Multi Site Coordination using a Multi-Agent System: A new approach to the coordination of decisions in a multi-site system is proposed. This approach is based on a multi-agent concept and on the principle of a distributed network of enterprises. For this purpose, each enterprise is defined as autonomous and performs simultaneously at the local and global levels. The basic component of our approach is a so-called Virtual Enterprise Node (VEN), where the enterprise network is represented as a set of tiers (like in a product breakdown structure). Within the network, each partner constitutes a VEN, which is in contact with several customers and suppliers. Exchanges between the VENs ensure the autonomy of decision, and guarantee the consistency of information and material flows. Only two complementary VEN agents are necessary: one for external interactions, the Negotiator Agent (NA), and one for the planning of internal decisions, the Planner Agent (PA). If supply problems occur in the network, two other agents are defined: the Tier Negotiator Agent (TNA), working at the tier level only, and the Supply Chain Mediator Agent (SCMA), working at the level of the enterprise network. These two agents are only active when a perturbation occurs. Otherwise, the VENs process the flow of information alone. With this new approach, managing an enterprise network becomes much more transparent and resembles managing a single enterprise in the network. The use of a Multi-Agent System (MAS) allows physical distribution of the decisional system, and provides a heterarchical organization structure with decentralized control that guarantees the autonomy of each entity and the flexibility of the network.<|reference_end|> | arxiv | @article{monteiro2008multi,
title={Multi Site Coordination using a Multi-Agent System},
author={Thibaud Monteiro (LGIPM, INRIA Lorraine), Daniel Roy (LGIPM, INRIA
Lorraine), Didier Anciaux (LGIPM, INRIA Lorraine)},
journal={Computers in Industry 58, 4 (2007) pp. 367-377},
year={2008},
archivePrefix={arXiv},
eprint={0806.3031},
primaryClass={cs.MA}
} | monteiro2008multi |
arxiv-4089 | 0806.3032 | Multi-agents architecture for supply chain management | <|reference_start|>Multi-agents architecture for supply chain management: The purpose of this paper is to propose a new approach for supply chain management. This approach is based on the virtual enterprise paradigm and the use of the multi-agent concept. Each entity (such as an enterprise) is autonomous and must pursue local and global goals in relation to its environment. The basic component of our approach is a Virtual Enterprise Node (VEN). The supply chain is viewed as a set of tiers (corresponding to the levels of production), in which each partner of the supply chain (VEN) is in relation with several customers and suppliers. Each VEN belongs to one tier. The main customer gives global objectives (quantity, cost and delay) to the supply chain. The Mediator Agent (MA) is in charge of managing the supply chain in order to meet those objectives at the global level. Those objectives are handed over to the Negotiator Agent at the tier level (NAT). These two agents are only active if a perturbation occurs; otherwise, information flows are only exchanged between VENs. This architecture allows supply chain management that is completely transparent as seen from any single enterprise of the supply chain. The use of a Multi-Agent System (MAS) allows physical distribution of the decisional system. Moreover, the hierarchical organizational structure with decentralized control guarantees, at the same time, the autonomy of each entity and the flexibility of the whole.<|reference_end|> | arxiv | @article{roy2008multi-agents,
title={Multi-agents architecture for supply chain management},
author={Daniel Roy (LGIPM, Inria Lorraine - Loria), Didier Anciaux (LGIPM,
Inria Lorraine - Loria), Thibaud Monteiro (LGIPM, Inria Lorraine - Loria),
Latifa Ouzizi (LGIPM, Inria Lorraine - Loria)},
journal={Journal of Manufacturing Technology Management 15, 8 (2004) pp.
745-755},
year={2008},
archivePrefix={arXiv},
eprint={0806.3032},
primaryClass={cs.MA}
} | roy2008multi-agents |
arxiv-4090 | 0806.3033 | Compound Node-Kayles on Paths | <|reference_start|>In his celebrated book "On Numbers and Games" (Academic Press, New York, 1976), J.H. Conway introduced twelve versions of compound games. We analyze these twelve versions for the Node-Kayles game on paths. For the usual disjunctive compound, Node-Kayles has long been solved under normal play, while it is still unsolved under mis\`ere play. We thus focus on the ten remaining versions, leaving only one of them unsolved.<|reference_end|> | arxiv | @article{guignard2008compound,
title={Compound Node-Kayles on Paths},
author={Adrien Guignard (LaBRI), Eric Sopena (LaBRI)},
journal={Theoretical Computer Science 410, 21-23 (2009) 2033-2044},
year={2008},
doi={10.1016/j.tcs.2008.12.053},
archivePrefix={arXiv},
eprint={0806.3033},
primaryClass={cs.DM}
} | guignard2008compound |
arxiv-4091 | 0806.3099 | On the stability of bubble functions and a stabilized mixed finite element formulation for the Stokes problem | <|reference_start|>On the stability of bubble functions and a stabilized mixed finite element formulation for the Stokes problem: In this paper we investigate the relationship between stabilized and enriched finite element formulations for the Stokes problem. We also present a new stabilized mixed formulation for which the stability parameter is derived purely by the method of weighted residuals. This new formulation allows equal order interpolation for the velocity and pressure fields. Finally, we show by counterexample that a direct equivalence between subgrid-based stabilized finite element methods and Galerkin methods enriched by bubble functions cannot be constructed for quadrilateral and hexahedral elements using standard bubble functions.<|reference_end|> | arxiv | @article{turner2008on,
title={On the stability of bubble functions and a stabilized mixed finite
element formulation for the Stokes problem},
author={D. Z. Turner, K. B. Nakshatrala and K. D. Hjelmstad},
journal={arXiv preprint arXiv:0806.3099},
year={2008},
doi={10.1002/fld.1936},
archivePrefix={arXiv},
eprint={0806.3099},
primaryClass={cs.NA}
} | turner2008on |
arxiv-4092 | 0806.3115 | Using rational numbers to key nested sets | <|reference_start|>Using rational numbers to key nested sets: This report details the generation and use of tree node ordering keys in a single relational database table. The keys for each node are calculated from the keys of its parent, in such a way that the sort order places every node in the tree before all of its descendants and after all siblings having a lower index. The calculation from parent keys to child keys is simple, and reversible in the sense that the keys of every ancestor of a node can be calculated from that node's keys without having to consult the database. Proofs of the above properties of the key encoding process and of its correspondence to a finite continued fraction form are provided.<|reference_end|> | arxiv | @article{hazel2008using,
title={Using rational numbers to key nested sets},
author={Dan Hazel (Technology One)},
journal={arXiv preprint arXiv:0806.3115},
year={2008},
number={DocSetID-311997},
archivePrefix={arXiv},
eprint={0806.3115},
primaryClass={cs.DB}
} | hazel2008using |
arxiv-4093 | 0806.3121 | Algorithmic Based Fault Tolerance Applied to High Performance Computing | <|reference_start|>We present a new approach to fault tolerance for High Performance Computing systems. Our approach is based on a careful adaptation of the Algorithmic Based Fault Tolerance technique (Huang and Abraham, 1984) to the needs of parallel distributed computation. We obtain a strongly scalable mechanism for fault tolerance. We can also detect and correct errors (bit flips) on the fly during a computation. To assess the viability of our approach, we have developed a fault-tolerant matrix-matrix multiplication subroutine and we propose some models to predict its running time. Our parallel fault-tolerant matrix-matrix multiplication scores 1.4 TFLOPS on 484 processors (cluster jacquard.nersc.gov) and returns a correct result even when one process failure has occurred. This represents 65% of the machine peak efficiency and less than 12% overhead with respect to the fastest failure-free implementation. We predict (and have observed) that, as we increase the processor count, the overhead of the fault tolerance drops significantly.<|reference_end|> | arxiv | @article{bosilca2008algorithmic,
title={Algorithmic Based Fault Tolerance Applied to High Performance Computing},
author={George Bosilca, Remi Delmas, Jack Dongarra, and Julien Langou},
journal={arXiv preprint arXiv:0806.3121},
year={2008},
archivePrefix={arXiv},
eprint={0806.3121},
primaryClass={cs.DC cs.MS}
} | bosilca2008algorithmic |
arxiv-4094 | 0806.3133 | Shannon Meets Carnot: Mutual Information Via Thermodynamics | <|reference_start|>Shannon Meets Carnot: Mutual Information Via Thermodynamics: In this contribution, the Gaussian channel is represented as an equivalent thermal system allowing to express its input-output mutual information in terms of thermodynamic quantities. This thermodynamic description of the mutual information is based upon a generalization of the $2^{nd}$ thermodynamic law and provides an alternative proof to the Guo-Shamai-Verd\'{u} theorem, giving an intriguing connection between this remarkable theorem and the most fundamental laws of nature - the laws of thermodynamics.<|reference_end|> | arxiv | @article{shental2008shannon,
title={Shannon Meets Carnot: Mutual Information Via Thermodynamics},
author={Ori Shental and Ido Kanter},
journal={arXiv preprint arXiv:0806.3133},
year={2008},
archivePrefix={arXiv},
eprint={0806.3133},
primaryClass={cs.IT math.IT}
} | shental2008shannon |
arxiv-4095 | 0806.3152 | TRANS-Net: an Efficient Peer-to-Peer Overlay Network Based on a Full Transposition Graph | <|reference_start|>In this paper we propose a new practical P2P system based on a full transposition network topology named TRANS-Net. Full transposition networks achieve higher fault-tolerance and lower congestion among the class of transposition networks. TRANS-Net provides an efficient lookup service, i.e. $k$ hops with high probability, where $k$ satisfies $\Theta(\log_n m) < k < \Theta(\log_2 m)$, where $m$ denotes the number of system nodes and $n$ is a system parameter related to the maximum number that $m$ can take (up to $n!$). Experiments show that the look-up performance achieves the lower limit of the complexity relation. TRANS-Net also preserves data locality and provides efficient look-up performance for complex queries such as multi-dimensional queries.<|reference_end|> | arxiv | @article{kontopoulos2008trans-net:,
title={TRANS-Net: an Efficient Peer-to-Peer Overlay Network Based on a Full
Transposition Graph},
author={Stavros Kontopoulos, Athanasios K. Tsakalidis},
journal={arXiv preprint arXiv:0806.3152},
year={2008},
archivePrefix={arXiv},
eprint={0806.3152},
primaryClass={cs.DC}
} | kontopoulos2008trans-net: |
arxiv-4096 | 0806.3201 | The Structure of Information Pathways in a Social Communication Network | <|reference_start|>The Structure of Information Pathways in a Social Communication Network: Social networks are of interest to researchers in part because they are thought to mediate the flow of information in communities and organizations. Here we study the temporal dynamics of communication using on-line data, including e-mail communication among the faculty and staff of a large university over a two-year period. We formulate a temporal notion of "distance" in the underlying social network by measuring the minimum time required for information to spread from one node to another -- a concept that draws on the notion of vector-clocks from the study of distributed computing systems. We find that such temporal measures provide structural insights that are not apparent from analyses of the pure social network topology. In particular, we define the network backbone to be the subgraph consisting of edges on which information has the potential to flow the quickest. We find that the backbone is a sparse graph with a concentration of both highly embedded edges and long-range bridges -- a finding that sheds new light on the relationship between tie strength and connectivity in social networks.<|reference_end|> | arxiv | @article{kossinets2008the,
title={The Structure of Information Pathways in a Social Communication Network},
author={Gueorgi Kossinets, Jon Kleinberg, Duncan Watts},
journal={arXiv preprint arXiv:0806.3201},
year={2008},
archivePrefix={arXiv},
eprint={0806.3201},
primaryClass={physics.soc-ph cs.DS physics.data-an}
} | kossinets2008the |
arxiv-4097 | 0806.3209 | A Computer Verified Theory of Compact Sets | <|reference_start|>A Computer Verified Theory of Compact Sets: Compact sets in constructive mathematics capture our intuition of what computable subsets of the plane (or any other complete metric space) ought to be. A good representation of compact sets provides an efficient means of creating and displaying images with a computer. In this paper, I build upon existing work about complete metric spaces to define compact sets as the completion of the space of finite sets under the Hausdorff metric. This definition allowed me to quickly develop a computer verified theory of compact sets. I applied this theory to compute provably correct plots of uniformly continuous functions.<|reference_end|> | arxiv | @article{o'connor2008a,
title={A Computer Verified Theory of Compact Sets},
author={Russell O'Connor},
journal={arXiv preprint arXiv:0806.3209},
year={2008},
number={RISC-Linz Report Series 08-08},
archivePrefix={arXiv},
eprint={0806.3209},
primaryClass={cs.LO}
} | o'connor2008a |
arxiv-4098 | 0806.3215 | MOHCS: Towards Mining Overlapping Highly Connected Subgraphs | <|reference_start|>Many real-life networks typically contain parts in which some nodes are more highly connected to each other than to the other nodes of the network. Collections of such nodes are usually called clusters, communities, cohesive groups or modules. In graph terminology, such a structure is called a highly connected graph. In this paper, we first prove some properties of highly connected graphs. Based on these properties, we then redefine the highly connected subgraph, which results in an algorithm that determines whether a given graph is highly connected in linear time. Then we present a computationally efficient algorithm, called MOHCS, for mining overlapping highly connected subgraphs. We have experimentally evaluated the performance of MOHCS using real and synthetic data sets from a computer-generated graph and a yeast protein network. Our results show that MOHCS is effective and reliable in finding overlapping highly connected subgraphs. Keywords: highly connected subgraph, clustering algorithms, minimum cut, minimum degree<|reference_end|> | arxiv | @article{lin2008mohcs:,
title={MOHCS: Towards Mining Overlapping Highly Connected Subgraphs},
author={Xiahong Lin, Lin Gao, Kefei Chen, and David K. Y. Chiu},
journal={arXiv preprint arXiv:0806.3215},
year={2008},
archivePrefix={arXiv},
eprint={0806.3215},
primaryClass={cs.DC}
} | lin2008mohcs: |
arxiv-4099 | 0806.3227 | A Non-differential Distributed Space-Time Coding for Partially-Coherent Cooperative Communication | <|reference_start|>In a distributed space-time coding scheme, based on the relay channel model, the relay nodes cooperate to linearly process the transmitted signal from the source and forward it to the destination such that the signal at the destination appears as a space-time block code. Recently, code design criteria for achieving full diversity in a partially-coherent environment have been proposed along with codes based on differential encoding and decoding techniques. For such a setup, in this paper, a non-differential encoding technique and a construction of distributed space-time block codes from unitary matrix groups at the source and a set of diagonal unitary matrices at the relays are proposed. It is shown that the performance of our scheme is independent of the choice of unitary matrices at the relays. When the group is cyclic, a necessary and sufficient condition on the generator of the cyclic group to achieve full diversity and to minimize the pairwise error probability is proved. Various choices of the generator of the cyclic group to reduce the ML decoding complexity at the destination are presented. It is also shown that, at the source, if non-cyclic abelian unitary matrix groups are used, then full diversity cannot be obtained. The presented scheme is also robust to the failure of any subset of relay nodes.<|reference_end|> | arxiv | @article{harshan2008a,
title={A Non-differential Distributed Space-Time Coding for Partially-Coherent
Cooperative Communication},
author={J. Harshan, B. Sundar Rajan},
journal={arXiv preprint arXiv:0806.3227},
year={2008},
archivePrefix={arXiv},
eprint={0806.3227},
primaryClass={cs.IT math.IT}
} | harshan2008a |
arxiv-4100 | 0806.3243 | Analysis of Verification-based Decoding on the q-ary Symmetric Channel for Large q | <|reference_start|>Analysis of Verification-based Decoding on the q-ary Symmetric Channel for Large q: We discuss and analyze a list-message-passing decoder with verification for low-density parity-check (LDPC) codes on the q-ary symmetric channel (q-SC). Rather than passing messages consisting of symbol probabilities, this decoder passes lists of possible symbols and marks some lists as verified. The density evolution (DE) equations for this decoder are derived and used to compute decoding thresholds. If the maximum list size is unbounded, then we find that any capacity-achieving LDPC code for the binary erasure channel can be used to achieve capacity on the q-SC for large q. The decoding thresholds are also computed via DE for the case where each list is truncated to satisfy a maximum list size constraint. Simulation results are also presented to confirm the DE results. During the simulations, we observed differences between two verification-based decoding algorithms, introduced by Luby and Mitzenmacher, that were implicitly assumed to be identical. In this paper, we provide an analysis of the node-based algorithms from that paper and verify that it matches simulation results. The probability of false verification (FV) is also considered and techniques are discussed to mitigate the FV. Optimization of the degree distribution is also used to improve the threshold for a fixed maximum list size. Finally, the proposed algorithm is compared with a variety of other algorithms using both density evolution thresholds and simulation results.<|reference_end|> | arxiv | @article{zhang2008analysis,
title={Analysis of Verification-based Decoding on the q-ary Symmetric Channel
for Large q},
author={Fan Zhang and Henry D. Pfister},
journal={arXiv preprint arXiv:0806.3243},
year={2008},
archivePrefix={arXiv},
eprint={0806.3243},
primaryClass={cs.IT math.IT}
} | zhang2008analysis |