corpus_id stringlengths 7 12 | paper_id stringlengths 9 16 | title stringlengths 1 261 | abstract stringlengths 70 4.02k | source stringclasses 1 value | bibtex stringlengths 208 20.9k | citation_key stringlengths 6 100 |
|---|---|---|---|---|---|---|
arxiv-3101 | 0803.2842 | Admission Control to Minimize Rejections and Online Set Cover with Repetitions | <|reference_start|>Admission Control to Minimize Rejections and Online Set Cover with Repetitions: We study the admission control problem in general networks. Communication requests arrive over time, and the online algorithm accepts or rejects each request while maintaining the capacity limitations of the network. The admission control problem has been usually analyzed as a benefit problem, where the goal is to devise an online algorithm that accepts the maximum number of requests possible. The problem with this objective function is that even algorithms with optimal competitive ratios may reject almost all of the requests, when it would have been possible to reject only a few. This could be inappropriate for settings in which rejections are intended to be rare events. In this paper, we consider preemptive online algorithms whose goal is to minimize the number of rejected requests. Each request arrives together with the path it should be routed on. We show an $O(\log^2 (mc))$-competitive randomized algorithm for the weighted case, where $m$ is the number of edges in the graph and $c$ is the maximum edge capacity. For the unweighted case, we give an $O(\log m \log c)$-competitive randomized algorithm. This settles an open question of Blum, Kalai and Kleinberg raised in \cite{BlKaKl01}. We note that allowing preemption and handling requests with given paths are essential for avoiding trivial lower bounds.<|reference_end|> | arxiv | @article{alon2008admission,
title={Admission Control to Minimize Rejections and Online Set Cover with
Repetitions},
author={Noga Alon and Yossi Azar and Shai Gutner},
journal={Proc. of 17th SPAA (2005), 238-244},
year={2008},
archivePrefix={arXiv},
eprint={0803.2842},
primaryClass={cs.DS}
} | alon2008admission |
arxiv-3102 | 0803.2856 | Figuring out Actors in Text Streams: Using Collocations to establish Incremental Mind-maps | <|reference_start|>Figuring out Actors in Text Streams: Using Collocations to establish Incremental Mind-maps: The recognition, involvement, and description of main actors influences the story line of the whole text. This is of higher importance as the text per se represents a flow of words and expressions that once it is read it is lost. In this respect, the understanding of a text and moreover on how the actor exactly behaves is not only a major concern: as human beings try to store a given input on short-term memory while associating diverse aspects and actors with incidents, the following approach represents a virtual architecture, where collocations are concerned and taken as the associative completion of the actors' acting. Once that collocations are discovered, they become managed in separated memory blocks broken down by the actors. As for human beings, the memory blocks refer to associative mind-maps. We then present several priority functions to represent the actual temporal situation inside a mind-map to enable the user to reconstruct the recent events from the discovered temporal results.<|reference_end|> | arxiv | @article{rothenberger2008figuring,
title={Figuring out Actors in Text Streams: Using Collocations to establish
Incremental Mind-maps},
author={T. Rothenberger and S. Oez and E. Tahirovic and C. Schommer},
journal={arXiv preprint arXiv:0803.2856},
year={2008},
archivePrefix={arXiv},
eprint={0803.2856},
primaryClass={cs.CL cs.LG}
} | rothenberger2008figuring |
arxiv-3103 | 0803.2874 | Minimal weight expansions in Pisot bases | <|reference_start|>Minimal weight expansions in Pisot bases: For applications to cryptography, it is important to represent numbers with a small number of non-zero digits (Hamming weight) or with small absolute sum of digits. The problem of finding representations with minimal weight has been solved for integer bases, e.g. by the non-adjacent form in base~2. In this paper, we consider numeration systems with respect to real bases $\beta$ which are Pisot numbers and prove that the expansions with minimal absolute sum of digits are recognizable by finite automata. When $\beta$ is the Golden Ratio, the Tribonacci number or the smallest Pisot number, we determine expansions with minimal number of digits $\pm1$ and give explicitely the finite automata recognizing all these expansions. The average weight is lower than for the non-adjacent form.<|reference_end|> | arxiv | @article{frougny2008minimal,
title={Minimal weight expansions in Pisot bases},
author={Christiane Frougny (LIAFA) and Wolfgang Steiner (LIAFA)},
journal={arXiv preprint arXiv:0803.2874},
year={2008},
archivePrefix={arXiv},
eprint={0803.2874},
primaryClass={cs.DM cs.CR math.NT}
} | frougny2008minimal |
arxiv-3104 | 0803.2904 | A Distance Metric for Tree-Sibling Time Consistent Phylogenetic Networks | <|reference_start|>A Distance Metric for Tree-Sibling Time Consistent Phylogenetic Networks: The presence of reticulate evolutionary events in phylogenies turn phylogenetic trees into phylogenetic networks. These events imply in particular that there may exist multiple evolutionary paths from a non-extant species to an extant one, and this multiplicity makes the comparison of phylogenetic networks much more difficult than the comparison of phylogenetic trees. In fact, all attempts to define a sound distance measure on the class of all phylogenetic networks have failed so far. Thus, the only practical solutions have been either the use of rough estimates of similarity (based on comparison of the trees embedded in the networks), or narrowing the class of phylogenetic networks to a certain class where such a distance is known and can be efficiently computed. The first approach has the problem that one may identify two networks as equivalent, when they are not; the second one has the drawback that there may not exist algorithms to reconstruct such networks from biological sequences. We present in this paper a distance measure on the class of tree-sibling time consistent phylogenetic networks, which generalize tree-child time consistent phylogenetic networks, and thus also galled-trees. The practical interest of this distance measure is twofold: it can be computed in polynomial time by means of simple algorithms, and there also exist polynomial-time algorithms for reconstructing networks of this class from DNA sequence data. The Perl package Bio::PhyloNetwork, included in the BioPerl bundle, implements many algorithms on phylogenetic networks, including the computation of the distance presented in this paper.<|reference_end|> | arxiv | @article{cardona2008a,
title={A Distance Metric for Tree-Sibling Time Consistent Phylogenetic Networks},
author={Gabriel Cardona and Merce Llabres and Francesc Rossello and Gabriel Valiente},
journal={arXiv preprint arXiv:0803.2904},
year={2008},
archivePrefix={arXiv},
eprint={0803.2904},
primaryClass={q-bio.PE cs.CE cs.DM}
} | cardona2008a |
arxiv-3105 | 0803.2919 | Distributed Relay Protocol for Probabilistic Information-Theoretic Security in a Randomly-Compromised Network | <|reference_start|>Distributed Relay Protocol for Probabilistic Information-Theoretic Security in a Randomly-Compromised Network: We introduce a simple, practical approach with probabilistic information-theoretic security to mitigate one of quantum key distribution's major limitations: the short maximum transmission distance (~200 km) possible with present day technology. Our scheme uses classical secret sharing techniques to allow secure transmission over long distances through a network containing randomly-distributed compromised nodes. The protocol provides arbitrarily high confidence in the security of the protocol, with modest scaling of resource costs with improvement of the security parameter. Although some types of failure are undetectable, users can take preemptive measures to make the probability of such failures arbitrarily small.<|reference_end|> | arxiv | @article{beals2008distributed,
title={Distributed Relay Protocol for Probabilistic Information-Theoretic
Security in a Randomly-Compromised Network},
author={Travis R. Beals, Barry C. Sanders},
journal={ICITS 2008, LNCS 5155, pp. 29-39, 2008},
year={2008},
doi={10.1007/978-3-540-85093-9_4},
archivePrefix={arXiv},
eprint={0803.2919},
primaryClass={quant-ph cs.CR}
} | beals2008distributed |
arxiv-3106 | 0803.2925 | Equivalence of Probabilistic Tournament and Polynomial Ranking Selection | <|reference_start|>Equivalence of Probabilistic Tournament and Polynomial Ranking Selection: Crucial to an Evolutionary Algorithm's performance is its selection scheme. We mathematically investigate the relation between polynomial rank and probabilistic tournament methods which are (respectively) generalisations of the popular linear ranking and tournament selection schemes. We show that every probabilistic tournament is equivalent to a unique polynomial rank scheme. In fact, we derived explicit operators for translating between these two types of selection. Of particular importance is that most linear and most practical quadratic rank schemes are probabilistic tournaments.<|reference_end|> | arxiv | @article{hingee2008equivalence,
title={Equivalence of Probabilistic Tournament and Polynomial Ranking Selection},
author={Kassel Hingee and Marcus Hutter},
journal={Proc. 2008 Congress on Evolutionary Computation (CEC 2008), pages
564-571},
year={2008},
archivePrefix={arXiv},
eprint={0803.2925},
primaryClass={cs.NE}
} | hingee2008equivalence |
arxiv-3107 | 0803.2955 | Locked constraint satisfaction problems | <|reference_start|>Locked constraint satisfaction problems: We introduce and study the random "locked" constraint satisfaction problems. When increasing the density of constraints, they display a broad "clustered" phase in which the space of solutions is divided into many isolated points. While the phase diagram can be found easily, these problems, in their clustered phase, are extremely hard from the algorithmic point of view: the best known algorithms all fail to find solutions. We thus propose new benchmarks of really hard optimization problems and provide insight into the origin of their typical hardness.<|reference_end|> | arxiv | @article{zdeborová2008locked,
title={Locked constraint satisfaction problems},
author={Lenka Zdeborov\'a and Marc M\'ezard},
journal={Phys. Rev. Lett. 101, 078702 (2008)},
year={2008},
doi={10.1103/PhysRevLett.101.078702},
archivePrefix={arXiv},
eprint={0803.2955},
primaryClass={cond-mat.stat-mech cond-mat.dis-nn cs.CC}
} | zdeborová2008locked |
arxiv-3108 | 0803.2957 | Enhanced Direct and Indirect Genetic Algorithm Approaches for a Mall Layout and Tenant Selection Problem | <|reference_start|>Enhanced Direct and Indirect Genetic Algorithm Approaches for a Mall Layout and Tenant Selection Problem: During our earlier research, it was recognised that in order to be successful with an indirect genetic algorithm approach using a decoder, the decoder has to strike a balance between being an optimiser in its own right and finding feasible solutions. Previously this balance was achieved manually. Here we extend this by presenting an automated approach where the genetic algorithm itself, simultaneously to solving the problem, sets weights to balance the components out. Subsequently we were able to solve a complex and non-linear scheduling problem better than with a standard direct genetic algorithm implementation.<|reference_end|> | arxiv | @article{aickelin2008enhanced,
title={Enhanced Direct and Indirect Genetic Algorithm Approaches for a Mall
Layout and Tenant Selection Problem},
author={Uwe Aickelin and Kathryn Dowsland},
journal={Journal of Heuristics, 8(5), pp 503-514, 2002},
year={2008},
doi={10.1023/A:1016536623961},
archivePrefix={arXiv},
eprint={0803.2957},
primaryClass={cs.NE cs.CE}
} | aickelin2008enhanced |
arxiv-3109 | 0803.2965 | An Indirect Genetic Algorithm for Set Covering Problems | <|reference_start|>An Indirect Genetic Algorithm for Set Covering Problems: This paper presents a new type of genetic algorithm for the set covering problem. It differs from previous evolutionary approaches first because it is an indirect algorithm, i.e. the actual solutions are found by an external decoder function. The genetic algorithm itself provides this decoder with permutations of the solution variables and other parameters. Second, it will be shown that results can be further improved by adding another indirect optimisation layer. The decoder will not directly seek out low cost solutions but instead aims for good exploitable solutions. These are then post optimised by another hill-climbing algorithm. Although seemingly more complicated, we will show that this three-stage approach has advantages in terms of solution quality, speed and adaptability to new types of problems over more direct approaches. Extensive computational results are presented and compared to the latest evolutionary and other heuristic approaches to the same data instances.<|reference_end|> | arxiv | @article{aickelin2008an,
title={An Indirect Genetic Algorithm for Set Covering Problems},
author={Uwe Aickelin},
journal={Journal of the Operational Research Society, 53(10), pp 1118-1126
2002},
year={2008},
doi={10.1057/palgrave.jors.2601317},
archivePrefix={arXiv},
eprint={0803.2965},
primaryClass={cs.NE cs.AI}
} | aickelin2008an |
arxiv-3110 | 0803.2966 | On the Application of Hierarchical Coevolutionary Genetic Algorithms: Recombination and Evaluation Partners | <|reference_start|>On the Application of Hierarchical Coevolutionary Genetic Algorithms: Recombination and Evaluation Partners: This paper examines the use of a hierarchical coevolutionary genetic algorithm under different partnering strategies. Cascading clusters of sub-populations are built from the bottom up, with higher-level sub-populations optimising larger parts of the problem. Hence higher-level sub-populations potentially search a larger search space with a lower resolution whilst lower-level sub-populations search a smaller search space with a higher resolution. The effects of different partner selection schemes amongst the sub-populations on solution quality are examined for two constrained optimisation problems. We examine a number of recombination partnering strategies in the construction of higher-level individuals and a number of related schemes for evaluating sub-solutions. It is shown that partnering strategies that exploit problem-specific knowledge are superior and can counter inappropriate (sub)fitness measurements.<|reference_end|> | arxiv | @article{aickelin2008on,
title={On the Application of Hierarchical Coevolutionary Genetic Algorithms:
Recombination and Evaluation Partners},
author={Uwe Aickelin and Larry Bull},
journal={Journal of Applied System Studies, 4(2), pp 2-17, 2003},
year={2008},
archivePrefix={arXiv},
eprint={0803.2966},
primaryClass={cs.NE cs.AI}
} | aickelin2008on |
arxiv-3111 | 0803.2967 | Building Better Nurse Scheduling Algorithms | <|reference_start|>Building Better Nurse Scheduling Algorithms: The aim of this research is twofold: Firstly, to model and solve a complex nurse scheduling problem with an integer programming formulation and evolutionary algorithms. Secondly, to detail a novel statistical method of comparing and hence building better scheduling algorithms by identifying successful algorithm modifications. The comparison method captures the results of algorithms in a single figure that can then be compared using traditional statistical techniques. Thus, the proposed method of comparing algorithms is an objective procedure designed to assist in the process of improving an algorithm. This is achieved even when some results are non-numeric or missing due to infeasibility. The final algorithm outperforms all previous evolutionary algorithms, which relied on human expertise for modification.<|reference_end|> | arxiv | @article{aickelin2008building,
title={Building Better Nurse Scheduling Algorithms},
author={Uwe Aickelin and Paul White},
journal={Annals of Operations Research, 128, pp 159-177, 2004},
year={2008},
doi={10.1023/B:ANOR.0000019103.31340.a6},
archivePrefix={arXiv},
eprint={0803.2967},
primaryClass={cs.NE cs.CE}
} | aickelin2008building |
arxiv-3112 | 0803.2969 | An Indirect Genetic Algorithm for a Nurse Scheduling Problem | <|reference_start|>An Indirect Genetic Algorithm for a Nurse Scheduling Problem: This paper describes a Genetic Algorithms approach to a manpower-scheduling problem arising at a major UK hospital. Although Genetic Algorithms have been successfully used for similar problems in the past, they always had to overcome the limitations of the classical Genetic Algorithms paradigm in handling the conflict between objectives and constraints. The approach taken here is to use an indirect coding based on permutations of the nurses, and a heuristic decoder that builds schedules from these permutations. Computational experiments based on 52 weeks of live data are used to evaluate three different decoders with varying levels of intelligence, and four well-known crossover operators. Results are further enhanced by introducing a hybrid crossover operator and by making use of simple bounds to reduce the size of the solution space. The results reveal that the proposed algorithm is able to find high quality solutions and is both faster and more flexible than a recently published Tabu Search approach.<|reference_end|> | arxiv | @article{aickelin2008an,
title={An Indirect Genetic Algorithm for a Nurse Scheduling Problem},
author={Uwe Aickelin and Kathryn Dowsland},
journal={Computers \& Operations Research, 31(5), pp 761-778, 2004},
year={2008},
doi={10.1016/S0305-0548(03)00034-0},
archivePrefix={arXiv},
eprint={0803.2969},
primaryClass={cs.NE cs.CE}
} | aickelin2008an |
arxiv-3113 | 0803.2970 | A Recommender System based on Idiotypic Artificial Immune Networks | <|reference_start|>A Recommender System based on Idiotypic Artificial Immune Networks: The immune system is a complex biological system with a highly distributed, adaptive and self-organising nature. This paper presents an Artificial Immune System (AIS) that exploits some of these characteristics and is applied to the task of film recommendation by Collaborative Filtering (CF). Natural evolution and in particular the immune system have not been designed for classical optimisation. However, for this problem, we are not interested in finding a single optimum. Rather we intend to identify a sub-set of good matches on which recommendations can be based. It is our hypothesis that an AIS built on two central aspects of the biological immune system will be an ideal candidate to achieve this: Antigen-antibody interaction for matching and idiotypic antibody-antibody interaction for diversity. Computational results are presented in support of this conjecture and compared to those found by other CF techniques.<|reference_end|> | arxiv | @article{cayzer2008a,
title={A Recommender System based on Idiotypic Artificial Immune Networks},
author={Steve Cayzer and Uwe Aickelin},
journal={Journal of Mathematical Modelling and Algorithms, 4(2), pp
181-198, 2005},
year={2008},
doi={10.1007/s10852-004-5336-7},
archivePrefix={arXiv},
eprint={0803.2970},
primaryClass={cs.NE cs.AI}
} | cayzer2008a |
arxiv-3114 | 0803.2973 | Rule Generalisation in Intrusion Detection Systems using Snort | <|reference_start|>Rule Generalisation in Intrusion Detection Systems using Snort: Intrusion Detection Systems (ids)provide an important layer of security for computer systems and networks, and are becoming more and more necessary as reliance on Internet services increases and systems with sensitive data are more commonly open to Internet access. An ids responsibility is to detect suspicious or unacceptable system and network activity and to alert a systems administrator to this activity. The majority of ids use a set of signatures that define what suspicious traffic is, and Snort is one popular and actively developing open-source ids that uses such a set of signatures known as Snort rules. Our aim is to identify a way in which Snort could be developed further by generalising rules to identify novel attacks. In particular, we attempted to relax and vary the conditions and parameters of current Snort rules, using a similar approach to classic rule learning operators such as generalisation and specialisation. We demonstrate the effectiveness of our approach through experiments with standard datasets and show that we are able to detect previously undeleted variants of various attacks. We conclude by discussing the general effectiveness and appropriateness of generalisation in Snort based ids rule processing.<|reference_end|> | arxiv | @article{aickelin2008rule,
title={Rule Generalisation in Intrusion Detection Systems using Snort},
author={Uwe Aickelin and Jamie Twycross and Thomas Hesketh-Roberts},
journal={International Journal of Electronic Security and Digital
Forensics, 1 (1), pp 101-116, 2007},
year={2008},
doi={10.1504/IJESDF.2007.013596},
archivePrefix={arXiv},
eprint={0803.2973},
primaryClass={cs.NE cs.CR}
} | aickelin2008rule |
arxiv-3115 | 0803.2975 | An Estimation of Distribution Algorithm for Nurse Scheduling | <|reference_start|>An Estimation of Distribution Algorithm for Nurse Scheduling: Schedules can be built in a similar way to a human scheduler by using a set of rules that involve domain knowledge. This paper presents an Estimation of Distribution Algorithm (eda) for the nurse scheduling problem, which involves choosing a suitable scheduling rule from a set for the assignment of each nurse. Unlike previous work that used Genetic Algorithms (ga) to implement implicit learning, the learning in the proposed algorithm is explicit, i.e. we identify and mix building blocks directly. The eda is applied to implement such explicit learning by building a Bayesian network of the joint distribution of solutions. The conditional probability of each variable in the network is computed according to an initial set of promising solutions. Subsequently, each new instance for each variable is generated by using the corresponding conditional probabilities, until all variables have been generated, i.e. in our case, a new rule string has been obtained. Another set of rule strings will be generated in this way, some of which will replace previous strings based on fitness selection. If stopping conditions are not met, the conditional probabilities for all nodes in the Bayesian network are updated again using the current set of promising rule strings. Computational results from 52 real data instances demonstrate the success of this approach. It is also suggested that the learning mechanism in the proposed approach might be suitable for other scheduling problems.<|reference_end|> | arxiv | @article{aickelin2008an,
title={An Estimation of Distribution Algorithm for Nurse Scheduling},
author={Uwe Aickelin and Jingpeng Li},
journal={Annals of Operations Research, 155 (1), pp 289-309, 2007},
year={2008},
doi={10.1007/s10479-007-0214-0},
archivePrefix={arXiv},
eprint={0803.2975},
primaryClass={cs.NE cs.CE}
} | aickelin2008an |
arxiv-3116 | 0803.2981 | Idiotypic Immune Networks in Mobile Robot Control | <|reference_start|>Idiotypic Immune Networks in Mobile Robot Control: Jerne's idiotypic network theory postulates that the immune response involves inter-antibody stimulation and suppression as well as matching to antigens. The theory has proved the most popular Artificial Immune System (ais) model for incorporation into behavior-based robotics but guidelines for implementing idiotypic selection are scarce. Furthermore, the direct effects of employing the technique have not been demonstrated in the form of a comparison with non-idiotypic systems. This paper aims to address these issues. A method for integrating an idiotypic ais network with a Reinforcement Learning based control system (rl) is described and the mechanisms underlying antibody stimulation and suppression are explained in detail. Some hypotheses that account for the network advantage are put forward and tested using three systems with increasing idiotypic complexity. The basic rl, a simplified hybrid ais-rl that implements idiotypic selection independently of derived concentration levels and a full hybrid ais-rl scheme are examined. The test bed takes the form of a simulated Pioneer robot that is required to navigate through maze worlds detecting and tracking door markers.<|reference_end|> | arxiv | @article{whitbrook2008idiotypic,
title={Idiotypic Immune Networks in Mobile Robot Control},
author={Amanda Whitbrook and Uwe Aickelin and Jonathan Garibaldi},
journal={IEEE Transactions on Systems, Man and Cybernetics, Part B, 37(6),
1581-1598, 2007},
year={2008},
doi={10.1109/TSMCB.2007.907334},
archivePrefix={arXiv},
eprint={0803.2981},
primaryClass={cs.NE cs.AI cs.RO}
} | whitbrook2008idiotypic |
arxiv-3117 | 0803.2995 | The Role of Management Practices in Closing the Productivity Gap | <|reference_start|>The Role of Management Practices in Closing the Productivity Gap: There is no doubt that management practices are linked to the productivity and performance of a company. However, research findings are mixed. This paper provides a multi-disciplinary review of the current evidence of such a relationship and offers suggestions for further exploration. We provide an extensive review of the literature in terms of research findings from studies that have been trying to measure and understand the impact that individual management practices and clusters of management practices have on productivity at different levels of analysis. We focus our review on Operations Management (om) and Human Resource Management (hrm) practices as well as joint applications of these practices. In conclusion, we can say that taken as a whole, the research findings are equivocal. Some studies have found a positive relationship between the adoption of management practices and productivity, some negative and some no association whatsoever. We believe that the lack of universal consensus on the effect of the adoption of complementary management practices might be driven either by measurement issues or by the level of analysis. Consequently, there is a need for further research. In particular, for a multi-level approach from the lowest possible level of aggregation up to the firm-level of analysis in order to assess the impact of management practices upon the productivity of firms.<|reference_end|> | arxiv | @article{siebers2008the,
title={The Role of Management Practices in Closing the Productivity Gap},
author={Peer-Olaf Siebers and Uwe Aickelin and Giuliana Battisti and Helen
Celia and Christopher Clegg and Xiaolan Fu and Raphael De Hoyos and
Alfonsiana Iona and Alina Petrescu and Peixoto Adriano},
journal={AIM Working Paper Series Number 065, 2008},
year={2008},
archivePrefix={arXiv},
eprint={0803.2995},
primaryClass={cs.OH}
} | siebers2008the |
arxiv-3118 | 0803.3027 | Towards a Symbolic-Numeric Method to Compute Puiseux Series: The Modular Part | <|reference_start|>Towards a Symbolic-Numeric Method to Compute Puiseux Series: The Modular Part: We have designed a new symbolic-numeric strategy to compute efficiently and accurately floating point Puiseux series defined by a bivariate polynomial over an algebraic number field. In essence, computations modulo a well chosen prime $p$ are used to obtain the exact information required to guide floating point computations. In this paper, we detail the symbolic part of our algorithm: First of all, we study modular reduction of Puiseux series and give a good reduction criterion to ensure that the information required by the numerical part is preserved. To establish our results, we introduce a simple modification of classical Newton polygons, that we call "generic Newton polygons", which happen to be very convenient. Then, we estimate the arithmetic complexity of computing Puiseux series over finite fields and improve known bounds. Finally, we give bit-complexity bounds for deterministic and randomized versions of the symbolic part. The details of the numerical part will be described in a forthcoming paper.<|reference_end|> | arxiv | @article{poteaux2008towards,
title={Towards a Symbolic-Numeric Method to Compute Puiseux Series: The Modular
Part},
author={Adrien Poteaux, Marc Rybowicz},
journal={arXiv preprint arXiv:0803.3027},
year={2008},
archivePrefix={arXiv},
eprint={0803.3027},
primaryClass={cs.SC}
} | poteaux2008towards |
arxiv-3119 | 0803.3099 | Concurrent Composition and Algebras of Events, Actions, and Processes | <|reference_start|>Concurrent Composition and Algebras of Events, Actions, and Processes: There are many different models of concurrent processes. The goal of this work is to introduce a common formalized framework for current research in this area and to eliminate shortcomings of existing models of concurrency. Following up the previous research of the authors and other researchers on concurrency, here we build a high-level metamodel EAP (event-action-process) for concurrent processes. This metamodel comprises a variety of other models of concurrent processes. We shape mathematical models for, and study events, actions, and processes in relation to important practical problems, such as communication in networks, concurrent programming, and distributed computations. In the third section of the work, a three-level algebra of events, actions and processes is constructed and studied as a new stage of algebra for concurrent processes. Relations between EAP process algebra and other models of concurrency are considered in the fourth section of this work.<|reference_end|> | arxiv | @article{a2008concurrent,
title={Concurrent Composition and Algebras of Events, Actions, and Processes},
author={Mark Burgin and Marc L. Smith},
journal={arXiv preprint arXiv:0803.3099},
year={2008},
archivePrefix={arXiv},
eprint={0803.3099},
primaryClass={cs.LO cs.PL}
} | a2008concurrent |
arxiv-3120 | 0803.3117 | On the Diversity-Multiplexing Tradeoff in Multiple-Relay Network | <|reference_start|>On the Diversity-Multiplexing Tradeoff in Multiple-Relay Network: This paper studies the setup of a multiple-relay network in which $K$ half-duplex multiple-antenna relays assist in the transmission between a/several multiple-antenna transmitter(s) and a multiple-antenna receiver. Each two nodes are assumed to be either connected through a quasi-static Rayleigh fading channel, or disconnected. We propose a new scheme, which we call \textit{random sequential} (RS), based on the amplify-and-forward relaying. We prove that for general multiple-antenna multiple-relay networks, the proposed scheme achieves the maximum diversity gain. Furthermore, we derive diversity-multiplexing tradeoff (DMT) of the proposed RS scheme for general single-antenna multiple-relay networks. It is shown that for single-antenna two-hop multiple-access multiple-relay ($K>1$) networks (without direct link between the transmitter(s) and the receiver), the proposed RS scheme achieves the optimum DMT. However, for the case of multiple access single relay setup, we show that the RS scheme reduces to the naive amplify-and-forward relaying and is not optimum in terms of DMT, while the dynamic decode-and-forward scheme is shown to be optimum for this scenario.<|reference_end|> | arxiv | @article{gharan2008on,
title={On the Diversity-Multiplexing Tradeoff in Multiple-Relay Network},
author={Shahab Oveis Gharan and Alireza Bayesteh and Amir K. Khandani},
journal={arXiv preprint arXiv:0803.3117},
year={2008},
archivePrefix={arXiv},
eprint={0803.3117},
primaryClass={cs.IT math.IT}
} | gharan2008on |
arxiv-3121 | 0803.3186 | Towards a human eye behavior model by applying Data Mining Techniques on Gaze Information from IEC | <|reference_start|>Towards a human eye behavior model by applying Data Mining Techniques on Gaze Information from IEC: In this paper, we firstly present what is Interactive Evolutionary Computation (IEC) and rapidly how we have combined this artificial intelligence technique with an eye-tracker for visual optimization. Next, in order to correctly parameterize our application, we present results from applying data mining techniques on gaze information coming from experiments conducted on about 80 human individuals.<|reference_end|> | arxiv | @article{pallez2008towards,
title={Towards a human eye behavior model by applying Data Mining Techniques on
Gaze Information from IEC},
author={Denis Pallez (I3S), Laurent Brisson (I3S), Thierry Baccino (LPEQ)},
journal={In Proceedings of the Third International Conference on Human
Centered Processes, Delft, The Netherlands (2008)},
year={2008},
archivePrefix={arXiv},
eprint={0803.3186},
primaryClass={cs.HC cs.NE}
} | pallez2008towards |
arxiv-3122 | 0803.3187 | Labeled Natural Deduction Systems for a Family of Tense Logics | <|reference_start|>Labeled Natural Deduction Systems for a Family of Tense Logics: We give labeled natural deduction systems for a family of tense logics extending the basic linear tense logic Kl. We prove that our systems are sound and complete with respect to the usual Kripke semantics, and that they possess a number of useful normalization properties (in particular, derivations reduce to a normal form that enjoys a subformula property). We also discuss how to extend our systems to capture richer logics like (fragments of) LTL.<|reference_end|> | arxiv | @article{viganò2008labeled,
title={Labeled Natural Deduction Systems for a Family of Tense Logics},
author={Luca Vigan\`o and Marco Volpe},
journal={arXiv preprint arXiv:0803.3187},
year={2008},
archivePrefix={arXiv},
eprint={0803.3187},
primaryClass={cs.LO}
} | viganò2008labeled |
arxiv-3123 | 0803.3192 | Eye-Tracking Evolutionary Algorithm to minimize user's fatigue in IEC applied to Interactive One-Max problem | <|reference_start|>Eye-Tracking Evolutionary Algorithm to minimize user's fatigue in IEC applied to Interactive One-Max problem: In this paper, we describe a new algorithm that combines Interactive Evolutionary Computation with an eye-tracker in order to minimize the fatigue of the user during the evaluation process. The approach is then applied to the Interactive One-Max optimization problem.<|reference_end|> | arxiv | @article{pallez2008eye-tracking,
title={Eye-Tracking Evolutionary Algorithm to minimize user's fatigue in IEC
applied to Interactive One-Max problem},
author={Denis Pallez (LIRIS), Philippe Collard (I3S), Thierry Baccino (LPEQ),
Laurent Dumercy (LPEQ)},
journal={In Proceedings of the 2007 GECCO conference companion on Genetic
and evolutionary computation (GECCO '07), London, United Kingdom (2007)},
year={2008},
doi={10.1145/1274000.1274098},
archivePrefix={arXiv},
eprint={0803.3192},
primaryClass={cs.AI}
} | pallez2008eye-tracking |
arxiv-3124 | 0803.3224 | A Model-Based Frequency Constraint for Mining Associations from Transaction Data | <|reference_start|>A Model-Based Frequency Constraint for Mining Associations from Transaction Data: Mining frequent itemsets is a popular method for finding associated items in databases. For this method, support, the co-occurrence frequency of the items which form an association, is used as the primary indicator of the association's significance. A single user-specified support threshold is used to decide whether associations should be further investigated. Support has some known problems with rare items, favors shorter itemsets and sometimes produces misleading associations. In this paper we develop a novel model-based frequency constraint as an alternative to a single, user-specified minimum support. The constraint utilizes knowledge of the process generating transaction data by applying a simple stochastic mixture model (the NB model) which allows for transaction data's typically highly skewed item frequency distribution. A user-specified precision threshold is used together with the model to find local frequency thresholds for groups of itemsets. Based on the constraint we develop the notion of NB-frequent itemsets and adapt a mining algorithm to find all NB-frequent itemsets in a database. In experiments with publicly available transaction databases we show that the new constraint provides improvements over a single minimum support threshold and that the precision threshold is more robust and easier to set and interpret by the user.<|reference_end|> | arxiv | @article{hahsler2008a,
title={A Model-Based Frequency Constraint for Mining Associations from
Transaction Data},
author={Michael Hahsler},
journal={Data Mining and Knowledge Discovery, 13(2):137-166, September
2006},
year={2008},
doi={10.1007/s10618-005-0026-2},
archivePrefix={arXiv},
eprint={0803.3224},
primaryClass={cs.DB}
} | hahsler2008a |
arxiv-3125 | 0803.3230 | A Type System for Data-Flow Integrity on Windows Vista | <|reference_start|>A Type System for Data-Flow Integrity on Windows Vista: The Windows Vista operating system implements an interesting model of multi-level integrity. We observe that in this model, trusted code can be blamed for any information-flow attack; thus, it is possible to eliminate such attacks by static analysis of trusted code. We formalize this model by designing a type system that can efficiently enforce data-flow integrity on Windows Vista. Typechecking guarantees that objects whose contents are statically trusted never contain untrusted values, regardless of what untrusted code runs in the environment. Some of Windows Vista's runtime access checks are necessary for soundness; others are redundant and can be optimized away.<|reference_end|> | arxiv | @article{chaudhuri2008a,
title={A Type System for Data-Flow Integrity on Windows Vista},
author={Avik Chaudhuri, Prasad Naldurg, and Sriram Rajamani},
journal={arXiv preprint arXiv:0803.3230},
year={2008},
archivePrefix={arXiv},
eprint={0803.3230},
primaryClass={cs.CR cs.OS cs.PL}
} | chaudhuri2008a |
arxiv-3126 | 0803.3231 | Archiving: The Overlooked Spreadsheet Risk | <|reference_start|>Archiving: The Overlooked Spreadsheet Risk: This paper maintains that archiving has been overlooked as a key spreadsheet internal control. The case of failed Jamaican commercial banks demonstrates how poor archiving can lead to weaknesses in spreadsheet control that contribute to operational risk. In addition, the Sarbanes-Oxley Act contains a number of provisions that require tighter control over the archiving of spreadsheets. To mitigate operational risks and achieve compliance with the records-related provisions of Sarbanes-Oxley, the author argues that organisations should introduce records management programmes that provide control over the archiving of spreadsheets. At a minimum, spreadsheet archiving controls should identify and ensure compliance with retention requirements, support document production in the event of regulatory inquiries or litigation, and prevent unauthorised destruction of records.<|reference_end|> | arxiv | @article{lemieux2008archiving:,
title={Archiving: The Overlooked Spreadsheet Risk},
author={Victoria Lemieux},
journal={Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2005 213-226
ISBN:1-902724-16-X},
year={2008},
archivePrefix={arXiv},
eprint={0803.3231},
primaryClass={cs.SE cs.CY}
} | lemieux2008archiving: |
arxiv-3127 | 0803.3314 | Loss Fluctuations and Temporal Correlations in Network Queues | <|reference_start|>Loss Fluctuations and Temporal Correlations in Network Queues: We consider data losses in a single node of a packet-switched Internet-like network. We employ two distinct models, one with discrete and the other with continuous one-dimensional random walks, representing the state of a queue in a router. Both models have a built-in critical behavior with a sharp transition from exponentially small to finite losses. It turns out that the finite capacity of a buffer and the packet-dropping procedure give rise to specific boundary conditions which lead to strong loss rate fluctuations at the critical point even in the absence of such fluctuations in the data arrival process.<|reference_end|> | arxiv | @article{lerner2008loss,
title={Loss Fluctuations and Temporal Correlations in Network Queues},
author={I. V. Lerner, I. V. Yurkevich, A. S. Stepanenko and C. C. Constantinou},
journal={arXiv preprint arXiv:0803.3314},
year={2008},
archivePrefix={arXiv},
eprint={0803.3314},
primaryClass={cs.NI cond-mat.stat-mech}
} | lerner2008loss |
arxiv-3128 | 0803.3338 | Performance Evaluation of Multiple TCP connections in iSCSI | <|reference_start|>Performance Evaluation of Multiple TCP connections in iSCSI: Scaling data storage is a significant concern in enterprise systems and Storage Area Networks (SANs) are deployed as a means to scale enterprise storage. SANs based on Fibre Channel have been used extensively in the last decade while iSCSI is fast becoming a serious contender due to its reduced costs and unified infrastructure. This work examines the performance of iSCSI with multiple TCP connections. Multiple TCP connections are often used to realize higher bandwidth but there may be no fairness in how bandwidth is distributed. We propose a mechanism to share congestion information across multiple flows in ``Fair-TCP'' for improved performance. Our results show that Fair-TCP significantly improves the performance for I/O intensive workloads.<|reference_end|> | arxiv | @article{k2008performance,
title={Performance Evaluation of Multiple TCP connections in iSCSI},
author={Bhargava Kumar K, Ganesh M. Narayan, K. Gopinath},
journal={Proceedings of the 24th IEEE Conference on Mass Storage Systems
and Technologies, 2007 - MSST '07},
year={2008},
archivePrefix={arXiv},
eprint={0803.3338},
primaryClass={cs.NI cs.DC cs.OS cs.PF}
} | k2008performance |
arxiv-3129 | 0803.3360 | Asymptotics of input-constrained binary symmetric channel capacity | <|reference_start|>Asymptotics of input-constrained binary symmetric channel capacity: We study the classical problem of noisy constrained capacity in the case of the binary symmetric channel (BSC), namely, the capacity of a BSC whose inputs are sequences chosen from a constrained set. Motivated by a result of Ordentlich and Weissman [In Proceedings of IEEE Information Theory Workshop (2004) 117--122], we derive an asymptotic formula (when the noise parameter is small) for the entropy rate of a hidden Markov chain, observed when a Markov chain passes through a BSC. Using this result, we establish an asymptotic formula for the capacity of a BSC with input process supported on an irreducible finite type constraint, as the noise parameter tends to zero.<|reference_end|> | arxiv | @article{han2008asymptotics,
title={Asymptotics of input-constrained binary symmetric channel capacity},
author={Guangyue Han, Brian Marcus},
journal={Annals of Applied Probability 2009, Vol. 19, No. 3, 1063-1091},
year={2008},
doi={10.1214/08-AAP570},
number={IMS-AAP-AAP570},
archivePrefix={arXiv},
eprint={0803.3360},
primaryClass={math.PR cs.IT math.IT}
} | han2008asymptotics |
arxiv-3130 | 0803.3363 | Node discovery in a networked organization | <|reference_start|>Node discovery in a networked organization: In this paper, I present a method to solve a node discovery problem in a networked organization. Covert nodes refer to the nodes which are not observable directly. They affect social interactions, but do not appear in the surveillance logs which record the participants of the social interactions. Discovering the covert nodes is defined as identifying the suspicious logs where the covert nodes would appear if the covert nodes became overt. A mathematical model is developed for the maximum likelihood estimation of the network behind the social interactions and for the identification of the suspicious logs. Precision, recall, and F measure characteristics are demonstrated with the dataset generated from a real organization and the computationally synthesized datasets. The performance is close to the theoretical limit for any covert nodes in the networks of any topologies and sizes if the ratio of the number of observations to the number of possible communication patterns is large.<|reference_end|> | arxiv | @article{maeno2008node,
title={Node discovery in a networked organization},
author={Yoshiharu Maeno},
journal={Proceedings of the IEEE International Conference on Systems, Man
and Cybernetics, San Antonio, October 2009},
year={2008},
doi={10.1109/ICSMC.2009.5346826},
archivePrefix={arXiv},
eprint={0803.3363},
primaryClass={cs.AI}
} | maeno2008node |
arxiv-3131 | 0803.3394 | Facing the Facts | <|reference_start|>Facing the Facts: Human error research on overconfidence supports the benefits of early visibility of defects and disciplined development. If risk to the enterprise is to be reduced, individuals need to become aware of the reality of the quality of their work. Several cycles of inspection and defect removal are inevitable. Software Quality Management measurements of defect density and removal efficiency are applicable. Research of actual spreadsheet error rates shows data consistent with other software depending on the extent to which the work product was reviewed before inspection. The paper argues that the payback for an investment in early review time is justified by the saving in project delay and expensive errors in use. 'If debugging is the process of removing bugs, then programming must be the process of putting them in' - Anon.<|reference_end|> | arxiv | @article{o'beirne2008facing,
title={Facing the Facts},
author={Patrick O'Beirne},
journal={Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2007 209-214
ISBN 978-905617-58-6},
year={2008},
archivePrefix={arXiv},
eprint={0803.3394},
primaryClass={cs.HC}
} | o'beirne2008facing |
arxiv-3132 | 0803.3404 | Some results on $\mathbb{R}$-computable structures | <|reference_start|>Some results on $\mathbb{R}$-computable structures: This survey paper examines the effective model theory obtained with the BSS model of real number computation. It treats the following topics: computable ordinals, satisfaction of computable infinitary formulas, forcing as a construction technique, effective categoricity, effective topology, and relations with other models for the effective theory of uncountable structures.<|reference_end|> | arxiv | @article{calvert2008some,
title={Some results on $\mathbb{R}$-computable structures},
author={Wesley Calvert and John E. Porter},
journal={arXiv preprint arXiv:0803.3404},
year={2008},
archivePrefix={arXiv},
eprint={0803.3404},
primaryClass={cs.DB cs.LO math.LO}
} | calvert2008some |
arxiv-3133 | 0803.3419 | Decomposing replicable functions | <|reference_start|>Decomposing replicable functions: We describe an algorithm to decompose rational functions from which we determine the poset of groups fixing these functions.<|reference_end|> | arxiv | @article{mckay2008decomposing,
title={Decomposing replicable functions},
author={John McKay, David Sevilla},
journal={LMS Journal of Computation and Mathematics 11 (June 2008), p.
146-171. ISSN 1461-1570.},
year={2008},
archivePrefix={arXiv},
eprint={0803.3419},
primaryClass={math.NT cs.SC math.RT}
} | mckay2008decomposing |
arxiv-3134 | 0803.3422 | Organization of modular networks | <|reference_start|>Organization of modular networks: We examine the global organization of heterogeneous equilibrium networks consisting of a number of well distinguished interconnected parts--``communities'' or modules. We develop an analytical approach allowing us to obtain the statistics of connected components and an intervertex distance distribution in these modular networks, and to describe their global organization and structure. In particular, we study the evolution of the intervertex distance distribution with an increasing number of interlinks connecting two infinitely large uncorrelated networks. We demonstrate that even a relatively small number of shortcuts unite the networks into one. In more precise terms, if the number of the interlinks is any finite fraction of the total number of connections, then the intervertex distance distribution approaches a delta-function peaked form, and so the network is united.<|reference_end|> | arxiv | @article{dorogovtsev2008organization,
title={Organization of modular networks},
author={S. N. Dorogovtsev, J. F. F. Mendes, A. N. Samukhin, A. Y. Zyuzin},
journal={Phys. Rev. E 78, 056106 (2008)},
year={2008},
doi={10.1103/PhysRevE.78.056106},
archivePrefix={arXiv},
eprint={0803.3422},
primaryClass={cond-mat.stat-mech cs.NI math-ph math.MP physics.soc-ph}
} | dorogovtsev2008organization |
arxiv-3135 | 0803.3435 | Twenty-Five Moves Suffice for Rubik's Cube | <|reference_start|>Twenty-Five Moves Suffice for Rubik's Cube: How many moves does it take to solve Rubik's Cube? Positions are known that require 20 moves, and it has already been shown that there are no positions that require 27 or more moves; this is a surprisingly large gap. This paper describes a program that is able to find solutions of length 20 or less at a rate of more than 16 million positions a second. We use this program, along with some new ideas and incremental improvements in other techniques, to show that there is no position that requires 26 moves.<|reference_end|> | arxiv | @article{rokicki2008twenty-five,
title={Twenty-Five Moves Suffice for Rubik's Cube},
author={Tomas Rokicki},
journal={arXiv preprint arXiv:0803.3435},
year={2008},
archivePrefix={arXiv},
eprint={0803.3435},
primaryClass={cs.SC cs.DM}
} | rokicki2008twenty-five |
arxiv-3136 | 0803.3448 | Secure Hop-by-Hop Aggregation of End-to-End Concealed Data in Wireless Sensor Networks | <|reference_start|>Secure Hop-by-Hop Aggregation of End-to-End Concealed Data in Wireless Sensor Networks: In-network data aggregation is an essential technique in mission critical wireless sensor networks (WSNs) for achieving effective transmission and hence better power conservation. Common security protocols for aggregated WSNs are either hop-by-hop or end-to-end, each of which has its own encryption schemes considering different security primitives. End-to-end encrypted data aggregation protocols introduce maximum data secrecy with inefficient data aggregation and more vulnerability to active attacks, while hop-by-hop data aggregation protocols introduce maximum data integrity with efficient data aggregation and more vulnerability to passive attacks. In this paper, we propose a secure aggregation protocol for aggregated WSNs deployed in hostile environments in which dual attack modes are present. Our proposed protocol is a blend of flexible data aggregation as in hop-by-hop protocols and optimal data confidentiality as in end-to-end protocols. Our protocol introduces an efficient O(1) heuristic for checking data integrity along with a cost-effective heuristic-based divide-and-conquer attestation process, which is $O(\ln{n})$ on average ($O(n)$ in the worst case), for further verification of aggregated results.<|reference_end|> | arxiv | @article{mlaih2008secure,
title={Secure Hop-by-Hop Aggregation of End-to-End Concealed Data in Wireless
Sensor Networks},
author={Esam Mlaih, Salah A. Aly},
journal={Proc. 2nd IEEE MCN'08 workshop in conj. with Infocom'08, Phoenix,
AZ, 2008},
year={2008},
doi={10.1109/INFOCOM.2008.4544601},
archivePrefix={arXiv},
eprint={0803.3448},
primaryClass={cs.CR cs.IT cs.NI math.IT}
} | mlaih2008secure |
arxiv-3137 | 0803.3455 | A Local Mean Field Analysis of Security Investments in Networks | <|reference_start|>A Local Mean Field Analysis of Security Investments in Networks: Getting agents in the Internet, and in networks in general, to invest in and deploy security features and protocols is a challenge, in particular because of economic reasons arising from the presence of network externalities. Our goal in this paper is to carefully model and quantify the impact of such externalities on the investment in, and deployment of, security features and protocols in a network. Specifically, we study a network of interconnected agents, which are subject to epidemic risks such as those caused by propagating viruses and worms, and which can decide whether or not to invest some amount to self-protect and deploy security solutions. We make three contributions in the paper. First, we introduce a general model which combines an epidemic propagation model with an economic model for agents which captures network effects and externalities. Second, borrowing ideas and techniques used in statistical physics, we introduce a Local Mean Field (LMF) model, which extends the standard mean-field approximation to take into account the correlation structure on local neighborhoods. Third, we solve the LMF model in a network with externalities, and we derive analytic solutions for sparse random graphs, for which we obtain asymptotic results. We explicitly identify the impact of network externalities on the decision to invest in and deploy security features. In other words, we identify both the economic and network properties that determine the adoption of security technologies.<|reference_end|> | arxiv | @article{lelarge2008a,
title={A Local Mean Field Analysis of Security Investments in Networks},
author={Marc Lelarge and Jean Bolot},
journal={arXiv preprint arXiv:0803.3455},
year={2008},
archivePrefix={arXiv},
eprint={0803.3455},
primaryClass={cs.GT cs.NI}
} | lelarge2008a |
arxiv-3138 | 0803.3459 | The QWalk Simulator of Quantum Walks | <|reference_start|>The QWalk Simulator of Quantum Walks: Several research groups are giving special attention to quantum walks recently, because this research area has been used with success in the development of new efficient quantum algorithms. A general simulator of quantum walks is very important for the development of this area, since it allows the researchers to focus on the mathematical and physical aspects of the research instead of diverting their efforts to the implementation of specific numerical simulations. In this paper we present QWalk, a quantum walk simulator for one- and two-dimensional lattices. Finite two-dimensional lattices with generic topologies can be used. Decoherence can be simulated by performing measurements or by breaking links of the lattice. We use examples to explain the usage of the software and to show some recent results of the literature that are easily reproduced by the simulator.<|reference_end|> | arxiv | @article{marquezino2008the,
title={The QWalk Simulator of Quantum Walks},
author={F.L. Marquezino, R. Portugal},
journal={Computer Physics Communications, Volume 179, Issue 5, Pages
359-369. (2008)},
year={2008},
doi={10.1016/j.cpc.2008.02.019},
archivePrefix={arXiv},
eprint={0803.3459},
primaryClass={quant-ph cs.MS}
} | marquezino2008the |
arxiv-3139 | 0803.3482 | Diversity of Online Community Activities | <|reference_start|>Diversity of Online Community Activities: Web sites where users create and rate content as well as form networks with other users display long-tailed distributions in many aspects of behavior. Using behavior on one such community site, Essembly, we propose and evaluate plausible mechanisms to explain these behaviors. Unlike purely descriptive models, these mechanisms rely on user behaviors based on information available locally to each user. For Essembly, we find the long-tails arise from large differences among user activity rates and qualities of the rated content, as well as the extensive variability in the time users devote to the site. We show that the models not only explain overall behavior but also allow estimating the quality of content from their early behaviors.<|reference_end|> | arxiv | @article{hogg2008diversity,
title={Diversity of Online Community Activities},
author={Tad Hogg and Gabor Szabo},
journal={arXiv preprint arXiv:0803.3482},
year={2008},
doi={10.1209/0295-5075/86/38003},
archivePrefix={arXiv},
eprint={0803.3482},
primaryClass={cs.CY cs.HC physics.soc-ph}
} | hogg2008diversity |
arxiv-3140 | 0803.3490 | Robustness and Regularization of Support Vector Machines | <|reference_start|>Robustness and Regularization of Support Vector Machines: We consider regularized support vector machines (SVMs) and show that they are precisely equivalent to a new robust optimization formulation. We show that this equivalence of robust optimization and regularization has implications for both algorithms and analysis. In terms of algorithms, the equivalence suggests more general SVM-like algorithms for classification that explicitly build in protection to noise, and at the same time control overfitting. On the analysis front, the equivalence of robustness and regularization provides a robust optimization interpretation for the success of regularized SVMs. We use this new robustness interpretation of SVMs to give a new proof of consistency of (kernelized) SVMs, thus establishing robustness as the reason regularized SVMs generalize well.<|reference_end|> | arxiv | @article{xu2008robustness,
title={Robustness and Regularization of Support Vector Machines},
author={Huan Xu, Constantine Caramanis and Shie Mannor},
journal={Journal of Machine Learning Research, vol 10, 1485-1510, year 2009},
year={2008},
archivePrefix={arXiv},
eprint={0803.3490},
primaryClass={cs.LG cs.AI}
} | xu2008robustness |
arxiv-3141 | 0803.3501 | Multiagent Approach for the Representation of Information in a Decision Support System | <|reference_start|>Multiagent Approach for the Representation of Information in a Decision Support System: In an emergency situation, the actors need assistance allowing them to react swiftly and efficiently. To this end, we present in this paper a decision support system that aims to prepare actors in a crisis situation by providing decision-making support. The global architecture of this system is presented in the first part. Then we focus on a part of this system which is designed to represent the information of the current situation. This part is composed of a multiagent system that is made of factual agents. Each agent carries a semantic feature and aims to represent a partial view of a situation. The agents evolve through their interactions by comparing their semantic features using proximity measures and according to specific ontologies.<|reference_end|> | arxiv | @article{kebair2008multiagent,
title={Multiagent Approach for the Representation of Information in a Decision
Support System},
author={Fahem Kebair (LITIS), Fr\'ed\'eric Serin (LITIS)},
journal={Multiagent Approach for the Representation of Information in a
Decision Support System, Springer Berlin / Heidelberg (Ed.) (2006) 98-107},
year={2008},
archivePrefix={arXiv},
eprint={0803.3501},
primaryClass={cs.AI}
} | kebair2008multiagent |
arxiv-3142 | 0803.3515 | Geographic Information Systems in Evaluation and Visualization of Construction Schedule | <|reference_start|>Geographic Information Systems in Evaluation and Visualization of Construction Schedule: Commercially available scheduling tools such as Primavera and Microsoft Project fail to provide information pertaining to the spatial aspects of construction project. A methodology using geographical information systems (GIS) is developed to represent spatial aspects of the construction progress graphically by synchronizing it with construction schedule. The spatial aspects are depicted by 3D model developed in AutoCAD and construction schedule is generated using Microsoft Excel. Spatial and scheduling information are linked together into the GIS environment (ArcGIS). The GIS-based system developed in this study may help in better understanding the schedule along with its spatial aspects.<|reference_end|> | arxiv | @article{bansal2008geographic,
title={Geographic Information Systems in Evaluation and Visualization of
Construction Schedule},
author={V. K. Bansal and Mahesh Pal},
journal={arXiv preprint arXiv:0803.3515},
year={2008},
archivePrefix={arXiv},
eprint={0803.3515},
primaryClass={cs.OH}
} | bansal2008geographic |
arxiv-3143 | 0803.3531 | A New Upper Bound for Max-2-Sat: A Graph-Theoretic Approach | <|reference_start|>A New Upper Bound for Max-2-Sat: A Graph-Theoretic Approach: In {\sc MaxSat}, we ask for an assignment which satisfies the maximum number of clauses for a boolean formula in CNF. We present an algorithm yielding a run time upper bound of $O^*(2^{\frac{K}{6.2158}})$ for {\sc Max-2-Sat} (each clause contains at most 2 literals), where $K$ is the number of clauses. The run time has been achieved by using heuristic priorities on the choice of the variable on which we branch. The implementation of these heuristic priorities is rather simple, though they have a significant effect on the run time. The analysis is done using a tailored non-standard measure.<|reference_end|> | arxiv | @article{raible2008a,
title={A New Upper Bound for Max-2-Sat: A Graph-Theoretic Approach},
author={Daniel Raible and Henning Fernau},
journal={arXiv preprint arXiv:0803.3531},
year={2008},
archivePrefix={arXiv},
eprint={0803.3531},
primaryClass={cs.DS cs.CC cs.DM}
} | raible2008a |
arxiv-3144 | 0803.3539 | Reinforcement Learning by Value Gradients | <|reference_start|>Reinforcement Learning by Value Gradients: The concept of the value-gradient is introduced and developed in the context of reinforcement learning. It is shown that by learning the value-gradients exploration or stochastic behaviour is no longer needed to find locally optimal trajectories. This is the main motivation for using value-gradients, and it is argued that learning value-gradients is the actual objective of any value-function learning algorithm for control problems. It is also argued that learning value-gradients is significantly more efficient than learning just the values, and this argument is supported in experiments by efficiency gains of several orders of magnitude, in several problem domains. Once value-gradients are introduced into learning, several analyses become possible. For example, a surprising equivalence between a value-gradient learning algorithm and a policy-gradient learning algorithm is proven, and this provides a robust convergence proof for control problems using a value function with a general function approximator.<|reference_end|> | arxiv | @article{fairbank2008reinforcement,
title={Reinforcement Learning by Value Gradients},
author={Michael Fairbank},
journal={arXiv preprint arXiv:0803.3539},
year={2008},
archivePrefix={arXiv},
eprint={0803.3539},
primaryClass={cs.NE cs.AI}
} | fairbank2008reinforcement |
arxiv-3145 | 0803.3553 | New Families of Triple Error Correcting Codes with BCH Parameters | <|reference_start|>New Families of Triple Error Correcting Codes with BCH Parameters: Discovered by Bose, Chaudhuri and Hocquenghem, the BCH family of error correcting codes are one of the most studied families in coding theory. They are also among the best performing codes, particularly when the number of errors being corrected is small relative to the code length. In this article we consider binary codes with minimum distance of 7. We construct new families of codes with these BCH parameters via a generalisation of the Kasami-Welch Theorem.<|reference_end|> | arxiv | @article{bracken2008new,
title={New Families of Triple Error Correcting Codes with BCH Parameters},
author={Carl Bracken},
journal={arXiv preprint arXiv:0803.3553},
year={2008},
archivePrefix={arXiv},
eprint={0803.3553},
primaryClass={cs.IT cs.DM math.IT}
} | bracken2008new |
arxiv-3146 | 0803.3608 | The Category-Theoretic Arithmetic of Information | <|reference_start|>The Category-Theoretic Arithmetic of Information: We highlight the underlying category-theoretic structure of measures of information flow. We present an axiomatic framework in which communication systems are represented as morphisms, and information flow is characterized by its behavior when communication systems are combined. Our framework includes a variety of discrete, continuous, and, conjecturally, quantum information measures. It also includes some familiar mathematical constructs not normally associated with information, such as vector space dimension. We discuss these examples and prove basic results from the axioms.<|reference_end|> | arxiv | @article{allen2008the,
title={The Category-Theoretic Arithmetic of Information},
author={Benjamin Allen},
journal={arXiv preprint arXiv:0803.3608},
year={2008},
archivePrefix={arXiv},
eprint={0803.3608},
primaryClass={math.CT cs.IT math.IT}
} | allen2008the |
arxiv-3147 | 0803.3632 | Void Traversal for Guaranteed Delivery in Geometric Routing | <|reference_start|>Void Traversal for Guaranteed Delivery in Geometric Routing: Geometric routing algorithms like GFG (GPSR) are lightweight, scalable algorithms that can be used to route in resource-constrained ad hoc wireless networks. However, such algorithms run on planar graphs only. To efficiently construct a planar graph, they require a unit-disk graph. To make the topology unit-disk, the maximum link length in the network has to be selected conservatively. In practical setting this leads to the designs where the node density is rather high. Moreover, the network diameter of a planar subgraph is greater than the original graph, which leads to longer routes. To remedy this problem, we propose a void traversal algorithm that works on arbitrary geometric graphs. We describe how to use this algorithm for geometric routing with guaranteed delivery and compare its performance with GFG.<|reference_end|> | arxiv | @article{nesterenko2008void,
title={Void Traversal for Guaranteed Delivery in Geometric Routing},
author={Mikhail Nesterenko, Adnan Vora},
journal={The 2nd IEEE International Conference on Mobile Ad-hoc and Sensor
Systems (MASS 2005), Washington, DC, November, 2005},
year={2008},
doi={10.1109/MAHSS.2005.1542862},
archivePrefix={arXiv},
eprint={0803.3632},
primaryClass={cs.OS cs.DC cs.DS}
} | nesterenko2008void |
arxiv-3148 | 0803.3642 | Distributed Averaging in the presence of a Sparse Cut | <|reference_start|>Distributed Averaging in the presence of a Sparse Cut: We consider the question of averaging on a graph that has one sparse cut separating two subgraphs that are internally well connected. While there has been a large body of work devoted to algorithms for distributed averaging, nearly all algorithms involve only {\it convex} updates. In this paper, we suggest that {\it non-convex} updates can lead to significant improvements. We do so by exhibiting a decentralized algorithm for graphs with one sparse cut that uses non-convex averages and has an averaging time that can be significantly smaller than the averaging time of known distributed algorithms, such as those of \cite{tsitsiklis, Boyd}. We use stochastic dominance to prove this result in a way that may be of independent interest.<|reference_end|> | arxiv | @article{narayanan2008distributed,
title={Distributed Averaging in the presence of a Sparse Cut},
author={Hariharan Narayanan},
journal={arXiv preprint arXiv:0803.3642},
year={2008},
archivePrefix={arXiv},
eprint={0803.3642},
primaryClass={cs.DC}
} | narayanan2008distributed |
arxiv-3149 | 0803.3645 | A New Sphere-Packing Bound for Maximal Error Exponent for Multiple-Access Channels | <|reference_start|>A New Sphere-Packing Bound for Maximal Error Exponent for Multiple-Access Channels: In this work, a new lower bound for the maximal error probability of a two-user discrete memoryless (DM) multiple-access channel (MAC) is derived. This is the first bound of this type that explicitly imposes independence of the users' input distributions (conditioned on the time-sharing auxiliary variable) and thus results in a tighter sphere-packing exponent when compared to the tightest known exponent derived by Haroutunian.<|reference_end|> | arxiv | @article{nazari2008a,
title={A New Sphere-Packing Bound for Maximal Error Exponent for
Multiple-Access Channels},
author={Ali Nazari, Sandeep Pradhan, Achilleas Anastasopoulos},
journal={arXiv preprint arXiv:0803.3645},
year={2008},
archivePrefix={arXiv},
eprint={0803.3645},
primaryClass={cs.IT math.IT}
} | nazari2008a |
arxiv-3150 | 0803.3657 | Improved Lower Bounds for Constant GC-Content DNA Codes | <|reference_start|>Improved Lower Bounds for Constant GC-Content DNA Codes: The design of large libraries of oligonucleotides having constant GC-content and satisfying Hamming distance constraints between oligonucleotides and their Watson-Crick complements is important in reducing hybridization errors in DNA computing, DNA microarray technologies, and molecular bar coding. Various techniques have been studied for the construction of such oligonucleotide libraries, ranging from algorithmic constructions via stochastic local search to theoretical constructions via coding theory. We introduce a new stochastic local search method which yields improvements up to more than one third of the benchmark lower bounds of Gaborit and King (2005) for n-mer oligonucleotide libraries when n <= 14. We also found several optimal libraries by computing maximum cliques on certain graphs.<|reference_end|> | arxiv | @article{chee2008improved,
title={Improved Lower Bounds for Constant GC-Content DNA Codes},
author={Yeow Meng Chee and San Ling},
journal={IEEE Transactions on Information Theory, vol. 54, no. 1, pp.
391-394, 2008},
year={2008},
doi={10.1109/TIT.2007.911167},
archivePrefix={arXiv},
eprint={0803.3657},
primaryClass={cs.IT cs.DS math.CO math.IT q-bio.GN q-bio.QM}
} | chee2008improved |
arxiv-3151 | 0803.3658 | The Sizes of Optimal q-Ary Codes of Weight Three and Distance Four: A Complete Solution | <|reference_start|>The Sizes of Optimal q-Ary Codes of Weight Three and Distance Four: A Complete Solution: This correspondence introduces two new constructive techniques to complete the determination of the sizes of optimal q-ary codes of constant weight three and distance four.<|reference_end|> | arxiv | @article{chee2008the,
title={The Sizes of Optimal q-Ary Codes of Weight Three and Distance Four: A
Complete Solution},
author={Yeow Meng Chee, Son Hoang Dau, Alan C. H. Ling, and San Ling},
journal={IEEE Transactions on Information Theory, vol. 54, no. 3, pp.
1291-1295, 2008},
year={2008},
doi={10.1109/TIT.2007.915885},
archivePrefix={arXiv},
eprint={0803.3658},
primaryClass={cs.IT cs.DM math.CO math.IT}
} | chee2008the |
arxiv-3152 | 0803.3670 | On the cubicity of AT-free graphs and circular-arc graphs | <|reference_start|>On the cubicity of AT-free graphs and circular-arc graphs: A unit cube in $k$ dimensions ($k$-cube) is defined as the Cartesian product $R_1\times R_2\times...\times R_k$ where $R_i$ (for $1\leq i\leq k$) is a closed interval of the form $[a_i,a_i+1]$ on the real line. A graph $G$ on $n$ nodes is said to be representable as the intersection of $k$-cubes (cube representation in $k$ dimensions) if each vertex of $G$ can be mapped to a $k$-cube such that two vertices are adjacent in $G$ if and only if their corresponding $k$-cubes have a non-empty intersection. The \emph{cubicity} of $G$ denoted as $\cubi(G)$ is the minimum $k$ for which $G$ can be represented as the intersection of $k$-cubes. We give an $O(bw\cdot n)$ algorithm to compute the cube representation of a general graph $G$ in $bw+1$ dimensions given a bandwidth ordering of the vertices of $G$, where $bw$ is the \emph{bandwidth} of $G$. As a consequence, we get $O(\Delta)$ upper bounds on the cubicity of many well-known graph classes such as AT-free graphs, circular-arc graphs and co-comparability graphs which have $O(\Delta)$ bandwidth. Thus we have: 1) $\cubi(G)\leq 3\Delta-1$, if $G$ is an AT-free graph. 2) $\cubi(G)\leq 2\Delta+1$, if $G$ is a circular-arc graph. 3) $\cubi(G)\leq 2\Delta$, if $G$ is a co-comparability graph. Also for these graph classes, there are constant factor approximation algorithms for bandwidth computation that generate orderings of vertices with $O(\Delta)$ width. We can thus generate the cube representation of such graphs in $O(\Delta)$ dimensions in polynomial time.<|reference_end|> | arxiv | @article{chandran2008on,
title={On the cubicity of AT-free graphs and circular-arc graphs},
author={L. Sunil Chandran, Mathew C. Francis, Naveen Sivadasan},
journal={arXiv preprint arXiv:0803.3670},
year={2008},
archivePrefix={arXiv},
eprint={0803.3670},
primaryClass={cs.DM}
} | chandran2008on |
arxiv-3153 | 0803.3693 | Succinct Data Structures for Retrieval and Approximate Membership | <|reference_start|>Succinct Data Structures for Retrieval and Approximate Membership: The retrieval problem is the problem of associating data with keys in a set. Formally, the data structure must store a function f: U ->{0,1}^r that has specified values on the elements of a given set S, a subset of U, |S|=n, but may have any value on elements outside S. Minimal perfect hashing makes it possible to avoid storing the set S, but this induces a space overhead of Theta(n) bits in addition to the nr bits needed for function values. In this paper we show how to eliminate this overhead. Moreover, we show that for any k query time O(k) can be achieved using space that is within a factor 1+e^{-k} of optimal, asymptotically for large n. If we allow logarithmic evaluation time, the additive overhead can be reduced to O(log log n) bits whp. The time to construct the data structure is O(n), expected. A main technical ingredient is to utilize existing tight bounds on the probability of almost square random matrices with rows of low weight to have full row rank. In addition to direct constructions, we point out a close connection between retrieval structures and hash tables where keys are stored in an array and some kind of probing scheme is used. Further, we propose a general reduction that transfers the results on retrieval into analogous results on approximate membership, a problem traditionally addressed using Bloom filters. Again, we show how to eliminate the space overhead present in previously known methods, and get arbitrarily close to the lower bound. The evaluation procedures of our data structures are extremely simple (similar to a Bloom filter). For the results stated above we assume free access to fully random hash functions. 
However, we show how to justify this assumption using extra space o(n) to simulate full randomness on a RAM.<|reference_end|> | arxiv | @article{dietzfelbinger2008succinct,
title={Succinct Data Structures for Retrieval and Approximate Membership},
author={Martin Dietzfelbinger and Rasmus Pagh},
journal={arXiv preprint arXiv:0803.3693},
year={2008},
archivePrefix={arXiv},
eprint={0803.3693},
primaryClass={cs.DS cs.DB cs.IR}
} | dietzfelbinger2008succinct |
arxiv-3154 | 0803.3742 | Network protocol scalability via a topological Kadanoff transformation | <|reference_start|>Network protocol scalability via a topological Kadanoff transformation: A natural hierarchical framework for network topology abstraction is presented based on an analogy with the Kadanoff transformation and renormalisation group in theoretical physics. Some properties of the renormalisation group bear similarities to the scalability properties of network routing protocols (interactions). Central to our abstraction are two intimately connected and complementary path diversity units: simple cycles, and cycle adjacencies. A recursive network abstraction procedure is presented, together with an associated generic recursive routing protocol family that offers many desirable features.<|reference_end|> | arxiv | @article{constantinou2008network,
title={Network protocol scalability via a topological Kadanoff transformation},
author={C. C. Constantinou and A. S. Stepanenko},
journal={arXiv preprint arXiv:0803.3742},
year={2008},
archivePrefix={arXiv},
eprint={0803.3742},
primaryClass={cs.NI cond-mat.stat-mech}
} | constantinou2008network |
arxiv-3155 | 0803.3746 | Cluster Approach to the Domains Formation | <|reference_start|>Cluster Approach to the Domains Formation: As a rule, a quadratic functional depending on a great number of binary variables has a lot of local minima. One of approaches allowing one to find in averaged deeper local minima is aggregation of binary variables into larger blocks/domains. To minimize the functional one has to change the states of aggregated variables (domains). In the present publication we discuss methods of domains formation. It is shown that the best results are obtained when domains are formed by variables that are strongly connected with each other.<|reference_end|> | arxiv | @article{litinskii2008cluster,
title={Cluster Approach to the Domains Formation},
author={Leonid B. Litinskii},
journal={Optical Memory & Neural Networks (Information Optics), 2007,
v.16(3) pp.144-153},
year={2008},
archivePrefix={arXiv},
eprint={0803.3746},
primaryClass={cs.NE cs.DS}
} | litinskii2008cluster |
arxiv-3156 | 0803.3773 | Capacity of Gaussian MIMO Bidirectional Broadcast Channels | <|reference_start|>Capacity of Gaussian MIMO Bidirectional Broadcast Channels: We consider the broadcast phase of a three-node network, where a relay node establishes a bidirectional communication between two nodes using a spectrally efficient two-phase decode-and-forward protocol. In the first phase the two nodes transmit their messages to the relay node. Then the relay node decodes the messages and broadcasts a re-encoded composition of them in the second phase. We consider Gaussian MIMO channels and determine the capacity region for the second phase which we call the Gaussian MIMO bidirectional broadcast channel.<|reference_end|> | arxiv | @article{wyrembelski2008capacity,
title={Capacity of Gaussian MIMO Bidirectional Broadcast Channels},
author={Rafael F. Wyrembelski, Tobias J. Oechtering, Igor Bjelakovic, Clemens
Schnurr, Holger Boche},
journal={arXiv preprint arXiv:0803.3773},
year={2008},
doi={10.1109/ISIT.2008.4595053},
archivePrefix={arXiv},
eprint={0803.3773},
primaryClass={cs.IT math.IT}
} | wyrembelski2008capacity |
arxiv-3157 | 0803.3777 | Lower Bounds on the Minimum Pseudodistance for Linear Codes with $q$-ary PSK Modulation over AWGN | <|reference_start|>Lower Bounds on the Minimum Pseudodistance for Linear Codes with $q$-ary PSK Modulation over AWGN: We present lower bounds on the minimum pseudocodeword effective Euclidean distance (or minimum "pseudodistance") for coded modulation systems using linear codes with $q$-ary phase-shift keying (PSK) modulation over the additive white Gaussian noise (AWGN) channel. These bounds apply to both binary and nonbinary coded modulation systems which use direct modulation mapping of coded symbols. The minimum pseudodistance may serve as a first-order measure of error-correcting performance for both linear-programming and message-passing based receivers. In the case of a linear-programming based receiver, the minimum pseudodistance may be used to form an exact bound on the codeword error rate of the system.<|reference_end|> | arxiv | @article{skachek2008lower,
title={Lower Bounds on the Minimum Pseudodistance for Linear Codes with $q$-ary
PSK Modulation over AWGN},
author={Vitaly Skachek and Mark F. Flanagan},
journal={arXiv preprint arXiv:0803.3777},
year={2008},
archivePrefix={arXiv},
eprint={0803.3777},
primaryClass={cs.IT math.IT}
} | skachek2008lower |
arxiv-3158 | 0803.3781 | Fourier Spectra of Binomial APN Functions | <|reference_start|>Fourier Spectra of Binomial APN Functions: In this paper we compute the Fourier spectra of some recently discovered binomial APN functions. One consequence of this is the determination of the nonlinearity of the functions, which measures their resistance to linear cryptanalysis. Another consequence is that certain error-correcting codes related to these functions have the same weight distribution as the 2-error-correcting BCH code. Furthermore, for fields of odd degree, our results provide an alternative proof of the APN property of the functions.<|reference_end|> | arxiv | @article{bracken2008fourier,
title={Fourier Spectra of Binomial APN Functions},
author={Carl Bracken, Eimear Byrne, Nadya Markin and Gary McGuire},
journal={arXiv preprint arXiv:0803.3781},
year={2008},
archivePrefix={arXiv},
eprint={0803.3781},
primaryClass={cs.DM cs.IT math.IT}
} | bracken2008fourier |
arxiv-3159 | 0803.3796 | Approximating a Behavioural Pseudometric without Discount for Probabilistic Systems | <|reference_start|>Approximating a Behavioural Pseudometric without Discount for Probabilistic Systems: Desharnais, Gupta, Jagadeesan and Panangaden introduced a family of behavioural pseudometrics for probabilistic transition systems. These pseudometrics are a quantitative analogue of probabilistic bisimilarity. Distance zero captures probabilistic bisimilarity. Each pseudometric has a discount factor, a real number in the interval (0, 1]. The smaller the discount factor, the more the future is discounted. If the discount factor is one, then the future is not discounted at all. Desharnais et al. showed that the behavioural distances can be calculated up to any desired degree of accuracy if the discount factor is smaller than one. In this paper, we show that the distances can also be approximated if the future is not discounted. A key ingredient of our algorithm is Tarski's decision procedure for the first order theory over real closed fields. By exploiting the Kantorovich-Rubinstein duality theorem we can restrict to the existential fragment for which more efficient decision procedures exist.<|reference_end|> | arxiv | @article{van breugel2008approximating,
title={Approximating a Behavioural Pseudometric without Discount for
Probabilistic Systems},
author={Franck van Breugel, Babita Sharma, James Worrell},
journal={Logical Methods in Computer Science, Volume 4, Issue 2 (April 9,
2008) lmcs:822},
year={2008},
doi={10.2168/LMCS-4(2:2)2008},
archivePrefix={arXiv},
eprint={0803.3796},
primaryClass={cs.LO}
} | van breugel2008approximating |
arxiv-3160 | 0803.3812 | Preferred extensions as stable models | <|reference_start|>Preferred extensions as stable models: Given an argumentation framework AF, we introduce a mapping function that constructs a disjunctive logic program P, such that the preferred extensions of AF correspond to the stable models of P, after intersecting each stable model with the relevant atoms. The given mapping function is of polynomial size w.r.t. AF. In particular, we identify that there is a direct relationship between the minimal models of a propositional formula and the preferred extensions of an argumentation framework by working on representing the defeated arguments. Then we show how to infer the preferred extensions of an argumentation framework by using UNSAT algorithms and disjunctive stable model solvers. The relevance of this result is that we define a direct relationship between one of the most satisfactory argumentation semantics and one of the most successful approach of non-monotonic reasoning i.e., logic programming with the stable model semantics.<|reference_end|> | arxiv | @article{nieves2008preferred,
title={Preferred extensions as stable models},
author={Juan Carlos Nieves, Mauricio Osorio, Ulises Cort\'es},
journal={arXiv preprint arXiv:0803.3812},
year={2008},
archivePrefix={arXiv},
eprint={0803.3812},
primaryClass={cs.AI cs.SC}
} | nieves2008preferred |
arxiv-3161 | 0803.3816 | Approaching the Capacity of Wireless Networks through Distributed Interference Alignment | <|reference_start|>Approaching the Capacity of Wireless Networks through Distributed Interference Alignment: Recent results establish the optimality of interference alignment to approach the Shannon capacity of interference networks at high SNR. However, the extent to which interference can be aligned over a finite number of signalling dimensions remains unknown. Another important concern for interference alignment schemes is the requirement of global channel knowledge. In this work we provide examples of iterative algorithms that utilize the reciprocity of wireless networks to achieve interference alignment with only local channel knowledge at each node. These algorithms also provide numerical insights into the feasibility of interference alignment that are not yet available in theory.<|reference_end|> | arxiv | @article{gomadam2008approaching,
title={Approaching the Capacity of Wireless Networks through Distributed
Interference Alignment},
author={Krishna Gomadam, Viveck R. Cadambe, Syed A. Jafar},
journal={IEEE Transactions on Information Theory, Vol. 57, No. 6, June,
2011, Pages: 3309-3322},
year={2008},
doi={10.1109/GLOCOM.2008.ECP.817},
archivePrefix={arXiv},
eprint={0803.3816},
primaryClass={cs.IT math.IT}
} | gomadam2008approaching |
arxiv-3162 | 0803.3838 | Recorded Step Directional Mutation for Faster Convergence | <|reference_start|>Recorded Step Directional Mutation for Faster Convergence: Two meta-evolutionary optimization strategies described in this paper accelerate the convergence of evolutionary programming algorithms while still retaining much of their ability to deal with multi-modal problems. The strategies, called directional mutation and recorded step in this paper, can operate independently but together they greatly enhance the ability of evolutionary programming algorithms to deal with fitness landscapes characterized by long narrow valleys. The directional mutation aspect of this combined method uses correlated meta-mutation but does not introduce a full covariance matrix. These new methods are thus much more economical in terms of storage for problems with high dimensionality. Additionally, directional mutation is rotationally invariant which is a substantial advantage over self-adaptive methods which use a single variance per coordinate for problems where the natural orientation of the problem is not oriented along the axes.<|reference_end|> | arxiv | @article{dunning2008recorded,
title={Recorded Step Directional Mutation for Faster Convergence},
author={Ted Dunning},
journal={arXiv preprint arXiv:0803.3838},
year={2008},
archivePrefix={arXiv},
eprint={0803.3838},
primaryClass={cs.NE cs.LG}
} | dunning2008recorded |
arxiv-3163 | 0803.3850 | State Estimation Over Wireless Channels Using Multiple Sensors: Asymptotic Behaviour and Optimal Power Allocation | <|reference_start|>State Estimation Over Wireless Channels Using Multiple Sensors: Asymptotic Behaviour and Optimal Power Allocation: This paper considers state estimation of linear systems using analog amplify and forwarding with multiple sensors, for both multiple access and orthogonal access schemes. Optimal state estimation can be achieved at the fusion center using a time varying Kalman filter. We show that in many situations, the estimation error covariance decays at a rate of $1/M$ when the number of sensors $M$ is large. We consider optimal allocation of transmission powers that 1) minimizes the sum power usage subject to an error covariance constraint and 2) minimizes the error covariance subject to a sum power constraint. In the case of fading channels with channel state information the optimization problems are solved using a greedy approach, while for fading channels without channel state information but with channel statistics available a sub-optimal linear estimator is derived.<|reference_end|> | arxiv | @article{leong2008state,
title={State Estimation Over Wireless Channels Using Multiple Sensors:
Asymptotic Behaviour and Optimal Power Allocation},
author={Alex S. Leong, Subhrakanti Dey, and Jamie S. Evans},
journal={arXiv preprint arXiv:0803.3850},
year={2008},
archivePrefix={arXiv},
eprint={0803.3850},
primaryClass={cs.IT math.IT}
} | leong2008state |
arxiv-3164 | 0803.3880 | Asymptotically Optimum Universal One-Bit Watermarking for Gaussian Covertexts and Gaussian Attacks | <|reference_start|>Asymptotically Optimum Universal One-Bit Watermarking for Gaussian Covertexts and Gaussian Attacks: The problem of optimum watermark embedding and detection was addressed in a recent paper by Merhav and Sabbag, where the optimality criterion was the maximum false-negative error exponent subject to a guaranteed false-positive error exponent. In particular, Merhav and Sabbag derived universal asymptotically optimum embedding and detection rules under the assumption that the detector relies solely on second order joint empirical statistics of the received signal and the watermark. In the case of a Gaussian host signal and a Gaussian attack, however, closed-form expressions for the optimum embedding strategy and the false-negative error exponent were not obtained in that work. In this paper, we derive such expressions, again, under the universality assumption that neither the host variance nor the attack power are known to either the embedder or the detector. The optimum embedding rule turns out to be very simple and with an intuitively-appealing geometrical interpretation. The improvement with respect to existing sub-optimum schemes is demonstrated by displaying the optimum false-negative error exponent as a function of the guaranteed false-positive error exponent.<|reference_end|> | arxiv | @article{comesaña2008asymptotically,
title={Asymptotically Optimum Universal One-Bit Watermarking for Gaussian
Covertexts and Gaussian Attacks},
author={P. Comesa\~na, N. Merhav and M. Barni},
journal={arXiv preprint arXiv:0803.3880},
year={2008},
archivePrefix={arXiv},
eprint={0803.3880},
primaryClass={cs.IT math.IT}
} | comesaña2008asymptotically |
arxiv-3165 | 0803.3900 | A Component Based Heuristic Search method with Adaptive Perturbations for Hospital Personnel Scheduling | <|reference_start|>A Component Based Heuristic Search method with Adaptive Perturbations for Hospital Personnel Scheduling: Nurse rostering is a complex scheduling problem that affects hospital personnel on a daily basis all over the world. This paper presents a new component-based approach with adaptive perturbations, for a nurse scheduling problem arising at a major UK hospital. The main idea behind this technique is to decompose a schedule into its components (i.e. the allocated shift pattern of each nurse), and then mimic a natural evolutionary process on these components to iteratively deliver better schedules. The worthiness of all components in the schedule has to be continuously demonstrated in order for them to remain there. This demonstration employs a dynamic evaluation function which evaluates how well each component contributes towards the final objective. Two perturbation steps are then applied: the first perturbation eliminates a number of components that are deemed not worthy to stay in the current schedule; the second perturbation may also throw out, with a low level of probability, some worthy components. The eliminated components are replenished with new ones using a set of constructive heuristics using local optimality criteria. Computational results using 52 data instances demonstrate the applicability of the proposed approach in solving real-world problems.<|reference_end|> | arxiv | @article{li2008a,
title={A Component Based Heuristic Search method with Adaptive Perturbations
for Hospital Personnel Scheduling},
author={Jingpeng Li, Uwe Aickelin and Edmund Burke},
journal={Technical Report NOTTCS-TR-2007-4, University of Nottingham, UK,
2006},
year={2008},
archivePrefix={arXiv},
eprint={0803.3900},
primaryClass={cs.NE cs.CE}
} | li2008a |
arxiv-3166 | 0803.3905 | Introduction to Multi-Agent Simulation | <|reference_start|>Introduction to Multi-Agent Simulation: When designing systems that are complex, dynamic and stochastic in nature, simulation is generally recognised as one of the best design support technologies, and a valuable aid in the strategic and tactical decision making process. A simulation model consists of a set of rules that define how a system changes over time, given its current state. Unlike analytical models, a simulation model is not solved but is run and the changes of system states can be observed at any point in time. This provides an insight into system dynamics rather than just predicting the output of a system based on specific inputs. Simulation is not a decision making tool but a decision support tool, allowing better informed decisions to be made. Due to the complexity of the real world, a simulation model can only be an approximation of the target system. The essence of the art of simulation modelling is abstraction and simplification. Only those characteristics that are important for the study and analysis of the target system should be included in the simulation model.<|reference_end|> | arxiv | @article{siebers2008introduction,
title={Introduction to Multi-Agent Simulation},
author={Peer-Olaf Siebers and Uwe Aickelin},
journal={Encyclopaedia of Decision Making and Decision Support
Technologies, 554-564, 2008},
year={2008},
archivePrefix={arXiv},
eprint={0803.3905},
primaryClass={cs.NE cs.MA}
} | siebers2008introduction |
arxiv-3167 | 0803.3912 | Artificial Immune Systems Tutorial | <|reference_start|>Artificial Immune Systems Tutorial: The biological immune system is a robust, complex, adaptive system that defends the body from foreign pathogens. It is able to categorize all cells (or molecules) within the body as self-cells or non-self cells. It does this with the help of a distributed task force that has the intelligence to take action from a local and also a global perspective using its network of chemical messengers for communication. There are two major branches of the immune system. The innate immune system is an unchanging mechanism that detects and destroys certain invading organisms, whilst the adaptive immune system responds to previously unknown foreign cells and builds a response to them that can remain in the body over a long period of time. This remarkable information processing biological system has caught the attention of computer science in recent years. A novel computational intelligence technique, inspired by immunology, has emerged, called Artificial Immune Systems. Several concepts from the immune have been extracted and applied for solution to real world science and engineering problems. In this tutorial, we briefly describe the immune system metaphors that are relevant to existing Artificial Immune Systems methods. We will then show illustrative real-world problems suitable for Artificial Immune Systems and give a step-by-step algorithm walkthrough for one such problem. A comparison of the Artificial Immune Systems to other well-known algorithms, areas for future work, tips & tricks and a list of resources will round this tutorial off. It should be noted that as Artificial Immune Systems is still a young and evolving field, there is not yet a fixed algorithm template and hence actual implementations might differ somewhat from time to time and from those examples given here.<|reference_end|> | arxiv | @article{aickelin2008artificial,
title={Artificial Immune Systems Tutorial},
author={Uwe Aickelin and Dipankar Dasgupta},
journal={arXiv preprint arXiv:0803.3912},
year={2008},
archivePrefix={arXiv},
eprint={0803.3912},
primaryClass={cs.NE cs.AI cs.MA}
} | aickelin2008artificial |
arxiv-3168 | 0803.3946 | On the `Semantics' of Differential Privacy: A Bayesian Formulation | <|reference_start|>On the `Semantics' of Differential Privacy: A Bayesian Formulation: Differential privacy is a definition of "privacy'" for algorithms that analyze and publish information about statistical databases. It is often claimed that differential privacy provides guarantees against adversaries with arbitrary side information. In this paper, we provide a precise formulation of these guarantees in terms of the inferences drawn by a Bayesian adversary. We show that this formulation is satisfied by both "vanilla" differential privacy as well as a relaxation known as (epsilon,delta)-differential privacy. Our formulation follows the ideas originally due to Dwork and McSherry [Dwork 2006]. This paper is, to our knowledge, the first place such a formulation appears explicitly. The analysis of the relaxed definition is new to this paper, and provides some concrete guidance for setting parameters when using (epsilon,delta)-differential privacy.<|reference_end|> | arxiv | @article{kasiviswanathan2008on,
title={On the `Semantics' of Differential Privacy: A Bayesian Formulation},
author={Shiva Prasad Kasiviswanathan and Adam Smith},
journal={Journal of Privacy and Confidentiality, 6 (1), 2014},
year={2008},
doi={10.29012/jpc.v6i1.634},
archivePrefix={arXiv},
eprint={0803.3946},
primaryClass={cs.CR cs.DB}
} | kasiviswanathan2008on |
arxiv-3169 | 0803.3969 | Cancellation Meadows: a Generic Basis Theorem and Some Applications | <|reference_start|>Cancellation Meadows: a Generic Basis Theorem and Some Applications: Let Q_0 denote the rational numbers expanded to a "meadow", that is, after taking its zero-totalized form (0^{-1}=0) as the preferred interpretation. In this paper we consider "cancellation meadows", i.e., meadows without proper zero divisors, such as $Q_0$ and prove a generic completeness result. We apply this result to cancellation meadows expanded with differentiation operators, the sign function, and with floor, ceiling and a signed variant of the square root, respectively. We give an equational axiomatization of these operators and thus obtain a finite basis for various expanded cancellation meadows.<|reference_end|> | arxiv | @article{bergstra2008cancellation,
title={Cancellation Meadows: a Generic Basis Theorem and Some Applications},
author={Jan A. Bergstra and Inge Bethke and Alban Ponse},
journal={The Computer Journal 56(1): 3-14, 2013},
year={2008},
doi={10.1093/comjnl/bxs028},
archivePrefix={arXiv},
eprint={0803.3969},
primaryClass={math.RA cs.LO}
} | bergstra2008cancellation |
arxiv-3170 | 0803.3976 | On Ritt's decomposition Theorem in the case of finite fields | <|reference_start|>On Ritt's decomposition Theorem in the case of finite fields: A classical theorem by Ritt states that all the complete decomposition chains of a univariate polynomial satisfying a certain tameness condition have the same length. In this paper we present our conclusions about the generalization of these theorem in the case of finite coefficient fields when the tameness condition is dropped.<|reference_end|> | arxiv | @article{gutierrez2008on,
title={On Ritt's decomposition Theorem in the case of finite fields},
author={Jaime Gutierrez and David Sevilla},
journal={Finite Fields Appl. 12 (2006), no. 3, 403--412. MR2229324
(2007a:13024)},
year={2008},
doi={10.1016/j.ffa.2005.08.004},
archivePrefix={arXiv},
eprint={0803.3976},
primaryClass={cs.SC math.AC}
} | gutierrez2008on |
arxiv-3171 | 0803.4018 | Human dynamics revealed through Web analytics | <|reference_start|>Human dynamics revealed through Web analytics: When the World Wide Web was first conceived as a way to facilitate the sharing of scientific information at the CERN (European Center for Nuclear Research) few could have imagined the role it would come to play in the following decades. Since then, the increasing ubiquity of Internet access and the frequency with which people interact with it raise the possibility of using the Web to better observe, understand, and monitor several aspects of human social behavior. Web sites with large numbers of frequently returning users are ideal for this task. If these sites belong to companies or universities, their usage patterns can furnish information about the working habits of entire populations. In this work, we analyze the properly anonymized logs detailing the access history to Emory University's Web site. Emory is a medium size university located in Atlanta, Georgia. We find interesting structure in the activity patterns of the domain and study in a systematic way the main forces behind the dynamics of the traffic. In particular, we show that both linear preferential linking and priority based queuing are essential ingredients to understand the way users navigate the Web.<|reference_end|> | arxiv | @article{goncalves2008human,
title={Human dynamics revealed through Web analytics},
author={Bruno Goncalves and Jose J. Ramasco},
journal={Physical Review E 78, 026123 (2008)},
year={2008},
doi={10.1103/PhysRevE.78.026123},
archivePrefix={arXiv},
eprint={0803.4018},
primaryClass={cs.HC cond-mat.stat-mech cs.CY physics.soc-ph}
} | goncalves2008human |
arxiv-3172 | 0803.4025 | Structure and Interpretation of Computer Programs | <|reference_start|>Structure and Interpretation of Computer Programs: Call graphs depict the static, caller-callee relation between "functions" in a program. With most source/target languages supporting functions as the primitive unit of composition, call graphs naturally form the fundamental control flow representation available to understand/develop software. They are also the substrate on which various interprocedural analyses are performed and are integral part of program comprehension/testing. Given their universality and usefulness, it is imperative to ask if call graphs exhibit any intrinsic graph theoretic features -- across versions, program domains and source languages. This work is an attempt to answer these questions: we present and investigate a set of meaningful graph measures that help us understand call graphs better; we establish how these measures correlate, if any, across different languages and program domains; we also assess the overall, language independent software quality by suitably interpreting these measures.<|reference_end|> | arxiv | @article{narayan2008structure,
title={Structure and Interpretation of Computer Programs},
author={Ganesh M. Narayan and K. Gopinath and V. Sridhar},
journal={2nd IEEE International Symposium on Theoretical Aspects of
Software Engineering, 2008, Nanjing, China},
year={2008},
doi={10.1109/TASE.2008.40},
archivePrefix={arXiv},
eprint={0803.4025},
primaryClass={cs.SE cs.PL}
} | narayan2008structure |
arxiv-3173 | 0803.4026 | High-dimensional analysis of semidefinite relaxations for sparse principal components | <|reference_start|>High-dimensional analysis of semidefinite relaxations for sparse principal components: Principal component analysis (PCA) is a classical method for dimensionality reduction based on extracting the dominant eigenvectors of the sample covariance matrix. However, PCA is well known to behave poorly in the ``large $p$, small $n$'' setting, in which the problem dimension $p$ is comparable to or larger than the sample size $n$. This paper studies PCA in this high-dimensional regime, but under the additional assumption that the maximal eigenvector is sparse, say, with at most $k$ nonzero components. We consider a spiked covariance model in which a base matrix is perturbed by adding a $k$-sparse maximal eigenvector, and we analyze two computationally tractable methods for recovering the support set of this maximal eigenvector, as follows: (a) a simple diagonal thresholding method, which transitions from success to failure as a function of the rescaled sample size $\theta_{\mathrm{dia}}(n,p,k)=n/[k^2\log(p-k)]$; and (b) a more sophisticated semidefinite programming (SDP) relaxation, which succeeds once the rescaled sample size $\theta_{\mathrm{sdp}}(n,p,k)=n/[k\log(p-k)]$ is larger than a critical threshold. In addition, we prove that no method, including the best method which has exponential-time complexity, can succeed in recovering the support if the order parameter $\theta_{\mathrm{sdp}}(n,p,k)$ is below a threshold. Our results thus highlight an interesting trade-off between computational and statistical efficiency in high-dimensional inference.<|reference_end|> | arxiv | @article{amini2008high-dimensional,
title={High-dimensional analysis of semidefinite relaxations for sparse
principal components},
author={Arash A. Amini and Martin J. Wainwright},
journal={Annals of Statistics 2009, Vol. 37, No. 5B, 2877-2921},
year={2008},
doi={10.1214/08-AOS664},
number={IMS-AOS-AOS664},
archivePrefix={arXiv},
eprint={0803.4026},
primaryClass={math.ST cs.IT math.IT stat.TH}
} | amini2008high-dimensional |
arxiv-3174 | 0803.4030 | Learning Sequences | <|reference_start|>Learning Sequences: We describe the algorithms used by the ALEKS computer learning system for manipulating combinatorial descriptions of human learners' states of knowledge, generating all states that are possible according to a description of a learning space in terms of a partial order, and using Bayesian statistics to determine the most likely state of a student. As we describe, a representation of a knowledge space using learning sequences (basic words of an antimatroid) allows more general learning spaces to be implemented with similar algorithmic complexity. We show how to define a learning space from a set of learning sequences, find a set of learning sequences that concisely represents a given learning space, generate all states of a learning space represented in this way, and integrate this state generation procedure into a knowledge assessment algorithm. We also describe some related theoretical results concerning projections of learning spaces, decomposition and dimension of learning spaces, and algebraic representation of learning spaces.<|reference_end|> | arxiv | @article{eppstein2008learning,
title={Learning Sequences},
author={David Eppstein},
journal={arXiv preprint arXiv:0803.4030},
year={2008},
archivePrefix={arXiv},
eprint={0803.4030},
primaryClass={cs.DM}
} | eppstein2008learning |
arxiv-3175 | 0803.4049 | Stateless and Delivery Guaranteed Geometric Routing on Virtual Coordinate System | <|reference_start|>Stateless and Delivery Guaranteed Geometric Routing on Virtual Coordinate System: Stateless geographic routing provides relatively good performance at a fixed overhead, which is typically much lower than conventional routing protocols such as AODV. However, the performance of geographic routing is impacted by physical voids, and localization errors. Accordingly, virtual coordinate systems (VCS) were proposed as an alternative approach that is resilient to localization errors and that naturally routes around physical voids. However, VCS also faces virtual anomalies, causing their performance to trail geographic routing. In existing VCS routing protocols, there is a lack of an effective stateless and delivery guaranteed complementary routing algorithm that can be used to traverse voids. Most proposed solutions use variants of flooding or blind searching when a void is encountered. In this paper, we propose a spanning-path virtual coordinate system which can be used as a complete routing algorithm or as the complementary algorithm to greedy forwarding that is invoked when voids are encountered. With this approach, and for the first time, we demonstrate a stateless and delivery guaranteed geometric routing algorithm on VCS. When used in conjunction with our previously proposed aligned virtual coordinate system (AVCS), it out-performs not only all geometric routing protocols on VCS, but also geographic routing with accurate location information.<|reference_end|> | arxiv | @article{liu2008stateless,
title={Stateless and Delivery Guaranteed Geometric Routing on Virtual
Coordinate System},
author={Ke Liu and Nael Abu-Ghazaleh},
journal={arXiv preprint arXiv:0803.4049},
year={2008},
archivePrefix={arXiv},
eprint={0803.4049},
primaryClass={cs.NI cs.DC}
} | liu2008stateless |
arxiv-3176 | 0803.4074 | Reflective visualization and verbalization of unconscious preference | <|reference_start|>Reflective visualization and verbalization of unconscious preference: A new method is presented, that can help a person become aware of his or her unconscious preferences, and convey them to others in the form of verbal explanation. The method combines the concepts of reflection, visualization, and verbalization. The method was tested in an experiment where the unconscious preferences of the subjects for various artworks were investigated. In the experiment, two lessons were learned. The first is that it helps the subjects become aware of their unconscious preferences to verbalize weak preferences as compared with strong preferences through discussion over preference diagrams. The second is that it is effective to introduce an adjustable factor into visualization to adapt to the differences in the subjects and to foster their mutual understanding.<|reference_end|> | arxiv | @article{maeno2008reflective,
title={Reflective visualization and verbalization of unconscious preference},
author={Yoshiharu Maeno and Yukio Ohsawa},
journal={International Journal of Advanced Intelligence Paradigms Vol.2,
pp.125-139 (2010)},
year={2008},
doi={10.1504/IJAIP.2010.030531},
archivePrefix={arXiv},
eprint={0803.4074},
primaryClass={cs.AI}
} | maeno2008reflective |
arxiv-3177 | 0803.4096 | On the number of $k$-cycles in the assignment problem for random matrices | <|reference_start|>On the number of $k$-cycles in the assignment problem for random matrices: We continue the study of the assignment problem for a random cost matrix. We analyse the number of $k$-cycles for the solution and their dependence on the symmetry of the random matrix. We observe that for a symmetric matrix one and two-cycles are dominant in the optimal solution. In the antisymmetric case the situation is the opposite and the one and two-cycles are suppressed. We solve the model for a pure random matrix (without correlations between its entries) and give analytic arguments to explain the numerical results in the symmetric and antisymmetric case. We show that the results can be explained to great accuracy by a simple ansatz that connects the expected number of $k$-cycles to that of one and two cycles.<|reference_end|> | arxiv | @article{esteve2008on,
title={On the number of $k$-cycles in the assignment problem for random
matrices},
author={J. G. Esteve and Fernando Falceto},
journal={arXiv preprint arXiv:0803.4096},
year={2008},
doi={10.1088/1742-5468/2008/03/P03019},
archivePrefix={arXiv},
eprint={0803.4096},
primaryClass={cs.DM}
} | esteve2008on |
arxiv-3178 | 0803.4197 | Optimization of Enzymatic Biochemical Logic for Noise Reduction and Scalability: How Many Biocomputing Gates Can Be Interconnected in a Circuit? | <|reference_start|>Optimization of Enzymatic Biochemical Logic for Noise Reduction and Scalability: How Many Biocomputing Gates Can Be Interconnected in a Circuit?: We report an experimental evaluation of the "input-output surface" for a biochemical AND gate. The obtained data are modeled within the rate-equation approach, with the aim to map out the gate function and cast it in the language of logic variables appropriate for analysis of Boolean logic for scalability. In order to minimize "analog" noise, we consider a theoretical approach for determining an optimal set for the process parameters to minimize "analog" noise amplification for gate concatenation. We establish that under optimized conditions, presently studied biochemical gates can be concatenated for up to order 10 processing steps. Beyond that, new paradigms for avoiding noise build-up will have to be developed. We offer a general discussion of the ideas and possible future challenges for both experimental and theoretical research for advancing scalable biochemical computing.<|reference_end|> | arxiv | @article{privman2008optimization,
title={Optimization of Enzymatic Biochemical Logic for Noise Reduction and
Scalability: How Many Biocomputing Gates Can Be Interconnected in a Circuit?},
author={V. Privman and G. Strack and D. Solenov and M. Pita and E. Katz},
journal={J. Phys. Chem. B 112, 11777-11784 (2008)},
year={2008},
doi={10.1021/jp802673q},
archivePrefix={arXiv},
eprint={0803.4197},
primaryClass={q-bio.MN cond-mat.other cond-mat.soft cs.CC q-bio.BM q-bio.OT q-bio.QM quant-ph}
} | privman2008optimization |
arxiv-3179 | 0803.4206 | Product theorems via semidefinite programming | <|reference_start|>Product theorems via semidefinite programming: The tendency of semidefinite programs to compose perfectly under product has been exploited many times in complexity theory: for example, by Lovasz to determine the Shannon capacity of the pentagon; to show a direct sum theorem for non-deterministic communication complexity and direct product theorems for discrepancy; and in interactive proof systems to show parallel repetition theorems for restricted classes of games. Despite all these examples of product theorems--some going back nearly thirty years--it was only recently that Mittal and Szegedy began to develop a general theory to explain when and why semidefinite programs behave perfectly under product. This theory captured many examples in the literature, but there were also some notable exceptions which it could not explain--namely, an early parallel repetition result of Feige and Lovasz, and a direct product theorem for the discrepancy method of communication complexity by Lee, Shraibman, and Spalek. We extend the theory of Mittal and Szegedy to explain these cases as well. Indeed, to the best of our knowledge, our theory captures all examples of semidefinite product theorems in the literature.<|reference_end|> | arxiv | @article{lee2008product,
title={Product theorems via semidefinite programming},
author={Troy Lee and Rajat Mittal},
journal={arXiv preprint arXiv:0803.4206},
year={2008},
archivePrefix={arXiv},
eprint={0803.4206},
primaryClass={cs.CC}
} | lee2008product |
arxiv-3180 | 0803.4223 | Unified storage systems for distributed Tier-2 centres | <|reference_start|>Unified storage systems for distributed Tier-2 centres: The start of data taking at the Large Hadron Collider will herald a new era in data volumes and distributed processing in particle physics. Data volumes of hundreds of Terabytes will be shipped to Tier-2 centres for analysis by the LHC experiments using the Worldwide LHC Computing Grid (WLCG). In many countries Tier-2 centres are distributed between a number of institutes, e.g., the geographically spread Tier-2s of GridPP in the UK. This presents a number of challenges for experiments to utilise these centres efficaciously, as CPU and storage resources may be sub-divided and exposed in smaller units than the experiment would ideally want to work with. In addition, unhelpful mismatches between storage and CPU at the individual centres may be seen, which make efficient exploitation of a Tier-2's resources difficult. One method of addressing this is to unify the storage across a distributed Tier-2, presenting the centres' aggregated storage as a single system. This greatly simplifies data management for the VO, which then can access a greater amount of data across the Tier-2. However, such an approach will lead to scenarios where analysis jobs on one site's batch system must access data hosted on another site. We investigate this situation using the Glasgow and Edinburgh clusters, which are part of the ScotGrid distributed Tier-2. In particular we look at how to mitigate the problems associated with ``distant'' data access and discuss the security implications of having LAN access protocols traverse the WAN between centres.<|reference_end|> | arxiv | @article{cowan2008unified,
title={Unified storage systems for distributed Tier-2 centres},
author={Greig A. Cowan and Graeme A. Stewart and Andrew Elwell},
journal={J.Phys.Conf.Ser.119:062027,2008},
year={2008},
doi={10.1088/1742-6596/119/6/062027},
archivePrefix={arXiv},
eprint={0803.4223},
primaryClass={cs.DC cs.NI}
} | cowan2008unified |
arxiv-3181 | 0803.4240 | Neutral Fitness Landscape in the Cellular Automata Majority Problem | <|reference_start|>Neutral Fitness Landscape in the Cellular Automata Majority Problem: We study in detail the fitness landscape of a difficult cellular automata computational task: the majority problem. Our results show why this problem landscape is so hard to search, and we quantify the large degree of neutrality found in various ways. We show that a particular subspace of the solution space, called the "Olympus", is where good solutions concentrate, and give measures to quantitatively characterize this subspace.<|reference_end|> | arxiv | @article{verel2008neutral,
title={Neutral Fitness Landscape in the Cellular Automata Majority Problem},
author={S{\'e}bastien Verel (I3S) and Philippe Collard (I3S) and Marco
Tomassini (ISI) and Leonardo Vanneschi (ISI)},
journal={In ACRI 2006 - 7th International Conference on Cellular Automata
for Research and Industry, France (2006)},
year={2008},
archivePrefix={arXiv},
eprint={0803.4240},
primaryClass={cs.NE}
} | verel2008neutral |
arxiv-3182 | 0803.4241 | Evolving Dynamic Change and Exchange of Genotype Encoding in Genetic Algorithms for Difficult Optimization Problems | <|reference_start|>Evolving Dynamic Change and Exchange of Genotype Encoding in Genetic Algorithms for Difficult Optimization Problems: The application of genetic algorithms (GAs) to many optimization problems in organizations often results in good performance and high quality solutions. For successful and efficient use of GAs, it is not enough to simply apply simple GAs (SGAs). In addition, it is necessary to find a proper representation for the problem and to develop appropriate search operators that fit well to the properties of the genotype encoding. The representation must at least be able to encode all possible solutions of an optimization problem, and genetic operators such as crossover and mutation should be applicable to it. In this paper, serial alternation strategies between two codings are formulated in the framework of dynamic change of genotype encoding in GAs for function optimization. Likewise, a new variant of GAs for difficult optimization problems denoted {\it Split-and-Merge} GA (SM-GA) is developed using a parallel implementation of an SGA and evolving a dynamic exchange of individual representation in the context of Dual Coding concept. Numerical experiments show that the evolved SM-GA significantly outperforms an SGA with static single coding.<|reference_end|> | arxiv | @article{bercachi2008evolving,
title={Evolving Dynamic Change and Exchange of Genotype Encoding in Genetic
Algorithms for Difficult Optimization Problems},
author={Maroun Bercachi (I3S) and Philippe Collard (I3S) and Manuel Clergue
(I3S) and S{\'e}bastien Verel (I3S)},
journal={In Proceedings of the IEEE Congress on Evolutionary Computation
(CEC 2007), Singapore (2007)},
year={2008},
archivePrefix={arXiv},
eprint={0803.4241},
primaryClass={cs.NE}
} | bercachi2008evolving |
arxiv-3183 | 0803.4248 | From Cells to Islands: An unified Model of Cellular Parallel Genetic Algorithms | <|reference_start|>From Cells to Islands: An unified Model of Cellular Parallel Genetic Algorithms: This paper presents the Anisotropic selection scheme for cellular Genetic Algorithms (cGA). This new scheme allows to enhance diversity and to control the selective pressure which are two important issues in Genetic Algorithms, especially when trying to solve difficult optimization problems. Varying the anisotropic degree of selection allows swapping from a cellular to an island model of parallel genetic algorithm. Measures of performances and diversity have been performed on one well-known problem: the Quadratic Assignment Problem which is known to be difficult to optimize. Experiences show that, tuning the anisotropic degree, we can find the accurate trade-off between cGA and island models to optimize performances of parallel evolutionary algorithms. This trade-off can be interpreted as the suitable degree of migration among subpopulations in a parallel Genetic Algorithm.<|reference_end|> | arxiv | @article{simoncini2008from,
title={From Cells to Islands: An unified Model of Cellular Parallel Genetic
Algorithms},
author={David Simoncini (I3S) and Philippe Collard (I3S) and S{\'e}bastien
Verel (I3S) and Manuel Clergue (I3S)},
journal={In ACRI 2006 - 7th International Conference on Cellular Automata
for Research and Industry, Perpignan, France (2006)},
year={2008},
archivePrefix={arXiv},
eprint={0803.4248},
primaryClass={cs.NE}
} | simoncini2008from |
arxiv-3184 | 0803.4253 | Combinatorial Explorations in Su-Doku | <|reference_start|>Combinatorial Explorations in Su-Doku: Su-Doku, a popular combinatorial puzzle, provides an excellent testbench for heuristic explorations. Several interesting questions arise from its deceptively simple set of rules. How many distinct Su-Doku grids are there? How to find a solution to a Su-Doku puzzle? Is there a unique solution to a given Su-Doku puzzle? What is a good estimation of a puzzle's difficulty? What is the minimum puzzle size (the number of "givens")? This paper explores how these questions are related to the well-known alldifferent constraint which emerges in a wide variety of Constraint Satisfaction Problems (CSP) and compares various algorithmic approaches based on different formulations of Su-Doku.<|reference_end|> | arxiv | @article{chauvet2008combinatorial,
title={Combinatorial Explorations in Su-Doku},
author={Jean-Marie Chauvet},
journal={arXiv preprint arXiv:0803.4253},
year={2008},
archivePrefix={arXiv},
eprint={0803.4253},
primaryClass={cs.AI cs.CC}
} | chauvet2008combinatorial |
arxiv-3185 | 0803.4260 | On Two Dimensional Orthogonal Knapsack Problem | <|reference_start|>On Two Dimensional Orthogonal Knapsack Problem: In this paper, we study the following knapsack problem: Given a list of squares with profits, we are requested to pack a sublist of them into a rectangular bin (not a unit square bin) to make profits in the bin as large as possible. We first observe there is a Polynomial Time Approximation Scheme (PTAS) for the problem of packing weighted squares into rectangular bins with large resources, then apply the PTAS to the problem of packing squares with profits into a rectangular bin and get a $\frac65+\epsilon$ approximation algorithm.<|reference_end|> | arxiv | @article{han2008on,
title={On Two Dimensional Orthogonal Knapsack Problem},
author={Xin Han and Kazuo Iwama and Guochuan Zhang},
journal={arXiv preprint arXiv:0803.4260},
year={2008},
archivePrefix={arXiv},
eprint={0803.4260},
primaryClass={cs.DS cs.CC}
} | han2008on |
arxiv-3186 | 0803.4261 | Common Permutation Problem | <|reference_start|>Common Permutation Problem: In this paper we show that the following problem is NP-complete: Given an alphabet $\Sigma$ and two strings over $\Sigma$, the question is whether there exists a permutation of $\Sigma$ which is a subsequence of both of the given strings.<|reference_end|> | arxiv | @article{dvorský2008common,
title={Common Permutation Problem},
author={Mari{\'a}n Dvorsk{\'y}},
journal={arXiv preprint arXiv:0803.4261},
year={2008},
archivePrefix={arXiv},
eprint={0803.4261},
primaryClass={cs.CC}
} | dvorský2008common |
arxiv-3187 | 0803.4308 | Discrete Frequency Selection of Frame-Based Stochastic Real-Time Tasks | <|reference_start|>Discrete Frequency Selection of Frame-Based Stochastic Real-Time Tasks: Energy-efficient real-time task scheduling has been actively explored in the past decade. Different from the past work, this paper considers schedulability conditions for stochastic real-time tasks. A schedulability condition is first presented for frame-based stochastic real-time tasks, and several algorithms are also examined to check the schedulability of a given strategy. An approach is then proposed based on the schedulability condition to adapt a continuous-speed-based method to a discrete-speed system. The approach is able to stay as close as possible to the continuous-speed-based method, but still guaranteeing the schedulability. It is shown by simulations that the energy saving can be more than 20% for some system configurations<|reference_end|> | arxiv | @article{berten2008discrete,
title={Discrete Frequency Selection of Frame-Based Stochastic Real-Time Tasks},
author={Vandy Berten and Chi-Ju Chang and Tei-Wei Kuo},
journal={arXiv preprint arXiv:0803.4308},
year={2008},
archivePrefix={arXiv},
eprint={0803.4308},
primaryClass={cs.OS}
} | berten2008discrete |
arxiv-3188 | 0803.4311 | Locator/identifier split using the data link layer | <|reference_start|>Locator/identifier split using the data link layer: The locator/identifier split approach assumes separating functions of a locator (i.e. topology--dependent attachment point address) and identifier (topology-independent unique identifier) currently both served by an IP address. This work is an attempt to redefine semantics of MAC address to make it a pure layer-2 locator instead of a pure globally-unique identifier. Such an exercise might be interesting from the standpoint of Ethernet scaling and Metro Ethernet technologies. From the global routing perspective, introduction of multihoming, traffic engineering and failover at the 2nd layer may reduce pressure on the 3rd layer.<|reference_end|> | arxiv | @article{grishchenko2008locator/identifier,
title={Locator/identifier split using the data link layer},
author={Victor Grishchenko},
journal={arXiv preprint arXiv:0803.4311},
year={2008},
archivePrefix={arXiv},
eprint={0803.4311},
primaryClass={cs.NI}
} | grishchenko2008locator/identifier |
arxiv-3189 | 0803.4321 | How good is the Warnsdorff's knight's tour heuristic? | <|reference_start|>How good is the Warnsdorff's knight's tour heuristic?: Warnsdorff's rule for a knight's tour is a heuristic, i.e., it is a rule that does not produce the desired result all the time. It is a classic example of a greedy method in that it is based on a series of locally optimal choices. This note describes an analysis that determines how good the heuristic is on an 8 X 8 chessboard. The order of appearance in a permutation of the eight possible moves a knight can make determines the path the knight takes. A computer analysis is done of the 8! permutations of the order of a knight's moves in Warnsdorff's rule on an 8 X 8 chessboard for tours starting on each of the 64 squares. Whenever a tie occurs for moves to vertices that have the lowest degree, the first of these vertices encountered in the programming loop is chosen. The number of permutations of the 8! total that yield non-Hamiltonian paths is tallied. This will be the same value if we consistently choose the last of these vertices encountered.<|reference_end|> | arxiv | @article{marateck2008how,
title={How good is the Warnsdorff's knight's tour heuristic?},
author={Samuel L. Marateck},
journal={arXiv preprint arXiv:0803.4321},
year={2008},
archivePrefix={arXiv},
eprint={0803.4321},
primaryClass={cs.DM}
} | marateck2008how |
arxiv-3190 | 0803.4332 | On Sequential Estimation and Prediction for Discrete Time Series | <|reference_start|>On Sequential Estimation and Prediction for Discrete Time Series: The problem of extracting as much information as possible from a sequence of observations of a stationary stochastic process $X_0,X_1,...X_n$ has been considered by many authors from different points of view. It has long been known through the work of D. Bailey that no universal estimator for $\textbf{P}(X_{n+1}|X_0,X_1,...X_n)$ can be found which converges to the true estimator almost surely. Despite this result, for restricted classes of processes, or for sequences of estimators along stopping times, universal estimators can be found. We present here a survey of some of the recent work that has been done along these lines.<|reference_end|> | arxiv | @article{morvai2008on,
title={On Sequential Estimation and Prediction for Discrete Time Series},
author={G. Morvai and B. Weiss},
journal={Stochastics and Dynamics, Vol. 7, No. 4. pp. 417-437, 2007},
year={2008},
archivePrefix={arXiv},
eprint={0803.4332},
primaryClass={math.PR cs.IT math.IT}
} | morvai2008on |
arxiv-3191 | 0803.4354 | A O(n^8) X O(n^7) Linear Programming Model of the Traveling Salesman Problem | <|reference_start|>A O(n^8) X O(n^7) Linear Programming Model of the Traveling Salesman Problem: In this paper, we present a new linear programming (LP) formulation of the Traveling Salesman Problem (TSP). The proposed model has O(n^8) variables and O(n^7) constraints, where n is the number of cities. Our numerical experimentation shows that computational times for the proposed linear program are several orders of magnitude smaller than those for the existing model [3].<|reference_end|> | arxiv | @article{diaby2008a,
title={A O(n^8) X O(n^7) Linear Programming Model of the Traveling Salesman
Problem},
author={Moustapha Diaby},
journal={arXiv preprint arXiv:0803.4354},
year={2008},
archivePrefix={arXiv},
eprint={0803.4354},
primaryClass={cs.DM}
} | diaby2008a |
arxiv-3192 | 0803.4355 | Grammar-Based Random Walkers in Semantic Networks | <|reference_start|>Grammar-Based Random Walkers in Semantic Networks: Semantic networks qualify the meaning of an edge relating any two vertices. Determining which vertices are most "central" in a semantic network is difficult because one relationship type may be deemed subjectively more important than another. For this reason, research into semantic network metrics has focused primarily on context-based rankings (i.e. user prescribed contexts). Moreover, many of the current semantic network metrics rank semantic associations (i.e. directed paths between two vertices) and not the vertices themselves. This article presents a framework for calculating semantically meaningful primary eigenvector-based metrics such as eigenvector centrality and PageRank in semantic networks using a modified version of the random walker model of Markov chain analysis. Random walkers, in the context of this article, are constrained by a grammar, where the grammar is a user defined data structure that determines the meaning of the final vertex ranking. The ideas in this article are presented within the context of the Resource Description Framework (RDF) of the Semantic Web initiative.<|reference_end|> | arxiv | @article{rodriguez2008grammar-based,
title={Grammar-Based Random Walkers in Semantic Networks},
author={Marko A. Rodriguez},
journal={Rodriguez, M.A., "Grammar-Based Random Walkers in Semantic
Networks", Knowledge-Based Systems, volume 21, issue 7, pages 727-739, ISSN:
0950-7051, Elsevier, October 2008},
year={2008},
doi={10.1016/j.knosys.2008.03.030},
number={LA-UR-06-7791},
archivePrefix={arXiv},
eprint={0803.4355},
primaryClass={cs.AI cs.DS}
} | rodriguez2008grammar-based |
arxiv-3193 | 0803.4370 | GRID Architecture through a Public Cluster | <|reference_start|>GRID Architecture through a Public Cluster: An architecture to enable some blocks consisting of several nodes in a public cluster connected to different grid collaborations is introduced. It is realized by inserting a web-service in addition to the standard Globus Toolkit. The new web-service performs two main tasks : authenticate the digital certificate contained in an incoming requests and forward it to the designated block. The appropriate block is mapped with the username of the block's owner contained in the digital certificate. It is argued that this algorithm opens an opportunity for any blocks in a public cluster to join various global grids.<|reference_end|> | arxiv | @article{akbar2008grid,
title={GRID Architecture through a Public Cluster},
author={Z. Akbar and L.T. Handoko},
journal={arXiv preprint arXiv:0803.4370},
year={2008},
doi={10.1109/ICCCE.2008.4580761},
number={FISIKALIPI-08032},
archivePrefix={arXiv},
eprint={0803.4370},
primaryClass={cs.DC cs.NI}
} | akbar2008grid |
arxiv-3194 | 0803.4373 | The quantum moment problem and bounds on entangled multi-prover games | <|reference_start|>The quantum moment problem and bounds on entangled multi-prover games: We study the quantum moment problem: Given a conditional probability distribution together with some polynomial constraints, does there exist a quantum state rho and a collection of measurement operators such that (i) the probability of obtaining a particular outcome when a particular measurement is performed on rho is specified by the conditional probability distribution, and (ii) the measurement operators satisfy the constraints. For example, the constraints might specify that some measurement operators must commute. We show that if an instance of the quantum moment problem is unsatisfiable, then there exists a certificate of a particular form proving this. Our proof is based on a recent result in algebraic geometry, the noncommutative Positivstellensatz of Helton and McCullough [Trans. Amer. Math. Soc., 356(9):3721, 2004]. A special case of the quantum moment problem is to compute the value of one-round multi-prover games with entangled provers. Under the conjecture that the provers need only share states in finite-dimensional Hilbert spaces, we prove that a hierarchy of semidefinite programs similar to the one given by Navascues, Pironio and Acin [Phys. Rev. Lett., 98:010401, 2007] converges to the entangled value of the game. It follows that the class of languages recognized by a multi-prover interactive proof system where the provers share entanglement is recursive.<|reference_end|> | arxiv | @article{doherty2008the,
title={The quantum moment problem and bounds on entangled multi-prover games},
author={Andrew C. Doherty (University of Queensland) and Yeong-Cherng Liang
(University of Queensland) and Ben Toner (CWI) and Stephanie Wehner (Caltech)},
journal={Proceedings of IEEE Conference on Computational Complexity 2008,
pages 199--210},
year={2008},
archivePrefix={arXiv},
eprint={0803.4373},
primaryClass={quant-ph cs.CC}
} | doherty2008the |
arxiv-3195 | 0803.4479 | Unconditionally secure computers, algorithms and hardware, such as memories, processors, keyboards, flash and hard drives | <|reference_start|>Unconditionally secure computers, algorithms and hardware, such as memories, processors, keyboards, flash and hard drives: In the case of the need of extraordinary security, Kirchhoff-loop-Johnson-(like)-noise ciphers can easily be integrated on existing types of digital chips in order to provide secure data communication between hardware processors, memory chips, hard disks and other units within a computer or other data processor system. The secure key exchange can take place at the very first run and the system can renew the key later at random times with an authenticated fashion to prohibit man-in-the-middle attack. The key can be stored in flash memories within the communicating chip units at hidden random addresses among other random bits that are continuously generated by the secure line but are never actually used. Thus, even if the system is disassembled, and the eavesdropper can have direct access to the communication lines between the units, or even if she is trying to use a man-in-the-middle attack, no information can be extracted. The only way to break the code is to learn the chip structure, to understand the machine code program and to read out the information during running by accessing the proper internal ports of the working chips. However such an attack needs extraordinary resources and even that can be prohibited by a password lockout. The unconditional security of commercial algorithms against piracy can be provided in a similar way.<|reference_end|> | arxiv | @article{kish2008unconditionally,
title={Unconditionally secure computers, algorithms and hardware, such as
memories, processors, keyboards, flash and hard drives},
author={Laszlo B. Kish and Olivier Saidi},
journal={Fluctuation and Noise Letters 8 (2008) L95-L98},
year={2008},
doi={10.1142/S0219477508004362},
archivePrefix={arXiv},
eprint={0803.4479},
primaryClass={physics.gen-ph cs.CR}
} | kish2008unconditionally |
arxiv-3196 | 0803.4511 | The aDORe Federation Architecture | <|reference_start|>The aDORe Federation Architecture: The need to federate repositories emerges in two distinctive scenarios. In one scenario, scalability-related problems in the operation of a repository reach a point beyond which continued service requires parallelization and hence federation of the repository infrastructure. In the other scenario, multiple distributed repositories manage collections of interest to certain communities or applications, and federation is an approach to present a unified perspective across these repositories. The high-level, 3-Tier aDORe federation architecture can be used as a guideline to federate repositories in both cases. This paper describes the architecture, consisting of core interfaces for federated repositories in Tier-1, two shared infrastructure components in Tier-2, and a single-point of access to the federation in Tier-3. The paper also illustrates two large-scale deployments of the aDORe federation architecture: the aDORe Archive repository (over 100,000,000 digital objects) at the Los Alamos National Laboratory and the Ghent University Image Repository federation (multiple terabytes of image files).<|reference_end|> | arxiv | @article{van de sompel2008the,
title={The aDORe Federation Architecture},
author={Herbert Van de Sompel and Ryan Chute and Patrick Hochstenbach},
journal={arXiv preprint arXiv:0803.4511},
year={2008},
archivePrefix={arXiv},
eprint={0803.4511},
primaryClass={cs.DL}
} | van de sompel2008the |
arxiv-3197 | 0803.4516 | A Dual Polynomial for OR | <|reference_start|>A Dual Polynomial for OR: We reprove that the approximate degree of the OR function on n bits is Omega(sqrt(n)). We consider a linear program which is feasible if and only if there is an approximate polynomial for a given function, and apply the duality theory. The duality theory says that the primal program has no solution if and only if its dual has a solution. Therefore one can prove the nonexistence of an approximate polynomial by exhibiting a dual solution, coined the dual polynomial. We construct such a polynomial.<|reference_end|> | arxiv | @article{spalek2008a,
title={A Dual Polynomial for OR},
author={Robert Spalek (Google)},
journal={arXiv preprint arXiv:0803.4516},
year={2008},
archivePrefix={arXiv},
eprint={0803.4516},
primaryClass={cs.CC}
} | spalek2008a |
arxiv-3198 | 0804.0006 | Embedding in a perfect code | <|reference_start|>Embedding in a perfect code: A binary 1-error-correcting code can always be embedded in a 1-perfect code of some larger length<|reference_end|> | arxiv | @article{avgustinovich2008embedding,
title={Embedding in a perfect code},
author={Sergey Avgustinovich (Sobolev Institute of Mathematics, Novosibirsk,
Russia) and Denis Krotov (Sobolev Institute of Mathematics, Novosibirsk, Russia)},
journal={J. Comb. Des. 17(5) 2009, 419-423},
year={2008},
doi={10.1002/jcd.20207},
archivePrefix={arXiv},
eprint={0804.0006},
primaryClass={math.CO cs.IT math.IT}
} | avgustinovich2008embedding |
arxiv-3199 | 0804.0036 | Complexity and algorithms for computing Voronoi cells of lattices | <|reference_start|>Complexity and algorithms for computing Voronoi cells of lattices: In this paper we are concerned with finding the vertices of the Voronoi cell of a Euclidean lattice. Given a basis of a lattice, we prove that computing the number of vertices is a #P-hard problem. On the other hand we describe an algorithm for this problem which is especially suited for low dimensional (say dimensions at most 12) and for highly-symmetric lattices. We use our implementation, which drastically outperforms those of current computer algebra systems, to find the vertices of Voronoi cells and quantizer constants of some prominent lattices.<|reference_end|> | arxiv | @article{sikiric2008complexity,
title={Complexity and algorithms for computing Voronoi cells of lattices},
author={Mathieu Dutour Sikiric and Achill Schuermann and Frank Vallentin},
journal={Math. Comp. 267 (2009), 1713-1731},
year={2008},
doi={10.1090/S0025-5718-09-02224-8},
archivePrefix={arXiv},
eprint={0804.0036},
primaryClass={math.MG cs.CG cs.IT math.IT math.NT}
} | sikiric2008complexity |
arxiv-3200 | 0804.0041 | On the reconstruction of block-sparse signals with an optimal number of measurements | <|reference_start|>On the reconstruction of block-sparse signals with an optimal number of measurements: Let A be an M by N matrix (M < N) which is an instance of a real random Gaussian ensemble. In compressed sensing we are interested in finding the sparsest solution to the system of equations A x = y for a given y. In general, whenever the sparsity of x is smaller than half the dimension of y then with overwhelming probability over A the sparsest solution is unique and can be found by an exhaustive search over x with an exponential time complexity for any y. The recent work of Cand\`es, Donoho, and Tao shows that minimization of the L_1 norm of x subject to A x = y results in the sparsest solution provided the sparsity of x, say K, is smaller than a certain threshold for a given number of measurements. Specifically, if the dimension of y approaches the dimension of x, the sparsity of x should be K < 0.239 N. Here, we consider the case where x is d-block sparse, i.e., x consists of n = N / d blocks where each block is either a zero vector or a nonzero vector. Instead of L_1-norm relaxation, we consider the following relaxation min_x \| X_1 \|_2 + \| X_2 \|_2 + ... + \| X_n \|_2, subject to A x = y where X_i = (x_{(i-1)d+1}, x_{(i-1)d+2}, ..., x_{i d}) for i = 1,2, ..., n. Our main result is that as n -> \infty, the minimization finds the sparsest solution to Ax = y, with overwhelming probability in A, for any x whose block sparsity is k/n < 1/2 - O(\epsilon), provided M/N > 1 - 1/d, and d = \Omega(\log(1/\epsilon)/\epsilon). The relaxation can be solved in polynomial time using semi-definite programming.<|reference_end|> | arxiv | @article{stojnic2008on,
title={On the reconstruction of block-sparse signals with an optimal number of
measurements},
author={Mihailo Stojnic and Farzad Parvaresh and Babak Hassibi},
journal={arXiv preprint arXiv:0804.0041},
year={2008},
doi={10.1016/j.jmr.2011.02.015},
archivePrefix={arXiv},
eprint={0804.0041},
primaryClass={cs.IT cs.NA math.IT}
} | stojnic2008on |