Columns: id (string, 9-16 chars) · title (string, 4-278 chars) · categories (string, 5-104 chars) · abstract (string, 6-4.09k chars)
0805.0197
Flatness of the Energy Landscape for Horn Clauses
cond-mat.dis-nn cs.NE
The Little-Hopfield neural network programmed with Horn clauses is studied. We argue that the energy landscape of the system, corresponding to the inconsistency function for logical interpretations of the sets of Horn clauses, has minimal ruggedness. This is supported by computer simulations.
0805.0202
A Pseudo-Boolean Solution to the Maximum Quartet Consistency Problem
cs.AI cs.LO
Determining the evolutionary history of given biological data is an important task in the biological sciences. Given a set of quartet topologies over a set of taxa, the Maximum Quartet Consistency (MQC) problem consists of computing a global phylogeny that satisfies the maximum number of quartets. A number of solutions have been proposed for the MQC problem, including Dynamic Programming, Constraint Programming, and more recently Answer Set Programming (ASP). ASP is currently the most efficient approach for optimally solving the MQC problem. This paper proposes encoding the MQC problem with pseudo-Boolean (PB) constraints. The use of PB allows solving the MQC problem with efficient PB solvers, and also allows considering different modeling approaches for the MQC problem. Initial results are promising, and suggest that PB can be an effective alternative for solving the MQC problem.
0805.0231
CMA-ES with Two-Point Step-Size Adaptation
cs.NE
We combine a refined version of two-point step-size adaptation with the covariance matrix adaptation evolution strategy (CMA-ES). Additionally, we suggest polished formulae for the learning rate of the covariance matrix and the recombination weights. In contrast to cumulative step-size adaptation or to the 1/5-th success rule, the refined two-point adaptation (TPA) does not rely on any internal model of optimality. In contrast to conventional self-adaptation, TPA will achieve a better target step-size, in particular with large populations. The disadvantage of TPA is that it relies on two additional objective function evaluations in each iteration.
0805.0241
Asymptotically Good LDPC Convolutional Codes Based on Protographs
cs.IT math.IT
LDPC convolutional codes have been shown to be capable of achieving the same capacity-approaching performance as LDPC block codes with iterative message-passing decoding. In this paper, asymptotic methods are used to calculate a lower bound on the free distance for several ensembles of asymptotically good protograph-based LDPC convolutional codes. Further, we show that the free distance to constraint length ratio of the LDPC convolutional codes exceeds the minimum distance to block length ratio of corresponding LDPC block codes.
0805.0268
Towards Exploring Fundamental Limits of System-Specific Cryptanalysis Within Limited Attack Classes: Application to ABSG
cs.CR cs.IT math.IT
A new approach to cryptanalysis is proposed where the goal is to explore the fundamental limits of a specific class of attacks against a particular cryptosystem. As a first step, the approach is applied to ABSG, which is an LFSR-based stream cipher where irregular decimation techniques are utilized. Consequently, under some mild assumptions, which are common in cryptanalysis, tight lower bounds on the algorithmic complexity of successful Query-Based Key-Recovery attacks are derived for two different setups of practical interest. The proofs rely on the concept of ``typicality'' of information theory.
0805.0272
Capacity of The Discrete-Time Non-Coherent Memoryless Gaussian Channels at Low SNR
cs.IT math.IT
We address the capacity of a discrete-time memoryless Gaussian channel, where the channel state information (CSI) is available neither at the transmitter nor at the receiver. The optimal capacity-achieving input distribution at low signal-to-noise ratio (SNR) is precisely characterized, and the exact capacity of a non-coherent channel is derived. The derived relations allow a better understanding of the capacity of non-coherent channels at low SNR. Then, we compute the non-coherence penalty and give a more precise characterization of the sub-linear term in SNR. Finally, in order to get more insight on how the optimal input varies with SNR, upper and lower bounds on the non-zero mass point location of the capacity-achieving input are given.
0805.0330
Alternating Automata on Data Trees and XPath Satisfiability
cs.LO cs.DB cs.FL
A data tree is an unranked ordered tree whose every node is labelled by a letter from a finite alphabet and an element ("datum") from an infinite set, where the latter can only be compared for equality. The article considers alternating automata on data trees that can move downward and rightward, and have one register for storing data. The main results are that nonemptiness over finite data trees is decidable but not primitive recursive, and that nonemptiness of safety automata is decidable but not elementary. The proofs use nondeterministic tree automata with faulty counters. Allowing upward moves, leftward moves, or two registers, each causes undecidability. As corollaries, decidability is obtained for two data-sensitive fragments of the XPath query language.
0805.0337
On Distributed Function Computation in Structure-Free Random Networks
cs.IT math.IT
We consider in-network computation of MAX in a structure-free random multihop wireless network. Nodes do not know their relative or absolute locations and use the Aloha MAC protocol. For one-shot computation, we describe a protocol in which the MAX value becomes available at the origin in $O(\sqrt{n/\log n})$ slots with high probability. This is within a constant factor of that required by the best coordinated protocol. A minimal structure (knowledge of hop-distance from the sink) is imposed on the network and with this structure, we describe a protocol for pipelined computation of MAX that achieves a rate of $\Omega(1/(\log^2 n)).$
0805.0360
Prediction and Mitigation of Crush Conditions in Emergency Evacuations
cs.CE cs.MA
Several simulation environments exist for the simulation of large-scale evacuations of buildings, ships, or other enclosed spaces. These offer sophisticated tools for the study of human behaviour, the recreation of environmental factors such as fire or smoke, and the inclusion of architectural or structural features, such as elevators, pillars and exits. Although such simulation environments can provide insights into crowd behaviour, they lack the ability to examine potentially dangerous forces building up within a crowd. These are commonly referred to as crush conditions, and are a common cause of death in emergency evacuations. In this paper, we describe a methodology for the prediction and mitigation of crush conditions. The paper is organised as follows. We first establish the need for such a model, defining the main factors that lead to crush conditions, and describing several exemplar case studies. We then examine current methods for studying crush, and describe their limitations. From this, we develop a three-stage hybrid approach, using a combination of techniques. We conclude with a brief discussion of the potential benefits of our approach.
0805.0375
Wireless Secrecy in Cellular Systems with Infrastructure--Aided Cooperation
cs.IT cs.CR math.IT
In cellular systems, confidentiality of uplink transmission with respect to eavesdropping terminals can be ensured by creating intentional interference via scheduling of concurrent downlink transmissions. In this paper, this basic idea is explored from an information-theoretic standpoint by focusing on a two-cell scenario where the involved base stations are connected via a finite-capacity backbone link. A number of transmission strategies are considered that aim at improving uplink confidentiality under constraints on the downlink rate, which acts as an interfering signal. The strategies differ mainly in the way the backbone link is exploited for cooperation between the downlink-operated and the uplink-operated base stations. Achievable rates are derived for both the Gaussian (unfaded) and the fading cases, under different assumptions on the channel state information available at different nodes. Numerical results are also provided to corroborate the analysis. Overall, the analysis reveals that a combination of scheduling and base station cooperation is a promising means to improve transmission confidentiality in cellular systems.
0805.0438
Network-based consensus averaging with general noisy channels
cs.IT math.IT
This paper focuses on the consensus averaging problem on graphs under general noisy channels. We study a particular class of distributed consensus algorithms based on damped updates, and using the ordinary differential equation method, we prove that the updates converge almost surely to exact consensus for finite variance noise. Our analysis applies to various types of stochastic disturbances, including errors in parameters, transmission noise, and quantization noise. Under a suitable stability condition, we prove that the error is asymptotically Gaussian, and we show how the asymptotic covariance is specified by the graph Laplacian. For additive parameter noise, we show how the scaling of the asymptotic MSE is controlled by the spectral gap of the Laplacian.
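The damped consensus updates analysed in the abstract above can be sketched numerically. The following is an assumed illustration (ring graph, decaying step sizes a_k = a0/k, Gaussian channel noise), not the authors' code:

```python
import numpy as np

# Damped, noisy consensus: x_{k+1} = x_k - (a0/k) * (L @ x_k + noise).
# The decaying step size averages out the channel noise, so the iterates
# settle near exact consensus at the initial average.

def damped_consensus(x0, L, noise_std=0.1, steps=5000, a0=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x = x0.astype(float).copy()
    for k in range(1, steps + 1):
        noise = noise_std * rng.standard_normal(x.shape)
        x = x - (a0 / k) * (L @ x + noise)   # damped Laplacian update
    return x

# Ring graph on 8 nodes: Laplacian L = 2I - P - P^T for the cyclic shift P.
n = 8
P = np.roll(np.eye(n), 1, axis=1)
L = 2 * np.eye(n) - P - P.T
x0 = np.arange(n, dtype=float)               # true average is 3.5
x = damped_consensus(x0, L)
```

With constant (undamped) steps the same noise would accumulate; the 1/k damping is what the paper's almost-sure convergence result relies on.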
0805.0459
Phase transition in SONFIS&SORST
cs.AI
In this study, we introduce the general frame of MAny Connected Intelligent Particles Systems (MACIPS). Connections and interconnections between particles give rise to complex behavior in such an otherwise simple system (a system within a system). Contributions of natural computing, under information granulation theory, are the main topics of this spacious skeleton. Upon this clue, we organize two algorithms involving a few prominent intelligent computing and approximate reasoning methods: self-organizing feature map (SOM), Neuro-Fuzzy Inference System, and Rough Set Theory (RST). Beyond this, we show how our algorithms can be taken as a linkage of government-society interaction, where the government adopts various fashions of behavior: solid (absolute) or flexible. The transition of such a society from order to disorder, driven by changes of the connectivity parameters (noise), is then inferred. In addition, one may find an indirect mapping between financial systems and eventual market fluctuations with MACIPS. Keywords: phase transition, SONFIS, SORST, many connected intelligent particles system, society-government interaction
0805.0501
Decoding Generalized Concatenated Codes Using Interleaved Reed-Solomon Codes
cs.IT math.IT
Generalized Concatenated codes are a code construction consisting of a number of outer codes whose code symbols are protected by an inner code. As outer codes, we assume the most frequently used Reed-Solomon codes; as inner code, we assume some linear block code which can be decoded up to half its minimum distance. Decoding up to half the minimum distance of Generalized Concatenated codes is classically achieved by the Blokh-Zyablov-Dumer algorithm, which iteratively decodes by first using the inner decoder to get an estimate of the outer code words and then using an outer error/erasure decoder with a varying number of erasures determined by a set of pre-calculated thresholds. In this paper, a modified version of the Blokh-Zyablov-Dumer algorithm is proposed, which exploits the fact that a number of outer Reed-Solomon codes with average minimum distance d can be grouped into one single Interleaved Reed-Solomon code which can be decoded beyond d/2. This allows skipping a number of decoding iterations on the one hand, and significantly reducing the complexity of each decoding iteration - while maintaining the decoding performance - on the other.
0805.0507
Spread Codes and Spread Decoding in Network Coding
cs.IT math.IT
In this paper we introduce the class of Spread Codes for use in random network coding. Spread Codes are based on the construction of spreads in finite projective geometry. The major contribution of the paper is an efficient decoding algorithm for spread codes up to half the minimum distance.
0805.0510
Iterative Hard Thresholding for Compressed Sensing
cs.IT cs.NA math.IT math.NA
Compressed sensing is a technique to sample compressible signals below the Nyquist rate, whilst still allowing near optimal reconstruction of the signal. In this paper we present a theoretical analysis of the iterative hard thresholding algorithm when applied to the compressed sensing recovery problem. We show that the algorithm has the following properties (made more precise in the main text of the paper) - It gives near-optimal error guarantees. - It is robust to observation noise. - It succeeds with a minimum number of observations. - It can be used with any sampling operator for which the operator and its adjoint can be computed. - The memory requirement is linear in the problem size. - Its computational complexity per iteration is of the same order as the application of the measurement operator or its adjoint. - It requires a fixed number of iterations depending only on the logarithm of a form of signal to noise ratio of the signal. - Its performance guarantees are uniform in that they only depend on properties of the sampling operator and signal sparsity.
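The recursion analysed in the abstract above is short enough to sketch directly. This is an illustration with assumed parameters (dimensions, sparsity, random sampling operator), not the authors' code: x_{n+1} = H_s(x_n + Phi^T (y - Phi x_n)), where H_s keeps the s largest-magnitude entries and zeroes the rest.

```python
import numpy as np

def hard_threshold(x, s):
    # H_s: keep the s largest-magnitude entries, zero the rest
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-s:]
    out[keep] = x[keep]
    return out

def iht(y, Phi, s, iters=500):
    # iterative hard thresholding: gradient step on ||y - Phi x||^2, then H_s
    x = np.zeros(Phi.shape[1])
    for _ in range(iters):
        x = hard_threshold(x + Phi.T @ (y - Phi @ x), s)
    return x

# Noiseless recovery of a 3-sparse signal from 80 random measurements.
rng = np.random.default_rng(1)
n, m, s = 100, 80, 3
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # near-unit-norm columns
x_true = np.zeros(n)
x_true[[5, 17, 80]] = [3.0, -2.5, 3.5]
x_hat = iht(Phi @ x_true, Phi, s)
```

Note that each iteration costs essentially one application of Phi and one of Phi^T, matching the per-iteration complexity claim in the abstract.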
0805.0514
Efficient recovering of operation tables of black box groups and rings
cs.IT cs.DM math.GR math.IT
People have been studying the following problem: Given a finite set S with a hidden (black box) binary operation * on S which might come from a group law, and suppose you have access to an oracle that you can ask for the operation x*y of single pairs (x,y) you choose. What is the minimal number of queries to the oracle until the whole binary operation is recovered, i.e. you know x*y for all x,y in S? This problem can trivially be solved by using |S|^2 queries to the oracle, so the question arises under which circumstances you can succeed with a significantly smaller number of queries. In this presentation we give a lower bound on the number of queries needed for general binary operations. On the other hand, we present algorithms solving this problem by using |S| queries, provided that * is an abelian group operation. We also investigate black box rings and give lower and upper bounds for the number of queries needed to solve product recovering in this case.
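A simplified instance of the |S|-query idea from the abstract above can be sketched for the special case of a cyclic group with a known generator g (an assumption for illustration; the paper treats general abelian groups). Querying g*g, g*(g*g), ... lists every element as a power g^1, ..., g^n (with g^n the identity), after which any product x*y follows from adding exponents with no further oracle queries:

```python
import itertools

def make_oracle(n, labels):
    # hidden operation: addition mod n, relabelled by `labels`; the counter
    # records how many times the oracle is queried
    pos = {lab: i for i, lab in enumerate(labels)}
    counter = {"queries": 0}
    def op(x, y):
        counter["queries"] += 1
        return labels[(pos[x] + pos[y]) % n]
    return op, counter

def recover_table(S, g, op):
    n = len(S)
    powers = [g]                             # powers[i] holds g^(i+1)
    for _ in range(n - 1):
        powers.append(op(g, powers[-1]))     # n - 1 oracle queries in total
    exp = {el: i + 1 for i, el in enumerate(powers)}   # discrete logarithms
    # x*y = g^(exp[x] + exp[y]); no further queries needed
    return {(x, y): powers[(exp[x] + exp[y] - 1) % n]
            for x, y in itertools.product(S, S)}

n = 6
labels = ["c", "a", "f", "b", "e", "d"]      # hidden relabelling of Z_6
op, counter = make_oracle(n, labels)
table = recover_table(labels, labels[1], op) # labels[1] maps to 1, a generator
```

The full table has n^2 = 36 entries but only n - 1 = 5 queries are issued, illustrating the gap between the trivial |S|^2 bound and the |S|-query algorithms discussed in the abstract.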
0805.0516
The Gaussian MAC with Conferencing Encoders
cs.IT math.IT
We derive the capacity region of the Gaussian version of Willems's two-user MAC with conferencing encoders. This setting differs from the classical MAC in that, prior to each transmission block, the two transmitters can communicate with each other over noise-free bit-pipes of given capacities. The derivation requires a new technique for proving the optimality of Gaussian input distributions in certain mutual information maximizations under a Markov constraint. We also consider a Costa-type extension of the Gaussian MAC with conferencing encoders. In this extension, the channel can be described as a two-user MAC with Gaussian noise and Gaussian interference where the interference is known non-causally to the encoders but not to the decoder. We show that as in Costa's setting the interference sequence can be perfectly canceled, i.e., that the capacity region without interference can be achieved.
0805.0521
On the Capacity of Free-Space Optical Intensity Channels
cs.IT math.IT
New upper and lower bounds are presented on the capacity of the free-space optical intensity channel. This channel is characterized by inputs that are nonnegative (representing the transmitted optical intensity) and by outputs that are corrupted by additive white Gaussian noise (because in free space the disturbances arise from many independent sources). Due to battery and safety reasons the inputs are simultaneously constrained in both their average and peak power. For a fixed ratio of the average power to the peak power the difference between the upper and the lower bounds tends to zero as the average power tends to infinity, and the ratio of the upper and lower bounds tends to one as the average power tends to zero. The case where only an average-power constraint is imposed on the input is treated separately. In this case, the difference of the upper and lower bound tends to 0 as the average power tends to infinity, and their ratio tends to a constant as the power tends to zero.
0805.0577
Infinity-Norm Sphere-Decoding
cs.IT math.IT
The most promising approaches for efficient detection in multiple-input multiple-output (MIMO) wireless systems are based on sphere-decoding (SD). The conventional (and optimum) norm that is used to conduct the tree traversal step in SD is the l-2 norm. It was, however, recently observed that using the l-infinity norm instead reduces the hardware complexity of SD considerably at only a marginal performance loss. These savings result from a reduction in the length of the critical path in the circuit and the silicon area required for metric computation, but are also, as observed previously through simulation results, a consequence of a reduction in the computational (i.e., algorithmic) complexity. The aim of this paper is an analytical performance and computational complexity analysis of l-infinity norm SD. For i.i.d. Rayleigh fading MIMO channels, we show that l-infinity norm SD achieves full diversity order with an asymptotic SNR gap, compared to l-2 norm SD, that increases at most linearly in the number of receive antennas. Moreover, we provide a closed-form expression for the computational complexity of l-infinity norm SD based on which we establish that its complexity scales exponentially in the system size. Finally, we characterize the tree pruning behavior of l-infinity norm SD and show that it behaves fundamentally different from that of l-2 norm SD.
0805.0589
Cascaded Orthogonal Space-Time Block Codes for Wireless Multi-Hop Relay Networks
cs.IT math.IT
Distributed space-time block coding is a diversity technique to mitigate the effects of fading in multi-hop wireless networks, where multiple relay stages are used by a source to communicate with its destination. This paper proposes a new distributed space-time block code called the cascaded orthogonal space-time block code (COSTBC) for the case where the source and destination are equipped with multiple antennas and each relay stage has one or more single antenna relays. Each relay stage is assumed to have receive channel state information (CSI) for all the channels from the source to itself, while the destination is assumed to have receive CSI for all the channels. To construct the COSTBC, multiple orthogonal space-time block codes are used in cascade by the source and each relay stage. In the COSTBC, each relay stage separates the constellation symbols of the orthogonal space-time block code sent by the preceding relay stage using its CSI, and then transmits another orthogonal space-time block code to the next relay stage. COSTBCs are shown to achieve the maximum diversity gain in a multi-hop wireless network with flat Rayleigh fading channels. Several explicit constructions of COSTBCs are also provided for two-hop wireless networks with two and four source antennas and relay nodes. It is also shown that COSTBCs require minimum decoding complexity thanks to the connection to orthogonal space-time block codes.
0805.0615
On Expanded Cyclic Codes
cs.IT cs.CC math.IT math.RA
The paper has a threefold purpose. The first purpose is to present an explicit description of expanded cyclic codes defined in $\GF(q^m)$. The proposed explicit construction of the expanded generator matrix and expanded parity check matrix maintains the symbol-wise algebraic structure and thus keeps many important original characteristics. The second purpose of this paper is to identify a class of constant-weight cyclic codes. Specifically, we show that a well-known class of $q$-ary BCH codes excluding the all-zero codeword are constant-weight cyclic codes. Moreover, we show that this class of codes achieves the Plotkin bound. The last purpose of the paper is to characterize expanded cyclic codes utilizing the proposed expanded generator matrix and parity check matrix. We characterize the properties of component codewords of a codeword and in particular identify the precise conditions under which a codeword can be represented by a subbasis. Our developments reveal an alternative and more general view on the subspace subcodes of Reed-Solomon codes. With the new insights, we present an improved lower bound on the minimum distance of an expanded cyclic code by exploiting the generalized concatenated structure. We also show that fixed-rate binary expanded Reed-Solomon codes are asymptotically "bad", in the sense that the ratio of minimum distance to code length diminishes as the code length goes to infinity. This overturns the prevalent conjecture that they are "good" codes and deviates from the ensemble of generalized Reed-Solomon codes, which asymptotically achieves the Gilbert-Varshamov bound.
0805.0642
Order to Disorder Transitions in Hybrid Intelligent Systems: a Hatch to the Interactions of Nations -Governments
cs.AI cs.IT math.IT
In this study, under the general frame of MAny Connected Intelligent Particles Systems (MACIPS), we reproduce two new simple subsets of such an intelligent complex network, namely hybrid intelligent systems, involving a few prominent intelligent computing and approximate reasoning methods: self-organizing feature map (SOM), Neuro-Fuzzy Inference System, and Rough Set Theory (RST). Beyond this, we show how our algorithms can be construed as a linkage of government-society interaction, where the government adopts various fashions of behavior: solid (absolute) or flexible. The transition of such a society from order to disorder, driven by changes of the connectivity parameters (noise), is then inferred. In addition, one may find an indirect mapping between financial systems and eventual market fluctuations with MACIPS.
0805.0697
Stochastic Optimization Approaches for Solving Sudoku
cs.NE
In this paper the Sudoku problem is solved using stochastic search techniques, namely: Cultural Genetic Algorithm (CGA), Repulsive Particle Swarm Optimization (RPSO), Quantum Simulated Annealing (QSA), and a Hybrid method that combines a Genetic Algorithm with Simulated Annealing (HGASA). The results obtained show that CGA, QSA and HGASA are able to solve the Sudoku puzzle, with CGA finding a solution in 28 seconds, QSA in 65 seconds, and HGASA in 1.447 seconds. This is mainly because HGASA combines the parallel searching of GA with the flexibility of SA. The RPSO was found to be unable to solve the puzzle.
0805.0740
Diversity-Integration Trade-offs in MIMO Detection
cs.OH cs.IT math.IT
In this work, a MIMO detection problem is considered. At first, we derive the Generalized Likelihood Ratio Test (GLRT) for arbitrary transmitted signals and arbitrary time-correlation of the disturbance. Then, we investigate design criteria for the transmitted waveforms in both power-unlimited and power-limited systems and we study the interplay among the rank of the optimized code matrix, the number of transmit diversity paths and the amount of energy integrated along each path. The results show that increasing the rank of the code matrix allows generating a larger number of diversity paths at the price of reducing the average signal-to-clutter level along each path.
0805.0747
Pruning Attribute Values From Data Cubes with Diamond Dicing
cs.DB cs.DS
Data stored in a data warehouse are inherently multidimensional, but most data-pruning techniques (such as iceberg and top-k queries) are unidimensional. However, analysts need to issue multidimensional queries. For example, an analyst may need to select not just the most profitable stores or--separately--the most profitable products, but simultaneous sets of stores and products fulfilling some profitability constraints. To fill this need, we propose a new operator, the diamond dice. Because of the interaction between dimensions, the computation of diamonds is challenging. We present the first diamond-dicing experiments on large data sets. Experiments show that we can compute diamond cubes over fact tables containing 100 million facts in less than 35 minutes using a standard PC.
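The diamond-dice operator described in the abstract above can be illustrated on a two-dimensional fact table under COUNT constraints (the operator is the paper's; the thresholds and data below are assumptions for illustration): keep the largest sets of stores and products such that every surviving store appears in at least k1 facts and every surviving product in at least k2 facts.

```python
from collections import Counter

def diamond(facts, k1, k2):
    # facts are (store, product) pairs; repeatedly drop attribute values
    # whose count falls below threshold until a fixed point is reached
    facts = set(facts)
    changed = True
    while changed:
        stores = Counter(s for s, _ in facts)
        products = Counter(p for _, p in facts)
        keep = {(s, p) for s, p in facts
                if stores[s] >= k1 and products[p] >= k2}
        changed = keep != facts
        facts = keep
    return facts

facts = [("s1", "p1"), ("s1", "p2"), ("s2", "p1"), ("s2", "p2"),
         ("s3", "p1"), ("s3", "p3")]
core = diamond(facts, 2, 2)   # pruning cascades: p3 falls below 2, then s3
```

The cascade is the "interaction between dimensions" the abstract mentions: dropping product p3 lowers the count of store s3, which then also falls out, even though s3 initially met its threshold.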
0805.0785
AGNOSCO - Identification of Infected Nodes with artificial Ant Colonies
cs.AI cs.MA
If a computer node is infected by a virus, worm or backdoor, then this is a security risk for the complete network structure with which the node is associated. Existing Network Intrusion Detection Systems (NIDS) provide a certain amount of support for the identification of such infected nodes but require large amounts of communication and computational power. In this article, we present a novel approach called AGNOSCO to support the identification of infected nodes through the usage of artificial ant colonies. It is shown that AGNOSCO overcomes the communication and computational power problem while identifying infected nodes properly.
0805.0802
An Information-Theoretical View of Network-Aware Malware Attacks
cs.CR cs.IT cs.NI math.IT
This work investigates three aspects: (a) a network vulnerability as the non-uniform vulnerable-host distribution, (b) threats, i.e., intelligent malwares that exploit such a vulnerability, and (c) defense, i.e., challenges for fighting the threats. We first study five large data sets and observe consistent clustered vulnerable-host distributions. We then present a new metric, referred to as the non-uniformity factor, which quantifies the unevenness of a vulnerable-host distribution. This metric is essentially the Renyi information entropy and better characterizes the non-uniformity of a distribution than the Shannon entropy. Next, we analyze the propagation speed of network-aware malwares in view of information theory. In particular, we draw a relationship between Renyi entropies and randomized epidemic malware-scanning algorithms. We find that the infection rates of malware-scanning methods are characterized by the Renyi entropies that relate to the information bits in a non-uniform vulnerable-host distribution extracted by a randomized scanning algorithm. Meanwhile, we show that a representative network-aware malware can increase the spreading speed by exactly or nearly a non-uniformity factor when compared to a random-scanning malware at an early stage of malware propagation. This quantifies how much more rapidly the Internet can be infected at the early stage when a malware exploits an uneven vulnerable-host distribution as a network-wide vulnerability. Furthermore, we analyze the effectiveness of defense strategies on the spread of network-aware malwares. Our results demonstrate that counteracting network-aware malwares is a significant challenge for the strategies that include host-based defense and IPv6.
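The non-uniformity idea in the abstract above can be made concrete with a small numeric sketch. The exact definition is the paper's; this assumes the common form based on the order-2 Renyi entropy H2(p) = -log2(sum_i p_i^2): the factor 2^(log2(m) - H2) equals 1 for a uniform distribution over m groups and grows as the distribution becomes more clustered.

```python
import math

def renyi_entropy2(p):
    # order-2 Renyi entropy of a probability vector p
    return -math.log2(sum(q * q for q in p))

def non_uniformity_factor(p):
    # gap between the uniform maximum log2(m) and H2, exponentiated;
    # algebraically equal to m * sum_i p_i^2
    m = len(p)
    return 2 ** (math.log2(m) - renyi_entropy2(p))

uniform = [0.25] * 4            # factor 1: no advantage for a smart scanner
clustered = [0.7, 0.1, 0.1, 0.1]  # factor > 1: clustering speeds up scanning
```

In the abstract's terms, the clustered distribution lets a network-aware malware spread roughly this factor faster than random scanning in the early stage.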
0805.0849
SANA - Network Protection through artificial Immunity
cs.CR cs.MA
Current network protection systems use a collection of intelligent components - e.g. classifiers or rule-based firewall systems - to detect intrusions and anomalies and to secure a network against viruses, worms, or trojans. However, these network systems rely on individuality and support an architecture with little collaborative work among the protection components. They give little administration support for maintenance, but offer a large number of individual single points of failure - an ideal situation for network attacks to succeed. In this work, we discuss the required features, the performance, and the problems of a distributed protection system called SANA. It consists of a cooperative architecture and is motivated by the human immune system, where the components correspond to artificial immune cells that are connected for their collaborative work. SANA promises better protection against intruders than commonly known protection systems through an adaptive self-management, while using resources efficiently by an intelligent reduction of redundant tasks. We introduce a library of several novel and commonly used protection components and evaluate the performance of SANA by a proof-of-concept implementation.
0805.0909
SANA - Security Analysis in Internet Traffic through Artificial Immune Systems
cs.CR cs.MA
Attacks by viruses, worms, hackers, etc. are a network security problem in many organisations. Current Intrusion Detection Systems have significant disadvantages, e.g. the need for plenty of computational power or local installation. Therefore, we introduce a novel framework for network security called SANA. SANA contains an artificial immune system with artificial cells which perform certain tasks in order to support existing systems in better securing the network against intrusions. The advantages of SANA are that it is efficient, adaptive, autonomous, and massively-distributed. In this article, we describe the architecture of the artificial immune system and the functionality of its components. We briefly explain the implementation and discuss results.
0805.0963
Correlated Anarchy in Overlapping Wireless Networks
cs.GT cs.IT math.IT physics.soc-ph
We investigate the behavior of a large number of selfish users that are able to switch dynamically between multiple wireless access-points (possibly belonging to different standards) by introducing an iterated non-cooperative game. Users start out completely uneducated and naive but, by using a fixed set of strategies to process a broadcasted training signal, they quickly evolve and converge to an evolutionarily stable equilibrium. Then, in order to measure efficiency in this steady state, we adapt the notion of the price of anarchy to our setting and we obtain an explicit analytic estimate for it by using methods from statistical physics (namely the theory of replicas). Surprisingly, we find that the price of anarchy does not depend on the specifics of the wireless nodes (e.g. spectral efficiency) but only on the number of strategies per user and a particular combination of the number of nodes, the number of users and the size of the training signal. Finally, we map this game to the well-studied minority game, generalizing its analysis to an arbitrary number of choices.
0805.0990
The pre-log of Gaussian broadcast with feedback can be two
cs.IT math.IT
A generic intuition says that the pre-log, or multiplexing gain, cannot be larger than the minimum of the number of transmit and receive dimensions. This suggests that for the scalar broadcast channel, the pre-log cannot exceed one. By contrast, in this note, we show that when the noises are anti-correlated and feedback is present, then a pre-log of two can be attained. In other words, in this special case, in the limit of high SNR, the scalar Gaussian broadcast channel turns into two parallel AWGN channels. Achievability is established via a coding strategy due to Schalkwijk, Kailath, and Ozarow.
0805.1030
Decomposition Techniques for Subgraph Matching
cs.CC cs.CL
In the constraint programming framework, state-of-the-art static and dynamic decomposition techniques are hard to apply to problems with complete initial constraint graphs. For such problems, we propose a hybrid approach of these techniques in the presence of global constraints. In particular, we solve the subgraph isomorphism problem. Further we design specific heuristics for this hard problem, exploiting its special structure to achieve decomposition. The underlying idea is to precompute a static heuristic on a subset of its constraint network, to follow this static ordering until a first problem decomposition is available, and to switch afterwards to a fully propagated, dynamically decomposing search. Experimental results show that, for sparse graphs, our decomposition method solves more instances than dedicated, state-of-the-art matching algorithms or standard constraint programming approaches.
0805.1088
Network Coding for Speedup in Switches
cs.NI cs.IT math.IT
We present a graph-theoretic upper bound on the speedup needed to achieve 100% throughput in a multicast switch using network coding. By bounding the speedup, we show the equivalence between network coding and speedup in multicast switches - i.e. network coding, which is usually implemented in software, can in many cases substitute for speedup, which is often achieved by adding extra switch fabrics. This bound is based on an approach to network coding problems called the "enhanced conflict graph". We show that the "imperfection ratio" of the enhanced conflict graph gives an upper bound on the speedup. In particular, we apply this result to K-by-N switches with traffic patterns consisting of unicasts and broadcasts only to obtain an upper bound of min{(2K-1)/K, 2N/(N+1)}.
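The closing bound is easy to evaluate numerically; the sketch below (the helper name is ours, not the paper's) computes min{(2K-1)/K, 2N/(N+1)} for a few switch sizes:

```python
def speedup_bound(K: int, N: int) -> float:
    """Upper bound on the speedup needed for 100% throughput in a
    K-by-N multicast switch carrying only unicasts and broadcasts,
    per the formula quoted in the abstract."""
    return min((2 * K - 1) / K, (2 * N) / (N + 1))

# Both terms approach 2 from below as K and N grow, so network
# coding never needs more than 2x speedup in this setting.
for K, N in [(2, 3), (10, 10), (100, 100)]:
    print(K, N, speedup_bound(K, N))
```
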
0805.1096
Adaptive Affinity Propagation Clustering
cs.AI
Affinity propagation clustering (AP) has two limitations: it is hard to know what value of the 'preference' parameter yields an optimal clustering solution, and oscillations cannot be eliminated automatically if they occur. The adaptive AP method is proposed to overcome these limitations. It includes adaptive scanning of preferences to search the space of the number of clusters for the optimal clustering solution, adaptive adjustment of damping factors to eliminate oscillations, and adaptive escape from oscillations when the damping-adjustment technique fails. Experimental results on simulated and real data sets show that adaptive AP is effective and can outperform AP in the quality of clustering results.
0805.1153
Contact state analysis using NFIS and SOM
cs.NE cs.AI
This paper reports the application of a neuro-fuzzy inference system (NFIS) and self-organizing feature map (SOM) neural networks to the detection of contact states in a block system. On a simple system, the evolution of contact states under parallelization of DDA has been investigated, and a comparison between the NFIS and SOM results is presented. The results show the applicability of the proposed methods, with differing accuracy, to the detection of the contact distribution.
0805.1154
Clustering of scientific citations in Wikipedia
cs.DL cs.NE
The instances of templates in Wikipedia form an interesting data set of structured information. Here I focus on the cite journal template, which is primarily used for citations to articles in scientific journals. These citations can be extracted and analyzed: non-negative matrix factorization is performed on an (article x journal) matrix, resulting in a soft clustering of Wikipedia articles and scientific journals, with each cluster more or less representing a scientific topic.
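As a hedged illustration of the factorization step, a minimal multiplicative-update NMF (generic Lee-Seung updates on a toy article-by-journal matrix; not the author's code or data) might look like:

```python
import numpy as np

def nmf(V, k, iters=500, seed=0):
    """Factor a non-negative V (articles x journals) into W (articles x
    topics) and H (topics x journals) with Lee-Seung multiplicative updates."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 1e-3
    H = rng.random((k, m)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

# Toy matrix: two articles cite one journal group, two cite another.
V = np.array([[3., 2., 0., 0.],
              [2., 3., 0., 0.],
              [0., 0., 4., 1.],
              [0., 0., 1., 4.]])
W, H = nmf(V, k=2)
# Each row of W is a soft topic membership for one article;
# the argmax gives a hard cluster assignment.
print(W.argmax(axis=1))
```
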
0805.1209
Scaling Laws for Overlaid Wireless Networks: A Cognitive Radio Network vs. a Primary Network
cs.IT cs.NI math.IT
We study the scaling laws for the throughputs and delays of two coexisting wireless networks that operate in the same geographic region. The primary network consists of Poisson distributed legacy users of density n, and the secondary network consists of Poisson distributed cognitive users of density m, with m>n. The primary users have a higher priority to access the spectrum without particular considerations for the secondary users, while the secondary users have to act conservatively in order to limit the interference to the primary users. With a practical assumption that the secondary users only know the locations of the primary transmitters (not the primary receivers), we first show that both networks can achieve the same throughput scaling law as what Gupta and Kumar [1] established for a stand-alone wireless network if proper transmission schemes are deployed, where a certain throughput is achievable for each individual secondary user (i.e., zero outage) with high probability. By using a fluid model, we also show that both networks can achieve the same delay-throughput tradeoff as the optimal one established by El Gamal et al. [2] for a stand-alone wireless network.
0805.1262
Optimal Node Density for Two-Dimensional Sensor Arrays
cs.IT math.IT
The problem of optimal node density for ad hoc sensor networks deployed for making inferences about two dimensional correlated random fields is considered. Using a symmetric first order conditional autoregressive Gauss-Markov random field model, large deviations results are used to characterize the asymptotic per-node information gained from the array. This result then allows an analysis of the node density that maximizes the information under an energy constraint, yielding insights into the trade-offs among the information, density and energy.
0805.1288
Assessment of effective parameters on dilution using approximate reasoning methods in longwall mining method, Iran coal mines
cs.AI
More than 90% of all coal production in Iranian underground mines comes directly from the longwall mining method. Out-of-seam dilution is one of the essential problems in these mines, as it imposes additional mining and milling costs. Recognition of the parameters that affect dilution therefore has a remarkable role in industry. This paper analyzes the influence of 13 parameters (attribute variables) on the decision attribute (dilution value): using two approximate reasoning methods, namely Rough Set Theory (RST) and the Self-Organizing Neuro-Fuzzy Inference System (SONFIS), the best rules on our collected data sets have been extracted. A further benefit of these methods is the ability to predict new, unknown cases. The reduced sets (reducts) obtained by RST show that the most sensitive variables are thickness of layer, length of stope, rate of advance, number of miners, and type of advancing.
0805.1296
A Simple Dynamic Mind-map Framework To Discover Associative Relationships in Transactional Data Streams
cs.NE cs.SC
In this paper, we informally introduce dynamic mind-maps, which represent a new approach based on the dynamic construction of connectionist structures during the processing of a data stream. This allows the representation and processing of recursively defined structures and avoids the problem a traditional, fixed-size architecture has with input structures of unknown size. For data stream analysis with association discovery, the incremental analysis of data leads to results on demand. Here, we describe a framework that uses symbolic cells to calculate associations based on transactional data streams as they exist in, e.g., bibliographic databases. We follow a natural paradigm of applying simple operations on cells, yielding a mind-map structure that adapts over time.
0805.1319
Emergence, Competition and Dynamical Stabilization of Dissipative Rotating Spiral Waves in an Excitable Medium: A Computational Model Based on Cellular Automata
nlin.CG cs.NE nlin.AO
We report some qualitatively new features of the emergence, competition and dynamical stabilization of dissipative rotating spiral waves (RSWs) in the cellular-automaton model of laser-like excitable media proposed in arXiv:cond-mat/0410460v2 and arXiv:cond-mat/0602345. Some of the observed features are caused by an unusual mechanism of excitation vorticity when the RSW's core gets into the surface layer of an active medium. Instead of the well-known scenario of RSW collapse, which takes place after the collision of an RSW's core with an absorbing boundary, we observed complicated transformations of the core leading to regeneration (nonlinear "reflection" from the boundary) of the RSW, or even to the birth of several new RSWs in the surface layer. Computer experiments on the bottlenecked evolution of such RSW ensembles (vortex matter) are reported, and a possible explanation of real experiments on spin-lattice relaxation in dilute paramagnets is proposed on the basis of an analysis of the RSW dynamics. Chimera states in RSW ensembles are revealed and compared with analogous states in ensembles of nonlocally coupled oscillators. Generally, our computer experiments have shown that vortex-matter states in laser-like excitable media share some important features with aggregate states of ordinary matter.
0805.1327
Bit-Interleaved Coded Modulation Revisited: A Mismatched Decoding Perspective
cs.IT math.IT
We revisit the information-theoretic analysis of bit-interleaved coded modulation (BICM) by modeling the BICM decoder as a mismatched decoder. The mismatched decoding model is well-defined for finite, yet arbitrary, block lengths, and naturally captures the channel memory among the bits belonging to the same symbol. We give two independent proofs of the achievability of the BICM capacity calculated by Caire et al., where BICM was modeled as a set of independent parallel binary-input channels whose output is the bitwise log-likelihood ratio. Our first achievability proof uses typical sequences, and shows that due to the random coding construction, the interleaver is not required. The second proof is based on the random coding error exponents with mismatched decoding, where the largest achievable rate is the generalized mutual information. We show that the generalized mutual information of the mismatched decoder coincides with the infinite-interleaver BICM capacity. We also show that the error exponent - and hence the cutoff rate - of the BICM mismatched decoder is upper bounded by that of coded modulation and may thus be lower than in the infinite-interleaver model. We also consider the mutual information appearing in the analysis of iterative decoding of BICM with EXIT charts. We show that the corresponding symbol metric has knowledge of the transmitted symbol and that the EXIT mutual information admits a representation as a pseudo-generalized mutual information, which is in general not achievable. A different symbol decoding metric, for which the extrinsic side information refers to the hypothesized symbol, induces a generalized mutual information lower than the coded modulation capacity.
0805.1340
On the Secure Degrees of Freedom in the K-User Gaussian Interference Channel
cs.IT math.IT
This paper studies the K-user Gaussian interference channel with secrecy constraints. Two distinct network models, namely the interference channel with confidential messages and the one with an external eavesdropper, are analyzed. Using interference alignment along with secrecy pre-coding at each transmitter, it is shown that each user in the network can achieve non-zero secure Degrees of Freedom (DoFs) in both scenarios. In particular, the proposed coding scheme achieves (K-2)/(2K-2) secure DoFs for each user in the interference channel with confidential messages model, and (K-2)/2K secure DoFs in the case of an external eavesdropper. The fundamental difference between the two scenarios stems from the lack of channel state information (CSI) about the external eavesdropper. Remarkably, the results establish the positive impact of interference on the secrecy capacity of wireless networks.
0805.1437
On the Spectrum of Large Random Hermitian Finite-Band Matrices
cs.IT math.IT
The open problem of calculating the limiting spectrum (or its Shannon transform) of increasingly large random Hermitian finite-band matrices is described. In general, these matrices include a finite number of non-zero diagonals around their main diagonal regardless of their size. Two different communication setups which may be modeled using such matrices are presented: a simple cellular uplink channel, and a time varying inter-symbol interference channel. Selected recent information-theoretic works dealing directly with such channels are reviewed. Finally, several characteristics of the still unknown limiting spectrum of such matrices are listed, and some reflections are touched upon.
0805.1442
How Many Users should be Turned On in a Multi-Antenna Broadcast Channel?
cs.IT math.IT
This paper considers broadcast channels with L antennas at the base station and m single-antenna users, where L and m are typically of the same order. We assume that only partial channel state information is available at the base station through a finite rate feedback. Our key observation is that the optimal number of on-users (users turned on), say s, is a function of signal-to-noise ratio (SNR) and feedback rate. In support of this, an asymptotic analysis is employed where L, m and the feedback rate approach infinity linearly. We derive the asymptotic optimal feedback strategy as well as a realistic criterion to decide which users should be turned on. The corresponding asymptotic throughput per antenna, which we define as the spatial efficiency, turns out to be a function of the number of on-users s, and therefore s must be chosen appropriately. Based on the asymptotics, a scheme is developed for systems with finitely many antennas and users. Compared with other studies in which s is presumed constant, our scheme achieves a significant gain. Furthermore, our analysis and scheme are valid for heterogeneous systems where different users may have different path loss coefficients and feedback rates.
0805.1473
A Fast Algorithm and Datalog Inexpressibility for Temporal Reasoning
cs.AI cs.LO
We introduce a new tractable temporal constraint language, which strictly contains the Ord-Horn language of Buerkert and Nebel and the class of AND/OR precedence constraints. The algorithm we present for this language decides whether a given set of constraints is consistent in time that is quadratic in the input size. We also prove that (unlike Ord-Horn) this language cannot be solved by Datalog or by establishing local consistency.
0805.1480
On-line Learning of an Unlearnable True Teacher through Mobile Ensemble Teachers
cond-mat.dis-nn cs.LG
On-line learning in a hierarchical learning model is studied by a method from statistical mechanics. In our model, a student, a simple perceptron, learns not from the true teacher directly, but from ensemble teachers who learn from the true teacher with a perceptron learning rule. Since the true teacher and the ensemble teachers are expressed as a non-monotonic perceptron and simple perceptrons, respectively, the ensemble teachers go around the unlearnable true teacher with the distance between them fixed in an asymptotic steady state. The generalization performance of the student is shown to exceed that of the ensemble teachers in a transient state, as was shown in similar ensemble-teachers models. Further, it is found that moving the ensemble teachers even in the steady state, in contrast to keeping them fixed, is efficient for the performance of the student.
0805.1485
Distributed MIMO Systems with Oblivious Antennas
cs.IT math.IT
A scenario in which a single source communicates with a single destination via a distributed MIMO transceiver is considered. The source operates each of the transmit antennas via finite-capacity links, and likewise the destination is connected to the receiving antennas through capacity-constrained channels. Targeting a nomadic communication scenario, in which the distributed MIMO transceiver is designed to serve different standards or services, transmitters and receivers are assumed to be oblivious to the encoding functions shared by source and destination. Adopting a Gaussian symmetric interference network as the channel model (as for regularly placed transmitters and receivers), achievable rates are investigated and compared with an upper bound. It is concluded that in certain asymptotic and non-asymptotic regimes obliviousness of transmitters and receivers does not cause any loss of optimality.
0805.1487
A Time Efficient Indexing Scheme for Complex Spatiotemporal Retrieval
cs.DB cs.DS
The paper is concerned with the time efficient processing of spatiotemporal predicates, i.e. spatial predicates associated with an exact temporal constraint. A set of such predicates forms a buffer query or a Spatio-temporal Pattern (STP) Query with time. In the more general case of an STP query, the temporal dimension is introduced via the relative order of the spatial predicates (STP queries with order). Therefore, the efficient processing of a spatiotemporal predicate is crucial for the efficient implementation of more complex queries of practical interest. We propose an extension of a known approach, suitable for processing spatial predicates, which has been used for the efficient manipulation of STP queries with order. The extended method is supported by efficient indexing structures. We also provide experimental results that show the efficiency of the technique.
0805.1593
On the Probability Distribution of Superimposed Random Codes
cs.DB cs.DM cs.IT math.IT
A systematic study of the probability distribution of superimposed random codes is presented through the use of generating functions. Special attention is paid to the cases of either uniformly distributed but not necessarily independent, or non-uniform but independent, bit structures. Recommendations for optimal coding strategies are derived.
0805.1662
Eliminating Trapping Sets in Low-Density Parity Check Codes by using Tanner Graph Covers
cs.IT math.IT
We discuss error floor asymptotics and present a method for improving the performance of low-density parity check (LDPC) codes in the high-SNR (error floor) region. The method is based on Tanner graph covers that do not contain the trapping sets of the original code. The advantages of the method are that it is universal, as it can be applied to any LDPC code/channel/decoding algorithm, and that it improves performance at the expense of increasing the code length, without losing the code regularity, without changing the decoding algorithm, and, under certain conditions, without lowering the code rate. The proposed method can also be modified to construct convolutional LDPC codes. The method is illustrated by modifying Tanner, MacKay and Margulis codes to improve performance on the binary symmetric channel (BSC) under the Gallager B decoding algorithm. Decoding results on the AWGN channel are also presented to illustrate that optimizing codes for one channel/decoding algorithm can lead to performance improvements on other channels.
0805.1696
Grammatical Evolution with Restarts for Fast Fractal Generation
cs.NE cs.SC
In a previous work, the authors proposed a Grammatical Evolution algorithm to automatically generate Lindenmayer Systems which represent fractal curves with a pre-determined fractal dimension. This paper gives strong statistical evidence that the probability distribution of that algorithm's execution time exhibits a heavy tail with a hyperbolic probability decay for long executions, which explains the erratic performance across different executions of the algorithm. Three different restart strategies have been incorporated into the algorithm to mitigate the problems associated with heavy-tail distributions: the first assumes full knowledge of the execution-time probability distribution, while the second and third assume no knowledge. These strategies exploit the fact that the probability of finding a solution in short executions is non-negligible, and they yield a severe reduction both in the expected execution time (up to one order of magnitude) and in its variance, which is reduced from an infinite to a finite value.
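The value of restarts under heavy-tailed runtimes can be illustrated with a toy simulation; the Pareto runtime model, cutoff value, and seed below are assumptions for illustration, not the paper's measured distribution:

```python
import random

def run_once(rng):
    """Runtime drawn from a heavy-tailed Pareto law: P(T > t) ~ t^-1.1."""
    return rng.paretovariate(1.1)

def run_with_restarts(rng, cutoff):
    """Kill and restart any run exceeding `cutoff`; return total time
    until one run finishes within the cutoff."""
    total = 0.0
    while True:
        t = run_once(rng)
        if t <= cutoff:
            return total + t
        total += cutoff

rng = random.Random(42)
plain = sum(run_once(rng) for _ in range(20000)) / 20000
rng = random.Random(42)
restarted = sum(run_with_restarts(rng, cutoff=3.0) for _ in range(20000)) / 20000
# Short successful runs are common enough that restarting sharply
# reduces the mean (and makes the variance finite).
print(plain, restarted)
```
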
0805.1715
Isotropy, entropy, and energy scaling
cs.IT math.IT nlin.AO
Two principles explain emergence. First, in the Receipt's reference frame, Deg(S) = 4/3 Deg(R), where Supply S is an isotropic radiative energy source, Receipt R receives S's energy, and Deg is a system's degrees of freedom based on its mean path length. S's 1/3 more degrees of freedom relative to R enables R's growth and increasing complexity. Second, rho(R) = Deg(R) times rho(r), where rho(R) represents the collective rate of R and rho(r) represents the rate of an individual in R: as Deg(R) increases due to the first principle, the multiplier effect of networking in R increases. A universe like ours with isotropic energy distribution, in which both principles are operative, is therefore predisposed to exhibit emergence, and, for reasons shown, a ubiquitous role for the natural logarithm.
0805.1727
Swarm-Based Spatial Sorting
cs.AI cs.MA
Purpose: To present an algorithm for spatially sorting objects into an annular structure. Design/Methodology/Approach: A swarm-based model that requires only stochastic agent behaviour coupled with a pheromone-inspired "attraction-repulsion" mechanism. Findings: The algorithm consistently generates high-quality annular structures, and is particularly powerful in situations where the initial configuration of objects is similar to those observed in nature. Research limitations/implications: Experimental evidence supports previous theoretical arguments about the nature and mechanism of spatial sorting by insects. Practical implications: The algorithm may find applications in distributed robotics. Originality/value: The model offers a powerful minimal algorithmic framework, and also sheds further light on the nature of attraction-repulsion algorithms and underlying natural processes.
0805.1785
Distributed Self Management for Distributed Security Systems
cs.MA cs.AI
Distributed systems such as artificial immune systems, complex adaptive systems, or multi-agent systems are widely used in computer science, e.g. for network security, optimisation, or simulation. In these systems, small entities move through the network and perform certain tasks. At some point, an entity moves to another place and therefore requires information about where moving is most profitable. Commonly used systems either provide no such information or use a centralised approach in which a center delegates the entities. This article discusses whether a small amount of information about the neighbours enhances the performance of the overall system or not. To this end, two information protocols are introduced and analysed. In addition, the protocols are implemented and tested using the artificial immune system SANA, which protects a network against intrusions.
0805.1786
Next Challenges in Bringing Artificial Immune Systems to Production in Network Security
cs.MA cs.AI
The human immune system protects the human body against various pathogens, e.g. biological viruses and bacteria. Artificial immune systems reuse the architecture, organization, and workflows of the human immune system for various problems in computer science. In network security, artificial immune systems are used to secure a network and its nodes against intrusions like viruses, worms, and trojans. However, these approaches are far from production: they are academic proof-of-concept implementations, or they use only a small part of the approach to protect against a certain intrusion. This article discusses the steps required to bring artificial immune systems into production in the network security domain. It furthermore identifies the challenges, and provides a description and results of the prototype of an artificial immune system called SANA.
0805.1787
A Network Protection Framework through Artificial Immunity
cs.MA cs.CR
Current network protection systems use a collection of intelligent components - e.g. classifiers or rule-based firewall systems - to detect intrusions and anomalies and to secure a network against viruses, worms, or trojans. However, these systems rely on individual components and support an architecture with little collaborative work among the protection components. They give little administrative support for maintenance, and they present a large number of individual single points of failure - an ideal situation for network attacks to succeed. In this work, we discuss the required features, the performance, and the problems of a distributed protection system called {\it SANA}. It consists of a cooperative architecture motivated by the human immune system, in which the components correspond to artificial immune cells that are connected for collaborative work. SANA promises better protection against intruders than commonly known protection systems through adaptive self-management, while using resources efficiently through an intelligent reduction of redundancies. We introduce a library of several novel and commonly used protection components, and evaluate the performance of SANA with a proof-of-concept implementation.
0805.1788
Pedestrian Flow at Bottlenecks - Validation and Calibration of Vissim's Social Force Model of Pedestrian Traffic and its Empirical Foundations
cs.MA physics.soc-ph
In this contribution, first results of experiments on pedestrian flow through bottlenecks are presented and then compared to simulation results obtained with the Social Force Model in the Vissim simulation framework. Concerning the experiments, it is argued that the basic dependence between flow and bottleneck width is not a step function but linear, modified by the effect of a psychological phenomenon. The simulation results likewise show a linear dependence, and the parameters can be calibrated such that the absolute values for flow and time fit the range of experimental results.
0805.1806
Tuplix Calculus Specifications of Financial Transfer Networks
cs.CE cs.LO
We study the application of Tuplix Calculus in modular financial budget design. We formalize organizational structure using financial transfer networks. We consider the notion of flux of money over a network, and a way to enforce the matching of influx and outflux for parts of a network. We exploit so-called signed attribute notation to make internal streams visible through encapsulations. Finally, we propose a Tuplix Calculus construct for the definition of data functions.
0805.1827
Parallel Pricing Algorithms for Multi-Dimensional Bermudan/American Options using Monte Carlo methods
cs.DC cs.CE
In this paper we present two parallel Monte Carlo based algorithms for pricing multi-dimensional Bermudan/American options. The first approach relies on computation of the optimal exercise boundary, while the second relies on classification of continuation and exercise values. We also evaluate the performance of both algorithms in a desktop grid environment. We show the effectiveness of the proposed approaches in a heterogeneous computing environment, and identify scalability constraints due to the algorithmic structure.
0805.1844
Practical recipes for the model order reduction, dynamical simulation, and compressive sampling of large-scale open quantum systems
quant-ph cs.IT math.IT math.NA
This article presents numerical recipes for simulating high-temperature and non-equilibrium quantum spin systems that are continuously measured and controlled. The notion of a spin system is broadly conceived, in order to encompass macroscopic test masses as the limiting case of large-j spins. The simulation technique has three stages: first the deliberate introduction of noise into the simulation, then the conversion of that noise into an equivalent continuous measurement and control process, and finally, projection of the trajectory onto a state-space manifold having reduced dimensionality and possessing a Kahler potential of multi-linear form. The resulting simulation formalism is used to construct a positive P-representation for the thermal density matrix. Single-spin detection by magnetic resonance force microscopy (MRFM) is simulated, and the data statistics are shown to be those of a random telegraph signal with additive white noise. Larger-scale spin-dust models are simulated, having no spatial symmetry and no spatial ordering; the high-fidelity projection of numerically computed quantum trajectories onto low-dimensionality Kahler state-space manifolds is demonstrated. The reconstruction of quantum trajectories from sparse random projections is demonstrated, the onset of Donoho-Stodden breakdown at the Candes-Tao sparsity limit is observed, a deterministic construction for sampling matrices is given, and methods for quantum state optimization by Dantzig selection are given.
0805.1854
A New Algorithm for Interactive Structural Image Segmentation
cs.CV
This paper proposes a novel algorithm for the problem of structural image segmentation through an interactive model-based approach. Interaction is expressed in the model creation, which is done according to user traces drawn over a given input image. Both model and input are then represented by means of attributed relational graphs derived on the fly. Appearance features are taken into account as object attributes and structural properties are expressed as relational attributes. To cope with possible topological differences between both graphs, a new structure called the deformation graph is introduced. The segmentation process corresponds to finding a labelling of the input graph that minimizes the deformations introduced in the model when it is updated with input information. This approach has been shown to be faster than other segmentation methods, with competitive output quality. Therefore, the method solves the problem of multiple label segmentation in an efficient way. Encouraging results on both natural and target-specific color images, as well as examples showing the reusability of the model, are presented and discussed.
0805.1857
The Gaussian Many-Help-One Distributed Source Coding Problem
cs.IT math.IT
Jointly Gaussian memoryless sources are observed at N distinct terminals. The goal is to efficiently encode the observations in a distributed fashion so as to enable reconstruction of any one of the observations, say the first one, at the decoder subject to a quadratic fidelity criterion. Our main result is a precise characterization of the rate-distortion region when the covariance matrix of the sources satisfies a "tree-structure" condition. In this situation, a natural analog-digital separation scheme optimally trades off the distributed quantization rate tuples and the distortion in the reconstruction: each encoder consists of a point-to-point Gaussian vector quantizer followed by a Slepian-Wolf binning encoder. We also provide a partial converse that suggests that the tree structure condition is fundamental.
0805.2027
Rollout Sampling Approximate Policy Iteration
cs.LG cs.AI cs.CC
Several researchers have recently investigated the connection between reinforcement learning and classification. We are motivated by proposals of approximate policy iteration schemes without value functions, which focus on policy representation using classifiers and address policy learning as a supervised learning problem. This paper proposes variants of an improved policy iteration scheme which addresses the core sampling problem in evaluating a policy through simulation as a multi-armed bandit machine. The resulting algorithm offers performance comparable to that of the previous algorithm, achieved, however, with significantly less computational effort. An order of magnitude improvement is demonstrated experimentally in two standard reinforcement learning domains: inverted pendulum and mountain-car.
0805.2045
Semantic Analysis of Tag Similarity Measures in Collaborative Tagging Systems
cs.DL cs.IR
Social bookmarking systems allow users to organise collections of resources on the Web in a collaborative fashion. The increasing popularity of these systems as well as first insights into their emergent semantics have made them relevant to disciplines like knowledge extraction and ontology learning. The problem of devising methods to measure the semantic relatedness between tags and characterizing it semantically is still largely open. Here we analyze three measures of tag relatedness: tag co-occurrence, cosine similarity of co-occurrence distributions, and FolkRank, an adaptation of the PageRank algorithm to folksonomies. Each measure is computed on tags from a large-scale dataset crawled from the social bookmarking system del.icio.us. To provide a semantic grounding of our findings, a connection to WordNet (a semantic lexicon for the English language) is established by mapping tags into synonym sets of WordNet, and applying there well-known metrics of semantic similarity. Our results clearly expose different characteristics of the selected measures of relatedness, making them applicable to different subtasks of knowledge extraction such as synonym detection or discovery of concept hierarchies.
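The first two measures are straightforward to compute from raw tagging data; the following sketch uses a toy set of posts (not the del.icio.us corpus), and excluding the direct pair entry from the co-occurrence vectors is one common convention, not necessarily the paper's:

```python
from collections import Counter
from itertools import combinations
from math import sqrt

# Toy posts: each is the set of tags one user attached to one resource.
posts = [
    {"python", "programming", "tutorial"},
    {"python", "programming", "web"},
    {"web", "design", "css"},
    {"python", "tutorial"},
]

# Measure 1: raw tag co-occurrence counts (symmetric).
cooc = Counter()
for tags in posts:
    for a, b in combinations(sorted(tags), 2):
        cooc[a, b] += 1
        cooc[b, a] += 1

def cosine(t1, t2, vocab):
    """Measure 2: cosine similarity of the co-occurrence distributions
    of two tags, excluding the direct t1-t2 entries."""
    v1 = [cooc[t1, w] for w in vocab if w not in (t1, t2)]
    v2 = [cooc[t2, w] for w in vocab if w not in (t1, t2)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = sqrt(sum(x * x for x in v1))
    n2 = sqrt(sum(x * x for x in v2))
    return dot / (n1 * n2) if n1 and n2 else 0.0

vocab = sorted({t for p in posts for t in p})
print(cooc["python", "programming"])        # direct co-occurrence count
print(cosine("python", "tutorial", vocab))  # distributional similarity
```
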
0805.2105
On Emergence of Dominating Cliques in Random Graphs
math.CO cs.IT math.IT
The emergence of dominating cliques in the Erd\H{o}s-R\'enyi random graph model $\mathcal{G}(n,p)$ is investigated in this paper. It is shown that this phenomenon possesses a phase transition. Namely, we argue that, given a constant probability $p$, an $n$-node random graph $G$ from $\mathcal{G}(n,p)$, and $r = c \log_{1/p} n$ with $1 \leq c \leq 2$: (1) if $p > 1/2$, then an $r$-node clique is dominating in $G$ almost surely; and (2) if $p \leq (3 - \sqrt{5})/2$, then an $r$-node clique is not dominating in $G$ almost surely. The remaining range of the probability $p$ is discussed with more attention. A detailed study shows that this question is settled by examining the sub-logarithmic growth of $r$ with $n$.
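As an illustration of the notions involved, a brute-force check for an r-node dominating clique in a small graph (adjacency given as a dict of neighbour sets; only feasible for tiny graphs, not the random-graph asymptotics studied above) might look like:

```python
import itertools

def is_dominating(graph, clique):
    """A vertex set S dominates G if every vertex outside S has a neighbour in S."""
    return all(any(u in graph[v] for u in clique) for v in graph if v not in clique)

def find_dominating_clique(graph, r):
    """Brute-force search for an r-node clique that dominates the graph
    (graph given as a dict: vertex -> set of neighbours)."""
    for cand in itertools.combinations(graph, r):
        # cand must be a clique (all pairs adjacent) and dominate the rest.
        if all(b in graph[a] for a, b in itertools.combinations(cand, 2)) \
                and is_dominating(graph, cand):
            return set(cand)
    return None
```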
0805.2185
Path Diversity over Packet Switched Networks: Performance Analysis and Rate Allocation
cs.NI cs.IT math.IT
Path diversity works by setting up multiple parallel connections between the end points using the topological path redundancy of the network. In this paper, \textit{Forward Error Correction} (FEC) is applied across multiple independent paths to enhance the end-to-end reliability. Network paths are modeled as erasure Gilbert-Elliott channels. It is known that over any erasure channel, \textit{Maximum Distance Separable} (MDS) codes achieve the minimum probability of irrecoverable loss among all block codes of the same size. Based on the adopted model for the error behavior, we prove that the probability of irrecoverable loss for MDS codes decays exponentially for an asymptotically large number of paths. Then, the optimal rate allocation problem is solved for the asymptotic case where the number of paths is large. Moreover, it is shown that in such an asymptotically optimal rate allocation, each path is assigned a positive rate \textit{iff} its quality is above a certain threshold, where the quality of a path is defined as the percentage of time it spends in the bad state. Finally, using dynamic programming, a heuristic suboptimal algorithm with polynomial runtime is proposed for rate allocation over a finite number of paths. This algorithm converges to the asymptotically optimal rate allocation when the number of paths is large. The simulation results show that the proposed algorithm approximates the optimal rate allocation (found by exhaustive search) very closely for a practical number of paths, and provides significant performance improvement over alternative rate allocation schemes.
0805.2199
Constraint Complexity of Realizations of Linear Codes on Arbitrary Graphs
cs.DM cs.IT math.IT
A graphical realization of a linear code C consists of an assignment of the coordinates of C to the vertices of a graph, along with a specification of linear state spaces and linear ``local constraint'' codes to be associated with the edges and vertices, respectively, of the graph. The $\kappa$-complexity of a graphical realization is defined to be the largest dimension of any of its local constraint codes. $\kappa$-complexity is a reasonable measure of the computational complexity of a sum-product decoding algorithm specified by a graphical realization. The main focus of this paper is on the following problem: given a linear code C and a graph G, how small can the $\kappa$-complexity of a realization of C on G be? As useful tools for attacking this problem, we introduce the Vertex-Cut Bound, and the notion of ``vc-treewidth'' for a graph, which is closely related to the well-known graph-theoretic notion of treewidth. Using these tools, we derive tight lower bounds on the $\kappa$-complexity of any realization of C on G. Our bounds enable us to conclude that good error-correcting codes can have low-complexity realizations only on graphs with large vc-treewidth. Along the way, we also prove the interesting result that the ratio of the $\kappa$-complexity of the best conventional trellis realization of a length-n code C to the $\kappa$-complexity of the best cycle-free realization of C grows at most logarithmically with the codelength n. Such a logarithmic growth rate is, in fact, achievable.
0805.2303
Graph Algorithms for Improving Type-Logical Proof Search
cs.CL
Proof nets are a graph-theoretical representation of proofs in various fragments of type-logical grammar. In spite of this basis in graph theory, there has been relatively little attention to the use of graph-theoretic algorithms for type-logical proof search. In this paper we will look at several ways in which standard graph-theoretic algorithms can be used to restrict the search space. In particular, we will provide an O(n^4) algorithm for selecting an optimal axiom link at any stage in the proof search, as well as an O(kn^3) algorithm for selecting the k best proof candidates.
0805.2308
Toward Fuzzy block theory
cs.AI
This study surveys the fundamentals of fuzzy block theory and its application to the assessment of stability in underground openings. The fundamentals of fuzzy block theory are presented by introducing fuzzy concepts into key block theory in two ways. In the indirect combination, by coupling an adaptive Neuro-Fuzzy Inference System (NFIS) with classic block theory, we could extract the possibly damaged parts around a tunnel. In the direct solution, some principles of block theory were rewritten by means of various fuzzy facet theories.
0805.2324
A multilateral filtering method applied to airplane runway image
cs.CV
Considering the features of airport runway image filtering, an improved bilateral filtering method is proposed that removes noise while preserving edges. First, a steerable filter decomposition is used to calculate the sub-band parameters for four orientations, and a texture feature matrix is then obtained from the local median energy of the sub-bands. Texture-similarity, spatial-proximity and color-similarity functions are combined to filter the image. The effect of the weighting-function parameters is also analyzed qualitatively. Simulation results on real airport runway images show that the proposed multilateral filtering is more effective than the standard bilateral filtering.
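For readers unfamiliar with the baseline method, a minimal sketch of the *standard* bilateral filter (not the proposed multilateral variant, and ignoring the steerable texture term) on a grayscale image could be:

```python
from math import exp

def bilateral_filter(img, radius=1, sigma_s=1.0, sigma_r=25.0):
    """Plain bilateral filter on a 2-D grayscale image (list of lists):
    each output pixel is a weighted mean of its neighbours, with weights
    decaying in both spatial distance and intensity difference."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            num = den = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        ws = exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
                        wr = exp(-(img[ny][nx] - img[y][x]) ** 2 / (2 * sigma_r ** 2))
                        num += ws * wr * img[ny][nx]
                        den += ws * wr
            out[y][x] = num / den
    return out
```

With a small range parameter `sigma_r`, pixels across a strong edge get near-zero weight, which is the edge-preserving behaviour the abstract refers to.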
0805.2362
An optimization problem on the sphere
cs.LG cs.CG
We prove existence and uniqueness of the minimizer for the average geodesic distance to the points of a geodesically convex set on the sphere. This implies a corresponding existence and uniqueness result for an optimal algorithm for halfspace learning, when data and target functions are drawn from the uniform distribution.
0805.2368
A Kernel Method for the Two-Sample Problem
cs.LG cs.AI
We propose a framework for analyzing and comparing distributions, allowing us to design statistical tests to determine if two samples are drawn from different distributions. Our test statistic is the largest difference in expectations over functions in the unit ball of a reproducing kernel Hilbert space (RKHS). We present two tests based on large deviation bounds for the test statistic, while a third is based on the asymptotic distribution of this statistic. The test statistic can be computed in quadratic time, although efficient linear time approximations are available. Several classical metrics on distributions are recovered when the function space used to compute the difference in expectations is allowed to be more general (e.g., a Banach space). We apply our two-sample tests to a variety of problems, including attribute matching for databases using the Hungarian marriage method, where they perform strongly. Excellent performance is also obtained when comparing distributions over graphs, for which these are the first such tests.
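A minimal sketch of this kind of kernel two-sample statistic (often called the maximum mean discrepancy, MMD) in its biased, quadratic-time form with a Gaussian kernel, for scalar samples, might be:

```python
from math import exp

def gaussian_kernel(x, y, sigma=1.0):
    """Gaussian RBF kernel on scalars."""
    return exp(-(x - y) ** 2 / (2 * sigma ** 2))

def mmd2_biased(xs, ys, k=gaussian_kernel):
    """Biased estimate of the squared MMD:
    mean k(x,x') + mean k(y,y') - 2 mean k(x,y)."""
    m, n = len(xs), len(ys)
    kxx = sum(k(a, b) for a in xs for b in xs) / (m * m)
    kyy = sum(k(a, b) for a in ys for b in ys) / (n * n)
    kxy = sum(k(a, b) for a in xs for b in ys) / (m * n)
    return kxx + kyy - 2 * kxy
```

The statistic is near zero when the two samples come from the same distribution and grows as the samples separate; the paper's tests calibrate a rejection threshold for it.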
0805.2379
Linear code-based vector quantization for independent random variables
cs.IT math.IT
In this paper we analyze the rate-distortion function R(D) achievable using linear codes over GF(q), where q is a prime number.
0805.2422
Transceiver Pair Designs for Multiple Access Channels under Fixed Sum Mutual Information using MMSE Decision Feedback Detection
cs.IT math.IT
In this paper, we consider the joint design of the transceivers for a multiple access Multiple-Input Multiple-Output (MIMO) system with Inter-Symbol Interference (ISI) channels. The system we consider is equipped with the Minimum Mean Square Error (MMSE) Decision-Feedback (DF) detector. Traditionally, transmitter designs for this system have been based on constraints on either the transmission power or the signal-to-interference-and-noise ratio (SINR) for each user. Here, we explore a novel perspective and examine a transceiver design that is based on a fixed sum mutual information constraint and minimizes the arithmetic average of the mean square error of MMSE decision-feedback detection. For this optimization problem, a closed-form solution is obtained; the optimum is achieved if and only if the averaged sum mutual information is uniformly distributed over each active subchannel, while the mutual information of the currently detected user is uniformly distributed over each individual symbol within that user's signal block, assuming all the previous users' signals have been perfectly detected.
0805.2423
Green Codes: Energy-Efficient Short-Range Communication
cs.IT math.IT
A green code attempts to minimize the total energy per-bit required to communicate across a noisy channel. The classical information-theoretic approach neglects the energy expended in processing the data at the encoder and the decoder and only minimizes the energy required for transmissions. Since there is no cost associated with using more degrees of freedom, the traditionally optimal strategy is to communicate at rate zero. In this work, we use our recently proposed model for the power consumed by iterative message passing. Using generalized sphere-packing bounds on the decoding power, we find lower bounds on the total energy consumed in the transmissions and the decoding, allowing for freedom in the choice of the rate. We show that contrary to the classical intuition, the rate for green codes is bounded away from zero for any given error probability. In fact, as the desired bit-error probability goes to zero, the optimizing rate for our bounds converges to 1.
0805.2427
On Trapping Sets and Guaranteed Error Correction Capability of LDPC Codes and GLDPC Codes
cs.IT math.IT
The relation between the girth and the guaranteed error correction capability of $\gamma$-left regular LDPC codes when decoded using the bit flipping (serial and parallel) algorithms is investigated. A lower bound on the size of variable node sets which expand by a factor of at least $3 \gamma/4$ is found based on the Moore bound. An upper bound on the guaranteed error correction capability is established by studying the sizes of smallest possible trapping sets. The results are extended to generalized LDPC codes. It is shown that generalized LDPC codes can correct a linear fraction of errors under the parallel bit flipping algorithm when the underlying Tanner graph is a good expander. It is also shown that the bound cannot be improved when $\gamma$ is even by studying a class of trapping sets. A lower bound on the size of variable node sets which have the required expansion is established.
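As background, a Gallager-style parallel bit-flipping decoder of the kind analyzed above can be sketched as follows (the flip threshold of "more than half of a bit's checks unsatisfied" is one common choice, a simplifying assumption here):

```python
def parallel_bit_flip(H, y, max_iters=20):
    """Parallel bit flipping on a binary parity-check matrix H (list of rows):
    in each round, flip every bit involved in more unsatisfied than satisfied
    parity checks; stop when the syndrome is zero or nothing flips."""
    y = list(y)
    m, n = len(H), len(H[0])
    for _ in range(max_iters):
        syndrome = [sum(H[i][j] & y[j] for j in range(n)) % 2 for i in range(m)]
        if not any(syndrome):
            return y  # valid codeword reached
        flips = []
        for j in range(n):
            checks = [i for i in range(m) if H[i][j]]
            unsat = sum(syndrome[i] for i in checks)
            if 2 * unsat > len(checks):
                flips.append(j)
        if not flips:
            break  # trapped: no bit clears its majority threshold
        for j in flips:
            y[j] ^= 1
    return y
```

Received words on which the loop exits without a zero syndrome correspond to the trapping sets whose sizes the paper bounds.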
0805.2440
Analysis of hydrocyclone performance based on information granulation theory
cs.AI
This paper describes the application of information granulation theory to the analysis of hydrocyclone performance. Using a combination of a Self-Organizing Map (SOM) and a Neuro-Fuzzy Inference System (NFIS), crisp and fuzzy granules are obtained (briefly called SONFIS). The balancing of crisp granules and sub-fuzzy granules within non-fuzzy information (initial granulation) is carried out in an open-close iteration. Using two criteria, "simplicity of rules" and "adaptive threshold error level", the stability of the algorithm is guaranteed. The proposed method is validated on the hydrocyclone data set.
0805.2537
A toolkit for a generative lexicon
cs.CL
In this paper we describe the design of a software toolkit for the construction, maintenance and collaborative use of a Generative Lexicon. To ease its portability and encourage widespread use, the tool was built with free and open-source products. We eventually tested the toolkit and showed that it filters the adequate form of anaphoric reference to the modifier in endocentric compounds.
0805.2629
On Full Diversity Space-Time Block Codes with Partial Interference Cancellation Group Decoding
cs.IT math.IT
In this paper, we propose a partial interference cancellation (PIC) group decoding for linear dispersive space-time block codes (STBC), and a design criterion for the codes to achieve full diversity when PIC group decoding is used at the receiver. PIC group decoding decodes the symbols embedded in an STBC by dividing them into several groups and decoding each group separately after a linear PIC operation is applied. It can be viewed as intermediate between the maximum likelihood (ML) receiver, which decodes all the embedded symbols together (i.e., all the embedded symbols form a single group), and the zero-forcing (ZF) receiver, which decodes all the embedded symbols separately and independently (i.e., each group contains exactly one embedded symbol) after the ZF operation is applied. Our proposed design criterion for the PIC group decoding to achieve full diversity is an intermediate condition between the loosest ML full-rank criterion on codewords and the strongest ZF linear-independence condition on the column vectors of the equivalent channel matrix. We also propose an asymptotically optimal (AO) group decoding algorithm, which is intermediate between the MMSE decoding algorithm and the ML decoding algorithm; the design criterion for the PIC group decoding also applies to the AO group decoding algorithm. It is well known that the symbol rate of a full-rank linear STBC can be full, i.e., n_t for n_t transmit antennas. It has recently been shown that the rate is upper bounded by 1 if a code achieves full diversity with a linear receiver. The intermediate criterion proposed in this paper opens the possibility of codes with rates between n_t and 1 that achieve full diversity with PIC group decoding, and therefore provides a complexity-performance-rate tradeoff.
0805.2641
On the Capacity of the Diamond Half-Duplex Relay Channel
cs.IT math.IT
We consider a diamond-shaped dual-hop communication system consisting of a source, two parallel half-duplex relays, and a destination. In a single-antenna configuration, it has previously been shown that a two-phase node-scheduling algorithm, along with the decode-and-forward strategy, can achieve the capacity of the diamond channel for certain symmetric channel gains [1]. In this paper, we obtain a more general condition for the optimality of the scheme in terms of power resources and channel gains. In particular, it is proved that if the products of the capacities of the simultaneously active links are equal in both transmission phases, the scheme achieves the capacity of the channel.
0805.2671
Finger Indexed Sets: New Approaches
cs.DS cs.DB
For the particular case of insertions/deletions at the tail of a given set S of $n$ one-dimensional elements, we present a simpler and more concrete algorithm than that of [Anderson, 2007], achieving the same (but amortized) upper bound of $O(\sqrt{\log d/\log\log d})$ for finger search queries, where $d$ is the number of sorted keys between the finger element and the target element. Furthermore, for the general case of insertions/deletions anywhere, we present a new randomized algorithm achieving the same bounds in expectation. Even though the new solutions achieve the optimal bounds only in the amortized or expected case, their simplicity is of great importance for the practical merits we gain.
0805.2690
Increasing Linear Dynamic Range of Commercial Digital Photocamera Used in Imaging Systems with Optical Coding
cs.CV
Methods of increasing the linear optical dynamic range of a commercial photocamera for optical-digital imaging systems are described. Such methods allow commercial photocameras to be used for optical measurements. Experimental results are reported.
0805.2691
Equivalent characterizations of partial randomness for a recursively enumerable real
cs.IT cs.CC math.IT math.LO
A real number \alpha is called recursively enumerable if there exists a computable, increasing sequence of rational numbers which converges to \alpha. The randomness of a recursively enumerable real \alpha can be characterized in various ways using each of the notions; program-size complexity, Martin-L\"{o}f test, Chaitin's \Omega number, the domination and \Omega-likeness of \alpha, the universality of a computable, increasing sequence of rational numbers which converges to \alpha, and universal probability. In this paper, we generalize these characterizations of randomness over the notion of partial randomness by parameterizing each of the notions above by a real number T\in(0,1]. We thus present several equivalent characterizations of partial randomness for a recursively enumerable real number.
0805.2752
The Margitron: A Generalised Perceptron with Margin
cs.LG
We identify the classical Perceptron algorithm with margin as a member of a broader family of large margin classifiers which we collectively call the Margitron. The Margitron, despite sharing the same update rule with the Perceptron, is shown in an incremental setting to converge in a finite number of updates to solutions possessing any desirable fraction of the maximum margin. Experiments comparing the Margitron with decomposition SVMs on tasks involving linear kernels and 2-norm soft margin are also reported.
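A minimal sketch of the shared Perceptron-with-margin update rule (fixed target margin and learning rate; the Margitron's incremental margin schedule is not reproduced here) might be:

```python
def perceptron_with_margin(data, margin=0.5, eta=0.1, epochs=100):
    """Classical perceptron with margin: update whenever the signed score
    fails to exceed the target margin, not only on outright mistakes.
    data is a list of (feature vector, label in {-1, +1}) pairs."""
    dim = len(data[0][0])
    w = [0.0] * dim
    for _ in range(epochs):
        updated = False
        for x, label in data:
            score = label * sum(wi * xi for wi, xi in zip(w, x))
            if score <= margin:
                for i in range(dim):
                    w[i] += eta * label * x[i]
                updated = True
        if not updated:
            break  # every point clears the margin
    return w
```

Setting `margin=0` recovers the plain Perceptron; a positive margin forces updates until all points are classified with slack, which is what yields a fraction of the maximum margin.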
0805.2775
Sample Selection Bias Correction Theory
cs.LG
This paper presents a theoretical analysis of sample selection bias correction. The sample bias correction technique commonly used in machine learning consists of reweighting the cost of an error on each training point of a biased sample to more closely reflect the unbiased distribution. This relies on weights derived by various estimation techniques based on finite samples. We analyze the effect of an error in that estimation on the accuracy of the hypothesis returned by the learning algorithm for two estimation techniques: a cluster-based estimation technique and kernel mean matching. We also report the results of sample bias correction experiments with several data sets using these techniques. Our analysis is based on the novel concept of distributional stability which generalizes the existing concept of point-based stability. Much of our work and proof techniques can be used to analyze other importance weighting techniques and their effect on accuracy when using a distributionally stable algorithm.
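A toy sketch of the reweighting idea described above, with weights assumed known exactly (whereas the paper analyzes the effect of *estimating* them), could be:

```python
def reweighted_mean(sample, weights):
    """Importance-weighted estimate of the unbiased mean from a biased sample,
    given per-point weights w(x) = p_true(x) / p_biased(x)."""
    return sum(w * x for x, w in zip(sample, weights)) / sum(weights)
```

For example, if a binary feature that is truly balanced (P=0.5 each) was sampled with an 80/20 bias, weighting each point by 0.5/0.8 or 0.5/0.2 recovers the unbiased mean of 0.5.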
0805.2812
Codeword-Independent Performance of Nonbinary Linear Codes Under Linear-Programming and Sum-Product Decoding
cs.IT math.IT
A coded modulation system is considered in which nonbinary coded symbols are mapped directly to nonbinary modulation signals. It is proved that if the modulator-channel combination satisfies a particular symmetry condition, the codeword error rate performance is independent of the transmitted codeword. It is shown that this result holds for both linear-programming decoders and sum-product decoders. In particular, this provides a natural modulation mapping for nonbinary codes mapped to PSK constellations for transmission over memoryless channels such as AWGN channels or flat fading channels with AWGN.
0805.2855
LCSH, SKOS and Linked Data
cs.DL cs.IR
A technique for converting Library of Congress Subject Headings MARCXML to Simple Knowledge Organization System (SKOS) RDF is described. Strengths of the SKOS vocabulary are highlighted, as well as possible points for extension, and the integration of other semantic web vocabularies such as Dublin Core. An application for making the vocabulary available as linked-data on the Web is also described.
0805.2891
Learning Low-Density Separators
cs.LG cs.AI
We define a novel, basic, unsupervised learning problem - learning the lowest density homogeneous hyperplane separator of an unknown probability distribution. This task is relevant to several problems in machine learning, such as semi-supervised learning and clustering stability. We investigate the question of existence of a universally consistent algorithm for this problem. We propose two natural learning paradigms and prove that, on input unlabeled random samples generated by any member of a rich family of distributions, they are guaranteed to converge to the optimal separator for that distribution. We complement this result by showing that no learning algorithm for our task can achieve uniform learning rates (that are independent of the data generating distribution).
0805.2949
Performability Aspects of the ATLAS VO Using the Lmbench Suite
cs.PF cs.CE cs.DC
The ATLAS Virtual Organization is the grid's largest Virtual Organization and is currently in full production stage. We make the case that a user working within that VO faces a wide spectrum of different systems, whose heterogeneity spans orders of magnitude according to a number of metrics, including integer/float operations, memory throughput (STREAM) and communication latencies. Furthermore, the spread of performance does not appear to follow any known distribution pattern, as demonstrated in graphs produced from measurements taken during May 2007. This implies that the current practice, in which either "all-WNs-are-equal" is assumed or the SPEC-based rating used by LCG/EGEE is applied, is an oversimplification that is inappropriate and expensive from an operational point of view; new techniques are therefore needed for optimal grid resource allocation.
0805.2995
Lossless Compression with Security Constraints
cs.IT math.IT
Secure distributed data compression in the presence of an eavesdropper is explored. Two correlated sources that need to be reliably transmitted to a legitimate receiver are available at separate encoders. Noise-free, limited rate links from the encoders to the legitimate receiver, one of which can also be perfectly observed by the eavesdropper, are considered. The eavesdropper also has its own correlated observation. Inner and outer bounds on the achievable compression-equivocation rate region are given. Several different scenarios involving the side information at the transmitters as well as multiple receivers/eavesdroppers are also considered.
0805.2996
Lossy Source Transmission over the Relay Channel
cs.IT math.IT
Lossy transmission over a relay channel in which the relay has access to correlated side information is considered. First, a joint source-channel decode-and-forward scheme is proposed for general discrete memoryless sources and channels. Then the Gaussian relay channel where the source and the side information are jointly Gaussian is analyzed. For this Gaussian model, several new source-channel cooperation schemes are introduced and analyzed in terms of the squared-error distortion at the destination. A comparison of the proposed upper bounds with the cut-set lower bound is given, and it is seen that joint source-channel cooperation improves the reconstruction quality significantly. Moreover, the performance of the joint code is close to the lower bound on distortion for a wide range of source and channel parameters.
0805.3005
High-dimensional subset recovery in noise: Sparsified measurements without loss of statistical efficiency
stat.ML cs.IT math.IT
We consider the problem of estimating the support of a vector $\beta^* \in \mathbb{R}^{p}$ based on observations contaminated by noise. A significant body of work has studied behavior of $\ell_1$-relaxations when applied to measurement matrices drawn from standard dense ensembles (e.g., Gaussian, Bernoulli). In this paper, we analyze \emph{sparsified} measurement ensembles, and consider the trade-off between measurement sparsity, as measured by the fraction $\gamma$ of non-zero entries, and the statistical efficiency, as measured by the minimal number of observations $n$ required for exact support recovery with probability converging to one. Our main result is to prove that it is possible to let $\gamma \to 0$ at some rate, yielding measurement matrices with a vanishing fraction of non-zeros per row while retaining the same statistical efficiency as dense ensembles. A variety of simulation results confirm the sharpness of our theoretical predictions.
0805.3082
Weakly Convergent Nonparametric Forecasting of Stationary Time Series
math.ST cs.IT math.IT stat.TH
The conditional distribution of the next outcome given the infinite past of a stationary process can be inferred from finite but growing segments of the past. Several schemes are known for constructing pointwise consistent estimates, but they all demand prohibitive amounts of input data. In this paper we consider real-valued time series and construct conditional distribution estimates that make much more efficient use of the input data. The estimates are consistent in a weak sense, and the question whether they are pointwise consistent is still open. For finite-alphabet processes one may rely on a universal data compression scheme like the Lempel-Ziv algorithm to construct conditional probability mass function estimates that are consistent in expected information divergence. Consistency in this strong sense cannot be attained in a universal sense for all stationary processes with values in an infinite alphabet, but weak consistency can. Some applications of the estimates to on-line forecasting, regression and classification are discussed.
0805.3091
A simple randomized algorithm for sequential prediction of ergodic time series
math.ST cs.IT math.IT stat.TH
We present a simple randomized procedure for the prediction of a binary sequence. The algorithm uses ideas from recent developments of the theory of the prediction of individual sequences. We show that if the sequence is a realization of a stationary and ergodic random process then the average number of mistakes converges, almost surely, to that of the optimum, given by the Bayes predictor. The desirable finite-sample properties of the predictor are illustrated by its performance for Markov processes. In such cases the predictor exhibits near optimal behavior even without knowing the order of the Markov process. Prediction with side information is also considered.
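A minimal order-1 Markov predictor of the kind used as a comparison point above (majority vote over past continuations of the current context; tie-breaking to 0 is an arbitrary choice, and the paper's randomized scheme is more refined) can be sketched as:

```python
from collections import defaultdict

def markov_predictor(bits, order=1):
    """Predict each next bit by majority vote over what followed the current
    length-`order` context so far; ties and unseen contexts default to 0.
    Returns the prediction sequence and the total number of mistakes."""
    counts = defaultdict(lambda: [0, 0])  # context -> [count of 0s, count of 1s]
    preds, mistakes = [], 0
    for t in range(len(bits)):
        ctx = tuple(bits[max(0, t - order):t])
        c = counts[ctx]
        p = 1 if c[1] > c[0] else 0
        preds.append(p)
        mistakes += int(p != bits[t])
        c[bits[t]] += 1  # update counts only after predicting
    return preds, mistakes
```

On a sequence that really is Markov of the given order, the empirical counts converge and the mistake rate approaches that of the Bayes predictor, illustrating the near-optimal behaviour mentioned in the abstract.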
0805.3118
Full Diversity Blind Signal Designs for Unique Identification of Frequency Selective Channels
cs.IT math.IT
In this paper, we develop two kinds of novel closed-form decompositions on phase shift keying (PSK) constellations by exploiting linear congruence equation theory: the one for factorizing a $pq$-PSK constellation into a product of a $p$-PSK constellation and a $q$-PSK constellation, and the other for decomposing a specific complex number into a difference of a $p$-PSK constellation and a $q$-PSK constellation. With this, we propose a simple signal design technique to blindly and uniquely identify frequency selective channels with zero-padded block transmission under noise-free environments by only using the first two block received signal vectors. Furthermore, a closed-form solution to determine the transmitted signals and the channel coefficients is obtained. In the Gaussian noise and Rayleigh fading environment, we prove that the newly proposed signaling scheme enables non-coherent full diversity for the Generalized Likelihood Ratio Test (GLRT) receiver.
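As an illustration of the first decomposition, for coprime p and q every pq-PSK point factors into a product of a p-PSK point and a q-PSK point via a linear congruence (brute-force search below; the paper derives a closed form):

```python
import cmath

def factor_psk_point(k, p, q):
    """For coprime p and q, write the pq-PSK point exp(2*pi*i*k/(p*q)) as a
    product of a p-PSK point and a q-PSK point by solving
    a*q + b*p = k (mod p*q)."""
    for a in range(p):
        for b in range(q):
            if (a * q + b * p) % (p * q) == k % (p * q):
                return a, b
    raise ValueError("no solution: p and q must be coprime")

# Example: split the 12-PSK point exp(2*pi*i*7/12) into a 3-PSK and a 4-PSK point.
p, q, k = 3, 4, 7
a, b = factor_psk_point(k, p, q)
lhs = cmath.exp(2j * cmath.pi * k / (p * q))
rhs = cmath.exp(2j * cmath.pi * a / p) * cmath.exp(2j * cmath.pi * b / q)
```

The exponents satisfy k/(pq) = a/p + b/q (mod 1), so the two unit-circle points multiply to the original one.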
0805.3126
Cognitive Architecture for Direction of Attention Founded on Subliminal Memory Searches, Pseudorandom and Nonstop
cs.AI cs.NE
By way of explaining how a brain works logically, human associative memory is modeled with logical and memory neurons, corresponding to standard digital circuits. The resulting cognitive architecture incorporates basic psychological elements such as short term and long term memory. Novel to the architecture are memory searches using cues chosen pseudorandomly from short term memory. Recalls alternated with sensory images, many tens per second, are analyzed subliminally as an ongoing process, to determine a direction of attention in short term memory.
0805.3164
To Code or Not To Code in Multi-Hop Relay Channels
cs.IT math.IT
Multi-hop relay channels use multiple relay stages, each with multiple relay nodes, to facilitate communication between a source and a destination. Previously, distributed space-time coding was used to maximize the diversity gain. Assuming a low-rate feedback link from the destination to each relay stage and the source, this paper proposes end-to-end antenna selection strategies as an alternative to distributed space-time coding. One-way (where only the source has data for the destination) and two-way (where the destination also has data for the source) multi-hop relay channels are considered, with both full-duplex and half-duplex relay nodes. End-to-end antenna selection strategies are designed and proven to achieve the maximum diversity gain by using a single-antenna path (one antenna of the source, of each relay stage, and of the destination) with the maximum signal-to-noise ratio at the destination. For the half-duplex case, the two single-antenna paths with the two best signal-to-noise ratios are used in alternate time slots to overcome the rate loss of half-duplex nodes, at a small diversity gain penalty. Finally, to answer the question of whether to code (distributed space-time coding) or not (the proposed end-to-end antenna selection strategy) in a multi-hop relay channel, the end-to-end antenna selection strategy and distributed space-time coding are compared with respect to several important performance metrics.
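A toy sketch of end-to-end single-antenna path selection, assuming the end-to-end SNR of a path is its bottleneck (minimum) per-hop SNR — a simplification of the paper's actual channel model:

```python
import itertools

def best_single_antenna_path(hop_snrs):
    """Exhaustively pick one antenna per stage so that the end-to-end SNR,
    taken here as the minimum per-hop SNR along the chosen path, is maximised.
    hop_snrs[k][i][j] = SNR from antenna i of stage k to antenna j of stage k+1."""
    sizes = [len(hop_snrs[0])] + [len(h[0]) for h in hop_snrs]
    best_path, best_snr = None, float("-inf")
    for path in itertools.product(*(range(s) for s in sizes)):
        snr = min(hop_snrs[k][path[k]][path[k + 1]] for k in range(len(hop_snrs)))
        if snr > best_snr:
            best_path, best_snr = path, snr
    return best_path, best_snr
```

The half-duplex scheme in the abstract would keep the two highest-scoring paths from this search and alternate between them across time slots.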
0805.3200
On Tightness of Mutual Dependence Upperbound for Secret-key Capacity of Multiple Terminals
cs.IT cs.CR math.CO math.IT math.PR
Csiszar and Narayan [3] defined the notion of secret-key capacity for multiple terminals, characterized it as a linear program with Slepian-Wolf constraints of the related source coding problem of communication for omniscience, and upper bounded it by an information divergence expression from the joint to the product distribution of the private observations. This paper proves that the bound is tight for the important case when all users are active, using the polymatroidal structure [6] underlying the source coding problem. When some users are not active, the bound may not be tight. This paper gives a counter-example in which 3 out of the 6 terminals are active.
0805.3206
Sociological Inequality and the Second Law
cs.IT math.IT
There are two fair ways to distribute particles among boxes. The first is to divide the particles equally between the boxes. The second, which is calculated here, is to score the particles fairly between the boxes. The obtained power-law distribution function yields an uneven distribution of particles in boxes. It is shown that the obtained distribution fits sociological phenomena well, such as the distribution of votes in polls, the distribution of wealth, and Benford's law.
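As a self-contained illustration of the Benford's law mentioned above (using powers of 2, a classic Benford-distributed sequence, rather than the paper's scoring model):

```python
from math import log10

def leading_digit(n):
    """First decimal digit of a positive integer."""
    while n >= 10:
        n //= 10
    return n

def benford_expected(d):
    """Benford's law: P(leading digit = d) = log10(1 + 1/d)."""
    return log10(1 + 1 / d)

# Empirical leading-digit frequencies of 2^1 .. 2^1000.
powers = [2 ** k for k in range(1, 1001)]
freqs = [sum(leading_digit(v) == d for v in powers) / len(powers)
         for d in range(1, 10)]
```

Around 30% of the powers lead with digit 1, matching log10(2) ≈ 0.301, the kind of uneven-yet-lawful distribution the abstract associates with votes and wealth.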