Columns: id (string, length 9–16), title (string, length 4–278), categories (string, length 5–104), abstract (string, length 6–4.09k)
1201.4139
Image decomposition with anisotropic diffusion applied to leaf-texture analysis
cs.CV
Texture analysis is an important field of investigation that has received a great deal of interest from the computer vision community. In this paper, we propose a novel approach for texture modeling based on partial differential equations (PDEs). Each image $f$ is decomposed into a family of derived sub-images: $f$ is split into the $u$ component, obtained with anisotropic diffusion, and the $v$ component, calculated as the difference between the original image and the $u$ component. After enhancing the texture attribute $v$ of the image, Gabor features are computed as descriptors. We validate the proposed approach on two texture datasets with high variability. We also evaluate our approach on an important real-world application: leaf-texture analysis. Experimental results indicate that our approach produces higher classification rates and can be successfully employed in different texture applications.
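As a toy illustration of the decomposition above, the sketch below splits a 1-D signal into a smooth component `u` and a texture residual `v = f - u`. Plain linear diffusion stands in for the paper's anisotropic diffusion, and all names and parameters are illustrative.

```python
# A toy 1-D version of the u/v split (linear diffusion stands in for the
# paper's anisotropic diffusion; names and parameters are illustrative).

def diffuse(f, steps=10, dt=0.2):
    """Explicit linear diffusion with clamped (reflecting) boundaries."""
    u = list(f)
    for _ in range(steps):
        u = [u[i] + dt * (u[max(i - 1, 0)] - 2 * u[i] + u[min(i + 1, len(u) - 1)])
             for i in range(len(u))]
    return u

def decompose(f):
    u = diffuse(f)                         # smooth "cartoon" component
    v = [fi - ui for fi, ui in zip(f, u)]  # texture residual v = f - u
    return u, v

f = [0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0]  # toy signal with texture spikes
u, v = decompose(f)
print(all(abs((ui + vi) - fi) < 1e-12 for fi, ui, vi in zip(f, u, v)))  # True
```

The residual `v` would then feed the descriptor stage (Gabor features in the paper).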
1201.4145
The Role of Social Networks in Information Diffusion
cs.SI physics.soc-ph
Online social networking technologies enable individuals to simultaneously share information with any number of peers. Quantifying the causal effect of these technologies on the dissemination of information requires not only identification of who influences whom, but also of whether individuals would still propagate information in the absence of social signals about that information. We examine the role of social networks in online information diffusion with a large-scale field experiment that randomizes exposure to signals about friends' information sharing among 253 million subjects in situ. Those who are exposed are significantly more likely to spread information, and do so sooner than those who are not exposed. We further examine the relative role of strong and weak ties in information propagation. We show that, although stronger ties are individually more influential, it is the more abundant weak ties who are responsible for the propagation of novel information. This suggests that weak ties may play a more dominant role in the dissemination of information online than currently believed.
1201.4210
Collaborative Personalized Web Recommender System using Entropy based Similarity Measure
cs.IR cs.AI
On the internet, web surfers searching for information increasingly rely on recommendations. Generating such recommendations becomes more difficult as the information domain grows exponentially day by day. In this paper, we compute an entropy-based similarity between users to address the scalability problem. Using this concept, we have implemented an online user-based collaborative web recommender system. In this model-based collaborative system, the user session is divided into two levels, and entropy is calculated at each level. We show that, from the set of valuable recommenders obtained at level I, only those recommenders whose entropy at level II is lower than their entropy at level I serve as trustworthy recommenders. Finally, the top N recommendations are generated from these trustworthy recommenders for an online user.
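A minimal sketch of the entropy idea (the exact session-level computation is not specified in the abstract): take the Shannon entropy of the rating-difference distribution between two users over co-rated items; low entropy means the users deviate from each other in a consistent way, marking a trustworthy recommender. The function name and rating data are hypothetical.

```python
from collections import Counter
from math import log2

def difference_entropy(ratings_a, ratings_b):
    """Shannon entropy of the rating-difference distribution over co-rated
    items: low entropy = consistent deviation = trustworthy recommender.
    (A hypothetical illustration of the entropy-based similarity idea.)"""
    diffs = [a - b for a, b in zip(ratings_a, ratings_b)]
    n = len(diffs)
    return -sum(c / n * log2(c / n) for c in Counter(diffs).values())

# Users who always differ by the same amount have entropy 0 (fully consistent)
print(difference_entropy([4, 3, 5], [3, 2, 4]) == 0.0)  # True
print(difference_entropy([4, 1, 5], [1, 5, 2]) > 0.0)   # True
```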
1201.4214
Channel Exploration and Exploitation with Imperfect Spectrum Sensing in Cognitive Radio Networks
cs.IT math.IT
In this paper, the problem of opportunistic channel sensing and access in cognitive radio networks is investigated for the case when sensing is imperfect and a secondary user has limited traffic to send at a time. Primary users' statistical information is assumed to be unknown; a secondary user therefore needs to learn this information online during the channel sensing and access process, which means that learning loss, also referred to as regret, is inevitable. We first investigate the case when all potential channels can be sensed simultaneously. The channel access process is modeled as a multi-armed bandit problem with side observation, and channel access rules are derived and theoretically proved to have asymptotically finite regret. We then investigate the case when the secondary user can sense only a limited number of channels at a time. The channel sensing and access process is modeled as a bi-level multi-armed bandit problem. It is shown that any adaptive rule has at least logarithmic regret. We then derive channel sensing and access rules and theoretically prove that they have logarithmic regret both asymptotically and in finite time. The effectiveness of the derived rules is validated by computer simulation.
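The abstract does not give the access rules themselves, so the sketch below conveys the flavor of such index-based online learning with the standard UCB1 rule applied to simulated channel-availability probabilities; the rule, probabilities, and parameters are illustrative, not the paper's.

```python
import math
import random

def ucb1_channels(channel_probs, rounds=5000, seed=1):
    """UCB1 index policy on simulated channels: a standard bandit rule of the
    kind the paper builds on (its actual sensing/access rules differ in detail)."""
    rng = random.Random(seed)
    n = len(channel_probs)
    counts, rewards = [0] * n, [0.0] * n
    for t in range(1, rounds + 1):
        if t <= n:
            arm = t - 1                          # initialize: try each channel once
        else:
            arm = max(range(n), key=lambda i: rewards[i] / counts[i]
                      + math.sqrt(2.0 * math.log(t) / counts[i]))
        counts[arm] += 1
        rewards[arm] += 1.0 if rng.random() < channel_probs[arm] else 0.0
    return counts

counts = ucb1_channels([0.2, 0.5, 0.8])
print(counts.index(max(counts)))  # 2: the best channel is accessed most often
```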
1201.4239
Dynamic Decision Making for Graphical Models Applied to Oil Exploration
stat.AP cs.AI stat.CO
This paper has been withdrawn by the authors. We present a framework for sequential decision making in problems described by graphical models. The setting is given by dependent discrete random variables with associated costs or revenues. In our examples, the dependent variables are the potential outcomes (oil, gas or dry) when drilling a petroleum well. The goal is to develop an optimal selection strategy that incorporates a chosen utility function within an approximated dynamic programming scheme. We propose and compare different approximations, from simple heuristics to more complex iterative schemes, and we discuss their computational properties. We apply our strategies to oil exploration over multiple prospects modeled by a directed acyclic graph, and to a reservoir drilling decision problem modeled by a Markov random field. The results show that the suggested strategies clearly improve the simpler intuitive constructions, and this is useful when selecting exploration policies.
1201.4243
Analysis of a Key Distribution Scheme in Secure Multicasting
cs.CR cs.DM cs.IT math.IT
This article presents an analysis of the secure key broadcasting scheme proposed by Wu, Ruan, Lai and Tseng. The study of the parameters of the system is based on a connection with a special type of symmetric equations over finite fields. We present two different attacks against the system, whose efficiency depends on the choice of the parameters. In particular, a time-memory tradeoff attack is described, effective when a parameter of the scheme is chosen without care. In such a situation, more than one third of the cases can be broken with a time and space complexity in the range of the square root of the complexity of the best attack suggested by Wu et al. against their system. This leads to a feasible attack in a realistic scenario.
1201.4285
On Shore and Johnson properties for a Special Case of Csisz\'ar f-divergences
cs.IT math.IT
The importance of power-law distributions is attributed to the fact that most naturally occurring phenomena exhibit such distributions. While exponential distributions can be derived by minimizing the KL-divergence subject to moment constraints, some power-law distributions can be derived by minimizing generalizations of the KL-divergence (more specifically, some special cases of Csisz\'ar f-divergences). Divergence minimization is very well studied in information-theoretic approaches to statistics. In this work we study properties of the minimization of the Tsallis divergence, which is a special case of Csisz\'ar f-divergence. In line with the work of Shore and Johnson (IEEE Trans. IT, 1981), we examine the properties exhibited by these minimization methods, including the Pythagorean property.
1201.4291
Scaling of Congestion in Small World Networks
math.MG cond-mat.stat-mech cs.SI physics.soc-ph
In this report we show that in a planar exponentially growing network consisting of $N$ nodes, congestion scales as $O(N^2/\log(N))$ independently of how flows may be routed. This is in contrast to the $O(N^{3/2})$ scaling of congestion in a flat polynomially growing network. We also show that without the planarity condition, congestion in a small world network could scale as low as $O(N^{1+\epsilon})$, for arbitrarily small $\epsilon$. These extreme results demonstrate that the small world property by itself cannot provide guidance on the level of congestion in a network and other characteristics are needed for better resolution. Finally, we investigate scaling of congestion under the geodesic flow, that is, when flows are routed on shortest paths based on a link metric. Here we prove that if the link weights are scaled by arbitrarily small or large multipliers then considerable changes in congestion may occur. However, if we constrain the link-weight multipliers to be bounded away from both zero and infinity, then variations in congestion due to such remetrization are negligible.
1201.4334
Classification of Binary Self-Dual [48,24,10] Codes with an Automorphism of Odd Prime Order
cs.IT math.CO math.IT
The purpose of this paper is to complete the classification of binary self-dual [48,24,10] codes with an automorphism of odd prime order. We prove that if there is a self-dual [48, 24, 10] code with an automorphism of type p-(c,f) with p being an odd prime, then p=3, c=16, f=0. By considering only an automorphism of type 3-(16,0), we prove that there are exactly 264 inequivalent self-dual [48, 24, 10] codes with an automorphism of odd prime order, equivalently, there are exactly 264 inequivalent cubic self-dual [48, 24, 10] codes.
1201.4342
A Pareto-metaheuristic for a bi-objective winner determination problem in a combinatorial reverse auction
cs.GT cs.AI math.OC
The bi-objective winner determination problem (2WDP-SC) of a combinatorial procurement auction for transport contracts is characterized by a set B of bundle bids, with each bundle bid b in B consisting of a bidding carrier c_b, a bid price p_b, and a set tau_b of transport contracts, which is a subset of the set T of tendered transport contracts. Additionally, the transport quality q_{t,c_b} that is expected to be realized when transport contract t is executed by carrier c_b is given. The task of the auctioneer is to find a set X of winning bids (X subset B), such that each transport contract is part of at least one winning bid, the total procurement costs are minimized, and the total transport quality is maximized. This article presents a metaheuristic approach for the 2WDP-SC which integrates the greedy randomized adaptive search procedure with a two-stage candidate component selection procedure, large neighborhood search, and self-adaptive parameter setting in order to find a competitive set of non-dominated solutions. The heuristic outperforms all existing approaches. For seven small benchmark instances, the heuristic is the sole approach that finds all Pareto-optimal solutions. For 28 out of 30 large instances, none of the existing approaches is able to compute a solution that dominates a solution found by the proposed heuristic.
1201.4369
Exact solution of bond percolation on small arbitrary graphs
cond-mat.stat-mech cs.SI physics.soc-ph
We introduce a set of iterative equations that exactly solves the size distribution of components on small arbitrary graphs after the random removal of edges. We also demonstrate how these equations can be used to predict the distribution of the node partitions (i.e., the constrained distribution of the size of each component) in undirected graphs. Besides opening the way to the theoretical prediction of percolation on arbitrary graphs of large but finite size, we show how our results find application in graph theory, epidemiology, percolation and fragmentation theory.
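For context, the brute-force alternative to iterative equations like those above is to enumerate all 2^|E| edge subsets, which is exact but exponential in the number of edges. A pure-Python sketch (graph and parameters are hypothetical):

```python
from itertools import product

def component_size_dist(nodes, edges, p):
    """Exact size distribution of the component containing node 0 after each
    edge is kept independently with probability p, by brute-force enumeration
    of all 2^|E| edge subsets (feasible only for small graphs; iterative
    equations avoid this exponential cost)."""
    dist = {}
    for keep in product([False, True], repeat=len(edges)):
        prob = 1.0
        parent = {v: v for v in nodes}   # union-find forest for this subset
        def find(v):
            while parent[v] != v:
                v = parent[v]
            return v
        for (a, b), kept in zip(edges, keep):
            prob *= p if kept else 1.0 - p
            if kept:
                parent[find(a)] = find(b)
        root = find(0)
        size = sum(1 for v in nodes if find(v) == root)
        dist[size] = dist.get(size, 0.0) + prob
    return dist

# Triangle graph, p = 1/2: node 0 ends in a component of size 1, 2 or 3
triangle = component_size_dist([0, 1, 2], [(0, 1), (1, 2), (0, 2)], 0.5)
print({k: round(v, 4) for k, v in sorted(triangle.items())})  # {1: 0.25, 2: 0.25, 3: 0.5}
```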
1201.4428
Error-Trellis Construction for Tailbiting Convolutional Codes
cs.IT math.IT
In this paper, we present an error-trellis construction for tailbiting convolutional codes. A tailbiting error-trellis is characterized by the condition that the syndrome former starts and ends in the same state. We clarify the correspondence between code subtrellises in the tailbiting code-trellis and error subtrellises in the tailbiting error-trellis. Also, we present a construction of tailbiting backward error-trellises. Moreover, we obtain the scalar parity-check matrix for a tailbiting convolutional code. The proposed construction is based on the adjoint-obvious realization of a syndrome former and its behavior is fully used in the discussion.
1201.4443
Dynamic behavior analysis for a six axis industrial machining robot
cs.RO
Six-axis robots are widely used in the automotive industry for their good repeatability (as defined in ISO 9283) in tasks such as painting, welding, mastic deposition, and handling. In the aerospace industry, robots are starting to be used for complex applications such as drilling, riveting, fiber placement, and NDT. Given the positioning performance of serial robots, precision applications usually require an external measurement device with a complex calibration procedure in order to reach the precision needed. New applications in the machining of composite materials (aerospace, naval, or wind turbines, for example) intend to use off-line programming of serial robots without calibration or external measurement devices. For those applications, the position, orientation, and path-trajectory precision of the robot's tool center point are needed to generate the machining operation. This article presents the different conditions that currently limit the development of robots in robotic machining applications. We analyze the dynamic behavior of a KUKA KR240-2 robot (located at the University of Bordeaux 1) equipped with an HSM spindle (42000 rpm, 18 kW). This analysis is done in three stages. The first step determines the self-excited frequencies of the robot structure for three different work configurations. The second phase analyzes the dynamic vibration of the structure when the spindle is activated without cutting. The third stage consists of vibration analysis during a milling operation.
1201.4445
Experimental Characterization of Robot Arm Rigidity in Order to Be Used in Machining Operation
cs.RO
Attempts to install a rotating tool at the end of a poly-articulated robot arm date back twenty years, but these robots were not designed for that purpose. Indeed, two essential features are necessary for machining: high rigidity and precision in a given workspace. The experimental results presented here concern the dynamic identification of a poly-articulated robot equipped with an integrated spindle. This study aims to highlight the influence of the geometric configuration of the robot arm on the overall stiffness of the system. The spindle is taken into account both as additional on-board weight and as a dynamic excitation for the KUKA KR_240_2 robot. The study of robotic machining vibrations shows the suitable directions of movement in the milling process.
1201.4469
Uncertainty Bounds for Spectral Estimation
cs.SY math.OC math.ST stat.TH
The purpose of this paper is to study metrics suitable for assessing uncertainty of power spectra when these are based on finite second-order statistics. The family of power spectra which is consistent with a given range of values for the estimated statistics represents the uncertainty set about the "true" power spectrum. Our aim is to quantify the size of this uncertainty set using suitable notions of distance, and in particular, to compute the diameter of the set since this represents an upper bound on the distance between any choice of a nominal element in the set and the "true" power spectrum. Since the uncertainty set may contain power spectra with lines and discontinuities, it is natural to quantify distances in the weak topology---the topology defined by continuity of moments. We provide examples of such weakly-continuous metrics and focus on particular metrics for which we can explicitly quantify spectral uncertainty. We then consider certain high resolution techniques which utilize filter-banks for pre-processing, and compute worst-case a priori uncertainty bounds solely on the basis of the filter dynamics. This allows the a priori tuning of the filter-banks for improved resolution over selected frequency bands.
1201.4477
Wireless Network Coding for MIMO Two-way Relaying using Latin Rectangles
cs.IT math.IT
The design of modulation schemes for the physical layer network-coded two-way MIMO relaying scenario is considered, with $n_R$ antennas at the relay R, and $n_A$ and $n_B$ antennas respectively at the end nodes A and B. We consider the denoise-and-forward (DNF) protocol, which employs two phases: a multiple access (MA) phase and a broadcast (BC) phase. It is known for network-coded SISO two-way relaying that adaptively changing the network coding map used at the relay, also known as the denoising map, according to the channel conditions greatly reduces the impact of the multiple access interference which occurs at the relay during the MA phase, and that all these network coding maps should satisfy a requirement called the {\it exclusive law}. The network coding maps which satisfy the exclusive law can be viewed equivalently as Latin Rectangles. In this paper, it is shown that for MIMO two-way relaying, a deep fade occurs at the relay when the row space of the channel fade coefficient matrix is a subspace of one of a finite number of vector subspaces of $\mathbb{C}^{n_A+n_B}$, which are referred to as the singular fade subspaces. It is shown that a proper choice of the network coding map can remove most of the singular fade subspaces, referred to as the removable singular fade subspaces. For the $2^{\lambda}$-PSK signal set, it is shown that the number of non-removable singular fade subspaces is a small fraction of the total number of singular fade subspaces. The Latin Rectangles for the case when the end nodes use different numbers of antennas are shown to be obtainable from the Latin Squares for the case when they use the same number of antennas. Also, the network coding maps which remove all the removable singular fade subspaces are shown to be obtainable from a small set of Latin Squares.
1201.4479
Distributed Data Storage in Large-Scale Sensor Networks Based on LT Codes
cs.IT cs.DB math.IT
This paper proposes an algorithm for increasing data persistence in large-scale sensor networks. In the scenario considered here, k out of n nodes sense the phenomenon and produce k information packets. Because of the usually hazardous environment and limited resources, e.g. energy, sensors in the network are vulnerable. Also, due to the large size of the network, gathering information at a few central nodes is not feasible. Flooding is not a desirable option either, due to the limited memory of each node. Therefore, the best approach to increase data persistence is to propagate data throughout the network by random walks. The algorithm proposed here is based on distributed LT (Luby Transform) codes and benefits from the low encoding and decoding complexity of LT codes. In previous algorithms, the essential global information (e.g., n and k) is estimated from graph statistics, which requires excessive transmissions. In our proposed algorithm, these values are obtained without additional transmissions. The mixing time of the random walk is also improved by a new scheme for generating its probabilistic forwarding table. The proposed method uses only local information and is scalable to any network topology. Simulations verify the improved performance of the developed algorithm compared to previous ones.
1201.4480
A Solution to Fastest Distributed Consensus Problem for Generic Star & K-cored Star Networks
cs.IT cs.DC math.IT
Distributed average consensus is the main mechanism in algorithms for decentralized computation. In a distributed average consensus algorithm, each node has an initial state, and the goal is for every node to compute the average of these initial states. To accomplish this task, each node updates its state with a weighted average of its own and its neighbors' states, using local communication between neighboring nodes. In networks with fixed topology, the convergence rate of the distributed average consensus algorithm depends on the choice of weights. This paper studies the weight optimization problem in the distributed average consensus algorithm. The network topology considered here is a star network whose branches have different lengths. Closed-form formulas for the optimal weights and the convergence rate of the algorithm are determined in terms of the network's topological parameters. Furthermore, the generic K-cored star topology is introduced as an alternative to the star topology; it benefits from a faster convergence rate. Simulations demonstrate the better performance of the optimal weights compared to other common weighting methods.
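The weighted-average update at the heart of such algorithms can be sketched in a few lines; here a 3-node path graph uses a hand-picked symmetric doubly stochastic weight matrix (illustrative only; the paper derives the optimal weights for star and K-cored star topologies in closed form).

```python
def consensus_step(x, W):
    """One synchronous consensus update: x_i <- sum_j W[i][j] * x_j."""
    return [sum(W[i][j] * x[j] for j in range(len(x))) for i in range(len(x))]

# 3-node path graph; W is symmetric and doubly stochastic, so repeated
# application drives every state to the average of the initial states.
W = [[0.5, 0.5, 0.0],
     [0.5, 0.0, 0.5],
     [0.0, 0.5, 0.5]]
x = [0.0, 3.0, 6.0]
for _ in range(60):
    x = consensus_step(x, W)
print([round(v, 6) for v in x])  # every node converges to the average, 3.0
```

The convergence rate is governed by the second-largest eigenvalue magnitude of W, which is exactly what the weight optimization targets.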
1201.4499
Mathematical and computational modeling for describing the basic behavior of free radicals and antioxidants within epithelial cells
cs.CE q-bio.QM
The traditional methods of biology, based on illustrative descriptions and linear-logic explanations, are discussed. This work aims to improve this approach by introducing alternative tools to describe and represent complex biological systems. Two models were developed, one mathematical and one computational, both built to study the biological interaction between free radicals and antioxidants. Each model was used to study the same process, but in a different scenario. The mathematical model was used to study the process in an epithelial cell culture; this model was validated with the experimental data of Anne Hanneken's research group from the Department of Molecular and Experimental Medicine, published in the journal Investigative Ophthalmology and Visual Science in July 2006. The computational model was used to study the same process in an individual. The model was implemented in the C++ programming language, supported by the network theory of aging.
1201.4564
Homophily and Long-Run Integration in Social Networks
physics.soc-ph cs.SI
We model network formation when heterogeneous nodes enter sequentially and form connections through both random meetings and network-based search, but with type-dependent biases. We show that there is "long-run integration," whereby the composition of types in sufficiently old nodes' neighborhoods approaches the global type distribution, provided that the network-based search is unbiased. However, younger nodes' connections still reflect the biased meetings process. We derive the type-based degree distributions and group-level homophily patterns when there are two types and location-based biases. Finally, we illustrate aspects of the model with an empirical application to data on citations in physics journals.
1201.4565
Discrete Opinion models as a limit case of the CODA model
physics.soc-ph cs.MA cs.SI nlin.AO
Most Opinion Dynamics models can be divided into discrete and continuous ones. They are used in different circumstances, and the relationship between them is not clear. Here we explore the relationship between a model where choices are discrete but opinions are a continuous function (the Continuous Opinions and Discrete Actions, CODA, model) and traditional discrete models. I show that, when CODA is altered to include reasoning about the influence one agent can have on its own neighbors, agreement and disagreement no longer have the same importance. The limit in which an agent considers itself to be more and more influential is studied, and we see that one recovers discrete dynamics, like those of the Voter model, in that limit.
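A minimal sketch of the baseline CODA-style rule (not the self-influence extension studied in the paper): agents hold continuous opinions, act on their sign, and shift their opinion by a fixed step after observing a neighbor's discrete action. The ring topology, step size, and agent states below are all illustrative.

```python
import random

def coda_step(opinions, a=1.0, rng=random.Random(0)):
    """One sweep of a minimal CODA-style update on a ring: each agent observes
    the discrete choice (the sign) of a random neighbor's continuous opinion
    and shifts its own opinion by +a or -a accordingly. A toy sketch, not the
    self-influence extension discussed above."""
    n = len(opinions)
    new = list(opinions)
    for i in range(n):
        j = (i + rng.choice([-1, 1])) % n   # pick a random ring neighbor
        new[i] += a if opinions[j] > 0 else -a
    return new

before = [0.5, -0.5, 1.5, -1.5]
after = coda_step(before)
print(after)  # each opinion moved by exactly +1 or -1
```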
1201.4597
Fractal Descriptors Based on Fourier Spectrum Applied to Texture Analysis
physics.data-an cs.CV
This work proposes the development and study of a novel technique for generating fractal descriptors used in texture analysis. The novel descriptors are obtained from a multiscale transform applied to the Fourier technique for fractal dimension calculation. The power spectrum of the Fourier transform of the image is plotted against frequency on a log-log scale, and a multiscale transform is applied to this curve. The values obtained are taken as the fractal descriptors of the image. The proposal is validated by using the descriptors to classify a dataset of texture images whose true classes are known in advance. The classification precision is compared to that of other fractal descriptors known in the literature. The results confirm the efficiency of the proposed method.
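The single-slope version of the Fourier technique underlying the descriptors can be sketched as a least-squares fit in log-log coordinates (the multiscale transform then replaces this single slope by a set of values; the data below are synthetic).

```python
from math import log

def loglog_slope(freqs, power):
    """Least-squares slope of log(power) vs log(frequency). The Fourier method
    estimates the fractal dimension from this slope; the multiscale transform
    described above replaces the single slope by a set of descriptors."""
    xs, ys = [log(f) for f in freqs], [log(p) for p in power]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# A pure power-law spectrum P(f) = f^-2 recovers the exponent -2
freqs = [1.0, 2.0, 4.0, 8.0, 16.0]
power = [f ** -2.0 for f in freqs]
print(round(loglog_slope(freqs, power), 6))  # -2.0
```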
1201.4602
Bond percolation on a class of correlated and clustered random graphs
cond-mat.stat-mech cs.SI physics.soc-ph q-bio.PE
We introduce a formalism for computing bond percolation properties of a class of correlated and clustered random graphs. This class of graphs is a generalization of the Configuration Model where nodes of different types are connected via different types of hyperedges, edges that can link more than 2 nodes. We argue that the multitype approach coupled with the use of clustered hyperedges can reproduce a wide spectrum of complex patterns, and thus enhances our capability to model real complex networks. As an illustration of this claim, we use our formalism to highlight unusual behaviors of the size and composition of the components (small and giant) in a synthetic, albeit realistic, social network.
1201.4615
Augmented L1 and Nuclear-Norm Models with a Globally Linearly Convergent Algorithm
cs.IT math.IT math.OC
This paper studies the long-existing idea of adding a nice smooth function to "smooth" a non-differentiable objective function in the context of sparse optimization, in particular, the minimization of $||x||_1+1/(2\alpha)||x||_2^2$, where $x$ is a vector, as well as the minimization of $||X||_*+1/(2\alpha)||X||_F^2$, where $X$ is a matrix and $||X||_*$ and $||X||_F$ are the nuclear and Frobenius norms of $X$, respectively. We show that they can efficiently recover sparse vectors and low-rank matrices. In particular, they enjoy exact and stable recovery guarantees similar to those known for minimizing $||x||_1$ and $||X||_*$ under the conditions on the sensing operator such as its null-space property, restricted isometry property, spherical section property, or RIPless property. To recover a (nearly) sparse vector $x^0$, minimizing $||x||_1+1/(2\alpha)||x||_2^2$ returns (nearly) the same solution as minimizing $||x||_1$ almost whenever $\alpha\ge 10||x^0||_\infty$. The same relation also holds between minimizing $||X||_*+1/(2\alpha)||X||_F^2$ and minimizing $||X||_*$ for recovering a (nearly) low-rank matrix $X^0$, if $\alpha\ge 10||X^0||_2$. Furthermore, we show that the linearized Bregman algorithm for minimizing $||x||_1+1/(2\alpha)||x||_2^2$ subject to $Ax=b$ enjoys global linear convergence as long as a nonzero solution exists, and we give an explicit rate of convergence. The convergence property does not require a unique solution or any properties of $A$. To our knowledge, this is the best known global convergence result for first-order sparse optimization algorithms.
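A compact sketch of the linearized Bregman iteration for $\min ||x||_1 + 1/(2\alpha)||x||_2^2$ s.t. $Ax=b$: accumulate dual gradient steps in an auxiliary variable $v$ and soft-threshold, $x = \alpha\,\mathrm{shrink}(v, 1)$. The tiny system, step size, and iteration count below are illustrative choices, not the paper's experiments.

```python
def shrink(v, t):
    """Soft-thresholding: sign(v) * max(|v| - t, 0)."""
    return max(v - t, 0.0) if v > 0 else min(v + t, 0.0)

def linearized_bregman(A, b, alpha=10.0, tau=0.1, iters=200):
    """Linearized Bregman for min ||x||_1 + 1/(2*alpha)||x||_2^2 s.t. Ax = b:
    v accumulates dual gradients tau * A^T (b - A x), then x = alpha*shrink(v, 1).
    The step size tau and the system below are illustrative choices."""
    m, n = len(A), len(A[0])
    x, v = [0.0] * n, [0.0] * n
    for _ in range(iters):
        r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        for j in range(n):
            v[j] += tau * sum(A[i][j] * r[i] for i in range(m))
        x = [alpha * shrink(vj, 1.0) for vj in v]
    return x

A = [[1.0, 0.0, 0.5],
     [0.0, 1.0, 0.5]]
b = [1.0, 0.0]
x = linearized_bregman(A, b)
print([round(xi, 4) for xi in x])  # ≈ [1.0, 0.0, 0.0], the sparse solution
```

Note that `alpha` is set to 10 times the sup-norm of the true sparse solution, matching the $\alpha \ge 10||x^0||_\infty$ condition quoted above.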
1201.4672
Estimation of the Covariance Matrix of Large Dimensional Data
cs.IT math.IT
This paper deals with the problem of estimating the covariance matrix of a series of independent multivariate observations, in the case where the dimension of each observation is of the same order as the number of observations. Although such a regime is of interest for many current statistical signal processing and wireless communication issues, traditional methods fail to produce consistent estimators, and only recently have results relying on large random matrix theory been unveiled. In this paper, we develop the parametric framework proposed by Mestre, and consider a model where the covariance matrix to be estimated has a (known) finite number of eigenvalues, each of them with an unknown multiplicity. The main contributions of this work are essentially threefold with respect to existing results, and in particular to Mestre's work: to relax the (restrictive) separability assumption, to provide joint consistent estimates for the eigenvalues and their multiplicities, and to study the variance of the error by means of a Central Limit theorem.
1201.4714
A metric learning perspective of SVM: on the relation of SVM and LMNN
cs.LG stat.ML
Support Vector Machines (SVMs) and the Large Margin Nearest Neighbor algorithm (LMNN) are two very popular learning algorithms with quite different learning biases. In this paper we bring them into a unified view and show that they have a much stronger relation than is commonly thought. We analyze SVMs from a metric learning perspective and cast them as a metric learning problem, a view which helps us uncover the relations between the two algorithms. We show that LMNN can be seen as learning a set of local SVM-like models in a quadratic space. Along the way, inspired by the metric-based interpretation of SVMs, we derive a novel variant of SVMs, epsilon-SVM, to which LMNN is even more similar. We give a unified view of LMNN and the different SVM variants. Finally we provide preliminary experiments on a number of benchmark datasets which show that epsilon-SVM compares favorably with both LMNN and SVM.
1201.4733
Du TAL au TIL (From NLP to Interactive NLP)
cs.CL cs.HC
Historically, two types of NLP have been investigated: fully automated processing of language by machines (NLP) and autonomous processing of natural language by people, i.e. the human brain (psycholinguistics). We believe that there is room and need for another kind, INLP: interactive natural language processing. This intermediate approach starts from people's needs, trying to bridge the gap between their actual knowledge and a given goal. Given that people's knowledge is variable and often incomplete, the aim is to build bridges linking a given knowledge state to a given goal. We present some examples, trying to show that this goal is worth pursuing, achievable, and attainable at a reasonable cost.
1201.4737
Production System Rules as Protein Complexes from Genetic Regulatory Networks
cs.NE
This short paper introduces a new way by which to design production system rules. An indirect encoding scheme is presented which views such rules as protein complexes produced by the temporal behaviour of an artificial genetic regulatory network. This initial study begins by using a simple Boolean regulatory network to produce traditional ternary-encoded rules before moving to a fuzzy variant to produce real-valued rules. Competitive performance is shown with related genetic regulatory networks and rule-based systems on benchmark problems.
1201.4743
Voting Power: A Generalised Framework
math.ST cs.CY cs.GT cs.MA stat.OT stat.TH
This paper examines an area of Game Theory called Voting Power Theory. With the adoption of a measure theoretic framework it argues that the many different indices and tools currently used for measuring voting power can be replaced by just three simple probabilities. The framework is sufficiently general to be applicable to every conceivable type of voting game, and every possible decision rule.
1201.4768
Completion Delay Minimization for Instantly Decodable Network Codes
cs.NI cs.IT math.IT
In this paper, we consider the problem of minimizing the completion delay for instantly decodable network coding (IDNC), in wireless multicast and broadcast scenarios. We are interested in this class of network coding due to its numerous benefits, such as low decoding delay, low coding and decoding complexities and simple receiver requirements. We first extend the IDNC graph, which represents all feasible IDNC coding opportunities, to efficiently operate in both multicast and broadcast scenarios. We then formulate the minimum completion delay problem for IDNC as a stochastic shortest path (SSP) problem. Although finding the optimal policy using SSP is intractable, we use this formulation to draw the theoretical guidelines for the policies that can efficiently reduce the completion delay in IDNC. Based on these guidelines, we design a maximum weight clique selection algorithm, which can efficiently reduce the IDNC completion delay in polynomial time. We also design a quadratic time heuristic clique selection algorithm, which can operate in real-time applications. Simulation results show that our proposed algorithms efficiently reduce the IDNC completion delay compared to the random and maximum-rate algorithms, and almost achieve the global optimal completion delay performance over all network codes in broadcast scenarios.
1201.4777
A probabilistic methodology for multilabel classification
cs.AI cs.LG
Multilabel classification is a relatively recent subfield of machine learning. Unlike the classical approach, where instances are labeled with only one category, in multilabel classification an arbitrary number of categories is chosen to label an instance. Due to the complexity of the problem (the solution is one among an exponential number of alternatives), a very common solution (the binary method) is frequently used: learning a binary classifier for every category and combining them all afterwards. The assumption underlying this solution is not realistic, and in this work we give examples where the decisions for all the labels are not taken independently; thus, a supervised approach should learn the existing relationships among categories to make a better classification. We therefore present a generic methodology that can improve the results obtained by a set of independent probabilistic binary classifiers, by using a combination procedure with a classifier trained on the co-occurrences of the labels. We report exhaustive experimentation on three standard corpora of labeled documents (Reuters-21578, Ohsumed-23 and RCV1), which shows noticeable improvements in all of them when using our methodology with three probabilistic base classifiers.
1201.4779
On the ADI method for the Sylvester Equation and the optimal-$\mathcal{H}_2$ points
math.NA cs.SY
The ADI iteration is closely related to the rational Krylov projection methods for constructing low-rank approximations to the solution of the Sylvester equation. In this paper we show that the ADI and rational Krylov approximations are in fact equivalent when a special choice of shifts is employed in both methods. We call these shifts pseudo H2-optimal shifts. These shifts are also optimal in the sense that, for the Lyapunov equation, they yield a residual which is orthogonal to the rational Krylov projection subspace. Via several examples, we show that the pseudo H2-optimal shifts consistently yield nearly optimal low-rank approximations to the solutions of the Lyapunov equations.
1201.4787
PageRank and rank-reversal dependence on the damping factor
physics.soc-ph cs.SI physics.data-an
PageRank (PR) is an algorithm originally developed by Google to evaluate the importance of web pages. Considering how deeply rooted Google's PR algorithm is to gathering relevant information or to the success of modern businesses, the question of rank-stability and choice of the damping factor (a parameter in the algorithm) is clearly important. We investigate PR as a function of the damping factor d on a network obtained from a domain of the World Wide Web, finding that rank-reversal happens frequently over a broad range of PR (and of d). We use three different correlation measures, Pearson, Spearman, and Kendall, to study rank-reversal as d changes, and show that the correlation of PR vectors drops rapidly as d changes from its frequently cited value, $d_0=0.85$. Rank-reversal is also observed by measuring the Spearman and Kendall rank correlation, which evaluate relative ranks rather than absolute PR. Rank-reversal happens not only in directed networks containing rank-sinks but also in a single strongly connected component, which by definition does not contain any sinks. We relate rank-reversals to rank-pockets and bottlenecks in the directed network structure. For the network studied, the relative rank is more stable by our measures around $d=0.65$ than at $d=d_0$.
1201.4793
Space Shift Keying (SSK-) MIMO with Practical Channel Estimates
cs.IT math.IT
In this paper, we study the performance of space modulation for Multiple-Input-Multiple-Output (MIMO) wireless systems with imperfect channel knowledge at the receiver. We focus our attention on two transmission technologies, which are the building blocks of space modulation: i) Space Shift Keying (SSK) modulation; and ii) Time-Orthogonal-Signal-Design (TOSD-) SSK modulation, which is an improved version of SSK modulation providing transmit-diversity. We develop a single-integral closed-form analytical framework to compute the Average Bit Error Probability (ABEP) of a mismatched detector for both SSK and TOSD-SSK modulations. The framework exploits the theory of quadratic-forms in conditional complex Gaussian Random Variables (RVs) along with the Gil-Pelaez inversion theorem. The analytical model is very general and can be used for arbitrary transmit- and receive-antennas, fading distributions, fading spatial correlations, and training pilots. The analytical derivation is substantiated through Monte Carlo simulations, and it is shown, over independent and identically distributed (i.i.d.) Rayleigh fading channels, that SSK modulation is as robust as single-antenna systems to imperfect channel knowledge, and that TOSD-SSK modulation is more robust to channel estimation errors than the Alamouti scheme. Furthermore, it is pointed out that only few training pilots are needed to get reliable enough channel estimates for data detection, and that transmit- and receive-diversity of SSK and TOSD-SSK modulations are preserved even with imperfect channel knowledge.
1201.4895
Compressive Acquisition of Dynamic Scenes
cs.CV
Compressive sensing (CS) is a new approach for the acquisition and recovery of sparse signals and images that enables sampling rates significantly below the classical Nyquist rate. Despite significant progress in the theory and methods of CS, little headway has been made in compressive video acquisition and recovery. Video CS is complicated by the ephemeral nature of dynamic events, which makes direct extensions of standard CS imaging architectures and signal models difficult. In this paper, we develop a new framework for video CS for dynamic textured scenes that models the evolution of the scene as a linear dynamical system (LDS). This reduces the video recovery problem to first estimating the model parameters of the LDS from compressive measurements, and then reconstructing the image frames. We exploit the low-dimensional dynamic parameters (the state sequence) and high-dimensional static parameters (the observation matrix) of the LDS to devise a novel compressive measurement strategy that measures only the dynamic part of the scene at each instant and accumulates measurements over time to estimate the static parameters. This enables us to lower the compressive measurement rate considerably. We validate our approach with a range of experiments involving video recovery, hyper-spectral data sensing, and classification of dynamic scenes from compressive data. Together, these applications demonstrate the effectiveness of the approach.
1201.4897
Adaptive Systems with Closed-loop Reference Models: Stability, Robustness and Transient Performance
math.OC cs.SY nlin.AO
This paper explores the properties of adaptive systems with closed-loop reference models. Using additional design freedom available in closed-loop reference models, we design new adaptive controllers that are (a) stable, and (b) have improved transient properties. Numerical studies that complement theoretical derivations are also reported.
1201.4906
Adaptive Shortest-Path Routing under Unknown and Stochastically Varying Link States
cs.NI cs.LG
We consider the adaptive shortest-path routing problem in wireless networks under unknown and stochastically varying link states. In this problem, we aim to optimize the quality of communication between a source and a destination through adaptive path selection. Due to the randomness and uncertainties in the network dynamics, the quality of each link varies over time according to a stochastic process with unknown distributions. After a path is selected for communication, the aggregated quality of all links on this path (e.g., total path delay) is observed. The quality of each individual link is not observable. We formulate this problem as a multi-armed bandit with dependent arms. We show that by exploiting arm dependencies, a regret polynomial with network size can be achieved while maintaining the optimal logarithmic order with time. This is in sharp contrast with the exponential regret order with network size offered by a direct application of the classic MAB policies that ignore arm dependencies. Furthermore, our results are obtained under a general model of link-quality distributions (including heavy-tailed distributions) and find applications in cognitive radio and ad hoc networks with unknown and dynamic communication environments.
1201.4908
Self-Organisation of Evolving Agent Populations in Digital Ecosystems
cs.NE
We investigate the self-organising behaviour of Digital Ecosystems, because a primary motivation for our research is to exploit the self-organising properties of biological ecosystems. We extended a definition for the complexity, grounded in the biological sciences, providing a measure of the information in an organism's genome. Next, we extended a definition for the stability, originating from the computer sciences, based upon convergence to an equilibrium distribution. Finally, we investigated a definition for the diversity, relative to the selection pressures provided by the user requests. We conclude with a summary and discussion of the achievements, including the experimental results.
1201.4914
Effective Clustering Algorithms for Gene Expression Data
cs.CE q-bio.GN q-bio.QM
Microarrays have made it possible to simultaneously monitor the expression profiles of thousands of genes under various experimental conditions. The identification of co-expressed genes and coherent patterns is the central goal of microarray or gene expression data analysis and is an important task in bioinformatics research. In this paper, a K-Means algorithm hybridised with the Cluster Centre Initialization Algorithm (CCIA) is proposed for gene expression data. The proposed algorithm overcomes the drawback of having to specify the number of clusters in K-Means methods. Experimental analysis shows that the proposed method performs well on gene expression data when compared with traditional K-Means clustering and the Silhouette Coefficient cluster measure.
1201.4949
Approximate Message Passing under Finite Alphabet Constraints
cs.IT math.IT
In this paper we consider Basis Pursuit De-Noising (BPDN) problems in which the sparse original signal is drawn from a finite alphabet. To solve this problem we propose an iterative message passing algorithm, which capitalises not only on the sparsity but by means of a prior distribution also on the discrete nature of the original signal. In our numerical experiments we test this algorithm in combination with a Rademacher measurement matrix and a measurement matrix derived from the random demodulator, which enables compressive sampling of analogue signals. Our results show in both cases significant performance gains over a linear programming based approach to the considered BPDN problem. We also compare the proposed algorithm to a similar message passing based algorithm without prior knowledge and observe an even larger performance improvement.
1201.4955
Coordination, Differentiation and Fairness in a population of cooperating agents
physics.soc-ph cs.SI
In a recent paper, we analyzed the self-assembly of a complex cooperation network. The network was shown to approach a state where every agent invests the same amount of resources. Nevertheless, highly-connected agents arise that extract extraordinarily high payoffs while contributing comparably little to any of their cooperations. Here, we investigate a variant of the model, in which highly-connected agents have access to additional resources. We study analytically and numerically whether these resources are invested in existing collaborations, leading to a fairer load distribution, or in establishing new collaborations, leading to an even less fair distribution of loads and payoffs.
1201.4999
A study of distributed QoS adapter in large-scale wireless networks
cs.NI cs.MA
Given the ease of establishing ad hoc networks, the use of this type of network is increasing day by day. Moreover, it is predicted that multimedia applications will become more widespread in these networks. As is well known, in contrast to best-effort flows, the transmission of multimedia flows in any network requires QoS support. However, wireless ad hoc networks are severely bandwidth-constrained, and establishing QoS in these networks is problematic. In this paper, we propose a fully distributed algorithm to support QoS in ad hoc networks. This algorithm guarantees the QoS of real-time applications relative to each other as well as to best-effort flows. The proposed algorithm dynamically regulates the contention window of the flows and serves them according to their requested QoS, choosing the smallest CW in every node. It also increases the QoS of multimedia flows by using fixed and/or less mobile nodes for the transmission of real-time flows. The algorithm is advantageous because, in addition to classifying the flows in the network and offering better service to higher-priority classes, it prioritizes flows that belong to the same class but have not yet obtained their desired QoS over other flows of that class. All of this occurs without controlled packet forwarding or resource reservation and release. We have proved the correctness of this algorithm using a Markov model.
1201.5019
On the Exact Solution to a Smart Grid Cyber-Security Analysis Problem
math.OC cs.CE cs.SY
This paper considers a smart grid cyber-security problem analyzing the vulnerabilities of electric power networks to false data attacks. The analysis problem is related to a constrained cardinality minimization problem. The main result shows that an $l_1$ relaxation technique provides an exact optimal solution to this cardinality minimization problem. The proposed result is based on a polyhedral combinatorics argument. It is different from well-known results based on mutual coherence and restricted isometry property. The results are illustrated on benchmarks including the IEEE 118-bus and 300-bus systems.
1201.5102
Conception and Use of Ontologies for Indexing and Searching by Semantic Contents of Video Courses
cs.DL cs.IR
Nowadays, the number of video documents, such as educational courses, available on the web is increasing significantly. However, today's information retrieval systems cannot return to users (students or teachers) the parts of those videos that meet their exact needs, as expressed by a query consisting of semantic information. In this paper, we present a model of the pedagogical knowledge in course videos. This knowledge is used throughout the process of indexing and semantic search of instructional video segments. Our experimental results show that the proposed approach is promising.
1201.5154
Finding short vectors in a lattice of Voronoi's first kind
cs.IT cs.DS math.IT
We show that for lattices of Voronoi's first kind, a vector of shortest nonzero Euclidean length can be computed in polynomial time by computing a minimum cut in a graph.
1201.5167
Interactive Encoding and Decoding Based on Binary LDPC Codes with Syndrome Accumulation
cs.IT math.IT
Interactive encoding and decoding based on binary low-density parity-check codes with syndrome accumulation (SA-LDPC-IED) is proposed and investigated. Assume that the source alphabet is $\mathbf{GF}(2)$, and the side information alphabet is finite. It is first demonstrated how to convert any classical universal lossless code $\mathcal{C}_n$ (with block length $n$ and side information available to both the encoder and decoder) into a universal SA-LDPC-IED scheme. It is then shown that with the word error probability approaching 0 sub-exponentially with $n$, the compression rate (including both the forward and backward rates) of the resulting SA-LDPC-IED scheme is upper bounded by a functional of that of $\mathcal{C}_n$, which in turn approaches the compression rate of $\mathcal{C}_n$ for each and every individual sequence pair $(x^n,y^n)$ and the conditional entropy rate $\mathrm{H}(X |Y)$ for any stationary, ergodic source and side information $(X, Y)$ as the average variable node degree $\bar{l}$ of the underlying LDPC code increases without bound. When applied to the class of binary source and side information $(X, Y)$ correlated through a binary symmetrical channel with cross-over probability unknown to both the encoder and decoder, the resulting SA-LDPC-IED scheme can be further simplified, yielding even improved rate performance versus the bit error probability when $\bar{l}$ is not large. Simulation results (coupled with linear time belief propagation decoding) on binary source-side information pairs confirm the theoretic analysis, and further show that the SA-LDPC-IED scheme consistently outperforms the Slepian-Wolf coding scheme based on the same underlying LDPC code. As a by-product, probability bounds involving LDPC established in the course are also interesting on their own and expected to have implications on the performance of LDPC for channel coding as well.
1201.5173
Timely Throughput of Heterogeneous Wireless Networks: Fundamental Limits and Algorithms
cs.IT cs.NI math.IT
The proliferation of different wireless access technologies, together with the growing number of multi-radio wireless devices suggest that the opportunistic utilization of multiple connections at the users can be an effective solution to the phenomenal growth of traffic demand in wireless networks. In this paper we consider the downlink of a wireless network with $N$ Access Points (AP's) and $M$ clients, where each client is connected to several out-of-band AP's, and requests delay-sensitive traffic (e.g., real-time video). We adopt the framework of Hou, Borkar, and Kumar, and study the maximum total timely throughput of the network, denoted by $C_{T^3}$, which is the maximum average number of packets delivered successfully before their deadline. Solving this problem is challenging since even the number of different ways of assigning packets to the AP's is $N^M$. We overcome the challenge by proposing a deterministic relaxation of the problem, which converts the problem to a network with deterministic delays in each link. We show that the additive gap between the capacity of the relaxed problem, denoted by $C_{det}$, and $C_{T^3}$ is bounded by $2\sqrt{N(C_{det}+N/4)}$, which is asymptotically negligible compared to $C_{det}$, when the network is operating at high-throughput regime. In addition, our numerical results show that the actual gap between $C_{T^3}$ and $C_{det}$ is in most cases much less than the worst-case gap proven analytically. Moreover, using LP rounding methods we prove that the relaxed problem can be approximated within additive gap of $N$. We extend the analytical results to the case of time-varying channel states, real-time traffic, prioritized traffic, and optimal online policies. Finally, we generalize the model for deterministic relaxation to consider fading, rate adaptation, and multiple simultaneous transmissions.
1201.5182
Data Mining as a Torch Bearer in Education Sector
cs.IR
All data contain hidden information, and the method used to process the data determines what information it yields. In India, the education sector holds a great deal of data that can produce valuable information, which can be used to increase the quality of education. However, educational institutions do not apply any knowledge discovery process to these data. Information and communication technology has entered the education sector to capture and compile information at low cost. Nowadays a new research community, educational data mining (EDM), is growing at the intersection of data mining and pedagogy. In this paper we present a roadmap of the research done in EDM across various segments of the education sector.
1201.5198
Fragmentation transitions in multi-state voter models
physics.soc-ph cs.SI nlin.AO
Adaptive models of opinion formation among humans can display a fragmentation transition, where a social network breaks into disconnected components. Here, we investigate this transition in a class of models with arbitrary number of opinions. In contrast to previous work we do not assume that opinions are equidistant or arranged on a one-dimensional conceptual axis. Our investigation reveals detailed analytical results on fragmentations in a three-opinion model, which are confirmed by agent-based simulations. Furthermore, we show that in certain models the number of opinions can be reduced without affecting the fragmentation points.
1201.5217
Unsupervised Classification Using Immune Algorithm
cs.LG cs.AI
An unsupervised classification algorithm based on the clonal selection principle, named Unsupervised Clonal Selection Classification (UCSC), is proposed in this paper. The new algorithm is data-driven and self-adaptive; it adjusts its parameters to the data to make the classification operation as fast as possible. The performance of UCSC is evaluated by comparing it with the well-known K-means algorithm on several artificial and real-life data sets. The experiments show that the proposed UCSC algorithm is more reliable and has higher classification precision compared to traditional classification methods such as K-means.
1201.5227
A New Local Adaptive Thresholding Technique in Binarization
cs.CV
Image binarization is the process of separating pixel values into two groups: white as background and black as foreground. Thresholding plays a major role in the binarization of images and can be categorized into global thresholding and local thresholding. In images with a uniform contrast distribution between background and foreground, such as document images, global thresholding is more appropriate. In degraded document images, where considerable background noise or variation in contrast and illumination exists, many pixels cannot be easily classified as foreground or background; in such cases, binarization with local thresholding is more appropriate. This paper describes a locally adaptive thresholding technique that removes background by using the local mean and mean deviation. Normally the computation time of the local mean depends on the window size. Our technique uses an integral sum image as a preprocessing step to calculate the local mean, and it does not involve calculating standard deviations as other local adaptive techniques do. This, along with the fact that the calculation of the mean is independent of the window size, speeds up the process compared to other local thresholding techniques.
1201.5229
Cross-entropy optimisation of importance sampling parameters for statistical model checking
cs.PF cs.CE cs.SY stat.CO
Statistical model checking avoids the exponential growth of states associated with probabilistic model checking by estimating properties from multiple executions of a system and by giving results within confidence bounds. Rare properties are often very important but pose a particular challenge for simulation-based approaches, hence a key objective under these circumstances is to reduce the number and length of simulations necessary to produce a given level of confidence. Importance sampling is a well-established technique that achieves this, however to maintain the advantages of statistical model checking it is necessary to find good importance sampling distributions without considering the entire state space. Motivated by the above, we present a simple algorithm that uses the notion of cross-entropy to find the optimal parameters for an importance sampling distribution. In contrast to previous work, our algorithm uses a low dimensional vector of parameters to define this distribution and thus avoids the often intractable explicit representation of a transition matrix. We show that our parametrisation leads to a unique optimum and can produce many orders of magnitude improvement in simulation efficiency. We demonstrate the efficacy of our methodology by applying it to models from reliability engineering and biochemistry.
1201.5241
Entropy functions and determinant inequalities
cs.IT math.IT
In this paper, we show that the characterisation of all determinant inequalities for $n \times n$ positive definite matrices is equivalent to determining the smallest closed and convex cone containing all entropy functions induced by $n$ scalar Gaussian random variables. We have obtained inner and outer bounds on the cone by using representable functions and entropic functions. In particular, these bounds are tight and explicit for $n \le 3$, implying that determinant inequalities for $3 \times 3$ positive definite matrices are completely characterized by Shannon-type information inequalities.
1201.5283
An Efficient Primal-Dual Prox Method for Non-Smooth Optimization
cs.LG
We study the non-smooth optimization problems in machine learning, where both the loss function and the regularizer are non-smooth functions. Previous studies on efficient empirical loss minimization assume either a smooth loss function or a strongly convex regularizer, making them unsuitable for non-smooth optimization. We develop a simple yet efficient method for a family of non-smooth optimization problems where the dual form of the loss function is bilinear in primal and dual variables. We cast a non-smooth optimization problem into a minimax optimization problem, and develop a primal dual prox method that solves the minimax optimization problem at a rate of $O(1/T)$, assuming that the proximal step can be efficiently solved, significantly faster than a standard subgradient descent method that has an $O(1/\sqrt{T})$ convergence rate. Our empirical study verifies the efficiency of the proposed method for various non-smooth optimization problems that arise ubiquitously in machine learning by comparing it to the state-of-the-art first order methods.
1201.5338
On Constrained Spectral Clustering and Its Applications
cs.LG stat.ML
Constrained clustering has been well-studied for algorithms such as $K$-means and hierarchical clustering. However, how to satisfy many constraints in these algorithmic settings has been shown to be intractable. One alternative to encode many constraints is to use spectral clustering, which remains a developing area. In this paper, we propose a flexible framework for constrained spectral clustering. In contrast to some previous efforts that implicitly encode Must-Link and Cannot-Link constraints by modifying the graph Laplacian or constraining the underlying eigenspace, we present a more natural and principled formulation, which explicitly encodes the constraints as part of a constrained optimization problem. Our method offers several practical advantages: it can encode the degree of belief in Must-Link and Cannot-Link constraints; it guarantees to lower-bound how well the given constraints are satisfied using a user-specified threshold; it can be solved deterministically in polynomial time through generalized eigendecomposition. Furthermore, by inheriting the objective function from spectral clustering and encoding the constraints explicitly, much of the existing analysis of unconstrained spectral clustering techniques remains valid for our formulation. We validate the effectiveness of our approach by empirical results on both artificial and real datasets. We also demonstrate an innovative use of encoding large number of constraints: transfer learning via constraints.
1201.5346
Tableau-based decision procedure for the multi-agent epistemic logic with all coalitional operators for common and distributed knowledge
cs.LO cs.AI math.LO
We develop a conceptually clear, intuitive, and feasible decision procedure for testing satisfiability in the full multi-agent epistemic logic CMAEL(CD) with operators for common and distributed knowledge for all coalitions of agents mentioned in the language. To that end, we introduce Hintikka structures for CMAEL(CD) and prove that satisfiability in such structures is equivalent to satisfiability in standard models. Using that result, we design an incremental tableau-building procedure that eventually constructs a satisfying Hintikka structure for every satisfiable input set of formulae of CMAEL(CD) and closes for every unsatisfiable input set of formulae.
1201.5360
Characterization of Information Channels for Asymptotic Mean Stationarity and Stochastic Stability of Non-stationary/Unstable Linear Systems
cs.IT cs.SY math.IT math.OC
Stabilization of non-stationary linear systems over noisy communication channels is considered. Stochastically stable sources, and unstable but noise-free or bounded-noise systems have been extensively studied in information theory and control theory literature since 1970s, with a renewed interest in the past decade. There have also been studies on non-causal and causal coding of unstable/non-stationary linear Gaussian sources. In this paper, tight necessary and sufficient conditions for stochastic stabilizability of unstable (non-stationary) possibly multi-dimensional linear systems driven by Gaussian noise over discrete channels (possibly with memory and feedback) are presented. Stochastic stability notions include recurrence, asymptotic mean stationarity and sample path ergodicity, and the existence of finite second moments. Our constructive proof uses random-time state-dependent stochastic drift criteria for stabilization of Markov chains. For asymptotic mean stationarity (and thus sample path ergodicity), it is sufficient that the capacity of a channel is (strictly) greater than the sum of the logarithms of the unstable pole magnitudes for memoryless channels and a class of channels with memory. This condition is also necessary under a mild technical condition. Sufficient conditions for the existence of finite average second moments for such systems driven by unbounded noise are provided.
1201.5404
Task-Driven Adaptive Statistical Compressive Sensing of Gaussian Mixture Models
cs.CV
A framework for adaptive and non-adaptive statistical compressive sensing is developed, where a statistical model replaces the standard sparsity model of classical compressive sensing. We propose within this framework optimal task-specific sensing protocols specifically and jointly designed for classification and reconstruction. A two-step adaptive sensing paradigm is developed, where online sensing is applied to detect the signal class in the first step, followed by a reconstruction step adapted to the detected class and the observed samples. The approach is based on information theory, here tailored for Gaussian mixture models (GMMs), where an information-theoretic objective relationship between the sensed signals and a representation of the specific task of interest is maximized. Experimental results using synthetic signals, Landsat satellite attributes, and natural images of different sizes and with different noise levels show the improvements achieved using the proposed framework when compared to more standard sensing protocols. The underlying formulation can be applied beyond GMMs, at the price of higher mathematical and computational complexity.
1201.5411
Lower bounds on the Probability of Error for Classical and Classical-Quantum Channels
cs.IT math.IT quant-ph
In this paper, lower bounds on error probability in coding for discrete classical and classical-quantum channels are studied. The contribution of the paper goes in two main directions: i) extending classical bounds of Shannon, Gallager and Berlekamp to classical-quantum channels, and ii) proposing a new framework for lower bounding the probability of error of channels with a zero-error capacity in the low rate region. The relation between these two problems is revealed by showing that Lov\'asz' bound on zero-error capacity emerges as a natural consequence of the sphere packing bound once we move to the more general context of classical-quantum channels. A variation of Lov\'asz' bound is then derived to lower bound the probability of error in the low rate region by means of auxiliary channels. As a result of this study, connections between the Lov\'asz theta function, the expurgated bound of Gallager, the cutoff rate of a classical channel and the sphere packing bound for classical-quantum channels are established.
1201.5422
A Factor-Graph Representation of Probabilities in Quantum Mechanics
cs.IT math-ph math.IT math.MP quant-ph
A factor-graph representation of quantum-mechanical probabilities is proposed. Unlike standard statistical models, the proposed representation uses auxiliary variables (state variables) that are not random variables.
1201.5426
Constraint Propagation as Information Maximization
cs.AI
This paper draws on diverse areas of computer science to develop a unified view of computation: (1) Optimization in operations research, where a numerical objective function is maximized under constraints, is generalized from the numerical total order to a non-numerical partial order that can be interpreted in terms of information. (2) Relations are generalized so that there are relations of which the constituent tuples have numerical indexes, whereas in other relations these indexes are variables. The distinction is essential in our definition of constraint satisfaction problems. (3) Constraint satisfaction problems are formulated in terms of semantics of conjunctions of atomic formulas of predicate logic. (4) Approximation structures, which are available for several important domains, are applied to solutions of constraint satisfaction problems. As an application we treat constraint satisfaction problems over the reals. These cover a large part of numerical analysis, most significantly nonlinear equations and inequalities. The chaotic algorithm analyzed in the paper combines the efficiency of floating-point computation with the correctness guarantees arising from our logico-mathematical model of constraint satisfaction problems.
1201.5450
RT-SLAM: A Generic and Real-Time Visual SLAM Implementation
cs.RO
This article presents a new open-source C++ implementation to solve the SLAM problem, which is focused on genericity, versatility and high execution speed. It is based on an original object-oriented architecture that allows the combination of numerous sensors and landmark types, and the integration of various approaches proposed in the literature. The system capacities are illustrated by the presentation of an inertial/vision SLAM approach, for which several improvements over existing methods have been introduced, and that copes with very high dynamic motions. Results with a hand-held camera are presented.
1201.5472
A multiagent urban traffic simulation
cs.AI
We built a multiagent simulation of urban traffic to model both ordinary traffic and emergency or crisis-mode traffic. This simulation first builds a modeled road network based on detailed geographical information. On this network, the simulation creates two populations of agents: the Transporters and the Mobiles. Transporters embody the roads themselves; they are utilitarian and meant to handle the low-level realism of the simulation. Mobile agents embody the vehicles that circulate on the network. They have one or several destinations they try to reach, initially using their beliefs about the structure of the network (length of the edges, speed limits, number of lanes etc.). Nonetheless, when confronted with a dynamic, emergence-prone environment (other vehicles, unexpectedly closed ways or lanes, traffic jams etc.), the rather reactive agent will activate more cognitive modules to adapt its beliefs, desires and intentions. It may change its destination(s), change the tactics used to reach the destination (favoring less used roads, following other agents, using general headings), etc. We describe the current validation of our model and the next planned improvements, both in validation and in functionalities.
1201.5477
Entropy-growth-based model of emotionally charged online dialogues
physics.soc-ph cs.CL cs.SI physics.data-an
We analyze emotionally annotated massive data from IRC (Internet Relay Chat) and model the dialogues between its participants by assuming that the driving force for the discussion is the entropy growth of the emotional probability distribution. This process is claimed to be correlated with the emergence of the power-law distribution of the discussion lengths observed in the dialogues. We perform numerical simulations based on this phenomenon, obtaining good agreement with the real data. Finally, we propose a method to artificially prolong the duration of the discussion that relies on the entropy of the emotional probability distribution.
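The quantity driving the model, the entropy of the empirical emotion distribution as a dialogue grows, can be computed directly. A minimal sketch in which the three-valued valence annotation (negative/neutral/positive) and the sample dialogue are assumptions for illustration:

```python
import math
from collections import Counter

def entropy(counts):
    """Shannon entropy (bits) of an empirical count distribution."""
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values() if c)

def entropy_trajectory(emotions):
    """H_n: entropy of the empirical emotion distribution after the
    first n posts of a dialogue."""
    counts = Counter()
    traj = []
    for e in emotions:
        counts[e] += 1
        traj.append(entropy(counts))
    return traj

dialogue = [-1, -1, 0, 1, -1, 1, 0, 0, 1]   # valence-annotated posts
print([round(h, 3) for h in entropy_trajectory(dialogue)])
```

The trajectory starts at zero and is bounded by log2(3); in the entropy-growth picture, a discussion ends once this quantity saturates.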
1201.5484
Statistical analysis of emotions and opinions at Digg website
physics.soc-ph cs.CL cs.SI physics.data-an
We performed statistical analysis on data from the Digg.com website, which enables its users to express their opinion on news stories by taking part in forum-like discussions as well as directly evaluate previous posts and stories by assigning so-called "diggs". Owing to the fact that the content of each post has been annotated with its emotional value, apart from the strictly structural properties, the study also includes an analysis of the average emotional response of the posts commenting on the main story. While analysing correlations at the story level, an interesting relationship between the number of diggs and the number of comments received by a story was found. The correlation between the two quantities is high for data where small threads dominate and consistently decreases for longer threads. However, while the correlation of the number of diggs and the average emotional response tends to grow for longer threads, correlations between numbers of comments and the average emotional response are almost zero. We also show that the initial set of comments given to a story has a substantial impact on the further "life" of the discussion: high negative average emotions in the first 10 comments lead to longer threads, while the opposite situation results in shorter discussions. We also suggest the presence of two different mechanisms governing the evolution of the discussion and, consequently, its length.
1201.5603
BIN@ERN: Binary-Ternary Compressing Data Coding
cs.IT cs.DS math.IT
This paper describes a new method of data encoding which may be used in various modern digital, computer and telecommunication systems and devices. The method permits the compression of data for storage or transmission, allowing the exact original data to be reconstructed without any loss of content. The method is characterized by the simplicity of implementation, as well as high speed and compression ratio. The method is based on a unique scheme of binary-ternary prefix-free encoding of characters of the original data. This scheme does not require the transmission of the code tables from encoder to decoder; allows for the linear presentation of the code lists; permits the usage of computable indexes of the prefix codes in a linear list for decoding; makes it possible to estimate the compression ratio prior to encoding; makes multiplication and division operations, as well as floating-point operations, unnecessary; proves to be effective for static as well as adaptive coding; is applicable to character sets of any size; and allows for repeated compression to improve the ratio.
1201.5604
Discrete and fuzzy dynamical genetic programming in the XCSF learning classifier system
cs.AI cs.LG cs.NE cs.SY math.OC
A number of representation schemes have been presented for use within learning classifier systems, ranging from binary encodings to neural networks. This paper presents results from an investigation into using discrete and fuzzy dynamical system representations within the XCSF learning classifier system. In particular, asynchronous random Boolean networks are used to represent the traditional condition-action production system rules in the discrete case and asynchronous fuzzy logic networks in the continuous-valued case. It is shown possible to use self-adaptive, open-ended evolution to design an ensemble of such dynamical systems within XCSF to solve a number of well-known test problems.
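An asynchronous random Boolean network, the discrete representation used here, can be sketched in a few lines: each node has K randomly wired inputs and a random truth table, and each update step recomputes one randomly chosen node. The sizes (n = 6 nodes, K = 2 inputs) are arbitrary illustrative choices, not the paper's settings, and the self-adaptive evolution inside XCSF is not reproduced:

```python
import random

random.seed(0)

class AsyncRBN:
    """Random Boolean network with asynchronous updates: each step,
    one randomly chosen node recomputes its state from its K inputs."""
    def __init__(self, n=6, k=2):
        self.n = n
        self.inputs = [random.sample(range(n), k) for _ in range(n)]
        # Truth table per node: maps the K input bits to an output bit.
        self.tables = [[random.randint(0, 1) for _ in range(2 ** k)]
                       for _ in range(n)]
        self.state = [random.randint(0, 1) for _ in range(n)]

    def step(self):
        i = random.randrange(self.n)
        idx = 0
        for src in self.inputs[i]:
            idx = (idx << 1) | self.state[src]
        self.state[i] = self.tables[i][idx]

net = AsyncRBN()
for _ in range(50):
    net.step()
print(net.state)
```

In the classifier-system setting, designated nodes of such a network would serve as the condition match and action outputs of a rule, with the wiring and tables subject to evolution.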
1201.5608
Combinatorial Channel Signature Modulation for Wireless ad-hoc Networks
cs.IT cs.NI math.IT
In this paper we introduce a novel modulation and multiplexing method which facilitates highly efficient and simultaneous communication between multiple terminals in wireless ad-hoc networks. We term this method Combinatorial Channel Signature Modulation (CCSM). The CCSM method is particularly efficient in situations where communicating nodes operate in highly time dispersive environments. This is all achieved with a minimal MAC layer overhead, since all users are allowed to transmit and receive at the same time/frequency (full simultaneous duplex). The CCSM method has its roots in sparse modelling and the receiver is based on compressive sampling techniques. Towards this end, we develop a new low complexity algorithm termed Group Subspace Pursuit. Our analysis suggests that CCSM at least doubles the throughput when compared to the state-of-the-art.
1201.5626
Conditional strategies and the evolution of cooperation in spatial public goods games
physics.soc-ph cs.SI nlin.AO q-bio.PE
The fact that individuals will most likely behave differently in different situations begets the introduction of conditional strategies. Inspired by this, we study the evolution of cooperation in the spatial public goods game, where besides unconditional cooperators and defectors, also different types of conditional cooperators compete for space. Conditional cooperators will contribute to the public good only if other players within the group are likely to cooperate as well, but will withhold their contribution otherwise. Depending on the number of other cooperators that are required to elicit cooperation of a conditional cooperator, the latter can be classified in as many types as there are players within each group. We find that the most cautious cooperators, those that require all other players within a group to be conditional cooperators, are the undisputed victors of the evolutionary process, even at very low synergy factors. We show that the remarkable promotion of cooperation is due primarily to the spontaneous emergence of quarantining of defectors, which become surrounded by conditional cooperators and are forced into isolated convex "bubbles" from where they are unable to exploit the public good. This phenomenon can be observed only in structured populations, thus adding to the relevance of pattern formation for the successful evolution of cooperation.
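The payoff structure for one group can be sketched as follows. This is a simplified, hypothetical reading of the strategy set, not the paper's exact update rule: a type-k conditional cooperator contributes iff at least k of the other group members are themselves (conditional) cooperators, contributions cost 1, and the pot is multiplied by the synergy factor r and shared equally.

```python
def public_goods_payoffs(strategies, r):
    """strategies: list with -1 for a defector and k >= 0 for a conditional
    cooperator that contributes iff at least k of the *other* group members
    are (conditional) cooperators. Returns the per-player payoffs."""
    g = len(strategies)
    cooperators = sum(1 for s in strategies if s >= 0)
    contributes = [s >= 0 and (cooperators - 1) >= s for s in strategies]
    share = r * sum(contributes) / g          # multiplied pot, shared by all
    return [share - (1 if c else 0) for c in contributes]

# Group of 5: one defector, conditional types 0 and 2, two cautious type-4s.
print(public_goods_payoffs([-1, 0, 2, 4, 4], r=3.5))
```

Under this rule the most cautious types withhold their contribution in mixed groups, which is how they avoid being exploited by defectors.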
1201.5689
An Efficient Construction of Self-Dual Codes
cs.IT math.CO math.IT math.NT
We complete the building-up construction for self-dual codes by resolving the open cases over $GF(q)$ with $q \equiv 3 \pmod 4$, and over $\Z_{p^m}$ and Galois rings $\GR(p^m,r)$ with an odd prime $p$ satisfying $p \equiv 3 \pmod 4$ with $r$ odd. We also extend the building-up construction for self-dual codes to finite chain rings. Our building-up construction produces many new interesting self-dual codes. In particular, we construct 945 new extremal self-dual ternary $[32,16,9]$ codes, each of which has a trivial automorphism group. We also obtain many new self-dual codes over $\mathbb Z_9$ of lengths $12, 16, 20$ all with minimum Hamming weight 6, which is the best possible minimum Hamming weight that free self-dual codes over $\Z_9$ of these lengths can attain. From the constructed codes over $\mathbb Z_9$, we reconstruct optimal Type I lattices of dimensions $12, 16, 20,$ and 24 using Construction $A$; this shows that our building-up construction can make a good contribution to finding optimal Type I lattices as well as self-dual codes. We also find new optimal self-dual $[16,8,7]$ codes over GF(7) and new self-dual codes over GF(7) with the best known parameters $[24,12,9]$.
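The defining property being constructed is easy to verify for a candidate generator matrix: every pair of rows (including a row with itself) must have zero inner product over $GF(q)$, and the dimension must be $n/2$. A small check using the well-known ternary tetracode as the example; the building-up construction itself is the paper's contribution and is not reproduced here:

```python
def is_self_orthogonal(G, q):
    """Check G * G^T == 0 over GF(q): a necessary condition for the code
    generated by the rows of G to be self-dual (with k = n/2)."""
    return all(sum(a * b for a, b in zip(u, v)) % q == 0
               for u in G for v in G)

# Ternary tetracode, a self-dual [4, 2, 3] code over GF(3).
G = [[1, 0, 1, 1],
     [0, 1, 1, 2]]                # 2 == -1 mod 3
print(is_self_orthogonal(G, 3))   # -> True
```

Since the two rows are linearly independent, k = n/2 = 2 holds and the code is self-dual, not merely self-orthogonal.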
1201.5722
Sex differences in intimate relationships
physics.soc-ph cs.SI
Social networks have turned out to be of fundamental importance both for our understanding of human sociality and for the design of digital communication technology. However, social networks are themselves based on dyadic relationships and we have little understanding of the dynamics of close relationships and how these change over time. Evolutionary theory suggests that, even in monogamous mating systems, the pattern of investment in close relationships should vary across the lifespan when post-weaning investment plays an important role in maximising fitness. Mobile phone data sets provide us with a unique window into the structure of relationships and the way these change across the lifespan. Here we use data from a large national mobile phone dataset to demonstrate striking sex differences in the gender bias of preferred relationships that reflect the way the reproductive investment strategies of the two sexes change across the lifespan: these differences mainly reflect women's shifting patterns of investment in reproduction and parental care. These results suggest that human social strategies may have more complex dynamics than we have tended to assume and that a life-history perspective may be crucial for understanding them.
1201.5767
Nodal domain partition and the number of communities in networks
physics.soc-ph cs.SI
It is difficult to detect and evaluate the number of communities in complex networks, especially when there is an ambiguous boundary between the intra- and inter-community densities. In this paper, Discrete Nodal Domain Theory is used to provide a criterion to determine how many communities a network has and how to partition these communities by means of the topological structure and geometric characterization. By capturing the signs of certain Laplacian eigenvectors, we can separate the network into several reasonable clusters. The method leads to a fast and effective algorithm with application to a variety of real network data sets.
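A rough sketch of the sign-based idea, under assumptions made for illustration: an unweighted graph, the combinatorial Laplacian, and a split on the sign pattern of the first nontrivial eigenvector(s). The paper's Discrete Nodal Domain machinery is more refined than this:

```python
import numpy as np

def sign_partition(adj, num_vecs=1):
    """Split a graph by the sign patterns of the smallest nontrivial
    Laplacian eigenvectors (a rough nodal-domain-style partition)."""
    adj = np.asarray(adj, dtype=float)
    lap = np.diag(adj.sum(axis=1)) - adj
    _, vecs = np.linalg.eigh(lap)            # eigenvalues in ascending order
    signs = vecs[:, 1:1 + num_vecs] >= 0     # skip the constant eigenvector
    groups = {}
    for node, lab in enumerate(tuple(row) for row in signs):
        groups.setdefault(lab, []).append(node)
    return list(groups.values())

# Two triangles joined by a single bridge edge (nodes 0-2 and 3-5).
A = np.zeros((6, 6))
for u, v in [(0,1), (0,2), (1,2), (3,4), (3,5), (4,5), (2,3)]:
    A[u, v] = A[v, u] = 1
print(sign_partition(A))
```

For this barbell graph the Fiedler vector changes sign exactly at the bridge, so the two triangles come out as the two clusters; using more eigenvectors (`num_vecs > 1`) refines the partition into up to 2^num_vecs sign classes.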
1201.5805
Interference and X Networks with Noisy Cooperation and Feedback
cs.IT math.IT
The Gaussian $K$-user interference and $M\times K$ X channels are investigated with no instantaneous channel state information (CSI) at transmitters. First, it is assumed that the CSI is fed back to all nodes after a finite delay (delayed CSIT), and furthermore, the transmitters operate in full-duplex mode, i.e., they can transmit and receive simultaneously. Achievable results are obtained on the degrees of freedom (DoF) of these channels under the above assumption. It is observed that, in contrast with the no CSIT and full CSIT models, when CSIT is delayed, the achievable DoFs for both channels with full-duplex transmitter cooperation are greater than the best available achievable results on their DoF without transmitter cooperation. Our results are the first to show that full-duplex transmitter cooperation can potentially improve the channel DoF with delayed CSIT. Then, $K$-user interference and $K\times K$ X channels are considered with output feedback, wherein the channel output of each receiver is causally fed back to its corresponding transmitter. Our achievable results with output feedback demonstrate strict DoF improvements over those with the full-duplex delayed CSIT when $K>5$ in the $K$-user interference channel and $K>2$ in the $K\times K$ X channel. Next, the combination of delayed CSIT and output feedback, known as Shannon feedback, is studied and strictly higher DoFs compared to the output feedback model are achieved in the $K$-user interference channel when $K=5$ or $K>6$, and in the $K\times K$ X channel when $K>2$. Although strictly greater than 1 and increasing with the size of the networks, the achievable DoFs in all the models studied in this paper approach limiting values not greater than 2.
1201.5838
Rateless Codes for Finite Message Set
cs.IT math.IT
In this study we consider rateless coding over discrete memoryless channels (DMC) with feedback. Unlike traditional fixed-rate codes, in rateless codes each codeword is infinitely long, and the decoding time depends on the confidence level of the decoder. Using rateless codes along with sequential decoding, and allowing a fixed probability of error at the decoder, we obtain results for several communication scenarios. The results shown here are non-asymptotic, in the sense that the size of the message set is finite. First we consider the transmission of equiprobable messages using rateless codes over a DMC, where the decoder knows the channel law. We obtain an achievable rate for a fixed error probability and a finite message set. We show that as the message set size grows, the achievable rate approaches the optimum rate for this setting. We then consider the universal case, in which the channel law is unknown to the decoder. We introduce a novel decoder that uses a mixture probability assignment instead of the unknown channel law, and obtain an achievable rate for this case. Finally, we extend the scope to more advanced settings. We use different flavors of the rateless coding scheme for joint source-channel coding, coding with side-information and a combination of the two with universal coding, which yields a communication scheme that does not require any information on the source, the channel, or the amount of side information at the receiver.
1201.5841
The thermodynamic cost of fast thought
cs.AI
After more than sixty years, Shannon's research [1-3] continues to raise fundamental questions, such as the one formulated by Luce [4,5], which is still unanswered: "Why is information theory not very applicable to psychological problems, despite apparent similarities of concepts?" On this topic, Pinker [6], one of the foremost defenders of the computational theory of mind [6], has argued that thought is simply a type of computation, and that the gap between human cognition and computational models may be illusory. In this context, in his latest book, titled Thinking Fast and Slow [8], Kahneman [7,8] provides further theoretical interpretation by differentiating the two assumed systems of the cognitive functioning of the human mind. He calls them intuition (system 1), determined to be an associative (automatic, fast and perceptual) machine, and reasoning (system 2), required to be voluntary and to operate logical-deductively. In this paper, we propose an ansatz inspired by Ausubel's learning theory for investigating, from the constructivist perspective [9-12], information processing in the working memory of cognizers. Specifically, a thought experiment is performed utilizing the mind of a dual-natured creature known as Maxwell's demon: a tiny "man-machine" solely equipped with the characteristics of system 1, which prevents it from reasoning. The calculation presented here shows that [...]. This result indicates that when system 2 is shut down, both an intelligent being and a binary machine incur the same energy cost per unit of information processed, which mathematically proves the computational attribute of system 1, as Kahneman [7,8] theorized. This finding links information theory to human psychological features and opens a new path toward the conception of a multi-bit reasoning machine.
1201.5871
Null models for network data
math.ST cs.SI stat.ME stat.TH
The analysis of datasets taking the form of simple, undirected graphs continues to gain in importance across a variety of disciplines. Two choices of null model, the logistic-linear model and the implicit log-linear model, have come into common use for analyzing such network data, in part because each accounts for the heterogeneity of network node degrees typically observed in practice. Here we show how these both may be viewed as instances of a broader class of null models, with the property that all members of this class give rise to essentially the same likelihood-based estimates of link probabilities in sparse graph regimes. This facilitates likelihood-based computation and inference, and enables practitioners to choose the most appropriate null model from this family based on application context. Comparative model fits for a variety of network datasets demonstrate the practical implications of our results.
1201.5921
An iterative algorithm for parametrization of shortest length shift registers over finite rings
cs.IT cs.SC math.IT
The construction of shortest feedback shift registers for a finite sequence S_1,...,S_N is considered over the finite ring Z_{p^r}. A novel algorithm is presented that yields a parametrization of all shortest feedback shift registers for the sequence of numbers S_1,...,S_N, thus solving an open problem in the literature. The algorithm iteratively processes each number, starting with S_1, and constructs at each step a particular type of minimal Gr\"obner basis. The construction involves a simple update rule at each step which leads to computational efficiency. It is shown that the algorithm simultaneously computes a similar parametrization for the reciprocal sequence S_N,...,S_1.
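For the field case $r = 1$ (i.e., over $GF(p)$), a single shortest register can already be found with the classical Berlekamp-Massey algorithm; the paper's contribution is the much harder ring case $\Z_{p^r}$ together with a parametrization of all shortest registers. As a hedged baseline, a sketch of the field-case algorithm:

```python
def berlekamp_massey(seq, p):
    """Shortest LFSR over GF(p): returns (L, C) with connection polynomial
    C such that s_n + C[1]*s_{n-1} + ... + C[L]*s_{n-L} = 0 for all valid n.
    Classical field case only (r = 1); it yields one shortest register,
    not the full parametrization constructed in the paper."""
    C, B = [1], [1]
    L, m, b = 0, 1, 1
    for n, s_n in enumerate(seq):
        # Discrepancy between the sequence and the current register.
        d = s_n
        for i in range(1, L + 1):
            d = (d + C[i] * seq[n - i]) % p
        if d == 0:
            m += 1
            continue
        coef = (d * pow(b, p - 2, p)) % p      # d / b in GF(p)
        T = C[:]
        C = C + [0] * (len(B) + m - len(C))
        for i, B_i in enumerate(B):
            C[i + m] = (C[i + m] - coef * B_i) % p
        if 2 * L <= n:
            L, B, b = n + 1 - L, T, d
            m = 1
        else:
            m += 1
    return L, C

# Fibonacci sequence mod 2: s_n = s_{n-1} + s_{n-2}, register length 2.
L, C = berlekamp_massey([1, 1, 0, 1, 1, 0, 1, 1, 0, 1], 2)
print(L, C)   # -> 2 [1, 1, 1]
```

Over $\Z_{p^r}$ with $r > 1$ this simple update no longer suffices (discrepancies need not be invertible), which is precisely where the minimal Groebner-basis machinery of the paper comes in.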
1201.5938
Comparing Methods for segmentation of Microcalcification Clusters in Digitized Mammograms
cs.CV
The appearance of microcalcifications in mammograms is one of the early signs of breast cancer. Early detection of microcalcification clusters (MCCs) in mammograms can therefore be helpful for cancer diagnosis and better treatment of breast cancer. In this paper a computer method is proposed to support radiologists in detecting MCCs in digital mammography. First, in order to facilitate and improve the detection step, mammogram images are enhanced with a wavelet transformation and morphology operations. Then, for segmentation of suspicious MCCs, two methods are investigated: adaptive thresholding and watershed segmentation. Finally, the MCC areas detected by the two algorithms are compared to find out which segmentation method is more appropriate for extracting MCCs in mammograms.
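The adaptive-threshold branch can be sketched with an integral image (summed-area table) for fast local means. The window size, the "local mean + k * global std" rule, and the synthetic test patch are assumptions for illustration; the paper's exact thresholding rule is not specified in the abstract:

```python
import numpy as np

def adaptive_threshold(img, win=7, k=1.5):
    """Flag pixels brighter than local mean + k * global std, with the
    local mean over a win x win window computed via an integral image."""
    img = img.astype(float)
    pad = win // 2
    padded = np.pad(img, pad, mode="edge")
    ii = np.pad(padded, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    h, w = img.shape
    # Sum of each win x win window via four integral-image lookups.
    s = (ii[win:win + h, win:win + w] - ii[:h, win:win + w]
         - ii[win:win + h, :w] + ii[:h, :w])
    local_mean = s / (win * win)
    return img > local_mean + k * img.std()

# Synthetic patch: flat background with two bright specks.
img = np.full((32, 32), 50.0)
img[10, 10] = img[20, 25] = 255.0
mask = adaptive_threshold(img)
print(int(mask.sum()), bool(mask[10, 10]))
```

Only the two speck pixels exceed their local threshold here; on real mammograms the window size and k would need tuning, and connected bright regions would then be grouped into candidate clusters.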
1201.5943
Cognitive Memory Network
cs.AI cs.CV cs.ET
A resistive memory network that has no crossover wiring is proposed to overcome the hardware limitations to size and functional complexity that are associated with conventional analogue neural networks. The proposed memory network is based on simple network cells that are arranged in a hierarchical modular architecture. Cognitive functionality of this network is demonstrated by an example of character recognition. The network is trained by an evolutionary process to completely recognise characters deformed by random noise, rotation, scaling and shifting.
1201.5944
A Neuron Based Switch: Application to Low Power Mixed Signal Circuits
cs.ET cs.NE q-bio.NC
The human brain is functionally and physically complex. This 'complexity' can be seen as a result of a biological design process involving extensive use of concepts such as modularity and hierarchy. Over the past decade, deeper insights into the functioning of cortical neurons have led to the development of models that can be implemented in hardware. The implementation of biologically inspired spiking neuron networks in silicon can provide solutions to difficult cognitive tasks. The work reported in this paper is an application of a VLSI cortical neuron model to low power design. The VLSI implementation shown in this paper is based on the spike and burst firing pattern of the cortex and follows the Izhikevich neuron model. This model is applied to a DC differential amplifier as a practical application of power reduction.
1201.5946
Feature selection using nearest attributes
cs.CV cs.AI
Feature selection is an important problem in high-dimensional data analysis and classification. Conventional feature selection approaches focus on detecting the features based on a redundancy criterion using learning and feature searching schemes. In contrast, we present an approach that identifies the need to select features based on their discriminatory ability among classes. The area of overlap between inter-class and intra-class distances resulting from feature-to-feature comparison of an attribute is used as a measure of the discriminatory ability of the feature. A set of nearest attributes in a pattern having the lowest area of overlap within a degree of tolerance defined by a selection threshold is selected to represent the best available discriminable features. State-of-the-art recognition results are reported for pattern classification problems by using the proposed feature selection scheme with the nearest neighbour classifier. These results are reported on benchmark databases having high-dimensional feature vectors in problems involving images and microarray data.
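The overlap criterion can be sketched directly: for each feature, collect the pairwise feature-value distances within a class and between classes, and score the histogram overlap of the two distributions (low overlap = discriminative). The bin count and the synthetic two-feature data below are assumptions, not the paper's settings:

```python
import itertools
import random

def overlap_area(intra, inter, bins=20):
    """Histogram overlap between the intra-class and inter-class distance
    distributions for one feature: 0 means perfectly separable."""
    lo, hi = min(intra + inter), max(intra + inter)
    def hist(vals):
        h = [0] * bins
        for v in vals:
            h[min(int((v - lo) / (hi - lo + 1e-12) * bins), bins - 1)] += 1
        return [c / len(vals) for c in h]
    return sum(min(a, b) for a, b in zip(hist(intra), hist(inter)))

def feature_overlaps(X, y):
    """Per-feature overlap score from all pairwise |x_i - x_j| comparisons."""
    scores = []
    for f in range(len(X[0])):
        intra, inter = [], []
        for (xi, yi), (xj, yj) in itertools.combinations(zip(X, y), 2):
            (intra if yi == yj else inter).append(abs(xi[f] - xj[f]))
        scores.append(overlap_area(intra, inter))
    return scores

random.seed(0)
# Feature 0 separates the two classes; feature 1 is pure noise.
X = [[random.gauss(0 if c == 0 else 10, 1), random.gauss(0, 1)]
     for c in (0, 1) for _ in range(20)]
y = [0] * 20 + [1] * 20
print(feature_overlaps(X, y))
```

Selection then keeps the features whose score falls below the chosen threshold; here the discriminative feature scores near zero while the noise feature scores high.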
1201.5947
Examplers based image fusion features for face recognition
cs.CV cs.AI
Examplers of a face are formed from multiple gallery images of a person and are used in the process of classification of a test image. We incorporate such examplers into a biologically inspired face recognition method based on local binary decisions on similarity. As opposed to single-model approaches such as face averages, the exampler-based approach results in higher recognition accuracies and stability. Using multiple training samples per person, the method shows the following recognition accuracies: 99.0% on AR, 99.5% on FERET, 99.5% on ORL, 99.3% on EYALE, 100.0% on YALE and 100.0% on CALTECH face databases. In addition to face recognition, the method also detects the natural variability in the face images, which can find application in automatic tagging of face images.
1201.5959
Memory Based Machine Intelligence Techniques in VLSI hardware
cs.AI cs.RO
We briefly introduce the memory based approaches to emulate machine intelligence in VLSI hardware, describing the challenges and advantages. Implementation of artificial intelligence techniques in VLSI hardware is a practical and difficult problem. Deep architectures, hierarchical temporal memories and memory networks are some of the contemporary approaches in this area of research. The techniques attempt to emulate low level intelligence tasks and aim at providing scalable solutions to high level intelligence problems such as sparse coding and contextual processing.
1201.6012
Construction of quasi-cyclic self-dual codes
cs.IT math.AC math.CO math.IT
There is a one-to-one correspondence between $\ell$-quasi-cyclic codes over a finite field $\mathbb F_q$ and linear codes over a ring $R = \mathbb F_q[Y]/(Y^m-1)$. Using this correspondence, we prove that every $\ell$-quasi-cyclic self-dual code of length $m\ell$ over a finite field $\mathbb F_q$ can be obtained by the {\it building-up} construction, provided that char $(\mathbb F_q)=2$ or $q \equiv 1 \pmod 4$, $m$ is a prime $p$, and $q$ is a primitive element of $\mathbb F_p$. We determine possible weight enumerators of a binary $\ell$-quasi-cyclic self-dual code of length $p\ell$ (with $p$ a prime) in terms of divisibility by $p$. We improve the result of [3] by constructing new binary cubic (i.e., $\ell$-quasi-cyclic codes of length $3\ell$) optimal self-dual codes of lengths $30, 36, 42, 48$ (Type I), 54 and 66. We also find quasi-cyclic optimal self-dual codes of lengths 40, 50, and 60. When $m=5$, we obtain a new 8-quasi-cyclic self-dual $[40, 20, 12]$ code over $\mathbb F_3$ and a new 6-quasi-cyclic self-dual $[30, 15, 10]$ code over $\mathbb F_4$. When $m=7$, we find a new 4-quasi-cyclic self-dual $[28, 14, 9]$ code over $\mathbb F_4$ and a new 6-quasi-cyclic self-dual $[42,21,12]$ code over $\mathbb F_4$.
1201.6022
Non-Random Coding Error Exponent for Lattices
cs.IT math.IT
An upper bound on the error probability of specific lattices, based on their distance-spectrum, is constructed. The derivation is accomplished using a simple alternative to the Minkowski-Hlawka mean-value theorem of the geometry of numbers. In many ways, the new bound greatly resembles the Shulman-Feder bound for linear codes. Based on the new bound, an error-exponent is derived for specific lattice sequences (of increasing dimension) over the AWGN channel. Measuring the sequence's gap to capacity, using the new exponent, is demonstrated.
1201.6034
A Novel MCMC Based Receiver for Large-Scale Uplink Multiuser MIMO Systems
cs.IT math.IT
In this paper, we propose low complexity algorithms based on Markov chain Monte Carlo (MCMC) technique for signal detection and channel estimation on the uplink in large scale multiuser multiple input multiple output (MIMO) systems with tens to hundreds of antennas at the base station (BS) and similar number of uplink users. A BS receiver that employs a randomized sampling method (which makes a probabilistic choice between Gibbs sampling and random sampling in each iteration) for detection and a Gibbs sampling based method for channel estimation is proposed. The algorithm proposed for detection alleviates the stalling problem encountered at high SNRs in conventional MCMC algorithm and achieves near-optimal performance in large systems. A novel ingredient in the detection algorithm that is responsible for achieving near-optimal performance at low complexities is the joint use of a {\it randomized MCMC (R-MCMC) strategy} coupled with a {\it multiple restart strategy} with an efficient restart criterion. Near-optimal detection performance is demonstrated for large number of BS antennas and users (e.g., 64, 128, 256 BS antennas/users). The proposed MCMC based channel estimation algorithm refines an initial estimate of the channel obtained during pilot phase through iterations with R-MCMC detection during data phase. In time division duplex (TDD) systems where channel reciprocity holds, these channel estimates can be used for multiuser MIMO precoding on the downlink. Further, we employ this receiver architecture in the frequency domain for receiving cyclic prefixed single carrier (CPSC) signals on frequency selective fading between users and the BS. The proposed receiver achieves performance that is near optimal and close to that achieved with perfect channel knowledge.
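The randomized-MCMC idea, mixing Gibbs conditionals with occasional uniform draws to avoid the high-SNR stalling problem, can be sketched for BPSK detection. This is a hedged toy: the small dimensions, noiseless channel, unit-temperature conditionals, and the 10% mixing probability are illustrative choices, not the paper's tuned parameters, and the restart strategy is omitted:

```python
import numpy as np

rng = np.random.default_rng(7)

def r_mcmc_detect(y, H, sweeps=300, q=0.1):
    """Randomized MCMC BPSK detector: per coordinate, sample from the
    Gibbs conditional with probability 1 - q and uniformly at random
    with probability q; return the lowest-cost vector visited."""
    n = H.shape[1]
    cost = lambda v: float(np.sum((y - H @ v) ** 2))
    x = rng.choice([-1.0, 1.0], size=n)
    best, best_cost = x.copy(), cost(x)
    for _ in range(sweeps):
        for i in range(n):
            if rng.random() < q:
                x[i] = rng.choice([-1.0, 1.0])   # random-sampling branch
            else:                                 # Gibbs branch
                c = []
                for s in (-1.0, 1.0):
                    x[i] = s
                    c.append(cost(x))
                delta = np.clip(c[1] - c[0], -50.0, 50.0)
                x[i] = 1.0 if rng.random() < 1.0 / (1.0 + np.exp(delta)) else -1.0
            c_now = cost(x)
            if c_now < best_cost:
                best, best_cost = x.copy(), c_now
    return best

n_t = 6                            # toy scale: 6 users, 6 BS antennas
H = rng.standard_normal((n_t, n_t))
x_true = rng.choice([-1.0, 1.0], size=n_t)
y = H @ x_true                     # noiseless, so the optimum is x_true
x_hat = r_mcmc_detect(y, H)
print(np.array_equal(x_hat, x_true))
```

The uniform-draw branch is what keeps the chain moving when the Gibbs conditionals become nearly deterministic at high SNR; the paper adds multiple restarts with an efficient restart criterion on top of this.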
1201.6043
The maximum number of minimal codewords in long codes
cs.IT math.CO math.IT
Upper bounds on the maximum number of minimal codewords in a binary code follow from the theory of matroids. Random coding provides lower bounds. In this paper we compare these bounds with analogous bounds for the cycle code of graphs. This problem (in the graphic case) was considered in 1981 by Entringer and Slater, who asked if a connected graph with $p$ vertices and $q$ edges can have only slightly more than $2^{q-p}$ cycles. The bounds in this note answer this in the affirmative for all graphs except possibly some that have fewer than $2p+3\log_2(3p)$ edges. We also conclude that an Eulerian (even) graph has at most $2^{q-p}$ cycles unless the graph is a subdivision of a 4-regular graph that is the edge-disjoint union of two Hamiltonian cycles, in which case it may have as many as $2^{q-p}+p$ cycles.
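The objects being counted can be made concrete: the cycle space of a connected graph has dimension $q-p+1$, and the cycles are exactly those nonzero elements in which every touched vertex has degree 2 and the edges are connected. A brute-force enumeration sketch over XOR-combinations of fundamental cycles, feasible only for tiny graphs:

```python
from itertools import combinations

def is_single_cycle(edge_set):
    """True iff the edge set forms one connected cycle (all degrees 2)."""
    deg = {}
    for e in edge_set:
        for v in e:
            deg[v] = deg.get(v, 0) + 1
    if not deg or any(d != 2 for d in deg.values()):
        return False
    verts = list(deg)
    stack, seen = [verts[0]], {verts[0]}
    while stack:
        v = stack.pop()
        for e in edge_set:
            if v in e:
                w = next(iter(set(e) - {v}))
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
    return len(seen) == len(verts)

def count_cycles(vertices, edges):
    """Count cycles among the nonzero cycle-space elements, generated by
    XOR-ing the fundamental cycles of a spanning tree."""
    adj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    root = next(iter(adj))
    parent, order, tree = {root: None}, [root], set()
    for v in order:                          # BFS spanning tree
        for w in adj[v]:
            if w not in parent:
                parent[w] = v
                tree.add(frozenset((v, w)))
                order.append(w)
    def tree_path(u, v):
        """Edges on the tree path between u and v (root paths XOR-ed)."""
        path = set()
        for x in (u, v):
            while parent[x] is not None:
                path ^= {frozenset((x, parent[x]))}
                x = parent[x]
        return path
    chords = [e for e in edges if frozenset(e) not in tree]
    fund = [tree_path(u, v) | {frozenset((u, v))} for u, v in chords]
    count = 0
    for r in range(1, len(fund) + 1):
        for combo in combinations(fund, r):
            sub = set()
            for cyc in combo:
                sub ^= cyc
            if is_single_cycle(sub):
                count += 1
    return count

# K4: p = 4 vertices, q = 6 edges, so 2^{q-p} = 4, yet K4 has 7 cycles.
print(count_cycles(range(4), [(0,1), (0,2), (0,3), (1,2), (1,3), (2,3)]))   # -> 7
```

For $K_4$ every nonzero element of the 3-dimensional cycle space happens to be a single cycle (4 triangles and 3 four-cycles), already exceeding $2^{q-p} = 4$, which illustrates the "slightly more" in the Entringer-Slater question.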
1201.6046
Extended Extremes of Information Combining
cs.IT math.IT
Extremes of information combining inequalities play an important role in the analysis of sparse-graph codes under message-passing decoding. We introduce new tools for the derivation of such inequalities, and show by means of concrete examples how they can be applied to solve some optimization problems in the analysis of low-density parity-check codes.
1201.6053
A Comparison Between Data Mining Prediction Algorithms for Fault Detection(Case study: Ahanpishegan co.)
cs.LG
In the current competitive world, industrial companies seek to manufacture products of higher quality, which can be achieved by increasing reliability, maintainability and thus the availability of products. On the other hand, improvement in the product lifecycle is necessary for achieving high reliability. Typically, maintenance activities are aimed to reduce failures of industrial machinery and minimize the consequences of such failures. Industrial companies therefore try to improve their efficiency by using different fault detection techniques. One strategy is to process and analyze previously generated data to predict future failures. The purpose of this paper is to detect wasted parts using different data mining algorithms and compare the accuracy of these algorithms. A combination of thermal and physical characteristics has been used and the algorithms were implemented on Ahanpishegan's current data to estimate the availability of its produced parts. Keywords: Data Mining, Fault Detection, Availability, Prediction Algorithms.
1201.6065
Throughput Optimal Switching in Multi-channel WLANs
cs.SY cs.NI
We observe that in a multi-channel wireless system, an opportunistic channel/spectrum access scheme that solely focuses on channel quality sensing measured by received SNR may induce users to use channels that, while providing better signals, are more congested. Ultimately the notion of channel quality should include both the signal quality and the level of congestion, and a good multi-channel access scheme should take both into account in deciding which channel to use and when. Motivated by this, we focus on the congestion aspect and examine what type of dynamic channel switching schemes may result in the best system throughput performance. Specifically we derive the stability region of a multi-user multi-channel WLAN system and determine the throughput optimal channel switching scheme within a certain class of schemes.
1201.6095
How Web 1.0 Fails: The Mismatch Between Hyperlinks and Clickstreams
cs.IR cs.SI physics.soc-ph
The core of the Web is a hyperlink navigation system collaboratively set up by webmasters to help users find desired websites. But does this system really work as expected? We show that the answer seems to be negative: there is a substantial mismatch between hyperlinks and the pathways that users actually take. A closer look at empirical surfing activities reveals the reason for the mismatch: webmasters try to build a global virtual world without geographical or cultural boundaries, but users in fact prefer to navigate within more fragmented, language-based groups of websites. We call this type of behavior "preferential navigation" and find that it is driven by "local" search engines.
1201.6112
An Efficient Method for Mining Event-Related Potential Patterns
cs.DB
In the present paper, we propose a Neuroelectromagnetic Ontology Framework (NOF) for mining Event-Related Potential (ERP) patterns, as well as the mining process itself. The aim of this research is to develop an infrastructure for mining, analyzing and sharing ERP domain ontologies. The outcome of this research is a neuroelectromagnetic knowledge-based system. The framework has 5 stages: 1) data pre-processing and preparation; 2) data mining application; 3) rule comparison and evaluation; 4) association rule post-processing; 5) domain ontologies. In the 5th stage, a new set of hidden rules can be discovered based on comparing association rules against domain ontologies and expert rules.
1201.6117
Continuous Time Channels with Interference
cs.IT math.IT
Khanna and Sudan \cite{KS11} studied a natural model of continuous time channels where signals are corrupted by the effects of both noise and delay, and showed that, surprisingly, in some cases both are not enough to prevent such channels from achieving unbounded capacity. Inspired by their work, we consider channels that model continuous time communication with adversarial delay errors. The sender is allowed to subdivide time into an arbitrarily large number $M$ of micro-units in which binary symbols may be sent, but the symbols are subject to unpredictable delays and may interfere with each other. We model interference by having symbols that land in the same micro-unit of time be summed, and we study $k$-interference channels, which allow receivers to distinguish sums up to the value $k$. We consider both a channel adversary that has a limit on the maximum number of steps it can delay each symbol, and a more powerful adversary that only has a bound on the average delay. We give precise characterizations of the threshold between finite and infinite capacity depending on the interference behavior and on the type of channel adversary: for max-bounded delay, the threshold is at $D_{\text{max}}=\Theta(M \log(\min\{k, M\}))$, and for average bounded delay the threshold is at $D_{\text{avg}} = \Theta(\sqrt{M \cdot \min\{k, M\}})$.
1201.6134
Synthetic sequence generator for recommender systems - memory biased random walk on sequence multilayer network
cs.IR cs.CY
Personalized recommender systems rely on each user's personal usage data in the system in order to assist in decision making. However, privacy policies protecting users' rights prevent these highly personal data from being publicly available to a wider research audience. In this work, we propose a memory biased random walk model on a multilayer sequence network as a generator of synthetic sequential data for recommender systems. We demonstrate the applicability of the synthetic data in training recommender system models for cases when privacy policies restrict clickstream publishing.
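The core idea can be sketched in a few lines: walk over an item transition network, but with some probability jump back to an item already visited in the session (the "memory" term). This is our illustrative simplification, not the authors' exact multilayer model; the function and parameter names are ours.

```python
import random

def generate_session(transitions, start, length, memory_bias=0.3, seed=0):
    """Sketch of a memory biased random walk session generator.

    transitions: dict item -> list of items reachable in one click
                 (one layer of the sequence network)
    memory_bias: probability of revisiting an already-seen item,
                 mimicking users who jump back within a session.
    """
    rng = random.Random(seed)
    seq = [start]
    for _ in range(length - 1):
        if len(seq) > 1 and rng.random() < memory_bias:
            nxt = rng.choice(seq[:-1])               # memory: revisit
        else:
            nxt = rng.choice(transitions[seq[-1]])   # ordinary network step
        seq.append(nxt)
    return seq
```

Sampling many such sessions from a network fitted to private clickstreams would yield a synthetic corpus that can be published in place of the original data.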
1201.6224
Wikipedia Arborification and Stratified Explicit Semantic Analysis
cs.CL
[This is the translation of the paper "Arborification de Wikip\'edia et analyse s\'emantique explicite stratifi\'ee" submitted to TALN 2012.] We present an extension of the Explicit Semantic Analysis method of Gabrilovich and Markovitch. Using their semantic relatedness measure, we weight the Wikipedia categories graph. Then, we extract a minimal spanning tree using Chu-Liu & Edmonds' algorithm. We define a notion of stratified tfidf where the strata, for a given Wikipedia page and a given term, are the classical tfidf and the categorical tfidfs of the term in the ancestor categories of the page (ancestors in the sense of the minimal spanning tree). Our method is based on this stratified tfidf, which adds extra weight to terms that "survive" when climbing up the category tree. We evaluate our method by text classification on the WikiNews corpus: it increases precision by 18%. Finally, we provide hints for future research.
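The stratified tfidf idea reduces to summing a term's tfidf over the page and each of its ancestor categories, so a term repeated up the tree accumulates weight. A minimal sketch (our own simplification, with bag-of-words documents standing in for Wikipedia pages and category texts):

```python
import math

def tfidf(term, doc, corpus):
    # classical tf-idf of a term inside one bag-of-words document
    tf = doc.count(term) / len(doc)
    df = sum(1 for d in corpus if term in d)
    return tf * math.log(len(corpus) / df) if df else 0.0

def stratified_tfidf(term, page, ancestors, corpus):
    # sum the term's tf-idf over the page itself and each ancestor
    # category (ancestors taken along the spanning tree of categories),
    # so terms that "survive" while climbing the tree gain extra weight
    return sum(tfidf(term, stratum, corpus) for stratum in [page] + ancestors)
```

A term present both in the page and in its ancestor categories therefore scores strictly higher than its classical tfidf alone, which is the extra weight the method exploits.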
1201.6248
List Decoding Algorithms based on Groebner Bases for General One-Point AG Codes
cs.IT cs.SC math.AC math.AG math.IT
We generalize the list decoding algorithm for Hermitian codes proposed by Lee and O'Sullivan, based on Gr\"obner bases, to general one-point AG codes, under an assumption weaker than the one used by Beelen and Brander. Using the same principle, we also generalize the unique decoding algorithm for one-point AG codes over the Miura-Kamiya $C_{ab}$ curves proposed by Lee, Bras-Amor\'os and O'Sullivan to general one-point AG codes, without any assumption. Finally, we extend the latter unique decoding algorithm to list decoding, modify it so that it can be used with the Feng-Rao improved code construction, prove the equality between its error-correcting capability and half the minimum distance lower bound of Andersen and Geil (which was not done in the original proposal), and remove unnecessary computational steps so that it runs faster.
1201.6251
Real-time jam-session support system
cs.HC cs.LG cs.SD
We propose a method for the problem of real-time chord accompaniment of improvised music. Our implementation can learn an underlying structure of the musical performance and predict the next chord. The system uses a Hidden Markov Model to find the most probable chord sequence for the played melody, and then a Variable Order Markov Model is used to a) learn the structure (if any) and b) predict the next chord. We implemented our system in Java and MAX/Msp and evaluated it using objective (prediction accuracy) and subjective (questionnaire) evaluation methods.
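The variable-order prediction stage can be sketched as a back-off predictor: try the longest observed context first, then fall back to shorter ones. This is our illustrative Python sketch of that stage only (the HMM chord recognition stage is omitted, and the paper's actual implementation is in Java/MAX):

```python
from collections import defaultdict, Counter

class ChordPredictor:
    """Sketch of a variable-order Markov predictor for chord sequences."""

    def __init__(self, max_order=2):
        self.max_order = max_order
        self.counts = defaultdict(Counter)  # context tuple -> next-chord counts

    def train(self, chords):
        # record each chord under every context of length 0..max_order
        for i, chord in enumerate(chords):
            for order in range(min(i, self.max_order) + 1):
                self.counts[tuple(chords[i - order:i])][chord] += 1

    def predict(self, history):
        # back off from the longest matching context to the empty one
        for order in range(min(self.max_order, len(history)), -1, -1):
            ctx = tuple(history[len(history) - order:])
            if ctx in self.counts:
                return self.counts[ctx].most_common(1)[0][0]
```

Trained on a repeating progression, the predictor returns the chord that most often followed the current context, which is the behavior needed for live accompaniment.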
1201.6257
Supercooperation in Evolutionary Games on Correlated Weighted Networks
physics.soc-ph cs.SI
In this work we study the behavior of classical two-person, two-strategies evolutionary games on a class of weighted networks derived from Barab\'asi-Albert and random scale-free unweighted graphs. Using customary imitative dynamics, our numerical simulation results show that the presence of link weights that are correlated in a particular manner with the degree of the link endpoints, leads to unprecedented levels of cooperation in the whole games' phase space, well above those found for the corresponding unweighted complex networks. We provide intuitive explanations for this favorable behavior by transforming the weighted networks into unweighted ones with particular topological properties. The resulting structures help to understand why cooperation can thrive and also give ideas as to how such supercooperative networks might be built.
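One round of the customary imitative dynamics can be sketched as: each player picks a neighbor (here, with probability proportional to link weight) and copies its strategy with a probability that grows with the payoff advantage. The normalization below is a common convention, not necessarily the one used in the paper, and all names are ours.

```python
import random

def imitation_step(adj_w, strategies, payoffs, seed=0):
    """One synchronous round of weighted imitative dynamics (a sketch).

    adj_w: dict node -> {neighbor: link weight}
    strategies, payoffs: dicts keyed by node
    """
    rng = random.Random(seed)
    new = dict(strategies)
    for i, nbr_w in adj_w.items():
        if not nbr_w:
            continue
        nbrs = list(nbr_w)
        # pick a model neighbor with probability proportional to link weight
        j = rng.choices(nbrs, weights=[nbr_w[n] for n in nbrs], k=1)[0]
        diff = payoffs[j] - payoffs[i]
        scale = max(abs(payoffs[j]), abs(payoffs[i]), 1e-9)
        # imitate with probability proportional to the payoff advantage
        if diff > 0 and rng.random() < diff / (2 * scale):
            new[i] = strategies[j]
    return new
```

Iterating such rounds on degree-correlated weighted networks versus their unweighted counterparts is the kind of experiment whose outcome the abstract summarizes.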
1201.6271
Quantized Network Coding for Sparse Messages
cs.IT math.IT
In this paper, we study the data gathering problem in the context of power grids by using a network of sensors, where the sensed data have inter-node redundancy. Specifically, we propose a new transmission method, called quantized network coding, which performs linear network coding in the field of real numbers, together with quantization to accommodate the finite capacity of edges. Using concepts from the compressed sensing literature, we propose to use l1-minimization to decode the quantized network coded packets, especially when the number of received packets at the decoder is less than the size of the sensed data (i.e. the number of nodes). We also propose an appropriate design for the network coding coefficients, based on the restricted isometry property, which results in robust l1-min decoding. Our numerical analysis shows that the proposed quantized network coding scheme with l1-min decoding can achieve significant improvements, in terms of compression ratio and delivery delay, compared to conventional packet forwarding.
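The decoding setup (fewer received packets than source nodes, sparse sensed data) can be illustrated with a toy numpy sketch. We use orthogonal matching pursuit as a simple greedy stand-in for the paper's l1-minimization; in well-conditioned cases both recover the same sparse message. The matrix and names below are illustrative, not from the paper.

```python
import numpy as np

def greedy_sparse_decode(A, y, k):
    """Greedy stand-in for l1-min decoding of network coded measurements:
    repeatedly pick the column most correlated with the residual, then
    least-squares on the chosen support (orthogonal matching pursuit)."""
    residual = y.astype(float).copy()
    support = []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# A: network coding matrix (3 received packets, 4 source nodes);
# the sensed data x is 1-sparse, so fewer packets than nodes suffice
A = np.array([[1., 0., 1., 0.],
              [0., 1., 1., 1.],
              [1., 0., 0., 1.]])
x_true = np.array([0., 0., 2., 0.])
y = A @ x_true          # received (here unquantized) coded packets
x_hat = greedy_sparse_decode(A, y, k=1)
```

In the paper's scheme `y` would additionally be quantized to match edge capacities and `A` designed for the restricted isometry property; the toy example only shows that sparse data can be decoded from fewer packets than nodes.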
1201.6278
Solving the accuracy-diversity dilemma via directed random walks
physics.data-an cs.IR
Random walks have been successfully used to measure user or object similarities in collaborative filtering (CF) recommender systems, achieving high accuracy but low diversity. A key challenge for CF systems is that reliably accurate results are obtained with the help of peers' recommendations, while the most useful individual recommendations are hard to find among diverse niche objects. In this paper we investigate the effect of the random walk's direction on user similarity measurements and find that the user similarity, calculated by directed random walks, is inversely related to the initial node's degree. Since the ratio of small-degree users to large-degree users is very large in real data sets, the large-degree users' selections are recommended extensively by traditional CF algorithms. By tuning the user similarity direction from neighbors to the target user, we introduce a new algorithm specifically designed to address the diversity challenge of CF and show how it can be used to solve the accuracy-diversity dilemma. Without relying on any context-specific information, we are able to obtain accurate and diverse recommendations, outperforming state-of-the-art CF methods. This work suggests that the random walk direction is an important factor in improving personalized recommendation performance.
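The degree dependence is easy to see in a toy two-step walk on a user-object bipartite graph: walking user → object → user divides by the starting user's degree first, so the similarity is asymmetric and smaller when the walk starts from a large-degree user. A small numpy sketch (our illustration, not the paper's exact formulation):

```python
import numpy as np

# Toy user-object collection matrix (rows: users, cols: objects).
# User 0 has degree 3 (large-degree); user 1 has degree 1 (small-degree).
R = np.array([[1, 1, 1, 0],
              [1, 0, 0, 0],
              [0, 1, 0, 1]])
ku = R.sum(axis=1)  # user degrees
ko = R.sum(axis=0)  # object degrees

def directed_walk_similarity(i, j):
    # two-step random walk user i -> object -> user j; the walk's
    # direction decides whose degree divides the first hop
    return sum(R[i, o] / ku[i] * R[j, o] / ko[o] for o in range(R.shape[1]))
```

Here the similarity measured from the small-degree user toward the large-degree one exceeds the reverse, which is the asymmetry the proposed algorithm tunes to favor niche objects.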